Mio's core promise is simple: there's no 'For You' algorithm deciding what you see, exploiting your attention or building addiction. AI does not rank your feed.
But AI is useful far beyond feed ranking. The same technology that elsewhere exploits attention can instead protect you from harassment, scams, child abuse content and harmful material. That is what we use it for, and on this page we transparently document where we use it and where we never will.
All for safety, accessibility or user benefit. None for feed ranking or ad profiling.
All uploaded images and videos are scanned against globally verified hash databases and image-based classifiers. On a match, the content is blocked instantly and authorities are notified.
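To make the flow concrete, here is a minimal sketch of hash-database matching. It is an illustration only: real matching relies on perceptual hashes from verified industry databases plus an image classifier, and every name, value and threshold below is a placeholder, not our production pipeline.

```python
import hashlib

# Illustration only: real matching uses perceptual hashes from verified
# industry databases plus an image classifier, not plain SHA-256.
# The single database entry below is a placeholder value (the SHA-256
# of an empty byte string), not real data.
VERIFIED_HASH_DATABASE = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def scan_upload(media_bytes: bytes) -> str:
    """Return a routing decision for one uploaded image or video frame."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest in VERIFIED_HASH_DATABASE:
        return "BLOCK_AND_REPORT"    # block instantly, notify authorities
    return "PASS_TO_CLASSIFIER"      # continue to the image-based classifier

print(scan_upload(b""))  # placeholder input that matches the entry above
```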
Images, video frames and profile photos pass through a real-time NSFW classifier. Sensitive content is blurred and kept behind an age gate, or fully blocked.
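A minimal sketch of the decision layer behind such a classifier; the thresholds, function name and score scale are assumptions for illustration, not our real configuration.

```python
# Hypothetical thresholds and action names; the score would come from
# an image model that returns a confidence in [0, 1].
def nsfw_action(score: float, viewer_passed_age_gate: bool) -> str:
    """Map a classifier confidence score to a display decision."""
    if score >= 0.95:
        return "block"                         # never shown to anyone
    if score >= 0.60:
        return "blur" if viewer_passed_age_gate else "age_gate"
    return "show"

assert nsfw_action(0.98, viewer_passed_age_gate=True) == "block"
assert nsfw_action(0.70, viewer_passed_age_gate=False) == "age_gate"
assert nsfw_action(0.10, viewer_passed_age_gate=False) == "show"
```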
Signup flows, messaging patterns and behavioral signals (device fingerprint, IP pool, signup velocity) are used to flag bot and spam accounts, which are auto-suspended and routed to human moderators.
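The sketch below shows how behavioral signals might be combined into a single risk score before suspension and human review. The signal names, weights and cutoff are illustrative assumptions, not our real scoring.

```python
from dataclasses import dataclass

# Signal names, weights and the 0.8 cutoff are illustrative assumptions.
@dataclass
class AccountSignals:
    signups_from_ip_last_hour: int   # signup velocity within one IP pool
    device_seen_on_accounts: int     # device-fingerprint reuse
    messages_per_minute: float       # messaging pattern

def spam_risk(s: AccountSignals) -> float:
    """Combine behavioral signals into a 0..1 risk score."""
    score = 0.0
    score += min(s.signups_from_ip_last_hour / 20, 1.0) * 0.4
    score += min(s.device_seen_on_accounts / 5, 1.0) * 0.3
    score += min(s.messages_per_minute / 30, 1.0) * 0.3
    return score

signals = AccountSignals(signups_from_ip_last_hour=40,
                         device_seen_on_accounts=6,
                         messages_per_minute=50)
if spam_risk(signals) > 0.8:
    print("auto-suspend and route to a human moderator")
```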
Comments, messages and posts are classified in multiple languages for hate speech, harassment, threats and sexual content. Content that crosses the threshold is escalated to human review; nothing is auto-deleted.
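A sketch of that threshold-based escalation; classify_text stands in for a multilingual toxicity model, and its labels, scores and the threshold value are assumptions made for illustration.

```python
# classify_text is a stand-in for a multilingual toxicity model; the
# labels, scores and REVIEW_THRESHOLD are illustrative assumptions.
REVIEW_THRESHOLD = 0.85

def classify_text(text: str) -> dict[str, float]:
    # A real model returns per-label probabilities for the given text.
    return {"hate": 0.10, "harassment": 0.90, "threat": 0.05, "sexual": 0.00}

def route(text: str) -> str:
    scores = classify_text(text)
    if max(scores.values()) >= REVIEW_THRESHOLD:
        return "escalate_to_human_review"   # never auto-delete
    return "publish"

print(route("example comment"))  # -> escalate_to_human_review
```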
Comments and posts can be translated to the user's language on demand. Translation is opt-in and does not modify the user's original text.
Voice messages can be transcribed to text for hearing-impaired users on demand. Data is processed per-message and not stored.
Uploaded photos and videos get AI-suggested topic, location and theme tags, but no tag is ever added unless the user confirms it. The user decides, not the algorithm.
Messages or posts containing risk signals trigger a private, non-judgmental support message with local crisis hotline information.
There is no 'For You' algorithm on the main feed. Order is driven by tag match, interests and chronology. You decide what you see — not us.
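To show what "no ranking model" means in practice, here is a minimal sketch of a feed ordered only by tag overlap with the user's chosen interests and by recency. The field names and tie-breaking rule are assumptions for illustration; no engagement prediction is involved.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Ordering depends only on tag overlap with the user's chosen interests
# and on recency. Field names are assumptions for this sketch.
@dataclass
class Post:
    author: str
    tags: set[str]
    created_at: datetime

def feed(posts: list[Post], interests: set[str]) -> list[Post]:
    def key(p: Post):
        return (len(p.tags & interests), p.created_at)  # tag match, then chronology
    return sorted(posts, key=key, reverse=True)

now = datetime.now(timezone.utc)
posts = [Post("a", {"cycling"}, now), Post("b", {"cooking", "cycling"}, now)]
print([p.author for p in feed(posts, {"cycling", "cooking"})])  # ['b', 'a']
```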
We do not run AI systems engineered to extend session time via rage bait, polarizing content, or addiction loops — and we never will.
Mio is ad-free. We do not train or use any AI to profile users for ad targeting purposes.
User content, messages, images or behavior is never sold, shared or licensed to third-party AI companies as training data.
No user content is deleted by AI alone. AI generates signals; the final decision always passes through a human moderator. The user always has the right to appeal.
Each of these features solves a real user problem. None of them shipped just because 'AI is trendy'.
Expand hate speech and harassment detection to 50+ languages. Contribute to open-source models in low-resource languages (Turkish dialects, Arabic dialects, Kurdish variants).
Automatically detect and visibly label AI-generated images and videos. The transparency label is mandatory and cannot be hidden.
Optional AI assistant that improves drafts, corrects grammar and suggests translations on demand. Off by default. No post is ever auto-generated — all generation is explicit with user consent.
Auto-generated alt-text on all images for vision-impaired users. The creator can always edit the alt-text.
Real-time multi-language translation in voice messages and live broadcasts. Data is processed per session and never recorded.
We are also looking beyond the next two years. Open-sourcing our child safety models, keeping data on-device with federated learning, contributing to local LLM ecosystems: these aren't 'maybes', they're our roadmap.
We plan to open-source Mio's child safety classifiers and training methodology, a contribution from Türkiye to the world so that small platforms can be just as safe as the giants.
Train models directly on user devices without ever sending raw data to our servers (federated learning). We want to prove safety doesn't have to come at the cost of privacy.
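A toy illustration of the federated averaging idea: each device computes a model update on its own data, and only the weights, never the raw data, are sent back and averaged. The dimensions, learning rate and gradient values are placeholders chosen only for this sketch.

```python
import numpy as np

# Toy federated-averaging round: dimensions, learning rate and the
# gradient values are placeholders chosen only for this illustration.
def local_update(global_weights: np.ndarray, local_gradient: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """Runs on the user's device; raw data never leaves it."""
    return global_weights - lr * local_gradient

def federated_average(device_weights: list) -> np.ndarray:
    """The server only ever sees weight vectors, not user content."""
    return np.mean(device_weights, axis=0)

global_w = np.zeros(3)
device_updates = [local_update(global_w, np.array([1.0, 0.0, 0.0])),
                  local_update(global_w, np.array([0.0, 1.0, 0.0]))]
print(federated_average(device_updates))  # averaged model, no raw data shared
```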
If your content was hidden, given a warning or removed, an Explainable AI layer answers 'why' in human language, not technical jargon.
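A minimal sketch of such an explanation layer, mapping internal decision codes to plain-language messages; the codes and wording are hypothetical examples, not our real templates.

```python
# Hypothetical decision codes and messages; real explanations would be
# generated from the actual signals behind each moderation decision.
EXPLANATIONS = {
    "nsfw_blur": "This image was blurred because our nudity classifier "
                 "flagged it. You can appeal if you think this is wrong.",
    "toxicity_review": "This comment is pending review because it was "
                       "flagged as possible harassment.",
}

def explain(decision_code: str) -> str:
    return EXPLANATIONS.get(
        decision_code, "This content was limited. You can appeal the decision.")

print(explain("nsfw_blur"))
```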
Financial and data contributions to open-source Turkish-focused LLMs (TURNA, BERTurk, etc.) — strengthening the independent Turkish AI ecosystem.
A community-verified trust layer based on social interaction patterns — without using personal identity. Structural defense against fake accounts, scams and manipulation.
A personal AI assistant scoped only to your own content, fully under your control, portable to other platforms. It can be disabled, deleted or exported at any time.
Annual third-party audits of our AI systems with publicly published reports. Transparency proven by an independent body, not just our word.
An independent council of academics, rights advocates and user representatives reviewing major AI features before launch.
These principles are part of our product process. If an AI feature fails even one of them, it doesn't ship.
We publicly document which AI we use, where, what decisions it makes and what its limits are. This page is proof of that promise.
Users can appeal every AI decision. There is no auto-deletion; a human moderator makes the final call.
AI processes only the minimum data needed for its task. Raw content is discarded after classification; only metadata is logged.
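The sketch below illustrates that minimization rule: after classification, only labels and a timestamp are logged and the content itself is never stored. The log schema is an assumption made for illustration.

```python
from datetime import datetime, timezone

# The log schema is an assumption for illustration; the point is that
# only labels and a timestamp are kept, never the content itself.
def classify_and_log(content: str, classify) -> dict:
    result = classify(content)
    log_entry = {
        "labels": result,                                  # metadata only
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # 'content' is never written anywhere; it goes out of scope here.
    return log_entry

print(classify_and_log("example text", lambda c: {"toxic": 0.01}))
```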
We use AI to make content safe, NOT to generate content. AI generation features are always opt-in and clearly labeled.
We prefer open-source models when possible. When using closed source, we publicly disclose the provider and data policy.
We will NEVER use AI to make users addicted or to manipulate them. AI exists to serve the user — not to extract data from them.
For details on our AI policies, models used, audit reports and data-handling protocols, please reach out. Academic research requests are prioritized.
We don't sell data. We don't show ads. Our AI exists to protect you — not to monetize you.
Most social networks say "we care about AI ethics" but never document which model, for what purpose, within which limits. This page is that documentation.
Mio's AI approach will go through annual independent audits; reports will be publicly published. By 2027, we aim to open-source the core architecture of all our safety models.
For questions, feedback and academic collaborations: [email protected]