
Moderation Decision Making

How do we make moderation decisions on Aveola?


Why we’re sharing this

Your safety and privacy matter to us, so we want to be clear about how we make moderation decisions and enforce our Community Guidelines.


What “automated decision” means

An automated decision is one made by software—not a person—that could affect someone’s rights or experience on the platform.


How moderation works

We use a mix of human moderators and AI. Some content is removed automatically to prevent harm: for example, if our systems detect violence or nudity in a live video, the live stream may be closed immediately. We also automatically filter inappropriate or illegal messages in chat.


What happens when we spot a problem

If something appears to break our Guidelines or the law, we create an internal report. A human moderator reviews it and takes the appropriate action. While we investigate, the reported account may be flagged.


Your reports matter

Our moderators and automated AI tools also review every report submitted by users.


Humans stay in the loop

Our automated tools are supervised by our teams to reduce mistakes (like false positives) and keep decisions fair.


Think we got it wrong?

If you believe we made a mistake, please contact support at support@aveola.app and we’ll review the decision.
