2021 Live Streaming Transparency Report – The Meet Group

April 29, 2022

Every day, The Meet Group helps millions of people find connection and community within our portfolio of mobile apps and our video solution, vPaaS. As we emerge from the COVID-19 pandemic, the ability to find connection and meet new people has never been more important. But dating and social apps remain a target for bullies, trolls, and fraudsters. At The Meet Group, we are dedicated to providing an environment where our members treat each other fairly and can connect and interact as safely as possible.

We employ more than 500 full-time paid moderators and leverage sophisticated AI to help keep our communities safe. We also collaborate extensively with industry leaders to fight online toxicity and abuse. To further these collaborations and continue the industry conversation on safety, we are publishing this, our first Transparency Report. We believe transparency is critical to establishing industry standards and best practices, and we hope that others will do the same so that, as an industry, we can create safer spaces for all.

This report details the prevalence of live streaming content that violates our Content and Conduct Policy and Terms of Service, and provides insight into our enforcement practices, turnaround times, and how we work with law enforcement to keep the community safe. Unless otherwise noted, this report covers the period from January to December 2021 and applies only to users within our live streaming video networks.


Streamer Totals

On average, more than 588,534 live streams were created daily, for a total of 214,226,431 streams in 2021. That amounts to 4,513,117,256 broadcast minutes (or 75,218,620 hours, 3,134,109 days, or 8,581 years!). These figures include apps across The Meet Group portfolio as well as our partners.
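The unit conversions above can be reproduced with a few lines of arithmetic (a sketch using the figures quoted in this report; the year figure assumes a mean Julian year of 365.25 days):

```python
# Reproduce the broadcast-time unit conversions quoted above.
total_minutes = 4_513_117_256

hours = total_minutes // 60      # 75,218,620 hours
days = hours // 24               # 3,134,109 days
years = round(days / 365.25)     # ~8,581 years

print(f"{hours:,} hours / {days:,} days / ~{years:,} years")
```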


How Do We Moderate Live Video Content?

The Meet Group maintains a comprehensive Content & Conduct policy that evolves alongside industry standards and community trends. Our senior management meets weekly to review samples of content and confirm that our moderation standards remain appropriate for our community. The policy strictly prohibits harassment and bullying, sexually explicit content, hate speech, promotion or depiction of self-harm or violence, spam, and child exploitation.

Reporting

The Meet Group uses an industry-leading abuse reporting system developed in conjunction with law enforcement experts. Live streaming features have easily accessible in-app reporting systems that allow users to quickly report inappropriate behavior observed in live streams to our moderators, who investigate the report and take appropriate action. User-reported content is reviewed and acted upon within 60 seconds, and typically much sooner.

Outside of our in-app reporting features, users can email our support channels to report inappropriate behavior. Email reports of abusive behavior are investigated within 24 hours of receipt, and generally within minutes or hours of submission.

Proactive Discovery

We take a highly proactive approach to moderating our live video community. Using a mixture of AI and real-time human review, we review all live video streams for inappropriate behavior. When a user is found to be violating our Content & Conduct policy, their stream is ended and a ban is issued. In more serious cases, the profile may be closed immediately and the device blocked from returning to the app.

Most live video streams are proactively sampled every five seconds, and some even more frequently. Streamers with a long history of good conduct are sampled every ten seconds; nobody is exempt. Screenshots are programmatically generated and evaluated by proprietary machine learning algorithms, and any samples our AI cannot confirm as appropriate are queued for human review.

Moderators review every screenshot not approved by AI, applying our comprehensive content moderation standards to determine the appropriate course of action. This could include, but is not limited to, ending the stream, temporarily banning the user from live video, permanent account deletion, or in rare cases, potential escalation to law enforcement.
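As a rough illustration, the sampling-and-triage flow described above might look like the following sketch. The interval values mirror the cadences stated in this report; the classifier and review queue are hypothetical stand-ins, not The Meet Group's actual systems:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Stream:
    streamer_id: str
    trusted: bool  # long history of good conduct

def sampling_interval(stream: Stream) -> int:
    """Trusted streamers are sampled less often, but nobody is exempt."""
    return 10 if stream.trusted else 5  # seconds between screenshots

@dataclass
class ModerationPipeline:
    ai_is_appropriate: Callable[[bytes], bool]  # hypothetical ML classifier
    human_review_queue: List[bytes] = field(default_factory=list)

    def triage(self, screenshot: bytes) -> str:
        # Screenshots the AI can confirm as appropriate are approved;
        # everything else is queued for a human moderator.
        if self.ai_is_appropriate(screenshot):
            return "approved"
        self.human_review_queue.append(screenshot)
        return "queued_for_human_review"
```

The key design point is that the AI only ever approves; anything it cannot confirm falls through to a person, so uncertain cases are never auto-cleared.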

Beyond reviewing live streams via system-generated screenshots, The Meet Group employs a team of dedicated live video moderators who watch streams in real time. In addition to the content of the video stream itself, these moderators also evaluate users' display names, profile photos, and stream descriptions for appropriateness.

When a moderator observes content or behavior that violates our Content & Conduct policy, they are able to swiftly end the stream and/or ban the user.

Taking Action

While the occasional bad actor exists, over 99% of streams and related content are in line with our Content & Conduct Policy. In 2021, moderators ended 1,933,590 streams for content violations, accounting for less than 1% of all streams. The vast majority of these enforcement actions came from abuse discovered through proactive sampling rather than viewer reports.

Disciplinary action varies with the severity of the offense. Lesser offenses result in warnings and temporary bans from live video, and for each offense the user receives a detailed warning message explaining the inappropriate conduct. To educate users on what is and is not appropriate, a link to our Content & Conduct policy is provided at the start of every stream as well as with each ban and warning. Repeat offenses can result in additional penalties, up to and including account deletion, and severe first offenses can result in immediate account deletion. When an account is deleted, we take measures to permanently prevent the user from creating new accounts and rejoining the community, including banning the email address or phone number from reregistering and, where possible, blocking the device. Our use of iOS DeviceCheck has produced good results here.
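The escalation ladder described above could be modeled roughly as follows. The two-tier severity split, the action names, and the repeat-offense threshold are illustrative assumptions, not The Meet Group's exact internal rules:

```python
def enforcement_action(severity: str, prior_offenses: int) -> str:
    """Map an offense to a disciplinary action, escalating on repeats.

    severity: "minor" or "severe" -- an assumed two-tier simplification.
    """
    if severity == "severe":
        # Severe first offenses can result in immediate account deletion.
        return "account_deleted"
    if prior_offenses == 0:
        return "warning"
    if prior_offenses < 3:  # threshold is hypothetical
        return "temporary_live_video_ban"
    # Repeat offenses escalate up to and including account deletion.
    return "account_deleted"
```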

Streams Ended by Offense

Non-nude sexual content: 906,188 (46.87%)
Attempted broadcast of nudity: 656,955 (33.98%)
Driving while broadcasting: 152,931 (7.91%)
Child in view, but not streamer: 99,612 (5.15%)
Hate speech, bullying, and threats of violence: 11,624 (0.60%)
Spam: 6,593 (0.34%)
Profile photo mismatch: 2,691 (0.14%)
Suspected hard drug use: 1,878 (0.10%)
Other: 1,012 (0.05%)

Viewer Chats

In the second half of 2021, live stream viewers posted 2,815,853,241 textual chats in live streams. On average, that’s over 15 million per day!

Of these nearly 3 billion viewer chats, less than 1% were found to be inappropriate or otherwise in violation of our Terms of Service.

Disciplinary action varies with the severity of the chat's content, and can range from simply suppressing the chat to issuing a warning and a ban from the live video feature.

All viewer chats are proactively moderated using machine learning algorithms trained to detect abuse, including hate speech, sexual harassment, and other toxic content, in 12 languages: English, French, German, Dutch, Russian, Spanish, Japanese, Chinese, Korean, Italian, Portuguese, and Turkish.

These predictive algorithms are supplemented by a stop-list, which enables us to quickly block newly emerging abusive text between retrainings of the algorithms.

Any viewer chats reported by other users as inappropriate are queued for human review. All chats made by the reported viewer are shown to give the moderator additional context. Bans are issued for repeated and flagrant violations.


Member Safety

Criminal Conduct

The safety of our members is The Meet Group's #1 priority. We do not tolerate users engaging in criminal conduct, whether on or off our apps. We investigate all reports of criminal behavior or prior convictions and take appropriate action.

We compare the data provided by every US registrant against a nationwide sex offender database to find and remove matches at the time of registration. We also conduct daily searches to identify users who may have become registered sex offenders since joining our apps. We do not otherwise conduct criminal background checks of users.

Exigent Circumstances

The Meet Group takes the mental health of our users very seriously and investigates any and all reports of users threatening self-harm or suicide. In 2021, we escalated four instances of users making alarming, specific suicidal statements in live video to their local law enforcement agencies.

When a user makes a threat of mass violence, The Meet Group immediately alerts and cooperates with their local law enforcement agency. There were no escalations to law enforcement regarding threats of mass violence in 2021.

Child Exploitation & Abuse

The Meet Group was a leading contributor to Project Artemis, and takes a firm stance against any and all forms of corruption or endangerment of minors. Behaviors in this vein are illegal, morally reprehensible, and absolutely prohibited by our Terms of Service. Prevention and detection of these behaviors drive a large part of our moderation practices and policies.

Our Trust & Safety team uses proactive detection measures to identify instances of exploitative behavior involving minors. We algorithmically discover minors misrepresenting their age to gain access to the service and immediately remove their profiles. We use AWS technology to estimate the age of people appearing in profile photos; when that technology indicates with confidence that a minor is present, the profile is queued for The Meet Group's safety team to perform a holistic review. We have begun applying a similar approach to estimate the ages of live streamers.
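The report does not name the specific AWS service, but assuming a face-analysis response like Amazon Rekognition's DetectFaces output (which returns an estimated AgeRange per detected face), the gating step might resemble this sketch; the age margin and the function itself are hypothetical, not The Meet Group's actual logic:

```python
ADULT_AGE = 18
AGE_MARGIN = 3  # hypothetical buffer: flag anyone who might plausibly be a minor

def should_queue_for_review(face_details: list) -> bool:
    """Queue a profile for human safety review when any detected face's
    estimated age range suggests a minor may be present in the photo."""
    for face in face_details:
        age_range = face.get("AgeRange", {})
        low = age_range.get("Low")
        if low is not None and low < ADULT_AGE + AGE_MARGIN:
            return True
    return False
```

Note that the machine estimate only gates escalation to a human; the holistic review and any removal decision stay with the safety team.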

We thoroughly investigate any report involving a possible minor user or child exploitation. If possible exploitation or abuse of a child is identified, we preserve any and all available data and report the incident to the National Center for Missing and Exploited Children (NCMEC), with whom we have a longstanding relationship. NCMEC reviews the report and relays the information to law enforcement authorities as appropriate. The Meet Group escalated 464 live video incidents to NCMEC in 2021, representing about 0.0002166% of streams.