Technology Enables Bullying, but Can It Empower Survivors, Too?

Some technologists are trying to create more humane systems online


Nikki Mattocks was about to take her high school exams when the bullying started. “My friend told me on the phone that her mum thought I was going to get a knife from the kitchen and kill them all,” says Mattocks, who is now 21 and speaking from her room in a South London hospital, where she is being treated for the recurrent depressive disorder, psychosis, and PTSD she has suffered from since she was a teenager. “I had just come out of hospital, and her mum had read something in the paper about ‘people like me’, who have mental illnesses, being violent.”

People noticed Mattocks had been missing school, and when rumors began spreading on Facebook, the abusive messages followed. “The news of my illness spread like wildfire, but nobody actually understood my condition. They just stigmatized it and spread fear and hatred of me. People kept saying, ‘Everyone is talking about you — you shouldn’t come back to school.’”

“One person reported the original status update to the school, who had it taken down, but that just meant that the cyberbullying became invisible,” she explains. The bullying on Facebook and BlackBerry Messenger was a constant presence for Mattocks, but went largely unseen by anyone who might have been able to help. “I received direct messages, saying ‘please don’t come back to school, we don’t want you here.’ And people would send me screen grabs of other stuff which was being said about me online.”

The stress of Mattocks’ online life started to amplify the difficulties she was facing elsewhere. She wasn’t living with her mum at the time, but with her dad and older sister, and spent a lot of time on her own. “I had no confidence to stand up for myself, and I didn’t know anyone else who heard voices. So I withdrew.”

Vulnerable and without proper support, she started hanging out with older kids, which led to her using drugs and being abused. The helplessness Mattocks felt during that dark time is shared by millions of children and adults the world over.

Michael Brennan, who founded the award-winning safeguarding platform tootoot, was himself a victim of cyberbullying at school. “There were too many barriers for me to speak up, especially in high school. It was all happening on places like Bebo and MySpace, where there was no way to tackle it. So, I vowed to find a solution to the problem.”

Since Michael launched tootoot in 2014, the reporting app has worked with more than 1,000 British schools, with over 400,000 children registered on the platform. Children can log in to report problematic messages to their school or local council, and each is assigned a unique number that stands in for their identity. Schools can keep track of how many times an individual child has experienced bullying, build a chronology, and identify patterns on a dashboard. If they feel it’s necessary, they can click to reveal the identity of a child reporting bullying.

The company worked with children and schools to develop three basic principles with which to build the app. “First, profiles are anonymous until children are in danger,” he says. “Second, children wanted to be able to attach evidence, and third was the link to a teacher.”
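
tootoot’s internal design isn’t public, but those three principles map naturally onto a simple data model. The sketch below is a hypothetical Python illustration, not the company’s actual code; every name and field in it is an assumption.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BullyingReport:
    """One report logged by a child; fields are illustrative only."""
    anon_id: str                       # unique number staff see instead of a name
    message: str                       # the child's account of the incident
    evidence: list[str] = field(default_factory=list)  # e.g. attached screenshots
    teacher_id: str = ""               # the linked member of staff
    identity_revealed: bool = False    # anonymous until the child is in danger
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def build_chronology(reports: list[BullyingReport],
                     anon_id: str) -> list[BullyingReport]:
    """Dashboard-style view: every report from one child, oldest first,
    so staff can spot repeat incidents and patterns."""
    return sorted((r for r in reports if r.anon_id == anon_id),
                  key=lambda r: r.created_at)
```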

Collective responsibility is a theme that runs throughout responses to cyberbullying, both in real life and in technology. Another company working to tackle the problem is Besedo, which provides content moderation software called Implio to classified sites such as Gumtree, as well as to dating sites like Match.com and Meetic. Much of that software is designed to identify cyberbullying.

“Our A.I. filters look for toxic language, like sexual terms or expressions which could be used as harassment,” says Maxence Bernard, Besedo’s head of research and development. “We’re giving companies tools to monitor everything, rather than just the first few messages as some companies do. At the same time, that’s very difficult. To determine whether something is problematic, you still need a human eye.”

Besedo analyzes public data sets from sites like Twitter to fine-tune its filters, helping make its system more accurate and less likely to produce false positives. “We use models which will be trained with labels like ‘toxic’ or ‘non-toxic,’” explains Bernard. “You train the models to detect the toxic sentences. Data scientists and linguists work on this together to find semantic patterns.”
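
Besedo’s production models are proprietary, but the supervised approach Bernard describes, training on sentences labeled “toxic” or “non-toxic” and scoring new messages, can be sketched in a few lines with scikit-learn. The tiny training set below is invented purely for illustration.

```python
# A minimal toxic-language classifier in the spirit Bernard describes:
# fit a model on labeled examples, then score incoming messages.
# The four examples here are invented; a real system would train on
# thousands of labeled sentences, e.g. from public Twitter datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "hope you have a great day",
    "thanks, that was really helpful",
    "nobody wants you here, go away",
    "you are worthless and everyone hates you",
]
train_labels = ["non-toxic", "non-toxic", "toxic", "toxic"]

# TF-IDF turns sentences into word-weight vectors; logistic regression
# learns which word patterns signal harassment.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(train_texts, train_labels)

# Score a new message. As Bernard notes, borderline scores would be
# routed to a human moderator, because context still matters.
msg = "please don't come back, we don't want you here"
print(model.predict([msg])[0], model.predict_proba([msg]).max())
```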

Monitoring people on the internet raises challenging questions about when to intervene, and how to decide whether to censor, ban, or report someone who is being abusive. “No matter how good the A.I., you will, at some point, need the context,” admits Bernard. “What you see on one site isn’t necessarily a comprehensive representation of what’s going on between two people.”


I put it to Bernard that larger sites appear to exhibit less social responsibility than smaller ones when it comes to bad behavior. “The larger companies are dealing with far more data, and tend to be free services, so maybe paying sites have more resources to deal with that,” he says. “I get the feeling that they don’t know where they stand. They don’t want to be policing public conversations too closely, or censoring people. They get blamed very easily when public conversations go wrong, but the decision as to what crosses the line is very subjective.”

If prevention is better than cure, what would Bernard recommend to encourage a better quality of engagement online? “The best way to encourage best behavior is not to accept anything but,” he replies. “If you let bad behavior happen — like say on YouTube, where you see so many racist or sexist comments — you imply that behavior is OK.”

Others are going a little further, though. Project Rez is a social responsibility platform — a kind of experimental online safe space where users are encouraged to identify how they feel through images and colors.

Rez’s most radical feature is that it doesn’t use words — the most common vehicle for bullying online. When a user logs on using an avatar, they pick an issue — such as gun violence, racism, or grief — and then select an emotion to go with it. They can color their avatar with how they’re feeling each day.

Developed in Denver, Colorado, Rez was set up by Rhonda and Aris Persidis of Grid News Bureau, who originally tested the technology in sports and in politics — notably during the 2016 election, where they saw a need for positive connection on a larger scale.

“There is a very strong connection between colors and emotions,” explains Rhonda Persidis, who has used academic research to inform the “emotional palettes” of the site’s avatars. “The color coding provides a visual snapshot, so that users can look and see how they are coping day after day. A dashboard gives an overview of how all users feel on any given day — a powerful measure of the impact of a shared event like a school shooting, for example.”
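
Rez’s actual palette and dashboard logic aren’t published, so the Python sketch below is an assumed illustration of the idea: a fixed emotion-to-color mapping plus a daily aggregate across all users. The hex values, emotion names, and function names are mine, not the product’s.

```python
from collections import Counter

# Hypothetical "emotional palette". Rez's real mapping is informed by
# academic research on color and emotion; these hex values are invented.
PALETTE = {
    "grief": "#4a6fa5",   # muted blue
    "anger": "#c0392b",   # deep red
    "fear":  "#8e44ad",   # purple
    "calm":  "#27ae60",   # green
    "hope":  "#f1c40f",   # yellow
}

def avatar_color(emotion: str) -> str:
    """Color a user's avatar for the day from the emotion they selected."""
    return PALETTE.get(emotion, "#95a5a6")  # neutral grey if unmapped

def dashboard_snapshot(selections: list[str]) -> dict[str, float]:
    """Share of users reporting each emotion on a given day: the kind of
    aggregate that would spike after a shared event like a school shooting."""
    counts = Counter(selections)
    total = sum(counts.values())
    return {emotion: counts[emotion] / total for emotion in counts}

print(dashboard_snapshot(["grief", "grief", "anger", "hope"]))
# {'grief': 0.5, 'anger': 0.25, 'hope': 0.25}
```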

When it comes to bullying, Persidis hopes that before people take to the internet to spread hate, they might instead take a second to properly label their emotions and break the cycle of knee-jerk reactions from which bullying often stems. Rez aims to provide a bridge between raw, unfiltered emotion and social media, acting like a cushion that lets people step back and examine their feelings before sharing. Only when users reach out to support services do they gain the ability to communicate in written messages.

“Anger comes out of hurt,” explains Persidis. “We’d rather provide an outlet to someone who is inflicting pain. At their core, they are probably dealing with an emotion, and don’t know how to deal with it in a way that doesn’t hurt anyone else. I hope that it will help both victims and people who are bullying.”
