Using AI to Make the Internet Safer

Today’s Web, characterized by social media and user-generated content, is a powerful, open medium that gives everyone a voice. Unfortunately, some use their voices to bully or harass other users. Social media platforms, online bulletin boards, blog sites, media companies, and anyone else who opens posts to comments struggle to identify and deter online harassment. That’s why Intel has joined a number of other organizations to create Hack Harassment, a collaborative effort to reduce the prevalence and severity of online harassment by increasing awareness and accountability, advancing anti-harassment technology, and effecting change for individuals and communities. And we’re tackling a portion of this goal with artificial intelligence (AI).

Online harassment is a large and growing problem. Forty percent of online users say they have personally experienced harassment: trolling, flaming, stalking, even threats of physical or mental harm. Young Internet users are even more likely to have been harassed (70% of those aged 18-24, according to Pew Research1). Women, people of color, and LGBTQ people are more likely to experience harassment online. And alarming as these numbers are, there’s evidence that online harassment is actually under-reported.

It’s not just a problem for users, but for the companies and organizations trying to make their sites and services safe and welcoming. Appearing on one of the morning news shows recently, Instagram co-founder Mike Krieger recounted the days when he and co-founder Kevin Systrom personally spent hours scanning posts for harassment. Many sites now give users a way to report abuse, but site operators are overwhelmed by the volume of reports to investigate, and action often comes too late to help the victim.

A human moderator may find it easy to recognize harassment, but simple keyword filters are unreliable. When a teen responds to her friend, “You got tickets to that concert??? I hate you!!!” she probably means no harm; meanwhile, far subtler statements that contain no inflammatory words can be menacing in tone and intent, as the sketch below illustrates.
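
To make the failure modes concrete, here is a minimal sketch of such a keyword filter in Python. The blocklist and example messages are invented for illustration; they are not part of any Hack Harassment tool.

```python
# A deliberately naive keyword filter, for illustration only.
# The blocklist below is hypothetical.
ABUSIVE_WORDS = {"hate", "stupid", "loser"}

def naive_filter(message: str) -> bool:
    """Flag a message if it contains any blocklisted word."""
    words = {w.strip("?!.,").lower() for w in message.split()}
    return bool(words & ABUSIVE_WORDS)

# False positive: playful banter trips the filter.
print(naive_filter("You got tickets to that concert??? I hate you!!!"))  # True

# False negative: a menacing message with no blocklisted words slips through.
print(naive_filter("I know where you live. Sleep well tonight."))  # False
```

A filter like this has no notion of context, so it flags the friendly message and passes the threatening one. That gap is exactly what a learned model aims to close.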

Hack Harassment is attacking the problem on many fronts. We’re increasing awareness and fostering collaboration among organizations and users who are committed to fighting online harassment. But we’re also applying technology to build and deploy tools that help fight it.

That’s why we’re tackling the problem with AI. The nuances of human language make harassment detection an ideal application for machine learning. Huge volumes of data are available to train a model to spot abuse, so we’re tapping them to compile a dataset for training machine learning algorithms. We’ve created a machine learning classifier application to detect harassment in digital media like online posts, email, chat, and texts, and to do it at scale. These are open source efforts, so the results of our work will be available to Internet sites that want to reduce harassment, to businesses that want to stop workplace harassment in email or chat, and to those who might offer services that others can call in real time.
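
To give a feel for the general approach (this is an illustrative sketch, not the Hack Harassment classifier itself), a text classifier of this kind can be prototyped in a few lines with scikit-learn. The tiny training set and labels below are invented for the example:

```python
# Minimal text-classification sketch, assuming scikit-learn is installed.
# The toy messages and labels are invented; a real system trains on
# large, carefully labeled corpora of online posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = harassing, 0 = benign.
messages = [
    "nobody likes you, just leave",
    "you should be scared to log on tomorrow",
    "great game last night, congrats!",
    "can you send me the meeting notes?",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score new posts; a deployed service would do this at scale, in real time.
for post in ["everyone thinks you are pathetic", "see you at the concert!"]:
    print(post, "->", round(model.predict_proba([post])[0][1], 2))
```

The specifics (features, model, training data) will differ in a production system, but the pipeline shape is the same: convert text into features, train on labeled examples, then score new content as it arrives.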

Want to learn more and be part of the solution? Take the Hack Harassment pledge. It’s a good thing to do, and if you provide your email address you’ll receive updates on our efforts to make the Internet safer and more inclusive. You’ll also be among the first to try out the classifier when it’s ready for beta testing in the coming months.

1: Pew Research Center, “Online Harassment,” October 22, 2014. http://www.pewinternet.org/2014/10/22/online-harassment/