Interview / Eva and Franco Mattes

Tue 10 Nov 2015

To coincide with the release of the first chapter of their new work, Dark Content, we interviewed Eva and Franco Mattes to find out more about the moderation of ‘offensive’ online material and to better understand the workings of the Darknet…

How did Dark Content develop? What was your starting point for the work?

It all began when I first started using Facebook and noticed there were hardly any Neo-Nazi or Ku Klux Klan or beheading videos. If social media is a tool for people to share whatever they want, there should be a lot of this kind of stuff, for better or worse. So we started investigating whether and how this kind of content gets filtered, and it turned out that it’s not done by computers but by humans.

Since social media became a multibillion-dollar industry, these companies now employ a massive labour force of ‘content moderators’ who remove offensive material from social networking sites. Although very elusive, content moderators are crucial to understanding this process: they are the ones who decide how much breast is too much breast on Instagram, or who remove photos of Osama Bin Laden from search engines. This is done in order to protect, and sometimes divert, the rest of us.

How did you track down the content moderators, and how forthcoming were they in sharing their stories?

Most of the content moderators work from their cubicles or apartments in the Philippines, Arizona or Bulgaria, miles away from the anonymous companies that hire them. Understandably, they wanted to remain completely anonymous. We tried many different ways to get in touch, until, after several months, we decided to try posing as a company ourselves, using the same services the big companies employ to anonymously crowd-source content moderators, and that worked out.

In the videos the workers’ faces keep changing while the voice remains the same because, even after interviewing them, we have no idea about their age, gender, ethnicity, etc.

How important do you think anonymity is to this process of content moderation?

The interesting thing is that both the content moderators and the companies that hire them want to remain anonymous. Big companies like Google or Facebook don’t want their users – us – to know that their content is being reviewed. They want to be perceived as transparent tools of self-expression, where everyone can ‘share’ whatever they want, when in fact they have incredibly sophisticated ways to control the content.

Moderators, on the other hand, want to remain anonymous either because they have non-disclosure agreements with the companies they work for, or simply because they don’t want their families and friends to know that their job consists of watching disgusting videos for ten hours in a row.

Why have you decided to release the chapters on the Darknet and how will audiences be able to access these?

Most of the content that is removed from social media – the ‘surface’ internet – very likely ends up on the Darknet, so it makes sense to publish our videos there. We also wanted to invite people to browse anonymously in this side of the internet that most of us are not familiar with. The Darknet is largely presented by mainstream media as a marketplace for drugs, weapons and pornography, but it is also the platform that enabled free speech for activists living under oppressive regimes, during the Arab Spring for example, and the revelations of whistle-blowers like Edward Snowden.

On the Darknet everyone is anonymous. The general assumption is that if you want to be anonymous, you must be doing something illegal. This is incredibly wrong. Anonymity is a fundamental right. It is at the base of democracy. Think about voting, for example: it is expressed anonymously to avoid pressure, manipulation or vote trading… There is no democracy or free speech without anonymity.

Can you tell us more about your interest in making invisible practices or procedures more visible?

When looking at something, be it a newspaper or TV or YouTube, I sometimes ask myself what’s missing, what’s not there? It’s a good way to start defining the boundaries of a given system, its limitations, and of course the next question is: why is this thing missing? Who removed it? Often what’s missing is more important than what’s present.

How do you see this content moderation evolving in the future? Do you think this is something an algorithm will ever be able to do?

I’m sure most of the companies would love that; it’s what they would like us to think, that all this content is handled by algorithms and not by actual people. But there are a lot of decisions based on morality that are hard to imagine computers taking over. For example: an algorithm can identify a swastika, but it has a hard time understanding whether the poster comes from a Neo-Nazi (so it should be removed) or a Holocaust survivor (so it should stay). These kinds of choices still require humans.

In addition, what is considered ‘difficult’ content is not just gross or violent; it is very often politically problematic. These companies often collaborate with governments to remove content. For example, when Osama Bin Laden was killed, incidentally during the US presidential campaign, there was an order from ‘the top’ to remove any video or image featuring him. Similarly, the video of a Buddhist monk setting himself on fire to protest against the Chinese government was removed by Facebook, at a time when the company is trying hard to expand into China and is therefore willing to please the Chinese government.

So, basically, whether the final execution of the removal order is carried out by a human or an algorithm is not so relevant; what’s important here is who gives the order and why.

More information: instructions on how to view Dark Content

 
