Tech Against Terrorism: Evolving Narratives

Tech Against Terrorism is taking the conversation about how technology companies respond to terrorist use of their platforms public, with an event last month at Chatham House, London. The first in a series of events exploring this important topic in the open, the session brought together policy makers from Google, Facebook and Twitter, with smaller tech businesses and startups represented by Mariusz Zurawek of JustPaste.it.

What was striking about the event and the debate that took place was the openness of the platform owners: once keen to keep policy private, they are now very much opening up about what they can do to help.

The Tech Against Terrorism project aims to bring this conversation out from behind the closed doors of policy makers and tech businesses and into the open, in order to foster dialogue, understanding, awareness and collaboration. The fact that it was held at Chatham House was not lost on attendees. A great venue for debate and discussion, though the venue’s own ‘Chatham House Rule’ was not in force: the discussion was public, and live streamed.

The event was chaired by Adam Hadley, Project Director at ICT4Peace, who set the scene:

“It’s not just the tech giants that are affected by terrorist use of their platforms; smaller tech companies don’t all have the resources or skills (or, for that matter, the technology) to handle this themselves.”

What do tech companies need help with?

  • Terms of Service violations
  • Transparent reporting
  • Respecting human rights and freedom of expression in a complex landscape

The Tech Against Terrorism project aims to bring tech companies together with policy makers to answer questions like “What is the role of violent content online, and how does its impact compare to more subtle content?”, “Is content takedown the only solution?”, “How significant is online versus offline activity?” and “What about countering extremism?”. Furthermore, it will explore:

- the benefits of algorithmic responses
- the need for human involvement in terms of service violations
- how to provide small tech companies with the support they need  

Erin Saltman, Policy Manager for EMEA Counter-Terrorism and CVE at Facebook, then spoke about Facebook’s approach:

“There is no place for terrorism on Facebook.”

Erin explained that advancements in machine learning mean it is increasingly possible to use image and video matching (matching multiple similar or identical uploads and stopping them from hitting the platform). Simply blocking or removing content is not enough, as it is vital to understand the context. Facebook relies on human expertise: “machines can’t do nuance”. She explained that Facebook has teams working 24/7 globally, made up of counter-terrorism subject matter experts who review content and assess threats.

But partnerships are key, and no tech business should operate in a vacuum. Erin pointed out that “people don’t just have a Facebook or a Twitter account”. Like everyone else, terrorists use a broad toolkit of social media (and other digital tools), and that toolkit changes. One way companies are already working together is a hash-sharing database, which enables multiple businesses to jointly identify and block images that have been determined to violate Terms of Service.
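To make the hash-sharing idea concrete, here is a minimal sketch in Python of checking uploads against a shared blocklist of fingerprints. The database layout and function names are illustrative assumptions, not the industry's actual system; real deployments also use perceptual hashes (which still match resized or re-encoded copies) rather than only the exact-match hashing shown here.

    # Minimal sketch of hash-based blocking against a shared database.
    # SHA-256 only catches exact duplicates; production systems add
    # perceptual hashes so altered copies of an image still match.
    import hashlib

    # Hypothetical shared blocklist: fingerprints contributed by
    # participating companies, mapped to the reason for listing.
    shared_hash_database = {}

    def fingerprint(media_bytes: bytes) -> str:
        """Return a hex digest identifying this exact file."""
        return hashlib.sha256(media_bytes).hexdigest()

    def should_block(media_bytes: bytes) -> bool:
        """Check an upload against the shared database before it goes live."""
        return fingerprint(media_bytes) in shared_hash_database

    def share_violation(media_bytes: bytes, reason: str) -> None:
        """After human review confirms a Terms of Service violation, share
        the hash (never the media itself) so other platforms can block
        re-uploads of the same file."""
        shared_hash_database[fingerprint(media_bytes)] = reason

The key design property is that only fingerprints are exchanged, never the underlying images, so companies can cooperate on blocking without redistributing the material itself.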

Nick Pickles, Head of Public Policy and Government for UK & Israel at Twitter, explained Twitter’s approach:

"Systems designed and built to protect the platform from spam (for example people setting up multiple accounts to flood the system with the same content) are being harnessed to identify and block content that violates terms of service but Twitter does not rely on technology alone." Transparency Reports allow the business to share information on how many ‘terrorist accounts’ have been removed.

Ankur Vora, Public Policy Analyst at Google, spoke about Google’s approach, specifically in relation to YouTube.

"YouTube works hard to foster Freedom Of Information, Opportunity, Speech and Belonging and that there was no place on the platform for terrorist content."

A four-step approach includes:

  1. Using technology, including machine learning, to identify problematic content.
  2. Developing partnerships with experts, “the trusted finders”, who can understand the nuance of content.
  3. Understanding ‘borderline content’ that doesn’t fully violate policy but sits in a grey area.
  4. Using social media as a beacon for discussion and to speak about positive communities, aiming to create change and amplify positive voices online. Additionally, YouTube works to redirect visitors using identified search terms and to provide counter-narrative playlists (a rough sketch of this redirect idea follows this list).
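To illustrate the redirect idea in the fourth step, here is a hypothetical Python sketch: a search query is checked against a watch-list of terms, and a match returns a counter-narrative playlist to surface alongside (or instead of) ordinary results. The terms, playlist identifiers and function are assumptions for illustration, not YouTube’s actual implementation.

    # Hypothetical sketch of the redirect idea: map flagged search terms
    # to counter-narrative playlists. Terms and playlist IDs are invented.
    COUNTER_NARRATIVE_PLAYLISTS = {
        "example flagged term": "playlists/counter-narrative-01",
    }

    def redirect_target(query: str):
        """Return a counter-narrative playlist ID if the query matches a
        flagged term, otherwise None (serve normal results)."""
        normalized = query.strip().lower()
        for term, playlist in COUNTER_NARRATIVE_PLAYLISTS.items():
            if term in normalized:
                return playlist
        return None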

Mariusz Zurawek from JustPaste.it explained that, as a small business, it is hard to respond to law enforcement requests, which can arrive in many different formats.

As an entrepreneur he founded and funded his business himself, designing it for ease of use. He never expected that it might be used for ill purposes, or that ‘he’d be sat in a forum discussing take-downs with Facebook, Twitter and Google’. Mariusz has become somewhat famous for his openness about the issue, and his candour means the industry is more aware of the challenges small and micro businesses face in dealing with this complex issue.

He summed it up as:

  1. It’s difficult to know what is legal and how to respond (especially when faced with take-down requests in different formats and languages); and
  2. Small companies need know-how, legal advice, and tech help.

David Scharia, Director and Chief of Branch at the UN Security Council’s Counter-Terrorism Committee Executive Directorate (CTED), then explained the UN’s role.

The UN has oversight of the impacts of terrorist use of the internet, and recognises that the issue is not limited to content and propaganda. Technology is used for radicalisation, training, planning and committing attacks. Any terrorist group will use social media platforms alongside any other tool it can to achieve its goal. The issue is serious, and cyber and the IoT (Internet of Things) are the new attack vectors. As the threat evolves, so too does the response; this is a fast-moving space. It is not just governments or secret services that are affected: ‘all of society must be engaged’. David highlighted and reiterated the need for freedoms to be protected, whilst supporting multiple parties in addressing the issues.

Indeed, whilst the public may have seen negative press about tech companies on this issue in recent months, it is clear that they are now engaged. To sum up, the practical things that can be done to help all tech companies respond include:

  • Guidance on Terms of Service and when content does or doesn’t breach ToS
  • Take-downs - encouraging law enforcement agencies to use a standard format
  • Government Requests for Information - what does it look like and how do you deal with it?
  • Clarity / a single source on International Laws 
  • Transparent Reporting 

Lots of people who share content are not terrorists, and human processes and tech systems alike must protect freedom of expression. Beyond this issue, more needs to be done to understand when radicalisation turns into action, and to help people before they engage with terrorist groups. For tech companies to be effective, all parties must work towards international norms around defending freedom of expression.

The debate happening in the open is a good thing; previously the conversation had taken place behind closed doors, a factor that may have contributed to the hostile media response and the lack of common ground between governments and tech companies. The fact that large companies are prepared to work with other types of business and organisation is already fostering collaboration, and it is heartening to see suggestions from small businesses being taken forward in conjunction with the capabilities of large tech businesses.

The Tech Against Terrorism project, launched earlier this year, is supporting dialogue and action; if you're interested in getting involved, please visit their website.