New Zealand telcos pen scathing letter to tech CEOs over terror attack virality

New Zealand’s biggest telecoms providers have penned a scathing letter to several tech CEOs over their failure to prevent a video of a terror attack from going viral.

A far-right terror attack targeting the Muslim community in New Zealand left 50 dead and many more injured. The perpetrator live-streamed part of the attack on Facebook.

Copies of the video spread like wildfire across social media platforms, and the Silicon Valley giants have failed to explain why they were unable to stop it.

Here is the letter jointly penned by Spark, Vodafone NZ, and 2degrees:

Mark Zuckerberg, Chairman and CEO, Facebook

Jack Dorsey, CEO, Twitter

Sundar Pichai, CEO, Google  

You may be aware that on the afternoon of Friday 15 March, three of New Zealand’s largest broadband providers, Vodafone NZ, Spark and 2degrees, took the unprecedented step to jointly identify and suspend access to web sites that were hosting video footage taken by the gunman related to the horrific terrorism incident in Christchurch. 

As key industry players, we believed this extraordinary step was the right thing to do in such extreme and tragic circumstances. Other New Zealand broadband providers have also taken steps to restrict availability of this content, although they may be taking a different approach technically.

We also accept it is impossible as internet service providers to prevent completely access to this material. But hopefully we have made it more difficult for this content to be viewed and shared - reducing the risk our customers may inadvertently be exposed to it and limiting the publicity the gunman was clearly seeking. 

We acknowledge that in some circumstances access to legitimate content may have been prevented, and that this raises questions about censorship. For that we apologise to our customers. This is all the more reason why an urgent and broader discussion is required. 

Internet service providers are the ambulance at the bottom of the cliff, with blunt tools involving the blocking of sites after the fact. The greatest challenge is how to prevent this sort of material being uploaded and shared on social media platforms and forums.

We call on Facebook, Twitter and Google, whose platforms carry so much content, to be a part of an urgent discussion at an industry and New Zealand Government level on an enduring solution to this issue.

We appreciate this is a global issue, however the discussion must start somewhere. We must find the right balance between internet freedom and the need to protect New Zealanders, especially the young and vulnerable, from harmful content. Social media companies and hosting platforms that enable the sharing of user generated content with the public have a legal duty of care to protect their users and wider society by preventing the uploading and sharing of content such as this video. 

Although we recognise the speed with which social network companies sought to remove Friday’s video once they were made aware of it, this was still a response to material that was rapidly spreading globally and should never have been made available online. We believe society has the right to expect companies such as yours to take more responsibility for the content on their platforms.

Content sharing platforms have a duty of care to proactively monitor for harmful content, act expeditiously to remove content which is flagged to them as illegal and ensure that such material – once identified – cannot be re-uploaded. 

Technology can be a powerful force for good. The very same platforms that were used to share the video were also used to mobilise outpourings of support. But more needs to be done to prevent horrific content being uploaded. Already there are AI techniques that we believe can be used to identify content such as this video, in the same way that copyright infringements can be identified. These must be prioritised as a matter of urgency.

For the most serious types of content, such as terrorist content, more onerous requirements should apply, such as proposed in Europe, including take down within a specified period, proactive measures and fines for failure to do so. Consumers have the right to be protected whether using services funded by money or data.

Now is the time for this conversation to be had, and we call on all of you to join us at the table and be part of the solution. 

The letter acknowledges the huge task faced by the platforms. Facebook claims it removed 1.5 million videos in the first 24 hours after the attack (1.2 million before they were seen by users).

AI has played a part in detecting and removing such content, but YouTube noted its software failed to work as expected. A team of YouTube executives worked through the night to remove tens of thousands of videos that were uploaded as quickly as one per second in the hours following the massacre.

YouTube’s engineers “hashed” the video so that any identical copies uploaded would be automatically deleted. However, the many edited versions no longer matched the original and slipped past the algorithm.
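YouTube has not published the details of its matching system, but the limitation described above can be illustrated with a minimal sketch using an exact cryptographic hash (real platforms use more robust perceptual or segment-level fingerprints for this reason). Any edit, even a re-encode or a trimmed frame, changes the file's bytes and therefore its hash, so an exact-match blocklist no longer recognises the copy; the filenames and byte strings here are purely illustrative:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of a file's raw bytes (SHA-256)."""
    return hashlib.sha256(data).hexdigest()

# Illustrative stand-ins for video files (hypothetical data)
original = b"original video bytes"
exact_copy = b"original video bytes"          # byte-identical re-upload
edited = b"original video bytes, trimmed"     # any edit alters the bytes

blocklist = {fingerprint(original)}

print(fingerprint(exact_copy) in blocklist)   # exact re-upload is caught
print(fingerprint(edited) in blocklist)       # edited copy slips through
```

This is why edited versions defeated the automated takedown: an exact hash treats a one-byte difference as a completely different file, whereas a perceptual fingerprint is designed to tolerate such changes.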

There's no clear solution to the problem, but more effort needs to be made to find one. Such a horrific video should not have been able to spread as it did.

