
Create tools to block terrorist content: UK to tech firms

In the latest chapter of the saga of states’ crackdowns on terrorist activity, UK Home Secretary Amber Rudd is reported to be holding talks with several major Internet companies today, with the objective of urging them to do more to tackle the spread of extremist content online. Companies attending include Google, Microsoft, Twitter and Facebook, along with some smaller Internet companies.

Following the terror attack in London last week, Rudd promised that the UK government will shortly publish an updated counterterrorism strategy, one that will prioritize doing more to tackle radicalisation online. She wrote in The Telegraph:

Of paramount importance in this strategy will be how we tackle radicalisation online, and provide a counter-narrative to the vile material being spewed out by the likes of Daesh, and extreme Right-wing groups such as National Action, which I made illegal last year. Each attack confirms again the role that the internet is playing in serving as a conduit, inciting and inspiring violence, and spreading extremist ideology of all kinds.

A significant aspect of this strategy seems to be depending on tech firms to build tools.

According to a government source, Rudd will call on web companies today to use technical solutions to automatically identify terrorist content before it reaches a wider audience. It is also expected that Rudd will bring up the issue of encryption, covered in yesterday’s article, to argue that law enforcement agencies must be able to “get into situations like encrypted WhatsApp”.

In her Telegraph piece, Rudd also argued for the government and Internet companies coming together to fight terrorism. She said:

We need the help of social media companies, the Googles, the Twitters, the Facebooks of this world. And the smaller ones, too: platforms such as Telegram, WordPress and Justpaste.it. We need them to take a more proactive and leading role in tackling the terrorist abuse of their platforms. We need them to develop further technology solutions. We need them to set up an industry-wide forum to address the global threat.

Notably, in a biannual Transparency Report published last week, Twitter revealed that it had suspended as many as 636,248 accounts between August 1, 2015 and December 31, 2016 for violations linked to the promotion of terrorism, with the majority of those accounts (74 percent) being flushed out by its own “internal, proprietary spam-fighting tools” rather than via user reports.

The issue of online terrorist content was also discussed back in February by Facebook CEO Mark Zuckerberg, who expressed his hope that AI will play a larger role in tackling this challenge in the future, while warning that “it will take many years to fully develop these systems”. He said:

Right now, we’re starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization. This is technically difficult as it requires building AI that can read and understand news, but we need to work on this to help fight terrorism worldwide.

According to a report in The Telegraph last week, the UK government is also considering a new law to prosecute Internet companies if terrorist content is not taken down immediately after it is reported. However, many ministers have questioned how such a law could be enforced against companies headquartered overseas, as most of the Internet companies in question are.

Yesterday, digital and human rights groups including Privacy International, the Open Rights Group, Liberty and Human Rights Watch issued a statement calling on the UK government to be “transparent” and “open” about the discussions it is conducting with Internet companies. They wrote:

Private, informal agreements are not consistent with open, democratic governance. Government requests directed to tech companies to take down content is de facto state censorship. Some requests may be entirely legitimate but the sheer volumes make us highly concerned about their validity and the accountability of the processes.

The group also criticized Rudd for failing to publicly reference existing powers at the government’s disposal, expressing concern that any “technological limitations to encryption” they seek could have damaging implications for citizens’ “personal security”. They further wrote:

We also note that Ms Rudd may seek to use Technical Capability Notices (TCNs) to enforce changes [to encryption]; and these would require secrecy. We are therefore surprised that public comments by Ms Rudd have not referenced her existing powers.

We do not believe that the TCN process is robust enough in any case, nor that it should be applied to non-UK providers, and are concerned about the precedent that may be set by companies complying with a government over requests like these.

The Home Office did not respond when asked to comment on the group’s open letter, nor did it shed light on today’s discussions with Internet companies, with a government source saying that the meeting is private.
