San Francisco: Microsoft President Brad Smith has asked social media platforms to learn from and act on the March 15 Christchurch shootings, which were livestreamed on Facebook.
In a blog post on Monday, Smith said words alone were not enough.
“Across the tech sector, we need to do more. Especially for those of us who operate social networks or digital communications tools or platforms that were used to amplify the violence, it’s clear that we need to learn from and take new action based on what happened in Christchurch,” he said.
Australian Brenton Tarrant, a 28-year-old self-proclaimed white supremacist, has been charged with one count of murder in connection with the attacks at the two mosques that killed 50 people, and he is expected to face further charges.
Smith said that across Microsoft, the company had identified improvements it could make and was moving promptly to implement them.
“This includes the accelerated and broadened implementation of existing technology tools to identify and classify extremist violent content and changes to the process that enables our users to flag such content,” he posted.
Smith emphasised developing an industry-wide approach that would be principled, comprehensive and effective.
Over two years ago, four companies — YouTube, Facebook, Twitter and Microsoft — came together to create the Global Internet Forum to Counter Terrorism (GIFCT).
Among other things, the group’s members have created a shared hash database of terrorist content and developed photo and video matching and text-based machine learning techniques to identify and thwart the spread of violence on their platforms.
Smith said these technologies were used more than a million times within 24 hours to stop the distribution of the Christchurch video.
“As (New Zealand) Prime Minister Jacinda Ardern noted last week, gone are the days when tech companies can think of their platforms as akin to a postal service without regard to the responsibilities embraced by other content publishers,” Smith said.
According to him, tech firms must also continue to improve newer, AI-based technologies that can detect whether brand-new content may contain violence.
“We should also pursue new steps beyond the posting of content. For example, we should explore browser-based solutions – building on ideas like safe search – to block the accessing of such content at the point when people attempt to view and download it,” he added.