Amazon.com Inc (AMZN.O) plans to take a more proactive approach to determine what types of content violate its cloud service policies, such as rules against promoting violence, and enforce its removal, according to two sources, a move likely to renew debate about how much power tech companies should have to restrict free speech.
Over the coming months, Amazon will hire a small group of people in its Amazon Web Services (AWS) division to develop expertise and work with outside researchers to monitor for future threats, one of the sources familiar with the matter said.
It could turn Amazon, the leading cloud service provider worldwide with 40% market share according to research firm Gartner, into one of the world’s most powerful arbiters of content allowed on the internet, experts say.
Amazon made headlines last week when the Washington Post reported that the company had shut down a website hosted on AWS featuring Islamic State propaganda that celebrated the suicide bombing that killed an estimated 170 Afghans and 13 U.S. troops in Kabul last Thursday. Amazon acted after the news organization contacted it, according to the Post.
The proactive approach to content comes after Amazon kicked social media app Parler off its cloud service shortly after the Jan. 6 Capitol riot for permitting content promoting violence.
“AWS Trust & Safety works to protect AWS customers, partners, and internet users from bad actors attempting to use our services for abusive or illegal purposes,” an AWS spokesperson said in a statement. “When AWS Trust & Safety is made aware of abusive or illegal behavior, they act quickly to investigate and engage with customers to take appropriate actions. As AWS continues to expand, this team (like most teams in AWS) will continue to grow.”
Activists and human rights groups are increasingly holding not just websites and apps accountable for harmful content, but also the underlying tech infrastructure that enables those sites to operate, while political conservatives decry the curtailing of free speech.
AWS already prohibits its services from being used in a variety of ways, such as for illegal or fraudulent activity, to incite or threaten violence, or to promote child sexual exploitation and abuse, according to its acceptable use policy.
Amazon first asks customers to remove content that violates its policies, or to put a system in place to moderate content. If Amazon cannot reach an acceptable agreement with the customer, it may take the website down.
Amazon aims to develop an approach toward content issues that it and other cloud providers are more frequently confronting, such as determining when misinformation on a company’s website reaches a scale that requires AWS action, the source said.
The new team within AWS does not plan to sift through the vast amounts of content that companies host on the cloud, but will aim to get ahead of future threats, such as emerging extremist groups whose content could make it onto the AWS cloud, the source added.
Amazon is currently hiring for a global head of policy on the AWS trust and safety team, which is responsible for “protecting AWS against a wide variety of abuse,” according to a job posting on its website.
AWS’s offerings include cloud storage and virtual servers, and the unit counts major companies such as Netflix (NFLX.O), Coca-Cola (KO.N) and Capital One (COF.N) as clients, according to its website.