Twitter to rely more on AI than staff to detect hate speech amid rising reports of racism on platform
Twitter is relying more on artificial intelligence to moderate content instead of banking on its staff to conduct manual checks as hate speech reportedly surged on the platform since Elon Musk’s takeover.
Last week the Centre for Countering Digital Hate (CCDH) reported that hate speech on Twitter has increased under Mr Musk’s ownership.
“From racial slurs tripling to a shocking increase in antisemitic and misogynistic tweets, Mr Musk’s Twitter has become a safe space for hate,” the non-governmental organisation tweeted on Friday, adding that climate-sceptic tweets have also risen since the multibillionaire’s takeover of the company.
Researchers at the Network Contagion Research Institute (NCRI) had earlier found that use of the N-word increased by nearly 500 per cent in the 12 hours immediately after Mr Musk’s deal to buy Twitter was finalised.
Research also suggested that slurs against gay men and antisemitic posts rose in the days following the Tesla chief’s buyout of Twitter.
Mr Musk, however, rejected these claims, calling them “utterly false”.
Amid these concerns, Twitter’s new head of trust and safety has reportedly said the company is now banking more on automation to moderate content.
The company’s vice president of trust and safety product, Ella Irwin, told Reuters that the platform is moving away from manual reviews by its staff and is favouring restrictions on some content rather than removing it outright.
“The biggest thing that’s changed is the team is fully empowered to move fast and be as aggressive as possible,” Ms Irwin said.
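As a rough illustration of that “restrict rather than remove” approach, the sketch below keeps borderline content up with reduced visibility and removes only zero-tolerance or high-confidence violations. The labels, thresholds and function names are assumptions made for illustration and do not describe Twitter’s actual systems.

```python
# Hypothetical sketch of "restrict rather than remove" moderation.
# Labels, thresholds and names are illustrative assumptions, not Twitter's systems.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    ALLOW = auto()
    RESTRICT = auto()   # keep the tweet up but limit its reach and visibility
    REMOVE = auto()     # take the tweet down entirely


@dataclass
class Classification:
    label: str          # e.g. "hate_speech", "adult_spam", "csam"
    score: float        # model confidence, 0.0 to 1.0


def decide(c: Classification) -> Action:
    """Favour visibility restrictions over outright removal for borderline content."""
    if c.label == "csam":       # zero-tolerance categories are always removed
        return Action.REMOVE
    if c.score >= 0.95:         # high-confidence violations are removed
        return Action.REMOVE
    if c.score >= 0.60:         # mid-confidence content is restricted, not deleted
        return Action.RESTRICT
    return Action.ALLOW


if __name__ == "__main__":
    print(decide(Classification("hate_speech", 0.72)))  # Action.RESTRICT
```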
The latest news comes as Twitter struggles to moderate content on its platform following layoffs last month in which the company slashed its workforce from 7,500 to roughly 2,000.
Reports suggest that the company’s human rights and machine learning ethics teams, as well as outsourced contractors working on platform safety, were cut to a handful of people or eliminated entirely.
A key team on Twitter dedicated to removing child sexual abuse material across Japan and the Asia-Pacific region was also left with only one person following the layoffs, according to Wired.
The team’s reduction in size runs contrary to Mr Musk’s earlier assertion that removing such content was his “Priority 1” after taking over the company.
Last weekend, the microblogging platform was flooded for hours with adult spam content, which researchers said was an attempt to obscure news about widespread protests across China.
Analysts pointed out that the spam bot attack was an attempt to stop people from finding updates on protests against China’s strict ‘zero Covid’ lockdown policy.
“This is a known problem that our team was dealing with manually, aside from automations we put in place,” a former Twitter staff member told The Washington Post on condition of anonymity.
On Thursday, Ms Irwin said Twitter would now use automation to “aggressively” restrict abuse-prone hashtags as well as search results in areas such as child exploitation.
She added that the platform would now automatically take down tweets reported by trusted figures who have a track record of correctly flagging such content.
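A minimal sketch of how such a trusted-flagger rule could work is shown below: reports from accounts with a long and accurate flagging history trigger automatic takedown, while everything else is queued for review. The thresholds, field names and example handle are hypothetical assumptions, not details confirmed by Twitter.

```python
# Hypothetical "trusted flagger" rule: reporters with a strong track record of
# correct flags trigger automatic removal; other reports go to manual review.
from dataclasses import dataclass


@dataclass
class Reporter:
    handle: str
    flags_submitted: int
    flags_upheld: int     # reports later confirmed as correct

    @property
    def accuracy(self) -> float:
        return self.flags_upheld / self.flags_submitted if self.flags_submitted else 0.0


def handle_report(reporter: Reporter, tweet_id: str,
                  min_flags: int = 50, min_accuracy: float = 0.9) -> str:
    """Auto-remove only when the reporter has a long, accurate flagging history."""
    if reporter.flags_submitted >= min_flags and reporter.accuracy >= min_accuracy:
        return f"tweet {tweet_id}: removed automatically"
    return f"tweet {tweet_id}: queued for manual review"


if __name__ == "__main__":
    ngo = Reporter("@safety_ngo", flags_submitted=400, flags_upheld=388)
    print(handle_report(ngo, "1598765432109876543"))  # removed automatically
```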