Twitter said Wednesday it was launching an initiative on “responsible AI” that will include audits of algorithmic fairness on the social media platform.
The California-based messaging service said the plan aims to offer more transparency into its artificial intelligence and tackle “the potential harmful effects of algorithmic decisions.”
The move comes amid heightened concerns over algorithms used by online services, which some say can promote violence or extremist content or reinforce racial or gender bias.
“Responsible technological use includes studying the effects it can have over time,” said a blog post by Jutta Williams and Rumman Chowdhury of Twitter’s ethics and transparency team.
“When Twitter uses (machine learning), it can impact hundreds of millions of tweets per day, and sometimes the way a system was designed to help could start to behave differently than was intended.”
The initiative calls for “taking responsibility for our algorithmic decisions” with the aim of “equity and fairness of outcomes,” according to the researchers.
“We’re also building explainable ML solutions so you can better understand our algorithms, what informs them, and how they impact what you see on Twitter.”
Williams and Chowdhury said the team would share what it learns with outside researchers “to improve the industry’s collective understanding of this topic, help us improve our approach, and hold us accountable.”
The Twitter move follows upheaval within Google’s AI ethics team that resulted in the dismissal of two top researchers and the resignation of a high-ranking scientist.