Twitter released the findings of an internal study Thursday that reveal the service's timeline algorithm amplifies right-leaning news content and tweets from right-wing officials.
Twitter's Rumman Chowdhury, director of software engineering, and Luca Belli, a staff machine learning researcher, wrote a blog post analyzing the study. In it, they noted the research was originally conducted to determine whether Twitter's recommendation algorithms amplified political content at all.
Right-wing pundits, politicians and social media users have long cited a liberal bias in social media companies, with many accusing services of setting up algorithms that are designed to stifle their voices. Twitter's findings suggest the opposite may actually be the case on its platform.

Examining the period of April 1, 2020, through August 15, 2020, the researchers tracked millions of tweets from elected officials in seven countries: the U.S., the U.K., Canada, France, Germany, Japan and Spain. Over the same period, the company also analyzed hundreds of millions of tweets containing links to articles shared by people on Twitter.
The study used third-party sources to categorize politicians and news sources along political lines.
"In six out of seven countries—all but Germany—tweets posted by accounts from the political right receive more algorithmic amplification than the political left when studied as a group," Chowdhury and Belli wrote in their summary.
The duo also said tweets from right-leaning news outlets "see greater algorithmic amplification on Twitter compared to left-leaning news outlets."
In the U.S., Fox News and the New York Post saw more amplification on Twitter than other outlets did.
In the U.S., tweets from Republican senators and House members were amplified more than those from their Democratic counterparts, a pattern that held in most of the countries studied. The disparity was largest in the U.K., where Conservative politicians' tweets were amplified more than those of other parties.
The research does not attempt to examine why the platform favors one political side over the other.
"Establishing why these observed patterns occur is a significantly more difficult question to answer as it is a product of the interactions between people and the platform," Chowdhury and Belli wrote.
They said that the next step will be taken by the ML Ethics, Transparency and Accountability (META) team at Twitter, which will attempt to identify and "mitigate any inequity that may occur."
Twitter first offered the algorithmically ranked timeline in 2016, alongside the existing reverse-chronological one. Users can still choose which of the two they use to view their timelines.
Following user complaints, Twitter acknowledged in May that its automatic image-cropping algorithm repeatedly cropped out Black faces in favor of light-skinned ones and favored men over women. The company said it has since attempted to address those issues.
Newsweek reached out to Twitter for further comment but did not hear back before publication.