Twitter Denies Racial Bias After Algorithm Crops Black Man From Photo, Admits 'Potential for Harm'

Twitter says it has found no evidence of "racial or gender bias" in its image-cropping algorithm, after a user discovered that it repeatedly cropped a Black man out of an image he posted to the site.

A series of tweets from student Colin Madland went viral last month, after he described how Zoom's face-detection algorithm repeatedly failed to detect the face of his Black colleague.

Madland, a white man, also reported that when a picture of the two men was posted on Twitter, the platform's preview repeatedly cropped out his Black colleague.

Based on some experiments I tried, I think @colinmadland's facial hair is affecting the model because of the contrast with his skin. I removed his facial hair and the Black man shows in the preview for me. Our team did test for racial bias before shipping the model.

— Dantley Davis (@dantley) September 19, 2020

Twitter acknowledged the controversy, and said it had been "reviewing the way we test for bias in our systems and discussing ways we can improve how we display images on Twitter," and was "conducting additional analysis."

In a blog post, the company's chief design officer Dantley Davis explained how the algorithm works, and how it was tested for bias.

"The image cropping system relies on saliency, which predicts where people might look first. For our initial bias analysis, we tested pairwise preference between two demographic groups (White-Black, White-Indian, White-Asian and male-female)," Davis wrote.

"In each trial, we combined two faces into the same image, with their order randomized, then computed the saliency map over the combined image. Then, we located the maximum of the saliency map, and recorded which demographic category it landed on.

"We repeated this 200 times for each pair of demographic categories and evaluated the frequency of preferring one over the other."
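The pairwise test Davis describes can be sketched in code. This is a minimal illustration, not Twitter's implementation: the saliency model below is a stand-in (raw pixel intensity), and all function names are invented for the example. The structure follows the quoted procedure — combine two faces in random order, take the maximum of the saliency map, record which face it lands on, and repeat to estimate a preference frequency.

```python
import random
import numpy as np

def pairwise_saliency_trial(saliency_fn, face_a, face_b, rng):
    """One trial: place two faces side by side in random order, compute
    the saliency map, and report which face the global maximum lands on."""
    a_on_left = rng.random() < 0.5
    left, right = (face_a, face_b) if a_on_left else (face_b, face_a)
    combined = np.concatenate([left, right], axis=1)
    sal = saliency_fn(combined)
    # Locate the maximum of the saliency map and check which half it is in.
    _, col = np.unravel_index(np.argmax(sal), sal.shape)
    picked_left = col < left.shape[1]
    return "A" if picked_left == a_on_left else "B"

def preference_frequency(saliency_fn, faces_a, faces_b, trials=200, seed=0):
    """Repeat the trial many times and report how often each group 'wins'."""
    rng = random.Random(seed)
    wins = {"A": 0, "B": 0}
    for _ in range(trials):
        fa = rng.choice(faces_a)
        fb = rng.choice(faces_b)
        wins[pairwise_saliency_trial(saliency_fn, fa, fb, rng)] += 1
    return {group: count / trials for group, count in wins.items()}

# Stand-in saliency model for the sketch: treats brighter pixels as
# more salient. A real system would use a trained saliency predictor.
def dummy_saliency(img):
    return img.astype(float)
```

With a stand-in model this skewed, the test would flag a strong preference immediately; the point of Twitter's 200-trial design is to detect subtler, systematic differences between demographic pairs.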

Though the company has yet to find racial or gender bias in the algorithm, it has admitted that "the way we automatically crop photos means there is a potential for harm."

It has also committed to sharing the findings from its analysis.

"We are prioritizing work to decrease our reliance on [machine learning]-based image cropping by giving people more visibility and control over what their images will look like in a Tweet ... We hope that giving people more choices for image cropping and previewing what they'll look like in the Tweet composer may help reduce the risk of harm," the blog post continues.

In this photo illustration, a Twitter logo is displayed on a mobile phone on May 27, 2020, in Arlington, Virginia. The micro-blogging site says it has found no evidence of "racial or gender bias" in its image-cropping algorithm, but is continuing to investigate. Olivier Douliery/AFP via Getty Images

"Going forward, we are committed to following the 'what you see is what you get' principles of design, meaning quite simply: the photo you see in the Tweet composer is what it will look like in the Tweet.

"There may be some exceptions to this, such as photos that aren't a standard size or are really long or wide. In those cases, we'll need to experiment with how we present the photo in a way that doesn't lose the creator's intended focal point or take away from the integrity of the photo."

Davis has also conducted his own experiments into the claims of bias, and has raised the possibility that the algorithm may have favored Madland over his colleague because of Madland's facial hair.

It's 100% our fault. No one should say otherwise. Now the next step is fixing it.

— Dantley Davis (@dantley) September 19, 2020