Algorithms are biased—and Facebook’s is no exception.
Just last week, the tech giant was sued by the US Department of Housing and Urban Development over the way it let advertisers purposely target their ads by race, gender, and religion—all protected classes under US law. The company announced that it would stop allowing this.
But new evidence shows that Facebook’s algorithm, which automatically decides who is shown an ad, carries out the same discrimination anyway, serving up ads to over two billion users on the basis of their demographic information.
A team led by Muhammad Ali and Piotr Sapiezynski at Northeastern University ran a series of otherwise identical ads with slight variations in available budget, headline, text, or image. They found that those subtle tweaks had significant impacts on the audience reached by each ad—most notably when the ads were for jobs or real estate. Postings for preschool teachers and secretaries, for example, were shown to a higher fraction of women, while postings for janitors and taxi drivers were shown to a higher proportion of minorities. Ads about homes for sale were also shown to more white users, while ads for rentals were shown to more minorities.
“We’ve made important changes to our ad-targeting tools and know that this is only a first step,” a Facebook spokesperson said in a statement in response to the findings. “We’ve been looking at our ad-delivery system and have engaged industry leaders, academics, and civil rights experts on this very topic—and we’re exploring more changes.”
In some ways, this shouldn’t be surprising—bias in recommendation algorithms has been a known issue for many years. In 2013, for example, Latanya Sweeney, a professor of government and technology at Harvard, published a paper that showed the implicit racial discrimination of Google’s ad-serving algorithm. The issue goes back to how these algorithms fundamentally work. All of them are based on machine learning, which finds patterns in massive amounts of data and reapplies them to make decisions. There are many ways that bias can trickle in during this process, but the two most apparent in Facebook’s case relate to issues during problem framing and data collection.
Bias occurs during problem framing when the objective of a machine-learning model is misaligned with the need to avoid discrimination. Facebook’s advertising tool allows advertisers to select from three optimization objectives: the number of views an ad gets, the number of clicks and amount of engagement it receives, and the quantity of sales it generates. But those business goals have nothing to do with, say, maintaining equal access to housing. As a result, if the algorithm discovered that it could earn more engagement by showing ads for homes for sale to more white users, it would end up discriminating against black users.
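The problem-framing issue can be boiled down to a toy sketch (this is an illustration of pure engagement optimization in general, not Facebook's actual system; the group names and click-through rates are invented): when the only objective is predicted engagement, the optimizer will concentrate delivery on whichever group engaged more historically, with no term in the objective that penalizes uneven access.

```python
# Toy illustration of problem framing: an ad-delivery policy whose
# only objective is engagement. Group names and click-through rates
# (CTRs) are invented for the example.

# Hypothetical historical CTRs per demographic group for a housing ad.
historical_ctr = {"group_a": 0.031, "group_b": 0.024}

def choose_audience(ctr_by_group):
    """Pure engagement optimization: always deliver to the group with
    the highest predicted CTR. Nothing in this objective represents
    equal access, so the lower-CTR group is simply never shown the ad."""
    return max(ctr_by_group, key=ctr_by_group.get)

# Every impression goes to group_a, even though the advertiser never
# asked the system to exclude group_b.
print(choose_audience(historical_ctr))  # group_a
```

A fairness-aware framing would have to add equal access as an explicit term or constraint in the objective; left out of the problem statement, it is simply optimized away.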
Bias occurs during data collection when the training data reflects existing prejudices. Facebook’s advertising tool bases its optimization decisions on the historical preferences that people have demonstrated. If more minorities engaged with ads for rentals in the past, the machine-learning model will identify that pattern and reapply it in perpetuity. Once again, it will blindly plod down the road of employment and housing discrimination—without being explicitly told to do so.
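The data-collection problem has a self-reinforcing twist worth making concrete: because the model's own delivery decisions generate its next round of training data, a historical skew never gets a chance to be corrected. A minimal sketch, with invented groups and numbers (again, not Facebook's code):

```python
# Toy feedback loop: a model that bases delivery on historical
# engagement, where the delivery itself produces the new engagement
# data. Groups and CTR values are invented for illustration.

def update_ctr_estimate(prior_ctr, impressions, clicks, prior_weight=100):
    """Blend a prior CTR estimate with newly observed engagement."""
    return (prior_ctr * prior_weight + clicks) / (prior_weight + impressions)

# A modest historical skew: minorities clicked rental ads slightly
# more often in the past.
ctr = {"minority": 0.030, "white": 0.025}

for _ in range(5):
    # The optimizer shows the rental ad only to the higher-CTR group...
    target = max(ctr, key=ctr.get)
    # ...so that group supplies all 1,000 new impressions and their
    # expected clicks, while the other group's estimate never updates.
    ctr[target] = update_ctr_estimate(ctr[target], 1000, 1000 * ctr[target])

# The initial gap survives every round: the model reapplies the
# historical pattern in perpetuity without being told to.
```

The estimates confirm themselves: the favored group's observed engagement matches its prior, and the disfavored group gets no impressions with which to revise its score, so the skew persists indefinitely.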
While these behaviors in machine learning have been studied for quite some time, the new study does offer a more direct look into the sheer scope of their impact on people’s access to housing and employment opportunities. “These findings are explosive!” Christian Sandvig, the director of the Center for Ethics, Society, and Computing at the University of Michigan, told The Economist. “The paper is telling us that […] big data, used in this way, can never give us a better world. In fact, it is likely these systems are making the world worse by accelerating the problems in the world that make things unjust.”
The good news is there might be ways to address this problem, but it won’t be easy. Many AI researchers are now pursuing technical fixes for machine-learning bias that could create fairer models of online advertising. A recent paper out of Yale University and the Indian Institute of Technology, for example, suggests that it may be possible to constrain algorithms to minimize discriminatory behavior, albeit at a small cost to ad revenue. But policymakers will need to play a greater role if platforms are to start investing in such fixes—especially if it might affect their bottom line.
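The general shape of such a fix can be sketched in a few lines (a hedged illustration of constrained ad delivery in general, not the Yale/IIT paper's actual method; the groups, rates, and the 40% floor are invented): guarantee each group a minimum share of impressions, then maximize engagement within that constraint.

```python
# Illustrative constrained delivery: each group is guaranteed a
# minimum impression share, and only the remainder is allocated to
# the highest-engagement group. All numbers are invented.

def constrained_split(ctr_by_group, min_share=0.4):
    """Give every group at least `min_share` of impressions, then
    assign the leftover share to the group with the highest CTR."""
    shares = {g: min_share for g in ctr_by_group}
    remainder = 1.0 - min_share * len(ctr_by_group)
    best = max(ctr_by_group, key=ctr_by_group.get)
    shares[best] += remainder
    return shares

def expected_ctr(shares, ctr_by_group):
    """Overall engagement rate implied by an impression split."""
    return sum(shares[g] * ctr_by_group[g] for g in shares)

ctr = {"group_a": 0.031, "group_b": 0.024}
fair = constrained_split(ctr, min_share=0.4)   # 60% / 40% split
skewed = {"group_a": 1.0, "group_b": 0.0}      # unconstrained optimum
```

In this toy example the constrained split yields an expected engagement rate of about 0.0282 versus 0.031 for the fully skewed allocation, which is the paper's broader point: the fairness constraint costs some revenue, but only a sliver, not all of it.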
This originally appeared in our AI newsletter The Algorithm. To have it delivered directly to your inbox, sign up here for free.