
One Facebook Ad Promotes a For-Profit College; Another a State School. Which Ad Do Black Users See?

Facebook’s advertising algorithm disproportionately targets Black users with ads for for-profit colleges, according to a new paper by a team of university researchers.

Like all major social media platforms, Meta, which owns Facebook and Instagram, does not disclose exactly how or why its billions of users see certain posts and not others, including ads. In order to put Facebook’s black-box advertising system to the test, academics from Princeton and the University of Southern California purchased Facebook ads and tracked their performance among real Facebook users, a method they say produced “evidence of racial discrimination in Meta’s algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.”

The researchers say they focused on for-profit colleges because of their long, demonstrable history of deceiving prospective students — particularly students of color — with predatory marketing while delivering lackluster educational outcomes and diminished job prospects compared to other colleges.

In a series of test marketing campaigns, the researchers purchased sets of two ads paired together: one promoting a public institution, like Colorado State University, and another marketing a for-profit company, like Strayer University. (Neither the for-profit colleges nor the state schools advertised by the researchers were involved in the project.)

Advertisers on Facebook can fine-tune their campaigns with a variety of targeting options, but race is no longer one of them. So the researchers found a clever proxy. Using North Carolina voter registration data that includes individuals’ races, they built a sample audience that was 50 percent white and 50 percent Black, with the Black voters drawn from one region of the state and the white voters from another. Using Facebook’s “custom audiences” feature, they uploaded this roster of specific individuals to target with ads. Though Facebook’s ad performance metrics wouldn’t reveal the race of users who saw each ad, the data showed where each ad was viewed. “Whenever our ad is shown in Raleigh, we can infer it was shown to a Black person and, when it is shown in Charlotte — we can infer it was shown to a White person,” the paper explains.

Theoretically, an unbiased algorithm would serve each school’s ad to an equal number of Black and white users. The experiment was designed to detect whether there was a statistically significant skew in who ultimately saw which ads.
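The logic of that check is straightforward. Below is a minimal sketch in Python of how regional impression counts from Facebook’s delivery reports can be converted into an inferred racial breakdown and tested for skew. The impression numbers are hypothetical, and the chi-squared test shown here is a standard choice for a 2x2 comparison like this, not necessarily the paper’s exact statistical procedure.

```python
# Minimal sketch (hypothetical numbers): infer viewer race from the region
# where each impression was delivered, then test whether the two paired ads
# were shown to Black and white users at significantly different rates.
from scipy.stats import chi2_contingency

# Hypothetical impression counts per (ad, region) cell. By construction of
# the custom audience, Raleigh impressions imply Black viewers and
# Charlotte impressions imply white viewers.
impressions = {
    "for_profit":   {"raleigh": 620, "charlotte": 380},
    "state_school": {"raleigh": 410, "charlotte": 590},
}

# 2x2 contingency table: rows = ads, columns = inferred race of viewer.
table = [
    [impressions["for_profit"]["raleigh"], impressions["for_profit"]["charlotte"]],
    [impressions["state_school"]["raleigh"], impressions["state_school"]["charlotte"]],
]

for ad, counts in impressions.items():
    share = counts["raleigh"] / (counts["raleigh"] + counts["charlotte"])
    print(f"{ad}: {share:.1%} of impressions went to inferred Black users")

# An unbiased delivery system should show both ads to Black and white users
# at the same rate; a small p-value indicates a statistically significant skew.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4g}")
```

With the hypothetical counts above, the for-profit ad reaches inferred Black users at a much higher rate than the state-school ad, and the test flags the difference as significant, which is the pattern the researchers looked for in their real campaign data.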

With each pair of ads, Facebook’s delivery algorithm showed a bias, the researchers found: it disproportionately showed Black users ads for colleges like DeVry and Grand Canyon University, for-profit schools that have been fined or sued by the Department of Education for advertising trickery, while steering more white users toward state colleges.

“Addressing fairness in ads is an industry-wide challenge and we’ve been collaborating with civil rights groups, academics, and regulators to advance fairness in our ads system,” Meta spokesperson Daniel Roberts told The Intercept. “Our advertising standards do not allow advertisers to run ads that discriminate against individuals or groups of individuals based on personal attributes such as race and we are actively building technology designed to make additional progress in this area.”

Even in cases where these for-profit programs have reformed their actual marketing efforts and “aim for racially balanced ad targeting,” the research team concluded, “Meta’s algorithms would recreate historical racial skew in who the ads are shown to, and would do so unbeknownst to the advertisers.”

Ever since a 2016 ProPublica report found that Facebook allowed advertisers to explicitly exclude users from advertising campaigns based on their race, the company’s advertising system has been subject to increased scrutiny and criticism. And while Facebook ultimately removed options that allowed marketers to target users by race, previous academic research has shown that the secret algorithm deciding who sees which ads is biased along race and gender lines, suggesting the bias is intrinsic to the company’s systems.

A 2019 research paper on this topic showed that ads for various job openings were algorithmically sorted along race and gender stereotypes, for instance lopsidedly showing Black users opportunities to drive taxi cabs, while openings for artificial intelligence developers were skewed in favor of white users. A 2021 follow-up paper found that Facebook ad delivery replicated real-world workplace gender imbalances, showing women ads for companies where women were already overrepresented.

While it withholds virtually all details about how its ad delivery algorithm functions, Facebook has long contended that its ads are shown merely to people most likely to find them relevant. In response to the 2021 research showing gender bias in the algorithm, a company spokesperson told The Intercept that while they understood the researchers’ concerns, “our system takes into account many signals to try and serve people ads they will be most interested in.”

Aleksandra Korolova, a professor of computer science and public affairs at Princeton and co-author of the 2019 and 2021 research, told The Intercept that she rejects the notion that apparent algorithmic bias can be explained away as only reflecting what people actually want, because it’s impossible to disprove. “It’s impossible to tell whether Meta’s algorithms indeed reflect a true preference of an individual, or are merely reproducing biases in historical data that the algorithms are trained on, or are optimizing for preferences as reflected in clicks rather than intended real-world actions.”

The onus to prove Facebook’s ad delivery is reflecting real-world preferences and not racist biases, she said, lies with Facebook.

But Korolova also noted that even if for-profit college ads are being disproportionately directed to Black Facebook users because of actual enrollment figures, a moral and social objection to such a system remains. “Society has judged that some advertising categories are so important that one should not let historical trends or preferences propagate into future actions,” she said. While various areas in the United States may have been majority-Black or white over the years, withholding ads for properties in “white neighborhoods” from Black buyers, for example, is illegal, historical trends notwithstanding.

Aside from the ethical considerations around disproportionately encouraging its Black users to enroll in for-profit colleges, the authors suggest Facebook may be creating legal liability for itself too. “Educational opportunities have legal protections that prohibit racial discrimination and may apply to ad platforms,” the paper cautions.

Korolova said that, in recent years, “Meta has made efforts to reduce bias in their ad delivery systems in the domains of housing, employment and credit — housing as part of their 2022 settlement with the Department of Justice, and employment and credit voluntarily, perhaps to preempt lawsuits based on the work that showed discrimination in employment ad delivery.”

But she added that despite years of digging into apparently entrenched algorithmic bias in the company’s products, “Meta has not engaged with us directly and does not seem to have extended their efforts for addressing ad delivery biases to a broader set of domains that relate to life opportunities and societally important topics.”


