By Matt O’Brien, AP Technology Writer
CAMBRIDGE, Massachusetts (AP) — After retreating from workplace diversity, equity and inclusion programs, tech companies are facing a second reckoning over their DEI work in AI products.
In the White House and the Republican-led Congress, “woke AI” has replaced harmful algorithmic discrimination as a problem that needs fixing. Past efforts to “advance equity” in AI development and to curb the production of “harmful and biased outputs” are a target of investigation, according to subpoenas sent last month by the House Judiciary Committee to Amazon, Google, Meta, Microsoft, OpenAI and 10 other tech companies.
The standard-setting branch of the U.S. Commerce Department has also deleted mentions of AI fairness, safety and “responsible AI” from its appeal for collaboration with outside researchers. Instead, it directs scientists to focus on “reducing ideological bias” in ways that will “enable human flourishing and economic competitiveness,” according to a copy of the document obtained by The Associated Press.
In some ways, tech workers are used to a whiplash of Washington-driven priorities affecting their work.
But the latest shift has raised concerns among experts in the field, including Harvard University sociologist Ellis Monk, whom Google approached several years ago to help make its AI products more inclusive.
At the time, the tech industry already knew it had a problem with the branch of AI that trains machines to “see” and understand images. Computer vision held great commercial promise but echoed the historical biases found in earlier camera technology that portrayed Black and brown people in an unflattering light.
“Black people or darker-skinned people would come into the picture and we’d look ridiculous sometimes,” said Monk, a scholar of colorism, a form of discrimination based on people’s skin tones and other features.
Google adopted the color scale invented by Monk, which improved how its AI image tools portray the diversity of human skin tones, replacing a decades-old standard originally designed for doctors treating white dermatology patients.
“Consumers definitely had a big positive response to the changes,” he said.
Now Monk wonders whether such efforts will continue in the future. He doesn’t believe his Monk Skin Tone Scale is threatened, since it’s already baked into dozens of products at Google and elsewhere, including camera phones, video games and AI image generators. But he and other researchers worry that the new mood will chill future initiatives and funding to make the technology work even better.
“Google wants their products to work for everybody, in India, China, Africa, et cetera. That part is kind of DEI-immune,” Monk said. “But could future funding for those kinds of projects be lowered? Absolutely, when the political mood shifts and when there’s a lot of pressure to get to market very quickly.”
While Trump has cut hundreds of science, technology and health funding grants touching on DEI themes, the impact on the commercial development of chatbots and other AI products is more indirect. In investigating AI companies, Rep. Jim Jordan, chair of the House Judiciary Committee, said he wants to find out whether former President Joe Biden’s administration “coerced or colluded” with them to censor lawful speech.
Michael Kratsios, director of the White House’s Office of Science and Technology Policy, said at a Texas event this month that Biden’s AI policies were “fostering social division and redistribution in the name of equity.”
The Trump administration declined to make Kratsios available for an interview but cited several examples of what he meant. One was a line from a Biden-era AI research strategy that said: “Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities.”
Even before Biden took office, a growing body of research and personal anecdotes was drawing attention to the harms of AI bias.
One study showed that self-driving car technology has a hard time detecting darker-skinned pedestrians, putting them at greater risk of being run over. Another study, which asked popular AI text-to-image generators to create a picture of a surgeon, found they produced a white man about 98% of the time, far higher than the real proportion even in a heavily male-dominated field.
Face-matching software for unlocking phones has misidentified Asian faces. Police in U.S. cities have wrongfully arrested Black men based on false facial recognition matches. And a decade ago, Google’s own photos app sorted a picture of two Black people into a category labeled “gorillas.”
Even government scientists in the first Trump administration concluded in 2019 that facial recognition technology performed unevenly based on race, gender or age.
Biden’s election propelled some tech companies to accelerate their focus on AI fairness. OpenAI’s 2022 release of ChatGPT added new priorities, sparking a commercial boom in AI applications for composing documents and generating images, and pressuring companies like Google to ease their caution and keep up.
Then came Google’s Gemini AI chatbot, and a flawed product rollout last year that made it the symbol of the “woke AI” conservatives hoped to unravel. Left to their own devices, AI tools that generate images from written prompts are prone to perpetuating the stereotypes accumulated in all the visual data they were trained on.
Google’s was no different: when asked to depict people in various professions, it was more likely to favor lighter-skinned faces and men, and, when women were chosen, younger women, according to the company’s own public research.
Google tried to put technical guardrails in place to reduce those disparities before rolling out Gemini’s AI image generator just over a year ago. It ended up overcompensating for bias, placing people of color and women in inaccurate historical settings, such as answering a request for images of America’s founding fathers with pictures of men in 18th-century attire who appeared to be Black, Asian and Native American. Google quickly apologized and temporarily pulled the feature, but the outrage became a rallying cry taken up by the political right.
With Google CEO Sundar Pichai sitting nearby, Vice President JD Vance used an AI summit in Paris in February to decry the advancement of “downright ahistorical social agendas through AI.”
“We have to remember the lessons from that ridiculous moment,” Vance declared at the gathering. “And what we take from it is that the Trump administration will ensure that AI systems developed in America are free from ideological bias and never restrict our citizens’ right to free speech.”
Alondra Nelson, a former Biden science adviser who attended that speech, said the Trump administration’s new focus on AI’s “ideological bias” is in some ways a recognition of years of work to address algorithmic bias that can affect housing, mortgages, health care and other aspects of people’s lives.
“Essentially, to say that AI systems are ideologically biased is to say that you identify, recognize and are concerned about the problem of algorithmic bias, which many of us have been worried about for a long time,” she said.
But Nelson sees little room for collaboration coming out of the denigration of equitable AI initiatives.
“I think in this political space, unfortunately, that is very unlikely,” she said. “Problems that have been given different names — algorithmic discrimination or algorithmic bias on the one hand, and ideological bias on the other — will regrettably be seen as two different problems.”
Originally published: April 28, 2025, 12:41 p.m. EDT