Not many people would have the courage to stand up for what they believe in, if that means going against one of the biggest companies in the world - and especially if doing so could cost them their job. But Timnit Gebru is a person who does stand up for her beliefs. She's someone with a mission. The courage she showed when she went head-to-head with Google in December 2020 has made her an icon in the tech world.
Gebru has been at the center of a media frenzy ever since. Her bravery and integrity have only added to the acclaim she had already earned as an expert in technology and artificial intelligence (AI). She's also a widely respected leader in the ethics of AI, and an advocate for diversity in technology.
Gebru has been named one of the "World's 50 Greatest Leaders" by the business magazine Fortune; one of TIME magazine's "100 Most Influential People"; and one of the "10 People Who Helped Shape Science" chosen by the science journal Nature.
These are impressive titles to hold, especially for someone who's only 40 years old. But it was Gebru's recent dismissal from Google that suddenly put her into the headlines.
Questioning the ethics
Google had hired Gebru in 2018 to co-lead the company's AI ethics team, where she was tasked with building a diverse team of experts who would ensure that Google's AI products didn't perpetuate inequality or racism. However, Gebru found that racial bias did indeed exist within the company's AI systems, and she co-authored a groundbreaking paper questioning the ethics of what are called "large language models" - AI systems created to understand and reproduce human language.
The paper detailed how these large models are trained on huge amounts of text that researchers collect from the internet, with the risk that racist and sexist language will find its way into the training data. Shifts in cultural norms - including the anti-sexist and anti-racist vocabulary recently established by movements such as MeToo and Black Lives Matter - will most likely not yet be reflected in that data, so the models end up reproducing the older, more biased language they were trained on.
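To make the mechanism concrete, here is a minimal sketch of the kind of corpus probe such bias research relies on: counting how often identity terms co-occur with stereotyped words in scraped text. The toy corpus, the word lists, and the sentence-level counting rule are all invented for illustration; they are not taken from Gebru's paper.

```python
# Toy probe for biased associations in a text corpus: count how often
# identity terms appear in the same sentence as stereotyped words.
# All data and word lists below are hypothetical illustrations.

corpus = [
    "the engineer was brilliant and rigorous",
    "she was hysterical and emotional at the meeting",
    "he was logical and decisive in the debate",
    "the nurse was caring but emotional",
]

FEMALE_TERMS = {"she", "her", "woman", "nurse"}
MALE_TERMS = {"he", "him", "man", "engineer"}
STEREOTYPED_WORDS = {"emotional", "hysterical", "caring"}

def cooccurrence(identity_terms: set[str]) -> int:
    """Count sentences containing both an identity term and a stereotyped word."""
    count = 0
    for sentence in corpus:
        words = set(sentence.split())
        if words & identity_terms and words & STEREOTYPED_WORDS:
            count += 1
    return count

print("female terms with stereotyped words:", cooccurrence(FEMALE_TERMS))  # 2
print("male terms with stereotyped words:", cooccurrence(MALE_TERMS))      # 0
```

On a web-scale scrape, skews like this flow directly into a model's training signal, which is why the paper argued that ever-bigger datasets deserve more scrutiny, not less.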
According to Gebru's paper, Google was focused on scaling up its language models as quickly as possible to make them bigger and more powerful, but wasn't taking enough time to consider the kinds of biases being built into them. While Google's goal was to build the biggest language models in the industry, its ethical standards seemed to be taking a back seat.
It may seem as though Gebru was doing exactly what she had been asked to do, but people higher up the Google ladder were unhappy with the picture she was painting. She was asked to withdraw the paper, or to remove any mention of Google employees. She declined: It would have meant going against everything she stood for, including the reason she had been hired by the company in the first place. As a result, she was forced to leave her position, with immediate effect.
"I had so many issues at Google," Gebru told TIME in an article published in January 2022. "But the censorship of my paper was the worst instance."
Systemic racism
Life has certainly not been plain sailing for Gebru. She's worked hard for her position. Ethics, diversity, and inclusion have all been a driving force for her since she was a child.
Gebru was born in Addis Ababa, Ethiopia, in 1983. She was a teenager when war broke out between Ethiopia and neighboring Eritrea, and she experienced firsthand what conflict and division meant. Her father, who died when she was five, had been an electrical engineer; her mother was an economist. By the time Gebru was 15, it was no longer safe for the family to remain in Ethiopia, and she fled to Ireland to escape deportation. After a "miserable experience" with the U.S. asylum system, she was finally granted asylum and was able to join her mother and sisters in Somerville, Massachusetts.
Gebru thought she had moved to greener pastures, but she immediately experienced the realities of racism in the U.S. school system. Despite being one of the best students in her high school class, she was rarely recognized for her efforts, and her teachers often discouraged her from taking the advanced classes that matched her ability. And, of course, the discrimination didn't end there.
After she left high school, a friend of hers, a Black woman, was assaulted in a bar. When Gebru called the police, instead of dealing with the attacker, they handcuffed her friend and put Gebru in a cell. The police never filed a report on the assault. "It was a blatant example of systemic racism," Gebru told TIME.
A lot of rhetoric
In 2001, Gebru received a place at Stanford, one of the top universities in the U.S., to study electrical engineering. She went on to earn a PhD in computer vision there, working under the renowned computer scientist Dr. Fei-Fei Li.
Gebru's doctoral research culminated in a project she led to analyze 50 million images of street scenes collected from Google Street View, showing that it was possible to predict a neighborhood's income, voting patterns, race, and education levels from the cars its residents drove.
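As a rough sketch of that idea - with entirely invented numbers, since the real study used 50 million images and far richer features - the pipeline reduces to turning the vehicles detected in each neighborhood into a feature vector and fitting a model that predicts a demographic attribute:

```python
# Toy version of the Street View idea: per-neighborhood counts of detected
# vehicle types become features for predicting a demographic attribute.
# All counts and incomes below are hypothetical.
from sklearn.linear_model import LinearRegression

# Feature columns: [sedans, pickups, luxury cars, minivans] per neighborhood.
X = [
    [120, 10, 45, 30],
    [80,  60,  5, 40],
    [150,  5, 90, 20],
    [60,  80,  2, 50],
]
# Hypothetical median household income (USD) for the same neighborhoods.
y = [72_000, 48_000, 110_000, 41_000]

model = LinearRegression().fit(X, y)
print(model.predict([[100, 20, 30, 35]]))  # income estimate for a new neighborhood
```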
In an interview with The New York Times in 2019, Gebru said: "A lot of times, people are talking about bias in the sense of equalizing performance across groups."
She went on: "They're not thinking about the underlying foundation, whether a task should exist in the first place, who creates it, who will deploy it on which population, who owns the data and how is it used?"
By the time she left Stanford, Gebru had decided to use her expertise to bring ethics into the field. As an example, she has described her own experiences at a 2015 conference, which was attended by approximately 8,500 people. She was able to count only six Black people among the attendees.
In 2018, she told MIT Technology Review: "I saw a lot of rhetoric about diversity and how a lot of companies think it's important. And I saw a mismatch between the rhetoric and action. Because six Black people out of 8,500 - that's a ridiculous number, right?"
The TIME article from January 2022 quoted some lines from a previous article that Gebru had written but never published.
"I am very concerned about the future of AI," she wrote. "Not because of the risk of rogue machines taking over. But because of the homogeneous, one-dimensional group of men who are currently involved in advancing the technology."
Bringing AI back to earth
By 2017, Gebru had become an AI researcher in Microsoft's group for Fairness, Accountability, Transparency, and Ethics in AI (FATE), where she co-authored a study called Gender Shades, auditing commercial facial recognition software. The study showed that while the systems developed by IBM and Microsoft were nearly flawless at classifying images of lighter-skinned men, they had high error rates for darker-skinned women - higher than for any other group.
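The core of such an audit is simple: rather than reporting a single overall accuracy, break the error rate out by demographic group. Here is a minimal sketch of that disaggregated evaluation; the groups, labels, and predictions below are hypothetical, not data from Gender Shades.

```python
# Disaggregated audit: per-group error rates instead of one overall score.
# All records below are hypothetical.
from collections import defaultdict

# (group, true_label, predicted_label) from an imaginary gender classifier.
results = [
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned female",  "female", "male"),    # misclassified
    ("darker-skinned female",  "female", "female"),
    ("darker-skinned female",  "female", "male"),    # misclassified
]

tallies = defaultdict(lambda: [0, 0])  # group -> [mistakes, total]
for group, truth, predicted in results:
    tallies[group][0] += truth != predicted
    tallies[group][1] += 1

for group, (wrong, total) in tallies.items():
    print(f"{group}: error rate {wrong / total:.0%}")
```

A single headline accuracy for this toy data would look respectable; the per-group breakdown is what exposes the disparity.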
That same year, Gebru and Rediet Abebe co-founded the tech research organization Black in AI, which "addresses the dire lack of Black professionals in the field of artificial intelligence and the lack of visibility and support for those who are already in the field." In 2021, she launched the Distributed AI Research Institute (DAIR), an independent ethics research group that documents the effect of AI on marginalized groups. "AI needs to be brought back to earth," she is quoted as saying, on the DAIR website.
With AI being such a confusing minefield for anyone who doesn't work in tech, the world needs scientists like Gebru to provide an honest voice of clarity. While many people would have rolled over and given in to the demands of a tech giant like Google, she did not. She stood by her beliefs and her rights, regardless of whether it would cost her her job.
As she told MIT Technology Review: "When I started Black in AI, I started it with a couple of my friends. I had a tiny mailing list before that, where I literally would add any Black person I saw in this field into the mailing list, and be like, 'Hi, I'm Timnit. I'm Black person number two. Hi, Black person number one. Let's be friends.'"