What Will It Take to Fix AI's Bias Problem? | Opinion

School librarian Jean Darnell prompted ChatGPT to write a paper on Black history, and the result had glaring omissions. That's just one part of the problem, she says.

AI ChatGPT imagery concept. Illustration by Vertigo3d/Getty Images

In the 11 months since it launched in November 2022, ChatGPT has changed how educators grade, cite, and interact with technology. Viewed through the proper rose-colored glasses, advances in AI will make menial tasks irrelevant—which translates to more time for the things we want to do.

For instance, Twee can take any YouTube video and create an exam, discussion questions, fill-in-the-blank questionnaires, vocabulary, etc., in seconds. In less than five minutes, I turned a video about bats (SLJ School Librarians' Back-to-School Hacks for 2023-24) into a lesson complete with discussion questions, a listening comprehension quiz, and a fill-in-the-blank handout (I wrote the lesson; Twee compiled the last three links).

It’s exciting that hours of brainstorming, notetaking, and reading comprehension can dissolve with a few clicks of the mouse.

But here’s the elephant in the room: Because most AI is based on large language models (LLMs) seeking information from all corners of the internet, the content output is only as good as the content input. So if someone purposely puts something on the internet that’s rooted in a basket full of lying posies, it raises key questions:

Can AI discern fact from fiction, and misinformation from disinformation and mal-information (information close enough to the truth to be believable at face value, but used intentionally to harm a group or individual)?

And: Can AI differentiate between hate-group propaganda and lived, experiential learning from a diverse perspective?

Right now, the answer to both is no.

AI has a bias problem—here’s just one example. In August 2023, an Asian MIT student asked AI to make her headshot photo more professional. AI turned her eyes blue and lightened her skin.

Why? Remember, the input data has to be free of biases for the output data to exceed expectations—or even meet them, in this case. Initiatives like pocstock, a stock media company focusing on people of color, may help when it comes to the information AI uses as a source for the results produced via a prompt.

AI uses algorithms, LLMs, and humans to stop the spread of misinformation after it’s been detected. But if we input reliable, diverse, and inclusive data, we can teach AI how to detect inaccuracies.

(Fun fact to consider in the meantime: A 2018 study of Twitter showed that false news stories were more commonly retweeted by humans than by bots, and were 70 percent more likely to be retweeted than true stories.)

Representation matters. And the better trained AI is at finding diverse sources, the more AI will be able to see us as we see ourselves.

A research wild card

Librarians understand investigative research. With very young students, we cover the difference between fiction and nonfiction. We teach them to analyze website addresses (.gov = government, .com = business, .edu = education). We coach students on digging for deeper biases when reading articles for education or entertainment. We advise them on organizing their thoughts via brainstorming, differentiating narrative from expository essays, and including applicable research in their academic work.

With AI, investigative research tasks with hallmark tangibles (like URL endings) have fallen victim to deepfakes and maleficent missteps that can erase cultures.

Earlier this year, I asked ChatGPT to write a paper on Black history. I had two goals: to discern what it was programmed to learn about a culture I’m fully immersed in, and to see if it gave a fairly balanced, accurate representation.

The essay mentioned enslavement, civil rights, and four notable Black Americans, and provided a conclusion. It didn’t mention the remarkable accomplishments of Black Americans during Reconstruction, the first Black president, or our lives since the civil rights movement of the 1960s.

That paper could make you think nothing historically significant has occurred regarding Black Americans for nearly 60 years. Based on that, I’d say the inclusion of some cultural groups and historically disenfranchised communities was not part of AI’s development. It’s enough to make you think Black folks are on the wrong side of the technology. I turned that experience into a teachable moment.

Unlike its competitor, Google’s Bard, ChatGPT doesn’t automatically provide citations. I was curious. So this month, I asked Bard the following prompt: “Create a speech from a Black woman's perspective on how book bans suppress the speech and freedoms of Black people. include specific examples from the last 10 years.” Here's the essay in its entirety. There’s also an audio version on YouTube. Spoiler alert: I was pleasantly surprised.

Still, there’s a ways to go. Others have also shown how AI can be discriminatory and prejudiced: See “Who Is Making Sure the AI Machines Aren’t Racist?” and “Is AI in favor of Racists?” When the 42 percent of the U.S. population that identifies as Black, Hispanic, Asian, Indigenous, or two or more races is not included in AI regulation, then “Houston, we have a problem.”

In the future, will AI be more inclusive? Will it solve or address racism, since we just can’t seem to shake that lesion off our backs? Also, how will it impact book bans?

We don’t know yet because the technology is evolving at an exponential pace. But let’s look at the evidence.

How has AI affected libraries and our mission for intellectual freedom and informational integrity?

Well, an Iowa district used AI to figure out which books to ban. Here’s the kicker: Administrators didn’t have time to read books before the new school year. So they relied on AI… not the true experts, school librarians. AI sourced this list with information from discriminatory and targeted proposed house bills and opinionated personal websites by “concerned parents,” as well as anything and everything on the internet, truthful or not.

Will AI be more inclusive and less racist in the future?
It depends on whether those efforts can keep up. Last month, Mark Zuckerberg, Elon Musk, and Bill Gates met with senators and others in Washington, D.C., for an AI Insight Forum to discuss regulations for AI. Quick translation: Three of the richest white men in the world fielded questions and concerns in a private session, closed to the public. (And which one of those has a history of violating citizens’ intellectual freedom?) Time will tell if those concerns will be addressed.

I applaud Senator Chuck Schumer, who is pushing for federal legislation regarding AI. “Government must play a role in requiring these safeguards. Because even if individual companies promote safeguards, there will always be rogue actors, unscrupulous companies, and foreign adversaries that seek to harm us,” he said in the forum’s opening remarks. “And on the transformational side, other governments, including adversaries like China, are investing huge resources to get ahead. We could fall behind, to the detriment of our national security.”

I promise to do my own part to hold AI accountable. It’s part of my investigative skills as a librarian. And it’s my goal to do my part to ensure the world I hand over to my students and my kids is as equitable and safe as possible.

So let’s end on a good note. AI is a fun tool. It’s a shortcut in learning that delivers the same instant, at-your-fingertips thrill the iPad did when it debuted in 2010. Here’s how I used ChatGPT to teach literacy as a school librarian: 7 Innovative Prompts to Use with AI for Literacy. My favorite example from the list is #5, Feedback from Beyond, where Edgar Allan Poe advocates for kids to read scary stories.

Jean Darnell is a Texas librarian who advocates for all students via her blog, AwakenLibrarian.com, where she shares lesson ideas and content aimed at improving the diverse role of librarians of color and the young adults they serve. She is the 2023 Texas Library Association Intellectual Freedom Committee Chairperson and has served on other national and local committees including the 2020 Caldecott Medal Book Award, the Coretta Scott King Tech Committee and the 2015 Texas Bluebonnet Committee.
