Skepticism and “Wow!” in Library AI
by David Hurley
I am suspicious of many claims about AI. Yet the immense power of artificial intelligence is undeniable every time I ask my phone for directions, or search my photos using just the word “recipes” and find pictures of cookbook pages and handwritten note cards that do, in fact, contain recipes.
But still, some days it seems like every library product has a “Now with 20% more AI!” sticker on its metaphorical box. “Machine learning” seems to be the answer to every question about how a system works. (And it is an answer that is very effective at ending that line of questioning, since “not even the people who coded it can fully understand it.”)
So I am skeptical that AI and ML will really be able to solve many of the problems we are currently trying to solve with them, and, on the flip side, skeptical that many of the things AI and ML are ‘solving’ are problems we actually need to spend energy on.
It is like we’re sprinkling some magic AI sauce on everything and anything just to see what happens. And while that feels cynical in the commercial realm, participating in the IDEA Institute on Artificial Intelligence also reignited in me the excitement, wonder, and even joy that come with this sort of wide-open experimentation. Which project, whether small and silly or ambitious and grand, will open up new possibilities for (or understanding of) our collections, services, or communities? Which will give us new insights into AI itself?
But it isn’t just the possibilities that need exploring; it is also the problems. With each new project we discover new ways that AI/ML can be problematic, reinforcing and making invisible (while literally codifying) existing injustices and the assumptions, biases, and (perhaps narrow) worldviews of the people coding and implementing AI.
This is an area where library people have much to contribute. While we don’t have the resources to implement AI on a scale comparable to major tech companies, we also don’t have the profit motive tempting us to gloss over ethical concerns or systemic impacts. And we do have the experience to think about the impacts of information at scale: libraries *are* information systems, and we have been thinking systemically for a long time.
Take privacy, for example. Long before grocery stores had loyalty cards, when “tracking cookies” was not a meaningful phrase, we were dealing with the kind of personal information that could be used to infer a person’s interests and intentions. And in the U.S., we saw this information become interesting to the FBI as early as the 1970s.
In our living professional memory, we grappled with the shifting scale of privacy risk as our systems changed. Writing your name on the card in the book carried a salient, transparent risk: anyone else who picked up that copy would see that you had borrowed it, at least until the card was replaced. In the systems that followed, the risks were invisible to the patron, yet in a few seconds someone with access could learn *every* book a patron had borrowed, or every person who had borrowed any copy of a book.
Library people weren’t fooled by the argument that if you don’t try to hide your cart while shopping in a supermarket, you shouldn’t mind having a corporation track all your purchases across all your visits.
To bring this back to the AI context, library people see the “Wow!” of AI, but we can also see past it. We have the tools and interdisciplinary expertise to think critically about AI and its impacts on society, individuals, and the environment. We need to make the time and space, in all libraries, for us and our communities to get our hands dirty working with artificial intelligence and machine learning, so that we, our libraries, our patrons, and our communities can ask informed questions and be aware of the tradeoffs we are making as the role of AI expands in all aspects of our lives.