Libraries have an important role to play in the future of AI
by Mary Beth Holm
AI is already a major part of everyday life, whether people are aware of it or not. Credit, healthcare, career opportunities, and even personal freedom can all be affected by AI. It is important that people understand AI and that there are equitable opportunities to work with AI and machine learning.
This is where libraries can play a key role in the future of AI. Education is critical to broadening access to AI and increasing comfort with it. In academic libraries, making AI tools part of library instruction and offering workshops on AI and machine learning are straightforward ways to ensure that the communities we serve are aware of, and feel comfortable using, AI-based tools. Knowledge of AI is essential for all fields, not just computer science.
However, librarians cannot teach users about AI tools that they themselves do not know exist. Before this week, I thought I kept up with new resources as they became available. Attending the IDEA Institute on AI showed me how many resources I had never encountered. Subscribing to listservs, attending conferences, and participating in institutes such as this one are all things we should be doing so that we can share that knowledge with our users.
In his fascinating talk this week, Jason Griffey described how research and publishing models may change dramatically in the next 5-10 years as automated tools become available at every stage of the process. The sooner universities are aware of this significant change ahead, the better they can prepare for the future.
Libraries can also increase the likelihood that new tools are fair and human-centered. Ensuring that the people designing these tools know, and follow, best practices is vital for the future.
There is always bias in any system being created; however, it can be limited if the creators solicit feedback from diverse groups of people. Testing a new AI-based tool, rather than deploying it immediately, is important so that the final design is inclusive and does not adversely affect its users.
The documentary Coded Bias makes it clear that AI is already being used in many aspects of life. Some systems may provide transparency about their design and the algorithms they use. Others may be black boxes, where one can only guess what factors were considered. If you are not hired for a job because of AI bias, how would you ever know?
As a librarian at an HBCU, I am heartened by initiatives such as Black in AI. The more inclusive we are about who gets to work in AI, the more effective the tools being developed will be at meeting the needs of all people.
AI tools can be very useful for libraries, including HBCU libraries. Deploying them at an HBCU may require more in-depth user testing to make sure we understand how the tools will work for groups of people who may not have been considered during the design process.