Assessing the Accessibility of Digital Special Collections for Print-disabled Users

This webinar will share mid-stage findings from “Crowdsourced Data: Accuracy, Accessibility, Authority” (CDAAA), a 3-year IMLS-funded Early Career Research Development program led by Victoria Van Hyning to investigate the sociotechnical barriers that libraries, archives, and museums (LAMs) face in making crowdsourced transcriptions of cultural heritage materials open and accessible to a broad public. Accessibility here refers not only to the ability of fully sighted users to search LAM discovery systems, but also to the ability of people who are Blind, have low vision, are Dyslexic, or are otherwise print-disabled to use their preferred screen reader software to navigate LAM systems, find content, and hear digital information read aloud.
Participants will learn about:
  • the applications of crowdsourcing work as it relates to accessibility for print-disabled users
  • ongoing challenges in digital accessibility in LAMs, for and with print-disabled users
  • methodological practices and approaches to inclusive user testing and interview work
Principle 1 of the Web Content Accessibility Guidelines (WCAG 2.1) states that web content should be "Perceivable: Information and user interface components must be presentable to users in ways they can perceive." Guideline 1.1 further states that web content creators should "provide text alternatives for any non-text content so that it can be changed into other forms people need, such as large print, braille, speech, symbols or simpler language." Section 508 of the Rehabilitation Act of 1973 (as amended in 1998) requires that information and communication technology (ICT) developed, procured, maintained, or used by US federal agencies be accessible to people with disabilities.
Southwell and Slater (2012) found that only 42 percent of the sampled digital collection items in their study of state-funded university libraries were accessible via screen reader. The remaining 58 percent of items lacked the alternative text or transcriptions needed to make content such as handwritten manuscripts and photographs accessible to people who are Blind or have low vision and use screen readers to navigate and access digital content. Through interviews with LAM practitioners and community crowdsourcing leaders, and usability testing sessions of crowdsourced data and its source images with people who are Blind or have low vision, we investigate to what extent crowdsourced transcriptions enhance the accessibility of special collections, and what barriers may still stand in the way. We will briefly discuss some of the promises and perils of using emerging AI tools, such as ChatGPT, to enhance transcriptions and make them more legible to screen reader technologies.

Southwell, K. L., & Slater, J. (2012). Accessibility of digital special collections using screen readers. Library Hi Tech, 30(3), 457–471. https://doi.org/10.1108/07378831211266609

Presenter

Victoria Van Hyning

Victoria joined the University of Maryland iSchool in 2020 and is an affiliate of the English Department. From 2018 to 2020 she served as a Senior Innovation Specialist for the Library of Congress’ crowdsourcing project By the People. She held a British Academy Postdoctoral Fellowship in English literature at Oxford University, where she also served as the Humanities PI of the crowdsourcing program Zooniverse.org (2015–2018). Her teaching and research interests focus on giving more oxygen to marginalized voices and people, whether in the historical record—such as religious minorities, women, and Black artists—or people alive today. She leads the David C. Driskell Papers Project with Driskell Center colleagues, and is a founding member of the Center for Archival Futures (CAFe) and the Data Rescue and Reuse (RRAD) Lab, where she leads investigations into the long-term preservation, use, and reuse of crowdsourced data. She has emerging interests in prison librarianship and education, and in supporting returning citizens.