With the widespread adoption of ChatGPT for engaging in conversations with human users, understanding how this and similar tools could help people with disabilities is becoming essential.
An autoethnographic case study [1], conducted by a research group at the University of Washington, shows interesting results about the use of Generative Artificial Intelligence (GAI) within a team of researchers with and without disabilities.
Recently, reflections on the potential of GAI to negatively impact inclusion, representation, and equity for marginalized communities, including people with disabilities, have been multiplying.
This is particularly true as GAI is rapidly adopted and embedded in existing tools and workflows, making it urgent to explore the potential benefits and challenges it poses.
The study was conducted by a team of seven individuals with and without disabilities, who carried out a three-month autoethnography of their use of GAI to meet personal and professional needs. GAI was used in domains including summarization, communication, image generation, graphical user interface and visualization design, and making documents and visualizations accessible.
During the data collection period, each participant independently summarized their experiences in a shared document, describing their motivations for using generative AI to address access needs and noting what went well and what did not. Results were presented as amalgams of the collected data.
Two examples from the study [1] are reported here.
In the first case, ChatGPT-4 helped Sam (a fictional name), who is autistic, rewrite messages at work. Sam frequently experienced anxiety due to past misunderstandings related to his communication difficulties, and he would spend a long time figuring out whether a message was good enough.
He wanted to sound confident and concise. ChatGPT-4 rewrote his messages in the way he had originally wanted to write them, and Sam now feels relieved when he needs to write messages.
On the other hand, the recipients of his messages found the language colder and, in most cases, preferred the original version.
In the second case, image-generation AI (DALL·E 2 and Midjourney) helped Ally (a fictional name) visualize her favorite fiction novels. Ally has aphantasia, the inability to form mental images. She describes her excitement at finally being able to see for herself what the scenes and characters in her favorite books might look like.
Ally's aphantasia also affects her ability to imagine new designs for crafts. She used Midjourney to generate photorealistic sketches of a novel craft concept, with very satisfactory results.
In general, the work demonstrates the potential for GAI today to provide people with disabilities on-demand support for their accessibility needs, in low-stakes, easily verifiable contexts. Unsurprisingly, some of the limitations the researchers encountered reflect GAI's current tendency to parrot information without truly integrating it; in the case of visualization needs, for example, further model training is required. Future research should move beyond single-case explorations of GAI's capabilities.
[1] Kate Glazko, Momona Yamagami, Aashaka Desai, Kelly Avery Mack, Venkatesh Potluri, Xuhai Xu, and Jennifer Mankoff. "An Autoethnographic Case Study of Generative Artificial Intelligence's Utility for Accessibility." The 25th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '23), October 22–25, 2023, New York, NY, USA. https://doi.org/10.1145/3597638.3614548