Microsoft and Team Gleason, the nonprofit organization founded by NFL player Steve Gleason, today launched Project Insight to create an open dataset of facial imagery of people with amyotrophic lateral sclerosis (ALS). The organizations hope to foster innovation in computer vision and broaden the potential for connectivity and communication for people with accessibility challenges.
Microsoft and Team Gleason assert that existing machine learning datasets don’t represent the diversity of people with ALS, a condition that affects as many as 30,000 people in the U.S. As a result, computer vision models struggle to accurately recognize the faces of people with ALS, who may wear breathing masks or have droopy eyelids, watery eyes, or dry eyes caused by medications that control excessive saliva.
Project Insight will investigate how to use data and AI with the front-facing camera already present in many assistive devices to predict where a person is looking on a screen. Team Gleason will work with Microsoft’s Health Next Enable team to gather images of people with ALS looking at their computers so Microsoft can train AI models more inclusively. (Microsoft’s Health Next team, which is within its Health AI division, focuses on AI and cloud-based services that improve health outcomes.) Participants will complete a brief medical history questionnaire and be prompted through an app to submit images of themselves using their computers.
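Gaze prediction of this kind is typically framed as regression from camera images to on-screen coordinates. As a rough illustration only (this is not Microsoft's actual method, and all names and shapes here are hypothetical), a minimal baseline maps flattened eye-image patches to (x, y) screen positions with ridge regression:

```python
import numpy as np

def fit_gaze_regressor(eye_patches, screen_points, l2=1e-3):
    """Fit a ridge regression mapping flattened eye-image patches
    to (x, y) gaze coordinates on screen. Purely illustrative."""
    X = eye_patches.reshape(len(eye_patches), -1).astype(float)
    X = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    Y = np.asarray(screen_points, dtype=float)
    # Closed-form ridge solution: W = (X^T X + l2*I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + l2 * np.eye(X.shape[1]), X.T @ Y)
    return W

def predict_gaze(W, eye_patches):
    """Predict (x, y) screen coordinates for new eye patches."""
    X = eye_patches.reshape(len(eye_patches), -1).astype(float)
    X = np.hstack([X, np.ones((len(X), 1))])
    return X @ W
```

A production system would use a deep model trained on far more varied imagery; the point of collecting a diverse ALS dataset is precisely that a model fit only to typical faces generalizes poorly to masks, droopy eyelids, and watery eyes.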
“ALS progression can be as diverse as the individuals themselves,” Team Gleason chief impact officer Blair Casey said. “So accessing computers and communication devices should not be a one-size-fits-all. We will capture as much information as possible from 100 people living with ALS so we can develop tools for all to effectively use.”
Microsoft and Team Gleason estimate that the project will collect and share 5TB of anonymized data with researchers on data science platforms like Kaggle and GitHub.
“There is a significant lack of disability-specific data that is needed to go after these innovative and complex opportunities within the disability experience,” Microsoft senior accessibility architect Mary Bellard said. “It’s not about disability and accessibility making AI better. It’s about how AI makes accessibility better and … better adapting to the disability experience.”
Project Insight follows Microsoft’s GazeSpeak, an accessibility app for people with ALS that runs on a smartphone and uses AI to convert eye movements into speech so a conversation partner can understand what’s being said in real time. In tests, GazeSpeak proved much faster than boards that display letters in different groups, a method people with ALS have historically used. Specifically, the app helped users complete sentences in 78 seconds on average, compared with 123 seconds using the boards.
Microsoft isn’t the only one leveraging AI to tackle the challenges associated with ALS. London-based engineer Julius Sweetland created OptiKey, free software that uses off-the-shelf eye-tracking hardware to analyze a user’s eye movements. An on-screen keyboard lets the user select letters by gaze, with automatic word suggestions that pop up much as they would on an iOS or Android smartphone.
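Suggestion pop-ups of this kind can be approximated with a simple prefix lookup over a sorted vocabulary. This sketch is illustrative only (the function name and vocabulary are invented, not taken from OptiKey):

```python
import bisect

def suggest(prefix, vocab, k=3):
    """Return up to k vocabulary words starting with prefix.
    vocab must be sorted; bisect finds the prefix block in O(log n)."""
    i = bisect.bisect_left(vocab, prefix)
    matches = []
    while i < len(vocab) and vocab[i].startswith(prefix) and len(matches) < k:
        matches.append(vocab[i])
        i += 1
    return matches
```

Real predictive keyboards rank candidates with a language model rather than alphabetically, but the prefix-lookup step is the same.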
In August, Google AI researchers working with the ALS Therapy Development Institute shared details about Project Euphonia, a speech-to-text transcription service that can drastically improve recognition accuracy for people with speaking impairments. To develop the model, Google solicited data from people with ALS and analyzed phoneme mistakes (errors involving the perceptually distinct units of sound in a language) to reduce word error rates.
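Word error rate, the standard metric for gains like Euphonia's, is the word-level edit distance between a reference transcript and the system's hypothesis, normalized by the reference length. A minimal sketch:

```python
def word_error_rate(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance between a
    reference transcript and a hypothesis, divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, a one-word substitution in a three-word reference yields a word error rate of 1/3.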