Making AI inclusive

Artificial intelligence and related technologies now shape important parts of the digital economy, deeply affecting our lives and transforming our futures in ways both visible and hidden. But have we ever stopped to think about how this technology affects the differently abled? Are the AI systems we use every day, even simple ones for ordering food, calling a cab or talking to a virtual assistant, inclusive enough? Not entirely, right?

In 2019, the research firm Gartner predicted that by 2023 the number of people with disabilities in employment will triple, as artificial intelligence (AI) and emerging technologies reduce barriers to access. The study also notes that organisations that actively employ people with disabilities not only cultivate goodwill in their communities but also see 89 per cent higher retention rates, a 72 per cent increase in employee productivity and a 29 per cent increase in profitability. India today has 26 million people living with disabilities, and finding new ways to use this technology could be key to enabling everyone to participate fully in the 21st-century economy.

The last few years have seen many organisations stepping up to the task. In 2018, Microsoft launched a five-year, $25 million “AI for Accessibility” programme with the intent of harnessing the power of AI to amplify human capability. Google similarly has a dedicated accessibility team that actively partners with other organisations, including Facebook, Apple and Microsoft, and connects with users to build a community whose feedback ultimately helps shape the future of Google’s products. One of India’s leading technology education institutions, IIT Delhi, also hosts AssisTech, an interdisciplinary group of faculty, research staff and students engaged in using modern technology to find affordable solutions for the visually impaired.

Let’s look at a few such AI-based tech solutions in various fields:
Wearables:

  • Wearables such as OrCam’s MyEye help the visually impaired read books (both text and pictures), recognise faces, and even distinguish between products and brands.
  • Widex’s Evoke hearing aids connect wirelessly to smartphones and learn sounds from your environment, classifying them as “background noise” or “important noise”. This allows the hearing impaired to focus on the sounds they want to hear, and even to set preferences for them.
  • Empower Me, developed for children with autism, connects the wearer of smart glasses with a digital coach that helps interpret emotions and gives feedback to facilitate interaction.

Mobility:

  • SmartCane is an affordable obstacle-detection system that mounts on the white cane carried by the visually challenged and warns the user of obstacles in their path.
  • Microsoft’s Seeing AI and Google’s Lookout apps read out text from signs and documents, describe faces and more.
  • Google’s Live Caption automatically generates captions for audio and video playing on the device.
  • Hearing AI is an app that translates sound into visual representations on screen.

Education:

  • DotBook is India’s first braille laptop.
  • IntelliGaze is a tool that allows people with mobility impairments to operate their computer using eye control.
  • NVDA is screen-reading software that interprets information displayed on screen and converts it to audio via text-to-speech engines.
  • TacRead is a refreshable braille display (RBD) that enables people with visual impairment to read digital text through a tactile interface, addressing the lack of affordable braille displays.

Livelihood:

  • Select restaurants are starting to pilot AI robotics technology that enables paralysed employees to control robotic waiters remotely.
  • Auticon is an IT service provider creating work environments friendly to people with autism spectrum disorder (ASD), making it easier for people with ASD to work alongside people without it.
  • Accenture Corporate Citizenship has partnered with Leonard Cheshire Disability (LCD), one of the UK’s oldest and largest disability-focused civil society organisations. LCD’s Access to Livelihoods (A2L) programme assists people with disabilities seeking employment in Africa and Asia, and provides training and career guidance.

AI is also touching arts and recreation. For example, the National Theatre in the UK uses smart caption glasses to display live captions for the wearer. The glasses have brought a huge increase in the number of deaf and hard-of-hearing theatregoers.

One major consideration for adoption is that, because these are AI-based systems, their performance is limited by the kind of data used to train them. If a particular kind of data is missing, the system cannot learn from it. Think how adversely automated tasks like resume shortlisting and loan approval can be affected when the system has never seen data representing people with disabilities.
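This training-data gap can be illustrated with a toy sketch. The code below is not any real product: it builds a minimal nearest-centroid classifier for hypothetical voice commands, trained only on invented feature vectors for typical speech, and shows how an input from a speaker with impaired speech, which the model has never seen, gets misclassified.

```python
# Toy illustration of the training-data gap (all names and numbers invented).
# A nearest-centroid classifier trained only on typical-speech features
# misreads an impaired-speech input it was never trained on.

def centroid(samples):
    """Average each feature across a list of feature vectors."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

def classify(x, centroids):
    """Return the label whose centroid is nearest (squared Euclidean distance)."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Training data contains ONLY typical speech.
# Hypothetical features: [speaking rate, articulation score].
training = {
    "command_yes": [[1.0, 0.90], [1.1, 0.95]],
    "command_no":  [[0.2, 0.90], [0.3, 0.85]],
}
centroids = {label: centroid(samples) for label, samples in training.items()}

# A user with impaired speech says "yes" more slowly and less crisply;
# the slow speaking rate drags the input towards the "no" centroid.
impaired_yes = [0.4, 0.5]
print(classify(impaired_yes, centroids))  # prints "command_no" (misclassified)
```

Adding impaired-speech samples to the training set, which is exactly what data-collection efforts for accessibility aim to do, would move the centroids and let such inputs be recognised correctly.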

This creates a need for both organisations and users to consciously include persons with disabilities (PwDs) while building these systems. One example of such an effort is Google’s Project Euphonia, which focuses on collecting more voice data from people with impaired speech to remedy the AI bias created by limited training data. The hope is that this data will improve the algorithms, with the updates eventually integrated into Google Assistant.

NGOs like Samarthanam, which have rich context on the problems faced by the community, have the potential to transform this landscape by being active participants in every leg of AI development: providing data, testing these technologies and suggesting ways to improve them. Each one of us has a role to play here, as advocates of inclusiveness and in sensitising others to these issues.

Next time we read about or use any such technology, let’s consciously think about how it can make a difference in the lives of the differently abled, ask whether there are ways to improve it, and vocalise them, so that we can be real changemakers!
