The Internet of Things (IoT) is a concept many of us have become very familiar with, in some cases without even realizing it. One of the most popular applications is smart home devices, which have surged in popularity since the rise of virtual assistants. While the IoT existed prior to these artificial intelligence-powered aides, virtual assistants and smart home devices are precisely what shifted the perception of IoT in popular culture from niche to mainstream. While these products and services provide a new level of convenience to everyone who uses them, they are particularly beneficial for people with disabilities because they augment our daily living activities. So, why is IoT so useful for this community, and what are the major challenges facing them in adopting and using these technologies? Let’s find out!
Virtual assistants allow all of us to more easily keep track of and update our schedules, daily routines, and our shopping and to-do lists. They also enable hands-free control of our digital music libraries. Smart home devices such as Philips Hue light bulbs and the Nest thermostat let us control the lighting and temperature of our homes with our smartphones and, in conjunction with virtual assistants and connected speakers, our voices. This saves us valuable time and energy, allowing us to maximize our productivity and efficiency. While these features are particularly helpful for people with disabilities, they also pose unique challenges for this population, which highlight the current limitations of the technology that these tools are built on.
What types of disabilities benefit from IoT and virtual assistants and why are they so useful?
In order to understand exactly why IoT, virtual assistants, and smart home devices are all assets for those of us with disabilities, we need to understand what types of disabilities they impact. To start, imagine that you relied on a wheelchair, walker, or crutches to move about your environment. Now imagine that you woke up in the middle of the night urgently needing to use the toilet. Think about how much energy it might take not just to get out of bed to your mobility assistive device, but to get across the room to a light switch and then to the bathroom. Smart light bulbs and virtual assistants such as Alexa and Google Assistant take some of the effort and potential anxiety out of this situation since they allow users to turn the lights on from a smartphone or with their voice.
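Under the hood, a scenario like this usually comes down to a phone app or assistant sending a small request to the bulb's hub. As a rough illustration, here is a minimal sketch modeled on the Philips Hue local bridge API; the bridge address and API key are placeholders, and this only builds the request rather than sending it:

```python
# Minimal sketch of turning on a smart bulb, modeled on the Philips Hue
# v1 local bridge API. The bridge IP and API key below are placeholders.

def hue_light_request(bridge_ip, api_key, light_id, on, brightness=None):
    """Build the URL and JSON body for a Hue light-state change."""
    url = f"http://{bridge_ip}/api/{api_key}/lights/{light_id}/state"
    body = {"on": on}
    if brightness is not None:
        # Hue brightness is an integer in the range 1-254; clamp to be safe.
        body["bri"] = max(1, min(254, brightness))
    return url, body

# "Alexa, turn on the bedroom light" might ultimately translate into:
url, body = hue_light_request("192.168.1.2", "example-key", 1, True, brightness=128)
# ...which an app would then send as an HTTP PUT to the bridge.
```

The point is how little the user has to do: one spoken sentence replaces crossing a dark room to reach a switch.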
Alexa, the Google Assistant, Siri, and Cortana are also powerfully useful for people with cognitive and memory impairments, thanks to their ability to prompt users with reminders and, in some cases, guide them with step-by-step instructions. For example, as we age, it can become more difficult to retain information in our memory. Virtual assistants can remind us to take our medicine, exercise, and engage in other self-care tasks. What if you experienced a traumatic brain injury and can no longer remember how to cook your favorite dish? There’s an Alexa skill for that! Although the Allrecipes Alexa skill has poor reviews, the fact that it can theoretically walk you through the process illustrates that the potential to empower people to get back in the kitchen is there if the functionality is implemented effectively.
People who are blind can use their smartphones to better understand their environments with tools such as Microsoft’s Seeing AI iOS application and the wearable Aira. RFID tags can also help them cross busy streets safely and independently. Google Lens is another object recognition application. What’s interesting about Seeing AI and Google Lens as examples is that they were designed for completely different reasons and different types of users, yet they function very similarly and provide similar information. Google Lens allows users to conduct web searches by taking a photo of an object or environment and to take action on text such as a phone number, event details, addresses, and even foreign languages. Both apps are meant to help users explore the world around them, but Microsoft’s offering, which focuses on describing the content of photos taken by users with speech output, is designed to bridge a gap and help its users reach a level of understanding that Google assumes its users already have.
What are the challenges and limitations people with disabilities face when using IoT?
About half of the most popular virtual assistants currently available rely heavily on voice user interfaces to engage and interact with users. This drastically hinders the ability of people with speech impairments to take advantage of them. A user with a relatively mild speech impairment may be able to get some use out of them, but the more severe the impairment, the more difficult it becomes to have an effective interaction with any of these assistants. If a user is unable to articulate or enunciate clearly, or cannot raise the volume of their voice to a level the assistants can recognize, usability deteriorates. Furthermore, if a user cannot verbalize audibly at all, some of the assistants are completely useless. Conversely, people who are deaf can’t hear the speech output from virtual assistants. A simple solution for both barriers is to provide an alternative input method, the most straightforward of which is typing, with responses also provided as written transcripts. The Google Assistant and, to a lesser extent, Cortana incorporate this, but Siri does not. For its part, Amazon is beginning to incorporate such a feature into its Echo line of connected devices, but the feature, called Alexa Captioning, is available only on devices with a built-in screen.
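The fallback described above is conceptually simple: treat typed text and transcribed speech identically on the way in, and always produce a caption alongside speech on the way out. The sketch below is a hypothetical illustration of that pattern, not any vendor’s actual API; the function names are my own:

```python
# Hypothetical sketch of a multimodal assistant front end: input may be
# typed text or an already-transcribed voice utterance, and every reply
# is delivered both as audio and as a written caption.

def handle_request(text, respond):
    """Route a user request and return a dual-modality response.

    `respond(text)` stands in for whatever back end produces the reply;
    wrapping it guarantees the reply is always captioned as well as spoken.
    """
    reply = respond(text)
    return {
        "speech": reply,   # sent to text-to-speech for users who can hear it
        "caption": reply,  # shown on screen for deaf and hard-of-hearing users
    }

def assistant(query):
    # Toy stand-in for a real assistant back end.
    if "light" in query.lower():
        return "Turning on the lights."
    return "Sorry, I didn't catch that."

# Typed input is handled exactly like a voice transcript:
result = handle_request("Turn on the light", assistant)
```

Because the routing layer never cares where the text came from, a user who cannot speak, and a user who cannot hear, both get the full interaction.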
Cognition can also be a barrier for potential users. Although virtual assistants and smart home products are often touted as being intuitive, intuition is subjective. What one person may find easy to learn and use, another may not.
So, why aren’t more IoT devices designed with accessibility in mind?
While I’ve mentioned a few technologies above designed specifically for people with disabilities, I’m left to wonder why the most popular virtual assistants and smart home devices are not designed with the needs and abilities of the disability community in mind, nor marketed to them specifically. Although it’s impossible to know for certain without speaking to the companies behind these products and services directly, one theory is that the people making such decisions at these organizations assume that designing these tools more inclusively will hurt their bottom line: that it’s not worth the resources and effort. The thing is, those of us with disabilities comprise the largest minority globally, according to the UN. Ultimately, by choosing not to embrace this group of people, these businesses are missing out on a massive potential revenue stream.
Another potential reason, which builds on the previous one, is the assumption that because of the variety of challenges encompassed by “disability,” we are not capable and thus not worth designing for. For example, some people who have autism have difficulty communicating verbally. An all-too-common misconception is that because someone cannot communicate verbally, they are entirely unintelligent. So, even if they are otherwise competent and may even use an alternative form of communication, their ability to engage with a virtual assistant is hampered by a company’s decision not to offer an alternative to voice interaction. The fact that someone may not be able to communicate verbally doesn’t mean that they can’t or won’t benefit from using Alexa or Siri. By not designing inclusively, companies are denying a lot of potential customers the opportunity to even try.
What are two ways this problem can be solved?
One way this problem can be solved is for the teams behind connected devices and virtual assistants to hold conversations or focus groups with users who have disabilities. Many of us have a lot to say on the topic of user experience and are extremely eager to have our voices not only heard but listened to, but find it hard to speak up on our own.
Another way would be for voice interfaces to be subject to the Web Content Accessibility Guidelines (WCAG), just as keyboard and mouse interfaces already are. The problem with this solution at the moment is that the guidelines in their current state were written with keyboard, mouse, and visual interactions in mind and don’t apply quite so easily to voice interactions.
The rise and prevalence of the Internet of Things presents massive potential to positively impact the lives of people with disabilities. Unfortunately, the industries behind these innovations haven’t entirely caught on to this need. People with disabilities represent a huge population globally, and many of us are avid technology users. It’s profoundly unfortunate that even though those of us with disabilities can arguably benefit more than the general public from IoT’s most mainstream and consumer-ready forms, our needs and abilities aren’t adequately considered in their design, development, and implementation.