
Artificial Intelligence and Inclusion


After I was diagnosed with Parkinson's six years ago, I resisted looking into Diversity and Inclusion (D&I) for a long time, specifically because I knew I would periodically wind up being labelled "disabled" by the world we inhabit. Nor did I want to be the "disability" analyst. But that's stupid. For one, it's my reality; and two, if I have a voice, why not lend it to educating others about a concept that confuses so many people?

I ran across an amazing article called Disability, Bias, and AI, produced last month by the AI Now Institute at NYU. It reports and builds on ideas from a meeting sponsored by NYU and Microsoft that took place on March 28, 2019. The meeting included “disability scholars, AI developers, and computer science and human-computer interaction researchers to discuss the intersection of disability, bias, and AI, and to identify areas where more research and intervention are needed.”

Almost every sentence in this article provided fodder for blogging, so I’m sure this isn’t the last time I’ll write about it, but if you’re at all interested in D&I, AI, or the disability community, you should check it out. Here are three important takeaways that struck me.

Disability isn’t just disability.

I was watching a television show once and the host said that the only minority anyone can join in an instant is the disability community. How true. Young, old, black, white, male, female—any of us can be disabled from birth, or temporarily or permanently disabled at a moment's notice for any of a million reasons. And when we generalize about the "disability community," we forget just how different everyone in that group is. We not only have unique issues and distinct combinations of issues, but we also carry with us our other identities, such as race, gender, or creed. The AI Now report quotes scholar Meryl Alper: "[o]ne billion people, or 15 percent of the world's population, experience some form of disability, making them one of the largest (though also most heterogeneous) groups facing discrimination worldwide."

AI amplifies data.

AI is all about taking historical data, applying models and algorithms, and making recommendations. And as AI grows more commonplace, it will start to "help" us make hiring decisions, legal decisions, medical decisions and more. In the past, people with disabilities have often been invisible, and mainstream society has excluded them entirely. Because many disabilities fall outside the "normal" range of the population, these outliers are sometimes cast out of data sets as well. That is the best-case scenario. At worst, people with disabilities are marked as "abnormal" or "aberrant" in the data models, and when those models are amplified, the results discount or marginalize them further. To quote the article:

These systems, often marketed as capable of making smarter, better, and more objective decisions, have been shown repeatedly to produce biased and erroneous outputs, from voice recognition that doesn’t “hear” higher-pitched (i.e., “more feminine”) voices to diagnostic systems that work poorly for people with dark skin to hiring algorithms that downgrade women’s résumés.
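To make the "cast out of data sets" point concrete, here is a deliberately simple sketch of my own (not from the report, with made-up numbers): a routine outlier-filtering step, of the kind used in everyday data cleaning, silently drops the one person whose behavior doesn't match the majority.

```python
# Illustrative sketch only: how a common "cleaning" heuristic can silently
# remove the rare users a model most needs to know about. The feature,
# values, and 2-standard-deviation cutoff are all hypothetical.
import numpy as np

# Hypothetical feature: seconds taken to complete a task. Most users
# cluster around 30 seconds; one user, perhaps relying on assistive
# technology, takes 240.
times = np.array([28, 31, 29, 33, 30, 27, 32, 30, 29, 31, 240])

# Routine outlier filter: keep only points within 2 standard deviations
# of the mean.
z_scores = (times - times.mean()) / times.std()
kept = times[np.abs(z_scores) < 2]

print(kept)  # the 240-second observation is gone
# A model trained on `kept` never sees that this user exists, so every
# downstream "recommendation" is built on data that excludes them.
```

Nothing in that pipeline is malicious; the exclusion happens quietly, which is exactly what makes it hard to catch.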

Automating mistakes.

The really scary part about AI decision-making is that when a decision starts out horribly wrong, further use and repetition only cement the mistake, in ways that can cause not only psychic harm to people with disabilities but actual physical harm and even death. Take this passage, for example:

Scholar Jutta Treviranus discusses testing an AI model designed to guide autonomous vehicles, hoping to understand how it would perform when it encountered people who fell outside the norm and "did things unexpectedly." To do this, she exposed the model to footage of a friend of hers who often propels herself backward in a wheelchair. Treviranus recounts, "When I presented a capture of my friend to the learning models, they all chose to run her over. . . . I was told that the learning models were immature models that were not yet smart enough to recognize people in wheelchairs. . . . When I came back to test out the smarter models they ran her over with greater confidence. I can only presume that they decided, based on the average behavior of wheelchairs, that wheelchairs go in the opposite direction."
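That "greater confidence" is the repetition problem in miniature. As a deliberately toy sketch of my own (not the study's code, and nothing like a real driving model): a majority-vote "model" trained on data where nearly everyone moves forward, then retrained on its own outputs, loses the one backward-moving example entirely.

```python
# Toy illustration only: how retraining on a model's own outputs cements
# an early mistake. The "model" here is just a majority vote.
from collections import Counter

# Hypothetical training data: the direction observed wheelchairs move.
# Ninety-nine examples go forward; one rare user moves backward.
observations = ["forward"] * 99 + ["backward"]

def fit(data):
    # Stand-in "model": predict whatever the majority of the data does.
    return Counter(data).most_common(1)[0][0]

model = fit(observations)
print(model)  # 'forward' -- the rare behavior is already outvoted

# Each round, everything is relabeled with the model's own prediction and
# the model is refit on those labels, so the minority signal disappears.
for _ in range(3):
    relabeled = [model for _ in observations]
    model = fit(relabeled)

print(model)  # still 'forward', now with no trace of 'backward' left
```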

AI is big business, and it is clearly here to stay. But as with all technology, the old saying holds true: garbage in, garbage out. If data models treat certain groups only as outliers, deviations from the "norm" to be discarded, can we trust the decisions those models make? I love data, and I think data-driven decisions, including those aided by AI, are important. But remember, "AI" should also stand for "another input" as we work out the guidelines for exactly how much of our future can and should be automated.
