Plain language is an important part of making things accessible for disabled people. We are very worried that people are using artificial intelligence to translate text into plain language without realizing that it cannot do that work correctly. We tested multiple artificial intelligence models, and all of them made big mistakes that changed the meaning of the text. For this and other reasons, we call on other organizations not to use artificial intelligence for plain language translation.
Artificial intelligence is when a computer program does things that normally need to be done by humans. We call artificial intelligence “AI” for short. There are lots of different kinds of AI.
For example: the “Spam” filter for most email accounts was made using AI. The AI looks at your emails and decides which ones are spam. The AI moves the emails it thinks are spam into your Spam folder.
Another kind of AI is a machine translator. Machine translators take text in one language and try to translate it into a different language.
Both of these AI tools can be helpful. But both of these tools also make some mistakes. The Spam filter may accidentally move emails that are not spam. Or the machine translator may make spelling or grammar mistakes.
The way AI works is by using data from people. Data is information a computer can read. AI programs read data made by people to understand what to do.
AI has become a popular topic recently. When people talk about AI today, they usually mean generative AI. Generative AI is a specific kind of AI. Generative AI can use data to make new things.
Some kinds of generative AI you may have heard of are:
- ChatGPT
- Google Gemini
- Midjourney
- Microsoft Copilot
For this paper, we will use an imaginary generative AI in our examples. This AI will be called “BobAI”.
Generative AI can create many kinds of new things, like:
- text
- images
- music and movies
- and more
For example:
Sabine wants to know more about whales. She types to BobAI, “Give me some facts about whales.” BobAI replies with some facts about whales.
BobAI used data people put on the internet to find whale facts. But the text BobAI “wrote” got made right after Sabine asked. So BobAI’s words are “new” words. That is why BobAI counts as generative AI.
Generative AI might seem like a fun and helpful new idea. But there are a lot of problems with generative AI. These problems get even bigger when people use generative AI to write in plain language.
People have already started using generative AI to write plain language. Here is a common example:
Abdul wrote a research paper. He asks BobAI, “Rewrite this paper so it is at a 6th grade reading level.” BobAI gives Abdul the writing it comes up with. Abdul tells others that he now has a “plain language” version of his research paper.
Patty has an intellectual disability. She wants to read Abdul’s plain language paper. But when she tries to, she finds out it doesn’t make sense. It seems like parts of the paper are missing. And the rest of the paper is hard to understand.
In this paper, we will list the reasons why people should not use generative AI to write in plain language. Plain language is an important part of making things accessible for people with disabilities. Using generative AI makes “plain language” that is not actually accessible. We hope anyone who writes in plain language will not use generative AI.
Generative AI changes what things mean
The ways words get used are important. Generative AI changes a lot of words when it tries to write in plain language. Many times, generative AI also changes what something means.
For example:
Yolanda wrote a paper about sheltered workshops. The paper talks about how sheltered workshops are bad. They hurt workers with disabilities by keeping them separate from non-disabled workers.
Yolanda asks BobAI to put her paper into plain language. BobAI rewrites the paper, but it changed some of the words. Now, the paper makes it seem like sheltered workshops are a good thing.
This mistake could have happened because words like “shelter” and “workshop” sound like good things. Or, BobAI could have gotten data from places that think sheltered workshops are good. Either way, BobAI changed what Yolanda’s paper meant.
Here is another example:
As an experiment, staff at ASAN took things we had written and asked a generative AI to lower the reading level. The text we put in was about how autistic people deserve rights. The AI added new ideas that were not in the text we wrote. The AI added sentences about how some autistic people have very rare and amazing talents. This is not something that ASAN would say. We believe autistic people deserve rights even if we don’t have amazing talents.
Generative AI also can’t tell the difference between facts and lies. If generative AI gets data from places that lie, that AI could spread those lies to other people. Sometimes these lies are dangerous. You might have heard of an AI that told people to put glue on their pizza. Generative AI can come up with lies at any time. That’s why it is not a good tool for writing or getting ideas.
ASAN believes that giving information to people with disabilities in plain language is really important. The information in plain language resources should be true. If plain language resources are full of mistakes and lies, they do not give people the information they need.
Plain language is an idea that is too new for generative AI to understand
Generative AI needs a lot of data to do a good job. AI uses data to build a “model”. A model is a guide the AI uses to answer questions.
For example, BobAI needs a model for what a cat looks like. BobAI looks at millions of cat pictures on the internet. Now, BobAI has a model for what a cat looks like. Later, Sabine asks BobAI to make her a cat picture. BobAI makes a new picture of a cat.
Building a model is easier for things like cat pictures. There are lots of cat pictures generative AI can learn from. But there are not a lot of things written in plain language. Generative AI has not learned enough about plain language to make a model for it. That’s a big reason why generative AI makes a lot of mistakes when it tries to write plain language.
Generative AI focuses on words, not ideas
An important part of plain language is reading level. Plain language should use words that most people can understand. That’s why most plain language gets written at around a 6th grade reading level.
But generative AI has problems when it tries to change words to a lower reading level. The AI might take out important ideas that readers need to know. Or, it might turn a word into something that doesn’t mean the same thing.
Another big part of plain language is explaining ideas in more detail. But most people only ask generative AI to “translate” text they already wrote. That means the AI can’t know which parts need more detail. Even if someone asks an AI to add more detail, the details may be wrong.
For example:
Abdul’s research paper has a lot of difficult words. In plain language, difficult words should get a definition. BobAI doesn’t know which words need definitions, or how to find out. BobAI can’t make Abdul a list of definitions for the difficult words.
Abdul can instead ask BobAI to make a definition for each word one by one. But BobAI might not make a good definition. The definition might not mean the same thing as the word does in Abdul’s paper.
Generative AI has discrimination built in
AI gets data from many different places. But most AI programs are made by people who have a lot of power and privilege. Those AI programs mostly get data from people in power, too. That means certain groups of people get left out of the data. Generative AI can end up discriminating against these groups.
Discrimination means getting treated unfairly because of who you are. For example, racism is discrimination against people of color. Ableism is discrimination against people with disabilities.
Generative AI can make things that discriminate against people. Some generative AI can make pictures of people. Studies found that generative AI made pictures of Black people that used racist stereotypes. This is discrimination against Black people.
For example:
Billy asked BobAI for pictures of doctors. BobAI showed pictures of white men in medical coats. Then Billy asked BobAI for pictures of fast food workers. BobAI showed pictures of young Black men in fast food uniforms. Saying that doctors are white and fast food workers are Black is using stereotypes. It is discrimination.
The ways that generative AI writes can also add discrimination into people’s words. We talked earlier about how BobAI changed Yolanda’s paper. This is also a kind of discrimination. Saying sheltered workshops are a good thing is unfair to disabled people.
There is one more big way generative AI discriminates when it tries to write in plain language.
Generative AI has trouble understanding that plain language is for adults. Generative AI assumes only children read at a 6th grade level. So generative AI writes plain language as if it is for children. This is not fair to disabled people. We all deserve to get treated like adults.
Plain language is by and for disabled people
Plain language works because people with disabilities are part of writing it. Disabled people use their lived experience to make sure the writing is actually accessible. Generative AI does not have this experience. AI can never have the experience of being disabled and needing plain language.
Many disabled people write in plain language as their only job. For some, it is the only kind of job they can do because their disabilities make other jobs inaccessible. Generative AI takes jobs away from us. We already know that generative AI does not do a good job writing in plain language. Writing in plain language is a job that should be left to people, especially disabled people.
People with disabilities who need plain language should get to look over plain language work. When people who need plain language get to make edits, the writing becomes more accessible. Working with people with disabilities is the best way to edit plain language. And people with disabilities should get paid fairly for their work.
—
There are other reasons not to use generative AI that we could not fit into this statement. Using generative AI takes a lot of energy and water, which hurts the environment. Some kinds of generative AI also steal other people’s data to use in their models.
We hope you understand why people shouldn’t use generative AI for plain language. But there are other AI tools that can be helpful for plain language. Reading level checkers like Readable or Hemingway are not generative AI. They show specific words or sentences that are at a high reading level. But they don’t replace words for you, or tell you what words to use. It is up to you to make the changes you need to make your writing accessible. These tools are helpful because they give advice, but don’t change things for you. We think people should only use these kinds of AI tools for plain language.
People writing in plain language should always talk to disabled people first. People with disabilities can give feedback to make stronger plain language papers. We do not need AI to do this work when we already know disabled people can. Nothing about us without us!