If you cannot take AI-cited sources at face value and you (or the AI's programmers) cannot determine where the information comes from, how can you assess the validity of what AI is telling you? Here you should use the most important method of analysis available to you: lateral reading. Lateral reading means leaving the AI output and consulting other sources in order to fact-check what the AI has provided in response to your prompt. You can think of this as “tabbed reading”: moving laterally away from the AI information to sources in other tabs rather than just proceeding “vertically” down the page of AI output alone.
This video from the University of Louisville Library shows you how to sort fact from fiction online.
What does this process look like specifically with AI-based tools? Learn more in the sections below.
Lateral reading can (and should) be applied to all online sources, but you will find fewer pieces of information to assess when working with AI. While you can typically corroborate an ordinary online source by searching for its publication, funding organization, author, or title, none of these pieces of information is available to you when assessing AI output. As a result, it is critical that you read several sources outside the AI tool to determine whether credible, non-AI sources can confirm the information the tool returned.
With AI, instead of asking “who’s behind this information?” we have to ask “who can confirm this information?” In the video above, lateral reading is applied to an online source with an organization name, logo, URL, and authors whose identities and motivations can be researched and fact-checked against other sources. AI content has none of these identifiers; its output is a composite of multiple unidentifiable sources. This means you must examine the factual claims in AI content and judge the validity of the claims themselves rather than the source of the claims.
Since AI output is not a single source of information but a blend of multiple sources that may be both factual and false, you will find it useful to break AI output into smaller pieces of information that can be evaluated independently of one another. For instance, let’s see what happens when we ask ChatGPT to respond to the prompt "who are some famous alumni of Wayne State University, Detroit, who graduated with degrees."
Wayne State University in Detroit has produced numerous notable alumni who have excelled in various fields. Here are some famous graduates:
[ChatGPT’s response continued with a list of alumni that included Jerry Bruckheimer, James Lipton, Jeffrey Eugenides, David Schwimmer, and Gilda Radner.]
These individuals have made significant contributions to their respective fields, showcasing the diverse and successful paths Wayne State University alumni have taken.
AI's response illustrates a critical point about AI: it will take what you provide and try to answer your question as best it can, but it will NOT fact-check you or spot incorrect assumptions in your prompt. For example, Jerry Bruckheimer is from Detroit, Michigan, but the AI is incorrect in stating that he is a WSU alumnus. Cross-checking information from other sources verifies his college education: Bruckheimer graduated from the University of Arizona. James Lipton did attend Wayne State University, but further reading about him reveals that he stopped attending after his first year and did not graduate.
The AI made additional errors by identifying Eugenides, Schwimmer, and Radner as Wayne State University graduates when they are not, leaving this response only 50% correct: a failing grade. These are examples of the AI “hallucinating” seemingly factual answers that sound quite plausible but prove unfounded after some quick lateral reading.