
Like other sources, AI-generated output must be evaluated for accuracy, credibility, currency, bias, and relevance.
AI-generated content may include inaccurate information. It may cite sources that don't exist, or it may draw conclusions based on flawed training data. It's important to check AI-generated content against other trusted sources. Do not use sources cited by an AI tool without reading those sources yourself.
Some questions to ask as you evaluate AI-generated content: Is the information accurate? Is it current? Is it relevant to your question? Does it reflect bias? Can you trace its claims to credible sources?

Lateral reading is an evaluation strategy in which you consult other sources to confirm the claims made by the source you are evaluating.
Here's how to fact-check something you got from ChatGPT or another AI tool:
(Image and content from the University of Maryland Fact-Checking AI guide.)
Here's a real-life example of a fake citation generated by an AI tool:
Baker, C. K., Niolon, P. H., & Oliphant, H. (2021). Linking gender-based violence and housing instability: Expanding solutions for survivors. American Journal of Preventive Medicine, 61(1), 121-129.
It looks like an appropriately cited scholarly journal article, right?
A search of Google Scholar did not turn up this article, and a web search confirmed that no article with this title and page range exists in that issue of the journal. Parts of a fabricated citation may be correct (real authors, a real journal), but if the article itself doesn't exist, you've been duped.
©2024 St. Catherine University Library, St. Paul, Minnesota, USA
