Media Literacy

What is generative AI?

Generative AI is a type of Artificial Intelligence (AI) that uses machine learning techniques to create content. Generative AI tools "learn" by processing large amounts of data, such as text and images from the internet, and intuiting patterns from this data to generate new content. (This definition is from the Generative AI research guide.) 

Detecting AI in writing

Large Language Models (LLMs), such as ChatGPT, are machine learning models (a form of artificial intelligence, or AI) trained on large text-based data sets, including literature, online reviews, blog posts, comments, and other writing. Because these models are continually trained on new and diverse types of writing, the writing they produce varies over time and is steadily improving. Prompts given to LLMs can specify tone, length, format, and many other factors, adapting the writing to a wide variety of purposes. It can be very difficult to pinpoint what is and is not AI writing, but some techniques can help.

AI detection software is highly unreliable. AI detection tools have flagged historical documents such as the Declaration of Independence as AI-written, and they often fail to detect material that actually was produced by LLMs. They also frequently flag writing by English language learners as AI-generated when it is not. It is better to rely on your own judgment to determine what is or is not AI-generated.

The following factors do not guarantee that a piece of writing is AI-generated, but are typical hallmarks of AI writing: 

  • Hallucinations: Some LLMs, when prompted to include citations or other indications of sources, will supply incorrect sources or sources that do not exist. These fabrications are called "hallucinations," and their presence is an indication that the piece was written with AI. To check whether sources in a suspect text are real, evaluate them using techniques such as lateral reading or SIFT. If you are using an LLM yourself, conduct this evaluation outside of the LLM platform; asking the model to verify its own sources can produce further hallucinations.
  • Phrasing: Some AI writing will include nonsensical words or ideas. There may also be instances where the use of a particular phrase or saying is not entirely correct. Because LLMs are predictive models, they may deploy incomplete two-part phrases or use turns of phrase in a strange or unnatural way. 
  • Repetition: LLMs may use particular words and phrases repeatedly. If a word or phrase is used multiple times within a short piece, it may be an indication of AI written content. 
  • Tone and context: LLMs will produce writing based on the prompts they are given, which can include tone and language. Asking an LLM to produce an academic piece will result in writing vastly different from a query requesting a blog post. However, LLMs are not always able to produce an appropriate response for the context of a piece. 
  • Speed of response: It takes only seconds for an LLM to produce paragraphs of text. If text is being sent in a chat faster than any person could type, it could be a sign that the person on the other end is working from a pre-written template or that the chat is run by AI. 
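The repetition check above can be partly automated. The following is a minimal sketch (not an AI detector) that counts repeated multi-word phrases in a passage; the function name, sample text, and thresholds are illustrative assumptions, and a match is only a weak signal, never proof of AI authorship.

```python
from collections import Counter

def repeated_phrases(text, n=4, min_count=2):
    """Return n-word phrases that appear at least min_count times.

    Frequent repetition of the same phrase in a short passage is one
    weak signal of AI-generated text; human writing repeats too.
    """
    words = text.lower().split()
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)
    return {phrase: c for phrase, c in counts.items() if c >= min_count}

# Illustrative sample text (made up for this sketch).
sample = ("In today's fast-paced world, technology matters. "
          "In today's fast-paced world, change is constant.")
print(repeated_phrases(sample))
# → {"in today's fast-paced world,": 2}
```

A more careful version would strip punctuation and ignore common stock phrases, but even this simple count can surface the kind of word-for-word repetition described above.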

Detecting AI in images

Similar to LLMs, AI image generators will produce images based on written prompts. They can emulate various artistic styles and formats. It can be difficult to distinguish what is or isn't AI-generated, but there are a few details to look out for:

  • Multiple items or appendages where they are not expected: One hallmark of AI-generated images is the inclusion of extra fingers, teeth, or other such details. Hands may have fewer or more fingers, for example, or a smile might have more teeth than expected. 
  • Asymmetry and incongruency: Details that you might expect to be mirrored exactly may not appear in AI-generated images. Details you would expect to be repeated may be different shapes or unevenly placed. Buttons on a coat may be placed seemingly at random along the coat's opening, for example, or a butterfly's wings may not look the same from one side to the other.
  • Blended details: AI-generated images often contain details that blur together, such as architecture fusing with a piece of furniture or a strand of hair seeming to melt into a jacket. 
  • Lighting and texture: Many AI-generated images use warm colors that seem to glow within the painting. Textures within the image may also appear unnaturally smooth. 
  • Image does not return any results in a reverse image search: If no record of the image exists and there are no signs of authorship, it may be AI-generated.
  • Context: Does the image show something impossible or outlandish (like a dragon reading a book in a non-existent Princeton library building)? If so, you may be looking at an AI-generated image. 


This image was created in Microsoft's AI image generator, Copilot, using the prompt "a dragon wearing sunglasses reading in a Princeton library." Note the asymmetry in the window panes and the strange side table toward the left of the image. 

[Image: a dragon wearing sunglasses while reading in a room labeled "Princeton"]

Spotting AI video content

This video from the Australian Broadcasting Corporation includes both simple and advanced methods to determine whether or not a person in a video is AI-generated. 

AI resources at Princeton