To understand how AI works, it helps to define our terms. The following terms are commonly associated with generative AI. All definitions are provided by Merriam-Webster Online.
Algorithm: A procedure for solving a mathematical problem in a finite number of steps that frequently involves repetition of an operation. Broadly: a step-by-step procedure for solving a problem or accomplishing some end.

Generative AI: Artificial intelligence that is capable of generating new content (such as images or text) in response to a submitted prompt (such as a query) by learning from a large reference database of examples.

Hallucination: In computing: a plausible but false or misleading response generated by an artificial intelligence algorithm.

Large language model (LLM): A language model that utilizes deep methods on an extremely large data set as a basis for predicting and constructing natural-sounding text.

Machine learning: A computational method that is a subfield of artificial intelligence and that enables a computer to learn to perform tasks by analyzing a large dataset without being explicitly programmed.

Neural network: A computer architecture in which a number of processors are interconnected in a manner suggestive of the connections between neurons in a human brain and which is able to learn by a process of trial and error.

Prompt engineering: The craft of creating specific instructions or questions to get the desired output from a generative AI chatbot.
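To make the definition of an algorithm concrete, here is a classic example: Euclid's method for finding the greatest common divisor of two numbers. It is a step-by-step procedure that repeats a single operation a finite number of times, exactly as the definition describes. (This example is illustrative and is not drawn from the definitions above.)

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, step-by-step procedure
    that repeats one operation until it reaches an answer."""
    while b != 0:
        # Repeat the same operation: replace (a, b) with (b, remainder)
        a, b = b, a % b
    return a

print(gcd(48, 18))  # prints 6
```

Each pass through the loop shrinks the problem, so the procedure is guaranteed to finish in a finite number of steps.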
Generative AI is a specific type of artificial intelligence that can generate new content, such as text, images, or code, in response to human-generated prompts or questions. The type of generative AI that has received the most attention is the large language model, or LLM (e.g., ChatGPT). These models are built on neural networks--multi-layered computer architectures modeled after the human brain, which can learn through trial and error by recognizing predictive patterns in large sets of data, a process known as deep learning.
Generative AI models are fed sets of human-generated content and can then respond to prompts by predicting what comes next based on the patterns they see in those large data sets. They cannot apply critical thinking to eliminate inaccurate data or understand why something defies common sense. To get quality responses from a generative AI model, you need both large, accurate, and useful datasets and well-crafted prompts.
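The "predict what comes next" idea can be shown with a toy sketch. This is a drastic simplification, not how a real LLM is built: it simply counts, in a tiny training text, which word most often follows each word, then "predicts" the most frequent follower. Notice that it has no understanding of the words at all; it only reproduces patterns.

```python
from collections import Counter, defaultdict

def train(text: str) -> dict:
    """Count which word follows each word in the training text."""
    words = text.split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model: dict, word: str) -> str:
    """Return the statistically most common follower of a word."""
    return model[word].most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # prints "cat" (it follows "the" twice)
```

A real LLM works with billions of examples and far richer patterns than word pairs, but the core behavior is the same: it outputs what its training data makes statistically likely, not what it has verified to be true.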
Because it is designed to generate new content--content that follows pre-existing patterns--it can make up facts that feel true but are actually false. These are called hallucinations. Since we can't guarantee that the information we get from generative AI is factually correct, we should always fact-check AI claims using reliable sources like those found in the Oakton Library Databases.
Hallucinations are not the only problem with AI. There are also problems with biases in the datasets, copyright infringement, impact on labor, environmental concerns, and web security. This guide discusses these issues in more depth on the AI Ethics page.
As lay participants, we have two main points of interaction with LLMs. First, the information we put on the web becomes part of the data they train on: when you post content to social media, or when you submit your essays to be corrected by generative AI, you are feeding content to LLMs. Second, when you input prompts to AI chatbots, you receive the output generated from those prompts.
Determining how useful LLM output is to research is a multi-step process. If you use AI for information, you should evaluate where its claims come from and verify them against reliable sources.
When you search Oakton Library databases for articles and sort through the results for relevant information, you critically engage with texts, learn how to recognize quality resources, and practice analyzing the writing of others for readability and usefulness. You not only gain information about the topic you are researching, but you learn how to become a researcher and are exposed to good writing.
When you surrender this aspect of your writing process to generative AI, you don't learn the full breadth of what sources exist, only what sources the AI tool predicts best match your prompt. AI is not carefully weighing your options; it is predicting likely patterns. Additionally, it can only ever give you the resources that are in its datasets. It is less like Wikipedia than it is like a Google search. You would not use a Google search results list as the foundation of your research paper; you would follow the results to proper sources, noting which were reliable and accurate.
When you write a research paper, you learn how to structure persuasive arguments, how to use evidence to support your main points, and how to work out in real time what you think about a given issue. Your unique phrasing and viewpoint are like your signature. When read by your professor, your voice comes through in your writing.
Just as basic math skills are the foundation for more advanced math, these basic writing skills are the foundation for more advanced writing. You will need these skills not just for college, but beyond. If you have AI write your essays, discussion posts, or papers, you lose the chance to become a more skilled and effective communicator. When you ask it to smooth the language of writing you've already done, you submerge your voice in favor of the most predictable phrasing.
LLMs are useful whenever processing big datasets and predicting patterns is needed. In specific situations, like cancer diagnosis, recognizing patterns can help lead to early detection and even suggest treatments. In this case, the data the LLM uses is highly accurate and specific, and the prompts are directed by experts for a targeted application.
Another application for pattern prediction is assistive speech technology. Real-time communication assistance for individuals with speech limitations and more accurate caption generation are positive applications of these forms of generative AI; so are generating alt text for online images for readers with vision impairment and providing executive function support for many learners with disabilities.
These are just a couple of examples of what is possible when this technology's functions are smartly harnessed.