
Using and Evaluating AI Tools

Tools and strategies for considering, using, and evaluating artificial intelligence (AI)

AI Tools

Artificial intelligence is a broad term that encompasses a number of technologies that attempt to simulate human intelligence, problem-solving, and decision-making. It’s likely that you interact with AI tools throughout your daily life, including voice assistants like Siri, video recommendations on YouTube, or even generative AI like ChatGPT or Midjourney. These are examples of Narrow (or Weak) AI, which is designed to complete specific tasks rather than simulate the whole of human consciousness.

General (or Strong) AI is a goal in the development of AI, but there are currently no AI tools that qualify as Artificial General Intelligence (AGI). Just because a tool appears to present general intelligence and mimic human cognitive functions does not mean that it actually does. For example, if you’ve used ChatGPT, you might find it to be a pretty convincing conversation partner. It can accomplish this narrow task because it’s a Large Language Model (LLM). LLMs use large datasets of human-generated text to model language and the relationships between words, concepts, and their contexts. These models can then be used to answer questions, generate content, and create pretty useful chatbots. But it’s important to recognize that tools like ChatGPT are not examples of AGI and are not simulating human consciousness. Rather, they are simulating relationships between words and concepts found in their training datasets.
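To make the idea of "modeling relationships between words" more concrete, here is a minimal sketch in Python of a word-pair (bigram) model trained on a tiny made-up sentence. Real LLMs use neural networks trained on billions of documents and track far richer context than adjacent word pairs, but the underlying principle is similar: the model learns which words tend to follow which in its training data, then samples from those learned patterns to generate new text. The corpus string below is a hypothetical example, not real training data.

```python
import random
from collections import defaultdict

# Toy "training corpus" (a hypothetical example sentence); real LLMs
# train on billions of documents, not a dozen words.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Learn which words follow which in the training text.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate text by repeatedly sampling a plausible next word --
# every pattern the model "knows" comes from its training data.
word = "the"
output = [word]
for _ in range(8):
    if word not in following:
        break
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))
```

Even in this toy version, the output can only recombine patterns present in the training text, which is why the biases and gaps of a model's training data matter so much.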

Looking for a specific AI tool to use for your own teaching and learning? Ithaka S+R maintains a list of higher-education-focused AI tools, which is a great place to start. You might also consider the tools below.

When selecting an AI tool, it’s important to consider the ethics surrounding the use of AI and to create a plan for evaluating the tools you intend to use.

Ethical Considerations

Bias

Bias refers to a preference for or against a thing, a person, a group, or an idea. As people with our own rich experiences, thoughts, and perspectives, we are all prone to bias at times, often without realizing it! Though using generative AI tools might feel like you are interacting with a computer, these tools, and the materials used to train them, are all made by people and will reflect the biases of their creators, whether intentionally or unintentionally.

When using generative AI tools, it is important for you as a user to be mindful of both your own biases and the potential algorithmic bias in the tools you’re using. Biased algorithms like those used in generative AI can have serious consequences for real people. For example, AI-based hiring tools have been shown to disproportionately reject female applicants, and AI-driven predictive policing models have been shown to disproportionately target people of color.

If using AI, consider having students reflect on what bias might be present, for example, in how the dataset was generated or how the question or prompt is framed for the AI tool.

Environment

We often think about the use of digital technology as primarily virtual, but in reality, generative AI tools rely on a vast network of physical infrastructure to function!

Training and operating an AI system is extremely costly in terms of energy use, which emits greenhouse gases; computing power, which requires massive quantities of water to cool servers; and rare earth minerals, the extraction of which has long been connected to human rights violations and environmental destruction. As large language models grow bigger and bigger, they use more and more resources to develop and operate. In one study, the authors found that the carbon footprint of training a single large language model is equivalent to around 300,000 kg of carbon dioxide emissions.

Academic Integrity

While the role of AI in the classroom and in academic conversations is still up for debate, many would consider the practice of including unattributed AI-generated content in your academic work to be unethical, similar to plagiarism. For more information see the CSUDH Academic Integrity Policies.

Copyright and Intellectual Property

The reason that generative AI is able to create output that looks and reads like it was made by a human is that these tools are trained on the creative works of real people. In many cases, the original creators and/or copyright holders of these works did not give permission for this use and were not compensated for it. This includes major deals in which scholarly publishers (e.g., Wiley and Taylor & Francis) licensed already-published works for AI training without even notifying the authors.

Labor

The work of generative AI isn’t just digital: most AI tools rely on human labor to function. Besides the creators of the algorithms and interfaces we typically associate with AI tools, there are countless other people whose labor makes AI possible. This includes the creators of the materials AI scrapes from the open web to learn from, users of tools like ChatGPT whose queries and prompts are used to train AI models, and workers (primarily in the Global South) doing content moderation work for less than $2 per hour.