Identify the purpose of the AI tool. What task was it designed to do?
Investigate information published by the creator of the tool. Are their training, testing, and validation methods publicly available?
Evaluate the tool’s usage and privacy policies. How will the creators of the tool use the data that you upload to their service? Will your data be used to train future tools? If you're uploading others' works to an AI tool, will their privacy be respected?
Investigate the Training Data
If available, assess the data used to train and test the AI tool. Is it relevant to the task the AI is designed to perform? Does the training data comprehensively represent the domain the tool is meant to cover?
What bias might be introduced by the training data? Do the creators of the tool discuss bias in their own documentation? Do they specify how their training methods attempt to address and reduce bias in outputs?
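If the creators have released their training data, even a quick look at its composition can surface gaps. Below is a minimal sketch in Python, assuming a hypothetical case where the data is published as a CSV file named training_data.csv with a demographic column named group; both names are placeholders, not part of any real tool's documentation.

```python
import pandas as pd

# Load the published training data. "training_data.csv" and the
# "group" column are hypothetical placeholders for the real dataset.
df = pd.read_csv("training_data.csv")

# Share of each demographic group in the training data.
group_shares = df["group"].value_counts(normalize=True)
print(group_shares)

# Flag groups making up less than 5% of the data; a model may
# perform worse for groups it rarely saw during training.
underrepresented = group_shares[group_shares < 0.05]
if not underrepresented.empty:
    print("Potentially underrepresented groups:")
    print(underrepresented)
```

A group that barely appears in the training data is a warning sign that the tool's outputs may be less reliable for that group, which is exactly the kind of bias worth asking the creators about.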
Evaluate the Output
Check whether a third party has evaluated the reliability of the tool. Creators of AI tools often release performance claims as marketing, which can obscure the details needed to judge reliability. Independent, third-party testing is important for a well-rounded evaluation.
Finally, evaluate the output yourself.
Does the tool explain its output? Does it cite sources or provide a justification for its decision?
Consider information unavailable to the AI tool. If you're using an AI to make decisions, does the decision made by the AI tool fit with all the information available to you?
If you are using an AI to learn new information or conduct research, evaluate the accuracy of its output. Fact-check the information and try to verify it with additional, independent sources; one starting point is sketched below.
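One lightweight way to begin verifying a claim is to search an independent reference for its key terms. The sketch below uses the public MediaWiki search API via Python's requests library; the example claim is an illustrative placeholder, and matching article titles are only leads for your own reading, not confirmation that the claim is true.

```python
import requests

def search_wikipedia(claim: str, limit: int = 3) -> list[str]:
    """Return titles of Wikipedia articles matching a claim's key terms."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "search",
            "srsearch": claim,
            "srlimit": limit,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [hit["title"] for hit in resp.json()["query"]["search"]]

# Hypothetical claim produced by an AI tool; read the matching
# articles yourself rather than trusting the search hits alone.
for title in search_wikipedia("Marie Curie won two Nobel Prizes"):
    print(title)
```

A search like this cannot judge truth on its own; it simply points you toward sources you can read and compare against the AI tool's output.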
How to Evaluate Information Generally
Here are the library's tutorials and other online instructional tools on evaluating all types of information.
Evaluating Information
A tutorial on evaluating information, including popular and scholarly, peer-reviewed sources.