GenAI tools will not always give accurate results. Seemingly plausible responses can be of poor quality, out of date, biased and, in some cases, just plain wrong, making them unreliable study tools.
Always take steps to verify what an AI tool has produced: check facts, and run author/title searches on Library Search to confirm that any references it provides actually exist. Go to the Evaluating GenAI content section of this guide for more information.
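If you want to check a batch of references programmatically as well as through Library Search, the minimal sketch below queries CrossRef's public REST API, which indexes scholarly works that have registered DOIs. This is only an illustration: the example title is invented, CrossRef does not cover everything, and no match means "verify by hand", not "definitely fake".

```python
# Minimal sketch: check whether a citation's title matches anything in
# CrossRef's public index of DOI-registered works. A match suggests the
# reference is real; a miss means "check by hand", not "fake".
import requests

def crossref_lookup(title, rows=5):
    """Search CrossRef by bibliographic title and return candidate matches."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [(item.get("title", ["(untitled)"])[0], item.get("DOI")) for item in items]

# Hypothetical example: a title copied from an AI-generated reference list.
for found_title, doi in crossref_lookup("The impact of generative AI on student learning"):
    print(f"{found_title} -> https://doi.org/{doi}")
```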
GenAI tools can produce biased results due to the data that has been used to train them. Always check content for bias:
Gender bias: GenAI tools can reinforce gender stereotypes, such as favouring men for science roles or portraying women in caregiving roles. For example, job application sifting algorithms that use AI could prioritise male candidates for technical jobs.
Racial and ethnic bias: Facial recognition systems trained on non-diverse datasets are less accurate for minority ethnic groups, leading to misidentifications and potential harm, such as wrongful arrests.
Cultural bias: GenAI may fail to account for cultural nuances, such as dialects or traditions, resulting in unfair outcomes.
Confirmation bias: GenAI can perpetuate existing beliefs by favouring data that aligns with prior assumptions. For instance, loan approval algorithms that use GenAI may unjustly favour certain demographics because of historical biases in the data (a toy illustration appears in the sketch after this list).
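To see how historical bias leaks into a model, here is a toy sketch with invented data. A naive "loan approval" model that only learns past approval rates per group will simply reproduce whatever unfairness is baked into its training data:

```python
# Toy sketch of historical bias: the "training data" below is invented for
# illustration. Past decisions favoured group_A, so a naive model that
# learns per-group approval rates reproduces that bias for new applicants.
from collections import defaultdict

historical_decisions = [
    ("group_A", "approved"), ("group_A", "approved"), ("group_A", "approved"),
    ("group_A", "denied"),
    ("group_B", "approved"),
    ("group_B", "denied"), ("group_B", "denied"), ("group_B", "denied"),
]

# "Training": count approvals and totals per group.
counts = defaultdict(lambda: {"approved": 0, "total": 0})
for group, decision in historical_decisions:
    counts[group]["total"] += 1
    counts[group]["approved"] += decision == "approved"

def predict(group):
    """Approve if the group's historical approval rate exceeds 50%."""
    c = counts[group]
    return "approved" if c["approved"] / c["total"] > 0.5 else "denied"

# Otherwise identical applicants get different outcomes.
print(predict("group_A"))  # approved
print(predict("group_B"))  # denied
```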
Look at the images below. A GenAI tool was prompted to produce pictures of a university student asking their librarian for help. It looks like all the librarians are female and all the students are male!
Training sources
Very few GenAI tools provide information on the sources used to train them (there are exceptions). Unlike Library Search, they cannot access resources that are behind paywalls or subscriptions; instead they rely on the information and data they were trained on.
GenAI tools do not know whether the content they produce is true! Many cannot tell you whether their content is up to date, and some cannot even say when they were last updated.
Hallucinations
Remember that GenAI tools predict words; they do not actually search for resources. They are designed to sound plausible but cannot distinguish between accurate information and fiction.
They often generate references as part of the content produced in response to a prompt. It is important to check these carefully: they can appear accurate but often don't actually exist. These fabricated references are known as "hallucinations".
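If you are curious how "predicting words" can produce fluent nonsense, the toy sketch below learns only which word tends to follow which in a tiny invented training text, then generates from those statistics. It has no notion of truth, only of what sounds likely, so it can happily produce a journal title that does not exist:

```python
# Toy next-word predictor (a bigram model): each word is chosen based only
# on which words followed the current one in the training text.
import random
from collections import defaultdict

training_text = (
    "the library holds many journals the library holds many books "
    "the journal of imaginary studies holds many articles"
)

# Learn which words follow which.
followers = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
# e.g. "the journal of imaginary studies holds many books" --
# fluent and plausible, but citing a journal that does not exist.
```

Real GenAI tools are vastly more sophisticated, but the underlying point holds: output is assembled from statistical patterns, not retrieved from a checked source.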
Transparency
There is often no way to find out what a GenAI tool has used to generate a response, and it is often difficult to trace the source of the information. This makes it hard to assess the accuracy and bias of the content produced.
Some tools, such as Copilot and Perplexity, can search for sources. Always check whether the sources are real, up to date and of a high enough standard to use (e.g. Copilot can return a lot of company websites rather than journal articles).
Watch the two short videos below. One demonstrates a GenAI tool providing a false reference to a non-existent journal article (a hallucination); the other shows a GenAI tool admitting it does not have access to databases.