Unlike search engines, GenAI tools do not retrieve information published on the internet. Instead, they use the text and data they have been trained on to predict patterns and produce new content such as text, video or images in response to prompts.
It is impossible to know for certain which sources have been used to train GenAI tools, but their training data rarely includes resources such as the subject databases, ebooks and journals the Library provides access to.
When a GenAI tool is asked a question or given a prompt, rather than signposting you to a website, it generates new content in response to what it has been asked. Essentially, it is predicting what a plausible answer looks like, with no built-in way of checking whether the end result is true.
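If it helps to picture this, the short sketch below shows the idea of pattern-based prediction in miniature: a toy model that simply picks a statistically likely next word. It is an illustration of the principle only, not how any real GenAI tool is built, and the words and probabilities in it are invented for the example.

```python
import random

# Toy illustration only: for each preceding word, the probability of possible
# next words. Real GenAI models learn billions of such patterns from their
# training data rather than using a small hand-written table like this.
next_word_probabilities = {
    "the": {"library": 0.4, "internet": 0.35, "answer": 0.25},
    "library": {"provides": 0.6, "is": 0.4},
}

def predict_next_word(word: str) -> str:
    """Pick a likely next word: a guess based on learned patterns,
    with no check on whether the resulting sentence is true."""
    options = next_word_probabilities.get(word, {"...": 1.0})
    words = list(options.keys())
    weights = list(options.values())
    return random.choices(words, weights=weights)[0]

print(predict_next_word("the"))  # e.g. "library" or "internet"
```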
Go to the What to watch out for section of this guide for more information.
Watch this short video for more information on how GenAI works:
Coursera. (2024, July 30). How does GenAI work? [Video]. YouTube. https://www.youtube.com/watch?v=VkGYCHHUo84
The table below provides a non-exhaustive overview of the wider impact of GenAI use.
| Issue | What is being done? | What can we do? |
| --- | --- | --- |
| Biased outputs: GenAI platforms are trained on large datasets that often reflect existing cultural, political and social biases. As a result, they can produce outputs that reinforce stereotypes, misrepresent certain groups or exclude marginalised voices. | Developers are investing in techniques to mitigate bias in training data and outputs (e.g. refining models with more representative or synthetic datasets). Standards and guidelines (e.g. UNESCO's Recommendation on the ethics of AI) are calling for stronger regulation and ethical governance of AI development. | Use critical thinking when using GenAI. Recognise how algorithms can reinforce existing inequalities and bias. Seek out a diverse range of perspectives to counterbalance the limitations of GenAI outputs. |
| Environmental impact: Training and running GenAI tools consumes vast amounts of energy, which in turn contributes to carbon emissions, raising questions about the sustainability of large-scale AI development. | Scaling up of renewable energy and lower-carbon approaches to cool and power the data centres that support GenAI workloads. Investment in green energy by technology corporations to help offset emissions. Development of smaller, more efficient models and workflows aimed at reducing the energy intensity of tasks. | Use GenAI selectively and purposefully, avoiding overuse or unnecessary prompting. Minimise repeated or high-intensity tasks that demand large computing resources. Delete unnecessary GenAI outputs to reduce data storage emissions. |
| Job displacement: As GenAI tools advance, some roles - particularly those involving repetitive tasks, content generation or data processing - are at risk of automation. This can lead to job losses and the widening of social inequality. | Policymakers and advocacy groups (e.g. the United Nations and the World Economic Forum) are calling for "just transition" strategies, supporting affected workers with reskilling, job protection and digital inclusion. Researchers and non-governmental organisations (NGOs) are challenging exploitative labour practices in AI supply chains, prompting companies to improve working conditions and transparency. | Recognise the value of transferable, human-centric skills (creativity, communication, critical thinking, etc.) that complement rather than compete with GenAI. Use digital literacy skills to understand how technology intersects with other social justice practices (internationalisation, sustainable development, etc.). |