Plausible Nonsense and Carbon Footprint: Micro and Macro Ethics of Generative AI in the Classroom

A very common use case for ChatGPT, especially among students, is to pose questions in order to learn something new or to seek information. More generally, one expected benefit of AI-powered chatbots is to offer real-time tutoring to students at scale. However, text-based generative tools have a severe inherent limitation with respect to this type of task: they generate false information that appears very convincing. We will call this phenomenon “Plausible Nonsense” rather than the frequently used term “hallucination”, which can be misleading, both because the underlying phenomenon is different from human hallucinations and because the term conveys the idea that these systems “think” like humans. In an educational context, Plausible Nonsense is an issue to be taken seriously. But what exactly is the risk?

One reason Plausible Nonsense is so hard to detect is that humans tend to over-trust automated systems, a long-documented phenomenon called automation bias (see e.g. Suresh et al., 2020). Unfortunately, all tools that generate text with artificial neural networks are prone to Plausible Nonsense to varying degrees, including translation tools (e.g. Xu et al., 2023) and summarization tools (e.g. Choubey et al., 2023). Although Plausible Nonsense has become less frequent in more recent models, it is unclear when, or even whether, the issue can be fixed.

Depending on the estimates, training the AI may account for only approximately 20% of the tool’s total carbon footprint; the bulk of the impact comes from its usage (Patterson et al., 2021).

https://www.sefi.be/2023/10/14/plausible-nonsense-and-carbon-footprint-micro-and-macro-ethics-of-generative-ai-in-the-classroom/