Source Link: https://link.springer.com/article/10.1007/s10676-024-09775-5
Reading the article "ChatGPT is Bullshit," I related it to the questions "How does this article relate to, and what does it say about, the interaction between humans and technology and its effect on cognition?" and "How do LLMs affect cognition?" How we think and talk about what AI and other technology does, and how it works, matters: it shapes how we treat and use it.

The article argues that the output of LLMs such as ChatGPT should be considered bullshit in the technical sense. When they give factually inaccurate information, they are not making mistakes; they are simply not designed to consider truth in the first place. LLMs are built to convincingly mimic human speech and writing by stringing words together in the most likely order given the context and their training data, not to distinguish fact from fiction or to provide accurate information. Truth is not a concept they were built to serve, yet despite these limits they are being applied, dangerously, in contexts where truth matters.

The article also addresses how the words currently used to describe LLM untruths, such as lies, hallucinations, and confabulation, ascribe too much personhood and falsely imply that LLMs are oriented toward truth at all. In reality, LLMs function as designed whether their output is accurate or not; in both cases what they produce is bullshit, created to sound convincing but indifferent to truth. The article further describes how this language around AI and LLMs also steers attempts to solve the accuracy problem in unproductive directions.

Overall, the article shows how perception and language influence each other in our approach to technology (and, by extension, other areas), and how more careful language choices around technology can clarify our sense of its limits. It also makes clear how these misunderstandings of the nature of LLMs and AI (and likely of other technology as well) can increase the burden on users. The general public does not have a comprehensive understanding of how LLMs function or what they are designed to do, and so, encouraged by the language used around them, people use them expecting a source of accurate information. When LLMs are promoted and used that way, it is misleading, and it places the burden of questioning and fact-checking on the user, who receives what looks like the content they requested but with no guarantee of any substance behind it. This brings us back to the question of whether this really saves people any energy, and of which uses decrease rather than increase human cognitive load. In their current form, LLMs seem suited only to producing work in which truth is not a relevant factor.