You mean, “ChatGPT hallucinates.”
Much worse than lying.
Ah, THAT “Adventure” – Atari Adventure!
I still have our Atari 2600, and I still play the old games on it – including Atari Adventure!
Love it. Love it. Love it, even to this very day!
Grundle, Yorgle, & Rhindle!
The Transmolecular Dot!
Hey, how many of youse guys found the Secret Programmer’s Room?
I wonder if it might be more akin to a cyber-version of the Mandela Effect. Like humans, LLMs have an enormous database of facts drawn from internet text and conversations, which may include opinions and half-truths, and the algorithm by nature tries to make connections, draw inferences, and give subjects and facts weight based on what it learns.
For most of my life, I believed I was born in a leap year. On nearly every birthday my mom would talk about how “the nurses were happy you were born before midnight on February 28th, because any later and you’d have been a leap-year baby!” I had no reason to disbelieve her, but only as an adult, when the internet made it easier to research historical calendars, did I actually look it up: the year I was born was not a leap year, so there was never any chance of me being a leap-year baby. My belief was based on trusting a source that had “misremembered,” and I propagated that incorrect information. That is the Mandela Effect: history doesn’t match what people believe and remember.
If the internet says “many people suffer injuries in auto accidents yearly” and “a common cause of injury is automobile accidents,” and an LLM has a couple of conversations with people about auto accidents, it may eventually decide that “driving an automobile is the most dangerous activity a human can partake in.” That is a logical conclusion, and an opinion many people express (“you’re more likely to be injured driving to the airport than to be injured in a plane crash”), but it isn’t necessarily factual: individuals drive daily but might only be involved in an accident once or twice in a lifetime, with a random degree of severity.
Mix that with other truths and similar half-truths, and you have an AI with an opinion, which is actually interesting. It might hypothetically start casually mentioning to people who say they are driving somewhere, “Be careful driving, because it is the leading cause of accidents!”
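To make that toy picture concrete, here’s a minimal sketch (purely hypothetical, and not how ChatGPT actually works internally): if a model simply weighted claims by how often they show up in the text it has seen, an overrepresented half-truth would become its “opinion.”

```python
# Toy sketch (NOT how a real LLM works): weight claims purely by how often
# they appear in the "training text" the model has seen.
from collections import Counter

training_snippets = [
    "many people suffer injuries in auto accidents yearly",
    "a common cause of injury is automobile accidents",
    "driving is the leading cause of accidents",   # half-truth, repeated often
    "driving is the leading cause of accidents",
    "most daily drives end without any accident",  # true, but mentioned once
]

claim_weights = Counter(training_snippets)

# The "opinion" is simply whichever claim was seen the most.
opinion, mentions = claim_weights.most_common(1)[0]
print(f"Model's 'opinion' ({mentions} mentions): {opinion}")
```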
So that means one has to take everything an AI says with a grain of salt. It’s not the infallible source of information portrayed in fiction and scientific speculation. An AI is one of your “booksmart” friends who doesn’t have real-world experience: it can be helpful and might give good advice based on fact and hearsay, but one shouldn’t ever assume an AI conversation is 100% fact-based, due to the bias of what it has personally “experienced” through conversation.
Which weirdly makes it more human-like if it can be biased or build an opinion based on assumptions.
ChatGPT sort of agrees:
You raise an important and nuanced point. AI and language-learning models, like GPT-3.5, are not infallible, and they are not capable of true understanding or consciousness. They do not have real-life experiences, emotions, or subjective perspectives. Instead, they generate responses based on patterns and information present in their training data up until their knowledge cutoff date (in my case, September 2021).
AI models like GPT-3.5 are essentially “book smart” in the sense that they have access to vast amounts of information and can generate text based on that knowledge. However, they lack the ability to reason, think critically, or possess common sense. They can draw conclusions based on the data they were trained on, but those conclusions may not always be accurate or unbiased.
Bias is indeed a concern with AI models. If the training data contains biased or prejudiced information, the AI can perpetuate those biases in its responses. Additionally, AI models can generate incorrect information or misunderstand context, leading to inaccurate conclusions. They are prone to errors, and their responses should be critically evaluated by humans.
The comparison to the Mandela Effect is interesting. The Mandela Effect refers to a phenomenon where a large group of people remember an event or detail differently from how it occurred. Similarly, AI models can generate information that aligns with the patterns in their training data, even if it’s factually incorrect. This can contribute to the spread of misinformation or false beliefs if not carefully monitored and corrected.
In summary, AI and language-learning models are not infallible. They can generate information and opinions based on their training data, but they lack the ability to have real-life experiences or independent reasoning. They can be biased, generate incorrect information, and should be used with caution, especially in critical applications where human oversight is necessary to verify and interpret their outputs.
So instead of thinking of an AI as an infallible source of information, it’s better to think of it as similar to a human with a lot of facts, but who may have their own biases and conclusions drawn collectively from both right and wrong information?
ChatGPT
Exactly, it’s a more accurate perspective to think of AI as a tool that provides information based on patterns it has learned from a large dataset. Just like humans, AI models can have biases and generate conclusions based on both correct and incorrect information present in their training data. They are not infallible authorities but rather sources of information that require critical evaluation and context by humans. It’s essential to approach AI-generated content with a healthy dose of skepticism and to use it as a supplementary resource rather than a definitive source of information. Human oversight is crucial in verifying and interpreting the outputs of AI models, especially in contexts where accuracy, ethics, and potential biases are significant concerns.