siderea@universeodon.com

Dear everybody re ChatGPT etc,

The word you need that you don't know you need is CONFABULATION.

What y'all are calling "hallucination" is, in neurology and psychology (where it means two slightly different things), called "confabulation".

It means when somebody's just making up something and has no idea that they're making things up, because their brain/mind is glitching.

A lot of folks are both trying to understand the AI chatbots and trying to grapple with the possible implications for how organic minds work, by speculating about human cognition. Y'all should definitely check into the history of actual research into this topic; it will make your socks roll up and down, and blow your minds. And one of the key areas will be surfaced with that keyword.

There have been a bunch of very clever experiments done on humans and how they explain themselves, which betray that there are parts of the mind that are surprisingly - and even alarmingly - independent.

Frex...

siderea@universeodon.com

There were these famous experiments on split-brain patients - people who'd had their corpus callosum (the connection between the two halves of the brain) severed as a last-ditch treatment for life-threatening epilepsy - performed by Roger Sperry and Michael Gazzaniga.

Because of how our eyes are wired to our brains, you can show a split-brain person an image such that only one half (hemisphere) of their brain can see it.

But language is highly lateralized. That means it mostly - almost exclusively - takes place on one half of the brain. In most people, it's in the left hemisphere.

So you can show a split-brain person an image their left hemisphere can see, and if you ask them about it, they can describe it fine. If you show their right hemisphere the picture, they'll tell you they see nothing - but if you ask them to point to the matching picture, *the arm controlled by that hemisphere will point correctly.*

Then things get *really* weird.

siderea@universeodon.com

In Sperry and Gazzaniga's experiments, they would simultaneously show two different images to the two different halves of a split-brain person's brain. They would then ask them to perform a task based on what they saw. Both arms would perform the task, but because they had seen different pictures, they would do different things.

They would then ask the subject why the arm controlled by the verbal hemisphere did what it did, and the subject explained perfectly reasonably why their response to the prompt image made sense.

What do you think the subject did when asked why the arm controlled by their non-verbal hemisphere had done what it did?

Pause here a moment and make a hypothesis.

What happened was...

siderea@universeodon.com

When the split-brain subject was asked why *they* - meaning, actually, the arm controlled by their non-verbal hemisphere - had responded to the image prompt - which the verbal hemisphere could not see - the way it had,

The subject calmly and confidently explained why the behavior of their *non-verbal* hemisphere arm was a perfectly sensible response to the prompt the *verbal* hemisphere had seen.

> they would flash a picture of a chicken claw to the right eye and a picture of a snowy driveway to the left eye. [...] The right hand pointed to a chicken (this matched the chicken claw that the left hemisphere witnessed), while the left hand pointed to a shovel (the right hemisphere wanted to shovel the snow.) When the scientists asked the patient to explain his contradictory responses, he immediately generated a plausible story. "Oh, that's easy," he said. "The chicken claw goes with the chicken, and you need a shovel to clean out the chicken shed."

JoeChip@mstdn.social

@siderea

Subscribing to hashtag
#Confabulation

siderea@universeodon.com

In other words, the verbal side of the brain would confidently *make up* a reason that was based on its incomplete knowledge - a reason that was obviously, manifestly wrong to external observers. And the subject would earnestly believe it.

This is the incredible thing: the subject didn't just not know the answer. The subject didn't know they didn't know the answer.

The subject, without at all realizing they were making up a post-hoc story, manufactured an explanation after the fact, based not on recollection of their reasoning (which they had no access to) but on the observable external evidence.

And had no idea they were doing it. And believed it utterly.

Now with that in mind, go back and read one of the "gaslighting" transcripts of Sydney insisting a movie released in late 2022 isn't out yet.

siderea@universeodon.com

Y'all, I gotta tell ya. As fascinating as the chatbot AIs are, the human responses to them are at least as fascinatingly glitchy. It's been a festival of specimens, what people haven't noticed.

Here's one: has anyone got Sydney down on record *ever* saying "I don't know"?

(Obligatory joke ha ha only serious: this is how you can tell its training corpus was internet discussions.)

I propose that all of the examples of so-called "hallucination" are (neurological-style) confabulation: it's what the AI chatbots do when they don't know the answer to a question. Since they have no capacity to tell whether or not they know something (or whether what they know is correct), they confidently make something up based on what information they do know.

Which is apparently very human of them, and perhaps not a surprise from making Bayesian predictions against "what would a human say?"

siderea@universeodon.com

The quote above is from

scienceblogs.com/cortex/2009/0

Where you can read a little more about experiments that reveal our minds, left to their own devices, make up justificatory explanations even when they have no idea what is actually going on.

I said at the top that neurology and psychology (/psychiatry) have slightly different definitions of confabulation.

As I understand it, psychological confabulation is confabulation done for psychological reasons, such as emotional overwhelm, which is somewhat different than confabulation for neurological reasons like the two halves of one's brain no longer being able to confer.

Psychological confabulation can be seen as emotionally motivated (though that's sort of wrong). Neurological confabulation is just a kind of autopilot.

So I am proposing that AIs are confabulating in the neurological sense of the term.

paulc@mstdn.social

@siderea We use the word “confabulation” in the frontotemporal dementia (FTD) caregiver world to describe what our loved ones (LOs) with FTD might say. We discuss confabulations not as lying but more as filling in holes in memory. The person might combine different memories, confuse something they heard or saw with their own memory, or just create something from what is floating in their mind. And you’re right, they have no idea they are doing it.

siderea@universeodon.com

@paulc When memory cuts out, the Explainer part of the mind goes charging on ahead, unaware.

paulc@mstdn.social

@siderea I will use what you wrote with my fellow FTD caregivers.

Really an aside but with FTD it isn’t really loss of memories but loss of access to memories.

siderea@universeodon.com

@paulc Yeah, same as the severing of the corpus callosum: it's still there, it's just the line is down.

ckent@urbanists.social

@siderea I’ve just come to the realisation that AI developers need to take this leaf out of an educator’s book:

** Always take a moment to find your audience’s level. **

And if your chatbot can even DO that, it means it already has to assess its own knowledge level — and report on it — as well as estimate the complete domain of possible knowledge. A very key element in learning & discovery — know your limits, and know what you don’t know.

siderea@universeodon.com

@ckent Yeah, so I think this entire line of thought explodes the notion that the chatbot AIs *know* anything. They're all behavior, no cognition. Searle's Chinese Room, come to life. Only more extreme. I suspect these AIs can't possibly engage in metacognition - the technical term for what you describe - because they don't engage in cognition at all.

ckent@urbanists.social

@siderea I fundamentally and strongly agree with this, and it’s the only point that matters right now; even if I leave the door open that metacognition comes in unexpected forms. Neural nets are still the way forward, I think that’s the consensus? Emulating the “wetware” of the human brain also eventuates the same way.

Regardless, like all real parenting, it’s a ton of hard work. Manual intervention. Paternalism. And, regretfully, sacrifice? Becoming gods is hard.

AlexxKay@mastodon.social

@siderea
I happen to know the word confabulation. After my father died, I came upon some diaries of my late mother, from fairly late in her life. There was a lengthy entry after she was introduced to the term confabulation, because it was a huge revelation. Apparently, when she was a child, her own mother frequently engaged in confabulation, and would insist upon the truth of propositions that my mother knew to be false. Needless to say, this was pretty traumatizing.

Elledeeay@universeodon.com

@siderea This is what people who create AI don’t really understand about the human brain. The idea of confabulation, and others like it that explain how the human brain explains what it sees to itself, is something that confounds anyone who studies the brain for a living. So this idea ends up confounding AI and chatbot creators, because they assume that the brain is a perfect computer. Maybe it is, but sometimes, things go wrong. #AI #chatbots #humans #brain #psychology #neurology #psychiatry

artsyhonker@kith.kitchen

@siderea slightly off topic but thank you for the short primer on confabulation, which I have been noticing in probably-stressed people I encounter in the course of my week but didn't quite have the vocabulary to reflect on as part of a stress response.

That additional context will help me, I hope, to be less bothered by it in turn, leaving more of my own bandwidth for an appropriate pastoral stance.

osma@sigmoid.social

@siderea
Don't know about Sydney/Bing, but I read about someone getting ChatGPT to say it doesn't know just by adjusting the prompt. It was something along the lines of "Who is the main character in this story? Respond with 'I don't know' if unsure", which gave the model "permission" to admit it can't tell and it doesn't have to come up with a plausible but wrong answer every time.

siderea@universeodon.com

@osma Oooh! That's fabulous. If you have a pointer, I would love to see it.

BadExampleMan@mstdn.social

@siderea Some accessible examples of confabulation can be found in books by the late Oliver Sacks, notably in The Man Who Mistook His Wife For A Hat. Sacks observed confabulation in patients with severe Korsakov's Syndrome who had huge memory deficits and could not orient themselves in the present.

Rosyfingers@sunny.garden

@siderea I really appreciate this context. I had read about those studies and forgotten them. This is helpful and fascinating.

KLB@glasgow.social

@siderea Word of the Day! #WOTD

KLB@glasgow.social

@siderea 😳

stepheneb@ruby.social

@siderea “Confabulation”, what a wonderful word and concept! When I was a kid I failed eye tests in school — but it wasn’t because I had any problem seeing. Part of the test was viewing a small red dot in my left eye and a family sitting at a picnic table in my right eye. The question: “Where is the apple on the picnic table?” I could overlap the images but thought they were trying to trick me, so I told them what I actually saw: there was no apple on the table.

tempest@wandering.shop

@siderea "(Obligatory joke ha ha only serious: this is how you can tell its training corpus was internet discussions.)"

The SOUND I made when I read this. It's the truthiest truth. 🤣

tempest@wandering.shop

@siderea Thank you so much for this thread. I knew a tiny bit about this (I've seen that episode of House!) but the connection between this and the stuff going on with chatbots was very eye-opening.

Plus, the fact that the chatbot can't just say I don't know and doubles down exactly like Someone Wrong On The Internet is disturbing and fascinating in equal measure.

johncarneyau@mastodon.social

@siderea Thank you. I did know that I needed that word, but I didn’t know what it was.

hllizi@hespere.de

@siderea But interestingly, I recently saw a chat with Bing Chat on reddit where it claimed a programming task was too hard for it, which I found quite surprising. Maybe it was fake, idk.

catselbow@fosstodon.org

@siderea Thanks for pointing this out! I wonder if there's something akin to confabulation going on in the minds of the chatGPT users, too. They see something coming from the chatbot, generated in a way they don't understand, and make up a story in their minds about the "intelligence" of the chatbot.

nichni@mastodonapp.uk

@siderea Thus, bullshitting without realising that you're bullshitting... in fact without realising anything at all.

assaf@mas.to

@aredridel I don’t think GPT has the cognitive capacity to confabulate

simplified, GPT constantly makes predictions by “remembering” what comes after “the quick brown ____”

its neural network can always “remember” something and give an output (however implausible to us)

missing: higher level cognitive function that can reject these signals (hence, induce memory loss)

and higher level cognitive function that can fill those gaps (confabulate)
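
(Not part of the thread: to make the "what comes after 'the quick brown ____'" picture concrete, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and the small open GPT-2 model, which are my choices for illustration, not anything the posters used.)

```python
# Illustrative only: show the top next-token guesses GPT-2 assigns
# after the prompt "the quick brown". The model always returns *some*
# probability distribution; there is no separate "I don't know" signal.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("the quick brown", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Probability distribution over the token that would come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.3f}")
```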

aredridel@kolektiva.social

@assaf Interesting. Do you think humans do something higher level, rather than fill in what's plausible?

assaf@mas.to

@aredridel GPT fills what's probable "the moon is made of _____"

brain fills what's probable, but also adjust to what's plausible, what it thinks you want to hear, what could make you laugh, make it win an argument, get out of sticky situation …

what we call "confabulation" happens at that higher level of cognitive function

assaf@mas.to

@aredridel at the very low level, the brain and GPT are both neural nets making predictions always firing some signal

typically the brain ignores the low probability background signals, so it can focus on what's important, but if you put a person in sensory deprivation the brain has only low probability background signals to work with and it … hallucinates

aredridel@kolektiva.social

@assaf Would you say the difference is context and/or goal, then? (What makes it higher level, rather than just context aware?)

karawynn@wandering.shop

@siderea

Well done. I knew about both things but had not yet drawn the parallel. Thank you!

riley@toot.cat

@siderea: Yep.

Confabulation in the neurological sense tends to happen to cover up the lack, or inaccessibility, of certain parts of a brain. ChatGPT happens to not have frontal lobes, so it confabulates in ways that can make a naïve bystander think that it might have some.

assaf@mas.to

@aredridel I think of it more like the difference between knowing how to speak (early age) and knowing the rules of the language (high school?)

the layer that talks fluently knows "an" comes before "alphabet" based on years of training data (a la GPT)

the other layer only learned that rule once, but it can't talk fluently — everything comes slow, because it's doing more processing (= more layers, so it's higher up)

aredridel@kolektiva.social

@assaf I think there's a fair bit of research showing that's not true — quite often we come out with the answer and then _backfill_ justification. Justifying is slower, but it's also post-facto.

assaf@mas.to

@aredridel I think that's a mistake — there's one architecture to the brain, it's a predictive neural net, and so it *always* comes out with the answer first and then it needs to make sense of the prediction

coming up with an answer first is what GPT does, and it's at the same level as hallucinating shapes and colors — it skews probable not plausible

coming up with an answer and a story — probable and maybe plausible — requires more cognitive ability than GPT possesses

tmstreet@urbanists.social

@siderea Apparently, this syndrome is true of the entire GOP.

Jimwalsh@mastodon.xyz

@siderea
I suspect a lot of the thoughts about why we do things are after-the-fact attempts to pretend some kind of control over otherwise autonomous brain processes.

siderea@universeodon.com

@stepheneb A lot of folks take it as a given that 1) all people's tendency to confabulate is the same, because 2) all people's brains/minds relate to this "Explainer" part the same way.

I *don't* assume that. For several reasons. For one, that seems a silly assumption in light of how rarely that's true for any other part of either neurology or psychology. For another, why wouldn't we assume this part of cognition is just as plastic and developmental as the rest of our heads?

For another, I am pretty sure my own brain does this slightly differently.

Normally, humans can tell when they don't know something. That facility was disrupted by the severing of the corpus callosum. (This makes me hypothesize that it's lateralized to the right hemisphere.)

Point being,

siderea@universeodon.com

@stepheneb
I suspect that part of what we call metacognition entails some other part of the brain monitoring the Explainer and being able to tell when it's out in front of the evidence. I wonder if it's not possible to train people to be more aware of it consciously and/or to be less vulnerable to confabulation.

And I wonder if some people are just less vulnerable to it naturally.

stepheneb@ruby.social

@siderea Something that might be adjacent ... when I am learning something complex and new I jump in as fast as I can with both a model I know is naive and with just the right amount of confidence to break it quickly and usefully. Deeper in I find myself needing to reason using multiple and conflicting models. It's a strange combination of being both extremely confident and very skeptical at the same time. I'm confident I'm both wrong and good at finding out as fast as possible.

osma@sigmoid.social

@siderea
I'd like to find it too to re-check the details, but as there's no working search on the Fediverse, it's a bit hard. I'll report back if I can find it again.

Aviva_Gary@noc.social

@siderea I read your thread and although amazing, I don't think the GPT is at that level yet. They (it is gender neutral no?) would have to be self-aware first. It is obvious, they are not. They don't even seem to be able to "read the room."

jgordon@appdot.net

@siderea Journalists may not know cognitive scientists exist — so they are asking questions of the wrong people.

Meanwhile cognitive scientists are living it up because they know what’s coming. Why not spend it while you can?

(I have a special needs son who routinely confabulates — especially when distressed. He believes what he says. Maybe that's why I think we are in deep shit now.)

siderea@universeodon.com

@Aviva_Gary Oh, yes, apparently there's been some confusion on this point: I am not remotely suggesting that these AIs have any knowledge whatsoever. As I said in reply to another comment, they seem to be all behavior, no cognition.

StuartGray@mastodonapp.uk

@siderea This is a great and fascinating thread, but I think it misses a key point regarding *why* the term "hallucination" is being used for ChatGPT, and specifically its origin within AI circles.

My first exposure to it in AI circles was several years ago, maybe 5+, when AI wasn't so hyped & the first notable image *recognition* models were starting to appear, and "hallucination" was used to try and describe the way in which these networks were behaving when they were run in reverse.

Irreverent_B@kolektiva.social

@siderea

I've been having fun running rings around ChatGPT with discussions about infinity and how it must be meaningless in terms of absolute truth. The model keeps contradicting itself then glitching. A few times it's just ground to a halt when confronted with the internal contradictions of its 'logic'.

siderea@universeodon.com

@StuartGray Right, "hallucination" meant "seeing something that wasn't there."

That also has an issue. In psychiatry, we differentiate between hallucinations and illusions. Hallucinations are when you see something that isn't there, illusions are when you see something that *is* there but misrecognize it as something it's not.

It seems important to differentiate if one wants to debug machine vision.

jonawebb@techhub.social

@siderea from what I've been reading "hallucination" is the technical term used in AI.

maegul@mas.to

@siderea super simple neuroscience idea I like:

The neuron’s main function is output not input.

However true, the idea comes up frequently, including, arguably, here.

zachnfine@mastodon.social

@siderea I have gotten chatGPT to tell me it didn’t know something (it said it couldn’t answer my question because it’d require knowledge of the Adobe Effects SDK). Then I prompted it to write the same function but specified the constant and SDK functions to use. It got it right.
I did some google and stackexchange searching afterward to see if it was plagiarizing but came up empty. It did seem like it had memory of an SDK it claimed to not have. A negative confabulation?

zachnfine@mastodon.social

@siderea I do love this though I think some of the people who object to the use of “hallucinate” for machinery will also object to any term that relates to human cognition getting applied to computers. Hallucinate, in the popular sense, also is an evocative descriptor for some of the weirder AI art regurgitations.

osma@sigmoid.social

@siderea I think it was this: medium.com/@gfhayworth/chatgpt

referring to this: blog.langchain.dev/langchain-c

Basically they tell ChatGPT that it's OK to say "Hmm, I'm not sure" and also that they don't want it to respond to questions that are not about the LangChain project. There are other tricks in the prompt as well, e.g. hyperlinks shouldn't be made up.
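
(Not part of the thread: roughly what that kind of prompt looks like in practice, as a sketch under my own assumptions using the OpenAI Python client. The wording, model name, and helper function are placeholders, not the actual prompt from the linked posts.)

```python
# Sketch: give the model explicit "permission" to say it is not sure,
# to stay on topic, and not to invent hyperlinks.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You answer questions about the LangChain project.
If you are not sure of the answer, say "Hmm, I'm not sure." Do not make up an answer.
If the question is not about LangChain, politely decline to answer.
Do not make up hyperlinks; only use links you were given."""

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Who is the main character in this story?"))
```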

siderea@universeodon.com

@osma Thanks so much!

Aviva_Gary@noc.social

@siderea Ah... yeah... that'd do it. And no worries... 😎

ponderhack@hachyderm.io

@siderea "(Obligatory joke ha ha only serious: this is how you can tell its training corpus was internet discussions.)" I was just thinking it was developed by men.

emma@assemblag.es

@siderea @StuartGray

Thank you for this great thread! This is something I'm really interested in as a philosopher of technology.

Are there sources on the way "hallucination" is used in psychiatry vs. neuroscience?

Also, it'd be awesome to learn more re: the different ways "hallucination" is used in #AI / #machinelearning . I thought it only referred to random quirks in training data being taken as generalizable truth. I wonder if there's a genealogy of the concept of "hallucination" in AI.

patriciajhawkins@mastodon.art

@siderea Yes! As a computer person with a psychology background, I'd been casually calling ChatAI confabulators -- thanks for fleshing that impression out!
Their output can also reasonably accurately be described as "Predictive text on steroids."

Riedl@sigmoid.social

@siderea I have already switched my vocabulary to use "confabulation" instead of "hallucination" when referring to AI.

Homebrewandhacking@mastodon.ie

@siderea

Thank you for this thread. I didn't think hallucinate was quite the right word, but confabulate is exactly correct. Those experiments on people with severed hemispheres really altered my understanding of things!

#ChatGPT #LLM #AI #AIHallucination #Confabulation #AIConfabulation

karchie@freeradical.zone

@siderea lapsed neuroscientist here, confabulation is the first word that came to mind and I’ve been using it though wrestling with whether it’s too anthropomorphizing. Hallucinations is right out.

Seruko@mstdn.social

@siderea I read this and at first I was extremely uncharitable. But thinking about your post for a bit has really helped me understand why so many very smart people are very very silly and totally credulous when it comes to marketing around LLMs. Of course the brain that sees a smiling face on a car grill or an off-center coat hanger will see an intelligent agent in a magic 8 ball.

A Markov chain with a large library is not a g.AI or even an s.AI.

Please stop making just-so stories about LLMs.

amd@social.amd.im

@siderea Very cool thread. Thanks for sharing.

PetrichorSquirrel@meow.social

@siderea i just call it being wrong

troglodyt@mastodon.nu

@siderea @stepheneb

there are both religious and philosophical traditions that have techniques in this area

josephholsten@mstdn.social

@siderea And when you find something deeply and irretrievably confabulated, the correct adjective is:
#confabulous

inthehands@hachyderm.io

@siderea
This surely is not unrelated to my initial reaction to ChatGPT: it’s just like a B- student paper.

Same setup: “You MUST say something, you can’t just say you don’t know, and it must be grammatically correct.”

siderea@universeodon.com

@inthehands You are a much more charitable grader than I am.

And that's before we get to the "I'd be happy to give this C to whoever earned it - just let me know who that is. You, meanwhile, are getting a 0 for plagiarism and an all-expenses-paid trip to the appropriate authority's office" issue.

CaptMorgan@freeradical.zone

@siderea the interesting thing about hallucinations is that they are easy to detect. If you ask the same question multiple times and measure the semantic difference in the answers, hallucinations show much wider variation. The cool part will be when GPT learns to admit it doesn't know.
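
(Not part of the thread: a minimal sketch of the detection idea CaptMorgan describes, i.e. ask the same question several times and measure how semantically scattered the answers are. It assumes the sentence-transformers library for embeddings; the example answers and the threshold are placeholders I made up.)

```python
# Sketch: low average pairwise similarity across repeated answers to the
# same question is treated as a sign of confabulation ("hallucination").
from sentence_transformers import SentenceTransformer

answers = [  # placeholder: several replies collected for one question
    "The film was released in December 2022.",
    "It has not been released yet.",
    "It came out in 2021.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(answers, normalize_embeddings=True)

sims = embeddings @ embeddings.T             # cosine similarities (unit vectors)
n = len(answers)
mean_sim = (sims.sum() - n) / (n * (n - 1))  # exclude self-similarity
print(f"mean pairwise similarity: {mean_sim:.2f}")

SUSPECT_THRESHOLD = 0.7  # placeholder; would need tuning on real data
if mean_sim < SUSPECT_THRESHOLD:
    print("Answers vary widely - likely confabulation.")
```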

kreeblah@mastodon.inreach.net

@siderea Thank you for this. The word "hallucinate" has bugged me since it implies a thought process that these things just don't have, but I haven't really had a good word to use. I've been saying that they just lie, but that also has some implications of intent that just aren't true.

princelysum@aus.social

@siderea Do you think Trump is a confabulator? Or a liar?

siderea@universeodon.com

@princelysum Woodward proved he is a liar who knows perfectly well he's lying.

siderea@universeodon.com

@troglodyt

When you say that, what did you have in mind?

@stepheneb

troglodyt@mastodon.nu

@siderea @stepheneb

nothing in particular, i'm just pointing it out

what examples can you come up with?

gsuberland@chaos.social

@siderea I must admit I'm a little confused by this thread. A confabulation would imply deriving a plausible explanation from context via an imaginative process. But an LLM isn't doing that; it's a paragraph suggester based on statistical inference. If anything, I'd argue that "confabulation" is a deeper anthropomorphisation of non-intelligent machine behaviour than "hallucination" is.

godofbiscuits@sfba.social

@siderea If you all want a more obvious taste of this, try talking to ELIZA:

cyberpsych.org/eliza/

johnelamb@mastodon.social

@siderea is it accurate to say AI is *always* confabulating even when giving correct answers?

f4grx@chaos.social

@siderea chatgpt does not deserve human words.

It's just generating algorithmic junk.

slyecho@mdon.ee

@siderea Yep, it doesn't understand anything. Has no factual knowledge of the World. Can't tell fact and fiction apart.

You can ask it why the Mars-Earth war of 2055 happened and it will tell you it was because of limited natural resources.

noyes@mastodon.online

@siderea
Confabs are way down. This screenshot was taken today. A month ago it would have just invented the information I asked for in a rather convincing manner.

@tjradcliffe

fanf42@mastodon.social

@siderea @ariadne thank you, very interesting and nice thread

StephAnne@social.sunnypup.io

@siderea if you ask about something obviously false or fictional, yeah, Bing and ChatGPT will say I don't know.

allendowney@fosstodon.org

@siderea

I hope it's not too late for "confabulate" to replace "hallucinate".

It is a pleasingly precise term for what LLMs are doing -- hallucinate seemed clumsy even before "confabulate" was raised as an alternative.

siderea@universeodon.com

@StephAnne Well it's nice to see they do *now*.
