manton

There’s a lot of talk about how AI can get facts wrong. That’s fair, but in my experience it’s correct most of the time. Even when it’s slightly off, there’s usually some useful truth in the answer. Much more frustrating are voice assistants that can’t even begin to give an answer.

mjkaul

@manton But how does one judge if it’s right or not?

I think the challenge is the appearance of authority, which makes people implicitly trust it more, whether or not it’s deserving on a given topic.

handy

@manton most people are making sh*t up all the time. I bet it's better than the average person. :)

KimberlyHirsh

@manton For me, basic facts aren't as much a concern as when it fabricates research.

rom

@manton That is great IF you know whether it is correct, but if you don't and it hallucinates, then that is the problem.

manton

@KimberlyHirsh Oh yeah. Probably no one should cite AI as a reference source on anything. I like what Bing is doing with "footnotes" to actual sources, if you need to do your own research.

dgreene196

@manton It's nice to see that attribution to primary-ish sources.

fgtech

@manton Lack of source attribution is the fatal flaw for this application of LLMs. If you have a way to verify the answer you get, then it becomes potentially useful, but there remains an ethical taint to the tool not giving credit to its sources.

KimberlyHirsh

@manton What I mean is those specific footnotes, at least in the case of ChatGPT, are often fabricated.
