manton

AI hallucinations just reinforce that humans should always be involved. Today, AI switched to Python syntax in the middle of JavaScript. Oops! A human would never make that mistake, right? Except that I have definitely typed the wrong language when my brain hadn’t completely switched contexts yet.

randomgeek@hackers.town

@manton Honestly, a constant tripping point is how to say split, join, grep, and map in whatever language I'm using today. So many are so close to each other, but just different enough to trip you up.

Yeah, regardless of how one's code is written, it will always need extra eyes.

Archimage

@manton I wonder if “AI hallucinations” just reinforce human ones…

brandonscript@appdot.net

@manton the idea that computers trained on human content would somehow be better or more reliable than humans is deeply flawed :)

me1000@mastodon.social

@manton interestingly it’s possible to deterministically prevent an LLM from doing that.

At the end of each inference step you get a probability distribution over all possible next tokens. The runner samples from that distribution to choose the next token.

Given the surrounding context, you can constrain the runner to sample only syntactically valid tokens.
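A minimal sketch of that constrained-sampling idea. Everything here is made up for illustration: the toy token set, the probabilities, and the `js_valid` stand-in check. A real runner would consult a grammar or parser state to decide which tokens are legal, and would work on token IDs and logits rather than strings.

```python
import random

def constrained_sample(token_probs, is_valid, rng=random.Random(0)):
    """Sample a token, but only from those the validity check allows.

    token_probs: dict mapping token -> probability (the model's output).
    is_valid: predicate deciding whether a token is syntactically legal
              in the current context (a grammar/parser check in practice).
    """
    # Mask out syntactically invalid tokens.
    allowed = {t: p for t, p in token_probs.items() if is_valid(t)}
    if not allowed:
        raise ValueError("no syntactically valid token available")
    # Renormalize the remaining probability mass and sample.
    total = sum(allowed.values())
    tokens = list(allowed)
    weights = [allowed[t] / total for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Toy example: mid-JavaScript, the Python keyword "elif" is never legal,
# even if the model assigns it the highest probability.
probs = {"elif": 0.55, "else if": 0.30, "}": 0.15}
js_valid = lambda tok: tok != "elif"  # stand-in for a real JS grammar check
print(constrained_sample(probs, js_valid))
```

The key point is that the mask is applied before sampling, so an invalid token like `elif` has exactly zero chance of being emitted, which is what makes the prevention deterministic.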
