manton

ChatGPT Health: manton.org

davoh

@manton this sounds sane

bax

@manton Great post. There is so much premature kvetching over things like people daring to use AI for health at all (and I’m sure there will be a media wildfire when the first bad advice inevitably hits…), but you only need to look at things like image analysis, where AI is spotting tumors no human could, to understand that it both has a long way to go and will bring humanity so much improvement over the status quo.

jordon

@manton I was thinking about this after reading the Verge’s reporting on it. I’m still trying to formulate my exact thoughts, but generally we need to learn (and to teach others!) how to internalize what AI (and, to be frank, society and government) tell us. The dichotomy of fact and fiction no longer applies the way it historically has, because sources of information are now much more accessible and much more varied. Information is a spectrum, and you have to decide, sometimes on a case-by-case basis, how much to trust what you’re being told. AI is (probably) more accurate than asking a random person on the street, but it’s (probably) less accurate than asking an expert. Everyone (should) know not to seek medical advice from their barista, but how do you handle medical advice from someone who sounds right, is probably mostly right, yet can’t easily be confirmed? We have to figure that out!

stevex@mastodon.social

@manton Generally agree on explaining professional data; for symptom diagnosis it's still very steerable: you can talk even recent models into conclusions the data doesn't support. But doctors should be working with AI, not fighting it.

Talk to a cardiologist about Apple Watch EKG results: some find it useful data, some are offended that you're showing it to them.

SimonPeng

@manton This feels like such a deeply American take. AI shouldn’t be the band-aid on a healthcare system that is so strained that doctors don’t have time to discuss treatments or medical history with patients. Systemic change may be unlikely, but I don’t think this is a solution you should accept.

manton

@SimonPeng It is an American take, but there are millions of people who could benefit here. I think it’s more than a band-aid… Sadly, I’m not hopeful that our system can fundamentally improve without something new to force a change.

manton

@jordon Agreed, and I don’t want to see people led astray. For me it comes down to all the people who could be helped who currently are not getting what they need.

ffmike

@manton Perhaps I’m too cynical, but ChatGPT Health immediately struck me as (mostly? partly?) a play to get their hands on a big chunk of potential training data that is otherwise off-limits.

SimonPeng

@manton You could be right! It just feels very similar to the idea that self-driving cars can solve traffic while ignoring public transit as an option. This kind of tech is sold as the solution to problems that shouldn’t need to exist, and buying in makes fixing the root causes harder in the future.
