petebrown
I think I just wrote about this the other day, but I am once again struck by the fact that a bunch of well-meaning tech people are still stuck in the mode of “We should try out this new thing because it might be cool” and don’t get (or refuse to see) that most of the new things are deeply troubling. And the... explodingcomma.com
jmanes

@petebrown Amen. Completely agree.

mcg

@petebrown Very much agree.

pimoore

@petebrown

“We should try out this new thing because it might be cool”

If only it stopped there. This quickly becomes, “There's profit to be had in this, let's get on it.”

petebrown

@pimoore For big companies, serial “founders,” and the LinkedIn growth-hacking creeps, for sure.

I’m talking about individual technologists, enthusiasts, and hobbyists, and trying to assume generally good intentions.

tracydurnell.com
@petebrown

If you found out the winner of the Boston Marathon had cheated, would you say, well that’s fine because surely next time they’ll find a way to win without cheating? Or if not next time, someday.

No, that person who took the subway was DQ’d.

But in effect that’s what tech companies are asking us to do for their fatally flawed products, hoping they’ll dig so deep into the market that they’ll become too “important” to get shut down. The answer to valid criticism is always that they’ll fix that later — just ignore the man behind the curtain and focus on the potential of this technology.

When OpenAI says they will provide attributions for their Stack Overflow training data, they’re lying.

When OpenAI says they’ll get rid of the bias and the racism and the sexism later, they’re lying.

When generative AI companies say the environmental costs will come down (and somehow its use won’t increase with efficiency gains, cancelling out any savings), they’re lying.

It’s “self-driving” cars too; if your product will run over humans, block emergency vehicles, fail to recognize cyclists, and lull drivers into a false sense of security that leaves them inattentive while faulting them for crashes your system failed to react to appropriately, your product does not drive more safely than humans and I don’t believe you when you say it will “one day.”

Because that day won’t come, and instead the onus will be put on us to accommodate the failures of their technology. We’ll ask cyclists to use transponders to alert autonomous vehicles to their existence. Didn’t have a transponder? Your fault you died when struck by a 3000 pound vehicle on a road without a bike lane 🤷‍♀️ Cyclists are all entitled assholes anyway, stealing space from cars, amirite? /s We’ll spend public funds to build new infrastructure meant to make up for their cruddy products. (How to actually make roads safer? Design roads for lower speeds, with protective features for “vulnerable road users” aka anyone not inside cars. But that requires escaping “car brain.”)

If we don’t make these tech companies fix these major problems with their products — now, before the tech is widespread and integrated everywhere — they’re never going to prioritize it. Even if it were possible to get rid of bias, they don’t care about it — if they’re white guys, the bias even works in their favor. They don’t care about the safety of people walking or biking — walkers are poor people and everyone hates cyclists so they’re easy to out-PR. They don’t care about the artists and writers whose work they’ve stolen uncompensated and without permission — everyone knows artists are too poor to sue them. (Until you fuck with Scarlett Johansson lol) They don’t care about the environment — they’re not the ones running out of clean water or being driven from their homes by rising tides, and they’ve got a bunker for worst case scenarios anyway. The status quo works in their favor, and their technologies reinforce it.

The fix will always be five years out. Why would they invest money in these “boring” (but foundational for society) elements of their products if they don’t have to? Their products are made to exploit externalities, and they’ll externalize everything they can. Self-driving car companies will happily trolley-problem people walking, biking, and rolling, just as generative AI companies fire their safety teams to focus on growth. They think it’s good to move fast and break things, even if we’re the ones being broken. “Because it sounds cool” isn’t sufficient justification to play fast and loose with our lives; after so many unconsidered technologies have gone badly, dismissing vulnerable users should be seen as a company-breaking, product-cancelling failure. We have to stop letting them off the hook and make them re-internalize their costs instead of offloading them all on us — and we need to remember that flashy new technology isn’t always the solution to our problems.

petebrown

@tracydurnell.com I agree with you. That was the point of my original post. It doesn’t matter if this AI stuff seems cool; it is harmful and unethical. People should stop falling for the lies, and if they don’t, they are part of the problem.

tracydurnell

@petebrown 😅 aw shucks, I'm sorry, I forgot micro.blog sends Webmention replies to everyone mentioned in a reply, not just the article I was replying to... I wasn't trying to be annoying. I reposted this as a quote post instead of a reply but I don't think that'll get rid of it here -- @help is there a way to remove Webmention replies? Or could I suggest not adding an @ mention at top if it wasn't included in the original post?

petebrown

@tracydurnell Ah—gotcha. No worries!
