Seth Godin has one of the clearest blog posts about AI. Much of AI is an illusion. That doesn’t mean we should ignore it, but it does mean we need to be thoughtful about how we use it, with reasonable expectations.
@manton So, do you think this will become an “AI Bubble,” like the Dot-com bubble of 1997–2000?
Don’t ignore AI because it’s dumb. Figure out how to create patterns and processes where you can use it as the useful tool it’s becoming.
Honestly, this is exactly how I’m using it. Also, replacing Google with AI for very specific search queries is so much better.
@cesco A little, but the hype about AI is real. It is rare in technology to get something so new that it changes everything it touches.
@manton Sage advice. It’s worth remembering that any intelligence you read in text from a generative model (or in “art” you see) originates from the training data. These systems are worthless without the humans who created that data, none of whom are compensated for their work.
@fgtech I'm glad that people are thinking about the ethics of training these models. I think it'll settle out okay, but it'll take a little while, and I think we need new copyright laws to cover the fair use gray areas of AI training data that were never a problem before.
@manton It’s really worse than people not getting compensated for their work. They do not even receive credit for helping build the models. Last I checked, OpenAI still refuses to disclose the sources used to train their models.
@manton We should clarify the existing laws in light of what is happening, yes. Cases working through the courts should help. It is very hard for me to see the process of crawling every written word available on the internet and ingesting it into a model as “fair use.”
@fgtech We're going to need better licensing. Wikipedia is Creative Commons and can be used with some limits. My blog and book are freely available too. But for most web pages, it's not clear what the author's intention is.
@manton The fact that he refers to it as “dumb” because it doesn’t know anything just shows he has no idea what he’s talking about. I truly think the only people who get the seismic shift that’s happening are those who develop AI apps. Once you see how it works, you would never refer to it as “dumb.”
@manton Yes, people who are okay with having their work included should have a way to indicate that. Nobody should assume it counts as fair use to ingest entire web sites into a model without that permission. Are you sure about Wikipedia? The license requires attribution.
@ronkjeffries Yes, I want to do that. There are a lot of good examples there that are hard to find.
@jhull What I liked is that it addresses what a lot of people have problems with: that because AI is wrong sometimes, maybe it’s worthless. I agree that “dumb” or even “smart” aren’t really applicable. It’s so advanced it continues to blow my mind.
@manton Having done a fair amount of generative AI development and integration where I work, my favorite metaphor for using AI has landed somewhere along the lines of a “co-pilot” or “co-worker.” Used appropriately, it can add value, retrieve “knowledge,” or even do work for you and save time — not to be relied on exclusively or trusted without limit, but potentially leveraged as another member of the team.
@manton Oh totally. And my use case is creative writing, so hallucination is a feature. But when I do need it to reason, it’s unparalleled. If you haven’t tried out the Assistants API for mb, I would give it a shot. Huge difference, and it will inspire all kinds of new tooling for your work.