manton

NSHipster is back again with a bunch of tips for running AI models on a Mac with Ollama. Also this:

If you wait for Apple to deliver on its promises, you’re going to miss out on the most important technological shift in a generation.

markmcelroy.bsky.social

@manton I’ve given up on Apple’s “AI” pursuits. And while a “commitment to privacy” is good PR (and a convenient excuse for Siri’s sad state), the fact that ChatGPT can both supply exactly what I need and do it in a friendly, familiar way leaves no room for a lady saying, “Here’s what I found on the web.”

cliff538

@manton thank you. Your post led me to use Llama3 on the CLI, then Ollama WebUI, and finally LM Studio. My afternoon is shot, but I had fun and now have some AI running locally. I’m going to need a faster computer, though; my CPU and RAM were maxed out 😆

highlandcows

@manton Thanks for sharing - nice piece that removes a lot of the “magic” ✨
