manton

I’ve tinkered with smaller Llama models on my Mac and Linux servers, so I thought I’d try Llama 3.1. No surprise the 405-billion-parameter model is huge, a 200+ GB download. But even the 70B model seems like too much for my M3 with 48 GB RAM. Going to stick with cloud models for the foreseeable future.
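
For a rough sense of why 70B is a squeeze on 48 GB, here’s a back-of-envelope sketch. The bytes-per-weight figures are assumptions for common quantization levels, and the totals count weights only, ignoring the KV cache and everything else running on the machine.

```python
# Back-of-envelope memory estimate for running Llama 3.1 locally.
# Bytes-per-weight values are assumed figures for common quantization
# levels; real usage also needs the KV cache, the OS, and other apps,
# so treat these numbers as lower bounds.

PARAMS = {"8B": 8e9, "70B": 70e9, "405B": 405e9}
BYTES_PER_WEIGHT = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

for model, n in PARAMS.items():
    sizes = ", ".join(
        f"{quant}: {n * b / 1e9:.0f} GB" for quant, b in BYTES_PER_WEIGHT.items()
    )
    print(f"Llama 3.1 {model} -> {sizes}")

# 405B at 4-bit works out to ~200 GB, matching the download size above.
# 70B at 4-bit is ~35 GB of weights alone, which leaves little headroom
# on a 48 GB Mac once the KV cache and the rest of the system are counted.
```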

numericcitizen

@manton This should educate us about the upcoming challenges Apple faces in running Apple Intelligence (or a portion of it) on-device, shouldn't it?

manton

@numericcitizen Yes, I think on-device models are going to be very limited for years.
