manton

Really nice FAQ-style format for today’s Stratechery update on DeepSeek. Even if you don’t read the whole thing, you can tell something big is happening. I downloaded a medium-sized R1 model last week to test with Ollama on my Mac. Very impressed.
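For anyone wanting to try the same experiment, pulling and running one of the distilled R1 models with Ollama looks roughly like this. This is a minimal sketch, assuming Ollama is installed and that the model tags follow the Ollama library naming (`deepseek-r1:14b` etc.); download sizes are approximate.

```shell
# Sketch: run a DeepSeek-R1 distill locally with Ollama.
# Guarded so it skips gracefully if Ollama isn't installed.
if command -v ollama >/dev/null 2>&1; then
    ollama pull deepseek-r1:14b                        # roughly a 9 GB download
    ollama run deepseek-r1:14b "Why is the sky blue?"  # interactive one-shot prompt
else
    echo "ollama not installed; see https://ollama.com"
fi
```

Larger tags such as `deepseek-r1:70b` work the same way, but need substantially more RAM and disk.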

phillycodehound@indieweb.social

@manton I’m tempted.

js@podcastindex.social

@manton which one did you try?

I was just trying out the 70b version (43 GB) over the weekend to stress test my new MacBook. It really got the GPUs working, fans whirring.

Gonna try out the big 671b version (404 GB) later today.

dgreene196

@js Wow! Did you play with the smaller models? I’ve got 14b set up now, and I'm trying to decide if it’s overkill for summarizing webpages. It definitely takes its time with chat responses relative to the large online models I’ve used (this is my first Ollama experiment).

manton

@js I only tried the 14b one and it ran pretty fast. This is an M3 with 48 GB RAM, which feels like both a lot and not nearly enough for the huge models.
