manton

I’ve tried to have a thoughtful approach to AI-based features in Micro.blog, like the opt-out checkbox to disable everything. I’m still open to reevaluating models too. I’ve tried my own servers, but it’s more expensive and would actually use more energy running 24/7 instead of on-demand.

birming

@manton I like MB’s approach to AI. A perfect balance of being useful without getting in your way or “taking over”.

numericcitizen

@birming agreed cc @manton

vincent

@manton I wonder if we could run a very small host that is responsible for firing up a larger server, on demand, to handle requests, and then automatically turn off and delete.
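The idea above, a tiny always-on controller that boots a larger worker only on demand and tears it down when idle, can be sketched as a toy state machine. This is a minimal illustration, not Micro.blog's implementation; `start_worker` and `stop_worker` are hypothetical hooks standing in for a real cloud provisioning API:

```python
import time


class OnDemandHost:
    """Toy sketch: a small always-on host that provisions a larger
    server when a request arrives and deletes it after an idle window.
    start_worker/stop_worker are hypothetical stubs; a real version
    would call a cloud provider's API."""

    def __init__(self, idle_seconds=60, clock=time.monotonic):
        self.idle_seconds = idle_seconds
        self.clock = clock          # injectable for testing
        self.worker_up = False
        self.last_used = None

    def start_worker(self):
        # Hypothetical: provision the large server here.
        self.worker_up = True

    def stop_worker(self):
        # Hypothetical: shut down and delete the large server here.
        self.worker_up = False

    def handle(self, request):
        if not self.worker_up:
            # Cold start happens here -- this is the added lag
            # manton mentions for accessibility text generation.
            self.start_worker()
        self.last_used = self.clock()
        return f"processed: {request}"

    def tick(self):
        """Called periodically; tears the worker down once idle."""
        if self.worker_up and self.clock() - self.last_used >= self.idle_seconds:
            self.stop_worker()
```

The injectable clock makes the idle-timeout behavior easy to test without waiting in real time.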

manton

@vincent We could, but that is a lot of complexity to manage. Also, I’ve been trying to reduce the lag in accessibility text generation, and I feel like that would make things worse. But I like your thinking, maybe there will be new options later.

manton

@numericcitizen @birming Thanks y’all.

vincent

@manton Super complex for sure. I think tagging photos, or anything we could run in a background process that doesn’t need immediate feedback, could be a good fit for a setup like this. But yeah, one day we’ll have options. And as time goes on, running an LLM on premises will become cheaper.

jonah

@manton I appreciate that you allow the option to disable.
