V_

I’m wondering if frameworks and coding languages will still matter in the not-so-far future. When the LLM is writing all the code, my opinions about a language matter a lot less. Unless, of course, you run out of tokens and want to implement something by hand.

torb@hachyderm.io

@V_ You could also make the opposite case. Since LLMs are even worse than humans at writing code that actually does what it's supposed to, things like strong typing become more important.

And I'm not only talking about the typical nominal strong typing (never mind the loosey-goosey “strong” typing of things like TypeScript), but languages with dependent types. That is to say, the type system can mathematically *prove* certain properties of the system and its input (languages like Idris).
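To make the idea concrete, here is a minimal sketch of that kind of proof-by-type, written in Lean 4 (a dependently typed language in the same family as Idris; the `Vec` type and `head` function here are illustrative, not from any post in this thread). The length of the vector is part of its type, so taking the head of a possibly-empty vector simply doesn't typecheck:

```lean
-- A vector whose length is tracked in its type.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- `head` only accepts vectors of length n + 1 (i.e. provably non-empty),
-- so "head of empty vector" is a compile-time error, not a runtime crash.
def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x

-- This typechecks and evaluates:
#eval Vec.head (Vec.cons 1 Vec.nil)
-- This would be rejected by the compiler, because Vec Nat 0
-- never matches Vec α (n + 1):
-- #eval Vec.head (Vec.nil : Vec Nat 0)
```

An LLM could still generate wrong logic, but whole classes of mistakes become impossible to even compile.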

Of course, this has its own costs. Dependently typed languages are notorious for being difficult to understand. But maybe (at least parts of) the LLM-boosting crowd don't really care about that.

lmika

@V_ It probably would matter less, but I think it will still matter to some degree. I’d argue that you’d still want the option of not using AI tools. Say they get too expensive, or they stop working for some reason. Having a codebase that you can understand, after some reading, would still be important. That’s generally why I’ve shied away from having agents work in technologies I’m not familiar with. Having something in SwiftUI would be nice, but Go and web tech is what I know.

V_

@lmika & @torb you both raise important points. Regarding types (and provability), I wonder if a separate domain model could help (in whatever form). Of course, that alone would not yet be provable, and you’d still need to validate whether the generated code actually uses it.

And the point about not being able to work on the codebase when the LLM is offline is a big issue. I’m also thinking about what happens when you have such a codebase and they start increasing the token costs.

torb@hachyderm.io

@lmika @V_ Personally I think the issue of understanding the code base _alone_ is a big enough problem for me to avoid this (this is in *addition* to other risks and serious ethical problems).

Writing and editing code by hand isn't only something that, well, changes the code; it also makes you think about the code and the domain model. You'd lose a lot of that with LLMs.

V_

@torb Yes, understanding of the source code is something you lose – and I feel it for features I’ve added this way. I feel a distance between the product and how it works now. Right now, understanding the code is still a good thing, and I try to mitigate this issue a bit by adding more comments to the code. But yeah, it feels different when you hand-craft something versus when you just let it generate. A bit like the difference between tailor-made clothing or furniture versus buying from a big-box store.

V_

@torb And yes, the ethical issues are something I ignore in part right now, or try to make smaller by telling myself that it doesn’t matter so much with code, as most of it is open source or comes from StackOverflow (which has an open licence for all posts). But I know that this argument is not so strong. At least I distinguish between image/text generation (for fiction) and code generation.
