cleverdevil
Also, the notion that you can’t have fantastic results with generative models trained only on content you have permission to use is ridiculous. OpenAI and Meta are bad actors that are disingenuous. It’s *easier* to get good results with ethical shortcuts, but you can achieve amazing results without stealing… cleverdevil.io
fgtech

@cleverdevil This is 100% correct. Properly curated training data (which we have not yet seen) will yield dramatically better LLM results. Not more reliable, mind you, but it should avoid some of the creepy and dark stuff we have seen emerge. Curation requires humans and will be expensive.

cleverdevil

@fgtech one thing I see becoming more common is large foundation models trained on open data sets, used mostly to provide the fundamentals of written communication, combined with specialized models trained on smaller data sets for very specific use cases. This gives you the best of both worlds.
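A minimal sketch of the pattern described above, assuming a Hugging Face workflow: an openly licensed foundation model fine-tuned on a small, curated, domain-specific corpus. The base model name and the local data file are illustrative assumptions, not anything referenced in this thread.

```python
# Sketch: specialize an open foundation model on a small curated corpus.
# Model checkpoint and data file below are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "EleutherAI/pythia-160m"  # placeholder open foundation model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Small, curated, permissively licensed domain corpus (hypothetical file).
dataset = load_dataset("text", data_files={"train": "curated_domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="specialized-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("specialized-model")
```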

fgtech

@cleverdevil Sounds great! Ethically sourced, transparent data sources will be key to cleaning up these models.
