@cleverdevil This is 100% correct. Properly curated training data (which we have not yet seen) will yield dramatically better LLM results. Not more reliable, mind you, but it should avoid some of the creepy and dark stuff we have seen emerge. Curating requires humans and will be expensive.
@fgtech One thing I see becoming more common is large foundation models, trained on open data sets, that mostly provide the fundamentals of written communication, combined with specialized models trained on smaller data sets for very specific use cases. That gives you the best of both worlds.
@cleverdevil Sounds great! Ethically sourced, transparent data sources will be key to cleaning up these models.