That, and bias, absolutely need improvement. That doesn't mean LLMs can't be extremely effective when given appropriate tasks. The problem is that the people who decide where they're used aren't technical enough to understand their strengths and limitations.
I don't think technical knowledge gives as good a sense of that as a lot of hands-on experience working with one.
It's like saying the people who designed a particular car would know best how it'll perform on various racetracks. My sense is a driver would know better.
Really.
Says all you need to know about their opinion lol
Still, AI misalignment is a real issue. I just don't remember which model was studied and found to be misaligned.
I guess what I meant by technical knowledge was less about general tech and more specifically about LLM tech.