My argument was, and is, that neural models don't produce anything truly new: they can't handle anything outside the space delineated by the data they were trained on.
Are you not claiming otherwise?
You say it's possible to guide models into doing new things, and I can see how that's the case, especially for a very large model, which is more likely to have relevant internal structures to apply to the task.
But I'm also pretty damn sure they have insurmountable limits. You can't "guide" an LLM into doing image generation, except by having it interact with an image generation model.