Quote Originally Posted by Jimaz
I'd really like to see A.I. innovate something that it itself could recognize and correctly declare as a useful innovation without a human confirming it as a useful innovation.

That would be impressive. Human beings have a long history of doing that task very well.

Predicting text based on text previously generated by humans could accidentally produce innovations, surely. But it still requires humans to recognize that product as a useful innovation. A.I. cannot do that.

There's a fairly conspicuous confirmation bias yet to be overcome here.

Beware the hype.
Now you're talking about the gold standard of AI: AGI, or Artificial General Intelligence. That's an AI with a "mind of its own," capable of independent decision-making and reasoning, the super-mind stuff of both science fiction utopias and nightmares.

Advances in AI large language models and quantum computing have pundits predicting AGI will be within reach before the end of this decade. Will it be used to cure cancer or unleash Terminator robots? Probably both.