This feels like a ready-made reply that you've copy-pasted, but I'll bite.
I don't put much stock in aha moments. I think you conflate "aha moments", some epiphenomenal experience of insight, with actual thought. Thought requires not only mental representations but representations about representations, and not only that: those higher-order representations must be reflectively accessible to a unified psychological subject, a self (a biological phenomenon we have pretty much zero clue how it's realized). ChatGPT cannot think. It cannot justify its own statements.
Look into "internalist justification" for this. The core reason human intelligence can expand knowledge about the world is that it reflects on the reliability of its own sources of knowledge. It asks itself: how do I know this? What does it mean to 'know' it? What is the scope of validity of this methodology? That is, we try to justify the beliefs we hold reflectively: it's not enough that our assumptions and axioms rest on a sound foundation, we have to be able to give reasons for that foundation (self-evidence for axioms, and so on). ChatGPT cannot genuinely do this.
"GPT does logic quite well". Can ChatGPT conceive of new logics? Intuitionistic logic? Non-monotonic logic? Maybe you can get it to prove the Goldbach conjecture?
Look, to get present-day AI to what you're imputing to it, you need an architecture with complex, layered, hierarchical knowledge of the world and higher-level abstractions. That seems possible. It's nowhere near that yet. And even then, it won't have reflective knowledge, the cornerstone of human intelligence.
So, while ChatGPT is smarter than every human in some ways, it's still a silly little gimmick.