Because Large Language Models are trained on human language about reality, the chatbots built on them can often sound like they are in touch with reality. Sometimes they give remarkably lucid responses about the world that we would all agree with. But that can hide the fact that these models have no basis in reality whatsoever…only in the recorded interpretations of reality available to them.
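To make that concrete, here is a deliberately toy sketch in Python (a word-level bigram counter, nothing like a real transformer; the corpus, names, and output are invented for illustration). It demonstrates the same point in miniature: the model learns statistics of the text it was given, not facts about the world the text describes.

```python
from collections import Counter, defaultdict

# A toy illustration, not a real LLM: a bigram model "trained" on a tiny
# corpus of sentences about the ocean. It learns which words follow which,
# nothing more -- it has no access to the ocean itself, only to the text.
corpus = [
    "the ocean is vast and blue",
    "the ocean is deep and cold",
    "the waves crash on the shore",
]

# Count how often each word follows each other word in the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(most_likely_next("ocean"))  # "is" -- a fact about the text, not the sea
```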
Humans are meaning-making machines. We are constantly interpreting the randomness of the world as stories that more or less work, but may not be objectively true. This is why the events of the world continue to surprise us, or cause us anguish when they do not fit into our stories.
Chatbots don’t do that, because they don’t know reality, just our stories about reality.
It would be like asking someone about the ocean who has read many books about it but has never actually experienced it. You could learn a lot, but you’d always be a layer or two removed from reality (philosophers can argue about how many). And I think that makes a difference.