Because Large Language Models are trained on human language about reality, their chatbots can often sound like they are in touch with reality. Sometimes they give remarkably lucid responses about things in the world that we would all agree with. But that can hide the fact that these models have no basis in reality whatsoever…only in the recorded interpretations of reality available to them.

Humans are meaning-making machines. We are constantly interpreting the randomness of the world as stories that more or less work, but may not be objectively true. This is why the events of the world continue to surprise us, or cause us anguish when they do not fit into our stories.

Chatbots don’t do that because they don’t know reality, just our stories about reality.

It would be like asking about the ocean someone who has read many books about the ocean but has never actually experienced it. You could learn a lot, but you'd always be a layer or two (philosophers can argue about this) removed from reality. And I think that makes a difference.

The First Battleground of the Age of AI Is Art

Kottke says:

Like, just think about how powerful this is: normal people who have ideas but lack technical skills can now create imagery. Is it art? Perhaps not in most cases, but some of it will be.

My definition of art is "craft + courage." I think it's pretty clear that typing a prompt into a text box isn't an act of courage. So for now I'll say that generating images using AI — especially honing the prompts — is craft but not art.