I was listening to the latest Economist Babbage podcast about the launch of GPT-4, in which they described large language models (LLMs) as being, essentially, large pattern-recognition machines. In the realm of picking up patterns and connecting dots, my ears perked up when they started to use the word “hallucination” for the more preposterous outputs of ChatGPT. While hallucinating our way through life is no good, I do believe in the power of hallucinations, and specifically in the beneficial effects of psychedelics. So, in the vein of pattern recognition, I started reflecting on how progress with foundational artificial intelligence could work in the same way, in light of these so-called hallucinations. After all, how often do we human beings know or say the truth?

Let me expand:

What if the most exciting result from these LLMs is what they say about us? How might the work we are doing to humanise artificial intelligence help us to understand ourselves better? If ChatGPT has a tendency to hallucinate, might that not be a reflection of how some of us distort facts and history to create our own truths? I think of Steve Jobs’ reality distortion field antics, but also of how, in current society, so many people — and media outlets — are casting the very same events in completely different lights. These might otherwise be labelled as hallucinations, no? At least from the other side’s perspective. Moreover, I am capable of thinking rather positively of my personal hallucinatory experiences. When one takes a macro dose of psychedelics, it has been my experience that the altered vision allows me to connect with how big the universe is, and how little I am in it. It propels me to be more conscious of my non-importance in the grand scheme of things. And I think that’s a good thing.

Connecting the dots

I recently considered how both psychedelics and AI tend to be held to a higher standard than we hold ourselves (human beings) to. The reality is that we are deeply imperfect beings and are unlikely to become perfect any time soon. This is our nature. It is for this reason that the calls for authenticity and transparency are so significant: we ask others to be that which we are not, as a way to obfuscate our own imperfection. I find it curious that when we evaluate psychedelics for their ability to help fix certain pathologies (addiction, depression, PTSD, fear of death…), there is far more scrutiny and a wider non-acceptance of any negative effects, even if these side effects are far milder than those of other widely and legally prescribed medications. It is my hypothesis that this intolerance is directly associated with the lack of a business model that can be readily exploited. In the case of generative AI and these LLMs, for now, they are so imperfect that they are wholly unreliable. Yet, in the work to improve them and our very interactions with them, I can see how we as human beings could end up learning and knowing much more about ourselves. And I would consider that a huge victory for society.
