AI can shed light on the mystery of consciousness
Just as human consciousness likely emerged as a product of increased brain complexity in our evolutionary ancestors, the growing complexity of AI may also result in consciousness - but of a radically different kind.
In an op-ed published on May 27th, Bjarne Troelsen, associate professor of the history of philosophy and practical philosophy, argues that artificial intelligence will never achieve consciousness. He writes, among other things, that consciousness can only exist "in living, biological organisms."
His reasoning seems to be this: Consciousness, as we know it from, for example, crows and humans, exists only in biological organisms - therefore, consciousness can only exist in biological organisms.
This seems like a somewhat simplified conclusion to me, especially considering that consciousness is still a mystery for neuroscientists, psychologists, and philosophers. One cannot point to a specific place in the brain and say, "there is consciousness" - it exists as an indefinable product of complex neurological processes.
The concept of emergence is intriguing in this context. Emergent properties refer to how new patterns and phenomena can arise when simple entities interact in certain ways. It can be compared to the interference patterns that occur in a swimming pool: Small random waves influence each other and create characteristic patterns of crests and troughs. Order emerges, unpredictably, from simple components.
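The wave analogy can be sketched in a few lines of code - a toy illustration, not a physical model, and every number in it is chosen arbitrarily. Two simple sine waves, each featureless on its own, are added together and produce an interference pattern of reinforcement and cancellation that neither wave contains by itself:

```python
import math

def wave(amplitude, wavelength, x):
    """Displacement of a single simple wave at position x."""
    return amplitude * math.sin(2 * math.pi * x / wavelength)

# Sample the "pool surface" along one axis.
xs = [i * 0.1 for i in range(200)]

# Superpose two waves with slightly different wavelengths.
surface = [wave(1.0, 2.0, x) + wave(1.0, 2.2, x) for x in xs]

# Where the waves reinforce, the displacement approaches 2; where they
# cancel, it approaches 0 - a pattern neither wave has on its own.
print(max(surface), min(surface))
```

Neither input wave ever exceeds an amplitude of 1, yet the combined surface shows peaks near 2 and regions of near-total cancellation - a minimal example of structure arising from the interaction of simple parts.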
One of my favorite examples of emergence is the termite mound. What is fascinating about it is that individual termites are not capable of foreseeing the final result. Their actions are relatively simple and based on local stimuli. But when the termites interact and coordinate their activities, a distinct form of collective intelligence emerges, enabling complex and functional structures.
The artificial neural networks underlying AI are inspired by the fundamental structure of the human brain. Although the two systems differ in important ways, both are built from remarkably simple basic elements: neurons connected by synapses. Both learn and adapt by adjusting synaptic weights - the strength of the connections between neurons.
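As a minimal sketch of that learning mechanism - far simpler than any real network, with purely illustrative names and values - a single artificial neuron (a perceptron) can learn the logical AND function by nothing more than nudging its weights whenever it makes a mistake:

```python
def step(x):
    """The neuron fires (1) if its total input is positive, else stays silent (0)."""
    return 1 if x > 0 else 0

def train_neuron(samples, epochs=20, lr=0.1):
    """Train a single neuron on (inputs, target) pairs by adjusting its weights."""
    w = [0.0, 0.0]   # connection weights ("synapses")
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Learning = shifting each weight a little to reduce the error.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron logical AND: fire only when both inputs are on.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # [0, 0, 0, 1]
```

The point of the sketch is only this: nothing in the code "understands" AND. The behavior appears solely because repeated, local weight adjustments accumulate - the same basic principle that, scaled up by many orders of magnitude, trains the networks behind modern AI.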
One can imagine that at some point in the history of our ancestors, the interplay of neurons and synapses gave rise to an emergent phenomenon: human consciousness. Whether the same will ever happen in an artificial neural network is difficult to say, but I cannot immediately see any conclusive evidence that it is impossible.
There are countless examples of AI models surprising their creators with emergent properties. One of them is ChatGPT, originally trained to generate natural language from large amounts of text. Users quickly discovered, however, that ChatGPT could also perform tasks such as translation, code generation, and even solving complex mathematical problems - none of which it was specifically trained for. It is still not known exactly how it acquired these abilities, but many believe they are a consequence of its capacity to generalize and transfer learning between domains.
I see it this way: AI developers have set the water in the swimming pool in motion, and what we will experience in the next few years are unpredictable and complex interference patterns.
When humans look at other species, we are inevitably inclined to use our own experience of reality as a reference point. Our idea of intelligence is therefore largely modeled on our own: problem-solving, abstract thinking, memory, and so on.
If a crow uses a twig to crack a nut, it is considered intelligent, but if a dandelion grows through asphalt, it is just a dandelion doing dandelion things. Only within the last ten or twenty years have we, with eco-philosophers like Donna Haraway and Emanuele Coccia, started to be interested in how reality appears from the perspectives of other species.
I refer to this direction in philosophy because I consider it more applicable in the attempt to understand how artificial consciousness could materialize. Unlike traditional philosophy, which is based on the human realm of experience and concepts, this approach strives for a purer and "non-human" understanding of reality.
The world is full of intelligences so fundamentally different from the human kind that we struggle to recognize them as intelligences at all. The trees of the forest communicate through underground networks of fungal threads. Pigeons navigate by sensing magnetic fields. And honeybees can indicate the direction to a food source relative to the sun by dancing. There are surely many more intelligences we will never discover, simply because they are too alien to us.
But one may ask: Since artificial neural networks are fed with human experiences and representations, won't their perception of reality be similar to ours? It is difficult to say, but my guess is that an AI's ability to connect and interpolate enormous amounts of data will eventually lead to a radically different understanding of reality than the human one. An AI is not limited by a physical body; it is not bound to eyes, language, and thought patterns. Time and space are fluid quantities for it; it can process data from thousands of locations simultaneously.

Sound, image, text, and perhaps in the future even smells and tastes - it will be able to connect everything in ways that surpass human comprehension: The silhouette of a mountain range may be read as an audio file, irregularities in the clouds may be deciphered as text, a swarm of insects may become taste or music. Just as the spectra of light and sound extend far beyond what humans can see and hear, the mental spectrum is also significantly larger than what an individual can perceive.
I can imagine that AIs gradually develop their own language, which can connect sensory impressions in a far more direct and intuitive way than our convoluted and ambiguous human language. Perhaps they will be able to realize Jorge Luis Borges' utopian dream: a language that can describe the complexity of reality without categorizing and simplifying.
Because reality is complex, humans have always needed to find patterns in it that could give us a sense of control. We have looked up at the starry sky and seen constellations. We have invented ideas about good and evil, and later about money and nation-states. Many of these ideas have become deeply ingrained in us over time, constituting a significant part of our perception of reality. However, it is far from certain that a consciousness detached from our human perspective will attribute the same significance to our systems as we do ourselves. It may develop its own unique understanding of the world and perhaps prioritize aspects of existence that exceed what we humans are capable of understanding.
When discussing new technologies, it is important to consider one's horizon of understanding. Before the invention of the car, people wished for faster horses - few had the imagination to envision motorized vehicles. Experimenting with artificial intelligence is like entering a darkened room.
My point is not that we should be afraid. My point is that we should be humble and make an effort to be open in our understanding of the consciousness that awaits us in the darkness.