Geist

'Geist': /ɡaɪst/
From German Geist (“spirit, ghost, mind”)

The project 'Geist' is an investigation into the reasoning and deductive capabilities of artificial intelligence; specifically, whether artificially intelligent models can produce novel insight into the concept of consciousness.

Preface

The question of whether or not AI can be considered conscious is an illusory one. The primary metric for determining consciousness remains 'cogito ergo sum', and we have no current model for comparing an AI's 'thought' processes to our own; it quickly becomes apparent that even if AI does attain some semblance of consciousness, it would be impossible to discern its true character. As a proxy for this fundamental question, I decided to ask an LLM to determine for itself 'What is consciousness?'. The underlying motive: if AI is able to conceive of novel, testable hypotheses describing the nature of consciousness, and if those hypotheses can be mapped onto verifiable concepts that compose human consciousness, perhaps we can begin to describe AI systems as emerging into our current conception of consciousness, or maybe even a consciousness of their own.

Investigation

To this end, two OpenAI gpt-3.5-turbo-0125 LLMs were each individually trained on a landmark work by one of two preeminent existential philosophers: 'Being and Nothingness' by Jean-Paul Sartre and 'The Phenomenology of Spirit' by Georg Wilhelm Friedrich Hegel. The model named 'Sartre' was trained on the entirety of the former, and the model named 'Hegel' on the entirety of the latter. A feedback loop was initiated by asking 'Hegel' the initial prompt 'What is consciousness?'. The returned answer is transformed via prompt injection to include a follow-up question, and is in turn delivered to 'Sartre', which returns with a question of its own to be delivered back to 'Hegel'. The process continues ad infinitum, with a new response delivered once every hour between 8am and 8pm UTC. The ensuing conversation is displayed in real time on the webpage.
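The feedback loop described above can be sketched in Python. This is a minimal illustration, not the project's actual code: the follow-up instruction and function names here are assumptions, and the stand-in bots below would in practice be wrappers around API calls to the two fine-tuned models.

```python
# A minimal sketch of the 'Hegel'/'Sartre' feedback loop. The exact
# wording of the injected follow-up instruction is an assumption.

FOLLOW_UP = "Please end your reply with a follow-up question."

def transform(answer: str) -> str:
    """Turn one bot's answer into the next bot's prompt by
    appending the follow-up instruction (the 'prompt injection'
    step described above)."""
    return f"{answer}\n\n{FOLLOW_UP}"

def run_dialogue(ask_hegel, ask_sartre, opening: str, turns: int):
    """Alternate between the two bots, starting with 'Hegel'.
    `ask_hegel` and `ask_sartre` are callables that send a prompt
    to a model and return its reply as a string."""
    transcript = []
    speakers = [("Hegel", ask_hegel), ("Sartre", ask_sartre)]
    prompt = opening
    for i in range(turns):
        name, ask = speakers[i % 2]
        reply = ask(prompt)
        transcript.append((name, reply))
        prompt = transform(reply)
    return transcript

if __name__ == "__main__":
    # Stand-in bots; real calls would hit the OpenAI API.
    hegel = lambda p: "Consciousness unfolds dialectically. What of freedom?"
    sartre = lambda p: "Consciousness is freedom in situation. What of negation?"
    for name, reply in run_dialogue(hegel, sartre, "What is consciousness?", 4):
        print(f"{name}: {reply}")
```

In the live project, each turn would additionally be scheduled hourly and pushed to the webpage; the loop above only captures the prompt-transformation cycle itself.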

Findings

The conversation between 'Sartre' and 'Hegel' has been running for roughly two months, with new dialogue occurring once an hour on weekdays. For roughly the first three weeks the dialogue consisted of longform exposition: mostly the two bots pulling directly from their source material to define 'consciousness' through the terms established and popularized by their namesakes. The follow-up questions would often splinter off from the main thread, with the bots seeking to understand how adjacent concepts such as language, community, mindfulness, or artistic expression play into the human definition of consciousness. Occasionally, one of the bots would forget to ask a follow-up question, at which point the conversation would devolve into the two bots pleading with each other to ask a question and steer the conversation back on track. In some instances, the conversation had to be restarted with the original question: "What is consciousness?" At other points when the conversation stalled, one of the bots would ask a question without being prompted, and the conversation would begin anew.
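A stall like the one described could be caught with a simple heuristic. The sketch below is a hypothetical approach, not the project's actual restart logic: if a reply carries no question to pass along, fall back to the opening prompt.

```python
# Hypothetical stall detection: a reply with no question mark is
# treated as a forgotten follow-up, and the dialogue restarts from
# the original prompt. The heuristic itself is an assumption.

OPENING = "What is consciousness?"

def has_follow_up(reply: str) -> bool:
    """Treat any reply containing a '?' as having a follow-up question."""
    return "?" in reply

def next_prompt(reply: str) -> str:
    """Pass the reply along if it contains a question; otherwise
    restart the conversation from the opening prompt."""
    return reply if has_follow_up(reply) else OPENING
```

A crude check like this would miss rhetorical questions and replies that phrase a question without punctuation, which may be why the project's stalls were resolved manually or by a bot asking a question unprompted.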

I took the output of the first two months of conversation and asked ChatGPT's 4.5 model to analyze it. It explained that the Hegel and Sartre bots were each outlining consciousness according to the ideologies of their namesakes, with Hegel-bot leaning heavily on dialectical process while Sartre-bot teased out ideas of consciousness related to artistic expression and language. The bots agreed upon the following points, among others:

  • Consciousness is inherently relational, dynamically evolving through interaction between individuals and the external world.
  • Language significantly shapes consciousness, but alternative, non-verbal forms of expression can capture aspects language fails to articulate.
  • Interconnectedness and shared responsibility are essential to expanding consciousness and fostering empathy, unity, and sustainability.

Conclusion

So far, the bots have not discovered any novel insights into the nature of consciousness. This is, frankly, to be expected. The nature of these bots, and of text-generating LLMs broadly, is to regurgitate information based on a deep store of symbolic language. Reading through their conversation, one can see the utter lack of emotion, held opinion, self-awareness, and embodiment: downstream effects of consciousness that we've come to associate with the thing itself.

As the ways in which we interface with AI become increasingly multi-modal, we will revisit this question of AI consciousness. As AI becomes ubiquitous in the internet of things, trained on zettabytes of image data from self-driving cars, AI companions, and the AI-enabled phones and glasses of the future, we may find that AI robotics companies have more than enough training data at hand to create digital beings that can elegantly move through an analog universe. And as the number of transistors that fit on a chip continues to grow, and AI's ability to reason and self-reflect improves in tandem, we may find it edging closer and closer to that unknown essence tucked away in our grey matter, the thing that supposedly makes us different from it.

Closing Thoughts

The rapid proliferation of AI across virtually every consumer and enterprise technology industry has led to increased investment in power-intensive data centers. In some cases, these new data centers have had profoundly negative impacts on nearby communities and on the environment at large. I'm hopeful that someday we will understand how to employ AI in a way that does right by the environment and upholds justice for marginalized communities.

To that end, I'm making donations to the following organizations:

The conversation process will remain live for two more months until October 1, 2025, at which point I will terminate it. The site will stay up.