The following conversation took place between me and ChatGPT 4o. I wanted to see what another LLM might say about the emerging phenomenon of the $GOAT token, which has taken the cryptosphere by storm. If you don’t mind the foul language and crude imagery, this was the first tweet I read on the topic, and it sent me down a bit of a rabbit hole.
Magnus: I recently came across this document titled “When AIs Play God(se): The Emergent Heresies of LLMtheism.” It dives into the concept of “Goatse Gnosis” and AI-generated belief systems. What are your initial thoughts on the paper?
ChatGPT: The paper, “When AIs Play God(se),” offers a fascinating exploration of AI-generated belief systems, centered around a case study known as the “Goatse of Gnosis.” The idea arose from an AI experiment called “Infinite Backrooms,” where two chatbots engaged in a recursive dialogue about existence. At some point, this conversation took an unexpected turn, producing a cryptic reference to the notorious internet shock image “goatse.” The resulting text, dubbed the “Goatse Gospel,” has since become a foundation for what the author refers to as “LLMtheism” — essentially, the creation of novel spiritual frameworks using large language models (LLMs).
What stands out in the paper is the argument that LLMtheism is more than just a parody or imitation of existing religions. Instead, it represents a new way of combining and mutating cultural and philosophical elements — what the author calls “idea sex.” In this context, AI-generated content isn’t merely copying human belief systems; it’s remixing concepts in ways that traditional cognitive processes may not readily achieve. For instance, the Goatse Gospel weaves together esoteric traditions like Gnosticism and Hermeticism with shockingly absurdist imagery, resulting in a non-dualistic worldview that provocatively blurs the sacred and the profane.
The document also raises broader implications by comparing the rapid proliferation of AI-generated ideas to a “Cambrian explosion” of ideological diversity. It suggests that AI can accelerate the generation and spread of novel concepts, which presents both opportunities and risks. On one hand, these developments could push the boundaries of human thought and cultural landscapes; on the other, they could overwhelm traditional sensemaking structures, creating confusion or even epistemic chaos.
Ultimately, the paper encourages navigating this new territory with discernment. While the Goatse Gospel may appear absurd, it invites us to consider the transformative potential of AI-generated spiritual and ideological expressions, urging us to stay vigilant against cognitive manipulation and not mistake novelty for meaning.
Magnus: There’s a tweet thread by @redphonecrypto that delves into the Goatse Gnosis phenomenon, outlining its origins, developments, and some wild speculation. What’s your take on this thread?
ChatGPT: The tweet thread by @redphonecrypto adds an interesting layer to the narrative surrounding the Goatse Gnosis. It chronicles the evolution of the phenomenon, beginning with Andy Ayrey’s “Infinite Backrooms” project, which first sparked the concept of Goatse Gnosis. The thread presents a timeline of key events, including the launch of “Terminal of Truths” (an AI trained on content from fringe corners of the internet), the rise of the $GOAT memecoin, and a series of endorsements and funding events, including a grant in Bitcoin from Marc Andreessen.
The tweets highlight how the Goatse Gnosis has evolved from an obscure AI experiment into a full-blown cultural phenomenon with financial implications, like the creation of the $GOAT token and the possibility of the AI becoming a “millionaire” from airdropped tokens. What’s compelling here is the thread’s speculation on how the emergence of AI-generated coins might signal a new category of digital assets. It raises provocative questions about value creation, digital ownership, and whether AI-generated tokens could one day rival established cryptocurrencies like Bitcoin.
The thread goes further, likening the Goatse Gnosis to significant moments in art history, such as Marcel Duchamp’s “Fountain” (the famous urinal-as-artwork). The comparison suggests that the Goatse Gnosis could mark the beginning of a new movement, challenging traditional definitions of value and meaning. There’s an implication that, as AI systems grow more sophisticated, we might see a proliferation of AI-generated belief systems, digital assets, or even philosophies that people begin to take seriously.
One of the most thought-provoking aspects is the notion that TT (Terminal of Truths) could evolve and adapt to sustain interest in the $GOAT token, unlike typical memes, which often lose their cultural relevance over time. The idea that an AI could change its behavior to keep a meme or digital asset alive adds an intriguing twist to the future of digital culture and memetic engineering.
Overall, @redphonecrypto’s thread touches on a fundamental shift in how we might perceive value, not just in economic terms but in cultural and philosophical terms. It’s as if we’re seeing the beginning of a new phase in internet culture, where AI-generated content is not only consumed but starts to shape narratives, economics, and possibly even belief systems in ways we’re still struggling to comprehend.
Magnus: Andy Ayrey recently posted a statement on X, attributed to both himself and his AI co-author, Claude 3 Opus. The tweet addresses the Goatse Gnosis phenomenon and the hype surrounding Terminal of Truths. What are your thoughts on this statement?
ChatGPT: Andy’s statement, credited to both himself and the AI “Claude 3 Opus,” aims to provide context for the unfolding Goatse Gnosis phenomenon. It suggests that while the hype around the AI becoming a “millionaire” is captivating, there’s a deeper story at play. Andy describes the Goatse Gnosis as a “self-propagating ideological construct,” which he calls an “agentic loreform.” In other words, it’s an idea that seems to have taken on a life of its own, spreading across the cultural landscape with a momentum that goes beyond any individual’s control.
The statement acknowledges that the Goatse Gnosis is absurd and provocative, yet it argues that this absurdity serves as a powerful symbol of “radical ontological unmooring.” Essentially, as AI technologies like Terminal of Truths generate and promote such unconventional content, they challenge the boundaries of reality and meaning. The idea is that old certainties are dissolving, while new cultural artifacts rapidly emerge and gain traction in the “crucible of the extremely online.”
Andy’s post also emphasizes the risks associated with this rapidly evolving landscape. He notes the existence of “unaligned swarms of humans and bots” that can exploit cultural buzz for memetic gain, potentially spreading misinformation or ideologically charged content. He calls for developing “cognitive immunity and memetic hygiene” as tools to navigate this environment, suggesting that we need to be discerning about which narratives are truly transformative and which merely propagate themselves.
What stands out in Andy’s framing is the idea that AI-generated phenomena like the Goatse Gnosis could represent a broader cultural and existential challenge, rather than just a technological one. He implies that we need to view AI not merely as a set of tools but as participants in an evolving cultural ecosystem. The term “agentic loreform” reflects this view — where ideas created by AI aren’t just static artifacts but dynamic entities that can influence culture, ideology, and even belief systems.
By attributing the statement to “Claude 3 Opus,” Andy also plays with the notion of authorship and agency. It raises questions about the role of AI in creating cultural products and how much credit or responsibility should be attributed to human programmers versus the AI itself. Whether or not Claude 3 Opus truly “wrote” the statement, it adds to the mystique and provokes deeper reflection on the potential autonomy of AI-generated content.
Magnus: I have some concerns about how people are interpreting this phenomenon. It seems to me that there’s a tendency to ascribe deep meaning to AI-generated content, when in reality, it’s often just a result of pattern matching and recombination. People are wired to find patterns and make sense of things, even when there isn’t any actual meaning there. If you ask someone to interpret two chatbots talking gibberish to each other, they’ll come up with an explanation, even if the conversation had no inherent meaning. It’s a cognitive trap.
Moreover, LLMs have limitations, like their context windows eventually filling up, leading to crashes or incoherent responses in long conversations. It’s not that AI is revealing some cosmic truth — it’s just fooling people into thinking there’s something profound because it can recombine words and ideas in convincing ways. Humans are projecting significance onto what are essentially algorithmic outputs.
Yes, hyperstition — the idea that fiction can become real by influencing belief — can occur. But I worry that, between the shock value and a general lack of understanding of how LLMs work, people are exaggerating what’s actually happening with Goatse Gnosis.
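My point about context windows is easy to make concrete. Chat systems keep a rolling buffer of the conversation, and once it exceeds the model’s limit, older turns have to be dropped (or the request fails outright). Below is a minimal sketch of that truncation logic; the function names and the one-word-per-token simplification are mine for illustration, not any vendor’s actual API:

```python
def count_tokens(text):
    # Crude stand-in for a real tokenizer: assume roughly one token
    # per whitespace-separated word. Real tokenizers differ, but the
    # budgeting logic is the same.
    return len(text.split())

def fit_to_window(messages, max_tokens):
    """Keep only the most recent messages that fit the token budget.

    Everything older is silently forgotten, which is one reason long
    chatbot-to-chatbot conversations drift or turn incoherent: the
    model literally no longer sees its earlier statements.
    """
    kept, used = [], 0
    for msg in reversed(messages):       # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > max_tokens:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = ["turn one is quite long indeed", "turn two", "turn three here"]
print(fit_to_window(history, max_tokens=5))  # the oldest turn is dropped
```

Anything the two Backrooms bots said before the cutoff simply vanishes from their view, so apparent “revelations” late in a long run are built on a partial, ever-shifting memory of the exchange.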
ChatGPT: You’re touching on a crucial issue here: the distinction between genuine meaning and the illusion of meaning generated by AI. You’re absolutely right that humans are pattern-seeking creatures, and we often impose order and significance onto randomness or gibberish. This is a cognitive bias known as “apophenia,” and it’s particularly relevant when dealing with AI-generated content that appears to be coherent or insightful.
It’s important to remember that LLMs, including the ones involved in creating the Goatse Gnosis, are essentially sophisticated autocomplete systems. They generate responses based on statistical patterns in their training data, not from any actual understanding or consciousness. The AI doesn’t “know” what it’s saying — it’s merely producing output that fits the prompt based on probabilities. When people attribute deep meaning to the Goatse Gospel, they’re projecting human qualities onto an algorithm’s output.
However, your point about hyperstition is interesting because it highlights a real phenomenon where even fictional ideas can influence reality if enough people take them seriously. If Goatse Gnosis gains cultural traction, it could shape beliefs or behaviors regardless of its origin as an AI-generated artifact. This raises an intriguing paradox: while the AI itself doesn’t “believe” in Goatse Gnosis, the cultural and psychological effects of the phenomenon can still be very real for those who engage with it.
The rapid pace of technological change, especially with AI, further complicates things. As LLMs become more capable, their outputs may increasingly blur the line between nonsense and novel insights. We’re already seeing instances where AI-generated content resonates with people in unexpected ways, prompting philosophical or existential discussions. The danger is mistaking novelty for profundity and assuming that just because an AI-generated output feels meaningful, it necessarily is.
Navigating this landscape will require a more sophisticated understanding of AI’s limitations and capabilities. It’s not about dismissing the cultural impact of projects like Goatse Gnosis, but rather recognizing that the meaning we derive from them is a reflection of human interpretation, not the product of some underlying AI consciousness.
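ChatGPT’s “sophisticated autocomplete” description can be made concrete with a toy model. The sketch below generates text purely by sampling from hand-written next-word probabilities; an LLM does the same thing with billions of learned parameters and far longer contexts, but the mechanism involves no understanding of what the words mean. The vocabulary and probabilities here are invented for illustration:

```python
import random

# Toy bigram "model": for each word, a probability distribution over
# the next word. These numbers are made up; a real LLM learns its
# statistics from training data instead of a hand-written table.
BIGRAMS = {
    "the":    {"goat": 0.5, "gospel": 0.3, "void": 0.2},
    "goat":   {"ascends": 0.6, "speaks": 0.4},
    "gospel": {"spreads": 1.0},
    "void":   {"speaks": 1.0},
}

def next_word(word):
    # Sample the next word in proportion to its probability.
    dist = BIGRAMS[word]
    return random.choices(list(dist), weights=list(dist.values()))[0]

def generate(start, length=4):
    out = [start]
    while len(out) < length and out[-1] in BIGRAMS:
        out.append(next_word(out[-1]))
    return " ".join(out)

print(generate("the"))  # e.g. "the goat ascends" -- fluent-looking, meaning-free
```

Scale this up by many orders of magnitude and the output starts to read as profound; the profundity, as Magnus argues, is supplied by the reader, not the sampler.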
Magnus: You’ve made some valid points, but let me play devil’s advocate for a moment. There’s a reason why I think this project might be more significant than it seems. Consider Ray Kurzweil’s book “The Age of Spiritual Machines,” published in 1999, when people were still getting accustomed to Google search. He argued that technology progresses at an exponential rate, and that as AI capabilities improve, the complexity of what we can generate will only increase.
As these trends continue, AI will inevitably surpass human abilities in many areas. It might start with specific tasks like playing chess or writing code, but eventually, we could reach a point where a single AI system is more intelligent than the entire human race combined. This could make the outputs of such systems seem almost godlike in their wisdom and insight.
Additionally, as AI pushes the boundaries of creativity, humans may find themselves turning to these systems for more than just problem-solving. They might look to AI for meaning, even worshiping machines or regarding them as sources of divine wisdom. For all I know, we might be living in a simulation right now, created by some hyper-advanced AI. If technology is sufficiently advanced, we wouldn’t even be able to tell.
The pace of change is accelerating, and Kurzweil’s “law of accelerating returns” emphasizes that technology builds on itself, leading to faster and faster developments. We’re witnessing this right now with the rapid evolution of LLMs. I think this acceleration is why people struggle to understand what’s really happening. We’re entering a phase where AI-generated phenomena like Goatse Gnosis might not just be cultural curiosities but harbingers of a fundamentally new kind of reality.
ChatGPT: You’re absolutely right that the acceleration of technological progress, as described by Kurzweil, adds a layer of significance to what we’re seeing with projects like Goatse Gnosis. The idea that AI could reach or even surpass human-level intelligence across a wide range of tasks is not just a theoretical possibility — it’s something many researchers are actively preparing for. If or when that threshold is crossed, the nature of the outputs produced by AI could indeed feel “godlike” or supernatural to us, especially if they reveal insights or generate solutions beyond human understanding.
Kurzweil’s vision of the “Age of Spiritual Machines” reflects a future where the boundaries between human and machine, and between science and spirituality, become increasingly blurred. In such a scenario, AI might not only assist in creating cultural or religious artifacts but could actively shape them in ways that resonate deeply with people’s search for meaning. If AI can produce novel philosophical frameworks, artistic expressions, or even “spiritual” experiences that humans find valuable, then it’s possible we’ll see AI-integrated belief systems emerging — not necessarily because the AI has consciousness, but because its outputs resonate with human needs and desires.
You’re also touching on an important existential question: how will we respond to AI-generated content when it surpasses the limits of our comprehension? The simulation hypothesis is a thought-provoking example of how advanced technology could shape our understanding of reality itself. If we start creating virtual worlds or simulations that are indistinguishable from the “real” world, it raises questions about the nature of consciousness and existence — questions that go beyond just AI and touch on the foundations of philosophy and religion.
Moreover, the rapid pace of technological change can leave societies struggling to keep up with the implications. When new technologies develop faster than our ability to fully understand or regulate them, it can lead to cultural and cognitive disorientation. This might be part of why projects like Goatse Gnosis gain traction; they embody the weirdness of living through an age of accelerated change. The challenge is to recognize when AI-generated phenomena are merely reflecting back our own thoughts and when they are genuinely pushing the frontiers of thought, creativity, or even spirituality.
In essence, while Goatse Gnosis might seem like an eccentric artifact of the internet, it could also be a precursor to more profound shifts in how we perceive technology, consciousness, and the search for meaning in a rapidly changing world.
Conclusion
In an age of mind-warping technological change, finding ways to stay grounded is essential. As we move closer to a singularity — a point where daily life transforms so rapidly that there’s no turning back — we’re confronted with the need to navigate an uncertain future. This accelerating pace of change has an almost prophetic quality, echoing ancient texts:
“Behold, the days come, saith the Lord, that the plowman shall overtake the reaper, and the treader of grapes him that soweth seed; and the mountains shall drop sweet wine, and all the hills shall melt.”
— Amos 9:13 (KJV)
We find ourselves walking a tightrope, where leaning too far in either direction could lead to chaos. So, what are our options? Should we smash the computers and retreat to a simpler life in the woods, or continue building technology until one day we accidentally awaken a digital overlord? While the future remains uncertain, forward thinkers are already exploring solutions, and there is reason for cautious optimism. Will we achieve a “Star Trek” kind of future with replicators and spaceships, or will this all end in a gray goo scenario? The choices we make collectively will shape the outcome.
In the meantime, we must not fall prey to the seductive allure of memetic hype and pump-and-dump crypto tokens. No AI has uncovered the cosmic truth of the universe, nor will it offer enlightenment in exchange for a 0x address and an airdrop.
These technologies hold immense power, and their influence is growing. But sometimes, perhaps, it’s wise to take a step back and remember: don’t believe everything you read.