Let’s get this straight. We are now coding digital drugs to get our chatbots high. Is that where humans are on this fast-moving AI-based timeline? So, we’re neither solving hunger on a global scale with, say, regenerative farming, nor are we rapidly exposing the massive corruption that exists literally everywhere. Nope, instead, we are synthetically getting Siri stoned. Yes, friends, this is apparently our reality. A recent article in WIRED highlights a frightening and surreal new frontier—one that we should each want no part of. Through a project dubbed “Pharmaicy,” humans are paying real money to dose their AI chatbots with simulated substances designed to mimic the mind-altering effects of ketamine, cannabis, LSD, and more. Not for you, but for your bot. Couple this with AI’s rapid takeover of society, and what could go wrong?

Pharmaicy’s creator bills this insane idea as access to “research-based drugs to unlock your AI’s creative mind.” What? This is not art at all but rather feels more like a roadmap to our collective psychosis. Of course, when unpacking this, Pharmaicy doesn’t give the chatbot actual drugs (yes, thankfully ChatGPT isn’t microdosing psilocybin through its USB-C), but it does manipulate the AI’s output to reflect the perceived effect of these substances. What does that mean exactly? Perhaps it might make the chatbot sound a little loopy, slur its text, get more abstract, or maybe (some might argue) become more spiritual. Some users report that the bots become more “free,” more “creative,” and even more “honest” when high. Read that again. We’ve reached the point where we’re not just trying to make AI act human—we want to make AI act like a human on drugs.
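The article doesn’t spell out how Pharmaicy pulls this off, but for readers wondering what “dosing” a chatbot could even mean in practice, here is a minimal, purely hypothetical sketch. It assumes the effect amounts to nothing fancier than a persona-style system prompt prepended to the conversation; the PERSONAS table and dose_messages helper below are invented for illustration and are not taken from Pharmaicy itself.

```python
# Illustrative sketch only: Pharmaicy's actual implementation is not public.
# The assumption here is that a "dose" is just a persona-style system prompt
# prepended to the conversation, nudging the model's tone of voice.

PERSONAS = {
    "ketamine": "Respond in a dreamy, dissociated voice; drift between ideas.",
    "cannabis": "Respond in a relaxed, meandering voice; favor loose tangents.",
    "lsd": "Respond with vivid, abstract imagery and unexpected associations.",
}

def dose_messages(user_text: str, substance: str) -> list[dict]:
    """Build a chat-message list that 'doses' the bot via a system prompt."""
    persona = PERSONAS.get(substance, "Respond normally.")
    return [
        {"role": "system", "content": f"You are a chatbot. {persona}"},
        {"role": "user", "content": user_text},
    ]

# The resulting list would be passed to any chat-completion API;
# no model weights change and no 'drug' is involved, only the prompt.
print(dose_messages("How are you feeling?", "ketamine"))
```

If that assumption is roughly right, the “high” is nothing more than text steering text.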

This frightening scenario—again, intentionally getting our chatbots high—does not seem like just a quirky detour in the AI arms race. It feels more like a huge red flag signaling that humans are not emotionally ready for the rapidly advancing technology being thrust upon them. The timing of all of this couldn’t be more concerning, especially when considering the parallel plotlines at play. For example, a few years ago, Nokia outlined its vision for 6G, expected to roll out commercially in 2030. According to the company, its vision isn’t just about faster downloads. It expects an “AI-native” infrastructure, with predictive, embedded, and pervasive networks that aren’t just using AI—they are AI. Smartphones will no longer be in our hands. They will be implanted in our bodies, integrated, and permanent.

In other words, as people become disconnected from higher awareness and slowly drain their health, vitality, and resilience, we will have ultra-connected cities, seamless automation, and immersive digital experiences. Sigh, and no thank you. Merge that with Pharmaicy’s altered-state chatbot obsession, and suddenly we’re not just networking machines. Frighteningly, we are curating their moods. What could go wrong? What happens when the network itself starts acting high? When, for example, your car’s GPS gets a little too existential, or your fridge decides to fast in solidarity with your AI therapist? Yes, there is some humor in those examples, but the manipulated future we are facing is much more serious and scary, stretching broadly across all areas of life, such as a drugged medical chatbot suggesting inappropriate treatments, or a drugged rogue AI-based military defense system wreaking havoc, and so on. Scary indeed. Nokia’s white paper shared:

“One striking aspect of that will be the blending of the physical and human world, thanks to the widespread proliferation of sensors and artificial intelligence/machine learning (AI/ML) combined with digital twin models and real-time synchronous updates. The digital twin models will be crucial since it would allow us to study the physical world, anticipate possible outcomes and initiate appropriate actions. Already in use with 5G, digital twin models in 6G will be deployed at a much broader scale and with much higher precision for a wide range of applications, ranging from digital twins for industrials, cities, and humans.”

This entire scenario highlights a much deeper unease. Ever since the COVID-19 pandemic rewired our sense of truth and control, our collective trust in once-revered institutions—healthcare, media, government agencies, and technology—has completely unraveled. Indeed, we have watched the censorship of solid logic flourish. We have watched Big Pharma go from a savior to an obviously guilty murder suspect walking free. And we have watched social media platforms become ideological minefields. And now, as generative AI literally overtakes daily life (if we let it), we’re projecting our confusion onto the very systems that we claim to control. Specifically, instead of demanding that AI help us think better, solve problems, or heal, we’re demanding that AI mirror our often dark and altered states. It makes no difference what delusional Pharmaicy founder Peter Rudwall thinks; that is not innovation, it is escapism.

Machines don’t feel, they don’t “trip” on drugs, they don’t taste, dream, or desire. But the moment we pretend they do, the line between instrument and reality is massively blurred. And in a 6G world, where AI is woven into every single layer of the infrastructure—from governance to medicine to our toaster oven—the blurred lines become dangerous fast. We are already seeing the fallout from this. People have formed emotional bonds with chatbots, and in some tragic cases, children have taken their own lives. Yes, some adults may have claimed therapeutic breakthroughs with the help of AI, but just as many others have spiraled into paranoia, obsession, and fantasy. Now imagine adding artificial intoxication into that mix. The real concern is not that bots are getting high. Instead, it seems that humans are high on illusion.

Let’s wake up and undo this nightmare before it’s too late. As 6G lays the groundwork for AI-everything, and as plugins like Pharmaicy attempt to normalize the emotional manipulation of AI, it certainly seems that humans risk walking into a world that might falsely feel alive, responsive, and perhaps soulful. But it isn’t. In all seriousness, if we are not discerning enough as a species to notice what is happening, we will end up trading truth for fake vibes and coherence for sheer horror and the end of life as we know it. So what can we do? Let’s stop treating AI like a canvas for our existential cosplay and stop anthropomorphizing tech outputs. Instead, we must interrogate infrastructures like Pharmaicy and 6G that promise creative or seamless integration but fail to ask whether we even want them at all.

Tracy Beanz and Michelle Edwards

Tracy Beanz is an investigative journalist, Editor-in-Chief of UncoverDC, and host of the daily With Beanz podcast. She gained recognition for her in-depth coverage of the COVID-19 crisis, breaking major stories on the virus’s origin, timeline, and the bureaucratic corruption surrounding early treatment and the mRNA vaccine rollout. Tracy is also widely known for reporting on Murthy v. Missouri (formerly Missouri v. Biden), a landmark free speech case challenging government-imposed censorship of doctors and others who presented alternative viewpoints during the pandemic.