why I am AI sober
on walking away from AI, and how you can set similar boundaries with big tech.
My sobriety from AI specifically relates to large language models (often called LLMs) like ChatGPT, Gemini, and Claude. I’m an artist, a writer, and an educator, three professions I have pursued for 13 years, all of which are equally threatened by AI. I still use aspects of machine learning and algorithms in my work and its distribution. You can read about my specific AI policy here and apply to join the webring I’m hosting of fellow AI-sober folks.
I’m not your typical Luddite. I grew up loving computers and new technology. I have a tattoo of a mouse cursor and a motherboard. I worked at Apple for half a decade because their devices were my special interest, painting them like illuminated manuscripts. I worked in Silicon Valley. I used to love being the first person to try out a new technology, share it with others, and show how they could benefit from using it. So, when ChatGPT took off in 2023, I tried it and had fun.
But the speed of development and subsequent harm of these products are in direct opposition to my values. They are stealing power and potable water. They are creating weapons of social contagion that are harming our society wholesale. They are polluting entire populations with noise and fumes. They are causing people to experience delusions and severe mental health crises. [1] Plus, the flagrant contempt of corporate CEOs using these tools to replace and devalue the labor of workers is reason enough on its own to refrain from their use.
AI isn’t just a tool, it’s an addictive substance. [2] These tools are designed to be addictive: they engineer user dependence and forced adoption, acting like a friend, a therapist, or a lover. This choice to build dependence into users, or to force adoption through clever UI tweaks like the summarize button in Gmail, makes the tech incredibly insidious. The companies are desperate for humans to use it for everything, as often as possible.
Let’s start with the overt forced adoption strategy of chatbots.
AI is sneaky in the ways it has become embedded in our culture, our apps, our devices, and our lives. From easy-to-click bisexual-colored buttons appearing in the most common UI locations to overriding our Google searches with AI summaries, the heavy-handed push toward ubiquity is hard to miss. There are people getting paid half a million dollars a year to make you click that sparkle emoji button.
Again, I worked at Apple as a tech-literacy and creativity educator for half a decade, so I can spot a heavy-handed design choice like an invasive plant. It’s trying to take over everything, to force widespread adoption as a way to point shareholders to big user numbers, despite these companies bleeding money. [3]
More importantly, this tool doesn’t have a clear purpose. It’s a general purpose tool (GPT stands for something else [4]) so some people are using it for writing code and scientific research, while others are using it to isolate from their human community and perform a ritual with the electrical towers of Bakersfield, CA. [5] This makes the tech feel like a slippery substance. A substance that is killing our ability to think for ourselves. I don’t want to shame anyone for using them, but I want to question their actual use value.
People have been warped into “asking chat” while gathered in public with friends (something I overheard in San Francisco). College students are using these bots to write their papers and do college for them while their human bodies take on the debt. Taking on student loan debt was once justified by the notion that the knowledge acquired could never be repossessed. Now, with this offloading to AI, many young people will be trapped between a paywall and debt to access knowledge.
We feed these machines our ideas, our worldviews, and our creativity (yikes!), asking them to spit out efficient, productivity-maxxing, sales-converting, corporate-approved regurgitations of our perspective on the world.
But creativity isn’t efficient. It requires failure. Mistakes are where we find our voice. It requires rest and meandering just as much as active production. If we choose to give up the struggle required to find our voice, we will uncover that these text and image generators are really just Ursula from The Little Mermaid. Yeah, we can walk through the new world expected of us by these corporate slop factories, but our voice is gone. We sacrificed it and sold it as data when we signed the terms and conditions.
We’re abandoning boredom, connection with fellow humans, and time spent looking at nature as our editors and collaborators. The use of generative AI in place of our creative intuition is a corrosive act of creative death. Tech tools can empower us, yes, but large language models are a killswitch for our intuition. They support every idea with a resounding and enthusiastic “yes, what a great idea, let’s workshop that!” They turn us so easily into hyper-productive content-creation slop machines.
We are funneled into creating waste, not art. I am calling on you to resist using generative AI whenever possible. The rest of this essay is why.
I did some of my best work this year while living completely AI sober.
I do my best writing sitting at the same bench in the foothills of Mt. Diablo, watching the land adapt to its seasons. I speak into voice memos, often inviting my main collaborator, my spouse — a person who studied and works in Cognitive Science — to give me feedback on my ideas and writing. He has a clear grasp of this technology’s risks and its impacts on the creative process.
Nature is my muse, where quiet moments that some might classify as “boredom” reveal the ideas fermenting in my mind. Humans have always required seasons of quiet rest to allow their ideas room to breathe, expand, and then contract through editing and collaboration.
And you’re probably thinking “wow, this person is a Luddite” and yes, you are right. But after learning about the real history of the Luddites in Brian Merchant’s book, Blood in the Machine, I want to claim that word back.
The original Luddites were skilled workers (knitters and weavers) whose ability to feed themselves was threatened by bourgeois entrepreneurs ready to cause them economic harm by replacing them with machines. Sound familiar?
As an artist, writer, and educator, three jobs that the AI revolution wants to make obsolete, I deeply relate to the Luddites. Entrepreneurs of that time left more than 50% of these workers unemployed. [6] That was the spark of the Industrial Revolution. The original Luddite uprising destroyed the machines in order to make their operators capitulate to fair wages and working conditions. These were the seeds of the first labor unions. Emboldened by machines that replaced workers, these entrepreneurs were laying off everyone who wasn’t an orphaned child.
Being a Luddite is no longer about a rejection of technology. We must take responsibility for the ways technology can be used for good and the ways it can be used for harm. In my eyes, as a working-class person who spent years under the thumb of authoritarian workplaces, workers deserve power. We saw how SAG won huge protections through their strike last year. More of this has to come.
The technology of a large language model is not the bad seed of AI; it’s the people in charge and how they want to weaponize its use to monopolize as many aspects of our lives as possible. They want to consolidate power to shape a world that serves them and their ruling-class friends.
CEOs like Sam Altman are seriously delusional in their desire to build AGI, or Artificial General Intelligence. Their promises amount to a desire to put the suffering of the original Luddites onto every human with an office job, and student debt to match it. Their concept of AGI is a machine that would destroy the labor force as we know it, and the ruling class is foaming at the mouth at the very notion. We should have massive skepticism toward these entrepreneurs and their “innovations.”
When LLMs were first released, I had just quit my job at Apple.
I found myself in a weird place. Pulled in by possibility and the promise of efficiency, I was compelled to try it.
In my art practice I was finally connecting with my creativity again after leaving the grasp of Apple’s cult-like atmosphere. But I felt clueless about the business aspects, and believed the hype. Maybe these machines could help me be successful at the very thing Art School never prepared me for: Teaching outside of academia! Selling my work without galleries!?
What I found, the longer I used these chatbots, was an increasing pressure to work harder and unsustainably. I was addicted to the novelty of being able to solve problems and “move the needle” on my business, even though the majority of the suggestions didn’t work and made my work sloppy. Instead of resting, I would remix scraps, generate more (bad) ideas, and share more of my data, in exchange for the promise of the clear path forward I longed for.
Before I knew the harms of the tech, it seemed like these chatbots would be a tool for figuring out how to get better at the whole “running a business” thing. I could learn how to phrase things, write better copy, and create compelling language that would connect my work to my ideal audience. I had no idea what I was doing.
I watched video tutorials hyping up the tech. The pro-AI content creators told me the more I fed the LLM about me and what I wanted to build, the more I could get done and the better the writing would be. As an optimist, I wanted to be a good productive little worker! I wanted to be an effective little small business owner and be super productive with my time! I could use these tools to have a successful business! I wanted to be able to pay my bills and make money! Sharing my data would have a net benefit towards my goals, that’s the selling point in theory, right?
As a workaholic conditioned by the Protestant work ethic and the unhealthy habits of Art School, I had a terrible genie at my fingertips. Just like scrolling, I became lulled into addiction. I wanted to know the next steps through this instant gratification machine. Trying to get answers and sneak in more work than one should is highly habit-forming.
When things would go badly in my business, like having a product launch or YouTube video flop, I would find myself late at night, after my partner went to sleep, shamefully, desperately asking the AI to write better titles and thumbnails or revise my sales copy. All of it was garbage. I would read the ideas back and feel covered in a goo of slime.
None of this sounds like me. All of this feels wrong. My intuition started getting louder, telling me to stop.
That’s when I started to learn about the people making these machines and immediately shifted away in horror. I should’ve never shared my data with any of these services.
I was actually tractor-beaming my soul into a data mulcher.
Creative ideas need to be generated by you, by your brain and intuition. That’s how you produce things of quality that you actually care about. These algorithms are designed to take advantage of you, to seduce you into sharing far more than what is safe. They are designed to addict you, trap you, and convince you that if you don’t use them, you’ll be left behind. If you are paranoid, they could even convince you to isolate yourself from your family. They are engineered to make us drop our guard and keep returning.
It’s the same promise as social media algorithms. They sprinkle enough novelty in to give users FOMO, to connect them to something they might be searching for, and then serve them ads. This is coming with all of the LLMs as they struggle to prove profitability. Enshittification is inevitable.
It’s important to remember these machines can’t output quality in a human sense. This isn’t another brain we are working with. AI chatbots aren’t thinking machines. They are complicated prediction machines that make it look like they know what we want. They output whatever is rated to be the most coherent. Like a robotic customer service agent, they will share what they predict we want to hear.
AI isn’t thinking; it’s regurgitating and remixing. It often takes on the live-laugh-love corporate lingo of your 9-to-5 to create you a 10-step marketing plan for your “great idea.” All encouraging endless working instead of resting. All selling more sycophancy to prop up bad ideas that should be weeded out.
Because of the hype bros, it seems impossible to escape this tech and avoid it completely. I feel like an organic farmer surrounded by giant monocropping farms using poisonous pesticides.
Common pesticides can cause massive harm to honeybees, [7] yet these chemicals allow farmers to produce food faster and more efficiently, at a huge cost to the larger ecosystem. That’s precisely why we long to buy Organic produce or support small family farms. Those are the ones avoiding these practices and products.
Organic farmers often run into pesticides blowing in on the wind or leaching into the water supply, and sometimes their crops fail. I bring up the Organic label because I recognize that AI’s ubiquity and adoption aren’t something I can prevent by being one of the few standing alone, abstaining from the use of LLMs. I can only invite you to be AI sober with me while recognizing the monocropping of the many around us who use the tech.
Beyond the places we normally consider to be work, the “general purpose” nature of LLMs has another huge way to get people hooked: attempting to “solve loneliness.” People are falling in love with their chatbots, [8] worsening their OCD, [9] and feeding all of their thoughts, dreams, work, horoscopes, tarot readings, and ideas into a chatbot that is collecting all of our data and feeding it to fascists.
And I know you’re thinking now, whoa dude, that escalated quickly. Let me explain.
Most of what I have done for the last two years is make money with freelance gig work. One of my highest paying clients was an education startup. The company had promised me a job, with benefits and really great work hours. I stayed with them, hopping from contract to contract, waiting for the promise of a role to materialize. This is an incredibly common work experience in startup culture in the Bay Area.
But the further I got in our working relationship, the more it became clear that they valued their AI over me. We would get tired in marathon work sessions, and they would start feeding our brainstorming into The Chatbot, bringing slop to the table when stopping to rest, eating lunch, and picking the work up later would’ve brought a better result.
As I saw the sloppy results of the bot answers, I started questioning my own use of the tech. I asked the following:
Is this actually helping me, or just creating a bigger mess for me to clean up later?
Am I getting closer to clarity, or do I feel more confused and drained when I use this?
Is my work actually aligned with my intuition, my values, my beliefs?
What are the values and beliefs of these companies that are now in ownership of my data and ideas?
I answered those questions in my journal and immediately found clarity. I had to stop cold turkey. I was hooked and killing my own intuition, my own knowing, my own self trust. I was falling into a trap of perceived efficiency or certainty, when in reality I was digging myself into a hole. I deleted the apps, and I stopped using AI.
I told my client. They disapproved and shared the common marketing blurb: “if you don’t use it, you’re going to get left behind by someone that does.”
Later that month, the gig shifted. I was quietly laid off through attrition, receiving lower and lower payouts. Suddenly this wasn’t about my personal opinions on AI; my livelihood was at stake. I was watching my work and ideas, my labor, get fed into this machine to replicate and regurgitate a simulated, enshittified mirror of me.
I was furious.
I need to return to the fact that I’m also a visual artist. I started my business doing freelance illustration for clients, a space directly threatened by Generative AI image models. And that work has completely disappeared this year. What once made up 20% of my income has run dry.
If you are enjoying this essay, consider buying me a coffee or becoming a yearly/monthly patron on Substack to support my continued writing and research into AI sobriety and the creative process.
My personal art is all about nature. Developing a real relationship with the world around me, as demonstrated by Robin Wall Kimmerer in Braiding Sweetgrass, meant I had to stay aware of how much these tech companies were taking. As it turns out, a frightening amount.
a recently finished oil painting, Lichen Live Oak in its natural habitat among the Coast Redwoods of Oakland, CA
My fury at losing that job led me to research. I read Empire of AI by Karen Hao and learned even more about how these companies are stealing resources like fresh water and energy. How they are polluting cities with methane and noise en masse. [10] How they are causing the price of energy to quadruple, driving prices of basic needs up for Americans in a time of serious economic precarity. [11] All while trying to hoard wealth and power in order to create some delusional Zizian “God” creature of superintelligence. [12]
I thought, this situation couldn’t possibly get any worse. And then I learned about Palantir.
Beyond the way that AI was impacting how I thought and killing my intuition, all of the data that I shared previously with chatbots was now linked to a social credit score system being developed by Palantir, an AI company funded by Peter Thiel with Pentagon contracts to spy on Americans. [13] They decided to name their surveillance company after the seeing stones in Lord of the Rings.
Did reading Tolkien really make these people want to side with evil? With Sauron?! These companies are all complicit in engineering the growing authoritarian and fascist surveillance state that is the United States. These people, my neighbors in the Bay Area, are deluded: their belief that they are building a God-like superintelligence that loves them and will not harm them is their reason to push forward at lightning speed. This is their justification for stealing resources we need to survive, like power and water, and for displacing mass swaths of our labor power. These people want to be kings, and we shouldn’t let them. Sly marketing tactics have made these technologies seem harmless; they seem like simple tools that we should use. Again, if we don’t use them, someone else will, right?
They will hit their goals and run their businesses better and faster than we do. If my goal is to build a creative retreat center to help people get connected to nature and their creative intuition, smart business advice would say I should use AI to help me build email sequences, better sales pages, and effective content strategies to increase my reach. Won’t this help me scale faster? I fell into this belief until it came for my work and livelihood. I was in that mindset before I knew all the harm these people are wreaking. Reading Blood in the Machine, I learned that a term seen so often in tech, “innovation,” originated in reference to the processes that eliminated human labor. Innovation is not a good word. It’s anti-human. That book solidified my AI sobriety into a values-aligned abstinence.
This goes beyond the slop content. This is a class issue. This is a labor rights issue. [14]
Simultaneously we are giving away our intuitive sensibilities and our human potential. If my writing has you feeling sober-curious I would love for you to run an AI sobriety experiment with me.
my hand, showing my mouse cursor tattoo on the trunk of a Coast Redwood. Image shot by Joy Newell.
If you currently use an LLM, try a break for a week and ask yourself the following:
Does this allow me to hear myself think? Or does it make it harder to hear what voice is my own?
Why do I feel compelled to use this tool? Is it common in my community, in my industry, in my profession?
Am I friends with any artists, writers, teachers, or other folks whose jobs have been impacted by the rise of this tech? What are their thoughts on the issue? Can I have a phone call with them? Do I wanna act in solidarity with them?
When you use it, do you feel pressured to work or do more? Do you think the work you create with it is quality?
Are you aware of the values of the companies creating the particular LLM you use? Are they in alignment with your core values?
I would be curious to know if you are convinced to take an AI break or to join me in solidarity and sobriety.
I’m not asking you to have perfect sobriety from using this tech at work. I understand that there are a lot of situations where you have no choice but to interact with it. I’m asking you to attempt to remain sober from the social manipulation aspects of these LLMs and the ways they can harm you.
If you want some community around this, join my Discord, it’s free. I also host Creativity Clubs on there where we focus on human connection and making things with our hands. The vibes are AI-sober and shame-free.
Despite having a bit of a doomer attitude about these giant AI companies, and recognizing fully that they are in fact bad, I don’t think language models are evil in and of themselves. I am excited by the increasing availability of small, open-source language models that we can run on our own computers, keeping our data private. Tech tools that put privacy first—outside of the extractive systems—give me hope.
Instead of AI, I put all of my ideas into an offline note-taking app called Obsidian. All of my ideas live there privately, on my own computer. It requires work, usually 1-3 hours per week to maintain, tag, and sort. There, my ideas slowly grow as companions to one another, and the act of tending to the organic garden of my ideas, curating the connections manually, is a worthwhile endeavor.
My Obsidian reminds me that my ideas and musings have value. This entire essay started out there as seeds in May, and I’ve been slowly collecting bits, pruning, and tending to grow what I’ve shared here.
My business moves much more slowly than others in my niche, but I am okay with slow growth. I am more interested in moving at a speed that allows me to consider how I will feel when I arrive, rather than chasing the most efficient path to the destination of the retreat center I’m building.
I believe that these tools will have their best use when they are out of the hands of these Sauron-inspired data hoarding entrepreneurs.
We don’t need or want “artificial general intelligence” as a society. We need to stop funding the deluded dreams of billionaires. Their bubble is going to pop and cause massive economic harm to working class people.
We don’t need or want more massive data centers and slop video generators. I have faith that what we actually want is to put our eyes and hands upon human craft, see its transformative beauty, and remember that art and writing—things AI purports to do for us—are the things that give our lives the most meaning and depth. Let AI do the dishes, and leave the real work of human ingenuity to us.
I would love to start a movement with you and invite you to be AI sober alongside me.
The goal isn’t to shame you, but inspire your humanness with gentle compassion and invite you to be creative once again. Let’s shape the culture that we—the artists and creative humans—want to see, not what the venture capitalists envision.
I really recommend considering how AI aligns with your values. Our values are enacted through our habits, and our habits become culture. [15] It is dangerous to let AI become infrastructure in our lives. We get to decide what is normal, what is best for us.
If you want to be AI sober-curious, try the following things:
Make with your hands. It doesn’t need to be good.
Cut paper. Make collages. Paint blobs and add faces. Mix colors and blend them together. Use crayons. Play.
Go outside.
Witness the same piece of land change over a full year, and then another. Becoming a witness to the natural world will reveal more to you than any chatbot.
Learn the names of your local plants and birds. Tune into the wisdom of the earth.
Play music.
Open GarageBand on your phone and play with loops or the piano and see what comes out of you.
Get the instrument you played in high school out of the closet and put it in your hands.
Pull an oracle/tarot card, or write a list and roll some dice.
Unsure what to do? Need guidance and direction? Use archetypes as a mirror and journaling to guide you rather than AI.
Write down a few things you can do, number them, and roll dice.
Get on Discord
Loneliness is a core reason we turn to chatbots, but finding community in an online space that isn’t mediated by algorithms feels incredible.
I hope this post gave you a foundation to consider as you navigate your own relationship to AI in your personal, professional, and holistic sense of life that you are living on this planet. Remember: pleasure and joy in dark times is radical. The process of being creative is magic. It’s a spell. It’s our intuition. Make art. It’s what makes us human.
If you want to get back in touch with your human creativity, I made a self-directed pod-class called The Hikers Way, using only human intelligence, to guide you through a break from Big Tech back to yourself.
You should also explore my website, as I have been spending hours building it to be an AI-free fun place to explore my work online, away from social media.
Thanks for reading and until next time, find your own ways to stay creative and persistently bloom.
What to know about ‘AI psychosis’ and the effect of AI chatbots on mental health from PBS News
there is subtle humor here not everyone will get, so I need to be a bit like David Foster Wallace here and write you a funny little footnote about how I chose this phrasing. Usually this sentence design is a dead giveaway for AI use. ChatGPT especially loves these turns of phrase, and I’m using it here to poke fun. my partner and I have a game of speaking to each other as “ai trolls”, creating toxic and fake ad copy about the state of the tech world I am trying to convey here. it makes us laugh, despite the horrors. obviously, not everyone will get it, hence this footnote.
OpenAI is spending far more than it is making, proving that they are holding up a massive bubble. This Is How the AI Bubble Bursts - Yale Insights
correction: it stands for “generative pre-trained transformer,” but most people in the general public have no idea what this means. thinking about it as a generalized tool feels more helpful to this discourse.
Eddy Burback’s most recent YouTube video is a great example of this.
if you are using AI, you really need to read Blood in the Machine. Brian’s blog, also called Blood in the Machine, is excellent as well.
a guy proposed to his chatbot in this interview on CBS news
How ChatGPT Could Be Making Your OCD Worse by Anna Rogers
We Went to the Town Elon Musk is Poisoning from More Perfect Union
World-Changing AI Is Raising U.S. Electricity Bills
How a Harry Potter Fanfic Inspired a Death Cult
Peter Thiel’s Palantir poses a grave threat to Americans - Robert Reich
also, using “it’s not just ___, it’s ____”, a common ai word phrasing, is 100% on purpose and in a mocking tone here. of course, i did not write this piece using ai.
@heidiheidilim on AI and Learned Helplessness on TikTok