Can Your AI Art Pass the Turing Test?
Shower upon him every earthly blessing, drown him in a sea of happiness, so that nothing but bubbles of bliss can be seen on the surface; give him economic prosperity, such that he should have nothing else to do but sleep, eat cakes, and busy himself with the continuation of his species, and even then out of sheer ingratitude, sheer spite, man would play you some nasty trick.
—Fyodor Dostoevsky, Notes from the Underground
I’ll be honest: I think most people get off on talking about how dangerous AI is. These doomsday predictions make people feel like Paul Revere riding across the countryside, warning the unwashed masses that the enemy is on our shores.
What’s worse are some of the flippant comments I hear from people who work in AI. They talk about it being a species-level event. The type of thing that could drive us extinct or lead to utopia, we don’t know which. I admit I’ve been guilty of this—I’ve been working for an AI company for almost a year, and I enjoy seeing the fear on people’s faces when I say I’m right there in the lab with Dr. Frankenstein.
Some of this generalized fear of AI is overblown. And trust me: if anyone is incentivized to have an extreme take on the matter, it’s an artist. Many of us build our careers online and are hyper-aware of the ever-looming threat of supplantation by AI, like innocent townsfolk waiting to be strangled by Frankenstein’s monster.
Here’s my take on the matter as a writer who has, as I said, worked in an AI company for almost a year, and who uses generative AI tools consistently: AI is an astonishing artistic tool, the likes of which we have never seen before. But, for the time being, it is not an artist—it is a means by which an artist can relay a vision.
Having said that, I won’t make too many predictions on what exactly AI will become. That’s been a fool’s errand since the advent of AI in the ‘50s. And I’ll leave the technical details to people who know more about that stuff than me. I want to talk about the philosophical implications of AI in art, which I assure you is the last thing on the minds of people creating AI.
It all started when I unwittingly used AI to pass the Turing test. Right before the AI boom, I’d just been hired by a crypto publication, and I was assigned my first feature article. I’d heard about ChatGPT and had a morbid fascination: could I use this thing to replace me? In an exercise that I couldn’t decide was self-serving or self-destructive, I spent a whole afternoon doing what I thought was training it—pumping it full of reference pieces, interview transcripts, and other content. I only realized later that it was actually training me.
After hours of tinkering, ChatGPT finally gave me something I couldn’t quite believe: an article that was shockingly—devastatingly—passable. Had I just used this piece of Promethean software to replace myself? Was I now complicit in the technological advances that would render me useless? I couldn’t know until I sent the AI-generated article to my new editor and got his feedback.
With little more than cosmetic edits on my part, I shipped the feature, prepared to simply walk away from the gig if he sussed out its artificiality. I had little to lose in the short term, professionally, since the gig was new. But I had much to lose, spiritually, in the long term. The editor’s response was somehow worse than I expected. He said, “Really love this, Greg. Nicely written.”
It’s hard to describe my feeling in that moment, this cocktail of excitement and dread washing over me. There was a sneaky satisfaction that I’d gotten one over on the editor: I’d used a robot to do something I was paid to do. He accepted something that was not only written by someone other than me; it was written by something that wasn’t human.
I had made a discovery that could increase my output and make me more money, but I couldn’t help but wonder: “What’s next?” Had I discovered the very thing that would make my skills as a writer unnecessary?
Was I reveling in the creation of a monster that would one day kill me as an artist?
We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come—namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.
—John Maynard Keynes, Economic Possibilities for Our Grandchildren (1930)
Was Keynes right? You could argue that we’ve integrated computers into so much of our society that we have little left but meaningless “e-mail” jobs, service jobs, and skilled labor. Heck, I use AI to replace myself at work all the time. I eventually left the gig for which I submitted my first AI-generated article. Since then, I’ve moved to an AI company full-time, where I primarily write content. As part of my job, I interview people on our engineering team, record the conversations, and use AI software to transcribe the recordings in seconds.
For a time, I would proceed to feed that transcription into an AI chatbot that would give me an article based on the conversation. Keep in mind, I did not understand most of the content of our conversations. I was writing about inferences, nodes, and network protocols that were so far over my head that when I handed in my work, and I was given positive feedback for it, I felt a new kind of dread.
Not because I was “getting one over” on somebody, like I felt with my former editor. No, my AI job encourages the use of AI tools, as you’d imagine. Instead, it felt like I was trying to get one over on myself. Over the course of months, I noticed my thinking and writing skills had deteriorated as I outsourced both to large language models like ChatGPT, Claude, and Microsoft’s chatbot. I found myself getting frustrated if an AI output wasn’t good enough to submit wholesale and required any editing whatsoever. I didn’t want to struggle for anything.
It got to the point where I was no longer paying attention in my conversations with engineers. I didn’t care about understanding what they were saying, I just needed their words recorded so I could pump them into a chatbot and have it do the understanding for me.
Whenever I had to do any real writing or editing myself, I rebelled against it. I constantly procrastinated, because writing and editing are things robots do, not humans. Couple this with the constant inundation of AI industry news I’m expected to keep up with, and I felt my brain turning to mush. I don’t think Elon himself could have installed one of his Neuralink chips in there.
I only realized how far I’d fallen when I decided to write something myself again. The topic of the article was: how to use AI agents for governance decisions in decentralized autonomous organizations (or DAOs). It was the type of subject I’d written about many times before, a humdrum assignment that should have only interested me insofar as it helped pay the bills.
So, I don’t know what prompted me to handcraft it. Maybe I was sick of being usurped by a robot. Maybe I was endlessly fascinated by the concept of DAO governance. Whatever it was, the process of researching, learning, and hand-drafting the article unlocked something fundamental inside of me. Something that I’d neglected for months: not just the humanity in my writing, but the humanity in my learning about something.
In some ways, AI has made my writing more fearless. The blank page no longer breeds anxiety. I used to be impressed by a turn of phrase in an essay or poem, like comparing a peach pit to a thumb print, and I’d be intimidated when my own sentences didn’t seem as good. But instead of staring at a blank page, I have a chatbot assistant at hand, and I can approach the problem with a tool made for the job:
“I know a peach pit looks like something—what is it?”
“Help me polish this sentence to make it flow better.”
Large language model AI is great with that stuff; in fact, it helped me come up with the following metaphor: it’s like an apprentice mason you can ask to clean up your little messes as you continue building the cathedral.
Even without using AI, knowing I can call on it when necessary lowers the stakes enough that I can push past the artistic roadblocks that used to trip me up. I’m like Luke Skywalker after carrying Yoda on his back through training in The Empire Strikes Back: even after he’s gone, I still hear his voice in my ear.
I tried to paint what I saw. I thought someone could tell me how to paint a landscape, but I never found that person; I just had to settle down and try.
—Georgia O’Keeffe
AI exposes how few people have vision. I don’t know if that will always be the case, but it is now. When you interact with a generative AI model like Midjourney, the premier AI image generator, it will give you (almost) anything you tell it. But that’s the crux: you have to tell it. And you must have the artistic vision to tell it something interesting.
Using generative AI tools has taught me that there are two things that make a good artist:
- They are good at their craft
- They have an interesting vision
The craft refers to the technical prowess an artist has with their writing, painting, sculpting, etc. It’s important, but it’s little more than a medium for executing a vision. Good art is seeing things in a way nobody else sees them and expressing that vision in a way that makes people feel something.
Right now, AI in art is just another medium for expressing a vision. It’s a potentially more effective, realistic, and efficient medium than we’ve ever seen. But it is a medium nevertheless.
I don’t believe AI is anywhere close to replacing artists because AI itself doesn’t have artistic vision. We can create a neural network that we refer to as “artificial intelligence”, and that neural network can operate analogously to how a brain operates. But a brain—and by extension, a person—is the most complex thing in the universe. We don’t know everything going on in our subconscious minds. We don’t know the exact ways that our childhoods, the lives of our ancestors, and our environment impact our thought processes and creations.
Before I worked in AI, I used to think that engineers had brains like computers: black-and-white thinking, “if this, then that” lines of reasoning like Python code, hyper literal communication styles, and so on. Then I realized it’s the other way around: engineers created computers to think like themselves. One of the technological advancements that enabled the AI boom was the ability to create complex neural networks that can recognize patterns and make decisions based on vast amounts of data. While these networks can create links between disparate ideas and interpret them, their “thinking” processes still represent pale facsimiles of real human brains.
We cannot replicate ourselves, because we do not know ourselves. As it turns out, it’s actually quite reassuring to realize that we are not God.
I recently discovered that my mom’s grandfather owned a tavern. I found this a strange piece of information given that his daughter—my maternal grandma—married an alcoholic who drank himself to death, and my mom was deathly afraid of alcohol.
Isn’t it interesting how heirlooms get inherited like that? We can see those more obvious types of trauma get passed down using the same kind of “if this, then that” logic we use to build AI networks: “If your great-grandfather owned a tavern, then your mom will be terrified of alcohol.” This tempts us into the fallacy of believing that a machine learning model with enough information can output a person. But it’s arrogant to think we have the vision to see it all.
There are echoes inside us that go back generations—millennia—that we couldn’t possibly fathom because they influence us on genetic, emotional, and spiritual levels.
We cannot replicate the process of passing down the universe-deep complexities of humanhood because we don’t understand it fully. Even if we understand, say, genetics on a conceptual and practical level, we can’t know the origin of our individual genetics well enough to know why we have a gene mutation that makes us hate the taste of cilantro, or how that mutation impacts our art in subtle, tangential ways.
Your ancestor watched his father get mauled by a saber-toothed cat when he was five years old. The screaming, skull-crunching trauma of that moment didn’t just sear itself like an iron brand onto his body, his soul, his mind, his molecular structure; it spidered down generational lines as a blood memory, all the way to you. That’s why the hair stands up on the back of your neck when your body senses a predator nearby, even if you can’t see it. That’s your ancestor telling you to watch your back. How does that influence your dreams? How did those dreams lead to your latest painting idea? In what ways does that impossible-to-know, thinner-than-a-spider-web connection between you and your ancestor lead to the unmistakable humanity in your creations?
Part of what makes great art is the story of its creation. Michelangelo sculpted the Pietà out of a single block of marble when he was only 23 years old. Later in life, he regretted his choice to sign his name across the Virgin’s breast; now that’s a story. More recently, Harry Potter became a worldwide sensation for the story contained in the books. But part of what stoked the series’ early attention was the story of J.K. Rowling living as a single mother on state benefits when she wrote the first installment of the series.
In a museum, we often spend as much time reading the placards as we do looking at the paintings themselves. Art, it seems, isn’t quite enough without context. People are that context. If it weren’t for human consciousness, it’s not entirely obvious that the world would exist at all, let alone art.
AI doesn’t have consciousness (yet). Sure, there will be stories of the first widely released AI-generated film, the first AI artist to win a Grammy, and the first AI actor to win an Oscar. But that territory of “firsts” is quite finite. Those will cease being good stories the moment they become expected stories. We will yearn even more for the artistic touch of a human.
It’s important for a serious artist to have high quality tools like paints, canvases, and brushes. But they’re only tools. At the moment, generative AI is a tool—a highly effective, dynamic tool, the likes of which we have never seen before—but a tool all the same. Becoming obsessed with it and its evolution is perhaps more than a distraction: it’s an impediment. The same way obsessing over the newest, best brushes and paint could be a distraction from simply painting.
I keep hearing the following sentiment: “AI won’t replace people; people who use AI will replace people.” I repeated it for a while too. But, over time, I heard it so often I realized it must be wrong somehow—it’s too smug and unsubstantiated to be true.
Don’t get me wrong: AI will progress in ways we cannot predict. It may cease being merely the paint and the canvas, and it may become the painter. At that point, yes, AI will replace some artists, but only the ones with no vision. Because AI may surpass man in intellect, but never in humanity. And humanity is the prerequisite to good art.
Greg Larson is the author of three books, Learn How to Not Suck, Clubbie: A Minor League Baseball Memoir, and The Battle Bunnies and the Unlikely Spartan. He is a copywriter for an AI company, and he lives in Austin, Texas, with his dog, Penguin.
