
Illustrated by Timothy Chiu

Is AI Replacing Real Artists?

AI art generation has swept the internet in recent months, from niche art forums to mainstream Instagram pages. This new form of content prompts us to ask what it means to be an artist and what qualifies as real art.

Feb 12, 2023

Scrolling through my Instagram feed, I see the usual jumble of announcements, advertisements, and selfies flit across the screen. Without paying any serious attention, I breeze through the stream of content until something stops me in my tracks: a series of pictures on my recommended feed of someone I don’t follow, each a beautifully rendered image of this person in fairylike, fantastical, eye-catching, popstar-like settings. The images are strikingly realistic and detailed, capturing the contours, light, and shadows of the person’s face in these otherworldly settings, and yet strangely unnatural. After a minute or so of Googling, I find out that these images were created through Lensa AI, a hugely popular generative AI app used to turn photos into paintings.
Last week, The Gazelle featured two articles on the impact of OpenAI’s public research project ChatGPT, questioning what this means for academic integrity and the education system at large. While ChatGPT spurs conversations about the role of AI in academia, Lensa AI, OpenAI’s DALL-E, and Midjourney, which enable text-to-image generation and other forms of generative AI, do the same for the art world. These programs work through a subset of machine learning called deep learning, whereby AI systems ingest and analyze the relationships in data to create models that can generate entirely new images. In doing so, they force us to grapple with what it means to be an artist in a world where AI can convert an idea into an image, sometimes in the style of existing artists, with just a few prompts, and with whether such a world is one we desire.
In Sept. 2022, “Théâtre D’opéra Spatial,” an AI-generated piece by Jason M. Allen, won the Colorado State Fair contest for emerging digital artists. The win caused quite a stir, drawing significant backlash on Twitter as users argued over whether Allen could rightfully claim to be the ‘artist’ of his work and voiced concerns about the future of human artists in an industry where AI technologies could win prizes. In a haunting conversation with The New York Times, Allen said he sympathized with artists who were worried about being replaced by technology, but nonetheless stated, “This isn’t going to stop… Art is dead, dude. It’s over. A.I. won. Humans lost.”
His statement is a sobering one, underscoring many of the issues with the growth of AI in the arts, not the least of which is the massive threat such technologies pose to the livelihoods of artists working today. If AI can do the work of designers, painters, cartoonists, and other artists, then how are people working in the industry today expected to make a living? This question is especially pertinent in a profit-driven economy where AI offers a cheaper, faster alternative to hiring human artists. In such an economy, humans are at a disadvantage.
There is also the very real issue of ownership. If art is AI-generated, then who is the artist? Is it the AI software? Or the person who came up with the idea and typed in the phrase? On one hand, it can take a lot of human effort to come up with exactly the right words and phrases to generate the perfect image. On the other, it is the AI that renders the image and brings the idea to life. Who, then, should receive credit for the work, and how does this change the way we think about the value of art?
These questions aren’t just philosophical, but also legal. Generative AI programs are trained on large datasets of images and captions taken from the internet. The datasets draw on a massive corpus of artwork, allowing users of the programs to generate images in the styles of their favorite artists, past and present, without asking for the permission of or compensating the artists whose work has informed the machine’s deep learning algorithms. At their worst, the AI models even reproduce the signatures of artists in the dataset. The Large-scale Artificial Intelligence Open Network (LAION), whose dataset is used to train programs like Stable Diffusion, another generative AI project produced by Stability AI, claims that all of its data falls under Creative Commons licenses, allowing the images to be used as long as the source is properly attributed. While there is a takedown form on the LAION website that allows people to remove images if they violate data protection laws, such practices are still unethical, giving organizations that seek to profit from generative AI artwork the ability to do so without safeguarding the interests of the artists who inform their programs.
Drawing from datasets in this way also comes with the problem of algorithmic bias. The developers of programs like DALL-E have already acknowledged the potentially damaging effects of their training data, noting how the technology could be used to produce images that reinforce stereotypes, are of low quality, or subject individuals or groups to indignity. Training datasets can be biased toward the majority, perpetuating racist or sexist imagery because of where the images are sourced, introducing new forms of abuse, particularly against women and marginalized communities.
All that being said, the advent of AI need not mean the death of art. Don Allen III, an artist and XR creator, for example, has spoken about how using AI has aided his creative process, allowing him to expedite and streamline his work. In conversation with the LA Times, Midjourney founder David Holz similarly argues that AI can be a tool for expanding the imagination of, rather than replacing, artists working today.
As a Literature and Creative Writing major, I find the idea of AI art a little terrifying. While AI technologies may be a means of aiding rather than replacing human artistry, the jury is still out on whether AI art can qualify as great art without a “human core.” The lack of protection afforded to artists whose work is being used to train these programs makes it hard to stay optimistic. I imagine my future as a person walking a tightrope, dangerously close to falling without the security of legal and/or economic provisions to act as support. So until copyright and data protection laws have caught up to developments in AI, and we have reached a point where the use of such programs is shaped by ethics that value artistry rather than profit, maybe technological progress is not all it is cracked up to be.
Amal Surmawala is a Staff Writer. Email them at