
Art

6 posts with the tag “Art”

AI chess vs. AI art - why are they perceived differently?

Introduction

AI is developing fast in many fields, with very different results. In chess, it became a useful tool and learning aid. In art, it has caused confusion, legal disputes, and ethical concerns. Many chess players see chess AI as a partner; AI art is often seen as a threat. So why has public opinion on AI in these two domains diverged so sharply?

(Note: I’m discussing the use of chess AI as an analytical tool. It can of course be misused for cheating in competitions, which is a separate ethical issue. In practice, though, cheating doesn’t appear to be a major problem at the highest levels of chess, and it is easier to detect at lower levels.)

History of AI in Chess

AI in chess began in the mid-1900s with programs built to study logic and decision-making. In a pivotal moment in 1997, IBM’s Deep Blue beat world champion Garry Kasparov.

That moment kicked off the modern use of engines in training and analysis. Now, engines like Stockfish and AlphaZero help players find better moves and explore new lines. AI didn’t replace players. It helped them improve.

Players still compete, with AI assisting preparation rather than dominating public play. One thing is clear: the wider chess community has little interest in watching two engines play each other, even though they play at a level far beyond any human.

AI chess evaluations have become a standard feature in all modern chess coverage. Commentators and broadcasts regularly show engine assessments to help viewers understand positions.

History of AI in Art

AI-generated art, on the other hand, started as a research project. Early systems like AARON in the 1970s were created by artists experimenting with code as an artistic medium in its own right.

But with the recent AI boom, tools like Midjourney, DALL·E, and Stable Diffusion have arrived, trained on massive datasets that are often built from copyrighted material scraped online without permission.

Instead of helping artists, these tools now seem to be replacing them. They generate countless images without crediting the original creators: artists’ works and styles may be used in the training datasets, yet those artists receive no attribution or compensation.

We (art regard) have spoken, and are still speaking, to many artists, and harms such as loss of commissions, false accusations of AI use, and a general feeling of being exploited come up time and time again.

Similarities

There are a few key similarities between AI in chess and AI in art. Both use large-scale computation and pattern recognition, trained on countless chess games or artworks, to perform tasks that previously required human intuition. Both can produce results that surprise humans, and both are often seen as impressive technical advances.

Differences

However, the systems operate in fundamentally different domains. Chess exists as a game with fixed rules, defined boundaries, and objective win conditions. Art operates without boundaries, incorporates culture, emotion, and human experience, and relies on subjective reception.

Chess AI functions within this closed system where every position has concrete solutions. It processes the mathematical certainty of chess and outputs moves with value measured by effectiveness toward victory. Art AI operates in an open system where evaluation happens through human connection, cultural context, and non-quantifiable responses.

The implementation paths also diverged. Chess AI developed through decades of collaboration between players, programmers, and the chess community. Art AI emerged from tech companies without partnership from the art community, leading to one-sided development and a feeling of being used or sold out by big tech.

Chess players maintain control over when and how they use AI tools. The engines analyze when requested and provide options players may accept or reject. Art AI enters creative spaces without invitation, processes artists’ work without consent, and creates outputs that compete in the same markets.

The economics differ too. Chess AI enhances player skill but doesn’t replace players in tournaments or exhibitions. Chess players still earn income through competition, teaching, and content creation—often using AI to enhance these activities. Art AI directly threatens artist income by generating work that replaces commissions and erodes market value for human-created art.

Chess players use AI voluntarily as a tool. Artists often have no choice. AI tools generate work in their style without consent. This creates tension, not collaboration.

So Why Are They Perceived Differently?

Some of the similarity in perception comes from novelty, as both technologies seemed revolutionary when they appeared. But the underlying reasons for differences in how they are viewed are, at their core, structural.

Chess AI is seen as additive. It makes players better and reveals new ideas in a transparent way. It’s used mostly by the people it’s meant to help: chess players.

AI art is seen as subtractive. It bypasses the artist, takes their work as training data, and automates the output. It’s often used by people with little or no artistic background, and, as artists themselves report, it replaces commissions or devalues original art. The economic and creative costs fall heavily on artists, while the benefits go mostly to tech developers, platforms, and commercial consumers of art, who now get a hefty discount.

Conclusion

The difference in perception comes down to trust, control, and context. One application of AI is used to assist and evolve chess with the support of its community, while the other seems to steal from and exploit the community it is claiming to have joined.

AI art tools have disrupted artistic labor without building relationships with artists. Chess got a tool. Art got an identity crisis.

AI Art has a Compounding Impact

AI uses human-made art

Human-made art remains essential—even as we explore and embrace AI-generated work. That’s because AI models are trained on art created by people, not other AIs. For AI art to continue evolving in meaningful ways, we must continue to value and support human creativity. Unfortunately, AI’s rapid rise is making this less common.

How AI Demotivates Artists

One of the most troubling effects of the AI art boom is its demotivating impact on human artists. Many are losing the drive to improve their craft or are abandoning art altogether. This is harmful—not only for those of us who cherish human-created work, but also for the most passionate advocates of AI art. Without fresh, original human input (which, importantly, must be used ethically and with the artist’s explicit consent), AI models have nothing meaningful to learn from. AI doesn’t innovate on its own—it relies on human creativity as its fuel. Without that, progress stagnates.

AI’s Compounding Effect

As more artists become discouraged and produce less work, AI-generated art continues to flood the digital space. This creates a dangerous feedback loop: the most visible content becomes increasingly homogeneous and derivative, eroding the diversity and quality of art available online. This is a loss for both critics and supporters of AI-generated art.

Conclusion

Supporting and protecting human artists is vital—no matter where you stand in the AI art debate.

Poisoning AI art

Protecting art from being used to train AI

Many artists are frustrated by their work being used to train generative AI without their consent. Ben Zhao’s group has come up with two solutions artists can apply to their art.

Defensive measures from AI

To help achieve this, Ben Zhao’s lab at UChicago came up with Glaze, a protective filter you can apply to your art. While it makes little difference to how the art appears to the human eye, it adds a subtle amount of noise that disrupts an AI model’s ability to learn from the glazed artwork. This is an example of ‘adversarial perturbations’. See the paper here: Glaze
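As a rough illustration of the core idea (this is not Glaze’s actual algorithm, and the function name, `epsilon` budget, and random noise pattern below are invented for this sketch), an adversarial perturbation is a change to the pixel values that is capped so tightly that humans barely notice it:

```python
import numpy as np

def add_bounded_perturbation(image, perturbation, epsilon=4 / 255):
    """Add a perturbation to an image while capping each pixel's change
    at +/- epsilon, so the edit stays hard for a human to notice.

    image: float array in [0, 1], shape (H, W, 3)
    perturbation: same shape; in a real system this direction would be
    optimized to mislead a model -- here it is just an arbitrary pattern
    """
    delta = np.clip(perturbation, -epsilon, epsilon)  # visibility budget
    return np.clip(image + delta, 0.0, 1.0)           # keep a valid image

# Toy usage: "protect" a random 64x64 artwork
art = np.random.rand(64, 64, 3)
noise = np.random.randn(64, 64, 3) * 0.1
protected = add_bounded_perturbation(art, noise)
```

The real Glaze perturbations are optimized against the feature extractors used by image generators rather than being random; the sketch only captures the constraint that matters here, namely “large effect on the model, small effect on the eye.”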

Offensive measures for AI

Rather than trying to protect your art from being stolen, Nightshade attempts to poison the model itself. Again, while the filtered image looks very similar to the human eye, it can have a large effect on what the AI produces: the added noise hampers the model’s ability to accurately recreate images from prompts, worsening the AI’s image output. See the paper here: Nightshade

What next?

This will continue to be a cat-and-mouse game for the foreseeable future. AI models will be improved to evade these measures, and in response, these measures will be updated to become effective again. Ben Zhao mentions this in an article here, in response to a paper where the authors overcome Glaze.

The Legal Frontier of AI Art

AI Art in the Courts

Some of the backlash against generative AI art comes in the form of precedent-setting lawsuits. From visual artists to programmers and authors, people are testing the limits of intellectual property law. Here are three examples:

Key examples

  • Artists vs. Stability AI & Midjourney
    The work of visual artists was scraped without permission to train AI models, resulting in outputs that mimic their unique styles. While some claims have been dismissed, the central issue of unauthorized data use remains unresolved. Read more

  • Getty Images vs. Stability AI
    Getty Images alleges that Stability AI unlawfully used millions of its photos to develop its AI model, even reproducing watermarks in some outputs. With cases pending in both the UK and the US, the outcome could redefine how training data is sourced. Read more

  • Thaler v. U.S. Copyright Office
    In a landmark decision, courts ruled that works created solely by AI cannot be copyrighted because copyright law requires human authorship. This ruling underscores the need for a reexamination of copyright boundaries in the age of AI. Read more

Impact

These lawsuits aren’t just isolated legal skirmishes - they could set important precedents for the regulation of AI content and training in the future. Plaintiff-favoured rulings could force AI developers to secure licenses or alter how datasets are constructed, while wins for the defense might solidify current practices under fair use doctrines.

As these cases progress, they will undoubtedly shape the future of art, creativity, and law in the digital age.

Studio Ghibli in seconds?

Studio Ghibli going viral

Studio Ghibli, the Japanese animation studio famous for their lovingly handcrafted movies and distinct watercolour style, are currently going viral on social media. You may be familiar with them from movies such as My Neighbour Totoro and Spirited Away, but many people are hearing of Studio Ghibli for the first time due to a viral trend of applying a ‘Ghibli’ style filter to images using OpenAI. The Google Trends data below gives a picture of the recent spike in interest.

Google trends

Twitter is now awash with posts attempting to emulate Studio Ghibli’s style in a massive variety of images. This ranges from simple selfies to the White House tweeting a controversial picture of a person being arrested by immigration officers.

Viral X post

White house X post

This trend raises some interesting questions, which I’ll broadly group into three categories:

  1. The integrity of art
  2. The intentions of the artist
  3. Stolen images for training data

The integrity of art

Studio Ghibli is famous for painstaking attention to detail and love of their craft; all frames are hand-drawn and painted with watercolour. Four seconds from The Wind Rises, a frame of which is shown below, took Studio Ghibli 13 months to animate (Hayao Miyazaki: 10 Years With the Master). At 24 fps, 4 seconds corresponds to 96 watercolour images handpainted just for one scene. Compare that to the mere minutes it takes to get AI to produce ‘Ghibli-style’ images – how can artists compete against this?

The Wind Rises scene

However, the Studio Ghibli style is not actually what is being recreated. Just as a tomato and an apple may look similar side by side at a squint, closer inspection quickly reveals the differences between the two. Similarly, this filter produces what someone unfamiliar with Ghibli might consider to be the Ghibli style. As Sindu argues in her Substack post AI filters don’t replace art any more than instant ramen replaces food:

“We’re seeing cultural flattening where “Studio Ghibli style” has become a catch-all term for any anime-adjacent art with soft colours or watercolour effects. This surface-level imitation… reminds me of what happened with Van Gogh’s style—people often reduce his work to swirly strokes without acknowledging how those techniques expressed his unique vision of the world and his emotional state.”

While at a glance these images may pass as ‘Ghibli-like’, there are subtleties to Ghibli art that these AI models miss as they average out features across millions of training images. Somewhere along the way the intricacies of Ghibli’s art are lost, and we are left with an empty imitation.

The intentions of the artist

Hayao Miyazaki, the head artist of Studio Ghibli, has vocalised his dislike for the use of AI in art. In a 2016 video when asked about his thoughts on AI, he stated “I would never want to incorporate this technology into my work at all … I strongly feel it is an insult to life itself.”

I find it hard to believe many artists would appreciate their style being used in this way by AI image models. As with anything on the internet, people have been quick to take things to the extreme and have created ‘Ghibli’ images of events such as the assassination of John F Kennedy. This, I imagine, is not what the artists at Ghibli would want their name associated with.

Stolen images for training data

To train these models, vast amounts of art are scraped from the web. Much of this training data will be art that the artists did not consent to being used for this purpose. This has big implications for copyright, as now anyone can use AI to emulate the style of artists who have skilfully honed their craft over a career, undercutting them with AI built on their stolen work. Ben Zhao, author of the paper from our last blog post, said he’s disappointed to see OpenAI take advantage of Studio Ghibli’s beloved style to promote its products. He co-created the tool Glaze, which helps artists protect their work from being used as training data in these AI models.

What's the most effective way to validate art as human-made?

Organic or Diffused?

Late last year, Ben Zhao’s research group at the University of Chicago published a paper assessing the ability of human and machine to determine whether art was human-made or AI-generated. Here’s the paper:

Ha et al.: “Organic or Diffused: Can We Distinguish Human Art from AI-generated Images?”.

(You may already be familiar with their work: Glaze, a tool which can be applied to artwork posted on the internet to hinder the training of generative AI models, was created by Ben Zhao and his team).

Key findings

If you have the stomach for academic papers and are interested in art verification, I recommend giving it a read.

If not, here are some key points:

  • the authors looked to assess the performance of humans and machines at determining the provenance of art
  • they looked at different styles of art
  • they assessed the performance of different AI-art detection tools
  • similarly, they assessed the performance of humans with different levels of art expertise (non-artists, professional artists, and artists expert in AI art detection)
  • the most effective way to determine whether a piece of art is AI-generated is to combine the efforts of human validators and machine classifiers!

The most effective way to detect AI art

That last point is somewhat unexpected. The best performance in detecting whether a piece of art is AI or human-made results from combining both human and AI input. This collaboration appears to lead to more accurate assessments than either group working alone, suggesting that the strengths of one may be compensating for the weakness of the other.

A big thank you to the authors of this study for conducting important research that can help in addressing the challenges posed by AI. Research like this is crucial for protecting the integrity of human art!