
AI chess vs. AI art - why are they perceived differently?

Introduction

AI is developing rapidly in many fields, with very different results. In chess, it has become a useful tool and learning aid. In art, it has caused confusion, legal issues, and ethical concerns. Many chess players see chess AI as a partner; AI art is often seen as a threat. So why has the application of AI in these two domains diverged so sharply in public opinion?

(Note: I’m discussing the use of chess AI as an analytical tool. It can, of course, be misused for cheating in competitions, which is a separate ethical issue. Generally, though, cheating doesn’t seem to be a widespread problem at the highest levels of chess, and it is easier to detect at lower levels.)

History of AI in Chess

AI in chess began in the mid-1900s with programs built to study logic and decision-making. In a pivotal moment in 1997, IBM’s Deep Blue beat world champion Garry Kasparov.

That moment kicked off the modern use of engines in training and analysis. Now, engines like Stockfish and AlphaZero help players find better moves and explore new lines. AI didn’t replace players. It helped them improve.

Players still compete against one another, with AI assisting preparation rather than dominating public play. One thing is clear: the wider chess community has little interest in watching two engines play each other, even though they play at a level far beyond any human.

AI evaluations have become a standard feature of modern chess coverage. Commentators and broadcasts regularly show engine assessments to help viewers understand positions.
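To make this concrete, here is a minimal sketch of how such an engine evaluation can be obtained programmatically. It assumes the python-chess library and a locally installed Stockfish binary at a path you supply; broadcasts use their own tooling, so this is purely illustrative.

```python
# Minimal sketch: ask a local Stockfish binary to evaluate a position.
# Assumes python-chess is installed and a Stockfish executable exists at the
# path below (adjust for your system).
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("/usr/local/bin/stockfish")

# Position after 1. e4 e5 2. Nf3 Nc6 3. Bb5 (the Ruy Lopez).
board = chess.Board()
for move in ["e2e4", "e7e5", "g1f3", "b8c6", "f1b5"]:
    board.push_uci(move)

# Evaluate at a fixed search depth and print the assessment.
info = engine.analyse(board, chess.engine.Limit(depth=18))
print("Score (from White's point of view):", info["score"].white())
print("Engine's preferred continuation:", info.get("pv", [])[:5])

engine.quit()
```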

History of AI in Art

AI-generated art, on the other hand, started as a research project. Early systems like AARON in the 1970s were created by artists experimenting with code as an artistic medium in its own right.

But with the recent AI boom, tools like Midjourney, DALL·E, and Stable Diffusion have arrived, trained on massive datasets that are often built from copyrighted material scraped online without permission.

Instead of helping artists, these tools now seem to be replacing them. They generate countless images without crediting the original creators, even though artists’ works and styles may have been used in the training data; those artists receive no attribution or compensation.

We (art regard) have spoken, and are still speaking, to many artists, and harms such as loss of commissions, false accusations of AI use, and a general feeling of being used come up time and time again.

Similarities

There are a few key similarities between AI in chess and in art. Both use large-scale computation and pattern recognition, trained on countless chess games or artworks, to perform tasks that previously required human intuition. Both can produce results that surprise humans, and both are often seen as impressive technical advances.

Differences

However, the systems operate in fundamentally different domains. Chess exists as a game with fixed rules, defined boundaries, and objective win conditions. Art operates without boundaries, incorporates culture, emotion, and human experience, and relies on subjective reception.

Chess AI functions within this closed system, where every position can be evaluated against a concrete objective. It exploits the well-defined structure of chess and outputs moves whose value is measured by their effectiveness toward victory. Art AI operates in an open system where evaluation happens through human connection, cultural context, and responses that can’t be quantified.

The implementation paths also diverged. Chess AI developed through decades of collaboration between players, programmers, and the chess community. Art AI emerged from tech companies without partnership from the art community, leading to one-sided development and a feeling of being used or sold out by big tech.

Chess players maintain control over when and how they use AI tools. The engines analyze when requested and provide options players may accept or reject. Art AI enters creative spaces without invitation, processes artists’ work without consent, and creates outputs that compete in the same markets.

The economics differ too. Chess AI enhances player skill but doesn’t replace players in tournaments or exhibitions. Chess players still earn income through competition, teaching, and content creation—often using AI to enhance these activities. Art AI directly threatens artist income by generating work that replaces commissions and erodes market value for human-created art.

Chess players use AI voluntarily as a tool. Artists often have no choice. AI tools generate work in their style without consent. This creates tension, not collaboration.

So Why Are They Perceived Differently?

Some of the initial reaction to both came from novelty, as each technology seemed revolutionary when it appeared. But the underlying reasons for the differences in how they are viewed are, at their core, structural.

Chess AI is seen as additive. It makes players better and reveals new ideas in a transparent way. It’s used mostly by the people it’s meant to help: chess players.

AI art is seen as subtractive. It bypasses the artist, takes their work as training data, and automates the output. It’s often used by people with little or no artistic background, and, as artists themselves report, it replaces commissions and devalues original art. The economic and creative costs fall heavily on artists, while the benefits go mostly to tech developers, platforms, and commercial consumers of art who now get a hefty discount.

Conclusion

The difference in perception comes down to trust, control, and context. One application of AI is used to assist and evolve chess with the support of its community, while the other seems to steal from and exploit the community it claims to have joined.

AI art tools have disrupted artistic labor without building relationships with artists. Chess got a tool. Art got an identity crisis.

AI Art has a Compounding Impact

AI uses human-made art

Human-made art remains essential, even as we explore and embrace AI-generated work. That’s because AI models are trained on art created by people, not by other AIs. For AI art to continue evolving in meaningful ways, we must continue to value and support human creativity. Unfortunately, AI’s rapid rise is making that support harder to come by.

How AI Demotivates Artists

One of the most troubling effects of the AI art boom is its demotivating impact on human artists. Many are losing the drive to improve their craft or are abandoning art altogether. This is harmful—not only for those of us who cherish human-created work, but also for the most passionate advocates of AI art. Without fresh, original human input (which, importantly, must be used ethically and with the artist’s explicit consent), AI models have nothing meaningful to learn from. AI doesn’t innovate on its own—it relies on human creativity as its fuel. Without that, progress stagnates.

AI’s Compounding Effect

As more artists become discouraged and produce less work, AI-generated art continues to flood the digital space. This creates a dangerous feedback loop: the most visible content becomes increasingly homogeneous and derivative, eroding the diversity and quality of art available online. This is a loss for both critics and supporters of AI-generated art.

Conclusion

Supporting and protecting human artists is vital—no matter where you stand in the AI art debate.

The Dark Side(s) of Generative AI Images and Video

Introduction

Generative art has become a potent weapon for bad actors. From misleading propaganda and deepfake impersonations to copyright theft and fraud, malicious users are exploiting AI art to cause real harm and profit from deception.

Propaganda

AI-generated deepfakes and fabricated images have been used to spread false narratives. For example, a deepfake video circulated of Ukrainian President Volodymyr Zelenskyy appearing to announce Ukraine’s surrender to Russia in the Russo-Ukrainian War.

Non-consensual pornography

Another issue that has received widespread attention is deepfake pornography. Explicit images of real people are being created, almost always without their consent, to defame and humiliate. The vast majority of deepfakes are pornographic in nature: Dutch cybersecurity startup Deeptrace estimated that 96% of all deepfakes online were pornographic.

There is also the issue of Copyright Infringement & Plagiarism, which I touch upon here.

Online scams

Finally, fraud, scams, and impersonation crimes have been supercharged by AI-generated images and deepfakes. Fraudsters can leverage realistic AI-generated faces and voices to create fake personas online, which are then used in scams. From impersonating business leaders to catfishing victims into romance or investment scams, a creative bad actor now has far more tools to play with. For example, hackers have used Elon Musk’s likeness in deepfake videos to run online scams.

Impact

The use of generative AI in this way further undermines the public’s already weakened trust in digital media. It also poses great challenges to legal, ethical, and regulatory frameworks around the world. A coordinated response among lawmakers, big tech companies, and researchers is needed to safeguard against these risks, but we aren’t there yet.

Poisoning AI art

Protecting art from being used to train AI

Many artists are frustrated that their work is being used to train generative AI without their consent. Ben Zhao’s group has come up with two tools artists can apply to their art.

Defensive measures against AI

To help achieve this, Ben Zhao’s lab at UChicago came up with Glaze, a protective filter you can apply to your art. While it makes little visible difference to the human eye, it adds a subtle layer of noise that disrupts the ability of AI models to learn from the glazed artwork. This is an example of an ‘adversarial perturbation’. See the paper here: Glaze
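To give a flavour of what an adversarial perturbation is, here is a toy sketch in PyTorch. It is not the Glaze algorithm; it simply nudges an image’s pixels, within a small budget, so that a generic pretrained feature extractor “sees” it differently while it still looks essentially the same to a person. The choice of model and all parameters are illustrative assumptions.

```python
# Toy sketch of an adversarial perturbation (NOT the actual Glaze method):
# shift an image's embedding in a pretrained encoder while keeping the
# per-pixel change imperceptibly small. Assumes torch and torchvision.
import torch
import torchvision.models as models

encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()  # use penultimate features as an "embedding"
encoder.eval()

def cloak(image, epsilon=0.03, steps=20, lr=0.01):
    """Push `image` (a 1x3xHxW tensor in [0, 1]) away from its own embedding."""
    with torch.no_grad():
        original_features = encoder(image)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed = (image + delta).clamp(0, 1)
        features = encoder(perturbed)
        # Maximise the distance from the original features
        # (i.e. minimise its negative).
        loss = -torch.nn.functional.mse_loss(features, original_features)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the total change within a small, hard-to-see budget.
        delta.data.clamp_(-epsilon, epsilon)
    return (image + delta).detach().clamp(0, 1)
```

The real Glaze objective is more sophisticated (it targets the style features used by text-to-image models), but the core idea is the same: small, structured pixel changes that matter far more to a model than to a viewer.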

Offensive measures against AI

Rather than just protecting your art from being scraped, Nightshade attempts to poison the model itself. Again, while the filtered image looks very similar to the original to the human eye, it can have a large effect on the AI’s output: the added noise hampers the model’s ability to accurately recreate images from prompts, degrading what the AI generates. See the paper here: Nightshade
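Conceptually, the offensive idea only changes the objective of the toy sketch above: instead of pushing an image’s features away from the original, it pulls them toward those of an unrelated ‘target’ concept, so a model trained on the poisoned image learns the wrong association. Again, this is a generic illustration under the same assumptions, not the actual Nightshade method.

```python
# Targeted variant of the previous sketch (NOT the real Nightshade method).
# `encoder` is the same frozen feature extractor defined in the sketch above.
import torch

def poison(image, target_image, epsilon=0.03, steps=20, lr=0.01):
    """Steer `image` toward the embedding of an unrelated `target_image`."""
    with torch.no_grad():
        target_features = encoder(target_image)
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        perturbed = (image + delta).clamp(0, 1)
        # Minimise the distance to the *target* concept's features.
        loss = torch.nn.functional.mse_loss(encoder(perturbed), target_features)
        opt.zero_grad()
        loss.backward()
        opt.step()
        delta.data.clamp_(-epsilon, epsilon)
    return (image + delta).detach().clamp(0, 1)
```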

What next?

This will continue to be a cat-and-mouse game for the foreseeable future. AI models will be improved to evade these measures, and in response, these measures will be updated to become effective again. Ben Zhao mentions this in an article here, in response to a paper where the authors overcome Glaze.

The Legal Frontier of AI Art

AI Art in the Courts

Some of the backlash against generative AI art comes in the form of precedent-setting lawsuits. From visual artists to programmers and authors, people are testing the limits of intellectual property law. Here are three examples:

Key examples

  • Artists vs. Stability AI & Midjourney
    The work of visual artists was scraped without permission to train AI models, resulting in outputs that mimic their unique styles. While some claims have been dismissed, the central issue of unauthorized data use remains unresolved. Read more

  • Getty Images vs. Stability AI
    Getty Images alleges that Stability AI unlawfully used millions of its photos to develop its AI model, even reproducing watermarks in some outputs. With cases pending in both the UK and the US, the outcome could redefine how training data is sourced. Read more

  • Thaler v. U.S. Copyright Office
    In a landmark decision, courts ruled that works created solely by AI cannot be copyrighted because copyright law requires human authorship. This ruling underscores the need for a reexamination of copyright boundaries in the age of AI. Read more

Impact

These lawsuits aren’t just isolated legal skirmishes - they could set important precedents for the regulation of AI content and training in the future. Plaintiff-favoured rulings could force AI developers to secure licenses or alter how datasets are constructed, while wins for the defense might solidify current practices under fair use doctrines.

As these cases progress, they will undoubtedly shape the future of art, creativity, and law in the digital age.