
Charlie Clark


AI chess vs. AI art - why are they perceived differently?

Introduction

AI is developing fast in many fields, with very different results. In chess, it has become a useful tool and learning aid. In art, it has caused confusion, legal issues, and ethical concerns. Many chess players see chess AI as a partner; AI art is often seen as a threat. So why has public opinion diverged so sharply between these two applications of AI?

(Note: I’m discussing the use of chess AI as an analytical tool. Of course, it can be misused for cheating in competitions, which is a separate ethical issue. Generally, though, this doesn’t seem to be a massive problem at the highest levels of chess, and it’s easier to detect at lower levels.)

History of AI in Chess

AI in chess began in the mid-1900s with programs built to study logic and decision-making. In a pivotal moment in 1997, IBM’s Deep Blue beat world champion Garry Kasparov.

That moment kicked off the modern use of engines in training and analysis. Now, engines like Stockfish and AlphaZero help players find better moves and explore new lines. AI didn’t replace players. It helped them improve.

Players still compete, with AI assisting preparation rather than dominating public play. One thing is clear: the wider chess community has little interest in watching two engines play each other, even though they play at a level far beyond that of humans.

AI chess evaluations have become a standard feature in all modern chess coverage. Commentators and broadcasts regularly show engine assessments to help viewers understand positions.

History of AI in Art

AI-generated art, on the other hand, started as a research project. Early systems like AARON in the 1970s were created by artists experimenting with code as an artistic medium in its own right.

But with the recent AI boom, tools like Midjourney, DALL·E, and Stable Diffusion have arrived, trained on massive datasets that are often built from copyrighted material scraped online without permission.

Instead of helping artists, these tools now seem to be replacing them. They generate countless images without crediting the original creators, even though artists’ works and styles may appear in the training data; those artists receive no attribution or compensation.

We (art regard) have spoken, and are still speaking, to a lot of artists, and harms such as loss of commissions, false accusations of AI use, and a general feeling of being exploited come up time and time again.

Similarities

There are a few key similarities between AI in chess and art. Both use large-scale computation and pattern recognition, trained on countless chess games or artworks, to perform tasks that previously required human intuition. Both can produce results that surprise humans, and both are often seen as impressive advances from a technical standpoint.

Differences

However, the systems operate in fundamentally different domains. Chess exists as a game with fixed rules, defined boundaries, and objective win conditions. Art operates without boundaries, incorporates culture, emotion, and human experience, and relies on subjective reception.

Chess AI functions within this closed system, where every position can be evaluated against an objective goal. It exploits the mathematical structure of chess and outputs moves whose value is measured by their effectiveness toward victory. Art AI operates in an open system where evaluation happens through human connection, cultural context, and non-quantifiable responses.
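To make the contrast concrete, here is a minimal sketch of objective evaluation in a closed-rules game. This is my own illustrative example, not how real chess engines work internally: a negamax search over a toy take-away game, where every position has a provable value.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Negamax search for a toy take-away game: players alternately
    remove 1-3 stones, and whoever takes the last stone wins.

    Returns (move, outcome) for the player to move, where outcome is
    +1 if they can force a win and -1 if they cannot.
    """
    if stones == 0:
        return None, -1   # no stones left: the player to move has lost
    best = (None, -2)     # sentinel below any achievable outcome
    for take in (1, 2, 3):
        if take <= stones:
            _, opp = best_move(stones - take)
            if -opp > best[1]:        # the opponent's loss is our win
                best = (take, -opp)
    return best
```

Because the rules are fixed and the win condition is objective, the search assigns every position a definitive value (here, any multiple of 4 stones is a forced loss for the player to move). Art has no analogous evaluation function.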

The implementation paths also diverged. Chess AI developed through decades of collaboration between players, programmers, and the chess community. Art AI emerged from tech companies without partnership from the art community, leading to one-sided development and a feeling of being used or sold out by big tech.

Chess players maintain control over when and how they use AI tools. The engines analyze when requested and provide options players may accept or reject. Art AI enters creative spaces without invitation, processes artists’ work without consent, and creates outputs that compete in the same markets.

The economics differ too. Chess AI enhances player skill but doesn’t replace players in tournaments or exhibitions. Chess players still earn income through competition, teaching, and content creation—often using AI to enhance these activities. Art AI directly threatens artist income by generating work that replaces commissions and erodes market value for human-created art.

Chess players use AI voluntarily as a tool. Artists often have no choice. AI tools generate work in their style without consent. This creates tension, not collaboration.

So Why Are They Perceived Differently?

Some of the similarity in perception comes from novelty, as both technologies seemed revolutionary when they appeared. But the underlying reasons for differences in how they are viewed are, at their core, structural.

Chess AI is seen as additive. It makes players better and reveals new ideas in a transparent way. It’s used mostly by the people it’s meant to help: chess players.

AI art is seen as subtractive. It bypasses the artist, takes their work as training data, and automates the output. It’s often used by people with little or no artistic background, and, as mentioned by artists themselves, it replaces commissions or devalues original art. The economic and creative costs fall heavily on artists, while the benefits go mostly to tech developers, platforms, and commercial consumers of art, who now get a hefty discount.

Conclusion

The difference in perception comes down to trust, control, and context. One application of AI assists and evolves chess with the support of its community, while the other seems to steal from and exploit the community it claims to have joined.

AI art tools have disrupted artistic labor without building relationships with artists. Chess got a tool. Art got an identity crisis.

The Dark Side(s) of Generative AI Images and Video

Introduction

Generative AI imagery has become a potent weapon for bad actors. From misleading propaganda and deepfake impersonations to copyright theft and fraud, malicious users are exploiting AI-generated images and video to cause real harm and profit from deception.

Propaganda

AI-generated deepfakes and fabricated images have been used to spread false narratives. For example, a deepfake video circulated in 2022 showed Ukrainian President Volodymyr Zelenskyy appearing to call on his soldiers to surrender to Russia.

Non-consensual pornography

Another issue that has received widespread attention is deepfake pornography. Explicit images of real people are being created and shared without their consent, often to defame and humiliate. The overwhelming majority of deepfakes are pornographic: Dutch cybersecurity startup Deeptrace estimated the figure at 96% of all deepfakes online.

There is also the issue of Copyright Infringement & Plagiarism, which I touch upon here.

Online scams

Finally, fraud, scams, and impersonation crimes have been supercharged by AI-generated images and deepfakes. Fraudsters can leverage realistic AI-generated faces and voices to create fake personas online, which are then used in scams. From impersonating business leaders to catfishing victims into romantic or investment schemes, a creative bad actor now has many more tools to play with. For example, hackers have used Elon Musk’s likeness in deepfake videos to lure victims into online scams.

Impact

The use of generative AI in this way further undermines the already weakened trust the public has in digital media. Furthermore, it poses great challenges to legal, ethical, and regulatory frameworks around the world. A coordinated response among lawmakers, big tech companies, and researchers is needed to safeguard against these risks - but we aren’t there yet.

The Legal Frontier of AI Art

AI Art in the Courts

Some of the backlash against generative AI art comes in the form of precedent-setting lawsuits. From visual artists to programmers and authors, people are testing the limits of intellectual property law. Here are three examples:

Key examples

  • Artists vs. Stability AI & Midjourney
    The work of visual artists was scraped without permission to train AI models, resulting in outputs that mimic their unique styles. While some claims have been dismissed, the central issue of unauthorized data use remains unresolved.

  • Getty Images vs. Stability AI
    Getty Images alleges that Stability AI unlawfully used millions of its photos to develop its AI model, even reproducing watermarks in some outputs. With cases pending in both the UK and the US, the outcome could redefine how training data is sourced.

  • Thaler v. U.S. Copyright Office
    In a landmark decision, courts ruled that works created solely by AI cannot be copyrighted because copyright law requires human authorship. This ruling underscores the need for a reexamination of copyright boundaries in the age of AI.

Impact

These lawsuits aren’t just isolated legal skirmishes - they could set important precedents for the regulation of AI content and training in the future. Plaintiff-favoured rulings could force AI developers to secure licenses or alter how datasets are constructed, while wins for the defence might solidify current practices under fair use doctrines.

As these cases progress, they will undoubtedly shape the future of art, creativity, and law in the digital age.

What's the most effective way to validate art as human-made?

Organic or Diffused?

Late last year, Ben Zhao’s research group at the University of Chicago published a paper assessing the ability of humans and machines to determine whether art is human-made or AI-generated. Here’s the paper:

Ha et al.: “Organic or Diffused: Can We Distinguish Human Art from AI-generated Images?”.

(You may already be familiar with their work: Glaze, a tool which can be applied to artwork posted on the internet to hinder the training of generative AI models, was created by Ben Zhao and his team).

Key findings

If you have the stomach for academic papers and are interested in art verification, I recommend giving it a read.

If not, here are some key points:

  • the authors set out to assess the performance of humans and machines at determining the provenance of art
  • they looked at different styles of art
  • they assessed the performance of different AI-art detection tools
  • similarly, they assessed the performance of humans with different levels of art expertise (non-artists, professional artists, and artists expert in AI art detection)
  • the most effective way to determine if a piece of art is AI-generated or not is to combine the efforts of human validators and machine classifiers!

The most effective way to detect AI art

That last point is somewhat unexpected. The best performance in detecting whether a piece of art is AI- or human-made comes from combining human and machine input. This collaboration appears to lead to more accurate assessments than either group working alone, suggesting that the strengths of one compensate for the weaknesses of the other.
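One simple way such a fusion could work is sketched below. This is a hypothetical illustration, not the paper’s actual protocol: let either judge’s high-confidence call decide, and otherwise average the two estimates.

```python
def combined_verdict(human_prob, model_prob, threshold=0.5, veto=0.9):
    """Fuse a human rater's and a classifier's estimated probabilities
    that a piece is AI-generated (both in [0, 1]).

    If either judge is highly confident (at or beyond the veto level),
    that call decides; otherwise fall back to the average estimate.
    """
    if human_prob >= veto or model_prob >= veto:
        return "ai-generated"
    if human_prob <= 1 - veto or model_prob <= 1 - veto:
        return "human-made"
    avg = (human_prob + model_prob) / 2
    return "ai-generated" if avg >= threshold else "human-made"
```

The design choice here mirrors the paper’s broader finding: a confident human can overrule an uncertain classifier and vice versa, so each side covers the other’s blind spots.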

A big thank you to the authors of this study for conducting important research that can help in addressing the challenges posed by AI. Research like this is crucial for protecting the integrity of human art!