AI ARTS COMPETITION 2025

4th Edition

Welcome to the 4th Edition of the AI Arts Competition!

Hosted by AI-ARTS.ORG, the global platform dedicated to documenting the evolution of AI-driven creativity in contemporary art, this competition is one of our flagship events. Alongside the Permanent Online Gallery, the annual AI Arts Collections (printed and digital editions in 2023, 2024, and the upcoming 2025 edition), and the forthcoming First AI-ARTS Biennale 2026, the competition helps record and share the expanding role of AI in artistic practice worldwide.

This year’s edition introduces key developments:

  • A new dedicated category for AI Video Art, giving moving-image works their own space in the competition.
  • Winning artists will also be invited to exhibit in the First AI-ARTS Biennale 2026, presented in a permanent VR gallery with satellite events.
  • Our AI Judge has been upgraded, building on prior editions to deliver richer, fairer, and more consistent evaluations.

Important Dates

7 September 2025

Submissions Open

Artists can submit AI-generated works (images, video, or a cohesive multimodal series) via the “My Area” submission link.

30 October 2025

Submission Deadline

All entries must be completed and uploaded by this date.

18 October – 6 November 2025

Public Voting Period

The public can vote for their favourite artworks during this period.

18 October – 6 November 2025

Organisers’ Evaluation

The competition organisers will conduct their evaluations of all submitted artworks.

18 October – 6 November 2025

AI Judge Evaluation

The AI Judge will analyse and evaluate the submitted artworks.

15 November 2025

Final Results Announcement

Winners will be announced based on the combined scores from public votes, organisers’ evaluations, and the AI Judge.

November 2025

Feedback Collection

Artists receive their AI evaluation reports and can share feedback on the process.

December 2025

Publication of Final Report

A summary of outcomes, insights, and evaluation learnings is released on the website.

Towards a Fair and Evolving Practice of AI Art Evaluation

Evaluation Components 2025

The 2025 competition builds on the three-part evaluation system developed in earlier editions, while introducing refinements shaped by feedback from artists and the experience of previous years. The balance between community voice, artistic expertise, and AI assessment remains, but the process has been deepened to reflect the growing diversity of AI art and the need for sensitivity in evaluation.

1. Public Evaluation
Public voting continues to be a vital part of the competition, giving the wider community a chance to show appreciation and support. This year, the voting period has been extended to 20 days, ensuring more time for engagement and for artworks to reach diverse audiences across different platforms.

2. Organisers’ Panel of Past Winners
The organisers’ evaluation is carried out by a panel of artists who have previously won in AI-ARTS competitions. This ensures that the works are reviewed by peers who understand both the creative process and the evolving role of AI in art. Their perspective brings artistic depth, fairness, and recognition of nuance — qualities that participants identified as especially valuable in past feedback.

3. AI Judge Evaluation
The AI Judge continues to contribute one third of the overall evaluation, but its role in 2025 has been refined. It has been retrained with lessons from previous competitions and now provides clearer reasoning, more transparent scoring, and feedback framed in a respectful and constructive way.

Each artist is asked to select a representative piece — one image or video accompanied by its description — to guide the AI’s in-depth analysis. This ensures that the system engages with the work in line with the artist’s own framing. At the same time, all submitted works and texts are evaluated, so that the breadth of each artistic submission is fully acknowledged.

Evolving the Process

  • Expanded Mediums: Submissions now include AI-generated video, recognising the rise of moving image as a major field of AI creativity.

  • Guided AI Analysis: Representative works and their descriptions provide context for the AI, while ensuring that all pieces receive evaluation.

  • Feedback Reports: AI feedback has been redesigned to serve as structured reflection rather than final judgment. Artists retain the choice of whether to share these reports publicly.

Determining Winners
Final results are determined by combining the three perspectives: the public vote, the panel of past winners, and the AI Judge. This structure honours popularity, lived artistic experience, and technological reflection — each with its strengths, each with its limitations. Together, they create a balanced framework that respects both the creativity of artists and the evolving role of AI in the arts.
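To make this combination concrete, here is a minimal sketch of how the three scores might be merged. It assumes each component is first normalised to a common 0–100 scale and that all three perspectives carry equal one-third weight (the rules state that the AI Judge contributes one third; equal weighting of the other two components is an assumption for illustration):

```python
# Minimal sketch of combining the three evaluation perspectives.
# Assumptions (not stated in the rules): each component score is already
# normalised to a 0-100 scale, and all three carry equal one-third weight.

def combined_score(public_vote: float, panel_score: float, ai_judge: float) -> float:
    """Return the equally weighted average of the three component scores."""
    for name, value in [("public_vote", public_vote),
                        ("panel_score", panel_score),
                        ("ai_judge", ai_judge)]:
        if not 0 <= value <= 100:
            raise ValueError(f"{name} must be on a 0-100 scale, got {value}")
    return (public_vote + panel_score + ai_judge) / 3

# Example: 72 (public) + 85 (panel) + 78 (AI Judge) -> 78.3 overall.
print(round(combined_score(72, 85, 78), 1))
```

An equal average keeps any single perspective from dominating, which reflects the balanced framework described above.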

Process for Submission and Evaluation 2025
1. Submission

Artists are invited to submit their works through the official platform. In this competition, an artwork is understood as a cohesive creation that brings together three essential elements:

  • a visual form (image or video, generated with the assistance of AI),

  • a description or accompanying text,

  • and the human direction and intention that guide the process.

It is this interplay — between AI-driven generation and human artistic vision — that defines the artwork within this competition.

  • Number of works: Each artist may submit up to 20 images or videos, plus one representative piece.

  • Representative work: A single representative artwork (visual + description + intention) is nominated for in-depth AI analysis, while all submitted works are reviewed by the panel and AI Judge.

  • Publication and voting: Approved submissions are published on the competition website, where the public may engage and vote during the 20-day voting period.
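As an illustration of the submission limits above, a platform-side check might look like the following minimal sketch; the Submission structure and its field names are hypothetical, not part of the actual “My Area” platform:

```python
# Hypothetical sketch of the submission rules above; the Submission
# structure and field names are illustrative, not the real platform API.
from dataclasses import dataclass, field

MAX_WORKS = 20  # each artist may submit up to 20 images or videos

@dataclass
class Submission:
    works: list[str] = field(default_factory=list)  # references to images/videos
    representative: str | None = None  # nominated visual + description + intention

def validate(sub: Submission) -> None:
    """Check the stated limits: at most 20 works, plus one representative piece."""
    if len(sub.works) > MAX_WORKS:
        raise ValueError(f"at most {MAX_WORKS} works allowed, got {len(sub.works)}")
    if sub.representative is None:
        raise ValueError("a representative piece must be nominated for AI analysis")
```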

All submissions must respect ethical and copyright standards. Artists remain responsible for the originality of their work and for ensuring that no unlicensed or unauthorised material is used. Works that engage with sensitive themes are welcome, provided they are presented thoughtfully and responsibly.


2. Evaluation

The evaluation integrates three perspectives: the public vote, the panel of past winners, and the AI Judge. Each examines artworks as cohesive entities, recognising the interplay between AI-driven generation, human direction, visual form, and text.

  • Public Voting: Reflects the resonance and appeal of works across diverse audiences.

  • Panel of Past Winners: Offers insight shaped by artistic practice and experience, with careful attention to originality, creativity, and thematic depth.

  • AI Judge: Analyses both the representative work and the full submission, considering technical execution, aesthetics, emotional impact, and the relation between text, image, and human intention.

3. Feedback

At the end of the process, each participant receives an AI evaluation report, highlighting strengths and offering constructive reflections. These reports are not definitive judgments but tools for dialogue, self-reflection, and growth. Artists may choose whether to make their reports public.

Introduction to the AI Evaluation Process

In earlier editions of the competition, we introduced the AI Judge as part of an experiment. The aim was to explore whether AI could play a role not only in the generation of artworks but also in their evaluation. We asked a simple but important question: can AI meaningfully assess art, and if so, how?

The 2024 competition tested this idea by combining an AI assessment with human evaluation and public voting. The goal was to examine the viability of AI-assisted evaluation, collect artist feedback, and ensure that fairness remained central to the process. That first step confirmed both the potential and the limitations of AI in this context. Many artists valued the structured feedback they received, while also encouraging us to make the system more transparent and more sensitive to artistic nuance.

In 2025, we take this work further. What began as an experiment is now a continuing practice, refined through the experience of previous years. The AI Judge has been improved, the process has been adjusted in response to community input, and the intention is clear: to integrate AI evaluation in a way that is balanced, respectful, and fair.

This work serves multiple purposes:

  • Reliability – to determine how consistently an AI Judge can contribute to evaluating AI-generated art.
  • Fairness – to ensure that AI assessment never replaces human judgment but works alongside the organisers’ review and the public vote as one part of a balanced process.
  • Feedback – to provide participants with detailed reports that can be useful for reflection, without presenting them as absolute judgments.
  • Research – to continue testing whether AI can effectively engage with artistic experience, and what this means for the future of art and technology.

By involving real artists in a public platform, the competition continues to be a laboratory for ideas: a place where methods of integrating AI into art evaluation are tested, refined, and shared with the wider community.

The Role of the AI Judge in 2025

The AI Judge has been improved and retrained since the last edition, incorporating feedback from artists and lessons from previous assessments. This year it will provide more consistent results, clearer articulation of its reasoning, and more respectful feedback for participants.

The AI Judge contributes one third of the overall evaluation, alongside the organisers’ panel and the public vote. Its task is not to define artistic value alone, but to provide a consistent and transparent perspective that complements human reflection. While AI evaluation can reduce certain subjective influences, it may also carry its own forms of bias, shaped by the data and methods it is built upon. Recognising this is essential: the AI Judge is therefore positioned as one voice in a balanced framework, not as an authority.

Criteria of Assessment
The evaluation continues to use a broad set of criteria designed to acknowledge different artistic dimensions:

  • Creativity – the imagination and innovation expressed in the work.

  • Originality – the distinctiveness of concept and execution.

  • Technical Execution – the skill and care in realising the work.

  • Relevance of Theme – how well the work communicates its intended idea.

  • Emotional Impact – the capacity to evoke feeling and resonance.

  • Aesthetics – the visual and artistic quality of the piece.

  • Societal Impact – how the work reflects or comments on wider issues.

  • Experience – the cohesion and depth of the work as a lived artistic encounter.
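For readers who want a concrete picture, an AI evaluation report covering these eight criteria might be structured as in the sketch below; the 0–10 scale, the field names, and the unweighted mean are assumptions for illustration, not the actual report format:

```python
# Illustrative sketch of an AI evaluation report over the eight criteria
# above. The 0-10 scale and the unweighted mean are assumptions.
from dataclasses import dataclass
from statistics import mean

CRITERIA = [
    "creativity", "originality", "technical_execution", "relevance_of_theme",
    "emotional_impact", "aesthetics", "societal_impact", "experience",
]

@dataclass
class AIEvaluationReport:
    scores: dict[str, float]   # one 0-10 score per criterion
    comments: dict[str, str]   # a short written reflection per criterion

    def overall(self) -> float:
        """Unweighted mean across the eight criteria."""
        return mean(self.scores[c] for c in CRITERIA)
```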

Commitment to Fairness
All works are assessed under the same framework, ensuring comparability across the competition. Each artist receives an individual evaluation report, with the option to make the results public. These reports are not intended as definitive judgments, but as structured reflections that artists may engage with, critique, or use as a starting point for further thought.

In continuing this work, our aim is not only to improve the competition itself but also to contribute to the wider conversation about how AI can take part in the arts responsibly — acknowledging its limitations, learning from its biases, and using it as a tool for dialogue, reflection, and growth.

Constraints and Limitations 2025

In any competition that blends human insight, public participation, and AI evaluation, it is important to recognise the constraints that shape the process. No single perspective can capture the full complexity of artistic expression, and each approach brings both strengths and limitations. By being transparent about these boundaries, we aim to provide a fairer and more thoughtful experience for all participants.


Limitations of the AI Judge

  • Quantitative Orientation – The AI Judge relies on structured criteria and measurable patterns. While this brings consistency, it can risk oversimplifying the depth and nuance of artistic expression.
  • Lack of Dialogue – Unlike a human panel, the AI cannot engage in discussion, debate, or collective interpretation. Its perspective is limited to the data and models it has been trained on.
  • Potential Bias – AI systems can inherit biases from their training data, which may influence assessments in ways that are not always obvious.
  • Context and Emotion – The AI Judge may struggle to recognise cultural, historical, or personal contexts, and it lacks the emotional insight that is often central to how art resonates.

Limitations of Human and Public Evaluation

  • Public Voting – Public response is valuable as a measure of resonance and accessibility, but it can also reflect popularity, visibility, or personal preference more than artistic depth.
  • Panel of Past Winners – The panel provides expertise and lived artistic experience, but like any human assessment, it is shaped by subjectivity and individual taste.

Limitations of the Overall Process

  • Balancing Perspectives – Combining public, panel, and AI evaluations offers breadth, but also creates complexity in how different weights are understood and reconciled.
  • Uniform Criteria – Using a shared framework makes comparison possible, but it may not fully capture the diversity of artistic approaches, especially experimental or unconventional works.
  • Technological Constraints – AI models remain limited by the state of current technology; while capable of detailed analysis, they cannot fully replicate human imagination or cultural sensitivity.

Commitment to Fairness, Reflection, and Learning

This competition is a space to celebrate artistic work while encouraging reflection and learning. The evaluation combines public voting, the insight of past winners, and the contribution of the AI Judge. Together, these perspectives are intended to create a process that is balanced, transparent, and respectful of artistic diversity.

Every submission is assessed within the same framework, with equal care and attention. The purpose is not to deliver absolute judgments, but to support an environment where artistic practice, evaluation, and experimentation evolve side by side.

As an evolving experiment, the competition grows through the feedback of its participants. This dialogue helps strengthen fairness, deepen understanding, and shape future editions with integrity and inclusivity.