
Hosted by AI-ARTS.ORG, the global platform dedicated to documenting the evolution of AI-driven creativity in contemporary art, this competition is one of our flagship events. Alongside the Permanent Online Gallery, the annual AI Arts Collections (printed and digital editions in 2023, 2024, and the upcoming 2025 edition), and the forthcoming First AI-ARTS Biennale 2026, the competition helps record and share the expanding role of AI in artistic practice worldwide.
This year’s edition introduces key developments:
Artists can submit AI-generated works (images, video, or a cohesive multimodal series) via the “My Area” submission link.
All entries must be completed and uploaded by this date.
The public can vote for their favourite artworks during this period.
The competition organisers will conduct their evaluations of all submitted artworks.
The AI Judge will analyse and evaluate the submitted artworks.
Winners will be announced based on the combined scores from the public vote, the organisers’ evaluations, and the AI Judge.
Artists receive their AI evaluation reports and can share feedback on the process.
A summary of outcomes, insights, and evaluation learnings is released on the website.
The 2025 competition builds on the three-part evaluation system developed in earlier editions, while introducing refinements shaped by feedback from artists and the experience of previous years. The balance between community voice, artistic expertise, and AI assessment remains, but the process has been deepened to reflect the growing diversity of AI art and the need for sensitivity in evaluation.
1. Public Evaluation
Public voting continues to be a vital part of the competition, giving the wider community a chance to show appreciation and support. This year, the voting period has been extended to 20 days, ensuring more time for engagement and for artworks to reach diverse audiences across different platforms.
2. Organisers’ Panel of Past Winners
The organisers’ evaluation is carried out by a panel of artists who have previously won in AI-ARTS competitions. This ensures that the works are reviewed by peers who understand both the creative process and the evolving role of AI in art. Their perspective brings artistic depth, fairness, and recognition of nuance — qualities that participants identified as especially valuable in past feedback.
3. AI Judge Evaluation
The AI Judge continues to contribute one third of the overall evaluation, but its role in 2025 has been refined. It has been retrained with lessons from previous competitions and now provides clearer reasoning, more transparent scoring, and feedback framed in a respectful and constructive way.
Each artist is asked to select a representative piece — one image or video accompanied by its description — to guide the AI’s in-depth analysis. This ensures that the system engages with the work in line with the artist’s own framing. At the same time, all submitted works and texts are evaluated, so that the breadth of each artistic submission is fully acknowledged.
Evolving the Process
Expanded Mediums: Submissions now include AI-generated video, recognising the rise of moving image as a major field of AI creativity.
Guided AI Analysis: Representative works and their descriptions provide context for the AI, while ensuring that all pieces receive evaluation.
Feedback Reports: AI feedback has been redesigned to serve as structured reflection rather than final judgment. Artists retain the choice of whether to share these reports publicly.
Determining Winners
Final results are determined by combining the three perspectives: the public vote, the panel of past winners, and the AI Judge. This structure honours popularity, lived artistic experience, and technological reflection — each with its strengths, each with its limitations. Together, they create a balanced framework that respects both the creativity of artists and the evolving role of AI in the arts.
Artists are invited to submit their works through the official platform. In this competition, an artwork is understood as a cohesive creation that brings together three essential elements:
a visual form (image or video, generated with the assistance of AI),
a description or accompanying text,
and the human direction and intention that guides the process.
It is this interplay — between AI-driven generation and human artistic vision — that defines the artwork within this competition.
Number of works: Each artist may submit up to 20 images or videos, plus one representative piece.
Representative work: A single representative artwork (visual + description + intention) is nominated for in-depth AI analysis, while all submitted works are reviewed by the panel and AI Judge.
Publication and voting: Approved submissions are published on the competition website, where the public may engage and vote during the 20-day voting period.
All submissions must respect ethical and copyright standards. Artists remain responsible for the originality of their work and for ensuring that no unlicensed or unauthorised material is used. Works that engage with sensitive themes are welcome, provided they are presented thoughtfully and responsibly.
The evaluation integrates three perspectives: the public vote, the panel of past winners, and the AI Judge. Each examines artworks as cohesive entities, recognising the interplay between AI-driven generation, human direction, visual form, and text.
Public Voting: Reflects the resonance and appeal of works across diverse audiences.
Panel of Past Winners: Offers insight shaped by artistic practice and experience, with careful attention to originality, creativity, and thematic depth.
AI Judge: Analyses both the representative work and the full submission, considering technical execution, aesthetics, emotional impact, and the relation between text, image, and human intention.
At the end of the process, each participant receives an AI evaluation report, highlighting strengths and offering constructive reflections. These reports are not definitive judgments but tools for dialogue, self-reflection, and growth. Artists may choose whether to make their reports public.
In earlier editions of the competition, we introduced the AI Judge as part of an experiment. The aim was to explore whether AI could play a role not only in the generation of artworks but also in their evaluation. We asked a simple but important question: can AI meaningfully assess art, and if so, how?
The 2024 competition tested this idea by combining an AI assessment with human evaluation and public voting. The goal was to examine the viability of AI-assisted evaluation, collect artist feedback, and ensure that fairness remained central to the process. That first step confirmed both the potential and the limitations of AI in this context. Many artists valued the structured feedback they received, while also encouraging us to make the system more transparent and more sensitive to artistic nuance.
In 2025, we take this work further. What began as an experiment is now a continuing practice, refined through the experience of previous years. The AI Judge has been improved, the process has been adjusted in response to community input, and the intention is clear: to integrate AI evaluation in a way that is balanced, respectful, and fair.
This work serves multiple purposes. By involving real artists in a public platform, the competition continues to be a laboratory for ideas: a place where methods of integrating AI into art evaluation are tested, refined, and shared with the wider community.
The AI Judge has been improved and retrained since the last edition, incorporating feedback from artists and lessons from previous assessments. This year it will provide more consistent results, clearer articulation of its reasoning, and more respectful feedback for participants.
The AI Judge contributes one third of the overall evaluation, alongside the organisers’ panel and the public vote. Its task is not to define artistic value alone, but to provide a consistent and transparent perspective that complements human reflection. While AI evaluation can reduce certain subjective influences, it may also carry its own forms of bias, shaped by the data and methods it is built upon. Recognising this is essential: the AI Judge is therefore positioned as one voice in a balanced framework, not as an authority.
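The equal weighting described above can be sketched in a few lines. This is an illustrative sketch only, not the competition's actual implementation: the function name and the assumption that each component score is already normalised to a common 0–100 scale are ours, not stated in the text.

```python
def combined_score(public: float, panel: float, ai_judge: float) -> float:
    """Equal-weight combination of the three evaluation components.

    Each component (public vote, panel of past winners, AI Judge) is
    assumed to be pre-normalised to the same scale, so each contributes
    exactly one third of the final result, as described above.
    """
    return (public + panel + ai_judge) / 3
```

For example, component scores of 72, 81, and 78 would combine to an overall score of 77. How each component is normalised in practice (e.g. how raw vote counts map onto a score) is not specified here.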
Criteria of Assessment
The evaluation continues to use a broad set of criteria designed to acknowledge different artistic dimensions:
Creativity – the imagination and innovation expressed in the work.
Originality – the distinctiveness of concept and execution.
Technical Execution – the skill and care in realising the work.
Relevance of Theme – how well the work communicates its intended idea.
Emotional Impact – the capacity to evoke feeling and resonance.
Aesthetics – the visual and artistic quality of the piece.
Societal Impact – how the work reflects or comments on wider issues.
Experience – the cohesion and depth of the work as a lived artistic encounter.
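One simple way the criteria above could roll up into a single AI Judge score is an unweighted mean across all eight dimensions. The criterion names come from the list above; the equal weighting and the function itself are assumptions for illustration, not the competition's actual method.

```python
# The eight assessment criteria listed above.
CRITERIA = [
    "creativity", "originality", "technical_execution", "relevance_of_theme",
    "emotional_impact", "aesthetics", "societal_impact", "experience",
]

def overall_score(scores: dict[str, float]) -> float:
    """Unweighted mean over the eight criteria (assumed weighting).

    Raises ValueError if any criterion is missing, so every work is
    assessed on the same complete set of dimensions.
    """
    missing = [c for c in CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"missing criteria: {missing}")
    return sum(scores[c] for c in CRITERIA) / len(CRITERIA)
```

In practice the criteria might carry different weights; an equal-weight mean is just the most transparent baseline.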
Commitment to Fairness
All works are assessed under the same framework, ensuring comparability across the competition. Each artist receives an individual evaluation report, with the option to make the results public. These reports are not intended as definitive judgments, but as structured reflections that artists may engage with, critique, or use as a starting point for further thought.
In continuing this work, our aim is not only to improve the competition itself but also to contribute to the wider conversation about how AI can take part in the arts responsibly — acknowledging its limitations, learning from its biases, and using it as a tool for dialogue, reflection, and growth.
In any competition that blends human insight, public participation, and AI evaluation, it is important to recognise the constraints that shape the process. No single perspective can capture the full complexity of artistic expression, and each approach brings both strengths and limitations. By being transparent about these boundaries, we aim to provide a fairer and more thoughtful experience for all participants.
This competition is a space to celebrate artistic work while encouraging reflection and learning. The evaluation combines public voting, the insight of past winners, and the contribution of the AI Judge. Together, these perspectives are intended to create a process that is balanced, transparent, and respectful of artistic diversity.
Every submission is assessed within the same framework, with equal care and attention. The purpose is not to deliver absolute judgments, but to support an environment where artistic practice, evaluation, and experimentation evolve side by side.
As an evolving experiment, the competition grows through the feedback of its participants. This dialogue helps strengthen fairness, deepen understanding, and shape future editions with integrity and inclusivity.