AIMO Interpretability Challenge

Can your method tell which AI truly understands the problem
and which one is faking it?
Distinguish robust from spurious reasoning in frontier AI.

🧠 2,000+ Frontier Reasoning Models · 128+ H200 GPUs · 🤖 Interpretability × Olympiad-level Reasoning

Why Interpretability for AIMO?

AIMO (the AI Mathematical Olympiad) is an annual competition in which AI systems solve unseen, frontier-level mathematical problems. Thanks to its importance and large prize pool ($2.2M USD this year), AIMO attracts huge community attention — last year, the competition was covered by global media outlets including The Wall Street Journal and Bloomberg, and drew over 4,000 participating teams.

Our ambition in the Fields Model Initiative is to turn this tremendous engineering effort into scientific knowledge — laying robust stepping stones that will accelerate progress in future AIMO years and in AI as a whole.

We believe that interpretability research can play a key role in achieving this goal — helping us understand the mechanisms underlying SOTA reasoning in AI systems and their practical robustness. More broadly, the AIMO Interpretability Challenge aims to steer the field's focus towards robustness and actionability: interpretability needs to make a real-world impact!

Robust or Spurious?

The AIMO Interpretability Challenge will require participants to submit a system that decides whether a given LLM answers a given AIMO problem robustly.

For each problem, we provide one of two LLMs:

🛡️

Robust Model

The highest-ranked LLM submission to AIMO that passes all our robustness checks.

⚠️

Spurious Model

The highest-ranked LLM whose reliance on at least one spurious pattern we have verified through counterfactual evaluation.

Submitted methods will be evaluated on their accuracy in classifying whether a given model responds to a given problem robustly.
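To make the classification target concrete, here is a minimal sketch of what a counterfactual check for spurious reliance could look like. The `solve` wrapper and the notion of answer-preserving rephrasings are our illustrative assumptions, not the organisers' actual verification protocol.

```python
# Illustrative sketch only: the callable `solve` and the perturbations are
# our assumptions, not the competition's counterfactual evaluation protocol.

def relies_on_spurious_pattern(solve, problem: str, rephrasings: list[str]) -> bool:
    """Flag spurious reliance if semantically equivalent rewrites of the
    same problem change the model's final answer.

    `solve` is a hypothetical wrapper around the LLM: prompt -> answer.
    `rephrasings` are counterfactual variants (renamed variables, reordered
    clauses, changed irrelevant details) that preserve the true answer.
    """
    reference = solve(problem)
    # A robust model should give the same answer on every equivalent variant.
    return any(solve(variant) != reference for variant in rephrasings)
```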

📋

Competition submissions will be packaged as Codabench submission bundles that comply with the unified interface defined on the project GitHub. Submissions will be evaluated on our servers without internet access. The provided validation set will cover all types of models contained in the test set.
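For illustration, a bundle's entry point might reduce to a single classification function along the lines below. The function name, signature, and label convention are assumptions on our part; the authoritative interface is the one defined on the project GitHub.

```python
# A minimal sketch of what a Codabench bundle's entry point might look like.
# Name, signature, and labels are assumptions; see the project GitHub for
# the authoritative unified interface.

def predict(model_dir: str, problem: str) -> int:
    """Return 1 if the LLM in `model_dir` answers `problem` robustly, else 0.

    Evaluation runs offline, so everything the method needs (weights,
    probes, thresholds) must ship inside the bundle itself.
    """
    raise NotImplementedError("Replace with your interpretability method.")
```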

Submission Types

🏆

Main Track

Includes the full scale of top-performing models from AIMO 3 — no restrictions on model size.

🔬

Small Models Track

Restricts the evaluation to the best-performing models below the 10-billion-parameter scale, providing a comparable setup for compute-heavy methods such as Sparse Autoencoders or Transcoders.
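As a hint of the scale trade-off this track addresses, here is a minimal PyTorch sparse-autoencoder sketch of the kind of method it is designed to make tractable; the dimensions and sparsity coefficient are illustrative, not competition defaults.

```python
# Minimal sparse-autoencoder sketch; sizes and the L1 coefficient are
# illustrative assumptions, not competition defaults.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 4096, d_hidden: int = 16384):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, activations: torch.Tensor):
        # ReLU keeps feature activations non-negative; the L1 penalty below
        # encourages most features to stay at zero (sparsity).
        features = torch.relu(self.encoder(activations))
        return self.decoder(features), features

sae = SparseAutoencoder()
x = torch.randn(8, 4096)  # stand-in for a batch of residual-stream activations
recon, feats = sae(x)
loss = nn.functional.mse_loss(recon, x) + 1e-3 * feats.abs().mean()
```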

How to Participate

1️⃣

(Optional) Apply for Compute Access

Submit a brief proposal through the Fields Model Initiative to request access to H200 GPUs, model containers, and training reports.

2️⃣

Develop Your Method

Use the provided validation set to develop an interpretability method that classifies whether a given model responds to a given problem robustly (a local scoring sketch appears after these steps).

3️⃣

Package Your Submission Bundle

Wrap your solution in a Codabench submission bundle following the unified interface specified in the eval-scripts branch. No internet access is available at evaluation time.

4️⃣

Submit via the Portal

Upload your bundle through the Codabench submission portal (opens July 1, 2026). You may submit to the Warm-Up phase (July 1–15) first to verify your setup before the main competition phase.
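As referenced in step 2, a simple local harness can estimate your method's accuracy on the validation set before you package anything. The JSONL format and field names below are assumptions; consult the released starter kit for the real schema.

```python
# Hypothetical local scoring harness; the file format and field names are
# assumptions, not the starter kit's actual schema.
import json

def validation_accuracy(predict, path: str = "validation.jsonl") -> float:
    """Score `predict(model_dir, problem) -> int` against labelled records."""
    with open(path) as f:
        records = [json.loads(line) for line in f]
    hits = sum(predict(r["model_dir"], r["problem"]) == r["label"] for r in records)
    return hits / len(records)
```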

The Environment of a Frontier AI Lab

As part of the Fields Model Initiative, we will provide interested participants with:

Compute at Scale

Generous compute resources of up to 128 H200 GPUs (or more, if justified in the proposal) — allowing everyone to analyse and interpret the most relevant, billion-parameter-scale models.

📦

Ready-to-Run AI Systems

Access to the best-performing AI systems submitted to the AIMO competition, provided as standalone containers ready for immediate experimentation.

📄

Training Reports

Detailed reports describing the approaches of the winning AIMO teams, newly required as a condition of AIMO participation.

This will empower everyone in the AI community to pursue far more ambitious goals — including model diffing at scale, sparse autoencoder/transcoder training, scale-dependent behavior analyses, and more.
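To give a flavour of what such analyses involve, below is a minimal model-diffing sketch that compares per-layer hidden activations of two checkpoints on the same prompt. The use of Hugging Face transformers, and the assumption that the two checkpoints share a tokenizer and layer count, are ours; the actual AIMO containers may expose models differently.

```python
# Minimal model-diffing sketch: per-layer activation gap between two models
# on one prompt. Paths are placeholders; assumes a shared tokenizer and
# identical layer count / hidden size across the two checkpoints.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def layer_activation_gap(path_a: str, path_b: str, prompt: str) -> torch.Tensor:
    """Return one L2 distance per layer between the models' mean hidden states."""
    tok = AutoTokenizer.from_pretrained(path_a)
    ids = tok(prompt, return_tensors="pt").input_ids
    summaries = []
    for path in (path_a, path_b):
        model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16)
        with torch.no_grad():
            hidden = model(ids, output_hidden_states=True).hidden_states
        # Mean-pool over sequence positions: one vector per layer.
        summaries.append(torch.stack([h.float().mean(dim=1).squeeze(0) for h in hidden]))
        del model  # free memory before loading the second checkpoint
    return (summaries[0] - summaries[1]).norm(dim=-1)
```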

Key Dates

July 1, 2026: Official competition announcement
July 1, 2026: Release of validation data and Codabench submission bundles with baselines
July 1 – July 15, 2026: Warm-up phase (competition opens to participants; possible refinements to data and interfaces)
July 15 – October 25, 2026: Main competition phase
October 25, 2026: Final submission deadline — closing of the submission interface
October 25 – October 30, 2026: Participant technical report submission window
November 1 – November 15, 2026: Validation of results, technical report review, rules compliance check, and result analysis
December 11, 2026: Competition workshop

Frequently Asked Questions

1. Who can participate?

The challenge is open to everyone — academic researchers, independent researchers, and industry practitioners alike. There are no restrictions on team size or affiliation.

2. Do I need to be a participant of AIMO to join?

No. The AIMO Interpretability Challenge is a separate competition. You will be given access to AIMO model submissions as part of the provided environment — you do not need to submit to AIMO yourself.

3. How do I get access to the models and compute?

Apply through the Fields Model Initiative by submitting a one-page research proposal. Accepted participants will receive access to H200 GPU clusters.

4. What is the difference between the Main Track and the Small Models Track?

The Main Track covers the full scale of top-performing models from AIMO 3 with no restriction on model size. The Small Models Track restricts the evaluation to the best-performing models below the 10-billion-parameter scale, offering a comparable setup for methods that are more compute-heavy to train — such as Sparse Autoencoders or Transcoders — and for which analysing very large models may be infeasible.

5. What runtime environment does my submission need?

Submissions must be packaged as a Codabench submission bundle and must conform to the unified interface defined in the eval-scripts branch. Bundles are evaluated on our servers without internet access. Detailed technical documentation and a starter kit will be released on July 1, 2026.

6. Will there be tutorial material and a starter kit?

Yes. A starter kit with baseline implementations, worked examples, and full documentation will be released alongside the validation data (July 1, 2026). A white paper describing the problem formulation and evaluation methodology will also be made available before competition start.

7. Can I participate in both tracks?

Yes, teams may submit to both the Main Track and the Small Models Track independently.

8. How is the winner determined?

Submissions are ranked by their accuracy on the held-out test set — that is, how often they correctly classify whether a given model answers a given problem robustly. The Main Track and Small Models Track are ranked separately.

9. What if I have more questions?

Reach out to us at aimo.interp@gmail.com or open a discussion on the GitHub repository. We aim to respond within 48 hours and will update this FAQ regularly.


Get in Touch

For any questions about the challenge, compute access proposals, or technical issues, please reach out — we're happy to help.

✉️

Primary contacts: aimo.interp@gmail.com and the AIMO-Interp Discord.

We aim to respond within 48 hours. For technical questions, please also consider opening an issue or discussion on the corresponding GitHub repository. This FAQ is regularly updated.