Even more than in other areas of benchmarking, in maths, where an answer is simply right or wrong and good sample problems are costly to come up with, some benchmarks have adopted private test datasets.
The situation is kind of sad, in that ideally we would have open datasets and only test models that were trained exclusively on data published before the problems' publication date.
However, this is not practical for the following reasons:
Perhaps the ideal scenario therefore is what ARC-AGI has done: publish a sizeable public dataset that you believe is highly representative of the difficulty level of the private test data, while at the same time holding out some private test data.
This way, reproducible models can reliably test themselves on the open data, while the closed data can be used for the cases where the open data can't be.
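As a minimal sketch of what self-testing on such a public split could look like (not ARC-AGI's actual harness; the file name, JSON fields and the dummy model are made up for illustration), with exact-match grading:

```python
# Minimal self-evaluation sketch for a hypothetical public split stored as JSON
# lines of the form {"problem": "...", "answer": "..."}. The model is passed in
# as a callable mapping a problem statement to its final answer string.
import json
from typing import Callable

def evaluate(path: str, model: Callable[[str], str]) -> float:
    """Exact-match accuracy of `model` over the public evaluation file."""
    total = correct = 0
    with open(path) as f:
        for line in f:
            item = json.loads(line)
            total += 1
            if model(item["problem"]).strip() == item["answer"].strip():
                correct += 1
    return correct / total if total else 0.0

if __name__ == "__main__":
    # Dummy "model" that always answers "42", just to show the call shape.
    print(evaluate("public_eval.jsonl", lambda problem: "42"))
```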
How they "ensure" that models are not contaminated:
Most of their problems come from high school knowledge olympiads, and they are therefore completely irrelevant for 2025 LLMs.
This one doesn't seem too exciting to be honest, but it might be useful. One of the sample questions expects the correct answer down to the cent:
53892.27
It should be noted that Project Euler also has such "precision matters" problems.
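To make the "down to the cent" grading concrete, here is a tiny sketch of one way to check such answers (my own illustration, not the benchmark's actual grader): quantize both values to two decimal places with Decimal, avoiding float-equality pitfalls.

```python
# Compare a model's numeric answer against an expected value to the cent.
# Illustrative only; the benchmark's real grading code may differ.
from decimal import Decimal, ROUND_HALF_UP

def matches_to_the_cent(model_answer: str, expected: str) -> bool:
    cent = Decimal("0.01")
    a = Decimal(model_answer).quantize(cent, rounding=ROUND_HALF_UP)
    b = Decimal(expected).quantize(cent, rounding=ROUND_HALF_UP)
    return a == b

print(matches_to_the_cent("53892.271", "53892.27"))  # True
print(matches_to_the_cent("53892.2", "53892.27"))    # False
```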
This project, initiated by Terence Tao, aims to map out the implication relations between various equational statements in abstract algebra using a combination of automated theorem proving and human effort. As mentioned by Terence himself, this is a bit similar to the idea of the Busy Beaver Challenge.
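To give a flavour of what an implication between two such laws looks like when formalized, here is a toy Lean sketch (my own example, not one of the project's actual entries): on any magma, the "left projection" law x ◇ y = x implies the law x ◇ (y ◇ z) = x.

```lean
-- Toy example in the spirit of the project: one equational law implying
-- another over an arbitrary magma (a type with a single binary operation).
variable {M : Type} (op : M → M → M)

theorem left_proj_implies_law
    (h : ∀ x y : M, op x y = x) :
    ∀ x y z : M, op x (op y z) = x :=
  fun x y z => h x (op y z)
```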
Paper: arxiv.org/abs/2411.04872
arstechnica.com/ai/2024/11/new-secret-math-benchmark-stumps-ai-models-and-phds-alike/ spells out what the official website fails to state clearly:
The design of FrontierMath differs from many existing AI benchmarks because the problem set remains private and unpublished to prevent data contamination
So yeah, fuck off.
The expected answer for every problem is a single, possibly ridiculously large, integer, which is kind of a cool approach. Similar to Project Euler in that respect.
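Since the expected output is a single integer, grading reduces to an exact comparison; a sketch (again mine, not FrontierMath's code) shows how Python's arbitrary-precision integers make even "ridiculously large" values unproblematic:

```python
# Exact-match grading for single-integer answers. Python ints are arbitrary
# precision, so enormous expected values compare exactly with no rounding.
def matches_integer(model_answer: str, expected: int) -> bool:
    try:
        return int(model_answer.strip()) == expected
    except ValueError:
        return False

print(matches_integer("340282366920938463463374607431768211456", 2**128))  # True
print(matches_integer("not a number", 42))                                  # False
```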
The most interesting aspect of this benchmark is the difficulty. Mathematical olympiad coach Evan Chen comments:[ref]
Problems in [the International Mathematical Olympiad] typically require creative insight while avoiding complex implementation and specialized knowledge [but for FrontierMath] they keep the first requirement, but outright invert the second and third requirement
We introduce Putnam-AXIOM, a benchmark of 522 university-level competition problems drawn from the prestigious William Lowell Putnam Mathematical Competition, and Putnam-AXIOM Variation, an unseen companion set of 100 functional variants generated by programmatically perturbing variables and constants.
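The "programmatically perturbing variables and constants" idea can be illustrated with a toy template (a made-up problem of mine, not an actual Putnam-AXIOM item): hold the problem structure fixed, randomize a constant, and recompute the ground-truth answer from a closed form so every variant stays automatically gradable.

```python
# Toy illustration of generating "functional variants" by perturbing a constant.
# Not an actual Putnam-AXIOM problem nor their generation code.
import random

TEMPLATE = "Compute the sum of the squares of the first {n} positive integers."

def make_variant(rng: random.Random) -> dict:
    n = rng.randint(10, 99)  # perturb the constant
    answer = n * (n + 1) * (2 * n + 1) // 6  # closed form keeps ground truth exact
    return {"problem": TEMPLATE.format(n=n), "answer": answer}

rng = random.Random(0)
for _ in range(3):
    print(make_variant(rng))
```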