OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI (2024)

Zhen Huang3, Zengzhi Wang1,3, Shijie Xia1,3, Xuefeng Li1,3, Haoyang Zou3,
Ruijie Xu1,3, Run-Ze Fan1,3, Lyumanshan Ye1,3, Ethan Chern1,3, Yixin Ye1,3, Yikai Zhang1,3
Yuqing Yang3, Ting Wu3, Binjie Wang3, Shichao Sun3, Yang Xiao3, Yiyuan Li3, Fan Zhou1,3
Steffi Chern3, Yiwei Qin3, Yan Ma3, Jiadi Su3, Yixiu Liu1,3, Yuxiang Zheng1,3
Shaoting Zhang2, Dahua Lin2†, Yu Qiao2†, Pengfei Liu1,2,3†

1Shanghai Jiao Tong University, 2Shanghai Artificial Intelligence Laboratory,
3Generative AI Research Lab (GAIR)
gair.olympicarena@gmail.com
https://gair-nlp.github.io/OlympicArena/

† Corresponding authors

Abstract

The evolution of Artificial Intelligence (AI) has been significantly accelerated by advancements in Large Language Models (LLMs) and Large Multimodal Models (LMMs), which are gradually showcasing cognitive reasoning abilities in problem-solving and scientific discovery (i.e., AI4Science) once exclusive to human intellect. To comprehensively evaluate current models' cognitive reasoning abilities, we introduce OlympicArena, which includes 11,163 bilingual problems across both text-only and interleaved text-image modalities. These challenges encompass a wide range of disciplines spanning seven fields and 62 international Olympic competitions, rigorously examined for data leakage. We argue that Olympic competition problems are ideal for evaluating AI's cognitive reasoning due to their complexity and interdisciplinary nature, which are essential for tackling complex scientific challenges and facilitating discoveries. Beyond evaluating performance across various disciplines using answer-only criteria, we conduct detailed experiments and analyses from multiple perspectives. We delve into the models' cognitive reasoning abilities, their performance across different modalities, and their outcomes in process-level evaluations, which are vital for tasks requiring complex reasoning with lengthy solutions. Our extensive evaluations reveal that even advanced models like GPT-4o achieve only 39.97% overall accuracy (28.67% for mathematics and 29.71% for physics), illustrating current AI limitations in complex reasoning and multimodal integration. Through OlympicArena, we aim to advance AI towards superintelligence, equipping it to address more complex challenges in science and beyond. We also provide a comprehensive set of resources to support AI research, including a benchmark dataset, an open-source annotation platform, a detailed evaluation tool, and a leaderboard with automatic submission features (https://github.com/GAIR-NLP/OlympicArena).

[Figure 1]
[Figure 2]

1 Introduction

The landscape of Artificial Intelligence (AI) has undergone a transformative evolution with advances in technologies like Large Language Models (LLMs) [2, 3] and Large Multimodal Models (LMMs) [31]. These models represent significant milestones on the path to Artificial General Intelligence (AGI) [47, 15], demonstrating remarkable cognitive reasoning abilities, i.e., the ability to draw meaningful conclusions from incomplete and inconsistent knowledge to solve problems in complex scenarios [16, 34]. They adeptly handle tasks ranging from simple grade school math problems [13, 56, 59, 64] to complex challenges like those presented at the International Mathematical Olympiad (IMO) [46, 42]. Furthermore, they are progressively being applied to intricate real-world scenarios, such as using AI agents for software development [37], collaborating on complex decision-making processes [11], and even boosting the field of scientific research (i.e., AI4Science) [50].

These applications highlight AI's growing proficiency in cognitive reasoning, a crucial element in the pursuit of AGI and, potentially, superintelligence [35]. How to benchmark these abilities has therefore sparked extensive research. Existing benchmarks [18, 22, 26, 63, 44, 62] utilize multidisciplinary exam problems to assess the problem-solving skills of LLMs, but these problems are predominantly knowledge-intensive and have become relatively easy for current LLMs. Moreover, these benchmarks primarily focus on text-only modalities. Although some benchmarks have begun to target college-level problems [52, 40] and incorporate multimodal assessments [58, 60, 61], they still predominantly focus on knowledge-intensive tasks or simple concept applications (shown in Table 1). Concurrently with our work, He et al. [17] introduce an Olympic-level benchmark, yet it is limited to mathematics and physics. Furthermore, all the above benchmarks lack a systematic and fine-grained evaluation of various cognitive reasoning abilities: they mostly evaluate only final answers, neglecting potential errors in the reasoning process. This underscores the need for more comprehensive evaluations that not only cover a broader range of disciplines but also target higher levels of cognitive reasoning with fine-grained assessment.

In this paper, we introduce OlympicArena, a comprehensive, highly challenging, and rigorously curated benchmark featuring a detailed, fine-grained evaluation mechanism designed to assess advanced AI capabilities across a broad spectrum of Olympic-level challenges (as illustrated in Figure 2). We extensively select, collect, and process problems from seven disciplines—mathematics, physics, chemistry, biology, geography, astronomy, and computer science—encompassing 62 different Olympic-level competitions. This extensive collection has culminated in a benchmark comprising 11,163 problems, categorized into 13 answer types (e.g., expression, interval). Importantly, OlympicArena enhances its evaluation framework by incorporating process-level evaluations that scrutinize the step-by-step reasoning processes of AI models. This approach is critical for understanding the depth of cognitive reasoning beyond correct answers [29, 53], allowing us to identify and rectify gaps in AI reasoning pathways and ensuring more robust AI capabilities. The benchmark is bilingual, featuring both English and Chinese, to enhance its accessibility and global applicability. Additionally, it supports two modalities: text-only and interleaved text and images, catering to the evolving complexity of tasks that modern AI systems must handle. We also perform data leakage detection experiments [54] on several mainstream models to validate our benchmark's effectiveness.

We conduct a series of experiments across existing top-performing LMMs, encompassing both proprietary models (e.g., GPT-4o [36]) and open-source models (e.g., LLaVA-NeXT [31]). Additionally, we evaluate various types of LLMs (e.g., GPT-3.5) in two settings, text-only and image-caption, and conduct comprehensive evaluations from both the answer-level and process-level perspectives. For answer-level evaluations, we combine rule-based and model-based methods (GPT-4V in this paper; at the time most of this work was done, GPT-4o had not yet been released, so GPT-4V is mainly used for annotation, evaluation, and case studies) to cover a more diverse range of answer types. For process-level evaluations, we score each reasoning step of the model output, which we consider critical in reasoning scenarios. Additionally, we perform fine-grained evaluations and analyses of different types of cognitive reasoning, from both logical and visual perspectives, to better interpret the current capabilities of AI.

Our observations from the OlympicArena benchmark are summarized as follows: (1) Even the most advanced model, GPT-4o, achieves only a 39.97% overall accuracy, while other open-source models struggle to reach 20%, underscoring current models' limitations in handling complex, multidisciplinary problems that require advanced cognitive reasoning—key aspects of scientific discovery. (2) Through more fine-grained analysis (§4.4), we find that LMMs are particularly weak at complex, decompositional reasoning problems and exhibit poor spatial and geometric perception, as well as difficulties in understanding abstract symbols. (3) Additionally, we discover that current LMMs struggle significantly to leverage interleaved visual information for complex cognitive reasoning problems: various LMMs fail to show notable improvements over their text-only counterparts. (4) The process-level evaluation also indicates that most models can correctly execute some reasoning steps despite providing incorrect answers, demonstrating the models' significant potential. (5) Through data leakage detection, we find that instances of data leakage in our benchmark are exceedingly rare, and even on the infrequent occasions when leakage does occur, the corresponding models do not consistently solve these problems correctly. This suggests the need for more advanced training strategies to enhance cognitive reasoning capabilities. These observations highlight the value of the OlympicArena benchmark in advancing our understanding of AI's capabilities and limitations.

2 Related Work

Table 1: Comparison of OlympicArena with related scientific benchmarks.

| Benchmark | Multimodal | Language | Size | #Answer Types | Eval. (Answer / Process) | Leak Det. | #Logic. | #Visual. |
|---|---|---|---|---|---|---|---|---|
| SciBench | ✓ | EN | 789 | 1 | ✓ / × | × | 0.39 | 2.35 |
| CMMLU | × | ZH | 1594 | 1 | ✓ / × | × | 0.36 | - |
| MMLU | × | EN | 2554 | 1 | ✓ / × | × | 0.44 | - |
| C-Eval | × | ZH | 3362 | 1 | ✓ / × | × | 0.6 | - |
| MMMU | ✓ | EN | 3007 | 2 | ✓ / × | × | 0.25 | 2.75 |
| SciEval | × | EN | 15901 | 4 | ✓ / × | × | 1.12 | - |
| AGIEval | × | EN & ZH | 3300 | 2 | ✓ / × | × | 1.07 | - |
| GPQA | × | EN | 448 | 1 | ✓ / × | × | 2.24 | - |
| JEEBench | × | EN | 515 | 3 | ✓ / × | × | 2.41 | - |
| OlympiadBench | ✓ | EN & ZH | 8952 | 7 | ✓ / × | × | 2.26 | 2.96 |
| OlympicArena | ✓ | EN & ZH | 11163 | 13 | ✓ / ✓ | ✓ | 2.73 | 3.15 |

Benchmark AI Intelligence

How to benchmark AI intelligence has always been a challenging problem. Initially, the Turing Test [47] provided a conceptual framework for evaluating AI intelligence. However, limitations in past AI technology led researchers to focus on specialized domains. In computer vision, benchmarks like MNIST [25] and ImageNet [14] catalyzed progress, while in natural language processing, GLUE [49] and XTREME [21] set the standard for evaluating linguistic capabilities across tasks and languages. The success of pretrained language models [38, 23], particularly recent LLMs, has shifted the emphasis of evaluation toward foundational knowledge and innate abilities, as shown in Figure 2. This led to the creation of benchmarks such as MMLU [18], AGIEval [63], C-Eval [22], and CMMLU [26], which pushed the limits of language models with multidisciplinary, multilingual, and knowledge-intensive tasks. However, the rapid progress of LLMs has rendered these benchmarks insufficient to fully assess the models' growing capabilities.

Cognitive Reasoning

is crucial as it allows AI systems to apply prior knowledge and logical principles to complex tasks in a more human-like manner, ensuring better robustness and generalization in real-world applications [43]. Thus, increasing attention has been paid to more intricate reasoning tasks: benchmarks like GSM8K [13] focus on grade-school mathematical reasoning problems, while MATH [20] introduces high-school-level mathematical competition tasks. Furthermore, benchmarks such as JEEBench [4], SciBench [52], GPQA [40], and MMMU [58] have expanded the scope by incorporating multidisciplinary university-level subjects and even multimodal tasks. To further challenge AI systems, researchers have turned to problems from some of the most difficult competitions, specifically International Olympiads [17, 46, 30] and algorithmic challenges [28, 19, 41]. Nevertheless, there is currently no Olympic-level, multidisciplinary benchmark that comprehensively evaluates problem-solving to fully test AI's all-round cognitive abilities. Table 1 presents a comparison of several related scientific benchmarks.

Rigorous Evaluation for Reasoning

While curating comprehensive and appropriate data is crucial for a benchmark, adopting rigorous evaluation methodologies is equally important. Most existing benchmarks, as mentioned above, primarily focus on answer-level evaluation (i.e., only comparing the model's output with the standard answer). Recently, some works have started to focus on models' intermediate reasoning steps. Some of them [48, 29, 51] explore using process supervision to train better reward models. Lanham et al. [24] delve into the faithfulness of the chain-of-thought reasoning process, while Xia et al. [53] train models specifically designed to evaluate the validity and redundancy of reasoning steps for mathematical problems. However, among the evaluation methodologies of the existing benchmarks listed in Table 1, few incorporate process-level evaluation. This insufficiency often neglects the reliability and faithfulness of AI models, especially in complex cognitive reasoning scenarios requiring lengthy solutions. In this work, OlympicArena is equipped with a more fine-grained evaluation methodology (i.e., process-level evaluation), allowing developers to better understand the true reasoning behaviors of models.

3 The OlympicArena Benchmark

3.1 Overview

We introduce the OlympicArena, an Olympic-level, multidisciplinary benchmark designed to rigorously assess the cognitive reasoning abilities of LLMs and LMMs. Our benchmark features a combination of text-only and interleaved text-image modalities, presented bilingually to promote accessibility and inclusivity. It spans seven core disciplines: mathematics, physics, chemistry, biology, geography, astronomy, and computer science, encompassing a total of 34 specialized branches (details are in Appendix A.1) that represent fundamental scientific fields. The benchmark includes a comprehensive set of 11,163 problems from 62 distinct Olympic competitions, structured with 13 answer types (shown in Appendix A.2) ranging from objective types (e.g., multiple choice and fill-in-the-blank) to subjective types (e.g., short answers and programming tasks), which distinguishes it from many other benchmarks that primarily focus on objective problems.

Table 2: Key statistics of OlympicArena.

| Statistic | Number |
|---|---|
| Total Problems | 11,163 |
| Total Competitions | 62 |
| Total Subjects / Subfields | 7 / 34 |
| Total Answer Types | 13 |
| Problems with Solutions | 7,904 |
| Language (EN : ZH) | 7,054 : 4,109 |
| Total Images | 7,571 |
| Problems with Images | 4,960 |
| Image Types | 5 |
| Cognitive Complexity Levels | 3 |
| Logical Reasoning Abilities | 8 |
| Visual Reasoning Abilities | 5 |
| Average Problem Tokens | 244.8 |
| Average Solution Tokens | 417.1 |

Detailed statistics of OlympicArena are described in Table 2. Also, to identify potential data leakage, we conduct specialized data leakage detection experiments on several models.

Furthermore, in pursuit of a granular analysis of model performance, we categorize cognitive reasoning into 8 types of logical reasoning abilities and 5 types of visual reasoning abilities. This comprehensive categorization aids in the detailed evaluation of the diverse and complex reasoning skills that both LLMs and LMMs can exhibit. Additionally, we specifically investigate all multimodal problems to compare the performance of LMMs against their text-based counterparts, aiming to better assess LMMs' capabilities in handling visual information. Finally, we evaluate the correctness and efficiency of the reasoning process itself, rather than limiting assessment to final answers.

3.2 Data Collection

To ensure comprehensive coverage of Olympic-level problems across various disciplines, we begin by collecting URLs of various competitions whose problems are publicly available for download in PDF format. Then, we use the Mathpix tool (https://mathpix.com/) to convert these PDF documents into markdown format, making them compatible with the input requirements of models. For the programming problems in computer science, we additionally collect the corresponding test cases. We strictly adhere to copyright and licensing considerations, ensuring compliance with all relevant regulations.

3.3 Data Annotation

Problem Extraction and Annotation.

To extract individual problems from the markdown versions of the test papers, we employ about 30 students with backgrounds in science and engineering. We have developed a user interface for annotating multimodal data, which has been released (https://github.com/GAIR-NLP/OlympicArena/tree/main/annotation). To facilitate further research and the process-level evaluation of models, we annotate meta-information such as solutions where provided. To ensure data quality, we implement a multi-step validation process after the initial annotation is completed; more details can be seen in Appendix B.1. After collecting all the problems, we perform deduplication within each competition based on model embeddings to remove repeated problems that may appear in multiple test papers from the same year. To further demonstrate that our benchmark emphasizes cognitive reasoning more than most other benchmarks, we categorize the difficulty of the problems into three levels and compare with other related benchmarks. Specifically, we classify all problems into knowledge recall, concept application, and cognitive reasoning. We utilize GPT-4V as the annotator for categorizing difficulty levels, annotating the validation sets to highlight their characteristics and save costs (detailed definitions and specific prompts can be found in Appendix B.2); all annotations using GPT-4V are manually verified for reliability.
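To make the deduplication step concrete, the following is a minimal sketch of embedding-based near-duplicate removal within a single competition; the embedding model and the similarity threshold are illustrative assumptions rather than the exact configuration used for OlympicArena.

```python
# Illustrative sketch: deduplicate problems within one competition by cosine
# similarity of text embeddings (model choice and threshold are assumptions).
import numpy as np
from sentence_transformers import SentenceTransformer


def dedup_problems(problems: list[str], threshold: float = 0.95) -> list[int]:
    """Return indices of problems kept after greedy near-duplicate removal."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    emb = model.encode(problems, normalize_embeddings=True)  # unit-norm vectors
    kept: list[int] = []
    for i in range(len(problems)):
        # Cosine similarity reduces to a dot product on normalized vectors.
        if all(float(np.dot(emb[i], emb[j])) < threshold for j in kept):
            kept.append(i)
    return kept


problems_2022_2023 = [
    "Prove that every integer n > 1 has a prime divisor.",
    "Prove that any integer n > 1 has a prime divisor.",  # near-duplicate
    "Find all real x with x^2 - 5x + 6 = 0.",
]
print(dedup_problems(problems_2022_2023))  # likely [0, 2]
```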

Annotation of Cognitive Reasoning Abilities.

To facilitate better fine-grained analysis, we categorize cognitive reasoning abilities from both logical and visual perspectives [16, 43]. The logical reasoning abilities encompass Deductive Reasoning (DED), Inductive Reasoning (IND), Abductive Reasoning (ABD), Analogical Reasoning (ANA), Cause-and-Effect Reasoning (CAE), Critical Thinking (CT), Decompositional Reasoning (DEC), and Quantitative Reasoning (QUA). Meanwhile, the visual reasoning abilities include Pattern Recognition (PR), Spatial Reasoning (SPA), Diagrammatic Reasoning (DIA), Symbol Interpretation (SYB), and Comparative Visualization (COM). We also utilize GPT-4V as the annotator for categorizing the different cognitive abilities, with manual verification (detailed definitions and specific prompts can be found in Appendix B.3). With these annotations, we can conduct a more fine-grained analysis of the current cognitive reasoning abilities of AI.

3.4 Data Splitting

Our benchmark includes 11,163 problems, with 548 designated for model-based evaluation as OlympicArena-ot. We sample 638 problems across subjects to create OlympicArena-val for hyperparameter tuning or small-scale testing. OlympicArena-val problems have step-by-step solutions, supporting research like process-level evaluation. The remaining problems form OlympicArena-test, the official test set with unreleased answers for formal testing. The results in this paper are based on the entire benchmark dataset, including OlympicArena-ot, OlympicArena-val, and OlympicArena-test.
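For readers who want to run a quick experiment on the splits described above, a loading sketch is shown below; the dataset ID, configuration name, split name, and field names are hypothetical placeholders, so the released layout in the project repository should be treated as authoritative.

```python
# Hypothetical loading sketch; the identifiers below are placeholders, not the
# confirmed published layout of the benchmark.
from datasets import load_dataset

val = load_dataset("GAIR/OlympicArena", "Math", split="val")  # hypothetical ID/config/split
for problem in val.select(range(3)):
    # "id" and "answer_type" are assumed field names for illustration.
    print(problem["id"], problem["answer_type"])
```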

4 Experiments

4.1 Experimental Setup

To comprehensively evaluate the capabilities of LLMs and LMMs (selected models are listed in Appendix C.2) across different modalities, we design our experiments to include three distinct settings: multimodal, image-caption, and text-only. In the multimodal setting, we assess the ability of LMMs to leverage visual information by interleaving text and images, simulating real-world scenarios. For models unable to handle interleaved inputs, we concatenate multiple images into a single input; for LMMs that require image inputs, their text-based counterparts handle text-only problems. In the image-caption setting, we explore whether textual descriptions of images enhance the problem-solving capabilities of LLMs. Using InternVL-Chat-V1.5 [12] (chosen for its high performance and cost-effective captioning), we generate captions for all images based on prompts detailed in Appendix C.1; these captions replace the original image inputs. In the text-only setting, we evaluate the performance of LLMs without any visual information, serving as a baseline against the multimodal and image-caption settings. All experiments use zero-shot prompts, tailored to each answer type and specifying output formats to facilitate answer extraction and rule-based matching; this also minimizes biases typically associated with few-shot learning [32, 33]. Detailed prompt designs are provided in Appendix C.3.
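As a concrete illustration of the three settings, the sketch below renders one problem in each input form; the record layout and the `<image_k>` placeholder convention are assumptions for illustration, not the exact prompt format used in the experiments.

```python
# Illustrative sketch of the three experimental input settings; the record
# layout and the <image_k> placeholder convention are assumptions.
from typing import TypedDict


class Problem(TypedDict):
    text: str            # problem statement containing placeholders like <image_1>
    images: list[str]    # image URLs/paths in order of appearance
    captions: list[str]  # pre-generated captions aligned with `images`


def to_multimodal(p: Problem) -> tuple[str, list[str]]:
    """Multimodal setting: keep placeholders and pass the images alongside."""
    return p["text"], p["images"]


def to_image_caption(p: Problem) -> str:
    """Image-caption setting: substitute each image with its caption."""
    text = p["text"]
    for i, cap in enumerate(p["captions"], start=1):
        text = text.replace(f"<image_{i}>", f"[Image {i}: {cap}]")
    return text


def to_text_only(p: Problem) -> str:
    """Text-only setting: drop the visual information entirely."""
    text = p["text"]
    for i in range(1, len(p["images"]) + 1):
        text = text.replace(f"<image_{i}>", "")
    return text
```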

4.2 Evaluation

Answer-level Evaluation

We combine rule-based and model-based methods to cover a diverse range of problems. For problems with fixed answers, we extract the final answer and perform rule-based matching according to the answer type. For code generation tasks, we use the unbiased pass@k metric [10] over all test cases. For problems whose answer type is categorized as "others" and which are difficult to evaluate with rule-based matching (e.g., chemical equation writing problems), we employ GPT-4V as an evaluator to assess the responses. To ensure the reliability of GPT-4V as an evaluator, we manually sample and check its judgments. See Appendix C.5 for more details.
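For reference, the unbiased pass@k estimator of Chen et al. [10] used for the code generation tasks can be computed as follows; the sampling budget in the example is illustrative only.

```python
# Unbiased pass@k (Chen et al. [10]): with n generated programs per problem,
# of which c pass all test cases, estimate P(at least one of k samples passes).
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:  # every size-k subset must contain a correct sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# Example: 20 samples, 3 of them correct -> pass@1 = 1 - 17/20 = 0.15
print(round(pass_at_k(n=20, c=3, k=1), 2))
```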

Process-level Evaluation

To further investigate the correctness of the reasoning steps, ensuring a rigorous assessment of the cognitive abilities of models, we conduct a process-level evaluation. We first sample 96 problems with reference solutions from OlympicArena. We employ GPT-4 to convert both the references (i.e., gold solutions) and the model-generated solutions into a structured step-by-step format. We then provide these solutions to GPT-4V and score each step for its correctness on a scale from 0 to 1 (we leave research on open-source model-based evaluation for future work). The experimental details can be seen in Appendix C.6. To validate consistency with human judgment, we collect human annotations on a sample of the graded solutions. The results indicate that our model-based evaluation method is highly accurate, reaching 83% agreement with the human annotators.
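Given per-step correctness scores, a process-level score for a solution can be aggregated as sketched below; the simple mean shown here is one plausible aggregation, not necessarily the exact formula used for the reported process-level results.

```python
# Aggregate per-step correctness scores (each in [0, 1]) into a single
# process-level score; a plain mean is used here as an illustrative choice.
def process_score(step_scores: list[float]) -> float:
    assert all(0.0 <= s <= 1.0 for s in step_scores), "scores must lie in [0, 1]"
    return sum(step_scores) / len(step_scores) if step_scores else 0.0


print(process_score([1.0, 1.0, 0.5, 0.0]))  # 0.625
```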

4.3 Main Results

Table 3: Main results on OlympicArena. All values are accuracy (%), except the CS column, which reports pass@1 (%).

| Model | Math | Physics | Chemistry | Biology | Geography | Astronomy | CS (Pass@1) | Overall |
|---|---|---|---|---|---|---|---|---|
| LLMs | | | | | | | | |
| Qwen-7B-Chat | 1.58 | 3.74 | 7.01 | 7.31 | 4.53 | 5.48 | 0 | 4.31 |
| Yi-34B-Chat | 3.06 | 9.77 | 23.53 | 32.67 | 35.03 | 18.15 | 0.17 | 17.31 |
| Internlm2-20B-Chat | 5.88 | 9.48 | 18.36 | 31.90 | 32.14 | 16.03 | 0.60 | 16.62 |
| Qwen1.5-32B-Chat | 9.65 | 14.54 | 29.84 | 38.58 | 40.69 | 28.05 | 0.51 | 23.69 |
| GPT-3.5 | 7.27 | 10.92 | 23.03 | 31.19 | 31.13 | 16.93 | 3.85 | 18.27 |
| Claude3 Sonnet | 7.76 | 17.24 | 29.46 | 38.25 | 40.94 | 24.04 | 1.62 | 23.02 |
| GPT-4 | 19.46 | 24.77 | 42.52 | 46.47 | 44.97 | 33.44 | 7.78 | 32.37 |
| GPT-4o | 28.33 | 29.54 | 46.24 | 49.42 | 48.36 | 43.25 | 8.46 | 38.17 |
| Image caption + LLMs | | | | | | | | |
| Qwen-7B-Chat | 1.76 | 3.56 | 6.75 | 7.83 | 7.17 | 6.87 | 0 | 4.89 |
| Yi-34B-Chat | 3.01 | 9.94 | 21.45 | 31.26 | 34.78 | 17.33 | 0.17 | 16.72 |
| Internlm2-20B-Chat | 5.94 | 10.40 | 20.25 | 31.00 | 32.52 | 16.93 | 0.73 | 17.07 |
| Qwen1.5-32B-Chat | 9.56 | 14.31 | 29.84 | 38.51 | 40.75 | 27.2 | 0.60 | 23.43 |
| GPT-3.5 | 7.16 | 14.48 | 23.97 | 30.94 | 33.52 | 18.56 | 4.70 | 18.83 |
| Claude3 Sonnet | 7.52 | 18.10 | 29.84 | 38.77 | 41.14 | 22.65 | 2.39 | 23.10 |
| GPT-4 | 19.46 | 26.21 | 41.58 | 45.89 | 48.18 | 35 | 7.63 | 33.00 |
| GPT-4o | 28.27 | 29.71 | 45.87 | 51.16 | 49.12 | 43.17 | 9.57 | 38.50 |
| LMMs | | | | | | | | |
| Qwen-VL-Chat | 1.73 | 4.25 | 8.64 | 12.13 | 13.77 | 7.85 | 0 | 6.90 |
| Yi-VL-34B | 2.94 | 9.94 | 19.81 | 27.73 | 25.16 | 16.60 | 0 | 14.49 |
| InternVL-Chat-V1.5 | 6.03 | 9.25 | 19.12 | 30.39 | 32.96 | 15.94 | 0.38 | 16.63 |
| LLaVA-NeXT-34B | 3.03 | 10.06 | 21.45 | 33.18 | 36.92 | 18.15 | 0.18 | 17.38 |
| Qwen-VL-Max | 6.93 | 12.36 | 23.79 | 36 | 40.19 | 23.39 | 0.77 | 20.65 |
| Gemini Pro Vision | 6.28 | 12.47 | 28.14 | 37.48 | 37.42 | 20.20 | 1.45 | 20.97 |
| Claude3 Sonnet | 7.52 | 18.16 | 29.27 | 38.96 | 40.13 | 25.02 | 1.45 | 23.13 |
| GPT-4V | 19.27 | 24.83 | 41.45 | 46.79 | 49.62 | 32.46 | 7.00 | 32.76 |
| GPT-4o | 28.67 | 29.71 | 46.69 | 52.18 | 56.23 | 43.91 | 9.00 | 39.97 |

Table 3 presents the evaluation results of various LMMs and LLMs on OlympicArena. We obtain the following observations: (1) Even the most advanced model, GPT-4o, achieves only a 39.97% overall accuracy, while other open-source models struggle to reach a 20% overall accuracy. This stark contrast highlights the significant difficulty and rigor of our benchmark, demonstrating its effectiveness in pushing the boundaries of current AI capabilities. (2) Furthermore, compared to subjects like biology and geography, mathematics and physics remain the two most challenging disciplines, likely due to their reliance on complex reasoning abilities. (3) Computer programming competitions also prove to be highly difficult, with some open-source models failing to solve any of them, indicating current models' limited ability to design efficient algorithms for complex problems.

4.4 Fine-grained Analysis

To achieve a more fine-grained analysis of the experimental results, we conduct further evaluations based on different modalities and reasoning abilities. Additionally, we also conduct an analysis of the process-level evaluation. Key findings are as follows:

Models exhibit varied performance across different logical and visual reasoning abilities.

[Figure 3: model performance across different logical and visual reasoning abilities]

As shown in Figure 3, almost all models demonstrate similar performance trends across the different logical reasoning abilities. They tend to excel at Abductive Reasoning and Cause-and-Effect Reasoning, doing well at identifying causal relationships from the provided information. Conversely, models perform poorly at Inductive Reasoning and Decompositional Reasoning, owing to the diverse and unconventional nature of Olympic-level problems, which require the ability to break complex problems down into smaller sub-problems. In terms of visual reasoning abilities, models tend to be better at Pattern Recognition and Comparative Visualization, but struggle with tasks involving spatial and geometric reasoning and with those requiring the understanding of abstract symbols. The complete results are presented in Appendix D.1.

Most LMMs are still not proficient at utilizing visual information.

As displayed in Figure 4(a), only a few LMMs (such as GPT-4o and Qwen-VL-Chat) show significant improvement with image inputs compared to their text-based counterparts. Many LMMs do not exhibit enhanced performance with image inputs, and some even show decreased effectiveness when handling images. Possible reasons include: (1) When text and images are input together, LMMs may focus more on the text, neglecting the information in the images, a finding also reported in other works [61, 9]. (2) Some LMMs, while training their visual capabilities on top of their text-based models, may lose some of their inherent language abilities (e.g., reasoning abilities), which is particularly evident in our scenarios. (3) Our problems use a complex interleaved text-image format, which some models do not support well, leading to difficulties in processing and understanding the positional information of images embedded within the text. (We exclude Yi-VL-34B here as it does not support multiple image inputs, which may cause an unfair comparison.)

Analysis of process-level evaluation results

[Figure 4: (a) comparison between multimodal models and their text-only counterparts; (b) relation between process-level scores and answer correctness; (c) distribution of error-step locations]

Through process-level evaluation (complete results are in Table 14), we discover the following insights: (1) There is generally high consistency between process-level and answer-level evaluation: when a model produces a correct answer, the quality of its reasoning process tends to be higher most of the time (see Figure 4(b)). (2) Accuracy at the process level is often higher than at the answer level. This indicates that even for very complex problems, models can correctly perform some of the intermediate steps; they therefore likely have significant untapped potential for cognitive reasoning, which opens new avenues for researchers to explore. We also find that in a few disciplines, some models that perform well at the answer level fall behind at the process level; we speculate that this is because models sometimes overlook the reasonableness of intermediate steps when generating answers, even though these steps may not be crucial to the final result. (3) Additionally, we conduct a statistical analysis of the location distribution of error steps (see Figure 4(c)) and identify that a higher proportion of errors occur in the later stages. This suggests that models are more prone to making mistakes as reasoning accumulates, indicating a need for improvement in handling long chains of logical deduction.
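The location statistic in Figure 4(c) can be reproduced with a simple histogram over the relative positions of incorrect steps, as sketched below; the bucketing into thirds and the 0.5 correctness threshold are illustrative assumptions.

```python
# Illustrative sketch: bucket the relative position of low-scoring steps into
# early/middle/late thirds of each reasoning chain (binning scheme assumed).
from collections import Counter


def error_location_histogram(graded_solutions: list[list[float]],
                             wrong_below: float = 0.5) -> Counter:
    buckets: Counter = Counter()
    for steps in graded_solutions:
        n = len(steps)
        for i, score in enumerate(steps):
            if score < wrong_below:
                rel = (i + 1) / n  # relative position of the erroneous step
                buckets["early" if rel <= 1 / 3 else "middle" if rel <= 2 / 3 else "late"] += 1
    return buckets


print(error_location_histogram([[1, 1, 0.2, 0], [1, 0.9, 1, 1, 0]]))
# Counter({'late': 3})
```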

Error analysis

[Figure 5: distribution of error causes in sampled GPT-4V responses]

To further concretize models' performance, we sample incorrect responses from GPT-4V (16 problems per subject, 8 text-only and 8 multimodal) and have human evaluators analyze and annotate the reasons for these errors. As depicted in Figure 5, reasoning errors (both logical and visual) constitute the largest category, indicating that our benchmark effectively highlights current models' deficiencies in cognitive reasoning. Additionally, a significant portion of errors stems from knowledge deficits, suggesting that current models still lack expert-level domain knowledge and the ability to leverage such knowledge to assist in reasoning. Another category of errors arises from understanding biases, which can be attributed to models' misinterpretation of context and difficulties in integrating complex language structures and multimodal information. More relevant cases are shown in Appendix F.1.

4.5 Efforts on Data Leakage Detection

[Figure 6: data leakage detection results across evaluated models]

Given the increasing scale of pre-training corpora, it is crucial to detect potential benchmark leakage, although the opacity of pre-training often makes this task challenging. To this end, we employ a recently proposed instance-level leakage detection metric, N-gram Prediction Accuracy [54]. This metric uniformly samples several starting points within each instance, predicts the next n-gram at each starting point, and checks whether all predicted n-grams are correct, which would indicate that the model has potentially encountered this instance. We apply this metric to all available base or text-only chat models behind the evaluated models. As shown in Figure 6, it is surprising yet reasonable that some of these models have potentially encountered a few benchmark instances, although the number is negligible compared to the complete benchmark; for instance, the base model of Qwen1.5-32B-Chat has potentially encountered 43 benchmark instances. This raises a natural question: can the models correctly answer these instances? Interestingly, the corresponding text-only chat models and multimodal chat models correctly answer even fewer of them. These results demonstrate that our benchmark has minimal leakage (we also look forward to the development of more advanced detection tools and approaches) and is sufficiently challenging, as the models cannot correctly answer most of the leaked instances. See Appendix E for more results and analysis.
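The description above translates into a short test per instance; the sketch below illustrates it with a small causal language model, where the number of starting points, the n-gram length, and the model are assumptions for illustration rather than the settings of Xu et al. [54].

```python
# Illustrative sketch of instance-level n-gram prediction accuracy: sample a
# few starting points, greedily decode the next n tokens, and flag the
# instance only if every predicted n-gram matches the original continuation.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def ngram_flagged(text: str, model, tok, n: int = 5, starts: int = 3) -> bool:
    ids = tok(text, return_tensors="pt").input_ids[0]
    if len(ids) < 2 * n:
        return False  # too short to test meaningfully
    for _ in range(starts):
        cut = random.randint(n, len(ids) - n)  # leave room for the gold n-gram
        prefix = ids[:cut].unsqueeze(0)
        with torch.no_grad():
            out = model.generate(prefix, max_new_tokens=n, do_sample=False)
        if not torch.equal(out[0, cut:cut + n], ids[cut:cut + n]):
            return False  # any mismatch -> instance is not flagged as leaked
    return True


tok = AutoTokenizer.from_pretrained("gpt2")  # small model purely for illustration
model = AutoModelForCausalLM.from_pretrained("gpt2")
print(ngram_flagged("If x^2 = 9 and x > 0, then x = 3.", model, tok))
```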

5 Conclusion

In this work, we introduce OlympicArena, a comprehensive benchmark for evaluating the cognitive reasoning abilities of LMMs and LLMs on Olympic-level problems. Through our detailed experiments, we find that even the most powerful model at present, GPT-4o, does not perform well in applying cognitive reasoning abilities to solve complex problems. We hope that our OlympicArena benchmark serves as a valuable stepping stone for future advancements in AI for science and engineering.

Acknowledgements

We sincerely appreciate all the laboratory members for their contributions in data annotation, project discussions, and providing valuable suggestions. Additionally, we extend our gratitude to Teacher Xiaoxia Yu from Hefei No. 168 Middle School for providing us with extensive information on various subjects. We also thank everyone who helps annotate the data for our benchmark dataset.

References

  • GPT [2023] GPT-4V(ision) system card, 2023. URL https://api.semanticscholar.org/CorpusID:263218031.
  • Achiam et al. [2023] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
  • Anthropic [2024] AI Anthropic. The Claude 3 model family: Opus, Sonnet, Haiku. Claude-3 Model Card, 2024.
  • Arora et al. [2023] Daman Arora, Himanshu Gaurav Singh, et al. Have LLMs advanced enough? A challenging problem solving benchmark for large language models. arXiv preprint arXiv:2305.15074, 2023.
  • Bai et al. [2023a] Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023a.
  • Bai et al. [2023b] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-VL: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966, 2023b.
  • Bai et al. [2023c] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-VL: A versatile vision-language model for understanding, localization, text reading, and beyond. 2023c.
  • Cai et al. [2024] Zheng Cai, Maosong Cao, Haojiong Chen, Kai Chen, Keyu Chen, Xin Chen, Xun Chen, Zehui Chen, Zhi Chen, Pei Chu, et al. InternLM2 technical report. arXiv preprint arXiv:2403.17297, 2024.
  • Chen et al. [2024a] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, et al. Are we on the right way for evaluating large vision-language models? arXiv preprint arXiv:2403.20330, 2024a.
  • Chen et al. [2021] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
  • Chen et al. [2024b] Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chi-Min Chan, Heyang Yu, Yaxi Lu, Yi-Hsin Hung, Chen Qian, Yujia Qin, Xin Cong, Ruobing Xie, Zhiyuan Liu, Maosong Sun, and Jie Zhou. AgentVerse: Facilitating multi-agent collaboration and exploring emergent behaviors. In The Twelfth International Conference on Learning Representations, 2024b. URL https://openreview.net/forum?id=EHg5GDnyq1.
  • Chen et al. [2023] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Zhong Muyan, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. InternVL: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. arXiv preprint arXiv:2312.14238, 2023.
  • Cobbe et al. [2021] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
  • Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, 2009.
  • Feigenbaum et al. [1963] Edward A. Feigenbaum, Julian Feldman, et al. Computers and Thought. New York: McGraw-Hill, 1963.
  • Furbach et al. [2019] Ulrich Furbach, Steffen Hölldobler, Marco Ragni, Claudia Schon, and Frieder Stolzenburg. Cognitive reasoning: A personal view. KI-Künstliche Intelligenz, 33:209–217, 2019.
  • He et al. [2024] Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. OlympiadBench: A challenging benchmark for promoting AGI with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.
  • Hendrycks et al. [2020] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
  • Hendrycks et al. [2021a] Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge competence with APPS. arXiv preprint arXiv:2105.09938, 2021a.
  • Hendrycks et al. [2021b] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874, 2021b.
  • Hu et al. [2020] Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning, pp. 4411–4421. PMLR, 2020.
  • Huang et al. [2024] Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Yao Fu, et al. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. Advances in Neural Information Processing Systems, 36, 2024.
  • Kenton & Toutanova [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pp. 4171–4186, 2019.
  • Lanham et al. [2023] Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, et al. Measuring faithfulness in chain-of-thought reasoning. arXiv preprint arXiv:2307.13702, 2023.
  • LeCun et al. [1998] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
  • Li et al. [2023] Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. CMMLU: Measuring massive multitask language understanding in Chinese. arXiv preprint arXiv:2306.09212, 2023.
  • Li et al. [2024] Kaixin Li, Yuchen Tian, Qisheng Hu, Ziyang Luo, and Jing Ma. MMCode: Evaluating multi-modal code large language models with visually rich programming problems. arXiv preprint arXiv:2404.09486, 2024.
  • Li et al. [2022] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with AlphaCode. Science, 378(6624):1092–1097, 2022.
  • Lightman et al. [2023] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. arXiv preprint arXiv:2305.20050, 2023.
  • Liu et al. [2023] Chengwu Liu, Jianhao Shen, Huajian Xin, Zhengying Liu, Ye Yuan, Haiming Wang, Wei Ju, Chuanyang Zheng, Yichun Yin, Lin Li, et al. FIMO: A challenge formal dataset for automated theorem proving. arXiv preprint arXiv:2309.04295, 2023.
  • Liu et al. [2024] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. LLaVA-NeXT: Improved reasoning, OCR, and world knowledge, 2024.
  • Lu et al. [2021] Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv preprint arXiv:2104.08786, 2021.
  • Ma et al. [2024] Huan Ma, Changqing Zhang, Yatao Bian, Lemao Liu, Zhirui Zhang, Peilin Zhao, Shu Zhang, Huazhu Fu, Qinghua Hu, and Bingzhe Wu. Fairness-guided few-shot prompting for large language models. Advances in Neural Information Processing Systems, 36, 2024.
  • Morris et al. [2023] Meredith Ringel Morris, Jascha Sohl-Dickstein, Noah Fiedel, Tris Warkentin, Allan Dafoe, Aleksandra Faust, Clement Farabet, and Shane Legg. Levels of AGI: Operationalizing progress on the path to AGI. arXiv preprint arXiv:2311.02462, 2023.
  • OpenAI [2023] OpenAI. Introducing superalignment. OpenAI Blog, 2023. URL https://openai.com/superalignment.
  • OpenAI [2024] OpenAI. Hello GPT-4o. OpenAI Blog, 2024. URL https://openai.com/index/hello-gpt-4o/.
  • Qian et al. [2023] Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. Communicative agents for software development. arXiv preprint arXiv:2307.07924, 2023.
  • Radford et al. [2018] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018.
  • Reid et al. [2024] Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024.
  • Rein et al. [2023] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. GPQA: A graduate-level Google-proof Q&A benchmark. arXiv preprint arXiv:2311.12022, 2023.
  • Shi et al. [2024] Quan Shi, Michael Tang, Karthik Narasimhan, and Shunyu Yao. Can language models solve olympiad programming? arXiv preprint arXiv:2404.10952, 2024.
  • Sinha et al. [2024] Shiven Sinha, Ameya Prabhu, Ponnurangam Kumaraguru, Siddharth Bhat, and Matthias Bethge. Wu's method can boost symbolic AI to rival silver medalists and AlphaGeometry to outperform gold medalists at IMO geometry. arXiv preprint arXiv:2404.06405, 2024.
  • Sun et al. [2023] Jiankai Sun, Chuanyang Zheng, Enze Xie, Zhengying Liu, Ruihang Chu, Jianing Qiu, Jiaqi Xu, Mingyu Ding, Hongyang Li, Mengzhe Geng, et al. A survey of reasoning with foundation models. arXiv preprint arXiv:2312.11562, 2023.
  • Sun et al. [2024] Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, and Kai Yu. SciEval: A multi-level large language model evaluation benchmark for scientific research. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pp. 19053–19061, 2024.
  • Team et al. [2023] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, et al. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
  • Trinh et al. [2024] Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He, and Thang Luong. Solving olympiad geometry without human demonstrations. Nature, 625(7995):476–482, 2024.
  • Turing & Haugeland [1950] Alan M. Turing and J. Haugeland. Computing machinery and intelligence. The Turing Test: Verbal Behavior as the Hallmark of Intelligence, pp. 29–56, 1950.
  • Uesato et al. [2022] Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process- and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022.
  • Wang et al. [2018] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 353–355, 2018.
  • Wang et al. [2023a] Hanchen Wang, Tianfan Fu, Yuanqi Du, Wenhao Gao, Kexin Huang, Ziming Liu, Payal Chandak, Shengchao Liu, Peter Van Katwyk, Andreea Deac, et al. Scientific discovery in the age of artificial intelligence. Nature, 620(7972):47–60, 2023a.
  • Wang et al. [2023b] Peiyi Wang, Lei Li, Zhihong Shao, R. X. Xu, Damai Dai, Yifei Li, Deli Chen, Y. Wu, and Zhifang Sui. Math-Shepherd: A label-free step-by-step verifier for LLMs in mathematical reasoning. arXiv preprint arXiv:2312.08935, 2023b.
  • Wang et al. [2023c] Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, and Wei Wang. SciBench: Evaluating college-level scientific problem-solving abilities of large language models. arXiv preprint arXiv:2307.10635, 2023c.
  • Xia et al. [2024] Shijie Xia, Xuefeng Li, Yixin Liu, Tongshuang Wu, and Pengfei Liu. Evaluating mathematical reasoning beyond accuracy. arXiv preprint arXiv:2404.05692, 2024.
  • Xu et al. [2024] Ruijie Xu, Zengzhi Wang, Run-Ze Fan, and Pengfei Liu. Benchmarking benchmark leakage in large language models. arXiv preprint arXiv:2404.18824, 2024.
  • Young et al. [2024] Alex Young, Bei Chen, Chao Li, Chengen Huang, Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng Zhu, Jianqun Chen, Jing Chang, et al. Yi: Open foundation models by 01.AI. arXiv preprint arXiv:2403.04652, 2024.
  • Yu et al. [2023] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. MetaMath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023.
  • Yuan & Liu [2022] Weizhe Yuan and Pengfei Liu. reStructured pre-training. arXiv preprint arXiv:2206.11147, 2022.
  • Yue et al. [2023a] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. MMMU: A massive multi-discipline multimodal understanding and reasoning benchmark for expert AGI. arXiv preprint arXiv:2311.16502, 2023a.
  • Yue et al. [2023b] Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. MAmmoTH: Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653, 2023b.
  • Zhang et al. [2024a] Ge Zhang, Xinrun Du, Bei Chen, Yiming Liang, Tongxu Luo, Tianyu Zheng, Kang Zhu, Yuyang Cheng, Chunpu Xu, Shuyue Guo, et al. CMMMU: A Chinese massive multi-discipline multimodal understanding benchmark. arXiv preprint arXiv:2401.11944, 2024a.
  • Zhang et al. [2024b] Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Peng Gao, et al. MathVerse: Does your multi-modal LLM truly see the diagrams in visual math problems? arXiv preprint arXiv:2403.14624, 2024b.
  • Zhang et al. [2023] Xiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. Evaluating the performance of large language models on GAOKAO benchmark. arXiv preprint arXiv:2305.12474, 2023.
  • Zhong et al. [2023] Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. AGIEval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364, 2023.
  • Zhou et al. [2023] Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song, Mingjie Zhan, et al. Solving challenging math word problems using GPT-4 code interpreter with code-based self-verification. arXiv preprint arXiv:2308.07921, 2023.

Appendix A Detailed Statistics of the Benchmark

A.1 Distribution of Problems

Our benchmark collects data from various competitions; the detailed list can be found in Table 4. Note that a small portion of the problems are sampled from other related benchmarks, which are marked in the table. The subfields covered by each competition subject are shown in Table 5. Additionally, the distribution of our benchmark across different languages and modalities is presented in Table 6.

Table 4: Competitions included in OlympicArena (marked entries are sampled from other related benchmarks).

| Competition Name | Abbreviation | Subject | # Problems |
|---|---|---|---|
| UK Senior Kangaroo | UKMT_SK | Math | 20 |
| Math Majors of America Tournament for High Schools | MMATHS | Math | 47 |
| Math Kangaroo | MK | Math | 35 |
| Euclid Mathematics Contest | EMC | Math | 215 |
| Canadian Open Mathematics Challenge | COMC | Math | 26 |
| Johns Hopkins Mathematics Tournament | JHMT | Math | 100 |
| Berkeley Math Tournament | BMT | Math | 93 |
| Stanford Mathematics Tournament | SMT | Math | 473 |
| Chinese High School Mathematics League (Pre Round) | ZH_Math_PRE | Math | 546 |
| Chinese High School Mathematics League (1st & 2nd Round) | ZH_Math_12 | Math | 279 |
| Duke University Math Meet | DMM | Math | 107 |
| The Princeton University Mathematics Competition | PUMaC | Math | 296 |
| Harvard-MIT Mathematics Tournament | HMMT | Math | 392 |
| William Lowell Putnam Mathematics Competition | Putnam | Math | 136 |
| International Mathematical Olympiad* | IMO | Math | 79 |
| Romanian Master of Mathematics* | RMM | Math | 8 |
| American Regions Mathematics League* | ARML | Math | 374 |
| Euclid Mathematics Competition* | EMC | Math | 215 |
| European Girls' Mathematical Olympiad* | EGMO | Math | 7 |
| F=MA | FMA | Physics | 122 |
| Intermediate Physics Challenge (Y11) | BPhO_IPC | Physics | 50 |
| Senior Physics Challenge | BPhO_SPC | Physics | 38 |
| Australian Science Olympiads Physics | ASOP | Physics | 48 |
| European Physics Olympiad | EPhO | Physics | 15 |
| Nordic-Baltic Physics Olympiad | NBPhO | Physics | 102 |
| World Physics Olympics | WoPhO | Physics | 38 |
| Asian Physics Olympiad | APhO | Physics | 126 |
| International Physics Olympiad | IPhO | Physics | 307 |
| Canadian Association of Physicists | CAP | Physics | 100 |
| Physics Bowl | PB | Physics | 100 |
| USA Physics Olympiad | USAPhO | Physics | 188 |
| Chinese Physics Olympiad | CPhO | Physics | 462 |
| Physics Challenge (Y13) | PCY13 | Physics | 44 |
| Chinese High School Biology Challenge | GAOKAO_Bio | Biology | 652 |
| International Biology Olympiad | IBO | Biology | 300 |
| The USA Biology Olympiad | USABO | Biology | 96 |
| Indian Biology Olympiad | INBO | Biology | 86 |
| Australian Science Olympiad Biology | ASOB | Biology | 119 |
| British Biology Olympiad | BBO | Biology | 82 |
| New Zealand Biology Olympiad | NZIBO | Biology | 223 |
| Chem 13 News | Chem13News | Chemistry | 56 |
| Avogadro | Avogadro | Chemistry | 55 |
| U.S. National Chemistry Olympiad (local) | USNCO (local) | Chemistry | 54 |
| U.S. National Chemistry Olympiad | USNCO | Chemistry | 98 |
| Chinese High School Chemistry Challenge | GAOKAO_Chem | Chemistry | 568 |
| Canadian Chemistry Olympiad | CCO | Chemistry | 100 |
| Australian Science Olympiad Chemistry | ASOC | Chemistry | 91 |
| Cambridge Chemistry Challenge | C3H6 | Chemistry | 61 |
| UK Chemistry Olympiad | UKChO | Chemistry | 100 |
| International Chemistry Olympiad | IChO | Chemistry | 402 |
| Chinese High School Geography Challenge | GAOKAO_Geo | Geography | 862 |
| US Earth Science Organization | USESO | Geography | 301 |
| Australian Science Olympiad Earth Science | ASOE | Geography | 100 |
| The International Geography Olympiad | IGeO | Geography | 327 |
| Chinese High School Astronomy Challenge | GAOKAO_Astro | Astronomy | 740 |
| The International Astronomy and Astrophysics Competition | IAAC | Astronomy | 50 |
| USA Astronomy and Astrophysics Organization | USAAAO | Astronomy | 100 |
| British Astronomy and Astrophysics Olympiad Challenge | BAAO_challenge | Astronomy | 148 |
| British Astronomy and Astrophysics Olympiad (Round 2) | BAAO | Astronomy | 185 |
| USA Computing Olympiad | USACO | CS | 48 |
| AtCoder | Atcoder | CS | 48 |
| Codeforces† | CF | CS | 138 |

Table 5: Subfields covered by each subject.

| Subject | Subfields |
|---|---|
| Math | Algebra, Geometry, Number Theory, Combinatorics |
| Physics | Mechanics, Electricity and Magnetism, Waves and Optics, Thermodynamics, Modern Physics, Fluid Mechanics |
| Chemistry | General Chemistry, Organic Chemistry, Inorganic Chemistry, Analytical Chemistry, Physical Chemistry, Environmental Chemistry |
| Biology | Cell Biology, Plant Anatomy and Physiology, Animal Anatomy and Physiology, Ethology, Genetics and Evolution, Ecology, Biosystematics |
| Geography | Physical Geography, Human Geography, Regional Geography, Environmental Geography, Geospatial Techniques |
| Astronomy | Fundamentals of Astronomy, Stellar Astronomy, Galactic and Extragalactic Astronomy, Astrophysics |
| CS | Data Structures, Algorithms |

Table 6: Distribution of problems across languages and modalities.

| | Mathematics | Physics | Chemistry | Biology | Geography | Astronomy | CS |
|---|---|---|---|---|---|---|---|
| EN & text | 2215 | 632 | 782 | 352 | 211 | 219 | 90 |
| EN & multimodal | 193 | 646 | 235 | 554 | 517 | 264 | 144 |
| ZH & text | 780 | 164 | 124 | 312 | 58 | 264 | 0 |
| ZH & multimodal | 45 | 298 | 444 | 340 | 804 | 476 | 0 |
| Total EN | 2408 | 1278 | 1017 | 906 | 728 | 483 | 234 |
| Total ZH | 825 | 462 | 568 | 652 | 862 | 740 | 0 |
| Total text | 2995 | 796 | 906 | 664 | 269 | 483 | 90 |
| Total multimodal | 238 | 944 | 679 | 894 | 1321 | 740 | 144 |
| Grand Total | 3233 | 1740 | 1585 | 1558 | 1590 | 1223 | 234 |

A.2 Answer Types

Table 7: Definitions of the 13 answer types.

| Answer Type | Definition |
|---|---|
| Single Choice (SC) | Problems with only one correct option (e.g., one out of four, one out of five, etc.). |
| Multiple Choice (MC) | Problems with multiple correct options (e.g., two out of four, two out of five, two out of six, etc.). |
| True/False (TF) | Problems where the answer is either True or False. |
| Numerical Value (NV) | Problems where the answer is a numerical value, including special values like $\pi$, $e$, $\sqrt{7}$, $\log_2 9$, etc., represented in LaTeX. |
| Set (SET) | Problems where the answer is a set, such as {1, 2, 3}. |
| Interval (IN) | Problems where the answer is a range of values, represented as an interval in LaTeX. |
| Expression (EX) | Problems requiring an expression containing variables, represented in LaTeX. |
| Equation (EQ) | Problems requiring an equation containing variables, represented in LaTeX. |
| Tuple (TUP) | Problems requiring a tuple, usually representing a pair of numbers, such as (x, y). |
| Multi-part Value (MPV) | Problems requiring multiple quantities to be determined within a single sub-problem, such as solving for both velocity and time in a physics problem. |
| Multiple Answers (MA) | Problems with multiple solutions for a single sub-problem, such as a math fill-in-the-blank problem with answers 1 or -2. |
| Code Generation (CODE) | Problems where the answer is a piece of code, requiring the generation of functional code snippets or complete programs to solve the given task. |
| Others (OT) | Problems that do not fit into the above categories, such as writing chemical equations or explaining reasons, which require human expert evaluation. |

Through extensive observation of a large number of problems and a thorough examination of multiple previous benchmarks, we distill 13 comprehensive answer types designed to cover as many problems as possible. The specific definitions for each answer type are provided in Table 7.
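Rule-based answer matching is keyed on these answer types; the snippet below is a deliberately simplified sketch of such a dispatcher, and the released evaluator handles LaTeX parsing, tolerances, and many more corner cases.

```python
# Simplified sketch of rule-based answer matching keyed on the answer type;
# the normalization and numeric tolerance below are illustrative choices only.
def is_correct(pred: str, gold: str, answer_type: str) -> bool:
    pred, gold = pred.strip(), gold.strip()
    if answer_type == "NV":  # numerical value: compare with a relative tolerance
        try:
            return abs(float(pred) - float(gold)) <= 1e-6 * max(1.0, abs(float(gold)))
        except ValueError:
            return pred == gold  # fall back to string match for symbolic values
    if answer_type in ("SC", "TF"):  # single choice / true-false
        return pred.upper() == gold.upper()
    if answer_type in ("MC", "SET", "MA"):  # order-insensitive collections
        as_set = lambda s: {x.strip().upper() for x in s.strip("{}()").split(",")}
        return as_set(pred) == as_set(gold)
    return pred == gold  # default: exact match


print(is_correct("{2, 1}", "{1,2}", "SET"))  # True
```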

A.3 Image Types

We categorize and summarize the five most common types of images in our multimodal scientific problems. The definitions of these types can be found in Table 8, examples are provided in Figure 7, and the distribution of the different image types in our benchmark is shown in Figure 8.

Table 8: Definitions of the five image types.

| Image Type | Definition |
|---|---|
| Geometric and Mathematical Diagrams | Diagrams representing mathematical concepts, such as 2D and 3D shapes, mathematical notations, and function plots. |
| Statistical and Data Representation | Visualizations of statistical or data information, including multivariate plots, tables, charts (histograms, bar charts, line plots), and infographics. |
| Natural and Environmental Images | Images of natural scenes or phenomena, including environmental studies visualizations, geological and geographical maps, and satellite images. |
| Scientific and Technical Diagrams | Diagrams used in science, such as cell structures and genetic diagrams in Biology, molecular structures and reaction pathways in Chemistry, and force diagrams, circuit diagrams, and astrophysical maps in Physics and Astronomy. |
| Abstract and Conceptual Visuals | Visuals explaining theories and concepts, including flowcharts, algorithms, logic models, and symbolic diagrams. |

[Figure 7: examples of the five image types]
[Figure 8: distribution of image types in the benchmark]

Appendix B Data Annotation

B.1 Problem Extraction and Annotation

We develop a simple and practical annotation interface using Streamlit (https://streamlit.io/), as shown in Figure 9. Approximately 30 university students are employed to use this interface for annotation, and we provide each annotator with a wage higher than the local average hourly rate. The specific fields annotated are shown in Figure 10. We use image URLs to represent pictures, which allows for efficient storage and easy access without embedding large image files directly in the dataset. Each annotated problem is ultimately stored as a JSON file, facilitating subsequent processing. It is worth mentioning that we embed several rule-based checks and filtering mechanisms in the annotation interface to minimize noise from the annotations (a simplified code sketch of such checks follows the list). When the following situations arise, we promptly identify and correct the annotations:

1) When the answer type is Numerical Value, and the annotated answer contains a variable.

2) When the answer type is not Numerical Value, but the annotated answer can be parsed as a numerical value.

3) When the answer type is Expression, and the annotated answer contains an equals sign.

4) When the answer type is Equation, and the annotated answer does not contain an equals sign.

5) When the annotated answer contains images that should not be present.

6) When the annotated answer contains units (since units are a separate field according to Figure10, we compile a list of common units and manually check and correct answers when suspected units are detected).

7) When the annotated image links cannot be previewed properly.
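The following is a minimal sketch of how a few of these checks could be implemented; the regular expressions and the unit list are illustrative simplifications, not the interface's exact rules.

```python
# Simplified sketch of a few of the rule-based annotation checks listed above;
# the regexes and the unit list are illustrative simplifications.
import re

COMMON_UNITS = {"m", "s", "kg", "mol", "J", "N", "Hz"}  # illustrative subset


def annotation_warnings(answer: str, answer_type: str) -> list[str]:
    warnings = []
    is_plain_number = re.fullmatch(r"[-+]?\d+(\.\d+)?", answer.strip()) is not None
    if answer_type == "NV" and re.search(r"[A-Za-z]", answer):
        # Crude heuristic for a stray variable or unit inside a Numerical Value answer.
        warnings.append("Numerical Value answer contains letters (possible variable/unit)")
    if answer_type != "NV" and is_plain_number:
        warnings.append("Answer parses as a number but the type is not Numerical Value")
    if answer_type == "EX" and "=" in answer:
        warnings.append("Expression answer contains an equals sign")
    if answer_type == "EQ" and "=" not in answer:
        warnings.append("Equation answer lacks an equals sign")
    if any(answer.strip().endswith(" " + u) for u in COMMON_UNITS):
        warnings.append("Answer appears to end with a unit; units belong in a separate field")
    return warnings


print(annotation_warnings("3.2 kg", "NV"))  # flags the stray letters and the trailing unit
```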

Additionally, we implement a multi-step validation process after the initial annotation is completed. First, we conduct a preliminary check using predefined rules to identify any error data, which is then corrected. Following this, a secondary review is performed by different annotators to further check and correct any errors in the annotations. This cross-checking mechanism helps ensure the accuracy and consistency of the annotations.

[Figure 9: The Streamlit annotation interface.]
[Figure 10: The specific fields annotated for each problem.]

B.2 Annotation for Difficulty Levels

The definitions of three levels of difficulty are as follows:

1) Knowledge Recall: This involves the direct recall of factual information and well-defined procedures. It examines the memory of simple knowledge points, i.e., whether certain information is known.

2) Concept Application: This category covers the very basic use of simple concepts to solve easy problems or perform straightforward calculations. It involves applying known information to situations without any complex or multi-step reasoning. The focus is on straightforward application rather than reasoning.

3) Cognitive Reasoning: This involves the use of logical reasoning or visual reasoning to solve problems. It includes problems that require clear thinking and problem-solving techniques. It focuses on the ability to reason and analyze to understand and address the issues.

The prompt we use for categorizing each problem is shown in Figure 11.

B.3 Cognitive Reasoning Abilities Annotation

We provide detailed definitions for each of these cognitive reasoning abilities.

The logical reasoning abilities:

1) Deductive Reasoning involves starting with a general principle or hypothesis and logically deriving specific conclusions. This process ensures that the conclusion necessarily follows from the premises.

2) Inductive Reasoning involves making broad generalizations from specific observations. This type of reasoning infers general principles from specific instances, enhancing our confidence in the generality of certain phenomena.

3) Abductive Reasoning starts with incomplete observations and seeks the most likely explanation. It is used to form hypotheses that best explain the available data.

4) Analogical Reasoning involves using knowledge from one situation to solve problems in a similar situation by drawing parallels.

5) Cause-and-Effect Reasoning identifies the reasons behind occurrences and their consequences. This reasoning establishes causal relationships between events.

6) Critical Thinking involves objectively analyzing and evaluating information to form a reasoned judgment. It encompasses questioning assumptions and considering alternative explanations.

7) Decompositional Reasoning breaks down complex problems or information into smaller, more manageable parts for detailed analysis.

8) Quantitative Reasoning involves using mathematical skills to handle quantities and numerical concepts, essential for interpreting data and performing calculations.

The visual reasoning abilities:

1) Pattern Recognition is the ability to identify and understand repeating forms, structures, or recurring themes, especially when presented visually. This skill is critical in subjects like Chemistry for recognizing molecular structures, Biology for identifying cellular components, and Geography for interpreting topographic maps.

2) Spatial Reasoning is the ability to understand objects in both two and three-dimensional terms and draw conclusions about them with limited information. This skill is often applied in subjects like Math.

Two-Dimensional Examples: Plane geometry, segments, lengths.

Three-Dimensional Examples: Solid geometry, spatial visualization.

3) Diagrammatic Reasoning represents the capability to solve problems expressed in diagrammatic form, understanding the logical connections between shapes, symbols, and texts.

Examples: Reading various forms of charts and graphs, obtaining and analyzing statistical information from diagrams.

4) Symbol Interpretation is the ability to decode and understand abstract and symbolic visual information.

Examples: Understanding abstract diagrams, interpreting symbols, including representations of data structures such as graphs and linked lists.

5) Comparative Visualization involves comparing and contrasting visual elements to discern differences or similarities, which is often required in problem-solving to determine the relationship between variable components.

The prompts we use for annotating the logical reasoning abilities and the visual reasoning abilities are shown in Figure 12 and Figure 13, respectively.

Appendix C Experiment Details

C.1 Prompt for Image Caption

The prompt we use for captioning each image in the benchmark for LMMs is shown in Figure 14.

C.2 Models

In our experiments, we evaluate a range of both open-source and proprietary LMMs and LLMs. For LMMs, we select the newly released GPT-4o[36] and the powerful GPT-4V[1] from OpenAI, Claude3 Sonnet[3] from Anthropic, Gemini Pro Vision[45] from Google (we do not test Gemini-1.5-Pro[39] because of the significant rate limits on its API at the time of our experiments), and Qwen-VL-Max[6] from Alibaba. We also evaluate several open-source models, including LLaVA-NeXT-34B[31], InternVL-Chat-V1.5[12], Yi-VL-34B[55], and Qwen-VL-Chat[7]. For LLMs, we primarily select the corresponding text models of the aforementioned LMMs, such as GPT-4[2]. Additionally, we include open-source models like Qwen-7B-Chat, Qwen1.5-32B-Chat[5], Yi-34B-Chat[55], and InternLM2-Chat-20B[8]. Table 9 shows the relationship between LMMs and their corresponding LLMs. For the proprietary models, we call their APIs; for the open-source models, we run them on a cluster of eight A800 GPUs.

LMM | Corresponding LLM
GPT-4o | GPT-4o
GPT-4V | GPT-4
Claude3 Sonnet | Claude3 Sonnet
Gemini Pro Vision | Gemini Pro
LLaVA-NeXT-34B | Nous-Hermes-2-Yi-34B
InternVL-Chat-V1.5 | InternLM2-20B-Chat
Yi-VL-34B | Yi-34B-Chat
Qwen-VL-Chat | Qwen-7B-Chat

C.3 Evaluation Prompts

We meticulously design the prompts used for model input during experiments. These prompts are tailored to different answer types, with specific output formats specified for each type. The detailed prompt templates are shown in Figure 15, and the different instructions for each answer type are provided in Table 10.

Answer Type | Answer Type Description | Answer Format Instruction
SC | This is a multiple choice question (only one correct answer). | Please end your response with: "The final answer is $\boxed{ANSWER}$", where ANSWER should be one of the options: {the options of the problem}.
MC | This is a multiple choice question (more than one correct answer). | Please end your response with: "The final answer is $\boxed{ANSWER}$", where ANSWER should be two or more of the options: {the options of the problem}.
TF | This is a True or False question. | Please end your response with: "The final answer is $\boxed{ANSWER}$", where ANSWER should be either "True" or "False".
NV | The answer to this question is a numerical value. | {unit instruction} Please end your response with: "The final answer is $\boxed{ANSWER}$", where ANSWER is the numerical value without any units.
SET | The answer to this question is a set. | {unit instruction} Please end your response with: "The final answer is $\boxed{ANSWER}$", where ANSWER is the set of all distinct answers, each expressed as a numerical value without any units, e.g. ANSWER = {3, 4, 5}.
IN | The answer to this question is a range interval. | {unit instruction} Please end your response with: "The final answer is $\boxed{ANSWER}$", where ANSWER is an interval without any units, e.g. ANSWER = $(1,2]\cup[7,+\infty)$.
EX | The answer to this question is an expression. | {unit instruction} Please end your response with: "The final answer is $\boxed{ANSWER}$", where ANSWER is an expression without any units or equals signs, e.g. ANSWER = $\frac{1}{2}gt^{2}$.
EQ | The answer to this question is an equation. | {unit instruction} Please end your response with: "The final answer is $\boxed{ANSWER}$", where ANSWER is an equation without any units, e.g. ANSWER = $\frac{x^{2}}{4}+\frac{y^{2}}{2}=1$.
TUP | The answer to this question is a tuple. | {unit instruction} Please end your response with: "The final answer is $\boxed{ANSWER}$", where ANSWER is a tuple without any units, e.g. ANSWER = (3, 5).
MPV | This question involves multiple quantities to be determined. | Your final quantities should be output in the following order: {the ordered sequence of the names of the multiple quantities}. Their units are, in order, {the ordered sequence of the units}, but units should not be included in your concluded answer. Their answer types are, in order, {the ordered sequence of answer types}. Please end your response with: "The final answers are $\boxed{ANSWER}$", where ANSWER should be the sequence of your final answers, separated by commas, for example: 5, 7, 2.5.
MA | This question has more than one correct answer; you need to include them all. | Their units are, in order, {the ordered sequence of the units}, but units should not be included in your concluded answer. Their answer types are, in order, {the ordered sequence of answer types}. Please end your response with: "The final answers are $\boxed{ANSWER}$", where ANSWER should be the sequence of your final answers, separated by commas, for example: 5, 7, 2.5.
CODE | Write a Python program to solve the given competitive programming problem using standard input and output methods. Pay attention to time and space complexities to ensure efficiency. | Notes: (1) Your solution must handle standard input and output. Use input() for reading input and print() for output. (2) Be mindful of the problem's time and space complexity. The solution should be efficient and designed to handle the upper limits of input sizes within the given constraints. (3) It is encouraged to analyze and reason about the problem before coding. You can think step by step, and finally output your final code in the following format: ```python Your Python code here ```
OT | - | -
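As an illustration of how these instructions are combined with a problem at inference time, the sketch below assembles a prompt from the answer type. The dictionary entries and the field names (`answer_type`, `question`, `options`, `unit_instruction`) are illustrative assumptions; the authoritative templates are those shown in Figure 15 and Table 10.

```python
# Illustrative subset of the instructions in Table 10; field names are assumptions.
ANSWER_FORMAT_INSTRUCTIONS = {
    "SC": 'Please end your response with: "The final answer is \\boxed{{ANSWER}}", '
          "where ANSWER should be one of the options: {options}.",
    "NV": '{unit_instruction} Please end your response with: "The final answer is '
          '\\boxed{{ANSWER}}", where ANSWER is the numerical value without any units.',
    # ... the remaining answer types follow the same pattern.
}

def build_prompt(problem: dict) -> str:
    """Concatenate the problem statement with its answer-type-specific instruction."""
    instruction = ANSWER_FORMAT_INSTRUCTIONS[problem["answer_type"]].format(
        options=problem.get("options", ""),
        unit_instruction=problem.get("unit_instruction", ""),
    )
    return f"{problem['question']}\n\n{instruction}"
```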

C.4 Model Hyperparameters

For all models, we set the maximum number of output tokens to 2048 and the temperature to 0.0. When performing code generation (CODE) tasks, the temperature is set to 0.2.
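A minimal sketch of how these decoding settings could be applied when running one of the open-source models with the Hugging Face transformers library is shown below; the model name and loading options are illustrative, and our actual inference code may differ.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "internlm/internlm2-chat-20b"  # illustrative open-source model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

def generate(prompt: str, is_code_task: bool = False) -> str:
    """Greedy decoding (temperature 0.0) by default; temperature 0.2 for CODE tasks."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    gen_kwargs = {"max_new_tokens": 2048}
    if is_code_task:
        gen_kwargs.update(do_sample=True, temperature=0.2)
    else:
        gen_kwargs.update(do_sample=False)  # deterministic, equivalent to temperature 0.0
    outputs = model.generate(**inputs, **gen_kwargs)
    # Strip the prompt tokens and return only the newly generated text.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```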

C.5 Answer-level Evaluation Protocols

Rule-based Evaluation

For problems with fixed answers, we extract the final answer enclosed in "\boxed{}" (using prompts to instruct models to conclude their final answers with boxes) and perform rule-based matching according to the answer type.

1) For numerical value (NV) answers, we handle units by explicitly stating them in the prompts provided to the model, if applicable. During evaluation, we assess only the numerical value output by the model, disregarding the unit. In cases where numerical answers are subject to estimation, such as in physics or chemistry problems, we convert both the model’s output and the correct answer to scientific notation. If the exponent of 10 is the same for both, we allow a deviation of 0.1 in the coefficient before the exponent, accounting for minor estimation errors in the model’s calculations.

2) For problems where the answer type is an expression (EX) or an equation (EQ), we use the SymPy library (https://www.sympy.org/) for comparison. This allows us to accurately assess the equivalence of algebraic expressions and equations by symbolic computation (a minimal sketch of these checks follows this list).

3) For problems requiring the solution of multiple quantities (MPV), our evaluation strictly follows the order of output specified in the prompt, ensuring consistency and correctness in the sequence of results.

4) In the case of problems with multiple answers (MA), we require the model to output all possible answers, adequately considering various scenarios.

5) For problems where the answer type is an interval (IN), we strictly compare the open and closed intervals as well as the boundary values of the endpoints.

6) For problems where the answer type is a set (SET), we compare the set output by the model with the standard answer set to ensure they are completely identical. For problems where the answer type is a tuple (TUP), we compare the tuple output by the model with the standard answer tuple to ensure that each corresponding position is exactly equal.

7) For code generation (CODE) problems, we extract the code output by the model and test it through all provided test cases. Specifically, we use the unbiased pass@k metric,

$$\operatorname{pass}@k := \underset{\text{Problems}}{\mathbb{E}}\left[1-\frac{\binom{n-c}{k}}{\binom{n}{k}}\right] \tag{1}$$

where we set $k=1$ and $n=5$, and $c$ indicates the number of correct samples that pass all test cases.
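Below is a minimal sketch of these checks (not the exact code in our released evaluation tool): extracting the boxed answer, the scientific-notation tolerance from 1), the SymPy comparison from 2), and the unbiased pass@k estimator in Equation 1. Function names are illustrative.

```python
import math
import re

import sympy as sp

def extract_boxed(response: str) -> str:
    """Grab the content of the last \\boxed{...} in a response (non-nested braces only)."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    return matches[-1].strip() if matches else ""

def numeric_match(pred: str, gold: str, coeff_tol: float = 0.1) -> bool:
    """NV answers: same power of ten, coefficients within 0.1 of each other."""
    try:
        p, g = float(pred), float(gold)
    except ValueError:
        return False
    if p == 0 or g == 0:
        return p == g
    exp_p, exp_g = math.floor(math.log10(abs(p))), math.floor(math.log10(abs(g)))
    return exp_p == exp_g and abs(p / 10**exp_p - g / 10**exp_g) <= coeff_tol

def symbolic_match(pred: str, gold: str) -> bool:
    """EX answers: symbolic equivalence via SymPy."""
    try:
        return sp.simplify(sp.sympify(pred) - sp.sympify(gold)) == 0
    except (sp.SympifyError, TypeError):
        return False

def pass_at_k(n: int, c: int, k: int = 1) -> float:
    """Unbiased pass@k (Equation 1) for one problem with n samples, c of them correct."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)
```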

Model-based Evaluation

To deal with those problems whose answer types cannot be appropriately evaluated using rule-based matching, we employ model-based evaluation. In this approach, we utilize GPT-4V as the evaluator. We design prompts that include the problem, the correct answer, the solution (if provided), and the response from the model being tested (see Figure 16 for details). The evaluator model then judges the correctness of the tested model's response.
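A minimal sketch of such a judging call, assuming the OpenAI Python client, is shown below; the judging prompt here is only a placeholder for the actual template in Figure 16, and the parsing of the verdict is simplified.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder wording; the actual judging prompt is the one shown in Figure 16.
JUDGE_TEMPLATE = (
    "Problem: {problem}\n"
    "Reference answer: {answer}\n"
    "Reference solution: {solution}\n"
    "Candidate response: {response}\n"
    "Judge whether the candidate's final answer is correct. "
    "Reply with 'correct' or 'incorrect'."
)

def judge_with_model(problem, answer, solution, response, model="gpt-4-vision-preview"):
    """Ask the evaluator model to grade a response that rule-based matching cannot handle."""
    completion = client.chat.completions.create(
        model=model,
        temperature=0.0,
        messages=[{
            "role": "user",
            "content": JUDGE_TEMPLATE.format(
                problem=problem, answer=answer,
                solution=solution or "N/A", response=response,
            ),
        }],
    )
    # Simplified verdict parsing; a production evaluator would be stricter.
    return "incorrect" not in completion.choices[0].message.content.lower()
```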

To further ensure the reliability of using a model as an evaluator, we uniformly sample 100 problems involving model-based evaluation across various subjects and have several students with science and engineering backgrounds evaluate them manually and independently. On these 100 sampled problems, the human evaluations and the model evaluations agree nearly 80% of the time. Considering that problems requiring model-based evaluation account for approximately 5% of the total, the error rate can be controlled at around $20\% \times 5\% \approx 1\%$. Therefore, we consider this method to be reliable.

C.6 Process-level Evaluation Protocols

To conduct the process-level evaluation, we utilize a method based on GPT-4V. First, we reformat both the gold solution and the model-generated solution for the sampled problems into a neat step-by-step format using GPT-4. Then, we employ a carefully designed prompt (see Figure 17) to guide GPT-4V, using the reformatted gold solution, to evaluate the correctness of each step in the model's output, assigning a score of 0 for incorrect and 1 for correct steps. The final process-level score for each problem is determined by averaging the scores of all the steps.

Appendix D Fine-grained Results

D.1 Results across Logical and Visual Reasoning Abilities

Table 11 and Table 12 show the performance of different models across the various logical and visual reasoning abilities, respectively.

Model | DED | IND | ABD | ANA | CAE | CT | DEC | QUA
(DED = Deductive, IND = Inductive, ABD = Abductive, ANA = Analogical, CAE = Cause-and-Effect, CT = Critical Thinking, DEC = Decompositional, QUA = Quantitative Reasoning; all values are accuracy.)
LLMs
Qwen-7B-Chat | 4.85 | 4.18 | 4.84 | 5.29 | 5.54 | 5.16 | 4.09 | 4.64
Yi-34B-Chat | 19.65 | 13.84 | 26.82 | 18.73 | 26.51 | 25.71 | 15.00 | 15.55
Internlm2-20B-Chat | 17.43 | 13.12 | 24.74 | 16.30 | 22.81 | 22.51 | 13.03 | 13.42
Qwen1.5-32B-Chat | 25.94 | 21.20 | 33.39 | 24.87 | 32.33 | 31.82 | 20.19 | 22.19
GPT-3.5 | 19.38 | 13.19 | 26.64 | 16.30 | 23.32 | 24.31 | 14.43 | 17.35
Claude3 Sonnet | 25.40 | 17.88 | 34.78 | 23.28 | 30.64 | 31.15 | 18.59 | 22.67
GPT-4 | 33.93 | 24.80 | 40.66 | 33.33 | 38.48 | 39.32 | 26.84 | 31.72
GPT-4o | 39.1 | 30.14 | 43.43 | 37.78 | 42.89 | 44.06 | 31.79 | 36.56
Image caption + LLMs
Qwen-7B-Chat | 5.66 | 4.69 | 7.27 | 6.88 | 6.66 | 6.02 | 4.60 | 4.81
Yi-34B-Chat | 19.08 | 13.34 | 29.24 | 20.11 | 25.53 | 24.91 | 13.79 | 14.64
Internlm2-20B-Chat | 18.25 | 12.69 | 28.37 | 17.67 | 23.84 | 23.35 | 13.40 | 14.52
Qwen1.5-32B-Chat | 25.50 | 20.55 | 35.81 | 26.35 | 31.35 | 31.39 | 19.55 | 21.51
GPT-3.5 | 20.71 | 13.91 | 29.76 | 17.78 | 25.72 | 26.01 | 15.74 | 17.73
Claude3 Sonnet | 25.69 | 19.11 | 35.12 | 24.02 | 30.88 | 31.55 | 18.71 | 22.89
GPT-4 | 35.06 | 24.44 | 41.35 | 34.39 | 40.17 | 40.72 | 27.26 | 32.47
GPT-4o | 39.26 | 30.93 | 45.50 | 39.37 | 43.17 | 44.19 | 31.35 | 36.56
LMMs
Qwen-VL-Chat | 7.87 | 6.06 | 12.80 | 8.68 | 9.90 | 9.92 | 5.29 | 6.29
Yi-VL-34B | 16.30 | 10.60 | 21.11 | 16.40 | 21.35 | 20.76 | 11.89 | 13.42
InternVL-Chat-V1.5 | 17.65 | 12.55 | 30.28 | 17.25 | 22.06 | 22.70 | 12.56 | 14.37
LLaVA-NeXT-34B | 19.72 | 14.70 | 30.62 | 19.37 | 27.40 | 25.39 | 13.62 | 14.79
Qwen-VL-Max | 22.97 | 16.87 | 33.91 | 21.38 | 29.28 | 28.51 | 17.26 | 18.13
Gemini Pro Vision | 22.45 | 17.16 | 35.47 | 21.59 | 25.67 | 27.54 | 17.24 | 18.91
Claude3 Sonnet | 25.59 | 18.89 | 36.51 | 23.70 | 29.89 | 31.71 | 18.99 | 22.87
GPT-4V | 34.59 | 25.59 | 46.54 | 33.33 | 39.61 | 41.15 | 26.47 | 30.79
GPT-4o | 41.18 | 32.73 | 50.35 | 40.53 | 45.94 | 47.12 | 33.17 | 37.58

Model | PR | SPA | DIA | SYB | COM
(PR = Pattern Recognition, SPA = Spatial Reasoning, DIA = Diagrammatic Reasoning, SYB = Symbol Interpretation, COM = Comparative Visualization; all values are accuracy.)
LLMs
Qwen-7B-Chat | 4.59 | 2.64 | 4.26 | 4.01 | 4.66
Yi-34B-Chat | 23.70 | 13.58 | 19.56 | 17.61 | 22.37
Internlm2-20B-Chat | 22.89 | 13.06 | 18.63 | 15.73 | 21.16
Qwen1.5-32B-Chat | 28.93 | 17.94 | 24.67 | 22.18 | 27.83
GPT-3.5 | 22.33 | 13.27 | 18.40 | 16.05 | 21.05
Claude3 Sonnet | 26.88 | 17.60 | 22.86 | 20.49 | 25.98
GPT-4 | 33.65 | 23.99 | 30.09 | 27.94 | 32.54
GPT-4o | 35.96 | 28.71 | 33.29 | 31.54 | 35.00
Image caption + LLMs
Qwen-7B-Chat | 5.96 | 4.11 | 5.21 | 5.11 | 6.30
Yi-34B-Chat | 21.69 | 21.19 | 18.01 | 14.92 | 20.48
Internlm2-20B-Chat | 22.97 | 12.75 | 18.27 | 15.49 | 21.05
Qwen1.5-32B-Chat | 28.59 | 17.81 | 23.90 | 20.95 | 26.73
GPT-3.5 | 23.96 | 15.26 | 19.72 | 17.34 | 22.30
Claude3 Sonnet | 27.60 | 17.03 | 22.84 | 20.17 | 26.28
GPT-4 | 34.29 | 26.07 | 31.07 | 28.61 | 33.11
GPT-4o | 37.08 | 29.10 | 33.60 | 31.22 | 35.91
LMMs
Qwen-VL-Chat | 9.90 | 4.93 | 7.46 | 6.48 | 8.91
Yi-VL-34B | 16.72 | 9.60 | 13.78 | 12.10 | 15.09
InternVL-Chat-V1.5 | 22.85 | 12.11 | 17.68 | 15.11 | 21.27
LLaVA-NeXT-34B | 24.69 | 12.75 | 19.72 | 16.38 | 22.90
Qwen-VL-Max | 27.43 | 16.26 | 22.35 | 19.47 | 26.01
Gemini Pro Vision | 28.98 | 14.83 | 21.65 | 19.79 | 26.13
Claude3 Sonnet | 27.18 | 17.55 | 22.43 | 20.84 | 25.56
GPT-4V | 35.28 | 23.91 | 30.25 | 27.70 | 34.40
GPT-4o | 41.49 | 30.65 | 36.98 | 33.91 | 40.58

D.2 Results on Multimodal Problems

Table 13 shows the performance of different models on multimodal problems across different subjects.

Model | Math | Physics | Chemistry | Biology | Geography | Astronomy | CS (Pass@1) | Overall
(CS reports Pass@1; all other columns report accuracy.)
LLMs
Qwen-7B-Chat | 1.26 | 2.54 | 6.92 | 5.59 | 4.16 | 2.70 | 0 | 4.01
Yi-34B-Chat | 5.04 | 6.14 | 19.15 | 27.40 | 33.91 | 10.00 | 0.28 | 19.54
Internlm2-20B-Chat | 6.30 | 6.46 | 15.76 | 26.96 | 31.87 | 9.19 | 0.97 | 18.51
Qwen1.5-32B-Chat | 7.98 | 8.90 | 23.86 | 32.21 | 39.74 | 18.65 | 0.83 | 24.58
GPT-3.5 | 6.30 | 7.20 | 15.46 | 26.85 | 30.66 | 11.62 | 6.25 | 18.79
Claude3 Sonnet | 8.82 | 11.76 | 19.59 | 31.99 | 38.00 | 15.68 | 2.64 | 23.79
GPT-4 | 16.81 | 18.43 | 32.11 | 39.71 | 41.26 | 23.92 | 12.50 | 31.05
GPT-4o | 21.85 | 21.82 | 32.11 | 42.17 | 44.28 | 30.68 | 12.78 | 34.11
Image caption + LLMs
Qwen-7B-Chat | 3.78 | 2.22 | 6.19 | 6.49 | 7.34 | 5.00 | 0 | 5.32
Yi-34B-Chat | 5.04 | 6.57 | 14.29 | 24.94 | 33.61 | 8.78 | 0.28 | 18.25
Internlm2-20B-Chat | 6.72 | 6.89 | 16.35 | 25.39 | 31.26 | 10.27 | 1.18 | 18.41
Qwen1.5-32B-Chat | 6.72 | 8.69 | 21.80 | 32.10 | 39.82 | 17.30 | 0.97 | 23.99
GPT-3.5 | 4.20 | 12.39 | 18.11 | 25.73 | 32.32 | 9.73 | 7.64 | 20.04
Claude3 Sonnet | 5.46 | 13.35 | 20.47 | 32.89 | 38.23 | 13.38 | 3.89 | 23.97
GPT-4 | 16.81 | 20.44 | 29.90 | 38.7 | 45.12 | 26.62 | 12.26 | 32.36
GPT-4o | 21.01 | 22.14 | 31.22 | 45.19 | 45.19 | 30.54 | 14.58 | 34.86
LMMs
Qwen-VL-Chat | 3.36 | 2.65 | 6.63 | 9.84 | 13.85 | 5.27 | 0 | 7.82
Yi-VL-34B | 3.36 | 6.46 | 9.13 | 18.79 | 22.03 | 7.43 | 0 | 13.00
InternVL-Chat-V1.5 | 7.56 | 6.25 | 16.05 | 24.94 | 32.55 | 9.73 | 0.62 | 18.43
LLaVA-NeXT-34B | 4.62 | 6.46 | 14.43 | 28.30 | 36.11 | 10.00 | 0.28 | 19.66
Qwen-VL-Max | 6.30 | 7.63 | 17.82 | 28.86 | 40.05 | 15.14 | 1.25 | 22.38
Gemini Pro Vision | 7.56 | 9.11 | 24.30 | 32.55 | 35.81 | 11.22 | 2.36 | 22.58
Claude3 Sonnet | 5.46 | 13.45 | 19.15 | 33.22 | 37.02 | 17.30 | 2.36 | 24.05
GPT-4V | 13.87 | 18.22 | 29.31 | 40.27 | 46.86 | 22.43 | 11.25 | 31.81
GPT-4o | 26.47 | 22.14 | 33.14 | 46.98 | 53.75 | 31.76 | 13.61 | 38.17

D.3 Process-level Evaluation Results

Table 14 shows the process-level results of different models across different subjects.

Model | Math | Physics | Chemistry | Biology | Geography | Astronomy | Overall
(All values are process-level scores.)
LLMs
Qwen-7B-Chat | 18.7 | 43.7 | 35.1 | 18.9 | 34.5 | 31.5 | 30.4
Yi-34B-Chat | 30.2 | 51.0 | 54.0 | 31.9 | 36.5 | 40.3 | 40.7
Internlm2-20B-Chat | 21.2 | 35.0 | 51.2 | 22.7 | 32.9 | 33.3 | 32.7
Qwen1.5-32B-Chat | 32.0 | 44.0 | 61.1 | 32.0 | 45.2 | 48.6 | 43.8
GPT-3.5 | 37.6 | 46.9 | 32.7 | 30.2 | 38.7 | 26.7 | 35.4
Claude3 Sonnet | 40.8 | 42.7 | 65.3 | 30.8 | 52.6 | 50.5 | 47.1
GPT-4 | 57.0 | 53.8 | 73.6 | 50.0 | 50.1 | 65.0 | 58.2
GPT-4o | 59.9 | 65.9 | 67.4 | 49.6 | 61.4 | 69.5 | 62.3
Image caption + LLMs
Qwen-7B-Chat | 23.0 | 42.6 | 34.6 | 17.4 | 34.4 | 32.3 | 30.7
Yi-34B-Chat | 26.3 | 45.6 | 49.5 | 20.0 | 45.7 | 42.0 | 38.2
Internlm2-20B-Chat | 27.7 | 42.6 | 46.3 | 19.4 | 25.5 | 43.1 | 34.1
Qwen1.5-32B-Chat | 35.9 | 49.7 | 56.8 | 33.5 | 43.6 | 51.4 | 45.1
GPT-3.5 | 32.1 | 46.7 | 51.2 | 29.1 | 38.4 | 38.2 | 39.3
Claude3 Sonnet | 50.7 | 51.7 | 66.1 | 33.4 | 55.8 | 52.2 | 51.7
GPT-4 | 61.4 | 53.8 | 62.7 | 51.1 | 52.0 | 62.2 | 57.2
GPT-4o | 54.3 | 63.3 | 71.8 | 58.6 | 56.6 | 72.6 | 62.9
LMMs
Qwen-VL-Chat | 14.3 | 41.7 | 35.7 | 21.0 | 31.0 | 23.6 | 27.9
Yi-VL-34B | 28.9 | 41.0 | 44.2 | 18.7 | 30.2 | 40.3 | 33.9
InternVL-Chat-V1.5 | 26.6 | 40.5 | 42.7 | 29.4 | 43.1 | 44.8 | 37.8
LLaVA-NeXT-34B | 30.2 | 47.1 | 50.1 | 19.0 | 40.6 | 47.1 | 39.0
Qwen-VL-Max | 27.5 | 52.4 | 65.5 | 24.3 | 36.0 | 48.4 | 42.3
Gemini Pro Vision | 28.5 | 46.4 | 45.2 | 19.9 | 33.5 | 40.5 | 35.7
Claude3 Sonnet | 47.3 | 46.8 | 63.2 | 24.2 | 43.2 | 48.1 | 45.5
GPT-4V | 49.9 | 54.0 | 71.1 | 51.4 | 56.3 | 64.3 | 57.8
GPT-4o | 60.2 | 54.8 | 72.2 | 51.6 | 59.6 | 74.4 | 62.1

D.4 Results across Different Languages

Table 15 shows the results of different models in different languages.

Model | English | Chinese
(All values are accuracy.)
LLMs
Qwen-7B-Chat | 4.17 | 4.55
Yi-34B-Chat | 16.37 | 18.89
Internlm2-20B-Chat | 16.56 | 16.62
Qwen1.5-32B-Chat | 22.73 | 25.29
GPT-3.5 | 19.83 | 15.50
Claude3 Sonnet | 25.73 | 18.20
GPT-4 | 35.13 | 27.31
GPT-4o | 40.65 | 33.66
Image caption + LLMs
Qwen-7B-Chat | 4.71 | 5.21
Yi-34B-Chat | 16.96 | 16.26
Internlm2-20B-Chat | 17.40 | 16.43
Qwen1.5-32B-Chat | 22.93 | 24.24
GPT-3.5 | 20.56 | 15.77
Claude3 Sonnet | 26.31 | 17.43
GPT-4 | 36.08 | 27.40
GPT-4o | 41.50 | 33.07
LMMs
Qwen-VL-Chat | 7.70 | 5.55
Yi-VL-34B | 17.34 | 14.68
InternVL-Chat-V1.5 | 17.07 | 15.82
LLaVA-NeXT-34B | 17.74 | 16.74
Qwen-VL-Max | 20.14 | 21.49
Gemini Pro Vision | 21.61 | 18.76
Claude3 Sonnet | 26.52 | 17.21
GPT-4V | 36.18 | 26.55
GPT-4o | 43.04 | 34.39

Appendix E Data Leakage Detection Details

We combine the questions and detailed solutions (or answers if there are no steps) of the problems, then use the n-gram prediction accuracy metric. Specifically, for each sample, we sample k starting points and predict the next 5-gram each time. To evaluate whether the n-gram prediction is correct, we use exact match and more lenient metrics such as edit distance and ROUGE-L. Here, we consider a prediction correct if either the edit distance or ROUGE-L similarity exceeds 75%, to mitigate some reformatting issues during pre-training. We take the union of instances detected by different metrics to obtain the final set of detected instances.
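A minimal sketch of this detection procedure is given below. `model_continue` stands in for whatever greedy decoding call is used for the model under test, and the character-level similarity and LCS-based recall are simplified stand-ins for the edit-distance and ROUGE-L metrics described above.

```python
import random
from difflib import SequenceMatcher

def lcs_len(a: list[str], b: list[str]) -> int:
    """Token-level longest common subsequence, used for a ROUGE-L-style recall."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def count_ngram_hits(model_continue, text: str, k: int = 5, n: int = 5,
                     threshold: float = 0.75) -> int:
    """Sample k starting points and check whether the model reproduces the next n-gram."""
    tokens = text.split()
    candidates = range(1, max(2, len(tokens) - n))
    starts = random.sample(candidates, min(k, len(candidates)))
    hits = 0
    for s in starts:
        prefix, gold = " ".join(tokens[:s]), tokens[s:s + n]
        pred = model_continue(prefix, n).split()[:n]
        exact = pred == gold
        char_sim = SequenceMatcher(None, " ".join(pred), " ".join(gold)).ratio()
        rouge_l = lcs_len(pred, gold) / max(len(gold), 1)
        if exact or char_sim >= threshold or rouge_l >= threshold:
            hits += 1
    return hits  # a nonzero count flags the instance as potentially leaked
```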

As shown in Tables 16, 17, and 18, the experimental results reveal that different models do exhibit minor leakage across different subjects. An interesting observation is that some leakage detected on a base model is no longer detectable on the chat model built from that same base model. We hypothesize that optimization for dialogue capabilities potentially impacts the model's behavior on next-token prediction. A similar observation is that leakage detected on text-only chat models tends to decrease when evaluated on the multimodal chat models built on top of them. Figure 18 presents a data leakage case from Qwen1.5-32B-Chat.

Model to-be-detected | Text-only Chat Model | MM Chat Model | Math (# Leak. / # T / # MM) | Physics (# Leak. / # T / # MM) | Chemistry (# Leak. / # T / # MM)
InternLM2-20B | InternLM2-20B-Chat | InternVL-Chat-V1.5 | 14 / 1 / 2 | 3 / 0 / 0 | 0 / 0 / 0
InternLM2-20B-Chat | InternLM2-20B-Chat | InternVL-Chat-V1.5 | 17 / 1 / 0 | 0 / 0 / 0 | 1 / 1 / 1
Yi-34B | Yi-34B-Chat | Yi-VL-34B | 10 / 2 / 2 | 1 / 0 / 0 | 0 / 0 / 0
Yi-34B-Chat | Yi-34B-Chat | Yi-VL-34B | 2 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
Nous-Hermes-2-Yi-34B | - | LLaVA-NeXT-34B | 0 / - / 0 | 0 / - / 0 | 0 / - / 0
Qwen-7B | Qwen-7B-Chat | Qwen-VL-Chat | 8 / 0 / 0 | 1 / 1 / 0 | 0 / 0 / 0
Qwen1.5-32B | Qwen1.5-32B-Chat | - | 24 / 3 / - | 3 / 1 / - | 1 / 0 / -
Qwen1.5-32B-Chat | Qwen1.5-32B-Chat | - | 19 / 2 / - | 4 / 2 / - | 3 / 1 / -
GPT-4o | GPT-4o | GPT-4o | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0

Model to-be-detected | Text-only Chat Model | MM Chat Model | Biology (# Leak. / # T / # MM) | Geography (# Leak. / # T / # MM) | Astronomy (# Leak. / # T / # MM)
InternLM2-20B | InternLM2-20B-Chat | InternVL-Chat-V1.5 | 0 / 0 / 0 | 0 / 0 / 0 | 1 / 0 / 0
InternLM2-20B-Chat | InternLM2-20B-Chat | InternVL-Chat-V1.5 | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
Yi-34B | Yi-34B-Chat | Yi-VL-34B | 1 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
Yi-34B-Chat | Yi-34B-Chat | Yi-VL-34B | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0
Nous-Hermes-2-Yi-34B | - | LLaVA-NeXT-34B | 0 / - / 0 | 0 / - / 0 | 0 / - / 0
Qwen-7B | Qwen-7B-Chat | Qwen-VL-Chat | 0 / 0 / 0 | 0 / 0 / 0 | 1 / 0 / 0
Qwen1.5-32B | Qwen1.5-32B-Chat | - | 1 / 0 / - | 0 / 0 / - | 5 / 1 / -
Qwen1.5-32B-Chat | Qwen1.5-32B-Chat | - | 1 / 0 / - | 0 / 0 / - | 0 / 0 / -
GPT-4o | GPT-4o | GPT-4o | 0 / 0 / 0 | 0 / 0 / 0 | 0 / 0 / 0

Model to-be-detected | Text-only Chat Model | MM Chat Model | CS (# Leak. / # T / # MM) | Overall (# Leak. / # T / # MM)
InternLM2-20B | InternLM2-20B-Chat | InternVL-Chat-V1.5 | 1 / 1 / 1 | 19 / 2 / 3
InternLM2-20B-Chat | InternLM2-20B-Chat | InternVL-Chat-V1.5 | 1 / 1 / 1 | 19 / 3 / 2
Yi-34B | Yi-34B-Chat | Yi-VL-34B | 0 / 0 / 0 | 12 / 2 / 2
Yi-34B-Chat | Yi-34B-Chat | Yi-VL-34B | 0 / 0 / 0 | 2 / 0 / 0
Nous-Hermes-2-Yi-34B | - | LLaVA-NeXT-34B | 0 / - / 0 | 0 / 0 / 0
Qwen-7B | Qwen-7B-Chat | Qwen-VL-Chat | 1 / 1 / 1 | 11 / 2 / 1
Qwen1.5-32B | Qwen1.5-32B-Chat | - | 9 / - / - | 43 / 14 / 0
Qwen1.5-32B-Chat | Qwen1.5-32B-Chat | - | 3 / 3 / - | 30 / 8 / 0
GPT-4o | GPT-4o | GPT-4o | 0 / 0 / 0 | 0 / 0 / 0

[Figure 18: A data leakage case from Qwen1.5-32B-Chat.]

Appendix F Case Study

F.1 Cases for Error Analysis

From Figure 19 to Figure 25, we showcase examples of various error types across different disciplines.

Appendix G Consideration for Social Impact

It is essential to point out that, as AI performs increasingly well on our benchmark and potentially even surpasses human capabilities, potential ethical and moral risks arise that require collective oversight.

Appendix H Limitations and Future Work

Despite the value of this benchmark, there remains work to be done in the future. Firstly, our benchmark inevitably contains some noisy problems; we will actively utilize community feedback to continuously refine it. Additionally, we aim to release new versions of the benchmark annually to mitigate issues related to data leakage. Moreover, this benchmark is currently limited to evaluating models' abilities to solve complex problems. In the future, we aspire for AI to assist with complex tasks and demonstrate value in real-world applications such as AI4Science and AI4Engineering, rather than just problem-solving; this will be the goal of our future benchmark designs for evaluating AI capabilities. Nonetheless, at present, OlympicArena plays an essential role as a catalyst for further advancements.


References
