Long-context large language models (LLMs) have recently shown strong performance in information retrieval and long-document QA. However, to tackle the most challenging intellectual problems, LLMs must reason effectively over long and complex contexts with high information density and tight inter-statement connections (e.g., frontier mathematical research). Studying how LLMs handle increasing reasoning complexity and context length is essential, yet existing benchmarks lack a solid basis for quantitative evaluation. Inspired by the abstraction of GSM-8K problems as computational graphs, and by the ability to introduce noise by adding unnecessary nodes and edges, we develop a grade-school math problem generator capable of producing arithmetic problems of arbitrarily high difficulty and context length under fine-grained control. Using our newly synthesized GSM-\(\infty\) benchmark, we comprehensively evaluate existing LLMs. The results are organized into public leaderboards on Hugging Face.
Our GSM-\(\infty\) benchmark provides a scalable and controllable testbed for quantitatively evaluating and advancing LLM reasoning abilities under increasing reasoning difficulty and context length. Specifically, we make the following observations:
1. LLM reasoning performance drops exponentially as reasoning complexity increases (following a sigmoid curve on realistic contexts).
2. LLM reasoning performance decreases sharply as context length increases.
3. LLMs are consistently better at forward-thinking problems (constructive logic) than at backward-thinking problems of the same reasoning complexity.
4. When we scale repeated-sampling computation exponentially, LLM performance on GSM-\(\infty\) grows linearly.
Existing benchmarks are insufficient for evaluating the value of long-context LLMs, mainly for the following three reasons:
1. Lack of reasoning complexity: Most tasks rely on text retrieval, text summarization, or QA.
2. Lack of context length: Some tasks are inherently short-context but are inflated to long context by injecting semantically irrelevant noise.
These are not tasks that only long-context LLMs can do. We show that RAG systems are robust on them and perform on par with long-context LLMs. Given how efficient RAG systems are to build and run, RAG is more favorable in practice for these tasks.
3. Lack of scalability: Admittedly, tasks with high reasoning complexity and high information density exist, but they require enormous human effort to gather, deduplicate, and verify. The result is a lack of scalability in quantity, which makes such benchmarks hard to adopt widely in the community.
Problem Statement: How can we develop a benchmark that contains sufficient problems at every fine-grained level of reasoning difficulty, from easy retrieval tasks to infinitely hard challenges, while providing infinitely customizable context length with high information density?
Computational graphs are at the core of the design of GSM-\(\infty\). Inspired by Physics of Language Models: Part 2.1 [1], we abstract all operations and variable-assignment statements into nodes and edges of a computational graph. The figure above summarizes our design for mapping a computational graph to a problem in a real-world context. For a problem with only explicit operations, it is straightforward to map every operation that appears in the problem to a node-edge pair in the computational graph. Therefore, by randomly perturbing the computational graph, we can generate infinitely many problems of different reasoning complexity with fine-grained control. However, explicit operations are only a subset of all operations that appear in GSM-8K. Explicit operations also cover problems that mention the operations less directly through phrases such as "more than", "less", and "product". Most operations, though, are implicit: the problem text never states the operation, and the solution has to infer it from the context.
Mary earns 30 dollars in the morning, and she earns 20 dollars in the afternoon. How much does she earn in total?
is an example of a problem with implicit operations, since the problem does not explicitly mention the operation + or "more than".
Solving the problem relies on commonsense reasoning: a working day consists of both morning and afternoon.
All four grade-school operations, +, -, x, and /, can be inferred from context. With the design of GSM-\(\infty\) (the three-entity construct plus the reverse-thinking mode), we can generate problems with all four potential implicit operations. A huge amount of blood-sweat-and-tears effort has also gone into the design of GSM-\(\infty\) to make sure the problems are diverse in reasoning complexity and, more importantly, human- and LLM-understandable. A minimal sketch of the graph abstraction appears below; be sure to check out the paper for the full details.
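To make the abstraction concrete, here is a minimal Python sketch of the idea: quantities are nodes, operations are edges, and the number of operation nodes is the knob for reasoning complexity. This is an illustrative toy, not the actual GSM-\(\infty\) generator; the function names and sentence templates are our own assumptions, and a real instance would swap the explicit operator words for implicit, story-like phrasing.

```python
# Toy computational-graph problem generator (illustrative, not GSM-Infinite).
import random

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def build_graph(num_ops, rng):
    """Build a random DAG: each new op node combines two earlier nodes."""
    nodes = [("leaf", rng.randint(1, 9)) for _ in range(2)]
    for _ in range(num_ops):
        i, j = rng.sample(range(len(nodes)), 2)
        nodes.append(("op", rng.choice(list(OPS)), i, j))
    return nodes

def evaluate(nodes, idx):
    """Recursively evaluate a node's value from its parents."""
    node = nodes[idx]
    if node[0] == "leaf":
        return node[1]
    _, op, i, j = node
    return OPS[op](evaluate(nodes, i), evaluate(nodes, j))

def render(nodes):
    """Render each node as a sentence; here operations are explicit,
    whereas GSM-Infinite-style problems would phrase them implicitly."""
    lines = []
    for k, node in enumerate(nodes):
        if node[0] == "leaf":
            lines.append(f"Quantity q{k} is {node[1]}.")
        else:
            _, op, i, j = node
            word = "plus" if op == "+" else "times"
            lines.append(f"Quantity q{k} is q{i} {word} q{j}.")
    lines.append(f"What is q{len(nodes) - 1}?")
    return " ".join(lines)

rng = random.Random(0)
graph = build_graph(num_ops=3, rng=rng)   # 3 ops = reasoning complexity knob
print(render(graph))
print("answer:", evaluate(graph, len(graph) - 1))
```

Because the graph, not the text, defines the problem, difficulty (number of op nodes) and noise (extra nodes whose values the question never asks about) can be dialed independently.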
[1] Ye, T., Xu, Z., Li, Y., & Allen-Zhu, Z. (2024). Physics of Language Models: Part 2.1, Grade-School Math and the Hidden Reasoning Process. arXiv preprint arXiv:2407.20311.
Here are four interesting observations we found from the benchmark:
1. We found that LLM reasoning performance displays a natural exponential decrease as reasoning complexity increases. On Medium and Hard tasks, the degradation is well modeled by a sigmoid function. This intuitively makes sense: when the number of operations (op) is small, the LLM's reasoning capacity is sufficient to solve every problem correctly; as the difficulty increases, performance quickly decreases and finally plateaus near zero.
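To illustrate the sigmoid modeling concretely, here is a minimal curve-fitting sketch. The accuracy numbers below are invented for illustration; in practice each point would be a model's measured accuracy at a given op count on GSM-\(\infty\).

```python
# Fit a sigmoid to accuracy-vs-complexity data (data points are made up).
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(op, L, k, op0):
    """Accuracy plateaus at L for easy problems, then decays around op0."""
    return L / (1.0 + np.exp(k * (op - op0)))

ops = np.array([2, 4, 8, 12, 16, 20, 24, 28], dtype=float)
acc = np.array([0.98, 0.96, 0.85, 0.60, 0.35, 0.15, 0.06, 0.02])

params, _ = curve_fit(sigmoid, ops, acc, p0=[1.0, 0.3, 12.0])
L, k, op0 = params
print(f"plateau={L:.2f}, decay rate={k:.2f}, midpoint op={op0:.1f}")
```

The fitted midpoint op0 gives a single, comparable number for how much reasoning complexity a model can absorb before its accuracy collapses.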
2. Because of our design of GSM-\(\infty\), we can look at LLM performance separately on forward-thinking problems (constructive problems with hidden + and x operations) and reverse-thinking problems (problems with hidden - and / operations). More details are in Section 4.1 of the paper. We found that both forward-thinking and reverse-thinking performance can be modeled by sigmoid functions. However, for most LLMs, performance on forward-thinking problems is consistently better than on reverse-thinking problems of the same complexity. The figure above shows five different models exhibiting this trend.
3. For each LLM, reasoning performance degrades sharply as context length increases (meaningful context with close semantic connections to the core reasoning problem). On the other hand, we found that how much performance degrades with increasing close-noise context length differs from model to model. Please check out the paper for the full visualization. Unlike on previous long-context benchmarks, RAG methods are no longer robust on GSM-\(\infty\): panels (a) and (b) show that the retriever can clearly separate necessary content from noise on the variable tracking (VT) task in RULER but not on GSM-\(\infty\), and panels (c) and (d) show that RAG methods are much less robust than the corresponding long-context LLMs.
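For intuition on the retrieval comparison, here is a minimal sketch of the kind of measurement behind panels (a) and (b): rank context statements by similarity to the question and check how many truly necessary statements land in the top-k. The lexical-overlap scorer and the toy sentences are our own stand-ins, not the retrievers or data used in the paper.

```python
# Toy retrieval check: can a simple scorer separate needed facts from noise?
def overlap_score(sentence, question):
    """Fraction of question tokens that also appear in the sentence."""
    s, q = set(sentence.lower().split()), set(question.lower().split())
    return len(s & q) / max(len(q), 1)

question = "how much money does mary earn in total"
needed = ["Mary earns 30 dollars in the morning.",
          "Mary earns 20 dollars in the afternoon."]
noise = ["The bakery sells bread every day.",
         "A train leaves the station at noon."]

ranked = sorted(needed + noise,
                key=lambda s: overlap_score(s, question), reverse=True)
top_k = ranked[:len(needed)]
recall = sum(s in needed for s in top_k) / len(needed)
print(f"recall@{len(needed)} = {recall:.2f}")
```

With RULER-style irrelevant noise, as here, recall is near perfect; with GSM-\(\infty\)-style close noise (statements about the same entities and quantities), noise scores approach the needed statements' scores and recall collapses, which is why RAG loses its advantage.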
4. An interesting observation is that when we scale repeated-sampling computation exponentially, LLM performance (AUC) on GSM-\(\infty\) grows linearly. Benefiting from the specific design of GSM-\(\infty\), we reveal a fundamental trait of inference-time scaling via repeated sampling, without additional RL fine-tuning, on reasoning tasks.
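As a rough illustration of this trend, the sketch below simulates repeated sampling under an assumed per-problem success distribution and prints pass@k as the sample budget k doubles; roughly constant per-doubling gains correspond to linear growth against exponential compute. The Beta difficulty distribution and the pass@k aggregation are our assumptions for illustration only; the paper measures AUC over the full complexity range using real LLM samples.

```python
# Simulated repeated sampling: performance vs exponentially growing budget k.
import random

rng = random.Random(0)
# Latent per-problem success probabilities (mass skewed toward hard problems).
probs = [rng.betavariate(0.5, 2.0) for _ in range(5000)]

def pass_at_k(p, k):
    """Probability that at least one of k independent samples is correct."""
    return 1.0 - (1.0 - p) ** k

prev = None
for k in [1, 2, 4, 8, 16, 32, 64, 128]:   # exponential compute budget
    acc = sum(pass_at_k(p, k) for p in probs) / len(probs)
    gain = "" if prev is None else f"  (+{acc - prev:.3f})"
    print(f"k={k:>4}  pass@k={acc:.3f}{gain}")
    prev = acc
```

Printing the per-doubling gain makes the shape of the curve easy to inspect; how close those gains are to constant depends on the difficulty distribution, which is exactly what GSM-\(\infty\)'s fine-grained control lets us vary.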
Long-context LLMs have the potential to tackle complex, information-dense tasks requiring deep reasoning and coherent long-form generation. To advance their development and benchmarking, we introduce GSM-\(\infty\), a synthetic long-context reasoning benchmark generated entirely by a software-based system with fine-grained control over complexity and information density. Through extensive evaluations on GSM-\(\infty\), we uncover key insights to inform future LLM training and inference improvements.
If you have any questions or want to add your model to the leaderboard, please contact Yang Zhou, Beidi Chen, or Hongyi Liu.
@misc{zhou2025gsminfinitellmsbehaveinfinitely,
      title={GSM-Infinite: How Do Your LLMs Behave over Infinitely Increasing Context Length and Reasoning Complexity?},
      author={Yang Zhou and Hongyi Liu and Zhuoming Chen and Yuandong Tian and Beidi Chen},
      year={2025},
      eprint={2502.05252},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.05252},
}