Your 1M+ Context Window LLM Is Less Powerful Than You Think

By AiNEWS2025 · 2025-07-17 · Machine Learning


Large language models (LLMs) are now able to handle vast inputs: their context windows range from 200K tokens (Claude) to 2M tokens (Gemini 1.5 Pro). That's between 280 and 2,800 pages of text! These massive context windows suggest that in most practical scenarios, we don't need to worry much about hitting LLM input limits. However, our newest research shows that this is not true. For many problems with complex context, the LLM's effective working memory can get overloaded with relatively small inputs, long before we hit context window limits.

Our paper introduces a new theoretical model of computation to explain why this happens and shows in experiments that our theory's predictions match real-world results. Our findings can finally explain previously reported LLM failures, such as the inability to detect plot holes, the struggle to understand long stories, or incorrect answers to questions over collections of similar documents.

Below we lay out the details by answering the following questions:

  1. What happens if we exceed an LLM’s working memory?
  2. Does my task need a lot of working memory?
  3. What can I do if my task needs a lot of working memory?
  4. Why do certain tasks need a lot of working memory?

What happens if we exceed an LLM’s working memory?

Intuitively speaking, tasks that require a lot of context to answer a question correctly also require the LLM to track a lot of information. As the size of this “working set” needed to correctly reason about the answer grows, it gets more likely that the LLM will make mistakes, because it is unable to retain the relevant information in its limited working memory.

Consider the following example. Say we want to debug a certain part of someone’s code and want to figure out whether the final value of the variable x7 is “a” or “b”:

x6 = "a"
x4 = "b"
x0 = x6
x2 = x4
x3 = x0
x8 = x2
x9 = x3
x7 = x3

This variable tracking task requires a lot of context to compute an answer, since failing to attend to even a single line of the code can lead to an incorrect answer. Running experiments on this task with a number of frontier models shows that they all regress to random guessing between the two answers as the number of variables grows:

(Figure: LLMs’ performance drops quickly as the number of variables to track goes up.)

This experiment indicates that these LLMs can keep track of at most n = 5 to 10 variables before exceeding their working memory capacity. After this, performance rapidly degrades to 50–50 random guessing.
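
For readers who want to poke at this themselves, here is a minimal sketch of how such variable-tracking instances can be generated, with the ground truth resolved programmatically for scoring. This is an illustrative reconstruction, not the paper's benchmark code; the model under test would see only the assignment lines and the question.

import random

def make_instance(n_vars):
    # Build a chain of aliasing assignments like the example above,
    # resolving the ground-truth value as we go.
    names = [f"x{i}" for i in range(n_vars)]
    random.shuffle(names)
    lines = [f'{names[0]} = "a"', f'{names[1]} = "b"']
    env = {names[0]: "a", names[1]: "b"}
    for name in names[2:]:
        src = random.choice(list(env))   # alias some already-defined variable
        lines.append(f"{name} = {src}")
        env[name] = env[src]
    target = names[-1]
    return "\n".join(lines), target, env[target]

prompt, target, answer = make_instance(10)
print(prompt)                                   # shown to the model
print(f"What is the final value of {target}?")  # the question
print(f"Ground truth: {answer}")                # kept aside for scoring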

Does my task need a lot of working memory?

So now you’re probably curious whether working memory limits might be an issue for the task you are trying to solve. The first thing we recommend is checking if the task at hand is similar to any of the tasks we theoretically analyze in our paper. We call tasks BAPO-hard if they need a lot of working memory under our BAPO model (discussed more below). Tasks we know are hard theoretically include:

  • Graph reachability: May occur in complex summarization, entity tracking, variable tracking, or logical deduction
  • Majority: May occur in review classification, finding a consensus opinion, etc.
  • Reasoning over triples: For example, constructing answers from knowledge graphs

Likewise, you can see if your task is BAPO-easy:

  • Minimum/Maximum: For example, return the most negative or positive review in a list
  • Index or Needle-in-a-Haystack: E.g., find out whether a topic is discussed

Intuitively, problems where only a small piece of information needs to be tracked to answer the question have low working memory requirements (e.g., Needle-in-a-Haystack). If the answer requires almost all the input tokens and no short summary exists, the working memory requirements are high.

If your task is not on the above list, you can use your judgement to determine if there is an easy solution that doesn’t need a lot of memory, e.g., there is some easy attention-based lookup the LLM can perform to answer the question, or some way to summarize the context (without knowing the question a priori) so that your question can be answered from the summary. If not, your problem might require substantial working memory. In this case, LLMs are at risk of failing at your task, particularly as the size of the task increases (e.g., number of variables, relevant pieces of information). Don’t assume that because the answer is computable from the context, an LLM can compute it.
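
To make this intuition concrete, here is a small illustrative contrast in Python. The working set a classical program needs is only a loose analogy for BAPO bandwidth, but it points the same way: a Minimum/Maximum-style task gets by with a single running value, while graph reachability (one of the BAPO-hard tasks above) must maintain a visited set that can grow with the input.

def most_positive(scores):
    # BAPO-easy flavor (Minimum/Maximum): one running value is enough
    # state, no matter how long the input stream gets.
    best = None
    for s in scores:
        if best is None or s > best:
            best = s
    return best

def reachable(edges, start, goal):
    # BAPO-hard flavor (graph reachability): the frontier and visited
    # set can grow with the input, so the working set is not bounded.
    frontier, seen = [start], {start}
    while frontier:
        node = frontier.pop()
        if node == goal:
            return True
        for a, b in edges:
            if a == node and b not in seen:
                seen.add(b)
                frontier.append(b)
    return False

print(most_positive([0.2, -0.9, 0.7]))                # -> 0.7
print(reachable([("u", "v"), ("v", "w")], "u", "w"))  # -> True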

What can I do if my task needs a lot of working memory?

If you realize that your task requires a lot of working memory and the LLM fails at it often, here are several theoretically motivated fixes that can increase your chances of good performance:

  • Use a reasoning-enabled model (and hope it doesn’t run out of tokens). We show theoretically that reasoning tokens enable LLMs to solve any BAPO-hard task; however, the number of reasoning tokens required to overcome working memory limits might be extremely large (as the experiments in our paper show). And in practice, even the best reasoning models still make mistakes.
  • Based on our theoretical results, you could decompose your problem into one that has a more compact intermediate representation that is less likely to exceed working memory limits. For example, instead of asking the LLM to reason over the full HTML of a webpage, provide a simplified syntax such as the rendered text only. Similarly, for RAG scenarios, it might be useful to pre-annotate or pre-combine the data in ways that make the final answer easy to obtain from the smaller summaries.
  • Finally, you can outsource working-memory-heavy pieces to an external solver or tool. For example, instead of asking for the majority opinion directly, classify each opinion separately (BAPO-easy) and then aggregate the results in Python rather than asking the LLM; a sketch of this pattern follows this list.
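
As a minimal sketch of that last fix, assume a hypothetical classify_sentiment helper that wraps a single, BAPO-easy LLM call per review; the working-memory-heavy majority count then happens in plain Python rather than inside the model.

from collections import Counter

def classify_sentiment(review):
    # Hypothetical stand-in for a single LLM call that labels one review
    # as "positive" or "negative"; swap in your actual model client here.
    return "positive" if "good" in review.lower() else "negative"

def majority_opinion(reviews):
    # Each per-review classification is a small, independent task ...
    labels = [classify_sentiment(r) for r in reviews]
    # ... and the aggregation runs in plain Python, outside the LLM.
    return Counter(labels).most_common(1)[0][0]

reviews = ["Good battery life", "Screen died within a week", "Really good value"]
print(majority_opinion(reviews))   # -> "positive"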

Keep in mind that these fixes might not work for all tasks, especially when it is not clear how to decompose tasks into less working memory intensive subtasks. This is where future research can hopefully fill the gap.

Why do certain tasks need a lot of working memory?

For those interested, this section delves a little deeper into the theory from our work. To analyze which tasks need a lot of working memory, we first developed an abstract model of how transformers compute solutions. We then used the model to prove that a task is hard or easy.

As illustration, consider the task of reading a newly released long book and then answering a question about it. There are roughly two strategies humans can use after reading. If one has a large working memory and can recall all the book’s crucial information, one can answer the question straight off the top of one’s head. If one does not, and can only recall the big picture ideas, one can use this to find the rough location of relevant information in the book and flip back to the page(s) to find the answer.

Now, consider how a transformer-based LLM processes the same task. It will read over the content of the book and then compute an answer at the last position after it reads the questionª. While processing the content of the book, the LLM can attend to a few relevant locations to compute the answer (the equivalent of flipping through pages). Or it can use contextual embeddings of the book to store important facts and answer the question from them directly (the equivalent of recall). What it cannot do is go back and read the book in its entirety again with the question in mind, because causal attention allows information to only flow forward through the context window.

In this scenario, for both humans and AI, a larger working memory means a better chance of having stored the information needed to compute the correct answer, particularly when things get complicated. Okay, but how do we more formally define the working memory needed for LLM tasks? In our paper, we do this through the bounded attention prefix oracle (BAPO) model.

The BAPO model provides a simplified computational characterization that we can analyze theoretically to prove which problems require more or less bandwidth (i.e., working memory) for an LLM. To compute an answer, the BAPO model uses (something like) the two strategies from above:

  • The BAPO model can use a prefix oracle f to send a bits of information forward ↔ memorize information while reading
  • The BAPO model can also use an attention oracle g to attend to b past tokens ↔ flip back to the pages

We then define the working memory requirements for a task as the combination of two BAPO bandwidth parameters (a, b) — the first refers to how much information is pre-computed and passed on (bandwidth a) and the second refers to how much can be looked up after the fact (bandwidth b). Why is working memory the combination of two parameters? It’s because there is a trade-off: the more information one has memorized, the less information one can look up.

If a task has constant bandwidth requirements (i.e., a, b ∈ O(1)), then the task will likely not exceed LLM working memory size, but if a task has bandwidth requirements that depend on the size of the input (e.g., sequence or alphabet length), then it will eventually exceed the working memory limits and result in failure.
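
As a toy illustration (a loose sketch of the BAPO abstraction, not the paper's formal definition), the Index task can be answered with constant bandwidth: nothing needs to be sent forward (a = 0), and a single attended token (b = 1) already determines the answer.

def bapo_index(tokens, query_position):
    # Prefix oracle f: for Index, a = 0 bits need to be sent forward;
    # nothing about the prefix has to be memorized in advance.
    forwarded = None
    # Attention oracle g: attending to b = 1 past token, the one at the
    # queried position, is all that is needed to answer.
    return tokens[query_position]

print(bapo_index(["the", "cat", "sat"], 1))   # -> "cat"

Majority, by contrast, has no constant-bandwidth strategy in our model: its bandwidth requirements grow with the input, which is exactly the regime where failures appear.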

Conclusions

Working memory is an important bottleneck in transformer-based LLMs. Long before the input exceeds the context window size, the transformer's ability to effectively represent and communicate information within that window is exhausted. Current long-context benchmarks rely strongly on Needle-in-a-Haystack problems, which we have shown are BAPO-easy. This means that current benchmark performance will not accurately capture performance over the full range of long-context reasoning tasks.

Tasks such as complex summarization, code tracing, or inconsistency detection are hard for LLMs according to our theoretical model. They can contain BAPO-hard subtasks, leading to high working memory requirements, which in turn cause failures in practice. While recent advances in context window length have broadened the applicability of LLMs, longer contexts also increase the complexity of the associated tasks. This will likely increase the frequency of BAPO-hard tasks and lead to more LLM failures.

We outlined a number of strategies to lower working memory requirements of tasks, such as reasoning tokens. However, they come with their own limitations, e.g., some tasks might need a vast number of reasoning tokens to overcome bandwidth limitations in practice. We hope that future research can provide more general solutions and perhaps even new architectures beyond transformers.

Footnotes

ª You may wonder whether presenting the question first changes the working memory requirements. It does not; see the paper for more details.
