Decoupling the "What" and "Where" With Polar Coordinate Positional Embeddings
by Anand Gopalakrishnan and 3 other authors
Abstract: The attention mechanism in a Transformer architecture matches key to query based on both content (the "what") and position in a sequence (the "where"). We present an analysis indicating that what and where are entangled in the popular RoPE rotary position embedding. This entanglement can impair performance, particularly when decisions require independent matches on these two factors. We propose an improvement to RoPE, which we call Polar Coordinate Position Embeddings or PoPE, that eliminates the what-where confound. PoPE is far superior on a diagnostic task requiring indexing solely by position or solely by content. On autoregressive sequence modeling in music, genomic, and natural language domains, Transformers using PoPE as the positional encoding scheme outperform baselines using RoPE with respect to evaluation loss (perplexity) and downstream task performance. On language modeling, these gains persist across model scale, from 124M to 774M parameters. Crucially, PoPE shows strong zero-shot length extrapolation capabilities compared not only to RoPE but also to YaRN, a method designed specifically for extrapolation that requires additional fine-tuning and frequency interpolation.
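The abstract contrasts PoPE with RoPE but does not spell out either formulation. As background, here is a minimal NumPy sketch of standard RoPE, illustrating how the query-key attention score mixes content (the vectors q, k) and position (the rotation angles) inside a single dot product, which is the entanglement the abstract refers to. The function name `rope`, the `base` constant, and the toy vectors are illustrative choices, not taken from the paper, and no attempt is made to reproduce the paper's PoPE construction.

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Apply rotary position embedding (RoPE) to a 1-D vector x at position `pos`.

    Each dimension pair (2i, 2i+1) is rotated by the angle pos * base^(-2i/d),
    so the score <rope(q, m), rope(k, n)> depends on the relative offset m - n
    and on the content of q and k through the same rotation.
    """
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-np.arange(half) / half)   # theta_i = base^(-2i/d)
    angles = pos * freqs                        # m * theta_i
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[0::2], x[1::2]                   # even/odd dims form the rotated pairs
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin
    out[1::2] = x1 * sin + x2 * cos
    return out

# Scores are invariant to shifting both positions (relative encoding),
# but content and position jointly determine every score.
rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)
print(np.dot(rope(q, 3), rope(k, 1)))   # offset 2
print(np.dot(rope(q, 7), rope(k, 5)))   # same offset 2 -> same score
```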
Submission history
From: Anand Gopalakrishnan
[v1] Fri, 5 Sep 2025 14:22:27 UTC (861 KB)
[v2] Mon, 22 Dec 2025 20:13:10 UTC (1,888 KB)