
Beyond the Surface: Probing the Ideological Depth of Large Language Models


Shariar Kabir, Kevin Esterling, Yue Dong


Abstract: Large language models (LLMs) display recognizable political leanings, yet they vary significantly in their ability to represent a political orientation consistently. In this paper, we define ideological depth as (i) a model's ability to follow political instructions without failure (steerability), and (ii) the feature richness of its internal political representations, measured with sparse autoencoders (SAEs), an unsupervised sparse dictionary learning (SDL) approach. Using Llama-3.1-8B-Instruct and Gemma-2-9B-IT as candidates, we compare prompt-based and activation-steering interventions and probe political features with publicly available SAEs. We find large, systematic differences: Gemma is more steerable in both directions and activates approximately 7.3x more distinct political features than Llama. Furthermore, causally ablating a small, targeted set of Gemma's political features, creating a comparably feature-poor setting, induces consistent shifts in its behavior, including increased rates of refusal across topics. Together, these results indicate that refusals on benign political instructions or prompts can arise from capability deficits rather than safety guardrails. Ideological depth thus emerges as a measurable property of LLMs, and steerability serves as a window into their latent political architecture.
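The activation-steering intervention mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact method: the function name, the scaling parameter `alpha`, and the assumption that a political direction vector is obtained elsewhere (e.g., as a difference of mean activations between contrastive prompts) are all illustrative.

```python
import numpy as np

def steer_activations(hidden, direction, alpha=2.0):
    """Shift a layer's hidden activations along a (hypothetical) political direction.

    hidden:    (seq_len, d_model) array of residual-stream activations
    direction: (d_model,) vector, e.g. a difference-of-means direction
               between two contrastive prompt sets
    alpha:     steering strength; sign flips the steering direction
    """
    unit = direction / np.linalg.norm(direction)  # normalize so alpha is interpretable
    return hidden + alpha * unit                  # broadcast add across all positions

# Toy demo with random activations standing in for a real model layer
rng = np.random.default_rng(0)
h = rng.normal(size=(5, 16))   # 5 tokens, 16-dim hidden state
d = rng.normal(size=16)        # stand-in for a learned political direction
steered = steer_activations(h, d, alpha=2.0)
print(steered.shape)           # same shape as the input activations
```

In practice such an edit would be applied via a forward hook at a chosen layer during generation; the sketch only shows the arithmetic of the intervention itself.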

Submission history

From: Shariar Kabir
[v1] Fri, 29 Aug 2025 09:27:01 UTC (660 KB)
[v2] Fri, 14 Nov 2025 13:08:01 UTC (197 KB)
