[2406.11944] Transcoders Find Interpretable LLM Feature Circuits


Authors: Jacob Dunefsky, Philippe Chlenski, Neel Nanda


Abstract: A key goal in mechanistic interpretability is circuit analysis: finding sparse subgraphs of models corresponding to specific behaviors or capabilities. However, MLP sublayers make fine-grained circuit analysis on transformer-based language models difficult. In particular, interpretable features, such as those found by sparse autoencoders (SAEs), are typically linear combinations of extremely many neurons, each with its own nonlinearity to account for. Circuit analysis in this setting thus either yields intractably large circuits or fails to disentangle local and global behavior. To address this, we explore transcoders, which seek to faithfully approximate a densely activating MLP layer with a wider, sparsely activating MLP layer. We introduce a novel method for using transcoders to perform weights-based circuit analysis through MLP sublayers. The resulting circuits neatly factorize into input-dependent and input-invariant terms. We then successfully train transcoders on language models with 120M, 410M, and 1.4B parameters, and find them to perform at least on par with SAEs in terms of sparsity, faithfulness, and human interpretability. Finally, we apply transcoders to reverse-engineer unknown circuits in the model, and we obtain novel insights regarding the “greater-than circuit” in GPT2-small. Our results suggest that transcoders can prove effective in decomposing model computations involving MLPs into interpretable circuits. Code is available at this https URL.
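To make the abstract's setup concrete, here is a minimal PyTorch sketch of a transcoder and of the input-dependent/input-invariant factorization it enables. Everything here (the class and function names, the dimensions, the L1 coefficient, the omission of layer norm and attention paths) is an illustrative assumption for exposition, not the authors' implementation; see the linked code for the real one.

```python
import torch
import torch.nn as nn

class Transcoder(nn.Module):
    """Hypothetical sketch: a wider, sparsely activating MLP trained to
    approximate the input-output map of a dense MLP sublayer."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # d_hidden is typically much larger than the MLP's own hidden width.
        self.enc = nn.Linear(d_model, d_hidden)
        self.dec = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        acts = torch.relu(self.enc(x))  # sparse feature activations
        return self.dec(acts), acts

def transcoder_loss(tc: Transcoder, mlp_in: torch.Tensor,
                    mlp_out: torch.Tensor, l1_coeff: float = 1e-3):
    # Faithfulness: reconstruct the original MLP sublayer's output from
    # its input. Sparsity: L1 penalty on the feature activations.
    recon, acts = tc(mlp_in)
    mse = (recon - mlp_out).pow(2).mean()
    return mse + l1_coeff * acts.abs().sum(dim=-1).mean()

def feature_connection(tc_early: Transcoder, tc_late: Transcoder,
                       i: int, j: int, act_i: torch.Tensor):
    # Illustrative factorization: the connection from early feature i to
    # late feature j splits into an input-invariant weight term (decoder
    # column of i dotted with encoder row of j) and an input-dependent
    # term (feature i's activation on the current prompt).
    invariant = tc_early.dec.weight[:, i] @ tc_late.enc.weight[j]
    return act_i * invariant
```

Because the weight term is fixed across inputs, feature-to-feature connections of this form can be read off the trained weights alone; this is what makes weights-based circuit analysis through MLP sublayers tractable in the sense the abstract describes.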

Submission history

From: Jacob Dunefsky
[v1] Mon, 17 Jun 2024 17:49:00 UTC (657 KB)
[v2] Wed, 6 Nov 2024 22:37:30 UTC (672 KB)
