
[2411.18553] Retrofitting Large Language Models with Dynamic Tokenization


by Darius Feher and 2 other authors

Abstract: Current language models (LMs) use a fixed, static subword tokenizer. This default choice typically results in degraded efficiency and language capabilities, especially in languages other than English. To address this issue, we challenge the static design and propose retrofitting LMs with dynamic tokenization: a way to dynamically decide on token boundaries based on the input text via a subword-merging algorithm inspired by byte-pair encoding. We merge frequent subword sequences in a batch, then apply a pre-trained embedding-prediction hypernetwork to compute the token embeddings on the fly. For encoder-style models (e.g., XLM-R), this on average reduces token sequence lengths by >20% across 14 languages while degrading performance by less than 2%. The same method applied to pre-filling and scoring in decoder-style models (e.g., Mistral-7B) results in minimal performance degradation at up to a 17% reduction in sequence length. Overall, we find that dynamic tokenization can mitigate the limitations of static tokenization by substantially improving inference speed and promoting fairness across languages, enabling more equitable and adaptable LMs.
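The core idea of the batch-level, BPE-inspired merging can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the merge budget, frequency threshold, and stopping rule here are assumptions, and in the actual method each newly merged token's embedding is produced by the pre-trained hypernetwork rather than being looked up.

```python
from collections import Counter

def merge_batch(batch, num_merges=10):
    """Sketch of dynamic tokenization: repeatedly merge the most
    frequent adjacent subword pair across all sequences in a batch,
    BPE-style, but at inference time rather than during tokenizer
    training. Hyperparameters here are illustrative assumptions."""
    batch = [list(seq) for seq in batch]
    for _ in range(num_merges):
        # Count adjacent subword pairs across the whole batch.
        pairs = Counter()
        for seq in batch:
            pairs.update(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), freq = pairs.most_common(1)[0]
        if freq < 2:  # assumed threshold: only merge repeating pairs
            break
        # In the paper, the embedding of this new token would be
        # predicted on the fly by the hypernetwork.
        merged = a + b
        new_batch = []
        for seq in batch:
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                    out.append(merged)
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            new_batch.append(out)
        batch = new_batch
    return batch

# Example: the pair ("token", "ization") repeats across the batch,
# so it is merged batch-wide, shortening every sequence.
batch = [["dynamic", "token", "ization"], ["static", "token", "ization"]]
print(merge_batch(batch))
```

Because merges are chosen per batch rather than fixed at training time, frequent sequences in any language can be compressed, which is what drives the reported sequence-length reductions.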

Submission history

From: Darius Feher [view email]
[v1]
Wed, 27 Nov 2024 17:51:58 UTC (10,104 KB)
[v2]
Sat, 14 Dec 2024 23:43:54 UTC (9,170 KB)
[v3]
Wed, 11 Jun 2025 13:08:24 UTC (9,174 KB)

