[2403.17196] Text Understanding in GPT-4 vs Humans
[Submitted on 25 Mar 2024 (v1), last revised 20 Dec 2024 (this version, v3)] View a PDF of the ...
A Rate-Distortion Framework for Black-Box Language Models
[Submitted on 22 Jul 2024 (v1), last revised 11 Dec 2024 (this version, v2)] View a PDF of the ...
Benchmarking and Enhancing Multimodal Models on Visual Illusions
[Submitted on 11 Dec 2024] View a PDF of the paper titled Illusory VQA: Benchmarking and Enhancing Multimodal Models ...
Simulating Legislative System for Roll Call Votes Prediction with Large Language Models
Extreme Context Compression for Retrieval-augmented Generation with One Token
[Submitted on 22 May 2024 (v1), last revised 9 Dec 2024 (this version, v2)] View a PDF of the ...
[2404.02657] Rethinking Kullback-Leibler Divergence in Knowledge Distillation for Large Language Models
[Submitted on 3 Apr 2024 (v1), last revised 8 Dec 2024 (this version, v4)] View a PDF of the ...
Benchmarking Open-ended Audio Dialogue Understanding for Large Audio-Language Models
arXiv:2412.05167v1 Announce Type: cross Abstract: Large Audio-Language Models (LALMs) have unlocked audio dialogue capabilities, where audio dialogues are a ...
[2412.04787] Direct Quantized Training of Language Models with Stochastic Rounding
Democratized LLM Scaling for A Large Model Zoo in the Wild
[Submitted on 7 Oct 2024 (v1), last revised 5 Dec 2024 (this version, v2)] Authors: Xinyu Zhao, Guoheng Sun, Ruisi ...
[2407.02820] Investigating the Contextualised Word Embedding Dimensions Specified for Contextual and Temporal Semantic Changes
[Submitted on 3 Jul 2024 (v1), last revised 3 Dec 2024 (this version, v2)] View a PDF of the ...