PIN: A Knowledge-Intensive Dataset for Paired and Interleaved Multimodal Documents
Junjie Wang and 21 other authors
Abstract: Recent advancements in large multimodal models (LMMs) have leveraged extensive multimodal datasets to enhance capabilities in complex knowledge-driven tasks. However, persistent perceptual and reasoning errors limit their efficacy, particularly in interpreting intricate visual data and deducing multimodal relationships. To address these issues, we introduce PIN (Paired and INterleaved multimodal documents), a novel data format designed to foster a deeper integration of visual and textual knowledge. The PIN format uniquely combines semantically rich Markdown files, which preserve fine-grained textual structures, with holistic overall images that capture the complete document layout. Following this format, we construct and release two large-scale, open-source datasets: PIN-200M (~200 million documents) and PIN-14M (~14 million documents), compiled from diverse web and scientific sources in both English and Chinese. To maximize usability, we provide detailed statistical analyses and equip the datasets with quality signals, enabling researchers to easily filter and select data for specific tasks. Our work provides the community with a versatile data format and substantial resources, offering a foundation for new research in pre-training strategies and the development of more powerful knowledge-intensive LMMs.
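As a rough illustration of the pairing and quality-signal filtering described in the abstract, the following Python sketch models a PIN-style record (Markdown text paired with an overall document image) and selects a high-quality subset. All field names here (markdown, overall_image, quality, and so on) are illustrative assumptions, not the datasets' actual schema.

```python
from dataclasses import dataclass

@dataclass
class PINDocument:
    """One hypothetical record in the PIN format: paired Markdown and an overall image."""
    markdown: str          # semantically rich Markdown preserving fine-grained text structure
    overall_image: bytes   # rendered image capturing the complete document layout
    source: str            # e.g. "web" or "scientific" (assumed labels)
    language: str          # "en" or "zh"
    quality: float         # assumed scalar quality signal in [0, 1]

def filter_by_quality(docs, min_quality=0.8, language=None):
    """Keep documents whose quality signal clears a threshold, optionally by language."""
    return [
        d for d in docs
        if d.quality >= min_quality and (language is None or d.language == language)
    ]

# Example: keep high-quality English documents for a pre-training subset.
corpus = [
    PINDocument("# Title\nBody text...", b"...", "web", "en", 0.92),
    PINDocument("# 标题\n正文...", b"...", "scientific", "zh", 0.75),
]
subset = filter_by_quality(corpus, min_quality=0.8, language="en")
print(len(subset))  # -> 1
```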
Submission history
From: Junjie Wang
[v1] Thu, 20 Jun 2024 01:43:08 UTC (1,463 KB)
[v2] Thu, 4 Sep 2025 10:10:23 UTC (3,251 KB)
[v3] Tue, 9 Sep 2025 04:55:02 UTC (3,265 KB)