EgoOops: A Dataset for Mistake Action Detection from Egocentric Videos referring to Procedural Texts


By Yuto Haneji and 9 other authors

Abstract: Mistake action detection is crucial for developing intelligent archives that detect workers' errors and provide feedback. Existing studies have focused on visually apparent mistakes in free-style activities, resulting in video-only approaches to mistake detection. However, in text-following activities, models cannot determine the correctness of some actions without referring to the texts. Additionally, current mistake datasets rarely use procedural texts for video recording except for cooking. To fill these gaps, this paper proposes the EgoOops dataset, where egocentric videos record erroneous activities when following procedural texts across diverse domains. It features three types of annotations: video-text alignment, mistake labels, and descriptions for mistakes. We also propose a mistake detection approach, combining video-text alignment and mistake label classification to leverage the texts. Our experimental results show that incorporating procedural texts is essential for mistake detection. Data is available through this https URL.
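The abstract describes a two-stage approach: first align video clips to steps in the procedural text, then classify mistake labels using both the visual and textual signals. The following is a minimal sketch of that pipeline shape, not the authors' implementation; the cosine-similarity alignment, feature fusion, and linear classifier (`w`, `b`) are all illustrative assumptions.

```python
import numpy as np


def align_clips_to_steps(clip_feats: np.ndarray, step_feats: np.ndarray) -> np.ndarray:
    """Assign each video clip to its most similar procedural-text step.

    Illustrative alignment via cosine similarity between clip embeddings
    (n_clips x d) and step-text embeddings (n_steps x d).
    """
    c = clip_feats / np.linalg.norm(clip_feats, axis=1, keepdims=True)
    s = step_feats / np.linalg.norm(step_feats, axis=1, keepdims=True)
    sim = c @ s.T  # (n_clips, n_steps) similarity matrix
    return sim.argmax(axis=1)  # index of the best-matching step per clip


def classify_mistakes(clip_feats: np.ndarray, step_feats: np.ndarray,
                      assignment: np.ndarray, w: np.ndarray, b: float) -> np.ndarray:
    """Predict a binary mistake label per clip from fused clip+step features.

    Hypothetical linear classifier: concatenating the clip embedding with its
    aligned step embedding is what lets the model use the text, per the
    abstract's claim that video alone is insufficient.
    """
    fused = np.concatenate([clip_feats, step_feats[assignment]], axis=1)
    logits = fused @ w + b
    return (logits > 0).astype(int)  # 1 = mistake, 0 = correct action
```

In practice the embeddings would come from pretrained video and text encoders, and the classifier would be learned from the dataset's mistake labels; this sketch only shows how the alignment output feeds the classification stage.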

Submission history

From: Yuto Haneji [view email]
[v1]
Mon, 7 Oct 2024 07:19:50 UTC (9,294 KB)
[v2]
Tue, 11 Feb 2025 07:17:37 UTC (10,341 KB)
[v3]
Thu, 31 Jul 2025 01:32:29 UTC (8,598 KB)
