Deblurring in the Wild: A Real-World Dataset from Smartphone High-Speed Videos

Under review

This paper introduces a large-scale, real-world dataset for image deblurring, constructed from 240 fps slow-motion videos captured with consumer smartphones. The authors generate blurry images by averaging consecutive frames over time, simulating realistic long-exposure blur, and use the center frame as the sharp ground truth. The resulting dataset contains more than 42,000 blur–sharp image pairs with high-resolution, diverse content from both indoor and outdoor scenes, covering a broad range of camera and object motion patterns. It is roughly 10× larger and more diverse than previous real-world deblurring datasets. The authors benchmark several state-of-the-art (SOTA) deblurring models on the dataset and find that most perform significantly worse than on synthetic benchmarks, highlighting the difficulty and complexity of real-world blur. The dataset aims to drive progress toward more robust and generalizable deblurring models.
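The frame-averaging construction described above can be sketched as follows. This is an illustrative reimplementation, not the authors' actual pipeline: the function name, frame count, and data are hypothetical, and real pipelines often average in linear intensity space (after inverse gamma correction), a detail omitted here for brevity.

```python
import numpy as np

def make_blur_pair(frames):
    """Given an odd-length stack of consecutive video frames with
    shape (T, H, W, C) and values in [0, 1], return (blurry, sharp):
    the temporal average simulates a long exposure, and the center
    frame serves as the sharp ground truth."""
    frames = np.asarray(frames, dtype=np.float64)
    assert frames.shape[0] % 2 == 1, "use an odd number of frames"
    blurry = frames.mean(axis=0)          # simulated long exposure
    sharp = frames[frames.shape[0] // 2]  # center frame as ground truth
    return blurry, sharp

# Example: 7 frames from a 240 fps clip simulate a ~1/34 s exposure.
rng = np.random.default_rng(0)
clip = rng.random((7, 4, 4, 3))  # toy stand-in for real video frames
blurry, sharp = make_blur_pair(clip)
```

Averaging more frames corresponds to a longer simulated exposure and therefore stronger blur, which is one way such datasets can control blur severity.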