Datasets from the Sensors group

All Sensors Group datasets, unless otherwise noted, are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

DAVIS camera sample data

Brief description: Various DAVIS camera sample data files to play with

Citation: T. Delbruck, "DAVIS24: DAVIS Event Camera Sample Data".

Dynamic Vision Sensor Disdrometer 2022


Brief Description: Data and code for measuring raindrop size and speed with DVS event camera

Citation: Micev, Kire, Jan Steiner, Asude Aydin, Jörg Rieckermann, and Tobi Delbruck. 2024. “Measuring Diameters and Velocities of Artificial Raindrops with a Neuromorphic Event Camera.” Atmospheric Measurement Techniques. doi:10.5194/amt-17-335-2024.

Denoising Dynamic Vision Sensors 2021


Contributors: Shasha Guo, Tobi Delbruck

Brief Description: Data and code for denoising background activity.

Citation: S. Guo and T. Delbruck, “Low Cost and Latency Event Camera Background Activity Denoising,” IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2022.

Event Driven Optical Flow Camera

Contributors: Min Liu, Tobi Delbruck

Brief Description: Data and code for benchmarking DVS optical flow.

Citation: Liu, Min, and Tobi Delbruck. 2022. “EDFLOW: Event Driven Optical Flow Camera with Keypoint Detection and Adaptive Block Matching.” IEEE Transactions on Circuits and Systems for Video Technology.

Video: EDFLOW hardware and test setup

MVSEC nighttime driving labeled cars


Contributors: Yuhuang Hu

Brief Description: Labeled nighttime driving cars from MVSEC

Citation: Y. Hu, S. C. Liu, and T. Delbruck, “v2e: From Video Frames to Realistic DVS Event Camera Streams,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2021.


A dataset for multi-channel, multi-device speech separation and speech enhancement


Contributors: E. Ceolini, I. Kiselev, S. Liu

Brief Description: Recordings from an ad-hoc wireless acoustic network using four modules of the WHISPER platform.

Citation: Ceolini, Enea, Ilya Kiselev, and Shih-Chii Liu. 2020. “Evaluating Multi-Channel Multi-Device Speech Separation Algorithms in the Wild: A Hardware-Software Solution.” *IEEE/ACM Transactions on Audio, Speech, and Language Processing* 28: 1428–1439. doi:10.1109/TASLP.2020.2989545. 

DAVIS Driving Dataset 2020

Brief Description: Dataset contains recordings from a DAVIS346 camera in driving scenarios, primarily on highways, along with ground-truth car data such as speed, steering, GPS, etc. Compared with DDD17, an additional 41 h of DAVIS driving data has been collected and organized. It includes mountain, highway, and freeway driving, day and night, including difficult glare conditions. See the DDD20 website.

DAVIS Driving Dataset 2017


Contributors: J. Binas, D. Neil, S-C. Liu, and T. Delbruck 

Citation: “DDD17: End-To-End DAVIS Driving Dataset,” J. Binas, D. Neil, S-C. Liu, and T. Delbruck, in ICML’17 Workshop on Machine Learning for Autonomous Vehicles, Sydney, Australia, 2017.

See the DDD17 dataset.

DVS Human Pose Estimation


Contributors: S. Skriabine, G. Taverni, F. Corradi, L. Longinotti, K. Eng, and T. Delbruck; Chris Schmidt and Marc Bolliger, Balgrist University Hospital, Zurich, Switzerland

Brief Description: Dataset contains synchronized recordings from four DAVIS346 cameras with Vicon marker ground truth from 17 subjects performing repeated motions.

Citation: Calabrese, E.*, Taverni, G.*, Easthope, C., Skriabine, S., Corradi, F., Longinotti, L., Eng, K., and Delbruck, T. "DHP19: Dynamic Vision Sensor 3D Human Pose Dataset." CVPR Workshop on Event-based Vision and Smart Cameras, Long Beach, CA, USA, 2019.


Dynamic Audio Sensor


Contributors: S-C. Liu, T. Delbruck

Brief Description: Recordings of complete TIDIGITS audio dataset from DAS1 binaural 64x2 channel silicon cochlea.


Citation: "Feature representation for neuromorphic spike streams," J. Anumula, D. Neil, T. Delbruck, and S-C. Liu, Frontiers in Neuroscience, 2018.

VISUALISE Predator/Prey Dataset

(was PRED16)


Contributors: D. P. Moeys and T. Delbruck

Brief Description: Dataset contains recordings from a DAVIS240 camera mounted on a computer-controlled robot (the predator) that chases and attempts to capture another human-controlled robot (the prey).

Citation: "Steering a predator robot using a mixed frame/event-driven convolutional neural network," D. Moeys and T. Delbruck, EBCCSP, 2016.

RoShamBo Rock Scissors Paper game DVS dataset


Brief description: Dataset is recorded from ~20 persons, each showing the rock, scissors, and paper symbols for about 2 minutes each, with a variety of poses, distances, positions, and left/right hands. Data is also included for background consisting of sensor noise, bodies, the room, etc. Altogether, 5M 64x64 DVS images of constant event count (0.5k, 1k, or 2k events), with left-right flipping augmentation, are included.

Citation: "Live Demonstration: Convolutional Neural Network Driven by Dynamic Vision Sensor Playing RoShamBo," I-A. Lungu, F. Corradi, and T. Delbruck, in 2017 IEEE International Symposium on Circuits and Systems (ISCAS 2017), Baltimore, MD, USA, 2017.
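The constant-event-count frames described above can be sketched as follows; this is an illustrative reconstruction, not the dataset's actual tooling, and the function name and parameters are assumptions:

```python
import numpy as np

def constant_count_frames(xs, ys, n_events=2048, size=64):
    """Slice an event stream into 2D histogram frames containing exactly
    n_events events each. xs, ys are per-event pixel coordinates already
    scaled into a size x size grid; events arriving after the last full
    slice are dropped. (Illustrative sketch, not the dataset's code.)"""
    frames = []
    for start in range(0, len(xs) - n_events + 1, n_events):
        frame = np.zeros((size, size), dtype=np.uint16)
        # ufunc.at gives unbuffered accumulation, so repeated hits on the
        # same pixel are all counted
        np.add.at(frame, (ys[start:start + n_events],
                          xs[start:start + n_events]), 1)
        frames.append(frame)
    return frames
```

Because every frame holds the same number of events, frame rate adapts to scene activity rather than to a fixed clock.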

DVS Datasets for Object Tracking, Action Recognition and Object Recognition


Contributors: Y. Hu, H. Liu, M. Pfeiffer, and T. Delbruck 

Brief Description: Dataset contains recordings from DVS on multiple object datasets.

Citation: "DVS Benchmark Tracking Datasets for object tracking, action recognition, and object recognition," Y. Hu, H. Liu, M. Pfeiffer, and T. Delbruck, Frontiers in Neuroscience, 2016.

DVS/DAVIS Optical Flow Dataset


Contributors:  B. Rueckauer and T. Delbruck    

Brief Description: DVS optical flow dataset contains samples of a scene with boxes, moving sinusoidal gratings, and a rotating disk. The ground truth comes from the camera's IMU rate gyro.

Citation: "Evaluation of Algorithm for Normal Optical Flow from Dynamic Vision Sensors", B. Rueckauer and T. Delbruck, Frontiers in Neuroscience, 2015.
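For pure camera rotation, ground-truth image flow can be derived from the rate-gyro readings with the rotational term of the classic motion-field equations under a pinhole model. A minimal sketch, where the sign and axis conventions are assumptions to be matched to the camera, not the paper's exact definitions:

```python
def rotational_flow(x, y, omega, f):
    """Image-plane flow (px/s) at pixel offset (x, y) from the principal
    point, induced by camera angular rate omega = (wx, wy, wz) in rad/s,
    for a pinhole camera with focal length f in pixels. Pure-rotation
    term only; translation contributes no flow here by assumption."""
    wx, wy, wz = omega
    u = (x * y / f) * wx - (f + x * x / f) * wy + y * wz
    v = (f + y * y / f) * wx - (x * y / f) * wy - x * wz
    return u, v
```

For example, a roll about the optical axis (wz only) yields zero flow at the principal point and a circular flow field growing linearly with distance from it.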

DVS128 Dynamic Vision Sensor Silicon Retina Data


Contributors: T. Delbruck

Brief Description: DVS recordings from the DVS128 camera in multiple scenarios: juggling, sunglasses, driving, edges and patterns, walking from the lab, a spinning dot, 3 days of mouse activity, and cars on the 210 freeway in Pasadena.

Citation: Delbruck, T. (2008). Frame-free dynamic digital vision. in Proceedings of Intl. Symp. on Secure-Life Electronics (Tokyo, Japan: University of Tokyo), 21–26. 

Datasets from collaborators of the Sensors Group: