Event Sensors

Dynamic Vision Sensor (DVS) Event Camera

The first practical event camera, the dynamic vision sensor (DVS), was invented by Patrick Lichtsteiner and Tobi Delbruck. It is a silicon retina with an asynchronous output that encodes brightness changes. The sensor is widely used in the computer vision and robotics community; the paper below is the 4th most cited paper in the IEEE Journal of Solid-State Circuits over the past decade.
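
In software terms, each DVS pixel is often modeled as emitting an ON or OFF event whenever its log intensity changes by more than a contrast threshold since the last event it emitted. A minimal Python sketch of that model (the threshold value and input signal here are illustrative assumptions, not sensor parameters):

    import numpy as np

    def dvs_events(log_intensity, timestamps, threshold=0.2):
        # Simplified DVS pixel model: emit (time, polarity) events whenever
        # log intensity moves by more than `threshold` since the last event.
        # threshold=0.2 (~20% contrast) is an assumed, typical-scale value.
        events = []
        ref = log_intensity[0]          # memorized brightness at last event
        for t, x in zip(timestamps[1:], log_intensity[1:]):
            while abs(x - ref) > threshold:
                polarity = 1 if x > ref else -1     # ON (+1) or OFF (-1)
                events.append((t, polarity))
                ref += polarity * threshold         # update memorized level
        return events

    # Toy input: a pixel whose brightness oscillates
    t = np.linspace(0.0, 1.0, 1000)
    log_i = np.log(1.0 + 4.0 * np.sin(2 * np.pi * t) ** 2)
    print(dvs_events(log_i, t)[:5])     # first few (time, polarity) events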

The sensor is produced and distributed by inivation; detailed information can be found there and at the original siliconretina.ini.uzh.ch site. Software for processing the sensor output is available on our software page.

A key (but not the first) publication on this sensor is this 2008 IEEE J. Solid-State Circuits article:

The seminal DVS publication is this one from the 2005 IISW meeting:

PhD thesis:

Dynamic and Active Pixel Vision Sensor (DAVIS) hybrid vision sensor (HVS)

The Dynamic and Active Pixel Vision Sensor (DAVIS) combines active pixel sensor (APS) technology with the DVS temporal contrast pixel. The two streams of frames and events are output concurrently. That way, the DAVIS produces both conventional frames, which are the basis of all existing machine vision, and an event stream that allows quick responses with sparse data and high dynamic range.
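
As a rough illustration of what a consumer of the two concurrent streams sees, here is a hedged Python sketch; the type and field names (DvsEvent, ApsFrame, t_us) are assumptions for illustration, not the actual inivation or jAER data format:

    from dataclasses import dataclass
    import numpy as np

    # Illustrative containers only; the field names and layout are
    # assumptions, not the actual inivation/jAER data format.
    @dataclass
    class DvsEvent:
        t_us: int       # microsecond timestamp
        x: int
        y: int
        polarity: int   # +1 ON, -1 OFF

    @dataclass
    class ApsFrame:
        t_us: int             # timestamp of frame readout
        pixels: np.ndarray    # e.g. a 180x240 intensity image

    def merge_streams(events, frames):
        # Interleave the asynchronous events with the frames in
        # timestamp order, as a consumer of DAVIS output might do.
        return sorted(list(events) + list(frames), key=lambda p: p.t_us)

    evs = [DvsEvent(t_us=t, x=5, y=7, polarity=1) for t in (100, 300, 500)]
    frs = [ApsFrame(t_us=400, pixels=np.zeros((180, 240)))]
    print([type(p).__name__ for p in merge_streams(evs, frs)])
    # ['DvsEvent', 'DvsEvent', 'ApsFrame', 'DvsEvent']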

A key publication on the DAVIS describes the chip inside the inivation DAVIS240C camera (the first event camera sent to space, in 2021; Western Sydney University claimed it as their chip, but it was actually our SeeBetter DAVIS240 silicon):

On the same silicon wafer, we also designed the DAVIS346 (sold under that name by inivation). The key publication about the DAVIS346 is our report comparing the front-side-illuminated (FSI) and back-side-illuminated (BSI) versions of the chip:

Seminal publications:

PhD theses:


The concept of the DAVIS is illustrated in this video, which includes data recorded from the very first generation sensor (SBRET10) on the local Akademischer Tennis Club tennis courts by SC Liu, C Brandli and T Delbruck. The frame rate was purposely slowed to 6 Hz. Even at this low frame rate, the data rate from frames was about 3x the DVS event data rate.
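
The back-of-the-envelope arithmetic behind that comparison, in a short Python sketch; the 240x180 resolution, 2 bytes/pixel, and 8 bytes/event are assumed values for illustration, not measured figures from the recording:

    # Assumed figures for illustration: 240x180 pixels (like the later
    # DAVIS240), 2 bytes per pixel, 8 bytes per event.
    width, height = 240, 180
    frame_rate_hz = 6
    bytes_per_pixel = 2
    frame_Bps = width * height * bytes_per_pixel * frame_rate_hz   # ~518 kB/s

    bytes_per_event = 8
    # If frames carry ~3x the event data rate, the implied event rate is:
    event_rate = frame_Bps / (3 * bytes_per_event)                 # ~21.6 keps
    print(f"frames: {frame_Bps / 1e3:.0f} kB/s, "
          f"implied events: {event_rate / 1e3:.1f} keps")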

Color and hybrid vision sensors

After developing the first HVS with the DAVIS, we produced the first back-side-illuminated (BSI) event camera (BSIDAVIS) and were the first to add a color filter array (CFA) to give the DVS the capability to distinguish colors (RGBDAVIS). We also produced the first true APS+DVS hybrid (CDAVIS), now called an HVS-type sensor by industry.

BSIDAVIS - the first back-side-illuminated event camera

PhD theses:

RGBDAVIS - the first event camera with color filter array (CFA)

Our SeeBetter TowerJazz wafer run included splits with RGB color filters over our DAVIS designs. One of these is sold as the Color DAVIS346 by inivation. See the DAVIS section above for the key reference on the DAVIS346 (Taverni et al. 2018).
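
To see how a CFA lets software assign a color to each event: every event's (x, y) address lies under one filter of the mosaic, so events can be binned by color channel. A sketch assuming a standard 2x2 RGBG Bayer pattern (the actual RGBDAVIS mosaic may differ):

    # Map an event's pixel address to a color channel under an assumed
    # 2x2 RGBG Bayer mosaic (the real RGBDAVIS CFA layout may differ).
    BAYER = {(0, 0): "R", (0, 1): "G",
             (1, 0): "G", (1, 1): "B"}

    def event_color(x: int, y: int) -> str:
        return BAYER[(y % 2, x % 2)]

    addresses = [(10, 4), (11, 4), (10, 5), (11, 5)]   # (x, y) of events
    print([event_color(x, y) for x, y in addresses])   # ['R', 'G', 'G', 'B']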

We also published an experimental, higher-sensitivity DAVIS with RGB color filters:

PhD thesis:

CDAVIS - the first HVS (hybrid vision sensor) combining an event camera and a high-performance CMOS image sensor

PhD thesis:

AER-EAR (aka DAS)

The AER-EAR, developed mainly by Shih-Chii Liu in a collaboration that started with Andre van Schaik, is a neuromorphic audio sensor that encodes the frequencies of auditory input as asynchronous events in specific frequency channels. This binaural artificial ear is good at estimating the temporal difference between its two auditory inputs and can therefore be used, for example, for sound localization. Prototypes of the AER-EAR, since renamed DAS (for Dynamic Audio Sensor), are sold by inilabs.
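
A minimal sketch of sound localization from binaural event timestamps; the nearest-spike matching method and the 700 us ITD bound are generic illustrative choices, not the algorithm published for the AER-EAR:

    import numpy as np

    def estimate_itd(left_ts, right_ts, max_itd_s=700e-6):
        # Estimate the interaural time difference from two sorted arrays of
        # spike times (seconds) from one frequency channel. For each left
        # spike, find the nearest right spike; the median time difference
        # approximates the ITD. The 700 us bound is an assumed head-size limit.
        left_ts, right_ts = np.asarray(left_ts), np.asarray(right_ts)
        idx = np.clip(np.searchsorted(right_ts, left_ts), 1, len(right_ts) - 1)
        lo, hi = right_ts[idx - 1], right_ts[idx]
        nearest = np.where(left_ts - lo < hi - left_ts, lo, hi)
        d = left_ts - nearest
        d = d[np.abs(d) <= max_itd_s]    # discard implausible pairings
        return np.median(d) if d.size else float("nan")

    # Toy example: the right ear leads the left by 200 us
    left = np.arange(0.0, 0.01, 0.001)
    print(estimate_itd(left, left - 200e-6))   # ~ 0.0002 s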

More information on this chip and the system around it can be found at aer-ear.ini.uzh.ch

Key publications on this sensor are:

A later sensor called COCHLP greatly improved the channel matching and increased the maximum achievable resonance quality factor (Q) while burning only a trickle of power from a 0.5 V supply. The COCHLP was developed mainly by M Yang and SC Liu. The COCHLP papers are:

PhD theses:

This video shows output from the AER-EAR2 sensor in response to speech and song. See this YouTube playlist for more cochlea videos.

Physiologist's Friend Chip

The Physiologist's Friend Chip is a neuromorphic analog VLSI chip that is a model of the early visual system. It has audible spiking cell responses to visual stimuli. You can use it in the lab or the lecture hall. In the lab it acts as a fake animal. You can use it to train students and to test data collection and analysis software. It sits in your toolbox like any other tool. In the lecture hall, you can use it with an overhead projector to do live demonstrations of how physiologists plot receptive fields. We have now open-sourced the complete design.
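
A toy sketch of the receptive-field plotting procedure the chip lets you demonstrate: present a spot stimulus at each grid position, count the spikes the chip emits, and the resulting rate map outlines the receptive field. The grid size and the stand-in cell below are purely illustrative:

    import numpy as np

    def map_receptive_field(spike_count_at, size=8):
        # Present a stimulus at each grid position and record the spike
        # count; the resulting rate map outlines the receptive field.
        rf = np.zeros((size, size))
        for iy in range(size):
            for ix in range(size):
                rf[iy, ix] = spike_count_at(ix, iy)
        return rf

    # Stand-in for the chip: a fake ON-center cell centered at (4, 3)
    fake_cell = lambda x, y: max(0, 10 - 3 * ((x - 4) ** 2 + (y - 3) ** 2))
    print(map_receptive_field(fake_cell))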

More information on this chip and the system around it can be found at www.ini.uzh.ch/~tobi/friend

Key publications on this sensor are: