Event Sensors
Event Camera and Audio Sensors
Basic research in neuromorphic sensors led to the invention and development of event cameras and silicon cochleas that output an asynchronous, variable data-rate stream of events; these events signify the location and time of some significant activity.
Dynamic Vision Sensor (DVS) Event Camera
The first practical event camera is the dynamic vision sensor (DVS), invented by Patrick Lichtsteiner and Tobi Delbruck. It is a silicon retina with an asynchronous output that encodes brightness changes. The sensor is widely used in the computer vision and robotics communities; the 2008 paper below is the 4th most cited paper in the IEEE Journal of Solid-State Circuits over the past decade.
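The DVS pixel principle can be sketched in a few lines: each pixel remembers the log intensity at its last event and emits an ON or OFF event whenever the log intensity changes by more than a contrast threshold. The following is a minimal simulation sketch of that principle, not the actual pixel circuit; the function name and threshold value are illustrative.

```python
import math

def dvs_events(frames, timestamps, threshold=0.15):
    """Simulate DVS output from a sequence of intensity frames.

    Each pixel stores the log intensity at its last event; when the
    current log intensity differs by more than `threshold`, it emits
    an (x, y, t, polarity) event and updates its stored value.
    """
    height, width = len(frames[0]), len(frames[0][0])
    memory = [[math.log(frames[0][y][x] + 1e-6) for x in range(width)]
              for y in range(height)]
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        for y in range(height):
            for x in range(width):
                log_i = math.log(frame[y][x] + 1e-6)
                diff = log_i - memory[y][x]
                # A large brightness change produces several events,
                # one per threshold crossing (ON = +1, OFF = -1).
                while abs(diff) >= threshold:
                    polarity = 1 if diff > 0 else -1
                    events.append((x, y, t, polarity))
                    memory[y][x] += polarity * threshold
                    diff = log_i - memory[y][x]
    return events
```

Note that a static scene produces no events at all, which is the source of the sensor's sparse, activity-driven output.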
The sensor is produced and distributed by iniVation; detailed information can be found there and at the original siliconretina.ini.uzh.ch site. Software for processing the sensor output is available on our software page.
A key (but not the first) publication on this sensor is this 2008 IEEE J. Solid-State Circuits article:
P. Lichtsteiner, C. Posch, and T. Delbruck, “A 128x128 120 dB 15 µs latency asynchronous temporal contrast vision sensor,” IEEE J. Solid-State Circuits, vol. 43, no. 2, pp. 566–576, 2008.
The seminal DVS publication is this one from the 2005 IISW meeting:
P. Lichtsteiner and T. Delbruck, “64x64 Event-Driven Logarithmic Temporal Derivative Silicon Retina,” in 2005 IEEE Workshop on Charge-Coupled Devices and Advanced Image Sensors, Nagano, Japan, 2005, pp. 157–160.
PhD thesis:
P. Lichtsteiner, “An AER temporal contrast vision sensor,” 2006, Available: https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/149539/eth-29103-01.pdf
Dynamic and Active Pixel Vision Sensor (DAVIS), a hybrid vision sensor (HVS)
The Dynamic and Active Pixel Vision Sensor (DAVIS) combines active pixel sensor (APS) technology with the DVS temporal contrast pixel, outputting the two streams of frames and events concurrently. The DAVIS thus produces the conventional frames that are the basis of all existing machine vision, together with an event stream that allows quick responses with sparse data and high dynamic range.
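One common way to consume the two concurrent DAVIS streams is to merge frames and events into a single time-ordered stream. The sketch below assumes both streams arrive already sorted by timestamp; the record layouts are illustrative, not an actual camera API.

```python
import heapq

def merge_davis_streams(frames, events):
    """Merge DAVIS frame and event streams into one time-ordered stream.

    frames: sorted list of (timestamp, frame_data) pairs.
    events: sorted list of (timestamp, x, y, polarity) tuples.
    Returns a list of (timestamp, kind, payload) records.
    """
    f = ((t, 'frame', data) for t, data in frames)
    e = ((t, 'event', (x, y, p)) for t, x, y, p in events)
    # heapq.merge does a lazy k-way merge of already-sorted inputs,
    # so the combined stream can be processed incrementally.
    return list(heapq.merge(f, e, key=lambda rec: rec[0]))
```

A downstream algorithm can then, for example, use each frame to initialize a tracker and the intervening events to update it at low latency.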
A key publication on the DAVIS describes the chip inside the iniVation DAVIS240C camera (the first event camera sent to space, in 2021; Western Sydney University claimed it as their chip, but it was actually our SeeBetter DAVIS240 silicon):
C. Brandli, R. Berner, M. Yang, S.-C. Liu, and T. Delbruck, “A 240x180 130 dB 3 µs Latency Global Shutter Spatiotemporal Vision Sensor,” IEEE J. Solid-State Circuits, vol. 49, no. 10, pp. 2333–2341, 2014.
On the same silicon wafer, we also designed the DAVIS346 (sold under that name by iniVation). The key publication on the DAVIS346 is our report comparing front-side-illuminated (FSI) and back-side-illuminated (BSI) versions:
G. Taverni, D. P. Moeys, C. Li, C. Cavaco, V. Motsnyi, D. San Segundo Bello, and T. Delbruck, “Front and Back Illuminated Dynamic and Active Pixel Vision Sensors Comparison,” IEEE Trans. Circuits Syst. II: Express Briefs, vol. 65, no. 5, pp. 677–681, May 2018, doi: 10.1109/TCSII.2018.2824899.
Seminal publications:
R. Berner, C. Brandli, M. Yang, S.-C. Liu, and T. Delbruck, “A 240x180 120 dB 10 mW 12 µs-latency Sparse Output Vision Sensor for Mobile Applications,” in Proceedings of the 2013 International Image Sensor Workshop, pp. 41–44, 2013.
R. Berner, C. Brandli, M. Yang, S.-C. Liu, and T. Delbruck, “A 240x180 120 dB 10 mW 12 µs-latency Sparse Output Vision Sensor for Mobile Applications,” in IEEE VLSI Symposium, pp. 186–187, 2013.
PhD theses:
R. Berner, “Building Blocks for Event-Based Sensors,” PhD, ETH Zurich, Zurich, Switzerland, 2011. Available: https://drive.google.com/open?id=0BzvXOhBHjRheUVpSSTV6alZleHM . [Accessed: Nov. 23, 2013]
C. P. Brändli, “Event-based machine vision,” ETH Zurich, 2015. doi: 10.3929/ETHZ-A-010402138. Available: https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/99543/1/eth-47528-01.pdf
Color and hybrid vision sensors
After developing the first HVS with the DAVIS, we produced the first back-side-illuminated (BSI) event camera (BSIDAVIS) and were the first to add a color filter array (CFA) to give the DVS the capability to distinguish colors (RGBDAVIS). We also produced the first true APS+DVS hybrid (CDAVIS), now called an HVS-type sensor by industry.
BSIDAVIS - the first back side illuminated event camera
G. Taverni et al., “Front and Back Illuminated Dynamic and Active Pixel Vision Sensors Comparison,” IEEE Trans. Circuits Syst. Express Briefs, vol. 65, no. 5, pp. 677–681, May 2018, doi: 10.1109/TCSII.2018.2824899. Available: http://dx.doi.org/10.1109/TCSII.2018.2824899
PhD thesis:
G. Taverni, “Applications of Silicon Retinas: from Neuroscience to Computer Vision,” PhD, University of Zurich, 2020. Available: https://scholar.archive.org/work/7yxrfiazajcdrhdshnenbujiby/access/wayback/https://www.zora.uzh.ch/id/eprint/217905/1/Thesis_Gemma.pdf . [Accessed: Jan. 31, 2024]
RGBDAVIS - the first event camera with color filter array (CFA)
Our SeeBetter TowerJazz wafer run included splits with RGB color filters over our DAVIS designs. One of these is sold as the Color DAVIS346 by iniVation. See the DAVIS section above for the key reference on the DAVIS346 (Taverni et al. 2018).
We also published an experimental, higher-sensitivity DAVIS with RGB color filters:
D. P. Moeys et al., “A Sensitive Dynamic and Active Pixel Vision Sensor for Color or Neural Imaging Applications,” IEEE Trans. Biomed. Circuits Syst., vol. 12, no. 1, pp. 123–136, Feb. 2018, doi: 10.1109/TBCAS.2017.2759783. Available: http://dx.doi.org/10.1109/TBCAS.2017.2759783
D. P. Moeys et al., “Color temporal contrast sensitivity in dynamic vision sensors,” in 2017 IEEE International Symposium on Circuits and Systems (ISCAS), Baltimore, MD, USA: IEEE, May 2017. doi: 10.1109/iscas.2017.8050412. Available: https://ieeexplore.ieee.org/document/8050412/
PhD thesis:
D. P. Moeys, “Analog and digital implementations of retinal processing for robot navigation systems,” PhD, ETH Zurich, Dept. of Electrical and Information Engineering (D-ITET), Zurich, Switzerland, 2017. Available: https://drive.google.com/open?id=0BzvXOhBHjRheeVlaam1ibVJqR1k
CDAVIS - the first HVS (hybrid vision sensor) dual event camera and high-performance CMOS image sensor
C. Li et al., “An RGBW color VGA rolling and global shutter dynamic and active-pixel vision sensor,” in 2015 International Image Sensor Workshop, Available: https://www.imagesensors.org/Past%20Workshops/2015%20Workshop/2015%20Papers/Sessions/Session_13/13-05_Li_Delbruck.pdf . [Accessed: Dec. 10, 2023]
C. Li et al., “Design of an RGBW color VGA rolling and global shutter dynamic and active-pixel vision sensor,” in 2015 IEEE International Symposium on Circuits and Systems (ISCAS), Vaals, Netherlands: imagesensors.org, May 2015, pp. 718–721. doi: 10.1109/ISCAS.2015.7168734. Available: http://dx.doi.org/10.1109/ISCAS.2015.7168734
PhD thesis:
C. Li, “Two-Stream Vision Sensors,” PhD, ETH Zurich, Dept. of Electrical and Information Engineering (D-ITET), Zurich, Switzerland, 2017. doi: 10.3929/ethz-b-000164862. Available: http://dx.doi.org/10.3929/ethz-b-000164862 . [Accessed: May 01, 2021]
AER-EAR (aka DAS)
The AER-EAR, developed mainly by Shih-Chii Liu in a collaboration that started with Andre van Schaik, is a neuromorphic audio sensor that encodes the frequency content of auditory input as asynchronous events in specific frequency channels. This binaural artificial ear is good at estimating the temporal difference between two auditory inputs and can therefore be used, for example, for sound localization. Prototypes of the AER-EAR, renamed the DAS (Dynamic Audio Sensor), are sold by inilabs.
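As a toy illustration of binaural temporal-difference estimation, the sketch below pairs each left-ear event with the nearest right-ear event from the same frequency channel and takes the median signed time difference as the interaural time difference (ITD). This is a hypothetical post-processing step on the event timestamps, not part of the sensor itself.

```python
import statistics

def estimate_itd(left_times, right_times):
    """Estimate the interaural time difference from binaural cochlea events.

    left_times, right_times: event timestamps (seconds) from one frequency
    channel of each ear. For each left-ear event, the nearest right-ear
    event supplies a signed time difference; the median of these is a
    robust ITD estimate. A positive ITD means the sound reached the
    left ear first.
    """
    diffs = [min(right_times, key=lambda t: abs(t - tl)) - tl
             for tl in left_times]
    return statistics.median(diffs)
```

For example, right-ear events arriving 200 µs after the left-ear events yield an ITD of +200 µs, consistent with a sound source toward the left.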
More information on this chip and the system around it can be found at aer-ear.ini.uzh.ch
Key publications on this sensor are:
V. Chan, S.-C. Liu, and A. van Schaik, “AER EAR: A Matched Silicon Cochlea Pair With Address Event Representation Interface,” IEEE Trans. Circuits and Systems I: Regular Papers, vol. 54, no. 1, pp. 48–59, 2007.
S.-C. Liu, A. van Schaik, B. A. Minch, and T. Delbruck, “Asynchronous Binaural Spatial Audition Sensor with 2x64x4 Channel Output,” IEEE Trans. Biomedical Circuits and Systems, vol. 8, no. 4, pp. 453–464, 2014.
A later sensor called COCHLP greatly improved the channel matching and increased the maximum possible resonance quality factor (Q) while burning only a trickle of power from a 0.5 V supply. The COCHLP was developed mainly by M. Yang and S.-C. Liu. The key COCHLP publication is:
M. Yang, C.-H. Chien, T. Delbruck, and S.-C. Liu, “A 0.5 V 55 µW 64x2-channel binaural silicon cochlea for event-driven stereo-audio sensing,” in 2016 IEEE International Solid-State Circuits Conference (ISSCC), 2016, pp. 388–389.
PhD thesis:
M. Yang, “Silicon retina and cochlea with asynchronous delta modulator for spike encoding,” ETH Zurich, 2015. Available: https://www.research-collection.ethz.ch/bitstream/handle/20.500.11850/115964/1/ETH23110.pdf
Physiologist's Friend Chip
The Physiologist's Friend Chip is a neuromorphic analog VLSI chip that models the early visual system, producing audible spiking cell responses to visual stimuli. You can use it in the lab or the lecture hall. In the lab it acts as a fake animal: you can use it to train students and to test data collection and analysis software, and it sits in your toolbox like any other tool. In the lecture hall, you can use it with an overhead projector for live demonstrations of how physiologists plot receptive fields. We have now open-sourced the complete design.
More information on this chip and the system around it can be found at www.ini.uzh.ch/~tobi/friend
The key publication on this chip is:
T. Delbrück and S.-C. Liu, “A silicon early visual system as a model animal,” Vision Res., vol. 44, no. 17, pp. 2083–2089, 2004, doi: 10.1016/j.visres.2004.03.021. Available: http://dx.doi.org/10.1016/j.visres.2004.03.021