Dextra
Dextra (Latin: right hand) beats humans at rock-paper-scissors.
It demonstrates fast, low-cost, activity-driven neuromorphic perception.
Dextra consists of a DVS event camera, a small convolutional neural network (CNN), and a fast tendon-driven robotic hand.
The key principle is activity-driven computing: the faster you move, the sooner Dextra computes its response.
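As a rough illustration of what activity-driven computing means in practice, here is a minimal Python sketch. It is not the actual dextra-roshambo-python implementation; the event-count threshold, the 64x64 frame size, and the cnn and hand interfaces are all assumptions. DVS events are binned into a 2D frame, and the CNN runs as soon as a fixed number of events has accumulated, so a fast-moving hand produces frames, and therefore responses, sooner.

import numpy as np

EVENTS_PER_FRAME = 2000   # assumed threshold; tune per sensor and scene
SENSOR_SHAPE = (64, 64)   # assumed downsampled DVS resolution

def activity_driven_frames(event_stream):
    """Yield a normalized 2D frame each time EVENTS_PER_FRAME events arrive.

    event_stream yields (timestamp_us, x, y, polarity) tuples from the DVS.
    """
    frame = np.zeros(SENSOR_SHAPE, dtype=np.float32)
    count = 0
    for t, x, y, p in event_stream:
        frame[y, x] += 1.0             # accumulate event counts per pixel
        count += 1
        if count >= EVENTS_PER_FRAME:
            yield frame / frame.max()  # normalize before the CNN
            frame.fill(0.0)
            count = 0

def play(event_stream, cnn, hand):
    """Classify each frame and immediately show the winning counter-symbol."""
    beats = {"rock": "paper", "paper": "scissors", "scissors": "rock"}
    for frame in activity_driven_frames(event_stream):
        symbol = cnn.predict(frame)    # hypothetical CNN wrapper
        if symbol in beats:
            hand.show(beats[symbol])   # hypothetical hand interface

Because the frame boundary is set by event count rather than a wall-clock timer, the effective frame rate, and hence the reaction latency, automatically scales with scene activity.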
Datasets and Code
The ROSHAMBO17 dataset is available on the Sensors Group Datasets page.
The dextra-roshambo-python code repository.
The Dextra hand design paper and hardware design.
Papers describing Dextra
I. Lungu, F. Corradi, and T. Delbrück, “Live demonstration: Convolutional neural network driven by dynamic vision sensor playing RoShamBo,” in 2017 IEEE International Symposium on Circuits and Systems (ISCAS), May 2017, doi: 10.1109/ISCAS.2017.8050403.
A. Aimar et al., “NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 3, pp. 644–656, Mar. 2019, doi: 10.1109/TNNLS.2018.2852335.
X. Deng, S. Weirich, R. K. Katzschmann, and T. Delbrück, “A Rapid and Robust Tendon-Driven Robotic Hand for Human-Robot Interactions Playing Rock-Paper-Scissors,” in 2024 IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Pasadena, CA, Aug. 2024.
History of development
We originally developed Dextra to demonstrate fast neuromorphic perception and inference during the NPP project, and gave the first public demonstration at NIPS 2016 in Barcelona. Since then, Dextra has evolved through many iterations: first to demonstrate the NullHop accelerator, then with an inexpensive laser-cut hand (which proved too fragile), and finally with the robust and very fast tendon-driven hand it uses today. Dextra was demonstrated in the ETH pavilion at the World Economic Forum in 2017 and has appeared at many public science events since.