Motion-sensitive neurons in the visual systems of many species, including humans, exhibit a depression of motion responses immediately after exposure to rapidly moving images. This motion adaptation has been extensively studied in flies, but a neuronal mechanism that explains the most prominent component of adaptation, which occurs regardless of the direction of motion of the visual stimulus, has yet to be proposed. We identify a neuronal mechanism, namely frequency-dependent synaptic depression, that explains a number of the features of adaptation in mammalian motion-sensitive neurons, and we use it to model fly motion adaptation. While synaptic depression has been studied mainly in spiking cells, we use the same principles to develop a simple model of depression in a graded synapse. By incorporating this synaptic model into a neuronally based model of elementary motion detection, along with a center-surround spatial band-pass filtering stage that mimics the interactions among a subset of visual neurons, we predict with remarkable success most of the qualitative features of adaptation observed in electrophysiological experiments. Our results support the idea that diverse species share common computational principles for processing visual motion and suggest that such principles could be neuronally implemented in very similar ways.
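Frequency-dependent depression of the kind described above can be sketched in a few lines. The resource-recovery form below (release fraction `u`, recovery time constant `tau_rec`) is a standard depressing-synapse formulation adapted to a graded presynaptic signal; the function name and parameter values are illustrative assumptions, not the model from the paper.

```python
import numpy as np

def depressing_graded_synapse(presyn, dt=1e-3, tau_rec=0.5, u=0.6):
    """Sketch of frequency-dependent depression at a graded synapse.

    `presyn` is a non-negative presynaptic activation trace.  A resource
    variable r (fraction of transmitter available) is consumed in
    proportion to activation and recovers toward 1 with time constant
    tau_rec, so sustained or rapidly varying drive depresses the
    transmitted signal.
    """
    r = 1.0
    out = np.empty(len(presyn), dtype=float)
    for i, a in enumerate(presyn):
        out[i] = r * a                         # transmitted (graded) signal
        dr = (1.0 - r) / tau_rec - u * a * r   # recovery minus consumption
        r = min(1.0, max(0.0, r + dt * dr))
    return out
```

With a sustained input, the output starts high and settles to a lower steady level, qualitatively matching the response depression the abstract describes.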
Taking inspiration from the visual system of the fly, we describe and characterize a monolithic analog very large-scale integration sensor, which produces control signals appropriate for the guidance of an autonomous robot to visually track a small moving target. This sensor is specifically designed to allow such tracking even from a moving imaging platform which experiences complex background optical flow patterns. Based on relative visual motion of the target and background, the computational model implemented by this sensor emphasizes any small-field motion which is inconsistent with the wide-field background motion. © 2004 IEEE.
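As a loose illustration of the small-field/wide-field comparison, the toy function below scores local motion by its disagreement with a background flow estimate (here simply the median of the flow field). The saliency form and names are illustrative assumptions, not the sensor's analog circuitry.

```python
import numpy as np

def small_field_saliency(flow, kappa=1.0):
    """Emphasize small-field motion inconsistent with wide-field flow.

    `flow` holds per-pixel horizontal velocities.  The wide-field
    background motion is estimated as the median (robust to a small
    target), and each pixel is scored by its deviation from it.
    """
    background = np.median(flow)
    return kappa * np.abs(flow - background)
```

A small patch moving against a uniformly translating background then stands out with a high score while the background scores near zero.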
The extent of pixel-parallel focal plane image processing is limited by pixel area and imager fill factor. In this paper, we describe a novel multi-chip neuromorphic VLSI visual motion processing system which combines analog circuitry with an asynchronous digital interchip communications protocol to allow more complex pixel-parallel motion processing than is possible in the focal plane. This multi-chip system retains the primary advantages of focal plane neuromorphic image processors: low power consumption, continuous-time operation, and small size. The two basic VLSI building blocks are a photosensitive sender chip which incorporates a 2D imager array and transmits the position of moving spatial edges, and a receiver chip which computes a 2D optical flow vector field from the edge information. The elementary two-chip motion processing system consisting of a single sender and receiver is first characterized. Subsequently, two three-chip motion processing systems are described. The first three-chip system uses two sender chips to compute the presence of motion only at a particular stereoscopic depth from the imagers. The second three-chip system uses two receivers to simultaneously compute a linear and polar topographic mapping of the image plane, resulting in information about image translation, rotation, and expansion. These three-chip systems demonstrate the modularity and flexibility of the multi-chip neuromorphic approach.
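The edge-to-flow computation can be illustrated with a time-of-travel estimate: an edge crossing one pixel and then its neighbor implies a speed of pixel pitch over transit time. This is only a conceptual sketch in discrete timestamps; the actual receiver chips operate with continuous-time analog circuits.

```python
def velocity_from_edge_times(t_a, t_b, spacing):
    """Time-of-travel speed estimate from edge events.

    A spatial edge crosses pixel A at time t_a and the adjacent pixel B
    at time t_b; with pixel pitch `spacing`, the implied velocity along
    the A->B axis is spacing / (t_b - t_a).  A negative result means
    motion from B toward A.
    """
    dt = t_b - t_a
    if dt == 0:
        raise ValueError("simultaneous events: velocity unresolvable")
    return spacing / dt
```

For example, a 1 mm pixel pitch and a 10 ms transit time give 0.1 m/s on the image plane.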
Using a neuronally based computational model of the fly's visual elementary motion detection (EMD) system, we modeled the effects of picrotoxin, a GABA receptor antagonist, to investigate the role of various GABAergic cells in direction selectivity. By comparing the results of simulating an anatomically correct model with previously published electrophysiological results, this study supports the hypothesis that the EMD outputs integrated by tangential cells are only weakly directional, even though the tangential cells themselves respond to moving stimuli in a strongly directional manner. © 2004 Published by Elsevier B.V.
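A minimal way to see how weakly directional inputs can yield a strongly directional output is opponent subtraction of two mirror-symmetric correlator arms; removing the subtractive arm is then a crude stand-in for blocking GABAergic inhibition with picrotoxin. Everything below, including the function names and the sinusoidal test stimulus, is an illustrative sketch rather than the anatomical model from the paper.

```python
import numpy as np

def emd_arm(x, y, d):
    # one correlator arm: delay x by d samples, multiply with y, average
    return float(np.mean(x[:-d] * y[d:]))

def tangential_response(a, b, d, picrotoxin=False):
    """Opponent subtraction of the two mirror-symmetric arms.

    With picrotoxin=True the inhibitory (null-direction) arm is
    removed, mimicking a blocked GABAergic input: the output stays
    excitatory for both motion directions, i.e. only weakly directional.
    """
    pref = emd_arm(a, b, d)
    null = emd_arm(b, a, d)
    return pref if picrotoxin else pref - null
```

With the intact opponent stage, motion in the two directions produces responses of opposite sign; with the inhibitory arm removed, both directions excite the cell and only a weak preference remains.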
Collision avoidance models derived from the study of insect brains do not perform universally well in practical collision scenarios, although the insects themselves may perform well in similar situations. In this article, we present a detailed simulation analysis of two well-known collision avoidance models and illustrate their limitations. In doing so, we present a novel continuous-time implementation of a neuronally based collision avoidance model. We then show that visual tracking can improve the performance of these models by allowing a relative computation of the distance between the obstacle and the observer. We compare the results of simulations of the two models with and without tracking to show how tracking improves the ability of the model to detect an imminent collision. We present an implementation of one of these models processing imagery from a camera to show how it performs in real-world scenarios. These results suggest that insects may track looming objects with their gaze. © The Author(s) 2012.
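One common way a tracked looming obstacle yields relative distance information is the tau relation: for an object on a direct approach, time to contact is approximately the ratio of its angular size to its expansion rate. The helper below is that textbook relation, not the specific neuronal model implemented in the article.

```python
def time_to_contact(theta, theta_dot):
    """Approximate time to contact (the "tau" of looming detection).

    theta is the obstacle's angular size (rad) and theta_dot its
    expansion rate (rad/s); tau ~ theta / theta_dot holds for small
    angular sizes on a direct collision course.
    """
    if theta_dot <= 0:
        return float("inf")  # not expanding: no imminent collision
    return theta / theta_dot
```

An object subtending 0.1 rad and expanding at 0.05 rad/s is thus roughly two seconds from contact, regardless of its absolute size or distance.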