Visual motion information provides a variety of cues that enable biological organisms, from insects to primates, to navigate efficiently in unstructured environments. We present modular mixed-signal very large-scale integration (VLSI) implementations of the three most prominent biological models of visual motion detection. A novel feature of these designs is the use of spike integration circuitry to implement the necessary temporal filtering. We show how such modular VLSI building blocks make it possible to build powerful and flexible vision systems. These three biomimetic motion algorithms are fully characterized and their performance compared. The visual motion detection models are each implemented on a separate VLSI chip but share a common silicon retina chip that transmits changes in contrast; thus four separate mixed-signal VLSI designs are described. Characterization results show that each sensor has a saturating response to the contrast of moving stimuli, and that the direction of motion of a sinusoidal grating can be detected down to less than 5% contrast and over more than an order of magnitude in velocity, while retaining modest power consumption. © 2005 IEEE.
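The spike integration circuitry mentioned above realizes temporal filtering by accumulating charge packets from incoming spikes with an exponential leak. A minimal software sketch of this idea, assuming an arbitrary time constant, time step, and unit charge per spike (none of these values come from the chips described), is:

```python
import math

def leaky_spike_integrator(spike_times, tau=0.05, dt=0.001, t_end=0.5):
    """Low-pass filter a spike train by exponential decay plus impulse increments.

    Illustrative sketch only: tau, dt, and the unit spike amplitude are
    arbitrary choices, not parameters of the VLSI circuits described above.
    """
    decay = math.exp(-dt / tau)
    spike_steps = {round(t / dt) for t in spike_times}
    out, y = [], 0.0
    for step in range(int(t_end / dt)):
        y *= decay              # exponential leak between spikes
        if step in spike_steps:
            y += 1.0            # each spike deposits a fixed charge packet
        out.append(y)
    return out
```

The output jumps at each spike and decays with time constant tau, which is the discrete-time analogue of a first-order low-pass filter.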
The Hassenstein-Reichardt (HR) correlation model is commonly used to model elementary motion detection in the fly. Recently, a neuronally based computational model was proposed which, unlike the HR model, is based on identified neurons. The response of both models increases as the square of contrast, although the response of insect neurons saturates at high contrasts. We introduce a saturating nonlinearity into the neuronally based model to produce contrast saturation and discuss the neuronal implications of this modification. Furthermore, we show that the modified model predicts features of the contrast sensitivity of movement-detecting neurons. © 2004 Elsevier B.V. All rights reserved.
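A discrete-time sketch of the HR correlator may clarify the structure being discussed: each half-detector correlates one input with a delayed copy of its neighbor, and the opponent subtraction yields a signed, direction-selective output. The tanh saturation applied to the inputs here is an illustrative stand-in (the abstract does not specify the form of the nonlinearity), but it reproduces the qualitative effect: the response grows as contrast squared at low contrast and saturates at high contrast.

```python
import math

def hr_correlator(left, right, delay=3, sat=math.tanh):
    """Hassenstein-Reichardt elementary motion detector (discrete-time sketch).

    `left` and `right` are samples from two neighboring photoreceptors.
    Each half-detector multiplies one saturated input by a delayed,
    saturated copy of the other; the opponent subtraction gives a signed
    output whose mean indicates the direction of motion. The delay and
    the tanh saturation are illustrative choices, not the paper's values.
    """
    out = []
    for t in range(delay, len(left)):
        term_preferred = sat(left[t - delay]) * sat(right[t])
        term_null = sat(right[t - delay]) * sat(left[t])
        out.append(term_preferred - term_null)
    return out
```

For a sinusoidal grating drifting from the left sensor toward the right, the time-averaged output is positive; swapping the inputs flips its sign.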
Tracking a target in a cluttered environment ordinarily requires an extensive computational architecture; yet even a small housefly is adept at pursuing its prey. Biomimetic algorithms suggest a novel way of looking at this problem. In the lobula plate of the fly's brain, a neural circuit has been hypothesized based on a tangential cell called the figure detection (FD) cell. The small-target fixation algorithm proposed from electrophysiological recordings does not take into account the translation of the pursuer during pursuit. We have modified the biological algorithm to include this aspect of tracking. In this paper, we present the elaborated biological algorithm for small-target tracking and an analog VLSI implementation of this algorithm.
Flies have the capability to visually track small moving targets, even across cluttered backgrounds. Previous computational models, based on figure detection (FD) cells identified in the fly, have suggested how this may be accomplished at a neuronal level based on information about relative motion between the target and the background. We experimented with the use of this "small-field system model" for the tracking of small moving targets by a simulated fly in a cluttered environment and discovered some functional limitations. As a result of these experiments, we propose elaborations of the original small-field system model to support stronger effects of background motion on small-field responses, proper accounting for more complex optical flow fields, and more direct guidance toward the target. We show that the elaborated model achieves much better tracking performance than the original model in complex visual environments and discuss the biological implications of our elaborations. The elaborated model may help to explain recent electrophysiological data on FD cells that seem to contradict the original model.
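The core idea shared by these FD-cell models, detection of a small target via its motion relative to the background, can be sketched in a few lines. The pooled wide-field motion estimate inhibits each local (small-field) motion signal, so a target moving differently from its surroundings produces the strongest response. The names and the simple linear subtractive inhibition below are assumptions for illustration, not the published model's exact form:

```python
def small_field_response(local_motion, inhibition_gain=1.0):
    """Relative-motion sketch inspired by the FD-cell (small-field) models above.

    `local_motion` holds one motion signal per retinotopic location. The
    pooled wide-field (background) estimate suppresses every small-field
    unit, so locations whose motion differs from the background dominate.
    The linear inhibition and unity gain are illustrative assumptions.
    """
    background = sum(local_motion) / len(local_motion)  # wide-field pooling
    return [m - inhibition_gain * background for m in local_motion]
```

With a stationary background and a single moving target, only the target's location yields a strongly positive response; if the whole field moves coherently (e.g., during self-motion), the responses are uniformly suppressed.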
We introduce a biologically inspired computational architecture for small-field detection and wide-field spatial integration of visual motion, based on general organizing principles of visual motion processing common to organisms from insects to primates. This highly parallel architecture begins with two-dimensional (2-D) image transduction and signal conditioning, performs small-field motion detection with a number of parallel motion arrays, and then spatially integrates the small-field motion units to synthesize units sensitive to complex wide-field patterns of visual motion. We present a theoretical analysis demonstrating the architecture's potential for discriminating wide-field motion patterns such as those that might be generated by self-motion. A custom VLSI hardware implementation of this architecture is also described, incorporating both analog and digital circuitry. The individual custom VLSI elements are analyzed and characterized, and system-level test results demonstrate the ability of the system to respond selectively to certain motion patterns, such as those that might be encountered in self-motion, to the exclusion of others. © 2002 IEEE.
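One common reading of the wide-field spatial-integration stage is a matched filter: the measured field of local motion vectors is scored against a template flow field, such as the expansion pattern generated by forward self-motion. The linear inner-product matching below is an illustrative sketch under that assumption, not a description of the chip's circuitry:

```python
def template_match(flow, template):
    """Score a measured 2-D motion field against a wide-field template.

    `flow` and `template` are lists of (vx, vy) vectors on the same grid;
    the matched-filter score is their inner product. This linear matching
    is an illustrative reading of the spatial-integration stage, not the
    VLSI implementation described above.
    """
    return sum(vx * tx + vy * ty for (vx, vy), (tx, ty) in zip(flow, template))

def expansion_template(grid):
    """Unit vectors radiating from the origin, as forward self-motion produces."""
    out = []
    for x, y in grid:
        r = (x * x + y * y) ** 0.5 or 1.0  # avoid division by zero at origin
        out.append((x / r, y / r))
    return out
```

An expanding flow field scores maximally against the expansion template, while a pure translation of the same magnitude scores near zero, giving the selectivity to certain motion patterns at the exclusion of others that the system-level tests demonstrate.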