Charles M. Higgins
Associate Professor, Applied Mathematics - GIDP
Associate Professor, BIO5 Institute
Associate Professor, Electrical and Computer Engineering
Associate Professor, Entomology / Insect Science - GIDP
Associate Professor, Neuroscience
Associate Professor, Neuroscience - GIDP
(520) 621-6604
Research Interest
Charles Higgins, PhD, is an Associate Professor in the Department of Neuroscience with a dual appointment in Electrical Engineering at the University of Arizona, where he leads the Higgins Lab. Though he started his career as an electrical engineer, his fascination with the natural world led him to study insect vision and visual processing while working to meld the worlds of robotics and biology. His research ranges from software simulations of brain circuits to interfacing live insect brains with robots, but his driving interest remains building truly intelligent machines.

Dr. Higgins' lab conducts research in areas ranging from computational neuroscience to biologically inspired engineering. The unifying goal of these projects is to understand the representations and computational architectures used by biological systems. The projects are conducted in close collaboration with neurobiology laboratories that perform anatomical, electrophysiological, and histological studies, mostly in insects.

More than three years ago he captured news headlines when he and his lab team demonstrated a robot they built that was guided by the brain and eyes of a moth. The moth, immobilized inside a plastic tube, was mounted on a 6-inch-tall wheeled robot. When the moth moved its eyes to the right, the robot turned in that direction, demonstrating brain-machine interaction. Although the demonstration was effective, the methodology made it difficult to keep the electrodes attached to the moth's brain while the robot was in motion, a problem that led him to focus his work on another insect species.

Publications

Higgins, C. M., & Shams, S. A. (2002). A biologically inspired modular VLSI system for visual measurement of self-motion. IEEE Sensors Journal, 2(6), 508-528.

Abstract:

We introduce a biologically inspired computational architecture for small-field detection and wide-field spatial integration of visual motion based on the general organizing principles of visual motion processing common to organisms from insects to primates. This highly parallel architecture begins with two-dimensional (2-D) image transduction and signal conditioning, performs small-field motion detection with a number of parallel motion arrays, and then spatially integrates the small-field motion units to synthesize units sensitive to complex wide-field patterns of visual motion. We present a theoretical analysis demonstrating the architecture's potential in discrimination of wide-field motion patterns such as those which might be generated by self-motion. A custom VLSI hardware implementation of this architecture is also described, incorporating both analog and digital circuitry. The individual custom VLSI elements are analyzed and characterized, and system-level test results demonstrate the ability of the system to selectively respond to certain motion patterns, such as those that might be encountered in self-motion, at the exclusion of others. © 2002 IEEE.
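The wide-field integration stage described in this abstract can be illustrated with a short sketch: local motion estimates are compared against flow-field templates (such as expansion versus rotation) by a weighted spatial sum. This is a minimal NumPy illustration of the general idea, not the paper's VLSI implementation; the grid size and template choices are assumptions.

```python
import numpy as np

def template_response(flow, template):
    """Wide-field unit: spatially integrate local motion against a template."""
    return np.sum(flow * template)

# Local motion vectors on an 8x8 grid, shape (8, 8, 2) holding (vx, vy).
ys, xs = np.mgrid[-3.5:4.5, -3.5:4.5]
expansion = np.stack([xs, ys], axis=-1)    # radial outflow (looming) template
rotation = np.stack([-ys, xs], axis=-1)    # rotational flow template
expansion /= np.linalg.norm(expansion)
rotation /= np.linalg.norm(rotation)

# An observed expanding flow field, as forward self-motion would generate:
# the expansion-tuned unit responds strongly, the rotation unit does not.
flow = np.stack([xs, ys], axis=-1)
print(template_response(flow, expansion) > template_response(flow, rotation))
```

The opponent comparison between template responses is what lets a wide-field unit respond selectively to one self-motion pattern at the exclusion of others.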

Dyhr, J. P., & Higgins, C. M. (2010). Non-directional motion detectors can be used to mimic optic flow dependent behaviors. Biological Cybernetics, 103(6), 433-446.

PMID: 21161268

Abstract:

Insect navigational behaviors including obstacle avoidance, grazing landings, and visual odometry are dependent on the ability to estimate flight speed based only on visual cues. In honeybees, this visual estimate of speed is largely independent of both the direction of motion and the spatial frequency content of the image. Electrophysiological recordings from the motion-sensitive cells believed to underlie these behaviors have long supported spatio-temporally tuned correlation-type models of visual motion detection whose speed tuning changes as the spatial frequency of a stimulus is varied. The result is an apparent conflict between behavioral experiments and the electrophysiological and modeling data. In this article, we demonstrate that conventional correlation-type models are sufficient to reproduce some of the speed-dependent behaviors observed in honeybees when square wave gratings are used, contrary to the theoretical predictions. However, these models fail to match the behavioral observations for sinusoidal stimuli. Instead, we show that non-directional motion detectors, which underlie the correlation-based computation of directional motion, can be used to mimic these same behaviors even when narrowband gratings are used. The existence of such non-directional motion detectors is supported both anatomically and electrophysiologically, and they have been hypothesized to be critical in the Dipteran elementary motion detector (EMD) circuit. © 2010 Springer-Verlag.
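The correlation-type detector discussed in this abstract (the Hassenstein-Reichardt model) can be sketched in a few lines of Python. This is a minimal illustrative version, assuming a first-order low-pass filter as the delay element; the time constant and stimulus parameters are arbitrary choices, not values from the paper.

```python
import numpy as np

def reichardt_emd(left, right, tau=0.05, dt=0.001):
    """Correlation-type (Hassenstein-Reichardt) elementary motion detector.

    left, right: luminance signals from two neighboring photoreceptors.
    Each input is low-pass filtered (a simple surrogate for the delay)
    and correlated with the undelayed neighbor; the opponent subtraction
    yields a direction-selective output.
    """
    alpha = dt / (tau + dt)            # first-order low-pass coefficient
    lp_l = np.zeros_like(left)
    lp_r = np.zeros_like(right)
    for t in range(1, len(left)):      # delayed (filtered) copies
        lp_l[t] = lp_l[t-1] + alpha * (left[t] - lp_l[t-1])
        lp_r[t] = lp_r[t-1] + alpha * (right[t] - lp_r[t-1])
    return lp_l * right - lp_r * left  # opponent correlation

# A drifting sinusoidal grating: the mean response sign follows the motion
# direction, and its magnitude depends on the stimulus spatial frequency --
# the confound with behavioral speed estimates discussed above.
t = np.arange(0, 1, 0.001)
phase = 2 * np.pi * 4 * t              # 4 Hz temporal frequency
rightward = reichardt_emd(np.sin(phase), np.sin(phase - 0.5))
leftward = reichardt_emd(np.sin(phase), np.sin(phase + 0.5))
print(rightward.mean() > 0, leftward.mean() < 0)
```

The non-directional detectors discussed in the paper correspond to the individual correlation arms before the opponent subtraction is applied.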

Higgins, C. M., Pant, V., & Deutschmann, R. (2005). Analog VLSI implementation of spatio-temporal frequency tuned visual motion algorithms. IEEE Transactions on Circuits and Systems I: Regular Papers, 52(3), 489-502.

Abstract:

The computation of local visual motion can be accomplished very efficiently in the focal plane with custom very large-scale integration (VLSI) hardware. Algorithms based on measurement of the spatial and temporal frequency content of the visual motion signal, since they incorporate no thresholding operation, allow highly sensitive responses to low contrast and low-speed visual motion stimuli. We describe analog VLSI implementations of the three most prominent spatio-temporal frequency-based visual motion algorithms, present characterizations of their performance, and compare the advantages of each on an equal basis. This comparison highlights important issues in the design of analog VLSI sensors, including the effects of circuit design on power consumption, the tradeoffs of subthreshold versus above-threshold MOSFET biasing, and methods of layout for focal plane vision processing arrays. The presented sensors are capable of distinguishing the direction of motion of visual stimuli to less than 5% contrast, while consuming as little as 1 μW of electrical power. These visual motion sensors are useful in embedded applications where minimum power consumption, size, and weight are crucial. © 2005 IEEE.

Özalevli, E., Hasler, P., & Higgins, C. M. (2006). Winner-take-all-based visual motion sensors. IEEE Transactions on Circuits and Systems II: Express Briefs, 53(8), 717-721.

Abstract:

We present a novel analog VLSI implementation of visual motion computation based on the lateral inhibition and positive feedback mechanisms that are inherent in the hysteretic winner-take-all circuit. By use of an input-dependent bias current and threshold mechanism, the circuit resets itself to prepare for another motion computation. This implementation was inspired by the Barlow-Levick model of direction selectivity in the rabbit retina. Each pixel uses 33 transistors and two small capacitors to detect the direction of motion and can be altered with the addition of six more transistors to measure the interpixel transit time. Simulation results and measurements from fabricated VLSI designs are presented to show the operation of the circuits. © 2006 IEEE.
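The Barlow-Levick scheme cited in this abstract can be illustrated with a minimal software sketch, assuming a first-order low-pass filter as the delay and simple subtractive inhibition; it is a model of the veto mechanism, not the VLSI circuit itself.

```python
import numpy as np

def barlow_levick(center, null_side, tau=0.05, dt=0.001):
    """Barlow-Levick-style detector: a delayed signal from the null-side
    neighbor vetoes (inhibits) the center response, so only motion in the
    preferred direction passes."""
    alpha = dt / (tau + dt)            # first-order low-pass coefficient
    inhib = np.zeros_like(null_side)
    for t in range(1, len(null_side)): # delayed inhibitory copy
        inhib[t] = inhib[t-1] + alpha * (null_side[t] - inhib[t-1])
    return np.maximum(center - inhib, 0.0)  # veto: excitation AND NOT delayed

# A moving edge. Preferred direction: the edge reaches the center first,
# so the delayed inhibition arrives too late to veto the response.
# Null direction: the null-side neighbor fires first and its delayed
# inhibition suppresses the center.
t = np.arange(0, 1, 0.001)
edge = lambda t0: (t >= t0).astype(float)
preferred = barlow_levick(edge(0.3), edge(0.4))
null_dir = barlow_levick(edge(0.4), edge(0.3))
print(preferred.sum() > null_dir.sum())
```

In the hardware described above, the lateral inhibition of the winner-take-all circuit plays the role of this veto.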

Higgins, C. M., & Goodman, R. M. (1994). Fuzzy rule-based networks for control. IEEE Transactions on Fuzzy Systems, 2(1), 82-88.

Abstract:

We present a method for learning fuzzy logic membership functions and rules to approximate a numerical function from a set of examples of the function's independent variables and the resulting function value. This method uses a three-step approach to building a complete function approximation system: first, learning the membership functions and creating a cell-based rule representation; second, simplifying the cell-based rules using an information-theoretic approach for induction of rules from discrete-valued data; and, finally, constructing a computational (neural) network to compute the function value given its independent variables. This function approximation system is demonstrated with a simple control example: learning the truck-and-trailer backer-upper control system.
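The rule-based function approximation described above can be illustrated with a minimal sketch: triangular membership functions partition the input range, each fuzzy set fires one rule, and the rule consequents are combined by weighted-average defuzzification. In the paper both the membership functions and the rules are learned from examples; here they are hand-chosen assumptions, and the target function is an arbitrary example.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b with support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical membership functions partitioning the input range [0, 2].
sets = [(-1.0, 0.0, 1.0), (0.0, 1.0, 2.0), (1.0, 2.0, 3.0)]
# One rule per input set: IF x is set_i THEN y = consequent_i.
# Consequents here are samples of an example target function, y = x**2.
consequents = [0.0, 1.0, 4.0]

def approx(x):
    """Weighted-average defuzzification over all rules (x must lie in the
    support of at least one membership function)."""
    w = np.array([tri(x, a, b, c) for a, b, c in sets])
    return float(np.dot(w, consequents) / w.sum())

print(approx(1.0))   # exactly at a membership peak: returns 1.0
print(approx(0.5))   # between peaks: a weighted blend of neighboring rules
```

Between the membership peaks the output linearly interpolates the rule consequents, which is why a modest number of learned rules can approximate a smooth function.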