Jacobus J Barnard
Associate Director, Faculty Affairs-SISTA
Associate Professor, BIO5 Institute
Associate Professor, Electrical and Computer Engineering
Professor, Cognitive Science - GIDP
Professor, Computer Science
Professor, Genetics - GIDP
Professor, Statistics - GIDP
Primary Department
(520) 621-6613
Research Interest
Kobus Barnard, PhD, is an associate professor in the recently formed University of Arizona School of Information: Science, Technology, and Arts (SISTA), created to foster computational approaches across disciplines in both research and education. He also holds University of Arizona appointments in Computer Science, ECE, Statistics, Cognitive Science, and BIO5. He leads the Interdisciplinary Visual Intelligence Lab (IVILAB), currently housed in SISTA.

Research in the IVILAB revolves around building top-down statistical models that link theory and semantics to data. Such models support going from data to knowledge using Bayesian inference. Much of this work is in the context of inferring semantics and geometric form from images and video. For example, in collaboration with multiple researchers, the IVILAB has applied this approach to problems in computer vision (e.g., tracking people in 3D from video, understanding 3D scenes from images, and learning models of object structure) and biological image understanding (e.g., tracking pollen tubes growing in vitro, inferring the morphology of neurons grown in culture, extracting 3D structure of filamentous fungi from the genus Alternaria from brightfield microscopy image stacks, and extracting 3D structure of Arabidopsis plants). An additional IVILAB research project, Semantically Linked Instructional Content (SLIC), aims to improve access to educational video through searching and browsing.

Dr. Barnard holds an NSF CAREER grant and has received support from three additional NSF grants, the DARPA Mind's Eye program, ONR, the Arizona Biomedical Research Commission (ABRC), and a BIO5 seed grant. He was supported by NSERC (Canada) during graduate and postgraduate studies (NSERC A, B, and PDF). His work on computational color constancy was awarded the Governor General's gold medal for the best dissertation across disciplines at SFU. He has published over 80 papers, including one that received a best-paper award on cognitive computer vision in 2002.

Publications

Gabbur, P., Hua, H., & Barnard, K. (2010). A fast connected components labeling algorithm and its application to real-time pupil detection. Machine Vision and Applications, 21(5), 779-787.

Abstract:

We describe a fast connected components labeling algorithm using a region coloring approach. It computes region attributes such as size, moments, and bounding boxes in a single pass through the image. Working in the context of real-time pupil detection for an eye tracking system, we compare the time performance of our algorithm with a contour tracing-based labeling approach and a region coloring method developed for a hardware eye detection system. We find that region attribute extraction performance exceeds that of these comparison methods. Further, labeling each pixel, which requires a second pass through the image, has comparable performance.
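To make the idea of collecting region attributes during labeling concrete, here is a minimal flood-fill labeler (an illustrative sketch, not the paper's region-coloring algorithm) that gathers each component's size and bounding box while assigning labels:

```python
from collections import deque

def label_components(img):
    """Label 4-connected foreground components of a binary image.

    Returns (labels, stats): labels[r][c] is 0 for background or a positive
    component id; stats maps id -> {"size", "bbox"}, with attributes
    accumulated while labeling rather than in a separate pass.
    """
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    stats = {}
    next_id = 0
    for r in range(h):
        for c in range(w):
            if img[r][c] and labels[r][c] == 0:
                next_id += 1
                labels[r][c] = next_id
                q = deque([(r, c)])
                size, rmin, rmax, cmin, cmax = 0, r, r, c, c
                while q:
                    y, x = q.popleft()
                    size += 1
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = next_id  # claim before enqueueing
                            q.append((ny, nx))
                stats[next_id] = {"size": size, "bbox": (rmin, cmin, rmax, cmax)}
    return labels, stats
```

In a pupil-detection setting, the per-component size and bounding box computed here are typically enough to select the pupil blob without ever producing the full label image.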

Barnard, K., Martin, L., Coath, A., & Funt, B. (2002). A comparison of computational color constancy algorithms - Part II: Experiments with image data. IEEE Transactions on Image Processing, 11(9), 985-996.

PMID: 18249721

Abstract:

We test a number of the leading computational color constancy algorithms using a comprehensive set of images. These were of 33 different scenes under 11 different sources representative of common illumination conditions. The algorithms studied include two gray world methods, a version of the Retinex method, several variants of Forsyth's gamut-mapping method, Cardei et al.'s neural net method, and Finlayson et al.'s Color by Correlation method. We discuss a number of issues in applying color constancy ideas to image data, and study in depth the effect of different preprocessing strategies. We compare the performance of the algorithms on image data with their performance on synthesized data. All data used for this study are available online at http://www.cs.sfu.ca/~color/data, and implementations for most of the algorithms are also available (http://www.cs.sfu.ca/~color/code). Experiments with synthesized data (part one of this paper) suggested that the methods which emphasize the use of the input data statistics, specifically Color by Correlation and the neural net algorithm, are potentially the most effective at estimating the chromaticity of the scene illuminant. Unfortunately, we were unable to realize comparable performance on real images. Here exploiting pixel intensity proved to be more beneficial than exploiting the details of image chromaticity statistics, and the three-dimensional (3-D) gamut-mapping algorithms gave the best performance.
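As a reference point for the simplest family evaluated above, here is a minimal gray-world sketch (illustrative only; the paper studies far more sophisticated variants). It assumes the average scene reflectance is achromatic, takes the mean RGB as the illuminant estimate, and corrects with a diagonal (von Kries) scaling:

```python
def gray_world_correct(pixels):
    """Gray-world color constancy on a list of (r, g, b) float tuples.

    The per-channel means estimate the illuminant; each pixel is scaled
    per channel so the corrected image has equal channel means.
    """
    n = len(pixels)
    means = [sum(p[ch] for p in pixels) / n for ch in range(3)]
    gray = sum(means) / 3.0  # target achromatic level
    gains = [gray / m if m > 0 else 1.0 for m in means]
    return [tuple(p[ch] * gains[ch] for ch in range(3)) for p in pixels]
```

The diagonal-gain form is the same correction model used by the gamut-mapping and learning-based methods in the paper; they differ only in how the gains are estimated.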

Barnard, K., & Funt, B. (1998). Experiments in sensor sharpening for color constancy. Final Program and Proceedings - IS&T/SID Color Imaging Conference, 43-46.

Abstract:

Sensor sharpening has been proposed as a method for improving color constancy algorithms, but it has not been tested in the context of real color constancy algorithms. In this paper we test sensor sharpening as a method for improving color constancy algorithms in the case of three different cameras, the human cone sensitivity estimates, and the XYZ response curves. We find that when the sensors are already relatively sharp, sensor sharpening does not offer much improvement and can have a detrimental effect. However, when the sensors are less sharp, sharpening can have a substantive positive effect. The degree of improvement is heavily dependent on the particular color constancy algorithm. Thus we conclude that using sensor sharpening for improving color constancy can offer a significant benefit, but its use needs to be evaluated with respect to both the sensors and the algorithm.
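The role a sharpening transform plays can be shown schematically. In the sketch below (hypothetical code; the paper derives the actual transform from sensor data), a 3x3 sharpening matrix T maps sensor responses into a basis where illumination change is closer to a per-channel scaling, the diagonal correction is applied there, and the result is mapped back:

```python
def diagonal_correct_sharpened(rgb, T, T_inv, gains):
    """Apply a diagonal illuminant correction in a sharpened sensor basis.

    T and T_inv are 3x3 row-major matrices (T_inv the inverse of T);
    rgb and gains are length-3 sequences. With T = identity this reduces
    to ordinary per-channel (von Kries) scaling.
    """
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

    sharpened = matvec(T, rgb)                       # into sharpened basis
    corrected = [g * s for g, s in zip(gains, sharpened)]  # diagonal step
    return matvec(T_inv, corrected)                  # back to sensor space
```

The point of sharpening is that, for broad (unsharp) sensors, a well-chosen T makes the middle diagonal step a much better model of illumination change than it is in the raw sensor basis.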

Barnard, K., & Johnson, M. (2005). Word sense disambiguation with pictures. Artificial Intelligence, 167(1-2), 13-30.

Abstract:

We introduce using images for word sense disambiguation, either alone or in conjunction with traditional text-based methods. The approach is based on a recently developed method for automatically annotating images by using a statistical model for the joint probability of image regions and words. The model itself is learned from a database of images with associated text. To use the model for word sense disambiguation, we constrain the predicted words to be possible senses for the word under consideration. When word prediction is constrained to a narrow set of choices (such as possible senses), it can be quite reliable. We report on experiments using the resulting sense probabilities as is, as well as augmenting a state-of-the-art text-based word sense disambiguation algorithm. In order to evaluate our approach, we developed a new corpus, ImCor, which consists of a substantive portion of the Corel image data set associated with disambiguated text drawn from the SemCor corpus. Our experiments using this corpus suggest that visual information can be very useful in disambiguating word senses. It also illustrates that associated non-textual information such as image data can help ground language meaning.
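The key step of constraining word prediction to the candidate senses and renormalizing, then optionally mixing with a text-based distribution, can be sketched as follows (illustrative code only; the paper's model scores words from image regions, and the linear mixing weight here is an assumption, not the paper's combination rule):

```python
def image_sense_probs(word_scores, candidate_senses, text_prior=None, alpha=0.5):
    """Turn an image-based word predictor into sense probabilities.

    word_scores: dict mapping sense labels to image-predicted scores over
    the full vocabulary. Scores are restricted to candidate_senses and
    renormalized; if text_prior (a dict of sense probabilities) is given,
    the two distributions are mixed with weight alpha on the image side.
    """
    scores = {s: word_scores.get(s, 0.0) for s in candidate_senses}
    z = sum(scores.values())
    img = {s: (v / z if z > 0 else 1.0 / len(candidate_senses))
           for s, v in scores.items()}
    if text_prior is None:
        return img
    return {s: alpha * img[s] + (1 - alpha) * text_prior.get(s, 0.0)
            for s in candidate_senses}
```

Restricting to a handful of senses is what makes the image predictor useful: it need only rank a few alternatives, not pick the right word from the whole vocabulary.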

Barnard, K., & Funt, B. (1997). Analysis and improvement of multi-scale retinex. Proceedings of the Color Imaging Conference: Color Science, Systems, and Applications, 221-225.

Abstract:

The main thrust of this paper is to modify the multi-scale retinex (MSR) approach to image enhancement so that the processing is more justified from a theoretical standpoint. This leads to a new algorithm with fewer arbitrary parameters that is more flexible, maintains color fidelity, and still preserves the contrast-enhancement benefits of the original MSR method. To accomplish this we identify the explicit and implicit processing goals of MSR. By decoupling the MSR operations from one another, we build an algorithm composed of independent steps that separates out the issues of gamma adjustment, color balance, dynamic range compression, and color enhancement, which are all jumbled together in the original MSR method. We then extend MSR with color constancy and chromaticity-preserving contrast enhancement.
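The core retinex operation the paper decomposes, the log of the signal minus the log of its local surround, averaged across scales, can be sketched in one dimension (a box blur stands in for MSR's Gaussian surround here; this is an illustration of the operation, not the paper's algorithm):

```python
import math

def multi_scale_retinex_1d(signal, scales=(2, 4, 8)):
    """Multi-scale retinex on a 1-D signal of positive floats.

    At each scale: log(signal) - log(local average), where the local
    average is a box blur of the given radius (a stand-in for the
    Gaussian surround). Outputs are averaged across scales.
    """
    def box_blur(x, radius):
        out = []
        for i in range(len(x)):
            lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
            out.append(sum(x[lo:hi]) / (hi - lo))
        return out

    eps = 1e-6  # guard against log(0)
    result = [0.0] * len(signal)
    for r in scales:
        blurred = box_blur(signal, r)
        for i in range(len(signal)):
            result[i] += (math.log(signal[i] + eps)
                          - math.log(blurred[i] + eps)) / len(scales)
    return result
```

A uniform signal yields zero everywhere, which shows why raw MSR discards absolute level and hence needs the separate gamma, color-balance, and dynamic-range steps the paper teases apart.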