Jacobus J Barnard
Associate Director, Faculty Affairs - SISTA
Associate Professor, BIO5 Institute
Associate Professor, Electrical and Computer Engineering
Professor, Cognitive Science - GIDP
Professor, Computer Science
Professor, Genetics - GIDP
Professor, Statistics - GIDP
(520) 621-6613
Research Interest
Kobus Barnard, PhD, is an associate professor in the recently formed University of Arizona School of Information: Science, Technology, and Arts (SISTA), created to foster computational approaches across disciplines in both research and education. He also holds University of Arizona appointments in Computer Science, ECE, Statistics, Cognitive Science, and BIO5. He leads the Interdisciplinary Visual Intelligence Lab (IVILAB), currently housed in SISTA.

Research in the IVILAB revolves around building top-down statistical models that link theory and semantics to data. Such models support going from data to knowledge using Bayesian inference. Much of this work is in the context of inferring semantics and geometric form from images and video. For example, in collaboration with multiple researchers, the IVILAB has applied this approach to problems in computer vision (e.g., tracking people in 3D from video, understanding 3D scenes from images, and learning models of object structure) and biological image understanding (e.g., tracking pollen tubes growing in vitro, inferring the morphology of neurons grown in culture, extracting the 3D structure of filamentous fungi from the genus Alternaria from brightfield microscopy image stacks, and extracting the 3D structure of Arabidopsis plants). An additional IVILAB research project, Semantically Linked Instructional Content (SLIC), aims to improve access to educational video through searching and browsing.

Dr. Barnard holds an NSF CAREER grant and has received support from three additional NSF grants, the DARPA Mind's Eye program, ONR, the Arizona Biomedical Research Commission (ABRC), and a BIO5 seed grant. He was supported by NSERC (Canada) during graduate and postgraduate studies (NSERC A, B, and PDF). His work on computational color constancy was awarded the Governor General's gold medal for the best dissertation across disciplines at SFU. He has published over 80 papers, including one awarded best paper on cognitive computer vision in 2002.


Yanai, K., Kawakubo, H., & Barnard, K. (2012). Entropy-Based Analysis of Visual and Geolocation Concepts in Images. Multimedia Information Extraction: Advances in Video, Audio, and Imagery Analysis for Search, Data Mining, Surveillance, and Authoring, 63-80.
Barnard, K., & Forsyth, D. (2001). Learning the semantics of words and pictures. Proceedings of the IEEE International Conference on Computer Vision, 2, 408-415.


We present a statistical model for organizing image collections which integrates semantic information provided by associated text and visual information provided by image features. The model is very promising for information retrieval tasks such as database browsing and searching for images based on text and/or image features. Furthermore, since the model learns relationships between text and image features, it can be used for novel applications such as associating words with pictures and unsupervised learning for object recognition.
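The "associating words with pictures" idea can be illustrated with a toy count-based model. The sketch below assumes images are represented as discrete region clusters ("blobs") paired with caption words; the per-blob conditional word distribution is a drastically simplified, invented stand-in for the paper's actual statistical model, and the data are made up for illustration.

```python
from collections import Counter, defaultdict

def learn_word_blob_model(corpus):
    """Estimate P(word | blob cluster) from (blobs, words) pairs.

    `corpus` is a list of images, each a (blobs, words) tuple where
    blobs are discrete region-cluster ids and words are caption tokens.
    """
    counts = defaultdict(Counter)
    for blobs, words in corpus:
        for b in blobs:
            for w in words:
                counts[b][w] += 1
    model = {}
    for b, word_counts in counts.items():
        total = sum(word_counts.values())
        model[b] = {w: c / total for w, c in word_counts.items()}
    return model

def annotate(model, blobs):
    """Predict the most likely word for each blob in a new image."""
    return [max(model[b], key=model[b].get) for b in blobs if b in model]

# Toy corpus: blob 0 mostly co-occurs with "tiger", blob 1 with "grass".
corpus = [
    ([0, 1], ["tiger", "grass"]),
    ([0], ["tiger"]),
    ([1], ["grass", "sky"]),
]
model = learn_word_blob_model(corpus)
print(annotate(model, [0, 1]))  # → ['tiger', 'grass']
```

A real system would replace the raw co-occurrence counts with a learned joint model over continuous image features and text, but the prediction step, scoring words given image regions, has the same shape.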

Liang, R., Shao, J., & Barnard, J. J. (2018). Resolution enhancement for fiber bundle imaging using maximum a posteriori estimation. Optics Letters.
Cardei, V. C., Funt, B., & Barnard, K. (2002). Estimating the scene illumination chromaticity by using a neural network. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 19(12), 2374-2386.

PMID: 12469731

Abstract:

A neural network can learn color constancy, defined here as the ability to estimate the chromaticity of a scene's overall illumination. We describe a multilayer neural network that is able to recover the illumination chromaticity given only an image of the scene. The network is trained in advance by being presented with a set of images of scenes together with the chromaticities of the corresponding scene illuminants. Experiments with real images show that the network performs better than previous color constancy methods. In particular, the performance is better for images with a relatively small number of distinct colors. The method has application to machine vision problems such as object recognition, where illumination-independent color descriptors are required, and in digital photography, where uncontrolled scene illumination can create an unwanted color cast in a photograph. © 2002 Optical Society of America.
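As a rough illustration of the idea, not the paper's architecture or data, a small multilayer network can be trained to map a coarse chromaticity histogram to a 2-D illuminant chromaticity estimate. The layer sizes, synthetic data, and training details below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not the paper's: a 16-bin binarized chromaticity
# histogram in, a 2-D illuminant chromaticity (r, g) out.
n_in, n_hid, n_out = 16, 8, 2

# Synthetic training set: random binary histograms with a made-up
# linear ground-truth mapping standing in for real image data.
X = (rng.random((200, n_in)) > 0.5).astype(float)
true_map = rng.normal(size=(n_in, n_out)) * 0.1
Y = X @ true_map + 0.3

W1 = rng.normal(size=(n_in, n_hid)) * 0.1
b1 = np.zeros(n_hid)
W2 = rng.normal(size=(n_hid, n_out)) * 0.1
b2 = np.zeros(n_out)

def forward(X):
    H = np.tanh(X @ W1 + b1)          # hidden layer
    return H, H @ W2 + b2             # linear output: (r, g) estimate

def mse(pred, Y):
    return float(np.mean((pred - Y) ** 2))

lr = 0.05
_, pred = forward(X)
loss0 = mse(pred, Y)
for _ in range(500):                  # plain batch gradient descent
    H, pred = forward(X)
    d_out = 2 * (pred - Y) / len(X)
    dW2, db2 = H.T @ d_out, d_out.sum(0)
    d_hid = (d_out @ W2.T) * (1 - H ** 2)   # tanh derivative
    dW1, db1 = X.T @ d_hid, d_hid.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, pred = forward(X)
print(f"MSE before: {loss0:.4f}  after: {mse(pred, Y):.4f}")
```

The training loop is standard backpropagation; the point is only the input/output contract: image statistics in, illuminant chromaticity out, so that colors can later be corrected independently of the illumination.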

Schlecht, J., Barnard, K., Spriggs, E., & Pryor, B. (2007). Inferring grammar-based structure models from 3D microscopy data. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.


We present a new method to fit grammar-based stochastic models for biological structure to stacks of microscopic images captured at incremental focal lengths. Providing the ability to quantitatively represent structure and automatically fit it to image data enables important biological research. We consider the case where individuals can be represented as an instance of a stochastic grammar, similar to L-systems used in graphics to produce realistic plant models. In particular, we construct a stochastic grammar of Alternaria, a genus of fungus, and fit instances of it to microscopic image stacks. We express the image data as the result of a generative process composed of the underlying probabilistic structure model together with the parameters of the imaging system. Fitting the model then becomes probabilistic inference. For this we create a reversible-jump MCMC sampler to traverse the parameter space. We observe that incorporating spatial structure helps fit the model parts, and that simultaneously fitting the imaging system is also very helpful. © 2007 IEEE.
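A heavily simplified illustration of "fitting the model then becomes probabilistic inference": the sketch below uses a fixed-dimension Metropolis-Hastings sampler to recover one parameter of a toy generative model from noisy data. The paper's reversible-jump sampler additionally moves across model structures of different dimension, which this sketch omits, and all data here are synthetic.

```python
import math
import random

random.seed(42)

# Synthetic "observations": noisy measurements of a hidden parameter,
# standing in for the paper's image stacks and structure model.
true_mu, sigma = 2.0, 0.5
data = [random.gauss(true_mu, sigma) for _ in range(100)]

def log_posterior(mu):
    # Flat prior on mu; Gaussian likelihood with known sigma.
    return -sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2)

def metropolis(n_steps, step=0.1, mu0=0.0):
    mu, lp = mu0, log_posterior(mu0)
    samples = []
    for _ in range(n_steps):
        prop = mu + random.gauss(0, step)            # symmetric proposal
        lp_prop = log_posterior(prop)
        if math.log(random.random()) < lp_prop - lp: # accept/reject
            mu, lp = prop, lp_prop
        samples.append(mu)
    return samples

samples = metropolis(5000)
burned = samples[2500:]                              # discard burn-in
estimate = sum(burned) / len(burned)
print(f"posterior mean estimate: {estimate:.2f} (true value {true_mu})")
```

A reversible-jump sampler follows the same accept/reject pattern, but its proposals can also add or remove model components (e.g., branches of a fungal structure), with the acceptance ratio adjusted so that moves between parameter spaces of different dimension remain valid.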