Jacobus J Barnard

Associate Director, Faculty Affairs - SISTA
Associate Professor, BIO5 Institute
Associate Professor, Electrical and Computer Engineering
Member of the Graduate Faculty
Professor, Cognitive Science - GIDP
Professor, Computer Science
Professor, Genetics - GIDP
Professor, Statistics - GIDP
Contact
(520) 621-4632

Research Interest

Kobus Barnard, PhD, is an associate professor in the recently formed University of Arizona School of Information: Science, Technology, and Arts (SISTA), created to foster computational approaches across disciplines in both research and education. He also holds University of Arizona appointments with Computer Science, ECE, Statistics, Cognitive Science, and BIO5. He leads the Interdisciplinary Visual Intelligence Lab (IVILAB), currently housed in SISTA.

Research in the IVILAB revolves around building top-down statistical models that link theory and semantics to data. Such models support going from data to knowledge using Bayesian inference. Much of this work is in the context of inferring semantics and geometric form from images and video. For example, in collaboration with multiple researchers, the IVILAB has applied this approach to problems in computer vision (e.g., tracking people in 3D from video, understanding 3D scenes from images, and learning models of object structure) and biological image understanding (e.g., tracking pollen tubes growing in vitro, inferring the morphology of neurons grown in culture, extracting the 3D structure of filamentous fungi from the genus Alternaria from brightfield microscopy image stacks, and extracting the 3D structure of Arabidopsis plants). An additional IVILAB research project, Semantically Linked Instructional Content (SLIC), focuses on improving access to educational video through searching and browsing.

Dr. Barnard holds an NSF CAREER grant and has received support from three additional NSF grants, the DARPA Mind's Eye program, ONR, the Arizona Biomedical Research Commission (ABRC), and a BIO5 seed grant. He was supported by NSERC (Canada) during graduate and post-graduate studies (NSERC A, B, and PDF). His work on computational color constancy was awarded the Governor General's Gold Medal for the best dissertation across disciplines at SFU.
He has published over 80 papers, including one awarded best paper on cognitive computer vision in 2002.

Publications

Guan, J., Brau, E., Simek, K., Morrison, C. T., Butler, E. A., & Barnard, K. J. (2015). Moderated and Drifting Linear Dynamical Systems. International Conference on Machine Learning.

This venue is a peer-reviewed, competitive conference (acceptance rate: 26%), and the full paper is published as part of the conference proceedings [CSRankings endorsed, A*].

Ramanan, D., Forsyth, D. A., & Barnard, K. (2005). Detecting, localizing and recovering kinematics of textured animals. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2, 635-642.

Abstract:

We develop and demonstrate an object recognition system capable of accurately detecting, localizing, and recovering the kinematic configuration of textured animals in real images. We build a deformation model of shape automatically from videos of animals and an appearance model of texture from a labeled collection of animal images, and combine the two models automatically. We develop a simple texture descriptor that outperforms the state of the art. We test our animal models on two datasets; images taken by professional photographers from the Corel collection, and assorted images from the web returned by Google. We demonstrate quite good performance on both datasets. Comparing our results with simple baselines, we show that for the Google set, we can recognize objects from a collection demonstrably hard for object recognition. © 2005 IEEE.

Barnard, K., Ciurea, F., & Funt, B. (2001). Sensor sharpening for computational color constancy. Journal of the Optical Society of America A: Optics and Image Science, and Vision, 18(11), 2728-2743.

PMID: 11688863

Abstract:

Sensor sharpening [J. Opt. Soc. Am. A 11, 1553 (1994)] has been proposed as a method for improving computational color constancy, but it has not been thoroughly tested in practice with existing color constancy algorithms. In this paper we study sensor sharpening in the context of viable color constancy processing, both theoretically and empirically, and on four different cameras. Our experimental findings lead us to propose a new sharpening method that optimizes an objective function that includes terms that minimize negative sensor responses as well as the sharpening error for multiple illuminants instead of a single illuminant. Further experiments suggest that this method is more effective for use with several known color constancy algorithms. © 2001 Optical Society of America.
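The sharpening idea this abstract builds on (from the cited 1994 work) can be illustrated with a small synthetic sketch: find a linear transform of sensor space in which an illuminant change acts as a purely diagonal (von Kries) scaling. This is a toy illustration of classic data-based sharpening by diagonalization, not the new optimization method the abstract proposes, and all data below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Responses of 3 sensors to N surfaces under illuminant 2 (synthetic).
N = 200
W2 = rng.uniform(0.1, 1.0, size=(3, N))

# Ground-truth illuminant-change map, constructed with real eigenvalues
# so that exact diagonalization is possible in this toy setup.
V = rng.uniform(-1.0, 1.0, size=(3, 3))
A_true = V @ np.diag([1.2, 0.9, 0.7]) @ np.linalg.inv(V)
W1 = A_true @ W2  # responses under illuminant 1

# Least-squares estimate of the linear map taking W2 responses to W1 responses.
A = W1 @ W2.T @ np.linalg.inv(W2 @ W2.T)

# Sharpening transform: diagonalize A. In the transformed (sharpened) sensor
# space, the illuminant change becomes a diagonal von Kries scaling.
eigvals, U = np.linalg.eig(A)
T = np.linalg.inv(U)
D = T @ A @ U  # diagonal up to numerical error

off_diag = D - np.diag(np.diag(D))
print(np.max(np.abs(off_diag)))  # near zero: the change is diagonal in sharpened space
```

The paper's contribution differs from this baseline: its objective also penalizes negative sensor responses and fits multiple illuminants jointly rather than a single illuminant pair.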

Barnard, K., Martin, L., Funt, B., & Coath, A. (2002). A data set for color research. Color Research and Application, 27(3), 147-151.

Abstract:

We present an extensive data set for color research that has been made available online (www.cs.sfu.ca/~colour/data). The data are especially germane to research into computational color constancy, but we have also aimed to make the data as general as possible, and we anticipate a wide range of benefits to research into computational color science and computer vision. Because data are useful only in context, we provide the details of the collection process, including the camera characterization, and the data used to determine that characterization. The most significant part of the data is 743 images of scenes taken under a carefully chosen set of 11 illuminants. The data set also has several standardized sets of spectra for synthetic data experiments, including some data for fluorescent surfaces. © 2002 Wiley Periodicals, Inc. Col. Res. Appl.

Cardei, V. C., Funt, B., & Barnard, K. (1999). White point estimation for uncalibrated images. Final Program and Proceedings - IS&T/SID Color Imaging Conference, 97-100.

Abstract:

Color images often must be color balanced to remove unwanted color casts. We extend previous work on using a neural network for illumination, or white-point, estimation from the case of calibrated images to that of uncalibrated images of unknown origin. The results show that the chromaticity of the ambient illumination can be estimated with an average CIE Lab error of 5ΔE. Comparisons are made to the grayworld and white patch methods.
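The grayworld and white patch methods used as comparisons in this abstract are standard illuminant-estimation baselines and are simple enough to sketch. The following is a minimal illustration, not code from the paper; the synthetic image, array layout, and chromaticity normalization are assumptions:

```python
import numpy as np

def grayworld_estimate(image):
    """Estimate the illuminant as the mean RGB of the image
    (grayworld assumption: average scene reflectance is achromatic)."""
    rgb = image.reshape(-1, 3).mean(axis=0)
    return rgb / rgb.sum()  # sum-normalized chromaticity

def white_patch_estimate(image):
    """Estimate the illuminant from the per-channel maxima
    (white patch assumption: the brightest responses reflect the illuminant)."""
    rgb = image.reshape(-1, 3).max(axis=0)
    return rgb / rgb.sum()

# Synthetic example: an achromatic (gray) scene lit by a reddish illuminant,
# so both baselines should recover the illuminant's chromaticity.
rng = np.random.default_rng(0)
reflectance = rng.uniform(0.2, 0.8, size=(64, 64, 1))
illuminant = np.array([1.0, 0.8, 0.6])
image = reflectance * illuminant

print(grayworld_estimate(image))   # matches illuminant / illuminant.sum()
print(white_patch_estimate(image))
```

On real, non-gray scenes these baselines err in systematic ways, which is what motivates learned estimators such as the neural network approach described above.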