Jacobus J Barnard
Associate Director, Faculty Affairs-SISTA
Associate Professor, BIO5 Institute
Associate Professor, Electrical and Computer Engineering
Professor, Cognitive Science - GIDP
Professor, Computer Science
Professor, Genetics - GIDP
Professor, Statistics - GIDP
(520) 621-6613
Research Interests
Kobus Barnard, PhD, is an associate professor in the recently formed University of Arizona School of Information: Science, Technology, and Arts (SISTA), created to foster computational approaches across disciplines in both research and education. He also holds University of Arizona appointments in Computer Science, ECE, Statistics, Cognitive Science, and BIO5. He leads the Interdisciplinary Visual Intelligence Lab (IVILAB), currently housed in SISTA.

Research in the IVILAB revolves around building top-down statistical models that link theory and semantics to data. Such models support going from data to knowledge using Bayesian inference. Much of this work is in the context of inferring semantics and geometric form from images and video. For example, in collaboration with multiple researchers, the IVILAB has applied this approach to problems in computer vision (e.g., tracking people in 3D from video, understanding 3D scenes from images, and learning models of object structure) and biological image understanding (e.g., tracking pollen tubes growing in vitro, inferring the morphology of neurons grown in culture, extracting the 3D structure of filamentous fungi of the genus Alternaria from brightfield microscopy image stacks, and extracting the 3D structure of Arabidopsis plants). An additional IVILAB research project, Semantically Linked Instructional Content (SLIC), aims to improve access to educational video through searching and browsing.

Dr. Barnard holds an NSF CAREER grant and has received support from three additional NSF grants, the DARPA Mind's Eye program, ONR, the Arizona Biomedical Research Commission (ABRC), and a BIO5 seed grant. He was supported by NSERC (Canada) during graduate and postgraduate studies (NSERC A, B, and PDF). His work on computational color constancy was awarded the Governor General's Gold Medal for the best dissertation across disciplines at SFU. He has published over 80 papers, including one awarded best paper on cognitive computer vision in 2002.

Publications

Gabbur, P., Hoying, J., & Barnard, K. (2015). Multimodal probabilistic generative models for time-course gene expression data and Gene Ontology (GO) tags. Mathematical biosciences, 268, 80-91.

We propose four probabilistic generative models for simultaneously modeling gene expression levels and Gene Ontology (GO) tags. Unlike previous approaches for using GO tags, the joint modeling framework allows the two sources of information to complement and reinforce each other. We fit our models to three time-course datasets collected to study biological processes, specifically blood vessel growth (angiogenesis) and mitotic cell cycles. The proposed models result in a joint clustering of genes and GO annotations. Different models group genes based on GO tags and their behavior over the entire time-course, within biological stages, or even individual time points. We show how such models can be used for biological stage boundary estimation de novo. We also evaluate our models on biological stage prediction accuracy of held out samples. Our results suggest that the models usually perform better when GO tag information is included.
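The joint-modeling idea can be illustrated with a minimal sketch. This is not the paper's model; it assumes a simplified setup in which each cluster pairs a Gaussian over a gene's time-course expression with a multinomial over its GO tags, and the two likelihoods are combined when assigning a gene to a cluster. All names and parameters here are illustrative.

```python
import math

def log_gauss(x, mu, sigma):
    """Log-likelihood of a time course x under an isotropic Gaussian (mu, sigma)."""
    return sum(-0.5 * ((xi - mi) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for xi, mi in zip(x, mu))

def assign_cluster(expr, tags, clusters):
    """Return the index of the most probable cluster for one gene.

    expr: list of expression values over time points.
    tags: list of GO tag strings annotating the gene.
    clusters: list of dicts with 'mu' (mean time course), 'sigma',
              and 'tag_probs' (GO tag -> probability).
    """
    best, best_ll = None, float("-inf")
    for k, c in enumerate(clusters):
        # Combine the expression likelihood with the tag likelihood,
        # so both data sources influence the clustering.
        ll = log_gauss(expr, c["mu"], c["sigma"])
        ll += sum(math.log(c["tag_probs"].get(t, 1e-6)) for t in tags)
        if ll > best_ll:
            best, best_ll = k, ll
    return best
```

A gene whose expression matches one cluster's mean but whose tags match another's will be pulled toward whichever source of evidence is stronger, which is the sense in which the two modalities reinforce each other.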

Kraft, R., Escobar, M. M., Narro, M. L., Kurtis, J. L., Efrat, A., Barnard, K., & Restifo, L. L. (2006). Phenotypes of Drosophila brain neurons in primary culture reveal a role for fascin in neurite shape and trajectory. The Journal of neuroscience : the official journal of the Society for Neuroscience, 26(34), 8734-47.

Subtle cellular phenotypes in the CNS may evade detection by routine histopathology. Here, we demonstrate the value of primary culture for revealing genetically determined neuronal phenotypes at high resolution. Gamma neurons of Drosophila melanogaster mushroom bodies (MBs) are remodeled during metamorphosis under the control of the steroid hormone 20-hydroxyecdysone (20E). In vitro, wild-type gamma neurons retain characteristic morphogenetic features, notably a single axon-like dominant primary process and an arbor of short dendrite-like processes, as determined with microtubule-polarity markers. We found three distinct genetically determined phenotypes of cultured neurons from grossly normal brains, suggesting that subtle in vivo attributes are unmasked and amplified in vitro. First, the neurite outgrowth response to 20E is sexually dimorphic, being much greater in female than in male gamma neurons. Second, the gamma neuron-specific "naked runt" phenotype results from transgenic insertion of an MB-specific promoter. Third, the recessive, pan-neuronal "filagree" phenotype maps to singed, which encodes the actin-bundling protein fascin. Fascin deficiency does not impair the 20E response, but neurites fail to maintain their normal, nearly straight trajectory, instead forming curls and hooks. This is accompanied by abnormally distributed filamentous actin. This is the first demonstration of fascin function in neuronal morphogenesis. Our findings, along with the regulation of human Fascin1 (OMIM 602689) by CREB (cAMP response element-binding protein) binding protein, suggest FSCN1 as a candidate gene for developmental brain disorders. We developed an automated method of computing neurite curvature and classifying neurons based on curvature phenotype. This will facilitate detection of genetic and pharmacological modifiers of neuronal defects resulting from fascin deficiency.
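A curvature measure of the kind described can be sketched as follows. This is not the paper's implementation; it assumes a neurite trace given as a 2D polyline and estimates mean curvature from the turning angle at each interior vertex divided by arc length.

```python
import math

def mean_curvature(points):
    """Mean curvature (radians per unit length) of a polyline of (x, y) points.

    A straight trace scores 0; curls and hooks score higher.
    """
    total_angle, total_len = 0.0, 0.0
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a = math.atan2(y1 - y0, x1 - x0)   # direction of incoming segment
        b = math.atan2(y2 - y1, x2 - x1)   # direction of outgoing segment
        # Turning angle, wrapped into [0, pi] so direction of turn is ignored.
        turn = abs(math.atan2(math.sin(b - a), math.cos(b - a)))
        total_angle += turn
        total_len += math.hypot(x1 - x0, y1 - y0)
    total_len += math.hypot(points[-1][0] - points[-2][0],
                            points[-1][1] - points[-2][1])
    return total_angle / total_len
```

Thresholding such a score over many traced neurites is one simple way to separate straight wild-type trajectories from the curled "filagree" morphology.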

Ramanan, D., Forsyth, D. A., & Barnard, K. (2006). Building models of animals from video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(8), 1319-1333.

PMID: 16886866

This paper argues that tracking, object detection, and model building are all similar activities. We describe a fully automatic system that builds 2D articulated models known as pictorial structures from videos of animals. The learned model can be used to detect the animal in the original video - in this sense, the system can be viewed as a generalized tracker (one that is capable of modeling objects while tracking them). The learned model can be matched to a visual library; here, the system can be viewed as a video recognition algorithm. The learned model can also be used to detect the animal in novel images - in this case, the system can be seen as a method for learning models for object recognition. We find that we can significantly improve the pictorial structures by augmenting them with a discriminative texture model learned from a texture library. We develop a novel texture descriptor that outperforms the state-of-the-art for animal textures. We demonstrate the entire system on real video sequences of three different animals. We show that we can automatically track and identify the given animal. We use the learned models to recognize animals from two data sets: images taken by professional photographers from the Corel collection, and assorted images from the Web returned by Google. We demonstrate quite good performance on both data sets. Comparing our results with simple baselines, we show that, for the Google set, we can detect, localize, and recover part articulations from a collection demonstrably hard for object recognition. © 2006 IEEE.

Barnard, K., Duygulu, P., Guru, R., Gabbur, P., & Forsyth, D. (2003). The effects of segmentation and feature choice in a translation model of object recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2, II/675-II/682.


We work with a model of object recognition where words must be placed on image regions. This approach means that large scale experiments are relatively easy, so we can evaluate the effects of various early and mid-level vision algorithms on recognition performance. We evaluate various image segmentation algorithms by determining word prediction accuracy for images segmented in various ways and represented by various features. We take the view that good segmentations respect object boundaries, and so word prediction should be better for a better segmentation. However, it is usually very difficult in practice to obtain segmentations that do not break up objects, so most practitioners attempt to merge segments to get better putative object representations. We demonstrate that our paradigm of word prediction easily allows us to predict potentially useful segment merges, even for segments that do not look similar (for example, merging the black and white halves of a penguin is not possible with feature-based segmentation; the main cue must be "familiar configuration"). These studies focus on unsupervised learning of recognition. However, we show that word prediction can be markedly improved by providing supervised information for a relatively small number of regions together with large quantities of unsupervised information. This supervisory information allows a better and more discriminative choice of features and breaks possible symmetries.
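The evaluation loop described above can be sketched in miniature. This is a toy illustration, not the authors' translation model: it assumes each word has a single prototype feature vector, labels each region with the nearest prototype, and scores word prediction accuracy against ground truth, which is the kind of measure used to compare segmentations and features.

```python
def predict_word(region_feat, prototypes):
    """Label a region with the word whose prototype feature is nearest (L2)."""
    return min(prototypes,
               key=lambda w: sum((a - b) ** 2
                                 for a, b in zip(region_feat, prototypes[w])))

def word_prediction_accuracy(regions, labels, prototypes):
    """Fraction of regions whose predicted word matches the ground-truth label."""
    correct = sum(predict_word(r, prototypes) == y
                  for r, y in zip(regions, labels))
    return correct / len(regions)
```

Under this paradigm, a segmentation or feature set that yields higher word prediction accuracy is judged better, without requiring hand-checked localization for every image.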

Barnard, K., & Finlayson, G. (2000). Shadow identification using colour ratios. Final Program and Proceedings - IS and T/SID Color Imaging Conference, 97-101.


In this paper we present a comprehensive method for identifying probable shadow regions in an image. Doing so is relevant to computer vision, colour constancy, and image reproduction, specifically dynamic range compression. Our method begins with a segmentation of the image into regions of the same colour. Then the edges between the regions are analyzed with respect to the possibility that each is due to an illumination change as opposed to a material boundary. We then integrate the edge information to produce an estimate of the illumination field.
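The colour-ratio idea behind the edge analysis can be sketched in simplified form. This is not the paper's full test; it assumes that across a shadow edge the RGB values on the two sides differ mainly by a single scale factor (a roughly neutral illumination change), whereas a material boundary tends to change the per-channel ratios unevenly. The threshold is illustrative.

```python
def shadow_edge_score(rgb_bright, rgb_dark):
    """Spread of per-channel dark/bright ratios across an edge.

    A small spread means all channels are attenuated by about the same
    factor, which is consistent with an illumination (shadow) edge.
    """
    ratios = [d / b for b, d in zip(rgb_bright, rgb_dark)]
    return max(ratios) - min(ratios)

def is_possible_shadow(rgb_bright, rgb_dark, tol=0.1):
    """Flag an edge as a candidate shadow edge (tol is an assumed threshold)."""
    return shadow_edge_score(rgb_bright, rgb_dark) < tol
```

Scoring every region boundary this way and then integrating the flagged edges over the image is the spirit of the pipeline: per-edge evidence first, then a global estimate of the illumination field.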