Jacobus J Barnard
Associate Director, Faculty Affairs-SISTA
Associate Professor, BIO5 Institute
Associate Professor, Electrical and Computer Engineering
Professor, Cognitive Science - GIDP
Professor, Computer Science
Professor, Genetics - GIDP
Professor, Statistics-GIDP
Primary Department
(520) 621-4632
Research Interest
Kobus Barnard, PhD, is an associate professor in the recently formed University of Arizona School of Information: Science, Technology, and Arts (SISTA), created to foster computational approaches across disciplines in both research and education. He also holds University of Arizona appointments in Computer Science, ECE, Statistics, Cognitive Science, and BIO5. He leads the Interdisciplinary Visual Intelligence Lab (IVILAB), currently housed in SISTA.

Research in the IVILAB revolves around building top-down statistical models that link theory and semantics to data. Such models support going from data to knowledge using Bayesian inference. Much of this work is in the context of inferring semantics and geometric form from images and video. For example, in collaboration with multiple researchers, the IVILAB has applied this approach to problems in computer vision (e.g., tracking people in 3D from video, understanding 3D scenes from images, and learning models of object structure) and biological image understanding (e.g., tracking pollen tubes growing in vitro, inferring the morphology of neurons grown in culture, extracting the 3D structure of filamentous fungi of the genus Alternaria from brightfield microscopy image stacks, and extracting the 3D structure of Arabidopsis plants). An additional IVILAB project, Semantically Linked Instructional Content (SLIC), aims to improve access to educational video through searching and browsing.

Dr. Barnard holds an NSF CAREER grant and has received support from three additional NSF grants, the DARPA Mind's Eye program, ONR, the Arizona Biomedical Research Commission (ABRC), and a BIO5 seed grant. He was supported by NSERC (Canada) during graduate and postgraduate studies (NSERC A, B, and PDF). His work on computational color constancy was awarded the Governor General's gold medal for the best dissertation across disciplines at SFU.
He has published over 80 papers, including one that received a best paper award on cognitive computer vision in 2002.

Publications

Barnard, K., & Funt, B. (1997). Analysis and improvement of multi-scale retinex. Proceedings of the Color Imaging Conference: Color Science, Systems, and Applications, 221-225.

Abstract:

The main thrust of this paper is to modify the multi-scale retinex (MSR) approach to image enhancement so that the processing is more justified from a theoretical standpoint. This leads to a new algorithm with fewer arbitrary parameters that is more flexible, maintains color fidelity, and still preserves the contrast-enhancement benefits of the original MSR method. To accomplish this we identify the explicit and implicit processing goals of MSR. By decoupling the MSR operations from one another, we build an algorithm composed of independent steps that separates out the issues of gamma adjustment, color balance, dynamic range compression, and color enhancement, which are all jumbled together in the original MSR method. We then extend MSR with color constancy and chromaticity-preserving contrast enhancement.
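For context, the standard MSR formulation that the paper analyzes can be sketched in a few lines: a single-scale retinex output is the log of the image minus the log of a Gaussian-blurred surround, and MSR is a weighted sum of that over several scales. This is a minimal illustrative sketch of the classic method only, not the modified algorithm the paper proposes; the scale values are conventional defaults, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(image, sigmas=(15, 80, 250), weights=None):
    """Classic multi-scale retinex on a single-channel float image.

    Illustrative sketch of standard MSR, not the paper's modified
    algorithm. Each scale contributes log(I) - log(G_sigma * I).
    """
    image = image.astype(np.float64) + 1.0  # offset to avoid log(0)
    if weights is None:
        # equal weighting across scales, as in the common formulation
        weights = np.full(len(sigmas), 1.0 / len(sigmas))
    msr = np.zeros_like(image)
    for w, sigma in zip(weights, sigmas):
        # single-scale retinex: log of image over its Gaussian surround
        surround = gaussian_filter(image, sigma)
        msr += w * (np.log(image) - np.log(surround))
    return msr
```

Note that a uniform image yields an all-zero response, since the surround equals the image everywhere; the paper's contribution is to decouple the gamma, color-balance, and contrast effects that this combined formulation mixes together.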

Pero, L. D., Lee, P., Magahern, J., Hartley, E., & Barnard, K. (2011). Fusing object detection and region appearance for image-text alignment. MM'11 - Proceedings of the 2011 ACM Multimedia Conference and Co-Located Workshops, 1113-1116.

Abstract:

We present a method for automatically aligning words to image regions that integrates specific object classifiers (e.g., "car" detectors) with weak models based on appearance features. Previous strategies have largely focused on the latter, and thus have not exploited progress on object category recognition. Hence, we augment region labeling with object detection, which simplifies the problem by reliably identifying a subset of the labels, and thereby reducing correspondence ambiguity overall. Comprehensive testing on the SAIAPR TC dataset shows that principled integration of object detection improves the region labeling task.

Butler, E. A., Gross, J. J., & Barnard, K. (2013). Testing the effects of suppression and reappraisal on emotional concordance using a multivariate multilevel model. Biological Psychology.

Abstract:

In theory, the essence of emotion is coordination across experiential, behavioral, and physiological systems in the service of functional responding to environmental demands. However, people often regulate emotions, which could either reduce or enhance cross-system concordance. The present study tested the effects of two forms of emotion regulation (expressive suppression, positive reappraisal) on concordance of subjective experience (positive-negative valence), expressive behavior (positive and negative), and physiology (inter-beat interval, skin conductance, blood pressure) during conversations between unacquainted young women. As predicted, participants asked to suppress showed reduced concordance for both positive and negative emotions. Reappraisal instructions also reduced concordance for negative emotions, but increased concordance for positive ones. Both regulation strategies had contagious interpersonal effects on average levels of responding. Suppression reduced overall expression for both regulating and uninstructed partners, while reappraisal reduced negative experience. Neither strategy influenced the uninstructed partners' concordance. These results suggest that emotion regulation impacts concordance by altering the temporal coupling of phasic subsystem responses, rather than by having divergent effects on subsystem tonic levels.

Barnard, K., Duygulu, P., & Forsyth, D. (2002). Modeling the statistics of image features and associated text. Proceedings of SPIE - The International Society for Optical Engineering, 4670, 1-11.

Abstract:

We present a methodology for modeling the statistics of image features and associated text in large datasets. The models used also serve to cluster the images, as images are modeled as being produced by sampling from a limited number of combinations of mixing components. Furthermore, because our approach models the joint occurrence of image features and associated text, it can be used to predict the occurrence of either, based on observations or queries. This supports an attractive approach to image search as well as novel applications such as suggesting illustrations for blocks of text (auto-illustrate) and generating words for images outside the training set (auto-annotate). In this paper we illustrate the approach on 10,000 images of work from the Fine Arts Museum of San Francisco. The images include line drawings, paintings, and pictures of sculpture and ceramics. Many of the images have associated free text whose nature varies greatly, from physical description to interpretation and mood. We incorporate statistical natural language processing in order to deal with free text. We use WordNet to provide semantic grouping information and to help disambiguate word senses, as well as emphasize the hierarchical nature of semantic relationships.
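The auto-annotate idea in this abstract can be illustrated with a toy sketch: if a joint mixture model has been fitted, each mixing component carries a distribution over image features and a distribution over words, so predicting words for a new image amounts to weighting each component's word distribution by its posterior given the image features. This is a hypothetical minimal sketch under simplifying assumptions (isotropic Gaussian feature models, pre-fitted components passed in as dictionaries); it is not the paper's actual model or implementation.

```python
import numpy as np

def auto_annotate(features, clusters, top_k=3):
    """Predict word indices for an image from a fitted joint mixture.

    Hypothetical sketch: each cluster dict holds 'prior', a Gaussian
    over features ('mean', 'var'), and a 'word_probs' multinomial.
    p(word | features) = sum_c p(c | features) * p(word | c).
    """
    log_post = []
    for c in clusters:
        diff = features - c["mean"]
        # log isotropic-Gaussian likelihood of the image features
        log_lik = -0.5 * np.sum(diff**2 / c["var"]
                                + np.log(2 * np.pi * c["var"]))
        log_post.append(np.log(c["prior"]) + log_lik)
    log_post = np.array(log_post)
    # normalize cluster posteriors (log-sum-exp for stability)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    # mix each cluster's word distribution by its posterior weight
    word_probs = sum(p * c["word_probs"] for p, c in zip(post, clusters))
    return np.argsort(word_probs)[::-1][:top_k]
```

The same fitted components could drive auto-illustrate by running the prediction in the other direction, scoring images by the likelihood of a query's words.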

Yanai, K., Kawakubo, H., & Barnard, K. (2012). Entropy-Based Analysis of Visual and Geolocation Concepts in Images. Multimedia Information Extraction: Advances in Video, Audio, and Imagery Analysis for Search, Data Mining, Surveillance, and Authoring, 63-80.