We propose four probabilistic generative models for simultaneously modeling gene expression levels and Gene Ontology (GO) tags. Unlike previous approaches that use GO tags, the joint modeling framework allows the two sources of information to complement and reinforce each other. We fit our models to three time-course datasets collected to study biological processes, specifically blood vessel growth (angiogenesis) and mitotic cell cycles. The proposed models result in a joint clustering of genes and GO annotations. Different models group genes based on GO tags and their behavior over the entire time course, within biological stages, or even at individual time points. We show how such models can be used for de novo estimation of biological stage boundaries. We also evaluate our models on biological stage prediction accuracy for held-out samples. Our results suggest that the models usually perform better when GO tag information is included.
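To make the joint-modeling idea concrete, here is a minimal sketch (not the authors' four models) of one way expression and GO tags can reinforce each other in a single clustering: a mixture model whose clusters each carry a Gaussian mean over time points and Bernoulli probabilities over GO tags, fit by EM. The function name and the deterministic initialization are illustrative assumptions.

```python
import numpy as np

def fit_joint_mixture(X, T, k, n_iter=50):
    """EM for a toy joint mixture model: each cluster has a Gaussian mean
    over expression time points (X: genes x time points, unit variance
    assumed) and Bernoulli probabilities over GO tags (T: genes x tags)."""
    n, d = X.shape
    m = T.shape[1]
    # spread initial means across the expression range (deterministic, illustrative)
    order = np.argsort(X.sum(axis=1))
    mu = X[order[np.linspace(0, n - 1, k).astype(int)]].copy()
    phi = np.full((k, m), 0.5)          # per-cluster GO-tag probabilities
    pi = np.full(k, 1.0 / k)            # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities combine expression and GO-tag likelihoods,
        # which is where the two information sources reinforce each other
        log_r = np.log(pi) - 0.5 * ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        log_r += T @ np.log(phi).T + (1 - T) @ np.log(1 - phi).T
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reestimate weights, expression means, tag probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        phi = np.clip((r.T @ T) / nk[:, None], 1e-3, 1 - 1e-3)
    return r, mu, phi
```

The resulting responsibilities give a soft co-clustering of genes, while `phi` summarizes which GO tags characterize each cluster.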
Subtle cellular phenotypes in the CNS may evade detection by routine histopathology. Here, we demonstrate the value of primary culture for revealing genetically determined neuronal phenotypes at high resolution. Gamma neurons of Drosophila melanogaster mushroom bodies (MBs) are remodeled during metamorphosis under the control of the steroid hormone 20-hydroxyecdysone (20E). In vitro, wild-type gamma neurons retain characteristic morphogenetic features, notably a single axon-like dominant primary process and an arbor of short dendrite-like processes, as determined with microtubule-polarity markers. We found three distinct genetically determined phenotypes of cultured neurons from grossly normal brains, suggesting that subtle in vivo attributes are unmasked and amplified in vitro. First, the neurite outgrowth response to 20E is sexually dimorphic, being much greater in female than in male gamma neurons. Second, the gamma neuron-specific "naked runt" phenotype results from transgenic insertion of an MB-specific promoter. Third, the recessive, pan-neuronal "filagree" phenotype maps to singed, which encodes the actin-bundling protein fascin. Fascin deficiency does not impair the 20E response, but neurites fail to maintain their normal, nearly straight trajectory, instead forming curls and hooks, accompanied by abnormally distributed filamentous actin. This is the first demonstration of fascin function in neuronal morphogenesis. Our findings, along with the regulation of human Fascin1 (OMIM 602689) by CREB (cAMP response element-binding protein)-binding protein, suggest FSCN1 as a candidate gene for developmental brain disorders. We developed an automated method of computing neurite curvature and classifying neurons based on curvature phenotype. This will facilitate detection of genetic and pharmacological modifiers of neuronal defects resulting from fascin deficiency.
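The curvature-based classification can be sketched as follows; this is a minimal illustration of the idea, not the authors' implementation, and the function names and threshold value are assumptions. Discrete curvature along a traced neurite is the turning angle between consecutive segments divided by local arc length; a "filagree"-like neurite with curls and hooks accumulates far more curvature than a nearly straight wild-type one.

```python
import numpy as np

def mean_abs_curvature(points):
    """Mean absolute discrete curvature along a 2D neurite trace:
    turning angle between consecutive segments over local arc length."""
    p = np.asarray(points, dtype=float)
    v = np.diff(p, axis=0)                             # segment vectors
    seg = np.linalg.norm(v, axis=1)                    # segment lengths
    ang = np.arctan2(v[:, 1], v[:, 0])                 # segment headings
    dtheta = np.diff(ang)
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi    # wrap to [-pi, pi)
    ds = 0.5 * (seg[:-1] + seg[1:])                    # arc length at each joint
    return float(np.mean(np.abs(dtheta) / ds))

def classify_neurite(points, threshold=0.2):
    """Label a trace 'curly' (filagree-like) if its mean curvature
    (units: 1/length) exceeds an illustrative threshold."""
    return "curly" if mean_abs_curvature(points) > threshold else "straight"
```

For a circular arc of radius r the estimate converges to 1/r, which gives the measure an intuitive geometric reading.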
This paper argues that tracking, object detection, and model building are all similar activities. We describe a fully automatic system that builds 2D articulated models known as pictorial structures from videos of animals. The learned model can be used to detect the animal in the original video - in this sense, the system can be viewed as a generalized tracker (one that is capable of modeling objects while tracking them). The learned model can be matched to a visual library; here, the system can be viewed as a video recognition algorithm. The learned model can also be used to detect the animal in novel images - in this case, the system can be seen as a method for learning models for object recognition. We find that we can significantly improve the pictorial structures by augmenting them with a discriminative texture model learned from a texture library. We develop a novel texture descriptor that outperforms the state of the art for animal textures. We demonstrate the entire system on real video sequences of three different animals. We show that we can automatically track and identify the given animal. We use the learned models to recognize animals from two data sets: images taken by professional photographers from the Corel collection, and assorted images from the Web returned by Google. We demonstrate quite good performance on both data sets. Comparing our results with simple baselines, we show that, for the Google set, we can detect, localize, and recover part articulations from a collection demonstrably hard for object recognition.
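The core computational step behind pictorial structures is efficient matching of an articulated part model to an image: unary appearance costs per part plus pairwise deformation costs between connected parts, minimized exactly by dynamic programming when the parts form a tree. Below is a minimal sketch for the simplest case, a chain of parts over 1D candidate locations with quadratic deformation costs; the function name and cost form are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def match_chain(unary, alpha=1.0):
    """Exact DP matching of a chain-structured pictorial model.
    unary: (n_parts, n_locations) appearance costs per part per location.
    Pairwise cost between consecutive parts i-1, i: alpha*(loc_prev - loc_cur)^2.
    Returns the minimum-cost location for each part and the total cost."""
    n_parts, n_loc = unary.shape
    locs = np.arange(n_loc)
    cost = unary[0].copy()                   # best cost ending at each location
    back = np.zeros((n_parts, n_loc), dtype=int)
    for i in range(1, n_parts):
        # pair[prev, cur]: deformation cost for placing part i at cur
        pair = alpha * (locs[None, :] - locs[:, None]) ** 2
        total = cost[:, None] + pair
        back[i] = total.argmin(axis=0)       # best predecessor per location
        cost = total.min(axis=0) + unary[i]
    # backtrack the optimal configuration
    path = [int(cost.argmin())]
    for i in range(n_parts - 1, 1 - 1, -1):
        path.append(int(back[i][path[-1]]))
    return path[::-1], float(cost.min())
```

Real pictorial-structures matching uses 2D locations (often with orientation and scale) and generalized distance transforms to keep the pairwise minimization linear in the number of locations; the chain version above only shows the recurrence.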
We work with a model of object recognition where words must be placed on image regions. This approach means that large-scale experiments are relatively easy, so we can evaluate the effects of various early and mid-level vision algorithms on recognition performance. We evaluate various image segmentation algorithms by determining word prediction accuracy for images segmented in various ways and represented by various features. We take the view that good segmentations respect object boundaries, and so word prediction should be better for a better segmentation. However, it is usually very difficult in practice to obtain segmentations that do not break up objects, so most practitioners attempt to merge segments to get better putative object representations. We demonstrate that our paradigm of word prediction easily allows us to predict potentially useful segment merges, even for segments that do not look similar (for example, merging the black and white halves of a penguin is not possible with feature-based segmentation; the main cue must be "familiar configuration"). These studies focus on unsupervised learning of recognition. However, we show that word prediction can be markedly improved by providing supervised information for a relatively small number of regions together with large quantities of unsupervised information. This supervisory information allows a better and more discriminative choice of features and breaks possible symmetries.
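The evaluation protocol can be sketched in a few lines: predict a word for each segmented region from its features and score the fraction predicted correctly, so that segmentations whose regions respect object boundaries (and therefore have purer features) score higher. The nearest-prototype predictor below is a deliberately simple stand-in for the paper's word-prediction model; the function name and feature setup are illustrative assumptions.

```python
import numpy as np

def word_prediction_accuracy(region_feats, region_words, proto_feats, proto_words):
    """Predict a word for each region by nearest labeled prototype in
    feature space; return the fraction predicted correctly. Serves as a
    proxy score for segmentation quality: segments that mix objects get
    muddled features and lower accuracy."""
    region_feats = np.asarray(region_feats, float)
    proto_feats = np.asarray(proto_feats, float)
    d = ((region_feats[:, None, :] - proto_feats[None]) ** 2).sum(-1)
    pred = np.asarray(proto_words)[d.argmin(axis=1)]
    return float((pred == np.asarray(region_words)).mean())
```

The same score can rank candidate segment merges: a merge is promising if the pooled region's features predict some word with higher confidence than either half did alone, even when the halves (like a penguin's black and white parts) look nothing alike.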
In this paper we present a comprehensive method for identifying probable shadow regions in an image. Doing so is relevant to computer vision, colour constancy, and image reproduction, specifically dynamic range compression. Our method begins with a segmentation of the image into regions of the same colour. The edges between regions are then analyzed to assess whether each is due to an illumination change or to a material boundary. Finally, we integrate the edge information to produce an estimate of the illumination field across the image.
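A common heuristic behind this kind of edge analysis is that an illumination (shadow) edge mostly scales brightness while leaving chromaticity nearly unchanged, whereas a material boundary changes chromaticity as well. The sketch below illustrates that idea on the mean colours of the two regions flanking an edge; it is an illustrative stand-in, not the paper's classifier, and the function name and tolerance are assumptions.

```python
import numpy as np

def edge_type(mean_rgb_a, mean_rgb_b, chroma_tol=0.05):
    """Heuristic edge classifier: compare the chromaticities
    (r, g) / (r + g + b) of the regions on either side of an edge.
    Near-identical chromaticity with different brightness suggests an
    illumination change; a chromaticity shift suggests a material boundary."""
    a = np.asarray(mean_rgb_a, float)
    b = np.asarray(mean_rgb_b, float)
    ca = a[:2] / a.sum() if a.sum() else a[:2]
    cb = b[:2] / b.sum() if b.sum() else b[:2]
    return "illumination" if np.abs(ca - cb).max() < chroma_tol else "material"
```

Per-edge labels like these can then be integrated over the region adjacency graph to recover a dense illumination-field estimate; in practice real shadows also shift slightly toward blue (skylight), which a fuller model would account for.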