Jacobus J Barnard
Publications
PMID: 22917928; PMCID: PMC3529353
Abstract:
The actin-bundling protein fascin is a key mediator of tumor invasion and metastasis and its activity drives filopodia formation, cell-shape changes and cell migration. Small-molecule inhibitors of fascin block tumor metastasis in animal models. Conversely, fascin deficiency might underlie the pathogenesis of some developmental brain disorders. To identify fascin-pathway modulators we devised a cell-based assay for fascin function and used it in a bidirectional drug screen. The screen utilized cultured fascin-deficient mutant Drosophila neurons, whose neurite arbors manifest the 'filagree' phenotype. Taking a repurposing approach, we screened a library of 1040 known compounds, many of them FDA-approved drugs, for filagree modifiers. Based on scaffold distribution, molecular-fingerprint similarities, and chemical-space distribution, this library has high structural diversity, supporting its utility as a screening tool. We identified 34 fascin-pathway blockers (with potential anti-metastasis activity) and 48 fascin-pathway enhancers (with potential cognitive-enhancer activity). The structural diversity of the active compounds suggests multiple molecular targets. Comparisons of active and inactive compounds provided preliminary structure-activity relationship information. The screen also revealed diverse neurotoxic effects of other drugs, notably the 'beads-on-a-string' defect, which is induced solely by statins. Statin-induced neurotoxicity is enhanced by fascin deficiency. In summary, we provide evidence that primary neuron culture using a genetic model organism can be valuable for early-stage drug discovery and developmental neurotoxicity testing. Furthermore, we propose that, given an appropriate assay for target-pathway function, bidirectional screening for brain-development disorders and invasive cancers represents an efficient, multipurpose strategy for drug discovery. © 2012. Published by The Company of Biologists Ltd.
Abstract:
We present a statistical learning approach for finding recreational trails in aerial images. While the problem of recognizing relatively straight and well-defined roadways in digital images has been well studied in the literature, the more difficult problem of extracting trails has received no attention. However, trails and rough roads are less likely to be adequately mapped, and change more rapidly over time. Automated tools for finding trails will be useful to cartographers, recreational users and governments. In addition, the methods developed here are applicable to the more general problem of finding linear structure. Our approach combines local estimates of per-pixel trail probability with the global constraint that such pixels must link together to form a path. For the local part, we present results using three classification techniques. To construct a global solution (a trail) from these probabilities, we propose a global cost function that includes both global probability and path length. We show that the addition of a length term significantly improves trail-finding ability. However, computing the optimal trail becomes intractable, as known dynamic programming methods do not apply. Thus we describe a new splitting heuristic based on Dijkstra's algorithm. We then further improve upon the results with a trail sampling scheme. We test our approach on 500 challenging images along the 2500-mile Continental Divide mountain bike trail, where assumptions prevalent in the road literature are violated. ©2008 IEEE.
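The core idea above — per-pixel trail probabilities linked by a shortest-path search with a length penalty — can be sketched as a small Dijkstra search on a probability grid. This is a minimal illustration only: the cost function, splitting heuristic, and sampling scheme in the paper are more involved, and `length_penalty` here is a hypothetical parameter standing in for the paper's length term.

```python
import heapq
import math

def trail_dijkstra(prob, start, goal, length_penalty=0.1):
    """Find a minimum-cost path on a grid where each step costs
    -log(trail probability of the pixel entered) plus a constant
    per-step length penalty. Illustrative sketch of the idea only."""
    rows, cols = len(prob), len(prob[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, math.inf):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and prob[nr][nc] > 0:
                nd = d - math.log(prob[nr][nc]) + length_penalty
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

Without the length penalty, long detours through moderately probable pixels can undercut short direct routes; the per-step term biases the search toward shorter trails, which is the effect the length term in the global cost function provides.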
Abstract:
Research has been devoted in recent years to relevance feedback as an effective solution for improving the performance of image similarity search. However, few methods using relevance feedback are currently available to perform relatively complex queries on large image databases. In the case of complex image queries, images with relevant concepts are often scattered across several visual regions in the feature space. This leads to adapting multiple regions to represent a query in the feature space, and it is therefore necessary to handle disjunctive queries in the feature space. In this paper, we propose a new adaptive classification and cluster-merging method to find the multiple regions, of arbitrary shape, that represent a complex image query. Our method achieves the same high retrieval quality regardless of the shapes of the query regions, since the measures used in our method are invariant under linear transformations. Extensive experiments show that the result of our method converges quickly to the user's true information need, and that in MARS our method outperforms the query-expansion approach by about 22% in recall and 20% in precision, and the query-point-movement approach by about 35% in recall and 31% in precision. © 2005 Elsevier Inc. All rights reserved.
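The key property claimed above is that the distance measures are invariant under linear transformations of the feature space, so query regions of any (elliptical) shape are handled equally well. The classic measure with this property is the Mahalanobis distance; the sketch below (a hypothetical 2-D helper, not the paper's exact merging criterion) demonstrates the invariance: transforming points by an invertible matrix A, with the covariance transformed to A·Σ·Aᵀ, leaves the distance unchanged.

```python
def mahalanobis_2d(point, mean, cov):
    """Mahalanobis distance sqrt((x - m)^T cov^{-1} (x - m)) in 2-D,
    with the 2x2 covariance inverted by hand. Invariant under any
    invertible linear transformation of the feature space."""
    dx, dy = point[0] - mean[0], point[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # Apply the inverse covariance to the difference vector.
    ix = (d * dx - b * dy) / det
    iy = (-c * dx + a * dy) / det
    return (dx * ix + dy * iy) ** 0.5
```

For example, scaling both feature axes by 2 scales the difference vector by 2 and the covariance by 4, and the two effects cancel, so a cluster-merging threshold expressed in this distance behaves identically in the transformed space.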
Abstract:
We assume that the goal of content-based image retrieval is to find images which are both semantically and visually relevant to users based on image descriptors. These descriptors are often provided by an example image — the query-by-example paradigm. In this work we develop a very simple method for evaluating such systems based on large collections of images with associated text. Examples of such collections include the Corel image collection, annotated museum collections, news photos with captions, and web images with associated text derived by heuristic reasoning about the structure of typical web pages (as used by Google™). The advantage of using such data is that it is plentiful, and the method we propose can be automatically applied to hundreds of thousands of queries. However, it is critical that such a method be verified against human usage, and to do this we evaluate over 6000 query/result pairs. Our results strongly suggest that, at least in the case of the Corel image collection, the automated measure is a good proxy for human evaluation. Importantly, our human evaluation data can be reused for the evaluation of any content-based image retrieval system and/or the verification of additional proxy measures.
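An automated proxy of the kind described — scoring a retrieved image by how well its associated text matches the query image's text — can be sketched in a few lines. The Jaccard overlap used here is an assumption for illustration; the paper's actual proxy measure may be defined differently.

```python
def proxy_relevance(query_words, result_words):
    """Score a retrieved image by the word overlap (Jaccard index)
    between its associated text and the query image's associated
    text. A minimal sketch of an automated proxy measure; not the
    paper's exact definition."""
    q, r = set(query_words), set(result_words)
    union = q | r
    return len(q & r) / len(union) if union else 0.0
```

Because it needs only the images' associated text, such a score can be computed for hundreds of thousands of query/result pairs with no human in the loop, which is exactly what makes validating it against human judgments (the 6000 evaluated pairs) essential.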
Abstract:
There is a growing trend in machine color constancy research to use only image chromaticity information, ignoring the magnitude of the image pixels. This is natural because the main purpose is often to estimate only the chromaticity of the illuminant. However, the magnitudes of the image pixels also carry information about the chromaticity of the illuminant. One such source of information is image specularities. As is well known in the computational color constancy field, specularities from inhomogeneous materials (such as plastics and painted surfaces) can be used for color constancy. This assumes that the image contains specularities, that they can be identified, and that they do not saturate the camera sensors. These provisos make it important that color constancy algorithms which make use of specularities also perform well when they are absent. A further problem with using specularities is that the key assumption, namely that the specular component is the color of the illuminant, does not hold in the case of colored metals. In this paper we investigate a number of color constancy algorithms in the context of specular and non-specular reflection. We then propose extensions to several variants of Forsyth's CRULE algorithm [1-4] which make use of specularities if they exist, but do not rely on their presence. In addition, our approach is easily extended to include colored metals, and is the first color constancy algorithm to deal with such surfaces. Finally, our method provides an estimate of the overall brightness, which chromaticity-based methods cannot provide and which other RGB-based algorithms estimate poorly when specularities are present.
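The baseline specularity idea referenced above — that highlights on inhomogeneous materials take on the illuminant's color — can be sketched as follows. This is a deliberately simplified illustration under the dichromatic reflection model, assuming the brightest pixels are unsaturated specular highlights; the CRULE extensions proposed in the paper are more elaborate and, unlike this sketch, do not rely on specularities being present.

```python
def illuminant_from_specularities(pixels, top_fraction=0.01):
    """Estimate illuminant chromaticity (r, g) from the brightest
    pixels, assuming they are specular highlights whose RGB matches
    the illuminant. Simplified sketch of the dichromatic-model idea;
    it fails on colored metals and when no specularities exist,
    which is exactly the gap the paper's extensions address."""
    ranked = sorted(pixels, key=lambda p: sum(p), reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    # Average the candidate specular pixels, then normalize to
    # chromaticity coordinates r = R/(R+G+B), g = G/(R+G+B).
    sr = sum(p[0] for p in ranked[:k])
    sg = sum(p[1] for p in ranked[:k])
    sb = sum(p[2] for p in ranked[:k])
    total = sr + sg + sb
    return sr / total, sg / total
```

Note the provisos from the abstract map directly onto this code's failure modes: if the sensors saturate, the top pixels are clipped and no longer illuminant-colored, and if there are no specularities, the brightest pixels are just bright body reflections.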