

Evgeniy Bart

Other spellings of my name: Eugene Bart, Evgeny Bart, Yevgeny Bart

Email: "bart" followed by the "at" symbol followed by ""

Tel: +1-650-812-4772
Fax: +1-650-812-4334

Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, CA 94304

Position: Researcher
Supervisor: Dr. Eric Saund


is here: pdf ps

Research interests:

I am interested in how learning is used to solve various visual tasks. My work concentrates on high-level tasks, such as object recognition and scene interpretation. Currently, I am working on developing scene interpretation methods for document analysis.


13. Ian Porteous, Evgeniy Bart, and Max Welling, "Multi-HDP: A Non Parametric Bayesian Model for Tensor Factorization", in Proc. AAAI, 2008. abstract pdf ps bibtex

12. Evgeniy Bart, Ian Porteous, Pietro Perona, and Max Welling, "Unsupervised learning of visual taxonomies", in Proc. CVPR, 2008. abstract pdf ps bibtex

11. Jay Hegde, Evgeniy Bart, and Daniel Kersten, "Fragment-based learning of visual object categories", Current Biology, 2008. abstract bibtex

The full-text article is available from the publisher's site here.

10. Max Welling, Ian Porteous, and Evgeniy Bart, "Infinite state bayes-nets for structured domains", in NIPS, 2007. abstract pdf ps bibtex

9. Evgeniy Bart and Shimon Ullman, "Class-based feature matching across unrestricted transformations", PAMI, 2007. abstract pdf ps bibtex Appendix pdf Appendix ps (c) IEEE

8. Evgeniy Bart and Shimon Ullman, "Object recognition by eliminating distracting information", in Proc. ICCVG, 2006. abstract pdf ps bibtex

7. E. Bart, S. Bao, D. Holcman, "Modeling the spontaneous activity of the auditory cortex", Journal of Computational Neuroscience, vol. 19, no. 3, pages 357-378, 2005. abstract bibtex 

A model of spontaneous activity in the auditory cortex is described. We analyze mathematically the effects of neural connections and synaptic depression on cortical dynamics. The analysis predicts that as the distribution of synaptic weights becomes broader, the network switches from a normal to an epileptic regime. This epileptic regime can be stabilized by synaptic depression. The model adequately explains empirical observations in the animal auditory cortex.
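The kind of dynamics the abstract refers to can be illustrated by a generic mean-field rate model with synaptic depression (a standard Tsodyks–Markram-style sketch, not necessarily the exact equations of the paper):

\[
\tau \frac{dE}{dt} = -E + f\!\left(w\, s\, E\right), \qquad
\frac{ds}{dt} = \frac{1 - s}{\tau_r} - u\, s\, E,
\]

where \(E\) is the mean firing rate, \(w\) the mean synaptic weight, \(s\) the fraction of available synaptic resources, \(u\) the resource utilization per spike, and \(f\) a sigmoidal gain function. Strong recurrent coupling \(w\) pushes the rate equation toward a high-activity (epileptic) fixed point, while depression (the \(-u\,s\,E\) term) reduces the effective coupling \(w\,s\) at high rates and can thereby stabilize the network.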

6. E. Bart, S. Ullman, "Single-example learning of novel classes using representation by similarity", in Proc. BMVC, 2005. abstract pdf ps bibtex

Novel object classes can be represented by how similar they are to other, more common classes. This representation is often used intuitively by humans when describing object classes such as 'tiger lily' or 'catfish'. In this paper, we show that such a representation has favorable properties when generalization from a single training example is required. We also propose an algorithm that uses this representation to learn to recognize a novel class from a single training example.
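The idea can be illustrated with a toy sketch (this is an illustration of representation by similarity in general, not the paper's algorithm): each input is re-described as a vector of similarities to a few known reference classes, and a novel class given by a single exemplar is recognized by nearest neighbour in that similarity space.

```python
# Toy sketch: represent inputs by their similarity to reference classes,
# then recognize a novel class from a single training example.
# All names here (prototypes, exemplars) are illustrative, not the paper's.
import numpy as np

def similarity_vector(x, reference_prototypes):
    """Similarity of x to each reference class (negative Euclidean distance)."""
    return np.array([-np.linalg.norm(x - p) for p in reference_prototypes])

def classify(x, exemplars, reference_prototypes):
    """exemplars: {class_name: single training example}.
    Returns the class whose exemplar is closest in similarity space."""
    sx = similarity_vector(x, reference_prototypes)
    return min(
        exemplars,
        key=lambda c: np.linalg.norm(
            sx - similarity_vector(exemplars[c], reference_prototypes)),
    )
```

The key point is that comparison happens in the similarity space, not in the raw feature space, so whatever the reference classes capture about the domain transfers to the novel class for free.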

5. E. Bart, S. Ullman, "Cross-generalization: learning novel classes from a single example by feature replacement", in Proc. CVPR, 2005. abstract pdf ps bibtex (c) IEEE

In this paper, we describe a method for learning a novel object class from a single training example. The approach (called cross-generalization) is based on reusing the information from previously learned classes. Features known to be useful in the past are adapted to make them suitable for the novel class. For example, a dog's head (useful for classifying dogs) can be turned into a cat's head (useful for classifying cats). These adapted features are subsequently used for classification.

4. E. Bart, S. Ullman, "Image normalization by mutual information", in Proc. BMVC, pages 327-336, 2004. abstract pdf ps bibtex

In PCA, it is frequently beneficial to select only a subset of principal components. Here, we propose to base this selection on the mutual information of individual components. As an application, we show how this criterion allows denoising and illumination normalization to be combined in a single framework.
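A minimal sketch of the selection criterion (not the paper's implementation): compute the principal components, estimate the mutual information between each projected component and a label via a simple histogram estimator, and keep the most informative components.

```python
# Toy sketch: select principal components by their estimated mutual
# information with a class label. Helper names are illustrative.
import numpy as np

def entropy(counts):
    """Shannon entropy (in nats) of a histogram of counts."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def mutual_information(x, y, bins=8):
    """Estimate I(X;Y) by discretizing x into equal-width bins."""
    joint, _, _ = np.histogram2d(x, y, bins=(bins, len(np.unique(y))))
    hx = entropy(joint.sum(axis=1))
    hy = entropy(joint.sum(axis=0))
    hxy = entropy(joint.ravel())
    return hx + hy - hxy

def select_components(X, y, k=2):
    """PCA via SVD, then keep the k components most informative about y."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt.T                      # samples projected onto components
    mi = [mutual_information(proj[:, j], y) for j in range(proj.shape[1])]
    order = np.argsort(mi)[::-1][:k]      # top-k by mutual information
    return order, Vt[order]
```

Unlike the usual variance-based truncation, this keeps components by how much they tell us about the quantity of interest, so a low-variance but informative component can survive the cut.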

3. E. Bart, S. Ullman, "Class-based matching of object parts", in Proc. CVPR Workshop on Image and Video Registration, 2004. abstract pdf ps bibtex (c) IEEE

In this paper, we propose a method for matching features across significant changes in viewing conditions. For example, an eye in a frontal image can be matched to the same eye in profile view, or under different illumination. The method does not compare feature appearances across different viewing conditions. As a result, it can perform matching even if the viewing conditions significantly alter feature appearance. In particular, it is not restricted to locally planar objects or affine transformations. It also does not require examples of correct matches.

2. S. Ullman, E. Bart, "Recognition invariance obtained by extended and invariant features", Neural Networks, vol. 17, pages 833-848, 2004. abstract bibtex

The full-text article is available from the Elsevier site here (click the PDF link on the right to download a PDF).

In this paper, we investigate the 'extended fragments' scheme proposed in the ECCV 2004 paper. The proposed approach is to search for optimal features and learn to compensate for their appearance variability. This approach is compared to a popular alternative of restricting the search to invariant features which do not require such compensation. We also discuss several connections to biological vision, such as fast bottom-up recognition and cue saliency.

1. E. Bart, E. Byvatov, S. Ullman, "View-invariant recognition using corresponding object fragments", in Proc. ECCV, pages 152-165, 2004. abstract pdf ps bibtex (c) Springer-Verlag

In this paper, invariant object recognition is achieved by learning to compensate for the appearance variability of a set of class-specific features. For example, to compensate for pose variations of a feature representing an eye, eye images under different poses are grouped together. This grouping is done automatically during training. Given a novel face in, e.g., a frontal pose, a model for it can be constructed from existing frontal image patches. However, each frontal patch has profile patches associated with it, and these are also incorporated into the model. As a result, a model built from just a single frontal view can generalize well to distinctly different views, such as the profile.

Note: All materials are presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors and other copyright holders (such as publishers). These works may not be reposted without the explicit permission of the copyright holder.