Caltech Resident-Intruder Mouse dataset
The Caltech Resident-Intruder Mouse dataset
consists of 237 videos of ~10 min each, recorded at 25 fps with a
resolution of 640x480 pixels, 8-bit pixel depth, monochrome. Each scene
was recorded from top and side views using two fixed, synchronized
cameras. The videos always start with a male “resident mouse” alone in a laboratory
enclosure. At some point a second mouse, the “intruder”, is introduced
and the social interaction begins. Just before the end of the video,
the intruder mouse is removed. Behavior is categorized into 12+1
different mutually exclusive action categories, i.e. 12 behaviors and
one last category, called other, used by annotators when no behavior of
interest is occurring. The mice will start interacting by “getting to
know” each other (approach, circle, sniff, walk away). Once it has established that the intruder is a female, the resident mouse will likely court her (copulation, chase).
If the intruder is a male, the resident mouse will likely attack it to
defend its territory. Resident mice can also choose to ignore the
intruder, engaging in solitary behaviors (clean, drink, eat, up). The introduction/removal of the intruder mouse is labeled as human.
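The 12+1 categories named above can be collected into a small lookup table. This is an illustrative sketch only: the label names are taken from the description above, but the numeric IDs here are hypothetical — the official behavior IDs ship with the dataset's annotation info files.

```python
# The 12 behaviors plus "other", as named in the dataset description.
# NOTE: the ordering/IDs below are hypothetical; use the behavior IDs
# distributed with the dataset's annotation info for real work.
CRIM13_BEHAVIORS = [
    "approach", "attack", "chase", "circle", "clean", "copulation",
    "drink", "eat", "human", "sniff", "up", "walk_away",
]
CRIM13_CATEGORIES = CRIM13_BEHAVIORS + ["other"]

# Map each category name to a (hypothetical) integer ID.
CATEGORY_ID = {name: i for i, name in enumerate(CRIM13_CATEGORIES)}

assert len(CRIM13_CATEGORIES) == 13  # 12 behaviors + "other"
```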
Every video frame is labeled with one of the thirteen action categories, resulting in a segmentation of the videos into action intervals. Each ~10min video contains an average of 140 action intervals (without counting other). The approximate time spent by experts to fully annotate the dataset was around 350 hours.
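Since every frame carries exactly one label, the interval segmentation described above is just a run-length encoding of the per-frame label sequence. A minimal sketch (assuming the per-frame labels have already been loaded into a Python list; the actual annotations ship as Matlab files):

```python
from itertools import groupby

def frame_labels_to_intervals(labels):
    """Collapse a per-frame label sequence into (label, start, end) action
    intervals, with `end` exclusive. Hypothetical helper: assumes `labels`
    is a list with one category name per frame."""
    intervals = []
    start = 0
    for label, run in groupby(labels):
        length = sum(1 for _ in run)  # number of consecutive frames with this label
        intervals.append((label, start, start + length))
        start += length
    return intervals
```

For example, `frame_labels_to_intervals(["other", "other", "sniff", "sniff", "sniff", "attack"])` yields `[("other", 0, 2), ("sniff", 2, 5), ("attack", 5, 6)]`.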
- 07/10/2015: Added result prediction vectors from the paper
- 04/23/2014: Added links with features used in the paper and feature computation code
- 04/24/2012: Added links to full dataset
- 04/13/2012: Added password description, and full links
- 04/10/2012: Added dataset description and some links
- 03/21/2012: Initial version of site.
The videos are protected with a username/password. To get the password, please send an email
to xpburgos(at)caltech.edu with subject “CRIM13 download”. To prevent
abuse in the distribution of the videos, an institutional email
address is required. Institutional emails include academic ones, such
as firstname.lastname@example.org, and corporate ones, but not personal ones, such
as email@example.com or firstname.lastname@example.org.
-Annotation info containing behavior IDs and names, Matlab code to load annotations
and tracks, example code to use .seq videos, and trajectory+temporal
context feature computation.
-Videos, annotations and mouse positions (output of a tracking algorithm)
Small subset of the videos used in Section 5.2 of CVPR12 paper (18Gb) validation.zip
Full dataset (200Gb):
-Pre-computed features used in the paper
Training features + labels (7Gb) CRIM13_train_feats.zip
Testing features + labels (9Gb) CRIM13_test_feats.zip
-Results of our method reported in the paper
Full results on Test set (75Mb) CRIM13_res.zip
Required external code
Functions to load .seq videos and manipulate annotations, including
the GUI for labeling videos described in the paper (behaviorAnnotator)
are available in Piotr’s Matlab Toolbox.
Video example with the method’s output: output.zip
The Caltech Resident-Intruder Mouse dataset (CRIM13) consists of 237x2 videos (recorded with synchronized top and side view) of pairs of mice engaging in social behavior, catalogued into thirteen different actions. Each video lasts ~10min, for a total of 88 hours of video and 8 million frames. A team of behavior experts annotated each video on a frame-by-frame basis for a state-of-the-art study of the neurophysiological mechanisms involved in aggression and courtship in mice.
If you make use of CRIM13, please cite the following reference in any publications:
X.P. Burgos-Artizzu, P. Dollár, D. Lin, D.J. Anderson and P. Perona
Social Behavior Recognition in continuous videos.
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
For more details on the neurological study of behavior, see
D.Lin, M.P. Boyle, P. Dollár, H. Lee, E.S. Lein, P. Perona, D.J. Anderson
Functional identification of an aggression locus in the mouse hypothalamus.
Nature, 470(1):221-227, 2011.
We gratefully acknowledge Robert Robertson, who spent many long hours
annotating CRIM13 videos, as well as Dr. A. Steele for his work coordinating annotations.
This dataset was collected thanks to the full and continuous support from
the Gordon and Betty Moore Foundation, the Howard Hughes Medical Institute,
and ONR MURI Grant #N00014-10-1-0933.