The following is a directory of databases containing face stimulus sets available for use in behavioral studies. Please read the rights, permissions, and licensing information on each database's webpage before proceeding, obtain any required permissions, and credit/cite as requested by the creators.
Last updated: 20 JAN 2022
The 10k US Adult Faces Database contains 10,168 natural face photographs and several measures for 2,222 of the faces, including memorability scores, computer vision and psychology attributes, and landmark point annotations. The face photographs are JPEGs with 72 pixels/in resolution and 256-pixel height.
Citation: Bainbridge, W. A., Isola, P., & Oliva, A. (2013). The intrinsic memorability of face images. Journal of Experimental Psychology: General, 142(4), 1323-1334.
Contact: brainbridgelab@gmail.com
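If a processing pipeline depends on the stated image format, it can be worth verifying a local download before use. Below is a minimal Python sketch; the folder name "face_photos" and the use of the Pillow library are illustrative assumptions, not part of the database's documentation.

    # Check that downloaded photographs match the stated format
    # (JPEG, 256-pixel height). "face_photos" is a hypothetical folder name.
    from pathlib import Path
    from PIL import Image

    for path in sorted(Path("face_photos").glob("*.jpg")):
        with Image.open(path) as img:
            if img.height != 256:
                print(f"{path.name}: unexpected size {img.width}x{img.height}")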
The American Multiracial Face Database contains 110 faces (smiling and neutral expression poses) of individuals with mixed-race heritage, along with ratings of those faces by naive observers; it is freely available to academic researchers. The faces were rated on attractiveness, emotional expression, racial ambiguity, masculinity, racial group membership(s), gender group membership(s), warmth, competence, dominance, and trustworthiness.
Citation: Chen, J. M., Norman, J. B., & Nam, Y. (2020). Broadening the stimulus set: Introducing the American Multiracial Faces Database. Behavior Research Methods. https://doi.org/10.3758/s13428-020-01447-8
The ADFES is a rich stimulus set comprising 648 filmed emotional expressions. The set features displays of nine emotions: the six ‘basic’ emotions (anger, disgust, fear, joy, sadness, and surprise), as well as contempt, pride and embarrassment expressed by 22 Northern-European and Mediterranean models (10 female, 12 male).
Citation: Van der Schalk, J., Hawk, S. T., Fischer, A. H., & Doosje, B. J. (in press). Moving faces, looking places: The Amsterdam Dynamic Facial Expressions Set (ADFES). Emotion.
Contact: Agneta Fischer, a.h.fischer@uva.nl
Our Database of Faces (formerly 'The ORL Database of Faces') contains a set of face images taken between April 1992 and April 1994 at the lab. There are ten different images of each of 40 distinct subjects. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open/closed eyes, smiling/not smiling), and facial details (glasses/no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement).
Citation: Samaria, F. S. (1994). Face recognition using hidden Markov models (Doctoral dissertation, University of Cambridge).
Contact: AT&T Laboratories Cambridge
The Basel Face Database (BFD) is built upon portrait photographs of forty different individuals. All of these photographs have been manipulated to appear more or less agentic and communal (Big Two personality dimensions), as well as open to experience, conscientious, extraverted, agreeable, and neurotic (Big Five personality dimensions). Thus, the database consists of forty photographs of different individuals and 14 variations of each of them, signaling different personalities. Using this database therefore allows researchers to investigate the impact of personality on different outcome variables in a very systematic way.
Citation: Walker, M., Schönborn, S., Greifeneder, R., & Vetter, T. (2018). The Basel Face Database: A validated set of photographs reflecting systematic differences in Big Two and Big Five personality dimensions. PLoS ONE, 13(3), e0193190. https://doi.org/10.1371/journal.pone.0193190
Contact: Mirella Walker
The Bogazici Face Database is a database of Turkish undergraduate student targets. High-resolution standardized photographs were taken and supported by the following materials: (a) basic demographic and appearance-related information, (b) two types of landmark configurations (for Webmorph and geometric morphometrics (GM)), (c) facial width-to-height ratio (fWHR) measurement, (d) information on photography parameters, (e) perceptual norms provided by raters.
Citation: Saribay SA, Biten AF, Meral EO, Aldan P, Třebický V, Kleisner K (2018) The Bogazici face database: Standardized photographs of Turkish faces with supporting materials. PLoS ONE 13(2): e0192018. https://doi.org/10.1371/journal.pone.0192018
The dataset contains images of people collected from the web by typing common given names into Google Image Search. The coordinates of the eyes, the nose and the center of the mouth for each frontal face are provided in a ground truth file. This information can be used to align and crop the human faces or as a ground truth for a face detection algorithm. The dataset has 10,524 human faces of various resolutions and in different settings, e.g. portrait images, groups of people, etc. Profile faces or very low-resolution faces are not labeled.
Citation: Angelova, A., Abu-Mostafa, Y., & Perona, P. (2005). Pruning training sets for learning of object categories. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Contact: Anelia Angelova, anelia@caltech.edu
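Because the ground truth file provides eye coordinates, the frontal faces can be aligned and cropped programmatically, as the description suggests. The following is a minimal Python sketch, assuming the left- and right-eye pixel coordinates have already been parsed from the ground truth file (its exact layout is not reproduced here); the crop margin and output size are arbitrary illustrative choices.

    # Rotate a face so the inter-ocular axis is horizontal, then crop a
    # square around the midpoint between the eyes. Eye positions are (x, y)
    # pixel tuples taken from the ground truth file (parsing not shown).
    import math
    from PIL import Image

    def align_face(img, left_eye, right_eye, out_size=256):
        dx = right_eye[0] - left_eye[0]
        dy = right_eye[1] - left_eye[1]
        angle = math.degrees(math.atan2(dy, dx))  # tilt of the eye axis
        center = ((left_eye[0] + right_eye[0]) / 2,
                  (left_eye[1] + right_eye[1]) / 2)
        leveled = img.rotate(angle, center=center, resample=Image.BILINEAR)
        half = int(2.0 * math.hypot(dx, dy))      # arbitrary crop margin
        box = (int(center[0] - half), int(center[1] - half),
               int(center[0] + half), int(center[1] + half))
        return leveled.crop(box).resize((out_size, out_size), Image.BILINEAR)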
The Chicago Face Database was developed at the University of Chicago by Debbie S. Ma, Joshua Correll, and Bernd Wittenbrink. The CFD is intended for use in scientific research. It provides high-resolution, standardized photographs of male and female faces of varying ethnicity, between the ages of 17 and 65. Extensive norming data are available for each individual model. These data include both physical attributes (e.g., face size) and subjective ratings by independent judges (e.g., attractiveness). The database consists of a main image set and several extension sets.
CFD
CFD-MR
CFD-INDIA
Contact: Bernd Wittenbrink, bernd.wittenbrink@chicagobooth.edu
The Child Affective Facial Expressions Set (CAFE) is the first large and representative set of children posing a variety of affective facial expressions that can be used for scientific research. The set is made up of nearly 1200 photographs of over 100 children (ages 2-8) making 7 different facial expressions: happy, angry, sad, fearful, surprised, neutral, and disgusted.
Citation: LoBue, V., & Thrasher, C. (2015). The Child Affective Facial Expression (CAFE) set: Validity and reliability from untrained adults. Frontiers in Psychology, 5.
A novel emotional database that contains movie clips/dynamic images of 12 ethnically diverse children. It captures spontaneous, natural facial expressions of children in diverse settings and recording scenarios, showing six universal or prototypic emotional expressions (happiness, sadness, anger, surprise, disgust, and fear). Children were recorded in a constraint-free environment (no restrictions on head or hand movement, free seating, no restrictions of any sort) while they watched specially built or selected stimuli, allowing spontaneous, natural expressions to be captured as they occurred.
Citation: Khan, R. A., Crenn, A., Meyer, A., & Bouakaz, S. (2019). A novel database of Children's Spontaneous Facial Expressions (LIRIS-CSE). Image and Vision Computing, 83-84 (March-April 2019). arXiv preprint arXiv:1812.01555 (2018).
Contact: Request Form
This database contains 60 photographs of positive infant faces, 54 photographs of negative infant faces, and 40 photographs of neutral infant faces. The images have high criterion validity and good test–retest reliability.
Citation: Webb, R., Ayers, S. & Endress, A. The City Infant Faces Database: A validated set of infant facial expressions. Behav Res 50, 151–159 (2018). https://doi.org/10.3758/s13428-017-0859-9
Contact: Rebecca Webb
The CMU Multi-PIE face database contains more than 750,000 images of 337 people recorded in up to four sessions over the span of five months. Subjects were imaged under 15 viewpoints and 19 illumination conditions while displaying a range of facial expressions.
Citation: Sim, T., Baker, S., & Bsat, M. (2001). The CMU pose, illumination and expression database of human faces. Carnegie Mellon University Technical Report CMU-RI-TR-01-02.
Contact: Ralph Gross, ralph@multipie.org
The Cohn-Kanade AU-Coded Facial Expression Database affords a test bed for research in automatic facial image analysis and is available for use by the research community. Image data consist of approximately 500 image sequences from 100 subjects. Accompanying meta-data include annotation of FACS action units and emotion-specified expressions. Subjects range in age from 18 to 30 years; 65 percent were female, 15 percent African-American, and 3 percent Asian or Latino.
Subjects were instructed by an experimenter to perform a series of 23 facial displays that included single action units (e.g., AU 12, or lip corners pulled obliquely) and action unit combinations (e.g., AU 1+2, or inner and outer brows raised). Each begins from a neutral or nearly neutral face. For each, an experimenter described and modeled the target display. Six were based on descriptions of prototypic emotions (i.e., joy, surprise, anger, fear, disgust, and sadness).
Citation: Kanade, T., Cohn, J. F., & Tian, Y. (2000, March). Comprehensive database for facial expression analysis. In Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580) (pp. 46-53). IEEE.
Contact: Takeo Kanade, kanade@andrew.cmu.edu
The Complex Emotion Expression Database (CEED) is a digital stimulus set of 243 basic and 237 complex emotional facial expressions. The stimuli represent six basic expressions (angry, disgusted, fearful, happy, sad, and surprised) and nine complex expressions (affectionate, attracted, betrayed, brokenhearted, contemptuous, desirous, flirtatious, jealous, and lovesick) posed by formally trained Black and White young adult actors.
Citation: Benda MS, Scherf KS (2020) The Complex Emotion Expression Database: A validated stimulus set of trained actors. PLoS ONE 15(2): e0228248. https://doi.org/10.1371/journal.pone.0228248
The Computer Vision Laboratory (CVL) Face Database contains photographs of 114 persons approximately 18 years of age, 7 images per person.
Citation: In Mirage 2003: Conference on Computer Vision / Computer Graphics Collaboration for Model-based Imaging, Rendering, Image Analysis and Graphical Special Effects (INRIA Rocquencourt, France, March 10-11, 2003), Wilfried Philips (Ed.), INRIA, pp. 38-47.
Contact: Peter Peer, peter.peer@fri.uni-lj.si
The Dartmouth Database of Children's Faces contains images of 40 male and 40 female models between the ages of 6 and 16. Models are photographed on a black background and are wearing black bibs and black hats to cover hair and ears. They are photographed from 5 different camera angles and pose 8 different facial expressions. Models were rated by independent raters and are ranked for the overall believability of their poses.
Citation: Dalrymple, K. A., Gomez, J., & Duchaine, B. (2013). The Dartmouth Database of Children's Faces: Acquisition and validation of a new face stimulus set. PLoS ONE, 8(11), e79131.
Contact: Kristen Dalrymple, kad@umn.edu
The Face Database consists of 575 individual faces ranging in age from 18 to 93. The database was developed to be more representative of age groups across the lifespan, with a special emphasis on recruiting older adults. The resulting database has faces of 218 adults aged 18-29, 76 adults aged 30-49, 123 adults aged 50-69, and 158 adults aged 70 and older.
Citation: Minear, M., & Park, D. C. (2004). A lifespan database of adult facial stimuli. Behavior Research Methods, Instruments, & Computers, 36(4), 630-633. https://doi.org/10.3758/bf03206543
Until recently, an index of face databases, their features, and how to access them has been unavailable. The "Face Image Meta-Database" (fIMDb) provides researchers with the tools to find the face images best suited to their research. The fIMDb is available from: https://cliffordworkman.com/resources/
Citation: Workman, C. I., & Chatterjee, A. (2021). The Face Image Meta-Database (fIMDb) & ChatLab Facial Anomaly Database (CFAD): Tools for research on face perception and social stigma. Methods in Psychology, 100063. https://doi.org/10.1016/j.metip.2021.100063
This dataset includes multiple photographs for over 200 individuals of many different races with consistent lighting, multiple views, real emotions, and disguises (and some participants returned for a second session several weeks later with a haircut, or a new beard, etc.). The images are JPEGs, 250x250 pixels, 72 dpi, 24-bit color.
Citation: Righi, G., Peissig, J. J., & Tarr, M. J. (2012). Recognizing disguised faces. Visual Cognition, 20(2), 143-169. doi:10.1080/13506285.2012.654624
Contact: Tarr Lab, Carnegie Mellon University, tarrlab@gmail.com
FERET
Color FERET
Contact: P. Jonathon Phillips, jonathon.phillips@nist.gov
The London Set contains images of 102 adult faces, 1350x1350 pixels, in full color.
Citation: DeBruine, Lisa; Jones, Benedict (2017): Face Research Lab London Set. figshare. Dataset. https://doi.org/10.6084/m9.figshare.5047666.v5
Face Research Toolkit: A free and open-source toolkit of three-dimensional models and software to study face perception. Contains 8 manipulatable facial expression models.
Citation: Hays, J. S., Wong, C., & Soto, F. (2020). FaReT: A free and open-source toolkit of three-dimensional models and software to study face perception. Behavior Research Methods, 52(6), 2604-2622.
Contact: Fabian Soto, Florida International University
Dynamic FACES
Scrambled FACES
A dataset with a total of 106,863 face images of 530 male and female celebrities, with about 200 images per person. As such, it is one of the largest public face databases.
Citation: H.-W. Ng, S. Winkler. A data-driven approach to cleaning large face datasets. Proc. IEEE International Conference on Image Processing (ICIP), Paris, France, Oct. 27-30, 2014.
Contact: Request Form
The Faces and Motion Exeter Database (FAMED) is a video database of 32 male actors for use in psychological research. Each actor was filmed from two viewpoints (full-face and three-quarter) whilst they performed a series of facial motions including the telling of three jokes, a short conversation, six facial expressions (smiling, anger, fear, disgust, surprise and sadness) and rigid motion such as head rotation from left to right and up and down. The actors performed all actions three times; once with no headgear, once wearing a swimming cap to hide hair cues and once whilst wearing a wig.
Citation: Longmore, C. A., & Tree, J. J. (2013). Motion as a cue to face recognition: Evidence from congenital prosopagnosia. Neuropsychologia, 51, 864-875
Contact: Chris Longmore, chris.longmore@plymouth.ac.uk
The FEI Face Database is a Brazilian face database containing a set of face images taken between June 2005 and March 2006 at the Artificial Intelligence Laboratory of FEI in São Bernardo do Campo, São Paulo, Brazil. There are 14 images for each of 200 individuals, a total of 2,800 images. All images are in color and taken against a white homogeneous background, with the subjects in an upright frontal position and with profile rotation of up to about 180 degrees. Scale may vary by about 10%, and the original size of each image is 640x480 pixels. The faces are mainly those of students and staff at FEI, between 19 and 40 years old, with distinct appearance, hairstyles, and adornments. There are exactly 100 male and 100 female subjects.
Contact: Carlos Eduardo Thomaz, cet@fei.edu.br
An image database containing face images of a number of subjects performing the six basic emotions defined by Ekman & Friesen. The database was developed to assist researchers who investigate the effects of different facial expressions.
Citation: Wallhoff, F., Schuller, B., Hawellek, M., & Rigoll, G. (2006). Efficient recognition of authentic dynamic facial expressions on the Feedtum database. In IEEE ICME (pp. 493-496). IEEE Computer Society.
Contact: Frank Wallhoff, frank.wallhoff@jade-hs.de.
This database contains three images of each of 303 identities (each taken using a separate camera), similarity data quantifying the perceived similarity between any two identities, and 20 images per identity extracted from a video clip for the purpose of familiarisation.
Contact: Mike Burton, mike.burton@york.ac.uk
JACFEE: Japanese and Caucasian Facial Expressions of Emotion
JACNeuf: Japanese and Caucasian Neutral Faces
The Japanese Female Facial Expression (JAFFE) Dataset contains 213 images of 10 Japanese female expressers.
Citation: Lyons, Michael, Kamachi, Miyuki, & Gyoba, Jiro. (1998). The Japanese Female Facial Expression (JAFFE) Dataset [Data set]. Zenodo. https://doi.org/10.5281/zenodo.3451524
Contact: Michael Lyons, ORCID
The Karolinska Directed Emotional Faces (KDEF) is a set of 4,900 pictures of human facial expressions of emotion. The set contains 70 individuals, each displaying 7 different emotional expressions, with each expression photographed twice from 5 different angles (70 x 7 x 2 x 5 = 4,900).
Citation: Goeleven, E., De Raedt, R., Leyman, L., & Verschuere, B. (2008). The Karolinska directed emotional faces: a validation study. Cognition and emotion, 22(6), 1094-1118.
Contact: Emotion Lab at Karolinska Institutet
Labeled Faces in the Wild (LFW) is a database of face photographs designed for studying the problem of unconstrained face recognition. The data set contains more than 13,000 images of faces collected from the web.
Citation: Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. University of Massachusetts, Amherst, Technical Report 07-49, October, 2007.
Contact: Gary Huang, gbhuang@cs.umass.edu
This database contains
Makeup Datasets contain four datasets of female face images assembled for studying the impact of makeup on face recognition.
YouTube Makeup (YMU)
Virtual Makeup (VMU)
Makeup in the Wild (MIW)
Makeup Induced Face Spoofing (MIFS)
These sets contain stimuli for use in our studies on cross-racial face recognition and identification. The sets are available by email request to Dr. Meissner for those seeking to conduct research on face identification. Our stimuli currently include African American and Caucasian male faces in two poses (smiling with casual clothing, and non-smiling with a burgundy sweatshirt).
Citation: Meissner, C. A., Brigham, J. C., & Butz, D. A. (2005). Memory for own‐and other‐race faces: A dual‐process approach. Applied Cognitive Psychology: The Official Journal of the Society for Applied Research in Memory and Cognition, 19(5), 545-567.
Contact: Christian Meissner, cmeissner@utep.edu
The Montreal Set of Facial Displays of Emotion (MSFDE) consists of emotional facial expressions by men and women of European, Asian, and African descent. Each expression was created using a directed facial action task, and all expressions were FACS-coded to ensure identical expressions across actors.
The set contains expressions of happiness, sadness, anger, fear, disgust, and embarrassment as well as a neutral expression for each actor.
Contact: Social Psychophysiology Laboratory, Université du Québec à Montréal
The Academic MORPH database (non-commercial) was collected over a span of 5 years, with numerous images of the same subject (longitudinal). This is not a controlled collection (i.e., it was collected in real-world conditions). The dataset also contains metadata in the form of age, gender, and race. The database has 55,134 images of 13,618 subjects.
Contact: Available for purchase
The MMI Facial Expression Database is an ongoing project that aims to deliver large volumes of visual data of facial expressions to the facial expression analysis community. The database consists of over 2,900 videos and high-resolution still images of 75 subjects.
Citation: Valstar, M., & Pantic, M. (2010, May). Induced disgust, happiness and surprise: an addition to the mmi facial expression database. In Proc. 3rd Intern. Workshop on EMOTION (satellite of LREC): Corpora for Research on Emotion and Affect (p. 65).
Contact: mmi_face_db@mahnob-db.eu
The MR2 is a multi-racial, mega-resolution database of facial stimuli, created in collaboration with the psychologist Kurt Gray and the photographer Titus Brooks Heagins. It contains 74 full-color images of men and women of European, African, and East Asian descent.
Citation: Strohminger, N., Gray, K., Chituc, V., Heffner, J., Schein, C., and Heagins, T.B. (in press). The MR2: A multi-racial mega-resolution database of facial stimuli. Behavior Research Methods.
Contact: Nina Strohminger, humean@wharton.upenn.edu
The MUCT Face Database consists of 3755 faces with 76 manual landmarks. The database was created to provide more diversity of lighting, age, and ethnicity than currently available landmarked 2D face databases.
Citation: Milborrow, S., Morkel, J., & Nicolls, F. (2010). The MUCT landmarked face database. In Proceedings of the Pattern Recognition Association of South Africa (PRASA 2010).
Contact: Stephen Milborrow, milbo@sonic.net
The NimStim Set of Facial Expressions is a broad dataset comprising 672 naturally posed photographs of 43 professional actors (18 female, 25 male) ranging from 21 to 30 years old. Actors from a diverse sample were chosen to portray the emotional expressions: African-American (N = 10), Asian-American (N = 6), European-American (N = 25), and Latino-American (N = 2). The dataset includes eight emotional expressions: neutral, angry, disgust, surprise, sad, calm, happy, and afraid. Both open- and closed-mouth versions are provided for all expressions, with the exception of surprise (open mouth only) and happy (a high-arousal, open-mouth/exuberant version is also provided).
Citation: Tottenham, N., Tanaka, J. W., Leon, A. C., McCarry, T., Nurse, M., Hare, T. A., ... & Nelson, C. (2009). The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Research, 168(3), 242-249.
Contact: Nim Tottenham, nlt7@columbia.edu
The Oslo Face Database consists of ~200 male and female faces of neutral expression with three gaze directions: left, center and right. The photos were taken in 2012 of students from the University of Oslo.
The Oulu-CASIA NIR&VIS facial expression database contains videos of the six typical expressions (happiness, sadness, surprise, anger, fear, and disgust) from 80 subjects, captured with two imaging systems, NIR (near infrared) and VIS (visible light), under three different illumination conditions: normal indoor illumination, weak illumination (only the computer display on), and dark illumination (all lights off).
Citation: Zhao, G., Huang, X., Taini, M., Li, S. Z., & PietikäInen, M. (2011). Facial expression recognition from near-infrared videos. Image and Vision Computing, 29(9), 607-619.
Contact: Guoying Zhao, guoying.zhao@oulu.fi
The POFA collection consists of 110 photographs of facial expressions that have been widely used in cross-cultural studies, and more recently, in neuropsychological research. All images are black and white. A brochure providing norms is included with the collection. It is important to note that these images are not identical in intensity or facial configuration.
Contact: Paul Ekman
The Psychological Image Collection at Stirling (PICS) contains two databases of face images.
Stirling/ESRC 3d face database
Citation: varies
Contact: Peter Hancock, pjbh1@stir.ac.uk
The Radboud Faces Database (RaFD) is a set of pictures of 67 models (including Caucasian males and females, Caucasian children, both boys and girls, and Moroccan Dutch males) displaying 8 emotional expressions. The RaFD is an initiative of the Behavioural Science Institute of Radboud University Nijmegen (the Netherlands) and can be used freely for non-commercial scientific research by researchers who work for an officially accredited university.
RADIATE is an open-access face stimulus set of 1,721 racially diverse expressions. Sixteen different emotions are included, in color and in black-and-white versions. (Scroll down for the link to the dataset.)
Citation: Conley, M. I., Dellarco, D. V., Rubien-Thomas, E., Cohen, A. O., Cervera, A., Tottenham, N., & Casey, B. J. (2018). The racially diverse affective expression (RADIATE) face stimulus set. Psychiatry Research.
The Sheffield Face Database (previously UMIST) consists of 564 images of 20 individuals (mixed race/gender/appearance). Each individual is shown in a range of poses from profile to frontal views, each in a separate directory labelled 1a, 1b, … 1t, with images numbered consecutively as they were taken. The files are all in PGM format, approximately 220x220 pixels, with 256 grey levels.
Citation: Wechsler, H., Phillips, J. P., Bruce, V., Soulie, F. F., & Huang, T. S. (Eds.). (2012). Face recognition: From theory to applications (Vol. 163). Springer Science & Business Media.
Contact: Laboratory of Vision Engineering (LoVE), University of Lincoln
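Since PGM is a simple uncompressed format, most imaging libraries read it directly. Below is a minimal Python sketch for loading the per-subject directories into grey-scale arrays; the root folder name "sheffield_faces" is a placeholder, and the Pillow and NumPy libraries are assumed.

    # Load the Sheffield (UMIST) images, one list of grey-scale arrays per
    # subject directory (1a, 1b, ... 1t). "sheffield_faces" is a placeholder.
    from pathlib import Path
    import numpy as np
    from PIL import Image

    faces = {}
    for subject_dir in sorted(Path("sheffield_faces").iterdir()):
        if subject_dir.is_dir():
            faces[subject_dir.name] = [np.asarray(Image.open(f))
                                       for f in sorted(subject_dir.glob("*.pgm"))]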
Several databases of computer-generated synthetic faces.
Citations: varies
Contact: Alexander Todorov, University of Chicago
Database 1
Database 2
Database 3
Database 4
Database 5
Database 6
Database 7
1,400 faces manipulated on face shape and reflectance by gender-specific models built by Oh, Dotsch, Porter, & Todorov (2020): 25 (face identities) x 2 (gender models: for males and females) x 2 (trait dimensions: perceived dominance and trustworthiness) x 7 (parametric face manipulations, ranging from -3 to +3 SD in steps of 1 SD) x 2 (face gender: male and female). The full design grid is sketched after this list.
Database 8
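The factorial design described for the 1,400-face set above multiplies out as 25 x 2 x 2 x 7 x 2 = 1,400. The following Python sketch enumerates that design grid and confirms the count; the level labels are illustrative and do not reflect the database's actual file-naming scheme.

    # Enumerate the full factorial design and confirm it yields 1,400 cells.
    # Labels are illustrative only.
    from itertools import product

    identities   = range(1, 26)                      # 25 face identities
    models       = ["male_model", "female_model"]    # 2 gender-specific models
    traits       = ["dominance", "trustworthiness"]  # 2 trait dimensions
    sd_levels    = range(-3, 4)                      # -3 SD to +3 SD in 1 SD steps
    face_genders = ["male", "female"]                # 2 face genders

    grid = list(product(identities, models, traits, sd_levels, face_genders))
    assert len(grid) == 1400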
The UB KinFace database is used to develop, test, and evaluate kinship verification and recognition algorithms. It comprises 600 images of 400 people, which can be separated into 200 groups; each group is composed of child, young-parent, and old-parent images. Most of the images are real-world collections of public figures (celebrities and politicians) from the Internet. To the best of our knowledge, it is the first database that contains child, young-parent, and old-parent images for the purpose of kinship verification.
Citation: Ming Shao, Siyu Xia and Yun Fu, “Genealogical Face Recognition based on UB KinFace Database,” IEEE CVPR Workshop on Biometrics (BIOM), 2011.
Contact: Yun Raymond Fu, yunfu@ece.neu.edu
This database contains 550 photos of US politicians who competed either in a gubernatorial race (248) or in a House race (302). The database also contains the politicians' perceived competence as judged from their photos, measured with forced-choice competence judgments by participants unfamiliar with the politicians. As such, these judgments simply reflect perceptions and are in no way indicative of the politicians' actual competence.
Contact: Alexander Todorov, University of Chicago
The Yale Face Database (size 6.4MB) contains 165 grayscale images in GIF format of 15 individuals. There are 11 images per subject, one per different facial expression or configuration: center-light, w/glasses, happy, left-light, w/no glasses, normal, right-light, sad, sleepy, surprised, and wink.
The Yale Face Database B (1GB) contains 5760 single light source images of 10 subjects each seen under 576 viewing conditions (9 poses x 64 illumination conditions).
Citation: varies
Contact: UCSD Computer Vision
The Yonsei Face Database (YFace DB) consists of both static and dynamic face stimuli for six basic emotions (happiness, sadness, anger, surprise, fear, and disgust). The database includes selected pictures (static stimuli) and film clips (dynamic stimuli) of 74 models (50% female) aged between 19 and 40.
Citation: Chung, K. M., Kim, S. J., Jung, W. H., & Kim, V. Y. (2019). Development and validation of the Yonsei Face Database (YFace DB). Frontiers in Psychology, 10, 2626. https://doi.org/10.3389/fpsyg.2019.02626