We would like to draw the reader's attention to our method for the precise analysis of facial expressions. Our research pursued two main objectives: (1) developing methods and algorithms for the direct evaluation of facial expressions using computer vision (CV) technology; (2) creating software for conducting empirical research in psychology and related fields. We have developed an authentic implementation of cFACS (Rosenberg & Ekman, 2020) as an approach to computer-based facial expression analysis. We would like to stress that, at present, the neural network approach is de facto dominant. That approach rests on the basic emotions concept and on the assumption that a neural network can be trained to recognize emotions from a suitable set of images or, more rarely, videos. In our opinion, this approach has inherent limitations (Baev et al., 2021). We have instead applied a comprehensive approach to the analysis of facial activity, based on detecting and analyzing time-defined combinations of FACS action units (AUs). The chosen approach has clear advantages: 1) AUs serve as the units of facial activity analysis; 2) individual features of facial activity can be assessed; 3) racial and age biases become irrelevant. While developing the software, two major methodological principles were followed: 1) direct assessment of facial surface movements in order to detect AUs, coupled with a deliberate rejection of neural networks for classifying facial events; 2) modelling an expert's perception of the peculiarities of facial surface movements when detecting particular AUs, taking into account the general topography of those movements. We have developed the EmoRadar software using our proprietary CV procedures, which allow us to evaluate facial surface changes in the areas corresponding to different AUs.
In the process, a system of rules was created by which the "raw" data on lighting changes are transformed into surface movements; these, in turn, are converted into AUs and, further on, into basic emotions and patterns of facial activity. In the course of the empirical verification of our software, a method for the automated analysis of job interviews was created. We have also performed automatic evaluation of video recordings for forensic psychological examinations and analyzed the emotional dynamics of participants in the "SIRIUS-21" project of the Institute of Biomedical Problems of the Russian Academy of Sciences. We believe that implementing the above-mentioned principles of the direct approach to registering facial activity and the comprehensive approach to its analysis provides tangible advantages:
• High accuracy in assessing facial surface movements.
• Feasibility of conducting all kinds of facial analysis (basic emotions, basic-emotion emblems, facial activity patterns) based on the reliable detection of the 22 main AUs.
• Complete absence of racial bias.
• Feasibility of assessing children's facial activity (up to 5-7 years old).
• Independence from theoretical approaches to facial expression analysis (P. Ekman, K. Scherer, L. Feldman Barrett, H. Oster).

Keywords: FACS, emotions, affective computing, facial expressions, computer vision.

1. Introduction

In today's psychological studies, a toolkit for the automatic analysis of a person's emotional state from his or her facial expressions is in growing demand. In our opinion, something of a paradox has emerged: the volume of video recordings grows each year, yet no software that can reliably assess facial expressions at an expert level has been developed.
Specialists who can professionally code facial activity and categorize various facial expressions as manifestations of emotional states are critically few; given this

p-ISSN: 2184-2205 e-ISSN: 2184-3414 ISBN: 978-989-35106-0-5 © 2023
66