Simplício, Carlos

Search Results

Now showing 1 - 4 of 4
  • A Face Attention Technique for a Robot Able to Interpret Facial Expressions
    Publication · Simplício, Carlos; Prado, José; Dias, Jorge
    Automatic recognition of facial expressions from vision is an important topic for human-robot interaction. We propose a human-face focus-of-attention technique and a facial-expression classifier (a Dynamic Bayesian Network) to be incorporated in an autonomous mobile agent whose hardware comprises a robotic platform and a robotic head. The focus-of-attention technique exploits the symmetry of human faces. Using the output of this module, the autonomous agent keeps the human face targeted frontally: the robotic platform travels along an arc centered on the human, and the robotic head moves in synchrony when necessary. In the proposed probabilistic classifier, information is propagated from the previous instant to the current one at a lower level of the network. Moreover, facial expressions are recognized using not only positive evidence but also negative evidence. (A minimal illustrative sketch of this kind of recursive update appears after this list.)
  • Comparing Bayesian Networks to Classify Facial Expressions
    Publication · Simplício, Carlos; Prado, José; Dias, Jorge
    This paper presents two distinct Bayesian networks for analysing human facial expressions. Both classifiers are completely defined: the structure of the networks, the belief variables and their events, the likelihoods, the initial priors, and the procedure for dynamically updating the priors. The performance of the two approaches is compared with respect to convergence. For both networks, classification associates the facial expression with the probabilities of five emotional states: anger, fear, happy, sad, and neutral. A justification for this set is given: it is based on the emotional states humans exhibit during social relationships. Classifiers such as those described here can be used in human-robot interaction. We believe this interaction should work much like the way humans communicate with one another, and facial expressions are one of the main non-verbal means of human communication.
  • Robot Emotional State through Bayesian Visuo-Auditory Perception
    Publication · Prado, José Augusto; Simplício, Carlos; Dias, Jorge
    In this paper we focus on auditory analysis as the sensory stimulus and on vocalization synthesis as the output signal. Our scenario is one robot interacting with one human through the vocalization channel. Note that vocalization goes far beyond speech: while speech analysis tells us what was said, vocalization analysis tells us how it was said. A social robot should be able to perform actions in different manners according to its emotional state. We therefore propose a novel Bayesian approach to determine the emotional state the robot should assume according to how the interlocutor is talking to it. Results show that the classification behaves as expected, converging to the correct decision after two iterations.
  • Visuo-auditory Multimodal Emotional Structure to Improve Human-Robot-Interaction
    Publication · Prado, José Augusto; Simplício, Carlos; Lori, Nicolás F.; Dias, Jorge
    We propose an approach to analyze and synthesize a set of human facial and vocal expressions, and to use the classified expressions to decide the robot's response in a human-robot interaction. During a human-to-human conversation, a person senses the interlocutor's face and voice, perceives her/his emotional expressions, and processes this information to decide which response to give. The observed emotions are taken into account, and the response may be aggressive, funny (henceforth meaning humorous), or just neutral, depending not only on the observed emotions but also on the person's personality. The purpose of the proposed structure is to endow robots with the capability to model human emotions, which requires solving several subproblems: feature extraction, classification, decision, and synthesis. In the proposed approach we integrate two classifiers for emotion recognition from audio and video, and then use a new method to fuse them with the social behavior profile. To keep the person engaged in the interaction, after each analysis iteration the robot synthesizes a human voice with both lip synchronization and facial expressions. The social behavior profile governs the robot's personality. The structure and workflow of the synthesis and decision steps are addressed, and the Bayesian networks are discussed. We also study how to analyze and synthesize emotion from facial and vocal expressions. A new probabilistic structure that enables a higher level of interaction between a human and a robot is proposed. (A minimal sketch of this kind of audio-visual fusion also appears after the list.)
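
The first two entries describe a Bayesian classifier over five emotional states whose priors are carried forward from one instant to the next. As a rough illustration of that recursive update, here is a minimal Python sketch; the state labels come from the abstracts, but the per-frame likelihood values, the smoothing step, and the `inertia` parameter are assumptions of ours, not the authors' network.

```python
# Minimal sketch of a recursive Bayesian update over the five emotional
# states named in the abstracts. NOT the authors' network: the per-frame
# likelihoods, the smoothing step, and `inertia` are illustrative.

STATES = ["anger", "fear", "happy", "sad", "neutral"]

def normalize(dist):
    total = sum(dist.values())
    return {s: p / total for s, p in dist.items()}

def update(prior, likelihood):
    """One Bayes step: P(state | evidence) is proportional to
    P(evidence | state) * P(state)."""
    return normalize({s: likelihood[s] * prior[s] for s in STATES})

def propagate(posterior, inertia=0.8):
    """Carry the posterior forward as the next instant's prior, smoothed
    toward uniform so no state's probability collapses to zero."""
    uniform = 1.0 / len(STATES)
    return normalize({s: inertia * p + (1.0 - inertia) * uniform
                      for s, p in posterior.items()})

# Illustrative per-frame likelihoods P(evidence | state); a real system
# would derive these from facial-feature measurements.
frames = [
    {"anger": 0.10, "fear": 0.10, "happy": 0.60, "sad": 0.10, "neutral": 0.10},
    {"anger": 0.05, "fear": 0.05, "happy": 0.70, "sad": 0.05, "neutral": 0.15},
]

belief = {s: 1.0 / len(STATES) for s in STATES}  # uniform initial prior
for lk in frames:
    belief = propagate(update(belief, lk))
print(max(belief, key=belief.get))  # -> "happy"
```

The smoothing in `propagate` is one simple way to keep the recursion from locking onto a single state; the papers' networks define their own dynamic-prior procedure.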
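The last two entries combine auditory and visual evidence in a Bayesian framework. The sketch below shows one simple way such a fusion could work, assuming the two modalities are conditionally independent given the emotional state; the fusion rule and all numbers are illustrative assumptions, not the method of the papers.

```python
# Minimal sketch of naive-Bayes fusion of audio and video evidence about
# the interlocutor's emotion. The conditional-independence assumption
# and every number below are illustrative, not taken from the papers.

STATES = ["anger", "fear", "happy", "sad", "neutral"]

def fuse(prior, audio_lk, video_lk):
    """Treat the two modalities as conditionally independent given the
    emotional state, so their likelihoods multiply."""
    post = {s: prior[s] * audio_lk[s] * video_lk[s] for s in STATES}
    total = sum(post.values())
    return {s: p / total for s, p in post.items()}

belief = {s: 1.0 / len(STATES) for s in STATES}  # uniform initial prior
audio = {"anger": 0.50, "fear": 0.15, "happy": 0.10, "sad": 0.15, "neutral": 0.10}
video = {"anger": 0.40, "fear": 0.20, "happy": 0.10, "sad": 0.20, "neutral": 0.10}

# The third abstract reports convergence to the correct decision after
# two iterations; here we simply re-apply the same evidence twice.
for _ in range(2):
    belief = fuse(belief, audio, video)
print(max(belief, key=belief.get))  # -> "anger"
```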