Abstract(s)
In this paper we focus on auditory analysis as the sensory stimulus,
and on vocalization synthesis as the output signal. Our scenario is one
robot interacting with one human through the vocalization channel. Note that
vocalization goes far beyond speech: while speech analysis tells us what
was said, vocalization analysis tells us how it was said. A social robot should be
able to perform actions in different manners according to its emotional state.
We therefore propose a novel Bayesian approach to determine the emotional state
the robot should assume according to how the interlocutor is talking to it. Results
show that the classification behaves as expected, converging to the correct
decision after two iterations.
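The core mechanism the abstract describes, recursively updating a belief over emotional states as vocalization evidence arrives, can be sketched as follows. This is a minimal illustration only: the state set, the per-iteration likelihoods, and the bayes_update helper are hypothetical placeholders, since the paper's actual model and vocalization features are not given in this record.

```python
import numpy as np

# Hypothetical emotional states; the paper's actual state set is not listed here.
STATES = ["happy", "neutral", "angry"]

def bayes_update(prior, likelihood):
    """One recursive Bayesian step: posterior proportional to likelihood * prior."""
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Uniform prior over the robot's emotional states.
belief = np.full(len(STATES), 1.0 / len(STATES))

# Illustrative likelihoods P(vocalization features | state) for two iterations;
# in the paper these would come from auditory analysis of the interlocutor.
observations = [
    np.array([0.2, 0.3, 0.5]),  # iteration 1: evidence leans toward "angry"
    np.array([0.1, 0.2, 0.7]),  # iteration 2: stronger evidence for "angry"
]

for i, likelihood in enumerate(observations, start=1):
    belief = bayes_update(belief, likelihood)
    print(f"iteration {i}: belief = {belief.round(3)}")
```

With these made-up numbers the belief mass concentrates on a single state after the second update, which mirrors the convergence after two iterations reported in the abstract.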
Keywords
Bayesian Approach; Auditory Perception; Robot Emotional State; Vocalization
Citation
José Augusto Prado, Carlos Simplício, Jorge Dias. Robot Emotional State through Bayesian Visuo-Auditory Perception. 2nd Doctoral Conference on Computing, Electrical and Industrial Systems (DoCEIS), Feb 2011, Costa de Caparica, Portugal. pp. 165-172, ⟨10.1007/978-3-642-19170-1_18⟩
Publisher
Springer Berlin Heidelberg
