Table of contents:
Volume 119, Issue 5, May 2006
- SPEECH PROCESSING AND COMMUNICATION SYSTEMS 
119 (2006); http://dx.doi.org/10.1121/1.2188647
This paper presents a rule-based method for determining emotion-dependent features, which are defined from high-level features derived from statistical measurements of the prosodic parameters of speech. Emotion-dependent features are selected from the high-level features using extraction rules. The ratio of emotional-expression similarity between two speakers is defined by counting the emotion-dependent features present for both speakers and comparing their values. Emotional speech from the Interface databases is used to evaluate the proposed method, which was applied to emotional speech from five male and four female speakers in order to find similarities and differences among individual speakers. The speakers are actors who interpreted six emotions in four different languages. The results show that all speakers share some universal signs with respect to certain emotion-dependent features of emotional expression. Further analysis revealed that almost every speaker used a unique set of emotion-dependent features, and that each speaker used unique values for the defined emotion-dependent features. The comparison among speakers shows that expressed emotions can be analyzed according to two criteria: the defined set of emotion-dependent features, and the values of those features.
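The abstract's similarity ratio, combining which emotion-dependent features two speakers share with how closely the shared features' values agree, can be illustrated with a small sketch. This is a hypothetical reading of the idea, not the authors' exact formulation: the feature names, the relative tolerance `tol`, and the multiplicative combination of set overlap and value agreement are all assumptions.

```python
# Hypothetical sketch of the two comparison criteria from the abstract:
# (1) the set of emotion-dependent features selected for each speaker,
# (2) the values of the features the two speakers share.
# Each speaker is represented as {feature_name: value}; names and the
# weighting scheme below are illustrative assumptions.

def similarity_ratio(a: dict, b: dict, tol: float = 0.1) -> float:
    """Return a similarity score in [0, 1] between two speakers."""
    shared = set(a) & set(b)
    union = set(a) | set(b)
    if not union or not shared:
        return 0.0
    # Criterion 1: fraction of features selected for both speakers.
    set_overlap = len(shared) / len(union)
    # Criterion 2: fraction of shared features whose values agree
    # within a relative tolerance `tol`.
    close = sum(
        1 for f in shared
        if abs(a[f] - b[f]) <= tol * max(abs(a[f]), abs(b[f]), 1e-9)
    )
    value_agreement = close / len(shared)
    return set_overlap * value_agreement

# Illustrative pitch/energy statistics for one emotion from two speakers
# (invented values, not measurements from the paper).
spk1 = {"f0_mean_anger": 220.0, "f0_range_anger": 95.0, "energy_mean_anger": 0.8}
spk2 = {"f0_mean_anger": 228.0, "f0_range_anger": 60.0}
```

Here the two speakers share two of three features, but only the mean-pitch values agree within tolerance, so both the feature-set and feature-value criteria lower the score, mirroring the abstract's finding that speakers differ in both respects.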