Speech analytics: Why the big data source isn't music to your competitors' ears
UCLA Professor Albert Mehrabian’s 1971 book Silent Messages presented his research on non-verbal communication. Mehrabian concluded that only 7% of a speaker’s message was conveyed by the words themselves, while 55% of the meaning came through body language and the remaining 38% through tone of voice.
Since then, some have disputed those findings and others have tried to clarify them. One thing everyone agrees on is that there is more to verbal communication than what people express at face value.
This is a salient issue in big data, because voice is a non-traditional data source whose analysis is still in its infancy. The question is: Are organizations missing out on business value by ignoring voice as an analytics source? Yes.
Case study #1
Sekerbank, one of the leading financial institutions in Turkey, wanted to improve the results of customer interactions in its Desmer telephone contact center, but there was no way to manually and accurately analyze all of the calls that came in each day. The bank was also aware of the impact that poor customer contacts had on customer satisfaction and the perception of its overall corporate image.
The bank decided to adopt speech-to-text technology that, by transcribing calls to digital text, enabled analytics to uncover root causes and hidden insights. It also implemented emotion detection software that analyzed problematic conversations resulting from customer dissatisfaction, using the ratios of anger, monotony, interruption, and silence as evaluative criteria, and published the important findings.
Textual and emotional analysis and various data mining methods were applied to the transcribed audio, and Sekerbank gained important insights about call center efficiency, agent performance, and customer satisfaction. The analytics work contributed to reductions in call center operating costs, and the call center’s overall handle time and first call resolution rates improved. The bank was also able to assess customers’ reactions to marketing efforts and attitudes toward competitors.
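The evaluative criteria above (ratios of anger, monotony, interruption, and silence) can be sketched in a few lines of Python. The segment labels, durations, and flagging thresholds below are illustrative assumptions, not Sekerbank's actual software or schema:

```python
from collections import Counter

def emotion_ratios(segments):
    """Return each label's share of total call duration.

    segments: list of (label, duration_seconds) tuples for one call,
    as might be produced by emotion-detection software.
    """
    totals = Counter()
    for label, duration in segments:
        totals[label] += duration
    call_length = sum(totals.values())
    return {label: dur / call_length for label, dur in totals.items()}

# Hypothetical labeled segments for one transcribed call.
call = [
    ("speech", 90.0),
    ("anger", 18.0),
    ("silence", 18.0),
    ("interruption", 6.0),
    ("monotony", 24.0),
]

ratios = emotion_ratios(call)
# Flag the call for supervisor review when anger or silence dominates;
# the 10% / 25% thresholds are assumptions for illustration.
flagged = ratios.get("anger", 0) > 0.10 or ratios.get("silence", 0) > 0.25
```

Applied across every call in a day, ratios like these make it possible to rank conversations by likely dissatisfaction instead of sampling a handful of recordings by hand.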
Case study #2
In a use case cited by industry research firm GigaOm, one company said that by listening to just 10 seconds of a person speaking, it could analyze the patterns of high and low intonations to determine several emotional dimensions.
For example, in an analysis of a clip of President Barack Obama speaking in a debate against then Republican presidential nominee Mitt Romney, the company’s voice analytics software detected the primary emotions of “practicality, anger and great strength,” with underlying hints of “provocation, cynicism and ridicule.”
The company claims that its voice analytics are 81% accurate for phonetic languages and 75% accurate for tonal languages like Mandarin Chinese and Vietnamese.
Tap into this area of analytics
Is it likely that speech analytics practices such as analyzing voice tones and emotional content are going to take off? Probably not in the near future, in part because many companies still consider the concept too vague an area for their analytics investments. Yet, most of us understand differences in voice inflections and attitudes, such as:
- *I* like that dress (a somewhat defensive posture when someone’s apparel choice might be getting questioned); and
- I like *that* dress (a matter of picking out the dress you like).
For those companies that broaden their big data and analytics horizons to take advantage of relatively untapped areas like voice, there might also be differentiation in the form of competitive advantage.