Sentiment analysis refers to the process of detecting the positive or negative sentiment of a writer from textual data. It helps businesses see emotion in social data, measure brand reputation, and better understand customers. Some companies, such as call centers, have for some time used systems that assess whether a caller sounds upset in order to route them to operators who specialize in calming people down.
This is made possible by sentiment analysis systems that allow us to identify how the customer feels while interacting with the company. Recently, Spotify also filed a patent on a technology capable of recommending songs based on vocal signals from which the emotional state, age, or gender of the speaker can be deduced.
Similarly, Amazon said its health tracking bracelet and service, Halo, will analyze the energy and positivity in customers’ voices to nudge people into better communications and relationships. While these systems can improve customer service, they are not without critical issues.
What Is Sentiment Analysis?
Sentiment analysis is traditionally understood as the process of detecting a writer’s positive or negative sentiment from textual data. It is often used as an umbrella term to describe technologies that seek to determine the emotion behind a customer’s message. It helps companies detect sentiment in social data, measure brand reputation, and understand customers.
As customers express their thoughts and feelings more openly than ever, sentiment analysis is becoming an essential tool for tracking and understanding those feelings. Automatically analyzing customer feedback, such as opinions in survey responses and social media conversations, allows brands to learn what makes customers happy or frustrated so they can tailor products and services to meet customers’ needs. Sentiment analysis relies on natural language processing (NLP) and machine learning algorithms to automatically determine the emotional tone behind online conversations.
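To make the idea concrete, here is a minimal sketch of the simplest family of such techniques, a lexicon-based polarity scorer. The word lists, function name, and scoring rule are illustrative assumptions, not any particular vendor's model; real systems use trained ML classifiers rather than hand-built lexicons.

```python
# Minimal lexicon-based sentiment scoring (illustrative sketch only).
# The word lists and the normalization rule are simplified assumptions.
POSITIVE = {"great", "love", "happy", "excellent", "good"}
NEGATIVE = {"bad", "hate", "angry", "terrible", "frustrated"}

def sentiment_score(text: str) -> float:
    """Return a polarity score in [-1, 1]: (positive - negative) word counts,
    normalized by the number of sentiment-bearing words found."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I love this product, it is excellent"))  # 1.0
print(sentiment_score("Terrible service, I am angry"))          # -1.0
```

Production systems replace the hand-written lexicon with statistical models trained on labeled text, but the output is the same kind of polarity signal described above.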
At the moment, these systems above all help companies improve the experience of their customers (the so-called consumer experience). Virtual assistants with sentiment analysis capabilities can detect the user’s emotional state and adapt their responses or tone of communication accordingly. For example, if the assistant detects that the user is angry or frustrated, it can modify its responses to de-escalate the situation and offer solutions to the problem at hand. By measuring the customer’s level of irritation, these systems can also redirect complex cases to human agents better suited to offer a targeted solution. Finally, they are used to evaluate customer satisfaction so the company can improve its offerings and services accordingly.
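The escalation logic described here can be sketched as a simple threshold rule. The function name, threshold value, and score scale are assumptions for illustration; an actual deployment would tune these against real conversation data.

```python
# Hypothetical routing rule: conversations whose detected irritation score
# exceeds a threshold are escalated to a human agent. The threshold value
# and labels are assumptions, not any specific product's configuration.
ESCALATION_THRESHOLD = 0.7

def route_conversation(irritation: float) -> str:
    """Decide who handles the conversation from an irritation score in [0, 1]."""
    if not 0.0 <= irritation <= 1.0:
        raise ValueError("irritation must be in [0, 1]")
    if irritation >= ESCALATION_THRESHOLD:
        return "human_agent"        # heated or complex cases go to a specialist
    return "virtual_assistant"      # routine cases stay automated

print(route_conversation(0.9))  # human_agent
print(route_conversation(0.2))  # virtual_assistant
```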
Sentiment Analysis In Voice Assistants
The implementation of sentiment analysis in voice assistants could lead companies to analyze both what we say and how we feel in order to recommend products or personalize advertising messages. Voice technologies like Alexa or Siri would then turn into diviners that use the sound of our voices to infer intimate details such as our mood, our desires, or our medical conditions.
Joseph Turow, a professor at the Annenberg School for Communication at the University of Pennsylvania, has expressed fear that these technologies could one day be used by police to determine who should be arrested, or by banks to decide who qualifies for a mortgage.
This raises the question of the value we place on our data, and of the balance needed between the usefulness of these new technologies for companies and users and the impact their use can have on citizens’ fundamental rights.
Indeed, voice representations, such as voice recordings or voice samples, can be considered personal data if they relate to an identifiable person. Therefore, in most cases (unless there is an alteration aimed at anonymization), voice data will be personal data under the GDPR.
Sensitive Information In Voice Data
Furthermore, voice data can reveal not only the identity of an individual but also sensitive information, such as demographic or ethnic origin (for example, through an accent) or a possible illness. The data processed by voice assistants are therefore not only personal data but may qualify as special categories of data under Article 9 GDPR, which prohibits their processing except under certain conditions, including the explicit consent of the data subject.
However, experience has shown that the proliferation of privacy notices and consent requests can, in practice, harm the data protection of the people concerned. The requests become so frequent that users often come to regard them as a mere nuisance and stop paying attention or making informed choices.
Sentiment Analysis In Negotiations For The EU Regulation On AI
A University of Michigan study showed that users are not that concerned about providing more data to web giants like Google or Amazon. As the study’s authors note, these technologies are slowly eroding people’s expectations about their privacy. Current data protection controls do not meet people’s needs.
Most subjects in the study did not even realize that their data was being analyzed to serve them targeted ads, and when they found out, they expressed discontent that their voice commands were being used that way. Beyond targeted ads, the same technologies could in the future be used by insurance companies or banks to infer a customer’s health status and decide, on that basis, whether to grant life insurance or a mortgage. They could also be used, as Turow warns, for police purposes.
Therefore, a public debate is needed about the risks associated with the widespread use of sentiment analysis in voice assistants and the value we place on our data and fundamental rights. This debate must naturally fit into the current discussions on the Proposal for a European regulation containing harmonized rules on artificial intelligence.