Siri and Alexa have crept into our lives: they accompany us in our smartphones, smart speakers, navigation systems and home automation devices. These virtual assistants are useful in many contexts, for example to use our phones while we cook or to facilitate internet access for people with functional diversity. However, their use is not without risk, and some of those risks we may not even be aware of.
To what extent do we risk our privacy with them? Do we really care about losing our privacy?
The flip side of virtual assistants
Given the variety of devices in which they are incorporated, it is difficult to obtain precise figures on the penetration of virtual assistants today. In the American market, more than 50% of households already have a smart speaker; in Spain the figure is around 7%.
These virtual assistants rely on a set of systems and algorithms that recognize natural language and perform different tasks. But in addition to collecting personal data in the same way as other applications, they gather a particularly sensitive type of information: voice recordings.
Although they are designed to activate only when their wake words are spoken (“Hey Siri”, “Alexa”), these terms are not always detected correctly, and devices may wake up between 20 and 40 times in a day. When this happens, they record between 6 seconds and 2 minutes of audio before disconnecting.
What happens in those cases? The developer companies have permission to listen to these recordings (made, remember, in our living rooms, kitchens and bedrooms) in order to improve their algorithms. On some occasions these recordings have been handed over to third-party companies, and even leaked to the press, with the resulting commotion.
Are we concerned about our privacy… or not so much?
According to CIS data, 75% of Spanish citizens are concerned about the protection of their data. However, we do not always act consistently with that concern: there is no evidence that we reward, or use more often, those applications that are more transparent or respectful with our data.
This phenomenon, called “the privacy paradox”, has different explanations.
- We know the risks, but we accept them because the service is useful to us. Or, more irrationally, because the benefits we obtain are immediate, while the privacy risks are future costs.
- We are not aware of those risks and we use those services without knowing the potential consequences.
Studying the privacy paradox
To clarify which of these two possibilities predominates, the Public University of Navarre has launched a study (pending publication) that measures the impact of positive and negative news about virtual assistant privacy on the social network Twitter.
The aim is none other than to shed light on the privacy paradox: if the news has a significant impact on the type of conversation generated, it will be evident that users were not previously aware of these risks.
To do this, the project compiled a two-year database of tweets mentioning the Google, Apple and Amazon assistants (more than 600,000) and cross-referenced it with a database of positive and negative news about assistants from the same period. The volume of conversation before, during and after each news item was then studied, together with the average sentiment expressed in those tweets (based on the type of language used).
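As an illustration only (the study's actual data and methods are not public), the event-window comparison described above can be sketched in a few lines of Python. The dates, sentiment scores and window sizes below are invented for the example; in the real study the scores would come from a sentiment model applied to the collected tweets.

```python
from datetime import date, timedelta

# Hypothetical toy data: (tweet date, sentiment score in [-1, 1]).
# Invented values standing in for the 600,000+ tweets of the study.
tweets = [
    (date(2021, 3, 1), 0.4), (date(2021, 3, 2), 0.3),
    (date(2021, 3, 3), 0.5), (date(2021, 3, 4), -0.6),
    (date(2021, 3, 4), -0.8), (date(2021, 3, 5), -0.7),
    (date(2021, 3, 5), -0.5), (date(2021, 3, 6), -0.4),
]

def window_stats(tweets, start, end):
    """Tweet volume and mean sentiment for start <= date <= end."""
    scores = [s for d, s in tweets if start <= d <= end]
    if not scores:
        return 0, None
    return len(scores), sum(scores) / len(scores)

news_day = date(2021, 3, 4)  # hypothetical date of a negative news item

# Compare the window before the news with the window from the news onward.
before = window_stats(tweets, news_day - timedelta(days=3),
                      news_day - timedelta(days=1))
after = window_stats(tweets, news_day, news_day + timedelta(days=2))

print("before:", before)  # lower volume, positive mean sentiment
print("after:", after)    # volume rises, mean sentiment turns negative
```

With this toy data, volume rises from 3 to 5 tweets and mean sentiment flips from positive to negative after the news date, which is the kind of shift the study looks for.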
It was observed that, in general, aspects related to privacy are not very present in the conversation: they are mentioned in only 2% of cases, although this figure doubles in the case of Apple, a brand that places greater emphasis on the privacy of personal data.
On the other hand, negative news about privacy has a strong impact, both on the volume of conversation and on the average sentiment, which becomes more negative; positive news has no effect. In addition, the impact of negative news is much stronger for Apple than for Google, which indicates that taking a position on privacy has its risks, since users react more negatively to problems in this area.
Therefore, the results of this research indicate that users are not aware of the risks they assume and react very negatively when those risks are exposed. This leaves us with two main conclusions:
- As individuals, we must be more active in seeking out information about the services we use.
- Public administrations should take a greater role in education and in monitoring virtual assistants, since platforms are unlikely to be the ones that best inform their users.
Monica Cortinas, Associate Professor of Marketing and Market Research, Public University of Navarre
This article was originally published on The Conversation. Read the original.