Alleged data breaches in the Cambridge Analytica scandal offer a glimpse of how the AI revolution is shaping a new environment, and why new legislation must be developed to protect not only citizens’ data but also the dignity of each citizen’s digital persona, taking into account how machines can extract and leverage the very instruction manual that makes each human individual actionable in the service of third-party agendas.
Much has been said about the controversial use of citizens’ personal data, aggregated into rich and actionable information around their identities, to influence the Brexit referendum and the presidential election in the United States. Volumes of personal data were collected and aggregated, by and large in full compliance with the data protection laws then in force. Did Facebook sue Cambridge Analytica? Did anyone? So far the public scandal speaks of leaked data, a potential class action in the US and, most recently, a £500,000 fine imposed on Facebook for its part in the affair.
What we know is that a massive amount of information was transferred to a third party to evaluate the propensity of each person profiled on Facebook to react positively to certain political messages. Up to this point all seems in order: measuring consumer propensity is business as usual in commercial marketing. And, after all, one could argue that any political party or campaign needs to engage in consensus building and persuasion.
Cambridge Analytica: what is new?
Vance Packard published ‘The Hidden Persuaders’ back in 1957, exploring the use of consumer motivational research. Sixty years ago he was already speaking about the depth psychology and subliminal tactics used by advertisers to manipulate expectations and induce desire for products, particularly in the American postwar era.
His book describes how advertising exercises a subtle form of persuasion on individuals who are unaware of being its object. It also explores the manipulative techniques used to promote politicians to the general public. The Cambridge Analytica case speaks of two major problems: the first relates to the transparency of messages conveyed with a manipulative purpose, the second to the technology behind today’s tools of manipulation, now far more powerful and effective.
The issue of transparency
A street sign is an obvious message that intends to modify our behavior. It is a clear message with a transparent purpose. An advertising message, if recognizable as such, is likewise transparent. We want it to be so, and our laws assert that this is our right: Instagram bloggers speaking of their favorite soft drink or perfume must disclose whether they receive incentives from the endorsed brand. But here we are still looking only at the surface of simple, direct promotion in a one-to-many perspective.
If the advertising message is tailored to the individual identity of the person to be persuaded, we start getting closer to the Cambridge Analytica case. The advertising industry has decades of experience in this field as well. What is new in today’s case is that Cambridge Analytica’s algorithms studied such volumes of rich and relevant personal data about each individual that they are in a position to really understand what governs his or her inclinations, aspirations, immediate needs, strongest passions, deepest concerns and even unconfessed fears.
Extracting a profile unknown to its owner
If a declared heterosexual, married individual looks only at other men’s physiques, ignoring those of women, today’s technologies can track what is in his line of sight: they can trace what he looks at, and perhaps also what he looks for. By studying other data as well, they can potentially extract a profile of this individual that would surprise us all, including the man himself. Today’s machines already extract profiles enriched with assumptions that are unknown even to the individual observed.
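To make this concrete, consider a minimal sketch of how such an unsolicited inference might be computed from attention data. Everything here, from the event log to the category names and thresholds, is an invented illustration under stated assumptions, not a description of any real tracking system:

```python
from collections import defaultdict

# Hypothetical event log: (user_id, content_category, seconds_of_attention).
# In a real pipeline these would come from gaze, hover or scroll tracking.
events = [
    ("user_42", "male_fitness", 34.0),
    ("user_42", "female_fitness", 1.5),
    ("user_42", "male_fitness", 51.0),
    ("user_42", "news", 12.0),
]

def infer_latent_interest(events, category_a, category_b, ratio_threshold=5.0):
    """Flag users whose attention to category_a dwarfs their attention
    to category_b. The user never declared this interest; it is inferred
    purely from behavior. Both thresholds are arbitrary illustrative choices."""
    attention = defaultdict(lambda: defaultdict(float))
    for user, category, seconds in events:
        attention[user][category] += seconds

    flagged = {}
    for user, per_category in attention.items():
        a = per_category.get(category_a, 0.0)
        b = per_category.get(category_b, 0.0)
        # Require a minimum absolute signal and guard against division by zero.
        if a > 30.0 and a / max(b, 1.0) >= ratio_threshold:
            flagged[user] = round(a / max(b, 1.0), 1)
    return flagged

print(infer_latent_interest(events, "male_fitness", "female_fitness"))
# -> {'user_42': 56.7}
```

The point is not the arithmetic but the asymmetry: the observer holds a quantified claim about the observed that the observed has never seen and may not even suspect.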
None of us sees the nape of our own neck. None of us can see our own face without the help of a mirror or a picture. Data technologies, on the contrary, draw our profile constantly without reflecting our image back to us. They do not offer us any means to look at our own picture; they simply use it.
What has changed with respect to the past is the quantity of information available for data mining and aggregation. Our own picture is being drawn in aspects and details that go beyond our own knowledge. So, when confronted with the eternal privacy dilemma with respect to digital identities, what sensitive information should our government strive to protect in our interest? The standard personal and socio-economic data stored on our National ID document or collected about us by the various government agencies? This would be an old, marvelous and medieval definition of privacy. It is, to say the least, a limited one.
Profiling our digital persona
Our digital identity, in line with how the world can see us today, is composed of a quantity of measurable acts that comes ever closer to drawing the ‘digitized me’, or ‘digital twin’, of each citizen. Our identity extends much further in time, and much wider geographically, than what our National ID record says. One’s body is, in a sense, just a placeholder: the tangible, rightful owner of this identity and the main stakeholder in its reputation and integrity.
The Cambridge Analytica case shed light on the fact that our identity does not consist only of data, but includes, to a certain extent, the analysis that can be done with it. If I have a latent fear about immigration that I have never expressed, or a hidden aspiration, and I am deliberately bombarded with the pertinent messages, that latent fear or unexpressed aspiration becomes a vote and shapes not only my identity but my social action.
In the era of artificial intelligence
Cambridge Analytica made us understand that it is possible to profile citizens in much greater detail, and much more effectively, than we ever thought possible: with artificial intelligence, psychology becomes mathematics. Persuasion can leverage signals that have not yet risen to the surface of an individual’s consciousness.
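What ‘psychology becomes mathematics’ might look like, in its simplest possible form, is a scoring function that converts behavioral traits into a predicted receptivity to one specific message. The following toy logistic model is purely illustrative: the feature names, weights and numbers are invented assumptions, not a reconstruction of anyone’s actual system:

```python
import math

# Hypothetical behavioral features for one user, scaled to [0, 1].
# In practice such features would be distilled from likes, shares,
# dwell times and network structure.
user_features = {
    "anxiety_about_crime": 0.8,
    "trust_in_institutions": 0.2,
    "engagement_with_immigration_content": 0.7,
}

# Invented weights for one fear-based message variant; a real pipeline
# would fit these from observed response data.
message_weights = {
    "anxiety_about_crime": 2.1,
    "trust_in_institutions": -1.4,
    "engagement_with_immigration_content": 1.6,
}
BIAS = -1.0

def receptivity(features, weights, bias):
    """Logistic score: estimated probability that this user reacts
    positively to this particular message variant."""
    z = bias + sum(weights[k] * features[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

score = receptivity(user_features, message_weights, BIAS)
print(f"receptivity: {score:.2f}")  # -> receptivity: 0.82
```

Targeting then reduces to ranking users by such scores and serving each one the message variant predicted to move them most, the same optimization loop that commercial advertising platforms run every day.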
If I am a white mother, raised with little or no exposure to diversity but genuinely compassionate in my spontaneous inclinations, I might be driven by generous thoughts at first when hearing about today’s global migration issues. But what will change if every day I see headlines reporting on children being kidnapped, mistreated or harmed by immigrants? Will I ask myself why there is no news about the number of casualties caused today by drug abuse? Will I spontaneously check what type of issue creates the most significant damage in my country, and whether the use of cocaine or other drugs has decreased over the last months? Will I ask myself whether the news I receive is selected according to a hidden agenda?
Or would I not prefer that the ‘instructions manual’ telling third parties exactly how I work, and how they can act on me in their interest, be protected per se as part of my digital persona, my identity in the era of artificial intelligence?
Detonating actionable profiles
The ‘instructions manual’ illustrating how to tamper with a citizen’s free will is what we need to include in any checklist of things to be protected if we wish democracy to remain meaningful. If covert manipulation is to be avoided, a line must be drawn between what our society considers fair play and what is in fact abuse when algorithms extract and leverage actionable profiles of individual Internet users.
As the digital transformation of today’s society reaches new milestones and technology continues to evolve, we must prepare to face an entire set of new challenges. The European Union’s new General Data Protection Regulation (GDPR) is one such step, and just the start of a new wave of initiatives that will pave the way to a new concept of privacy, security and dignity for our digital persona. Man and machine must learn to live together in our society, without the latter transforming us into the potential robots of the owners of agendas we are unaware of and did not choose to support.
Societal transformation
The Cambridge Analytica case reminds us of the speed at which the transformation of our society is happening: Facebook was launched on February 4, 2004, not even 15 years ago. Facebook interfaces in local European languages were launched in February 2008, about 10 years ago. In 2018 Facebook stood accused of having been used to change votes and the outcome of general elections in nations of prime importance. The platform was used not only as a vehicle carrying promotional messages to the electorate, but as a source of data on the individual identities of voters. Here one must refer to the broader meaning of ‘digital identities’: the modernized definition encompassing not only the data but also the instructions manual that is perhaps the most sensitive information of all.
A massive transformation of society took place in less than 15 years, and one cannot but wonder about its effects on democracy. Should we protect ourselves, or is it a citizen’s right to be persuaded according to his or her latent inclinations? How can we defend the free will of citizens if communication systems surround each individual with a sphere of information and knowledge that adapts perfectly to his profile and rolls him in a given direction, according to an agenda he did not spontaneously or deliberately choose to support?

We must also pay attention to the issue of traceability. Every time we assert our identity electronically, we increase our exposure to the risk that someone analyzes our overall behavior, and that someone may influence it, even to the point of changing our political actions. If powerful media are used to alter my perception of the world by bombarding me with information designed to make me accept it and adhere to it faster, I can be influenced and pushed to act against my own judgment, if only it were free.
The Cambridge Analytica case shows that our societies are at a turning point and are no longer willing to accept the use of all this technology without protection of sensitive personal data. It is interesting to see that Europe’s GDPR is appreciated in the United States. But this is just a first step towards redefining our very concept of privacy and data protection. We will eventually develop a new appreciation of what information is to be considered part of our identity, and protected for the sake of the dignity of our digital persona.