“I made Steve Bannon’s psychological warfare tool.” That was the title of an article published in The Guardian in March 2018, the piece that would raise public awareness of one of the largest controversies ever to involve big data: the Cambridge Analytica scandal. The following year, Netflix released a documentary called ‘The Great Hack’ that made Cambridge Analytica known around the world for election interference through psychological manipulation. Companies around the globe, however, had already been using technology for years to make consumers subconsciously favor their products.
Researchers at Cambridge University and Harvard Business School developed a strategy for harvesting Facebook data, consisting mainly of likes and friend networks, and applying machine learning algorithms to find connections between behavior on the site and psychological profiles. In fact, “[e]ven a few Likes per week add up into a consistent picture of a user after years of social media use” (Stillwell). If this strategy becomes reliable at the scale of large populations, it may ultimately allow companies, governments, and even individuals to manipulate the opinions, beliefs, and behaviors of millions of people at the click of a button. The question is: how close are we to this technology, and which technologies that we encounter daily already shape how we decide and what we believe?
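In essence, the published like-based prediction research describes a two-step pipeline: compress the enormous user-by-page like matrix into a handful of latent dimensions, then fit a simple regression from those dimensions to questionnaire-measured traits. The sketch below illustrates that idea on synthetic data; the array sizes, the Ridge regression, and the invented trait variable are illustrative assumptions, not the researchers’ actual code.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 1,000 users x 500 pages; 1 means the user "liked" the page.
likes = (rng.random((1000, 500)) < 0.05).astype(float)
# Synthetic stand-in trait score (e.g., extraversion on a 1-5 scale).
trait = 1 + 4 * rng.random(1000)

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)

# Step 1: compress the sparse like matrix into a few latent dimensions.
svd = TruncatedSVD(n_components=50, random_state=0)
Z_train = svd.fit_transform(X_train)
Z_test = svd.transform(X_test)

# Step 2: fit a regularized linear model from latent dimensions to the trait.
model = Ridge(alpha=1.0).fit(Z_train, y_train)
print("Held-out R^2:", model.score(Z_test, y_test))  # near zero here, since this data is pure noise
```

On real like data, the reported predictive accuracy comes from the latent dimensions capturing genuine behavioral regularities; the pipeline itself is this simple.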
The Current State of Psychometric Research
Enter the world of psychometrics. This fast-growing field of study “is devoted to the advancement of quantitative measurement practices in psychology, education and the social sciences” (Psychometric Society). This includes the mathematical measurement of character traits and the study of how they can be used to influence behavior. According to the Harvard Business Review, “[r]oughly 18% of companies currently use personality tests in the hiring process [and] this number is growing at a rate of 10-15% a year” (HBR.org). These tests are the foundation of psychometric research and are currently used to build more effective teams. That said, many business schools, like the Judge Business School at the University of Cambridge, invest millions of dollars in research that goes well beyond human resource management. David Stillwell, a leading researcher at Judge Business School, has said that social media profiles “can [accurately] predict personality and other intimate traits” (Stillwell).
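At their core, the personality tests these departments rely on are straightforward arithmetic: a respondent rates agreement with a set of statements, reverse-keyed items are flipped, and the items belonging to each trait are averaged. A minimal sketch, with made-up items and keys:

```python
# Minimal sketch of Likert-scale trait scoring; the items and keys are invented for illustration.
RESPONSES = {  # 1 = strongly disagree ... 5 = strongly agree
    "I am the life of the party": 4,
    "I prefer to stay quiet in groups": 2,   # reverse-keyed
    "I feel comfortable around people": 5,
}
REVERSE_KEYED = {"I prefer to stay quiet in groups"}

def score_trait(responses: dict[str, int], reverse_keyed: set[str], scale_max: int = 5) -> float:
    """Average the item scores after flipping reverse-keyed items."""
    adjusted = [
        (scale_max + 1 - score) if item in reverse_keyed else score
        for item, score in responses.items()
    ]
    return sum(adjusted) / len(adjusted)

print(f"Extraversion: {score_trait(RESPONSES, REVERSE_KEYED):.2f} / 5")
```

Real instruments differ in item count and validation, not in this basic mechanism, which is what makes the scores so easy to compute at scale.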
The problem with personality prediction is that advertisers can exploit it to make certain products, services, or ideas more appealing by falsely associating them with causes people identify with. In fact, social media sites like Facebook allow advertisers to use so-called “dark posts,” which enable an advertiser to send “personalized ads on social media that are visible only by the person who is specifically targeted” (Rehman). The implications of this are shocking. Imagine that the United States Senate is voting on legislation to ban a certain kind of oil extraction that has cost thousands of people their lives through water poisoning. Rather than illegally paying off politicians, oil companies could hire a firm that holds accurate psychological profiles of the key senators in the vote and pay it to flood their social media with ads that sway their decision by creating subconscious links to things they strongly believe in. For a veteran turned senator, for example, the ads might show how this specific extraction method has helped hundreds of military veterans find employment.
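Mechanically, this kind of trait-matched targeting reduces to a filter-and-match step: select the users whose predicted profile crosses some threshold, and pair each with the message variant written for that profile. The following is a purely hypothetical sketch; the trait names, thresholds, and message variants are invented to show the shape of the logic, not any platform’s actual API.

```python
# Hypothetical sketch of trait-matched message selection; no real platform API is used.
users = [
    {"id": 1, "predicted_traits": {"patriotism": 0.91, "environmentalism": 0.12}},
    {"id": 2, "predicted_traits": {"patriotism": 0.20, "environmentalism": 0.88}},
]

# Each message variant is written to resonate with one predicted trait.
variants = {
    "patriotism": "This extraction method has put hundreds of veterans back to work.",
    "environmentalism": "This extraction method has the smallest land footprint in the industry.",
}

THRESHOLD = 0.8  # only target users whose strongest predicted trait is pronounced enough

for user in users:
    trait, score = max(user["predicted_traits"].items(), key=lambda kv: kv[1])
    if score >= THRESHOLD:
        # In a real "dark post," only this one user would ever see this ad.
        print(f"user {user['id']} -> {variants[trait]}")
```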
It is vital to see that this is not a far-fetched scenario. Within the psychometrics departments of leading universities, some researchers’ work overlaps with another field of study: nudge theory. This field focuses on “helping people improve their thinking and decisions [and] identifying and modifying existing unhelpful influences on people” (BusinessBalls.com). The theory was first published in early 2008 and has touched virtually every aspect of people’s lives since then. One of the best illustrations of its findings is IKEA, a firm that has spent millions researching these nudging techniques. Two things stand out. First, when you shop in its stores, you are funneled through a so-called racetrack layout, which mimics an IKEA catalog and encourages you to spend more money. Second, the products themselves give buyers a sense of achievement simply for having assembled them, which improves customer satisfaction even when the quality is unremarkable. These simple, non-technological manipulations are commonplace in our society today.
If this kind of marketing, or ethical manipulation, is already common and accepted in our society, how is the Cambridge Analytica case different? The main problem arises when a company learns what your values are and then combines previously developed marketing techniques with false information and false claims, which are not necessarily illegal today, to change your opinions. In other words, once ethical and responsible use of this technology is taken out of the equation, it is impossible to predict what could happen. The problem emerges early in the development of these ideas, as a recent review of ethical guidelines “clearly show[s] that only a small minority of academic institutions has developed guidelines for data science” (Schnebele). The Cambridge Analytica case shows how a little framing and misinformation can already have a considerable impact. With a 15-million-dollar budget and about 100 employees, a technology startup may have changed the result of an American presidential election because of its questionable code of ethics. Now, if a company were to learn from that firm’s publicity mistakes, spend more money, hire more employees, and ignore ethical boundaries altogether by reducing transparency even within its own walls, the negative impact could be tremendous.
Opinion Manipulation in News Media
It is impossible to talk about opinion manipulation without mentioning the news media’s impact on this issue, specifically within the United States. With only six major media companies, each with a well-defined bias, left-leaning like CNN or right-leaning like Fox News, it is critical to realize how much of our personal belief system rests on the information presented to us. It manipulates us into thinking that our belief system is the right one and everyone else’s is wrong. In fact, news, whether factual or speculative, has a huge impact on our lives, and especially on our economy. Companies like Facebook and Google, which have the most impact on our decision-making, cause “an increase in the average value of correlations [that augment] the systemic risk [and that decrease] the possibility of allocating a safe investment portfolio” (Peruzzi). In other words, it makes our financial system unstable.
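The mechanism behind that quote is simple portfolio arithmetic: for an equal-weight portfolio of N assets with equal volatility, portfolio variance grows with the average pairwise correlation, so as news-driven correlation rises, diversification stops working. A quick illustration of this standard formula (the numbers themselves are made up):

```python
# Why rising average correlation undermines diversification:
# equal-weight portfolio of N assets, each with volatility sigma (figures are made up).
N = 30          # number of assets
sigma = 0.20    # 20% annual volatility per asset

for rho in (0.0, 0.3, 0.6, 0.9):  # average pairwise correlation
    # portfolio variance = sigma^2 * (1/N + (N-1)/N * rho)
    variance = sigma**2 * (1 / N + (N - 1) / N * rho)
    print(f"avg correlation {rho:.1f} -> portfolio volatility {variance**0.5:.1%}")
# Volatility climbs from ~3.7% toward the single-asset 20% as correlation rises,
# i.e., there is no longer a "safe" diversified portfolio to allocate.
```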
Additionally, “bias in the media is the root of one of the deepest political issues that we have in this country, as it prevents civil discourse between individuals of separate ideologies, inhibiting compromise and progress” (Kessler). This holds even now, when only a few large news companies actively try to change our perceptions and we are at least aware of their biases. Once more players enter the equation and try to manipulate us more subtly, serious problems will arise. Specifically, the boundaries between what is right and what is wrong will fade, and American politics will face the unprecedented problem of election interference before the votes are even cast.
We are currently in a time dominated by technological advances. In fact, “American adults spend more than 11 hours per day watching, reading, listening to or simply interacting with media” (Fottrell). This has fueled the rise of Big Data to the point where data has become one of the most valuable assets in any industry. The reason is simple, according to Hofacker: “Big Data from online, mobile, but especially social media can provide […] an advantage to marketers. We must also acknowledge and address various negative aspects.” By using technologies like artificial intelligence, people will soon be able to influence “all stages of the consumer decision-making cycle, including what the consumer does, how it is done, where they consume, when they do it and with whom they consume.” This generally comes down to motives “triggered by external or internal cues indicating threats or opportunities related to a specific evolutionary challenge” (Griskevicius). In other words, when an advertiser appeals to what is most important to you, your brain subconsciously starts to like the brand because of its association with your values, not because of the product itself. The harm appears when misinformation makes you believe a product or service aligns with your values when, in reality, it does not.
The Future of Psychometrics
In conclusion, various technologies can already actively influence our decisions and behaviors. With the rise of newer technologies like artificial intelligence, the impact of technology on shaping who we are has never been more significant. That said, not all of it is bad. Used responsibly, these technologies can help small businesses and individuals thrive even in times of downturn, as we saw during the COVID-19 pandemic. Beyond advertising, this technology can be applied successfully in other areas. As previously mentioned, human resource departments all around the country already use psychometric personality testing to hire and train staff and to build more effective teams, and this use will continue to expand.
Additionally, psychometric strategies can help us figure out which sources of information we can trust and which companies we genuinely identify with, and help us steer the world in the direction we want it to go. However, as always with humankind, there will be unethical and malicious actors. As this technology advances into uncharted territory, some form of government intervention will presumably be needed to ensure its responsible and ethical use. Until then, individuals need to be aware of how their decisions are steered in specific directions by news media and corporations that may be spreading misinformation.