Technology degrades humans
Transcript of the hearing of Tristan Harris at the June 25, 2019, U.S. Senate Subcommittee on Communications, Technology, Innovation, and the Internet.
I want to argue today that persuasive technology is a massively underestimated and powerful force shaping the world, that it has taken control of the pen of human history, and that it will drive us to catastrophe if we don’t take it back. Technology shapes where 2 billion people place their attention on a daily basis, shaping what we believe is true, our relationships, our social comparisons, and the development of children. I’m excited to be here with you because you are actually in a position to change this.
Let’s talk about how we got here. While we often worried about the point at which technology’s asymmetric power would overwhelm human strengths and take our jobs, we missed the earlier point at which technology hacks human weaknesses. And that’s all it takes to gain control. That’s what persuasive technology does. I first learned this lesson as a kid magician, because in magic you don’t have to outmatch your audience’s intelligence – even a PhD in astrophysics – you just have to know their weaknesses.
Later in college, I studied at the Stanford Persuasive Technology Lab alongside one of the founders of Instagram, and learned about the ways technology can influence people’s attitudes, beliefs, and behaviors.
At Google, I was a design ethicist, where I thought about how to ethically wield this influence over 2 billion people’s thoughts. Because in an attention economy there’s only so much attention, and the advertising business model always wants more. So it becomes a race to the bottom of the brain stem. Each time technology companies go lower into the brain stem, they take a little more control of society.

It starts small. First, to get your attention, I add slot machine “pull to refresh” rewards, which create little addictions. I remove stopping cues with “infinite scroll” so your mind forgets when to do something else. But then that’s not enough. As attention gets more competitive, we have to crawl deeper down the brain stem, to your identity, and get you addicted to getting attention from other people. By adding the number of followers and likes, technology hacks our social validation, and now people are obsessed with the constant feedback they get from others. This helped fuel a mental health crisis for teenagers.

The next step of the attention economy is to compete on algorithms. Instead of splitting the atom, it splits our nervous system by calculating the perfect thing that will keep us there longer – the perfect YouTube video to autoplay, or news feed post to show next. Now technology analyzes everything we’ve done to create avatar, voodoo doll simulations of us. With more than a billion hours watched daily, it takes control of what we believe, while discriminating against our civility, our shared truth, and our calm.
As this progression continues, the asymmetry only grows, until you get deepfakes, which are checkmate on the limits of the human mind and the basis of our trust.
But all these problems are connected, because they represent a growing asymmetry between the power of technology and human weaknesses that is taking control of more and more of society.
The harms that emerge are not separate. They are part of an interconnected system of compounding harms that we call “human downgrading”. How can we solve the world’s most urgent problems if we’ve downgraded our attention spans, downgraded our capacity for complexity and nuance, downgraded our shared truth, and downgraded our beliefs into conspiracy thinking, so that we can’t construct shared agendas to solve our problems? This is destroying our sensemaking at a time when we need it the most. And the reason I’m here is that every day this is incentivized to get worse.
We have to name the cause, which is an increasing asymmetry between the power of technology and the limits of human nature. So far, technology companies have pretended they are in a relationship of equals with us, when it has actually been asymmetric. Technology companies have said that they are neutral and that users have equal power in the relationship. But it’s much closer to the power that a therapist, a lawyer, or a priest has, since the companies hold massively superior compromising and sensitive information about what will influence user behavior – so we have to apply fiduciary law. Even more than a doctor or a lawyer, these platforms have the truth, the whole truth, and nothing but the truth about us, and they can increasingly predict invisible facts about us that you couldn’t get otherwise. And with the extractive business model of advertising, they are forced to use this asymmetry to profit in ways that we know cause harm.
The key is to make the business model responsible. With asymmetric power has to come asymmetric responsibility. And that’s the key to preventing future catastrophes from technology that out-predicts human nature.
Government’s job is to protect citizens. I tried to change Google from the inside, but I found that it is only external pressure – from government policymakers, shareholders, and media – that has changed companies’ behavior.
Government is necessary because human downgrading changes our global competitiveness with other countries, especially China. Downgrading our public health, sensemaking, and critical thinking while they do not would disable our long-term capacity on the world stage.
Netscape founder Marc Andreessen said that software is eating the world, but software hasn’t been made responsible for protecting the society it eats. Facebook “eats” election advertising while taking away protections like equal-price campaign ads. YouTube “eats” children’s development while taking away the protections of Saturday morning cartoons.
50 years ago, Mr. Rogers testified before this committee about his concern over the race to the bottom in television that rewarded mindless violence. YouTube, TikTok, and Instagram can be far worse, impacting an exponentially greater number of children with more alarming material. In today’s world, Mr. Rogers wouldn’t have a chance. But at his hearing 50 years ago, the committee made a decision that permanently changed the course of children’s television for the better. I’m hoping that a similar choice can be made today.
Introduction to who & why
I tried to change Google from the inside as a design ethicist after they bought my company in 2011, but I failed, because companies don’t have the right incentives to change. I’ve found that it is only pressure from outside – from policymakers like you, shareholders, the media, and advertisers – that can create the conditions for real change.
Who I am: persuasion & magic
Persuasion is about an asymmetry of power.
I first learned this as a magician as a kid. I learned that the human mind is highly vulnerable to influence. Magicians say “pick any card.” You feel that you’ve made a “free” choice, but the magician has actually influenced the outcome upstream because they have asymmetric knowledge about how your mind works.
In college, I studied at the Stanford Persuasive Technology Lab, learning how technology can influence people’s attitudes, beliefs, and behaviors. We studied clicker training for dogs, habit formation, and social influence. I was project partners with one of the founders of Instagram, and we prototyped a persuasive app called “Send the Sunshine” that would alleviate depression. Both magic and persuasive technology represent an asymmetry in power – an increasing ability to influence other people’s behavior.
Scale of platforms and race for attention
Today, tech platforms have more influence over our daily thoughts and actions than most governments. 2.3 billion people use Facebook, which is a psychological footprint about the size of Christianity. 1.9 billion people use YouTube, a larger footprint than Islam and Judaism combined. And that influence isn’t neutral.
The advertising business model links their profit to how much attention they capture, creating a “race to the bottom of the brain stem” to extract attention by hacking lower into our lizard brains – into dopamine, fear, outrage – to win.
How tech hacked our weaknesses
It starts by getting our attention. Techniques like “pull to refresh” act like a slot machine to keep us “playing” even when nothing’s there. “Infinite scroll” takes away stopping cues and breaks, so users don’t realize when to stop. You can try having self-control, but there are a thousand engineers on the other side of the screen working against you.
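The slot-machine mechanic described above is a variable-ratio reward schedule: each pull sometimes pays out and sometimes doesn’t, on an unpredictable schedule – the pattern behavioral psychology has found most habit-forming. A minimal sketch of the idea; the function names and payout odds are hypothetical, not drawn from any real product:

```python
import random

# Sketch of a variable-ratio "pull to refresh" (hypothetical app code,
# not any real product's implementation). The reward arrives only
# sometimes, unpredictably -- the schedule that makes slot machines,
# and feeds, hard to put down.

def pull_to_refresh(rng, reward_probability=0.3):
    """One 'pull': sometimes new content appears, usually nothing does."""
    if rng.random() < reward_probability:
        return "3 new posts!"       # intermittent reward
    return "Nothing new yet..."     # the miss that keeps you pulling

rng = random.Random(42)  # seeded so the run is reproducible
for pull in range(5):
    print(f"pull {pull + 1}: {pull_to_refresh(rng)}")
```

The point of the sketch is the schedule, not the payload: because the reward is unpredictable, every pull carries the possibility of a payout, which is exactly what keeps users pulling.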
Then design evolved to get people addicted to getting attention from other people. Features like “Follow” and “Like” drove people to independently grow their audience with drip-by-drip social validation, fueling social comparison and the rise of “influencer” culture: suddenly everyone cares about being famous.
The race went deeper, into persuading our identity: photo-sharing apps that include “beautification filters” that alter our self-image capture attention better than apps that don’t. This fueled body dysmorphic disorder, anchoring the self-image of millions of teenagers to unrealistic versions of themselves, reinforced with constant social feedback that people only like you if you look different than you actually do. In a 2018 survey, 55% of plastic surgeons said they’d seen patients whose primary motivation was to look better in selfies, up from just 13% in 2016. Instead of companies competing for attention, now each person competes for attention using a handful of tech platforms.
Constant visibility to others fueled mass social anxiety and a mental health crisis. It’s impossible to disconnect when you fear your social reputation could be ruined by the time you get home. After nearly two decades in decline, “high depressive symptoms” among 13- to 18-year-old girls suddenly rose 170% between 2010 and 2017. Meanwhile, most people aren’t aware of the growing asymmetry between persuasive technology and human weaknesses.
Using ai to extract attention, role of algorithms
The arms race for attention then moved to algorithms and A.I.: companies compete on whose algorithms more accurately predict what will keep users there the longest.
For example, you hit ‘play’ on a YouTube video and think, “I know I’ve gotten sucked into YouTube those other times, but this time it will be different.” Two hours later you wake up from a trance and think, “I can’t believe I did that again.” Saying we should have more self-control hides an invisible asymmetry in power: YouTube has a supercomputer pointed at your brain.
When you hit play, YouTube wakes up an avatar, a voodoo doll-like model of you. All of your video clicks, likes, and views are like the hair clippings and toenail filings that make your voodoo doll look and act more like you, so it can more accurately predict your behavior. YouTube then ‘pricks’ the avatar with millions of videos to simulate and predict which ones will keep you watching. It’s like playing chess against Garry Kasparov: you’re going to lose. YouTube’s machines are playing too many moves ahead.
That’s exactly what happened: 70% of YouTube’s traffic is now driven by recommendations, “because of what our recommendation engines are putting in front of you,” said Neal Mohan, CPO of YouTube. With over a billion hours watched daily, algorithms have already taken control of two billion people’s thoughts.
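The dynamic described above – a model scoring every candidate video by predicted watch time and serving the one with the highest score – can be sketched in a few lines. This is a hypothetical toy, not YouTube’s actual system: the candidate videos, the trait weights, and the `predict_watch_minutes` function are all invented for illustration.

```python
# Toy watch-time-maximizing recommender (hypothetical illustration,
# not YouTube's actual system). The "avatar" is a dict of learned
# user interests; each candidate video is scored by predicted watch
# time, and the argmax is what autoplays next.

def predict_watch_minutes(avatar, video):
    """Score = dot product of user interests and video traits, standing in
    for an engagement model trained on billions of hours of viewing."""
    return sum(avatar.get(trait, 0.0) * weight
               for trait, weight in video["traits"].items())

def recommend_next(avatar, candidates):
    """Serve whichever video the model predicts will hold attention longest."""
    return max(candidates, key=lambda v: predict_watch_minutes(avatar, v))

avatar = {"outrage": 0.9, "science": 0.2}   # inferred from past clicks
candidates = [
    {"title": "Calm science explainer", "traits": {"science": 5.0}},
    {"title": "Outrage compilation",    "traits": {"outrage": 8.0}},
]

print(recommend_next(avatar, candidates)["title"])
# The outrage video wins: 0.9 * 8.0 = 7.2 predicted minutes
# versus 0.2 * 5.0 = 1.0 for the calm one.
```

Nothing in this objective rewards truth or calm; a video wins simply because the model predicts it will hold this particular avatar longer.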
Tilting the ant colony towards crazytown
Imagine a spectrum of videos on YouTube, from the “calm” side – rational, science-based, long-form, the Walter Cronkite section – to the side of “crazytown”.
Because YouTube wants to maximize watch time, it tilts the entire ant colony of humanity towards crazytown. It’s “algorithmic extremism”:
- Teen girls who watched “diet” videos on YouTube were recommended anorexia videos.
- AlgoTransparency.org revealed that the most frequent keywords in recommended YouTube videos were: get schooled, shreds, debunks, dismantles, debates, rips, confronts, destroys, hates, demolishes, obliterates.
- Users watching NASA Moon landing videos were recommended “Flat Earth” conspiracies – recommended hundreds of millions of times before being downranked.
- YouTube recommended Alex Jones InfoWars videos 15 billion times – more than the combined traffic of NYTimes, Guardian, Washington Post and Fox News.
- More than 50% of fascist activists in a Bellingcat study credit the Internet with their redpilling. YouTube was the single most frequently discussed website.
- When the Mueller report on Russian interference in the 2016 election was released, Russia Today’s coverage was the most recommended of the 1,000+ monitored channels.
- Adults watching sexual content were recommended videos featuring progressively younger women, then girls, then children playing in bathing suits (New York Times).
- Fake news spreads six times faster than real news, because it’s free to evolve to confirm existing beliefs, unlike real news, which is constrained by the limits of what is true (MIT Twitter study).
Freedom of speech is not the same as freedom of reach. Everyone has a right to speak, but not a right to a megaphone that reaches billions of people. Social platforms amplify salacious speech without upholding any of the standards and practices required of traditional media and broadcasters. If you derived a motto for technology platforms from their observed behavior, it would be: “with great power comes no responsibility.”
They are debasing the information environment that powers our democracy. Beyond discriminating against any party, tech platforms are discriminating against the values that make democracy work: discriminating against civility, thoughtfulness, nuance and open-mindedness.
Equal, or asymmetric?
Once we see the extent to which technology has taken control, we have to ask: is the nature of the business relationship between platforms and users contractual – a relationship between parties of equal power – or is it asymmetric?
There has been a misunderstanding about the nature of the business relationship between platform and user: platforms have asserted that it is a contractual relationship between parties of equal power. In fact, it is much closer to the relationship with a therapist, lawyer, or priest. They have such superior information, such an asymmetry of power, that you have to apply fiduciary law.
Saying “we give people what they want” or “we’re a neutral platform” hides a dangerous asymmetry: Google and Facebook hold levels of compromising information on two billion users that vastly exceed that of a psychotherapist, lawyer, or priest, while being able to extract benefit towards their own goals of maximizing certain behaviors.
The asymmetry will only get exponentially worse
The reason we need to apply fiduciary law now is because the situation is only going to get worse. A.I. will make technology exponentially more capable of predicting what will manipulate humans, not less.
There’s a popular conspiracy theory that Facebook listens to your microphone, because the thing you were just talking about with your friend just showed up in your news feed. But forensics show they don’t listen. What’s creepier: they don’t have to, because they can wake up one of their 2.3 billion avatar voodoo dolls of you and accurately predict the conversations you’re most likely to have.
This will only get worse.
Already, platforms are easily able to:
- Predict whether you are lonely or suffer from low self-esteem
- Predict your Big Five personality traits from your temporal usage patterns alone
- Predict when you’re about to get into a relationship
- Predict your sexuality before you know it yourself
- Predict which videos will keep you watching
Put together, Facebook and Google are like a priest in a confession booth who listens to two billion people’s confessions, but whose only business model is to shape and control what those two billion people do, while being paid by a third party. Worse, the priest has a supercomputer calculating patterns across two billion people’s confessions, so they can predict what confessions you’re going to make before you know you’re going to make them – and sell access to the confession booth.
Technology, unchecked, will only get better at predicting what will influence our behavior – never worse.
There are two ways to take control of human behavior: 1) you can build more advanced A.I. to accurately predict what will manipulate someone’s actions, or 2) you can simplify humans by making them more predictable and reactive. Today, technology is doing both: profits within Google and Facebook get reinvested into better predictive models and machine learning to manipulate behavior, while simultaneously simplifying humans to respond to simpler and simpler stimuli. This is checkmate on humanity.
The harms are a self-reinforcing system
We often consider problems in technology as separate – addiction, distraction, fake news, polarization and teen suicides and mental health. They are not separate. They are part of an interconnected system of harms that are a direct consequence of a race to the bottom of the brain stem to extract attention.
These harms – shortened attention spans, the breakdown of our shared truth, increased polarization, rewarded outrage, depressed critical thinking, increased loneliness and social isolation, rising teen suicide and self-harm (especially among girls), rising extremism and conspiracy thinking – ultimately debase the information environment and social fabric we depend on. And they reinforce each other. When technology shrinks our attention spans, we can only exchange simpler, 140-character messages about increasingly complex problems – driving polarization: half of people might agree with the simple call to action, but it will automatically enrage the rest. NYU psychology researchers found that each word of moral outrage added to a tweet raises the retweet rate by 17%. Reinforcing outrage compounds mob mentality, where people become increasingly angry about things happening at increasing distances.
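That 17% figure compounds: if each additional moral-outrage word multiplies the expected retweet rate by 1.17, a handful of such words roughly doubles a tweet’s reach. A back-of-the-envelope check, assuming (as the study’s relative framing suggests, though this extrapolation is mine) that the effect is multiplicative word over word:

```python
# Back-of-the-envelope: each moral-outrage word raises the retweet
# rate by ~17%, and the effect compounds multiplicatively.
for n_words in range(6):
    relative_rate = 1.17 ** n_words
    print(f"{n_words} outrage words -> {relative_rate:.2f}x retweet rate")

# Five outrage words already more than double expected reach.
```

The exact multiplier at the top of the range is uncertain; the point is only that small emotional edits yield outsized amplification, which is what the race for attention rewards.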
This leads to “callout culture,” in which angry mobs troll and yell at each other over the least charitable interpretation of simpler and simpler messages. Misinterpreted statements lead to more defensiveness. This leads to more victimization, more baseline anger and polarization, and less social trust. “Callout culture” creates a chilling effect, crowding out the inclusive thinking that reflects the complex world we live in and our ability to construct shared agendas of action. More isolation also means more vulnerability to conspiracies.
As attention starts running out, companies have to “frack” for attention by splitting it into multiple streams – multi-tasking three or four things at once. They might quadruple the size of the attention economy, but they downgrade our attention spans. The average time we can focus drops. Productivity drops.
Naming the interconnected system of harms
These effects are interconnected and mutually reinforcing. Conservative pollster Frank Luntz calls it “the climate change of culture.” We at the Center for Humane Technology call it “human downgrading”:
While tech has been upgrading the machines, it has been downgrading humans – downgrading attention spans, civility, mental health, children, productivity, critical thinking, relationships, and democracy.
It affects everyone
Even if you don’t use these platforms, they still affect you. You still live in a country where other people vote based on what they are recommended. You still send your kids to schools with other parents who believe anti-vaxx conspiracies recommended to them on social media. Measles cases increased 30% between 2016 and 2017, leading the WHO to call “vaccine hesitancy” a top-10 global health threat.
We’re all in the boat together. Human downgrading is like a dark cloud descending upon society that affects everyone.
Competition with china
But human downgrading also matters for global competition. Competing with China, whichever nation least downgrades its population’s attention spans, critical thinking, and mental health – and least inflames political polarization – will be more productive, healthy, and fast-moving on the global stage.
Government’s job is to protect citizens. All of this, I genuinely believe, can be fixed with changes in incentives that match the scope of the problem.
I am not against technology. The genie is out of the bottle. But we need a renaissance of “humane technology” that is designed to protect and care for human wellbeing and the social fabric upon which these technologies are built. We cannot rely on the companies alone to make that change. We need our government to create the rules and guardrails that channel market forces into competition for technology that strengthens society, empowers humans, and protects us from these harms.
Netscape founder Marc Andreessen said in 2011 that “software is eating the world,” because software will inevitably operate aspects of society more efficiently than they ran without it: taxis, election advertising, content generation, and so on.
But technology shouldn’t take over our social institutions and spaces, without taking responsibility for protecting them:
- Technology “ate” election campaigns with Facebook, while taking away FEC protections like equal-price campaign ads.
- Tech “ate” the playing field for global information war, while replacing the protections of NATO and the Pentagon with small teams at Facebook, Google, or Twitter.
- Technology “ate” the dopamine centers of our brains – without the protection of an FDA.
- Technology “ate” children’s development with YouTube, while taking away the protections of Saturday morning cartoons.
Exactly 50 years ago, children’s TV show host Fred “Mister” Rogers testified to this committee about his concern that the race to the bottom in TV rewarded mindless violence and harmed children’s development. Today’s world of YouTube and TikTok is far worse, impacting an exponentially greater number of children with far more alarming material. Today, Mister Rogers wouldn’t have a chance.
But on the day Rogers testified, Senators chose to act and funded a caring vision for children in public television. It was a decision that permanently changed the course of children’s television for the better. Today, I hope you choose to protect citizens and the world order – by incentivizing a caring and “humane” tech economy that strengthens and protects society instead of degrading it.
The consequences of our actions as a civilization are more important than they have ever been, while the human capacities that inform those decisions are being downgraded. If we’re disabling ourselves from making good choices, that’s an existential outcome.