Permanent Record: An Exploration of Freedom and Technology

Oh Jun Kweon
Published in The Startup · Dec 5, 2019 · 14 min read

Photo by Random Institute on Unsplash

Perhaps the first person who comes to mind at the words “freedom” and “technology” is Edward Snowden, the famous (or infamous) whistleblower who, in June 2013, revealed the NSA’s top-secret mass surveillance programs to the world. Six years later, he wrote his autobiography/manifesto/call-to-action, Permanent Record. For me, this book was a critical reminder to seriously contemplate my beliefs about freedom. As a computer science student, and as a citizen of a world in which technology is rapidly outpacing legislation, I find it critical to begin understanding the sociopolitical and ethical impacts of the technology that we are creating.

What Is Freedom?

“The freedom of a country can only be measured by its respect for the rights of its citizens” — Edward Snowden

Let’s begin with a basic definition of freedom from Google Dictionary: “the power or right to act, speak, or think as one wants without hindrance or restraint”. On the surface, this sounds like a good thing that respects individuals. However, absolute freedom would lead to a stateless anarchy in which individuals would be free to do anything, uninhibited by any rule of law, including heinous acts such as murder and other violent crimes.

To prevent this anarchy, or more specifically the danger it poses to individuals, humans throughout history have developed many ways of organizing individuals into societies. One thing that is certain is that we have never agreed on the “best way” of organizing; history reveals a turbulent pursuit of numerous ideals, often through violent means. Today, even though we live in the safest era in human history, our public discourse remains deeply divided. I believe this is actually a good thing, because a diversity of ideas reflects a society that supports freedom of expression. Ironically, one of the major ideologies of today aims to reduce the very freedom of expression that allows it to be voiced in the first place. It is an ideology of surveillance, secrecy, and security.

We’ve all heard the phrase “for reasons of national security” on the news. Nowadays, it seems like any government action can be justified this way. Curbing immigration? National security. A trade war with China? National security. Banning Huawei? National security. I am not making a judgement on whether these policies are good or bad; I am merely pointing out that national security is one of the most frequently cited justifications for policy today. Beyond specific policies, concerns about national security, and the fear that accompanies them, dominate much of our public discourse. More often than not, though, the cost of strengthening national security is individual freedom.

We established at the start that sacrificing some freedoms for national security is not necessarily bad. For example, we give up the right to take another life for the good of society. What is clear is that there is a spectrum ranging from absolute freedom at one end to absolute security at the other, and that neither extreme is ideal. So the question is: where is the “ideal” point on this spectrum?

Which rights are we willing to give up, and which are we not?

A Thought Experiment

Photo by Brian Kostiuk - @BriKost on Unsplash

Imagine a world in which a groundbreaking technology called “the chip” has been invented. The chip is implanted into the brain of every human at birth. Using data collected from your brain, it can tell when you are about to commit a crime with 100% accuracy. Knowing this, the chip makes you physically incapable of committing that crime, leading to a society with a 0% crime rate. Importantly, note that the chip causes absolutely zero health issues, all data is encrypted, the government (or anyone/anything else) has no access to this data, and the chip has no influence on your “free will” or behavior within legal boundaries.

Do you support “the chip”?

Of course, a question that must be asked in this scenario is: who creates the laws? If the state is run by a people-serving democracy in which every voice is heard, you might be more inclined to accept the chip. If the state is run by a state-serving authoritarian regime, you might be more inclined to reject it. For the purposes of this thought experiment, let us assume that the laws are “completely fair” (whatever that means) and supported by everyone. I know this is unrealistic; the purpose of the hypothetical is to separate all practical and privacy concerns from the ethics of the free act in itself.

Society today works on a retributive justice system in which we punish criminals after they have committed a crime. By committing a crime, the criminal forfeits basic freedoms in the future. In contrast, the hypothetical above would never need to punish any criminals because there would be no criminals: all crimes are prevented preemptively. Notably, while no one has to give up any freedom as punishment in the future, everyone gives up their “freedom” to commit a crime in the present.

Practical arguments lead us to an obvious conclusion: the chip is good. The world now has no crime. Everyone can now stop worrying about theft and murder and sexual assault and every other crime that we see on the news every day. We no longer have to live in fear because we are guaranteed that we will not be subjected to crime.

Ethical arguments, though, are not as straightforward. Firstly, how does this chip impact free will? Can we still see ourselves as free, autonomous agents if we do not have the ability to commit a crime? Is there something innately valuable about having total control of your mind and body, even if it leads to bad thoughts and bad actions?

Michael L. Rich, Professor of Law at Elon University, explores similar questions in his paper Should We Make Crime Impossible? It is an excellent source for further reading that takes into account a plethora of practical factors and even considers the realistic case of whether we should make drunk driving impossible. His conclusion is also notable. He writes that “the technologies that might be used to render such crimes impossible would likely be able to collect private data, thus raising substantial privacy concerns”, and that increased government oversight could lead to the “creation of a society all too tolerant of intrusion on their individual rights”.

What could such a society look like?

Dystopian Futures

Two books that come to mind are The Circle (Dave Eggers), a story of corporation-led mass surveillance, and 1984 (George Orwell), a story of state-led mass surveillance and indoctrination. I don’t want to spoil either of these great books, so skip the next few paragraphs if you haven’t read them yet.

*SPOILERS START HERE*

The Circle

The Circle was a relatable read for me, in the sense that I am a computer science student and one day I hope to work for a large, influential tech company like the Circle. The striking parallel to the tech culture of Silicon Valley today seems intentional: everything from the over-eagerness to share everything to the relentless pursuit of applying technology to every aspect of our lives. Like many tech companies today, the Circle has a “good” vision of connecting the world and using technology to make our lives easier. Despite this “good” intention, though, the Circle ends up harming the lives of many people, from Mae’s parents to Mercer, whose refusal to cooperate with “technological progress” ends in his tragic death.

Another notable fact is that the founder of the company, Ty, wants to tear the company down. He created the technology for this company out of pure intellectual curiosity, not realizing its sociopolitical consequences at the time. But once corporate ownership takes over, he is terrified of the monster that his technology is becoming. This is not so different from real life, in which many young, passionate, and curious technologists create amazing projects for their personal satisfaction, but the tools get used for more questionable purposes under corporate ownership.

Photo by Jp Valery on Unsplash

This is all but inevitable in a capitalist economy because of the profit motive. Corporations are incentivized to do whatever it takes to grow their user base and revenues, even at the cost of the social good, because if they don’t take the “dirty” road of thoughtless technological improvement to gain an edge over the competition, they will be beaten by those who do.

We can consider this an example of the prisoner’s dilemma: individual corporations are incentivized to choose the “selfish” strategy, knowing that it doesn’t produce the most social good, because the “selfish” strategy strictly dominates the “benevolent” strategy. It’s the only way for an individual corporation to stay competitive (see the sketch below). Let’s compare this with another prisoner’s dilemma: the energy industry. If company A uses environmentally and socially detrimental strategies to lower its costs, it is going to beat company B, which incurs higher costs to respect environmental and social safety. How is this dilemma solved? Government regulation. If the government imposes laws on how companies may treat the environment and their workers, company B no longer has to worry about balancing its competitiveness with the social good.
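To make the dominance concrete, here is a minimal sketch of that two-firm dilemma. The payoff numbers are hypothetical and purely illustrative; the point is only that “defect” (cutting corners) is the better reply to either choice by the other firm, even though mutual cooperation produces more overall good.

```python
# A minimal prisoner's dilemma sketch with hypothetical payoffs.
# "cooperate" = respect the social good; "defect" = cut corners for a competitive edge.
# Each entry maps (A's action, B's action) to illustrative profits (A, B).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # both act responsibly
    ("cooperate", "defect"):    (0, 5),  # A plays fair, B undercuts A
    ("defect",    "cooperate"): (5, 0),  # A undercuts B
    ("defect",    "defect"):    (1, 1),  # race to the bottom
}

def best_response(opponent_action: str) -> str:
    """Return company A's profit-maximizing action against a fixed action by B."""
    return max(("cooperate", "defect"),
               key=lambda action: PAYOFFS[(action, opponent_action)][0])

# Defecting is the best response no matter what the other firm does,
# even though mutual cooperation (3, 3) beats mutual defection (1, 1).
print(best_response("cooperate"))  # -> defect
print(best_response("defect"))     # -> defect
```

Regulation changes the payoffs: if cutting corners carries a penalty larger than the edge it buys, cooperation stops being dominated and the dilemma dissolves.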

Unfortunately, the solution isn’t so easy in the case of technological development. Unlike in the energy industry, lawmakers have no idea what current technology even is, how it works, or what it’s capable of. Even if they did, the pace of technological innovation is so rapid that it is almost impossible (and, I believe, unreasonable) for lawmakers to keep up with the endless technical intricacies and developments. This raises the question: how are regulators meant to regulate something they don’t fully understand?

1984

George Orwell introduces a society in which the state maintains control through mass surveillance and mass indoctrination (which is enforced by mass surveillance). The scope of this story transcends technological progress and is a broader commentary on what a society is, what value culture has, how leaders of progress are often unreliable (much like Animal Farm), and so much more. But let’s focus on the role of surveillance.

Surveillance is the state’s ultimate tool for controlling its people. The state distorts the population’s sense of time and memory through state-controlled news and state-controlled activity, enforced by mass surveillance.

A great example of this is the “thought police”, whose purpose is to discover and punish people who have thoughts the state disapproves of. How do they discover them? Mass surveillance. Thoughts lead to actions and behavioral changes; the thought police monitor these and intervene, in secret, as they see fit. Now imagine that it’s not a human monitoring you but an artificially intelligent agent watching your every move 24/7. Take it a step further: imagine this agent isn’t just monitoring your physical movements. What if it monitored all of your online activity and all of your communications, and had access to biological signals like your heart rate and brain waves? This could be our future.

It may seem that I am taking a largely pessimistic stance on technological innovation, but I actually consider myself a huge fan. I believe technology has the potential to solve problems on a scale never before possible in the history of mankind, if the powerful tools we create are used for the right purposes. That’s why I’m majoring in computer science myself. But there’s no guarantee that these tools will only be used by good actors. What if they fall into the hands of criminals, terrorists, foreign political meddlers, or a tyrannical government? Think of the havoc they could create, when even well-intentioned technological leaders cause harm they didn’t foresee.

The one piece of hope is the unrelenting persistence of Winston, the protagonist, who fights to protect rationality and the truth. Even though he is eventually broken by the state, it takes a gargantuan amount of effort for the state to break just one individual. My takeaway is that, while the state can break an individual, it cannot break every individual.

We must all strive to engage in ethical discussions about technology until we cannot be heard, much as Winston strove to protect the truth. If everybody recognizes the importance of considering the technologies we are creating today and their future implications, we force governments and corporations to listen and adjust. But if we remain silent and resign ourselves to the idea that these discussions take too much effort, governments and corporations are given a free pass to keep doing whatever they want.

Our only hope is mass discussion and attention.

*SPOILERS END HERE*

While these books provide an imaginative and insightful glimpse into a potential future, that future might not be as far away as we like to imagine. Just last week, China announced that everyone must undergo a face scan when registering for new mobile services. This allows the government to track every person in the country with a smartphone. Note that this is in addition to the already existing system of mass surveillance built on high-resolution CCTV cameras and exceptional facial recognition software, reinforced by camera-equipped drones. All the collected data feeds into an algorithm that determines a “social credit score” for every individual in the country, which helps the government decide whom to reward and whom to punish.

China isn’t alone in the effort of mass surveillance, however; it is simply more explicit about it, at least according to Edward Snowden. His revelations tell us that the United States government has the capability to view anybody’s emails, chat logs, photos, documents, browsing history… pretty much anything you do online. It was even reported that Angela Merkel had her phone tapped by the NSA. Using this information, the government can build a profile of everyone who uses the Internet.

While all of this sounds like dystopian fiction, we must recognize that it is the reality we live in today. We must also recognize that governments and corporations have no intention of stopping here. Mass surveillance, and perhaps manipulation, is about to become far more prevalent and capable with the rising investment in artificial intelligence.

Photo by Franck V. on Unsplash

Artificial Intelligence

“the first is AI, the second is also AI and the third is AI as well” — Masayoshi Son, CEO of SoftBank

It seems that every nation in the world recognizes the importance of artificial intelligence. To name a few, President Trump recently signed an executive order called the American AI Initiative, Saudi Arabia hails the significance of AI for its Vision 2030, China includes AI as an essential ingredient of Made in China 2025, and even my home country of South Korea has recently announced that it aims to be an “AI powerhouse”. In a meeting with Korean President Moon, when asked what Korea should focus on in the future, SoftBank CEO Masayoshi Son said that “the first is AI, the second is also AI and the third is AI as well”. With a technology “Vision Fund” of over $100 billion, SoftBank is the largest technology investor in the world. Clearly, businesses also recognize the importance and benefits of artificial intelligence.

Due to the rapidly rising investment in AI over the last decade, AI has become foundational to almost everything we do online. Your YouTube and Netflix recommendations? AI. Your Facebook feed? AI. Even beyond consumer-facing features, AI has become omnipresent in data processing and analysis, and virtually everything on the Internet carries data and associated metadata. Most AI today is built to use existing data about users to predict their future behavior so that the website or app can adjust itself to be more useful to the user (video recommendations, Gmail autocomplete, etc.). However, Edward Snowden hints that this kind of predictive algorithm can be seen as a form of “coercion”, in which the user is fed certain information without their knowledge or consent.
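To illustrate the general idea, here is a toy sketch (not any real platform’s system) of how a feed might use a user’s past behavior to predict and rank what they see next. The tags, history, and scoring rule are all hypothetical.

```python
# Toy feed ranking: score candidates by overlap with the user's past behavior.
from collections import Counter

# Hypothetical watch history and candidate videos, each tagged by topic.
watch_history = ["politics", "politics", "tech", "politics", "sports"]
candidates = {
    "video_a": ["politics"],
    "video_b": ["cooking"],
    "video_c": ["tech", "politics"],
}

def score(tags, history):
    """Score a candidate by how often its tags appear in the user's history."""
    freq = Counter(history)
    return sum(freq[t] for t in tags)

# Rank candidates: the feed quietly skews toward whatever the user already
# engages with, without the user ever being asked.
ranked = sorted(candidates, key=lambda v: score(candidates[v], watch_history),
                reverse=True)
print(ranked)  # ['video_c', 'video_a', 'video_b']
```

Scaled up with far richer data and far better models, this is the dynamic behind the “coercion” and echo-chamber concerns discussed below: the user never chooses the ranking rule, yet it shapes what they see.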

Yuval Noah Harari offers a more macro, philosophical perspective on this “coercion”. In his book 21 Lessons for the 21st Century, he warns of a future in which governments and corporations will know you better than you know yourself. This will be possible through the massive amount of data collected on every individual using the mass surveillance tools and artificial intelligence technologies being developed today. In this future, armed with superior knowledge of who you are, governments and corporations will be able to manipulate you into making certain decisions and supporting certain ideologies without your even knowing it, compromising your free will and autonomy. What then becomes of our freedom? Can we consider ourselves free if we are simply being manipulated by computers that know us better than we know ourselves?

Note again that this future may not be as far away as we believe. A recent example of minds being manipulated through data is the “fake news” epidemic during the 2016 US presidential election. Facebook and Twitter feeds were compromised by misinformation spread effectively with the use of AI. AI wrote articles, AI spread articles, and Facebook’s and Twitter’s own AI was designed in such a way that people were fed news they were more likely to agree with, acting as a propaganda machine that further entrenched the beliefs of both extremes with (often) false information.

This is a classic example of the social drawbacks of AI that get neglected in the race for technological development. Even good actors with good intentions can cause irreparable harm on a huge scale. This is to be expected: we are the first humans in history to have access to AI, so it’s only natural that we don’t fully understand its social and ethical consequences. But it seems to me that the world today is tunnel-visioned on the sparkling benefits of AI, unwilling even to have the ethical discussion. Unfortunately, the scale and nature of AI development are such that, by the time we realize we have made a mistake, it will probably be too late to fix it. Elon Musk, one of the most innovative and optimistic technologists of our time yet notably cautious about AI, has warned us of exactly this.

Acting on that fear, Elon Musk even co-founded OpenAI, an AI research company with the mission of “discovering and enacting the path to safe artificial general intelligence”. I believe this type of research into safe AI is essential, and we must have a more sophisticated public discourse about the sociopolitical and ethical impacts of AI in our society. Before it’s too late. If you are interested in hearing more perspectives on AI and ethics, I highly recommend the Beyond The Turing Test podcast run by a friend of mine in the AI Robotics Ethics Society at UCLA.

Coming Back To Freedom

Despite the apparent focus on technology, this article is fundamentally about freedom. Freedom gives us agency and a chance to live a meaningful life. Technology promises a chance to live an easier life. In exchange, we sacrifice our data, privacy, and ultimately, freedom.

I don’t mean to say that technological innovation is inherently bad; rather, I believe it can open doors we didn’t even know existed and legitimately solve important problems. However, given the rapid technological innovation of today, we must consider what we are actually creating. What tools and values are we leaving behind for future generations? We are desperately in need of more public awareness and engagement, as well as scrutiny of what the technological hegemonies of today intend for tomorrow.

It’s impossible to know what the future is going to look like, but that doesn’t mean we must resign to whatever future is determined for us by technological leaders.

What’s certain, though, is that, whatever happens in the future, we are all already part of a permanent record.

“The freedom of a country can only be measured by its respect for the rights of its citizens, and it’s my conviction that these rights are in fact limitations of state power that define exactly where and when a government may not infringe into that domain of personal or individual freedoms that during the American Revolution was called “liberty” and during the Internet Revolution is called “privacy”” — Edward Snowden

Like Clicking Buttons?

Follow me to create our future together. Read and appreciate my human-written articles before the computers take over.
