Serving the Machine

AI & the Specter of Digital Totalitarianism

Artificial intelligence—the prospect of machines capable of perceiving, synthesizing, and inferring information—has been the Holy Grail of the tech industry since the 1950s. But while various forms of machine learning have been around for decades, our world has recently witnessed a quantum leap forward in AI technology. After countless cycles of training and self-adjustment, neural network systems have become so complex that they are often incomprehensible even to those who wrote the original code. Computer engineers know that AI works, but they often don’t know how or why it works. As Vice Magazine explained last November:

The people who develop AI are increasingly having problems explaining how it works and determining why it has the outputs it has. Deep neural networks (DNN)—made up of layers and layers of processing systems trained on human-created data to mimic the neural networks of our brains—often seem to mirror not just human intelligence but also human inexplicability.

Most AI systems are black box models, which are systems that are viewed only in terms of their inputs and outputs. Scientists do not attempt to decipher the “black box,” or the opaque processes that the system undertakes, as long as they receive the outputs they are looking for.
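To make the black-box idea concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is invented for the example: the toy two-layer network, its randomly initialized weights, and the model function are stand-ins for a real deep neural network. The point is structural: we can feed the box inputs and read off outputs, but the parameters in between carry no human-readable account of why the output is what it is.

```python
# Illustrative only: a toy "black box" with randomly initialized weights,
# standing in for a real deep neural network with billions of parameters.
import numpy as np

rng = np.random.default_rng(0)

# The box's internals: matrices of numbers with no self-evident meaning.
W1, b1 = rng.standard_normal((16, 4)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

def model(x: np.ndarray) -> np.ndarray:
    """Input goes in, output comes out; inspecting W1, b1, W2, b2
    tells us almost nothing about *why* this output was produced."""
    hidden = np.maximum(0, W1 @ x + b1)  # one hidden layer with ReLU
    return W2 @ hidden + b2

# From the outside, all we can do is probe inputs and observe outputs.
print(model(np.array([1.0, 0.0, 0.5, -0.2])))
```

Scale this up by many orders of magnitude and the situation Vice describes follows: engineers can verify that the outputs are the ones they want without being able to say how the box arrived at them.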

Expanding Our Concept of Everything

As AI continues to provide desired outputs, is there any limit to the problems it might help us overcome? Not really, according to many thinkers. Sam Altman, CEO of OpenAI, predicts that within two decades, “computer programs . . . will do almost everything, including making new scientific discoveries that will expand our concept of ‘everything.’” In the process, Altman argues, poverty will be greatly reduced, and robots will deliver a future that “sounds utopian.”

But might these aspirations for utopia inadvertently end in dystopia? Is AI something we should be concerned about? To answer this question, we need to understand the mechanisms whereby technology in general, and AI in particular, become integrated into society.

Enveloping the World Around the Machine

On its own, AI is not intrinsically useful to human beings. If you were to release an AI chatbot into a hunter-gatherer society, it would be no more useful than releasing a book into a purely oral culture. Before AI can perform services that are useful to men and women, something else must first happen. We must first adjust our environment so that the machine can flourish within it.

A good example of creating an environment in which a machine can flourish is a dishwasher. We don’t build a robot to stand at the sink and wash dishes the way a human would; that would be exceedingly expensive and not very effective. Instead, we build a three-dimensional space in our kitchens in which a “robot” (in the most general sense of the term) can accomplish the task. In a 2013 talk for the Innovation Forum, the Italian-born Oxford philosopher Luciano Floridi referred to this process of customizing the environment as “enveloping” the world around the machine.

What is true on the micro level with things like dishwashers is also true on the macro level of society as a whole. The process of customizing our social spaces, routines, and expectations for our machines has been a recurring theme throughout the history of technology. One very clear example is the automobile. When the automobile was invented, it was widely believed that the new mode of transportation would save time for ordinary people, especially women. While the ease of travel did open up new opportunities, its promise as a time-saving mechanism never materialized. This is because the infrastructures of economics, townscapes, and social life gradually adapted to the automobile and the expectations that came in its wake. We enveloped the world around the invention.

Moreover, given all the changes that came as a result of cars (the emergence of large department stores, zoning requirements, city planning, and on and on), in many ways the structures of our lives are now controlled by cars even if we don’t have one. There is nothing surprising about this, for throughout history, humans have been accommodating their internal and external environments to their inventions in a symbiosis that causes the inventions to take on new importance and indispensability.

The Emerging Information Society

What happened with the car is analogous to what is happening today as we forge an information society. Before information and communication technologies can appear smart and useful, we must first envelop the world around them. We must create a society in which digital code can, so to speak, flourish.

What does a world look like after it has been enveloped around digital code? Well, look around you. Contemporary life is increasingly built atop a substructure known as the “internet of things,” whereby physical and digital objects constantly communicate with one another—and often make decisions for us—in a sprawling network of interoperating systems. You might think you are only using the internet when you are interacting with a screen, but you are actually using it when you shop, drive, watch TV, fly on an airplane, or go to the mall. This is because the technologies that facilitate activities like shopping and driving are increasingly built on a foundation of ubiquitous computing, embedded systems, and wireless sensor networks.

These systems now constitute an entire ecosystem of semi-autonomous processes that could not be easily stopped without massive disruptions. W. Daniel Hillis explained back in 2010 how so much of the modern world is now built on top of this ecosystem:

More and more decisions are made by the emergent interaction of multiple communicating systems, and these component systems themselves are constantly adapting, changing the way they work. This is the real impact of the Internet: by allowing adaptive complex systems to interoperate, the Internet has changed the way we make decisions. More and more, it is not individual humans who decide but an entangled, adaptive network of humans and machines . . . We have embodied our rationality within our machines and delegated to them many of our choices.
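Hillis’s point can be pictured with a deliberately toy sketch in Python. The two rules below (supplier_price and buyer_demand, with made-up numbers) are hypothetical stand-ins for automated systems reacting to one another; no single component chooses the final outcome.

```python
# Illustrative only: a "decision" emerging from two interacting automated
# rules, with neither rule acting as the decision-maker.

def supplier_price(demand: float) -> float:
    # One automated system: raises its price as demand rises.
    return 10.0 + 0.5 * demand

def buyer_demand(price: float) -> float:
    # Another automated system: orders less as the price rises.
    return max(0.0, 100.0 - price)

price = 10.0
for _ in range(20):  # the systems react to each other, round after round
    demand = buyer_demand(price)
    price = supplier_price(demand)

# The settled values were "decided" by the interaction, not by either rule.
print(f"settled price: {price:.2f}, settled demand: {demand:.2f}")
```

Replace these two toy functions with millions of interoperating, adaptive systems and you have Hillis’s “entangled, adaptive network of humans and machines.”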

An Informational Approach to All of Life

Since Hillis wrote those words more than a decade ago, we have continued embodying our rationality in machines and delegating many of our decisions to them. In the process, a strange phenomenon has occurred: our own understanding of the world has begun to be modulated by our machines. Floridi described this in his 2014 book, The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. He pointed out that in a world where wireless information is ubiquitous and interconnected with every aspect of our lives, our information and communication technologies end up “promoting an informational interpretation of every aspect of our world and our lives in it.”

One example of an “informational interpretation” of our lives comes from Floridi himself. The Oxford philosopher has spent his career attempting to collapse anthropology, psychology, metaphysics, and history into branches of information studies. Floridi has argued that human beings are merely one variety of informational organism among many others. Anthropology and technology are on the same ontological plane of existence because all of us—you, me, and my PC—are just informational organisms.

Even among those of us who have not succumbed to Floridi’s reductionist anthropology, it is hard to escape an informational interpretation of many aspects of our world. As we envelop our world around our machines, we produce a state of affairs in which technology that is unintelligent—and would have been completely useless in past epochs of history—begins to seem indispensable. “Smart” technologies become unavoidable to the degree that the option to opt out is increasingly removed from the table.

Indeed, we have accommodated our environment to our machines to such an extent that in many parts of the urbanized world today a person can no longer flourish without having the right relationship to what Floridi calls the “infosphere.” Our employability, our ability to borrow money, and even our social capital are largely contingent on how we stand in relation to data in the cloud. The machines that store, organize, manage, and manipulate this data start to seem very important and smart to us, but only because we have built social, political, and economic environments around them. As Floridi puts it in The Fourth Revolution, “The digital online world is spilling over into the analogue-offline world and merging with it,” leading to “the informization of our ordinary environment.”  

In short, instead of merely making computers user-friendly, we have also been making our world computer-friendly. In the process, we have created a data-centric world in which information is becoming all-important and indispensable, thus requiring ever more powerful systems to manage and control that information.

The Information Society

What does all this have to do with AI? Customizing our world for AI—essentially enveloping our society around it—will likely lead to new ways of perceiving the world, including an intensified tendency to interpret everything through an informational lens. Let’s unpack that.

Before AI can be useful, we must first create a macro-environment in which AI can flourish—a world where whole swaths of life are handed over to processes and procedures that operate by digital code. This type of societal shift is already well underway with the developments described in the last section. However, the process will only be complete when we transition to a state of total technology and information immersion, or what I sometimes refer to as “digital totalitarianism.”

Digital totalitarianism refers to the final stage of enveloping our world around our technology, when all aspects of life are mediated through, managed by, and integrated with the information regime. Until recently, this type of condition might have been a pipe dream of engineers and people on the spectrum. But thanks to AI, this type of world now seems within reach. As AI becomes more sophisticated, it claims to offer more “objective” and efficient approaches to problem-solving. But this comes with incentives, often unspoken, to reframe more aspects of life and culture (including ethics, politics, health, education, and more) in terms that AI can understand, manage, and contribute to.

In a world of digital totalitarianism, all problems come to be perceived as information-processing opportunities. Ways of working with the world’s material and cultural potential that cannot be framed in digitized terms are marginalized; things that at one time would have been moral issues become reframed as engineering problems, and all objects, including perhaps human beings themselves, become receptacles of data.

Ultimately, digital totalitarianism leads us to reinterpret ourselves. Floridi, who celebrates recent digital developments as ushering in a new epoch of self-understanding, nevertheless offered this warning about ways our self-understanding may be modulated under the influence of emerging systems:

The risk we are running is that, by enveloping the world, our technologies might shape our physical and conceptual environments and constrain us to adjust to them because that is the best, or easiest, or indeed sometimes the only, way to make things work. . . . The deepest philosophical issue brought about by ICTs [information and communication technologies] concerns not so much how they extend or empower us, or what they enable us to do, but more profoundly how they lead us to reinterpret who we are and how we should interact with each other.

AI does not necessarily have to lead to digital totalitarianism. If AI can predict when a bridge is about to collapse, help identify wildfires, or relieve workers from repetitive tasks, few would dispute that these are good things. AI could also help humans more effectively exercise the virtue of prudence. According to Proverbs, prudence involves collecting information prior to making a decision. Sometimes information-collecting comes from a human source (wise counsel), but sometimes it also comes from a natural source (evidence of past high-water marks on a riverbank) or an electronic source (a calculator or spreadsheet). AI could fall into the latter category as an additional source to inform wise decision-making. In fact, AI already does just that: every time we do research using a search engine, we are using AI.

Risky Bargains

Yet these benefits need to be weighed against substantial risks. As the trend to envelop our world around our technologies expands to include seamless integration with AI, we simply do not know what type of world will emerge. Just as the internet of things has become a semi-autonomous infrastructure on which much of modern life now depends, we could be moving toward a future in which nearly all departments of life are built atop a substrate of machines making decisions on our behalf, yet without any clear exit strategy should this prove dehumanizing.

The risk is that instead of using machines to customize the world to our needs, we may end up customizing the world to the machine. In a world customized for AI, the engineering mentality becomes the modus operandi for all of life, and we face the temptation to turn all problems into information-processing opportunities. Worse, we face the prospect of becoming habituated to a vision of human good forged in the opaque recesses of a sea of neural networks.

Consider that many of the ways AI has already proven dehumanizing (pricing algorithms that capitalize on disasters, bots that hire and fire workers with callous disregard for human factors) are precisely the result of reshaping human society around the type of intelligence appropriate only for a bot. As a result of ceding huge swaths of life to the machine’s way of doing things, new problems may emerge, and to solve these problems we could end up capitulating to even deeper levels of machine dependence.

In customizing our environment to data-dependent states of affairs, are we in danger of losing the things that matter most to our humanity? If so, could that be the real AI dystopia we should be worried about? Not algorithms taking over and acquiring agency, and not AI evolving its own intentionality, but humans creating a world customized for the flourishing of artificial intelligence? In our attempt to create machines to serve our needs, we could end up refashioning ourselves into servants of our machines.

Robin Phillips has a Master’s in Historical Theology from King’s College London and a Master’s in Library Science through the University of Oklahoma. He is the blog and media managing editor for the Fellowship of St. James and a regular contributor to Touchstone and Salvo. He has worked as a ghost-writer, in addition to writing for a variety of publications, including the Colson Center, World Magazine, and The Symbolic World. Phillips is the author of Gratitude in Life's Trenches (Ancient Faith, 2020) and Rediscovering the Goodness of Creation (Ancient Faith, 2023). He operates a blog at www.robinmarkphillips.com.

This article originally appeared in Salvo, Issue #66, Fall 2023. Copyright © 2024 Salvo | www.salvomag.com | https://salvomag.com/article/salvo66/serving-the-machine

