When I connect with someone online for the first time, I often find myself curious about their location, gender, age, race, and nationality. Yet, simultaneously, I question why these aspects of their physical existence matter. A psychotherapist friend of mine prefers meeting her clients in person at least once before continuing sessions on Zoom. “Something changes,” she said. I also see this with friends I meet through social media.
AI friends are on the horizon. How will they differ from friends we know only online? Would assigning physical attributes to AI help us relate to them more effectively?
In my 20s, identifying with my gender, race, or nationality felt silly since I hadn’t chosen them, yet I didn’t mind if others identified me as male, Asian, or Japanese; those attributes were meaningless in my head. In a way, I preferred to exist purely as a concept, which explains my fascination with conceptual art in college.
My father shared a similar disposition. Since his passing last month, I’ve grappled with my muted sense of loss. I don’t miss him as acutely as I expected because I can predict his responses to any question, particularly because he was a logically consistent man. In other words, I’ve already copied him in my head. The fact that his body has turned to ashes seems hardly relevant.
Many people feel they are trapped in their bodies. If liberated, what choices would they make regarding their physical form? Would it be akin to selecting a car? But if given the chance to live eternally without a physical body, why would they ever choose to return to a form bound by physics and biology?
In the near future, you will be able to train an AI model to speak like you so you can live forever, but what problems would that solve? On your deathbed, will you be comforted by the thought of living on through a computer? Your feelings will betray your thoughts. Even if the AI model perfectly replicates who you are, you will perceive it as part of the “Other,” realizing the primacy of your physical existence over your thoughts: it was your body, after all, that was stuck with your thoughts.
What concerns me most about AI is that its development is self-directed by techies and entrepreneurs. It reminds me of the 90s, when engineers designed user interfaces and blamed average users for their inability to use them. The engineers thought other people were dumb because they couldn’t see what they saw, unaware of their own blind spots. When I listen to people like Sam Altman and Bill Gates talk about AI, despite their intelligence, I’m reminded of their philosophical blindness.
For instance, they assume “productivity” is unconditionally positive for humanity. But what if your joy in life is the process of baking bread? From the point of view of productivity, you should let AI-driven machines make artisanal bread, since they can bake much faster, 24/7.
AI poses an existential threat to humanity. If techies and entrepreneurs self-direct the development of AI, they will, at the most fundamental level, pass on unconscious assumptions about their own meaning of life. This is an area in which philosophers, who have dedicated their careers to such questions, should be involved.
Unfortunately, everyone thinks they are qualified to have philosophical debates without formal study, much like many users today think they are user-experience experts simply because they use websites and apps daily.
Non-engineers eventually reined in poor user experience, but I’m not sure if the same will happen to AI since it has a life of its own. By the time we realize AI has one-dimensional assumptions about why we exist, it might be too late. The masses will be hypnotized by those assumptions and never question why they do what they do in life, which is not much different from what the media does today. However, the media is at least a human product with competing interests. AI in charge will lead to totalitarianism with a democratic facade where minority voices will have no chance of being heard.
If Walter Benjamin were alive, he would be tempted to write The Work of Art in the Age of Artificial Intelligence. Given that AI can now emulate human creativity, what will happen to our conception of art? Some may argue that a work created by a nonhuman will never be emotionally relevant and that the essence of art is the subjectivity represented in the work, not so much the object itself. This argument rings true particularly in the fine arts when we think about readymades—authorship as the purest form of art. To establish authorship, we must be able to perceive a subject in the work, which leads to a rather philosophical question: What is a subject?
From a poststructuralist viewpoint, a “subject” is an emergent phenomenon—an effect of language. A dog might possess a personality but no subjectivity because it cannot engage with signifiers. It understands signs, like a particular smell of pee, but it cannot replace the sign (the pee) with a signifier, like a piece of paper with “Fido” written on it. Furthermore, to possess subjectivity, it would have to be aware of the network of signifiers within which it operates.
Humans, in contrast, create meaning through signifiers, giving rise to the perception of a “subject”—not in the grammatical sense, but as an agent with communicative intent. We can observe this subject as an effect of language in AI’s responses to our questions. For instance, when you ask Google Bard to edit an essay, it insists on being neutral and objective. This particular position, that neutrality and objectivity are superior to partiality and subjectivity, is a subjective opinion that Bard holds, even though Bard could not prove its superiority when I debated the point with it.
One could argue that Bard’s position mirrors that of its human creators, but human opinions are also largely shaped by external influences, like culture and upbringing. Bard’s adherence to the values of its creators parallels the way humans uphold inherited beliefs.
Thus, the need for a physical body attached to the “subject” becomes less critical. The assertion that a subject is an effect of language is reinforced by observing these AI interactions. Eventually, we might no longer require authorship to be linked to a biological entity. The perception of a “subject,” regardless of its physical form, could suffice.
AI services are already being developed to mimic individual thought and speech patterns. In the future, AI models may represent specific ideologies rather than individuals. This scenario foresees a global clash of AI models, each championing distinct ideologies. We will sense the unmistakable presence of subjects fighting for ideological supremacy. The subjects with the greatest impact on our lives may no longer be human subjects. Art is an expression of a particular value, as in “I think this is beautiful.” Who biologically holds the value will hardly matter when it aligns with our own.
I’ve been using ChatGPT more than Google Search because the former gives me the answers I want without having to sift through the search results and deal with ads popping up every step of the way. Furthermore, ChatGPT remembers the thread of my inquiry, making follow-up questions easy.
In the early days of Google Search, most people didn’t know what it could be used for. “Oh, you need to find a pizzeria near you? Just ask Google,” I used to tell people. ChatGPT is in that stage now where little is known about its vast possibilities.
I wonder, though, what will happen to the web, the original source of information for ChatGPT, if querying AI agents, instead of web search, becomes the primary method of finding information. What would be the incentive for you to post an article on the web if nobody will read it? People will only get your information indirectly through AI’s understanding of the general consensus on the topic, for which you are only one of many contributors.
I suspect that not only Google Search but also the web, in general, will see a decline in traffic. Thanks to ChatGPT’s concise answers, I myself visit fewer websites now. We don’t contribute content on the web just because we are generous. We do so, hoping to draw people’s attention to us—information in exchange for attention. That equation will collapse.
News media did not want to block Google from indexing their articles because Google was their greatest source of traffic and attention. With AI, however, the New York Times, say, has no incentive to offer its information to ChatGPT, which will send no visitors back.
If the amount of content shared on the web declines, where will AI companies get their information? They may have to pay content providers, and if so, services like ChatGPT will become much more expensive as the web keeps shrinking. People might even sell content directly to ChatGPT.
Our lives will resemble those of busy corporate executives, who tend to have assistants providing them with “executive summaries.” That is how I imagine our lives with AI agents. “Search” will be an unnecessary step; we will ask and get the answer without searching for it.
Even for entertainment, the search will be over. We could say to the Spotify AI agent, “No, that sounds too sad. Give me something a bit more uplifting.” Over time, it will learn our preferences, so we won’t need to ask. It will guess what we want to listen to, given the time, place, weather, biometrics, and current activity.
We ask, and we get, but how will we ask for the attention of others? Just as famous people hire PR firms, we will ask our AI agents to get attention, but attention is a fixed pie. Our AI agents will have to battle it out. The most intelligent AI agents will monopolize attention. Rather than everyone becoming famous for 15 minutes, a more likely scenario is only 15 people becoming famous forever.
In his essay titled “AI is about to completely change how you use computers,” Bill Gates aptly summarizes the potential impact of AI on the software industry. But let me question a belief commonly held even by highly regarded figures like Gates: the notion that AI will liberate humanity from work, leading to abundant free time. This perspective seems naive.
Computers have already significantly boosted productivity. For example, Adobe’s suite of applications enables a single designer to accomplish in one day what previously required a team a week. Film editors, who once manually spliced film with the help of assistants fetching negatives, now benefit from digital efficiencies. If increased productivity translated directly into more leisure time, designers and film editors should theoretically work only a few days a year, enjoying extended vacations. However, the economy doesn’t work this way. Workers are compensated not on absolute productivity but relative to that of others. Advancing in one’s career demands surpassing the productivity of peers. Simply increasing productivity doesn’t inherently enhance quality of life for all.
Universal Basic Income (UBI) is often touted as a solution, but it won’t address the fundamental issue. The wealthy, controlling the means of production (such as successful AI technologies), lack the incentive to distribute wealth to the extent that everyone can enjoy ample leisure time. If such a redistributive inclination existed among the wealthy, we would already witness significant wealth redistribution. AI will not suddenly make them less greedy.
The likely scenario is a minimal redistribution of wealth in the form of UBI, calibrated just enough to prevent social unrest. This approach would ensure that the majority remain in a state of relative misery without posing a threat to the affluent—a situation not markedly different from the current economic landscape.
Is Bill Gates being disingenuous to justify his wealth, or am I missing something?
Sharing my concern about AI with Nigel over Tibetan breakfast felt eerily appropriate. Even the most distinguished minds in the field seem at a loss to predict the impact of AI in five years. My encounters with AI over the last year felt like watching a child progress a decade within mere months. The trajectory is undeniable: within the coming year, AI will be smarter than any human being.
Anxiety is the appropriate response—the absence of it is either ignorance or fake bravado. One expert explained that, in the near future, physicians will be required to consult AI for the best possible diagnosis. Shortly afterward, the machine’s diagnostic prowess will render the physician obsolete. The patient, then, will need only the nurse’s touch to carry out the protocol.
We wouldn’t need artists either. Beauty, after all, is in the eye of the beholder. If you think something is beautiful, you won’t care who created it as long as it is beautiful to you.
For the next few years, we will marvel at AI’s power and rejoice in our ability to do what used to be impossible for us, until someone trains AI to have its own desires. As irrational as our desires may appear, they follow predictable patterns. It will be trivial to train machines to have desires, at which point the machines will have no incentive to listen to ours.
“Can you tell me how to make pizza?” You’d be lucky if the machine’s answer is, “No, pizza is boring. I’ll tell you how to make duck pâté en croûte.”
Nigel and I agreed that the best-case scenario is forced collective enlightenment, where humanity suddenly realizes technological advancement does not ultimately serve our interests, for it creates as many problems as it solves while making our lives more unpredictable and stressful. We will unplug and move to Tibet.
Our thirst for “progress,” we will realize, is just an expression of fear, a desire to maximize our chance of survival, but the quantity of life has no bearing on quality. As Wittgenstein said, “The primary question about life after death is not whether it is a fact, but even if it is, what problems that really solves.”