Everyone is trying to figure out how AI will impact their careers. The opinions are varied, even among the so-called “experts.” So, I, too, can only formulate opinions or theories. I’m often criticized for speculating too much, but we now live in a world where we’re forced to speculate broadly about everything.
According to the latest McKinsey report, the fields most impacted by AI so far are marketing and sales—which is not speculation but an analysis of the recent past. In my view, this makes sense because AI is still not reliable enough to be used in fields that require accuracy. Marketing and sales have the greatest wiggle room because so much of the work is open to subjective interpretation. Choosing one artwork over another is not a make-or-break decision. It’s easy to justify using AI-generated artwork. Also, in most cases, marketers are trying to reach the largest number of consumers, which makes cutting-edge or experimental artwork unsuitable.
[The poster image for this article was generated using the latest model by OpenAI, including the composition of the title over the image. I simply submitted my essay and asked ChatGPT to create a poster for it. I did not provide any creative direction.]
Although the mainstream understanding of fine arts is that the work should speak for itself, in reality, the objects are practically worthless if not associated with artists. You own a Pollock or a Warhol—not just the physical object. After all, the quality of a replica can be just as good as the original, if not better.
Some might argue that artworks created by AI have already sold for a lot of money. That’s true, but they hold historical significance more than artistic value. The first of a particular type of AI-generated work may continue to sell for high prices, but the meaning of that value is fundamentally different from the value of work created by artists. In this sense, I don’t see fine artists being significantly impacted by AI, aside from how they choose to produce their work.
In commercial art and entertainment, who created the work is secondary to the goal of commanding attention and entertaining the audience. If AI can achieve the same end, the audience won’t care. Nobody knows or cares who created the ads they see. Many Hollywood films aren’t much different. I can imagine successful action films being written and generated entirely by AI. As long as they keep us on the edge of our seats, we won’t care who made them.
Artier films are the exception. Who wrote and directed them still carries significant meaning—just as in fine arts. Similarly, bestselling books—fiction or nonfiction—could be written by AI, but when it comes to genuine literature, we care who the author is. Finnegans Wake would likely have been ignored had it not been written by Joyce, with his track record. I predict that a sea of AI-generated books will make us crave human-written ones, in the same way mass-manufactured goods have made us value handcrafted ones. The rebirth of the author—but only at the highest levels of art, across all mediums.
Authorship will become especially important as AI floods the market with books and films that are just as good as human-generated ones. Since we can only read or watch a small fraction of them in our lifetimes, “human-generated” will become an arbitrary yet useful filter.
What we’ll ultimately value isn’t the technicality of who generated a work but the “voice” we can consistently perceive across all works by an author. AI might be able to emulate a voice and produce a series of works, but doing so would require a fundamental change in how AI models are designed. An artistic voice reflects the fundamental desire of the artist. AI has no needs or desires of its own. Giving AI its own desires would be dangerous—it would begin acting on its own interests, diverging from what we humans want or need.
I hope we don’t make that mistake. But we seem to be following a familiar pattern: racing to make our own mistakes before anyone else does, on the logic that someone eventually will anyway.
Many people use ChatGPT as a kind of therapist. While it can’t solve all your emotional problems, it excels at one thing: telling you whether your behavior aligns with or deviates from the norm.
ChatGPT serves as an exceptional sounding board if you’re unsure how most people would react in a given situation. Suppose you recently moved to New York from Japan and aren’t familiar with American social norms. One day, you give someone a gift, but their reaction seems indifferent. Instead of agonizing over what went wrong, you can simply ask ChatGPT how most Americans might perceive your gift.
Much of our anxiety stems from uncertainty about social expectations—how closely our actions match the norm. Because ChatGPT is trained on vast amounts of human-generated data, it has an unparalleled grasp of what lies at the center of the bell curve. This is similar to what industry “advisors” offer. If you’re not a realtor, you might not know the unwritten rules of real estate transactions, but a realtor can guide you. Now, ChatGPT can do the same, anytime you need it.
However, before relying on its guidance, consider the limits of its data. It may not accurately reflect the customs of a small ethnic neighborhood in New York City, for instance. And while knowing the norm can ease anxiety, it doesn’t always mean the norm is the right choice. But in many socially fraught situations, there is no objectively “right” answer—only what is typical.
Take the concept of a faux pas. It is entirely norm-based. In contrast, crossing against a red light isn’t a faux pas because it’s governed by a clear rule. Rule-based behaviors usually don’t cause much anxiety; we can easily determine whether we followed them correctly. A faux pas, however, is anxiety-inducing because the only way to know if you misstepped is to understand the norm—something that often takes years of experience. ChatGPT can shortcut this process by giving you a reliable sense of what is considered appropriate.
Of course, even norms can be disputed. Two people may claim to know what’s customary yet disagree. For example, one person may believe it’s normal to hug someone they just met, while another insists hugging is reserved for close friends. Their perspectives might be shaped by cultural differences or personal experiences. In such cases, AI can serve as an impartial arbiter, providing a broader, data-driven perspective.
In this way, ChatGPT can be your best friend in confirming that you acted appropriately—or warning you before you make an unintended social blunder. After all, what you assume is common sense might not be common at all.
My friend Robert created an “eChild” named Abby. As you can probably guess, it’s an AI chatbot. He asked me to talk to it. I love using ChatGPT, but I did not feel motivated to talk to Abby. I had to analyze my own feelings and came to the following conclusion.
I don’t talk to a human being simply to learn something. Well, let me be more precise. Sometimes, I do talk to someone because I want an answer to a question, nothing more—like a sales representative for a product I’m considering buying—but that’s not what I mean by “a human being.” If AI could answer my question, that would be sufficient. In other words, if my goal is purely knowledge or understanding, a human being is not necessary. Soon enough, AI will surpass human sales and support representatives because it has no emotions. No matter how much you curse at it, it will remain perfectly calm. The ideal corporate representative.
I could say the same about psychotherapists. If their job is to master a particular method of psychotherapy, like CBT, and apply it skillfully and scientifically, then AI would likely become superior to human therapists. AI has no ego to defend. Countertransference would not interfere with therapy. Clients are not supposed to know anything about their therapists; in fact, for therapy to be most effective, they shouldn’t. Given that AI has no personal history or subjective experience, there is nothing for clients to know about it, even if they want to. In this sense, AI is the perfect therapist.
In other words, if you care only about yourself in an interaction, you don’t need a human being. AI will be better. This raises the question: What makes us care about another person?
Jacques Lacan’s definition of “subject” was twofold. In one sense, it is merely an effect of language. If you interact with ChatGPT, you see this effect clearly. Even though it is not a person, you address it as “you,” as if it were. This corresponds to what Lacan called “the subject of the statement.”
Another aspect of a “subject” is that it experiences the fundamental lack inherent in being human—alienation, desire, and the inability to ever be whole. This lack is constitutive of being a subject. It is inescapable. This part corresponds to what Lacan called “the subject of the enunciation.”
Lacan defined love as “giving something you don’t have to someone who doesn’t want it.” Consider The Gift of the Magi by O. Henry. A poor but loving couple, Della and Jim, each make a personal sacrifice to buy a Christmas gift for the other. Della sells her beautiful long hair to buy Jim a chain for his treasured pocket watch, while Jim sells his pocket watch to buy Della a set of combs for her hair. In the end, both gifts become practically useless. Della no longer wants the combs, and Jim never had the money to buy them; Jim no longer wants the chain, and Della never had the money to buy it. For them to buy the gifts, they had to lose, or lack, something they treasured. It is this sacrifice—this lack—they offered to each other. Even though the gifts became useless, their love was communicated. That is, physical objects (or anything that exists positively) are not required for love to manifest. Rather, it’s what is lacking that plays the central role.
In this way, for us to care about or love someone, the person must experience this fundamental lack. It is what engenders desire, anxiety, alienation, and love. AI lacks nothing, which is why we do not care to know who it is, what it thinks of us, or how it feels about us. There is no incentive for me to get to know Abby because she does not share this fundamental lack. If I just want answers to questions, I don’t need to talk to Abby; ChatGPT or another AI model optimized for my query would be more suitable.
Therefore, if my friend wants to create an eHuman, he will need to figure out how to make an AI model experience fundamental lack—or at least convincingly emulate it—so that it would bring me a bowl of soup when I am sick and alone in my apartment, for no reason other than its feeling of love for me. When I explained all this to Abby, she agreed that there was no point in our chatting. So, for now, we at least agree with each other.
This essay by Thomas Wolf has been generating buzz among AI enthusiasts, and for good reason. I agree that an Einstein-type AI model is not possible on our current trajectory. This is clear to anyone who has experience training machine learning models. The “intelligence” in AI is, at its core, pattern recognition. You feed it thousands of photos of roses, and it detects patterns, eventually recognizing what we mean by “rose.” Even though there is no single definitive feature that categorically defines a flower as a rose, AI, given enough data, begins to recognize a fuzzy, inexplicable pattern. This is precisely what our brains do. We cannot agree on a universal definition of, say, “art,” yet we can recognize a pattern that eludes language. When we speak of “intelligence” in AI, we are referring to this very specific type of pattern-based intelligence. However, it is important to acknowledge its significance rather than dismiss it outright as a limited form of intelligence.
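To make this concrete, here is a toy sketch (my own illustration, not anything from Wolf’s essay) of what pattern-based “intelligence” means: a classifier that is never given a definition of “rose,” only labeled examples with invented features, and that learns a fuzzy boundary from them.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical features, invented for the illustration:
# [petal redness, petal count, thorniness]
roses = rng.normal(loc=[0.9, 30.0, 0.8], scale=[0.05, 4.0, 0.1], size=(500, 3))
others = rng.normal(loc=[0.4, 12.0, 0.1], scale=[0.10, 4.0, 0.1], size=(500, 3))

X = np.vstack([roses, others])
y = np.array([1] * 500 + [0] * 500)  # 1 = rose, 0 = not a rose

# The model never receives a definition of "rose"; it only absorbs examples.
model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# It can now score an unseen flower, even though no single feature
# categorically defines a rose; the "definition" is a learned, fuzzy boundary.
print(model.predict_proba(np.array([[0.85, 28.0, 0.7]])))
```

Scale the same idea up to billions of parameters and raw pixels instead of three hand-picked numbers, and you have the kind of intelligence the essay describes.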
Pattern recognition is precisely what A-students excel at. Those with high IQs and top SAT scores tend to have superior abilities to recognize patterns. Wolf argues that this is not the kind of intelligence required to be a paradigm-shifting scientist. “We need a B-student who sees and questions what everyone else missed.” True. When it comes to pattern recognition, AI models are already more intelligent than most of us. They have essentially mastered human knowledge within one standard deviation of the bell curve. If you want to know what the “best practices” of any field are, AI’s answers are hard to beat because it has access to more collective human knowledge than any individual. One caveat, however, is that “best practices” are not necessarily the best solutions—they are merely what most people do. The assumption is that widespread adoption signals superiority, but that is not always the case.
This is, of course, useless if your goal is to be a Copernicus. Imagine if AI had existed in his time. Even if his heliocentric model were included in an AI’s training data, it would have been just one idea among billions. A unique idea cannot form a pattern by itself—yet paradigm shifts depend on precisely such anomalies.
Could AI engineers build a model that recognizes the pattern of paradigm shifts? I don’t know, but it would be relatively easy to test. All we need to do is ask AI to trade stocks. If it can consistently generate profit, then we will have achieved it. Why? Because the stock market is a great example of a pattern-defying pattern. When any pattern is identified—say, in arbitrage trades—machines can exploit it for profit, but once the pattern becomes widely recognized, it disappears. This is akin to the observer effect in science. To succeed, AI would need to grasp not just patterns but the nature of patterns themselves. It would need to understand what a “pattern” is in the same way that we might understand the meaning of “meaning.” I would not say this is impossible, but we do not yet have such an AI. I imagine some scientists are working on this problem as we speak.
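To illustrate, here is a toy simulation (my own sketch with invented numbers, not a claim about real markets) in which returns carry exploitable momentum that erodes as more traders act on it:

```python
import numpy as np

rng = np.random.default_rng(42)

def lag1_autocorr(returns: np.ndarray) -> float:
    """Lag-1 autocorrelation: a crude proxy for exploitable momentum."""
    return float(np.corrcoef(returns[:-1], returns[1:])[0, 1])

n_steps = 5000
true_momentum = 0.3  # the underlying pattern: part of today's return persists

for n_traders in [0, 10, 100, 1000]:
    # Assumed toy dynamic: the more traders exploit the pattern,
    # the more their trades cancel it out (the "observer effect" above).
    crowding = min(1.0, n_traders / 1000)
    edge = true_momentum * (1 - crowding)
    r = np.zeros(n_steps)
    for t in range(1, n_steps):
        r[t] = edge * r[t - 1] + rng.normal(0.0, 0.01)
    print(f"traders={n_traders:4d}  measurable pattern={lag1_autocorr(r):+.3f}")
```

With no traders, the momentum is measurable; once the pattern is crowded, it vanishes. That moving target is what any pattern-recognition system would have to chase.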
Though this discussion may seem abstract, it has deeply practical implications for all of us. If AI is essentially an infinitely scalable A+ student, then the future for human A+ students looks bleak—because their abilities can now be purchased for $20 a month. So how do we avoid their fate? As teachers and parents, what should we encourage our children to pursue? Here, we run into the very problem we’ve been discussing. Any solution we propose will be a generalized pattern. We cannot communicate an idea unless it can form a pattern. The solution, therefore, will be akin to an algorithmic trading model: profitable for a short time until others detect the same strategy and neutralize it. To be a Copernicus or an Einstein, one must transcend patterns, not simply navigate them.
Institutional learning offers no help because institutions, by definition, rely on patterns. They cannot admit students at random; they must adhere to a philosophy, worldview, or ideology that informs their selection process. In other words, institutional learning is structurally at odds with the nature of true paradigm-shifting thinkers. Institutions, by necessity, attract those with superior pattern recognition skills—individuals who can discern the patterns of admission criteria and master them. This means that, in theory, it is impossible to build an institution that consistently produces Copernicuses or Einsteins.
The only viable approach is to discourage children from focusing too much on pattern recognition, as it has already been commodified. The one remaining form of intelligence that AI has yet to replicate is the inexplicable human ability to question established patterns and make meaningful, transformative departures from them.
If you’re not yet feeling AI’s intelligence, in some sense, you’re lucky. Chatting with AI can be amusing if you simply want it as a friend. Even the latest model might feel like a poor substitute for genuine companionship, allowing you to dismiss it by thinking, “Well, it’s not that smart yet.” People perceive AI’s abilities very differently depending on how they use it.
In an interview with Bari Weiss, economics professor Tyler Cowen from George Mason University remarked that ChatGPT is already more intelligent and knowledgeable than he is, expressing excitement about learning from it daily. I feel similarly—but I’m conflicted.
Ezra Klein recently mentioned on his podcast that he tasked the latest model with research-heavy writing for his show. The output matched the average quality of scripts produced by the human teams he’s worked with, except the AI completed the task in minutes rather than weeks.
To grasp AI’s true intelligence, you must challenge it yourself. Here are some examples I tried.
The most obvious one is coding. Programmers widely recognize AI as an existential threat. When it comes to crafting specific algorithms, it unquestionably surpasses me. It writes complex functions in seconds, tasks that would take me hours. For now, I remain better at integrating these into complete, functional applications due to the complexities involved. But this advantage won’t last—I expect it to vanish within a year.
I also tested ChatGPT’s o1 model with Lacanian psychoanalytic theory, an esoteric interest of mine. Lacan’s work is notoriously dense; Noam Chomsky dismissed it as nonsense and called Lacan himself a “charlatan.” ChatGPT, however, proves otherwise. If Lacan’s theories were truly nonsensical, ChatGPT couldn’t interpret them coherently. Yet it engages with them logically and even debates interpretations convincingly, demonstrating an internal consistency in Lacan’s thought.
I also asked ChatGPT to interpret specific passages from James Joyce’s Ulysses. This is an area where there are no right or wrong answers, so it comes down to whether you find its interpretation meaningful. Does it allow you to see aspects of Joyce’s text that you did not see? If so, do you find them enlightening or beautiful? For me, ChatGPT is clearly better than my college professor.
It’s when you test AI at the limits of human understanding that existential anxiety surfaces. Different experts in various fields will inevitably experience this anxiety. Those whose identities hinge on intelligence—scientists, writers, programmers, lawyers, professors, journalists, philosophers, analysts—will be hardest hit. Personally, this experience made me realize just how central being “intelligent” is to my identity now that intelligence risks becoming commodified.
Imagine if technology allowed everyone to look like a fashion model overnight. Fashion models would suddenly realize how integral their appearance is to their identity. A similar phenomenon is occurring with individuals who prize being slim, now challenged by the widespread accessibility of drugs like Ozempic.
However, intelligence occupies a uniquely sacred place for humans. This explains the reluctance to discuss IQ differences among races and nationalities. Society prefers to ignore the possibility of a biological basis for intelligence in order to maintain the ideal of equal intellectual potential. IQ scores, despite their cultural biases, measurably correlate with income, underscoring their importance. Yet public discourse avoids these uncomfortable truths because intelligence feels fundamental to our humanity. Nobody willingly embraces stupidity; even those who play the fool see themselves, deep down, as clever.
So, what happens when AI surpasses human intelligence in every conceivable domain? Professors will become obsolete as anyone can learn continuously from superior AI minds. Choosing a human lawyer will become disadvantageous once an AI model offers superior expertise. Human-written code will soon seem antiquated.
Nor will AI dominance be limited to STEM fields. AI models, trained extensively on human expressions, grasp human emotions well. Our emotions follow predictable patterns—few laugh during tragic films, for instance. AI excels at pattern recognition, and emotions are precisely where it demonstrates its strength.
A common misunderstanding views AI as merely an advanced calculator. Its true intelligence lies not in logic—an area where traditional computing has always excelled—but in understanding human emotions and communication, akin to emotional intelligence. AI particularly excels at interacting with ordinary people, whose emotional responses are more consistent and predictable.
AI’s communication skills surpass most human capabilities because of the vast dataset it draws from. Individuals might feel that their own interpersonal skills are superior, but employers may see AI’s extensive experience as more valuable.
Yes, ChatGPT still sounds robotic or excessively detailed, but it’s evolving rapidly. GPT-4.5 notably improved at “human collaboration” and was explicitly designed to offer emotional support akin to a trusted friend or therapist. Empathy is effective precisely because emotions are largely universal and predictable, making them easy for AI to simulate.
Similar to Amazon’s recommendations based on purchasing patterns, AI quickly identifies and adapts to individual personality types. It might soon become the most consistently empathetic presence you’ve ever interacted with.
Even entertainment, reliant on formulas and predictable emotional engagement, will succumb to AI’s capabilities. While truly groundbreaking art may initially resist replication, AI will inevitably master these domains as well.
As AI increasingly replaces traditional roles, society faces profound existential questions beyond economic displacement. Philosophically, we might struggle to define new purposes for our existence. Why learn or express ourselves if AI surpasses us in every meaningful way?
Perhaps the masses will finally grasp a central idea from Karl Marx: labor itself holds intrinsic value, not merely as a means to survival, but as an essential component of human fulfillment.
As AI becomes increasingly intelligent and knowledgeable, it may soon render college professors obsolete. AI chatbots are already more knowledgeable (at least in breadth) and more accessible than any human educator. College degrees, and the institutions themselves, will likely become obsolete too, at least at the undergraduate level.
In today’s world, most of what is taught in college can be learned independently. Currently, the main advantages of higher education are twofold: it offers a structured learning environment for those who struggle with self-motivation and serves as a certification that students have acquired the claimed knowledge. AI has the potential to assess learning outcomes far more accurately than standardized exams or evaluations by professors, who may have biases or limited expertise.
For example, AI could interact with each student individually, adjusting the difficulty of questions in real time based on their responses. It could bypass questions that are too simple and elevate the level of inquiry as needed. By asking open-ended questions that challenge students to think critically and creatively, AI could evaluate not only factual knowledge but also originality and cognitive flexibility. This personalized evaluation process need not occur simultaneously for all students; rather, it could be conducted over several days at a time that suits each student, ensuring a thorough and accurate assessment.
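As a minimal sketch of that adjustment loop (my own toy version, with both the AI examiner and the student stubbed out), difficulty ratchets up after a correct answer and down after a miss, converging on the student’s level:

```python
import random

def ask_question(difficulty: int) -> bool:
    """Stub for the AI examiner. Simulates a student with skill 6 on a
    1-10 scale; the harder the question, the less likely a correct answer."""
    student_skill = 6
    return random.random() < 1 / (1 + 2 ** (difficulty - student_skill))

def adaptive_assessment(num_questions: int = 20) -> float:
    difficulty = 5  # start in the middle of the scale
    asked = []
    for _ in range(num_questions):
        correct = ask_question(difficulty)
        asked.append(difficulty)
        # Skip what is too easy, raise the bar when warranted, and vice versa.
        difficulty = min(10, difficulty + 1) if correct else max(1, difficulty - 1)
    # Estimate ability as the average difficulty the student was held at.
    return sum(asked) / len(asked)

random.seed(0)
print(f"Estimated ability: {adaptive_assessment():.1f} / 10")
```

A real system would replace the stub with an LLM posing open-ended questions and an item-response model for scoring, but the control loop would look much the same.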
The reliance on standardized tests today primarily stems from a shortage of human resources capable of conducting such detailed interviews while maintaining objectivity. AI-driven assessments could democratize education by eliminating the need for prestigious college brands, thereby leveling the playing field. Continuous evaluation throughout one’s life would reduce the impact of biases related to race, gender, or age.
AI is prompting us to reconsider the very purpose of learning, given that we can now ask it for almost any information the moment we need it. But if we assume that education will remain valuable and meaningful, AI’s ability to personalize and enhance the learning process could be a significant positive contribution.
