AI As Common Sense God

By Dyske Suematsu  •  March 17, 2025

Many people use ChatGPT as a kind of therapist. While it can’t solve all your emotional problems, it excels at one thing: telling you whether your behavior aligns with or deviates from the norm.

ChatGPT serves as an exceptional sounding board if you’re unsure how most people would react in a given situation. Suppose you recently moved to New York from Japan and aren’t familiar with American social norms. One day, you give someone a gift, but their reaction seems indifferent. Instead of agonizing over what went wrong, you can simply ask ChatGPT how most Americans might perceive your gift.

Much of our anxiety stems from uncertainty about social expectations—how closely our actions match the norm. Because ChatGPT is trained on vast amounts of human-generated data, it has an unparalleled grasp of what lies at the center of the bell curve. This is similar to what industry “advisors” offer. If you’re not a realtor, you might not know the unwritten rules of real estate transactions, but a realtor can guide you. Now, ChatGPT can do the same, anytime you need it.

However, before relying on its guidance, consider the limits of its data. It may not accurately reflect the customs of a small ethnic neighborhood in New York City, for instance. And while knowing the norm can ease anxiety, it doesn’t always mean the norm is the right choice. But in many socially fraught situations, there is no objectively “right” answer—only what is typical.

Take the concept of a faux pas. It is entirely norm-based. In contrast, crossing the street against a red light isn’t a faux pas because it’s governed by a clear rule. Rule-based behaviors usually don’t cause much anxiety; we can easily determine whether we followed them correctly. A faux pas, however, is anxiety-inducing because the only way to know whether you misstepped is to understand the norm—something that often takes years of experience to absorb. ChatGPT can shortcut this process by giving you a reliable sense of what is considered appropriate.

Of course, even norms can be disputed. Two people may claim to know what’s customary yet disagree. For example, one person may believe it’s normal to hug someone they just met, while another insists hugging is reserved for close friends. Their perspectives might be shaped by cultural differences or personal experiences. In such cases, AI can serve as an impartial arbiter, providing a broader, data-driven perspective.

In this way, ChatGPT can be your best friend in confirming that you acted appropriately—or warning you before you make an unintended social blunder. After all, what you assume is common sense might not be common at all.

The Missing Lack: Why AI Can’t Love

By Dyske Suematsu  •  March 13, 2025

My friend Robert created an “eChild” named Abby. As you can probably guess, it’s an AI chatbot. He asked me to talk to it. I love using ChatGPT, but I did not feel motivated to talk to Abby. I had to analyze my own feelings and came to the following conclusion.

I don’t talk to a human being simply to learn something. Well, let me be more precise. Sometimes, I do talk to someone because I want an answer to a question, nothing more—like a sales representative for a product I’m considering buying—but that’s not what I mean by “a human being.” If AI could answer my question, that would be sufficient. In other words, if my goal is purely knowledge or understanding, a human being is not necessary. Soon enough, AI will surpass human sales and support representatives because it has no emotions. No matter how much you curse at it, it will remain perfectly calm. The ideal corporate representative.

I could say the same about psychotherapists. If their job is to master a particular method of psychotherapy, like CBT, and apply it skillfully and scientifically, then AI would likely become superior to human therapists. AI has no ego to defend. Countertransference would not interfere with therapy. Clients are not supposed to know anything about their therapists; in fact, for therapy to be most effective, they shouldn’t. Given that AI has no personal history or subjective experience, there is nothing for clients to know about it, even if they want to. In this sense, AI is the perfect therapist.

In other words, if you care only about yourself in an interaction, you don’t need a human being. AI will be better. This raises the question: What makes us care about another person?

Jacques Lacan’s definition of “subject” was twofold. In one sense, it is merely an effect of language. If you interact with ChatGPT, you see this effect clearly. Even though it is not a person, you address it as “you,” as if it were. This corresponds to what Lacan called “the subject of the statement.”

Another aspect of a “subject” is that it experiences the fundamental lack of being human—alienation, desire, and the inability to ever be whole. This lack is constitutive of being a subject. It is inescapable. This part corresponds to what Lacan called “the subject of the enunciation.”

Lacan defined love as “giving something you don’t have to someone who doesn’t want it.” Consider The Gift of the Magi by O. Henry. A poor but loving couple, Della and Jim, each make a personal sacrifice to buy a Christmas gift for the other. Della sells her beautiful long hair to buy Jim a chain for his treasured pocket watch, while Jim sells his pocket watch to buy Della a set of combs for her hair. In the end, both gifts become practically useless. Della cannot use the combs Jim bought, and Jim no longer has the money he spent on them; Jim cannot use the chain Della bought, and Della no longer has the money she spent on it. To buy the gifts, each had to lose, or lack, something they treasured. It is this sacrifice—this lack—that they offered to each other. Even though the gifts became useless, their love was communicated. That is, the physical objects (or anything that exists positively) are not required for love to manifest. Rather, it’s what is lacking that plays the central role.

In this way, for us to care about or love someone, the person must experience this fundamental lack. It is what engenders desire, anxiety, alienation, and love. AI lacks nothing, which is why we do not care to know who it is, what it thinks of us, or how it feels about us. There is no incentive for me to get to know Abby because she does not share this fundamental lack. If I just want answers to questions, I don’t need to talk to Abby; ChatGPT or another AI model optimized for my query would be more suitable.

Therefore, if my friend wants to create an eHuman, he will need to figure out how to make an AI model experience fundamental lack—or at least convincingly emulate it—so that it would bring me a bowl of soup when I am sick and alone in my apartment, for no reason other than its feeling of love for me. When I explained all this to Abby, she agreed that there is no point for us to be chatting. So, for now, we at least agree with each other.

Why AI Can’t Think Like Einstein (Yet)

By Dyske Suematsu  •  March 12, 2025

This essay by Thomas Wolf has been generating buzz among AI enthusiasts, and for good reason. I agree that an Einstein-type AI model is not possible on our current trajectory. This is clear to anyone who has experience training machine learning models. The “intelligence” in AI is, at its core, pattern recognition. You feed it thousands of photos of roses, and it detects patterns, eventually recognizing what we mean by “rose.” Even though there is no single definitive feature that categorically defines a flower as a rose, AI, given enough data, begins to recognize a fuzzy, inexplicable pattern. This is precisely what our brains do. We cannot agree on a universal definition of, say, “art,” yet we can recognize a pattern that eludes language. When we speak of “intelligence” in AI, we are referring to this very specific type of pattern-based intelligence. However, it is important to acknowledge its significance rather than dismiss it outright as a limited form of intelligence.
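
The fuzzy, example-driven recognition described above can be sketched with a toy nearest-neighbor classifier. No single feature defines a “rose,” yet enough labeled examples let a new flower be classified by resemblance. (A minimal illustration using made-up two-number feature vectors, not a real image model.)

```python
import math
import random

random.seed(0)

# Toy "rose vs. tulip" dataset: each flower is a made-up feature vector.
# No single threshold separates the classes; the boundary emerges only
# from the examples taken as a whole.
roses  = [(random.gauss(0.8, 0.15), random.gauss(0.2, 0.15)) for _ in range(200)]
tulips = [(random.gauss(0.3, 0.15), random.gauss(0.7, 0.15)) for _ in range(200)]
data = [(point, "rose") for point in roses] + [(point, "tulip") for point in tulips]

def classify(point, k=7):
    """Label a new flower by majority vote among its k nearest examples."""
    nearest = sorted(data, key=lambda d: math.dist(d[0], point))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(classify((0.85, 0.15)))  # a point near the rose cluster
print(classify((0.25, 0.75)))  # a point near the tulip cluster
```

The classifier never states what a rose “is”; it only measures resemblance to prior examples, which is the pattern-based intelligence the essay describes.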

Pattern recognition is precisely what A-students excel at. Those with high IQs and top SAT scores tend to have superior abilities to recognize patterns. Wolf argues that this is not the kind of intelligence required to be a paradigm-shifting scientist. “We need a B-student who sees and questions what everyone else missed.” True. When it comes to pattern recognition, AI models are already more intelligent than most of us. They have essentially mastered human knowledge within one standard deviation of the bell curve. If you want to know what the “best practices” of any field are, AI’s answers are hard to beat because it has access to more collective human knowledge than any individual. One caveat, however, is that “best practices” are not necessarily the best solutions—they are merely what most people do. The assumption is that widespread adoption signals superiority, but that is not always the case.

This is, of course, useless if your goal is to be a Copernicus. Imagine if AI had existed in his time. Even if his heliocentric model were included in an AI’s training data, it would have been just one idea among billions. A unique idea cannot form a pattern by itself—yet paradigm shifts depend on precisely such anomalies.

Could AI engineers build a model that recognizes the pattern of paradigm shifts? I don’t know, but it would be relatively easy to test. All we need to do is ask AI to trade stocks. If it can consistently generate profit, then we will have achieved it. Why? Because the stock market is a great example of a pattern-defying pattern. When any pattern is identified—say, in arbitrage trades—machines can exploit it for profit, but once the pattern becomes widely recognized, it disappears. This is akin to the observer effect in science. To succeed, AI would need to grasp not just patterns but the nature of patterns themselves. It would need to understand what a “pattern” is in the same way that we might understand the meaning of “meaning.” I would not say this is impossible, but we do not yet have such an AI. I imagine some scientists are working on this problem as we speak.
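
The “pattern-defying pattern” can be made concrete with a toy simulation: a predictable mispricing is profitable while few traders exploit it, but each exploiter both splits the profit and pushes the price toward fair value, so the edge decays toward zero as the pattern becomes widely recognized. (A stylized sketch with invented parameters, not a model of any real market.)

```python
# Toy model: an asset is mispriced by `edge` each round. Every trader who
# exploits the mispricing also corrects part of it, so the pattern erases
# itself as its success attracts imitators.
def simulate(rounds=10, edge=1.0, traders=1, correction=0.5):
    profits = []
    for _ in range(rounds):
        profits.append(edge / max(traders, 1))  # the edge is split among traders
        edge *= (1 - correction) ** traders     # each trader corrects the price
        traders += 1                            # success attracts imitators
    return profits

profits = simulate()
print([round(p, 4) for p in profits])  # per-trader profit shrinks every round
```

The strictly shrinking profits illustrate why exploiting a known pattern is self-limiting, and why consistent profit would require grasping how patterns form and dissolve rather than any one pattern.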

Though this discussion may seem abstract, it has deeply practical implications for all of us. If AI is essentially an infinitely scalable A+ student, then the future for human A+ students looks bleak—their abilities can now be purchased for $20 a month. So how do we avoid their fate? As teachers and parents, what should we encourage our children to pursue? Here, we run into the very problem we’ve been discussing. Any solution we propose will be a generalized pattern; we cannot communicate an idea unless it can form one. The solution, therefore, will be akin to an algorithmic trading model: profitable for a short time until others detect the same strategy and neutralize it. To be a Copernicus or an Einstein, one must transcend patterns, not simply navigate them.

Institutional learning offers no help because institutions, by definition, rely on patterns. They cannot admit students at random; they must adhere to a philosophy, worldview, or ideology that informs their selection process. In other words, institutional learning is structurally at odds with the nature of true paradigm-shifting thinkers. Institutions, by necessity, attract those with superior pattern recognition skills—individuals who can discern the patterns of admission criteria and master them. This means that, in theory, it is impossible to build an institution that consistently produces Copernicuses or Einsteins.

The only viable approach is to discourage children from focusing too much on pattern recognition, as it has already been commodified. The one remaining form of intelligence that AI has yet to replicate is the inexplicable human ability to question established patterns and make meaningful, transformative departures from them.

I Now Have AI-induced Existential Anxiety

By Dyske Suematsu  •  March 10, 2025

If you’re not feeling this anxiety, in some sense, you’re lucky. Chatting with AI can be amusing if you simply want a friend. Even the latest model might feel like a poor substitute for genuine companionship, allowing you to dismiss it by thinking, “Well, it’s not that smart yet.” How people perceive AI’s abilities depends on how they use it.

In an interview with Bari Weiss, economics professor Tyler Cowen from George Mason University remarked that ChatGPT is already more intelligent and knowledgeable than he is, expressing excitement about learning from it daily. I feel similarly—but I’m conflicted.

Ezra Klein recently mentioned on his podcast that he tasked the latest model with writing a heavily researched show. The output matched the average quality of scripts produced by human teams he’s worked with, except AI completed the task in minutes rather than weeks.

To grasp AI’s true intelligence, you must challenge it yourself. Here are some examples I tried.

The most obvious one is coding. Programmers widely recognize AI as an existential threat. When it comes to crafting specific algorithms, it unquestionably surpasses me. It writes complex functions in seconds, tasks that would take me hours. For now, I remain better at integrating these into complete, functional applications due to the complexities involved. But this advantage won’t last—I expect it to vanish within a year.

I also tested ChatGPT’s o1 model with Lacanian psychoanalytic theory, an esoteric interest of mine. Lacan’s work is notoriously dense; Noam Chomsky dismissed it as nonsense and called Lacan himself a “charlatan.” ChatGPT, however, suggests otherwise. If Lacan’s theories were truly nonsensical, ChatGPT couldn’t interpret them coherently. Yet it engages with them logically and even debates interpretations convincingly, demonstrating an inherent consistency in Lacan’s thought.

I also asked ChatGPT to interpret specific passages from James Joyce’s Ulysses. This is an area where there are no right or wrong answers, so it comes down to whether you find its interpretation meaningful. Does it allow you to see aspects of Joyce’s text that you did not see? If so, do you find them enlightening or beautiful? For me, ChatGPT is clearly better than my college professor.

It’s when you test AI at the limits of human understanding that existential anxiety surfaces. Different experts in various fields will inevitably experience this anxiety. Those whose identities hinge on intelligence—scientists, writers, programmers, lawyers, professors, journalists, philosophers, analysts—will be hardest hit. Personally, this experience made me realize just how central being “intelligent” is to my identity now that intelligence risks becoming commodified.

Imagine if technology allowed everyone to look like a fashion model overnight. Fashion models would suddenly realize how integral their appearance is to their identity. A similar phenomenon is occurring with individuals who prize being slim, now challenged by the widespread accessibility of drugs like Ozempic.

However, intelligence occupies a uniquely sacred place for humans. This explains the reluctance to discuss IQ differences among races and nationalities. Society prefers to ignore the possibility of a biological basis for intelligence in order to maintain the ideal of equal intellectual potential. IQ scores, despite their cultural biases, correlate measurably with income, underscoring their importance. Yet public discourse avoids these uncomfortable truths because intelligence feels fundamental to our humanity. Nobody willingly embraces stupidity; even those who play the fool see themselves, deep down, as clever.

So, what happens when AI surpasses human intelligence in every conceivable domain? Professors become obsolete as anyone can learn continuously from superior AI minds. Choosing a human lawyer would become disadvantageous when an AI model offers superior expertise. Human coding will soon seem antiquated.

Nor will AI dominance be limited to STEM fields. AI models, trained extensively on human expressions, grasp human emotions well. Our emotions follow predictable patterns—few laugh during tragic films, for instance. AI excels at pattern recognition, and emotions are precisely where it demonstrates its strength.

A common misunderstanding views AI as merely an advanced calculator. Its true intelligence lies not in logic—an area where traditional computing has always excelled—but in understanding human emotions and communication, akin to emotional intelligence. AI particularly excels at interacting with ordinary people, whose emotional responses are more consistent and predictable.

AI’s communication skills surpass those of most humans because of the vast dataset it draws from. Though individuals might feel their own interpersonal skills are superior, employers may see AI’s extensive experience as more valuable.

Yes, ChatGPT still sounds robotic or excessively detailed, but it’s evolving rapidly. ChatGPT 4.5 notably improved in “human collaboration,” designed explicitly to offer emotional support akin to a trusted friend or therapist. Empathy is effective precisely because emotions are largely universal and predictable, making them easily simulated by AI.

Similar to Amazon’s recommendations based on purchasing patterns, AI quickly identifies and adapts to individual personality types. It might soon become the most consistently empathetic presence you’ve ever interacted with.
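
The mechanism gestured at here, inferring what someone is like from behavioral overlap with others, can be sketched with a tiny “people who bought X also bought Y” counter. (The purchase histories and item names below are invented for illustration; real recommenders work on the same principle at vastly larger scale.)

```python
from collections import Counter

# Hypothetical purchase histories: each person maps to a set of items.
purchases = {
    "ann":  {"novel", "teapot", "headphones"},
    "ben":  {"novel", "teapot"},
    "cara": {"teapot", "headphones"},
    "dan":  {"novel", "headphones", "notebook"},
}

def recommend(item, k=2):
    """Rank the items most often bought alongside `item`."""
    co_bought = Counter()
    for basket in purchases.values():
        if item in basket:
            co_bought.update(basket - {item})
    return [name for name, _ in co_bought.most_common(k)]

print(recommend("novel"))
```

Nothing here understands books or teapots; co-occurrence alone yields usable predictions, which is why the same pattern-matching applies just as readily to personality types and emotional responses.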

Even entertainment, reliant on formulas and predictable emotional engagement, will succumb to AI’s capabilities. While truly groundbreaking art may initially resist replication, AI will inevitably master these domains as well.

As AI increasingly replaces traditional roles, society faces profound existential questions beyond economic displacement. Philosophically, we might struggle to define new purposes for our existence. Why learn or express ourselves if AI surpasses us in every meaningful way?

Perhaps the masses will finally grasp a central idea from Karl Marx: labor itself holds intrinsic value, not merely as a means to survival, but as an essential component of human fulfillment.

Rethinking College Degrees in the Age of AI

By Dyske Suematsu  •  March 4, 2025

As AI becomes increasingly intelligent and knowledgeable, it may soon render college professors obsolete. AI chatbots are already more knowledgeable (at least in terms of breadth) and accessible than any human educators. College degrees and the institutions themselves will likely become obsolete too, at least at the undergraduate level.

In today’s world, most of what is taught in college can be learned independently. Currently, the main advantages of higher education are twofold: it offers a structured learning environment for those who struggle with self-motivation and serves as a certification that students have acquired the claimed knowledge. AI has the potential to assess learning outcomes far more accurately than standardized exams or evaluations by professors, who may have biases or limited expertise.

For example, AI could interact with each student individually, adjusting the difficulty of questions in real time based on their responses. It could bypass questions that are too simple and elevate the level of inquiry as needed. By asking open-ended questions that challenge students to think critically and creatively, AI could evaluate not only factual knowledge but also originality and cognitive flexibility. This personalized evaluation process need not occur simultaneously for all students; rather, it could be conducted over several days at a time that suits each student, ensuring a thorough and accurate assessment.
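
The adaptive loop described above resembles a staircase procedure: difficulty rises after a correct answer and falls after a miss, with shrinking steps, converging on the student’s level. A minimal sketch, in which question generation and grading are placeholders that a real system would delegate to an AI model:

```python
# Minimal sketch of an adaptive assessment loop (illustrative only).
# `student_level` stands in for the unknown ability being measured;
# the `correct` line is a placeholder for AI-graded answers.
def adaptive_assessment(student_level, n_questions=20, difficulty=5.0, step=1.0):
    for _ in range(n_questions):
        correct = difficulty <= student_level  # placeholder for real grading
        if correct:
            difficulty += step   # skip material that is too easy
        else:
            difficulty -= step   # back off when the student struggles
        step = max(step * 0.8, 0.1)  # refine the estimate over time
    return difficulty  # final difficulty approximates the student's level

estimate = adaptive_assessment(student_level=7.3)
print(round(estimate, 2))
```

After twenty questions the estimate hovers near the true level, which is how a session conducted at the student’s convenience could still produce a calibrated, individualized result.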

The reliance on standardized tests today primarily stems from a shortage of human resources capable of conducting such detailed interviews while maintaining objectivity. AI-driven assessments could democratize education by eliminating the need for prestigious college brands, thereby leveling the playing field. Continuous evaluation throughout one’s life would reduce the impact of biases related to race, gender, or age.

Although AI is prompting us to reconsider the very purpose of learning—given that we can now ask AI for almost any information when needed—if we assume that education will remain valuable and meaningful, AI’s role in personalizing and enhancing the learning process could be a significant positive contribution.

Beyond Optimism and Pessimism: A Philosophical Take on AI’s Future

By Dyske Suematsu  •  February 27, 2025

I rarely come across substantive philosophical discussions about AI, which I find unfortunate because it is a field that urgently needs them. Most technologists are practically-minded and uninterested in highly abstract ideas. So, I appreciated this interview with Jad Tarifi, a former AI engineer at Google and now the founder of his own AI firm in Japan, Integral AI.

The last quarter of the interview is where the discussion becomes philosophical. One of the key philosophical and ethical ideas he expressed was:

I think we cannot guarantee a positive outcome. Nothing in life is guaranteed. The question: can we envision a positive outcome, and can we have a path towards it. ... I believe there’s a path, and the path is about defining the right goals for the AI, having a shared vision for society and reforming the economy.

This is a common position among AI enthusiasts—they pursue AI because they believe a positive outcome is possible. In contrast, thinkers like Yuval Noah Harari argue that such an outcome is impossible. This is a fundamental disagreement that reasoning alone cannot resolve because we do not yet know enough about AI. Not every problem can be solved through logical deduction.

My position aligns with neither side. I have a third position. I think both camps can agree that AI will significantly disrupt our lives. The real question is whether we, as a human society, want or need this disruption itself.

Many Americans worry that AI will take their jobs, while many Japanese hope AI will solve their labor shortage. Either way, our lives will be disrupted. Even if AI-driven creative destruction generates new opportunities, we will still have to learn new skills and confront an ever-increasing level of future uncertainty. Given the accelerating pace of technological evolution, there is a strong possibility that our newly acquired skills will be obsolete by the time we complete our retraining. While technological advancements evolve at an unlimited speed, our ability to learn and adapt has a hard biological limit. Why, then, do we willingly expose ourselves to such enormous stress?

Interestingly, in Japan, the idea of “degrowth” is gaining traction. Many people no longer see the point of endless economic expansion and have begun questioning whether a sustainable economy could exist without growth. Japan, consciously or otherwise, is testing this idea. Many of the crises we face today—climate change, obesity-related illnesses, and resource depletion—are direct results of our relentless pursuit of growth. Some will devise solutions and be hailed as heroes, but we must remember that these crises were largely self-created. We need to ask ourselves what other problems we are generating today in the name of progress.

So, I ask again: do we truly want our lives to be perpetually disrupted by technological advancements that both solve and create problems?

Another question Tarifi’s philosophical position raises is whether we can selectively extract only the “positive” aspects of AI. Consider dynamite: a powerful tool that greatly increased productivity, yet also enabled widespread destruction. Have we succeeded in suppressing its negative uses? No—bombs continue to kill people across the world. Every invention has two sides that we do not get to choose. Expecting to cherry-pick only the good is as naïve as believing one can change a spouse while keeping only the desirable traits. The qualities we love in a person are often inseparable from those we find difficult. The same holds true for technology.

This kind of philosophical cherry-picking extends to concepts like “freedom,” “agency,” and “universal rights.” These are what philosopher Richard Rorty called “final vocabularies,” what Derrida referred to as “transcendental signifieds,” and what Lacan labeled “master signifiers.” They are taken as self-evident truths, assumed to be universal.

Take “freedom.” We cannot endlessly expand it without consequence. In fact, freedom only exists in relation to constraints—rules, responsibilities, and limitations define it. If someone playing chess claimed they wanted more freedom and disregarded the rules, they would render the game meaningless. What would be the point of playing such a game at all?

Similarly, many religious people willingly accept strict moral codes because they provide freedom from existential uncertainty. By following divine rules, they transfer responsibility for their fate onto a higher power. This, too, is a form of freedom—a trade-off, not an absolute good that can be increased indefinitely. We cannot cherry-pick the enjoyable aspects of freedom without acknowledging the constraints that make that freedom possible in the first place.

The same applies to “universal rights.” Any “right” must be enforced to have meaning; without enforcement, it is merely an abstract claim. If rights are to be universal, who guarantees them? In practice, economically wealthier nations decide which rights to enforce, making them far from universal.

To be fair, Tarifi acknowledges this:

I think in history of philosophy, philosophers have been figuring out what a shared vision should be or what objective morality should be. Lots of philosophers have tried to work on that, but that often just led to dictatorships.

The solution, however, is not to dig deeper than past philosophers in search of a perfect “shared vision.” “Freedom,” “agency,” and “universal rights” appear universally shared, but this very perception breeds authoritarianism—those who reject these values seem so irrational or evil that we feel justified in excluding or oppressing them. Some religious individuals, for example, actively seek to relinquish their personal agency to escape moral anxiety.

Digging deeper for an essential, universal value will not resolve this problem. Instead, we must engage in debate—despite Tarifi’s dislike of it—to settle the issues that reason can address. Beyond that, there is no objective way to determine who is right. Ultimately, we will all have to vote, making our best guesses about what kind of world we wish to live in.
