
Why AI Can’t Think Like Einstein (Yet)

By Dyske Suematsu  •  March 12, 2025

This essay by Thomas Wolf has been generating buzz among AI enthusiasts, and for good reason. I agree that an Einstein-type AI model is not possible on our current trajectory. This is clear to anyone who has experience training machine learning models. The “intelligence” in AI is, at its core, pattern recognition. You feed it thousands of photos of roses, and it detects patterns, eventually recognizing what we mean by “rose.” Even though there is no single definitive feature that categorically defines a flower as a rose, AI, given enough data, begins to recognize a fuzzy, inexplicable pattern. This is precisely what our brains do. We cannot agree on a universal definition of, say, “art,” yet we can recognize a pattern that eludes language. When we speak of “intelligence” in AI, we are referring to this very specific type of pattern-based intelligence. However, it is important to acknowledge its significance rather than dismiss it outright as a limited form of intelligence.
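
To make the point concrete, here is a minimal sketch of pattern-based recognition (purely illustrative, with invented feature values standing in for photos of roses), showing how a classifier learns a fuzzy category that no single rule defines:

```python
# Toy illustration of pattern-based "recognition": no rule defines a "rose",
# yet a classifier trained on enough fuzzy examples learns the pattern anyway.
# The feature vectors and class centers below are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each photo has been reduced to a few numeric features (color, petal shape, ...).
rose_center = np.array([0.8, 0.6, 0.7])
other_center = np.array([0.3, 0.4, 0.2])

roses = rng.normal(rose_center, 0.25, size=(1000, 3))    # noisy, overlapping examples
others = rng.normal(other_center, 0.25, size=(1000, 3))

X = np.vstack([roses, others])
y = np.array([1] * 1000 + [0] * 1000)  # 1 = "rose", 0 = "not rose"

model = LogisticRegression().fit(X, y)

# A new example that matches no single defining feature exactly,
# yet falls within the learned fuzzy pattern.
print(model.predict_proba([[0.75, 0.55, 0.65]]))  # high probability of "rose"
```

No line of this code states what a rose is; the "definition" exists only as a statistical boundary learned from examples, which is the fuzzy, inexplicable pattern described above.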

Pattern recognition is precisely what A-students excel at. Those with high IQs and top SAT scores tend to have superior abilities to recognize patterns. Wolf argues that this is not the kind of intelligence required to be a paradigm-shifting scientist. “We need a B-student who sees and questions what everyone else missed.” True. When it comes to pattern recognition, AI models are already more intelligent than most of us. They have essentially mastered human knowledge within one standard deviation of the bell curve. If you want to know what the “best practices” of any field are, AI’s answers are hard to beat because it has access to more collective human knowledge than any individual. One caveat, however, is that “best practices” are not necessarily the best solutions—they are merely what most people do. The assumption is that widespread adoption signals superiority, but that is not always the case.

This is, of course, useless if your goal is to be a Copernicus. Imagine if AI had existed in his time. Even if his heliocentric model were included in an AI’s training data, it would have been just one idea among billions. A unique idea cannot form a pattern by itself—yet paradigm shifts depend on precisely such anomalies.

Could AI engineers build a model that recognizes the pattern of paradigm shifts? I don’t know, but it would be relatively easy to test. All we need to do is ask AI to trade stocks. If it can consistently generate profit, then we will have achieved it. Why? Because the stock market is a great example of a pattern-defying pattern. When any pattern is identified—say, in arbitrage trades—machines can exploit it for profit, but once the pattern becomes widely recognized, it disappears. This is akin to the observer effect in science. To succeed, AI would need to grasp not just patterns but the nature of patterns themselves. It would need to understand what a “pattern” is in the same way that we might understand the meaning of “meaning.” I would not say this is impossible, but we do not yet have such an AI. I imagine some scientists are working on this problem as we speak.
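
As a toy illustration of this self-erasing quality, with every number invented for the sketch, imagine a trading signal whose edge shrinks in proportion to how many participants have discovered it:

```python
# Toy model of a "pattern-defying pattern": a trading signal whose edge erodes
# as more participants discover and exploit it. All numbers here are invented.
import numpy as np

rng = np.random.default_rng(1)

edge = 0.02          # initial excess return available to whoever spots the pattern
adopters = 1         # traders currently exploiting it
capacity = 1000      # once everyone trades it, the edge is arbitraged away

for year in range(10):
    my_return = edge * rng.uniform(0.8, 1.2)        # noisy realization of the edge
    print(f"year {year}: adopters={adopters:4d}, return={my_return:.4f}")
    adopters = min(capacity, adopters * 3)          # the pattern gets noticed and copied
    edge *= (1 - adopters / capacity)               # exploitation erodes the edge itself
```

Once the pattern is common knowledge, the returns it promised no longer exist; exploiting it is precisely what destroys it.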

Though this discussion may seem abstract, it has deeply practical implications for all of us. If AI is essentially an infinitely scalable A+ student, then the future for human A+ students looks bleak—because their abilities can now be purchased for $20 a month. So how do we avoid their fate? As teachers and parents, what should we encourage our children to pursue? Here, we run into the very problem we’ve been discussing. Any solution we propose will be a generalized pattern. We cannot communicate an idea unless it can form a pattern. The solution, therefore, will be akin to an algorithmic trading model: profitable for a short time until others detect the same strategy and neutralize it. To be a Copernicus or an Einstein, one must transcend patterns, not simply navigate them.

Institutional learning offers no help because institutions, by definition, rely on patterns. They cannot admit students at random; they must adhere to a philosophy, worldview, or ideology that informs their selection process. In other words, institutional learning is structurally at odds with the nature of true paradigm-shifting thinkers. Institutions, by necessity, attract those with superior pattern recognition skills—individuals who can discern the patterns of admission criteria and master them. This means that, in theory, it is impossible to build an institution that consistently produces Copernicuses or Einsteins.

The only viable approach is to discourage children from focusing too much on pattern recognition, as it has already been commodified. The one remaining form of intelligence that AI has yet to replicate is the inexplicable human ability to question established patterns and make meaningful, transformative departures from them.


I Now Have AI-induced Existential Anxiety

By Dyske Suematsu  •  March 10, 2025

If you’re not feeling it, in some sense, you’re lucky. Chatting with AI can be amusing if you simply want it as a friend. Even the latest model might feel like a poor substitute for genuine companionship, allowing you to dismiss it by thinking, “Well, it’s not that smart yet.” Different people perceive AI’s abilities differently depending on their usage.

In an interview with Bari Weiss, economics professor Tyler Cowen from George Mason University remarked that ChatGPT is already more intelligent and knowledgeable than he is, expressing excitement about learning from it daily. I feel similarly—but I’m conflicted.

Ezra Klein recently mentioned on his podcast that he tasked the latest model with writing a heavily researched show. The output matched the average quality of scripts produced by human teams he’s worked with, except AI completed the task in minutes rather than weeks.

To grasp AI’s true intelligence, you must challenge it yourself. Here are some examples I tried.

The most obvious one is coding. Programmers widely recognize AI as an existential threat. When it comes to crafting specific algorithms, it unquestionably surpasses me. It writes complex functions in seconds, tasks that would take me hours. For now, I remain better at integrating these into complete, functional applications due to the complexities involved. But this advantage won’t last—I expect it to vanish within a year.

I also tested ChatGPT’s o1 model on Lacanian psychoanalytic theory, an esoteric interest of mine. Lacan’s work is notoriously dense; Noam Chomsky dismissed it as nonsense and Lacan himself as a “charlatan.” ChatGPT, however, proves otherwise. If Lacan’s theories were truly nonsensical, ChatGPT couldn’t interpret them coherently. Yet it engages with them logically and even debates interpretations convincingly, demonstrating an internal consistency in Lacan’s thought.

I also asked ChatGPT to interpret specific passages from James Joyce’s Ulysses. This is an area where there are no right or wrong answers, so it comes down to whether you find its interpretation meaningful. Does it allow you to see aspects of Joyce’s text that you did not see? If so, do you find them enlightening or beautiful? For me, ChatGPT is clearly better than my college professor.

It’s when you test AI at the limits of human understanding that existential anxiety surfaces. Experts across fields will inevitably experience it. Those whose identities hinge on intelligence—scientists, writers, programmers, lawyers, professors, journalists, philosophers, analysts—will be hit hardest. Personally, this experience made me realize just how central being “intelligent” is to my identity, now that intelligence risks becoming commodified.

Imagine if technology allowed everyone to look like a fashion model overnight. Fashion models would suddenly realize how integral their appearance is to their identity. A similar phenomenon is occurring with individuals who prize being slim, now challenged by the widespread accessibility of drugs like Ozempic.

However, intelligence occupies a uniquely sacred place for humans. This explains the reluctance to discuss IQ differences among races and nationalities. Society prefers to ignore the possibility of a biological basis for intelligence in order to maintain the ideal of equal intellectual potential. IQ scores, despite their cultural biases, measurably correlate with income potential, underscoring their importance. Yet public discourse avoids these uncomfortable truths because intelligence feels fundamental to our humanity. Nobody willingly embraces stupidity; even those who play the fool see themselves, deep down, as clever.

So, what happens when AI surpasses human intelligence in every conceivable domain? Professors become obsolete as anyone can learn continuously from superior AI minds. Choosing a human lawyer would become disadvantageous when an AI model offers superior expertise. Human coding will soon seem antiquated.

Nor will AI dominance be limited to STEM fields. AI models, trained extensively on human expressions, grasp human emotions well. Our emotions follow predictable patterns—few laugh during tragic films, for instance. AI excels at pattern recognition, and emotions are precisely where it demonstrates its strength.

A common misunderstanding views AI as merely an advanced calculator. Its true intelligence lies not in logic—an area where traditional computing has always excelled—but in understanding human emotions and communication, akin to emotional intelligence. AI particularly excels at interacting with ordinary people, whose emotional responses are more consistent and predictable.

AI’s communication skills surpass most human capabilities because of the vast dataset it draws from. Though individuals might feel their interpersonal skills superior, employers may see AI’s extensive experience as more valuable.

Yes, ChatGPT still sounds robotic or excessively detailed, but it’s evolving rapidly. ChatGPT 4.5 notably improved at “human collaboration” and was designed explicitly to offer emotional support akin to that of a trusted friend or therapist. Empathy is effective precisely because emotions are largely universal and predictable, which makes them easy for AI to simulate.

Similar to Amazon’s recommendations based on purchasing patterns, AI quickly identifies and adapts to individual personality types. It might soon become the most consistently empathetic presence you’ve ever interacted with.

Even entertainment, reliant on formulas and predictable emotional engagement, will succumb to AI’s capabilities. While truly groundbreaking art may initially resist replication, AI will inevitably master these domains as well.

As AI increasingly replaces traditional roles, society faces profound existential questions beyond economic displacement. Philosophically, we might struggle to define new purposes for our existence. Why learn or express ourselves if AI surpasses us in every meaningful way?

Perhaps the masses will finally grasp a central idea from Karl Marx: labor itself holds intrinsic value, not merely as a means to survival, but as an essential component of human fulfillment.


Rethinking College Degrees in the Age of AI

By Dyske Suematsu  •  March 4, 2025

As AI becomes increasingly intelligent and knowledgeable, it may soon render college professors obsolete. AI chatbots are already more knowledgeable (at least in terms of breadth) and more accessible than any human educator. College degrees and the institutions themselves will likely become obsolete too, at least at the undergraduate level.

In today’s world, most of what is taught in college can be learned independently. Currently, the main advantages of higher education are twofold: it offers a structured learning environment for those who struggle with self-motivation and serves as a certification that students have acquired the claimed knowledge. AI has the potential to assess learning outcomes far more accurately than standardized exams or evaluations by professors, who may have biases or limited expertise.

For example, AI could interact with each student individually, adjusting the difficulty of questions in real time based on their responses. It could bypass questions that are too simple and elevate the level of inquiry as needed. By asking open-ended questions that challenge students to think critically and creatively, AI could evaluate not only factual knowledge but also originality and cognitive flexibility. This personalized evaluation process need not occur simultaneously for all students; rather, it could be conducted over several days at a time that suits each student, ensuring a thorough and accurate assessment.
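
A minimal sketch of such an adaptive loop might look like the following; the question bank, scoring rule, and step sizes are placeholders rather than any real system’s design:

```python
# Minimal sketch of the adaptive assessment loop described above: question
# difficulty moves up or down with each response. The scoring rule and the
# simulated student are invented placeholders for this illustration.
import random

def ask(difficulty: int) -> bool:
    """Stand-in for an AI-generated question at the given difficulty.
    Returns True if the (simulated) student answers it well."""
    student_ability = 6
    return random.random() < 1 / (1 + 2 ** (difficulty - student_ability))

def assess(rounds: int = 12) -> int:
    difficulty = 5                                 # start mid-range
    for _ in range(rounds):
        if ask(difficulty):
            difficulty = min(10, difficulty + 1)   # skip material that is too easy
        else:
            difficulty = max(1, difficulty - 1)    # back off when the student struggles
    return difficulty                              # settles near the student's level

print("estimated level:", assess())
```

The point of the sketch is simply that the assessment converges on each student’s level instead of asking everyone the same fixed questions.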

The reliance on standardized tests today primarily stems from a shortage of human resources capable of conducting such detailed interviews while maintaining objectivity. AI-driven assessments could democratize education by eliminating the need for prestigious college brands, thereby leveling the playing field. Continuous evaluation throughout one’s life would reduce the impact of biases related to race, gender, or age.

AI is prompting us to reconsider the very purpose of learning, given that we can now ask it for almost any information when we need it. But if we assume that education will remain valuable and meaningful, AI’s role in personalizing and enhancing the learning process could be a significant positive contribution.



Beyond Optimism and Pessimism: A Philosophical Take on AI’s Future

By Dyske Suematsu  •  February 27, 2025

I rarely come across substantive philosophical discussions about AI, which I find unfortunate because it is a field that urgently needs them. Most technologists are practically-minded and uninterested in highly abstract ideas. So, I appreciated this interview with Jad Tarifi, a former AI engineer at Google and now the founder of his own AI firm in Japan, Integral AI.

The last quarter of the interview is where the discussion becomes philosophical. One of the key philosophical and ethical ideas he expressed was:

I think we cannot guarantee a positive outcome. Nothing in life is guaranteed. The question: can we envision a positive outcome, and can we have a path towards it. ... I believe there’s a path, and the path is about defining the right goals for the AI, having a shared vision for society and reforming the economy.

This is a common position among AI enthusiasts—they pursue AI because they believe a positive outcome is possible. In contrast, thinkers like Yuval Noah Harari argue that such an outcome is impossible. This is a fundamental disagreement that reasoning alone cannot resolve because we do not yet know enough about AI. Not every problem can be solved through logical deduction.

My position aligns with neither side. I have a third position. I think both camps can agree that AI will significantly disrupt our lives. The real question is whether we, as a human society, want or need this disruption itself.

Many Americans worry that AI will take their jobs, while many Japanese hope AI will solve their labor shortage. Either way, our lives will be disrupted. Even if AI-driven creative destruction generates new opportunities, we will still have to learn new skills and confront an ever-increasing level of future uncertainty. Given the accelerating pace of technological evolution, there is a strong possibility that our newly acquired skills will be obsolete by the time we complete our retraining. While technological advancements evolve at an unlimited speed, our ability to learn and adapt has a hard biological limit. Why, then, do we willingly expose ourselves to such enormous stress?

Interestingly, in Japan, the idea of “degrowth” is gaining traction. Many people no longer see the point of endless economic expansion and have begun questioning whether a sustainable economy could exist without growth. Japan, consciously or otherwise, is testing this idea. Many of the crises we face today—climate change, obesity-related illnesses, and resource depletion—are direct results of our relentless pursuit of growth. Some will devise solutions and be hailed as heroes, but we must remember that these crises were largely self-created. We need to ask ourselves what other problems we are generating today in the name of progress.

So, I ask again: do we truly want our lives to be perpetually disrupted by technological advancements that both solve and create problems?

Another question Tarifi’s philosophical position raises is whether we can selectively extract only the “positive” aspects of AI. Consider dynamite: a powerful tool that greatly increased productivity, yet also enabled widespread destruction. Have we succeeded in suppressing its negative uses? No—bombs continue to kill people across the world. Every invention has two sides that we do not get to choose. Expecting to cherry-pick only the good is as naïve as believing one can change a spouse while keeping only the desirable traits. The qualities we love in a person are often inseparable from those we find difficult. The same holds true for technology.

This kind of philosophical cherry-picking extends to concepts like “freedom,” “agency,” and “universal rights.” These are what philosopher Richard Rorty called “final vocabularies,” what Derrida referred to as “transcendental signifieds,” and what Lacan labeled “master signifiers.” They are taken as self-evident truths, assumed to be universal.

Take “freedom.” We cannot endlessly expand it without consequence. In fact, freedom only exists in relation to constraints—rules, responsibilities, and limitations define it. If someone playing chess claimed they wanted more freedom and disregarded the rules, they would render the game meaningless. What would be the point of playing such a game at all?

Similarly, many religious people willingly accept strict moral codes because they provide freedom from existential uncertainty. By following divine rules, they transfer responsibility for their fate onto a higher power. This, too, is a form of freedom—a trade-off, not an absolute good that can be increased indefinitely. We cannot cherry-pick only the enjoyable aspects of freedom without acknowledging the constraints that make that freedom possible in the first place.

The same applies to “universal rights.” Any “right” must be enforced to have meaning; without enforcement, it is merely an abstract claim. If rights are to be universal, who guarantees them? In practice, economically wealthier nations decide which rights to enforce, making them far from universal.

To be fair, Tarifi acknowledges this:

I think in history of philosophy, philosophers have been figuring out what a shared vision should be or what objective morality should be. Lots of philosophers have tried to work on that, but that often just led to dictatorships.

The solution, however, is not to dig deeper than past philosophers in search of a perfect “shared vision.” “Freedom,” “agency,” and “universal rights” appear universally shared, but this very perception breeds authoritarianism—those who reject these values seem so irrational or evil that we feel justified in excluding or oppressing them. Some religious individuals, for example, actively seek to relinquish their personal agency to escape moral anxiety.

Digging deeper for an essential, universal value will not resolve this problem. Instead, we must engage in debate—despite Tarifi’s dislike of it—to settle the issues that reason can address. Beyond that, there is no objective way to determine who is right. Ultimately, we will all have to vote, making our best guesses about what kind of world we wish to live in.


AI’s Influence on the Meaning of Life

By Dyske Suematsu  •  February 4, 2025

I suspect I’m not the only one who feels that AI is throwing into question what I want to do with my life. And I don’t just mean what I should do to survive—beyond such practical concerns, AI also raises questions about our desires. Last month, while spending time in Japan, I thought it might be fun to get back into drawing cartoons. But then I had to ask myself: why should I want to draw by hand when AI can generate what I envision? Which aspect of cartooning am I actually desiring? It’s time to take a step back and ask some existential questions.

Philosophically, I’m neither for nor against AI. It is just what is happening in the world that I’m observing. My mind is too limited to predict whether AI will save humanity or bring about its end. Even in the worst-case scenario, as far as the Earth or the universe is concerned, it’s just another species going extinct out of countless others. The role of morality is to govern human behavior; outside of our minds, it is meaningless.

However, I must admit that I find the pursuit of endless productivity silly. Now, we are witnessing an AI arms race, particularly between the US and China. Ultimately, it is a race of productivity—who can outproduce the other—and AI is merely a tool, or a weapon, for that war. As a philosopher, I must ask what the point is, but in the grand scheme of things, nothing we do has a point. We concoct a meaning from what we do and act as if it’s universally meaningful. In that sense, productivity as the meaning of life is no better or worse than any other. What comes across as silly is the “acting as if” part.

For instance, Sam Altman, the CEO of OpenAI, sure acts as if he is saving the world, or at least Americans, even though there is no fundamental need to develop AI. Humanity will be fine without it. It’s merely a solution that turned into a problem. We solved all the real problems a long time ago. All our problems today (like climate change) were created by our “solutions” (like industrialization). If we stopped solving problems, new problems wouldn’t arise, but human minds need problems and obstacles. Without something to strive for or overcome, we would cease to be human, which is why I’m not for or against AI. We are destined to struggle, even if it means we have to create our own struggles artificially.

“Man’s desire is the desire of the Other,” as Jacques Lacan said. As the Other is transformed by AI, our desires are inevitably transformed as well. The web was originally developed as a public repository of knowledge, but large language models like OpenAI’s have already harnessed and distilled much of that knowledge. As people begin turning to AI for answers instead of search engines, the desire to share knowledge publicly on the web will diminish. Information will still be used to train AI models, but few will visit your website to read it—no fun if your goal is to engage with others.

As of today, most of our social engagement—not just on social media, but across platforms for any purpose—consists of emailing, chatting, and talking with other humans. AI agents like ChatGPT will soon take over a significant share of these interactions as they become more knowledgeable and intelligent. When it comes to practical information and advice, consulting other humans, with their limited knowledge and intelligence, will begin to feel archaic and inefficient. It would be like paying a hundred dollars for a handmade mug when all you need is something functional for your office. Even if that price fairly compensates for the time, skill, and materials involved, buying it will feel increasingly wasteful.

Often, what we enjoy about our work is the process, not the results. It’s not just about acquiring information but about the process of seeking it from others. It’s not just about the final song but about composing, playing, and recording. Yet, in the name of productivity, AI will make it increasingly difficult to monetize these enjoyable processes. We will have to fight for processes that AI has not yet mastered. There will still be problems for us to solve, but only because our own solutions will artificially create new problems.

I, for one, am optimistic about our ability to generate challenging, unnecessary problems—but I’m less certain whether we can continue to enjoy the process. The speed of technological disruption will only accelerate, and it is already outpacing our ability to adopt and adapt. At some point, we might all throw in the towel. What that world looks like, I have no clue—but I’m curious to see.


Model Collapse in AI and Society: A Parallel Threat to Diversity of Thought

By Dyske Suematsu  •  August 26, 2024

The New York Times has a neat demonstration of AI “model collapse,” in which using AI-generated content to train future models leads to diminishing diversity and ultimately to complete homogeneity (“collapse”). For example, handwritten digits converge into a blurry composite of every digit, and AI-generated human faces merge into an average face. To avoid this problem, AI companies must ensure that their training data is human-generated.
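
A toy version of the same experiment, with arbitrary sample sizes and generation counts, shows the mechanism: each generation is fit only to what the previous generation produced, and the diversity of the data drifts toward zero:

```python
# Toy sketch of model collapse: each generation of a simple Gaussian "model" is
# fit only to samples produced by the previous generation, and the spread of
# the data drifts toward zero. Sample size and generation count are arbitrary.
import numpy as np

rng = np.random.default_rng(42)

data = rng.normal(loc=0.0, scale=1.0, size=20)    # generation 0: "human" data

for generation in range(201):
    mu, sigma = data.mean(), data.std()           # fit the next model to the current data
    if generation % 20 == 0:
        print(f"gen {generation:3d}: spread = {sigma:.4f}")
    data = rng.normal(mu, sigma, size=20)         # train only on the model's own output
```

Nothing in the loop removes diversity on purpose; the homogeneity emerges simply from models feeding on their own output.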

One positive aspect of this issue is that AI companies will need to pay for quality content. As we increasingly depend on AI to answer our questions, website traffic will likely decline since we won’t need to verify sources for the vast majority of answers. Content creators won’t have much incentive to share their work online if they cannot connect directly with their audience. Consequently, the quality of free content on the web may decline. However, I believe this is a problem that AI engineers can eventually solve. The real issue, I’d argue, is that “model collapse” was already happening in our brains long before ChatGPT was introduced.

AI mimics the way our brains work, so there is likely a real-life analog to every phenomenon we observe in AI. Feeding an AI-generated (or “interpreted”) fact or idea to train another model is equivalent to relying entirely on articles written for laymen or political talking points to formulate our opinions and understanding without engaging with the source material.

In my experience, whenever social media erupts with anger over something someone said, almost without exception, the outraged individuals have never read the offending comment or idea in its original context, whether it’s a court document, research paper, book, or hours-long interview. They simply echo the emotions expressed by the first person who interpreted the comment. It’s no surprise that the model (our way of understanding ideas and data) would collapse if everyone followed this pattern. One person’s interpretation of the world is echoed by millions on social media.

In politics, the first conservative to interpret any particular comment will shape the opinions of all the Red states, and the first liberal to interpret it will shape the opinions of all the Blue states. In fact, “talking points” are designed to achieve this effect most efficiently. We are deliberately causing models—our ways of understanding the world—to collapse into a few dominant perspectives. This is a deliberate effort to eliminate the diversity of ideas.

In a two-party system like that of the US, this is a natural consequence because the party with greater diversity will always lose. Another factor is our reliance on emotions. We feel more secure and empowered when we agree with those around us. Holding a unique opinion can be anxiety-inducing. So, we are naturally wired for “model collapse.” This is the new way of “manufacturing consent,” discouraging people from checking the sources to form their opinions.

What the New York Times’ experiment reveals isn’t just the danger of AI but also the vulnerabilities of our own brains. AI simply allows us to simulate the phenomenon and see the consequences in tangible forms. It’s a lesson we need to apply to our own behavior.
