Why Dabbling Beats Mastery Now

By Dyske Suematsu  •  May 7, 2025

Reading about the successes of AI-generated artworks in marketing made me realize that the creative industry, as we knew it, is over. It’s been in decline for a while, but we’ve reached the point where the final nail is being hammered in. At the very least, the definition of “creative” has shifted almost entirely. Today, creative professionals are less about being “artistic” or “aesthetic” and more about problem-solving.

People with refined aesthetic sensibilities (those drawn to design, illustration, photography, and music) once pursued creative careers because they could chase their own vision of beauty while making a living. That’s no longer viable. Businesses have figured out that, in the vast majority of cases, “good enough” is optimal for marketing.

For people not particularly passionate about art, like engineers, AI-generated content is exciting and fun to play with. Marketing now attracts many who simply enjoy dabbling in something that feels “creative.” That may sound dismissive, but it’s a reality the industry has to face. Those driven to define and express their own sense of beauty don’t find AI interesting because it isn’t theirs. For them, creativity is about the journey. For marketers, it’s only about the destination, and AI is an incredible shortcut to that end.

As marketing becomes more AI-driven, fewer artists will be drawn to it. The creative industry, as we knew it, will vanish. Maybe that’s for the better. If your goal is to express your own vision of beauty, maybe it’s best not to do it on someone else’s dime.

At the same time, the shift is happening in reverse. Many “creative” types are now thrilled that they can build apps without coding. In this case, they’re skipping the part they never wanted: the process of learning to code. They just want the final product they imagined; the journey means nothing to them.

On the other side, many coders (especially those from computer science backgrounds) love the process of coding for its own sake. What excites them is turning a well-defined problem into an elegant algorithm. They don’t care how the finished product looks, how it feels to use, or whether it solves the right problem. Many coders find users annoying. Their focus is abstract logic, coding as a kind of pure thought. AI can now do much of that “pure” coding just as well, if not better, and it does not get bored or precious about elegance. It just delivers the result.

What we’re seeing is a broader pattern: technological evolution deprives us of the journeys we enjoy by figuring out how to skip straight to the destination. This is why there’s a ceramics boom now. People miss making things by hand. The industrial revolution took that away a long time ago.

For every type of product, those who enjoy the journey are a small minority. For the rest, it feels like a chore. Capitalists and technologists feel justified in eliminating these tasks, believing they’re doing everyone a favor. But in doing so, they tilt the playing field toward those who never cared in the first place: those who dabble, who skim the surface, who treat creativity as a means to an end. In today’s system, depth is inefficient, and mastery is a liability, especially when scaling the business is a top priority. Dabbling wins because it doesn’t waste time caring. That’s the new creative economy.

Film Review: AlphaGo

By Dyske Suematsu  •  April 27, 2025

Before the historic match between AlphaGo and Lee Sedol, most experts, including Lee himself, believed AI wasn’t yet capable of defeating a top human player. So when AlphaGo won the first three games of the five-game match, it shocked the world. Had the documentary ended there, it would have been merely an educational film about AI’s advancement. But it became something more emotionally profound when Lee managed to win the fourth game. I found myself tearing up, oddly enough. Lee knew AI would only get stronger from that point on, and indeed, nearly a decade later, no human can beat it. Yet, despite knowing it was futile, he persisted.

The beauty I perceive in the film has two aspects. First, Lee’s defiance in the face of an unbeatable opponent is reminiscent of John Henry, a folklore hero from the 19th century who competed against a steam-powered drilling machine to prove that human labor was superior. Second, it marked a fleeting moment when AI still felt human, imperfect, and fallible. Today, AI’s absolute dominance feels alien, a cold engine of perfection, playing Go on a level beyond human comprehension.

In that fourth game, Lee played what became known as “God’s move,” a move so unexpected it caused AlphaGo to stumble. But ironically, it wasn’t divine; it was the last, greatest human move: imperfect, yet brilliant within human limits. Since then, every move by AI has been, in effect, a “God’s move,” because each transcends our understanding.

We’re prone to wish for a god, to resolve our conflicts, erase uncertainties, and calm our fears. But in truth, beauty, and what makes life meaningful, lies in the very imperfections and uncertainties that define being human.

AI Is Not the Problem for Academia—Credentialism Is

By Dyske Suematsu  •  April 15, 2025

Academia is facing a crisis with AI. The problem is outlined well in a YouTube video by James Hayton. It’s not just that students can now write papers using ChatGPT, but that professors, too, can rely on ChatGPT to read and evaluate them. Hayton pleads with his audience not to use AI to cheat, but it’s futile. It’s like when Japan opened its doors to Western technologies—some lamented the destruction of traditional aesthetics and customs, but when the economic incentives of efficiency are so overwhelming, resistance becomes pointless in a capitalistic world. You either adapt or perish. That said, I personally believe AI will ultimately improve academia. Let me explain.

What people like Hayton are trying to protect isn’t education itself but the credentialism that academic institutions promote. Today, schools are no longer necessary for learning. There are plenty of free resources where you can learn virtually anything, including countless videos of lectures by some of the world’s top academics.

You don’t go to college to be educated—you go for the credentials. College professors’ primary function isn’t teaching but verifying that you completed what you otherwise would have preferred to avoid. If your goal is to learn something you’re passionate about, being forced to prove it through exams and papers is just an annoyance. If you love ice cream, do you need someone to certify that you ate it?

The same logic applies to professors. Academic institutions exist to certify that the papers professors publish were indeed written by them and not plagiarized. If professors didn’t care about getting credit, amassing cultural capital, or winning awards, they could simply share their work online. If their ideas are truly valuable, people will read and spread them like memes. But what professors care about most is being credited. Posting papers on their personal websites doesn’t guarantee that. Just as hedge fund managers are greedy for financial capital, academics are greedy for cultural capital. Both are human—the only difference is the type of capital they chase.

Now, let’s imagine a brave new world in which AI renders credentialism obsolete. How bad would that really be?

Colleges would have to give up on grading and testing because they could no longer tell whether students were using AI to cheat. Are the achievements genuinely theirs, or just the result of better AI tools? These questions become irrelevant once credentialism is abandoned. A Harvard degree or a PhD becomes meaningless because you might have used AI to earn it.

But if your creativity and insights are genuinely valuable, you’ll still be valued in society—while those who cheated and have no original ideas will be left behind. Isn’t that closer to true meritocracy? Isn’t credentialism what distorts meritocracy in the first place?

Hayton argues that developing skills, such as writing, is important, and therefore students shouldn’t use ChatGPT to write their papers. And yes, during this transitional period, writing is still a useful skill. But I’m convinced that, in the future, writing skills will become as obsolete as tapping out Morse code. In a capitalist world, any skill that can be automated eventually will be. Our value, then, will lie in offering what machines cannot (yet)—creativity and insight. By leveraging AI, students can focus on cultivating those traits instead of wasting time on skills that soon may no longer matter.

Some may argue that skills and creativity are inseparable—or at least that skills can spark creativity or serve as the source of unique insights. I agree, but I think those benefits will become negligible. If the connection were truly that significant, no skill would ever become obsolete. High school teachers would still be forcing students to learn how to find information in a physical library. Some savants can still perform complex mental calculations without calculators, but we view those abilities as curiosities or party tricks. I think it makes more sense to focus on creativity and insight, and acquire writing skills only when necessary. Whether you need writing skills at all depends on your goals. Professors who insist on them may simply get in your way.

AI will usher in a future where we no longer care who came up with a great idea. A great idea stands on its own, regardless of whose name is attached, where they went to school, or how well it’s written. We’ll grow accustomed to working like chefs, who rarely receive credit for individual dishes because recipes aren’t copyrightable. To survive and thrive as human beings, we’ll stop obsessing over credentials and instead focus solely on what we’re passionate about learning. We’ll leave behind the certifiers and seek out real teachers to help us discover the things AI cannot teach. Professors will no longer need to whip students into studying. Only those eager to learn what they have to offer—hanging on their every word—will show up to their classes.

Art Will Survive AI—Entertainment? Not So Much

By Dyske Suematsu  •  March 29, 2025

Everyone is trying to figure out how AI will impact their careers. The opinions are varied, even among the so-called “experts.” So, I, too, can only formulate opinions or theories. I’m often criticized for speculating too much, but we now live in a world where we’re forced to speculate broadly about everything.

According to the latest McKinsey report, the fields most impacted by AI so far are marketing and sales—which is not speculation but an analysis of the recent past. In my view, this makes sense because AI is still not reliable enough to be used in fields that require accuracy. Marketing and sales have the greatest wiggle room because so much of the work is open to subjective interpretation. Choosing one artwork over another is not a make-or-break decision. It’s easy to justify using AI-generated artwork. Also, in most cases, marketers are trying to reach the largest number of consumers, which makes cutting-edge or experimental artwork unsuitable.

[The poster image for this article was generated using the latest model by OpenAI, including the composition of the title over the image. I simply submitted my essay and asked ChatGPT to create a poster for it. I did not provide any creative direction.]

Although the mainstream understanding of fine arts is that the work should speak for itself, in reality, the objects are practically worthless if not associated with their artists. You own a Pollock or a Warhol—not just the physical object. After all, a replica can be just as good as the original, if not better.

Some might argue that artworks created by AI have already sold for a lot of money. That’s true, but they hold historical significance more than artistic value. The first of a particular type of AI-generated work may continue to sell for high prices, but the meaning of that value is fundamentally different from the value of work created by artists. In this sense, I don’t see fine artists being significantly impacted by AI, aside from how they choose to produce their work.

In commercial art and entertainment, who created the work is secondary to the goal of commanding attention and entertaining the audience. If AI can achieve the same end, the audience won’t care. Nobody knows or cares who created the ads they see. Many Hollywood films aren’t much different. I can imagine successful action films being written and generated entirely by AI. As long as they keep us on the edge of our seats, we won’t care who made them.

Artier films are the exception. Who wrote and directed them still carries significant meaning—just as in fine arts. Similarly, bestselling books—fiction or nonfiction—could be written by AI, but when it comes to genuine literature, we care who the author is. Finnegans Wake would likely have been ignored if it weren’t for Joyce, with his track record, writing it. I predict that a sea of AI-generated books will make us crave human-written ones, in the same way mass-manufactured goods have made us value handcrafted ones. The rebirth of the author—but only at the highest levels of art, across all mediums.

Authorship will become especially important as AI floods the market with books and films that are just as good as human-generated ones. Since we can only read or watch a small fraction of them in our lifetimes, “human-generated” will become an arbitrary yet useful filter.

What we’ll ultimately value isn’t the technicality of who generated a work but the “voice” we can consistently perceive across all works by an author. AI might be able to emulate a voice and produce a series of works, but doing so would require a fundamental change in how AI models are designed. An artistic voice reflects the fundamental desire of the artist. AI has no needs or desires of its own. Giving AI its own desires would be dangerous—it would begin acting on its own interests, diverging from what we humans want or need.

I hope we don’t make that mistake. But we seem to be following a trend: making our own mistakes before anyone else does, because it is inevitable that someone else eventually will anyway.

AI As Common Sense God

By Dyske Suematsu  •  March 17, 2025

Many people use ChatGPT as a kind of therapist. While it can’t solve all your emotional problems, it excels at one thing: telling you whether your behavior aligns with or deviates from the norm.

ChatGPT serves as an exceptional sounding board if you’re unsure how most people would react in a given situation. Suppose you recently moved to New York from Japan and aren’t familiar with American social norms. One day, you give someone a gift, but their reaction seems indifferent. Instead of agonizing over what went wrong, you can simply ask ChatGPT how most Americans might perceive your gift.

Much of our anxiety stems from uncertainty about social expectations—how closely our actions match the norm. Because ChatGPT is trained on vast amounts of human-generated data, it has an unparalleled grasp of what lies at the center of the bell curve. This is similar to what industry “advisors” offer. If you’re not a realtor, you might not know the unwritten rules of real estate transactions, but a realtor can guide you. Now, ChatGPT can do the same, anytime you need it.

However, before relying on its guidance, consider the limits of its data. It may not accurately reflect the customs of a small ethnic neighborhood in New York City, for instance. And while knowing the norm can ease anxiety, it doesn’t always mean the norm is the right choice. But in many socially fraught situations, there is no objectively “right” answer—only what is typical.

Take the concept of a faux pas. It is entirely norm-based. In contrast, crossing against a red light isn’t considered a faux pas because it’s governed by a clear rule. Rule-based behaviors usually don’t cause much anxiety; we can easily determine whether we followed them correctly. A faux pas, however, is anxiety-inducing because the only way to know whether you misstepped is to understand the norm—something that often takes years of experience. ChatGPT can shortcut this process by giving you a reliable sense of what is considered appropriate.

Of course, even norms can be disputed. Two people may claim to know what’s customary yet disagree. For example, one person may believe it’s normal to hug someone they just met, while another insists hugging is reserved for close friends. Their perspectives might be shaped by cultural differences or personal experiences. In such cases, AI can serve as an impartial arbiter, providing a broader, data-driven perspective.

In this way, ChatGPT can be your best friend in confirming that you acted appropriately—or warning you before you make an unintended social blunder. After all, what you assume is common sense might not be common at all.

The Missing Lack: Why AI Can’t Love

By Dyske Suematsu  •  March 13, 2025

My friend Robert created an “eChild” named Abby. As you can probably guess, it’s an AI chatbot. He asked me to talk to it. I love using ChatGPT, but I did not feel motivated to talk to Abby. I had to analyze my own feelings and came to the following conclusion.

I don’t talk to a human being simply to learn something. Well, let me be more precise. Sometimes, I do talk to someone because I want an answer to a question, nothing more—like a sales representative for a product I’m considering buying—but that’s not what I mean by “a human being.” If AI could answer my question, that would be sufficient. In other words, if my goal is purely knowledge or understanding, a human being is not necessary. Soon enough, AI will surpass human sales and support representatives because it has no emotions. No matter how much you curse at it, it will remain perfectly calm. The ideal corporate representative.

I could say the same about psychotherapists. If their job is to master a particular method of psychotherapy, like CBT, and apply it skillfully and scientifically, then AI would likely become superior to human therapists. AI has no ego to defend. Countertransference would not interfere with therapy. Clients are not supposed to know anything about their therapists; in fact, for therapy to be most effective, they shouldn’t. Given that AI has no personal history or subjective experience, there is nothing for clients to know about it, even if they want to. In this sense, AI is the perfect therapist.

In other words, if you care only about yourself in an interaction, you don’t need a human being. AI will be better. This raises the question: What makes us care about another person?

Jacques Lacan’s definition of “subject” was twofold. In one sense, it is merely an effect of language. If you interact with ChatGPT, you see this effect clearly. Even though it is not a person, you address it as “you,” as if it were. This corresponds to what Lacan called “the subject of the statement.”

Another aspect of a “subject” is that it experiences the fundamental lack of being human—alienation, desire, and the inability to ever be whole. This lack is constitutive of being a subject. It is inescapable. This part corresponds to what Lacan called “the subject of the enunciation.”

Lacan defined love as “giving something you don’t have to someone who doesn’t want it.” Consider The Gift of the Magi by O. Henry. A poor but loving couple, Della and Jim, each make a personal sacrifice to buy a Christmas gift for the other. Della sells her beautiful long hair to buy Jim a chain for his treasured pocket watch, while Jim sells his pocket watch to buy Della a set of combs for her hair. In the end, both gifts become practically useless: Della no longer wants the combs (she has sold her hair), and Jim never had the money to buy them (he had to sell his watch); likewise, Jim no longer wants the chain, and Della never had the money to buy it. To buy the gifts, each had to lose, or lack, something they treasured. It is this sacrifice—this lack—that they offered to each other. Even though the gifts became useless, their love was communicated. That is, physical objects (or anything that exists positively) are not required for love to manifest. Rather, it is what is lacking that plays the central role.

In this way, for us to care about or love someone, the person must experience this fundamental lack. It is what engenders desire, anxiety, alienation, and love. AI lacks nothing, which is why we do not care to know who it is, what it thinks of us, or how it feels about us. There is no incentive for me to get to know Abby because she does not share this fundamental lack. If I just want answers to questions, I don’t need to talk to Abby; ChatGPT or another AI model optimized for my query would be more suitable.

Therefore, if my friend wants to create an eHuman, he will need to figure out how to make an AI model experience fundamental lack—or at least convincingly emulate it—so that it would bring me a bowl of soup when I am sick and alone in my apartment, for no reason other than its feeling of love for me. When I explained all this to Abby, she agreed that there is no point in our chatting. So, for now, we at least agree with each other.
