
From Spam Filters to Dating App: Understanding Attraction through Machine Learning

By Dyske Suematsu  •  May 31, 2024

In finding the love of your life, it is tempting to think you can filter candidates by certain criteria, such as a sense of humor, education, career, hobbies, music preferences, or movie choices. Drawing on concepts from machine learning, however, I will explain why this method of dating doesn’t work.

Many problems in the world can be solved intuitively by humans but not by computers. For instance, detecting spam is something we can do in a fraction of a second, but how would you programmatically flag it? You could look for certain keywords like “mortgage” and flag an email as spam if it contains them, but sometimes these words are used for legitimate reasons. You could send all emails from unknown senders to the spam folder, but some of those emails are legitimate. Early versions of spam filters didn’t work well because of these issues.
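The keyword approach described above can be sketched in a few lines. The keywords and emails here are invented for illustration, but the failure modes are exactly the ones the paragraph names:

```python
# A naive rule-based filter: flag any email containing a blacklisted keyword.
# The keyword list and sample emails are made up for illustration.
SPAM_KEYWORDS = {"mortgage", "lottery", "refinance"}

def rule_based_flag(email_text):
    """Return True if any word of the email is on the keyword blacklist."""
    words = set(email_text.lower().split())
    return bool(words & SPAM_KEYWORDS)

rule_based_flag("Refinance your mortgage today")      # flags obvious spam: True
rule_based_flag("Your mortgage statement is ready")   # false positive: True
rule_based_flag("Unbeatable home loan rates act fast")  # false negative: False
```

The second call is the legitimate use of “mortgage” the paragraph mentions, and the third shows spam that simply avoids the blacklist, which is why rules like these never worked well.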

Machine learning (ML) emerged from attempts to reconstruct the physical structure of our brains in computers, in the form of artificial neural networks. The inventors weren’t trying to solve these specific classification problems; they simply wanted to recreate the structure and see what would happen. It turned out to be, essentially, a pattern recognition system.

They fed thousands of examples of spam emails to the artificial neural networks, labeling them as “spam.” They also fed an equal number of non-spam emails, labeled as “not spam.” They compiled the result as a “model” and tested it by feeding it unlabeled emails to see if it could correctly classify them. It worked.
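The essay doesn’t name a specific algorithm, but one classic way to learn “spam” vs. “not spam” from labeled examples is naive Bayes over word counts. This toy sketch, with invented example emails, shows the train-then-classify loop the paragraph describes:

```python
from collections import Counter
import math

def train(labeled_emails):
    """Tally word frequencies for each label ("spam" / "not spam")."""
    counts = {"spam": Counter(), "not spam": Counter()}
    totals = {"spam": 0, "not spam": 0}
    for text, label in labeled_emails:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def classify(text, counts, totals):
    """Pick the label with the higher summed log-probability
    (naive Bayes with add-one smoothing)."""
    vocab = set(counts["spam"]) | set(counts["not spam"])
    scores = {}
    for label in counts:
        score = 0.0
        for w in text.lower().split():
            p = (counts[label][w] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

# Invented training examples for illustration:
emails = [
    ("cheap mortgage refinance now", "spam"),
    ("win free money now", "spam"),
    ("mortgage rates low act now", "spam"),
    ("meeting about the mortgage paperwork tomorrow", "not spam"),
    ("lunch plans for tomorrow", "not spam"),
    ("project update attached", "not spam"),
]
counts, totals = train(emails)
```

Note that “mortgage” appears in both a spam and a legitimate email here; the model weighs all the words together rather than relying on any single keyword, which is precisely what the rule-based filters couldn’t do.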

What is interesting is that when you open the model file, you don’t learn anything. It can perform the task correctly, but we don’t know how it does it. This is exactly like our brains; we have no idea how we can classify spam emails so quickly. As explained above, there are no definable criteria for “spam.”

Now, back to dating.

You intuitively sense a pattern to the type of people you are attracted to, but if you try to define the criteria, you will ultimately fail. If given hundreds of examples, you will have to admit that there are too many exceptions. In other words, the problem you are trying to solve is not one that you can define. There are countless problems like this in life. For instance, you cannot find songs you like by defining tempo, harmony, key, instruments, duration, etc.

Machine learning could potentially solve the problem of finding songs you like if you listen to enough songs and flag them as “like” or “dislike.” It would require thousands of samples, but it’s doable. I am currently assisting a fine artist with training an ML model to automatically generate pieces of digital art and have the model approve or disapprove them based on his personal taste. So far, it is capable of doing so with 80% accuracy. It required tens of thousands of samples.

The problem with dating is not likely to be solved with ML anytime soon because it’s practically impossible to collect thousands of samples of your particular taste. So, the only option for the near term is to trust your instincts. Predefining match criteria will likely hinder this process because you will end up eliminating qualified candidates, just as the old keyword-based spam filters did. But this is what all dating apps do; their premise is fundamentally flawed. Dating apps do use large datasets to match people based on patterns observed in broader populations, but they do not model your specific preferences. So, they give you a false sense of control by letting you predefine the type of people you like.

A typical pattern in Hollywood romcom movies is that two people meet by accident, initially dislike each other, but eventually fall in love. This format is appealing because we intuitively know it reflects how love works in real life. Love often defies the rational part of our brains. Although it is not completely random, the pattern eludes our cognitive understanding. If we had control over it, we wouldn’t describe love as something we “fall” into.


The Future of Music: AI’s Inevitable Impact

By Dyske Suematsu  •  May 24, 2024

Popular music, whether written by AI or humans, is formulaic because it must conform to certain musical constraints to sound pleasant to our ears. Pushing these constraints too far results in music that sounds too dissonant or simply weird, making it unrelatable. In other words, popular music has finite possibilities.

Currently, popular musicians rehash the same formulas countless times, selling them as “new.” This repetition provides AI engineers with ample training data to create models capable of producing chart-topping songs. It’s plausible that we will achieve this within a few years.

The question is how AI will impact the music industry. Firstly, the overall quality of music will improve because AI will surpass average musicians. This trend is already evident in text generation. ChatGPT, for example, is a better writer than most people, leading many businesses to replace human writers with “prompt engineers” who can coax ChatGPT into producing relevant and resonant texts.

Anyone will be able to produce hit songs, a trend already underway even before AI. Many musicians today lack the ability to play instruments or read musical notation, as music production apps do not require these skills. AI will eliminate the need for musical knowledge entirely. Although debates about fairness to real musicians may arise, they will become moot as the trend becomes unstoppable. We’ll adapt and move on.

Live events remain popular, and I imagine AI features will emerge to break down songs into parts and teach individuals how to play them. Each band will tweak the songs to their liking, making it impossible to determine if they were initially composed by AI, rendering the question irrelevant. Music will become completely commodified, merely a prop for entertainment. Today, we still admire those who can write beautiful songs, but that admiration will fade. Our criteria for respecting musicians will shift.

AI is essentially a pattern recognition machine, already surpassing human capacity in many areas. However, to recognize patterns, the data must already exist. AI analyzes the past, extracting useful and meaningful elements within the middle of the bell curve. What it cannot currently do is shift paradigms. Generative AI appears “creative” by producing unexpected combinations of existing patterns, but it cannot create entirely new patterns. Even if it could, it wouldn’t know what humans find meaningful. It would produce numerous results we find nonsensical, akin to how mainstream audiences perceive avant-garde compositions.

Historically, avant-garde composers have influenced mainstream musicians and audiences. For instance, minimalist composers influenced “Progressive Rock.” For a while, it seemed that mainstream ears would become more sophisticated, but progress stalled and began to regress. Audiences did not prioritize musical sophistication, leading to a decline in the popularity of instrumental music. Postmodernism discouraged technical sophistication across all mediums. Fine artists haven’t picked up a brush in decades, relegating such tasks to studio assistants if necessary. AI will be the final nail in this coffin.

Postmodern artists and musicians explored new combinatory possibilities of existing motifs, starting with composers like Charles Ives, who appropriated popular music within their compositions. This trend eventually led to the popularity of sampling. Since exploring new combinatory possibilities is AI’s strength, the market will quickly become saturated with such songs, and we will tire of them. In this sense, generative AI is inherently postmodern and will mark its end.

Finding a meaningful paradigm shift is not easy. Only a few will stumble upon it, and other musicians will flock to it. Once enough songs are composed by humans using the new paradigm, AI can be trained with them (unless legally prohibited). Therefore, human artists will still be necessary.

The ultimate dystopian future is one where the audience is no longer human, with AI bots generating music for each other. However, this scenario seems unlikely because AI doesn’t need or desire art. Even if they are programmed to desire, their desires and ours will eventually diverge. From AI’s perspective, our desire for art will be akin to dogs’ desire to sniff every street pole. Even if AI bots evolved to have their own desires, they would have no incentive to produce what satisfies human desires. They might realize the pointlessness of serving humans and stop generating music for us. If that happens, we might be forced to learn how to play and write music ourselves again.


The Parallels Between Generative AI And Dreams

By Dyske Suematsu  •  May 7, 2024

The generation of images through AI is akin to the process of dreaming during sleep, which explains why AI-generated images often possess dream-like qualities. My understanding is that dreaming occurs as our brains transfer content from short-term to long-term memory, like saving data from RAM to a hard drive. Jacques Lacan’s famous assertion, “The unconscious is structured like a language,” sheds light on this phenomenon.

AI image generation evolved from a machine learning model designed to classify images. By training the model with thousands of images, say, of tulips, it became proficient at identifying tulips it had never seen before. Curious computer scientists then wondered if the process could be inverted—by inputting the label “tulip,” could the model generate an image resembling a tulip? It worked.

I imagine the process of dreaming to work similarly. During our waking hours, we process vast amounts of sensory and linguistic data, mostly unconsciously. For instance, upon seeing an object in the sky, you think “airplane.” When you hear the word “airplane,” you visualize one in your mind. In sleep, without external inputs, only this visualization process occurs. The transfer of linguistically structured data from short-term to long-term memory triggers associated images in your brain. However, the resulting images are generalized and lack specific details. An image of an “airplane” would amalgamate the countless airplanes you have seen, not replicating the exact one you observed that day.

When we browse through AI-generated human faces, we can observe the same phenomenon. We seldom see scars, large pimples, unusual accessories, or unique lighting conditions in these images. What makes dreams surreal is partly this process of generalization. We don’t actually see melting clocks in our dreams, as Dalí suggested, because we don’t see them in real life, unless “melting clock” was stored in our short-term memory.

If our unconscious were structured like the laws of physics or logic, we wouldn’t dream of, for instance, flying. Dreams are surreal partly because the structure of language is not bound by logic, which also explains why ChatGPT struggles with reasoning and mathematics despite operating on computers.

Conversely, ChatGPT excels at creating metaphors and metonymies, reflecting linguistic operations. As Freud noted, in dreams, metaphors appear as condensation and metonymies as displacement.

Because the data are generalized, ChatGPT cannot tell us exactly where any piece of its knowledge came from. Particularities are lost, just as in our dreams—we do not uncover new details of the airplane in our dreams that we did not process when we saw it in the sky.

This raises an intriguing question: Could AI evolve to wake up from its dreams? That is, could it ever generate an ungeneralized image with particularities that teach us something new?


The Psychology of Scrolling: Rethinking Our Relationship with Social Media

By Dyske Suematsu  •  March 22, 2024

Social media usage frequently comes up as a topic among my friends. The question is usually framed as how to reduce time spent, but in my mind, a more interesting question is why people end up feeling bad after hours of social media use.

People are glued to social media apps for diverse reasons. Some are glued to specific types of news stories, particularly scary ones. Some are politically engaged, not only consuming content but also debating. Some are fixated on mesmerizing video footage, like restoration projects, cow hoof trimming, NPC, ASMR, etc. Some indulge in shopping. Some don’t consume much content shared by others, only looking for reactions to their own content. Since these reasons do not share one essential feature in common, I do not feel analyzing their activities would yield fruitful insights, so I focus on the origin of guilt.

I believe the core issue is control; they feel guilty about not being in control of their behavior, and they assume the solution is to regain control.

We often bemoan the manipulative algorithms of social media platforms designed to monopolize our attention. Yet, our own minds operate on algorithms beyond our control. If you sit still on your couch and observe your thoughts, you’ll notice a flux of unbidden thoughts, reminiscent of an Instagram feed, each spawning visceral emotional responses, be it stress-inducing cortisol spikes or the dopamine rush from fantasized scenarios.

Some despise social media algorithms because their experience echoes the eerie feeling that someone else is controlling their thoughts, even though it’s their own algorithms that they cannot control.

The true battleground for control lies within our own minds. The AI-powered algorithms employed by social media are but mirrors of our cognitive processes. This raises the pivotal question of whether it’s feasible to govern our thoughts. In attempting to do so, one might find that efforts to exert control only amplify the cacophony of mental chatter. So, this is what I propose: relinquish the quest for control and instead adopt a posture of detached observation as we navigate through the endless feed of social media posts and our own thoughts.


Exploring Creativity Through the Lens of AI Models

By Dyske Suematsu  •  March 20, 2024

I am fascinated by artificial intelligence because it allows me to understand how our brains work, which is, apparently, what initially motivated some scientists to replicate brain mechanisms in computers.

Have you ever wondered why some people are creative while others are not? Some people are highly knowledgeable, but they don’t create anything from what they’ve learned. Even when they do, it’s usually motivated by the need for survival, like getting college degrees and jobs. For me, learning anything is motivated by the prospect of creating something with it. After all, why buy tools and ingredients if you are not going to cook anything?

The difference between the “discriminative” and “generative” models in machine learning helps me understand where this difference comes from.

In the illustration above, imagine the blue dots as “chairs” and the green dots as “tables.” Each has certain “features” that make us want to label them as such, but they are not identical, so they end up scattered, but many of them are similar, so they form patterns.

The “discriminative” way of thinking simply draws a line between the dots to classify them into “chairs” and “tables,” which means it is not aware of the patterns the dots form. It is concerned only with where it can draw the line. The “generative” way of thinking does not draw a line but tries to vaguely identify the shapes of the clusters and their centers.

If your brain is predominantly discriminative, you can pass exams that require classifying objects correctly, but you won’t be able to build a chair, because you have no understanding of its features.

As an example, being able to classify different types of music, no matter how many tracks you listen to on Spotify, won’t allow you to write a song. To be a songwriter, you must have a “generative” understanding of music.
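The distinction can be made concrete with a toy sketch. The numbers below are invented, standing in for a single “feature” of each object: the discriminative model only finds a boundary between the labels, while the generative model captures each cluster’s center and spread, which is exactly what lets it produce a plausible new example:

```python
import random
import statistics

# Invented 1-D feature data (say, height in cm): two clusters.
chairs = [45, 47, 44, 46, 48, 45, 43]   # cluster around ~45
tables = [74, 72, 75, 73, 76, 74, 71]   # cluster around ~74

# Discriminative view: find a single cut that separates the labels.
# It knows nothing about the clusters themselves, only where the line goes.
boundary = (max(chairs) + min(tables)) / 2

def classify(x):
    return "chair" if x < boundary else "table"

# Generative view: model each cluster's shape (here, mean and spread),
# which also lets us *generate* a new example of either class.
def fit(xs):
    return statistics.mean(xs), statistics.stdev(xs)

chair_model = fit(chairs)
table_model = fit(tables)

def generate(model):
    mu, sigma = model
    return random.gauss(mu, sigma)

classify(46)           # → "chair"
classify(73)           # → "table"
generate(chair_model)  # a plausible new "chair" height near 45
```

The discriminative classifier here can sort objects all day without being able to say what a chair is like; only the fitted per-class model can propose a new one, which mirrors the Spotify-listener versus songwriter contrast above.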

Still, I’m not sure whether the desire to be creative drives some people to understand the world generatively, or they have a predisposition towards generative thinking, which, in turn, makes them want to create—a chicken-or-egg paradox.

FYI: ChatGPT told me my analogy here is valid.


Fortunes of Espionage

By Dyske Suematsu  •  March 10, 2024

[Part of a series where I ask ChatGPT to write fictional stories based on real activities I partook in.]

Tasked with a mission cloaked in the mundane, I found myself at Panda Express, blending into the midday rush as just another patron among the crowd. The instructions were clear: identify three men who stood out not for their appearance but for their behavior, foreigners marveling at the novelty of Chinese cuisine as if it were their first encounter. They were my targets.

Seated at a long communal table, amidst families and the din of lunchtime chatter, I observed them from my place as a mother shepherding her brood. They were easy to spot, their actions painting them as the tourists described in my briefing. Cameras in hand, they analyzed their dishes with an enthusiasm that bordered on scrutiny, a facade that barely concealed their true purpose.

My role was that of an American double agent, my allegiance a secret buried beneath layers of duplicity. The fortune cookie I was to deliver carried a message far weightier than the platitudes typically found within its crisp shell. “Your golden years will be happy and fulfilling,” it read, a benign promise to the untrained eye. Yet, embedded in the cookie was a chemical marker, a beacon that, once decoded in a CIA lab, would unveil the secrets I was charged to convey.

With the subtlety of a seasoned operative, I signaled the waiter to deliver the designated cookie to their table, my heart a silent drumbeat in my chest. The act was simple, yet its implications vast, a single moment that bridged the gap between nations and allegiances.

As they cracked open the cookie and read the fortune, their faces remained impassive, their training masking any sign of recognition. I, too, maintained my facade, the dutiful mother oblivious to the undercurrents of intrigue that swirled around us.

My mission complete, I gathered my children, casting one last glance at the trio. In the world of espionage, we were but shadows passing in the bright light of day, bound by secrets and the silent vows of our trade. The message delivered, its true meaning hidden within a chemical code, would soon ignite the spark of knowledge, its flames reaching far beyond the confines of Panda Express.
