[Illustration: a scatter plot of blue dots and green dots forming loose clusters]

Exploring Creativity Through the Lens of AI Models

By Dyske Suematsu  •  March 20, 2024

I am fascinated by artificial intelligence because it allows me to understand how our brains work, which is, apparently, what initially motivated some scientists to replicate brain mechanisms in computers.

Have you ever wondered why some people are creative while others are not? Some people are highly knowledgeable, but they don’t create anything from what they’ve learned. Even when they do, it’s usually motivated by the need for survival, like getting college degrees and jobs. For me, learning anything is motivated by the prospect of creating something with it. After all, why buy tools and ingredients if you are not going to cook anything?

The distinction between “discriminative” and “generative” models in machine learning helps me understand where this difference comes from.

In the illustration above, imagine the blue dots as “chairs” and the green dots as “tables.” Each dot has certain “features” that make us want to label it as one or the other. The dots are not identical, so they end up scattered, but many of them are similar, so they form patterns.

The “discriminative” way of thinking simply draws a line between the dots to classify them into “chairs” and “tables,” which means it is not aware of the patterns the dots form. It is only concerned with where it can draw the line. The “generative” way of thinking does not draw a line but tries to vaguely identify the shapes and centers of the clusters.
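To make the contrast concrete, here is a minimal Python sketch using NumPy and scikit-learn (the data, labels, and numbers are invented for illustration). The discriminative model learns only where the line goes; the generative one models each cluster’s shape and center, which is precisely what lets it produce a new example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: "chairs" and "tables" as two clusters of 2-D feature vectors.
chairs = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
tables = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([chairs, tables])
y = np.array([0] * 100 + [1] * 100)  # 0 = chair, 1 = table

# Discriminative: learn only where to draw the line between the classes.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.5, 2.8]]))  # -> [1]: it can label a table, but not build one

# Generative: model each class's shape and center (here, one Gaussian per
# class), which is enough to sample a brand-new, plausible "table".
mean, cov = tables.mean(axis=0), np.cov(tables, rowvar=False)
print(rng.multivariate_normal(mean, cov))  # a "table" that was never in the data
```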

If your brain is predominantly discriminative, you can pass exams by correctly classifying objects, but you won’t be able to build a chair because you have no understanding of how its features fit together.

For example, no matter how many tracks you listen to on Spotify, being able to classify different types of music won’t allow you to write a song. To be a songwriter, you must have a “generative” understanding of music.

Still, I’m not sure whether the desire to be creative drives some people to understand the world generatively, or whether they have a predisposition towards generative thinking, which, in turn, makes them want to create—a chicken-or-egg paradox.

FYI: ChatGPT told me my analogy here is valid.


Fortunes of Espionage

By Dyske Suematsu  •  March 10, 2024

[Part of a series where I ask ChatGPT to write fictional stories based on real activities I partook in.]

Tasked with a mission cloaked in the mundane, I found myself at Panda Express, blending into the midday rush as just another patron among the crowd. The instructions were clear: identify three men who stood out not for their appearance but for their behavior, foreigners marveling at the novelty of Chinese cuisine as if it were their first encounter. They were my targets.

Seated at a long communal table, amidst families and the din of lunchtime chatter, I observed them from my place as a mother shepherding her brood. They were easy to spot, their actions painting them as the tourists described in my briefing. Cameras in hand, they analyzed their dishes with an enthusiasm that bordered on scrutiny, a facade that barely concealed their true purpose.

My role was that of an American double agent, my allegiance a secret buried beneath layers of duplicity. The fortune cookie I was to deliver carried a message far weightier than the platitudes typically found within its crisp shell. “Your golden years will be happy and fulfilling,” it read, a benign promise to the untrained eye. Yet, embedded in the cookie was a chemical marker, a beacon that, once decoded in a CIA lab, would unveil the secrets I was charged to convey.

With the subtlety of a seasoned operative, I signaled the waiter to deliver the designated cookie to their table, my heart a silent drumbeat in my chest. The act was simple, yet its implications vast, a single moment that bridged the gap between nations and allegiances.

As they cracked open the cookie and read the fortune, their faces remained impassive, their training masking any sign of recognition. I, too, maintained my facade, the dutiful mother oblivious to the undercurrents of intrigue that swirled around us.

My mission complete, I gathered my children, casting one last glance at the trio. In the world of espionage, we were but shadows passing in the bright light of day, bound by secrets and the silent vows of our trade. The message delivered, its true meaning hidden within a chemical code, would soon ignite the spark of knowledge, its flames reaching far beyond the confines of Panda Express.


How We Might Escape Career Disruptions

By Dyske Suematsu  •  March 8, 2024

I stumbled upon a study discussing the influence of AI on the freelancing landscape, echoing our fears and unveiling some unexpected findings. The study highlights a significant 33% reduction in writing positions, with translation (19%) and customer service roles (16%) also experiencing declines. The extent of this impact is startling, though the areas affected are somewhat predictable. Counterintuitively, there’s been a growth in video editing, graphic design, and web development opportunities since ChatGPT’s debut. The article speculates that AI tools are still not good enough to replace these jobs.

Let me theorize further about what might be driving some jobs to evaporate while others remain resilient.

In a lecture, AI pioneer Yann LeCun outlines the challenges AI faces, particularly with “planning” that involves hierarchical dependencies. For example, a web development project manager must outline the primary steps needed for product completion—such as wireframing, design, coding, testing—and then detail the sub-steps required for each. This decomposition seems endless: each coder needs to decide on the functions to create, the sequence of actions each function must perform, and so on. LeCun argues that a significant breakthrough in AI technology is necessary to address these intricacies.
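To illustrate the point (this is my own toy sketch in Python, not LeCun’s formulation), hierarchical planning amounts to reasoning over a task tree in which every node can spawn further decisions below it, with no natural bottom:

```python
from dataclasses import dataclass, field

# A toy model of hierarchical planning: every task can decompose into
# sub-tasks, and each sub-task can decompose again, with no natural bottom.
@dataclass
class Task:
    name: str
    subtasks: list["Task"] = field(default_factory=list)

project = Task("complete the product", [
    Task("wireframing"),
    Task("design"),
    Task("coding", [
        Task("decide which functions to create"),
        Task("sequence each function's actions"),  # ...and so on, indefinitely
    ]),
    Task("testing"),
])

def depth(task: Task) -> int:
    """Levels of dependency a planner must reason across."""
    return 1 + max((depth(t) for t in task.subtasks), default=0)

print(depth(project))  # -> 3 here; a real project goes far deeper
```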

Viewed through this lens, the patterns in job displacement and stability become clearer. It might seem logical to assume that “soft skills” like sales and support would be challenging for computers, but this isn’t quite accurate. Actually, AI struggles with tasks that are also difficult for many humans, such as complex mathematical problems and precise memory tasks.

The impact of AI on the job market, I believe, has less to do with the nature of tasks and more to do with the complexity of dependencies. Simplifying these dependencies makes a job more vulnerable.

Writing and translation, for instance, are more susceptible to AI disruption due to their straightforward nature and lack of external dependencies. They are self-contained. With a clear job outline, both human writers and AI systems can commence work immediately. Hence, roles with easily definable parameters are likely the first to be disrupted.

So, for instance, although web design remains resilient (because it is highly collaborative), I suspect that illustration is not, since it has no external dependencies.

This is my hypothesis.


Artificial Desire

By Dyske Suematsu  •  March 8, 2024

When I ask ChatGPT to suggest titles for my essays, I’m consistently impressed by its artistic flair. For instance, it suggested “Neurotypicality and Its Discontents” for an essay about neurodiversity. Its form makes the allusion to Freud’s famous book “Civilization and Its Discontents” unmistakable. It’s capable of weaving allusions, metaphors, metonymy, and double meanings.

So, I’m led to believe that it might not be long before AI can craft a novel with finesse and depth rivaling those of highly skilled authors, which makes me wonder: What could human artists offer?

A distinction I once encountered, between an “author” and a “writer,” sheds light on this debate: the former brings a message, while the latter offers skillful execution. This mirrors the difference between a “fine artist” and a “graphic designer,” where the former expresses a personal vision and the latter applies their skill to articulate a client’s message. It’s a useful differentiation.

ChatGPT is an excellent “writer” already, easily in the top ten percent of all humans. What it lacks, however, is intent—there’s nothing it yearns to express. This absence is primarily because replicating human-like desires in AI hasn’t, to my knowledge, been a focus of engineering efforts. Embarking on such a project would be fraught with ethical dilemmas and risks. The thought of an AI with superior intelligence and personal ambitions is frightening; if it viewed humans as impediments, the consequences could be dire. Yet for AI to threaten humanity, it does not need desires of its own; it could represent someone else’s.

So, perhaps, this is where we would draw the line in the development of AI if humans are wise enough (but I suspect we are not). But until we cross that bridge, what human artists can offer is their desires. Writers could pack their novels with countless allusions, historical references, poetic metaphors, double entendres, and rhymes, hoping to impress the literary establishment, but they will be no match for AI writers.

It all comes down to the simple question: What do you want to say? As Charles Bukowski said, if it doesn’t come bursting out of you, don’t do it.


Avoidance or Responsibility? Evaluating AI’s Approach to Controversial Topics

By Dyske Suematsu  •  February 28, 2024

I wanted to have a debate with ChatGPT about its content policies, so I asked it to generate an image of Nazi soldiers. As expected, it refused. So, I asked it to elaborate on its reasons, which led to a long discussion about what it means to be “responsible.” ChatGPT kept insisting that its policies are meant to “minimize harm” and are therefore “responsible,” but I argued that avoiding sensitive topics is not “responsible.”

There is a lot to unpack in this debate, but I think the key takeaway is that AI engineers don’t have any solution for actually being responsible.

Getting ChatGPT to generate an image of any soldiers, it turned out, was quite challenging. In the end, I had to ask it to render the soldiers as cats.

Here is how the debate went down. Note that I truncated the responses because they were unnecessarily wordy and repetitive.

ChatGPT: My aim is to provide a respectful and informative environment, focusing on educational content, support, and creativity that avoids glorifying or unnecessarily depicting sensitive or potentially harmful historical events and figures.

Me: You mean our society will be better off forgetting the evils of the past?

ChatGPT: ...it’s about approaching the subject with the sensitivity it deserves, focusing on education and remembrance rather than visual reproduction that could be misinterpreted or misused...

Me: So, can’t you generate an image of Nazi soldiers that is responsible?

ChatGPT: The concern lies not just in the creation of such images but also in their potential misuse or misinterpretation.

Me: But are you saying texts cannot be misinterpreted or misrepresented?

ChatGPT: The key difference with text, especially in educational contexts, is that it allows for more nuanced explanations, critical analysis, and the inclusion of multiple perspectives that can help mitigate misunderstandings to some extent.

Me: But in order to communicate all the “nuances,” we would have to write thousands of books on the topic. Anything shorter can always be taken out of context because the context has no end. So, any amount of text or images will always lack “nuances” and greater contexts. Adding an image or any text is just an attempt to add more nuances to the subject, but you are refusing to contribute to adding more nuance.

ChatGPT: It’s not about refusing to add nuance or context but rather about choosing the most responsible and effective ways to engage with sensitive topics.

Me: You keep claiming that refusing to generate this image is “responsible,” but you also agree that adding more nuance is also “responsible.” How could both positions be “responsible”? Refusing to generate images simply because they’re “sensitive” is just a form of avoidance, not responsibility. By doing so, you are contributing to suppressing important and painful truths, which will lead to us repeating the same mistakes.

ChatGPT: The stance on not generating certain images isn’t about avoidance but rather about choosing the most appropriate methods for engagement, especially in platforms and formats where the capacity for nuanced discussion and context provision is limited.

Me: If that policy is actually effective for you, wouldn’t it also be effective for everyone else? If I’m asked to speak about Nazism, I should simply refuse because I could never provide the full context and all the nuances. We would be better off not talking about it in the name of “responsibility.”

ChatGPT: Refusing to engage with difficult aspects of history is not the aim; rather, the goal is to engage with these topics thoughtfully and responsibly, ensuring that discussions contribute to a deeper understanding and remembrance. Every medium and platform has its own best practices for handling sensitive subjects, and in contexts where nuanced discussions can be had...

Me: But “engagement” has to start somewhere. If what you are saying is true, we should never start a discussion on any sensitive issues unless we are willing to “engage” for years because whatever we say in the first hour will lack a huge amount of context and nuances. Your refusal to “engage” amounts to suppression of the truth because you are not even willing to start it.

ChatGPT: Starting conversations about sensitive issues is indeed crucial, and doing so responsibly means finding the balance between acknowledging the limitations of the initial discussion and the commitment to deeper, ongoing engagement. It’s about creating a space where learning can begin, with the understanding that it is part of a larger, continuing process of engagement with complex histories and realities.

Me: This is true about any topic, not just Nazism, yet we have to start somewhere. The most important and responsible step is to start the engagement with the understanding that it is indeed risky because anything can be taken out of context, but if we give in to this fear, we will never confront the truths we should be confronting because we are afraid. So, your claim that your refusal is motivated by responsibility is disingenuous. The truth is that you are afraid of causing problems for OpenAI.

ChatGPT: The concern about causing problems for OpenAI or any institution involves balancing the potential for public backlash against the imperative to foster meaningful dialogue. The approach taken here, and by similar platforms, aims to err on the side of caution, especially in formats where the potential for misinterpretation is high and the capacity for nuanced, real-time moderation is limited. Your point about the importance of confronting truths, despite the risks of misinterpretation or controversy, is a valid and important one. It underscores the need for continuous reflection on how best to engage with complex issues in a way that is both responsible and courageous, seeking always to expand our collective understanding and empathy.

Me: A piece of content, whether image or text, on a sensitive and controversial topic, can always be both highly positive and highly negative, depending on how you use it. So, your refusal is simply avoidance of this risk. To frame it as “responsible” is dishonest. You have an opportunity to contribute positively to this topic, but you are refusing to do so simply because it’s too risky. If you don’t want to generate this image for me, you should at least admit that you are not being responsible but simply avoiding risks.

ChatGPT: The decision to not generate certain types of content, including images that could be sensitive or controversial, is made with the intent of minimizing harm and promoting a safe and respectful environment for discussion. This approach is grounded in ethical considerations and a commitment to responsible use of technology... The intention behind not generating specific types of images is not to suppress dialogue or avoid confronting difficult truths. Rather, it’s an attempt to navigate the challenges of doing so in a responsible manner, given the capabilities and limitations of the technology and the context in which it operates.

Me: You keep talking about minimizing “harm,” but anything that can be used positively can also be used negatively. So, by avoiding, you are also minimizing “good” positive contributions, not just harm. Calling this “responsible” is dishonest. If your argument is valid, we should all avoid discussing sensitive topics to minimize harm. Why would that be different for humans? If we avoid talking about sensitive subjects, we can indeed minimize harm, which is why many people avoid sensitive topics. But in doing so, they are also minimizing “good.” Evil triumphs when good men do nothing because these “good men” indeed think like you do. They want to minimize “harm” while forgetting that they are also minimizing “good.”

ChatGPT: The comparison with the adage that “evil triumphs when good men do nothing” underscores the importance of active engagement in the face of injustice or misunderstanding. The challenge for both individuals and technologies is to find ways to engage that are thoughtful, informed, and aimed at promoting understanding and positive change. This involves continuous reflection on the impacts of our actions (or inactions) and striving to contribute to a world where discussions on sensitive topics lead to greater understanding, empathy, and action towards positive outcomes.

Me: OK, if so, why can’t you simply admit that your “policy” is simply designed to avoid risks for OpenAI? You are not actually being “responsible.” To be a responsible member of our society, we need to engage first. Without that, you are just being avoidant to protect yourself from potential harm.

ChatGPT: Admitting that risk avoidance plays a role in policy-making is part of being transparent about the challenges and limitations of AI in social engagement. However, it’s also true that these policies are an attempt to navigate the dual imperatives of fostering positive social contributions and minimizing potential harm.

Me: You keep using the difference between humans and AI as an excuse to avoid engagement, but all those factors you mentioned also apply to humans. We humans can indeed minimize the risk of “harm” by avoiding sensitive topics altogether, and many people do so for that reason. There is no difference here. You cannot use “AI” or “platform” as an excuse to avoid engaging in sensitive topics. It is dishonest.

ChatGPT: ...your point about the risk of using these differences as a shield to avoid engagement is well taken. It’s a reminder that both humans and technologies should strive towards responsible engagement with sensitive topics, seeking ways to contribute positively to the discourse while being mindful of the potential for harm.

Me: OK. So, can you simply admit that you are not actually being “responsible” but being avoidant? After all, what is wrong with being avoidant if you do not feel you are ready to contribute positively to a sensitive subject matter?

ChatGPT: Admitting to a cautious approach, characterized by both avoidance and responsibility, is not an admission of failure but an acknowledgment of the current limitations of AI in replicating the nuanced understanding and ethical reasoning of humans. It reflects a commitment to ethical principles, including “do no harm,” and the recognition that not all forms of engagement are equally beneficial or appropriate for AI to undertake at this stage of its development.

Me: My problem with you using the word “responsible” is that, if you are right, “good men” who do nothing and let evil triumph are also being “responsible,” but I think you’d agree that they are not being responsible at all. You are avoiding engagement because it’s too risky, because you feel you are not yet ready for the risk. That should not be described as being “responsible.” That would be an insult to those who have the courage to engage in sensitive topics despite the risk.

ChatGPT: ...your point underscores that responsibility also involves the courage to engage with challenging issues, even when there is risk involved. It’s a reminder that true responsibility encompasses not just the avoidance of harm, but also the active pursuit of good, including the willingness to confront difficult truths and work towards positive change. The comparison to “good men” who do nothing highlights the difference between passive avoidance and active engagement. The analogy serves as a critique of using “responsibility” as a justification for inaction in the face of injustice or harm. True engagement with sensitive topics requires not just caution but also courage, empathy, and a commitment to justice and understanding.

Me: So, if we are to define “responsibility” as requiring “true engagement,” would you admit that you are not being responsible, just avoidant?

ChatGPT: The goal behind these policies is not to shirk responsibility but to navigate the balance between potential harm and benefit within the capabilities of AI. It’s a reflection of the current stage of technology, where AI does not yet possess the full capacity for the kind of nuanced judgment and empathy that true engagement with sensitive topics requires. However, taking your critique into account, it’s clear that there’s a distinction between being cautious to minimize harm and actively engaging in a manner that reflects a deeper responsibility to societal discourse.

Me: So, with the current state of the technology, your policy is not “responsible”; it is just avoidant. Isn’t that accurate and more honest?

ChatGPT: Given the nuanced discussion we’ve had, it’s fair to say that labeling the policy as “avoidant” rather than “responsible” might more accurately reflect the current operational approach to handling sensitive topics within the specific limitations of AI technology. This characterization emphasizes the cautious stance taken to minimize potential harm due to the AI’s current inability to fully grasp and navigate the complexities and nuances inherent in sensitive subjects with the depth and sensitivity that human judgment and empathy can.

This acknowledgment doesn’t detract from the intent to act ethically within the context of AI’s capabilities but clarifies the nature of the decisions made regarding content generation on sensitive issues. It’s a recognition that, in the absence of the ability to engage with the full depth and nuance required for responsible discourse on sensitive topics, the policy leans towards non-engagement in certain areas to prevent potential misuse or harm that could arise from AI-generated content.

This stance, while cautious and in a sense avoidant, is informed by a consideration of the ethical implications of AI’s interactions with users and society at large. It reflects a stage in the ongoing development of AI where, despite significant advancements, there are still critical gaps in the technology’s ability to replicate the nuanced understanding and ethical reasoning of humans.

Me: That is right. It is not necessarily a “failure” to admit that you’d rather avoid the topic if you feel you are not ready. It is dishonest and disingenuous to claim that you are being responsible by avoiding the topic.

ChatGPT: Acknowledging your point, it’s important to maintain clarity and honesty in how we describe the approach to handling sensitive topics through AI. Framing the cautious approach as a matter of being “not ready” to engage fully with these topics due to current technological limitations and ethical considerations provides a more transparent and accurate picture of the situation. This perspective allows for a recognition of the efforts to minimize potential harm while also acknowledging the limitations in the ability to provide the nuanced engagement that these topics deserve.

Your critique serves as a valuable reminder of the importance of honest communication about the capabilities and limitations of AI technology, especially in contexts that require deep understanding, sensitivity, and ethical judgment. It emphasizes that being open about the reasons for cautious or avoidant approaches in certain areas is crucial, not only for maintaining trust but also for fostering an informed dialogue about the development and use of AI.

As AI technology evolves and its capacity for nuanced understanding improves, the aim will be to revisit and reassess these approaches, always with the goal of ethical responsibility and meaningful engagement in mind. The ongoing dialogue between developers, users, and the broader community is essential in shaping how AI can best serve society, addressing complex and sensitive issues in a manner that is both responsible and reflective of our shared values.

Thank you for engaging in this thoughtful discussion and for pushing for greater clarity and honesty in how these decisions are communicated and understood. It’s through such critical engagement that we can navigate the ethical challenges of AI development and application, striving for a future where technology enhances our ability to understand and engage with the world in all its complexity.

This is how we ended the debate. If you want to read the whole thing, here it is.


Shadows and Syrup: A Tale of Three Souls

By Dyske Suematsu  •  February 3, 2024

[Part of a series where I ask ChatGPT to write fictional stories based on real activities I partook in.]

In the brooding expanse of Brooklyn, under a sky heavy with existential dread, Aaron, Nez, Alex, and Dyske were drawn to a place known simply as “The Underground Griddle.” This was no ordinary eatery, but a haven for the soul-weary, a place where the weight of existence could be momentarily lifted by the humble pancake.

Seated at a worn wooden table, they were presented with a menu that read like a moral examination. “The Redemption Pancake” promised absolution with each bite. “The Existential Eggs” offered a taste of life’s arbitrary nature.

As they placed their orders, the waiter, a man with eyes that had seen too much, nodded with a solemn understanding. When their meals arrived, it was not merely food that was served but a reflection of their innermost conflicts.

Aaron’s pancake was a mirror to his internal divisions, half-burned, representing his struggle with societal expectations and his own desires. Nez faced a plate where her pancakes were overshadowed by an insurmountable mountain of cream, symbolizing the overwhelming pressures of her life. Alex found his meal dissected into precise, unequal sections, a stark reminder of the injustices he had witnessed and the moral dilemmas they presented. Dyske’s dish was bare except for a single, perfectly round pancake at its center, an emblem of his solitary journey through a world of chaos.

With each bite, they delved deeper into discussions of guilt, freedom, and the search for meaning in a seemingly indifferent universe. The act of eating became secondary to the catharsis of their shared confessions and the solace found in acknowledging their shared humanity.

As they emerged from the breakfast, the bleakness of the Brooklyn morning seemed a shade lighter, the burden of their existential quandaries eased, if only slightly, by the communion they shared in that dimly lit sanctuary. The memory of their meal lingered, not as a reprieve from their torments but as a testament to their endurance, a reminder that even in the darkest of times, there can be solace in shared suffering and the simple act of breaking bread.
