The Tragedy of Artificial Intelligence and Human Learning

Choi Han Kyum
Nov 3, 2023 · 13 min read

<Key Points>

  • This article discusses the impact of the match in which the Go AI program AlphaGo defeated the world’s strongest Go (Baduk) player, Lee Sedol. It argues, first of all, that the event is most notable for showing that comparing AI to human capabilities is a completely wrong premise.
  • It does not rate AlphaGo’s ability as highly as many futurists do, because AlphaGo is limited to the 19-by-19 board it has learned. If the same game had been played on a board with just one extra line, the result would have been completely different.
  • The passage also highlights that Baduk is a game in which humans compete over infinite space and time using white and black stones, not a game confined to a 19-line Go board. To humans, the current 19-line board stands for ‘infinity’.
  • The article further argues that AI’s increasing capability stems not from any great advance in AI technology or algorithms, but from a change in the way we think about machine learning.
  • Finally, it notes that the problem is not the new technology but our missing what it means to be human. AI is now being asked to establish a relationship with humans, and it needs to be able to decide what to create and what not to create in that relationship.
Lee Sedol enters the press conference after losing his match against the artificial intelligence AlphaGo. Eric Schmidt, chairman of Google, is seen smiling. /Chosun Ilbo A1, March 10, 2016.

A revolutionary moment in the development of artificial intelligence programs may have been the match in which AlphaGo, a Go AI program, walloped the world’s strongest Go player, Lee Sedol.
Humans have always made rules; a game is a competition to decide a winner. For animals there are no games, only killing or being killed. Humans play games as a substitute for risking their lives. Whether for food or dominance, they invented games as a way to make decisions.

While most games involve a contest of strength or physical skill, Go is not a physical game but a mental one, played by alternating white and black stones on a board. A Westerner who has never seen Go may find it hard to understand what is so interesting about it. Go has been regarded as the finest brain game in the East because of the beauty in how the stones placed by the two players vary widely in meaning depending on their location and order. When players place a stone, they do so with an eye toward how it will interact with the next one, and the result of that interaction is the amount of power they hold over their opponent, indicated by the size of their territory (houses). The side whose stones act most effectively gains territory; that is the player’s skill. The action of the white and black stones is determined by their positions on a grid of 19 lines by 19 lines, and by the order in which they are placed.

This raises a question. Will AI ever surpass humans? I still don’t get it, even though I’ve clearly witnessed humans being defeated by AI. Ray Kurzweil, a futurist who leads AI development at Google, puts the timeframe at 2029: “By 2029, it will have the same level of intelligence as humans.” It would take less than 20 years for AI to beat humans at chess and then go on to win at Go. “Sixteen years later, in 2045, the Singularity, the point at which humanity’s physical and intellectual capabilities, when combined with artificial intelligence, exceed biological limits, will come,” he said with confidence. (Chosun Ilbo, March 11, 2016, A5)

The first time an artificial intelligence beat a human at chess was in May 1997. (IBM’s Deep Blue played against world chess champion Garry Kasparov.) AlphaGo’s battle of the brains comes exactly 19 years after that. Futurists predict that AI will continue to advance at an exponential rate.

In fact, the outcome of the Google Challenge Match held on March 9, 2016 in Seoul was decided early on. Game 1 ended in AlphaGo’s favor, shocking the world and surprising Go experts. In the second game, even Lee Sedol called AlphaGo “scary”. The experts on the sidelines had the same reaction: Lee could not win even though he made no particular mistakes.

The BBC said, “AlphaGo won by using intuition, a skill thought to be uniquely human,” and Reuters said, “AlphaGo’s victory marks a new milestone in the development of artificial intelligence.” The Guardian said, “A computer program has completely conquered one of humanity’s most creative and complex games.” “I don’t know if the world in which AlphaGo won is a utopia or a dystopia, but I feel like I’m watching a huge transition in human history, comparable to the agricultural and industrial revolutions,” said Professor Kyung-Cheol Joo of Seoul National University.

Everyone was amazed, and interest in the future of artificial intelligence intensified. Korean newspapers published special features on the transformation AI would bring. After Lee Sedol lost two games to AlphaGo, Chosun Ilbo ran an article titled “AI Speaks — Medical, Legal, and Economic Professions… His Decisions Are the Right Answers” on the front page of its Saturday edition. The article reported that “machines and computers with artificial intelligence are surpassing humans in intelligence and comprehensive judgment, which were once thought to be the exclusive domain of humans,” and that they are not limited to games such as Go but already excel in fields such as medicine and finance.

Will the future of AI be as unimaginable as the media and experts predict? To me, at least, it is clear that it will not. The message of the 2016 AlphaGo vs. Lee Sedol match is clear. AlphaGo’s ability to play “that game” has been perfected, but not to the extent that many futurists suggest. At its simplest, AlphaGo’s victory is limited to the 19-by-19 board it has learned. If the same game had been played on a 20-line board with just one extra line, the result would have been completely different.

This is where I feel sorry for Lee Sedol. For humans, the 19-line board is just a tool created for convenience; it is not the game of Baduk itself. The board could be 20 lines horizontally or vertically, or 50. Baduk is a game where time is unlimited, so its space, too, cannot be fixed at 19 lines. Lee Sedol did not play Baduk; he played the ‘19-line board’ game. At least as I understand it, Baduk is a game in which humans compete over infinite space and time using white and black stones, not a game limited to a 19-line board. That the current board has 19 lines simply means ‘infinity’ to humans. For a computer, however, the number of possible games on a 19-line board is not infinite. It is a finite game whose beginning and end can, in principle, be calculated perfectly. A human cannot beat an opponent who already has the answer.
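The scale behind “finite but practically infinite” can be sketched with a quick back-of-the-envelope count. This is an illustration of my own, not from the article: treating every intersection as empty, black, or white gives a loose upper bound on the number of board configurations, a number that is finite yet astronomically large and grows explosively with board size.

```python
# Illustrative sketch (not from the article): a loose upper bound on the
# number of board configurations, counting each intersection as empty,
# black, or white. The function name here is my own invention.
def board_positions_upper_bound(lines: int) -> int:
    """Each of the lines*lines intersections takes one of 3 states."""
    return 3 ** (lines * lines)

for n in (19, 20, 50):
    bound = board_positions_upper_bound(n)
    # Print the order of magnitude rather than the full number.
    print(f"{n}x{n} board: about 10^{len(str(bound)) - 1} configurations")
```

On a 19-line board the bound is roughly 10^172; a single extra line pushes it to about 10^190. Finite in both cases, but only one of them is the game the machine has actually learned.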

If so, then there is nothing to be excited about, because artificial intelligence has not surpassed human capabilities. What is worth noting is that comparing it to human capabilities is a completely wrong premise. If we fail to understand this properly and, like the futurists, take artificial intelligence to be something that will surpass humans, we will suffer an unexpected disaster. I am not afraid of artificial intelligence; I am more afraid of human misjudgment. It is therefore necessary to examine the differences between artificial intelligence learning and human learning.

The fact that computers have reached a stage where they can learn on their own is a big change. The reason AI can now learn on its own, which it could not do in the past, is that it has inadvertently abandoned human methods of learning in favor of machine learning. This is not the result of any great advance in AI technology or algorithms, but of a change in the way we think about machine learning. It is no longer AI. I will refer to algorithms that work by “machine learning”, like AlphaGo and ChatGPT, as “machine intelligence”. I am not trying to repeat the technical explanations of engineers here.

There was a 60-year dark age from 1950 to 2010, when attempts were made to explore the possibilities of artificial intelligence, and then a decade of brilliance starting in 2010 that gave us machine intelligence like AlphaGo and ChatGPT. When it comes to the history of artificial intelligence, the dividing lines of 1950, 1980, and 2010 matter little. There is scant difference in the technical act of reading data or images with the goal of instilling intelligence in a computer. There may be some differences between the “rule-based” AI of the 1950s and the “learning-based” AI of the 1980s, but they are superficial. Engineers, by the way, call the machine learning methods applied since 2010 “deep learning”. It is a strange name: it is not clear what is being learned more deeply. Even AI experts say “we don’t know” how it happened, so the name does not really capture the revolutionary change.

Let’s think about the last 60 years of inaction from a non-scientific common sense perspective: in 1950, only scientists had access to computers; in 1980, personal computers were just getting started. And in 2010, the mobile era dawned and humans began to move around freely with smartphones. This distinction has an important message.

Often there are only vague explanations, a different learning method, amazing results, but no direct evidence that says “this is it”. So let us step out of the myopic view, extend the time series, and connect the spatial view. Today’s machine learning, including methods like Transformers, can be traced back to the dawn of the personal computer era around 1980. This chronological connection to earlier versions of AI is very important. The concept of artificial intelligence originated in the early 1950s, when computers were first taught to learn; this is when the term “artificial” came into vogue.

It was an era of industrial-revolution thinking, when technology and machines were believed capable of doing everything humans could do. No wonder that in the 1980s, movies like “The Terminator” featured an all-around humanoid AI as the main character. We live in a world where what we imagine becomes reality, but it is also a strict truth that we pay a price for imagining, and trying to realize, the unrealizable. Even if we move the point of view to a much more recent time, say 1990 instead of 1980, in the absence of widespread computer users, machine learning methods could only be limited to rule learning. That was the reality. Even human education at the time was either rule learning or indoctrination.

Alan D. Thompson

From this perspective, it makes sense that machine learning has only recently changed. The explosion of personal computer use in the 1990s coincided with the introduction of the Internet, which drove globalization. Institutionally it was a time of upheaval, with the GATT (General Agreement on Tariffs and Trade) system replaced by the WTO (World Trade Organization) and China, a country of 1.5 billion people, brought into the market economy. As human use of computers increased dramatically, so did the amount of data produced, data that could be used for artificial intelligence.

In my opinion, however much we talk about the change in data volume, that explains only the environmental change, not the mechanism itself. The human world has undergone a revolutionary social and environmental change in the Internet and mobile era. That process has manifested itself as a change in the human thinking system, a change in the human-centered worldview. It may also be the result of generational change and of a change in human learning methods through media.

Generational change was also evident in physics and engineering, where the generation that came after the new computer age (1980) became the center of research. For them, media was a necessity; they were educated freely and autonomously in an infinite information environment built on media. They did not dream of a universal human being, nor did they imagine AI as a 1950s-style creature. They realized that nothing could be accomplished by building a creature on rules or rote learning. As the computer generation, they know how to use massive amounts of data, and I think that is why they approach machine learning differently.

If so, it is conceptually no longer “artificial intelligence”. The name “A.I.” still carries the image of an all-powerful human being like the Terminator. The idea of ChatGPT talking to humans likewise reeks of omnipotence. Why did we suddenly give AI an image of omnipotence, and why is it so easy to accept the idea of a machine talking to humans? The ripples of AlphaGo’s defeat of Lee Sedol run deep and long. It is natural to raise questions, yet human society has accepted it without asking any.

Humans only converse when they have a special relationship. We do not talk to just anyone; that is not conversation. You can ask ‘anyone’ for directions, and you can ask ‘anyone’ when you need information that anyone knows. So let us ask: what is the relationship between ChatGPT and me, or ChatGPT and you? You might get an answer like this: we do not talk because we are special, but if we can talk, we can become special.

In the 21st century, we’ve all had the experience of becoming friends with people we never knew on the internet. We became friends regardless of relationship, and as social media dismantled the old order of relationships, we adapted to the new order of friends. This was the case with KakaoTalk friends, Facebook friends, and Instagram friends. They made friends and neighbors in online cafes and other places, and some of them actually met offline. But the trend was a flash in the pan, and then it was gone.

Recently, newspapers have reported that the online-driven Generation Z, unlike previous generations, is reluctant to reveal their daily lives to an unknown number of people. They are much less likely to use friendships like Facebook, and are more genuine in offline encounters. User changes on SNSs such as these show that the open sharing and participation that was once considered the hallmark of the digital age has been replaced by closed friendships, blocked sharing, and disappearing participation.

The JoongAng Daily reported on September 5, 2023 that while people still spend time on social media, they are sharing less, and young people are reluctant to reveal their daily lives there. In other words, the social media era we have been enjoying is coming to an end because users refuse to use it the way they used to.

What I am trying to say is that we stand at the beginning of a new trial for humanity. A new artificial intelligence has arrived, a magic wand that goes beyond talking to humans and can create anything humans ask of it. So the buzz is back; everyone is rushing in, handing AI prestigious “jobs” and churning out results. Is this output really useful? It is very likely that we will end up with a great deal of garbage data. That is a worrying thing, and a vicious cycle.

It’s not just the trial-and-error chaos that humanity has experienced so far. When humans ask and answer questions in a conversational format with an AI, whether online or offline, it is stored as data, unlike a human-to-human conversation. The AI will learn from it and use it to answer someone’s question.

AI cannot tell malicious data from good data, so let us ask the question again. What is the relationship between me and ChatGPT? Can I establish a new relationship and develop it into a deeper one, and if so, are there steps for doing so? As soon as new AIs appear, they start conversing with humans. As mentioned earlier, conversation is different from simply asking for information: its content changes with the person and the situation.

It’s an ever-changing “art of relationship” that doesn’t have a right answer. Without it, a conversation can’t work, and it can’t be called a conversation. If it’s a conversation without a relationship, the outcome will not be of high quality. As time goes on, more and more malicious data will be generated, and it may even replace the good data that humans have accumulated in the last century. Is that what we are trying to save through AI?

We have a vague idea that AI will give us what we want. What will we do with the problems that arise from it, such as the massive amounts of data produced by humans who think they know everything and by the machines that serve them? Will we be able to resolve the side effects of the resulting chaos? Without asking questions about these problems, without checking even once, simply because it is a new technology, simply because we are curious, will we again share our space and time with AI, repeating the trial and error and wasting the time and effort we have spent over the past 30 years?

What we can expect, at least based on the information about AI so far, is that AI will generate more big data and more people will talk to it. Can we be sure that our lives will be filled with valuable data as a result? Let us at least ask the minimum questions so that AI does not ravage our lives before bad data spreads, and so that we can then say “Don’t worry, be happy!” as if nothing had happened.

Embrace new technology, but at least ask questions, so you are not caught off guard when you encounter something unexpected. If you do not ask questions, you are bound to be exposed to the very things you fear, such as malicious data that feeds fantasies, or fake news. This is the reality we face today, in which we must deal with unfortunate events that breed social pathologies and harm communities. I am afraid such a situation will repeat itself, and that the tragedy of chaos will not be long in coming.

We have crossed many rivers of illusion: seeing through the illusion of Windows’ virtual world, escaping the lure of new media, waking from other fantasies. No one dreams of being a Transformer anymore. No engineer designs an all-around AI. And yet ChatGPT comes along and makes me expect that it will fulfill exactly what I want.

The age of search gave me the same expectation at first, but I now realize it did not simply pick out what I wanted. ChatGPT may likewise disappoint humans and get kicked out. This cycle of expectation and disappointment is human-made. There will always be a gap between expectations and reality, and tragedy still starts there.

The problem is not the new technology. Just as search was not the problem, ChatGPT is not the problem. The problem is the result of missing what it means to be human. What AI is being asked to do now is to establish a relationship with humans. It needs to be able to decide what to create and what not to create in its relationship with me. This is an even bigger challenge than the past 60-year dark age of AI.

--

Choi Han Kyum

My writing is about humans and how to put them at the center, especially in the age of AI that will erode humanity. After all, it is coding your space and time.