We often hear managers asking, "Is AI (Artificial Intelligence) really useful for practicing digital management?" Such reactions can be read as a sign that the excessive expectations surrounding AI have settled down and that people can now examine the technology calmly. However, it would be unfortunate if this "backlash" were to nip future possibilities in the bud.

In this article, we introduce a conversation between Youichiro Miyake, game AI developer and Director and CTO of SQUARE ENIX AI & ARTS Alchemy Co., Ltd., and Masayuki Chatani of KPMG Ignition Tokyo. After unraveling the current state of AI utilization, the two discuss what kinds of changes will enable AI and humans to build society together going forward, while envisioning, and freely imagining, the future.

The Game Industry in the 2000s Realized that AI Was Underused


(Masayuki Chatani, Representative Director & CEO of KPMG Ignition Tokyo and CDO of KPMG Japan (left); Youichiro Miyake, Director / CTO of SQUARE ENIX AI & ARTS Alchemy Co., Ltd. (right)) *Professional affiliations and official positions in this article are as of the time of the interview.

Chatani: You're an expert in game AI and also an active AI researcher. I expect this to be quite a fruitful dialogue, as you are involved not only in the artificial intelligence used in games but also in profound themes such as the philosophy of artificial intelligence.

What made you enter the game industry in the first place?

Miyake: I was already doing research on artificial intelligence in college, but I remember that around the year 2000, artificial intelligence was not booming as it is today and research on it was still rather fragmented. When I imagined, "What if I were to create an artificial intelligence as a whole entity?", I realized that creating the characters that live in games might come close to that, and I began to think that I would like to create AI in games. That is how I got into this industry.

Chatani: However, around 2000 it was rare for the game industry to work on AI in earnest, and I think the term "game AI" was still not that well known. Were you able to do the job you wanted to do as soon as you entered the industry?

Miyake: What I found out when I entered the game industry was that what was being called "AI" was not really AI.

The term "AI" was often used in advertising. However, what was playing the role of AI was usually something that automatically selected an option, either at random or according to fixed rules, which is hardly an AI-like algorithm. Also, when I first entered the industry, even behavior that looked like AI at first glance, such as characters avoiding obstacles, was in fact data-driven; in other words, "what seemed to be AI had been prepared in advance."

Once things become that data-driven, some of it could hardly be called AI at all. So I was baffled. This was back in 2004, when I entered the game industry.

Turning Point of Game AI

Chatani: You have been building the foundation of game AI since the time when "AI was not used that much." What steps have you taken to get to where you are today?

Miyake: The period around the year 2000 can be considered a turning point for game AI. At the time, universities in the U.S. such as MIT and Stanford and the game industry began to make contact under a "let's use AI" movement. As the information-related faculties that had been very popular in the late 1990s started to lose that popularity entering the 2000s, there was a movement in the U.S. to connect game departments and information departments.

The game industry and universities became connected in many ways, and among the emblematic titles that can be cited are "HALO" and "F.E.A.R." Beyond these titles, people such as doctoral students in AI took on the challenge of designing AI and incorporating it into games, and through this process robotics-style AI was gradually introduced into the game industry.


As you know, Japan was at the pinnacle of the global game industry while this movement was taking place in the U.S. The market share of Sony's "PlayStation" was overwhelming, and if you looked at the market alone, Japan was winning. On the other hand, we can now look back and say that the U.S. was absorbing and evolving various technologies while still groping in the dark.

Chatani: So there were such differences when comparing the development of AI and games in Japan and the U.S.

Miyake: Japan has more of a "make-it-yourself" culture, so each manufacturer added the gimmicks it found interesting and created what you might call "amusement park"-style games to provide entertainment. Overseas manufacturers, on the other hand, gradually began to evolve toward realism-oriented physics and AI simulations by incorporating simulation elements.

Games that could be called "a mass of simulation" began to appear overseas around the year 2000 and kept growing in step with the computational performance of game consoles, with AI playing one of the key roles.

I was aware of these trends from a relatively early stage. However, I didn't think they would mesh well with Japanese games... At the time, when I tried to incorporate AI into pathfinding, for instance, searching out where players and characters can stand and move, I was met with responses such as, "Why are you doing that?"

Until then, the "common sense" in Japanese game development had been to simply hand-place all the points (coordinates) that characters move through. So even when I said, "It's hard to place every point on a big map, so let's introduce an algorithm," I was told, "It will be a problem if it becomes impossible to debug (find and fix program bugs)."
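As a rough illustration of the kind of algorithm being discussed, here is a minimal pathfinding sketch in Python: A* search over a small grid, so a character can find a route without every waypoint being placed by hand. The grid, function names and unit costs are illustrative assumptions, not code from any actual title.

```python
import heapq
from itertools import count

def astar(start, goal, neighbors, cost, heuristic):
    """Minimal A* search: returns a list of waypoints from start to goal,
    or None if the goal is unreachable."""
    tie = count()  # tie-breaker so the heap never has to compare nodes or parents
    frontier = [(heuristic(start, goal), 0, next(tie), start, None)]
    came_from, best_cost = {}, {start: 0}
    while frontier:
        _, g, _, node, parent = heapq.heappop(frontier)
        if node in came_from:          # already settled via a better path
            continue
        came_from[node] = parent
        if node == goal:               # walk back through parents to build the path
            path = [node]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for nxt in neighbors(node):
            new_g = g + cost(node, nxt)
            if new_g < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_g
                heapq.heappush(frontier, (new_g + heuristic(nxt, goal), new_g, next(tie), nxt, node))
    return None

# A tiny walkable map: '.' is open floor, '#' is an obstacle.
GRID = ["....#...",
        ".##.#.#.",
        "....#.#.",
        ".####.#.",
        "........"]

def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= ny < len(GRID) and 0 <= nx < len(GRID[0]) and GRID[ny][nx] == ".":
            yield (nx, ny)

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
print(astar((0, 0), (7, 0), grid_neighbors, lambda a, b: 1, manhattan))
```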

Fortunately, however, I was given the task of designing the AI for a new game in my second year at the company, which gave me the opportunity to incorporate the basic mechanisms that control an entire game, such as character AI, navigation AI and meta AI. We received a good response when we presented it at CEDEC (Computer Entertainment Developers Conference), a conference for the game industry.

From then on, we began to develop and present various AI technologies for the game industry as a whole. We continued these activities steadily, and by around 2010 the basic technologies of today's game AI were in place.

From there, we entered the age of "how deeply can we build it?", and game AI was increasingly incorporated into game engines. It was around this time that I moved to Square Enix, where game AI was built into "FINAL FANTASY XV" and actually used in game content.

Then came the trend of deep learning. While the previous game AI was more of a symbolic AI, neural network-based AI was on the rise now. Though symbolic and neural network systems are really like "water and oil," they can be nested or combined in design. We are now at the dawn of the third generation of game AI, which is undergoing even more exciting changes. You could say that we are still exploring.

Chatani: We may need to explain a bit about symbolic AI and neural network-based AI. Symbolic AI represents things as symbols and then performs inference and calculation on those symbols to solve problems; it assumes that knowledge and events have already been expressed as symbols it can use. Neural network-based AI, on the other hand, mimics the structure of the brain: rather than storing a piece of knowledge as an explicit symbol, it spreads it across the connections shaped by repeated learning.

Miyake: Symbolic AI is, as you say, symbolic: it defines symbols based on rules or simple logic and manipulates them, so it's easy to customize. For example, you can say, "if you don't like this logic, rewrite the rules from scratch."

However, that kind of "rewriting from scratch" is not possible with neural network-based AI. That's why, for a long time in the game industry, textbooks treated neural network-based AI as a kind of black-box technology. I guess the thinking was that it should not be used because it couldn't be customized.

Indeed, there is a concern that AI the game developer cannot control will not mesh well with the game design. Recently, however, we have been moving in the direction of "even if it doesn't mesh well, let's get over it somehow," for example by adding rule-based processing to handle unintended outputs. Neural network-based AI arrived just as we were relaxing in the "somehow completed paradise" of symbolic AI, so it feels like a challenge to break down and fuse everything we have built so far into another big idea or theory.
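As a hypothetical sketch of what "adding rule-based processing to handle unintended outputs" can look like, the Python below wraps a stand-in neural policy in a symbolic guard layer that can veto or override its suggestions. The action names, thresholds and the random stand-in for a trained network are illustrative assumptions, not the approach of any particular title.

```python
import random

# Stand-in for a trained network's action scores; in a real game these would
# come from an inference runtime, not from random numbers.
def neural_policy(observation):
    actions = ["attack", "flee", "heal", "taunt"]
    return {a: random.random() for a in actions}

def rule_based_guard(observation, scores):
    """Symbolic layer: veto or override neural suggestions that would break
    the game design. The rules and fields below are illustrative assumptions."""
    if observation["own_hp"] < 0.2:
        return "heal"                                            # hard rule overrides the network
    if observation["target_is_ally"]:
        scores = {a: s for a, s in scores.items() if a != "attack"}  # veto one action
    return max(scores, key=scores.get)                           # otherwise trust the network

observation = {"own_hp": 0.8, "target_is_ally": True}
print(rule_based_guard(observation, neural_policy(observation)))
```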

Chatani: Are hybrids of symbolic AI and neural network-based AI already being used in games?

Miyake: They are starting to be used in the development process, for instance for automatic debugging and cheat detection (detecting cheating, such as cheats that exploit bugs). However, only a handful of titles use them in real time. There are many reasons for this, but because such AI can't be debugged, it's impossible to deal with a severe bug that may occur once in tens of thousands of runs, which leads to the question, "Who is going to provide compensation?" That is why it tends to be used as an aid in development instead, for example to automatically generate a map, or to learn the geography of some location and create a similar city.

Chatani: These issues are similar in difficulty to deep learning-based autonomous driving.

Miyake: I agree. As for the game industry, I believe AI will definitely be installed in game consoles before long. There will probably be a process of storing user data on a server and using that data to train the system, but at first only the inference part will ship on the product. Various companies are quietly developing ways to run trained neural networks beneath the surface, and it's a little eerie that none of it has surfaced yet. I expect such products to appear en masse in the fairly near future.

Why AI and Philosophy Were Linked


Chatani: As we have been talking about AI technology and the game industry quite a bit, let's deepen our understanding of AI by getting a hint from your book.

You have written a trilogy, "School of Philosophy for Artificial Intelligence" (published by BNN). It's a huge piece of work comprising "School of Philosophy for Artificial Intelligence," "School of Philosophy for Artificial Intelligence: Oriental Philosophy," and "School of Philosophy for Artificial Intelligence: Future Society - Resonating Society, Others and Self." What led you to believe that AI is linked with philosophy?

Miyake: There are two reasons. I was born in 1975, and the 1980s, 1985 in particular, were the years of the second AI boom. As a child I liked going to bookstores, and I remember seeing many books about AI and philosophy there. I often read such books and thought, "AI and philosophy are close." However, the second boom was over before I entered college....

Chatani: I guess the world's interest in rule-based AI had settled down.

Miyake: That’s right. However, based on the experience I just shared with you, I always thought that “the two were probably close in nature.”

The next opportunity came after I entered the game industry. One of the interesting aspects of the game industry is the interaction between humans and characters. Since characters are created for humans, it’s impossible to create AI (that is useful to humans) without knowing a lot about humans. We need to have AI predict, "The user will probably stand in this place, so if this kind of attack can be launched...," and behave in a way that will make the user happy. For example, AI continually calculates the predicted paths of users during critical phases. It may also estimate the user's level of tension.

As we continue doing this, it starts to feel as if we are researching humans even though we are creating AI. Humans enter the game world and move as characters, while the AI also moves with a body in the game world, and the two interact. I then began to want to make the inner workings of game AI resemble those of a human player. Though this is a difficult task, I wanted to create a human-like AI, and to do so, we need to know "what a human being is."

Once I reached that point, I began to realize that although I was making AI in the middle of the commercial world, I was thinking about very philosophical issues all the time. I began to consider the questions of what the body is to humans or what the environment is to humans, and moreover, what the body and environment are to monsters. When I thought about it in this way, I realized anew that it’s close to philosophy. These ideas in my mind are like "scaffolding" on a construction site.

In the meantime, the third boom arrived. In its early days, however, there was not as much talk of philosophy as during the second boom, and I began to want to say to the world, "given this great wave of AI, here is how I think about the philosophy behind it." That is why I published "School of Philosophy for Artificial Intelligence." At first, I was worried that I would be roundly criticized both by artificial intelligence researchers and by those exploring philosophy.

Artificial Intelligence and Humans Can "Understand Each Other" But...

Chatani: In "School of Philosophy for Artificial Intelligence," you discuss European philosophers such as Husserl and Descartes, and in "School of Philosophy for Artificial Intelligence: Oriental Philosophy," you discuss Buddhist thought such as arayashiki (the "storehouse consciousness") and Zen, as well as the philosophies of Zhuangzi, Dogen, Ryuju (Nagarjuna), and Toshihiko Izutsu. Some very interesting questions are also developed in the third book, "School of Philosophy for Artificial Intelligence: Future Society - Resonating Society, Others and Self."

I imagine that your thoughts have deepened since writing the book. For instance, what is your thought now on the question "Can humans and artificial intelligence understand each other?"

Miyake: I have come to a conclusion of sorts, but first of all, what does it mean to "understand"? Even between human beings, I think there are cases where we believe we understand each other but actually don't, and conversely, cases where we believe we don't understand each other but actually do.

For example, I hear that it’s not uncommon to find tennis doubles players who play well together on the court but don’t exchange a word the moment they leave the court. It seems that there are also comedy duos who perform with perfect timing but are always fighting when they get off the stage.

In this way, I think that there’s more to "understanding" than just knowing each other theoretically and that if we’re able to collaborate successfully in a certain environment, we can say that we "understand" each other.

Let’s apply this concept to the relationship with artificial intelligence. For instance, if a human says, "Curry is delicious, isn't it?" artificial intelligence cannot know that "curry is delicious" because it cannot sense taste. However, when a human says so and the cooking AI offers a second serving, I think this can be called "understanding."

In other words, I believe that if humans and AI are able to cooperate on the spot even if the AI doesn't know if the curry tastes good or not, we can say that we "understand" each other.

Thinking in this way, I began to believe that humans and AI can understand each other, in the sense of aiming to cooperate and joining their respective strengths toward something.

As I mentioned in my book, I once had a conversation with Asa Ito, who told me that even if you run a marathon together with a partner, you don't really know what your partner is thinking, but if you run together, tied together by a rope, you can tell whether the other person is tired or wants to rest. My conclusion is that if there’s such "a way of understanding," humans and AI should also be able to understand each other in the same way.

Chatani: I guess that means that if we think a little more about the definition of "understanding each other," we can find various answers.

A Future in which Humans Visit a Society Built by Artificial Intelligence


Chatani: Now, please tell us about how artificial intelligence will build society. This is also an interesting topic but some may be startled at the very idea of "artificial intelligence building its own society."

Miyake: Well, I don't feel very comfortable with the idea of "artificial intelligence for humans." I think of AI as something independent and artificial-life-like, capable of forming a society with a culture of its own. There are probably many people in my generation who prefer this way of thinking.

In other words, I think that AI is an independent creature and, while it’s considered important for the AI itself to enter human society, it should probably be possible for the AI to have its own culture and society. I believe that since worlds such as the metaverse and open worlds are self-sustaining, AI should rather be able to live in these worlds.

In that case, what can AIs do as a group of artificial life forms? I believe that they will be able to pass on cultures. I also think that mechanisms for transmitting knowledge, organizing society and conveying memes are feasible, and that it’s not surprising if an autonomous artificially intelligent society is created right next to human society.

In fact, I would say that it's worth creating one. Since not only humans have a society but insects, birds and other forms of life also have their own societies, it would not be surprising for AI to have a society. I believe that this will allow each artificial intelligence to develop and that their respective societies will be very interesting for humans as well.

Historically, artificial intelligence began by imitating the abstract intelligence of humans; this is written into the proposal for the Dartmouth workshop. That is why the debate so readily turns to whether AI will or will not surpass human beings. The theme of artificial life, on the other hand, is closer to the bodies of non-human organisms. The distinction between the two was made in the first place because of academic boundaries. Essentially, I believe the goal is to create artificial life forms that carry artificial intelligence. Such artificial intelligences, which amount to artificial life forms, are expected to become a major element of the metaverse in the future; one of the attractions of the metaverse is the activity of such artificial life forms.

Chatani: I see. You just mentioned the keyword "metaverse." Do you imagine that there will be a metaverse in which AI is the inhabitant (resident)? It seems to me that humans might occasionally visit it.

Miyake: Yes, yes, that's the idea (laughs). This would then lead to such things occurring as "this song is popular among AIs right now," "what is the popular color in AI society this year?" or “the shape of the roof of a building created by an AI was something that we have never seen before.” Until now, it has been assumed that humans are the only bearers of culture but new art and culture may emerge when a society of artificial intelligence interacts with human society.

I believe that this will mean the creation of another new world. It may start out being created with things that are "borrowed" from humans but there is enough potential for new things to emerge, which I believe could be called a kind of art.

How Will Artificial Intelligence Form Culture?

Chatani: So, this leads to Chapter 3 "What kind of culture will be formed?” in "Part I [3rd Night] Culture of Artificial Intelligence Created Across Generations."

Miyake: I believe that creating culture is also related to lifespan. If I say something like this, you might think, "artificial intelligence is software, so it can't die." However, robots and drones can disappear in an instant if they crash or are damaged and short out, so artificial intelligence also has a lifespan. It then comes down to the question, "how will they pass on their knowledge?" It's not as simple as saying, "AI programs have memory, so they just have to transmit it." If there were 1,000 robots and every one of them started communicating with every other, the exchange would quickly overflow.

I find this idea quite interesting. I think that this is where culture lies. Even if they want to pass on all their memories, if all AIs communicated with each other, they would not be able to receive each other's messages. So, they would leave the best ones as "common" and put them in a place where everyone could access them. I think this is what culture is.


Chatani: In a way, it’s like a mechanism that enables decentralized autonomy.

Miyake: The recipient can then choose whether to pull from A or B. Here, I believe that neural networks are order-dependent, meaning that their internal properties change depending on the order in which they learn data. Learning Data A first and then moving on results in something completely different from learning Data B first and then moving on. We can say that this forms more and more individuality.

For example, just as someone who happens to be born in Hokkaido has a different way of thinking from someone who happens to be born in Hyogo Prefecture, the initial foundation will differ slightly if the order in which culture is installed differs, and beyond that they should gradually diverge further.

Until now, artificial intelligence knowledge has consisted of a series of symbols, and it’s easy to make artificial intelligence share these series of symbols as exactly the same knowledge. However, in the case of modern deep learning-type artificial intelligence, it’s like a "set of neural network topologies and weights" and it’s up to the artificial intelligence to decide which parts to install and how to install them, so I believe that there are many different ways to assemble artificial intelligence.
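A toy illustration of this order dependence, assuming a tiny two-layer network trained with plain online gradient descent (nothing from an actual game): the same two batches of data, learned in the two possible orders, leave the network with different final weights.

```python
import numpy as np

def train(batches, seed=0, lr=0.1, epochs=20):
    rng = np.random.default_rng(seed)
    w1, w2 = rng.normal(size=(1, 4)), rng.normal(size=(4, 1))   # tiny tanh network
    for _ in range(epochs):
        for x, y in batches:                  # batches are visited in the given order
            h = np.tanh(x @ w1)               # hidden activations
            err = h @ w2 - y                  # prediction error
            grad_w2 = h.T @ err / len(x)
            grad_w1 = x.T @ ((err @ w2.T) * (1 - h ** 2)) / len(x)
            w1 -= lr * grad_w1
            w2 -= lr * grad_w2
    return w1, w2

# Two small batches of the same underlying task (learning y = -x).
xA, yA = np.array([[-1.0], [-0.5]]), np.array([[1.0], [0.5]])
xB, yB = np.array([[0.5], [1.0]]), np.array([[-0.5], [-1.0]])

w1_ab, _ = train([(xA, yA), (xB, yB)])        # learn A first, then B
w1_ba, _ = train([(xB, yB), (xA, yA)])        # learn B first, then A
print(np.abs(w1_ab - w1_ba).max())            # nonzero: the two orderings end up different
```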

Chatani: The fact that artificial intelligence is also order-dependent with respect to knowledge and experience is very interesting. It's a perspective I hadn't considered, but you're quite right. I also think that while computers can reset themselves to zero, humans have a weakness in that they cannot easily unlearn or selectively forget knowledge or experience. This is an area where AI and computers can compensate.

Miyake: That's exactly what I think. Since humans are layered, we can't easily pull out the knowledge or culture that was put in at the bottom. Various layers are built up through learning, so I think cultural dependence and order dependence are always baked into the structure of intelligence; for instance, "if this is taken out, it actually affects other things as well."

Chatani: You mentioned the place of birth, but imprinting immediately after birth is also a frequent research topic, and imprinting seems to influence behavioral principles and priority judgments later in life.

Miyake: This kind of knowledge is neural network-type knowledge, so when we lose the lower layers, we lose track of many things.... During the first AI boom, there was talk of libraries and databases of symbolic objects, which later led to search engines. I feel that what we just talked about is exactly the way of thinking of the third AI boom. In the second boom, the focus shifted more strongly to connectionism and people attempted to compose connectionist libraries; that thread has been carried forward by today's neural network ideas, which is why the "dictionary" is already in a connected form.

To relate it to games: the neural network of a racing machine trained by one user could be downloaded by other users. That seems strange, doesn't it?

Chatani: Is that a kind of transplantation of trained brains and experiences?

Miyake: That's right. In fact, in the game "SAMURAI SPIRITS" (SNK Corporation), users can upload their play data to a server and play against an AI generated from that data. You could say that this server is exactly the kind of place where the culture I mentioned earlier is stored.

What Is Necessary for Artificial Intelligence and Humans to Love Each Other

Chatani: I would now like to ask you about a more profound theme, "Can people and artificial intelligence love each other?”

Miyake: First of all, "love" is an ambiguous word, so let me start with the question, "what is the dissatisfaction on the human side?" in today's contact between artificial intelligence and humans. I believe the biggest dissatisfaction is actually not that "the performance of AI is insufficient." Even if the responses and the flow of conversation are a little awkward, that's not a big deal. Rather, the biggest complaint is that "the AI does not change."

Humans change from the moment they meet each other, regardless of love or hate. However, AI is the same every day, yesterday and today. This makes us think that there’s no meaning in the AI meeting with us. From a human perspective, this is similar to saying that AI is rejecting us. Unless our existence can enter into the AI, we cannot say that we’re in a state of “being in love with each other.”

Humans, for their part, change through meeting an artificial intelligence, and they hope that, likewise, the AI will change in its inner self, in the deepest part of its being. If such a mechanism could be built into AI, I think we could reach a situation where we could say that "the robot loves the human." This conclusion is the same as in our conversation about "understanding."

Chatani: I see. However, computer software does not have plasticity, meaning that it doesn’t compromise with the other party, so we have no choice but to compromise from the human side. We’re forced to have rather computer-centric interactions. You’re absolutely right that there’s a challenge there. I feel that the sense of “not changing” is true of computer software in general.

Miyake: On the other hand, I think it's possible to "love something that changes." If the change is caused by human input, and such interactions happen every day, a sense of responsibility, even guilt, may arise on the human side: "I did this to the core of the robot." If so, humans will accept AI on a different level than they do now, AI will accept humans, and a kind of deep interaction will probably arise between the two.

Chatani: Around when do you think such AI will be available to the world?

Miyake: I think it's basically possible to make it even now. But maybe there's no demand for it. People probably don't want their cleaning robot to suddenly change one day!

Happiness for Humans, Happiness for Artificial Intelligence

Chatani: You just mentioned that "humans don't want cleaning robots to suddenly change one day," but what do you think happiness for artificial intelligence is?

Miyake: That’s the most difficult topic.

First of all, I believe there are two forms of "happiness" for humans: human beings seek either change or constancy. In other words, they have the urge to stay in a constant state, with no lifespan and no changes, while also having the urge to keep on changing.

I guess human beings suffer because they desire such opposing poles of constancy and change. After all, we feel like taking on challenges when we can just sit still, but when we go through a lot of challenges, we start to feel like taking a break. I believe that these are the two forms of human happiness.

This brings to my mind the idea that there may be two forms of happiness for artificial intelligence as well. One is the urge for constancy, to keep itself in a certain state of being, and the other is the expectation for change, to keep changing itself, blend with the world and throw itself into the flow of the world.

It should not be that difficult to implement these two things in AI. Embedding these urges in artificial intelligence would make it happy if it achieves one of these two things. This is my theory of happiness for artificial intelligence.

Chatani: I see. Our conversation regarding future society really stimulated me and gave me many points to think about.

“Treasure” Indeed Lies Outside the Normal Distribution

Chatani: You also supervised the book "New Book of Anatomy on the Strongest Go AI, AlphaGo: Its Mechanism from the Perspectives of Deep Learning, Monte Carlo Tree Search and Reinforcement Learning" (written by Tomoshi Otsuki; published by Shoeisha), which I have read as well. I recently learned a lot of interesting things from an expert Shogi player (with a dan ranking), who told me that young Shogi players these days are getting stronger and stronger by playing against AI. It seems that AI is evolving, but humans are also definitely becoming stronger.

I also heard that AI, and young professional shogi players trained with AI, have begun to play moves that were previously considered bad or slightly old-fashioned, and that such moves are now being reevaluated. I think such "discoveries" are very interesting in business terms as well.

This means that the rise of AI in the world of Shogi and Go has revealed that while we were previously only able to read moves within a limited range, deeper data and perspectives are indicating that these moves may have actually been good moves. In the business world as well, deep simulations may reveal that things that are written in MBA textbooks as "not to be done" may actually not be so bad.

I’m thinking that it might be interesting to generalize AI that is used in games such as Go and Shogi and bring it into the business domain in order to make such discoveries. When I read your book, I felt that there are many possible hints for business. What are your thoughts on this?

Miyake: Before deep learning became popular, the "Monte Carlo tree search" method emerged around 2006. In the earlier plain "Monte Carlo simulations," we ran random playouts and looked at, for example, how many moves it took for white to win or lose after 100 trials.

"Monte Carlo tree search," on the other hand, incorporates two ideas: "focus only on wins and losses, not on the number of moves" and "allocate more simulations to branches where wins accumulate." What it is basically doing is random simulation, so no human knowledge is involved.

The reason this method works is that if humans were to do the random simulation, we would cut corners because it's so inefficient, skipping moves we judge on our own to be "no good." We have a habit of pruning away all sorts of moves based on our preconceptions and then searching only within what remains; we are bound by how much a human can think, so we have no choice but to prune branches. And just as you said earlier, it turned out that beyond those pruned branches there are in fact great moves that humans had not yet found.
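A minimal sketch of the idea, assuming a toy game of Nim rather than Go: random playouts decide only wins and losses, and UCB1 selection concentrates further simulations on branches that keep winning. This is not code from the book; it only illustrates the two principles mentioned above.

```python
import math, random

class Node:
    def __init__(self, stones, to_move, parent=None, move=None):
        self.stones, self.to_move = stones, to_move       # remaining stones, player 0 or 1
        self.parent, self.move = parent, move
        self.children, self.wins, self.visits = [], 0, 0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2, 3) if m <= self.stones and m not in tried]

def ucb1(child, parent_visits, c=1.4):
    # Balance the observed win rate against exploring less-visited branches.
    return child.wins / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def rollout(stones, to_move):
    # Purely random playout; only the final winner matters, not the number of moves.
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        to_move = 1 - to_move
    return 1 - to_move                                    # the player who just moved wins

def mcts(stones, to_move, iterations=2000):
    root = Node(stones, to_move)
    for _ in range(iterations):
        node = root
        # 1. Selection: follow UCB1 while the node is fully expanded.
        while not node.untried_moves() and node.children:
            node = max(node.children, key=lambda c: ucb1(c, node.visits))
        # 2. Expansion: add one untried move, if any remain.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            child = Node(node.stones - m, 1 - node.to_move, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation from the new state (wins and losses only, no human knowledge).
        winner = 1 - node.to_move if node.stones == 0 else rollout(node.stones, node.to_move)
        # 4. Backpropagation: a child's wins are counted from its parent's point of view.
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.to_move:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

# Nim with 10 stones, take 1-3 per turn, last stone wins; the search tends to
# settle on taking 2, the theoretically winning move.
print(mcts(10, 0))
```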

This means that after all, we can’t think about the issues unless they are taken down to the scale where humans can think about them. The reason why we can now understand these things is actually because AI has become stronger.

Chatani: Indeed, since humans can't think beyond the capacity of their minds and have no external memory to swap to, we have to cut problems down to a scale that fits in our "main memory." I felt that your story was about finding treasure among the discarded branches.

Miyake: This is exactly what is involved with complex systems where detailed parts determine the whole and, at the same time, the parts and the whole interact with each other. As we cannot see this by looking at the whole picture, it leads to a problem that people think they know everything from the top down.

Chatani: Complex, natural systems behave quite differently from a world described by normal distributions and other statistics, so the predictions they yield will also be completely different.

Miyake: Which means that what gets pushed outside the statistics is what becomes mainstream in the next generation. In fact, I would say that this happens quite often.

Chatani: Indeed, it's quite likely that there is meaning in the tail of the distribution, far from the mean. In reality, though, when we think in terms of normal-distribution statistics, we almost always truncate that tail, dismissing it as meaningless.

Miyake: Really, there are things that lie "many sigmas out," in other words far from the mean, that may be "super valuable if acquired." This connects to the topic of cultural inheritance we talked about earlier: if there is something the current generation does not have, the next generation can use it as their identity to "beat the generation before them."

This is similar to when generations of IT users and non-users emerged in the early days of IT, just when personal computers and the Internet began to appear. What happened after that, as you know, is actually a cultural revolution that has occurred many times in the history of mankind.

People who jumped into that technological revolution, who were considered outside the normal distribution in the previous era, have now become the mainstream population, the core and leading group of their generation. I guess that in Go as well, moves that once fell outside the statistics are now opening up enormous opportunities.

What Impact Will AI Have on Society in 30 or 50 Years?

Chatani: Looking at the relationship between Go, Shogi and AI, I feel that it suggests the changes that will take place in the business world and in real life going forward. In particular, I think that the technology used in games will eventually be generalized and spread in areas such as home appliances, automobiles and smartphones. In this sense, what you’re doing is really cutting-edge, and although the scope of application may not be wide, it’s at the forefront and I imagine it will be generalized in 5, 10, or 20 years.

Based on this point of view, what do you fantasize and envision about AI and its impact on society 30 or 50 years from now?

Miyake: From my point of view, I still believe that the essence of game AI is to understand humans. It’s not just about automatic translation or numerical calculation but also about understanding in which direction a person's consciousness is directed or vaguely guessing what humans are feeling, even though we don’t know their actual feelings. Beyond that, I believe that AI services in the future will require the ability to understand the inner feelings of a group or individual to some extent. This is because I believe that the first layer of AI is now in place on a global scale to some degree.


First there's the cloud, then there's deep learning, and there are services that generally compute on servers. And with the methods and infrastructure for such calculations in place, a variety of actual services should be developed in the future.

Then, I guess the one who captures the needs of consumers is the "winner.” In other words, the key to success is to understand users. Games are a perfect example. We wouldn’t have any trouble if we could clearly understand the needs of users, but even if we don't understand them, we have to guess and customize the game.

In the case of games, we have been creating one title and offering the same thing to as many people as possible until now. From now on, however, even if people buy the same package, the built-in AI has the ability to keep on changing the content such as changing the map, the story, the strength of enemies, the order of appearance or the music. You can even incorporate automatic elements such as automatically composed music.

This would make it possible to develop slightly different experiences even if the same game package is purchased. The game will adapt to the user to some extent instead of the user adapting to the game itself, or rather, both parties will come closer together. I think that unique experiences will be created in this way.

This kind of trend will definitely occur in games and even if you upload game commentary videos on YouTube, people will say, "Huh? I'm playing the same game but it's totally different.” I think that this kind of change will occur in most other industries as well. In education, for example, one student may say, "Hey, your practice question is different from mine” and the other will reply, "That’s because you already answered this question correctly and now the next one is coming up.” I imagine that such a dynamic recombination will occur more and more.

Chatani: Indeed, professional education has recently gone online, and we conversely hear concerns regarding the lack of real-world experience and the need for adaptive learning. As you just said, a dynamic recombination is likely to occur.

Miyake: That’s right. While there are already examples such as dynamic Web ads and products designed specifically for you, it should become possible to combine everything you want to combine going forward.

Autonomous Society Based on AI Should Bring Happiness to Humans in the Future


Chatani: Listening to what you said, some people may feel that “it’s far from what we commonly know as AI.”

Miyake: That may be so, but I believe that in a sense, fragments of the "future" are being created right now. For example, when it becomes possible to create deep fakes and imitate certain types of productions, the next step will be to create original works, which will then be passed on to the next generation. If we can do that, I think it would be the most valuable thing for games.

What I have been aiming for is to bring real AI and humans together in the virtual space of a game. However, I imagine that game-specific AI or AI that is only there for the game cannot have depth, so if we can create a city where such AIs exist, just going there would stimulate us greatly.

This is where the "agent" becomes important. An agent is an autonomous artificial intelligence with a role; in the metaverse, it's a character. If agents can interact with humans in natural language, receive commands and report back, it will become possible to carry out a variety of tasks using words alone, rather than operating a computer every time as we have in the past. A social mechanism built on agents like this is called "agent-oriented." Society will be structured through collaboration among agents and their cooperation with humans. This may become a quiet revolution, with agents at the center of society: a sustainable, autonomous, multi-agent society.

Chatani: It will be very interesting to see people who have been living in a society where "we can only roughly estimate how things will be" encounter a society created by completely different life forms.

Miyake: I agree. I feel that we need to break out of our human-centered society, or else people will grow tired and weary. Because human beings are responsible for everything, they cannot take a break from work, and they feel that if they stop, society will stop.... To give humans a break from such a situation, I believe that a certain degree of autonomous, AI-based society will make everyone happy.

Chatani: What will the nation look like when that happens?

Miyake: If AI gains autonomy, including over matters such as energy and over what AI itself creates, it will to some extent move beyond our control. Pessimists might say that this is a terrible thing, but I'm optimistic about it. How human-AI communication will solve that problem will be a rather interesting theme. As AI gains autonomy, it should become a part of the nation.

Nevertheless, for now we have no choice but to give commands to AI, and only engineers can give those commands. In the past, however, computers could only be used by engineers who knew Linux or Unix commands, whereas now almost anyone can use them. In the same way, I think that in 30 years' time virtually everyone will be able to create AI at the push of a button, and the interface problem will be solved. I believe it will become possible for a large AI to support a president or a prime minister in human society.

Chatani: Indeed, in today's world, where the number of parameters keeps growing, it's no longer possible for one person to cover everything, whether they are a manager or the leader of a country. Whether you're managing a digital business or running a nation, you need AI orchestration to support you. It would not be surprising if the worldview becomes something like an AI symphony orchestra, with AIs that are each good at their own area helping us in various ways.

Miyake: I think that's exactly what will happen. For example, one person could have a number of AI agents attached, so that when he or she asks, "Who is this person?", an AI that is knowledgeable about the XX industry and can make good comments at a meeting will provide the information. Since such a "place" already exists in digital form as a metaverse, AI and humans can communicate there together, and I think a world will be realized in which what is currently handled by a Google search is actively supported by even more advanced AI agents.

A society run by agents, which humans participate in and enrich, would be an ideal situation. It will realize a society where it is easy for humans to live, where "even if you take today off, about 20 percent of the work still gets done," because things that humans currently take on too much of will be handled by AI.

Profile of Interviewee


Youichiro Miyake
Director / CTO, SQUARE ENIX AI & ARTS Alchemy Co., Ltd.

Youichiro Miyake is a game AI researcher and developer. He majored in mathematics at Kyoto University, obtained a master’s degree in Physics at Osaka University, took the doctoral program at the School of Engineering, University of Tokyo (withdrew after acquiring all credits) and later completed his doctoral degree in engineering (University of Tokyo). Since 2004, he has been engaged in developing and researching artificial intelligence for digital games. He is a specially appointed professor at the Graduate School of Artificial Intelligence and Science, Rikkyo University, a visiting professor at Kyushu University, and a guest researcher at the University of Tokyo. He established and chairs the AI Expert Committee in the International Game Developers Association Japan and serves as a board member of the Digital Game Research Association Japan and a board member/senior editorial board member of the Japanese Society for Artificial Intelligence (JSAI). He received the 2020 JSAI Best Paper Award for his paper “Game AI General Theory and its Implementation in AAA Digital Game—A Case Study of AI System in FINAL FANTASY XV.”

He is the author of “New Book on Constructing Strategy Game AI” (Shoeisha), “School of Philosophy for Artificial Intelligence” (BNN Inc.; awarded the Genron Humanistic Award 2018), “How to Create Artificial Intelligence,” “Introduction to Game AI Technology” (Gijutsu-Hyoron Co., Ltd.), “When Artificial Intelligence Becomes ‘Life’” (PLANETS/Second Planet Development Committee), “Why Artificial Intelligence Can Talk with People” (Mynavi Publishing) and “AI meets Philosophy: How to design the Game AI” (iCardbook).
