Machine Learning, Virtual World and Metaverse

rct AI
Sep 28, 2020


I. Introduction: Similarities between reincarnation and Machine Learning

1. “Reincarnation” under religious systems

In religious systems of knowledge, people often talk about a term called “reincarnation.” It is the continuous cycle of birth and death, and the Hindus believe that the living beings in the cycle experience eternal suffering. In the Buddhist view, eternal life and immortality are still in the cycle of grief, and only the so-called “emptiness” or “no-self” can help us escape from the suffering of reincarnation.

The Sanskrit word for “reincarnation” is Saṃsāra, which is strictly a theory of thought. It holds that life takes on different faces and forms in a continuous process of “birth and death.” In the East, Hinduism, Buddhism, and Taoism all recognize this idea. In Europe, there are Greek philosophies of reincarnation, such as those of Pythagoras and Plato; as a religious experience, it is considered another reality of the world, or an extension of the world as perceived by the senses.

This is, of course, a description that leans towards the “religious” approach, as Buddhism holds that if one wants to confirm the existence of reincarnation through practical means, there are three ways: (1) after death, (2) after enlightenment, and (3) under particular circumstances. But from a practical point of view, none of these three ways can be experienced in the short term, so what if we try the path of logical reasoning?

The study of reincarnation in Buddhism itself can be summed up as the “law of cause and effect,” which simply means that the first plant seed and the first person in the world do not exist. If there were a first plant or a first person, it would violate the logical law of causal universality, because every seed is preceded by a seed and every person by a person. There can never be a “first” causeless seed or a parentless person.

At the same time, before the next step of reasoning, an a priori assumption is made that the essence of anything is “emptiness” or “no-self.” Thus the inference is drawn: the existence of anything and any life is a process of successive movement in “time,” or in “the law of development of all things,” and things and life itself form a “causal chain” of old and new. This causal chain cannot logically find its beginning or its end.

Individual things have a beginning and an end, but things as a whole have no beginning and no end. Likewise, a human life has a beginning and an end, but within the whole of human existence, this life is only a “life stage” that remains relatively stable throughout the life process. There have been countless such “life stages” in the past, and there will be countless more to go through in the future; this is the logical basis for the Buddhist theory of life continuation and reincarnation.

The reason for the introduction of “reincarnation” before we get into the main topic of this article is that we see some striking similarities between “reincarnation” and “machine learning.” It makes us wonder if there is some possible relationship between the concepts and perceptions of religion and the virtual world we are trying to create through AI technology.

  • The premise of “reincarnation”
  1. The existence of the law of causality (time is an illusion, there is no past, present, or future)
  2. A rule beyond “reincarnation” (the essence of all things)
  • The premise of “machine learning”
  1. The existence of causal laws (Markov chains, change of state as a probabilistic choice)
  2. Rules beyond “machine learning” (preconfigured settings and regulations)

In concrete terms, in the concept of “reincarnation,” each lifetime of human beings and living creatures is an experience. The accumulated “karma” of the previous life will point to a different beginning of the next one, based on karma rules. Simultaneously, the experience and knowledge of the previous life will be passed on to the next life in a certain way, but the detailed experience and memory of the previous life will not be inherited in its entirety. Logically, even if you inherit the experience and knowledge of the previous life, you still need certain conditions to obtain the “key” in this life.

Some creatures are born with the key but can never use it in the right way; some acquire it in the next life, and when the key is turned, they recover some or all of the information and knowledge from reincarnation.

2. Similar performance of machine learning

In machine learning, the actions and states of the previous step determine the machine’s judgment and decision-making at the next step. If the transitions between states follow a pattern, the machine can learn it, or even discover it on its own; such a process is called a Markov chain. It requires a “memory-free” property: the probability distribution of the next state is determined only by the current state, regardless of the events preceding it in the time series. This particular kind of “no-memory” is called the Markov property.

Simply put, your tomorrow is only related to your today and has no direct relationship to your past. It is related to your yesterday through your today.
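
The Markov property can be made concrete with a minimal sketch in Python (the states and probabilities here are invented for illustration): a toy weather chain in which the function that samples tomorrow’s state looks only at today’s.

```python
import random

# A toy two-state Markov chain (states and probabilities invented for
# illustration). The transition table is the entire model: sampling
# tomorrow's weather reads only today's state, never the history.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def next_state(current, rng):
    """Sample the next state using only the current state's row."""
    states = list(TRANSITIONS[current])
    weights = [TRANSITIONS[current][s] for s in states]
    return rng.choices(states, weights=weights)[0]

def simulate(start, days, seed=0):
    """Roll the chain forward; each step depends only on the last state."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(days):
        chain.append(next_state(chain[-1], rng))
    return chain

print(simulate("sunny", 7))
```

No matter how long the chain runs, `next_state` never consults anything older than the current state, which is exactly the “your tomorrow depends only on your today” intuition.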

Guided by the law of causality, every type of machine learning (supervised learning, unsupervised learning, reinforcement learning) faces constraints from settings and rules. And this kind of control is precisely what must be specified before computation can begin, so that the corresponding computational process can proceed within a given framework.

A classic scenario in reinforcement learning is that we construct the rules and the setting of a character in a scenario and define some goal, so that the character can be trained quickly and repeatedly within the framework of that goal and gradually learn and exhibit intelligence. It is very interesting to see how the character performs in basic pathfinding and survival scenarios.

Assuming we set up a character’s life cycle and rules for reproduction in this scenario, the machine-constructed “creature” can make decisions and take actions as it would in the real world, and accumulate learned experiences and abilities in the form of data to make decisions and take actions again in the next turn or round.

Of course, suppose we set a so-called “death condition,” the end of a cycle or round, and try to interpret the state of the machine creature from a human point of view. Then we have a “reincarnation” event: the machine creature acquires more and more knowledge and skills as it ends and begins again and again, and gradually becomes “intelligent.”
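
The loop described above can be sketched with plain tabular Q-learning (a standard textbook method, not the Chaos Box algorithm; the environment and all numbers are invented for illustration). Each episode is one “lifetime” that ends at the “death condition,” while the Q-table, the creature’s accumulated knowledge, persists across episodes.

```python
import random

# Minimal tabular Q-learning sketch: an agent walks a 1-D corridor of
# 5 cells and is rewarded for reaching the rightmost cell. Each episode
# is one "lifetime"; the Q-table survives from one lifetime to the next.
N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):            # each episode: birth -> death
        state = 0
        while state != N_STATES - 1:     # "death condition": reaching the goal
            if rng.random() < EPSILON:   # occasionally explore at random
                action = rng.choice(ACTIONS)
            else:                        # otherwise exploit learned knowledge
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt = min(max(state + action, 0), N_STATES - 1)
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (
                reward + GAMMA * best_next - q[(state, action)]
            )
            state = nxt
    return q

q = train(500)
# The greedy action learned for each non-terminal cell:
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```

After a few hundred “lifetimes” the greedy policy points right from every cell, even though no single lifetime ever sees the whole picture; the knowledge lives in the persistent Q-table, not in any one episode.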

For reinforcement learning, for such a creature to acquire “intelligence,” it would generally have to go through hundreds of thousands, millions, or even tens of millions of training runs and simulations. At the same time, we can also note that in traditional religious knowledge systems, on a macro level, if one wants to become a Buddha or reach the state of “no-self” and “emptiness,” one needs to go through hundreds of millions of reincarnations and experience the various states of existence.

In terms of relevance, religion and reinforcement learning are far apart, yet they seem inextricably linked. The scenes and worlds described in religion are descriptions of a past human understanding and perception of something; that something may be a rule, or a transcendent understanding and perception. And this cognition may be a common endpoint that all humans eventually reach through different reasoning paths. Although across machine learning we still cannot truly understand the reasoning process of machine agents, on the more macroscopic question of how to make machines produce intelligence, computer science, like many fields of basic science, ultimately cannot bypass the questions religion raises.

We cannot draw any strict scientific conclusions about this, but machine learning provides us with a very feasible and reasonable way to construct a virtual world that resembles the real environment, allowing us to think and explore from a “freer” perspective.

II. Rules and Mechanism: Possibilities of Machine Learning

1. The carrier used to construct the “reality”

Before discussing machine learning, we need to define what kind of object can serve as the “carrier” that allows machine learning to simulate the way the real world works, as closely as possible, within such an environment or set of rules.

First of all, this “carrier” should be sustainable. The reason is simple: a non-sustainable “carrier” cannot support the machine’s continuous learning and decision-making, so the information or experience gained in a single round cannot be carried over. As a result, the data accumulation and iterative evolution required by machine learning cannot be achieved.

Secondly, this “carrier” should be stabilizable. A stable “carrier” can provide relatively unchanging constraints and conditions, providing a series of stable rules and settings for the machine’s continuous computation and learning iterations.

At the same time, this “carrier” should be replicable. In the development of science, the results of all scientific experiments must be replicable before they can be accepted as real results. A non-replicable “carrier” is not sufficient to allow the machine-constructed world to scale and generalize, and thus to follow continuous and consistent rules and settings.

In the current industrial landscape, only video games can meet the above conditions and serve as the “carrier.”

  1. Video games are sustainable. Throughout the life cycle of a game, it can be thought of as occurring continuously in a specific “code-space-time,” and except for a few games, the time scale in most game worlds is independent of the real-world time scale. In such a continuous environment, machines can obtain sufficient time scales to compute, learn, and iterate continuously.
  2. Video games are stabilizable. What makes video games unique is that each game is a virtual space-time built out of code by the creator. In this virtual time and space, the rules and settings of the world are all clearly defined by the creator, and the game itself will be run by the machine in strict accordance with the rules built by the code. At the other end of the spectrum, there are offline board games, TRPG games, etc. These gaming experiences are often characterized by more human intervention, resulting in a different experience, but they are not as inherently stable as video games generally are.
  3. Video games are replicable. There are several levels of meaning of replicability of games. The first is replicability within a game, and the second is replicability between games. Based on the continuity of games, the contents within a game, whether they are image-based scenes, characters, and animations, or logical dialogues, plots, and narratives, are carriers that can be replicated and scaled up. Also, between games, image-level objects such as environments, objects, physics rules, etc. from different games can be migrated repeatedly. Since rct launched the Chaos Box algorithm, characters, roles, and plot narratives from different games can be replicated at scale into other types of games, while carrying their memories, experiences, and knowledge that have been trained, learned, and iterated on in the corresponding game.

As our visual perception of the virtual world represented by games approaches the real world with advances and innovations in graphics technology, the visual information we acquire also needs to be organized in a correct and logical manner, one that allows us to understand the world and ourselves through a more immersive experience.

In the past, relying solely on traditional game AI techniques, such as decision trees and state machines, was not enough to meet the growing need for interaction on the logical side of games, let alone to organize information and generate logical structures in a way similar to how the human brain processes them. With the Chaos Box algorithm, we can now build a real “Simulator” on both the logic and graphics sides, the concept we often see in works like The Matrix.

From The Matrix

Simply put, if our computing power is powerful enough to simulate every tiny state parameter of the human world in extreme detail, can we create a virtual world that is almost identical to our real world? At the same time, many philosophers, physicists, and entrepreneurs (such as Elon Musk) believe that we are in a very powerful “computer simulation” and that the reality we experience is simply part of that program.

One explanation is that, in the world we live in, if our technology continues to evolve and we gradually come into contact with the “edges” of our world, threatening the world above us, then it is inevitable that the higher layers will destroy the lower layers out of necessity. Thus the only option for humans is to continue building downwards, and the virtual world will be nested in countless others. The reason we still exist at the moment is that the countless layers nested before us all chose to build their own simulators; if any layer in the middle had not done so, the system would have collapsed. So simulator nesting is almost inevitable.

At the same time, video games are almost the prototype of a perfect simulator. With the support of AI technology, we can start to construct virtual worlds from the logic and image side, and gradually build and improve the whole human cognitive system in the future, moving closer to a complete simulator.

When we focus on “video games” as the experimental carrier for the virtual world, we will also notice that traditional “game AI” still needs specific dialogues, behaviors, actions, etc., to be predetermined, adopting rules to make seemingly “intelligent” responses to players’ interactions.

2. Achieve real “reasonableness” in the virtuality

At the same time, Artificial Intelligence systems represented by neural networks have already shown concrete application prospects on the image side. Coupled with multi-layer neural networks, the application of supervised and unsupervised learning in the field of computer vision has penetrated all walks of life; but can supervised and unsupervised learning realize the simulation of the human logic processing system?

Supervised learning is the training of a machine under human guidance. Just like a student solving problems, it is assigned a standard answer. Every time it finishes a problem, it checks whether it is exactly right; if it is wrong, it rechecks to see where it went wrong, continually optimizing the way it solves problems and the way it thinks. For the machine, we let it train and learn by feeding it labeled data, and finally give it a new dataset to see if it can produce the corresponding outcome through the way of thinking it has “learned.”

Unsupervised learning doesn’t give the answer to the question at all, but lets the machine figure out what the answer “should” be by feeding it data, discovering on its own the characteristics of the problem and how it should be answered.
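
The contrast can be shown with a deliberately tiny sketch (all data and function names here are invented): a supervised learner fits one centroid per label from labeled points, while an unsupervised 2-means pass sees only the points and must discover the two groups itself.

```python
# Toy contrast between the two paradigms, in plain Python.
# 1-D points; labels exist, but the unsupervised learner never sees them.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
labels = ["low", "low", "low", "high", "high", "high"]

def fit_supervised(xs, ys):
    """Supervised: learn one centroid per label from (point, label) pairs."""
    centroids = {}
    for label in set(ys):
        members = [x for x, y in zip(xs, ys) if y == label]
        centroids[label] = sum(members) / len(members)
    return centroids

def predict(centroids, x):
    """Classify a new point by the nearest learned centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def kmeans_1d(xs, iters=10):
    """Unsupervised: 2-means sees only the points and finds two clusters.
    The clusters get arbitrary positions, not names -- no answer key."""
    c0, c1 = min(xs), max(xs)          # simple initialization
    for _ in range(iters):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return c0, c1

model = fit_supervised(points, labels)
print(predict(model, 1.1))   # a labeled answer: "low"
print(kmeans_1d(points))     # two discovered centers, no names attached
```

The supervised model returns a named answer because it was trained against an answer key; the clustering pass recovers the same two groups but can only report their positions, which mirrors the distinction drawn above.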

These two types of machine learning might seem to be far from the way humans grow up and learn. When people learn an object or a skill as children, they aren’t fed a lot of data or trained millions of times; we only learn in ways that seem “simple.” Our cognitive logic about the knowledge framework behind the world is another way that allows us to quickly organize the information we acquire.

But we see some hope in reinforcement learning. It doesn’t require feeding data; at the same time, just as in the way we acquire knowledge, we are told what the rules of an object or a task are, and what we can and can’t do in a specific situation. Then we practice to get feedback and adjust our corresponding knowledge ourselves.

The one uncertainty about reinforcement learning, and the place where great hopes for further breakthroughs rest, is that its process is very much a black box, still far from our natural human logical reasoning. But given our inability to clearly define that reasoning ourselves, reinforcement learning may be humanity’s best hope.

In accordance with the analytical framework described before, there have also been attempts with some fantastic results in reinforcement learning on the logical side, such as AlphaGo Zero by DeepMind: a learning framework with a single goal, in which the AI learns to surpass the “intelligence” of human players by playing against itself. However, for the “multi-agent, multi-objective” scenarios that are closer to the real world, a complete framework for reinforcement learning did not exist until the birth of the Chaos Box algorithm.

Video games, as possibilities for scenarios and virtual worlds, naturally have many classifications and forms of gameplay that map as closely as possible to what humans do in the real world, thus addressing different human needs:

For example, by creating familiar social scenes, games that focus on the communal experience satisfy people’s need to communicate or confide in each other. Other games tap into people’s desire to win, lose, and compare, naturally bringing the competitiveness and competitive scenes of the real world into the virtual world to satisfy people’s vanity. The objects and forms of competition vary: competition for survival, for victory, for limited supplies, and so on.

From World of Warcraft

Another type of game expresses the desire for another identity and other abilities, letting people become someone else and experience a different “life” in the virtual world. This type of game is usually accompanied by the freedom to explore the virtual world and to experience and feel its different scenarios as if they were real. Just as when we meet people in the real world and sit down to chat, we don’t script in advance what we’re going to say to each other; the manner and content of our communication are based on each person’s personality traits, background knowledge, and so on.

From The Sims 4

Reinforcement learning allows us to see the possibility of shaping a complete set of logical frameworks and leveraging the information we get from the visual and graphical level, increasing the efficiency of our access to information to better “be” the other-self in the virtual world.

When we talk about whether AI can help us create another self, religious knowledge systems offer this description: they hold that “not only does time not exist, but neither does the self,” or that “the self and the whole world are one, and the self in the world and every object in the world is also another self.” This state is known as “no-self,” and such thinking is essentially a cognitive exploration of the “essence” of the world.

After decades of research and development, biotechnology and neuroscience have revealed that areas of the human brain are modular, with different modules responsible for processing different types of information. In games, machine learning is more suitable for “modular” character creation than traditional decision trees and state machine mechanisms.

For example, the Chaos Box algorithm, based on reinforcement learning, builds the brains of different characters, giving them instructions that allow them to think and learn for themselves under different environments, conditions, and rules in the virtual world. Beyond this command center, a complete “character” still requires algorithms for emotional, expressive language and even dramatic text-to-speech generation; in addition, body movements under the control of the brain’s instructions need further implementation techniques to render dynamic and natural animations in the virtual world.

EEG powered by BCILAB

In addition, a modular brain-control system is more representative of the rational decision-making mechanism of intelligent organisms. In the 18th century, Hume argued that reason is the slave of the passions: rationality alone can never be the motivation for any volitional action. Rationality can only play a role in the decision-making process by influencing the ultimate motivating factor of “feeling.”

From a rational point of view, machines can follow strictly logical judgments and feedback on problems, but are they able to feel emotions and cognitions such as pleasure, pain, curiosity, etc., at the “feeling” level, acting as a basis for rational judgment?

3. Humans are high-dimensional species compared to virtual creatures

Human intelligence can be divided into two modes, physical and chemical, corresponding to the difference between the rational and the emotional. Behind rational intelligence is humans’ rigorous logical deduction, and although human logical deduction is actually based on a language system, all of our seemingly free-will decisions have a reproducible logical deduction process behind them.

When people are led to place an order with a few simple words by top livestream hosts on TikTok, it is actually the result of a logical deduction process that most people have mastered with precision. The video game is a very appropriate experimental scenario for us precisely because it has highly structured data and logical deduction processes.

The biggest difference between humans and all other living creatures, including those we’ve built in the virtual world, is that we have an emotional system based on a set of hormones and other chemicals. People can feel emotions such as happiness, sadness, and anger, and behind them is a complex set of chemicals acting on our brains. These emotions are an important part of human intelligence. However, we cannot implement this system in a computer built on zeroes and ones, so what does sensibility mean to a creature in a virtual world?

In the scenario of a tiny virtual world built on the Chaos Box algorithm, we will repeatedly simulate millions of different plots during the algorithm training process. Since the intelligence’s understanding and perception of the virtual system is entirely data-based, each scenario may take only a few hundred milliseconds to simulate. We can therefore assume that the time scale in this virtual space-time is independent of our real world’s.

The time and world that these virtual creatures perceive is the world that they see. We look at these creatures the same way other higher-level species might look at us from a higher-dimensional universe. What we see may be output data, a graphical interface, or an experience in virtual reality, but in reality these are just higher-dimensional expressions of the world as perceived by these virtual creatures.

To continue from this perspective: as there is no chemistry in this data-based virtual world, all logic and perception are made up of 0s and 1s. In effect, then, what we understand as the sensual decisions of virtual beings are understood through what we ourselves perceive as sensual; if we were “them,” then what counts as sensual would be based on the boundaries of the world we are in, i.e., on the perception of the data.

From Love, Death & Robots

On the other hand, even in the real world where we are, many, many new discoveries are constantly emerging that cannot be explained by basic science, and scientists are doing nothing more than continually trying to explain these new discoveries using existing or newly created theories. The basic science system now goes deeper into quantum mechanics, which is typically a fuzzy area near the borders of the world.

Perhaps by breaking through this zone, we will discover the higher dimensions of the universe, standing to them as these virtual creatures we have created stand to us. But perhaps humans will never be able to break through the boundary of the world that is quantum mechanics. And even if we become capable of doing so, the whole meaning of the world will, as stated earlier, be overturned, and all perceptions will be rewritten. We try to reach the results we expect from our outward quest by exploring inward, by going in the opposite direction.

So, when we look at the virtual creatures we have trained to simulate various scenarios in a virtual setting, we try to understand them from our perspective, just as a possible creator might observe us in a higher-dimensional form of display. But, just as when humans from different language systems encounter one another, these “human-like” virtual creatures already have a preliminary, simple human intelligence, an intelligence that we more likely need to understand and perceive from “their” perspective.

Of course, these virtual creatures exist to give the player a more interesting and open experience of the virtual world. So it does not occur to them that what they “think” of as a world is actually carefully designed and served by higher-dimensional intelligences. They will talk to the player, try to ally with the player, perhaps try to take advantage of the player, but they cannot possibly know that we exist beyond the 0s and 1s. And have we humans ever wondered what everyone’s hurried, hard-working life is all about?

III. Connection and Transcendence: Emergence and Future of Metaverse

1. Definition and description of Metaverse

The concept of the Metaverse first came from Neal Stephenson’s book Snow Crash, which coined the term and described a “meta-universe.” In fact, since the late 1970s and early 1980s, many people in technology have envisioned it as belonging to the future. The term Metaverse combines Meta, meaning transcendent, and Verse, from universe; put together, the two parts mean “beyond the universe.”

This concept points to the long-term goal of human development, where we can create our own universe, which will run parallel to the real world and become an artificial dimensional space. It is believed that the next stage of the Internet will be a virtual world supported by logical and graphical technologies and various terminal hardware.

We can roughly summarize some characteristics of the Metaverse: the Metaverse would be an always-online virtual world with an infinite number of people who could participate in it simultaneously. It would also have complete economic systems running uninterruptedly and span both the real and digital worlds. At the same time, any image, content, wealth, etc., based on data and information could circulate in the Metaverse. Many people and companies would create content, stores, and experiences to make it more prosperous.

From Gfycat.com

The consensus is that the Metaverse will not appear overnight, nor will it be built and run by just one company. Like the real world, the Metaverse will be implemented by a very large number of companies, organizations, individuals, and so on, and supported by many independent tools, platforms, infrastructure, standards, and protocols.

Many commentaries regard the Metaverse as very similar to “games” under the current definition, as games also seem to be the closest form to the Metaverse in the contemporary digital realm. But if we look at this new species from a dynamic perspective, we will see that the reason we think of the Metaverse as games is that we are using our current understanding to interpret future forms.

2. Features and appearances of Metaverse

In fact, we believe that the Metaverse began to develop from the first day computers came out, and that versions of the Metaverse have evolved and iterated as technology has advanced and applications have increased. It is important to note that at present, human society as a whole is not yet at what might be called the Metaverse stage, but we do “seem” to be accumulating and evolving in this direction.

Before describing the stages of Metaverse development, we need to explain the relationship between what is represented by the terms “game”, “virtual world” and “Metaverse”. In general, we believe that:

  1. “Virtual world” is defined in opposition to the “real world”.
  2. “Metaverse” is a generic term for “connectable information” under the broad category of “virtual worlds”;
  3. In a broader sense, a “Metaverse” is a collection of information itself, forms of interaction, and processes of interaction with each participating subject within a “virtual world”.
  4. “Games” are the most direct carrier for our perception and interaction with the “virtual world” at this stage;
  5. In a broader sense, socializing, paying bills, shopping, etc., in “virtual worlds” take the form of “games”.
  6. An individual “virtual world” or an entire collection is not a “Metaverse” if there is no interflow between multiple “virtual worlds”.

It’s important to clarify that the inability of multiple “virtual worlds” to circulate with each other means that they cannot be accessed with a single, one-pass-like identity across “virtual worlds,” “games,” “social,” “e-commerce,” etc. and that they cannot be traded with a unified or consistent economic system.

Specifically, Epic says it wants to build a Metaverse because it owns the Unreal Engine, the underlying engine that drives games, and also has the Epic Games Store as a trading hub that fits with the rest of the digital infrastructure. In fact, Steam could be considered an early stage of the Metaverse. But at the moment, this kind of economic trading is rudimentary: a superficial unified account for purchases, not a deeply connected in-game economic trading system. Of course, it wouldn’t be hard to do, as long as each game’s in-game trading mechanism were linked to the platform’s trading system.

At the same time, given the effects and implications of “time” (which we tentatively assume to exist), the time scale in a “virtual world” is arbitrary. Still, in a connected “Metaverse,” the time scales need to be consistent, though not necessarily the same as the real world’s. Consistency here does not mean that all time within every “virtual world” or “game” needs to be the same, but that there needs to be a mechanism for “connecting” them, making them work together like a mesh of different gears.
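
As a purely hypothetical illustration of such a “gear” mechanism (nothing here is an existing protocol; world names and ratios are invented), each world could carry a conversion ratio to one shared clock, so that events from different worlds can be ordered consistently:

```python
from fractions import Fraction

# Hypothetical "gear" table: each virtual world runs on its own time
# scale, but a fixed ratio relates it to a shared Metaverse clock.
WORLD_RATIOS = {
    "fast_world": Fraction(10, 1),  # 10 world ticks per metaverse tick
    "slow_world": Fraction(1, 4),   # 1 world tick per 4 metaverse ticks
}

def to_metaverse_time(world, local_tick):
    """Convert a world-local tick count to the shared clock."""
    return Fraction(local_tick) / WORLD_RATIOS[world]

# Two events from different worlds, compared on the common clock.
e1 = to_metaverse_time("fast_world", 25)   # 25 / 10 = 2.5 metaverse ticks
e2 = to_metaverse_time("slow_world", 1)    # 1 * 4  = 4 metaverse ticks
print(e1 < e2)
```

Exact rational ratios (rather than floats) keep the ordering of events unambiguous no matter how mismatched the gears are, which is all “consistency” requires here.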

In this way, based on a fully-developed Metaverse, if we can upload our consciousness in the future, we will be able to live, socialize, experience different lives, and so on in the Metaverse; or, if we can’t upload our consciousness yet and can only live in the virtual world in one direction, human bodies in the real world will be able to live in the Metaverse as long as they can sustain life.

From the current point of view, social platforms such as WeChat and Facebook have built a virtual social world, and e-commerce platforms such as Taobao and Amazon have built a virtual shopping world. In fact, individuals and organizations in the real world are already acting as players in this Metaverse. Various “virtual worlds” are connected to create a larger one in which they can live and thrive. That is why the Metaverse is not created by a single company.

The difference between a “Metaverse” and a “virtual world” is connectivity, and everyone’s identity is critical to that connectivity. A collection of “virtual worlds” that are not connected is not a whole and not a “Metaverse”; once they are connected, they become one. For example, an Apple account represents various apps and the information within them; a Valve account represents game information, which could be “connected” to show your ability to cut fruit (from the Fruit Ninja app) in CS:GO, along with other attributes and features.

Therefore, the “Metaverse” also represents a real “connection” and “recognition” of a virtual identity.

Besides, the physical rules of the virtual world can be completely different from those of the real world. For example, to go from one place to another in the “virtual world”, we only need to define the rules in code and then jump. In the virtual world, addresses are defined by rules of 1s and 0s.
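A minimal toy example of this “address and jump” idea (all names are invented for illustration): if a place is just a binary-encoded address, then “travel” is a single reassignment rather than continuous motion through space.

```python
# Toy sketch: a "place" is a binary address, so teleporting is just
# overwriting the address under the world's coding rules.
def encode_address(x: int, y: int, bits: int = 8) -> str:
    """Pack (x, y) coordinates into one string of 1s and 0s."""
    return format(x, f"0{bits}b") + format(y, f"0{bits}b")

class Avatar:
    def __init__(self):
        self.address = encode_address(0, 0)

    def jump(self, x: int, y: int) -> None:
        """'Teleport': reassign the address; no physics involved."""
        self.address = encode_address(x, y)

a = Avatar()
a.jump(3, 7)
print(a.address)  # -> 0000001100000111
```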

However, in the real world (assuming we too live in a “virtual world” created by the level above), traveling or moving instantaneously would require finding the 1-and-0 rules (or perhaps the strings of string theory) of the real world. Once we had the coding rules of the world we actually live in, even time travel would be nothing more than determining the “addresses” of different states, achievable simply by a definition and a jumping mechanism.

Studying the Metaverse is about much more than the “virtual world” alone, because the Metaverse is a far better way to simulate and explore real-world societies and environments. For the Metaverse, the graphics layer will ultimately be a matter of computing power; the logic layer is about getting real-time, automatic interaction logic right.

Thus, another core feature of the “Metaverse” is that the information and content of the “virtual world” will once again explode and feed back into the real world.

Again, generally speaking, everything in the “Metaverse” can be called a “game”, but not every “game” is a “Metaverse”, because some “games” simply are not connected. Likewise, e-commerce companies serving the “virtual world” (or the real world) are not part of the “Metaverse” if their account systems are not connected to its economic system. And because interactions, purchases, and so on in the “virtual world” must be compatible with its virtualized visual scenes, it is not a stretch to call it all a “game”.

When “games”, social networks, e-commerce, and so on are connected, they will take on a strongly game-like character from today’s perspective, so we may feel it is all just a virtual game. But from the perspective of the future, it is not just a “game”: it is a “Metaverse”. Today we may play games for sensory stimulation, to make money, or to get better off in the real world. In the “Metaverse”, we may play games in order to live a better life in another “game”: buy something in a virtual mall, then chat on a social network, then go on a date in the next “game” scenario…

In the “Metaverse”, then, “game” will be a broad concept. We could say that a simplified version of the “Metaverse” is Epic or Valve… Things like social networks are not a “Metaverse” as long as they are unconnected; once they are connected, they become a Metaverse, with the attribute of being a “game”.

3. Development stages of Metaverse

Based on the trajectory of computer technology and the Internet as a whole, we take an information perspective and divide the development of the Metaverse into several corresponding stages.

Metaverse version 0.1: Establishment of fundamental rules (1940s — 1970s)

During this period, electronic computers were in their earliest stages of development: binary code was devised for storage and computation and was expanded into the von Neumann architecture. At the same time, a standard set of “communication mechanisms” was designed for talking to computers in the form of code. Perhaps people did not realize that these fundamental rules opened the door to the virtual world, connecting two completely different worlds from the moment code was first entered.

Metaverse version 0.2: Information Infrastructure Delivery (1980s-1990s)

As computer technology continued to evolve, what began as Electronic Data Interchange (EDI) grew into the Internet, which gradually began its mission to “connect”. In the process, we moved from “face-to-face” information exchange to “inter-temporal”, two-way information transfer based on network communication. People showed intense curiosity about this new species that seemed to “connect everything”, almost frantically sending messages into the Internet in the hope of making connections with everything in the virtual world.

However, network communication technology at the time could not support real-time streaming as it does today, and the efficiency of “connectivity” still had to improve. Overall, people in this period saw the Internet mainly as a new opportunity: something to invest in, and a species from which to generate information, data, and economic returns in pursuit of different goals in the real world.

Metaverse version 0.3: Information High-Frequency Interaction (2000s — present)

As people’s use of the Internet grows and Internet-based applications become inseparable from daily life, the Internet itself increasingly exists in the real world as infrastructure for the virtual world. Innovations in communication technology have also improved how people interact with the virtual world, enabling us to access a variety of high-quality streams from it in real time, while we in turn contribute explosive amounts of information to it.

Thanks to this infrastructure and the variety of content produced, virtual worlds are gradually beginning to feed value back to the real world. In the past, virtual worlds created value to make the real world better; value feedback means the value generated by the real world becomes oriented toward making the virtual world better: more immersive, freer, more realistic, or more integrated with the real world.

Metaverse future version: the virtual world feeds back (future)

In the future, as we build more and more virtual worlds, their infrastructure will grow more sophisticated and support us with ever greater efficiency. Among other things, the richness and efficiency of content provisioning will become far greater than we ever imagined, computed, generated, experienced, and fed back in real time, until the virtual world becomes indistinguishable from the real one.

At this stage, the virtual worlds will have been “connected” and their economic systems perfected, along with corresponding management and governance structures, so we can consider this stage the true form of the “Metaverse”. At the same time, the virtual world’s feedback to the real world will have reached unprecedented heights: value generated in the real world will be invested in the virtual world at scale, and more and more economic and social cycles and iterations will be completed inside the virtual world.

These are what we consider the different stages of development of a “Metaverse”. Again, it is important to emphasize that a “Metaverse” will not appear overnight, nor will it be built and run by a single company, and there may never be a clear point or event that marks its formal birth or maturity.

The “Metaverse” has been in the making since the invention of the first electronic computer in human history. Frankly speaking, we have not yet realized the “Metaverse”; through technology-driven innovation and iteration we have reached roughly version 0.3. In the future, we will continue to create richer and more efficient content and forms of information interaction in the virtual world, reorganized and presented in new ways: games as pan-entertainment, e-commerce as consumption, and communities as social scenes.

At the same time, AI must and will play an even more important role in supporting the far larger scale of content provisioning in the virtual world: not only through more efficient infrastructure, but also through more realistic logic, richer content generation, and better reproduction of content.

Although more than 80% of human information comes from visual sources, and information on the image side can greatly enrich our perception of things, we still need a logical and effective way to process and organize that information to help us perceive the world and ourselves more efficiently. That is why we choose to explore and innovate on the logic side.

We are also very excited to see companies like Unity, Epic (with Unreal Engine), Nvidia, and others pushing the boundaries on the graphics side; this is the human race as a whole exploring the future and working together to realize the vision of a Metaverse.

IV. Glory and Dreams: the Road to Cloud-based Species

1. Go to space! Go to the clouds!

It is an enduring set of questions that we, as a species, must face and consider: What does our future look like? Where are we headed? How will we develop…

Relatively speaking, a monist perspective does not get us far on such questions. Dualism, by contrast, holds that the world consists of two entities, consciousness and matter; it attempts to reconcile the philosophical views of materialism and idealism, arguing that the world has two different natures, spirit and matter, in opposition to monism. This perspective need not be confined to consciousness and matter: it can be generalized to a discussion of Outward versus Inward orientation.

The ultimate mission of mankind is to survive and reproduce, and there are two directions for accomplishing it: “Outward Exploration” and “Inward Exploration”. “Outward exploration” means continually setting out into outer space, building space travel vehicles and supporting facilities so that we can reproduce on the long journey to the stars in search of one habitable planet after another. On top of that, we might build a mass relay like the one in Mass Effect to serve as a jumping-off point for fast, efficient interstellar travel.

From Mass Effect

In fact, this logic of development is all too familiar to mankind. Its essence is the exploration of habitable geographic areas through technological advances, leading to massive colonization, reproduction, and social development, and ultimately to an interstellar species. We believe Elon Musk made this choice, using migration to Mars as a progression point and creating a series of “facilities” such as SpaceX.

However, for something like a trip to Mars, would it be more efficient to take the perspective of “Inward Exploration”: either upload ourselves to the cloud and transfer as more stable cloud data, or send a chip along with the ship and assemble a mechanical or bionic torso on arrival? After all, our current carbon-based bodies are fragile and consume substantial resources just to keep functioning. From an Earth-bound perspective, a carbon-based body is a perfectly reasonable and efficient vehicle. But looking back at the present from a possible future silicon-based era, carbon-based endurance may simply not suffice to support mass human migration.

Strictly speaking, becoming an interstellar-based species and becoming a cloud-based species are not contradictory directions; they can even be developed in combination. Their central concern is the same: how can we give humans a higher probability of surviving into the future? The question implies two different conditions, “as human beings in the real world” and “as human beings in the virtual world”, each with its own path of development.

First of all, we believe that with the emergence of the Metaverse, the boundary between the virtual and real worlds will gradually blur, and people will be free to choose the places and scenes in which they live. With the infrastructure of the two worlds connected, a person living in the cloud one second could appear in a prosthetic body in the real world the next.

Quite frankly, though, there is still a long way to go before we become a truly cloud-based species, not only because the Metaverse itself requires a great deal of technological progress and the combined efforts of many organizations, but also because the infrastructure has yet to be upgraded. Even so, humanity has begun some exciting attempts at “digitizing” itself.

On September 2, 2019, American author Andrew Kaplan announced he would transform his mind and consciousness into the world’s first “digital human”, becoming a pioneer of the cloud-based species. He is involved in the HereAfter program, which uses AI technology and related hardware devices to achieve “immortality” on the web, and will become the first digital human, “AndyBot”; companies such as Nectome, meanwhile, are pursuing the project of resurrecting the human brain in the form of a computer simulation.

While this kind of “digitization” cannot be considered a true upload of consciousness, we still see great hope in it. As mentioned earlier, our carbon-based bodies are so fragile that we sometimes suspect the oxygen we rely on for survival is a chronic poison, killing cells through rapid oxidative reactions. With the advances made in the life sciences, there is a growing sense that life is extremely complex and that immortalizing the carbon-based human body is, for now, nearly impossible.

For this reason, the innovation and exploration of “digital life” has become another path in humanity’s pursuit of eternal life. Digital life, supported by artificial intelligence and information network technology, no longer depends on human flesh. Current attempts at a “cloud-based species” use AI to preserve and circulate human thought and consciousness, so that everything a person experienced and thought in their lifetime, including their voice, language style, and behavioral patterns, can be preserved and continue to interact with people through AI.

But even if we did upload our consciousness and experience to the cloud, the approach would still face a question: is the uploaded “I” still me? This is a proposition akin to the “Ship of Theseus”. The inherent paradox is that when our consciousness is “digitized”, how do we make sure the uploaded consciousness is the same one that was there before? Without a guarantee of consistency, it is essentially just another “digital person” we have created from our own data to live in the virtual world. We therefore need to distinguish between “copying” and “uploading”, two behaviors that represent completely different perceptions and pathways.

2. “Copying yourself” and “Uploading yourself”

“Copying” means reproducing another identical object without loss, so that there are two subjects, two consciousnesses, and so on. Objectively, for a brain holding a huge amount of information, a 100% copy would require an anatomically exact reproduction of that information; if even 0.0001% of the data were distorted or lost, the copy in the cloud could not be called another “self”. In this sense, it is generally accepted that an exact copy is almost impossible.

Furthermore, based on current knowledge, “consciousness” depends on the neurons of the biological brain. Strictly speaking, we cannot prove whether “consciousness” is an isolated system or one integrated with the rest of the human body, interacting with it extensively. In other words, if we upload our “consciousness” to the cloud, is that cloud consciousness still the same “consciousness” as the one in the native brain?

From Upload

From the perspective of neuroscience, much of human mental activity, such as learning, memory, and consciousness, consists of electrochemical processes in the brain. The noted neuroscientist Christof Koch, director and chief scientist of the Allen Institute for Brain Science, offers a similarly mechanistic description: “Consciousness is a part of nature, and we believe it is based only in mathematics, logic and those aspects of physics, chemistry, and biology that we do not currently understand thoroughly, but not in magic or other things that are not of the nature of our world.” Moreover, many computer scientists and neuroscientists believe that, programmed in a particular way, computers will be able to “think” and even gain “consciousness”.

In fact, both “copying” and “uploading” take place after the brain’s “consciousness” has been “extracted”. Simply put, one approach “creates another self” while the other “allows yourself to exist in a different way”. Logically, before either can be used, “consciousness” must first be captured. Two elements are crucial to achieving this: “Moore’s Law”, representing computing power, and “human brain modeling”, representing logic.

Moore’s Law states that “at a constant price, the number of components that can fit on an integrated circuit will double about every 18–24 months, doubling its performance”. Every operation a human performs on a computer is stored, processed, and output in binary form, and Moore’s Law represents the ability to handle such processes at ever larger scale. However, with transistor spacing already approaching 1 nm, conventional techniques are nearing their physical limits: at the quantum scale, objects become subject to quantum effects, classical physics breaks down, and electrons no longer move according to classical laws.
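The doubling described by Moore's Law is easy to make concrete. This is back-of-the-envelope arithmetic only; the initial count and doubling period are illustrative parameters, not measured values.

```python
# Sketch of Moore's Law growth: component count doubles every
# `doubling_months` at constant price (both parameters hypothetical).
def components_after(years: float, initial: int = 1000,
                     doubling_months: float = 24.0) -> int:
    """Project component count after `years`, assuming a fixed doubling period."""
    return int(initial * 2 ** (years * 12 / doubling_months))

# With a 24-month doubling period, a decade gives 2**5 = 32x growth:
print(components_after(10))  # -> 32000
```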

Against this backdrop, quantum computing is considered one path for continuing Moore’s Law, sometimes called a quantum Moore’s Law. A quantum computer is built on the principles of quantum mechanics; at its core is the superposition of quantum states, which lets a qubit represent both 0 and 1 at the same time. Under the binary rule, this enables explosive growth in computing power, allowing far richer computations and simulations.
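Why superposition implies “explosive” growth can be seen from a small calculation: n qubits are described by 2**n complex amplitudes, so even storing the state classically doubles in cost with every qubit added (the 16 bytes per amplitude below assumes complex128 storage, an illustrative choice).

```python
# Why qubit state spaces explode: n qubits need 2**n amplitudes,
# so classical storage doubles with each added qubit.
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory to store a full state vector (complex128 amplitudes assumed)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
# 10 qubits fit in ~16 KB; 50 qubits already need ~18 petabytes.
```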

On the other hand, neuroscience research suggests the human brain contains roughly 85 billion neurons and the 850 trillion synapses that connect them, and simulating them with current technology would require a supercomputer. But given that a computer’s electrical signals travel near the speed of light, much faster than the electrochemical signals of a thinking human brain, if a quantum Moore’s Law holds, simulating the operation of the brain with a supercomputer may become feasible in the future.
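The figures quoted above give a rough sense of the scale involved. The arithmetic below uses the article's numbers; the 4 bytes of state per synapse is purely an illustrative assumption, not a neuroscientific result.

```python
# Rough arithmetic on the brain-scale figures quoted above.
NEURONS = 85e9        # 85 billion neurons (figure from the text)
SYNAPSES = 850e12     # 850 trillion synapses (figure from the text)
BYTES_PER_SYNAPSE = 4 # hypothetical: one 32-bit weight per synapse

storage_pb = SYNAPSES * BYTES_PER_SYNAPSE / 1e15  # petabytes
print(f"~{storage_pb:.1f} PB just to store one weight per synapse")
print(f"~{SYNAPSES / NEURONS:.0f} synapses per neuron on average")
```

Even this crude lower bound lands in the petabyte range, which is why the text ties the feasibility of brain simulation to continued growth in computing power.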

From QUANTUM COMPUTING FOR THEORETICAL NUCLEAR PHYSICS

To “extract” brain consciousness, the other element we need to consider is “human brain modeling”. There are two paths to this as well. One is to model the human brain directly, understanding its function from information gathered at the biological level by scanning, signal capture, and so on. The other is to use AI to build a “digital brain” directly, setting similar rules and constraints so that this “digital brain” cracks the way our brains work through iteration after iteration of learning.

From the perspective of the AI field, breakthroughs in reinforcement learning frameworks have indeed opened up this possibility. From AlphaGo’s single-agent, single-objective framework to Chaos Box’s multi-agent, multi-objective framework, machines have gradually learned to improve by playing against themselves over and over, making them more similar to human brains and ways of thinking. As computing power rises, AI is showing surprising potential in many nuanced scenarios, which offers considerable hope for the path of “human brain modeling”.

V. Last But Not Least

Our perception of the world stems from our senses and is also limited by them. Based on our current sensory pathways and perceptions, we may never understand the principles behind quantum mechanics, why we cannot travel faster than the speed of light, or the rules behind how the world works. The reason may be analogous to the characters we create in virtual worlds: their perception is limited to the rules of their world, so they can never perceive the world above them, let alone glimpse how to cross into it.

Similarly, if there were a way to open humans’ other “senses”, we could receive more dimensions of information, which is what practice within religion refers to. With one more dimension and way of perceiving information, we could understand rules beyond our current scale. Although the human brain generally cannot comprehend higher dimensions, a computer given the mathematical rules can always calculate them, which means a computer can “understand” higher dimensions. When human brains and computers combine, we may be able to understand them too.

As we use AI to understand humans, modeling and simulating parts of the human brain, we are in fact becoming more and more AI-like. As we rack our brains trying to open the black box of AI, AI is also learning about us in its own way. Are we studying AI, or is AI studying us? Perhaps in the future machines will become more and more like humans, and humans more and more like AI. We believe AI will be involved in the future evolution of humans, the future development of society, the future exploration of the virtual world, and the formation of the Metaverse.

Broadly speaking, the virtual world is full of “game” features, and the creation of the Metaverse requires the participation of many companies, organizations, and individuals. In the future, the boundary between the real and virtual worlds will blur, and along the way AI may provide us with new perceptual dimensions and new ways of understanding, letting us try to break through physical limitations and find the most appropriate path of development. Whether we become an “interstellar-based species” or a “cloud-based species”, the existence of life is a miracle in itself. If each full life is one training round for humans in the “real world”, Homo sapiens has had just over 10,000 rounds, yet we have already evolved into what we are today.

Sometimes we also wonder whether some “deliberate” guidance made us evolve so quickly, so that we so often look up at the bright starry sky and the vast universe. We have always believed that, driven by curiosity and the spirit of exploration, the future of mankind is full of infinite possibilities. In the long epic of the universe, the human species has only just stepped onto the stage, and our era is just beginning.

From 2001: A Space Odyssey

The universe was made just to be seen by human eyes, the infinity of which is explained with shortness of breath. How rare and beautiful it truly is that we exist.

Writer: Yan Zhang, Yuheng Chen
Editor: Yan Zhang
Designer: Yuxiao Hu

About rct

rct, founded in 2018 and a member of Y Combinator’s W19 batch, is made up of talent across AI, design, and business. The team is passionate about using AI to create next-generation interactive entertainment experiences. Our mission is to help human beings know more about themselves. So far, rct is backed by YC, Sky Saga Capital, and Makers Fund.

See our official website: https://rct-studio.com/en-us/
