Inheritance of Civilization: The Birth, Growth and Reconstruction of Information and Digital Boundaries

rct AI
22 min read · Dec 4, 2020

Introduction:

At the end of the 18th century, the invention and widespread use of the steam engine set off the first technological revolution. Roughly two decades later, in 1818, the British Romantic poet Percy Bysshe Shelley published these lines:

My name is Ozymandias, King of Kings;

Look on my Works, ye Mighty, and despair!

Nothing beside remains. Round the decay

Of that colossal Wreck, boundless and bare

The lone and level sands stretch far away.

Ozymandias is the name the ancient Greeks gave to the Egyptian pharaoh Ramses II. Judging from the date of composition, it is likely that Shelley wrote the poem as a colossal statue of Ramses II was about to arrive at the British Museum. A free thinker shaped by the technological progress of his age, Shelley not only voiced the exclamation “Look on my works, ye Mighty, and despair!”, but also mourned the passing of beautiful things through “Of that colossal wreck, boundless and bare, The lone and level sands stretch far away.”

This emotion stems from a sense of loss. Technological progress has indeed made life and the world better and better, but as civilization iterates and develops, the material things of the real world eventually return to dust.

Beautiful things gradually dissipate in the physical world, but the spirit of romance and humanism endures.

From the early humans of 2.6 million to 1.5 million years ago, to the Enlightenment of 17th- and 18th-century Europe, to the continuous breakthroughs of modern information technology, technological progress has always been the core driving force behind the development of civilization. People learned to make and use tools, gained the light of reason, broke through digital boundaries, began to liberate the productivity of the virtual world, and carried civilization forward in ever more efficient ways.

The discovery of new land has brought us an increase in living space and available resources; the invention of new technologies has improved our efficiency in resource utilization and space discovery.

This article begins with the original accumulation of information, discussing the foundations of information, its forms of dissemination, and its effective carriers. We then widen the perspective: starting from 1600, we list the theoretical breakthroughs and concrete applications that stand as milestones in the development of digital information. (You can skip the first part and go directly to the second part to view the development timeline.)

After more than 400 years of scientific progress, this long history of digital information has not only carried mankind, with difficulty, into the information age, but also witnessed the romantic inheritance of civilization.

While feeling the stubbornness and greatness of human civilization, we also see that the development of computing and communication technology first broke through the boundaries of digital information and ignited the spark of the virtual world; like a prairie fire, it then rapidly spread to every corner of the real world, transforming it into digital information and forming a digitized real world. This structured digital information grows day by day, and it in turn gives birth to new demands, new scenarios, and new ways of interacting.

When we are measuring the real world, we are also creating a greater and immortal virtual world.

Do not go gentle into that good night.

Rage, rage against the dying of the light.

— Dylan Thomas

Creation of Adam, Vatican Museums

I. The original accumulation and dissemination of information

(1) Language builds the information foundation for the development of civilization

In his classic essay “The Part Played by Labour in the Transition from Ape to Man”, Engels argues that language was born of human collective labor: in the process of adapting to the need to communicate, language emerged spontaneously, and with it human abstract thinking.

It is undeniable that the emergence of language is indeed related to labor, especially collective labor. Collective labor places greater emphasis on overall consistency and coordination, which made language necessary. At the same time, the particulars of labor not only made the language system more complete and precise, from pronunciation to grammatical rules, but also provided a biological and psychological basis for the emergence of language.

However, this theory does not explain the specific process by which language arose, so as an account of the origin of language it remains incomplete. By comparison, an explanation from the perspective of species evolution seems more plausible.

A species needs a certain level of thought and intelligence to form semantic meaning, and a complete vocal apparatus to produce articulate speech. From archaeological research on brain capacity, we can infer that late Homo sapiens may already have possessed the level of thinking required to produce language.

  • 300,000 years ago, the first species with full language ability appeared: Homo sapiens. They had strong information-exchange capabilities and systematic technical, cultural and social structures;
  • 40,000 years ago, cave art, clothing and sacrificial rituals appeared in large numbers, showing that humans by then possessed the necessary abstract thinking;
  • 10,000 years ago, humans began to engage in agricultural production;
  • 5,000 years ago, ancient writing appeared.

Then comes the development of early civilization as we know it. Language built the information foundation for the development of civilization, and the spread of writing that accompanied the use of language became one of the signs that human society had entered the stage of civilization. It made people’s abstract thinking more logical, changed the way people store and disseminate knowledge and information, and further highlighted the social attributes of humanity.

(2) Text and images increase the spread of information

After extensive experiments, the experimental psychologist Treicher concluded that 83% of the information humans obtain comes from sight, 11% from hearing, 3.5% from smell, 1.5% from touch, and 1% from taste. In other words, text and images (including still pictures, animated images, long and short videos, game screens, and so on) constitute the main forms of information we currently obtain.

Ali Research Institute mentioned in a study that over a lifetime the human brain can process roughly 173 GB of information, while the amount of information a person processes on the Internet every day is at least 5 GB. At that rate, the information a modern person receives from the Internet in little more than a month exceeds the total amount that can be processed in a lifetime.
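
Taking the study’s two figures at face value, a lifetime capacity of about 173 GB and at least 5 GB processed per day, the crossover point works out to roughly a month:

$$\frac{173\ \text{GB}}{5\ \text{GB/day}} \approx 35\ \text{days}$$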

The most significant aspect of modern information technology is that it has expanded both the capacity for storing information and the forms in which information can be disseminated, while improving the speed and stability of dissemination. The evolution from 1G to 5G has likewise carried text and images from static, single forms to real-time, interactive video, live-streaming and gaming experiences.

At the same time, as information sources have become more diverse and abundant, people’s attention has been dispersed across them and attention spans have grown ever shorter. In the era of print media, consumers’ attention span was about 24 seconds; in the era of television, about 15 seconds; in the Internet era, it does not exceed 8 seconds.

Simply put, this is a confrontation between silicon-based and carbon-based information processing, a conflict between the four bases of DNA and the two digits of binary code. The human brain processes information on a principle of minimal sufficiency: its primary purpose is survival, not learning. So when the amount of incoming information deviates from what the brain originally evolved to handle, attention fragments.

To improve the efficiency of dissemination, the carrier of digital information becomes a key factor, and it also implies the way humans and computers interact. Technological breakthroughs allow information to appear in our lives at high frequency, accompanied by faster exchange and greater volume. Computers keep shrinking while carrying and transmitting more and more information; meanwhile screens, lenses and other devices, as the most intuitive carriers of information interaction, are rapidly covering every corner of the real physical world.

II. The barbaric growth of digital boundaries

(1) From frictional electrification to the digital world

Around 600 BC, Thales of Miletus, the first natural scientist and philosopher of ancient Greece and the West, began to observe and study electrostatic phenomena. He found that amber rubbed with fur could attract small objects. People at the time could not explain the phenomenon, so they attributed it to a special divine power in amber and named it after the stone: the word “electricity” evolved from the ancient Greek word for amber.

In 1600, William Gilbert, physician to Queen Elizabeth I, completed the first systematic study of magnetism in the history of physics, using experimental methods to combine the exploration of nature with theory. In 1729, Stephen Gray discovered that electricity could be conducted. Sixteen years later, Musschenbroek invented the Leyden jar, the earliest capacitor used to store electricity. Then, in 1785, Coulomb published his law of electrostatic force, the first quantitative law in the history of electricity.

Thales of Miletus / William Gilbert / Musschenbroek / Coulomb

In the early 19th century, electromagnetism developed in full swing. Faraday discovered electromagnetic induction and built mankind’s first generator, laying the foundation for the electrical age. Decades later, Maxwell unified electricity, magnetism and optics, marking the formation of the classical theory of electromagnetism.

This accumulated electromagnetic knowledge was gradually applied to industry in the late 19th century. In 1866, Werner von Siemens in Germany built the world’s first practical industrial dynamo, and Tesla later championed the alternating-current system, allowing electricity to be transmitted anywhere with far greater efficiency. Since then, electricity has become an indispensable part of modern life and the main driving force of the second industrial revolution.

Faraday / Maxwell / Siemens / Tesla

Then came the 20th century, an era full of chaos, passion and hope. On December 12, 1901, Marconi transmitted a radio signal from Britain across some 2,100 miles of the Atlantic Ocean to Newfoundland, Canada. Five years later, Lee de Forest, the “father of radio”, developed the vacuum triode, which fundamentally changed wireless technology and pushed humanity rapidly into the electronic age.

Before von Neumann laid out the computer’s architecture and binary coding, the most important development in the field of information transmission in the 1920s and 1930s was Nyquist’s and Hartley’s work on concepts such as discrete and continuous information. These theories provided the basis for Shannon’s later “A Mathematical Theory of Communication”.
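
For context, the best-known result of that line of work is the channel-capacity formula Shannon published in 1948, which bounds how many bits per second can be transmitted reliably over a channel of bandwidth B with signal-to-noise ratio S/N:

$$C = B \log_2\!\left(1 + \frac{S}{N}\right)$$

Here C is the maximum reliable data rate in bits per second and B is the channel bandwidth in hertz.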

In 1931, Gödel published his paper on the incompleteness theorems, bringing epoch-making changes to the foundations of mathematics. Turing, in turn, proposed the abstract Turing machine in 1936; it replaced Gödel’s formal languages based on universal arithmetic, affirmed the possibility of realizing a computer and its essential structure, and became a theoretical cornerstone of the computing world.

Nyquist / Hartley / Gödel / Turing

In the 1940s and 1950s, people witnessed the emergence and rise of technologies such as semiconductors and computers. Turing, known as the “father of artificial intelligence”, published two more milestone papers, “Computing Machinery and Intelligence” and “Intelligent Machinery, A Heretical Theory”, in which he first set out the idea of machine intelligence. The Turing test has ever since been regarded as a basic criterion for measuring our pursuit of machine intelligence.

If Turing gave the computer its soul, then von Neumann gave it a body. In 1945, von Neumann, the “father of the computer”, specified the computer’s structure, binary coding, stored programs and program control in the “First Draft of a Report on the EDVAC”. This remarkable idea laid the foundation for the logical design of electronic computers and remains one of the basic principles of computer design.

During this period, the transistor was born at Bell Labs in 1947, and just over a decade later the integrated circuit appeared. Ever since, vast numbers of electronic components such as transistors, diodes, resistors and capacitors have been combined on integrated circuits to astonishing effect.

After the 1950s, with the supply and delivery of electricity secured and the computing infrastructure and rules of data communication defined, it became possible to forge the keys to the digital world. From then on, technological breakthroughs were more about making the roads through the digital world wider and faster, creating more content within it, and building its infrastructure.

The earliest computers were not efficient enough to meet humanity’s growing communication needs, so an entirely new approach to switching and processing was expected to emerge.

In 1968, Bell Labs began developing the world’s first cellular mobile telephone system, the Advanced Mobile Phone System (AMPS), the basis of 1G. 1G used analog communication technology, with low information capacity, unreliable signals and poor transmission quality. In the late 1980s, as large-scale integrated circuits and digital signal technologies matured, people therefore began to study the transition to digital communications.

Alongside these breakthroughs in communications, and in the period before the rise of 2G, the Internet exploded. In the 1980s, the theory of computer and network technology matured, and both the technology and its industrial applications boomed, eventually giving rise to the Internet as we know it. Through the 1980s and 1990s, the focus of communication technology shifted from analog to digital and from voice to multimedia, directly accelerating the development of 2G, and in 1990 the European Telecommunications Standards Institute (ETSI) published the GSM (Global System for Mobile Communications) standard.

In 2000, the International Telecommunication Union released the third-generation mobile communication standard, 3G, which could meet the basic needs of multimedia services with greater information capacity and faster transmission. In the 3G era, smartphones such as the iPhone displaced traditional handsets such as Nokia’s, and we became able to transfer information about the world around us into digital networks far more efficiently through mobile devices.

After several more years of development, communication technology accelerated further: in 2008 the International Telecommunication Union specified the requirements for the 4G standard, and in 2017 3GPP froze and released the first version of the 5G NR specification, marking the fifth generation of communication technology’s entry into the application phase.

Of course, this is the development of information and digitization seen from the perspective of digital communication, spanning electricity and magnetism, wired and wireless communications, switches (mechanical, semi-electronic, electronic), hardware (semiconductors, chips, memory and so on), and software systems and applications. In addition, ABC technologies (AI, Big Data, Cloud) have developed rapidly and remarkably in the ongoing technological revolution.

In 1950, in the paper “Computing Machinery and Intelligence”, Turing posed the question of whether machines can think, formally beginning the exploration of the nature of intelligence. In the decades that followed, the field of artificial intelligence did not develop smoothly, remaining mostly in the laboratory.

Since 2010, the evolution of communications from 3G to 4G has given people access to far more data and computing power than before, and new machine learning algorithms (especially deep learning), together with growing volumes of data and application scenarios, have driven the re-emergence of artificial intelligence.

For now, deep learning performs better than traditional machine learning methods in areas such as computer vision and natural language processing, but this does not mean it is the end point of machine learning.

Both supervised and unsupervised learning require data samples to be prepared in advance, the former manually labeled and the latter not; models are then trained so that the machine learns, as far as possible, what the data corresponds to in a human context. Reinforcement learning, however, is more attractive by the standard of “true intelligence”, because it at least looks more like a way for an agent to actually “learn how to learn”.

The simplest approach in reinforcement learning is Q-learning, which has two parts: the agent continually builds up an estimate of how valuable each state (and each action in it) is, and it makes decisions based on that estimate, weighing immediate against long-term returns. If we think of humans as a combination of rationality and emotion, then the principle of reinforcement learning is very similar to how the human brain learns to make decisions.
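
To make that two-part loop concrete, here is a minimal tabular Q-learning sketch on a toy corridor environment. Everything in it (the environment, the reward of 1.0, the hyperparameters) is invented for illustration and is not taken from the article:

```python
import random

# Toy corridor environment: 6 cells, agent starts in cell 0, reward 1.0 only on
# reaching the rightmost cell. Environment, reward and hyperparameters are all
# invented here for illustration.
N_STATES = 6
ACTIONS = [0, 1]                       # 0 = move left, 1 = move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

# The agent's "awareness": an estimated value for every state-action pair.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(state, action):
    """Environment dynamics: return (next_state, reward, done)."""
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

for episode in range(500):
    state, done = 0, False
    while not done:
        # Decision part: usually exploit current estimates (ties broken randomly),
        # occasionally explore at random.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            best = max(Q[state])
            action = random.choice([a for a in ACTIONS if Q[state][a] == best])
        next_state, reward, done = step(state, action)
        # Learning part: nudge the estimate toward reward + discounted future value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)  # after training, "move right" has the higher value in every non-terminal cell
```

The single update line is the essence of tabular Q-learning: Q(s, a) is nudged toward r + γ·max Q(s′, ·); the rest is bookkeeping around exploration and the environment.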

Whether it is DeepMind’s AlphaGo or OpenAI’s OpenAI Five, these applications of reinforcement learning are in fact “purely rational”, in the sense that the agent’s only goal is to win. They do demonstrate that deep reinforcement learning, backed by sufficient computing power, can reach understanding beyond human knowledge.

In fact, Hume wrote in A Treatise of Human Nature that “reason is, and ought only to be the slave of the passions”, and Daniel Kahneman holds a similar view: in many situations people are not rational, and bias is an inherent human flaw. People mostly live by emotion rather than by rationality, and emotions may be the hidden factor, lodged in genes or in past experience, that shapes our decisions.

From the perspective of human nature, how to realize or simulate the “emotional” part on top of “pure rationality” is the direction reinforcement learning will explore next. The machine would then be influenced by multiple goals simultaneously, not just winning or losing, but also parameters such as character, personality and hobbies, which together simulate the brightest moments of human nature. If that were achieved, Turing’s 1950 question might have a natural answer.
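
One way to make that idea concrete in reinforcement learning terms is a reward that blends several weighted objectives instead of a single win/lose signal. The sketch below is purely illustrative: the trait names, weights and signals are invented here and are not proposed by the article.

```python
from dataclasses import dataclass

@dataclass
class Personality:
    """Hypothetical trait weights shaping what an agent 'cares about'."""
    competitiveness: float  # weight on winning
    curiosity: float        # weight on encountering novel states
    sociability: float      # weight on interacting with others

def blended_reward(p: Personality, win_signal: float,
                   novelty: float, social: float) -> float:
    # Instead of optimizing the win signal alone ("pure rationality"),
    # the agent optimizes a personality-weighted mixture of objectives.
    return (p.competitiveness * win_signal
            + p.curiosity * novelty
            + p.sociability * social)

# Two agents with different "characters" value the same experience differently.
stoic = Personality(competitiveness=0.8, curiosity=0.1, sociability=0.1)
playful = Personality(competitiveness=0.2, curiosity=0.5, sociability=0.3)
print(blended_reward(stoic, win_signal=1.0, novelty=0.2, social=0.1))    # ≈ 0.83
print(blended_reward(playful, win_signal=1.0, novelty=0.2, social=0.1))  # ≈ 0.33
```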

Of course, this is a very long process, and the development of artificial intelligence is inseparable from big data and cloud computing. As communication speeds rose and information volumes surged, Google set off the big data era with three seminal papers published between 2003 and 2006, describing the Google File System, MapReduce and BigTable: a distributed file system, a distributed computing framework and a large-scale structured storage system.
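
To give a flavor of the programming model the MapReduce paper described, here is a minimal, single-machine word-count sketch in Python that mimics the map/shuffle/reduce structure; it is of course nothing like Google’s distributed implementation.

```python
from collections import defaultdict

def map_phase(document: str):
    # Map: emit a (key, value) pair for every word in the document.
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(key, values):
    # Reduce: combine all values emitted for the same key.
    return key, sum(values)

def mapreduce(documents):
    # Shuffle: group intermediate pairs by key, then reduce each group.
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_phase(doc):
            groups[key].append(value)
    return dict(reduce_phase(k, v) for k, v in groups.items())

docs = ["the lone and level sands", "the sands stretch far away"]
print(mapreduce(docs))  # {'the': 2, 'lone': 1, 'and': 1, ..., 'sands': 2, ...}
```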

Amazon launched its Amazon Web Services cloud computing platform in the mid-2000s, and the open-source Hadoop framework, inspired by Google’s papers, appeared in 2006, enabling the storage and computation of massive amounts of data. Since then, more and more technologies and software have emerged that keep improving the efficiency of cloud and big data operations.

As communication technologies evolved, Carnegie Mellon University launched the Open Edge Computing initiative (OEC) in 2015, and the European Telecommunications Standards Institute published a white paper on mobile edge computing, later broadened into multi-access edge computing. Information and digital technologies give us the ability to digitize the real world, and more and more data and information from the physical world can be digitized and transmitted to the virtual world (the so-called digital twin).

Centralized cloud computing comes under enormous pressure as ever more information about objects and events must be captured, processed and interacted with efficiently. The result is edge computing: data processing, application execution and even functional services are pushed from the center of the network out to nodes at its edge, facilitating the formation of the Internet of Things (IoT).

Ray Kurzweil, in his writing on the Singularity, argues that technology is entering the second half of the game and that the Singularity is approaching. The discovery of new technologies depends more and more on the foundation of existing ones; it is increasingly difficult for a single technology to break out alone, and the emergence of holistic, interconnected technologies is the general trend, with each technology becoming a node in the birth of the next. The future of technology will depend not only on the strength of one field or industry, but on the concerted efforts of all the individuals, organizations and environments connected by the digital frontier.

As people continue to explore and create the outside world, they are also trying to combine information technology and biotechnology to connect their bodies with data. Biological DNA and the chemical signals passed between neurons are themselves forms of information; by digitizing them we can understand the patterns of the information of life more effectively, which can influence human survival and reproduction in the real world at the microscopic level.

Since the focus of this article is the world outside the human body, we will not dwell on the development of information in the life sciences here or in the list that follows.

(2) The long history of digital information development

From the perspective of information development, we have organized the theoretical and applied advances in digital information technology from 1600 to the present, and have tried to list the important milestones and events.

In fact, each new theoretical breakthrough and each new application is an important part of the history of human civilization. Owing to length constraints we cannot list them all, so we have selected some of the more important and essential events in the development of digital information and present them as milestones.

The march of history will not stop, and technological progress will continue to occur in the future. Behind the development of technology, it is the light of human nature that drives our curiosity and sense of mission, inspiring us to explore the unknown and create the future.

We also believe that, with technological breakthroughs in various fields, the digital frontier will further cover the real physical world: on the one hand obtaining information that was previously inaccessible, and on the other opening an era of great voyages in the virtual world, creating new virtual coastlines and releasing unlimited creativity.

III. Reconstructing digital boundaries and scenarios

(1) New information becomes the hotbed of new demand

Every revolution in communication technology generates an enormous amount of information; new technologies further digitize real-world objects and accelerate the generation and expansion of information native to the virtual world. On top of that, as structured data and information grow, and with the learning and simulation of AI, new information will be created in new scenarios, bringing new needs and new ways of interaction.

As the digital frontier accelerates its coverage, the real world will become smarter and smarter, and the virtual world will develop highly intelligent ways of producing content, creating native digital demand. 5G signals can be converted into a variety of local-area signals such as WiFi, Bluetooth and ZigBee to meet the connectivity needs of different devices. The linkage, control and computation of devices on the terminal side, combined with AI algorithms and real-time interaction in the cloud, will enable diverse intelligent living scenarios.

With the support of cloud, 5G and IoT technologies, multimedia will evolve into “cloud media”: production, delivery, iteration and other processes move to the cloud, different terminals are scheduled and managed in a unified way, various forms of media information interact and iterate in real time, all terminal devices are fully orchestrated, and previously separate scenarios become connected.

Algorithms in the cloud can control and generate content in real time and deliver it back to end scenarios for intelligent interaction, in travel, home, live-streaming, gaming and other settings. Users can break down the walls between these scenarios, redefine their needs, deepen the degree of interaction with content, and gradually move toward de-appification and decentralization. In any scene and on any device, users will be able to connect and switch seamlessly, and truly experience and live in an intelligent digital world.

With the further development of big data, AI and computing power, human-centered, fully automated scene interaction will emerge, freeing people and scenarios from the constraints of terminal devices. In the future, breakthroughs in semiconductor and materials technology could connect all kinds of everyday objects to the Internet, so that the building materials and decorative props of future homes, offices and shopping malls can be connected to the digital network in real time.

As communication terminals become ubiquitous, the very concept of a terminal will disappear, replaced by digital borders. Different users will receive personalized intelligent experiences, natural human-centered intelligent interaction and scenario-based intelligent living.

(2) Measuring the Real World & Exploring the Virtual World

Today, with the development of information technology, the digital frontier covers most of our lives, and our every move in the network is retained in the virtual world in the form of structured data.

From the perspective of information transmission, images and words represent people’s expressions of both the real and the virtual world. With the breakthrough and development of information technologies, the earliest information in the virtual world was presented as code and text; only later did graphics and other visual technologies appear, allowing us gradually to reconstruct the information of the real world inside the virtual one. Today we can interact with the content of the virtual world in real time; in the future, we will feel its pulsing data and flowing information in even more direct ways.

At the same time, technological advances are shifting the digital boundary between the virtual and real worlds, renewing and iterating the way we interact with the digital world. Simply put, the human senses determine how we perceive and experience digital information, so we need different tools to bridge the gap between our senses and that information. Among these boundary devices, electronic screens are currently indispensable for receiving visual information, while keyboards, mice and headphones complement that visual delivery through touch and hearing.

If one wants to experience the existence and workings of the digital world in a more immersive way, then short of the consciousness uploading of science fiction, one needs to connect mechanical interaction with human neural activity, so that neurons can directly understand how machines work and thus perceive virtual content more immersively and realistically.

Ghost in the Shell

Although this is only a conjecture, the logic of digital development is that as the digital frontier gradually covers the external physical world, what remains to be digitized in the real world is the information inside the human body. Strictly speaking, in the life sciences today, apart from information related to the nervous system, hormones and perception, most other information, even including DNA, can already be preliminarily converted into digital information for practical applications.

We believe that, just as scientists and entrepreneurs throughout the history of mankind have developed innovative theories and frameworks that have propelled civilization forward, future generations will be able to accelerate the breakthrough of the physical limitations of mankind, not only creating native virtual worlds, but also truly bridging the gap between nerves and machines, freeing the human senses and ushering in the age of intelligent civilization.

Finally:

From the ancient Greek philosopher Thales to today’s artificial intelligence, quantum computers, brain-machine interfaces, and more, humans have used the power of the world to explore and understand it, and then in turn have tried to change it and even create another world.

In real life, we are constantly thinking about our relationship with the world as we interact with our environment; in the digital space, we are watching and influencing the survival and reproduction of native virtual creatures as we watch ourselves. Of course, this is only a conjecture, but the digital age of humanity has just begun, and we are full of hope and confidence in the future development of civilization.

Although humans as a carbon-based species are fragile, without tough skin, without eternal life, with desires, jealousy, delusions, and even unable to think and make decisions in absolute rationality all the time, the power of humanity always drives us to pursue sincerity, goodness and beauty, and to explore the world with endless curiosity while being in awe of nature.

As Gustave says in The Grand Budapest Hotel:

You see, there are still faint glimmers of civilization left in this barbaric slaughterhouse that was once known as humanity.

Perhaps in the future, machines will be able to feel every pain and sadness of human growth, and when they are relieved, these emotions will lead to a thousandfold love and desire for life and the world.

The Grand Budapest Hotel still photograph

Reference

Anderson, W., 2014. The Grand Budapest Hotel. Faber & Faber.

Assmann, J., 2011. Cultural memory and early civilization: Writing, remembrance, and political imagination. Cambridge University Press.

Bosworth, B., 2003. The Genetics of Civilization: An Empirical Classification of Civilizations Based on Writing Systems. Comparative Civilizations Review, 49(49), p.3.

Engels, F., 1950. The part played by labour in the transition from ape to man.

Hume, D., 2003. A treatise of human nature. Courier Corporation.

Shannon, C.E., 1948. A mathematical theory of communication. The Bell system technical journal, 27(3), pp.379–423.

Shelley, P.B., 1992. Ozymandias. James L. Weil.

Thomas, D., 2014. Do Not Go Gentle into That Good Night. The Collected Poems of Dylan Thomas (New York: New Directions, 1953), p.128.

Von Neumann, J., 1993. First Draft of a Report on the EDVAC. IEEE Annals of the History of Computing, 15(4), pp.27–75.

About rct

rct was founded in 2018, is a member of Y Combinator’s W19 batch, and is made up of talent across AI, design and business. The team is passionate about using AI to create next-generation interactive entertainment experiences. Our mission is to help human beings know more about themselves. So far, rct is backed by YC, Sky Saga Capital, and Makers Fund.

See our official website: https://rct-studio.com/en-us/
