April 16, 2015

Artificial Intelligence by Peer-Reviewing Agents

The dream of artificial general intelligence (AGI) is to build algorithms and machines that are capable of performing any intellectual task that a human can. Futurists like Ray Kurzweil even postulate the development of machines that will far exceed the intelligence of humans.

Methods used in AI research range from the symbolic manipulation methods developed in the 1950s to neural networks, diverse machine learning algorithms, and many more. In the past few years, detailed simulations of brain components have gained a lot of attention. Deep learning and the idea of distributed artificial intelligence in cyber-physical systems and the Internet of Things are also popular these days.

A Small Gedankenexperiment


First, let us assume that one day we in fact build a machine X with artificial general intelligence, i.e., a machine with the same intellectual power as a human. Second, assume we build a machine X++ that far exceeds human intellectual capabilities.

In the classical terminology of machine learning, the development of the machine X can be regarded as a supervised learning problem. We essentially implement algorithms that can learn human intelligence by observing labeled training data. For example, we teach a robot how to cross a street by showing it examples of right and wrong ways of crossing. Let's assume that we somehow manage to do this.
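
To make the analogy concrete, here is a minimal sketch of such a supervised setup in Python (the street-crossing features, data, and labels are invented for illustration, and scikit-learn is assumed as the learning library):

    # Minimal supervised-learning sketch with hypothetical features:
    # each example describes a street-crossing situation, and the label
    # says whether crossing would be right (1) or wrong (0).
    from sklearn.tree import DecisionTreeClassifier

    # Features per example: [light_is_green, car_distance_m, car_speed_kmh]
    X_train = [
        [1, 50.0, 30.0],  # green light, car far away       -> safe to cross
        [1,  5.0, 40.0],  # green light, car very close     -> not safe
        [0, 80.0,  0.0],  # red light, even with no traffic -> wrong
        [1, 30.0, 10.0],  # green light, slow distant car   -> safe
    ]
    y_train = [1, 0, 0, 1]  # labels provided by the human teacher

    model = DecisionTreeClassifier().fit(X_train, y_train)

    # The trained model judges a new, unseen situation.
    print(model.predict([[1, 20.0, 20.0]]))  # e.g. [1] -> cross

The point of the sketch: everything the model can acquire is bounded by the labels the human teacher is able to provide.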

Now, if we try to apply the same method to build the machine X++, we run into an apparent bootstrapping (or chicken-and-egg) problem. How can we apply supervised learning for this task? How can we teach a machine that is supposed to become far more intelligent than we are? This is of course a philosophical question, but it is also of practical relevance, as it exposes the limits of supervised machine learning methods.

Trying to build X or even X++ by detailed reverse engineering of the human brain has similar problems. The most apparent one is the need for an embodied agent that connects the brain with the outside world.

The Role of Society


If we want to build X++, the most crucial question is not what the nature of human intelligence is, but rather how it became what we know and (partially) understand today. Of course it is necessary to understand how the human brain works as an organ -- it developed over millions of years. However, the key point is that general human intelligence has grown exponentially during the last ~2000 years. Why is there such immense growth in this short amount of time?

A major part of the answer lies in the sociological structures that modern humans form. Today, children learn in pre-school what was unthinkable in the Middle Ages. But how did the important ideas crystallize from the irrelevant ones? How do we evaluate the future impact of ideas and concepts today, such that the general intelligence of humans keeps growing this fast in the future?

From my point of view, the concept of peer review and rating is the key to accelerating the development of intelligence. When Einstein came up with his idea of relativity, no one knew that it would be of such huge relevance for the future of physics. His ideas had to be evaluated, discussed, rated, and compared to others to gain insight into their potential. If someone has a great idea but fails to convince society that it is a good one, they will probably take it to the grave. Human intelligence could only develop by sharing and evaluating ideas. The media channels for this sharing have changed over time, but the principle remains the same.

Peer-Reviewing Agents


If we want to build X++, a machine that shall become more intelligent than we are, we need to apply to learning algorithms and machines the same principles that are manifested in human society. It is not enough to simulate a single brain with a learning algorithm. The learning algorithm must be embedded into a society of (virtual) agents. Single agents must have the capability of learning like the human brain. In addition, they must be proactive and able to generate and share their ideas with peers. Similar to scientific peer review or liking in social networks, agents must be able to rate the ideas of others. Based on their ratings, agents can become experts in certain domains, and thereby their reviews gain higher impact. The idea of agent generations also seems important: at some point agents die and make room for new ones, which learn the ideas of the previous generation, but perhaps in a condensed form and from a different angle. Therefore, it is also important that agents learn from other agents. This is the key difference from classical supervised learning: it is not us humans who teach the machines; they need to crystallize their own ideas and learn from each other.
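
As a rough sketch of what such a society might look like in code (all names, the reputation-weighted rating scheme, and the turnover rule here are my own simplifications, not a worked-out design):

    import random

    class Agent:
        """One member of a virtual society of peer-reviewing agents."""
        def __init__(self, name):
            self.name = name
            self.reputation = 1.0  # grows as peers rate the agent's ideas well
            self.knowledge = []    # ideas kept or inherited across generations

        def propose_idea(self):
            # Placeholder: a real agent would generate ideas from what it learned.
            return "idea-%d by %s" % (random.randint(0, 999), self.name)

        def review(self, idea):
            # Placeholder: a real review would evaluate the idea's content.
            return random.uniform(0.0, 1.0)

    def society_step(agents):
        # Every agent proposes an idea; peers rate it, and the ratings are
        # weighted by each reviewer's reputation (experts carry more weight).
        for author in agents:
            idea = author.propose_idea()
            ratings = [(peer.review(idea), peer.reputation)
                       for peer in agents if peer is not author]
            score = sum(r * w for r, w in ratings) / sum(w for _, w in ratings)
            author.reputation += score
            if score > 0.5:
                author.knowledge.append(idea)  # well-rated ideas are kept

        # Generational turnover: the least influential agent retires, and a
        # new agent inherits the best agent's ideas in condensed form.
        agents.sort(key=lambda a: a.reputation)
        retired, best = agents[0], agents[-1]
        successor = Agent("child-of-" + retired.name)
        successor.knowledge = list(best.knowledge)
        agents[0] = successor

    society = [Agent("agent-%d" % i) for i in range(5)]
    for _ in range(20):
        society_step(society)

Crucially, no human-provided labels appear anywhere in this loop: what counts as a good idea is decided entirely by the agents themselves.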

A virtual society of peer-reviewing agents has the potential to develop a form of intelligence that exceeds that of humans. However, since the agents live in their own world, they have to develop not only their own ideas, but also the very principles of interaction and communication, e.g., their own language. Important milestones for such a system would be the emergence of communication patterns, the formation of sociological structures and peer-reviewing mechanisms, and, last but not least, knowledge and intelligence.