Winter 2001–2002

The Emergence of Social Inequality among Robots

Why can’t we AI just get along?

Luc Steels

In 1949 the British cyberneticist Grey Walter and his technician, “Bunny” Warren, built two electronic tortoises, Elmer and Elsie. The tortoises were capable of avoiding obstacles, moving toward a light source (phototaxis), and parking in a hutch to recharge their batteries. They were built with analog electronics that connected simple sensors with motors. Grey Walter had designed these robots to investigate how a “living brain” manages to exhibit flexible control, learning, and adaptivity vis-à-vis the environment. And indeed, Elmer and Elsie showed rather surprising emergent behaviors that looked remarkably similar to those of some animals. They would zigzag to the charging station when their batteries were low, avoid obstacles on the way, and perform a kind of dance with each other due to the intermingling of attraction and repulsion. People ascribed all sorts of behaviors to these robots, as well as emotional states like fear and anger.

Elmer and Elsie are among the first examples of research in Artificial Intelligence, a field that has continued to generate fascination and anxiety. Many people believe that Artificial Intelligence researchers try to build machines that not only look like they exhibit life-like behavior, but actually possess qualities similar to those of animals and humans, such as life, intelligence, emotion, or consciousness. This is the image that science fiction myths from Fritz Lang’s Metropolis to Steven Spielberg’s A.I. evoke, and unfortunately some A.I. researchers, like Ray Kurzweil or Marvin Minsky, do not hesitate to propagate it as well. Thus in Spielberg’s story, the robot-child David is suggested to be capable not only of expressing and recognizing emotion (which appears possible with current technology), but of actually possessing one of the most profound human emotions, namely love. The catastrophic consequences of this are frightening and well laid out in the film.

However, actual research in Artificial Intelligence is very different. The goal is to research the mechanisms underlying behavior, thought, language, learning, and sociality by conducting careful experiments with artificial systems. These systems embody certain assumptions, and the experiments teach us what conclusions inescapably follow from those assumptions. They might show, for example, that an image cannot be segmented meaningfully on the basis of color alone, or that language understanding requires a sophisticated model of the goals and intentions of the speaker, or that a network of rather simple units can self-organize into an efficient face recognizer. Artificial Intelligence research is therefore similar to theoretical biology or theoretical economics. It explores what explanations of cognition or sociality are theoretically possible and therefore plausible. Of course, various applications can then be drawn from these insights and put to use, for example, in computer programs that search the web or in robotic systems that inspect and clean water pipes. But these applications are nowhere near the complete humanoid robot of science fiction myths. We must distinguish science from science fiction.

In the mid-1990s, Oxford ethologist David McFarland and I decided to revive Grey Walter’s experiments in order to explore some of the same issues that had interested him fifty years earlier. At that time, my students and I had been building a variety of robots with animal-like shapes and properties—sometimes referred to as “animats”—at the Artificial Intelligence laboratory of the University of Brussels. We even built a fish that swam around in the university’s swimming pool. Compared to Grey Walter, we had much more sophisticated sensors and actuators at our disposal and could use digital computers to implement much more complex brains. Moreover, many additional insights had been gained from more realistic models of neural networks and from “artificial life” simulations of the collective dynamics of robots, and we wanted to test their relevance to fundamental questions of animal behavior.

Each of the robots in our new experiment had a body, a variety of sensors (infrared, touch, photosensors), two motors, a battery, and dedicated on-board processors. They could roam around in an arena, moving at random and avoiding obstacles. The arena also contained a charging station into which the robots could slide to recharge their batteries. The charging station had a light bulb attached to it so that the robots could rely on phototaxis to guide themselves into it. All of this was similar to Grey Walter’s experiment and, indeed, we witnessed many of the same behaviors. But then we added something extra: competitors and the need to do work.

The arena also featured a series of boxes with slits. The boxes contained (infrared) lights that absorbed some of the energy flowing into the total ecosystem. Energy was therefore drained away that would otherwise go to the charging station. The robots could push against a box, causing its light to go out and thus making more energy available to the charging station. The robots needed to do this; otherwise there was not enough energy for them to recharge themselves. Some time after being turned off, the light in a box would turn on again, causing renewed competition for the total energy in the ecosystem. We could experimentally regulate how much energy the boxes consumed and how quickly a dimmed light came back on, thus determining how much work the robots had to do before there was enough energy in the charging station for them. In fact we constructed the environment so that one robot could not survive on its own: it needed others to keep pushing against the boxes, dimming the lights. The robots had to cooperate and compete for the available resources. The competition was ruthless. A robot that did not have adequate energy simply died.
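
Conceptually, the arena implements a small, closed energy economy. The sketch below, in Python, captures only that bookkeeping; the inflow, lamp drain, and re-lighting delay are invented numbers chosen for illustration, not the parameters of the actual experiment.

```python
# A minimal sketch of the arena's energy bookkeeping.
# INFLOW, LAMP_DRAIN, and RELIGHT_DELAY are illustrative assumptions.

INFLOW = 10.0         # energy entering the ecosystem per time step
LAMP_DRAIN = 4.0      # energy each lit lamp absorbs per time step
RELIGHT_DELAY = 5     # steps before a dimmed lamp turns back on


class Lamp:
    """One box with a light that competes with the charging station."""

    def __init__(self):
        self.lit = True
        self.timer = 0

    def push(self):
        # A robot pushes the box: the light goes out for a while.
        self.lit = False
        self.timer = RELIGHT_DELAY

    def step(self):
        # A dimmed lamp eventually comes back on, renewing the competition.
        if not self.lit:
            self.timer -= 1
            if self.timer == 0:
                self.lit = True


def step_ecosystem(lamps, station_energy):
    """Whatever the lit lamps do not absorb flows into the charging station."""
    drained = LAMP_DRAIN * sum(1 for lamp in lamps if lamp.lit)
    station_energy += max(0.0, INFLOW - drained)
    for lamp in lamps:
        lamp.step()
    return station_energy


lamps = [Lamp() for _ in range(3)]
energy = 0.0
for t in range(30):
    if t % 4 == 0:                      # a robot occasionally dims a lamp
        lamps[t % len(lamps)].push()
    energy = step_ecosystem(lamps, energy)
print(f"energy accumulated in the charging station: {energy:.1f}")
```

With these (made-up) numbers, all three lit lamps absorb more than the inflow, so nothing reaches the charging station unless a robot keeps dimming them, which is the squeeze the experiment relied on.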

This situation models one faced by many animals in nature. There is an environment that contains sources of food, but the animals have to do something to get that food. There are typically many competitors for it, usually animals of the same species, and occasionally cooperation is needed. For example, birds with a nest of eggs need to take turns so that each can go and feed, and predators may need to hunt in groups to bring down larger prey. Ethological investigations over the past few decades have shown that animals adapt remarkably well to their environment. They exploit the available resources very efficiently and will cooperate when needed. We managed to elicit these behaviors in the robots as well, and investigated in particular what kind of learning and adaptation strategies the robots needed to adopt to ensure group survival. But we also witnessed the emergence of a remarkable pattern of behavior that was neither preprogrammed nor expected.

In the ecosystem we constructed, two robots starting as equals needed to coordinate their efforts to survive. When robot one (R1) was in the charging station, the other robot (R2) had to keep pushing against the energy-draining box. When R2 was exhausted, it would go toward the charging station and push R1 out so that it could recharge itself. R1 would then start pushing against the box, and so on. As mentioned earlier, the timing of the cycle was not preprogrammed and indeed it could not be specified in advance because the physical properties of batteries are unknown and change with time. The robots needed to adapt their behavior to be compatible with the situation. They would monitor how much energy was available at the charging station while they were recharging and would work harder next time if there was not enough energy. If there was still plenty of energy in the charging station after they had recharged themselves, they would work a bit less. So each robot would individually optimize its own behavior with the idea that global optimality would emerge from the individual behaviors.
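
That individual adjustment can be summarized in a few lines. In the sketch below, the “not enough” and “plenty” thresholds and the adjustment step are my own illustrative assumptions; the point is only that each robot changes its own work time on the basis of the energy it found while recharging.

```python
# A sketch of the local adaptation rule: each robot adjusts only its own
# work time, based on the energy it observed in the charging station.
# LOW, HIGH, and STEP are illustrative assumptions, not measured values.

LOW, HIGH = 20.0, 60.0   # hypothetical "not enough" / "plenty" energy levels
STEP = 1.0               # how much the work time changes per cycle


def adapt_work_time(work_time, energy_seen):
    if energy_seen < LOW:                    # station nearly empty:
        return work_time + STEP              #   work harder next time
    if energy_seen > HIGH:                   # plenty left over:
        return max(STEP, work_time - STEP)   #   work a bit less
    return work_time                         # otherwise keep the routine


work = 10.0
for observed in [15.0, 18.0, 70.0, 40.0]:
    work = adapt_work_time(work, observed)
    print(f"observed {observed:>4.0f} -> next work time {work:.0f}")
```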

Indeed, we observed that after a while the robots regulated their turn-taking to be almost optimally compatible with the pressures in the environment that we set up for them. Both robots were taking roughly equal amounts of time to work and to recharge. But then we observed something strange and unforeseen. One robot managed to maneuver itself into a situation where it worked much less. The other robot was pushing the boxes twice as long as the first, and was on the verge of utter collapse due to lack of energy. The “master” enjoyed life by roaming around freely in the arena, only occasionally pushing the boxes. How was this possible? How could two robots that started with exactly the same hardware and software nevertheless develop a relationship where one appears to exploit the other?

The answer turns out to be surprisingly simple and is in line with research on game theory in economics. The robots behave in the real world and have only limited “knowledge” about this world. Moreover, the real world introduces various random factors that can disturb the equilibrium. What happens here is that due to chance, one robot, say R1, happens to be a bit less efficient in pushing against the boxes. After all, this is a non-trivial task. It requires detecting the boxes, moving towards them, and hitting them. When R1 is less efficient or less lucky in a particular run, less energy is available for R2, which therefore decides to work a bit more when it is its turn to push the boxes. But this makes more energy available to R1, which therefore decides to work even less. And this causes R2 to work just a bit more again next time to compensate for the lack of energy available to it when entering the charging station. So there is a hidden positive feedback loop in the system. Once the equilibrium is broken, there is a steady evolution towards exploitation. In the parlance of dynamical systems, we would say that there is “symmetry breaking.” The system, consisting of the cycles of pushing and recharging, moves from one stable attractor (every robot works an equal amount of time) to another stable attractor (one robot works twice as much as the other one).
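
The feedback loop is easy to reproduce numerically. The toy simulation below is not the controller that ran on the robots; it simply couples two copies of the adjustment rule sketched earlier and gives one agent a marginally lower pushing efficiency. All constants, including the clamps that stand in for physical limits such as battery capacity, are invented for illustration.

```python
# A toy reproduction of the positive feedback loop: two agents start with
# equal work times; agent 0 is given a very slightly lower efficiency.
# Each agent adjusts only its own work time. All numbers are illustrative.

NEED = 30.0                       # energy an agent must find while recharging
GAIN = 0.1                        # sensitivity of the adjustment
MIN_WORK, MAX_WORK = 5.0, 15.0    # stand-ins for physical limits


def simulate(cycles=60):
    work = [10.0, 10.0]           # equal division of labor at the start
    efficiency = [2.95, 3.0]      # agent 0 pushes slightly less effectively
    for cycle in range(cycles):
        for i in (0, 1):
            other = 1 - i
            # While agent i recharges, the other agent has been working:
            available = efficiency[other] * work[other]
            surplus = available - NEED
            # Shortfall -> work more next time; surplus -> work less.
            work[i] = min(MAX_WORK, max(MIN_WORK, work[i] - GAIN * surplus))
        if cycle % 10 == 0:
            print(f"cycle {cycle:2d}: work times {work[0]:5.2f} vs {work[1]:5.2f}")
    return work


simulate()
```

In this toy model the run ends with one agent pinned at the maximum work time and the other at the minimum: the equal-share equilibrium turns out to be unstable, while the unequal division of labor is a stable attractor, which is the qualitative pattern we saw in the arena.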

What to make of the stunning results of this experiment? Recall that the methodology of Artificial Intelligence research is to examine the consequences of certain assumptions. Here we examined what happens when two agents need to cooperate in order to exploit the environmental resources available for their survival. This sort of situation is very common, at the level of families, workplaces, and nations. The experiment does not imply that inequalities are inevitable, or that those who, like R1, were lucky enough to end up on top have a kind of birthright to their position. However, it warns us that inequalities may develop even if the rules are completely fair and the initial conditions unbiased. Applied to human societies, this has the disturbing implication that even if every agent within a society attempts a fair distribution of labor and benefit, gross inequalities will nevertheless have a tendency to develop. The experiment does not say what could be done about it. Do we need an external controller that monitors the distribution of labor and occasionally sweeps in to rearrange matters? Do the agents need a way to keep track of what work others are doing, and the right to re-equilibrate if they judge the situation to be unfair? Should work be divided equally by a central authority that ensures that no one deviates? What happens when resources expand or diminish, or when tasks need to become specialized? These questions have occupied social scientists for centuries, and we are now finding in A.I. new tools for their examination.

Further Reading
Scott Camazine, Jean-Louis Deneubourg, Nigel Franks, James Sneyd, Guy Theraulaz, & Eric Bonabeau, Self-Organization in Biological Systems (Princeton: Princeton University Press, 2001).
Grey Walter, The Living Brain (New York: W. W. Norton, 1963).
David McFarland, Artificial Ethology (Oxford: Oxford University Press, 2000).
Luc Steels, “The Artificial Life Route to Artificial Intelligence,” in Chris Langton, ed., Artificial Life (Cambridge, Mass.: MIT Press, 1997).
Luc Steels, “Language Games for Autonomous Robots,” in IEEE Intelligent Systems, September/October 2001, pp. 16–22.

Luc Steels is professor of artificial intelligence at the University of Brussels and director of the Sony Computer Science Laboratory in Paris. He is the author of many books. His work on the origins of language in populations of robots has recently been featured in various exhibitions, including “Laboratorium” (Antwerp, 1999) and “NO1SE” (Cambridge and London, 2000).
