Anyone who has ever bravely volunteered to coach a youth soccer team is familiar with the blank stares that ensue when trying to explain the offside rule. The logic, which combines moving players, the position of the ball and the timing of a pass, is a challenge for 10-year-old brains to grasp (let alone 40-year-old brains).
Imagine trying to teach this rule to an inanimate, soccer-playing robot, along with all of the other rules, movements and strategies of the game.
Now researchers have developed an automated training method in which a robot learns by observing and copying human behavior.
Why are scientists teaching robots to play soccer? The short-term motivation is to win the annual RoboCup competition, the "World Cup" of robotic development. International teams build real robots that go head to head with no human control during the game. This year's competition takes place in Graz, Austria, in June.
Here's the final match from the 2008 RoboCup:
The long-term goal is to develop the underlying technologies to build more practical robots, including an offshoot called RoboCup Rescue that develops disaster search and rescue robotics.
In a study released in the March 2009 online edition of Expert Systems with Applications, titled "Programming Robosoccer agents by modeling human behavior", a team from Carlos III University of Madrid used machine learning to teach a software agent several basic, low-level reactions to visual stimuli.
"The objective of this research is to program a player, currently a virtual one, by observing the actions of a person playing in the simulated RoboCup league," said Ricardo Aler, lead author of the study.
In addition to actual robots, RoboCup also has a simulation software league that is more like a video game. In the study, human players were presented with simple game situations and were given a limited set of actions they could take.
Their responses were recorded and used to program a "clone" agent with many if-then scenarios based on the human's behavior. By automating this learning process, the agent can build its own knowledge collection by observing many different game scenarios.
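The paper's actual system applies machine-learning algorithms to logs from the RoboCup simulator; as a rough illustration of the idea, here is a minimal Python sketch of behavioral cloning, where a "clone" agent tallies the actions a human took in each observed situation and replays the majority choice. All of the state and action names below are invented for the example, not taken from the study.

```python
from collections import Counter, defaultdict

class CloneAgent:
    """Learns simple if-then rules by tallying which action a human
    took most often in each observed game situation."""

    def __init__(self):
        # Maps a game situation to a count of human actions seen there.
        self.observations = defaultdict(Counter)

    def observe(self, state, action):
        # Record one (situation, human action) pair from a logged game.
        self.observations[state][action] += 1

    def act(self, state, default="move_to_ball"):
        # Replay the most common human choice for this situation,
        # falling back to a default action for unseen situations.
        if state in self.observations:
            return self.observations[state].most_common(1)[0][0]
        return default

# Train the clone on a few logged human decisions (hypothetical states).
agent = CloneAgent()
agent.observe(("near_ball", "goal_ahead"), "shoot")
agent.observe(("near_ball", "goal_ahead"), "shoot")
agent.observe(("near_ball", "goal_ahead"), "dribble")
agent.observe(("far_from_ball", "goal_ahead"), "move_to_ball")

print(agent.act(("near_ball", "goal_ahead")))     # majority vote: "shoot"
print(agent.act(("far_from_ball", "goal_left")))  # unseen state: "move_to_ball"
```

A real system would use richer state features and a proper learning algorithm (the study's rules come from machine learning over many game scenarios), but the core loop is the same: observe human state-action pairs, then generalize them into decision rules.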
The team has seen early success at learning rudimentary actions like moving towards the ball and choosing when to shoot, but the goal is to advance to higher-level cognition, including the dreaded offside rule.
Equipping the physical robots with this knowledge will give them a richer set of actions to choose from when they are exposed to visual stimuli from the playing field.
Previous attempts at machine learning relied on the robot or software to learn rules and reactions entirely on its own, as with neural networks trained from scratch. Aler's team hopes to jump-start the process by seeding the knowledge base with human players' choices.
While current video soccer games like FIFA 2009 already use a detailed simulation engine, transferring this to the physical world of robots is the key to future research.
RoboCup organizers are not shy about their ultimate tournament in the year 2050. According to their website, "By mid-21st century, a team of fully autonomous humanoid robot soccer players shall win the soccer game, comply with the official rules of the FIFA, against the winner of the most recent World Cup."
That's right; they plan on the robots beating the current, human World Cup champions. "It's like what happened with the Deep Blue computer when it managed to beat Kasparov at chess in 1997," says Aler.
Maybe they can also build a robot linesman who can always get the offside call correct!