Nate Kohl's Publications



Evolving Keepaway Soccer Players through Task Decomposition

Evolving Keepaway Soccer Players through Task Decomposition. Shimon Whiteson, Nate Kohl, Risto Miikkulainen, and Peter Stone. In Proceedings of the Genetic and Evolutionary Computation Conference 2003, pp. 356–368, July 2003.
GECCO 2003

Download

[PDF] (148.2kB)  [gzipped postscript] (69.5kB)

Abstract

In some complex control tasks, learning a direct mapping from an agent's sensors to its actuators is very difficult. For such tasks, decomposing the problem into more manageable components can make learning feasible. In this paper, we provide a task decomposition, in the form of a decision tree, for one such task. We investigate two different methods of learning the resulting subtasks. The first approach, layered learning, trains each component sequentially in its own training environment, aggressively constraining the search. The second approach, coevolution, learns all the subtasks simultaneously from the same experiences and puts few restrictions on the learning algorithm. We empirically compare these two training methodologies using neuro-evolution, a machine learning algorithm that evolves neural networks. Our experiments, conducted in the domain of simulated robotic soccer keepaway, indicate that neuro-evolution can learn effective behaviors and that the less constrained coevolutionary approach outperforms the sequential approach. These results provide new evidence of coevolution's utility and suggest that solution spaces should not be over-constrained when supplementing the learning of complex tasks with human knowledge.

BibTeX Entry

@InProceedings{whiteson:gecco03,
   author = "Shimon Whiteson and Nate Kohl and Risto Miikkulainen and Peter Stone",
   title = "Evolving Keepaway Soccer Players through Task Decomposition",
   booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference 2003",
   year = "2003",
   month = "July",
   pages = "356--368",
   abstract = {
                 In some complex control tasks, learning a direct
                 mapping from an agent's sensors to its actuators is
                 very difficult.  For such tasks, decomposing the
                 problem into more manageable components can make
                 learning feasible.  In this paper, we provide a task
                 decomposition, in the form of a decision tree, for
                 one such task.  We investigate two different methods
                 of learning the resulting subtasks.  The first
                 approach, layered learning, trains each component
                 sequentially in its own training environment,
                 aggressively constraining the search.  The second
                 approach, coevolution, learns all the subtasks
                 simultaneously from the same experiences and puts few
                 restrictions on the learning algorithm.  We
                 empirically compare these two training methodologies
                 using neuro-evolution, a machine learning algorithm
                 that evolves neural networks.  Our experiments,
                 conducted in the domain of simulated robotic soccer
                 keepaway, indicate that neuro-evolution can learn
                 effective behaviors and that the less constrained
                 coevolutionary approach outperforms the sequential
                 approach.  These results provide new evidence of
                 coevolution's utility and suggest that solution
                 spaces should not be over-constrained when
                 supplementing the learning of complex tasks with
                 human knowledge.
   },
   wwwnote = {<a href="http://gal4.ge.uiuc.edu:8080/GECCO-2003/">GECCO 2003</a>},
   bib2html_pubtype = {Refereed Conference},
   bib2html_rescat = {Machine Learning, Robot Soccer}
}

Generated by bib2html.pl (written by Patrick Riley) on Sun May 15, 2011 08:27:15