Nate Kohl's Publications



Evolving Neural Networks for Fractured Domains

Evolving Neural Networks for Fractured Domains. Nate Kohl and Risto Miikkulainen. In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1405–1412, July 2008.
http://www.sigevo.org/gecco-2008/

Download

[PDF] 183.7 kB

Abstract

Evolution of neural networks, or neuroevolution, has been successful on many low-level control problems such as pole balancing, vehicle control, and collision warning. However, high-level strategy problems that require the integration of multiple sub-behaviors have remained difficult for neuroevolution to solve. This paper proposes the hypothesis that such problems are difficult because they are fractured: the correct action varies discontinuously as the agent moves from state to state. This hypothesis is evaluated on several examples of fractured high-level reinforcement learning domains. Standard neuroevolution methods such as NEAT indeed have difficulty solving them. However, a modification of NEAT that uses radial basis function (RBF) nodes to make precise local mutations to network output is able to do much better. These results provide a better understanding of the different types of reinforcement learning problems and the limitations of current neuroevolution methods. Thus, they lay the groundwork for creating the next generation of neuroevolution algorithms that can learn strategic high-level behavior in fractured domains.
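
The role of the RBF nodes can be pictured with a short sketch. The Python snippet below is an illustrative assumption rather than the paper's actual implementation (the class name RBFNode and its parameters are invented here); it shows a Gaussian RBF node whose response is confined to a small region of input space, so that mutating its weight, center, or width changes the network's output only locally:

    import numpy as np

    # Hypothetical sketch of a Gaussian RBF node; names and parameters are
    # illustrative assumptions, not the encoding used in the paper.
    class RBFNode:
        """A radial basis function node with a Gaussian activation."""

        def __init__(self, center, width=1.0, weight=1.0):
            self.center = np.asarray(center, dtype=float)  # point in input space where the node responds
            self.width = width                             # controls how local the response is
            self.weight = weight                           # contribution to the network output

        def activate(self, x):
            """Gaussian response: near the weight value at the center, ~0 elsewhere."""
            dist_sq = np.sum((np.asarray(x, dtype=float) - self.center) ** 2)
            return self.weight * np.exp(-dist_sq / (2.0 * self.width ** 2))

    # Because the activation decays quickly away from the center, adjusting a
    # single RBF node changes the output only in a small neighborhood of the
    # state space, leaving behavior elsewhere intact.
    node = RBFNode(center=[0.5, 0.5], width=0.1, weight=2.0)
    print(node.activate([0.5, 0.5]))   # strong response at the center
    print(node.activate([0.9, 0.1]))   # negligible response far away

This locality is the intuition behind the "precise local mutations" described in the abstract: a fractured policy, whose correct action changes discontinuously across the state space, can be approximated piece by piece without disturbing already-learned behavior in other regions.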

BibTeX Entry

@InProceedings{kohl:gecco08,
  author = "Nate Kohl and Risto Miikkulainen",
  title = "Evolving Neural Networks for Fractured Domains",
  booktitle = "Proceedings of the Genetic and Evolutionary Computation Conference",
  year = "2008",
  month = "July",
  pages = "1405--1412",
  abstract = {
  Evolution of neural networks, or neuroevolution, has been successful
  on many low-level control problems such as pole balancing, vehicle
  control, and collision warning.  However, high-level strategy
  problems that require the integration of multiple sub-behaviors have
  remained difficult for neuroevolution to solve.  This paper proposes
  the hypothesis that such problems are difficult because they are
  fractured: the correct action varies discontinuously as the agent
  moves from state to state.  This hypothesis is evaluated on several
  examples of fractured high-level reinforcement learning domains.
  Standard neuroevolution methods such as NEAT indeed have difficulty
  solving them.  However, a modification of NEAT that uses radial
  basis function (RBF) nodes to make precise local mutations to
  network output is able to do much better.  These results provide a
  better understanding of the different types of reinforcement
  learning problems and the limitations of current neuroevolution
  methods.  Thus, they lay the groundwork for creating the next
  generation of neuroevolution algorithms that can learn strategic
  high-level behavior in fractured domains.
  },
  wwwnote = {<a href="http://www.sigevo.org/gecco-2008/">http://www.sigevo.org/gecco-2008/</a>},
  bib2html_pubtype = {Refereed Conference},
  bib2html_rescat = {Machine Learning}
}

Generated by bib2html.pl (written by Patrick Riley) on Sun May 15, 2011 08:27:15