Reinforcement Learning – NEAT, Part 1

The paper “Evolving Neural Networks through Augmenting Topologies”, published by Kenneth O. Stanley and Risto Miikkulainen in 2002 in the MIT Press journal Evolutionary Computation, claims that NeuroEvolution of Augmenting Topologies (NEAT) outperforms the best fixed-topology methods due to

  • a crossover method that works across different topologies,
  • ‘speciation’, which protects structural innovation, and
  • incremental growth from a minimal initial structure.

Since these are high-level claims, this blog post gives a theoretical introduction to the underlying concepts. (Future posts may build on this one.) There are many subcategories in the field of Artificial Intelligence; for this topic, two of them matter: Artificial Neural Networks and Genetic Algorithms.

Artificial Neural Networks

Artificial neural networks (ANNs) and genetic algorithms are among the most popular approaches to machine learning. Both are adaptive mechanisms a computer can use to learn from experience, by example, or by analogy. The goal of such learning mechanisms is to improve the performance of an intelligent system over time.

An ANN is modeled loosely on the brain. It consists of interconnected processors called neurons, which are analogous to biological neurons. Every link between two neurons has a numerical weight, which represents the importance of that input to the neuron. To learn, an ANN adjusts these weights: more important information is reinforced, while irrelevant information fades quickly. (cf. Negnevitsky, Michael: ‘Artificial Intelligence. A Guide to Intelligent Systems’, p. 214)
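How a weight encodes importance can be shown with a single artificial neuron. This is a minimal sketch, not NEAT itself; the sigmoid activation and the concrete weight values are chosen purely for illustration:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of its inputs,
    passed through a sigmoid activation (illustrative choice)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two equal inputs, but the second weight is much larger, so the
# second input contributes far more to the neuron's output.
out = neuron([1.0, 1.0], [0.1, 0.9], bias=0.0)
print(round(out, 3))  # → 0.731
```

Learning, in this picture, is nothing more than nudging those weight values until the network's outputs improve.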

Genetic Algorithms

Genetic (or evolutionary) algorithms (GAs) discover and recombine elements of a given model in order to improve it, using methods called recombination and mutation. Unlike many other AI approaches, a GA does not try to solve a problem directly; instead, it evaluates and optimizes candidate solutions according to a ‘fitness’ function.

GAs are built on evolutionary operators; the most important are crossover, mutation, and selection. Crossover combines characteristics of two elements, with the aim of producing improved offspring. Mutation applies random changes to an element’s characteristics (the probability of a mutation is usually set to a low value). Selection chooses the elements that undergo crossover and mutation. (cf. Kramer, Oliver: ‘A Brief Introduction to Continuous Evolutionary Optimization’, p. 6)
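The three operators can be illustrated with a minimal GA sketch. All names and parameter values here are illustrative assumptions, not taken from the NEAT paper: the population is a set of 20-bit strings, and fitness simply counts ones (the classic ‘OneMax’ toy problem):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def fitness(bits):
    # Toy fitness function: number of 1s in the bit string.
    return sum(bits)

def select(pop):
    # Tournament selection: pick the fitter of two random individuals.
    a, b = random.choice(pop), random.choice(pop)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover: combine characteristics of two parents.
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:]

def mutate(bits, rate=0.05):
    # Flip each bit with a low probability.
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(50):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(30)]

print(fitness(max(pop, key=fitness)))  # best fitness after evolution
```

After a few dozen generations the best individual is close to the optimum of 20, despite the algorithm never being told how to construct a good string — it only ever evaluated fitness.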

Neuro-Evolution

Neuro-Evolution (NE) is a hybrid form of AI: it combines the concepts of neural networks and genetic algorithms to improve the overall effectiveness and efficiency of the system.

NE searches for a behavior directly rather than optimizing a value function, which makes it effective on problems of rising complexity. Memory can also be represented through ‘recurrent’ connections in the ANN.
ANNs come in different topologies (recurrence is one of them); the next post will continue with this topic.
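How a recurrent connection acts as memory can be sketched with a single self-connected neuron. The weights here are arbitrary illustration values, not from any particular NE system:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run_recurrent(inputs, w_in=1.0, w_rec=0.5):
    """A neuron with a recurrent self-connection: its previous output
    is fed back as an extra input, acting as a simple memory."""
    state = 0.0
    outputs = []
    for x in inputs:
        state = sigmoid(w_in * x + w_rec * state)
        outputs.append(state)
    return outputs

# The same current input (0.0) yields different outputs depending on
# what the neuron saw one step earlier -- the feedback carries memory.
a = run_recurrent([1.0, 0.0])
b = run_recurrent([0.0, 0.0])
print(a[1] != b[1])  # → True
```

A purely feed-forward neuron, by contrast, would produce identical outputs for identical current inputs, regardless of history.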
