

libPG -- A high-performance distributed RL library.


This is a high-speed C++ implementation of many popular RL algorithms and features for MDPs and POMDPs, including:

  1. Baxter and Bartlett's GPOMDP/OLPOMDP
  2. Jan Peters's Natural Actor-Critic
  3. Various PSR algorithms from Satinder Singh's papers
  4. Online PSRs from McCracken and Bowling
  5. HMM estimation of hidden state from observations
  6. Finite-history methods
  7. Optimisations for multi-agent/distributed RL applications

Requires the uBLAS component of the Boost library. Having LAPACK and ATLAS installed also unlocks additional features.

This is the source code for a few papers, including the base library for anything to do with the Factored Policy Gradient Planner (FPG), the Brazil Planner, and the RL with PSRs/POMDPs paper.

FPG (version IPC-06)


A probabilistic planner that casts planning as a reinforcement-learning problem. Its principle is to optimise a controller's parameters (such as a neural network whose input is a state and whose output is a decision) using a domain simulator. It relies on the libPG library (see above).

Article and Report

[Article.tgz] - [Report.tgz]

Example article and report written with LaTeX. These should give beginners a good start, but they are no substitute for reading the documentation.

Usage (under Linux):

LaTeX flipbook package


[flipbook-doc.pdf] - [flipbook-ex.pdf]

An example showing how to make a flip book (or flick book) with LaTeX.

Usage (under Linux):

Last modified: Mon May 8 15:42:06 CEST 2017