POMDPs
Reinforcement Learning Tutorial with Demo: DP (Policy and Value Iteration), Monte Carlo, TD Learning (SARSA, Q-Learning), Function Approximation, Policy Gradient, DQN, Imitation, Meta Learning, Papers, Courses, etc.
MDPs and POMDPs in Julia - An interface for defining, solving, and simulating fully and partially observable Markov decision processes on discrete and continuous spaces.
A C++ framework for MDPs and POMDPs with Python bindings
A framework to build and solve POMDP problems. Documentation: https://h2r.github.io/pomdp-py/
Implementation of the Deep Q-learning algorithm to solve MDPs
Online solver based on Monte Carlo tree search for POMDPs with continuous state, action, and observation spaces.
A gallery of POMDPs.jl problems
Concise and friendly interfaces for defining MDP and POMDP models for use with POMDPs.jl solvers (see the sketch after this list).
Interface for defining discrete and continuous-space MDPs and POMDPs in Python. Compatible with the POMDPs.jl ecosystem.
Pytorch code for "Learning Belief Representations for Imitation Learning in POMDPs" (UAI 2019)
Adaptive stress testing of black-box systems within POMDPs.jl
Julia Implementation of the POMCP algorithm for solving POMDPs
A project in which a robot plans its path from a source to a destination, relying only on observed evidence and its previous transitions.
Compressed belief-state MDPs in Julia for reinforcement learning and sequential decision making. Part of the POMDPs.jl community.
POMDP-based decision-making technique for social robots using ROS, Python, and Julia.
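
Several of the Julia entries above share the POMDPs.jl interface. As a rough illustration of how that ecosystem fits together, here is a minimal sketch that defines the classic tiger POMDP with a QuickPOMDP-style constructor and plans online with a POMCP solver. Keyword names and solver options follow my recollection of those packages' documented APIs (QuickPOMDPs, POMDPTools, BasicPOMCP) and may need adjustment against current releases.

```julia
# Minimal sketch (not taken from any repo above): the classic tiger POMDP defined
# with QuickPOMDPs and planned online with BasicPOMCP. Names follow the POMDPs.jl
# ecosystem documentation as I recall it; verify against the current releases.
using POMDPs
using QuickPOMDPs: QuickPOMDP
using POMDPTools: Deterministic, Uniform, SparseCat
using BasicPOMCP: POMCPSolver

tiger = QuickPOMDP(
    states       = ["tiger-left", "tiger-right"],
    actions      = ["open-left", "open-right", "listen"],
    observations = ["hear-left", "hear-right"],
    initialstate = Uniform(["tiger-left", "tiger-right"]),
    discount     = 0.95,

    # Listening leaves the tiger where it is; opening a door resets the problem.
    transition = function (s, a)
        a == "listen" ? Deterministic(s) : Uniform(["tiger-left", "tiger-right"])
    end,

    # Listening gives a noisy (85% accurate) hint about the tiger's location.
    observation = function (a, sp)
        if a == "listen"
            sp == "tiger-left" ? SparseCat(["hear-left", "hear-right"], [0.85, 0.15]) :
                                 SparseCat(["hear-right", "hear-left"], [0.85, 0.15])
        else
            Uniform(["hear-left", "hear-right"])
        end
    end,

    # Small cost to listen, large penalty for opening the tiger's door.
    reward = function (s, a)
        if a == "listen"
            -1.0
        elseif (s == "tiger-left" && a == "open-left") || (s == "tiger-right" && a == "open-right")
            -100.0
        else
            10.0
        end
    end,
)

# Online Monte Carlo tree search planner in the style of POMCP.
planner = solve(POMCPSolver(tree_queries = 1000), tiger)
b0 = initialstate(tiger)     # initial belief: uniform over the two tiger positions
@show action(planner, b0)    # best action under the current belief, likely "listen"
```

Because the model object only depends on the shared interface, the same definition could in principle be handed to an offline solver from the same ecosystem instead of the online POMCP planner shown here.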