---
title: "Cartpole: for newcomers to RL"
summary: ''
difficulty: 1 # out of 3
---
<p>The <a href="https://gym.openai.com/envs/CartPole-v0">Cartpole</a> environment is one of the simplest MDPs. It is extremely low-dimensional, with a four-dimensional observation space and only two actions. The goal of this exercise is to implement several RL algorithms in order to get practical experience with such methods.</p>

<p>The small size and simplicity of this environment make it possible to run very quick experiments, which is essential when learning the basics.</p>

<p>Start with a simple linear model (that has only four parameters), and use the sign of the weighted sum to choose between the two actions. Minimal sketches of each of the algorithms below appear at the end of this page.</p>
<ul>
<li>The random guessing algorithm: generate 10,000 random configurations of the model's parameters, and pick the one that achieves the best cumulative reward. It is important to choose the distribution over the parameters correctly.</li>
<li>The hill-climbing algorithm: start with a random setting of the parameters, add a small amount of noise to the parameters, and evaluate the new parameter configuration. If it performs better than the old configuration, discard the old configuration and accept the new one. Repeat this process for some number of iterations. How long does it take to achieve perfect performance?</li>
<li>The policy gradient algorithm: here, instead of choosing the action as a deterministic function of the sign of the weighted sum, make it so that the action is chosen randomly, but where the distribution over the two actions depends on the numerical output of the inner product. Policy gradient prescribes a principled parameter update rule [<a href="https://www.youtube.com/watch?v=oPGVsoBonLM">1</a>, <a href="http://www0.cs.ucl.ac.uk/staff/D.Silver/web/Teaching_files/pg.pdf">2</a>]. Your goal is to implement this algorithm for the simple linear model, and see how long it takes to converge.</li>
</ul>
<p>What happens to the above algorithms when the policy is a neural network with tens of thousands of parameters?</p>
<hr />
<h3>Notes</h3>
<p>This is a simple task that is meant to help newcomers gain practical experience with implementing simple RL algorithms.</p>
<h3>Solutions</h3>
<p>Results and some intuition behind the algorithms are given in <a href="http://kvfrans.com/simple-algoritms-for-solving-cartpole/">this post</a>, and <a href="https://github.com/kvfrans/openai-cartpole">here</a> is the code used.</p>
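<hr />
<h3>Sketches</h3>
<p>The snippets below are minimal, illustrative sketches rather than reference solutions. They assume the classic <code>gym</code> API, in which <code>reset()</code> returns an observation and <code>step()</code> returns an <code>(observation, reward, done, info)</code> tuple. First, a helper that rolls out one episode of the deterministic linear policy described above:</p>
<pre><code>import numpy as np
import gym

def run_episode(env, weights):
    """Roll out one episode; the action is the sign of the weighted sum."""
    obs = env.reset()
    total_reward = 0.0
    done = False
    while not done:  # CartPole-v0's time limit caps episodes at 200 steps
        action = 1 if np.dot(weights, obs) > 0 else 0
        obs, reward, done, _ = env.step(action)
        total_reward += reward
    return total_reward</code></pre>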
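<p>Random guessing then reduces to sampling 10,000 parameter vectors and keeping the best one. The uniform distribution on [-1, 1] below is one reasonable choice; experimenting with this distribution is part of the exercise:</p>
<pre><code>env = gym.make('CartPole-v0')
best_reward, best_weights = -np.inf, None
for _ in range(10000):
    # The sampling distribution matters: this one covers both signs of each weight.
    weights = np.random.uniform(-1.0, 1.0, size=4)
    reward = run_episode(env, weights)
    if reward > best_reward:
        best_reward, best_weights = reward, weights</code></pre>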
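<p>Hill climbing keeps a single parameter vector and accepts a perturbed copy only when it scores better. The noise scale of 0.1 below is an arbitrary illustrative choice:</p>
<pre><code>env = gym.make('CartPole-v0')
weights = np.random.uniform(-1.0, 1.0, size=4)
best_reward = run_episode(env, weights)
for step in range(10000):
    candidate = weights + 0.1 * np.random.randn(4)  # small Gaussian perturbation
    reward = run_episode(env, candidate)
    if reward > best_reward:
        weights, best_reward = candidate, reward
    if best_reward >= 200:  # perfect score on CartPole-v0
        break</code></pre>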
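<p>For the policy gradient method, one standard instantiation is REINFORCE with a Bernoulli policy: the probability of pushing right is a sigmoid of the inner product, so the gradient of the log-probability of the chosen action is <code>(action - p_right) * obs</code>. The sketch below uses undiscounted returns-to-go and a fixed learning rate, both of which are illustrative choices:</p>
<pre><code>def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

env = gym.make('CartPole-v0')
weights = np.zeros(4)
learning_rate = 0.01
for episode in range(2000):
    obs = env.reset()
    grads, rewards = [], []
    done = False
    while not done:
        p_right = sigmoid(np.dot(weights, obs))  # probability of action 1
        action = 1 if p_right > np.random.rand() else 0
        # gradient of log pi(action | obs) for a Bernoulli(sigmoid(w . x)) policy
        grads.append((action - p_right) * obs)
        obs, reward, done, _ = env.step(action)
        rewards.append(reward)
    # REINFORCE update: weight each step's gradient by the return from that step on
    returns = np.cumsum(rewards[::-1])[::-1]
    for grad, ret in zip(grads, returns):
        weights += learning_rate * ret * grad</code></pre>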