---
title: 'Better sample efficiency for TRPO'
summary: ''
difficulty: 3 # out of 3
---

Trust Region Policy Optimization (TRPO) is a scalable implementation of a second-order policy gradient algorithm that is highly effective on both continuous and discrete control problems. One of TRPO's strengths is that its hyperparameters are relatively easy to set: a setting that performs well on one task tends to perform well on many others. But despite these significant advantages, TRPO could be more data-efficient.
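For orientation, the sketch below shows the core trust-region update that TRPO performs each iteration: a natural-gradient direction computed by conjugate gradients on Fisher-vector products, scaled to a KL-constrained step, followed by a backtracking line search. It is a minimal illustration only; the callables `policy_gradient`, `fisher_vector_product`, and `surrogate_loss` are assumed placeholders, not functions from any particular TRPO codebase, and a complete implementation would also re-check the KL constraint after each candidate step.

```python
import numpy as np


def conjugate_gradient(fvp, g, iters=10, tol=1e-10):
    """Approximately solve F x = g, where the Fisher matrix F is accessed
    only through Fisher-vector products fvp(v) = F v."""
    x = np.zeros_like(g)
    r = g.copy()
    p = g.copy()
    rdotr = r @ r
    for _ in range(iters):
        Fp = fvp(p)
        alpha = rdotr / (p @ Fp + 1e-8)
        x += alpha * p
        r -= alpha * Fp
        new_rdotr = r @ r
        if new_rdotr < tol:
            break
        p = r + (new_rdotr / rdotr) * p
        rdotr = new_rdotr
    return x


def trpo_step(theta, policy_gradient, fisher_vector_product, surrogate_loss,
              max_kl=0.01, backtrack_coeff=0.8, backtrack_iters=10):
    """One KL-constrained policy update: natural-gradient direction from CG,
    step size from the trust-region radius, then a backtracking line search."""
    g = policy_gradient(theta)
    step_dir = conjugate_gradient(fisher_vector_product, g)
    # Scale the step so it satisfies the quadratic approximation of the KL constraint.
    sFs = step_dir @ fisher_vector_product(step_dir)
    step_size = np.sqrt(2.0 * max_kl / (sFs + 1e-8))
    full_step = step_size * step_dir
    old_loss = surrogate_loss(theta)
    for i in range(backtrack_iters):
        theta_new = theta + (backtrack_coeff ** i) * full_step
        if surrogate_loss(theta_new) > old_loss:  # surrogate is being maximized
            return theta_new
    return theta  # reject the step if no improvement is found
```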

The problem is to modify a good TRPO implementation so that it converges on all of Gym's MuJoCo environments using 3x less experience, without a degradation in final average reward. Ideally, the new code should use the same hyperparameter setting for every problem.
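The success criterion above can be checked mechanically. Below is a hypothetical harness sketch, assuming placeholder training entry points `train_baseline` and `train_modified` (not part of Gym or any existing TRPO codebase) that each train on a given environment for a fixed timestep budget and return the final average reward; the environment list and budgets are illustrative.

```python
import gym

# Illustrative subset of Gym's MuJoCo environments.
MUJOCO_ENVS = ["HalfCheetah-v2", "Hopper-v2", "Walker2d-v2",
               "Ant-v2", "Swimmer-v2", "Reacher-v2"]

BASELINE_TIMESTEPS = 1_000_000                 # budget the reference TRPO uses (assumed)
REDUCED_TIMESTEPS = BASELINE_TIMESTEPS // 3    # target budget for the modified agent


def evaluate(train_baseline, train_modified, hyperparams):
    """Train both agents on every environment with one shared hyperparameter
    setting, and report final average reward at each budget."""
    results = {}
    for env_id in MUJOCO_ENVS:
        baseline_reward = train_baseline(gym.make(env_id),
                                         timesteps=BASELINE_TIMESTEPS, **hyperparams)
        modified_reward = train_modified(gym.make(env_id),
                                         timesteps=REDUCED_TIMESTEPS, **hyperparams)
        results[env_id] = (baseline_reward, modified_reward)
        print(f"{env_id}: baseline={baseline_reward:.1f}  "
              f"modified (3x less data)={modified_reward:.1f}")
    return results
```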

This will be an impressive achievement, and the result will likely be scientifically significant.

When designing the code, you may find the following ideas useful:


### Notes

This problem is very hard, as getting an improvement of this magnitude is likely to require new ideas.