design + research + robots

Simulated Robot 1.0

[Image: robots moving blocks]

developing the simulated environment

2.13.2018

 

Step 1: manually controlled environment

As discussed in my previous post, my end goal is to develop multiple robots that learn to work together to assemble a puzzle or structure that they could not complete alone. After running into some issues with my earlier simulation environment, I decided to simplify the problem and focus on a 2D environment, using PyBox2D, a 2D physics simulator, for the dynamics and PyGame for visualization. As a first step, I built a manually controlled 2D environment to test the controls. At the beginning of each round, the blocks are randomly placed within the field, and the goal is to complete the puzzle as fast as possible by using the white octagonal agent to push the blocks into place. Small blue dots near the center of the screen mark the final location of each block.
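
For reference, the skeleton of such a manually controlled setup is only a few dozen lines of PyBox2D and PyGame. The sketch below is illustrative rather than the actual environment code: the scale constants, body shapes, colors, and key bindings are all assumptions.

import pygame
from Box2D import b2World

# Illustrative constants; the real environment uses its own scale and geometry.
PPM = 20.0                 # pixels per meter for rendering
TIME_STEP = 1.0 / 60       # physics step matched to a 60 FPS draw loop
SCREEN_W, SCREEN_H = 640, 480

# Top-down world: no gravity, so blocks only move when pushed.
world = b2World(gravity=(0, 0), doSleep=True)

# The agent (octagonal in the real environment, a circle here for brevity).
agent = world.CreateDynamicBody(position=(16, 12))
agent.CreateCircleFixture(radius=0.5, density=1.0, friction=0.3)

# A single square block to push around.
block = world.CreateDynamicBody(position=(10, 10))
block.CreatePolygonFixture(box=(1, 1), density=1.0, friction=0.3)

def to_screen(p):
  """Convert Box2D world coordinates (meters) to PyGame pixel coordinates."""
  return int(p[0] * PPM), int(SCREEN_H - p[1] * PPM)

pygame.init()
screen = pygame.display.set_mode((SCREEN_W, SCREEN_H))
clock = pygame.time.Clock()

running = True
while running:
  for event in pygame.event.get():
    if event.type == pygame.QUIT:
      running = False

  # Manual control: arrow keys set the agent's linear velocity directly.
  keys = pygame.key.get_pressed()
  vx = (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * 5.0
  vy = (keys[pygame.K_UP] - keys[pygame.K_DOWN]) * 5.0
  agent.linearVelocity = (vx, vy)

  world.Step(TIME_STEP, 10, 10)

  # Draw the agent and the block at their current physics positions.
  screen.fill((0, 0, 0))
  pygame.draw.circle(screen, (255, 255, 255), to_screen(agent.position), int(0.5 * PPM))
  corners = [to_screen(block.transform * v) for v in block.fixtures[0].shape.vertices]
  pygame.draw.polygon(screen, (200, 200, 50), corners)
  pygame.display.flip()
  clock.tick(60)

pygame.quit()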

 

Step 2: implement an OpenAI Gym env

Once this simulation was working, I implemented it as an environment within the OpenAI Gym framework. OpenAI Gym is used by many researchers as a benchmark for reinforcement learning (RL) algorithms, which makes it an ideal framework to build on. In this first round, the action space was composed of 8 simple discrete actions, outlined below.

# Each action maps to a (direction vector, label) pair; the direction vector
# is applied to the agent's motion in the 2D plane.
ACTION_DICT = {
  0: ((0, 1), "North"),
  1: ((1, 1), "NorthEast"),
  2: ((1, 0), "East"),
  3: ((1, -1), "SouthEast"),
  4: ((0, -1), "South"),
  5: ((-1, -1), "SouthWest"),
  6: ((-1, 0), "West"),
  7: ((-1, 1), "NorthWest"),
}
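
Inside the environment's step function, each action index then has to be translated into motion of the agent. The mapping below is a minimal sketch; the speed constant, the self.agent attribute, and the normalization are my assumptions rather than the exact code.

def apply_action(self, action, speed=5.0):
  """Map a discrete action index from ACTION_DICT onto the agent's velocity."""
  direction, _name = ACTION_DICT[action]
  dx, dy = direction
  norm = (dx * dx + dy * dy) ** 0.5   # keep diagonal moves at the same speed
  self.agent.linearVelocity = (speed * dx / norm, speed * dy / norm)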

The state was composed of each block's location and rotation angle. In addition, I tracked which blocks were in their final locations.

state = []
in_place = []
for block in self.blocks:
  # Absolute position and rotation of each block (angle wrapped to [0, 2*pi)).
  x, y = block.worldCenter
  angle = block.angle % (2*np.pi)
  state.extend([x, y, angle])
  # Track whether this block has reached its target pose.
  in_place.append(self.is_in_place(x, y, angle, block))
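
In Gym terms, this corresponds to a discrete action space and a box observation space. The declarations below (which would sit in the environment's __init__) are a sketch; the field-size bounds FIELD_W and FIELD_H are placeholders I am assuming here.

from gym import spaces
# (numpy imported as np, as in the snippets above)

n_blocks = len(self.blocks)
# One of the 8 compass directions defined in ACTION_DICT.
self.action_space = spaces.Discrete(8)
# (x, y, angle) per block; the bounds are placeholders for the field size.
low = np.tile([0.0, 0.0, 0.0], n_blocks).astype(np.float32)
high = np.tile([FIELD_W, FIELD_H, 2 * np.pi], n_blocks).astype(np.float32)
self.observation_space = spaces.Box(low=low, high=high, dtype=np.float32)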

The reward structure was composed of a living penalty, a per-block reward, and a puzzle-completion reward. A living penalty was assigned for each block at each timestep: the further a block was from its final location, the larger the penalty. This was intended to motivate the agent to move the blocks toward their final locations quickly. A reward of +10 was given for each block moved into its final position, and -10 was deducted if a block was moved out of place. Once the puzzle was completed, a reward of +1000 would be given. Unfortunately, no positive reward was ever earned: the agent consistently moved straight into a wall, with no regard for the blocks.
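
A rough sketch of that reward logic is below; the living-penalty scale and the self.block_targets helper are assumptions, not the exact values I used.

def compute_reward(self, newly_placed, newly_displaced, done):
  """Living penalty per block, placement bonuses, and a completion bonus."""
  reward = 0.0
  # Living penalty: grows with each block's distance from its target location.
  for block, target in zip(self.blocks, self.block_targets):
    dx = block.worldCenter.x - target[0]
    dy = block.worldCenter.y - target[1]
    reward -= 0.01 * np.hypot(dx, dy)
  # +10 for each block newly moved into place, -10 for each knocked out of place.
  reward += 10.0 * newly_placed
  reward -= 10.0 * newly_displaced
  # Large bonus once the whole puzzle is assembled.
  if done:
    reward += 1000.0
  return reward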

After this initial test, I took note of a number of areas for improvement and alternative directions, listed below:

  • State Description:
    • Make each block's location relative to its final location
    • Include the relative position of the agent to each block
    • Include information about the shape of each block as a vector of 0s and 1s
    • Use pixels as the state information
  • Reward Structure:
    • Provide some positive reinforcement before a block reaches its final location, rather than a penalty alone
    • Include a reward for rotating towards the final angle
    • Include a reward for the agent making contact with the blocks
 

Step 3: simplify, refine, and test

The initial environment was not behaving as expected, so I needed to rethink my approach. The first step was to simplify the problem to a single block. My changes are listed below, followed by a sketch of the updated reward logic.

  • Simplify to a single block and no agent 
  • The block is initialized at a random location and rotation for every episode.
  • Action space is continuous and controls the linear velocity (x, y) and angular velocity of the block
  • State is composed of the relative x and y location to the final position, the relative rotation angle to the final rotation, and the distance between the block and its final position.
  • Reward structure is updated to give reward for good behavior - i.e. moving towards the final position:
    • Location based reward
      • -5 for moving away from the final position
      • +1 for moving towards it
      • -3 for not moving
    • Rotation based reward
      • -0.5 for rotating away from final position
      • +0.5 for rotating towards it
    • Puzzle Complete
      • +1000
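
A sketch of how this reward shaping fits together is below; the movement threshold and the way distances and angle errors are passed in are assumptions, and the real implementation may differ.

def shaping_reward(self, prev_dist, dist, prev_angle_err, angle_err, done, eps=1e-3):
  """Reward shaping for the single-block environment described above."""
  reward = 0.0
  # Location-based term: compare the distance to the target before and after the step.
  if dist < prev_dist - eps:
    reward += 1.0       # moved towards the final position
  elif dist > prev_dist + eps:
    reward -= 5.0       # moved away from it
  else:
    reward -= 3.0       # did not move
  # Rotation-based term: compare the angular error to the final rotation.
  if angle_err < prev_angle_err:
    reward += 0.5       # rotated towards the final rotation
  elif angle_err > prev_angle_err:
    reward -= 0.5       # rotated away from it
  # Completion bonus once the block reaches its target pose.
  if done:
    reward += 1000.0
  return reward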

To rule out bugs in the learning algorithm itself, I am using OpenAI's baseline implementation of Deep Deterministic Policy Gradient (DDPG). The video above shows a model trained with DDPG for around 450 epochs. The model is fairly successful, but in certain episodes the block hovers a short distance from the final position, oscillating back and forth in very small movements. To improve this, I will experiment with the reward structure to encourage larger movements rather than micro-movements.
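
For the baselines code to construct the environment by id, it has to be registered with Gym first. The id and entry point below are hypothetical placeholders for my environment module.

from gym.envs.registration import register

register(
  id='SingleBlockPuzzle-v0',                      # hypothetical id
  entry_point='puzzle_env:SingleBlockPuzzleEnv',  # hypothetical module:class
  max_episode_steps=500,
)

Once registered, gym.make('SingleBlockPuzzle-v0') returns an instance that the DDPG training script can wrap and train on.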

 

Next steps

The primary goal of my work is to develop multiple agents that learn to collaborate to achieve a common goal. With that in mind, my priority is to move quickly to multiple agents in simulation.

  1. Expand environment to include one agent and one block
  2. State Definition:
    1. Option 1: Expand the state to include the relationship between the agent and the block (relative vector, contact)
    2. Option 2: Experiment with state defined as pixels (RGB)
  3. Reward:
    1. Give a reward for contact between the agent and the block

Once the simulation is working successfully, I will focus on implementing this strategy with real robots. This may involve developing a new 3D simulation environment and using domain adaptation strategies to improve the transfer of learning from simulation to the real-world environment.

For more info, check out my thesis book.