
quadruped_learning

(Image: the Petoi Bittle robot)

Motion re-targeting

We took an existing URDF file, recently published by Petoi (the makers of the Bittle robot), and modified it to work in pybullet. Original file here. Modified file here.
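As a quick sanity check that the modified URDF loads in pybullet, something like the following can be used. This is a minimal sketch, not code from this repository; the file name bittle.urdf is a placeholder for the modified file linked above.

```python
# Minimal sketch: load the modified Bittle URDF in pybullet and list its joints.
# "bittle.urdf" is a placeholder path for the modified file linked above.
import pybullet as p
import pybullet_data

client = p.connect(p.DIRECT)  # headless; use p.GUI to visualize
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
plane_id = p.loadURDF("plane.urdf")
bittle_id = p.loadURDF("bittle.urdf", basePosition=[0, 0, 0.1])

# Print the joints pybullet found, to confirm the URDF parsed as expected.
for joint_index in range(p.getNumJoints(bittle_id)):
    info = p.getJointInfo(bittle_id, joint_index)
    print(joint_index, info[1].decode("utf-8"))

p.disconnect(client)
```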

We then used tools from "Learning Agile Robotic Locomotion Skills by Imitating Animals" by Xue Bin Peng et al., here, to fit reference motions to our robot model and obtain a sequence of poses. We saved the poses here. A description of this method is below:

For retargeting the root orientation and position, we compute the orientation from the pelvis, neck, shoulder, and hip locations in the reference motion, and the position from the pelvis and neck positions. For the joint angles, we first compute target toe positions, then use pybullet's inverse kinematics to find joint angles that match all of the toe positions.
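The inverse-kinematics step might look roughly like the sketch below. This is not the exact retargeting code: the toe link indices and target positions are made-up placeholders that depend on the actual URDF and reference motion, and bittle_id refers to the model loaded in the earlier sketch.

```python
# Sketch of the joint-angle step: given target toe positions in world
# coordinates, use pybullet's inverse kinematics to find matching joint angles.
# `bittle_id` comes from the loading sketch above; the link indices and targets
# below are hypothetical.
import pybullet as p

toe_link_indices = [3, 7, 11, 15]        # hypothetical toe/foot link indices
target_toe_positions = [                 # hypothetical world-frame targets (meters)
    [0.10, 0.05, 0.0],
    [0.10, -0.05, 0.0],
    [-0.10, 0.05, 0.0],
    [-0.10, -0.05, 0.0],
]

# Solve IK for all four toes at once; pybullet returns one angle per movable joint.
joint_angles = p.calculateInverseKinematics2(
    bittle_id,
    toe_link_indices,
    target_toe_positions,
)
# These angles can then be stored as one pose in the retargeted motion sequence.
```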

(Animations: motion retargeting results)

Simulation Environments

We made a simulation environment for pybullet that lets you set target joint angles and step the simulation forward in time for our Bittle model, here.
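A hypothetical usage sketch of that environment is below. The module path, class name, action size, and return value of step are assumptions; the real interface is in the linked file.

```python
# Hypothetical usage of the pybullet simulation environment described above.
# The import path, class name, and method signatures are assumptions.
import numpy as np

from bittle_env import BittleEnv  # hypothetical import path

env = BittleEnv()
observation = env.reset()

for _ in range(100):
    target_angles = np.zeros(8)            # one target angle per actuated joint (assumed 8)
    observation = env.step(target_angles)  # set targets, then step the simulation forward
```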

We also made a version of this simulation environment that plugs into tensorflow-agents and its API, here.
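A tf-agents-compatible environment is typically validated and wrapped as sketched below. BittlePyEnv and its import path are hypothetical placeholders for the environment in the linked file; the validation and wrapping calls are standard tf-agents API.

```python
# Sketch of using a tf-agents-compatible version of the environment.
# BittlePyEnv is a hypothetical stand-in for the class in the linked file.
from tf_agents.environments import tf_py_environment, utils

from bittle_tf_env import BittlePyEnv  # hypothetical import path

py_env = BittlePyEnv()

# Checks that the environment's specs, reset, and step behave consistently.
utils.validate_py_environment(py_env, episodes=2)

# Wraps the Python environment so tf-agents policies and drivers can consume it.
tf_env = tf_py_environment.TFPyEnvironment(py_env)
time_step = tf_env.reset()
print(time_step.observation.shape)
```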

(Animation: the Bittle model in the simulation environment)

Learning Approaches

We implemented and tested a behavioral cloning approach. It performed well when the robot was in states similar to the training examples, but poorly in states outside that distribution.
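A minimal sketch of the behavioral cloning idea follows, assuming the retargeted poses provide expert state/action pairs. The array shapes, network size, and random placeholder data are illustrative only, not the actual training setup.

```python
# Minimal behavioral-cloning sketch: fit a network that maps robot state to
# the expert's target joint angles. Shapes, network size, and the placeholder
# data below are assumptions for illustration.
import numpy as np
import tensorflow as tf

# Hypothetical expert data: states (e.g. joint angles + base orientation) and
# the corresponding expert actions (target joint angles from the pose sequence).
states = np.random.rand(1000, 16).astype(np.float32)
actions = np.random.rand(1000, 8).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(8),  # one output per actuated joint
])
model.compile(optimizer="adam", loss="mse")

# Behavioral cloning is supervised regression from state to expert action.
model.fit(states, actions, epochs=10, batch_size=32)

# At run time, the cloned policy predicts target joint angles from the current state.
predicted_action = model.predict(states[:1])
```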

We also started training an actor-critic model, but did not deploy it.
