How to Control a Robot with Python


PyBullet is an open-source simulation platform built on the Bullet physics engine (originally developed by Erwin Coumans), designed for training physical agents (such as robots) in a 3D environment. It simulates both rigid and soft bodies, and it is commonly used for robotics, artificial intelligence, and game development.

Robotic arms are very popular due to their speed, precision, and ability to work in hazardous environments. They are used for tasks such as welding, assembly, and material handling, especially in industrial settings (like manufacturing).

There are two ways for a robot to perform a task:

  • Manual Control – requires a human to explicitly program every action. Better suited for fixed tasks, but struggles with uncertainty and requires tedious parameter tuning for every new scenario.
  • Artificial Intelligence – allows a robot to learn the best actions through trial and error to achieve a goal. So, it can adapt to changing environments by learning from rewards and penalties without a predefined plan (Reinforcement Learning).

In this article, I’m going to show how to build a 3D environment with PyBullet for manually controlling a robotic arm. I will present some useful Python code that can be easily applied in other similar cases (just copy, paste, run) and walk through every line of code with comments so that you can replicate this example.

Setup

PyBullet can easily be installed with one of the following commands (if pip fails, conda usually works):

pip install pybullet

conda install -c conda-forge pybullet

PyBullet comes with a collection of preset URDF files (Unified Robot Description Format), which are XML files describing the physical structure of objects in the 3D environment (e.g. a cube, a sphere, a robot).
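
To see what's available, you can list the bundled files directly (a minimal sketch; the exact contents depend on the installed version of pybullet_data):

import os
import pybullet_data

## folder containing the preset URDF files shipped with PyBullet
data_path = pybullet_data.getDataPath()
urdf_files = sorted(f for f in os.listdir(data_path) if f.endswith(".urdf"))

print(data_path)
print(urdf_files)  #e.g. "plane.urdf", "cube.urdf", "cube_small.urdf", ...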

import pybullet as p
import pybullet_data
import time

## setup
p.connect(p.GUI)
p.resetSimulation()
p.setGravity(0, 0, -9.8)
p.setAdditionalSearchPath(pybullet_data.getDataPath())

## load URDF
plane = p.loadURDF("plane.urdf")

cube = p.loadURDF("cube.urdf", basePosition=[0,0,0.1], globalScaling=0.5, useFixedBase=True)

obj = p.loadURDF("cube_small.urdf", basePosition=[1,1,0.1], globalScaling=1.5)
p.changeVisualShape(objectUniqueId=obj, linkIndex=-1, rgbaColor=[1,0,0,1]) #red

## keep the GUI open and let the physics run in real time
p.setRealTimeSimulation(True)
while p.isConnected():
    time.sleep(0.01)

Let’s go through the code above:

  • When you connect to the physics engine, you need to specify whether you want to open the graphical interface (p.GUI) or not (p.DIRECT).
  • The Cartesian space has 3 dimensions: x-axis (forward/backward), y-axis (left/right), z-axis (up/down).
  • It’s common practice to set the gravity to (x=0, y=0, z=-9.8) to simulate Earth’s gravity, but you can change it based on the objective of the simulation.
  • Usually, you need to import a plane to create a ground, otherwise objects would just fall indefinitely. If you want an object to be fixed to the floor, then set useFixedBase=True (it’s False by default). I imported the objects with basePosition=[0,0,0.1] meaning that they are 10cm above the ground.
  • The simulation can be rendered in real-time with p.setRealTimeSimulation(True) or stepped as fast as the CPU allows with p.stepSimulation(). Another way to run in real time is to step manually and sleep between steps:
import time

for _ in range(240):   #PyBullet's default physics timestep is 1/240 s, so 240 steps = 1 simulated second
    p.stepSimulation() #advance the physics by one step
    time.sleep(1/240)  #sleep between steps so that 1 simulated second takes ~1 real second

Robot

Currently, our 3D environment is made of a little object (tiny red cube), and a table (big cube) fixed to the ground (plane). I shall add a robotic arm with the task of picking up the smaller object and putting it on the table.

PyBullet has a default robotic arm modeled after the Franka Emika Panda robot.

robo = p.loadURDF("franka_panda/panda.urdf", 
                   basePosition=[1,0,0.1], useFixedBase=True)

The first thing to do is to analyze the links (the solid parts) and joints (connections between two rigid parts) of the robot. For this purpose, you can just use p.DIRECT as there is no need for the graphic interface.

import pybullet as p
import pybullet_data

## setup
p.connect(p.DIRECT)
p.resetSimulation()
p.setGravity(0, 0, -9.8)
p.setAdditionalSearchPath(pybullet_data.getDataPath())

## load URDF
robo = p.loadURDF("franka_panda/panda.urdf", 
                  basePosition=[1,0,0.1], useFixedBase=True)

## joints
dic_info = {
    0:"joint Index",  #starts at 0
    1:"joint Name",
    2:"joint Type",  #0=revolute (rotational), 1=prismatic (sliding), 4=fixed
    3:"state vectorIndex",
    4:"velocity vectorIndex",
    5:"flags",  #nvm always 0
    6:"joint Damping",  
    7:"joint Friction",  #coefficient
    8:"joint lowerLimit",  #min angle
    9:"joint upperLimit",  #max angle
    10:"joint maxForce",  #max force allowed
    11:"joint maxVelocity",  #max speed
    12:"link Name",  #child link connected to this joint
    13:"joint Axis",
    14:"parent FramePos",  #position
    15:"parent FrameOrn",  #orientation
    16:"parent Index"  #−1 = base
}
for i in range(p.getNumJoints(bodyUniqueId=robo)):
    joint_info = p.getJointInfo(bodyUniqueId=robo, jointIndex=i)
    print(f"--- JOINT {i} ---")
    print({dic_info[k]:joint_info[k] for k in dic_info.keys()})

## links
for i in range(p.getNumJoints(robo)):
    link_name = p.getJointInfo(robo, i)[12].decode('utf-8')  #field 12="link Name"
    dyn = p.getDynamicsInfo(robo, i)
    pos, orn, *_ = p.getLinkState(robo, i)
    dic_info = {"Mass":dyn[0], "Friction":dyn[1], "Position":pos, "Orientation":orn}
    print(f"--- LINK {i}: {link_name} ---")
    print(dic_info)

Every robot has a "base", the root part of its body that connects everything (like the spine of a skeleton). Looking at the output of the code above, we know that the robotic arm has 12 joints and 12 links. They are connected like this:
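
You can also print this chain directly from the joint info (a minimal sketch that reuses the robo handle and fields 12 and 16 of p.getJointInfo()):

## print the kinematic chain: parent link -> child link for every joint
for i in range(p.getNumJoints(robo)):
    info = p.getJointInfo(robo, i)
    child_link = info[12].decode("utf-8")
    parent_idx = info[16]  #-1 means the base
    parent_link = "base" if parent_idx == -1 else p.getJointInfo(robo, parent_idx)[12].decode("utf-8")
    print(f"{parent_link} -> {child_link}")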

Movement Control

In order to make a robot do something, you have to move its joints. There are 3 main types of control, all of which are applications of Newton’s laws of motion (a minimal code sketch of each follows the list):

  • Position control: basically, you tell the robot “go here”. Technically, you set a target position, and then apply force to gradually move the joint from its current position toward the target. As it gets closer, the applied force decreases to avoid overshooting and eventually balances perfectly against resistive forces (i.e. friction, gravity) to hold the joint steady in place.
  • Velocity control: basically, you tell the robot “move at this speed”. Technically, you set a target velocity, and apply force to gradually bring the velocity from zero to the target, then maintain it by balancing applied force and resistive forces (i.e. friction, gravity).
  • Force/Torque control: basically, you “push the robot and see what happens”. Technically, you directly set the force applied at the joint, and the resulting motion depends purely on the object’s mass, inertia, and the environment. As a side note, in physics, the word “force” is used for linear motion (push/pull), while “torque” indicates rotational motion (twist/turn).
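
Here is a minimal sketch of what the three modes look like in PyBullet (assuming the Panda arm robo from the previous section is loaded; joint index 2 and the force values are arbitrary choices for illustration):

joint = 2  #one of the arm's revolute joints

## 1) position control: "go to this angle"
p.setJointMotorControl2(bodyUniqueId=robo, jointIndex=joint,
                        controlMode=p.POSITION_CONTROL,
                        targetPosition=0.5,  #target angle in radians
                        force=50)            #max force the motor may apply

## 2) velocity control: "move at this speed"
p.setJointMotorControl2(bodyUniqueId=robo, jointIndex=joint,
                        controlMode=p.VELOCITY_CONTROL,
                        targetVelocity=1.0,  #rad/s
                        force=50)

## 3) torque control: "apply this torque and see what happens"
p.setJointMotorControl2(bodyUniqueId=robo, jointIndex=joint,
                        controlMode=p.VELOCITY_CONTROL, force=0) #disable the default motor first
p.setJointMotorControl2(bodyUniqueId=robo, jointIndex=joint,
                        controlMode=p.TORQUE_CONTROL,
                        force=10)            #torque in N*m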

We can use p.setJointMotorControl2() to move a single joint, and p.setJointMotorControlArray() to control multiple joints at the same time. For instance, I shall perform position control by giving an arbitrary target to each arm joint.

import pybullet as p
import pybullet_data
import time

## setup
p.connect(p.GUI)
p.resetSimulation()
p.setGravity(0, 0, -9.8)
p.setAdditionalSearchPath(pybullet_data.getDataPath())

## load URDF
plane = p.loadURDF("plane.urdf")
cube = p.loadURDF("cube.urdf", basePosition=[0,0,0.1], globalScaling=0.5, useFixedBase=True)
robo = p.loadURDF("franka_panda/panda.urdf", basePosition=[1,0,0.1], useFixedBase=True)
obj = p.loadURDF("cube_small.urdf", basePosition=[1,1,0.1], globalScaling=1.5)
p.changeVisualShape(objectUniqueId=obj, linkIndex=-1, rgbaColor=[1,0,0,1]) #red

## move arm: position control with an arbitrary target angle for each of the 7 arm joints
arm_joints = [0,1,2,3,4,5,6]
target_positions = [1,1,1,1,1,1,1] #angles in radians
p.setJointMotorControlArray(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                            jointIndices=arm_joints,
                            targetPositions=target_positions,
                            forces=[50]*len(arm_joints))

for _ in range(240): #run ~1 second of simulation to see the motion
    p.stepSimulation()
    time.sleep(1/240)

The real question is, “what’s the right target position for each joint?” The answer is Inverse Kinematics, the mathematical process of calculating the joint values needed to place the robot’s end effector at a given position and orientation. Each joint defines an angle, and you don’t want to guess multiple joint angles by hand. So, we’ll ask PyBullet to figure out the target positions from a point in Cartesian space with p.calculateInverseKinematics().

obj_position, _ = p.getBasePositionAndOrientation(obj)
obj_position = list(obj_position)

target_positions = p.calculateInverseKinematics(
    bodyUniqueId=robo,
    endEffectorLinkIndex=11, #grasp-target link
    targetPosition=[obj_position[0], obj_position[1], obj_position[2]+0.25], #25cm above object
    targetOrientation=p.getQuaternionFromEuler([0,-3.14,0]) #[roll,pitch,yaw]=[0,-π,0] -> hand pointing down
)

arm_joints = [0,1,2,3,4,5,6]
p.setJointMotorControlArray(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                            jointIndices=arm_joints,
                            targetPositions=target_positions[:len(arm_joints)],
                            forces=[50]*len(arm_joints))

Please note that I used p.getQuaternionFromEuler() to convert the 3 Euler angles (easy for humans to understand) into a 4-component quaternion, which physics engines prefer because it avoids gimbal lock and is cheaper to compose. If you want to get technical, in a quaternion (x, y, z, w), the first three components encode the rotation axis scaled by sin(θ/2), while w = cos(θ/2) encodes the amount of rotation.
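
As a quick sanity check, you can convert back and forth between the two representations (a minimal sketch; the DIRECT connection is just there so the helpers have a physics client available):

import pybullet as p

p.connect(p.DIRECT)  #no GUI needed

euler = [0, -3.14, 0]                   #[roll, pitch, yaw] in radians
quat = p.getQuaternionFromEuler(euler)  #-> (x, y, z, w)
back = p.getEulerFromQuaternion(quat)   #convert back to Euler angles

print("quaternion:", quat)
print("euler angles:", back)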

The code above commands the robot to move its hand to a specific position above an object using Inverse Kinematics. We grab where the little red cube is sitting in the world with p.getBasePositionAndOrientation(), and we use the information to calculate the target position for the joints.
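
If you want to check how close the gripper actually gets, you can step the simulation for a moment and compare the world position of the end-effector link with the target (a minimal sketch, reusing the variables defined above):

import time

## let the position controller act for ~1 simulated second
for _ in range(240):
    p.stepSimulation()
    time.sleep(1/240)

## link 11 is the grasp-target link used as end effector above
gripper_position = p.getLinkState(robo, 11)[0]
target = [obj_position[0], obj_position[1], obj_position[2]+0.25]
print("gripper:", [round(v, 3) for v in gripper_position])
print("error:  ", [round(g - t, 3) for g, t in zip(gripper_position, target)])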

Interact with Objects

The robot arm has a hand (“gripper”) that can be opened and closed to grab objects. From the previous analysis, we know that the “fingers” are prismatic joints that can slide between 0 and 0.04 m. So, you can set the target position to the lower limit (hand closed) or the upper limit (hand open).
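
For convenience, the open/close logic can be wrapped in a small helper (move_gripper is a hypothetical function of mine, not a PyBullet API; joints 9 and 10 are the two finger joints identified earlier):

def move_gripper(robot_id, position, force=100):
    """Drive both finger joints to the same target: 0 = closed, 0.04 = fully open."""
    for finger_joint in [9, 10]:
        p.setJointMotorControl2(bodyUniqueId=robot_id,
                                jointIndex=finger_joint,
                                controlMode=p.POSITION_CONTROL,
                                targetPosition=position,
                                force=force)

move_gripper(robo, 0.04)  #open
move_gripper(robo, 0.0)   #close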

Moreover, I want to make sure that the arm holds the little red cube while moving around. You can use p.createConstraint() to create a temporary physics bond between the robot’s gripper and the object, so that the object moves together with the robot hand. In the real world, the gripper would squeeze the object and hold it through friction and contact forces. But, since simulating a stable friction-based grasp is finicky, I’ll just take this shortcut.

force = 100 #maximum motor force

## close hand
p.setJointMotorControl2(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                        jointIndex=9, #finger_joint1
                        targetPosition=0, #lower limit for finger_joint1
                        force=force)
p.setJointMotorControl2(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                        jointIndex=10, #finger_joint2
                        targetPosition=0, #lower limit for finger_joint2
                        force=force)

## hold the object
constraint = p.createConstraint(
    parentBodyUniqueId=robo,
    parentLinkIndex=11,
    childBodyUniqueId=obj,
    childLinkIndex=-1,
    jointType=p.JOINT_FIXED,
    jointAxis=[0,0,0],
    parentFramePosition=[0,0,0],
    childFramePosition=[0,0,0]
)

After that, we can move the arm toward the table, using the same strategy as before: Inverse Kinematics -> target position -> position control.
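
In code, that step looks like this (a minimal sketch; it reappears in the full script below, and the 0.80 m offset above the table is chosen so the gripper clears the cube):

## move towards the table: Inverse Kinematics -> target joint angles -> position control
cube_position, _ = p.getBasePositionAndOrientation(cube)

target_positions = p.calculateInverseKinematics(
    bodyUniqueId=robo,
    endEffectorLinkIndex=11, #grasp-target link
    targetPosition=[cube_position[0], cube_position[1], cube_position[2]+0.80], #80cm above the table
    targetOrientation=p.getQuaternionFromEuler([0,-3.14,0]) #hand pointing down
)

arm_joints = [0,1,2,3,4,5,6]
p.setJointMotorControlArray(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                            jointIndices=arm_joints,
                            targetPositions=target_positions[:len(arm_joints)],
                            forces=[300]*len(arm_joints))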

Finally, when the robot reaches the target position in the Cartesian space, we can open the hand and release the constraint between the object and the arm.

## open hand
p.setJointMotorControl2(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                        jointIndex=9, #finger_joint1
                        targetPosition=0.04, #upper limit for finger_joint1
                        force=force)
p.setJointMotorControl2(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                        jointIndex=10, #finger_joint2
                        targetPosition=0.04, #upper limit for finger_joint2
                        force=force)

## drop the obj
p.removeConstraint(constraint)

Full Manual Control

In PyBullet, you need to step the simulation for every action the robot takes. Therefore, it’s convenient to have a utility function that runs the physics for a chosen amount of time (e.g. sec=0.1 for a quick action, sec=2 for a longer one, sec=1 by default), sleeping between steps so the motion plays back in real time.

import pybullet as p
import time

def render(sec=1):
    for _ in range(int(240*sec)):
        p.stepSimulation()
        time.sleep(1/240)

Here’s the full code for the simulation we have done in this article (it reuses the render() utility defined above).

import pybullet_data

## setup
p.connect(p.GUI)
p.resetSimulation()
p.setGravity(0, 0, -9.8)
p.setAdditionalSearchPath(pybullet_data.getDataPath())

plane = p.loadURDF("plane.urdf")
cube = p.loadURDF("cube.urdf", basePosition=[0,0,0.1], globalScaling=0.5, useFixedBase=True)
robo = p.loadURDF("franka_panda/panda.urdf", basePosition=[1,0,0.1], globalScaling=1.3, useFixedBase=True)
obj = p.loadURDF("cube_small.urdf", basePosition=[1,1,0.1], globalScaling=1.5)
p.changeVisualShape(objectUniqueId=obj, linkIndex=-1, rgbaColor=[1,0,0,1])

render(0.1)
force = 100


## open hand
print("### open hand ###")
p.setJointMotorControl2(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                        jointIndex=9, #finger_joint1
                        targetPosition=0.04, #upper limit for finger_joint1
                        force=force)
p.setJointMotorControl2(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                        jointIndex=10, #finger_joint2
                        targetPosition=0.04, #upper limit for finger_joint2
                        force=force)
render(0.1)
print(" ")


## move arm
print("### move arm ### ")
obj_position, _ = p.getBasePositionAndOrientation(obj)
obj_position = list(obj_position)

target_positions = p.calculateInverseKinematics(
    bodyUniqueId=robo,
    endEffectorLinkIndex=11, #grasp-target link
    targetPosition=[obj_position[0], obj_position[1], obj_position[2]+0.25], #25cm above object
    targetOrientation=p.getQuaternionFromEuler([0,-3.14,0]) #[roll,pitch,yaw]=[0,-π,0] -> hand pointing down
)
print("target position:", target_positions)

arm_joints = [0,1,2,3,4,5,6]
p.setJointMotorControlArray(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                            jointIndices=arm_joints,
                            targetPositions=target_positions[:len(arm_joints)],
                            forces=[force/3]*len(arm_joints))

render(0.5)
print(" ")


## close hand
print("### close hand ###")
p.setJointMotorControl2(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                        jointIndex=9, #finger_joint1
                        targetPosition=0, #lower limit for finger_joint1
                        force=force)
p.setJointMotorControl2(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                        jointIndex=10, #finger_joint2
                        targetPosition=0, #lower limit for finger_joint2
                        force=force)
render(0.1)
print(" ")


## hold the object
print("### hold the object ###")
constraint = p.createConstraint(
    parentBodyUniqueId=robo,
    parentLinkIndex=11,
    childBodyUniqueId=obj,
    childLinkIndex=-1,
    jointType=p.JOINT_FIXED,
    jointAxis=[0,0,0],
    parentFramePosition=[0,0,0],
    childFramePosition=[0,0,0]
)
render(0.1)
print(" ")


## move towards the table
print("### move towards the table ###")
cube_position, _ = p.getBasePositionAndOrientation(cube)
cube_position = list(cube_position)

target_positions = p.calculateInverseKinematics(
    bodyUniqueId=robo,
    endEffectorLinkIndex=11, #grasp-target link
    targetPosition=[cube_position[0], cube_position[1], cube_position[2]+0.80], #80cm above the table
    targetOrientation=p.getQuaternionFromEuler([0,-3.14,0]) #[roll,pitch,yaw]=[0,-π,0] -> hand pointing down
)
print("target position:", target_positions)

arm_joints = [0,1,2,3,4,5,6]
p.setJointMotorControlArray(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                            jointIndices=arm_joints,
                            targetPositions=target_positions[:len(arm_joints)],
                            forces=[force*3]*len(arm_joints))
render()
print(" ")


## open hand and drop the obj
print("### open hand and drop the obj ###")
p.setJointMotorControl2(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                        jointIndex=9, #finger_joint1
                        targetPosition=0.04, #upper limit for finger_joint1
                        force=force)
p.setJointMotorControl2(bodyUniqueId=robo, controlMode=p.POSITION_CONTROL,
                        jointIndex=10, #finger_joint2
                        targetPosition=0.04, #upper limit for finger_joint2
                        force=force)
p.removeConstraint(constraint)
render()

Conclusion

This article has been a tutorial on how to manually control a robotic arm. We learned about movement control, Inverse Kinematics, and how to grab and move objects. New tutorials with more advanced robots will come.

Full code for this article: GitHub

I hope you enjoyed it! Feel free to contact me for questions and feedback, or just to share your interesting projects.

👉 Let’s Connect 👈

(All images are by the author unless otherwise noted)
