Robot Telehandler

Organization: Charge Robotics
Status: Complete
Duration: 2021-2024

Overview

As a very early-stage startup, Charge Robotics needed to prove it could deliver a technically challenging project that would be valuable to potential customers. The autonomous telehandler demonstrated our multidisciplinary robotics skills and our ability to build and deploy large, complex systems. Its primary function was to unload pallets of solar modules from trailers as they were delivered to a solar construction site, a task normally done by several construction workers.

Contribution

My first job was the brake system for the telehandler. The boom was controlled over the CAN bus from the joystick, the throttle by a custom circuit interfacing with the shifter, and the steering wheel by a custom actuator, but we had no way to control the brakes. After diving deep into the hydraulic circuit of the JPG943 telehandler, I determined it was not worth the effort to actuate the brakes by augmenting the hydraulic system. Instead, I simply connected an electric linear actuator to a cable that pulled on the brake pedal. Since the cable could pull but not push, an operator who wanted to apply the brakes could press the pedal and the cable would simply go slack. This acted as a physical OR gate: the brake pedal could be pushed by an operator or pulled by my actuator, and the two would not interfere with each other. The simple solution worked well, with the caveat that the paracord I used as the cable wore down over time.



Still, it never failed.
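The cable linkage behaves like a max() over the two inputs: whichever of the operator's pedal press or the actuator's pull demands more braking wins, and the slack cable means the weaker input has no effect. A minimal model of that behavior (the 0-1 normalization and function name are illustrative, not from the actual controller):

```python
def effective_brake(pedal_press: float, actuator_pull: float) -> float:
    """Model the cable-and-pedal 'physical OR gate'.

    Both inputs are normalized to [0, 1], where 0 is no braking and
    1 is the pedal fully depressed. Because the cable can only pull,
    the weaker input simply goes slack and has no effect on the pedal.
    """
    return max(pedal_press, actuator_pull)
```

For example, if the operator presses the pedal to 0.8 while the actuator is pulling to 0.3, the cable goes slack and the pedal sits at 0.8; with no operator input, an actuator pull of 0.5 brings the pedal to 0.5.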

My next task on the telehandler project was to build the perception system responsible for finding the pallets. I used the Vuforia plugin for the Unity3D game engine, running on a mobile phone mounted to the forks, to detect the boxes sitting on the pallets. (I used images of the boxes themselves as Vuforia Image Targets, so there were no obvious fiducial markers like QR codes.) This worked as a first prototype to test our ability to pick pallets from a container.


For the final version of the perception system, I gathered a dataset of several thousand images of pallets and hand-labeled all of them. I then trained an object detection model (YOLOv8) to find pallets in the image. Although this was only bounding-box detection, it was good enough to locate most pallets when combined with the depth map from a ZED camera mounted on the forks.
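Combining a 2D bounding box with a depth map comes down to a pinhole-camera back-projection: take the box center pixel, read the depth there, and invert the camera intrinsics to get a 3D point. A sketch of that step (the intrinsic values here are placeholders, not the ZED's actual factory calibration):

```python
def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth (meters) into
    camera-frame XYZ using the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Placeholder intrinsics; a real pipeline reads these from the
# camera's calibration.
fx, fy, cx, cy = 700.0, 700.0, 640.0, 360.0

# Center pixel of a detected pallet bounding box, with depth sampled
# from the depth map at that pixel.
pallet_cam = pixel_to_camera_xyz(u=640.0, v=360.0, depth_m=2.5,
                                 fx=fx, fy=fy, cx=cx, cy=cy)
```

A pixel at the principal point maps straight down the optical axis, so the example above yields a point 2.5 m directly in front of the camera.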


