Organization | Status | Duration |
---|---|---|
Harvard School of Engineering and Applied Sciences, Wyss Institute for Biologically Inspired Engineering, Lewis Research Group | Complete | Summer 2014 |
Soft robots are prized for their durability, adaptability, low cost, and use of “embodied intelligence”. They have a severe limitation, however: most soft robots are pneumatically powered and controlled, yet the air supply and valve arrays are rarely considered part of the robot. Often the air pump is larger and heavier than the entire robot, undercutting claims of high power-to-weight ratio and compactness. The valves that control the actuation, and the computers that control the valves, are likewise ignored.
The Octobot is the first entirely self-contained soft robot. It is made by a combination of embedded 3D printing, molding/casting, and soft lithography.
Inspired by real octopuses, the Octobot has no rigid components. It is powered by a chemical reaction and controlled with microfluidic logic that directs the flow of pressurized fluids. The microfluidic circuit acts just like an electrical oscillator and logic circuit, directing power to the legs in sequence. Octobot paves the way for a new generation of soft robots.
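As a loose software analogy for that oscillator behavior (the real Octobot does this entirely with fluidics, not code), the short Python sketch below alternately "pressurizes" two actuator groups in sequence. The grouping, names, and timing are all hypothetical and purely illustrative.

```python
import itertools
import time

# Hypothetical grouping of the Octobot's actuators into two alternating sets.
ACTUATOR_GROUPS = [("arm_1", "arm_3", "arm_5", "arm_7"),
                   ("arm_2", "arm_4", "arm_6", "arm_8")]

def oscillate(period_s=1.0, cycles=3):
    """Alternately 'pressurize' each group, like an astable oscillator."""
    for group in itertools.islice(itertools.cycle(ACTUATOR_GROUPS), 2 * cycles):
        print(f"pressurize {group}, vent the other group")
        time.sleep(period_s)

oscillate(period_s=0.5, cycles=2)
```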
The microfluidic channels of the Octobot are fabricated with “embedded 3D printing”, a novel 3D printing technique that creates hollow channels within a gel-like “matrix” material before it is cured into a soft part. I developed the 3D printing software for patterning the actuator networks, on-board fuel reservoirs, and catalytic reaction chambers. I wrote a Python framework for combining modular fluidic “parts” and generating the G-code to fabricate them (a rough sketch of the idea follows below). I also assisted graduate students in the wet lab with fabrication of many Octobot prototypes, mixing resins and casting in molds.
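The sketch below illustrates the general shape of such a framework, not the actual one: modular fluidic "parts" hold channel waypoints, can be repositioned, and are compiled into a G-code toolpath. Every name and parameter here (FluidicPart, Channel, to_gcode, feed rates, extrusion ratios) is a hypothetical stand-in.

```python
# Hypothetical sketch of a modular fluidic-part framework that emits G-code
# for an embedded 3D printing toolpath. Names and parameters are illustrative.
from dataclasses import dataclass, field


@dataclass
class Channel:
    """A single printed channel: a polyline of (x, y, z) waypoints in mm."""
    waypoints: list


@dataclass
class FluidicPart:
    """A modular fluidic 'part' (e.g. actuator network, fuel reservoir)."""
    name: str
    channels: list = field(default_factory=list)

    def translated(self, dx, dy, dz=0.0):
        """Return a copy of this part shifted within the print volume."""
        moved = [Channel([(x + dx, y + dy, z + dz) for x, y, z in c.waypoints])
                 for c in self.channels]
        return FluidicPart(self.name, moved)


def to_gcode(parts, feed_rate=600, extrude_per_mm=0.05):
    """Generate G-code that traces each channel with continuous extrusion."""
    lines = ["G21 ; units: mm", "G90 ; absolute positioning"]
    e = 0.0
    for part in parts:
        lines.append(f"; --- part: {part.name} ---")
        for channel in part.channels:
            x0, y0, z0 = channel.waypoints[0]
            lines.append(f"G0 X{x0:.3f} Y{y0:.3f} Z{z0:.3f}")  # travel move
            px, py, pz = x0, y0, z0
            for x, y, z in channel.waypoints[1:]:
                dist = ((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2) ** 0.5
                e += dist * extrude_per_mm
                lines.append(f"G1 X{x:.3f} Y{y:.3f} Z{z:.3f} E{e:.4f} F{feed_rate}")
                px, py, pz = x, y, z
    return "\n".join(lines)


# Example: combine two modular parts and print the resulting toolpath.
reservoir = FluidicPart("fuel_reservoir", [Channel([(0, 0, 1), (10, 0, 1)])])
actuators = FluidicPart("actuator_network", [Channel([(0, 5, 1), (10, 5, 1)])])
print(to_gcode([reservoir, actuators.translated(0, 2)]))
```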
Michael Wehner, Ryan L. Truby, Daniel J. Fitzgerald, Bobak Mosadegh, George M. Whitesides, Jennifer A. Lewis & Robert J. Wood
The Mediate software is a Unity application that uses SteamVR to track the 3D pose of the shape display relative to stationary virtual geometry. Rays are projected from the base of each pin and intersected with the virtual mesh objects, and each pin's height is set to the length of that intersection vector (see the sketch below). In this way, the display recreates the surface geometry of arbitrary objects at arbitrary positions and orientations relative to itself. The tracking and pin-rendering run independently of the hand-following, so the user can move the display manually to “reveal” virtual geometry if they choose.
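The core of that pin-rendering step is a per-pin ray/mesh intersection. The sketch below reproduces the geometry in plain Python rather than the actual Unity/SteamVR implementation; the helper names, the pin travel limit, and the toy mesh are all assumptions for illustration.

```python
# Geometry-only sketch of the pin-height "rendering" described above; the
# real Mediate software does this with Unity/SteamVR raycasts, so every name
# here (intersect_ray_triangle, pin_heights, MAX_TRAVEL) is hypothetical.
import numpy as np

MAX_TRAVEL = 50.0  # assumed maximum pin travel, in mm

def intersect_ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test; returns hit distance or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                      # ray is parallel to the triangle
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None        # hit must be in front of the pin

def pin_heights(pin_bases, pin_direction, triangles):
    """Cast a ray from each pin base along the pin axis into the virtual mesh
    and return the clamped distance to the nearest intersection."""
    heights = []
    for base in pin_bases:
        best = MAX_TRAVEL
        for v0, v1, v2 in triangles:
            t = intersect_ray_triangle(base, pin_direction, v0, v1, v2)
            if t is not None and t < best:
                best = t
        heights.append(best)
    return heights

# Toy example: a 2x2 patch of pins beneath a single tilted triangle.
tri = [(np.array([-100.0, -100.0, 20.0]),
        np.array([ 100.0, -100.0, 20.0]),
        np.array([   0.0,  100.0, 40.0]))]
bases = [np.array([x, y, 0.0]) for x in (0.0, 10.0) for y in (0.0, 10.0)]
print(pin_heights(bases, np.array([0.0, 0.0, 1.0]), tri))
```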
Learn More / Research Paper: www.nature.com