Soft Robotic Gripper (Design Analysis Draft 2)


The article “This Soft Robotic Gripper Can Screw in Your Light Bulbs for You” (2017) introduces a new robotic gripper along with its design and functionality. Developed by engineers at the University of California, San Diego, the three-finger gripper can lift and manipulate objects without visualization or training, enabling operation in dim, poor-visibility conditions. The article mentions that each finger consists of three “pneumatic chambers”, giving the gripper multiple degrees of freedom; applying air pressure to the chambers moves the fingers and lets them manipulate objects. The fingers are covered with a “smart, sensing skin” made of “silicone rubber” and embedded with sensors built from “carbon nanotubes”. As the fingers flex, the conductivity of the nanotubes changes, letting the skin sense and record when the fingers are near an object. A control board collects the data generated by the sensors and combines it to form a 3D model of the manipulated object. The article lists future improvements including “machine learning” and “artificial intelligence”, as well as “3D printing” to increase the durability of the gripper’s fingers.
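The article does not include any control code, but the sensing principle can be illustrated in a few lines. The Python sketch below shows one plausible way a controller might turn the change in a carbon-nanotube trace’s resistance into a finger-bend estimate and a contact flag for the tactile model; the variable names, calibration constants, and threshold are assumptions for illustration, not values from the UCSD design.

    # Hypothetical sketch, not the UCSD team's firmware: convert a nanotube trace's
    # resistance change into a bend estimate and flag contact when the finger stalls
    # against something. All constants below are assumed for illustration.
    BASELINE_RESISTANCE_OHM = 1000.0   # assumed resistance of a relaxed sensor trace
    OHMS_PER_DEGREE = 4.0              # assumed calibration: resistance change per degree of bend
    CONTACT_GAP_DEG = 25.0             # assumed shortfall vs. commanded bend that implies contact

    def bend_angle_deg(resistance_ohm: float) -> float:
        """Estimate how far the finger has curled from the sensor's resistance change."""
        return (resistance_ohm - BASELINE_RESISTANCE_OHM) / OHMS_PER_DEGREE

    def contact_detected(resistance_ohm: float, commanded_bend_deg: float) -> bool:
        """Report contact when the finger curls noticeably less than commanded,
        i.e. an object is blocking the pneumatic chamber from bending further."""
        return commanded_bend_deg - bend_angle_deg(resistance_ohm) > CONTACT_GAP_DEG

In the actual gripper, the control board would aggregate readings of this kind from all three fingers to build its model of the grasped object.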

When compared with related products on the market, the soft robotic gripper lacks certain features and functionality: it has no slip-detection system, its manufacturing process is slow and tedious, and it offers no machine-learning algorithms for object identification.

Torque is a key factor in a robotic gripper. The right amount of torque is crucial when grasping a fragile object: too much torque will damage it, while too little will let the object slip out of the gripper. Hence, the gripper should apply just enough torque to hold the object without slipping, which requires a sensor that can detect slippage. Phys.org (2017) states that the engineers did not take slipping into consideration because of the “high coefficient of friction between the silicone elastomers” of the fingers and the skin of the gripper. However, the team suggested accounting for slippage during grasping in future work, as they believed it could enhance their results. In contrast, Johansson and Pettersson-Gull (2018) specified that their intelligent robotic gripper uses software called “Object Motion Focused Grasping (OMFG)” working with its slip sensors. The software increases the grasping force whenever the object is detected moving, allowing fragile items to be grasped robustly without damaging them.
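To make the comparison concrete, the sketch below shows the general idea behind such a slip-reactive force loop: squeeze slightly harder only while the slip sensor reports object motion. It is a minimal illustration of the concept, not the OMFG implementation; the slip-speed input, force step, and force limit are assumptions.

    # Minimal sketch of a slip-reactive grasp-force loop (concept only, not OMFG code).
    # The slip-speed input, force step, and force limit are illustrative assumptions.
    def update_grip_force(current_force_n: float,
                          slip_speed_mm_s: float,
                          force_step_n: float = 0.5,
                          max_force_n: float = 20.0) -> float:
        """Raise the commanded grip force while the object is sliding; hold it otherwise."""
        if slip_speed_mm_s > 0.0:                          # slip sensor reports motion
            return min(current_force_n + force_step_n, max_force_n)
        return current_force_n                              # stable grasp: do not squeeze harder

    # Example: a cup held with 3 N starts sliding at 2 mm/s, so the next cycle commands 3.5 N.
    print(update_grip_force(current_force_n=3.0, slip_speed_mm_s=2.0))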

Another downside is the manufacturing process of the soft robotic gripper. The gripper module consists of actuators wrapped in a “sensor skin”, and both components must be fabricated separately. Making the actuator module is a five-step molding-based process, while fabricating the “sensor skin” requires stirring the “conductive-polydimethylsiloxane traces” overnight. The process involves multiple assembly steps over a long period of time, and the durability of the resulting gripper comes into question. A better fabrication method would be 3D printing, as shown by Truby, Katzschmann, Lewis, and Rus (2019), who used it to create their soft robotic fingers in a short period of time. Their 3D printing approach is notable because it can print “soft sensors” from an “organic ionogel-based sensor ink” for better feedback and response.

Another limitation is the lack of machine learning for object identification. Phys.org (2017) explained the gripper’s use of 2D and 3D “tactile object modeling”. When the gripper comes into contact with an object, the data points generated by the sensors are collected to form a “2D tactile object model”; a 3D rendition of the object is then built from several “2D outlines”, resembling the original shape of the object. However, the modeling process is imperfect, as the gripper is unable to capture convex objects or the slope of an object’s surface. Furthermore, the gripper does not actually identify objects; it only models them. In comparison, the robotic gripper from Homberg, Katzschmann, Dogar, and Rus (2019) was built with four algorithms. The first two handle the grasping system, while the remaining two provide object identification. Algorithm 3 is “trained object identification”, which uses an existing dataset of sensor data from “repeated grasps of known objects”. Algorithm 4 is “online object identification”, which identifies objects on the fly as the gripper grasps both new and previously seen items: the algorithm decides whether the grasped object is known or new, and either stores the data under the identified object’s label or creates a new label and adds the new information. Experiments testing the object identification algorithms reported success rates of 94.5% for Algorithm 3 and 85.7% for Algorithm 4.
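The decision logic of such an online identification step can be sketched in a few lines. The Python below is a simplified reconstruction of the idea described above (match a new grasp’s sensor readings against stored grasps, reuse the closest known label, otherwise register a new one); the feature format, distance metric, and threshold are assumptions, not the authors’ implementation.

    # Simplified sketch of online object identification, in the spirit of Algorithm 4
    # from Homberg et al. (2019). Feature format, metric, and threshold are assumed.
    import math

    known_grasps = {}             # label -> list of stored sensor-feature vectors
    NEW_OBJECT_DISTANCE = 5.0     # assumed cutoff: farther than this means "no known match"

    def _distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def identify_online(features):
        """Return the label of the closest known object, or register a new label."""
        best_label, best_dist = None, float("inf")
        for label, samples in known_grasps.items():
            for sample in samples:
                d = _distance(features, sample)
                if d < best_dist:
                    best_label, best_dist = label, d
        if best_label is not None and best_dist < NEW_OBJECT_DISTANCE:
            known_grasps[best_label].append(features)          # known object: file data under its label
            return best_label
        new_label = "object_{}".format(len(known_grasps) + 1)   # new object: create a fresh label
        known_grasps[new_label] = [features]
        return new_label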

In conclusion, the soft robotic gripper serves only as a basic robotic gripper compared with other innovative and unique grippers. It can carry out simple functions; however, it lacks what its competitors bring to the table and therefore does not stand out from the others.

References:
Homberg, Katzschmann, Dogar, & Rus. (2019). Robust Proprioceptive Grasping with a Soft Robot Hand. Retrieved from https://link.springer.com/article/10.1007/s10514-018-9754-1

Johansson & Pettersson-Gull. (2018). Intelligent Robotic Gripper with an Adaptive Grasp Technique. Retrieved from http://www.diva-portal.org/smash/get/diva2:1245002/FULLTEXT01.pdf

Phys.org. (2017). This Soft Robotic Gripper Can Screw In Your Lightbulbs For You. Retrieved from https://phys.org/news/2017-10-soft-robotic-gripper-bulbs.html

