Don’t Spill the Beans: Increasing efficiency, safety and profitability for a coffee roaster.

A Vision-Guided Workcell Case Study



Making a system more efficient, safer and more profitable with a vision guided workcell

For this coffee roaster, the robot moving bags from pallets to the conveyor was a weak link. It often ripped bags, spewing beans on the floor. Plus, it relied on older technologies, using feel and memory to locate the next bag. This robot was costing the company 100,000 pounds of lost beans every year.



The Problem: Ripping Open Bags

The coffee roaster’s robot was supposed to unload pallets of 150-pound burlap bags full of raw beans and then place the bags one-by-one on a conveyor leading to the roaster. The robot picked up the bags with a pair of plier-like grippers, which were prone to tearing bags and spilling the beans. Additionally, the robot was not able to accurately determine the orientation of bags on the pallets, which slowed the unloading process.

Dealing with glare in the field of view

Glare from metal objects could distort the image. The glare was caused by laser light reflecting off pallet nails; polarizing filters placed on the cameras dulled its effect.

When the laser light hits the bags, it diffuses and its polarization becomes random. When it hits a shiny piece of metal, however, the reflection stays polarized and creates glare. A polarizing filter eliminated almost all of the glare while removing only a small amount of the randomly polarized light coming off the bags, resulting in a clearer image.
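The filtering effect can be sketched numerically with Malus's law, which governs how much polarized light passes a linear polarizer. This is a minimal illustration only; the intensities, angles, and function name are hypothetical, not measurements from the system.

```python
import math

def transmitted_intensity(i0, polarized, filter_angle_deg=90.0):
    """Intensity passing a linear polarizer.

    Polarized light follows Malus's law (I = I0 * cos^2(theta));
    randomly polarized (diffuse) light transmits about half its
    intensity regardless of filter orientation.
    """
    if polarized:
        theta = math.radians(filter_angle_deg)
        return i0 * math.cos(theta) ** 2
    return i0 / 2.0

# Glare off a nail keeps the laser's polarization: a filter crossed
# at 90 degrees blocks nearly all of it.
glare = transmitted_intensity(100.0, polarized=True, filter_angle_deg=90.0)

# Diffuse light off burlap is randomly polarized: roughly half survives.
bag_light = transmitted_intensity(100.0, polarized=False)

print(glare)      # effectively 0
print(bag_light)  # 50.0
```

The asymmetry between the two return paths is the whole trick: the filter costs half the useful signal but removes nearly all of the glare.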



The Solution: A Vision Guided Work Cell

A new end effector with 16 points of contact was designed for the robot; its pneumatically operated tines pick up each bag without tearing it. The control system incorporates an advanced 3D vision system and PC software that model the precise position and orientation of each tier of bags on the pallet. The camera image is translated into robot coordinates, allowing the robot to go directly to the next bag and pick it up.

Creating a smart robotic work cell

The vision guided work cell we developed for coffee roaster applications models each pallet of coffee bean bags, one pallet at a time, computing distance measurements through laser triangulation. Each pallet holds 20 bags in four layers of five. A new computer model is constructed for every tier of bags on the pallet. An advanced algorithm identifies unique features of the bags and determines the precise position and orientation of each bag in a tier. With this information, the robot is dispatched to load each bag on the conveyor.

Building a highly accurate visual model

Two SICK Ranger cameras and two lasers direct the robot’s motion through triangulation, a highly accurate distance-measuring technique that is effective when the scanned surface is essentially perpendicular to the lasers. As Concept engineers studied how the beans were presented to the robot, they positioned and angled the lasers and cameras in a configuration that captured the surface contours of the bags on the pallets. The cameras sit about seven feet above the pallet, providing a 53-inch-wide field of view (+/- 30 to 40 degrees), so the entire 48-inch-wide top of the pallet can be mapped. The cameras are capable of producing 30,000 samples per second.
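The triangulation geometry can be sketched as follows, assuming a simple pinhole camera looking straight down and a vertical laser sheet. The baseline, focal length, and heights below are illustrative numbers, not the actual system's parameters.

```python
def surface_height(x_px, baseline_m, camera_height_m, focal_px):
    """Height of a scanned point above the pallet floor, by triangulation.

    A pinhole camera looks straight down from camera_height_m, offset
    horizontally from a vertical laser sheet by baseline_m. A laser
    point at height z images at x = f * b / (H - z), so z = H - f*b/x:
    the higher the surface, the farther the laser line shifts in the image.
    """
    return camera_height_m - focal_px * baseline_m / x_px

# Camera about 7 ft (2.1 m) up, 0.5 m baseline, focal length 1000 px.
# The bare pallet floor (z = 0) images at 1000 * 0.5 / 2.1, about 238 px;
# the top of a bag 0.3 m tall images at 1000 * 0.5 / 1.8, about 278 px.
print(round(surface_height(1000 * 0.5 / 1.8, 0.5, 2.1, 1000), 3))  # 0.3
```

Sweeping the laser line across the pallet and evaluating this per pixel yields the height map from which each tier model is built.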

The customer asked for a configuration in which it was impossible for the robot to hit any of the scanning equipment. This led us to a scanning gantry 13 feet above the pallet surface, an unusual scanning-system configuration. By moving the scanners so far from the coffee bags, we eliminated the possibility of an inadvertent robot collision, minimizing potential downtime.

Modeling how bags are oriented on the pallet

Using a 3D modeling program we developed, the system identifies how bags of beans are oriented on the pallet by examining the bag outlines. Since bags may be oriented slightly differently on each tier of the pallet, extreme precision is required. The previous robot performed poorly in this respect: with no vision, it could not determine the exact location of the bags.

Our system uses a series of carefully constructed statistical rules to find the edges of the bags to within a couple of inches. Bags are picked up from the middle, and with this system it is easy to reliably find the center point of each bag. The system also determines the height of the pick point, which was a primary driver of increased system efficiency.
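As a rough sketch of the idea (a simplified stand-in for the actual statistical rules), a bag's center can be taken as the centroid of its detected outline points and its pick height as the highest sampled point. The coordinates and function name below are made up for illustration.

```python
def pick_point(edge_points):
    """Estimate a bag's pick point from its detected outline.

    edge_points: list of (x, y, z) surface samples along the bag outline,
    in robot coordinates. Returns (cx, cy, z): the centroid of the
    outline in the plane, plus the highest sampled height.
    """
    n = len(edge_points)
    cx = sum(p[0] for p in edge_points) / n  # center x
    cy = sum(p[1] for p in edge_points) / n  # center y
    z = max(p[2] for p in edge_points)       # pick height
    return (cx, cy, z)

# Four corner samples of one bag, roughly 0.9 m x 0.5 m, ~0.4 m tall.
outline = [(0.0, 0.0, 0.40), (0.9, 0.0, 0.41),
           (0.9, 0.5, 0.42), (0.0, 0.5, 0.41)]
print(pick_point(outline))  # (0.45, 0.25, 0.42)
```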

Identifying the optimal pick up point

The system finds the edges of all visible bags, then calculates the pick points for all five bags in a layer in a single pass. It computes the target positions for the picks and sorts them by height, on the assumption that the highest pick point belongs to the bag on top. Because the camera is calibrated to robot coordinates, the robot can go directly to that spot and pick up the bag at 1.5 meters per second. As each layer of bags is removed, the 3D model is updated for the next layer, and the software directs the robot to the highest bag. When the camera no longer sees any bags, the pallet is judged empty and a PLC is signaled to eject it and bring in a new one.
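The pick-ordering step reduces to selecting the highest pick point in the current model. A minimal sketch, with hypothetical coordinates and names:

```python
def next_pick(pick_points):
    """Choose the next bag: the highest pick point is assumed to be on top.

    pick_points: list of (x, y, z) pick targets in robot coordinates.
    Returns None when no bags remain, i.e. the pallet is empty and
    a PLC should be signaled to eject it.
    """
    if not pick_points:
        return None
    return max(pick_points, key=lambda p: p[2])

# Three remaining bags in a layer; the one at z = 0.82 m is on top.
layer = [(0.2, 0.3, 0.80), (1.0, 0.3, 0.82), (0.6, 0.8, 0.79)]
print(next_pick(layer))  # (1.0, 0.3, 0.82)
print(next_pick([]))     # None -> eject the pallet
```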

Picking up the bags

Concept designed a new end effector for the robot arm with 16 points of bag contact. It picks up each bag using pneumatically operated tines that penetrate the burlap as they rotate outward from the center. The tines push the burlap threads aside without tearing them, retaining the integrity of the bag. Pneumatics, a highly reliable power source, keeps downtime to a minimum, and its hoses are quick to replace when needed, making on-the-fly maintenance much easier. No sensors or electronics are mounted on the end effector, eliminating the need for special high-flex cabling or ruggedized sensors and leaving very few opportunities for downtime.



The Results

The infeed bottleneck was removed, improving efficiency and safety. With the vision guided work cell, the robot can handle six bags per minute, twice as fast as the previous robot. The new infeed system also allows for future plant efficiencies, as it can unload beans faster than the current operation can process them.



Project Details

Project Duration

About 6 Months

Team

Client: 1 engineer

Concept Systems: 3 engineers

Technology Used

SICK Ranger camera

FANUC robot (reprogrammed)

Laser scanner