
DETAILED CASE STUDY: ADVANCED VISION-GUIDED ROBOTIC WORKCELL

For this coffee roaster, the robot moving bags from pallets to the conveyor was a weak link. It often ripped bags, spewing beans on the floor. Plus, it relied on older technologies, using feel and memory to locate the next bag. This highly inefficient robot was costing the company 100,000 pounds of lost beans every year. Plant managers called on Concept to upgrade the system, making it more efficient, safer and more profitable.

PROJECT DETAILS

Project duration: About 6 months

Team
Client: 1 engineer
Concept Systems: 3 engineers

Concept Systems’ time on site: 20 days

Pain points

  • Wasted green beans due to flaws in bag handling system
  • Lost productivity
  • Coffee beans on the floor act like marbles, creating an unsafe condition

Results

  • Increased green bean infeed rate by 100%.
  • Eliminated loss of 100,000 pounds of beans annually.
  • Improved worker safety.
  • Achieved higher-speed bag handling.

Technology used

  • SICK Ranger camera
  • Reprogrammed FANUC robot
  • Laser scanner

Creating a smart robotic workcell guided by machine vision

The system we developed for coffee roaster applications models each pallet of coffee bean bags one pallet at a time, computing distance measurements through laser triangulation. Each pallet holds 20 bags in four layers of five. A new computer model is constructed for every tier of bags on the pallet. An advanced algorithm identifies unique features of the bags and determines the precise position and orientation of each bag in a tier. With this information, the robot is dispatched to load each bag onto the conveyor.
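
The loop below is a minimal sketch of that per-tier flow. The scan_tier, locate_bags, and pick_and_place callables and the Bag fields are hypothetical stand-ins for the scanner, vision software, and robot; this is illustrative, not Concept Systems' production code.

```python
from dataclasses import dataclass

BAGS_PER_TIER = 5
TIERS_PER_PALLET = 4

@dataclass
class Bag:
    x: float            # bag center in robot coordinates (m)
    y: float
    pick_height: float  # height of the pick point above the floor (m)
    yaw: float          # bag orientation (rad)

def unload_pallet(scan_tier, locate_bags, pick_and_place):
    """Run one pallet: scan each tier, locate its bags, pick the highest first.

    scan_tier()              -> height map of the current top tier
    locate_bags(height_map)  -> list[Bag]
    pick_and_place(bag)      -> moves one bag to the conveyor
    """
    for _ in range(TIERS_PER_PALLET):
        height_map = scan_tier()
        bags = locate_bags(height_map)
        if not bags:
            break  # pallet empty; the PLC would be signaled to swap pallets
        for bag in sorted(bags, key=lambda b: b.pick_height, reverse=True):
            pick_and_place(bag)
```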

Building a highly accurate visual model

Two SICK Ranger cameras and two lasers direct the robot’s motion through triangulation, a highly accurate distance-measuring technique that is effective when the scanned surface is essentially perpendicular to the lasers. As Concept engineers studied how the beans were presented to the robot, they positioned and angled the lasers and cameras in a configuration that captures the surface contours of the bags on the pallets. The cameras sit about seven feet above the pallet, providing a 53-inch-wide field of view (+/- 30 to 40 degrees), so the entire top of the pallet (48 in. wide) can be mapped. The cameras are capable of producing 30,000 samples per second.
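
As a rough illustration of how sheet-of-light triangulation turns what the camera sees into a distance, the snippet below converts the sideways shift of the laser stripe in the image into a surface height. It assumes a simplified geometry (downward-looking camera, laser tilted from vertical) and made-up calibration numbers, not the actual workcell parameters.

```python
import math

def stripe_height_mm(shift_px, mm_per_px, laser_angle_deg):
    """Height of a surface point from the observed laser-stripe shift.

    Simplified geometry: the camera looks straight down and the laser sheet
    is tilted by laser_angle_deg from vertical. A point raised by h shifts
    the stripe sideways by h * tan(angle), so h = shift / tan(angle).
    """
    shift_mm = shift_px * mm_per_px
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# Example with assumed numbers: a 120-pixel stripe shift at 0.7 mm/pixel
# and a 35-degree laser angle corresponds to roughly 120 mm of height.
print(round(stripe_height_mm(120, 0.7, 35)))  # ~120
```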

The customer asked for a configuration that made it impossible for the robot to hit any of the scanning equipment. This led us to a scanning gantry 13 feet above the pallet surface, an unusual scanning system configuration. By moving the scanners so far away from the coffee bags, we eliminated the possibility of an inadvertent robot collision, minimizing potential downtime.

Dealing with glare in the field of view

Glare from metal objects could distort the image. The glare was caused by laser light reflecting off the nails in the pallets, and polarizing filters placed on the cameras dulled its effect.

When the laser light hits the burlap bags, it diffuses and its polarization becomes random. But when it hits a shiny piece of metal, it reflects specularly, keeping its polarization and creating glare. A polarizing filter oriented to block that polarization eliminated almost all of the glare while removing only a small amount of the randomly polarized light coming off the bags, resulting in a clearer image.
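
The back-of-the-envelope arithmetic below (Malus's law) shows why this works: glare that keeps the laser's polarization is almost fully blocked by a crossed filter, while randomly polarized light from the burlap loses only about half its intensity. The numbers are illustrative, not measurements from this project.

```python
import math

def transmitted_fraction_polarized(filter_angle_deg):
    """Malus's law: fraction of linearly polarized light passing the filter."""
    return math.cos(math.radians(filter_angle_deg)) ** 2

# Glare from nails keeps the laser's polarization; cross the filter with it:
glare_pass = transmitted_fraction_polarized(90)  # ~0.0 -> glare blocked
# Diffuse light from the burlap is randomly polarized; about half passes:
bag_light_pass = 0.5
print(round(glare_pass, 6), bag_light_pass)      # 0.0 0.5
```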

Modeling how bags are oriented on the pallet

Using a 3D modeling program, the system identifies how bags of beans are oriented on the pallet by looking at the bag outlines. Since bags may be oriented slightly differently on each tier of the pallet, extreme precision is required. The previous robot did not function well in this respect because it had no vision and could not determine the exact location of the bags.

Our system uses a series of carefully constructed statistical rules to find the location of the edges of the bags to within a couple of inches. Bags are picked up from the middle, and with this system it is easy to reliably find the center point of each bag. Additionally, the height of the pick point can be determined, which was a primary driver in increasing system efficiency.
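
As a simplified sketch of that idea, the function below takes a scanned height map and a mask marking one bag's pixels and returns the bag's center point and pick height. The segmentation and statistical edge rules are assumed to have run already; the names and toy data are ours, not the project's.

```python
import numpy as np

def bag_pick_point(height_map, bag_mask):
    """Center pick point and pick height for one bag.

    height_map : 2D array of heights (mm) from the laser scan
    bag_mask   : boolean array marking pixels that belong to this bag
    """
    rows, cols = np.nonzero(bag_mask)
    center = (float(rows.mean()), float(cols.mean()))  # bag center (pixels)
    pick_height = float(height_map[bag_mask].max())    # top of the bag (mm)
    return center, pick_height

# Toy example: a flat 5x5 "bag" sitting 400 mm high on a 100 mm floor.
hm = np.full((20, 20), 100.0)
hm[5:10, 8:13] = 400.0
mask = hm > 300
print(bag_pick_point(hm, mask))  # ((7.0, 10.0), 400.0)
```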

Identifying the optimal pick up point

The system finds the edges of all of the visible bags, then calculates the pick points for all five bags in a layer in a single pass. It computes the target positions for the picks and sorts them by height, on the assumption that the highest pick point belongs to the bag on top. Because the camera is calibrated to robot coordinates, the robot can go directly to this spot and pick up the bag at 1.5 meters per second. As each layer of bags is removed, the 3D model is updated for the next layer, and the software directs the robot to the highest bag. When the camera no longer sees any bags, the pallet is considered empty and a PLC is signaled to eject it and bring in a new one.
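
The sketch below illustrates the coordinate hand-off: camera-frame pick points are mapped into robot coordinates with a calibration transform and sorted so the highest pick is taken first. The transform values and pick coordinates are placeholders, a simplified stand-in for the actual calibration and sequencing logic.

```python
import numpy as np

def camera_to_robot(points_cam, T_robot_from_cam):
    """Map camera-frame pick points (Nx3) into robot coordinates.

    T_robot_from_cam is the 4x4 homogeneous transform obtained from the
    camera-to-robot calibration (values below are placeholders).
    """
    pts = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_robot_from_cam @ pts.T).T[:, :3]

# Placeholder calibration: camera origin offset 2.0 m in x and 3.9 m up, no rotation.
T = np.eye(4)
T[:3, 3] = [2.0, 0.0, 3.9]

picks_cam = np.array([[0.1, 0.2, -1.1],   # one pick point per visible bag
                      [0.4, -0.3, -1.3],
                      [-0.2, 0.1, -1.2]])
picks_robot = camera_to_robot(picks_cam, T)

# Sort highest first: the highest pick point is assumed to be the top bag.
order = np.argsort(-picks_robot[:, 2])
for p in picks_robot[order]:
    print(p)
```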

Picking up the bags

Concept had a new end effector for the robot arm designed with 16 points of bag contact. It picks up the bags using pneumatically operated tines that pierce the bag as they rotate outward from the center. The tines push the burlap threads out of the way without tearing them, retaining the integrity of the bag. Pneumatics, a highly reliable power source, keeps downtime to a minimum, and its hoses are quick to replace when needed, making on-the-fly maintenance much easier. No sensors or electronics are mounted on the end effector, eliminating the need for special high-flex cabling or ruggedized sensors and leaving very few opportunities for downtime.