Automation standards update: The current version of the Industrial Robot Safety Standard, ANSI/RIA R15.06-2012, is a U.S. national adoption of the ISO 10218-2011, Part 1, Robots, and Part 2, Robotic Systems. Look for new versions of these documents in the 2020 or 2021 timeframe. Also see information on collaborative robots, loading and unloading stations, end-effectors, and lockout and tagout.
Author: Carole Franklin
Various standards and guidance documents govern and help those working with robotics and motion control. The current version of the Industrial Robot Safety Standard, ANSI/RIA R15.06-2012, is a U.S. national adoption of the ISO 10218-2011, Part 1, Robots, and Part 2, Robotic Systems. Those in compliance with the R15.06, 2012 version also are in compliance with the 10218, 2011 version. These standards will continue to be the current versions at least through 2020. Look for new versions of these documents in the 2020 or 2021 timeframe.
The ISO (international standards) group will begin updating the 10218 standard later this year; the revision process is expected to take about three years, which gives us the 2020 target publication date. Following that revision of the 10218, our standards committees in the U.S. will revise the R15.06 as well. In both the ISO and ANSI (U.S.) robotics communities, we currently are working on supplemental documents to help people apply these standards.
Some key things to know about robots and robotic systems:
- For the purpose of ISO 10218 and ANSI/RIA R15.06, it’s important to distinguish between the terms “robot” and “robot system.”
- “Robot” includes the robot arm and controller; “robot system” includes the robot, the end-effector (end-of-arm tooling or EOAT), and any other machinery, equipment, devices, etc., supporting the robot in performing its task.
- The ISO 10218-1 and -2:2011 and ANSI/RIA R15.06-2012 require that a risk assessment be conducted for each integrated robot application. It is the integrator’s responsibility to ensure that this required risk assessment is completed.
- RIA TR R15.306:2016 describes one task-based risk assessment method that meets the requirements of the standard.
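To make the idea of a task-based risk assessment concrete, the sketch below shows one generic way such a scoring pass could be organized in code. The factor names, scales, and thresholds here are hypothetical illustrations only, not the specific method defined in RIA TR R15.306:2016 or a requirement of R15.06.

```python
# Generic, illustrative task-based risk scoring sketch.
# The categories and thresholds are hypothetical examples,
# not the scoring scheme defined in RIA TR R15.306:2016.
from dataclasses import dataclass

@dataclass
class HazardRating:
    task: str
    hazard: str
    severity: int   # 1 = minor injury ... 3 = serious injury
    exposure: int   # 1 = infrequent ... 3 = frequent/continuous
    avoidance: int  # 1 = likely avoidable ... 3 = not avoidable

def risk_level(r: HazardRating) -> str:
    """Map the three factors to a coarse risk level (illustrative thresholds)."""
    score = r.severity + r.exposure + r.avoidance
    if score >= 8:
        return "HIGH - engineered safeguards required"
    if score >= 5:
        return "MEDIUM - safeguards and/or awareness means required"
    return "LOW - administrative controls may suffice"

ratings = [
    HazardRating("load part into fixture", "crush between EOAT and fixture", 3, 3, 2),
    HazardRating("clear minor jam", "unexpected robot motion", 3, 1, 2),
]

for r in ratings:
    print(f"{r.task}: {r.hazard} -> {risk_level(r)}")
```

In practice, each task/hazard pair identified for the integrated application would be rated this way, and the resulting risk level drives the selection of risk-reduction measures.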
Collaborative robotics, safety
In the U.S., these supplemental documents are registered with ANSI and are known as Technical Reports, or TRs. We are just about to publish a U.S. version of the ISO TS 15066:2016, the RIA TR R15.606-2016, on safety of collaborative robot systems. We also are starting work on two new TRs in the U.S., one of which is on guidance for users, which we hope to complete by the end of 2017; the other is on testing methods for power and force limiting for collaborative robot systems, which will likely be published in 2018.
In the ISO world, supplemental documents can be either Technical Reports (TRs), similar to the ANSI-registered TRs, or Technical Specifications (TSs). The difference is that the ISO TS describes requirements that are expected to mature to an International Standard (IS) level in the future. In the standards world, this means a TS is a “normative” document and can contain normative requirements. On the other hand, the TR is an “informative” document—that is, it cannot contain requirements but can only inform. The recently-published ISO TS 15066:2016 on collaborative robot safety is an example of a normative document. Because it is so recently published, it will not be revised for several years.
Some key things to know about collaborative robot safety include:
- The application is key. There are some tasks which are simply not well suited to collaborative operation, even if the robot that is performing the task is power- and force-limited and called a “collaborative” robot.
- The concept of a robot system is also important. The robot is not working in isolation. The workstation, the end-effector, the workpiece itself, the potential presence of multiple robots and other equipment in a cell are just some of the many factors that also must be taken into account when planning for a safe robotic installation. This is still the case even when using robots designed for collaboration.
- A risk assessment of the collaborative robot system is also important. Even when using a robot designed for collaborative use, it’s really important to assess and mitigate any risks of the system—precisely because we anticipate people and robots working in close proximity.
- It’s important to understand the foundational standard in addition to the collaborative supplement. TS 15066 builds upon the 10218 standard. That is, effective use of TS 15066 assumes that the robot system under consideration is in compliance with Part 1 and Part 2 of ISO 10218:2011.
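For readers wanting a feel for what power and force limiting means in practice, the following is a minimal sketch of the kind of speed check the biomechanical-limit approach in TS 15066 implies: given a maximum permissible contact force for a body region, an effective spring constant for the contact, and the effective masses involved, bound the relative speed at impact. The relationship and all numeric values below are illustrative placeholders; actual limits and methods must come from the published TS 15066 tables and annexes.

```python
import math

def max_relative_speed(f_max_newton: float, k_spring: float,
                       m_robot_moving: float, m_body_region: float) -> float:
    """
    Illustrative transient-contact speed bound: keep the transferred energy
    within the permissible contact force for a body region. A sketch of the
    TS 15066 approach, not a substitute for the published document.
    """
    # Reduced (effective) mass of the two-body collision
    mu = (m_robot_moving * m_body_region) / (m_robot_moving + m_body_region)
    # Energy limit E = F_max^2 / (2k); require 0.5 * mu * v^2 <= E
    return f_max_newton / math.sqrt(mu * k_spring)

# Placeholder numbers only (not values from TS 15066):
v = max_relative_speed(f_max_newton=140.0,   # assumed transient force limit, N
                       k_spring=75_000.0,    # assumed effective spring constant, N/m
                       m_robot_moving=12.0,  # assumed moving mass of robot + payload, kg
                       m_body_region=0.6)    # assumed effective mass of body region, kg
print(f"max relative speed ~ {v:.2f} m/s")
```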
Loading, unloading; end-effectors
The ISO group also is working on two TRs, both of which are expected to be completed in 2017. One is on the safety of manual load/unload stations, and the other is on end-effector safety.
Those with interest in robotics in food and beverage applications may be interested in a non-RIA standard.
The “3-A Sanitary Standard 3-A 103-00, Robot-based Automation Systems,” for use in the food industry, was published in September 2016, by 3-A Sanitary Standards Inc.
Outside the robot-specific world, there are other standards on industrial safety that RIA members may want to know about. One is the recently updated ANSI/ASSE Z244.1 on Lockout, Tagout and Alternative Methods, published in late 2016.
Another, B11.20 on Safety Requirements for Integrated Manufacturing Systems, is being updated now, with an anticipated publication date in 2017.
Carole Franklin is director of standards development, Robotic Industries Association (RIA), part of Association for Advancing Automation (A3), a CFE Media content partner. Edited by Mark T. Hoske, content manager, CFE Media, Control Engineering, email@example.com.
- ISO 10218-2011, Part 1, Robots, and Part 2, Robotic Systems, and the U.S. adoption of it, R15.06, will likely be revised in the 2020-2021 timeframe.
- ISO TS 15066:2016 on collaborative robot safety was published in February 2016. The U.S. adoption of this, RIA TR R15.606-2016, will be published soon.
- Standards covering lockout/tagout and safety of integrated manufacturing systems also are important.
Copyright: Copyright 2017 CFE Media LLC
Manufacturers need to think through their business model with the Industrial Internet of Things (IIoT) or Industrie 4.0 and ask how a product can become a service with a long-term revenue stream.
Author: Mike James
There is much talk about the Industrial Internet of Things (IIoT). However, ‘things’ are just part of the plumbing. We connect devices, giving them no more than nominal intelligence. The real innovation is the internet of services. Manufacturers need to think through their business model and ask how a product can become a service with a long-term revenue stream. Many manufacturers already recognize this and are exploiting the opportunity to improve their operations. For example:
Tesla is delivering vehicles with hardware and software that can be upgraded. Its cars are sensor-ready, and software upgrades delivered via the internet will add intelligence over time. Customers can pay for these upgrades, which generates extra revenue for Tesla.
Otis is supplying elevators/lifts with sensors that send data to its cloud. The data is analyzed, and Otis sells a predictive maintenance service package, again adding a long-term revenue stream.
Additionally, a catering company in The Netherlands is supplying custom meals to hospitals. Each meal is prepared for the patient based upon data received from the hospital about the patient’s needs. The meals are prepared in an automated plant.
The individualization of mass production and the internet of services add new revenue. The smart manufacturing plant needs to be flexible and deliver intelligent products. A common misunderstanding is to treat this as a cost-saving exercise; it is a new business model to increase revenue and profitability.
It’s important to map out opportunities and match them against the realities of today’s technology. One manufacturer that was investing heavily in a factory of the future did not build this type of strategy. Enthusiastic engineers ordered additive manufacturing (3-D printing) machines only to learn they could not connect them to their network using international standards. The company paid a heavy price for the error, and it damaged the initiative’s reputation. It’s worth taking independent advice before completing a company’s manufacturing strategy.
The best way to avoid these mistakes and build a successful strategy is to learn from other manufacturers in a safe space. MESA is a safe harbor to share best practices and lessons learned so that the industry can collectively rise to Industrie 4.0.
Mike James is chairman for MESA International Board of Directors. This article originally appeared on MESA International’s blog. MESA International is a CFE Media content partner. Edited by Chris Vavra, production editor, CFE Media, firstname.lastname@example.org.
Copyright: Copyright 2017 CFE Media LLC
Attending the recent Automate Show in Chicago was an extraordinary experience that allowed me and more than 20,000 other attendees an opportunity to peer into the future of industrial robotics. Being part of a company that is at the forefront of the industrial robotics and manufacturing automation industries still provides only one perspective, and Automate brought together leaders from all corners of the industry, such as Fanuc, ABB, Kuka, Keyence and Cognex, to showcase advances and share insights. The range of technologies on display that were designed to enhance processes, improve product quality and lower manufacturing costs was astonishing. I walked away from the show with a deeper sense of awareness of two notions: the rise of robots is upon us, and machine vision provides robots with the artificial intelligence that will forge the future of robotics in our increasingly globalized society.
The Rise of Robots
As many in automation are aware, robots are becoming an increasingly popular answer for dangerous or repetitive tasks: grinding, deburring, bin picking, part inspection, etc. Several manufacturers and esteemed integrators assembled elaborate booths displaying various robot capabilities, many currently in use and others as possible future applications. This alone is indicative of the rise of robots, but it is only the beginning. The leading robot manufacturers all appear to be focused on making robots simpler to program and configure and easier to integrate with technologies that create incredible functionality. The result: collaborative robots.
The show floor featured a number of collaborative robots performing a wide variety of tasks, from part handling to packaging; some even bagged candy to hand out or served ice cream in a cone. Using various sensing technologies, the applications for collaborative robots working alongside human counterparts seem nearly limitless. Long gone, it seems, are the days of robots tucked away in the corner behind hard guarding, wrapped in ominous metal fencing. Today’s robots are becoming more flexible in their range of applications, friendly in their interface, and free to be placed anywhere on the manufacturing floor.
Forging the Future
After seeing the surprising versatility of machine vision applications on display at Automate, it became clear that machine vision is the technological advancement that will launch industrial robotics into the future. When combined with the interconnectivity of the Industrial Internet of Things (IIoT) and other smart tools such as mobile analytics, machines equipped with technologies like 3D embedded vision, multispectral and hyperspectral imaging, and deep learning will possess a primitive form of artificial intelligence that allows greater flexibility in application and the ability to actively learn processes without programming.
For example, Cognex and Keyence both have solutions that can compare 8-10 different part characteristics in a fraction of a second. These are designed to be mounted on the end of a robot, giving a complete solution capable of part picking and inspection. Part picking and inspection are jobs that are often hard to fill, and results can vary widely as operators tire throughout long shifts.
In another instance, Fanuc is working on the ability to configure a robot through learning instead of programming, specifically the capability to give a robot a task, like picking objects out of a bin and putting them into another container. In this scenario, once the robot is configured, it spends some amount of time figuring out how to complete the task via trial and error, and within a short time the robot will have mastered the task as well as if it had been programmed by an engineer. It seems apparent that as we continue to combine advancing vision technologies with low-cost processing power, the possibilities for what can be accomplished are endless.
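The learning loop described above can be pictured roughly like the sketch below: try a grasp strategy, record whether it worked, and steer future attempts toward the strategies with the best observed success rate. This is a generic, simulated illustration of trial-and-error learning (a simple epsilon-greedy scheme), not Fanuc's actual implementation.

```python
import random

# Generic trial-and-error learning sketch (epsilon-greedy bandit).
# The grasp "strategies" and their hidden success rates are simulated;
# this is an illustration, not any vendor's actual algorithm.
strategies = ["top-down grasp", "angled grasp", "edge grasp"]
true_success = {"top-down grasp": 0.55, "angled grasp": 0.80, "edge grasp": 0.35}

attempts = {s: 0 for s in strategies}
successes = {s: 0 for s in strategies}
epsilon = 0.1  # fraction of attempts spent exploring

def pick_strategy() -> str:
    if random.random() < epsilon or all(a == 0 for a in attempts.values()):
        return random.choice(strategies)  # explore
    # exploit: choose the strategy with the best observed success rate
    return max(strategies, key=lambda s: successes[s] / max(attempts[s], 1))

for _ in range(500):
    s = pick_strategy()
    attempts[s] += 1
    if random.random() < true_success[s]:  # simulated pick outcome
        successes[s] += 1

for s in strategies:
    rate = successes[s] / max(attempts[s], 1)
    print(f"{s}: {attempts[s]} attempts, observed success {rate:.0%}")
```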
Although the next Automate isn’t until April of 2019, I highly recommend that you get this event on your calendar early and plan to attend. The Automate show attracts more than 20,000 visitors, all looking for new ways to enhance their manufacturing processes, lower production costs, and increase their competitive edges.
While Microsoft’s new HoloLens won’t replace the universe’s most loveable robot, R2D2, any time soon, it can give you a taste of what it might be like to live in a Star Wars universe.
Concept Systems’ Doug Taylor recently tested out the HoloLens, and here’s what he has to say about it:
The HoloLens is kind of like a marriage of a holographic projector (à la Star Wars’ R2D2), a Kinect, a computer, and a stereo headset, with other cool accoutrements. Basically, it is a 3D computer interface. And it works. It works really, really well.
The device comes in a hard, blob-like case that holds the headset, a wall charger and cable, and a clicker. The device fits many different styles of heads, from my massive (but tasteful) pumpkin-shaped head to my wife’s petite and attractive apple-sized head. Above the ears are a pair of speakers that aren’t too loud, but are pretty close to great. The device has all sorts of lenses and whatnot, and what looks like four Kinect-style 3D mappers pointed in various directions.
First, let’s talk about localization, or basically how well it knows where it is. Short answer: perfectly. When you put a screen on the wall, it stays there regardless of how you move in the room, and when you get closer to it, the screen gets larger and totally appears to be stuck to the wall. If you place screens sticking out of things, they stay there. You can place screens or what have you anywhere. I fired up Microsoft Edge, logged into Netflix and started watching a movie. I walked around the room and could hear the movie, but could only see it if I looked where the movie was playing. The localization engine on this device is as close to flawless as I could imagine. No, I do not know how it does it, but I guess that it is just like a Kinect but with a 6-axis accelerometer.
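If my guess is right (an IMU fused with Kinect-style depth mapping), the basic idea can be sketched as a complementary filter: integrate the fast but drifting IMU motion and keep correcting it with slower, drift-free pose fixes from the depth-based map of the room. The snippet below is purely a speculative illustration, not Microsoft's actual tracking pipeline.

```python
# Speculative sketch of IMU + depth-map pose fusion (complementary filter,
# one rotation axis only). Not Microsoft's actual tracking algorithm.

def fuse(gyro_rate_dps, depth_pose_deg, dt=0.01, alpha=0.98):
    """Blend fast-but-drifting gyro integration with slow-but-absolute map fixes."""
    angle = depth_pose_deg[0]
    fused = []
    for rate, absolute in zip(gyro_rate_dps, depth_pose_deg):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * absolute
        fused.append(angle)
    return fused

# Simulated data: head turning at 30 deg/s with a slightly biased gyro,
# while the depth-based tracker reports the true (drift-free) angle.
true_angles = [30.0 * 0.01 * i for i in range(200)]
gyro = [30.0 + 2.0 for _ in range(200)]  # +2 deg/s gyro bias
print(fuse(gyro, true_angles)[-1], "vs true", true_angles[-1])
```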
Next, let’s talk specs on this bad boy:
· See-through holographic lenses (waveguides)
· 2 HD 16:9 light engines
· Automatic pupillary distance calibration
· Holographic Resolution: 2.3M total light points
· Holographic Density: >2.5k radiants (light points per radian)
· 1 IMU
· 4 environment understanding cameras
· 1 depth camera
· 1 2MP photo / HD video camera
· Mixed reality capture
· 4 microphones
· 1 ambient light sensor
· Spatial sound
· Gaze tracking
· Gesture input
· Voice support
Input / Output / Connectivity
· Built-in speakers
· Audio 3.5mm jack
· Volume up/down
· Brightness up/down
· Power button
· Battery status LEDs
· Wi-Fi 802.11ac
· Micro USB 2.0
· Bluetooth 4.1 LE
Battery Life
· 2-3 hours of active use
· Up to 2 weeks of standby time
· Fully functional when charging
· Passively cooled (no fans)
· Intel 32 bit architecture with TPM 2.0 support
· Custom-built Microsoft Holographic Processing Unit (HPU 1.0)
· 64GB Flash
· 2GB RAM
What’s in the box
· HoloLens Development Edition
· Carrying case
· Charger and cable
· Microfiber cloth
· Nose pads
· Overhead strap
OS and Apps
· Windows 10
· Windows Store
What you need to develop
· Windows 10 PC able to run Visual Studio 2015 and Unity
Now for the downsides. The image format is 16:9, and the resolution of the device is something like 1080×607. Also, the device draws the colors sequentially (one color after another) rather than all at once like a TV. When you move your head quickly, the red, green, and blue images land at slightly different locations, since they are drawn at slightly different times. When you hold still, it does a good job, but if you are moving around quickly, it offsets things a little. Normally this is not a problem; since “moving” requires neck motion, moving quickly is not something you usually do.
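As a rough back-of-the-envelope illustration of why fast head motion separates the colors: if the red, green, and blue fields are drawn a few milliseconds apart, a head turning at a couple hundred degrees per second sweeps a visible fraction of a degree between fields. The field timing assumed below is a placeholder, not the HoloLens's actual refresh behavior.

```python
# Illustrative color-fringing estimate for a color-sequential display.
# The field interval is an assumed placeholder, not HoloLens's actual timing.

head_speed_deg_per_s = 200.0     # brisk head turn
field_interval_s = 1.0 / 180.0   # assume 3 color fields per 60 Hz frame

offset_deg = head_speed_deg_per_s * field_interval_s
print(f"angular offset between successive color fields ~ {offset_deg:.2f} degrees")
# ~1.1 degrees, easily visible as red/green/blue fringes on hologram edges
```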
The lower resolution is less of an issue than it sounds because the HoloLens does not have a single display but two, one for each eye, and for a wearable device, 1080×607 times two is pretty impressive. Think of it as roughly 1080×1200 of effective resolution, split between the eyes. The virtual canvas is effectively limitless (since holograms are anchored in the world around you as you move), but what you see at any one moment is limited to the device’s field of view.
In reality, though, the HoloLens requires you to “mouse” with your whole head and hold your hand out in front of you to interact (so it can be seen by the cameras). All this is to say that you would have to be pretty hardcore to use this instead of a TV for watching a movie. Also, the device is not exactly heavy, but after 15 minutes of it sitting on your nose, you notice it for sure.
Enough with the downsides. This thing is absolutely the coolest device I have ever seen, or even heard of. It is many times better than I expected. I giggled like a schoolgirl being asked on her first date as I gleefully played with it. It is not just awesome; it is an ALL-TIME MUST-HAVE category device for any geek who loves cool, which means that when these things go on sale, we are all going to be out some serious coin, because once you take a drink of the elixir, you will be transported down the rabbit hole and the world will never look the same again.
Because computers are always improving, new technologies and solutions are making robotics smarter and more capable. Think of your smart cellphone: if it’s not replaced every 2-3 years, many of its applications and functions become obsolete, rendering the phone almost useless.
One major upgrade in robotic technology is one that allows robotic vision in three dimensions (3-D) instead of two (2-D). Robotics using 2-D technology has been around for years, but for intricate work, 2-D visioning wasn’t the best way to get the job done. Thanks to improved technology and newer cameras, robotic systems can now be upgraded to see the world the same way we see it—in 3-D.
At Concept Systems, Inc., we put these visioning upgrades to work with a cake maker and decorator. The decorator used a robotic system that photographed the top of each cake as it progressed down the conveyor. Further down the line, an automated system created borders and designs on top of the cake. The camera could map the surface area of the cake top exactly, but every cake is a little bit different: some were a tiny bit taller or shorter than the programmed height, and that height variance meant some of the designs were distorted on the finished product.
Enhancing a system doesn’t have to be costly. The cake maker earned back its investment in the updated robotic vision system in only 8 months. In some instances, upgrading from 2-D to 3-D can automate tasks that were once done manually, and those cost savings add up! The cake maker’s original overhead photo visioning system also made creating borders at the bottom of the cake almost impossible, so staff members were creating bottom borders by hand, costing time and labor.
While the cake maker project sounds costly and time-consuming, it didn’t require a lot of downtime or new equipment expense. Existing robots were retrofitted with a new 3-D visioning system, and laser scanners were installed. The scanners’ lasers used triangulation to create an exact 3-D image of each cake before it passed down the line to the robot decorators.
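As a simplified illustration of how laser triangulation recovers height: the scanner projects a laser line at a known angle and a camera watches where the line lands, so the sideways shift of the line maps directly to a height difference. The geometry and numbers below are a generic sketch, not the actual configuration used on the cake line.

```python
import math

# Generic laser-triangulation height sketch (not the actual cake-line setup).
# A laser line hits the conveyor at a known angle; an offset camera sees the
# line shift sideways when a taller object (the cake) intercepts the beam.

laser_angle_deg = 30.0   # assumed angle of the laser sheet relative to vertical

def height_from_shift(observed_shift_mm: float) -> float:
    """Convert the lateral shift of the laser line (in world units) to object height."""
    return observed_shift_mm / math.tan(math.radians(laser_angle_deg))

# Example: the line appears shifted 40 mm from its position on the bare conveyor
print(f"estimated cake height ~ {height_from_shift(40.0):.1f} mm")  # ~69.3 mm
```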
The combination of the latest 3-D vision technology coupled with SCARA robots can now be used to bring automation to industries beyond the food industries. Updating and modernizing complex machinery and robots just makes sense. It can lower costs, increase productivity, simplify processes and even increase machine safety.
Let Concept Systems, Inc. put this upgraded technology to work for your business.