Everything has a good side and a bad side, and cobots are no exception.
When dealing with machines, safety is a legitimate concern. “Workers and consumers who come in contact with cobots understandably have reservations about safety, benefits, behavior, etc. But if the benefits outweigh the risks, then acceptance will widen,” says Oliver.
That said, accidents involving cobots are rare. One such incident was reported in 2016, when a roving security cobot at a California shopping center bumped into a toddler, knocking him down and running over his foot.
However, even one accident is too many.
Well-defined regulations are needed to keep the peace between man and machine. Because the technology is still nascent, regulations have yet to be fully developed and standardized, but some countries have made progress.
In Germany, for example, guidelines exist to protect humans working alongside cobots. They cover manual guidance, in which a human directly controls the cobot; speed and distance monitoring, in which the machine itself prevents contact between a human and a moving robot; safety-rated supervised standstill, in which the cobot ‘freezes’ in place if a human enters its immediate space; and power and force limitation, in which the contact force between a human and a cobot is technically capped at a safe level.
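To make two of these modes concrete, here is a toy sketch of how speed and distance monitoring and a supervised standstill might be expressed in software. The threshold values and function names are invented for illustration; real controllers implement this in certified safety hardware, not application code.

```python
# Toy illustration of two guideline modes: speed and distance
# monitoring, and a supervised standstill ("freeze").
# All thresholds are hypothetical, not from any real standard.

STOP_DISTANCE_M = 0.5      # human this close -> cobot freezes in place
SLOWDOWN_DISTANCE_M = 2.0  # human this close -> cobot reduces speed
FULL_SPEED_MPS = 1.0       # nominal tool speed
REDUCED_SPEED_MPS = 0.25   # speed while a human is nearby

def allowed_speed(human_distance_m: float) -> float:
    """Return the maximum permitted tool speed for a given
    human-to-cobot separation distance."""
    if human_distance_m <= STOP_DISTANCE_M:
        return 0.0  # supervised standstill: freeze until the space is clear
    if human_distance_m <= SLOWDOWN_DISTANCE_M:
        return REDUCED_SPEED_MPS  # speed and distance monitoring
    return FULL_SPEED_MPS

if __name__ == "__main__":
    for d in (3.0, 1.0, 0.3):
        print(f"human at {d} m -> max speed {allowed_speed(d)} m/s")
```

The key design point is that the permitted speed is a monotonic function of separation distance: the closer a person gets, the less kinetic energy the machine is allowed to carry, reaching zero before contact is possible.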
In the future, we might see cognitive cobots that can sense obstacles and adjust their speed or reverse course to avoid collisions, handle unforeseen situations even in complex environments, and improve their behavior through learning.
Other factors hindering the large-scale adoption of cobots are privacy concerns, the threat of cyberattacks, and fears of technical glitches. Cobots equipped with cameras and controlled via servers raise questions about privacy, while the risk of cyberattacks always looms over programmable tools. A technical glitch could prove costly if companies have to recall a product because of quality issues introduced by a malfunctioning cobot.