Robots present a cyber risk

Bob Violino | Nov. 10, 2016
Responsibility for the security of robots lies with many people across an organisation.

The prospect of an army of robots marching in unison to launch an attack on an unsuspecting city belongs in the realm of science fiction—as do most images of menacing autonomous machines wreaking all kinds of havoc on civilization.

That’s not to say robotics is free from security and safety threats, however. In fact, experts say the growing use of robots by manufacturers, retailers, healthcare institutions and other businesses can present a number of cyber risks.

There are two primary issues related to security and robotics, says Michael Overly, a partner and information security attorney at law firm Foley & Lardner.

First, these machines are generally integral to assembly line operations and other similar activities, Overly says. “An attack could literally bring a manufacturing or assembly plant to its knees,” he says. “We have seen this very outcome in a ransomware attack targeted at robotic assemblers in a plant in Mexico.” In that case, the ransomware locked up the specifications files from which the robots drew their operating parameters, he says.

Second, robots are generally large and capable of causing significant bodily and property damage if operated other than in accordance with their specifications. “If the subject of an attack, the machines could cause dramatic harm, both to individuals and to property,” Overly says.

The difference between actual and potential risks with robot security incidents “is a function of the complexity of the algorithms used by robots, and the physical and social context of their operation, and their numbers,” says Tom Atwood, executive director of the National Robotics Education Foundation, which provides educational information about robotics to students, educators and professionals.

For example, the circumstances and predictions of potential harm will vary widely depending on whether the robots are used in an industrial, military, urban, mobile, educational or other context, Atwood says. “These contexts are growing in number as physical and virtual robots proliferate in all spheres of human endeavor,” he adds.

Many organizations that operate autonomous machines such as industrial robots mistakenly think they will not be targets because the machines don’t process personal or financial information. The same goes for companies that produce the machines.

“They tend to not have the level of security protection found in other industries,” Overly says. “These organizations should start with a thorough information security audit conducted by a third-party auditor who has specific experience in the manufacturing and automation space. They should prioritize remediation measures based on the outcome of that audit.”

The motivation to build a rigorous and secure robot should be there “because it is quite possible that all involved in its design could be held liable if a horrendous weakness was found that led to personal distress or financial losses,” says Kevin Curran, senior lecturer in computer science at the University of Ulster and a senior member of the Institute of Electrical and Electronics Engineers (IEEE).

 
