Bull Session

Hacking Robots

August 31, 2017          

Episode Summary

On the podcast this week, we discuss the dangers of hacking robots. As you might expect, the rise of robotics in manufacturing and other industrial activities also means a rise in possible attacks. Of course, with a successful hack of industrial robots comes the potential for some dire physical outcomes. Security researchers have demonstrated unpatched vulnerabilities in a variety of industrial robot models including collaborative robots, which are designed to work together with people, in environments such as manufacturing. These industrial robots can be compromised in ways that could cause humans bodily harm. Join us as we discuss.

Resources:
Industrial Hack Can Turn Powerful Machines into Killer Robots
Exploiting Industrial Collaborative Robots

Jon:
Welcome to episode 222 of The Digital Life, a show about our insights into the future of design and technology. I’m your host Jon Follett, and with me is founder and co-host Dirk Knemeyer.

Dirk:
Greetings, listeners.

Jon:
For our podcast topic this week, we’re going to discuss some of the dangers of hacking robot systems. As you might expect, the rise of robotics in manufacturing and other industrial activities also means a rise in possible attacks. Of course, a successful hack of industrial robots comes with the potential for some dire physical outcomes. Recently, security researchers have demonstrated that there are unpatched vulnerabilities in a variety of industrial robot models, including collaborative robots, which we’ve talked about on the show before. The idea with collaborative robotics is essentially that people will be able to work together with robots in environments such as manufacturing or in a warehouse, and the robot is able to be side by side with a human being and not be in some restricted area where industrial robotics typically have been found.

With collaborative robotics, human beings can very easily teach robots how to do certain tasks and then work side by side with them. The danger, of course, is that these are still mighty machines, and they’re very heavy. They could be using other pieces that are sharp or blunt or what have you in the process of whatever it is they’re doing. Ultimately, it comes down to the software, along with the sensors and the information coming into that software, to protect the human beings who are around the collaborative robot.

These industrial robots can be compromised in ways that could cause human beings bodily harm. One research firm demonstrated vulnerabilities, including weaknesses in some of the control protocols and susceptibility to physical attacks, which I imagine is not unexpected, but is alarming insofar as every advancement we have in areas of automation, it seems, also comes along with the potential for those advancements to be used for ill.

Given the recent news that there are unpatched vulnerabilities, Dirk, what’s your take on the state of robot hacking, and how do people move forward with this additional and perhaps purposely created danger?

Dirk:
We’ve talked a lot about security on the show, and right now, online security, digital security, it’s woeful. It’s woeful. And this is just another context, one that’s particularly troubling, where exploits can happen. One of the stories you sent, the headline was something breathless like, “Killer death robots,” which was in the news a lot, and maybe after we talk about this a little, we should pivot to killer death robots more generally. But the idea is that I’m a worker in a plant, and also in that plant are robots. It’s not like this is some new thing; we’ve had big industrial robots for a while. But these security vulnerabilities allow someone else inside the plant to make that robot behave in a way such that I’m physically harmed or perhaps even killed. It’s unfortunate that companies spend a lot of money on these technologies, such as industrial robots, in order, ultimately, to save money and create efficiency, but they don’t necessarily deploy them in a way that is responsible. For me, that’s just another example of capitalism at its worst.

Jon:
Yeah, I’m interested in the fact that we seem to be repeating at least some of the errors of the personal computing age, which of course, for me was … when I was a teenager, getting to know computers as these things that were no longer mainframes, but that sat in your home office or wherever and could do certain things more efficiently. It seems like the same security issues that were present at the beginning of the internet age, and certainly at the beginnings of the mobile age, all of these … I don’t know how to frame it properly, but it’s not naivete, it’s this technological hopefulness that this time the things we’re doing are not going to be subject to the same level of failure and danger and possible detrimental effects that these previous technologies had. For example, I can’t tell you how many times I look at my parents’ computer and say, “Oh, your antivirus isn’t up to date.” Or you look at someone’s computer and you realize it’s infected with something. All of this is predictable.

We know that there are bad actors who are going to try to manipulate technology, either for their own gain or for mischief. We know this is the case, but this march toward deploying ever-increasing amounts of technology without the proper security vetting, or without continuous patching of these software programs, is borderline irresponsible. I realize that people want to get products to market and that there are benefits to any number of these systems, but as we’ve said many times before, once you expose something to outside inputs via the internet or other networks, internal networks, you’re asking for it. There’s going to be some smart person who figures out how to penetrate your particular technology, whether it’s the robotics we’re talking about today, or wearables, or the Internet of Things, or the control systems of smart cities. These attack surfaces are just begging to be hacked.

Dirk:
Whenever there are emerging technologies, and issues with those technologies, I think back to the original Jurassic Park and the Jeff Goldblum character, Dr. Ian Malcolm, whose line was, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” It’s the notion that we just barrel headlong into what is possible without, a, fully thinking about the ramifications, and, b, being responsible and diligent in anticipating some of the possible problems and investing in preventing them. And the risk to consumers, or employees, individuals, is that there’s a sheen of safety when a company puts something out.

When a company says, we have these giant industrial robots, they’re massively expensive, they’re obviously very high technology, spend a bunch of money to put them into your plant and save a ton of money, there’s the human … I don’t know if instinct is the right word, maybe bias, of taking things for granted. Yeah, of course these are secure. Wait a minute, why would I even think about security? They obviously spent a ton of money on this, they’re the experts, they know what they’re doing. But they don’t necessarily. They’ve barreled forward, they’re trying to do cool things as quickly as possible, be first to market, and win the market, and all of these capitalism-driven things. They’re not being mindful about the consequences. Should we be doing this thing, at this pace, in this way? Or could something bad come of it?

Jon:
Yeah, and as part of that, you mentioned emerging technologies in a broader context. There are so many technologies that are network-enabled, internet-enabled, what have you: the aforementioned robotics, obviously all the connected systems of the Internet of Things, our mobile devices, our desktop computers, all these services that we rely on, and increasingly so. All of them need to be secured in one way or another. I won’t say we’re on the cusp of an age where we’re deploying more technology than we can possibly secure, but it sure feels like we’re dancing close to the edge here. Once you see Cisco Systems or some of these other large companies predict tens of billions of connected internet devices over the coming decades, tens of billions of devices including robots and things like smart cities, you begin to realize that the physical nature of our interactions with the internet is only increasing. And because of that, I think the danger of something catastrophic happening is also increasing.

With robots, for instance, these are machines weighing hundreds of pounds. This is not “my computer has a virus and my data might be at risk.” This is risk of serious bodily harm. Same thing with connected traffic flow systems for smart cities. These are severe physical consequences, which have not really been vetted in the same way. When people get a computer virus, there are serious data breaches, of course, which can have negative impacts on your privacy, on your bank account, and, if it’s embarrassing information, on your reputation. But there’s no grievous bodily harm coming out of a computer virus. Or at least none that I’m aware of. We’re talking about physical danger now, which I think ups the ante.

Dirk:
Sure, sure. Going back to the comment I made about it being guised under the killer death robots moniker, I think there’s a lot of fear-mongering going on right now with AI and robotics. In the context of these industrial robots, we have the breathless headline, and then text in the article equating these to killer death robots. And then we have Elon Musk again this week, always in the news, wanting, if I remember correctly, there to be a global ban on the development of killer death robots. Again, that’s the language. I think that language, in general, is really confusing for people, because there’s an assumption of killer death robots being like a being, this thing that is going to, of its own volition and control, indiscriminately kill. And we’re a long way away from that. The killer death robots in the context of this robotics article, and in the context of what’s more immediately available to a national military, are basically machines that a human could take control of and have the machine behave in ways that result in the death of someone.

That is certainly still a bad thing, but it’s a far cry from the science fiction scary, scary, killer death robots thing. I’m just generally troubled by all of the breathless propaganda around the co-opting, the manipulation of these devices to do damage as if they’re these killer death robots of the future.

Jon:
Sure. And I think that’s part of the reason why we have these discussions: to really tease apart what’s an actual problem, what’s the actual reality of these technologies, versus what is the hype cycle. We only have to look as far as mentions of the Internet of Things over the past two years to know that there’s an extreme level of attention being paid in the media to that particular technology. If it gets clicks, it’s more likely to be written. So “killer death robots” is a much more clickable topic than “industrial workers could be physically harmed by a hack of collaborative robots.” That’s not quite as interesting a headline, and no one would click on it. We do have that problem as well, and couple that with the fact that most folks are probably not going to be well-versed in a variety of these technologies, and so won’t know the difference between a breathless headline about dangerous technology versus a more conservative warning. The believability of the hyped headline is likely to be high, because people may just not know what the possibilities are.

Regardless, it’s definitely a time when we do need to take a close look at how secure our systems are for new technologies, especially, as we’re discussing today, in the realm of robotics in industrial settings.

Dirk:
To your point, we should somewhere in the title of this episode put the words killer death robots to get those extra clicks for ourselves.

Jon:
Or, “Killer death robots, or not?” will be the headline. Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to thedigitalife.com, that’s just one l in the digital life, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening. Or afterward, if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, Soundcloud, Stitcher, PlayerFM and Google Play. And if you want to follow us outside of the show, you can follow me on Twitter at jonfollett, that’s J-O-N F-O-L-L-E-T-T. And of course the whole show is brought to you by Involution Studios, which you can check out at goinvo.com, that’s G-O-I-N-V-O.com. Dirk?

Dirk:
You can follow me on Twitter at dknemeyer, that’s at D-K-N-E-M-E-Y-E-R, and thanks so much for listening.

Jon:
So that’s it for episode 222 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.
