
Bull Session

Hacking Robots

August 31, 2017          

Episode Summary

On the podcast this week, we discuss the dangers of hacking robots. As you might expect, the rise of robotics in manufacturing and other industrial activities also means a rise in possible attacks. Of course, with a successful hack of industrial robots comes the potential for some dire physical outcomes. Security researchers have demonstrated unpatched vulnerabilities in a variety of industrial robot models including collaborative robots, which are designed to work together with people, in environments such as manufacturing. These industrial robots can be compromised in ways that could cause humans bodily harm. Join us as we discuss.

Resources:
Industrial Hack Can Turn Powerful Machines into Killer Robots
Exploiting Industrial Collaborative Robots

Jon:
Welcome to episode 222 of The Digital Life, a show about our insights into the future of design and technology. I’m your host Jon Follett, and with me is founder and co-host Dirk Knemeyer.

Dirk:
Greetings, listeners.

Jon:
For our podcast topic this week, we’re going to discuss some of the dangers of hacking robot systems. As you might expect, the rise of robotics in manufacturing and other industrial activities also means a rise in possible attacks. Of course, a successful hack of industrial robots comes with the potential for some dire physical outcomes. Recently, security researchers have demonstrated that there are unpatched vulnerabilities in a variety of industrial robot models, including collaborative robots, which we’ve talked about on the show before. The idea with collaborative robotics is essentially that people will be able to work together with robots in environments such as manufacturing or in a warehouse, and the robot is able to work side by side with a human being rather than in some restricted area, where industrial robots have typically been confined.

With collaborative robotics, human beings can very easily teach robots how to do certain tasks and then work side by side with them. The danger, of course, is that these are still mighty machines, and they’re very heavy. They could be wielding parts or tools that are sharp or blunt or what have you in the process of whatever it is they’re doing. Ultimately, it’s the software, along with the sensors and the information feeding into that software, that protects the human beings around a collaborative robot.

These industrial robots can be compromised in ways that could cause human beings bodily harm. One research firm demonstrated vulnerabilities that included weaknesses in the control protocols and susceptibility to physical attacks, which I imagine is not unexpected, but it is alarming insofar as every advancement we make in automation seems to come along with the potential for that advancement to be used for ill.
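The point about compromised control software can be made concrete with a toy sketch. This is a hypothetical illustration, not any vendor’s actual API or the researchers’ exploit: collaborative robots enforce safety limits, such as reduced speed when humans are nearby, in software, so if an attacker can alter parameters like the speed limit or bypass the check itself, the same hardware becomes dangerous.

```python
# Hypothetical sketch of a software-enforced collaborative safety limit.
# The constant and function names are illustrative, not from a real robot API.

# Illustrative speed cap for human-adjacent ("collaborative") operation,
# in millimeters per second. In a real controller this kind of parameter
# is exactly what an attacker with control-protocol access might rewrite.
COLLABORATIVE_SPEED_LIMIT_MM_S = 250.0

def validate_motion_command(speed_mm_s: float, humans_nearby: bool) -> bool:
    """Accept a motion command only if it respects the collaborative
    speed limit while humans are detected in the workspace."""
    if humans_nearby and speed_mm_s > COLLABORATIVE_SPEED_LIMIT_MM_S:
        return False
    return True
```

The fragility the researchers highlight is that nothing physical backs this check: tamper with the limit value, the sensor input (`humans_nearby`), or the function itself, and the robot will happily execute unsafe motion.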

Given the recent news that there are unpatched vulnerabilities, Dirk, what’s your take on the state of robot hacking and how people should move forward with this additional, and perhaps purposely created, danger?

Dirk:
We’ve talked a lot about security on the show, and right now, online security, digital security, it’s woeful. It’s woeful. And this is just another context, one that’s particularly troubling, where exploits can happen. One of the stories you sent had a headline that was something breathless like “Killer death robots,” which was in the news a lot, and maybe after we talk about this a little, we should pivot to killer death robots more generally. But the idea is that I’m a worker in a plant, and in that plant there are also robots. They’re there right now; it’s not like this is some new thing. We’ve had big industrial robots for a while. But these security vulnerabilities allow someone else inside the plant to make that robot behave in a way such that I’m physically harmed or perhaps even killed. It’s unfortunate that companies spend a lot of money on these technologies, such as industrial robots, ultimately in order to save money and create efficiency, but they don’t necessarily deploy them in a way that is responsible. For me, that’s just another example of capitalism at its worst.

Jon:
Yeah, I’m interested in the fact that we seem to be repeating at least some of the errors of the personal computing age, which of course, for me, was when I was a teenager, getting to know computers as these things that were no longer mainframes, but that sat in your home office or wherever and could do certain things more efficiently. It seems like the same security issues that were present at the beginning of the internet age, and certainly at the beginning of the mobile age, are all here again. I don’t know how to frame it properly, but it’s not naivete; it’s this technological hopefulness that this time the things we’re doing are not going to be subject to the same level of failure, danger, and possible detrimental effects that these previous technologies have had. For example, I can’t tell you how many times I look at my parents’ computer and say, “Oh, your antivirus isn’t up to date.” Or you look at someone’s computer and you realize it’s infected with something. All of this is predictable.

This is behavior that we know: there are bad actors who are going to try to manipulate technology, either for their own gain or for mischief. We know this is the case, but this march toward deploying ever-increasing amounts of technology without the proper security vetting, or without continuous patching of these software programs, is borderline irresponsible. I realize that people want to get products to market and that there are benefits to any number of these systems, but as we’ve said many times before, once you expose a system to outside inputs via the internet, or other networks, including internal networks, you’re asking for it. There’s going to be some smart person who figures out how to penetrate your particular technology, whether it’s the robotics we’re talking about today, or wearables, the Internet of Things, or smart city control systems. These attack surfaces are just begging to be hacked.

Dirk:
Whenever there are emerging technologies and issues with those technologies, I think back to the original Jurassic Park and the Jeff Goldblum character, Dr. Ian Malcolm, whose line was, “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” It’s the notion that we just barrel headlong into what is possible without, (a), fully thinking about the ramifications, and then, (b), being responsible and diligent in anticipating some of the possible problems and investing in preventing them. And the risk to consumers, or employees, or individuals, is that there’s a sheen of safety when a company puts something out.

When a company says, “We have these giant industrial robots; they’re massively expensive; they obviously are very high technology; spend a bunch of money to put them into your plant and save a ton of money,” there’s the human instinct, if instinct is the right word, maybe bias, of taking safety for granted. Yeah, of course these are secure. Wait a minute, why would I even think about security? They obviously spent a ton of money on this; they’re the experts; they know what they’re doing. But they don’t, necessarily. They’ve barreled forward, trying to do cool things as quickly as possible, be first to market, and win the market, and all of these capitalism-driven things, without being mindful about the consequences. Should we be doing this thing at this pace, in this way? Or could something bad come of it?

Jon:
Yeah, and as part of that, you mentioned emerging technologies in a broader context. There are so many technologies that are network-enabled, internet-enabled, what have you: the aforementioned robotics, obviously all the connected systems of the Internet of Things, our mobile devices, our desktop computers, all these services that we rely on, and increasingly so. All of them need to be secured in one way or another. I won’t say we’re on the cusp of an age where we’re deploying more technology than we can possibly secure, but it sure feels like we’re dancing close to the edge here. Once you see Cisco Systems or some of these other large companies predict tens of billions of connected internet devices over the coming decades, tens of billions of devices including robots and things like smart cities, you begin to realize that the physical nature of our interactions with the internet is only increasing. And because of that, I think the danger of something catastrophic happening is also increasing.

With robots, for instance, these are machines weighing hundreds of pounds. This is not “my computer has a virus and my data might be at risk.” This is risk of serious bodily harm. Same thing with connected traffic flow systems for smart cities. These are severe physical consequences, which have not really been vetted in the same way. When people get a computer virus, there are serious data breaches, of course, which can have negative impacts on your privacy, on your bank account, and, if it’s embarrassing information, on your reputation. But there’s no grievous bodily harm coming out of a computer virus, or at least none that I’m aware of. We’re talking about physical danger now, which I think ups the ante.

Dirk:
Sure, sure. Going back to the comment I made about it being couched under the killer death robots moniker, I think there’s a lot of fear-mongering going on right now with AI and robotics. In the context of these industrial robots, we have the breathless headline, and then text in the article equating these to killer death robots. And then we have Elon Musk again this week, always in the news, wanting, if I remember correctly, a global ban on the development of killer death robots. Again, that’s the language. I think that language, in general, is really confusing for people, because there’s an assumption of killer death robots being like a being, this thing that is going to, of its own volition and control, indiscriminately kill things. And we’re a long way away from that. The killer death robots in the context of this robotics article, and in the context of what’s more immediately available to a national military, are basically machines that a human could take control of and have behave in ways that result in the death of someone.

That is certainly still a bad thing, but it’s a far cry from the scary science fiction killer death robots scenario. I’m just generally troubled by all of the breathless propaganda treating the co-opting and manipulation of these devices to do damage as if they were the killer death robots of the future.

Jon:
Sure. And I think that’s part of the reason why we have these discussions: to really tease apart the actual problems and realities of these technologies from the hype cycle. We only have to look as far as mentions of the Internet of Things over the past two years to know that there’s an extreme level of attention being paid in the media to that particular technology. If it gets clicks, it’s more likely to be written. So “killer death robots” is a much more clickable topic than “industrial workers could be physically harmed by a hack of collaborative robotics.” That’s not quite as interesting a headline, and no one would click on it. We do have that problem as well, and couple that with the fact that most folks are probably not well versed in these technologies, and so won’t know the difference between a breathless headline about dangerous technology and a more conservative warning. The believability of the hyped headline is likely to be high, because people may just not know what the possibilities are.

Regardless, it’s definitely a time when we need to take a close look at how secure our systems are for new technologies, especially, as we’re discussing today, in the realm of robotics in industrial settings.

Dirk:
To your point, we should somewhere in the title of this episode put the words killer death robots to get those extra clicks for ourselves.

Jon:
“Killer death robots, or not?” will be the headline. Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to thedigitalife.com, that’s just one l in the digital life, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening, or afterward, if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, PlayerFM, and Google Play. And if you want to follow us outside of the show, you can follow me on Twitter at jonfollett, that’s J-O-N F-O-L-L-E-T-T. And of course the whole show is brought to you by Involution Studios, which you can check out at goinvo.com, that’s G-O-I-N-V-O dot com. Dirk?

Dirk:
You can follow me on Twitter at dknemeyer, that’s at D-K-N-E-M-E-Y-E-R, and thanks so much for listening.

Jon:
So that’s it for episode 222 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.


Jon Follett
@jonfollett

Jon is Principal of Involution Studios and an internationally published author on the topics of user experience and information design. His most recent book, Designing for Emerging Technologies: UX for Genomics, Robotics and the Internet of Things, was published by O’Reilly Media.

Dirk Knemeyer
@dknemeyer

Dirk is a social futurist and a founder of Involution Studios. He envisions new systems for organizational, social, and personal change, helping leaders to make radical transformations. Dirk is a frequent speaker who has shared his ideas at TEDx, Transhumanism+, and SXSW, along with keynotes in Europe and the US. He has been published in BusinessWeek and has served on 15 boards spanning industries such as healthcare, publishing, and education.

Credits

Co-Host & Producer

Jonathan Follett @jonfollett

Co-Host & Founder

Dirk Knemeyer @dknemeyer

Minister of Agit-Prop

Juhan Sonin @jsonin

Audio Engineer

Michael Hermes

Technical Support

Eric Benoit @ebenoit

Brian Liston @lliissttoonn

Original Music

Ian Dorsch @iandorsch
