
Bull Session

My Trusted Robots

June 8, 2017          

Episode Summary

On The Digital Life this week we take a look at designing trust in human-robot relationships. More so than with other technologies, robots require a certain level of trust. Our comfort level with robots will dictate whether we’re willing to ride in driverless cars, work on the assembly line with a collaborative robot, or have a robot as a health caregiver. Designing human-robot relationships will be key to overcoming barriers in the transition to a robot-filled world. But how do we manage the wide variety of human emotional reactions? And what does this mean for the future of robot services?



Most westerners distrust robots – but what if they free us for a better life?

Welcome to episode 210 of The Digital Life, a show about our insights into the future of design and technology. I’m your host Jon Follett, and with me is founder and cohost Dirk Knemeyer.

Greetings, listeners.

For our podcast topic this week, we’ll be exploring designing trust in human-robot relationships. More so than with many other technologies, robots seem to require a level of trust in order to interact with human beings, whether you’re talking about robots on the factory floor, robots that could be potentially driving our cars, or robots who might be caregivers for our elders, either now or in the future.

Is robot the right term or is it artificial intelligence? We’re not just talking about a thing that looks like a being, right? We’re talking … Some of it is just software in a certain way. Is that right? I want to make sure we’re talking about the same thing.

Right. I’m thinking about there being a physical instantiation of the robot here. In that case, the automobile that is a self-driving car would qualify that way. Similarly, a robot service for healthcare would be the same, although they might look drastically different. Then on the factory floor, of course, a robot could pretty much look like anything at all. However, there’s a physical and a software application level to this.

Is our computer a robot? Is our smartphone a robot?

I would say no, by the parameters of this particular definition, although I’m sure we could draw those lines wherever we’d like.

Yeah, I think I would draw the lines somewhere other than where you’re drawing them, but I think I know what you’re going for, and I’m on board, brother. Let’s do this.

Okay. There’s some interesting fodder in a recent article from The Guardian, and they were generous enough to provide us with some audio narration of it. I wanted to play a couple of those audio pieces so we could get to know this Guardian writer’s opinion on this very topic.

I’m always amazed by people who tell me they would never trust a driverless car to take them somewhere, but then happily get into a car driven by a teenager. Driverless vehicles are likely to be much safer than those driven by humans. The safety differential is so large that insurance companies are already looking at alternative business models to make up for the fact that premiums will likely plummet once robots are driving us everywhere. The barriers to our transition to driverless vehicles, and to other forms of robot intervention into our daily lives, then, are not just technical but social, political, and psychological. Trust will be a huge issue, and you don’t have to think too hard to see why.

I enjoyed his take on it, just the framing of the problem of trust with our robot companions, or robot overlords, or robot drivers. Call them what you want. I think it’s interesting that, for me, part of it has to do with the physical presence of the robot being next to me. I can imagine that it must be very frightening to work side by side with a machine that, if it behaves in the wrong way, could take your head off with one of its arms. Certainly, there are a series of heuristics that are required around robot behavior on the factory floor simply because robots can be so dangerous and have no soft parts, or at least most of the time they don’t. If a human being knocked into you by mistake while you were working next to each other, you’re not likely to suffer broken bones as a result.

There’s the same scenario with automated driving. Certainly, you’re in this piece of metal and plastic hurtling along the highway at, if you’re on the Mass Pike, 70 or 80 miles an hour, and leaving those decisions to this other, this piece of software, what have you, does require that there be a level of trust built in. Some of that’s going to happen over time as we see more automated or self-driving cars, and more robots in factories, but it appears to me that there’s a lot of service design work to be done in order for us to start down this path to having trust in robot services.

Yeah. Certainly, trust is a big part of it. I think understanding and control are big parts of it, too. If I’m riding in the passenger seat and my teenage, hormone-ramped-up son is driving, I believe that it is mathematically more dangerous than a driverless car. I would accept that on its face. However, I have an illusion of control where I can say, look out. I can grab the wheel. I can, A, see the trouble coming before it comes, and B, physically intercede in the situation to impact it. Now, if we’re hurtling down the Mass Pike at a high rate of speed, that is truly an illusion of control. It’s highly unlikely anything I could do could overcome a calamitous situation, but I think that I could. I feel the comfort level of, I understand this. I can physically impact this.

When we’re in the driverless car, stuff is just going to happen. The car is going to do things and participate in ways we don’t understand. They don’t map to driving as we know it. Look, taking driverless cars through to the logical conclusion, you’re not going to have steering wheels. You’re not going to have the tropes of driving. They aren’t going to be necessary. The machine can move itself. Suddenly, you’re in a bubble of a certain kind that is moving and conveying and making decisions around the road and doing or not doing.

That takes away our ability to physically map what is going on and feel like we understand it, feel like we have control over it. I totally get why we’re cool with the teenager there but more nervous with the machine there. Taking it from a different perspective, machines don’t have a great track record. Machines are the twelve o’clock blinking on the screen. Machines are blue screens. Machines are viruses. We do not have a strong track record of computing devices just working. If the computing device isn’t just expected to serve us up a website, if the computing device is expected to keep us safe hurtling down the road at a high speed, that’s a big leap, actually, from the reliability level that technology in the form of computing devices has given us to this point.

Again, while I certainly believe it’s theoretically true that when everything’s functioning properly, the driverless car is safer than the human-driven car, viruses, blue screens, hello. Right? These things are real. These things happen, and that doesn’t even consider the fact that when I’m driving the car, certainly I’m part of a bigger network of people on the road, but I can control my one little thing. With the driverless car, presumably, that technology is going to go in a direction where it’s all part of one integrated system. Now you’re not just worried about failures at the local level of the car, you’re worried about failures at the more global level of the whole system that impact you in the car in negative ways.

I will embrace and go with the driverless car. I will feel weird and nervous about it for some period of time until it becomes normal and fine, and the reason I’ll feel weird about it is not because I doubt that it’s safer. I’ll take for granted that it probably is safer, but there’s so much evidence that machines don’t always work right, and they aren’t something I’m able to fully understand.

Yeah. Just to build off what you were saying, I think understanding the wide variety of failure modes that are possible with conventional cars gives us a degree of comfort, because in many of those scenarios, as you pointed out, the failsafe is us, right?


It defaults to our knowledge of this variety of scenarios: you popped a tire, your car broke down in another way, a driver swerves in front of you, it’s raining out and the car is skidding a little bit. In each of these areas where there’s potential failure, you, at least as an experienced driver, will have some inkling of what to do. When we’re talking about the early adoption curve here, there are going to be plenty of failure modes where you have absolutely no idea what to do. Should I stick my foot out and try to cause the car to slow down like Fred Flintstone? What are the proper reactions to a self-driving car failing in one way or the other?

These things might become embedded in us as we more and more get used to being driven around by a robot, but in addition to maybe misunderstanding, or not really understanding, how things work, we also don’t have a lot of experiences to draw on that would provide some confidence, some level of confidence anyway.

Yeah. There’s also initial trust at the corporate level, right? Tesla automobiles have had self-driving abilities in them for some period of time now. I think we’ve talked before on the show about how it, and I don’t know if “requires” is the technically correct word, but they say, hey, you need to keep your hands on the wheel and your foot on the brake, because even though the car is doing its thing, you may need to save the car, basically. Multiple people have died in this mode in Tesla automobiles. The self-driving car has driven them right into oblivion. Right?

I saw a speech at MIT maybe a year, year and a half ago, and the speaker was talking about this feature, which was newly released at the time, with admiration and wonder, saying, isn’t it great that we live in this world. They didn’t even have a press release. They just issued a software update and said, hey, now your car can drive itself. Ta-da. To me that’s horrifying. There’s no sense of looking out for the consumer or for safety. It’s just this laboratory-tinkerer mindset using the lives of real people as the testbed.

I don’t trust big corporations to make the right decisions anyway at a whole different level of abstraction than the vehicles themselves.

I have another clip here from The Guardian about some recent research that divides up our different attitudes towards robots, whether they be automated cars or factory robots or what have you.

According to recent research, people’s views about robots can be grouped into six categories. Namely the frightening other, the subhuman other, the human substitute, the sentient other, the divine other, and the co-evolutionary path to immortality. The connection is a view about how much like us or unlike us robots might be. The paper suggests our reaction to robots is similar to our reaction to humans. We trust those closest to us, most like us, and with whom we are most familiar. We are more wary of strangers or, in this case, the robot doing something we’re not used to robots doing.

We talked a little bit about what I would categorize, perhaps correctly, as the less-than-human, incompetent service, right? Where the car-driving mechanism fails or the robot smacks you in the head while you’re on the assembly line. These are the … Maybe there’s another category, the incompetent other. Right? I imagine that the divine other might manifest itself in your all-knowing smart home, right? Where you’re magically given whatever service you need, whether it’s a beverage when you walk in, or the lights dimming, or the heat going up at the right time and the air conditioner turning on when it gets too hot out.

What strikes me as fascinating about this is that the field of HCI, of human-computer interaction, came about largely because of us pesky humans needing to be able to operate in conjunction with these marvelous machines. There’s this whole new section here, which intersects with service design and emotional design, however you want to frame it, in that these robotics products are going to require a level of social design, really, that is completely unprecedented. We’ve never had technology so specialized and so powerful that it really is starting to rise to the level of human competence. Now we have this wide array of problems to solve to get us pesky humans to, once again, interact with these marvelous machines.

We’re pesky and machines are marvelous. Is that right?

That’s the feeling I come away with, as if this emotional design is more a problem of the human being and less a problem of the robot, but that’s just my snarky take on it. I don’t think we’re actually all that pesky.

Yeah. We are selfish, right? As we read and talk about things relating to robots, I’m always brought back to the fact that I think the things we do as humans are based, I’ll argue 100% on selfishness, right? Even if you have someone who is seeming selfless, like a Mother Teresa type. Her selflessness is based on a worldview of giving and community and contribution as making a better world that she wants to participate in, right? As an easy example.

The reason I’m mentioning that is, in the article that you referenced, as we’re looking at people’s perspectives towards robots, one of the categories it talked about was sex robots. It was talking about married people using sex robots for affairs in lieu of other humans. It said that both men and women felt better about using sex robots for that purpose as opposed to humans, but more so men, right? Not surprising, because men are more likely to have affairs. Why? Because men have biological programming that is compelling them to need a certain level of sexual activity, or to have sexual activity that aligns with their ego and their self-conception in certain ways.

What I’m saying is, we have all these different classifications of robots. Look, the degree to which different people identify robots in different ways, see robots in their life and in the future of our society, maps back to their selfish perception of what’s good or right for them. The people who are in industries where jobs are being taken by robots are going to have one view, whereas people who are going to make more money because of robots are going to have a different view, whereas people who are looking forward and thinking about the perpetuation of humanity and our species long into the future are going to have a different view.

I don’t know. I’m just always skeptical now. Whenever it’s, well yeah, here are the different categories, they just map back to what our selfish framings of the world are, and unconsciously so. We don’t think of ourselves as selfish. We believe things. We are a certain person, a unique human, but it all maps back to this really, really shortsighted, self-interested perspective. I hope robots are less so.

Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to our website, that’s just one L in “thedigitalife”, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. If you want to follow us outside of the show, you can follow me on Twitter @jonfollett. Of course, the whole show is brought to you by Involution Studios, which you can check out online. Dirk?

You can follow me on Twitter @dknemeyer, and thanks so much for listening.

That’s it for Episode 210 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.


Jon Follett

Jon is Principal of Involution Studios and an internationally published author on the topics of user experience and information design. His most recent book, Designing for Emerging Technologies: UX for Genomics, Robotics and the Internet of Things, was published by O’Reilly Media.

Dirk Knemeyer

Dirk is a social futurist and a founder of Involution Studios. He envisions new systems for organizational, social, and personal change, helping leaders to make radical transformations. Dirk is a frequent speaker who has shared his ideas at TEDx, Transhumanism+ and SXSW, along with keynotes in Europe and the US. He has been published in BusinessWeek and has served on 15 boards spanning industries like healthcare, publishing, and education.


Co-Host & Producer

Jonathan Follett @jonfollett

Co-Host & Founder

Dirk Knemeyer @dknemeyer

Minister of Agit-Prop

Juhan Sonin @jsonin

Audio Engineer

Michael Hermes

Technical Support

Eric Benoit @ebenoit

Brian Liston @lliissttoonn

Original Music

Ian Dorsch @iandorsch

Bull Session

Storytelling and AI

April 20, 2017          

Episode Summary

On The Digital Life this week we explore storytelling, creativity, and artificial intelligence. Our cultural evolution is reflected in our ability to communicate through stories, creating shared experiences and meaning. Recent research from the University of Vermont and the University of Adelaide used an AI to classify the emotional arcs for 1,327 stories from Project Gutenberg’s fiction collection, identifying six types of narratives. Could these reverse-engineered storytelling components be used to build automated software tools for authors, or even to train machines to generate original works? Online streaming service Netflix already uses data generated from users’ movie and television preferences to help choose its next shows. What might happen when computers not only pick the shows, but also write the scripts for them?
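As a toy illustration of the arc-classification idea described above (this is a sketch, not the Vermont/Adelaide researchers' actual method), a story's emotional arc can be approximated by averaging word sentiment over successive segments of the text and then labeling the arc's overall shape. The tiny valence lexicon and the two shape labels below are invented for demonstration.

```python
# Sketch: segment-averaged sentiment as an "emotional arc," with a crude shape label.
# The lexicon values and labels are illustrative assumptions, not research data.

VALENCE = {"joy": 1.0, "love": 1.0, "happy": 0.9, "win": 0.8,
           "sad": -0.9, "death": -1.0, "fear": -0.7, "loss": -0.8}

def emotional_arc(words, n_points=10):
    """Average word valence over n_points roughly equal segments of the text."""
    seg = max(1, len(words) // n_points)
    arc = []
    for i in range(0, len(words), seg):
        chunk = words[i:i + seg]
        arc.append(sum(VALENCE.get(w, 0.0) for w in chunk) / len(chunk))
    return arc[:n_points]

def label_arc(arc):
    """Crude shape label: does the story end happier than it began?"""
    half = len(arc) // 2
    if sum(arc[half:]) > sum(arc[:half]):
        return "rise (rags to riches)"
    return "fall (tragedy)"
```

Clustering thousands of such arcs, rather than labeling each one by a single rule, is roughly how a study would arrive at a small set of recurring narrative shapes.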

The Six Main Arcs in Storytelling, as Identified by an A.I.
The strange world of computer-generated novels
A Japanese AI program just wrote a short novel, and it almost won a literary prize

Bull Session

AI Goes Mainstream

February 3, 2017          

Episode Summary

On this episode of The Digital Life, we discuss the high-powered Partnership on Artificial Intelligence to Benefit People and Society, an initiative whose founding members include Amazon, Facebook, Google, IBM, Microsoft, and Apple, the last of which was just recently added as a founding member.

The mission of the group is to educate the public about AI, study its potential impact on the world, and develop standards and ethics around its implementation. Interestingly, the group also includes organizations with expertise in economics, civil rights, and research, who are concerned with the impacts of technology on modern society. These include: the American Civil Liberties Union (ACLU), the MacArthur Foundation, OpenAI, the Association for the Advancement of Artificial Intelligence, Arizona State University and University of California, Berkeley.

Will AI build upon our society’s biases and disparities and make them worse? Or does it have the potential to create something more egalitarian? Join us as we discuss all this and more.

Apple joins Amazon, Facebook, Google, IBM and Microsoft in AI initiative
Partnership on AI
A massive AI partnership is tapping civil rights and economic experts to keep AI safe

Bull Session

A Year Talking Tech

December 22, 2016          

Episode Summary

For our final podcast of 2016, we chat about the big themes on the show and our favorite episodes over the past year. We had conversations on design and tech with some wonderful guests, including groundbreaking geneticist George Church and open science advocate and researcher John Wilbanks. From AI to genomics to cybersecurity, we covered a wide range of topics on The Digital Life in 2016. So what did we learn from a year talking tech?

AI is too smart for its own good.
Artificial intelligence is evolving rapidly, with both high profile public failure and success by a number of tech giants this year. For instance, Microsoft had to terminate Tay, its teenage chatbot, after the bot started tweeting neo-Nazi propaganda and other abusive language at people. Meanwhile, Google’s DeepMind created an AI capable of beating some of the very best human players in the world at Go, the Asian strategy board game. And, we were introduced to a brand new “Rembrandt”, which was 3D-printed with eerie accuracy by an artificial intelligence algorithm, trained by analyzing the artist’s paintings.

Episode 149: Artificial Intelligence
Episode 151: AI Goes to Art School
Episode 163: AI Goes to the Ballpark

DNA replaces silicon as the new material for innovation.
The fields of genomics and synthetic biology continue to press forward in astonishing ways. In Seoul, Korea, a controversial lab revealed plans to clone endangered animals in order to save them from extinction. At the Massachusetts Institute of Technology (MIT) and Boston University (BU) synthetic biologists created software that automates the design of DNA circuits for living cells.

Episode 148: On Cloning
Episode 150: Engineering Synthetic Biology
Episode 154: DNA as Data Storage
Episode 158: Writing Human Code
Episode 168: The Microbiome
Episode 169: Genomics and Life Extension
Episode 170: Chimeras and Bioethics
Episode 176: Three Parents and a Baby

Hacking and cybersecurity are front and center as online and offline worlds collide.
In 2016, cybersecurity became a primary issue in a host of critical areas including communication, energy, and politics. Power grids, airports, and other infrastructure were increasingly subject to cyber attacks and an increasing number were successful. The debate over privacy and security was reinvigorated by the hubbub around the FBI request of Apple to unlock an iPhone owned by one of the San Bernardino shooters. And, Wikileaks distributed e-mails obtained by sources who hacked the DNC and individuals associated with the Clinton campaign during the U.S. presidential elections.

Episode 139: Hacking Power
Episode 144: Apple vs. FBI
Episode 166: Hacking the DNC
Episode 179: Internet Takedown

The automation of work is coming.
We got another startling look at what the future of work could become as software, robots, and the IoT continued to automate activities previously completed by humans. According to preliminary findings of a recent McKinsey report, 45 percent of all work activities could be automated today using technology already demonstrated. From fulfilling warehouse orders to suggesting medical treatments for ailments, the coming wave of automation will redefine jobs and business processes for factory workers and CEOs alike.

Episode 140: Automating Work
Episode 141: Future Transportation
Episode 145: Robot World
Episode 153: Smart Cities and Sidewalk Labs
Episode 173: Labor and the Gig Economy

Design and science are intersecting in new and significant ways.
Whether it’s in the creation of high-tech clothing, embeddables, or materials, design and science are coming together in new and significant ways. Clothing designers are working with multi-disciplinary teams, integrating input from engineers and synthetic biologists into their work. From 3D-printed couture to scarves dyed with bacteria to textiles grown in the lab, emerging tech is creating rapid innovation in the fashion industry. And this year, in the burgeoning world of designing embeddables, the U.S. Patent Office approved Google’s patent for electronic lens technology, which can be implanted directly in the eye. These mechanical eyes might give you superhuman abilities, to see at great distance or view microscopic material, and to document it all by capturing photos or video.

Episode 143: Clothing and Technology
Episode 155: Designing Embeddables
Episode 161: The Future of UX
Episode 171: Embeddables
Episode 172: Quantum Computing