
Bull Session

My Trusted Robots

June 8, 2017          

Episode Summary

On The Digital Life this week we take a look at designing trust in human-robot relationships. More so than with other technologies, robots require a certain level of trust. Our comfort level with robots will dictate whether we’re willing to ride in driverless cars, work on the assembly line with a collaborative robot, or have a health robot caregiver. Designing human robot relationships will be key to overcoming barriers in the transition to a robot filled world. But how do we manage the wide variety of human emotional reactions? And what does this mean for the future of robot services?



Most westerners distrust robots – but what if they free us for a better life?

Welcome to episode 210 of The Digital Life, a show about our insights into the future of design and technology. I’m your host Jon Follett, and with me is founder and cohost Dirk Knemeyer.

Greetings, listeners.

For our podcast topic this week, we’ll be exploring designing trust in human-robot relationships. More so than with many other technologies, robots seem to require a level of trust in order to interact with human beings, whether you’re talking about robots on the factory floor, robots that could be potentially driving our cars, or robots who might be caregivers for our elders, either now or in the future.

Is robot the right term or is it artificial intelligence? We’re not just talking about a thing that looks like a being, right? We’re talking … Some of it is just software in a certain way. Is that right? I want to make sure we’re talking about the same thing.

Right. I’m thinking about there being a physical instantiation of the robot here. In that case, the automobile that is a self-driving car would qualify that way. Similarly, a robot service for healthcare would be the same, although they might look drastically different. Then on the factory floor, of course, a robot could pretty much look like anything at all. However, there’s a physical and a software application level to this.

Is our computer a robot? Is our smartphone a robot?

I would say no, by the parameters of this particular definition, although I'm sure we could draw those lines wherever we'd like.

Yeah, I think I would draw the lines somewhere other than where you're drawing them, but I think I know what you're going for, and I'm on board, brother. Let's do this.

Okay. There's some interesting fodder in a recent article from The Guardian, and they were generous enough to provide some audio narration of it. I wanted to play a couple of those audio pieces so we could get to know this Guardian writer's opinion on this very topic.

I'm always amazed by people who tell me they would never trust a driverless car to take them somewhere, but then happily get into a car driven by a teenager. Driverless vehicles are likely to be much safer than those driven by humans. The safety differential is so large that insurance companies are already looking at alternative business models to make up for the fact that premiums will likely plummet once robots are driving us everywhere. The barriers to our transition to driverless vehicles, and to other forms of robot intervention in our daily lives, then, are not just technical but social, political, and psychological. Trust will be a huge issue, and you don't have to think too hard to see why.

I enjoyed his take on it, just the framing of the problem of trust with our robot companions, or robot overlords, or robot drivers. Call them what you want. I think it's interesting that … For me, part of it has to do with the physical presence of the robot being next to me. I can imagine that it must be very frightening to work side by side with a machine that, if it behaves in the wrong way, could take your head off with one of its arms. Certainly, there's a series of heuristics required around robot behavior on the factory floor, simply because robots can be so dangerous and there are no soft parts, or at least most of the time there aren't. If a human being knocked into you by mistake while you're just working next to each other, you're not likely to suffer broken bones as a result.

There's this same scenario with automated driving. Certainly, you're in this piece of metal and plastic hurtling along the highway at, if you're on the Mass Pike, 70 or 80 miles an hour, and leaving those decisions to this other, this piece of software, what have you, does require that there be a level of trust built in. Some of that's going to happen over time as we see more automated or self-driving cars and more robots in factories, but it appears to me that there's a lot of service design work to be done in order for us to start down this path to trusting robot services.

Yeah. Certainly, trust is a big part of it, but I think understanding and control are big parts of it, too. If I'm riding in the passenger seat and my teenage, hormone-ramped-up son is driving, I believe that it is mathematically more dangerous than a driverless car. I would accept that on its face. However, I have an illusion of control, where I can say "look out." I can grab the wheel. I can, A, see the trouble coming before it comes, and B, physically intercede in the situation to impact it. Now, if we're hurtling down the Mass Pike at a high rate of speed, that is truly an illusion of control. It's highly unlikely anything I could do would overcome a calamitous situation, but I think that I could. I feel the comfort level of: I understand this. I can physically impact this.

When we’re in the driverless car, stuff is just going to happen. The car is going to do things and participate in ways we don’t understand. They don’t map to driving as we know it. Look, taking driverless cars through to the logical conclusion, you’re not going to have steering wheels. You’re not going to have the tropes of driving. They aren’t going to be necessary. The machine can move itself. Suddenly, you’re in a bubble of a certain kind that is moving and conveying and making decisions around the road and doing or not doing.

That takes away our ability to physically map what is going on and feel like we understand it, feel like we have control over it. I totally get why we're cool with the teenager there but more nervous with the machine there. Taking it from a different perspective, machines don't have a great track record. Machines are the twelve o'clock blinking on the screen. Machines are blue screens. Machines are viruses. We do not have a strong track record of computing devices just working. If now the computing device isn't just expected to serve us up a website, if the computing device is expected to keep us safe hurtling down the road at a high speed, that's a big leap, actually, from the level of reliability that technology in the form of computing devices has given us to this point.

Again, while I certainly believe, theoretically, that it's true that when everything's functioning properly the driverless car is safer than the human-driven car, viruses, blue screens, hello. Right? These things are real. These things happen, and that doesn't even consider the fact that when I'm driving the car, certainly I'm part of a bigger network of people on the road, but I can control my one little thing. The driverless car, presumably, that technology is going to go in a direction where it's all part of one integrated system. Now you're not just worried about failures at the local level of the car, you're worried about failures at the more global level of the whole system that impact you in the car in negative ways.

I will embrace and go with the driverless car. I will feel weird and nervous about it for some period of time until it becomes normal and fine, and the reason I'll feel weird about it is not that I doubt it's safer. I'll take for granted that it probably is safer, but there's so much evidence that machines don't always work right, and I'm not able to fully understand them.

Yeah. Just to build off what you were saying, I think understanding the wide variety of failure modes that are possible with conventional cars gives us a degree of comfort because, in many of those scenarios, as you pointed out, the failsafe is us, right?

It defaults to our knowledge of this variety of scenarios: you popped a tire, your car broke down in some other way, a driver swerves in front of you, it's raining out and the car is skidding a little bit. In each of these areas of potential failure, you, at least as an experienced driver, will have some inkling of what to do. When we're talking about the early adoption curve here, there are going to be plenty of failure modes where you have absolutely no idea what to do. Should I stick my foot out and try to slow the car down like Fred Flintstone? What are the proper reactions to a self-driving car failing in one way or another?

These things might become embedded in us as we get more and more used to being driven around by a robot, but in addition to misunderstanding, or not really understanding, how things work, we also don't have a lot of experiences to draw on that would provide some confidence, some level of confidence anyway.

Yeah. There's also initial trust at the corporate level, right? Tesla automobiles have had self-driving abilities in them for some period of time now. I think we've talked before on the show about how it, I don't know if "requires" is the technically correct word, but they say, hey, you need to keep your hands on the wheel and your foot on the brake, because even though the car is doing its thing, you may need to save the car, basically. Multiple people have died in this mode in Tesla automobiles. The self-driving car has driven them right into oblivion. Right?

I saw a speech at MIT maybe a year, year and a half ago, and the speaker was talking about this feature, which was newly released at the time, with admiration and wonder, saying isn't it great that we live in this world. They didn't even have a press release. They just issued a software update and said, hey, now your car can drive itself. Ta-da. To me, that's horrifying. There's no sense of looking out for the consumer, for safety. It's just this laboratory tinkerer mindset, using the lives of real people as the testbed.

I don’t trust big corporations to make the right decisions anyway at a whole different level of abstraction than the vehicles themselves.

I have another clip here from The Guardian about some recent research that divides up our different attitudes towards robots, whether they be automated cars or factory robots or what have you.

According to recent research, people's views about robots can be grouped into six categories: the frightening other, the subhuman other, the human substitute, the sentient other, the divine other, and the co-evolutionary path to immortality. The common thread is a view about how much like us, or unlike us, robots might be. The paper suggests our reaction to robots is similar to our reaction to humans. We trust those closest to us, most like us, and with whom we are most familiar. We are more wary of strangers or, in this case, of the robot doing something we're not used to robots doing.

We talked a little bit about what I would categorize, perhaps correctly, as the less-than-human, incompetent service, right? Where the car-driving mechanism fails, or the robot smacks you in the head while you're on the assembly line. These are the … Maybe there's another category, the incompetent other. Right? I imagine that the divine other might manifest itself in your all-knowing smart home, right? Where you're magically given whatever service you need, whether it's a beverage when you walk in, or the lights dimming, or the heat going up at the right time and the air conditioner turning on when it gets too hot out.

What strikes me as fascinating about this is that the field of HCI, of human computer interaction, came about largely because of us pesky humans needing to be able to operate in conjunction with these marvelous machines. There’s this whole new section here, which intersects with service design and emotional design, however you want to frame it, that these products, these robotics products are going to require a level of social design, really, that is completely unprecedented. We’ve never had this level of technology so specialized and so powerful that it really is starting to rise to the level of human competence. Now we have this wide array of problems to solve to get us pesky humans to, once again, interact with these marvelous machines.

We’re pesky and machines are marvelous. Is that right?

That's what I come away with feeling, as if this emotional design is more a problem of the human being and less a problem of the robot, but that's just my snarky take on it. I don't think we're actually all that pesky.

Yeah. We are selfish, right? As we read and talk about things relating to robots, I’m always brought back to the fact that I think the things we do as humans are based, I’ll argue 100% on selfishness, right? Even if you have someone who is seeming selfless, like a Mother Teresa type. Her selflessness is based on a worldview of giving and community and contribution as making a better world that she wants to participate in, right? As an easy example.

The reason I'm mentioning that is that in the article you referenced, as we're looking at people's perspectives toward robots, one of the categories it talked about was sex robots. It was talking about married people using sex robots for affairs in lieu of other humans. It said that both men and women felt better about using sex robots for that purpose as opposed to humans, but more so men, right? Not surprising, because men are more likely to have affairs. Why? Because men have biological programming compelling them to need a certain level of sexual activity, or to have sexual activity that aligns with their ego and their self-conception in certain ways.

What I'm saying is, we have all these different classifications of robots. Look, the degree to which different people identify robots in different ways, see robots in their life and in the future of our society, maps back to their selfish perception of what's good or right for them. People in industries where jobs are being taken by robots are going to have one view, whereas people who are going to make more money because of robots are going to have a different view, whereas people who are looking forward and thinking about the perpetuation of humanity and our species long into the future are going to have yet another view.

I don't know. I'm just always skeptical now. Whenever there's a "well, here are the different categories," they just map back to our selfish framings of the world, and unconsciously so. We don't think of ourselves as selfish. We believe things. We are a certain person, a unique human, but it all maps back to this really, really shortsighted, self-interested perspective. I hope robots are less so.

Listeners, remember that while you're listening to the show, you can follow along with the things that we're mentioning here in real time. Just head over to the website, that's just one L in "thedigitalife," and go to the page for this episode. We've included links to pretty much everything mentioned by everybody, so it's a rich information resource to take advantage of while you're listening, or afterward if you're trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. If you want to follow us outside of the show, you can follow me on Twitter @jonfollett. Of course, the whole show is brought to you by Involution Studios, which you can check out online. Dirk?

You can follow me on Twitter @dknemeyer, and thanks so much for listening.

That’s it for Episode 210 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.


Jon Follett

Jon is Principal of Involution Studios and an internationally published author on the topics of user experience and information design. His most recent book, Designing for Emerging Technologies: UX for Genomics, Robotics and the Internet of Things, was published by O’Reilly Media.

Dirk Knemeyer

Dirk is a social futurist and a founder of Involution Studios. He envisions new systems for organizational, social, and personal change, helping leaders to make radical transformations. Dirk is a frequent speaker who has shared his ideas at TEDx, Transhumanism+ and SXSW, along with keynotes in Europe and the US. He has been published in Business Week and has participated on 15 boards spanning industries like healthcare, publishing, and education.


Co-Host & Producer

Jonathan Follett @jonfollett

Co-Host & Founder

Dirk Knemeyer @dknemeyer

Minister of Agit-Prop

Juhan Sonin @jsonin

Audio Engineer

Michael Hermes

Technical Support

Eric Benoit @ebenoit

Brian Liston @lliissttoonn

Original Music

Ian Dorsch @iandorsch
