Bull Session

UX for Robotics

September 3, 2015

Episode Summary

Robots are ideal for taking care of jobs that are repetitive, physically demanding, and potentially hazardous to humans. There are immediate, significant opportunities for using advanced robotics in energy, health, and manufacturing. Designers working in robotics will need to help identify the major challenges in these areas and seek proactive solutions — not an obvious or easy task.

In this episode of The Digital Life we discuss the future of UX for robotics and interview Scott Stropkay and Bill Hartman of the design firm Essential on human-robot interactions. This interview originally aired on O’Reilly Radar.

Resources
Essential Design
Designing for Emerging Technologies


Jon:
Welcome to Episode 119 of The Digital Life, a show about our adventures in the world of design and technology. I’m your host Jon Follett. For our podcast topic this week, we’re going to explore the future of design and UX for robotics.

More so than any other emerging technology, robotics has captured the imagination of American popular culture, especially that of the Hollywood sci-fi blockbuster. We’re entertained, enthralled, and maybe (but only slightly) alarmed by the legacy of Blade Runner, The Terminator, The Matrix and any number of lesser dystopian robotic celluloid futures. It remains to be seen if robot labor generates the kind of negative societal, economic, and political change depicted in the more pessimistic musings of our culture’s science fiction. Ensuring that it does not is a design challenge of the highest order. We must seek to guide our technology, rather than just allow it to guide us.

In the near term, robots are ideal for taking care of jobs that are repetitive, physically demanding, and potentially hazardous to humans. There are immediate, significant opportunities for using advanced robotics in energy, health, and manufacturing. Designers working in robotics will need to help identify the major challenges in these areas and seek proactive solutions — not an obvious or easy task.

We can see an example of these major challenges in the tragic events of the Fukushima meltdown. On March 11, 2011, a 9.0-magnitude earthquake and subsequent tsunami damaged the Fukushima Daiichi nuclear reactors in Japan. Over the course of 24 hours, crews tried desperately to fix the reactors. However, as the back-up safety measures failed one by one, the fuel rods in the nuclear reactors overheated, releasing dangerous amounts of radiation into the surrounding area. As radiation levels rose far too high for humans, emergency teams at the plant were unable to enter key areas to complete the tasks required for recovery. Three hundred thousand people had to be evacuated from their homes, some of whom have yet to return.

The current state of the art in robotics is not capable of surviving the hostile, high-radiation environment of a nuclear power plant meltdown and handling the complex tasks required to assist a recovery effort. In the aftermath of Fukushima, the Japanese government did not immediately have access to hardened, radiation-resistant robots. A few robots from American companies—tested on the modern battlefields of Afghanistan and Iraq—including iRobot’s 710 Warrior and PackBot, were able to survey the plant. But the potential for recovery-related tasks that can and should be handled by advanced robotics is far greater than this. For many reasons, political, cultural, and systemic, investment in such robotic research was never seriously considered before the Fukushima event. The meltdown was an unthinkable catastrophe, one that Japanese officials thought could never happen, and as such, it was not even acknowledged as a possible scenario for which planning was needed.

The Fukushima catastrophe inspired the United States Defense Advanced Research Projects Agency (DARPA) to create the Robotics Challenge, the purpose of which is to accelerate technological development for robotics in the area of disaster recovery. Acknowledging the fragility of our human systems and finding resilient solutions to catastrophes—whether it’s the next super storm, earthquake, or nuclear meltdown—is a problem on which designers, engineers, and technologists should focus.

In the DARPA competition mission statement, we can see the framing of the challenge in human terms:

History has repeatedly demonstrated that humans are vulnerable to natural and man-made disasters, and there are often limitations to what we can do to help remedy these situations when they occur. Robots have the potential to be useful assistants in situations in which humans cannot safely operate, but despite the imaginings of science fiction, the actual robots of today are not yet robust enough to function in many disaster zones nor capable enough to perform the most basic tasks required to help mitigate a crisis situation. The goal of the DRC is to generate groundbreaking research and development in hardware and software that will enable future robots, in tandem with human counterparts, to perform the most hazardous activities in disaster zones, thus reducing casualties and saving lives.

The competition was successful in its mission to encourage innovation in advanced robotics. In the competition finals, held in June of this year, teams from around the world competed in a variety of tasks related to disaster recovery, including driving cars, traversing difficult terrain, climbing ladders, opening doors, moving debris, cutting holes in walls, closing valves, and unreeling hoses.

The gap between the problems we face as a species and the seemingly unlimited potential of technologies ripe for implementation calls for considered but agile design thinking and practice. Designers should be problem identifiers, not just problem solvers searching for a solution to a pre-established set of parameters. We are on the cusp of a new technological age, saddled with the problems of the previous one, and as we step forward we must not make the same mistakes. To do this, we must identify the right challenges to take on — the significant and valuable ones. This is where emerging technologies, like robotics, can have their greatest impact.

If you’re interested in further exploration of this topic, check out “Designing for Emerging Technologies”, coming from O’Reilly Media this December, a project on which I was honored to serve as editor. In this book, you will discover 20 essays from designers, engineers, scientists, and thinkers exploring areas of fast-moving, groundbreaking technology in desperate need of experience design — from genetic engineering to neuroscience to wearables to biohacking — and discussing frameworks and techniques they’ve used in the burgeoning practice area of UX for emerging technologies.

I interviewed Bill Hartman and Scott Stropkay from the design firm Essential about their chapter on robots in healthcare. This interview originally appeared on O’Reilly Radar.

Jon:
Hi. I’m Jon Follett, editor of Designing for Emerging Technologies. With me today are Scott Stropkay, founding partner, and Bill Hartman, director of research at Essential Design, two of the book’s contributing authors. Their chapter looks at designing human-robot relationships, which we’ll explore a little in this interview. Guys, welcome to the show.

Scott:
Thanks for having us.

Bill:
Thanks for having us.

Jon:
You opened your piece with a very vivid, first-person description of a patient telehealth scenario managed by a remote doctor through a robotic intermediary. Could you give us a quick overview of that scenario and tell us how it exemplifies some of the challenges inherent in designing human-robot relationships?

Scott:
Sure. We chose to write our chapter around a user scenario because we find ourselves designing pieces of these scenarios for different clients in different contexts. We’ve spent time working on telepresence robots in the hospital setting, social robots in other settings, and other robots for other pieces of the scenarios we’re imagining here for the future. The scenario setting is a good way to make these issues very personal.

We think the challenges inherent in these kinds of scenarios are fascinating: how you get people to accept a robot in a relationship you’d normally have with a person in, let’s say, a hospital setting; how you develop acceptance from a team that’s not used to working with a robot as part of their functional team; how you develop trust in those relationships; how you engage people both practically and emotionally. How, as the scenario progresses, you bring robots into your home to monitor your recovery is one of the issues we’ve begun to address in our work.

We’re pursuing other ideas related to smart monitors, in the form of robots and robotically enhanced devices, that can help you improve your behavior change over time and look at recovery. Ultimately, we’re also thinking about some of the interesting science happening with robots that you ingest, which can learn about you and monitor you over time, although we’ve not yet worked on that. So there’s a world of fascinating issues about what you want to know, how you might want to learn it, who gets access to this information, and how that interface could be designed.

Jon:
Yeah. That opening scenario, for our listeners, is a person waking up in a hospital after some kind of operation and having their doctor communicate with them via this telehealth robot, which you guys actually did some work on. The scenario itself is a little scary, a little shocking, I think, both for the user and for the reader, which I like because it really puts you in the user’s shoes. It strikes me, and you mentioned this, Scott, that emotion and these sorts of primal senses we have are going to be important to design for as we think about HRI, human-robot interactions. I think it’s worth considering that the emotional aspects are just as important as the technology aspects.

Would you agree with that statement?

Scott:
Definitely. The drivers, of course, are practical. The idea of a telepresence robot in an emergency room setting, let’s say, is totally practically driven. The idea that you may not have access to the best doctor because you’re in some remote location becomes a non-issue if you can access that great specialist from anywhere in the world. The motivation is practical and a huge user benefit, but getting this technology adopted is difficult because it’s not the scenario anyone is expecting to experience.

In the case of this telepresence robot, the posture it took, the way it moved, the way the user would get introduced to the doctor the robot was going to embody, and the way the care team in the room would read the robot’s attitude toward the person were all things we were trying to design.

Knowing when you can have a casual conversation with the doctor, and knowing when the doctor, through the robotic embodiment, is rushing to the ER, are important things to indicate. Humans signal these things in many subtle and unconscious ways, and we now have to be very conscious about how we incorporate them into a machine interface.

Jon:
Right. That’s a good lead-in to our second question, which I’m going to direct to Bill. In your chapter, you discussed a set of design rules of thumb for human-robot interaction based on Jakob Nielsen’s usability heuristics. What would you say are the most important of these, and how do you see them manifest in the work you do every day?

Bill:
Yeah, that’s a great question, Jon. When Nielsen came up with that framework, I don’t think it was necessarily directed toward artificial intelligence a few decades down the road, or even human-robot interactions, but those heuristics have proven themselves valuable over and over again to user-experience designers, as well as to usability testing experts, in terms of things to look for.

We can use those same principles and look for their implications as robots serve our higher-order needs moving forward: as we move from serving needs related to convenience or performance to actually supporting our decision-making, and as emerging technologies move from the early stages of a new design, where the user interface can do anything or seem like magic, to being more human in the user interface. That point about making an emotional appeal, as well as a logical and credible appeal, to us as humans in developing our degrees of trust is really critical.

Where my kids go to school, they have a motto of “freedom and responsibility.” As robots take on these higher-order functions, they need to prove to their users that they are responsible enough to be given higher degrees of freedom in how they operate and how they support our decision-making.

I was listening to Barry Schwartz, the author of The Paradox of Choice, being interviewed recently on Fresh Air. He described how mutual funds, which have been around for a few decades, automate the process of asset allocation and diversification for 401(k) plan participants; but when the number of mutual funds on offer becomes too great, participation in 401(k) plans actually drops off.

As robots take on higher degrees of autonomy and freedom in guiding our decision-making, there are certain assumptions they will need to make along the way so that we aren’t overwhelmed by the number of choices available to us and can make meaningful choices within realistic constraints. This notion of choice architecture, and of designers choreographing these choice architectures in some of the algorithms that might go into the robotics, is really key.

Referring back to the Nielsen heuristics, I think visibility of system status is a biggie. The telepresence robot from InTouch Health conveys its status through its anthropomorphic body language as it moves through the hospital. Am I in a hurry to get to another department, or do I have time to linger, take up a new conversation, or convey information to a passerby, without anyone having to ask the robot specifically, “How much time do you have available?” It’s purely in the body language and the visibility of the robot’s system status.

I think flexibility and efficiency of use is also key. Is somebody a novice with a particular robot, using it for the first time? Is more transparency or self-evidence in the decision-making required, or can it operate in a more expert fashion? Robots that know their audiences can add that much more value and, indeed, help the technology support our needs as humans rather than get in the way of them.

Jon:
Yeah, both of those I think are critical heuristics. It’s interesting to consider what basically amounts to the elements of body language being described as system transparency. If you and I are walking in the hallway, you can convey to me via your body language your system status, “Oh, um, Bill’s busy. He’s doing something.”

Yeah, but it’s odd to put it that way, because when we’re talking about human-robot interaction, we’re talking about systems, right? I would never say, “Bill’s system status is busy,” but that’s essentially what we’re saying about the robots. That’s a fascinating connection between cultural mores, body language, and all of these soft language and communication skills that we pick up as human beings and that we’re now hard-coding into human-robot communication methods. I really enjoyed that part of your chapter.

Our final question for the show: you offered some thoughts in your chapter about people becoming robotic. We’re familiar with the bionic man and science fiction cyborgs, people who are part human and part robot, whether the technology is helping to aid recovery in a healthcare scenario or, as I’ve seen, enhancing human abilities in industrial working environments with exoskeletons. What are some of the critical areas designers need to consider when creating these products and tool sets, which are changing, in some ways, what we understand about being human? Scott, what’s your take on that?

Scott:
It’s interesting, because there are people we interact with today who are becoming robotic. Anybody who wears a cochlear implant, either to recover hearing or to actually hear for the first time because their biological systems weren’t there to offer that sensory input to begin with, is robotic. There are companies offering retinal implants that provide normal and even superhuman sensory inputs through your eyes, which your brain can now interpret, and there are people working on other kinds of sensors that let you understand and react to your environment on levels beyond what a human can currently perceive with their normal sensory array.

There are people we interact with who are already becoming robotic, and there are more people we will be interacting with in the future who have some of these capabilities added to themselves. There are other applications, typically initiated through DARPA and military settings, for enhancing human capabilities: saving people from compromised situations, or moving heavy things over long distances in ways you otherwise couldn’t, by using other kinds of smart prostheses and exoskeletons. These experiments and developments are happening right now.

I think it’s fascinating how much is happening. Most of us don’t appreciate it because we’re not interacting with these people and these little communities in these specialty areas, but we’ll be seeing more of that moving into our lives and into our world. It’s going to get interesting when you’re having a casual conversation with somebody who can see infrared in the room that you can’t see, and who is using that information one way or another.

Jon:
I guess blushing will have all sorts of new meanings for that person. Bill, what are your thoughts on becoming superhuman, becoming robotic?

Bill:
Yeah. There’s already been a bit of an arms race in the professional athletics arena with performance-enhancing drugs and the ability of those products’ users to conceal them. Even the blade runner in the last summer Olympics was not something that could easily be concealed, and that has raised some real questions about whether or not the competition is fair. The topic we were discussing previously, about visibility of system status and our trust in robots when they’re autonomous, applies here too. Our worlds are colliding as we, as people, take on higher degrees of mechanized componentry, regrow parts on an organic level, or augment our own performance through technologies that will presumably be easier to conceal in the future.

I think it’s difficult to predict where that is headed, but I think, ethically and just in terms of our personal relations and our relations with robots, understanding where the line is between fair and unfair performance characteristics is going to be part of that conversation moving forward.

Jon:
Yeah, there are going to be a lot of interesting conversations, I’m sure, as we start seeing more and more cyborg Americans, including in the Olympics. Again, thank you guys so much for coming on the show today. I really appreciate your time.

Scott:
Thanks for having us. It was a lot of fun.

Bill:
Thanks, Jon. Congratulations on your book.

Jon:
Listeners, remember that while you’re listening to the show, you can follow along with the things we’re mentioning here in real time. Just head over to thedigitalife.com (that’s just one L in the Digital Life) and go to the page for this episode.

We’ve included links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. If you want to follow us outside of the show, you can find us on Twitter. I’m @jonfollett, that’s J-O-N-F-O-L-L-E-T-T. Of course, the whole show is brought to you by Involution Studios, which you can check out at goinvo.com, that’s G-O-I-N-V-O.com.

That’s it for Episode 119 of The Digital Life. I’m Jon Follett and I’ll see you next time.
