UX for Robotics
September 3, 2015
Robots are ideal for taking care of jobs that are repetitive, physically demanding, and potentially hazardous to humans. There are immediate, significant opportunities for using advanced robotics in energy, health, and manufacturing. Designers working in robotics will need to help identify the major challenges in these areas and seek proactive solutions — not an obvious or easy task.
In this episode of The Digital Life we discuss the future of UX for robotics, and interview Scott Stropkay and Bill Hartman of the design firm Essential, on human-robot interactions. This interview aired originally on O’Reilly Radar.
Designing for Emerging Technologies
More so than any other emerging technology, robotics has captured the imagination of American popular culture, especially that of the Hollywood sci-fi blockbuster. We’re entertained, enthralled, and maybe (but only slightly) alarmed by the legacy of Blade Runner, The Terminator, The Matrix and any number of lesser dystopian robotic celluloid futures. It remains to be seen if robot labor generates the kind of negative societal, economic, and political change depicted in the more pessimistic musings of our culture’s science fiction. Ensuring that it does not is a design challenge of the highest order. We must seek to guide our technology, rather than just allow it to guide us.
In the near term, robots are ideal for taking care of jobs that are repetitive, physically demanding, and potentially hazardous to humans. There are immediate, significant opportunities for using advanced robotics in energy, health, and manufacturing. Designers working in robotics will need to help identify the major challenges in these areas and seek proactive solutions — not an obvious or easy task.
We can see an example of these major challenges in the tragic events of the Fukushima meltdown. On March 11, 2011, a 9.0 magnitude earthquake and subsequent tsunami damaged the Fukushima Daiichi nuclear reactors in Japan. Over the course of 24 hours, crews tried desperately to fix the reactors. However, as the back-up safety measures failed one by one, the fuel rods in the nuclear reactor overheated, releasing dangerous amounts of radiation into the surrounding area. As radiation levels rose far too high for humans to tolerate, emergency teams at the plant were unable to enter key areas to complete the tasks required for recovery. Three hundred thousand people had to be evacuated from their homes, some of whom have yet to return.
The current state of the art in robotics is not capable of surviving the hostile, high-radiation environment of a nuclear power plant meltdown and dealing with the complex tasks required to assist a recovery effort. In the aftermath of Fukushima, the Japanese government did not immediately have access to hardened, radiation-resistant robots. A few robots from American companies, tested on the modern battlefields of Afghanistan and Iraq, including iRobot's 710 Warrior and PackBot, were able to survey the plant. The potential for recovery-related tasks that can and should be handled by advanced robotics is far greater than this. However, for many political, cultural, and systemic reasons, an investment in robotic research was never seriously considered before the Fukushima event. The meltdown was an unthinkable catastrophe, one that Japanese officials thought could never happen, and as such, it was not even acknowledged as a possible scenario for which planning was needed.
The Fukushima catastrophe inspired the United States Defense Advanced Research Projects Agency (DARPA) to create the Robotics Challenge, the purpose of which is to accelerate technological development for robotics in the area of disaster recovery. Acknowledging the fragility of our human systems and finding resilient solutions to catastrophes—whether it’s the next super storm, earthquake, or nuclear meltdown—is a problem on which designers, engineers, and technologists should focus.
In the DARPA competition mission statement, we can see the framing of the challenge in human terms.
History has repeatedly demonstrated that humans are vulnerable to natural and man-made disasters, and there are often limitations to what we can do to help remedy these situations when they occur. Robots have the potential to be useful assistants in situations in which humans cannot safely operate, but despite the imaginings of science fiction, the actual robots of today are not yet robust enough to function in many disaster zones nor capable enough to perform the most basic tasks required to help mitigate a crisis situation. The goal of the DRC is to generate groundbreaking research and development in hardware and software that will enable future robots, in tandem with human counterparts, to perform the most hazardous activities in disaster zones, thus reducing casualties and saving lives.
The competition was successful in its mission to encourage innovation in advanced robotics. In the competition finals held in June of this year, teams from around the world competed at a variety of tasks related to disaster recovery, which included driving cars, traversing difficult terrain, climbing ladders, opening doors, moving debris, cutting holes in walls, closing valves, and unreeling hoses.
The gap between the problems we face as a species and the seemingly unlimited potential of technologies ripe for implementation calls for considered but agile design thinking and practice. Designers should be problem identifiers, not just problem solvers searching for a solution to a pre-established set of parameters. We are on the cusp of a new technological age, saddled with the problems of the previous one, and as we step forward we must not make the same mistakes. To do this, we must identify the right challenges to take on, the significant and valuable ones, because this is where emerging technologies, like robotics, can have their greatest impact.
If you’re interested in further exploration of this topic, check out “Designing for Emerging Technologies”, coming from O’Reilly Media this December, a project on which I was honored to serve as editor. In this book, you will discover 20 essays from designers, engineers, scientists, and thinkers exploring areas of fast-moving, groundbreaking technology in desperate need of experience design — from genetic engineering to neuroscience to wearables to biohacking — and discussing frameworks and techniques they’ve used in the burgeoning practice area of UX for emerging technologies.
I interviewed Bill Hartman and Scott Stropkay, from the design firm Essential, on their chapter about robots in healthcare. This interview originally appeared on O’Reilly Radar.
We think that the challenges inherent in these kinds of scenarios are fascinating: how you get people to accept a robot in a relationship that you’d normally have with a person in, let’s say, a hospital setting; how you develop acceptance from a team that’s not used to working with a robot as part of their functional team; how you develop trust in those relationships; how you engage people both practically and emotionally. How, as the scenario progresses, you bring robots into your home to monitor your recovery is one of the issues that we’ve begun to address in our work.
We’re pursuing other ideas related to using smart monitors, in the form of robots and robotically enhanced devices, that can help you improve behavior change over time and look at recovery. Then, ultimately, we’re thinking about — although we’ve not yet worked in — some of the interesting science that’s happening with robots you ingest, which can learn about you and monitor you over time. So there’s just a world of fascinating issues about what you want to know, how you might want to learn that, who gets access to this information, and how that interface could be designed.
Would you agree with that statement?
In the case of this telepresence robot, the posture it took, the way it moved, the way the user would get introduced to the doctor the robot was standing in for, and the way the care team in the room would read the robot’s attitude toward the person were all things that we were trying to design.
Knowing when you can have a casual conversation with the doctor, and knowing when the doctor, through the robotic embodiment, is rushing to the ER, are important things to indicate. Humans signal these in many subtle and unconscious ways, and we now have to be very conscious about how we incorporate them into a machine interface.
We can use those same principles and look for implications as robots serve our higher-order needs moving forward: as we move from serving needs related to convenience or performance to actually supporting our decision-making, and as emerging technologies move from the early stages of a new design, where the user interface can do anything or be magic, to being more human in the user interface. That point about making an emotional appeal, as well as a logical appeal and a credible appeal, to us as humans in developing our degrees of trust is really critical.
Where my kids go to school, they have a motto of “freedom and responsibility.” As robots take on these higher-order functions, they need to prove to their users that they are responsible enough to be given higher degrees of freedom in how they operate and how they support our decision-making.
I was recently listening to Barry Schwartz, the author of The Paradox of Choice, being interviewed on Fresh Air. He described mutual funds, which have been around a few decades. They automate the process of asset allocation and diversified investments for 401(k) plan participants, but when the number of mutual funds becomes too great, participation in 401(k) plans actually drops off.
As robots take on higher degrees of autonomy and freedom in guiding our decision-making, there are certain assumptions that they will need to make along the way in doing that so we aren’t overwhelmed with the number of choices available to us and can make meaningful choices within realistic constraints. This notion of choice architecture and, as designers, choreographing these choice architectures in some of the algorithms that might go into the robotics is really, really key.
Referring back to the Nielsen heuristics, I think that visibility of system status is a biggie. The telepresence robot from InTouch Health conveys its status through its anthropomorphic body language as it moves through the hospital. Am I in a hurry to get to another department, or do I have time to linger, take up a new conversation, or convey information to a passerby, without anyone having to ask the robot specifically how much time it has available? It’s purely in the body language, in the visibility of the system status for the robot.
I think the flexibility and efficiency of use is also key. Is somebody a novice with a particular robot, using it for the first time? Is more transparency, or self-evidence in the decision-making, required, or can it operate in a more expert fashion? Robots knowing their audiences can add that much more value and, indeed, help the technology support our needs as humans rather than get in the way of them.
Yeah, and it’s interesting to put it that way, because when we’re talking about human-robot interaction, we’re talking about systems, right? I would never say, “Bill’s system status is busy,” but that’s essentially what we’re saying about the robots. That’s a fascinating connection between cultural mores, body language, and all of the, call it, soft language and communication skills that we pick up as human beings and that we’re now hard-coding into human-robot communication methods. I really enjoyed that part of your chapter.
Our final question for the show: you offered some thoughts in your chapter about people becoming robotic. We’re familiar with the bionic man and science-fiction cyborgs where people are part human, part robot, whether it’s helping to aid recovery in a healthcare scenario, or, as I’ve seen, enhancing human abilities in industrial working environments with exoskeletons. What are some of the critical areas that designers need to consider when creating these products and tool sets, which are changing, in some ways, the way we understand being human? Scott, what’s your take on that?
There are people that we interact with who are already becoming robotic, and there are more people that we will be interacting with in the future who have some of these capabilities added to themselves. There are other applications, typically initiated through DARPA and military settings: enhancing human capabilities to save people from compromised situations, or to move heavy things over long distances in ways you otherwise couldn’t, using smart prostheses and exoskeletons. These experiments and developments are happening right now.
I think it’s fascinating how much is happening. Most of us don’t appreciate it because we’re not interacting with these people and these little communities in these specialty areas, but we’ll be seeing more of that moving into our lives and into our world. It’s going to get interesting as it relates to having a casual conversation with somebody who can see infrared in the room that you can’t, and who is using that information one way or another.
I think it’s difficult to predict where that is headed, but I think, ethically and just in terms of our personal relations and our relations with robots, that understanding where there are fair versus unfair performance characteristics is going to be part of that conversation moving forward.
We’ve included links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. If you want to follow us outside of the show, you can follow us on Twitter. I’m @jonfollett, J-O-N-F-O-L-L-E-T-T. Of course, the whole show is brought to you by Involution Studios, which you can check out at goinvo.com, that’s G-O-I-N-V-O.com.
That’s it for Episode 119 of The Digital Life. I’m Jon Follett and I’ll see you next time.