Welcome to episode 232 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host, Dirk Knemeyer.
For our podcast this week, we’re going to chat with our special guest, Karen Kaushansky about designing new user experiences and interfaces for emerging technologies. Karen is an experienced designer and futurist who works on things five to 10 years before they’re on most people’s radar, designing new experiences from speech recognition to biometrics to autonomous vehicles. Karen, welcome to the show.
Karen, let’s start off with our softball question. I’d love to know: what new experiences and technologies are you most excited about? What gets you up in the morning and makes you want to go to work?
Right. These days, I’m a consultant, and I have multiple clients each trying to push the envelope in some way with certain technologies, and it’s not that they’re all new technologies. Sometimes they are technologies that are being used in new ways or in a new context. I’ve been building speech recognition experiences for 20 years, and we’re still working on building new experiences with speech recognition. One example is virtual worlds now, mixed reality worlds, where it’s not just about talking to Alexa in your home. When you have new virtual experiences, augmented reality, and you meet people in that new reality, or you want to interact with something in a new kind of reality, speech is a great way to interact, but there’s a lot of work to make that actually function in these new contexts.
Again, I’m excited about mixed reality bringing lots of different kinds of input and output to new realities, and that includes speech recognition and biometrics. I also have clients that work on autonomous vehicles, and I think there’s a lot there that gets me out of bed in the morning.
Right. Yeah. Autonomous vehicles have really captured the imagination of the media, for one, and I suppose in the United States anyway, where we’re a very car-centric country, this idea of autonomous vehicles is a gripping one because it means that we’re going to change the way our lives are structured. When you’re seeing the autonomous vehicle technology developed, and you’re thinking about that as a new experience for people, what are the things as a designer that are your primary concerns, and how do you even tackle a design problem as big as the autonomous vehicle?
Right. It’s not just being excited about autonomous vehicles; it’s what do I as a designer bring to that problem space or help figure out? There are many design challenges and opportunities with autonomous vehicles. One is how, in an Uber-without-the-driver scenario, people are going to interact with their vehicles. I was director of experience at Zoox for a while. They are building a robot taxi from the ground up, so a car … You’re going to call a car to come get you. It will come with no driver. Today, you do the, “Hey, I’m Susan. Are you Bob?” You do that exchange with your Uber driver.
We need to replace all of those conversations. It’s building relationships and connections, and building new languages from a vehicle to a passenger, as well as what happens during that ride. Now the interior of the vehicle can be very different than your traditional experience, so what are all the interactions that you could have within your vehicle? That can be speech. It could be, “Hey, what can you do with your windows? What can you do in an augmented reality world? What could you do to enhance that experience on your journey?” Yeah. Go ahead.
Yeah. That fascinates me because I’ve read about how there are cultural differences in terms of what people would like to be doing in their cars when they’re not driving. In the US, I imagine, we have one perspective on that, whereas in India, the autonomous vehicle experience, those things that go beyond just the driving experience, which is what we’re talking about now, those things are going to be vastly different, and I imagine the European experience would be different as well.
Yes, absolutely. That’s a challenge for designers, right? It’s knowing the context, knowing who you’re designing for and what you’re designing for, because it’s also different if you’re designing for a three-mile commute in downtown San Francisco versus designing for a 500-mile journey from San Francisco to LA. What are those differences, and do you need to be designing for everything at the same time? There are lots of fun things with autonomous vehicles. Something else that I’ve been focusing on is how autonomous vehicles should interact with their environment. Basically, autonomous vehicles are going to impose themselves onto society. There are going to be users that never signed up to interact. The pedestrians in the crosswalk are going to look over and see this thing coming. They’re not asking to interact, but they’re looking over and they’re saying, “Is this thing going to stop?” So I’m thinking about the language of how autonomous vehicles could interact with the outside world and creating new languages. There are these robot languages, robot interactions, that I work on. Back to you. Yeah. Sorry. Go ahead.
Oh, please. Go ahead. Go ahead.
I think your original question was like how do you handle unforeseen circumstances?
Right. There are so many dimensions to autonomous vehicles that need to be designed. I think in all of these emerging technologies and new experiences, the one thing that I always ask is: what’s the cost of getting something wrong? That really means writing down, spelling out, all of those edge cases, all of the understanding of where things can go wrong. We might not have answers for all of that, but designing for it from the start becomes really important.
Wow. Do you have some kind of framework or way you dive into that question of what could possibly go wrong with … For example, we’re talking about autonomous vehicles. That feels like it could be a novel-length list to tackle. Is there a way that … How do you wrangle all of those unforeseen circumstances?
I think there’s a subtle difference between asking what’s the cost of getting something wrong, and asking what can go wrong.
I think it’s less important to know all of the things that can go wrong if the cost of getting something wrong is minimal, and we see this all the time, right? You ask, “Hey, Alexa. Play a song …” She just woke up. She’s listening to me. Okay.
You ask her to play a song by The Rolling Stones and she starts playing something by somebody else. The cost of getting something wrong in that case is … Oh my gosh. She’s playing the song.
What a fantastic example of unforeseen circumstances.
I couldn’t hear exactly what she was playing, but she was playing something. It’s like understanding the cost as well as who does the user blame. Look, if Alexa … Oh, crap. I’ve got to-
You could say echo. Say echo from now on, so that she doesn’t turn on.
Right. I’ve already bought it, right? The cost of getting something wrong with that … Yes, it’s annoying. I’m going to be like, “Oh, that’s such a pain.” But it’s not a first impression of the device, and so it’s not going to impact everything. With an autonomous vehicle, the cost of something going wrong is on a whole different scale, and understanding what that is and how to design for it is really important.
Yeah. Whenever I think of these emerging technologies, I think of all of the people who are going to hack them, right? With autonomous vehicles, I remember reading some sci-fi book where the scenario is there are all these autonomous vehicles and they all have to stop when a human is in the road, so people can carjack you really easily: a bunch of people stand in the road, the car stops, and then all of a sudden, you can’t do anything because the car can’t run over these people. It’s very easy to mess with the autonomous vehicle. That’s just an example of these unforeseen circumstances that we’re talking about. I love that you’re framing this as a sort of cost-benefit analysis for these circumstances, because that seems like a much more pragmatic way of tackling these scenarios. Otherwise, I imagine you could just design for scenarios until infinity, so you’d never release a product.
Yeah. The other approach that I do spend quite a bit of time on is investing time in knowing what you don’t know, or understanding what it is that you don’t know. Again, for a lot of these voice assistants, they may not be able to book you a reservation, so you’re like, “Hey, Google. Book me a reservation for two tomorrow night at an Italian restaurant.” Understanding what you said is separate from being able to actually book the reservation. It’s being smart about handling what you can’t handle and having smart responses: “Oh, this person wants to book a reservation, so we understand what they want to do. We can’t do it for them, but at least we can tell them that.” I think the broader point here is setting expectations of what the technologies can and can’t do. Dirk and I have talked a lot about this with AI, setting expectations about what AI can and can’t do, so that the people that interact with it have a better sense of what it can achieve.
Yeah. I think that’s going to be key going forward, and it’s interesting that you’re also designing these boundaries, which you really don’t think about, at least in some more traditional design fields. You don’t think about those boundaries quite so much because people are familiar with the technology, be it an app or a poster. You know what the limitations are, whereas in this case, you’re introducing the technology to people. You are providing that entry point, so you do have to provide this definition.
Oh, please. Go ahead. Yep.
Sorry. I need to wait to like raise my hand or something. Yeah. Let me figure out how I want to phrase it. Karen, the things you’re working on are really complex. Everything, like the self-driving car, includes all of the things that would go into a traditional industrial design process, all of the things that would go into a traditional UX design process, and other things on top of that. From a creator’s perspective, from a process perspective, how is the work you do different from that of other designers who might be listening, whether they be industrial or UX or something else? Is it a bigger process, a more complicated process? How do all of these things fit together in such a complex end product?
Yeah. This is some of where I think design is going in the future. I feel like in these complex products and with these emerging technologies … I hate to say it, but design is at the center, and all of the designers together are kind of at the center to create a vision, probably along with product management. Product and design create a vision of where things need to go, and it’s really product managers and designers who collaborate with all these other teams to make things happen. It’s systems design. It’s definitely a collaboration with the industrial designers, the visual designers, the interaction designers, but it also bridges into all the engineering teams. Working at a self-driving car company, the AI stack, computer vision, perception, human-machine interactions, the industrial design, all of that kind of was brought together by the designers and the product team.
Otherwise, all these systems are siloed, and one of the most important jobs was to make sure all of those teams are talking to each other, both the people physically talking to each other and the systems themselves. One piece of information has to get passed from one system to another system in order to tell a passenger that the car has arrived. That whole stream of information is not going to happen without design and product making sure that the requirements and the vision of the design are getting communicated and achieved. I don’t know if that answers the question.
Yeah. Sort of abstractly, it does. There’s another layer, with different groups and different disciplines coming together and working together. In a concrete sense, are there specific things that designers of any of those particular disciplines, who haven’t had the opportunity to participate in these emerging technologies or complex products, would be surprised is different about designing or creating in that context? Or is the basic plan, architect, design, engineer, whatever generic design process people would have, still pretty much the same in the things that you’re doing?
I don’t think that the design process has changed. I just think that there are more dimensions, and the challenges are around speccing out what it is. We might have an ultimate vision, like, “Here’s the vision video of what we want to build,” but then to design every element, the interactions, the visuals, the pixels, the industrial design, and how it all comes together, there is no good tool out there to spec all of these interactions, some of which are gesture and voice together in a 3D mixed reality world. That’s one of the biggest challenges: how to communicate what it is we want the design to be to the engineers, to QA, and to everyone in the chain. I think the design process is the same, but there are bigger challenges, and it’s getting harder and harder to communicate the design.
I find that really interesting because I’ve noticed over the past couple of years, we’re starting to get more native interaction design/prototyping tools that are purpose-built, meant for the software designer, right? You have your InVision, your Adobe XD, which are purposely made for the environments that they’re designing for. I don’t know how many decades you’d want to say we are into digital now, but it’s taken a really long time for those tools to come about. It’s interesting that the design tools are obviously just following on the heels of the designers, because you’re breaking new ground here, so you’re probably using tools that are meant for other things and then speccing out the design as best as you can.
Yep. Yep, exactly. Right. Again, no good tools. Even with Alexa, and I don’t know enough about whether they have proprietary tools, but for Alexa, showing speech input and visual output on a screen, there’s still no good way to do that.
How do you see designers’ jobs changing over time? Because we’re in this really interesting time of flux, and maybe that’s what it will be going forward, this ever-present flux where new technologies are coming on, happening so quickly, that a designer’s job description is just going to be learn, learn, learn and react. For the people going into the field today, how do you see their jobs changing? They’re going to be going into jobs that maybe haven’t even been invented yet.
Yeah. I think it goes back to what we were talking about with design at the center. Designers might have their depth in one area, the T-shaped designer, and the horizontal is, I think, some part technologist, some part product manager, collaborator, facilitator, bringing all of the different technologies or pieces together with their own skill to create an experience. These technologies are not like a little black box where you just plug them in and they work, right? It’s understanding the APIs, or not the APIs exactly, but the inputs and outputs that you need from these technologies, to know what to ask for to create these experiences. I think the horizontal, again, is a lot around design facilitation, collaboration, and building relationships into other engineering organizations to make sure that everyone is working together to execute on a vision.
Very cool. I wanted to wrap up our interview today with this question about technologies that are coming that you think should be on designers’ radar. You’re working on things that are five to 10 years out. If you were advising a young designer, “Here are the technologies you should really be understanding or looking at,” what would those be?
I think it’s some of the same as what we talked about today. I think artificial intelligence and designing with artificial intelligence is going to change a lot of industries and a lot of interactions, and I think we’re just starting to scratch the surface of the opportunities that are there. Then there are the mixed reality and augmented reality worlds. I think we’re just starting there, and that encompasses a whole bunch of new kinds of interactions and interfaces that we just don’t know what they are yet, so I think it’s a new context for exploration of new interfaces.
Listeners, remember that while you’re listening to this show, you can follow along with the things that we’re mentioning here in real time. Just head over to thedigitalife.com, that’s just one L in The Digital Life, and go to the page for this episode. We’ve included the links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM and Google Play, and if you want to follow us outside of the show, you can follow me on Twitter, @jonfollett. That’s J-O-N F-O-L-L-E-T-T, and of course the whole show’s brought to you by Go Invo, which you can check out at goinvo.com. That’s G-O-I-N-V-O.com. Dirk?
You can follow me on Twitter, @dknemeyer. That’s @ D-K-N-E-M-E-Y-E-R and thanks so much for listening. Karen, how about you?
Yep. You can follow me on Twitter, it’s Karen Kaushansky. I’m kjkausha, so that’s @ K-J-K-A-U-S-H-A. Thanks for having me.
Thanks for being with us today. It’s been a great show. That’s it for episode 232 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.