
Bull Session

Emerging Technologies and the Self

November 9, 2018          

Episode Summary

This week on The Digital Life, we discuss emerging technologies and the self. What makes us ourselves, from the way we take in information to the way we share, communicate, collaborate, and interact with people, has gone digital in a number of ways. In particular, we delve into the topic of virtual reality experiences and empathy, based on the article in Aeon, “It’s dangerous to think virtual reality is an empathy machine”. VR can change how we think about the world, helping us understand different perspectives. For instance, the Virtual Human Interaction Lab at Stanford University created a simulation, from the perspective of a cow, of being raised for slaughter. There are immersive VR experiences of becoming homeless and experiencing racism. But what is the true impact of these early experiments? Join us as we discuss.

Resources:
It’s dangerous to think virtual reality is an empathy machine
Stanford University Virtual Human Interaction Lab

Jon:
Welcome to episode 283 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host Dirk Knemeyer.

Dirk:
Greetings, listeners.

Jon:
This week, we’ll be talking about emerging technologies and the self. In particular, we’re gonna chat a little bit about virtual reality experiences and empathy. But first, a little preamble here on The Digital Life. Of course, we’ve delved pretty deeply into what makes ourselves, ourselves in the 21st century. Namely, all the different ways that we take in information, the way we share, the way we communicate, the way we collaborate and interact with each other. All of these things have gone online. All of this is digital now. So, I think it’s a really interesting question how we are forming as people in the digital age, right? There’s so much potential for good things to happen, and as we’ve seen over the years, there’s so much potential for miserable, miserable stuff to happen as well.

So, this is just one corner of that discussion around virtual reality and empathy, but I think it applies to the larger idea of ourselves in the digital age, and how we look at that. So, let’s dig in a little bit. I must note that this was inspired by the article in Aeon, which is at aeon.co. It’s a great online magazine, and the name of the article is, It’s Dangerous To Think Virtual Reality Is An Empathy Machine.

So, one of the starting points for this is that a few years ago, researchers at the Virtual Human Interaction Lab at Stanford University created a VR simulation of a slaughterhouse. The idea was that people would put on VR goggles and sort of walk around on all fours and experience what it’s like to be a cow: being fed and then eventually being brought to the slaughterhouse, which, you know, sounds like a pretty horrific experience to me. And the concept there was that it would give people an idea of what it was like to be an animal, and that this would also lead us to think about whether we’re being cruel to animals, and what the ultimate morality is around that. That’s the idea, anyway.

And I find the experiment really intriguing, as well as the hypothesis of the article itself. The premise is that we’re creating sympathy for animals through these simulations, but the author of the article says, “You’re not really experiencing this. You can’t be a cow.” The author didn’t buy that notion of manufactured empathy, and I tend to fall on that side of the argument as well. Dirk, when you were reading about this experiment, which side did you fall on?

Dirk:
I mean, I found it both interesting and silly. It’s interesting that we are thinking about technologies such as mixed reality and using them to transport people to different experiences. And this one, to me, is an inherently political one: moving people towards behaviors and beliefs that reduce and eventually eliminate the slaughter of animals for human consumption. But it’s interesting, and it’s noble as well.

You know, it’s silly, though, in that they’re saying it’s empathy, that it’s giving us empathy for the animals or creating sympathy for the animals. That forgets the fact that these animals are in very specific conditions over a long period of time, fully physically ensconced in them, whereas we can be in our little virtual reality goggles having just had an ice cream sundae, crawling around on the floor eating a cheeseburger, ironically enough, without real cattle prods zapping us in our sides. It’s sort of phony baloney. So, it’s skin deep.

So, the idea is interesting. There’s some impact, some value I’m sure in participating in it, but to translate that to sympathy or empathy in any real way is just silliness.

Jon:
Yeah, I think, so point taken on that. I think that’s, we’ll return to that discussion in a moment. I wanted to sort of expand our view to also include some of the other examples that were mentioned in the article. There was a simulation around being homeless as one example, and there’s another one about experiencing racism. Both of which are obviously human topics, not bovine as in the previous example. So, obviously a closer match to what the person is experiencing with a different perspective, a different set of life circumstances, let’s say.

Now, to me, those were sort of closer to the mark in terms of eliciting a response, sort of in keeping with the fact that these are human experiences and these are perspectives that you may not be exposed to at all, whether it’s being homeless or experiencing racism. You know, some folks will just not experience those things. And so taking the virtual walk in other people’s shoes may be valuable. Dirk, how did those two experiments strike you?

Dirk:
I’m skeptical again, and here’s the problem. Let’s say you have four types of people. Let’s talk about racial microaggressions, for example. You have people who experience those things in real life. You have people who don’t experience those things in real life, but already know of them and believe in them. Then you have people who don’t experience them in real life, know about them, but don’t believe them. And then you have people who don’t experience them in real life, don’t know about them, and thus can’t believe in them or not.

For the people who experience them, assuming the simulations are sort of accurate and correct, those people are going to say, “Yes, that’s right.” For the second group of people, the people who don’t experience them but believe that those things are a part of life, there’s something there. You can sort of experientially get it. It’s both reaffirming something you already think in your brain, but also giving you some experiential context to imagine. So there’s some value there. Then you’ve got the people who don’t believe in it, and I think more likely than not, much more likely than not, they’re not gonna believe in it afterward either, right? Because somebody put that together. The reason that people wouldn’t believe in a political agenda like that is that they don’t wanna believe in it. And if they go through this app and experience those things, they’re just gonna say, “Oh well, anybody can take these clips and put them together and make it seem that way.” And yeah, anybody can.

There’s nothing in the experiential aspect of it that is going to take someone from being dug in and doubting it to, “Oh my God, if only I had known.” Because it’s not a real life thing. It’s a carefully curated thing. And then you have the people who are completely ignorant of it, and I think depending on sort of their political persuasion and beliefs to begin with, they will either find something there or not find something there. So, I think it’s in very narrow bands that it has impact. I think it can have good impact within those bands, but the framing of it as having this transformative sort of universal power, I just think is really Trumpian, and it’s overhype and overstatement.

Jon:
Yeah, I think so. I do think we’re at sort of the beginnings of experimenting with what’s possible with virtual reality and augmented reality, and sort of establishing what could be very useful tools for a different level, a different type of experience. Sort of more visceral, more immersive, more interactive. However you wanna frame that up. Sort of the same way we look at video games now and all their sophistication, and then I can go back and think of the games that I played at my desktop computer in the late ’80s. Sort of the vast difference between those and how much more nuanced, and beautiful, and sophisticated, and interesting video games are now.

And I think we had sort of the same discussions about video games, in particular I remember lots of discussions around first person shooter games. Whether or not those would influence people. Or even just what we’re communicating through games for education and for development of learning, etc. So I think virtual reality and augmented reality are the sort of natural inheritors of a lot of the purposes for what we’re using gaming for now. I mean, very specifically around video gaming. And I do think in the future, the sort of, I don’t know if I’d go so far as to call it scary, but you put your finger on it Dirk that there’s a particular narrative that’s being pushed or crafted here, to allow you to think in a certain way, which you may agree or disagree with, depending on your perspective. But to sort of think about how, whatever is done in that way in which you agree, you know someone can take those same tools and create a whole bunch of stuff that you would disagree with vehemently, right?

So, for example, showing certain groups of people. You have the racism app. I could see someone developing an application showing bad behaviors of certain groups that people didn’t like, right?

Dirk:
So, or real cheesy workplace education videos, right? Like HR.

Jon:
Right, you could certainly, you could. The technology of course supports any kind of narrative that we can imagine. So I think it’s worth discussing what the possibilities are, and I think down the road, the possibilities for influence are very much a part of the virtual reality discussion. Even if it’s just a small group of people you can influence that way. You know, for instance, we talk about all the influence campaigns, the propaganda campaigns on various digital platforms, and those are more or less text messages combined with some photographs, right? So I think we’re bringing online tools that are much more powerful in their ability to influence, and I think this discussion today sort of highlights that fact. And I don’t know if I should be nervous about that or not. I never, in a million years when I started using Twitter, would have thought, “Propaganda campaigns on Twitter? What are you talking about?” That just never even crossed my mind. So, I wonder where these things will take us.

Dirk:
Yeah, I mean let’s look at games and the history of games for some guidance. What we’re talking about here, these example mixed reality apps, are very much what I’ll call stage one. Okay? And taking that to games, learning games have long been a holy grail in game creation, and what we found is that the games that are explicitly created to be learning games, that are called learning games, people don’t wanna play. People perceive them as trying to teach them something as opposed to letting them have fun, and explore, and really discover something cool. And so what has happened instead is, lessons are taught through games that are made just to be fun games, and the lessons are kind of slid in.

If we want to, and for me, if I was tasked, if I was given a big budget and told, “What you need to do is have the populace of our country all have sympathy and some degree of empathetic understanding for racist microaggressions.” If that was my task, what I would do is get a team together and say, “We’re gonna make this science fiction game, we’re just gonna focus on making the coolest game possible, and it puts the player character in as a minority species on this planet.” And you’re doing things like you do in science fiction, whatever the genre of science fiction you’re doing might be. But the whole thing comes from the place of a species that is, in that case, basically a racial minority, being sort of culturally penalized, and you just build that into an amazing fricking game. That is going to get the lesson through.

If you make the right game that’s a smash hit and people want to play, then people are gonna really get it and then they’re gonna identify with it. Because they’re gonna say, “Yeah, I was the Gorn character, and boy the Gorn had it hard.” Whatever that looks like. Suddenly it’s in there. You’ve got the sympathy, you’ve got the empathy, because you’re coming at it, not from this political, “Oh yeah, be a cow and see how bad cows have it.” That’s not gonna get you there. But if you take someone there through their own interests, their own desires, their own excitement, because we’re selfish, stupid, MFs here, right? We aren’t the brightest species in the world, so we can’t be taken there through these direct routes.

We have to be taken there, we have to be tricked into it, by being taken into something that we perceive as just fun for us. All about us. Not about somebody else’s agenda. But just out pushing our pleasure buttons, left, right, forward and backward. That’s the way to do it, and at the end of that game that everybody’s played and is a big smash hit, you’ve suddenly educated a bulk of the nation on these crucial things that have to do with the social welfare and equality for all. You can’t do it directly. You have to go in through the back door, and eventually we’ll get there. Right now, we’re just kind of seeing the clumsy first steps of, “Oh yeah, let’s do this social justice thing.” That most people are just gonna reject out of hand.

Jon:
Yeah, yeah I think that’s, that’s right, and I liked your example there of the sci-fi game. I guess the Gorn have had it pretty hard.

Dirk:
They’ve had it real hard, Jon, seriously right?

Jon:
So I mean, to sort of wrap up this conversation a little bit: the possibilities that we have with digital communication and technologies are only increasing, and ourselves as consumers of information, as social beings, as participants in society, all these aspects are subject now to our digital lives, the digital influence. For me, that’s a question that is worth exploring further. We obviously don’t have time to dig into that now, but I feel like there’s value there in evaluating how we’re changing as people, as a society, because of all these new abilities and technologies. Or, the abilities that technologies give us, rather. So, something we’ll have to explore a little bit further on the show.

Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to thedigitalife.com. That’s just one L in The Digital Life, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everyone, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. And if you’d like to follow us outside of the show, you can follow me on Twitter @jonfollett. That’s J-O-N-F-O-L-L-E-T-T, and of course, the whole show’s brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at goinvo.com. That’s G-O-I-N-V-O.com. Dirk?

Dirk:
You can follow me on Twitter @dknemeyer. That’s at D-K-N-E-M-E-Y-E-R and thanks so much for listening.

Jon:
So that’s it for episode 283 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett. And we’ll see you next time.

 


Jon Follett
@jonfollett

Jon is Principal of GoInvo and an internationally published author on the topics of user experience and information design. His most recent book, Designing for Emerging Technologies: UX for Genomics, Robotics and the Internet of Things, was published by O’Reilly Media.

Dirk Knemeyer
@dknemeyer

Dirk is a social futurist and a founder of GoInvo. He envisions new systems for organizational, social, and personal change, helping leaders to make radical transformations. Dirk is a frequent speaker who has shared his ideas at TEDx, Transhumanism+, and SXSW, along with keynotes in Europe and the US. He has been published in BusinessWeek and has participated on 15 boards spanning industries like healthcare, publishing, and education.

Credits

Co-Host & Producer

Jonathan Follett @jonfollett

Co-Host & Founder

Dirk Knemeyer @dknemeyer

Minister of Agit-Prop

Juhan Sonin @jsonin

Audio Engineer

Dave Nelson Lens Group Media

Technical Support

Eric Benoit @ebenoit

Opening Theme

Aiva.ai @aivatechnology

Closing Theme

Ian Dorsch @iandorsch
