Bull Session

Digital People

April 5, 2018

Episode Summary

What happens when we can’t distinguish between the real and the artificial? On The Digital Life this week, we chat about what happens when we’re able to create convincing representations of people, digitally.

At the Game Developer Conference last week, Epic Games showed off Siren, a digital character powered by the real-time motion capture of an actress on the stage. The character detail, shading, and lighting were of especially high quality, including realistic-looking facial expressions. While Siren isn’t quite capable of blurring the line between CGI and reality, the technology is definitely a big step in that direction.

The longer-term possibilities of technologies like this seem endless: With realistic digital rendering of people in virtual space, it will be possible to digitally capture yourself or your family to potentially interact with future generations. On the other hand, it will also be possible to make it appear as if people said things that they really didn’t, further contributing to the deluge of digital misinformation. Join us as we discuss.

Resources:
Epic Games shows off amazing real-time digital human with Siren demo

Jon:
Welcome to episode 252 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and cohost, Dirk Knemeyer.

Dirk:
Greetings, listeners.

Jon:
For our podcast this week, we’re going to chat about what happens when you can create convincing representations of people digitally. This is the promise of video games, of virtual reality, and of CGI in the movies: to create digital representations of people that are more or less convincing, such that you think they’re real. At the Game Developer Conference last week, there was an impressive demo by Epic Games showing off their technology called Siren. Siren is a digital personality driven by a set of motion capture devices attached to an actress. As the actress moves about the stage, as she changes her facial expressions, Siren does exactly what the actress does in real-time. That’s a significant advancement.
There were a number of companies that worked with Epic Games on this. CubicMotion, 3Lateral, Vicon, and Tencent were, with Epic Games, the five companies that pooled their technology to create this advancement, which relies on, as I said, this real-time motion capture. This is a big jump in quality. Especially, I think, in the facial expressions, it seemed like the motion capture was really able to effectively convey these facial expressions on the digital personality. There were improvements to the shading, to the reflection and refraction, that were all very impressive. The hype around Siren is that it’s blurring the line between CGI and reality. I think it’s a step in that direction, but you can still clearly tell it’s a CGI character. Dirk, you saw the video of the demo. What did you think?

Dirk:
Yeah, I think, like you said, it clearly is a CGI character, and it’s done really well. The uncanny valley aspect of it is really subtle. I’ve looked at it and tried to figure out why it is that this doesn’t read like a person, because it’s correct to a person in so many ways. I think it’s the eyes in particular. The way that the eyes behave isn’t quite there. They’re really good in a lot of ways, but they’re not quite right. Some of the physics around the hair is just not quite right. You can see that there’s an engine doing it based on a math model, but it’s not behaving precisely the way that it would in nature.
One of the things about the uncanny valley is that the closer something is to looking human while still not being correct, the more jarring it is, and the more uncomfortable we ultimately are. For all of the good that they’ve done, because they haven’t quite nailed it, in some ways it’s worse and weirder. But in other ways, it’s better too. I don’t know. I’m impressed with it. It’s good enough that I get to the point of saying this might be something I’d be interested in participating with. It’s cool.

Jon:
Yeah. My initial reaction was, “Wow, the video game industry is really sinking a lot of money into its technology.” Clearly, they have the finances to do so, and that’s going to really push the envelope for a lot of other industries that will license this technology and follow on.

Dirk:
In the digital games industry, VR, virtual reality, got super hot, I don’t know, three to five years ago. It got a lot of funding. Games companies were investing a lot of money, and most of the big VCs at this point have one or more investments in mixed reality. VR specifically was the hot thing some years ago; now it’s more broadly mixed reality, whether virtual, augmented, or whatever type. This has been on trend, with thought leaders in games kind of pushing it, but then the money people, in both games and the investment community, saying, “Hey, there’s something real here. We should pour a lot of bucks into it.” We’re starting to see some of the rewards of that, such as with this demo.

Jon:
Yeah. I think there are both short and long-term possibilities for this technology. In the short term, I can see this kind of technology advancing what we’re already seeing as the so-called golden age of television programming. Even right now, given the budgets that, say, a Netflix-distributed show is working with, they can create some pretty good stuff without this sort of rapid, motion capture, realistic-looking person technology. It’s starting in the gaming community and the gaming industry, but I think this is going to have tremendous positive upside, especially for the aforementioned Netflix or Amazon streaming or whatever, where there’s a need for content that can be produced inexpensively and then delivered to whatever audience is interested. It makes production costs come down significantly when you don’t have to sit there rendering. All of that was rendered in real-time. You’re not kerchunking away on your giant server farm for 24 hours to create this character. [crosstalk 00:06:53]

Dirk:
There was a couple-second lag, just to pick a nit, but yeah, it’s pretty much in real-time.

Jon:
Yeah, that’s a good point. I think movies and television are obviously the near-term beneficiaries of this technology, as well as games. Both casual and hardcore gamers are going to see this tech in their games in the near term. The long-term possibilities for this technology are also pretty significant. I think there’s any number of ways in which this could play out. This is all attached, of course, to the rise of technologies around virtual reality and mixed reality. Dirk, you and I have talked about this at length under the umbrella of smartware, right, as the type of technology that can reduce the need for people to travel?

Dirk:
Mm-hmm (affirmative). Mm-hmm (affirmative).

Jon:
You’re able to use smartware to interact with things virtually that you might otherwise need to travel for, just as one aspect of smartware. Dirk, when you look at this technology, are you saying, “Ah, yes, this definitely fits into the things we’ve been talking about. While not necessarily expected, this sort of fits into the paradigm we’re thinking about”?

Dirk:
Oh, for sure, for sure. It’s definitely a manifestation of smartware, both where it is now and where it is going. As I was looking at it and thinking about it and appreciating it, I was also trying to think about how this technology would manifest in a concrete way. One of the things that I was really homing in on was the user interface, because the woman who was giving the demo, as perhaps we would logically think should be so, was hooked up with sensors. The movements of the robot, for lack of a better word, mirrored her movements. She was moving and dancing and doing all of this different stuff. It makes sense on the surface. The problem is: is that how we want our entertainment to behave?
I remember one example when I was watching a Twitch stream, and the streamer went to play a virtual reality game. He played it for like 15 minutes, then sat down and said, “Oh, that was fun, but now I’m tired.” Moving your limbs and jumping and doing all of that stuff is very physically active. We generally interact with our machines in very passive ways, sitting in a chair, using interfaces that typically use only our hands and fingers. I suspect that the real power and application of these things is going to come from later-generation user interfaces that are more passive, that fit into the long tradition of our sitting with a book, sitting in front of a radio, sitting in front of a television, sitting in front of our computer, interacting with these different types of immersive entertainment media in passive ways. I think active use of those, unless we have major social shifts, is unlikely to prosper if it requires continual active interaction like that.

Jon:
Yeah, I think there’s probably going to be some branching going on here. You described the entertainment ideal, which is, as you said, sitting and being entertained. Then of course, in the case of the demo, there’s all of this active motion going on. There are a lot of uses, I think, that fall in between those two poles. For example, when we’re meeting people in person, we get an awful lot of information from their body language, which is almost completely lost when you’re doing virtual-type interactions, whether it’s … I’m using virtual broadly, right, to encompass-

Dirk:
Mm-hmm (affirmative). Mm-hmm (affirmative).

Jon:
… all the digital tech that we have. If there are ways that you can use motion capture to capture some of that body language and convey it to the other person, you get that much more ability to communicate in a meaningful way with them. The biggest problem, I think, for some remote teams, at least the ones that I’ve had exposure to, is the limited bandwidth in terms of conveying the totality of their message. Whether it’s a client and consultant relationship, or within the team, the subtleties really get lost. Even if you’re very experienced at it and can use a combination of communication media, whether that’s typing, video conference, or phone calls, it’s not the same as being there in person. I think with tech like this, you’re going to see those types of possibilities, which are not really passive and not really active, but more about capturing that subtle motion of the body that comprises body language. That’s just one thought.
Another thing that tickled my interest was this idea that Siren was captured from a real person. While it wasn’t the actress driving the demo whose data was captured to create this rendering, it was somebody’s data, which opens doors to, well, if you can capture people … This has been the dream of producers: you get this actor or actress, you capture their information, and then they will eternally be who they are at their highest earning potential. You think of Star Wars; they’re using at least some similar technologies to recreate the characters from the first trilogy. I can see directors really being interested in that. Further, I can think of, hey, wouldn’t it be great to capture my family members, so I can always get the advice of my uncle? I mean, granted, that will take a heck of a lot more than just capturing his facial expressions.

Dirk:
Yeah, that’s going to bring together a few different technologies.

Jon:
It is.

Dirk:
But no, it’s a great example though. Yeah.

Jon:
It’s this kind of thing that spins up the wheels of your designer’s imagination. Or even, hey, I’ve got Alexander Hamilton here in our virtual reality or mixed reality classroom, and the schoolkids can absorb some of that information from the character as they interact.

Dirk:
Yeah, that’s a little less interesting, because so much of it is projection and making things up. What would be more interesting is someone like Barack Obama, whom we could still capture and map in a certain way. You know that it’s a realistic representation of how that person behaved and was in real life, whereas with the Alexander Hamilton example, which has its own cool parts to it, there’s going to be a lot of authorial discretion there-

Jon:
No doubt.

Dirk:
… as opposed to sort of authentic experience. Yeah. Yeah.

Jon:
I get very excited about the future potential for the design of these things, because that’s part of the way my mind works, and we can see the start of this. I think it really does speak to the maturation of our digital lives and digital existence. We talked last week about our personal data, and this week about capturing potentially even more of that data. The potential for that data to be misused is equal to these ideas that we have for more legitimate usage. You can capture Barack Obama for an educational seminar in a classroom, but you could also use his likeness to have him say ridiculous things that he never said. That’s another latent possibility with this technology. It’s interesting to see that we’re developing all kinds of ways of digitizing the self, obviously pushing through a number of industries. Then in tandem with that is the mixing of that technology with what human beings always do, which includes some things that might be to others’ detriment, might be deceptive or false, or fake, which is the favorite adjective right now. Dirk, does any of that worry you?

Dirk:
Sure. Yes.

Jon:
In-

Dirk:
In varying ways and in varying degrees.

Jon:
In one word.

Dirk:
Yeah. I think you’re putting your finger on what are going to be some of the important questions and important themes to come out of where all of this is going. It’s hard, in a blanket and generic way, to really get deep on it, but I think as we start to see how these things manifest in more specific ways, these are the kinds of things for us to talk about.

Jon:
Yeah. I think capturing the digital representation of yourself, the physical representation … Similarly, we were speaking of capturing all our Facebook data, which gives more of a skeletal outline of our personality, and then even going into the healthcare realm, capturing things like your genomics, for instance. These are all data, but they’re different aspects of digitizing ourselves and creating this digital world. I think we need to start having more conversations about how that happens and how that data can be used, et cetera. While I see tremendously exciting new technologies like Siren, I’m also always slightly paranoid about how all of that’s going to play out.

Dirk:
Amen.

Jon:
Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real-time. Just head over to thedigitalife.com, that’s just one L in The Digital Life, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. Of course, if you want to follow us outside of the show, you can follow me on Twitter, @jonfollett. That’s J-O-N-F-O-L-L-E-T-T. The whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at goinvo.com. That is G-O-I-N-V-O, dot com. Dirk?

Dirk:
You can follow me on Twitter, @dknemeyer. That’s @-D-K-N-E-M-E-Y-E-R. Thanks so much for listening.

Jon:
That’s it for episode 252 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.
