
Bull Session

The Future of Creative Work

February 28, 2019          


Jon:
Welcome to episode 291 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host Dirk Knemeyer.

Dirk:
Greetings, listeners.

Jon:
This week, we’re going to be talking about Creative Next, which is our new show debuting on February the 19th, and you can find out more about the Creative Next project at www.creativenext.org, and we’d love it if you came and checked out our new podcast.

Jon:
Dirk, what is the Creative Next show all about, and how is it a continuation of what we’ve started on The Digital Life?

Dirk:
Yeah, so on Creative Next, what we say we are doing is future-proofing creatives. That is specifically around the encroachment, a word with a negative connotation I’m not intending, of automation brought on most directly by artificial intelligence, but also by other smartware technologies we’ve talked about on the show, like the internet of things, 3D printing, stuff like that. The reality is, our frame for automation is one that is about the factory floor. It’s about what we used to call blue-collar workers being displaced. It wasn’t about us, the people who would listen to this show, the people who are involved in creative stuff.

Well, the reality is that automation is now making its way into our space. It has been, in fact, for a long time; we just haven’t used the language of automation. We have a design firm here at GoInvo, and for many years, among the core tools for our team has been the Adobe Creative Suite, and that is software loaded with automation, software that has drastically automated what design means over the last 30 years.

This show is about the fact that automation is coming more quickly, in a way that is woven more into the everyday work lives of me, of you, of people like us, of all kinds of people. This is impacting researchers, writers, artists, designers, engineers, and entrepreneurs, among others. It’s going to change our work. It’s going to change our jobs. Tasks are going to fall to automation first. Some of that automation will simply take those tasks over; more commonly, it will be augmenting, so there will be tools that help us perform tasks more quickly, giving us more power.

Again, that goes back to the Adobe Creative Suite example. Those tools will in turn change what our jobs look like. They’ll change the skills required, the tasks required, and for folks to be ahead of that, to have it be a tool that is improving our careers, improving our chances, giving us more longevity and more ability to really thrive, not just survive, we’ve got to be ready for that. We’ve got to be knowledgeable, we’ve got to be thinking, we’ve got to be learning, and Creative Next is about exploring all of that.

Jon:
Yeah, just to expand a little bit on the idea of automation within the creative fields: you give the example of the Adobe Creative Suite, which, in and of itself, is automation. Even the first version of Photoshop or InDesign was automation. If you look under the hood, there is an awful lot that Photoshop and InDesign are doing that used to be done by hand, right?

Dirk:
That’s right.

Jon:
They used to be done in a much different way. One of my first design internships involved using a paste-up board, using wax, right? So we would –

Dirk:
What’s a paste-up board? What is this wax you speak of, Jon?

Jon:
We would get the columns of text, and I would run them through this machine that put a little coating of wax on them, and I would arrange the layouts on the board. That board would get photographed, and that photograph would eventually find its way to a plate, which would be printed on the press, and that’s how the book was eventually assembled. That was my earliest exposure to the graphic design industry. There were a number of designers on staff. I, of course, was just an intern, basically a summer employee, and these designers were going to learn about this newfangled software called Quark. They were being sent to classes.

Dirk:
What’s Quark?

Jon:
I think Quark still exists.

Dirk:
Barely.

Jon:
The competitor and precursor to InDesign, right?

Dirk:
I researched it in the context of Creative Next; it exists, but barely.

Jon:
Yeah, so there you are. That is the way automation comes to an industry. Now we don’t even think twice about it. No one’s seen wax layout paste-up boards in at least 20 years, right?

Dirk:
People 30 and under don’t know what those things are.

Jon:
And that’s just one example of all the miraculous stuff that the Creative Suite automates for you, without you even knowing it, right? That’s happening on the digital side, too. All of these issues, we’re going to dig into on Creative Next, which brings me to the second talking point today. Why are we doing this show? What’s the impetus for us to do it? What inspired us? We’ve been doing The Digital Life since 2010, so I guess it’s time for a change.

Dirk:
Wow. It’s almost a decade. Yeah. I’m interested to hear your answer, but for me, it’s something that needed to be done. I saw it happening in the bigger world, you know, in projects like The Next Rembrandt, that experimental effort where a machine made an original painting in the style of Rembrandt. These are things we’ve talked about on this show, so our listeners are familiar with some of it, but there was also the story about the AI that submitted an essay to an essay-writing contest and finished in the top half of competitors. Stories like that said, “Wait a minute, something’s coming with this technology,” and as we looked at it more closely, I’ll speak for both of us here, you can correct me if need be, the more we researched, the more we went from being agog, thinking along the lines of the sci-fi stuff you hear from the media about AI, to really understanding that a big change is coming, but it’s not what the media is talking about.

It’s not what we’re reading about and learning; it’s different. It’s more subtle. It’s more integrated into our lives, and it has a more direct and real impact on our work lives in particular, in the short term and in the years ahead. People weren’t talking about that. It was still stuff down the artificial general intelligence path, or stuff about goofy robots. I really felt like people were looking in the wrong place, and so for me this was something people need to be aware of, a story that needs to be told, and it will help a lot of people, because we’re understanding things that are going to really impact the world of work in the years ahead, and it’s going to surprise a lot of people.

The people who aren’t surprised, the people who are thriving with it, us, hopefully our listeners, and hopefully a much broader audience than that, are going to be at an advantage, are going to be protected, are going to be, in the language we’re using on the show, future-proofed. For me, the discovery of it surprised me, the learning of it enlightened me, and I found a calling: this was something that needed to be done to be of service to people I consider my peers, my friends, my colleagues, people I’m sharing community and history with.

Jon:
Yeah, that’s a great way to sum it up. For me, I’m very interested in the patterns of change over time in relation to the economy, and emerging technologies in particular, and how people manage their work across these transitions. We’ve gone through this a number of times in the past. As human beings, we’ve moved from hunting and gathering to agriculture, from agriculture to industry, and now from industry to information, right? Those are the drivers of our economy. Each of those transitions takes a long time, which may not be something that we’re accustomed to discussing.

We are currently experiencing this long transition from a more industrialized economy into more of an information economy, and understanding those changes really sparks a lot of interest for me. I’m interested in this kind of transformation. For me, Creative Next is a podcast, but it’s also a much more focused research project in a lot of ways. We’re going to be talking to experts on AI, on design, on technology, similar to The Digital Life in that way, but exploring this thesis around what’s next for the creative economy. So that’s another thing that excites me about the show: the focus, and the research aspect of it as well.

Dirk:
Extending those differences a little, Jon, I mean for I don’t know, six years or so now, I’ve called myself a social futurist professionally. That’s the term that I’ve used, and I still use it, and I still think it’s correct. But, I have found myself weaving in the word journalist. I’ve never thought of myself as a journalist. But, the nature of this project, the work we’re doing, the way we’re doing it, my peers have been journalists, and I’ve been doing journalism work, and it’s a strange skin to wear, but I’m wearing it. It’s kind of cool. I’ve never felt that way with The Digital Life, certainly. I mean, we’re definitely bringing a level of research, of rigor, of real deep attention to these topics.

Jon:
Yeah, and I’m excited about that for sure. Dirk, what’s the first season going to be about? What’s the depth and breadth of the first season?

Dirk:
Yeah, so each season is going to cover a wide topic, and all together we think those topics build a story around AI automation and helping to future-proof creatives. With that in mind, season one is about learning. When we settled on learning, we started to figure out: what is it we want to say about learning, and what does this show need to be about? We start at a high level. We start the season with a philosophical look, a historical look, at learning and at the relationship between humanity and technology. From there, we pivot into understanding terms, understanding what we’re talking about, going deep into artificial intelligence and into other smartware technologies, and doing the learning for ourselves about the context that we’re functioning in.

From there, we pivot to looking at how machines learn, and then specifically how learning machines have been participating in, and influencing, games. We get into chess. You know, chess was the first of the major strategy games where AI defeated the best humans, now over 20 years ago. That’s given us 20 years to study what happens once a machine dominates a game: what happens to that game, and what happens to the people who play and compete in it?

We explore that, and then we move into poker, which is more recent. We look at how humans were able to build a machine that beat the best players, but then ask what that has done to the poker community just over the last two years. What impact has it had on strategy, on play? How are poker pros using machines? That was pretty cool, too. That gets us through about half of the season, and then we move into learning in the most direct way. There’s a series of five shows, which I think are really strong, where we start by looking at how learning is functioning in the corporate world, then talk with a high school principal about how learning is functioning in high school, then how learning is functioning in university, then how learning is functioning for young adults from a student perspective, how they are learning both in and out of the university, and then finally online learning and lifelong learning, and how those things are manifesting.

We then finish off by taking a look at where AI is headed, where automation is headed: in the years ahead, what are some things that will be changing, and how do we contextualize those for future seasons? Maybe that’s a long-winded overview, but that’s season one. It’s about learning, and that’s the journey that we’ve taken with it.

Jon:
Yeah, that’s a great summary. Could you give us a hint about a couple of the guests we’ll be hearing from in season one?

Dirk:
Sure. There are a couple of guests that we’re familiar with from The Digital Life; really, our discovery of this project, and our research around it, started with some of the work that we’ve done here. For example, Noam Brown, who is one of the co-creators of Libratus, the AI that defeated the poker pros, is joining us for an episode about that. And the very first episode is with Carie Little Hersh, who we’ve had here on The Digital Life. She’s an anthropologist, we get a lot of wonderful insights from Carie, and we’re thrilled to have her back for Creative Next.

But then there’s a lot of new blood, a lot of people who will definitely be new to our listeners and new to our shows. Chris Chabris, a fantastically smart author, professor, and columnist for the Wall Street Journal, talks with us about chess. Tobi Bisetti, a senior machine learning engineer, is in episode two, and she really gives us a good framework for what we’re talking about here when we’re talking about AI and machine learning. The real stuff, not the sci-fi stuff, the nuts and bolts. All in all, we have 12 guests in this first season, and I think it’s a fantastic crew.

Jon:
We’ve moved to a season rhythm now, as opposed to straight-up episodes, so each theme will have a season associated with it, and there are six seasons that we’ve got planned, which will bring us through this year and next. Dirk, what are the subsequent seasons going to be about?

Dirk:
Yeah, so learning is a little more general, though certainly in learning we’re getting into ways that automation will directly impact creatives. Specifically, during those 12 episodes we’re going to be talking about how research science is impacted, for sure, as well as education. But once we get past learning, we’re going to narrow in on application. Season two we’re calling communication, and that’s going to look at things like writing, journalism, marketing, things that have to do with the automation of communication in a bunch of different ways. Season three is going to be about form, so art and design, broadly. We’re going to be looking at music, at painting, at sculpture, as well as at design and the things that our listeners are maybe more likely to be doing, because these things have a reciprocal relationship with what’s happening in art and design. Form is going to be focused on those things.

Function then is going to pivot in season four to engineering. How we make things work, and how we will automate the way that we make things work. Then season five is going to be on leadership, and that’s going to come from a couple of different directions. One is about leadership and management, and how those things will be automated. The other part of leadership is how leaders can implement automation solutions, at scales small and large, into their organizations, whether their organizations are small or large, and really understanding what it is going to look like to be shifting, and to be leading the shift into automated workplaces.

Season six is going to be called, “You.” It’s going to look at our lives in the most direct way, regardless of whether you’re an engineer, or an artist, or a journalist, or a research scientist. How will this impact you, and how can you make the most of it? How can AI automation not be something that’s a little scary, a little uncertain, that feels destabilizing, but instead something that’s empowering, that really is a tool for good in your life, in the life of the people who count on you, and count on your ability to make an income? But also good for the world at large, and how you and those tools could be a catalyst for that. That’s our plan.

Jon:
Awesome. If you’d like to learn more about Creative Next, go to www.creativenext.org. You’ll also be able to find us on Twitter, Facebook and Instagram at GoCreativeNext, so we encourage you to get in touch with us there, and to check out the first season of Creative Next, on learning. We’ll be excited to have you along for this next adventure.

Listeners, remember that while you’re listening to this show, you can follow along with the things that we’re mentioning here in real time, just head over to thedigitalife.com, that’s just one L in the digital life, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everyone, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM and Google Play, and if you’d like to follow us outside of the show, you can follow me on Twitter @jonfollett, that’s J-O-N, F-O-L-L-E-T-T, and of course the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at goinvo.com. That’s G-O-I-N-V-O dot com. Dirk?

Dirk:
First, just a reminder that The Digital Life is going on hiatus, but it may be back someday. We’ve gone on hiatus a couple times before, and I don’t know. We wanted to reach episode 300 and this all happened too quickly, so we may come back yet again. But for now, please do check us out at creativenext.org. If you want to get in touch with me, you can follow me on Twitter @dknemeyer, that’s D-K-N-E-M-E-Y-E-R, and thank you so much for listening all these years.

Jon:
That’s it for episode 291 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.

 


Jon Follett
@jonfollett

Jon is Principal of GoInvo and an internationally published author on the topics of user experience and information design. His most recent book, Designing for Emerging Technologies: UX for Genomics, Robotics and the Internet of Things, was published by O’Reilly Media.

Dirk Knemeyer
@dknemeyer

Dirk is a social futurist and a founder of GoInvo. He envisions new systems for organizational, social, and personal change, helping leaders to make radical transformation. Dirk is a frequent speaker who has shared his ideas at TEDx, Transhumanism+ and SXSW, along with keynotes in Europe and the US. He has been published in Business Week and has served on 15 boards spanning industries like healthcare, publishing, and education.

Credits

Co-Host & Producer

Jonathan Follett @jonfollett

Co-Host & Founder

Dirk Knemeyer @dknemeyer

Minister of Agit-Prop

Juhan Sonin @jsonin

Audio Engineer

Dave Nelson Lens Group Media

Technical Support

Eric Benoit @ebenoit

Opening Theme

Aiva.ai @aivatechnology

Closing Theme

Ian Dorsch @iandorsch

Bull Session

Business Models for the Future of Education

February 20, 2019          

Episode Summary

Jon: Welcome to episode 290 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host, Dirk Knemeyer. Dirk: Greetings listeners. Jon: This week, we’ll be talking about the future of education, and the business models that drive it. The impetus for this particular episode was a news item that I spotted indicating that an unaccredited but still popular online school called Lambda School, which trains engineers in software development, had received 30 million dollars in their Series B funding round. And Lambda School is one among many of these code camp-style schools that enable people to upskill themselves, to move from whatever their current careers may be into, hopefully, a more lucrative field for them: coding software, for which there is endless need right now. There aren’t enough engineers, software developers really, to fill all the job openings. So, in this particular example, I see some really interesting indicators of where education and training may be going as emerging technologies more and more become part of our innovation economy. So you have all these fantastic technologies, and you don’t have enough people to fill the jobs that they require, because they require a different set of skills than maybe what some universities, colleges, schools might be preparing their students for. They’re just not meeting the demand. Before we get into that broader topic, there’s a second part of this story that I find really fascinating, which is the way in which you can pay for this education from Lambda School. It’s a 30-week software engineering course, and you can either pay 20 grand, which is your tuition.
So you can pay that as you would maybe if you attended a university, or you can do this thing called an ISA, which stands for Income Share Agreement, and it essentially means that you will pay the school 17% of your salary from the job that you get after you complete your coursework, and that’s for a period of two years. It caps out at 30 grand, so you’re not going to pay more than 30 grand for your education. And if you don’t get a job within five years, you don’t owe them anything. So in this way, Lambda School is attaching its success to training you for these skills, taking on some of the risk. So it’s saying, “These skills we know are in demand, so for students who might not otherwise be able to afford this type of education, we’re going to make it possible for you.” And I thought that was a really fascinating model, and I don’t know how I feel about it. In one way, it kind of feels like economically that might work a lot better for people than carrying a load of debt, and at the same time, signing over a percentage of your salary seems a little funny. Dirk, what was your impression of this ISA business model? Dirk: I think the business model was interesting, and when you sort of break it between the business model and then Lambda specifically: finding creative ways to allow people to educate themselves, in order to both provide for themselves and the people they care about, and to fill opportunities in the workforce, is necessary and important. Lambda is not a pioneer here; models like this have existed before. But it’s an interesting model, and an example in this case, specifically with Lambda, of trying to innovate beyond, “Here’s this giant pain pill that you have to take in order to get the education.” There was a school founded by someone in the design field, Jared Spool. I think it was originally called The Unicorn Institute. I think now it’s called something different, but at that school the tuition is massive.
It’s in the many tens of thousands of dollars, and that makes it difficult to commit to that, and to make it happen. And that school may be able to exist as a small entity for a small number of students, but it will never scale and have a bigger and broader impact. A model like Lambda’s might. Now, talking about Lambda specifically, the reason that Lambda is able to do this is that it’s an online-only course load; their infrastructure is online course delivery infrastructure, plus some of the teachers’ time. They advertise, “Oh, you can Slack with your instructor.” So the Slack with the instructor is the only thing at the end of the day that really costs them ongoing money once they have the platform made, because if we think about it, there is all of this freely available online education out there already. The type of education they’re giving is worth very little in the marketplace. It’s generally free. There are other things like MasterClass, which again you don’t get teacher interaction with, but for under $100 a year, or something sort of obscenely affordable, you can get access to this trove of classes from, like, the best people in all of these different disciplines to teach yourself. So Lambda’s offering something that is a commodity, that in the market is generally seen as something to be given away, or to be acquired at a very small price, and they’re charging tens of thousands of dollars for it. They set their $20,000 price point as an anchor, in order to sort of make you sign up for the more attractive model of paying them even more, significantly more, downstream. So to me, in that way, I don’t find it particularly altruistic, I find it particularly capitalistic. What they have to pay for is their instructors, to teach this online course and then Slack with the students who reach out in some limited way. They’re going to be grossly profitable doing this.
Good, creative, interesting, has a chance at scale to make an impact. All good, but I definitely see it as self-serving motivation more than serving the public, because of the price model they have. And I’m sure that’s why they’re getting so much investment and so much attention: it’s because there’s just the opportunity to make gross amounts of money with it, which is generally what Silicon Valley’s all about. Jon: Yeah. I think there’s probably … having not taken a Lambda course, I’m sure there’s an array of things … I do know they have some help in finding jobs, for instance. I’m sure there are other elements that you would include in that tuition cost, aside from just the basic instruction and Slacking with the instructor. All that being said, there is no way any venture capitalist would put money into it if they couldn’t double their money on the way out. So that’s a point well made, I think. Dirk: It’s a near-free gamble for them on this model, but if you make over $50,000, you’re paying a percentage, and it’s one that’s really structured to maximize the money for them. It’s just really smart, from a “how do we profit as much as possible from this” perspective. I think there are other ways that the same thing, as a nonprofit or in some different structure, could be offered, and instead of $20,000 as a base, it could be $5000, it could be $500 as a base, potentially. It just scales the magnitude beyond what’s required. And again, we’re in capitalism, it’s a perfectly acceptable and fine thing that’s consistent with how the system works, but for me … There’s a lot of fawning over Lambda, people are really impressed, and I’m a lot less impressed, because to me it’s more transparent on the profit side. Jon: Yeah.
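As a rough illustration of the payment terms discussed above, here is a small sketch in Python. It only uses the figures mentioned in this conversation (17% of salary, two years, a $30,000 cap, a $50,000 salary threshold below which nothing is owed); the actual Lambda School contract terms may differ, and the function name and defaults are our own for illustration.

```python
def isa_owed(annual_salary, share=0.17, months=24, cap=30_000, floor=50_000):
    """Estimate the total owed under the ISA terms described in the episode.

    Nothing is owed below the salary threshold; otherwise the graduate pays
    a fixed share of salary for a fixed number of months, up to a cap.
    """
    if annual_salary < floor:
        return 0.0
    total = annual_salary * share * (months / 12)
    return min(total, cap)

# A $60k graduate would owe 0.17 * 60,000 * 2 = $20,400, close to the
# $20,000 upfront tuition; higher salaries hit the $30,000 cap.
print(isa_owed(60_000))   # 20400.0
print(isa_owed(100_000))  # 30000 (capped; 17% of 100k over 2 years is 34k)
print(isa_owed(40_000))   # 0.0 (below the threshold)
```

The arithmetic makes Dirk’s point concrete: at almost any salary above the threshold, the ISA collects at least as much as the upfront tuition, which is why the $20,000 price works as an anchor.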
I think it’s worth also considering this particular business model in the context of the higher education market, and then also more broadly as we anticipate technologies will continue to automate and change our economy, and people will need to upskill and reskill themselves throughout their careers. What are some of the ways that people can do that effectively, and move on from whatever it is they’re doing, where there might be a bit of a crunch and jobs are no longer available, and move on to the next thing? And I think in a lot of ways, this particular example with Lambda, and code schools generally speaking, is sort of a precursor of what we can expect in the future. So, business models that are geared towards pushing people in the direction of a technology and providing them with some skill basis to work from. And I think what that neglects, or what that particular type of educational system will leave out, I think, is all the benefits that you would get from the polar opposite, which would be the more liberal arts education focused on, whether it be, writing, reading, understanding. Everything from science and literature, getting sort of a broad survey, as opposed to very specific, job-specific skills that you can use in the marketplace immediately. And I don’t know whether these two models will come crashing into each other, but it seems to me like we have these competing entities of very quickly moving technologies, university systems which are extremely expensive, and then the quest to find meaningful and ongoing work, which is only going to change even further as more technologies take shape. Dirk, when you think about a world where continuous education is going to be a prerequisite for being able to compete, what do you see? How do you see the traditional university model and these more technical-type schools in emerging technology? How does that all come together? Or are there even other …?
I’m sure there are other ways that we could approach this realm of education as well. Dirk: In terms of technology and automation changing the skills required to do work, for people who are already working, I think it’s going to be more integrated into life. I think it’s going to be less of, “I’m going to attend this program.” It’s not going to be this thing, Lambda School, or General Assembly, or whatever the case. I think it’s going to be more woven in and integrated just into how we are online, and how we’re already going through our things. I think it will shift down more to a feature, a product level, as opposed to a company level, that these things will sort of manifest. Not just from a video perspective, but more of, like, the Lynda.com check-in and check-out model, as opposed to the, “Here’s this big place that I’m going to make this big investment in.” Jon: That’s interesting. Yeah. I think it’s hard for me to reconcile the need for continuous learning as a separate piece, because in a lot of ways I feel like when I was at university, when I went to college, I in some ways learned how I should go about learning, like what things worked for me, what things didn’t, and that’s how I apply it to learning new skills. So at university I had the opportunity to learn about lots of things that I will probably not use in my everyday life. Whether it’s Shakespeare, or poetry, or writing short stories, or whatever, but I draw on all that as I learn new skills, and it gives me perspective. So I do feel like there’s this need for constant education, and then also a need for a really strong base from which to work.
It’s a huge problem already, and I think it’s worthy of our attention nationally, because we can’t have students who are in perpetual debt, but at the same time we can’t have education that’s completely contingent on you working to fund it in some … basically a revenue-sharing agreement. I feel like all this is headed for an interesting collision course, and that’s of course where innovation happens, but it’s a struggle for me, because I know what I took away from university, and that being so valuable for how I learn today, and at the same time I know the price tag of it, and the price tag today is huge, whereas something like Lambda School seems almost … It’s extremely affordable in comparison. You’re not talking 200 grand, you’re talking 20. So I can see the appeal there. And obviously this is a topic that we’ll be exploring more as we dig into the future of education. Dirk: Yeah. It’s also unclear to me, and I don’t think either one of us is qualified to answer this, but it’s unclear to me that it’s an apples-to-apples comparison of a Lambda education to the sort of institution that you’re slotting in at $200,000. I mean, right away, Lambda’s online only and the other isn’t. Jon: Sure. Dirk: I don’t know that they even belong in the same category, frankly, although I don’t think either of us is deep enough into sort of the Lambda product to say one way or the other with any confidence. Jon: Sure. What I can say with confidence is I did receive a terrific education at university, which now seems like a very expensive investment, just based on today’s price tags. It makes me concerned, for sure. So I’d like to make a little announcement about what we’re doing here at The Digital Life. We’re transforming into sort of the next iteration, called Creative Next. Creative Next is about future-proofing designers, engineers, writers, researchers and entrepreneurs to prepare for collaboration with smart machines, enabling us to transform our jobs and improve our lives.
Each episode of the Creative Next podcast will introduce you to a compelling innovator, who’s going to offer a new perspective on critical issues related to our creative futures. The show, Creative Next, will be presented across six seasons, and our first season, on learning, will be debuting on February 19th. So we encourage you to check out the next iteration, the next evolution of The Digital Life, which is Creative Next, and you can check out a sample episode of the show at CreativeNext.org. Listeners, remember that when you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to TheDigitaLife.com. That’s just one L in the digital life, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everyone, so it’s a rich information resource to take advantage of while you’re listening, or afterwards if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM and Google Play, and if you’d like to follow us outside of the show, you can follow me on Twitter @jonfollett. That’s J-O-N F-O-L-L-E-T-T, and of course the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at GoInvo.com. That’s G-O-I-N-V-O dot com. Dirk? Dirk: You can follow me on Twitter @dknemeyer. That’s D-K-N-E-M-E-Y-E-R, and thanks so much for listening. Jon: That’s it for episode 290 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett and we’ll see you next time.

Bull Session

Transformation

January 11, 2019          

Episode Summary

Jon: Welcome to episode 289 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host, Dirk Knemeyer. Dirk: Greetings, listeners. Jon: This week we’ll be talking about emerging technology, the transformation it brings, and the fear of change that comes with it. In particular, I’m thinking of a news item that I saw the other week about attacks on driverless cars in Arizona, specifically in a city called Chandler, which is near Phoenix, and the attacks were on the Waymo vehicles; Waymo is the Google spinoff for driverless cars. Dirk: By attacks, Jon, are these cyber attacks? Is there a DDoS attack coming in? What’s going on with these cars? Jon: No, these are strictly … These are not cyber. These are strictly offline. This is like slashing of tires, throwing rocks, yelling … Dirk: Yelling. Jon: Yeah. Dirk: At the driverless car. Jon: Yeah. Threatening with firearms. So this is strictly analog. The attacks are of the old-school variety. So, there were 21 of these attacks reported over the past couple of years, and like I said, there’s a variety of them, so I assume threatening gestures and yelling is one thing. Showing firearms is a much, much different level. Dirk: For sure. Jon: And of course throwing rocks or slashing things is clearly, clearly a violent attack. So far, none of this has really entered the legal system. Waymo in this case has not been pursuing these attacks as criminal mischief, or whatever they would qualify as, in an attempt to sort of keep the peace and not draw the kind of attention that a legal proceeding would definitely bring: more reporters, more attention, and things like that.
Let’s look at this from a couple of different angles, because there’s something interesting going on in Chandler, and I think it’s an interesting microcosm of what is slowly starting to take shape in the US: you have this advanced technology, the driverless cars, fundamentally threatening huge, mammoth change if the realization of driverless cars really comes into being. Whether that’s all going to get worked out, and along what timelines, I don’t know. There’s policy, there’s insurance, there’s all kinds of ethical questions. In fact, there are technological questions that still need to be answered. So this world of driverless cars may be a long way away, or at least decades away. But for the citizens of Chandler, this is the everyday reality. I love the William Gibson quote that “The future is here. It’s just unevenly distributed,” and right now it’s distributed right on top of Chandler, Arizona. So, that being said, I could really see how this could be viewed as, quote, an “invasion,” right? Because you have this … If you’re a … I don’t know, a driver of any kind, and that’s … Dirk: Professionally speaking. Jon: Professionally speaking. Dirk: Yeah. Jon: This is a threat to your way of life, potentially. So if you’re a taxi driver, a truck driver, a delivery driver, a UPS driver, any of these things. Dirk: Bus driver. Jon: Bus driver, right. Any of these things, this is potentially a huge problem for you, because it replaces something that perhaps you’ve been doing for your entire career with a machine, and even worse, they’re testing it right on your streets. So you’re at the cutting edge. So what’s your response? I mean, sort of, fear transforms into anger, transforms into chucking a rock at a Waymo vehicle, I think.
So, if we look at this as an example of the kinds of reactions that people will have to AI generally speaking … So, clearly we can see the thread of driverless cars sort of leading to a market disruption, especially in the US, where we love, love, love our cars and our streets and our driving. We are a car culture unlike any the world has ever seen. The Ford production line sort of started here. We have a driver’s culture, very much, in the US. There are other countries that have it as well, but we’re … definitely among the top. Dirk: Sure. Jon: But let’s range a little farther and think about all the other industries that AI will start poking its technological nose into, and you can begin … Dirk: Already is. Jon: Yeah, and you can begin to see one kind of reaction, one kind of cultural rebellion against this unrelenting technological change that we’re facing now. I think it can be frightening. I think it will be frightening to many. So, I don’t know what all the takeaways are from this. Dirk, what was your reaction when you read this article? I thought it was completely fascinating. Dirk: It was fascinating. A few different things. So, one, another factor that bears noting is that there was an incident where a driverless car killed a pedestrian in Tempe, Arizona. So, that was relatively local to Chandler, so whereas that story for the rest of us popped up in the national news, we read it, expressed some reaction to it and moved on with our lives, in Arizona this is real people. This was a local story. It had a name, a background attached to it, affiliations with local organizations that other people had affiliations with. So I think there’s a non-trivial impact of that on people’s attitudes in Chandler.
Absolutely there’s the socioeconomic fear aspect, the encroaching on future jobs, but there’s also a “these machines are killing our people” aspect that I think is really contributing to the psychology and the passion, to varying degrees. This really took me back to the Luddites. We use the term Luddite without really knowing or understanding where it came from, but the term comes from an economic workers’ movement in the 1810s, so about 200 years ago. At that time it was mainly in textiles that automated machines, and companies with these machines, were displacing skilled workers, and the reaction to that was for these skilled workers to form groups and be disruptive. We remember, sort of historically, the top layer: that they were off breaking machines. They did break machines, that’s true. They also assaulted, and in some cases killed, business owners who had the companies that were doing the cheaper textile work and replacing their jobs. The Luddite movement was so significant, and it was overlapping with the Napoleonic Wars, that at one point the English government had more soldiers dealing with domestic Luddite disturbances than they had dealing with Napoleon and the French army. So, the scale of it is staggering. What we have happening now in Chandler, Arizona is … Let’s call it a minor nuisance, for lack of anything better. At the point at which the US Army is having to deploy people en masse, then we’ll be dealing with something that is socially at a level similar to the Luddites 200 years ago. So, of course the story is disturbing, and people behaving in such base and ultimately self-destructive ways, slashing tires and throwing rocks … You don’t feel good about that, but it certainly is nothing compared to a very similar context 200 years ago, and the sort of very organized, much larger-scale reaction to approximately similar encroachments. Jon: Based on that, do you think that we’re in for increasing unrest around the implementation of AI?
Is this sort of an inevitable clash, or will it come in fits and starts? What can we anticipate? Can we not anticipate? Are there ways that we can mitigate this transformation enough that there is a slow intake? So you have your driverless car lanes, and you have your side of the highway where there are people driving, and forever, or at least until whatever generation it is that thinks they don’t need to drive cars anymore, those two lanes. Dirk: It’s not gonna be driver choice that drives it, it’s gonna be money, and how you have everyone having the correct technology in order to safely deploy into a single system. The thing that’s gonna hold us back isn’t that Bob wants to hit the road on his Harley. The thing that’s gonna hold us back is people can’t afford to get the driverless car to participate in the grid with everyone else, right? Jon: Sure. I mean, people will hold on to their cars in New England for a while, and then if you’re out in warmer climes, you can hold onto your car for decades, right? So, the infrastructure, or that level of adoption, is gonna take a while, not to mention simply the pricing question as well. Dirk: Absolutely. Jon: But yeah, I wonder, given that this is an indication of things to come, if sort of slow-rolling emerging technologies in a way that is a little more cautious might be an inevitable policy, right? Dirk: No, that’s not gonna … I mean, look. The market is gonna drive it to go as fast as it possibly can, as long as there’s some capitalist out there who can get a new vacation home or a new yacht. I mean, it’s gonna go as fast as that person chooses that it does. The difference now compared to the 1810s is that much of it is virtual instead of physical. The Luddites are remembered for destroying machines, but they only destroyed machines because the machines were there. It was a physical thing that they could act upon. People are acting upon the Waymo cars in Chandler, Arizona because they’re there. It’s a physical thing.
If you want to act on Facebook, and a lot of people are very mad at Facebook, what can you do? You can uninstall it and tweet that you uninstalled it as long as you haven’t raged at Twitter already as well and then you’re on Mastodon or something else that nobody follows.So, in the virtual world, there’s nothing to act upon. You can uninstall. You can not buy the stuff, but other people are going to buy the stuff unless everyone’s turning against it, in which case that service will go away. Other services will come to replace it, and a lot of the AI driven change over the next decade certainly will not be as physical. It will be more virtual. It will be things that are happening in systems where there’s nothing to destroy, there’s nothing to attack. Yeah, you can take your laptop and smash it on the ground. Congratulations, you’re out $2,000 or $500 or whatever the cost of your laptop is. There isn’t this external, corporate owned, physical thing that we can lash out against. I mean, can we go to their corporate headquarters and start throwing rocks through their windows? Yeah, but that’s the fastest path to jail you can possibly imagine. I don’t know. So from my perspective, it’s going to be a question of where are there opportunities to sort of physically act out and against things? That’s where this will show up more. So, companies that are more virtual … It just won’t be explicit because people can’t really do anything, but the fact is that these technologies will be disrupting older industries. The people in those industries could … It’s not like jobs are going away. There’s other things that they could be doing and retraining for, but that’s not what people want to hear. People want to continue doing what they were doing, what they perceived as safe and part of their identity, and those folks are going to continually be frustrated and discouraged over the next decade as AI and automation are encroaching on our world.Jon: Yeah, that makes a lot of sense. 
So, I think in the spirit of transformation and the new year, of course, 2019, we have some big news to share with all of our listeners: we’re transforming as well. We’re changing as well, and our next adventure in podcasting is gonna be called Creative Next. It’s about looking forward to AI and automation and thinking about adaptive strategies, ways that we might future-proof design, engineering, writing, research, business, from all these sort of creative perspectives, and really prepare for collaboration with smart machines. So, sort of a look at a way of positively transforming alongside these new AI tools that will be coming. So, we’re going to do this Creative Next podcast in a slightly different fashion. It’s going to be interview based, so each episode we’re gonna be talking to an innovator about a critical issue related to our creative futures, and we’re doing this in six separate seasons, which will be released over the next couple of years. Our first season, on learning, is going to be debuting on February 19th, 2019. So, The Digital Life is transforming into sort of our next podcast iteration, and we invite all of you, all of our friends and listeners who have enjoyed the show over these past seven, eight years now, to come along with us on this next journey, which really builds on all of the work that we’ve done here on The Digital Life. It’s sort of the next instantiation, which is Creative Next. So, if you’re interested in taking this journey with us, please go to CreativeNext.org and sign up for our mailing list, and we’ll be sure to let you know all the whens and wherefores when the first episode drops. And as a special bonus, we’ve got a sort of prototype first episode out there for you, Creative Next number one, that you can sample and listen to and see where we’re headed.
We would love it if you join us.
Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to TheDigitaLife.com. That’s just one L in the digital life, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everyone, so it’s a rich information resource to take advantage of while you’re listening, or afterwards if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM and Google Play, and if you’d like to follow us outside of the show, you can follow me on Twitter @jonfollett. That’s J-O-N F-O-L-L-E-T-T, and of course the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at GoInvo.com. That’s G-O-I-N-V-O dot com. Dirk?
Dirk: You can follow me on Twitter @dknemeyer. That’s at D-K-N-E-M-E-Y-E-R and thanks so much for listening.
Jon: That’s it for episode 289 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett and we’ll see you next time.

Bull Session

Emerging Tech Trends for 2019

December 24, 2018          

Episode Summary

Jon: Welcome to episode 288 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett. And with me is founder and co-host, Dirk Knemeyer.
Dirk: Greetings, listeners.
Jon: For our final show of 2018, we’re going to take a look forward into the realm of 2019, talking about emerging tech trends to watch, from AI to gene editing and a lot of other interesting technologies as well. Dirk, let’s start off by talking a little bit about how artificial intelligence has come to the fore and become a major tech-news hype trend, this idea that AI is the next emerging technology, at least in the public discussion. One of the things in an article that you pointed out to me in Fast Company, which I found very interesting, was some software that basically made it possible for designers to begin putting together machine learning elements, sort of visual coding, as it were, for artificial intelligence. And so what this says to me is, well, number one, that the technical aspects of artificial intelligence are going to be impenetrable, I think, for many designers, myself included. Having a visual interface that reveals the system, how the connections are made, how the rules are set, and how things interact is going to be important to getting more, call it non-technical, people involved in the creation of AI systems. I found this completely fascinating because it felt like a step towards making it more accessible for folks who might also be interested in the user experience side of things, which, of course, as a user experience studio, we care very much about. So to me, that’s a positive development and something I think we’re going to see more of in 2019. Your thoughts?
Dirk: Well, in terms of the particular article, it is showing the concept for a graphical user interface for programming artificial intelligence. The concept and idea are great.
The reality? I’m not going to hold my breath, right? So the article cites Squarespace as the example. Squarespace is a service which you can use to sort of cobble together a fairly professional website without a designer. Squarespace is old technology, and it’s notable that they can only cite Squarespace, not something more modern and recent and interesting. There’s been a very poor history of graphical user interfaces as intermediaries for software engineering and programming. Yeah, they might be able to make little simple websites work, but beyond that, more complex, more interesting, more powerful things are not able to be composed or created by a designer. It still requires a true programmer, a true software engineer. So the notion that suddenly, for artificial intelligence, they have this great, beautiful, plug-and-play, any-creative-professional-can-use-it “here’s my AI” software? I’m super skeptical about that. There’s just no track record in software in general of graphical user interfaces totally disintermediating the engineering component and allowing us to plug and play our way to complex things. It just isn’t real. Cool, great, if they could make it work with magic, awesome. But it’s just a concept at this point.
Jon: I mean, I take it that the Squarespace reference is more conceptual, right? The way in which it would operate, right?
Dirk: But if there was a better conceptual example, they would have used it.
Jon: Yeah. I mean, there are enterprise-grade systems that allow business processes to be assembled in more of a visual type of interface. So I think Pegasystems does some of that. And now there’s a desire to create no-code systems so that business analysts can do a similar style of app assembly or design, call it. So I think conceptually it’s something that people really would like to have happen.
But as you pointed out, at least on the design side of things, and especially with this sort of tool, the code underneath the visual design you create is suspect, right? I can remember early in the days of the nascent web, you had tools like Dreamweaver from Macromedia, right? Originally, before Adobe bought them. And the idea was that you weren’t going to hand-code things, you were going to assemble things visually. And so the feedback from the engineering side usually was, “Hey, this code is …”
Dirk: Is crap, yeah.
Jon: It wasn’t really meant to integrate with the code that was being hand generated. Coding in a text editor was a sign that you really knew what you were doing, versus dragging things around in a GUI like Dreamweaver. So, all that being said, I think that what Dreamweaver did do was open the gates for a lot of folks who may not have had the mind for coding, or really the time or inclination or whatever the excuse was, right? I know I’m not a capable coder in any sense of the term. So from a prototyping standpoint, maybe Dreamweaver is an interesting product, or was, right? So maybe these AI systems that are assembled using a visual interface aren’t production grade or what have you. But even from an idea generation, prototyping, lightweight testing standpoint, bringing these to a broader audience I think has value. I think as we move forward, the need to allow this technology to be accessible to a broader range of people is going to be really important for a number of reasons.
Dirk: In this particular example, I mean, look, maybe they’ll have something and it will work great. That’s certainly a possibility. The other thing to keep in mind, though, is the business model. So this is proprietary; it’s released by a specific company. They have models around installation fees, ongoing subscription fees. And so again, you’re within this closed environment of whatever tools this company is going to make available.
It’s not giving you access to a global, open source AI repository of wonders. You’re locked into whatever these cats are making for you. There are just a lot of limitations and questions, but it certainly is beautiful and conceptually interesting.
Jon: Right. One of our trends to watch in 2019: call it the democratization of artificial intelligence, in some manner or another. Let’s move on then to some of the other emerging technologies that we should pay attention to in 2019. Dirk, you did some comparative work taking a look at a research report from Lux Research. Tell me, what did you discover as you were poking around?
Dirk: Yeah, so it’s interesting. Lux Research is a Boston-based research company specializing in helping companies analyze emerging tech, and I do consulting work in that space. So they did the top 19 emerging technologies for 2019, and they had previously done the top 18 for 2018. I’m kind of a nerd, so I happily jumped into a spreadsheet and compared the two. There were a few different trends I found interesting. In general, the lists change wildly; about half of what was on the ’18 list is not on the ’19 list. So there are a bunch of things that have their moment and then are gone again. There were a couple things that were specifically interesting. Number one, the top thing in 2018 is the same as the top thing in 2019. In 2019, they call it machine learning and AI. In 2018, they were calling it machine learning and deep neural networks. So it’s also interesting to see how their language evolves and changes over time around what they think is important. But it really underscores the fact that AI and machine learning are really dominant right now in terms of the emerging technologies, the trends, the cutting-edge stuff, year over year. That was interesting to me. The second one was number two on the list, wearable electronics, and that’s interesting from a few perspectives.
Number one, last year they called it smart watches. So it’s a big evolution from a specific device to a very broad category, where they’re seeing the broader application of the things that make a smartwatch interesting across a whole variety of wearable technology. That expansion really speaks a lot to the market. Second is the rise in rank. In 2018, it was ninth on the list, and this year it’s second on the list. So that’s one to really, really watch, from a Lux Research perspective. I found that interesting. And then also new to the list at number six, so not even one of the top 18 from last year, but now all the way up to number six, is battery fast charging, which is interesting. I know there are certainly technologies behind it, but from a consumer perspective it’s more of a feature, right? My battery can charge quickly. That feature has much broader applications, though, particularly on the B2B side, on the industrial, corporate side. But for that one to just kind of show up, it’s sort of raising a signal flare that, hey, this is something that might be important. So those were a few things that stood out to me, Jon.
Jon: Yeah. So let’s dig into wearable electronics a little bit more, because it seems to be a rising and important emerging technology. Now, for me personally, having used these awful fitness trackers for, I don’t know, seven years now or however long, and sort of getting blisters from the first fitness tracker I ever used. I’m using a heart rate monitor right now when I bike. But I find wearable electronics distasteful.
Dirk: Distasteful, that’s an interesting take.
Jon: I kind of don’t like them because they’re awkward, and I’ve really felt a freedom in not wearing a watch. I used to wear a lovely watch, and then my phone sort of took care of that. I suppose if you go to a nice event you can wear a nice looking watch; other than that, it’s a piece of jewelry now. I like not wearing stuff.
I’m not saying I want an embeddable to track my fitness or whatever, and obviously, when I’m at the gym cycling or whatever, I want to know the details. But I think if some of these are going to get further adoption, they’re going to be tied to very specific use cases. I mean, obviously, the Fitbit’s a perfect example of where you really want to know about your fitness down to the nth degree, with some sort of motivator for you to take more steps. So there are many examples of different ways you can apply that, especially if folks have medical conditions and things like that. Diabetes is a perfect example of a condition where people will want to continuously monitor things like blood sugar. But generally speaking, I’ve always felt that wearables were a transitional technology. Definitely an emerging technology, but one that would give way to perhaps an embedded type of technology, or even to using cameras to discover some of the same information. There are algorithms that can tell you your heart rate based on what your facial scan is doing, because they can detect the small capillaries pulsing at a certain level. I’ve felt wearables were a transitional technology, and that could just be my bias because I’m not really a huge fan. But, Dirk, you’ve worn wearables, and you don’t wear them every day now.
Dirk: I don’t wear them at all now.
Jon: So I think I’ve heard you say, “Yeah, I got the information I needed out of it and then I was done.”
Dirk: Right, right. That’s right. And now, of course, they’re becoming more powerful, where they can do more things and there would be more of a use case to have them working ubiquitously, but it will be embeddables. I mean, this is a transitional period. It’s a transitional period that will last decades, not years, but it’s a transitional period nonetheless. Embeddables just make more sense.
I mean, the wearables are clunky and clumsy in a whole bunch of ways, whether it has to do with washing things or with having things available in unusual and difficult contexts. I mean, there are a bunch of reasons why wearables suck. However, there are also a bunch of reasons why it’s important to collect data that, currently, wearables are the only feasible way to collect, right? Yeah, I mean, it’s here for now, and it will go away at some point, just like a lot of other things will go away. I mean, our phone, all of that stuff, will be some sort of embeddable or virtualized context. But again, that’s no time soon. We’re looking a ways down the path now.
Jon: That’s an excellent point. Another emerging technology that’s going to light a fire in 2019, if it hasn’t already, is CRISPR and gene editing. Now, I noticed from our friends here at Lux Research that gene editing is at four for 2019, and in 2018 it was three.
Dirk: That’s up at the top. It’s up near the top.
Jon: It’s up at the top, and based on recent events, where there were live births of gene-edited human beings, I would say the horse is racing around the track now. That was something that happened in 2018 that I did not expect by any means. I don’t know if I had a particular date in mind when I thought that would happen, but I did not think it was going to be this year. With that consideration, I think what that does is put it into the public eye in sort of a negative light, which is unfortunate, and which is exactly what the scientific community did not want to have happen. That being said, I think it’s also upped the ante for competitiveness around not just the editing of human genes but all of the other aspects, whether it’s editing genes in animals or plants or what have you.
It’s raised the bar for all of that, intentionally or not. And whether that’s good for the technology, probably not, but it is going to be getting a lot of additional scrutiny from governments and organizations. There are going to be a lot of ethical questions asked about CRISPR technology in 2019. So I don’t know whether this is going to be a net positive for gene editing in 2019, but it’s going to be big.
Dirk: Yeah, amen.
Jon: So I think we can also mention, and we’ve talked about this a bit on the show, that 3D printing is another one. Additive fabrication is the slightly more technical name for it.
Dirk: But they interestingly removed that distinction. So in 2018, it was their second highest technology and they called it 3D printing and additive manufacturing. This year, it’s third instead of second and they just call it 3D printing. So again, it’s interesting to see how the terms are fluctuating from their perspective of analyzing the industry.
Jon: Yeah, I think this is a slightly under-the-radar technology in comparison to the big, news-hogging items that AI and gene editing can be. 3D printing, additive fabrication or not, isn’t the stuff of headlines so much, at least until you get into the applications that are immediately feasible. But what’s amazing about this technology is it really changes the face of manufacturing, especially for short-run, complex products in smaller quantities, right? So it takes manufacturing from needing huge assembly lines to a much smaller footprint. In fact, I think there’s the possibility that you can at least be prototyping some complex machines now using all additive fabrication. In fact, in Somerville, our neighbor town down the road here on Mass Ave., there are plenty of startups working in the space. And I think in the past year we’ve seen the debut of some amazing metal 3D printing.
Printing parts for motorcycles, say, that are extra light because they’ve got these very interesting honeycomb interiors, which are strong and yet a lot lighter than a solid metal part. I’ve seen some demos of this, and I think it’s really underappreciated how much this is going to transform manufacturing. Now, over the course of 2019, I think we are going to see more production systems come online, so moving from prototyping, which is very popular right now with 3D printing, and starting to move much more into the production space. I know some companies are making it so the prototype systems can scale: you can have multiples of your prototyping system, which then serve as production. So you may have one of these machines in your research and design facility and then 100 of them on your factory floor in a warehouse somewhere. That’s one methodology that I’ve seen for rolling this out to production capacity. And I think American manufacturing, with these flexible lines that can produce different kinds of parts for different kinds of products and then swiftly retool to produce some other thing, is part of the future of manufacturing. I think that’s pretty exciting and something we can watch for in 2019.
Dirk: I don’t know about 2019. I mean, I think it’s something that’s later as opposed to sooner, because we still have such a labor cost disparity between the United States and China, or the United States and even down-market from China, places like Vietnam, for example. It’s just so much cheaper on the labor side in those places that I think we’re still a ways away from the manufacturing being here in any meaningful quantity. Now, in the longer term, though, that will change, because the extremes are going to come towards the middle, and the difference will reach a point where it just makes sense, where it makes financial sense even for a company that is motivated mindlessly by money as opposed to human considerations as well.
It then becomes a no-brainer for that company to say, “Hey, look, we need to bring this here because it’s better for us, even though we’re still paying more in wages. The time saved, the logistics costs, all that other stuff makes this the better play.” The other factor, which won’t hit immediately, is that at some point we’re not going to have giant container ships going over the ocean full of so many products. Due to global warming, there’ll be some kind of legislation or tariff or something that either adds a cost, a pain for the people who want to ship, or just limits global trade from happening at that scale, in order to keep the planet okay. We’re so backwards right now that it’s a while away. But it’s when those things start to happen that bringing it to the US will really start to take off.
Jon: So, I just want to say thank you to all of our tremendous guests in 2018. We had a lot of fantastic guests on the show, and I’m going to put together a little list on SoundCloud of our interviews over the past year. It’s been a lot of fun. We actually had more guests on the show in 2018 than we did in 2017, so there was terrific growth there, and we appreciate people taking the time to come and talk to us about emerging technologies, design, and ethics.
Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to TheDigitaLife.com. That’s just one L in the digital life. And go to the page for this episode. We’ve included links to pretty much everything mentioned by everyone, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. And if you’d like to follow us outside of the show, you can follow me on Twitter @jonfollett.
That’s J-O-N F-O-L-L-E-T-T and, of course, the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies. You can check out GoInvo at GoInvo.com. That’s G-O-I-N-V-O dot com. Dirk?
Dirk: You can follow me on Twitter @dknemeyer. That’s at D-K-N-E-M-E-Y-E-R, and thanks so much for listening.
Jon: That’s it for episode 288 of The Digital Life, and that wraps up our 2018 season. For Dirk Knemeyer, I’m Jon Follett and we’ll see you next year.

Bull Session

Ethics for Emerging Technologies

December 14, 2018          

Episode Summary

Jon: Welcome to Episode 287 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host, Dirk Knemeyer.
Dirk: Greetings, listeners.
Jon: This week, we’ll be talking with author and designer Cennydd Bowles about ethics and emerging technologies. Cennydd’s new book, Future Ethics, published in September, is available now in print and digital formats. Cennydd, welcome to the show.
Cennydd: Hi folks. Thanks very much for having me here.
Jon: So Dirk, do you want to kick us off with some of the questions that we’ve prepared for Cennydd?
Dirk: Sure. So Cennydd, for starters, just tell us a little bit about yourself.
Cennydd: Sure thing. So I call myself a designer and a tech ethics consultant these days. I’d hesitate to call myself an ethicist, because I believe that title should probably go to the people who’ve got the credentials for it. But my background is as a digital product designer. I’ve worked in government, startups, dot-coms. I spent three years heading up design at Twitter UK. And since then, I have focused pretty much exclusively on the ethics of technology and the ethics of design. And as you mentioned at the start, I recently released a book that sort of brings together my work in that field. And I’m now trying to see how I take that to the world, and how I help companies make better ethical decisions and avoid some of the harms that have sadly become all too apparent, I think, in our field.
Dirk: So to help me understand the big picture of this, what is your conceptual model for ethics and technology? I can think of broad topics like agency or accountability, but do you have a framework of things that you think are central and important that work together?
Cennydd: To an extent, I sort of resist the idea of a grand narrative around something like ethics.
I think we’ve sometimes looked for overly simplistic framings of that problem, and I see that sometimes the solutions we try to offer are a little bit checklist-y. I think there’s a danger we get too much into that mentality. So I think there are some focal points within ethics that are understandable but may be too narrow. We see a lot of people within this field, say, looking at the ethics of attention, and all this panic about addictive technologies and devices that are consuming all our free time. Now, that’s an important issue, but it’s not the only issue. There are plenty of other ethical issues. So I’m keen not to be too boxed into a specific section, if you like, a specific problem, or indeed a specific approach. For me, it’s really about challenging these ideologies and the assumptions that have for too long gone unchecked, I suppose, in our field, and entering into a proper discussion about how we change things for the better. I don’t think we’re at the stage yet where we can simply just take an ethical design process and imprint it upon technology teams. I don’t think we have that level of maturity in the discussion yet. So it’s my job, hopefully, to stimulate some of that conversation.
Dirk: You mentioned you stay away from grand narratives because they often have overly simplified solutions. Can you give us an example of one of those? What is an overly simplified solution, and why, so that our listeners can have context for why perhaps those grand narratives aren’t as compelling or interesting as we might think they are?
Cennydd: Yeah, sure. One of the things I see a lot of people reaching for is the oversimplified answer of “why don’t we just have a code of ethics for our field? Why don’t we have some Hippocratic oath for technology or for design?” And it’s such an obvious answer, frankly, that it’s been tried dozens and dozens of times.
And it hasn’t worked. And so, when I see another one of these being proposed, I try to view it charitably, but I don’t think it’s going to really change anything. If the previous 50 didn’t work, what use is another one going to be? I think there is a danger with approaches like codes of ethics and the like that we get this checklist approach, that we almost end up with ethics becoming sort of what’s happened with accessibility. Accessibility on the web, since the release of the WCAG guidelines: they’ve helped and they’ve hindered. They’ve helped raise the profile of the issue, but they’ve also made accessibility appear to be a downstream development issue. You know, tick some boxes at the end, check your contrast ratios, you’re now AA compliant, job done, accessibility finished, let’s move on. And I don’t think it would be beneficial to have ethics as a checklist exercise at the end of the existing design process, the existing product development process, because it’s that process itself that we need to examine, rather than just tacking a code on at the end and saying, “Well, did we comply with everything that we said we were going to?” So I can understand the impulse to do that kind of thing, and there may still be a place for some kind of codification, but we’ve got to have those hard conversations first, rather than just throw that up as a one-size-fits-all answer.
Dirk: That makes a lot of sense. Taking the demystification in a different direction: mainstream conversations about ethics, and particularly ethics and artificial intelligence, are often centered around science fiction type topics. You know, machines that are smarter than humans, or even, from an evolutionary standpoint, replacing humans. Very entertaining, perhaps, but not necessarily grappling with the real ethical issues that matter now or in the future.
As someone who spends a lot of time thinking about these things, what are the ethical issues that really should matter to us today and going forward?
Cennydd: I mean, the issues you mentioned around some of that scary sci-fi future stuff, they are legitimate issues. They’re important ethical issues for the tech industry to grapple with. There is a risk that we over-index on those and ignore some of the things that are staring us in the face, but I don’t want to say that we shouldn’t focus on the dystopian angles as well. I think we need to pull every single lever in front of us and explore the ethics of those. But on a more, I suppose you’d say, a more proximate scale, things that are more readily apparent harms happening right now, we obviously have a lot of harms around the use of data and the effects of algorithms, often opaque algorithms. You know, the classic black box complaint that goes with a lot of, say, machine learning systems: that we don’t know why they take the decisions that they do. And we’re fairly familiar with the idea that they replicate the biases within not just the teams that create them, but also the societies that create the historic data that feeds and trains these algorithms. So they can essentially exacerbate and concretize these existing biases in ways that look objective and ways that look completely neutral. I’m particularly interested in the effects of persuasive systems, persuasive algorithms. Karen Yeung, who’s a legal scholar here in London, talks about the advent of an era of hypernudge: taking the idea of nudging systems to the extreme, where they’re networked and dynamic and highly personalized. They could be irresistible manipulators, and we essentially won’t know these systems are present until it’s too late. We’ve started already to see, of course, in the political sphere, the power of bots and of human networks of trolls working in collaboration to try and change mindsets.
What if we took that kind of persuasive power, dialed it up, amplified its capabilities, and put it in the hands of more and more people? That could have phenomenally challenging implications for society and even for free will. I’m also interested in how technology can be weaponized, and I mean that in two senses. I mean it in terms of how it can be misused by bad actors, so of course hackers, trolls, et cetera. And to an extent, some governments are now using technology as a means of force to compel certain behaviors, or to take advantage of weaknesses in systems to their own advantage and to the disadvantage of others. And then, of course, there is, I suppose, what you’d call more visible and above-the-line weaponization of technology, which is still fraught with ethical difficulties. We look at what happened, say, at Google with their Project Maven program, which caused all sorts of internal friction. And then, I think it was yesterday that Microsoft announced that they had just won a large defense contract to provide HoloLens technology to the US Army. And so the weaponization of these technologies may not have been intended. We may be playing with things that we think have fascinating implications, and we want to see where that technology takes us. And then we find later, oh, actually, this could be used for significant harm, but we didn’t plan for it, or we didn’t have an opportunity for the people working on that technology to object and say, “Well, I’m not actually comfortable working on a military project, for instance.” So it’s all these unintended consequences of technologies, and the externalities of technologies that fall on people that we just didn’t consider. I think that’s where some of the more pressing and perhaps slightly less far-fetched ethical challenges lie.
Dirk: For sure, those are really interesting and important examples.
As I’m thinking about ethics in application, or how to get ethics properly considered in the context of the companies, countries, and organizations that are making decisions now that have real ethical implications, what would or should that look like? You know, the notion of an ethicist or an ethics consultant such as yourself participating in a product development process, or participating in a company. There’s not a wide precedent for it. I’m sure it’s happened, but there certainly isn’t a standard that I’m familiar with, and I would suspect most people aren’t familiar with one either. I mean, is this a function that should be like a lawyer, generally sort of an outside, specialized thing that comes in, in expert situations? Or is it more like a designer or researcher that’s part of a team on an ongoing basis? How do we structurally make ethics the appropriate part of the things that we’re doing in our organizations?
Cennydd: Yeah, that’s an astute question, because, as you say, there isn’t a whole lot of precedent for this. The closest analogies we can take are probably in academia or in medicine and so on, where we have institutional review boards, IRBs, which are essentially ethics committees, right? And any large study or any large program will then have to go through approval at the IRB level. So some people think, well, maybe that’s a model that we take and transfer to large tech companies. I’m not entirely convinced. There may be some cases in which that works, but I think tech industry ideologies are just so resistant to anything that looks like a committee, anything that feels like academia and those sort of heavy, burdensome processes. So I think, in reality, we have to tread more lightly to begin with, unless there are really significant harms that could result. I’d say, if you’re working on weapons systems, you probably need an IRB, right? You need a proper committee to validate the decisions, the ethical choices in front of you.
But for everyday tech work, I think there is certainly benefit in having, yep, legal on board. You know, there will absolutely be lots of lawyers, general counsel, and so on, who have an interest in this, in both senses of that word.

But most of the change really has to come, I think, from inside the company. Now, I may be able to, and we'll find out whether this is true, stimulate some of that and help guide those companies. But ultimately, I think a failure state for ethics is to appoint a single person as the ethical oracle and say, "Well, let's get this person in, and they'll give their binding view on whether this is a moral act or not." It doesn't scale. And it's also quite a technocratic way of tackling what should be more of a democratic, more of a public-orientated decision.

So I think we have to find a way to approach ethics as an ethos, a mindset that we bring to the whole design process, the whole product development process, so that it raises questions throughout our work, rather than, as I say, being just a checklist at the end or a legal compliance issue.

As for the specific structures, whether we need an onsite ethicist within the team, or whether we train designers in this, I think designers make good vectors for this kind of work. They're very attuned to the idea of the end user having certain sorts of rights, for example. But I have only just begun to see the patterns that different companies are trying.

And what I'm seeing at the moment is that there's very little in common. You have some companies setting up entire teams, some leading it from product, some leading it from design, some trying to hire ethicists out of university faculties. And I don't yet have the data to know which of those approaches works.
I'm glad they're trying all these approaches, because hopefully in a year we'll have a better idea of which of them have been the most successful.

Dirk: That makes sense. What's your approach? As a consultant, you must have an engagement model. What is the sort of prototype you're trying out as you work with companies?

Cennydd: You know, I'm literally working on that right now, so I don't have a specific answer. My hunch at this stage is that some initial engagement, you know, a talk, a workshop, something like that, works as an awareness-raising thing, but I don't believe that's a successful model for long-term change. I think that has to be the initial engagement, like a foot in the door.

But my hunch is it's going to be much more meaningful to have some kind of retainer relationship, where someone like myself can come in and start off some initiatives, equip the team with some of the skills they need to make those changes, and then come back and check on progress. Because I can tell you from experience that pushing for ethical change is difficult work. You're swimming against a very heavy tide a lot of the time.

So you have to have persistence. You can't be too dissuaded if your grand plans don't work. So I think a kind of longitudinal engagement, maybe over the course of three, six, twelve months, is where I'm trying to head. Obviously, I've got to position that appropriately and convince people there's value in it. But, you know, ethics is for life, not just for Christmas, all these sorts of things. I don't want a situation in 12 to 18 months where we're saying, "Oh, are we still talking about that ethics thing?" It has to be more woven into the way we approach these problems.

Dirk: Talk a little more about your expertise. You've just written this book, and it's getting amazing reviews. People really like it and are seeing incredible value in it.
Maybe share with our listeners in more detail, what's going on in the book? What's it all about?

Cennydd: Sure thing. So my focus has been specifically on the ethics of emerging technology. That's not to say there aren't significant ethical questions to be asked about contemporary technology, but it's a bit of a fait accompli. There is value in talking about, say, the ethics of News Feed and Facebook, but right now there's not a whole lot we can do; its effects have already been felt. When you look at, say, the effects that Facebook and Twitter may have had on the major elections of 2016, we can try to mitigate those harms from happening again. But really, that horse has bolted, if I can throw the cliches in.

And for me, the ethical harms of emerging technology ramp up quite sharply, because over the next 10 to 20 years, we, as an industry, are going to be demanding a huge amount of trust from our users. We'll ask them to trust us with the safety of their vehicles, their homes, and even their families. And I don't think we've yet earned the trust that we're going to request. So my focus is trying to illuminate some of the potential ethical challenges within those emerging fields, but then to interlace that with what we already know about ethics.

I think the tech industry has this sometimes useful, but often infuriating, belief that we're the first people on any new shore, that we are beta testing this unique future, and that therefore we have to solve things from first principles. But of course, ethics as a field of inquiry has been around for a couple of millennia. Even the philosophy of technology and science and technology studies have been around for decades.
And the industry really hasn't paid them the attention that perhaps it should.

So I see my job as trying to introduce some of these perhaps theoretical ideas, but in a way that's practical for designers and product managers and technologists, so they can actually start to have those discussions and make those changes within their own companies. I'm trying to, if you like, translate between those two worlds. If I had to name a particular focus of the book, it's that.

But I have also structured the work in a somewhat chronological way, moving from the most readily apparent harms, such as, as I mentioned before, data, digital redlining as it's known, bias, things like that, through to some of the larger but further-away threats, such as the risks to the economy, the risks of autonomous war, and so on. Those tend to appear in later chapters, partly because I decided you need to build upon the knowledge introduced earlier in the book to get to that point.

Dirk: I loved that the book is really practically focused. So I do hope our listeners seek out Future Ethics, because it will really, you know, give you sort of a steroid shot into understanding the space, and then also give you practical things you can act upon. It's really good.

You know, pivoting to capitalism. Capitalism is under increased scrutiny and critique in ways that overlap with issues of technology and, of course, ethics. A specific recent example is how the ad-funded business model is being blamed for ethical lapses. And Cennydd, I know you have a different take on this. I'd love to hear about it.

Cennydd: Sure. I'm sometimes a little bit unpopular in tech ethics circles, because my response to that challenge is different from the sort of pre-ordained view these days. I don't believe advertising is the problem. I have to make what might seem like a pedantic distinction here.
But I think it's actually an important one to make, which is to separate advertising from tracking.

I think tracking, or targeting, is really where the ethical risk lies. Now, advertising can be seen as a promise, you know, a value exchange that we agree to. I get some valuable technology, and in exchange I give up my attention. I expect, I believe, that I'm going to see some adverts on my device, or in my podcast, or whatever it might be. If we reject that outright as a business model, which some people do, then really the only business model that leaves us is consumer-funded technology. And that has a lot going for it, but it is also potentially highly discriminatory.

One of the great things the advertising model has brought us is that it's put technology in the hands of billions for free. And I don't want us to lose that. I think it would be a deeply regressive step to conclude that the only ethical technology is that which is funded by the end user, because, of course, then you're excluding the poor, developing nations, those without credit, and so on. So I would hate for us to throw the baby out with the bathwater.

I do think, as I say, though, that we have to think more carefully about tracking. And tracking definitely does have some ethical challenges. Sometimes people then make the inference, "Well, okay, but the tracking comes from the need to advertise. You have to track people so you can advertise more accurately to them and get better returns."

My counter to that is that the value of tracking has now gone beyond the advertising case. Everyone sees value in tracking. Tracking helps any company, whether it's ad-funded or not: it helps us generate analytics about the success of our product, see what's working or what isn't in the market. And it's also particularly useful for generating training data.
We want to understand user behavior so that we can train machine learning systems, AI systems, upon that data to create new products and services.

So tracking now has value to pretty much any company, regardless of the funding model. So that cliche, if you're not paying for the product, you are the product being sold, I would take to an even slightly more dystopian perspective and say you are always the product. It doesn't matter who's paying. And so, we're trying to make a change that isn't focused, I think, on the right issues, which are: how do we combat some of these ideologies of datafication, of over-quantification, and the exploitations that might lurk within them? I think that's where the real ethical focus needs to go, rather than on the advertising case itself.

Dirk: That makes a lot of sense. You know, another ethical topic, and to sort of wrap up the interview, gets back maybe more into the science fiction realm: the notion of robot rights. On one hand, a modern robot appears little more than a complicated bucket of bolts.

But on the other, you know, I remember feeling true, shocking outrage at a concept video for a Boston Dynamics robot that was shaped like an animal. This was maybe three years ago, and in this video the engineers were beating it up, pushing it down, doing things I would consider inhumane. They were doing it to this robot, and I was upset at them, and made sort of character judgments about the company and the people participating in the video based on those behaviors, surprisingly so, perhaps. Robot rights. Talk a little about that.

Cennydd: Sure thing. So this is a complex and pretty controversial topic. There are many tech ethicists, AI ethicists particularly, who would say robots cannot and never should have rights. Rights get quite slippery in ethics.
It's quite easy sometimes to claim rights without justification, which is a reason some ethicists prefer not to use that perspective.

You can look at something like Sophia, this robot that you've almost certainly seen. It's this kind of rubber-faced marionette, essentially. It's a puppet. It has almost no real robotic or AI qualities. But it's now been given citizenship of the Kingdom of Saudi Arabia. Some people pointed out that that actually afforded it certain rights that women in that nation didn't have.

And things like that frustrate me, because that thing should absolutely not have any rights. It has nothing approaching what we might call consciousness. And consciousness is probably the point at which these issues really start to come to the fore. At some point, we might have a machine that has something approaching consciousness. And if that happens, then yes, maybe we do have to give it some legal personhood, or even moral personhood, which might then suggest certain rights. You know, we have the Declaration of Human Rights; maybe a lot of those rights would have to apply, perhaps with some modification, in that situation.

We have, for instance, rights against the ownership of persons. If we get to a point where a machine has demonstrated sufficient levels of consciousness, or something comparable, that we say it deserves personhood, then we can't own those things anymore. That's called slavery. We have directives against that kind of thing. We'd probably have to consider: can we actually make this thing do the work that we've built this future of robotics on, essentially? Maybe suddenly it has to have a say, an opportunity to say, "I won't do that work."

Now, it's tempting to say the way around this is, well, we just won't make machines that have any kind of consciousness, right? We won't program in consciousness subroutines.
But a friend of mine, Damien Williams, a philosopher and a science and technology studies academic, makes a very good point that consciousness may emerge accidentally. It may not be something we can simply excise and say, "Well, we won't put that particular module into the system." It may be emergent. And it may be very hard for us to recognize, because that consciousness is probably going to manifest in a different manner to human or animal consciousness.

So there's a great risk that we start infringing upon what might be the rights of that entity without even realizing it's happening. So it's a really thorny and controversial topic, and one that I'm very glad there are properly credentialed philosophers looking at. I've obviously done plenty of research into this, but they're far ahead of me, and I'm very glad folks are working on it.

Just with respect to your point about BigDog, I think it was, the Boston Dynamics robot: yes, that's fascinating, and maybe I have a view that's a bit more sentimental than most. Some people would say, well, it's fine; it's not sentient, it's not conscious, it's not actually suffering in any way. But I think it's still a mistake to maltreat advanced robots like that. Even things like Alexa or Siri. It feels morally correct to me to at least be somewhat polite to them, and to not swear at them and harass them. At some point, there'll be some hybrid entity anyway, some combination where these things are merged with humans, some intelligence combination there. And if you insult one, you'll insult the other. So that feels like something we shouldn't do.

But I also think we should treat these things well so that we don't brutalize ourselves, if you see what I mean. If we start to desensitize ourselves to doing harm to other entities, be they robots or animals or whatever, that line between artificial life and real life may start to blur.
And if we desensitize ourselves to that, if we lose sight of the violence of violence, then I think that starts to say worrying things about our society. Not everyone agrees with that; perhaps it's my sentimental view on the topic.

Dirk: No, that makes a lot of sense. And just as a follow-up: it seems as though people are talking about robot rights, participating in conversations about the consciousness of robots, and making sure they're protected and we're safe, while at the same time we take other species, cows, for example, and slaughter them by the millions or tens of millions. I don't know what the scale is, but it's horrifying. What do you think about the boundaries there? I mean, a robot versus a cow, or some other non-human animal?

Cennydd: I'm casting my mind back to remember who it was. I think it was Jeremy Bentham, the utilitarian philosopher, who said, and forgive me, I'll have to slightly paraphrase, that the question is not can they reason, nor can they talk, but can they suffer. And certainly, animals are absolutely capable of suffering.

Now, back in Bentham's time, that was the view he was challenging. Back in the 1700s, it wasn't really accepted that animals could suffer in the same way. But clearly, they exhibit preferences for certain states, certain behaviors, certain treatments, and you could argue that suffering results from acting against those preferences.

You're absolutely right to point out a fierce contradiction in a lot of ethics, between the way we think about treating these emerging artificial intelligences and the way we already treat living, sentient species such as animals.
And I think anyone who's interested in this area owes it to themselves to consider their views on, say, animal ethics, and whether that's an industry they feel able to support.

Now, that's not an easy decision to take, and I'm not saying, for instance, that anyone who claims to be interested in robot ethics has, by logical extension, to become a vegan. But we owe it to ourselves to recognize, as you point out, that there are significant contradictions in those mentalities. And we have to try to find a way to resolve them.

Dirk: Very eloquently put. Thanks so much, Cennydd.

Cennydd: Thank you.

Jon: Listeners, remember that while you're listening to the show, you can follow along with the things we're mentioning here in real time. Just head over to thedigitalife.com. That's just one L in The Digital Life. And go to the page for this episode. We've included links to pretty much everything mentioned by everyone, so it's a rich information resource to take advantage of while you're listening, or afterward if you're trying to remember something you liked.

You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. And if you'd like to follow us outside of the show, you can follow me on Twitter at @jonfollett. That's J-O-N F-O-L-L-E-T-T. And of course, the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at GoInvo.com. That's G-O-I-N-V-O.com. Dirk?

Dirk: You can follow me on Twitter at @dknemeyer. That's D-K-N-E-M-E-Y-E-R, and thanks so much for listening. Cennydd, how about you?

Cennydd: Gosh, well, if anyone would like to follow me and my exploits on Twitter, I'm @Cennydd there, which is spelled the Welsh way: C-E-N-N-Y-D-D. And of course, I'd be thrilled if you were to buy my book, Future Ethics. You can find information about it at www.future-ethics.com. Thanks.

Jon: So that's it for Episode 291 of The Digital Life.
For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.