Welcome to Episode 268 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host Dirk Knemeyer.
For our topic this week, we’re going to chat about designing for a world of AI, and the broad range of possibilities of different types of intelligence. So, to give us a little grounding in the various facets of artificial intelligence, which we’ve talked about on the show at length but are useful to define here, Dirk, can you walk us quickly through applied AI versus artificial general intelligence? Just to give us the foundational definitions.
Sure. Yeah, we’ve talked about this a lot on the show, and for some of our listeners this might be repetition, which I apologize for, but I think it’s important that we all understand the terms we’re talking about. There are generally two-part or three-part structures for thinking about this; I’m going to choose the three-part one, even though I think the two-part framing is a little more in vogue at the moment. So, there’s weak or narrow AI, then there’s strong AI, and then there’s artificial general intelligence, like what Jon suggested. What we have now, you know, the thing the media is so excited about, is really only weak or narrow AI, which is the simplest. What weak or narrow AI is, is artificial intelligence applied to one specific task, one very narrow set of things. So, thanks to deep learning, we have AIs that can do amazing things. Think, for example, about Facebook; I believe DeepFace is the name of their product.
I think that’s right, but now it sounds silly enough that you’re making me wonder-
Don’t worry about it. And then of course we had Noam Brown on the show, who created narrow AI that could defeat world champion poker players, right?
Yes, that’s right. And it is DeepFace, I just checked, which is deep learning with facial recognition. It’s able to identify on Facebook who our friends are without our having to tag “this is my friend.” It identifies who people are by their facial structure, learned through a deep learning process. It seems utterly magical, as opposed to relying on user input to classify and create relationships around the different images in the software. In these and many other examples, it’s powerful and amazing, and some might even say miraculous, but it’s super narrow. If we asked Noam to have his software win at chess instead of poker, it would fail. It’s good at just one thing.
Then you have strong AI. Strong AI is AI that can do multiple things. It’s not just good at the one; it can be successful in a broader way. Artificial general intelligence is sort of the superhuman artificial intelligence. It’s at or above the level of human capability, and of course, we are able to do many, many different things at various levels of ability. I don’t know what the right number is, but it would be thousands or tens of thousands of different and discrete things, competently. We’re not at strong AI yet, and we’re not going to be in the years ahead; probably in the decades, but not the years. And artificial general intelligence may or may not ever happen, but it is certainly decades to never away from happening.
Right. And so artificial general intelligence, that’s the human-level AI, and that is the fodder for science fiction, like Ex Machina, or the Skynet of Terminator fame, or any of that fare, or even one of my favorites, The Matrix, of course, where the computer is able to do everything humans can and more.
And popular culture and media are conflating the artificial intelligence of today, which is weak, deep-learning-based AI, with these bigger things, which completely confuses everyone and makes it difficult to understand where we actually are in the technical progression of this stuff.
Right. And I think that’s the genesis. All of that is the genesis for our conversation today, in which Dirk, you pointed out to me an interesting conversation in Edge magazine. It was a conversation with Murray Shanahan, who’s a professor of Cognitive Robotics at Imperial College London. He’s also a senior research scientist at DeepMind, and he discusses the space of possible minds, a phrase coined by British philosopher Aaron Sloman. The idea is that there is this space of all the different possible kinds of intelligence, whether it’s biological minds, or evolved intelligence, perhaps extraterrestrial intelligence, and then of course, the topic of our show, artificial intelligence. There’s this whole range of possibilities. It’s much more useful to think in terms of this space than of artificial intelligence as a monolithic thing that is going to become AGI and put humanity at risk, which is the problem you outlined earlier with the media conflating all kinds of intelligence.
What’s fascinating about this is, if you look at this continuum of different kinds of artificial agents and intelligence, you can start to see some interesting combinations where human beings and computers work together to augment human intelligence. And I think there’s a much higher likelihood of our technology evolving in that direction, with us working in concert with various AI agents and bots, rather than us being immediately superseded by AGI. We were having a little riffing session before the show where we were talking about all of the different ways that humans augment intelligence, and one of the things we’re doing now, which feels exceptionally lazy but is probably smart, is that we don’t remember as much information, but we know where to get it. So Google is becoming our outboard mind, and Dirk, you had some other examples of ways that we’re augmenting our intelligence even today, sans AI.
You mentioned Google as being something that is emerging in that role, but really I think Google is already in that role; for me, certainly, for at least a decade. When someone asks me some sort of fact about the world, unless I have it top of mind, I just google it. And I’m a little bemused that people aren’t googling it in the first place. I mean, that’s already there on the digital side, but to your point, look, we’ve had this going back, I’m sure, to the beginning of time, but at least to a much earlier technology, the printing press of the 15th century. The dictionary and the encyclopedia are the equivalent of Google from the standpoint of, I need an answer to something. They’re slower, and they may or may not be accessible to you when you need them, but they’re filling the same role.
We have, going back to the earliest technologies, stone tools, which, as we might have talked about on the show, sort of precede humanity; it was a whole other species that invented stone tools. But we have augmented our being in the world with tools since the beginning of time, and we’ve augmented ourselves intellectually with tools as soon as those tools became available. Again, with terms like artificial intelligence, people get all carried away and excited and think about things from sci-fi movies, when it’s really just an evolutionary step of better tools, of more capable tools that augment us, stretch both our reach and our grasp, and assist us.
Yeah, exactly. I think with this continuum of technology, we’re really talking about an evolution in software and a further augmentation of our intellectual capabilities, rather than these giant leaps that seem more scary because they pull us out of what we’re familiar with into this apocalyptic context, right?
That sells newspapers or clicks or what have you.
AI is smarter software; it’s not killer death robots. Elon Musk, shut up already.
I like this idea of the continuum of technology and this space of possible minds, and I think it’s worth digging in a little deeper. Part of what we’re aiming for, as we’re designing AI to help us do a variety of tasks, really is this problem-solving capacity that we have, whether we apply it in various forms of software or in more general problem solving, and I’ll elaborate on that a little bit. When we’re using narrow AI to solve problems, we’re essentially taking the human out of part of the loop and automating a section of that workflow, so it becomes possible for us to concentrate on higher-level things. At the GoInvo studio, we’re running into a lot of scenarios where machine learning is being applied against parts of healthcare IT that are just plain tedious, or that would otherwise require many people in a room working 40 hours a week doing something exceptionally boring.
Give me examples of that. What are some concrete examples, Jon, for our listeners?
For example, the electronic medical record has huge amounts of information that needs to be both entered and then sifted through and, say, billed for, right? So you might have an encounter with a doctor, and a scribe or a nurse or the doctor might type in all of the details of that encounter, and eventually that will make its way to your insurance company, surface in diagnosis, and be used as part of your treatment over time. Any of those things could be automated. So, instead of having the doctor facing away from you, typing into their keyboard and not paying any attention to you, there could be an AI-driven chatbot that’s listening for that information, transcribing it automatically, and listening for particular terms that would trigger that information to be input.
So there’s any number of areas where this narrow intelligence, this space of possible minds, can be applied: where could a very specific aspect of machine learning and artificial intelligence be applied to remove some problem in a workflow? That’s really where we’re going to see all kinds of innovation, stringing a number of those together wherever a human mind is dedicated to a very narrow task. You can take the human being out of the loop there, automate that, and now perhaps, just maybe, the doctor can spend a little more time with the patient. Which is kind of the goal there: giving more empathetic attention to the patient. That would be the goal.
Sure. That’s a good example of the technology of today and what’s available today. Going back, you were talking about the piece from Edge and specifically the notion of possible minds. Even though artificial general intelligence is far away, the fact that people are talking about it is leading toward these interesting conversations, such as possible minds and what consciousness is. What I think is interesting is that while it gets to questions of artificial intelligence and questions of machines, it also gets to questions of life. We’re in a time where, for example, just today, and this will be broadcast next week so it won’t be totally current anymore, WeWork, the company that has coworking spaces around the world and some 6,000 employees, announced that they will no longer serve meat, and that they will have rules and restrictions around what their employees can eat when it’s provided or paid for by the company. That’s something that is done, explicitly in their release, for environmental reasons.
But along with environmental questions about the eating of meat, there are also questions about the morality of eating meat, of slaughtering another animal in order to eat it. Questions about possible minds and questions about consciousness aren’t just going to be things that give us insight about machines; they’re also going to shape a set of beliefs and lead to an agenda that has to do with all life, in particular biological life such as animals beyond the human, and that will have other impacts socially. Even though the conversation around what AI is and how it’s going to manifest has veered into the semi-insane in the popular media, it is leading toward smart people having interesting conversations around questions that go far beyond killer death robots and get into how we see our place in the world and how we see the world in general, and it may accelerate trends such as eating less meat.
Yeah, that’s an angle that I think is very valid, and perhaps one that is only starting to come forward and one we should pay more attention to.
Listeners, remember that while you’re listening to the show you can follow along with the things that we’re mentioning here in real time. Just head over to the digitalife.com. That’s just one L in the digitalife, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everyone, so it’s a rich information resource to take advantage of while you’re listening or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM and Google Play.
If you’d like to follow us outside of the show you can follow me on Twitter @jonfollett, that’s J-O-N-F-O-L-L-E-T-T, and, of course, the whole show is brought to you by Goinvo, a studio designing the future of healthcare and emerging technologies, which you can check out at goinvo.com. That’s G-O-I-N-V-O dot com. Dirk?
You can follow me on Twitter @dknemeyer, that’s D-K-N-E-M-E-Y-E-R, and thanks so much for listening.
So that’s it for episode 268 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.