Bull Session
The Pitfalls of Predicting AI
November 2, 2018
Episode Summary
This week on The Digital Life, we discuss the pitfalls of predicting AI. AI predictions range from the measured and meaningful to highly unrealistic and downright hysterical. But how can you tell the difference? In this episode, we dig into some rules of thumb for thinking through the AI predictions we encounter, as laid out in the article “The Seven Deadly Sins of AI Predictions” by Rodney Brooks, a founder of Rethink Robotics. From better understanding the properties of narrow AI to asking “how will it be deployed?”, questioning supposed magical properties without limit, to admitting, in the long term, we just don’t know, we’ll explore the many factors that counter the breathless hysteria of AI predictions. Join us as we discuss.
Resources:
The Seven Deadly Sins of AI Predictions
So first, I don’t think we need to go too far to see the hysteria of it all, right? It’s fun and it probably gets a lot of clicks if you can talk about how a particular subsection of the economy is going to be completely wiped out by automation, whether it’s robotics or AI or some combination. Usually, I think, around any of these predictions, if you dig a little deeper, you can reveal some of the lazy thinking. Some of the questions probably worth asking are, “Hey, what’s automated already? How easy is it to automate? How many jobs are there, and how likely is this to happen?” But Rodney Brooks gives us some rules of thumb, which I think are very useful.
Let’s get started. And we’re not going to take these in any particular order; I think just the generally interesting ones. We’ll start with some of them. I found this pretty fascinating, and we’ve talked about this before, but the idea is that purpose-built AI is just not adaptable. Watch out anytime you see a prediction that’s based on a very specific purpose-built AI and it’s being used as a pointer to future change, whether that’s apocalyptic or utopian or somewhere in between. A perfect example of this is that we spent a lot of time talking about AI and poker, AI and Go, AI and chess, and it’s this idea that here are these amazing games that humans invented and now we’re not even the best at them anymore. We’ve been bested by machines. Uh-oh, where is this going? Dirk, this is a great rule of thumb, I think, and it leads to the massive difference between narrow AI and the more general AI, which people sort of conflate. Do you want to tackle that one a little bit?
Now, in order to achieve those ends, they will need to start over again from the standpoint of the AI’s learning. However, as for the basic structure of the AI, the programming of it … I don’t know to what degree they’re just learning from what they’ve done, leveraging assets from what they’ve done, or taking the whole engine whole cloth and just re-teaching it. He didn’t go into those details and I can’t speak to that, but there is now this thoughtfulness around wanting to make something that solves and addresses particular problems. Each instantiation that we have will need to be specific to one problem, but in working on the one we can more easily work on the next, and the next. That’s where you start to get some really interesting things happening. But it remains in the territory of narrow AI again, where each one is just kerchunking away at that one thing. It’s a whole lot of hammers. John.
For example, EHRs got a ton of money injected into that industry by the Federal government, because they wanted to digitize health records and reap all of the benefits of having a digitized system. Now, we won’t get into the fact that a lot of this deployment hasn’t gone well, that we haven’t realized the success of it yet, right?
There are no open standards, so people can’t share data with each other. Patients don’t own their data. Patients can’t even really transfer their data from one hospital to another provider or another hospital. There are all sorts of practical problems with the deployment of this technology, and this is a fairly unremarkable technology. Let’s face it, digitizing the health record, it doesn’t seem like this would need to be magical-
If you’ve got a system that works and you’ve got some incremental improvement from whatever the software is going to be, it’s also just going to take time for that to be absorbed. For many reasons, deployment is the unforeseen monster in the closet for any technology. It’s like, “Okay, great, this stuff works.” It may even work in a small prototype or a limited rollout. But once you start talking about enterprise-grade rollouts, the stakes get a lot higher and the timelines get a lot longer, for sure.
The health room that we innovated at GoInvo is an example of that. But these houses are made of certain materials. They are physical, expensive spaces. Changing those physical materials and completely metamorphosing, I think that’s the right word, the environment, is just beyond the bounds of what 80, 90% of people can pay for. The more interesting and exciting, even magical, solutions around smart homes, that ain’t going to happen because of existing infrastructure, if absolutely nothing else. So as you’re thinking about your own predictions and trying to sort out what the future looks like, think about infrastructure, because if you’re dealing with something that has existing infrastructure, I mean, that’s a huge boulder in the way of exciting new ideas becoming reality.
Okay, so that’s not necessarily true. These advances that we’re making, whether it’s in genomics or around chips and computers, there are physical and other limits to them, economic limits for instance. At a certain point it doesn’t make any sense to keep on making things smaller if people just don’t need the power anymore for whatever it is they’re doing. I probably don’t actually need all the power that’s in my MacBook Pro right now. I could probably survive with something somewhere in between what I had in college and what I have now; that probably would have been fine. But there are economic limits, and then there are the actual physical limits of how small you can make something. Right?
And so these will come into play in different ways, but it’s worth considering that this is not really exponential, or at least it’s not going to be exponential forever in many of these cases. That’s not to say that there won’t be some interesting quantum computing discovery that, who knows, may deliver some crazy fast computing that we can’t even imagine yet. But barring that, and considering the laws as we understand them now, there are these upper limits, and frankly you never hear about limitations at all when miraculous predictions are made.
As we’re thinking about just personally how to manage our prognostications for the trajectory of the future, it’s worth being mindful that specific time horizons are probably wrong. Trying to figure out what’s happening in 10 years, or wherever that chalk line is, is going to be a failed exercise in terms of the conclusions you come to. So think less about timeframes and think more about possibilities and, more generally speaking, what is going to happen. Timeframes are just going to prove inaccurate and are the wrong things to focus on.
Technology that is designed for one thing inevitably ends up in another industry, being used for something that its inventors could never have imagined. So when we’re making predictions about AI, they’re based on our understanding at the moment, with all of the biases around the industries that we have knowledge of, and these technologies could very well end up in completely separate areas doing something that we never imagined.
And so the example from the article that I love is how GPS was basically built for targeting munitions, right? For dropping bombs. And now we’re using it for tracking our runs, right down to the foot, basically, where I’m running around the park.
So there isn’t a lot of discernment because we’re being entertained, but at the same time it’s sort of laying some of the groundwork for our thinking around AI and, of course, we’ve all seen the Terminator films where the artificial general intelligence Skynet sort of takes over things and destroys humanity. And so all of the assumptions that lead up to this very interesting dystopian future, all of these assumptions that make the story feel so exciting, you accept as part of the fantasy. But once you exit the movie, these ideas remain with you and shape the way you think about AI, whether or not you’re conscious of it, in a day-to-day way.
To your point, the silliness of the killer death robots. I mean, where did we get the killer death robots idea in the first place? I suspect it was as a kid growing up in the 80s. Oh, I loved the Terminator so much. I don’t know if that was the 80s or early 90s. Feels like the 80s.
Listeners, remember that while you’re listening to the show, you can follow along with all the things that we’re mentioning here in real time. Just head over to thedigitalife.com. That’s just one L in thedigitalife. And go to the page for this episode.
We’ve included links to pretty much everything mentioned by everyone. So it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM and Google Play. If you’d like to follow us outside of the show, you can follow me on Twitter @jonfollett. That’s J-O-N-F-O-L-L-E-T-T. And, of course, the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at goinvo.com. That’s G-O-I-N-V-O.com. Dirk?