Bull Session

The Pitfalls of Predicting AI

November 2, 2018          

Episode Summary

This week on The Digital Life, we discuss the pitfalls of predicting AI. AI predictions range from the measured and meaningful to the highly unrealistic and downright hysterical. But how can you tell the difference? In this episode, we dig into some rules of thumb for thinking through the AI predictions we encounter, as laid out in the article “The Seven Deadly Sins of AI Predictions” by Rodney Brooks, a founder of Rethink Robotics. From better understanding the properties of narrow AI, to asking “how will it be deployed?”, to questioning claims of magical properties without limit, to admitting that, in the long term, we just don’t know, we’ll explore the many factors that counter the breathless hysteria of AI predictions. Join us as we discuss.

Resources:
The Seven Deadly Sins of AI Predictions

Jon:
Welcome to episode 282 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host, Dirk Knemeyer.

Dirk:
Greetings, listeners.

Jon:
This week we’ll be discussing some of the many, many, many pitfalls of predicting artificial intelligence and obviously we’re guilty of some of that, but we’re going to tackle these rules of thumb which were put forth, I think last year in MIT’s technology review by Rodney Brooks, who, if you don’t know the name, you should. He’s the founder of Rethink Robotics. I believe he was also a founder of I, Robot, and was at MIT. The name of the article was the Seven Deadly Sins of AI Predictions. And so we’re going to dig into some of these rules of thumb that Mr. Brooks puts forth.

This week we’ll be discussing some of the many, many, many pitfalls of predicting artificial intelligence, and obviously we’re guilty of some of that, but we’re going to tackle these rules of thumb, which were put forth, I think last year, in MIT Technology Review by Rodney Brooks, who, if you don’t know the name, you should. He’s the founder of Rethink Robotics. I believe he was also a co-founder of iRobot, and was at MIT. The name of the article is “The Seven Deadly Sins of AI Predictions.” And so we’re going to dig into some of these rules of thumb that Mr. Brooks puts forth.

So first, I don’t think we need to go too far to see the hysteria of it all, right? It’s fun, and it probably gets a lot of clicks, if you can talk about how a particular subsection of the economy is going to be completely wiped out by automation, whether it’s robotic or AI or some combination. Usually, I think, around any of these predictions, if you dig a little deeper, you can reveal some lazy thinking. Some of the questions probably worth asking are, “Hey, what’s automated already? How easy is it to automate? How many jobs are there, and how likely is this to happen?” But Rodney Brooks gives us some rules of thumb, which I think are very useful.

Let’s get started. We’re not going to take these in any particular order, just the generally interesting ones. I found this first one pretty fascinating, and we’ve talked about this before: the idea that purpose-built AI is just not adaptable. Be wary anytime you’re making a prediction that’s based on a very specific purpose-built AI and using it as a pointer to future change, whether that’s apocalyptic or utopian or somewhere in between. A perfect example of this: we spent a lot of time talking about AI and poker, AI and Go, AI and chess, and it’s this idea that here are these amazing games that humans invented, and now we’re not even the best at them anymore. We’ve been bested by machines. Uh-oh, where is this going? Dirk, this is a great rule of thumb, I think, and it points to the massive difference between narrow AI and the more general AI, which people tend to conflate. Do you want to tackle that one a little bit?

Dirk:
Yeah. I mean, we’ve talked about this a lot on this show. Of course, at the most simple level, narrow AI is all the AI that is currently deployed, that we’re seeing and able to talk about as a real thing. It’s AI that is purpose-built to do a specific thing. General AI, which the media is more fond of talking about, is the theoretical AI that is able to do many things, getting to the point of being super-human and being able to eclipse the things that we do. AI is narrow, and it’s going to be narrow for a while. So any predictions that go beyond narrow AI, you can really throw out right away. We will move beyond narrow, but it’s more likely to be decades than years. So we can all chill a little bit.

Jon:
Right. I think part of that is that narrow AI is not adaptable, so it’s not like AlphaGo or a chess-playing AI is going to figure out how to, I don’t know, make a menu for your dinner or help you with your math homework or whatever it is. They’re not adaptable beyond the purpose they were built for. In fact, consider what happens if you change one small aspect of the rules of play. Like when you play Monopoly, right? There are lots of Monopoly fanatics who change the rules just slightly, just because it’s fun. They hack the game. So if you had an AI playing Monopoly and someone went in and started changing all the rules, the AI wouldn’t be able to do anything anymore, because the rule set that it learned and that it was trained on is now useless.

Dirk:
Yeah, no, that’s all true. And the other thing to think about: even though, yes, those narrow AIs are only able to do one thing, you do have people thinking in terms of frameworks as well. When we had on Noam Brown, one of the co-creators of Libratus, the AI that beat poker professionals, he made a point of saying multiple times, “We didn’t make this to beat poker. We made this to be a particular engine that could be re-deployed to do other things.” And he talked about things in the financial sector. He talked about things in the security sector.

Now, in order to achieve those ends, they will need to start over again from the standpoint of the AI’s learning. However, as for the basic structure of the AI, the programming of it … I don’t know to what degree they’re just learning from what they’ve done, leveraging assets from what they’ve done, or taking the whole engine whole cloth and just re-teaching it. He didn’t go into those details, and I can’t speak to that. But there is now this thoughtfulness around wanting to make something that solves and addresses particular problems. Each instantiation will need to be specific to one problem, but in working on one, we can more easily work on the next, and the next. That’s where you start to get some really interesting things happening. But it remains in the territory of narrow AI, where each one is just kerchunking away at that one thing. It’s a whole lot of hammers, Jon.

Jon:
Yeah. That’s an excellent point. Another one of these rules of thumb from the article particularly interested me, because we’ve seen some of its effects in our work at the studio, around AI and non-AI software frankly, and that’s the speed of deployment of technologies and their capital costs. A good way to look at this is through the digitization of healthcare and healthcare records, a good example of how much time, effort, and money is required to do a digital transformation. And you can see from the example I’m about to give how this might apply to AI software, and how the needs in all three of those areas are going to stretch far off into the future. We’re going to need lots of money and lots of time.

For example, EHRs got a ton of money injected into that industry by the federal government, because they wanted to digitize health records and reap all the benefits of having a digitized system. Now, we won’t get into the fact that we haven’t realized the success of a lot of this deployment yet, right?

Dirk:
Is it an epic fail, Jon?

Jon:
I don’t know if it’s an epic fail, but there are lots of smaller failures that might add up to an epic fail.

Dirk:
I was using the word epic in a couple of different ways there.

Jon:
I know, I know you were. So let’s look at that. Over the past decade, we’ve been deploying these electronic health records, moving from an analog system, which is largely paper-based, with faxing of records, right, to a digital one. So you have to deploy all of these massive enterprise systems at hospitals. You have to retrain people. In fact, now that people are not using paper all the time, you have to rethink the way they’re interacting with patients, because now doctors are looking at their screens all the time and not at the patients. And with all those things together, we’ve finally gotten electronic health records sort of up and running, and people don’t really know how to use them yet.

There are no open standards, so people can’t share data with each other. Patients don’t own their data. Patients can’t even really transfer their data from one hospital or provider to another. There are all sorts of practical problems with the deployment of this technology, and this is a fairly unremarkable technology. Let’s face it, digitizing the health record doesn’t seem like it would need to be magical, like that this-

Dirk:
This is not a killer death robot level of problem there, Jon.

Jon:
No, no, it’s not. And so, given that we could only almost-sort-of-kind-of deploy that thing over 10 years, and not super successfully, just imagine trying to do the next level of digital transformation and add AI into all your workflows across hospitals. I can’t imagine how many years that would take once the technology even exists, right? First you have to develop the technology, and then you have to deploy it. Never mind the fact that a lot of people were really happy just faxing things and filing away their papers, just as they always have.

If you’ve got a system that works, and whatever the new software offers is just incremental improvement, it’s also going to take time for that to be absorbed. For many reasons, deployment is the unforeseen monster in the closet for any technology. It’s like, “Okay, great, this stuff works.” It may even work in a small prototype or pilot rollout. But once you start talking about enterprise-grade rollouts, the stakes get a lot higher and the timelines get a lot longer, for sure.

Dirk:
That’s really true. And part of that is existing infrastructure, which is an issue in that context, but also beyond it. If you’re thinking about making predictions about where technology can go, remember that we have a lot of stuff that people can’t afford to replace, right? If we could magically make all roads and all cars vanish, you can bet automobiles would not be the transportation solution of today or the future. It would be something completely different. However, in a world where the roads already exist, and people have tens of thousands of dollars invested in their cars and don’t have a lot of extra money to spend on new transportation conveyances, cars are going to be the center for a really long time. In the home, there are lots of interesting concepts around what a house of the future could look like.

The health room that we innovated at GoInvo is an example of that. But these houses are made of certain materials. They are physical, expensive spaces. Changing those physical materials and completely metamorphosing, I think that’s the right word, the environment, is just beyond the bounds of what 80, 90% of people can pay for. The more interesting and exciting, say magical, solutions around smart homes? That ain’t going to happen, because of existing infrastructure if absolutely nothing else. So as you’re thinking about your own predictions, trying to sort out what the future looks like, think about infrastructure, because if you’re dealing with something that has existing infrastructure, that’s a huge boulder in the way of exciting new ideas becoming reality.

Jon:
Yeah, I think that’s correct. One point that Mr. Brooks made in his article that I found interesting, and it’s not something I think about very often, is that exponential thinking can get us in trouble. Right? We have this idea in our heads of exponential growth around technology. Moore’s law in particular has spoiled us in a lot of ways, because now I can get a computer in my pocket that is 100 times more powerful than whatever I was using as a kid in the late 80s, versus that giant behemoth sitting on my father’s desk. We’ve accepted that as an article of faith, right? Everything will get smaller, everything will get faster, and that will just go on forever. Right?

Okay, so that’s not necessarily true. These advances that we’re making, whether in genomics or around chips and computers, have physical and other limits. Economic limits, for instance: at a certain point it doesn’t make any sense to keep on making things smaller if people just don’t need the power anymore for whatever it is they’re doing. I probably don’t actually need all the power that’s in my MacBook Pro right now. Something somewhere in between what I had in college and what I have now would probably be fine. So there are economic limits, and then there are the actual physical limits of how small you can make something. Right?

And so these will come into play in different ways, but it’s worth considering that this is not really exponential, or at least it’s not going to be exponential forever in many of these cases. That’s not to say that there won’t be some interesting quantum computing discovery that, who knows, may deliver some crazy fast computing that we can’t even imagine yet. But barring that, and considering the laws as we understand them now, there are these upper limits. And frankly, you never hear about limitations when miraculous predictions are made.

Dirk:
That’s true. I think when many of us think about the future, we’re not thinking in terms of timeframes, right? There are pundits who are like, “In 10 years, this will happen. In 20 years, that will happen,” but the article pointed out that overestimating and underestimating the timeframe of things happening is a big mistake that people make, and one that I think is valuable for all of us to keep in mind as well. I’ll speak for myself: over the years there have been a lot of things that I’ve seen coming and predicted correctly. I tend to have a skill for that, but I have almost universally been wildly off when I’ve put a specific timeframe on it. It’s tough to get that right unless it’s really close and you can say, “Oh, in six months, next year,” because then it’s all kind of imminent. It’s all kind of coming to a head.

As we’re thinking about how to personally manage our prognostications for the trajectory of the future, we should be mindful that specific time horizons are probably wrong. Trying to figure out what’s happening in 10 years, or wherever that chalk line is, is going to be a failed exercise in terms of the conclusions you come to. So think less about timeframes and think more about possibilities, and, more generally speaking, about what is going to happen. Timeframes are just going to prove inaccurate and are the wrong thing to focus on.

Jon:
Yeah. That’s the brutality of timeframes, if you’re given to any kind of estimation, whether it’s part of your job or just part of your life. The rule of thumb that I use sometimes is, “Okay, I know this much, right? So I’m just going to double it,” because I figure I know maybe 50% of what’s going on. And this is very specific to estimating, right? Like, “How long is this going to take me? Oh, I think it’s going to take half an hour.” Better make it an hour. Or, “I think this is going to take 12 weeks.” Better make it double that, and you’re likely closer, because you don’t know what you don’t know.

Dirk:
Yeah, that’s right.

Jon:
I mean, and to build on that point a little bit, a lot of the technologies that we are beginning to understand now, we have a specific purpose in mind for those technologies and oftentimes those purposes are vastly different from how the technology actually gets used.

Technology that is designed for one thing inevitably ends up in another industry, being used for something that its inventors could never have imagined. So when we’re making predictions about AI, they’re based on our understanding at the moment, with all of our biases around the industries that we have knowledge of, and these technologies very well could end up in completely separate areas doing something that we never imagined.

And so the example from the article that I love is how GPS was basically built for targeting munitions, right? For dropping bombs. And now we’re using it for tracking our runs, right down to the foot, basically, of where I’m running around the park.

Dirk:
Among other things. GPS is used in incredibly diverse ways.

Jon:
Sure, sure. Yeah. To help us get from place to place, and Lord knows what else it’s used for. So, for the last rule of thumb that we’ll touch on today, and I think this is a fun one. Obviously, we haven’t hit them all in this episode, so I encourage you to go check out the original article; we’ll link to it from the podcast. But the last one I want to touch on is this idea of Hollywood movie scenarios, where we have a story in mind around a particular usage of AI and accept that usage because it’s convenient for the myth or the story that’s being told to us on the screen.

So there isn’t a lot of discernment, because we’re being entertained, but at the same time it’s laying some of the groundwork for our thinking around AI. Of course, we’ve all seen the Terminator films, where the artificial general intelligence Skynet takes over and destroys humanity. All of the assumptions that lead up to this very interesting dystopian future, all of these assumptions that make the story feel so exciting, you accept as part of the fantasy. But once you exit the movie, these ideas remain with you and shape the way you think about AI in a day-to-day way, whether or not you’re conscious of it.

To your point, the silliness of the killer death robots. I mean, where did we get the killer death robots idea in the first place? I suspect it was as a kid growing up in the 80s. Oh, I loved The Terminator so much. I don’t know if that was the 80s or early 90s. Feels like the 80s.

Dirk:
First Terminator was 80s.

Jon:
Yeah. And that’s probably infected my thinking, thanks to James Cameron, right.

Dirk:
Or whoever wrote the original story, which may have been James Cameron. And that probably even goes back to the 30s. That’s when science fiction AI writing was first kind of getting started. Well, that’s probably not exactly true, but science fiction as we think about it today, with spaceships and all of that stuff, a lot of that was germinating then. Probably the killer death robots idea began then, or around then.

Jon:
Right. So something to keep in mind for ourselves and for anybody making AI predictions that the Hollywood version of things is probably not the greatest starting point.

Listeners, remember that while you’re listening to the show, you can follow along with all the things that we’re mentioning here in real time. Just head over to thedigitalife.com. That’s just one L in thedigitalife. And go to the page for this episode.

We’ve included links to pretty much everything mentioned by everyone, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. If you’d like to follow us outside of the show, you can follow me on Twitter @jonfollett. That’s J-O-N-F-O-L-L-E-T-T. And, of course, the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at goinvo.com. That’s G-O-I-N-V-O.com. Dirk?

Dirk:
You can follow me on Twitter @dknemeyer. That’s D-K-N-E-M-E-Y-E-R. And thanks so much for listening.

Jon:
So that’s it for episode 282 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett. And we’ll see you next time.
