Welcome to episode 271 of The Digital Life, a show about our insights into the future of design and technology. I’m your host Jon Follett and with me is founder and co-host Dirk Knemeyer.
Our special guest today is Katja Grace from AI Impacts, whose research is focused on the future of AI. Katja, welcome to the show.
Hi. It’s good to be here.
For our topic this week, we are going to chat about where artificial intelligence will be going in the next 10 years, and its impact on the world and people. Dirk, why don’t you lead us off?
Sure. Katja, AI is such a big thing right now in the culture, in the media. A lot of that is around deep learning, the specific technology through which a lot of the recent accomplishments have been achieved. Why is deep learning having such a breakout moment?
I think the basic ideas behind it are quite old, many decades old, but in order to get value from it you need a lot of hardware, is my basic understanding at least. This isn’t central to my expertise, but you might be familiar with Moore’s Law, the phenomenon where hardware becomes an order of magnitude cheaper every maybe 10 years now, though in the past it was more like every 4 years. If you wait for a bunch of decades you can afford to use lots of hardware.
I think that’s what has happened recently. Another thing that people point to is data. The digitization of much of our lives recently means there’s a lot more data for various things. Deep learning lets you use lots of hardware and data to get good results. I think there have been good insights as well, but the story that I’ve heard at least is the availability of hardware has allowed the science to progress there.
The buzz now is that deep learning will increasingly automate knowledge work. Can you explain why that is, and give us a couple examples of automation that might be coming in the next couple, or few years?
Yeah. Part of the reason to suspect that deep learning will automate knowledge work is we think that something will automate knowledge work eventually. We know that humans can do knowledge work so probably it’s possible to get machines to do it as well, unless you think that humans have some magic spark of consciousness that is needed for it that we will never be able to automate. I think many people suspect that’s not the case, so the question is what technology will let us do this?
The reason to suspect that deep learning might be it is we’ve recently seen a lot of progress in automating some things that are part of normal human functioning that we hadn’t previously been able to do, that are key to most work. For instance, recognizing and dealing with images and speech and writing. Knowledge work involves those things as well as maybe other thinking that we haven’t worked out how to do.
We’ve seen deep learning do some things that seem fairly intellectual. For instance, playing various games quite well. You might think that the kind of technology that lets us play Go well wouldn’t be that far off of what we need for being able to do some jobs well that are relatively straightforward intellectual jobs. I suppose some people think that this sort of technology might take us all the way to having artificial general intelligence that can do basically everything humans can do, but that’s more controversial.
As far as what kinds of things might be automated soon, I haven’t looked into this directly, but some collaborators and I ran a big survey of machine learning researchers and asked them what they thought about that. The median person thought that retail assistants would be automated in 10-20 years. Surgeons, they thought, 30-50 years. AI researchers themselves, they thought much more than 50 years. Some of them thought 50 years to much longer.
Some things that they thought might happen in the next 10 years are AI being able to write top pop songs, or high school essays, or carry out the entire job of a telephone banking operator, or at least the central parts where you talk on the phone. Doing human-level translation of written language. These things are quite close to some jobs that exist.
Yeah, that’s right. You mentioned that a surgeon would potentially take decades longer than some of the other things. I suspect that part of that comes back to robotics, and the different paces at which different technologies, such as AI as one technology and robotics as another, progress. How do you see the pace of robotics development in conjunction with AI development, and to what degree do you think they’ll … will they start to keep pace? Will robotics keep up or be the thing that holds things back? How do you think about those two different tracks?
I admit that I haven’t been following this closely. My impression is that robotics at the moment is fairly hard and expensive, so we haven’t seen a proliferation of robots. I guess advanced AI is also pretty hard and expensive at the moment, so it’s pretty unclear how they’ll keep pace or what exactly keeping pace means. I do think which order they happen might cause different things to happen in the world.
For instance, if we have quite good AI and not very good robotics we might try to push more tasks into virtual places where you don’t need to deal with the real world. Then maybe in the long run even when we have robotics, all of those things will be done virtually. Whereas, if we got robotics earlier we would stick with doing things in the real world more. This is pretty speculative and I haven’t actually looked into it a lot.
Sure. Although, it’s still interesting to hear from your perspective as an AI expert. Coming back to focus more on your specific AI expertise, these might not be things that you have particular experience in, but I’m interested in your guesses or even imaginations of where some creative fields … how they might be impacted by AI and automation, say, a decade from now. Research science, let’s use as the first example. How might research science be different in a decade as a result of AI?
Well, I think there are likely to be more tools for recognizing patterns and things, and allowing people to better see which hypotheses to test, that sort of thing. I don’t have a good sense of when such things will happen, but that’s the kind of thing I would expect to come out of this.
Sure. We definitely aren’t going to have a score card to see how long it takes or not.
It’s not every day that we get access to someone who has such insight in AI and where it’s going. I’m just interested in your instincts on some of these things. What about journalism? How might AI impact journalism in the future?
Yeah. You might imagine there being tools for helping to write better that are not entirely automated in the process. You could imagine something that just does journalism for you. I expect there to be better tools relatively soon. You can imagine something that pays attention to what you’re writing and tells you how to write it better, or points out problems with it.
At the moment, you have to actually write to a colleague and get them to read things. When I write things that’s often how it goes. You can imagine much better automated tools for that. As far as writing things fully, that seems … I guess I’ve seen fairly basic articles written fully, but in order to write a good article you need to understand the world to some extent, to understand what you’re talking about. The relevance of this topic with relation to …
So, understanding the world well enough to know who you’re writing to and what you’re talking about is perhaps, as you say, AGI complete, which means you don’t really have it until you have AI that is as good as humans at everything.
That’s interesting. Is context something that is AGI dependent, or let’s say broader context? Is that something that only will come with AGI?
I think things are pretty hard to say. You could imagine a system that has some sort of basic understanding of context but maybe lacks some other mental skills, if our mental skills are pretty modular, and I think that’s unclear. I think people often think of understanding things as well as humans do as a big part of being at human level.
Interesting. When people talk about the difference between humans and animals, or humans and AI, a lot of times art is the thing that is reserved or treated as uniquely human. In technology, I think we’re becoming disabused of that notion, if we ever had it. Crossing that boundary now, what about fiction writing? How do you see AI impacting fiction writing?
I guess, again, doing it well enough that the writer has a coherent picture of the world, a message or goal that they’re trying to convey, and an idea of an audience they’re trying to convey it to, seems AGI complete. I expect AI to be able to produce written words that are interesting in some way, or amusing, or that people find worth reading, maybe earlier than that.
In this survey, one of the things that I thought was quite interesting was that we asked when AI would be able to write a new Taylor Swift song as well as Taylor Swift can, such that a dedicated Taylor Swift fan would not be able to tell the difference between the new song and one that she wrote and performed. The people put that about 10 years out. That seems … you might think in order to really properly be a songwriter you need to understand the words that you’re saying. It’s possible these machine learning researchers just didn’t have a great view of Taylor Swift, but they thought this would be automatable soon.
What about management activities? To what degree do you think AI can be a tool that helps managers in doing their jobs?
It seems unlikely to be entirely automated any time soon, because it involves interfacing with and understanding so many different things. All the different things that are going on in an organization and different people and so on. My impression is that it’s generally easier to automate things that are a whole bunch of something similar. As far as tools go, I previously read something about a thing where they put sensors on people for weeks at a time and could figure out from the sensors what the social network was within the organization, and how people were feeling and how these things interacted with each other, and so on.
You could imagine without having to put sensors on people, but just interpreting the data that is easy to collect, maybe from everyone’s work emails or that sort of thing. With the ability to interpret things well, you could maybe figure out when tensions are rising or there’s something that a manager needs to see to. I think we don’t have tools like that yet but I can imagine there being tools like that based on machine learning to read the things.
In general, it seems like there’s space for tools to help with management that involve reading messy human systems.
That makes sense, yeah. From your perspective as an AI expert, where would you place your bets on the eventual development of AGI? Do you think it’s more likely to come from academia, military, commerce, somewhere else, and why?
I think at this point it’s pretty unclear. A big consideration is how much hardware you’re going to need for it. Recently the biggest headline results in AI have been using increasingly huge amounts of hardware. If AGI results from that sort of thing, using even bigger piles of hardware, then it sort of has to be at a place that can afford to spend a lot of money on huge piles of hardware, like a big company or a big government project or something like that.
I think it’s unclear whether that is what will happen, that you will need a lot of hardware. Another thing is, it’s not clear to what extent it is developed in one place, at all. If it’s basically one idea, maybe it happens in one place, but the development of deep learning, for instance, didn’t happen in one company. It happened across industry and academia. If getting to AGI involves lots of little parts being done in different places, potentially there might not be a straightforward answer to this.
That’s interesting. When we think of AI, we generally think of software engineers as the architects of the AI. We come from a community of designers and we’re curious, what role can or should designers have in the development of AI?
I come from a community of philosophers, so I’m less familiar with exactly what software engineers and designers do. To the extent that designing is about the interface between the machine and the people, and that sort of thing, that seems to be very important. That’s the sort of thing that the philosophers think about. If we build machines like this, what will actually happen in the world and how can people use them to get what they want? That sort of thing.
I’m pretty involved with the AI safety community, which is about that kind of thing. Is this an answer to your question?
Yeah, yeah. No, no. A lot of it is just hearing the facets and hearing your take, because you are coming from a very specific perspective, and it’s interesting to hear from you as a philosopher what your framing and perception is, so it’s very useful, yes. Thank you.
The basic thought is … it might be straightforward to make technology that can basically do a lot of things, and perhaps make very high quality decisions, play games really well, play games in the real world very well, like making a lot of money or something, but if that’s quite easy and we haven’t figured out exactly how to make them do exactly what we want, we might end up with a giant mess.
You see problems a bit like this with current companies. If you can work out how to set up a company that makes money by producing stuff, but you haven’t figured out how to make it care about all of the human values that you care about, whether the rivers are polluted or something, then you will tend to end up with polluted rivers. Similarly, if you make machines that are very good at optimizing for something and don’t really know what you care about, then without any malice they will end up destroying the things that you care about. That’s the kind of framing, I guess.
Sure. Is there any evidence, from your perspective as a philosopher, your experience, the things that you’ve learned, is there any evidence to believe AI will ever be more intelligent than humans, or not?
Yeah, I think so. Interpreting evidence broadly perhaps … I think there are reasons to think that. The question is, are humans as intelligent as a thing can be? Reasons to suspect not are … it seems like the human brain is fairly constrained by the need for it to fit in a human head and for mothers to be able to give birth to their children, that sort of thing. Biologically. How much energy does it use?
These days, it doesn’t matter that much to us if our brain uses a lot of energy. We would be happy to have an extra hundred IQ points even if we had to eat more. Historically those things were a big problem. It looks like there are biological constraints on our brains that mean they couldn’t be a lot more impressive, even if there were, in theory, greater heights of intelligence to reach. Also, looking at the human population, it seems like you can be a lot smarter than the average human, because occasionally we see very smart humans. Even having machines that were somewhat smarter than the smartest humans we’ve ever had would perhaps make a big difference to the world.
Katja, it’s been really interesting hearing your insights and your perspective on AI. Thank you so much for being on the show.
Pleasure. Thank you for having me.
Listeners, remember that while you’re listening to the show you can follow along with the things that we’re mentioning here in real time. Just head over to thedigitalife.com, that’s just one “L” in the digitalife, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everyone, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked.
You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. If you’d like to follow us outside of the show, you can follow me on Twitter @jonfollett, and of course the whole show is brought to you by Goinvo, a studio designing the future of healthcare and emerging technologies, which you can check out at goinvo.com, that’s g-o-i-n-v-o dot com. Dirk?
You can follow me on Twitter @dknemeyer, and thanks so much for listening. Katja, how about you?
If you want to know more about what I’m up to, go to aiimpacts.org. Thank you.
So, that’s it for episode 271 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.