Welcome to episode 233 of The Digital Life, a show about our insights into the future of design and technology. I’m your host Jon and with me is founder and cohost Dirk.
For our podcast this week, we’re going to chat about the Juvet Agenda, which was a special conference on artificial intelligence held in September of this year where a group of designers and future thinkers gathered at the Juvet nature retreat in Norway. Lovely space. You may have seen it in the movie Ex Machina, right, Dirk? That’s where they shot that film.
That’s true. Yeah. Part of it was shot at the Juvet Landscape Hotel, which is adjacent to the nature retreat that you’re mentioning. Some of the movie of course was shot in London in a sound studio, but some of it was shot at the hotel where we had our little gathering.
You were there, Dirk, to discuss AI, where it’s going, what the potential outcomes are. Andy Budd from Clearleft published a lovely … It’s not really a manifesto, but more of a summation, a bunch of questions that came out of your discussions there over the three days you were there. It’s really relevant to what we talk about on the show all the time, which is sort of the human face of these emerging technologies, so I thought today we could dig into the Juvet Agenda. Dirk, maybe you can shed some light on the different things that Andy and your group published on the site?
Sure. Before we get into that, let me tidy up some of the framing. I don’t know what the event was actually called. It wasn’t called the Juvet Agenda. The publication coming out of it is what we called the Juvet Agenda. It was organized by Andy Budd, who lent it his imprimatur along with Clearleft, his firm, and some of the leaders at his firm were also important in the gathering. There were 20 of us total who came together. As Andy sort of jokingly said, “It’s the mysterious benefactor inviting different and interesting people to an island to have this clandestine thing going on.” It was a very eclectic group, but the heart of it was certainly design and user experience.
I mean there were a lot of folks who are notable in that space, but there were also academics. There was a fiction author, researchers, a lot of different people at the same time. Of the 20 of us, probably about half were design UX-ish and then the other half were from these sort of different disciplines. The idea was to talk about AI because as we’ve talked about on the show for a long time now, AI is just the media darling at the moment. It’s totally on trend. Has a lot of attention and investment and a lot of … Sort of a lot of ridiculously big things are being expected of AI at this point. I would say most of us there already were pretty well researched on AI and already had sort of a perspective and opinion, but we were all there to learn.
It was people who had a clue, but were looking to take and extend our framing of AI and talk about what we think the future looks like, where AI is going and how we as this group of creatives, sort of designerly creatives, will be participating in that future.
Yeah, that sounds like it was a lot of fun. I mean just based on the reading that I’ve done around this agenda that you published, it asks a lot of questions in a human-centric and sort of practical and creative-centric way, which I think is going to be absolutely essential to making sure that technology serves people and not necessarily corporations or governments or larger interests that already have lots of power, right?
There were three or four areas, sort of larger areas of exploration that are outlined on the agenda. I thought we could take a dive into each one of those and get your thoughts on it, Dirk, and then also I’ll try to summarize some of what I learned on the site.
The first area is power and control, which I thought was a good way to sort of dive into AI. The areas of concern outlined in the agenda are economics and the already existing disparities, whether technological or financial or what have you. Those have the potential to be broadened significantly by the additional leverage of artificial intelligence, in part because it seems like the tech industry, this is a general statement, tends to have these winner-take-all markets. It’s all too possible that AI will result in another winner-take-all scenario.
When you were exploring that, Dirk, what were the questions that you were asking? I don’t know if you delved into ways that we could make AI more egalitarian, but what are your thoughts on that?
From a very personal perspective, one of the delightful things for me at this gathering was that most of the people like me are anti-capitalist. That was a surprise because in my experience of networking with other sort of high level creatives over the years, I’m usually on the fringes in that, but here there were a lot of people who … I don’t know to say a majority, but possibly a majority who were firmly anti-capitalist as I am. Like throw the whole fucking thing out. It’s the wrong thing.
Most of the people, certainly people who articulated the position, saw the bad of capitalism and thought getting AI in the right direction would only happen through overcoming capitalism as a boundary, as a barrier, as sort of the cancerous thing blocking our health and our way. It was really gratifying to be in a community of people that thought similarly in that way because here in the United States usually people just think I’m bat shit crazy. The United States might be a key thing to mention there because of the 20 people, half or more were specifically from the UK. Clearleft and Andy are UK based. That network was coming through. There were only a handful of us from the United States and a handful from Continental Europe as well.
I don’t think there were any people from Asia. That perspective may certainly be biased by the participants coming from socialist countries and countries with a little bit different bent on these things than we have in the United States. A lot of the stuff about power and control really came down to the pursuit and acquisition of capital and the mindset of the system and the people who were participating therein of trying to maximize, as opposed to looking more holistically and trying to create something that improves the situation of humanity, as opposed to just themselves or the organizations for which they’re an agent.
I agree with that approach there. I think there are other sort of centers of power, right, that could potentially use AI to their benefit. I mean in the US as you pointed out, we have a very strong capitalist system and we also have a strong governmental system, right? There is the potential for AI to be misused by all kinds of centers of power, whether it’s the NSA, the CIA, name your intelligence-gathering governmental arm of choice. It’s an open question whether they would be using AI in a responsible way. There are certainly ways of managing that, and we’ll need to consider those as well, but yes, I do take your point that the acquisition of capital is certainly a driving factor that would make AI a tool for leverage and not for everybody.
That threat was certainly also present. I mean there was an acknowledgment of the power of governments to take the technology in dystopic ways. Certainly the primary, top-of-mind concern people were talking about was the corporations: Google, Apple, Facebook and Amazon. That was top of mind for people and what people seem to more urgently be concerned about, as I think I myself am. Not that the threat ultimately is greater from the corporations as opposed to the governments, but it’s certainly more immediate.
Right. The second topic of exploration outlined in the agenda here was around bias and transparency and authenticity. We’ve talked about this quite a bit on the show in a number of different ways including discussing controlling bias, exploring bias first in our own existing rule sets. Then really making it possible to audit the artificial intelligence, the machine learning to expose those biases in the system. Just for an example, the use of software in the criminal justice system to do things like vet individuals for possible parole, right?
Including certain rules in the machine learning may make it very difficult for people from certain places, right, or from certain economic backgrounds to be paroled, as opposed to setting up a system that’s actually attempting to reform rather than being biased in that way. What was the discussion about bias and transparency like at the retreat?
I mean there were a number of different conversations. With bias, it’s interesting because this is a group of people who are very socially progressive. When we’re having conversations of bias, we’re assuming things that in the public discourse are contentious, right? Many people from the media to conservatives to people who are maybe generally less educated question the pervasiveness of bias in a lot of different ways. For us at this gathering again to the degree that people express a position, we’re sort of all on the same page and it was just assumed. Understanding how bias works, how it so easily infiltrates and how it so powerfully poisons context, this was really far forward in our thinking.
I mean bias was certainly one of the couple of words that was most used over the time together. Transparency falls under a bigger umbrella that that word is representing. There was a lot of overlap, frankly, with some of the things that we were talking about before with power. The lack of transparency is something that can be leveraged by powerful organizations or governments to misuse AI, to use it as a club, as opposed to a tool. Transparency ultimately is the key for vetting what’s going on with AI. The problem is it’s easy to say, if you’re the creator of an AI, that it’s just this giant black box and so much is going on that there’s no possible way to have transparency.
The flip side is there needs to be transparency, otherwise we’re going to fall victim to bad actors, either intentionally bad actors or accidentally bad actors, that hurt us regardless of their intention.
Right. As part of this discussion, or in the outline of it, there’s a subpoint here, which I thought was interesting, around exploring AI interfaces and asking should they be human-like and what are the alternatives. In particular, it’s interesting that we tend to gravitate towards interfaces that we’re familiar with at the outset of a new technology. The iPhone UI famously started with these skeuomorphic representations of icons that sort of looked like the objects they were representing. Since people now kind of understand the interactions, it’s no longer necessary to be quite so ham-handed with the graphical representations.
I don’t know where we are in the course of creating the design language for voice user interfaces, for instance, or wherever interfaces with artificial intelligence tend to fall. The chatbot is a sort of popular interface right now. If you can remember, what were the other alternative UIs for AI? Because I can think of a few that would be quite interesting, but of course we’re at this one point where we’re really talking about almost conversational user interfaces. What sort of alternatives did you guys look at?
We really weren’t talking at the level of UIs. There may have been a breakout or more than one breakout that did delve into that a little bit, but in the things that I participated in, that was not a topic for consideration. We’ve probably talked about that more on the show, you and I. It’s not something that I took away from Juvet with more insight on just because it wasn’t brought up in conversations I was part of. The focus was a little bit higher, a little bit more systemic.
Got you. Got you. Yeah. I think with our smart wear discussions, we’ve talked about AI being integrated into software in such a way that you’re anticipating user needs and you’re creating sort of custom interfaces that are sort of reforming on the fly to match the needs of the user, et cetera. I think that’s a completely fascinating realm and I’m sure we’ll do another separate show on that sometime soon. The next area of exploration that’s listed here is human and societal well-being and I like this question from the site, so I’m going to quote it directly, “If this change is inevitable, how might we ensure humanity is changed for the better?” Wow. That’s a pretty tall order, isn’t it?
It is. One of the things about the Juvet Agenda that you noted is it is more questions than answers. That was intentional, both from the standpoint that we didn’t have the hubris to throw out there that we had all the answers, but also some of these things are questions that we weren’t even getting into the answers on. That’s a good example of one that we collectively are concerned about and have a lot invested in, but it’s just so big. The previous question about UI was one end of the spectrum, and this question is the opposite end. We were sort of in the middle, I would say, but this was very important to …
Again I don’t want to say all, but I mean in general, to people who were articulating what concerned them or what mattered to them, this was right towards the front of the list.
As a subpoint to this, there’s discussion of the future of work, which once again we’ve delved into often and sort of this fear of an automated future in which the human being automated out and there’s no reason for a person to continue working because the machines do it all. Certainly a fear there and not entirely unfounded.
A little unfounded, right? I mean it’s a little melodramatic. Yes, jobs are going to be going away. Huge chunks of what we do today are going to go away, but something else will come after it. There will be new and different things. If we look at the history of work and the history of technology and how technology impacts work, while some industries have gone away, new industries have come up. We recently had, I think, one of the lowest unemployment rates in recent memory, whatever that statistic is.
That is despite the fact that over the last 200-some years since the Industrial Revolution first got started, lots of industries have been murdered, or lots of people have seen their skills become obsolete nearly overnight, and yet we have very low unemployment in the first world. I’ll keep it to the United States. Here in the United States we have very low unemployment even today despite all of these things happening. It is possible that AI is such a cataclysmic event that there’s mass unemployment, totally bizarre and bad for the condition of many of us humans in the process, but history as a guide would suggest that the hue and cry over the possible future unemployment driven by AI is grossly exaggerated.
Yeah. I have faith that any forthcoming technology, I have faith in its ability to increase the length of my to-do list. To put it in the negative, there’s no technology that has ever made my life simpler. The last area of discussion here is perspectives for action, right? I imagine, and you can clarify this, Dirk, that these were the areas or the section where you sort of laid out some of the things that you might do, that may be the next steps for you all at the retreat. One of these perspectives for action that jumped out at me, and I think is pretty important, is this mainstream understanding of artificial intelligence and how it’s …
On the site the phrase is “haphazard understanding,” but it’s this idea that we don’t really have a solid grasp on it and therefore, I would think following on that, we can’t have informed discussions and drive the agenda if we don’t fully understand what AI is about and what’s possible with it.
Yeah, that’s right. It’s a problem that we still face. I mean if you go to The New York Times or CNN or Fox News or whichever media outlet you choose and search for artificial intelligence, the preponderance of the articles and the content that people are consuming on this are just crappy. They’re focused on niche technologies. I mean Sophia the Robot is getting a lot of play now. Sophia is a parlor trick, but you would think that it’s sort of the forefront of artificial intelligence. I think I read something today where even the creators of Sophia are saying, “No, it’s not even artificial intelligence,” right? You have all the conversation, which we touched on in the agenda, about is there going to be a whole different species.
Is AI essentially going to become an extinction event for humanity? While you need to sort of nod to that as a possibility that’s on the table, it’s A, super unlikely and B, decades down the road in the best case, if it even happens. Yet it’s a disproportionate amount of what is being talked about in the mainstream media and by people on the street. Haphazard is the way that people understand this stuff. How many people who are reading about AI and are talking to their friends, family, colleagues about AI could give a coherent definition of weak versus strong AI, or could talk about what artificial general intelligence is, or what superintelligence is? What do those terms even mean? Those are the basic definitional terms for categorizing types of AI.
People aren’t even familiar with them, let alone able to explain them in a coherent way. There’s a total lack of understanding of what’s real and what’s not. There’s a lack of vocabulary. There’s a lack of just basic baseline understanding. My hope is that through things like the Juvet Agenda and things being done by a lot of smart, concerned, caring people around the world on the topic of AI, we can get the word out and slowly get people to understand what it is we’re really talking about, and shift the group perspective from a haphazard one to more of an informed one.
Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to thedigitalife.com, that’s just one L in the Digitalife, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everybody. It’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM and Google Play. If you want to follow us outside of the show, you can find me on Twitter @jonfollett. That’s J-O-N-F-O-L-L-E-T-T. Of course, the whole show is brought to you by GoInvo, which you can check out at goinvo.com. That’s G-O-I-N-V-O.com. Dirk?
You can follow me on Twitter @dknemeyer. That’s D-K-N-E-M-E-Y-E-R. Thanks so much for listening.
That’s it for episode 233 of The Digital Life. For Dirk, I’m Jon and we’ll see you next time.