Bull Session

AI Goes Mainstream

February 3, 2017          

Episode Summary

On this episode of The Digital Life, we discuss the high-powered Partnership on Artificial Intelligence to Benefit People and Society, an initiative whose founding members include Amazon, Facebook, Google, IBM, Microsoft, and, most recently, Apple.

The mission of the group is to educate the public about AI, study its potential impact on the world, and develop standards and ethics around its implementation. Interestingly, the group also includes organizations with expertise in economics, civil rights, and research, who are concerned with the impacts of technology on modern society. These include: the American Civil Liberties Union (ACLU), the MacArthur Foundation, OpenAI, the Association for the Advancement of Artificial Intelligence, Arizona State University, and the University of California, Berkeley.

Will AI build upon our society’s biases and disparities and make them worse? Or does it have the potential to create something more egalitarian? Join us as we discuss all this and more.

 
Resources:
Apple joins Amazon, Facebook, Google, IBM and Microsoft in AI initiative
Partnership on AI
A massive AI partnership is tapping civil rights and economic experts to keep AI safe

Jon:
Welcome to episode 192 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host, Dirk Knemeyer.

Dirk:
Greetings, listeners.

Jon:
On this episode of the show, we’re going to discuss the high-powered Partnership on Artificial Intelligence to Benefit People and Society. That’s a mouthful.

Dirk:
Indeed.

Jon:
This initiative includes founding members like Amazon, Facebook, Google, IBM, Microsoft, and the just recently added Apple. The mission of this group, this AI partnership, is to educate the public about what artificial intelligence is, and presumably its benefits, to conduct research and study AI’s potential impact on the world going forward, as well as to develop standards and ethics around AI implementation. Interestingly, the group also includes some organizations that you may or may not associate with high tech, more in the economics category or civil rights research groups. These were just added recently, but they include the American Civil Liberties Union, the ACLU, the MacArthur Foundation, Arizona State University, the University of California, Berkeley, and then some other associations for researching artificial intelligence, including OpenAI and the Association for the Advancement of Artificial Intelligence.

Now this partnership, this is the biggest, the best, the brightest organization considering how AI is going to shape our future, which says to me that there is a lot of money coming into AI if you’ve got all of these new economy superstars onboard. The big question, I think, partly for this show for us to consider, is the promise of AI. You can build so many great new things. Are we just going to keep repeating the mistakes that we have made in the past, or are we going to use AI as leverage to help solve some of the problems that we’ve created with technology, with society, and with all those things? Some of the biggest issues, we’ve of course talked about some of these at length, from education to poverty to healthcare to global warming, all of these things you could see AI getting involved in. I think part of the question for this group, the Partnership on AI, and for us as a society as we start talking about this for real now, is how are we going to leverage these possibilities? How are we going to move forward?

I admire this group for getting together. Their first board meeting’s in a few days, on February 3rd where they’ll start laying out their, oh I don’t know, their first hundred days maybe of what they’re going to do. Nonetheless, I’m glad they’ve gotten together and now we’ve got a spark of a conversation happening. What are your thoughts on this, Dirk?

Dirk:
Well, I’m skeptical that they’ll be talking about the right things. Underpinning our current reality is the notion that individual liberties, the liberties the West has been built upon and has developed over the last several hundred years, are the absolute good and right. I don’t think that’s true anymore, but it’s not being questioned. The framing we’re using to work out how AI interacts with the world is coming from that perspective. What I mean by it is the libertarian view, which is sort of the purest expression of individual liberties. It’s saying, “Look, if you’re not hurting other people, do whatever you want. Bottom line. You’re in control. The government shouldn’t tell you who you can and can’t marry, and the government shouldn’t tell you that you have to serve everyone. You should be able to serve whoever you want.” That’s pure libertarianism and is, again, sort of the purest expression of the socially liberal approach.

The problem with that is we live in a world where we are raising the temperature of the planet, where even conservative scientific estimates have that warming displacing hundreds of millions of people over time, and even in the shorter term a lot of people, primarily poor people, primarily people in countries that we don’t think about a lot here in the United States, such as in southeast Asia. A lot of people are going to have nowhere to live and nowhere to go as the ocean gets higher. Why I think this is relevant is that now, if I’m flying from New York to Los Angeles, I am demonstrably increasing that eventuality. In exercising my individual liberty to spend my money and fly across the country, I’m a contributor to the future displacement and ruin of people who are nowhere I can see them, but they will be impacted by that. If I own a big property and I want to go out and burn some tires, I should be able to do that, right? But if I’m doing that, it is going to hurt people that I can’t see and don’t care about.

I personally care that a lot of people don’t care about those who are just outside their field of vision. This is relevant because in the way we frame the intersection of artificial intelligence and human reality, we need to shift our perspective from the now-outdated one of individual liberty as the highest good to one of systemic liberty as the highest good, and I don’t hear people talking about that. I don’t hear that on the agenda. I don’t see that kind of shift happening, whether it’s on the conservative side or the liberal side. Everybody is expressing individual liberties, just in very different philosophical ways, but we’re now in a world with a sort of butterfly effect, where one little thing happening in the United States could have catastrophic effects on people literally on the other side of the world. That needs to be the kind of framing as we tackle these problems, and I’m really skeptical that the capitalist, greedy organizations of Apple and Google et al. are going to frame it that way.

They’re coming at it as businesses, businesses looking to make profits, and that, perhaps tempered by some humanistic altruism but driven by profits first and foremost, is the order of the day. To some degree, I’m glad it’s happening, but I think it’s very likely their eye is going to be on the wrong ball, and they’re going to put agendas in place that are 20th century agendas and not future-looking agendas. I’m skeptical, Jon.

Jon:
So from your perspective, Dirk, AI at this juncture is going to perpetuate the biases that already exist just by the nature of the system that it’s being injected into.

Dirk:
By the nature of the framing of the people in charge, of the ecosystem that they envision and how the AI is a participant in that ecosystem. My contention is that their framing for the ecosystem is completely flawed.

Jon:
Yeah, that’s interesting. I would think that, as you pointed out, it’s just the simple structure of all of these companies coming together, driven by wanting to increase shareholder value, wanting to make a good living, wanting to return profits into a bank account. Structurally, that ultimately means that’s going to drive decision-making in a certain way.

Dirk:
Yes.

Jon:
Now, if you look at the other institutions who are a part of this equation, whether it’s the ACLU or the MacArthur Foundation or some of the universities involved, their perspective is certainly different from the companies’ perspective, and I would think that there would be some balance achieved there. The question, of course, becomes-

Dirk:
I’m not sure about balance. I think there might be other voices there, but even at the level of universities, universities over the last 20 to 30 years have been intentionally shifting away from the more traditional, high-minded academic approaches toward business thinking: “How do we better train people for jobs? How do we better monetize the market?” That’s been a very big shift in academia over the last generation plus. It depends who you have in the room at that point. If you have the right people in the room, then maybe they’re providing some kind of counterbalance, but don’t kid yourself. It’s these gigantic monoliths of companies that are going to be steering these conversations, because they’re the ones that control the technologies and they’re the ones that are going to be executing at the end of the day. The ACLU’s participation, and that of some of these other organizations, cynically I think it’s going to be hand-waving, it’s going to be incremental impact, but I want to be clear.
Yes, I’m skeptical about the bus being driven by the greedy corporations, but the point I’m really making is that whether it be the ACLU, the greedy corporations, or universities with questionable priorities around profits versus altruism, my concern is that all of them are looking at a 20th century, sort of an old Enlightenment-era framework of individual liberty as the highest good, when we are now in an era where we need to be looking at systemic liberty as the highest good and acting on it. To me, that’s the biggest fail, unless they totally surprise me, but I’d be shocked.

Jon:
So what are the elements that would be required within this partnership on AI to tip the balance more towards a kind of systemic evaluation that you’re talking about? Who are the people or the organizations? How would that look?

Dirk:
I suspect that the focus is going to be on humanity. What is a human? I think they’re going to ask questions. What defines a human? What’s the role AI should have in the context of humanity? Can AI evolve to be human? Is there a synthesis of humans and machines? I think that’s sort of the crucible in which a lot of this is going to be looked at, where I think those are really second-order issues right now. I think the bigger issue is the fact that what is being done by me here in the Boston, Massachusetts area, in terms of living my life normally in the system as it’s designed, is contributing to the fact that decades from now, people are going to be displaced and in some cases, people will just die because of these effects of global warming in particular.

What the focus should be on is, first, how do we reframe what society, government, and civilization look like in the context of a world where the actions of a first-world actor in this country directly impact, in negative ways, people in other places? How is AI, which it can be crucially, a catalyst to managing the behavior, managing the manifestation of the system here in the first world, to minimize the damage done later to people in other places? It’s looking at the problem with AI as a tool to radically reduce the serious, horrible risks and threats we have to the future of our planet and the living spaces that we currently have on it, as opposed to a lot of hand-waving about what is a human, how does AI interact with it, what are the definitions therein, and how do we design AI so that it’s preserving humanity, preserving the space and rights of humanity. To me, that’s second order. It matters, but the big problem is we’re completely ignoring the fact that we need to focus on liberty as a system at the macro level, not liberty as an individual at the micro level.

Jon:
Yeah, I think that will be a topic that would definitely raise some hackles in the individual-driven society that we are. At least in the United States, it’s mostly about individual achievement, individual prosperity, individual outcomes. The collective viewpoint is certainly part of other societies in a more substantial way. Other cultures have more group-oriented philosophies.

Dirk:
Still not adequately so, though. It still is, for the most part, built on the back of nationalism and built on the back of what now should be considered really outdated ways of looking at the world. We’re now part of a human community as opposed to a community of nations, if only by virtue of the ways in which technology here with me in the micro impacts people everywhere in the macro.

Jon:
Yeah. Whether you’re talking about the global supply chain or just the sheer number of connections that we have as a result of our consumption, our communication, and the tremendous number of people involved, growing every day, there are fewer boundaries and barriers, although as the U.S., we’re trying our darndest to remove ourselves from that given our current political climate. It’s going to be very, very interesting to see what comes out of this Partnership on AI group and see where they take it. I’m sure we’ll be paying attention to that.

Dirk:
Jon, here’s the irony of the extreme focus on individual liberties that we live within: again, that can be traced back to the Enlightenment, but those things did not come about as ends in themselves. Those were ends designed to fix the problem of the oppression of the many by the few. It was focusing on issues of individual liberty to elevate the individual not just as one person, but to elevate the individual as a group, as a mass. Let’s focus really narrowly on the American, the United States founding fathers, because those are the folks that the conservatives in particular like to point out and say were the real wise guys, in a mafioso kind of way.
They got there because they were trying to create something that was more utopian, something that was better for all. I believe that if they lived today and were looking at the world and trying to frame what would be the right government, what would be the right structure for the future, it would not be the extreme focus on individual liberties that they came to in the mid-18th century. It would be something that was much more holistic, much broader thinking, and really something for the 21st century. These companies, Apple, Google, Facebook, whatever they are, they’re rooted still in the 20th century from everything that I’ve seen. I’d love to see some 21st century leadership out of these folks. I’ve just seen nothing that suggests we’ll actually see it.

Jon:
Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to thedigitalife.com, that’s just one “l” in thedigitalife, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, PlayerFM, and Google Play. If you want to follow us outside of the show, you can follow me on Twitter @JonFollett. That’s J-o-n F-o-l-l-e-t-t. Of course, the whole show’s brought to you by Involution Studios, which you can check out at goinvo.com. That’s g-o-i-n-v-o dot com. Dirk?

Dirk:
You can follow me on Twitter @DKnemeyer. That’s D-K-n-e-m-e-y-e-r. Thanks so much for listening.

Jon:
That’s it for episode 192 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett. We’ll see you next time.
