Bull Session

AI Goes Mainstream

February 3, 2017          

Episode Summary

On this episode of The Digital Life, we discuss the high-powered Partnership on Artificial Intelligence to Benefit People and Society, an initiative whose founding members include Amazon, Facebook, Google, IBM, Microsoft, and, most recently, Apple.

The mission of the group is to educate the public about AI, study its potential impact on the world, and develop standards and ethics around its implementation. Interestingly, the group also includes organizations with expertise in economics, civil rights, and research, who are concerned with the impacts of technology on modern society. These include: the American Civil Liberties Union (ACLU), the MacArthur Foundation, OpenAI, the Association for the Advancement of Artificial Intelligence, Arizona State University, and the University of California, Berkeley.

Will AI build upon our society’s biases and disparities and make them worse? Or does it have the potential to create something more egalitarian? Join us as we discuss all this and more.

Apple joins Amazon, Facebook, Google, IBM and Microsoft in AI initiative
Partnership on AI
A massive AI partnership is tapping civil rights and economic experts to keep AI safe

Welcome to episode 192 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett, and with me is founder and co-host, Dirk Knemeyer.

Greetings, listeners.

On this episode of the show, we’re going to discuss the high-powered Partnership on Artificial Intelligence to Benefit People and Society. That’s a mouthful.


This initiative includes founding members like Amazon, Facebook, Google, IBM, Microsoft, and the just recently added Apple. The mission of this group, this AI partnership, is to educate the public about what artificial intelligence is, and presumably its benefits, to conduct research and study AI’s potential impact on the world going forward, as well as to develop standards and ethics around AI implementation. Interestingly, the group also includes some organizations that you may or may not associate with high tech, more in the economics category or civil rights research groups. These were just added recently, but they include the American Civil Liberties Union, the ACLU, the MacArthur Foundation, Arizona State University, University of California, Berkeley, and then some other associations for researching artificial intelligence, including OpenAI and the Association for the Advancement of Artificial Intelligence.

Now this partnership, this is the biggest, the best, the brightest organization considering how AI is going to shape our future, which says to me that there is a lot of money coming into AI if you’ve got all of these new economy superstars onboard. The big question, I think, partly for this show for us to consider, is the promise of AI. You can build so many great new things. Are we just going to keep repeating the mistakes that we have made in the past, or are we going to use AI as leverage to help solve some of the problems that we’ve created with technology, with society, and with all those things? Some of the biggest issues, and we’ve of course talked about some of these at length, from education to poverty to healthcare to global warming, all of these are things you could see AI getting involved in. I think part of the question for this group, the Partnership on AI, and for us as a society as we start talking about this for real now, is how are we going to leverage these possibilities? How are we going to move forward?

I admire this group for getting together. Their first board meeting’s in a few days, on February 3rd where they’ll start laying out their, oh I don’t know, their first hundred days maybe of what they’re going to do. Nonetheless, I’m glad they’ve gotten together and now we’ve got a spark of a conversation happening. What are your thoughts on this, Dirk?

Well, I’m skeptical that they’ll be talking about the right things. Underpinning our current reality is the assumption that the individual liberties the West has been built upon, and has developed over the last hundreds of years, are the absolute good and right. I don’t think that’s true anymore, but it’s not being questioned. The framing we’re using to work out how AI interacts with the world is coming from that perspective. What I mean by it is the libertarian view, which is sort of the purest expression of individual liberties. It’s saying, “Look, if you’re not hurting other people, do whatever you want. Bottom line. You’re in control. The government shouldn’t tell you who you can and can’t marry, and the government shouldn’t tell you that you have to serve everyone. You should be able to serve whoever you want.” That’s pure libertarianism and is, again, sort of the purest expression of the socially liberal approach.

The problem with that is we live in a world where we are raising the temperature of the planet, and where even conservative scientific estimates have that warming displacing hundreds of millions of people over time, and a lot of people even in the shorter term, primarily poor people, primarily people in countries that we don’t think about a lot here in the United States, such as in Southeast Asia. A lot of people are going to have nowhere to live and nowhere to go as the ocean gets higher. Why I think this is relevant: if I’m flying from New York to Los Angeles, I am demonstrably increasing that eventuality. In exercising my individual liberty to spend my money and fly across the country, I’m a contributor to the future displacement and ruin of people who are nowhere I can see them, but they will be impacted by it. If I own a big property and I want to go out and burn some tires, I should be able to do that, right? But if I’m doing that, it is going to hurt people that I can’t see and don’t care about.

I personally care that a lot of people don’t care about those who are just outside their field of vision. This is relevant because, in the way we frame the intersection of artificial intelligence and human reality, we need to shift our perspective from the now-outdated one of individual liberty as the highest good to one of systemic liberty as the highest good, and I don’t hear people talking about that. I don’t hear that on the agenda. I don’t see that kind of shift happening, whether it’s on the conservative side or the liberal side. Everybody is expressing individual liberties, just in very different philosophical ways. But we’re now in a world with a sort of butterfly effect, where one little thing happening in the United States could have catastrophic effects on people literally on the other side of the world. That needs to be the framing as we tackle these problems, and I’m really skeptical that the capitalist, greedy organizations of Apple and Google et al. are going to frame it that way.

They’re coming at it as businesses, businesses looking to make profits, and that, perhaps tempered by some humanistic altruism but driven by profits first and foremost, is the order of the day. To some degree, I’m glad it’s happening, but I think it’s very likely their eye is going to be on the wrong ball, and they’re going to put agendas in place that are 20th century agendas and not future-looking agendas. I’m skeptical, Jon.

So from your perspective, Dirk, AI at this juncture is going to perpetuate the biases that already exist just by the nature of the system that it’s being injected into.

By the nature of the framing of the people in charge, for the ecosystem that they envision and how AI is a participant in that ecosystem. My contention is that their framing of the ecosystem is completely flawed.

Yeah, that’s interesting. I would think that just the simple structure of, as you pointed out, all of these companies coming together that are driven by wanting to increase shareholder value, wanting to make a good living, wanting to return profits into a bank account. Structurally, that ultimately means that that’s going to drive decision-making in a certain way.


Now, if you look at the other institutions who are a part of this equation, whether it’s the ACLU or the MacArthur Foundation or some of the universities involved, their perspective is certainly different from the companies’ perspective, and I would think that there would be some balance achieved there. The question, of course, becomes-

I’m not sure about balance. I think there might be other voices there, but even at the level of universities, I mean, universities over the last 20 to 30 years have been intentionally shifting away from the more traditional, high-minded academic approaches toward business: “How do we better train people for jobs? How do we better monetize the market?” That’s been a very big shift in academia over the last generation plus. It depends who you have in the room at that point. If you have the right people in the room, then maybe they’re providing some kind of counterbalance, but don’t kid yourself. It’s these gigantic monoliths of companies that are going to be steering these conversations, because they’re the ones that control the technologies and they’re the ones that are going to be executing at the end of the day. The ACLU’s participation, and that of some of these other organizations, cynically I think is going to be hand-waving, incremental impact. But I want to be clear.
Yes, I’m skeptical about the bus being driven by the greedy corporations, but the point I’m really making is that whether it be the ACLU, the greedy corporations, or universities with questionable priorities around profits versus altruism, my concern is that all of them are looking through a 20th century, sort of old Enlightenment-era framework of individual liberty as the highest good, when we are now in an era where we need to be looking at systemic liberty as the highest good and acting on it. To me, that’s the biggest fail, unless they totally surprise me, but I’d be shocked.

So what are the elements that would be required within this partnership on AI to tip the balance more towards a kind of systemic evaluation that you’re talking about? Who are the people or the organizations? How would that look?

I suspect that the focus is going to be on humanity. What is a human? I think they’re going to ask questions like: What defines a human? What role should AI have in the context of humanity? Can AI evolve to be human? Is there a synthesis of humans and machines? I think that’s sort of the crucible in which a lot of this is going to be looked at, where I think those are really second-order issues right now. I think the bigger issue is the fact that what is being done by me here in the Boston, Massachusetts area, in terms of living my life normally in the system as it’s designed, is contributing to the fact that decades from now, people are going to be displaced and in some cases, people will just die because of the effects of global warming in particular.

What the focus should be on is, first, how do we reframe what society, government, and civilization look like in the context of a world where the actions of a first-world actor in this country directly impact, in negative ways, people in other places? How can AI, which it crucially can be, serve as a catalyst for managing the behavior, managing the manifestation, of the system here in the first world to minimize the damage done later to people in other places? It’s looking at the problem with AI as a tool to radically reduce the serious, horrible risks and threats to the future of our planet and the living spaces we currently have on it, as opposed to a lot of hand-waving about what a human is and how AI interacts with it, what the definitions therein are, and how we design AI so that it preserves humanity, preserves the space and rights of humanity. To me, that’s second order. It matters, but the big problem is we’re completely ignoring the fact that we need to focus on liberty as a system at the macro level, not liberty as an individual at the micro level.

Yeah, I think that’s a topic that would definitely raise some hackles in the individual-driven society that we are. At least in the United States, it’s mostly about individual achievement, individual prosperity, individual outcomes. The collective viewpoint is certainly part of other societies in a more substantial way. Other cultures have more group-oriented philosophies.

Still not adequately so, though. It still is, for the most part, built on the back of nationalism, built on the back of what should now be considered really outdated ways of looking at the world. We still think of ourselves as a community of nations rather than a human community, even though technology means that what I do here in the micro impacts people everywhere in the macro.

Yeah. Whether you’re talking about the global supply chain or just the sheer number of connections that we have as a result of our consumption and our communication, and the tremendous number of people involved, growing every day, there are fewer boundaries and barriers, although as the U.S., we’re trying our darndest to remove ourselves from that, given our current political climate. It’s going to be very, very interesting to see what comes out of this Partnership on AI group and see where they take it. I’m sure we’ll be paying attention to that.

Jon, the irony of the extreme focus on individual liberties that we live within, which again can be traced back to the Enlightenment, is that those liberties did not come as ends in themselves. They were means designed to fix the problem of the oppression of the many by the few. The focus on individual liberty was there to elevate the individual not just as one person, but to elevate individuals as a group, as a mass. Let’s focus really narrowly on the American founding fathers, because those are the folks that the conservatives in particular like to point to and say were the real wise guys, in a mafioso kind of way.
They got there because they were trying to create something that was more utopian, something that was better for all. I believe that if they lived today and were looking at the world and trying to frame what would be the right government, the right structure for the future, it would not be the extreme focus on individual liberties that they came to in the mid-18th century. It would be something much more holistic, much broader thinking, really something for the 21st century. These companies, Apple, Google, Facebook, whatever they are, they’re still rooted in the 20th century from everything that I’ve seen. I’d love to see some 21st century leadership out of these folks. I’ve just seen nothing that suggests we’ll actually see it.

Listeners, remember that while you’re listening to the show, you can follow along with the things that we’re mentioning here in real time. Just head over to (that’s just one “l” in thedigitalife) and go to the page for this episode. We’ve included links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, PlayerFM, and Google Play. If you want to follow us outside of the show, you can follow me on Twitter @JonFollett. That’s J-o-n F-o-l-l-e-t-t. Of course, the whole show is brought to you by Involution Studios, which you can check out at That’s g-o-i-n-v-o dot com. Dirk?

You can follow me on Twitter @DKnemeyer. That’s D-K-n-e-m-e-y-e-r. Thanks so much for listening.

That’s it for episode 192 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett. We’ll see you next time.


Jon Follett

Jon is Principal of Involution Studios and an internationally published author on the topics of user experience and information design. His most recent book, Designing for Emerging Technologies: UX for Genomics, Robotics and the Internet of Things, was published by O’Reilly Media.

Dirk Knemeyer

Dirk is a social futurist and a founder of Involution Studios. He envisions new systems for organizational, social, and personal change, helping leaders to make radical transformations. Dirk is a frequent speaker who has shared his ideas at TEDx, Transhumanism+ and SXSW, along with keynotes in Europe and the US. He has been published in Business Week and has participated on 15 boards spanning industries like healthcare, publishing, and education.


Co-Host & Producer

Jonathan Follett @jonfollett

Co-Host & Founder

Dirk Knemeyer @dknemeyer

Minister of Agit-Prop

Juhan Sonin @jsonin

Audio Engineer

Michael Hermes

Technical Support

Eric Benoit @ebenoit

Mackenzie Cameron @theauthorm

Original Music

Ian Dorsch @iandorsch

Bull Session

A Year Talking Tech

December 22, 2016          

Episode Summary

For our final podcast of 2016, we chat about the big themes on the show and our favorite episodes over the past year. We had conversations on design and tech with some wonderful guests, including groundbreaking geneticist George Church and open science advocate and researcher John Wilbanks. From AI to genomics to cybersecurity, we covered a wide range of topics on The Digital Life in 2016. So what did we learn from a year talking tech?

AI is too smart for its own good.
Artificial intelligence is evolving rapidly, with both high-profile public failures and successes by a number of tech giants this year. For instance, Microsoft had to terminate Tay, its teenage chatbot, after the bot started tweeting neo-Nazi propaganda and other abusive language at people. Meanwhile, Google’s DeepMind created an AI capable of beating some of the very best human players in the world at Go, the Asian strategy board game. And we were introduced to a brand new “Rembrandt,” 3D-printed with eerie accuracy from the output of an artificial intelligence algorithm trained by analyzing the artist’s paintings.

Episode 149: Artificial Intelligence
Episode 151: AI Goes to Art School
Episode 163: AI Goes to the Ballpark

DNA replaces silicon as the new material for innovation.
The fields of genomics and synthetic biology continue to press forward in astonishing ways. In Seoul, Korea, a controversial lab revealed plans to clone endangered animals in order to save them from extinction. At the Massachusetts Institute of Technology (MIT) and Boston University (BU), synthetic biologists created software that automates the design of DNA circuits for living cells.

Episode 148: On Cloning
Episode 150: Engineering Synthetic Biology
Episode 154: DNA as Data Storage
Episode 158: Writing Human Code
Episode 168: The Microbiome
Episode 169: Genomics and Life Extension
Episode 170: Chimeras and Bioethics
Episode 176: Three Parents and a Baby

Hacking and cybersecurity are front and center as online and offline worlds collide.
In 2016, cybersecurity became a primary issue in a host of critical areas including communication, energy, and politics. Power grids, airports, and other infrastructure were increasingly subject to cyber attacks, and an increasing number of those attacks were successful. The debate over privacy and security was reinvigorated by the hubbub around the FBI’s request that Apple unlock an iPhone owned by one of the San Bernardino shooters. And Wikileaks distributed e-mails obtained by sources who hacked the DNC and individuals associated with the Clinton campaign during the U.S. presidential election.

Episode 139: Hacking Power
Episode 144: Apple vs. FBI
Episode 166: Hacking the DNC
Episode 179: Internet Takedown

The automation of work is coming.
We got another startling look at what the future of work could become as software, robots, and the IoT continued to automate activities previously completed by humans. According to preliminary findings of a recent McKinsey report, 45 percent of all work activities could be automated today using technology already demonstrated. From fulfilling warehouse orders to suggesting medical treatments for ailments, the coming wave of automation will redefine jobs and business processes for factory workers and CEOs alike.

Episode 140: Automating Work
Episode 141: Future Transportation
Episode 145: Robot World
Episode 153: Smart Cities and Sidewalk Labs
Episode 173: Labor and the Gig Economy

Design and science are intersecting in new and significant ways.
Whether it’s in the creation of high tech clothing, embeddables, or materials, design and science are coming together in new and significant ways. Clothing designers are working with multi-disciplinary teams, integrating input from engineers and synthetic biologists into their work. From 3D-printed couture to scarves dyed with bacteria to textiles grown in the lab, emerging tech is creating rapid innovation in the fashion industry. And this year, in the burgeoning world of designing embeddables, the U.S. Patent Office approved Google’s patent for electronic lens technology, which is implantable directly in the eye. These mechanical eyes might give you superhuman abilities, letting you see at great distance or view microscopic material, and document it all by capturing photos or video.

Episode 143: Clothing and Technology
Episode 155: Designing Embeddables
Episode 161: The Future of UX
Episode 171: Embeddables
Episode 172: Quantum Computing

Bull Session

Emerging Tech in 2016

November 4, 2016          

Episode Summary

On The Digital Life podcast this week, we discuss the top emerging technologies of this year based on information from the World Economic Forum, a study by PwC and our own analysis. From nanosensors to drones, synthetic biology to AI, this year has seen a huge crop of emerging technologies move into the
commercial realm and the public consciousness. Join us as we break down our top five, and consider the implications for people, business, and the planet.

These are the top 10 Emerging Technologies of 2016
What are the eight essential emerging technologies for business?

Bull Session

AI Goes to the Ballpark

July 7, 2016          

Episode Summary

On The Digital Life this week we chat about technology and the great American pastime, baseball.

Just last week the Associated Press announced that it’s covering Minor League Baseball games using AI software. The software, from Automated Insights, draws upon supplied game data to create a written narrative. This AI is already being used by the Associated Press to create earnings stories on U.S. public companies, and by corporate customers, one of which uses it to generate descriptions of cars for its Web site.

So, AI can cover a baseball game, parsing the data and creating a narrative, but is the writing any good? So far, it seems to generate stories that are readable, but not really compelling or interesting beyond the most mundane facts. Is this the future of sports journalism? Join us as we discuss AI and baseball.

AP Sports is Using “Robot” Reporters to Cover Minor League Baseball
AP expands Minor League Baseball coverage