Bull Session

Establishing AI Ethics

October 26, 2018          

Episode Summary

This week on The Digital Life, we explore the difficulties of establishing an AI code of ethics, inspired by an article from MIT Technology Review, “Establishing an AI code of ethics will be harder than people think”.

There’s already ample evidence that artificial intelligence can exacerbate existing systemic bias if left unchecked. And a set of design ethics guiding AI development may be far in the future, because defining an applicable rule set is difficult and ethics itself is subjective. However, such arguments over AI ethics often emphasize top-down efforts rather than bottom-up ones, such as auditing AI decision-making from the initial data curation stage and throughout the process. In this view, AI design and development is not a purely technical practice but one that incorporates cultural aspects, similar to teaching children right and wrong. Join us as we discuss.

Resources:
Establishing an AI code of ethics will be harder than people think

Jon:
Welcome to episode 281 of The Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett. With me is founder and co-host, Dirk Knemeyer.

Dirk:
Greetings listeners.

Jon:
This week we’ll be discussing the difficulties of establishing an AI code of ethics. We’ve addressed ethics before on the show, sometimes in relationship to artificial intelligence, sometimes in relationship to design ethics. This week we’re inspired by an article from MIT Technology Review entitled “Establishing an AI code of ethics will be harder than people think”. I think that’s an interesting article, and the sub-head was “Ethics are too subjective to guide the use of AI, argue some legal scholars”. The article goes on to describe some of those arguments, which we’ll dig into in a minute here.

Jon:
But to start with, I guess my initial reaction is that ethics right now sort of reminds me of where design was in 1996. Which is, nobody knows they need it, and the person or the company that gets it right is going to reap the benefits of it. Further, there probably aren’t enough people doing it. There’s a possibility that there could be quite a few jobs providing the human interface layer to artificial intelligence via design and ethics. I think that’s going to be critical as AI moves forward.

Jon:
Dirk, before we dig into some of the substance of this article, any opening thoughts?

Dirk:
Well ethics, what ethics? Right? I mean, I don’t know, Jon. It’s a continuation of what in the tech industry, and even beyond the tech industry, what in capitalism has been the corporate pursuit of profits. Take a company like Google. Young company. Tech company. When they started, “Don’t be evil” was, I don’t know if it was their mission, but it was certainly this well-publicized sort of tent pole of their company and how they intended to do business. Since then, once they got big, they changed it. That went away, and they certainly have been evil in a number of different ways.

Dirk:
Ethics at large are just things that get in the way for these companies. As long as they’re convenient and not barriers to profit, yeah, companies will talk about ethics and wave their little manifestos around. But as soon as ethics get in the way, they go away. I think we see it over and over again with the big companies. I think small companies are more able to be ethical. But big companies, from a structural perspective, are inherently unethical. Is it possible to be ethical in an organization of tens of thousands, hundreds of thousands of people, with no direct human relationships between a lot of them? With little accountability. With your incentive models having nothing to do with ethics or ethical behavior, but with profits and growth and things that are often antithetical to ethics.

Dirk:
To me this is nothing new. It’s perhaps even more dangerous now that we’re getting into the space of AI and machines. Thinking machines. But to me it’s just an exacerbation of problems that are longstanding.

Jon:
I think there are many different levels of ethics. There are the broader company ethics you were referring to a little bit there when referencing Google. I think there’s the face-to-the-world kind of ethics, and how a company may or may not fit into society and produce things for its own benefit, or for the benefit of its customers. And I think there’s an underlying, let’s call it small-e, operational-level ethics that is really not practiced very much, at least on the human-computer interaction side, just yet.

Jon:
What I mean by that is the kind of ground-level decision making that might go into creating an algorithm, say, one that determines how a self-driving car parses information about living beings that it could do damage to, right? The person designing that algorithm for Google is practicing a very different operational-level kind of ethics than the broader “How does Google fit into our society as an organization?”. On the, call it the small-e operational ethics level, I think there needs to be a pretty big expansion of the practice of, understanding of, and design of those kinds of decision-making rules.

Jon:
That’s in part because the two aren’t separate, of course; the bigger view of ethics influences what people are going to be doing on the algorithmic and rule-making side. But let’s take that one step further and dig into a little bit of what this article’s addressing. Which is this idea that if you’re a lawyer or you’re a researcher and you know that AI is going to require some of these ethics and accountability in design and implementation, the question comes up, which is interesting: who decides what ethics are in play, and then who enforces them? These are the problems that the author of this article, and I think many others, are rightly raising. How difficult is this nut to crack?

Dirk:
It’s really difficult. Super difficult, Jon, and a big part of it is, technology moves so much faster than government moves. I mean, legislation is slow, and technology is fast. These things have already moved past the point of no return before the government can get its head out of its ass.

Jon:
Yeah. I think there’s a top down view of this operational ethical discussion that is distracting. That I don’t think is sort of maybe in totality, that it shouldn’t be the only way we view it. There is a –

Dirk:
Which shouldn’t be the only way we view it?

Jon:
As a top down kind of thing, right? Like, “You know, we’re going to create these imaginary problems. These imaginary, we’re going to plan ahead and try to have rules in place so that when we encounter these ethical dilemmas we’ll have this rule and the gate will drop down and the AI will make the right choice”. I think –

Dirk:
The metaphors there alone are freaking me out, but let’s keep going.

Jon:
Yeah. In other words. But I mean, how else … I mean, that’s one way to frame it up, right? Is top down. You know, I am interested in the top down view of it. But I’m also interested in a bottom up view. The bottom up view is one that we’re ignoring largely because we think that these systems don’t exist already. We’re trying to think of … I guess what I’m advocating for is transparency in the decision making. Making auditable systems for AI pattern recognition and decision making and rule making.

Jon:
Obviously you can’t have a self-driving car go out and hit a bunch of people and then audit it and say, “When you hit the person, that was clearly the wrong choice”. Not all designs lend themselves to a bottom-up approach. But for things where you can prototype and test on data sets that are pre-existing, for instance, and can see what the AI is learning, why it’s learning it, and what decisions it’s making based on that data, you can go in and audit that, and sort of change the course of the machine learning if you’re able to see what’s going on. If there is a human in the loop.
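As a rough, minimal sketch of what that kind of audit could look like at the data curation stage, here is one way to compare each group’s base rate in a training set with how often a model flags that group on held-out data. The pandas DataFrame, the column names, and the “amplification” heuristic below are hypothetical illustrations for this discussion, not a method drawn from the article or the show.

```python
# Minimal, hypothetical sketch of a dataset/model audit.
# Assumes a DataFrame with illustrative columns: "group" (a demographic
# attribute), "label" (the outcome the model learns from), and "pred"
# (the model's prediction on held-out data).
import pandas as pd

def audit_rates(df: pd.DataFrame, group_col: str = "group",
                label_col: str = "label", pred_col: str = "pred") -> pd.DataFrame:
    """Compare base rates in the data with the model's positive-prediction
    rates, broken out by group, so a human reviewer can inspect the skew."""
    summary = df.groupby(group_col).agg(
        n=(label_col, "size"),               # how many records per group
        base_rate=(label_col, "mean"),       # share of positive labels in the data
        predicted_rate=(pred_col, "mean"),   # share of positive model predictions
    )
    # Crude signal that the model may be amplifying skew already in the data:
    # a group whose predicted rate runs well ahead of its base rate.
    summary["amplification"] = summary["predicted_rate"] / summary["base_rate"]
    return summary.sort_values("amplification", ascending=False)

# Toy usage:
# df = pd.DataFrame({"group": ["a", "a", "b", "b"],
#                    "label": [1, 0, 1, 0],
#                    "pred":  [1, 1, 0, 0]})
# print(audit_rates(df))
```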

Jon:
I think that a lot of the systems that we talk about where there’s a potential bias sort of baked in, those systems are largely not auditable. I think there’s a huge opportunity both on the design side and for a company that can say, “We have a transparent AI view”.

Dirk:
Who validates that? Who agrees and decides that it’s a transparent AI view? I mean part of the problem, when I hear you talk about bottom up, one of the things that scares me is these things are so complex. We have things like implicit bias that we’re sort of understanding more and more about. But even as we understand that it’s an issue, we’re nowhere near untangling it and figuring out how the hell to get away from it.

These early AI implementations are taking those biases and perpetuating them. One of the statistics in this article: there was some kind of criminal database, and it was overwhelmingly, somewhere around 85 to 95 percent, Black and Hispanic people who were registered in this database. The implication there is that it’s bias being baked in to beget more bias, to beget more bias.

It’s all fine and dandy for us to say, “Oh yeah, Company B could come in and have this filter that they put in there and test for that, and help weed that stuff out”. But from a science perspective, we don’t know how to weed that stuff out. From a psychological or sociological perspective, there are different theories and different ideas and little bits and pieces, but we’re clueless at the end of the day. We have no clue how to engineer deep implicit bias out of our society, even without the technology aspect.

Once we bring the technology … Now that we have the technology we’re trying to create this ethical framework. I don’t know. To me, I think it’s super complicated and it’s not something that can be done correctly from a bottom up perspective. I fear that the bottom up will risk creating more issues. It might be well meaning. It might be, “Well there isn’t anything to guide us and help us figure out what the real problems are here”.

Jon:
Well, so I think that just sort of proves the point that we do need a bottom up type of audit. The statistic that you’re referring to was sort of made more apparent by the fact that the database is sort of populated largely by minority –

Dirk:
It’s obviously biased.

Jon:
Sure. That’s made obvious by the database itself, right? I mean I guess there’s the question of, “Okay. Well who …”. Does someone say, “Okay. Well then that’s fine”? Which was I think your question, which is who’s vetting the audits –

Dirk:
Who’s vetting the vetters? Yeah.

Jon:
Right. But at the same time, I mean there’s an opportunity, right? We don’t know how to remove implicit bias from our social systems. But some of these will be made obvious just by the nature of the patterns that are being recognized by the AI itself. I –

Dirk:
There’s some cleanup that could be done.

Jon:
Sure.

Dirk:
But I mean that’s rearranging the deck chairs on the Titanic. I mean –

Jon:
Well, I guess part of that also has to do with what decisions we make now with regard to these patterns where we do see bias, right? It’s an interesting question of how we address this, because as much as we’d like to take steps to do this, there’s also just going to be a dearth of people who can, in an informed way, combine development, AI, computer science, ethics knowledge, and human factors into one role and be able to suitably converse with both engineering and executives to make these kinds of choices.

Going back to the metaphor I mentioned earlier, comparing early digital design to where ethics is today: I see just a huge, huge gap on the human side, the human factors side. In fact, it’s probably worse than the design gap was, because it’s unclear to me where these multi-talented people will come from. Certainly it’s a unique skill set, and one that’s not present in a lot of people. I certainly wouldn’t be qualified to talk to AI engineers and executives about this kind of decision making.

One other point I wanted to make just as we’re discussing the ins and outs of this is that our view of AI has largely been shaped to be a technical view. Part of that is because AI is engineering heavy right now. We’re less –

Dirk:
AI is a product of engineering, right?

Jon:
Sure. The technical underpinnings of AI are sort of the most important drivers of the discussion right now. What can we do with it? What’s possible? The complexities of that technical discussion around the AI product. At the same time, I think there’s a cultural gap, or call it a design or culture gap, on the AI product side that we don’t have with other types of technology. When we have a discussion around cars, gearheads will talk technically about them, but there’s also a cultural layer to our discussions around certain technical objects, where people are comfortable talking about them even if they’re not technicians.

Dirk:
Yeah.

Jon:
Right now AI is the realm of far-fetched fantasy and technical jargon. There isn’t a cultural discussion in the middle, which I think is also making it harder to germinate these ethical discussions and rule sets and decision making.

Dirk:
I think there’s a third category of irresponsible media characterization, but I understand your broader point.

Jon:
Yeah, I was trying to cover that in the flights of fantasy.

Dirk:
Well not everything the media’s saying is flights of fantasy, but a lot of it is still rubbish despite that.

Jon:
Yeah. I’d put the killer robots one into the fantasy department.

Dirk:
Me too. Me too, Jon. Me too.

Jon:
To me, having seen a few needs around auditing of machine learning at the studio recently, and figuring out ways to make it possible for managers to understand what the machine learning is doing, I can see that need across other industries. We largely work in healthcare, so we see AI being applied in healthcare, of course. But I can see the need for pulling back the covers on machine learning patterns in any number of industries. Healthcare clearly has some important areas that need to be vetted, but transportation, e-commerce, etc., those are all areas that are going to require that AI not be a black box.

I guess that’s what it comes down to for me: whether this is a set of rules that we apply from the top down, or auditing mechanisms that we apply from the bottom up, or perhaps both, this idea that AI can be a black box that magically executes things for us is not true. That perception needs to be fought against, I think.

Listeners, remember that while you’re listening to the show, you can follow along with the things that we are mentioning here in real time. Just head over to thedigitalife.com. That’s just one L in The Digital Life. Go to the page for this episode. We’ve included links to pretty much everything mentioned by everyone, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play. If you’d like to follow us outside of the show, you can follow me on Twitter @jonfollett. That’s J-O-N-F-O-L-L-E-T-T. Of course the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at goinvo.com. That’s G-O-I-N-V-O dot com. Dirk?

Dirk:
You can follow me on Twitter @dknemeyer. That’s @ D-K-N-E-M-E-Y-E-R. Thanks so much for listening.

Jon:
That’s it for episode 281 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett. We’ll see you next time.

 
