Bull Session

Ethics and Bias in AI

March 24, 2017          

Episode Summary

On The Digital Life this week, we discuss ethics and bias in AI, with guest Tomer Perry, research associate at the Edmond J. Safra Center for Ethics at Harvard University. What do we mean by bias when it comes to AI? And how do we avoid including biases we’re not even aware of?

If AI software for processing and analyzing data begins making decisions about core elements critical to our society, we'll need to address these issues. For instance, risk assessments used in the correctional system have been shown to incorporate bias against minorities. And when it comes to self-driving cars, people want to be protected, but they also want the vehicle, in principle, to "do the right thing" when encountering situations where the lives of both the driver and others, like pedestrians, are at risk. How should we deal with this? What are the ground rules for ethics and morality in AI, and where do they come from? Join us as we discuss.

Resources
Inside the Artificial Intelligence Revolution: A Special Report, Pt. 1
Atlas, The Next Generation
Stanford One Hundred Year Study on Artificial Intelligence (AI100)
Barack Obama, Neural Nets, Self-Driving Cars, and the Future of the World
How can we address real concerns over artificial intelligence?
Moral Machine

Jon:
Welcome to Episode 199 of The Digital Life, a show about our insights into the future of design and technology. I'm your host, Jon Follett, and with me is founder and co-host, Dirk Knemeyer.
Dirk:
Greetings, listeners.
Jon:
Our special guest on the podcast this week is Tomer Perry, research associate at the Center for Ethics at Harvard University. Welcome, Tomer.
Tomer:
Pleasure to be here.
Jon:
We're bringing Tomer on board today to discuss the weighty topic of ethics, AI, and the biases that may or may not be programmed into the systems that are going to be determining much about our lives in the future.
That's what we're going to be digging into as our topic today. What do we mean when we're talking about bias and artificial intelligence? I'm going to cue this up by talking about two examples of AI where there's the potential for bias. First, there's the lovely automated self-driving cars that we hope to see on our roads sooner rather than later.
Dirk:
We will be seeing them in the 2020s. It will be in that decade. There will be self-driving cars in prevalent volume.
Jon:
So they're coming very soon, in the next three to four years, and what that means is there needs to be a rule set which allows cars to drive in expected and unexpected circumstances. The rules that govern these automated cars in life-threatening circumstances apply most directly to our discussion today.
What does an automated car do when encountering a scenario where a human life could be at stake, whether that's the driver's life or a pedestrian's life, and how does the system balance those as it makes a split-second decision? That's one example of where AI could be making some really important decisions for us. Those decisions are, of course, left to our own judgment today, but how do we bake that into the system, and how do we make sure that we are enabling these systems to operate in a "moral" fashion going forward?
Based on that example, I think we could start our conversation by talking about what bias in AI is. Dirk?
Dirk:
I’d like to hear … I think Tomer is more likely to have a good definition. I would just fling my arms around at it.
Tomer:
I think a good starting point is to think about what's new and unique about AI, right, because bias is part of our life and there's a large conversation about that. I think what's really fascinating and interesting here, if you look, for example, at recent advances in AI, like the AlphaGo software beating Go players, is that the people who code the program don't really know what's going on, how exactly the software is making decisions. I think that's one of the interesting features here.
I think one of the main worries about the ethics of AI is that we're not going to really know how the software is going to make decisions. For example, if we write a program beforehand with some clear criteria to evaluate an insurance claim or something like that, and then we realize the criteria are wrong because we are biased, and so our criteria are biased, we can change them. But if the software is making decisions on its own, in ways we don't really understand, then there's a worry that it will develop its own biases that we're not privy to.
The other part that's related to it is that a lot of modern AI is machine learning, which is something I don't know a lot about. It's not my specialty, but from the stuff that I've been reading about it, a key component, and again, AlphaGo I think is a really fascinating example, is that the software learns things by itself, through repetition. AlphaGo played millions and millions of games against itself. That's how it developed the skills that it developed, and when it played, it made moves that no human player had even thought were good moves, and the designers of the software definitely couldn't have anticipated those moves being good moves.
I think that's the distinct worry about AI, that it's going to bring us things that we don't even know, and it's going to come to them through a process of learning that maybe mirrors the way we learn, maybe it doesn't, but that's really hard for us to anticipate. Let's bracket that for a little bit, and then let's talk about bias.
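(A minimal, hypothetical sketch of the self-play idea described above. This is not AlphaGo's actual algorithm, just tabular Q-learning on a toy game, Nim, where each turn you take one to three stones and whoever takes the last stone wins. The point it illustrates is the same one Tomer makes: the program arrives at good moves purely by playing against itself, with nobody coding the strategy in.)

```python
import random

# Toy self-play learner for Nim: a pile of stones, each turn take 1-3,
# and whoever takes the last stone wins. One shared Q-table is updated
# from both players' moves, so the program "plays against itself".
random.seed(0)

N = 10   # starting pile size
Q = {}   # Q[(stones, take)] = value of that move for the player to move

def moves(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def best(stones):
    # greedy move under the current Q-table
    return max(moves(stones), key=lambda a: Q.get((stones, a), 0.0))

for _ in range(5000):
    stones = N
    while stones > 0:
        # explore a random move 30% of the time, otherwise play greedily
        a = random.choice(moves(stones)) if random.random() < 0.3 else best(stones)
        nxt = stones - a
        if nxt == 0:
            target = 1.0  # taking the last stone wins
        else:
            # the opponent moves next, so our value is minus their best
            target = -max(Q.get((nxt, b), 0.0) for b in moves(nxt))
        Q[(stones, a)] = target  # deterministic game, so a full update works
        stones = nxt

# The learned policy leaves the opponent a multiple of four:
print(best(5), best(6), best(7))
```

After a few thousand games the table converges on the classic Nim strategy (from 5, 6, or 7 stones, take 1, 2, or 3 to leave 4 behind), a pattern nobody wrote into the code, which is exactly the kind of learned behavior the designers couldn't have spelled out in advance.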
I think there are two kinds of bias we might be worried about in general, and specifically with AI. One of them is, as I mentioned, our conscious biases. These are things that opinions differ on. For example, affirmative action: is affirmative action justified, and to what extent? Who counts for affirmative action, and in what context? It's a subject on which there is disagreement. Calling something like this biased is controversial in our political world, but we can also say, looking at history, that certain opinions that used to be prevalent are now widely accepted as biased or prejudiced. So one form of bias is bias that we retrospectively recognize as bias, and even when we do, and when I say "we," I mean maybe the vast majority in a certain society at a certain time, often some people on the margins will still reject it.
For example, most people today in the United States would accept that the idea that women aren't as capable as men is a bias. I'm saying it very generally so that it would be more widely accepted; as soon as you narrow it down, some people are going to reject it. I'm sure on the margins some people still reject it, but for the most part, consciously, this is something that has been disavowed. We've had legislation going back to the fifties that doesn't allow discrimination on the basis of gender. It is illegal to discriminate merely on the basis of gender; if you can prove that a woman was fired because of that, or denied a promotion, something like that, that's an offense. Those are our conscious biases: we realize them retrospectively, they're controversial, we can disagree on them, and we can differ.
The other kind of bias that we should be worried about with AI is what researchers discuss as implicit bias. These are things that we're not aware of in the moment, but we make decisions, and built into those decisions are all sorts of preferences that we wouldn't necessarily even endorse if we were to reflect on them. Sometimes we rationalize them, in hindsight people tend to come up with stories, but sometimes we deny that they exist, or we don't think that they exist. I might think that I don't evaluate men and women differently, but research has shown that a CV with a man's name and a CV with a woman's name, with similar credentials, are treated differently.
There's the famous orchestra experiment, where evaluators judged musicians playing behind a curtain: when they couldn't see that the candidates were women, they accepted more women into orchestras. These are the kinds of implicit bias. Those are biases that we have regardless of AI. The worry is that AI could import both of these kinds of biases.
It imports conscious biases when we write code into the software; that's usually less worrisome. It's not less worrisome for the people it's prejudiced against, but it's less worrisome in the long run, because hopefully, if we identify this kind of bias or we come to change our opinion on it, we can rewrite the code. But sophisticated machine learning software can sometimes learn our biases in ways that we have not intended.
There's this really interesting paper that looked at Google searches, or other search engines. They use all sorts of linguistic algorithms that group words together; you can't just feed them a dictionary, so they build semantic clouds. It's been found that if you search for employment as a man or a woman, you can get different results, because the algorithm identifies "secretary" with women and "engineer" with men. Women engineers already face all sorts of barriers, and there are all these hidden ones on top. Again, this is the problem: the people who wrote this software never intended it, nobody had any idea this was happening, but the algorithm found that when most people talk about a secretary they're talking about a woman, and it concluded, oh, it's just like "mother," a word with a feminine attribute. This is one way it imports our biases.
Of course, the same is true of implicit biases. We might not even know; we might think we're designing something fairly innocuous, like a healthcare evaluation app, or some kind of algorithm that's supposed to evaluate the risks people face, and when we're doing it the software learns from us what we want, but what we want is already colored in ways we ourselves would reject.
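(A toy illustration of the word-association effect described above. These are hand-made three-dimensional vectors, not a trained embedding; the numbers are invented purely to show the mechanism by which co-occurrence statistics pull "secretary" toward "woman" and "engineer" toward "man" in a semantic cloud.)

```python
import math

# Toy word vectors. In a real model these are learned from text, and
# gendered usage patterns in the training data leak into the job words;
# here the skew is planted by hand just to show how the geometry works.
vectors = {
    "man":       (1.0, 0.1, 0.2),
    "woman":     (0.1, 1.0, 0.2),
    "engineer":  (0.9, 0.2, 0.8),   # skewed toward "man" by biased text
    "secretary": (0.2, 0.9, 0.8),   # skewed toward "woman" by biased text
}

def cosine(u, v):
    # cosine similarity: 1.0 means same direction, 0.0 means unrelated
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(x * x for x in w))
    return dot / (norm(u) * norm(v))

for job in ("engineer", "secretary"):
    m = cosine(vectors[job], vectors["man"])
    w = cosine(vectors[job], vectors["woman"])
    print(f"{job}: sim(man)={m:.2f} sim(woman)={w:.2f}")
```

Nobody wrote "secretary is a woman's job" anywhere; the association falls out of which words sit near which, and any system ranking search results by these similarities inherits it automatically.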
Dirk:
We’re transferring the bias into the computer the same way we may transfer bias into our children.
Tomer:
Yeah, I guess maybe you give birth to software. In some ways, the people who are more sci-fi-ish about AI would like that idea, because they think that the software learns the way human beings do. Of course it's hugely controversial; it's hotly debated whether or not these kinds of AI are learning in any way similar to the way human beings learn.
Dirk:
We certainly aren’t there yet.
Tomer:
We're not there yet, but do they show the same germs of thinking? Some people, as far as my knowledge goes, think that these kinds of programs do something that looks akin to human thinking in a very fundamental way. If they are, it definitely seems like they are learning from us the way not just our children do, but the way we teach our peers: social and cultural transmission.
In some ways there are also some differences. It's different from your kid in the sense that we don't understand at all what goes on in there. Not that we necessarily understand other people, or the way the mind works, or the brain. There's some kind of internal logic to it, but it behaves in certain ways that we aren't able to predict. Another part of it, the danger of it, is that these systems tend to process huge amounts of data that no person could have the ability to. We use them and employ them in healthcare because they can process all these different scenarios. One of the risks is that we're taking our biases and we're making them much more powerful, and we're applying them across a huge set of cases.
Beforehand, people’s biases might counter each other, so you think judges are somewhat biased, but some of them are biased this way, some of them are biased that way, and there’s experimentation and diversity.
Dirk:
The system has some degree of limit or self regulation.
Tomer:
Right, but then if we put it all in a computer and we let the computer make decisions instead of judges, we’re going to universalize one kind of bias.
Dirk:
Yeah, Skynet.
Jon:
I think one of the things that I'm hearing from both of you is that there's this potential for the software to amplify the inherent biases that we're inputting into the system, whether that comes from examining a large data set, which amplifies them, or just from the fact that only a certain number of inputs are being evaluated. Dirk, you talked about whether or not software is really like our children. Unless we keep children solely in a very limited environment where the inputs are only what we feed them, they have so many other types of input, from society, from television, from their environment generally speaking. Whereas with AI, unless it's highly sophisticated beyond what we currently have in front of us, we're controlling all of the inputs that are going in, and so there is no opportunity for the diversity of input that a human being would have.
It's interesting that our inability to recognize how pattern amplification can take hold is one potential source of bias in the system. Tomer, what are some other potential sources? And how do you see corrective measures being introduced once we've got these imperfect systems that are capable of learning, but are going in directions that we don't want, or that we want to steer elsewhere?
Tomer:
Right. Yeah. I think that's a great point. One preliminary note worth mentioning is that the AI, artificial intelligence, that we're talking about is what people refer to as specialized AI. There's actually a distinction people make in the literature between general AI, which is this kind of science-fiction-esque supercomputer that can do anything, the equivalent of the human mind, and specialized AI.
I mentioned AlphaGo, which is software that plays Go. It doesn't do anything else. It's super smart, except it's not going to handle your laundry or anything. It's specialized.
Dirk:
Yeah, in the field, narrow AI is the term typically used. A lot of leading people in AI say that all currently functioning AI qualifies as narrow AI.
Tomer:
Right, exactly. Obviously there are some people who plan for and think that general AI is imminent, but most people, I think, think it's actually pretty far away, or that we have no grasp on what it would look like and when, if at all, it would come. There are people, the effective altruism people, and Nick Bostrom and others, who think that the extreme possibility of general AI turning evil should be our prime concern, but most people in this field actually think we should be more concerned about narrow AI, or specialized AI, because that kind of AI already exists and is developing. We mentioned cars, and we mentioned healthcare, and education, housing, law, arbitration; there are already algorithms that apply to all these things. They raise all these moral and political questions, questions of justice, and so we need to address them.
One way of answering is that, as a specialized problem, the answer also needs to come from a specialized look at the problem. We would have to look at each one of these issues. There's the technical component, where the people who write these programs are also the people who are going to have to rewrite them in better ways. I mentioned the linguistic research about search engines: you cannot even analyze this unless you have the skills to see that there's an issue there in the first place. Only those kinds of people can. So where will the corrective measures come from? In certain ways I think there's a responsibility on the engineers and the tech people working on it that they need to recognize, and sometimes they don't really recognize that they have this responsibility.
At the same time, the normative evaluation of what’s problematic and what’s not is not the sole responsibility of the tech people or the people with the engineering skills, but should be a part of a bigger conversation. We need to have a bigger conversation about these things. We need to have broader understanding, so people who aren’t tech people, like myself, need to educate themselves about what’s going on there, and also try to create a conversation where the value part of it is involved.
I spent a little bit of time at Stanford, right by Silicon Valley. There's often this disconnect between the humanities side of Stanford and all the people who are doing tech. I think that's incredibly unfortunate. One thing we need to do is create conversations across those divides. I think there's a special responsibility for the specialists, and then there's a more communal responsibility to understand what's going on there and push back on certain values that might coincidentally be specific to the people who are doing tech.
Some other things are important too. One of our faculty at Harvard, at the Kennedy School, Archon Fung, has talked about how he thinks we're missing the equivalent of a journalistic ethic among these kinds of engineers. This maybe has an affinity with the early days of the internet, where in addition to the enthusiasm about the technology and its potential there was also a very committed ethic.
Another example is the nuclear physicists, the people working on the nuclear bomb, and their struggle to understand their responsibility vis-a-vis the new technology they were creating. One thing I think is also interesting is having a frank conversation between and among the people on the producing side about what moral responsibility comes with developing these kinds of AI and monitoring them. I think one of the ways in which corporate structures and economic incentives operate is that they often insulate the people working on these projects from even thinking about it in moral terms, because that's not where the corporate incentive lies. Of course a lot of these people care about moral issues, but the structure of the companies often doesn't really give them room to have this kind of discussion and recognize this responsibility.
Dirk:
That makes sense.
Jon:
I think there's an ongoing question then, at least for me, about how we develop a set of, for lack of a better term, ground rules: what are the governing principles that could help shape decisions, whether about how decisions get made within an algorithm, whether specific algorithms should be pursued at all, or how they get applied? If there is a chasm between technology and the humanities, I find that kind of surprising, considering that technology at least purports to have this collaborative, egalitarian viewpoint, with cross-pollination between technical areas. I'm kind of surprised that there isn't already an effort underway to introduce learning from the humanities side into technological programs, whether at the university level or at the level of trade groups.
If we acknowledge that this conversation needs to happen, and we acknowledge that there needs to be a broader understanding of morals and ethics, what are the ways in which that can be pursued? Is it policy driven, or is it a free-market ethos: we'll just see how well these self-driving cars do, and if they start mowing down people, we'll decide at that point that's bad, no one's going to buy the cars anymore, so that's the imperative? What are the trigger points for having a deeper and more substantive conversation about ethics and AI, and how soon does that need to happen?
Tomer:
What the trigger events will be is somewhat of a question for a historian of the future. Ethicists have already started talking about this. From my perspective, what political theory and ethics contribute to society is partly that we have the time and space and opportunity to start thinking about it already. The conversation is not yet super widespread, because those things are not yet in a lot of people's consciousness. People who work day in and day out at their jobs, and do a bunch of other stuff that has nothing to do with AI, don't have the time. In dealing with all the stuff they need to do, they don't have the time to think about this.
There are already people working on it; the conversation has already started. Once it becomes more of an issue of policy, and it has slightly started to be, I think you'll see it become more widespread. Again, I'm not a prophet. Prophecy, obviously, has been given to fools, but wearing the hat of a prophetic fool for a second, I would say that I think it's very likely that there will be some kind of incident with a self-driving car that will trigger a more widespread conversation. This is often how things happen. Then a bunch of people will say, we've been telling you about this for a long, long while.
Until that happens it’s probably the case that whoever is writing the software is going to have pretty much a free hand to do whatever they want. That’s kind of a worrying thing. We’ll have ethicists talking to ethicists coming up with all sorts of proposals and solutions and analysis, and then you have all sorts of people doing their tech thing.
By the way, I didn't want to suggest that there's no conversation, or that there are no crossovers. Of course there are; there are a lot of people who care to do that, but overall I think there's quite a significant chasm. I felt it at Stanford, the chasm between the tech people and the political theorists and ethicists, and the structures in which they live are also very, very different.
I think, and this is where we might differ, that a lot of people who are enthusiastic about technology overestimate its potential to be liberating. At Stanford, there's a seminar on liberation technologies, and I participated in that, and I thought they showed a lot of great examples of the potential of technology to be liberating. But technology can also be co-opted. If the social and political structures are already corrupted, and by corrupted I mean certain people are in power and are oppressing a bunch of other people, then the introduction of new technology can sometimes subvert this structure, because it gives people who were formerly outsiders an opportunity to gain power that they didn't have before. But if the people who are in power, or a new elite, can take over this technology, they become the new elite.
I think we've seen how it's been co-opted with what has been called the Arab Spring, and those kinds of demonstrations. These new platforms were unfamiliar to those old-school regimes and their intelligence operations, and they were used to coordinate mass action in ways the regimes weren't aware of. Over the course of several years, though, they have learned. If you look at other countries, governments that are particularly interested in preventing coordinated action by their citizens, primarily China, but also Russia and Turkey, they have learned these lessons super well.
In China, there's research coming from several of our colleagues, Gary King at Harvard with Jennifer Pan at Stanford, and others, exposing the extensive apparatus the Chinese government employs not just to restrict internet conversation and interaction, but to use it to police. They actually allow it to happen in a lot of places, because that lets them tap into what's going on. The same goes for the Syrian government with Facebook, et cetera, and the American government abuses that technology as well. I think technology is a double-edged sword; it's not by itself merely liberating.
There's a certain outlook that technology by itself is liberating, and I think that outlook is sometimes dangerous.
Dirk:
How do business and the free market intersect with ethics? To take the self-driving car example specifically, I know in ethical circles that's sort of married to the traditional Ethics 101 trolley problem. For our listeners who aren't familiar with it, the idea is this: there's a trolley whizzing down the tracks with five people on it, it's out of control, and it's going to crash. It can be diverted onto a different track, on which it will kill one innocent pedestrian, or it can be left to its fate, and all the people on the trolley die.
The problem I have with ethically marrying self-driving cars to the trolley problem is one of agency, and of how the market intersects. In the trolley problem, the riders of the trolley have chosen to ride on the trolley, which is owned by some other party.
Tomer:
Who knows, right?
Dirk:
Who knows.
Tomer:
You’re only filling in the blanks, which we all do.
Dirk:
It's very specific with the self-driving car, where I'm an individual who's going to buy a vehicle, and that vehicle is going to have rules for how it's going to behave. I'm opting in, in a certain way. To me, as a layman talking about ethics, which perhaps makes me unqualified to do so, it's a very different ethical conundrum than the classic trolley problem.
Tomer:
First of all, I’ll start with the last thing. Being a layman doesn’t disqualify you from talking about ethics. It’s actually a feature of ethics and political morality that every citizen and every person has to be an expert in a way, or has to make decisions. It’s actually not very much the case that political theorists or ethicists are experts in ethics. What they are is people who have more time and inclination to explore these things in greater depth. Actually every person has a responsibility to be moral and behave morally, and come to a conclusion about what morality requires, and every citizen has responsibility and a moral duty to understand what justice is and act accordingly.
I think that's why there's a big challenge in teaching ethics, as someone who has also engaged in thinking about how to better teach ethics. Unlike our engineering friends, we cannot be experts in the same way. People come in with a lot of opinions; they think they're moral, and they have good reason to. They have been raised to know certain things, so they don't walk into our classrooms the way they walk into a math class, where you can tell them, you don't know anything, let me tell you what math is. The other thing is we cannot tell them the correct answer is four. We have to leave them the agency to make these kinds of decisions, even though sometimes we might think we know what the right thing is. I cannot give somebody an F just because they think something I think is wrong. Anyway, that's what triggered me, your saying you're a layman.
There's a lot in the trolley problem and the markets question, a lot there. Let's unpack it slightly. When you say, I'm buying a car, I'm opting in, there's already a ton of assumptions there that I think need to be unpacked. What about public transportation? Perhaps we'll introduce regulation about self-driving public transportation but not about private cars; all of a sudden this goes out the window completely. Also, public transportation in certain places is privately owned but regulated, and in other places it's publicly owned, so already there are myriad possibilities here.
The trolley problem is designed to get at people's intuitions in order to articulate theory, so filling in the blanks of the trolley problem with all sorts of assumptions, maybe they opted in so it's their fault, is counterproductive to what the trolley problem is trying to do. It's a thought experiment; it's meant to isolate conditions. What they're trying to ask you with the trolley problem is: suppose you had the power to save five people and have one other person die, and you know nothing about these people. People say, that's weird, when would that ever happen? So they fill in the blanks by telling this weird trolley story. Then a bunch of people say, no, but if it's on tracks, tracks have a tendency to … and you're like, forget that it's tracks. Let's assume they're on the moon, and you've given them an orange, and the orange is poisonous, but you didn't know.
Philosophers introduce these stories, and they tweak them. If you know something about the trolley problem, you know that there are a dizzying number of variants and extensions, but they're not random; like experiments in a lab, you're supposed to be tweaking the story to get at something that you think is morally distinct, and about which there's disagreement.
One variant of it is that there's a person you can push over, and that will block the trolley, and people ask, well, why can't you jump yourself? Because that person is heavier than you; he will stop the trolley, and you yourself won't. Those variants are supposed to keep all the conditions the same and change just one variable, to try to get at people's intuitions about it. It's part of ethics as a way of articulating moral theory, but it's also part of Psych 101, because people use it to gauge what people's prevalent instincts are.
Ethicists don't care, as a general matter, what most people believe is moral. What they care about, and that's my work as a political theorist, is trying to find the best argument, as best as my intellectual capacity allows, as close to the truth as I can get. The arguments I have with other theorists and ethicists are meant as a way to sharpen each other's arguments, so that together, as a community, we'll produce the best theory that there is.
Psychologists are interested in what people actually think and believe, so they take moral theories, utilitarianism versus deontology, all these different goal-oriented versus rule-oriented theories, and they use these stories to test people's intuitions, to see under what circumstances they make decisions that seem to fit with one theory or another.
Now we're getting back to the larger question here, the substantive question you asked about the intersection of markets with something like the self-driving car. Again, this is a really large topic. I'll plug the Center for Ethics at Harvard here one last time: starting next year, and for a period of two years, our theme will be political economy and justice. We're going to look at a bunch of different questions that relate to where markets intersect with all sorts of issues, the limits of markets, the place of markets, and we'll have a series of public lectures and other events on these very questions.
I say that by way of saying that depending on where you stand on general matters about the role of the market in society and in a just society, and in the international context, which is a little different from the domestic context, et cetera, you'll have a different answer as to how that intersects with the question of AI. For example, some people believe that markets are not just permissible but required: they're freedom-advancing, and markets advancing important liberties, for example economic liberties, is very important. So John Tomasi at Brown thinks that economic freedom is a fundamental interest people have, and markets are a social structure that allows people to enjoy that freedom; they enable it, so markets are not merely permissible, or efficient, or beneficial, but actually required. They're what justice requires of us.
On that view, you might think that, whatever opinion you have about the bias introduced, we have to allow for competition between different systems of AI, so we ought not regulate them in any way at the level of the state, because people ought to be able to compete and create those different systems. Or you might be inclined to think that the main role markets play in society is as an efficient way of distributing goods, material goods that aren't particularly important to people's basic needs. This is an opposing view. Who should have this iPhone? I don't know, I don't care; justice doesn't care, so long as everybody has enough to live by.
Then we say, let's institute a market that allows people to transfer things if they mutually agree on it and nobody is cheating anybody, and let's keep healthcare, for example, or education, outside of the market, because these are things people may be entitled to. The role markets play in this kind of worldview is just that of an efficient allocator of goods. Then they have to be regulated and restricted, in the sense that we have to make sure they don't encroach on things they're not allowed to. We don't want people to start buying kidneys from other people, because that defeats the purpose: we care about this allocation, it's a matter of justice. It's not, who owns this kidney? I don't care, let people buy them off each other.
If you have that view, you might be very worried about a market where people buy into whatever car they want. Some people would buy into the utilitarian car, and some people would buy into the save-the-driver-at-all-costs car.
Dirk:
Exactly.
Jon:
If you see the utilitarian car coming, you better get out of the way, or, sorry, the opposite. If you see the save-the-driver-at-all-cost car …
Tomer:
Sure. Then maybe, like cigarettes, they'd have to be marked in some way: I am a selfish driver, something like that, and then maybe there would be some stigma attached to it. But all these mechanisms supposedly are not coercive regulation; we allow it, and we let the market regulate.
Dirk:
Yeah.
Jon:
Yeah. This is obviously a topic where we could go on at great length, but I think we’re going to wrap up the conversation today. Tomer, please come back again and join us because I’m sure we’re going to have a lot more questions to ask you as AI develops new tools and technologies for our society. Thanks.
Listeners, remember that while you’re listening to the show you can follow along with the things that we’re mentioning here in real time. Just head over to TheDigitaLife.com, that’s just one L in the DigitaLife, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening, or afterward if you’re trying to remember something that you liked.
You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play, and if you want to follow us outside of the show you can follow me on Twitter at Jon Follett, that's J-O-N F-O-L-L-E-T-T, and of course the whole show is brought to you by Involution Studios, which you can check out at GoInvo.com. That's G-O-I-N-V-O dot com. Dirk?
Dirk:
You can follow me on Twitter at D Knemeyer, that’s at D-K-N-E-M-E-Y-E-R, and thanks so much for listening. Tomer, how can our listeners get in touch with you?
Tomer:
Yeah, you can check out the Center for Ethics at Ethics.Harvard.edu, and I’m also on Twitter at @PerryTom6, which is P-E-R-R-Y T-O-M and the number 6.
Dirk:
Excellent.
Tomer:
Thank you so much.
Jon:
That’s it for episode 199 of the Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.
