Bull Session

AI and Science

October 12, 2018          

Episode Summary

This week on The Digital Life, we discuss the intersection of artificial intelligence and science with special guest Dany DeGrave, founder of Unconventional Innovation. AI and science are coming together in new and significant ways, including the use of cognitive and other innovative technologies in R&D, like NLP, machine learning, and advanced analytics. As bio-science companies rush to invest in AI, the practice of scientific research, drug trials, and even personalized medicine is undergoing significant change. But with the potential for AI to make erroneous decisions, or even be used for malicious purposes, it may be a long time before we fully trust it in such development.

Jon:
  Welcome to episode 279 of the Digital Life, a show about our insights into the future of design and technology. I’m your host, Jon Follett. Our special guest this week is Dany DeGrave, founder of Unconventional Innovation. Dany, welcome to the show.

Dany:
  Hey. Hello, Jonathan. Thanks for having me.

Jon:
We’ll be discussing the intersection of AI and scientific research, the use of cognitive and other innovative technologies in R&D, like natural language processing, machine learning, and advanced analytics. Let’s get started. Dany, how do you see AI and science intersecting?

Dany:
Mm-hmm (affirmative). That's a broad question, and an interesting one. I would add, a very hard question. Thank you for starting off with a hard question. It's almost like asking how do you see the use of Excel and science intersecting. In a way, it's kind of everywhere within the field of science.

Maybe we can summarize it and say it's going to change the way science is done, fundamentally change the way science is done. I'll give you just one example. One sub-area of AI is NLP, natural language processing. In one of the projects I've done recently, we wanted to look at what was published about a particular topic, but then go way beyond that into the scientific literature. We really wanted to go beyond what people typically do in their own domain: can we have a look at everything, can we find some patterns here? We went through that process, and went through the catalog of one of the major publishers. Just to give you a feel of how things will change: it took the machines a week or so, not even a week, to read through everything, to ingest everything. If you or I wanted to do the same thing, just read all that scientific literature, not even talking about thinking about it, doing something with it, remembering it, it would take us more than 300 years, speed reading.
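
To make that concrete, here is a minimal sketch of the kind of literature-mining pipeline described above, in Python with scikit-learn: vectorize article abstracts and factor them into latent topics. The tiny corpus, the vectorizer settings, and the topic count are illustrative assumptions, not details from the actual project.

    # A minimal sketch of literature-scale pattern finding with NLP.
    # Corpus, settings, and topic count are illustrative assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF

    abstracts = [
        "Machine learning predicts vaccine response from gene expression profiles.",
        "Humidity and temperature modulate allergic rhinitis symptom severity.",
        "Topic models reveal latent structure in large biomedical corpora.",
        # ...in practice, the hundreds of thousands of articles in a publisher's catalog
    ]

    # Turn each abstract into a weighted term vector.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(abstracts)

    # Factor the corpus into a few latent "topics" (cross-domain patterns).
    nmf = NMF(n_components=3, random_state=0)
    nmf.fit(X)

    # Top terms per topic: themes no single reader could extract
    # from 300 years' worth of reading.
    terms = vectorizer.get_feature_names_out()
    for i, component in enumerate(nmf.components_):
        top = [terms[j] for j in component.argsort()[-5:][::-1]]
        print(f"topic {i}: {', '.join(top)}")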

It shows you the scale of what we are getting into. In that example, there's no longer a discussion of, yeah, I can still do that myself, and maybe the machines can help me. No. We are really going into a field where these approaches can help us do things we simply couldn't do before. That's just one example, and there are many. If you add everything up, we really have to do science in a very different way.

Jon:
What are the primary uses of AI in scientific research?

Dany:
For sure you have many applications in the field of what's called biomarkers, markers in your blood, for example, which can help predict how you will respond to a medicine or a vaccine. That information can then be used to make better medicines and better vaccines, effective for more people. That's one area. You can certainly use it, and it's being used, not only to find relationships between data, with big data in support, but also to try to figure out the cause behind them. Say, for example, you see a certain effect of a medicine in adults, but you don't see it in children. Okay, that's a fact, it's there, it's clear, but why is that? Is it just because you're now 20 years old, and no longer 12 years old? What's behind that? That has always been very difficult to identify. With AI today, you can go search for that.
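
As a toy illustration of the biomarker idea, here is a hedged sketch in Python: train a classifier to predict drug response from blood-marker measurements, then inspect which inputs drive the prediction. The marker names, values, and model choice are hypothetical, not data from any real trial.

    # Toy sketch: predict response to a medicine from blood markers.
    # Marker names, values, and the model are hypothetical illustrations.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each row: [marker_A, marker_B, age]; label 1 = responded to the drug.
    X = np.array([
        [2.1, 0.4, 34], [1.8, 0.5, 29], [2.3, 0.3, 41],   # adult responders
        [0.6, 1.9, 12], [0.7, 2.1, 10], [0.5, 1.7, 13],   # child non-responders
    ])
    y = np.array([1, 1, 1, 0, 0, 0])

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Which inputs drive the prediction? Here age separates the groups as
    # cleanly as the markers do -- exactly the "is it really age, or
    # something behind age?" question raised above.
    print(dict(zip(["marker_A", "marker_B", "age"], model.feature_importances_)))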

What's the underlying reason, what's behind that, what's below that, what is actually the real reason why I see this age effect, for example? Certainly there's a lot of work going on in genomics, and a lot of work, coming back to my first example, in finding patterns in literature. That's something we as humans cannot really do, or historically could only do to a certain extent, at a small scale. If you know that 50% of the scientific literature is basically never read, except by the people who write the article and the reviewer, that's a lot of information, a lot of knowledge, out there that nobody is really aware of. Using AI to start connecting the dots and finding patterns, that's something that is now possible, as one of the uses.

Then certainly there's a lot of effort in what is called real-world evidence, real-world data. When you look at clinical trials, these are typically very controlled environments, everything is really ideal, super controlled, so that you can draw a very good conclusion. But we live in a world, and whether you live in Africa, in Mexico, or in the United States makes a difference. We come with very different backgrounds. Maybe climate, maybe weather patterns influence your trial, and you don't know, because you have never looked at these things. Now you can start to combine all those different elements into your AI project and have a much richer picture of what's going on. These are just a few examples in scientific research, but even these alone show how fundamentally this will change the path forward.
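
To show how real-world data can be folded into a trial data set, here is a small sketch using pandas: join each participant's trial record with local weather by site and date. All column names and values are hypothetical.

    # Toy sketch: enrich trial records with real-world weather data.
    # Column names and values are hypothetical.
    import pandas as pd

    trial = pd.DataFrame({
        "participant": ["p1", "p2", "p3"],
        "site":        ["Mexico City", "Boston", "Nairobi"],
        "visit_date":  pd.to_datetime(["2018-06-01", "2018-06-01", "2018-06-02"]),
        "response":    [0.8, 0.4, 0.9],
    })
    weather = pd.DataFrame({
        "site":     ["Mexico City", "Boston", "Nairobi"],
        "date":     pd.to_datetime(["2018-06-01", "2018-06-01", "2018-06-02"]),
        "humidity": [45, 70, 55],
    })

    # One merge, and a factor nobody used to look at is in the model's view.
    enriched = trial.merge(weather, left_on=["site", "visit_date"],
                           right_on=["site", "date"]).drop(columns="date")
    print(enriched)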

Jon:
It sounds to me like AI will be critical to the implementation of personalized medicine.

Dany:
Yeah, exactly. On the personalized idea exactly: historically you couldn't bring so many different types of information together. Let's say you're sensitive to allergies or some kind of reactions. Historically these things would be evaluated in a certain way, at the population level. Now you can say, actually, hey, Jonathan, do you have air conditioning at home? Are you running it all the time, only when it's cold, only when it's hot? That changes the humidity in your home. Where do you live? Do you live in an area of the country that's more humid all the time, or did you just move to an area that's different? All these elements, which before we had no way to capture, all of that becomes usable.

Also, all that information is becoming more and more digitized, so it's becoming available. You could always record humidity and any other factor you might think has an impact, but that would be on paper; it would need to be put into computers, a lot of effort. And in many cases a lot of that information is actually of no relevance, but you don't know. Now with AI, you can say, let's look at this and figure it out. Yes, some of that information will turn out to be useless, but okay, let's have a look. Before, you had the trade-off: am I going to look at this information, it's going to cost me so much money, and maybe there's nothing in there, am I going to do this, yes or no? Now that becomes much easier.
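
One way to "have a look and figure out what's useless" is a cheap relevance screen before any heavy modeling. Below is a minimal sketch using mutual information in scikit-learn; the factor names and the synthetic data are assumptions for illustration.

    # Minimal sketch: score each digitized factor for how much it tells
    # us about allergy flare-ups. Factors and data are hypothetical.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    n = 500
    humidity  = rng.uniform(20, 90, n)   # % relative humidity at home
    ac_hours  = rng.uniform(0, 24, n)    # air-conditioning hours per day
    shoe_size = rng.uniform(5, 13, n)    # probably irrelevant -- but check

    # Synthetic "truth": flare-ups driven by humidity plus noise.
    flare_up = (humidity + rng.normal(0, 10, n) > 70).astype(int)

    X = np.column_stack([humidity, ac_hours, shoe_size])
    scores = mutual_info_classif(X, flare_up, random_state=0)
    for name, s in zip(["humidity", "ac_hours", "shoe_size"], scores):
        print(f"{name}: {s:.3f}")   # near-zero score: likely safe to drop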

Jon:
How would you advise a bio-sciences startup on investing in artificial intelligence? Is AI now a piece of critical infrastructure?

Dany:
Yeah. The answer is yes. It's hard to imagine starting a business now and saying, actually, we do not need AI, we'll be okay without it for the coming years. That's probably putting yourself at a disadvantage. So certainly, invest in AI, but I would also advise them to think deeply about what exactly they want to do, and how. It's not only about the technology. The technology is there, and it's accelerating at a fantastic pace. What was not possible last year, now somebody has a solution for it, and it can be done easily. Likewise, what is not possible today will be tomorrow; a year from now, somebody will find a solution for it. On the technology side, you just need to make a choice. Don't wait until you have the best possible answer, because it's evolving even as you're selecting the type of AI you want to use.

I think it's at least as important to really know which problem you're trying to solve, not just AI for AI's sake, but really asking, is this something where AI can make a difference? Not just a little bit better, but doing something that gives you an advantage as a business, in whatever area, whether it's research, regulatory, manufacturing, whatever. Really look at that. Then certainly, and this is a topic I see almost never being discussed, really think about how you do it and how you build your team around it. One thing I've seen over and over again is that when you do these types of implementations, you obviously have people coming from different areas. You have the scientist, the data scientist, and the business person, and they all speak English, but they don't actually speak the same language. They come from very different angles, which is normal.

I think it's something that really needs to be taken care of and looked at. Because you can very easily end up using AI and saying, yeah, we're so great, we're using AI, and the data points you to a solution, to a really good next step, to an insight. But if those data cannot be translated into something the business can use, that the scientists can use, then it may get stuck there. The message does not come through, and you have spent money and time and used AI, but nothing has really advanced. You can easily get to a situation where people conclude, this was not worth it, this did not work out, while it was just a matter of translation.

If businesses want to invest in AI, I would say the technology is certainly important, but really don't forget the people in that mix. Because what also happens, and this is where I think we will need to evolve in how we approach things, is that historically the expert was someone who knew the most. The older you got, the more you knew, and the smarter you were considered to be. It was about knowledge. Now this is shifting: the knowledge is out there, and anyway, as I mentioned with my example of the 300 years of reading, you cannot hold it all. So now use your smartness, use your knowledge, to ask the right questions.

I think it's going to be more about that: what do we actually want to find out here? When the results come back, okay, what are the next steps, how are we going to iterate on that? And having a very open mind to the results, because it's not unlikely, speaking of scientists, that they may say, hey, this result produced by my AI, I don't believe it, because I've never seen it. What I'm describing here is not hypothetical. It's something I really lived through, where the expert said, well, this cannot be true because I have not seen it. You need people in the room who know the field but can take a step back, who have a more open mindset and can say, well, let's remember why we're doing this AI approach: we're trying to find things we didn't know. Yes, there might be wrong information coming out, certainly that's a possibility, but what if this is true? We need to think about this. That's where you need to use your smartness: how can I prove this, or how can I disprove this? That's not easy for many people.

Jon:
How do you evaluate the AI results? How do you know if it’s making good or erroneous decisions?

Dany:
It starts with the data, what we put into the AI systems. With some data sets you might have to conclude that they're not extensive enough or not complete enough. It doesn't always have to be big, big data. It can be small data, and that's okay, but at least it needs to be a coherent data set to start with. That's one part of it. Certainly, if you think about systems that are going to look at data about humans: if you develop a system, let's say facial recognition, and you only use pictures of white people over fifty, it's not going to do the best job, and it might lead to really wrong conclusions in the end. That is certainly going to happen; I would say that's a guarantee. One way forward is to make sure people are aware of that, aware of whether the data set is representative. It sounds like common sense, but very often it's not looked at that way.
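
The representativeness check can be as plain as auditing the training set's composition before training anything. Here is a minimal sketch; the group labels and the target shares are illustrative assumptions.

    # Minimal sketch: audit a face data set's composition before training.
    # Group labels and target shares are illustrative assumptions.
    from collections import Counter

    # Metadata for each training image (hypothetical).
    training_set = [
        {"age_group": "50+", "skin_tone": "light"},
        {"age_group": "50+", "skin_tone": "light"},
        {"age_group": "18-29", "skin_tone": "dark"},
        # ...thousands more in a real audit
    ]

    # Rough shares in the population the system will actually see (assumed).
    target_shares = {"light": 0.5, "dark": 0.5}

    counts = Counter(row["skin_tone"] for row in training_set)
    total = sum(counts.values())
    for group, target in target_shares.items():
        actual = counts.get(group, 0) / total
        flag = "  <-- under-represented" if actual < target else ""
        print(f"{group}: {actual:.0%} of data vs {target:.0%} of population{flag}")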

Sometimes the data are simply not available, and then you say, let's go with what we have. That can also be a reason. It's something to really be careful about. Then, when the results come out, at this stage I would certainly not blindly conclude on what comes out of these systems, or accept it as the truth. It's an enabling technology. It's there to help you. It's pointing you in certain directions, saying maybe look over here: based on the data, I see A or B. Maybe you should look over there, and maybe over there is an area you would never have thought about. That's where the usefulness of AI comes in, to start looking at that.

If it seems really kind of crazy, ask why it's crazy. At one moment in history, people thought the Earth was flat, and it was a crazy idea to think anything different. In the end, we all agree that it's not flat. Again, I think we come back a little bit to that open mindset: ask why would this be, instead of immediately saying this is wrong, this cannot be, I cannot explain it. Maybe the right attitude is to say, how can I actually show that, how can I actually do that?

If I can give one example: let's say you're speaking at a conference, and you tell people, my AI has figured out that you're all here in this room because you came through that door at the back of the room. Everybody will say, well, yeah, okay, I don't need AI to figure that out, but it's right, I agree with your AI, it makes 100% sense. If you then say that same AI, the one you just said you believed, has figured out that the people sitting on the left side of the room are more the introvert type, and the people sitting on the right side are more the extrovert type, then probably most people would say, okay, that's a little bit weird, a little bit crazy, so it cannot be true. Maybe it is true. Maybe there's a reason for it. Maybe it has to do with how people entered the room, the timing, the setup; maybe there's some reason behind it. But you cannot disqualify it out of hand. You can ask, how could that be, how can we explain this? I think that's the new kind of thinking, the open-mindedness, people will need, to avoid having a really great result sitting in front of them and not seeing it.

Jon:
People will use technology for both good and bad purposes. What is the potential for AI to be used for malicious reasons? How might that play out?

Dany:
Well, people will use whatever tools we have available in good and less-good ways. You can expect the same thing to happen here, for sure. You already see it in general. If you step a little bit outside of science, you can have military robots: you can use them to go to war, or you can use them to help pull people out of the rubble after an earthquake. There are always these two sides. In science, yes, you could say, if it's thanks to AI that you find a really powerful bio-agent that can create a lot of bad consequences, should you go there? It's not one of those things that is going to be stopped. Somebody will be working on that.

For sure, there's a privacy element to look into. What if, and we started seeing this not long ago, an insurance company says, I want you to have a wearable, and I want to have your data, and that's the condition for insurance? That's coming very close to saying, I want access to your data, or there will be consequences: you may not have insurance, you may not have healthcare, and it's going to impact your future life unless you agree to hand over your data. Is that good or bad? You could say it's good, it's good for your health, in one way, but is it also leaving people behind in the process? What if, for whatever reason, you have something in your history or your family history that somebody figures out, or assumes, using AI but not in the right way, and says, all people who have this in their family history are at a much higher risk for cardiac problems, so we're not going to insure you, Jonathan, because that's what our systems have told us?

Then how do you make sure the data were correct, that the conclusions were correct? That is going to be a fine line, and we cannot assume it will all automatically go well. Just look at today: you have a lot of those health systems that say, put in your information, cholesterol level, how much do you walk, do you do this, do you do that, is your lifestyle healthy and sound, and then they give you a score and compare you to the group. Okay, that's interesting, and it might be very interesting to know where you stand, but it's based on somebody having decided that this is the way these analyses need to be made.

Speaking very personally, those packages and approaches don't ask whether you run marathons or half marathons, which I occasionally do, so I have no place to put that information. I would think my health is probably pretty okay if I can do that, but I cannot feed it into the system, and now the system is going to make a decision about me and transfer that decision to whoever is using it, typically without my knowledge. That may not be the best use of AI.

Jon:
How do we go about securing the public trust in the development and use of AI?

Dany:
There are probably different ways to at least make an attempt at this point. You see some companies making declarations, saying, we commit to the ethical use of AI. That's a good start. We'll have to see whether it's also done in practice, but it's a good start, to say, we don't want to go there. Yes, we could go there, we have the knowledge, we have the tools to use it in a bad way, but we want to stay away from that. That's certainly one way.

We also have laws. There are some laws on data privacy, so you cannot do everything you want, even if you have the data. But if you're a company that says, hey, I don't really care so much about that, I'll handle it when people realize this has happened, and anyway maybe there's a fine to be paid and that's just a detail in my business model; if no example is made, if there's no consequence, obviously you're just going to see more of that. If you want to create some public trust, maybe there needs to be some containment there. But all of this, I would say, is hard to implement, hard to ensure, so that overall we can really trust how AI is developed and how it is used.

Unfortunately, I think, if history is any guide, it may well be that we need to wait for something really, really big and negative to happen before big measures are taken, before we say, whoa, look at the risks. There are different levels of AI, there are different uses of AI: there are things that have no negative impact, and there are applications where things can go really badly, and for those maybe we need to regulate, or have some validation system, some process. If you look at what has historically happened with medicines and biologicals and so forth, there was an evolution that led to the FDA, the Food and Drug Administration, where you say, I want to bring this to the market, and there is a defined approach, there are phases, there are things to be checked, and there are people checking the ones who are doing the work.

The point is to minimize the chance of harm, so that when a product comes onto the market it is indeed effective and safe. Maybe that same thinking will come up in the future for certain [inaudible 00:29:13] applications. Just think about resume handling, insurance, and all these domains where you submit information, you don't know what happens, and then somebody, somewhere, somehow reaches a conclusion, which might actually be wrong, depending on who created the tool, who created the AI. Maybe in some instances you need some type of organization or some type of rules that say, you need to get through these checkpoints before we release this to the world. That might be one way. We'll see. We'll see how it evolves.

Obviously, I think we'll have a little bit of both at the same time. If, out of a whole stadium of people, facial recognition lets you spot the bad guy nobody could catch, and you find this person that way, you can say that's great for society. But you can also use it in other ways, because it means everybody is now recorded and can be found, even when the intention behind it is good.

Yes, it’s a new field. It’s a new area. We’ll have to get used to it, and find ways to integrate that into society. That may be the next step.

Jon:
Thanks, Dany, for coming on the show.

Dany:
Thank you for having me. Thank you very much. It was nice. Thank you, Jonathan, for the very interesting questions. Thank you very much.

Jon:
Listeners, remember that while you're listening to the show you can follow along with the things that we're mentioning here in real time. Just head over to TheDigitaLife.com, that's just one L in TheDigitaLife, and go to the page for this episode. We've included links to pretty much everything mentioned by everyone, so it's a rich information resource to take advantage of while you're listening, or afterward if you're trying to remember something that you liked. You can find The Digital Life on iTunes, SoundCloud, Stitcher, Player FM, and Google Play, and if you'd like to follow us outside of the show you can follow me on Twitter at JonFollett, that's J-O-N-F-O-L-L-E-T-T. And of course the whole show is brought to you by GoInvo, a studio designing the future of healthcare and emerging technologies, which you can check out at GoInvo.com. That's G-O-I-N-V-O dot com.

Dany, how can people get in touch with you?

Dany:
A good way is to go through Twitter, so at DanyDeGrave, D-A-N-Y-D-E-G-R-A-V-E, one word. That’s a good way to do that.

Jon:
That’s it for episode 279 of the Digital Life. I’m Jon Follett, and I’ll see you next time.
