Bull Session
AI and Science
October 12, 2018
Episode Summary
This week on The Digital Life, we discuss the intersection of artificial intelligence and science with special guest Dany DeGrave, founder of Unconventional Innovation. AI and science are coming together in new and significant ways, including the use of cognitive and other innovative technologies in R&D, such as NLP, machine learning, and advanced analytics. As bio-science companies rush to invest in AI, the practice of scientific research, drug trials, and even personalized medicine is undergoing significant change. But with the potential for erroneous decisions, and even malicious use, it may be a long time before we fully trust AI in such development.
Maybe we can summarize it by saying it's going to fundamentally change the way science is done. I'll give you just one example. One sub-area of AI is NLP, natural language processing. In one of the projects I've done recently, we wanted to look at what was published about a topic, but then go way beyond that into the scientific literature. We really wanted to go beyond what people typically do in their own domain: can we look at everything, can we find some patterns here? We went through that process with the catalog of one of the major publishers. Just to give you a feel for how things will change: it took the machines about a week, not even a week, to read through everything, to ingest everything. If you or I wanted to do the same thing, just read all that scientific literature, not even talking about thinking about it, doing something with it, remembering it, it would take us more than 300 years of speed reading.
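To make the scale behind that figure concrete, here is a back-of-envelope estimate in Python. The catalog size, article length, reading speed, and hours per day are purely illustrative assumptions, not numbers from the project Dany describes, but with values in this range you land in the hundreds of years.

```python
# Back-of-envelope estimate of how long a human would need to read a large
# publisher's catalog. Every figure below is an illustrative assumption,
# not a number from the project discussed in the episode.
papers = 4_000_000        # assumed size of the catalog
words_per_paper = 6_000   # assumed average full-text length
speed_wpm = 500           # assumed fast "speed reading" pace, words per minute
hours_per_day = 8         # assumed reading time per day

total_words = papers * words_per_paper
years = total_words / speed_wpm / 60 / hours_per_day / 365

print(f"Total words: {total_words:,}")
print(f"Years of full-time speed reading: {years:,.0f}")
```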
That shows the scale of what we are getting into. In that example, there's no longer a discussion of, yeah, I can still do that myself, and maybe the machines can help me. No. We are going into a field where these approaches let us do things we really couldn't do before. That's just one example, and there are many. If you add everything up, we really have to do science in a very different way.
What's the underlying reason, what's behind that, what's below that, what is actually the real reason why I would see a difference in this age effect, for example? Certainly there's a lot of work going on in genomics, and, coming back to my first example, a lot of work in finding patterns in literature. That's something we as humans cannot really do, or can only do to a certain extent, at a small scale. If you know that 50% of the scientific literature is basically not read, that it's only read by the people who write the article and the reviewer, that's a lot of information out there, a lot of knowledge, that nobody is really aware of. If we can use AI to start connecting the dots and finding patterns, that's something that is now possible.
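As a rough illustration of what "connecting the dots" across literature can look like in practice, here is a minimal topic-modeling sketch in Python using scikit-learn. The four toy "abstracts" and the choice of TF-IDF plus NMF are assumptions for demonstration, not the method used in the project described above.

```python
# Minimal sketch of surfacing shared themes across paper abstracts with NLP.
# This is an illustrative toy example, not the pipeline from the project
# discussed in the episode; the "abstracts" below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

abstracts = [
    "Machine learning models predict protein folding from sequence data.",
    "Deep learning improves prediction of protein structure and binding.",
    "Climate variation alters mosquito populations and malaria transmission.",
    "Rainfall and temperature patterns drive seasonal malaria incidence.",
]

# Turn each abstract into a weighted bag-of-words vector.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)

# Factor the corpus into 2 latent topics and list the top terms of each.
nmf = NMF(n_components=2, random_state=0)
nmf.fit(X)
terms = tfidf.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```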
Then there's certainly a lot of effort around what is called real-world evidence, real-world data. When you look at clinical trials, typically these are very controlled environments. Everything is ideal and super controlled so that you can draw a really good conclusion. But we live in the real world, and whether you live in Africa, in Mexico, or in the United States makes a difference. We come with very different backgrounds. Maybe the climate, maybe weather patterns, influence your trial, and you don't know because you have never looked at these things. Now you can start to combine all those different elements into your AI project and get a much richer picture of what's going on. These are just a few examples in scientific research, but even taking just these ones, it already shows how fundamentally this will change the path forward.
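To make that concrete, here is a minimal sketch of what folding an external real-world covariate into trial data could look like, using pandas. The sites, dates, and the idea of joining on local weather are hypothetical placeholders, purely to show the mechanics.

```python
# Minimal sketch of enriching clinical trial records with an external
# "real-world" covariate such as local weather. The column names and the
# tiny inline datasets are hypothetical, purely for illustration.
import pandas as pd

trial = pd.DataFrame({
    "subject_id": [101, 102, 103],
    "site":       ["Nairobi", "Mexico City", "Boston"],
    "visit_date": pd.to_datetime(["2018-06-01", "2018-06-01", "2018-06-02"]),
    "outcome":    [0.82, 0.75, 0.91],
})

weather = pd.DataFrame({
    "site":    ["Nairobi", "Mexico City", "Boston"],
    "date":    pd.to_datetime(["2018-06-01", "2018-06-01", "2018-06-02"]),
    "rain_mm": [12.0, 0.0, 3.5],
    "temp_c":  [19.0, 24.0, 21.0],
})

# Join on site and date so each trial record carries its local conditions,
# which can then enter the analysis as extra covariates.
enriched = trial.merge(weather, left_on=["site", "visit_date"],
                       right_on=["site", "date"]).drop(columns="date")
print(enriched)
```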
Also, because all that information is becoming more and more digitized, it's simply becoming available. You could record humidity and any other factor you think might have an impact, but before, that would be on paper; it would need to be put into computers, which is a lot of effort. And in many cases, a lot of that information turns out to be of no relevance, but you don't know in advance. Now with AI, you can say, let's look at it and figure it out. Yes, some of that information will be useless, but okay, let's have a look. Before, you had the trade-off of: am I going to look at this information, it's going to cost me so much money, and maybe there's nothing in there, so am I going to do this, yes or no? Now that becomes much easier.
I think it's at least as important to really know which problem you're trying to solve, and not do AI for AI's sake, but ask, is this something where AI can really make a difference, not just a little bit better, but something that gives you an advantage as a business? Whatever area of your business it is, research, regulatory, manufacturing, really look at that. Then, certainly, and that's a topic I see almost never being discussed, really think about how you do that and how you build your team around it. One thing I've seen over and over again is that when you do these kinds of implementations, you obviously have people coming from different areas. You can have the scientist, the data scientist, and the business person, and they all speak English, but they don't actually speak the same language. They come from very different angles, which is normal.
I think it's something that really needs to be taken care of and looked at. Because you can very easily end up using AI and saying, yeah, we are so great, we're using AI, and you have data that points you to a solution, to a really good next step, to an insight, but if that data cannot be translated into something the business can use, something the scientists can use, then it gets stuck there. The message does not come through, and you have spent money and time and used AI, but nothing has really advanced. You can easily get to a situation where people conclude this was not worth it, this did not work out, while it was just a matter of translation.
If businesses want to invest in AI, I would say the technology is certainly important, but really don't forget the people in that mix. Something else that happens, and where I think we will need to evolve in how we approach things, is that historically the expert was someone who knew the most. The older you got, the more you knew, and the smarter you were considered to be. It was about knowledge. Now I think this is shifting: the knowledge is there, and as I mentioned with my example of the 300 years of reading, you cannot compete with that, so now use your smartness, use your knowledge, to ask the right questions.
I think it's going to be more about that: what do we actually want to find out here? When the results come back, okay, what are the next steps, how are we going to iterate on that? And having a very open mind to the result, because it's not unlikely that scientists will say, hey, this result produced by my AI, I don't believe it, because I've never seen it. What I'm saying here is not hypothetical. It's something I really lived through, where the expert says, well, this cannot be true because I have not seen it. If you then do not have people in the room who know the field but who can take a step back, who have a more open mindset, and who can say, well, let's remember why we are taking this AI approach: we're trying to find things we didn't know. Yes, there might be wrong information coming out, that's certainly a possibility, but what if this is true? We need to think about this. That's where you need to use your smartness: how can I prove this, or how can I disprove this? That's not easy for many people.
Sometimes the data are not available, and then you say, let's go with what we have. That may also be a reason. It's something to really be careful about. When the results come out, certainly at this stage, I would not blindly conclude from what comes out of these systems, or accept it as the truth. It's just an enabling technology. It's there to help you. It's pointing you in certain directions. It's saying, maybe look over here, because based on the data I see A or B. Maybe you should look over there, and maybe over there is an area you would never have thought about. That's where the usefulness of AI comes in: to start looking at that.
If it's really kind of crazy, why is it crazy? At one moment in history, people thought the Earth was flat, and it was a crazy idea to think something different. In the end, we all agree that it's not flat. Again, I think we come back a little bit to that open mindset: to say, why would this be, instead of immediately saying this is wrong, this cannot be, I cannot explain it. Maybe the right attitude is to say, how can I actually show that, how can I actually test that?
If I can just give one example: say you're speaking at a conference, and you tell people, my AI has figured out that you're all here in this room because you went through that door at the end of the room. Everybody will say, well, yeah, okay, I don't need AI to figure that out, but it's right, I agree with your AI. It makes 100% sense. If you then say that same AI, the one you just said you believed, has figured out that the people sitting on the left side of this room are more the introvert type, and the people sitting on the right side are more the extrovert type, then probably most people would say, okay, that's a little bit weird, a little bit crazy, so this cannot be true. But maybe it is true. Maybe there's a reason for it. Maybe it has to do with when they entered the room, the timing of how people came in, but you cannot disqualify it as such. You can say, how could that be, how can we explain this? I think that's a new kind of thinking, an open-mindedness people will need, to avoid having this really great result in front of them and not seeing it.
For sure, there is a privacy element to it. What if, and we started seeing this not long ago, an insurance company said, I want you to have a wearable, and I want to have your data, and that's the condition for insurance? That's coming very close to saying, I want access to your data, or there will be consequences: you may not have insurance, you may not have healthcare, and it's actually going to impact your future life unless you agree to give your data. Is that good or bad? You could say it's good for your health, in one way, but is it also leaving people behind in the process? What if, for whatever reason, you have something in your history or in your family history that somebody figures out, or assumes, or has derived using AI but not in the right way, and they say, all people who have this in their family history are at a much higher risk for cardiac problems, so we're not going to insure you, Jonathan, because that's what our systems have told us?
Then how do you make sure the data were correct, that the conclusions were correct? That is going to be a fine line, and we cannot assume that this is all going to go well automatically. If you just look at today, you have a lot of those health systems where they say, put in your information, your cholesterol level, how much you walk, whether you do this or that, whether your lifestyle is healthy and sound, and then they give you a score and compare you to the group. That's interesting, and it might be very interesting to know where you stand, but it's just based on somebody having decided that this is the way these analyses need to be made.
Very personally, for myself, those packages or approaches don't ask if you run a marathon or a half marathon. Occasionally I do that, but I have no place to put that information. I would think my health is probably pretty okay if I can do that, but I cannot feed that into the system, and now the system is going to make a decision about me and transfer that decision to whoever is using it, typically without my knowledge. That may not be the best use of AI.
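As a small illustration of the point, here is a toy version of that kind of scoring logic in Python. The inputs, weights, and thresholds are entirely made up; the point is that a designer chose them, and anything not on the list, running marathons included, simply cannot affect the score.

```python
# Toy illustration of a fixed-weight "health score" like the ones described
# above. The inputs, weights, and thresholds are entirely made up; nothing
# outside the listed inputs (e.g. marathon running) can influence the result.
def health_score(cholesterol_mg_dl: float, steps_per_day: int, smoker: bool) -> int:
    score = 100
    if cholesterol_mg_dl > 200:
        score -= 20            # arbitrary penalty chosen by the system's designer
    if steps_per_day < 7000:
        score -= 15            # arbitrary activity threshold
    if smoker:
        score -= 30
    return max(score, 0)

# A marathon runner with slightly high cholesterol and a desk job:
print(health_score(cholesterol_mg_dl=210, steps_per_day=6000, smoker=False))  # 65
```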
We have laws. There are some laws on data privacy, so you cannot do everything you want, even if you have the data. But if you're a company that says, hey, I don't really care so much about that, I'll handle it when people realize this has happened, and anyway maybe there's a fine to be paid and that's just a detail in my business model, and there is no consequence, no example made of it, then obviously you're just going to continue to have more of that. If you want to create public trust, maybe there needs to be some containment there. But all of this is hard to implement, to make sure that overall we can really trust how AI is developed and how it is used.
Unfortunately, I think, if history is any guide, it might well be that we need to wait for something really, really big and negative to happen before big measures are taken and people say, whoa, the risks. There are different levels of AI and different uses of AI. There are things which have no negative impact, and there are applications where the impact can be really bad, and those ones maybe we need to regulate, or we need some validation system, some process. If you look at what has historically happened with medicines and biologicals and so forth, it has been an evolution to get to the FDA, the Food and Drug Administration, where you say, I want to bring this to the market, and there is a certain approach, there are phases, there are things to be checked, and there are people checking the ones who are doing this.
Obviously I think we'll have both a little bit at the same time. If you're able, out of a whole stadium of people, to recognize through facial recognition the bad guy nobody can catch, and you find this person that way, you can say that's great for society. But you can also use it in other ways, because it means everybody is now recorded and can be found, even when the intention behind it is good.
Yes, it’s a new field. It’s a new area. We’ll have to get used to it, and find ways to integrate that into society. That may be the next step.
Dany, how can people get in touch with you?