
Bull Session

Automating Scientific Discovery

May 11, 2017          

Episode Summary

On The Digital Life this week we’ll look at automating knowledge work, and scientific discovery, in particular. There’s no doubt that knowledge work will change significantly in the coming decades due to massive computing power coupled with AI. It’s fascinating to consider the aspects of science, technology, and design that might be easily automated. AI and deep learning are rapidly changing areas of activity that were previously thought to be the exclusive arena of human cognition. For instance, in the pharmaceutical industry, AI might automate aspects of drug discovery and development, by helping to characterize drug candidates according to likely efficacy and safety. Additionally, the number of scientific papers published each year far exceeds any scientist’s ability to read and analyze them. It’s reasonable to assume that AI and deep learning could assist scientists in navigating this data.

Resources:
Science has outgrown the human mind and its limited capacities
The BGRF is helping develop AI to accelerate drug discovery for aging and age-associated diseases

Jon:
Welcome to episode 206 of The Digital Life, a show about our insights into the future of design and technology. I’m your host Jon Follett and with me is founder and cohost Dirk Knemeyer.

Dirk:
Greetings listeners.

Jon:
For our podcast topic this week we’re going to take a look at the idea of automating knowledge work and, in particular, scientific discovery. There’s no doubt that knowledge work is going to be changing significantly in the coming decades due to both massive computing power and its coupling with artificial intelligence. So, it’s fascinating to consider the aspects of science, technology and design that might be easily automated. AI and deep learning are rapidly changing areas of activity that were previously thought to be the exclusive arena of human cognition. So, with that preamble, Dirk, I think you’re going to dig into AI a little bit and give us a primer on that.

Dirk:
Yeah, I think there’s some confusion about what AI is and how it functions. I think it’s really not rocket science, but it feels like rocket science because artificial intelligence is sort of, at the extreme of our imagination, a big and scary thing. The recent trend in AI is towards machine learning or deep learning, and there’s a lot of advances around those technologies. The technologies seem confusing because they’re in juxtaposition to the old approaches to AI which were rules-based, they were rules-based approaches. However, machine learning and deep learning are also rules based. Where they differ from the thing called rules-based is even though the AI, the software is programmed with a ton of rules, in a machine learning environment then it is using those rules within the parameter of those rules continuing to refine the way that it behaves.

In the old rules-based systems, the software was limited to only the rules written into it, so to change or extend what it could do, people would need to write new rules. That very quickly didn't scale, and it broke down when you tried to get artificial intelligence that's more powerful and doing more interesting things. And so, crucially, that's the big lesson here. It's not that we've gone from "Oh, there used to be rules" to "now there are no rules and there's this black box of wizardry called artificial intelligence." It's still rules-based, but now it's the AI that takes on and carries forward the rules, instead of relying on humans to do it.
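To make Dirk's distinction concrete, here is a minimal Python sketch (an editorial illustration, not anything discussed on the show) that contrasts a hand-written rule with a tiny learner that refines its own weights from labeled examples. The spam-filter framing and all of the data are invented.

```python
# --- Old-style "rules-based" approach: every behavior is spelled out by a person.
def rule_based_flag(message: str) -> bool:
    """Flag a message as spam only if it matches rules a human wrote."""
    banned = {"free", "winner", "prize"}          # humans must keep extending this list
    return any(word in message.lower().split() for word in banned)

# --- Learning approach: the program still follows rules (the update procedure),
#     but it adjusts its own weights as it sees labeled examples.
def train_perceptron(examples, vocabulary, epochs=10, lr=0.1):
    weights = {word: 0.0 for word in vocabulary}
    bias = 0.0
    for _ in range(epochs):
        for text, label in examples:              # label: 1 = spam, 0 = not spam
            features = {w: text.lower().split().count(w) for w in vocabulary}
            score = bias + sum(weights[w] * features[w] for w in vocabulary)
            prediction = 1 if score > 0 else 0
            error = label - prediction
            if error != 0:                        # refine the weights; no human rewrites the rules
                bias += lr * error
                for w in vocabulary:
                    weights[w] += lr * error * features[w]
    return weights, bias

examples = [
    ("claim your free prize now", 1),
    ("meeting notes attached", 0),
    ("you are a winner, free entry", 1),
    ("lunch tomorrow?", 0),
]
vocab = sorted({w for text, _ in examples for w in text.lower().split()})
weights, bias = train_perceptron(examples, vocab)

print(rule_based_flag("free prize inside"))       # True, but only because a human listed "free"
score = bias + sum(weights.get(w, 0.0) for w in "free prize inside".split())
print(score > 0)                                  # learned decision, shaped by the examples
```

The point of the contrast: to change the rule-based filter you edit its code, while the learner changes its own behavior when you hand it more labeled examples.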

Jon:
Right. Interesting. So, there's an online magazine called Aeon, where I enjoy reading essays on technology and science. There's an author by the name of Ahmed Alkhateeb, a molecular cancer biologist at Harvard Medical School, who wrote an interesting piece called "Science has outgrown the human mind and its limited capacities". I have a couple of audio clips from that essay. They very kindly provide an audio version as well as the written word, so I thought I'd use a couple of those quotes as fodder for our discussion today. Here is the first one from Ahmed's piece; it sort of defines the problem that science is facing in this age of big-data science.

Ahmed:
Science is in the midst of a data crisis. In 2016 there were more than 1.2 million new papers published in the biomedical sciences alone, bringing the total number of peer-reviewed biomedical papers to over 26 million. However, the average scientist reads only about 250 papers a year. Meanwhile, the quality of scientific literature has been in decline. Some recent studies found that the majority of biomedical papers were irreproducible.

Jon:
So that lays out two difficulties that scientists encounter, whatever their field of specialization might be. First, there are far too many published papers to keep up with, on the order of millions. Second, there's a validation problem: papers get published, but a lot of those findings are never confirmed. So you have a good chance of either not discovering something that could affect your field, or basing your work on data that hasn't been properly vetted yet. And with the time crunch for people who are obviously doing other things, they can't be sorting through the data in that fashion. Which is where there's tremendous potential for AI services to augment human knowledge workers. It's maybe a little bit frightening to consider if, as a human being, you're trained to do certain kinds of work and you've been thinking, "Hey, since this is so-called knowledge work, this area isn't going to be automated by robotics," or whatever else you might encounter in, say, manufacturing or manual labor.

Dirk:
Yeah.

Jon:
But the reality is, if the work that you're doing follows an algorithm, certain reproducible steps that could be adapted so a machine can digest the information and create the outputs you're looking for, then a percentage of your work is very much susceptible to being automated. And AI is actually going to be a way of helping you get your job done, because you're encountering these problems that you can't possibly handle with your limited throughput. Dirk, your thoughts on those elements so far?
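As a concrete, hypothetical illustration of the kind of augmentation Jon describes, the short Python sketch below ranks a handful of made-up paper abstracts against a researcher's stated interests using TF-IDF and cosine similarity. It assumes scikit-learn is installed and is not a tool mentioned in the episode.

```python
# Rank a pile of (invented) abstracts so a researcher's limited reading time
# goes to the papers most relevant to their interests.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

interests = "CRISPR off-target effects in tumor suppressor genes"

abstracts = {
    "paper-001": "We profile off-target CRISPR edits near TP53 in tumor cell lines.",
    "paper-002": "A survey of battery chemistry improvements for grid storage.",
    "paper-003": "Machine learning predicts guide RNA specificity and off-target risk.",
    "paper-004": "Dietary patterns and cardiovascular outcomes in a large cohort.",
}

# Build one vector space over the interest profile plus every abstract,
# then score each abstract by cosine similarity to the interests.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([interests] + list(abstracts.values()))
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

ranked = sorted(zip(abstracts.keys(), scores), key=lambda kv: kv[1], reverse=True)
for paper_id, score in ranked:
    print(f"{paper_id}  relevance={score:.2f}")
```

In practice the interest profile could be built from the papers a scientist has already read or cited, but the idea is the same: the machine does the sorting, the human does the reading.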

Dirk:
Yeah, that’s the way it’s happening already, right? I had a chance to tour IBM Watson Health recently, and that’s exactly what they’re doing already in healthcare. It’s certainly not theory. It’s applied. IBM Watson has their machine, basically, I’m using it in the singular but I don’t know all the different permutations behind the curtain, but they have their deep-learning approach to artificial intelligence. And what they’re doing is partnering with healthcare institutions at the level of like, say, a Mayo Clinic and going in and providing an instantiation of Watson that is specific to the context of some specialty that the healthcare organization has.

So, let’s use cancer as an example. Mayo Clinic wants to use IBM Watson Health for helping to diagnose cancer, and not just diagnose, but also provide treatment options for. Their sale, their approach is they’re saying, “Look,” very similar to the article that you had here, Jon, is that healthcare professionals, they can’t read all of the literature that’s out there. They can only read a fraction of the literature, and are they reading the right things? Is that fraction even the best things or the most correct things? There’s all these clinical trials that are happening and it’s just impossible for a doctor or for a healthcare professional to be aware of all them. And so the model is, again, using Mayo Clinic and this theoretical example, Mayo Clinic is then, essentially, providing the rules to IBM Watson Health from the cancer perspective, from the healthcare perspective, on top of the healthcare expertise that the IBM Watson Health team has already. They’re sort of co-creating how this system should operate.

And then the system gives advice to the doctor. It's saying, "Here's the array of possible things it could be. Here's the percent chance of each." The UI may not manifest in exactly that way, but it's saying, "Hey, here's guidance for diagnosis." And once a diagnosis is figured out, it's saying, "Here's guidance for treatment." So it really is becoming the brain, and the IBM Watson Health presentation was very careful to keep the doctor at the center of it. And I'm sure they are, because these healthcare systems and organizations wouldn't be buying it if the sales pitch was, "We're going to put the doctors out of business."

It doesn’t take a genius to connect the dots and say, “Hey, decades from now the doctors are going to be removed in the way that we know them to now.” Up to this point the doctor is the scientific expert repository at the center of the healthcare process. That simply won’t be the case. It will be AI along the lines of IBM Watson Health or some current or future competitor that’s providing that service. What I found interesting, but, of course, they’re not dealing with because they’re just worried about selling this system in its present incarnation and in the present environment.

What I found interesting was the service design question of what that future system should look like, because, for me as a consumer, I might want to interface with Watson Health directly. But there are other people who, by preference, maybe because they're older and not as comfortable with the technology, or for any number of other reasons, need some kind of intermediary. What should that intermediary look like? Is it a social worker? Is it somebody more like a nurse, who still has the technical healthcare background but doesn't need the scientific research skills that doctors currently have? Or something else entirely? So that's maybe a long-winded way of not answering your question, but of sharing another example of what's going on and some of the considerations around it.
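For readers who want a feel for the kind of output Dirk describes, a ranked list of candidate diagnoses with a probability attached to each, here is a toy Python sketch. It is emphatically not how IBM Watson Health works internally; the features, labels, and patient values are all invented, and it just uses an off-the-shelf classifier to produce that shape of guidance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature vectors: [age, tumor_marker_level, lesion_size_mm]  (made up)
X = np.array([
    [34, 0.2,  4.0],
    [61, 3.1, 22.0],
    [58, 2.8, 18.0],
    [29, 0.1,  3.0],
    [66, 1.4, 30.0],
    [45, 0.3,  6.0],
])
y = np.array(["benign", "carcinoma", "carcinoma", "benign", "lymphoma", "benign"])

model = LogisticRegression(max_iter=1000).fit(X, y)

new_patient = np.array([[59, 2.5, 20.0]])
probabilities = model.predict_proba(new_patient)[0]

# Present the guidance the way the transcript describes it: every candidate
# diagnosis with its percent chance, leaving the decision to the clinician.
for label, p in sorted(zip(model.classes_, probabilities), key=lambda t: -t[1]):
    print(f"{label:<10} {p:>6.1%}")
```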

Jon:
Here’s second quote from the essay I was referring to which lays out the hypothesis for scientific inquiry being automated via AI.

Ahmed:
One promising strategy to overcome the current crisis is to integrate machines and artificial intelligence in the scientific process. Machines have greater memory and higher computational capacity than the human brain. Automation of the scientific process could greatly increase the rate of discovery. It could even begin another scientific revolution. That huge possibility hinges on an equally huge question: can scientific discovery really be automated?

Jon:
So, I think it’s interesting that he posits that question there in, say, like big pharma. I think part of that question’s already being answered. There are elements of pharma research where to increase the pace of drug discovery and development they’re sort of already using AI to help search through all this data. And while that’s still in it’s nascent stages, I think the question of whether or not AI can be helpful to scientific discovery, I think that’s a positive, that’s very much where things are going.

To go back to your earlier comment about Watson and the service design components of this, I do think there's a tremendous amount of, we'll call it service design and change management and practice design, all of those things that go into the new business processes that will be engendered by this coupling of the expert with their assistant AI, or however you want to frame that relationship right now. What that ultimately means is that in the configuration of scientific discovery, whether it's academic research or big pharma research or even healthcare, there are changing roles that need to be considered in conjunction with the development and deployment of technologies like IBM Watson, right?

We’re way too focused on the new engine that could, right? So we’re plopping these engines down into various business contexts and not really reconfiguring the businesses and the processes round those engines. So, there are going to be lots of fail points at the beginning because doctors aren’t going to readily be working with AI in a productive manner. It’s going to cause friction, right?

Dirk:
Yeah.

Jon:
There’s an emotional component there, I’m sure.

Dirk:
Yeah.

Jon:
I’m sure that some of the best practices will figure it out, but it will be because they considered the work flow and the relationships. It’s almost creating an entire area of design or service design area that is taking the technological components and really sort of reconfiguring practices to map to those. And I think that that evolutionary process is going to move a lot quicker as well. So, you’re going to need to bring in the technology, change your business process, go through whatever that adoption period is, get productive, get profitable, and then there’s going to be this next wave of technology that’s going to upend things again. I think we’re sort of into this continuous change era. And the reason I bring all this up is because the big promise of electronic health records was that it was going to digitize health information at hospitals and make better healthcare happen, right?

Dirk:
Yeah.

Jon:
Guess what? It’s not happening, folks.

Dirk:
No.

Jon:
It’s not happening because no one can use the technology and the technology is not compatible. You can’t share medical records, et cetera, et cetera. You’re going to see these same kinds of problems as you’re digitizing other aspects of knowledge work. By that standard the EHR is like just a drop in the bucket, right?

Dirk:
Yeah.

Jon:
The amount of pain people are going through to implement those is just a tiny fraction of what's coming. So, that's a really long-winded way of saying that change management and service design are going to become big players in this as well.

Dirk:
That’s really true, and if this article is correct that may just be the tip of the iceberg. It’s saying a next scientific revolution. That’s awe-inspiring, and I say that from the standpoint of where we already are now with science is we understand many of the mysteries of the universe, or we think we do. We have hypotheses that often seem to prove-out, although we then discover new things that prove it differently. From a science perspective it doesn’t seem like there’s that much to learn. I wonder if the next frontier that AI actually helps us to figure out and solve, ironically, may be about ourselves, if maybe it will be through the computers that we’re finally able to understand the human animal and how and why we function and create clarity where now there is murkiness around ourselves. That would be a delicious irony given all of the fear around AI being so machine-focused and taking things in non-humanistic ways. Maybe it will be the ultimate lens through which to view humanity.

Jon:
Yeah, that’s a good point, Dirk. I think also within this realm of change we’re also going to see further splintering of these research and other capabilities. You can see it sort of with the citizen science movement where folks are, basically, creating their own wet labs, like in their dorm room or in their basement or whatever and pushing the science forward in small but important ways.

So, with an AI assistant, you can all of a sudden see the possibilities for groundbreaking discoveries coming from a really dispersed workforce, where the work is no longer simply lab-centric or university-centric. It can be further crowd-sourced, creating an acceleration pattern where discoveries build on one another. I think that might be what Ahmed means when he talks about another scientific revolution: harnessing the "power of the crowd", right? That's an awful buzzword, but it means harnessing more citizen science in addition to our standard institutions of academia and big pharma.

Dirk:
Sure.

Jon:
So, I guess the conclusion I come to is that we're still at the very early stages, but I always approach these things with a little bit of trepidation, since I'm a knowledge worker. I'm a designer and a writer, and any time you start talking about automating knowledge work, it freaks me out a little bit. Why wouldn't it? But I think we can see that, whether we're talking about automating aspects of scientific discovery or other areas, these are tools of enhancement, at least in the medium term. So perhaps, in addition to a little bit of trepidation, we can also be excited about the possibilities.

Listeners, remember that while you’re listening to the show you can follow along with the things that we’re mentioning here in real time. Just head over to thedigitlife.com. That’s just one “L” in the Digital Life, and go to the page for this episode. We’ve included links to pretty much everything mentioned by everybody, so it’s a rich information resource to take advantage of while you’re listening or afterward if you’re trying to remember something that you liked. You can find the Digital Life on iTunes, SoundCloud, Stitcher, Player FM and Google Play. And if you want to follow us outside of the show you can follow me on Twitter @jonfollett, that’s J-O-N F-O-L-L-E-T-T. And, of course, the whole show is brought to you by Involution Studios, which you can check out at goinvo.com, that’s G-O-I-N-V-O.com. Dirk?

Dirk:
You can follow me on Twitter @dknemeyer, that’s @ D-K-N-E-M-E-Y-E-R, and thanks so much for listening.

Jon:
So that’s it for episode 206 of The Digital Life. For Dirk Knemeyer, I’m Jon Follett, and we’ll see you next time.


Jon Follett
@jonfollett

Jon is Principal of Involution Studios and an internationally published author on the topics of user experience and information design. His most recent book, Designing for Emerging Technologies: UX for Genomics, Robotics and the Internet of Things, was published by O’Reilly Media.

Dirk Knemeyer
@dknemeyer

Dirk is a social futurist and a founder of Involution Studios. He envisions new systems for organizational, social, and personal change, helping leaders to make radical transformations. Dirk is a frequent speaker who has shared his ideas at TEDx, Transhumanism+ and SXSW along with keynotes in Europe and the US. He has been published in Business Week and has participated on 15 boards spanning industries like healthcare, publishing, and education.

Credits

Co-Host & Producer

Jonathan Follett @jonfollett

Co-Host & Founder

Dirk Knemeyer @dknemeyer

Minister of Agit-Prop

Juhan Sonin @jsonin

Audio Engineer

Michael Hermes

Technical Support

Eric Benoit @ebenoit

Brian Liston @lliissttoonn

Original Music

Ian Dorsch @iandorsch

Bull Session

Storytelling and AI

April 20, 2017          

Episode Summary

On The Digital Life this week we explore storytelling, creativity, and artificial intelligence. Our cultural evolution is reflected in our ability to communicate through stories, creating shared experiences and meaning. Recent research from the University of Vermont and the University of Adelaide used an AI to classify the emotional arcs for 1,327 stories from Project Gutenberg’s fiction collection, identifying six types of narratives. Could these reverse-engineered storytelling components be used to build automated software tools for authors, or even to train machines to generate original works? Online streaming service Netflix already uses data generated from users’ movie and television preferences to help choose its next shows. What might happen when computers not only pick the shows, but also write the scripts for them?
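For a rough sense of how an emotional arc can be extracted, here is a small Python sketch in the spirit of that research, not the researchers' actual method: it slides a window over a story and scores each window with a tiny, made-up valence lexicon. The real study used a much larger lexicon and full Project Gutenberg texts.

```python
# Score each sliding window of a story; the resulting list is the emotional arc.
VALENCE = {
    "love": 2, "happy": 2, "hope": 1, "win": 1,
    "loss": -1, "fear": -2, "death": -2, "ruin": -2,
}

def emotional_arc(text: str, window: int = 12, step: int = 6):
    words = text.lower().split()
    arc = []
    for start in range(0, max(1, len(words) - window + 1), step):
        chunk = words[start:start + window]
        arc.append(sum(VALENCE.get(w.strip(".,!?"), 0) for w in chunk))
    return arc

story = (
    "hope and love fill the village until fear arrives and death follows "
    "the hero faces loss and ruin but fights to win and happy days return "
    "love endures and hope is restored"
)
print(emotional_arc(story))   # dips, then recovers: a rough "man in a hole" shape
```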

Resources:
The Six Main Arcs in Storytelling, as Identified by an A.I.
The strange world of computer-generated novels
A Japanese AI program just wrote a short novel, and it almost won a literary prize

Bull Session

Ethics and Bias in AI

March 24, 2017          

Episode Summary

On The Digital Life this week, we discuss ethics and bias in AI, with guest Tomer Perry, research associate at the Edmond J. Safra Center for Ethics at Harvard University. What do we mean by bias when it comes to AI? And how do we avoid including biases we’re not even aware of?

If AI software for processing and analyzing data begins providing decision-making for core elements critical to our society, we'll need to address these issues. For instance, risk assessments used in the correctional system have been shown to incorporate bias against minorities. And when it comes to self-driving cars, people want to be protected, but they also want the vehicle, in principle, to "do the right thing" when encountering situations where the lives of both the driver and others, like pedestrians, are at risk. How should we deal with this? What are the ground rules for ethics and morality in AI, and where do they come from? Join us as we discuss.

Resources
Inside the Artificial Intelligence Revolution: A Special Report, Pt. 1
Atlas, The Next Generation
Stanford One Hundred Year Study on Artificial Intelligence (AI100)
Barack Obama, Neural Nets, Self-Driving Cars, and the Future of the World
How can we address real concerns over artificial intelligence?
Moral Machine

Bull Session

Automate

January 26, 2017          

Episode Summary

On this episode of The Digital Life, we discuss workplace automation and the technologies that will make it happen, from robotics to artificial intelligence (AI) to machine learning. The McKinsey Global Institute released a new study on the topic this month, "A Future that Works: Automation, Employment and Productivity", which contains some interesting insights. For instance, almost every occupation has the potential to be at least partially automated, and it's likely that more occupations will be transformed than automated away. However, people will need to work in conjunction with machines as part of their day-to-day activities, and in this new age of automation, learning new skills will be critical. Add to this the fact that the working-age population is actually decreasing in many countries, and we can see how the story of automation is multi-faceted. The path to automating the workplace is a complex one that could raise productivity growth on a global scale.

 
Resources:
Report – McKinsey Global Institute: Harnessing automation for a future that works

Bull Session

AI Goes to the Ballpark

July 7, 2016          

Episode Summary

On The Digital Life this week we chat about technology and the great American pastime, baseball.

Just last week the Associated Press announced that it's covering Minor League Baseball games using AI software. The software, from Automated Insights, draws upon supplied game data to create a written narrative. This AI is already being used by the Associated Press to create earnings stories on U.S. public companies and by corporate customers like Edmunds.com, which uses it to generate descriptions of cars for its website.

So, AI can cover a baseball game, parsing the data and creating a narrative, but is the writing any good? So far, it seems to generate stories that are readable, but not really compelling or interesting beyond the most mundane facts. Is this the future of sports journalism? Join us as we discuss AI and baseball.
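To illustrate why template-driven recaps read as competent but flat, here is a toy Python sketch (in no way Automated Insights' actual system) that turns a small, invented box score into a sentence or two of game copy.

```python
# Turn structured game data into a readable-but-plain recap with simple templates.
def recap(game: dict) -> str:
    margin = abs(game["home_score"] - game["away_score"])
    winner, loser = (
        (game["home"], game["away"])
        if game["home_score"] > game["away_score"]
        else (game["away"], game["home"])
    )
    tone = "edged" if margin <= 2 else "beat"
    return (
        f"{winner} {tone} {loser} {max(game['home_score'], game['away_score'])}-"
        f"{min(game['home_score'], game['away_score'])} on {game['date']}. "
        f"{game['star_player']} went {game['star_line']} to lead the winners."
    )

# Invented box score for illustration only.
print(recap({
    "home": "Portland Sea Dogs", "away": "Reading Fightin Phils",
    "home_score": 5, "away_score": 3, "date": "Tuesday",
    "star_player": "J. Smith", "star_line": "3-for-4 with a home run",
}))
```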

Resources
AP Sports is Using “Robot” Reporters to Cover Minor League Baseball
AP expands Minor League Baseball coverage