Healthy Optimism — April 19, 2017


Well, it’s been a little quieter in the world of health these past couple of weeks. Quiet enough that we have a chance to look at a non-acute item that keeps showing up in various forms across the web. It’s more of an underlying theme than a set of discrete events, and there isn’t really a tidy answer to it, either. It’s just something we should be tracking (and yeah, for sure the healthcare industry is tracking it) and, importantly, discussing here. So, if you have thoughts on anything, feel free to drop a comment and push back.


Open the Pod Bay Doors, Doc.

It’s the age-old question: AI or humans? Data, C-3PO and Rosie are easy examples of artificial intelligence actually, you know, helping humans make things better (or, in the case of C-3PO, trying to help, but that’s a separate discussion). Then we have HAL, Terminator, Westworld hosts*, Ava and certain politicians who are programmed to assist humans but in fact go rogue and, well, ruin things. Indeed, such is the danger of artificial intelligence that the entire Dune universe is built on the elevation of human consciousness and the elimination of computer-based calculation.

Hang on, there’s a point to all this other than telling you that I got into sci-fi later than most but am trying to make up for lost time.

It’s that artificial intelligence is coming to healthcare, in some regards is already here, and we have to figure out what to do with it. At its simplest, AI in healthcare means building baseline algorithms that let a computer accumulate new data, extract insights, and build better algorithms, a feedback loop aimed at continuously improving health outcomes. Thus, we see companies taking the big data of human sequencing, finding mutations, linking them (or not) to disease states, and then repeating the process for more accurate and detailed diagnosis/prognosis. IBM is using its lovable blue behemoth Watson to do this in oncology in partnership with Quest Diagnostics. A company here in Nashville called Faros Healthcare uses predictive analytics to both identify at-risk patients and treat them appropriately.
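To make that loop a little more concrete, here’s a minimal sketch of the idea: a toy readmission-risk model that gets refit as new outcome data arrives. To be clear, this is my own illustration, not anything Watson or Faros actually runs, and every feature and number in it is fabricated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_cohort(n):
    """Fabricate patient features: age, chronic conditions, prior admissions."""
    X = np.column_stack([
        rng.normal(70, 10, n),   # age
        rng.poisson(2, n),       # number of chronic conditions
        rng.poisson(1, n),       # prior admissions
    ])
    # Hidden "true" risk: older, sicker patients readmit more often.
    logit = 0.04 * (X[:, 0] - 70) + 0.5 * X[:, 1] + 0.8 * X[:, 2] - 2.0
    y = rng.random(n) < 1 / (1 + np.exp(-logit))
    return X, y.astype(int)

# Train a baseline model on an initial cohort...
X, y = make_cohort(1000)
model = LogisticRegression().fit(X, y)

# ...then, as new outcomes accumulate, fold them in and refit: the
# continuous-improvement loop in miniature.
for month in range(3):
    X_new, y_new = make_cohort(200)
    X, y = np.vstack([X, X_new]), np.concatenate([y, y_new])
    model = LogisticRegression().fit(X, y)
    flagged = model.predict_proba(X_new)[:, 1] > 0.5
    print(f"month {month}: flagged {flagged.sum()} of {len(X_new)} as high-risk")
```

The point of the sketch is the loop, not the model: each round of outcomes becomes training data for the next round of predictions.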

We also see AI and machine learning adding valuable depth to the day-to-day practice of medicine. Praxify layers on top of EHRs and learns how each individual physician user interacts with patient data, making it easier for providers to find and/or input the data they need. At the same time, Praxify looks at a patient’s record and extracts useful data, thereby helping clinicians to connect the dots and make better decisions.
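For the adaptive-interface idea, here’s a back-of-the-napkin sketch (again, my own invention; Praxify’s real system is surely far more sophisticated): track which chart fields a given clinician actually opens, and float the favorites to the top.

```python
from collections import Counter

class AdaptiveChart:
    """Toy per-clinician chart layout that learns from usage."""

    def __init__(self, fields):
        self.fields = list(fields)   # default display order
        self.usage = Counter()       # how often this user opens each field

    def record_view(self, field):
        self.usage[field] += 1

    def ordered_fields(self):
        # Most-viewed fields first; the sort is stable, so unused
        # fields keep their default order.
        return sorted(self.fields, key=lambda f: -self.usage[f])

chart = AdaptiveChart(["allergies", "meds", "labs", "vitals", "notes"])
for f in ["labs", "labs", "meds", "labs", "vitals"]:
    chart.record_view(f)

print(chart.ordered_fields())
# -> ['labs', 'meds', 'vitals', 'allergies', 'notes']
```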

The value? Faster and more accurate decision making, less time spent charting, and a simpler, more customized workflow. Translated, that potentially means lower costs, better outcomes, better reimbursement (and/or fewer readmission penalties), and happier clinicians. Thanks, HAL! (See what I did there? Because HAL was opposed to readmissions…? Never mind.)

Robots are coming online in the area of elder care, too, particularly in Japan. They offer companionship when people aren’t available, for whatever reason. Frankly, I tend to twitch a little when I see this. “A robot? I mean, I have quirky friends, but Dinsow takes it to a whole new level.” At least we, um, don’t have to worry about Dinsow dropping us off at the Uncanny Valley.

On the other hand, if no one else is there and Dinsow makes people smile, interacts with elderly patients, and improves their quality of life, that can’t be bad, right…


Hollywood Ruins Everything

It all sounds great, but there may be a couple of issues. One is the fear played up by Hollywood: AI will get so good that those humanoid creations no longer need their creators and simply take over. OK, so that’s overstating it, but the principle in healthcare is the same. Will AI get so good that providers are shoved aside, and if so, what then? Robots can’t replace the human touch, can they? We still need skin-on-skin contact, right?

In conversations with people in the Medical AI field, they all — without prompting — emphasize that they are not trying to build algorithms to replace clinicians. “Augment, not replace!” is the rallying cry. And it’s a compelling argument. Why not make it easier to find relevant information to move things along quicker? If anything, good AI in this context means providers can actually be more human, making more eye contact instead of poking at a glowing screen.


The Student Hasn’t (Quite) Surpassed the Master

The other issue is exactly the opposite of the first one. Can AI even match, let alone exceed, the human brain? Look, let’s be real here. We still don’t know how the brain works. Yes, we have the Brainbow and fMRI, and they (and other advances) have pushed us a long way in figuring out neuronal connections. But…it’s really freaking complicated. We haven’t come close to replicating the human brain, or to building an AI system that can train itself to that level. Can we ever?

Westworld aside, intuition doesn’t seem like something we can build. “I don’t need a computer to tell me that!” our grandpas say. And it’s true. While some of us are sitting inside scrolling through the hourly forecast on our “smart”phones and looking for our umbrella, Grandpa is outside looking at the sky saying, “nah, it’s not gonna rain.” Who’s right more often?

Turns out, this is sometimes the case in medicine, too. Fierce Healthcare recently ran an article highlighting a study in the American Journal of Managed Care that found that “Predictive accuracy of PCP assessment in our study (C statistic, 0.77; 95% CI, 0.75–0.79) was comparable to the reported C statistic of other commonly used risk stratification instruments.” In other words, primary care providers were just as good as computers at predicting which patients would end up back in the hospital. Why? Because they know their patients. It’s that sort of ethereal combination of individual patient data plus the personal understanding between patient and provider.
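For reference, a C statistic is just the probability that a higher risk score went to a patient who actually ended up back in the hospital than to one who didn’t: 0.5 is a coin flip, 1.0 is perfect. A quick illustration with made-up scores (for a binary outcome this is the same thing as scikit-learn’s ROC AUC):

```python
from sklearn.metrics import roc_auc_score

readmitted = [1, 0, 1, 0, 0, 1, 0, 1]                   # actual outcomes
pcp_score = [0.9, 0.2, 0.6, 0.4, 0.1, 0.7, 0.5, 0.3]    # clinician's gut rating, 0-1

print(f"C statistic: {roc_auc_score(readmitted, pcp_score):.2f}")
```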

From there, the study says (in a quote highlighted in the Fierce article) “Given the predictive accuracy of PCPs’ clinical assessment, efforts to identify patients at high risk for future hospitalization should aim to incorporate the unique insight that PCPs have about predisposing biopsychosocial factors.”

It’s a pretty funny statement wrapped in academic-ese: “eh, we should probably look up from our computers and hear what the doctors have to say.” Put another way (is this taking it too far?), maybe gut feeling should officially become part of evidence-based practice.

The second example is from (where else?) STAT News. A couple of weeks ago, Kate Sheridan wrote an article about a facial recognition program that can diagnose specific genetic conditions. Please read the whole thing; it’s a great piece, as always. The point here is that the algorithms involved (maybe not technically AI) are still being dialed in and, in some cases, offer only probabilities rather than definitive, binary diagnoses. Experienced physicians, though, can “walk into a room and it’s like, oh, that child has Williams syndrome,” according to Dr. Maximilian Muenke, quoted by Sheridan. It’s the accumulation of knowledge over years, the conscious study and the unconscious observation of trends, that tells people instantly what’s happening.
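In code terms, the gap Sheridan describes looks something like this hypothetical report (the labels, numbers, and confidence cutoff are all invented for illustration): the tool hands back ranked likelihoods, while the seasoned clinician’s internal classifier jumps straight to an answer.

```python
# Hypothetical output of a facial-analysis classifier: a probability
# distribution over conditions, not a yes/no diagnosis.
probs = {"Williams syndrome": 0.62, "Noonan syndrome": 0.25, "neither": 0.13}

top, p = max(probs.items(), key=lambda kv: kv[1])
if p >= 0.95:
    print(f"Diagnosis: {top}")   # a confident, binary call
else:
    # What today's tools often give instead: ranked likelihoods.
    for name, pr in sorted(probs.items(), key=lambda kv: -kv[1]):
        print(f"  {name}: {pr:.0%}")
```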


Can’t We All Get Along?

Like I said at the top, I don’t know how this can or should play out. I’m a cell biologist, not a physician, ethicist**, computer scientist or even robot enthusiast. I do have a gut feeling that, while the Singularity is a cool hypothetical, our relationship with artificial intelligence may be more manageable than some people think. Start by focusing on how AI can augment our function as humans, the way Praxify and others are doing, rather than by trying to figure out what computers can do on their own. The human touch first, technology second.

Now please excuse me, I have to go rewatch Blade Runner before the sequel comes out.

*OK, if we’re going to get technical, the hosts (spoiler alert) are in fact programmed to develop consciousness, so they’re not really going rogue.

**As I was finishing this post, an article came out in Aeon called “Raising good robots.” Not directly health-related, but a pretty cool discussion of if/how we can/should teach AI morals.