AI in healthcare is coming, and we need to be ready - LawNow Magazine


From the alarming forecasts of tech moguls to vigorous debates on online forums, there’s a growing public discussion about the risks and benefits of artificial intelligence (AI) and how to manage its development. People often talk about AI by evoking grandiose prophecies about the future. While one day we may be apathetically wiped out by a rogue AI, or instead, become immortal cyborgs worshipping a god-like algorithm, important developments are happening today. And healthcare is a domain in which AI is already having a significant impact. However, these advancements give rise to various policy challenges that will need to be carefully addressed.

Emerging AI in healthcare

AI already performs many medical tasks very well. In radiology, for example, there is a growing body of fascinating new research. Recently, scientists in the UK used almost half a million chest X-rays to develop an AI that could reliably identify abnormal images, creating the potential for software to help triage the large backlog of X-rays in many healthcare systems. But AI can do more than just sort images. Researchers at Stanford have produced an algorithm that interprets chest X-rays for 14 distinct pathologies simultaneously within a few seconds. And a recent Finnish study showed that machine learning had overtaken humans at predicting, from imaging data, certain types of heart attack and death. Similar developments are happening with imaging for other diseases.

AI will initially be implemented alongside human expertise rather than in place of it. However, some experts predict that radiologists could eventually be displaced by AI, relegating humans to the role of quality control reviewers. But with multiple factors contributing to increased strain on health professionals, a potential human resource crisis in healthcare could mean we need AI just to get by. And the shift toward AI could come sooner than you think. In April 2018, the U.S. Food and Drug Administration (FDA) approved the first AI medical device, which analyzes images to detect a form of eye disease caused by diabetes.

Organ donation and transplantation have also benefited greatly from new AI. Kidney paired donation programs in Canada and the United States now use complex algorithms to optimize the matching of registered donors with transplant candidates based on a multitude of factors. Since potential living donors will typically only donate a kidney to a stranger if their loved one receives one in return, there is a recurring need to recalculate all the possible chains of donation. AI performs this task very well. The results speak for themselves: Canadian Blood Services recently reported that, as of May 1, 2019, 392 of the 663 kidney transplants completed under the paired donation program were done through “domino exchanges” facilitated by its software algorithm. Moreover, AI is also being developed to efficiently and “fairly” allocate organs from deceased donors in accordance with programmed criteria for utility and equity.
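The matching task described above can be pictured as finding cycles in a directed graph of incompatible donor-recipient pairs, where an edge means one pair’s donor can give to another pair’s recipient. The sketch below is a deliberately simplified, hypothetical illustration in Python — real registries such as Canada’s solve a far richer optimization with medical scoring criteria, and nothing here reflects their actual software:

```python
from itertools import permutations

def find_exchanges(pairs, compatible, max_cycle=3):
    """Greedily select disjoint donation cycles among incompatible
    donor-recipient pairs. `pairs` is a list of pair IDs, and
    `compatible[(a, b)]` is True when pair a's donor can give to
    pair b's recipient. A toy sketch, not a registry algorithm."""
    # Enumerate candidate cycles of length 2..max_cycle.
    cycles = []
    for n in range(2, max_cycle + 1):
        for combo in permutations(pairs, n):
            if combo[0] != min(combo):  # skip rotations of the same cycle
                continue
            if all(compatible.get((combo[i], combo[(i + 1) % n]), False)
                   for i in range(n)):
                cycles.append(combo)
    # Pick longer cycles first, skipping pairs already matched.
    cycles.sort(key=len, reverse=True)
    matched, chosen = set(), []
    for cyc in cycles:
        if not matched.intersection(cyc):
            matched.update(cyc)
            chosen.append(cyc)
    return chosen
```

A three-pair “domino” arises when A’s donor suits B’s recipient, B’s suits C’s, and C’s suits A’s: `find_exchanges(["A", "B", "C"], {("A", "B"): True, ("B", "C"): True, ("C", "A"): True})` returns that single cycle. Each new registrant changes the graph, which is why the chains must be continually recalculated.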

New uses for AI in healthcare are emerging rapidly. Yet, as is always the case with new technology, ethics, law and policy are playing catch up. And unlike some other technologies, we won’t always know or control how AI could evolve over time.

Risks and policy challenges

The use of AI in healthcare is not without risks. Tasks performed by AI will certainly receive less human oversight, taking human hands “off the wheel.” The premise, though its benefits are sometimes overstated, is that AI makes fewer errors than humans do, so there should be a net benefit. But in the same way existing “semi-autonomous” driving systems in cars can make errors of interpretation with disastrous results, healthcare AI is prone to specific types of errors that could cause harm.

Notably, AI systems often have difficulty properly recognizing variables outside what they were designed to interpret. For example, a ring or other object mistakenly present in a diagnostic image could be misinterpreted. AI can also be prone to “overfitting”. This happens when a model performs well on the data set from which it was built but becomes inaccurate or generates unexpected results when applied to other data. If overfitting goes undetected until after we’ve put an AI to work, it could quickly cause widespread harm in a way that a single human error cannot.
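Overfitting is easiest to see with a toy example. The Python sketch below — entirely hypothetical, taken from no clinical system — uses a deliberately naive one-nearest-neighbour “memorizer” that scores perfectly on its own training data yet stumbles on fresh cases because it has memorized a noisy label:

```python
def nearest_neighbour(train, x):
    """Classify x by copying the label of the closest training point --
    a deliberately naive model that 'memorizes' its training data."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

# Toy ground truth: the correct label is 1 when x >= 5, otherwise 0.
# One mislabelled point (4.9, 1) has crept into the training set.
train = [(1.0, 0), (2.0, 0), (3.0, 0), (4.9, 1), (6.0, 1), (7.0, 1)]

# On its own training data the memorizer is flawless ...
train_acc = sum(nearest_neighbour(train, x) == y for x, y in train) / len(train)

# ... but fresh cases near the noisy point inherit its wrong label.
test = [(4.5, 0), (4.7, 0), (5.5, 1), (6.5, 1)]
test_acc = sum(nearest_neighbour(train, x) == y for x, y in test) / len(test)
```

Here the model is 100% accurate on the data it was built from but only 50% accurate on new cases — exactly the gap that validation on independent data is meant to catch before deployment.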

Another concern is that AI could lead to overdiagnosis. There is already an established scientific and public movement against unnecessary testing called “Choosing Wisely”. It is based on recent research showing that imaging and testing can sometimes do more harm than good, for example, by increasing unnecessary surgeries for anomalies that would never have become harmful.

As noted, AI could eventually become the dominant method of interpreting diagnostic images like X-rays and MRIs. If learning algorithms can increase diagnostic precision, they will be even more likely to find anomalies or patterns in images – some of which may not be central to patient health. It is also possible that routine screening could be performed more often if less human labour were required. These changes could lead to more diagnoses and treatments that are unnecessary and ultimately detrimental to patients and the healthcare system.

Policymakers will need to grapple with the possible overuse of AI, and experts developing rules about how to implement AI will have to account for the possible consequence of overdiagnosis. We may need to set new standards for frequency of testing, and certainly will need to think more about how to best deal with patients who may have unhealthy amounts of information about their bodies.

The potential risks of AI raise the question of whether patients should have the choice to have their medical care administered by a human instead. In the context of organ allocation, for example, both a potential donor and a recipient could strongly prefer to have humans allocate their organ. In Canada, there is no clear legal right to human-administered healthcare, but public pressure could necessitate an option for human care. For example, living and deceased donors, whose contributions are essential to the functioning of the organ and tissue transplantation system, need to feel comfortable and confident consenting to donation. Otherwise, the entire system stops working. As such, if donors are reluctant to place their organs in the “hands” of an AI, this reluctance could outweigh the AI’s usefulness. Other public pressures in different areas of healthcare could create similar reasons to maintain the option of human-based care, potentially at great financial cost.

Finally, there is the challenge of applying law to the interpretation of AI decision-making. Learning AI presents a “black box” problem, meaning it is often very difficult to understand how an AI reaches the conclusions it does. The question of who is liable for AI error is not fully answered in many jurisdictions, including Canada.

Depending on the circumstances, one option would be to hold a presiding physician accountable for final recommendations. However, this could encourage excessive or unnecessary human oversight of AI. It also might not reflect the reality that a doctor has to trust an AI precisely because he or she cannot actually comprehend its process. If human hands are truly to be taken “off the wheel”, a system in which the corporations and institutions that develop and maintain AI are liable for errors could be preferable. From a regulatory perspective, the FDA is taking a “pre-certified” approach to implementing learning AI, meaning it will focus on certifying the institution developing and maintaining an AI instead of the AI itself. This makes some sense given that many AI will be continually changing as they learn. Notably, understanding how errors occur and who is responsible will also be important for insurance firms creating systems of coverage for healthcare decisions facilitated by AI.

There are, of course, other policy challenges with healthcare AI beyond what is discussed above. For example, developing a policy framework for autonomous robotic surgery is a minefield because of the difficulty of assigning accountability and liability. Ultimately, though, creating public acceptance and trust may be the biggest policy challenge facing the real-world application of healthcare AI. A 2019 survey of Americans found that 22% somewhat or strongly opposed the development of AI, and 82% believed AI should be “carefully managed”. Technological improvements in AI performance and adaptability could alleviate but will not eliminate the public’s concerns.

We need policy

Health Canada has stated that it is engaging with stakeholders to determine how best to implement AI in healthcare, and it recently established a new internal division for “digital health technologies”. But more concrete guidance is needed. For example, what programming failsafes and other malfunction contingency planning will be required? We must focus on creating policy that governs how AI will be safely used and how it will be managed when things go wrong. Otherwise, we risk missing out on the many potential benefits of these technologies.


Blake Murdoch
Blake Murdoch, JD, MBA, is a research associate at the Health Law Institute at the University of Alberta Faculty of Law.

A Publication of CPLEA

