
The author Mikko Kotila has 12 years of continuous research and development experience in machine intelligence, and is the core developer of Autonomio, the first rapid machine intelligence prototyping platform for non-programmers. Mikko is the principal of Botlab, a nonprofit foundation focused on long-term thinking about machine intelligence and decentralization, and a co-founder of Autom8, one of the world’s first deep learning-focused startup foundries.

A new paradigm for the use of machine intelligence in healthcare

Many national economies are on the brink of collapse under the burden of state-supported health insurance programs. For example, in the US, a country that does not provide universal healthcare, the national debt is expected to double over the next three decades as a result of healthcare-related liabilities. There is an even greater price to pay for the inefficiencies found in healthcare systems: when already overburdened systems are pushed to their limits, healthcare professionals are forced to make bad decisions.

In a startling example, during Hurricane Katrina, doctors and nurses at New Orleans’ Memorial Hospital found themselves incapable of making even simple decisions. In her Pulitzer Prize-winning coverage, Sheri Fink reported how patients deemed the least likely to survive were injected with a lethal combination of drugs, even as the evacuation was already under way.

This chilling anecdote sheds light on a greater problem underpinning many of the biggest threats to human society and the ecosystem of our planet.

We humans are not equipped to make good decisions under stress. Positive outcomes across a multitude of fields, including healthcare, depend largely on the ability to make good decisions under unpredictable and stress-inducing conditions. Yet we are not only bad at making decisions under pressure; we are poorly equipped to make any rational decisions at all.

Daniel Kahneman, in his Nobel Prize-winning work on decision making, explains how, after years of deliberation leading to a single decision, the person making that decision might still completely ignore the entire process of deliberation and instead decide on a whim, driven by the emotions of the moment.

An Age of Automated Decision Making

Just as the information age has focused on automating processes related to access to information, the next age, the age of “decisioning”, will focus on automating processes related to decision making. Whereas humans excel at pattern detection and pattern making, an essential requirement for creating intelligent computer systems, computers are strong at making decisions where the processing of facts is of vital importance. This almost magical ability of computing systems to process information includes the capability to identify and exploit extremely subtle connections between otherwise seemingly disconnected pieces of information, a feat we humans could never accomplish on our own with the precision and scale of even a simple computer system. And in no other field are rational decisions of such vital importance as in healthcare.

Not only is healthcare the most significant economic liability for many nations, it is also the only field of practice affecting people’s everyday lives that can truly be considered a “life and death” matter. It is, therefore, the area where human society most desperately needs help.


Computer-aided decision systems can be categorized into three evolutionary stages, each with a corresponding quality of results and a required level of human involvement.

Whereas up until a few years ago most systems were still ‘descriptive’, with some examples of ‘predictive’ decision-making support systems, in the last two years developments in the field of machine intelligence have for the first time made the dream of prescriptive expert systems realistic. Recent advancements in both open source software and commercial hardware have paved the way for rapid prototyping of ideas that promise to revolutionize the way decisions are made across a multitude of fields, including healthcare.

Examples of these advancements include deep learning platforms such as Google’s TensorFlow and Keras, unstructured data processing innovations such as word vectorization, and Nvidia’s data processing-focused GPU product line.
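To make the idea of word vectorization concrete, here is a minimal, library-free sketch. It uses a toy co-occurrence approach with invented example phrases, not any particular platform’s implementation; real systems learn far richer vectors from massive corpora, but the principle of turning words into comparable numbers is the same:

```python
from collections import Counter
from math import sqrt

# Toy corpus of clinical-style phrases (invented, illustrative only).
corpus = [
    "fever cough infection",
    "cough infection antibiotic",
    "fracture cast xray",
    "fracture xray bone",
]

# Build a co-occurrence vector for each word: which other words
# appear in the same phrase, and how often.
vectors = {}
for phrase in corpus:
    words = phrase.split()
    for w in words:
        ctx = vectors.setdefault(w, Counter())
        ctx.update(x for x in words if x != w)

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Words from the same topic end up measurably closer than
# unrelated words, which is what lets algorithms find subtle
# connections across otherwise disconnected pieces of text.
print(cosine(vectors["cough"], vectors["infection"]))
print(cosine(vectors["cough"], vectors["bone"]))
```

Here “cough” and “infection” score higher than “cough” and “bone”, purely because of the contexts they share in the toy corpus.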

Despite the tremendous promise and the recent hype surrounding machine intelligence, some significant concerns remain without serious attention. While machine intelligence innovators would like to focus on showcasing “what’s possible,” before introducing new ideas and processes into the healthcare apparatus it is far more important to ask “what could go wrong”.

Morality and Healthcare Algorithms

Every machine intelligence solution can be reduced to two aspects: the data inputs available to the solution, and the means by which the solution processes those inputs. These two act as the causes of the result the system provides. Algorithms underpin every decision a computer system makes, regardless of the kind of system it is. What is unique about modern machine intelligence systems, such as those based on neural networks and the deep learning method, is that humans cannot audit them. It is for this reason that we can get surprising results we humans could not arrive at without machine intelligence, and it is important to understand that surprising results may also be negative results. In many cases, adverse results in the healthcare context arise from causes set years or even decades earlier, and as of today it is virtually impossible to establish true causality in such cases. This means it would be almost impossible to know whether a given machine intelligence solution is contributing positively in the long term or just driving short-term efficiency. This will hold true at least for the next 100 years, or as long as it takes to understand causality in results that take decades to mature.

When game theory-based principles were widely introduced into the western healthcare context in the 1980s, nobody predicted the consequences. For example, while nurses and doctors were incentivized to meet certain productivity quotas, such as the number of days patients spent in the ICU, patient mortality rates skyrocketed as a result of ICU beds being more available. In effect, a quota scheme is a simple algorithm, and it can be used to highlight the danger that comes with introducing machine intelligence into healthcare.

This applies in particular to countries with universal healthcare and poorly performing national economies. Under such conditions, the humans who make decisions about the use of machine intelligence in the national healthcare system are under tremendous stress. Not only do their decisions affect individual patients’ lives, they also have the potential to change the destiny of an entire nation. It is very hard to see how, under such conditions, non-experts being bombarded with endless hype by self-proclaimed machine intelligence experts would be able to make the right decisions.

In fact, neither government officials, healthcare professionals, nor computer and data scientists are formally trained in morality, and they often lack even the most basic understanding of ontology, epistemology, and formal logic: the three legs of the stool on which rational decisions sit. As a result, as we have seen through examples in financial markets, online advertising, and other early embracers of algorithmic decision making, we end up with so-called greedy algorithms that ruthlessly optimize towards a given end without caring about anything else.
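To illustrate what “greedy” means here, consider the following deliberately simplified, hypothetical sketch (all patient names and numbers are invented): an allocator that optimizes a single quota-style metric, discharges per period, while remaining completely blind to every other consideration:

```python
# Hypothetical illustration of a greedy, single-metric optimizer.
# It maximizes the number of discharges within a capacity budget
# and ignores everything else, such as who benefits most from care.

patients = [
    # (name, days_of_care_needed, recovery_score_if_fully_treated)
    ("A", 1, 0.3),
    ("B", 4, 0.9),
    ("C", 1, 0.2),
    ("D", 5, 0.95),
]

def greedy_discharges(patients, capacity_days):
    """Greedily pick the shortest stays to maximize discharge count."""
    treated, used = [], 0
    for name, days, score in sorted(patients, key=lambda p: p[1]):
        if used + days <= capacity_days:
            treated.append((name, score))
            used += days
    return treated

treated = greedy_discharges(patients, capacity_days=6)
count = len(treated)
avg_recovery = sum(s for _, s in treated) / count

# The quota metric looks excellent (3 discharges), yet patient D,
# who stands to benefit most from treatment, is never selected.
print(count, round(avg_recovery, 2))
```

The algorithm is doing exactly what it was told, which is precisely the danger: the harm is invisible inside the metric it optimizes.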

Unlike healthcare professionals, these algorithms are not afraid of losing their livelihood and reputation over a wrong decision that ends up hurting people.

While a healthcare facility, or a professional working in one, could be sued for damages, liabilities for machine intelligence systems in the healthcare context have so far not been defined.

Security in Healthcare Systems

In light of recent events involving ransomware, and the rapid growth in its popularity as a cybercrime tool, it does not seem too far-fetched that in the near future entire hospitals will be targeted and held for ransom. Indeed, many hospitals have already become victims of ransomware as a consequence of passive global or national attacks. In the recent WannaCrypt ransomware attack, individual medical devices were rendered temporarily useless after being infected.

Siemens released multiple warnings about its healthcare devices being potentially vulnerable to WannaCrypt. Beau Woods, deputy director of the Cyber Statecraft Initiative at the Atlantic Council, said it was likely that many important medical devices, such as MRIs and other crucial computer-aided systems, were rendered temporarily useless by the attack.

These examples show how healthcare organizations and their technology partners are currently incapable of securing important systems. If such devastation is this easy to create in a human-operated healthcare apparatus, in a highly automated, machine intelligence-based one the problem would be significantly amplified.

Here too, we have to seriously consider how the machine intelligence ideas adopted today will hold up against the threat landscape of the next few decades.

For example, the cryptographic methods that secure today’s communications rely heavily on so-called key exchange cryptography. Any sudden improvement in computational power, for instance quantum computing becoming practical, would lead to an immediate collapse of the key exchange-based security paradigm. In a machine intelligence-dominated healthcare apparatus, such a collapse has the potential to lead to the greatest human tragedy in the history of the world. In the absence of serious discussion about such longer-term threats, it will not be possible to make the right decisions regarding decision-making automation and the wider adoption of machine intelligence-based expert systems in healthcare.
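The key exchange paradigm in question can be sketched with a toy Diffie-Hellman exchange. The parameters below are deliberately tiny and insecure, purely to show the principle; real deployments use large primes or elliptic curves, but rest on the same mathematical assumption:

```python
# Toy Diffie-Hellman key exchange with insecurely small parameters,
# shown only to illustrate the key exchange paradigm.
p, g = 23, 5              # public prime modulus and generator

a_secret = 6              # Alice's private key (never transmitted)
b_secret = 15             # Bob's private key (never transmitted)

A = pow(g, a_secret, p)   # Alice publishes g^a mod p
B = pow(g, b_secret, p)   # Bob publishes g^b mod p

# Each side combines its own secret with the other's public value
# and arrives at the same shared key without ever sending it.
alice_key = pow(B, a_secret, p)
bob_key = pow(A, b_secret, p)
assert alice_key == bob_key

# Security rests on the hardness of recovering a from g^a mod p
# (the discrete logarithm problem). A practical quantum computer
# running Shor's algorithm solves exactly this class of problem,
# which is why the paradigm could collapse rather suddenly.
print(alice_key)
```

With these toy numbers an eavesdropper could brute-force the secret in milliseconds; at realistic key sizes only a quantum leap in computation, of the kind described above, changes that picture.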

The Importance of Human Touch

Sometime in the distant future, we may be able to completely automate certain key aspects of healthcare, such as triage management. Even then, it will be vitally important not to lose sight of the essence of healthcare: taking care of people. Taking care of people is one part taking care of their physical bodies, and one part taking care of their feelings.

In a triage management approach based completely on machine intelligence and robotics, it would be hard for patients to feel they are being taken care of in the way they feel when an actual doctor is treating them.

On the other hand, within a very short period of time doctors would lose their facility with the basic day-to-day care of patients that still keeps them overworked today.

Because algorithms feel nothing, empathy included, it is also likely that as specialist doctors increasingly get their inputs from algorithms, they will become further distanced from the human aspect that some argue is necessary for the healing practice. In this light, perhaps it is more reasonable for allopathic medicine to seek intelligence from its eastern counterparts, such as TCM, TTM, and Ayurveda, than from machines: machines that ultimately base their decisions on a combination of the information they receive and the algorithms that process it, both of which are a product of, and therefore limited by, the people who produce them.

Combining the instruments and other marvels of the western symptomatic healthcare approach with the more holistic, if in some cases inferior, eastern practices has the potential to drive significant change within the healthcare system as we know it today.

In terms of specific machine intelligence implementations, as the point of view presented in this article shows, the focus should be on the long-term macro effects of such implementations rather than on the short-term micro context.

Perhaps in a future world where the western and eastern medicinal practices are better integrated and institutionalized into a new era of taking care of patients, we will be better equipped to handle the challenges that come with machine intelligence focused healthcare.

Want to write for InnoHEALTH? Send us your article at magazine@innovatiocuris.com

Read all the issues of InnoHEALTH magazine:
InnoHEALTH Volume 1 Issue 1 (July to September 2016) – https://goo.gl/iWAwN2 
InnoHEALTH Volume 1 Issue 2 (October to December 2016) – https://goo.gl/4GGMJz 
InnoHEALTH Volume 2 Issue 1 (January to March 2017) – https://goo.gl/DEyKnw 
InnoHEALTH Volume 2 Issue 2 (April to June 2017) – https://goo.gl/Nv3eev
InnoHEALTH Volume 2 Issue 3 (July to September 2017) – https://goo.gl/MCVjd6
InnoHEALTH Volume 2 Issue 4 (October to December 2017) – http://amzn.to/2B2UMLw

Connect with InnovatioCuris on:
Facebook: https://www.facebook.com/innovatiocuris
Twitter: https://twitter.com/innovatiocuris
LinkedIn: https://www.linkedin.com/groups/7043791
Stay updated about IC, visit: www.innovatiocuris.com
