Machine Learning in Healthcare: Defining the Most Common Terms

The concept of machine learning has quickly become very attractive to healthcare organizations, but much of the necessary vocabulary is not yet well understood.

by Jennifer Bresnick

After a slow and unsteady beginning at the start of the decade, the healthcare industry is finally becoming somewhat more comfortable with the idea that learning to live with big data is the only way to see financial and clinical success in the future.

Electronic health records are now commonplace (if not universally beloved), and even the most reticent, paper-loving organizations are now cautiously embracing the idea that all that digital data could actually be good for something.

For stakeholders on the other end of the spectrum, charging forward on the leading edge of the health IT revolution, the benefits of big data analytics are already clear.

Predictive analytics, real-time clinical decision support, precision medicine, and proactive population health management are finally within striking distance, driven largely by rapid advances in machine learning.

But while many in the healthcare industry are sure that their technological goals are hovering somewhere just over the horizon, plotting a course to get there can be a difficult proposition – especially when the landscape is clouded by confusing vocabulary, technical terminology, and as-yet-undeliverable promises of truly automated insights.

“Artificial intelligence” is a buzzword saturated with hope, excitement, and visions of sci-fi blockbuster movies, but it isn’t the same thing as “machine learning.”

Machine learning is slightly different from deep learning, and neither matches up exactly with cognitive computing or semantic analysis.

As the healthcare industry moves quickly and irreversibly into the era of big data analytics, it is important for organizations looking to purchase advanced health IT tools to keep the swirling vocabulary straight so they understand exactly what they’re getting and how they can – and can’t – use it to improve the quality of patient care.

ARTIFICIAL INTELLIGENCE

Artificial intelligence is the ability of a computer to complete tasks in a manner typically associated with a rational human being.

While Merriam-Webster’s definition uses the word “imitate” to describe the behavior of an artificial intelligence agent, the Encyclopedia Britannica defines AI as a program “endowed with the intellectual processes characteristic of humans,” which indicates a slightly different view of the attributes of an AI agent.

Whether AI is simply imitating human behavior or infused with the ability to generate original answers to complex cognitive problems via some indefinable spark, true artificial intelligence is widely regarded as a program or algorithm that can pass the famous Turing test.

Proposed in 1950 by computer science pioneer Alan Turing, the Turing test holds that a true artificial intelligence must exhibit intelligent behavior indistinguishable from that of a human.

In one classic interpretation of Turing’s work, a human observer converses with both a fellow human and a machine, attempting to distinguish the algorithm from the flesh-and-blood participant.

If the computer can fool the observer into believing that its responses are indistinguishable from those of the human participant, it passes the test.  Thus far, no artificial intelligence has truly done so.

Artificial intelligence also has a second definition.  It is the branch of computer science associated with studying and developing the technologies that would allow a computer to pass (or surpass) the Turing test.

So when a clinical decision support tool says it “uses artificial intelligence” to power its analytics, consumers should be aware that “using principles of computer science associated with the development of AI” is not really the same thing as offering a fully independent and rational diagnosis-bot.

MACHINE LEARNING

Machine learning and artificial intelligence are often used interchangeably, but conflating the two is incorrect.  Machine learning is one small part of the study of artificial intelligence, and refers to a specific sub-section of computer science related to constructing algorithms that can make accurate predictions about future outcomes.

Machine learning accomplishes this through pattern recognition, rule-based logic, and reinforcement techniques that help algorithms understand how to strengthen “good” outcomes and eliminate “bad” ones.

Machine learning can be supervised or unsupervised.  In supervised learning, algorithms are presented with “training data” that contains examples with their desired conclusions.  For healthcare, this may include samples of pathology slides that contain cancerous cells as well as slides that do not.

The computer is trained to recognize the features that indicate cancerous tissue so that it can distinguish between healthy and cancerous images in the future.

When the computer correctly flags a cancerous image, that positive result is reinforced by the trainer and the data is fed back into the model, eventually leading to more and more precise identification of increasingly complex samples.
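To make the idea concrete, here is a minimal supervised-learning sketch using the open-source scikit-learn library; the three-number feature vectors are hypothetical stand-ins for measurements extracted from slides, since a real pathology system would learn from far richer image data.

```python
# A minimal supervised-learning sketch with scikit-learn.
# Each row of "features" stands in for measurements taken from one
# pathology slide (hypothetical values); each label is the desired
# conclusion supplied by a human expert: 1 = cancerous, 0 = healthy.
from sklearn.ensemble import RandomForestClassifier

features = [
    [0.91, 0.30, 0.75],
    [0.12, 0.85, 0.20],
    [0.88, 0.25, 0.80],
    [0.10, 0.90, 0.15],
]
labels = [1, 0, 1, 0]

# "Training" is the algorithm learning the patterns that link the
# labeled examples to their desired conclusions.
model = RandomForestClassifier(random_state=42)
model.fit(features, labels)

# The trained model can then flag a previously unseen slide.
print(model.predict([[0.87, 0.28, 0.78]]))  # expected output: [1]
```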

Unsupervised learning does not typically leverage labeled training data.  Instead, the algorithm is tasked with identifying patterns in data sets on its own by defining signals and potential abnormalities based on the frequency or clustering of certain data.

Unsupervised learning may have applications in the security realm, where humans do not know exactly what form unauthorized access will take.  If the computer understands what routine and authorized access typically looks like, it may be able to quickly identify a breach that does not meet its standard parameters.
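A rough sketch of that idea, again using scikit-learn, might rely on an isolation forest, a common unsupervised anomaly detector; the access-log features here (login hour, session length) are hypothetical.

```python
# An unsupervised sketch with scikit-learn's IsolationForest: no labels,
# just examples of routine behavior. Each row is a hypothetical access
# record: [hour of login, session length in minutes].
from sklearn.ensemble import IsolationForest

routine_access = [[9, 30], [10, 25], [9, 35], [11, 28], [10, 32], [9, 27]]

detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(routine_access)  # learns what "normal" access looks like

# A 3 a.m. login with a four-hour session falls outside the learned
# parameters (-1 = anomaly, 1 = routine).
print(detector.predict([[3, 240], [10, 30]]))
```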

DEEP LEARNING

Deep learning is a subset of machine learning that deals with artificial neural networks (ANNs), which are algorithms structured to mimic biological brains with neurons and synapses.

ANNs are often constructed in layers, each of which performs a slightly different function that contributes to the end result.  Deep learning is the study of how these layers interact and the practice of applying these principles to data.
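As a sketch of that layered structure, a small network built with the open-source PyTorch library might look like the following; the layer sizes and two-class output are arbitrary choices for illustration.

```python
# A sketch of a layered artificial neural network in PyTorch. Data flows
# through the layers in order, each transforming its input before
# passing the result on, loosely mimicking neurons and synapses.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(10, 32),  # input layer: 10 features in, 32 "neurons" out
    nn.ReLU(),          # non-linearity, loosely akin to a neuron firing
    nn.Linear(32, 16),  # hidden layer learns intermediate patterns
    nn.ReLU(),
    nn.Linear(16, 2),   # output layer, e.g. two diagnostic classes
)

# A batch of four hypothetical 10-feature records passes through
# every layer in turn.
scores = model(torch.randn(4, 10))
print(scores.shape)  # torch.Size([4, 2])
```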

“Deep learning is in the intersections among the research areas of neural networks, artificial intelligence, graphical modeling, optimization, pattern recognition, and signal processing,” wrote researchers Li Deng and Dong Yu in Deep Learning: Methods and Applications.

Just like in the broader field of machine learning, deep learning algorithms can be supervised, unsupervised, or somewhere in between.  Natural language processing, speech and audio processing, and translation services have particularly benefitted from this multi-layer approach to processing information.

COGNITIVE COMPUTING

Cognitive computing is often used interchangeably with machine learning and artificial intelligence in common marketing jargon.  It is widely considered to be a term coined by IBM and used mainly to describe the company’s approach to the science of artificial intelligence, especially in relation to IBM Watson.

However, in 2014, the Cognitive Computing Consortium convened a group of stakeholders including Microsoft, Google, SAS, and Oracle to develop a working definition of cognitive computing across multiple industries:

To respond to the fluid nature of users’ understanding of their problems, the cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. To do this, systems often need to weigh conflicting evidence and suggest an answer that is “best” rather than “right”.  They provide machine-aided serendipity by wading through massive collections of diverse information to find patterns and then apply those patterns to respond to the needs of the moment. Their output may be prescriptive, suggestive, instructive, or simply entertaining.

Cognitive computing systems must be able to learn and adapt as inputs change, interact organically with users, “remember” previous interactions to help define problems, and understand contextual elements to deliver the best possible answer based on available information, the Consortium added.

This view of cognitive computing suggests a tool that lies somewhere below the benchmark for artificial intelligence.  Cognitive computing systems do not necessarily aspire to imitate intelligent human behavior, but instead to supplement human decision-making power by identifying potentially useful insights with a high degree of certainty.

Clinical decision support naturally comes to mind when considering this definition – and that is exactly where IBM (and its eager competitors) have focused their attention.

NATURAL LANGUAGE PROCESSING

Natural language processing (NLP) forms the foundation for many cognitive computing exercises.  The ingestion of source material, such as medical literature, clinical notes, or audio dictation records, requires a computer to understand what is being written, spoken, or otherwise communicated.

Speech recognition tools are already in widespread use among healthcare providers frustrated by the burdens of EHR data entry, and text-based NLP programs are starting to find applications in the clinical realm, as well.

NLP often starts with optical character recognition (OCR) technology that can turn static text, such as a PDF image of a lab report or a scan of a handwritten clinical note, into computable data.

Once the data is in a workable format, the algorithm parses the meaning of each element to complete a task such as translating into a different language, querying a database, summarizing information, or supplying a response to a conversation partner.
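A deliberately simplified version of that pipeline might look like the sketch below; it assumes the open-source pytesseract wrapper (which requires a local Tesseract installation) and a hypothetical scanned lab report, report.png.

```python
# A simplified OCR-then-parse pipeline.
import re

from PIL import Image
import pytesseract

# Step 1: optical character recognition turns the static image
# into computable text.
text = pytesseract.image_to_string(Image.open("report.png"))

# Step 2: a deliberately simple parse pulls out test names and values,
# e.g. "Hemoglobin: 13.5" becomes {"Hemoglobin": 13.5}. A real clinical
# pipeline would use NLP models and vocabularies, not a lone regex.
results = {
    name.strip(): float(value)
    for name, value in re.findall(r"([A-Za-z ]+):\s*([\d.]+)", text)
}
print(results)
```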

Natural language processing can be enhanced by applying deep learning techniques to understand concepts with multiple or unclear meanings, as are common in everyday speech and writing.

In the healthcare field, where acronyms and abbreviations are very common, accurately parsing through this “incomplete” data can be extremely challenging.  Other data integrity and governance concerns, as well as the large volume of unstructured data, can also raise issues when attempting to employ NLP to extract meaning from big data.

SEMANTIC COMPUTING

Semantic computing is the study of understanding how different elements of data relate to one another and using these relationships to draw conclusions about the meaning, content, and structure of data sets.  It is a key component of natural language processing that draws on elements of both computer science and linguistics.

“Semantic computing is a technology to compose information content (including software) based on meaning and vocabulary shared by people and computers and thereby to design and operate information systems (i.e., artificial computing systems),” wrote Lei Wang and Shiwen Yu from Peking University.

The researchers noted that the Google Translate service is heavily reliant on semantic computing to distinguish between similar meanings of words, especially between languages that may use one word or symbol for multiple concepts.

In 2009, the Institute for Semantic Computing used the following definition:

[Semantic computing] brings together those disciplines concerned with connecting the (often vaguely formulated) intentions of humans with computational content. This connection can go both ways: retrieving, using and manipulating existing content according to user’s goals (‘do what the user means’); and creating, rearranging, and managing content that matches the author’s intentions (‘do what the author means’).

Currently in healthcare, however, the term is often used in relation to the concept of data lakes, or large and relatively unstructured collections of data sets that can be mixed and matched to generate new insights.

Semantic computing, or graph computing, allows healthcare organizations to ingest data once, in its native format, and then define schemas for the relationships between those data sets on the fly.

Instead of locking an organization’s data into an architecture that only allows the answer to one question, semantic data lakes can mix and match data again and again, uncovering new associations between seemingly unrelated information.

Natural language interfaces that leverage NLP techniques to query semantic databases are becoming a popular way to interact with these freeform, malleable data sets.

For population health management, medical research, and patient safety, this capability is invaluable.  In the era of value-based care, organizations need to understand complex and subtle relationships between concepts such as the unemployment rate in a given region, the average insurance deductible, and the rate at which community members are visiting emergency departments to receive uncompensated care.
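A minimal sketch using the open-source rdflib library shows how such relationships can be expressed and queried on the fly; the URIs, predicate names, and values below are invented for illustration, and a real system would lean on shared clinical vocabularies.

```python
# A sketch of the "ingest once, relate on the fly" idea with rdflib.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/health/")
g = Graph()

# Ingest facts as subject-predicate-object triples, with no fixed schema.
g.add((EX.RegionA, EX.unemploymentRate, Literal(0.08)))
g.add((EX.RegionA, EX.avgDeductible, Literal(3500)))
g.add((EX.RegionA, EX.uncompensatedEDVisits, Literal(1200)))

# Later, relate the same data in a new way with an ad hoc SPARQL query.
query = """
SELECT ?region ?rate ?visits WHERE {
    ?region <http://example.org/health/unemploymentRate> ?rate ;
            <http://example.org/health/uncompensatedEDVisits> ?visits .
}
"""
for region, rate, visits in g.query(query):
    print(region, rate, visits)
```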

As a buzzword, semantic computing has been very quickly overtaken by machine learning, deep learning, and artificial intelligence. But all of these methodologies attempt to solve similar problems in more or less similar ways.

Vendors of health IT offerings that rely on advanced analytics are hoping to equip providers with greatly enhanced decision-making capabilities that augment their ability to deliver the best possible patient care.

While the field is still in the relatively early stages of its development, healthcare providers can look forward to a broad selection of big data tools that allow access to previously untapped insights about quality, outcomes, spending, and other key metrics for success.

 
