AI and machine learning are augmentative tools, size matters among data sets, real-world applicability is a must, and tools must be validated, experts say.

By Bill Siwicki

October 13, 2017

11:25 AM

Some healthcare organizations are turning to artificial intelligence and machine learning because of the enhancements these advanced technologies can make to patient care, operations and security. But assessing the promises of the technologies can be difficult and time-consuming unless you’re an expert.

Two such experts weigh in with insights hospitals should understand when both planning and purchasing AI tools. 

Raj Tiwari is a chief architect at Health Fidelity, which uses natural language processing technology and statistical inference engines, mixed with analytics, to identify and correct compliance risks, and Brent Vaughan is the CEO of Cognoa, a company that develops AI tools for diagnosing medical conditions.


Their advice: Know that AI and machine learning are augmentative tools, understand that size matters among data sets, real-world applicability is a must, and the tools must be trained and validated. 

To draw a baseline: at this point in time, AI is more akin to augmented intelligence than artificial intelligence, and, as far as machine learning is concerned, hospitals should think of it as a supplement to human expertise, experience, and decision-making.

“AI is a tool that enhances our capability, allowing humans to do more than what we could on our own,” Tiwari added. “It’s designed to augment human insight, not replace it. For example, a doctor can use AI to access the distilled expertise of hundreds of clinicians for the best possible course of action. This is far more than he or she could ever do by getting a second or third opinion.” 


Augmenting human insight means analyzing AI recommendations carefully. A lot of the buzz around AI and machine learning comes from the creators of AI tools. That’s understandable, because this group is focused on what AI can do to improve healthcare and other realms.

“People who implement and deploy real-world solutions based on AI need to ask big-picture questions,” Tiwari said. “Specifically, how does it assist the end user? AI should be treated as one of the many tools at the disposal of the user, not the definitive solution.”

Healthcare organizations need to make sure the team that developed their AI tools has a deep enough understanding of the relevant industry, Cognoa’s Vaughan said. 

“Many people in the machine learning and AI world, especially consultants, feel that great AI can be developed without requiring deep domain knowledge – they will say that their AI solution is ‘domain agnostic,’” Vaughan said. “Many would not agree – and in healthcare, this can particularly be untrue.”

Healthcare data sets, in fact, are often much smaller than those in other consumer and business applications. Unlike AI tools that serve up ads or pick one’s next movie based upon tens of millions of data points, healthcare AI tools often rely on datasets orders of magnitude smaller and thus require that the AI developers have deeper industry knowledge and understanding of the data, because coding mistakes and data misinterpretation are amplified in smaller data sets.

Real-world applicability is a must. One of the biggest challenges to machine learning adoption across the healthcare industry is scalability, Tiwari said.

“An algorithm may work flawlessly in the controlled academic or limited clinical setting, but translating that to the real world can introduce any number of complications,” he said. “For example, if the tool is trained by using data from a research hospital, it may not function well in a regular hospital where many patients have incomplete medical records.”

Those patients may have critical pieces of data missing, and the tool would need to be able to account for that. Data cleanliness and processing speed can be hurdles outside the neat environment of research applications.

Healthcare organizations also need to make sure their AI tools were trained and validated with representative populations, Vaughan said.

“Since the training and validation data sets often are much smaller in healthcare, the differences between populations can become exacerbated,” he explained. “For example, primary and secondary or tertiary care settings can see dramatically different incident rates for different events. An AI tool that is good at predicting a particular outcome in one setting might have a much higher error rate in the other setting.”
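To see why the setting matters so much, here is a minimal back-of-the-envelope sketch in Python. The sensitivity, specificity, and prevalence figures are invented, not drawn from any product mentioned here; the point is only that the same hypothetical tool’s positive predictive value collapses when the condition it flags is rare.

```python
# Illustrative only: a hypothetical screening tool with fixed sensitivity and
# specificity yields very different positive predictive values (PPV) in
# settings with different disease prevalence.

def positive_predictive_value(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Bayes' rule: P(disease | positive result)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assumed performance figures for a hypothetical tool (not from the article).
sens, spec = 0.90, 0.95

for setting, prevalence in [("primary care", 0.01), ("tertiary referral center", 0.20)]:
    ppv = positive_predictive_value(sens, spec, prevalence)
    print(f"{setting}: prevalence {prevalence:.0%} -> PPV {ppv:.0%}")
# With these assumed numbers, PPV is roughly 15% at 1% prevalence
# but over 80% at 20% prevalence.
```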

And AI tools in healthcare must help meet security and compliance commitments, Tiwari said.

“As we build and leverage machine learning models, software vendors and organizations that implement them must be cognizant of data compliance and audit requirements,” Tiwari said. “These include having appropriate usage agreements in place for the data being analyzed.”

Having adequate permissions in place goes without saying; commitments to patient data privacy and security are a must. In certain cases, machine learning systems can inadvertently leak private information, Tiwari explained. 

Such occurrences could be disastrous and significantly hinder further adoption of AI and machine learning out of fear.

Twitter: @SiwickiHealthIT
Email the writer: bill.siwicki@himssmedia.com

AI for more efficient healthcare

Automation through AI, robotics or 3D printing will make healthcare more efficient and more sustainable. These new digital technologies will improve healthcare processes, resulting in earlier and more efficient treatment of patients, and will eventually shift the focus in medicine from treatment to prevention. Moreover, medical professionals will get the chance to move from repetitive, monotonous tasks to challenging, creative assignments.

AI certainly has more revolutionary potential than simply optimizing processes: it can mine medical records or medical images to surface previously unknown implications or signals, design treatment plans for cancer patients, create drugs from existing pills, or repurpose old drugs for new uses. But imagine how much time you as a GP would have if the administrative work were taken care of by an AI-powered system. Your only task would be to concentrate on the patient’s problem. Imagine how much time you could spare if healthcare chatbots and instant-messaging health apps answered simple patient questions that do not necessarily need the intervention of a medical professional.

Here are 10 ways AI could make me a better doctor.

1) Eradicate waiting time

You would think that waiting time is the exclusive “privilege” of patients and that doctors do not have a free moment during their overpacked days. However, suboptimal healthcare processes result not only in patients sometimes waiting for hours in front of doctors’ offices but also in medical professionals losing a lot of time every day waiting for something (a patient, a lab result, etc.). An AI system that makes my schedule as efficient as possible, directing me to the next logical task, would be a jackpot.

2) Prioritize my emails

The digital tsunami is upon us. Our inboxes are full of unread messages, and it is an everyday challenge not to drown in the ocean of new letters. We deal with about 200 emails every single day. Even though we try to teach Gmail to mark emails as important or to categorize them automatically into social media messages, newsletters, and personal emails, it is still a challenge. An AI system could prioritize all 3,000 unread emails in a second. Imagine if we could streamline digital communication completely in line with our needs and share and receive information more efficiently and more accurately without too much effort.

According to a recent report in the New Scientist, half a million people have professed their love for Alexa, Amazon’s intelligent personal assistant and more than 250,000 have proposed marriage to it.

 

3) Find the information we need

We think we have mastered the skill of searching for information online using dozens of Google search operators and different kinds of search engines for different tasks, but it still takes time. What if an AI OS could answer my questions immediately by looking up the answer online?

More and more intelligent personal assistants, such as Siri on iOS or Amazon’s Alexa, are leading us into the future, and there will soon be highly capable, specialized AI-powered chatbots in the field of healthcare as well. Bots like HealthTap or Your.Md already aim to help patients find a solution to the most common symptoms through AI. Safedrugbot is a chat messaging service that offers assistant-like support to health professionals and doctors who need appropriate information about the use of drugs during breastfeeding.

4) Keep us up-to-date

There is too much information out there. Without an appropriate compass, we are lost in the jungle of data. It is even more important to find the most accurate, relevant and up-to-date information when it comes to such a sensitive area as healthcare. That’s why we started Webicina, which collects the latest news from the best, most reliable sources into one, easily manageable magazine.

On PubMed, there are 23 million papers. Even if we read three or four studies in our field of interest per week, we could not finish them in a lifetime, and meanwhile millions of new studies would come out. We need an AI to process the pile of information and show us the most relevant papers – and we will get there soon. IBM Watson can already process a million pages in seconds. This remarkable speed has led to Watson being tried in oncology centers to see how helpful it is in making treatment decisions in cancer care.

 

5) Work when we don’t

We can fulfill our online tasks (emails, reading papers, searching for information) when we use our PC or laptop, and we can do most of these on our smartphone. When we don’t use any of these, we obviously cannot work. An AI system could work on these tasks when we don’t have any device in hand.

Imagine that you are playing tennis or doing the dishes at home when an important message comes in. With the help of an AI, you could respond to your boss without the need to touch any devices – a toned-down version of the AI in the movie Her, which arranged the whole publishing process of Joaquin Phoenix’s character’s book without him needing to lift a finger.

6) Help us make hard decisions rational

A doctor must face a series of hard decisions every day. The best we can do is to make those decisions as informed as possible. We can ask people whose opinion we value, but basically, that’s it. Unfortunately, you would search the world wide web in vain for certain answers.

But AI-powered algorithms could help in the future. For example, IBM Watson launched its special program for oncologists – and we interviewed one of the professors working with it – which is able to provide clinicians with evidence-based treatment options. Watson for Oncology has an advanced ability to analyze the meaning and context of structured and unstructured data in clinical notes and reports that may be critical to selecting a treatment pathway. So AI is not making the decision per se, but it offers you the most rational options.

 

7) Help patients with urgent matters reach us

A doctor has a lot of calls, in-person questions, emails and even messages from social media channels on a daily basis. In this noise of information, not every urgent matter can reach you. What if an AI OS could select the crucial ones out of the mess and direct your attention to them when it’s actually needed?

Moreover, if you look at the patient side, you will see how long the route is from recognizing symptoms at home to reaching a specialist. For example, in Kaposvár, Hungary, the average time from the discovery of a cancerous disease until the actual medical consultation about the treatment plan was 54 days. Since November 2015, this alarming number has been drastically reduced to 21 days with the help of special software and optimized patient management practices. Imagine, then, what earthquake-like changes AI could bring to patient management if even a simpler process management and follow-up tool could more than halve the waiting time!

8) Help us improve over time

People, even those who work on becoming better at their job, make the same mistakes over and over again. What if, by discussing every challenging task or decision with an AI, we could improve the quality of our work? Just look at the following:

In the Netherlands, 97% of healthcare invoices are digital, containing data regarding the treatment, the doctor, and the hospital. These invoices can be easily retrieved. A local company, Zorgprisma Publiek, analyzes the invoices and uses IBM Watson in the cloud to mine the data. It can tell whether a doctor, clinic or hospital makes mistakes repetitively in treating a certain type of condition, in order to help them improve and avoid unnecessary hospitalizations of patients.

 

9) Help us collaborate more

Sometimes we wonder how many researchers, doctors, nurses or patients are thinking about the same issues in healthcare as we are. At those times, we imagine having an AI by our side that helps us find the most promising collaborators and invite them to work together for a better future.

Clinical and research collaborations are crucial to finding the best solutions for emerging problems; however, more often than not, it is difficult to find the most relevant partners. There are already efforts to change this. For example, in the field of clinical trials, TrialReach tries to bridge the gap between patients and researchers who are developing new drugs. If more patients have a chance to participate in trials, they might become more engaged with potential treatments or even be able to access new treatments before they become FDA-approved and widely available.

10) Do administrative work

A significant portion of a doctor’s average day is spent on administrative work. An AI could learn how to do it properly and, over time, do it better than we can. This is the area where AI could impact healthcare the most. Repetitive, monotonous tasks without the slightest need for creativity could and should be done by artificial intelligence. There are already great examples leaning towards this trend.

IBM launched another algorithm called Medical Sieve. It is an ambitious long-term exploratory project to build a next-generation “cognitive assistant” with analytical and reasoning capabilities and a wide range of clinical knowledge. Medical Sieve is designed to assist in clinical decision-making in radiology and cardiology.

 

Many fear that algorithms and artificial intelligence will take the jobs of medical professionals in the future. We highly doubt it. Instead of replacing doctors, AI will augment them and make them better at their jobs. Without the day-to-day treadmill of administrative and repetitive tasks, the medical community could again turn to its most important task with full attention: healing.

 

 

(Source: https://goo.gl/ZwVjvu)

The concept of machine learning has quickly become very attractive to healthcare organizations, but much of the necessary vocabulary is not yet well understood.


by Jennifer Bresnick


After a slow and unsteady beginning at the start of the decade, the healthcare industry is finally becoming somewhat more comfortable with the idea that learning to live with big data is the only way to see financial and clinical success in the future.

Electronic health records are now commonplace (if not universally beloved), and even the most reticent, paper-loving organizations are now cautiously embracing the idea that all that digital data could actually be good for something.

For stakeholders on the other end of the spectrum, charging forward on the leading edge of the health IT revolution, the benefits of big data analytics are already clear.

Predictive analytics, real-time clinical decision support, precision medicine, and proactive population health management are finally within striking distance, driven largely by rapid advances in machine learning.

But while many in the healthcare industry are sure that their technological goals are hovering somewhere just over the horizon, plotting a course to get there can be a difficult proposition – especially when the landscape is clouded by confusing vocabulary, technical terminology, and as-yet-undeliverable promises of truly automated insights.


“Artificial intelligence” is a buzzword saturated with hope, excitement, and visions of sci-fi blockbuster movies, but it isn’t the same thing as “machine learning.”

Machine learning is slightly different than deep learning, and neither of them match up exactly with cognitive computing or semantic analysis.

As the healthcare industry moves quickly and irreversibly into the era of big data analytics, it is important for organizations looking to purchase advanced health IT tools to keep the swirling vocabulary straight so they understand exactly what they’re getting and how they can – and can’t – use it to improve the quality of patient care.

ARTIFICIAL INTELLIGENCE

Artificial intelligence is the ability of a computer to complete tasks in a manner typically associated with a rational human being.

While Merriam-Webster’s definition uses the word “imitate” to describe the behavior of an artificial intelligence agent, the Encyclopedia Britannica defines AI as a program “endowed with the intellectual processes characteristic of humans,” which indicates a slightly different view of the attributes of an AI agent.


Whether AI is simply imitating human behavior or infused with the ability to generate original answers to complex cognitive problems via some indefinable spark, true artificial intelligence is widely regarded as a program or algorithm that can beat the famous Turing test.

Developed in 1950 by computer science pioneer Alan Turing, the Turing test states that an artificial intelligence must be able to exhibit intelligent behavior that is indistinguishable from that of a human.

One classic interpretation of Turing’s work is that a human observer of both a fellow human and a machine would engage both parties in an attempt to distinguish between the algorithm and the flesh-and-blood participant.

If the computer could fool the observer into thinking its actions are equivalent to and indistinguishable from the human participant, it would pass the test.  Thus far, there are no examples of artificial intelligence that have truly done so.

Artificial intelligence also has a second definition.  It is the branch of computer science associated with studying and developing the technologies that would allow a computer to pass (or surpass) the Turing test.


So when a clinical decision support tool says it “uses artificial intelligence” to power its analytics, consumers should be aware that “using principles of computer science associated with the development of AI” is not really the same thing as offering a fully independent and rational diagnosis-bot.

MACHINE LEARNING

Machine learning and artificial intelligence are often used interchangeably, but conflating the two is incorrect.  Machine learning is one small part of the study of artificial intelligence, and refers to a specific sub-section of computer science related to constructing algorithms that can make accurate predictions about future outcomes.

Machine learning accomplishes this through pattern recognition, rule-based logic, and reinforcement techniques that help algorithms understand how to strengthen “good” outcomes and eliminate “bad” ones.

Machine learning can be supervised or unsupervised.  In supervised learning, algorithms are presented with “training data” that contains examples with their desired conclusions.  For healthcare, this may include samples of pathology slides that contain cancerous cells as well as slides that do not.

The computer is trained to recognize what indicates an image of cancerous tissue so that it can distinguish between healthy and non-healthy images in the future.

When the computer correctly flags a cancerous image, that positive result is reinforced by the trainer and the data is fed back into the model, eventually leading to more and more precise identification of increasingly complex samples.
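As a rough illustration of that training loop, and not of any particular vendor’s system, the following scikit-learn sketch fits a simple classifier on labeled examples and then checks it against held-out cases. The library’s bundled breast-cancer dataset stands in here for labeled pathology features.

```python
# A minimal supervised-learning sketch: the model is shown labeled examples
# ("training data") and then evaluated on held-out cases it has never seen.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)    # features and labels (malignant/benign)

# Hold out a test set so performance is measured on unseen cases.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)     # simple baseline classifier
model.fit(X_train, y_train)                   # "training": learn from labeled examples

print(f"accuracy on unseen cases: {model.score(X_test, y_test):.2f}")
```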

Unsupervised learning does not typically leverage labeled training data.  Instead, the algorithm is tasked with identifying patterns in data sets on its own by defining signals and potential abnormalities based on the frequency or clustering of certain data.

Unsupervised learning may have applications in the security realm, where humans do not know exactly what form unauthorized access will take.  If the computer understands what routine and authorized access typically looks like, it may be able to quickly identify a breach that does not meet its standard parameters.
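A minimal sketch of that idea, using scikit-learn’s IsolationForest on invented access-log features rather than real audit data: the model is fit only on routine sessions and then flags a session that deviates sharply from them.

```python
# Unsupervised sketch: no labels are provided; the model learns what "routine"
# access looks like and flags records that deviate from it. The numbers are
# synthetic stand-ins for access-log features (login hour, records viewed).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated routine behaviour: daytime logins, modest record counts.
routine = np.column_stack([
    rng.normal(loc=13, scale=2, size=500),    # hour of day
    rng.normal(loc=20, scale=5, size=500),    # records accessed per session
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(routine)

# A 3 a.m. session pulling 400 records should stand out as anomalous (-1).
suspicious = np.array([[3.0, 400.0]])
print(detector.predict(suspicious))           # -1 = flagged as anomaly
```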

DEEP LEARNING

Deep learning is a subset of machine learning that deals with artificial neural networks (ANNs), which are algorithms structured to mimic biological brains with neurons and synapses.

ANNs are often constructed in layers, each of which performs a slightly different function that contributes to the end result.  Deep learning is the study of how these layers interact and the practice of applying these principles to data.

“Deep learning is in the intersections among the research areas of neural networks, artificial intelligence, graphical modeling, optimization, pattern recognition, and signal processing,” wrote researchers Li Deng and Dong Yu in Deep Learning: Methods and Applications.

Just like in the broader field of machine learning, deep learning algorithms can be supervised, unsupervised, or somewhere in between.  Natural language processing, speech and audio processing, and translation services have particularly benefitted from this multi-layer approach to processing information.
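For a concrete, if toy, picture of “layers,” the following PyTorch sketch stacks a few linear layers and trains them together with backpropagation. The data and dimensions are arbitrary and purely illustrative, not tied to any clinical system.

```python
# A layered (deep) network sketch: each layer transforms the output of the
# previous one, and training adjusts all layers' weights together.
import torch
from torch import nn

model = nn.Sequential(                 # three stacked layers
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),    # probability-like output
)

x = torch.randn(4, 10)                 # a batch of 4 examples, 10 features each
y = torch.tensor([[1.], [0.], [1.], [0.]])

loss_fn = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(100):                   # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                    # backpropagate through every layer
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```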

COGNITIVE COMPUTING

Cognitive computing is often used interchangeably with machine learning and artificial intelligence in common marketing jargon.  It is widely considered to be a term coined by IBM and used mainly to describe the company’s approach to the science of artificial intelligence, especially in relation to IBM Watson.

However, in 2014, the Cognitive Computing Consortium convened a group of stakeholders including Microsoft, Google, SAS, and Oracle to develop a working definition of cognitive computing across multiple industries:

To respond to the fluid nature of users’ understanding of their problems, the cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. To do this, systems often need to weigh conflicting evidence and suggest an answer that is “best” rather than “right”.  They provide machine-aided serendipity by wading through massive collections of diverse information to find patterns and then apply those patterns to respond to the needs of the moment. Their output may be prescriptive, suggestive, instructive, or simply entertaining.

Cognitive computing systems must be able to learn and adapt as inputs change, interact organically with users, “remember” previous interactions to help define problems, and understand contextual elements to deliver the best possible answer based on available information, the Consortium added.

This view of cognitive computing suggests a tool that lies somewhere below the benchmark for artificial intelligence.  Cognitive computing systems do not necessarily aspire to imitate intelligent human behavior, but instead to supplement human decision-making power by identifying potentially useful insights with a high degree of certainty.

Clinical decision support naturally comes to mind when considering this definition – and that is exactly where IBM (and its eager competitors) have focused their attention.

NATURAL LANGUAGE PROCESSING

Natural language processing (NLP) forms the foundation for many cognitive computing exercises.  The ingestion of source material, such as medical literature, clinical notes, or audio dictation records, requires a computer to understand what is being written, spoken, or otherwise communicated.

Speech recognition tools are already in widespread use among healthcare providers frustrated by the burdens of EHR data entry, and text-based NLP programs are starting to find applications in the clinical realm, as well.

NLP often starts with optical character recognition (OCR) technology that can turn static text, such as a PDF image of a lab report or a scan of a handwritten clinical note, into computable data.

Once the data is in a workable format, the algorithm parses the meaning of each element to complete a task such as translating into a different language, querying a database, summarizing information, or supplying a response to a conversation partner.

Natural language processing can be enhanced by applying deep learning techniques to understand concepts with multiple or unclear meanings, as are common in everyday speech and writing.

In the healthcare field, where acronyms and abbreviations are very common, accurately parsing through this “incomplete” data can be extremely challenging.  Other data integrity and governance concerns, as well as the large volume of unstructured data, can also raise issues when attempting to employ NLP to extract meaning from big data.
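A deliberately tiny Python sketch of one such step, abbreviation expansion, is shown below. The dictionary is hypothetical; real clinical NLP systems rely on curated terminologies and context to resolve ambiguous abbreviations.

```python
# Toy illustration of one small NLP preprocessing step: expanding clinical
# abbreviations before further parsing. The lookup table is a made-up sample.
import re

ABBREVIATIONS = {          # hypothetical sample entries
    "pt": "patient",
    "sob": "shortness of breath",
    "htn": "hypertension",
    "hx": "history",
}

def expand_abbreviations(note: str) -> str:
    def replace(match: re.Match) -> str:
        word = match.group(0)
        return ABBREVIATIONS.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", replace, note)

note = "Pt presents with SOB, hx of HTN."
print(expand_abbreviations(note))
# -> "patient presents with shortness of breath, history of hypertension."
```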

SEMANTIC COMPUTING

Semantic computing is the study of understanding how different elements of data relate to one another and using these relationships to draw conclusions about the meaning, content, and structure of data sets.  It is a key component of natural language processing that draws on elements of both computer science and linguistics.

“Semantic computing is a technology to compose information content (including software) based on meaning and vocabulary shared by people and computers and thereby to design and operate information systems (i.e., artificial computing systems),” wrote Lei Wang and Shiwen Yu from Peking University.

The researchers noted that the Google Translate service is heavily reliant on semantic computing to distinguish between similar meanings of words, especially between languages that may use one word or symbol for multiple concepts.

In 2009, the Institute for Semantic Computing used the following definition:

[Semantic computing] brings together those disciplines concerned with connecting the (often vaguely formulated) intentions of humans with computational content. This connection can go both ways: retrieving, using and manipulating existing content according to user’s goals (‘do what the user means’); and creating, rearranging, and managing content that matches the author’s intentions (‘do what the author means’).

Currently in healthcare, however, the term is often used in relation to the concept of data lakes, or large and relatively unstructured collections of data sets that can be mixed and matched to generate new insights.

Semantic computing, or graph computing, allows healthcare organizations to ingest data once, in its native format, and then define schemas for the relationships between those data sets on the fly.

Instead of locking an organization’s data into an architecture that only allows the answer to one question, semantic data lakes can mix and match data again and again, uncovering new associations between seemingly unrelated information.

Natural language interfaces that leverage NLP techniques to query semantic databases are becoming a popular way to interact with these freeform, malleable data sets.

For population health management, medical research, and patient safety, this capability is invaluable.  In the era of value-based care, organizations need to understand complex and subtle relationships between concepts such as the unemployment rate in a given region, the average insurance deductible, and the rate at which community members are visiting emergency departments to receive uncompensated care.
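As a bare-bones sketch of that idea, the snippet below stores invented facts about hypothetical regions as (subject, relation, object) triples and answers different questions from the same data without committing to a fixed schema up front.

```python
# A minimal "triple" sketch of semantic/graph-style data. All entities and
# numbers are invented for illustration.
triples = [
    ("region_42", "has_unemployment_rate", 0.09),
    ("region_42", "has_avg_deductible", 3200),
    ("region_42", "has_ed_visit_rate", 0.31),
    ("region_17", "has_unemployment_rate", 0.04),
    ("region_17", "has_ed_visit_rate", 0.12),
]

def query(subject=None, relation=None):
    """Return every triple matching the given subject and/or relation."""
    return [
        (s, r, o)
        for s, r, o in triples
        if (subject is None or s == subject) and (relation is None or r == relation)
    ]

# Ask new questions of the same data without reshaping it first.
print(query(relation="has_ed_visit_rate"))   # compare ED visit rates across regions
print(query(subject="region_42"))            # everything known about one region
```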

As a buzzword, semantic computing has been very quickly overtaken by machine learning, deep learning, and artificial intelligence. But all of these methodologies attempt to solve similar problems in more or less similar ways.

Vendors of health IT offerings that rely on advanced analytics are hoping to equip providers with greatly enhanced decision-making capabilities that augment their ability to deliver the best possible patient care.

While the field is still in the relatively early stages of its development, healthcare providers can look forward to a broad selection of big data tools that allow access to previously untapped insights about quality, outcomes, spending, and other key metrics for success.

 

An Emergency Department (ED) is one of those things that you hate to need, and you love to hate. EDs have been much-maligned, characterized as error-prone money-wasters and “loss leaders.” Some healthcare policy-makers have targeted EDs as major contributors to healthcare costs spiraling out of control. They could not be more wrong.

A few decades back, many hospitals were staffed by local physicians who had little or no specialized training in trauma or critical care. Worse, the coverage was incomplete, requiring night and weekend shifts to be filled by resident physicians (“moonlighters”) who had not yet completed their training. The nursing and other staff also lacked specialized training and tended to be pulled in from other departments. Rural and community hospitals could not always afford technologies like CT or MRI scanners, which limited diagnostic and treatment options. Intubating a patient in a dark, narrow semi-trailer in the parking lot because that’s where the (rented) CT scanner was located was not unheard of.  Not the care you’d choose if given the option.

Thankfully, much has changed. The skills and capabilities of the teams providing this complex and crucial care have grown exponentially, spilling out into even the smallest hospitals. Organizations like the American College of Emergency Physicians (www.acep.org), the American Board of Emergency Medicine (www.abem.org), the Emergency Nurses Association (www.ena.org), and the Society for Academic Emergency Medicine (www.saem.org) have promulgated evidence-based training and standards for the practice of emergency medicine that have given America an enviable resource.

We now have over 42,000 dedicated ED physicians and over 180,000 ED nurses in the United States. The majority of these providers now have specialized training and certification that uniquely qualifies them for their challenging roles. Hospitals are better-equipped and more prepared to meet the acute care needs of their communities. EMS and ambulance systems have become more organized and efficient and offer more services in the field. Furthermore, the regionalization of trauma care has assured that severely injured patients quickly reach the most appropriate care.

The ED has become center stage for diagnosis and treatment of many acute problems. EDs handle 28 percent of all US acute care visits and two-thirds of the acute care for the uninsured. The CDC reported in 2012 that one in five Americans visits the ED at least once a year. Primary care physicians are directing more patients to the ED as they can do more complex workups, provide diagnostic services not available in outpatient offices, absorb overflow, and handle unscheduled urgencies. The ED also sees most of the poor and uninsured because they turn no one away.

Calling Emergency Departments loss leaders is wrong. It is true that EDs order a lot of expensive tests. It is true that they duplicate tests when they don’t have outside results. It is true that some physicians order extra tests to protect themselves. Yet EDs still account for only 4% of the $2.6 trillion we spend on health care every year. Inpatient care takes 31%, which EDs often help avoid by providing outpatient workups and determining that patients don’t need to be in the hospital. It is true that 55% of ED care goes uncompensated, cutting overall hospital profits. But EDs generate up to 70% of inpatient admissions, from which hospitals make most of their money. Far more important than all of these financial considerations, however, is the fact that EDs provide unparalleled diagnostic and treatment capabilities, 24/7, to anyone and everyone. This requires a massive effort that should make us all proud, grateful, and supportive.

Support means understanding the challenges currently facing EDs; support means being part of the solution, instead of complaining every time they fall short of our expectations. We love to let our Facebook friends know how we spent 5 hours in the ED. But do we ask why? Between 2001 and 2008, use of EDs increased at twice the rate of population growth while hospitals closed nearly 200,000 beds. When you increase inflow and decrease outflow simultaneously, you get people sitting around. EDs have responded by adding beds and providers. But this is a gradual and expensive solution. What we need is an affordable way to see more patients in less time, without decreasing quality of care. The industry terms are “throughput” and “patient processing,” but I don’t like these terms because they make us sound like cheese. A better term is “flow.”

How can we improve patient flow in the ED? Individual hospitals typically spend millions of dollars to get answers to this question. They hire consultants to analyze where patients pile up, where staff are overly taxed and where they are sitting idle, where supplies run out and where they stack up, and other operational details. Recommendations for staffing, layout, and operational changes are obtained, but implementing them is costly and difficult. Worse, the consultations need to be repeated every year or two because so many factors change.

Not long ago, better ways were only in our dreams. Why not use computers to figure this stuff out? Artificial intelligence (AI) is a branch of computer science that solves real-world logistical problems by teaching a computer to “think” like the greatest problem solvers on the planet: us. From humble beginnings at Dartmouth and MIT in the 1950s, AI science and technology have grown remarkably and now pervade our world to the point where we take them for granted (like asking your phone to find you a local Italian restaurant). AI has had many successes in health care, such as expert systems that render diagnoses based on signs, symptoms, and interactive interviews. “Decision support” is a less threatening term for AI embedded in medical systems; such systems can make suggestions and deliver warnings in real time, at the point of care. Using natural language processing of the electronic medical record to understand the context and then applying rules and inference, decision support systems can recommend alternative treatments, provide warnings about drug interactions, or alert users to a departure from hospital policy.

AI research has also produced powerful new approaches to complex logistical problems. Older approaches either took too long to consider every possibility (brute force algorithms) or settled for better but not best solutions (greedy algorithms). Newer approaches like machine learning, neural networks, and genetic algorithms let us tackle bigger, more complex problems in a reasonable amount of computing time to find truly optimal solutions.

Computer solutions that could revolutionize ED patient flow now employ modeling and simulation to predict bottlenecks. Imagine knowing that in 6 hours you are going to have double the load on radiology, or that your ED average wait is going to triple in 3 hours. Now imagine being able to do something about it. AI-driven optimization algorithms provide real-time advice on how to schedule staff and other resources to avoid problems. The systems can also analyze historical patterns and offer long-term optimization advice. Knowing when and exactly how to move resources reduces wait times and allows more patients to be served. It also improves patient safety and enhances patient satisfaction. This technology is available at a fraction of the cost of flying consultants in, and advice is offered every day instead of every 1-2 years.
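The snippet below is a toy discrete-event simulation in that spirit, with invented arrival and treatment rates rather than real ED data; it estimates how average waiting time changes when a provider is added.

```python
# Toy modeling-and-simulation sketch: estimate average patient wait under
# assumed (invented) arrival and treatment rates and different staffing levels.
import random

def simulate_ed(arrival_rate_per_hr, service_time_hr, n_providers, hours=1000, seed=1):
    random.seed(seed)
    t, free_at, waits = 0.0, [0.0] * n_providers, []
    while t < hours:
        t += random.expovariate(arrival_rate_per_hr)              # next patient arrives
        i = min(range(n_providers), key=lambda k: free_at[k])     # earliest-free provider
        start = max(t, free_at[i])                                # wait if nobody is free
        waits.append(start - t)
        free_at[i] = start + random.expovariate(1 / service_time_hr)  # treatment time
    return sum(waits) / len(waits)

for providers in (2, 3):
    avg_wait = simulate_ed(arrival_rate_per_hr=5, service_time_hr=0.3, n_providers=providers)
    print(f"{providers} providers -> average wait {avg_wait * 60:.0f} minutes")
```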

There are many more insights to come in this new age of medical informatics. The impact of these emerging technologies on the difficult, complex problems facing healthcare is only beginning. AI has already proven to be useful enough to know it is a good path. As we really put our backs into designing tools for the connected, data-rich world that is upon us, we can expect game-changing results.

Every 2,160 years the sun’s position at the time of the vernal equinox moves into a new constellation. There’s debate about dates because bulls, rams, scorpions, and lions are rather fuzzy when they are made of stars, but many astronomers believe we have arrived at the Age of Aquarius. Aquarius is associated with flight and freedom, idealism and democracy, with truth and perseverance, and, most interestingly, with electricity, computers, and modernization. Whether you pay any mind to the stars or not, you’re going to notice.

 

 

White paper: Potentia Analytics, Inc.

Computational Intelligence in Medical Informatics

Intelligent Provider Scheduling | Patient Flow Optimization | Predictive Analytics

 


There is a lot of buzz about artificial intelligence (AI) these days.  To date, AI has enjoyed amazing successes and endured embarrassing failures. People love to believe that technology can fix everything. After all, it does have a pretty good track record over the past 2,000 years. But it can often be hard to separate science from science fiction. Where do we draw the line between AI hope and hype?

AI has always been intoxicating. We are driven to create things in our own image, in ways that transcend basic biology. And if our digital creations are better at math and logic, perhaps they could become better at thinking in general. Maybe they would start building better versions of themselves. Better and better, in fact, until one day they wouldn’t need us anymore. Yikes. A whole genre of dystopian science fiction pits us against our creations in biblical proportion. Are we opening Pandora’s box?

AI can be generally defined as the field of designing and building machines that exhibit intelligent behavior. As such, it has been carved up a number of ways and is actually quite a diverse field. Most broadly, we can consider “narrow AI,” focused on activities like language translation, image recognition, game-playing, or self-driving cars, in contrast to “general AI,” a machine with broadly applicable reasoning capability and perhaps, ultimately, self-awareness.

In the 18th century, Gottfried Wilhelm Leibniz, who invented calculus independently of Isaac Newton, demonstrated how logic could be reduced to symbols and reasoning to a set of operations on those symbols. This spawned the general idea that intelligence was, in a sense, algorithmic, mathematical. A century later, George Boole developed Boolean Algebra, a system that operated on states of truth (1 is true, 0 is false) to mathematically define a logical path from facts to conclusions. Boolean Algebra became the basis of digital information and computer programming. A hundred years after Boole, Alan Turing (whose day job was cracking Nazi secret codes) proved that a simple “Turing machine” needed only zeroes and ones to compute anything computable. This revelation coincided with the advent of electronic circuits that could represent, store, and manipulate these zeroes and ones. The result, the digital computer, transformed our world.

AI officially got started in the summer of 1956, in the little mountain town of Hanover, New Hampshire. Dartmouth College hosted a two-month gathering of geniuses, including Herbert Simon, Allen Newell, Marvin Minsky, Claude Shannon, and John McCarthy. They witnessed a demonstration of the world’s first AI program, Logic Theorist, which was able to prove mathematical theorems using symbolic logic and a list-processing architecture. Many came away from that conference convinced that the human mind could be engineered—needing only enough computer memory and processing power. What ensued was an explosion of research funding to develop the new field. It was a heady time, when computers started beating humans at everything from algebra to checkers. Computer scientists boasted that within 2 decades machines would eclipse human intelligence.

By 1976 this had proven to be far more difficult than expected. Despite their facility with math, computers, in general, were dumb as dirt. Hope floundered, ushering in the first “AI winter.” Funding dried up and there was an ebb in new ideas. Then, in the early 1980s, a fresh kind of AI arrived: expert systems. These new systems incorporated knowledge from subject matter experts and could render a kind of distilled expertise on demand. Machines were taught more than formulas—now having specific, highly relevant knowledge of their problem-solving domains. Expert systems were making headway in medical diagnosis, molecular structure determination and other complex problem spaces, and were saving some companies millions of dollars. There was a global resurgence of interest and funding for AI, along with widespread commercialization.

In the end, expert systems could only address a restricted space of problems, were hard to update, did not learn independently, and failed rather ridiculously when they strayed from their subject. Also, there was a lot of soft science and “vaporware” that got funded but never really worked. Like a lot of “bleeding edge” science, AI lacked standards and structure. This led to a growing general perception that AI was snake oil. In a 1987 conference, several of the most respected researchers urged sensibility and a more cautious tack for AI research. Such lack of faith popped the hype bubble and imploded the whole industry, ushering in the second AI winter. Funding disappeared, and businesses that had sprung up to support the effort, like companies that manufactured specialized AI computers, went under.

This proved to be a necessary and good thing, however. Like a forest fire, the brush was cleared so that the tallest trees could breathe. AI became more rigorous, more mathematical, more scientific. Machines got stronger too, doubling in memory and speed every 2 years. Most importantly, machines got connected. The emergence of ethernet, the Internet, the World Wide Web, and protocols and standards for sharing electronic data caused a sea change in the art of the possible. AI researchers realized that intelligence could be collaborative, opening the door to previously unimaginable feats. In 1997, IBM’s Deep Blue computer defeated the world’s reigning chess champion, Garry Kasparov. In 2011, IBM’s Watson computer competed on Jeopardy!, defeating two of the top champions. This was an amazing feat, requiring the machine to fathom puns, word games, and subtle inferences. These highly publicized achievements vaulted us, once again, into the hype-o-sphere. Will we yet again melt our wax wings?

AI labs, once the purview of prestigious universities, are springing up all over the place, especially at gaming, social networking, and search companies. Bloomberg Technology’s Jack Clark called 2015 a breakthrough year for AI, reporting that Google’s investment in AI had grown to over 2,700 projects. Much of what was once called AI, like optical character recognition, natural language understanding, and face recognition, is now just part and parcel of systems we use in our everyday lives. There is also less tendency to call AI by name, and more focus on what it actually does and does not do. AI has diversified into many forms, including machine learning, neural networks, genetic algorithms, deep learning, and self-organizing maps, and is cleverly buried in endeavors like simulation, optimization, and predictive analytics. AI comes in honed packages, built to deliver real results for real-world problems. In that sense, it doesn’t matter what you call it, as long as it is useful.

In “Machines Who Think,” Pamela McCorduck says “Science moves in rhythms, in seasons, with periods of quiet, when knowledge is being assimilated, perhaps rearranged, possibly reassessed, and periods of great exuberance, when new knowledge cascades in. We can’t always tell which is which. Technology changes, permitting the formerly infeasible, even unthinkable.”

So the problem with artificial intelligence is: it’s not artificial. In many cases, the intelligence employed by these systems derives from human insight, rendered in zeroes and ones. In other cases, humans are irrelevant. Thinking machines can take a new tack, unencumbered by human limitations. For some problems, machine intelligence can actually be better than human intelligence. In either case, the intelligence—and the solutions—are very real.

 

White paper: Potentia Analytics, Inc.

Computational Intelligence in Medical Informatics

Intelligent Provider Scheduling | Patient Flow Optimization | Predictive Analytics

 

References

  • Luger, George F. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Second Edition. Redwood City, CA: The Benjamin/Cummings Publishing Company, Inc., 1993.
  • Clark, Jack. Why 2015 Was a Breakthrough Year in Artificial Intelligence. Last modified on December 10, 2015. https://www.bloomberg.com/news/articles/2015-12-08/why-2015-was-a-breakthrough-year-in-artificial-intelligence. Accessed May 3, 2017.
  • McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 2nd Edition. Natick, MA: A K Peters, Ltd., 2004.
  • Hintze, Arend. Understanding the Four Types of Artificial Intelligence. November 14, 2016. http://www.govtech.com/computing/Understanding-the-Four-Types-of-Artificial-Intelligence.html. Accessed May 16, 2017.
  • US Office of Science and Technology Policy. Preparing for the Future of Artificial Intelligence. October 2016. https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf. Accessed May 16, 2017.

 

According to a 2017 report commissioned by the Association of American Medical Colleges (AAMC), we are facing an unprecedented shortage of doctors in America. By 2030, we may be short over 100,000 physicians. Medical specialties that are expected to be hardest hit include primary care, surgery, and psychiatry. Over the same period, the number of Americans over 65, who require the most healthcare resources, is expected to increase by 55%. This is a huge problem.

The situation in nursing is projected to be even worse. According to the Bureau of Labor Statistics, there will be over a million unfilled nursing positions by 2022. Some experts warn that this could become the worst nursing shortage in U.S. history. A 2007 report from the Institute of Medicine details the tremendous impact that adequate nurse staffing has on quality of care and patient safety. Nurses bear the crucial responsibilities of monitoring and educating patients and of implementing their treatment plans. They are in a unique position to detect problems early and to correct the mistakes of other staff. A 2011 study published in the New England Journal of Medicine showed that patient death rates increase significantly when hospital nursing is understaffed [4]. Studies have shown that being short on nurses increases rates of infections [5], readmissions to the hospital [6], medication errors [7], and other adverse events.

Shortages of healthcare workers are being compounded by a downward spiral of burnout and attrition. Nearly half of U.S. physicians say they are experiencing burnout, and the numbers are getting worse. A 2011 survey by the American Nurses Association reported that 3 in 4 nurses were feeling burned out, most of them blaming chronic nursing shortages as a major factor [9]. Burnout leads to fatigue and psychological distress and can lead to serious problems like alcohol and drug abuse. Undue work stress results in absenteeism, increased employee turnover, and difficulty recruiting new staff. Staff burnout impairs performance, patient safety, and patient satisfaction, and in the end is very costly to hospitals.

Organizations like the AAMC, the American Association of Colleges of Nursing (AACN), and others are working to recruit faculty and create more training positions to meet the increasing demand for providers. Unfortunately, skilled providers take many years to train and current efforts will not meet demands in time to prevent dangerous shortages of doctors and nurses.

Luckily, all is not lost. The solution to this problem, as in many other industries, is technology. We have arrived at what is being called the Fourth Industrial Revolution [10]. The First Industrial Revolution hit in the 18th century with steam engines and industrial machinery. The Second, in the 19th century, gave us electricity and mass production. The Third came in the 20th century with computers, the internet, and automation. Now the Fourth Industrial Revolution is at hand with the progressive integration of physical, biological, and cyber systems. Sensors, monitors, connectivity, actuators, and machine intelligence surround us, in everything from cars to refrigerators, phones, home environment, lighting, home security, and much more. It is believed that there may be over 50 billion connected devices by 2020 [11].

So how can technology help us with doctor and nurse shortages? One important solution lies in scheduling software. Scheduling workers turns out to be a very hard problem. When you have more than just a few people and a few considerations, like not working nights or weekends, the number of possible solutions grows exponentially and it becomes very hard to find the fairest, most balanced schedule. Only in the past few years have we benefitted from a convergence of data connectivity and advanced computing technologies like artificial intelligence and machine learning to yield robust solutions to this difficult problem.

Efficient, fair, and flexible scheduling means better use of limited staff. It also means increased staff satisfaction. People can trade shifts, provide notifications and requests over mobile devices, and find replacements faster, over larger pools of qualified, credentialed colleagues. Automated systems, based on sophisticated algorithms, are able to keep track of myriad rules and considerations, and the systems are able to weigh literally thousands of alternative schedules to constantly deliver the best possible solution. These systems have emerged from decades of academic research and are now being deployed as commercial applications that are saving hospitals millions of dollars.
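The sketch below illustrates the underlying idea with a toy hill-climbing search over invented staff, shifts, and penalty weights; commercial schedulers handle vastly richer rule sets, but the accept-improvements loop is similar in spirit.

```python
# Toy automated-scheduling sketch: start from a random assignment, then keep
# accepting single-shift changes that reduce a penalty score combining
# coverage fairness and a stand-in rest rule. All names and weights are invented.
import random

STAFF = ["A", "B", "C", "D"]
N_SHIFTS = 28                     # e.g., two shifts a day for two weeks
TARGET = N_SHIFTS / len(STAFF)    # fair share of shifts per person

def penalty(schedule):
    load = {s: schedule.count(s) for s in STAFF}
    unfairness = sum(abs(load[s] - TARGET) for s in STAFF)
    # Penalize back-to-back shifts for the same person (stand-in for rest rules).
    fatigue = sum(1 for i in range(1, N_SHIFTS) if schedule[i] == schedule[i - 1])
    return unfairness + 2 * fatigue

random.seed(0)
schedule = [random.choice(STAFF) for _ in range(N_SHIFTS)]

for _ in range(5000):                       # weigh thousands of alternatives
    i = random.randrange(N_SHIFTS)
    candidate = schedule.copy()
    candidate[i] = random.choice(STAFF)
    if penalty(candidate) <= penalty(schedule):
        schedule = candidate

print("penalty:", penalty(schedule))
print("shifts per person:", {s: schedule.count(s) for s in STAFF})
```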

Intelligent, automated healthcare scheduling and staffing solutions are meeting another new requirement in modern healthcare: itinerant staff. Yesterday’s healthcare workers signed on at one or two hospitals and tended to stay there for their entire career. Now it is not unusual for doctors and nurses to travel year round, maintaining credentials in numerous states and organizations. They are following higher pay to areas of greatest need, easing the burdens of hospitals and communities to provide adequate staffing. Obviously, this itinerant workforce creates even more scheduling complexity.

We are fortunate to be at a point where accelerating growth in both computing power and connectivity have converged to enable technological solutions that were only pipe dreams a few years ago. Global policy efforts are also breaking down the silo-like sequestering of healthcare data, promoting the safe sharing of outcomes, performance data, and patient information. Historically, hospitals spent millions of dollars to hire consultants to painstakingly review their operation and advise improvements. The expense and effort required meant that such analyses occurred rarely, often years apart. Healthcare analytics software now enables statistically meaningful comparisons to be done continuously. Drawing on decades of artificial intelligence research, new and powerful analytics can be applied to identify areas of greatest need and to provide practical, useable advice to health care workers and administrators continuously, in real time.

Other new technologies that are compensating for provider shortages include predictive analytics software that identifies bottlenecks and offers advice to increase the speed and efficiency of patient care. Patient flow, scheduling, and staffing technologies will occupy an increasingly vital and central role in the delivery of healthcare. The degree to which they will be able to compensate for nursing and physician shortages remains to be seen, but it is clear that they will continue to have substantial and lasting benefits.

 

White paper: Potentia Analytics, Inc.

Computational Intelligence in Medical Informatics

Intelligent Provider Scheduling | Patient Flow Optimization | Predictive Analytics

 

References:

  1. AAMC Projections Update 2017. https://aamc-black.global.ssl.fastly.net/production/media/filer_public/a5/c3/a5c3d565-14ec-48fb-974b-99fafaeecb00/aamc_projections_update_2017.pdf. Accessed April 11, 2017.
  2. Juraschek SP, Zhang X, Ranganathan V, Lin VW. United States registered nurse workforce report card and shortage forecast. Am J Med Qual. 2012;27(3):241-249. doi:10.1177/1062860611416634.
  3. Aspden P, Wolcott J, Bootman JL, Cronenwett LR, eds. Preventing Medication Errors: Quality Chasm Series. Committee on Identifying and Preventing Medication Errors. Washington, DC: The National Academies Press; 2007.
  4. Needleman J, Buerhaus P, Pankratz VS, Leibson CL, Stevens SR, Harris M. Nurse staffing and inpatient hospital mortality. N Engl J Med. 2011;364(11):1037-1045. doi:10.1056/NEJMsa1001025.
  5. Cimiotti JP, Aiken LH, Sloane DM, Wu ES. Nurse staffing, burnout, and healthcare-associated infection. Am J Infect Control. 2012;40(6):486-490. doi:10.1016/j.ajic.2012.02.029.
  6. Tubbs-Cooley HL, Cimiotti JP, Silber JH, Sloane DM, Aiken LH. An observational study of nurse staffing ratios and hospital readmission among children admitted for common conditions. BMJ Qual Saf. 2013;22(9):735-742. doi:10.1136/bmjqs-2012-001610.
  7. Leape LL, Bates DW, Cullen DJ, et al. Systems analysis of adverse drug events. ADE Prevention Study Group. JAMA. 1995;274(1):35-43.
  8. Physician Burnout: It Just Keeps Getting Worse. http://www.medscape.com/viewarticle/838437. Accessed April 11, 2017.
  9. American Nurses Association. 2011 ANA Health & Safety Survey: The Nurse Work Environment. http://nursingworld.org/FunctionalMenuCategories/MediaResources/MediaBackgrounders/The-Nurse-Work-Environment-2011-Health-Safety-Survey.pdf. Accessed April 11, 2017.
  10. What Is The Fourth Industrial Revolution? https://www.forbes.com/sites/jacobmorgan/2016/02/19/what-is-the-4th-industrial-revolution/#3f1e5ca9f392. Accessed April 11, 2017.
  11. Evans, D. How the Next Evolution of the Internet Is Changing Everything Cisco Internet Business Solutions Group (IBSG), 2011. http://www.cisco.com/c/dam/en_us/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf. Accessed April 11, 2017.