SALES & MARKETING
inquiries@potentiaco.com
CUSTOMER SUPPORT
support@potentiaco.com
Alex Stephens has nearly a decade of experience in software engineering with a focus on web application development. He started at Potentia Analytics as a programmer on a physician staffing system for emergency departments. While he still serves as team lead for that project, he also directs the software engineering team, constantly seeking to improve product quality and reliability through better development processes. He has a Bachelor of Science degree in Computer Science from Southern Illinois University.
Dr. Charles Foell III is the Director of Project Management at Potentia Analytics, a role that is a natural progression of his experience managing a variety of highly technical B2B and B2C projects in web, mobile, machine learning, and data science. In his aggregate work as a project manager, Charles has led dozens of developers, facilitated B2B contracts, led client interactions, translated business requirements into technical specifications, developed and directed strategic planning, and overseen the coordination of personnel and resources.
Charles also brings a depth and breadth of technical experience to Potentia, having previously held roles including Principal Data Scientist, Computer Vision Engineer, and Technical Lead. Charles is an inventor with patents in recommender systems and computer vision, and has a strong quantitative background arising from peer-reviewed, published research in theoretical and experimental physics and years of experience as a software developer and data scientist.
Tavia Vasicek is an insightful brand and marketing strategist who partners with progressive companies to showcase and grow their corporate brands and solutions. She came to Potentia Analytics in 2017 to support the growth and development of its compelling, cutting-edge solutions for some of today’s biggest issues in healthcare operations.
Tavia’s specialties include brand strategy and development, marketing analytics and planning, account management and development, lead and content generation, cross-functional team leadership, website and social media management, grant and proposal writing, public speaking, and event planning. She holds a B.A. in Communications from Southern Illinois University in Carbondale with a specialty in public relations/marketing. After spending nearly three decades in business, product, and human development, Tavia believes that each entity has a specific niche market where human connection plays a key role in joining the two. She strives to empower organizations and thought leaders to differentiate themselves and drive measurable results through targeted content and outreach.
Passion, energy, drive, and a continuing effort to listen closely are all part of what makes Jerry Cardwell who he is today. From his 35 years of business negotiations and courtroom deliberations, Jerry has developed persuasive presentation and sales skills along with effective research and writing abilities. He is committed to excellence and strategic planning, economically serving his clients’ best interests with an empathetic heart and a focus on finding an amicable resolution in an expeditious manner.
Jerry’s business acumen was acquired through the successful ownership and operation of a boutique nine (9) person law firm, as well as technology businesses. Equally important, however, was his time spent traveling for six and a half (6 ½) years throughout North, Central, and South America. Such travels allowed him to experience and acquire the wealth of knowledge found in diverse cultures and social backgrounds.
Jerry has experienced much in the way of success and failure, which aids him in empathizing with the struggles of all entrepreneurs. He views his work as one of personal commitment and service towards those with whom he works and takes pride in his part in producing an end result that goes beyond the expectations of all involved.
Direct entrepreneurial experience: In addition to his transactional/litigation legal practice and varied business clientele, Jerry has participated in the startup, product development and sales, as well as part ownership and management of three (3) technology-based companies offering services in the Medical, Real Estate and Fast-Food industries. In addition, he has valued mid-market size companies as a broker for mergers and acquisitions with financial reviews and strategies for maximizing value.
Education: Colorado State University, Ft. Collins, CO: BA in Technical Writing and Audio-Visual Script Writing, with a minor in Macro Biology. Drake University Law School, Des Moines, IA: Juris Doctor (JD).
Jerry is dedicated to a strong work ethic with an even stronger commitment to his wife and two (2) children. Away from work, Jerry enjoys his time as a pilot, avid motorcyclist, horseman, skier/snowboarder, and snowmobiler, with a passion for racecars. He is also a voracious reader who is dedicated to continuous improvement in his business and personal life. Jerry attempts to balance his life with a strong spiritual commitment and service to friends and community. His favorite quote that sums up his attitude toward struggles is “Never, never, never give in except to convictions of honour and good sense.” (Sir Winston Churchill)
Kirk Jensen has spent over 25 years in Emergency Medicine management and clinical care. Board-certified in Emergency Medicine, he has served as medical director for several emergency departments. Dr. Jensen is President and CEO of Healthcare Management Strategies and formerly Chief Innovation Officer for Envision Healthcare and the Innovation Group. Originally from the Chicago area, Dr. Jensen began his career in Emergency Department management in Los Angeles, building a physician group focused on the special needs of the disadvantaged urban patient population. He worked with the Governor and the Health Department to maintain a healthcare safety net for the city of Los Angeles.
In 1990 his clinical and management career transitioned to North Carolina and the formation of Southeastern Acute Care Specialists, providing emergency physician services for two hospitals seeing 90K visits annually. He served as Medical Director and Chairman of the Emergency Department (ED), leading both hospitals to national benchmark standards in ED operations and efficiency. He implemented procedures that achieved national recognition for Nash General Hospital as a “Best Practice Clinical Site” by the Emergency Nurses Association (1999). In addition, Dr. Jensen implemented crew resource management training at both hospitals, focusing on team performance, safety, and human error management. He is a certified MedTeams instructor.
Since 1998 Dr. Jensen has been on the faculty of the Institute for Healthcare Improvement (IHI), focusing on improving patient flow, quality enhancement, and patient satisfaction. He has coached over 300 emergency departments through the process of improving operations and clinical services. He chaired and served as faculty for over a dozen IHI collaboratives, including Operational and Clinical Improvement in the Emergency Department and Improving Flow Through the Acute Care Setting, and for years led the innovative seminars Cracking the Code to Hospital-wide Patient Flow and Perfecting Emergency Department Operations. He was on the expert panel and site examination team for Urgent Matters, a Robert Wood Johnson Foundation initiative, and was a Medical Director for the Studer Group. Dr. Jensen is co-author of the 2008 Hamilton Award-winning book Leadership for Smooth Patient Flow. He is also co-author of Hardwiring Flow and The Hospital Executive’s Guide to Emergency Department Management.
Dr. Jensen teaches at the American College of Emergency Physicians (ACEP) Directors Academy, leading ED directors through process and operational improvements, as well as patient safety activities. He has been honored as the American College of Emergency Physicians (ACEP) Speaker of the Year. Dr. Jensen holds Bachelor and Medical Degrees from the University of Illinois. He interned in Internal Medicine at the University of Hawaii and completed his residency in Emergency Medicine at the University of Chicago. Dr. Jensen earned an MBA at the University of Tennessee.
Dr. Shahram Rahimi is currently the Department Head and Professor at the Department of Computer Science and Engineering at Mississippi State University. Prior to that, he was Professor and Chair at the Department of Computer Science at Southern Illinois University (SIU) for six years. Dr. Rahimi has extensive background in both academia and industry. He is a recognized leader in artificial intelligence with over 190 peer-reviewed publications and several patents or pending patents in this area. He has served as the Editor-in-Chief of the International Journal of Computational Intelligence and sits on the editorial board of many other journals. He is also an integral part of the IEEE’s Committee for New Standards.
Shahram has organized over 15 conferences on Artificial Intelligence and multi-agent systems over the past decade and has served as Principal Investigator for several federally funded and industry-funded research projects. Shahram has been contributing to advancements in AI and Computational Intelligence over the past 20 years.
Norman Carver is a distinguished technology research scientist specializing in multi-agent systems (MAS), distributed problem solving (DPS), sensor interpretation, machine learning, multi-agent learning, and architectures for knowledge-intensive control of AI systems. As a research scientist with numerous publications and two National Science Foundation grants, much of Dr. Carver’s theoretical work has been based on the use of Decentralized Markov Decision Processes (DEC-MDPs) for modeling MAS problems and producing minimum-communication coordination strategies. Dr. Carver is the Chief Technology Officer at Potentia Analytics and a professor in the Department of Computer Science at Southern Illinois University in Carbondale, with a Ph.D. in Computer Science from the University of Massachusetts.
Bonnie Kucharski is a dynamic operations leader with over 20 years of experience in the technology sector. She comes to Potentia Analytics from Liaison (OpenText), a leading provider of cloud-based enterprise application integration and data management solutions with a large footprint in healthcare data translation and harmonization. For nearly a decade she grew and led the technical integrations team responsible for the translation and architecture of data exchanged between companies, thereby optimizing interoperability.
Bonnie has held strategic positions within the technology sector at organizations including 3Com Corporation (formerly US Robotics) and Blackboard (formerly SchoolCenter). She has a Master of Arts degree in Organizational Management from Ashford University and a Bachelor of Arts degree in Business Management from DePaul University. She is also a certified Project Management Professional (PMP).
Amb. Michael Gfoeller (ret.) is an independent consultant on international politics and security matters.
He served for 26 years (1984 to 2010) as a US Foreign Service Officer. His career included service in Riyadh, Saudi Arabia; Manama, Bahrain; Iraq; Moscow, Russia; Yerevan, Armenia; Chisinau, Moldova; Warsaw, Poland; and Brussels, Belgium. From 2004 to 2008, he served as Deputy Chief of Mission and Chargé d’Affaires at the US Embassy in Riyadh, Saudi Arabia.
He served for two years (2008-2010) as the Senior Political Advisor to General David Petraeus, then Commander, US Central Command. He retired from the State Department with the rank of Ambassador. His foreign languages include Arabic, Russian, French, and German.
Amb. Gfoeller is a member of the Council on Foreign Relations, the Cosmos Club, the Union League Club of New York and a Board Member of Potentia Analytics. He is also a Founding Partner of Arabia Analytica, LLC.
Dr. Shivji is the Founder of 123Dentist, the 2nd largest DSO in Canada in terms of the number of practices. He is responsible for providing overall strategic direction and leadership to his Executive Team and is highly respected in the Canadian dental industry. Dr. Shivji has decades of clinical experience and many successful partnerships with practitioners. He has a proven track record in deal making and managing a broad network of dental practices. Dr. Shivji graduated from the University of British Columbia Dental School in 1993 with a Doctor of Dental Medicine degree.
Dr. Iscovich has served as Chief Executive Officer of the Qualitas Group of Envision Healthcare, focused on developing the healthcare workforce. Through acquisition of Vista Staffing Solutions, the merger of Qualitas Staffing and telemedicine, Dr. Iscovich created viable solutions for today’s staffing issues.
Dr. Iscovich also served as the Chief Executive Officer of EmCare’s largest division, extending from Missouri to Hawaii. He has broad experience in physician management, healthcare finance, healthcare technology, organizational development, and mergers & acquisitions.
Dr. Iscovich currently serves as the Board Chair of Direct Relief, recognized as one of the top charities in the world. Direct Relief provides billions of dollars in global humanitarian relief partnering with pharmaceutical and healthcare companies throughout the United States and the world.
Dr. Iscovich is a member of the board of directors of Office Works, a staffing company, and board chair of Potentia Analytics, a startup company providing healthcare, political, and financial analytics. He also served as an advisor to InTouch Health, a robotics and telemedicine company, and was the CEO and founder of First Medical Group.
Dr. Iscovich currently serves on the Investment Committees of Envision Healthcare and Cottage Health System. He has been a past member of the Board of Directors for Cottage Health and served as Chair of the Audit/Compliance Committee. He has also served on the Catholic Healthcare West Central Coast Board of Directors (now Dignity Health), president of the St. Francis Hospital of Santa Barbara Foundation, as State of California EMS Commissioner, assistant clinical professor at the Keck USC School of Medicine, and on the American Heart Association National Faculty.
Dr. Iscovich received his medical degree from the University of California at San Francisco and a bachelor’s degree in philosophy and Chemistry (summa cum laude) at the University of Puget Sound. He lives with his wife Lisa in Santa Barbara, California.
Bill is a seasoned investment professional with over 40 years of experience in the global financial industry. He serves as an Independent Board Director and/or Senior Advisor to a range of organizations in both the not-for-profit and for-profit arenas. His Wall Street career began in 1973 at Donaldson, Lufkin and Jenrette. He has a long history of involvement within the hedge fund community, which began in 1983 when he met Julian H. Robertson Jr. of Tiger Management, and he has been investing in hedge funds ever since. He has lectured at the University of Virginia, Yale University, Harvard University, and IE University in Madrid, Spain.
Bill serves on the board of the Jamestown Foundation, which is widely considered one of the best resources on terrorism and Russia. Bill was a U.S. Army Infantry Lieutenant in Vietnam and spoke Vietnamese. He is Chairman of the Governors Council of the Cerebral Palsy Alliance Research Foundation in NYC and has a long history of involvement in causes related to cerebral palsy. He is an “Honorary Angel” of 100 Women in Hedge Funds. He is a Trustee of the Episcopal Academy, founded in 1785, in Newtown Square, Pennsylvania. Bill is also a Board Member of Potentia Analytics, a software company, and a Founding Partner of Arabia Analytica, LLC. He is on the Board of Rhymella, a children’s book company.
Dr. Sean Bozorgzad operates at the nexus of computer science and healthcare. He is a practicing emergency room physician with a passion for software and information technology. Sean is currently the Chief of Medicine and the Emergency Department Medical Director at PeaceHealth United General Medical Center in Sedro-Woolley, Washington, as well as an adjunct faculty member in the Department of Computer Science at Southern Illinois University.
For over ten years, Sean has helped create some of the most innovative software solutions for optimizing emergency medicine staffing and logistics. He is also passionate about education and routinely participates in international educational trips as a way of giving back to the world community. He received a Bachelor of Science in Genetics from the University of South Florida and a Doctor of Medicine degree from the University of British Columbia.
Mr. Berardino Baratta has been involved with Potentia Analytics since 2015 working initially as a consultant before joining in 2017 as CEO. Previously, he was General Manager of the Multimedia Applications Division for Freescale Semiconductor where he turned the business around, achieving profitability and sustained growth with major design wins at Microsoft, Ford, Amazon, Sony, and Logitech.
Prior to this, he led Strategy, Marketing and Business Development for the $2B Wireless Mobile and Systems Group at Freescale. He began his career with Metrowerks Corporation, a leading provider of software development tools, where he led engineering through the company’s growth from startup to public corporation and through its acquisition by Motorola Corporation. Mr. Baratta co-founded Specialized Equine Services, a 501(c)(3) corporation focused on using horses to help children, adults, and veterans with disabilities. He received his Bachelor of Mechanical Engineering (Honours) degree from McGill University in Montreal, Canada.
What Hospitals Should Consider When Choosing AI Tools
AI and machine learning are augmentative tools, size matters among data sets, real-world applicability is a must, and tools must be validated, experts say.
By Bill Siwicki
October 13, 2017
11:25 AM
Some healthcare organizations are turning to artificial intelligence and machine learning because of the enhancements these advanced technologies can make to patient care, operations and security. But assessing the promises of the technologies can be difficult and time-consuming unless you’re an expert.
Two such experts weigh in with insights hospitals should understand when both planning and purchasing AI tools.
Raj Tiwari is a chief architect at Health Fidelity, which uses natural language processing technology and statistical inference engines, mixed with analytics, to identify and correct compliance risks, and Brent Vaughan is the CEO of Cognoa, a company that develops AI tools for diagnosing medical conditions.
Their advice: Know that AI and machine learning are augmentative tools, understand that size matters among data sets, real-world applicability is a must, and the tools must be trained and validated.
To draw a baseline: at this point in time, AI is more akin to augmented intelligence than artificial intelligence, and as far as machine learning is concerned, hospitals should think about it as a supplement to human expertise, experience, and decision-making.
“AI is a tool that enhances our capability, allowing humans to do more than what we could on our own,” Tiwari added. “It’s designed to augment human insight, not replace it. For example, a doctor can use AI to access the distilled expertise of hundreds of clinicians for the best possible course of action. This is far more than he or she could ever do by getting a second or third opinion.”
Hospitals need to analyze AI recommendations carefully. A lot of the buzz around AI and machine learning comes from the creators of AI tools. That’s understandable, because this group is focused on what AI can do to improve healthcare and other realms.
“People who implement and deploy real-world solutions based on AI need to ask big-picture questions,” Tiwari said. “Specifically, how does it assist the end user? AI should be treated as one of the many tools at the disposal of the user, not the definitive solution.”
Healthcare organizations need to make sure the team that developed their AI tools has a deep enough understanding of the relevant industry, Cognoa’s Vaughan said.
“Many people in the machine learning and AI world, especially consultants, feel that great AI can be developed without requiring deep domain knowledge – they will say that their AI solution is ‘domain agnostic,’” Vaughan said. “Many would not agree – and in healthcare, this can particularly be untrue.”
Healthcare data sets, in fact, are often much smaller than those in other consumer and business applications. Unlike AI tools that serve up ads or pick one’s next movie based on tens of millions of data points, healthcare AI tools often rely on datasets orders of magnitude smaller. They therefore require that AI developers have deeper industry knowledge and understanding of the data, because coding mistakes and data misinterpretation are amplified in smaller data sets.
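To make the amplification point concrete, here is a minimal sketch with hypothetical numbers (the function, record counts, and error counts are all illustrative, not from the article): the same absolute number of miscoded records barely moves the estimated event rate in a web-scale dataset, but visibly distorts it in a small clinical one.

```python
# Illustrative sketch: a fixed number of coding mistakes distorts a
# small dataset far more than a large one.

def estimated_rate(true_positives, n, mislabeled):
    """Observed event rate after `mislabeled` records are wrongly coded positive."""
    return (true_positives + mislabeled) / n

true_rate = 0.10   # actual event rate in both populations
errors = 50        # same absolute number of coding mistakes in each dataset

# Web-scale dataset: 10 million records
big_n = 10_000_000
big_est = estimated_rate(int(true_rate * big_n), big_n, errors)

# Clinical dataset: 1,000 records
small_n = 1_000
small_est = estimated_rate(int(true_rate * small_n), small_n, errors)

print(f"large dataset estimate: {big_est:.4f}")   # 0.1000 -- error is invisible
print(f"small dataset estimate: {small_est:.4f}")  # 0.1500 -- rate is off by 50%
```

The arithmetic is trivial, but it captures why the article's experts insist on domain knowledge when data is scarce: with only a thousand records there is no statistical slack to absorb mistakes.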
Real-world applicability is a must. One of the biggest challenges to machine learning adoption across the healthcare industry is scalability, Tiwari said.
“An algorithm may work flawlessly in the controlled academic or limited clinical setting, but translating that to the real world can introduce any number of complications,” he said. “For example, if the tool is trained by using data from a research hospital, it may not function well in a regular hospital where many patients have incomplete medical records.”
Patients’ records may have critical pieces of data missing, and the tool needs to be able to account for that. Data cleanliness and processing speed can be hurdles outside the neat environment of research applications.
Healthcare organizations also need to make sure their AI tools were trained and validated with representative populations, Vaughan said.
“Since the training and validation data sets often are much smaller in healthcare, the differences between populations can become exacerbated,” he explained. “For example, primary and secondary or tertiary care settings can see dramatically different incident rates for different events. An AI tool that is good at predicting a particular outcome in one setting might have a much higher error rate in the other setting.”
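Vaughan's point about setting-dependent error rates follows directly from Bayes' rule. As a hedged sketch (the 95% sensitivity/specificity figures and the prevalence values are assumptions chosen for illustration, not numbers from the article), a tool with identical test characteristics can produce mostly false alarms in a low-prevalence primary care setting while being quite reliable in a high-prevalence tertiary one:

```python
# Sketch: the same AI tool, with fixed sensitivity and specificity,
# has a very different positive predictive value (PPV) depending on
# how common the condition is in the population it is applied to.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(condition present | tool flags positive)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.95, 0.95  # assumed tool performance, held constant

# Primary care: condition is rare (1% prevalence)
ppv_primary = positive_predictive_value(sens, spec, 0.01)

# Tertiary referral center: condition is common (30% prevalence)
ppv_tertiary = positive_predictive_value(sens, spec, 0.30)

print(f"PPV in primary care:  {ppv_primary:.2f}")   # ~0.16: most flags are false alarms
print(f"PPV in tertiary care: {ppv_tertiary:.2f}")  # ~0.89: flags are usually right
```

This is why validating a tool on a population representative of where it will actually be deployed matters: benchmark performance from one care setting does not transfer automatically to another.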
And AI tools in healthcare must help meet security and compliance commitments, Tiwari said.
“As we build and leverage machine learning models, software vendors and organizations that implement them must be cognizant of data compliance and audit requirements,” Tiwari said. “These include having appropriate usage agreements in place for the data being analyzed.”
Having adequate permissions in place goes without saying; commitments to patient data privacy and security are a must. In certain cases, machine learning systems can inadvertently leak private information, Tiwari explained.
Such occurrences could be disastrous and significantly hinder further adoption of AI and machine learning out of fear.
10 Ways Artificial Intelligence Could Make A Better Doctor
AI for a more efficient healthcare
Automation through AI, robotics, or 3D printing will make healthcare more efficient and more sustainable. These new digital technologies will improve healthcare processes, resulting in earlier and more efficient treatment of patients, and will eventually shift the focus in medicine from treatment to prevention. Moreover, medical professionals will get the chance to move from repetitive, monotonous tasks to challenging, creative assignments.
AI certainly has more revolutionary potential than simply optimizing processes: it can mine medical records or medical images to surface previously unknown implications or signals, design treatment plans for cancer patients, create drugs from existing pills, or re-purpose old drugs for new uses. But imagine how much time you as a GP would have if the administrative process were taken care of by an AI-powered system; your only task would be to concentrate on the patient’s problem. Imagine how much time you could spare if healthcare chatbots and instant messaging health apps answered the simple patient questions that do not necessarily need the intervention of a medical professional.
Here are 10 ways AI could make me a better doctor.
1) Eradicate waiting time
You would think that waiting time is the exclusive “privilege” of patients and that doctors do not have a free moment during their overpacked days. However, suboptimal healthcare processes mean not only that patients sometimes wait for hours in front of doctors’ offices, but also that medical professionals lose a lot of time every day waiting for something (a patient, a lab result, etc.). An AI system that makes my schedule as efficient as possible, directing me to the next logical task, would be a jackpot.
2) Prioritize my emails
The digital tsunami is upon us. Our inboxes are full of unread messages, and it is an everyday challenge not to drown in the ocean of new letters. We deal with about 200 e-mails every single day. Even though we try to teach Gmail how to mark an email as important or to categorize messages automatically into social media updates, newsletters, and personal emails, it’s still a challenge. Imagine an AI system that could prioritize 3,000 unread emails in a second, and imagine if we could streamline digital communication completely in line with our needs, sharing and receiving information more efficiently and more accurately without too much effort.
According to a recent report in the New Scientist, half a million people have professed their love for Alexa, Amazon’s intelligent personal assistant and more than 250,000 have proposed marriage to it.
3) Find me the information we need
We think we have mastered the skill of searching for information online, using dozens of Google search operators and different kinds of search engines for different tasks, but it still takes time. What if an AI OS could answer our questions immediately by looking up the answer online?
More and more intelligent personal assistants, such as Siri on iOS or Amazon’s Alexa, are leading us into the future, and soon there will be highly capable, specialized AI-powered chatbots in the field of healthcare as well. Bots like HealthTap or Your.Md already aim to help patients find solutions to the most common symptoms through AI. Safedrugbot embodies a chat messaging service that offers assistant-like support to health professionals and doctors who need appropriate information about the use of drugs during breastfeeding.
4) Keep us up-to-date
There is too much information out there. Without an appropriate compass, we are lost in the jungle of data. It is even more important to find the most accurate, relevant and up-to-date information when it comes to such a sensitive area as healthcare. That’s why we started Webicina, which collects the latest news from the best, most reliable sources into one, easily manageable magazine.
On PubMed, there are 23 million papers. If we could read 3-4 studies in our field of interest per week, we could not finish them in a lifetime, and meanwhile millions of new studies would come out. We need an AI to process the pile of information for us and show us the most relevant papers, and we will get there soon. IBM Watson can already process a million pages in seconds. This remarkable speed has led to Watson being tried in oncology centers to see how helpful it is in making treatment decisions in cancer care.
5) Work when we don’t
We can fulfill our online tasks (emails, reading papers, searching for information) when we use our PC or laptop, and we can do most of them on a smartphone. When we don’t use any of these devices, we obviously cannot work. An AI system could work on these tasks when we don’t have any device in hand.
Imagine that you are playing tennis or doing the dishes at home when an important message comes in. With the help of an AI, you could respond to your boss without the need to touch any device: a toned-down version of the AI in the movie Her, which arranged the whole publishing process of Joaquin Phoenix’s character’s book without him having to lift a finger.
6) Help us make hard decisions rational
A doctor must face a series of hard decisions every day. The best we can do is to make those decisions as informed as possible. We can ask people whose opinion we value, but basically, that’s it. Unfortunately, you would search the world wide web in vain for certain answers.
But AI-powered algorithms could help in the future. For example, IBM Watson launched its special program for oncologists (we interviewed one of the professors working with it), which is able to provide clinicians with evidence-based treatment options. Watson for Oncology has an advanced ability to analyze the meaning and context of structured and unstructured data in clinical notes and reports that may be critical to selecting a treatment pathway. So AI is not making the decision per se, but it offers you the most rational options.
7) Help patients with urgent matters reach us
A doctor gets a lot of calls, in-person questions, emails, and even messages from social media channels on a daily basis. In this noise of information, not every urgent matter can reach you. What if an AI OS could select the crucial ones out of the mess and direct your attention to them when it’s actually needed?
Moreover, if you look at the patient side, you will see how long the route is from recognizing symptoms at home to reaching a specialist. For example, in the Hungarian county of Kaposvár, the average time from the discovery of a cancerous disease until the actual medical consultation about the treatment plan was 54 days. Since November 2015, this alarming number has been drastically reduced to 21 days with the help of special software and optimized patient management practices. Imagine, then, what earthquake-like changes AI could bring to patient management if even a simple process-management and follow-up tool could cut the waiting time by more than half!
8) Help us improve over time
People, even those who work on becoming better at their job, make the same mistakes over and over again. What if, by discussing every challenging task or decision with an AI, we could improve the quality of our work? Just look at the following:
97% of healthcare invoices in the Netherlands are digital, containing data regarding the treatment, the doctor, and the hospital, and these invoices can be easily retrieved. A local company, Zorgprisma Publiek, analyzes the invoices and uses IBM Watson in the cloud to mine the data. They can tell if a doctor, clinic, or hospital makes repeated mistakes in treating a certain type of condition, in order to help them improve and avoid unnecessary hospitalizations of patients.
9) Help us collaborate more
Sometimes we wonder how many researchers, doctors, nurses, or patients are thinking about the same issues in healthcare as we are. At those times, we imagine having an AI by our side that helps us find the most promising collaborators and invite them to work with us for a better future.
Clinical and research collaborations are crucial to finding the best solutions for arising problems; however, more often than not, it is difficult to find the most relevant partners. There are already efforts to change this. For example, in the field of clinical trials, TrialReach tries to bridge the gap between patients and researchers who are developing new drugs. If more patients have a chance to participate in trials, they might become more engaged with potential treatments or even be able to access new treatments before they become FDA-approved and freely available.
10) Do administrative work
A significant share of a doctor's average day is spent on administrative work. An AI could learn to do it properly and, in time, do it better than we can. This is the area where AI could impact healthcare the most. Repetitive, monotonous tasks without the slightest need for creativity could and should be done by artificial intelligence. There are already great examples leaning toward this trend.
IBM launched another algorithm called Medical Sieve. It is an ambitious long-term exploratory project to build a next generation “cognitive assistant” with analytical, reasoning capabilities and a wide range of clinical knowledge. Medical Sieve is qualified to assist in clinical decision making in radiology and cardiology.
Many fear that algorithms and artificial intelligence will take the jobs of medical professionals in the future. We highly doubt it. Instead of replacing doctors, AI will augment them and make them better at their jobs. Without the day-to-day treadmill of administrative and repetitive tasks, the medical community could again turn to its most important task with full attention: healing.
—
(Source: https://goo.gl/ZwVjvu)
Machine Learning in Healthcare: Defining the Most Common Terms
The concept of machine learning has quickly become very attractive to healthcare organizations, but much of the necessary vocabulary is not yet well understood.
by Jennifer Bresnick
After a slow and unsteady beginning at the start of the decade, the healthcare industry is finally becoming somewhat more comfortable with the idea that learning to live with big data is the only way to see financial and clinical success in the future.
Electronic health records are now commonplace (if not universally beloved), and even the most reticent, paper-loving organizations are now cautiously embracing the idea that all that digital data could actually be good for something.
For stakeholders on the other end of the spectrum, charging forward on the leading edge of the health IT revolution, the benefits of big data analytics are already clear.
Predictive analytics, real-time clinical decision support, precision medicine, and proactive population health management are finally within striking distance, driven largely by rapid advances in machine learning.
But while many in the healthcare industry are sure that their technological goals are hovering somewhere just over the horizon, plotting a course to get there can be a difficult proposition – especially when the landscape is clouded by confusing vocabulary, technical terminology, and as-yet-undeliverable promises of truly automated insights.
“Artificial intelligence” is a buzzword saturated with hope, excitement, and visions of sci-fi blockbuster movies, but it isn’t the same thing as “machine learning.”
Machine learning is slightly different from deep learning, and neither of them matches up exactly with cognitive computing or semantic analysis.
As the healthcare industry moves quickly and irreversibly into the era of big data analytics, it is important for organizations looking to purchase advanced health IT tools to keep the swirling vocabulary straight so they understand exactly what they’re getting and how they can – and can’t – use it to improve the quality of patient care.
ARTIFICIAL INTELLIGENCE
Artificial intelligence is the ability of a computer to complete tasks in a manner typically associated with a rational human being.
While Merriam-Webster’s definition uses the word “imitate” to describe the behavior of an artificial intelligence agent, the Encyclopedia Britannica defines AI as a program “endowed with the intellectual processes characteristic of humans,” which indicates a slightly different view of the attributes of an AI agent.
Whether AI is simply imitating human behavior or infused with the ability to generate original answers to complex cognitive problems via some indefinable spark, true artificial intelligence is widely regarded as a program or algorithm that can beat the famous Turing test.
Developed in 1950 by computer science pioneer Alan Turing, the Turing test states that an artificial intelligence must be able to exhibit intelligent behavior that is indistinguishable from that of a human.
One classic interpretation of Turing’s work is that a human observer of both a fellow human and a machine would engage both parties in an attempt to distinguish between the algorithm and the flesh-and-blood participant.
If the computer could fool the observer into thinking its actions are equivalent to and indistinguishable from the human participant, it would pass the test. Thus far, there are no examples of artificial intelligence that have truly done so.
Artificial intelligence also has a second definition. It is the branch of computer science associated with studying and developing the technologies that would allow a computer to pass (or surpass) the Turing test.
So when a clinical decision support tool says it “uses artificial intelligence” to power its analytics, consumers should be aware that “using principles of computer science associated with the development of AI” is not really the same thing as offering a fully independent and rational diagnosis-bot.
MACHINE LEARNING
Machine learning and artificial intelligence are often used interchangeably, but conflating the two is incorrect. Machine learning is one small part of the study of artificial intelligence, and refers to a specific sub-section of computer science related to constructing algorithms that can make accurate predictions about future outcomes.
Machine learning accomplishes this through pattern recognition, rule-based logic, and reinforcement techniques that help algorithms understand how to strengthen “good” outcomes and eliminate “bad” ones.
Machine learning can be supervised or unsupervised. In supervised learning, algorithms are presented with “training data” that contains examples with their desired conclusions. For healthcare, this may include samples of pathology slides that contain cancerous cells as well as slides that do not.
The computer is trained to recognize what indicates an image of cancerous tissue so that it can distinguish between healthy and non-healthy images in the future.
When the computer correctly flags a cancerous image, that positive result is reinforced by the trainer and the data is fed back into the model, eventually leading to more and more precise identification of increasingly complex samples.
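The training loop described above can be sketched with a deliberately tiny model. This is a minimal illustration of supervised learning, not any specific clinical product: a nearest-centroid classifier averages the labeled examples of each class into a prototype and assigns new samples to the closest one. The feature values and class names are invented for illustration.

```python
# Minimal sketch of supervised learning: a nearest-centroid classifier.
# Each training sample (a toy feature vector) is paired with its desired
# label, and the "model" is simply one averaged prototype per class.

def train(samples, labels):
    """Average the feature vectors of each class into a centroid."""
    groups = {}
    for x, y in zip(samples, labels):
        groups.setdefault(y, []).append(x)
    return {y: [sum(col) / len(xs) for col in zip(*xs)]
            for y, xs in groups.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is closest (squared distance)."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist(centroids[y]))

# Labeled training examples: feature vectors with known conclusions.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = ["abnormal", "abnormal", "healthy", "healthy"]
model = train(X, y)
print(predict(model, [0.85, 0.75]))  # a new sample near the "abnormal" cluster
```

Real systems replace the centroid with far richer models, but the shape of the process is the same: labeled examples in, a decision rule out, and more labeled data yields sharper boundaries.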
Unsupervised learning does not typically leverage labeled training data. Instead, the algorithm is tasked with identifying patterns in data sets on its own by defining signals and potential abnormalities based on the frequency or clustering of certain data.
Unsupervised learning may have applications in the security realm, where humans do not know exactly what form unauthorized access will take. If the computer understands what routine and authorized access typically looks like, it may be able to quickly identify a breach that does not meet its standard parameters.
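The breach-detection idea above can be sketched as a toy anomaly detector. This is an illustrative example only, with invented numbers: the model is given unlabeled "routine" events, summarizes how they cluster (center and spread), and flags anything that falls far outside that cluster.

```python
# Minimal sketch of unsupervised learning as anomaly detection: no labels
# are provided; the model learns what "routine" looks like from the data
# itself and flags events far from that pattern.

def fit(events):
    """Learn the center and average spread of the unlabeled data."""
    n = len(events)
    mean = [sum(col) / n for col in zip(*events)]
    spread = sum(sum((a - m) ** 2 for a, m in zip(e, mean))
                 for e in events) / n
    return mean, spread

def is_anomaly(model, event, k=4.0):
    """Flag events whose squared distance from the center exceeds k * spread."""
    mean, spread = model
    d2 = sum((a - m) ** 2 for a, m in zip(event, mean))
    return d2 > k * spread

# Routine access events (e.g. login hour and bytes moved, rescaled to 0-1).
routine = [[0.30, 0.25], [0.35, 0.30], [0.28, 0.22], [0.33, 0.27]]
model = fit(routine)
print(is_anomaly(model, [0.31, 0.26]))  # near the routine cluster
print(is_anomaly(model, [0.95, 0.90]))  # far outside routine behavior
```

Nothing here was ever labeled "breach"; the unusual event stands out purely because it does not fit the learned pattern of normal activity.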
DEEP LEARNING
Deep learning is a subset of machine learning that deals with artificial neural networks (ANNs), which are algorithms structured to mimic biological brains with neurons and synapses.
ANNs are often constructed in layers, each of which performs a slightly different function that contributes to the end result. Deep learning is the study of how these layers interact and the practice of applying these principles to data.
“Deep learning is in the intersections among the research areas of neural networks, artificial intelligence, graphical modeling, optimization, pattern recognition, and signal processing,” wrote researchers Li Deng and Dong Yu in Deep Learning: Methods and Applications.
Just like in the broader field of machine learning, deep learning algorithms can be supervised, unsupervised, or somewhere in between. Natural language processing, speech and audio processing, and translation services have particularly benefitted from this multi-layer approach to processing information.
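The layered structure described above can be made concrete with a toy forward pass. This is a minimal sketch with hand-picked, illustrative weights (a real network would learn them from data): each layer computes weighted sums of its inputs and passes them through a nonlinearity, and the layers simply compose.

```python
# Minimal sketch of a layered neural network's forward pass.
import math

def layer(inputs, weights, biases):
    """One layer: weighted sums followed by a sigmoid 'neuron' activation."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, network):
    """Feed the input through each layer in turn."""
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# A two-layer network: 2 inputs -> 3 hidden units -> 1 output.
network = [
    ([[0.5, -0.4], [0.3, 0.8], [-0.6, 0.2]], [0.1, -0.2, 0.0]),
    ([[1.0, -1.0, 0.5]], [0.0]),
]
print(forward([0.7, 0.1], network))
```

"Deep" simply means many such layers stacked; training consists of adjusting the weights and biases so the final output matches the desired one.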
COGNITIVE COMPUTING
Cognitive computing is often used interchangeably with machine learning and artificial intelligence in common marketing jargon. It is widely considered to be a term coined by IBM and used mainly to describe the company’s approach to the science of artificial intelligence, especially in relation to IBM Watson.
However, in 2014, the Cognitive Computing Consortium convened a group of stakeholders including Microsoft, Google, SAS, and Oracle to develop a working definition of cognitive computing across multiple industries:
To respond to the fluid nature of users’ understanding of their problems, the cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. To do this, systems often need to weigh conflicting evidence and suggest an answer that is “best” rather than “right”. They provide machine-aided serendipity by wading through massive collections of diverse information to find patterns and then apply those patterns to respond to the needs of the moment. Their output may be prescriptive, suggestive, instructive, or simply entertaining.
Cognitive computing systems must be able to learn and adapt as inputs change, interact organically with users, “remember” previous interactions to help define problems, and understand contextual elements to deliver the best possible answer based on available information, the Consortium added.
This view of cognitive computing suggests a tool that lies somewhere below the benchmark for artificial intelligence. Cognitive computing systems do not necessarily aspire to imitate intelligent human behavior, but instead to supplement human decision-making power by identifying potentially useful insights with a high degree of certainty.
Clinical decision support naturally comes to mind when considering this definition – and that is exactly where IBM (and its eager competitors) have focused their attention.
NATURAL LANGUAGE PROCESSING
Natural language processing (NLP) forms the foundation for many cognitive computing exercises. The ingestion of source material, such as medical literature, clinical notes, or audio dictation records, requires a computer to understand what is being written, spoken, or otherwise communicated.
Speech recognition tools are already in widespread use among healthcare providers frustrated by the burdens of EHR data entry, and text-based NLP programs are starting to find applications in the clinical realm, as well.
NLP often starts with optical character recognition (OCR) technology that can turn static text, such as a PDF image of a lab report or a scan of a handwritten clinical note, into computable data.
Once the data is in a workable format, the algorithm parses the meaning of each element to complete a task such as translating into a different language, querying a database, summarizing information, or supplying a response to a conversation partner.
Natural language processing can be enhanced by applying deep learning techniques to understand concepts with multiple or unclear meanings, as are common in everyday speech and writing.
In the healthcare field, where acronyms and abbreviations are very common, accurately parsing through this “incomplete” data can be extremely challenging. Other data integrity and governance concerns, as well as the large volume of unstructured data, can also raise issues when attempting to employ NLP to extract meaning from big data.
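The abbreviation problem can be illustrated with a deliberately crude sketch. The abbreviation table and the one context rule below are hypothetical examples, not any real clinical vocabulary; production NLP systems use statistical models over much richer context.

```python
# Toy sketch of one small NLP step on clinical text: expanding ambiguous
# abbreviations using surrounding context. "pt" can mean "patient" or
# "physical therapy"; a crude rule picks between them.

ABBREVIATIONS = {
    "pt": {"default": "patient", "physical": "physical therapy"},
    "hr": {"default": "heart rate"},
}

def expand(text):
    """Replace known abbreviations, disambiguating 'pt' by the prior word."""
    words = text.lower().split()
    out = []
    for i, w in enumerate(words):
        if w in ABBREVIATIONS:
            senses = ABBREVIATIONS[w]
            # crude disambiguation: "referred to pt" suggests physical therapy
            if w == "pt" and i > 0 and words[i - 1] == "to":
                out.append(senses.get("physical", senses["default"]))
            else:
                out.append(senses["default"])
        else:
            out.append(w)
    return " ".join(out)

print(expand("Pt referred to PT for mobility; HR stable"))
```

Even this toy shows why the problem is hard: the same two letters demand different expansions in the same sentence, and only context separates them.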
SEMANTIC COMPUTING
Semantic computing is the study of understanding how different elements of data relate to one another and using these relationships to draw conclusions about the meaning, content, and structure of data sets. It is a key component of natural language processing that draws on elements of both computer science and linguistics.
“Semantic computing is a technology to compose information content (including software) based on meaning and vocabulary shared by people and computers and thereby to design and operate information systems (i.e., artificial computing systems),” wrote Lei Wang and Shiwen Yu from Peking University.
The researchers noted that the Google Translate service is heavily reliant on semantic computing to distinguish between similar meanings of words, especially between languages that may use one word or symbol for multiple concepts.
In 2009, the Institute for Semantic Computing used the following definition:
[Semantic computing] brings together those disciplines concerned with connecting the (often vaguely formulated) intentions of humans with computational content. This connection can go both ways: retrieving, using and manipulating existing content according to user’s goals (‘do what the user means’); and creating, rearranging, and managing content that matches the author’s intentions (‘do what the author means’).
Currently in healthcare, however, the term is often used in relation to the concept of data lakes, or large and relatively unstructured collections of data sets that can be mixed and matched to generate new insights.
Semantic computing, or graph computing, allows healthcare organizations to ingest data once, in its native format, and then define schemas for the relationships between those data sets on the fly.
Instead of locking an organization’s data into an architecture that only allows the answer to one question, semantic data lakes can mix and match data again and again, uncovering new associations between seemingly unrelated information.
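The mix-and-match idea can be sketched with a tiny triple store. All entities and relations below are hypothetical examples: facts are ingested once as subject-relation-object triples, and each new question is just a new pattern matched over the same triples, with no schema redesign.

```python
# Sketch of the schema-on-read idea behind a semantic data lake:
# store facts once as triples, then answer many different questions
# by matching patterns against them.

triples = [
    ("region_a", "unemployment_rate", "high"),
    ("region_a", "avg_deductible", "high"),
    ("region_a", "ed_visits_uncompensated", "rising"),
    ("region_b", "unemployment_rate", "low"),
]

def query(pattern):
    """Match triples against a (subject, relation, object) pattern; None = wildcard."""
    s, r, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (r is None or t[1] == r)
            and (o is None or t[2] == o)]

# Question 1: everything known about region_a.
print(query(("region_a", None, None)))
# Question 2: which regions have high unemployment?
print([s for s, _, _ in query((None, "unemployment_rate", "high"))])
```

The same four facts answer both questions; a third question tomorrow needs only a new pattern, not a new database design.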
Natural language interfaces that leverage NLP techniques to query semantic databases are becoming a popular way to interact with these freeform, malleable data sets.
For population health management, medical research, and patient safety, this capability is invaluable. In the era of value-based care, organizations need to understand complex and subtle relationships between concepts such as the unemployment rate in a given region, the average insurance deductible, and the rate at which community members are visiting emergency departments to receive uncompensated care.
As a buzzword, semantic computing has been very quickly overtaken by machine learning, deep learning, and artificial intelligence. But all of these methodologies attempt to solve similar problems in more or less similar ways.
Vendors of health IT offerings that rely on advanced analytics are hoping to equip providers with greatly enhanced decision-making capabilities that augment their ability to deliver the best possible patient care.
While the field is still in the relatively early stages of its development, healthcare providers can look forward to a broad selection of big data tools that allow access to previously untapped insights about quality, outcomes, spending, and other key metrics for success.
Emergency Medicine in the Age of Aquarius
An Emergency Department (ED) is one of those things that you hate to need, and you love to hate. EDs have been much-maligned, characterized as error-prone money-wasters and “loss leaders.” Some healthcare policy-makers have targeted EDs as major contributors to healthcare costs spiraling out of control. They could not be more wrong.
A few decades back, many hospitals were staffed by local physicians who had little or no specialized training in trauma or critical care. Worse, the coverage was incomplete, requiring night and weekend shifts to be filled by resident physicians (“moonlighters”) who had not yet completed their training. The nursing and other staff also lacked specialized training and tended to be pulled in from other departments. Rural and community hospitals could not always afford technologies like CT or MRI scanners, which limited diagnostic and treatment options. Intubating a patient in a dark, narrow semi-trailer in the parking lot because that’s where the (rented) CT scanner was located was not unheard of. Not the care you’d choose if given the option.
Thankfully, much has changed. The skills and capabilities of the teams providing this complex and crucial service have grown exponentially, spilling out into even the smallest hospitals. Organizations like the American College of Emergency Physicians (www.acep.org), the American Board of Emergency Medicine (www.abem.org), the Emergency Nursing Association (www.ena.org), and the Society for Academic Emergency Medicine (www.saem.org) have promulgated evidence-based training and standards for the practice of emergency medicine that have given America an enviable resource.
We now have over 42,000 dedicated ED physicians and over 180,000 ED nurses in the United States. The majority of these providers now have specialized training and certification that uniquely qualifies them for their challenging roles. Hospitals are better-equipped and more prepared to meet the acute care needs of their communities. EMS and ambulance systems have become more organized and efficient and offer more services in the field. Furthermore, the regionalization of trauma care has assured that severely injured patients quickly reach the most appropriate care.
The ED has become center stage for diagnosis and treatment of many acute problems. EDs handle 28 percent of all US acute care visits and two-thirds of the acute care for the uninsured. The CDC reported in 2012 that one in five Americans visits the ED at least once a year. Primary care physicians are directing more patients to the ED as they can do more complex workups, provide diagnostic services not available in outpatient offices, absorb overflow, and handle unscheduled urgencies. The ED also sees most of the poor and uninsured because they turn no one away.
Calling Emergency Departments loss leaders is wrong. It is true that EDs order a lot of expensive tests. It is true that they duplicate tests when they don’t have outside results. It is true that some physicians order extra tests to protect themselves. Yet EDs still only account for 4% of the 2.6 trillion dollars we spend on health care every year. 31% goes to inpatient care, which EDs often avoid by providing outpatient workups and determining that patients don’t need to be in the hospital. It is true that 55% of ED care goes uncompensated, cutting overall hospital profits. But EDs generate up to 70% of inpatient admissions, from which hospitals make most of their money. But far more important than all of these financial considerations is the fact that EDs provide unparalleled diagnostic and treatment capabilities, 24/7, to anyone and everyone. This requires a massive effort that should make us all proud, grateful, and supportive.
Support means understanding the challenges currently facing EDs; support means being part of the solution, instead of complaining every time they fall short of our expectations. We love to let our Facebook friends know how we spent 5 hours in the ED. But do we ask why? Between 2001 and 2008, use of EDs increased at twice the rate of population growth while hospitals closed nearly 200,000 beds. When you increase inflow and decrease outflow simultaneously, you get people sitting around. EDs have responded by adding beds and providers. But this is a gradual and expensive solution. What we need is an affordable way to see more patients in less time, without decreasing quality of care. The industry terms are “throughput” and “patient processing,” but I don’t like these terms because they make us sound like cheese. A better term is “flow.”
How can we improve patient flow in the ED? Individual hospitals typically spend millions of dollars to answer this question. They hire consultants to analyze where patients pile up, where staff are overtaxed and where they sit idle, where supplies run out and where they stack up, and other operational details. Recommendations for staffing, layout, and operational changes are obtained, but implementing them is costly and difficult. Worse, the consultations need to be repeated every year or two because so many factors change.
Not long ago, better ways existed only in our dreams. Why not use computers to figure this stuff out? Artificial intelligence (AI) is a branch of computer science that solves real-world logistical problems by teaching a computer to “think” like the greatest problem solvers on the planet: us. From humble beginnings at Dartmouth and MIT in the 1950s, AI science and technology have grown remarkably and now pervade our world to the point where we take them for granted (like asking your phone to find you a local Italian restaurant). AI has had many successes in health care, such as expert systems that render diagnoses based on signs, symptoms, and interactive interviews. “Decision support” is a less threatening term for AI embedded in medical systems; such systems can make suggestions and deliver warnings in real time, at the point of care. Using natural language processing of the electronic medical record to understand the context, then applying rules and inference, decision support systems can recommend alternative treatments, provide warnings about drug interactions, or alert users to a departure from hospital policy.
AI research has also produced powerful new approaches to complex logistical problems. Older approaches either took too long to consider every possibility (brute-force algorithms) or settled for better-but-not-best solutions (greedy algorithms). Newer approaches like machine learning, neural networks, and genetic algorithms let us tackle bigger, more complex problems and find truly optimal solutions in a reasonable amount of computing time.
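The brute-force versus greedy contrast can be seen in a toy staffing problem. The people, shifts, and scores below are invented for illustration: brute force checks every assignment and finds the true optimum, while a greedy pass takes the locally best choice and can miss it.

```python
# Toy contrast of brute-force (exact but exponential) vs greedy
# (fast but possibly suboptimal) on a two-person shift assignment.
from itertools import permutations

# score[person][shift]: how well each person covers each shift (invented)
score = {"A": {"day": 5, "night": 4}, "B": {"day": 4, "night": 1}}

def brute_force(people, shifts):
    """Try every one-to-one assignment and keep the best total score."""
    best = max(permutations(shifts),
               key=lambda order: sum(score[p][s] for p, s in zip(people, order)))
    return sum(score[p][s] for p, s in zip(people, best))

def greedy(people, shifts):
    """Give each person their best remaining shift, in order."""
    remaining, total = list(shifts), 0
    for p in people:
        s = max(remaining, key=lambda sh: score[p][sh])
        remaining.remove(s)
        total += score[p][s]
    return total

print(brute_force(["A", "B"], ["day", "night"]))  # 8: A->night, B->day
print(greedy(["A", "B"], ["day", "night"]))       # 6: A grabs day, B is stuck
```

With dozens of providers and shifts, brute force becomes hopeless and greedy stays shortsighted, which is exactly the gap the newer heuristic and learning-based methods aim to close.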
Computer solutions that could revolutionize ED patient flow now employ modeling and simulation to predict bottlenecks. Imagine knowing that in 6 hours you are going to have double the load on radiology, or that your ED average wait is going to triple in 3 hours. Now imagine being able to do something about it. AI-driven optimization algorithms provide real-time advice on how to schedule staff and other resources to avoid problems. The systems can also analyze historical patterns and offer long-term optimization advice. Knowing when and exactly how to move resources reduces wait times and allows more patients to be served. It also improves patient safety and enhances patient satisfaction. This technology is available at a fraction of the cost of flying consultants in, and advice is offered every day instead of every 1-2 years.
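The bottleneck-prediction idea can be sketched with a toy single-queue simulation. The arrival forecast and service capacity below are invented numbers, and a real ED model would track many queues and resources, but the principle is the same: step through the coming hours and see where the backlog climbs.

```python
# Minimal sketch of simulation-based bottleneck prediction: one queue,
# hourly forecast arrivals, fixed service capacity.

def simulate(arrivals_per_hour, served_per_hour):
    """Return the queue length at the end of each simulated hour."""
    queue, history = 0, []
    for arrivals in arrivals_per_hour:
        queue = max(0, queue + arrivals - served_per_hour)
        history.append(queue)
    return history

# Forecast arrivals for the next 8 hours; capacity of 5 patients/hour.
forecast = [3, 4, 6, 9, 10, 8, 5, 3]
backlog = simulate(forecast, 5)
print(backlog)                                  # [0, 0, 1, 5, 10, 13, 13, 11]
print("worst hour:", backlog.index(max(backlog)))
```

Running the model ahead of real time is what turns a surprise 3 a.m. pile-up into a staffing decision made hours earlier; optimization then searches over staffing choices to flatten that backlog curve.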
There are many more insights to come in this new age of medical informatics. The impact of these emerging technologies on the difficult, complex problems facing healthcare is only beginning, but AI has already proven useful enough to show that it is a good path. As we really put our backs into designing tools for the connected, data-rich world that is upon us, we can expect game-changing results.
Every 2,160 years the sun’s position at the time of the vernal equinox moves into a new constellation. There’s debate about dates because bulls, rams, scorpions, and lions are rather fuzzy when they are made of stars, but many astronomers believe we have arrived at the Age of Aquarius. Aquarius is associated with flight and freedom, idealism and democracy, with truth and perseverance, and, most interestingly, with electricity, computers, and modernization. Whether you pay any mind to the stars or not, you’re going to notice.
White paper: Potentia Analytics, Inc.
Computational Intelligence in Medical Informatics
Intelligent Provider Scheduling | Patient Flow Optimization | Predictive Analytics
References
The Problem with Artificial Intelligence
There is a lot of buzz about artificial intelligence (AI) these days. To date, AI has enjoyed amazing successes and endured embarrassing failures. People love to believe that technology can fix everything. After all, it does have a pretty good track record over the past 2,000 years. But it can often be hard to separate science from science fiction. Where do we draw the line between AI hope and hype?
AI has always been intoxicating. We are driven to create things in our own image, in ways that transcend basic biology. And if our digital creations are better at math and logic, perhaps they could become better at thinking in general. Maybe they would start building better versions of themselves. Better and better, in fact, until one day they wouldn’t need us anymore. Yikes. A whole genre of dystopian science fiction pits us against our creations in biblical proportion. Are we opening Pandora’s box?
AI can be generally defined as the field of designing and building machines that exhibit intelligent behavior. As such, it has been carved up a number of ways and is actually quite a diverse field. Most broadly, we can consider “narrow AI,” focused on activities like language translation, image recognition, game-playing, or self-driving cars, in contrast to “general AI,” a machine with broadly applicable reasoning capability and perhaps, ultimately, self-awareness.
In the late 17th century, Gottfried Wilhelm Leibniz, who invented calculus independently of Isaac Newton, demonstrated how logic could be reduced to symbols and reasoning to a set of operations on those symbols. This spawned the general idea that intelligence was, in a sense, algorithmic and mathematical. A century and a half later, George Boole developed Boolean algebra, a system that operated on states of truth (1 is true, 0 is false) to mathematically define a logical path from facts to conclusions. Boolean algebra became the basis of digital information and computer programming. Less than a century after Boole, Alan Turing (who would later spend his days cracking Nazi codes) proved that a simple “Turing machine” needed only zeroes and ones to compute anything computable. This revelation coincided with the advent of electronic circuits that could represent, store, and manipulate these zeroes and ones. The result, the digital computer, transformed our world.
AI officially got started in the summer of 1956, in the little mountain town of Hanover, New Hampshire. Dartmouth College hosted a two-month gathering of geniuses, including Herbert Simon, Allen Newell, Marvin Minsky, Claude Shannon, and John McCarthy. They witnessed a demonstration of the world’s first AI program, Logic Theorist, which was able to prove mathematical theorems using symbolic logic and a list-processing architecture. Many came away from that conference convinced that the human mind could be engineered—needing only enough computer memory and processing power. What ensued was an explosion of research funding to develop the new field. It was a heady time, when computers started beating humans at everything from algebra to checkers. Computer scientists boasted that within 2 decades machines would eclipse human intelligence.
By 1976 this had proven to be far more difficult than expected. Despite their facility with math, computers, in general, were dumb as dirt. Hope foundered, ushering in the first “AI winter.” Funding dried up and there was an ebb in new ideas. Then, in the early 1980s, a fresh kind of AI arrived: expert systems. These new systems incorporated knowledge from subject matter experts and could render a kind of distilled expertise on demand. Machines were taught more than formulas; they now had specific, highly relevant knowledge of their problem-solving domains. Expert systems were making headway in medical diagnosis, molecular structure determination, and other complex problem spaces, and were saving some companies millions of dollars. There was a global resurgence of interest and funding for AI, along with widespread commercialization.
In the end, expert systems could only address a restricted space of problems, were hard to update, did not learn independently, and failed rather ridiculously when they strayed from their subject. Also, there was a lot of soft science and “vaporware” that got funded but never really worked. Like a lot of “bleeding edge” science, AI lacked standards and structure. This led to a growing general perception that AI was snake oil. In a 1987 conference, several of the most respected researchers urged sensibility and a more cautious tack for AI research. Such lack of faith popped the hype bubble and imploded the whole industry, ushering in the second AI winter. Funding disappeared, and businesses that had sprung up to support the effort, like companies that manufactured specialized AI computers, went under.
This proved to be a necessary and good thing, however. Like a forest fire, the brush was cleared so that the tallest trees could breathe. AI became more rigorous, more mathematical, more scientific. Machines got stronger too, doubling in memory and speed every 2 years. Most importantly, machines got connected. The emergence of ethernet, the Internet, the World Wide Web, and protocols and standards for sharing electronic data caused a sea change in the art of the possible. AI researchers realized that intelligence could be collaborative, opening the door to previously unimaginable feats. In 1997, IBM’s Deep Blue computer defeated the world’s reigning chess champion, Garry Kasparov. In 2011, IBM’s Watson computer competed on Jeopardy!, defeating two of the top champions. This was an amazing feat, requiring the machine to fathom puns, word games, and subtle inferences. These highly publicized achievements vaulted us, once again, into the hype-o-sphere. Will we yet again melt our wax wings?
AI labs, once the purview of prestigious universities, are springing up all over the place, especially in gaming, social networking, and search companies. Bloomberg Technology’s Jack Clark called 2015 a breakthrough year for AI, reporting that Google’s investment in AI had grown to over 2,700 projects. Much of what was once called AI, like optical character recognition, natural language understanding, and face recognition, is now just part and parcel of systems we use in our everyday lives. There is also less tendency to call AI by name and more focus on what it actually does and does not do. AI has diversified into many forms, including machine learning, neural networks, genetic algorithms, deep learning, and self-organizing maps, and is cleverly buried in endeavors like simulation, optimization, and predictive analytics. AI comes in honed packages, built to deliver real results for real-world problems. In that sense, it doesn’t matter what you call it, as long as it is useful.
In “Machines Who Think,” Pamela McCorduck says “Science moves in rhythms, in seasons, with periods of quiet, when knowledge is being assimilated, perhaps rearranged, possibly reassessed, and periods of great exuberance, when new knowledge cascades in. We can’t always tell which is which. Technology changes, permitting the formerly infeasible, even unthinkable.”
So the problem with artificial intelligence is: it’s not artificial. In many cases, the intelligence employed by these systems derives from human insight, rendered in zeroes and ones. In other cases, humans are irrelevant. Thinking machines can take a new tack, unencumbered by human limitations. For some problems, machine intelligence can actually be better than human intelligence. In either case, the intelligence—and the solutions—are very real.
The Doctor is Out
According to a 2017 report commissioned by the American Association of Medical Colleges (AAMC), we are facing an unprecedented shortage of doctors in America.1 By 2030, we may be short over 100,000 physicians. Medical specialties that are expected to be hardest hit include primary care, surgery, and psychiatry. Over the same period, the percentage of Americans over 65, who require the most healthcare resources, is expected to increase by 55%. This is a huge problem.
The situation in nursing is projected to be even worse. According to the Bureau of Labor Statistics, there will be over a million unfilled nursing positions by 2022.2 Some experts warn that this could become the worst nursing shortage in U.S. history. A 2007 report from the Institute of Medicine details the tremendous impact that adequate nurse staffing has on quality of care and patient safety.3 Nurses bear the crucial responsibilities of monitoring and educating patients and of implementing their treatment plans. They are in a unique position to detect problems early and to correct the mistakes of other staff. A 2011 study published in the New England Journal of Medicine showed that patient death rates increase significantly when hospital nursing is understaffed.4 Studies have shown that nurse understaffing increases rates of infections,5 hospital readmissions,6 medication errors,7 and other adverse events.
Shortages of healthcare workers are being compounded by a downward spiral of burnout and attrition. Nearly half of U.S. physicians say they are experiencing burnout, and the numbers are getting worse.8 A 2011 survey by the American Nurses Association reported that 3 in 4 nurses felt burned out, most of them citing chronic nursing shortages as a major factor.9 Burnout leads to fatigue and psychological distress and can lead to serious problems like alcohol and drug abuse. Undue work stress results in absenteeism, increased employee turnover, and difficulty recruiting new staff. Staff burnout impairs performance, patient safety, and patient satisfaction, and in the end is very costly to hospitals.
Organizations like the AAMC, the American Association of Colleges of Nursing (AACN), and others are working to recruit faculty and create more training positions to meet the increasing demand for providers. Unfortunately, skilled providers take many years to train and current efforts will not meet demands in time to prevent dangerous shortages of doctors and nurses.
Luckily, all is not lost. The solution to this problem, as in many other industries, is technology. We have arrived at what is being called the Fourth Industrial Revolution.10 The First Industrial Revolution hit in the 18th century with steam engines and industrial machinery. The Second, in the 19th century, gave us electricity and mass production. The Third came in the 20th century with computers, the internet, and automation. Now the Fourth Industrial Revolution is at hand with the progressive integration of physical, biological, and cyber systems. Sensors, monitors, connectivity, actuators, and machine intelligence surround us, in everything from cars and refrigerators to phones, home environmental controls, lighting, security systems, and much more. It is believed that there may be over 50 billion connected devices by 2020.11
So how can technology help us with doctor and nurse shortages? One important solution lies in scheduling software. Scheduling workers turns out to be a very hard problem; variants of the nurse rostering problem are known to be NP-hard. When you have more than just a few people and a few considerations, like not working nights or weekends, the number of possible schedules grows exponentially, and it becomes very hard to find the fairest, most balanced one. Only in the past few years have we benefitted from a convergence of data connectivity and advanced computing technologies like artificial intelligence and machine learning to yield robust solutions to this difficult problem.
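To make the combinatorial explosion concrete, here is a back-of-the-envelope sketch (not any vendor’s actual algorithm) counting candidate schedules under the simplest possible assumption: every shift can be covered by any one of the available staff, independently.

```python
def naive_schedule_count(n_staff: int, n_shifts: int) -> int:
    """Number of distinct assignments if every shift may go to any worker.

    This ignores all rules (rest periods, fairness, credentials), so it is
    an upper bound, but it shows why brute-force search is hopeless.
    """
    return n_staff ** n_shifts

# One week of three shifts per day, with a pool of 10 nurses:
print(naive_schedule_count(10, 21))  # 10**21 candidate schedules
```

Even this toy version yields 10^21 possibilities for a single week, which is why practical schedulers rely on heuristics and optimization rather than exhaustive search.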
Efficient, fair, and flexible scheduling means better use of limited staff. It also means increased staff satisfaction. People can trade shifts, provide notifications and requests over mobile devices, and find replacements faster, drawing on larger pools of qualified, credentialed colleagues. Automated systems, based on sophisticated algorithms, keep track of myriad rules and considerations and can evaluate thousands of alternative schedules to continually deliver the best available solution. These systems have emerged from decades of academic research and are now being deployed as commercial applications that are saving hospitals millions of dollars.
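The idea of encoding rules and weighing many candidate schedules can be illustrated with a minimal sketch. This is a hypothetical toy model, not the product’s engine: each rule becomes a penalty function, random candidate schedules are scored, and the lowest-penalty one is kept.

```python
import random

STAFF = ["Ana", "Ben", "Chi"]  # hypothetical staff pool
N_SHIFTS = 6                   # e.g., three days of two shifts each

def penalty(schedule):
    """Sum of rule violations for a candidate schedule; lower is better."""
    p = 0
    # Rule 1: no one should work back-to-back shifts (weight 3 per violation).
    p += sum(3 for a, b in zip(schedule, schedule[1:]) if a == b)
    # Rule 2: workload should be balanced across the staff.
    counts = [schedule.count(s) for s in STAFF]
    p += max(counts) - min(counts)
    return p

def best_of(n_candidates=1000, seed=0):
    """Score many random candidate schedules and return the best one."""
    rng = random.Random(seed)
    candidates = ([rng.choice(STAFF) for _ in range(N_SHIFTS)]
                  for _ in range(n_candidates))
    return min(candidates, key=penalty)

schedule = best_of()
print(schedule, "penalty:", penalty(schedule))
```

Production systems replace the random sampling here with far stronger search techniques (integer programming, constraint solvers, metaheuristics), but the structure is the same: rules become costs, and the scheduler minimizes total cost.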
Intelligent, automated healthcare scheduling and staffing solutions are meeting another new requirement in modern healthcare: itinerant staff. Yesterday’s healthcare workers signed on at one or two hospitals and tended to stay there for their entire career. Now it is not unusual for doctors and nurses to travel year round, maintaining credentials in numerous states and organizations. They are following higher pay to areas of greatest need, easing the burden on hospitals and communities of providing adequate staffing. Obviously, this itinerant workforce creates even more scheduling complexity.
We are fortunate to be at a point where accelerating growth in both computing power and connectivity has converged to enable technological solutions that were only pipe dreams a few years ago. Global policy efforts are also breaking down the silo-like sequestering of healthcare data, promoting the safe sharing of outcomes, performance data, and patient information. Historically, hospitals spent millions of dollars to hire consultants to painstakingly review their operations and advise improvements. The expense and effort required meant that such analyses occurred rarely, often years apart. Healthcare analytics software now enables statistically meaningful comparisons to be done continuously. Drawing on decades of artificial intelligence research, new and powerful analytics can be applied to identify areas of greatest need and to provide practical, usable advice to healthcare workers and administrators continuously, in real time.
Other new technologies that are compensating for provider shortages include predictive analytics software that identifies bottlenecks and offers advice to increase the speed and efficiency of patient care. Patient flow, scheduling, and staffing technologies will occupy an increasingly vital and central role in the delivery of healthcare. The degree to which they will be able to compensate for nursing and physician shortages remains to be seen, but it is clear that they will continue to have substantial and lasting benefits.
References: