There is a lot of buzz about artificial intelligence (AI) these days. To date, AI has enjoyed amazing successes and endured embarrassing failures. People love to believe that technology can fix everything. After all, it does have a pretty good track record over the past 2,000 years. But it can often be hard to separate science from science fiction. Where do we draw the line between AI hope and hype?

AI has always been intoxicating. We are driven to create things in our own image, in ways that transcend basic biology. And if our digital creations are better at math and logic, perhaps they could become better at thinking in general. Maybe they would start building better versions of themselves. Better and better, in fact, until one day they wouldn’t need us anymore. Yikes. A whole genre of dystopian science fiction pits us against our creations in biblical proportion. Are we opening Pandora’s box?

AI can be generally defined as the field of designing and building machines that exhibit intelligent behavior. As such, it has been carved up in a number of ways and is actually quite a diverse field. Most broadly, we can consider “narrow AI,” focused on activities like language translation, image recognition, game playing, or self-driving cars, in contrast to “general AI,” a machine with broadly applicable reasoning capability and, perhaps ultimately, self-awareness.

In the 17th century, Gottfried Wilhelm Leibniz, who invented calculus independently of Isaac Newton, demonstrated how logic could be reduced to symbols and reasoning to a set of operations on those symbols. This spawned the general idea that intelligence was, in a sense, algorithmic, mathematical. More than a century later, George Boole developed Boolean algebra, a system that operated on states of truth (1 is true, 0 is false) to mathematically define a logical path from facts to conclusions. Boolean algebra became the basis of digital information and computer programming. Less than a century after Boole, Alan Turing (who would later spend World War II cracking Nazi codes) proved that a simple “Turing machine” needed only zeroes and ones to compute anything computable. This revelation coincided with the advent of electronic circuits that could represent, store, and manipulate those zeroes and ones. The result, the digital computer, transformed our world.
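Boole’s system can be sketched in a few lines of modern Python. The rule and variable names below are hypothetical illustrations, not drawn from Boole’s own notation; the point is simply that facts become truth values and reasoning becomes operations on them:

```python
# Boole's insight in miniature: facts are states of truth (1 or 0),
# and reasoning is a set of operations on those states.
# Hypothetical rule: if it is raining AND I am outside, I get wet.
raining = True                    # fact
outside = True                    # fact
gets_wet = raining and outside    # logical AND on truth states

def implies(p, q):
    """Material implication: p -> q is equivalent to (not p) or q."""
    return (not p) or q

# From the facts and the rule, Boolean algebra licenses the conclusion:
conclusion = implies(raining and outside, gets_wet)
print(gets_wet, conclusion)  # True True
```

Nothing here is beyond what a Victorian logician could do on paper; what Turing added was the proof that a machine shuffling such zeroes and ones could compute anything computable.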

AI officially got started in the summer of 1956, in the little mountain town of Hanover, New Hampshire. Dartmouth College hosted a two-month gathering of geniuses, including Herbert Simon, Allen Newell, Marvin Minsky, Claude Shannon, and John McCarthy. They witnessed a demonstration of the world’s first AI program, Logic Theorist, which was able to prove mathematical theorems using symbolic logic and a list-processing architecture. Many came away from that conference convinced that the human mind could be engineered, needing only enough computer memory and processing power. What ensued was an explosion of research funding to develop the new field. It was a heady time, when computers started beating humans at everything from algebra to checkers. Computer scientists boasted that within two decades machines would eclipse human intelligence.

By 1976, this had proven to be far more difficult than expected. Despite their facility with math, computers, in general, were dumb as dirt. Hope foundered, ushering in the first “AI winter.” Funding dried up and there was an ebb in new ideas. Then, in the early 1980s, a fresh kind of AI arrived: expert systems. These new systems incorporated knowledge from subject matter experts and could render a kind of distilled expertise on demand. Machines were taught more than formulas—now having specific, highly relevant knowledge of their problem-solving domains. Expert systems were making headway in medical diagnosis, molecular structure determination, and other complex problem spaces, and were saving some companies millions of dollars. There was a global resurgence of interest and funding for AI, along with widespread commercialization.
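The if-then character of an expert system can be sketched as a tiny forward-chaining inference engine. The rules below and the `infer` helper are hypothetical illustrations (and certainly not medical advice); real systems of the era, such as MYCIN, used hundreds of such rules plus mechanisms for uncertainty:

```python
# Minimal forward-chaining inference, in the spirit of an expert system:
# if-then rules captured from a domain expert, applied repeatedly to
# known facts until no new conclusions emerge.
rules = [
    ({"fever", "rash"}, "measles_suspected"),     # hypothetical rules,
    ({"measles_suspected"}, "order_blood_test"),  # for illustration only
]

def infer(facts, rules):
    facts = set(facts)
    changed = True
    while changed:  # keep firing rules until a fixed point is reached
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "rash"}, rules))
```

The brittleness the next paragraph describes falls out of this design: a fact outside the rule base (`"headache"`, say) triggers nothing at all, and every new conclusion requires an expert to hand-write another rule.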

In the end, expert systems could only address a restricted space of problems, were hard to update, did not learn independently, and failed rather ridiculously when they strayed from their subject. Also, a lot of soft science and “vaporware” got funded but never really worked. Like a lot of “bleeding edge” science, AI lacked standards and structure. This led to a growing general perception that AI was snake oil. At a 1987 conference, several of the most respected researchers urged restraint and a more cautious tack for AI research. Such lack of faith popped the hype bubble and imploded the whole industry, ushering in the second AI winter. Funding disappeared, and businesses that had sprung up to support the effort, like companies that manufactured specialized AI computers, went under.

This proved to be a necessary and good thing, however. Like a forest fire, the brush was cleared so that the tallest trees could breathe. AI became more rigorous, more mathematical, more scientific. Machines got stronger too, doubling in memory and speed every two years. Most importantly, machines got connected. The emergence of Ethernet, the Internet, the World Wide Web, and protocols and standards for sharing electronic data caused a sea change in the art of the possible. AI researchers realized that intelligence could be collaborative, opening the door to previously unimaginable feats. In 1997, IBM’s Deep Blue computer defeated the world’s reigning chess champion, Garry Kasparov. In 2011, IBM’s Watson computer competed on Jeopardy!, defeating two of the top champions. This was an amazing feat, requiring the machine to fathom puns, word games, and subtle inferences. These highly publicized achievements vaulted us, once again, into the hype-o-sphere. Will we yet again melt our wax wings?

AI labs, once the purview of prestigious universities, are springing up all over the place, especially in gaming, social networking, and search companies. Bloomberg Technology’s Jack Clark called 2015 a breakthrough year for AI, reporting that Google’s investment in AI had grown to over 2,700 projects. Much of what was once called AI, like optical character recognition, natural language understanding, and face recognition, is now just part and parcel of systems we use in our everyday lives. There is also less tendency to call AI by name and more focus on what it actually does and does not do. AI has diversified into many forms, including machine learning, neural networks, genetic algorithms, deep learning, and self-organizing maps, and is cleverly buried in endeavors like simulation, optimization, and predictive analytics. AI comes in honed packages, built to deliver real results for real-world problems. In that sense, it doesn’t matter what you call it, as long as it is useful.

In “Machines Who Think,” Pamela McCorduck writes: “Science moves in rhythms, in seasons, with periods of quiet, when knowledge is being assimilated, perhaps rearranged, possibly reassessed, and periods of great exuberance, when new knowledge cascades in. We can’t always tell which is which. Technology changes, permitting the formerly infeasible, even unthinkable.”

So the problem with artificial intelligence is: it’s not artificial. In many cases, the intelligence employed by these systems derives from human insight, rendered in zeroes and ones. In other cases, humans are irrelevant. Thinking machines can take a new tack, unencumbered by human limitations. For some problems, machine intelligence can actually be better than human intelligence. In either case, the intelligence—and the solutions—are very real.

 

White paper: Potentia Analytics, Inc.

Computational Intelligence in Medical Informatics

Intelligent Provider Scheduling | Patient Flow Optimization | Predictive Analytics

 

References

  • Luger, George F. Artificial Intelligence: Structures and Strategies for Complex Problem Solving, Second Edition. Redwood City, CA: The Benjamin/Cummings Publishing Company, Inc., 1993.
  • Clark, Jack. Why 2015 Was a Breakthrough Year in Artificial Intelligence. Last modified on December 10, 2015. https://www.bloomberg.com/news/articles/2015-12-08/why-2015-was-a-breakthrough-year-in-artificial-intelligence. Accessed May 3, 2017.
  • McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence, 2nd Edition. Natick, MA: A K Peters, Ltd., 2004.
  • Hintze, Arend. Understanding the Four Types of Artificial Intelligence. November 14, 2016. http://www.govtech.com/computing/Understanding-the-Four-Types-of-Artificial-Intelligence.html. Accessed May 16, 2017.
  • US Office of Science and Technology Policy. Preparing for the Future of Artificial Intelligence. October 2016. https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf. Accessed May 16, 2017.