



  Final Jeopardy

  The Story of Watson, the Computer That Will Transform Our World

  Stephen Baker

  Table of Contents

  Title Page

  Table of Contents

  Books by Stephen Baker

  Copyright

  Dedication

  Contents

  Introduction

  1. The Germ of the Jeopardy Machine

  2. And Representing the Humans …

  3. Blue J Is Born

  4. Educating Blue J

  5. Watson’s Face

  6. Watson Takes On Humans

  7. AI

  8. A Season of Jitters

  9. Watson Looks for Work

  10. How to Play the Game

  11. The Match

  Acknowledgments

  Notes

  Sources and Further Reading

  About the Author

  Footnotes

  Books by Stephen Baker

  The Numerati

  Final Jeopardy

  Copyright © 2011 by Stephen Baker

  All rights reserved

  For information about permission to reproduce selections from this book, write to Permissions, Houghton Mifflin Harcourt Publishing Company, 215 Park Avenue South, New York, New York 10003.

  www.hmhbooks.com

  The Library of Congress has cataloged the print edition as follows:

  Baker, Stephen, date.

  Final Jeopardy : the story of Watson, the computer that will transform our world / Stephen Baker.

  p. cm.

  ISBN 978-0-547-48316-0 (hardback)

  1. Natural language processing (Computer science) 2. Semantic computing. 3. Artificial intelligence. 4. Database management. 5. Watson (Computer) 6. Jeopardy (Television program) I. Title.

  QA76.9.N38B35 2011

  006.3—dc22

  2010051653

  eISBN 978-0-547-51943-2

  v2.0512

  To Sally and Jack, my fact-hungry sister and son, who each introduced me to Jeopardy! at a different stage of my life


  Introduction

  Watson paused. The closest thing it had to a face, a glowing orb on a flat-panel screen, turned from forest green to a dark shade of blue. Filaments of yellow and red streamed steadily across it, like the paths of jets circumnavigating the globe. This pattern represented a state of quiet anticipation as the supercomputer awaited the next clue. It was a September morning in 2010 at IBM Research, in the hills north of New York City, and the computer, known as Watson, was annihilating two humans, both champion players, in practice rounds of Jeopardy! Within months, it would be playing the game on national television in a million-dollar man vs. machine match against two of Jeopardy’s all-time greats.

  As Todd Crain, an actor and the host of these test games, started to read the next clue, the filaments on Watson’s display began to jag and tremble. Watson was thinking—or coming as close to it as a computer could. The $1,600 clue, in the category The Eyes Have It, read: “This facial ware made Israel’s Moshe Dayan instantly recognizable worldwide.”

  The three players—two human and one electronic—could read the words as soon as they appeared on the big Jeopardy board. But they had to wait for Crain to read the entire clue before buzzing. That was the rule. As the host pronounced the last word, a light would signal that contestants could buzz. The first to hit the button could win $1,600 with the right answer—or lose the same amount with a wrong one. (In these test matches, they played with funny money.)

  This pause for reading gave Watson three or four seconds to hunt down the answer. The first step was to figure out what the clue meant. One of its programs promptly picked apart the grammar of the sentence, identifying the verbs, objects, and key words. In another section, research focused on Moshe Dayan. Was this a person? A place in Israel? Perhaps a holy site? Names like John and Maria would signal a person. But Moshe was more puzzling.

  During these seconds, Watson’s cognitive apparatus— 2,208 computer processors working in concert—mounted a massive research operation through thousands of documents around Moshe Dayan and his signature facial ware. After a second or so, different programs, or algorithms, began to suggest hundreds of possible answers. To us, many of them would look like wild guesses. Some were phrases that Dayan had uttered, others were references to his military campaigns and facts about Israel. Still others cited various articles of his clothing. At this point, the computer launched its second stage of analysis, figuring out which response, if any, merited its confidence. It proceeded to check and recheck facts, making sure that Moshe Dayan was indeed a person, an Israeli, and that the answer referred to something he wore on his face.

  A person looking at Watson’s frantic and repetitive labors might conclude that the player was unsure of itself, laughably short on common sense, and scandalously wasteful of computing resources. This was all true. Watson barked up every tree from every conceivable angle. The pattern on its screen during this process, circles exploding into little stars, provided only a hint of the industrial-scale computing at work. In a room behind the podium, visible through a horizontal window, Watson’s computers churned, and the fans cooling them roared. This time, its three seconds of exertion paid off. Watson came up with a response, sending a signal to a mechanical device on the podium. It was the size of a large aspirin bottle with a clear plastic covering. Inside was a Jeopardy buzzer. About one one-hundredth of a second later, a metal finger inside this contraption shot downward, pressing the button.

  Justin Bernbach, a thirty-eight-year-old airline lobbyist from Brooklyn, stood to Watson’s left. He had pocketed $155,000 while winning seven straight Jeopardy matches in 2009. Unlike Watson, Bernbach understood the sentence. He knew precisely who Moshe Dayan was as soon as he saw the clue, and he carried an image of the Israeli leader in his mind. He gripped the buzzer in his fist and frantically pressed it four or five times as the light came on.

  But Watson had arrived first.

  “Watson?” said Crain.

  The computer’s amiable male voice arranged the answer, as Jeopardy demands, in the form of a question: “What is eye patch?”

  “Very good,” Crain said. “An eye patch on his left eye. Choose again, Watson.”

  Bernbach slumped at his podium. This match with the machine wasn’t going well.

  It was going magnificently for David Ferrucci. As the chief scientist of the team developing the Jeopardy computer, Ferrucci was feeling vindicated. Only three years earlier, the suggestion that a computer might match wits and word skills with human champions in Jeopardy sparked opposition bordering on ridicule in the halls of IBM Research. And the final goal of the venture, a nationally televised match against two Jeopardy legends, Ken Jennings and Brad Rutter, seemed risky to some, a bit déclassé to others. Jeopardy, a television show, appeared to lack the timeless cachet of chess, which IBM computers had mastered a decade earlier.

  Nonetheless, Ferrucci and his team went ahead and built their machine. Months earlier, it had fared well in a set of test matches. But the games revealed flaws in the machine’s logic and game strategy. It was a good player, but to beat Jennings and Rutter, who would be jousting for a million-dollar top prize, it would have to be great. So they had worked long hours over the summer to revamp Watson. This September event was the coming-out party for Watson 2.0. It was the first of fifty-six test matches against a higher level of competitor: people, like Justin Bernbach, who had won enough matches to compete in Jeopardy’s Tournament of Champions.

  In these early matches, Watson was having its way with them. Ferrucci, monitoring the matches from a crowded observation booth, was all smiles. Keen to promote its Jeopardy phenom, IBM’s advertising agency, Ogilvy & Mather, had hired a film crew to follow Ferrucci’s team and capture the drama of this opening round of championship matches. The observation room was packed with cameras. Microphones on long booms recorded the back-and-forth of engineers as they discussed algorithms and Watson’s response time, known as latency. Ferrucci, wearing a mike on his lapel, gave a blow-by-blow commentary as Watson, on the other side of the glass, strutted its new and smarter self.

  It was almost as if Watson, like a person giddy with hubris, was primed for a fall. The computer certainly had its weaknesses. Even when functioning smoothly, it would make its share of wacky mistakes. Right before the lunch break, one clue asked about “the inspiration for this title object in a novel and a 1957 movie [which] actually spanned the Mae Khlung.” Now, it would be reasonable for a computer to miss “The Bridge over the River Kwai,” especially since the actual river has a different name. Perhaps Watson had trouble understanding the sentence, which was convoluted at best. But how did the computer land on its outlandish response, “What is Kafka?” Ferrucci didn’t know. Those things happened, and Watson still won the two morning matches.

  It was after lunch that things deteriorated. Bernbach, so frustrated in the morning, started to beat Watson to the buzzer. Meanwhile, the computer was making risky bets and flubbing entire categories of clues. Defeat, which had seemed so remote in the morning, was now just one lost bet away. It came in the fourth match. Watson was winning by $4,000 when it stumbled on this Final Jeopardy clue: “On Feb. 8, 2010, the headline in a major newspaper in this city read: ‘Amen! After 43 years, our prayers are answered.’” Watson missed the reference to the previous day’s Super Bowl, won by the New Orleans Saints. It bet $23,000 on Chicago. Bernbach also botched the clue, guessing New York. But he bet less than Watson, which made him the first person to defeat the revamped machine. He pumped his fist.

  In the sixth and last match of the day, Watson trailed Bernbach, $16,200 to $21,000. The computer landed on a Daily Double in the category Colleges and Universities, which meant it could bet everything it had on nailing the clue. A $5,000 bet would have brought it into a tie with Bernbach. A larger bet, while risky, could have catapulted the computer toward victory. “I’ll take five,” Watson said.

  Five. Not $5,000, not $500. Five measly dollars of funny money. The engineers in the observation booth were stunned. But they kept quieter than usual; the cameras were rolling.

  Then Watson crashed. It occurred at some point between placing that lowly bet and attempting to answer a clue about the first Catholic college in Washington, D.C. Watson’s “front end,” its voice and avatar, was waiting for its thousands of processors, or “back end,” to deliver an answer. It received nothing. Anticipating such a situation, the engineers had prepared set phrases. “Sorry,” Watson said, reciting one of them, “I’m stumped.” Its avatar displayed a dark blue circle with a single filament orbiting mournfully in the Antarctic latitudes.

  What to do? Everyone had ideas. Maybe they should finish the game with an older version of Watson. Or perhaps they could hook it up to another up-to-date version of the program at the company’s Hawthorne labs, six miles down the road. But some worried that a remote connection would slow Watson’s response time, causing it to lose more often on the buzz. In the end, as often happens with computers, a reboot brought the hulking Jeopardy machine back to life. But Ferrucci and his team got an all-too-vivid reminder that their Jeopardy player, even as it prepared for a debut on national television, could go haywire or shut down at any moment. When Watson was lifted to the podium, facing banks of cameras and lights, it was anybody’s guess how it would perform. Watson, it was clear, had a frighteningly broad repertoire.

  Only four years earlier, in 2006, Watson was a prohibitive long shot, not just to win at Jeopardy but even to be built. For more than a year, the head of IBM Research, a physicist named Paul Horn, had been pressing a number of teams at the company to pursue a Jeopardy machine. The way he saw it, IBM had triumphed in 1997 with its chess challenge. The company’s machine, Deep Blue, had defeated the reigning world champion, Garry Kasparov. This burnished IBM’s reputation among the global computing elite while demonstrating to the world that computers could rival human beings in certain domains associated with intelligence.

  That triumph left IBM’s top executives hungry for an encore. Horn felt the pressure. But what could the researchers get a computer to do? Deep Blue had rifled through millions of scenarios per second, calculated probabilities, and made winning moves. It had aced applied math. But it had skipped the far more complex domain of words. This, Horn thought, was where the next challenge would be. Far beyond the sixty-four squares on a chess board, the next computer should charge into the vast expanse of human language and knowledge. For the test, Horn settled on Jeopardy, which debuted in 1964 and now attracted some nine million viewers every weeknight. It was the closest thing in the United States to a knowledge franchise. “People associated it with intelligence,” Horn later said.

  There was one small problem. For months, he couldn’t get any takers. Jeopardy, with its puns and strangely phrased clues, seemed too hard for a computer. IBM was already building machines to answer questions, and their performance, in speed and precision, came nowhere close to that of even a moderately informed person. How could the next machine grow so much smarter?

  And while researchers regarded the challenge as daunting, many people, Horn knew, saw it precisely the other way. Answering questions? Didn’t Google already do that?

  Horn eventually enticed David Ferrucci and his team to pursue his vision. Ferrucci, then in his midforties, wore a dark brown beard wrapped around his mouth and wire-rimmed glasses. An expert in Artificial Intelligence (AI), he had a native New Yorker’s gift of the gab and an openness, even about his own life, that was at times jolting. (“I have a growing list of potentially mortal diseases,” he said years later. “People order an MRI a week for me.”) But he also had a wide and ranging intellect. Early in his tenure at IBM he and a friend tried, in their spare time, to teach a machine to write fiction by itself. They trained it for various literary themes, from love to betrayal, and they named it Brutus, for Julius Caesar’s traitorous comrade. Ferrucci was comfortable talking about everything from the details of computational linguistics to the evolution of life on earth and the nature of human thought. This made him an ideal ambassador for a Jeopardy machine. After all, his project would raise a broad range of issues, and fears, about the role of brainy machines in society. Would they compete for jobs? Could they establish their own agendas, like the infamous computer HAL, in 2001: A Space Odyssey, and take control? What was the future of knowledge and intelligence, and how would brains and machines divvy up the cognitive work? Ferrucci was always ready with an opinion. At the same time, he could address the strategic questions—how these machines would fit into hundreds of businesses, and why the project he was working on, as he saw it, went far beyond Google.

  The Google question was his starting point; until people understood that his machine was not just a souped-up search engine, the project made little sense. For certain types of questions, Ferrucci said, a search engine could come up with answers. These were simple sentences with concrete results, what he and his team called factoids. For example: “What is the tallest mountain in Africa?” A search engine would pick out the three key words from that sentence and in a fraction of a second suggest Tanzania’s 19,340-foot-high Kilimanjaro. This worked, Ferrucci said, for about 30 percent of Jeopardy questions. But performance at that low level would condemn Watson to defeat at the hands of human amateurs.

  A Jeopardy machine would have to master far thornier questions. Just as important, it would have to judge its level of confidence in an answer. Google’s algorithms delivered users to the statistically most likely outposts of the Web and left it to the readers to find the answers. “A search engine doesn’t know that it understood the question and that the content is right,” Ferrucci said. But a Jeopardy machine would have to find answers and then decide for itself if they were worth betting on. Without this judgment, the machine would never know when to buzz. It would require complex analysis to develop this “confidence.”
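The distinction Ferrucci draws here, between finding candidate answers and judging whether any of them merits a wager, can be sketched in a few lines. This is purely illustrative: the function name, the candidate list, the scores, and the threshold are all invented for the example and bear no relation to Watson’s actual code.

```python
# Illustrative sketch only: buzz on the best candidate answer, but only if
# the machine's confidence in it clears a threshold. Candidates are
# (answer, confidence) pairs; everything here is hypothetical.

def pick_answer(candidates, threshold=0.5):
    """Return (answer, confidence) if confident enough to buzz, else None."""
    if not candidates:
        return None
    answer, confidence = max(candidates, key=lambda c: c[1])
    return (answer, confidence) if confidence >= threshold else None

# A hypothetical candidate list for the Moshe Dayan clue
candidates = [("eye patch", 0.92), ("military uniform", 0.18), ("Israel", 0.05)]
print(pick_answer(candidates))  # "eye patch" clears the threshold, so buzz
```

The point of the sketch is the second step: a search engine stops after ranking candidates, whereas a Jeopardy player must also decide whether the top candidate is trustworthy enough to risk money on.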

  Was it worth it? Didn’t it make sense for machines to hunt through mountains of data and for people to rely on their exquisitely engineered brains to handle the final judgments? This seemed like a reasonable division of labor. After all, processing language and spotting answers come easily to humans but remain so hard for machines.

  But what if machines could take the next step? What if they could go beyond locating bits and pieces of information and help us to understand it? “I think there are 1.4 million books on sale online,” Ferrucci said one afternoon. He was sharing a bottle of his own wine, a Shiraz blend that he’d cooked up in the winery attached to his kitchen in the northern suburbs of New York. He was in an expansive mood, which led him to carry out energetic dialogues with himself, asking questions and answering them emphatically. “You can probably fit all the books that are on sale on about two terabytes that you can buy at OfficeMax for a couple hundred dollars. You get every book. Every. Single. Book. Now what do you do? You can’t read them all! What I want the computer to do,” he went on, “is to read them for me and tell me what they’re about, and answer my questions about them. I want this for all information. I want machines to read, understand, summarize, describe the themes, and do the analysis so that I can take advantage of all the knowledge that’s out there. We humans need help. I know I do!”