
by Natalie Miller • @natalieatWIS

Watson in the real world: The business of cognitive computing

A Q&A with John Gordon, Vice President, IBM Watson Group

Published April 20, 2015

 
 

After IBM Watson dazzled “Jeopardy!” viewers by defeating champions Brad Rutter and Ken Jennings, the enterprise world awaited the day Big Blue’s cognitive computing brand would go mainstream. Last year was a big year for IBM Watson: the platform went commercial, and developers quickly began flexing its supercomputing muscles to see what it could do.

“The first couple of years we called a market validation period, to find out if there was anything commercial that was going to come out of this experiment that began with ‘Jeopardy!’,” explains John Gordon, vice president, IBM Watson Group. “We started by training Watson for Oncology with oncologists at Memorial Sloan Kettering Cancer Center, and now the technology is being deployed at Bumrungrad International Hospital in Thailand, which has 1.2 million patients a year. It’s now taking that expertise and helping to scale it. It’s really an amazing project.”

Last year, IBM announced Watson Explorer V10 and the opening of the Watson Developer Cloud as the platform for building enterprise applications for any industry and for a variety of uses, from narrow to broad. “This was a big shift for us, to say you can now do cognitive exploration of all your enterprise data, the structured and unstructured, and then plug in certain cognitive services that interpret it in different ways,” Gordon explains.

From apps that assist in veterinary medicine and oncology to tackling the Japanese language, Watson’s abilities and applications seem endless. In this Q&A, Gordon talks about how IBM took Watson from game show phenomenon to a viable enterprise technology across industries, and how businesses can take advantage of it as it develops.

Insights Magazine: IBM is already up to version 10 of IBM Watson Explorer, is that correct?

John Gordon: Yes. We built the Watson Group in January [2014] and announced a whole bunch of technologies, but simply put, Watson boils down to two things: one, it interacts with the world naturally, through natural language; and two, it learns. So we put the group together, and part of what we did was combine things we found in Research and in other places that were down similar paths. They were already starting to think about interacting with the world naturally or learning, but they had been working in pockets. So we said, ‘Let’s pull that talent, all those groups, together into Watson.’

The Explorer guys were already thinking about, ‘How do I start to get my arms around a broader set of data that I can interpret more like a person does, rather than just point out where it exists?’ I said, ‘Great, let’s take that and we’ll build off of what you built as a baseline to go and scale that across enterprises.’

IM: Before the end of 2014 there were already 11 applications in production and over 3,300 in various stages of development. So it seems this idea of Watson in the real world has really taken off.

Gordon: I joke because [in 2013] we said, ‘Let’s go do this.’ I hired a bunch of interns, and my favorite thing about interns is they don’t yet know what an unreasonable request is. So I said, ‘Here’s the deal guys, by the end of the summer your job is to go build an app that learns about health and wellness.’ We had no tools. We had no methods. I actually said, ‘Hey, the only other people who ever built this were like IBM PhDs, so just have at it. Call if you need help.’

But they did great. They were the first people who figured out how to do it, how to make it learn, how to apply a bunch of other things and build an app. They taught us all the gaps. We found all the issues that didn’t work, so we fixed a bunch of those and built tooling and methods for future people, not just to implement APIs but to teach the systems to get smarter. Now we have a full public cloud out there so other people can do this at more scale. In fact, today we have thousands of partners involved in this vibrant and innovative community, building 7,000 apps to date. That’s going to continue to evolve for a long time.

IM: Can you describe the mechanics of making a Watson API get smarter?

Gordon: They learn like you and I do, right? First, you instruct them. Teach them something. Give them data and tell them what you want them to know about. Then you test them: give them sample cases, homework assignments, practice. Give them input and get output, then grade their output. You actually say, ‘Yeah, this was good, this was bad; I liked it, I didn’t.’ It can be as simple as thumbs up, thumbs down, but usually it’s more subtle than that, right? Of these possible answers, this one was better on a scale, this one was worse. What’s interesting with these cognitive systems is they always show their work. They tell you why.

So it’s instruct, test, grade, and then adapt. Then you decide how you take the feedback and roll it back into Watson to make it smarter. So the data is there. To build enterprise apps, we don’t want to move all the data, so [Watson] Explorer’s heritage was a virtual integration, pointing to data where it exists. It came from just looking at structured data, so we enhanced it to look at both structured data, tables and databases, as well as unstructured data, understanding what the notes and comments are. So that now gets at the data where it is, and then you can plug [other Watson services] on top of it as modules to build [applications].
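To make the instruct, test, grade, adapt cycle concrete, here is a minimal sketch in Python of the kind of feedback loop Gordon describes. The class and method names are hypothetical placeholders rather than real Watson Developer Cloud calls; the point is only the shape of the loop: teach the system with examples, collect its answers along with the evidence behind them, grade them, and fold the grades back in as new training signal.

```python
# Illustrative sketch of the instruct -> test -> grade -> adapt loop.
# All class and method names here are hypothetical; this is not the Watson API.

def _overlap(a, b):
    """Crude word-overlap similarity, used only to keep the sketch self-contained."""
    return len(set(a.lower().split()) & set(b.lower().split()))

class CognitiveService:
    """Stand-in for a learning service that improves from graded feedback."""

    def __init__(self):
        self.examples = []   # (question, known-good answer) pairs the system was taught
        self.feedback = []   # (question, answer, score) triples awaiting adaptation

    def instruct(self, question, good_answer):
        """Step 1: instruct -- give it data and tell it what you want it to know."""
        self.examples.append((question, good_answer))

    def answer(self, question):
        """Step 2: test -- give it input, get output plus evidence ('show its work')."""
        best = max(self.examples,
                   key=lambda ex: _overlap(ex[0], question),
                   default=(None, "I don't know yet"))
        return {"answer": best[1], "evidence": f"closest taught example: {best[0]!r}"}

    def grade(self, question, answer, score):
        """Step 3: grade -- thumbs up/down or a finer-grained score from an expert."""
        self.feedback.append((question, answer, score))

    def adapt(self):
        """Step 4: adapt -- roll graded feedback back in as new training signal."""
        for question, answer, score in self.feedback:
            if score > 0.5:                     # keep the answers the expert liked
                self.examples.append((question, answer))
        self.feedback.clear()

# One pass around the loop.
service = CognitiveService()
service.instruct("What vaccines does a healthy adult dog need?", "Core vaccines every three years")
result = service.answer("Which vaccines should an adult dog get?")
service.grade("Which vaccines should an adult dog get?", result["answer"], score=1.0)
service.adapt()
print(result)
```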

IM: How did the Watson Group manage to get this idea off the ground so quickly?

Gordon: When we decided we were going to try [making Watson a separate business unit], we intentionally stuck it off the beaten path in Austin, Texas, for our market validation phase. For two years we gave it the room to investigate, push boundaries, and fail really fast. We had a leadership team that’s done lots of different startup-type work, and others who just had that culture, and we kept at it. What we had to show was a fun experiment that was thought-provoking, which was “Jeopardy!”. At the time I came in and said, ‘You can’t solve all the world’s problems, but your system can answer relatively short questions and respond with even shorter answers.’

That’s how we started. We said, ‘Alright, for this to be a commercially viable area you have to be able to take on much more interesting problems.’ Take health care: the problem statement sent to Watson for each health care case is 25 pages long. That’s materially different from a “Jeopardy!” question. So we had to figure out how to do that, see what was possible, and push on things fast. That’s been one thing, just the whole culture.

The second thing was deciding whether we were going to build all the cognitive apps in the world ourselves or make the platform available for others to do it. We decided we were going to put this all out on the platform. The path we’re on is that even the apps we build will be built off the same platform that all these other developers use. We’re putting out all of the cognitive services for other people to build on top of. We’ll pick a handful of applications that we think we, as a company, have unique value in delivering, but the way to scale for the industry is to open the platform up to everyone else to create their own apps.

The culture we created as a sort of startup group and pushed forward is now scaling. It’s been really good because we’ve been able to keep the energy and our mission, which is not to build these individual things but to create an industry. You’ve got to be interested in reshaping the next era of computing to be doing this.

IM: How can developers actually organize the information for a Watson application in a space where the information is not already as precisely packaged as in, for instance, health care? Is IBM poised to help those innovators?

Gordon: The short answer is yes. There are different types of content. One is shared content—with oncology, it’s easy as there’s a lot of medical research and published journals, so there’s a set of data. With the Watson Developer Cloud we have a content store that we keep putting more and more information in. We work with people who have information they want to make available, so we can say, ‘Here is access across key industries, key areas, you can put content in and move that forward.’ If you have content you can make it available to the whole industry, and we’ll start putting that in. We put a bunch of free content out there that people can use as starting points to build on top of. That would be one model.

The second model is the proprietary enterprise content that each group has. When we looked at Watson Discovery Advisor for health care, there was a set of publicly available medical information that Baylor College of Medicine used to make its initial discoveries. A lot of where we see it going is that you would combine the insights you get from this set of information with the insights from all the failed medical research that each institution has. That information is their own; they don’t share it. It’s not out there, but actually the greatest insights in medicine could come from combining the public information with what these institutions have learned from their failures. Watson Explorer now gives you a way to put your arms around all the things that you have inside, while the content store gives you access to information outside.

IM: How do these developers and organizations test the accuracy of Watson’s answers?

Gordon: It’s really interesting, because these systems learn like people do, and you test them the same way. When I go in and we start talking about how we would build some of the different systems, one of the things we ask is, ‘Who is the person we’re modeling this after?’ I’d actually like to model it after you, right? If you and I could sit down and you could answer some questions for me, I want Watson to be able to work the way you do. You don’t measure just by how the technology works; you want the system to scale and represent you. I train it with you; you give it feedback and it starts to mimic you. It’s going to go faster than you, but it’s going to learn from how you learn. A lot of times we sit down early on and say, ‘Let’s both take this exam, take this test, and we’ll see how you do and we’ll see how Watson does.’ You will kill it at the beginning. But Watson learns as you go, and at some point you’ll see it catch up to performing the way you do, at scale, and then we’ll stop when you say, ‘Okay, I’ve got it. It’s now doing what I expect our best people to do.’ This helps us scale expertise, so you’re having the best assistant looking over people’s shoulders, giving help and advice across your whole organization. That’s a huge opportunity to scale.
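The side-by-side exam Gordon describes boils down to tracking how often the system agrees with the expert it is modeled on, round after round of feedback. The short Python sketch below illustrates that bookkeeping; the exam answers and the round-by-round results are invented purely for illustration.

```python
# Illustrative sketch of the "take the same exam" evaluation Gordon describes.
# The expert's answers and the per-round system answers below are invented.

def agreement_rate(system_answers, expert_answers):
    """Fraction of exam questions where the system matches the expert's answer."""
    matches = sum(1 for s, e in zip(system_answers, expert_answers) if s == e)
    return matches / len(expert_answers)

expert_answers = ["a", "b", "c", "d", "e"]   # answers from the expert being modeled

# Snapshot the system's answers after each round of feedback and retraining.
rounds = [
    ["a", "x", "x", "x", "x"],               # early on, the expert "kills it"
    ["a", "b", "x", "d", "x"],               # the system starts to catch up
    ["a", "b", "c", "d", "e"],               # eventually it performs at the expert's level
]

for i, system_answers in enumerate(rounds, start=1):
    print(f"round {i}: agreement with expert = {agreement_rate(system_answers, expert_answers):.0%}")
```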

IM: What’s the vision for the future of IBM Watson?

Gordon: The biggest thing is that enterprise developers have access to Watson services through the Watson Developer Cloud community on Bluemix and through Explorer, so they can build cognitive apps to do whatever they want them to do. We have a path to building even more services, so the first thing you’ll see is that, for the first time, there are enterprise cognitive applications that can scale. That would be number one. The second thing is that we are going to keep going hard and fast at driving the ecosystem to scale. Currently we provide access to 13 Watson APIs and plan to continue expanding the offerings on our developer platform. With each new idea for an application, you’re going to see the whole ecosystem continue to expand.
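For enterprise developers, those Watson services on Bluemix are consumed as hosted web APIs. As a rough illustration only, the sketch below shows what calling such a service might look like from Python using the requests library; the endpoint, credentials, and payload fields are placeholder assumptions, not documented Watson Developer Cloud values, so the real service documentation should be consulted for actual request formats.

```python
# Hypothetical sketch of an enterprise app calling a Watson-style service over REST.
# The endpoint URL, credentials, and payload fields are placeholders, not the
# documented Watson Developer Cloud API; check the real service docs for specifics.
import requests

SERVICE_URL = "https://example-cognitive-service.mybluemix.net/api/v1/analyze"  # placeholder
USERNAME = "service-username"   # credentials as bound to the app in Bluemix (placeholder)
PASSWORD = "service-password"

def analyze_text(text):
    """Send a piece of enterprise text to the (hypothetical) cognitive service."""
    response = requests.post(
        SERVICE_URL,
        auth=(USERNAME, PASSWORD),   # basic auth, a common pattern for hosted services
        json={"text": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(analyze_text("Patient notes: persistent cough for two weeks, no fever."))
```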

For more information about IBM Watson, visit www.ibm.com/watson.

 
 
