Semantics

Teaching a Martian to Make a Martini

What Happens When a Martian Makes a Martini? (Photo credit: Wikipedia)

In my last blog post, I stated that I felt expert systems were an important forerunner of today’s emerging digital personal assistants and of any other software technologies that include an element of ‘agency’: acting on behalf of others, in this case the humans who invoke them. For someone or something to act on your behalf effectively, they need to understand many specific things about the particular domain they are tasked with working in, along with some general knowledge of the kind that cuts horizontally across many vertical domains. And, of course, they need to know some things about you.

Chuck Dement, the late founder of Ontek Corporation and one of the smartest people I’ve met, used to say that teaching software to understand and execute the everyday tasks that humans do was like teaching a Martian visiting here on Earth how to make a martini. His favorite Martian, George the Gasbag, like the empty shell of a computer program, didn’t know anything about our world or how it works, let alone the specifics of making a martini. Forgetting for a moment George’s physical limitations due to being a gasbag, imagine trying to explain to him (or to encode in software) the process of martini-making — starting with basically no existing knowledge.

First, George has to know something about the laws of physics. He doesn’t need to understand the full quantum model (does anyone actually understand it?), but he does need to be aware of some of the more practical aspects of physics from the standpoint of how it applies to everyday life on the surface of Earth. Much of martini-making involves combining liquid substances. Liquid substances need to be confined in a container of some sort, preferably a non-porous one. The container has to maintain a [relatively] stable and upright position during much of the process. The container holds certain quantities of the liquids. For a martini to be a martini and to taste ‘right’ to its human consumers, the liquids have to be particular substances. Their chemical properties have to meet certain criteria to be suitable (and legal) for use. The quantities of the liquids have to be measured in relative proportions to one another. The total combined quantity shouldn’t (or at least needn’t) exceed the total quantity that the container can hold.
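To make this concrete, here is a minimal sketch of how a few of those everyday-physics constraints might be encoded for George. Every class, field, and ingredient name here is hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Container:
    capacity_ml: float  # total liquid the container can hold
    porous: bool        # liquids require a non-porous container

@dataclass
class Ingredient:
    name: str
    quantity_ml: float

def can_make_martini(container: Container, ingredients: list) -> bool:
    """Check a few of the everyday-physics constraints described above."""
    if container.porous:
        return False  # a porous container won't confine liquids
    total = sum(i.quantity_ml for i in ingredients)
    # the combined quantity must fit within the container
    return total <= container.capacity_ml

# Roughly classic 5:1 gin-to-vermouth proportions in a 250 ml glass
glass = Container(capacity_ml=250, porous=False)
pour = [Ingredient("gin", 75), Ingredient("dry vermouth", 15)]
print(can_make_martini(glass, pour))  # True
```

Even this toy version hints at the depth of the problem: each check above stands in for an entire body of background knowledge that a human bartender applies without thinking.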

You need some ice, which involves another substance — water — its liquid form having been transformed into a solid at a certain temperature. If you are making the martini indoors (as in most cases), or outdoors when the temperature is warm, the process of producing ice from water requires special devices to create the required temperature conditions within some fixed space. And so on and so forth. You can pull on any of those threads and dive into the subject. Think of having a conversation with a four- or five-year-old child and answering all the “Why?” and “How?” questions.

Of course there are at least two major different processes that can be used to mix the liquids along with the ice. They involve different motions: stirring the liquid within the container versus shaking the container (after putting a lid or similar enclosure on the previously ‘open part’ of the container to keep the liquid from flying out). The latter raises the question: is the open ‘part’ of the container really even a part of it, or the absence of some part?

There are allowable variations in the substances (ingredients), both in terms of kinds and specific brands (gin versus vodka, Beefeater versus Tanqueray for gin). Both the process and the ingredients often come down to the specific preferences of the intended individual consumer (take James Bond, for example), but may also be influenced by availability, business criteria such as price or terms of supplier contracts, and whether the consumer has already consumed several martinis or similar alcoholic beverages within some relatively fixed timeframe (don’t forget here to factor in the person’s gender, body size, previous night’s sleep, history of alcohol consumption, altitude, etc.). The main point here is simply that if they’ve had several such drinks, their preferences may be more flexible than for the first one or two!
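Even that last, half-joking observation about loosening preferences can be written down as a rule. A toy sketch, with all brand names and thresholds invented for illustration:

```python
def choose_gin(preferred, available, drinks_so_far):
    """Return the first preferred brand that is in stock.

    After a couple of drinks, preferences loosen: any available
    gin will do. Otherwise, no acceptable option means no drink.
    """
    for brand in preferred:
        if brand in available:
            return brand
    if drinks_so_far >= 2:
        # flexible mode: take whatever is on the shelf
        return next(iter(available), None)
    return None  # strict mode: preferred brand unavailable

stock = {"Tanqueray"}
print(choose_gin(["Beefeater"], stock, drinks_so_far=0))  # None
print(choose_gin(["Beefeater"], stock, drinks_so_far=3))  # Tanqueray
```

The real difficulty, of course, is not writing any one such rule but deciding which of the endless contextual factors (body size, altitude, last night’s sleep…) deserve a place in the model at all.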

Whew!!! All that just to make a martini? That’s all to illustrate that encoding knowledge for everyday tasks is non-trivial. No one ever said developing intelligent agent software would be simple. But as previously mentioned, George doesn’t need to know everything about every aspect of the domains involved in martini-making. Going overboard is a sure recipe for failure. Knowing where to draw the line is the key, and so a healthy serving of pragmatism is recommended. A place to start, I think, is getting in the ballpark of knowledge about everyday things and applying that approximate knowledge to practical uses. Since you don’t always know beforehand how much knowledge you need, I’m a fan of the generative approach to semantic technologies (see my related blog post on approaches to semantic technologies). The generative approach allows agility and flexibility in the production of that knowledge, as well as providing ways to tailor it for individual differences.

And speaking of individual differences: how will George recognize when I’m ready for him to make me a martini? What are the triggers and any prerequisite conditions (like being of legal drinking age in the geo-location where the drink is being made and consumed)? Well, I could always ask George (or my personal, robotically-enabled, martini-making software assistant), but I trust that he knows me well enough to recognize that telling look that says, “I could sure use a drink, my friend… especially after all the knowledge I had to encode to enable you to make one.”

Cheers!

Semantics

Meet Clippy, Your Personal Assistant

Bill Gates recently spoke at a Microsoft Research event about the return of Microsoft Bob and, by association, everybody’s favorite on-screen personal assistant, Clippy. Well, he didn’t literally say that MS Bob and Clippy would be back directly incarnate, but he said he could envision them returning in some form as part of a new wave of personal agents or assistants, but with “a bit more sophistication”. A bit more? That’s sort of like saying Michelangelo’s David is like the prehistoric cave art in the Lascaux Caves in France, but with “a bit more sophistication”. I mean no disrespect, by the way, to those cave-dwelling artists, who deserve a lot of credit for being among perhaps the first humans to create art, or at least art that was preserved.

A few weeks ago, I blogged here about personal assistants. My vision for them is nothing like Microsoft Bob and his sidekick, Clippy. And in an important sense, Bill Gates and Microsoft are not at all like the early cave artists; Bill Gates and Microsoft did not pioneer personal digital assistants.

To me, the pioneers of software-based personal assistants were the people who developed expert systems, starting back in the 1970s and continuing up to about the time that Microsoft Bob debuted in 1995. I’m talking, for example, about systems like Mycin and Eurisko. Of course, the logic rules for those systems were hand-coded, something that won’t scale if personal assistants are to become commonplace in our future. Expert systems also only worked well when applied in specialized domains where specific background knowledge about the domain could be encoded without needing to pull in voluminous knowledge about the everyday world around us. Maybe Microsoft Bob’s tragic failure doomed expert systems and AI? No, at least not on its own. I think what doomed expert systems and AI was the hype gap: expectations had been set for capabilities that far exceeded what the technology of that time could actually deliver.
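To give a flavor of what “hand-coded logic rules” meant in practice, here is a toy forward-chaining sketch in the spirit of those systems. The rules below are invented for illustration; they are not actual Mycin rules:

```python
# Each rule: (set of premises, conclusion). All facts are plain strings.
rules = [
    ({"gram_negative", "rod_shaped"}, "likely_enterobacteriaceae"),
    ({"likely_enterobacteriaceae", "hospital_acquired"}, "consider_gentamicin"),
]

def infer(facts):
    """Repeatedly fire any rule whose premises all hold (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"gram_negative", "rod_shaped", "hospital_acquired"}))
```

The scaling problem is visible even here: every rule had to be elicited from a human expert and typed in by hand, so broadening the system beyond its narrow domain meant hand-writing ever more rules.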

For the record, I also don’t consider Apple Siri and Google Now to be true personal virtual assistants or software agents. They are flashy and fun in large part because of their natural language user interaction abilities. I do, however, like that they convey a sense that they know a little about us and our everyday world (I just wish they ‘understood’ more), and that they are trying to help us accomplish tasks in that environment. Because of that, they certainly represent steps in the right direction. I’d like to see other steps, and, yes, that includes whatever Microsoft is working on, building on the foundation it laid down with Bob and Clippy [note: later revealed to be Cortana].

Who is doing work in this important area? Tempo AI, for example, is doing some neat things within the calendaring domain. Do you know of some others, and if so, can you share without exposing intellectual property? I’d like to hear about what’s coming, and if I can help get it here faster, just have your personal assistant contact me on your behalf!
