The Positive, Sunny Side of AI

Recently I had the pleasure of presenting at DATAVERSITY’s 2015 Smart Data Conference as part of their track on artificial intelligence (AI). My presentation covered various AI technologies and products and how they are used in ‘smart applications’ in markets ranging from customer care to retail, healthcare, education, entertainment and smart homes. I also delved into intelligent virtual assistants and other forms of software agents, which, if you have read many of my blog posts here, you’ll know is a passion of mine. A video of the presentation is available, courtesy of DATAVERSITY. If you don’t already have an account with DATAVERSITY, it’s just a quick registration and they’ll send you on your way to viewing the video.

Sunny Side of AI Presentation Video

The DATAVERSITY website has lots of other great resources (articles, white papers, slides, webinars and other videos) on a wide range of topics around semantic technologies, including artificial intelligence and cognitive computing. Speaking of cognitive computing, I also participated in a panel at the conference discussing the burgeoning commercialization of cognitive computing technologies. An article about the panel session and the full video are here:

Cognitive Computing 201 Panel

And here’s another related article that provides a good introduction to cognitive computing.

The audience in both cases seemed really engaged and interested in the topics, and the discussions, both in the sessions and on the sidelines afterwards, were stimulating. I ran into a lot of familiar faces who have been working in this field for some time and who, like me, are encouraged to see the high levels of interest on the part of those developing AI technologies and products and those using them for smart applications. There were lots of new faces, too, both vendors unveiling their products and enterprises interested in exploring practical applications of AI. While there were lots of references to the fear factor around AI, to a large degree the attendees seemed to realize that much of what’s written about AI (the killer robot stories, for example) is mostly sensationalism. What doesn’t always get written about, unfortunately, is the hundreds if not thousands of everyday uses of AI that provide real benefits to businesses and consumers. That’s the positive, sunny side of AI. Hopefully my presentation helps get that word out. Attendees who are supportive of AI now have more positive material to help counter the negative coverage. Perhaps we’ll get more balanced media coverage going forward, even if the positive uses aren’t always as dramatic as sentient AIs rising up and enslaving the human race!

I hope you’ll enjoy the presentation(s), too. Please feel free to contact me if you have questions or comments, or if you want to discuss any of the topics or technologies covered.

Tony

Secret Agent Action

This blog post isn’t about some superhero or secret agent code-named Action. It’s about enabling intelligent software agents to take action. In writing periodically about intelligent software agents, or virtual personal assistants, I’ve not shied away from saying that there are significant challenges to making them commonplace in our everyday lives. But that doesn’t mean we shouldn’t start building intelligent agents today.

One challenge is providing software agents with knowledge about the domains in which they are intended to operate; in other words, making software agents ‘intelligent’. In my last post, “Teaching a Martian to Make a Martini”, I tried to provide a sense of the scale of the challenge involved in making software agents intelligent, and pointed to ways to get started using semantic networks (whether constructed, induced, blended or generated). There are at least two other significant challenges: describing or modeling the actions to be performed, and specifying the triggers/conditions, inputs, controls and expected outputs for those actions. These are of course intertwined with having underlying knowledge of the domain, as these challenges involve putting domain knowledge in a form and context such that the software agent can make use of it to perform one or more tasks for its human ‘employer’.
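
To make the second challenge concrete, here is a minimal sketch of what a description of an action might look like, with triggers, inputs, controls and expected outputs called out as described above. All of the names and fields are hypothetical, invented for illustration rather than drawn from any particular agent framework:

```python
# A minimal, hypothetical sketch of an action description, assuming a simple
# breakdown into triggers, inputs, controls and outputs as described above.
# Nothing here comes from a real agent framework.
from dataclasses import dataclass, field

@dataclass
class ActionModel:
    name: str
    triggers: list[str] = field(default_factory=list)  # events/states that start the action
    inputs: list[str] = field(default_factory=list)    # things consumed or transformed
    controls: list[str] = field(default_factory=list)  # constraints governing execution
    outputs: list[str] = field(default_factory=list)   # expected results

make_martini = ActionModel(
    name="make_martini",
    triggers=["user requests a martini"],
    inputs=["gin", "dry vermouth", "ice"],
    controls=["recipe proportions", "user preferences"],
    outputs=["one chilled martini"],
)
print(make_martini.triggers)
```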

Here’s the first secret to producing agents capable of action: create the models of the actions or processes (at least in rough form) by ‘simply’ capturing instructions via natural language processing, perhaps combined with haptic signals (such as click patterns) from the manual conduct of the tasks on devices such as smartphones. Thinking of the example from my previous post, this is the equivalent of you telling the Martian the steps to be executed to make a martini, or going through the steps yourself with the Martian observing the process. In either case, this process model includes the tasks to be performed and the associated workflow (including the critical path and optional or alternative steps), as well as triggers and conditions (such as prerequisites, dependencies, etc.) for the execution of the steps, and the inputs, outputs and controls for those steps. Keywords extracted from the instructions can serve as the basis for building out a richer, underlying contextual model in the form of a semantic network or ontology.

But there is more work to be done. A state like the existence of a certain input, or the occurrence of an event such as the completion of a prior task, can serve as a trigger, and the output of one process can be an input to one or more others. There can be loops, and since there can be unexpected situations, there can be other actions to take when things don’t go according to plan. Process modeling can get quite complex. Writing code can be a form of process modeling: the code itself is the model, or a model can be created in a form that is executable by some state machine. But we don’t want to have to do either of those in this case. The goal should be to naturally evolve these models, not to require that they be developed in all their detail before they get used by an agent for the first time. And more general models that can be specialized as they get applied are the best-case scenario.
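
As a rough illustration of the capture step, here is a deliberately naive sketch that splits natural-language instructions into ordered steps and pulls out candidate keywords to seed a semantic network. Real NLP would do far better; the stopword list, the sentence splitting and the "each step depends on the previous one" assumption are all simplifications I’ve made up for illustration:

```python
# A deliberately naive sketch of capturing instructions into a process model:
# split a recipe into ordered steps and extract candidate keywords to seed a
# semantic network. The stopword list and the sequential-dependency
# assumption are simplifications made up for illustration.
import re

STOPWORDS = {"a", "an", "the", "and", "then", "with", "into", "until", "of", "to"}

def capture_process(instructions: str) -> list[dict]:
    steps = []
    for i, sentence in enumerate(re.split(r"\.\s*", instructions.strip(". "))):
        keywords = [w for w in re.findall(r"[a-z]+", sentence.lower())
                    if w not in STOPWORDS]
        steps.append({
            "order": i,                              # rough workflow: sequential by default
            "text": sentence,
            "keywords": keywords,                    # seeds for the contextual model
            "depends_on": [i - 1] if i > 0 else [],  # prior step acts as the trigger
        })
    return steps

model = capture_process(
    "Fill a shaker with ice. Add gin and dry vermouth. Stir until chilled. "
    "Strain into a glass."
)
for step in model:
    print(step["order"], step["keywords"])
```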

I know a fair bit about complex process models. I encoded a model of the product definition (i.e., product design) process for aircraft (as developed by a team at Northrop Corporation, with a shout-out here to @AlexTu) into a form executable by an artificial intelligence/expert system. My objective at the time was to test an approach to modeling a process ontology and an associated AI technology built around it. Northrop’s objective was to have a software system ‘understand’ things like the relationships among the process steps, related artifacts (e.g., inputs and outputs) and conditionals, and to be able to optimize the process by eliminating unnecessary steps and introducing more parallelism. In other words, a major goal of the project was to enable more of what was called at the time ‘concurrent engineering’, both to shorten the time needed for product definition and to catch problems with the design of the product as early in the process as possible (the later such problems are caught, the more they cost to correct, with the worst case of course being that a problem isn’t discovered until the aircraft has been built, deployed and in use for its mission in the field). The project was pretty darned impressive, and the technology worked well as an ‘assistant’ to product engineers looking to improve their processes.

Many of the tasks we deal with on a regular basis in our everyday lives aren’t as complex or specialized as the product definition process for an aircraft. But even relatively simple processes can be tedious to encode if every detail has to be explicitly modeled. Here is where another secret comes in: rather than model detailed conditionals for things like triggers, why not use statistical data about the presence of certain indicative concepts in input and output data associated with the process, along with refinements to the model based on user feedback? Clearly this approach makes the most sense for non-critical processes. You don’t want to try it for brain surgery (at least not if you are one of the first patients). But virtual personal assistants and other agents aren’t intended to do jobs for us entirely on their own, so much as to help us do our job (at least at first). So if we have some patience up-front and are willing to work with ‘good enough’, I think we could see a lot more examples of such software agents. If we have expectations that the agents know everything and are right close to 100% of the time, we’ll see a lot fewer. It’s that simple, I think.
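
To make the idea concrete, here is a toy sketch of such a statistical trigger: score incoming text for indicative concepts, fire when the evidence crosses a threshold, and nudge the weights up or down based on user feedback. The concepts, weights and threshold are all invented for illustration; a real system would learn them from logged examples:

```python
# A toy statistical trigger: score incoming text for indicative concepts,
# fire when the evidence crosses a threshold, and adjust weights on user
# feedback. The concepts, weights and threshold are invented for illustration.
class StatisticalTrigger:
    def __init__(self, concept_weights: dict[str, float], threshold: float = 0.5):
        self.weights = dict(concept_weights)
        self.threshold = threshold

    def score(self, text: str) -> float:
        return sum(self.weights.get(tok, 0.0) for tok in text.lower().split())

    def should_fire(self, text: str) -> bool:
        return self.score(text) >= self.threshold

    def feedback(self, text: str, was_correct: bool, rate: float = 0.1) -> None:
        # Reinforce or dampen the concepts that contributed to this decision.
        delta = rate if was_correct else -rate
        for tok in set(text.lower().split()):
            if tok in self.weights:
                self.weights[tok] += delta

trigger = StatisticalTrigger({"martini": 0.8, "drink": 0.5, "thirsty": 0.4})
message = "long day, I could sure use a drink"
if trigger.should_fire(message):             # "drink" alone crosses the threshold
    print("agent: one martini coming up")
trigger.feedback(message, was_correct=True)  # user confirmed, so reinforce
```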

So let’s get started building and using some ‘good enough’ assistants. If other people want to wait until they’re ‘perfected’, they can join the party later, maybe after The Singularity has occurred. I think it is time to start now. And I’m convinced we’ll get farther faster if we do start now, rather than waiting until years from now to begin building such technologies en masse. Let’s refocus some of our collective efforts from yet-another social networking app or more cute cat videos onto more intelligent agents. Then intelligent, actionable software agents won’t be so secret anymore – in fact, they’ll be everywhere. And you’ll have more free time to spend with your cat.

Teaching a Martian to Make a Martini

What Happens When a Martian Makes a Martini? (Photo credit: Wikipedia)

In my last blog post, I stated I felt expert systems were an important forerunner of today’s emerging digital personal assistants and any other software technologies that include an element of ‘agency’ — acting on behalf of others, in this case the humans who invoke them. For someone or something to act on your behalf effectively, they need to understand many specific things about the particular domain they are tasked with working in, along with some general knowledge of the type that cuts horizontally across many vertical domains, and of course they need to know some things about you.

Chuck Dement, the late founder of Ontek Corporation and one of the smartest people I’ve met, used to say that teaching software to understand and execute the everyday tasks that humans do was like teaching a Martian visiting Earth how to make a martini. His favorite Martian, George the Gasbag, like the empty shell of a computer program, didn’t know anything about our world or how it works, let alone the specifics of making a martini. Forgetting for a moment George’s physical limitations due to being a gasbag, imagine trying to explain to him (or encode in software) the process of martini-making, starting with basically no existing knowledge.

First, George has to know something about the laws of physics. He doesn’t need to understand the full quantum model (does anyone actually understand it?), but he does need to be aware of some of the more practical aspects of physics as they apply to everyday life on the surface of Earth. Much of martini-making involves combining liquid substances. Liquid substances need to be confined in a container of some sort, preferably a non-porous one. The container has to maintain a [relatively] stable and upright position during much of the process. The container holds certain quantities of the liquids. For a martini to be a martini and to taste ‘right’ to its human consumers, the liquids have to be particular substances. Their chemical properties have to meet certain criteria to be suitable (and legal) for use. The quantities of the liquids have to be measured in relative proportions to one another. The total combined quantity shouldn’t (or at least needn’t) exceed the total quantity that the container can hold.
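
Just to show that even this small slice of knowledge can be written down and checked, here is a sketch of a few of these constraints as explicit rules over a simple recipe representation. The capacity, quantities and the ‘acceptable’ gin-to-vermouth range are all made-up illustrative numbers, not a canonical martini spec:

```python
# A sketch of encoding a few of the constraints above as explicit, checkable
# rules. The capacity, quantities and gin:vermouth range are made-up numbers
# for illustration, not a canonical martini specification.
CONTAINER_CAPACITY_ML = 250  # assumed shaker capacity

recipe_ml = {"gin": 60, "dry vermouth": 10}

def check_recipe(recipe: dict[str, float], capacity_ml: float) -> list[str]:
    problems = []
    total = sum(recipe.values())
    if total > capacity_ml:
        problems.append(f"total volume {total} ml exceeds container capacity")
    ratio = recipe["gin"] / recipe["dry vermouth"]
    if not 3 <= ratio <= 8:  # assumed acceptable gin:vermouth proportions
        problems.append(f"gin:vermouth ratio {ratio:.1f} is outside the expected range")
    return problems

print(check_recipe(recipe_ml, CONTAINER_CAPACITY_ML) or "recipe passes the checks")
```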

You need some ice, which involves another substance, water, its liquid form having been transformed into a solid at a certain temperature. In most cases, if you are making the martini indoors, or outside when the temperature is warm, producing ice from water requires special devices to create the required temperature conditions within some fixed space. And so on and so forth. You can pull on any of those threads and dive into the subject. Think of having a conversation with a four- or five-year-old child and answering all the “Why?” and “How?” questions.

Of course, there are at least two quite different processes that can be used to mix the liquids along with the ice. They involve different motions: stirring the liquid within the container versus shaking the container (after putting a lid or similar enclosure on the previously ‘open part’ of the container to keep the liquid from flying out). The latter raises the question: is the open ‘part’ of the container really even a part of it, or the absence of some part?

There are allowable variations in the substances (ingredients), both in terms of kinds and specific brands (gin versus vodka, Beefeater versus Tanqueray for gin). Both the process and the ingredients often come down to the specific preferences of the intended individual consumer (take James Bond, for example), but they may also be influenced by availability, business criteria such as price or the terms of supplier contracts, and whether the consumer has already consumed several martinis or similar alcoholic beverages within some relatively fixed timeframe (don’t forget here to factor in the person’s gender, body size, previous night’s sleep, history of alcohol consumption, altitude, etc.). The main point here is simply that if they’ve had several such drinks, their preferences may be more flexible than for the first one or two!

Whew!!! All that just to make a martini? That’s all to illustrate that encoding knowledge for everyday tasks is non-trivial. No one ever said developing intelligent agent software would be simple. But as previously mentioned, George doesn’t need to know everything about every aspect of the domains involved in martini-making. Going overboard is a sure recipe for failure. Knowing where to draw the line is the key, so a healthy serving of pragmatism is recommended. A good place to start, I think, is getting in the ballpark of knowledge about everyday things and applying that approximate knowledge to practical uses. Since you don’t always know beforehand how much knowledge you need, I’m a fan of the generative approach to semantic technologies (see my related blog post on approaches to semantic technologies). The generative approach allows agility and flexibility in the production of that knowledge, as well as providing ways to tailor it for individual differences.
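
As a very loose sketch of what ‘generative’ could mean in practice, the snippet below expands knowledge about a concept only when the agent first needs it, rather than hand-encoding a complete semantic network up front. The generator here is just a stub dictionary I’ve invented; in a real system it might be rules, templates or a learned model:

```python
# A loose sketch of "generative" knowledge production: expand what is known
# about a concept only when it is first needed, instead of hand-encoding a
# complete semantic network up front. The generator is a stub dictionary;
# in practice it might be rules, templates or a learned model.
GENERATOR = {
    "martini": [("made_of", "gin"), ("made_of", "vermouth"), ("served_in", "glass")],
    "gin": [("is_a", "spirit"), ("flavored_with", "juniper")],
}

class LazySemanticNetwork:
    def __init__(self) -> None:
        self.edges: dict[str, list[tuple[str, str]]] = {}

    def relations(self, concept: str) -> list[tuple[str, str]]:
        # Generate (and cache) knowledge the first time a concept is queried.
        if concept not in self.edges:
            self.edges[concept] = GENERATOR.get(concept, [])
        return self.edges[concept]

net = LazySemanticNetwork()
print(net.relations("martini"))  # generated on first use, then cached
print(net.relations("gin"))
```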

And speaking of individual differences: how will George recognize when I’m ready for him to make me a martini? What are the triggers and any prerequisite conditions (like being of legal drinking age in the geo-location where the drink is being made and consumed)? Well, I could always ask George (or my personal, robotically-enabled, martini-making software assistant), but I trust that he knows me well enough to recognize that telling look that says, “I could sure use a drink, my friend,…especially after all the knowledge I had to encode to enable you to make one.”

Cheers!
