Semantics

The Positive, Sunny Side of AI

Recently I had the pleasure of presenting at DATAVERSITY’s 2015 Smart Data Conference as part of their track on artificial intelligence (AI). My presentation covered various AI technologies and products and how they are used in ‘smart applications’ in markets ranging from customer care to retail, healthcare, education, entertainment and smart homes. I also delved into intelligent virtual assistants and other forms of software agents, which, if you have read many of my blog posts here, you’ll know are a passion of mine. A video of the presentation is available, courtesy of DATAVERSITY. If you don’t already have an account with DATAVERSITY, it’s just a quick registration and they’ll send you on your way to viewing the video.

Sunny Side of AI Presentation Video


The DATAVERSITY website has lots of other great resources (articles, white papers, slides, webinars and other videos) on a wide range of topics around semantic technologies, including artificial intelligence and cognitive computing. Speaking of cognitive computing, I also participated in a panel at the conference discussing the burgeoning commercialization of cognitive computing technologies. An article about the panel session and the full video are here:

Cognitive Computing 201 Panel

And here’s another related article that provides a good introduction to cognitive computing.

The audience in both cases seemed genuinely engaged and interested in the topics, and the discussions, both in the sessions and on the sidelines afterwards, were stimulating. I ran into a lot of familiar faces who have been working in this field for some time and who, like me, are encouraged to see the high levels of interest on the part of those developing AI technologies and products, and those using them for smart applications. There were lots of new faces, too, both fresh vendors unveiling their products and enterprises interested in exploring practical applications of AI.

While there were lots of references to the fear factor around AI, the attendees largely seemed to realize that much of what’s written about AI, the killer robot stories for example, is sensationalism. What doesn’t always get written about, unfortunately, are the hundreds if not thousands of everyday uses of AI that provide real benefits to businesses and consumers. That’s the positive, sunny side of AI. Hopefully my presentation helps get that word out. Attendees who are supportive of AI now have more positive material to help counter all of the negative coverage. Perhaps we’ll get more balanced media coverage going forward, even if the positive uses aren’t always as dramatic as sentient AIs rising up and enslaving the human race!

I hope you’ll enjoy the presentation(s), too. Please feel free to contact me if you have questions or comments, or if you want to discuss any of the topics or technologies covered.

Tony


Is CIA Behind Your Smart Phone’s Virtual Assistant?

No, I don’t mean the CIA as in Central Intelligence Agency – although based on the recent revelations about the US government’s intrusion into our electronic lives, they might well be. I mean use of central(ized) intelligent agents (CIA). For example, does the intelligent virtual assistant app on your smart phone require all the data about you as an individual, and all the knowledge rules about what to do with that data, to reside in some centralized repository that embodies (or maybe, disembodies) you and your life? Put more bluntly, does the agent capability or agency depend upon central or centralized intelligence?

So far at least, based on popular intelligent virtual assistants like Google Now or Apple’s Siri, the answer would seem to be yes. Clearly companies like Google and Apple have a vested business interest in being the one place where all the data about your daily life is collected, analyzed and utilized to enable agents like Google Now and Siri to help make your life a little easier. And of course that data happens to be useful for targeting ads and services at you, too. In all fairness, though, Google, Apple and others who may be taking a centralized approach to empowering intelligent agents aren’t necessarily doing so purely out of their own self-interest. Arguably it makes things a lot easier from a technical perspective to have all that data and logic in one spot, harmonized and maintained by one company. It solves, or at least mitigates, myriad integration problems that would otherwise have to be addressed with the alternative, ‘distributed intelligence agency’ approach.

What are some of those problems? Well, to illustrate, here’s a (simple?) example. If my intelligent virtual assistant doesn’t know that I drove to a place near my current geo-location and parked my car in a parking garage that closes at 10:00 PM, it might not know whether to recommend that I walk or drive to a nearby restaurant that it suggested for dinner based on my interest in Asian Fusion cuisine. Is it likely too far to walk (given my health, weight and normal walking habits)? Is it a safe neighborhood to walk in? At what time of day would I be walking and when does it get dark at my location? If I made reservations for more than myself, might others be walking with me? Will the weather be conducive to walking? If walking doesn’t seem reasonable, what are the alternatives (public transit, Uber/Lyft, or my car that’s presently in the parking garage)? If I decide to walk, am I likely to make it back to the parking garage before it closes? If it is too far to walk, but I prefer not to drive or take transit, are there other restaurants nearby with variants of cuisine that are similar to Asian Fusion? Or are there other restaurants with altogether different cuisine that would meet other dining interests or goals that I have expressed (e.g., “Someday I’d like to try one of those restaurants where everything on the menu contains garlic”)? What are my plans for after dinner? Do I have an early meeting or an early flight in the morning? How far is my home or my hotel?

Moving seamlessly and quickly across the data and apps, including times, activities, tasks, places, people, personal preferences and other contexts without loss of data or context might well be easier if there’s one spot where all that data lives and one entity that manages it for me. But that comes at the price of “lock-in” and rigidity.
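
To make those integration problems a bit more concrete, here is a minimal sketch of just the walk-or-drive piece of that example. Everything in it is invented for illustration: the field names, thresholds and the 90-minute dinner assumption are mine, and each field would, today, come from a different app or service.

```python
from dataclasses import dataclass
from datetime import datetime, time, timedelta

@dataclass
class DinnerContext:
    # Each of these typically lives in a different app or service today:
    distance_to_restaurant_km: float   # maps app
    garage_closes_at: time             # parking app or garage receipt
    sunset_at: time                    # weather service
    rain_expected: bool                # weather service
    typical_walk_km: float             # health/fitness app
    reservation_time: datetime         # calendar or dining app

def recommend_mode(ctx: DinnerContext) -> str:
    """Toy walk-vs-drive decision; a real agent would weigh far more context."""
    too_far = ctx.distance_to_restaurant_km > ctx.typical_walk_km
    dark_on_return = ctx.reservation_time.time() > ctx.sunset_at
    # Assume roughly 90 minutes for dinner when checking the garage closing time.
    back_at_garage = (ctx.reservation_time + timedelta(minutes=90)).time()
    garage_risk = back_at_garage > ctx.garage_closes_at
    if too_far or ctx.rain_expected:
        return "take a ride-share" if garage_risk else "drive"
    if dark_on_return or garage_risk:
        return "walk there, ride-share back, move the car before the garage closes"
    return "walk both ways"

ctx = DinnerContext(1.2, time(22, 0), time(19, 45), False, 2.0,
                    datetime(2015, 8, 20, 19, 0))
print(recommend_mode(ctx))   # -> "walk both ways"
```

Even this toy version needs data from half a dozen different sources, which is exactly why the centralized approach is so attractive to the companies that already hold most of that data.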

I’m a big fan of distributed systems in general, and so here in the specific case of intelligent software agents, my preference would be for distributed data and a distributed agent framework to enable collaboration across the various data sources, apps and entities that might be involved in agent-based transactions of this nature. That will likely take standards – de facto and de rigueur – or at least agreements among groups of vendors working in the intelligent software agent space. That includes vendors of both specialized ‘vertical agents’ and more general, ‘horizontal agents’. Will the W3C step up to this challenge? Will some other organization or body? Is the community of Android developers powerful enough to pressure Google to open things up at least when it comes to agents on Android? What about similarly for Apple and iOS? Without such action, the intelligent software agent space is likely to be driven entirely by a few big, well-known players who will compete through their own proprietary technologies built on the model of central(ized) intelligence agency. As technologists and/or consumers, is that what we really want?
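
To make the standards question a little more tangible, here is a purely hypothetical sketch of what an inter-agent request might look like in a distributed framework: a general ‘horizontal’ personal agent asking a ‘vertical’ dining agent for suggestions, passing scoped references to my data rather than the data itself. None of this corresponds to any existing spec; every field name and URI scheme is invented.

```python
import json

# Hypothetical message from a horizontal personal agent to a vertical dining agent.
# The context_refs idea is the interesting part: instead of shipping raw personal
# data to the vertical agent, the personal agent hands out scoped, expiring
# references that the other agent can dereference only with the user's consent.
request = {
    "version": "0.1-draft",                      # invented; no such standard exists
    "from": "agent://personal-assistant/tony",
    "to": "agent://dining-recommender",
    "intent": "suggest_restaurant",
    "constraints": {
        "cuisine": ["asian fusion"],
        "max_walk_km": 1.5,
        "when": "2015-08-20T19:00:00-07:00",
    },
    "context_refs": {
        "location": "ctxref://geo/current?scope=coarse&ttl=600",
        "dietary_preferences": "ctxref://profile/diet?ttl=600",
    },
    "reply_to": "agent://personal-assistant/tony/inbox",
}

print(json.dumps(request, indent=2))
```

The hard standards work isn’t the envelope; it’s agreeing on how those context references get minted, scoped, revoked and expired, which is where a body like the W3C, or pressure from the Android and iOS developer communities, could make a real difference.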

Where do you stand? Are you for or against a CIA? Keep in mind, your intelligent virtual assistant might be listening to your answer!

Tony


Secret Agent Action

This blog post isn’t about some superhero or secret agent code-named Action. It’s about enabling intelligent software agents to take action. As I’ve been writing periodically about intelligent software agents or virtual personal assistants, I’ve not shied away from saying there are significant challenges to making them commonplace in our everyday lives. But that doesn’t mean we shouldn’t be starting to build intelligent agents today.

One challenge is providing software agents knowledge about the domains in which they are intended to operate; in other words, making software agents ‘intelligent’. In my last post, “Teaching a Martian to Make a Martini”, I tried to provide a sense of the scale of the challenge involved in making software agents intelligent, and pointed to ways to get started using semantic networks (whether constructed, induced, blended or generated). There are at least two other significant challenges: describing or modeling the actions to be performed, and specifying the triggers/conditions, inputs, controls and expected outputs from those actions. These are of course intertwined with having underlying knowledge of the domain, as they involve putting domain knowledge in a form and context such that the software agent can make use of it to perform one or more tasks for its human ‘employer’.
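
As a concrete (and entirely hypothetical) illustration of what ‘describing the actions’ might involve, here is a minimal sketch of an action description carrying the pieces named above: triggers, conditions, inputs, controls and expected outputs. The field names and the example task are mine, not taken from any particular agent framework.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ActionModel:
    """One step an agent can take, plus what it needs and what it should yield."""
    name: str
    triggers: List[str]          # events or states that start the step
    preconditions: List[str]     # conditions that must hold before it runs
    inputs: Dict[str, type]      # required data items and their expected types
    controls: Dict[str, object]  # knobs: approvals, time limits, budgets
    outputs: Dict[str, type]     # what the step is expected to produce
    perform: Callable[..., dict] = field(default=lambda **kwargs: {})

# Hypothetical example: the agent books a table once I accept its suggestion.
make_reservation = ActionModel(
    name="make_dinner_reservation",
    triggers=["user_accepted_restaurant_suggestion"],
    preconditions=["restaurant_takes_reservations", "party_size_known"],
    inputs={"restaurant_id": str, "party_size": int, "time": str},
    controls={"requires_user_confirmation": True, "timeout_seconds": 30},
    outputs={"confirmation_number": str},
)
```

The interesting part is tying the names used here back to the domain knowledge: the agent has to know that restaurant_id in this action description denotes the same concept its semantic network calls a restaurant.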

Here’s the first secret to producing agents capable of action: create the models of the actions or processes (at least in rough form) by ‘simply’ capturing instructions via natural language processing, perhaps combined with haptic signals (such as click patterns) from the manual conduct of the tasks on devices such as smart phones. Thinking of the example from my previous post, this is the equivalent of you telling the Martian the steps to be executed to make a martini, or going through the steps yourself with the Martian observing the process. In either case, this process model includes the tasks to be performed and the associated workflow (including critical path and optional or alternative steps), as well as triggers and conditions (such as prerequisites, dependencies, etc.) for the execution of the steps and the inputs, outputs and controls for those steps. Keywords extracted from the instructions can serve as the basis for building out a richer, underlying contextual model in the form of a semantic network or ontology.

But there is more work to be done. A state like the existence of a certain input, or the occurrence of an event such as the completion of a prior task, can serve as a trigger, and the output of one process can be an input to one or more others. There can be loops, and since there can be unexpected situations, there can be other actions to take when things don’t go according to plan. Process modeling can get quite complex. Writing code can be a form of process modeling, where the code is the model, or a model can be created in a form that is executable by some state machine. But we don’t want to have to do either of those in this case. The goal should be to evolve these models naturally, not to require that they be developed in all their detail before they get used by an agent for the first time. And more general models that can be specialized as they get applied are the best-case scenario.
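
As a deliberately naive illustration of the ‘capture instructions and extract keywords’ idea, here is a sketch that turns a spoken or typed set of instructions into an ordered list of candidate steps and keyword seeds. A real system would use proper natural language processing (and the click patterns mentioned above); the tokenizer, stopword list and martini example here are just placeholders.

```python
import re

STOPWORDS = {"the", "a", "an", "and", "then", "to", "into", "with", "of", "it"}

def rough_process_model(instructions: str) -> list:
    """Split instructions into ordered steps and pull out candidate keywords.
    The keywords become seeds for a richer semantic network or ontology later."""
    steps = []
    chunks = re.split(r"[.;]|\bthen\b", instructions.lower())
    for chunk in chunks:
        tokens = [t for t in re.findall(r"[a-z]+", chunk) if t not in STOPWORDS]
        if tokens:
            steps.append({"order": len(steps), "text": chunk.strip(), "keywords": tokens})
    return steps

martini = ("Chill the glass. Add gin and dry vermouth to a mixing glass with ice, "
           "then stir, then strain into the chilled glass and garnish with an olive.")
for step in rough_process_model(martini):
    print(step["order"], step["keywords"])
# 0 ['chill', 'glass']
# 1 ['add', 'gin', 'dry', 'vermouth', 'mixing', 'glass', 'ice']
# 2 ['stir']
# 3 ['strain', 'chilled', 'glass', 'garnish', 'olive']
```

Keywords like gin, vermouth and strain are exactly the hooks for attaching each step to concepts in the underlying semantic network, which is where prerequisites (the glass has to be chilled before you strain into it) and alternative steps can start to come from.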

I know a fair bit about complex process models. I encoded a model of the product definition (i.e., product design) process for aircraft, as developed by a team at Northrop Corporation (with a shout-out here to @AlexTu), into a form executable by an artificial intelligence/expert system. My objective at the time was to test an approach to modeling a process ontology and an associated AI technology built around it. Northrop’s objective was to have a software system ‘understand’ things like the relationships among the process steps, related artifacts (e.g., inputs and outputs) and conditionals, and to be able to optimize the process by eliminating unnecessary steps and introducing more parallelism. In other words, a major goal of the project was to enable more of what was called at the time ‘concurrent engineering’, both to shorten the time needed for product definition and to catch problems with the design of the product as early in the process as possible (since the later such problems are caught, the more they cost to correct, with the worst case of course being that a problem isn’t discovered until the aircraft has been built, deployed and put into use for its mission in the field). The project was pretty darned impressive, and the technology worked well as an ‘assistant’ to product engineers looking to improve their processes.

Many of the tasks we deal with on a regular basis in our everyday lives aren’t as complex or specialized as the product definition process for an aircraft. But even relatively simple processes can be tedious to encode if every detail has to be explicitly modeled. Here is where another secret comes in: rather than model detailed conditionals for things like triggers, why not use statistical data about the presence of certain indicative concepts in input and output data associated with the process, along with refinements to the model based on user feedback? Clearly this approach makes the most sense for non-critical processes. You don’t want to try it for brain surgery (at least not if you are one of the first patients). But virtual personal assistants and other agents aren’t intended to do jobs for us entirely on their own, so much as to help us do our job (at least at first). So if we have some patience up-front and are willing to work with ‘good enough’, I think we could see a lot more examples of such software agents. If we have expectations that the agents know everything and are right close to 100% of the time, we’ll see a lot fewer. It’s that simple, I think.
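
Here is a small, hypothetical sketch of what such a statistical trigger with user feedback could look like: score incoming text against a handful of indicative concepts for a task, fire when the score clears a threshold, and nudge the weights when the user confirms or corrects the agent. The concepts, weights and threshold are invented for illustration; a real implementation would learn them from data.

```python
import re

class StatisticalTrigger:
    """Fire a task when enough indicative concepts appear in the input text.
    Weights start as rough guesses and get adjusted from user feedback."""

    def __init__(self, task: str, concept_weights: dict,
                 threshold: float = 1.0, step: float = 0.1):
        self.task = task
        self.weights = dict(concept_weights)
        self.threshold = threshold
        self.step = step

    def _tokens(self, text: str) -> set:
        return set(re.findall(r"[a-z]+", text.lower()))

    def score(self, text: str) -> float:
        tokens = self._tokens(text)
        return sum(w for concept, w in self.weights.items() if concept in tokens)

    def should_fire(self, text: str) -> bool:
        return self.score(text) >= self.threshold

    def feedback(self, text: str, was_correct: bool) -> None:
        """Reinforce or dampen the concepts that were present in this input."""
        tokens = self._tokens(text)
        for concept in self.weights:
            if concept in tokens:
                self.weights[concept] += self.step if was_correct else -self.step

# Invented example: should this message trigger a 'book a table' task?
trigger = StatisticalTrigger(
    task="book_a_table",
    concept_weights={"dinner": 0.6, "reservation": 0.8, "tonight": 0.4},
)
message = "Can you get us a reservation for dinner tonight?"
if trigger.should_fire(message):
    print("Offer to book a table")           # the agent assists; the user confirms
trigger.feedback(message, was_correct=True)  # 'good enough' now, refined over time
```

That kind of loop is obviously not how you’d control brain surgery, but for booking tables and the other non-critical chores that fill our days, ‘good enough’ plus feedback goes a long way.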

So let’s get started building and using some ‘good enough’ assistants. If other people want to wait until they’re ‘perfected’, they can join the party later, maybe after The Singularity has occurred. I’m convinced we’ll get farther faster if we start now, rather than waiting years to begin building such technologies en masse. Let’s refocus some of our collective efforts from yet another social networking app or more cute cat videos onto more intelligent agents. Then intelligent, actionable software agents won’t be so secret anymore – in fact, they’ll be everywhere. And you’ll have more free time to spend with your cat.
