
To Bot, Or Not To Bot


Are Bots the Future of Computing?

Bots — simple computer-based [micro] services that you interact with conversationally — are being hailed by some as the next wave of computing, a profound platform shift and the most exciting technology since the iPhone.

Microsoft this week became the latest technology provider to strategically embrace bots and offer a framework for developing them. Microsoft joins Facebook, Slack, WeChat, Kik and many others. In Microsoft’s case, as with several others, bots are a part of a continued push into a broad set of technologies centered around artificial intelligence or AI.

But let’s forget AI for a moment. This trend towards functional capabilities delivered as well-bounded, discrete services goes back at least as far as IT concepts such as object-oriented programming and component-based development (think of a method or web service like “CheckBalance” or “MakeDeposit”). To some degree this basic idea is also manifest in app connection frameworks such as IFTTT and associated app integration platforms like Zapier. The rapid trend towards hosting services in cloud environments and making them easily accessible through Application Programming Interfaces (APIs) has now provided much of the infrastructure or connective tissue to enable the rise of bots.
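
To make the idea concrete, here’s a minimal sketch of a single-purpose “CheckBalance”-style service exposed as an HTTP API. It uses Flask for brevity, and the route, account data and field names are all invented for illustration:

```python
# Minimal sketch (hypothetical names and data) of exposing a
# single-purpose service such as "CheckBalance" as an HTTP API.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for a real banking backend.
ACCOUNTS = {"12345": 1042.17}

@app.route("/accounts/<account_id>/balance")
def check_balance(account_id):
    """Return the balance for an account, or a 404 if it is unknown."""
    if account_id not in ACCOUNTS:
        return jsonify(error="unknown account"), 404
    return jsonify(account=account_id, balance=ACCOUNTS[account_id])

if __name__ == "__main__":
    app.run(port=5000)
```

A bot is essentially a conversational front door onto exactly this kind of well-bounded service.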

If you’re a business looking at where technology is headed as part of your product strategy, you may be asking if bots are likely to truly transform digital products and whether that’s a good or a bad thing. Personally I think they have the potential to be transformative and that they will indeed be a good thing. As with other major tech disruptions, the transformation will almost certainly not happen overnight and as it goes mainstream it won’t necessarily look like this first wave of the technology. But I’m convinced bots are coming (some are here already) and smart businesses should start now exploring the opportunities that will come from building products using a bot model.

Wait, I Thought Bots Were Bad?

We’re not talking here about hijacked PCs running programs whose sole purpose is to spew spam, or other automated programs that ‘mindlessly’ push pre-programmed content. The bots coming onto the scene now are helpful software agents, many of them embodying at least some form of intelligence. By that I mean they have the ability to interpret a request for a service and to respond appropriately with relevant information, content or actions. Here’s the clincher in my mind: the services these bots perform need to offer real, identifiable value to the user who is interacting with the bot.

Why Is This Happening Now, You Might Ask?

In a world where everything is becoming an endpoint, bots are the little engines that interpret incoming signals, in context, and provide responses or initiate outbound actions. Think about the streams of data generated by Internet of Things (IoT) sensors and all the data traffic traveling through messaging services. This data is not just valuable en masse for historical or predictive analytics. It can be put to more immediate, localized and personalized use. Whether pinged directly through a single trigger word or command, or more passively activated using AI to interpret a chunk of data in a message, bots are driven by actionable data [intelligently] understood within a context. Devices such as sensors, beacons and smartphones, along with omnipresent network connections, always-on communications channels, vast sources of content and publicly accessible APIs, have all reached a tipping point of availability and maturity to support human interaction with the world in a way that is augmented by automated software agents: bots.
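
Here’s a small sketch of those two activation styles: an explicit trigger word versus passive spotting of actionable signals in a message stream. The commands, phrases and replies are invented, and simple keyword matching stands in for real AI interpretation:

```python
# Sketch of the two activation styles described above: an explicit
# trigger word ("/weather") versus passive spotting of actionable
# signals in message data. Everything here is illustrative.
from typing import Optional

def weather_bot(message: str) -> str:
    return "Forecast: clear skies this evening."  # stand-in for a real lookup

def parking_bot(message: str) -> str:
    return "Noted. I'll remind you before the garage closes."

TRIGGERS = {"/weather": weather_bot}       # pinged directly by a command
SIGNALS = {"parking garage": parking_bot}  # passively spotted in messages

def dispatch(message: str) -> Optional[str]:
    words = message.split()
    if words and words[0] in TRIGGERS:     # explicit trigger word
        return TRIGGERS[words[0]](message)
    for phrase, bot in SIGNALS.items():    # passive interpretation (keyword
        if phrase in message.lower():      # matching as a crude proxy for AI)
            return bot(message)
    return None                            # no bot applies to this message

print(dispatch("/weather for tonight"))
print(dispatch("Just left my car in the parking garage downtown"))
```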

While the bots may be stand-alone services, supporting a critical mass of them requires hosting them within a platform that either resides in or interfaces with our existing communications channels. Today that takes the form of smart phone OSs, SMS/text messaging services, social messaging services such as Facebook Messenger, Kik, WeChat and Line, and/or virtual personal assistants such as Google Now, Microsoft Cortana or Amazon Echo’s Alexa. Virtual personal assistants can be thought of as universal or generalized bots that orchestrate or intermediate more specialized sub-bots. Mobile payment services such as Apple Pay are also contributing to the trend. Although simple text is sufficient as a basic user interface to bots, advances in natural language interpretation and understanding, and in voice recognition and speech synthesis, provide additional, richer ways for humans and bots to interact. The technology, then, is wholly or largely in place today. Given the sheer volume of data and APIs, we need technology just to handle that volume. Technology also offers the opportunity to both simplify and augment our personal and business communications and transactions. So arguably the demand side of the equation exists now, too.
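
That orchestration idea can be sketched as a thin router: a ‘universal’ assistant scores each specialized sub-bot against the request and delegates to the best match. The sub-bots, keywords and replies below are hypothetical, with keyword overlap standing in for real intent detection:

```python
# Hypothetical sketch of a virtual-assistant-style "universal bot"
# that intermediates specialized sub-bots.
from typing import Callable, List

class SubBot:
    def __init__(self, name: str, keywords: set, handler: Callable[[str], str]):
        self.name, self.keywords, self.handler = name, keywords, handler

    def score(self, utterance: str) -> int:
        # Keyword overlap as a crude proxy for intent detection.
        return len(self.keywords & set(utterance.lower().split()))

SUB_BOTS: List[SubBot] = [
    SubBot("dining", {"restaurant", "dinner", "reservation"},
           lambda u: "Looking for a table nearby."),
    SubBot("transit", {"bus", "train", "ride"},
           lambda u: "Checking departure times."),
]

def assistant(utterance: str) -> str:
    """Route the request to the best-scoring sub-bot."""
    best = max(SUB_BOTS, key=lambda b: b.score(utterance))
    if best.score(utterance) == 0:
        return "Sorry, none of my sub-bots can handle that yet."
    return best.handler(utterance)

print(assistant("Can you get me a dinner reservation nearby"))
```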

Should My Business Dive In Now or Wait?

It’s early days and a lot remains to be ironed out, particularly in terms of bot platforms. Every technology provider seems to be building its own platform; there are few or no standards and little if any integration across platforms. There will be successes and failures and shakeouts. So it’s too early to jump in, right? Absolutely not! Any business that isn’t venturing into this space risks missing the boat as the wave of bots washes in. There are ways to venture in that mitigate the risk while providing valuable revenue and experience to businesses and real value to their customers. Here are a few tips for getting started:

  1. Start by creating bots from services that your business already has exposed as APIs or single-function apps (see the sketch following this list).
  2. Focus on bots that are easy for your customers to understand and use, and that provide real, identifiable value to your customers. It’s okay if the service is simple and the value is small, like providing limited sets of information or content in response to specific customer requests, or focusing on a commonly performed task with a small, fixed number of options.
  3. Design your bots conceptually, then implement them initially on one or two platforms within communications channels where your customers are most active and where your business has experience, presence and familiarity. That could be SMS/text messaging for a single telecom carrier or mobile OS, or an in-message bot command embedded in one of the messaging services like Kik or Skype.
  4. Let your customers know they are part of something new and that you value their input and feedback. Make the experience fun for your customers, and for your product team, too.
  5. Don’t get too far ahead in setting goals and expectations. Monitor your initial bot experiences, using your own tools or through the analytics provided as part of a bot platform or delivery channel. Don’t hesitate to iterate, pivot or rethink. You’re not betting your business on these, at least to start with, so failures don’t have a large downside. Look for where you are getting initial success, try to understand the success drivers, and replicate those as you add to the volume and complexity of your bots.
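
As a concrete starting point for tips 1 and 5, here’s a minimal sketch that wraps an existing (hypothetical) store-hours API as a bot command and records simple usage counts for later iteration. The endpoint, command name and metrics are all invented:

```python
# Illustrative sketch: wrap an existing API (tip 1) as a bot command,
# and record simple usage metrics to guide iteration (tip 5).
# The endpoint URL and command name are hypothetical.
import json
import urllib.request
from collections import Counter

METRICS = Counter()  # stands in for the analytics mentioned in tip 5

def store_hours_bot(store_id: str) -> str:
    """Bot command backed by an existing (hypothetical) store-hours API."""
    METRICS["requests"] += 1
    try:
        url = f"https://api.example.com/stores/{store_id}/hours"
        with urllib.request.urlopen(url, timeout=5) as resp:
            hours = json.load(resp)
        METRICS["successes"] += 1
        return f"We're open {hours['open']} to {hours['close']} today."
    except Exception:
        METRICS["failures"] += 1
        return "Sorry, I couldn't reach the store-hours service just now."

print(store_hours_bot("downtown-01"))
print(dict(METRICS))
```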

Keep in mind that the bot-driven era of computing will be like a marathon, or at least a 10K run, not a sprint. It’s not too soon to start venturing with bots. Along the way you’ll get a better sense for how they can benefit your customers and your business. And you’ll help determine the shape they take as they come into the mainstream as the next wave of computing.


The Positive, Sunny Side of AI

Recently I had the pleasure of presenting at DATAVERSITY’s 2015 Smart Data Conference as part of their track on artificial intelligence (AI). My presentation covered various AI technologies and products and how they are used in ‘smart applications’ in markets ranging from customer care to retail, healthcare, education, entertainment and smart homes. I also delved into intelligent virtual assistants and other forms of software agents, which, if you have read many of my blog posts here, you’ll know is a passion of mine. A video of the presentation is available, courtesy of DATAVERSITY. If you don’t already have an account with DATAVERSITY, it’s just a quick registration and they’ll send you on your way to viewing the video.

Sunny Side of AI Presentation Video

The DATAVERSITY website has lots of other great resources (articles, white papers, slides, webinars and other videos) on a wide range of topics around semantic technologies, including artificial intelligence and cognitive computing. Speaking of cognitive computing, I also participated at the conference on a panel discussing the burgeoning commercialization of cognitive computing technologies. An article about the panel session and the full video are here:

Cognitive Computing 201 Panel

And here’s another related article that provides a good introduction to cognitive computing.

The audience in both cases seemed really engaged and interested in the topics, and the discussions, both in the sessions and on the sidelines afterwards, were stimulating. I ran into a lot of familiar faces who have been working in this field for some time and who, like me, are encouraged to see the high levels of interest on the part of those developing AI technologies and products, and those using them for smart applications. There were lots of new faces, too, both fresh vendors unveiling their products and enterprises interested in exploring practical applications of AI.

While there were lots of references to the fear factor around AI, to a large degree the attendees seemed to realize that much of what’s written about AI, the killer robot stories for example, is mostly sensationalism. What doesn’t always get written about — unfortunately — are the hundreds if not thousands of everyday uses of AI that provide real benefits to businesses and consumers. That’s the positive, sunny side of AI. Hopefully my presentation helps get that word out. Attendees who are supportive of AI now have more positive material to help counter the negative coverage. Perhaps we’ll get more balanced media coverage going forward, even if the positive uses aren’t always as dramatic as sentient AIs rising up and enslaving the human race!

I hope you’ll enjoy the presentation(s), too. Please feel free to contact me if you have questions or comments, or if you want to discuss any of the topics or technologies covered.

Tony


Can Lean Start-Up Practices Help (Big) Enterprises Be More Successful at Innovation?

Innovation (Photo credit: masondan)

Congratulations to the Lean Start-Up Machine (now Javelin) team on their funding round. I wish them best of luck in helping to make enterprises more effective and successful at innovation!

In my experience with innovation in large enterprises, the problem isn’t lack of ideas, and many of the ideas do arise from customer experiences. However, I think early development work is sometimes done in more of a ‘skunkworks’ setting. That’s perhaps okay for getting to a working prototype or true MVP. But getting users/customers involved in early collaborative development, or at least early evaluation and feedback, is critical to avoid over-engineering and its associated complexity, features that are of more interest to developers than to users, and so-called ‘gold-plating’.

The biggest problem I’ve seen around innovation within large enterprises is getting buy-in from the enterprise’s own internal [management] staff. It sometimes seems that as soon as word starts to get out about an innovative product, corporate ‘antibodies’ start to form around it to block it or kill it off. Where there’s innovation there’s change, disruption, unfamiliarity and risk (of both failure and success!). Those are all things the antibodies have been trained to snuff out. But they are things start-ups have to deal with, and sometimes even thrive on.

If Javelin can help large enterprises apply lean start-up practices such as Minimum Viable Products (MVPs), hypothesis setting and testing, starting small with external testing, and measurement, feedback and iteration (sometimes even pivots and failures), maybe they can help re-train the antibodies away from the bad behaviors that inhibit innovation. Those practices can perhaps help reduce risk and build confidence that the product is feasible, that it can be developed in an incremental fashion, that it has customer value and that it can be brought to market successfully. I’m rooting for Javelin and other companies that may follow their lead. There are few things worse than wasted innovation.


What Netflix Could Do with Its Recommendation Engine to Excite Me as a Customer

You Might Like House of Cards, But I Couldn’t Possibly Comment

My colleague Peter Sweeney, the founder of Primal, and I were talking recently about Netflix using AI, specifically deep learning algorithms, as part of efforts to further improve its recommendation engine. I’ll admit, instead of being excited at the prospect of more insights being gleaned from my viewing history, my first reaction was concern about yet-another bubble along the lines of, or even worse than, the infamous search engine filter bubble, only this time for recommendation engines.

True, we learned earlier this winter that Netflix has re-engineered Hollywood. Netflix has a very rich and extensive categorization scheme — the product of analyzing movies and TV shows from an amazing number of angles, and presumably also following the trails of relationships that customers perceive among the films and TV shows that they view. I think Netflix provides good recommendations, certainly better than they did a few years ago. But frankly, the recommendations still hit dead ends quite often, and it’s easy to get stuck in the rut of more of the same old thing. So my fear upon reading about what Netflix is doing was basically this: deeper mining equals an even deeper rut — even more of the same old thing. And that could easily be the case. Just going deeper into analyses of the content itself, as well as my past preferences for it, might well add more categories to their classification scheme, but it doesn’t endear me more to Netflix if I still just get recommendations based largely on my past preferences, only now using more specialized categories. I’m still stuck in a rut.

What I want to experience is more of what I like to call ‘designed serendipity’. If Netflix or Amazon or one of their peers is truly uncovering deeper and more nuanced patterns, particularly within the content itself, but also about my viewing preferences, then they could use that new data to make the recommendation experience more interesting and more compelling for me — giving me something I could actually get excited about. How could they do that? They could start by proposing content from adjacent categories, based on walking their classification scheme. Because there would presumably be more, finer-grained categories, exploring some of the neighboring ones could add some fun while keeping the risk low of jarring me with an off-the-mark recommendation. They could also take those lower-level elements and apply them in somewhat different contexts, preserving the elements that I like but mixing in some new twists. They could even try combinations of the lower-level elements, as they’ve done fairly successfully already at higher levels of their classification scheme.
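
Here’s a hedged sketch of that ‘walk the classification scheme’ idea, using a tiny invented category graph and catalog. Recommendations come from categories one hop away from what the viewer already likes:

```python
# Sketch of "designed serendipity": recommend titles from categories
# adjacent to the one a viewer already likes. The micro-taxonomy and
# catalog below are invented for illustration.
CATEGORY_NEIGHBORS = {
    "political-thriller": ["historical-intrigue", "espionage-drama"],
    "historical-intrigue": ["political-thriller", "costume-drama"],
    "espionage-drama": ["political-thriller"],
}

CATALOG = {
    "political-thriller": ["House of Cards"],
    "historical-intrigue": ["The Tudors"],
    "espionage-drama": ["The Americans"],
}

def serendipitous_recs(liked_category, max_items=3):
    """Suggest titles one hop away, rather than more of the same."""
    recs = []
    for neighbor in CATEGORY_NEIGHBORS.get(liked_category, []):
        recs.extend(CATALOG.get(neighbor, []))
    return recs[:max_items]

print(serendipitous_recs("political-thriller"))
# -> ['The Tudors', 'The Americans']
```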

Let me use some examples to illustrate the sorts of things I’d like to see. The West Wing and House of Cards are both political dramas. But at a deeper level, The West Wing is much more about the camaraderie of the White House staff as a team, with politics and political intrigue as more of a plot device for the personal interaction. House of Cards, on the other hand, is more of a psychological thriller set in a political context. The political maneuvering and bold back-stabbing are core to the show — and for me at least, that’s what makes it fun to watch. Those are fairly subtle but significant differences that, if deep learning can expose them, would establish its customer value. Put another way, just because I like House of Cards (the Netflix version, but even more so the original BBC version) does not mean The West Wing would be a good recommendation for me, since such a recommendation is based on more superficial similarities between the shows. I’m a friendly, collaborative, team-oriented person in my real life, so I’d rather see ego-maniacal scheming and back-stabbing as part of my diversionary viewing!

If Netflix could ‘context lift’ those elements of House of Cards that I do like and then reapply them in different contexts, that would excite me. For example, because I like House of Cards, I might like The Tudors better than The West Wing, even though The Tudors is a historical drama. The Tudors has more of that scheming and back-stabbing (or head chopping!) that I like. While it’s a political drama of sorts, it isn’t one in the same sense as The West Wing and House of Cards, so it might not come up as an obvious recommendation. Making that recommendation requires something deeper and more subtle. I also happen to like The Americans, a drama that is political only in an espionage context, but again also a thriller with lots of unexpected twists and turns. And I hate to admit it, but I also like Revenge. Revenge has almost nothing to do with politics, but shares the dark scheming and plotting of House of Cards. Would Netflix be able to recommend either of them to me based on House of Cards? If they get to that level, they’d have my customer loyalty.

At that point, the only thing still missing for me would be adding even more pleasant surprises — turning down the ‘designed’ aspect and turning up the ‘serendipity’. What if I want something that’s conceptually related to a past viewing interest, but still quite different? If I watched Planet of the Apes (particularly the original), wouldn’t Jane Goodall‘s documentary for Animal Planet, Almost Human, be an interesting recommendation? It would for me! Or what if I want to broaden my horizons and try something completely different than what I’ve been watching? Can Netflix put my past preferences in a blender and recommend something really novel and out-of-the-ordinary? Or alternatively can I just throw at Netflix some topics that I’ve been thinking about or have a point-in-time interest in and get a recommendation made-to-order at that particular moment, based on what I just provided, or perhaps subtly influenced by my viewing history? Do those things, Netflix, and I might become a loyal customer for life!

Tony


A Little Rant on Innovation

Innovation (Photo credit: masondan)

Let me make it clear from the start that I am a rabid supporter of innovation. Innovation propels us forward into new technologies, products and services, rather than simply standing still or incrementally extending from the current state. This rant isn’t about innovation in concept. It is about innovation as practiced. More specifically, it is about how innovation often gets handled – or mishandled – particularly inside large, well-established, successful enterprises.

I’m going to share an example. This example is not one I simply made up; this really happened. In fact, I think it happens often enough that it is a representative example of the end game for the work products of innovation in many companies – particularly those large, well-established enterprises I mentioned before. I share the example not to pick on the company (I have great respect for them), but to call attention to the issue in hopes that raising awareness will help improve the effectiveness of future innovations occurring in similar circumstances.

This company – let’s call it Constant Corp – was a pioneer in information systems technology and built a loyal global customer base served with reliable, enterprise-class products and services. Some of the most talented engineers in the world worked there. After being hugely successful in its initial business segment, the company has never quite been able to replicate that success to the same degree in major new markets. Constant Corp couldn’t innovate, right? Wrong! During the past 20 years Constant Corp successfully innovated (from a purely technological perspective at least) and developed nascent products in the following areas (there probably were many others):

  • a unified multimedia communications environment
  • a component-based development and deployment environment built on a rich metadata repository
  • a powerful image format for what was at the time a purely text-based Internet
  • an on-demand streaming video service using advanced compression algorithms
  • natural language processing for mobile telecommunications
  • ‘hypervideo’ – semantic analysis and tagging of objects in multimedia, hosted in an open source-based cloud with enterprise-class cloud management technologies.

Constant Corp is not the leader today in any of those markets. Most of these innovations never saw the light of day outside the company. These products were incubated inside Constant Corp by what were essentially the equivalents of small venture-backed start-ups. This work was often done years before other companies – many of them new start-ups – developed similar products and successfully built multi-million dollar businesses around them.

As I said, I don’t think Constant Corp’s story is by any means unique. Companies that are quite successful in a major market are more often than not unable to replicate that success by applying their subsequent innovation to new markets, or even to the same market for that matter. They become overly conservative and unnecessarily risk averse. For a while at least, talented engineers still innovate, but their innovations are slowly starved to death or killed off outright by growing numbers of corporate ‘antibodies’ whose job, it would seem, is to prevent new, innovative products from ever reaching the market. In some cases there is concern about eroding existing markets, which seems reasonable. The argument can be made, though, that if you don’t introduce a new technology yourself, a [new] competitor likely will. And in that case you will have very little control over the rate of migration to the new technology. But even valid concern about protecting existing markets shouldn’t spill over to products targeting new markets. Instead, innovations in new markets are sometimes stopped in their tracks by declaring them too far away from the company’s existing markets, even in the case of adjacent markets that should be relatively easy to pursue. When these behaviors happen, the companies that allow them become, in effect, victims of their own previous success, unable to capitalize on their subsequent innovation.

So what’s the solution here? Let’s start by acknowledging that large enterprises are often not successful at leveraging innovation from the inside out. Maybe that’s not such a good model anyway. Maybe a better model is to encourage outside innovators and entrepreneurs to do what they do well: take risks, try new things, fail more often than not, but sometimes succeed. Seeing large enterprises funding ventures outside their own walls and acquiring start-ups with increasing frequency is in many ways a positive development. So let’s embrace that model and make it work even more effectively. It’s odd to think that in some cases comparable technology developed outside might be valued more than something developed in a company’s own R&D labs, but maybe there is something about technology ‘bred in the wild’ that gives it an evolutionary edge. If this model works, maybe we’ll see more and better results for our collective innovation spend. Perhaps innovative technologies will find their way to market in better ways. Perhaps they’ll benefit from the combination of the spunkiness of start-ups augmented with the experience that large enterprises can bring in areas such as scalability and customer focus. It’s certainly worth a try.

From Huffington Post: Why Big Companies Don’t Innovate

From WSJ: Javelin to Bring Lean Start-Up Model to Big Enterprises


Is CIA Behind Your Smart Phone’s Virtual Assistant?

No, I don’t mean the CIA as in Central Intelligence Agency, although based on the recent revelations about the US government’s intrusion into our electronic lives, they might well be. I mean use of central(ized) intelligent agents (CIA). For example, does the intelligent virtual assistant app on your smart phone require all the data about you as an individual, and all the knowledge rules about what to do with that data, to reside in some centralized repository that embodies (or maybe, disembodies) you and your life? Put more bluntly, does the agent capability or agency depend upon central or centralized intelligence?

So far at least, based on popular intelligent virtual assistants like Google Now or Apple’s Siri, the answer would seem to be yes. Clearly companies like Google and Apple have a vested business interest in being the one place where all the data about your daily life is collected, analyzed and utilized to enable agents like Google Now and Siri to help make your life a little easier. And of course that data happens to be useful for targeting ads and services at you, too. In all fairness though, Google, Apple and others who may be taking a centralized approach to empowering intelligent agents aren’t necessarily doing so purely out of their own self-interest. Arguably it makes things a lot easier from a technical perspective to have all that data and logic in one spot, harmonized and maintained by one company. It solves, or at least mitigates, the myriad integration problems that would otherwise have to be addressed with the alternative, ‘distributed intelligence agency’ approach.

What are some of those problems? Well, to illustrate, here’s a (simple?) example. If my intelligent virtual assistant doesn’t know that I drove to a place near my current geo-location and parked my car in a parking garage that closes at 10:00 PM, it might not know whether to recommend that I walk or drive to a nearby restaurant that it suggested for dinner based on my interest in Asian Fusion cuisine. Is it likely too far to walk (given my health, weight and normal walking habits)? Is it a safe neighborhood to walk in? At what time of day would I be walking and when does it get dark at my location? If I made reservations for more than myself, might others be walking with me? Will the weather be conducive to walking? If walking doesn’t seem reasonable, what are the alternatives (public transit, Uber/Lyft, or my car that’s presently in the parking garage)? If I decide to walk, am I likely to make it back to the parking garage before it closes? If it is too far to walk, but I prefer not to drive or take transit, are there other restaurants nearby with variants of cuisine that are similar to Asian Fusion? Or are there other restaurants with altogether different cuisine that would meet other dining interests or goals that I have expressed (e.g., “Someday I’d like to try one of those restaurants where everything on the menu contains garlic”)? What are my plans for after dinner? Do I have an early meeting or an early flight in the morning? How far is my home or my hotel? Moving seamlessly and quickly across the data and apps, including times, activities, tasks, places, people, personal preferences and other contexts, without loss of data or context, might well be easier if there’s one spot where all that data lives and one entity that manages it for me. But that comes at the price of “lock-in” and rigidity.
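
Just to make the scale of that context problem visible, here’s a toy rule-based sketch of the walk-or-drive decision. Every threshold and input is invented; a real assistant would have to gather these signals from many separate apps, services and sensors:

```python
# Toy sketch of the walk-or-drive decision described above. All
# thresholds and inputs are invented; a real assistant would pull
# these signals from many distributed apps and sensors.
from dataclasses import dataclass
from datetime import time

@dataclass
class Context:
    distance_km: float          # from current location to the restaurant
    dinner_ends: time           # estimated end of the meal
    garage_closes: time         # when the parking garage shuts
    raining: bool
    comfortable_walk_km: float  # from the user's health/walking habits

def suggest_mode(ctx: Context) -> str:
    if ctx.distance_km > ctx.comfortable_walk_km:
        return "drive, or take transit or a ride-share"
    if ctx.raining:
        return "take transit or a ride-share"
    if ctx.dinner_ends >= ctx.garage_closes:
        return "drive, so your car isn't locked in overnight"
    return "walk"

ctx = Context(distance_km=0.8, dinner_ends=time(21, 0),
              garage_closes=time(22, 0), raining=False,
              comfortable_walk_km=1.5)
print(suggest_mode(ctx))  # -> walk
```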

I’m a big fan of distributed systems in general, and so here in the specific case of intelligent software agents, my preference would be for distributed data and a distributed agent framework to enable collaboration across the various data sources, apps and entities that might be involved in agent-based transactions of this nature. That will likely take standards, de facto and de jure, or at least agreements among groups of vendors working in the intelligent software agent space. That includes vendors of both specialized ‘vertical agents’ and more general ‘horizontal agents’. Will the W3C step up to this challenge? Will some other organization or body? Is the community of Android developers powerful enough to pressure Google to open things up, at least when it comes to agents on Android? What about similarly for Apple and iOS? Without such action, the intelligent software agent space is likely to be driven entirely by a few big, well-known players who will compete through their own proprietary technologies built on the model of central(ized) intelligence agency. As technologists and/or consumers, is that what we really want?
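
One small piece such standards would need is a common message envelope that agents from different vendors could exchange. No such standard exists today; this is a purely illustrative sketch:

```python
# Purely illustrative sketch of a vendor-neutral message envelope that
# distributed agents might exchange; every field name is hypothetical.
import json
from datetime import datetime, timezone

def agent_message(sender: str, recipient: str, intent: str, payload: dict) -> str:
    """Build an envelope a receiving agent could parse regardless of vendor."""
    return json.dumps({
        "version": "0.1",          # hypothetical schema version
        "sender": sender,          # e.g. a DNS-style agent identifier
        "recipient": recipient,
        "intent": intent,          # machine-readable request type
        "payload": payload,        # intent-specific data
        "sent_at": datetime.now(timezone.utc).isoformat(),
    })

print(agent_message("dining-agent.example", "parking-agent.example",
                    "query.closing_time", {"garage_id": "g-42"}))
```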

Where do you stand? Are you for or against a CIA? Keep in mind, your intelligent virtual assistant might be listening to your answer!

Tony


Secret Agent Action

This blog post isn’t about some superhero or secret agent code-named Action. It’s about enabling intelligent software agents to take action. As I’ve been writing periodically about intelligent software agents or virtual personal assistants, I’ve not shied away from saying there are significant challenges to making them commonplace in our everyday lives. But that doesn’t mean we shouldn’t be starting to build intelligent agents today.

One challenge is providing software agents with knowledge about the domains in which they are intended to operate, in other words making software agents ‘intelligent’. In my last post, “Teaching a Martian to Make a Martini”, I tried to provide a sense of the scale of the challenge involved in making software agents intelligent, and pointed to ways to get started using semantic networks (whether constructed, induced, blended or generated). There are at least two other significant challenges: describing or modeling the actions to be performed, and specifying the triggers/conditions, inputs, controls and expected outputs of those actions. These are of course intertwined with having underlying knowledge of the domain, as they involve putting domain knowledge in a form and context such that the software agent can make use of it to perform one or more tasks for its human ‘employer’.

Here’s the first secret to producing agents capable of action: create the models of the actions or processes (at least in rough form) through ‘simply’ capturing instructions via natural language processing, perhaps combined with haptic signals (such as click patterns) from the manual conduct of the tasks on devices such as smart phones. Thinking of the example from my previous post, this is the equivalent of you telling the Martian the steps to be executed to make a martini, or going through the steps yourself with the Martian observing the process. In either case, this process model includes the tasks to be performed and the associated work flow (including the critical path and optional or alternative steps), as well as triggers and conditions (such as prerequisites, dependencies, etc.) for the execution of the steps and the inputs, outputs and controls for those steps. Keywords extracted from the instructions can serve as the basis for building out a richer, underlying contextual model in the form of a semantic network or ontology.

But there is more work to be done. A state like the existence of a certain input, or the occurrence of an event such as the completion of a prior task, can serve as a trigger, and the output of one process can be an input to one or more others. There can be loops, and since there can be unexpected situations, there can be other actions to take when things don’t go according to plan. Process modeling can get quite complex. Writing code can be a form of process modeling, where either the code itself is the model or the model is created in a form that some state machine can execute. But we don’t want to have to do either of those in this case. The goal should be to naturally evolve these models, not to require that they be developed in all their detail before they get used by an agent for the first time. And more general models that can be specialized as they get applied are the best-case scenario.
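
As a rough sketch of what a captured process model might look like as data, here’s the martini example from the earlier post expressed as steps with inputs, outputs and triggers. The field names and the readiness check are my own invention:

```python
# Rough sketch of a process model captured from spoken instructions.
# Field names are invented; the martini steps echo the earlier post.
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Step:
    name: str
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    triggers: List[str] = field(default_factory=list)  # events/states that start it
    optional: bool = False

MAKE_MARTINI = [
    Step("chill glass", inputs=["glass", "ice"], outputs=["chilled glass"]),
    Step("mix", inputs=["gin", "vermouth", "ice"], outputs=["mixed drink"],
         triggers=["chilled glass exists"]),
    Step("garnish", inputs=["mixed drink", "olive"], outputs=["martini"],
         optional=True),
]

def ready_steps(available: Set[str], model: List[Step]) -> List[str]:
    """Steps whose inputs are all available right now."""
    return [s.name for s in model if all(i in available for i in s.inputs)]

print(ready_steps({"glass", "ice", "gin", "vermouth"}, MAKE_MARTINI))
# -> ['chill glass', 'mix']
```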

I know a fair bit about complex process models. I encoded a model of the product definition (i.e., product design) process for aircraft (as developed by a team at Northrop Corporation — with a shout out here to @AlexTu) into a form executable by an artificial intelligence/expert system. My objective at the time was to test an approach to modeling of a process ontology and an associated AI technology built around it. The objective of Northrop was to be able to have a software system ‘understand’ things like the relationships among the process steps, related artifacts (e.g., input and outputs) and conditionals and to be able to optimize the process through eliminating unnecessary steps and introducing more parallelism. In other words, a major goal of the project was to enable more of what was called at the time ‘concurrent engineering’, both to shorten the time needed for product definition and to catch problems with the design of the product as early in the process as possible (since the later such problems are caught, the more they cost to correct — with the worst case being of course that a problem isn’t discovered until the aircraft has been built and is deployed and in use for its mission in the field). The project was pretty darned impressive, and the technology worked well as an ‘assistant’ to product engineers looking to improve their processes.
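
The ‘introduce more parallelism’ goal can be illustrated with a small sketch: given step dependencies, group the steps into successive waves in which everything in a wave has its prerequisites complete and can run concurrently. The dependency data here is invented, not Northrop’s actual process:

```python
# Illustrative sketch of finding parallelism in a process model:
# group steps into "waves" whose prerequisites are all complete, so
# each wave can run concurrently. The dependencies are invented.
def parallel_waves(deps):
    """deps maps step name -> set of prerequisite step names."""
    remaining, done, waves = dict(deps), set(), []
    while remaining:
        wave = {s for s, pre in remaining.items() if pre <= done}
        if not wave:
            raise ValueError("cycle in process model")
        waves.append(wave)
        done |= wave
        for s in wave:
            del remaining[s]
    return waves

design = {
    "requirements": set(),
    "aerodynamics": {"requirements"},
    "structures": {"requirements"},
    "integration": {"aerodynamics", "structures"},
}
print(parallel_waves(design))
# -> [{'requirements'}, {'aerodynamics', 'structures'}, {'integration'}]
```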

Many of the tasks we deal with on a regular basis in our everyday lives aren’t as complex or specialized as the product definition process for an aircraft. But even relatively simple processes can be tedious to encode if every detail has to be explicitly modeled. Here is where another secret comes in: rather than model detailed conditionals for things like triggers, why not use statistical data about the presence of certain indicative concepts in input and output data associated with the process, along with refinements to the model based on user feedback? Clearly this approach makes the most sense for non-critical processes. You don’t want to try it for brain surgery (at least not if you are one of the first patients). But virtual personal assistants and other agents aren’t intended to do jobs for us entirely on their own, so much as to help us do our job (at least at first). So if we have some patience up-front and are willing to work with ‘good enough’, I think we could see a lot more examples of such software agents. If we have expectations that the agents know everything and are right close to 100% of the time, we’ll see a lot fewer. It’s that simple, I think.
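
Here’s a hedged sketch of that statistical-trigger idea: score an incoming message against weighted indicator terms, activate when the score clears a threshold, and nudge the weights on user feedback. The terms, weights and threshold are all invented:

```python
# Sketch of a statistical trigger with feedback, as described above.
# Indicator terms, weights, and the threshold are all invented.
WEIGHTS = {"flight": 0.6, "departure": 0.5, "gate": 0.4, "boarding": 0.5}
THRESHOLD = 0.8

def should_trigger(message: str) -> bool:
    """Fire when enough indicative terms appear in the message."""
    words = set(message.lower().split())
    score = sum(w for term, w in WEIGHTS.items() if term in words)
    return score >= THRESHOLD

def feedback(message: str, was_helpful: bool, rate: float = 0.1) -> None:
    """Nudge the weights of the terms that fired, based on user feedback."""
    words = set(message.lower().split())
    for term in WEIGHTS:
        if term in words:
            WEIGHTS[term] += rate if was_helpful else -rate

msg = "your flight departure is delayed"
if should_trigger(msg):          # 0.6 + 0.5 = 1.1 >= 0.8
    print("travel-assistant bot activates")
    feedback(msg, was_helpful=True)
```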

So let’s get started building and using some ‘good enough’ assistants. If other people want to wait until they’re ‘perfected’, they can join the party later, maybe after The Singularity has occurred. I think it is time to start now. And I’m convinced we’ll get farther faster if we do start now, rather than waiting until years from now to begin building such technologies en masse. Let’s refocus some of our collective efforts from yet-another social networking app or more cute cat videos onto more intelligent agents. Then intelligent, actionable software agents won’t be so secret anymore – in fact, they’ll be everywhere. And you’ll have more free time to spend with your cat.
