AI’s Missing Ingredient – Intelligent Content

My Saturday mornings used to be full of artificial intelligence (AI). Thanks to the TV shows I watched and the comics and books I read, I grew up expecting to live in a world of robots that could think and talk, vehicles of all sizes that would whisk me off to far-away destinations with no need for drivers or pilots, and computers that would respond to voice commands and know the answer to just about everything.

I may not yet have that robot butler, and my first experience with a self-driving car left me more apprehensive than impressed, but in other ways artificial intelligence is now part of my everyday existence, and in ways that I don’t even think about.

One of the first things I do each morning is ask Siri for the day’s weather forecast and then check to make sure that my Nest thermostat is reacting accordingly. During the day, Pandora’s predictive analytics choose my music, and in the evening Netflix serves up my favorite shows and movies. My books arrive courtesy of Amazon, and there’s a fair chance that some of those purchases were driven by recommendations generated via AI.

And now every day I see several posts about content generated by the AI-driven chatbot ChatGPT (most of which seems very repetitive to me), while my artist friends debate the ethics of AI-generated art (or whether it is art at all).

It seems to me that we are on the edge of a potential leap forward in the application of AI, or perhaps more accurately we are making noticeable strides in the application of Machine Learning (ML) rather than true AI.

Outdated practices hamper AI advances

What we have today is just a small representation of the promise of AI, and that promise has not yet been realized.

Many companies and organizations still use older technology and systems that get in the way of a truly seamless AI customer experience. When the systems we already have don’t interact, and companies continue to build point-solution silos, duplicate processes across business units, or fail to take a holistic view of their data, content, and technology assets, then AI systems will continue to pull from a restricted set of information.

Over the past several years, as I have talked and worked with companies that are pursuing AI initiatives, I have noticed that the majority of those projects fail for a common reason: AI needs intelligent content. It may not be the only reason, but it’s definitely a common denominator.

AI needs intelligent content

No artificial intelligence proof of concept, pilot program, or full implementation will scale without the fuel that connects systems to users — content. And not just any content, but the right content at the right time to answer a question or move through a process. AI can help automate mundane tasks and free up humans to be more creative, but it needs the underpinning of data in context — and that is content, specifically content that is intelligent. According to Ann Rockley and Charles Cooper, intelligent content is “content that’s structurally rich and semantically categorized and therefore automatically discoverable, reusable, reconfigurable, and adaptable.” [Ann Rockley and Charles Cooper: Managing Enterprise Content: A Unified Content Strategy, Berkeley: New Riders, 2012]

The way we deliver and interact with content is changing. It used to be good enough to create large, monolithic pieces of content (manuals, white papers, print brochures, and the like) and publish them in either a traditional broadcast model or a passive mode. We would then hope that, in the best case, we could drive our customers to find our content or, in the worst case, that whoever needed it would stumble across it via search or navigation.

With the rise of new delivery channels and AI-driven algorithms, that has changed. We no longer want to just consume content; we want to have conversations with it. The broadcast model has given way to an invoke-and-respond model. To meet the needs of new delivery models like AI, our content needs to be active and delivered proactively. We need to build intelligent content that supports an advanced publishing process, one that leverages data and metadata, coordinates content efforts across departmental silos, and makes smart use of technology, including, increasingly, artificial intelligence and machine learning.

In addition to Rockley and Cooper’s definition of intelligent content, our content should also be modular, coherent, self-aware, and quantum. Here are definitions of those four characteristics:

  • Modular: existing in smaller, self-contained units of information that address single topics.
  • Coherent: defined, described, and managed through a common content model so that it can be moved across systems.
  • Self-Aware: connected with semantics, taxonomy, structure, and context.
  • Quantum: made up of content segments that can exist in multiple states and systems at the same time.

Intelligent content, built on a common content and semantics model that lets systems speak the same language as content moves across silos, may be the key to resolving the technology disconnect that is holding AI back from even greater acceptance.
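
To make that idea concrete, here is a minimal, hypothetical sketch in Python of what one modular, semantically tagged content unit might look like under a shared content model. The ContentModule class, its fields, and the taxonomy terms are illustrative assumptions, not a standard or an existing product.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ContentModule:
    """One modular unit of intelligent content: a single topic described
    by a shared content model rather than by a page or document layout."""
    module_id: str                     # stable identifier other systems can reference
    topic_type: str                    # e.g. "task", "concept", "reference" (hypothetical types)
    title: str
    body: str
    semantics: Dict[str, str] = field(default_factory=dict)  # taxonomy terms: product, audience, intent
    channels: List[str] = field(default_factory=list)        # where this unit may be delivered

    def matches(self, **criteria: str) -> bool:
        """Let a downstream system (chatbot, portal, doc build) discover this
        module by its metadata instead of by where it happens to be stored."""
        return all(self.semantics.get(key) == value for key, value in criteria.items())


# A single reusable answer, tagged once, deliverable anywhere.
reset_steps = ContentModule(
    module_id="proc-password-reset",
    topic_type="task",
    title="Reset your password",
    body="1. Open Settings. 2. Choose Security. 3. Select Reset password.",
    semantics={"product": "portal", "audience": "end-user", "intent": "password-reset"},
    channels=["help-center", "chatbot", "email"],
)

# An AI assistant and a help center can both pull the same module by meaning, not location.
if reset_steps.matches(intent="password-reset", audience="end-user"):
    print(reset_steps.title)
    print(reset_steps.body)
```

Because the module is discovered through its metadata rather than its storage location, the same unit could feed a help-center page, a chatbot answer, or an email response without being rewritten for each channel.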

Machine Learning Isn’t Rocket Science

Take two astrophysicists, an Apollo engineer, a guy who designed parts of the International Space Station, a professor of robotics, and a random science fiction writer, and what do you have? It sounds like a dream sequence from the TV show “The Big Bang Theory,” or the start of a science nerd joke. In fact, it was the makeup of a panel at a recent science fiction convention where I was one of the guests. The panel was ostensibly meant to be a retrospective on the days of Apollo, but like many such conversations, it soon turned to thinking about the future, which led to the subject of machine-learning-driven artificial intelligence and its current capabilities.

I expected an enthusiastic discourse, and so I was surprised when most of these actual rocket scientists seemed more ambivalent about the technology and its potential impacts.

A couple of observations caught my attention enough to tweet them out at the time:

“ML is great at recognizing patterns but not much else.”

“ML assumes tomorrow is going to be the same as today.”

Yet it seems these technologies are being received more enthusiastically elsewhere. Nearly every customer experience discussion, and the majority of CX projects my team is engaged in these days, include some mention of machine learning and artificial intelligence (and often the two are used synonymously, although they are different). That got me thinking: how do the somewhat downbeat observations of a panel of space experts play into the world of customer data and the ways we try to infer context from it?

‘ML Is Great at Recognizing Patterns but Not Much Else’

Machine learning is usually defined as “a set of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions.” It’s a subset of artificial intelligence that relies on patterns and inference to draw conclusions. In other words, as the scientists observed, it’s great at doing what it is meant to do: pattern recognition.

That means it can see what is happening in a data set, but not why it’s happening. That still (at the moment, anyway) requires human involvement to derive context based on experience, knowledge, and a degree of intuition.

Machine learning can greatly reduce the workload and automate the process of recognizing patterns of behavior in large sets of customer data, but it is not a magic panacea for developing an understanding of why customers do what they do.
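
As an illustration of that limit, here is a minimal sketch in Python using synthetic data, made-up column meanings, and scikit-learn’s k-means clustering; it is not drawn from any real project. The algorithm finds groups of customers who behave alike, but nothing in its output says why they behave that way.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Pretend each row is a customer: [visits per month, average order value].
casual       = rng.normal(loc=[2, 20],  scale=[1, 5],  size=(100, 2))
regulars     = rng.normal(loc=[10, 35], scale=[2, 8],  size=(100, 2))
big_spenders = rng.normal(loc=[4, 120], scale=[1, 20], size=(100, 2))
customers = np.vstack([casual, regulars, big_spenders])

# The algorithm recognizes the three behavioral patterns without explicit rules...
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)

for cluster in range(3):
    group = customers[labels == cluster]
    print(f"cluster {cluster}: {len(group)} customers, "
          f"avg visits {group[:, 0].mean():.1f}, avg order ${group[:, 1].mean():.0f}")

# ...but the "why" (a loyalty program? a competitor closing? a holiday?) is not in the output.
```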

‘ML Assumes Tomorrow Is Going to Be the Same as Today’

The data we get from machine learning is a reflection of what happened the day the data was captured. For the purpose of pattern matching, there is an underlying assumption that the next set of data is going to be similar enough for the patterns and models it recognized to still be applicable.

Machine learning is not a predictive tool. It is a great way to analyze a lot of data and an efficient way to learn about repetitive behavior. But that’s it. The danger is that we take that baseline and believe that is how things will always be: our customers acted that way yesterday, so they will act the same way tomorrow. If that were truly the case, to paraphrase Henry Ford’s observation, we’d still be riding horses. ML does not take into account the impact of disruptive social or technological influences. Overreliance on technologies like ML, without understanding their role in developing a broader understanding of our customers, can be just as much a blocker to delivering a good customer experience as any older system or technology.
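
To make that concrete, here is a minimal sketch in Python with synthetic data and a made-up feature (time on site); the scenario and numbers are illustrative assumptions. A model fit on yesterday’s behavior keeps scoring well on more of the same, then quietly degrades when the behavior it learned from shifts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate_customers(buy_threshold, n=500):
    """Hypothetical feature: minutes on site. Label: did the customer buy?"""
    minutes = rng.uniform(0, 30, size=(n, 1))
    bought = (minutes[:, 0] > buy_threshold).astype(int)
    return minutes, bought

# Yesterday: customers who browsed for more than about 10 minutes tended to buy.
X_yesterday, y_yesterday = simulate_customers(buy_threshold=10)
model = LogisticRegression().fit(X_yesterday, y_yesterday)

# More of the same behavior: the learned pattern still applies.
X_same, y_same = simulate_customers(buy_threshold=10)
print("same behavior:   ", round(model.score(X_same, y_same), 2))

# Tomorrow something disruptive changes behavior (a competitor launch, a pandemic):
# now only very long sessions convert, and yesterday's pattern quietly misleads us.
X_shifted, y_shifted = simulate_customers(buy_threshold=25)
print("shifted behavior:", round(model.score(X_shifted, y_shifted), 2))
```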

We’ve Got a Long Way to Go With Machine Learning

When my wife and I get into my car on a Saturday morning, the ML system connected to my phone that analyzes my movements assumes we are heading for our favorite local diner. While that’s true around 80% of the time, on the odd weekend we head off in another direction, and the phone and GPS literally get lost for a while.

We have a long way to go (both figuratively and literally) with machine learning before it drives a true artificial intelligence-driven customer experience.