
My Saturday mornings used to be full of artificial intelligence (AI). Thanks to the TV shows I watched and the comics and books I read, I grew up expecting to live in a world of robots that could think and talk, vehicles of all sizes that would whisk me off to far-away destinations with no need for drivers or pilots, and computers that would respond to voice commands and know the answer to just about everything.
I may not yet have that robot butler, and my first experience with a self-driving car left me more apprehensive than impressed, but artificial intelligence is now part of my everyday existence, often in ways that I don't even think about.
One of the first things I do each morning is ask Siri for the day’s weather forecast and then check to make sure that my Nest thermostat is reacting accordingly. During the day, Pandora’s predictive analytics choose my music, and in the evening Netflix serves up my favorite shows and movies. My books arrive courtesy of Amazon, and there’s a fair chance that some of those purchases were driven by recommendations generated via AI.
And now, every day, I see several posts about content generated by the AI-driven chatbot ChatGPT (most of which seems very repetitive to me), while my artist friends debate the ethics of AI-generated art (or whether it is art at all).
It seems to me that we are on the edge of a potential leap forward in the application of AI, or, perhaps more accurately, that we are making noticeable strides in the application of machine learning (ML) rather than true AI.
Outdated practices hamper AI advances
What we have today is just a small representation of the promise of AI, and that promise has not yet been realized.
Many companies and organizations still use older technologies and systems that get in the way of a truly seamless AI customer experience. When existing systems don't interact with one another, and companies continue to build point-solution silos, duplicate processes across business units, or fail to take a holistic view of their data, content, and technology assets, AI systems will continue to pull from a restricted set of information.
Over the past several years, as I have talked and worked with companies that are pursuing AI initiatives, I have noticed that the majority of those projects fail for a common reason: AI needs intelligent content. It may not be the only reason, but it's definitely a common denominator.
AI needs intelligent content
No artificial intelligence proof of concept, pilot program, or full implementation will scale without the fuel that connects systems to users — content. And not just any content, but the right content at the right time to answer a question or move through a process. AI can help automate mundane tasks and free up humans to be more creative, but it needs the underpinning of data in context — and that is content, specifically content that is intelligent. According to Ann Rockley and Charles Cooper, intelligent content is “content that’s structurally rich and semantically categorized and therefore automatically discoverable, reusable, reconfigurable, and adaptable.” [Ann Rockley and Charles Cooper: Managing Enterprise Content: A Unified Content Strategy, Berkeley: New Riders, 2012]
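To make that definition a bit more concrete, here is a rough Python sketch of what "structurally rich and semantically categorized" content might look like as a data structure, and how the semantics make it automatically discoverable. All of the names here (ContentModule, discover, the sample modules) are invented for illustration, not drawn from any particular tool:

```python
from dataclasses import dataclass, field

@dataclass
class ContentModule:
    """A hypothetical intelligent-content chunk: structure plus semantics."""
    id: str
    title: str
    body: str                     # the structurally distinct piece of prose
    content_type: str             # structural role, e.g. "procedure" or "faq-answer"
    taxonomy: list[str] = field(default_factory=list)  # semantic categories
    audience: str = "general"     # context metadata used for adaptive delivery

# Because each module carries its own semantics, a downstream system
# (a chatbot, a search index, a publishing pipeline) can find and reuse it
# without a human curating the hand-off.
library = [
    ContentModule("m-101", "Reset your thermostat", "Hold the ring for ten seconds...",
                  "procedure", ["thermostat", "troubleshooting"]),
    ContentModule("m-102", "Warranty overview", "Coverage lasts two years...",
                  "reference", ["warranty", "policy"]),
]

def discover(topic: str, modules: list[ContentModule]) -> list[ContentModule]:
    """'Automatically discoverable': filter purely on semantic metadata."""
    return [m for m in modules if topic in m.taxonomy]

print([m.id for m in discover("troubleshooting", library)])  # ['m-101']
```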
The way we deliver and interact with content is changing. It used to be good enough to create large, monolithic pieces of content (manuals, white papers, print brochures, and so on) and publish them in either a traditional broadcast model or a passive mode. We would then hope that, in the best case, we could drive our customers to find our content or, in the worst case, that whoever needed it would stumble across it via search or navigation.
With the rise of new delivery channels and AI-driven algorithms, that has changed. We no longer want to just consume content; we want to have conversations with it. The broadcast model has changed to an invoke-and-respond model. To meet the needs of new delivery models like AI, our content needs to be active and delivered proactively. We need to build intelligent content that supports an advanced publishing process that leverages data and metadata, coordinates content efforts across departmental silos, and makes smart use of technology, including, increasingly, artificial intelligence and machine learning.
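As a rough illustration of that shift from broadcast to invoke-and-respond, imagine an assistant that, instead of pointing a customer at a manual, pulls back only the chunk whose metadata matches the question. This is a sketch with invented names and data, not any real product's API:

```python
# Broadcast model: publish the manual and hope the reader finds page 47.
# Invoke-and-respond model: the channel asks for content that matches the
# user's intent, and the content repository answers with just that chunk.

CONTENT = [
    {"id": "faq-12", "intent": "reset-password", "channels": ["chatbot", "web"],
     "body": "Select 'Forgot password' on the sign-in screen, then..."},
    {"id": "faq-31", "intent": "cancel-subscription", "channels": ["chatbot"],
     "body": "Open Account > Billing and choose 'Cancel plan'."},
]

def respond(intent: str, channel: str) -> str:
    """Return the chunk that answers this intent on this channel, if any."""
    for chunk in CONTENT:
        if chunk["intent"] == intent and channel in chunk["channels"]:
            return chunk["body"]
    return "Sorry, I don't have an answer for that yet."

# The chatbot invokes; the content responds.
print(respond("reset-password", "chatbot"))
```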
In addition to meeting Rockley and Cooper's definition of intelligent content, our content should also be modular, coherent, self-aware, and quantum. Here are definitions of those four characteristics:
- Modular: existing in smaller, self-contained units of information that address single topics.
- Coherent: defined, described, and managed through a common content model so that it can be moved across systems.
- Self-aware: connected with semantics, taxonomy, structure, and context.
- Quantum: made up of content segments that can exist in multiple states and systems at the same time.
Intelligent content with a common content and semantics model, one that allows systems to talk the same language when moving content across silos, may be the key to resolving the technology disconnect that is holding AI back from even greater acceptance.
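To end with a concrete picture of what that common model buys us: when every system speaks the same content language, a single module can cross silos and exist in more than one state at the same time. Again, this is only a sketch, with formats and names invented for illustration:

```python
import json

# One module, described through a common content model (the "same language"
# every system speaks), so it can move across silos without being rewritten.
module = {
    "id": "note-7",
    "type": "safety-note",
    "taxonomy": ["installation", "safety"],
    "text": "Switch off power at the breaker before removing the cover.",
}

def to_chatbot_reply(m: dict) -> str:
    """Rendition for a conversational channel."""
    return f"Heads up: {m['text']}"

def to_web_fragment(m: dict) -> str:
    """Rendition for a web publishing pipeline."""
    return f'<aside class="{m["type"]}">{m["text"]}</aside>'

# 'Quantum': the same content exists simultaneously in several states and systems.
print(to_chatbot_reply(module))
print(to_web_fragment(module))
print(json.dumps(module))  # and as raw data for an AI system to index
```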