What level of AI is right for your DAM?

When you read about the artificial intelligence (AI) capabilities today's DAM vendors offer, you'll be faced with a myriad of choices. You'll find everything from claims of automated, out-of-the-box AI to the ability to 'train your own AI.' To help make sense of these claims, I categorize AI into three buckets: generic, learned, and self-learning AI.

Generic AI: Easy, But Lacks Context

Generic AI is typically already in a DAM, and leverages standard AI services from companies like Google, Amazon, Microsoft, IBM, and others. The underlying technology works with pre-trained models and/or algorithms and is developed to work for generic content and vocabularies. Essentially, these models speak a common language that most people—across companies, industries, etc.—can understand.

An example of a typical DAM application built on generic AI is the commonly-available auto-tagging service. When new content is uploaded to the DAM, the AI service will describe the content with tags most people are familiar with so that the content is discoverable when searching for these words in the DAM.
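To make the auto-tagging flow concrete, here is a minimal sketch in Python. The prediction step is mocked with hard-coded scores; a real DAM would call a vendor service (Google Vision, Amazon Rekognition, etc.), and the `mock_generic_model` function, tag scores, and 0.80 cutoff are all assumptions for illustration.

```python
# Hypothetical sketch: applying a generic pre-trained tagging service
# to a newly uploaded asset. The prediction step is mocked; a real DAM
# would call a vendor API instead.

CONFIDENCE_THRESHOLD = 0.80  # assumed cutoff; real services expose similar knobs

def mock_generic_model(asset_name: str) -> dict[str, float]:
    """Stand-in for a vendor's pre-trained model: returns tag -> confidence."""
    return {"cat": 0.97, "dog": 0.93, "animal": 0.99, "grass": 0.41}

def auto_tag(asset_name: str, threshold: float = CONFIDENCE_THRESHOLD) -> list[str]:
    """Keep only tags the model is confident about, sorted for stable search."""
    predictions = mock_generic_model(asset_name)
    return sorted(tag for tag, conf in predictions.items() if conf >= threshold)

print(auto_tag("picture_a.jpg"))  # ['animal', 'cat', 'dog']
```

The key design point is the confidence threshold: the DAM only stores tags the service is reasonably sure about, so assets become discoverable by those words without flooding the library with noise.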

Learned AI: Provides Business Context, But Is Time Intensive

Learned AI uses a trained model that's specific to your company. It can be trained to recognize your brand, products, people, etc. and taught to speak your own company- or industry-specific vocabulary. To train a model, you need a set of training assets per tag or concept you want to train on. The more diversity your training set has, the better your model will be at recognizing your objects. This is what is referred to as the model's performance.

If tags are visually dispersed (visually dissimilar), you will need fewer training images, for example, 50 images per tag. But if your tags are visually close (visually similar), then you may need closer to 100 or more training images per tag. Note that for real-world scenarios these numbers will vary and depend strongly on the specific data set and the AI technology used.
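These rules of thumb can be turned into a rough planning sketch. The 50/100 figures come straight from the text above, but they are illustrative estimates, not guarantees, and the tag names below are made up.

```python
# Rough training-set sizing sketch, using the rule-of-thumb figures from
# the text: ~50 images per visually distinct tag, ~100+ when tags are
# visually similar to each other. Real requirements vary by data set.

def images_needed(tags: dict[str, bool]) -> dict[str, int]:
    """tags maps tag name -> True if the tag is visually similar to others."""
    return {tag: (100 if similar else 50) for tag, similar in tags.items()}

plan = images_needed({
    "logo_old": True,    # visually close to the new logo -> more examples
    "logo_new": True,
    "warehouse": False,  # visually distinct -> fewer examples
})
print(plan, "total:", sum(plan.values()))
```

Even a back-of-the-envelope calculation like this makes the cost visible: three custom tags already mean collecting and curating hundreds of representative assets before training starts.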

Building a machine-learned model takes effort. For each tag, someone must train the model, and this has to be done with discipline. Forgetting a tag or adding tags inconsistently can degrade a model instead of improving it.

Self-learning AI: Robust, But Consider If It’s Worth The Investment

With self-learning AI, the application or feature uses a trained model in addition to the user's behavior or feedback in the application. This process continuously fine-tunes the model for ongoing performance improvement.

However, similar to training the model, you must also consistently provide feedback to the model. If users are not working consistently in the application, the model will degrade over time.
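The feedback loop described above can be sketched as a toy model. Real self-learning systems retrain actual model weights; this minimal version only nudges a per-tag confidence score up or down on each accept/reject signal, and the class name, learning rate, and starting score are all assumptions.

```python
# Minimal sketch of a self-learning feedback loop: user accept/reject
# signals nudge a per-tag confidence score. Real systems retrain model
# weights; this toy version only adjusts a stored score.

LEARNING_RATE = 0.05  # assumed step size per piece of feedback

class SelfTuningTagger:
    def __init__(self, base_confidence: dict[str, float]):
        self.confidence = dict(base_confidence)

    def feedback(self, tag: str, accepted: bool) -> None:
        """Consistent user feedback fine-tunes the score, clamped to [0, 1]."""
        delta = LEARNING_RATE if accepted else -LEARNING_RATE
        self.confidence[tag] = min(1.0, max(0.0, self.confidence[tag] + delta))

tagger = SelfTuningTagger({"cat": 0.60})
for _ in range(5):               # librarians repeatedly confirm the 'cat' tag
    tagger.feedback("cat", True)
print(round(tagger.confidence["cat"], 2))  # 0.85
```

The same mechanism cuts both ways: five inconsistent rejections would push the score down just as fast, which is exactly why undisciplined feedback degrades a self-learning model over time.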

Some Real-World Examples

Here’s a simple example for auto-tagging. Let’s assume we want to train a model to recognize both cats and dogs. If I upload picture A and ask the model to generate tags, it’s likely that the model can predict with high confidence that the picture contains both a cat and a dog, and tag it accordingly.

Picture A – Pixabay courtesy annvsh08

If I upload picture B, the model may be less confident and give me no tags at all because the cat’s face is covered by glasses. That’s what we call a false negative: there should have been a ‘cat’ tag, which the DAM librarian will now have to add manually.

Picture B – Pixabay courtesy Free-Photos

Another scenario is that the model thinks this is a “cat” but is also confident that this is a “dog.” Because the cat’s face is obscured, you may get a tag that isn’t supposed to be there (a false positive), which the DAM librarian will now have to delete manually.
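Both outcomes for picture B can be shown with one small sketch. The confidence scores below are made up; the point is only how a strict cutoff yields a false negative while a loose one yields a false positive for an image whose true content is just a cat.

```python
# Illustrative sketch: one confidence cutoff produces a false negative,
# a looser one produces a false positive, for a picture whose true
# content is only a cat. All scores are invented for illustration.

picture_b = {"cat": 0.55, "dog": 0.62}   # obscured face -> low, muddled scores
truth = {"cat"}

def tags_at(predictions: dict[str, float], threshold: float) -> set[str]:
    return {tag for tag, conf in predictions.items() if conf >= threshold}

strict = tags_at(picture_b, 0.80)   # empty: 'cat' was missed (false negative)
loose = tags_at(picture_b, 0.50)    # includes 'dog' (false positive)

print("false negatives:", truth - strict)
print("false positives:", loose - truth)
```

Either way a human ends up correcting the result, which is why the librarian's manual add or delete in the scenarios above is not an edge case but part of the workflow.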

In the case of picture B, generic AI will do nothing when the DAM user tags the asset. Self-learning AI will retrain itself to become better over time, learning that cats wearing glasses are still cats, and are also wearing glasses. But this does require discipline: users need to reinforce the model consistently over time, otherwise the AI may ‘learn’ incorrectly. Despite what you may read, AI is not nearly as smart as we humans!

Is It Worth The Effort To Do Learned or Self-Learning AI?

AI comes at financial and operational cost (even if it’s given to you for free!). Creating, running and using AI requires more effort as you move from generic to learned to self-learning AI.

Why? Generic AI doesn’t require any additional time or energy. But learned and self-learning AI require time and effort to ensure models are being trained and, for self-learning, maintained over time.

Should you steer away from learned AI then? Absolutely not. You just need to make sure it’s worth the investment.

If you have a business- or industry-specific tagging vocabulary, or your content is extremely hard to identify, you will likely benefit from a trained model, from which you can expect an error rate similar to or better than that of an average DAM user!

Our recommendation? Make sure that your DAM vendor can clearly explain which type of AI they offer, and make sure you understand if the ROI is there. It’s not a simple endeavor to have a self-learning AI engine (despite the ‘demo magic’ some may show you). So ask yourself if the returns are worth the time you must put in to ensure models are properly learning, and trade this off against the generic AI, which will often come at no extra cost.

Have you used AI in your DAM, or do you plan to? We’d love to hear your ideas in the comments!

About the Author

Petra focuses on product strategy for the Aprimo Content Cloud and is driving Aprimo’s practical artificial intelligence (AI) solutions. She has been active in product visioning and product management for most of her career: she was part of the ADAM Software management team and has prior experience managing packaging software solutions at Artwork Systems.
