
The Aprimo Generative AI Guidebook: Part Four

Welcome to part four of the Aprimo Generative AI Guidebook! If you missed parts one, two, or three, you can find them here:

  • Part 1: The Impact of GenAI on Content Ops and DAM
  • Part 2: Using AI Prompts to Improve Content Creation
  • Part 3: Training GenAI Models for Brand Compliance

In our earlier articles, we talked a lot about AI concepts and how you can manage, leverage, and optimize them within your business. In this article, we will focus on managing your AI-related risk and the steps you should consider to help protect your business.

To ensure responsible use of AI technologies, companies should consider establishing some key governance practices. We would recommend implementing controls in the following areas:

  • Policy Management: Define and publish clear policies around the use of AI
  • Transparency: Outline processes for detecting and flagging AI content
  • Human Oversight: Ensure all AI content is getting the proper level of human oversight
  • Feedback: Create channels for employees to provide feedback on their experiences with AI
  • Reporting and Tracking: Track AI initiatives and content using dashboards

Policy Management

Companies should establish, document, and communicate their policies around the use of AI within their business. This is key to giving your employees guidance on what is acceptable and expected of them when working with AI. It is also critical for your company, so if any issues or complaints arise related to AI, you have clear policies you can reference to determine what actions to take in response. Below are some different kinds of policies to consider. Review this list to determine which should apply to your business and situation. 

  • Ethics and Compliance Policy: Establish a comprehensive ethics policy that outlines the principles and values guiding AI development, addressing issues such as fairness, transparency, accountability, and privacy.
  • Bias Mitigation Policy: Develop guidelines and procedures to identify and mitigate biases in AI systems to ensure fair treatment and avoid discrimination in decision-making processes.
  • Data Governance Policy: Implement robust data governance policies to ensure the responsible collection, storage, and usage of data, taking into account privacy concerns and data protection regulations.
  • Explainability Policy: Set standards for AI model explainability, ensuring that AI systems provide clear and understandable explanations for their decisions and actions.
  • Human-in-the-Loop Policy: Establish guidelines for incorporating human oversight and intervention in AI processes, particularly in critical decision-making scenarios.
  • Security Policy: Implement strong security measures to protect AI systems and data from unauthorized access, ensuring that AI models are resilient to cyber threats.
  • Transparency and Disclosure Policy: Commit to transparent communication with users and stakeholders about the use of AI, including the presence of AI in products or services.
  • User Consent Policy: Clearly define guidelines for obtaining informed consent from users when AI systems process their data or influence their decisions.
  • Third-party AI Vendor Assessment Policy: Evaluate and select AI vendors based on their adherence to ethical practices, security standards, and compliance with relevant regulations.
  • Regulatory Compliance Policy: Stay informed about relevant AI regulations and standards in different jurisdictions and ensure compliance with these regulations.
  • Training and Education Policy: Invest in ongoing training and education for employees working with AI systems to enhance their understanding of ethical AI practices and potential risks.
  • Testing and Validation Policy: Implement rigorous testing and validation procedures for AI models to identify and correct potential biases and errors before deployment.
  • Liability and Accountability Policy: Clearly define roles and responsibilities concerning AI system development, operation, and potential consequences, ensuring clear lines of accountability.
  • Monitoring and Auditing Policy: Regularly monitor AI systems in production and conduct audits to identify and address any emerging ethical or security concerns.

Having the policies is great, but you also need to centrally manage and distribute them. We recommend creating a collection in the Aprimo DAM to hold your policies. Expose these policies on the Spaces page to all users so they can easily see and find them. If appropriate, leverage the Public Links feature to expose these policy documents in other external systems or websites while maintaining one source of truth for the policy in the DAM. This way, if your policies need to change (and with how rapidly AI is evolving, they will), you can update them in one place and have the change picked up by all the other systems leveraging the Public Link.
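To illustrate the one-source-of-truth pattern, here is a minimal Python sketch of an external system pulling the current policy document through its Public Link. The URL is a hypothetical placeholder, not a real Aprimo link.

```python
import requests

# Hypothetical Public Link URL generated by the Aprimo DAM for the policy
# document. Any system that references this link always receives the latest
# published version, so the policy only has to be updated in one place.
POLICY_PUBLIC_LINK = "https://yourtenant.example.com/publiclink/ai-policy"

def fetch_current_ai_policy(path: str = "ai-policy.pdf") -> str:
    """Download the latest AI policy document via its Public Link."""
    response = requests.get(POLICY_PUBLIC_LINK, timeout=30)
    response.raise_for_status()
    with open(path, "wb") as f:
        f.write(response.content)
    return path

if __name__ == "__main__":
    print(f"Saved current AI policy to {fetch_current_ai_policy()}")
```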

[Image: AI Policy Space]

TIP: Customers can leverage the Aprimo workflow solution to distribute these AI policies to all the appropriate end users of Aprimo. We recommend that users creating and uploading new content into the DAM be required to review your AI policies. Using Aprimo’s workflow solution, you could assign each user a task to complete, which includes the AI policies and a checklist that requires the users to acknowledge they have reviewed and accepted the policies before completing the task. This will allow you to ensure all content contributors know your policies and agree to follow them. For larger companies, you may want to break up the workflow to run a Project per department or team to make assignments more manageable and allow managers to easily track who has completed the task.

[Image: Acknowledge AI Policy Task (Unchecked)]

Transparency

Before you can take any actions related to AI, you have to know who is using it. We would encourage you to be open about it and ask your employees to disclose when they have used AI. As we shared in earlier guidebooks, this could be as simple as a few metadata fields on the Content Item indicating that it was influenced by AI, along with the AI service used (or at least the category of AI) so you can better understand which tools your employees are leveraging. This additional detail will help answer other questions: Do they have proper licensing for those tools? What are the cost implications of using those tools? And more.

These fields can also be exposed on tasks in workflows where content is uploaded through Data Collection templates. One simple example would be an “AI Influenced Content: Yes/No” field that can be set by the user when completing their upload task. If the value is set on the Task, the field value can be passed to the content item using the metadata transfer options in Status Actions.
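For teams that script this instead, the sketch below shows the general shape of such a metadata transfer against a REST API. The base URL, endpoints, and field name are illustrative assumptions, not the actual Aprimo API; in practice, the transfer is configured declaratively in Status Actions.

```python
# Illustrative sketch only: the endpoints and field names below are
# hypothetical stand-ins, not the real Aprimo API.
import requests

BASE_URL = "https://yourtenant.example.com/api"  # hypothetical tenant URL
HEADERS = {"Authorization": "Bearer <token>"}    # token acquisition omitted

def transfer_ai_flag(task_id: str, content_item_id: str) -> None:
    """Copy the 'AI Influenced Content' value from a task to a content item."""
    # Read the Yes/No value the user set on their upload task.
    task = requests.get(f"{BASE_URL}/tasks/{task_id}",
                        headers=HEADERS, timeout=30)
    task.raise_for_status()
    ai_influenced = task.json()["fields"]["AI Influenced Content"]

    # Write the same value onto the content item's metadata.
    requests.put(
        f"{BASE_URL}/content-items/{content_item_id}/fields",
        json={"AI Influenced Content": ai_influenced},
        headers=HEADERS,
        timeout=30,
    ).raise_for_status()
```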

Asking your employees to self-disclose is a great start, but we would also recommend letting technology assist as well. Several services are emerging in the market that attempt to “detect” whether content was influenced by AI. As laws and policies evolve, more companies offering AI services will likely start to self-report or tag AI-generated content in ways that make it easier to know the source of the content. Collecting this information on your Content Items will make sure you are ready to take advantage of these options as they evolve.

To support transparency, Aprimo offers an “AI Content Detection” solution that leverages one of these services to scan incoming content as it is uploaded into the Aprimo DAM. It generates a confidence score indicating how likely it is that the content was AI-generated, and Aprimo can flag content that exceeds a threshold as AI-influenced. Flagged content can then trigger various actions through rules in the DAM and workflows.
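Conceptually, the flagging logic reduces to comparing a confidence score against a threshold. Here is a minimal sketch, with the detection call stubbed out since the underlying service is an assumption, and the 0.80 threshold is an arbitrary example rather than an Aprimo default:

```python
AI_CONFIDENCE_THRESHOLD = 0.80  # example value, tune to your risk tolerance

def detect_ai_confidence(file_path: str) -> float:
    """Placeholder for a third-party AI-content-detection call."""
    raise NotImplementedError("Wire up your detection service here.")

def flag_if_ai_influenced(file_path: str) -> bool:
    """Flag content whose AI-confidence score exceeds the threshold."""
    score = detect_ai_confidence(file_path)
    is_flagged = score >= AI_CONFIDENCE_THRESHOLD
    if is_flagged:
        # In the DAM this flag would set a metadata field, which rules and
        # workflows can then react to (extra review, routing, etc.).
        print(f"{file_path}: confidence {score:.2f} -- flagged as AI-influenced")
    return is_flagged
```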

Another key aspect of transparency is traceability. Can you tell what source(s) the AI service used to derive its response? Especially with content generation, the AI services you leverage should ideally cite their sources, allowing you to verify the authenticity and accuracy of the response.

For example, when the Aprimo Content Coach provides users with an answer, they can see which content items were used to craft the response and click through to view them. This way, if the answer seems questionable, the user can quickly drill into the source to verify it. It also helps the Content Coach owner know which content is driving the most responses, which is providing accurate responses, and which is not. This transparency allows the Content Coach owner to curate the content in their model and understand its effectiveness.

[Image: Content Coach Related Content]

Human Oversight

We have all heard it over and over at this point. We should not be allowing AI to run unchecked in these early days. We need to keep a “human in the loop” when it comes to leveraging AI. So how do we do that?

Establish an AI Governance Board with members from across departments who are responsible for overseeing the implementation of AI policies and guiding ethical decision-making. This board should regularly review all ongoing AI initiatives and assess their risk and compliance with overall company policies.

In order to provide this board with the information it needs, you should begin centrally tracking all the AI initiatives you are planning as a business. It does not need to be a lot of information, but at least some key high-level details: Department, Owner/Sponsor, Current Status, AI Services, and Business Case (i.e., Benefits and Risks). Assign each initiative a value score and a risk score, and rank them based on your priorities.
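If you want to prototype this before building it in a platform, here is an illustrative Python sketch of capturing those details and ranking initiatives. The priority formula (value minus risk) is just one simple convention; choose weights that match your own priorities.

```python
from dataclasses import dataclass

@dataclass
class AIInitiative:
    """Key high-level details for one AI initiative."""
    name: str
    department: str
    owner: str
    status: str
    ai_services: list[str]
    business_case: str
    value_score: int  # e.g., 1 (low benefit) to 10 (high benefit)
    risk_score: int   # e.g., 1 (low risk) to 10 (high risk)

    @property
    def priority(self) -> int:
        # One simple convention: high value and low risk float to the top.
        return self.value_score - self.risk_score

# Hypothetical sample initiatives for illustration.
initiatives = [
    AIInitiative("Product copy generation", "Marketing", "J. Smith",
                 "Planned", ["GPT-4"], "Faster campaign copy", 8, 4),
    AIInitiative("Support chat assistant", "Service", "A. Lee",
                 "In Progress", ["Azure OpenAI"], "Deflect support tickets", 7, 6),
]

# Highest-priority initiatives first, for the governance board's review.
for item in sorted(initiatives, key=lambda i: i.priority, reverse=True):
    print(f"{item.name}: value={item.value_score}, risk={item.risk_score}, "
          f"priority={item.priority}")
```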

This could easily be tracked in the Aprimo platform, leveraging Programs and Activities and quickly visualized on the calendar. Or customers could leverage the Aprimo Planning module to help structure and manage their initiatives and link them to key strategies and goals. The Governance Board can then quickly and visually see all the ongoing and planned initiatives on the calendar and easily drill into each one to see more details when needed.

[Image: AI Content Plan]

Beyond this, it is important to ensure all AI content is going through the appropriate human review. Most likely, you already have an approval process for any new content entering the DAM and any content being used in any marketing activities. However, those processes may need to be adjusted to handle AI-generated content differently, and the individuals involved in them may need extra enablement or guidance on how to properly vet AI content.

Customers leveraging the Aprimo workflow solution should consider updating their processes to account for AI content and potentially route that content for additional approvals OR update the expectations on the existing reviews in the process to properly verify AI content. One option would be to configure an AI Governance step checklist to display on those review steps. This checklist could provide clear guidance on what items to consider during the review. You can also leverage URL Extended Attributes to put a link to your AI policies collection directly on the task to make sure users have quick access to your policies when doing the reviews.

Consider updating your “reject reasons” on reviews to account for common AI failures like “unintentional bias” or “hallucinations in content” so you can see if AI is causing specific rejections.  

[Image: AI Reject Reasons]

Feedback

As you begin to leverage AI to generate content and improve efficiencies, it will be key to let your users provide feedback on the quality of the AI content and outputs. You should build out processes and methods that allow users to report AI concerns or issues. You could approach this in a few different ways.

This could be something as simple as an “AI Feedback” email inbox that employees can use to report AI concerns.

For something more elaborate, create a new Work Request form in Aprimo for reporting an AI concern. The form could allow the user to specify a department, a category for the concern, the AI project involved (if known), and so on, to better qualify the concern. The request could then kick off a simple workflow to route the issue to the appropriate department or team.

Another key suggestion: when you start engaging with different AI services or solutions, insist that those services include feedback loops around any content being generated. Ideally, when employees use a service, they have a way to rate its responses, and you can review those ratings to assess the service’s performance. How often is the service providing good results versus bad? If the results are wrong, why are they wrong? Were they offensive, inaccurate, incomplete, etc.? In these early days, we should not expect these emerging AI services to be perfect. One of the coolest aspects of AI is that it can learn and improve over time. Being able to work with it, provide it feedback, and coach it to become better will be key to getting real value out of it.
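As a simple illustration of what reviewing that feedback could look like, the sketch below tallies hypothetical ratings to surface the good-versus-bad ratio and the most common failure modes. The rating and reason labels are assumptions; use whatever categories your feedback channel actually collects.

```python
from collections import Counter

# Hypothetical sample feedback records; in practice these would come from
# the AI service's feedback loop or your own reporting channel.
feedback = [
    {"rating": "good", "reason": None},
    {"rating": "bad", "reason": "inaccurate"},
    {"rating": "bad", "reason": "incomplete"},
    {"rating": "good", "reason": None},
    {"rating": "bad", "reason": "inaccurate"},
]

ratings = Counter(entry["rating"] for entry in feedback)
bad_reasons = Counter(entry["reason"] for entry in feedback
                      if entry["rating"] == "bad")

total = sum(ratings.values())
print(f"Good responses: {ratings['good'] / total:.0%}")
print(f"Most common failure modes: {bad_reasons.most_common()}")
```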

A good example of this is how the Aprimo Content Coach allows users to rate the responses they receive and categorize them. The Content Coach owner can then review those ratings to see where the model is struggling and take action to correct it: adjust the prompt, update content to close gaps, or remove content that is providing poor responses to improve the output and increase the value.

[Image: Content Coach Feedback Page]

Reporting and Tracking

Easy visibility into AI usage and policy enforcement is key to making good decisions. Once you have completed the steps outlined above, you should build dashboards to quickly show overall status and glean insights. If you build out some of the examples discussed above in Aprimo, you will have this data immediately available to leverage in Aprimo reports and Power BI dashboards. Or you can build the reports in your own BI platform of choice and embed them into the Aprimo home page.

Based on the discussion above, here are a few ideas on what to track:

  • If you are rolling out your AI policies, what % of users have completed their tasks? (See the sketch after this list.)
  • Dashboard of all AI initiatives, showing their status and risk level
  • How much AI-influenced content is being uploaded in the DAM (type of content, status)?
  • Which departments and groups are leveraging AI the most?
  • What are the most common AI services being used?  
  • How often are reviews being rejected based on AI-related reject reasons or concerns?
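As an example of the first metric, here is a minimal sketch computing the policy-acknowledgment completion rate from exported task data. The field names and sample records are illustrative assumptions, not an actual Aprimo export format.

```python
# Hypothetical export of policy-acknowledgment tasks, one record per user.
policy_tasks = [
    {"user": "jsmith", "completed": True},
    {"user": "alee", "completed": True},
    {"user": "bwong", "completed": False},
]

completed = sum(1 for task in policy_tasks if task["completed"])
print(f"Policy acknowledgment: {completed / len(policy_tasks):.0%} complete")
```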

Generative AI is here, and it is not going away. Every company needs to assess these technologies and develop stances appropriate to their business needs and risks. Those stances can and will change (likely often) as this ever-changing landscape of AI policies, lawsuits, and technology evolves.  

By adopting some or all of these recommendations above, your company can foster responsible AI practices and mitigate potential risks associated with AI technologies. Moreover, having well-defined policies and processes demonstrates a commitment to ethical AI development and usage, which can enhance a company’s reputation and foster trust among its employees, customers, and stakeholders.

This series offers a practical plan for quickly putting Aprimo, GPT, and other AI tools to work and turning them into valuable outcomes. Stay tuned for part five, where we will dive deeper into the impact of generative AI on your business and how Aprimo can lead the way.


Don’t miss a beat!

Sign up to receive our latest content on best practices, trends, tips, and more to elevate your content operations.
