You just found out Blueline is launching a powerful, just-in-time, just-enough, real-time, AI-powered micro-sim engine that is going to blow your mind. Without hesitation, you call us, sign an SOW, and take a deep dive into building some of the most immersive, adaptable, and applicable training simulations you have ever created. Awesome! Now, it’s time to go live for your learners. After all, you want to be the first in your division to claim you were leading the charge on custom AI training solutions.
There is just one small problem: compliance. Every course needs to be reviewed word for word and approved by legal before it can go out. But this content is generated on the fly, delivering custom feedback and responses to what learners say. None of the coaching advice is scripted; it was created a few milliseconds ago in response to a learner’s performance. What now? Is your fancy cutting-edge micro-sim course going to sit useless on the shelf?
Let’s consider a few things before engaging with generative AI learning solutions.
What is the role of compliance in learning and development (L&D)?
In general, the role of compliance departments is to manage and mitigate risk, ensure an organization’s data is protected, and monitor conformity with the rules and regulations of applicable governing bodies. Through a learning and development lens, the compliance team serves as a gatekeeper in a few different ways. In some organizations, a representative from compliance is responsible for reviewing and approving content before it reaches a wider audience. This meticulous review process is essential to prevent any discriminatory, illegal, or policy-violating content from being disseminated. In highly regulated industries, the scrutiny is even more intense, requiring a nuanced understanding of legalities and industry-specific responsibilities. The stakes are high: a single misstep could result in costly legal ramifications for the organization.
Where does compliance start?
It’s incumbent upon those developing learning content to understand an organization’s limits and structures. We can’t just write whatever we want and throw it over the wall to the legal team. In most cases, we’re creating content that aligns with an organization’s goals. However, all of that becomes more complex when you can’t actually read or audit specific content, because a large language model (LLM) generates it dynamically, in real time, in response to a learner’s inputs.
Building bridges, not barriers, between compliance and L&D
The crucial question arises: how can creators of learning experiences collaborate effectively with compliance departments? It all starts with two-way communication. Too often, compliance is viewed as a downstream entity, where scripts are submitted for review, marked up, and accepted or rejected without meaningful dialogue.
Often, without the appropriate context setting, a legal team might reject a statement simply because they do not understand its spirit in context. For example, if I’m building sexual harassment training, I need to be able to say things that are examples of the bad stuff I don’t want people to say. I need to do so to help set the tone, create an awkward situation, prove a point, and quite likely make people feel uncomfortable. That is the key to behavior change. But without that context, your content could be a non-starter.
At Blueline, we advocate building strong relationships with compliance teams. These positive interactions involve providing context for training content, explaining the nuances and intentions behind certain statements, and engaging in open communication. As we see it, the key is to view compliance as a partner, not a hindrance. Encourage dialogue, push back when necessary, and work collaboratively to strike a balance between effective training and adherence to compliance standards.
This approach applies to all forms of relationships between L&D and compliance teams, regardless of the training modality that you’re using.
But how do we handle this new level of complexity with GenAI…
Compliance complications and GenAI
Organizations are grappling with the compliance implications of AI tech across many areas within their businesses. As learning leaders, we’re well-versed in navigating the compliance process, but we need to adjust our approach. Here are some essential steps to help you collaborate with the compliance department when you start bringing AI into the mix:
- Stop, collaborate, and listen: The potential of AI tech is enormous, but don’t get so excited that you start putting new AI-powered learning experiences (not even Blueline’s) into your environment… and then find out three months later (the hard way) that no one received sign-off from compliance or legal.
- Build relationships first: Before diving headfirst into generative AI implementation, it’s crucial to establish strong relationships and rapport with compliance teams. Understand their goals, thresholds for risk, and the overall tone and structure they seek in training content. Find out their thoughts about how your organization approaches generative AI technology. Have tough conversations.
- Educate both sides: L&D teams must grasp the underlying LLM technology and prompt crafting to integrate it into training effectively. Simultaneously, compliance teams need to understand the capabilities and limitations of AI-driven content generation. There are many safeguards that can be put in place and methods available to customize the generated content output in ways that would adhere to legal requirements, policy preferences, corporate tone, and brand identity.
- Understand the context: We need to use our authentic intelligence to understand specific learning objectives. We can’t simply input broad concepts and run with whatever the LLM spits out—it’s lazy, it’s not applicable, and you’re not going to get great training. L&D teams need to do their due diligence to create a carefully crafted training context and understand how the technology interfaces with it.
- Promote responsible use: The use of GenAI in L&D demands responsible and strategic application. Organizations need policies and procedures to guide AI-generated content, ensuring it aligns with organizational goals without compromising legal or ethical standards. As training professionals, we need to be prepared for how we plan to gather and leverage the data we collect from learners, especially as we move away from predefined multiple-choice options and start soliciting their individual opinions and thoughts. We absolutely want learners to be able to answer questions in their own words, but we must have a strategy for how we are going to use that newfound stream of data and protect the individual sharing it.
- Continuous communication with all business units: Regular discussions among all stakeholders help to address evolving challenges, set guidelines for responsible AI use, and ensure everyone is on the same page regarding the organization’s training objectives. Stakeholders must make key decisions around the use of external technologies like OpenAI versus custom in-house solutions built from the ground up and trained on your organization’s data. Those decisions significantly impact your long-term learning strategies.
- Address privacy concerns: Privacy concerns naturally emerge when discussing GenAI. Some aspects of privacy fall within the purview of compliance, as they must implement effective policies and procedures to ensure that no sensitive information is exposed, trade secrets are safeguarded, and legal boundaries are respected. However, as we plan to use this technology to engage with employees and teams, we must be aware of the positive and negative implications of collecting information and insights, training our applications, and updating our data models with the information provided by our employees. We must think through when and how it is appropriate to collect, store, and process various types of employee input back into a learning model with the hopes of improving it.
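To make the "safeguards" mentioned above concrete: one common pattern is a pre-delivery policy gate, where generated coaching text is checked against compliance-approved rules before a learner ever sees it, and risky output is replaced with language legal has already reviewed. Here is a minimal sketch in Python; the blocked patterns, function name, and fallback message are all hypothetical, for illustration only, and a real implementation would use rules authored with your compliance team.

```python
import re

# Hypothetical rules. In practice, these would be authored and
# signed off by your legal/compliance team, not hard-coded by L&D.
BLOCKED_PATTERNS = [
    r"\bguarantee(?:d)?\b",   # no promised outcomes to learners
    r"\bmedical advice\b",    # outside the training's scope
]

# A fallback response that compliance has already reviewed and approved.
FALLBACK_MESSAGE = "Let's revisit that scenario with your facilitator."

def gate_generated_feedback(text: str) -> tuple[bool, str]:
    """Return (passed, message): the AI-generated text if it clears the
    rules, otherwise the pre-approved fallback message."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return False, FALLBACK_MESSAGE
    return True, text

# A compliant coaching response passes through unchanged...
ok, msg = gate_generated_feedback("Try acknowledging the customer's concern first.")

# ...while a risky one is swapped for the reviewed fallback.
bad, fallback = gate_generated_feedback("We can guarantee a promotion if you do this.")
```

A keyword gate like this is deliberately crude; the point is the architecture, not the rules. The same seam is where richer safeguards (moderation models, tone constraints, brand-language checks) plug in, and it gives compliance something auditable even when the content itself is generated on the fly.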
GenAI and compliance: the key takeaway for L&D
If there’s one piece of advice I can offer, it’s to initiate conversations with compliance early and often. Reach out to see if your compliance department is already thinking about and working through the opportunity to integrate generative AI. Educate yourself on the potential of GenAI, be willing to advocate for learners to gain access to this game-changing technology, and actively contribute to collaboratively building a framework that maximizes the rewards while protecting your organization. Apply your authentic intelligence to the artificial intelligence opportunity that’s before you.
It’s not just about adopting new technology; it’s about getting it right! If you have insights or questions about how your organization is handling this, I’d love to hear from you.