
What training should you standardize first?

A triage method for L&D leaders. Three filters that cut through stakeholder noise and surface the standardization project that actually pays off.

Jennifer Bell, Team Leader, Custom Learning at Neovation

Key takeaways

  • The hardest part of standardizing training isn't deciding whether to do it. It's deciding which one to start with, and that decision usually gets made by whichever stakeholder asks loudest rather than by what would actually pay off.
  • Three filters surface the right starting point: how much inconsistency costs you (risk), how often the content gets delivered and to how many people (frequency), and how concentrated the expertise is in too few heads (knowledge-at-risk).
  • Pick a topic too small and the workflow never builds momentum. Pick something too ambitious and the project stalls before it ships. The right starting point is high enough stakes to matter and contained enough to finish.
  • AI is genuinely useful for the messy middle of this work. It can pull together five versions of the same course, surface what's consistent, and flag what contradicts. The judgment about what should become the standard still belongs to a human.
  • Standardization is a leadership decision, not a content task. Get the first one right and the next three are easier.

Most L&D teams know they should be standardizing their training. The harder question is which one to standardize first.

That decision usually gets made by whichever stakeholder asks loudest, not by what would actually pay off. The result is predictable. The first project either falls flat (low stakes, no momentum) or runs aground (too ambitious, never ships). Either way, the standardization workflow never gets the early win it needs to keep going.

This guide walks through a triage method built around three filters: risk, frequency, and knowledge-at-risk. Score your candidate topics against those three and the right starting point usually surfaces on its own. By the end, you’ll have a way to make the decision that doesn’t rely on whoever happened to email you most recently.

The short answer

The training to standardize first is the one that’s high-cost when delivered inconsistently, high-frequency in how often it runs, and concentrated in the heads of fewer people than is comfortable.

Those three properties (risk, frequency, knowledge-at-risk) are the filters. A topic that scores high on all three is your starting point. If it only scores high on one, it’s probably the wrong place to begin, and if it scores low across the board, it doesn’t need to be standardized at all.

A quick definition before we dig in: standardizing training means making sure the same content is delivered the same way every time, regardless of who’s running it or where it lands. Fewer versions. Less drift. One source of truth. The work matters because the training you standardize first sets the tone for everything that follows.

Filter one: how much does inconsistency cost you?

Risk asks what actually happens when this training is delivered five different ways across five teams. Sometimes the answer is “not much.” Different sites end up with slightly different language and the work still gets done. Other times the answer is “we got fined” or “a customer was hurt” or “the lawsuit is in motion.” Risk is the filter that surfaces the second category.

The kinds of training that score high on risk:

  • Compliance and regulated content. Anti-harassment, financial controls, OSHA, HIPAA, anything where an auditor or regulator could ask to see what was taught.
  • Safety-critical procedures. Lockout/tagout, equipment operation, anything where the wrong action causes injury.
  • Client-facing processes. Sales pitches, customer service scripts, escalation handling. Anywhere inconsistent delivery damages trust or revenue.
  • Anything tied to certifications or licenses. Where the credential depends on the training meeting a specific bar.

Risk also has a softer form. The cost of inconsistency isn’t always a fine or a lawsuit. It can be the slow erosion of brand voice across a customer base, or the manager who answers the same question three different ways and undermines their own authority. The point is to ask: when this training is delivered inconsistently, what does it actually cost us? If the answer is “nothing meaningful,” this isn’t your starting point.

Filter two: how often does this content get delivered, and to how many people?

Frequency asks about volume and reach. Training delivered once a year to a dozen people produces less return on standardization than training delivered every week to a thousand. The math is simple, but it’s the filter teams skip most often, because high-reach content is usually content with the most political weight, and politics distort priority.

Frequency has two components worth separating (a quick sketch after this list makes the distinction concrete):

  • How often the content is delivered. Daily, weekly, monthly, annually. Higher cadence means higher payoff for standardization, because every delivery is a chance for drift.
  • How many people receive it. An onboarding module that 200 new hires complete each year is high-volume even if it only “runs” once per cohort. A weekly leadership session for ten executives is lower-volume even if the cadence is high.
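
To see why the two components need separating, here's a toy calculation in Python using the two examples above. Every number is invented for illustration.

```python
# Toy figures for the two examples above; all numbers are illustrative.
courses = [
    # (name, deliveries per year, audience per delivery, unique people per year)
    ("Onboarding module",         10, 20, 200),  # ten cohorts of twenty new hires
    ("Weekly leadership session", 52, 10,  10),  # the same ten executives each week
]

for name, runs, audience, unique_people in courses:
    person_deliveries = runs * audience  # every delivery is a chance for drift
    print(f"{name}: {runs} runs/yr, {person_deliveries} person-deliveries/yr, "
          f"{unique_people} unique people reached")
```

Raw person-deliveries would rank the small weekly session first (520 to 200), while unique reach ranks onboarding first (200 people to 10). That's exactly why the two components are worth scoring separately rather than multiplying into a single number.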

The training that scores highest on frequency is usually onboarding, customer service or sales enablement, recurring compliance refreshers, and anything that gets re-delivered every time a team expands or reorganizes.

Filter three: who holds the expertise, and how soon could they leave?

Knowledge-at-risk asks how concentrated the expertise is and how exposed you are if it walks out the door. Some training content lives in formal documents and well-maintained playbooks. Other content lives entirely in the heads of three or four senior people, none of whom have written anything down, all of whom are within five years of retirement.

If the training depends on knowledge held in fewer heads than you’d like, that topic moves up the list. The filter is partly about retirement and resignation, but it’s also about role changes, restructures, and the slower forms of attrition that don’t announce themselves. The expert who moved into a different function eighteen months ago and hasn’t touched the topic since is, for practical purposes, gone.

A useful question to ask: if the three people who currently know how to deliver this training were unavailable for six months, what would happen? An answer that runs some version of “we’d be fine, the documentation is good” means knowledge-at-risk is low. An answer along the lines of “we’d have to figure it out from scratch” is where to start.

This filter is also where the politics of standardization get uncomfortable. The training that depends on the most experienced people is often the training those people care most about. Standardizing it can feel like a critique of how they do the work. The opposite is true: standardizing it is how that work survives them.

Putting the three filters together

The three filters work as a scoring system, not a checklist. You don’t need every topic to score high on all three. You need a clear-eyed view of how each candidate stacks up so the decision gets made on substance instead of volume.

A simple version: rate each candidate topic on a 1–5 scale across all three filters. Add the scores. The highest total is usually a strong contender. The lowest total is rarely worth the effort. The middle of the pack is where most teams find their actual starting point: the topic that’s strong on two filters and acceptable on the third.
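
If it helps to see the mechanics, here's a minimal sketch of that scoring in Python. The topics and ratings are hypothetical; the only point is the arithmetic: three 1–5 ratings per topic, summed and ranked.

```python
# Minimal sketch of the 1-5 triage scoring. Topics and ratings are
# hypothetical; substitute your own candidates.
candidates = {
    "Anti-harassment compliance": {"risk": 5, "frequency": 4, "knowledge_at_risk": 2},
    "New-hire onboarding":        {"risk": 3, "frequency": 5, "knowledge_at_risk": 4},
    "Legacy equipment procedure": {"risk": 4, "frequency": 2, "knowledge_at_risk": 5},
}

# Rank by total; the highest total is usually the strongest contender.
for topic, scores in sorted(candidates.items(),
                            key=lambda kv: sum(kv[1].values()), reverse=True):
    print(f"{sum(scores.values()):>2}  {topic}  {scores}")
```

In this invented example the totals land close together (12, 11, 11), which matches the point above: the middle of the pack is where the real decision usually happens. A spreadsheet works just as well as code here; what matters is that the ratings are written down.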

A few patterns worth knowing about:

  • High on risk and frequency, low on knowledge-at-risk. Compliance content delivered to thousands of people, with the design already documented. Standardization here is mostly about consolidating versions and ensuring delivery consistency. Often the easiest first project.
  • High on risk and knowledge-at-risk, low on frequency. A specialized procedure that only a few people know how to teach, where the consequences of inconsistency are serious. Worth doing, but the payoff is slower because the volume is lower.
  • High on frequency and knowledge-at-risk, low on risk. Onboarding content held in the heads of senior staff. The cost of getting it wrong is moderate, but the cost of losing the people who know it is significant. A good candidate when leadership turnover is on the horizon.
  • High across all three. Rare, but when it shows up (usually a high-volume regulatory topic that depends on knowledge held in too few heads) it’s almost always your starting point.

Scoring like this makes the decision visible and defensible. Once it’s written down, the conversation about which project to start moves from preference to evidence.

Where AI actually helps with this work

AI is genuinely useful for the messy middle of standardization, less useful at the bookends. The judgment about what should become the standard belongs to a human. The comparison work that makes that judgment possible is exactly the kind of task AI handles well.

What most teams discover when they look hard at a topic for standardization is that there isn't one version of the training. There are five, or twelve, or twenty-six, depending on how long the topic has existed and how many teams have touched it. Pulling those variants side-by-side to identify what's actually consistent across them is tedious work that nobody volunteers for, and exactly the work AI shortens the most.

A workflow that’s been useful for our team and the L&D leaders we work with (a rough sketch of the comparison step follows the list):

  • Gather every existing version of the training topic into one workspace, with the source documents accessible to the AI tool.
  • Ask the tool to surface what’s consistent across all versions, what’s contradictory, and what appears in some but not others.
  • Review the output with a subject matter expert (SME). The AI’s “consistent” might miss nuance the SME catches; its “contradictory” might be a false flag based on different wording for the same idea.
  • Use the cleaned-up comparison as the starting point for the new standard, not the standard itself.
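
For teams that want to script the comparison step rather than paste documents by hand, here's one rough shape it can take. Everything here is a sketch under assumptions: the folder name is invented, and `ask_ai` is a hypothetical stand-in for whatever your AI platform actually provides.

```python
# Rough sketch of the comparison step, assuming each gathered version
# lives as a plain-text file in one folder. The folder name is invented
# and ask_ai() is a hypothetical placeholder for your platform's API.
from pathlib import Path

def build_comparison_prompt(version_dir: str) -> str:
    """Stitch every gathered version into one comparison prompt."""
    sections = [
        f"--- VERSION: {path.name} ---\n{path.read_text()}"
        for path in sorted(Path(version_dir).glob("*.txt"))
    ]
    return (
        "Compare the training course versions below and report:\n"
        "1. Content that is consistent across all versions.\n"
        "2. Content that contradicts between versions.\n"
        "3. Content that appears in some versions but not others.\n\n"
        + "\n\n".join(sections)
    )

prompt = build_comparison_prompt("customer_service_training_versions")
# report = ask_ai(prompt)  # hypothetical call; substitute your tool's real API
```

Whatever the tool returns is raw material for the SME review step above, not a finished standard.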

The principle holds across tools and platforms. AI drafts, you shape. The tool can surface patterns and flag inconsistencies faster than any human review process. What it can’t do is decide which version of the truth should be the one your organization runs on. That decision involves judgment about your audience, your culture, your regulatory exposure, and your priorities, and that’s still entirely human work.

Three ways teams pick the wrong starting point

The decision goes sideways most often through three patterns:

The loudest stakeholder wins. The VP who emailed you twice this week becomes the default priority, regardless of how their topic actually scores on the filters. If the squeaky wheel is also the right wheel, fine. If not, the standardization workflow burns its first cycle on the wrong project.

The pet project gets greenlit. Someone in L&D has wanted to redesign a particular module for three years and sees standardization as the moment to finally do it. The motivation is genuine, but the topic might score poorly on all three filters. Pet projects make great second or third standardization efforts; they’re rarely the right first one.

The scope creeps to “the whole department.” The triage exercise produces a list of high-scoring topics, and instead of picking one, the project gets framed as “let’s standardize everything in this category.” The scope is too large to ship, the timeline keeps moving, and the work that was supposed to build momentum becomes the work that proves standardization doesn’t ship.

The first standardization project is allowed to be small. The one thing it has to do is finish. Get one done, learn from it, and the next three are easier.

A note on Neovation’s approach

Most of the standardization work we do at Neovation starts with this triage conversation, often before any course design begins. We work with L&D leaders to score their candidate topics against the three filters, surface the version variants that exist across teams, and identify the project that will produce the early win that makes the next three easier. An instructional designer pairs with a project manager from kickoff onward, and the work usually moves from messy reality to shippable standard inside a single quarter. For programs where standardization is one piece of a broader curriculum architecture, that work nests inside a curriculum design engagement rather than running as a separate project. The relationship between the two is covered in our guide to designing a curriculum.

If standardization is the actual problem (rather than a curriculum gap or a content gap dressed up as inconsistency), an outside partner is one option among several. An internal team is often a strong choice when the bandwidth exists. A freelancer with deep topic expertise can work for narrow, high-risk projects. Off-the-shelf content rarely solves it, because the whole point of standardizing is making the training fit your organization. If you’d like to talk through which approach fits your situation, request a quote or browse our case studies to see what these engagements have looked like.

Frequently asked questions

What does it mean to standardize training?

Standardizing training means making sure the same content is delivered the same way every time, regardless of who's running it. The goal is fewer versions, less drift across teams, and a single source of truth that everyone can point to. Standardization isn't the same as centralizing or rigidly scripting; it's about consistency in what gets taught and how, while leaving room for delivery style.

How do I know if a topic actually needs to be standardized?

Score it against the three filters: risk (what does inconsistency cost you?), frequency (how often does this content get delivered, and to how many people?), and knowledge-at-risk (how concentrated is the expertise?). If a topic scores low across all three, standardization probably isn't the right project for that content. Some training is fine being delivered with local variation.

What's the difference between standardizing training and creating a curriculum?

Standardization is about making one piece of training consistent. Curriculum design is about deciding what training should exist in the first place and how the pieces connect. The two often happen in sequence: curriculum design defines the program, standardization ensures each piece runs the same way every time. The full distinction is covered in our guide on instructional design vs curriculum design.

Should I standardize new content or fix old content first?

Usually old content. New training is easier to standardize from the outset, but it doesn't carry the version drift and inconsistency costs that make standardization valuable in the first place. The biggest payoff comes from older, widely delivered, high-stakes topics that have accumulated multiple variants over time.

Can AI just standardize our training for us?

Not entirely, and the parts where it can't are the parts that matter most. AI is useful for comparison work: pulling together five versions of a course, surfacing what's consistent, flagging contradictions. It's much less useful for deciding which version should become the standard, because that decision involves judgment about your audience, your culture, and your regulatory exposure. The right model is AI as research assistant, with a human owning the design. Our guide to instructional design covers what that human ownership actually looks like.

How long does a standardization project typically take?

It depends on the topic and the scope, but a focused first project usually takes between four and twelve weeks. A compliance refresher with three existing variants and clear source material can land closer to four. A complex client-facing process with twelve regional versions and competing stakeholders runs longer. The first project takes longer than the second, the second longer than the third, because each one builds the workflow.

Let’s figure out if we’re the right fit.

Tell us what you’re working on. We’ll give you an honest read on whether we can help — and what it would take.