
Analyzing Learning Effectiveness with Amanda Abraham

Krystal Tolani Motwani

In this episode of the L&D Explorers Podcast, we’re joined by Amanda Abraham, Senior Learning Manager at CPA Ontario, who brings years of experience in instructional design and curriculum development. 

Amanda shares her insights on evaluating learning effectiveness, aligning L&D strategies with business goals, and the importance of tailoring learning for diverse skill levels—including the C-suite.

Whether you’re a seasoned L&D professional or just starting out, this episode is packed with actionable strategies to elevate your learning programs. Don’t miss it!

Key takeaways 

1. Reactions vs real impact

Reactions are part of the learning evaluation process, but they’re not the only element.

Reactions are an important but subjective part of learning evaluation. Real impact is assessed by:

  • Actual learning outcomes.
  • Positive behavioral changes.
  • Achievement of business objectives (direct or indirect).

2. Aligning training with business goals

For training to succeed, we need to know the business goals we’re trying to affect.

You should already know what you want to assess before the training takes place. Start by understanding the organization's strategy, vision, and mission. Collaborating with HR and performance managers is a good way to gain visibility into those goals.

3. The future of learning evaluation

The future will see more integration of technology, but the key will always be connecting the evaluation to tangible outcomes.

Technology should enhance what you bring to the table in learning evaluation. While virtual reality and artificial intelligence can assess a variety of skills, combining data with L&D expertise will always achieve the best results. 

How to manage training requests

Evaluate each request by asking:

  1. Is training the best solution?
  2. Can stakeholder expectations be managed effectively?
  3. Does it conflict with ongoing priorities?

Actionable insights

Establish training outcomes that align directly with business objectives by consulting stakeholders early to identify measurable gaps and performance targets. Additionally, using tailored approaches, such as coaching for C-suite leaders and cohort-based learning, ensures that training effectively addresses varying skill levels.


We hope you enjoyed this episode of our L&D Explorers Podcast! Subscribe to our YouTube, Podbean, or Spotify so you don’t miss the next episode. 



Transcript:

Dan Gorgone:
Hey, everyone! Welcome back to this episode of the L&D Explorers Podcast from GoSkills. On today’s episode, we speak with Amanda Abraham, senior learning manager at CPA Ontario. She has years of experience in instructional design and curriculum development. Today we’ll be talking about analyzing learning effectiveness. We’ll discuss how to measure the true impact of your training, balance short-term and long-term initiatives, and meet executive training needs. My name is Dan Gorgone, and I’m the course producer at GoSkills. I hope you enjoy this discussion.

Joining us today is Amanda Abraham. Thank you for being here, Amanda!

Amanda Abraham:
Thank you for having me, Dan.

Dan Gorgone:
So, you're a senior learning manager at CPA Ontario. You have years of experience in instructional design and curriculum development, and currently lead a team of instructional designers, developers, and editors—all creating e-learning content. Isn’t that right?

Amanda Abraham:
That's correct, yes. I actually joined CPA Ontario about 18 months ago. During this time, it’s been interesting because this is the first time I've had a formal people management role, with so many people reporting to me—both contract and full-time staff. I organize and handle the work that goes into creating e-learning certificate programs with the help of the staff you mentioned.

Dan Gorgone:
Our topic today is analyzing learning effectiveness. I know that in many cases, it’s the forgotten piece of curriculum and course development—or at least it’s the piece that comes at the end. By the time we get to it, we’re already moving on to other courses or being told, “Now that you’ve finished that, can you move on to this?”

It’s easy to assume the instant reactions we get are a good enough measure of how a course is being received. Comments like, “Hey, I liked it,” or “You’re really good at that,” are nice, but what do they mean?

I’ve worked with organizations that emphasize collecting survey data immediately after training—before people even leave the room. It feels surface-level because the real impact of training hasn’t happened yet. What can L&D managers do to differentiate between instant reactions and the real impact of training?

Amanda Abraham:
That’s a great question. It’s important to recognize that reactions are part of the learning evaluation process, but they’re not the only element. Other aspects include assessing actual learning, observing positive behavioral changes, and, most critically, determining whether the training has, directly or indirectly, achieved a business goal.

Reactions are the easiest to gather but also the most subjective. They depend on the individual’s experience, their rapport with the trainer, the environment, or even the peers they’re with during the training.

However, key stakeholders and decision-makers are less concerned about those reactions. They care more about the return on investment. They want to know:

  • What did employees actually learn?
  • Can you demonstrate a measurable knowledge lift?
  • If it’s behavioral training, have skills like communication or presentation improved? And how can you prove it?

If the training doesn’t tie back to achieving a business goal, its perceived value is going to be low in the eyes of senior executives. You can design a creative, engaging experience, but without measurable results, you might not get the recognition or outcomes you’re aiming for.

Dan Gorgone:
You mentioned a few key things there, and one that really stands out is the involvement of executives and leadership. Building something that resonates is so important to instructional designers. We want our work to be valued; we want it to make a difference. But “make a difference”—what does that even mean?

This brings up the metrics you mentioned earlier. For training to succeed, we need to know the business goals we’re trying to affect. We can’t develop courses in isolation; we need alignment with corporate initiatives and direction from leadership.

That seems like the ideal scenario. How can L&D professionals ensure they get the support and guidance they need from leadership to bridge that gap?

Amanda Abraham:
That’s an excellent question, and it’s one I’ve encountered throughout my L&D journey, especially early on. One thing that has really helped me is taking the time to look at the organization’s mission, vision, and strategic goals before planning an L&D strategy.

Understanding those strategic goals is key, but how do you gain access to that information? That can be tricky. Often, an HR business partner—or someone who manages performance evaluations—has visibility into those goals. L&D professionals need to collaborate closely with them.

For example, at a professional services firm I worked for, one of the key business goals was to grow the organization by 20% over three years. Leadership identified that upskilling directors in business development was critical to achieving this growth.

I pushed for a meeting with the CEO and asked very specific questions:

  • What gaps are you seeing?
  • What specific issues should the training address?
  • How will you measure success?

Together, we defined clear metrics, like requiring each director to bring on three new clients within the next year and improving their lead conversion rates. These metrics were then tied to their performance evaluations, creating alignment between the training and their business objectives.

Dan Gorgone:
That’s a great point. It underscores how one size does not fit all. What worked at one organization might not work somewhere else.

Getting leadership buy-in in the form of clear direction and support isn’t just valuable for the L&D team—it’s critical for the learners who will benefit from the program.

Let’s take a step back, though, because once you have that alignment and those clear goals, you’re faced with another challenge: building a program that meets those goals while addressing the needs of a diverse audience.

You’re likely to have participants at different experience levels—some just starting out, some transitioning from other roles, and others in leadership positions. How do you build a program that keeps those key metrics in mind but also serves a diverse audience effectively?

Amanda Abraham:
That’s a great question. One thing L&D managers need to do more of—and it’s something I learned through experience—is recognize that not every problem requires a training solution.

Sometimes, when a request comes in, it’s tempting to jump straight into designing a program. But the first question should always be, “Is training the best solution here?” For smaller issues, the answer might be no. It could be that better communication from senior leaders, mentoring, or coaching would be more effective.

The second thing is managing expectations. Training can solve some problems, but not all of them. It’s important to communicate this to stakeholders. I also believe training must be tied to performance goals and key performance indicators (KPIs). If training doesn’t connect to those, people may not see its value.

Finally, when you have a diverse audience, grouping participants into cohorts based on their skill levels can make a big difference. For example, I’ve designed programs where junior and senior staff attended separate sessions tailored to their specific needs.

But sometimes, you’ll find participants with different experience levels in the same session. When that happens, you might get feedback like, “I already knew all of this—why was I here?” In those cases, I involve the more experienced participants by asking them to share their knowledge and insights with the group. This not only values their expertise but also enhances the experience for everyone else.

It’s not easy, especially when managing training for hundreds of people. But putting in the effort to tailor the program will result in exponential value and much greater satisfaction from stakeholders.

Dan Gorgone:
I love that. There are a couple of great insights in there. One that stands out is managing expectations—ensuring that L&D isn’t seen as the “magic wand” or silver bullet that can fix everything.

I often compare training to sports coaching. Coaches teach fundamentals, run drills, provide strategies, and review game footage. But it’s not until the players get on the field and play the game that you see if they’ve truly applied what they’ve learned. Similarly, training lays the groundwork, but it’s the day-to-day actions of the learners that bring about results.

The other thing I loved was involving senior participants. I’ve seen situations in live training where the more experienced attendees sit in the back, disengaged, thinking, “I’ve heard all of this before.” Inviting them to contribute and share their experiences is such a great way to re-engage them.

One challenge with training, though, is that requests often come from leadership to solve short-term problems. It’s reactive—maybe a milestone was missed, or new market forces have emerged. In these cases, the focus is on immediate improvement.

But training also has a long-term component. It’s part of continuing education, something that should develop skills over time and serve learners throughout their careers. From your perspective, how do you balance addressing short-term needs with creating programs that provide long-term value?

Amanda Abraham:
That’s such a great question and one I’ve encountered many times. The first step is to start with a robust L&D strategy that aligns with the organization’s strategic goals—whether that’s a three-year plan, a five-year plan, or even a ten-year plan. Many organizations outline these goals in town halls, leadership meetings, or internal communications.

As an L&D manager, it’s essential to look at those strategic objectives and design a program that supports different levels of the organization in achieving them. For example, in organizations with a linear career path—associate to senior associate, manager to senior manager, and so on—you can map out what someone needs at each stage to progress.

You might design consistent, foundational training for associates and senior associates, ensuring they acquire the skills they need to be promotable. Many organizations won’t promote someone until they’re already demonstrating the behaviors and competencies required for the next level, so training should support that progression.

At the same time, short-term requests will always come up. These might involve addressing technical skills, soft skills, or leadership gaps that require immediate attention. That’s okay—they can still fit into the broader strategy.

When I design an L&D strategy, I look at multiple dimensions and allocate time and resources accordingly. It’s also important to manage expectations. If training isn’t the right solution for a problem, I’ll say so. And if it is, I’ll communicate realistic timelines that don’t overwhelm the team but still align with organizational priorities.

Short-term needs are inevitable, but as L&D professionals, we need to remain adaptable while ensuring the long-term strategy stays on track.

Dan Gorgone:
That’s such a great approach. Short-term requests will always exist, but balancing those with a long-term vision ensures you’re not constantly in reactive mode.

Let’s dig into evaluating training effectiveness. People in an organization are often at different levels, from entry-level to senior executives. Training can take many forms—online courses, live workshops, boot camps, mentorship programs, or retreats.

When you’re working with such a wide range of participants, how do you assess the effectiveness of training across all levels, especially when L&D professionals might not have full visibility into what’s happening at higher levels like the C-suite?

Amanda Abraham:
That’s a great question. One size definitely does not fit all when assessing training across different levels. Evaluating senior executives and C-suite leaders requires a much more tailored approach compared to junior or mid-level employees.

For example, at a large bank I worked for in Canada, a senior executive was flagged for having great business results but poor people management skills. Leadership was concerned because if he was promoted, his team would grow, and his lack of management skills could create larger issues for the organization.

In this case, we didn’t start with training. We began with a 360-degree feedback process, where peers, direct reports, and leaders provided input. A coach worked with him to debrief the results and identify specific areas for improvement.

Once he understood the feedback, we designed a targeted training program that included a mix of asynchronous and instructor-led sessions. After six months, we conducted another 360-degree assessment to measure progress. The results showed significant improvement, and he was successfully promoted.

For senior leaders, the evaluation process often involves custom assessments, leadership involvement, and clear communication about why the training is necessary. Transparency is key—leaders need to understand how the training supports their development and the organization’s goals.

Dan Gorgone:
I love that transparency. It’s such a critical element. When leaders know the goals, the reasons behind the training, and see the organization investing in them, it fosters trust and buy-in. It also sends a message that the organization values their growth and contributions.

Amanda Abraham:
Absolutely. And as people progress in their careers, the cohort becomes smaller and smaller. You might have a large base at the junior level, but at the top, you’re working with just a handful of individuals. That smaller size allows for highly personalized training, which is crucial because they’re the key decision-makers.

If senior leaders don’t get the right training or see the desired results, the consequences can be significant—not just for them but for the entire organization. Tailoring training to their unique needs ensures they’re equipped to lead effectively.

Dan Gorgone:
That makes perfect sense. Leadership at the top trickles down to every level, so investing in their development is essential.

Now, let’s shift to the future. How do you see the future of learning evaluation evolving, particularly with advancements in technology, data analytics, and AI?

Amanda Abraham:
The future of learning evaluation is so exciting. A few years ago, I had the chance to witness something ahead of its time—virtual reality (VR) training. At a large professional services firm, they piloted a VR program to train people managers on having difficult conversations, like terminating an employee.

In this program, managers used a VR headset to simulate the termination process. The virtual employee reacted in real time based on the manager’s tone and delivery. If the manager was harsh or unempathetic, the virtual employee became agitated, raised their voice, or caused disruptions. On the other hand, if the manager handled the conversation calmly and professionally, the virtual employee exited gracefully—sometimes even thanking the manager.

After the simulation, the program generated an assessment report, highlighting areas where the manager excelled and where they needed improvement, such as empathy or communication. It was an incredible way to measure the impact of training in a realistic, high-stakes scenario.

Of course, VR training can be expensive, so it’s not for everyone. But generative AI tools, like ChatGPT or Copilot, are making evaluation more accessible. These tools can help design evaluation methodologies, analyze data, and suggest improvements. However, they’re not a complete solution—you still need human creativity and expertise to adapt the tools to your organization’s unique needs.

I believe the future will see more integration of technology, but the key will always be connecting the evaluation to tangible outcomes: behavioral changes, business goals, and the long-term impact on people and organizations.

Dan Gorgone:
I love that. Technology is making things more sophisticated, but at the end of the day, it’s still about understanding the human side of learning and change.

One size doesn’t fit all, and having tools that enhance evaluation—rather than replace our expertise—is where the real value lies.

Before we wrap up, can you tell our audience where they can find you online if they want to connect?

Amanda Abraham:
Absolutely! You can find me on LinkedIn—my profile is open, and I love connecting with new people. Learning has been my passion since I left university, and I’m sure it will be my passion until I retire.

I’ve worked in learning and development strategy for many years and am always excited to explore new facets of the field. If you enjoyed this conversation, feel free to reach out—I’d love to continue the discussion!

Dan Gorgone:
Thank you so much, Amanda. This has been an incredibly insightful conversation. I appreciate you sharing your expertise.

Amanda Abraham:
Thank you, Dan. It’s been a pleasure!

Dan Gorgone:
All right, everyone, that’s it for this episode of the L&D Explorers Podcast. Thanks for tuning in!

Krystal Tolani Motwani

Krystal is a Growth Product Manager at GoSkills with a background in digital marketing. She has spent the better part of the last decade working in the EdTech industry. When she's not at work, you can find her listening to podcasts or watching comedy specials on Netflix.