
The conversation most leaders skip before implementing AI

AI specialist Jonas Slaunwhite says defining the goal first — not the tool — determines whether AI delivers value
December 11, 2025 | By Darcy MacDonald



When Jonas Slaunwhite sits down with a client’s senior leadership team, the pattern is almost predictable. Someone pulls up an AI tool they’ve been experimenting with. Someone else mentions a pilot that “kind of worked.” The enthusiasm is undeniable, but so is the disconnect between the experiments and any shared objective.

“Meaningful AI projects really need to start with the business objective,” Slaunwhite explains. “Technology decisions come after.”

As a senior consultant specializing in AI and data at BDO Canada, Slaunwhite has seen organizations rush toward AI adoption amid competitive pressure and stakeholder expectations. That scramble pushes them to adopt AI long before they’ve defined the outcome that would make the work matter.

“We’re not just doing these projects for fun experimentation anymore,” Slaunwhite notes. “When the technology first came out, we needed to explore what works. But now the focus is, ‘How do we enable this technology in your business’s day-to-day?’”

To help professionals apply the technology in their work, Slaunwhite is hosting Building AI Projects That Deliver, a one-day Signature Session offered by the John Molson Executive Centre. The session was created to support professionals in developing a structured plan to get AI ideas off the ground faster without wasted effort.

In his client engagements, Slaunwhite helps teams connect AI capabilities to their business goals, moving them away from scattered experiments and toward a structured approach built around the outcomes they want.

Objectives set the outcomes

Across organizations, Slaunwhite encounters the same tension: leaders want to make progress but lack a reliable way to connect ambition to execution. Teams and departments see potential through their own lens. IT understands systems but not always daily bottlenecks. Business teams understand processes but can’t always judge feasibility, data needs, or risk. Without alignment, initiatives stall or advance on assumptions.

Jonas Slaunwhite, technology manager at BDO Canada

His advice never changes: before discussing tools, define the outcome that supports your organization’s three-to-five-year plan.

“When you start with the problem, you eventually land on the right application,” he notes. “When you start with the tool, you get lost.”

A shared objective becomes the anchor for technical decisions, resource allocation, and stakeholder expectations.

Opportunities matter more than possibilities

Once outcomes are clear, leaders must determine which ideas deserve attention. Slaunwhite uses a simple model based on value, feasibility, and risk.

It starts with departments describing their work and the pain points impeding their progress. Each idea is then assessed the same way: value asks whether the solution actually addresses the problem, feasibility asks whether the team can deliver it or build the capability with the right training, and risk weighs privacy, ethics, compliance, and the responsibility to earn and maintain trust.

“Every department knows where they struggle,” Slaunwhite says. “When you evaluate ideas with discipline, the right opportunities begin to stand out.”

This step helps organizations avoid attractive but impractical pilots and focus on use cases that can be implemented, measured, and defended.
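To make the rubric concrete, here is a minimal sketch, in Python, of how a team might record and rank candidate use cases against those three criteria. The one-to-five scale, the additive scoring rule, and the example ideas are illustrative assumptions, not part of Slaunwhite's model.

# Illustrative sketch only: a simple rubric for ranking AI use-case ideas
# by value, feasibility, and risk. The 1-5 scale, the scoring rule, and the
# example entries are assumptions made for this example.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    value: int        # 1-5: does it solve a real, painful problem?
    feasibility: int  # 1-5: can the team deliver it, or learn to with training?
    risk: int         # 1-5: privacy, ethics, compliance, trust exposure (higher = riskier)

    def score(self) -> int:
        # Higher value and feasibility help the case; higher risk counts against it.
        return self.value + self.feasibility - self.risk


ideas = [
    UseCase("Invoice triage assistant", value=4, feasibility=4, risk=2),
    UseCase("Customer-facing chatbot", value=5, feasibility=2, risk=4),
]

# Rank the candidates so the strongest cases surface first.
for idea in sorted(ideas, key=lambda u: u.score(), reverse=True):
    print(f"{idea.name}: score {idea.score()}")

In practice, the numbers matter less than the conversation behind them; the point of the exercise is to make trade-offs visible and comparable across departments.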

Early alignment clears the way

One of the most persistent barriers to adoption is the assumption that AI belongs to IT. Slaunwhite sees the consequences when technical teams are asked to build solutions for problems they didn’t define or when business teams push for outcomes without understanding constraints.

“Everyone in the organization is going to interact with these systems eventually,” he emphasizes.

Alignment between those who understand the work and those who design the systems keeps the problem grounded in day-to-day reality and the solution grounded in technical feasibility. When teams establish a shared language around outcomes, data, and responsibility, projects move forward with clarity.

Built-in responsible use

Even the most promising use cases can fall apart if they neglect responsible use. Slaunwhite’s model integrates this early. Privacy, bias, compliance, and public trust shape whether a use case is viable — even when the financial case looks strong.

“It might look good on a spreadsheet,” he says. “But if it damages trust, it is not a good idea.”

Responsible use strengthens credibility inside the organization and ensures AI supports the mission rather than contradicting it.

People-first projects prevail

For Slaunwhite, AI’s success depends on people. Technology can’t generate value if teams are anxious, excluded, or unprepared.

“We need to bring our people along with us,” he says. “We can’t just introduce this technology and figure out what was really important afterwards.”

Upskilling, transition support, and clear expectations help employees see the value in the transformation. When people understand how AI supports their expertise, the time saved becomes an investment in innovation and service. Preparing people prevents the sink-or-swim dynamic that undermines culture and adoption.

The human factor extends beyond employees, Slaunwhite notes.

“Who are your customers, who do you serve, and what community impact do you have?” he asks. “That needs to be a north star for all of us.”

Leadership holds the keys 

Slaunwhite’s method integrates business and technology strategy in a structured setting, guiding leaders through common problems alongside peers from across sectors. The goal isn’t to turn executives into tech experts, but to give them the judgment to decide where AI meaningfully fits in their long-term plan. That way, business strategy guides the technology, not the other way around.

“When leaders understand their goals, their people and the value they want to create, everything becomes easier to navigate,” says Slaunwhite. “That is when AI becomes useful.”

 

Register for Slaunwhite's one-day course Building AI Projects That Deliver before January 12, 2026 and save $100 with code SIGN26.


