
How to tell when AI output can be trusted

AI consultant Jonas Slaunwhite explains what to look for when auditing generative AI output
April 28, 2026 | By Darcy MacDonald



Workers have learned firsthand how generative AI can draft emails, summarize research, and prepare presentations in seconds.

But that speed can conceal a major problem: reliability. AI can produce convincing results that are often incomplete, misleading, or simply wrong. 

Jonas Slaunwhite, a consultant specializing in data and AI solutions and the instructor of Working With Generative AI at Concordia Continuing Education, sees organizations struggle with this constantly. They adopt AI for efficiency, only to realize it is harder than expected to know when to trust the output.

That responsibility falls on the user.

“The overall risk is blind trust,” Slaunwhite says. “People need to remember that they are in the driver’s seat.”

Understanding when to trust AI

One of the biggest misconceptions about generative AI is that its polished writing suggests reliable reasoning. In reality, generative models are trained to predict the next likely word, not to guarantee factual accuracy. 
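A toy sketch can make this concrete. The snippet below is purely illustrative and assumes nothing about any real model: it samples a "next word" from an invented probability table. The most likely word usually wins, but the choice is driven by likelihood, not by whether the resulting sentence is true.

```python
import random

# Invented probabilities for illustration only; a real model learns
# these weights from training data. Prompt: "I need to open a bank ..."
next_word_probs = {
    "account": 0.60,
    "loan": 0.25,
    "holiday": 0.10,
    "river": 0.05,
}

def predict_next_word(probs):
    """Sample one continuation in proportion to its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# "account" is the most likely pick, but any candidate can appear.
# Nothing in this process checks facts; it only ranks likelihood.
sample = predict_next_word(next_word_probs)
```

The point of the sketch is that fluency and factual accuracy are produced by the same mechanism, so polished phrasing is not evidence of correctness.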

That means AI output should never be accepted at face value. Instead, professionals need to test it against their own expertise, established processes, or trusted sources.

The limits of probabilistic models

Another source of confusion, Slaunwhite says, is how generative AI produces content.

Unlike traditional software that produces the same result every time, generative models are probabilistic. A small change in a prompt can yield dramatically different responses. That inconsistency can confuse new users and create misplaced frustration.

Jonas Slaunwhite, Manager of Data & AI at BDO Canada

“Some people are surprised when the answer changes,” Slaunwhite explains. “But the system is designed to provide the next most likely response based on what you give it.”

Recognizing this variability helps users interpret results more accurately. Instead of expecting a single correct output, they should test multiple prompts, refine the context, and compare results.
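That compare-and-tally habit can be simulated in a few lines. The `mock_generate` function below is a hypothetical stand-in for a real model call (no actual API or SDK is used, and the response options are invented); running the same prompt several times and counting the distinct answers shows why identical prompts need not produce identical output.

```python
import random
from collections import Counter

def mock_generate(prompt):
    """Hypothetical stand-in for a generative model call.

    Responses and weights are invented: the call is random,
    so repeated runs with the same prompt can differ.
    """
    responses = ["Option A", "Option B", "Option C"]
    weights = [3, 2, 1]  # one answer is more likely, none is guaranteed
    return random.choices(responses, weights=weights, k=1)[0]

def compare_runs(prompt, n=10):
    """Run the same prompt n times and tally the distinct answers."""
    return Counter(mock_generate(prompt) for _ in range(n))

tally = compare_runs("Summarize the Q3 report")
```

With a real tool the tally would come from resubmitting the prompt by hand, but the workflow is the same: treat each response as one draw from a distribution, and compare several draws before trusting any one of them.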

Building organizational readiness

For managers, the challenge extends beyond individual skill. Slaunwhite argues that organizations must create a culture that makes space for both curiosity and caution. That starts with clarity of purpose. 

“The key is to identify high-value problems that AI can solve, and have the right tools to do the job effectively and responsibly,” Slaunwhite says. “If you’re not confident in your objectives, whatever tool you pick, you might not solve the problem.”

Strong adoption also requires early involvement from compliance, legal, and risk teams. Introducing a tool without those voices at the table can stall projects or expose organizations to unnecessary risk. 

By contrast, when managers set objectives, define guidelines, and encourage staff to point out grey areas, employees feel empowered to experiment responsibly.

Organizational readiness is less about buying the right software and more about building the right culture. When leadership presents AI solutions as one tool among many, useful in some cases and unsuitable in others, that can help mitigate both reckless enthusiasm and skepticism. 

Using AI as a thinking partner

In his classroom at Concordia Continuing Education, Slaunwhite encourages learners to use generative AI not as an answer engine, but as a partner for deeper thinking. Asking a model to explain possible approaches to a problem, without giving the final solution, prompts reflection rather than shortcutting the learning process.

He emphasizes that what matters is how professionals use the time saved. 

“If we’re using AI, we are getting some time back,” he says. “The question is, what are you doing with that extra time to create impact?”

For managers, the lesson is similar. Auditing output is not just a question of catching mistakes. It’s also ensuring that the efficiency gained translates into better decisions, stronger relationships, or more meaningful contributions to the organization.

Human judgment prevails

Slaunwhite frames the adoption of AI as an opportunity to elevate professional standards. By pairing generative tools with expertise, professionals can test assumptions, surface counterarguments, and refine outputs to a higher level of quality. But accountability cannot be outsourced.

“The tools can speed up the process,” Slaunwhite says. “But they don’t replace our responsibility to apply judgment.”




© Concordia University