This article was originally published in The Gazette.
We don’t put a sticker on a slot machine that says “This may be addictive” and expect compulsive gamblers to stop. So why would it work for Instagram?
In a bold move aimed at addressing youth mental health, New York lawmakers have passed a bill requiring warning labels on social media platforms. These labels — modelled after those on cigarette packages — are meant to inform users, especially teens, of the mental health risks of excessive screen time.
The intentions are commendable. But is this an effective strategy?
The core issue with this approach is the assumption that rational, conscious decision-making can override the powerful unconscious mechanisms that drive social media use. That’s simply not how the brain works.
Social media platforms don’t operate on the surface level of logic and reason. They target our attention through the brain’s deeper reward systems — particularly the mesolimbic dopamine pathway, which governs motivation, craving and pleasure.
When a user gets a like, a notification or even just sees a red badge on an app icon, dopamine is released. Over time, the brain starts associating cues — like the glow of a phone screen or a buzz in the pocket — with the possibility of a reward. These are the hallmarks of behavioural addiction.
Enter the warning label. It appeals to the conscious mind, attempting to insert logic into a process already hijacked by unconscious conditioning. But by the time a user sees it, their brain has already been primed. It’s like trying to stop a stampede with a cardboard sign.

Worse still, for users who are already sensitized — those whose brains have developed strong cue-reward associations — the warning itself may become a cue. Just as a “sensitive content” tag on Instagram can spark more curiosity and engagement, a warning label may inadvertently trigger dopamine anticipation. The warning becomes part of the loop.
We’ve seen this before. In Buyology: Truth and Lies About Why We Buy, Danish author Martin Lindstrom explains how warnings on cigarette packages have not significantly reduced smoking in many populations. Even graphic warnings, while more effective than plain text, are too abstract to compete with the immediate hit of nicotine.
Similarly, a warning about social media’s addictive features won’t stand a chance against the instant pleasure of receiving a like or seeing a new comment. In fact, some studies suggest young users may interpret warnings as a challenge — or as a sign that the content is more exciting. Psychologists call this the “Pandora effect.”
There’s also the risk of desensitization. If users see the same warning every time they open TikTok or Instagram, it quickly becomes background noise — something to scroll past without a second thought.
And if the warning is too strong, it can backfire and push users toward more social media use, a phenomenon known as the “boomerang effect.” Once the brain realizes that the warning doesn’t restrict access or carry consequences, it begins filtering it out entirely — just like cigarette warning labels.
The solution to digital overuse and declining mental health won’t come from superficial labels. It needs to come from structural design changes: adding small barriers or pauses to interfaces, reducing addictive features like infinite scroll, implementing default timeouts and offering tools that encourage reflection and self-regulation.
It also requires education — not just about “risk,” but about how social media manipulates attention and exploits the brain’s wiring.
Empowering users with an understanding of how algorithms work, how reward systems shape behaviour and how their own brains respond is far more effective than a generic warning. Mindfulness, digital literacy and self-regulation should be integrated into education systems — starting in schools.
Good intentions aren’t enough. If we want to build a healthier digital future for youth — and adults — we must design for how the brain actually works, not how we wish it did.
Iman Goodarzi is a public scholar and PhD candidate in marketing at Concordia University’s John Molson School of Business. His research focuses on the role of AI in preventing excessive social media use.