Not doomer, gloomer, zoomer, or even bloomer. And definitely not waiting around for perfect regulation. Here’s who you need now — and why your people will make or break it.
I just finished Superagency by Reid Hoffman and Greg Beato, and as I closed the final chapter, I realized: I don’t belong to any of the camps described in the book — and yet, I belong to all of them at the same time.
I feel like something else entirely — a translator. An “AI Chaos Navigator,” if you will. The kind of person who listens to the boardroom buzzwords, hears the floor-level fears, observes the complete disconnect between them, and does their best to find common ground among all four mindsets through cultural curiosity.
And I’ve got news: if your organization doesn’t have someone like this, you might be in trouble. Not just because AI is moving fast. But because your people are not.
Let me explain.
The Canadian AI Paradox
Canada is proud of its AI roots — and rightly so. We are home to three globally recognized AI hubs: Toronto, Montreal, and Edmonton. Institutions like the Vector Institute, Mila, and Amii attract top researchers, partnerships, and investments. Geoffrey Hinton’s foundational work in deep learning was not just a Canadian contribution; it helped spark the entire generative AI revolution.
At Toronto Tech Week 2025, the energy was undeniable. From start-ups pitching AI-powered logistics to panels on synthetic biology, it was clear we don’t lack innovation in Canada. We’re not lagging on tech. We’re lagging on “translation”.
Because while researchers and founders race forward, most Canadian workplaces are stuck in the in-between — overwhelmed, unclear, and increasingly disengaged. In one corner, you have Geoffrey Hinton warning us about LLMs developing subjective experience and calling for regulation. In another, optimistic builders like Nick Frosst argue for progress and product–market fit.
But in the middle? People with their diverse needs and values, confused by the new tools they’ve been told to embrace. Employees who fear being replaced or redefined. Managers and teams expected to “keep up” without a map.
This is not a policy gap. It’s a change leadership gap.
What AI Readiness Actually Requires
We’ve mistaken compliance for capacity. And upskilling for transformation.
But let’s be clear: AI readiness is not a checklist. It’s a cultural operating shift.
That means fewer passive participants and more people capable of critical thinking, systems navigation, and ethical decision-making in ambiguity.
It’s also about emotional infrastructure.
I keep coming back to Prof. Dr. Nick van Dam’s framing of resistance (subjectively, the simplest and most focused introduction to change leadership I’ve found): people either don’t get it, don’t like it, or don’t like YOU.
That third one? That’s the most underestimated. Because AI isn’t just introducing new tools — it’s challenging status, stability, and identity.
AI maturity isn’t a software stack. It’s an internal culture stack. And most organizations are still running on legacy wiring.
What Smart Regulation Needs to Learn from Human Systems
Let me be clear: I’m pro-regulation.
In fact, I’ve written before about the dangers of a “predictable society” — a world where algorithms trained on yesterday’s data define tomorrow’s decisions.
We need oversight. We need thoughtful guardrails. And we need accountability.
But as Superagency argues, smart regulation can’t be slow, reactive, or disconnected from the complexity of real systems. It must be agile, participatory regulation that evolves alongside both the technology and the people living with its consequences.
The final chapter of Superagency captures this dilemma with rare clarity:
“What’s so promising about the current moment — and simultaneously so disconcerting — is that in very real ways everyone on the planet knows less about the world to come than we’ve known in decades, maybe centuries.”
We’re no longer operating with a full map. And that means we can’t regulate the future the same way we regulate the past.
There’s a tendency, especially in risk-averse cultures like Canada, to wait until every potential danger has been quantified, debated, and risk-assessed before we move. But as Superagency reminds us, prudence alone is not progress:
“Temporary imbalances won’t necessarily course correct without deliberate intervention… Progress cannot be made through careful planning alone. It takes experimentation, learning, adaptation and improvement. The key is iterative deployment in pursuit of the better future, that can prevent worse futures.”
This is where I see the cost of inaction mounting. Not just politically. But inside our organizations.
Because while we’re still trying to perfect the rules, many of our people are operating without any compass at all. No psychological safety to experiment. No time to reflect. No structure to help them move from passive consumption to active agency.
As Superagency puts it:
“Technology is a time-tested key to human flourishing… To accomplish any of these things we had to envision what could possibly go right.”
This — what could possibly go right — is the mindset I believe we must nurture across all levels of our systems.
AI isn’t just another tool. It’s a portfolio-level force. A strategic asset. And as the authors write:
“We should think about existential threats not as standalone possibilities, but rather as a portfolio of risks… AI exists as a strategic asset that can be leveraged to address multiple existential threats simultaneously.”
If we lead with fear, we’ll retreat to policy without practice. If we lead with only hype, we’ll ignore the humans struggling to keep up. But if we lead with curiosity, cultural intelligence, and the courage to co-design change with the people inside our systems — we can move forward without leaving trust behind.
That’s Why We Need Change Agents at Every Level
Regulation alone won’t translate AI into value. Neither will dashboards, toolkits, or another cross-functional task force with a vague charter.
Let’s be blunt: not every HR person, comms lead, or department head is ready to steer AI transformation. That’s not a judgment — it’s a structural gap. For decades, we’ve built org charts that reward predictability, not adaptability. KPIs that prize optimization, not reflection. We’ve trained for process, not perception.
So when AI enters the room — with all its ambiguity, acceleration, and uneven distribution of knowledge — we shouldn’t be surprised that the room freezes. Or worse, pretends to move by generating new templates no one understands or uses.
We don’t need more templates. We need change agents embedded inside the system.
These are not unicorn hires. They’re often already on your payroll. But they’ve been flattened by hierarchy, over-tasked, ignored, or simply never recognized as such.
They’re the employee who keeps asking better questions in meetings or in emails, the team lead who bridges friction quietly, the manager who dares to say, “We’re not ready yet — but here’s how we could be.”
Here’s what these AI-literate, culturally intelligent change agents actually do:
- They translate between executive vision and front-line experience — ensuring your strategy doesn’t collapse on contact with reality.
- They build or re-build trust in rooms where AI is introduced — not with cheerleading, but with clarity, relevance, and cultural intelligence.
- They spot low-hanging fruit and help teams prioritize meaningful and productive action over perfection — one pilot at a time.
- And they hold space for both fear and innovation — which is what actual transformation requires.
As I’ve written before: without culturally intelligent (read: beyond ethnicity) humans embedded across systems — not just at the intersection of compliance and technology — every smart policy will collapse on contact with lived experience.
This is not about heroism. It’s about structure.
We need to design for distributed agency — not just automate old power structures faster. And yes, sometimes that design starts with creating a role of “AI Chaos Navigator” (call it whatever you want).
But more often, it starts with identifying and empowering the sense-makers you already have.
Ready to Lead the Chaos?
If you’ve made it this far, chances are you’re already an “AI Chaos Navigator” in spirit — whether or not your title reflects it.
Maybe you’re the team lead quietly helping your department adapt. Maybe you’re the only one in the room asking, “Should we?” not just, “Can we?” Or maybe you’re the person everyone turns to when change gets messy — because you know how to make it human again.
This work matters. And it’s needed now. Not when the next mandate drops. Not when the regulation finally lands. But today — in conversations, decisions, priorities, and budgets.
(If you’re a senior leader, ask yourself: Who’s helping your people make sense of all this? Not just adopt new tools, but translate change across teams, cultures, and systems?)
Those people already exist inside your organization. And they might not be the ones right in front of your eyes.
They just need the time, tools, and trust to lead well.
Explore our 1:1 Coaching – for individuals and organizations serious about building real, culturally intelligent capacity.
And if you’re an immigrant navigating the Canadian job market, our upcoming email newsletter shares practical insights on how to build influence as a culturally intelligent change agent — even without a formal title. You’ll learn why this is one of the smartest strategic moves you can make in today’s workplace. Subscribe here.
Because systems don’t change on their own. People do.
We don’t need everyone to become an AI expert. But we do need more people willing to hold the messy middle — to lead through uncertainty, translate complexity, and ask the questions no algorithm can.
Call it a new role. Call it common sense. Call it professionalism in action.
Whatever name it goes by in your organization, invest in it.
Because in a world flooded with noise, it’s not more information we’re missing.
It’s interpretation. It’s connection. It’s courage.
And that, more than any tool, will determine whether this next era of innovation works for us — or just around us.
Thank you for reading, and I look forward to your thoughts and comments!
Kindly,
Inna