“What does good look like for my role?”
It’s the question employees ask most when businesses start adopting AI. And it’s not a sign of panic; it’s a sign of realism. People know AI is coming. They know it changes things. What they don’t know is what it means for them personally.
Here’s the uncomfortable truth: it is naïve to pretend AI won’t take any jobs. It will. It already has. But it won’t take every job, and it won’t take the jobs of those who evolve with the work. The roles most exposed are those built entirely around repetition, not judgement; predictable tasks, not nuanced decisions.
For everyone else, the real challenge isn’t replacement. It’s redefinition. AI forces leaders to rethink what “good” looks like. And when expectations shift without explanation, even the most capable teams start second-guessing their value.
Most people can handle change. What they can’t handle is ambiguity, especially the kind delivered by an algorithm that quietly rewrites their workflow at three in the morning.
The public narrative still leans heavily on the robot-apocalypse story. It makes for excellent headlines. Far fewer clicks come from: “AI supports human capability in nuanced ways across various functional domains.” (I can’t imagine why.)
Inside businesses, the picture is more complex. Roles are not disappearing en masse, but they are changing, often faster than internal structures can keep pace. Value is shifting: from doing to deciding, from processing to interpreting, from execution to oversight.
And this shift is happening while leaders juggle economic pressure, regulatory scrutiny, hybrid working, and investors expecting the magic productivity boost every vendor promises.
The truth is that AI strategies rarely fail because the technology is wrong. They fail because the people, culture, and operating model around the technology aren’t ready for what it changes. AI triggers difficult questions about accountability, capability, trust, and leadership. These are not “IT issues”. They sit squarely at the heart of how the business functions, or fails to.
From Tasks to Judgement
In every company I work with, the same transition plays out. AI reduces the volume of manual work but raises the importance of judgement. Being good at your job used to mean being fast, accurate, and reliable. Now it means something else entirely: interpreting outputs, challenging assumptions, understanding context, and making better decisions.
For some people, this feels like opportunity. For others, it feels like having their job rewritten mid-performance without so much as a rehearsal. We underestimate how much identity is tied to routine. AI disrupts that. Suddenly, the skill isn’t knowing the process; it’s knowing how to make sense of what comes after the process.
The people who thrive aren’t necessarily the deepest technical experts. They’re adaptable thinkers. They ask better questions. They treat AI as a collaborator, not a competitor.
I’ve seen businesses with the latest and greatest technology fail spectacularly because the culture wasn’t behaviourally ready for the change ahead. Often it starts subtly: people don’t trust AI outputs; leaders champion AI publicly but quietly avoid it in their own decisions; teams hesitate to experiment for fear of making mistakes; and departments argue over who “owns” the model as if it were a company car.
AI exposes cultural misalignment instantly. What do we do when data contradicts a leader’s instinct? How do functions collaborate when algorithms sit between them? How transparent are we willing to be about uncertainty?
When people feel empowered to experiment, AI accelerates improvement. When they don’t, it becomes noise, another system no one trusts but everyone is expected to justify.
It’s comforting to imagine the AI skills gap is all about PhD-level maths and data science. That narrative conveniently places responsibility “over there” with technical teams. But in reality, the true gap is operational and cultural. It lies in data literacy, AI product management, responsible oversight, workflow design, and change leadership.
These are the skillsets that sustain AI adoption. Building these capabilities internally isn’t a nice-to-have; it’s the foundation for resilience and long-term competitiveness.
One of the most outdated ideas in business right now is that AI can be “implemented” and then ticked off a project plan. AI isn’t a project; it’s a capability. And it has to live inside the operating model, not outside it.
Embedding AI means rethinking how decisions are made, how performance is measured, how workflows are designed, how governance operates, and how customer value is created. When AI becomes part of the business model, something powerful happens: risk reduces, consistency increases, learning accelerates, and value compounds.
If AI vanished from your company tomorrow and nothing broke, you haven’t embedded it; you’ve merely piloted it.
The real engine of AI transformation is strong leadership. And not leadership by memo, slogan, or executive off-site slide deck, but leadership by behaviour.
The leaders who make AI adoption stick are those who offer clarity, not motivational vagueness. They create psychological safety for experimentation and align incentives with future work rather than legacy behaviours. They communicate purpose without patronising, and they accept that people adapt at different paces.
Great leaders don’t impose AI; they build confidence in it. They turn uncertainty into understanding and make the path feel navigable. Most importantly, they acknowledge the truth: yes, AI will change work; yes, some roles will disappear; but many more will evolve and people can evolve with them if leaders provide the clarity and capability to do so.
AI isn’t a threat to people. A lack of clarity about AI is.
When businesses fail to define how roles evolve, how skills develop, and how decisions change, uncertainty fills the vacuum, and that uncertainty becomes friction. Friction slows progress.
The leaders who succeed do so intentionally. They redesign roles, build capability, shape culture deliberately, and embed AI into workflows, governance, and decision-making. That is the difference between an AI experiment and an AI-enabled enterprise.
The question “What does good look like for my role?” isn’t a challenge; it’s an invitation. It shows people can see the change coming and want to be part of it.
When leaders answer that question honestly and with clarity, people stop fearing AI and start imagining what it makes possible. That’s where the real value lies. Not in automation. In elevation.
If you’re exploring how ready your business truly is for AI, we’ve created the AI Transformation Readiness Assessment, a quick, evidence-based diagnostic that benchmarks your capability across culture, leadership, data, governance, and workflow maturity.
You’ll receive a free readiness score and personalised insights you can act on immediately.
