While many organisations have moved quickly to define their AI ambitions, their next challenge is operationalising those ambitions into the everyday behaviour of their people. With nearly two-thirds saying they have yet to properly scale AI across the enterprise, most are still navigating the transition from experimentation to deployment.
But a lack of AI guidelines doesn’t mean a lack of AI use: according to Microsoft’s Work Trend Index, 78% of AI users are bringing their own AI tools to work. This opens the door to inconsistent use, heightened risk exposure, and fragmented adoption.
The good news? It’s not that employees prefer their own way of using AI; they’re simply waiting for instruction. Only 39% of people who use AI at work say they’ve received AI training from their organisation. If your people cannot see how AI directly applies to their role, they will default to personal judgement and their own approaches. Scaling AI guidelines effectively requires practical mechanisms that connect your organisational strategy to the lived experience of your people.
Here are three ways to ensure your AI training reflects your organisational reality and drives actual behavioural change.
Learning rarely sticks after a single session. Behaviour changes when support continues in the flow of work.
Digital guides provide structured, self-led reinforcement that allows your people to apply what they have learned directly within their roles. Rather than relying on memory, provide access to practical prompts, checklists, and scenarios that help your people experiment safely and consistently.
To sustain capability, these resources should be hyper-contextual: grounded in the tasks and decisions your people face in their actual roles.
When digital resources synthesise key concepts from live training and translate them into repeatable habits, AI use becomes disciplined rather than ad hoc.
Learn more about in-role practice for AI guidance in our 2026 GenAI Mindset Toolkit.
Adoption accelerates when learning is social. Left alone, individuals experiment privately. Standards vary. Risk tolerance differs. Assumptions go unchallenged. When learning happens collectively, shared norms emerge.
Peer-based experimentation allows participants to test AI on real problems, compare outputs, question interpretations, and debate judgement calls. Design your sessions to make room for exactly this kind of shared scrutiny.
When teams learn together, they build a common language around reliability, accountability, and acceptable use. Adoption then becomes collective rather than fragmented.
The most powerful case studies are your own. Real organisational challenges feel urgent. If you want AI guidelines to stick, anchor them in situations your people recognise immediately.
Consider asking your leaders and in-house AI pioneers to showcase real-world examples of their AI use in action, positioning these as best-in-class demonstrations of your AI guidelines in practice.
Not sure where to start looking for AI best practice examples? Analyse your workflows. Identify high-impact decisions. Surface common pressure points. Then build learning around those realities. When your people see their own documents, metrics, and constraints reflected in the training environment, relevance increases dramatically.
Grounded in our cross-sector implementation work and pilot programmes with global organisations, our 2026 GenAI Mindset Toolkit explores clear, evidence-based approaches for fostering sustainable and consistent AI habits across your organisation.
Get your copy now to explore the psychological and contextual factors that truly enable sustainable AI adoption, supplemented by practical tools to foster a shared approach and common language around AI use across your organisation.