AI Implementation Meets the Future of Work

Most leaders do not need another upbeat narrative about AI.
They need progress they can defend: better decisions, less friction, faster execution
— with risks understood and controlled.
Richard Reid works at the intersection of AI implementation and the future of work, helping business leaders and key decision makers build capability that actually sticks. He understands AI and its implications for knowledge work, operating models and productivity. But his primary discipline is human: Richard is a psychologist, coach and organisational expert. That combination matters because the failure modes that derail AI are rarely technical. They are behavioural, cultural and structural — and they are predictable.
If you have seen “pilot purgatory”, low adoption after early excitement, or tools that create more noise than insight, you have already met the real problem: humans and organisations under change.

What usually goes wrong?

Why it is not a software issue

When AI enters an organisation, it does not just change workflows; it changes status, identity and decision dynamics.

Under pressure, the brain prioritises threat detection. A perceived threat to competence or role can trigger an amygdala-driven threat response: defensiveness, risk-aversion, politics, and a drop in curiosity. At the same time, decision quality can degrade as cognitive load rises and the prefrontal cortex (associated with planning, inhibition and complex reasoning) gets stretched.

In practice, that shows up as:

• Automation bias: over-trusting confident outputs
• Algorithm aversion: rejecting a system after one visible mistake
• Confirmation bias: using AI to reinforce a narrative rather than test it
• Cognitive overload: more information, more dashboards, more meetings — less clarity
• Role ambiguity: nobody is sure who owns the call when humans and systems disagree
• Incentives that punish learning: experimentation is requested, but mistakes are punished

Richard helps leaders anticipate these dynamics and design around them — so AI becomes a performance lever, not another change programme people privately ignore.

A practical framework: Tools × Teams × Thinking

Richard’s work typically focuses on three connected areas that determine whether AI improves performance and resilience.

1. Tools: AI embedded into real work, with governance

AI creates value when it changes operating rhythms: decisions, handovers, approvals, quality checks and escalation paths. Richard helps leaders move beyond “tools people can try” and into integration that is usable in day-to-day delivery.

This includes practical guardrails and artefacts such as:

    • Clear decision rights and accountability (who owns what, and when)
    • “Human-in-the-loop” thresholds for high-stakes decisions
    • Quality standards and lightweight assurance routines
    • Decision logs and documentation that support auditability and learning
    • Red-teaming and structured challenge for critical use cases

The hard part is not generating outputs. It is making them reliable, safe and adopted in context.


2. Teams: Performance and Resilience

High performance with AI requires psychological safety with standards. People need permission to test and learn, alongside clear expectations for quality, ethics and responsibility.

Richard supports leaders to shape the cultural conditions that accelerate adoption:

    • A learning narrative that reduces fear and status-threat
    • Norms for responsible use (how outputs are checked, cited and challenged)
    • Shared language for uncertainty (so risk is surfaced early, not hidden)
    • Incentives and measures that reward quality and learning velocity, not just speed

Culture is what gets repeated when nobody is watching — and AI adoption lives or dies there.


3. Thinking: Coaching and Counselling (Confidential)

AI should reduce cognitive load, not amplify it. Richard helps leaders become more skilled in their own cognition — attention, bias, motivation, stress response and recovery — so they can keep judgement strong when stakes are high.


Grounded in evidence-based principles (including neuroplasticity and habit formation), the focus is practical: building small, repeatable behaviours that improve clarity and decision quality, such as:

    • Better question discipline (testing assumptions, not just generating answers)
    • Trust calibration (neither blind faith nor reflexive rejection)
    • Attention protection (reducing noise so strategy does not get crowded out)
    • Recovery rhythms that sustain performance over time

You do not need leaders who “know about the brain”. You need leaders who can recognise when judgement is compromised — and reliably bring it back online.

Human–AI collaboration is a capability, not a slogan

Most organisations treat AI capability as a stack and a training plan. The bigger differentiator is collaboration: between functions, across teams, and increasingly between humans and AI systems. Richard helps leaders build operating practices that make collaboration real:

    • How teams evaluate outputs and disagree productively
    • How decisions are made when AI and human judgement diverge
    • How work is redesigned so humans retain judgement, ethics and accountability
    • How AI is used to simplify and sharpen — not to flood the organisation with more content

The organisations that win will not be the ones with the most tools. They will be the ones with the best thinking, the clearest accountability, and the healthiest learning culture.

What support can look like

Richard works with organisations through leadership workshops, executive coaching, programme advisory, and support on organisational design and cultural adoption — always tailored to the realities of performance pressure, limited change capacity, and legitimate governance and reputational concerns.

The outcome: measurable progress, fewer failure modes

Richard’s work is designed for leaders who want outcomes they can stand behind: clearer decision-making, faster learning cycles, higher adoption, reduced friction, and a more resilient organisation that can handle change without exhausting its best people.

If you want AI implementation that improves performance and strengthens resilience — by using the full capability of both technology and the human brain — Richard Reid brings the psychological, organisational and AI fluency to make it happen.


Learn about the 7 Psychological Levers of high-performing leaders, and how you can improve yours.

Download the guide below.