An AI algorithm will not sit in a witness box. A person will.

A few years ago, if a decision went wrong, the line of responsibility was clear. A person had assessed the situation, interpreted what they saw, and chosen a course of action. If that choice was later questioned, they could explain how they reached it. The reasoning might have been flawed, or incomplete, or influenced by pressures they’d not fully acknowledged, but it was theirs. It could be examined, challenged and understood.

We’ve always relied on a quiet interplay between instinct and reflection when making decisions. Instinct is not a random impulse; it’s recognition shaped by experience. It’s the subtle awareness that something does not quite add up, even before we can articulate why.

A manager senses hesitation behind confident-sounding words. A clinician notices a detail that doesn’t sit comfortably with the data. These impressions are formed over time, through repeated exposure, feedback and consequence.

Reflection gives those impressions structure. It’s the deliberate slowing down that asks what sits beneath the feeling. It involves gathering evidence, testing assumptions and considering alternative explanations. The quality of a decision often lies in that disciplined tension, where instinct signals something and reflection subjects it to careful scrutiny.

As AI becomes embedded in everyday processes, that balance begins to shift. Systems now analyse patterns, rank risks, filter options, and produce recommendations at a speed and scale that humans cannot match. What once required sustained attention and analytical effort can now appear as a clear output on a screen. The attraction is obvious. It feels efficient, data-driven, and difficult to argue with.

Yet something subtle happens when a system consistently provides the first answer. Reflection, which once examined our own thinking, can turn towards interpreting the machine’s reasoning instead. In some cases, even that examination becomes cursory. If the system has processed vast amounts of information, the human instinct that quietly resists may feel less credible. Over time, deference can replace interrogation.

Instinct itself depends on feedback. We make a judgement, observe the outcome and adjust. Through that cycle, our internal compass becomes more finely tuned. When AI filters decisions before we encounter them, or routinely overrides our first impressions, that feedback loop narrows. We have fewer opportunities to test our own perceptions against reality. It becomes easier to doubt our instincts, not because they’re inherently unreliable, but because they’re exercised less often.

Reflection can weaken in a different way. It demands effort and time. In environments where speed is valued and workloads are heavy, a neatly presented recommendation reduces friction. The pause that once accompanied significant decisions can feel unnecessary, even indulgent. Yet that pause has always served a protective function. It’s often in the slowing down that hidden assumptions surface and unintended consequences are recognised.

This matters most when decisions are later challenged. If an outcome is questioned, the explanation cannot rest solely on what the system produced. The algorithm will not sit in a witness box. A person will.

That person may point to validated models, risk thresholds, and compliance processes, yet beneath all of that lies a human judgement: the judgement to rely on the system, to accept its output, to refrain from overriding it.

The presence of AI does not remove accountability; it redistributes it. Someone chose the data on which the system was trained. Someone defined the parameters. Someone decided how much weight to give its recommendations. Each of those points involves judgement, even if the judgement is less visible than a single decision taken in isolation.

A few years ago, leaders did not need to consider whether automated analysis might drown out human intuition. Decisions were human by default, and reflection was an embedded part of professional responsibility. Now, decisions are increasingly shaped by an interaction between human perception and machine calculation. That interaction is rarely examined with the same rigour as the output itself.

The real issue is not whether AI improves accuracy in certain contexts, but whether humans remain capable of articulating how and why a decision was ultimately taken.

When scrutiny arrives, it rarely focuses only on outcomes. It asks how a decision was reached, what was noticed, what alternatives were considered, and why the final course of action was chosen. An answer that begins and ends with ‘the system recommended it’ is unlikely to satisfy that line of questioning. The deeper inquiry will probe what the human understood about the system’s limitations, what reservations were considered, and whether the recommendation was simply accepted because it was available.

As AI becomes more deeply integrated into organisational life, the discipline of reflection becomes more important, not less. Instinct should not be dismissed simply because it’s harder to quantify, nor should reflection be delegated because a system appears comprehensive. Technology can support decision-making, but it cannot replace the responsibility to think, to question, and to justify.

The integration of AI is not only a technical development; it’s a cultural shift in how judgement is exercised and defended. In moments of challenge, it is reasoning that will be tested, and that requires a conscious, practised engagement between human judgement and the tools that now inform it.

 
