Is human thought being absorbed by AI?

There’s something quietly fascinating happening inside larger organisations at the moment, and it isn’t loud enough yet to be called a crisis. AI is being adopted in pockets and then in swathes.

Teams are experimenting with it, then relying on it, then building it into the fabric of how work gets done.

Reports are drafted more quickly. Data is summarised in seconds. Ideas are sketched out instantly. It feels efficient, forward-looking, and hard to argue with.

And yet, if you sit with it for a while, a gentler question begins to form.

In many medium and large companies, the people ultimately accountable for risk, strategy and reputation are not the people using these tools in any sustained, hands-on way. That isn’t negligence; it’s simply how seniority tends to evolve. Leadership roles move away from operational detail. Time is spent allocating resources, setting direction, responding to external pressures, and carrying legal responsibility. There are only so many hours in a day, and drafting copy or interrogating datasets is rarely where those hours are invested.

But AI is not merely another operational tool. It is not comparable to being ‘up to date with coding’, where a defined input leads to a defined output and the logic can be traced with relative clarity. Traditional software executes instructions. AI, particularly generative systems, participates in shaping them. It offers language that sounds reasoned. It produces analysis that appears coherent. It suggests structure where there was once blank space. It has the capacity to influence not only the end product, but the route taken to arrive there.

That difference matters.

If a marketing campaign fails because of a misjudged message, or a strategic decision unravels because an assumption was flawed, there has always been a pathway back to the human thinking that produced it. Even where multiple people were involved, there was a shared understanding that reasoning lived somewhere identifiable. You could sit in a room and ask, ‘Why did we think this was the right move?’ and someone would attempt to reconstruct the chain.

When AI enters that chain, the reconstruction becomes more complex.

Prompts are logged, of course. Outputs are stored. There’s a digital footprint. Yet the subtle interplay between human judgement and machine suggestion is harder to capture. An employee might ask AI to draft a market analysis, receive a polished response, adjust a few sections and forward it on. The final document looks credible. It reads well. It appears considered.

If an underlying assumption in that document turns out to be wrong, where did it originate? Was it embedded in the prompt? Was it inferred by the model? Was it accepted too readily by the human reviewing it? Was it challenged and then reinstated because the deadline was tight and the language sounded persuasive enough?

For leaders who are not directly engaging with these systems, this creates a peculiar distance. They remain accountable for outcomes, yet the cognitive terrain that produces those outcomes is shifting in ways they may not have experienced first-hand. It’s one thing to read about AI’s limitations; it’s another to witness how confidently it can fill a gap in knowledge, or how subtly it can nudge a line of reasoning in a particular direction.

There’s also a more human dimension that interests me. We speak frequently about AI ‘freeing people up’. Tasks that once took hours now take minutes. Drafting becomes lighter. Research becomes faster. The time saved is, in theory, redirected towards higher-value thinking.

But what if, in practice, some of that thinking is quietly absorbed by the tool itself? Not maliciously, not catastrophically, but incrementally.

How many times have you gone home after making some decisions and, having sat with them and slept on the experience, changed your mind or decided to make changes when you got back into work the next day?

In today’s AI-fuelled workplaces, the project could have been finished, signed off and sent to the client in the same timeframe.

Instant decisions are not always the best ones. Sometimes a challenge or dilemma needs to be ruminated on, sat with, and explored, with the creator using different lenses to consider the potential outcomes.

I notice, particularly with younger professionals who have grown up alongside these technologies, that explaining the journey from A to B can sometimes be more difficult than producing B itself. When asked how they reached a conclusion, there’s a pause. The answer exists, but it’s buried in a prompt. Part of it emerged in a generated paragraph. Part of it was accepted because it sounded plausible. The human contribution is present, yet it’s not clearly defined.

This is not an accusation of laziness or incompetence. It’s an observation about environment. If your formative professional years involve collaboration with a system that offers fluent reasoning at speed, your habits will inevitably adapt. The muscle of articulation may be exercised differently. The friction that once forced deeper understanding may be smoothed away.

If that shift is happening, then the question for those in charge is not simply whether AI is permitted or prohibited. It’s whether they understand how thinking itself is being reshaped within their teams.

When something goes wrong in an AI-assisted workflow, the temptation may be to look for technical failure. Was the model flawed? Was the data biased? Were safeguards insufficient? These are important considerations, but they may miss the quieter issue of human engagement. Did someone accept an answer without interrogating it because it felt authoritative? Did speed subtly displace scrutiny? Did the organisational culture reward output over explanation?

Tracing these elements requires more than an audit trail. It requires curiosity about cognition.

If you’re leading an organisation and you do not use AI personally, even at a basic level, how will you recognise its influence when it’s woven into the work presented to you? How will you distinguish between genuinely rigorous analysis and analysis that’s simply well-phrased? How will you sense when your team is using AI as a tool to test their thinking, and when they’re using it as a substitute for thought?

There’s a tendency to treat governance as something that can be documented and delegated. A policy is written. Training is delivered. A risk register is updated. These are necessary steps, but they can create a comforting illusion that the terrain is stable. AI is not static. Its capabilities evolve. Its integration into everyday workflows deepens. What began as experimentation can become reliance before anyone consciously decides that it has.

I’m not convinced that the answer lies in resisting this evolution. The efficiencies are real. The creative possibilities are substantial. There are teams using AI thoughtfully, challenging its outputs, refining its suggestions, allowing it to accelerate work without surrendering judgement.

What concerns me is not use, but invisibility.

If the reasoning process becomes opaque even to those who are responsible for its consequences, accountability shifts from something lived to something theoretical. Leaders may still sign off decisions, yet their capacity to interrogate the underlying thinking diminishes if they have no experiential sense of how that thinking was formed.

It’s possible that the most significant risks will not present as dramatic failures. They may appear instead as a narrowing of perspective, as analysis that is consistently competent yet rarely original, as strategic decisions that feel safe but lack depth. It may be difficult to pinpoint a single moment where things went wrong because nothing overtly did. The erosion, if it comes, may be gradual.

So perhaps the question is less about control and more about proximity. Not proximity in the sense of micromanaging outputs, but proximity to the process. A willingness to engage with the tools shaping the work. To experience their strengths and their seductions. To understand how easily a convincing paragraph can mask a weak premise.

From that place, conversations about accountability become more grounded. You can ask your teams not only whether they used AI, but how they used it. Where they disagreed with it. Where it sharpened their thinking and where it dulled it.

I can’t offer a neat solution because I’m not sure one exists. The relationship between human judgement and machine assistance is still unfolding. What seems clear is that distance carries its own risk. If those at the top remain conceptually supportive of AI yet practically detached from it, there may come a moment when an outcome demands explanation and the explanation feels uncomfortably distant.

At that point, the question will not simply be whether the technology failed, but whether leadership understood the environment in which it was operating.

It may be that the most responsible stance in an AI-shaped organisation is not to stand apart from the tools, nor to be consumed by them, but to remain in conversation with them (and with the people who use them). To accept that thinking is becoming collaborative in new ways, and to ask, gently but persistently, where the human judgement sits within that collaboration.

 
