Are we building intelligence, or outsourcing it?

Scrolling through comment threads about AI is starting to feel less like watching a debate about the technology and more like witnessing a collective, persistent unease. People aren’t just worried about what AI can do. They’re worried about what it might quietly stop us from doing.

One comment I saw read: ‘Critical thinking skills, especially in children…they’re not developing them because they no longer have to.’

Our brains are becoming unused muscles.

It’s not that people are claiming AI is an evil force. They’re worried about dependence and drift. Someone else wrote, ‘It will literally affirm anything you say, so you never know if you’re wrong.’ Being wrong is how we learn.

We’ve spent the last decade optimising for convenience. AI didn’t invent that instinct; it just perfected it. Answers without effort. Content without contemplation. None of this is inherently bad. But intelligence, real intelligence, has always been shaped by effort and by wrestling with uncertainty.

Another comment said, ‘People at my work sit in their cars on their lunch break to talk to ChatGPT.’ It’s easy to laugh at that, but it reveals something uncomfortable. AI doesn’t just provide information, it provides agreement and a sense of being heard without judgement. In a world where social connection is already strained, AI can feel easier to interact with than real people. That should worry us more than hallucinations ever could.

There’s also a growing anxiety around credibility. ‘You can’t say what is true and what is fake.’ When anything and everything can be generated, corrected, improved or redacted, trust becomes fragile. If images can’t be trusted, videos can’t be trusted, and text can’t be trusted, then authority shifts. Not necessarily to those who are right, but to those who control distribution. Algorithms don’t care about truth; they care about engagement.

Education comes up again and again in these threads, and for good reason. ‘80% of the assessments students submit now are just straight up copy pastes.’ That isn’t a failure of young people. It’s a failure of the system they’re operating within. When outcomes matter more than process, tools that shortcut the process will always win.

Another comment cut deeper: ‘Kids are losing the capacity to write essays, verbally express themselves and read… AI is helpful for those who already have a knowledge base, but for those who don’t, they’re becoming less beneficial as humans to society.’
That’s an uncomfortable sentence, but it identifies a widening gap. AI doesn’t level the playing field. It magnifies whatever foundation you already have. Those with critical thinking, context and confidence can use it as leverage. Those without these skills risk becoming passive operators of tools they don’t fully understand.

There’s a phrase that keeps coming back: workers, not thinkers.

In an increasingly media-controlled society, this distinction matters. Workers execute. Thinkers question. Workers follow prompts. Thinkers challenge assumptions. If we outsource too much thinking too early, what happens when the system fails?

‘People don’t actually want answers, or to take action, they just want to complain about their problems and the algorithm supports that.’ This is largely true. Algorithms don’t create apathy, but they reward it. They create echo chambers where frustration feels validated but never resolved. AI, when layered on top, can become another soothing mechanism. Another way to avoid the harder work of thinking things through and relying on our own initiative and problem-solving abilities.

There’s also fear around professional competence: ‘Lawyers, doctors, nurses, surgeons using ChatGPT to pass tests.’ The issue here isn’t assistance; it’s substitution. When tools designed to support expertise begin to replace the development of expertise itself, trust erodes. Not just in professionals, but in institutions.

Healthcare comments reveal this tension clearly: ‘Doctors are using AI to summarise notes. I don’t need doctors using the new version of WebMD.’ Efficiency is valuable, but judgement is irreplaceable. And judgement is formed through experience, error, and reflection. Not through summaries alone.

‘The fact that no one knows how to do absolutely anything, not even simple PowerPoints, without using AI.’ That’s not about PowerPoint. It’s about eroding confidence, and learned helplessness creeping in under the guise of productivity.

Another says, ‘AI kills organic thoughts, and authentic, natural intelligence.’ When everything is optimised, what happens to originality? When every rough edge can be smoothed, what happens to individual voices?

None of this means AI should be rejected. That’s too simple, and it’s unrealistic. The question isn’t whether AI will be part of our future—it already is. The real ethical question is whether we’re investing as much time in developing human intelligence as we are artificial intelligence.

Critical thinking isn’t automatic. Media literacy isn’t intuitive. Emotional intelligence doesn’t scale through software updates. These are skills that need deliberate cultivation, especially in younger generations who are growing up in a world where answers are instant but understanding is optional.

One commenter predicted, ‘Mass majority will be achieved by 2030. It’s definitely happening now.’ Maybe. Or maybe we’re at a fork in the road.

AI can make good thinkers better. But it can’t make passive thinkers curious. That responsibility is still ours, as educators, leaders, parents, employers, and citizens.

If we don’t actively teach people how to question, how to verify, how to tolerate being wrong, and how to think independently of algorithms, we risk creating a society that functions efficiently but thinks very little.

And that’s the quiet fear running through all of these comments. Not that AI will become too intelligent. But that we’ll stop becoming intelligent ourselves.

The ethics of AI, then, aren’t just about what machines should or shouldn’t do. They’re about what we’re willing to stop doing as humans, and whether convenience is slowly replacing consciousness.
