
AI Realism

Note: in everyday conversation we tend to say “AI” when we mean “an LLM.” I’ll use “AI” here for convenience.

At the time of writing, it’s looking like AI is: pretty good at writing code; pretty okay, and sometimes bad, at other things.

My three big ideas for AI Realism:

  1. Don’t Believe The Hype.
    • Be a sceptic. Ask lots of questions using a critical lens. In particular: ask for the details and specifics.
  2. Fight The Power.
    • Be a Cynic. Choose thoughtfully if and when to use AI. Sometimes choose not to use AI at all.
  3. Bring The Noise.
    • Be a Luddite. (Also) Highlight the cons, downsides, and harms of using AI. And: offer good, time-tested, alternatives.

These phrases are all titles of songs by hip hop group Public Enemy (Yes, I did make a short playlist of these three songs!). I’ve chosen them because the words match. I’ve also chosen them because being anything other than an AI booster tends to get you treated as, well, a public enemy. I’m taking a deliberately slightly provocative stance as a counter-balance.


1. Don’t Believe The Hype

Be a sceptic
Ask lots of questions using a critical lens.

Ask questions about AI use

  • Ask for the details, the specifics.
    • What is being automated?
    • What are the inputs?
    • What are the outputs?
  • Read calls to use AI as a sales pitch.
    • AI can’t do your job, but an AI salesperson (or an AI!) can convince your boss it can.
    • Replacing people with AI is a marketing strategy, not a production strategy.
  • Notice how the real breakthrough is always just around the corner.

Notice the human workers behind the scenes of AI

  • Follow the chain and see how it all comes down to humans: the training data and the Reinforcement Learning from Human Feedback.
  • Clarify that it’s just sufficiently advanced technology, not magic.
  • Be wary of anthropomorphising. AIs are not sentient or conscious.
    • They can’t choose or select or decide or interpret.
    • They can’t have empathy or personal interest: these things require subjective experience and human connection.

2. Fight The Power

Be a Cynic
Choose thoughtfully if and when to use AI.

Aside: Cynic in the historical and philosophical sense (question and challenge conventions and customs), not in the everyday sense (question and challenge people’s motivations in general).

Use AI carefully

  • Check everything it produces, in fine detail.
    • Check for accuracy and consistency.
    • Be an expert in the domain you’re using it for.
    • AI is known to be inconsistent and inaccurate, and to contain errors and omissions.

Use AI less

  • Start with the problem rather than AI as the solution.
    • Social and systemic problems rarely have technological solutions.
    • Problems are often best understood looking at the wider context and systems.
  • Put the burden of proof of effectiveness on the AI systems.
    • Clarify how the results of adding AI will be measured, and what will happen afterwards.
    • Extraordinary claims require extraordinary proof.
  • Note that the iron triangle probably still applies. Fast, cheap, good: pick two!

Sometimes don’t use AI at all

  • Work on the problem without AI first.
    • Show your work.
    • Show your progress.
  • Borrow from the Slow movement playbook.
    • Offer alternatives, drawn from time-tested traditions, tools, and processes.
  • Use a human-centred, people-first, approach.
    • Remember that AI provides information and knowledge, not wisdom. Wisdom requires broad context, weighing options, working with paradoxes and “it depends”.

3. Bring The Noise

Be a Luddite
(Also) Highlight the cons, downsides, and harms of using AI.

Aside: Luddite in the historical sense (oppose and resist technologies of control and coercion), not in the modern sense (oppose or resist new technologies in general).

Object to the current harms

  • Ecological and environmental harm from water and power use.
  • Ethical harm to the underpaid and traumatised gig workers labelling the training data and doing Reinforcement Learning from Human Feedback. Psychological harm, and even deadly harm, such as AI-assisted suicides.
  • Financial harm from the hundreds of billions of dollars of investment with little or no return.

Critique the inputs

  • Mostly Western ideas and approaches.
  • Mostly discriminatory data, replicating the biases and power imbalances of the world.
  • Training data is only the recorded data of the world. Much of human experience isn’t and can’t be recorded.

Critique the outputs

  • Mode amplification. The most frequent data points are represented as the one true answer.
  • Confabulation and “hallucinations.” Confident bullshitting, made-up things presented as facts.
  • The Wikipedia version of reality. The style of the output is the rational discourse of the educated classes of society.

Show how reliance on AI can devalue humans and lived experience

  • Atrophying skills of critical thinking and analysis.
    • We learn by actively applied effort.
    • Devaluing human skills such as contextual awareness and conflict resolution.
  • Presenting the product as more important than the process.
    • Thinking and understanding happen as we write, draw, and code.
  • Narrowing our curiosity.
    • When we start with AI as a solution, we look for problems that AI can solve.
  • Watch out for echoes from the history of humans’ relationship with technology.

Last updated: 2026-03-31.