This is mostly about LLMs. It’s a sketch of some of the things that have me concerned. I’ll bounce between using “LLMs” (which is mostly what I mean) and “AI” (which is mostly what we say in everyday conversation).
I find myself being critical of the current AI hype partly because it feels familiar from other hype cycles I’ve seen at work: the adoption of popular JavaScript libraries; accessibility overlays.
The three big ideas:
- Don’t Believe The Hype
- Fight The Power
- Bring The Noise
(These are all songs by the hip-hop group Public Enemy, chosen because the words match and because being anti-AI feels like being, well, a public enemy.)
Table of contents
1. Don’t Believe The Hype
Be critical of the sales pitch.
💪 Be a sceptic
Ask a lot of questions.
- Notice where information is lacking or fuzzy
- Ask for the specifics, the details
Read calls to use AI as a sales pitch
The hype fulfils a capitalist function: keep the money coming in.
- What the salesperson says it can do, so you’ll spend on it
- The hype doesn’t have to be true to have big impacts
- Note who benefits from our use of this tech
Clarify that it’s just tech, not magic
“Any sufficiently advanced technology is indistinguishable from magic.” (Arthur C. Clarke)
- AI is a tool that people use to do things, it does nothing on its own
- LLMs are big, very complex text automation machines, matching similar words and words that are likely to appear in similar contexts (see the toy sketch after this list)
- Don’t mistake conviction for correctness
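To make “text automation machine” concrete, here’s a hypothetical toy sketch of the core loop: repeatedly pick a plausible next word given the words so far. The hard-coded probabilities are invented for illustration; real models learn billions of such statistics from training data, but the loop is the same shape.

```python
import random

# Toy sketch, not a real model: an LLM's core loop repeatedly answers one
# question: "given these words, which word is likely to come next?"
# Real models learn these likelihoods from training data; these are
# invented for illustration.
next_word_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
}

def generate(context, steps=3):
    words = list(context)
    for _ in range(steps):
        probs = next_word_probs.get(tuple(words[-2:]))
        if probs is None:
            break
        # Weighted choice over likely next words. No understanding,
        # no intent, no decision: just matching and sampling.
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["the", "cat"]))  # e.g. "the cat sat on the"
```

Everything that looks like choosing or deciding in the output is this, scaled up: weighted word-matching.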
Ask for the details and specifics
- What is being automated?
- What are the inputs?
- What are the outputs?
Be wary of anthropomorphising
- LLMs are not sentient or conscious: they can’t choose or select or decide or interpret
- We add meaning and mind into language, we see human-like features
- Compare LLMs to other AI: no-one is saying that image generation tools are communicating with us through their output
- Some anthropomorphism comes from the UI and UX of the tools
- LLMs cannot have empathy or personal interest: these things require subjective experience and human connection
Notice where humans are involved
- The training data is produced by humans
- Although we are entering a slop feedback loop
- The output is tweaked by Reinforcement Learning from Human Feedback
- We train the models for what we expect or want to hear, not what is true or correct
- It’s more like regurgitative AI than generative AI
- The output is a grey paste of human creativity
Study the history of humans and technology
Humanity’s relationship with technology follows time-tested patterns.
- New technology promises an easy life and liberation from toil
- The promises drive adoption
- The technology becomes expected as a baseline
- Demand for skilled labour is reduced
- People are moved into roles that they’re underpaid and overqualified for, in worse conditions
The tech industry in particular leans towards trying to eliminate people from the process.
Recall recent examples of “just tech, not magic”
These didn’t eradicate the previous technology, they just replaced some specific uses. Consider the impact these have had on the human race. Are we happier, healthier, more fulfilled?
- Cars - we still walk places
- Microwaves - we still cook using other methods. See also: In the Future All Food Will Be Cooked in a Microwave, and if You Can’t Deal With That Then You Need to Get Out of the Kitchen
- Polyester - we still use other fabrics. See also: GenAI is Our Polyester.
- Computers - we still do things by hand
- Mobile phones - we do still do things on other devices
- Social media - we still meet in person
- React JS - we still write other code
- Automated testing - we still do manual testing
Notice how history is repeating itself
- Companies adopting AI at large scales, spending large amounts of money on it
- They fire many people, saying AI will do their jobs
- They realise AI is doing a low-quality, error-ridden version of the job
- They rehire the people, on lower pay and worse benefits, to fix the mistakes and babysit the AI
2. Fight The Power
Opt out, don’t use AI.
💪 Be a Cynic
In the historical and philosophical sense (question and challenge conventions and customs), not in the everyday sense (question and challenge people’s motivations in general).
- Notice that some technology (and social structures, laws, customs, conventions) foster bad behaviour.
- Reject the terms, refuse to play the rigged game, say no. “I would prefer not to” like Bartleby, the Scrivener.
Use AI less
- Start with the problem, don’t start with AI as the solution
- What’s our goal?
- What do we want to achieve?
- Social and systemic problems rarely have technology solutions
- Problems are often best understood looking at the wider context and systems
- Ask why we’re using AI instead of doing something else
- Put the burden of proof of effectiveness on the AI systems
- Extraordinary claims require extraordinary proof
- Clarify how the results of adding AI will be measured, and what will happen afterwards
- What’s the endgame?
- What human work is it replacing?
- What happens to the humans?
- Check if the iron triangle still applies. Fast, cheap, good: pick two.
- If it looks like all three, look more closely to see the hidden cost or lower quality.
Don’t use AI at all
- Say no: it’s optional, not inevitable
- Work on the problem, without AI
- Show your work
- Show your progress
- Be human-centred, put people first
- Keep checking that what you’re doing aligns with your values
- Borrow from the Slow movement playbook
- Offer alternatives, drawn from time-tested traditions (including existing, old, laws and rules and ethics)
- Don’t use harmful technology, even if it’s well-tested and tweaked
Use a human-centred, people-first, approach
- LLMs provide information and knowledge, not wisdom
- Wisdom requires very broad context, weighing options, working with paradoxes and “it depends”
- Wisdom is about the decision and where it came from: lived human experience, accountability
- LLMs can’t do science, because science is a process and a set of ways of knowing, not a collection of answers
3. Bring The Noise
Keep the focus on the current harms, not speculative risks.
💪 Be a Luddite
In the historical sense (oppose and resist technologies of control and coercion), not in the modern sense (oppose or resist new technologies in general).
- Notice where technology is being used to devalue and displace humans
- Publicly and loudly voice your concerns
Object to the value misalignments
AI causes widespread and varied harms.
- Ecological and environmental harm. Google, Microsoft, and OpenAI are dramatically missing their climate pledges because of AI.
- Cultural harm. Violent disregard for copyright in the training data.
- Ethical harm. Underpaid and traumatised gig workers labelling the training data and doing Reinforcement Learning from Human Feedback.
- Psychological harm. People turning to ChatGPT instead of their loved ones, school, or work.
- Deadly harm. AI-assisted suicides.
- Financial harm. The systems cost huge amounts of money to run, and AI companies don’t, and can’t, charge enough to balance the books. Multi-billion-dollar investments with little or no return.
Critique the narrow inputs and low quality outputs
Problems with the inputs
- Only recorded data. LLM training data is limited to recorded human knowledge. There’s lots of human experience that isn’t and can’t be recorded: the embodied experience, the wider context of human life and interaction.
- Mostly Western data. Western ideas and approaches are seen as objective and universal, rather than one slice of culture and history
- Many languages and cultures are massively under-represented in the digital world
- Mostly discriminatory data. The data replicates the biases and power imbalances of the physical world. LLMs mirror back our patterns of discrimination, including gender, gender identity and expression, sexual orientation, disability, race, religion, ethnicity.
Problems with the outputs
- Shortening, not summarising. Summarising requires extended context. AI “summaries” are shortening existing human summaries.
- Mode amplification. The most frequent data points are represented as the one true answer (see the sketch after this list).
- Confabulation. LLM output often contains confident bullshitting: made-up things presented as facts.
- Not “hallucination”. Hallucination is perceiving something that’s not really there. LLMs do not perceive anything.
- The Wikipedia version of reality. The style of the output is the rational discourse of the educated classes of society.
- Sycophantic. LLMs are built to people-please, not to give true or correct answers.
- Low quality, high quantity. LLMs are filling the web with slop.
- Further bias from Reinforcement Learning from Human Feedback.
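A hypothetical, minimal sketch of mode amplification: when the data holds a spread of answers and the system always returns the most likely one, the minority answers vanish. The counts below are invented for illustration.

```python
from collections import Counter

# Invented toy data: a spread of answers to the same question.
training_answers = (
    ["blue"] * 55      # the most common answer, but only 55% of the data
    + ["teal"] * 30    # a sizeable minority answer
    + ["green"] * 15   # a smaller one
)

counts = Counter(training_answers)
print(counts)                       # Counter({'blue': 55, 'teal': 30, 'green': 15})

# Always returning the most frequent answer erases the other 45%:
# the mode is amplified into "the one true answer".
print(counts.most_common(1)[0][0])  # "blue"
```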
Show how reliance on AI devalues human lived experience
- Atrophying critical thinking and analysis skills
- We learn by actively applied effort, not by passively reading answers
- Presenting the product as more important than the process
- The process is part of the product
- Thinking and understanding happen as we write, draw, code
- Devaluing practical, lived experience
- Prioritising and framing theoretical knowledge as more important than practical knowledge
- Narrowing our curiosity
- When we start with AI as a solution, we look for problems that AI can solve
- An LLM can sometimes give the most popular answer to a question, but the human has to ask the right question
- Deskilling humans
- Existing training data comes from learned human experience and knowledge
- If people stop learning those skills, we’ll struggle to answer new questions
Common objections
- We’ll fall behind if we don’t use it
- Ask for specifics on competitors
- How do we know we’re losing business to our competitors?
- How do we know that it’s because of AI?
- Everyone else is doing it
- This is a silly reason
- The ones using it are more productive
- Ask about second-order effects: what has been the measurable outcome for the employees and for the company?
- It’s inevitable
- Look at the history of humans and technology
- It’s okay to use copyrighted data, it wouldn’t work otherwise
- If it doesn’t function without consent, it doesn’t deserve to function
- It’s a good replacement for a therapist / doctor / skilled professional
- LLMs cannot have empathy, and recall the psychological and deadly harms listed above
- It makes writing so much faster!
- Apply the iron triangle and see what’s happening
- It might not be good now, but it’ll get better with time
- When do we decide it’s good enough to use?
- What about when it still isn’t good enough?
- What do we do in the meantime?
- When would we decide to give up on it?
- If it’s not doing a good job, it’s because you’re prompting it wrong
- This feels like blaming the user