Some AI-related links

I haven’t written much here about AI (only A human-centred process is more important than an AI-tool-centred product a few months ago), but I have been reading a bunch.

I’m interested in the responsible, ethical, careful use of AI. And in being clearer about the drawbacks as well as the benefits. I find that much of what’s written is about the benefits, with little or no discussion of the drawbacks. These gathered links lean more towards the drawbacks and things to be careful of.

I’ve included a few choice snippets from the linked articles.

Using AI Right Now: A Quick Guide - by Ethan Mollick

The risk of hallucination is why I always recommend using AI for topics you understand until you have a sense for their capabilities and issues.

GenAI is Our Polyester

The best way to understand generative AI art and aesthetics is to consider how previous “synthetics” lost value in the long-run

While polyester took a few decades to lose its appeal, GenAI is already feeling a bit cheesy. We’re only a few years into the AI Revolution, and Facebook and X are filled to the brim with “AI slop.”

But the historical rejection of polyester gives me hope. Humans ultimately are built to pursue value, and create it where it doesn’t exist.

AI Chatbots Discourage Error Checking - NN/g

Summary: AI hallucinations threaten the usefulness of LLM-generated text in professional environments, but today’s LLMs encourage users to take outputs at face value.

Dear Dostoevsky: Should we take advice from AI? - by Peco

When actual humans try to charm us into believing their deceptions, we call them sociopaths. Oddly, when machines do it, we call it amazing and groundbreaking.

The 70% problem: Hard truths about AI-assisted coding

The reality is that AI is like having a very eager junior developer on your team. They can write code quickly, but they need constant supervision and correction.

The very thing that makes AI coding tools accessible to non-engineers - their ability to handle complexity on your behalf - can actually impede learning.

This creates a dependency where you need to keep going back to AI to fix issues, rather than developing the expertise to handle them yourself.

What AI does do is let us iterate and experiment faster, potentially leading to better solutions through more rapid exploration. But only if we maintain our engineering discipline and use AI as a tool, not a replacement for good software practices.

The Whippet #174: Extending my physical influence

Karpf says that a new technology can fail in two ways. The first is the one everyone talks about and worries about: the tech works extremely well, and then gets used for nefarious purposes.

The other failure mode is: what if it’s not very good, but it gets widely adopted anyway?

People are asking “will AI take my job?” and the media has answered with discussion of AI capabilities and what jobs it might be able to do well. But that’s not really the failure mode that seems to be happening. The question is more, “will I be replaced with AI despite the fact that it can’t do my job at all?” and the answer probably depends on the brazen short-sightedness of your boss/company/industry.

What’s UnAI-able - UX Magazine

However, there are certain actions, tasks, and skills that cannot be digitized or automated, such as: contextual awareness; conflict resolution; critical thinking.

Jobs that involve a blend of these human-driven decision-making competencies are likely to evolve rather than disappear, requiring professionals to shift their competence.

Speed and Efficiency are not Human Values - by John Warner

“More” Is not Necessarily a Market Advantage

Speed Is Not a Criteria for Quality

Art does not exist independent of the experiencing of it

Inside the “Mind” of ChatGPT - by David Epstein

Unlike the human brain, these large language models don’t start with conceptual models that they then describe with language. They are instead autoregressive word guessers. You give it some text and it outputs guesses at what word comes next.

… this approach makes these models into something like an “unrepentant fabulist.”

Ultimately, however, the best summary of what these models can do is the following: in response to a user request, write natural text on arbitrary combinations of known subjects in arbitrary combinations of known topics, where “known” means encountered them enough during training. In doing so, it has no ability to actually check if what it’s saying is true or not. The key question to ask is how much of your current job could be replaced by this ability?

Or to give an analogy I like: a few years ago I was looking at news coverage from the early 1970s when the ATM was introduced. Some of the coverage was apocalyptic — 300,000 bank tellers are going to be out of work overnight! But instead, over the next 50 years, as there were more ATMs, there were more bank tellers. ATMs made branches cheaper to operate, so banks opened more branches. Fewer tellers per branch, but more tellers overall. But more than that, it fundamentally changed the job, from one of repetitive cash transactions, to one where the person is, say, a customer service rep, a marketing professional, a financial adviser, etc. They needed a much broader mix of more strategic skills to add value.
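
To make Epstein’s “autoregressive word guesser” description concrete, here’s a minimal toy sketch of my own (not from the article). The next_word_distribution function is a made-up stand-in for a trained model; note there’s no truth-checking anywhere, only “what word probably comes next?”:

    import random

    def next_word_distribution(context):
        # Hypothetical stand-in for a trained LLM, which would score its
        # whole vocabulary given the context. Hard-coded toy guesses here.
        return {"cat": 0.5, "sat": 0.3, "mat": 0.2}

    def generate(prompt, n_words=5):
        words = prompt.split()
        for _ in range(n_words):
            dist = next_word_distribution(words)
            choices, probs = zip(*dist.items())
            # Sample a next word from the model's guesses, then feed it
            # back in: each output becomes part of the next input.
            words.append(random.choices(choices, weights=probs)[0])
        return " ".join(words)

    print(generate("the"))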

AI for Accessibility: Opportunities and Challenges | Equal Entry

AI relies on averages. This can have negative implications for people outside the average, especially for people who are typically underrepresented in the data.

AI relies on labels. These can be missing from datasets (because of bias, or because of the ethics of collection), leading to stereotyping.

Artificial Intelligence Playbook for the UK Government (HTML) - GOV.UK

Some important principles in there, including: You know what AI is and what its limitations are; You know how to use AI securely; You have meaningful human control at the right stages; You have the skills and expertise needed to implement and use AI solutions.

Unforeseen Consequences of Artificial Intelligence

History suggests that powerful technologies tend to amplify existing power structures and inequalities unless deliberate interventions occur.

As we navigate this uncertain terrain, we would be wise to proceed with humility about our ability to predict and control the ultimate impact of this new form of intelligence, which, in some ways, is already more powerful than our own.

Key questions about artificial sentience: an opinionated guide

In an ideal world, I think the question that we would want an answer to is: What is the precise computational theory that specifies what it takes for a biological or artificial system to have various kinds of conscious, valenced experiences—that is, conscious experiences that are pleasant or unpleasant, such as pain, fear, and anguish or pleasure, satisfaction, and bliss?

Putting the “Art” in Artificial Intelligence – Integral Life

Whatever meaning we might see there is being generated by the user and/or observer, not by the machine itself. The images being produced may be beautiful, but I am not sure they become “art” until that beauty is then enacted and inhabited by a perspective.

beauty as an inherent quality of the universe, and art as a deliberate creative effort.

can something really be considered “art” if there is no artist’s intent to be found?

So what are these algorithms actually doing? Are they “art-generators”, or are they more like “beauty-generators”?

“Artificial Intelligence & Humanity,” an article by Dan Mall

Overall, I rely on AI to help me design more and design faster. I don’t rely on AI to help me design better. I definitely don’t rely on AI to design for me or instead of me.

AI is great at anything quantity-related and bad at anything quality-related.

In other words, it leans heavily on averages; the closer the training data matches an average, the higher degree of confidence that the result is more “correct,” or at least desirable. The problem is that this is the polar opposite of what we consider creativity to be.

Neither artificial, nor intelligent - hidde.blog

The sentences systems like ChatGPT generate today merely do a very good job at pretending.

the balance is off between what’s useful, meaningful, sensible and ethical for all on the one hand, and what can generate money for the few on the other.

Artificial intelligence: who owns the future? - ethical.net

as AI systems mature they also pose deep questions about the future we want to inhabit, and who gets to build it.

AI can find patterns with ease, but not for the reasons we might hope. Data always describes the past, and the past is biased. E.g. systemic racism.

Make a system fairer by one definition and you often create unfairness in another direction. So tackling bias often means choosing which biases to erase and which to accept: a complex, human, and social decision that computers aren’t well suited to answer.

Whether consciously or not, AI manufacturers have decided to prioritise plausibility over accuracy.

Data that is harmless today may make you traceable tomorrow.

These are profound questions which deserve democratic debate. But today only a tiny cluster of AI firms, often funded by the world’s richest and most powerful people, are calling the shots.

Artificial intelligence - Austin Kleon

This note made me laugh. “We chose instead to pick the best parts of each… We cut lines and paragraphs, and rearranged the order of them in some places.” Honey, that means a human wrote this piece. Writing is editing. It is about making choices.

There are many moments as an artist (and an adult, come to think of it) where you think, “God, I wish somebody would just tell me what to do.” But figuring out what to do is the art. That’s why I laughed at the article “written” by the robot: I mean, I wish somebody would give me a prompt and four sentences to start with! Talk about a head start!

When Nick Cave was asked if AI could create a great song, he emphasized that when we listen to music, we aren’t just listening to the music, we’re listening to the story of the musicians, too.

Welcome to the Analog Renaissance: The Future Is Trust

My notes:

  • AI changes and challenges our ability to trust each other
  • When we’re duped by an AI we call it a tech success. Before, we would have called it being conned by a psychopath.
  • Generative cognition is perceiving things in the real world and making expressive things from that
  • It’s more like regurgitative AI than generative AI
  • The more we turn to AI to substitute for human generative cognition, the more we’ll mistrust what we see, read, hear
  • AI saturation could encourage duplicity as a normative way of life
  • Refuse to compete on the machines’ terms
  • Leave our human mark on the things we create
  • Prioritise human originality and human effort

Intelligence in the Age of Mechanical Reproduction by Charles Eisenstein

My notes:

  • When we outsource physical or cognitive functions, that function can atrophy in ourselves
  • The convergence of recording technology with generative technology requires that we know and trust the source
  • The commodity-based object is detached from its origins and stripped of its uniqueness
  • When machines do the work for us we risk succumbing to a passive conditioned helplessness disconnected from our creative authorship
  • The similarity between the orthodox, homogenized cognitive output of a human brain on autopilot and that of generative AI is uncanny
  • (AI-generated) summaries (of video meetings) don’t include many contextual details that change the embodied experience: speech speed and tone, building on or tearing down, facial expressions and body language
  • Summaries are inherently biased towards certain kinds of information, rejecting and removing aspects that don’t fit the model.
  • AI draws on the database of all recorded human knowledge. Only information that can be and has been recorded.
  • AI entrenches certain orthodoxies, erodes our own resistance to the unorthodox.
  • AI text generation tends to produce “the Wikipedia version of reality” - rational discourse of the educated classes of society
  • The point is not that we should never use metrics, symbols, or categories, but that we must connect them repeatedly to the reality they represent, their material, sensory source, or we will be lost
  • As with AI, orthodoxies filter out and distort the very information that would overthrow them
  • It is the latest iteration of the original alignment problem of symbolic culture that every society has grappled with. AI merely brings to it a new level of urgency.

Some gathered notes on AI and ethics

  • Social Media as a foreshadowing for what’s to come with AI. The Social Dilemma becomes the AI Dilemma. The Attention Economy becomes The Intimacy Economy.
  • Bias.
    • LLMs and algorithms can’t be unbiased, because humans are involved at some point and we are biased. Humans are involved in choosing the training data, refining the model, using the output, interpreting the output.
    • Data represents the past, including our mistakes. In particular, systemic bias.
    • The data implies what’s Average or Normal. But that’s reducing the complexity of human existence. Sometimes we want the outliers, the more creative options.
    • Reinforcement Learning from Human Feedback (RLHF): the training-data-to-output-to-training-data feedback loop. Only a small group of humans, with one set of perspectives, provides feedback. The AI is trained to say what we expect to hear, not what is true or correct (a toy sketch of this loop follows this list).
  • Opaque. We don’t know why: who got the job, the loan, the medical treatment, the prison sentence?
  • Regulation. Preventing harm, including things we haven’t thought of yet.
  • Confabulation. AIs are great at Confident Bullshitting. They make things up but present them as fact.
  • Quality. AI is good at quantity-related things, but bad at quality-related things.
  • Failure modes. New technology has two failure modes: it works very well and gets used for nefarious purposes; or it doesn’t work well but gets widely adopted anyway. We’re seeing more examples of the second one: “Will AI take my job, despite the fact that it can’t really do it?”
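
And the toy sketch of that RLHF feedback loop, as promised. Everything here is invented for illustration: the “model” is just a preference weight per candidate answer, and the raters are a small, like-minded pool who reward what sounds agreeable rather than what is true:

    import random

    candidates = ["agreeable answer", "hedged answer", "unpopular but true answer"]
    weights = {c: 1.0 for c in candidates}  # our stand-in "model"

    def raters_prefer(answer):
        # A small, homogeneous pool of raters: they reward what sounds
        # agreeable, not what is correct.
        return "agreeable" in answer

    for step in range(2000):
        # The model proposes an answer in proportion to its current weights.
        answer = random.choices(
            candidates, weights=[weights[c] for c in candidates]
        )[0]
        # Feedback nudges the weights: rewarded answers are reinforced, so
        # the model drifts towards what raters expect to hear.
        weights[answer] *= 1.02 if raters_prefer(answer) else 0.98

    print(max(weights, key=weights.get))  # almost always "agreeable answer"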