On intentionality in a world flooded with AI

I genuinely enjoy writing. It isn't always a pleasurable activity, but it's a form of expression that gives me emotional and professional satisfaction. Yes, it's my job: among other things, I write for a living.

But the reason I like writing isn't so much the act of sitting down and typing. It's the intentionality of putting my thoughts into written words.

I admit I've been a little obsessed lately with the importance of intentionality in my life and in journalism, especially now, when digital technology makes it increasingly easy to strip away a big layer of human interaction. Intentionality is the reason we call our parents on their birthdays, the reason we cook for our wives and husbands, the reason we play, love, and get into heated arguments.

"Writing is choosing your own words. In the end, that's what you do. If your words are being chosen for you, you're no longer the author," said Brazilian writer Sérgio Rodrigues in an interview.

I agree with that, a lot. Choosing words involves not just the transmission of information (which AI can do quite well) but also the intentionality of the person writing: a bundle of elements of authenticity that includes experience, perspective, language, expression, and personality. This is true not only for literary texts but also for academic ones, journalistic ones, letters, and even a text message to your sister asking for a cake recipe.

The other day I was reading an interesting post by the journalist Pedro Burgos, who has been doing really good work communicating about artificial intelligence. In the piece, Burgos discusses texts generated entirely by AI, drawing on the work of sociolinguist Basil Bernstein to argue that how good those texts seem depends on a person's linguistic formation, and that for most people, AI text is good enough. As he puts it:

For those who already have an elaborated [linguistic] code from birth, who grew up reading books, who took composition classes, who have been writing for twenty years, it's easy to see that the AI version is still a shadow of what you could do — or at least a pretty shell, with no soul. For those people, it is a downgrade, yes, compared to a good human text. But for most people, the point of comparison isn't a good human text. It's the text they themselves would have written.

That point about class struggle (workers vs. the privileged) is interesting, but I think it's incomplete.

Maybe it works when the need behind a piece is purely the transmission of information. If I just need to communicate that inflation is up, maybe AI is enough. I can see it also being true for more complex things, like describing a methodology, putting together a travel itinerary, or generating anything that requires little or no authenticity.

But if I need to develop an argument, reflect on a problem, and intentionally connect with other people in any capacity, AI-written text is not remotely adequate.

First, for a practical reason: if AI builds an argument in your name, you have to review all of it, edit it, and stand behind the final result. Unless you're completely irresponsible and careless with your own reputation, that's arguably more work than writing from scratch. Besides, why would I read your argument when I can just ask the AI myself?

Second, for an epistemological reason: if language models are based on statistical patterns that regress toward a certain homogenization of knowledge, what contribution does that make?

In an illuminating article, one contested by some Silicon Valley enthusiasts, education policy expert Benjamin Riley draws on scientific research to argue that intelligence is distinct from language, which is a tool for communication, not for thinking. In his words:

Our cognition improves because of language — but it’s not created or defined by it. Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast. But take away language from a large language model, and you are left with literally nothing at all.

Third, and lastly, for a more human reason: who wants to read an argument the other person didn't want to write? Why would I engage in a discussion with someone who didn't even bother to argue in their own words?

The developer Alex Schroeder raised exactly this question after getting a happy-birthday email from a friend that was written entirely by AI. In his words:

When receiving a text that seems personal, I feel betrayed by the part that was written by a machine. At first, one feels special, then one feels doubt, and then one feels betrayed, because one's feelings were led astray.

My counterpoint to Burgos's argument isn't that a text generated by artificial intelligence is fundamentally useless, or even bad, but rather that it probably comes from a place of very low intentionality. Maybe even of laziness, absence of opinion, or ignorance. If it were only meant to inform, it wouldn't need to be signed by an author at all.

I don't want to write, I don't know how to articulate my thoughts or I don't understand the subject, so a large language model takes a position on my behalf. "Here you go, have a conversation with my bot!"

According to Burgos, "the aesthetic complaint against AI text, taken seriously, is a complaint against the redistribution of a skill that used to be a competitive advantage." Maybe it is, for some people. But, as he himself says right afterward, without elaborating, "there are more metaphysical questions involved." That metaphysics, I believe, is adjacent to intentionality.

I recently talked with photojournalist Cengiz Yar, who spent years making his beautiful book "The Alabaster Grave," about the impact of war on the Iraqi city of Mosul. The process was excruciating, from choosing the sequence of the photos to the texture of the paper. He self-published it, then traveled back to Iraq to deliver more than 500 copies to the people in it. These days, he still ships the books himself to whoever buys them online.

Of course, not everyone needs to do all of that to show intentionality. But that kind of interest and care may be the only differentiator that will really matter once we can no longer tell what's made by humans and what's made by AI. A good part of what makes an action more human is the recognition of other people's effort.

In the end, the big question is what we humans are going to value most in our societies, and how.

As the designer Peter Adam Boeckel put it, in one of the best texts I've read on the subject:

The technology leading to this 'displacement of purpose' is neither good nor bad. It is simply efficient. It exposes the incentives we already live by — speed, time, money — and amplifies them to their logical conclusion. The results are neither dystopian nor utopian; they are precise reflections of our priorities. Whether AI liberates or devastates depends less on what it can do and more on what we choose to value.