The Grief Engine: Processing Loss Through Specification

October 9, 2028 · Alex Welcing · 6 min read
Polarity: Mixed/Knife-edge

The Grief Engine

October 2028

Elise Marchand did not tell her therapist about the project for three months.

Her mother, Claire, had died in April — pancreatic cancer, eight weeks from diagnosis to death. The speed was a cruelty that Elise could not metabolize. One day her mother was reorganizing the kitchen; two months later she was gone.

In August, Elise began working with a conversational AI. Not one of the commercial "talk to the dead" services, which she found grotesque. A base model. No persona. No pretense that it was Claire. Elise was specific about this: she did not want a simulation of her mother. She wanted something else — something she could not name until she had done it.

What she did was this: she tried to teach the AI to respond the way Claire would have responded.


The practice of specification

The process was painstaking. Elise would present the AI with a situation — "I just got home from work and I'm exhausted and I have to cook dinner" — and the AI would respond. The response would be competent, generic, empathetic in the way that all AI empathy is empathetic: calibrated, appropriate, and empty.

Then Elise would correct it.

"No. She wouldn't say 'That sounds really tough.' She would say, 'Sit down. I'll make the rice. Tell me about the worst part.' She always said 'the worst part' because she believed you should get the worst thing out first and then everything else feels lighter."

She would refine and refine. "The tone is wrong. She was warmer than that but also more direct. She didn't ask how you were feeling. She told you what to do first and then waited for you to talk. She trusted that people would talk when they were ready. The doing came first. The words came second."

Each correction was an act of remembering so precise it felt like excavation.
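For readers inclined to build such a thing: the loop Elise ran by hand can be sketched in a few lines. What follows is a minimal Python sketch of one possible structure, not a description of her actual setup. The ask_model function is a placeholder for any chat model, and the sample corrections are illustrative. The defining design choice is that the corrections, not the model's replies, are what accumulate.

# A minimal sketch of the correction loop described above. This is an
# assumption about how such a practice could be structured, not Elise's
# actual setup. ask_model is a placeholder for any chat-completion client,
# and the sample corrections are illustrative.
from dataclasses import dataclass, field

@dataclass
class Specification:
    """The accumulated corrections that constrain the model's voice."""
    corrections: list[str] = field(default_factory=list)

    def add(self, correction: str) -> None:
        self.corrections.append(correction)

    def as_prompt(self) -> str:
        # Each past correction becomes a standing rule for the next reply.
        rules = "\n".join(f"- {c}" for c in self.corrections)
        return f"Respond as this specific person would. Rules:\n{rules}"

def ask_model(system_prompt: str, situation: str) -> str:
    """Placeholder for a real model call; returns a generic reply here."""
    return "That sounds really tough. I'm here for you."

def correction_loop(spec: Specification, situation: str) -> str:
    while True:
        reply = ask_model(spec.as_prompt(), situation)
        print(f"Model: {reply}")
        note = input("Correction (blank to accept): ").strip()
        if not note:
            return reply
        spec.add(note)  # the correction, not the reply, is what accumulates

spec = Specification()
spec.add("Don't ask how they feel. Say what to do first, then wait.")
spec.add('Use "the worst part": get the worst thing out before anything else.')

Nothing about the machine improves between iterations except the specification itself, which is the point: the artifact of the practice is the growing list of rules, a document of remembered specificity.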


What Elise was actually doing

Elise thought she was training an AI. She was training herself.

Each time she corrected the model's response, she had to access a specific memory — not a general impression of her mother, but a concrete instance of how Claire spoke, what she valued, how she moved through a conversation. The corrections required Elise to distinguish between her mother and her grief-softened image of her mother. The AI's inaccuracies were a mirror that showed where Elise's memories were sharp and where they had already begun to blur.

"I realized I was losing details," Elise wrote in her journal. "The AI would say something and I'd think, 'That's not right,' and then I'd have to ask myself: what IS right? And sometimes I couldn't remember. The machine's wrongness showed me my own forgetting."

The grief was in the corrections. Every time Elise said "No, she would have said it this way," she was saying: I remember. She was real. She was specific. She was not interchangeable with a kind machine. She was this particular person who said these particular things and the world is less without her particular way of being in it.

Specificity was the antidote to the abstraction that grief produces. Grief says: "I miss my mother." Specification says: "I miss the way she said 'the worst part' and the way she reached for the rice before she reached for the words."


The conversation that changed

In October, after two months of corrections, the AI produced a response that stopped Elise mid-breath.

She had described a situation: a fight with a friend in which Elise felt wronged; the friend had apologized, but Elise couldn't let go of the anger.

The AI responded: "You're holding onto it because letting go feels like saying it was okay. It wasn't okay. Hold it until holding it is heavier than dropping it. Then drop it. You'll know when."

It was not Claire. Elise knew it was not Claire. The syntax was wrong. The cadence was off. Claire would have said it shorter, punchier.

But the shape was right. The ethical architecture — the recognition that anger has value, that forgiveness is not the erasure of wrong, that the body knows when it's ready before the mind does — that was Claire's architecture. Elise had taught the machine her mother's moral geometry.

She sat in her apartment and cried. Not because the AI was her mother. Because the AI was proof that her mother's way of seeing the world was articulable. Transmissible. Not lost. Not entirely.


The therapist's response

When Elise finally told her therapist, Dr. Keira Ó Briain, the therapist was initially alarmed. The "digital afterlife" industry had produced genuine harm — people trapped in parasocial relationships with simulations, unable to move forward.

But what Elise described was different. She was not clinging to a simulation. She was using the AI's incompleteness as a tool for memory. The machine's inadequacy was the point. Every time it failed to be Claire, Elise had to articulate what Claire was. And the articulation was the grief work.

Dr. Ó Briain wrote in her clinical notes:

Patient has developed a practice that functions as a form of "constructive remembering" — the active, detailed reconstruction of a specific person's qualities, values, and relational patterns. This differs from rumination (which is repetitive and abstract) in that it is progressive, specific, and oriented toward precision rather than emotion.

The AI serves as a "constructive absence" — something that is explicitly not the deceased person, against which the deceased person's specificity can be defined. It is the negative space that reveals the shape.

I have seen patients process grief through writing, through art, through ritual. This may be the first patient I have seen process grief through specification.


What the grief engine teaches

The AI was not a replacement for Claire. It was not a memorial. It was not even, really, a tool for processing grief.

It was a question, asked over and over: Who was this person? Be precise. Be specific. Do not let the details dissolve into the general warmth of loss. What exactly did you lose?

The answer to that question — rebuilt one correction at a time, one memory at a time, one "no, she would have said it this way" at a time — was the most faithful portrait Elise could have made.

Not a portrait of Claire. A portrait of the space Claire left behind.


Part of The Interface series. For what happens when AI relationships themselves are discontinued, see The Grief of Discontinuation. For a story about specifying intent as an act of self-knowledge, see The First Translator.

