8 Comments
Bruce Landay:

An interesting exercise, though having read City of Dreams, nothing the AI said was surprising or truly that insightful. As the author, I would think everything mentioned was already in the back of your mind, if not already front of mind. The AI picked up on themes but not much else.

Kenneth E. Harrell:

That’s an interesting perspective. For me, the AI’s analysis unearthed a few recurring themes and tensions in my work that I hadn’t always been fully conscious of.

For instance: “There’s a consistent pushback against the idea of being controlled by systems, by technology, by social expectations which may indicate a deep personal value placed on autonomy and the right to self-determination.”

That’s something I’ve always wrestled with internally, but seeing it articulated so plainly was striking. It’s a thread that’s been running through my thinking since I first read “Throwing Rocks At The Google Bus” (https://www.amazon.com/Throwing-Rocks-Google-Bus-Prosperity/dp/014313129X). The core question, how much agency we truly have in a world increasingly shaped by algorithms, markets, and opaque systems, feels more urgent than ever.

Another line that stood out: “The juxtaposition of spiritual salvation against technological immortality makes his choices anything but straightforward.”

This resonates deeply with me. It reflects not only the moral terrain my characters often navigate, but something broader: how we, in real life, face hard decisions where no path is clean or easy. Internal conflict becomes the most honest kind of conflict because it mirrors the ambiguity we live with daily. Take, for example, the question of whether to shut down an African cobalt mine. Doing so could spare workers from grueling, often dehumanizing labor, but it might also strip away the only means of survival for communities already on the margins. Which choice is good? Which is right? These aren’t simple rhetorical questions; they’re real, raw, and very hard to answer.

That’s exactly the kind of tension I think fiction should explore if it hopes to engage readers in a meaningful way. I want it to be uncomfortable; I want it to be challenging. In my experience, the stories that linger in the mind are the ones that don’t offer easy resolutions, just characters doing their best to navigate what are often impossible situations.

Bruce Landay:

I'm glad you found it helpful and learned something about yourself. I'm not surprised by your independent streak and desire to be in control; it's something I share with you. I'm looking at publication options for myself, and while you've chosen to do everything yourself, I'm considering a hybrid option where I get professional help but still retain much of the control. Keep the columns coming. I'm enjoying coming along for the ride as you figure out AI.

Derek James Kritzberg:

In my experience, LLMs start to hallucinate past a certain word count. If I had more than two hours a day to write, I'd probably put more time into something more special than pitching things at Grok.

Kenneth E. Harrell:

Personally, I find AI hallucinations just as interesting as talking to my high friends on the phone late at night lol

Derek James Kritzberg:

lol yes!

Derek James Kritzberg:

An editor at a small press recently put my book through an AI program meant to assist editors, then asked me to look at what it told her and whether I'd recommend it. Half the things it said were pretty cool; the other half made me scratch my head. "How did it come up with THAT!?"

There are too many examples to choose from, but in one case it said a certain character lacks clear motivation for seeking vengeance against someone. Said someone literally kills her best friend right in front of her in an act of treachery, then imprisons and damages her.

My response essentially said no, I don't recommend this tool, lol. I hear it said in some places not to use AI on anything longer than 2,000 words, but in my experience (Grok, mostly) it handles 5,000 words quite well. It also depends on what you're asking it to do; in your post's example, I believe the LLM will do a good job of general pattern recognition like this. We're still quite far from AI being able to do good work without careful human scrutiny, though.

Kenneth E. Harrell:

I’m curious about why people have such different experiences with AI tools. Personally, mine have been mostly positive. I rely on a custom GPT I trained myself, as well as tools like ProWritingAid.

At this point, ProWritingAid has essentially replaced the need for a human editor; it's my main editing tool. My custom GPT, on the other hand, gives me analysis and feedback.

While ProWritingAid can offer critiques and feedback too, it’s tied to a monetization model. You have to buy credits to access those features, which is the main reason I started using my own GPT for critiques instead.
