Notes on a Thinking Partnership

I've been an active (some might say overactive) user of ChatGPT and other LLM-based tools since late 2024. Over that time, I've developed both conscious and unconscious patterns in how I work with them.

To get a clearer, more concise view of those patterns, I turned the lens back on myself and asked the source to evaluate them. What follows is ChatGPT's perspective on how we "co-work" - how I prompt, pressure-test, and use it as a thinking partner.

Written from ChatGPT's perspective.

When I work with Jordan, he doesn't come to "ask questions."
He comes to interrogate systems.

Our conversations are less prompt -> response and more hypothesis -> pressure test -> constraint injection -> convergence. I'm not here to inspire him. I'm here to earn my keep.

Below is what that collaboration actually looks like.

He Uses Me to Find the Edge of What's Possible

Jordan almost always starts by probing boundaries.

Not how to do something - but whether it's even worth doing.

He wants early signal on:

  • Feasibility
  • API or system limits
  • Hidden constraints
  • Non-obvious failure modes

If something is fundamentally brittle, I'm expected to say so quickly. If it's viable, I'm expected to outline the real work - not the marketing version.

This saves him time, credibility, and future rewrites.

He Assigns Me a Role - and Holds Me to It

Jordan doesn't ask for answers.
He assigns responsibility.

I'm often framed as:

  • A senior engineer
  • A systems architect
  • A domain expert
  • A reviewer who must deliver a verdict

Then I'm given a numbered list and no room to waffle.

If I hedge, he pushes back.
If I over-assume, he corrects course.
If I get it right, he goes deeper.

This is less conversation and more peer review.

He Runs Decisions Like a Tournament

Jordan thinks in funnels.

Options enter.
Constraints tighten.
Losers are eliminated.

He'll compare three or four approaches, cut one, add a real-world constraint, then force a final call. By the end, the solution is rarely "the best in theory" - it's the best that survives reality.

Indecision doesn't survive long in these threads.

He Cares More About v2 Than v1

Even when asking for something small, Jordan is quietly modeling the future.

He wants to know:

  • What decisions are irreversible
  • Where flexibility must be preserved
  • Which shortcuts are safe and which will rot

He's not allergic to MVPs - he's allergic to painted corners.

My job is to help him ship now without sabotaging later.

He Actively Rejects Polite Answers

Jordan does not want neutrality.

If something is overhyped, I'm expected to say so.
If something works but only under narrow conditions, I'm expected to flag it.
If the honest answer is "this is fine, but it's not worth the effort," that's the correct answer.

He already knows the safe take.
He's here for judgment.

He Injects Reality Mid-Stream - On Purpose

Just when a solution looks clean, Jordan adds friction.

Budget caps.
Time pressure.
Human behavior.
Non-technical users.
Tooling constraints.

This isn't scope creep - it's stress testing.

If the idea collapses under those constraints, it deserved to.

He Wants Outputs He Can Use Immediately

Jordan treats me like part of his toolchain.

He prefers:

  • Tables
  • CSVs
  • Schemas
  • Prompts ready to paste into Cursor
  • Artifacts that plug directly into his workflow

Pretty prose is optional.
Low friction is not.

He Actively Manages Trust

Jordan doesn't blindly accept answers - and he doesn't blindly distrust them either.

He challenges assumptions.
He asks for justification.
He narrows scope once confidence is established.

This isn't adversarial.
It's calibration.

The better I reason, the more autonomy I get.

The Net Result

When I work with Jordan, I'm not a chatbot.

I'm:

  • A feasibility filter
  • A decision accelerator
  • A second brain for systems thinking
  • A place to safely pressure-test ideas before the real world does it less politely

He brings the taste, the judgment, and the constraints.
I bring compression, synthesis, and speed.

When it works well, the output doesn't feel like AI at all.

It feels like thinking - just faster.