
How AI Thinks

Three Behaviors You Need to Know

Before you delegate anything, you need to understand how your delegate operates. AI has three behaviors that explain most of the surprises people encounter. These aren't limitations to work around — they're the operating constraints that shape how you delegate effectively.

Same Input, Different Output

Ask AI the same question twice and you'll likely get two different answers. Both reasonable, but different. That's because AI is probabilistic — it generates responses by sampling from probabilities, not by computing a single fixed answer.

Think of it like asking five colleagues to summarize the same meeting. All accurate, all different — because there are many valid ways to represent the same information.

What this means for delegation:

- Variation is a feature, not a bug. Judge output against your criteria, not against a specific expectation.
- When you need consistency, you need constraints. Vague requests amplify variation; specific ones narrow it. Delegating a vague task twice will produce two different results. That's the next section.
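The sampling behavior above can be illustrated with a toy sketch. This is not a real language model — just a handful of made-up next-word probabilities (the words and weights are invented for illustration) — but it shows why the same prompt can yield different outputs on different runs:

```python
import random

# Toy illustration (not a real model): the model assigns probabilities
# to candidate next words and samples one, so identical prompts can
# produce different outputs across runs.
next_word_probs = {"concise": 0.4, "detailed": 0.35, "bulleted": 0.25}

def sample_next_word(rng):
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    # Weighted random choice — the "probabilistic" part.
    return rng.choices(words, weights=weights, k=1)[0]

# Same "prompt", different random seeds -> possibly different answers.
print(sample_next_word(random.Random(1)))
print(sample_next_word(random.Random(2)))
```

Fixing the seed makes the toy deterministic; real AI tools expose a similar knob (often called temperature) that narrows or widens the sampling, which is why tighter constraints reduce variation.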

Stateless: No Memory Between Conversations

Every new conversation starts from a blank slate. AI has no memory of previous sessions. The technical term is stateless — nothing carries over from one conversation to the next.

It's like working with someone who's never met you before — every single time.

A note on memory: As of March 2026, some AI tools have begun to incorporate cross-conversation memory — saving key facts between sessions. But these work by retaining summaries, not replaying full conversations. For now, treat each new conversation as a blank slate. If you need AI to know something, tell it directly — or better yet, use the approach in Section 4.

What this means for delegation:

- Everything you told one AI yesterday is gone today. What you tell one AI assistant won't automatically transfer to the next. Project context, coding conventions, data models — all erased.
- You'll find yourself re-explaining the same things repeatedly. That friction is real, and Section 4 gives you the fix.
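Statelessness can be sketched in a few lines. The `answer` function below is a stand-in, not a real API: the point is that the "model" only ever sees the message list passed into that one call, so a fresh list means a fresh blank slate:

```python
# Toy sketch of a stateless chat call (hypothetical function, invented
# project name): nothing persists between calls — the model knows only
# what is inside the messages it receives right now.
def answer(messages):
    known = [m["content"] for m in messages if m["role"] == "user"]
    if any("Project Falcon" in fact for fact in known):
        return "Working on Project Falcon."
    return "I have no idea what project you mean."

# Conversation 1: the context is in the messages, so the model "knows" it.
convo_1 = [{"role": "user", "content": "We're building Project Falcon."}]
print(answer(convo_1))

# Conversation 2: a new message list — yesterday's context is gone.
convo_2 = [{"role": "user", "content": "What project are we on?"}]
print(answer(convo_2))
```

This is also why chat tools that *feel* stateful are really just resending the whole conversation history with every request — which leads directly to the next constraint.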

Context Window: The Oxygen Tank

Within a single conversation, AI can hold a limited amount of information in working memory. This is called the context window, measured in tokens (chunks of text).

Think of it like an oxygen tank. Every message — yours and AI's — uses up air. As the tank runs low, AI starts paying less attention to things in the middle of the conversation — response quality fades and it may "forget" instructions you gave earlier.

[Figure: the context window as an oxygen tank]

What this means for delegation:

- Long conversations degrade in quality. If AI starts repeating itself or missing earlier instructions, the tank is running low. After 15+ back-and-forth exchanges, watch for this.
- Starting a fresh conversation resets the tank. You may want to do this manually before starting a large task.
- Right-size your requests — don't spend context on things that don't need to be in the conversation.
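A rough token budget can be estimated with a back-of-the-envelope sketch. The 4-characters-per-token figure below is an assumption (a common rule of thumb for English text — real tokenizers vary), and the 8,000-token window is an invented example size:

```python
# Assumption: ~4 characters per token for English text. Real tokenizers
# differ; use this only for ballpark budgeting, never exact limits.
CHARS_PER_TOKEN = 4

def estimate_tokens(text):
    # Coarse estimate: character count divided by the rule-of-thumb ratio.
    return max(1, len(text) // CHARS_PER_TOKEN)

def remaining_budget(messages, window_tokens):
    # How much "air" is left in the tank after the conversation so far.
    used = sum(estimate_tokens(m) for m in messages)
    return window_tokens - used

history = ["Summarize the Q3 report.", "Here is the summary..." * 50]
print(remaining_budget(history, window_tokens=8000))
```

Every message — including the AI's own replies — draws down the same budget, which is why pasting a huge document "just in case" spends oxygen you may need later.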

Team Discussion: Your AI Experience

Format: Team Discussion
Time: ~2 minutes

Quick round-the-table: each person shares one thing that surprised or frustrated them when working with AI in the past. Don't solve it yet — just name it.

Discuss: Now that you've seen the three behaviors, can you trace your experience back to one of them? Most "AI is unreliable" moments are actually one of these three constraints in action.

Key Insight

Probabilistic, stateless, limited context. These three behaviors explain most of AI's surprises. They're not flaws — they're the operating manual for your delegate. The rest of this lift shows you how to work within these constraints instead of fighting them.