Does How You Feed Context to an LLM Agent Change What It Remembers? I Tested With Canary Strings.
Source: DEV Community
I work with three LLM agents daily — Claude Code, Codex CLI, and Gemini CLI. Before each task, I load project context (design docs, data models, implementation guides) into the agent so it understands what it's working on. But there are multiple ways to load that context:

- You can have the agent run a shell command and read the output.
- You can point it at a file.
- You can split the context across several files.

Does the delivery method affect how much the agent actually retains? I ran a small experiment using canary strings — unique, unpredictable markers embedded throughout the context — to measure retention objectively. Here's what I found.

Background: What's a "Base Session"?

A base session is a pattern for multi-agent development: you load project context into an agent once, record the session_id, and resume that session for every subsequent task. Instead of re-explaining your project from scratch each time, the agent picks up where it left off — already understanding your codebase.
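To make the measurement idea concrete, here's a minimal sketch of the canary-string approach in Python. The helper names (`make_canaries`, `retention_score`) are my own illustrative choices, not part of any agent's API: generate unpredictable markers, embed them in the context you feed the agent, then later ask the agent to recall them and score what fraction came back.

```python
import secrets

def make_canaries(n):
    """Generate n unique, unpredictable canary strings to embed in context."""
    return [f"CANARY-{secrets.token_hex(4).upper()}" for _ in range(n)]

def retention_score(canaries, agent_reply):
    """Fraction of embedded canaries that appear verbatim in the agent's reply."""
    found = [c for c in canaries if c in agent_reply]
    return len(found) / len(canaries)

canaries = make_canaries(5)
# Simulate an agent reply that recalls only two of the five markers.
reply = f"From the context I remember {canaries[0]} and {canaries[2]}."
print(retention_score(canaries, reply))  # → 0.4
```

Because the markers are random hex, a model can't guess them from training data — any canary in the reply must have come from the loaded context, which is what makes the retention score an objective signal.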