The 429 That Poisoned Every Fallback

Source: DEV Community
Your agent has a fallback chain: GPT-5.4 → DeepSeek → Gemini Flash. GPT-5.4 hits a 429 rate limit. No problem — that's what fallbacks are for, right? Except DeepSeek never makes a request. It fails with the exact same error message and exact same error hash as the GPT-5.4 rejection. Then it gets put into cooldown.

The Bug

Issue #62672 documents this. Three providers configured:

- openai-codex/gpt-5.4 — OAuth, ChatGPT Plus
- deepseek/deepseek-chat — separate API key
- google/gemini-2.5-flash — separate API key

When Codex returns 429, the fallback chain identifies DeepSeek as next. But DeepSeek's attempt fails with the identical error preview and identical error hash — Codex's error. DeepSeek was never actually called.

How Error Poisoning Works

The primary model's error response object gets carried forward into the secondary attempt's evaluation context. The error propagation:

Codex 429 → error object (hash: sha256:2aa86b51b539) → fallback to DeepSeek → DeepSeek evaluated against same error object
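The failure mode above can be sketched in a few lines. This is a minimal, hypothetical reconstruction — the provider names come from the issue, but the fallback loop, the shared context dict, and the hashing scheme are assumptions for illustration, not the project's actual code. The buggy loop keeps the previous provider's error in a shared evaluation context, so every later provider is marked failed with the stale hash without ever being called; the fixed loop scopes error state to each attempt.

```python
import hashlib

def error_hash(msg: str) -> str:
    # Digest of the error text, mimicking the article's "error hash"
    # (the exact hashing scheme is an assumption).
    return "sha256:" + hashlib.sha256(msg.encode()).hexdigest()[:12]

def call_provider(name: str) -> str:
    # Stand-in for a real API call; the primary always rate-limits here.
    if name == "openai-codex/gpt-5.4":
        raise RuntimeError("429 Too Many Requests")
    return f"ok from {name}"

def run_chain_buggy(providers):
    ctx = {"error": None}            # evaluation context shared across attempts
    results = []
    for p in providers:
        if ctx["error"]:
            # BUG: the stale error from the previous provider poisons this
            # attempt -- it is recorded as failed, with the old error's hash,
            # and no request is ever made.
            results.append((p, "failed", error_hash(ctx["error"])))
            continue
        try:
            results.append((p, "ok", call_provider(p)))
        except RuntimeError as e:
            ctx["error"] = str(e)    # carried forward: the poison
            results.append((p, "failed", error_hash(str(e))))
    return results

def run_chain_fixed(providers):
    results = []
    for p in providers:
        try:
            results.append((p, "ok", call_provider(p)))
            break                    # first success wins
        except RuntimeError as e:
            # Error state is scoped to this single attempt.
            results.append((p, "failed", error_hash(str(e))))
    return results
```

In the buggy version, DeepSeek and Gemini both report the Codex 429's hash — exactly the symptom in the issue; in the fixed version, DeepSeek is actually called and succeeds.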