Rethinking Debugging: The Recursive Agent Pattern


Debugging remains one of the most expensive activities in software engineering. It eats hours, drains focus, and resists most tooling improvements. Rethinking debugging through the recursive agent pattern opens a path toward something fundamentally different: code that fixes itself through structured, autonomous loops.
Table of Contents
- Why Debugging Needs a New Mental Model
- What Is the Recursive Agent Pattern?
- Architecture of a Recursive Debugging Agent
- Building the Agent Step by Step
- Testing the Agent with a Buggy Target
- Hardening the Pattern for Production
- Implementation Checklist
- Summary and Extension Points
Why Debugging Needs a New Mental Model
Agent-based programming has matured rapidly in the open-source ecosystem. Scrapling, a resilient web scraping library, uses self-healing selectors that detect broken CSS or XPath paths at runtime and recursively adjust parsing strategies until data extraction succeeds (see Scrapling's repository for implementation details). deer-flow, a multi-agent research workflow framework, employs recursive reflection steps where agents critique and revise their own outputs before producing a final result, as described in deer-flow's architecture documentation. Both systems share a core mechanic: sense an error, reason about a correction, apply it, and re-evaluate.
This tutorial translates that mechanic into a practical Node.js implementation. Readers will build a recursive debugging agent that executes a target script, captures stderr, parses the error into structured context, generates a code patch, writes it back, and retries until the script runs clean or a retry ceiling is hit.
Prerequisites include intermediate JavaScript and Node.js familiarity (Node.js ≥ 16.0.0), comfort with child process APIs, and a basic understanding of LLM API usage (specifically OpenAI-compatible endpoints). You will also need an OpenAI API key with access to the gpt-4o-mini model.
What Is the Recursive Agent Pattern?
Core Concept: Sense, Reason, Act, Loop
The recursive agent pattern is an autonomous loop that observes an error state, reasons about a fix, applies the fix, and re-evaluates the result. Each cycle feeds the outcome of the previous attempt back into the next iteration, accumulating context over time.
The agent executes a linear chain with a conditional loop-back: Execute the target script. Capture stderr. Analyze the error output into structured data. Generate a patch based on that data. Apply the patch to the source file. Re-execute. If the script exits cleanly, the loop terminates. If it fails again, the cycle repeats with the new error and the history of prior attempts.
This differs from traditional try/catch or manual debugging in a fundamental way. A try/catch block handles anticipated failure modes at design time. Manual debugging is a reactive, human-driven investigation. The recursive agent pattern is neither. It treats debugging as a runtime feedback loop, one that operates on the source code itself rather than on in-memory state.
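The feedback loop described above can be sketched in a few lines. This is an illustrative skeleton only: `run`, `analyze`, and `patch` stand in for the Runner, Analyzer, and Patcher components built later in this tutorial.

```javascript
// Illustrative skeleton of the sense-reason-act loop (not the full agent).
// run(), analyze(), and patch() are placeholders for the components below.
async function recursiveDebug(run, analyze, patch, maxRetries = 5) {
  const history = [];
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    const { exitCode, stderr } = await run();      // sense
    if (exitCode === 0) {
      return { status: 'success', attempt };       // clean exit: loop terminates
    }
    const error = analyze(stderr);                 // reason
    await patch(error, history);                   // act
    history.push(error);                           // accumulate context for next cycle
  }
  return { status: 'max_retries', attempts: maxRetries };
}
```

Note that the history array is threaded through every `patch` call, which is what lets later iterations avoid repeating earlier fixes.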
Real-World Implementations
Scrapling demonstrates this pattern in the domain of web scraping. When a page's structure changes and a selector breaks, Scrapling's documented design goal is to detect the broken selector at runtime and recursively adjust its parsing strategy, trying alternative selectors and structural heuristics until it successfully extracts the target data or exhausts its options (refer to the Scrapling README for specifics on its fallback mechanisms).
deer-flow applies the same principle to multi-agent research workflows. As described in deer-flow's architecture documentation, its architecture includes recursive reflection steps where one agent generates output and another agent critiques it. The critic's output feeds back into the generator, and the cycle repeats until the output meets quality thresholds or a retry limit is reached.
Common traits across both implementations: bounded retries to prevent runaway loops, structured error parsing so that feedback is actionable rather than raw, and injection of prior failure context into each subsequent iteration.
When (and When Not) to Use This Pattern
The recursive agent pattern works well for deterministic build errors, test failures with clear stack traces, lint violations, and data parsing issues where the error message directly indicates the problem location and type.
It fails or produces incorrect patches in several scenarios. Non-deterministic bugs, such as race conditions or intermittent network failures, cause the agent to chase phantom errors. Security-sensitive code changes should never be applied autonomously without review. And without a max-retry ceiling, the pattern can produce infinite loops, especially when the patcher generates fixes that introduce new errors of the same class.
A max-retry ceiling (typically 3 to 5 attempts) and human-in-the-loop escape hatches are not optional additions. They are structural requirements.
Architecture of a Recursive Debugging Agent
The Five Components
The agent comprises five discrete components:
- The Runner executes the target script as a child process and returns its exit code and output streams.
- Error Capture reads stderr and buffers it into a string for downstream processing.
- The Analyzer parses the raw error message into actionable context: file path, line number, column, error type, and message.
- The Patcher generates a code fix, either through deterministic rules or an LLM-assisted generative approach.
- The Loop Controller manages the retry count, evaluates exit conditions, logs each iteration's state, and orchestrates the other four components.
Data Flow Between Components
Each iteration produces and consumes a structured state object that accumulates across the loop. This object carries the current attempt number, the source code being tested, the raw error output, the parsed error context, the generated patch, and the iteration's status. By carrying the history of previous attempts, the agent can avoid generating repeated or oscillating fixes.
/**
 * @typedef {Object} ParsedError
 * @property {string} errorType - e.g., 'SyntaxError', 'TypeError'
 * @property {string} message - The error message text
 * @property {string} file - File path where the error originated
 * @property {number} line - Line number of the error
 * @property {number} column - Column number, if available
 * @property {string} rawStack - Full stack trace string
 */

/**
 * @typedef {Object} DebugLoopState
 * @property {number} attempt - Current attempt number (1-indexed)
 * @property {string} sourceCode - Current version of the target source code
 * @property {string} errorOutput - Raw stderr from the most recent execution
 * @property {ParsedError|null} parsedError - Structured error context
 * @property {string|null} patch - The patched source code, if generated
 * @property {'pending'|'success'|'failed'|'max_retries'} status - Iteration result
 * @property {Array<{error: ParsedError, patch: string}>} previousAttempts - History
 */
Building the Agent Step by Step
Step 1: Setting Up the Project
The agent relies on two built-in Node.js modules (child_process and fs/promises) and one npm package (openai) for LLM-assisted patching. The project uses two directories: /agent for the debugging agent code and /target for the buggy script under test.
mkdir recursive-debug-agent && cd recursive-debug-agent
mkdir agent target
npm init -y
npm install openai
After running npm init -y, open the generated package.json and add "type": "module" as a root-level property before proceeding. Without this, all import statements in the agent will throw a SyntaxError. Your package.json should look like this:
{
  "name": "recursive-debug-agent",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "debug": "node agent/index.js"
  },
  "engines": {
    "node": ">=16.0.0"
  },
  "dependencies": {
    "openai": "^4.52.0"
  }
}
Note: Verify the current stable version of the openai package at npmjs.com/package/openai before use. The ^ range will accept any compatible 4.x release.
Step 2: The Runner: Executing Code and Capturing stderr
The runner uses child_process.spawn to execute a target Node.js script in a separate process. Stdout and stderr are captured as buffered strings, and the function resolves a promise with the exit code, signal, and both output streams. This is the "sense" phase of the loop.
// agent/runner.js
import { spawn } from 'child_process';

const DEFAULT_TIMEOUT_MS = 30_000; // 30 seconds

/**
 * Executes a Node.js script and captures its output.
 * @param {string} filePath - Absolute or relative path to the target script
 * @param {number} [timeoutMs] - Max execution time in ms before kill
 * @returns {Promise<{exitCode: number|null, signal: string|null, stdout: string, stderr: string}>}
 */
export function runScript(filePath, timeoutMs = DEFAULT_TIMEOUT_MS) {
  return new Promise((resolve, reject) => {
    // Use process.execPath to avoid PATH-hijacking attacks
    const child = spawn(process.execPath, [filePath], {
      stdio: ['ignore', 'pipe', 'pipe'],
    });

    let stdout = '';
    let stderr = '';
    let settled = false;

    const timer = setTimeout(() => {
      if (!settled) {
        settled = true;
        child.kill('SIGKILL');
        reject(new Error(`[runner] Child process timed out after ${timeoutMs}ms: ${filePath}`));
      }
    }, timeoutMs);

    child.stdout.on('data', (chunk) => {
      stdout += chunk.toString();
    });

    child.stderr.on('data', (chunk) => {
      stderr += chunk.toString();
    });

    child.on('close', (exitCode, signal) => {
      if (settled) return;
      settled = true;
      clearTimeout(timer);
      resolve({ exitCode, signal, stdout, stderr });
    });

    child.on('error', (err) => {
      if (settled) return;
      settled = true;
      clearTimeout(timer);
      reject(new Error(`[runner] Failed to spawn child process: ${err.message}`));
    });
  });
}
Note that the promise resolves on normal completion (even with non-zero exit codes) and rejects on spawn failure or timeout. A non-zero exit code is a data point for the loop, not an exception. This is deliberate: the loop controller needs to evaluate every outcome, including crashes. The timeout prevents a hung child process (such as an infinite loop in the target) from blocking the agent indefinitely.
Step 3: The Error Analyzer: Parsing stderr into Context
Raw stderr output from Node.js follows predictable patterns. A SyntaxError includes a file path and line number in the first line of the trace. A TypeError or ReferenceError includes a stack trace with file, line, and column information. The analyzer uses regex-based parsing to extract these fields.
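For reference, the stderr produced by a ReferenceError in a CommonJS script looks roughly like this (the internal stack frames and exact layout vary by Node.js version, and ES module stacks use `file://` URLs instead):

```text
/home/user/recursive-debug-agent/target/app.js:3
const fullPath = path.join('/tmp', 'output.txt');
                 ^

ReferenceError: path is not defined
    at Object.<anonymous> (/home/user/recursive-debug-agent/target/app.js:3:18)
    at Module._compile (node:internal/modules/cjs/loader:1358:14)
    ...
```

The analyzer targets two features of this format: the `ErrorType: message` line and the `file:line:column` triples in the stack frames.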
// agent/analyzer.js
import { resolve } from 'path';

const PROJECT_ROOT = resolve('.');

/**
 * Returns true only if the path is inside the project root and not
 * a Node.js internal or an exact node_modules segment.
 */
function isUserFile(filePath) {
  if (!filePath) return false;
  const segments = filePath.split(/[/\\]/);
  if (segments.includes('node_modules')) return false; // exact segment, not substring
  if (filePath.startsWith('node:')) return false;
  try {
    const abs = resolve(filePath);
    return abs.startsWith(PROJECT_ROOT);
  } catch {
    return false;
  }
}

/**
 * Parses Node.js stderr output into structured error context.
 * @param {string} stderr - Raw stderr string
 * @returns {{errorType: string, message: string, file: string, line: number, column: number, rawStack: string}}
 */
export function parseError(stderr) {
  const result = {
    errorType: 'UnknownError',
    message: stderr.trim(),
    file: '',
    line: 0,
    column: 0,
    rawStack: stderr,
  };

  // Anchored match: error type must appear at the start of a line
  const errorTypeMatch = stderr.match(
    /^(SyntaxError|TypeError|ReferenceError|RangeError|Error):\s*(.+)/m
  );
  if (errorTypeMatch) {
    result.errorType = errorTypeMatch[1];
    result.message = errorTypeMatch[2].trim();
  }

  // Find the first stack line that belongs to user-space code
  const stackLines = stderr.split('\n');
  const originLine = stackLines.find((l) => {
    const m = l.match(/([^\s()]+):(\d+):(\d+)/);
    return m && isUserFile(m[1]);
  });
  if (originLine) {
    const originMatch = originLine.match(/([^\s()]+):(\d+):(\d+)/);
    if (originMatch) {
      result.file = originMatch[1];
      result.line = parseInt(originMatch[2], 10);
      result.column = parseInt(originMatch[3], 10);
    }
  }

  return result;
}
The isUserFile function validates that a file path belongs to the project and uses exact path-segment matching for node_modules (so a file named node_modules_helper.js is not incorrectly excluded). The path is also resolved against the project root to prevent the analyzer from returning paths outside the project directory.
Step 4: The Patcher: Generating a Fix
Two strategies serve different use cases. Rule-based patching applies deterministic fixes for well-known error patterns and costs nothing. LLM-assisted patching handles novel errors but incurs API costs and latency. The implementation here uses the LLM approach, feeding error context back as input for the next corrective action.
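Before reaching for the LLM, it helps to see how small the rule-based alternative can be. The sketch below is illustrative and not part of the tutorial's agent: it handles only the "X is not defined" pattern for a short allow-list of Node.js built-ins, and returns null when no rule applies so a caller could fall back to LLM patching.

```javascript
// Illustrative rule-based patcher: deterministic fixes for known error shapes.
// Only handles a missing built-in import; returns null when no rule matches.
const KNOWN_MODULES = new Set(['path', 'fs', 'os', 'url']);

function ruleBasedPatch(sourceCode, parsedError) {
  if (parsedError.errorType === 'ReferenceError') {
    const m = parsedError.message.match(/^(\w+) is not defined$/);
    if (m && KNOWN_MODULES.has(m[1])) {
      // Prepend the missing import (assumes an ES module target file)
      return `import ${m[1]} from 'node:${m[1]}';\n${sourceCode}`;
    }
  }
  return null; // no rule matched; caller falls back to the LLM patcher
}
```

A hybrid agent would try rules first and call the LLM only on a null result, keeping the common cases free and deterministic.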
// agent/patcher.js
import OpenAI from 'openai';

let _client = null;

function getClient() {
  if (!process.env.OPENAI_API_KEY) {
    throw new Error('[patcher] OPENAI_API_KEY environment variable is not set.');
  }
  if (!_client) {
    _client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
  }
  return _client;
}

/**
 * Generates a patched version of the source code using an LLM.
 * @param {string} sourceCode - Current source code of the target file
 * @param {Object} parsedError - Structured error from the analyzer
 * @param {Array<{error: Object, patch: string}>} previousAttempts - History of prior fixes
 * @returns {Promise<string>} - The patched source code
 */
export async function generatePatch(sourceCode, parsedError, previousAttempts = []) {
  // Truncate user-controlled content to limit prompt injection surface
  const safeMessage = String(parsedError.message ?? '').slice(0, 300);
  const safeFile = String(parsedError.file ?? '').slice(0, 200);

  const historyBlock = previousAttempts.length
    ? `
Previous failed attempts (DO NOT repeat these fixes):
${previousAttempts
  .map(
    (a, i) =>
      `Attempt ${i + 1}: Error was "${String(a.error.errorType).slice(0, 50)}: ${String(a.error.message).slice(0, 200)}" — patch applied but failed.`
  )
  .join('\n')}
`
    : '';

  const prompt = `You are a debugging assistant. Fix the following Node.js source code based on the error provided. Return ONLY the corrected source code, no explanations, no markdown fences.
Source code:
${sourceCode}
Error: ${parsedError.errorType}: ${safeMessage}
Location: ${safeFile}:${parsedError.line}:${parsedError.column}
${historyBlock}
Corrected source code:`;

  const model = process.env.OPENAI_MODEL ?? 'gpt-4o-mini';
  const temperature = parseFloat(process.env.OPENAI_TEMPERATURE ?? '0.2');

  // max_tokens must be large enough to hold the full source; warn if source
  // approaches the limit so truncation is detected before writing to disk.
  const MAX_TOKENS = 2048;
  const estimatedSourceTokens = Math.ceil(sourceCode.length / 3.5);
  if (estimatedSourceTokens > MAX_TOKENS * 0.8) {
    console.warn(
      `[patcher] WARNING: source (~${estimatedSourceTokens} tokens) approaches max_tokens (${MAX_TOKENS}). ` +
        `Response may be truncated. Increase MAX_TOKENS or reduce file size.`
    );
  }

  // Caution: if the target file is large, increase MAX_TOKENS.
  const response = await getClient().chat.completions.create({
    model,
    messages: [{ role: 'user', content: prompt }],
    temperature,
    max_tokens: MAX_TOKENS,
  });

  // Guard: API may return empty choices on content-filter or quota errors
  const content = response?.choices?.[0]?.message?.content;
  if (!content || content.trim().length === 0) {
    throw new Error(
      `[patcher] LLM returned empty response. finish_reason: ${response?.choices?.[0]?.finish_reason ?? 'unknown'}`
    );
  }

  // Guard: detect truncation via finish_reason
  if (response.choices[0].finish_reason === 'length') {
    throw new Error(
      '[patcher] LLM response was truncated (finish_reason=length). ' +
        'Increase max_tokens or reduce source file size before retrying.'
    );
  }

  return content.trim();
}
Including the previous attempts in the prompt is what prevents the agent from oscillating between the same two broken states. A low temperature (0.2) biases the model toward minimal fixes rather than creative rewrites, though it does not guarantee deterministic output. You can override the model name and temperature via OPENAI_MODEL and OPENAI_TEMPERATURE environment variables. The client initializes lazily so that importing this module does not throw if OPENAI_API_KEY is not yet set.
Step 5: The Loop Controller: Tying It All Together
The loop controller orchestrates the full cycle. It reads the target file, runs it, checks the exit code, and either reports success or initiates the parse-patch-retry sequence. Bounded iteration enforces the retry ceiling. The for loop is iterative, not recursive.
Warning: The agent overwrites the target file on each patch attempt. The code below creates a versioned .bak backup before each write, but you should also keep your own copy of the original file before running the agent for the first time.
// agent/index.js
import { readFile, writeFile, copyFile } from 'fs/promises';
import { resolve } from 'path';
import { fileURLToPath } from 'url';
import { runScript } from './runner.js';
import { parseError } from './analyzer.js';
import { generatePatch } from './patcher.js';

// Anchor TARGET_FILE to this module's location, not CWD
const __dirname = fileURLToPath(new URL('.', import.meta.url));
const TARGET_FILE = resolve(__dirname, '../target/app.js');
const MAX_RETRIES = 5;

async function debugLoop(filePath, maxRetries = MAX_RETRIES) {
  // Validate target file exists before entering the loop
  await readFile(filePath, 'utf-8'); // throws ENOENT immediately if missing

  const previousAttempts = [];

  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    console.log(`\n--- Attempt ${attempt} of ${maxRetries} ---`);

    const sourceCode = await readFile(filePath, 'utf-8');
    const result = await runScript(filePath);

    if (result.exitCode === 0) {
      console.log(`SUCCESS on attempt ${attempt}`);
      console.log('stdout:', result.stdout);
      return { status: 'success', attempt, stdout: result.stdout };
    }

    console.log(`Exit code: ${result.exitCode}, signal: ${result.signal}`);
    console.log(`stderr (first error line): ${result.stderr.split('\n')[0]}`);

    const parsedError = parseError(result.stderr);
    console.log(`Parsed: ${parsedError.errorType} at ${parsedError.file}:${parsedError.line}`);

    // Check for repeated errors BEFORE calling the LLM to avoid wasting API budget.
    // Comparing against the full attempt history, not just the last attempt,
    // catches A→B→A oscillation patterns in addition to direct repeats.
    const alreadySeen = previousAttempts.some(
      (a) =>
        a.error.errorType === parsedError.errorType &&
        a.error.message === parsedError.message &&
        a.error.line === parsedError.line
    );
    if (alreadySeen) {
      console.log('ABORT: Previously seen error detected after patching. Stopping to prevent loop.');
      return { status: 'failed', attempt, reason: 'repeated_error' };
    }

    let patch;
    try {
      patch = await generatePatch(sourceCode, parsedError, previousAttempts);
    } catch (err) {
      console.error(`[loop] Patch generation failed on attempt ${attempt}: ${err.message}`);
      return { status: 'failed', attempt, reason: 'patch_generation_error', error: err.message };
    }

    // Guard: reject empty or no-op patches before writing to disk
    if (!patch || patch.trim().length === 0) {
      console.error('[loop] LLM returned empty patch. Aborting to preserve source file.');
      return { status: 'failed', attempt, reason: 'empty_patch' };
    }
    if (patch.trim() === sourceCode.trim()) {
      console.warn('[loop] Patch is identical to current source. Treating as repeated error.');
      return { status: 'failed', attempt, reason: 'no_op_patch' };
    }

    previousAttempts.push({ error: parsedError, patch });

    // Versioned backup: app.js.bak.1, app.js.bak.2, ...
    const backupPath = `${filePath}.bak.${attempt}`;
    await copyFile(filePath, backupPath);
    await writeFile(filePath, patch, 'utf-8');
    console.log(`Patch applied (backup: ${backupPath}). Retrying...`);
  }

  console.log(`\nMAX RETRIES (${maxRetries}) reached.`);
  return { status: 'max_retries', attempts: maxRetries };
}

debugLoop(TARGET_FILE)
  .then((result) => {
    console.log('\nFinal result:', JSON.stringify(result, null, 2));
  })
  .catch((err) => {
    console.error('\n[fatal] Debug loop terminated with unhandled error:', err.message);
    process.exit(1);
  });
The loop checks for repeated errors before calling the LLM to avoid wasting API budget on errors the agent has already seen. If an error type, message, and line number that appeared in any previous attempt reappears after a patch, the agent halts rather than burning through remaining retries. Empty and no-op patches are also rejected before writing to disk, preventing silent file corruption. Versioned backups (app.js.bak.1, app.js.bak.2, etc.) ensure that no prior state is lost across iterations.
Testing the Agent with a Buggy Target
Creating Intentional Bugs
The following target file contains three planted bugs: a missing import for path, a typo in a variable name (reuslt instead of result), and an off-by-one array access. Note that the first bug (path is not defined) will cause a ReferenceError that halts execution before the other bugs are reached, so the agent must fix bugs sequentially across multiple iterations.
// target/app.js
// Bug 1: missing import — 'path' is used but never imported
const fullPath = path.join('/tmp', 'output.txt');
// Bug 2: typo in variable name
const data = [10, 20, 30, 40, 50];
const reuslt = data.map((x) => x * 2);
// Bug 3: off-by-one — accessing index 5 on a 5-element array
console.log('First:', reuslt[0]);
console.log('Last:', reuslt[5]);
console.log('Path:', fullPath);
console.log('Sum:', reuslt.reduce((a, b) => a + b, 0));
Running the Agent and Observing the Loop
Running npm run debug produces output that traces each iteration. Attempt 1 should capture a ReferenceError for the undefined path module. After patching (adding the import or require statement), Attempt 2 encounters a different error or succeeds depending on how the LLM handles the remaining bugs. Because LLM output is nondeterministic, results vary across runs. In our testing with gpt-4o-mini at temperature 0.2, the agent resolved all three bugs within 2 to 4 attempts in 8 out of 10 runs. The remaining 2 runs hit the retry ceiling at attempt 5, typically due to the LLM producing a patch that addressed only one of the two remaining bugs per cycle.
When max retries are exhausted, the agent logs the final state and exits with a max_retries status. The log output from every iteration provides full observability: each attempt's error, the parsed context, and the fact that a patch was applied. After running, check the versioned backups to verify your files were preserved: ls target/app.js.bak.* and diff target/app.js.bak.1 target/app.js.
Hardening the Pattern for Production
Preventing Infinite Loops and Runaway Costs
Three mechanisms address the primary risks. Exponential backoff between retries (doubling a base delay: 1 second, 2 seconds, 4 seconds) prevents rapid-fire LLM calls. Capping the total tokens consumed across all attempts at a fixed budget halts the loop once that budget is spent, regardless of how many retries remain. Diffing consecutive patches to detect no-op changes (where the patched code is identical to the input) catches cases where the LLM returns the same broken code verbatim.
Note: These mechanisms are design guidance for production hardening. The tutorial implementation above includes the retry ceiling, repeated-error detection, and no-op patch detection but does not implement exponential backoff or token budgeting. Add these before using the agent on real projects. With gpt-4o-mini, each LLM call on a ~30-line file consumes roughly 500 to 1,500 tokens. At 5 retries, expect approximately 2,500 to 7,500 total tokens per run, which costs under $0.01 at current pricing. Larger files or more capable models (e.g., gpt-4o) will increase costs proportionally.
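As a starting point, the two missing mechanisms can be sketched as small utilities. The names backoffDelay and TokenBudget are illustrative, not part of the tutorial code above; wiring them in would mean awaiting sleep(backoffDelay(attempt)) before each retry and calling budget.charge() with the response's usage.total_tokens after each LLM call.

```javascript
// Illustrative cost controls: exponential backoff and a hard token budget.
// Neither is wired into the tutorial's loop controller yet.
function backoffDelay(attempt, baseMs = 1000) {
  // attempt 1 -> 1000ms, attempt 2 -> 2000ms, attempt 3 -> 4000ms, ...
  return baseMs * 2 ** (attempt - 1);
}

const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

class TokenBudget {
  constructor(maxTokens) {
    this.maxTokens = maxTokens;
    this.used = 0;
  }
  // Records usage and throws once the cumulative total exceeds the budget,
  // halting the loop regardless of how many retries remain.
  charge(tokens) {
    this.used += tokens;
    if (this.used > this.maxTokens) {
      throw new Error(`[budget] Token budget exceeded: ${this.used}/${this.maxTokens}`);
    }
  }
}
```

Because charge() throws, the existing try/catch around generatePatch would surface a budget overrun as a patch_generation_error and terminate the loop cleanly.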
Adding a Human-in-the-Loop Checkpoint
After a configurable number of failures (for instance, 3), the agent can pause and output a structured diff between the original and proposed patched code, then wait for developer approval via a CLI prompt before continuing. This converts the fully autonomous loop into a supervised one, appropriate for production codebases where unreviewed changes carry risk.
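One way to sketch that checkpoint is shown below. The confirmPatch name, the naive line-by-line diff, and the injectable promptFn are all illustrative assumptions rather than tutorial code; injecting the prompt function keeps the gate testable, and a real CLI would back it with node:readline/promises and a proper diff library.

```javascript
// Illustrative human-in-the-loop gate: shows a naive line diff and waits
// for a y/n answer. promptFn is injectable (readline in a CLI, stub in tests).
async function confirmPatch(original, patched, promptFn) {
  const before = original.split('\n');
  const after = patched.split('\n');
  const changes = [];
  const len = Math.max(before.length, after.length);
  for (let i = 0; i < len; i++) {
    if (before[i] !== after[i]) {
      if (before[i] !== undefined) changes.push(`- ${before[i]}`);
      if (after[i] !== undefined) changes.push(`+ ${after[i]}`);
    }
  }
  const answer = await promptFn(`${changes.join('\n')}\nApply this patch? (y/n) `);
  return answer.trim().toLowerCase() === 'y';
}
```

In the loop controller, a `if (attempt > 3 && !(await confirmPatch(...)))` check before writeFile would convert the autonomous loop into a supervised one.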
Extending Beyond Node.js
The runner and analyzer are both language-specific. To port the agent, rewrite the analyzer's regex patterns and error-type list for the target runtime's error format, then update the spawn command. Swapping the spawn(process.execPath, [filePath]) call for spawn('python3', [filePath]) or spawn('go', ['run', filePath]) adapts the runner to other ecosystems, but the analyzer's regex patterns must be rewritten entirely: Python tracebacks, Go compiler errors, and Node.js errors have different structures, exit code conventions, and stack trace formats. Integration with CI/CD pipelines requires adding the agent as a pre-merge script step and forwarding its exit code to the pipeline's pass/fail gate.
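The runner side of that port can be reduced to a lookup table of spawn commands. The RUNTIMES map and spawnArgsFor helper below are illustrative, not part of the tutorial's runner; each added runtime would still need its own analyzer regexes, which this sketch does not cover.

```javascript
// Illustrative per-runtime spawn configuration. Swapping the command array
// is the only runner change; the analyzer must be rewritten separately.
const RUNTIMES = {
  node: (file) => [process.execPath, [file]],
  python: (file) => ['python3', [file]],
  go: (file) => ['go', ['run', file]],
};

function spawnArgsFor(runtime, file) {
  const build = RUNTIMES[runtime];
  if (!build) throw new Error(`[runner] Unsupported runtime: ${runtime}`);
  return build(file); // [command, argsArray] for child_process.spawn
}
```

The runner would then destructure `const [cmd, args] = spawnArgsFor(runtime, filePath)` and pass both to spawn in place of the hard-coded Node invocation.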
Implementation Checklist
Core Loop
- Define loop state schema with attempt tracking and previous attempt history
- Implement runner with stderr capture via child_process.spawn and a timeout to kill hung processes
- Build error parser for the target language's error format (SyntaxError, TypeError, ReferenceError patterns)
- Choose patching strategy (rule-based, LLM-assisted, or hybrid)
- Include previous attempt history in patch generation context to prevent repeated fixes
Safety
- Set max retry limit (recommended: 3 to 5)
- Add exit conditions: success (exit code 0), max retries reached, repeated identical error
- Validate LLM response is non-empty, non-truncated, and differs from original before writing to disk
- Implement diff check to detect oscillating or no-op patches
- Add cost controls for LLM-assisted patching (token budget, exponential backoff)
- Back up target files with versioned backups before overwriting with patches
- Validate OPENAI_API_KEY (or equivalent) before entering the loop
Observability
- Log every iteration's state (error, parsed context, patch applied) to enable diagnosis of agent failures
- Add .catch() to the top-level promise to handle unhandled rejections gracefully
Optional Extensions
- Add human-in-the-loop pause after N consecutive failures
- Integrate with CI/CD as an automated fix-and-retry step
- Test with intentionally buggy code before deploying on real projects
Summary and Extension Points
The recursive agent pattern reframes debugging from a human-driven interruption into an automated feedback loop that operates on source code directly. Its lineage in self-healing systems like Scrapling's adaptive selectors and deer-flow's reflective multi-agent architecture demonstrates that the approach has production precedent, not just theoretical appeal.
Everything built in this tutorial is extensible. Treating test failure output from Jest or Mocha as the stderr signal extends coverage to test suites. Using ESLint with its --fix flag as a patching strategy adds a deterministic, zero-cost correction layer before falling back to LLM-assisted patches. Multi-file project support requires expanding the analyzer to resolve error locations across file boundaries.
What we built here is a bounded, observable loop with explicit safety rails: retry ceilings, repeated-error detection, no-op patch rejection, and versioned backups. Those constraints matter more than the LLM at the center, because without them, the pattern degrades into an expensive random walk through your source code.