09-ai-integration.md
## 1. Goal

QPlan targets the "AI writes, engine executes" workflow. This guide explains what an LLM integration needs, how to use `buildAIPlanPrompt`/`buildQplanSuperPrompt`, and how to prepare module metadata.
## 2. Minimum data to provide

```js
const modules = registry.list();
```

`registry.list()` returns `id`, `description`, `usage`, `inputs`, `inputType`, and `outputType`, forming the core module guide for the LLM. Richer metadata leads to more accurate QPlan code.
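As an illustration, a single entry in the `registry.list()` output might look like the object below. The field names come from this guide; the module id and values are hypothetical examples, not part of QPlan's built-in registry.

```js
// Hypothetical metadata record, shaped like an entry from registry.list().
// Concrete values are illustrative only.
const readFileModule = {
  id: "file.read",
  description: "Reads a UTF-8 text file from the given path.",
  usage: 'file.read path="./data.txt"', // a real QPlan snippet the LLM can copy
  inputs: ["path"],
  inputType: "string",
  outputType: "string",
};

// A rich `description` plus a concrete `usage` line is what the prompt
// builders embed for the LLM.
const summary = `${readFileModule.id}: ${readFileModule.description}`;
console.log(summary);
```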
## 3. buildAIPlanPrompt() workflow

```js
import { buildAIPlanPrompt, runQplan, registry, setUserLanguage } from "qplan";

registry.register(customModule);
setUserLanguage("en"); // pass any language string, e.g., "ja"

const prompt = buildAIPlanPrompt("Read a file and compute the average", { registry });
const aiScript = await callLLM(prompt);

const ctx = await runQplan(aiScript, {
  registry,
  env: { tenant: "acme" },
  metadata: { requestId: "req-42" },
  params: { keyword: "foo" },
});
console.log(ctx.toJSON());
```
`buildAIPlanPrompt(requirement, { registry, language })` embeds:

- QPlan overview and key rules (e.g., actions only inside steps).
- AI-friendly grammar summary from `buildAIGrammarSummary()`.
- Module metadata from `registry.list()` (including `usage`, `inputType`, and `outputType`).
- Execution rules/output format covering `onError`, jumps, dot paths, params, etc.

With this prompt, the LLM outputs step-based QPlan scripts only.

If external inputs are required, instruct the LLM to declare them via `@params` (single line, comma-separated) so validation passes.
## 4. buildQplanSuperPrompt()

Use `buildQplanSuperPrompt(registry)` for long-lived system prompts. It packs the QPlan philosophy, engine structure, grammar summary, and module lists into a single "super prompt." It is longer than `buildAIPlanPrompt`, but ideal for multi-turn or agent setups.
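In a multi-turn setup, the super prompt typically becomes the system message while each user request arrives as a separate turn. A minimal sketch, assuming the common chat-completion message shape; `makeChatMessages` is a hypothetical helper, not part of QPlan:

```js
// Sketch: pin the super prompt as a long-lived system message and append
// user turns. The { role, content } shape follows the common chat API
// convention; swap in your provider's client.
function makeChatMessages(superPrompt, userRequest) {
  return [
    { role: "system", content: superPrompt }, // built once via buildQplanSuperPrompt(registry)
    { role: "user", content: userRequest },   // per-turn requirement
  ];
}
```

Because the system message stays fixed across turns, the registry's module list is paid for once per conversation rather than once per request.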
## 5. Prompt design tips

- Clarify module `description`/`usage`: the AI reads them verbatim, so show real QPlan examples.
- Register only needed modules: keeping the registry lean shortens prompts and prevents misuse.
- Template requirements: clean up user requests before passing them as `requirement` for better context.
- Language: call `setUserLanguage("<language>")` (any string) or pass `{ language: "<lang>" }` when calling `buildAIPlanPrompt()` so AI-facing strings use the desired language.
- Reinforce output format: `buildAIPlanPrompt` already says "output QPlan only," but repeating the rule in system/user prompts adds safety.
## 6. Validate before running

Always inspect AI-generated scripts before execution.

```js
import { validateQplanScript } from "qplan";

const result = validateQplanScript(aiScript);
if (!result.ok) {
  console.error("invalid script", result.error, result.line);
  return;
}
await runQplan(aiScript);
```

- Catch grammar/step/jump issues with `validateQplanScript` before execution.
- CI pipelines can run `npm run validate -- script.qplan` for automated checks.
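The validate-then-run pattern can be wrapped into a reusable gate. A sketch, assuming a `validateQplanScript`-style result shape (`{ ok, error, line }`); `validate` and `run` are injected placeholders so the gate itself stays testable:

```js
// Minimal pre-execution gate (sketch). Pass validateQplanScript as
// `validate` and runQplan as `run`; both parameters here are stand-ins.
async function runIfValid(script, validate, run) {
  const result = validate(script);
  if (!result.ok) {
    // Refuse to execute: log the fields the validator exposes.
    console.error("invalid script", result.error, result.line);
    return null;
  }
  return run(script);
}
```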
## 7. Step events for monitoring

`runQplan(script, { stepEvents })` (or `qplan.run({ ... })` if you wrapped the script with `new QPlan(script)`) lets you subscribe to plan start/end plus step start/end/error/retry/jump events. Each callback receives a `StepEventRunContext`, so you can correlate user/session data without extra closures. Use them to visualize LLM-generated plans or plan re-runs.
```js
await runQplan(aiScript, {
  env: { userId: "user-88" },
  stepEvents: {
    onPlanStart(plan, context) {
      log(`plan ${plan.runId} with ${plan.totalSteps} steps`, context?.env);
    },
    onStepStart(info, context) { log(`start ${info.stepId}`, info.path, context?.metadata); },
    onStepError(info, err) { alert(`error ${info.stepId}: ${err.message}`); },
    onPlanEnd(plan) { log(`plan done ${plan.runId}`); }
  }
});
```
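For UI progress views it is often handier to collect events into a timeline instead of logging them. A sketch, using the event and field names from this section (`runId`, `stepId`, etc.); the collector itself is a hypothetical helper:

```js
// Collect step events into an in-memory timeline for later rendering.
// The handler names/arguments mirror the stepEvents shape shown above.
function makeTimeline() {
  const events = [];
  return {
    events, // ordered list of { type, ... } records
    handlers: {
      onPlanStart(plan) { events.push({ type: "plan:start", runId: plan.runId }); },
      onStepStart(info) { events.push({ type: "step:start", stepId: info.stepId }); },
      onStepError(info, err) {
        events.push({ type: "step:error", stepId: info.stepId, message: err.message });
      },
      onPlanEnd(plan) { events.push({ type: "plan:end", runId: plan.runId }); },
    },
  };
}
```

You would then pass `timeline.handlers` as the `stepEvents` option and render `timeline.events` in your UI or persist it for audits.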
## 8. Recommended strategy

- Register only core + necessary modules, and expose `registry.list()` to the LLM.
- Use `buildAIPlanPrompt(requirement)` to structure the user request.
- Validate/execute the AI output via `validateQplanScript` and `runQplan` (or the `QPlan` wrapper when you need long-lived step metadata).
- Surface step events and `ctx` results in the UI/backend to show progress and success.

Following this flow delivers the "AI thinks, QPlan executes" pattern quickly.
## 9. Validation-aware retry loop

LLM plans occasionally violate grammar or reference missing variables. Keep the agent aligned by inserting a validator gate:

- Generate → Call your model with `buildAIPlanPrompt`.
- Validate → Run `const result = validateQplanScript(script)`. If `result.ok === true`, execute with `runQplan`. If `result.ok === false`, read `result.error` and `result.issues`. Each issue now includes a `line` and a `hint` explaining what to fix ("Create 'total' before using it", "jump target 'cleanup' not found", etc.).
- Retry prompt → Feed the hint back to the LLM as feedback ("Previous plan invalid: jump target 'cleanup' not found. Please add that step or change the jump.") and ask for a corrected script.
- Limit & log → Impose a retry cap, store failing scripts plus hints for debugging/audit, and surface them to users if manual intervention is needed.

This loop lets the LLM quickly converge on a valid plan without risking runtime errors.
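The four steps above can be sketched as a generic loop. `generate` and `validate` are injected placeholders, so you can plug in your LLM call (built with `buildAIPlanPrompt`) and `validateQplanScript` respectively; `generateValidPlan` itself is an illustrative helper, not a QPlan API:

```js
// Generate → validate → retry loop (sketch). `generate(feedback)` calls
// the model, optionally with prior validation feedback; `validate(script)`
// returns a { ok, error } result.
async function generateValidPlan(generate, validate, maxRetries = 3) {
  let feedback = "";
  for (let attempt = 1; attempt <= maxRetries + 1; attempt++) {
    const script = await generate(feedback);
    const result = validate(script);
    if (result.ok) return { script, attempts: attempt };
    // Feed the validator's message back so the model can self-correct.
    feedback = `Previous plan invalid: ${result.error}. Please return a corrected script.`;
  }
  // Retry cap hit: surface for manual intervention instead of looping forever.
  throw new Error(`no valid plan after ${maxRetries + 1} attempts`);
}
```

In practice you would also log each failing script together with its feedback string, per the "Limit & log" step.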