Agent Workflows
This page is for developers and contributors working inside the AGILAB repository. It is not the newcomer path.
If you are new to AGILAB, start with Quick-Start. Use this page only when you deliberately want a CLI coding-agent workflow against the repository itself.
What “repo-ready” means
The repository already ships the configuration, wrappers, and conventions needed to work with these agent paths against the same repo contract:
Codex
Claude
Aider
OpenCode
That does not mean the four tools behave identically. It means the repo now contains a prepared entry path for each of them instead of relying on ad hoc local setup.
Supported agent paths
Codex and Claude
Repo-managed skills live under .codex/skills and .claude/skills. AGENTS.md remains the source of truth for repo policy, validation, and launch rules.
Aider
Use the wrapper from the repository root:
./tools/aider_workflow.sh chat
For a one-off task:
./tools/aider_workflow.sh exec "Refactor only ... keeping behavior unchanged"
What the repo already provides:
.aider.conf.yml for repo-local defaults
tools/aider_workflow.sh for the standard entry path
tools/aider_workflow.md for usage details
Default local model path:
qwen-local -> ollama_chat/qwen2.5-coder:latest
Additional local aliases:
gpt-oss-local -> ollama_chat/gpt-oss:20b
qwen3-local -> ollama_chat/qwen3:30b-a3b-instruct-2507-q4_K_M
qwen3-coder-local -> ollama_chat/qwen3-coder:30b-a3b-q4_K_M
ministral-local -> ollama_chat/ministral-3:14b-instruct-2512-q4_K_M
phi4-mini-local -> ollama_chat/phi4-mini:3.8b-q4_K_M
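As an illustrative sketch only: the alias-to-model mapping above can be exercised directly with aider's --model flag. The task text and the assumption that the repo's .aider.conf.yml aliases resolve for a direct aider invocation are hypothetical, not repo guarantees.

```shell
# Sketch: pick a heavier local alias for one session instead of the default.
# Guarded so the snippet degrades gracefully when aider is not installed.
if command -v aider >/dev/null 2>&1; then
  # --message sends a single instruction and exits (hypothetical task text).
  aider --model gpt-oss-local --message "Summarize the build layout"
else
  echo "aider not installed; use ./tools/aider_workflow.sh instead"
fi
```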
OpenCode
Use the wrapper from the repository root:
./tools/opencode_workflow.sh chat
For a one-off task:
./tools/opencode_workflow.sh exec "Add a regression test for ..."
What the repo already provides:
opencode.json for project configuration
.opencode/agents/ for project-scoped agents
tools/opencode_workflow.sh for the standard entry path
tools/opencode_workflow.md for usage details
Default local model path:
ollama/qwen2.5-coder:latest
Useful efficient local overrides include ollama/gpt-oss:20b,
ollama/qwen3-coder:30b-a3b-q4_K_M,
ollama/qwen3:30b-a3b-instruct-2507-q4_K_M,
ollama/ministral-3:14b-instruct-2512-q4_K_M, and
ollama/phi4-mini:3.8b-q4_K_M.
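A sketch of selecting one of these overrides for a single run. The task text is hypothetical, and the assumption that opencode's run subcommand accepts a provider/model value via --model should be checked against the OpenCode CLI version in use.

```shell
# Sketch: run one task against an override model instead of the default.
# Guarded so the snippet degrades gracefully when opencode is not installed.
if command -v opencode >/dev/null 2>&1; then
  opencode run --model ollama/gpt-oss:20b "Add a regression test for the config loader"
else
  echo "opencode not installed; see tools/opencode_workflow.md"
fi
```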
Local model prerequisite
Aider and OpenCode in this repo are prepared for local Ollama-backed models. In practice this means:
keep a local Ollama server running
use the repo defaults or override them with the documented environment variables
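The first point can be checked before launching any agent. A minimal preflight sketch against Ollama's default local port (11434):

```shell
# Probe the local Ollama server's tags endpoint; -sf makes curl silent
# and treats HTTP errors as failures.
if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "ollama is up"
else
  echo "no ollama server detected; start one with: ollama serve"
fi
```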
The prepared local families are the same ones already documented elsewhere in
AGILAB: gpt-oss, qwen, deepseek, qwen3, qwen3-coder,
ministral, and phi4-mini. If a model is served through vLLM or another
OpenAI-compatible gateway instead of Ollama, configure the AGILAB assistant
with AGILAB_LLM_BASE_URL and AGILAB_LLM_MODEL.
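For the gateway case, a sketch of setting the two variables named above. The endpoint URL and model id here are placeholders, not repo defaults; substitute whatever your vLLM or OpenAI-compatible gateway actually serves.

```shell
# Hypothetical values; replace with your gateway's real endpoint and model id.
export AGILAB_LLM_BASE_URL="http://localhost:8000/v1"    # e.g. a local vLLM server
export AGILAB_LLM_MODEL="qwen3-coder:30b-a3b-q4_K_M"     # model id as the gateway exposes it
echo "assistant will use $AGILAB_LLM_MODEL via $AGILAB_LLM_BASE_URL"
```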
Where to read the repo-local files
The public docs page gives the high-level entry points. The operational details stay in the repository itself, in the configuration files and tools/*.md usage notes listed in the sections above.
When not to use this page
If you are doing your first real AGILAB run, use Quick-Start.
If you want the notebook-first runtime path, use agi-core Demo.
If you want a public demo route instead of repo work, use Demos.