AGILab Documentation
Welcome to AGILab
AGILab lets teams go beyond notebooks by building runnable apps.
- Build from experiments to apps: turn notebook logic into packaged, runnable applications with standard inputs/outputs, controls, and visualizations.
- Unified app experience: a consistent UI layer makes apps easy to use, test, and maintain.
- App store + scale-out: apps are orchestrable on a cluster for scalability, enabling seamless distribution and repeatable runs.
- Cross-app reuse with apps-pages: share UI pages and development effort across apps to avoid duplication and speed iteration.
- Shared dataframes: exchange tabular data between apps to compose workflows without brittle file hand-offs.
- Experiment at speed: track, compare, and reproduce algorithm variants with MLflow built into the flow.
- Assisted by Generative AI: seamless integration with the OpenAI API (online), GPT-OSS (local), and Mistral-instruct (local) to assist iteration, debugging, and documentation.
You’ll find everything from quickstarts to API references, as well as example projects.
Audience profiles
- Managers run packaged demos via the IDE entry points or demo commands to quickly evaluate AGILab flows (read-only usage).
- End users clone the repository and customize existing apps (configs, workers, small UI tweaks) to fit their use case; there is no need to modify the core framework. The uvx launcher is for demos and quick checks only.
- Developers extend the framework: create new apps, add apps-pages (e.g., new views) and workers, and make deeper changes. Use PyCharm run configurations, or generate terminal wrappers with python3 tools/generate_runconfig_scripts.py.
Shell wrappers for developers
Developers who prefer a terminal can mirror PyCharm run configurations by regenerating shell wrappers with:
python3 tools/generate_runconfig_scripts.py
This emits executable scripts under tools/run_configs/<group>/ (agilab, apps, components); each script mirrors a PyCharm run configuration (working directory, environment variables, and uv invocation).
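For example, a terminal session might look like the following; the wrapper name below is illustrative, since the actual script names depend on your run configurations:

    # Regenerate the wrappers whenever run configurations change
    python3 tools/generate_runconfig_scripts.py

    # Inspect the generated scripts for a group and invoke one directly
    ls tools/run_configs/agilab/
    ./tools/run_configs/agilab/<wrapper-name>.sh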
Note
The “uvx -p 3.13 agilab” command is intended for demos or quick checks only; edits made inside the cached package are not persisted. For development work, clone the repo or use a dedicated virtual environment. For offline workflows, pick one of the bundled providers:
- Launch a GPT-OSS responses server with python -m gpt_oss.responses_api.serve --inference-backend stub --port 8000 and switch the Experiment sidebar to GPT-OSS (local).
- Install universal-offline-ai-chatbot (Mistral-based) and point the Experiment sidebar to your PDF corpus to enable the Mistral-instruct (local) provider.
When GPT-OSS is installed and the endpoint targets localhost, the sidebar auto-starts the stub server for you.
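As a quick smoke test you can query the stub server directly. The sketch below assumes the server exposes a standard Responses-style endpoint at /v1/responses on port 8000; adjust the path and payload to your server version:

    # Start the stub backend in one terminal
    python -m gpt_oss.responses_api.serve --inference-backend stub --port 8000

    # In another terminal, send a minimal request (endpoint and payload assumed)
    curl -s http://localhost:8000/v1/responses \
      -H "Content-Type: application/json" \
      -d '{"model": "gpt-oss", "input": "Hello from AGILab"}'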
Assistant providers
The Experiment page ships with three assistants:
OpenAI (online) — default cloud models via your API key.
GPT-OSS (local) — local responses API with stub, transformers, or custom backends.
Mistral-instruct (local) — local Mistral assistant powered by universal-offline-ai-chatbot; build the FAISS index from your PDFs.
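For the OpenAI (online) provider, a minimal setup sketch, assuming AGILab picks up the standard OPENAI_API_KEY environment variable used by OpenAI clients:

    # Export your API key before launching AGILab so the
    # OpenAI (online) assistant can authenticate (standard OpenAI convention)
    export OPENAI_API_KEY="sk-..."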
Roadmap
Keep an eye on the roadmap for recently shipped features and upcoming milestones. It highlights the IDE-neutral tooling, shell wrappers, dataset recovery automation, and planned documentation updates.
Getting Started
Core Topics
- Cluster
- Cluster Help
- AGILab
- Framework API
- Environment Variables
- FAQ
- Missing worker packages during AGI.run_*
- Why installers still build eggs
- Do we already have DAG/task orchestration?
- Who manages multithreading when Dask is disabled?
- Regenerating IDE run configurations
- “VIRTUAL_ENV … does not match the project environment” warning
- Why does a run create distribution.json?
- Switching the active app in Streamlit
- Docs drift after touching core APIs
- AGI.install_* fails looking for pyproject.toml
- Where are installer logs written?
- Project Files Structure
- Installation Overview
- Troubleshooting
- Known Bugs
- Licenses
Apps Examples
Hosting Sites