# Pillbug

Async AI agent runtime. Like its namesake, it keeps to itself.

Pillbug runs one AI agent per container — sandboxed tools, channels you plug in, memory you choose. A tidy little runtime, not a platform.

## Why Pillbug
- **Isolation.** Each runtime has its own workspace, identity, and security boundary. No multi-tenant routing inside the process.
- **Sandboxed tools.** File reads, edits, search, command execution, scheduling, and URL fetches all live behind a local MCP server scoped to `WORKSPACE_ROOT`.
- **Pluggable channels.** CLI, Telegram, Matrix, Slack, WebSocket, A2A, HTTP trigger — each is a plugin package, registered through env config, not hardcoded into the loop.
- **Hardened defaults.** Runs as a non-root PID 1 in the container, with a bearer-protected control plane, reloadable security patterns, and structured JSON logs.
- **Model flexibility.** Gemini Developer API or Vertex AI natively; llama.cpp, vLLM, Ollama, and LiteLLM through the bundled OpenAI-compatibility proxy.
- **Tasks and skills.** Docket-backed background agent tasks plus workspace skill discovery from `skills/*/SKILL.md`, seeded per workspace on first run.
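The `skills/*/SKILL.md` discovery mentioned above can be sketched roughly as follows. This is a minimal illustration of the pattern, not Pillbug's actual API — the `Skill` dataclass and function name are assumptions:

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class Skill:
    name: str          # taken from the skills/<name>/ directory
    instructions: str  # raw contents of SKILL.md

def discover_skills(workspace_root: Path) -> list[Skill]:
    """Collect one skill per skills/<name>/SKILL.md under the workspace."""
    return [
        Skill(name=manifest.parent.name,
              instructions=manifest.read_text(encoding="utf-8"))
        for manifest in sorted(workspace_root.glob("skills/*/SKILL.md"))
    ]
```

Each subdirectory of `skills/` contributes one skill, keyed by its directory name, so dropping a new `SKILL.md` into the workspace is enough to surface it on the next scan.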
## Highlights
- Async runtime with debounced inbound message handling
- Native audio recognition and vision via multi-modal Gemini
- Local MCP server for file, search, command, and fetch tools
- Built-in session commands, summarization, and planning
- Optional packages: A2A, Telegram, Slack, Matrix, WebSocket, dashboard
- Per-workspace AGENTS.md instructions seeded on first run
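The debounced inbound handling in the first highlight follows a common asyncio pattern: messages arriving within a quiet window are batched into a single agent turn instead of triggering one turn each. A generic sketch of that pattern (not Pillbug's actual implementation; the class and parameter names are illustrative):

```python
import asyncio

class Debouncer:
    """Batch rapid-fire messages into a single handler call."""

    def __init__(self, handler, quiet_seconds: float = 0.5):
        self.handler = handler               # coroutine taking list[str]
        self.quiet = quiet_seconds
        self.pending: list[str] = []
        self._task: asyncio.Task | None = None

    def submit(self, message: str) -> None:
        self.pending.append(message)
        if self._task is not None:
            self._task.cancel()              # restart the quiet-window timer
        self._task = asyncio.get_running_loop().create_task(self._flush())

    async def _flush(self) -> None:
        await asyncio.sleep(self.quiet)      # cancelled if another message lands
        batch, self.pending = self.pending, []
        await self.handler(batch)
```

A burst of `submit()` calls within the quiet window yields one handler invocation with the whole batch, which keeps a chatty channel from spawning a turn per keystroke.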
## Quickstart
Requires Python 3.14+, uv, and a Gemini API key.

```shell
git clone https://github.com/m0nochr0me/pillbug.git
cd pillbug
uv sync --locked
export PB_GEMINI_API_KEY=your_api_key
./run.sh
```
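For local models, the bundled OpenAI-compatibility proxy is configured through env vars like the rest of the runtime. The variable names below are illustrative guesses, not Pillbug's real settings — check the project's configuration reference; only the Ollama default port and `/v1` path are real:

```shell
# Hypothetical variable names for illustration only.
export PB_MODEL_PROVIDER=openai_compat               # route through the bundled proxy
export PB_OPENAI_BASE_URL=http://localhost:11434/v1  # e.g. a local Ollama server
export PB_OPENAI_MODEL=llama3.1
```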