aidaemon

written in Rust · MIT License

A personal AI agent that runs as a daemon.
Always on. Always learning. Chat from Telegram,
extend with MCP, powered by any LLM.

~/aidaemon
$ cargo build --release
   Compiling aidaemon v0.1.0
    Finished release [optimized] target(s)

$ ./target/release/aidaemon
✓ Config loaded from config.toml
✓ SQLite state store initialized
✓ MCP tools discovered: 12 tools from 3 servers
✓ Telegram bot connected
◆ daemon active — listening on all channels
$
Works with any OpenAI-compatible provider
OpenAI
OpenRouter
Ollama
Google AI Studio
Any OpenAI-compatible API
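
Only the base URL changes between these providers; the chat-completions request itself stays the same. A minimal sketch of that idea, using reqwest and illustrative request handling rather than aidaemon's actual provider code:

use serde_json::json;

// Any OpenAI-compatible backend works because only the base URL differs;
// the /chat/completions request shape is identical everywhere.
async fn chat(base_url: &str, api_key: &str, model: &str, prompt: &str)
    -> Result<serde_json::Value, reqwest::Error>
{
    let body = json!({
        "model": model,
        "messages": [{ "role": "user", "content": prompt }],
    });
    reqwest::Client::new()
        .post(format!("{base_url}/chat/completions"))
        .bearer_auth(api_key) // local servers such as Ollama accept any key
        .json(&body)
        .send()
        .await?
        .json()
        .await
}

Point base_url at https://api.openai.com/v1, https://openrouter.ai/api/v1, or a local Ollama at http://localhost:11434/v1 and the same call keeps working.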
Features

Everything an AI agent should be

Built for extensibility, safety, and persistent intelligence. aidaemon is your always-on AI agent that remembers, learns, and acts.

💬
Telegram Interface
Chat with your agent from any device. Multi-user access control with allowed user IDs. Your AI, always in your pocket.
teloxide
🤖
Agentic Tool Use
The LLM autonomously calls tools in a loop to complete complex tasks. Up to 10 iterations per request for multi-step reasoning; a sketch of the loop appears after this feature list.
agent loop
🔌
MCP Integration
Extend capabilities dynamically via Model Context Protocol. Connect filesystem, databases, APIs — any MCP server works.
extensible
🧠
Persistent Memory
Two-layer memory: SQLite-backed conversation history plus a facts table for long-term knowledge. Your agent remembers everything.
sqlite
📧
Email Triggers
IMAP IDLE monitoring watches your inbox. New emails trigger automatic notifications and intelligent processing via Telegram.
imap idle
🔒
Command Approval
Interactive approval system for restricted terminal commands. Inline keyboard buttons in Telegram let you approve or deny in real-time.
safety
🛠
Self-Maintenance
The agent can read, update, validate, and restore its own configuration. Auto-backup before changes, auto-restore on errors.
self-healing
Multi-Model Router
Tier-based model selection — Fast, Primary, Smart. Route requests to the right model for the job. Switch models on the fly from Telegram.
router
🏠
Service Install
One command to install as a system service. Supports systemd on Linux and launchd on macOS. Starts on boot, runs forever.
daemon
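
To make the agent loop above concrete, here is a minimal sketch of its shape: call the LLM, run any requested tools, feed the results back, and stop after at most 10 iterations. All types below are simplified stand-ins, not aidaemon's real ones.

// Simplified stand-ins for illustration only.
struct ToolCall { id: String, name: String, args: String }
struct LlmReply { content: String, tool_calls: Vec<ToolCall> }

enum Message {
    User(String),
    ToolResult { call_id: String, output: String },
}

// Hypothetical hooks standing in for the LLM provider and the tool registry.
async fn call_llm(_messages: &[Message]) -> LlmReply { unimplemented!() }
async fn execute_tool(_call: &ToolCall) -> String { unimplemented!() }

const MAX_ITERATIONS: usize = 10;

async fn agent_loop(mut messages: Vec<Message>) -> String {
    for _ in 0..MAX_ITERATIONS {
        let reply = call_llm(&messages).await;
        if reply.tool_calls.is_empty() {
            return reply.content;                   // no tool calls: final answer
        }
        for call in reply.tool_calls {
            let output = execute_tool(&call).await; // terminal, MCP, memory, system info
            messages.push(Message::ToolResult { call_id: call.id, output });
        }
    }
    "Stopped after 10 iterations without a final answer.".to_string()
}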
Architecture

How it works

A clean async architecture built on Tokio, with trait-based abstractions for providers, tools, and state stores.

Telegram ───> Agent ───> LLM Provider
                ├──> Tools
                │      ├── terminal
                │      ├── system info
                │      ├── memory (facts)
                │      ├── config manager
                │      └── MCP tools (dynamic)
                ├──> State
                │      ├── SQLite persistence
                │      └── in-memory working mem
                └──> Facts
                       └── injected into system prompt

Triggers ───> EventBus ───> Agent
   └── IMAP IDLE

Health ───> GET /health (axum)
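
The trait-based abstractions are the seams that make providers, tools, and state stores swappable. A sketch of what such traits might look like; the names and signatures below are assumptions, not aidaemon's actual API (async fn in traits needs Rust 1.75+).

use anyhow::Result;

pub struct ChatMessage { pub role: String, pub content: String }

pub trait Provider {
    // Send the conversation to the configured LLM and return its reply.
    async fn chat(&self, messages: &[ChatMessage]) -> Result<ChatMessage>;
}

pub trait Tool {
    // Name advertised to the LLM ("terminal", "memory", or a discovered MCP tool).
    fn name(&self) -> &str;
    // Run the tool with JSON arguments and return its textual output.
    async fn call(&self, args: serde_json::Value) -> Result<String>;
}

pub trait StateStore {
    // Persist a message so future conversations keep their context.
    async fn append_message(&self, msg: &ChatMessage) -> Result<()>;
    // Load long-term facts for injection into the system prompt.
    async fn load_facts(&self) -> Result<Vec<String>>;
}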
01

Message arrives

A message from Telegram or an event from a trigger (email) enters the system via async channels; a minimal channel sketch follows these steps.

02

Agent loop begins

The Agent loads working memory, injects long-term facts into the system prompt, and sends everything to the LLM provider.

03

Tools are called

If the LLM requests tool calls, the Agent executes them — terminal commands (with approval), MCP tools, memory storage, or system queries.

04

Loop continues

Tool results feed back into the LLM for up to 10 iterations, allowing multi-step reasoning and complex task completion.

05

Response delivered

The final response is sent back to Telegram. All messages are persisted to SQLite for future context.
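
The async channels in step 01 can be pictured as a single queue that both the Telegram listener and the triggers feed into, while the agent drains it. An illustrative Tokio sketch; the event type and names are assumptions, not aidaemon's internals.

use tokio::sync::mpsc;

#[derive(Debug)]
enum AgentEvent {
    TelegramMessage { user_id: i64, text: String },
    EmailTrigger { subject: String },
}

#[tokio::main]
async fn main() {
    // One queue: every input gets a sender, the agent owns the receiver.
    let (tx, mut rx) = mpsc::channel::<AgentEvent>(64);

    let telegram_tx = tx.clone();
    tokio::spawn(async move {
        // Stand-in for the Telegram listener.
        let _ = telegram_tx
            .send(AgentEvent::TelegramMessage { user_id: 123456789, text: "status?".into() })
            .await;
    });
    tokio::spawn(async move {
        // Stand-in for the IMAP IDLE trigger.
        let _ = tx.send(AgentEvent::EmailTrigger { subject: "Server alert".into() }).await;
    });

    // The agent processes a single stream of events, whatever their source.
    while let Some(event) = rx.recv().await {
        println!("agent received: {event:?}");
    }
}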

Quickstart

Up and running in minutes

Clone, build, configure, run. The setup wizard handles the rest.

01
Clone & build
git clone https://github.com/davo20019/aidaemon.git
cd aidaemon && cargo build --release
02
Run the setup wizard
On first run, aidaemon launches an interactive wizard to configure your LLM provider, Telegram bot token, and allowed users.
03
Configure MCP servers (optional)
Add MCP server definitions to config.toml to extend your agent with filesystem access, databases, and more.
04
Install as service
./aidaemon install-service
Runs as systemd (Linux) or launchd (macOS). Starts on boot.
config.toml
[provider]
api_key = "sk-..."
base_url = "https://api.openai.com/v1"
model = "gpt-4o"
fast_model = "gpt-4o-mini"
smart_model = "o1-preview"

[telegram]
bot_token = "123:ABC..."
allowed_user_ids = [123456789]

[state]
db_path = "aidaemon.db"
working_memory_cap = 50

[daemon]
health_port = 8080
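
The three model fields under [provider] correspond to the router's Fast, Primary, and Smart tiers. A minimal sketch of that mapping; the selection policy itself is an assumption, not aidaemon's actual routing logic.

enum Tier { Fast, Primary, Smart }

struct ProviderConfig {
    model: String,       // Primary tier, e.g. "gpt-4o"
    fast_model: String,  // Fast tier,    e.g. "gpt-4o-mini"
    smart_model: String, // Smart tier,   e.g. "o1-preview"
}

// Resolve which configured model serves a given tier.
fn model_for(cfg: &ProviderConfig, tier: Tier) -> &str {
    match tier {
        Tier::Fast => &cfg.fast_model,
        Tier::Primary => &cfg.model,
        Tier::Smart => &cfg.smart_model,
    }
}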