Documentation
Everything you need to know about Aura — your local-first, AI-native desktop partner.
What is Aura?
Aura is a desktop application that puts AI at the center of your development workflow. Instead of switching between chat windows, terminals, and editors, Aura brings everything together in one place.
Aura is not a wrapper around a single AI model. She is an intelligent orchestrator that can:
- Route tasks to the right model — simple questions go to fast local models, complex reasoning goes to powerful cloud models, and coding tasks get dispatched to CLI agents that work in isolated git worktrees.
- Dispatch and observe AI agents — watch Claude Code, Codex, Gemini CLI, and other coding agents work in real-time terminal panes. Multiple agents can run in parallel on different tasks.
- Remember everything — semantic memory persists across sessions. Aura learns your codebase, your preferences, and your project context over time.
- Work entirely offline — bundled voice recognition, text-to-speech, and local model support mean Aura works without an internet connection.
- Connect to anything via MCP — dashboard widgets, workflow engines, monitoring tools, and infrastructure management all plug in through the Model Context Protocol.
Core Principles
Local-first, cloud-optional
Everything runs on your machine. Cloud sync is available but never required.
AI-native
Persistent identity, semantic memory, intelligent model routing — not just a chat wrapper.
Observable AI
When Aura dispatches agents, you watch them work live in terminal panes. No black boxes.
MCP-everything
Every integration is an MCP server. Community-extensible, ecosystem-aligned.
Federated collaboration
Each person runs their own Aura. Auras communicate directly for team workflows.
Privacy by default
Bundled voice engines, local models, no telemetry unless opted in.
Installation
Download
Download Aura for macOS from the releases page.
- macOS (Apple Silicon & Intel) — .dmg installer
- Windows — Coming soon
- Linux — Coming soon
System Requirements
- macOS 12.0 or later (Apple Silicon or Intel)
- Git must be installed (git --version to verify)
- GitHub CLI with active authentication (gh auth status) — required for GitHub integration
Quick Start
Getting started with Aura takes about two minutes:
1. Launch Aura
Open the app after installation. You'll see the Orb — Aura's visual presence — glowing gently on the dashboard.
2. Add a Project
Click Projects on the dashboard or use the sidebar to add a repository:
- From local folder — Select an existing git repository on your machine
- From Git URL — Clone a repository by URL
3. Create a Workspace
Select a branch and Aura creates an isolated workspace for it. Each workspace gets its own terminal sessions, branch-specific context, and independent agent dispatch scope.
4. Start Talking to Aura
Click the Orb to expand the chat panel. Ask Aura anything:
- Questions — Aura responds directly using the best available model
- Code tasks — Aura dispatches a CLI agent and you watch it work live
- Voice — Hold the Orb or press the hotkey to speak naturally
The Orb
The Orb is Aura's visual presence. It's always on screen, providing ambient awareness of what Aura is doing.
Display Modes
| Mode | Description |
|---|---|
| Ambient | Small glowing orb in the dashboard corner. Soft breathing animation. Click to expand. |
| Expanded | Chat panel slides open alongside the dashboard. The Orb sits atop the conversation. |
| Fullscreen | Chat takes over the entire window. The Orb is large and centered above the conversation. |
Orb States
| State | What You See |
|---|---|
| Idle | Gentle slow breathing glow |
| Listening | Brighter pulse with mic indicator |
| Thinking | Faster pulse, swirling animation |
| Agents active | Orbiting particles — one ring per running agent |
| Attention needed | Warm amber color shift with notification badge |
The Orb color follows your app's color theme. Default is purple, fully customizable in settings.
Dashboard
The dashboard is your mission control — a grid of widgets that show what matters to you at a glance.
Built-in Widgets
| Widget | What It Shows |
|---|---|
| Projects & Workspaces | Your repositories with quick-access to branches and worktrees |
| Tasks | Active, completed, and failed tasks from agent dispatches |
| System Resources | CPU, memory, and disk usage |
| System Monitor | Real-time system performance graphs |
| Processes | Running processes on your machine |
| Services | Health status of connected services |
| Log Stream | Live log output from agents and services |
| Software Inventory | Installed development tools and their versions |
Community widgets can be added by connecting MCP servers — each server can declare widgets that appear in the widget picker automatically.
Chat & Conversation
Aura's chat is more than a chatbot interface. It's the primary way you interact with your AI partner.
How it works
- Click the Orb or press the keyboard shortcut to open the chat panel
- Type or speak your request
- Aura classifies the task — is it a question, a code task, or something else?
- For questions: Aura responds directly in chat
- For code tasks: Aura dispatches a CLI agent into a terminal pane and tells you what she's doing and why
Key behavior: Aura never writes code in chat. Code work always goes to a dispatched CLI agent working in an isolated worktree. You watch the agent work live.
Conversation history persists across sessions in your local SQLite database. Chat supports Markdown rendering with syntax highlighting, file references, agent dispatch status updates, and inline task creation.
AI Agent Dispatch
This is Aura's signature feature. When you ask for code work, Aura dispatches a CLI coding agent into an isolated environment and you watch it work in real-time.
Supported CLI Agents
| Agent | Binary | How Aura Runs It |
|---|---|---|
| Claude Code | claude | claude --print --dangerously-skip-permissions <task> |
| Gemini CLI | gemini | gemini --prompt <task> --yolo |
| Codex | codex | codex --full-auto <task> |
| Pi | pi | pi --print <task> |
| OpenCode | opencode | opencode run <task> |
| Copilot | copilot | copilot <task> |
| Cursor | cursor | cursor --goto <task> |
Agents are auto-detected from your PATH. If a CLI tool is installed and authenticated, Aura can use it.
The Dispatch Flow
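At a high level, a dispatch turns your request into an isolated branch, worktree, and CLI invocation. The sketch below outlines that flow; the function, type, and path names are illustrative, not Aura's actual internals:

```typescript
// Illustrative sketch of the dispatch flow — names and paths are
// hypothetical, not Aura's actual API.
type Dispatch = {
  branch: string;   // dedicated branch for the agent's work
  worktree: string; // isolated git worktree path
  command: string[]; // non-interactive CLI agent invocation
};

function planDispatch(task: string, repo: string, agentBinary: string): Dispatch {
  // 1. Derive a branch name from the task description.
  const slug = task
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .slice(0, 40)
    .replace(/^-|-$/g, "");
  const branch = `aura/${slug}`;
  // 2. Each dispatch gets its own worktree directory, so agents never
  //    touch your working copy or each other.
  const worktree = `${repo}/.aura-worktrees/${slug}`;
  // 3. The agent CLI runs non-interactively inside that worktree
  //    (flags vary per agent — see the table above).
  return { branch, worktree, command: [agentBinary, "--print", task] };
}
```

The output feeds the later steps: the worktree is created, the command is spawned in a visible terminal pane, and the branch holds the agent's commits for review.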
Smart Agent Selection
Aura uses a recruiter system to pick the best agent for each task:
- Infers required capabilities from your task description (e.g., "testing", "refactoring", "frontend")
- Scores each installed agent based on capability match (50%), reliability (30%), and keyword relevance (20%)
- Picks the top scorer — or creates a custom agent on the fly if nothing scores high enough
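The weighted score can be sketched as a small pure function. Only the 50/30/20 weights come from the description above; the profile shape and match heuristics are illustrative:

```typescript
// Hypothetical sketch of the recruiter's weighted scoring. The weights
// (50% capability, 30% reliability, 20% keywords) are from the docs;
// everything else is illustrative.
interface AgentProfile {
  capabilities: string[]; // e.g. ["testing", "refactoring"]
  reliability: number;    // 0..1, historical success rate
  keywords: string[];     // e.g. ["vitest", "react"]
}

function scoreAgent(agent: AgentProfile, required: string[], taskWords: string[]): number {
  const capMatch =
    required.length === 0
      ? 1
      : required.filter((c) => agent.capabilities.includes(c)).length / required.length;
  const kwMatch =
    taskWords.length === 0
      ? 0
      : taskWords.filter((w) => agent.keywords.includes(w)).length / taskWords.length;
  // Capability match 50%, reliability 30%, keyword relevance 20%.
  return 0.5 * capMatch + 0.3 * agent.reliability + 0.2 * kwMatch;
}
```

Ranking installed agents by this score and applying a minimum threshold yields the "top scorer or create a custom agent" behavior described above.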
Multi-Agent Orchestration
Aura can dispatch multiple agents in parallel, each on its own worktree and branch. Each agent gets its own terminal pane, and the Orb shows orbiting particles — one per active agent.
Dynamic Agent Creation
When no existing agent fits your task, Aura builds one on the fly — analyzing requirements, generating a tailored system prompt, finding the best base CLI binary, and saving the manifest for reuse. Over time, Aura builds a library of specialized agents tuned to your work.
Workspaces & Git Worktrees
Workspaces give each branch its own isolated environment with dedicated terminal sessions and context.
| Type | How It Works |
|---|---|
| Branch Workspace | Uses the main repo directory. One per project. Switching branches is a checkout. |
| Worktree Workspace | Creates an isolated git worktree directory. Multiple can exist per project simultaneously. |
Branch workspaces are ideal for quick branch switching. Worktree workspaces are what Aura creates when dispatching agents — each agent gets its own isolated copy of the repo on a dedicated branch.
Workspaces are auto-created on project open, support safe branch switching with uncommitted change warnings, and each has independent terminal sessions.
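In plain git terms, a worktree workspace amounts to `git worktree add`. A minimal sketch of what that looks like from code (not Aura's actual implementation; branch names and paths are illustrative):

```typescript
import { execFileSync } from "node:child_process";

// Creates `dir` as a separate checkout of `repo` on a new branch. The
// worktree shares the repo's object store but has its own working files,
// so an agent there can't disturb your main checkout.
function createWorktree(repo: string, branch: string, dir: string): void {
  execFileSync("git", ["worktree", "add", "-b", branch, dir], { cwd: repo });
}

// Cleanup once the agent's branch has been merged or abandoned.
function removeWorktree(repo: string, dir: string): void {
  execFileSync("git", ["worktree", "remove", dir], { cwd: repo });
}
```

Because each worktree is a full checkout on its own branch, several agents can commit concurrently without lock contention or stepping on each other's files.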
Terminal
Aura includes a full-featured terminal built on xterm.js and node-pty.
- Multiple tabs — Open as many terminal sessions as you need per workspace
- Session persistence — Terminal state survives app restarts
- Agent terminal panes — Dispatched agents run in visible terminal panes with live output
- Split views — View multiple terminal sessions side by side
- Workspace-scoped — Each workspace gets its own set of terminal tabs
The terminal is where you observe agents working. When Aura dispatches an agent, a new terminal pane opens showing the agent's real-time CLI output — every command, every file edit, every test run.
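Conceptually, an agent pane is just a child process whose output is streamed line-by-line into the UI. A simplified stand-in using child_process (Aura itself uses node-pty so agents see a real TTY; this pipe-based version is only for illustration):

```typescript
import { spawn } from "node:child_process";

// Run a CLI command and stream its stdout line-by-line, the way an agent
// pane streams a dispatched agent's activity. Resolves with the exit code.
function observe(
  cmd: string,
  args: string[],
  onLine: (line: string) => void,
): Promise<number> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args);
    let buf = "";
    child.stdout.on("data", (chunk: Buffer) => {
      buf += chunk.toString();
      let i: number;
      // Emit each complete line as it arrives.
      while ((i = buf.indexOf("\n")) >= 0) {
        onLine(buf.slice(0, i));
        buf = buf.slice(i + 1);
      }
    });
    child.on("error", reject);
    child.on("close", (code) => resolve(code ?? -1));
  });
}
```

The exit code feeds back into dispatch tracking — a non-zero exit is what triggers the auto-escalation described under Model Routing.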
Model Routing
Aura intelligently routes tasks to the best available model based on complexity and type.
Aura Mode (Default)
Aura automatically selects the best model for each task:
| Task Type | Typical Route | Why |
|---|---|---|
| Quick questions | Local or fast cloud model (Haiku, Flash) | Fast, cheap, private |
| Complex reasoning | Powerful cloud model (Sonnet, GPT-4o) | Best quality |
| Code implementation | CLI agent with appropriate model | Observable, isolated |
| Voice interaction | Local models (whisper.cpp, Piper) | Zero latency, offline |
| Embeddings, memory | Local embedding model | Private, fast |
Manual Mode
Lock in your model choices globally by picking one model for each tier (Simple, Complex, Escalation). Selections apply across all projects.
Auto-Escalation
If a dispatched agent fails (non-zero exit, repeated errors), Aura can automatically kill it and re-dispatch at a higher tier, notifying you of the escalation.
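The escalation rule reduces to stepping up through the tiers named under Manual Mode. A sketch, with an illustrative retry limit (the actual limit and policy are configuration details not stated here):

```typescript
// Sketch of the auto-escalation rule. Tier names (Simple, Complex,
// Escalation) come from Manual Mode above; maxAttempts is illustrative.
const TIERS = ["simple", "complex", "escalation"] as const;
type Tier = (typeof TIERS)[number];

// Returns the tier to re-dispatch at, or null if no escalation happens.
function nextTier(current: Tier, exitCode: number, attempts: number, maxAttempts = 2): Tier | null {
  if (exitCode === 0) return null;          // success — nothing to escalate
  if (attempts >= maxAttempts) return null; // give up and surface to the user
  const i = TIERS.indexOf(current);
  return i < TIERS.length - 1 ? TIERS[i + 1] : null; // already at the top tier
}
```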
Voice
Aura ships with fully local voice capabilities — no cloud APIs required.
| Engine | Purpose | Runtime & Size |
|---|---|---|
| whisper.cpp | Speech-to-text | WASM or native binary (~150MB) |
| Piper | Text-to-speech | ONNX runtime (~50MB per voice) |
| Silero VAD | Voice activity detection | ONNX runtime (~2MB) |
All processing happens on your machine. No voice data ever leaves the device.
Interaction Modes
- Push-to-talk (default) — Click and hold the Orb, or press a hotkey to speak
- Wake word (opt-in) — Say "Hey Aura" to activate, then speak naturally
Voice Pipeline
The Orb animates through each stage — listening, thinking, speaking — so you always know what Aura is doing.
MCP Integrations
Every integration in Aura connects through the Model Context Protocol (MCP). Any MCP-compatible server can plug into Aura and provide tools, data, and dashboard widgets.
What MCP Servers Provide
- Tools — Actions Aura can perform ("trigger workflow", "restart container", "create ticket")
- Resources — Data Aura can read ("current alerts", "container status", "sensor readings")
- Widgets — Dashboard components that auto-appear when connected
Connecting an MCP Server
- Go to Settings → Integrations
- Add a new MCP server connection (URL or local command)
- Aura discovers capabilities automatically
- Widgets appear in the dashboard picker, tools become available in conversations
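Under the hood, capability discovery is JSON-RPC 2.0: after the handshake, the client asks the server what it exposes. A sketch of the request shapes (tools/list and resources/list are standard MCP methods; the id counter is illustrative):

```typescript
// Build the JSON-RPC 2.0 requests an MCP client sends to discover a
// server's capabilities. tools/list and resources/list are real MCP
// methods; the incrementing id is just this sketch's bookkeeping.
let nextId = 0;

function mcpRequest(method: "tools/list" | "resources/list", params: object = {}) {
  return { jsonrpc: "2.0" as const, id: ++nextId, method, params };
}
```

The server's responses to these requests are what populate Aura's tool palette and widget picker automatically.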
Example Integrations
| Integration | What It Does |
|---|---|
| n8n | Trigger workflows, view execution history, automate tasks |
| Proxmox | Container and VM status, start/stop/restart |
| Zabbix | Active alerts, host status, monitoring dashboards |
| Home Assistant | Smart home device control, sensor readings, automations |
Federation (Aura-to-Aura)
Each Aura instance is both an MCP server and an MCP client. Team collaboration happens through direct Aura-to-Aura connections — no central server required.
Team Capabilities
| Feature | Description |
|---|---|
| Shared task board | Tasks sync between connected Auras. Assign work across instances. |
| Agent delegation | "Ask Sarah's Aura to run the security audit." Handoff sent via MCP, results flow back. |
| Shared worktrees | Both Auras work on the same repo, coordinating via git. |
| Status awareness | See teammate's Aura status: online/offline, current work, active agents. |
Permission Tiers
| Tier | What They Can Do |
|---|---|
| Observe | See online/offline status |
| Request | Ask for tasks, request information |
| Delegate | Assign work, send handoffs with full context |
| Admin | Full control and configuration access |
Each connection is individually permissioned with a full audit trail of all shared data.
Setting Up Model Providers
Cloud Providers
- Go to Settings → Models
- Under Cloud Providers, click the provider you want to add
- Anthropic — Sign in with OAuth (recommended) or paste an API key
- OpenAI — Paste your API key
- Google — Sign in with OAuth or paste an API key
- Aura auto-discovers available models on sign-in
Local Providers
- Under Your Endpoints, click Add Endpoint
- Enter a name and the base URL (e.g., http://localhost:11434 for Ollama)
- Click Discover Models — Aura calls /v1/models and lists what's available
- Enable/disable individual models and assign them to tiers
Supported local model servers: Ollama, LM Studio, mlx-lm (Apple Silicon), or any OpenAI-compatible API endpoint.
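Discover Models relies on the standard OpenAI-compatible listing endpoint, which all of these servers implement. A sketch of that call (error handling trimmed; this is how the protocol works, not Aura's actual code):

```typescript
// GET {base}/v1/models returns { object: "list", data: [{ id: "..." }, ...] }
// on any OpenAI-compatible server (Ollama, LM Studio, vLLM, ...).
interface ModelList {
  object: string;
  data: { id: string }[];
}

async function discoverModels(baseUrl: string): Promise<string[]> {
  const res = await fetch(`${baseUrl.replace(/\/$/, "")}/v1/models`);
  if (!res.ok) throw new Error(`discovery failed: HTTP ${res.status}`);
  const body = (await res.json()) as ModelList;
  return body.data.map((m) => m.id);
}
```

Example: `discoverModels("http://localhost:11434")` against a running Ollama instance lists every pulled model by id.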
Configuring CLI Agents
CLI agents are auto-detected from your PATH. Install and authenticate them, and Aura handles the rest.
Claude Code
# Install
npm install -g @anthropic-ai/claude-code
# Authenticate
claude /login
Gemini CLI
# Install
npm install -g @google/gemini-cli
# Authenticate (Google OAuth on first launch)
gemini
Codex
# Install
npm install -g @openai/codex
# Set API key
export OPENAI_API_KEY=your-key
Go to Settings → CLI Agents to see all detected agents with their status (installed, authenticated, version).
Creating Saved Agents
Saved agents are persistent, named agent configurations tuned for specific projects or tasks.
Create via Chat
You: "Create an agent for our frontend work"
Aura: "I'll set up a frontend specialist. I see React, TanStack Router, shadcn/ui, and TailwindCSS. I'll scope it to apps/desktop/src/renderer/ with theme system context. Want to lock it to a specific model or let me pick?"
Create via Settings
- Go to Settings → CLI Agents → Saved Agents
- Click Create Agent
- Configure: name, description, preferred CLI, model, project path, context files, system prompt, and constraints
Saved agents match tasks by project path and keyword matching. When you ask Aura to work on a project with a matching saved agent, she uses it automatically.
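That matching step can be sketched as: filter by project path, then rank by keyword hits. The field names and tie-breaking here are illustrative, not Aura's actual matcher:

```typescript
// Hypothetical sketch of saved-agent matching — filter candidates by
// project path, then pick the one whose keywords best match the task.
interface SavedAgent {
  name: string;
  projectPath: string; // agent applies to work under this path
  keywords: string[];
}

function matchSavedAgent(agents: SavedAgent[], projectPath: string, task: string): SavedAgent | null {
  const words = task.toLowerCase().split(/\s+/);
  const candidates = agents.filter((a) => projectPath.startsWith(a.projectPath));
  let best: SavedAgent | null = null;
  let bestHits = -1;
  for (const a of candidates) {
    const hits = a.keywords.filter((k) => words.includes(k.toLowerCase())).length;
    if (hits > bestHits) {
      best = a;
      bestHits = hits;
    }
  }
  return best;
}
```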
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| ⌘ + L | Toggle chat panel |
| ⌘ + B | Toggle sidebar |
| ⌘ + T | New terminal tab |
| ⌘ + W | Close current tab |
| ⌘ + , | Open settings |
Customize shortcuts in Settings → Keyboard Shortcuts.
Custom Themes
Aura supports full theme customization including the Orb color, UI colors, and font choices.
- Go to Settings → Appearance
- Select a base theme to start from
- Customize colors, fonts, and the Orb's glow color
- Save with a name for easy switching
Theme files can be exported and shared with other Aura users.
Architecture
Aura is built as a monorepo with clear separation between the desktop shell (Electron) and the core AI logic.
Technology Stack
| Layer | Technology |
|---|---|
| Runtime | Bun |
| Build | Turborepo |
| Desktop | Electron + electron-vite |
| Frontend | React + TailwindCSS v4 + shadcn/ui |
| Routing | TanStack Router (file-based) |
| State | Zustand (UI) + React Query (server state) |
| IPC | tRPC over Electron bridge |
| Terminal | node-pty + xterm.js |
| Local DB | Drizzle ORM + SQLite |
| Vector Store | SQLite vec extension |
| Voice STT | whisper.cpp |
| Voice TTS | Piper |
| Voice VAD | Silero (ONNX) |
| Integrations | MCP (Model Context Protocol) |
Data Storage
Everything lives in a single ~/.aura/ directory.
Fully portable — copy ~/.aura/ to a new machine and everything comes with you.
Optional Cloud Sync
When enabled, Aura syncs to a Supabase instance (self-hosted or cloud):
| What Syncs | Direction |
|---|---|
| Tasks | Bidirectional |
| Agent manifests | Bidirectional |
| Projects | Bidirectional |
| Conversations | Push (opt-in per conversation) |
| Memory/embeddings | Push (opt-in) |
| Settings | Never synced (local only) |
You provide your own Supabase URL and key. Aura never phones home.
Building from Source
Prerequisites
- Bun (used for all install, dev, and build commands below)
- Git
Steps
# Clone the repository
git clone https://github.com/pavetech/aura.git
cd aura
# Install dependencies
bun install
# Start development mode
SKIP_ENV_VALIDATION=1 bun run dev
# Build for production
bun run build
Common Commands
| Command | What It Does |
|---|---|
| bun dev | Start all dev servers |
| bun test | Run tests |
| bun build | Build all packages |
| bun run lint | Check for lint issues |
| bun run lint:fix | Auto-fix lint issues |
| bun run format | Format code |
| bun run typecheck | Type check all packages |
FAQ
What AI providers does Aura support?
Cloud: Anthropic (Claude), OpenAI (GPT), Google (Gemini)
Local: Any OpenAI-compatible endpoint — Ollama, LM Studio, mlx-lm, vLLM, or your own server.
CLI Agents: Claude Code, Gemini CLI, Codex, Pi, OpenCode, Copilot, Cursor — any CLI coding tool on your PATH.
Does Aura require an internet connection?
No. Aura works fully offline with local models and bundled voice. Cloud providers and syncing are optional.
Where is my data stored?
Everything is in ~/.aura/ on your machine. Conversations, memories, agent configurations, and settings are stored locally in SQLite. Nothing leaves your device unless you explicitly enable cloud sync.
Can I use Aura with my own models?
Yes. Add any OpenAI-compatible endpoint in Settings → Models. Aura auto-discovers available models and lets you assign them to tiers.
How do CLI agents authenticate?
Each CLI agent manages its own authentication. Install and authenticate the CLI tool normally (e.g., claude /login), and Aura detects and uses it. Aura passes through your existing auth credentials.
Can multiple agents run at the same time?
Yes. Aura dispatches each agent into its own git worktree. Multiple agents can work in parallel on different tasks without interfering with each other or your working directory.
What is the Model Context Protocol (MCP)?
MCP is an open protocol for connecting AI tools to data sources and services. Aura uses MCP for all integrations. Learn more at modelcontextprotocol.io.
How does federation work?
Each Aura instance is both an MCP server and client. Add a teammate's Aura by URL, and the two instances can share tasks, delegate agent work, and coordinate on shared repositories. All communication is encrypted and permission-gated.