Documentation

Everything you need to know about Aura — your local-first, AI-native desktop partner.

What is Aura?

Aura is a desktop application that puts AI at the center of your development workflow. Instead of switching between chat windows, terminals, and editors, Aura brings everything together in one place.

Aura is not a wrapper around a single AI model. She is an intelligent orchestrator that can:

  • Route tasks to the right model — simple questions go to fast local models, complex reasoning goes to powerful cloud models, and coding tasks get dispatched to CLI agents that work in isolated git worktrees.
  • Dispatch and observe AI agents — watch Claude Code, Codex, Gemini CLI, and other coding agents work in real-time terminal panes. Multiple agents can run in parallel on different tasks.
  • Remember everything — semantic memory persists across sessions. Aura learns your codebase, your preferences, and your project context over time.
  • Work entirely offline — bundled voice recognition, text-to-speech, and local model support mean Aura works without an internet connection.
  • Connect to anything via MCP — dashboard widgets, workflow engines, monitoring tools, and infrastructure management all plug in through the Model Context Protocol.

Core Principles

Local-first, cloud-optional

Everything runs on your machine. Cloud sync is available but never required.

AI-native

Persistent identity, semantic memory, intelligent model routing — not just a chat wrapper.

Observable AI

When Aura dispatches agents, you watch them work live in terminal panes. No black boxes.

MCP-everything

Every integration is an MCP server. Community-extensible, ecosystem-aligned.

Federated collaboration

Each person runs their own Aura. Auras communicate directly for team workflows.

Privacy by default

Bundled voice engines, local models, no telemetry unless opted in.

Installation

Download

Download Aura for macOS from the releases page.

  • macOS (Apple Silicon & Intel) — .dmg installer
  • Windows — Coming soon
  • Linux — Coming soon

System Requirements

  • macOS 12.0 or later (Apple Silicon or Intel)
  • Git must be installed (git --version to verify)
  • GitHub CLI with active authentication (gh auth status) — required for GitHub integration

Quick Start

Getting started with Aura takes about two minutes:

1. Launch Aura

Open the app after installation. You'll see the Orb — Aura's visual presence — glowing gently on the dashboard.

2. Add a Project

Click Projects on the dashboard or use the sidebar to add a repository:

  • From local folder — Select an existing git repository on your machine
  • From Git URL — Clone a repository by URL

3. Create a Workspace

Select a branch and Aura creates an isolated workspace for it. Each workspace gets its own terminal sessions, branch-specific context, and independent agent dispatch scope.

4. Start Talking to Aura

Click the Orb to expand the chat panel. Ask Aura anything:

  • Questions — Aura responds directly using the best available model
  • Code tasks — Aura dispatches a CLI agent and you watch it work live
  • Voice — Hold the Orb or press the hotkey to speak naturally

The Orb

The Orb is Aura's visual presence. It's always on screen, providing ambient awareness of what Aura is doing.

Display Modes

| Mode | Description |
| --- | --- |
| Ambient | Small glowing orb in the dashboard corner. Soft breathing animation. Click to expand. |
| Expanded | Chat panel slides open alongside the dashboard. The Orb sits atop the conversation. |
| Fullscreen | Chat takes over the entire window. The Orb is large and centered above the conversation. |

Orb States

| State | What You See |
| --- | --- |
| Idle | Gentle slow breathing glow |
| Listening | Brighter pulse with mic indicator |
| Thinking | Faster pulse, swirling animation |
| Agents active | Orbiting particles — one ring per running agent |
| Attention needed | Warm amber color shift with notification badge |

The Orb's color follows your app's color theme. The default is purple, and it's fully customizable in settings.

Dashboard

The dashboard is your mission control — a grid of widgets that show what matters to you at a glance.

Built-in Widgets

| Widget | What It Shows |
| --- | --- |
| Projects & Workspaces | Your repositories with quick-access to branches and worktrees |
| Tasks | Active, completed, and failed tasks from agent dispatches |
| System Resources | CPU, memory, and disk usage |
| System Monitor | Real-time system performance graphs |
| Processes | Running processes on your machine |
| Services | Health status of connected services |
| Log Stream | Live log output from agents and services |
| Software Inventory | Installed development tools and their versions |

Community widgets can be added by connecting MCP servers — each server can declare widgets that appear in the widget picker automatically.

Chat & Conversation

Aura's chat is more than a chatbot interface. It's the primary way you interact with your AI partner.

How it works

  1. Click the Orb or press the keyboard shortcut to open the chat panel
  2. Type or speak your request
  3. Aura classifies the task — is it a question, a code task, or something else?
  4. For questions: Aura responds directly in chat
  5. For code tasks: Aura dispatches a CLI agent into a terminal pane and tells you what she's doing and why
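
The classify-then-route step can be sketched as a small function. Everything below (the keyword heuristics, type names, and return values) is an illustrative assumption, not Aura's actual classifier:

```typescript
// Hypothetical sketch of chat-side task routing. The heuristics and
// names are illustrative, not Aura's real implementation.
type TaskKind = "question" | "code" | "other";

const CODE_HINTS = ["implement", "refactor", "fix the bug", "add a test", "write a function"];

function classifyTask(message: string): TaskKind {
  const text = message.toLowerCase();
  if (CODE_HINTS.some((hint) => text.includes(hint))) return "code";
  if (text.trim().endsWith("?")) return "question";
  return "other";
}

function handle(message: string): string {
  switch (classifyTask(message)) {
    case "question":
      return "answer-in-chat";   // responded to directly by a model
    case "code":
      return "dispatch-agent";   // handed to a CLI agent in a worktree
    default:
      return "clarify-with-user";
  }
}
```

The key point is the split in `handle`: a question stays in chat, while code work always leaves the chat surface and lands in an observable agent session.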

Key behavior: Aura never writes code in chat. Code work always goes to a dispatched CLI agent working in an isolated worktree. You watch the agent work live.

Conversation history persists across sessions in your local SQLite database. Chat supports Markdown rendering with syntax highlighting, file references, agent dispatch status updates, and inline task creation.

AI Agent Dispatch

This is Aura's signature feature. When you ask for code work, Aura dispatches a CLI coding agent into an isolated environment and you watch it work in real-time.

Supported CLI Agents

| Agent | Binary | How Aura Runs It |
| --- | --- | --- |
| Claude Code | claude | claude --print --dangerously-skip-permissions <task> |
| Gemini CLI | gemini | gemini --prompt <task> --yolo |
| Codex | codex | codex --full-auto <task> |
| Pi | pi | pi --print <task> |
| OpenCode | opencode | opencode run <task> |
| Copilot | copilot | copilot <task> |
| Cursor | cursor | cursor --goto <task> |

Agents are auto-detected from your PATH. If a CLI tool is installed and authenticated, Aura can use it.
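
Detection of this kind typically amounts to probing PATH for known binaries. A minimal sketch, assuming a hypothetical binary list and using `which` for the lookup; Aura's real checks, including authentication status, may differ:

```typescript
// Sketch of PATH-based agent detection. The binary list is taken from
// the table above; the detection mechanism itself is an assumption.
import { execFileSync } from "node:child_process";

const KNOWN_AGENTS = ["claude", "gemini", "codex", "pi", "opencode", "copilot", "cursor"];

function detectAgents(): string[] {
  return KNOWN_AGENTS.filter((bin) => {
    try {
      // `which` exits non-zero when the binary is not on PATH
      execFileSync("which", [bin], { stdio: "ignore" });
      return true;
    } catch {
      return false;
    }
  });
}
```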

The Dispatch Flow

You ask Aura → Task classified → Best agent picked → Git worktree created → Agent spawned live → You review the diff

Smart Agent Selection

Aura uses a recruiter system to pick the best agent for each task:

  1. Infers required capabilities from your task description (e.g., "testing", "refactoring", "frontend")
  2. Scores each installed agent based on capability match (50%), reliability (30%), and keyword relevance (20%)
  3. Picks the top scorer — or creates a custom agent on the fly if nothing scores high enough
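
A toy version of that weighted scoring, using the 50/30/20 split described above; the agent profiles and capability names are made up for illustration:

```typescript
interface AgentProfile {
  name: string;
  capabilities: string[];
  reliability: number; // 0..1, estimated from past run outcomes
  keywords: string[];
}

// 50% capability match, 30% reliability, 20% keyword relevance
function scoreAgent(agent: AgentProfile, required: string[], taskWords: string[]): number {
  const capMatch =
    required.filter((c) => agent.capabilities.includes(c)).length / Math.max(required.length, 1);
  const keywordHits =
    taskWords.filter((w) => agent.keywords.includes(w)).length / Math.max(taskWords.length, 1);
  return 0.5 * capMatch + 0.3 * agent.reliability + 0.2 * keywordHits;
}

// Example roster (invented for this sketch)
const roster: AgentProfile[] = [
  { name: "claude", capabilities: ["refactoring", "testing"], reliability: 0.9, keywords: ["typescript"] },
  { name: "codex", capabilities: ["frontend"], reliability: 0.8, keywords: ["react"] },
];

function pickAgent(required: string[], taskWords: string[]): AgentProfile {
  return roster.reduce((best, a) =>
    scoreAgent(a, required, taskWords) > scoreAgent(best, required, taskWords) ? a : best,
  );
}
```

The "create a custom agent" fallback would kick in when the top score falls below some threshold; that threshold is not documented here.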

Multi-Agent Orchestration

Aura can dispatch multiple agents in parallel, each on its own worktree and branch. Each agent gets its own terminal pane, and the Orb shows orbiting particles — one per active agent.

Dynamic Agent Creation

When no existing agent fits your task, Aura builds one on the fly — analyzing requirements, generating a tailored system prompt, finding the best base CLI binary, and saving the manifest for reuse. Over time, Aura builds a library of specialized agents tuned to your work.

Workspaces & Git Worktrees

Workspaces give each branch its own isolated environment with dedicated terminal sessions and context.

| Type | How It Works |
| --- | --- |
| Branch Workspace | Uses the main repo directory. One per project. Switching branches is a checkout. |
| Worktree Workspace | Creates an isolated git worktree directory. Multiple can exist per project simultaneously. |

Branch workspaces are ideal for quick branch switching. Worktree workspaces are what Aura creates when dispatching agents — each agent gets its own isolated copy of the repo on a dedicated branch.

Workspaces are auto-created on project open, support safe branch switching with uncommitted change warnings, and each has independent terminal sessions.
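
Under the hood, a worktree workspace boils down to `git worktree add` on a fresh branch. A sketch, with the branch and directory naming invented for the example:

```typescript
import { execFileSync } from "node:child_process";

// Creates an isolated checkout for one agent. Branch and directory
// naming here are assumptions, not Aura's documented scheme.
function createAgentWorktree(repoDir: string, task: string): string {
  const slug = task.toLowerCase().replace(/[^a-z0-9]+/g, "-").replace(/^-|-$/g, "");
  const branch = `aura/${slug}`;
  const worktreeDir = `${repoDir}-${slug}`; // sibling directory, outside the main checkout
  execFileSync("git", ["worktree", "add", "-b", branch, worktreeDir], { cwd: repoDir });
  return worktreeDir;
}
```

Because each agent gets its own directory and branch, parallel agents never touch each other's files or your main working tree.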

Terminal

Aura includes a full-featured terminal built on xterm.js and node-pty.

  • Multiple tabs — Open as many terminal sessions as you need per workspace
  • Session persistence — Terminal state survives app restarts
  • Agent terminal panes — Dispatched agents run in visible terminal panes with live output
  • Split views — View multiple terminal sessions side by side
  • Workspace-scoped — Each workspace gets its own set of terminal tabs

The terminal is where you observe agents working. When Aura dispatches an agent, a new terminal pane opens showing the agent's real-time CLI output — every command, every file edit, every test run.

Model Routing

Aura intelligently routes tasks to the best available model based on complexity and type.

Aura Mode (Default)

Aura automatically selects the best model for each task:

| Task Type | Typical Route | Why |
| --- | --- | --- |
| Quick questions | Local or fast cloud model (Haiku, Flash) | Fast, cheap, private |
| Complex reasoning | Powerful cloud model (Sonnet, GPT-4o) | Best quality |
| Code implementation | CLI agent with appropriate model | Observable, isolated |
| Voice interaction | Local models (whisper.cpp, Piper) | Zero latency, offline |
| Embeddings, memory | Local embedding model | Private, fast |

Manual Mode

Lock in your model choices globally by picking one model for each tier (Simple, Complex, Escalation). Selections apply across all projects.
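
Conceptually, a manual-mode configuration pins one model per tier. The shape below is a hypothetical example of what a routing preferences file (such as the routing.json stored in ~/.aura/) might contain; the actual field names and model identifiers are not documented here:

```json
{
  "mode": "manual",
  "tiers": {
    "simple": "ollama/llama3.1:8b",
    "complex": "anthropic/claude-sonnet",
    "escalation": "anthropic/claude-opus"
  }
}
```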

Auto-Escalation

If a dispatched agent fails (non-zero exit, repeated errors), Aura can automatically kill it and re-dispatch at a higher tier, notifying you of the escalation.
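
The escalation loop can be sketched as walking up the tier ladder until a run exits cleanly. The tier names, retry policy, and dispatch interface below are illustrative assumptions:

```typescript
type Tier = "simple" | "complex" | "escalation";
const TIERS: Tier[] = ["simple", "complex", "escalation"];

interface DispatchResult { exitCode: number }

// Walks up the tier ladder until a run succeeds, notifying on each escalation.
function dispatchWithEscalation(
  task: string,
  run: (task: string, tier: Tier) => DispatchResult,
  notify: (msg: string) => void,
): Tier {
  for (let i = 0; i < TIERS.length; i++) {
    if (run(task, TIERS[i]).exitCode === 0) return TIERS[i];
    if (i + 1 < TIERS.length) {
      notify(`Agent failed at "${TIERS[i]}"; re-dispatching at "${TIERS[i + 1]}"`);
    }
  }
  throw new Error("All tiers exhausted");
}
```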

Voice

Aura ships with fully local voice capabilities — no cloud APIs required.

| Component | Purpose | Engine |
| --- | --- | --- |
| whisper.cpp | Speech-to-text | WASM or native binary (~150MB) |
| Piper | Text-to-speech | ONNX runtime (~50MB per voice) |
| Silero VAD | Voice activity detection | ONNX runtime (~2MB) |

All processing happens on your machine. No voice data ever leaves the device.

Interaction Modes

  • Push-to-talk (default) — Click and hold the Orb, or press a hotkey to speak
  • Wake word (opt-in) — Say "Hey Aura" to activate, then speak naturally

Voice Pipeline

Microphone → Silero VAD → whisper.cpp → Aura's brain → Piper TTS → Speaker

The Orb animates through each stage — listening, thinking, speaking — so you always know what Aura is doing.

MCP Integrations

Every integration in Aura connects through the Model Context Protocol (MCP). Any MCP-compatible server can plug into Aura and provide tools, data, and dashboard widgets.

What MCP Servers Provide

  • Tools — Actions Aura can perform ("trigger workflow", "restart container", "create ticket")
  • Resources — Data Aura can read ("current alerts", "container status", "sensor readings")
  • Widgets — Dashboard components that auto-appear when connected

Connecting an MCP Server

  1. Go to Settings → Integrations
  2. Add a new MCP server connection (URL or local command)
  3. Aura discovers capabilities automatically
  4. Widgets appear in the dashboard picker, tools become available in conversations
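
Step 3's capability discovery is standard MCP: after the initialize handshake, the client issues JSON-RPC 2.0 requests such as tools/list and resources/list. A sketch that only builds those messages (the transport and the handshake itself are omitted):

```typescript
// Minimal JSON-RPC 2.0 request builder for MCP discovery calls.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

let nextId = 1;
function mcpRequest(method: string, params?: Record<string, unknown>): JsonRpcRequest {
  return { jsonrpc: "2.0", id: nextId++, method, params };
}

// After the MCP initialize handshake, discovery is a pair of list calls.
const discovery = [mcpRequest("tools/list"), mcpRequest("resources/list")];
```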

Example Integrations

| Integration | What It Does |
| --- | --- |
| n8n | Trigger workflows, view execution history, automate tasks |
| Proxmox | Container and VM status, start/stop/restart |
| Zabbix | Active alerts, host status, monitoring dashboards |
| Home Assistant | Smart home device control, sensor readings, automations |

Federation (Aura-to-Aura)

Each Aura instance is both an MCP server and an MCP client. Team collaboration happens through direct Aura-to-Aura connections — no central server required.

Team Capabilities

| Feature | Description |
| --- | --- |
| Shared task board | Tasks sync between connected Auras. Assign work across instances. |
| Agent delegation | "Ask Sarah's Aura to run the security audit." Handoff sent via MCP, results flow back. |
| Shared worktrees | Both Auras work on the same repo, coordinating via git. |
| Status awareness | See teammate's Aura status: online/offline, current work, active agents. |

Permission Tiers

| Tier | What They Can Do |
| --- | --- |
| Observe | See online/offline status |
| Request | Ask for tasks, request information |
| Delegate | Assign work, send handoffs with full context |
| Admin | Full control and configuration access |

Each connection is individually permissioned with a full audit trail of all shared data.

Setting Up Model Providers

Cloud Providers

  1. Go to Settings → Models
  2. Under Cloud Providers, click the provider you want to add
  3. Anthropic — Sign in with OAuth (recommended) or paste an API key
  4. OpenAI — Paste your API key
  5. Google — Sign in with OAuth or paste an API key
  6. Aura auto-discovers available models on sign-in

Local Providers

  1. Under Your Endpoints, click Add Endpoint
  2. Enter a name and the base URL (e.g., http://localhost:11434 for Ollama)
  3. Click Discover Models — Aura calls /v1/models and lists what's available
  4. Enable/disable individual models and assign them to tiers

Supported local model servers: Ollama, LM Studio, mlx-lm (Apple Silicon), or any OpenAI-compatible API endpoint.
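
The Discover Models step is a single call to the OpenAI-compatible /v1/models route, which returns a list of model IDs. A minimal sketch, with auth headers and error handling left out:

```typescript
interface ModelEntry { id: string }

// Queries an OpenAI-compatible server for its model list. Retry logic
// and API-key headers are omitted from this sketch.
async function discoverModels(baseUrl: string): Promise<string[]> {
  const res = await fetch(`${baseUrl.replace(/\/+$/, "")}/v1/models`);
  if (!res.ok) throw new Error(`Discovery failed: HTTP ${res.status}`);
  const body = (await res.json()) as { data: ModelEntry[] };
  return body.data.map((m) => m.id);
}
```

For Ollama, `discoverModels("http://localhost:11434")` would return the names of whatever models you have pulled locally.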

Configuring CLI Agents

CLI agents are auto-detected from your PATH. Install and authenticate them, and Aura handles the rest.

Claude Code

```shell
# Install
npm install -g @anthropic-ai/claude-code

# Authenticate
claude /login
```

Gemini CLI

```shell
# Install
npm install -g @google/gemini-cli

# Authenticate (Google OAuth)
gemini auth
```

Codex

```shell
# Install
npm install -g @openai/codex

# Set API key
export OPENAI_API_KEY=your-key
```

Go to Settings → CLI Agents to see all detected agents with their status (installed, authenticated, version).

Creating Saved Agents

Saved agents are persistent, named agent configurations tuned for specific projects or tasks.

Create via Chat

You: "Create an agent for our frontend work"

Aura: "I'll set up a frontend specialist. I see React, TanStack Router, shadcn/ui, and TailwindCSS. I'll scope it to apps/desktop/src/renderer/ with theme system context. Want to lock it to a specific model or let me pick?"

Create via Settings

  1. Go to Settings → CLI Agents → Saved Agents
  2. Click Create Agent
  3. Configure: name, description, preferred CLI, model, project path, context files, system prompt, and constraints

Saved agents match tasks by project path and keyword matching. When you ask Aura to work on a project with a matching saved agent, she uses it automatically.
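
A toy version of that matching: filter saved agents by project path prefix, then rank by keyword hits. The matching rules Aura actually applies are not specified here:

```typescript
interface SavedAgent {
  name: string;
  projectPath: string;
  keywords: string[];
}

function matchSavedAgent(agents: SavedAgent[], cwd: string, task: string): SavedAgent | null {
  const words = task.toLowerCase().split(/\s+/);
  const ranked = agents
    .filter((a) => cwd.startsWith(a.projectPath))          // path prefix match
    .map((a) => ({ a, hits: a.keywords.filter((k) => words.includes(k)).length }))
    .sort((x, y) => y.hits - x.hits);                      // most keyword hits first
  return ranked.length > 0 && ranked[0].hits > 0 ? ranked[0].a : null;
}
```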

Keyboard Shortcuts

| Shortcut | Action |
| --- | --- |
| ⌘ + L | Toggle chat panel |
| ⌘ + B | Toggle sidebar |
| ⌘ + T | New terminal tab |
| ⌘ + W | Close current tab |
| ⌘ + , | Open settings |

Customize shortcuts in Settings → Keyboard Shortcuts.

Custom Themes

Aura supports full theme customization including the Orb color, UI colors, and font choices.

  1. Go to Settings → Appearance
  2. Select a base theme to start from
  3. Customize colors, fonts, and the Orb's glow color
  4. Save with a name for easy switching

Theme files can be exported and shared with other Aura users.

Architecture

Aura is built as a monorepo with clear separation between the desktop shell (Electron) and the core AI logic.

```
aura/
├── apps/
│   ├── desktop/                 # Electron app
│   │   └── src/
│   │       ├── main/            # Main process (Node.js)
│   │       │   ├── aura-core/   # Aura's brain (routing, memory, identity)
│   │       │   ├── agents/      # Agent dispatch & monitoring
│   │       │   ├── terminal/    # PTY sessions
│   │       │   └── git/         # Git provider abstraction
│   │       ├── renderer/        # React UI
│   │       └── shared/          # Types shared between processes
│   └── api/                     # API backend
└── packages/
    ├── ui/                      # shadcn/ui + Tailwind components
    ├── local-db/                # SQLite schema (Drizzle ORM)
    ├── agent-manifest/          # Agent capability schema
    ├── mcp/                     # MCP client integration
    └── shared/                  # Shared types and utilities
```

Technology Stack

| Layer | Technology |
| --- | --- |
| Runtime | Bun |
| Build | Turborepo |
| Desktop | Electron + electron-vite |
| Frontend | React + TailwindCSS v4 + shadcn/ui |
| Routing | TanStack Router (file-based) |
| State | Zustand (UI) + React Query (server state) |
| IPC | tRPC over Electron bridge |
| Terminal | node-pty + xterm.js |
| Local DB | Drizzle ORM + SQLite |
| Vector Store | SQLite vec extension |
| Voice STT | whisper.cpp |
| Voice TTS | Piper |
| Voice VAD | Silero (ONNX) |
| Integrations | MCP (Model Context Protocol) |

Data Storage

Everything lives in ~/.aura/:

```
~/.aura/
├── config.json      # Identity, preferences, theme
├── providers.json   # Model endpoints and API keys
├── models.json      # Model registry with tier assignments
├── routing.json     # Routing mode and preferences
├── aura.db          # SQLite -- conversations, tasks, settings
├── vectors.db       # Semantic memory embeddings
├── manifests/       # Agent manifest files
├── agents/          # Saved agent configurations
├── projects/        # Per-project code embeddings
├── voice/           # Voice model cache
├── bin/             # Agent wrapper scripts
└── hooks/           # Agent lifecycle hooks
```

Fully portable — copy ~/.aura/ to a new machine and everything comes with you.

Optional Cloud Sync

When enabled, Aura syncs to a Supabase instance (self-hosted or cloud):

| What Syncs | Direction |
| --- | --- |
| Tasks | Bidirectional |
| Agent manifests | Bidirectional |
| Projects | Bidirectional |
| Conversations | Push (opt-in per conversation) |
| Memory/embeddings | Push (opt-in) |
| Settings | Never synced (local only) |

You provide your own Supabase URL and key. Aura never phones home.

Building from Source

Prerequisites

  • Bun (package manager)
  • Git
  • macOS 12.0+ (Windows/Linux support coming)

Steps

```shell
# Clone the repository
git clone https://github.com/pavetech/aura.git
cd aura

# Install dependencies
bun install

# Start development mode
SKIP_ENV_VALIDATION=1 bun run dev

# Build for production
bun run build
```

Common Commands

| Command | What It Does |
| --- | --- |
| bun dev | Start all dev servers |
| bun test | Run tests |
| bun build | Build all packages |
| bun run lint | Check for lint issues |
| bun run lint:fix | Auto-fix lint issues |
| bun run format | Format code |
| bun run typecheck | Type check all packages |

FAQ

What AI providers does Aura support?

Cloud: Anthropic (Claude), OpenAI (GPT), Google (Gemini)

Local: Any OpenAI-compatible endpoint — Ollama, LM Studio, mlx-lm, vLLM, or your own server.

CLI Agents: Claude Code, Gemini CLI, Codex, Pi, OpenCode, Copilot, Cursor — any CLI coding tool on your PATH.

Does Aura require an internet connection?

No. Aura works fully offline with local models and bundled voice. Cloud providers and syncing are optional.

Where is my data stored?

Everything is in ~/.aura/ on your machine. Conversations, memories, agent configurations, and settings are stored locally in SQLite. Nothing leaves your device unless you explicitly enable cloud sync.

Can I use Aura with my own models?

Yes. Add any OpenAI-compatible endpoint in Settings → Models. Aura auto-discovers available models and lets you assign them to tiers.

How do CLI agents authenticate?

Each CLI agent manages its own authentication. Install and authenticate the CLI tool normally (e.g., claude /login), and Aura detects and uses it. Aura passes through your existing auth credentials.

Can multiple agents run at the same time?

Yes. Aura dispatches each agent into its own git worktree. Multiple agents can work in parallel on different tasks without interfering with each other or your working directory.

What is the Model Context Protocol (MCP)?

MCP is an open protocol for connecting AI tools to data sources and services. Aura uses MCP for all integrations. Learn more at modelcontextprotocol.io.

How does federation work?

Each Aura instance is both an MCP server and client. Add a teammate's Aura by URL, and the two instances can share tasks, delegate agent work, and coordinate on shared repositories. All communication is encrypted and permission-gated.