Timoro runs entirely on your machine — no cloud, no API keys, no subscriptions required. It watches your terminal, fixes errors in real time, audits security continuously and logs everything. Rust-powered for maximum performance.
Timoro spawns alongside your dev process, reading stdout and stderr in real time. The moment an error appears, it diagnoses the cause, applies a surgical fix and writes a detailed entry to .timoro/log.md — with file, line, column, diff and recommendation.
Every power works locally, in real time, configured via a single TypeScript object.
Parses stack traces with file, line and column precision. Semantic search retrieves relevant context from your KB. The Rust engine applies a surgical diff via the `similar` crate — no full rewrites.
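For intuition, here is a minimal sketch of that parsing step, pulling file, line and column out of a Node-style stack frame. The names are illustrative, not Timoro's API:

```typescript
// Hypothetical sketch (not Timoro's actual API): extract file, line and
// column from a Node-style stack frame such as
//   "    at handler (/app/src/server.ts:42:13)"
interface Frame { file: string; line: number; column: number }

function parseFrame(frame: string): Frame | null {
  // Match a trailing "path:line:column", with or without parentheses.
  const m = frame.match(/\(?([^()\s]+):(\d+):(\d+)\)?\s*$/)
  if (!m) return null
  return { file: m[1], line: Number(m[2]), column: Number(m[3]) }
}
```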
Spawns alongside any running process — npm, cargo, python, node, any command. Reads stdout and stderr in real time via `tokio::process` on the Tokio async runtime. Negligible CPU overhead when idle.
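The idea can be sketched in a few lines of Node, synchronous here for brevity (Timoro's engine streams asynchronously in Rust; the function name below is illustrative):

```typescript
import { spawnSync } from 'node:child_process'

// Run a command, capture stderr, and keep only lines that look like errors.
// Simplified and synchronous; a real watcher streams output as it arrives.
function captureErrors(cmd: string, args: string[]): string[] {
  const result = spawnSync(cmd, args, { encoding: 'utf8' })
  return (result.stderr ?? '')
    .split('\n')
    .filter(line => /\b(error|warn(ing)?)\b/i.test(line))
}
```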
Two modes: static (OWASP rules, secrets, CVEs and injection patterns in source) and active (real payloads — SQL injection, XSS, IDOR, brute force, port scan, CORS, SSL/TLS). All findings are logged with severity. Selected via `mode: 'static' | 'active'`.
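As a flavor of what a static pass looks for, here is a toy secrets check. The patterns below are common examples, not Timoro's actual ruleset:

```typescript
// Toy static check: flag lines that look like hardcoded secrets.
// Illustrative patterns only, not Timoro's actual rules.
const SECRET_PATTERNS: RegExp[] = [
  /AKIA[0-9A-Z]{16}/,                       // AWS access-key-id shape
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/, // PEM private key header
  /\b(api[_-]?key|secret|token)\b\s*[:=]\s*['"][^'"]{12,}['"]/i, // assigned literal
]

// Returns the 1-based line numbers of suspicious lines.
function scanForSecrets(source: string): number[] {
  const hits: number[] = []
  source.split('\n').forEach((line, i) => {
    if (SECRET_PATTERNS.some(p => p.test(line))) hits.push(i + 1)
  })
  return hits
}
```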
LLaMA 3.2 and Mistral run on-device via Candle (the `candle-core` crate, bridged to Node through `napi-rs`). The brain is pluggable: swap to OpenAI, Claude, Gemini or Ollama with one config line. All other powers stay fully local regardless.
Indexes local files (MD, TXT, PDF, DOCX, code), databases (PostgreSQL, MySQL and SQLite via knex; MongoDB), external URLs and neighboring projects. Text is tokenized with the `tokenizers` crate and embeddings are stored in a usearch vector store.
Every error, fix, pentest finding and indexing event is written to .timoro/log.md with timestamp, file path, line/column, before/after diff and a concrete fix recommendation.
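Those fields suggest an entry shape like the following. This is an assumed type inferred from the description above, not Timoro's published API:

```typescript
// Assumed shape, inferred from the fields described above; illustrative only.
interface TimoroLogEntry {
  timestamp: string // ISO 8601
  kind: 'error' | 'fix' | 'pentest-finding' | 'indexed'
  file: string
  line: number
  column: number
  diff?: { before: string; after: string }
  recommendation: string
}

// Render one entry as a Markdown section for .timoro/log.md.
function renderEntry(e: TimoroLogEntry): string {
  const lines = [`## [${e.kind}] ${e.file}:${e.line}:${e.column} (${e.timestamp})`]
  if (e.diff) lines.push(`before: ${e.diff.before}`, `after:  ${e.diff.after}`)
  lines.push(`fix: ${e.recommendation}`)
  return lines.join('\n')
}
```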
Timoro runs a local LLM by default — zero cloud, zero API keys. The brain is pluggable: swap to any provider via a single config line without changing anything else.
Embedded LLM via Candle (Rust). Runs LLaMA 3.2 and Mistral fully on-device. No internet, no API key, no subscriptions. Model downloaded once at install (~1.5GB).
`provider: 'local'`

GPT-4o and GPT-4o mini. Swap the brain to OpenAI while keeping all local powers — file watching, auto-fix, pentester — intact.
`provider: 'openai'`

Claude Sonnet and Opus via @anthropic-ai/sdk. All Timoro capabilities remain local — only the LLM inference is delegated.
`provider: 'claude'`

Gemini Pro and Flash via @google/generative-ai. Full feature parity with all other providers.
`provider: 'gemini'`

Delegates inference to a local Ollama installation. Run any model available in the Ollama library. Requires Ollama running on the machine.
`provider: 'ollama'`

```typescript
import { TimoroConfig } from 'timoro'

const config: TimoroConfig = {

  // brain engine
  brain: {
    provider: 'local', // or 'openai' | 'claude' | 'gemini' | 'ollama'
    model: 'llama3.2',
    temperature: 0.3,
  },

  // knowledge sources
  knowledge: {
    dirs: ['./src', './docs'],
    files: ['./README.md'],
    db: { url: process.env.DATABASE_URL },
    urls: ['https://docs.example.com'],
  },

  // terminal watcher
  watch: {
    terminal: 'npm run dev',
    autoFix: true,
    confirmBeforeFix: false,
  },

  // security analysis
  pentester: {
    enabled: true,
    mode: 'active',
    target: 'http://localhost:3000',
    static: { owasp: true, secrets: true, deps: true },
    bruteforce: { enabled: true, detectRateLimit: true },
  },

  // log
  log: {
    path: './.timoro/log.md',
    includeDiff: true,
  },

}

export default config
```
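Swapping the brain later touches only that first block. For example, to delegate inference to a local Ollama instance (a sketch based on the config shape above; the model name is illustrative):

```typescript
import { TimoroConfig } from 'timoro'

// Only the brain block changes when swapping providers;
// everything else in the config above stays as-is and fully local.
const config: Partial<TimoroConfig> = {
  brain: {
    provider: 'ollama', // was 'local'
    model: 'llama3.2',  // any model pulled into your local Ollama
    temperature: 0.3,
  },
}

export default config
```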
Install Timoro globally or as a dev dependency. The Rust core compiles automatically on first install via napi-rs.
`npm install timoro`

Scaffolds timoro.config.ts, creates the .timoro/ folder and downloads the default LLM model (~1.5GB, one time only).
`npx timoro init`

Indexes your knowledge base, spawns alongside your dev process and starts watching. The full agent in one command.
`npx timoro start`

Timoro's active pentester runs continuously alongside your code — not as a periodic audit, but as a real-time guardian. Every finding goes to .timoro/log.md with severity, location and a concrete fix recommendation.
⚠ Active mode should only be used on systems you own or have explicit authorization to test.
Scaffolds timoro.config.ts in your project, creates the .timoro/ folder and downloads the default LLM model (~1.5GB, one time only).
Indexes all configured knowledge sources — local files, database tables and external URLs — into the local vector store.
Starts the full agent: terminal watcher, file watcher, knowledge base and pentester if configured. The primary entry point.
Starts only the terminal watcher. Reads stdout/stderr of your dev process in real time. Pass --cmd to specify the process.
Queries the knowledge base directly from the terminal. The agent searches your indexed project, DB and docs to answer.
Runs the security analysis engine manually. Performs static and/or active testing based on your pentester config.
Displays the current .timoro/log.md formatted in the terminal. Shows all errors, fixes and security findings.
Clears the vector store (vectors.usearch) and re-indexes everything from scratch. Use when the knowledge base is stale.
```typescript
import { Timoro } from 'timoro'

const ai = new Timoro(config)

await ai.init()     // index + setup
await ai.watch()    // start agent
await ai.pentest()  // run security
await ai.ask('...') // query KB
await ai.stop()     // cleanup

// Events
ai.on('error-detected', fn)
ai.on('fix-applied', fn)
ai.on('pentest-finding', fn)
ai.on('indexed', fn)
```
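The payload shapes for those events are not documented above; as a sketch of the subscribe-and-filter pattern they enable (all types assumed, not Timoro's):

```typescript
// Assumed finding shape and a minimal event-bus sketch; illustrative only.
type Severity = 'low' | 'medium' | 'high' | 'critical'
interface PentestFinding { severity: Severity; location: string; recommendation: string }

type Handler = (f: PentestFinding) => void
const handlers: Handler[] = []

function onPentestFinding(fn: Handler): void { handlers.push(fn) }
function emitPentestFinding(f: PentestFinding): void { for (const h of handlers) h(f) }

// Typical usage: escalate only serious findings, let the log capture the rest.
const serious: string[] = []
onPentestFinding(f => {
  if (f.severity === 'high' || f.severity === 'critical') serious.push(f.location)
})
```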