# Introduction What Routecraft is and how it works. ## What is Routecraft? Routecraft is a **code-first automation platform** for TypeScript that bridges traditional integration (Software 1.0) and AI-native workflows (Software 3.0). Whether you need to process a daily CSV on a cron job, route incoming webhooks, or give Claude the ability to manage your Google Calendar, Routecraft handles it all through a single, unified DSL. Routecraft is built for both eras of software: - **Traditional Automation:** Build robust data pipelines, process webhooks, and run scheduled tasks with a type-safe DSL. - **AI-Native Tools:** Expose those exact same capabilities to Claude, ChatGPT, Cursor, and other agents via MCP. TypeScript all the way. Full IDE support, version-controlled, and testable. **Safe by Design** Free-thinking models should not have free rein over your system. Routecraft inverts the default: nothing is accessible until you explicitly write a capability for it. Write a **deterministic** capability for predictable, code-controlled actions. Write a **non-deterministic** one and the agent reasons within the boundary you defined. Either way, you stay in control. --- ## Core Concepts These concepts give you a high-level map of how everything fits together. ### Capabilities and Routes From an AI agent's perspective, everything you build is a **Capability**: a discoverable action it can invoke, like "send an email" or "book a meeting." Under the hood, each capability is implemented as a **Route**: a TypeScript pipeline connecting a **source** to one or more **steps** (operations, processors, or adapters), and eventually to a **destination**. Capabilities can be fully **deterministic** (the same input always produces the same output) or **non-deterministic** (an embedded agent reasons and decides at runtime). You choose the level of autonomy for each one. ### The DSL Routecraft uses a **fluent DSL (Domain-Specific Language)** to author capabilities.
It reads like a pipeline: ```ts craft() .from(source) .transform(fn) .to(destination) ``` This makes capabilities easy to write, read, and extend. ### Operations Operations are the **steps inside a capability**. They can transform data, filter messages, enrich responses with external calls, or split and aggregate streams. They are the verbs of the DSL: `transform`, `filter`, `enrich`, and more. ### Adapters Adapters are **connectors** that let your capabilities interact with the outside world. They come in different types: - **Sources**: where data enters (HTTP requests, timers, files). - **Processors**: steps that modify or enrich the exchange. - **Destinations**: where the data ends up (logs, databases, APIs). Adapters make Routecraft extensible. You can use the built-ins or create your own. ### Exchange Every step passes along an **exchange**. An exchange carries the **body** (the main data) and **headers** (metadata such as IDs, parameters, or context). It is the message envelope that moves through the pipeline from start to finish. ```json { "id": "a3f4e1b2-9c6d-4e8a-b1f3-2d7c0e5a9f12", "body": { "to": "alice@example.com", "subject": "Your meeting is confirmed" }, "headers": { "routecraft.correlation_id": "abc-123" } } ``` ### Context The **Routecraft context** is the runtime that manages your capabilities. It handles: - Loading capabilities. - Starting and stopping them. - Hot reload in development. - Running a capability once for batch jobs or tests. You can drive a context through the CLI, or embed it programmatically in your own application. ### How it all fits - **Capabilities** are the secure workflows. - **DSL** is how you describe them. - **Operations** are the steps. - **Adapters** connect to the outside world. - **Exchange** is the data that flows through. - **Context** is the engine that runs everything. These concepts make Routecraft a **developer-first automation framework**: straightforward to start, and powerful enough to grow with your needs. 
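Seen end to end, a single capability exercises every one of these concepts. The following sketch uses the built-in `timer`, `log`, and `http` adapters described in these docs; the interval and URL values are illustrative:

```ts
// capabilities/daily-report.ts -- one capability, all the concepts in play
import { craft, timer, http, log } from "@routecraft/routecraft";

export default craft()
  .id("daily-report")                      // the capability, discoverable by agents
  .from(timer({ intervalMs: 86_400_000 })) // source adapter: starts an exchange daily
  .transform(() => ({ report: "daily" }))  // operation: reshapes the exchange body
  .tap(log())                              // side effect on a copy of the exchange
  .to(http({ method: "POST", url: "https://api.example.com/reports" })); // destination
```

The context loads this file, starts the capability, and drives each exchange through the pipeline from source to destination.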
--- ## Related - [Installation](/docs/introduction/installation) -- Install via CLI or manually add packages. - [Project structure](/docs/introduction/project-structure) -- Nuxt-style folder layout and auto-discovery. - [Capabilities](/docs/introduction/capabilities) -- Author small, focused capabilities using the DSL. # Installation System requirements, manual setup, and production builds. ## System requirements - **Node.js 22.6 or later** - required for the `--experimental-strip-types` flag, which lets you run `.ts` files directly without a build step. - **Node.js 23.6 or later** - recommended. TypeScript stripping is stable and enabled by default; no flags needed. - macOS, Windows (including WSL), or Linux. ## Create a new project Scaffold a complete Routecraft project with one command: **pnpm:** ```bash pnpm create routecraft@latest my-app ``` Follow the prompts to configure your project name, package manager, and directory layout. Then: ```bash cd my-app npm run start ``` For all flags and options, see [CLI -- create](/docs/reference/cli#create). ## Manual installation Add Routecraft to an existing project: **pnpm:** ```bash pnpm add @routecraft/routecraft ``` Create your first capability: ```ts // capabilities/my-capability.ts import { craft, simple, log } from "@routecraft/routecraft"; export default craft() .id("my-first-capability") .from(simple("Hello, Routecraft!")) .to(log()); ``` Run it directly with the CLI: ```bash npx @routecraft/cli run capabilities/my-capability.ts ``` On Node 22.6+, the CLI strips TypeScript at runtime with no `tsc` step required. On Node 23.6+ this happens automatically without any flags. ## TypeScript configuration Routecraft is TypeScript-first. 
The recommended `tsconfig.json` for a capabilities project: ```json { "compilerOptions": { "target": "ES2022", "module": "NodeNext", "moduleResolution": "NodeNext", "strict": true, "outDir": "dist" }, "include": ["capabilities/**/*.ts", "src/**/*.ts"] } ``` You only need to compile (`tsc`) when building for production. During development, the CLI runs your `.ts` files directly. ## Production builds Build and start for production: **pnpm:** ```bash pnpm build && pnpm start ``` The build step compiles your capabilities to JavaScript. The compiled output in `dist/` is what runs in production with no Node flags and no runtime overhead. ## Embedding Routecraft in your app If you're running capabilities from within an existing Node application instead of using the CLI, use `CraftContext` directly: ```ts import { CraftContext } from "@routecraft/routecraft"; const ctx = new CraftContext(); await ctx.load("capabilities/"); await ctx.start(); ``` This gives you full programmatic control: load specific capability files, run a single capability for a batch job, or integrate Routecraft into a larger Express or Fastify server. --- ## Related - [CLI reference](/docs/reference/cli) -- All CLI commands and options. - [Project structure](/docs/introduction/project-structure) -- Understand the layout of a Routecraft project. # Project structure A conventional folder layout that Routecraft expects out of the box. ## Folder layout ```text my-app ├── craft.config.ts ├── capabilities │ ├── send-email.ts │ ├── sync-users.ts │ └── reports │ └── daily-summary.ts ├── adapters │ ├── kafka.ts │ └── google-sheets.ts ├── plugins │ └── logger.ts ├── package.json ├── tsconfig.json └── .env ``` All application code can live at the project root or inside an optional `src` folder. Routecraft treats both layouts identically. ## Folders | Folder | Purpose | | --- | --- | | `capabilities/` | Your capabilities as `.ts` files. Nest them freely in sub-folders. 
| | `adapters/` | Custom adapters that connect to external systems. Each implements one of the adapter interfaces: `subscribe`, `send`, or `process`. | | `plugins/` | Runtime plugins that hook into the Routecraft context lifecycle, such as MCP transport or custom telemetry. | **Adapters vs plugins:** an adapter connects to an external system (a queue, an API, a file system). A plugin extends the runtime itself (exposing MCP, adding metrics, wiring up observability). ## Files | File | Purpose | | --- | --- | | `craft.config.ts` | Registers plugins and configures the context. Exported as default. | | `package.json` | Dependencies and convenience scripts. | | `tsconfig.json` | TypeScript configuration. | | `.env` | Environment variables. Pass a custom path with `--env` in CLI commands. | ## craft.config.ts The config file is the entry point for the Routecraft runtime. A minimal setup: ```ts // craft.config.ts import type { CraftConfig } from "@routecraft/routecraft"; const config: CraftConfig = {}; export default config; ``` Sub-folders inside `capabilities/` are supported. `capabilities/reports/daily-summary.ts` is just as valid as a flat file. The capability ID set in `.id()` is what identifies it at runtime, not the filename. --- ## Related - [Configuration reference](/docs/reference/configuration) -- craft.config.ts options and context settings. # Capabilities Define what your AI can do, and exactly how it does it. ## What is a capability? A capability is a TypeScript file that defines a secure, type-safe action your system can perform. It uses the Routecraft DSL to wire a **source** through **operations** to a **destination**. ```ts // capabilities/send-email.ts import { craft, http, smtp } from "@routecraft/routecraft"; export default craft() .id("send-email") .from(http({ path: "/send", method: "POST" })) .transform((body) => ({ to: body.recipient, subject: body.subject })) .to(smtp()); ``` When an AI agent calls `send-email`, it executes exactly this pipeline. 
You define the boundary; the agent works within it. ## The DSL Every capability follows the same shape: ```ts craft() .id("capability-id") // Unique identifier .from(source) // Where data enters .transform(fn) // Optional operations .to(destination) // Where data goes ``` `.id()` is what identifies the capability at runtime, not the filename. Name your files descriptively, but the ID is what matters. > **Note: Always set an ID** > > It is recommended to give every capability a unique `.id()`. Without one, Routecraft generates an ID automatically but it may change between runs, making debugging and MCP tool discovery harder. The `require-named-route` ESLint rule enforces this and can be disabled per-project. ## Source types The `.from()` adapter determines how a capability is triggered: **Request-driven** -- responds to an inbound call and returns a result: ```ts .from(http({ path: "/users", method: "GET" })) ``` **Scheduled** -- runs on a timer, no caller to respond to: ```ts .from(timer({ intervalMs: 60_000 })) ``` **One-shot** -- processes a fixed payload and completes: ```ts .from(simple({ report: "daily-summary" })) ``` **Channel-driven** -- receives messages from another capability: ```ts .from(direct("incoming-jobs", {})) ``` ## Operations Operations are the steps between source and destination. They are composable and run in order: | Operation | What it does | | --- | --- | | `.transform(fn)` | Replaces the body with the return value of `fn` | | `.filter(fn)` | Drops the exchange if `fn` returns false | | `.tap(adapter)` | Side effect (logging, metrics) without altering the exchange | | `.sample({ every: n })` | Passes through every nth exchange | | `.batch({ size: n })` | Groups exchanges before passing them on | ## Destinations `.to()` sends the processed exchange to its final target. It is recommended to use only one `.to()` per capability -- if you need to fan out, use `.tap()` for side-effect destinations and reserve `.to()` for the primary output. 
> **Note: One destination per capability** > > Using multiple `.to()` calls on a single capability is supported but not recommended. The `single-destination` ESLint rule warns when more than one `.to()` is chained. Use `.tap()` for fire-and-forget side effects instead. ```ts .to(log()) // Print to console .to(http({ url: "https://api.com" })) // POST to external API .to(json({ path: "./output.json" })) // Write to file .to(direct("next-stage")) // Hand off to another capability ``` ## Multiple capabilities in one file A single `craft()` call can define multiple capabilities by chaining `.id().from().to()` blocks. This is useful for grouping related capabilities that belong to the same domain. ```ts // capabilities/calendar.ts export default craft() .id("calendar.fetch-events") .from(http({ path: "/calendar/events", method: "GET" })) .transform(mapCalendarEvents) .to(log()) .id("calendar.create-event") .from(http({ path: "/calendar/events", method: "POST" })) .validate(eventSchema) .to(googleCalendar()) ``` Each `.id()` starts a new capability definition. Every ID must be unique -- it is what identifies the capability at runtime, not the filename. ## Inter-capability communication Capabilities can pass data to each other using `direct()`. This keeps each capability focused on a single concern: ```ts // capabilities/fetch-orders.ts export default craft() .id("fetch-orders") .from(timer({ intervalMs: 300_000 })) .transform(fetchNewOrders) .to(direct("process-orders")); // capabilities/process-orders.ts export default craft() .id("process-orders") .from(direct("process-orders", {})) .transform(fulfillOrder) .to(log()); ``` --- ## Related - [Operations reference](/docs/reference/operations) -- Full API: all operations with signatures and examples. # The Exchange The data envelope that flows through every capability. ## What is an exchange? Every piece of data that moves through a capability is wrapped in an **exchange**. 
When a source produces data, it becomes an exchange. Every operation receives that exchange and passes it along. The destination receives it last. An exchange has two parts: - **`body`** -- the main payload. This is your data: an object, a string, a number, whatever your capability is working with. - **`headers`** -- metadata about the exchange. Timestamps, IDs, adapter-specific context, and anything you want to carry alongside the data without putting it in the body. ```json { "id": "a3f4e1b2-9c6d-4e8a-b1f3-2d7c0e5a9f12", "body": { "to": "alice@example.com", "subject": "Your order is confirmed" }, "headers": { "routecraft.correlation_id": "req-abc-123", "routecraft.route": "send-confirmation" } } ``` ## Body The body is what your operations act on. `.transform()`, `.filter()`, and `.process()` all receive the current body (or the full exchange) and return something new. ```ts craft() .id('greet') .from(simple({ name: 'Alice' })) .transform((body) => `Hello, ${body.name}!`) // body is { name: 'Alice' } .to(log()) // body is now 'Hello, Alice!' ``` The body type flows through the DSL. TypeScript tracks what shape the body is at each step, giving you full type safety throughout the pipeline. ## Headers Headers travel alongside the body without being part of it. They are useful for metadata you want available throughout the pipeline but do not want polluting the body. Set a header with `.header()`: ```ts craft() .id('process-order') .from(simple({ orderId: '123', amount: 49.99 })) .header('x-tenant', 'acme-corp') .header('x-priority', (exchange) => exchange.body.amount > 100 ? 'high' : 'normal') .process((exchange) => { const tenant = exchange.headers['x-tenant'] // 'acme-corp' const priority = exchange.headers['x-priority'] // 'normal' return exchange }) .to(log()) ``` Headers can be static values or derived from the exchange at runtime. 
## Built-in headers Routecraft sets a number of `routecraft.*` headers automatically on every exchange: | Header | Description | | --- | --- | | `routecraft.exchange_id` | Unique ID for this exchange | | `routecraft.correlation_id` | Shared ID across split/tap branches for tracing | | `routecraft.route` | ID of the capability that produced this exchange | | `routecraft.context_id` | ID of the running context | These are useful for logging, debugging, and correlating exchanges across capability chains. ## Body vs full exchange access Most operations give you a choice: work with just the body, or the full exchange. **Body only** with `.transform()`: ```ts .transform((body) => body.toUpperCase()) ``` **Full exchange** with `.process()`: ```ts .process((exchange) => { const tenantId = exchange.headers['x-tenant'] return { ...exchange, body: { ...exchange.body, tenantId } } }) ``` **Full exchange** with `.filter()`: ```ts .filter((exchange) => exchange.headers['x-priority'] === 'high') ``` Use `.transform()` when you only need the data. Use `.process()` or `.filter()` when you need headers, correlation IDs, or the context. ## Exchange in taps When you `.tap()`, the tap receives a **deep copy** of the exchange with a new ID. The correlation ID is preserved so you can trace the tap back to its parent exchange. The main pipeline continues immediately without waiting for the tap. ```ts craft() .id('order-pipeline') .from(source) .tap((exchange) => { // exchange.headers['routecraft.correlation_id'] links back to the parent auditLog.write(exchange) }) .to(destination) ``` --- ## Related - [Exchange headers reference](/docs/reference/configuration#headers) -- Full list of built-in routecraft.* headers. # Operations The steps that transform, filter, and route data inside a capability. ## What are operations? Operations are the verbs of the DSL. They run in the order you write them -- the exchange passes through each one in sequence. 
```ts craft() .id('process-order') .from(timer({ intervalMs: 60_000 })) .transform((body) => normalise(body)) .filter((ex) => ex.body.amount > 0) .enrich(http({ url: '/inventory' })) .tap(log()) .to(destination) ``` ## Operation categories ### Capability-level Capability-level (route-level) operations configure the capability itself. They go **before** `.from()` and apply to the entire capability, not to individual operations. `.from()` is the most important one -- it defines the source adapter and creates the capability. Everything before it (`.id()`, `.batch()`) is configuration. Everything after it operates on exchanges. ### Transform Transform operations reshape the data as it flows through the pipeline. They receive the current exchange and return a new version of it. The distinction between them is how much of the exchange they expose. `.transform()` receives the body only and returns the new body -- the right choice for most data reshaping. `.process()` receives the full exchange, giving access to headers and context. `.map()` projects fields into a new typed shape. `.enrich()` calls an adapter and **merges** the result into the body rather than replacing it. `.header()` sets metadata without touching the body at all. ### Flow control Flow control operations decide which exchanges continue and how they are split or merged. `.filter()` drops exchanges that do not match a predicate -- the exchange simply does not continue downstream. Return `{ reason: "..." }` instead of `false` to record why in telemetry. `.validate()` checks the body against a StandardSchema (Zod, Valibot, ArkType); invalid exchanges are dropped with a reason describing which fields failed. `.split()` fans an array body out into one exchange per item, so each can be processed independently. `.aggregate()` collects those back into a single exchange. `.choice()` [wip] routes to different sub-pipelines based on conditions, like a switch statement for data flows.
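Put together, the flow-control operations compose like this. A sketch: the no-argument forms of `.split()` and `.aggregate()` and the order shape are illustrative, not confirmed signatures:

```ts
import { z } from 'zod'

const orderSchema = z.object({
  orders: z.array(z.object({ id: z.string(), amount: z.number() })),
})

craft()
  .id('orders.clean')
  .from(http({ path: '/orders', method: 'POST' }))
  .validate(orderSchema)             // invalid bodies are dropped with a reason
  .transform((body) => body.orders)  // body is now an array
  .split()                           // fan out: one exchange per order
  .filter((ex) =>
    ex.body.amount > 0 ? true : { reason: 'non-positive amount' }) // reason lands in telemetry
  .aggregate()                       // collect the survivors back into one exchange
  .to(log())
```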
### Wrappers Wrappers modify the behaviour of the **next operation only**. They do not stand alone -- they must be followed by the operation they wrap, placed immediately before it. `.retry()` re-runs the next operation on failure. `.timeout()` cancels it if it takes too long. `.throttle()` rate-limits it. `.delay()` adds a pause before it runs. `.onError()` [wip] catches any error and lets you provide a fallback exchange. `.cache()` [wip] skips re-running if the same input has been seen before. Multiple wrappers can be stacked. They apply in outside-in order, so the first listed is the outermost. This means the order changes the semantics: ```ts // Each retry attempt gets a fresh 5s timeout .retry({ maxAttempts: 3 }) .timeout(5000) .process(slowOp) // Total 30s budget shared across all retry attempts .timeout(30000) .retry({ maxAttempts: 3 }) .process(flakyOp) ``` ### Side effects `.to()` sends the exchange to a destination adapter and ends the main pipeline. If the adapter returns a value, the body is replaced with it. `.tap()` is fire-and-forget. It gets a deep copy of the exchange with the correlation ID preserved and runs in the background while the main pipeline continues immediately. Use `.tap()` for logging, metrics, and auditing that should never slow down the critical path. --- ## Related - [Operations reference](/docs/reference/operations) -- Full API: all operations with signatures, options, and examples. # Adapters Connectors that link your capabilities to the outside world. ## What are adapters? Adapters are the boundary between Routecraft and external systems. They handle the integration details -- making HTTP calls, reading files, triggering on a schedule -- so your capabilities stay focused on business logic. Every capability starts with a source adapter in `.from()` and ends with a destination adapter in `.to()`. Operations in the middle can also use adapters to enrich data or observe side effects. 
## The three adapter roles ### Source A source produces data and starts the flow. It goes in `.from()`. ```ts // Triggered by a timer .from(timer({ intervalMs: 60_000 })) // One-shot with a fixed payload .from(simple({ report: 'daily-summary' })) // Receives messages from another capability .from(direct('incoming-jobs', {})) ``` ### Destination A destination receives the final exchange. It goes in `.to()`. ```ts .to(log()) .to(http({ method: 'POST', url: 'https://api.example.com/events' })) .to(json({ path: './output.json' })) .to(direct('next-stage')) ``` If the destination returns a value, the exchange body is replaced with it. If it returns nothing, the body is unchanged. ### Processor A processor sits in the middle of a pipeline and modifies the exchange. It goes in `.process()`. ```ts .process(myCustomProcessor) ``` Any `Destination` adapter can also be passed to `.tap()`. The `.tap()` operation is what makes it fire-and-forget -- the adapter itself is still just a `Destination`. ## Configuring adapters Most adapters accept an options object. Options can be static values or functions that derive a value from the exchange at runtime. ```ts // Static .to(http({ method: 'POST', url: 'https://api.example.com/events' })) // Dynamic -- derived from the exchange .to(http({ method: 'POST', url: (exchange) => `https://api.example.com/users/${exchange.body.userId}`, body: (exchange) => exchange.body, })) ``` ### MergedOptions and craft config Many adapters support **MergedOptions**: they merge their own options with any matching config registered in your project's `craft.config.ts`. This means you can define shared settings once -- connection strings, base URLs, credentials -- and adapters pick them up automatically without repeating them at every call site. 
```ts // craft.config.ts export default defineConfig({ adapters: { http: { baseUrl: 'https://api.example.com', headers: { Authorization: `Bearer ${process.env.API_KEY}` }, }, }, }) ``` ```ts // capability file -- base URL and auth header come from craft.config.ts .to(http({ method: 'POST', path: '/events' })) ``` Options passed directly to the adapter always take precedence over config-level defaults. See the [Adapters reference](/docs/reference/adapters) for which adapters support MergedOptions. --- ## Related - [Adapters reference](/docs/reference/adapters) -- Full catalog with all options and signatures. - [Creating adapters](/docs/advanced/custom-adapters) -- Build your own source, destination, or processor adapter. # Advanced Extend the Routecraft runtime with cross-cutting behaviour. ## What is a plugin? A plugin is code that runs once when the context starts, before any capabilities are registered. It has access to the full `CraftContext` and can: - Subscribe to lifecycle events (capability started, error occurred, context stopped) - Write shared state to the context store for adapters to read - Register additional capabilities dynamically **Plugins vs capabilities:** a capability defines what your system does. A plugin extends how the runtime behaves. Logging, metrics, tracing, auth headers, and connection pooling are all plugin concerns, not capability concerns. ## Writing a plugin A plugin is a function that receives the context: ```ts // plugins/logger.ts import { type CraftContext } from '@routecraft/routecraft' export default function loggerPlugin(context: CraftContext) { context.on('route:started', ({ details: { route } }) => { context.logger.info(`Started: ${route.definition.id}`) }) context.on('error', ({ details: { error, route } }) => { context.logger.error(error, `Error in ${route?.definition.id ?? 
'context'}`) }) } ``` Or as an object if you need a `register` step: ```ts // plugins/metrics.ts export default { async register(context: CraftContext) { context.setStore('metrics.counters', { started: 0, errors: 0 }) context.on('route:started', ({ context }) => { const counters = context.getStore('metrics.counters') as any counters.started += 1 }) }, } ``` ## Registering a plugin Pass plugins in `craft.config.ts`: ```ts // craft.config.ts import type { CraftConfig } from '@routecraft/routecraft' import logger from './plugins/logger' import metrics from './plugins/metrics' const config: CraftConfig = { plugins: [logger, metrics], } export default config ``` ## Setting global adapter defaults The most common plugin pattern is writing to the context store so adapters can read global configuration instead of requiring it per-capability. ```ts // plugins/defaults.ts export default function defaults(context: CraftContext) { context.setStore('db.config', { connectionString: process.env.DB_URL, poolSize: 10, }) context.setStore('api.defaults', { headers: { Authorization: `Bearer ${process.env.API_TOKEN}` }, }) } ``` An adapter reads it at call time: ```ts class DbAdapter implements Destination { async send(exchange) { const config = exchange.context.getStore('db.config') as { connectionString: string } await db(config.connectionString).insert(exchange.body) } } ``` This keeps connection strings and tokens out of every capability file. ## Managing external services Plugins can manage long-lived external processes. The built-in `mcpPlugin` demonstrates this pattern: it spawns stdio MCP server subprocesses, monitors their health, and restarts them with exponential backoff when they crash. 
```ts import { mcpPlugin } from '@routecraft/ai' const config: CraftConfig = { plugins: [ mcpPlugin({ clients: { filesystem: { transport: 'stdio', command: 'npx', args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'], }, }, maxRestarts: 5, }), ], } ``` The plugin starts each subprocess when the context starts and tears them down when it stops. Tools from all sources (local routes, stdio clients, HTTP clients) are collected into a unified registry accessible from the context store. ## Lifecycle events Plugins subscribe to events using `context.on(eventName, handler)`. Common events include `route:started`, `route:stopped`, `context:started`, `context:stopped`, and `error`. See the [Events reference](/docs/reference/events) for the full list. ## Dynamically registering capabilities Because plugins run before capabilities are registered, they can add capabilities to the context at startup: ```ts // plugins/admin.ts export default function adminPlugin(context: CraftContext) { if (process.env.ENABLE_ADMIN === 'true') { context.registerRoutes( craft() .id('admin-health') .from(simple({ ok: true })) .to(log()) .build()[0] ) } } ``` --- ## Related - [Plugins reference](/docs/reference/plugins) -- Full API for plugin interfaces and context methods. - [Monitoring](/docs/introduction/monitoring) -- Observability patterns built on plugins and events. # Events Observe and react to what happens inside the runtime without touching capability code. ## What is the event system? Every significant thing that happens in Routecraft emits an event: context startup, capability lifecycle, individual exchange progress, retry attempts, batch flushes. You can subscribe to any of these from a plugin, an adapter, or anywhere you have access to the `CraftContext`. Events are the primary hook for cross-cutting concerns: logging, metrics, tracing, alerting, and audit trails. 
## Subscribing via craft config The simplest way to react to events is via the `on` property in `craft.config.ts`. This works with `craft run` out of the box -- no plugin required. ```ts // craft.config.ts import type { CraftConfig } from '@routecraft/routecraft' const config: CraftConfig = { on: { 'context:started': ({ ts }) => { console.log(`Context ready at ${ts}`) }, 'error': ({ details: { error, route } }) => { console.error(`Error in ${route?.definition.id ?? 'context'}`, error) }, 'route:*:exchange:failed': ({ details: { routeId, error } }) => { alerts.send(routeId, error) }, }, } export default config ``` Each key is an event name or wildcard pattern. The value can be a single handler or an array of handlers. ## Subscribing via a plugin When you need the full context API (dynamic subscriptions, `context.once`, cleanup), use a plugin instead: Call `context.on(event, handler)` with an event name or pattern. The handler receives `{ ts, context, details }`. ```ts // plugins/logger.ts import { type CraftContext } from '@routecraft/routecraft' export default function loggerPlugin(ctx: CraftContext) { ctx.on('context:started', ({ ts }) => { ctx.logger.info(`Context ready at ${ts}`) }) ctx.on('route:started', ({ details: { route } }) => { ctx.logger.info(`Capability running: ${route.definition.id}`) }) ctx.on('error', ({ details: { error, route } }) => { ctx.logger.error(error, `Error in ${route?.definition.id ?? 
'context'}`) }) } ``` Use `context.once` when you only need the first occurrence: ```ts ctx.once('context:started', () => { console.log('Ready -- fires once only') }) ``` To unsubscribe, call the function returned by `context.on`: ```ts const unsub = ctx.on('route:started', handler) unsub() // stops receiving events ``` ## Event naming convention Event names are colon-separated segments that describe scope from broad to specific: ```text context:started route:started route:{capabilityId}:exchange:completed route:{capabilityId}:operation:to:{adapterId}:stopped route:{capabilityId}:operation:retry:attempt plugin:{pluginId}:started ``` This structure is what makes wildcard subscriptions useful. ## Wildcard patterns Subscribe to a group of events using glob patterns. **`*`** matches exactly one segment. **`**`** matches zero or more segments. ```ts // Every event emitted by the runtime ctx.on('*', ({ ts, details }) => { audit.write({ ts, details }) }) // All events for a specific capability ctx.on('route:order-processor:**', ({ ts, details }) => { trace.record(ts, details) }) // Exchange completed or failed on any capability ctx.on('route:*:exchange:completed', ({ details }) => { metrics.increment('exchange.completed') }) ctx.on('route:*:exchange:failed', ({ details: { error } }) => { alerts.send(error) }) // All operation events across all capabilities ctx.on('route:*:operation:**', ({ details }) => { observability.track(details) }) ``` ## Emitting custom events from plugins Plugins can emit their own events on the context for other plugins or adapters to observe: ```ts // plugins/auth.ts export default function authPlugin(ctx: CraftContext) { ctx.on('route:started', ({ details: { route } }) => { // Emit a custom event that other plugins can subscribe to ctx.emit('plugin:auth:capability:secured', { capabilityId: route.definition.id, }) }) } ``` Any subscriber using `plugin:auth:**` or `plugin:auth:capability:secured` will receive it. 
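On the receiving side, another plugin subscribes with an ordinary pattern. A sketch, where `audit` stands in for whatever sink you use:

```ts
// plugins/audit.ts
import { type CraftContext } from '@routecraft/routecraft'

export default function auditPlugin(ctx: CraftContext) {
  // Matches every event the auth plugin emits, custom events included
  ctx.on('plugin:auth:**', ({ ts, details }) => {
    audit.write({ ts, details }) // hypothetical audit sink
  })
}
```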
## Adapter metadata in operation events Adapters can expose structured metadata that is included in their operation events. This is useful for enriching traces or logs with adapter-specific context like HTTP status codes, response sizes, or queue depths. ```ts import { type Destination, type Exchange } from '@routecraft/routecraft' class HttpStorageAdapter implements Destination { readonly adapterId = 'my.http-storage' private url = 'https://storage.example.com/items' private lastStatus?: number async send(exchange: Exchange) { const res = await fetch(this.url, { method: 'POST', body: JSON.stringify(exchange.body) }) this.lastStatus = res.status } getMetadata(): Record<string, unknown> { return { statusCode: this.lastStatus } } } ``` The metadata appears under `details.metadata` in the corresponding `operation:to:{adapterId}:stopped` event. ## Common patterns ### Log every exchange result ```ts ctx.on('route:*:exchange:completed', ({ details: { routeId, exchangeId, duration } }) => { logger.info({ routeId, exchangeId, duration }, 'exchange completed') }) ctx.on('route:*:exchange:failed', ({ details: { routeId, exchangeId, error } }) => { logger.error({ routeId, exchangeId, error }, 'exchange failed') }) ``` ### Count retries ```ts ctx.on('route:*:operation:retry:attempt', ({ details: { routeId, attemptNumber } }) => { metrics.increment('retry.attempt', { routeId }) }) ``` ### Alert on batch flush ```ts ctx.on('route:*:operation:batch:flushed', ({ details: { routeId, batchSize, reason } }) => { if (reason === 'time' && batchSize < 10) { alerts.warn(`Low throughput on ${routeId}: only ${batchSize} items in batch`) } }) ``` --- ## Related - [Events reference](/docs/reference/events) -- Full event catalog with all payload shapes and wildcard patterns. # Composing Capabilities Connect capabilities together to build multi-stage pipelines. The `direct()` adapter is an in-process channel that lets one capability hand off data to another. Each capability stays focused on a single concern; `direct()` connects them without coupling the files.
## Linear chain The simplest pattern: one capability fetches data, passes it to a processor, which passes it to a notifier. ```ts // capabilities/fetch-orders.ts export default craft() .id('orders.fetch') .from(timer({ intervalMs: 300_000 })) .transform(fetchNewOrders) .to(direct('orders.process')) ``` ```ts // capabilities/process-orders.ts export default craft() .id('orders.process') .from(direct('orders.process', {})) .transform(fulfillOrder) .to(direct('orders.notify')) ``` ```ts // capabilities/notify-orders.ts export default craft() .id('orders.notify') .from(direct('orders.notify', {})) .to(http({ method: 'POST', path: '/notifications' })) ``` The channel name is just a string -- use a namespaced convention (e.g. `domain.stage`) to keep them readable as the project grows. ## Fan-out To send to multiple downstream capabilities, use `.tap()` for all but the primary output. `.tap()` is fire-and-forget and does not alter the exchange. ```ts // capabilities/ingest-event.ts export default craft() .id('events.ingest') .from(http({ path: '/events', method: 'POST' })) .tap(direct('events.audit')) .tap(direct('events.metrics')) .to(direct('events.process')) ``` ```ts // capabilities/audit-event.ts export default craft() .id('events.audit') .from(direct('events.audit', {})) .to(json({ path: './logs/audit.jsonl' })) ``` ```ts // capabilities/metrics-event.ts export default craft() .id('events.metrics') .from(direct('events.metrics', {})) .transform(({ type }) => ({ counter: type })) .to(http({ method: 'POST', path: '/metrics' })) ``` ## Dynamic routing The destination channel can be resolved at runtime from the exchange body or headers. This lets a single capability route to different consumers without knowing them all in advance. 
```ts // capabilities/route-by-priority.ts export default craft() .id('jobs.route') .from(http({ path: '/jobs', method: 'POST' })) .to(direct((exchange) => `jobs.${exchange.body.priority}`)) ``` ```ts // capabilities/high-priority.ts export default craft() .id('jobs.high') .from(direct('jobs.high', {})) .transform(processUrgent) .to(log()) ``` ```ts // capabilities/normal-priority.ts export default craft() .id('jobs.normal') .from(direct('jobs.normal', {})) .transform(processNormal) .to(log()) ``` ## Schema validation on receive The source side of `direct()` accepts a `schema` option. Routecraft validates the incoming body before the capability runs and throws `RC5002` if validation fails. ```ts import { z } from 'zod' export default craft() .id('orders.process') .from(direct('orders.process', { schema: z.object({ orderId: z.string(), items: z.array(z.string()), }), })) .transform(fulfillOrder) .to(log()) ``` ## How direct() knows its role `direct()` is overloaded -- the number of arguments determines whether it acts as a source or destination: - **`direct('channel', options)`** -- two arguments, acts as a **source** (`.from()`) - **`direct('channel')`** -- one argument, acts as a **destination** (`.to()`, `.tap()`) One channel name, one import, two roles. --- ## Related - [Capabilities](/docs/introduction/capabilities) -- Author small, focused capabilities using the DSL. - [Adapters reference](/docs/reference/adapters) -- Full catalog with all options and signatures. # Error Handling Catch pipeline errors and recover gracefully with `.error()`. By default, when a step throws an unhandled error, Routecraft logs it and emits `error` and `exchange:failed` events -- then swallows the error so the route keeps running. `.error()` extends this behavior with a custom recovery handler. ## Basic usage Define `.error()` before `.from()`. 
When any step in the pipeline throws, the handler is invoked instead:

```ts
craft()
  .id('process-orders')
  .error((error, exchange) => {
    return { status: 'failed', reason: (error as Error).message }
  })
  .from(timer({ intervalMs: 60_000 }))
  .transform(fetchOrders)
  .to(processOrder)
```

The handler's return value becomes the route's final exchange body. The pipeline does not resume after the handler runs.

## Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `error` | `unknown` | The thrown error |
| `exchange` | `Exchange` | The exchange at the point of failure -- headers include route id, correlation id, and operation type |
| `forward` | `(routeId, payload) => Promise<unknown>` | Send a payload to another capability via the direct adapter |

## The `forward` function

The third parameter, `forward`, sends a payload to another capability by route id and returns its result. It uses the direct adapter channel internally -- no extra transport or configuration is needed.

```ts
forward(routeId: string, payload: unknown): Promise<unknown>
```

| Argument | Description |
|----------|-------------|
| `routeId` | The target capability's direct endpoint id (must match the string passed to `direct()` in the target's `.from()`) |
| `payload` | Any value -- becomes the target capability's exchange body |
| **returns** | The final exchange body produced by the target capability's pipeline |

`forward` is async. The error handler waits for the target capability to finish processing and returns whatever that capability produces. This means you can use the target's result as the recovery value for the failed capability.

### Example: delegate to a dedicated error capability

```ts
// capabilities/process-orders.ts
craft()
  .id('process-orders')
  .error(async (error, exchange, forward) => {
    // Send failure details to the error capability.
    // forward() returns what the error capability's pipeline produces.
    const result = await forward('errors.orders', {
      originalBody: exchange.body,
      reason: (error as Error).message,
      failedAt: exchange.headers['routecraft.operation'],
    })
    // result is now the recovery value for this capability
    return result
  })
  .from(timer({ intervalMs: 60_000 }))
  .transform(fetchOrders)
  .to(processOrder)
```

```ts
// capabilities/error-orders.ts
craft()
  .id('errors.orders')
  .from(direct('errors.orders', {
    description: 'Receives failed order payloads for alerting',
  }))
  .transform((body) => {
    // Log, enrich, or reshape the failure payload
    return { alerted: true, reason: body.reason }
  })
  .to(http({ url: 'https://alerts.example.com/orders' }))
```

In this example, `forward('errors.orders', ...)` sends the failure payload to `errors.orders`, waits for it to run its full pipeline (transform then HTTP call), and returns `{ alerted: true, reason: '...' }` back to the error handler. That value becomes the final exchange body for `process-orders`.

### When not to use `forward`

If you only need to log or return a static fallback, you do not need `forward` at all. Just return a value directly:

```ts
.error((error) => {
  return { status: 'failed', reason: (error as Error).message }
})
```

## When the error handler itself throws

If your `.error()` handler throws, the context takes over:

1. The error is logged
2. The global `error` event fires (same as the default no-handler path)
3. `route:{capabilityId}:exchange:failed` fires with the handler's error
4. `route:{capabilityId}:operation:error:failed` fires so you can distinguish handler failures from step failures
5. The route stays alive -- it will process the next message normally

This means you always have a safety net. Even a broken error handler cannot crash the route.
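A common refinement is to recover only from error types you expect and rethrow the rest, so unexpected failures still surface through the handler-failure path above. A sketch -- the `ValidationError` class here is a hypothetical application-level type, not part of Routecraft:

```ts
// Recover from known, expected errors; rethrow anything else so the
// context's safety net (logging + error events) handles it.
// ValidationError is a hypothetical app-level error class.
class ValidationError extends Error {}

function recoverKnown(error: unknown): { status: string; reason: string } {
  if (error instanceof ValidationError) {
    return { status: 'rejected', reason: error.message }
  }
  // Rethrowing from inside .error() triggers the handler-failure path
  throw error
}
```

Used inside `.error()`, the rethrow means a surprise failure is reported as a handler failure rather than silently converted into a recovery value.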
## Events

When `.error()` is defined, the following events are emitted instead of the default `error` + `exchange:failed` pair:

| Event | When |
|-------|------|
| `route:{capabilityId}:operation:error:invoked` | Error handler is called |
| `route:{capabilityId}:operation:error:recovered` | Handler returned successfully |
| `route:{capabilityId}:operation:error:failed` | Handler itself threw |

On successful recovery, only `error:invoked` and `error:recovered` fire -- `exchange:failed` does **not** fire because the exchange was recovered. If the handler throws, all three fire: `error:invoked`, `error:failed`, and `exchange:failed`.

### Subscribing to events

Use `ctx.on()` to listen. Wildcards let you monitor error handling across all routes:

```ts
const ctx = new ContextBuilder()
  .routes(myRoutes)
  .on('route:*:operation:error:invoked', ({ details }) => {
    console.log(
      `Error handler called on ${details.routeId}`,
      `failed at: ${details.failedOperation}`,
    )
  })
  .on('route:*:operation:error:recovered', ({ details }) => {
    console.log(`Recovered: ${details.routeId}`)
  })
  .on('route:*:operation:error:failed', ({ details }) => {
    // The handler itself failed -- alert
    alertOps(`Error handler crashed on ${details.routeId}`, details.originalError)
  })
  .build()
```

For a catch-all, subscribe to the global `error` event. This fires for all unhandled errors and for handler failures:

```ts
ctx.on('error', ({ details }) => {
  console.error('Unhandled error:', details.error)
})
```

---

## Related

- [Composing Capabilities](/docs/advanced/composing-capabilities) -- Build modular systems with direct() and reusable capability chains.
- [Events](/docs/introduction/events) -- Subscribe to error and exchange lifecycle events.

# Creating adapters

Build your own source, destination, or processor adapter.

When the built-in adapters do not cover a use case, you can write your own. Adapters are plain TypeScript classes that implement one of three interfaces.

## Source

A source produces data and starts the pipeline.
Implement the `Source` interface:

```ts
import { type Source } from '@routecraft/routecraft'

class MyQueueAdapter implements Source {
  readonly adapterId = 'acme.adapter.my-queue'

  async subscribe(context, handler, abort) {
    while (!abort.signal.aborted) {
      const message = await queue.receive()
      await handler(message)
    }
  }
}
```

## Destination

A destination receives the final exchange. Implement the `Destination` interface:

```ts
import { type Destination } from '@routecraft/routecraft'

class MyStorageAdapter implements Destination<unknown, void> {
  readonly adapterId = 'acme.adapter.my-storage'

  async send(exchange) {
    await storage.write(exchange.body)
  }
}
```

If `send` returns a value, the exchange body is replaced with it. If it returns nothing, the body is unchanged.

Use a `Destination` with `.enrich()` when you need to fetch external data and merge it into the body:

```ts
class MyEnricherAdapter implements Destination {
  readonly adapterId = 'acme.adapter.my-enricher'

  async send(exchange) {
    return fetchExtra(exchange.body.id)
  }
}

// The returned value is merged into the body
.enrich(myEnricher({ apiKey: process.env.ENRICH_KEY }))
```

## Processor

A processor sits in the middle of a pipeline and modifies the exchange. Implement the `Processor` interface. Use this when you need header or context access alongside body reshaping -- for body-only changes, `.transform()` is the simpler choice:

```ts
import { type Processor } from '@routecraft/routecraft'

class MyTransformAdapter implements Processor {
  readonly adapterId = 'acme.adapter.my-transform'

  async process(exchange) {
    const tenantId = exchange.headers['x-tenant']
    return { ...exchange, body: { ...exchange.body, tenantId } }
  }
}
```

## Factory function

Expose your adapter as a factory function so it reads naturally in the DSL.
The recommended pattern is one factory per adapter -- one name, one import: ```ts // adapters/my-storage.ts export function myStorage(options?: MyStorageOptions) { return new MyStorageAdapter(options) } // Usage -- destination .to(myStorage({ bucket: 'uploads' })) ``` ```ts // adapters/my-queue.ts export function myQueue(options?: MyQueueOptions) { return new MyQueueAdapter(options) } // Usage -- source .from(myQueue({ queue: 'orders' })) ``` ```ts // adapters/my-enricher.ts export function myEnricher(options?: MyEnricherOptions) { return new MyEnricherAdapter(options) } // Usage -- enricher (merges result into body) .enrich(myEnricher({ apiKey: process.env.ENRICH_KEY })) ``` Keeping one factory per adapter makes imports predictable and avoids a proliferation of role-suffixed exports (`myQueueSource`, `myQueueDestination`, etc.). The adapter class itself handles the role -- the factory just wires up the options. An adapter class can implement multiple interfaces when it makes sense. A queue adapter, for example, may work as both a source and a destination: ```ts class MyQueueAdapter implements Source, Destination { readonly adapterId = 'acme.adapter.my-queue' async subscribe(context, handler, abort) { while (!abort.signal.aborted) { const message = await queue.receive(this.options.queue) await handler(message) } } async send(exchange) { await queue.send(this.options.queue, exchange.body) } } export function myQueue(options: MyQueueOptions) { return new MyQueueAdapter(options) } // Same factory, different positions .from(myQueue({ queue: 'orders' })) .to(myQueue({ queue: 'results' })) ``` ## Sharing state between adapters Adapters can use the context store to share state, read global configuration set by plugins, or maintain connections across exchanges. 
```ts class DbAdapter implements Destination { async send(exchange) { const config = exchange.context.getStore('db.config') await db(config.connectionString).insert(exchange.body) } } ``` See [Plugins](/docs/advanced/plugins) for how to populate the context store at startup. --- ## Related - [Adapters](/docs/introduction/adapters) -- How adapters work and how to configure them. - [Adapters reference](/docs/reference/adapters) -- Full catalog with all options and signatures. # Expose as MCP Run your capabilities as MCP tools for Claude, Cursor, and other AI clients. ## How it works Routecraft uses the Model Context Protocol (MCP) to expose capabilities as typed tools. You define the tool as a capability using the `mcp()` source adapter, run it with `craft run`, and point your AI client at the process. The AI can then call your tool with validated inputs -- nothing else is accessible. ## Install ```bash npm install @routecraft/ai zod ``` ## Define a capability A capability becomes an MCP tool when you use `mcp()` as its source. Give it a `description` the AI uses to decide when to call it, and a Zod `schema` for the input. ```ts // capabilities/search-orders.ts import { mcp } from '@routecraft/ai' import { craft, http } from '@routecraft/routecraft' import { z } from 'zod' export default craft() .id('orders.search') .from(mcp('orders.search', { description: 'Search orders by customer ID or date range', schema: z.object({ customerId: z.string().optional(), from: z.string().date().optional(), to: z.string().date().optional(), }), keywords: ['orders', 'search'], })) .transform(({ customerId, from, to }) => buildQuery(customerId, from, to)) .to(http({ method: 'GET', path: '/orders' })) ``` The `schema` is validated before the capability runs. Invalid inputs are rejected with a structured error before any business logic executes. ## Stdio transport (default) Stdio is the simplest transport. 
The AI client spawns Routecraft as a subprocess and communicates over stdin/stdout. No networking, no auth required.

### Claude Desktop

Edit `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS):

```json
{
  "mcpServers": {
    "my-tools": {
      "command": "npx",
      "args": ["@routecraft/cli", "run", "./capabilities/search-orders.ts"]
    }
  }
}
```

Restart Claude Desktop completely after saving. Look for the hammer icon in the input area.

### Cursor

Open **Cursor Settings** > **Features** > **Model Context Protocol**, then add:

```json
{
  "my-tools": {
    "command": "npx",
    "args": ["@routecraft/cli", "run", "./capabilities/search-orders.ts"]
  }
}
```

### Claude Code

Add the following to your `.mcp.json` (project-level) or `~/.claude/mcp.json` (global):

```json
{
  "mcpServers": {
    "my-tools": {
      "command": "npx",
      "args": ["@routecraft/cli", "run", "./capabilities/search-orders.ts"]
    }
  }
}
```

## HTTP transport

Use the HTTP transport when you want a long-running server that multiple clients can connect to, or when you need authentication. Add `mcpPlugin` to your config with `transport: 'http'`:

```ts
// craft.config.ts
import { mcpPlugin, jwt } from '@routecraft/ai'

export default {
  plugins: [
    mcpPlugin({
      transport: 'http',
      port: 3001,
      auth: jwt({ secret: process.env.JWT_SECRET! }),
    }),
  ],
}
```

Start the server with `craft run`, then point your AI client at it.

### Claude Desktop (HTTP)

```json
{
  "mcpServers": {
    "my-tools": {
      "url": "http://localhost:3001/mcp",
      "headers": { "Authorization": "Bearer <token>" }
    }
  }
}
```

### Cursor (HTTP)

```json
{
  "my-tools": {
    "url": "http://localhost:3001/mcp",
    "headers": { "Authorization": "Bearer <token>" }
  }
}
```

### Claude Code (HTTP)

```json
{
  "mcpServers": {
    "my-tools": {
      "url": "http://localhost:3001/mcp",
      "headers": { "Authorization": "Bearer <token>" }
    }
  }
}
```

## Authentication

When using HTTP transport, secure the endpoint with the `auth` option.
Routecraft ships with a built-in `jwt()` helper that verifies JWT signatures using `node:crypto` (zero dependencies). ```ts import { jwt } from '@routecraft/ai' // HMAC (HS256, default) auth: jwt({ secret: process.env.JWT_SECRET! }) // RSA (RS256) auth: jwt({ algorithm: 'RS256', publicKey: fs.readFileSync('./public.pem', 'utf-8'), }) ``` For other auth schemes, pass a custom `validator` function: ```ts auth: { validator: async (token) => { const user = await db.verifyApiKey(token) if (!user) return null return { subject: user.id, scheme: 'api-key', roles: user.roles } }, } ``` The validator receives the raw bearer token and returns an `AuthPrincipal` on success or `null` to reject with 401. The principal's fields (`subject`, `scheme`, `roles`, etc.) are set as exchange headers so your routes can read the caller's identity. See the [plugins reference](/docs/reference/plugins#mcpplugin) for the full `AuthPrincipal` field list. ## Production Pin the CLI version so your capabilities do not break on package updates: ```json { "mcpServers": { "my-tools": { "command": "npx", "args": [ "@routecraft/cli@2.0.0", "run", "/absolute/path/to/capabilities/search-orders.ts" ] } } } ``` Use absolute paths in production to avoid working-directory ambiguity. 
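The validator contract is easy to unit-test in isolation. Here is a sketch with an in-memory key store standing in for a real database lookup -- the store and its field names are assumptions for illustration; only `subject`, `scheme`, and `roles` come from the documented `AuthPrincipal` shape:

```ts
// Sketch of a custom bearer-token validator: return a principal for a
// valid token, or null to reject the request with 401.
// The in-memory Map is a stand-in for a real database lookup.
type AuthPrincipal = { subject: string; scheme: string; roles: string[] }

const apiKeys = new Map([
  ['key-123', { id: 'alice', roles: ['admin'] }],
])

async function validator(token: string): Promise<AuthPrincipal | null> {
  const user = apiKeys.get(token)
  if (!user) return null // rejected -> 401
  return { subject: user.id, scheme: 'api-key', roles: user.roles }
}
```

Because the validator is a plain async function, you can test the accept and reject paths without spinning up the MCP server at all.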
## Security - **Validate all inputs** -- every capability should have a Zod schema; Routecraft enforces it before execution - **Authenticate HTTP endpoints** -- always set `auth` when using HTTP transport in production - **Guardrails** -- use `.filter()` to reject exchanges that fail a business rule, and `.transform()` to sanitize or normalise values before they reach downstream systems - **Principle of least privilege** -- only expose capabilities the AI actually needs - **Audit trail** -- add `.tap(log())` to record every invocation; subscribe to `plugin:mcp:tool:**` events for MCP-specific tracing - **Never hardcode credentials** -- use `process.env` and `.env` files --- ## Related - [Call an MCP](/docs/advanced/call-an-mcp) -- Call external MCP servers from within a capability. - [AI Package reference](/docs/reference/ai) -- Full MCP adapter API and options. # Call an MCP Call tools on external MCP servers from within a capability. ## How it works The `mcpPlugin` connects your Routecraft context to one or more remote MCP servers. Once registered, you can call any tool on those servers using `.to(mcp('server:tool'))` or `.enrich(mcp('server:tool'))` inside any capability. ## Install ```bash npm install @routecraft/ai ``` ## Register remote servers Add `mcpPlugin` to your `craft.config.ts` and list the servers your capabilities need to reach: ```ts // craft.config.ts import { mcpPlugin } from '@routecraft/ai' import type { CraftConfig } from '@routecraft/routecraft' const config: CraftConfig = { plugins: [ mcpPlugin({ clients: { browser: { url: 'http://127.0.0.1:8089/mcp' }, search: { url: 'http://127.0.0.1:9000/mcp' }, }, }), ], } export default config ``` Each key under `clients` is the server alias you use in your capabilities. 
## Call a tool Use the `server:tool` shorthand in `.to()` to send the exchange body as tool arguments and replace it with the result: ```ts // capabilities/web-search.ts import { mcp } from '@routecraft/ai' import { craft, simple, log } from '@routecraft/routecraft' export default craft() .id('web.search') .from(simple({ query: 'Routecraft documentation' })) .to(mcp('search:web_search')) .to(log()) ``` Or use `.enrich()` to merge the result into the exchange body instead of replacing it: ```ts export default craft() .id('orders.enrich') .from(http({ path: '/orders/:id', method: 'GET' })) .enrich(mcp('search:lookup_customer')) .to(http({ method: 'POST', path: '/crm/orders' })) ``` ## Custom argument mapping By default, the exchange body is passed as-is to the tool. Use the `args` option to map the body to the exact shape the tool expects: ```ts .to(mcp('browser:navigate', { args: (exchange) => ({ url: exchange.body.targetUrl }), })) ``` ## Full URL (no plugin required) If you only need to call a single external tool and do not want to register it globally, pass the URL directly: ```ts .to(mcp({ url: 'http://127.0.0.1:8089/mcp', tool: 'navigate' })) ``` --- ## Related - [Expose as MCP](/docs/advanced/expose-as-mcp) -- Run your own capabilities as MCP tools for AI clients. - [AI Package reference](/docs/reference/ai) -- Full MCP adapter API and options. # Linting Enforce Routecraft best practices with ESLint. 
## Installation **pnpm:** ```bash pnpm add -D eslint @eslint/js typescript-eslint @routecraft/eslint-plugin-routecraft ``` ## Configuration Add the plugin to your ESLint flat config and spread the recommended preset: ```js // eslint.config.mjs import pluginJs from '@eslint/js' import tseslint from 'typescript-eslint' import routecraftPlugin from '@routecraft/eslint-plugin-routecraft' /** @type {import('eslint').Linter.Config[]} */ export default [ pluginJs.configs.recommended, ...tseslint.configs.recommended, { files: ['**/*.{js,mjs,cjs,ts}'], plugins: { '@routecraft/routecraft': routecraftPlugin }, ...routecraftPlugin.configs.recommended, }, ] ``` The `recommended` preset enables all rules at their default levels. See the [Linting reference](/docs/reference/linting) for the full rule list and defaults. ## Presets The plugin ships two presets: | Preset | Description | |--------|-------------| | `routecraftPlugin.configs.recommended` | Recommended rules at their default levels | | `routecraftPlugin.configs.all` | All rules enabled as errors | Use `recommended` for most projects. Use `all` if you want to enforce every rule strictly from the start. ## Customizing severity Override individual rules in your config to change severity or disable them: ```js // eslint.config.mjs export default [ // ... other configs { files: ['**/*.{js,mjs,cjs,ts}'], plugins: { '@routecraft/routecraft': routecraftPlugin }, ...routecraftPlugin.configs.recommended, rules: { // Downgrade to a warning '@routecraft/routecraft/require-named-route': 'warn', // Elevate to an error '@routecraft/routecraft/batch-before-from': 'error', // Turn off entirely '@routecraft/routecraft/mcp-server-options': 'off', }, }, ] ``` Valid severity values: `'error'`, `'warn'`, `'off'` (or `2`, `1`, `0`). --- ## Related - [Linting reference](/docs/reference/linting) -- Full rule catalog with defaults and descriptions. # Testing Test your capabilities with fast unit tests and optional E2E runs. 
## Quick start

Use `testContext()` to build a test context and `t.test()` to run the full lifecycle (start, wait for routes ready, drain, stop). Assert after `await t.test()`:

```ts
import { describe, it, expect, afterEach } from "vitest";
import { testContext, type TestContext } from "@routecraft/testing";
import helloRoute from "../capabilities/hello-world";

describe("hello capability", () => {
  let t: TestContext;

  afterEach(async () => {
    if (t) await t.stop();
  });

  it("emits and logs", async () => {
    t = await testContext().routes(helloRoute).build();
    await t.test();
    expect(t.logger.info).toHaveBeenCalled();
  });
});
```

**Tip:** `t.logger` is a spy (`vi.fn()` methods). Use `expect(t.logger.info).toHaveBeenCalled()` or inspect `t.logger.info.mock.calls` for log assertions.

## Vitest configuration

For a new project, use a single `vitest.config.mjs` at the project root:

```js
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    environment: "node",
    coverage: { provider: "v8", reporter: ["text", "lcov"] },
  },
});
```

## Route lifecycle in tests

Use `testContext()` and `t.test()` for the recommended flow. `t.test()` runs start → wait for all routes ready → drain → stop, so you don't need manual timeouts for direct/simple routes:

```ts
import { testContext, type TestContext } from "@routecraft/testing";
import routes from "../capabilities/hello-world"; // your capability

export const t = await testContext().routes(routes).build();
await t.test();
// Assert here: mocks, t.errors, t.ctx.getStore(), etc.
```

Checklist:

- Prefer `await t.test()` for full lifecycle; assert after it returns.
- Use `t.ctx` when you need the raw context (e.g. `t.ctx.start()`, `t.ctx.getStore()`).
- Use `t.logger` to assert on log calls (e.g. `expect(t.logger.info).toHaveBeenCalled()`).
- For custom timing (e.g. timer routes), use `t.ctx.start()` and `t.ctx.stop()` manually.
- Restore mocks in `beforeEach/afterEach`.
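The "drain" step is what removes the need for manual timeouts: it waits for in-flight exchanges to settle before stopping. A simplified model of the idea -- not `@routecraft/testing`'s actual code:

```ts
// Toy in-flight tracker: track() wraps async work, drain() resolves once
// nothing is pending. A simplified model of what a test drain step does.
function makeTracker() {
  let inFlight = 0
  const idleWaiters: Array<() => void> = []

  const track = async <T>(work: Promise<T>): Promise<T> => {
    inFlight++ // counted synchronously, before the first await
    try {
      return await work
    } finally {
      if (--inFlight === 0) idleWaiters.splice(0).forEach((resolve) => resolve())
    }
  }

  const drain = (): Promise<void> =>
    inFlight === 0
      ? Promise.resolve()
      : new Promise((resolve) => idleWaiters.push(resolve))

  return { track, drain }
}
```

The payoff is determinism: instead of guessing a sleep duration, the test simply waits until the pending-work counter hits zero.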
## Common testing patterns ### Using the spy adapter The `spy()` adapter is purpose-built for testing. It records all interactions and provides convenient assertion methods: ```ts import { spy } from "@routecraft/routecraft"; const spyAdapter = spy(); // Available properties: spyAdapter.received // Array of exchanges received spyAdapter.calls.send // Number of send() calls spyAdapter.calls.process // Number of process() calls (if used as processor) spyAdapter.calls.enrich // Number of enrich() calls (if used as enricher) // Methods: spyAdapter.reset() // Clear all recorded data spyAdapter.lastReceived() // Get the most recent exchange spyAdapter.receivedBodies() // Get array of just the body values ``` ### Spy on destinations to assert outputs ```ts import { testContext } from "@routecraft/testing"; import { craft, simple, spy } from "@routecraft/routecraft"; import { expect } from "vitest"; const spyAdapter = spy(); const route = craft().id("out").from(simple("payload")).to(spyAdapter); const t = await testContext().routes(route).build(); await t.test(); expect(spyAdapter.received).toHaveLength(1); expect(spyAdapter.received[0].body).toBe("payload"); expect(spyAdapter.calls.send).toBe(1); ``` ### Assert on log output `testContext().build()` returns a test context whose `t.logger` is a spy. Use it to assert on pino log calls (e.g. 
from `.to(log())` or adapter logging):

```ts
import { testContext } from "@routecraft/testing";
import { craft, simple, log } from "@routecraft/routecraft";
import { test, expect, vi } from "vitest";

test('logs messages correctly', async () => {
  const route = craft()
    .id("log-test")
    .from(simple("Hello, World!"))
    .to(log());

  const t = await testContext().routes(route).build();
  await t.test();

  expect(t.logger.info).toHaveBeenCalled();
  const loggedMessage = (t.logger.info as ReturnType<typeof vi.fn>).mock.calls[0][1];
  expect(loggedMessage).toContain("Hello, World!");
});
```

**Tip:** Use the `spy()` adapter instead of `log()` when you need more control over assertions.

Filter logs by route id (from `LogAdapter` headers):

```ts
const infoCalls = (t.logger.info as ReturnType<typeof vi.fn>).mock.calls.map((c) => c[0]);
const logsForRoute = infoCalls.filter(
  (arg) =>
    typeof arg === "object" &&
    arg != null &&
    "headers" in arg &&
    (arg as any).headers?.["routecraft.route"] === "channel-adapter-1",
);
```

### Test custom sources that await the final exchange

```ts
import { testContext } from "@routecraft/testing";
import { craft, spy } from "@routecraft/routecraft";

let observed: any;
const spyAdapter = spy();

const route = craft()
  .id("return-final")
  .from({
    subscribe: async (_ctx, handler, controller) => {
      try {
        observed = await handler("hello");
      } finally {
        controller.abort();
      }
    },
  })
  .transform((body: string) => body.toUpperCase())
  .to(spyAdapter)
  .transform((body: string) => `${body}!`);

const t = await testContext().routes(route).build();
await t.test();

expect(observed.body).toBe("HELLO!");
expect(spyAdapter.received[0].body).toBe("HELLO!");
```

### Timers and long-running routes

Use `.routesReadyTimeout(ms)` to give timer or slow-starting routes more time to become ready before `t.test()` proceeds:

```ts
const t = await testContext()
  .routesReadyTimeout(500)
  .routes(timerRoute)
  .build();
await t.test();
```

For cases where you need precise control over the run window, drive the lifecycle manually:

```ts
const t = await testContext().routes(timerRoute).build(); const execution = t.ctx.start(); await new Promise((r) => setTimeout(r, 150)); await t.ctx.stop(); await execution; ``` ## Assertion patterns ### Spy adapter assertions ```ts // Basic assertions expect(spyAdapter.received).toHaveLength(3); expect(spyAdapter.calls.send).toBe(3); // Body content validation expect(spyAdapter.receivedBodies()).toEqual(['msg1', 'msg2', 'msg3']); expect(spyAdapter.lastReceived().body).toBe('final-message'); // Header validation expect(spyAdapter.received[0].headers['routecraft.route']).toBe('my-route'); // Complex object validation const lastExchange = spyAdapter.lastReceived(); expect(lastExchange.body).toHaveProperty("original"); expect(lastExchange.body).toHaveProperty("additional"); ``` ### Using spy as processor or enricher ```ts // Test processing behavior const processSpy = spy(); const route = craft() .id("test-process") .from(simple("input")) .process(processSpy) // Use spy as processor .to(spy()); const t = await testContext().routes(route).build(); await t.test(); expect(processSpy.calls.process).toBe(1); expect(processSpy.received[0].body).toBe("input"); // Test enrichment behavior const enrichSpy = spy(); const route2 = craft() .id("test-enrich") .from(simple({ name: "John" })) .enrich(enrichSpy) // Use spy as enricher .to(spy()); const t2 = await testContext().routes(route2).build(); await t2.test(); expect(enrichSpy.calls.enrich).toBe(1); ``` ### Route validation ```ts // Ensure a route id is set after build const r = craft().id("x").from(simple("y")).to(spy()); expect(r.build()[0].id).toBe("x"); ``` ### Multiple spies in one route ```ts const transformSpy = spy(); const destinationSpy = spy(); const route = craft() .id("multi-spy") .from(simple("start")) .process(transformSpy) .to(destinationSpy); const t = await testContext().routes(route).build(); await t.test(); // Verify the pipeline expect(transformSpy.calls.process).toBe(1); 
expect(destinationSpy.calls.send).toBe(1); expect(transformSpy.received[0].body).toBe("start"); expect(destinationSpy.received[0].body).toBe("start"); // Assuming spy processes pass-through ``` ### Headers and correlation ```ts const captured: string[] = []; // inside a .process/.tap captured.push(exchange.headers["routecraft.correlation_id"] as string); expect(new Set(captured).size).toBe(1); ``` ## Run capability files Use the CLI to run compiled capability files/folders as an integration check: ```bash pnpm craft run ./examples/dist/hello-world.js ``` ## Troubleshooting - Hanging tests: use `await t.test()` for standard flows, or ensure you `await t.ctx.stop()` and then `await execution` when driving lifecycle manually. - Flaky timers: prefer fake timers or increase the wait to 100–200ms. - No logs captured: ensure your route includes `.to(log())` and assert on `t.logger.info` (or `t.logger.warn` / `t.logger.debug`) after `await t.test()`. - Errors in tests: check `t.errors` after `await t.test()`; Routecraft errors are collected automatically. --- ## Related - [Errors reference](/docs/reference/errors) -- RC error codes -- useful when asserting on t.errors in tests. # Deployment Deploy Routecraft as a Node.js process or a Docker container. ## Node.js server Routecraft runs on any provider that supports Node.js. Add a `start` script to your `package.json`: ```json { "scripts": { "start": "craft run ./capabilities/index.ts" } } ``` Run `npm run start` to launch. 
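Most Node.js hosts send `SIGTERM` before killing the process, so it is worth stopping the context cleanly on shutdown. Here is a sketch of an idempotent shutdown wrapper; the `ctx.stop()` wiring in the comments is an assumption based on the programmatic context API shown in the testing docs:

```ts
// Idempotent shutdown: stop() runs at most once even if SIGTERM and
// SIGINT both arrive. 'stop' stands in for something like ctx.stop().
function makeShutdown(stop: () => Promise<void>): () => Promise<void> {
  let stopping: Promise<void> | undefined
  return () => (stopping ??= stop())
}

// Hypothetical wiring in your entry point:
// const shutdown = makeShutdown(() => ctx.stop())
// process.on('SIGTERM', () => shutdown().then(() => process.exit(0)))
// process.on('SIGINT', () => shutdown().then(() => process.exit(0)))
```

Caching the in-flight promise means a second signal reuses the same stop sequence instead of racing a second one against it.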
## Docker Use a two-stage image to keep the runtime image small: ```dockerfile # 1) Dependencies FROM node:22-alpine AS deps WORKDIR /app RUN corepack enable COPY package.json pnpm-lock.yaml ./ RUN pnpm install --frozen-lockfile --prod # 2) Production runtime FROM node:22-alpine WORKDIR /app RUN corepack enable ENV NODE_ENV=production COPY --from=deps /app/node_modules ./node_modules COPY package.json ./ COPY capabilities ./capabilities CMD ["pnpm", "craft", "run", "./capabilities/index.ts"] ``` --- ## Related - [CLI reference](/docs/reference/cli) -- All craft CLI commands including run and build. # Monitoring Log and observe your capabilities at runtime. ## Capability-level logging Use `tap(log())` anywhere in a capability to emit a structured log of the current exchange without altering it. Use `tap(debug())` for verbose output you only want visible at debug level. Both can also be used as a final destination with `.to()`. ```ts import { craft, simple, log, debug } from '@routecraft/routecraft' export default craft() .id('order-pipeline') .from(simple({ orderId: '123' })) .tap(debug()) // debug-level: verbose, filtered out by default .transform(enrichOrder) .tap(log()) // info-level: visible in normal operation .to(log()) // log the final exchange as the destination ``` Each log entry includes `contextId`, `routeId`, `exchangeId`, and `correlationId` for end-to-end tracing in your log aggregator. To set the log level, pass `--log-level` to the CLI: ```bash craft run ./capabilities/orders.ts --log-level debug ``` ## Subscribing to events Use the `on` property in `craft.config.ts` to react to lifecycle and error events without writing a plugin: ```ts // craft.config.ts import type { CraftConfig } from '@routecraft/routecraft' export const craftConfig: CraftConfig = { on: { 'context:started': ({ ts }) => { console.log(`Ready at ${ts}`) }, 'error': ({ details: { error, route } }) => { console.error(`Error in ${route?.definition.id ?? 
'context'}`, error) }, 'route:*:exchange:failed': ({ details: { routeId, error } }) => { alerts.send(routeId, error) }, }, } ``` For the full event catalog see the [Events reference](/docs/reference/events). ## Writing a custom monitoring plugin If event subscriptions in `craft.config.ts` become unwieldy, extract them into a plugin so they can be reused across projects: ```ts // plugins/monitoring.ts import { type CraftContext } from '@routecraft/routecraft' export default function monitoring(ctx: CraftContext) { ctx.on('route:started', ({ details: { route } }) => { metrics.increment('route.started', { route: route.definition.id }) }) ctx.on('error', ({ details: { error, route } }) => { alerts.send({ route: route?.definition.id, code: error?.code, message: error?.message, }) }) ctx.on('context:stopped', () => { metrics.flush() }) } ``` Then register it in `craft.config.ts`: ```ts import monitoring from './plugins/monitoring' import type { CraftConfig } from '@routecraft/routecraft' export const craftConfig: CraftConfig = { plugins: [monitoring], } ``` ## Telemetry plugin The built-in `telemetry()` plugin instruments the framework with [OpenTelemetry](https://opentelemetry.io/) traces and persists data to a local SQLite database for `craft tui`. ```ts import { telemetry } from '@routecraft/routecraft' export const craftConfig = { plugins: [telemetry()], } ``` The database is written to `.routecraft/telemetry.db` in the current working directory. 
`better-sqlite3` must be installed: ```bash pnpm add better-sqlite3 ``` ### Configuration ```ts telemetry({ sqlite: { dbPath: './logs/telemetry.db', // custom path (default .routecraft/telemetry.db) eventBatchSize: 100, // events buffered before flush (default 50) eventFlushIntervalMs: 2000, // max ms between flushes (default 1000) maxExchanges: 50_000, // rows to retain (default 50000, 0 to disable) maxEvents: 100_000, // rows to retain (default 100000, 0 to disable) }, }) ``` ### Exporting traces to an external provider Because the telemetry plugin uses OpenTelemetry, you can export traces to any OTel-compatible backend alongside the local SQLite database. Install the OTel SDK and an OTLP exporter: ```bash pnpm add @opentelemetry/sdk-trace-base @opentelemetry/exporter-trace-otlp-http ``` Then configure a `TracerProvider` and pass it to `telemetry()`. Here is an example using [Better Stack](https://betterstack.com/): ```ts import { telemetry } from '@routecraft/routecraft' import { BasicTracerProvider, BatchSpanProcessor } from '@opentelemetry/sdk-trace-base' import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http' const tracerProvider = new BasicTracerProvider() tracerProvider.addSpanProcessor( new BatchSpanProcessor( new OTLPTraceExporter({ url: 'https://in-otel.logs.betterstack.com/traces', headers: { Authorization: 'Bearer ' }, }) ) ) tracerProvider.register() export const craftConfig = { plugins: [telemetry({ tracerProvider })], } ``` This sends OTel traces to Better Stack while keeping the local SQLite database for the TUI. The same pattern works with Grafana Tempo, Datadog, Jaeger, or any backend that accepts OTLP. Just change the exporter URL and headers. 
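For example, to verify instrumentation locally before wiring up a hosted backend, you could swap in OpenTelemetry's console exporter. A sketch, assuming the same `@opentelemetry/sdk-trace-base` package installed above (`ConsoleSpanExporter` and `SimpleSpanProcessor` are standard exports of that package):

```ts
import { telemetry } from '@routecraft/routecraft'
import { BasicTracerProvider, SimpleSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/sdk-trace-base'

// Print each span to stdout as it ends -- handy for checking that
// routes and exchanges are being traced before pointing at a real backend.
const tracerProvider = new BasicTracerProvider()
tracerProvider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()))
tracerProvider.register()

export const craftConfig = {
  plugins: [telemetry({ tracerProvider })],
}
```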
To disable the SQLite backend entirely (external only): ```ts telemetry({ tracerProvider, disableSqlite: true }) ``` ### What gets traced The plugin creates OTel spans for: - **Route lifecycle**: registration, start, stop (long-lived spans) - **Exchange lifecycle**: start, complete, fail, drop (per-message spans with duration) - **Step execution**: each adapter operation as a child span (from, to, process, filter, etc.) Span attributes use the `routecraft.*` namespace (`routecraft.route.id`, `routecraft.exchange.id`, `routecraft.correlation.id`, etc.) so you can filter and query traces in your provider's UI. ### Terminal UI Once the plugin is active, launch the terminal UI in a separate terminal to browse routes, exchanges, and the live event stream: ```bash craft tui ``` See the [Terminal UI guide](/docs/introduction/tui) for navigation and options. --- ## Related - [Events reference](/docs/reference/events) -- Full event catalog with payload shapes and wildcard patterns. - [Plugins](/docs/advanced/plugins) -- How to write and register plugins. - [Terminal UI](/docs/introduction/tui) -- Browse routes, exchanges, and live events from the terminal. # Terminal UI Inspect routes, exchanges, and live events from the terminal. ## Prerequisites The TUI reads from the SQLite database written by the `telemetry()` plugin. Enable it in your context before launching the UI: ```ts import { CraftContext, telemetry } from '@routecraft/routecraft' const ctx = new CraftContext({ plugins: [telemetry()], }) ``` See [Monitoring](/docs/introduction/monitoring#telemetry-plugin) for full plugin options. ## Launching the TUI Start the TUI in a separate terminal while your context is running (or after it has stopped; the database persists): ```bash craft tui ``` To read from a non-default database path: ```bash craft tui --db ./logs/telemetry.db ``` The TUI polls the database every 2 seconds. Because SQLite runs in WAL mode, reads never block the running context. 
## Layout The TUI uses a three-column layout: - **Left** -- Navigation panel (view switcher + capability list) and keymap - **Center** -- Main content (exchange lists, exchange detail, or event stream) - **Right** -- Metrics panel with throughput stats, latency percentiles (p90/p95/p99), and a live traffic sparkline ## Views ### Capabilities (1) The default view. The left panel lists all routes (capabilities) seen in the database. Select a route to see its summary in the center panel with recent exchanges. Press `Enter` to drill into a route's exchange list in the center panel. Press `Esc` to return focus to the route list. ### Exchanges (2) A chronological list of all exchanges across all routes, ordered most recent first. | Column | Description | | --- | --- | | ID | Unique exchange identifier | | Status | `started`, `completed`, `failed`, or `dropped` | | Duration | Processing time | | Time | Timestamp of the exchange | Press `Enter` on any exchange to see its detail view with related events grouped by parent/child flow. ### Errors (3) Same layout as Exchanges but filtered to show only failed exchanges. Useful for quickly spotting and investigating failures. ### Events (4) A chronological tail of all framework events with human-readable summaries: context lifecycle, route lifecycle, exchange events, and step events. Useful for debugging unexpected behaviour. | Column | Description | | --- | --- | | Timestamp | When the event occurred | | Event | Full event name (e.g. 
`route:myRoute:exchange:started`) | | Details | Formatted summary of the event payload | ## Keyboard shortcuts ### Navigation | Key | Action | | --- | --- | | `j` / `↓` | Move selection down | | `k` / `↑` | Move selection up | | `Ctrl+j` / `Ctrl+↓` | Jump 10 rows down | | `Ctrl+k` / `Ctrl+↑` | Jump 10 rows up | ### Views and drill-down | Key | Action | | --- | --- | | `1` | Switch to Capabilities view | | `2` | Switch to Exchanges view | | `3` | Switch to Errors view | | `4` | Switch to Events view | | `Enter` | Drill into selected item (route exchanges or exchange detail) | | `Esc` | Go back to the previous panel or view | | `q` | Quit | --- ## Related - [Monitoring](/docs/introduction/monitoring) -- Logging, events, and the telemetry plugin. - [CLI reference](/docs/reference/cli) -- All craft commands and options. # Reference Full catalog of adapters with signatures and options. ## Adapter overview | Adapter | Category | Description | Types | |---------|----------|-------------|-------| | [`simple`](#simple) | Core | Static or dynamic data sources | `Source` | | [`log`](#log) | Core | Console logging for debugging | `Destination` | | [`timer`](#timer) | Core | Scheduled/recurring execution | `Source` | | [`cron`](#cron) | Core | Cron-scheduled execution with timezone support | `Source` | | [`direct`](#direct) | Core | Synchronous inter-route communication | `Source`, `Destination` | | [`http`](#http) | Core | Outbound HTTP client requests (inbound/server support planned) | `Destination` | | [`noop`](#noop) | Test | No-operation placeholder | `Destination` | | [`pseudo`](#pseudo) | Test | Typed placeholder for docs/examples | `Source`, `Destination`, `Processor` | | [`spy`](#spy) | Test | Records exchanges for assertions | `Destination`, `Processor` | | [`file`](#file) | File | Read/write text files | `Source`, `Destination` | | [`json`](#json) | File | JSON file handling with parsing | `Source`, `Destination`, `Transformer` | | [`csv`](#csv) | File | CSV file 
processing | `Source`, `Destination` | | [`html`](#html) | File | HTML parsing and file handling | `Source`, `Destination`, `Transformer` | | [`agentBrowser`](#agentbrowser) | Browser | Automate a browser session (navigate, click, snapshot, etc.) | `Destination` | | [`mcp`](#mcp) | AI | Expose capabilities as MCP tools or call remote MCP servers | `Source`, `Destination` | | [`llm`](#llm) | AI | Call a language model and get text or structured output | `Destination` | | [`embedding`](#embedding) | AI | Generate vector embeddings from text | `Destination` | ## Core adapters ### simple ```ts simple<T>(producer: (() => T | Promise<T>) | T): Source<T> ``` Create a static or dynamic data source. When the producer returns an **array**, each element becomes a separate exchange processed independently through the pipeline. ```ts // Static value .id('hello-route') .from(simple('Hello, World!')) // Array of values (each becomes a separate exchange) .id('items-route') .from(simple(['item1', 'item2', 'item3'])) // Dynamic function .id('api-route') .from(simple(async () => { const response = await fetch('https://api.example.com/data') return response.json() })) // With custom ID .id('data-loader') .from(simple(() => loadData())) ``` **Use cases:** Testing, static data, API polling, file reading ### log ```ts log(formatter?: (exchange: Exchange) => unknown, options?: LogOptions): Destination ``` Log messages to the console. Can be used as a destination with `.to()` or for side effects with `.tap()`. 
```ts // Log final result (default: logs exchange ID, body, and headers at info level) .to(log()) // Log intermediate data without changing flow .tap(log()) // Log with custom formatter function .tap(log((ex) => `Exchange with id: ${ex.id}`)) .tap(log((ex) => `Body: ${JSON.stringify(ex.body)}`)) .tap(log((ex) => `Exchange with uuid: ${ex.headers.uuid}`)) // Log at different levels .tap(log(undefined, { level: 'debug' })) .tap(log((ex) => ex.body, { level: 'warn' })) .tap(log((ex) => ex.body, { level: 'error' })) // For debug logging, use the convenience helper .tap(debug()) .tap(debug((ex) => ex.body)) ``` **Log Levels:** - `trace` - Most verbose - `debug` - Development/debugging (use `debug()` helper) - `info` - Default level - `warn` - Warnings - `error` - Errors - `fatal` - Critical failures **Output format:** - Without formatter: Logs exchange ID, body, and headers in a clean format - With formatter: Logs the value returned by the formatter function ### debug ```ts debug(formatter?: (exchange: Exchange) => unknown): Destination ``` Convenience helper for debug-level logging. Equivalent to `log(formatter, { level: 'debug' })`. ```ts // Log at debug level (default format) .tap(debug()) // Log with custom formatter at debug level .tap(debug((ex) => `Debug: ${JSON.stringify(ex.body)}`)) .tap(debug((ex) => ({ id: ex.id, bodySize: JSON.stringify(ex.body).length }))) // Use throughout development workflow craft().from(source).tap(debug((ex) => `Input: ${ex.body}`)).transform(processData).tap(debug((ex) => `Processed: ${ex.body}`)).to(destination) ``` **Use cases:** Development debugging, verbose logging during troubleshooting ### timer ```ts timer(options?: TimerOptions): Source ``` Trigger routes at regular intervals or specific times. Produces `undefined` as the message body. 
```ts // Simple interval (every second) .id('ticker') .from(timer({ intervalMs: 1000 })) // Limited runs (10 times, then stop) .id('batch-job') .from(timer({ intervalMs: 5000, repeatCount: 10 })) // Start with delay .id('delayed-start') .from(timer({ intervalMs: 1000, delayMs: 5000 })) // Daily at specific time .id('daily-report') .from(timer({ exactTime: '09:30:00' })) // Fixed rate (ignore execution time) .id('heartbeat') .from(timer({ intervalMs: 1000, fixedRate: true })) // Add random jitter to prevent synchronized execution .id('distributed-task') .from(timer({ intervalMs: 1000, jitterMs: 200 })) ``` Options: | Field | Type | Default | Required | Description | | --- | --- | --- | --- | --- | | `intervalMs` | `number` | `1000` | No | Time between executions in milliseconds | | `delayMs` | `number` | `0` | No | Delay before first execution in milliseconds | | `repeatCount` | `number` | `Infinity` | No | Number of executions before stopping | | `fixedRate` | `boolean` | `false` | No | Execute at exact intervals ignoring processing time | | `exactTime` | `string` | — | No | Execute daily at time of day `HH:mm:ss` (fires once/day) | | `timePattern` | `string` | — | No | Custom date format for execution times | | `jitterMs` | `number` | `0` | No | Random jitter added to each scheduled run | **Headers added:** Timer metadata including fired time, counter, period, and next run time ### cron ```ts cron(expression: string, options?: CronOptions): Source ``` Trigger routes on a cron schedule with timezone support. Produces `undefined` as the message body. More expressive than `timer()` for complex recurring schedules. Supports standard 5-field cron (minute granularity), extended 6-field (second granularity), and nicknames (`@daily`, `@weekly`, `@hourly`, `@monthly`, `@yearly`, `@annually`, `@midnight`). 
```ts // Every 5 minutes .id('poller') .from(cron('*/5 * * * *')) // Weekdays at 9am Eastern .id('morning-report') .from(cron('0 9 * * 1-5', { timezone: 'America/New_York' })) // Daily at midnight (nickname) .id('nightly-cleanup') .from(cron('@daily')) // Every 30 seconds (6-field) .id('health-check') .from(cron('*/30 * * * * *')) // First day of month, limited to 12 fires .id('monthly-report') .from(cron('@monthly', { maxFires: 12, name: 'monthly-report' })) // With jitter to prevent thundering herd .id('distributed-poll') .from(cron('*/5 * * * *', { jitterMs: 5000 })) // Run only during Q1 2026 .id('q1-campaign') .from(cron('@daily', { startAt: '2026-01-01', stopAt: '2026-04-01' })) ``` Options: | Field | Type | Default | Required | Description | | --- | --- | --- | --- | --- | | `timezone` | `string` | System local | No | IANA timezone (e.g., `"America/New_York"`, `"UTC"`) | | `maxFires` | `number` | `Infinity` | No | Maximum number of fires before stopping (delegated to croner's `maxRuns`) | | `jitterMs` | `number` | `0` | No | Random delay in milliseconds added to each fire | | `name` | `string` | -- | No | Human-readable job name for observability | | `protect` | `boolean` | `true` | No | Prevents overlapping handler execution when the previous run is still in progress | | `startAt` | `Date \| string` | -- | No | Date or ISO 8601 string at which the cron job should start running | | `stopAt` | `Date \| string` | -- | No | Date or ISO 8601 string at which the cron job should stop running | **Cron expression format:** | Format | Example | Description | | --- | --- | --- | | 5-field | `*/5 * * * *` | minute, hour, day-of-month, month, day-of-week | | 6-field | `*/30 * * * * *` | second, minute, hour, day-of-month, month, day-of-week | | Nickname | `@daily` | Predefined schedule | **Supported nicknames:** `@yearly` / `@annually`, `@monthly`, `@weekly`, `@daily` / `@midnight`, `@hourly` **Headers added:** Cron metadata including expression, fired time, counter, 
next run, timezone, and name (via `routecraft.cron.*` headers) ### direct ```ts direct(endpoint: string | ((exchange: Exchange) => string), options?: Partial): DirectAdapter ``` Enable synchronous inter-route communication with single consumer semantics. Perfect for composable route architectures where you need request-response patterns. Supports dynamic endpoint names based on exchange data for destinations. ```ts // Producer route that sends to direct endpoint craft() .id('data-producer') .from(source) .transform(processData) .to(direct('processed-data')) // Consumer route that receives from direct endpoint craft() .id('data-consumer') .from(direct('processed-data', {})) .process(businessLogic) .to(destination) // Planned: inbound HTTP API with direct routing craft() .id('api-endpoint') .from(http({ path: '/api/orders', method: 'POST' })) // Planned HTTP source API .to(direct('order-processing')) // Synchronous call craft() .id('order-processor') .from(direct('order-processing', {})) .process(validateOrder) .process(saveOrder) .transform(() => ({ status: 'created', orderId: '12345' })) // Planned response flow goes back to the HTTP client automatically // Dynamic endpoint based on message content craft() .id('dynamic-router') .from(source) .to(direct((ex) => `handler-${ex.body.type}`)) // Route messages to different handlers based on priority craft() .id('priority-router') .from(source) .to(direct((ex) => { const priority = ex.headers['priority'] || 'normal'; return `processing-${priority}`; })) // Consumer routes (static endpoints required) craft() .id('high-priority-handler') .from(direct('processing-high', {})) .to(urgentProcessor) craft() .id('normal-priority-handler') .from(direct('processing-normal', {})) .to(standardProcessor) ``` **Options:** - `channelType` - Custom direct channel implementation (default: in-memory) - `schema` - Body validation schema (StandardSchema compatible: Zod, Valibot, ArkType) - `headerSchema` - Header validation schemas (can be 
optional/required) - `description` - Human-readable description for route discovery - `keywords` - Keywords for route categorization **Key characteristics:** - **Synchronous**: Calling route waits for response from consuming route - **Single consumer**: Only one route can consume from each endpoint (last one wins) - **Request-response**: Perfect for HTTP APIs and composable route architectures - **Automatic endpoint name sanitization**: Special chars become dashes - **Dynamic routing**: Endpoint names can be determined at runtime using exchange data (destination only) - **Static sources**: Source endpoints (`.from()`) must use static strings; dynamic functions only work with `.to()` and `.tap()` **Perfect for:** - Breaking large routes into smaller, composable pieces - HTTP request-response patterns - Synchronous business logic orchestration - Testing individual route segments in isolation **Limitations:** - **Not compatible with `batch()`**: Because `direct()` is synchronous and blocking, each sender waits for the consumer route to fully process the message before the next message can be sent. This prevents the batch consumer from accumulating multiple messages. If you need to batch messages from multiple sources or split branches, use the `aggregate()` operation instead. #### Schema Validation Direct routes support StandardSchema validation for type safety. Behavior depends on your schema library. 
**No Schema (Default)** Without a schema, all data passes through unchanged: ```ts craft() .from(direct('user-processor', {})) // No schema - all data passes through .process(processUser) ``` **Zod 4 Object Types** Zod 4 uses different object constructors to control extra field handling: | Constructor | Extra fields | Use case | |-------------|--------------|----------| | `z.object()` | Stripped (default) | Strict contracts, clean data | | `z.looseObject()` | Preserved | Flexible schemas, passthrough | | `z.strictObject()` | Error (RC5002) | Reject unexpected fields | ```ts import { z } from 'zod' // z.object() - strips extra fields (default behavior) const strictSchema = z.object({ userId: z.string().uuid(), action: z.enum(['create', 'update', 'delete']) }) craft() .from(direct('user-processor', { schema: strictSchema })) .process(processUser) // Passes: { userId: '...', action: 'create' } // Passes: { userId: '...', action: 'create', extra: 'field' } // Extra fields silently removed from result // RC5002: { userId: '...', missing: 'action' } ``` ```ts // z.looseObject() - preserves extra fields const looseSchema = z.looseObject({ userId: z.string().uuid(), action: z.enum(['create', 'update']) }) craft() .from(direct('user-processor', { schema: looseSchema })) .process(processUser) // Passes: { userId: '...', action: 'create', extra: 'field' } // All fields preserved including extra ``` ```ts // z.strictObject() - rejects extra fields with error const veryStrictSchema = z.strictObject({ userId: z.string().uuid(), action: z.enum(['create', 'update']) }) craft() .from(direct('user-processor', { schema: veryStrictSchema })) .process(processUser) // Passes: { userId: '...', action: 'create' } // RC5002: { userId: '...', action: 'create', extra: 'field' } ``` **Header Validation** Without `headerSchema`, all headers pass through unchanged. 
When specified, the same Zod 4 rules apply: ```ts // No headerSchema - all headers pass through unchanged craft() .from(direct('api-handler', { schema: z.object({ id: z.string() }) // headerSchema not specified - all headers preserved })) .process(handleRequest) // z.looseObject() - validate required headers, keep extras craft() .from(direct('api-handler', { headerSchema: z.looseObject({ 'x-tenant-id': z.string().uuid(), 'x-trace-id': z.string().optional(), }) })) .process(handleRequest) // Passes: { 'x-tenant-id': '...', 'x-other': '...' } (validates x-tenant-id, keeps x-other) // z.object() - validate and strip extra headers craft() .from(direct('api-handler', { headerSchema: z.object({ 'x-tenant-id': z.string().uuid(), }) })) .process(handleRequest) // Passes: { 'x-tenant-id': '...', 'x-other': '...' } (x-other stripped from result) ``` **Schema Coercion** Validated values are used (schemas can transform data): ```ts const schema = z.object({ userId: z.string(), createdAt: z.coerce.date() // Transforms string to Date }) craft() .from(direct('processor', { schema })) .process((data) => { // data.createdAt is Date, not string console.log(data.createdAt.getFullYear()) }) ``` **Validation occurs on consumer side only.** Producers send data unchanged; consumers validate on receive. #### Route Registry All direct routes are registered and can be queried. Routes with descriptions and keywords are more discoverable: ```ts import { DirectAdapter } from '@routecraft/routecraft' craft() .from(direct('fetch-content', { description: 'Fetch and summarize web content from URL', schema: z.object({ url: z.string().url() }), keywords: ['fetch', 'web', 'scrape'] })) .process(fetchAndSummarize) // Later, query discoverable routes from context const ctx = await new ContextBuilder().routes(...).build() await ctx.start() const registry = ctx.getStore(DirectAdapter.ADAPTER_DIRECT_REGISTRY) const routes = registry ? 
Array.from(registry.values()) : [] // [{ endpoint: 'fetch-content', description: '...', schema, keywords }] ``` Useful for runtime introspection, documentation generation, and building dynamic routing systems. ### http ```ts http(options: HttpOptions): Destination<HttpResult> ``` Make HTTP requests. Returns a `Destination` adapter that works with both `.to()` and `.enrich()`. **Current support:** Routecraft currently exports `http()` only as an outbound/client adapter for making HTTP requests. **Planned inbound support:** Routecraft does **not** yet ship an inbound HTTP source/server adapter. The planned design is shown in [Planned inbound/server HTTP support](#planned-inboundserver-http-support) below and may change before implementation. **With `.enrich()` (merge result into body):** ```ts // Static GET request - result merged into body .enrich(http({ method: 'GET', url: 'https://api.example.com/users' })) // Dynamic URL based on exchange data .enrich(http({ method: 'GET', url: (exchange) => `https://api.example.com/users/${exchange.body.userId}` })) // Custom aggregator to control merge behavior .enrich( http({ url: 'https://api.example.com/profile' }), (original, result) => ({ ...original, body: { ...original.body, profileData: result.body } }) ) ``` **With `.to()` (side-effect or body replacement):** `.to(http(...))` always invokes the `http()` adapter. When the adapter returns an `HttpResult`, `.to()` replaces the exchange body with that result. The first example below is a fire-and-forget pattern in intent only (the code does not read the response), but at runtime the body is still replaced by the `HttpResult`. To merge or preserve the original exchange body, use `.enrich()` with an aggregator instead of `.to(http(...))`. 
```ts // Fire-and-forget intent (code does not read the response); body is still replaced by HttpResult at runtime .to(http({ method: 'POST', url: 'https://api.example.com/webhook', body: (exchange) => exchange.body })) // http() returns HttpResult; .to() replaces exchange body with it .to(http({ method: 'GET', url: 'https://api.example.com/transform' })) // Body is now the HttpResult (status, headers, body). Use .enrich() with an aggregator to merge or preserve the original body. // With query parameters .enrich(http({ url: 'https://api.example.com/search', query: (exchange) => ({ q: exchange.body.searchTerm, limit: 10 }) })) ``` Options: | Field | Type | Default | Required | Description | | --- | --- | --- | --- | --- | | `method` | `HttpMethod` | `'GET'` | No | HTTP method to use | | `url` | `string \| (exchange) => string` | — | Yes | Target URL (string or derived from exchange) | | `headers` | `Record<string, string> \| (exchange) => Record<string, string>` | `{}` | No | Request headers | | `query` | `Record<string, unknown> \| (exchange) => Query` | `{}` | No | Query parameters appended to URL | | `body` | `unknown \| (exchange) => unknown` | — | No | Request body (JSON serialized when not string/binary) | | `throwOnHttpError` | `boolean` | `true` | No | Throw when response is non-2xx | | `timeoutMs` | `number` | — | No | Request timeout in milliseconds | **Returns:** `HttpResult` object with `status`, `headers`, `body`, and `url` #### Planned inbound/server HTTP support [wip] Tentative source signature: `http({ path, method, ...options })`. 
```ts // Simple webhook endpoint .id('webhook-receiver') .from(http({ path: '/webhook', method: 'POST' })) // Multiple methods on same path .id('data-api') .from(http({ path: '/api/data', method: ['GET', 'POST', 'PUT'] })) ``` | Option | Type | Default | Required | Description | | --- | --- | --- | --- | --- | | `path` | `string` | `'/'` | No | URL path to mount | | `method` | `HttpMethod \| HttpMethod[]` | `'POST'` | No | Accepted HTTP methods | Exchange body: `{ method, url, headers, body, query, params }`. The final exchange becomes the HTTP response; no explicit `.to()` step is required. Response behavior: - The final exchange is returned to the HTTP client. If the final body is an object with optional fields `{ status?: number, headers?: Record<string, string>, body?: unknown }`, those fields are used to build the response. - If `status` or `headers` are not provided, Routecraft returns the body with `200` status and no additional headers. - For serialization and setting `Content-Type`, use a formatting step in your capability (e.g., a `.transform(...)` that sets appropriate headers). ## Test adapters ### noop ```ts noop(): NoopAdapter ``` A no-operation adapter that discards messages. Useful for testing, development, or conditional routing. ```ts // Conditional destination based on environment .to(process.env.NODE_ENV === 'production' ? realDestination() : noop()) // Testing placeholder .to(noop()) // Messages are discarded but logged ``` ### spy ```ts spy(): SpyAdapter ``` Records all exchanges passing through it. Use as a destination, processor, or enricher to capture and assert on pipeline output. 
```ts import { spy } from '@routecraft/routecraft' const spyAdapter = spy() const route = craft() .id('my-route') .from(simple('payload')) .to(spyAdapter) const t = await testContext().routes(route).build() await t.test() expect(spyAdapter.received).toHaveLength(1) expect(spyAdapter.received[0].body).toBe('payload') expect(spyAdapter.calls.send).toBe(1) ``` **Properties:** | Field | Type | Default | Required | Description | |-------|------|---------|----------|-------------| | `received` | `Exchange[]` | `[]` | No | All exchanges recorded | | `calls.send` | `number` | `0` | No | Number of times used as destination | | `calls.process` | `number` | `0` | No | Number of times used as processor | | `calls.enrich` | `number` | `0` | No | Number of times used as enricher | **Methods:** | Method | Returns | Description | |--------|---------|-------------| | `reset()` | `void` | Clear all recorded data | | `lastReceived()` | `Exchange` | Most recent exchange | | `receivedBodies()` | `unknown[]` | Array of just the body values | See [Testing](/docs/introduction/testing) for full usage patterns. ### pseudo ```ts pseudo(name?: string, options?: PseudoOptions): PseudoFactory pseudo(name: string, options: PseudoKeyedOptions): PseudoKeyedFactory ``` Create a **typed placeholder adapter** that satisfies the DSL at compile time but throws at runtime (or no-ops when `runtime: "noop"`). Use it to write example routes and documentation that compile without real adapter implementations; later, swap in the real adapter by changing only the import. The returned factory can be used in `.from()`, `.to()`, `.enrich()`, `.tap()`, and `.process()`. 
Specify the **result type** with a generic on the call so the route body type flows correctly: ```ts import { craft, timer, log, pseudo } from "@routecraft/routecraft"; // Option types (move to real adapter package later) interface McpCallOptions { server: string; tool: string; args?: Record<string, unknown>; } interface GmailListResult { messages: { id: string; subject?: string }[]; nextPageToken?: string; } const mcp = pseudo("mcp"); // Object-only call: mcp(options) craft() .from(timer({ intervalMs: 60_000 })) .enrich( mcp<GmailListResult>({ server: "gmail", tool: "messages.list", args: { query: "is:unread" }, }), ) .split((r) => r.messages) .tap(log()); ``` **Keyed (string-first) signature:** use `args: "keyed"` when the real adapter takes a key then options (e.g. queue name, table name): ```ts const queue = pseudo<{ ttl?: number }>("queue", { args: "keyed" }); craft() .from(source) .to(queue("outbound", { ttl: 5000 })); ``` **Options:** | Field | Type | Default | Description | | --- | --- | --- | --- | | `runtime` | `"throw"` or `"noop"` | `"throw"` | `"throw"` (default): throw with adapter name when executed. `"noop"`: resolve without error (for tests). | | `args` | `"keyed"` | — | Set to `"keyed"` to get a factory `(key: string, opts?) => PseudoAdapter`. | **Replacing with a real adapter:** keep the same call shape; only the import changes: ```ts // Before (pseudo) import { pseudo } from "@routecraft/routecraft"; const mcp = pseudo("mcp"); // After (real adapter) import { mcp } from "@routecraft/mcp-adapter"; // mcp({ server, tool, args }) still works ``` **Exported types:** `PseudoAdapter`, `PseudoFactory`, `PseudoKeyedFactory`, `PseudoOptions`, `PseudoKeyedOptions` ## File adapters ### file ```ts file(options: FileOptions): FileAdapter ``` Read and write plain text files. For structured data, use `json` or `csv` adapters. 
**Source mode** (reads files): ```ts // Read file once .from(file({ path: './input.txt' })) // Custom encoding .from(file({ path: './data.txt', encoding: 'latin1' })) ``` **Destination mode** (writes files): ```ts // Write to file (overwrite) .to(file({ path: './output.txt', mode: 'write' })) // Append to file .to(file({ path: './log.txt', mode: 'append' })) // Dynamic file paths with directory creation .to(file({ path: (exchange) => `./data/${exchange.body.date}.txt`, mode: 'write', createDirs: true })) ``` **Options:** | Option | Type | Default | Description | |--------|------|---------|-------------| | `path` | `string \| (exchange) => string` | Required | File path (static or dynamic function) | | `mode` | `'read' \| 'write' \| 'append'` | `'read'` for source, `'write'` for destination | File operation mode | | `encoding` | `BufferEncoding` | `'utf-8'` | Text encoding | | `createDirs` | `boolean` | `false` | Create parent directories (destination only) | **Exported types:** `FileAdapter`, `FileOptions` ### json ```ts json(options?: JsonOptions): JsonAdapter | JsonFileAdapter ``` Parse and format JSON data, or read/write JSON files. 
**Transformer mode** (in-memory JSON parsing):

```ts
// Parse JSON string from body
.transform(json())

// Extract nested data using dot notation
.transform(json({ path: 'data.items' }))

// Custom parsing with getValue
.transform(json({
  from: (b) => b.rawJson,
  getValue: (parsed) => parsed as User[]
}))

// Write to custom field
.transform(json({
  to: (body, result) => ({ ...body, parsed: result })
}))
```

**Source mode** (read JSON files):

```ts
// Read and parse JSON file
.from(json({ path: './data.json' }))

// With custom reviver
.from(json({
  path: './data.json',
  reviver: (key, value) => {
    if (key === 'date') return new Date(value);
    return value;
  }
}))
```

**Destination mode** (write JSON files):

```ts
// Write with formatting
.to(json({ path: './output.json', indent: 2 }))

// Dynamic paths with directory creation
.to(json({
  path: (exchange) => `./exports/${exchange.body.id}.json`,
  createDirs: true
}))

// With custom replacer
.to(json({
  path: './filtered.json',
  replacer: (key, value) => {
    if (key.startsWith('_')) return undefined;
    return value;
  }
}))
```

**Transformer Options** (when no `path` provided):

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `path` | `string` | — | Dot-notation path to extract (e.g., `"data.items[0]"`) |
| `from` | `(body) => string` | Uses `body` or `body.body` | Extract JSON string from exchange |
| `getValue` | `(parsed) => V` | — | Transform parsed value |
| `to` | `(body, result) => R` | Replaces body | Where to put result |

**File Options** (when `path` is a file path):

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `path` | `string \| (exchange) => string` | Required | File path (static or dynamic) |
| `mode` | `'read' \| 'write' \| 'append'` | `'read'` for source, `'write'` for destination | File operation mode |
| `encoding` | `BufferEncoding` | `'utf-8'` | Text encoding |
| `createDirs` | `boolean` | `false` | Create parent directories (destination only) |
| `indent` / `space` | `number` | `0` | JSON formatting spaces (destination only) |
| `reviver` | `(key, value) => unknown` | — | JSON.parse reviver (source only) |
| `replacer` | `(key, value) => unknown` | — | JSON.stringify replacer (destination only) |

**Exported types:** `JsonAdapter`, `JsonFileAdapter`, `JsonOptions`, `JsonTransformerOptions`, `JsonFileOptions`

### csv

```ts
csv(options: CsvOptions): CsvAdapter
```

Read and write CSV files with automatic parsing/formatting. **Requires `papaparse` as a peer dependency.**

```bash
npm install papaparse
```

**Source mode** (read CSV files):

```ts
// Read CSV with headers
.from(csv({ path: './data.csv', header: true }))
// Emits array of objects: [{ name: 'Alice', age: '30' }, ...]

// Read CSV without headers
.from(csv({ path: './data.csv', header: false }))
// Emits array of arrays: [['Alice', '30'], ['Bob', '25'], ...]

// Custom delimiter and encoding
.from(csv({
  path: './data.csv',
  delimiter: ';',
  encoding: 'latin1',
  header: true
}))
```

**Destination mode** (write CSV files):

```ts
// Write array of objects to CSV
.to(csv({ path: './output.csv', header: true }))
// Automatically includes headers from object keys

// Write to tab-separated file
.to(csv({ path: './data.tsv', delimiter: '\t', header: true }))

// Dynamic paths with directory creation
.to(csv({
  path: (exchange) => `./reports/${exchange.body.reportDate}.csv`,
  createDirs: true,
  header: true
}))

// Append to existing CSV (skips header if file exists)
.to(csv({ path: './log.csv', mode: 'append', header: true }))
```

**Options:**

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `path` | `string \| (exchange) => string` | Required | File path (static or dynamic) |
| `header` | `boolean` | `true` | Use first row as headers (source), include headers (destination) |
| `delimiter` | `string` | `','` | Field separator |
| `quoteChar` | `string` | `'"'` | Quote character |
| `skipEmptyLines` | `boolean` | `true` | Skip empty lines during parsing |
| `encoding` | `BufferEncoding` | `'utf-8'` | Text encoding |
| `mode` | `'write' \| 'append'` | `'write'` | File operation mode (destination only) |
| `createDirs` | `boolean` | `false` | Create parent directories (destination only) |

**Behavior:**

- **Source**: Emits entire CSV as array of records (objects if `header: true`, arrays if `header: false`)
- **Destination**: Writes exchange body (array of objects/arrays) as CSV. For `mode: 'append'`, skips header row if file exists.

**Peer dependency:** Requires `papaparse` to be installed separately.

**Exported types:** `CsvAdapter`, `CsvOptions`

### html

```ts
html(options: HtmlOptions): HtmlAdapter
```

Extract data from HTML using CSS selectors (powered by cheerio), or read/write HTML files.

**Transformer mode** (in-memory HTML parsing):

```ts
// Extract text from title
.transform(html({ selector: 'title', extract: 'text' }))

// Extract multiple elements (returns array)
.transform(html({ selector: 'h2', extract: 'text' }))
// Result: ['First Heading', 'Second Heading', ...]

// Extract HTML content
.transform(html({ selector: '.content', extract: 'html' }))

// Extract attribute value
.transform(html({ selector: 'a', extract: 'attr', attr: 'href' }))

// Extract outer HTML (including element tag)
.transform(html({ selector: 'article', extract: 'outerHtml' }))

// Custom parsing from sub-field
.transform(html({
  selector: 'p',
  extract: 'text',
  from: (body) => body.htmlContent,
  to: (body, result) => ({ ...body, paragraphs: result })
}))
```

**Source mode** (read HTML files and extract):

```ts
// Read HTML file and extract title
.from(html({ path: './page.html', selector: 'title', extract: 'text' }))

// Extract multiple links from file
.from(html({ path: './page.html', selector: 'a', extract: 'attr', attr: 'href' }))
// Emits array: ['https://example.com', '/about', ...]
```

**Destination mode** (write HTML files):

```ts
// Write HTML string to file
.to(html({ path: './output.html' }))

// Dynamic paths with directory creation
.to(html({
  path: (exchange) => `./pages/${exchange.body.slug}.html`,
  createDirs: true
}))

// Append to HTML file
.to(html({ path: './log.html', mode: 'append' }))
```

**Transformer Options** (when no `path` provided):

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `selector` | `string` | Required | CSS selector to match elements |
| `extract` | `'text' \| 'html' \| 'attr' \| 'outerHtml' \| 'innerText' \| 'textContent'` | `'text'` | What to extract from matched elements |
| `attr` | `string` | — | Attribute name (required when `extract: 'attr'`) |
| `from` | `(body) => string` | Uses `body` or `body.body` | Extract HTML string from exchange |
| `to` | `(body, result) => R` | Replaces body | Where to put extracted result |

**File Options** (when `path` is provided):

All transformer options above, plus:

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `path` | `string \| (exchange) => string` | Required | File path (static or dynamic) |
| `mode` | `'read' \| 'write' \| 'append'` | `'read'` for source, `'write'` for destination | File operation mode |
| `encoding` | `BufferEncoding` | `'utf-8'` | Text encoding |
| `createDirs` | `boolean` | `false` | Create parent directories (destination only) |

**Extract types:**

- `text` / `innerText` / `textContent`: Plain text content (strips HTML tags, removes `