Barbacane Documentation
Barbacane is a spec-driven API gateway. Define your API using OpenAPI, add Barbacane extensions for routing and middleware, compile to an artifact, and deploy.
Why Barbacane?
- Spec as config — Your OpenAPI 3.x or AsyncAPI 3.x specification is the single source of truth. No separate gateway DSL to maintain.
- Compile-time safety — Misconfigurations, ambiguous routes, and missing plugins are caught at compile time, not at 3 AM.
- Fast and predictable — Built on Rust, Tokio, and Hyper. No garbage collector, no latency surprises.
- Secure by default — Memory-safe runtime, TLS via Rustls, sandboxed WASM plugins, secrets never baked into artifacts.
- Extensible — Write plugins in any language that compiles to WebAssembly. They run in a sandbox, so a buggy plugin can’t take down the gateway.
- Observable — Prometheus metrics, structured JSON logging, and distributed tracing with W3C Trace Context and OTLP export.
Quick Start
With Docker
The standalone image bundles the binary and all official plugins:
# Compile your OpenAPI spec (all plugins are pre-bundled)
docker run --rm -v $(pwd):/work ghcr.io/barbacane-dev/barbacane-standalone \
compile --spec /work/api.yaml --manifest /etc/barbacane/plugins.yaml --output /work/api.bca
# Run the gateway
docker run --rm -p 8080:8080 -v $(pwd)/api.bca:/config/api.bca ghcr.io/barbacane-dev/barbacane-standalone \
serve --artifact /config/api.bca --listen 0.0.0.0:8080
Also available on Docker Hub as barbacane/barbacane-standalone. To build locally:
git clone https://github.com/barbacane-dev/barbacane.git
cd barbacane
make docker-build-standalone
Playground
Full demo environment with observability stack (Prometheus, Grafana, Loki, Tempo) and control plane UI:
git clone https://github.com/barbacane-dev/barbacane.git
cd barbacane/playground
docker-compose up -d
# Gateway: http://localhost:8080
# Grafana: http://localhost:3000 (admin/admin)
# Control Plane: http://localhost:3001
From source
git clone https://github.com/barbacane-dev/barbacane.git
cd barbacane
cargo build --release
make plugins
# Compile your OpenAPI spec
./target/release/barbacane compile --spec api.yaml --manifest barbacane.yaml --output api.bca
# Run the gateway
./target/release/barbacane serve --artifact api.bca --listen 0.0.0.0:8080
Documentation
User Guide
- Getting Started - First steps with Barbacane
- Spec Configuration - Configure routing and middleware in your OpenAPI spec
- Dispatchers - Route requests to backends
- Middlewares - Add authentication, rate limiting, and more
- Secrets - Manage secrets in plugin configurations
- Observability - Metrics, logging, and distributed tracing
- Control Plane - REST API for spec and artifact management
- Web UI - Web-based management interface
Reference
- CLI Reference - Command-line tools
- Spec Extensions - Complete x-barbacane-* reference
- Artifact Format - .bca file format
- Reserved Endpoints - /__barbacane/* endpoints
Contributing
- Architecture - System design and crate structure
- Development Guide - Setting up and building
- Plugin Development - Creating WASM plugins
Supported Spec Versions
| Format | Version | Status |
|---|---|---|
| OpenAPI | 3.0.x | Supported |
| OpenAPI | 3.1.x | Supported |
| OpenAPI | 3.2.x | Supported |
| AsyncAPI | 3.0.x | Supported |
| AsyncAPI | 3.1.x | Supported |
AsyncAPI Support
Barbacane supports AsyncAPI 3.x for event-driven APIs. AsyncAPI send operations are accessible via HTTP POST requests, enabling a sync-to-async bridge pattern where HTTP clients can publish messages to Kafka or NATS brokers.
License
Apache 2.0 - See LICENSE
Getting Started
This guide walks you through creating your first Barbacane-powered API gateway.
Prerequisites
- An OpenAPI 3.x specification
- One of the installation methods below
Installation
Pre-built Binaries (Recommended)
Download the latest release for your platform from GitHub Releases:
# Linux (x86_64)
curl -LO https://github.com/barbacane-dev/barbacane/releases/latest/download/barbacane-x86_64-unknown-linux-gnu
chmod +x barbacane-x86_64-unknown-linux-gnu
sudo mv barbacane-x86_64-unknown-linux-gnu /usr/local/bin/barbacane
# Linux (ARM64)
curl -LO https://github.com/barbacane-dev/barbacane/releases/latest/download/barbacane-aarch64-unknown-linux-gnu
chmod +x barbacane-aarch64-unknown-linux-gnu
sudo mv barbacane-aarch64-unknown-linux-gnu /usr/local/bin/barbacane
# macOS (Intel)
curl -LO https://github.com/barbacane-dev/barbacane/releases/latest/download/barbacane-x86_64-apple-darwin
chmod +x barbacane-x86_64-apple-darwin
sudo mv barbacane-x86_64-apple-darwin /usr/local/bin/barbacane
# macOS (Apple Silicon)
curl -LO https://github.com/barbacane-dev/barbacane/releases/latest/download/barbacane-aarch64-apple-darwin
chmod +x barbacane-aarch64-apple-darwin
sudo mv barbacane-aarch64-apple-darwin /usr/local/bin/barbacane
Verify installation:
barbacane --version
Container Images
For Docker or Kubernetes deployments:
# Data plane (from Docker Hub)
docker pull barbacane/barbacane:latest
# Control plane (from Docker Hub)
docker pull barbacane/barbacane-control:latest
Also available from GitHub Container Registry:
docker pull ghcr.io/barbacane-dev/barbacane:latest
docker pull ghcr.io/barbacane-dev/barbacane-control:latest
Quick start with Docker:
docker run -v ./api.bca:/config/api.bca -p 8080:8080 \
ghcr.io/barbacane-dev/barbacane serve --artifact /config/api.bca
Using Cargo
If you have Rust installed:
cargo install barbacane
cargo install barbacane-control # Optional: control plane CLI
From Source
For development or custom builds:
git clone https://github.com/barbacane-dev/barbacane.git
cd barbacane
cargo build --release
# Binaries are in target/release/
Your First Gateway
Quick Start with barbacane init
The fastest way to start a new project:
# Create a new project with example spec and official plugins
barbacane init my-api --fetch-plugins
cd my-api
This creates:
- barbacane.yaml — project manifest with plugins configured
- api.yaml — OpenAPI spec with example endpoints
- plugins/mock.wasm — mock dispatcher plugin
- plugins/http-upstream.wasm — HTTP proxy plugin
- .gitignore — ignores build artifacts
For a minimal skeleton without example endpoints:
barbacane init my-api --template minimal --fetch-plugins
Skip to Step 3: Validate the Spec if using barbacane init.
1. Create an OpenAPI Spec
Create a file called api.yaml:
openapi: "3.1.0"
info:
title: My API
version: "1.0.0"
paths:
/health:
get:
operationId: healthCheck
x-barbacane-dispatch:
name: mock
config:
status: 200
body: '{"status":"ok"}'
responses:
"200":
description: Health check response
/users:
get:
operationId: listUsers
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
path: /api/users
responses:
"200":
description: List of users
/users/{id}:
get:
operationId: getUser
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
path: /api/users/{id}
parameters:
- name: id
in: path
required: true
schema:
type: string
format: uuid
responses:
"200":
description: User details
The key additions are:
- x-barbacane-dispatch on each operation: tells Barbacane how to handle the request
2. Create a Manifest
Create a barbacane.yaml manifest to declare which plugins to use:
plugins:
mock:
path: ./plugins/mock.wasm
http-upstream:
path: ./plugins/http-upstream.wasm
The manifest declares all WASM plugins used by your spec. Plugins can be sourced from a local path or a remote URL:
- Local path: path: ./plugins/name.wasm
- Remote URL: url: https://github.com/barbacane-dev/barbacane/releases/download/v0.5.1/name.wasm
Remote plugins are downloaded at compile time and cached locally. You can optionally pin integrity with a sha256 checksum:
plugins:
jwt-auth:
url: https://github.com/barbacane-dev/barbacane/releases/download/v0.5.1/jwt-auth.wasm
sha256: abc123...
3. Validate the Spec
barbacane validate --spec api.yaml
Output:
✓ api.yaml is valid
validated 1 spec(s): 1 valid, 0 invalid
4. Compile to Artifact
barbacane compile --spec api.yaml --manifest barbacane.yaml --output api.bca
Output:
compiled 1 spec(s) to api.bca (3 routes, 2 plugin(s) bundled)
The .bca (Barbacane Compiled Artifact) file contains:
- Compiled routing table
- Embedded source specs (for /__barbacane/specs)
- Bundled WASM plugins
- Manifest with checksums
5. Run the Gateway
barbacane serve --artifact api.bca --listen 127.0.0.1:8080 --dev
Output:
barbacane: loaded 3 route(s) from artifact
barbacane: listening on 127.0.0.1:8080
6. Test It
# Health check (mock dispatcher)
curl http://127.0.0.1:8080/health
# {"status":"ok"}
# Gateway health
curl http://127.0.0.1:8080/__barbacane/health
# {"status":"healthy","artifact_version":1,"compiler_version":"0.1.0","routes_count":3}
# View the API specs
curl http://127.0.0.1:8080/__barbacane/specs
# Returns index of specs with links to merged OpenAPI/AsyncAPI
# Try a non-existent route
curl http://127.0.0.1:8080/nonexistent
# {"error":"not found"}
# Try wrong method
curl -X POST http://127.0.0.1:8080/health
# {"error":"method not allowed"}
What’s Next?
- Spec Configuration - Learn about all x-barbacane-* extensions
- Dispatchers - Route to HTTP backends, mock responses, and more
- Middlewares - Add authentication, rate limiting, CORS
- Secrets - Manage API keys, tokens, and passwords securely
- Observability - Metrics, logging, and distributed tracing
- Control Plane - Manage specs and artifacts via REST API
- Web UI - Visual interface for managing your gateway
Development Mode
The --dev flag enables:
- Verbose error messages with dispatcher details
- Detailed logging
- No production-only restrictions
For production, omit the flag:
barbacane serve --artifact api.bca --listen 0.0.0.0:8080
Observability
Barbacane includes built-in observability features:
# Pretty logs for development
barbacane serve --artifact api.bca --log-format pretty --log-level debug
# JSON logs with OTLP tracing for production
barbacane serve --artifact api.bca \
--log-format json \
--otlp-endpoint http://otel-collector:4317
Prometheus metrics are available on the admin API port (default 127.0.0.1:8081):
curl http://127.0.0.1:8081/metrics
See the Observability Guide for full details.
Spec Configuration
Barbacane extends OpenAPI and AsyncAPI specs with custom x-barbacane-* extensions. These tell the gateway how to route requests, apply middleware, and connect to backends.
Extension Overview
| Extension | Location | Purpose |
|---|---|---|
| x-barbacane-dispatch | Operation | Route request to a dispatcher (required) |
| x-barbacane-middlewares | Root or Operation | Apply middleware chain |
Path Parameters
Regular Parameters
Use {paramName} for single-segment parameters — the parameter captures exactly one path segment:
paths:
/users/{id}/orders/{orderId}:
get:
parameters:
- name: id
in: path
required: true
schema:
type: string
- name: orderId
in: path
required: true
schema:
type: string
Wildcard Parameters
Use {paramName+} to capture all remaining path segments as a single value, including any / characters:
paths:
/files/{bucket}/{key+}:
get:
parameters:
- name: bucket
in: path
required: true
schema:
type: string
- name: key
in: path
required: true
allowReserved: true # tells client tooling not to percent-encode '/'
schema:
type: string
A GET /files/my-bucket/docs/2024/report.pdf request captures bucket=my-bucket and key=docs/2024/report.pdf.
Rules:
- The wildcard parameter must be the last segment of the path
- At most one wildcard parameter per path
- Parameter names use the same characters as regular params (alphanumeric and _)
allowReserved note: This is advisory metadata for client generators and documentation tools — it signals that the value may contain unencoded / characters. Barbacane does not parse or enforce it, but including it produces correct client SDKs.
Dispatchers
Every operation needs an x-barbacane-dispatch to tell Barbacane how to handle it:
paths:
/users:
get:
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
path: /api/v2/users
Dispatch Properties
| Property | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Dispatcher plugin name |
| config | object | No | Plugin-specific configuration |
Built-in dispatchers: mock, http-upstream, lambda, kafka, nats, s3. See Dispatchers for configuration details.
Config values support secret references: env://VAR_NAME (environment variable) or file:///path/to/secret (file). See Secrets.
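A minimal sketch of both forms, with hypothetical property names (any plugin config value accepts a secret reference in place of a literal):

```yaml
x-barbacane-dispatch:
  name: http-upstream
  config:
    url: "https://api.example.com"
    api_token: env://UPSTREAM_API_TOKEN           # hypothetical property; resolved from the environment at runtime
    signing_key: file:///run/secrets/signing.key  # hypothetical property; value read from the file at runtime
```

Export the referenced environment variable (or mount the secret file) before running barbacane serve.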
Middlewares
Middlewares process requests before they are dispatched and responses after the dispatcher returns.
Global Middlewares
Apply to all operations:
openapi: "3.1.0"
info:
title: My API
version: "1.0.0"
x-barbacane-middlewares:
- name: rate-limit
config:
quota: 100
window: 60
- name: cors
config:
allowed_origins: ["https://app.example.com"]
paths:
# ... all operations get these middlewares
Operation Middlewares
Apply to a specific operation (runs after global middlewares):
paths:
/admin/users:
get:
x-barbacane-middlewares:
- name: jwt-auth
config:
required: true
scopes: ["admin:read"]
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
Middleware Merging
Operation middlewares are merged with global ones. If an operation middleware has the same name as a global one, the operation config overrides it. Non-overridden globals are preserved.
# Global: rate limit 100/min + cors
x-barbacane-middlewares:
- name: rate-limit
config:
quota: 100
window: 60
- name: cors
config:
allow_origin: "*"
paths:
/public/stats:
get:
# Override rate-limit; cors still applies from globals
x-barbacane-middlewares:
- name: rate-limit
config:
quota: 1000
window: 60
# Resolved chain: cors (global) → rate-limit (operation override)
Use an empty array to explicitly disable all middlewares for an operation:
x-barbacane-middlewares: [] # No middlewares at all
Middleware Chain Order
- Global middlewares (in order defined)
- Operation middlewares (in order defined)
- Dispatch
- Response middlewares (reverse order)
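As a sketch, with a global chain of request-id and cors plus an operation-level jwt-auth (names taken from the examples in this guide), the resolved order is:

```text
request:  request-id -> cors -> jwt-auth -> dispatch
response: jwt-auth -> cors -> request-id -> client
```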
Complete Example
openapi: "3.1.0"
info:
title: E-Commerce API
version: "2.0.0"
# Global middlewares
x-barbacane-middlewares:
- name: request-id
config:
header: X-Request-ID
- name: cors
config:
allowed_origins: ["https://shop.example.com"]
- name: rate-limit
config:
quota: 200
window: 60
paths:
/health:
get:
operationId: healthCheck
x-barbacane-dispatch:
name: mock
config:
status: 200
body: '{"status":"healthy"}'
responses:
"200":
description: OK
/products:
get:
operationId: listProducts
x-barbacane-middlewares:
- name: cache
config:
ttl: 300
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.shop.example.com"
path: /api/products
responses:
"200":
description: Product list
/orders:
post:
operationId: createOrder
x-barbacane-middlewares:
- name: jwt-auth
config:
required: true
- name: idempotency
config:
header: Idempotency-Key
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.shop.example.com"
path: /api/orders
responses:
"201":
description: Order created
/orders/{orderId}/pay:
post:
operationId: payOrder
x-barbacane-middlewares:
- name: jwt-auth
config:
required: true
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://payments.example.com" # Different backend!
path: /process/{orderId}
timeout: 45.0
responses:
"200":
description: Payment processed
AsyncAPI Support
Barbacane supports AsyncAPI 3.x for event-driven APIs. AsyncAPI specs work similarly to OpenAPI, with channels and operations instead of paths and methods.
Sync-to-Async Bridge Pattern
AsyncAPI send operations are accessible via HTTP POST requests. This enables clients to publish messages to Kafka or NATS through the gateway:
- Client sends HTTP POST to the channel address
- Gateway validates the message against the schema
- Dispatcher publishes to Kafka/NATS
- Gateway returns 202 Accepted
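Sketched as a raw HTTP exchange against the example channel below (the payload and acknowledgment body are illustrative):

```text
POST /events/users HTTP/1.1
Content-Type: application/json

{"userId": "7d9f0c0e-3a1b-4c2d-9e8f-0a1b2c3d4e5f", "email": "user@example.com"}

HTTP/1.1 202 Accepted
Content-Type: application/json

{"status": "accepted"}
```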
Basic AsyncAPI Example
asyncapi: "3.0.0"
info:
title: User Events API
version: "1.0.0"
channels:
userEvents:
address: /events/users
messages:
UserCreated:
contentType: application/json
payload:
type: object
required:
- userId
- email
properties:
userId:
type: string
format: uuid
email:
type: string
format: email
operations:
publishUserCreated:
action: send
channel:
$ref: '#/channels/userEvents'
x-barbacane-dispatch:
name: kafka
config:
brokers: "kafka.internal:9092"
topic: "user-events"
Channel Parameters
AsyncAPI channels can have parameters (like path params in OpenAPI):
channels:
orderEvents:
address: /events/orders/{orderId}
parameters:
orderId:
schema:
type: string
format: uuid
messages:
OrderPlaced:
payload:
type: object
Message Validation
Message payloads are validated against the schema before dispatch. Invalid messages receive a 400 response with validation details.
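For example, posting a message that is missing a required property might produce a response of this general shape (illustrative only; a problem-details body per RFC 9457 is assumed here):

```json
{
  "type": "about:blank",
  "title": "Bad Request",
  "status": 400,
  "detail": "message payload validation failed: missing required property 'email'"
}
```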
HTTP Method Mapping
| AsyncAPI Action | HTTP Method |
|---|---|
| send | POST |
| receive | GET |
Middlewares
AsyncAPI operations support the same middleware extensions as OpenAPI:
operations:
publishEvent:
action: send
channel:
$ref: '#/channels/events'
x-barbacane-middlewares:
- name: jwt-auth
config:
required: true
x-barbacane-dispatch:
name: kafka
config:
brokers: "kafka.internal:9092"
topic: "events"
API Lifecycle
Barbacane supports API lifecycle management through standard OpenAPI deprecation and the x-sunset extension.
Marking Operations as Deprecated
Use the standard OpenAPI deprecated field:
paths:
/v1/users:
get:
deprecated: true
summary: List users (deprecated, use /v2/users)
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
When a client calls a deprecated endpoint, the response includes a Deprecation: true header per draft-ietf-httpapi-deprecation-header.
Setting a Sunset Date
Use x-sunset to specify when an endpoint will be removed (per RFC 8594):
paths:
/v1/users:
get:
deprecated: true
x-sunset: "Wed, 31 Dec 2025 23:59:59 GMT"
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
The sunset date must be in HTTP-date format (RFC 9110). When set, the response includes a Sunset header per RFC 8594.
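Getting the weekday and format right by hand is error-prone, so a valid HTTP-date can be generated from a timestamp (a sketch; assumes GNU date from coreutils):

```shell
# Print an RFC 9110 HTTP-date for use as an x-sunset value
date -u -d "2025-12-31 23:59:59 UTC" "+%a, %d %b %Y %H:%M:%S GMT"
# → Wed, 31 Dec 2025 23:59:59 GMT
```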
Example Response Headers
HTTP/1.1 200 OK
Deprecation: true
Sunset: Wed, 31 Dec 2025 23:59:59 GMT
Content-Type: application/json
Best Practices
- Mark deprecated first: Set deprecated: true before setting a sunset date
- Give advance notice: Set the sunset date at least 6 months in advance
- Update API docs: Include migration instructions in the operation summary or description
- Monitor usage: Track calls to deprecated endpoints via metrics
Validation
The compiler validates your spec:
barbacane validate --spec api.yaml
Errors you might see:
| Error Code | Meaning |
|---|---|
| E1010 | Routing conflict (same path+method in multiple specs) |
| E1020 | Missing x-barbacane-dispatch on operation |
| E1031 | Plaintext http:// upstream URL (use HTTPS or --allow-plaintext at compile time) |
| E1054 | Invalid path template (unbalanced braces, empty param name, duplicate param, {param+} not last segment, multiple wildcards) |
Next Steps
- Dispatchers - All dispatcher types and options
- Middlewares - Available middleware plugins
- CLI Reference - Full command options
Dispatchers
Dispatchers handle how requests are processed and responses are generated. Every operation in your OpenAPI or AsyncAPI spec needs an x-barbacane-dispatch extension.
Overview
paths:
/example:
get:
x-barbacane-dispatch:
name: <dispatcher-name>
config:
# dispatcher-specific config
Available Dispatchers
All dispatchers are implemented as WASM plugins and must be declared in your barbacane.yaml manifest.
mock
Returns static responses. Useful for health checks, stubs, and testing.
x-barbacane-dispatch:
name: mock
config:
status: 200
body: '{"status":"ok"}'
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
| status | integer | 200 | HTTP status code |
| body | string | "" | Response body |
| headers | object | {} | Additional response headers |
| content_type | string | "application/json" | Content-Type header value |
Examples
Simple health check:
x-barbacane-dispatch:
name: mock
config:
status: 200
body: '{"status":"healthy","version":"1.0.0"}'
Not found response:
x-barbacane-dispatch:
name: mock
config:
status: 404
body: '{"error":"resource not found"}'
Empty success:
x-barbacane-dispatch:
name: mock
config:
status: 204
Custom headers:
x-barbacane-dispatch:
name: mock
config:
status: 200
body: '<html><body>Hello</body></html>'
content_type: 'text/html'
headers:
X-Custom-Header: 'custom-value'
Cache-Control: 'no-cache'
http-upstream
Reverse proxy to an HTTP/HTTPS upstream backend. Supports path parameter substitution, header forwarding, and configurable timeouts.
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
path: "/api/v2/resource"
timeout: 30.0
Configuration
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | - | Base URL of the upstream (must be HTTPS in production) |
| path | string | No | Same as operation path | Upstream path template with {param} substitution |
| timeout | number | No | 30.0 | Request timeout in seconds |
| tls | object | No | - | TLS configuration for mTLS (see below) |
TLS Configuration (mTLS)
For upstreams that require mutual TLS (client certificate authentication):
| Property | Type | Required | Description |
|---|---|---|---|
| tls.client_cert | string | If mTLS | Path to PEM-encoded client certificate |
| tls.client_key | string | If mTLS | Path to PEM-encoded client private key |
| tls.ca | string | No | Path to PEM-encoded CA certificate for server verification |
Example with mTLS:
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://secure-backend.internal"
tls:
client_cert: "/etc/barbacane/certs/client.crt"
client_key: "/etc/barbacane/certs/client.key"
ca: "/etc/barbacane/certs/ca.crt"
Notes:
- Both client_cert and client_key must be specified together
- Certificate files must be in PEM format
- The ca option adds a custom CA for server verification (in addition to system roots)
- TLS-configured clients are cached and reused across requests
Path Parameters
Path parameters from the OpenAPI spec are automatically substituted in the path template:
# OpenAPI path: /users/{userId}/orders/{orderId}
/users/{userId}/orders/{orderId}:
get:
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://backend.internal"
path: "/api/users/{userId}/orders/{orderId}"
# Request: GET /users/123/orders/456
# Backend: GET https://backend.internal/api/users/123/orders/456
Path Rewriting
Map frontend paths to different backend paths:
# Frontend: /v2/products
# Backend: https://catalog.internal/api/v1/catalog/products
/v2/products:
get:
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://catalog.internal"
path: "/api/v1/catalog/products"
Wildcard Proxy
Proxy any path to upstream using a greedy wildcard parameter ({param+}):
/proxy/{path+}:
get:
parameters:
- name: path
in: path
required: true
allowReserved: true
schema:
type: string
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://httpbin.org"
path: "/{path}"
timeout: 10.0
Timeout Override
Per-operation timeout for long-running operations:
/reports/generate:
post:
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://reports.internal"
path: "/generate"
timeout: 120.0 # 2 minutes for report generation
Error Handling
The dispatcher returns RFC 9457 error responses:
| Status | Condition |
|---|---|
| 502 Bad Gateway | Connection failed or upstream returned invalid response |
| 503 Service Unavailable | Circuit breaker is open |
| 504 Gateway Timeout | Request exceeded configured timeout |
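A timeout, for instance, would yield a problem-details body along these lines (the detail text is illustrative):

```json
{
  "type": "about:blank",
  "title": "Gateway Timeout",
  "status": 504,
  "detail": "upstream request exceeded the 30s timeout"
}
```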
Security
- HTTPS required in production: http:// URLs are rejected by the compiler (E1031)
- Development mode: Use --allow-plaintext at compile time and --allow-plaintext-upstream at runtime
- TLS: Uses rustls with system CA roots by default
- mTLS: Configure tls.client_cert and tls.client_key for mutual TLS authentication
- Custom CA: Use tls.ca to add a custom CA certificate for private PKI
lambda
Invokes AWS Lambda functions via Lambda Function URLs. Implemented as a WASM plugin.
x-barbacane-dispatch:
name: lambda
config:
url: "https://abc123.lambda-url.us-east-1.on.aws/"
Configuration
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | - | Lambda Function URL |
| timeout | number | No | 30.0 | Request timeout in seconds |
| pass_through_headers | boolean | No | true | Pass incoming headers to Lambda |
Setup
- Enable Lambda Function URLs in AWS Console or via CLI
- Use the generated URL in your OpenAPI spec
# Enable Function URL for your Lambda
aws lambda create-function-url-config \
--function-name my-function \
--auth-type NONE
# Get the URL
aws lambda get-function-url-config --function-name my-function
Examples
Basic Lambda invocation:
/api/process:
post:
x-barbacane-dispatch:
name: lambda
config:
url: "https://abc123.lambda-url.us-east-1.on.aws/"
With custom timeout:
/api/long-running:
post:
x-barbacane-dispatch:
name: lambda
config:
url: "https://xyz789.lambda-url.eu-west-1.on.aws/"
timeout: 120.0
Request/Response Format
The dispatcher passes the incoming HTTP request to Lambda:
- Method, headers, and body are forwarded
- Lambda should return a standard HTTP response
Lambda response format:
{
"statusCode": 200,
"headers": {"content-type": "application/json"},
"body": "{\"result\": \"success\"}"
}
Error Handling
| Status | Condition |
|---|---|
| 502 Bad Gateway | Lambda invocation failed or returned invalid response |
| 504 Gateway Timeout | Request exceeded configured timeout |
kafka
Publishes messages to Apache Kafka topics. Designed for AsyncAPI specs using the sync-to-async bridge pattern: HTTP POST requests publish messages and return 202 Accepted. Uses a pure-Rust Kafka client with connection caching and a dedicated runtime.
x-barbacane-dispatch:
name: kafka
config:
brokers: "kafka.internal:9092"
topic: "user-events"
Configuration
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| brokers | string | Yes | - | Comma-separated Kafka broker addresses (e.g. "kafka:9092" or "broker1:9092,broker2:9092") |
| topic | string | Yes | - | Kafka topic to publish to |
| key | string | No | - | Message key expression (see below) |
| ack_response | object | No | - | Custom acknowledgment response |
| include_metadata | boolean | No | false | Include partition/offset in response |
| headers_from_request | array | No | [] | Request headers to forward as message headers |
Key Expression
The key property supports dynamic expressions:
| Expression | Description |
|---|---|
| $request.header.X-Key | Extract key from request header |
| $request.path.userId | Extract key from path parameter |
| literal-value | Use a literal string value |
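For example, keying messages by a path parameter so that all events for one user land on the same partition (channel address and parameter name assumed):

```yaml
x-barbacane-dispatch:
  name: kafka
  config:
    brokers: "kafka.internal:9092"
    topic: "user-events"
    key: "$request.path.userId"  # the {userId} path parameter value becomes the message key
```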
Custom Acknowledgment Response
Override the default 202 Accepted response:
x-barbacane-dispatch:
name: kafka
config:
brokers: "kafka.internal:9092"
topic: "orders"
ack_response:
body: {"queued": true, "estimatedDelivery": "5s"}
headers:
X-Queue-Name: "orders"
Examples
Basic Kafka publish:
# AsyncAPI spec
asyncapi: "3.0.0"
info:
title: Order Events
version: "1.0.0"
channels:
orderEvents:
address: /events/orders
messages:
OrderCreated:
payload:
type: object
properties:
orderId:
type: string
operations:
publishOrder:
action: send
channel:
$ref: '#/channels/orderEvents'
x-barbacane-dispatch:
name: kafka
config:
brokers: "kafka.internal:9092"
topic: "order-events"
key: "$request.header.X-Order-Id"
include_metadata: true
With request header forwarding:
x-barbacane-dispatch:
name: kafka
config:
brokers: "kafka.internal:9092"
topic: "audit-events"
headers_from_request:
- "x-correlation-id"
- "x-user-id"
Response
On successful publish, returns 202 Accepted:
{
"status": "accepted",
"topic": "order-events",
"partition": 3,
"offset": 12345
}
(partition/offset only included if include_metadata: true)
Error Handling
| Status | Condition |
|---|---|
| 502 Bad Gateway | Kafka publish failed |
nats
Publishes messages to NATS subjects. Designed for AsyncAPI specs using the sync-to-async bridge pattern. Uses a pure-Rust NATS client with connection caching and a dedicated runtime.
x-barbacane-dispatch:
name: nats
config:
url: "nats://nats.internal:4222"
subject: "notifications.user"
Configuration
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | - | NATS server URL (e.g. "nats://localhost:4222") |
| subject | string | Yes | - | NATS subject to publish to (supports wildcards) |
| ack_response | object | No | - | Custom acknowledgment response |
| headers_from_request | array | No | [] | Request headers to forward as message headers |
Examples
Basic NATS publish:
asyncapi: "3.0.0"
info:
title: Notification Service
version: "1.0.0"
channels:
notifications:
address: /notifications/{userId}
parameters:
userId:
schema:
type: string
messages:
Notification:
payload:
type: object
required:
- title
properties:
title:
type: string
body:
type: string
operations:
sendNotification:
action: send
channel:
$ref: '#/channels/notifications'
x-barbacane-dispatch:
name: nats
config:
url: "nats://nats.internal:4222"
subject: "notifications"
headers_from_request:
- "x-request-id"
Custom acknowledgment:
x-barbacane-dispatch:
name: nats
config:
url: "nats://nats.internal:4222"
subject: "events.user.signup"
ack_response:
body: {"accepted": true}
headers:
X-Subject: "events.user.signup"
Response
On successful publish, returns 202 Accepted:
{
"status": "accepted",
"subject": "notifications"
}
Error Handling
| Status | Condition |
|---|---|
| 502 Bad Gateway | NATS publish failed |
s3
Proxies requests to AWS S3 or any S3-compatible object storage (MinIO, RustFS, Ceph, etc.) with AWS Signature Version 4 signing. Supports multi-bucket routing via path parameters and single-bucket CDN-style routes.
x-barbacane-dispatch:
name: s3
config:
region: us-east-1
access_key_id: env://AWS_ACCESS_KEY_ID
secret_access_key: env://AWS_SECRET_ACCESS_KEY
Configuration
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| access_key_id | string | Yes | - | AWS access key ID. Supports env:// references (e.g. env://AWS_ACCESS_KEY_ID) |
| secret_access_key | string | Yes | - | AWS secret access key. Supports env:// references |
| region | string | Yes | - | AWS region (e.g. us-east-1, eu-west-1) |
| session_token | string | No | - | Session token for temporary credentials (STS / AssumeRole / IRSA). Supports env:// references |
| endpoint | string | No | - | Custom S3-compatible endpoint URL (e.g. https://minio.internal:9000). When set, path-style URLs are always used |
| force_path_style | boolean | No | false | Use path-style URLs (s3.{region}.amazonaws.com/{bucket}/{key}) instead of virtual-hosted style. Automatically true when endpoint is set |
| bucket | string | No | - | Hard-coded bucket name. When set, bucket_param is ignored. Use for single-bucket routes like /assets/{key+} |
| bucket_param | string | No | "bucket" | Name of the path parameter that holds the bucket |
| key_param | string | No | "key" | Name of the path parameter that holds the object key |
| timeout | number | No | 30 | Request timeout in seconds |
URL Styles
Virtual-hosted (default for AWS S3):
Host: {bucket}.s3.{region}.amazonaws.com
Path: /{key}
Path-style (set force_path_style: true or when endpoint is set):
Host: s3.{region}.amazonaws.com # or custom endpoint host
Path: /{bucket}/{key}
Custom endpoints always use path-style regardless of force_path_style.
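For example, a config with endpoint: https://minio.internal:9000, bucket uploads, and key docs/report.pdf resolves to:

```text
Host: minio.internal:9000
Path: /uploads/docs/report.pdf
```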
Wildcard Keys
Use {key+} (greedy wildcard) to capture multi-segment S3 keys containing slashes:
/files/{bucket}/{key+}:
get:
parameters:
- { name: bucket, in: path, required: true, schema: { type: string } }
- { name: key, in: path, required: true, allowReserved: true, schema: { type: string } }
x-barbacane-dispatch:
name: s3
config:
region: us-east-1
access_key_id: env://AWS_ACCESS_KEY_ID
secret_access_key: env://AWS_SECRET_ACCESS_KEY
GET /files/uploads/2024/01/report.pdf → S3 key 2024/01/report.pdf in bucket uploads.
Examples
Multi-bucket proxy with OIDC authentication:
paths:
/storage/{bucket}/{key+}:
get:
parameters:
- { name: bucket, in: path, required: true, schema: { type: string } }
- { name: key, in: path, required: true, allowReserved: true, schema: { type: string } }
x-barbacane-middlewares:
- name: oidc-auth
config:
issuer: https://auth.example.com
audience: my-api
required: true
x-barbacane-dispatch:
name: s3
config:
region: eu-west-1
access_key_id: env://AWS_ACCESS_KEY_ID
secret_access_key: env://AWS_SECRET_ACCESS_KEY
Single-bucket CDN (hard-coded bucket, rate limited):
paths:
/assets/{key+}:
get:
parameters:
- { name: key, in: path, required: true, allowReserved: true, schema: { type: string } }
x-barbacane-middlewares:
- name: rate-limit
config:
quota: 120
window: 60
x-barbacane-dispatch:
name: s3
config:
region: us-east-1
bucket: public-assets
access_key_id: env://AWS_ACCESS_KEY_ID
secret_access_key: env://AWS_SECRET_ACCESS_KEY
S3-compatible storage (MinIO / RustFS):
x-barbacane-dispatch:
name: s3
config:
region: us-east-1
endpoint: https://minio.internal:9000
access_key_id: env://MINIO_ACCESS_KEY
secret_access_key: env://MINIO_SECRET_KEY
bucket: uploads
Temporary credentials (STS / AssumeRole / IRSA):
x-barbacane-dispatch:
name: s3
config:
region: us-east-1
access_key_id: env://AWS_ACCESS_KEY_ID
secret_access_key: env://AWS_SECRET_ACCESS_KEY
session_token: env://AWS_SESSION_TOKEN
bucket: my-bucket
Error Handling
| Status | Condition |
|---|---|
| 400 Bad Request | Missing bucket or key path parameter |
| 502 Bad Gateway | host_http_call failed (network error, endpoint unreachable) |
| 403 / 404 / 5xx | Passed through transparently from S3 |
Security
- Credentials: Use `env://` references so secrets are never baked into spec files or compiled artifacts
- Session tokens: Support for STS, AssumeRole, and IRSA (IAM Roles for Service Accounts) via `session_token`
- Signing: All requests are signed with AWS Signature Version 4. The signed headers are `host`, `x-amz-content-sha256`, `x-amz-date`, and `x-amz-security-token` (when a session token is present)
- Binary objects: The current implementation returns the response body as a UTF-8 string, so binary objects (images, archives, etc.) are not suitable for this dispatcher; use pre-signed URLs for binary downloads
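The SigV4 signing key derivation follows AWS's documented HMAC chain (date, then region, then service, then the literal `aws4_request`); a minimal sketch with an illustrative function name — the dispatcher's real implementation also builds the canonical request and string-to-sign:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str = "s3") -> bytes:
    """Derive the SigV4 signing key via the documented HMAC-SHA256 chain."""
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()
    k_date = _hmac(("AWS4" + secret_key).encode(), date)  # date as YYYYMMDD
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")
```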
ws-upstream
Transparent WebSocket proxy. Upgrades the client connection to WebSocket and relays frames bidirectionally to an upstream WebSocket server. The gateway handles the full lifecycle: handshake, frame relay, and connection teardown.
x-barbacane-dispatch:
name: ws-upstream
config:
url: "ws://echo.internal:8080"
Configuration
| Property | Type | Required | Default | Description |
|---|---|---|---|---|
url | string | Yes | - | Upstream WebSocket URL (ws:// or wss://) |
connect_timeout | number | No | 5 | Connection timeout in seconds (0.1–300) |
path | string | No | Same as operation path | Upstream path template with {param} substitution |
How It Works
- Client sends an HTTP request with `Upgrade: websocket`
- The plugin validates the upgrade header and connects to the upstream WebSocket server
- On success, the gateway returns `101 Switching Protocols` to the client
- Frames are relayed bidirectionally between client and upstream until either side closes
The middleware chain (x-barbacane-middlewares) runs on the initial HTTP request and can inspect/modify headers, enforce authentication, apply rate limiting, etc. Once the connection is upgraded, middleware is bypassed for individual frames.
Path Parameters
Path parameters from the OpenAPI spec are substituted in the path template:
/ws/{room}:
get:
parameters:
- name: room
in: path
required: true
schema:
type: string
x-barbacane-dispatch:
name: ws-upstream
config:
url: "ws://chat.internal:8080"
path: "/rooms/{room}"
# Request: GET /ws/general → Upstream: ws://chat.internal:8080/rooms/general
Query String Forwarding
Query parameters from the client request are automatically forwarded to the upstream URL:
Client: GET /ws/echo?token=abc → Upstream: ws://echo.internal:8080/?token=abc
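Path substitution and query forwarding can be sketched together (hypothetical helper name, for illustration only):

```python
from urllib.parse import urlsplit, urlunsplit

def build_upstream_url(base_url: str, path_template: str,
                       path_params: dict[str, str], query: str = "") -> str:
    """Substitute {param} placeholders and append the client's query string."""
    path = path_template
    for name, value in path_params.items():
        path = path.replace("{" + name + "}", value)
    scheme, host, base_path, _, _ = urlsplit(base_url)
    return urlunsplit((scheme, host, base_path.rstrip("/") + path, query, ""))
```

For example, `build_upstream_url("ws://chat.internal:8080", "/rooms/{room}", {"room": "general"})` yields `ws://chat.internal:8080/rooms/general`.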
Examples
Basic WebSocket proxy:
/ws:
get:
x-barbacane-dispatch:
name: ws-upstream
config:
url: "ws://backend.internal:8080"
With authentication and path routing:
/ws/{room}:
get:
parameters:
- name: room
in: path
required: true
schema:
type: string
x-barbacane-middlewares:
- name: jwt-auth
config:
required: true
x-barbacane-dispatch:
name: ws-upstream
config:
url: "wss://realtime.internal"
path: "/rooms/{room}"
connect_timeout: 10
Secure upstream (WSS):
/live:
get:
x-barbacane-dispatch:
name: ws-upstream
config:
url: "wss://stream.example.com"
connect_timeout: 15
Error Handling
| Status | Condition |
|---|---|
| 400 Bad Request | Missing Upgrade: websocket header |
| 502 Bad Gateway | Upstream connection failed or timed out |
Security
- WSS in production: Use `wss://` for encrypted upstream connections
- Development mode: `ws://` URLs are allowed with `--allow-plaintext-upstream`
- Authentication: Apply auth middleware (jwt-auth, oidc-auth, etc.) to protect WebSocket endpoints; middleware runs on the initial upgrade request
Best Practices
Set Appropriate Timeouts
- Fast endpoints (health, simple reads): 5-10s
- Normal operations: 30s (default)
- Long operations (reports, uploads): 60-120s
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://backend.internal"
timeout: 10.0 # Quick timeout for simple operation
Mock for Development
Use mock dispatchers during API design:
/users/{id}:
get:
x-barbacane-dispatch:
name: mock
config:
status: 200
body: '{"id":"123","name":"Test User"}'
Then switch to real backend:
/users/{id}:
get:
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://user-service.internal"
path: "/api/users/{id}"
Use HTTPS in Production
Barbacane enforces HTTPS for upstream connections at two levels:
| Flag | Command | Purpose |
|---|---|---|
--allow-plaintext | compile | Allow http:// URLs in spec (bypasses E1031 validation) |
--allow-plaintext-upstream | serve | Allow runtime HTTP client to connect to http:// upstreams |
Why two flags?
- Compile-time validation catches insecure URLs early in your CI pipeline
- Runtime enforcement provides defense-in-depth, even if specs are modified
For local development with services like WireMock or Docker containers:
# Allow http:// URLs during compilation
barbacane compile --spec api.yaml --manifest barbacane.yaml --output api.bca --allow-plaintext
# Allow plaintext connections at runtime
barbacane serve --artifact api.bca --dev --allow-plaintext-upstream
In production, omit both flags to ensure all upstream connections use TLS.
Middlewares
Middlewares process requests before they reach dispatchers and can modify responses on the way back. They’re used for cross-cutting concerns like authentication, rate limiting, and caching.
Overview
Middlewares are configured with x-barbacane-middlewares:
x-barbacane-middlewares:
- name: <middleware-name>
config:
# middleware-specific config
Middleware Chain
Middlewares execute in order:
Request → [Global MW 1] → [Global MW 2] → [Operation MW] → Dispatcher
│
Response ← [Global MW 1] ← [Global MW 2] ← [Operation MW] ←───────┘
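The ordering behaves like nested request/response phases; a minimal sketch (illustrative only, not the actual runtime API):

```python
def run_chain(middlewares, dispatch, request):
    """middlewares: list of (on_request, on_response) pairs, outermost first."""
    for on_request, _ in middlewares:          # request phase: MW 1 → MW 2 → ...
        request = on_request(request)
    response = dispatch(request)               # dispatcher produces the response
    for _, on_response in reversed(middlewares):  # response phase: ... → MW 2 → MW 1
        response = on_response(response)
    return response
```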
Global vs Operation Middlewares
Global Middlewares
Apply to all operations:
openapi: "3.1.0"
info:
title: My API
version: "1.0.0"
# These apply to every operation
x-barbacane-middlewares:
- name: request-id
config:
header: X-Request-ID
- name: cors
config:
allowed_origins: ["https://app.example.com"]
paths:
/users:
get:
# Inherits global middlewares
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
Operation Middlewares
Apply to specific operations (run after global):
paths:
/admin/users:
get:
x-barbacane-middlewares:
- name: jwt-auth
config:
required: true
scopes: ["admin:read"]
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
Merging with Global Middlewares
When an operation declares its own middlewares, they are merged with the global chain:
- Global middlewares run first, in order
- If an operation middleware has the same name as a global one, the operation config overrides that global entry
- Non-overridden global middlewares are preserved
# Global: rate-limit at 100/min + cors
x-barbacane-middlewares:
- name: rate-limit
config:
quota: 100
window: 60
- name: cors
config:
allow_origin: "*"
paths:
/public/feed:
get:
# Override rate-limit, cors is still applied from globals
x-barbacane-middlewares:
- name: rate-limit
config:
quota: 1000
window: 60
# Resolved chain: cors (global) → rate-limit (operation override)
To explicitly disable all middlewares for an operation, use an empty array:
paths:
/internal/health:
get:
x-barbacane-middlewares: [] # No middlewares at all
Consumer Identity Headers
All authentication middlewares set two standard headers on successful authentication, in addition to their plugin-specific headers:
| Header | Description | Example |
|---|---|---|
x-auth-consumer | Canonical consumer identifier | "alice", "user-123" |
x-auth-consumer-groups | Comma-separated group/role memberships | "admin,editor", "read" |
These standard headers enable downstream middlewares (like acl) to enforce authorization without coupling to a specific auth plugin.
| Plugin | x-auth-consumer source | x-auth-consumer-groups source |
|---|---|---|
basic-auth | username | roles array |
jwt-auth | sub claim | configurable via groups_claim |
oidc-auth | sub claim | scope claim (space→comma) |
oauth2-auth | sub claim (fallback: username) | scope claim (space→comma) |
apikey-auth | id field | scopes array |
Authentication Middlewares
jwt-auth
Validates JWT bearer tokens: token structure and standard claims (`iss`, `aud`, `exp`, `nbf`).
x-barbacane-middlewares:
- name: jwt-auth
config:
issuer: "https://auth.example.com" # Optional: validate iss claim
audience: "my-api" # Optional: validate aud claim
groups_claim: "roles" # Optional: claim name for consumer groups
skip_signature_validation: true # Required until JWKS support is implemented
Accepted algorithms: RS256, RS384, RS512, ES256, ES384, ES512. HS256/HS512 and none are rejected.
Note: Cryptographic signature validation is not yet implemented. Set skip_signature_validation: true in production until JWKS support lands. Without it, all tokens are rejected with 401 at the signature step.
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
issuer | string | - | Expected iss claim. Tokens not matching are rejected |
audience | string | - | Expected aud claim. Tokens not matching are rejected |
clock_skew_seconds | integer | 60 | Tolerance in seconds for exp/nbf validation |
groups_claim | string | - | Claim name to extract consumer groups from (e.g., "roles", "groups"). Value is set as x-auth-consumer-groups |
skip_signature_validation | boolean | false | Skip cryptographic signature check. Required until JWKS support is implemented |
Context Headers
Sets headers for downstream:
- `x-auth-consumer` - Consumer identifier (from `sub` claim)
- `x-auth-consumer-groups` - Comma-separated groups (from `groups_claim`, if configured)
- `x-auth-sub` - Subject (user ID)
- `x-auth-claims` - Full JWT claims as JSON
apikey-auth
Validates API keys from header or query parameter.
x-barbacane-middlewares:
- name: apikey-auth
config:
key_location: header # or "query"
header_name: X-API-Key # when key_location is "header"
query_param: api_key # when key_location is "query"
keys:
sk_live_abc123:
id: key-001
name: Production Key
scopes: ["read", "write"]
sk_test_xyz789:
id: key-002
name: Test Key
scopes: ["read"]
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
key_location | string | header | Where to find key (header or query) |
header_name | string | X-API-Key | Header name (when key_location: header) |
query_param | string | api_key | Query param name (when key_location: query) |
keys | object | {} | Map of valid API keys to metadata |
Context Headers
Sets headers for downstream:
- `x-auth-consumer` - Consumer identifier (from key `id`)
- `x-auth-consumer-groups` - Comma-separated groups (from key `scopes`)
- `x-auth-key-id` - Key identifier
- `x-auth-key-name` - Key human-readable name
- `x-auth-key-scopes` - Comma-separated scopes
oauth2-auth
Validates Bearer tokens via RFC 7662 token introspection.
x-barbacane-middlewares:
- name: oauth2-auth
config:
introspection_endpoint: https://auth.example.com/oauth2/introspect
client_id: my-api-client
client_secret: "env://OAUTH2_CLIENT_SECRET" # resolved at startup
required_scopes: "read write" # space-separated
timeout: 5.0 # seconds
The client_secret uses a secret reference (env://) which is resolved at gateway startup. See Secrets for details.
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
introspection_endpoint | string | required | RFC 7662 introspection URL |
client_id | string | required | Client ID for introspection auth |
client_secret | string | required | Client secret for introspection auth |
required_scopes | string | - | Space-separated required scopes |
timeout | float | 5.0 | Introspection request timeout (seconds) |
Context Headers
Sets headers for downstream:
- `x-auth-consumer` - Consumer identifier (from `sub`, fallback to `username`)
- `x-auth-consumer-groups` - Comma-separated groups (from `scope`)
- `x-auth-sub` - Subject
- `x-auth-scope` - Token scopes
- `x-auth-client-id` - Client ID
- `x-auth-username` - Username (if present)
- `x-auth-claims` - Full introspection response as JSON
Error Responses
- `401 Unauthorized` - Missing token, invalid token, or inactive token
- `403 Forbidden` - Token lacks required scopes
Includes RFC 6750 WWW-Authenticate header with error details.
oidc-auth
OpenID Connect authentication via OIDC Discovery and JWKS. Automatically fetches the provider’s signing keys and validates JWT tokens with full cryptographic verification.
x-barbacane-middlewares:
- name: oidc-auth
config:
issuer_url: https://accounts.google.com
audience: my-api-client-id
required_scopes: "openid profile email"
issuer_override: https://external.example.com # optional
clock_skew_seconds: 60
jwks_refresh_seconds: 300
timeout: 5.0
allow_query_token: false # RFC 6750 §2.3 query param fallback
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
issuer_url | string | required | OIDC issuer URL (e.g., https://accounts.google.com) |
audience | string | - | Expected aud claim. If set, tokens must match |
required_scopes | string | - | Space-separated required scopes |
issuer_override | string | - | Override expected iss claim (for split-network setups like Docker) |
clock_skew_seconds | integer | 60 | Clock skew tolerance for exp/nbf validation |
jwks_refresh_seconds | integer | 300 | How often to refresh JWKS keys (seconds) |
timeout | float | 5.0 | HTTP timeout for discovery and JWKS calls (seconds) |
allow_query_token | boolean | false | Allow token extraction from the access_token query parameter (RFC 6750 §2.3). Use with caution — tokens in URLs risk leaking via logs and referer headers. |
How It Works
- Extracts the Bearer token from the `Authorization` header (or from the `access_token` query parameter if `allow_query_token` is enabled and no header is present)
- Parses the JWT header to determine the signing algorithm and key ID (`kid`)
- Fetches `{issuer_url}/.well-known/openid-configuration` (cached)
- Fetches the JWKS endpoint from the discovery document (cached with TTL)
- Finds the matching public key by `kid` (or `kty`/`use` fallback)
- Verifies the signature using `host_verify_signature` (RS256/RS384/RS512, ES256/ES384)
- Validates claims: `iss`, `aud`, `exp`, `nbf`
- Checks required scopes (if configured)
Context Headers
Sets headers for downstream:
- `x-auth-consumer` - Consumer identifier (from `sub` claim)
- `x-auth-consumer-groups` - Comma-separated groups (from `scope`, space→comma)
- `x-auth-sub` - Subject (user ID)
- `x-auth-scope` - Token scopes
- `x-auth-claims` - Full JWT payload as JSON
Error Responses
- `401 Unauthorized` - Missing token, invalid token, expired token, bad signature, unknown issuer
- `403 Forbidden` - Token lacks required scopes
Includes RFC 6750 WWW-Authenticate header with error details.
basic-auth
Validates credentials from the Authorization: Basic header per RFC 7617. Useful for internal APIs, admin endpoints, or simple services that don’t need a full identity provider.
x-barbacane-middlewares:
- name: basic-auth
config:
realm: "My API"
strip_credentials: true
credentials:
admin:
password: "env://ADMIN_PASSWORD"
roles: ["admin", "editor"]
readonly:
password: "env://READONLY_PASSWORD"
roles: ["viewer"]
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
realm | string | api | Authentication realm shown in WWW-Authenticate challenge |
strip_credentials | boolean | true | Remove Authorization header before forwarding to upstream |
credentials | object | {} | Map of username to credential entry |
Each credential entry:
| Property | Type | Default | Description |
|---|---|---|---|
password | string | required | Password for this user (supports secret references) |
roles | array | [] | Optional roles for authorization |
Context Headers
Sets headers for downstream:
- `x-auth-consumer` - Consumer identifier (username)
- `x-auth-consumer-groups` - Comma-separated groups (from `roles`)
- `x-auth-user` - Authenticated username
- `x-auth-roles` - Comma-separated roles (only set if the user has roles)
Error Responses
Returns 401 Unauthorized with WWW-Authenticate: Basic realm="<realm>" and Problem JSON:
{
"type": "urn:barbacane:error:authentication-failed",
"title": "Authentication failed",
"status": 401,
"detail": "Invalid username or password"
}
Authorization Middlewares
acl
Enforces access control based on consumer identity and group membership. Reads the standard x-auth-consumer and x-auth-consumer-groups headers set by upstream auth plugins.
x-barbacane-middlewares:
- name: basic-auth
config:
realm: "my-api"
credentials:
admin:
password: "env://ADMIN_PASSWORD"
roles: ["admin", "editor"]
viewer:
password: "env://VIEWER_PASSWORD"
roles: ["viewer"]
- name: acl
config:
allow:
- admin
deny:
- banned
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
allow | array | [] | Group names allowed access. If non-empty, consumer must belong to at least one |
deny | array | [] | Group names denied access (takes precedence over allow) |
allow_consumers | array | [] | Specific consumer IDs allowed (bypasses group checks) |
deny_consumers | array | [] | Specific consumer IDs denied (highest precedence) |
consumer_groups | object | {} | Static consumer-to-groups mapping, merged with x-auth-consumer-groups header |
message | string | Access denied by ACL policy | Custom 403 error message |
hide_consumer_in_errors | boolean | false | Suppress consumer identity in 403 error body |
Evaluation Order
- Missing/empty `x-auth-consumer` header → 403
- `deny_consumers` match → 403
- `allow_consumers` match → 200 (bypasses group checks)
- Resolve groups (merge `x-auth-consumer-groups` header + static `consumer_groups` config)
- `deny` group match → 403 (takes precedence over allow)
- `allow` non-empty + group match → 200
- `allow` non-empty + no group match → 403
- `allow` empty → 200 (only deny rules active)
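The evaluation order can be sketched as a small decision function (illustrative name; returns the resulting HTTP status):

```python
def acl_decision(consumer: str, header_groups: list[str], cfg: dict) -> int:
    """Return 200 (allow) or 403 (deny) following the documented order."""
    if not consumer:
        return 403                      # missing/empty x-auth-consumer
    if consumer in cfg.get("deny_consumers", []):
        return 403
    if consumer in cfg.get("allow_consumers", []):
        return 200                      # bypasses group checks
    # Merge header groups with the static consumer_groups mapping (deduplicated)
    groups = set(header_groups) | set(cfg.get("consumer_groups", {}).get(consumer, []))
    if groups & set(cfg.get("deny", [])):
        return 403                      # deny group takes precedence
    allow = set(cfg.get("allow", []))
    if allow:
        return 200 if groups & allow else 403
    return 200                          # allow empty: only deny rules active
```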
Static Consumer Groups
You can supplement the groups from the auth plugin with static mappings:
- name: acl
config:
allow:
- premium
consumer_groups:
free_user:
- premium # Grant premium access to specific consumers
Groups from the consumer_groups config are merged with the x-auth-consumer-groups header (deduplicated).
Error Response
Returns 403 Forbidden with Problem JSON (RFC 9457):
{
"type": "urn:barbacane:error:acl-denied",
"title": "Forbidden",
"status": 403,
"detail": "Access denied by ACL policy",
"consumer": "alice"
}
Set hide_consumer_in_errors: true to omit the consumer field.
opa-authz
Policy-based access control via Open Policy Agent. Sends request context to an OPA REST API endpoint and enforces the boolean decision. Typically placed after an authentication middleware so that auth claims are available as OPA input.
x-barbacane-middlewares:
- name: jwt-auth
config:
issuer: "https://auth.example.com"
skip_signature_validation: true
- name: opa-authz
config:
opa_url: "http://opa:8181/v1/data/authz/allow"
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
opa_url | string | (required) | OPA Data API endpoint URL (e.g., http://opa:8181/v1/data/authz/allow) |
timeout | number | 5 | HTTP request timeout in seconds for OPA calls |
include_body | boolean | false | Include the request body in the OPA input payload |
include_claims | boolean | true | Include parsed x-auth-claims header (set by upstream auth plugins) in the OPA input |
deny_message | string | Authorization denied by policy | Custom message returned in the 403 response body |
OPA Input Payload
The plugin POSTs the following JSON to your OPA endpoint:
{
"input": {
"method": "GET",
"path": "/admin/users",
"query": "page=1",
"headers": { "x-auth-consumer": "alice" },
"client_ip": "10.0.0.1",
"claims": { "sub": "alice", "roles": ["admin"] },
"body": "..."
}
}
- `claims` is included only when `include_claims` is `true` and the `x-auth-claims` header contains valid JSON (set by auth plugins like `jwt-auth`, `oauth2-auth`)
- `body` is included only when `include_body` is `true`
Decision Logic
The plugin expects OPA to return the standard Data API response:
{ "result": true }
| OPA Response | Result |
|---|---|
{"result": true} | 200 — request continues |
{"result": false} | 403 — access denied |
{} (undefined document) | 403 — access denied |
Non-boolean result | 403 — access denied |
| OPA unreachable or error | 503 — service unavailable |
Error Responses
403 Forbidden — OPA denies access:
{
"type": "urn:barbacane:error:opa-denied",
"title": "Forbidden",
"status": 403,
"detail": "Authorization denied by policy"
}
503 Service Unavailable — OPA is unreachable or returns a non-200 status:
{
"type": "urn:barbacane:error:opa-unavailable",
"title": "Service Unavailable",
"status": 503,
"detail": "OPA service unreachable"
}
Example OPA Policy
package authz
default allow := false
# Allow admins everywhere
allow if {
input.claims.roles[_] == "admin"
}
# Allow GET on public paths
allow if {
input.method == "GET"
startswith(input.path, "/public/")
}
cel
Inline policy evaluation using CEL (Common Expression Language). Evaluates expressions directly in-process — no external service needed. CEL is the same language used by Envoy, Kubernetes, and Firebase for policy rules.
x-barbacane-middlewares:
- name: jwt-auth
config:
issuer: "https://auth.example.com"
- name: cel
config:
expression: >
'admin' in request.claims.roles
|| (request.method == 'GET' && request.path.startsWith('/public/'))
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
expression | string | (required) | CEL expression that must evaluate to a boolean |
deny_message | string | Access denied by policy | Custom message returned in the 403 response body |
Request Context
The expression has access to a request object with these fields:
| Variable | Type | Description |
|---|---|---|
request.method | string | HTTP method (GET, POST, etc.) |
request.path | string | Request path (e.g., /api/users) |
request.query | string | Query string (empty string if none) |
request.headers | map | Request headers (e.g., request.headers.authorization) |
request.body | string | Request body (empty string if none) |
request.client_ip | string | Client IP address |
request.path_params | map | Path parameters (e.g., request.path_params.id) |
request.consumer | string | Consumer identity from x-auth-consumer header (empty if absent) |
request.claims | map | Parsed JSON from x-auth-claims header (empty map if absent/invalid) |
CEL Features
CEL supports a rich expression language:
// String operations
request.path.startsWith('/api/')
request.path.endsWith('.json')
request.headers.host.contains('example')
// List operations
'admin' in request.claims.roles
request.claims.roles.exists(r, r == 'editor')
// Field presence
has(request.claims.email)
// Logical operators
request.method == 'GET' && request.consumer != ''
request.method in ['GET', 'HEAD', 'OPTIONS']
!(request.client_ip.startsWith('192.168.'))
Decision Logic
| Expression Result | HTTP Response |
|---|---|
true | Request continues to next middleware/dispatcher |
false | 403 Forbidden |
| Non-boolean | 500 Internal Server Error |
| Parse/evaluation error | 500 Internal Server Error |
Error Responses
403 Forbidden — expression evaluates to false:
{
"type": "urn:barbacane:error:cel-denied",
"title": "Forbidden",
"status": 403,
"detail": "Access denied by policy"
}
500 Internal Server Error — invalid expression or non-boolean result:
{
"type": "urn:barbacane:error:cel-evaluation",
"title": "Internal Server Error",
"status": 500,
"detail": "expression returned string, expected bool"
}
CEL vs OPA
| | cel | opa-authz |
|---|---|---|
| Deployment | Embedded (no sidecar) | External OPA server |
| Language | CEL | Rego |
| Latency | Microseconds (in-process) | HTTP round-trip |
| Best for | Inline route-level rules | Complex policy repos, audit trails |
Rate Limiting
rate-limit
Limits request rate per client using a sliding window algorithm. Implements IETF draft-ietf-httpapi-ratelimit-headers.
x-barbacane-middlewares:
- name: rate-limit
config:
quota: 100
window: 60
policy_name: default
partition_key: client_ip
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
quota | integer | required | Maximum requests allowed in the window |
window | integer | required | Window duration in seconds |
policy_name | string | default | Policy name for RateLimit-Policy header |
partition_key | string | client_ip | Rate limit key source |
Partition Key Sources
- `client_ip` - Client IP from `X-Forwarded-For` or `X-Real-IP`
- `header:<name>` - Header value (e.g., `header:X-API-Key`)
- `context:<key>` - Context value (e.g., `context:auth.sub`)
- Any static string - Same limit for all requests
Response Headers
On allowed requests:
- `X-RateLimit-Policy` - Policy name and configuration
- `X-RateLimit-Limit` - Maximum requests in window
- `X-RateLimit-Remaining` - Remaining requests
- `X-RateLimit-Reset` - Unix timestamp when window resets
On rate-limited requests (429):
- `RateLimit-Policy` - IETF draft header
- `RateLimit` - IETF draft combined header
- `Retry-After` - Seconds until retry is allowed
CORS
cors
Handles Cross-Origin Resource Sharing per the Fetch specification. Processes preflight OPTIONS requests and adds CORS headers to responses.
x-barbacane-middlewares:
- name: cors
config:
allowed_origins:
- https://app.example.com
- https://admin.example.com
allowed_methods:
- GET
- POST
- PUT
- DELETE
allowed_headers:
- Authorization
- Content-Type
expose_headers:
- X-Request-ID
max_age: 86400
allow_credentials: false
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
allowed_origins | array | [] | Allowed origins (["*"] for any, or specific origins) |
allowed_methods | array | ["GET", "POST"] | Allowed HTTP methods |
allowed_headers | array | [] | Allowed request headers (beyond simple headers) |
expose_headers | array | [] | Headers exposed to browser JavaScript |
max_age | integer | 3600 | Preflight cache time (seconds) |
allow_credentials | boolean | false | Allow credentials (cookies, auth headers) |
Origin Patterns
Origins can be:
- Exact match: `https://app.example.com`
- Wildcard subdomain: `*.example.com` (matches `sub.example.com`)
- Wildcard: `*` (only when `allow_credentials: false`)
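Origin matching under these patterns can be sketched as (illustrative function name):

```python
from urllib.parse import urlsplit

def origin_allowed(origin: str, allowed: list[str], allow_credentials: bool) -> bool:
    """Check an Origin header value against the configured patterns."""
    for pattern in allowed:
        if pattern == "*":
            return not allow_credentials        # "*" is invalid with credentials
        if pattern == origin:
            return True                         # exact match
        if pattern.startswith("*."):            # wildcard subdomain
            host = urlsplit(origin).hostname or ""
            if host.endswith(pattern[1:]):      # ".example.com" suffix check
                return True
    return False
```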
Error Responses
- `403 Forbidden` - Origin not in allowed list
- `403 Forbidden` - Method not allowed (preflight)
- `403 Forbidden` - Headers not allowed (preflight)
Preflight Responses
Returns 204 No Content with:
- `Access-Control-Allow-Origin`
- `Access-Control-Allow-Methods`
- `Access-Control-Allow-Headers`
- `Access-Control-Max-Age`
- `Vary: Origin, Access-Control-Request-Method, Access-Control-Request-Headers`
Request Tracing
correlation-id
Propagates or generates correlation IDs (UUID v7) for distributed tracing. The correlation ID is passed to upstream services and included in responses.
x-barbacane-middlewares:
- name: correlation-id
config:
header_name: X-Correlation-ID
generate_if_missing: true
trust_incoming: true
include_in_response: true
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
header_name | string | X-Correlation-ID | Header name for the correlation ID |
generate_if_missing | boolean | true | Generate new UUID v7 if not provided |
trust_incoming | boolean | true | Trust and propagate incoming correlation IDs |
include_in_response | boolean | true | Include correlation ID in response headers |
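UUID v7 places a 48-bit Unix-millisecond timestamp in the most significant bits, so IDs sort roughly by creation time. A hand-rolled sketch of the RFC 9562 field layout (the gateway itself generates these in Rust; this is illustration only):

```python
import os
import time

def uuid7() -> str:
    """Generate a UUID v7: 48-bit ms timestamp, version/variant bits, randomness."""
    ts_ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big")     # 80 random bits
    value = (ts_ms & (2**48 - 1)) << 80              # bits 127..80: timestamp
    value |= 0x7 << 76                               # bits 79..76: version 7
    value |= ((rand >> 66) & 0xFFF) << 64            # bits 75..64: rand_a
    value |= 0x2 << 62                               # bits 63..62: variant 10
    value |= rand & (2**62 - 1)                      # bits 61..0: rand_b
    h = f"{value:032x}"
    return f"{h[:8]}-{h[8:12]}-{h[12:16]}-{h[16:20]}-{h[20:]}"
```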
Request Protection
ip-restriction
Allows or denies requests based on client IP address or CIDR ranges. Supports both allowlist and denylist modes.
x-barbacane-middlewares:
- name: ip-restriction
config:
allow:
- 10.0.0.0/8
- 192.168.1.0/24
deny:
- 10.0.0.5
message: "Access denied from your IP address"
status: 403
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
allow | array | [] | Allowed IPs or CIDR ranges (allowlist mode) |
deny | array | [] | Denied IPs or CIDR ranges (denylist mode) |
message | string | Access denied | Custom error message for denied requests |
status | integer | 403 | HTTP status code for denied requests |
Behavior
- If `deny` is configured, IPs in the list are blocked (denylist takes precedence)
- If `allow` is configured, only IPs in the list are permitted (allowlist mode)
- Client IP is extracted from `X-Forwarded-For`, `X-Real-IP`, or the direct connection
- Supports both single IPs (`10.0.0.1`) and CIDR notation (`10.0.0.0/8`)
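Python's `ipaddress` module illustrates the deny-then-allow evaluation (illustrative function name):

```python
import ipaddress

def ip_allowed(client_ip: str, allow: list[str], deny: list[str]) -> bool:
    """Apply denylist-then-allowlist IP restriction with CIDR-aware matching."""
    ip = ipaddress.ip_address(client_ip)
    def in_list(entries: list[str]) -> bool:
        # strict=False lets plain IPs like "10.0.0.5" parse as /32 networks
        return any(ip in ipaddress.ip_network(e, strict=False) for e in entries)
    if in_list(deny):
        return False                   # denylist takes precedence
    if allow:
        return in_list(allow)          # allowlist mode: must match an entry
    return True
```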
Error Response
Returns Problem JSON (RFC 9457):
{
"type": "urn:barbacane:error:ip-restricted",
"title": "Forbidden",
"status": 403,
"detail": "Access denied",
"client_ip": "203.0.113.50"
}
bot-detection
Blocks requests from known bots and scrapers by matching the User-Agent header against configurable deny patterns. An allow list lets trusted crawlers bypass the deny list.
x-barbacane-middlewares:
- name: bot-detection
config:
deny:
- scrapy
- ahrefsbot
- semrushbot
- mj12bot
- dotbot
allow:
- Googlebot
- Bingbot
block_empty_ua: false
message: "Automated access is not permitted"
status: 403
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
deny | array | [] | User-Agent substrings to block (case-insensitive substring match) |
allow | array | [] | User-Agent substrings that override the deny list (trusted crawlers) |
block_empty_ua | boolean | false | Block requests with no User-Agent header |
message | string | Access denied | Custom error message for blocked requests |
status | integer | 403 | HTTP status code for blocked requests |
Behavior
- Matching is case-insensitive substring: `"bot"` matches `"AhrefsBot"`, `"DotBot"`, etc.
- The allow list takes precedence over deny: a UA matching both allow and deny is allowed through
- Missing `User-Agent` is permitted by default; set `block_empty_ua: true` to block it
- Both `deny` and `allow` are empty by default; the plugin is a no-op unless configured
Error Response
Returns Problem JSON (RFC 9457):
{
"type": "urn:barbacane:error:bot-detected",
"title": "Forbidden",
"status": 403,
"detail": "Access denied",
"user_agent": "scrapy/2.11"
}
The user_agent field is omitted when the request had no User-Agent header.
request-size-limit
Rejects requests that exceed a configurable body size limit. Checks both Content-Length header and actual body size.
x-barbacane-middlewares:
- name: request-size-limit
config:
max_bytes: 1048576 # 1 MiB
check_content_length: true
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
max_bytes | integer | 1048576 | Maximum allowed request body size in bytes (default: 1 MiB) |
check_content_length | boolean | true | Check Content-Length header for early rejection |
Error Response
Returns 413 Payload Too Large with Problem JSON:
{
"type": "urn:barbacane:error:payload-too-large",
"title": "Payload Too Large",
"status": 413,
"detail": "Request body size 2097152 bytes exceeds maximum allowed size of 1048576 bytes."
}
Caching
cache
Caches responses in memory with TTL support.
x-barbacane-middlewares:
- name: cache
config:
ttl: 300
vary:
- Accept-Language
- Accept-Encoding
methods:
- GET
- HEAD
cacheable_status:
- 200
- 301
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
ttl | integer | 300 | Cache duration (seconds) |
vary | array | [] | Headers that vary cache key |
methods | array | ["GET", "HEAD"] | HTTP methods to cache |
cacheable_status | array | [200, 301] | Status codes to cache |
Cache Key
Cache key is computed from:
- HTTP method
- Request path
- Vary header values (if configured)
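A minimal sketch of this key derivation, assuming a hash over the three components (the actual encoding and hash used by the middleware may differ):

```python
import hashlib

def cache_key(method, path, headers, vary=()):
    """Derive a cache key from method, path, and configured Vary headers (sketch)."""
    parts = [method.upper(), path]
    for name in vary:  # only configured Vary headers contribute to the key
        parts.append(f"{name.lower()}={headers.get(name.lower(), '')}")
    return hashlib.sha256("\x1f".join(parts).encode()).hexdigest()

en = cache_key("GET", "/users", {"accept-language": "en"}, vary=["Accept-Language"])
fr = cache_key("GET", "/users", {"accept-language": "fr"}, vary=["Accept-Language"])
assert en != fr  # different Vary header values -> separate cache entries
```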
Cache-Control Respect
The middleware respects Cache-Control response headers:
- `no-store` - Response not cached
- `no-cache` - Cache but revalidate
- `max-age=N` - Use specified TTL instead of config
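The directive handling above can be modeled as a small decision function. This is a sketch under the stated rules, not the middleware's parser, and it ignores directives the section doesn't mention:

```python
def cache_decision(cache_control, default_ttl=300):
    """Return (store, ttl, revalidate) for a Cache-Control header value or None (sketch)."""
    directives = {}
    for part in (cache_control or "").split(","):
        name, _, value = part.strip().partition("=")
        if name:
            directives[name.lower()] = value
    if "no-store" in directives:
        return False, 0, False             # response not cached at all
    revalidate = "no-cache" in directives  # cache, but revalidate before reuse
    ttl = default_ttl
    if directives.get("max-age", "").isdigit():
        ttl = int(directives["max-age"])   # max-age overrides the configured TTL
    return True, ttl, revalidate

assert cache_decision("no-store") == (False, 0, False)
assert cache_decision("max-age=60") == (True, 60, False)
assert cache_decision(None) == (True, 300, False)
```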
Logging
http-log
Sends structured JSON log entries to an HTTP endpoint for centralized logging. Captures request metadata, response status, timing, and optional headers/body sizes. Compatible with Datadog, Splunk, ELK, or any HTTP log ingestion endpoint.
x-barbacane-middlewares:
- name: http-log
config:
endpoint: https://logs.example.com/ingest
method: POST
timeout_ms: 2000
include_headers: false
include_body: true
custom_fields:
service: my-api
environment: production
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
endpoint | string | required | URL to send log entries to |
method | string | POST | HTTP method (POST or PUT) |
timeout_ms | integer | 2000 | Timeout for the log HTTP call (100-10000 ms) |
content_type | string | application/json | Content-Type header for the log request |
include_headers | boolean | false | Include request and response headers in log entries |
include_body | boolean | false | Include request and response body sizes in log entries |
custom_fields | object | {} | Static key-value fields included in every log entry |
Log Entry Format
Each log entry is a JSON object:
{
"timestamp_ms": 1706500000000,
"duration_ms": 42,
"correlation_id": "abc-123",
"request": {
"method": "POST",
"path": "/users",
"query": "page=1",
"client_ip": "10.0.0.1",
"headers": { "content-type": "application/json" },
"body_size": 256
},
"response": {
"status": 201,
"headers": { "content-type": "application/json" },
"body_size": 64
},
"service": "my-api",
"environment": "production"
}
Optional fields (correlation_id, headers, body_size, query) are omitted when not available or not enabled.
Behavior
- Runs in the response phase (after dispatch) to capture both request and response data
- Log delivery is best-effort — failures never affect the upstream response
- The `correlation_id` field is automatically populated if the `correlation-id` middleware runs earlier in the chain
- Custom fields are flattened into the top-level JSON object
Request Transformation
request-transformer
Declaratively modifies requests before they reach the dispatcher. Supports header, query parameter, path, and JSON body transformations with variable interpolation.
x-barbacane-middlewares:
- name: request-transformer
config:
headers:
add:
X-Gateway: "barbacane"
X-Client-IP: "$client_ip"
set:
X-Request-Source: "external"
remove:
- Authorization
- X-Internal-Token
rename:
X-Old-Name: X-New-Name
querystring:
add:
gateway: "barbacane"
userId: "$path.userId"
remove:
- internal_token
rename:
oldParam: newParam
path:
strip_prefix: "/api/v1"
add_prefix: "/internal"
replace:
pattern: "/users/(\\w+)/orders"
replacement: "/v2/orders/$1"
body:
add:
/metadata/gateway: "barbacane"
/userId: "$path.userId"
remove:
- /password
- /internal_flags
rename:
/userName: /user_name
Configuration
headers
| Property | Type | Default | Description |
|---|---|---|---|
add | object | {} | Add or overwrite headers. Supports variable interpolation |
set | object | {} | Add headers only if not already present. Supports variable interpolation |
remove | array | [] | Remove headers by name (case-insensitive) |
rename | object | {} | Rename headers (old-name to new-name) |
querystring
| Property | Type | Default | Description |
|---|---|---|---|
add | object | {} | Add or overwrite query parameters. Supports variable interpolation |
remove | array | [] | Remove query parameters by name |
rename | object | {} | Rename query parameters (old-name to new-name) |
path
| Property | Type | Default | Description |
|---|---|---|---|
strip_prefix | string | - | Remove prefix from path (e.g., /api/v2) |
add_prefix | string | - | Add prefix to path (e.g., /internal) |
replace.pattern | string | - | Regex pattern to match in path |
replace.replacement | string | - | Replacement string (supports regex capture groups) |
Path operations are applied in order: strip prefix, add prefix, regex replace.
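That fixed order (strip prefix, add prefix, regex replace) can be sketched like this. The function name is hypothetical; note that a Rust-style `$1` capture reference corresponds to `\1` in Python's `re`:

```python
import re

def transform_path(path, strip_prefix=None, add_prefix=None, replace=None):
    """Apply path operations in the documented order (illustrative sketch)."""
    if strip_prefix and path.startswith(strip_prefix):
        path = path[len(strip_prefix):] or "/"
    if add_prefix:
        path = add_prefix + path
    if replace:
        # translate $1-style replacement to Python's \1 for this sketch
        path = re.sub(replace["pattern"],
                      replace["replacement"].replace("$1", r"\1"), path)
    return path

assert transform_path("/api/v1/users",
                      strip_prefix="/api/v1",
                      add_prefix="/internal") == "/internal/users"
assert transform_path("/users/42/orders",
                      replace={"pattern": r"/users/(\w+)/orders",
                               "replacement": "/v2/orders/$1"}) == "/v2/orders/42"
```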
body
JSON body transformations use JSON Pointer (RFC 6901) paths.
| Property | Type | Default | Description |
|---|---|---|---|
add | object | {} | Add or overwrite JSON fields. Supports variable interpolation |
remove | array | [] | Remove JSON fields by JSON Pointer path |
rename | object | {} | Rename JSON fields (old-pointer to new-pointer) |
Body transformations only apply to requests with application/json content type. Non-JSON bodies pass through unchanged.
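To make the JSON Pointer semantics concrete, here is a minimal sketch of `add`, `remove`, and `rename` over a decoded JSON object. It is deliberately simplified: a real RFC 6901 implementation also handles array indices and the `~0`/`~1` escape sequences.

```python
def _walk(doc, pointer, create=False):
    """Resolve a JSON Pointer to (parent_object, final_key); objects only."""
    node, tokens = doc, pointer.split("/")[1:]
    for token in tokens[:-1]:
        node = node.setdefault(token, {}) if create else node[token]
    return node, tokens[-1]

def body_add(doc, pointer, value):
    parent, key = _walk(doc, pointer, create=True)
    parent[key] = value

def body_remove(doc, pointer):
    parent, key = _walk(doc, pointer)
    parent.pop(key, None)

def body_rename(doc, old, new):
    parent, key = _walk(doc, old)
    if key in parent:
        body_add(doc, new, parent.pop(key))

doc = {"userName": "ada", "password": "x"}
body_add(doc, "/metadata/gateway", "barbacane")  # creates intermediate objects
body_remove(doc, "/password")
body_rename(doc, "/userName", "/user_name")
assert doc == {"metadata": {"gateway": "barbacane"}, "user_name": "ada"}
```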
Variable Interpolation
Values in add, set, and body add support variable templates:
| Variable | Description | Example |
|---|---|---|
$client_ip | Client IP address | 192.168.1.1 |
$header.<name> | Request header value (case-insensitive) | $header.host |
$query.<name> | Query parameter value | $query.page |
$path.<name> | Path parameter value | $path.userId |
context:<key> | Request context value (set by other middlewares) | context:auth.sub |
Variables always resolve against the original incoming request, regardless of transformations applied by earlier sections. This means a query parameter removed in querystring.remove is still available via $query.<name> in body.add.
If a variable cannot be resolved, it is replaced with an empty string.
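A sketch of these resolution rules, assuming a snapshot of the original request (the `context:` form is omitted here for brevity; all names are illustrative):

```python
import re

def interpolate(template, original):
    """Resolve $-variables against the original request snapshot (sketch)."""
    def resolve(match):
        var = match.group(0)
        if var == "$client_ip":
            return original.get("client_ip", "")
        for prefix, table in (("$header.", "headers"),
                              ("$query.", "query"),
                              ("$path.", "path_params")):
            if var.startswith(prefix):
                name = var[len(prefix):]
                if table == "headers":
                    name = name.lower()  # header lookup is case-insensitive
                return original.get(table, {}).get(name, "")
        return ""  # unresolved variables become the empty string
    return re.sub(r"\$[A-Za-z_][\w.]*", resolve, template)

original = {"client_ip": "10.0.0.1",
            "headers": {"host": "api.example.com"},
            "query": {"userId": "42"},
            "path_params": {}}
assert interpolate("$client_ip", original) == "10.0.0.1"
assert interpolate("$query.userId", original) == "42"  # original value, even if removed later
assert interpolate("$query.missing", original) == ""
```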
Transformation Order
Transformations are applied in this order:
- Path — strip prefix, add prefix, regex replace
- Headers — add, set, remove, rename
- Query parameters — add, remove, rename
- Body — add, remove, rename
Use Cases
Strip API version prefix:
- name: request-transformer
config:
path:
strip_prefix: "/api/v2"
Move query parameter to body (ADR-0020 showcase):
- name: request-transformer
config:
querystring:
remove:
- userId
body:
add:
/userId: "$query.userId"
Add gateway metadata to every request:
# Global middleware
x-barbacane-middlewares:
- name: request-transformer
config:
headers:
add:
X-Gateway: "barbacane"
X-Client-IP: "$client_ip"
Response Transformation
response-transformer
Declaratively modifies responses before they return to the client. Supports status code mapping, header transformations, and JSON body transformations.
x-barbacane-middlewares:
- name: response-transformer
config:
status:
200: 201
400: 403
500: 503
headers:
add:
X-Gateway: "barbacane"
X-Frame-Options: "DENY"
set:
X-Content-Type-Options: "nosniff"
remove:
- Server
- X-Powered-By
rename:
X-Old-Name: X-New-Name
body:
add:
/metadata/gateway: "barbacane"
remove:
- /internal_flags
- /debug_info
rename:
/userName: /user_name
Configuration
status
A mapping of upstream status codes to replacement status codes. Unmapped codes pass through unchanged.
status:
200: 201 # Created instead of OK
400: 422 # Unprocessable Entity instead of Bad Request
500: 503 # Service Unavailable instead of Internal Server Error
headers
| Property | Type | Default | Description |
|---|---|---|---|
add | object | {} | Add or overwrite response headers |
set | object | {} | Add headers only if not already present in the response |
remove | array | [] | Remove headers by name (case-insensitive) |
rename | object | {} | Rename headers (old-name to new-name) |
body
JSON body transformations use JSON Pointer (RFC 6901) paths.
| Property | Type | Default | Description |
|---|---|---|---|
add | object | {} | Add or overwrite JSON fields |
remove | array | [] | Remove JSON fields by JSON Pointer path |
rename | object | {} | Rename JSON fields (old-pointer to new-pointer) |
Body transformations only apply to responses with JSON bodies. Non-JSON bodies pass through unchanged.
Transformation Order
Transformations are applied in this order:
- Status — map status code
- Headers — remove, rename, set, add
- Body — remove, rename, add
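The header step's internal ordering (remove, rename, set, add) determines which rule wins when they overlap. A sketch with hypothetical names:

```python
def transform_headers(headers, remove=(), rename=None, set_=None, add=None):
    """Apply response header operations in the documented order (sketch)."""
    h = {k.lower(): v for k, v in headers.items()}
    for name in remove:                        # 1. remove (case-insensitive)
        h.pop(name.lower(), None)
    for old, new in (rename or {}).items():    # 2. rename
        if old.lower() in h:
            h[new.lower()] = h.pop(old.lower())
    for name, value in (set_ or {}).items():   # 3. set: only if absent
        h.setdefault(name.lower(), value)
    for name, value in (add or {}).items():    # 4. add: always overwrite
        h[name.lower()] = value
    return h

out = transform_headers({"Server": "nginx", "X-Content-Type-Options": "sniff-me"},
                        remove=["Server"],
                        set_={"X-Content-Type-Options": "nosniff"},
                        add={"X-Gateway": "barbacane"})
assert "server" not in out
assert out["x-content-type-options"] == "sniff-me"  # set does not overwrite
assert out["x-gateway"] == "barbacane"              # add always does
```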
Use Cases
Strip upstream server headers:
- name: response-transformer
config:
headers:
remove: [Server, X-Powered-By, X-AspNet-Version]
Add security headers to all responses:
- name: response-transformer
config:
headers:
add:
X-Frame-Options: "DENY"
X-Content-Type-Options: "nosniff"
Strict-Transport-Security: "max-age=31536000"
Clean up internal fields from response body:
- name: response-transformer
config:
body:
remove:
- /internal_metadata
- /debug_trace
- /password_hash
Map status codes for API versioning:
- name: response-transformer
config:
status:
200: 201
URL Redirection
redirect
Redirects requests based on configurable path rules. Supports exact path matching, prefix matching with path rewriting, configurable status codes (301/302/307/308), and query string preservation.
x-barbacane-middlewares:
- name: redirect
config:
status_code: 302
preserve_query: true
rules:
- path: /old-page
target: /new-page
status_code: 301
- prefix: /api/v1
target: /api/v2
- target: https://fallback.example.com
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
status_code | integer | 302 | Default HTTP status code for redirects (301, 302, 307, 308) |
preserve_query | boolean | true | Append the original query string to the redirect target |
rules | array | required | Redirect rules evaluated in order; first match wins |
Rule Properties
| Property | Type | Description |
|---|---|---|
path | string | Exact path to match. Mutually exclusive with prefix |
prefix | string | Path prefix to match. The matched prefix is stripped and the remainder is appended to target |
target | string | Required. Redirect target URL or path |
status_code | integer | Override the top-level status_code for this rule |
If neither path nor prefix is set, the rule matches all requests (catch-all).
Matching Behavior
- Rules are evaluated in order. The first matching rule wins.
- Exact match (`path`): redirects only when the request path equals the value exactly.
- Prefix match (`prefix`): strips the matched prefix and appends the remainder to `target`. For example, `prefix: /api/v1` with `target: /api/v2` redirects `/api/v1/users?page=2` to `/api/v2/users?page=2`.
- Catch-all: omit both `path` and `prefix` to redirect all requests hitting the route.
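The first-match evaluation above can be sketched as follows (illustrative only; the names are not the plugin's internals):

```python
def match_redirect(path, query, rules, default_status=302, preserve_query=True):
    """Evaluate redirect rules in order; first match wins (sketch)."""
    for rule in rules:
        target = None
        if "path" in rule:
            if path == rule["path"]:                     # exact match
                target = rule["target"]
        elif "prefix" in rule:
            if path.startswith(rule["prefix"]):          # prefix match: strip + append
                target = rule["target"] + path[len(rule["prefix"]):]
        else:                                            # catch-all rule
            target = rule["target"]
        if target is not None:
            if preserve_query and query:
                target += "?" + query
            return rule.get("status_code", default_status), target
    return None  # no rule matched; the request proceeds normally

rules = [{"prefix": "/api/v1", "target": "/api/v2"},
         {"path": "/old-page", "target": "/new-page", "status_code": 301}]
assert match_redirect("/api/v1/users", "page=2", rules) == (302, "/api/v2/users?page=2")
assert match_redirect("/old-page", "", rules) == (301, "/new-page")
```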
Status Codes
| Code | Meaning | Method preserved? |
|---|---|---|
| 301 | Moved Permanently | No (may change to GET) |
| 302 | Found | No (may change to GET) |
| 307 | Temporary Redirect | Yes |
| 308 | Permanent Redirect | Yes |
Use 307/308 when POST/PUT/DELETE requests must be replayed against the new location with the same method and body.
Use Cases
Domain migration:
- name: redirect
config:
status_code: 301
rules:
- target: https://new-domain.com
API versioning:
- name: redirect
config:
rules:
- prefix: /api/v1
target: /api/v2
status_code: 301
Multiple redirects:
- name: redirect
config:
rules:
- path: /blog
target: https://blog.example.com
status_code: 301
- path: /docs
target: https://docs.example.com
status_code: 301
- prefix: /old-api
target: /api
Planned Middlewares
The following middlewares are planned for future milestones:
idempotency
Ensures idempotent processing.
x-barbacane-middlewares:
- name: idempotency
config:
header: Idempotency-Key
ttl: 86400
Configuration
| Property | Type | Default | Description |
|---|---|---|---|
header | string | Idempotency-Key | Header containing key |
ttl | integer | 86400 | Key expiration (seconds) |
Context Passing
Middlewares can set context for downstream components:
# Auth middleware sets context:auth.sub
x-barbacane-middlewares:
- name: auth-jwt
config:
required: true
# Rate limit uses auth context
- name: rate-limit
config:
partition_key: context:auth.sub # Rate limit per user
Best Practices
Order Matters
Put middlewares in logical order:
x-barbacane-middlewares:
- name: correlation-id # 1. Add tracing ID first
- name: http-log # 2. Log all requests (captures full lifecycle)
- name: cors # 3. Handle CORS early
- name: ip-restriction # 4. Block bad IPs immediately
- name: request-size-limit # 5. Reject oversized requests
- name: rate-limit # 6. Rate limit before auth (cheaper)
- name: oidc-auth # 7. Authenticate (OIDC/JWT)
- name: basic-auth # 8. Authenticate (fallback)
- name: acl # 9. Authorize (after auth sets consumer headers)
- name: request-transformer # 10. Transform request before dispatch
- name: response-transformer # 11. Transform response before client (response phase runs middlewares in reverse order)
Fail Fast
Put restrictive middlewares early to reject bad requests quickly:
x-barbacane-middlewares:
- name: ip-restriction # Block banned IPs immediately
- name: request-size-limit # Reject large payloads early
- name: rate-limit # Reject over-limit immediately
- name: jwt-auth # Reject unauthorized before processing
Use Global for Common Concerns
# Global: apply to everything
x-barbacane-middlewares:
- name: correlation-id
- name: cors
- name: request-size-limit
config:
max_bytes: 10485760 # 10 MiB global limit
- name: rate-limit
paths:
/public:
get:
# No additional middlewares needed
/private:
get:
# Only add what's different
x-barbacane-middlewares:
- name: auth-jwt
/upload:
post:
# Override size limit for uploads
x-barbacane-middlewares:
- name: request-size-limit
config:
max_bytes: 104857600 # 100 MiB for uploads
Secrets Management
Barbacane supports secure secret management to keep sensitive values like API keys, tokens, and passwords out of your specs and artifacts.
Overview
Instead of hardcoding secrets in your OpenAPI specs, you reference them using special URI schemes. The gateway resolves these references at startup, before handling any requests.
Key principles:
- No secrets in specs
- No secrets in artifacts
- No secrets in Git
- Secrets resolved at gateway startup
Secret Reference Formats
Environment Variables
Use env:// to reference environment variables:
x-barbacane-middlewares:
- name: oauth2-auth
config:
client_id: my-client
client_secret: "env://OAUTH2_CLIENT_SECRET"
At startup, the gateway reads the OAUTH2_CLIENT_SECRET environment variable and substitutes its value.
File-based Secrets
Use file:// to read secrets from files:
x-barbacane-middlewares:
- name: jwt-auth
config:
secret: "file:///etc/secrets/jwt-signing-key"
The gateway reads the file content, trims whitespace, and uses the result. This works well with:
- Kubernetes Secrets (mounted as files)
- Docker secrets
- HashiCorp Vault Agent (file injection)
Examples
OAuth2 Authentication Middleware
x-barbacane-middlewares:
- name: oauth2-auth
config:
introspection_endpoint: https://auth.example.com/introspect
client_id: my-api-client
client_secret: "env://OAUTH2_SECRET"
timeout: 5.0
Run with:
export OAUTH2_SECRET="super-secret-value"
barbacane serve --artifact api.bca --listen 0.0.0.0:8080
HTTP Upstream with API Key
x-barbacane-dispatch:
name: http-upstream
config:
url: https://api.provider.com/v1
headers:
Authorization: "Bearer env://UPSTREAM_API_KEY"
JWT Auth with File-based Key
x-barbacane-middlewares:
- name: jwt-auth
config:
public_key: "file:///var/run/secrets/jwt-public-key.pem"
issuer: https://auth.example.com
audience: my-api
Multiple Secrets
You can use multiple secret references in the same config:
x-barbacane-middlewares:
- name: oauth2-auth
config:
introspection_endpoint: "env://AUTH_SERVER_URL"
client_id: "env://CLIENT_ID"
client_secret: "env://CLIENT_SECRET"
Startup Behavior
Resolution Timing
Secrets are resolved once at gateway startup:
- Gateway loads the artifact
- Gateway scans all dispatcher and middleware configs for secret references
- Each reference is resolved (env var read, file read)
- Resolved values replace the references in memory
- HTTP server starts listening
If any secret cannot be resolved, the gateway refuses to start.
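A rough sketch of what one resolution step looks like for the two supported schemes. This is a hypothetical helper, not the gateway's code; it only handles values that are entirely a secret reference:

```python
import os

class SecretResolutionError(Exception):
    pass

def resolve_secret(value):
    """Resolve env:// and file:// references at startup; fail fast otherwise (sketch)."""
    if not isinstance(value, str):
        return value
    if value.startswith("env://"):
        name = value[len("env://"):]
        if name not in os.environ:
            raise SecretResolutionError(f"environment variable not found: {name}")
        return os.environ[name]
    if value.startswith("file://"):
        path = value[len("file://"):]  # file:///etc/x -> /etc/x
        try:
            with open(path) as f:
                return f.read().strip()  # file contents, whitespace trimmed
        except FileNotFoundError:
            raise SecretResolutionError(f"file not found: {path}")
    return value  # plain value, passed through unchanged

os.environ["OAUTH2_SECRET"] = "super-secret-value"
assert resolve_secret("env://OAUTH2_SECRET") == "super-secret-value"
assert resolve_secret("my-client") == "my-client"
```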
Exit Codes
| Exit Code | Meaning |
|---|---|
| 0 | Normal shutdown |
| 1 | General error |
| 11 | Plugin hash mismatch |
| 13 | Secret resolution failure |
When exit code 13 occurs, the error message indicates which secret failed:
error: failed to resolve secrets: environment variable not found: OAUTH2_SECRET
Missing Secrets
The gateway fails fast on missing secrets:
# Missing env var
$ barbacane serve --artifact api.bca --listen 0.0.0.0:8080
error: failed to resolve secrets: environment variable not found: API_KEY
$ echo $?
13
# Missing file
$ barbacane serve --artifact api.bca --listen 0.0.0.0:8080
error: failed to resolve secrets: file not found: /etc/secrets/api-key
$ echo $?
13
This fail-fast behavior ensures the gateway never starts in an insecure state.
Supported Schemes
| Scheme | Example | Status |
|---|---|---|
env:// | env://API_KEY | Supported |
file:// | file:///etc/secrets/key | Supported |
vault:// | vault://secret/data/api-keys | Planned |
aws-sm:// | aws-sm://prod/api-key | Planned |
k8s:// | k8s://namespace/secret/key | Planned |
Best Practices
Development
Use environment variables with .env files (not committed to Git):
# .env (add to .gitignore)
OAUTH2_SECRET=dev-secret-value
API_KEY=dev-api-key
# Load and run
source .env
barbacane serve --artifact api.bca --listen 127.0.0.1:8080 --dev
Production
Use your platform’s secret management:
Docker:
docker run -e OAUTH2_SECRET="$OAUTH2_SECRET" barbacane serve ...
Kubernetes:
apiVersion: v1
kind: Pod
spec:
containers:
- name: gateway
env:
- name: OAUTH2_SECRET
valueFrom:
secretKeyRef:
name: api-secrets
key: oauth2-secret
Kubernetes with file-based secrets:
apiVersion: v1
kind: Pod
spec:
containers:
- name: gateway
volumeMounts:
- name: secrets
mountPath: /etc/secrets
readOnly: true
volumes:
- name: secrets
secret:
secretName: api-secrets
Then use file:///etc/secrets/key-name in your spec.
Secret Rotation
For secrets that need rotation:
- Update the secret value in your secret store
- Restart the gateway (rolling restart in Kubernetes)
The gateway does not hot-reload secrets. This simplifies the security model and avoids race conditions.
Troubleshooting
“environment variable not found”
error: failed to resolve secrets: environment variable not found: MY_SECRET
Solutions:
- Verify the env var is set: `echo $MY_SECRET`
- Ensure the env var is exported: `export MY_SECRET=value`
- Check for typos in the reference: `env://MY_SECRET`
“file not found”
error: failed to resolve secrets: file not found: /path/to/secret
Solutions:
- Verify the file exists: `ls -la /path/to/secret`
- Check file permissions: the gateway process must be able to read it
- Use absolute paths starting with `/`
“unsupported secret scheme”
error: failed to resolve secrets: unsupported secret scheme: vault
This means you’re using a scheme that isn’t implemented yet. Currently only env:// and file:// are supported.
Security Considerations
- Never commit secrets to Git - Use `.gitignore` for `.env` files
- Rotate secrets regularly - Plan for secret rotation via gateway restarts
- Use least privilege - Only grant the gateway access to secrets it needs
- Audit secret access - Use your secret store’s audit logging
- Encrypt at rest - Ensure your secret storage encrypts secrets
Observability
Barbacane provides comprehensive observability features out of the box: structured logging, Prometheus metrics, and distributed tracing with OpenTelemetry support.
Logging
Structured logs are written to stdout. Two formats are available:
JSON Format (Default)
Production-ready structured JSON logs:
barbacane serve --artifact api.bca --log-format json
{"timestamp":"2024-01-15T10:30:00Z","level":"INFO","target":"barbacane","message":"request completed","trace_id":"abc123","request_id":"def456","method":"GET","path":"/users","status":200,"duration_ms":12}
Pretty Format (Development)
Human-readable format for local development:
barbacane serve --artifact api.bca --log-format pretty --log-level debug
2024-01-15T10:30:00Z INFO barbacane > request completed method=GET path=/users status=200 duration_ms=12
Log Levels
Control verbosity with --log-level:
| Level | Description |
|---|---|
error | Errors only |
warn | Warnings and errors |
info | Normal operation (default) |
debug | Detailed debugging |
trace | Very verbose tracing |
barbacane serve --artifact api.bca --log-level debug
Or use the RUST_LOG environment variable:
RUST_LOG=debug barbacane serve --artifact api.bca
Metrics
Prometheus metrics are exposed on the dedicated admin API port (default 127.0.0.1:8081):
curl http://localhost:8081/metrics
Available Metrics
| Metric | Type | Description |
|---|---|---|
barbacane_requests_total | counter | Total requests by method, path, status, api |
barbacane_request_duration_seconds | histogram | Request latency |
barbacane_request_size_bytes | histogram | Request body size |
barbacane_response_size_bytes | histogram | Response body size |
barbacane_active_connections | gauge | Current open connections |
barbacane_connections_total | counter | Total connections accepted |
barbacane_validation_failures_total | counter | Validation errors by reason |
barbacane_middleware_duration_seconds | histogram | Middleware execution time |
barbacane_dispatch_duration_seconds | histogram | Dispatcher execution time |
barbacane_wasm_execution_duration_seconds | histogram | WASM plugin execution time |
Prometheus Integration
Configure Prometheus to scrape metrics:
# prometheus.yml
scrape_configs:
- job_name: 'barbacane'
static_configs:
- targets: ['barbacane:8081']
metrics_path: '/metrics'
scrape_interval: 15s
Example Queries
# Request rate
rate(barbacane_requests_total[5m])
# P99 latency
histogram_quantile(0.99, rate(barbacane_request_duration_seconds_bucket[5m]))
# Error rate
sum(rate(barbacane_requests_total{status=~"5.."}[5m])) / sum(rate(barbacane_requests_total[5m]))
# Active connections
barbacane_active_connections
Distributed Tracing
Barbacane supports W3C Trace Context for distributed tracing and can export spans via OpenTelemetry Protocol (OTLP).
Enable OTLP Export
barbacane serve --artifact api.bca \
--otlp-endpoint http://otel-collector:4317
Or use the environment variable:
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317 barbacane serve --artifact api.bca
Trace Context Propagation
Barbacane automatically:
- Extracts `traceparent` and `tracestate` headers from incoming requests
- Generates a new trace ID if none is provided
- Injects trace context into upstream requests
This enables end-to-end tracing across your entire service mesh.
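The W3C `traceparent` header has the shape `00-<trace-id>-<parent-id>-<flags>` (32, 16, and 2 hex characters). A sketch of the propagation rule described above, reusing the incoming trace ID and minting a fresh span ID for the outgoing hop (illustrative, not Barbacane internals):

```python
import re
import secrets

TRACEPARENT = re.compile(r"^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$")

def propagate(incoming_traceparent):
    """Build the traceparent to inject into the upstream request (sketch)."""
    m = TRACEPARENT.match(incoming_traceparent or "")
    trace_id = m.group(1) if m else secrets.token_hex(16)  # reuse or mint 16 bytes
    span_id = secrets.token_hex(8)                          # new span for this hop
    return f"00-{trace_id}-{span_id}-01"

out = propagate("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
assert out.split("-")[1] == "0af7651916cd43dd8448eb211c80319c"  # trace ID preserved
assert TRACEPARENT.match(propagate(None))  # well-formed header even without one
```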
Span Structure
Each request creates a span tree:
barbacane.request (root span)
├── barbacane.routing
├── barbacane.validation
├── barbacane.middleware.jwt-auth.request
├── barbacane.middleware.rate-limit.request
├── barbacane.dispatch.http-upstream
├── barbacane.middleware.rate-limit.response
└── barbacane.middleware.jwt-auth.response
Span Attributes
Spans include attributes like:
- `http.method`, `http.route`, `http.status_code`
- `barbacane.api`, `barbacane.operation_id`
- `barbacane.middleware`, `barbacane.dispatcher`
Integration with Collectors
Works with any OpenTelemetry-compatible backend:
Jaeger:
barbacane serve --artifact api.bca --otlp-endpoint http://jaeger:4317
Grafana Tempo:
barbacane serve --artifact api.bca --otlp-endpoint http://tempo:4317
Datadog (with collector):
# otel-collector-config.yaml
exporters:
datadog:
api:
key: ${DD_API_KEY}
Production Setup
A typical production observability stack:
barbacane serve --artifact api.bca \
--log-format json \
--log-level info \
--otlp-endpoint http://otel-collector:4317
Combined with:
- Prometheus scraping `/metrics` on the admin port (default 8081)
- An OpenTelemetry Collector receiving OTLP traces
- Grafana for dashboards and alerting
- Log aggregation (Loki, Elasticsearch) ingesting stdout
Example Alert Rules
# prometheus-rules.yaml
groups:
- name: barbacane
rules:
- alert: HighErrorRate
expr: |
sum(rate(barbacane_requests_total{status=~"5.."}[5m]))
/ sum(rate(barbacane_requests_total[5m])) > 0.01
for: 5m
labels:
severity: critical
annotations:
summary: "High error rate detected"
- alert: HighLatency
expr: |
histogram_quantile(0.99, rate(barbacane_request_duration_seconds_bucket[5m])) > 1
for: 5m
labels:
severity: warning
annotations:
summary: "P99 latency exceeds 1 second"
What’s Next?
- CLI Reference - All command-line options
- Reserved Endpoints - Full metrics endpoint documentation
- Spec Extensions - Barbacane spec extensions reference
Control Plane
The Barbacane Control Plane provides a REST API for managing API specifications, plugins, and compiled artifacts. It enables centralized management of your API gateway configuration with PostgreSQL-backed storage and async compilation.
Overview
The control plane is a separate component from the data plane (gateway). While the data plane handles request routing and processing, the control plane manages:
- Specs - Upload, version, and manage OpenAPI/AsyncAPI specifications
- Plugins - Registry for WASM plugins with version management
- Artifacts - Compiled `.bca` files ready for deployment
- Compilations - Async compilation jobs with status tracking
Quick Start
Start the Server
# Start PostgreSQL (Docker example)
docker run -d --name barbacane-db \
-e POSTGRES_PASSWORD=barbacane \
-e POSTGRES_DB=barbacane \
-p 5432:5432 \
postgres:16
# Run the control plane
barbacane-control serve \
--database-url postgres://postgres:barbacane@localhost/barbacane \
--listen 127.0.0.1:9090
The server automatically runs database migrations on startup.
Upload a Spec
curl -X POST http://localhost:9090/specs \
-F "file=@api.yaml"
Response:
{
"id": "550e8400-e29b-41d4-a716-446655440000",
"name": "Petstore API",
"revision": 1,
"sha256": "a1b2c3..."
}
Start Compilation
curl -X POST http://localhost:9090/specs/550e8400-e29b-41d4-a716-446655440000/compile \
-H "Content-Type: application/json" \
-d '{"production": true}'
Response (202 Accepted):
{
"id": "660e8400-e29b-41d4-a716-446655440001",
"spec_id": "550e8400-e29b-41d4-a716-446655440000",
"status": "pending",
"production": true,
"started_at": "2024-01-15T10:30:00Z"
}
Poll Compilation Status
curl http://localhost:9090/compilations/660e8400-e29b-41d4-a716-446655440001
When complete:
{
"id": "660e8400-e29b-41d4-a716-446655440001",
"spec_id": "550e8400-e29b-41d4-a716-446655440000",
"status": "succeeded",
"artifact_id": "770e8400-e29b-41d4-a716-446655440002",
"started_at": "2024-01-15T10:30:00Z",
"completed_at": "2024-01-15T10:30:05Z"
}
Download Artifact
curl -o api.bca http://localhost:9090/artifacts/770e8400-e29b-41d4-a716-446655440002/download
API Versioning
All JSON responses from the control plane include a versioned content type:
Content-Type: application/vnd.barbacane.v1+json
This allows clients to detect the API version and handle future breaking changes gracefully.
API Reference
Full OpenAPI specification is available at crates/barbacane-control/openapi.yaml.
Specs
| Method | Endpoint | Description |
|---|---|---|
| POST | /specs | Upload a new spec (multipart) |
| GET | /specs | List all specs |
| GET | /specs/{id} | Get spec metadata |
| DELETE | /specs/{id} | Delete spec and revisions |
| GET | /specs/{id}/history | Get revision history |
| GET | /specs/{id}/content | Download spec content |
Query Parameters
- `type` - Filter by spec type (`openapi` or `asyncapi`)
- `name` - Filter by name (case-insensitive partial match)
- `revision` - Specific revision (for the `/content` endpoint)
Plugins
| Method | Endpoint | Description |
|---|---|---|
| POST | /plugins | Register a plugin (multipart) |
| GET | /plugins | List all plugins |
| GET | /plugins/{name} | List versions of a plugin |
| GET | /plugins/{name}/{version} | Get plugin metadata |
| DELETE | /plugins/{name}/{version} | Delete a plugin version |
| GET | /plugins/{name}/{version}/download | Download WASM binary |
Plugin Registration
curl -X POST http://localhost:9090/plugins \
-F "name=my-middleware" \
-F "version=1.0.0" \
-F "type=middleware" \
-F "description=My custom middleware" \
-F "capabilities=[\"http\", \"log\"]" \
-F "config_schema={\"type\": \"object\"}" \
-F "file=@my-middleware.wasm"
Artifacts
| Method | Endpoint | Description |
|---|---|---|
| GET | /artifacts | List all artifacts |
| GET | /artifacts/{id} | Get artifact metadata |
| DELETE | /artifacts/{id} | Delete an artifact |
| GET | /artifacts/{id}/download | Download .bca file |
Compilations
| Method | Endpoint | Description |
|---|---|---|
| POST | /specs/{id}/compile | Start async compilation |
| GET | /specs/{id}/compilations | List compilations for a spec |
| GET | /compilations/{id} | Get compilation status |
| DELETE | /compilations/{id} | Delete compilation record |
Compilation Request
{
"production": true,
"additional_specs": ["uuid-of-another-spec"]
}
Compilation Status
| Status | Description |
|---|---|
pending | Job queued, waiting to start |
compiling | Compilation in progress |
succeeded | Completed, artifact_id available |
failed | Failed, check errors array |
Health
curl http://localhost:9090/health
Response:
{
"status": "healthy",
"version": "0.1.0"
}
Projects
Projects organize your APIs and configure which plugins to use. Each project can have its own set of specs, plugin configurations, and connected data planes.
Create a Project
curl -X POST http://localhost:9090/projects \
-H "Content-Type: application/json" \
-d '{"name": "My API Gateway", "description": "Production gateway"}'
Response:
{
"id": "880e8400-e29b-41d4-a716-446655440003",
"name": "My API Gateway",
"description": "Production gateway",
"created_at": "2024-01-15T10:00:00Z"
}
Configure Plugins for a Project
Add plugins from the registry to your project with custom configuration:
curl -X POST http://localhost:9090/projects/880e8400.../plugins \
-H "Content-Type: application/json" \
-d '{
"plugin_name": "rate-limit",
"plugin_version": "0.1.0",
"enabled": true,
"config": {
"quota": 1000,
"window": 60
}
}'
Each plugin’s configuration is validated against its JSON Schema (if one is defined).
Data Planes
Data planes are gateway instances that connect to the control plane to receive configuration updates.
Data Plane Connection
Data planes connect via WebSocket to receive artifacts and configuration:
# Start a data plane connected to the control plane
barbacane serve \
--control-plane ws://localhost:9090/ws/data-plane \
--project-id 880e8400-e29b-41d4-a716-446655440003 \
--api-key dp_key_abc123
Create API Key for Data Plane
curl -X POST http://localhost:9090/projects/880e8400.../api-keys \
-H "Content-Type: application/json" \
-d '{"name": "Production Data Plane"}'
Response:
{
"id": "990e8400-e29b-41d4-a716-446655440004",
"name": "Production Data Plane",
"key": "dp_key_abc123...",
"created_at": "2024-01-15T10:30:00Z"
}
Note: The API key is only shown once at creation time. Store it securely.
List Connected Data Planes
curl http://localhost:9090/projects/880e8400.../data-planes
Response:
[
{
"id": "aa0e8400-e29b-41d4-a716-446655440005",
"name": "production-1",
"status": "connected",
"current_artifact_id": "770e8400...",
"connected_at": "2024-01-15T10:35:00Z"
}
]
Deploy
Deploy compiled artifacts to connected data planes for zero-downtime updates.
Trigger Deployment
curl -X POST http://localhost:9090/projects/880e8400.../deploy \
-H "Content-Type: application/json" \
-d '{"artifact_id": "770e8400-e29b-41d4-a716-446655440002"}'
Response:
{
"deployment_id": "bb0e8400-e29b-41d4-a716-446655440006",
"artifact_id": "770e8400...",
"target_data_planes": 3,
"status": "in_progress"
}
The control plane notifies all connected data planes, which download the new artifact, verify its checksum, and perform a hot-reload.
Web UI
The control plane includes a web-based management interface at http://localhost:5173 (when running the UI development server).
Running the UI
# Using Makefile
make ui
# Or manually
cd ui && npm run dev
The UI provides:
- Dashboard - Overview of specs, artifacts, and data planes
- Specs Management - Upload, view, and delete API specifications
- Plugin Registry - Browse registered plugins with their schemas
- Projects - Create projects and configure plugins
- Artifacts - View compiled artifacts and download them
Plugin Configuration
When adding plugins to a project, the UI:
- Shows the plugin’s JSON Schema (if available)
- Pre-fills a skeleton configuration based on required fields
- Validates configuration in real-time before saving
Interactive API Documentation
The control plane includes interactive API documentation powered by Scalar:
http://localhost:9090/docs
This provides a browsable interface for exploring and testing all API endpoints directly from your browser.
Seeding the Plugin Registry
Use the seed-plugins command to populate the plugin registry with built-in plugins:
# Using Makefile (builds plugins first)
make seed-plugins
# Or manually
barbacane-control seed-plugins \
--plugins-dir plugins \
--database-url postgres://localhost/barbacane \
--verbose
This scans the plugins/ directory for plugin manifests (plugin.toml) and registers them in the database along with their WASM binaries and JSON Schemas.
See CLI Reference for full options.
Error Handling
All errors follow RFC 9457 Problem Details format:
{
"type": "urn:barbacane:error:not-found",
"title": "Not Found",
"status": 404,
"detail": "Spec 550e8400-e29b-41d4-a716-446655440000 not found"
}
Error Types
| URN | Status | Description |
|---|---|---|
urn:barbacane:error:not-found | 404 | Resource not found |
urn:barbacane:error:bad-request | 400 | Invalid request |
urn:barbacane:error:conflict | 409 | Resource already exists or is in use |
urn:barbacane:error:spec-invalid | 422 | Spec validation failed |
urn:barbacane:error:internal-error | 500 | Server error |
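Because every error carries a stable URN in `type`, clients can branch on it instead of parsing `detail` strings. A minimal sketch of such a client-side classifier (the retry policy shown is an assumption, not documented Barbacane behavior):

```python
import json

# Assumption for illustration: only internal errors are worth retrying.
RETRYABLE = {"urn:barbacane:error:internal-error"}

def classify(problem_json: str) -> str:
    """Map an RFC 9457 problem document to a coarse client action."""
    problem = json.loads(problem_json)
    if problem["type"] in RETRYABLE:
        return "retry"
    if problem["status"] == 404:
        return "not-found"
    return "fail"

body = '{"type": "urn:barbacane:error:not-found", "title": "Not Found", "status": 404, "detail": "Spec not found"}'
print(classify(body))  # not-found
```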
Database Schema
The control plane uses PostgreSQL with the following tables:
- `specs` - Spec metadata (name, type, version, timestamps)
- `spec_revisions` - Version history with content (BYTEA)
- `plugins` - Plugin registry with WASM binaries and JSON Schemas
- `artifacts` - Compiled `.bca` files with manifests
- `artifact_specs` - Junction table linking artifacts to specs
- `compilations` - Async job tracking
- `projects` - Project definitions
- `project_plugin_configs` - Plugin configurations per project
- `data_planes` - Connected gateway instances
- `api_keys` - Authentication keys for data planes
Migrations run automatically on startup with --migrate (enabled by default).
Configuration
Environment Variables
| Variable | Description |
|---|---|
DATABASE_URL | PostgreSQL connection string |
RUST_LOG | Log level (trace, debug, info, warn, error) |
CLI Options
barbacane-control serve [OPTIONS]
Options:
--listen <ADDR> Listen address [default: 127.0.0.1:9090]
--database-url <URL> PostgreSQL URL [env: DATABASE_URL]
--migrate Run migrations on startup [default: true]
Deployment
Container Images
Official container images are available from Docker Hub:
# Control plane (includes web UI)
docker pull barbacane/barbacane-control:latest
# Data plane
docker pull barbacane/barbacane:latest
Also available from GitHub Container Registry:
docker pull ghcr.io/barbacane-dev/barbacane-control:latest
docker pull ghcr.io/barbacane-dev/barbacane:latest
Images are available for:
- `linux/amd64` (x86_64)
- `linux/arm64` (ARM64/Graviton)
Tags:
- `latest` - Latest stable release
- `x.y.z` - Specific version (e.g., `0.2.0`)
- `x.y` - Latest patch for minor version (e.g., `0.2`)
- `x` - Latest minor for major version (e.g., `0`)
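The floating tags follow the usual semver-prefix scheme; a small sketch of which tags a given release maps to:

```python
def tags_for(version: str) -> list[str]:
    """Tags published for a release, per the scheme above:
    exact version, minor alias, major alias, and latest."""
    major, minor, _patch = version.split(".")
    return [version, f"{major}.{minor}", major, "latest"]

print(tags_for("0.2.0"))  # ['0.2.0', '0.2', '0', 'latest']
```

Pin `x.y.z` for reproducible deployments; the floating aliases are convenient for development environments.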
Docker Compose Example
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_DB: barbacane
      POSTGRES_PASSWORD: barbacane
    volumes:
      - pgdata:/var/lib/postgresql/data

  control-plane:
    image: ghcr.io/barbacane-dev/barbacane-control:latest
    environment:
      DATABASE_URL: postgres://postgres:barbacane@postgres/barbacane
    ports:
      - "80:80"     # Web UI
      - "9090:9090" # API
    depends_on:
      - postgres

  data-plane:
    image: ghcr.io/barbacane-dev/barbacane:latest
    command: >
      serve
      --control-plane ws://control-plane:9090/ws/data-plane
      --project-id ${PROJECT_ID}
      --api-key ${DATA_PLANE_API_KEY}
    ports:
      - "8080:8080"
    depends_on:
      - control-plane

volumes:
  pgdata:
Production Considerations
- Database backups - Regular PostgreSQL backups for spec and plugin data
- Connection pooling - Consider PgBouncer for high-traffic deployments
- Authentication - Add a reverse proxy with authentication (not built-in)
- TLS - Terminate TLS at the load balancer or reverse proxy
What’s Next?
- CLI Reference - Full command-line options
- Artifact Format - Understanding `.bca` files
- Getting Started - Basic workflow with local compilation
Web UI
The Barbacane Control Plane includes a React-based web interface for managing your API gateway. This guide covers the features and workflows available in the UI.
Getting Started
Starting the UI
# Using Makefile (from project root)
make ui
# Or manually
cd ui && npm run dev
The UI runs at http://localhost:5173 and proxies API requests to the control plane at http://localhost:9090.
Prerequisites
Before using the UI, ensure:
- Control Plane is running - Start with `make control-plane`
- Database is ready - Start PostgreSQL with `make db-up`
- Plugins are seeded - Run `make seed-plugins` to populate the plugin registry
Features Overview
| Feature | Description |
|---|---|
| Projects | Create and manage API gateway projects |
| Specs | Upload and manage OpenAPI/AsyncAPI specifications |
| Plugin Registry | Browse available plugins with schemas |
| Plugin Configuration | Configure plugins per project with validation |
| Builds | Compile specs into deployable artifacts |
| Deploy | Deploy artifacts to connected data planes |
| API Keys | Manage authentication for data planes |
Projects
Projects organize your API gateway configuration. Each project contains:
- Specs - API specifications to compile
- Plugins - Configured middleware and dispatchers
- Builds - Compilation history and artifacts
- Data Planes - Connected gateway instances
Creating a Project
- Navigate to Projects from the sidebar
- Click New Project
- Enter a name and optional description
- Click Create
You can also create a project from a template by clicking From Template, which uses the /init endpoint to generate a starter API specification.
Project Navigation
Each project has a tabbed interface:
| Tab | Description |
|---|---|
| Specs | Manage API specifications for this project |
| Plugins | Configure which plugins to use and their settings |
| Builds | View compilation history and trigger new builds |
| Deploy | Deploy artifacts and manage connected data planes |
| Settings | Project settings and danger zone |
Specs Management
Uploading Specs
- Navigate to a project’s Specs tab
- Click Upload Spec
- Select an OpenAPI or AsyncAPI YAML/JSON file
- The spec is parsed, validated, and stored
Spec Features
- Revision History - Each upload creates a new revision
- Content Preview - View the raw spec content
- Validation - Specs are validated on upload
- Type Detection - Automatically detects OpenAPI vs AsyncAPI
Global Specs View
The Specs page in the sidebar shows all specs across all projects. Use this for:
- Browsing all uploaded specifications
- Searching specs by name
- Viewing specs not yet assigned to projects
Plugin Registry
The Plugin Registry shows all available WASM plugins:
Plugin Types
| Type | Description |
|---|---|
| Middleware | Request/response processing (auth, rate limiting, CORS) |
| Dispatcher | Backend integration (HTTP upstream, Lambda, mock) |
Plugin Information
Each plugin displays:
- Name and Version - Unique identifier
- Description - What the plugin does
- Capabilities - Required host functions (e.g., `http`, `log`, `kv`)
- Config Schema - JSON Schema for configuration (if defined)
Deleting Plugins
Plugins can be deleted from the registry if they’re not in use by any project. If a plugin is in use, you’ll see an error message asking you to remove it from all projects first.
Plugin Configuration
Configure plugins for each project from the Plugins tab.
Adding a Plugin
- Click Add Plugin
- Select a plugin from the registry dropdown
- The plugin’s JSON Schema (if available) generates a configuration form
- Fill in required and optional fields
- Click Add Plugin
Configuration Features
- Schema-based forms - Auto-generated from JSON Schema
- Real-time validation - Invalid configs are rejected before save
- Enable/Disable - Toggle plugins without removing configuration
- Reorder - Drag to change middleware execution order
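Ordering matters because each middleware runs before the next and may short-circuit the chain (for example, an auth rejection means rate-limit counters are never touched). A toy sketch of that semantics (plugin names illustrative; Barbacane's actual execution model lives in the WASM host):

```python
def run_chain(middlewares, request):
    """Run middlewares in configured order; a middleware returning a
    response short-circuits the rest of the chain."""
    for name, handler in middlewares:
        response = handler(request)
        if response is not None:
            return name, response
    return None, None  # chain completed; request continues to dispatch

deny_all = lambda req: (401, "missing token")
passthrough = lambda req: None

# With auth first, the request never reaches rate-limit:
name, resp = run_chain([("jwt-auth", deny_all), ("rate-limit", passthrough)], {})
print(name, resp)  # jwt-auth (401, 'missing token')
```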
Example: Configuring Rate Limiting
- Add the `rate-limit` plugin
- Configure:
  - `quota`: Maximum requests per window (e.g., `1000`)
  - `window`: Time window in seconds (e.g., `60`)
- Save the configuration
Example: Configuring CORS
- Add the `cors` plugin
- Configure:
  - `allowed_origins`: Array of allowed origins (e.g., `["https://example.com"]`)
  - `allowed_methods`: HTTP methods (e.g., `["GET", "POST", "OPTIONS"]`)
  - `max_age`: Preflight cache duration in seconds
- Save the configuration
Builds
Compile your specs and plugins into deployable .bca artifacts.
Triggering a Build
- Navigate to a project’s Builds tab
- Click Build
- Watch the compilation progress
- Download the artifact when complete
Build Status
| Status | Description |
|---|---|
| Pending | Build queued |
| Compiling | Compilation in progress |
| Succeeded | Artifact ready for download/deploy |
| Failed | Check error details for issues |
Build Errors
Common build errors:
- Missing dispatcher - Operation has no `x-barbacane-dispatch`
- Invalid config - Plugin configuration doesn't match schema
- Plugin not found - Referenced plugin not in registry
- HTTP URL rejected - Use HTTPS for production builds
Deploy
Deploy artifacts to connected data planes for zero-downtime updates.
Connecting Data Planes
Data planes connect to the control plane via WebSocket:
barbacane serve \
--control-plane ws://localhost:9090/ws/data-plane \
--project-id <project-uuid> \
--api-key <api-key>
Connected data planes appear in the Deploy tab with status indicators:
| Status | Description |
|---|---|
| Online | Connected and receiving updates |
| Deploying | Currently loading new artifact |
| Offline | Not connected |
Creating API Keys
- Navigate to a project’s Deploy tab
- Click Create Key in the API Keys section
- Enter a descriptive name
- Copy the key immediately - it’s only shown once
Deploying an Artifact
- Ensure at least one data plane is connected
- Click Deploy Latest
- The control plane notifies all connected data planes
- Data planes download, verify, and hot-reload the artifact
Project Templates
Create new projects from templates using the Init page:
Available Templates
| Template | Description |
|---|---|
| Basic | Minimal OpenAPI spec with health endpoint |
| Auth | Includes JWT authentication middleware |
| Full | Complete example with auth, rate limiting, and CORS |
Using Templates
- Click From Template on the Projects page
- Enter project name and select a template
- Preview the generated files
- Choose Setup in Control Plane to create project and upload spec
- Or choose Download to get the files locally
Settings
Project Settings
Access from a project’s Settings tab:
- Rename project - Update name and description
- Production mode - Enable/disable production compilation
- Delete project - Permanently remove project and all data
Global Settings
Access from Settings in the sidebar:
- Theme - Light/dark mode toggle
- API URL - Control plane endpoint configuration
Artifacts Browser
The Artifacts page shows all compiled artifacts:
- Download - Get the `.bca` file for local deployment
- View manifest - See included specs and plugins
- Delete - Remove old artifacts
Activity Log
The Activity page shows recent operations:
- Spec uploads
- Compilation jobs
- Deployments
- Plugin configuration changes
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
Ctrl/Cmd + K | Quick search |
Esc | Close dialogs |
Troubleshooting
UI won’t load
- Check that the control plane is running on port 9090
- Verify the UI dev server is running on port 5173
- Check browser console for errors
Plugin configuration not saving
- Verify the configuration matches the plugin’s JSON Schema
- Check for required fields that are empty
- Look for validation errors in the form
Data plane not appearing
- Verify the data plane is started with correct flags
- Check the API key is valid and not revoked
- Ensure the project ID matches
- Check WebSocket connection in data plane logs
Build fails with “plugin not found”
- Run `make seed-plugins` to populate the registry
- Verify the plugin name in your spec matches the registry
- Check the plugin version exists
Next Steps
- Control Plane API - Full REST API reference
- CLI Reference - Command-line options
- Plugin Development - Create custom plugins
Linting with Vacuum
Barbacane provides a vacuum ruleset that validates your OpenAPI specs against Barbacane-specific conventions. Catch plugin configuration errors, missing dispatch blocks, and security misconfigurations at lint time — before barbacane compile or runtime.
Note: The ruleset currently supports OpenAPI specs only. Vacuum does not yet support AsyncAPI 3.x linting (tracking issue). AsyncAPI specs are validated at compile time by `barbacane compile`.
Quick Start
1. Install vacuum
# macOS
brew install daveshanley/vacuum/vacuum
# Linux, Windows, Docker: https://quobix.com/vacuum/installing/
2. Download the Barbacane ruleset
Several rules use custom JavaScript functions (config schema validation, duplicate detection, etc.). Vacuum requires custom functions on the local filesystem, so download the ruleset and its functions:
mkdir -p .barbacane/rulesets/functions .barbacane/rulesets/schemas
curl -fsSL https://docs.barbacane.dev/rulesets/barbacane.yaml -o .barbacane/rulesets/barbacane.yaml
for f in barbacane-auth-opt-out barbacane-no-duplicate-middlewares barbacane-no-plaintext-upstream \
barbacane-no-unknown-extensions barbacane-valid-path-params barbacane-valid-secret-refs \
barbacane-validate-dispatch-config barbacane-validate-middleware-config; do
curl -fsSL "https://docs.barbacane.dev/rulesets/functions/${f}.js" -o ".barbacane/rulesets/functions/${f}.js"
done
This creates a .barbacane/rulesets/ directory with the ruleset YAML and custom functions. You may want to add .barbacane/ to your .gitignore.
3. Create a .vacuum.yml in your project
extends:
  - - .barbacane/rulesets/barbacane.yaml
    - recommended
4. Lint your spec
vacuum lint -f .barbacane/rulesets/functions my-api.yaml
Rules
The Barbacane ruleset includes the following rules, grouped by category.
Dispatch Validation
| Rule | Severity | Description |
|---|---|---|
barbacane-dispatch-required | error | Every operation must declare x-barbacane-dispatch |
barbacane-dispatch-has-name | error | Dispatch block must include a name field |
barbacane-dispatch-known-plugin | error | Plugin name must be a known dispatcher (ai-proxy, http-upstream, kafka, lambda, mock, nats, s3, ws-upstream) |
barbacane-dispatch-has-config | error | Dispatch block must include a config object |
barbacane-dispatch-config-valid | error | Config must validate against the plugin’s JSON Schema (required fields, types, no unknown fields) |
Middleware Validation
| Rule | Severity | Description |
|---|---|---|
barbacane-middleware-has-name | error | Each middleware entry must include a name field |
barbacane-middleware-known-plugin | warn | Name must be a known middleware plugin |
barbacane-middleware-config-valid | error | Config must validate against the plugin’s JSON Schema |
barbacane-middleware-no-duplicate | warn | No duplicate middleware names in a chain |
The same rules apply to operation-level middlewares (barbacane-op-middleware-*).
Path Parameter Validation
| Rule | Severity | Description |
|---|---|---|
barbacane-valid-path-params | error | Path parameters must match their template, including Barbacane’s {param+} wildcard catch-all syntax |
Note: The built-in vacuum `path-params` rule is disabled by the Barbacane ruleset because it rejects the `{paramName+}` wildcard syntax. The custom `barbacane-valid-path-params` rule replaces it with full wildcard support.
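A simplified sketch of a wildcard-aware parameter extractor (the real rule does more, e.g. cross-checking extracted names against declared parameters):

```python
import re

# Matches {name} and Barbacane's {name+} catch-all form.
TEMPLATE_PARAM = re.compile(r"\{([A-Za-z_][A-Za-z0-9_]*)(\+?)\}")

def template_params(path: str) -> list[tuple[str, bool]]:
    """Extract (name, is_wildcard) pairs from a path template."""
    return [(m.group(1), m.group(2) == "+") for m in TEMPLATE_PARAM.finditer(path)]

print(template_params("/files/{key+}"))      # [('key', True)]
print(template_params("/users/{id}/posts"))  # [('id', False)]
```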
Extension Hygiene
| Rule | Severity | Description |
|---|---|---|
barbacane-no-unknown-extension | warn | Only x-barbacane-dispatch and x-barbacane-middlewares are recognized |
Upstream & Secrets
| Rule | Severity | Description |
|---|---|---|
barbacane-no-plaintext-upstream | warn | http-upstream URLs should use HTTPS |
barbacane-secret-ref-format | error | Secret references must match env://VAR_NAME or file:///path |
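The accepted secret-reference forms can be expressed as a single pattern (a sketch; the exact variable-name rules Barbacane enforces are an assumption here, using POSIX-style uppercase names):

```python
import re

SECRET_REF = re.compile(r"^(env://[A-Z_][A-Z0-9_]*|file:///.+)$")

def is_valid_secret_ref(ref: str) -> bool:
    """True for env://VAR_NAME or file:///absolute/path references."""
    return SECRET_REF.fullmatch(ref) is not None

assert is_valid_secret_ref("env://API_TOKEN")
assert is_valid_secret_ref("file:///etc/barbacane/secret")
assert not is_valid_secret_ref("plaintext-token")
```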
Auth Safety
| Rule | Severity | Description |
|---|---|---|
barbacane-auth-opt-out-explicit | info | When global auth is set, operations that override middlewares without auth should use x-barbacane-middlewares: [] to explicitly opt out |
Extending the Ruleset
You can override individual rules in your .vacuum.yml:
extends:
  - - .barbacane/rulesets/barbacane.yaml
    - recommended
rules:
  # Downgrade to warning instead of error
  barbacane-dispatch-required: warn
  # Disable a rule entirely
  barbacane-no-plaintext-upstream: off
CI Integration
GitHub Actions
- name: Install vacuum
  run: |
    curl -fsSL https://github.com/daveshanley/vacuum/releases/latest/download/vacuum_linux_amd64 -o vacuum
    chmod +x vacuum
- name: Download Barbacane ruleset
  run: |
    mkdir -p .barbacane/rulesets/functions
    curl -fsSL https://docs.barbacane.dev/rulesets/barbacane.yaml -o .barbacane/rulesets/barbacane.yaml
    for f in barbacane-auth-opt-out barbacane-no-duplicate-middlewares barbacane-no-plaintext-upstream \
             barbacane-no-unknown-extensions barbacane-valid-path-params barbacane-valid-secret-refs \
             barbacane-validate-dispatch-config barbacane-validate-middleware-config; do
      curl -fsSL "https://docs.barbacane.dev/rulesets/functions/${f}.js" -o ".barbacane/rulesets/functions/${f}.js"
    done
- name: Lint OpenAPI spec
  run: ./vacuum lint -f .barbacane/rulesets/functions my-api.yaml
Pre-commit
vacuum lint -f .barbacane/rulesets/functions my-api.yaml
Custom Functions
Several rules use custom JavaScript functions for validations that go beyond what built-in vacuum functions can express (config schema validation, duplicate detection, secret reference format, etc.). Vacuum requires custom functions to be on the local filesystem — it does not fetch them from remote URLs.
The download steps above place these functions into .barbacane/rulesets/functions/. The -f flag in the vacuum lint command tells vacuum where to find them.
If you cloned the Barbacane repository, you can also point directly at the source:
# .vacuum.yml
extends:
  - - path/to/Barbacane/docs/rulesets/barbacane.yaml
    - recommended
vacuum lint -f path/to/Barbacane/docs/rulesets/functions my-api.yaml
Plugin Config Schemas
The ruleset validates plugin configurations against their JSON Schemas. These schemas are published alongside the ruleset at:
https://docs.barbacane.dev/rulesets/schemas/<plugin-name>.json
For example:
- https://docs.barbacane.dev/rulesets/schemas/http-upstream.json
- https://docs.barbacane.dev/rulesets/schemas/jwt-auth.json
- https://docs.barbacane.dev/rulesets/schemas/rate-limit.json
FIPS 140-3 Compliance
FIPS 140-3 is a US federal standard for cryptographic modules. Government agencies and their contractors are required to use FIPS-validated cryptography. Many large enterprises — especially in finance, healthcare, and defense — enforce FIPS compliance as a baseline security requirement.
Barbacane uses rustls with the aws-lc-rs cryptographic backend — no OpenSSL dependency. AWS-LC has received FIPS 140-3 Level 1 certification from NIST, making it straightforward to run Barbacane in FIPS-compliant mode.
How It Works
By default, Barbacane links against aws-lc-sys (the non-FIPS build of AWS-LC). Enabling the fips feature switches the entire TLS stack to aws-lc-fips-sys, which is the FIPS-validated module.
When FIPS mode is enabled:
- Only FIPS-approved cipher suites and key exchange algorithms are available
- TLS Extended Master Secret (EMS) is required for TLS 1.2 connections
- The aws-lc-fips-sys crate performs a power-on self-test at startup to verify cryptographic integrity
Enabling FIPS Mode
1. Install Additional Build Dependencies
The FIPS build of AWS-LC requires Go in addition to the standard build tools:
| Tool | Standard Build | FIPS Build |
|---|---|---|
| cmake | Required | Required |
| clang | Required | Required |
| Go 1.18+ | Not required | Required |
On Debian/Ubuntu:
apt-get install -y cmake clang golang
On macOS:
brew install cmake go
2. Build with the FIPS Feature Flag
The barbacane crate exposes a fips Cargo feature. Pass it at build time:
cargo build -p barbacane --release --features fips
This enables rustls/fips, which transitively pulls in aws-lc-fips-sys instead of aws-lc-sys. No source edits required.
3. Use the FIPS Crypto Provider
No code change is needed — the startup code in main.rs already installs the aws-lc-rs default provider:
let _ = rustls::crypto::aws_lc_rs::default_provider().install_default();
When built with the fips feature, this provider automatically uses the FIPS-validated module.
4. Verify FIPS Mode (Optional)
You can assert FIPS compliance at runtime using ServerConfig::fips():
let tls_config = load_tls_config(&config)?;
assert!(tls_config.fips(), "TLS config is not FIPS-compliant");
In production, prefer a health-check or log message over panicking:
if !tls_config.fips() {
tracing::warn!("TLS configuration is NOT FIPS-compliant");
}
5. Build
cargo build -p barbacane --release --features fips
The first FIPS build takes longer because aws-lc-fips-sys compiles AWS-LC from source with FIPS self-tests enabled.
Docker
Update the Dockerfile to install Go for the FIPS build:
FROM rust:1.93-slim-bookworm AS builder
# Build dependencies — Go required for aws-lc-fips-sys
RUN apt-get update && apt-get install -y --no-install-recommends \
cmake \
clang \
golang \
pkg-config \
&& rm -rf /var/lib/apt/lists/*
The runtime image (distroless/cc-debian12) does not need changes — the FIPS module is statically linked.
FIPS Cipher Suites
When FIPS mode is active, rustls restricts the available cipher suites to FIPS-approved algorithms:
TLS 1.3:
- TLS_AES_256_GCM_SHA384
- TLS_AES_128_GCM_SHA256
TLS 1.2:
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
Note: ChaCha20-Poly1305 cipher suites are not available in FIPS mode, as ChaCha20 is not FIPS-approved.
Platform Support
| Platform | FIPS Build | Notes |
|---|---|---|
| Linux (glibc) | Supported | Primary target |
| Linux (musl) | Supported | Requires cross-rs (same as non-FIPS) |
| macOS | Supported | Development only — not FIPS-certified |
| Windows | Supported | Static build used automatically |
Important: FIPS 140-3 validation only covers the specific certified platforms. For production FIPS compliance, deploy on Linux. macOS and Windows builds use the same code but are not covered by the NIST certificate.
Downstream Dependencies
The reqwest and tokio-tungstenite crates also use rustls for outbound TLS. Since they share the same workspace-level rustls dependency, enabling the fips feature applies to all TLS connections — both ingress and egress.
| Crate | TLS Usage | FIPS Coverage |
|---|---|---|
barbacane | Ingress TLS termination | Covered |
barbacane-wasm | Outbound HTTP (plugin host_http_call) | Covered via reqwest |
barbacane-control | WebSocket to data plane | Covered via tokio-tungstenite |
References
- rustls FIPS documentation
- aws-lc-rs GitHub
- AWS-LC FIPS 140-3 certification announcement
- ADR-0019: Packaging and Release Strategy — documents the aws-lc-rs backend choice
CLI Reference
Barbacane provides two command-line tools:
- barbacane - Data plane (gateway) for compiling specs and serving traffic
- barbacane-control - Control plane for managing specs, plugins, and artifacts via REST API
barbacane
barbacane <COMMAND> [OPTIONS]
Commands
| Command | Description |
|---|---|
init | Initialize a new Barbacane project |
compile | Compile OpenAPI spec(s) into a .bca artifact |
validate | Validate spec(s) without compiling |
serve | Run the gateway server |
barbacane init
Initialize a new Barbacane project with manifest, spec, and directory structure.
barbacane init [NAME] [OPTIONS]
Arguments
| Argument | Required | Default | Description |
|---|---|---|---|
NAME | No | . | Project name (creates a directory with this name, or initializes in current directory if .) |
Options
| Option | Required | Default | Description |
|---|---|---|---|
--template, -t | No | basic | Template to use: basic (full example) or minimal (bare bones) |
--fetch-plugins | No | false | Download official plugins (mock, http-upstream) from GitHub releases |
Plugin Download
The --fetch-plugins flag downloads official Barbacane plugins from GitHub releases:
- mock — Returns static responses (useful for testing and mocking)
- http-upstream — Proxies requests to HTTP/HTTPS backends
Downloaded plugins are placed in the plugins/ directory and automatically configured in barbacane.yaml.
# Create project with plugins downloaded
barbacane init my-api --fetch-plugins
If download fails (e.g., network issues), the project is still created with an empty plugins directory.
Templates
basic (default):
- Complete OpenAPI spec with `/health` and `/users` endpoints
- Example `x-barbacane-dispatch` configurations
- Ready to compile and run
minimal:
- Bare-bones OpenAPI spec with just the required structure
- Single `/health` endpoint placeholder
- Start from scratch
Examples
# Create project in new directory with basic template
barbacane init my-api
# Create project with official plugins downloaded
barbacane init my-api --fetch-plugins
# Create project with minimal template
barbacane init my-api --template minimal
# Initialize in current directory
barbacane init .
# Short form
barbacane init my-api -t minimal
Generated Files
my-api/
├── barbacane.yaml # Project manifest (plugin declarations)
├── api.yaml # OpenAPI 3.1 specification
├── plugins/ # Directory for WASM plugins
└── .gitignore # Ignores *.bca, target/, plugins/*.wasm
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Directory exists and is not empty, or write error |
barbacane compile
Compile one or more OpenAPI specs into a .bca artifact.
barbacane compile --spec <FILES>... --manifest <PATH> --output <PATH>
Options
| Option | Required | Default | Description |
|---|---|---|---|
--spec, -s | Yes | - | One or more spec files (YAML or JSON) |
--output, -o | Yes | - | Output artifact path |
--manifest, -m | Yes | - | Path to barbacane.yaml manifest |
--allow-plaintext | No | false | Allow http:// upstream URLs during compilation |
--provenance-commit | No | - | Git commit SHA to embed in artifact provenance metadata |
--provenance-source | No | - | Build source identifier (e.g., ci/github-actions) to embed in artifact provenance |
--no-cache | No | false | Bypass the plugin download cache entirely — remote plugins are re-downloaded and not cached |
Examples
# Compile single spec with manifest
barbacane compile --spec api.yaml --manifest barbacane.yaml --output api.bca
# Compile multiple specs
barbacane compile -s users.yaml -s orders.yaml -m barbacane.yaml -o combined.bca
# Short form
barbacane compile -s api.yaml -m barbacane.yaml -o api.bca
# With provenance metadata (CI/CD)
barbacane compile -s api.yaml -m barbacane.yaml -o api.bca \
--provenance-commit $(git rev-parse HEAD) \
--provenance-source ci/github-actions
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Compilation error (validation failed, routing conflict, undeclared plugin) |
| 2 | Manifest or plugin resolution error |
barbacane validate
Validate specs without full compilation. Checks for spec validity and extension errors.
barbacane validate --spec <FILES>... [OPTIONS]
Options
| Option | Required | Default | Description |
|---|---|---|---|
--spec, -s | Yes | - | One or more spec files to validate |
--format | No | text | Output format: text or json |
Error Codes
| Code | Category | Description |
|---|---|---|
| E1001 | Spec validity | Not a valid OpenAPI 3.x or AsyncAPI 3.x |
| E1002 | Spec validity | YAML/JSON parse error |
| E1003 | Spec validity | Unresolved $ref reference |
| E1004 | Spec validity | Schema validation error (missing info, etc.) |
| E1010 | Extension | Routing conflict (same path+method in multiple specs) |
| E1011 | Extension | Middleware entry missing name |
| E1015 | Extension | Unknown x-barbacane-* extension (warning) |
| E1020 | Extension | Operation missing x-barbacane-dispatch (warning) |
| E1031 | Extension | Plaintext HTTP URL not allowed (use --allow-plaintext to override) |
| E1040 | Manifest | Plugin used in spec but not declared in barbacane.yaml |
Examples
# Validate single spec
barbacane validate --spec api.yaml
# Validate multiple specs (checks for routing conflicts)
barbacane validate -s users.yaml -s orders.yaml
# JSON output (for CI/tooling)
barbacane validate --spec api.yaml --format json
Output Examples
Text format (default):
✓ api.yaml is valid
validated 1 spec(s): 1 valid, 0 invalid
Text format with errors:
✗ api.yaml has 1 error(s)
E1004 [api.yaml]: E1004: schema validation error: missing 'info' object
validated 1 spec(s): 0 valid, 1 invalid
JSON format:
{
"results": [
{
"file": "api.yaml",
"valid": true,
"errors": [],
"warnings": []
}
],
"summary": {
"total": 1,
"valid": 1,
"invalid": 0
}
}
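The JSON format is convenient for CI gating; a sketch that turns the report into an exit code (mirroring the CLI's own 0/1 convention):

```python
import json
import sys

def gate(report_json: str) -> int:
    """Derive an exit code from `barbacane validate --format json` output:
    0 when every spec is valid, 1 otherwise."""
    report = json.loads(report_json)
    for result in report["results"]:
        if not result["valid"]:
            print(f"invalid: {result['file']}", file=sys.stderr)
    return 0 if report["summary"]["invalid"] == 0 else 1

ok = '{"results": [{"file": "api.yaml", "valid": true, "errors": [], "warnings": []}], "summary": {"total": 1, "valid": 1, "invalid": 0}}'
print(gate(ok))  # 0
```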
Exit Codes
| Code | Meaning |
|---|---|
| 0 | All specs valid |
| 1 | One or more specs have errors |
barbacane serve
Run the gateway server, loading routes from a compiled artifact.
barbacane serve --artifact <PATH> [OPTIONS]
Options
| Option | Required | Default | Description |
|---|---|---|---|
--artifact | Yes | - | Path to the .bca artifact file |
--listen | No | 0.0.0.0:8080 | Listen address (ip:port) |
--dev | No | false | Enable development mode |
--log-level | No | info | Log level (trace, debug, info, warn, error) |
--log-format | No | json | Log format (json or pretty) |
--otlp-endpoint | No | - | OpenTelemetry endpoint for trace export (e.g., http://localhost:4317) |
--trace-sampling | No | 1.0 | Trace sampling rate (0.0 to 1.0). 1.0 = 100%, 0.1 = 10%, 0.0 = disabled |
--max-body-size | No | 1048576 | Maximum request body size in bytes (1MB) |
--max-headers | No | 100 | Maximum number of request headers |
--max-header-size | No | 8192 | Maximum size of a single header in bytes (8KB) |
--max-uri-length | No | 8192 | Maximum URI length in characters (8KB) |
--allow-plaintext-upstream | No | false | Allow http:// upstream URLs (dev only) |
--tls-cert | No | - | Path to TLS certificate file (PEM format) |
--tls-key | No | - | Path to TLS private key file (PEM format) |
--tls-min-version | No | 1.2 | Minimum TLS version (1.2 or 1.3) |
--keepalive-timeout | No | 60 | HTTP keep-alive idle timeout in seconds |
--shutdown-timeout | No | 30 | Graceful shutdown timeout in seconds |
--admin-bind | No | 127.0.0.1:8081 | Admin API listen address for health, metrics, and provenance endpoints. Set to off to disable |
Connected mode (optional — connect to a control plane for centralized management):
| Option | Required | Default | Description |
|---|---|---|---|
--control-plane | No | - | Control plane WebSocket URL (e.g., ws://control:9090/ws/data-plane) |
--project-id | No | - | Project UUID for control plane registration. Required if --control-plane is set |
--api-key | No | - | API key for control plane auth. Also readable from BARBACANE_API_KEY env var |
--data-plane-name | No | - | Name for this instance in the control plane UI |
Examples
# Run with defaults (HTTP)
barbacane serve --artifact api.bca
# Custom port
barbacane serve --artifact api.bca --listen 127.0.0.1:3000
# Development mode (verbose errors)
barbacane serve --artifact api.bca --dev
# Production with TLS (HTTPS)
barbacane serve --artifact api.bca \
--tls-cert /etc/barbacane/certs/server.crt \
--tls-key /etc/barbacane/certs/server.key
# Production with custom limits
barbacane serve --artifact api.bca \
--max-body-size 5242880 \
--max-headers 50
# With observability (OTLP export)
barbacane serve --artifact api.bca \
--log-format json \
--otlp-endpoint http://otel-collector:4317
# With reduced trace sampling (10%)
barbacane serve --artifact api.bca \
--otlp-endpoint http://otel-collector:4317 \
--trace-sampling 0.1
# Development mode with pretty logging
barbacane serve --artifact api.bca --dev --log-format pretty
# All options
barbacane serve --artifact api.bca \
--listen 0.0.0.0:8080 \
--tls-cert /etc/barbacane/certs/server.crt \
--tls-key /etc/barbacane/certs/server.key \
--log-level info \
--log-format json \
--otlp-endpoint http://otel-collector:4317 \
--max-body-size 1048576 \
--max-headers 100 \
--max-header-size 8192 \
--max-uri-length 8192
TLS Termination
The gateway supports HTTPS with TLS termination. To enable TLS, provide both --tls-cert and --tls-key:
barbacane serve --artifact api.bca \
--tls-cert /path/to/server.crt \
--tls-key /path/to/server.key
For maximum security with TLS 1.3 only (modern clients):
barbacane serve --artifact api.bca \
--tls-cert /path/to/server.crt \
--tls-key /path/to/server.key \
--tls-min-version 1.3
TLS Configuration:
- Minimum TLS version: 1.2 (default) or 1.3 (via --tls-min-version)
- Modern cipher suites (via aws-lc-rs)
- ALPN support for HTTP/2 and HTTP/1.1
Certificate Requirements:
- Certificate and key must be in PEM format
- Certificate file can contain the full chain (cert + intermediates)
- Both --tls-cert and --tls-key must be provided together
HTTP/2 Support
The gateway supports both HTTP/1.1 and HTTP/2 with automatic protocol detection:
- With TLS: HTTP/2 is negotiated via ALPN (Application-Layer Protocol Negotiation). Clients that support HTTP/2 will automatically use it when connecting over HTTPS.
- Without TLS: HTTP/1.1 is used by default. HTTP/2 cleartext (h2c) is also supported via protocol detection.
HTTP/2 Features:
- Multiplexed streams over a single connection
- Header compression (HPACK)
- Keep-alive with configurable ping intervals (20 seconds)
- Full support for all gateway features (routing, validation, middlewares)
No configuration is needed—HTTP/2 works automatically when TLS is enabled. To verify HTTP/2 is working:
# Test HTTP/2 with curl
curl -v --http2 https://localhost:8080/__barbacane/health
# Expected output shows HTTP/2:
# * Using HTTP/2
# < HTTP/2 200
Development Mode
The --dev flag enables:
- Verbose error messages with field names, locations, and detailed reasons
- Extended RFC 9457 problem details with an errors array
- Useful for debugging, but do not use in production: it may expose internal information
Request Limits
The gateway enforces request limits to protect against abuse:
| Limit | Default | Description |
|---|---|---|
| Body size | 1 MB | Requests with larger bodies are rejected with 400 |
| Header count | 100 | Requests with more headers are rejected with 400 |
| Header size | 8 KB | Individual headers larger than this are rejected |
| URI length | 8 KB | URIs longer than this are rejected with 400 |
Requests exceeding limits receive an RFC 9457 problem details response:
{
"type": "urn:barbacane:error:validation-failed",
"title": "Request validation failed",
"status": 400,
"detail": "request body too large: 2000000 bytes exceeds limit of 1048576 bytes"
}
Graceful Shutdown
The gateway handles shutdown signals (SIGTERM, SIGINT) gracefully:
- Stop accepting new connections immediately
- Drain in-flight requests for up to --shutdown-timeout seconds (default: 30)
- Force-close any remaining connections after the timeout
- Exit with code 0 on successful shutdown
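In orchestrated environments, the platform's termination grace period should exceed the gateway's drain window so in-flight requests are not killed mid-drain. A Kubernetes sketch (resource names and image are illustrative):

```yaml
# Sketch: align Kubernetes termination with --shutdown-timeout (names are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: barbacane-gateway
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 40   # > --shutdown-timeout (30s) so the drain can complete
      containers:
        - name: gateway
          image: ghcr.io/barbacane-dev/barbacane-standalone
          args: ["serve", "--artifact", "/config/api.bca", "--shutdown-timeout", "30"]
```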
# Send SIGTERM to gracefully shutdown
kill -TERM $(pgrep barbacane)
# Output during graceful shutdown
barbacane: received shutdown signal, draining connections...
barbacane: waiting for 3 active connection(s) to complete...
barbacane: all connections drained, shutting down
Response Headers
Every response includes these standard headers:
| Header | Description |
|---|---|
Server | barbacane/<version> (e.g., barbacane/0.1.0) |
X-Request-Id | Request ID - propagates incoming header or generates UUID v4 |
X-Trace-Id | Trace ID - extracted from traceparent header or generated |
X-Content-Type-Options | nosniff - prevents MIME sniffing attacks |
X-Frame-Options | DENY - prevents clickjacking via iframes |
Example response headers:
HTTP/1.1 200 OK
Server: barbacane/0.1.0
X-Request-Id: 550e8400-e29b-41d4-a716-446655440000
X-Trace-Id: 4bf92f3577b34da6a3ce929d0e0e4736
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
Content-Type: application/json
API Lifecycle Headers
For deprecated operations, additional headers are included:
| Header | Description |
|---|---|
Deprecation | true - indicates the endpoint is deprecated (per draft-ietf-httpapi-deprecation-header) |
Sunset | HTTP-date when the endpoint will be removed (per RFC 8594) |
Example for deprecated endpoint:
HTTP/1.1 200 OK
Server: barbacane/0.1.0
Deprecation: true
Sunset: Sat, 31 Dec 2025 23:59:59 GMT
Content-Type: application/json
See API Lifecycle for configuration details.
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Clean shutdown |
| 1 | Startup error (artifact not found, bind failed) |
| 11 | Plugin hash mismatch (artifact tampering detected) |
| 13 | Secret resolution failure (missing env var or file) |
Exit code 13 occurs when a secret reference in your spec cannot be resolved:
$ unset OAUTH2_SECRET   # ensure the variable is not set
$ barbacane serve --artifact api.bca
error: failed to resolve secrets: environment variable not found: OAUTH2_SECRET
$ echo $?
13
Environment Variables
| Variable | Description |
|---|---|
RUST_LOG | Override log level (e.g., RUST_LOG=debug) |
OTEL_EXPORTER_OTLP_ENDPOINT | Alternative to --otlp-endpoint flag |
Observability
Barbacane provides built-in observability features:
Logging
Structured logs are written to stdout in either JSON (default) or pretty format:
# JSON format (production)
barbacane serve --artifact api.bca --log-format json
# Pretty format (development)
barbacane serve --artifact api.bca --log-format pretty --log-level debug
Metrics
Prometheus metrics are exposed on the admin API port (default 127.0.0.1:8081):
curl http://localhost:8081/metrics
Key metrics include:
- barbacane_requests_total - Request counter by method, path, status
- barbacane_request_duration_seconds - Request latency histogram
- barbacane_active_connections - Current connection count
- barbacane_validation_failures_total - Validation error counter
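These can be collected with a standard Prometheus scrape job pointed at the admin port; a minimal sketch (the job name and target address are assumptions):

```yaml
# prometheus.yml fragment — target address is an assumption (admin API default bind)
scrape_configs:
  - job_name: barbacane
    static_configs:
      - targets: ["localhost:8081"]
```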
Distributed Tracing
Enable OTLP export to send traces to OpenTelemetry Collector:
barbacane serve --artifact api.bca \
--otlp-endpoint http://otel-collector:4317
Barbacane supports W3C Trace Context propagation (traceparent/tracestate headers) for distributed tracing across services.
Secret References
Dispatcher and middleware configs can reference secrets using special URI schemes. These are resolved at startup:
| Scheme | Example | Description |
|---|---|---|
env:// | env://API_KEY | Read from environment variable |
file:// | file:///etc/secrets/key | Read from file |
Example config with secrets:
x-barbacane-middlewares:
- name: oauth2-auth
config:
client_secret: "env://OAUTH2_SECRET"
Run with:
export OAUTH2_SECRET="my-secret-value"
barbacane serve --artifact api.bca
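The file:// scheme works the same way, reading the secret from a mounted file at startup; a sketch (the file path is illustrative):

```yaml
x-barbacane-middlewares:
  - name: oauth2-auth
    config:
      client_secret: "file:///etc/secrets/oauth2-secret"   # file content is read at startup
```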
See Secrets Guide for full documentation.
Common Workflows
Development Cycle
# Edit spec and manifest
vim api.yaml barbacane.yaml
# Validate (quick check)
barbacane validate --spec api.yaml
# Compile with manifest
barbacane compile --spec api.yaml --manifest barbacane.yaml --output api.bca
# Run in dev mode
barbacane serve --artifact api.bca --dev
CI/CD Pipeline
#!/bin/bash
set -e
# Validate all specs
barbacane validate --spec specs/*.yaml --format json > validation.json
# Compile artifact with provenance
barbacane compile \
--spec specs/users.yaml \
--spec specs/orders.yaml \
--manifest barbacane.yaml \
--output dist/gateway.bca \
--provenance-commit "$(git rev-parse HEAD)" \
--provenance-source ci/github-actions
echo "Artifact built: dist/gateway.bca"
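The same pipeline can run as a CI job. A hedged GitHub Actions sketch — the workflow layout and artifact storage step are assumptions, and the install step is omitted (adapt it to however you distribute the barbacane binary):

```yaml
# .github/workflows/gateway.yml — sketch, not an official workflow
name: build-gateway
on: [push]
jobs:
  compile:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # assumes barbacane is already on PATH (install step omitted)
      - name: Compile artifact with provenance
        run: |
          barbacane validate --spec specs/*.yaml
          barbacane compile \
            --spec specs/users.yaml \
            --manifest barbacane.yaml \
            --output dist/gateway.bca \
            --provenance-commit "$GITHUB_SHA" \
            --provenance-source ci/github-actions
      - uses: actions/upload-artifact@v4
        with:
          name: gateway-artifact
          path: dist/gateway.bca
```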
Multi-Spec Gateway
# Compile multiple specs into one artifact
barbacane compile \
--spec users-api.yaml \
--spec orders-api.yaml \
--spec payments-api.yaml \
--manifest barbacane.yaml \
--output combined.bca
# Routes from all specs are merged
# Conflicts (same path+method) cause E1010 error
Testing Locally
# Start gateway
barbacane serve --artifact api.bca --dev --listen 127.0.0.1:8080 &
# Test endpoints
curl http://localhost:8080/health
curl http://localhost:8080/__barbacane/health
curl http://localhost:8080/__barbacane/specs
# Stop gateway
kill %1
barbacane-control
The control plane CLI for running the control plane server and managing the plugin registry.
barbacane-control <COMMAND> [OPTIONS]
Commands
| Command | Description |
|---|---|
serve | Start the control plane REST API server |
seed-plugins | Seed the plugin registry with built-in plugins |
barbacane-control seed-plugins
Seed the plugin registry with built-in plugins from the local plugins/ directory. This command scans plugin directories, reads their manifests (plugin.toml), and registers them in the database.
barbacane-control seed-plugins [OPTIONS]
Options
| Option | Required | Default | Description |
|---|---|---|---|
--plugins-dir | No | plugins | Path to the plugins directory |
--database-url | Yes | - | PostgreSQL connection URL |
--force | No | false | Re-seed plugins that already exist (updates metadata and binary) |
--verbose | No | false | Show detailed output |
The --database-url can also be set via the DATABASE_URL environment variable.
Plugin Directory Structure
Each plugin directory should contain:
plugins/
├── http-upstream/
│ ├── plugin.toml # Plugin manifest (required)
│ ├── config-schema.json # JSON Schema for config (optional)
│ ├── http-upstream.wasm # Compiled WASM binary (required)
│ └── src/
│ └── lib.rs
├── rate-limit/
│ ├── plugin.toml
│ ├── config-schema.json
│ └── rate-limit.wasm
└── ...
Plugin Manifest (plugin.toml)
[plugin]
name = "http-upstream"
version = "0.1.0"
type = "dispatcher" # or "middleware"
description = "HTTP upstream proxy" # optional
wasm = "http-upstream.wasm" # optional, defaults to {name}.wasm
[capabilities]
host_functions = ["host_http_call", "host_log"]
Examples
# Build plugins and seed them into the registry
make seed-plugins
# Or manually:
cargo run -p barbacane-control -- seed-plugins \
--plugins-dir plugins \
--database-url postgres://localhost/barbacane \
--verbose
# Output:
# Registered http-upstream v0.1.0 (dispatcher)
# Registered rate-limit v0.1.0 (middleware)
# Registered cors v0.1.0 (middleware)
# ...
# Seeded 9 plugin(s) into the registry.
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | Error (database connection, invalid manifest, etc.) |
barbacane-control serve
Start the control plane HTTP server with PostgreSQL backend.
barbacane-control serve [OPTIONS]
Options
| Option | Required | Default | Description |
|---|---|---|---|
--listen | No | 127.0.0.1:9090 | Listen address (ip:port) |
--database-url | Yes | - | PostgreSQL connection URL |
--migrate | No | true | Run database migrations on startup |
The --database-url can also be set via the DATABASE_URL environment variable.
Examples
# Start with explicit database URL
barbacane-control serve \
--database-url postgres://postgres:password@localhost/barbacane \
--listen 0.0.0.0:9090
# Using environment variable
export DATABASE_URL=postgres://postgres:password@localhost/barbacane
barbacane-control serve
# Skip migrations (not recommended)
barbacane-control serve \
--database-url postgres://localhost/barbacane \
--migrate=false
Database Setup
The control plane requires PostgreSQL 14+. Tables are created automatically via migrations:
# Create database
createdb barbacane
# Start server (migrations run automatically)
barbacane-control serve --database-url postgres://localhost/barbacane
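For local experimentation, the database and control plane can run side by side; a docker-compose sketch (the control plane image name and credentials are assumptions):

```yaml
# docker-compose.yml sketch — image names and credentials are illustrative
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: barbacane
    ports:
      - "5432:5432"
  control:
    image: ghcr.io/barbacane-dev/barbacane-control   # assumed image name
    command: serve --listen 0.0.0.0:9090
    environment:
      DATABASE_URL: postgres://postgres:password@db/barbacane
    ports:
      - "9090:9090"
    depends_on: [db]
```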
API Endpoints
The server exposes a REST API for managing specs, plugins, artifacts, and projects:
| Endpoint | Description |
|---|---|
| System | |
GET /health | Health check |
GET /openapi | OpenAPI specification (YAML) |
GET /docs | Interactive API documentation (Scalar) |
POST /init | Initialize a new project |
| Specs | |
POST /specs | Upload spec (multipart) |
GET /specs | List specs |
GET /specs/{id} | Get spec metadata |
DELETE /specs/{id} | Delete spec |
GET /specs/{id}/history | Revision history |
GET /specs/{id}/content | Download spec content |
POST /specs/{id}/compile | Start async compilation |
GET /specs/{id}/compilations | List compilations for a spec |
PATCH /specs/{id}/operations | Update plugin bindings for spec operations |
| Compilations | |
GET /compilations/{id} | Poll compilation status |
DELETE /compilations/{id} | Delete a compilation record |
| Plugins | |
POST /plugins | Register plugin (multipart) |
GET /plugins | List plugins |
GET /plugins/{name} | List versions of a plugin |
GET /plugins/{name}/{version} | Get plugin metadata |
DELETE /plugins/{name}/{version} | Delete plugin |
GET /plugins/{name}/{version}/download | Download WASM binary |
| Artifacts | |
GET /artifacts | List artifacts |
GET /artifacts/{id} | Get artifact metadata |
DELETE /artifacts/{id} | Delete an artifact |
GET /artifacts/{id}/download | Download .bca file |
| Projects | |
POST /projects | Create a new project |
GET /projects | List all projects |
GET /projects/{id} | Get project details |
PUT /projects/{id} | Update project |
DELETE /projects/{id} | Delete project |
GET /projects/{id}/specs | List specs in a project |
POST /projects/{id}/specs | Upload spec to a project |
GET /projects/{id}/operations | List all operations across project specs |
GET /projects/{id}/plugins | List plugins configured for project |
POST /projects/{id}/plugins | Add plugin to project |
PUT /projects/{id}/plugins/{name} | Update plugin config |
DELETE /projects/{id}/plugins/{name} | Remove plugin from project |
GET /projects/{id}/compilations | List compilations for a project |
GET /projects/{id}/artifacts | List artifacts for a project |
POST /projects/{id}/deploy | Deploy artifact to connected data planes |
| Data Planes | |
GET /projects/{id}/data-planes | List connected data planes |
GET /projects/{id}/data-planes/{dp_id} | Get data plane status |
DELETE /projects/{id}/data-planes/{dp_id} | Disconnect a data plane |
| API Keys | |
POST /projects/{id}/api-keys | Create API key for data plane auth |
GET /projects/{id}/api-keys | List API keys |
DELETE /projects/{id}/api-keys/{key_id} | Revoke API key |
Interactive API Documentation
The control plane includes interactive API documentation powered by Scalar. Access it at:
http://localhost:9090/docs
This provides a browsable interface for exploring and testing all API endpoints.
Full API Specification
Full OpenAPI specification: Control Plane OpenAPI
See the Control Plane Guide for detailed usage examples.
Spec Extensions Reference
Complete reference for all x-barbacane-* OpenAPI extensions.
Summary
| Extension | Location | Required | Purpose |
|---|---|---|---|
x-barbacane-dispatch | Operation | Yes | Route to dispatcher |
x-barbacane-middlewares | Root / Operation | No | Apply middleware chain |
x-barbacane-dispatch
Specifies how to handle a request for an operation.
Location
Operation object (get, post, put, delete, patch, options, head).
Schema
x-barbacane-dispatch:
name: string # Required. Dispatcher name
config: object # Optional. Dispatcher-specific configuration
Properties
| Property | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Name of the dispatcher (e.g., mock, http) |
config | object | No | Configuration passed to the dispatcher |
Dispatcher: mock
Returns static responses.
x-barbacane-dispatch:
name: mock
config:
status: integer # HTTP status (default: 200)
body: string # Response body (default: "")
Dispatcher: http-upstream
Reverse proxy to HTTP/HTTPS backend.
x-barbacane-dispatch:
name: http-upstream
config:
url: string # Required. Base URL (HTTPS required in production)
path: string # Optional. Upstream path template (default: operation path)
timeout: number # Optional. Timeout in seconds (default: 30.0)
Dispatcher: kafka
Publish messages to Apache Kafka topics.
x-barbacane-dispatch:
name: kafka
config:
brokers: string # Required. Comma-separated broker addresses
topic: string # Required. Kafka topic
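A complete operation using the kafka dispatcher might look like this (broker addresses and topic name are illustrative):

```yaml
paths:
  /events/orders:
    post:
      x-barbacane-dispatch:
        name: kafka
        config:
          brokers: "kafka-1:9092,kafka-2:9092"
          topic: "orders"
```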
Dispatcher: nats
Publish messages to NATS subjects.
x-barbacane-dispatch:
name: nats
config:
url: string # Required. NATS server URL (e.g. "nats://localhost:4222")
subject: string # Required. NATS subject
Dispatcher: s3
Proxy requests to AWS S3 or any S3-compatible endpoint with AWS Signature Version 4 signing.
x-barbacane-dispatch:
name: s3
config:
access_key_id: string # Required. AWS access key ID
secret_access_key: string # Required. AWS secret access key
region: string # Required. AWS region (e.g. "us-east-1")
session_token: string # Optional. STS/AssumeRole session token
endpoint: string # Optional. Custom S3-compatible endpoint (e.g. "https://minio.internal:9000")
# Always uses path-style URLs when set.
force_path_style: boolean # Optional. Use path-style URLs (default: false)
bucket: string # Optional. Hard-coded bucket; ignores bucket_param when set
bucket_param: string # Optional. Path param name for bucket (default: "bucket")
key_param: string # Optional. Path param name for object key (default: "key")
timeout: number # Optional. Timeout in seconds (default: 30.0)
URL styles:
- Virtual-hosted (default): {bucket}.s3.{region}.amazonaws.com/{key}
- Path-style (force_path_style: true): s3.{region}.amazonaws.com/{bucket}/{key}
- Custom endpoint: {endpoint}/{bucket}/{key} (always path-style)
Multi-segment keys require {key+} (wildcard) in the route and allowReserved: true on the parameter:
paths:
/storage/{bucket}/{key+}:
get:
parameters:
- { name: bucket, in: path, required: true, schema: { type: string } }
- { name: key, in: path, required: true, allowReserved: true, schema: { type: string } }
x-barbacane-dispatch:
name: s3
config:
region: us-east-1
access_key_id: env://AWS_ACCESS_KEY_ID
secret_access_key: env://AWS_SECRET_ACCESS_KEY
Single-bucket CDN (hard-coded bucket, public route):
paths:
/assets/{key+}:
get:
parameters:
- { name: key, in: path, required: true, allowReserved: true, schema: { type: string } }
x-barbacane-dispatch:
name: s3
config:
bucket: my-assets
region: eu-west-1
access_key_id: env://AWS_ACCESS_KEY_ID
secret_access_key: env://AWS_SECRET_ACCESS_KEY
Dispatcher: ws-upstream
Transparent WebSocket proxy. Upgrades the connection and relays frames bidirectionally.
x-barbacane-dispatch:
name: ws-upstream
config:
url: string # Required. Upstream WebSocket URL (ws:// or wss://)
connect_timeout: number # Optional. Connection timeout in seconds (default: 5)
path: string # Optional. Upstream path template with {param} substitution
Examples
Mock response:
paths:
/health:
get:
x-barbacane-dispatch:
name: mock
config:
status: 200
body: '{"status":"ok"}'
HTTP upstream proxy:
paths:
/users/{id}:
get:
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://user-service.internal"
path: "/api/v2/users/{id}"
Wildcard proxy (multi-segment path capture with {param+}):
paths:
/proxy/{path+}:
get:
parameters:
- name: path
in: path
required: true
allowReserved: true # value may contain unencoded '/'
schema:
type: string
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://backend.internal"
path: "/{path}"
timeout: 10.0
A request to /proxy/api/v2/users/123 captures path=api/v2/users/123 and forwards to https://backend.internal/api/v2/users/123. See Path Parameters for details.
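A WebSocket proxy route follows the same pattern with the ws-upstream dispatcher; a sketch (the upstream URL is illustrative):

```yaml
paths:
  /ws/chat:
    get:
      x-barbacane-dispatch:
        name: ws-upstream
        config:
          url: "wss://chat-service.internal/ws"
          connect_timeout: 5
```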
Secret References
Config values can reference secrets instead of hardcoding sensitive data. Secrets are resolved at gateway startup.
| Scheme | Example | Description |
|---|---|---|
env:// | env://API_KEY | Read from environment variable |
file:// | file:///etc/secrets/key | Read from file (content trimmed) |
Example with secret reference:
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
headers:
Authorization: "Bearer env://UPSTREAM_API_KEY"
If a secret cannot be resolved, the gateway fails to start with exit code 13.
See Secrets Guide for full documentation.
x-barbacane-middlewares
Defines a middleware chain.
Location
- Root level: Applies to all operations (global)
- Operation level: Applies to specific operation (after global)
Schema
x-barbacane-middlewares:
- name: string # Required. Middleware name
config: object # Optional. Middleware-specific configuration
Properties
| Property | Type | Required | Description |
|---|---|---|---|
name | string | Yes | Name of the middleware plugin |
config | object | No | Configuration passed to the middleware |
Middleware Merging
Operation middlewares are merged with the global chain. Global middlewares not overridden by name are preserved. When an operation defines a middleware with the same name as a global one, the operation config overrides the global config for that entry. An empty array (x-barbacane-middlewares: []) disables all middlewares for that operation.
Examples
Global middlewares:
openapi: "3.1.0"
info:
title: My API
version: "1.0.0"
x-barbacane-middlewares:
- name: request-id
config:
header: X-Request-ID
- name: rate-limit
config:
quota: 100
window: 60
- name: cors
config:
allowed_origins: ["https://app.example.com"]
paths:
/users:
get:
# Inherits all global middlewares
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
Operation-specific middlewares:
paths:
/admin:
get:
x-barbacane-middlewares:
- name: jwt-auth
config:
required: true
scopes: ["admin:read"]
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
Override global config:
# Global: 100 req/min
x-barbacane-middlewares:
- name: rate-limit
config:
quota: 100
window: 60
paths:
/high-traffic:
get:
# Override: 1000 req/min for this endpoint
x-barbacane-middlewares:
- name: rate-limit
config:
quota: 1000
window: 60
x-barbacane-dispatch:
name: http-upstream
config:
url: "https://api.example.com"
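Disable all middlewares for one operation (the empty-array case described under Middleware Merging):

```yaml
paths:
  /public/status:
    get:
      # Empty array: no global middlewares run for this operation
      x-barbacane-middlewares: []
      x-barbacane-dispatch:
        name: mock
        config:
          status: 200
          body: '{"status":"ok"}'
```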
Available Middlewares
auth-jwt
- name: auth-jwt
config:
required: true
header: Authorization
scheme: Bearer
issuer: https://auth.example.com
audience: my-api
scopes: ["read"]
rate-limit
- name: rate-limit
config:
quota: 100 # Maximum requests allowed in window
window: 60 # Window duration in seconds
policy_name: "default" # Optional: name for RateLimit-Policy header
partition_key: "client_ip" # Options: "client_ip", "header:<name>", "context:<key>"
Returns headers compliant with the IETF draft-ietf-httpapi-ratelimit-headers specification:
- RateLimit-Policy: Policy description (e.g., default;q=100;w=60)
- RateLimit: Current limit status
- Retry-After: Seconds until quota reset (only on 429)
cors
- name: cors
config:
allowed_origins: ["https://app.example.com"]
allowed_methods: ["GET", "POST", "PUT", "DELETE"]
allowed_headers: ["Authorization", "Content-Type"]
max_age: 86400
cache
- name: cache
config:
ttl: 300 # TTL in seconds (default: 300)
vary: ["Accept-Language"] # Headers that differentiate cache entries
methods: ["GET", "HEAD"] # Cacheable methods (default: GET, HEAD)
cacheable_status: [200, 301, 404] # Cacheable status codes
Adds X-Cache header to responses:
- HIT: Response served from cache
- MISS: Response not in cache (will be cached if cacheable)
request-id
- name: request-id
config:
header: X-Request-ID
generate_if_missing: true
idempotency
- name: idempotency
config:
header: Idempotency-Key
ttl: 86400
acl
- name: acl
config:
allow: ["admin", "editor"] # Groups allowed access
deny: ["banned"] # Groups denied (precedence over allow)
allow_consumers: ["superadmin"] # Consumer IDs allowed (bypass groups)
deny_consumers: ["attacker"] # Consumer IDs denied (highest precedence)
consumer_groups: # Static consumer→groups supplement
free_user: ["premium"]
message: "Access denied by ACL policy" # Custom 403 message
hide_consumer_in_errors: false # Show consumer in error body
Reads x-auth-consumer and x-auth-consumer-groups headers set by upstream auth plugins. Must run after an authentication middleware (basic-auth, jwt-auth, oidc-auth, oauth2-auth, apikey-auth).
Returns RFC 9457 Problem JSON on 403 with "type": "urn:barbacane:error:acl-denied".
request-transformer
Declarative request transformations before upstream dispatch.
- name: request-transformer
config:
headers:
add: { X-Gateway: "barbacane" } # Add/overwrite headers
set: { X-Source: "external" } # Add only if absent
remove: ["Authorization"] # Remove by name
rename: { X-Old: X-New } # Rename headers
querystring:
add: { version: "1.0" } # Add/overwrite params
remove: ["internal"] # Remove params
rename: { old: new } # Rename params
path:
strip_prefix: "/api/v1" # Remove path prefix
add_prefix: "/internal" # Add path prefix
replace: # Regex replace
pattern: "/v1/(.*)"
replacement: "/v2/$1"
body:
add: { /metadata/gw: "barbacane" } # JSON Pointer add
remove: ["/password"] # JSON Pointer remove
rename: { /userName: /user_name } # JSON Pointer rename
Supports variable interpolation: $client_ip, $header.*, $query.*, $path.*, context:*. Variables resolve against the original request.
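For instance, interpolation can forward request metadata upstream; a sketch using $client_ip and $header.* (the header names are illustrative):

```yaml
- name: request-transformer
  config:
    headers:
      add:
        X-Client-Address: "$client_ip"        # original client IP
        X-Original-Agent: "$header.User-Agent" # copied from the incoming request
```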
See Middlewares Guide for full documentation.
response-transformer
Declarative response transformations before client delivery.
- name: response-transformer
config:
status: # Map status codes
200: 201
400: 403
headers:
add: { X-Gateway: "barbacane" } # Add/overwrite headers
set: { X-Frame-Options: "DENY" } # Add only if absent
remove: ["Server"] # Remove by name
rename: { X-Old: X-New } # Rename headers
body:
add: { /metadata/gw: "barbacane" } # JSON Pointer add
remove: ["/internal"] # JSON Pointer remove
rename: { /userName: /user_name } # JSON Pointer rename
See Middlewares Guide for full documentation.
observability
Per-operation observability middleware for SLO monitoring, detailed logging, and custom metrics.
- name: observability
config:
latency_slo_ms: 200 # Emit SLO violation metric if exceeded
detailed_request_logs: true # Log request details (method, path, headers, body size)
detailed_response_logs: true # Log response details (status, duration, body size)
emit_latency_histogram: true # Emit per-operation latency histogram
| Option | Type | Default | Description |
|---|---|---|---|
latency_slo_ms | integer | - | Latency threshold in ms; emits barbacane_plugin_observability_slo_violation counter when exceeded |
detailed_request_logs | boolean | false | Log incoming request details |
detailed_response_logs | boolean | false | Log outgoing response details including duration |
emit_latency_histogram | boolean | false | Emit barbacane_plugin_observability_latency_ms histogram |
Validation Errors
| Code | Message | Cause |
|---|---|---|
| E1010 | Routing conflict | Same path+method in multiple specs |
| E1020 | Missing dispatch | Operation has no x-barbacane-dispatch |
Artifact Format
Barbacane compiles OpenAPI specs into .bca (Barbacane Compiled Artifact) files. This document describes the artifact format.
Overview
A .bca file is a gzip-compressed tar archive containing:
artifact.bca (tar.gz)
├── manifest.json # Artifact metadata
├── routes.json # Compiled routing table
├── specs/ # Embedded source specifications
│ ├── api.yaml
│ └── ...
└── plugins/ # Bundled WASM plugins (optional)
├── rate-limit.wasm
└── ...
File Structure
manifest.json
Metadata about the artifact.
{
"barbacane_artifact_version": 2,
"compiled_at": "2026-03-01T10:30:00Z",
"compiler_version": "0.2.1",
"source_specs": [
{
"file": "api.yaml",
"sha256": "abc123...",
"type": "openapi",
"version": "3.1.0"
}
],
"bundled_plugins": [
{
"name": "rate-limit",
"version": "1.0.0",
"plugin_type": "middleware",
"wasm_path": "plugins/rate-limit.wasm",
"sha256": "789abc..."
}
],
"routes_count": 12,
"checksums": {
"routes.json": "sha256:def456..."
},
"artifact_hash": "sha256:a1b2c3d4e5f6...",
"provenance": {
"commit": "abc123def456",
"source": "ci/github-actions"
}
}
Fields
| Field | Type | Description |
|---|---|---|
barbacane_artifact_version | integer | Format version (currently 2) |
compiled_at | string | ISO 8601 timestamp of compilation |
compiler_version | string | Version of barbacane compiler |
source_specs | array | List of source specifications |
bundled_plugins | array | List of bundled WASM plugins (optional) |
routes_count | integer | Number of compiled routes |
checksums | object | SHA-256 checksums for integrity |
artifact_hash | string | Combined SHA-256 fingerprint of all artifact inputs (hash-of-hashes) |
provenance | object | Build provenance metadata |
provenance.commit | string? | Git commit SHA (if provided via --provenance-commit) |
provenance.source | string? | Build source identifier (if provided via --provenance-source) |
source_specs entry
| Field | Type | Description |
|---|---|---|
file | string | Original filename |
sha256 | string | Hash of source content |
type | string | Spec type (openapi or asyncapi) |
version | string | Spec version (e.g., 3.1.0) |
bundled_plugins entry
| Field | Type | Description |
|---|---|---|
name | string | Plugin name (kebab-case) |
version | string | Plugin version (semver) |
plugin_type | string | Plugin type (middleware or dispatcher) |
wasm_path | string | Path to WASM file within artifact |
sha256 | string | SHA-256 hash of WASM file |
routes.json
Compiled operations with routing information.
{
"operations": [
{
"index": 0,
"path": "/users",
"method": "GET",
"operation_id": "listUsers",
"dispatch": {
"name": "http",
"config": {
"upstream": "backend",
"path": "/api/users"
}
}
},
{
"index": 1,
"path": "/users/{id}",
"method": "GET",
"operation_id": "getUser",
"dispatch": {
"name": "http",
"config": {
"upstream": "backend",
"path": "/api/users/{id}"
}
}
}
]
}
operation entry
| Field | Type | Description |
|---|---|---|
index | integer | Unique operation index |
path | string | OpenAPI path template |
method | string | HTTP method (uppercase) |
operation_id | string | Operation ID (optional) |
dispatch | object | Dispatcher configuration |
specs/
Directory containing the original source specifications. These are embedded for:
- Serving via the /__barbacane/specs endpoint
- Documentation and debugging
- Audit trail
Files retain their original names.
Version History
| Version | Changes |
|---|---|
| 2 | Added artifact_hash (combined SHA-256 fingerprint) and provenance (build metadata) fields |
| 1 | Initial format |
Inspecting Artifacts
List Contents
tar -tzf artifact.bca
Output:
manifest.json
routes.json
specs/
specs/api.yaml
plugins/
plugins/rate-limit.wasm
Extract and View
# Extract
tar -xzf artifact.bca -C ./extracted
# View manifest
jq . extracted/manifest.json
# View routes
jq '.operations | length' extracted/routes.json
Verify Checksums
# Extract
tar -xzf artifact.bca -C ./extracted
# Verify routes.json against the recorded checksum
sha256sum extracted/routes.json
jq -r '.checksums["routes.json"]' extracted/manifest.json   # strip the sha256: prefix when comparing
Security Considerations
Integrity
- All embedded files have SHA-256 checksums in the manifest
- The gateway verifies checksums on load
- The artifact_hash provides a single combined fingerprint of all inputs for drift detection
- When connected to a control plane, hash mismatches trigger drift alerts
Contents
- Source specs are embedded and served publicly via the /__barbacane/specs endpoint
- Do not include secrets in spec files
- Use environment variables or secret management for sensitive config
Programmatic Access
Rust
use barbacane_compiler::{load_manifest, load_routes, load_specs, load_plugins};
use std::path::Path;
let path = Path::new("artifact.bca");
// Load manifest
let manifest = load_manifest(path)?;
println!("Routes: {}", manifest.routes_count);
// Load routes
let routes = load_routes(path)?;
for op in &routes.operations {
println!("{} {}", op.method, op.path);
}
// Load specs
let specs = load_specs(path)?;
for (name, content) in &specs {
println!("Spec: {} ({} bytes)", name, content.len());
}
// Load plugins
let plugins = load_plugins(path)?;
for (name, wasm_bytes) in &plugins {
println!("Plugin: {} ({} bytes)", name, wasm_bytes.len());
}
Best Practices
Naming
Use descriptive names:
my-api-v2.1.0.bca
gateway-prod-2025-01-29.bca
Version Control
Don’t commit .bca files to git. Instead:
- Commit source specs
- Build artifacts in CI/CD
- Store in artifact registry
CI/CD Pipeline
# Compile in CI with provenance
barbacane compile \
--spec specs/*.yaml \
--manifest barbacane.yaml \
--output dist/gateway-${VERSION}.bca \
--provenance-commit "$(git rev-parse HEAD)" \
--provenance-source ci/github-actions
# Upload to registry
aws s3 cp dist/gateway-${VERSION}.bca s3://artifacts/
# Deploy
ssh prod "barbacane serve --artifact /opt/barbacane/gateway.bca"
# Verify provenance on running gateway
curl -s http://gateway:8081/provenance | jq '.artifact_hash'
Reserved Endpoints
Barbacane reserves the /__barbacane/* path prefix for gateway introspection and management endpoints. These are always available regardless of your spec configuration.
Health Check
GET /__barbacane/health
Returns the gateway health status.
Response
{
"status": "healthy",
"artifact_version": 1,
"compiler_version": "0.1.0",
"routes_count": 12
}
Fields
| Field | Type | Description |
|---|---|---|
| `status` | string | Always `"healthy"` if responding |
| `artifact_version` | integer | `.bca` format version |
| `compiler_version` | string | barbacane version that compiled the artifact |
| `routes_count` | integer | Number of routes loaded |
Usage
# Kubernetes liveness probe
livenessProbe:
httpGet:
path: /__barbacane/health
port: 8080
initialDelaySeconds: 5
periodSeconds: 10
# Load balancer health check
curl -f http://localhost:8080/__barbacane/health
API Specs
GET /__barbacane/specs
Returns an index of all embedded API specifications (OpenAPI and AsyncAPI).
Index Response
curl http://localhost:8080/__barbacane/specs
{
"openapi": {
"specs": [
{ "name": "users-api.yaml", "url": "/__barbacane/specs/users-api.yaml" },
{ "name": "orders-api.yaml", "url": "/__barbacane/specs/orders-api.yaml" }
],
"count": 2,
"merged_url": "/__barbacane/specs/openapi"
},
"asyncapi": {
"specs": [
{ "name": "events.yaml", "url": "/__barbacane/specs/events.yaml" }
],
"count": 1,
"merged_url": "/__barbacane/specs/asyncapi"
}
}
Merged Specs
Get all OpenAPI specs merged into one (for Swagger UI):
curl http://localhost:8080/__barbacane/specs/openapi
Get all AsyncAPI specs merged into one (for AsyncAPI Studio):
curl http://localhost:8080/__barbacane/specs/asyncapi
Individual Specs
Fetch a specific spec by filename:
curl http://localhost:8080/__barbacane/specs/users-api.yaml
Format Selection
Request specs in JSON or YAML format using the format query parameter:
# Get merged OpenAPI as JSON (for tools that prefer JSON)
curl "http://localhost:8080/__barbacane/specs/openapi?format=json"
# Get merged OpenAPI as YAML (default)
curl "http://localhost:8080/__barbacane/specs/openapi?format=yaml"
Extension Stripping
All specs served via these endpoints have internal x-barbacane-* extensions stripped automatically. Only standard OpenAPI/AsyncAPI fields and the x-sunset extension (RFC 8594) are preserved.
Usage
# Swagger UI integration (for OpenAPI specs)
# Point Swagger UI to: http://your-gateway/__barbacane/specs/openapi
# AsyncAPI Studio integration (for AsyncAPI specs)
# Point to: http://your-gateway/__barbacane/specs/asyncapi
# Download merged spec for documentation
curl -o api.yaml http://localhost:8080/__barbacane/specs/openapi
# API client generation
curl http://localhost:8080/__barbacane/specs/openapi | \
openapi-generator generate -i /dev/stdin -g typescript-fetch -o ./client
Path Reservation
The entire /__barbacane/ prefix is reserved. Attempting to define operations under this path in your spec will result in undefined behavior (your routes may be shadowed by built-in endpoints).
Don’t do this:
paths:
/__barbacane/custom: # BAD: Reserved prefix
get:
...
Admin API (Dedicated Port)
Starting with v0.3.0, operational endpoints (metrics, provenance) are served on a dedicated admin port (default 127.0.0.1:8081), separate from user traffic. This follows ADR-0022 and keeps operational data off the public-facing port.
Configure with --admin-bind (default: 127.0.0.1:8081). Set to off to disable.
Health Check (Admin)
GET /health
Returns the gateway health status with uptime.
{
"status": "healthy",
"artifact_version": 2,
"compiler_version": "0.2.1",
"routes_count": 12,
"uptime_secs": 3600
}
Prometheus Metrics
GET /metrics
Returns gateway metrics in Prometheus text exposition format.
Response
# HELP barbacane_requests_total Total number of HTTP requests processed
# TYPE barbacane_requests_total counter
barbacane_requests_total{method="GET",path="/users",status="200",api="users-api"} 42
# HELP barbacane_request_duration_seconds HTTP request duration in seconds
# TYPE barbacane_request_duration_seconds histogram
barbacane_request_duration_seconds_bucket{method="GET",path="/users",status="200",api="users-api",le="0.01"} 35
...
# HELP barbacane_active_connections Number of currently active connections
# TYPE barbacane_active_connections gauge
barbacane_active_connections 5
Available Metrics
| Metric | Type | Labels | Description |
|---|---|---|---|
| `barbacane_requests_total` | counter | method, path, status, api | Total requests processed |
| `barbacane_request_duration_seconds` | histogram | method, path, status, api | Request latency |
| `barbacane_request_size_bytes` | histogram | method, path, status, api | Request body size |
| `barbacane_response_size_bytes` | histogram | method, path, status, api | Response body size |
| `barbacane_active_connections` | gauge | - | Current open connections |
| `barbacane_connections_total` | counter | - | Total connections accepted |
| `barbacane_validation_failures_total` | counter | method, path, reason | Validation errors |
| `barbacane_middleware_duration_seconds` | histogram | middleware, phase | Middleware execution time |
| `barbacane_dispatch_duration_seconds` | histogram | dispatcher, upstream | Dispatcher execution time |
| `barbacane_wasm_execution_duration_seconds` | histogram | plugin, function | WASM plugin execution time |
Usage
# Scrape metrics from admin port
curl http://localhost:8081/metrics
# Prometheus scrape config
scrape_configs:
- job_name: 'barbacane'
static_configs:
- targets: ['barbacane:8081']
metrics_path: '/metrics'
Provenance
GET /provenance
Returns full artifact provenance data: cryptographic fingerprint, build metadata, source specs, bundled plugins, and drift detection status.
Response
{
"artifact_hash": "sha256:a1b2c3d4e5f6...",
"compiled_at": "2026-03-01T10:30:00Z",
"compiler_version": "0.2.1",
"artifact_version": 2,
"provenance": {
"commit": "abc123def456",
"source": "ci/github-actions"
},
"source_specs": [
{ "file": "api.yaml", "sha256": "abc123...", "type": "openapi" }
],
"plugins": [
{ "name": "rate-limit", "version": "1.0.0", "sha256": "789abc..." }
],
"drift_detected": false
}
Fields
| Field | Type | Description |
|---|---|---|
| `artifact_hash` | string | Combined SHA-256 fingerprint of all artifact inputs |
| `compiled_at` | string | ISO 8601 compilation timestamp |
| `compiler_version` | string | Barbacane compiler version |
| `provenance.commit` | string? | Git commit SHA (if provided at compile time) |
| `provenance.source` | string? | Build source identifier |
| `source_specs` | array | Source specifications with individual hashes |
| `plugins` | array | Bundled plugins with versions and hashes |
| `drift_detected` | boolean | `true` if control plane detected a hash mismatch |
Usage
# Check what's running
curl http://localhost:8081/provenance
# Verify artifact hash
curl -s http://localhost:8081/provenance | jq -r '.artifact_hash'
# Check for drift
curl -s http://localhost:8081/provenance | jq '.drift_detected'
Security Considerations
The /__barbacane/* endpoints on the main traffic port (8080) serve health checks and API specs — both are typically safe to expose publicly. Health checks are standard for load balancers and Kubernetes probes. API specs are designed for API consumers (Swagger UI, client generation).
Operational endpoints (metrics, provenance) are served on the admin port (default 127.0.0.1:8081), which binds to localhost only by default.
In production, consider:
- Keep admin port internal: The default `127.0.0.1:8081` binding is already safe; if you change it to `0.0.0.0:8081`, ensure firewall rules restrict access
- Network segmentation: Only expose port 8080 to your load balancer
- Spec access control: If your API specs contain sensitive information, restrict `/__barbacane/specs` via a reverse proxy
Example nginx configuration:
location /__barbacane/specs {
# Restrict spec access to internal network if needed
allow 10.0.0.0/8;
deny all;
proxy_pass http://barbacane:8080;
}
location / {
proxy_pass http://barbacane:8080;
}
Architecture
This document describes Barbacane’s system architecture for contributors.
High-Level Overview
┌─────────────────────────────────────────────────────────────────┐
│ Control Plane │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
│ │ OpenAPI │───▶│ Parser │───▶│ Compiler │ │
│ │ Specs │ │ │ │ (validation, trie) │ │
│ └─────────────┘ └─────────────┘ └──────────┬──────────┘ │
│ │ │
│ ▼ │
│ ┌───────────────┐ │
│ │ .bca Artifact │ │
│ └───────┬───────┘ │
└───────────────────────────────────────────────────┼─────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Data Plane │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
│ │ Artifact │───▶│ Router │───▶│ Dispatchers │ │
│ │ Loader │ │ (trie) │ │ (mock, http, ...) │ │
│ └─────────────┘ └─────────────┘ └─────────────────────┘ │
│ │ │ │ │
│ │ ▼ ▼ │
│ │ ┌─────────────┐ ┌─────────────────────┐ │
│ │ │ Middlewares │◀──▶│ Plugin Runtime │ │
│ │ │ Chain │ │ (WASM) │ │
│ │ └─────────────┘ └─────────────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ HTTP Server (hyper) │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
Crate Structure
The project is organized as a Cargo workspace with specialized crates:
crates/
├── barbacane/ # Main CLI (compile, validate, serve)
├── barbacane-control/ # Control plane CLI (spec upload, plugin register)
├── barbacane-compiler/ # Spec compilation & artifact format
├── barbacane-spec-parser/ # OpenAPI/AsyncAPI parsing
├── barbacane-router/ # Prefix trie request routing
├── barbacane-validator/ # Request validation
├── barbacane-wasm/ # WASM plugin runtime (wasmtime)
├── barbacane-plugin-sdk/ # WASM plugin development kit
├── barbacane-plugin-macros/# Proc macros for plugin development
└── barbacane-test/ # Integration test harness
Crate Dependencies
barbacane (CLI / data plane)
├── barbacane-compiler
│ ├── barbacane-spec-parser
│ └── barbacane-router
├── barbacane-validator
├── barbacane-router
└── barbacane-wasm
└── barbacane-plugin-sdk
barbacane-plugin-sdk
└── barbacane-plugin-macros
barbacane-test
└── barbacane-compiler
Crate Details
barbacane-spec-parser
Parses OpenAPI and AsyncAPI specifications and extracts Barbacane extensions.
Key types:
- `ApiSpec` - Parsed specification with operations and metadata
- `Operation` - Single API operation with dispatch/middleware config
- `DispatchConfig` - Dispatcher name and configuration
- `MiddlewareConfig` - Middleware name and configuration
- `Channel` - AsyncAPI channel with publish/subscribe operations
Supported formats:
- OpenAPI 3.0.x
- OpenAPI 3.1.x
- OpenAPI 3.2.x (draft)
- AsyncAPI 3.x (with Kafka and NATS dispatchers)
barbacane-router
Prefix trie implementation for fast HTTP request routing.
Key types:
- `Router` - The routing trie
- `RouteEntry` - Points to compiled operation index
- `RouteMatch` - Found / MethodNotAllowed / NotFound
Features:
- O(path length) lookup
- Static routes take precedence over parameters
- Path parameter extraction
- Path normalization (trailing slashes, double slashes)
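The precedence and lookup behavior above can be sketched with a minimal trie. This is a hypothetical simplification, not `barbacane-router`'s actual implementation: each node holds static children plus at most one parameter slot, and static segments are tried first.

```rust
use std::collections::HashMap;

// Minimal route trie: static segments take precedence over a single
// {param} slot per node, mirroring the precedence rule described above.
#[derive(Default)]
struct Node {
    children: HashMap<String, Node>,    // static segments
    param: Option<(String, Box<Node>)>, // at most one parameter child
    route: Option<usize>,               // index of the compiled operation
}

#[derive(Default)]
struct Trie {
    root: Node,
}

impl Trie {
    fn insert(&mut self, path: &str, route: usize) {
        let mut node = &mut self.root;
        for seg in path.split('/').filter(|s| !s.is_empty()) {
            if let Some(name) = seg.strip_prefix('{').and_then(|s| s.strip_suffix('}')) {
                // Parameter segment: reuse or create the single param child.
                let (_, child) = node
                    .param
                    .get_or_insert_with(|| (name.to_string(), Box::default()));
                node = &mut **child;
            } else {
                node = node.children.entry(seg.to_string()).or_default();
            }
        }
        node.route = Some(route);
    }

    /// O(path length) lookup; returns the route index and captured params.
    fn lookup(&self, path: &str) -> Option<(usize, Vec<(String, String)>)> {
        let mut node = &self.root;
        let mut params = Vec::new();
        for seg in path.split('/').filter(|s| !s.is_empty()) {
            if let Some(next) = node.children.get(seg) {
                node = next; // static match wins over parameter
            } else if let Some((name, child)) = &node.param {
                params.push((name.clone(), seg.to_string()));
                node = &**child;
            } else {
                return None;
            }
        }
        node.route.map(|r| (r, params))
    }
}
```

Note how `/users/me` and `/users/{id}` can coexist: the static `me` child is checked before the `{id}` slot, so no ordering rules are needed in the spec.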
barbacane-compiler
Compiles parsed specs into deployable artifacts.
Responsibilities:
- Validate dispatcher requirements (every operation needs dispatch)
- Detect routing conflicts (same path+method in multiple specs)
- Build routing trie
- Package into `.bca` archive
Artifact format (.bca):
artifact.bca (tar.gz)
├── manifest.json # Metadata, checksums, bundled plugins
├── routes.json # Compiled operations
├── specs/ # Embedded source specs
│ ├── api.yaml
│ └── ...
└── plugins/ # Bundled WASM plugins (optional)
├── rate-limit.wasm
└── ...
barbacane
Main CLI with three subcommands:
- `compile` - Compile specs to artifact
- `validate` - Validate specs without compilation
- `serve` - Run the gateway
barbacane (serve)
Data plane binary - the actual gateway.
Startup flow:
1. Load artifact from disk
2. Load compiled routes from artifact
3. Load bundled plugins from artifact
4. Compile WASM modules (AOT)
5. Resolve secrets - scan configs for `env://` and `file://` references
6. Create plugin instance pool with resolved secrets
7. Start HTTP server
If any secret cannot be resolved in step 5, the gateway exits with code 13.
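The secret-resolution step can be illustrated with a standalone sketch. This is not the gateway's actual resolver — just a minimal function showing the two reference schemes:

```rust
use std::{env, fs};

/// Resolve a single `env://` or `file://` secret reference.
/// Returns None when the reference cannot be resolved — the real
/// gateway treats an unresolvable secret as fatal (exit code 13).
fn resolve_secret(reference: &str) -> Option<String> {
    if let Some(var) = reference.strip_prefix("env://") {
        // env://NAME -> read from the process environment
        env::var(var).ok()
    } else if let Some(path) = reference.strip_prefix("file://") {
        // file:///path -> read file contents, trimming the trailing newline
        fs::read_to_string(path)
            .ok()
            .map(|s| s.trim_end().to_string())
    } else {
        None // not a recognized secret reference
    }
}
```

Resolving at startup (rather than per-request) means a misconfigured deployment fails immediately and visibly instead of serving 500s later.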
Request flow:
1. Receive HTTP request
2. Check reserved endpoints (`/__barbacane/*`)
3. Route lookup in trie
4. Apply middleware chain
5. Dispatch to handler
6. Apply response middlewares
7. Send response
barbacane-wasm
WASM plugin runtime built on wasmtime.
Key types:
- `WasmEngine` - Configured wasmtime engine with AOT compilation
- `InstancePool` - Instance pooling per (plugin_name, config_hash)
- `PluginInstance` - Single WASM instance with host function bindings
- `MiddlewareChain` - Ordered middleware execution
Host functions:
- `host_set_output` - Plugin writes result to host buffer
- `host_log` - Structured logging with trace context
- `host_context_get/set` - Per-request key-value store
- `host_clock_now` - Monotonic time in milliseconds
- `host_http_call` - Make outbound HTTP requests
- `host_http_read_result` - Read HTTP response data
- `host_get_secret` - Get a resolved secret by reference
- `host_secret_read_result` - Read secret value into plugin memory
- `host_kafka_publish` - Publish messages to Kafka topics
- `host_nats_publish` - Publish messages to NATS subjects
- `host_rate_limit_check` - Check rate limits
- `host_cache_read/write` - Read/write response cache
- `host_metric_counter_inc` - Increment Prometheus counter
- `host_metric_histogram_observe` - Record histogram observation
Resource limits:
- 16 MB linear memory
- 1 MB stack
- 100ms execution timeout (via fuel)
barbacane-plugin-sdk
SDK for developing WASM plugins (dispatchers and middlewares).
Provides:
- `Request`, `Response`, `Action` types
- `#[barbacane_middleware]` macro - generates WASM exports for middlewares
- `#[barbacane_dispatcher]` macro - generates WASM exports for dispatchers
- Host function FFI bindings
barbacane-plugin-macros
Proc macros for plugin development (used by barbacane-plugin-sdk).
Generates:
- `init(ptr, len) -> i32` - Initialize with JSON config
- `on_request(ptr, len) -> i32` - Process request (0=continue, 1=short-circuit)
- `on_response(ptr, len) -> i32` - Process response
- `dispatch(ptr, len) -> i32` - Handle request and return response
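For illustration, a hand-written (macro-free) export with this ABI shape might look like the following sketch. The pointer/length pair and the 0/1 return convention are the contract; the body is a stand-in that pretends the payload is just a UTF-8 path, which is not the real serialized request format:

```rust
/// Hand-rolled `on_request` export with the generated ABI shape.
/// The host passes a pointer/length pair into plugin linear memory;
/// here the payload is treated as a bare UTF-8 path for illustration.
#[no_mangle]
pub extern "C" fn on_request(ptr: *const u8, len: usize) -> i32 {
    // SAFETY: the host guarantees `ptr..ptr+len` is valid plugin memory.
    let bytes = unsafe { std::slice::from_raw_parts(ptr, len) };
    match std::str::from_utf8(bytes) {
        // 1 = short-circuit (e.g. block /admin paths), 0 = continue
        Ok(path) if path.starts_with("/admin") => 1,
        Ok(_) => 0,
        Err(_) => 1, // malformed input: short-circuit
    }
}
```

In practice the SDK macros generate this glue, so plugin authors work with typed `Request`/`Response` values instead of raw pointers.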
barbacane-test
Integration testing harness.
Key types:
- `TestGateway` - Spins up gateway with compiled artifact on random port
- Request helpers for easy HTTP testing
Request Lifecycle
┌──────────────────────────────────────────────────────────────────┐
│ Request Flow │
└──────────────────────────────────────────────────────────────────┘
Client Request
│
▼
┌───────────┐
│ Receive │ TCP accept, HTTP parse
└─────┬─────┘
│
▼
┌───────────┐
│ Reserved │ /__barbacane/* check
│ Endpoint │ (health, openapi, etc.)
└─────┬─────┘
│ Not reserved
▼
┌───────────┐
│ Route │ Trie lookup: path + method
│ Lookup │ Returns: Found / NotFound / MethodNotAllowed
└─────┬─────┘
│ Found
▼
┌───────────┐
│ Middleware│ Global middlewares
│ (Global) │ auth, rate-limit, cors, etc.
└─────┬─────┘
│
▼
┌───────────┐
│ Middleware│ Operation-specific middlewares
│ (Operation│ May override global config
└─────┬─────┘
│
▼
┌───────────┐
│ Dispatch │ mock, http, custom plugins
└─────┬─────┘
│
▼
┌───────────┐
│ Response │ Reverse middleware chain
│ Middleware│ Transform response
└─────┬─────┘
│
▼
┌───────────┐
│ Send │ HTTP response to client
└───────────┘
Plugin Architecture
Plugins are WebAssembly (WASM) modules that implement dispatchers or middlewares.
┌─────────────────────────────────────────────────────────┐
│ Plugin Contract │
├─────────────────────────────────────────────────────────┤
│ Middleware exports: │
│ - on_request(ctx) -> Continue | Respond | Error │
│ - on_response(ctx) -> Continue | Modify | Error │
│ │
│ Dispatcher exports: │
│ - dispatch(ctx) -> Response | Error │
│ │
│ Common: │
│ - init(config) -> Ok | Error │
├─────────────────────────────────────────────────────────┤
│ Host functions (provided by runtime): │
│ - http_call(req) -> Response │
│ - log(level, message) │
│ - get_secret(name) -> Value │
│ - context_get(key) -> Value │
│ - context_set(key, value) │
└─────────────────────────────────────────────────────────┘
Key Design Decisions
Compilation Model
Decision: Compile specs to artifacts at build time, not runtime.
Rationale:
- Fail fast: catch configuration errors before deployment
- Reproducible: artifact is immutable, version-controlled
- Fast startup: no parsing at runtime
- Secure: no spec files needed in production
Prefix Trie Routing
Decision: Use a prefix trie for routing instead of linear search.
Rationale:
- O(path length) lookup regardless of route count
- Natural handling of path parameters
- Easy static-over-param precedence
WASM Plugins
Decision: Use WebAssembly for plugin sandboxing.
Rationale:
- Language agnostic (Rust, Go, AssemblyScript, etc.)
- Secure sandbox (no filesystem, network without host functions)
- Near-native performance
- Portable across platforms
Embedded Specs
Decision: Embed source specs in the artifact.
Rationale:
- Self-documenting: `/__barbacane/specs` always works
- No external dependencies at runtime
- Version consistency
Testing Strategy
Unit Tests (per crate)
├── Parser: various OpenAPI versions, edge cases
├── Router: routing scenarios, parameters, precedence
└── Compiler: validation, conflict detection
Integration Tests (barbacane-test)
└── TestGateway: full request/response cycles
├── Health endpoint
├── Mock dispatcher
├── 404 / 405 handling
└── Path parameters
Run all tests:
cargo test --workspace
Performance Considerations
- Zero-copy routing: Trie lookup doesn’t allocate
- Connection reuse: HTTP/1.1 keep-alive by default
- Async I/O: Tokio runtime, non-blocking everything
- Plugin caching: WASM modules compiled once, instantiated per-request
Tech Debt
Schema composition not interpreted at compile time
`allOf`, `oneOf`, `anyOf`, and `discriminator` are stored as opaque JSON values. The `jsonschema` crate handles them correctly at runtime validation, but the compiler cannot analyze or optimize polymorphic schemas.
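For instance, a composed schema like the following (schema names are illustrative) is carried through to the artifact as-is and only interpreted by the runtime validator:

```yaml
components:
  schemas:
    Cat:
      allOf:                  # stored as opaque JSON by the compiler
        - $ref: '#/components/schemas/Pet'
        - type: object
          properties:
            livesLeft:
              type: integer
```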
Future Directions
- gRPC passthrough: Transparent proxying for gRPC services
- Hot reload: Reload artifacts without restart via control plane notifications
- Cluster mode: Distributed configuration across multiple nodes
Development Guide
This guide helps you set up a development environment for contributing to Barbacane.
Prerequisites
- Rust 1.75+ - Install via rustup
- Git - For version control
- Node.js 20+ - For the UI (if working on the web interface)
- PostgreSQL 14+ - For the control plane (or use Docker)
- Docker - For running PostgreSQL locally (optional)
Optional:
- cargo-watch - For auto-rebuild on file changes
- wasm32-unknown-unknown target - For building WASM plugins (`rustup target add wasm32-unknown-unknown`)
- tmux - For running multiple services in one terminal
Quick Start with Makefile
The easiest way to get started is using the Makefile:
# Start PostgreSQL in Docker
make db-up
# Build all WASM plugins and seed them into the database
make seed-plugins
# Start the control plane (port 9090)
make control-plane
# In another terminal, start the UI (port 5173)
make ui
Then open http://localhost:5173 in your browser.
Makefile Targets
| Target | Description |
|---|---|
| Build & Test | |
| `make` | Run check + test (default) |
| `make test` | Run all workspace tests |
| `make test-verbose` | Run tests with output |
| `make test-one TEST=name` | Run specific test |
| `make clippy` | Run clippy lints |
| `make fmt` | Format all code |
| `make check` | Run fmt-check + clippy |
| `make build` | Build debug |
| `make release` | Build release |
| `make plugins` | Build all WASM plugins |
| `make compile` | Compile spec to artifact.bca |
| `make seed-plugins` | Build plugins and seed registry |
| `make clean` | Clean all build artifacts |
| Development | |
| `make control-plane` | Start control plane server (port 9090) |
| `make ui` | Start UI dev server (port 5173) |
| `make dev` | Show instructions to start both |
| `make dev-tmux` | Start both in tmux session |
| Docker | |
| `make docker-build` | Build both Docker images |
| `make docker-build-gateway` | Build data plane image |
| `make docker-build-control` | Build control plane image |
| `make docker-up` | Start full stack (compose) |
| `make docker-down` | Stop full stack |
| `make docker-run` | Run data plane standalone |
| `make docker-run-control` | Run control plane only |
| Database | |
| `make db-up` | Start PostgreSQL container |
| `make db-down` | Stop PostgreSQL container |
| `make db-reset` | Reset database (removes all data) |
Override the database URL:
make control-plane DATABASE_URL=postgres://user:pass@host/db
Getting Started
Clone the Repository
git clone https://github.com/barbacane-dev/barbacane.git
cd barbacane
Build
# Build all crates
cargo build --workspace
# Build in release mode
cargo build --workspace --release
Test
# Run all tests
cargo test --workspace
# Run tests for a specific crate
cargo test -p barbacane-router
# Run tests with output
cargo test --workspace -- --nocapture
# Run a specific test
cargo test -p barbacane-router trie::tests::static_takes_precedence
Run
# Validate a spec
cargo run --bin barbacane -- validate --spec tests/fixtures/minimal.yaml
# Compile a spec
cargo run --bin barbacane -- compile --spec tests/fixtures/minimal.yaml --output test.bca
# Run the gateway
cargo run --bin barbacane -- serve --artifact test.bca --listen 127.0.0.1:8080 --dev
Project Structure
barbacane/
├── Cargo.toml # Workspace definition
├── Makefile # Development shortcuts
├── Dockerfile # Data plane container image
├── Dockerfile.control # Control plane container image
├── docker-compose.yml # PostgreSQL for local dev
├── docker/ # Docker support files
│ ├── nginx-control.conf # Nginx config for control plane UI
│ └── control-entrypoint.sh
├── LICENSE
├── CONTRIBUTING.md
├── README.md
│
├── crates/
│ ├── barbacane/ # Data plane CLI (compile, validate, serve)
│ │ ├── Cargo.toml
│ │ └── src/
│ │ └── main.rs
│ │
│ ├── barbacane-control/ # Control plane server
│ │ ├── Cargo.toml
│ │ ├── openapi.yaml # API specification
│ │ └── src/
│ │ ├── main.rs
│ │ ├── server.rs
│ │ └── db/
│ │
│ ├── barbacane-compiler/ # Compilation logic
│ │ ├── Cargo.toml
│ │ └── src/
│ │ ├── lib.rs
│ │ ├── artifact.rs
│ │ └── error.rs
│ │
│ ├── barbacane-spec-parser/ # OpenAPI/AsyncAPI parsing
│ │ ├── Cargo.toml
│ │ └── src/
│ │ ├── lib.rs
│ │ ├── openapi.rs
│ │ ├── asyncapi.rs
│ │ └── error.rs
│ │
│ ├── barbacane-router/ # Request routing
│ │ ├── Cargo.toml
│ │ └── src/
│ │ ├── lib.rs
│ │ └── trie.rs
│ │
│ ├── barbacane-plugin-sdk/ # Plugin development SDK
│ │ ├── Cargo.toml
│ │ └── src/
│ │ └── lib.rs
│ │
│ └── barbacane-test/ # Test harness
│ ├── Cargo.toml
│ └── src/
│ ├── lib.rs
│ └── gateway.rs
│
├── plugins/ # Built-in WASM plugins
│ ├── http-upstream/ # HTTP reverse proxy dispatcher
│ ├── mock/ # Mock response dispatcher
│ ├── lambda/ # AWS Lambda dispatcher
│ ├── kafka/ # Kafka dispatcher (AsyncAPI)
│ ├── nats/ # NATS dispatcher (AsyncAPI)
│ ├── rate-limit/ # Rate limiting middleware
│ ├── cors/ # CORS middleware
│ ├── cache/ # Caching middleware
│ ├── jwt-auth/ # JWT authentication
│ ├── apikey-auth/ # API key authentication
│ └── oauth2-auth/ # OAuth2 token introspection
│
├── ui/ # React web interface
│ ├── package.json
│ ├── vite.config.ts
│ └── src/
│ ├── pages/ # Page components
│ ├── components/ # UI components
│ ├── hooks/ # Custom React hooks
│ └── lib/ # API client, utilities
│
├── tests/
│ └── fixtures/ # Test spec files
│ ├── minimal.yaml
│ ├── train-travel-3.0.yaml
│ └── ...
│
├── docs/ # Documentation
│ ├── index.md
│ ├── guide/
│ ├── reference/
│ └── contributing/
│
└── adr/ # Architecture Decision Records
├── 0001-*.md
└── ...
Development Workflow
Making Changes
1. Create a branch:
git checkout -b feature/my-feature
2. Make changes and test:
cargo test --workspace
3. Format code:
cargo fmt --all
4. Check lints:
cargo clippy --workspace -- -D warnings
5. Commit:
git commit -m "feat: add my feature"
Commit Messages
Follow Conventional Commits:
- `feat:` - New feature
- `fix:` - Bug fix
- `docs:` - Documentation
- `refactor:` - Code refactoring
- `test:` - Adding tests
- `chore:` - Maintenance
Examples:
feat: add cache middleware
fix: handle empty path in router
docs: add middleware configuration guide
refactor: extract trie traversal logic
test: add integration tests for 405 responses
Adding a New Crate
-
Create the crate directory:
mkdir -p crates/barbacane-mycrate/src -
Create
Cargo.toml:[package] name = "barbacane-mycrate" description = "Description here" version.workspace = true edition.workspace = true license.workspace = true [dependencies] # Use workspace dependencies serde = { workspace = true } -
Add to workspace in root
Cargo.toml:[workspace] members = [ # ...existing crates... "crates/barbacane-mycrate", ] [workspace.dependencies] barbacane-mycrate = { path = "crates/barbacane-mycrate" } -
Create
src/lib.rs://! Brief description of the crate. //! //! More detailed explanation.
Testing
Unit Tests
Place unit tests in the same file:
// src/parser.rs
pub fn parse(input: &str) -> Result<Spec, Error> {
// implementation
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn parse_minimal() {
let result = parse("openapi: 3.1.0...");
assert!(result.is_ok());
}
}
Integration Tests
Use barbacane-test crate for full-stack tests:
use barbacane_test::TestGateway;
#[tokio::test]
async fn test_my_feature() {
let gateway = TestGateway::from_spec("tests/fixtures/my-fixture.yaml")
.await
.expect("failed to start gateway");
let resp = gateway.get("/my-endpoint").await.unwrap();
assert_eq!(resp.status(), 200);
}
Test Fixtures
Add test spec files to tests/fixtures/:
# tests/fixtures/my-feature.yaml
openapi: "3.1.0"
info:
title: Test
version: "1.0.0"
paths:
/test:
get:
x-barbacane-dispatch:
name: mock
config:
status: 200
UI Development
The web interface is a React application in the ui/ directory.
Setup
cd ui
npm install
Development Server
# Using Makefile (from project root)
make ui
# Or manually
cd ui && npm run dev
The UI runs at http://localhost:5173 and proxies API requests to the control plane at http://localhost:9090.
Testing
cd ui
npm run test # Run tests once
npm run test:watch # Watch mode
Key Directories
ui/src/
├── pages/ # Page components (ProjectPluginsPage, etc.)
├── components/ # Reusable UI components
│ └── ui/ # Base components (Button, Card, Badge)
├── hooks/ # Custom hooks (useJsonSchema, usePlugins)
├── lib/
│ ├── api/ # API client (types, requests)
│ └── utils.ts # Utilities (cn, formatters)
└── App.tsx # Main app with routing
Adding a New Page
1. Create page component in src/pages/:
// src/pages/my-feature.tsx
export function MyFeaturePage() {
  return <div>...</div>
}
2. Add route in src/App.tsx:
<Route path="/my-feature" element={<MyFeaturePage />} />
API Client
Use the typed API client from @/lib/api:
import { useQuery } from '@tanstack/react-query'
import { listPlugins } from '@/lib/api'
function MyComponent() {
const { data: plugins, isLoading } = useQuery({
queryKey: ['plugins'],
queryFn: () => listPlugins(),
})
// ...
}
Debugging
Logging
Use eprintln! for development logging:
if cfg!(debug_assertions) {
eprintln!("debug: processing request to {}", path);
}
Running with Verbose Output
# Gateway with dev mode
cargo run --bin barbacane -- serve --artifact test.bca --dev
# Compile with output
cargo run --bin barbacane -- compile --spec api.yaml --output api.bca
Integration Test Debugging
# Run single test with output
cargo test -p barbacane-test test_gateway_health -- --nocapture
Performance Profiling
Benchmarks
Criterion benchmarks are available for performance-critical components:
# Run all benchmarks
cargo bench --workspace
# Run router benchmarks (trie lookup and insertion)
cargo bench -p barbacane-router
# Run validator benchmarks (schema validation)
cargo bench -p barbacane-validator
Router benchmarks (crates/barbacane-router/benches/routing.rs):
- `router_lookup` - Measures lookup performance for static paths, parameterized paths, and not-found cases
- `router_insert` - Measures route insertion performance at various route counts (10-1000 routes)
Validator benchmarks (crates/barbacane-validator/benches/validation.rs):
- `validator_creation` - Measures schema compilation time
- `path_param_validation` - Validates path parameters against schemas
- `query_param_validation` - Validates query parameters
- `body_validation` - Validates JSON request bodies
- `full_request_validation` - End-to-end request validation
Benchmark results are saved to target/criterion/ with HTML reports.
Flamegraph
cargo install flamegraph
cargo flamegraph -p barbacane -- --artifact test.bca
Documentation
Doc Comments
All public APIs should have doc comments:
/// Parse an OpenAPI specification from a string.
///
/// # Arguments
///
/// * `input` - YAML or JSON string containing the spec
///
/// # Returns
///
/// Parsed `ApiSpec` or error if parsing fails.
///
/// # Example
///
/// ```
/// let spec = parse_spec("openapi: 3.1.0...")?;
/// println!("Found {} operations", spec.operations.len());
/// ```
pub fn parse_spec(input: &str) -> Result<ApiSpec, ParseError> {
// ...
}
Generate Docs
cargo doc --workspace --open
Release Process
1. Update version in workspace Cargo.toml
2. Update CHANGELOG.md
3. Create git tag: `git tag v0.1.0`
4. Push: `git push origin main --tags`
5. CI builds and publishes
Getting Help
- Open an issue on GitHub
- Check existing ADRs for design decisions
- Read the architecture docs
Plugin Development Guide
This guide explains how to create WASM plugins for Barbacane.
Overview
Barbacane plugins are WebAssembly (WASM) modules that extend gateway functionality. There are two types:
| Type | Purpose | Exports |
|---|---|---|
| Middleware | Process requests/responses in a chain | init, on_request, on_response |
| Dispatcher | Handle requests and generate responses | init, dispatch |
Prerequisites
- Rust stable with the `wasm32-unknown-unknown` target
- The `barbacane-plugin-sdk` crate
# Add the WASM target
rustup target add wasm32-unknown-unknown
Quick Start
1. Create a New Plugin
cargo new --lib my-plugin
cd my-plugin
2. Configure Cargo.toml
[package]
name = "my-plugin"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["cdylib"]
[dependencies]
barbacane-plugin-sdk = { path = "../path/to/barbacane/crates/barbacane-plugin-sdk" }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
3. Write the Plugin
Middleware example:
use barbacane_plugin_sdk::prelude::*;
use serde::Deserialize;
#[barbacane_middleware]
#[derive(Deserialize)]
pub struct MyMiddleware {
// Configuration fields from the spec
header_name: String,
header_value: String,
}
impl MyMiddleware {
pub fn on_request(&mut self, req: Request) -> Action {
// Add a header to the request
let mut req = req;
req.headers.insert(
self.header_name.clone(),
self.header_value.clone(),
);
Action::Continue(req)
}
pub fn on_response(&mut self, resp: Response) -> Response {
// Pass through unchanged
resp
}
}
Dispatcher example:
use barbacane_plugin_sdk::prelude::*;
use serde::Deserialize;
#[barbacane_dispatcher]
#[derive(Deserialize)]
pub struct MyDispatcher {
status: u16,
body: String,
}
impl MyDispatcher {
pub fn dispatch(&mut self, _req: Request) -> Response {
Response::text(self.status, Default::default(), &self.body)
}
}
4. Create plugin.toml
[plugin]
name = "my-plugin"
version = "0.1.0"
type = "middleware" # or "dispatcher"
description = "My custom plugin"
wasm = "my_plugin.wasm"
[capabilities]
host_functions = ["log"]
5. Create config-schema.json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"type": "object",
"required": ["header_name", "header_value"],
"properties": {
"header_name": {
"type": "string",
"description": "Header name to add"
},
"header_value": {
"type": "string",
"description": "Header value to set"
}
}
}
6. Build
cargo build --target wasm32-unknown-unknown --release
cp target/wasm32-unknown-unknown/release/my_plugin.wasm .
Plugin SDK Types
Request
pub struct Request {
pub method: String,
pub path: String,
pub query: Option<String>,
pub headers: BTreeMap<String, String>,
pub body: Option<Vec<u8>>, // binary-safe, travels via side-channel
pub client_ip: String,
pub path_params: BTreeMap<String, String>,
}
Helper methods: body_str() -> Option<&str>, body_string() -> Option<String>, set_body_text(&str).
Response
pub struct Response {
pub status: u16,
pub headers: BTreeMap<String, String>,
pub body: Option<Vec<u8>>, // binary-safe, travels via side-channel
}
Helper methods: body_str() -> Option<&str>, set_body_text(&str), Response::text(status, headers, &str).
Note: Bodies travel as raw bytes via side-channel host functions (host_body_read/host_body_set), not embedded in JSON. The proc macros handle this transparently — plugin authors just read and write request.body/response.body as Option<Vec<u8>>. This design (matching proxy-wasm and http-wasm) avoids the ~3.65× memory overhead of base64 encoding, allowing 10MB+ bodies within the default 16MB WASM memory limit.
Action (Middleware only)
pub enum Action {
    /// Continue to next middleware/dispatcher with (possibly modified) request
    Continue(Request),
    /// Short-circuit the chain and return this response immediately
    Respond(Response),
}
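To show how Respond short-circuits the chain, here is a hypothetical auth-gate middleware written against local stand-in types so it compiles on its own (a real plugin would use the SDK's Request/Response/Action; the X-Api-Key check is illustrative):

```rust
use std::collections::BTreeMap;

// Local stand-ins for the SDK's Request/Response/Action types.
#[derive(Default)]
pub struct Request {
    pub headers: BTreeMap<String, String>,
}

pub struct Response {
    pub status: u16,
    pub body: String,
}

pub enum Action {
    Continue(Request),
    Respond(Response),
}

// Hypothetical auth gate: short-circuit with a 401 when X-Api-Key is
// missing, otherwise hand the request to the next middleware/dispatcher.
pub fn on_request(req: Request) -> Action {
    if req.headers.contains_key("X-Api-Key") {
        Action::Continue(req)
    } else {
        Action::Respond(Response {
            status: 401,
            body: "missing API key".into(),
        })
    }
}

fn main() {
    match on_request(Request::default()) {
        Action::Respond(resp) => println!("short-circuited: {}", resp.status),
        Action::Continue(_) => println!("continued"),
    }
}
```

When a middleware returns Respond, later middlewares and the dispatcher never run for that request.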
Host Functions
Plugins can call host functions to access gateway capabilities. Declare required capabilities in plugin.toml:
Logging
[capabilities]
host_functions = ["log"]
use barbacane_plugin_sdk::host;
host::log("info", "Processing request");
host::log("error", "Something went wrong");
Context (per-request key-value store)
[capabilities]
host_functions = ["context_get", "context_set"]
use barbacane_plugin_sdk::host;

// Set a value for downstream middleware/dispatcher
host::context_set("user_id", "12345");

// Get a value set by upstream middleware
if let Some(value) = host::context_get("auth_token") {
    // use value
}
Clock
[capabilities]
host_functions = ["clock_now"]
use barbacane_plugin_sdk::host;
let timestamp_ms = host::clock_now();
Secrets
[capabilities]
host_functions = ["get_secret"]
use barbacane_plugin_sdk::host;

// Secrets are resolved at gateway startup from env:// or file:// references
if let Some(api_key) = host::get_secret("api_key") {
    // use api_key
}
HTTP Calls (Dispatcher only)
[capabilities]
host_functions = ["http_call"]
use barbacane_plugin_sdk::prelude::*;
use serde::Serialize;
// HTTP request struct — body travels via side-channel, not in JSON.
#[derive(Serialize)]
struct HttpRequest {
    method: String,
    url: String,
    headers: BTreeMap<String, String>,
    timeout_ms: Option<u64>,
}

// Optionally set request body via side-channel:
// barbacane_plugin_sdk::body::set_http_request_body(b"request body");

// Serialize and call:
let req = HttpRequest {
    method: "GET".into(),
    url: "https://api.example.com".into(),
    headers: BTreeMap::new(),
    timeout_ms: Some(5000),
};
let json = serde_json::to_vec(&req).unwrap();
unsafe { host_http_call(json.as_ptr() as i32, json.len() as i32); }
// Read response body via side-channel:
// let body = barbacane_plugin_sdk::body::read_http_response_body();
Using Plugins in Specs
Declare in barbacane.yaml
Plugins can be sourced from a local path or a remote URL:
plugins:
  # Local path (development)
  my-plugin:
    path: ./plugins/my_plugin.wasm

  # Remote URL (production, CI/CD)
  jwt-auth:
    url: https://github.com/barbacane-dev/barbacane/releases/download/v0.5.1/jwt-auth.wasm
    sha256: abc123...  # optional integrity check
Remote plugins are downloaded at compile time and cached at ~/.barbacane/cache/plugins/. Use --no-cache to bypass the cache entirely (re-download without caching).
Use in OpenAPI spec
As middleware:
paths:
  /users:
    get:
      x-barbacane-middlewares:
        - name: my-plugin
          config:
            header_name: "X-Custom"
            header_value: "hello"
As dispatcher:
paths:
  /mock:
    get:
      x-barbacane-dispatch:
        name: my-plugin
        config:
          status: 200
          body: '{"message": "Hello"}'
Testing Plugins
Unit Testing
Test your plugin logic directly:
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_adds_header() {
        let mut plugin = MyMiddleware {
            header_name: "X-Test".to_string(),
            header_value: "value".to_string(),
        };
        let req = Request {
            method: "GET".to_string(),
            path: "/test".to_string(),
            headers: Default::default(),
            ..Default::default()
        };
        let action = plugin.on_request(req);
        match action {
            Action::Continue(req) => {
                assert_eq!(req.headers.get("X-Test"), Some(&"value".to_string()));
            }
            _ => panic!("Expected Continue"),
        }
    }
}
Integration Testing
Use fixture specs with barbacane-test:
use barbacane_test::TestGateway;

#[tokio::test]
async fn test_my_plugin() {
    let gw = TestGateway::from_spec("tests/fixtures/my-plugin-test.yaml")
        .await
        .unwrap();

    let resp = gw.get("/test").await.unwrap();
    assert_eq!(resp.status(), 200);
    assert_eq!(resp.headers().get("X-Test"), Some("value"));
}
Official Plugins
Barbacane includes these official plugins in the plugins/ directory:
| Plugin | Type | Description |
|---|---|---|
| mock | Dispatcher | Return static responses |
| http-upstream | Dispatcher | Reverse proxy to HTTP backends |
| lambda | Dispatcher | Invoke AWS Lambda functions |
| kafka | Dispatcher | Publish messages to Kafka |
| nats | Dispatcher | Publish messages to NATS |
| s3 | Dispatcher | S3 / S3-compatible object storage proxy with SigV4 signing |
| jwt-auth | Middleware | JWT token validation |
| apikey-auth | Middleware | API key authentication |
| oauth2-auth | Middleware | OAuth2 token introspection |
| rate-limit | Middleware | Sliding window rate limiting |
| cache | Middleware | Response caching |
| cors | Middleware | CORS header management |
| correlation-id | Middleware | Request correlation ID propagation |
| request-size-limit | Middleware | Request body size limits |
| ip-restriction | Middleware | IP allowlist/blocklist |
| bot-detection | Middleware | Block bots by User-Agent pattern |
| observability | Middleware | SLO monitoring and detailed logging |
Use these as references for your own plugins.
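Official middlewares go in the same x-barbacane-middlewares list as custom plugins. A hedged sketch, assuming middlewares run in list order and with config bodies left empty (each plugin's real keys live in its config-schema.json):

```yaml
paths:
  /orders:
    get:
      x-barbacane-middlewares:
        - name: jwt-auth    # authenticate first (illustrative ordering)
          config: {}        # real keys: see the plugin's config-schema.json
        - name: rate-limit  # then rate-limit authenticated traffic
          config: {}
```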
Best Practices
- Keep plugins focused - One plugin, one responsibility
- Validate configuration - Use JSON Schema to catch config errors at compile time
- Handle errors gracefully - Return appropriate error responses, don’t panic
- Document capabilities - Only declare host functions you actually use
- Test thoroughly - Unit test logic, integration test with the gateway
- Use semantic versioning - Follow semver for plugin versions
Resource Limits
Plugins run in a sandboxed WASM environment with these limits:
| Resource | Limit |
|---|---|
| Linear memory | max(16 MB, max_body_size + 4 MB) |
| Stack size | 1 MB |
| Execution time | 100 ms |
WASM memory scales automatically with the configured max_body_size. Exceeding any of these limits causes a trap, which surfaces as a 500 error in the request phase and is handled fault-tolerantly in the response phase.
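The linear-memory rule from the table can be expressed directly; a small sketch of the formula (the function name and units are illustrative, the constants come from the table above):

```rust
// Sketch of the linear-memory rule: limit = max(16 MB, max_body_size + 4 MB).
const MB: u64 = 1024 * 1024;

fn memory_limit(max_body_size: u64) -> u64 {
    (16 * MB).max(max_body_size + 4 * MB)
}

fn main() {
    // Small bodies stay on the 16 MB floor; large bodies get body + 4 MB headroom.
    println!("1 MB body  -> {} MB limit", memory_limit(MB) / MB);      // 16
    println!("32 MB body -> {} MB limit", memory_limit(32 * MB) / MB); // 36
}
```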
Troubleshooting
Plugin not found
Ensure the plugin is declared in barbacane.yaml and the WASM file exists at the specified path.
Config validation failed
Check that your plugin’s configuration in the OpenAPI spec matches the JSON Schema in config-schema.json.
WASM trap
Your plugin exceeded resource limits or panicked. Check logs for details. Common causes:
- Infinite loops
- Large memory allocations
- Unhandled errors causing panic
Unknown capability
You’re using a host function not declared in plugin.toml. Add it to capabilities.host_functions.
Distributing Plugins
GitHub Releases
The recommended way to distribute plugins is as GitHub release assets. Upload both the .wasm binary and plugin.toml alongside your release:
my-plugin.wasm
my-plugin.plugin.toml
Generate checksums for integrity verification:
sha256sum my-plugin.wasm > checksums.txt
Users reference your plugin by URL in their barbacane.yaml:
plugins:
  my-plugin:
    url: https://github.com/your-org/my-plugin/releases/download/v1.0.0/my-plugin.wasm
    sha256: <from checksums.txt>
Official Plugins
All official Barbacane plugins are published as release assets on every tagged release. Pre-built .wasm files and checksums (plugin-checksums.txt) are available at:
https://github.com/barbacane-dev/barbacane/releases/download/v<VERSION>/<plugin-name>.wasm
Plugin Metadata Discovery
When resolving a url: source, the compiler attempts to fetch plugin.toml from sibling URLs to extract version and type metadata:
- <name>.plugin.toml (same directory as the .wasm)
- plugin.toml (parent directory)
If neither is found, the plugin still works but without version/type metadata in the artifact manifest.