Local Flow Development

Develop and test BtScript flows locally using flowctl and the dev kit, before deploying to Beacon Tower.

This guide walks you through setting up a local development environment, writing flows, testing with real and simulated telemetry, and iterating rapidly.

Prerequisites

The dev kit is self-contained; the only prerequisite is Docker, which the setup script uses (via Docker Compose) to run the local stack. No .NET SDK or other toolchain is required for flow development.

Dev Kit Setup

The dev kit is a self-contained package that includes both the flowctl CLI and the btscript compiler, along with Docker Compose configuration and everything needed to run the full stack locally.

Updated versions of flowctl and btscript can be downloaded individually from github.com/beacontower/tools/releases.

1. Download and Extract

Download the latest dev kit from github.com/beacontower/tools/releases.

unzip flowctl-devkit-*.zip
cd kit-*

2. Run Setup

# Mac / Linux
./setup.sh

# Windows (PowerShell)
.\setup.bat

The setup script:

  • Pulls container images from ghcr.io/beacontower/
  • Starts all services (PostgreSQL, NATS, Orleans silo, Flow Service, Redis)
  • Installs flowctl and btscript to ~/.local/bin/
  • Creates ~/.flowctl/config.yaml with default local settings

3. Verify

flowctl doctor

Expected output:

┌───────────────┬────────┬─────────────────────────────────┐
│ Check         │ Status │ Details                         │
├───────────────┼────────┼─────────────────────────────────┤
│ Configuration │ Ok     │ Environment: local              │
│ Flow Service  │ Ok     │ http://localhost:5000 - healthy │
│ NATS          │ Ok     │ nats://localhost:4222 - healthy │
│ Docker        │ Ok     │ Docker '...'                    │
└───────────────┴────────┴─────────────────────────────────┘

Also verify btscript:

btscript --version

Local Services

The dev kit starts the following services:

Service                Port                     Description
Flow Service           localhost:5000           REST API for artifacts and instances
Swagger UI             localhost:5000/swagger   Interactive API docs
Orleans Silo           localhost:8080           Flow execution engine
NATS                   localhost:4222           Messaging backbone (JetStream)
NATS Monitor           localhost:8222           NATS HTTP monitoring
PostgreSQL (Orleans)   localhost:5432           Orleans cluster state
PostgreSQL (Flows)     localhost:5433           Flow artifact metadata
Redis                  localhost:6379           Caching layer

Data Flow Architecture

Understanding the end-to-end data pipeline helps when debugging and monitoring flows:

Telemetry Source
   ↓
NATS: telemetry.{providerId}::{assetId}
   ↓
IngressProcessor (NATS consumer)
   ↓
Orleans Stream: flowctl::{assetId}.{signal}-out
   ↓
TelemetryGrain (signal-level grain)
   ↓
Orleans Stream: {assetId}.{signal}-out
   ↓
FlowGrain (flow instance)
   ↓
Flow executes → emits side effects
   ↓
NATS: events.{flowName}.{instanceId}.{emitName}
      events.{flowName}.{instanceId}.output.{signalName}

Key stages:

  1. NATS Telemetry — Raw telemetry arrives on telemetry.{providerId}::{assetId}
  2. IngressProcessor — Splits multi-signal messages into atoms and publishes to Orleans streams
  3. TelemetryGrain — One grain per asset signal, applies transforms from asset bindings
  4. FlowGrain — Subscribes to input-mapped signals, executes flow logic
  5. Output Streams — Flow emits to NATS events and output streams

When you run flowctl instance send, the message enters at the telemetry stage and flows through the entire pipeline.
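
For example, with the defaults used in this guide (provider flowctl, asset dev-asset-001, and the temp-conversion flow built later as instance dev-1), a single value travels through subjects assembled from the patterns above, roughly:

flowctl instance send dev-1 temperature 72
  → NATS:    telemetry.flowctl::dev-asset-001
  → Orleans: flowctl::dev-asset-001.temperature-out, then dev-asset-001.temperature-out
  → NATS:    events.temp-conversion.dev-1.celsius-output   (if the flow emits)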

Managing Services

# Check service health
flowctl env status

# Start services
flowctl env start

# Stop services (preserves data volumes)
flowctl env stop

# Reset everything (DESTRUCTIVE — deletes all data, instances, artifacts)
flowctl env reset --force

# Re-run JetStream stream setup
flowctl env bootstrap

# View service logs
flowctl env logs
flowctl env logs flow-service

flowctl env status shows per-service health:

┌──────────────────┬───────────┬──────────────────────────────────────┐
│ Service          │ Status    │ Details                              │
├──────────────────┼───────────┼──────────────────────────────────────┤
│ flow-service     │ ✓ Healthy │ HTTP 200 from http://localhost:5000  │
│ silo             │ ✓ Healthy │ HTTP 200 from http://localhost:8080  │
│ nats             │ ✓ Healthy │ NATS with JetStream enabled          │
│ postgres-orleans │ ✓ Healthy │ Connected successfully               │
│ postgres-flows   │ ✓ Healthy │ Connected successfully               │
│ redis            │ ✓ Healthy │ PONG received                        │
└──────────────────┴───────────┴──────────────────────────────────────┘

Project Structure

Create a new flow project:

mkdir my-project && cd my-project
flowctl project init

This generates:

my-project/
├── flowproject.yaml         # Project manifest
├── flows/
│   └── example.bts          # Sample flow
├── tasks/
├── functions/
├── models/
│   └── generic.model.yaml
└── tests/

flowproject.yaml

name: my-project
version: 1.0.0

artifacts:
  flows: flows/*
  tasks: tasks/*
  functions: functions/*

dev:
  active_flow: flows/example.bts
  instance_id: dev-1
  asset_id: dev-asset-001
  provider_id: flowctl

Key fields:

  • name — Project name, used in BTI identifiers (e.g. btf:example;1.0.0)
  • version — Default artifact version
  • dev.active_flow — Flow used by flowctl dev mode
  • dev.instance_id — Default instance ID for development
  • dev.asset_id — Default asset to bind to

Development Workflow

Step 1: Write Your Flow

Edit flows/example.bts:

(flow id: temp-conversion
  (inputs
    (fahrenheit "dev-asset-001.temperature"))
  (trigger on-any: fahrenheit)
  (let ((celsius (/ (* (- fahrenheit 32) 5) 9)))
    (emit celsius-output value: celsius)))

This flow:

  1. Binds the telemetry signal dev-asset-001.temperature to fahrenheit
  2. Triggers whenever fahrenheit receives a new value
  3. Converts Fahrenheit to Celsius
  4. Emits the result as celsius-output
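
For example, an input of fahrenheit = 212 produces celsius = (212 - 32) * 5 / 9 = 100, an input of 32 produces 0, and 98.6 produces 37.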

Step 2: Build

flowctl project build

Compiles all .bts files using the btscript compiler. Output .cs files are cached in .flowctl/build/.

You can also compile manually:

btscript compile flows/example.bts > build/example.cs

Step 3: Upload

Upload compiled artifacts to the local Flow Service:

flowctl project upload

Or upload individually:

flowctl artifact upload build/example.cs

Verify the upload:

flowctl artifact list

Step 4: Initialize an Asset

Before creating an instance, the asset must be initialized. Use a DTDL model file or the generic model:

# With a model (validates signal schema)
flowctl asset init dev-asset-001 --model models/pump.model.yaml

# Generic model (accepts any signals — good for development)
flowctl asset init dev-asset-001 --generic

Verify:

flowctl asset get dev-asset-001

Step 5: Create an Instance

Create a runtime instance of the flow, bound to the asset:

flowctl instance create dev-1 \
  --bti "btf:example;1.0.0" \
  --asset dev-asset-001

This:

  • Creates a FlowGrain (Orleans virtual actor) in the silo
  • Subscribes the flow to the asset's telemetry streams
  • Begins processing immediately

Input Mapping: When --input is omitted, flowctl automatically reads the asset's endpoints and derives input mappings in the format {signal} = flowctl::{assetId}.{signal}. For custom mappings, use --input signal=custom.orleans.stream.subject
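
For example, the auto-derived mapping for this guide's asset would be temperature = flowctl::dev-asset-001.temperature; spelling it out explicitly (only needed when the stream subject is non-standard) looks like this:

flowctl instance create dev-1 \
  --bti "btf:example;1.0.0" \
  --asset dev-asset-001 \
  --input temperature=flowctl::dev-asset-001.temperature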

Verify:

flowctl instance list
flowctl instance get dev-1

Step 6: Send Telemetry

flowctl instance send dev-1 temperature 212
flowctl instance send dev-1 temperature 32
flowctl instance send dev-1 temperature 98.6

Each send publishes a NATS message through the telemetry pipeline. The signal name (temperature) must match an input binding in the flow.

Step 7: Observe Output

Watch all flow streams in real time:

flowctl instance watch dev-1

This subscribes to four NATS streams simultaneously:

Stream               Color     NATS subject
Output (telemetry)   green     output.telemetry.{assetId}
Events               yellow    events.*.{instanceId}.>
Alarms               red       alarms.*.{instanceId}.>
Notifications        magenta   notifications.*.{instanceId}.>

Filter to specific streams with --*-only flags (composable):

flowctl instance watch dev-1 --events-only                 # just events
flowctl instance watch dev-1 --alarms-only                 # just alarms
flowctl instance watch dev-1 --events-only --alarms-only   # events + alarms

Or subscribe to a NATS stream directly:

flowctl nats sub "events.>"

Press Ctrl+C to stop watching.

Step 8: Check Instance State

Inspect execution statistics and input values:

flowctl instance state dev-1

Step 9: Clean Up

flowctl instance delete dev-1 --force
flowctl asset remove dev-asset-001
flowctl artifact delete "btf:example;1.0.0" --force

The Inner Dev Loop

The core development cycle is:

Edit .bts → build → upload → send data → observe output → repeat

For rapid iteration:

# Build and upload in one step
flowctl project build && flowctl project upload

# Send test data
flowctl instance send dev-1 temperature 75

# Watch output
flowctl instance watch dev-1

Dev Mode (Watch)

For the fastest iteration, use dev mode:

flowctl dev --watch

This:

  1. Builds all flows
  2. Uploads artifacts
  3. Creates or migrates the dev instance (from flowproject.yaml)
  4. Tails output
  5. Watches for file changes and repeats

Edit a .bts file, save, and the flow automatically rebuilds and redeploys. Stop with Ctrl+C.

Without --watch, it runs the cycle once:

flowctl dev

Pushing and Feeding Data

Send Individual Values

The simplest way to push data to a running instance:

flowctl instance send dev-1 temperature 85.5
flowctl instance send dev-1 pressure 150.0

Generate Synthetic Data

Generate test patterns for a signal:

# Sine wave: 50 data points
flowctl feed --generate sine \
  --asset dev-asset-001 \
  --signal temperature \
  --count 50

# Random values between 0 and 100
flowctl feed --generate random \
  --asset dev-asset-001 \
  --signal pressure \
  --count 100 \
  --min 0 --max 100

# Sawtooth wave
flowctl feed --generate sawtooth \
  --asset dev-asset-001 \
  --signal vibration \
  --count 200

Available generators: sine, sawtooth, random, step, ramp.

Generator options:

  • --count — Number of data points (default: 100)
  • --min, --max — Value range
  • --amplitude, --offset — For sine waves
  • --period — Oscillation period
  • --interval — Milliseconds between values
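
For example, a slow sine wave that combines several of these options (the specific values are only illustrative):

flowctl feed --generate sine \
  --asset dev-asset-001 \
  --signal temperature \
  --count 120 \
  --amplitude 5 --offset 20 \
  --period 60 --interval 500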

Feed from JSONL File

For scripted scenarios, create a .jsonl file:

{"assetId": "dev-asset-001", "signal": "temperature", "value": 85.5}
{"assetId": "dev-asset-001", "signal": "temperature", "value": 82.1, "delay": "100ms"}
{"assetId": "dev-asset-001", "signal": "temperature", "value": 79.8, "delay": "100ms"}
{"assetId": "dev-asset-001", "signal": "pressure", "value": 150.0, "delay": "50ms"}

Feed it:

flowctl feed data.jsonl

# 10x speed
flowctl feed data.jsonl --speed 10x

# Dry run (show what would be sent)
flowctl feed data.jsonl --dry-run

Telemetry Data Format

Every telemetry message published to NATS requires a specific subject, headers, and JSON payload. The flowctl feed and instance send commands handle this automatically, but understanding the format is important for debugging and raw NATS testing.

Subject: telemetry.{providerId}::{assetId} (e.g., telemetry.flowctl::pump-042)

Required headers:

Header             Value
messageType        ProviderClientTelemetry
providerId         Provider identifier (default: flowctl)
providerClientId   Asset/device identifier

Payload: Flat JSON object with signal names as keys:

{"motor_temp": 85.5, "inlet_pressure": 101.3}

Multi-signal payloads are split into individual atoms by the IngressProcessor. Signal names must match the asset's model telemetry definitions. The payload can also include a "timestamp" field (ISO 8601) which the parser uses as the telemetry timestamp — it takes precedence over the enqueuedTime header.
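
For example, a payload that supplies its own timestamp:

{"motor_temp": 85.5, "inlet_pressure": 101.3, "timestamp": "2026-02-09T10:00:00Z"}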

Full reference: See Telemetry Ingestion for the complete message format, message types, JSONL feed formats, parsers, and validation rules.

Publish Raw NATS Messages

For low-level testing, publish directly to NATS:

flowctl nats pub "telemetry.flowctl::dev-asset-001" \
  '{"temperature": 85.5}' \
  --header "providerId=flowctl" \
  --header "providerClientId=dev-asset-001" \
  --header "messageType=ProviderClientTelemetry"

The NATS subject format for telemetry is telemetry.<providerId>::<providerClientId>.

Replaying Data

Replay Instance History

Re-process historical messages for an instance:

flowctl replay dev-1 --from 2026-02-09T10:00:00Z --take 100

# Replay at 10x speed
flowctl replay dev-1 --from 1 --take 50 --speed 10x

Replay NATS Stream Messages

Replay raw messages from a NATS stream:

# Replay last 10 telemetry messages
flowctl nats replay "telemetry.>" --from 1 --take 10

# Replay events from a specific sequence
flowctl nats replay "events.>" --from 100 --take 50

Monitoring Output

Watch Instance Streams

flowctl instance watch is the primary way to monitor what a flow produces; it subscribes to all four output streams:

flowctl instance watch dev-1

Example output (color-coded by stream type):

[14:32:01] OUTPUT   dev-asset.celsius = 25.0
[14:32:01] EVENT    temperature_alert
           {"level":"warning","value":25.0}
[14:32:02] ALARM    overtemp
           {"threshold":100,"actual":25.0}
[14:32:02] NOTIFY   status_change
           {"from":"idle","to":"active"}

Filter to specific streams with --*-only flags (composable):

flowctl instance watch dev-1 --events-only                 # just events
flowctl instance watch dev-1 --alarms-only                 # just alarms
flowctl instance watch dev-1 --alarms-only --events-only   # alarms + events
flowctl instance watch dev-1 --notifications-only          # just notifications
flowctl instance watch dev-1 --output-only                 # just telemetry output

Subscribe to NATS Streams

For lower-level access, subscribe to NATS subjects directly (wildcards: * matches a single token, > matches one or more tokens):

# All events from any flow
flowctl nats sub "events.>"

# All alarms
flowctl nats sub "alarms.>"

# All notifications
flowctl nats sub "notifications.>"

# All telemetry arriving
flowctl nats sub "telemetry.>"

Watch output for a specific asset:

flowctl nats watch --asset dev-asset-001

Watch output for a specific instance:

flowctl nats watch --instance dev-1

View NATS Streams

List all JetStream streams and their message counts:

flowctl nats streams

The key streams are:

Stream          Subjects          Purpose
TELEMETRY       telemetry.>       Incoming telemetry from providers
EVENTS          events.>          Flow-emitted events
OUTPUT          output.>          Flow output signals
ALARMS          alarms.>          Alarm events
NOTIFICATIONS   notifications.>   Notification events
AUDIT           audit.>           System audit events

NATS Subject Patterns

NATS subjects follow hierarchical naming conventions. Wildcards: * matches a single token, > matches one or more tokens.

Pattern                                          Example                                           Description
telemetry.{providerId}::{assetId}                telemetry.flowctl::pump-042                       Incoming telemetry for an asset
events.{flowName}.{instanceId}.{emitName}        events.temp-conversion.dev-1.celsius-output       Flow emit events
events.{flowName}.{instanceId}.output.{signal}   events.temp-conversion.dev-1.output.temperature   Flow output signals
events.>                                                                                           All flow events (wildcard)
telemetry.flowctl::*                                                                               All flowctl-provider telemetry

Instance State and Logs

# Execution statistics, input values, state
flowctl instance state dev-1

# Instance logs (if available)
flowctl instance logs dev-1

# Lifecycle events
flowctl instance events dev-1

Ingress Processor

Control the telemetry ingress processor (the NATS consumer that feeds the silo):

# Check status
flowctl ingress status

# Pause processing (useful for debugging)
flowctl ingress pause

# Resume
flowctl ingress resume

Testing Flows

Write test scenarios in YAML:

flowctl test tests/my-test.test.yaml

# Run all tests
flowctl test --all

# Record expected outputs (creates/updates test file)
flowctl test tests/my-test.test.yaml --record
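
A test scenario pairs input telemetry with the outputs you expect the flow to produce. The exact schema is not shown here, so the sketch below is only an illustration: the field names are assumptions, not the verified format. Use --record against a running instance to generate a real file:

# tests/my-test.test.yaml (illustrative only; field names are assumed)
flow: "btf:example;1.0.0"                      # assumed: artifact under test
steps:
  - send: { signal: temperature, value: 212 }  # assumed: input telemetry to feed
    expect:
      - emit: celsius-output                   # assumed: expected emit and value
        value: 100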

Purging Data Between Test Runs

When iterating on flows, clean up old data:

# Clear instance outputs and events
flowctl instance purge dev-1 --outputs -y

# Clear instance state (fresh restart)
flowctl instance purge dev-1 --state -y

# Clear both
flowctl instance purge dev-1 --all -y

# Purge a NATS stream
flowctl nats purge EVENTS --yes

# Purge all NATS streams (fresh slate)
flowctl nats purge --all --yes

Working with Models

Models define the telemetry schema for an asset using DTDL (Digital Twins Definition Language). Example model YAML:

name: dtmi:myorg:Pump;1
displayName: Pump
telemetry:
  - name: temperature
    schema: double
  - name: pressure
    schema: double
  - name: vibration
    schema: double

Validate a DTDL model file:

flowctl model validate models/pump.model.yaml

View parsed model:

flowctl model show models/pump.model.yaml

Example output:

Model: dtmi:myorg:Pump;1
Display Name: Pump

Telemetry:
- temperature: double
- pressure: double
- vibration: double

Properties: (none)
Commands: (none)

Create a model from template:

flowctl model new my-sensor

Artifact Lifecycle

Versioning

Artifacts are identified by BTIs (Beacon Tower Identifiers) in the format btf:<name>;<version>.

# Upload with explicit version
flowctl artifact upload flow.cs --version 1.2.0

# Auto-bump version
flowctl artifact upload flow.cs --bump patch # 1.0.0 → 1.0.1
flowctl artifact upload flow.cs --bump minor # 1.0.0 → 1.1.0

Migration

Migrate a running instance to a new artifact version:

flowctl instance migrate dev-1 --to "btf:example;1.1.0" --force

# Keep state during migration
flowctl instance migrate dev-1 --to "btf:example;1.1.0" --keep-state --force

Migrate all instances of an artifact at once:

flowctl artifact migrate "btf:example;1.0.0" --to "btf:example;1.1.0" --force

Compare Versions

flowctl artifact diff "btf:example;1.0.0" "btf:example;1.1.0"

Delete

# Fails if instances are still using the artifact
flowctl artifact delete "btf:example;1.0.0" --force

Configuration

flowctl reads configuration from ~/.flowctl/config.yaml:

default_environment: local

environments:
  local:
    flow_service: http://localhost:5000
    nats: nats://localhost:4222
    orleans:
      postgres_connection: "Host=localhost;Port=5432;Database=orleans;Username=orleans;Password=orleans"
      cluster_id: docker-cluster
      service_id: docker-cluster

Override the environment per-command:

flowctl --environment staging instance list

Troubleshooting

flowctl doctor shows failures

  • Ensure Docker Desktop is running
  • Run flowctl env start to start containers
  • Check for port conflicts: another service on 5000, 4222, 5432, etc.

No output from flow

  1. Verify telemetry is arriving:

    flowctl nats sub "telemetry.>"

    Then send a test message and confirm it appears.

  2. Check signal names. The signal in flowctl instance send must match the flow's input binding. If the flow binds "dev-asset-001.temperature", the signal name is temperature.

  3. Check instance state:

    flowctl instance state dev-1

    Look at execution count — if it's 0, the flow hasn't received any matching input.

  4. Check ingress processor:

    flowctl ingress status

    If paused, resume with flowctl ingress resume.

  5. Check silo logs:

    flowctl env logs silo

Instance won't start

  1. Verify the asset is initialized: flowctl asset list
  2. Verify the artifact exists: flowctl artifact list
  3. Check that the BTI matches exactly (including version)

Port conflicts

# Check what's using a port
lsof -i :5432 # Mac/Linux
ss -tlnp | grep 5432 # Linux

Stop conflicting services or change ports in the Docker Compose file.

Diagnose a flow with flowctl doctor flow

The doctor flow command traces the entire pipeline for a specific flow instance and reports what's broken:

flowctl doctor flow dev-1

Example output:

┌──────────────────┬─────────┬────────────────────────────────────────────────────┐
│ Check            │ Status  │ Details                                            │
├──────────────────┼─────────┼────────────────────────────────────────────────────┤
│ Instance exists  │ Ok      │ dev-1 — status: Active                             │
│ Artifact exists  │ Ok      │ btf:temp-monitor;1.0.0                             │
│ Asset exists     │ Ok      │ pump-042 — 3 endpoints                             │
│ Input mappings   │ Warning │ unmapped: inlet_pressure                           │
│ Flow grain state │ Warning │ executions: 0                                      │
│ NATS streams     │ Ok      │ TELEMETRY: 42 msgs, EVENTS: 0, ALARMS: 0, NOTIF: 0 │
│ Recent logs      │ Ok      │ No errors in last 5 entries                        │
└──────────────────┴─────────┴────────────────────────────────────────────────────┘

Suggestions:
> Asset signal 'inlet_pressure' has no input mapping — data won't reach the flow
> Flow has never executed. Send test data: flowctl instance send dev-1 <signal> <value>

Use --output json for machine-readable output.

Validate data before feeding

Check that your JSONL data file has correct signal names before sending it:

# Basic format validation
flowctl feed data.jsonl --validate

# Also check signal names against a real asset
flowctl feed data.jsonl --validate --asset my-asset-001

# Limit error output
flowctl feed data.jsonl --validate --max-errors 5

When --asset is provided, the validator connects to the Orleans silo and checks each signal name in your data against the asset's actual endpoints. Unknown signals are reported as warnings.

Pipeline stage diagram

Understanding where things can break helps you diagnose faster:

data source
  │
  ▼
NATS telemetry.{providerId}::{assetId}          ← E15: NATS unreachable
  │                                               E12: TELEMETRY stream missing
  ▼
IngressProcessor (splits into atoms)
  │
  ▼
Orleans stream: flowctl::{assetId}.{signal}     ← E16: Orleans unreachable
  │                                               E05: Asset not initialized
  ▼                                               E06: Asset has no endpoints
TelemetryGrain (subscribed via auto-bind)
  │
  ▼
Orleans stream: {assetId}.{signal}              ← E07: No input mappings
  │                                               E08: Mapping mismatch
  ▼
FlowGrain                                       ← E01: Instance not found
  │                                               E02: Instance in Error state
  ▼                                               E03: Instance suspended
Flow executes                                   ← E04: Artifact not found
  │                                               E09: Never executed
  ├──▶ NATS events.{flow}.{inst}.{emit}           E10: Flow paused
  ├──▶ NATS alarms.{flow}.{inst}.{emit}           E11: Not all inputs initialized
  └──▶ NATS notifications.{flow}.{inst}.{emit}    E13: Emit streams missing

Error reference

ID   Symptom                                                  Cause                                  Fix
E01  Instance not found                                       Instance doesn't exist                 flowctl instance create <id> --bti <bti> --asset <assetId>
E02  Instance in Error state                                  Flow execution failure                 flowctl instance restart <id>
E03  Instance suspended                                       Instance was paused                    flowctl instance resume <id>
E04  Artifact not found                                       Flow code not uploaded                 flowctl artifact upload <file>
E05  Asset not initialized                                    asset init never run                   flowctl asset init <id> --model model.yaml
E06  Asset has no endpoints                                   Model file missing signals             Fix model.yaml and re-run flowctl asset init
E07  No input mappings                                        Instance created without auto-derive   Recreate instance (auto-derives from asset)
E08  Input mappings mismatch                                  Mapping keys don't match asset         Recreate instance or flowctl instance update
E09  Flow never executed                                      No data sent                           flowctl instance send <id> <signal> <value>
E10  Flow paused                                              Flow execution paused                  flowctl instance resume <id>
E11  Not all inputs initialized                               Some inputs still null                 Seed all with flowctl instance send
E12  TELEMETRY stream missing                                 NATS not bootstrapped                  flowctl env bootstrap
E13  Emit streams missing (FLOW-EVENTS/ALARMS/NOTIFICATIONS)  NATS not bootstrapped                  flowctl env bootstrap
E14  Flow-service unreachable                                 Service not running                    Start flow-service, check URL
E15  NATS unreachable                                         NATS not running                       Start NATS, check connection URL
E16  Orleans unreachable                                      Silo not running                       Start silo, check postgres/cluster config
E17  Wrong signal names                                       Feed data has bad signals              Fix signal names in data file
E18  Recent execution errors                                  Flow logic errors                      Check flow code and instance logs

Common symptoms

"My flow isn't producing output"

Run flowctl doctor flow <id> first. Most common causes: E09 (never executed — forgot to send data), E07 (no input mappings — recreate instance), E11 (not all inputs initialized — seed threshold or other constants).

"Data sent but flow never executes"

Check E05/E06 (asset not initialized or no endpoints), then E07/E08 (input mapping issues). The data path is: NATS → IngressProcessor → TelemetryGrain → FlowGrain. Each hop requires correct configuration.
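
A quick per-hop check using commands covered earlier in this guide (substitute your own asset and instance IDs):

flowctl nats sub "telemetry.>"     # 1. Telemetry arriving on NATS?
flowctl ingress status             # 2. IngressProcessor running (not paused)?
flowctl asset get dev-asset-001    # 3. Asset initialized with endpoints?
flowctl instance get dev-1         # 4. Instance exists with input mappings?
flowctl instance state dev-1       # 5. Execution count increasing?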

"Instance is in Error state"

Check instance logs: flowctl instance logs <id>. Usually a flow code error. Restart with flowctl instance restart <id> after fixing the underlying issue.

"Everything looks right but still no output"

  1. flowctl nats sub "events.>" — watch for events
  2. flowctl instance send <id> <signal> <value> — send data manually
  3. flowctl instance state <id> — check execution count increases
  4. Check silo logs: flowctl env logs silo

Setup verification checklist

Quick checklist to verify your local setup is correct:

# 1. Environment healthy?
flowctl doctor

# 2. Asset initialized with endpoints?
flowctl asset get my-asset

# 3. Artifact uploaded?
flowctl artifact list

# 4. Instance created with input mappings?
flowctl instance get my-instance

# 5. Flow grain receiving data?
flowctl instance state my-instance # Check execution count

# 6. All-in-one diagnostic:
flowctl doctor flow my-instance

Reset everything

flowctl env reset --force
flowctl env start
flowctl env bootstrap

Example Catalog

The dev kit includes 29 example flows in the examples/ directory, organized by complexity. Each example has both the BtScript source (.bts) and compiled C# output (.cs).

01-basics — Core Concepts

Example           What it demonstrates
temp-conversion   Single input, let, emit — Celsius to Fahrenheit
temp-alarm        cond branching — threshold-based emit levels
multi-input       on-all trigger — fires only when all inputs have new values
emit-channeled    channel: on emit — routes to event/alarm/notification streams
function-simple   (function ...) form — reusable pure function

02-patterns — Intermediate Patterns

Example                    What it demonstrates
synchronized-sampling      gate zip: — synchronize multiple sensor inputs
hourly-average             gate zip: + time-window — windowed aggregation
mixed-telemetry-property   (latest ...) — use property values without triggering
rolling-window-timer       rolling-avg + trigger timer: — smoothed periodic output
route-conditional          (route ...) — value-based branching with different processing per range
parallel-join              (parallel ...) + (join ...) — concurrent analysis branches

03-industrial — Real-World Flows

Example                       What it demonstrates
equipment-state               Bool/int types, safety interlocks, composite status
batch-metrics                 Int/long/dateTime types, production rate tracking
comprehensive-pump-monitor    require, parallel, join, efficiency calculation
district-heating-substation   on-all + on-change + timer triggers, let*, gate zip:
dishwasher-cycle-monitor      Appliance cycle state machine

04-energy — Load Optimization Domain

13 flows implementing coordinated peak demand avoidance: energy monitoring, EV charging control, heat pump scheduling, comfort constraints, and peak prediction/mitigation.

Running an Example

# Compile
btscript compile examples/01-basics/temp-conversion.bts > /tmp/flow.cs

# Upload
flowctl artifact upload /tmp/flow.cs

# Deploy
flowctl asset init my-asset --model model.yaml
flowctl instance create test-1 --bti "btf:temp-conversion;1.0.0" --asset my-asset

# Test
flowctl instance send test-1 celsius 25.0
flowctl instance watch test-1

Next Steps