Introduction
Welcome to the BLVM documentation.
BLVM (Bitcoin Low-Level Virtual Machine) implements Bitcoin consensus from the Orange Paper and provides protocol abstraction for multiple Bitcoin variants, a reference full node with P2P networking, a developer SDK, and cryptographic governance for transparent development.
What is BLVM?
BLVM is compiler-like infrastructure for Bitcoin implementations. There is a mathematical specification (the Orange Paper, treated as an intermediate representation / IR) and an implementation. The implementation is validated against the spec using BLVM Specification Lock (formal verification with Z3)—it is not generated or transformed from the IR. Alternative implementations can target the same Orange Paper and tooling.
Stack:
- Orange Paper – Mathematical specification (IR)
- blvm-spec-lock – Links code to spec; validates implementation against the IR
- blvm-consensus – Consensus implementation with formal verification
- blvm-protocol – Protocol abstraction (mainnet, testnet, regtest)
- blvm-node – Full node (storage, networking, RPC, modules)
- blvm-sdk – Developer toolkit and module composition
- Governance – Cryptographic governance enforcement
Why “LVM”? Like LLVM’s infrastructure for compilers, BLVM provides shared infrastructure for Bitcoin implementations—with the spec as the reference and the implementation validated against it, not generated from it.
Documentation Structure
- Getting Started – Installation, quick start, first node
- Architecture – System design, module system, events
- Layers – Consensus, protocol, node (each with overview and detailed pages)
- Developer SDK – Module development, API reference, examples
- Governance – Model, configuration, procedures
- Reference – Specifications, API Index, glossary
Documentation is maintained in source repositories alongside code and is aggregated at docs.thebitcoincommons.org.
Getting Help
Report bugs or request features via GitHub Issues, ask questions in GitHub Discussions, or report security issues to security@thebitcoincommons.org.
License
This documentation is licensed under the MIT License, the same as the BLVM codebase.
Installation
This guide covers installing BLVM from pre-built binaries available on GitHub releases.
Prerequisites
Pre-built binaries are available for Linux, macOS, and Windows. No Rust installation is required; binaries are pre-compiled and ready to use.
Installing blvm-node
The reference node is the main entry point for running a BLVM node.
Quick Start
1. Download the latest release from GitHub Releases

2. Extract the archive for your platform:

# Linux
tar -xzf blvm-*-linux-x86_64.tar.gz
# macOS
tar -xzf blvm-*-macos-x86_64.tar.gz
# Windows
# Extract the .zip file using your preferred tool

3. Move the binary to your PATH (optional but recommended):

# Linux/macOS
sudo mv blvm /usr/local/bin/
# Or add to your local bin directory
mkdir -p ~/.local/bin
mv blvm ~/.local/bin/
export PATH="$HOME/.local/bin:$PATH"  # Add to ~/.bashrc or ~/.zshrc

4. Verify installation:

blvm --version
Release Variants
Releases include two variants:
Base Variant (blvm-{version}-{platform}.tar.gz)
Stable, minimal release with core Bitcoin node functionality, production optimizations, standard storage backends, and process sandboxing. Use for production deployments prioritizing stability.
Experimental Variant (blvm-experimental-{version}-{platform}.tar.gz)
Full-featured build with experimental features: UTXO commitments, BIP119 CTV, Dandelion++, BIP158, Stratum V2, and enhanced signature operations counting. See Protocol Specifications for details.
Use for development, testing, or when experimental capabilities are required.
Installing blvm-sdk Tools
The SDK tools (blvm-keygen, blvm-sign, blvm-verify) are included in the blvm-node release archives.
After extracting the release archive, you’ll find:
- blvm - Bitcoin reference node
- blvm-keygen - Generate governance keypairs
- blvm-sign - Sign governance messages
- blvm-verify - Verify signatures and multisig thresholds
All tools are in the same directory. Move them to your PATH as described above.
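All four binaries can be installed in one pass; a minimal sketch, assuming the archive has already been extracted into the current directory:

```shell
# Install all BLVM tools from the extracted archive into ~/.local/bin.
# Assumes the current directory holds the extracted binaries.
mkdir -p ~/.local/bin
for tool in blvm blvm-keygen blvm-sign blvm-verify; do
    if [ -f "$tool" ]; then
        mv "$tool" ~/.local/bin/
        echo "installed $tool"
    else
        echo "skipping $tool (not found in current directory)"
    fi
done
# Ensure ~/.local/bin is on PATH (add this line to ~/.bashrc or ~/.zshrc):
export PATH="$HOME/.local/bin:$PATH"
```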
Platform-Specific Notes
Linux
- x86_64: Standard 64-bit Linux
- ARM64: For ARM-based systems (Raspberry Pi, AWS Graviton, etc.)
- glibc 2.31+: Required for Linux binaries
macOS
- x86_64: Intel Macs
- ARM64: Apple Silicon (M1/M2/M3)
- macOS 11.0+: Required for macOS binaries
Windows
- x86_64: 64-bit Windows
- Extract the .zip file and run blvm.exe from the extracted directory
- Add the directory to your PATH for command-line access
Verifying Installation
After installation, verify everything works:
# Check blvm-node version
blvm --version
# Check SDK tools
blvm-keygen --help
blvm-sign --help
blvm-verify --help
Building from Source (Advanced)
Building from source is primarily for development. Crates in this stack declare rust-version 1.82 or 1.83 in their Cargo.toml (for example blvm-consensus and blvm-spec-lock use 1.83; blvm-node and blvm-protocol use 1.82). Use Rust 1.83 or later unless you are working only in a crate with a lower MSRV. Clone the blvm repository and follow the build instructions in its README.
Next Steps
- See Quick Start for running your first node
- See Node Configuration for detailed setup options
See Also
- Quick Start - Run your first node
- First Node Setup - Complete setup guide
- Node Configuration - Configuration options
- Node Overview - Node features and capabilities
- Release Process - How releases are created
- GitHub Releases - Download releases
Quick Start
Get up and running with BLVM in minutes.
Running Your First Node
After installing the binary, you can start a node:
Regtest Mode (Recommended for Development)
Regtest mode is safe for development - it creates an isolated network:
blvm
Or explicitly:
blvm --network regtest
Starts a node in regtest mode (default), creating an isolated network with instant block generation for testing and development.
Testnet Mode
Connect to Bitcoin testnet:
blvm --network testnet
Mainnet Mode
⚠️ Warning: Only use mainnet if you understand the risks.
blvm --network mainnet
Basic Node Operations
Checking Node Status
Once the node is running, check its status via RPC:
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1}'
Example Response:
{
"jsonrpc": "2.0",
"result": {
"chain": "regtest",
"blocks": 100,
"headers": 100,
"bestblockhash": "0f9188f13cb7b2c71f2a335e3a4fc328bf5beb436012afca590b1a11466e2206",
"difficulty": 4.656542373906925e-10,
"mediantime": 1231006505,
"verificationprogress": 1.0,
"chainwork": "0000000000000000000000000000000000000000000000000000000000000064",
"pruned": false,
"initialblockdownload": false
},
"id": 1
}
Getting Peer Information
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getpeerinfo", "params": [], "id": 2}'
Getting Mempool Information
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getmempoolinfo", "params": [], "id": 3}'
Verifying Installation
blvm --version # Verify installation
blvm --help # View available options
The node connects to the P2P network, syncs blockchain state, accepts RPC commands on port 8332 (mainnet default) or 18332 (testnet/regtest), and can mine blocks if configured.
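For scripting, the sync state described above can be polled over RPC; a sketch, assuming a local node with RPC on 18332 (the testnet/regtest default) and curl installed:

```shell
# Poll a local node's chain state over JSON-RPC.
# RPC_URL is an assumption: 18332 is the testnet/regtest default;
# use 8332 for mainnet.
RPC_URL="${RPC_URL:-http://localhost:18332}"
PAYLOAD='{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1}'
echo "POST $RPC_URL <- $PAYLOAD"
# Only issue the request if something is listening at the URL:
if curl -s -o /dev/null --max-time 2 "$RPC_URL" 2>/dev/null; then
    curl -s -X POST "$RPC_URL" \
        -H "Content-Type: application/json" \
        -d "$PAYLOAD"
else
    echo "no RPC server reachable at $RPC_URL"
fi
```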
Using the SDK
Generate a Governance Keypair
blvm-keygen --output my-key.key
Sign a Message
blvm-sign release \
--version v1.0.0 \
--commit abc123 \
--key my-key.key \
--output signature.txt
Verify Signatures
blvm-verify release \
--version v1.0.0 \
--commit abc123 \
--signatures sig1.txt,sig2.txt,sig3.txt \
--threshold 3-of-5 \
--pubkeys keys.json
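The three commands compose into one workflow; a sketch using only the flags shown above, where the release values and keys.json are placeholders for your own:

```shell
# Governance workflow sketch: generate a key, sign a release, verify.
# v1.0.0, abc123, and keys.json are placeholders. A real 3-of-5 check
# needs signatures collected from the other co-signers as well;
# 1-of-1 is used here only so a single local signature can be checked.
if command -v blvm-keygen >/dev/null 2>&1; then
    TOOLS_PRESENT=1
    blvm-keygen --output my-key.key
    blvm-sign release \
        --version v1.0.0 \
        --commit abc123 \
        --key my-key.key \
        --output signature.txt
    blvm-verify release \
        --version v1.0.0 \
        --commit abc123 \
        --signatures signature.txt \
        --threshold 1-of-1 \
        --pubkeys keys.json
else
    TOOLS_PRESENT=0
    echo "blvm SDK tools not on PATH; install them first"
fi
```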
Next Steps
- First Node Setup - Detailed configuration guide
- Node Configuration - Full configuration options
- RPC API Reference - Interact with your node
See Also
- Installation - Installing BLVM
- First Node Setup - Complete setup guide
- Node Configuration - Configuration options
- Node Operations - Managing your node
- RPC API Reference - Complete RPC API
- Troubleshooting - Common issues
First Node Setup (regtest)
This guide is regtest-only from top to bottom — local chain, no public seeds, default RPC 127.0.0.1:18332. You are not syncing mainnet here.
Optional later: Configuration examples include testnet and mainnet. Ignore those until you intentionally switch networks.
Regtest: step-by-step
Step 1: Create configuration directory
mkdir -p ~/.config/blvm
cd ~/.config/blvm
Step 2: Create blvm.toml (regtest)
Create ~/.config/blvm/blvm.toml:
# --- regtest (local dev) ---
transport_preference = "tcponly"
# BLVM P2P listen — use a port other than 18444 if another local regtest peer already binds 18444 (common default).
listen_addr = "127.0.0.1:18445"
protocol_version = "Regtest"
[storage]
data_dir = "~/.local/share/blvm"
database_backend = "auto"
[logging]
level = "info"
- transport_preference is required. If TOML parse fails, the node never applies protocol_version = "Regtest" — fix the file before continuing.
- P2P port: 18444 is a widely used default regtest listen port. Give BLVM a different port (e.g. 18445) when two nodes share a host; point persistent_peers at the other peer's address when you want blocks from it.
- RPC is not in this file: use defaults (127.0.0.1:18332) or --rpc-addr / BLVM_RPC_ADDR.
Step 2b: Validate the file
From any directory, using the same blvm binary you will run:
/path/to/blvm config validate --path ~/.config/blvm/blvm.toml
You should see Configuration file is valid. If not, fix the TOML; do not start sync until this passes.
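Validation and startup can be chained so a bad file never reaches sync; a sketch, where BLVM and CONFIG are placeholders for your binary path and config file:

```shell
# Only start the node if its config validates first.
# BLVM and CONFIG are placeholders; point them at your binary and file.
BLVM="${BLVM:-blvm}"
CONFIG="${CONFIG:-$HOME/.config/blvm/blvm.toml}"
if command -v "$BLVM" >/dev/null 2>&1; then
    if "$BLVM" config validate --path "$CONFIG"; then
        "$BLVM" --config "$CONFIG" --verbose
    else
        echo "config invalid; fix $CONFIG before starting"
    fi
else
    echo "blvm binary not found; set BLVM to its path"
fi
```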
Step 3: Start the regtest node
Same binary as validate. Clear mainnet IBD env vars if your shell still has them from other work:
env -u BLVM_ASSUME_VALID_HEIGHT -u BLVM_ASSUMEVALID \
/path/to/blvm --config ~/.config/blvm/blvm.toml --verbose
Confirm regtest in the first log lines:
- Configuration loaded successfully from file
- Network: Regtest
- Data directory: matches your [storage].data_dir
No config file (still regtest):
env -u BLVM_ASSUME_VALID_HEIGHT -u BLVM_ASSUMEVALID \
/path/to/blvm -n regtest -d ~/.local/share/blvm --verbose
Example lines:
[INFO] blvm: Network: Regtest
[INFO] blvm: Data directory: /home/you/.local/share/blvm
[INFO] blvm_node::rpc: Starting TCP RPC server on 127.0.0.1:18332
Peers on regtest: there are no public DNS seeds. 0 peers and “skipping DNS seed discovery” are normal. To sync blocks you need another regtest peer (e.g. a second blvm, or any regtest full node) and persistent_peers / a local harness — that is still regtest, not mainnet.
Step 4: Verify regtest RPC
In another terminal (regtest RPC port 18332):
curl -X POST http://localhost:18332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1}'
Expected Response:
{
"jsonrpc": "2.0",
"result": {
"chain": "regtest",
"blocks": 0,
"headers": 0,
"bestblockhash": "...",
"difficulty": 4.656542373906925e-10,
"mediantime": 1231006505,
"verificationprogress": 1.0,
"chainwork": "0000000000000000000000000000000000000000000000000000000000000001",
"pruned": false,
"initialblockdownload": false
},
"id": 1
}
Configuration examples (other networks)
The steps above are regtest. Below are copy-paste starting points if you leave regtest.
Development node (regtest, extended)
transport_preference = "tcponly"
listen_addr = "127.0.0.1:18445"
protocol_version = "Regtest"
[storage]
data_dir = "~/.local/share/blvm"
database_backend = "auto"
[rbf]
mode = "standard" # Standard RBF for development
[mempool]
max_mempool_mb = 100
min_relay_fee_rate = 1
Start with: blvm -n regtest -d ~/.local/share/blvm (RPC defaults to 127.0.0.1:18332).
Testnet node
transport_preference = "tcponly"
listen_addr = "127.0.0.1:18333"
protocol_version = "Testnet3"
[storage]
data_dir = "~/.local/share/blvm-testnet"
database_backend = "redb"
[rbf]
mode = "standard"
[mempool]
max_mempool_mb = 300
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
Start with: blvm -n testnet -d ~/.local/share/blvm-testnet -r 127.0.0.1:18332
Production mainnet node
transport_preference = "tcponly"
listen_addr = "0.0.0.0:8333"
protocol_version = "BitcoinV1"
[storage]
data_dir = "/var/lib/blvm"
database_backend = "redb"
[storage.cache]
# Example values; canonical defaults in [Configuration Reference](../reference/configuration-reference.md)
block_cache_mb = 200
utxo_cache_mb = 100
header_cache_mb = 20
[rbf]
mode = "standard"
[mempool]
max_mempool_mb = 300
max_mempool_txs = 100000
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 25
max_descendant_count = 25
Start with: blvm -n mainnet -d /var/lib/blvm -r 127.0.0.1:8332. Use [rpc_auth] and RPC_AUTH_TOKENS for production.
See Node Configuration for complete configuration options.
Storage
The node stores blockchain data (blocks, UTXO set, chain state, and indexes) in the configured data directory. See Storage Backends for configuration options.
Peers and sync (regtest vs public networks)
- Regtest: No wide-area peer discovery. You only get blocks from peers you configure (persistent_peers, a local second node, or another regtest bitcoind). Staying at height 0 with 0 peers is expected until you add that.
- Mainnet / testnet: DNS seeds and addr relay apply; IBD pulls from the public network.
Regtest: how you get past genesis (height > 0)
You need a peer that already has blocks, or a datadir that already contains those blocks. A lone BLVM on regtest with no peers stays at genesis — that is not “missing a feature,” it is how regtest is defined (no public seeds).
1. Second regtest full node (typical way to create blocks)
Run a reference regtest daemon on default P2P 127.0.0.1:18444, mine some blocks, then run BLVM on a different local P2P port and peer outbound to it. Example using the common bitcoind / bitcoin-cli CLI:
bitcoind -regtest -daemon
bitcoin-cli -regtest createwallet w
ADDR=$(bitcoin-cli -regtest getnewaddress)
bitcoin-cli -regtest generatetoaddress 200 "$ADDR"
BLVM TOML (listen on 18445, peer the regtest seed on 18444; matches local/regtest-two-node-seed-bootstrap.toml):
transport_preference = "tcponly"
listen_addr = "127.0.0.1:18445"
protocol_version = "Regtest"
persistent_peers = ["127.0.0.1:18444"]
[storage]
data_dir = "~/.local/share/blvm-regtest-from-core"
database_backend = "auto"
[ibd]
mode = "parallel"
preferred_peers = ["127.0.0.1:18444"]
[logging]
level = "info"
Start BLVM with that config (or pass --listen-addr 127.0.0.1:18445 if your file is merged with other settings); IBD should pull blocks until it matches the peer’s tip.
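To confirm the follower actually caught up, compare heights on both sides; a sketch, assuming bitcoind's regtest RPC is reachable via bitcoin-cli and BLVM's RPC is on its default 127.0.0.1:18332:

```shell
# Compare the regtest seed's height with BLVM's view of the chain.
# Ports follow the two-node setup above; both nodes must be running.
if command -v bitcoin-cli >/dev/null 2>&1; then
    SEED_HEIGHT=$(bitcoin-cli -regtest getblockcount)
    BLVM_INFO=$(curl -s -X POST http://127.0.0.1:18332 \
        -H "Content-Type: application/json" \
        -d '{"jsonrpc":"2.0","method":"getblockchaininfo","params":[],"id":1}')
    STATUS="seed height ${SEED_HEIGHT}; blvm getblockchaininfo: ${BLVM_INFO}"
else
    STATUS="bitcoin-cli not found; start the seed node first"
fi
echo "$STATUS"
```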
2. Two BLVM nodes
Works once the seed already has height > 0 (from (1), or from copying a populated seed data_dir). A follower then syncs from the seed over persistent_peers. If both nodes start from an empty datadir, both stay at genesis — add blocks with (1) first, or reuse a filled datadir.
3. Reuse a populated data directory
Copy/sync the configured [storage].data_dir from a machine that already completed regtest sync for the same network.
4. submitblock on a running BLVM node
On a normal node (RPC server wired with NetworkManager), submitblock checks that the block’s prev_block_hash matches the current tip, then queues the block for the same run-loop processing as P2P-received blocks, so a valid next block can extend your local chain. You still have to produce that block (for example getblocktemplate plus a miner); BLVM does not yet mirror every convenience RPC some reference stacks expose for regtest mining.
If MiningRpc is used without a network manager (some tests or minimal tooling), submitblock remains validation-only and does not advance the chain.
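As a sketch of the RPC call shape (the block hex is a placeholder you must produce yourself, for example via a getblocktemplate-driven miner):

```shell
# Build a submitblock JSON-RPC payload for the regtest RPC port.
# <block_hex> is a deliberate placeholder: substitute the serialized
# hex of a valid next block before actually sending the request.
BLOCK_HEX="<block_hex>"
PAYLOAD=$(printf '{"jsonrpc": "2.0", "method": "submitblock", "params": ["%s"], "id": 1}' "$BLOCK_HEX")
echo "$PAYLOAD"
# Uncomment to send once BLOCK_HEX holds real block hex:
# curl -X POST http://127.0.0.1:18332 \
#     -H "Content-Type: application/json" \
#     -d "$PAYLOAD"
```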
If you use the workspace local harness (BitcoinCommons/local/regtest-two-blvm.sh), regtest-two-node-seed-bootstrap.toml is wired for a regtest seed on :18444 and BLVM P2P on :18445. After the seed has blocks, run ./local/regtest-two-blvm.sh start-seed-bootstrap, then start-follower (or start-both after the seed has height > 0) as documented in the script header.
RPC interface (regtest)
Default listen in this guide: 127.0.0.1:18332.
curl -s -X POST http://127.0.0.1:18332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","method":"getblockchaininfo","params":[],"id":1}'
Mainnet uses port 8332 by default; do not use that for the regtest flow above.
See RPC API Reference for the full API.
See Also
- Node Configuration - Complete configuration options
- Node Operations - Running and managing your node
- RPC API Reference - Complete API documentation
- Troubleshooting - Common issues and solutions
Security Considerations
⚠️ Important: This implementation is designed for pre-production testing and development. Additional hardening is required for production mainnet use. Use regtest or testnet for development, never expose RPC to untrusted networks, configure RPC authentication, and keep software updated.
Troubleshooting
See Troubleshooting for common issues and solutions.
RBF Configuration Example
Complete example of configuring RBF (Replace-By-Fee) for different use cases.
Exchange Node Configuration
For exchanges that need to protect users from unexpected transaction replacements:
[rbf]
mode = "conservative"
min_fee_rate_multiplier = 2.0
min_fee_bump_satoshis = 5000
min_confirmations = 1
max_replacements_per_tx = 3
cooldown_seconds = 300
[mempool]
max_mempool_mb = 500
max_mempool_txs = 200000
min_relay_fee_rate = 2
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 25
max_descendant_count = 25
persist_mempool = true
Why This Configuration:
- Conservative RBF: Requires 2x fee increase, preventing low-fee replacements
- 1 Confirmation Required: Additional safety check before allowing replacement
- Higher Fee Threshold: 2 sat/vB minimum relay fee filters low-priority transactions
- Mempool Persistence: Survives restarts for better reliability
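The two replacement thresholds combine; a quick arithmetic sketch of what a replacement must pay under this policy (an illustrative interpretation of the knobs, using integer shell arithmetic):

```shell
# Conservative RBF arithmetic sketch (illustrative numbers only).
# Original transaction: 250 vB at 10 sat/vB, 2500 sat total fee.
ORIG_RATE=10      # sat/vB
ORIG_FEE=2500     # sat
MULTIPLIER=2      # min_fee_rate_multiplier = 2.0 (integer here)
MIN_BUMP=5000     # min_fee_bump_satoshis
NEW_RATE=$((ORIG_RATE * MULTIPLIER))
NEW_FEE=$((ORIG_FEE + MIN_BUMP))
echo "replacement needs >= ${NEW_RATE} sat/vB and >= ${NEW_FEE} sat total fee"
```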
Mining Pool Configuration
For mining pools that want to maximize fee revenue:
[rbf]
mode = "aggressive"
min_fee_rate_multiplier = 1.05
min_fee_bump_satoshis = 500
allow_package_replacements = true
max_replacements_per_tx = 10
cooldown_seconds = 60
[mempool]
max_mempool_mb = 1000
max_mempool_txs = 500000
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 50
max_descendant_count = 50
Why This Configuration:
- Aggressive RBF: Only 5% fee increase required, maximizing fee opportunities
- Package Replacements: Allows parent+child transaction replacements
- Larger Mempool: 1GB capacity for more transaction opportunities
- Relaxed Ancestor Limits: 50 transactions for larger packages
Standard Node Configuration
For general-purpose nodes using conventional mempool defaults:
[rbf]
mode = "standard"
min_fee_rate_multiplier = 1.1
min_fee_bump_satoshis = 1000
[mempool]
max_mempool_mb = 300
max_mempool_txs = 100000
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 25
max_descendant_count = 25
mempool_expiry_hours = 336
Why This Configuration:
- Standard RBF: BIP125-compliant with 10% fee increase
- Conventional defaults: Matches common mainnet mempool parameters
- Balanced: Good for most use cases
Testing RBF Configuration
Test Transaction Replacement
- Create initial transaction:
# Send transaction with RBF signaling (sequence < 0xffffffff)
bitcoin-cli sendtoaddress <address> 0.001 "" "" true
- Replace with higher fee:
# Create replacement with higher fee
bitcoin-cli bumpfee <txid> --fee_rate 20
- Verify replacement:
# Check mempool
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getmempoolentry", "params": ["<new_txid>"], "id": 1}'
Monitor RBF Activity
# Get mempool info
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getmempoolinfo", "params": [], "id": 1}'
Expected Response:
{
"jsonrpc": "2.0",
"result": {
"size": 1234,
"bytes": 567890,
"usage": 1234567,
"maxmempool": 314572800,
"mempoolminfee": 0.00001,
"minrelaytxfee": 0.00001
},
"id": 1
}
See Also
- RBF and Mempool Policies - Complete configuration guide
- Node Configuration - All configuration options
- Node Operations - Managing your node
System Overview
Bitcoin Commons is a Bitcoin implementation ecosystem with six tiers building on the Orange Paper mathematical specifications. blvm-consensus and blvm-protocol share the blvm-primitives crate for types, serialization, and crypto. The system implements consensus rules directly from the spec, provides protocol abstraction, delivers a full node implementation, and includes a developer SDK.
6-Tier Component Architecture
graph TD
    T1[Orange Paper<br/>Mathematical Foundation]
    T2[blvm-consensus<br/>Pure Math Implementation]
    T3[blvm-protocol<br/>Protocol Abstraction]
    T4[blvm-node<br/>Full Node Implementation]
    T5[blvm-sdk<br/>Developer Toolkit]
    T6[blvm-commons<br/>Governance Enforcement]
    T1 -->|direct implementation| T2
    T2 -->|protocol abstraction| T3
    T3 -->|full node| T4
    T4 -->|ergonomic API| T5
    T5 -->|cryptographic governance| T6
    style T1 fill:#f9f,stroke:#333,stroke-width:2px
    style T2 fill:#bbf,stroke:#333,stroke-width:2px
    style T3 fill:#bfb,stroke:#333,stroke-width:2px
    style T4 fill:#fbf,stroke:#333,stroke-width:2px
    style T5 fill:#ffb,stroke:#333,stroke-width:2px
    style T6 fill:#fbb,stroke:#333,stroke-width:2px
BLVM Stack Architecture
Figure: BLVM stack (marketing image): Orange Paper / blvm-spec as the foundation, blvm-consensus with verification tooling, then blvm-protocol, blvm-node, blvm-sdk, and governance enforcement (blvm-commons). The numbered 6-tier diagram above is the canonical layer list.
Tiered Architecture
Figure: High-level tiered view (simplified graphic). Canonical numbering is the six layers in the mermaid diagram and section headings above (Orange Paper → consensus → protocol → node → SDK → blvm-commons); this image simplifies the stack for layout.
Component Overview
Tier 1: Orange Paper (Mathematical Foundation)
- Mathematical specifications for Bitcoin consensus rules
- Source of truth for all implementations
- Timeless, immutable consensus rules
Tier 2: blvm-consensus (Pure Math Implementation)
- Direct implementation of Orange Paper functions
- Formal proofs verify mathematical correctness
- Side-effect-free, deterministic functions
- Consensus-critical dependencies pinned to exact versions
Code: README.md
Tier 3: blvm-protocol (Protocol Abstraction)
- Bitcoin protocol abstraction for multiple variants
- Supports mainnet, testnet, regtest
- Commons-specific protocol extensions (UTXO commitments, ban list sharing)
- BIP implementations (BIP152, BIP157, BIP158, BIP173/350/351)
Code: README.md
Tier 4: blvm-node (Node Implementation)
- Reference full node (non-consensus infrastructure: storage, P2P, RPC, modules); operational hardening required for real deployments
- Storage layer (database abstraction with multiple backends)
- Network manager (multi-transport: TCP, QUIC, Iroh)
- RPC server (JSON-RPC 2.0, conventional Bitcoin RPC surface)
- Module system (process-isolated runtime modules)
- Payment processing with CTV (CheckTemplateVerify) support
- RBF and mempool policies (4 configurable modes)
- Advanced indexing (address and value range indexing)
- Mining coordination (Stratum V2, merge mining)
- P2P governance message relay
- Governance integration (webhooks, user signaling)
- ZeroMQ notifications (optional)
Code: README.md
Tier 5: blvm-sdk (Developer Toolkit)
- Governance primitives (key management, signatures, multisig)
- CLI tools (blvm-keygen, blvm-sign, blvm-verify)
- Composition framework (declarative node composition)
- Bitcoin-compatible signing standards
Code: README.md
Tier 6: blvm-commons (Governance Enforcement)
- GitHub App for governance enforcement
- Cryptographic signature verification
- Multisig threshold enforcement
- Audit trail management
- OpenTimestamps integration
Data Flow
- Orange Paper provides mathematical consensus specifications
- blvm-consensus directly implements mathematical functions
- blvm-protocol layers protocol parameters and network behavior on blvm-consensus types and validation
- blvm-node uses blvm-protocol and blvm-consensus for validation
- blvm-sdk provides governance primitives
- blvm-commons uses blvm-sdk and blvm-protocol for governance enforcement and shared types
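The same flow is visible in the crate manifests; an illustrative (not verbatim) fragment for a node-level crate, where the version pins are hypothetical:

```toml
# Illustrative only: names mirror the tiers above, but the version
# pins are hypothetical; see each crate's real Cargo.toml.
[package]
name = "blvm-node"

[dependencies]
blvm-consensus = "=1.0.0"  # consensus rules are imported, never reimplemented
blvm-protocol = "=1.0.0"   # network parameters and wire helpers
```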
Cross-Layer Validation
- Dependencies between layers are strictly enforced in the crate graph (application layers do not reimplement consensus).
- Consensus rule modifications are prevented in application layers by design (validation calls into blvm-consensus).
- The Orange Paper is the specification; blvm-consensus is checked with formal verification, tests, and review—not a single proof of the entire spec in one step.
- Version coordination (Cargo / release sets) keeps compatible crate versions together.
Key Features
Mathematical Rigor
- Direct implementation of Orange Paper specifications
- Formal verification with BLVM Specification Lock
- Property-based testing for mathematical invariants
- Formal proofs verify critical consensus functions
Protocol Abstraction
- Multiple Bitcoin variants (mainnet, testnet, regtest)
- Commons-specific protocol extensions
- BIP implementations (BIP152, BIP157, BIP158)
- Protocol evolution support
Node and operational features
- Full Bitcoin node–style functionality (when configured and secured appropriately)
- Performance optimizations (PGO, parallel validation)
- Multiple storage backends with automatic fallback
- Multi-transport networking (TCP, QUIC, Iroh)
- Payment processing infrastructure
- REST API alongside JSON-RPC
Governance Infrastructure
- Cryptographic governance primitives
- Multisig threshold enforcement
- Transparent audit trails
- Forkable governance rules
See Also
- Component Relationships - Detailed component interactions
- Design Philosophy - Core design principles
- Module System - Module system architecture
- Node Overview - Node implementation details
- Consensus Overview - Consensus layer details
- Protocol Overview - Protocol layer details
Component Relationships
BLVM implements a 6-tier layered architecture where each tier builds upon the previous one.
Dependency Graph
Edges point from a crate toward a crate it depends on (library import direction). The Orange Paper is not a Rust crate; it informs consensus (dotted).
blvm-primitives (types, serialization, crypto) sits under blvm-consensus and blvm-protocol; it is not shown as its own tier here.
Layer Descriptions
Tier 1: Orange Paper (blvm-spec)
- Purpose: Mathematical foundation - timeless consensus rules
- Type: Documentation and specification
- Governance: Layer 1 (Constitutional - 6-of-7 maintainers, 180 days, see Layer-Tier Model)
Tier 2: Consensus Layer (blvm-consensus)
- Purpose: Pure mathematical implementation of Orange Paper functions
- Type: Rust library (pure functions, no side effects)
- Dependencies: blvm-primitives (shared types, serialization, consensus crypto); pinned transitive crates as in Cargo.toml
- Governance: Layer 2 (Constitutional - 6-of-7 maintainers, 180 days, see Layer-Tier Model)
- Key Functions: CheckTransaction, ConnectBlock, EvalScript, VerifyScript
Tier 3: Protocol Layer (blvm-protocol)
- Purpose: Protocol abstraction layer for multiple Bitcoin variants
- Type: Rust library
- Dependencies: blvm-consensus and blvm-primitives (exact / ranged versions per Cargo.toml)
- Governance: Layer 3 (Implementation - 4-of-5 maintainers, 90 days, see Layer-Tier Model)
- Supports: mainnet, testnet, regtest, and additional protocol variants
Tier 4: Node Implementation (blvm-node)
- Purpose: Minimal reference full node—non-consensus infrastructure only; deploy with security hardening and check System Status for governance and maturity
- Type: Rust binaries (full node)
- Dependencies: blvm-protocol, blvm-consensus (exact versions)
- Governance: Layer 4 (Application - 3-of-5 maintainers, 60 days, see Layer-Tier Model)
- Components: Block validation, storage, P2P networking, RPC, mining
Tier 5: Developer SDK (blvm-sdk)
- Purpose: Developer toolkit and governance cryptographic primitives
- Type: Rust library and CLI tools
- Dependencies: Declares blvm-protocol and blvm-consensus on crates.io (for composition and module tooling); optional blvm-node via the default node feature. See the crate Cargo.toml.
- Governance: Layer 5 (Extension - 2-of-3 maintainers, 14 days, see Layer-Tier Model)
- Components: Key generation, signing, verification, multisig operations
Tier 6: Governance Infrastructure (blvm-commons)
- Purpose: Cryptographic governance enforcement
- Type: Rust service (GitHub App / server binaries)
- Dependencies: blvm-sdk, blvm-protocol (see blvm-commons/Cargo.toml)
- Governance: Layer 5 (Extension - 2-of-3 maintainers, 14 days)
- Components: GitHub integration, signature verification, status checks
Data flow
The dependency graph above is the accurate picture for crate dependencies. At runtime, blocks and transactions flow through the node, which calls into protocol and consensus libraries to validate; the Orange Paper remains the specification those libraries implement.
Figure: Operational view (IBD, validation, modules, governance). For which crate depends on which, use the dependency graph in this page.
- Orange Paper specifies consensus rules.
- blvm-consensus implements those rules (pure functions).
- blvm-protocol layers network parameters, wire helpers, and protocol policy on top of consensus types.
- blvm-node runs networking, storage, RPC, and orchestration; validation calls into protocol + consensus.
- blvm-sdk supplies governance crypto, composition, and (optional) node integration for modules.
- blvm-commons runs governance enforcement services using blvm-sdk and blvm-protocol types.
Cross-Layer Validation
- Dependencies between layers are strictly enforced in the crate graph (no application layer should reimplement consensus).
- Consensus rule modifications are prevented in application layers by design (validation calls into blvm-consensus).
- The Orange Paper is the specification; blvm-consensus is checked with formal verification, tests, and review—not a single “one-shot” equivalence proof of the whole spec.
- Version coordination (Cargo / release sets) keeps compatible crate versions together.
See Also
- System Overview - High-level architecture overview
- Design Philosophy - Core design principles
- Consensus Architecture - Consensus layer details
- Protocol Architecture - Protocol layer details
- Node Overview - Node implementation details
Design Philosophy
BLVM is built on core principles that guide all design decisions.
Core Principles
1. Mathematical Correctness First
- Direct implementation of Orange Paper specifications
- No interpretation or approximation
- BLVM Specification Lock, tests, and review validate consensus-critical code against the Orange Paper
- Pure functions with no side effects (where the design allows)
2. Layered Architecture
- Clear separation of concerns
- Each layer builds on previous layers
- No circular dependencies
- Independent versioning where possible
3. Zero Consensus Re-implementation
- All consensus logic comes from blvm-consensus
- Application layers cannot modify consensus rules
- Protocol abstraction enables variants without consensus changes
- Clear security boundaries
4. Cryptographic Governance
- Apply Bitcoin’s cryptographic primitives to governance
- Make power visible, capture expensive, exit cheap
- Multi-signature requirements for all changes
- Transparent audit trails
5. User Sovereignty
- Users control what software they run
- No forced network upgrades
- Forkable governance model
Design Decisions
Why Pure Functions?
Pure functions are:
- Testable: Same input always produces same output
- Verifiable: Mathematical properties can be proven
- Composable: Can be combined without side effects
- Predictable: No hidden state or dependencies
Why Layered Architecture?
Layered architecture provides:
- Separation of Concerns: Each layer has a single responsibility
- Reusability: Lower layers can be used independently
- Testability: Each layer can be tested in isolation
- Evolution: Protocol can evolve without consensus changes
Why Formal Verification?
BLVM Specification Lock adds Z3-checked proofs on spec-locked consensus code, alongside tests and review:
- Correctness: Machine-checked linkage to Orange Paper contracts (PROOF_LIMITATIONS.md)
- Defense in depth: Fuzzing and integration tests exercise behavior; spec-lock discharges proofs on spec-locked functions
- Auditability: CI and optional OpenTimestamps on verification artifacts
Why Cryptographic Governance?
Cryptographic governance provides:
- Transparency: All decisions are cryptographically verifiable
- Accountability: Clear audit trail of all actions
- Resistance to Capture: Multi-signature requirements make capture expensive
- User Protection: Forkable governance allows users to exit if they disagree
Trade-offs
Performance vs Correctness
- Choice: Correctness first
- Rationale: Consensus violations are catastrophic
- Mitigation: Optimize after verification
Flexibility vs Safety
- Choice: Safety first
- Rationale: Bitcoin consensus must be stable
- Mitigation: Protocol abstraction enables experimentation
Simplicity vs Features
- Choice: Simplicity where possible
- Rationale: Complex code is harder to verify
- Mitigation: Add features only when necessary
Design Evolution
BLVM is designed to support Bitcoin’s evolution for the next 500 years:
- Protocol Evolution: New variants without consensus changes
- Feature Addition: New capabilities through protocol abstraction
- Governance Evolution: Governance rules can evolve through proper process
- User Choice: Multiple implementations can coexist
See Also
- System Overview - High-level architecture overview
- Component Relationships - Layer dependencies and data flow
- Consensus Architecture - Mathematical correctness implementation
- Governance Overview - Cryptographic governance system
- Orange Paper - Mathematical foundation
Module System
Overview
The module system supports optional features (Lightning Network, merge mining, privacy enhancements) without affecting consensus or base node stability. Modules run in separate processes with IPC communication, providing security through isolation.
Available Modules
The following modules are available for blvm-node:
- Lightning Network Module - Lightning Network payment processing, invoice verification, payment routing, and channel management
- Commons Mesh Module - Payment-gated mesh networking with routing fees, traffic classification, and anti-monopoly protection
- Stratum V2 Module - Stratum V2 mining protocol support and mining pool management
- Datum Module - DATUM Gateway mining protocol
- Mining OS Module - MiningOS integration
- Merge Mining Module - Merge mining, available as a separate paid plugin (`blvm-merge-mining`)
For detailed documentation on each module, see the Modules section.
Writing modules: Use the SDK declarative style (blvm-sdk attribute macros and run_module!) to define CLI, RPC, and event handling in one impl block without manual IPC loops; see Module Development. Alternatively use the integration API or low-level IPC for custom control.
Architecture
Process Isolation
Each module runs in a separate process with isolated memory. The base node consensus state is protected and read-only to modules.
[Architecture diagram: the base node process contains the Module Manager (orchestration), Network Manager, Storage Manager, RPC Manager, and the consensus state (protected, read-only). Each module process (e.g. blvm-lightning, blvm-mesh, blvm-stratum-v2) holds its own isolated state inside a sandbox with resource limits and talks to the Module Manager over IPC via Unix sockets; modules get only read-only access to consensus state through the Module Manager.]
Code: mod.rs
Core Components
ModuleManager
Orchestrates all modules, handling lifecycle, runtime loading/unloading/reloading, and coordination.
Features:
- Module discovery and loading
- Process spawning and monitoring
- IPC server management
- Event subscription management
- Dependency resolution
- Registry integration
Code: manager.rs
Process Isolation
Modules run in separate processes via ModuleProcessSpawner:
- Separate memory space
- Isolated execution environment
- Resource limits enforced
- Crash containment
Code: spawner.rs
IPC Communication
Modules communicate with the base node via Unix domain sockets (Unix) or named pipes (Windows):
- Request/response protocol
- Event subscription system (`SubscribeEvents`/`EventType`, node → module notifications)
- Correlation IDs for async operations
- Type-safe message serialization
- Targeted node control (module → node): `NodeAPI`/IPC also exposes bounded writes that are not consensus changes, e.g. P2P serve denylists (block/tx `getdata` policy), `get_sync_status`, `ban_peer`, and block-serve maintenance mode. Details: Module IPC Protocol, Module Development.
Code: protocol.rs
Security Sandbox
Modules run in sandboxed environments with:
- Resource limits (CPU, memory, file descriptors)
- Filesystem restrictions
- Network restrictions
- Permission-based API access
Code: network.rs
Permission System
Modules request capabilities that are validated before API access. Capabilities use snake_case in module.toml (e.g., read_blockchain) and map to Permission enum variants (e.g., ReadBlockchain).
Core Permissions:
- `read_blockchain` / `ReadBlockchain` - Read-only blockchain access (blocks, headers, transactions)
- `read_utxo` / `ReadUTXO` - Query UTXO set (read-only)
- `read_chain_state` / `ReadChainState` - Query chain state (height, tip)
- `subscribe_events` / `SubscribeEvents` - Subscribe to node events
- `send_transactions` / `SendTransactions` - Submit transactions to mempool (future: may be restricted)
Mempool & Network Permissions:
- `read_mempool` / `ReadMempool` - Read mempool data (transactions, size, fee estimates)
- `read_network` / `ReadNetwork` - Read network data (peers, stats)
- `network_access` / `NetworkAccess` - Send network packets (mesh packets, etc.)
Lightning & Payment Permissions:
- `read_lightning` / `ReadLightning` - Read Lightning network data
- `read_payment` / `ReadPayment` - Read payment data
Storage Permissions:
- `read_storage` / `ReadStorage` - Read from module storage
- `write_storage` / `WriteStorage` - Write to module storage
- `manage_storage` / `ManageStorage` - Manage storage (create/delete trees, manage quotas)
Filesystem Permissions:
- `read_filesystem` / `ReadFilesystem` - Read files from module data directory
- `write_filesystem` / `WriteFilesystem` - Write files to module data directory
- `manage_filesystem` / `ManageFilesystem` - Manage filesystem (create/delete directories, manage quotas)
RPC & Timers Permissions:
- `register_rpc_endpoint` / `RegisterRpcEndpoint` - Register RPC endpoints
- `manage_timers` / `ManageTimers` - Manage timers and scheduled tasks
Metrics Permissions:
- `report_metrics` / `ReportMetrics` - Report metrics
- `read_metrics` / `ReadMetrics` - Read metrics
Module Communication Permissions:
- `discover_modules` / `DiscoverModules` - Discover other modules
- `publish_events` / `PublishEvents` - Publish events to other modules
- `call_module` / `CallModule` - Call other modules' APIs
- `register_module_api` / `RegisterModuleApi` - Register module API for other modules to call
Code: permissions.rs
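As an illustration of this mapping, a capability parser can be sketched as follows. The enum and function below are hypothetical stand-ins; the real `Permission` definitions and validation logic live in permissions.rs.

```rust
// Hypothetical sketch of the snake_case-to-variant capability mapping;
// the real Permission enum and checks live in permissions.rs.
#[derive(Debug, PartialEq)]
enum Permission {
    ReadBlockchain,
    ReadUTXO,
    SubscribeEvents,
}

// Map a snake_case capability string from module.toml to its variant.
fn parse_capability(s: &str) -> Option<Permission> {
    match s {
        "read_blockchain" => Some(Permission::ReadBlockchain),
        "read_utxo" => Some(Permission::ReadUTXO),
        "subscribe_events" => Some(Permission::SubscribeEvents),
        _ => None, // unknown capabilities are rejected at load time
    }
}

fn main() {
    assert_eq!(parse_capability("read_blockchain"), Some(Permission::ReadBlockchain));
    // A capability the node does not know about is refused outright.
    assert_eq!(parse_capability("write_consensus"), None);
}
```

Rejecting unknown strings (rather than ignoring them) keeps the permission surface closed: a module cannot smuggle in a capability the node has never defined.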
Module Lifecycle
Discovery → Verification → Loading → Execution → Monitoring
│ │ │ │ │
│ │ │ │ │
▼ ▼ ▼ ▼ ▼
Registry Signer Loader Process Monitor
Discovery
Modules discovered through:
- Local filesystem (`modules/` directory)
- Module registry (REST API)
- Manual installation
Code: discovery.rs
Verification
Each module verified through:
- Hash verification (binary integrity)
- Signature verification (multisig maintainer signatures)
- Permission checking (capability validation)
- Compatibility checking (version requirements)
Code: manifest_validator.rs
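As a small illustration of the multisig step, the "m-of-n" threshold notation used in manifests can be checked with a parser like the sketch below. This is illustrative only; the actual hash, signature, and compatibility checks live in manifest_validator.rs.

```rust
// Illustrative sketch: parse an "m-of-n" threshold string and decide
// whether enough valid maintainer signatures are present.
fn parse_threshold(s: &str) -> Option<(usize, usize)> {
    let (required, total) = s.split_once("-of-")?;
    let required: usize = required.parse().ok()?;
    let total: usize = total.parse().ok()?;
    if required == 0 || required > total {
        return None; // e.g. "4-of-3" is malformed
    }
    Some((required, total))
}

fn threshold_met(threshold: &str, valid_signatures: usize) -> bool {
    matches!(parse_threshold(threshold), Some((required, _)) if valid_signatures >= required)
}

fn main() {
    assert!(threshold_met("2-of-3", 2));
    assert!(!threshold_met("2-of-3", 1));
    assert!(!threshold_met("4-of-3", 4)); // malformed threshold never passes
}
```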
Loading
Module loaded into isolated process:
- Sandbox creation (resource limits)
- IPC connection establishment
- API subscription setup
Code: manager.rs
Execution
Module runs in isolated process:
- Separate memory space
- Resource limits enforced
- IPC communication only
- Event subscription active
Monitoring
Module health monitored:
- Process status tracking
- Resource usage monitoring
- Error tracking
- Crash isolation
Code: monitor.rs
Security Model
Consensus Isolation
Modules cannot:
- Modify consensus rules
- Modify UTXO set
- Access node private keys
- Bypass security boundaries
- Affect other modules
Guarantee: Module failures are isolated and cannot affect consensus.
Crash Containment
Module crashes are isolated and do not affect the base node. The ModuleProcessMonitor detects crashes and automatically removes failed modules.
Code: manager.rs
Security Flow
Module Binary
│
├─→ Hash Verification ──→ Integrity Check
│
├─→ Signature Verification ──→ Multisig Check ──→ Maintainer Verification
│
├─→ Permission Check ──→ Capability Validation
│
└─→ Sandbox Creation ──→ Resource Limits ──→ Isolation
Module Manifest
Module manifests use TOML format:
```toml
# Module Identity
name = "lightning-network"
version = "1.2.3"
description = "Lightning Network implementation"
author = "Alice <alice@example.com>"

# Capabilities (top-level keys must precede the first table header)
capabilities = [
    "read_blockchain",
    "subscribe_events"
]

# Governance
[governance]
tier = "application"
maintainers = ["alice", "bob", "charlie"]
threshold = "2-of-3"
review_period_days = 14

# Signatures
[signatures]
maintainers = [
    { name = "alice", key = "02abc...", signature = "..." },
    { name = "bob", key = "03def...", signature = "..." }
]
threshold = "2-of-3"

# Binary
[binary]
hash = "sha256:abc123..."
size = 1234567
download_url = "https://registry.bitcoincommons.org/modules/lightning-network/1.2.3"

# Dependencies
[dependencies]
"blvm-node" = ">=1.0.0"
"another-module" = ">=0.5.0"

# Compatibility
[compatibility]
min_consensus_version = "1.0.0"
min_protocol_version = "1.0.0"
min_node_version = "1.0.0"
tested_with = ["1.0.0", "1.1.0"]
```
Code: manifest.rs
API Hub
The ModuleApiHub routes API requests from modules to the appropriate handlers:
- Blockchain API (blocks, headers, transactions)
- Governance API (proposals, votes)
- Communication API (P2P messaging)
Code: hub.rs
Event System
The module event system provides a comprehensive, consistent, and reliable way for modules to receive notifications about node state changes, blockchain events, and system lifecycle events.
Event Subscription
Modules subscribe to events they need during initialization:
```rust
let event_types = vec![
    EventType::NewBlock,
    EventType::NewTransaction,
    EventType::ModuleLoaded,
    EventType::ConfigLoaded,
];
client.subscribe_events(event_types).await?;
```
Event Categories
Core Blockchain Events:
- `NewBlock` - Block connected to chain
- `NewTransaction` - Transaction in mempool
- `BlockDisconnected` - Block disconnected (reorg)
- `ChainReorg` - Chain reorganization
Payment Events:
- `PaymentRequestCreated` - Payment request created
- `PaymentSettled` - Payment settled (confirmed on-chain)
- `PaymentFailed` - Payment failed
- `PaymentVerified` - Lightning payment verified
- `PaymentRouteFound` - Payment route discovered
- `PaymentRouteFailed` - Payment routing failed
- `ChannelOpened` - Lightning channel opened
- `ChannelClosed` - Lightning channel closed
Mining Events:
- `BlockMined` - Block mined successfully
- `BlockTemplateUpdated` - Block template updated
- `MiningDifficultyChanged` - Mining difficulty changed
- `MiningJobCreated` - Mining job created
- `ShareSubmitted` - Mining share submitted
- `MergeMiningReward` - Merge mining reward received
- `MiningPoolConnected` - Mining pool connected
- `MiningPoolDisconnected` - Mining pool disconnected
Mesh Networking Events:
- `MeshPacketReceived` - Mesh packet received from network
Stratum V2 Events:
- `StratumV2MessageReceived` - Stratum V2 message received from network
Module Lifecycle Events:
- `ModuleLoaded` - Module loaded (published after subscription)
- `ModuleUnloaded` - Module unloaded
- `ModuleCrashed` - Module crashed
- `ModuleDiscovered` - Module discovered
- `ModuleInstalled` - Module installed
- `ModuleUpdated` - Module updated
- `ModuleRemoved` - Module removed
Configuration Events:
- `ConfigLoaded` - Node configuration loaded/changed
Node Lifecycle Events:
- `NodeStartupCompleted` - Node fully operational
- `NodeShutdown` - Node shutting down
- `NodeShutdownCompleted` - Shutdown complete
Maintenance Events:
- `DataMaintenance` - Unified cleanup/flush event (replaces StorageFlush + DataCleanup)
- `MaintenanceStarted` - Maintenance started
- `MaintenanceCompleted` - Maintenance completed
- `HealthCheck` - Health check performed
Resource Management Events:
- `DiskSpaceLow` - Disk space low
- `ResourceLimitWarning` - Resource limit warning
Governance Events:
- `GovernanceProposalCreated` - Proposal created
- `GovernanceProposalVoted` - Vote cast
- `GovernanceProposalMerged` - Proposal merged
- `GovernanceForkDetected` - Governance fork detected
- `WebhookSent` - Webhook sent
- `WebhookFailed` - Webhook delivery failed
Network Events:
- `PeerConnected` - Peer connected
- `PeerDisconnected` - Peer disconnected
- `PeerBanned` - Peer banned
- `PeerUnbanned` - Peer unbanned
- `MessageReceived` - Network message received
- `MessageSent` - Network message sent
- `BroadcastStarted` - Broadcast started
- `BroadcastCompleted` - Broadcast completed
- `RouteDiscovered` - Route discovered
- `RouteFailed` - Route failed
- `ConnectionAttempt` - Connection attempt (success/failure)
- `AddressDiscovered` - New peer address discovered
- `AddressExpired` - Peer address expired
- `NetworkPartition` - Network partition detected
- `NetworkReconnected` - Network partition reconnected
- `DoSAttackDetected` - DoS attack detected
- `RateLimitExceeded` - Rate limit exceeded
Consensus Events:
- `BlockValidationStarted` - Block validation started
- `BlockValidationCompleted` - Block validation completed (success/failure)
- `ScriptVerificationStarted` - Script verification started
- `ScriptVerificationCompleted` - Script verification completed
- `UTXOValidationStarted` - UTXO validation started
- `UTXOValidationCompleted` - UTXO validation completed
- `DifficultyAdjusted` - Network difficulty adjusted
- `SoftForkActivated` - Soft fork activated (SegWit, Taproot, CTV, etc.)
- `SoftForkLockedIn` - Soft fork locked in (BIP9)
- `ConsensusRuleViolation` - Consensus rule violation detected
Sync Events:
- `HeadersSyncStarted` - Headers sync started
- `HeadersSyncProgress` - Headers sync progress update
- `HeadersSyncCompleted` - Headers sync completed
- `BlockSyncStarted` - Block sync started (IBD)
- `BlockSyncProgress` - Block sync progress update
- `BlockSyncCompleted` - Block sync completed
Mempool Events:
- `MempoolTransactionAdded` - Transaction added to mempool
- `MempoolTransactionRemoved` - Transaction removed from mempool
- `FeeRateChanged` - Fee rate changed
Additional Event Categories:
- Dandelion++ Events (DandelionStemStarted, DandelionStemAdvanced, DandelionFluffed, etc.)
- Compact Blocks Events (CompactBlockReceived, BlockReconstructionStarted, etc.)
- FIBRE Events (FibreBlockEncoded, FibreBlockSent, FibrePeerRegistered)
- Package Relay Events (PackageReceived, PackageRejected)
- UTXO Commitments Events (UtxoCommitmentReceived, UtxoCommitmentVerified)
- Ban List Sharing Events (BanListShared, BanListReceived)
For a complete list of all event types, see EventType enum.
Event Delivery Guarantees
At-Most-Once Delivery:
- Events are delivered at most once per subscriber
- If channel is full, event is dropped (not retried)
- If channel is closed, module is removed from subscriptions
Best-Effort Delivery:
- Events are delivered on a best-effort basis
- No guaranteed delivery (modules can be slow/dead)
- Statistics track delivery success/failure rates
Ordering Guarantees:
- Events are delivered in order per module (single channel)
- No cross-module ordering guarantees
- ModuleLoaded events are ordered: subscription → ModuleLoaded
Event Timing and Consistency
ModuleLoaded Event Timing:
- `ModuleLoaded` events are only published AFTER a module has subscribed (after startup is complete)
- This ensures modules are fully ready before receiving ModuleLoaded events
- Hotloaded modules automatically receive all already-loaded modules when subscribing
Event Flow:
- Module process is spawned
- Module connects via IPC and sends Handshake
- Module sends `SubscribeEvents` request
- At subscription time:
  - Module receives `ModuleLoaded` events for all already-loaded modules (hotloaded modules get existing modules)
  - `ModuleLoaded` is published for the newly subscribing module (if it's loaded)
- Module is now fully operational
Event Delivery Reliability
Channel Buffering:
- 100-event buffer per module (prevents unbounded memory growth)
- Non-blocking delivery (publisher never blocks)
- Channel full events are tracked in statistics
Error Handling:
- Channel Full: Event dropped with warning, module subscription NOT removed (module is slow, not dead)
- Channel Closed: Module subscription removed, statistics track failed delivery
- Serialization Errors: Event dropped with warning, module subscription NOT removed
Delivery Statistics:
- Track success/failure/channel-full counts per module
- Available via `EventManager::get_delivery_stats()`
- Useful for monitoring and debugging
Code: events.rs
For detailed event system documentation, see:
- Event System Integration - Complete integration guide
- Event Consistency - Event timing and consistency guarantees
- Janitorial Events - Maintenance and lifecycle events
Module Registry
Modules can be discovered and installed from a module registry:
- REST API client for module discovery
- Binary download and verification
- Dependency resolution
- Signature verification
Code: client.rs
Usage
Loading a Module
```rust
use blvm_node::module::{ModuleManager, ModuleMetadata};

let mut manager = ModuleManager::new(
    modules_dir,
    data_dir,
    socket_dir,
);
manager.start(socket_path, node_api).await?;
manager.load_module(
    "lightning-network",
    binary_path,
    metadata,
    config,
).await?;
```
Auto-Discovery
```rust
// Automatically discover and load all modules
manager.auto_load_modules().await?;
```
Code: manager.rs
Benefits
- Consensus Isolation: Modules cannot affect consensus rules
- Crash Containment: Module failures don’t affect base node
- Security: Process isolation and permission system
- Extensibility: Add features without consensus changes
- Flexibility: Load/unload modules at runtime
- Governance: Modules subject to governance approval
Use Cases
- Lightning Network: Payment channel management
- Merge Mining: Auxiliary chain support
- Privacy Enhancements: Transaction mixing, coinjoin
- Alternative Mempool Policies: Custom transaction selection
- Smart Contracts: Layer 2 contract execution
Components
The module system includes:
- Process isolation
- IPC communication
- Security sandboxing
- Permission system
- Module registry
- Event system
- API hub
Location: blvm-node/src/module/
IPC Communication
Modules communicate with the node via the Module IPC Protocol:
- Protocol: Length-delimited binary messages over Unix domain sockets
- Message Types: Requests, Responses, Events, Logs
- Security: Process isolation, permission-based API access, resource sandboxing
- Performance: Persistent connections, concurrent requests, correlation IDs
Integration Approaches
There are two approaches for modules to integrate with the node:
1. ModuleIntegration (Recommended for New Modules)
The ModuleIntegration API provides a simplified, unified interface for module integration:
```rust
use blvm_node::module::integration::ModuleIntegration;

// Connect to node (socket_path must be PathBuf)
let socket_path = std::path::PathBuf::from(socket_path);
let mut integration = ModuleIntegration::connect(
    socket_path,
    module_id,
    module_name,
    version,
).await?;

// Subscribe to events
integration.subscribe_events(event_types).await?;

// Get NodeAPI
let node_api = integration.node_api();

// Get event receiver
let mut event_receiver = integration.event_receiver();
```
Benefits:
- Single unified API for all integration needs
- Automatic handshake and connection management
- Simplified event subscription
- Direct access to NodeAPI and event receiver
Used by: blvm-mesh and its submodules (blvm-onion, blvm-mining-pool, blvm-messaging, blvm-bridge)
2. ModuleClient + NodeApiIpc (Legacy Approach)
The traditional approach uses separate components:
```rust
use blvm_node::module::ipc::client::ModuleIpcClient;
use blvm_node::module::api::node_api::NodeApiIpc;

// Connect to IPC socket
let mut ipc_client = ModuleIpcClient::connect(&socket_path).await?;

// Perform handshake manually
let handshake_request = RequestMessage { /* ... */ };
let response = ipc_client.request(handshake_request).await?;

// Create NodeAPI wrapper
// NodeApiIpc requires Arc<Mutex<ModuleIpcClient>> and module_id
let ipc_client_arc = Arc::new(tokio::sync::Mutex::new(ipc_client));
let node_api = Arc::new(NodeApiIpc::new(ipc_client_arc, "my-module".to_string()));

// Create ModuleClient for event subscription
let mut client = ModuleClient::connect(/* ... */).await?;
client.subscribe_events(event_types).await?;
let mut event_receiver = client.event_receiver();
```
Benefits:
- More granular control over IPC communication
- Direct access to IPC client for custom requests
- Established, stable API
Used by: blvm-lightning, blvm-stratum-v2, blvm-datum, blvm-miningos
Migration: New modules should use ModuleIntegration. Existing modules can continue using ModuleClient + NodeApiIpc, but migration to ModuleIntegration is recommended for consistency and simplicity.
For detailed protocol documentation, see Module IPC Protocol.
See Also
- Module IPC Protocol - Complete IPC protocol documentation
- Modules Overview - Overview of all available modules
- Lightning Network Module - Lightning Network payment processing
- Commons Mesh Module - Payment-gated mesh networking
- Stratum V2 Module - Stratum V2 mining protocol
- Datum Module - DATUM Gateway mining protocol
- Mining OS Module - MiningOS integration
- Module Development - Guide for developing custom modules
Module IPC Protocol
Overview
The Module IPC (Inter-Process Communication) protocol enables secure communication between process-isolated modules and the base node. Modules run in separate processes and communicate via Unix domain sockets using a length-delimited binary message protocol.
Architecture
┌─────────────────────────────────────┐
│ blvm-node Process │
│ ┌───────────────────────────────┐ │
│ │ Module IPC Server │ │
│ │ (Unix Domain Socket) │ │
│ └───────────────────────────────┘ │
└─────────────────────────────────────┘
│ IPC Protocol
│ (Unix Domain Socket)
│
┌─────────────┴─────────────────────┐
│ Module Process (Isolated) │
│ ┌───────────────────────────────┐ │
│ │ Module IPC Client │ │
│ │ (Unix Domain Socket) │ │
│ └───────────────────────────────┘ │
└─────────────────────────────────────┘
Protocol Format
Message Encoding
Messages use length-delimited binary encoding:
[4-byte length][message payload]
- Length: 4-byte little-endian integer (message size)
- Payload: Binary-encoded message (bincode serialization)
Code: mod.rs
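The framing layer can be sketched from the documented format alone. This is an illustrative, std-only sketch of the 4-byte little-endian length prefix; real messages carry bincode-serialized structs, and the actual framing lives in mod.rs.

```rust
use std::convert::TryInto;

// Sketch of the documented framing: 4-byte little-endian length prefix,
// then the payload bytes.
fn encode_frame(payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(4 + payload.len());
    frame.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    frame.extend_from_slice(payload);
    frame
}

// Try to decode one frame from a read buffer; returns the payload and
// the number of bytes consumed, or None if the frame is incomplete.
fn decode_frame(buf: &[u8]) -> Option<(&[u8], usize)> {
    let len_bytes: [u8; 4] = buf.get(..4)?.try_into().ok()?;
    let len = u32::from_le_bytes(len_bytes) as usize;
    let payload = buf.get(4..4 + len)?;
    Some((payload, 4 + len))
}

fn main() {
    let frame = encode_frame(b"hello");
    assert_eq!(frame.len(), 9); // 4-byte prefix + 5 payload bytes
    let (payload, consumed) = decode_frame(&frame).expect("complete frame");
    assert_eq!(payload, b"hello");
    assert_eq!(consumed, 9);
    assert!(decode_frame(&frame[..6]).is_none()); // partial frame: wait for more bytes
}
```

Returning `None` for a partial frame lets the reader accumulate bytes from the socket and retry, which is the usual pattern for length-delimited protocols.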
Message Types
The protocol supports four message types:
- Request: Module → Node (API calls)
- Response: Node → Module (API responses)
- Event: Node → Module (event notifications)
- Log: Module → Node (logging)
Code: protocol.rs
Message Structure
Request Message
```rust
pub struct RequestMessage {
    pub correlation_id: CorrelationId,
    pub request_type: MessageType,
    pub payload: RequestPayload,
}
```
Request types (representative):
Reads and subscriptions include GetBlock, GetBlockHeader, GetTransaction, GetChainTip, GetBlockHeight, GetUTXO, SubscribeEvents, GetMempoolTransactions, GetNetworkStats, GetNetworkPeers, GetChainInfo, and many others (mining, storage, RPC, timers, …).
P2P serve policy & sync (module → node):
| MessageType | Role |
|---|---|
| MergeBlockServeDenylist | Add block hashes that must not receive a full block on getdata (notfound instead). |
| GetBlockServeDenylistSnapshot | Bounded snapshot of the block denylist. |
| ClearBlockServeDenylist / ReplaceBlockServeDenylist | Clear or replace the full set. |
| MergeTxServeDenylist | Same pattern for full tx on getdata. |
| GetTxServeDenylistSnapshot | Bounded snapshot of the tx denylist. |
| ClearTxServeDenylist / ReplaceTxServeDenylist | Clear or replace the tx set. |
| GetSyncStatus | Sync coordinator status (SyncStatus). |
| BanPeer | Ban peer by address; optional duration. |
| SetBlockServeMaintenanceMode | Refuse all full-block getdata answers when enabled. |
These affect relay/serving only, not consensus validation. See NodeAPI for the Rust surface.
Code: protocol.rs
Response Message
```rust
pub struct ResponseMessage {
    pub correlation_id: CorrelationId,
    pub payload: ResponsePayload,
}
```
Response payload variants carry typed data (blocks, templates, snapshots, booleans, errors, etc.); denylist merges return dedicated merged/snapshot payloads where applicable.
Code: protocol.rs
Event Message
```rust
pub struct EventMessage {
    pub event_type: EventType,
    pub payload: EventPayload,
}
```
Event Types (46+ event types):
- Network events: `PeerConnected`, `MessageReceived`, `PeerDisconnected`
- Payment events: `PaymentRequestCreated`, `PaymentVerified`, `PaymentSettled`
- Chain events: `NewBlock`, `ChainTipUpdated`, `BlockDisconnected`
- Mempool events: `MempoolTransactionAdded`, `FeeRateChanged`, `MempoolTransactionRemoved`
Code: protocol.rs
Log Message
```rust
pub struct LogMessage {
    pub level: LogLevel,
    pub message: String,
    pub module_id: String,
}
```
Log Levels: Error, Warn, Info, Debug, Trace
Code: protocol.rs
Communication Flow
Request-Response Pattern
- Module sends Request: Module sends request message with correlation ID
- Node processes Request: Node processes request and generates response
- Node sends Response: Node sends response with matching correlation ID
- Module receives Response: Module matches response to request using correlation ID
Code: server.rs
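The correlation-ID matching in steps 1-4 can be illustrated with a pending-request table. This is a simplified, synchronous sketch using std channels; the real client is async and lives in client.rs/server.rs, and the type names here are illustrative.

```rust
use std::collections::HashMap;
use std::sync::mpsc;

type CorrelationId = u64;

// Pending-request table: maps correlation IDs to response senders so an
// incoming response can be routed to the caller that issued the request.
struct PendingRequests {
    next_id: CorrelationId,
    waiting: HashMap<CorrelationId, mpsc::Sender<String>>,
}

impl PendingRequests {
    fn new() -> Self {
        Self { next_id: 0, waiting: HashMap::new() }
    }

    // Register a request, returning its correlation ID and a receiver
    // for the eventual response.
    fn register(&mut self) -> (CorrelationId, mpsc::Receiver<String>) {
        let id = self.next_id;
        self.next_id += 1;
        let (tx, rx) = mpsc::channel();
        self.waiting.insert(id, tx);
        (id, rx)
    }

    // Route an incoming response to the caller waiting on its ID.
    fn dispatch(&mut self, id: CorrelationId, response: String) -> bool {
        match self.waiting.remove(&id) {
            Some(tx) => tx.send(response).is_ok(),
            None => false, // unknown or already-completed correlation ID
        }
    }
}

fn main() {
    let mut pending = PendingRequests::new();
    let (id_a, rx_a) = pending.register();
    let (id_b, rx_b) = pending.register();
    // Responses may arrive out of order; correlation IDs match them up.
    pending.dispatch(id_b, "block header".to_string());
    pending.dispatch(id_a, "chain tip".to_string());
    assert_eq!(rx_a.recv().unwrap(), "chain tip");
    assert_eq!(rx_b.recv().unwrap(), "block header");
}
```

Because each entry is removed on dispatch, a duplicate or stale response is simply ignored rather than delivered twice.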
Event Subscription Pattern
- Module subscribes: Module sends `SubscribeEvents` request with event types
- Node confirms: Node sends subscription confirmation
- Node publishes Events: Node sends event messages as they occur
- Module receives Events: Module processes events asynchronously
Code: server.rs
Connection Management
Handshake
On connection, modules send a handshake message:
```rust
pub struct HandshakeMessage {
    pub module_id: String,
    pub capabilities: Vec<String>,
    pub version: String,
}
```
Code: server.rs
Connection Lifecycle
- Connect: Module connects to Unix domain socket
- Handshake: Module sends handshake, node validates
- Active: Connection active, ready for requests/events
- Disconnect: Connection closed (graceful or error)
Code: server.rs
Security
Process Isolation
- Modules run in separate processes with isolated memory
- No shared memory between node and modules
- Module crashes don’t affect the base node
Code: spawner.rs
Permission System
Modules request capabilities that are validated before API access:
- `ReadBlockchain` - Read-only blockchain access
- `ReadUTXO` - Query UTXO set (read-only)
- `ReadChainState` - Query chain state (height, tip)
- `SubscribeEvents` - Subscribe to node events
- `SendTransactions` - Submit transactions to mempool
Code: permissions.rs
Sandboxing
Modules run in sandboxed environments with:
- Resource limits (CPU, memory, file descriptors)
- Filesystem restrictions
- Network restrictions (modules cannot open network connections)
- Permission-based API access
Code: mod.rs
Error Handling
Error Types
```rust
pub enum ModuleError {
    ConnectionError(String),
    ProtocolError(String),
    PermissionDenied(String),
    ResourceExhausted(String),
    Timeout(String),
}
```
Code: traits.rs
Error Recovery
- Connection Errors: Automatic reconnection with exponential backoff
- Protocol Errors: Clear error messages, connection termination
- Permission Errors: Detailed error messages, request rejection
- Timeout Errors: Request timeout, connection remains active
Code: client.rs
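The delay schedule such a reconnect loop might use can be sketched as below. The base and cap values are illustrative, not the node's actual configuration.

```rust
use std::time::Duration;

// Exponential backoff: the delay doubles on each failed attempt,
// capped at a maximum so reconnects never wait arbitrarily long.
fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64) -> Duration {
    let factor = 1u64 << attempt.min(16); // clamp the shift to avoid overflow
    Duration::from_millis(base_ms.saturating_mul(factor).min(max_ms))
}

fn main() {
    let delays: Vec<u64> = (0..7)
        .map(|a| backoff_delay(a, 100, 5_000).as_millis() as u64)
        .collect();
    // 100ms, 200ms, 400ms, 800ms, 1.6s, 3.2s, then capped at 5s.
    assert_eq!(delays, vec![100, 200, 400, 800, 1_600, 3_200, 5_000]);
}
```

Capping the delay keeps a long-lived outage from pushing the next reconnect attempt out indefinitely, while the doubling keeps a flapping socket from being hammered.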
Performance
Message Serialization
- Format: bincode (binary encoding)
- Size: Compact binary representation
- Speed: Fast serialization/deserialization
Code: protocol.rs
Connection Pooling
- Persistent Connections: Connections remain open for multiple requests
- Concurrent Requests: Multiple requests can be in-flight simultaneously
- Correlation IDs: Match responses to requests asynchronously
Code: client.rs
Implementation Details
IPC Server
The node-side IPC server:
- Listens on Unix domain socket
- Accepts module connections
- Routes requests to NodeAPI implementation
- Publishes events to subscribed modules
Code: server.rs
IPC Client
The module-side IPC client:
- Connects to Unix domain socket
- Sends requests and receives responses
- Subscribes to events
- Handles connection errors
Code: client.rs
See Also
- Module System - Module system architecture
- Module Development - Building modules
- Modules Overview - Available modules
Event System Integration
Overview
The module event system is designed to handle common integration pain points in distributed module architectures. This document covers all integration scenarios, reliability guarantees, and best practices.
Integration Pain Points Addressed
1. Event Delivery Reliability
Problem: Events can be lost if modules are slow or channels are full.
Solution:
- Channel Buffering: 100-event buffer per module (configurable)
- Non-Blocking Delivery: Uses `try_send` to avoid blocking the publisher
- Channel Full Handling: Events are dropped with warning (module is slow, not dead)
- Channel Closed Detection: Automatically removes dead modules from subscriptions
- Delivery Statistics: Track success/failure rates per module
Code:
```rust
// EventManager tracks delivery statistics
let stats = event_manager.get_delivery_stats("module_id").await;
// Returns: Option<(successful_deliveries, failed_deliveries, channel_full_count)>
```
2. Event Ordering and Timing
Problem: Events might arrive out of order or modules might miss events during startup.
Solution:
- ModuleLoaded Timing: Only published AFTER module subscribes (startup complete)
- Hotloaded Modules: Automatically receive all already-loaded modules when subscribing
- Consistent Ordering: Subscription → ModuleLoaded events (guaranteed order)
Flow:
- Module loads → Recorded in `loaded_modules`
- Module subscribes → Receives all already-loaded modules
- ModuleLoaded published → After subscription (startup complete)
3. Event Channel Backpressure
Problem: Fast publishers can overwhelm slow consumers.
Solution:
- Bounded Channels: 100-event buffer prevents unbounded memory growth
- Non-Blocking: Publisher never blocks, events dropped if channel full
- Statistics Tracking: Monitor channel full events to identify slow modules
- Automatic Cleanup: Dead modules automatically removed
Monitoring:
```rust
let stats = event_manager.get_delivery_stats("module_id").await;
if let Some((_, _, channel_full_count)) = stats {
    if channel_full_count > 100 {
        warn!("Module {} is slow, dropping events", module_id);
    }
}
```
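The drop-on-full behaviour can be demonstrated with a bounded channel. This std-only sketch uses a buffer of 2 rather than the node's async channels and 100-event buffer; the helper name is illustrative.

```rust
use std::sync::mpsc::{sync_channel, Receiver, TrySendError};

// Publish events into a bounded channel without blocking: events that
// do not fit are dropped and counted, mirroring the event manager's
// channel-full handling.
fn publish_non_blocking(events: &[u32], capacity: usize) -> (Receiver<u32>, u32) {
    let (tx, rx) = sync_channel(capacity);
    let mut dropped = 0;
    for &event in events {
        match tx.try_send(event) {
            Ok(()) => {}
            Err(TrySendError::Full(_)) => dropped += 1,   // slow subscriber: drop
            Err(TrySendError::Disconnected(_)) => break,  // dead subscriber: stop
        }
    }
    (rx, dropped)
}

fn main() {
    // Buffer of 2 (the real system uses 100 events per module).
    let (rx, dropped) = publish_non_blocking(&[0, 1, 2, 3, 4], 2);
    assert_eq!(dropped, 3); // events 2, 3, 4 did not fit
    assert_eq!(rx.try_recv().unwrap(), 0); // buffered events arrive in order
    assert_eq!(rx.try_recv().unwrap(), 1);
}
```

Counting drops instead of blocking is the key trade-off: the publisher's latency stays bounded, and the statistics expose which subscribers are too slow.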
4. Missing Events During Startup
Problem: Modules that start later miss events from earlier modules.
Solution:
- Hotloaded Module Support: Newly subscribing modules receive all already-loaded modules
- Event Replay: ModuleLoaded events sent to newly subscribing modules
- Consistent State: All modules have consistent view of loaded modules
5. Event Type Coverage
Problem: Not all events have corresponding payloads or are published.
Solution:
- Complete Coverage: All EventType variants have corresponding EventPayload variants
- Governance Events: All governance events are published
- Network Events: All network events are published
- Lifecycle Events: All lifecycle events are published
Event Categories
Core Blockchain Events
NewBlock: Block connected to chainNewTransaction: Transaction in mempoolBlockDisconnected: Block disconnected (reorg)ChainReorg: Chain reorganization
Governance Events
GovernanceProposalCreated: Proposal createdGovernanceProposalVoted: Vote castGovernanceProposalMerged: Proposal mergedGovernanceForkDetected: Fork detected
Network Events
- PeerConnected: Peer connected
- PeerDisconnected: Peer disconnected
- PeerBanned: Peer banned
- MessageReceived: Network message received
- BroadcastStarted: Broadcast started
- BroadcastCompleted: Broadcast completed
Module Lifecycle Events
- ModuleLoaded: Module loaded (after subscription)
- ModuleUnloaded: Module unloaded
- ModuleCrashed: Module crashed
- ModuleHealthChanged: Health status changed
Maintenance Events
- DataMaintenance: Unified cleanup/flush (replaces StorageFlush + DataCleanup)
- MaintenanceStarted: Maintenance started
- MaintenanceCompleted: Maintenance completed
- HealthCheck: Health check performed
Resource Management Events
- DiskSpaceLow: Disk space low
- ResourceLimitWarning: Resource limit warning
Event Delivery Guarantees
At-Most-Once Delivery
- Events are delivered at most once per subscriber
- If channel is full, event is dropped (not retried)
- If channel is closed, module is removed from subscriptions
Best-Effort Delivery
- Events are delivered on a best-effort basis
- No guaranteed delivery (modules can be slow/dead)
- Statistics track delivery success/failure rates
Ordering Guarantees
- Events are delivered in order per module (single channel)
- No cross-module ordering guarantees
- ModuleLoaded events are ordered: subscription → ModuleLoaded
Error Handling
Channel Full
- Event is dropped with warning
- Module subscription is NOT removed (module is slow, not dead)
- Statistics track channel full count
Channel Closed
- Module subscription is removed
- Statistics track failed delivery count
- Module is automatically cleaned up
Serialization Errors
- Event is dropped with warning
- Module subscription is NOT removed
- Error is logged for debugging
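The error-handling policy above boils down to one decision: does a failure mean the module is dead? A minimal sketch (hypothetical names, not the node's types) makes the rule explicit:

```rust
// Sketch of the policy table above: only a closed channel indicates a dead
// module; full channels and serialization failures keep the subscription.
#[derive(Debug)]
enum DeliveryError {
    ChannelFull,   // slow consumer: drop event, keep subscription
    ChannelClosed, // dead consumer: remove subscription, clean up
    Serialization, // bad payload: drop event, keep subscription
}

fn should_unsubscribe(err: &DeliveryError) -> bool {
    matches!(err, DeliveryError::ChannelClosed)
}

fn main() {
    assert!(should_unsubscribe(&DeliveryError::ChannelClosed));
    assert!(!should_unsubscribe(&DeliveryError::ChannelFull));
}
```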
Monitoring and Debugging
Delivery Statistics
```rust
// Get statistics for a module
let stats = event_manager.get_delivery_stats("module_id").await;
// Returns: Option<(successful, failed, channel_full)>

// Get statistics for all modules
let all_stats = event_manager.get_all_delivery_stats().await;
// Returns: HashMap<module_id, (successful, failed, channel_full)>

// Reset statistics (for testing)
event_manager.reset_delivery_stats("module_id").await;
```
Event Subscribers
```rust
// Get list of subscribers for an event type
let subscribers = event_manager.get_subscribers(EventType::NewBlock).await;
// Returns: Vec<module_id>
```
Best Practices
For Module Developers
- Subscribe Early: Subscribe to events as soon as possible after handshake
- Handle Events Quickly: Keep event handlers fast and non-blocking
- Monitor Statistics: Check delivery statistics to ensure events are received
- Handle ModuleLoaded: Always handle ModuleLoaded to know about other modules
- Graceful Shutdown: Handle NodeShutdown and DataMaintenance (urgency: “high”)
For Node Developers
- Publish Consistently: Publish events at consistent points in the code
- Use EventPublisher: Use EventPublisher for all event publishing
- Monitor Statistics: Monitor delivery statistics to identify slow modules
- Handle Errors: Log warnings for failed event deliveries
- Test Integration: Test event delivery in integration tests
Common Integration Scenarios
Scenario 1: Module Startup
- Module process spawned
- Module connects via IPC
- Module sends Handshake
- Module subscribes to events
- Module receives ModuleLoaded for all already-loaded modules
- ModuleLoaded published for this module (after subscription)
Scenario 2: Hotloaded Module
- Module B loads while Module A is already running
- Module B subscribes to events
- Module B receives ModuleLoaded for Module A
- ModuleLoaded published for Module B
- Module A receives ModuleLoaded for Module B
Scenario 3: Slow Module
- Module receives events slowly
- Event channel fills up (100 events)
- New events are dropped with warning
- Statistics track channel full count
- Module subscription is NOT removed (module is slow, not dead)
Scenario 4: Dead Module
- Module process crashes
- Event channel is closed
- Event delivery fails
- Module subscription is automatically removed
- Statistics track failed delivery count
Scenario 5: Governance Event Flow
- Network receives governance event
- Event published to governance module
- Governance module processes event
- Governance module may publish additional events
- All events delivered via same reliable channel
Configuration
Channel Buffer Size
Currently hardcoded to 100 events per module. Can be made configurable in the future.
Event Statistics
Statistics are kept in memory and reset on node restart. Can be persisted in the future.
Future Improvements
- Configurable Buffer Size: Make channel buffer size configurable per module
- Event Persistence: Persist events for replay after module restart
- Event Filtering: Allow modules to filter events by criteria
- Event Priority: Add priority queue for critical events
- Event Metrics: Add Prometheus metrics for event delivery
- Event Replay: Allow modules to replay missed events
See Also
- Module System - Module system architecture
- Event Consistency - Event timing and consistency guarantees
- Janitorial Events - Maintenance and lifecycle events
- Module IPC Protocol - IPC communication details
Module Event System Consistency
Overview
The module event system is designed to be consistent, minimal, and extensible. All events follow a clear pattern and timing to ensure modules can integrate seamlessly with the node.
Event Timing and Consistency
ModuleLoaded Event
Key Principle: ModuleLoaded events are only published AFTER a module has subscribed (after startup is complete).
Flow:
- Module process is spawned
- Module connects via IPC and sends Handshake
- Module sends SubscribeEvents request
- At subscription time:
  - Module receives ModuleLoaded events for all already-loaded modules (hotloaded modules get existing modules)
  - ModuleLoaded is published for the newly subscribing module (if it’s loaded)
- Module is now fully operational
Why this design?
- Ensures ModuleLoaded only happens after module is fully ready (subscribed)
- Hotloaded modules automatically receive all existing modules
- Consistent event ordering: subscription → ModuleLoaded
- No race conditions: modules can’t miss events
Example Flow
Startup (Module A loads first):
- Module A process spawned
- Module A connects and handshakes
- Module A subscribes to events
- ModuleLoaded published for Module A (no other modules yet)
Hotload (Module B loads later):
- Module B process spawned
- Module B connects and handshakes
- Module B subscribes to events
- Module B receives ModuleLoaded for Module A (already loaded)
- ModuleLoaded published for Module B (all modules get it)
Unified Events
DataMaintenance (Unified Cleanup/Flush)
Replaces: StorageFlush and DataCleanup (unified into one extensible event)
Purpose: Single event for all data maintenance operations
Payload:
- operation: “flush”, “cleanup”, or “both”
- urgency: “low”, “medium”, or “high”
- reason: “periodic”, “shutdown”, “low_disk”, “manual”
- target_age_days: Optional (for cleanup operations)
- timeout_seconds: Optional (for high urgency operations)
Usage Examples:
- Shutdown: DataMaintenance { operation: "flush", urgency: "high", reason: "shutdown", timeout_seconds: Some(5) }
- Periodic Cleanup: DataMaintenance { operation: "cleanup", urgency: "low", reason: "periodic", target_age_days: Some(30) }
- Low Disk: DataMaintenance { operation: "both", urgency: "high", reason: "low_disk", target_age_days: Some(7), timeout_seconds: Some(10) }
Benefits:
- Single event for all maintenance operations
- Extensible: easy to add new operation types or urgency levels
- Clear semantics: operation + urgency + reason
- Modules can handle all maintenance in one place
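The "handle all maintenance in one place" benefit can be sketched as a small dispatcher. `planned_steps` is a hypothetical helper, not part of the node API; it only shows how the `operation` field maps to concrete work:

```rust
// Sketch (hypothetical helper): map the DataMaintenance `operation` field to
// the steps a module should run, so one handler covers all maintenance.
fn planned_steps(operation: &str) -> Vec<&'static str> {
    match operation {
        "flush" => vec!["flush"],
        "cleanup" => vec!["cleanup"],
        "both" => vec!["flush", "cleanup"],
        // Unknown operations are ignored, which keeps the event extensible:
        // old modules tolerate new operation types.
        _ => vec![],
    }
}

fn main() {
    assert_eq!(planned_steps("both"), ["flush", "cleanup"]);
    assert!(planned_steps("defrag").is_empty());
}
```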
Event Categories
1. Node Lifecycle
- NodeStartupCompleted: Node is fully operational
- NodeShutdown: Node is shutting down (modules should clean up)
- NodeShutdownCompleted: Shutdown finished
2. Module Lifecycle
- ModuleLoaded: Module loaded and subscribed (after startup complete)
- ModuleUnloaded: Module unloaded
- ModuleReloaded: Module reloaded
- ModuleCrashed: Module crashed
3. Configuration
- ConfigLoaded: Node configuration loaded/changed
4. Maintenance
- DataMaintenance: Unified cleanup/flush event (replaces StorageFlush + DataCleanup)
- MaintenanceStarted: Maintenance operation started
- MaintenanceCompleted: Maintenance operation completed
- HealthCheck: Health check performed
5. Resource Management
- DiskSpaceLow: Disk space is low
- ResourceLimitWarning: Resource limit approaching
Best Practices
- Subscribe Early: Modules should subscribe to events as soon as possible after handshake
- Handle ModuleLoaded: Always handle ModuleLoaded to know about other modules
- DataMaintenance: Handle all maintenance operations in one place using DataMaintenance
- Graceful Shutdown: Always handle NodeShutdown and DataMaintenance (urgency: “high”)
- Non-Blocking: Keep event handlers fast and non-blocking
Consistency Guarantees
- ModuleLoaded Timing: Always happens after subscription (startup complete)
- Hotloaded Modules: Always receive all already-loaded modules
- Event Ordering: Consistent ordering (subscription → ModuleLoaded)
- No Race Conditions: Events are delivered reliably
- Unified Maintenance: Single event for all maintenance operations
Extensibility
The event system is designed to be easily extensible:
- Add New Events: Add to the EventType enum and the EventPayload enum
- Add Event Publishers: Add methods to EventPublisher
- Add Event Handlers: Modules subscribe and handle events
- Unified Patterns: Follow existing patterns (e.g., DataMaintenance)
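The "add a matched pair of variants" pattern can be sketched as follows. This is illustrative only: the real EventType/EventPayload enums live in the node crates, and FeeEstimateUpdated is a made-up example variant:

```rust
// Illustrative only — the real enums live in the node crates.
#[derive(Debug, PartialEq)]
enum EventType {
    NewBlock,
    DataMaintenance,
    FeeEstimateUpdated, // 1. add the new variant to EventType...
}

enum EventPayload {
    NewBlock { height: u64 },
    DataMaintenance { operation: String },
    FeeEstimateUpdated { sat_per_vb: u64 }, // 2. ...and its matching payload
}

// 3. Keep the pairing total: an exhaustive match means the compiler flags
// any payload variant added without a corresponding event type.
fn event_type_of(payload: &EventPayload) -> EventType {
    match payload {
        EventPayload::NewBlock { .. } => EventType::NewBlock,
        EventPayload::DataMaintenance { .. } => EventType::DataMaintenance,
        EventPayload::FeeEstimateUpdated { .. } => EventType::FeeEstimateUpdated,
    }
}

fn main() {
    let p = EventPayload::FeeEstimateUpdated { sat_per_vb: 12 };
    assert_eq!(event_type_of(&p), EventType::FeeEstimateUpdated);
}
```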
Migration from Old Events
Old: StorageFlush + DataCleanup
New: DataMaintenance with operation and urgency fields
Migration:
```rust
// Old
match event_type {
    EventType::StorageFlush => { flush_data().await?; }
    EventType::DataCleanup => { cleanup_data().await?; }
    _ => {}
}

// New
match event_type {
    EventType::DataMaintenance => {
        if let EventPayload::DataMaintenance { operation, .. } = payload {
            if operation == "flush" || operation == "both" {
                flush_data().await?;
            }
            if operation == "cleanup" || operation == "both" {
                cleanup_data().await?;
            }
        }
    }
    _ => {}
}
```
See Also
- Module System - Module system architecture
- Event System Integration - Complete integration guide
- Janitorial Events - Maintenance and lifecycle events
- Module IPC Protocol - IPC communication details
Janitorial and Maintenance Events
Overview
The module system provides comprehensive janitorial and maintenance events that allow modules to participate in node lifecycle, resource management, and data maintenance operations. This ensures modules can perform their own cleanup, maintenance, and resource management in sync with the node.
Event Categories
1. Node Lifecycle Events
NodeShutdown
When: Node is shutting down (before components stop)
Purpose: Allow modules to clean up gracefully
Payload:
- reason: String - Shutdown reason (“graceful”, “signal”, “rpc”, “error”)
- timeout_seconds: u64 - Graceful shutdown timeout
Module Action:
- Save state
- Close connections
- Flush data
- Clean up resources
NodeShutdownCompleted
When: Node shutdown is complete
Purpose: Notify modules that shutdown finished
Payload:
- duration_ms: u64 - Shutdown duration
NodeStartupCompleted
When: Node startup is complete (all components initialized)
Purpose: Notify modules that node is fully operational
Payload:
- duration_ms: u64 - Startup duration
- components: Vec<String> - Components that were initialized
Module Action:
- Initialize connections
- Load state
- Start processing
2. Storage Events
DataMaintenance (Unified)
When: Data maintenance is requested (shutdown, periodic, low disk, manual)
Purpose: Allow modules to flush data and/or clean up old data
Payload:
- operation: String - “flush”, “cleanup”, or “both”
- urgency: String - “low”, “medium”, or “high”
- reason: String - “periodic”, “shutdown”, “low_disk”, “manual”
- target_age_days: Option<u64> - Target age for cleanup (if operation includes cleanup)
- timeout_seconds: Option<u64> - Timeout for high urgency operations
Module Action:
- Flush: Write pending data to disk
- Cleanup: Delete old data based on target_age_days
- Both: Flush and cleanup
Urgency Levels:
- Low: Periodic maintenance, can be done asynchronously
- Medium: Scheduled maintenance, should complete soon
- High: Urgent (shutdown, low disk), must complete quickly
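The urgency levels above translate naturally into a completion budget. In this sketch, `maintenance_budget` is a hypothetical helper; the "high" branch uses the payload's timeout_seconds, while the medium/low defaults are illustrative values, not ones the node specifies:

```rust
use std::time::Duration;

// Sketch: derive a completion budget from the DataMaintenance urgency field.
// The 5s/60s defaults are illustrative assumptions.
fn maintenance_budget(urgency: &str, timeout_seconds: Option<u64>) -> Option<Duration> {
    match urgency {
        "high" => Some(Duration::from_secs(timeout_seconds.unwrap_or(5))), // shutdown, low disk
        "medium" => Some(Duration::from_secs(60)), // scheduled, should complete soon
        _ => None, // "low": run asynchronously, no hard deadline
    }
}

fn main() {
    assert_eq!(maintenance_budget("high", Some(10)), Some(Duration::from_secs(10)));
    assert_eq!(maintenance_budget("low", None), None);
}
```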
3. Maintenance Events
MaintenanceStarted
When: Maintenance operation started
Purpose: Allow modules to prepare for maintenance
Payload:
- maintenance_type: String - “backup”, “cleanup”, “prune”
- estimated_duration_seconds: Option<u64> - Estimated duration
Module Action:
- Pause non-critical operations
- Prepare for maintenance
MaintenanceCompleted
When: Maintenance operation completed
Purpose: Notify modules that maintenance finished
Payload:
- maintenance_type: String - Maintenance type
- success: bool - Success status
- duration_ms: u64 - Duration in milliseconds
- results: Option<String> - Results/statistics (optional JSON)
Module Action:
- Resume normal operations
- Process results if needed
HealthCheck
When: Health check performed
Purpose: Allow modules to report their health status
Payload:
- check_type: String - “periodic”, “manual”, “startup”
- node_healthy: bool - Node health status
- health_report: Option<String> - Health report (optional JSON)
Module Action:
- Report module health status
- Perform internal health checks
4. Resource Management Events
DiskSpaceLow
When: Disk space is low
Purpose: Allow modules to clean up data to free space
Payload:
- available_bytes: u64 - Available space in bytes
- total_bytes: u64 - Total space in bytes
- percent_free: f64 - Percentage free
- disk_path: String - Disk path
Module Action:
- Clean up old data
- Reduce data retention
- Flush and compress data
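One way to act on DiskSpaceLow is to tighten retention as free space shrinks. The thresholds and day counts in this sketch are made up for the example; the event only reports the numbers, it does not prescribe policy:

```rust
// Illustrative retention policy driven by the percent_free field of
// DiskSpaceLow. Thresholds (5%, 15%) and day counts are assumptions.
fn retention_days(percent_free: f64) -> u64 {
    if percent_free < 5.0 {
        7 // critical: keep only the last week
    } else if percent_free < 15.0 {
        14 // tight: halve normal retention
    } else {
        30 // normal retention
    }
}

fn main() {
    assert_eq!(retention_days(3.0), 7);
    assert_eq!(retention_days(40.0), 30);
}
```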
ResourceLimitWarning
When: Resource limit approaching
Purpose: Allow modules to reduce resource usage
Payload:
- resource_type: String - “memory”, “cpu”, “disk”, “network”
- usage_percent: f64 - Current usage percentage
- current_usage: u64 - Current usage value
- limit: u64 - Limit value
- threshold_percent: f64 - Warning threshold percentage
Module Action:
- Reduce resource usage
- Clean up resources
- Optimize operations
Usage Examples
Handling Shutdown
```rust
match event_type {
    EventType::NodeShutdown => {
        if let EventPayload::NodeShutdown { reason, timeout_seconds } = payload {
            info!("Node shutting down: {}, timeout: {}s", reason, timeout_seconds);
            // Save state
            save_state().await?;
            // Close connections
            close_connections().await?;
            // Flush data
            flush_data().await?;
        }
    }
    _ => {}
}
```
Handling Data Maintenance
```rust
match event_type {
    EventType::DataMaintenance => {
        if let EventPayload::DataMaintenance {
            operation, urgency, target_age_days, timeout_seconds, ..
        } = payload {
            match operation.as_str() {
                "flush" => {
                    flush_pending_data().await?;
                }
                "cleanup" => {
                    let age_days = target_age_days.unwrap_or(30);
                    cleanup_old_data(age_days).await?;
                }
                "both" => {
                    flush_pending_data().await?;
                    let age_days = target_age_days.unwrap_or(30);
                    cleanup_old_data(age_days).await?;
                }
                _ => {}
            }
            if urgency == "high" {
                // High urgency: any follow-up maintenance must complete
                // quickly, bounded by the supplied timeout.
                if let Some(timeout) = timeout_seconds {
                    tokio::time::timeout(
                        Duration::from_secs(timeout),
                        maintenance_operation(),
                    )
                    .await??;
                }
            }
        }
    }
    _ => {}
}
```
Handling Disk Space Low
```rust
match event_type {
    EventType::DiskSpaceLow => {
        if let EventPayload::DiskSpaceLow { available_bytes, percent_free, .. } = payload {
            warn!("Disk space low: {} bytes available, {:.2}% free", available_bytes, percent_free);
            // Clean up old data, keeping only the last 7 days
            cleanup_old_data(7).await?;
            // Compress data
            compress_data().await?;
        }
    }
    _ => {}
}
```
Best Practices
- Always Handle Shutdown: Modules must handle NodeShutdown and DataMaintenance (urgency: “high”)
- Non-Blocking Operations: Keep maintenance operations fast and non-blocking
- Respect Timeouts: For high urgency operations, respect timeout_seconds
- Clean Up Resources: Always clean up resources on shutdown
- Monitor Health: Report health status during HealthCheck events
Integration Timing
Startup Sequence
- Node starts
- Modules load
- Modules subscribe to events
- NodeStartupCompleted published
- Modules can start processing
Shutdown Sequence
- NodeShutdown published (with timeout)
- Modules clean up (within timeout)
- DataMaintenance published (urgency: “high”, operation: “flush”)
- Modules flush data
- Node components stop
- NodeShutdownCompleted published
Periodic Maintenance
- DataMaintenance published (urgency: “low”, operation: “cleanup”, reason: “periodic”)
- Modules clean up old data
- MaintenanceCompleted published
See Also
- Module System - Module system architecture
- Event System Integration - Complete integration guide
- Event Consistency - Event timing and consistency guarantees
- Module IPC Protocol - IPC communication details
Consensus Layer Overview
The consensus layer (blvm-consensus) provides a pure mathematical implementation of Bitcoin consensus rules from the Orange Paper. All functions are deterministic, side-effect-free, and directly implement the mathematical specifications without interpretation.
Architecture Position
Tier 2 of the 6-tier Bitcoin Commons architecture:
1. Orange Paper (mathematical foundation)
2. blvm-consensus (pure math implementation) ← THIS LAYER
3. blvm-protocol (Bitcoin abstraction)
4. blvm-node (full node implementation)
5. blvm-sdk (developer toolkit)
6. blvm-commons (governance enforcement)
Core Functions
Implements major Bitcoin consensus functions from the Orange Paper:
Transaction Validation
- CheckTransaction: Transaction structure and limit validation
- CheckTxInputs: Input validation against UTXO set
- EvalScript: Script execution engine
- VerifyScript: Script verification with witness data
Code: transaction.rs
Block Validation
- ConnectBlock: Block connection and validation
- ApplyTransaction: Transaction application to UTXO set
- CheckProofOfWork: Proof of work verification
- ShouldReorganize: Chain reorganization logic
Code: block/mod.rs
Economic Model
- GetBlockSubsidy: Block reward calculation with halving
- TotalSupply: Total supply computation
- GetNextWorkRequired: Difficulty adjustment calculation
Code: economic.rs
Mempool Protocol
- AcceptToMemoryPool: Transaction mempool validation
- IsStandardTx: Standard transaction checks
- ReplacementChecks: RBF (Replace-By-Fee) logic
Code: mempool.rs
Mining Protocol
- CreateNewBlock: Block creation from mempool
- MineBlock: Block mining and nonce finding
- GetBlockTemplate: Block template generation
Code: mining.rs
Advanced Features
- SegWit: Witness data validation and weight calculation
- Taproot: P2TR output validation and key aggregation
Code: segwit.rs
Design Principles
- Pure Functions: All functions are deterministic and side-effect-free
- Mathematical Accuracy: Direct implementation of Orange Paper specifications
- Optimization Passes: Optimization passes (e.g. constant folding, batch script verification) optimize the implementation; the implementation is validated against the spec, not generated from it
- Exact Version Pinning: All consensus-critical dependencies pinned to exact versions
- Comprehensive Testing: Extensive test coverage with unit tests, property-based tests, and integration tests
- No Consensus Rule Interpretation: Only mathematical implementation
- Formal Verification: BLVM Specification Lock and property-based testing ensure correctness
Formal Verification
Implements mathematical verification of Bitcoin consensus rules:
Verification uses BLVM Specification Lock and property-based tests. Critical proofs run in CI; see Formal Verification for coverage.
Code: block/mod.rs
Verification Coverage
- Chain Selection: should_reorganize, calculate_chain_work verified
- Block Subsidy: get_block_subsidy halving schedule verified
- Proof of Work: check_proof_of_work, target expansion verified
- Transaction Validation: check_transaction structure rules verified
- Block Connection: connect_block UTXO consistency verified
Code: VERIFICATION.md
BIP Implementation
Critical Bitcoin Improvement Proposals (BIPs) implemented:
- BIP30: Duplicate coinbase prevention (integrated in connect_block())
- BIP34: Block height in coinbase (integrated in connect_block())
- BIP66: Strict DER signatures (enforced via script verification)
- BIP90: Block version enforcement (integrated in connect_block())
- BIP147: NULLDUMMY enforcement (enforced via script verification)
Code: block/mod.rs
Performance Optimizations
Profile-Guided Optimization (PGO)
For maximum performance, build with profile-guided optimization:

```shell
./scripts/pgo-build.sh
```

Expected gain is workload-dependent; build with PGO and measure throughput on your own workload.
Optimization Passes
Optimization passes optimize the implementation (the implementation is validated against the Orange Paper, not generated from it):
- Constant Folding: Compile-time constant evaluation
- Memory Layout Optimization: Cache-friendly data structures
- SIMD Vectorization: Parallel processing where applicable
- Bounds Check Optimization: Eliminate unnecessary checks
- Dead Code Elimination: Remove unused code paths
Code: optimizations.rs
Mathematical Lock
The Orange Paper specifies consensus rules; blvm-consensus implements them, checked by tests, review, and BLVM Specification Lock on spec-locked code.
Chain of trust:
Orange Paper → blvm-consensus → tests + spec-lock → deployment & operations
Details: VERIFICATION.md, PROOF_LIMITATIONS.md.
Dependencies
All consensus-critical dependencies are pinned to exact versions:
```toml
# Consensus-critical cryptography - EXACT VERSIONS
secp256k1 = "=0.28.2"
sha2 = "=0.10.9"
ripemd = "=0.1.3"
bitcoin_hashes = "=0.11.0"
```
Code: Cargo.toml
See Also
- Consensus Architecture - Consensus layer design
- Formal Verification - Verification methodology
- Mathematical Correctness - Verification approach and coverage
- UTXO Commitments - UTXO commitment system
- Orange Paper - Mathematical foundation
Consensus Layer Architecture
The consensus layer is designed as a pure mathematical implementation with no side effects. Block and script logic are organized in block/ and script/ submodules. Canonical types, serialization, and crypto live in blvm-primitives and are re-exported by blvm-consensus for compatibility.
Design Principles
- Pure Functions: All functions are deterministic and side-effect-free
- Mathematical Accuracy: Direct implementation of Orange Paper specifications
- Optimization Passes: Runtime optimizations applied to the implementation (constant folding, memory layout optimization, SIMD vectorization, bounds check optimization, dead code elimination). The implementation is validated against the Orange Paper, not generated from it. See blvm-consensus/docs/FEATURES.md in the blvm-consensus repository (sibling of this book) for production and Rayon feature flags.
- Exact Version Pinning: All consensus-critical dependencies pinned to exact versions
- Testing: Test coverage with unit tests, property-based tests, and integration tests
- No Consensus Rule Interpretation: Only mathematical implementation
- Formal Verification: BLVM Specification Lock and property-based testing ensure correctness
Core Functions
Transaction Validation
- Transaction structure and limit validation
- Input validation against UTXO set
- Script execution and verification
Block Validation
- Block connection and validation
- Transaction application to UTXO set
- Proof of work verification
Economic Model
- Block reward calculation
- Total supply computation
- Difficulty adjustment
Mempool Protocol
- Transaction mempool validation
- Standard transaction checks
- Transaction replacement (RBF) logic
Mining Protocol
- Block creation from mempool
- Block mining and nonce finding
- Block template generation
Chain Management
- Chain reorganization handling
- P2P network message processing
Advanced Features
- SegWit: Witness data validation and weight calculation (see BIP141)
- Taproot: P2TR output validation and key aggregation (see BIP341)
Optimization Passes
The implementation is validated against the Orange Paper; optimization passes optimize the implementation code (not the spec):
- Constant Folding - Pre-computed constants and constant propagation
- Memory Layout Optimization - Cache-aligned structures and compact stack frames
- SIMD Vectorization - Batch hash operations with parallel processing
- Bounds Check Optimization - Removes redundant runtime bounds checks using BLVM Specification Lock-proven bounds
- Dead Code Elimination - Removes unused code paths
- Inlining Hints - Aggressive inlining of hot functions
Mathematical Protections
Mathematical protection mechanisms ensure correctness through formal verification. See Mathematical Specifications for details.
Spec Maintenance Workflow
Figure: Specification maintenance workflow showing how changes are detected, verified, and integrated.
See Also
- Consensus Overview - Consensus layer introduction
- Formal Verification - Verification methodology and tools
- Mathematical Correctness - Verification approach and coverage
- Orange Paper - Mathematical foundation
- Protocol Architecture - Protocol layer built on consensus
Formal Verification
blvm-consensus combines the Orange Paper (normative math spec), BLVM Specification Lock (Z3-backed proofs on spec-locked consensus code), and a large automated test suite. Proof scope and limits: PROOF_LIMITATIONS.md.
Verification Stack
Verification approach follows: “Rust + Tests + Math Specs = Source of Truth”
Figure: three-layer verification stack. Layer 1: Empirical Testing (unit tests with comprehensive coverage, property tests for randomized edge cases, integration tests for cross-system validation). Layer 2: Symbolic Verification (BLVM Specification Lock with tiered execution, Orange Paper math specifications, state space exploration over contract-checked paths). Layer 3: CI Enforcement (automated testing required for merge, formal proofs run separately, OpenTimestamps immutable audit trail).
Layer 1: Empirical Testing
- Unit tests: Broad coverage across consensus modules and public APIs
- Property-based tests: Randomized testing with proptest to discover edge cases
- Integration tests: Cross-system validation between consensus components
Layer 2: Symbolic Verification
- BLVM Specification Lock: Z3-backed proofs on spec-locked functions; tiered execution (strong/medium/slow)
- Mathematical specifications: Orange Paper and inline consensus specs
- State space exploration: Paths relevant to spec-lock contracts
Layer 3: CI Enforcement
- Automated testing: Required for merge
- BLVM Specification Lock: Tiered runs; policy in VERIFICATION.md
- OpenTimestamps audit logging: Optional timestamps of verification artifacts
Verification Statistics
Formal Proofs
BLVM Specification Lock checks spec-locked functions in tiers (strong/medium/slow). Current inventory and policy: VERIFICATION.md.
Verification Command:
```shell
# Run BLVM Specification Lock verification
cargo spec-lock verify
```

For tiered execution:

```shell
# Run all Z3 proofs (uses tiered execution)
cargo spec-lock verify

# Run specific tier
cargo spec-lock verify --tier strong
```
Tier System:
- Strong Tier: Critical consensus proofs (AWS spot instance integration)
- Medium Tier: Important proofs (parallel execution)
- Slow Tier: Comprehensive coverage proofs
Infrastructure:
- AWS spot instance integration for expensive proof execution
- Parallel proof execution with tiered scheduling
- Automated proof verification in CI/CD
Property-Based Tests
Property-based tests in tests/consensus_property_tests.rs cover economic rules, proof of work, transaction validation, script execution, performance, deterministic execution, integer overflow safety, temporal/state transitions, compositional verification, and SHA256 correctness.
Verification Command:
```shell
cargo test --test consensus_property_tests
```
Runtime Assertions
Runtime assertions provide invariant checking during execution.
Runtime Invariant Feature Flag:
- #[cfg(any(debug_assertions, feature = "runtime-invariants"))] enables assertions
- src/block/mod.rs: supply invariant checks in connect_block
Verification: Runtime assertions execute during debug builds and can be enabled in production with --features runtime-invariants.
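The style of check described above can be sketched as follows. `apply_subsidy` is an illustrative function, not the crate's API; the real supply checks live in src/block/mod.rs behind the feature flag:

```rust
const MAX_MONEY: u64 = 21_000_000 * 100_000_000; // 21M BTC in satoshis

// Sketch of a supply-invariant assertion. In the real crate the check is
// gated on debug_assertions or the runtime-invariants feature; here we use
// plain debug_assert!, which compiles into debug builds only.
fn apply_subsidy(total_supply: u64, subsidy: u64) -> u64 {
    let new_total = total_supply + subsidy;
    debug_assert!(new_total <= MAX_MONEY, "supply cap exceeded");
    new_total
}

fn main() {
    assert_eq!(apply_subsidy(0, 5_000_000_000), 5_000_000_000);
}
```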
Fuzz targets (libFuzzer)
Harnesses live under fuzz/fuzz_targets/; names are registered in fuzz/Cargo.toml. Overview: Fuzzing.
```shell
cd fuzz
cargo +nightly fuzz run <target_name>
```
MIRI Runtime Checks
Status: Integrated in CI
Location: .github/workflows/verify.yml
Checks:
- Property tests under MIRI
- Critical unit tests under MIRI
- Undefined behavior detection
Verification Command:
```shell
cargo +nightly miri test --test consensus_property_tests
```
Mathematical Specifications
Multiple functions have formal documentation aligned with the Orange Paper and consensus crate sources (there is no separate shipped MATHEMATICAL_SPECIFICATIONS_COMPLETE.md in this book).
Documented Functions:
- Economic: get_block_subsidy, total_supply, calculate_fee
- Proof of Work: expand_target, compress_target, check_proof_of_work
- Transaction: check_transaction, is_coinbase
- Block: connect_block, apply_transaction
- Script: eval_script, verify_script
- Reorganization: calculate_chain_work, should_reorganize
- Cryptographic: SHA256
Mathematical Specifications
The formulas and invariants below state intended consensus behavior from the Orange Paper. Key functions tie this spec to the implementation through tests and BLVM Specification Lock where those functions are spec-locked. Coverage: VERIFICATION.md, PROOF_LIMITATIONS.md.
Chain Selection (src/reorganization.rs)
Mathematical Specification:
∀ chains C₁, C₂: work(C₁) > work(C₂) ⇒ select(C₁)
Invariants:
- Selected chain has maximum cumulative work
- Work calculation is deterministic
- Empty chains are rejected
- Chain work is always non-negative
Key functions:
- should_reorganize — longest-work chain selection
- calculate_chain_work — cumulative work
- expand_target — difficulty target edge cases
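The selection rule above can be sketched directly. This is a simplified model (per-block work as u128 values), not the crate's types:

```rust
// Sketch of the chain-selection rule: sum per-block work and select the
// chain with strictly greater cumulative work; empty chains are rejected.
fn calculate_chain_work(block_work: &[u128]) -> u128 {
    block_work.iter().sum()
}

fn should_reorganize(current: &[u128], candidate: &[u128]) -> bool {
    !candidate.is_empty()
        && calculate_chain_work(candidate) > calculate_chain_work(current)
}

fn main() {
    assert!(should_reorganize(&[10, 10], &[10, 11])); // strictly more work wins
    assert!(!should_reorganize(&[10, 10], &[]));      // empty chains rejected
}
```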
Block Subsidy (src/economic.rs)
Mathematical Specification:
∀ h ∈ ℕ: subsidy(h) = 50 * 10^8 * 2^(-⌊h/210000⌋) if ⌊h/210000⌋ < 64 else 0
Invariants:
- Subsidy halves every 210,000 blocks
- After 64 halvings, subsidy becomes 0
- Subsidy is always non-negative
- Total supply approaches 21M BTC asymptotically
Key functions:
- get_block_subsidy — halving schedule
- total_supply — supply monotonicity
- validate_supply_limit — supply cap
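The subsidy formula above transcribes directly into integer arithmetic (the floor in the exponent becomes integer division, the power of two a right shift). A minimal sketch; the crate's get_block_subsidy may differ in naming and types:

```rust
const COIN: u64 = 100_000_000;          // satoshis per BTC
const HALVING_INTERVAL: u64 = 210_000;  // blocks per halving

// subsidy(h) = 50 * 10^8 * 2^(-⌊h/210000⌋) if ⌊h/210000⌋ < 64 else 0
fn block_subsidy(height: u64) -> u64 {
    let halvings = height / HALVING_INTERVAL;
    if halvings >= 64 {
        0 // after 64 halvings the subsidy is zero
    } else {
        (50 * COIN) >> halvings // each halving is a right shift
    }
}

fn main() {
    assert_eq!(block_subsidy(0), 5_000_000_000);        // 50 BTC
    assert_eq!(block_subsidy(210_000), 2_500_000_000);  // 25 BTC after first halving
    assert_eq!(block_subsidy(64 * 210_000), 0);         // subsidy exhausted
}
```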
Proof of Work (src/pow.rs)
Mathematical Specification:
∀ header H: CheckProofOfWork(H) = SHA256(SHA256(H)) < ExpandTarget(H.bits)
Target Compression/Expansion:
∀ bits ∈ [0x03000000, 0x1d00ffff]:
Let expanded = expand_target(bits)
Let compressed = compress_target(expanded)
Let re_expanded = expand_target(compressed)
Then:
- re_expanded ≤ expanded (compression truncates, never increases)
- re_expanded.0[2] = expanded.0[2] ∧ re_expanded.0[3] = expanded.0[3]
(significant bits preserved exactly)
- Precision loss in words 0, 1 is acceptable (compact format limitation)
Invariants:
- Hash must be less than target for valid proof of work
- Target expansion handles edge cases correctly
- Target compression preserves significant bits (words 2, 3) exactly
- Target compression may lose precision in lower bits (words 0, 1)
- Difficulty adjustment respects bounds [0.25, 4.0]
- Work calculation is deterministic
Key functions:
- check_proof_of_work — hash vs target
- expand_target / compress_target — compact difficulty encoding
- target_expand_compress_round_trip — compact round-trip properties
- get_next_work_required — difficulty adjustment bounds
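The core comparison in the specification reduces to a 256-bit integer compare. In this sketch (the crate's types will differ), both the double-SHA256 header hash and the expanded target are 32-byte big-endian arrays, for which lexicographic comparison equals numeric comparison:

```rust
// Sketch of the hash-vs-target check: valid proof of work requires the
// header hash, read as a big-endian 256-bit integer, to be below the target.
// For big-endian byte arrays, lexicographic order equals numeric order.
fn hash_below_target(hash_be: &[u8; 32], target_be: &[u8; 32]) -> bool {
    hash_be < target_be
}

fn main() {
    let mut target = [0u8; 32];
    target[2] = 0xff; // toy target with two leading zero bytes
    let mut hash = [0u8; 32];
    hash[3] = 0x01;   // more leading zeros => numerically smaller
    assert!(hash_below_target(&hash, &target));
    assert!(!hash_below_target(&target, &target)); // equal is not "below"
}
```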
Transaction Validation (src/transaction.rs)
Mathematical Specification:
∀ tx ∈ 𝒯𝒳: CheckTransaction(tx) = valid ⟺
(|tx.inputs| > 0 ∧ |tx.outputs| > 0 ∧
∀o ∈ tx.outputs: 0 ≤ o.value ≤ M_max ∧
|tx.inputs| ≤ M_max_inputs ∧ |tx.outputs| ≤ M_max_outputs ∧
|tx| ≤ M_max_tx_size)
Invariants:
- Valid transactions have non-empty inputs and outputs
- Output values are bounded [0, MAX_MONEY]
- Input/output counts respect limits
- Transaction size respects limits
- Coinbase transactions have special validation rules
Key functions:
- check_transaction — structural validity
- check_tx_inputs — input checks including coinbase
- is_coinbase — coinbase detection
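The structural checks in the spec above can be sketched with simplified stand-in types; the limits are the ones documented here (MAX_INPUTS = MAX_OUTPUTS = 1000), but the types are not the crate's real definitions:

```rust
const MAX_MONEY: u64 = 2_100_000_000_000_000; // 21M BTC in satoshis
const MAX_INPUTS: usize = 1_000;
const MAX_OUTPUTS: usize = 1_000;

struct TxOut { value: u64 }
struct Tx { inputs: Vec<u32>, outputs: Vec<TxOut> }

fn check_transaction(tx: &Tx) -> bool {
    // Non-empty inputs and outputs
    if tx.inputs.is_empty() || tx.outputs.is_empty() {
        return false;
    }
    // Input/output counts respect limits
    if tx.inputs.len() > MAX_INPUTS || tx.outputs.len() > MAX_OUTPUTS {
        return false;
    }
    // Each output and the running sum stay within [0, MAX_MONEY]
    let mut total: u64 = 0;
    for o in &tx.outputs {
        if o.value > MAX_MONEY {
            return false;
        }
        total = match total.checked_add(o.value) {
            Some(t) if t <= MAX_MONEY => t,
            _ => return false,
        };
    }
    true
}
```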
Block Connection (src/block/mod.rs)
Mathematical Specification:
∀ block B, UTXO set US, height h: ConnectBlock(B, US, h) = (valid, US') ⟺
(ValidateHeader(B.header) ∧
∀ tx ∈ B.transactions: CheckTransaction(tx) ∧ CheckTxInputs(tx, US, h) ∧
VerifyScripts(tx, US) ∧
CoinbaseOutput ≤ TotalFees + GetBlockSubsidy(h) ∧
US' = ApplyTransactions(B.transactions, US))
Invariants:
- Valid blocks have valid headers and transactions
- UTXO set consistency is preserved
- Coinbase output respects economic rules
- Transaction application is atomic
Key functions:
- connect_block — full block validation
- apply_transaction — UTXO updates
- calculate_tx_id — transaction id
Verification Tools
BLVM Specification Lock
Purpose: Z3-backed proofs that spec-locked Rust functions satisfy their Orange Paper contracts.
Usage: cargo spec-lock verify
Coverage: Spec-locked functions (#[spec_locked] and related tooling).
Details: PROOF_LIMITATIONS.md
Proptest Property Testing
Purpose: Randomized testing to discover edge cases
Usage: cargo test (runs automatically)
Coverage: Property tests using proptest! and related harnesses in the crate
Strategy: Generates random inputs within specified bounds
Example:
#![allow(unused)]
use proptest::prelude::*;

fn main() {
    proptest! {
        #[test]
        fn prop_function_invariant(input in strategy) {
            let result = function_under_test(input);
            prop_assert!(result.property_holds());
        }
    }
}
CI Integration
Verification Workflow
The .github/workflows/verify.yml workflow runs verification jobs such as:
- Unit & Property Tests (required)
  - cargo test --all-features
  - Must pass for CI to succeed
- BLVM Specification Lock Verification (release- or tier-gated)
  - cargo spec-lock verify — Z3 obligations for #[spec_locked] functions in the workflow
  - Full verification run before each release
  - Slower tiers can be deferred between major releases
  - Not required for merge
- OpenTimestamps Audit (non-blocking)
  - Collect verification artifacts
  - Timestamp proof bundle with ots stamp
  - Upload artifacts for transparency
Local Development
Run all tests:
cargo test --all-features
Run BLVM Specification Lock verification:
cargo spec-lock verify
Run specific verification:
cargo test --test property_tests
cargo spec-lock verify --proof <function_name>
Verification Coverage
Consensus combines BLVM Specification Lock, property tests, fuzzing, and integration tests across economic rules, PoW, transactions, blocks, scripts, reorg, crypto, mempool, SegWit, and serialization. PROOF_LIMITATIONS.md documents what formal proofs do and do not cover.
Network Protocol Verification
blvm-protocol can use the same BLVM Specification Lock machinery for wire messages: headers, checksums, size limits, and round-trip properties for the message types in scope.
Proof targets: Header layout (magic, command, length, checksum), checksum validation, size limits, parse(serialize(msg)) == msg for covered messages.
Message tiers: Tier 1: Version, VerAck, Ping, Pong. Tier 2: Transaction, Block, Headers, Inv, GetData, GetHeaders.
Use the verify feature for full protocol verification builds; see blvm-protocol crate docs.
Consensus Coverage Comparison
Figure: Baseline: broad tests and review. Bitcoin Commons adds BLVM Specification Lock and Orange Paper–driven methodology on top.
Proof Maintenance Cost
Figure: Proof maintenance cost: proofs changed per change by area; highlights refactor hotspots.
Spec Drift vs Test Coverage
Figure: Spec drift vs test coverage: higher test coverage reduces the likelihood of specification drift over time.
See also Network Protocol for transport and wire-format documentation.
See Also
- Consensus Overview - Consensus layer introduction
- Consensus Architecture - Consensus layer design
- Mathematical Specifications - Mathematical spec details
- Mathematical Correctness - Correctness guarantees
- Property-Based Testing - Property-based testing
- Fuzzing - Fuzzing infrastructure
- Testing Infrastructure - Complete testing overview
Peer Consensus Protocol
Overview
Bitcoin Commons implements an N-of-M peer consensus protocol for UTXO set verification. The protocol discovers diverse peers and finds consensus among them to verify UTXO commitments without trusting any single peer.
Architecture
N-of-M Consensus Model
The protocol uses an N-of-M consensus model:
- N: Minimum number of peers required
- M: Target number of diverse peers
- Threshold: Consensus threshold (e.g., 70% agreement)
- Diversity: Peers must be diverse across ASNs, subnets, geographic regions
Code: peer_consensus.rs
Peer Information
Peer information tracks diversity:
#![allow(unused)]
fn main() {
pub struct PeerInfo {
pub address: IpAddr,
pub asn: Option<u32>, // Autonomous System Number
pub country: Option<String>, // Country code (ISO 3166-1 alpha-2)
pub implementation: Option<String>, // Bitcoin implementation
pub subnet: u32, // /16 subnet for diversity checks
}
}
Code: peer_consensus.rs
Diverse Peer Discovery
Diversity Requirements
Peers must be diverse across:
- ASNs: Maximum N peers per ASN
- Subnets: No peers from same /16 subnet
- Geographic Regions: Geographic diversity
- Bitcoin Implementations: Implementation diversity
Code: peer_consensus.rs
Discovery Process
- Collect All Peers: Gather all available peers
- Filter by ASN: Limit peers per ASN
- Filter by Subnet: Remove duplicate subnets
- Select Diverse Set: Select diverse peer set
- Stop at Target: Stop when target number reached
Code: peer_consensus.rs
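The discovery steps above can be sketched as a single filtering pass; the `Peer` struct is reduced to the two fields used here, and the function name is illustrative rather than the crate's API:

```rust
use std::collections::{HashMap, HashSet};

#[derive(Clone)]
struct Peer {
    asn: u32,    // Autonomous System Number
    subnet: u32, // /16 subnet identifier
}

fn select_diverse(peers: &[Peer], max_per_asn: usize, target: usize) -> Vec<Peer> {
    let mut per_asn: HashMap<u32, usize> = HashMap::new();
    let mut subnets: HashSet<u32> = HashSet::new();
    let mut selected = Vec::new();
    for p in peers {
        if selected.len() >= target {
            break; // stop at target
        }
        let count = per_asn.entry(p.asn).or_insert(0);
        if *count >= max_per_asn {
            continue; // limit peers per ASN
        }
        if !subnets.insert(p.subnet) {
            continue; // skip duplicate /16 subnets
        }
        *count += 1;
        selected.push(p.clone());
    }
    selected
}
```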
Consensus Finding
Commitment Grouping
Commitments are grouped by their values:
- Merkle Root: UTXO commitment Merkle root
- Total Supply: Total Bitcoin supply
- UTXO Count: Number of UTXOs
- Block Height: Block height of commitment
Code: peer_consensus.rs
Consensus Threshold
Consensus threshold check:
- Threshold: Configurable threshold (e.g., 70%)
- Agreement Count: Number of peers agreeing
- Required Count: ceil(total_peers * threshold)
- Verification: Check if agreement count >= required count
Code: peer_consensus.rs
Mathematical Invariants
Consensus finding maintains invariants:
- required_agreement_count <= total_peers
- required_agreement_count >= 1
- best_agreement_count <= total_peers
- If agreement_count >= required_agreement_count, then agreement_count / total_peers >= threshold
Code: peer_consensus.rs
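The grouping and threshold rules above can be sketched with an integer-only ceiling; commitments are reduced to a single `u64` for illustration (the real grouping key combines merkle root, supply, count, and height), and the function name mirrors the spec rather than the crate's API:

```rust
use std::collections::HashMap;

/// Returns the commitment value agreed on by at least ceil(total * pct / 100) peers.
fn find_consensus(commitments: &[u64], threshold_pct: usize) -> Option<u64> {
    let total = commitments.len();
    if total == 0 {
        return None;
    }
    // Integer ceiling: required is in [1, total] for any pct in (0, 100].
    let required = (total * threshold_pct + 99) / 100;
    let mut counts: HashMap<u64, usize> = HashMap::new();
    for &c in commitments {
        *counts.entry(c).or_insert(0) += 1; // group identical commitments
    }
    counts
        .into_iter()
        .filter(|&(_, n)| n >= required)   // must meet the threshold
        .max_by_key(|&(_, n)| n)           // pick the best-supported group
        .map(|(c, _)| c)
}
```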
Checkpoint Height Determination
Median-Based Checkpoint
Checkpoint height determined from peer chain tips:
- Median Calculation: Uses median of peer tips
- Safety Margin: Subtracts safety margin to prevent deep reorgs
- Mathematical Invariants:
- Median is always between min(tips) and max(tips)
- Checkpoint height is always >= 0
- Checkpoint height <= median_tip
Code: peer_consensus.rs
Ban List Sharing
Ban List Protocol
Nodes share ban lists to protect against malicious peers:
- Ban List Messages: GetBanList, BanList protocol messages
- Hash Verification: Ban list hash verification
- Merging: Ban list merging from multiple peers
- Network-Wide Protection: Protects entire network
Code: mod.rs
Ban List Validation
Ban list entries are validated:
- Entry Validation: Each entry validated
- Hash Verification: Ban list hash verified
- Merging Logic: Merged with local ban list
- Duplicate Prevention: Duplicate entries prevented
Code: mod.rs
Ban List Merging
Ban lists are merged from multiple peers:
- Hash Verification: Verify ban list hash
- Entry Validation: Validate each ban entry
- Merging: Merge with local ban list
- Conflict Resolution: Resolve conflicts (longest ban wins)
Code: ban_list_merging.rs
Filtered Blocks
Filtered Block Protocol
Nodes can request filtered blocks:
- GetFilteredBlock: Request filtered block
- FilteredBlock: Response with filtered block
- Efficiency: More efficient than full blocks
- Privacy: Better privacy for light clients
Code: protocol.rs
Network-Wide Malicious Peer Protection
Protection Mechanisms
Network-wide protection against malicious peers:
- Ban List Sharing: Share ban lists across network
- Peer Reputation: Track peer reputation
- Auto-Ban: Automatic banning of abusive peers
- Eclipse Prevention: Prevent eclipse attacks
Code: SECURITY.md
Configuration
Consensus Configuration
#![allow(unused)]
fn main() {
pub struct ConsensusConfig {
pub min_peers: usize, // Minimum peers required
pub target_peers: usize, // Target number of diverse peers
pub consensus_threshold: f64, // Consensus threshold (0.0-1.0)
pub max_peers_per_asn: usize, // Max peers per ASN
pub safety_margin_blocks: Natural, // Safety margin for checkpoint
}
}
Code: peer_consensus.rs
Benefits
- No Single Point of Trust: No need to trust any single peer
- Diversity: Diverse peer set reduces attack surface
- Consensus: Majority agreement ensures correctness
- Network Protection: Ban list sharing protects entire network
- Efficiency: Filtered blocks reduce bandwidth
Components
The peer consensus protocol includes:
- N-of-M consensus model
- Diverse peer discovery
- Consensus finding algorithm
- Checkpoint height determination
- Ban list sharing
- Filtered block protocol
- Network-wide malicious peer protection
Location: blvm-protocol/src/utxo_commitments/peer_consensus.rs, blvm-node/src/network/ban_list_merging.rs, blvm-node/src/network/mod.rs
See Also
- Consensus Overview - Consensus layer introduction
- UTXO Commitments - UTXO commitment system
- Mathematical Specifications - Mathematical spec details
- Network Protocol - Network layer details
Mathematical Specifications
Overview
Bitcoin Commons documents Orange Paper–aligned mathematical specifications for consensus behavior. The Rust code implements this spec, checked by tests and BLVM Specification Lock on spec-locked functions. Proof scope: PROOF_LIMITATIONS.md.
Specification Format
Mathematical specifications use formal notation to define consensus rules:
- Quantifiers: Universal (∀) and existential (∃) quantifiers
- Functions: Mathematical function definitions
- Invariants: Properties that must always hold
- Constraints: Bounds and limits
Code: VERIFICATION.md
Core Specifications
Chain Selection
Mathematical Specification:
∀ chains C₁, C₂: work(C₁) > work(C₂) ⇒ select(C₁)
Invariants:
- Selected chain has maximum cumulative work
- Work calculation is deterministic
- Empty chains are rejected
- Chain work is always non-negative
Key functions:
- should_reorganize: Longest-work selection
- calculate_chain_work: Cumulative work calculation
- expand_target: Difficulty target edge cases (see also PoW specs)
Code: VERIFICATION.md
Block Subsidy
Mathematical Specification:
∀ h ∈ ℕ: subsidy(h) = 50 * 10^8 * 2^(-⌊h/210000⌋) if ⌊h/210000⌋ < 64 else 0
Invariants:
- Subsidy halves every 210,000 blocks
- Subsidy is non-negative
- Subsidy decreases monotonically
- Total supply converges to 21 million BTC
Code: VERIFICATION.md
Total Supply
Mathematical Specification:
∀ h ∈ ℕ: total_supply(h) = Σ(i=0 to h) subsidy(i)
Invariants:
- Total supply is monotonic (never decreases)
- Total supply is bounded (≤ 21 * 10^6 * 10^8 satoshis)
- Total supply converges to 21 million BTC
Code: PROOF_LIMITATIONS.md
Difficulty Adjustment
Mathematical Specification:
target_new = target_old * (timespan / expected_timespan)
timespan_clamped = clamp(timespan, expected/4, expected*4)
Invariants:
- Target is always positive
- Timespan is clamped to [expected/4, expected*4]
- Difficulty adjustment is deterministic
Code: VERIFICATION.md
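The clamp can be expressed directly; 1,209,600 seconds (2016 blocks at 10 minutes) is Bitcoin's expected timespan, and the function name is illustrative:

```rust
/// Bound the measured timespan to [expected/4, expected*4], which limits any
/// single difficulty adjustment to the [0.25, 4.0] range.
fn clamp_timespan(actual: u64, expected: u64) -> u64 {
    actual.clamp(expected / 4, expected * 4)
}
```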
Consensus Threshold
Mathematical Specification:
required_agreement_count = ceil(total_peers * threshold)
consensus_met ⟺ agreement_count >= required_agreement_count
Invariants:
- 1 <= required_agreement_count <= total_peers
- agreement_count >= required ⟺ ratio >= threshold
- Integer comparison is deterministic
Code: VERIFICATION.md
Median Calculation
Mathematical Specification:
median(tips) = {
tips[n/2] if n is odd,
(tips[n/2-1] + tips[n/2]) / 2 if n is even
}
Invariants:
- min(tips) <= median <= max(tips)
- Median is deterministic
- Checkpoint = max(0, median - safety_margin)
Code: VERIFICATION.md
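The median and checkpoint rules above can be sketched as follows, assuming a non-empty list of tips; the names are illustrative, not the crate's API:

```rust
/// Median of a sorted, non-empty slice, per the spec above.
fn median(sorted: &[u64]) -> u64 {
    let n = sorted.len();
    if n % 2 == 1 {
        sorted[n / 2]
    } else {
        (sorted[n / 2 - 1] + sorted[n / 2]) / 2
    }
}

/// Checkpoint = max(0, median(tips) - safety_margin).
fn determine_checkpoint(tips: &mut Vec<u64>, safety_margin: u64) -> u64 {
    tips.sort_unstable();
    median(tips).saturating_sub(safety_margin) // never goes below 0
}
```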
Specification Coverage
Functions with Specifications
Multiple functions have formal mathematical specifications:
- Chain selection (should_reorganize, calculate_chain_work)
- Block subsidy (get_block_subsidy)
- Total supply (total_supply)
- Difficulty adjustment (get_next_work_required, expand_target)
- Transaction validation (check_transaction, check_tx_inputs)
- Block validation (connect_block, apply_transaction)
- Script execution (eval_script, verify_script)
- Consensus threshold (find_consensus)
- Median calculation (determine_checkpoint_height)
Code: PROOF_LIMITATIONS.md
Mathematical Protections
Integer-Based Arithmetic
The threshold check avoids floating-point comparisons in the hot path: the configured threshold is converted once into a required integer count, and the consensus decision itself is a plain integer comparison:
#![allow(unused)]
fn main() {
// Integer-based threshold calculation
let required_agreement_count = ((total_peers as f64) * threshold).ceil() as usize;
if agreement_count >= required_agreement_count {
// Consensus met
}
}
Code: VERIFICATION.md
Runtime Assertions
Runtime assertions verify mathematical invariants:
- Threshold calculation bounds
- Consensus result invariants
- Median calculation bounds
- Checkpoint bounds
Code: VERIFICATION.md
Checked Arithmetic
Checked arithmetic prevents overflow/underflow:
#![allow(unused)]
fn main() {
// Median calculation with overflow protection
let median_tip = if sorted_tips.len() % 2 == 0 {
let mid = sorted_tips.len() / 2;
let lower = sorted_tips[mid - 1];
let upper = sorted_tips[mid];
(lower + upper) / 2 // Safe: Natural type prevents overflow
} else {
sorted_tips[sorted_tips.len() / 2]
};
}
Code: VERIFICATION.md
Formal Verification
Z3 Proofs
BLVM Specification Lock uses Z3 to prove spec-locked functions against Orange Paper contracts. The symbolic specs above are not each a separate Z3 theorem; see PROOF_LIMITATIONS.md.
Code: VERIFICATION.md
Property-Based Tests
Property-based tests verify invariants:
- Generate random inputs
- Verify properties hold
- Discover edge cases
- Test mathematical correctness
Code: VERIFICATION.md
Documentation
Specification Documents
Mathematical specifications and verification are documented in the consensus repository (see docs/README.md):
- VERIFICATION.md — how to run verification and what is in scope
- PROOF_LIMITATIONS.md — proof bounds, coverage, and protections beyond formal verification
Code: README.md
Components
The mathematical specifications system includes:
- Formal mathematical notation for consensus functions
- Mathematical invariants documentation
- Integer-based arithmetic (prevents floating-point bugs)
- Runtime assertions (verify invariants)
- Checked arithmetic (prevents overflow)
- BLVM Specification Lock / Z3 where enabled on annotated functions
- Property-based tests (invariant verification)
Location: blvm-consensus/docs/VERIFICATION.md, blvm-consensus/docs/PROOF_LIMITATIONS.md, blvm-consensus/docs/README.md (index)
See Also
- Consensus Overview - Consensus layer introduction
- Consensus Architecture - Consensus layer design
- Formal Verification - BLVM Specification Lock verification details
- Mathematical Correctness - Correctness guarantees
- Property-Based Testing - Property-based testing
Mathematical Correctness
The consensus layer uses the Orange Paper, BLVM Specification Lock, tests, and code review together. Proof scope: PROOF_LIMITATIONS.md, VERIFICATION.md.
Verification Approach
Our verification approach follows: “Rust + Tests + Math Specs = Source of Truth”
Layer 1: Empirical Testing (Required, Must Pass)
- Unit tests: Broad coverage across consensus modules
- Property-based tests: Randomized testing with proptest to discover edge cases
- Integration tests: Cross-system validation between consensus components
Layer 2: Symbolic Verification
- BLVM Specification Lock: Z3-backed proofs on spec-locked consensus functions
- Mathematical specifications: Formal documentation of consensus rules
- State space exploration: Paths checked under spec-lock contracts
Layer 3: CI Enforcement
- Automated testing: Required for merge
- BLVM Specification Lock: Tiered runs; merge/release policy in VERIFICATION.md
- OpenTimestamps audit logging: Optional transparency for verification artifacts
Primary verification areas
- Chain selection: should_reorganize, calculate_chain_work
- Block subsidy: get_block_subsidy halving schedule
- Proof of work: check_proof_of_work, target expansion
- Transaction validation: check_transaction structure rules
- Block connection: connect_block, UTXO consistency
Protection coverage
Proof bounds and related limits are documented upstream in PROOF_LIMITATIONS.md.
See Also
- Consensus Architecture - Consensus layer design
- Formal Verification - Verification methodology and tools
- Consensus Overview - Consensus layer introduction
- Orange Paper - Mathematical foundation
Spam Filtering
Overview
Spam filtering provides transaction-level filtering for bandwidth optimization and non-monetary transaction detection. The system filters spam transactions to achieve 40-60% bandwidth savings during ongoing sync while maintaining consensus correctness.
Code: spam_filter/mod.rs
Spam filtering is implemented in the protocol layer (blvm-protocol). It can be used independently of UTXO commitments; mempool and consensus config reference it where needed.
Spam Detection Types
1. Ordinals/Inscriptions (SpamType::Ordinals)
Detects data embedded in Bitcoin transactions:
- Witness Scripts: Detects data embedded in witness scripts (SegWit v0 or Taproot) - PRIMARY METHOD
- OP_RETURN Outputs: Detects OP_RETURN outputs with large data pushes
- Envelope Protocol: Detects envelope protocol patterns (OP_FALSE OP_IF … OP_ENDIF)
- Pattern Detection: Large scripts (>100 bytes) or OP_RETURN with >80 bytes
- Witness Detection: Large witness stacks (>1000 bytes) or suspicious data patterns
2. Dust Outputs (SpamType::Dust)
Filters outputs below threshold:
- Threshold: Default 546 satoshis (configurable)
- Detection: All outputs must be below threshold for transaction to be considered dust
- Configuration: SpamFilterConfig::dust_threshold
3. BRC-20 Tokens (SpamType::BRC20)
Detects BRC-20 token transactions:
- Pattern Matching: Detects BRC-20 JSON patterns in OP_RETURN outputs
- Patterns: "p":"brc-20", "op":"mint", "op":"transfer", "op":"deploy"
4. Large Witness Data (SpamType::LargeWitness)
Detects transactions with suspiciously large witness data:
- Threshold: Default 1000 bytes (configurable)
- Indication: Potential data embedding in witness data
- Configuration: SpamFilterConfig::max_witness_size
5. Low Fee Rate (SpamType::LowFeeRate)
Detects transactions with suspiciously low fee rates:
- Detection: Low fee rate relative to transaction size
- Indication: Non-monetary transactions pay minimal fees
- Threshold: Default 1 sat/vbyte (configurable)
- Configuration: SpamFilterConfig::min_fee_rate
- Status: Disabled by default (can be too aggressive)
6. High Size-to-Value Ratio (SpamType::HighSizeValueRatio)
Detects transactions with very large size relative to value transferred:
- Pattern: >1000 bytes per satoshi (default threshold)
- Indication: Non-monetary use (large data, small value)
- Configuration: SpamFilterConfig::max_size_value_ratio
7. Many Small Outputs (SpamType::ManySmallOutputs)
Detects transactions with many small outputs:
- Pattern: >10 outputs below dust threshold (default)
- Indication: Common in token distributions and Ordinal transfers
- Configuration: SpamFilterConfig::max_small_outputs
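Two of the simpler heuristics above (dust and many small outputs) can be sketched as follows; the thresholds are the documented defaults, but the function names are illustrative, not the crate's API:

```rust
/// Dust spam: every output is below the dust threshold (default 546 satoshis).
fn is_dust_spam(output_values: &[u64], dust_threshold: u64) -> bool {
    !output_values.is_empty() && output_values.iter().all(|&v| v < dust_threshold)
}

/// Many small outputs: more than max_small outputs below the dust threshold
/// (default max_small = 10), a pattern common to token distributions.
fn has_many_small_outputs(output_values: &[u64], dust_threshold: u64, max_small: usize) -> bool {
    output_values.iter().filter(|&&v| v < dust_threshold).count() > max_small
}
```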
Critical Design: Output-Only Filtering
Important: Spam filtering applies to OUTPUTS only, not entire transactions.
When processing a spam transaction:
- ✅ INPUTS are ALWAYS removed from UTXO tree (maintains consistency)
- ❌ OUTPUTS are filtered out (bandwidth savings)
This ensures UTXO set consistency even when spam transactions spend non-spam inputs.
Implementation: blvm-protocol utxo_commitments/initial_sync.rs (output-only filtering when processing blocks for UTXO commitments).
Configuration
Default Configuration
#![allow(unused)]
fn main() {
use blvm_protocol::spam_filter::{SpamFilter, SpamFilterConfig};
// Default configuration (all detection methods enabled except low_fee_rate)
let filter = SpamFilter::new();
}
Custom Configuration
#![allow(unused)]
fn main() {
let config = SpamFilterConfig {
filter_ordinals: true,
filter_dust: true,
filter_brc20: true,
filter_large_witness: true, // Detect large witness stacks
filter_low_fee_rate: false, // Disabled by default (too aggressive)
filter_high_size_value_ratio: true, // Detect high size/value ratio
filter_many_small_outputs: true, // Detect many small outputs
dust_threshold: 546, // satoshis
min_output_value: 546, // satoshis
min_fee_rate: 1, // satoshis per vbyte
max_witness_size: 1000, // bytes
max_size_value_ratio: 1000.0, // bytes per satoshi
max_small_outputs: 10, // count
};
let filter = SpamFilter::with_config(config);
}
Witness Data Support
For improved detection accuracy, especially for Taproot/SegWit-based Ordinals, use is_spam_with_witness():
#![allow(unused)]
fn main() {
use blvm_consensus::witness::Witness;
let filter = SpamFilter::new();
let witnesses: Vec<Witness> = /* witness data for each input */;
// Better detection with witness data
let result = filter.is_spam_with_witness(&tx, Some(&witnesses));
// Backward compatible (works without witness data)
let result = filter.is_spam(&tx);
}
Usage
Basic Usage
#![allow(unused)]
fn main() {
use blvm_protocol::spam_filter::SpamFilter;
let filter = SpamFilter::new();
let result = filter.is_spam(&transaction);
if result.is_spam {
println!("Transaction is spam: {:?}", result.spam_type);
for spam_type in &result.detected_types {
println!(" - {:?}", spam_type);
}
}
}
Block Filtering
#![allow(unused)]
fn main() {
let spam_filter = SpamFilter::new();
let (filtered_txs, spam_summary) = spam_filter.filter_block(&block.transactions);
// Spam summary provides statistics:
// - filtered_count: Number of transactions filtered
// - filtered_size: Total bytes filtered
// - by_type: Breakdown by spam type (ordinals, dust, brc20)
}
Block Filtering with Witness Data
#![allow(unused)]
fn main() {
let spam_filter = SpamFilter::new();
let witnesses: Vec<Vec<Witness>> = /* witness data for each transaction */;
let (filtered_txs, spam_summary) = spam_filter.filter_block_with_witness(
&block.transactions,
Some(&witnesses)
);
}
Mempool-Level Spam Filtering
In addition to block-level filtering, spam filtering can be applied at the mempool entry point to reject spam transactions before they enter the mempool.
Configuration
Enable mempool-level spam filtering in MempoolConfig:
#![allow(unused)]
fn main() {
use blvm_consensus::config::MempoolConfig;
let mut config = MempoolConfig::default();
config.reject_spam_in_mempool = true; // Enable spam rejection at mempool entry
// Optional: Customize spam filter configuration
#[cfg(feature = "utxo-commitments")]
{
use blvm_protocol::spam_filter::SpamFilterConfigSerializable;
config.spam_filter_config = Some(SpamFilterConfigSerializable {
filter_ordinals: true,
filter_dust: true,
filter_brc20: true,
// ... other spam filter settings
});
}
}
Standard Transaction Checks
The mempool also enforces stricter standard transaction checks:
OP_RETURN Limits
- Maximum OP_RETURN size: 80 bytes (common policy default, configurable)
- Multiple OP_RETURN rejection: By default, transactions with more than 1 OP_RETURN output are rejected
- Configuration: MempoolConfig::max_op_return_size, max_op_return_outputs, reject_multiple_op_return
Envelope Protocol Rejection
- Envelope protocol detection: Rejects scripts starting with OP_FALSE OP_IF (used by Ordinals)
- Configuration: MempoolConfig::reject_envelope_protocol (default: true)
Script Size Limits
- Maximum standard script size: 200 bytes (configurable)
- Configuration: MempoolConfig::max_standard_script_size
Per-Peer Transaction Rate Limiting
To prevent peer flooding, transaction rate limiting is enforced per peer:
- Burst limit: 10 transactions (configurable)
- Rate limit: 1 transaction per second (configurable)
- Configuration: MempoolPolicyConfig::tx_rate_limit_burst, tx_rate_limit_per_sec
- Location: blvm-node/src/network/mod.rs
Transactions exceeding the rate limit are dropped before processing.
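The per-peer limit above (burst 10, refill 1 tx/s) can be modeled as a token bucket; the struct and method names here are assumptions for illustration, not the node's actual API:

```rust
use std::time::Instant;

struct TxRateLimiter {
    tokens: f64,   // currently available transaction slots
    burst: f64,    // maximum bucket size (burst limit)
    per_sec: f64,  // refill rate in transactions per second
    last: Instant, // time of the last refill
}

impl TxRateLimiter {
    fn new(burst: f64, per_sec: f64) -> Self {
        Self { tokens: burst, burst, per_sec, last: Instant::now() }
    }

    /// Returns true if the peer may submit another transaction now.
    fn allow(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        // Refill at per_sec, capped at the burst size.
        self.tokens = (self.tokens + elapsed * self.per_sec).min(self.burst);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

A peer can burst up to 10 transactions, after which it is throttled to roughly one per second until the bucket refills.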
Example Configuration
[mempool]
# Enable spam filtering at mempool entry
reject_spam_in_mempool = true
# OP_RETURN limits
max_op_return_size = 80
max_op_return_outputs = 1
reject_multiple_op_return = true
# Standard script checks
max_standard_script_size = 200
reject_envelope_protocol = true
# Fee rate requirements for large transactions
min_fee_rate_large_tx = 2
large_tx_threshold_bytes = 1000
[mempool_policy]
# Per-peer transaction rate limiting
tx_rate_limit_burst = 10
tx_rate_limit_per_sec = 1
# Per-peer byte rate limiting
tx_byte_rate_limit = 100000 # 100 KB/s
tx_byte_rate_burst = 1000000 # 1 MB burst
# Spam-aware eviction
eviction_strategy = "spamfirst"
[spam_ban]
# Spam-specific peer banning
spam_ban_threshold = 10
spam_ban_duration_seconds = 3600 # 1 hour
Integration Points
UTXO Commitments
Spam filtering is used in UTXO commitment processing to reduce bandwidth during sync:
- Location: blvm-protocol/src/utxo_commitments/initial_sync.rs
- Usage: Filters outputs when processing blocks for UTXO commitments
- Benefit: 40-60% bandwidth reduction during ongoing sync
Protocol Extensions
Spam filtering is used in protocol extensions for filtered block generation:
- Location: blvm-node/src/network/protocol_extensions.rs
- Usage: Generates filtered blocks for network peers
- Benefit: Reduces bandwidth for filtered block relay
Mempool Entry
Spam filtering can be applied at mempool entry to reject spam transactions:
- Location: blvm-consensus/src/mempool.rs::accept_to_memory_pool_with_config()
- Usage: Optional spam check before accepting transactions to mempool
- Benefit: Prevents spam from entering mempool, reducing memory usage
- Status: Opt-in (default: disabled) to maintain backward compatibility
Bandwidth Savings
- 40-60% bandwidth reduction during ongoing sync
- Maintains consensus correctness
- Enables efficient UTXO commitment synchronization
- Reduces storage requirements for filtered block relay
Performance Characteristics
- CPU Overhead: Minimal (pattern matching)
- Memory: O(1) per transaction
- Detection Speed: Fast (heuristic-based pattern matching)
Use Cases
- UTXO Commitment Sync: Reduce bandwidth during initial sync
- Ongoing Sync: Skip spam transactions in filtered blocks
- Bandwidth Optimization: For nodes with limited bandwidth
- Storage Optimization: Reduce data that needs to be stored
- Network Efficiency: Reduce bandwidth for filtered block relay
- Mempool Management: Reject spam transactions at mempool entry (opt-in)
- Peer Flooding Prevention: Rate limit transactions per peer to prevent DoS
Additional Spam Mitigation
Already Implemented
- Input/Output Limits: Consensus-level limits (MAX_INPUTS = 1000, MAX_OUTPUTS = 1000) prevent transactions with excessive inputs/outputs
- Ancestor/Descendant Limits: Package limits prevent transaction package spam (default: 25 transactions, 101 kB)
- DoS Protection: Automatic peer banning for connection rate violations
- Per-Peer Byte Rate Limiting: Limits bytes per second per peer (default: 100 KB/s, 1 MB burst)
- Fee Rate Requirements for Large Transactions: Requires higher fees for large transactions (default: 2 sat/vB for transactions >1000 bytes)
- Spam-Aware Eviction: Evict spam transactions first when mempool is full (eviction strategy: SpamFirst)
- Spam-Specific Peer Banning: Auto-ban peers that repeatedly send spam transactions (default: ban after 10 spam transactions, 1 hour duration)
Per-Peer Byte Rate Limiting
Prevents large transaction flooding by limiting bytes per second per peer:
[mempool_policy]
tx_byte_rate_limit = 100000 # 100 KB/s
tx_byte_rate_burst = 1000000 # 1 MB burst
Fee Rate Requirements for Large Transactions
Large transactions must pay higher fees to discourage spam:
[mempool]
min_fee_rate_large_tx = 2 # 2 sat/vB (higher than standard 1 sat/vB)
large_tx_threshold_bytes = 1000 # Transactions >1 KB require higher fees
Spam-Aware Eviction Strategy
When mempool is full, spam transactions are evicted first:
[mempool_policy]
eviction_strategy = "spamfirst" # Evict spam transactions first
Note: Requires utxo-commitments feature. Falls back to lowest_fee_rate if feature is disabled.
Spam-Specific Peer Banning
Tracks spam violations per peer and auto-bans repeat offenders:
[spam_ban]
spam_ban_threshold = 10 # Ban after 10 spam transactions
spam_ban_duration_seconds = 3600 # Ban for 1 hour
Peers that repeatedly send spam transactions are automatically banned for the configured duration.
See Also
- UTXO Commitments - How spam filtering integrates with UTXO commitments
- Consensus Overview - Consensus layer introduction
- Network Protocol - Network protocol details
UTXO Commitments
Overview
UTXO Commitments enable fast synchronization of the Bitcoin UTXO set without requiring full blockchain download. The system uses cryptographic Merkle tree commitments with peer consensus verification, achieving 98% bandwidth savings compared to traditional full block download.
Architecture
Core Components
- Merkle Tree: Sparse Merkle Tree for incremental UTXO set updates
- Peer Consensus: N-of-M diverse peer verification model
- Spam Filtering: Filters spam transactions from commitments
- Verification: PoW-based commitment verification
- Network Integration: Works with TCP and Iroh transports
Code: mod.rs
Merkle Tree Implementation
Sparse Merkle Tree
The system uses a sparse Merkle tree for efficient incremental updates:
- Incremental Updates: Insert/remove UTXOs without full tree rebuild
- Proof Generation: Generate Merkle proofs for UTXO inclusion
- Root Calculation: Efficient root hash calculation
- SHA256 Hashing: Uses SHA256 for all hashing operations
Code: merkle_tree.rs
Usage
#![allow(unused)]
fn main() {
use blvm_protocol::utxo_commitments::{UtxoMerkleTree, UtxoCommitment};
use blvm_consensus::types::{OutPoint, UTXO};
// Create UTXO Merkle tree
let mut tree = UtxoMerkleTree::new()?;
// Add UTXO
let outpoint = OutPoint { hash: [1; 32], index: 0 };
let utxo = UTXO { value: 1000, script_pubkey: vec![].into(), height: 0, is_coinbase: false };
tree.insert(outpoint, utxo)?;
// Generate commitment
let commitment = tree.generate_commitment(block_hash, height);
}
Peer Consensus Protocol
N-of-M Verification Model
The peer consensus protocol discovers diverse peers and finds consensus among them to verify UTXO commitments without trusting any single peer.
Peer Diversity
Peers are selected for diversity across:
- ASN (Autonomous System Number): Maximum 2 peers per ASN
- Country: Geographic distribution
- Subnet: /16 subnet distribution
- Implementation: Different Bitcoin implementations may adopt commitment schemes independently
Code: peer_consensus.rs
Consensus Configuration
#![allow(unused)]
fn main() {
pub struct ConsensusConfig {
pub min_peers: usize, // Minimum: 5
pub target_peers: usize, // Target: 10
pub consensus_threshold: f64, // 0.8 (80% agreement)
pub max_peers_per_asn: usize, // 2
pub safety_margin: Natural, // 2016 blocks (~2 weeks)
}
}
Code: peer_consensus.rs
Consensus Process
- Discover Diverse Peers: Find peers across different ASNs, countries, subnets
- Request Commitments: Query each peer for UTXO commitment at checkpoint height
- Group Responses: Group commitments by value (merkle root + supply + count + height)
- Find Consensus: Identify group with highest agreement
- Verify Threshold: Check if agreement meets consensus threshold (80%)
- Verify Commitment: Verify consensus commitment against block headers and PoW
Code: peer_consensus.rs
Fast Sync Protocol
Initial Sync Process
- Download Headers: Download block headers from genesis to tip
- Select Checkpoint: Choose checkpoint height (safety margin back from tip)
- Request UTXO Sets: Query diverse peers for UTXO commitment at checkpoint
- Find Consensus: Use peer consensus to verify commitment
- Verify Commitment: Verify against block headers and PoW
- Sync Forward: Download filtered blocks from checkpoint to tip
- Update Incrementally: Update UTXO set incrementally for each block
Code: initial_sync.rs
Bandwidth Savings
The fast sync protocol achieves 98% bandwidth savings by:
- Headers Only: Download headers instead of full blocks (~80 bytes vs ~1 MB per block)
- Filtered Blocks: Download only relevant transactions (~2% of block size)
- Incremental Updates: Only download UTXO changes, not full set
Calculation:
- Traditional: ~500 GB (full blockchain)
- Fast Sync: ~10 GB (headers + filtered blocks)
- Savings: 98%
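The arithmetic behind the 98% figure can be written out directly. All sizes here are rough order-of-magnitude assumptions (80-byte headers, ~1 MB average blocks, ~2% of block data retained), not measured values:

```rust
/// Rough bandwidth model for fast sync vs full download.
fn fast_sync_savings(blocks: u64) -> f64 {
    const HEADER_BYTES: u64 = 80;            // fixed Bitcoin header size
    const AVG_BLOCK_BYTES: u64 = 1_000_000;  // ~1 MB average block (assumption)
    let full_download = blocks * AVG_BLOCK_BYTES;
    // Headers for the whole chain plus ~2% of block data (filtered blocks).
    let fast_sync = blocks * HEADER_BYTES + full_download / 50;
    1.0 - fast_sync as f64 / full_download as f64
}
```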
Spam Filtering Integration
UTXO Commitments use spam filtering to reduce bandwidth during sync. Spam filtering is a general-purpose feature that can be used independently of UTXO commitments.
For detailed spam filtering documentation, see: Spam Filtering
Integration with UTXO Commitments
When processing blocks for UTXO commitments, spam filtering is applied:
- Location: initial_sync.rs
- Process: All transactions are processed, but spam outputs are filtered out
- Benefit: 40-60% bandwidth reduction during ongoing sync
- Critical Design: INPUTS are always removed (maintains UTXO consistency), OUTPUTS are filtered (bandwidth savings)
Bandwidth Savings
- 40-60% bandwidth reduction during ongoing sync
- Maintains consensus correctness
- Enables efficient UTXO commitment synchronization
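The "inputs always removed, outputs filtered" rule can be illustrated with a small sketch. The `Tx` shape, `apply_filtered_tx` name, and spam flag are assumptions for illustration; the point is the asymmetry between inputs and outputs:

```rust
use std::collections::HashMap;

/// Simplified outpoint: (txid, output index).
type OutPoint = (u64, u32);

struct Tx {
    inputs: Vec<OutPoint>,
    outputs: Vec<(u64, bool)>, // (value, is_spam) per output
    txid: u64,
}

/// Apply a spam-filtered transaction to a UTXO set: every spent input
/// is ALWAYS removed (keeps the UTXO set consistent with consensus),
/// while spam-classified outputs are simply never inserted (bandwidth
/// and storage savings).
fn apply_filtered_tx(utxos: &mut HashMap<OutPoint, u64>, tx: &Tx) {
    for spent in &tx.inputs {
        utxos.remove(spent); // inputs always removed
    }
    for (index, (value, is_spam)) in tx.outputs.iter().enumerate() {
        if !*is_spam {
            utxos.insert((tx.txid, index as u32), *value); // spam outputs filtered
        }
    }
}
```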
BIP158 Compact Block Filters
BIP158 compact block filters support light clients and integrate with UTXO commitments for efficient filtered block serving.
Location
- Protocol (algorithm):
- Protocol (algorithm): blvm-protocol/src/bip158.rs, blvm-protocol/src/bip157.rs – GCS filter construction and filter header chain
- Node (handlers): blvm-node/src/network/bip157_handler.rs, blvm-node/src/network/filter_service.rs – serving and network integration
Capabilities
Filter Generation
- Golomb-Rice Coded Sets (GCS) for efficient encoding
- False Positive Rate: ~1 in 524,288 (P=19)
- Filter Contents:
- All spendable output scriptPubKeys in the block
- All scriptPubKeys from outputs spent by block’s inputs
Filter Header Chain
- Maintains filter header chain for efficient verification
- Checkpoints every 1000 blocks (per BIP157)
- Enables light clients to verify filter integrity
Algorithm
- Collect Scripts: All output scriptPubKeys from block transactions and all scriptPubKeys from UTXOs being spent
- Hash to Range: Hash each script with SHA256, map to range [0, N*M) where N = number of elements, M = 2^19
- Golomb-Rice Encoding: Sort hashed values, compute differences, encode using Golomb-Rice
- Filter Matching: Light clients hash their scripts and check if script hash is in set
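Steps 3 and 4 can be sketched with a toy Golomb-Rice coder. The real BIP158 construction keys SipHash with the block hash and packs bits into a byte stream; this sketch takes already-hashed-and-reduced `u64` values as input and uses a `Vec<bool>` bit stream so it stays self-contained:

```rust
/// Encode a set of pre-reduced values: sort, delta-encode, then write
/// each delta as a unary quotient (delta >> p ones, then a zero)
/// followed by the low p remainder bits, most significant first.
fn golomb_rice_encode(mut values: Vec<u64>, p: u32) -> Vec<bool> {
    values.sort_unstable();
    let mut bits = Vec::new();
    let mut prev = 0u64;
    for v in values {
        let delta = v - prev;
        prev = v;
        for _ in 0..(delta >> p) {
            bits.push(true); // unary quotient
        }
        bits.push(false); // quotient terminator
        for i in (0..p).rev() {
            bits.push((delta >> i) & 1 == 1); // p-bit remainder
        }
    }
    bits
}

/// Inverse of the encoder: read quotient and remainder, rebuild each
/// delta, and accumulate back to the sorted values.
fn golomb_rice_decode(bits: &[bool], p: u32) -> Vec<u64> {
    let mut out = Vec::new();
    let mut prev = 0u64;
    let mut i = 0;
    while i < bits.len() {
        let mut q = 0u64;
        while bits[i] {
            q += 1;
            i += 1;
        }
        i += 1; // skip terminating zero
        let mut r = 0u64;
        for _ in 0..p {
            r = (r << 1) | bits[i] as u64;
            i += 1;
        }
        prev += (q << p) | r;
        out.push(prev);
    }
    out
}
```

A light client matching its scripts against a filter performs the decode side incrementally, stopping early once its target value is passed.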
Integration with UTXO Commitments
BIP158 filters can be included in FilteredBlockMessage alongside spam-filtered transactions and UTXO commitments, enabling efficient light client synchronization.
Code: bip158.rs
Verification
Verification Levels
- Minimal: Peer consensus only
- Standard: Peer consensus + PoW + supply checks
- Paranoid: All checks + background genesis verification
Code: config.rs
Verification Checks
- PoW Verification: Verify block headers have valid proof-of-work
- Supply Verification: Verify total supply matches expected value
- Header Chain Verification: Verify commitment height matches header chain
- Merkle Root Verification: Verify Merkle root matches UTXO set
Code: verification.rs
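The relationship between levels and checks can be summarized as a dispatch table. The enum and function names below are illustrative, not the actual config.rs API; the mapping follows the documented levels:

```rust
/// Verification levels as documented: Minimal, Standard, Paranoid.
#[derive(Clone, Copy, PartialEq)]
enum VerificationLevel { Minimal, Standard, Paranoid }

/// Which checks each level runs. Minimal relies on peer consensus
/// alone; Standard adds PoW, supply, header chain, and merkle root
/// checks; Paranoid additionally verifies from genesis in the
/// background.
fn checks_for(level: VerificationLevel) -> Vec<&'static str> {
    let mut checks = vec!["peer_consensus"];
    if level != VerificationLevel::Minimal {
        checks.extend(["pow", "supply", "header_chain", "merkle_root"]);
    }
    if level == VerificationLevel::Paranoid {
        checks.push("background_genesis_verification");
    }
    checks
}
```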
Network Integration
Transport Support
UTXO Commitments work with both TCP and Iroh transports via the transport abstraction layer:
- TCP: Bitcoin P2P compatible
- Iroh/QUIC: QUIC with NAT traversal and DERP
Code: utxo_commitments_client.rs
Network Messages
- GetUTXOSet: Request UTXO commitment from peer
- UTXOSet: Response with UTXO commitment
- GetFilteredBlock: Request filtered block (spam-filtered)
- FilteredBlock: Response with filtered block
Code: network_integration.rs
Configuration
Sync Modes
- PeerConsensus: Use peer consensus for initial sync (fast, trusts N of M peers)
- Genesis: Sync from genesis (slow, but no trust required)
- Hybrid: Use peer consensus but verify from genesis in background
Code: config.rs
Configuration Example
[utxo_commitments]
sync_mode = "PeerConsensus" # or "Genesis" or "Hybrid"
verification_level = "Standard" # or "Minimal" or "Paranoid"
[utxo_commitments.consensus]
min_peers = 5
target_peers = 10
consensus_threshold = 0.8
max_peers_per_asn = 2
safety_margin = 2016
[utxo_commitments.spam_filter]
min_value = 546 # dust threshold
min_fee_rate = 1 # sat/vB
Code: config.rs
Formal Verification
The UTXO Commitments module includes blvm-spec-lock proofs verifying:
- Merkle tree operations (insert, remove, root calculation)
- Commitment generation
- Verification logic
- Peer consensus calculations
Location: blvm-protocol/src/utxo_commitments/
Storage correctness for UTXO set operations is covered by tests and blvm-spec-lock verification in the consensus and protocol crates. The UTXO commitments implementation in blvm-protocol (merkle tree, verification, peer consensus) is the reference for commitment-related logic.
Usage
Initial Sync
#![allow(unused)]
fn main() {
use blvm_protocol::utxo_commitments::InitialSync;
let sync = InitialSync::new(
peer_consensus,
network_client,
config,
);
// Sync from checkpoint
let commitment = sync.sync_from_checkpoint(
header_chain,
diverse_peers,
).await?;
// Complete sync forward with full validation
// Note: checkpoint_utxo_set should be obtained from the verified commitment
// For now, passing None starts with empty set (commitment verified at checkpoint)
sync.complete_sync_from_checkpoint(
&mut utxo_tree,
checkpoint_height,
current_tip,
network_client,
get_block_hash_fn,
peer_id,
Network::Mainnet,
network_time,
Some(&header_chain),
None, // checkpoint_utxo_set - can be obtained separately if needed
).await?;
}
Update After Block
#![allow(unused)]
fn main() {
use blvm_protocol::utxo_commitments::update_commitments_after_block;
update_commitments_after_block(
&mut utxo_tree,
block,
height,
)?;
}
Code: initial_sync.rs
Benefits
- Fast Sync: 98% bandwidth savings vs full blockchain download
- Security: N-of-M peer consensus prevents single peer attacks
- Efficiency: Incremental updates, no full set download
- Flexibility: Multiple sync modes and verification levels
- Transport Agnostic: Works with TCP or QUIC
- Formal Verification: blvm-spec-lock proofs ensure correctness
Components
The UTXO Commitments system includes:
- Sparse Merkle Tree with incremental updates
- Peer consensus protocol (N-of-M verification)
- Spam filtering
- Commitment verification
- Network integration (TCP and Iroh)
- Fast sync protocol
- blvm-spec-lock proofs
Location: blvm-protocol/src/utxo_commitments/, blvm-node/src/network/utxo_commitments_client.rs
See Also
- Consensus Overview - Consensus layer introduction
- Consensus Architecture - Consensus layer design
- Network Protocol - Network protocol details
- Node Configuration - UTXO commitment configuration
- Storage Backends - Storage backend details
- Performance Optimizations - IBD optimizations and batch storage
- Formal Verification - Spec-lock verification system
Protocol Layer Overview
The protocol layer (blvm-protocol) abstracts Bitcoin protocol for multiple variants and protocol evolution. It sits between the pure mathematical consensus rules (blvm-consensus) and the Bitcoin node implementation (blvm-node), supporting mainnet, testnet, regtest, and future protocol variants.
Architecture Position
Tier 3 of the 6-tier Bitcoin Commons architecture:
1. Orange Paper (mathematical foundation)
2. blvm-consensus (pure math implementation)
3. blvm-protocol (Bitcoin abstraction) ← THIS LAYER
4. blvm-node (full node implementation)
5. blvm-sdk (developer toolkit)
6. blvm-commons (governance enforcement)
Protocol Variants
The protocol layer supports multiple Bitcoin network variants:
| Variant | Network Name | Default Port | Purpose |
|---|---|---|---|
| BitcoinV1 | mainnet | 8333 | Production Bitcoin network |
| Testnet3 | testnet | 18333 | Bitcoin test network |
| Regtest | regtest | 18444 | Regression testing network |
Network Parameters
Each variant has specific network parameters:
- Magic Bytes: P2P protocol identification (mainnet: 0xf9beb4d9, testnet: 0x0b110907, regtest: 0xfabfb5da)
- Genesis Blocks: Network-specific genesis block hashes
- Difficulty Targets: Proof-of-work difficulty adjustment
- Halving Intervals: Block subsidy halving schedule (210,000 blocks)
- Feature Activation: SegWit, Taproot activation heights
Code: network_params.rs
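A minimal sketch of the per-variant parameter lookup; the struct shape and `params_for` function are assumptions for illustration, while the magic bytes and ports are the values quoted above:

```rust
/// Per-network parameters as described above (illustrative subset).
struct NetworkParams {
    name: &'static str,
    magic: [u8; 4],
    default_port: u16,
}

/// Resolve parameters for a named network variant.
fn params_for(network: &str) -> Option<NetworkParams> {
    match network {
        "mainnet" => Some(NetworkParams {
            name: "mainnet",
            magic: [0xf9, 0xbe, 0xb4, 0xd9],
            default_port: 8333,
        }),
        "testnet" => Some(NetworkParams {
            name: "testnet",
            magic: [0x0b, 0x11, 0x09, 0x07],
            default_port: 18333,
        }),
        "regtest" => Some(NetworkParams {
            name: "regtest",
            magic: [0xfa, 0xbf, 0xb5, 0xda],
            default_port: 18444,
        }),
        _ => None,
    }
}
```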
Core Components
Protocol Engine
The BitcoinProtocolEngine is the main interface:
#![allow(unused)]
fn main() {
pub struct BitcoinProtocolEngine {
version: ProtocolVersion,
network_params: NetworkParams,
config: ProtocolConfig,
}
}
Features:
- Protocol variant selection
- Network parameter access
- Feature flag management
- Validation rule enforcement
Code: lib.rs
Network Messages
Supports Bitcoin P2P protocol messages:
Core Messages:
- Version, VerAck – Connection handshake
- Addr, GetAddr – Peer address management
- Inv, GetData, NotFound – Inventory management
- Block, Tx – Block and transaction relay
- GetHeaders, Headers, GetBlocks – Header synchronization
- Ping, Pong – Connection keepalive
- MemPool, FeeFilter – Mempool synchronization
BIP152 (Compact Block Relay):
- SendCmpct – Compact block negotiation
- CmpctBlock – Compact block transmission
- GetBlockTxn, BlockTxn – Transaction reconstruction
FIBRE Protocol:
- FIBREPacket – High-performance relay protocol
- Packet format and serialization
- Performance optimizations
Governance Messages:
- Governance messages via P2P protocol
- Message format and routing
- Integration with governance system
Commons Extensions:
- GetUTXOSet, UTXOSet – UTXO commitment protocol
- GetFilteredBlock, FilteredBlock – Spam-filtered blocks
- GetBanList, BanList – Distributed ban list sharing
Code: network.rs (NetworkMessage), wire/mod.rs (command names and serialization)
Service Flags
Service flags indicate node capabilities:
Standard Flags:
- NODE_NETWORK – Full node with all blocks
- NODE_WITNESS – SegWit support
- NODE_COMPACT_FILTERS – BIP157/158 support
- NODE_NETWORK_LIMITED – Pruned node
Commons Flags:
- NODE_UTXO_COMMITMENTS – UTXO commitment support
- NODE_BAN_LIST_SHARING – Ban list sharing
- NODE_FIBRE – FIBRE protocol support
- NODE_DANDELION – Dandelion++ privacy relay
- NODE_PACKAGE_RELAY – BIP331 package relay
Code: service_flags.rs
Validation Rules
Protocol-specific validation rules:
- Size Limits: Block (4MB), transaction (1MB), script (10KB)
- Feature Flags: SegWit, Taproot, RBF support
- Fee Rules: Minimum and maximum fee rates
- DoS Protection: Message size limits, address count limits
Code: validation.rs
Commons-Specific Extensions
UTXO Commitments
Protocol messages for UTXO set synchronization:
- GetUTXOSet – Request UTXO set at specific height
- UTXOSet – UTXO set response with merkle proof
Code: commons.rs (message structs); utxo_commitments/mod.rs (UTXO commitment protocol)
Filtered Blocks
Spam-filtered block relay for efficient syncing:
- GetFilteredBlock – Request filtered block
- FilteredBlock – Filtered block with spam transactions removed
Code: commons.rs (GetFilteredBlockMessage, FilteredBlockMessage)
Ban List Sharing
Distributed ban list management:
- GetBanList – Request ban list
- BanList – Ban list response with signatures
Code: commons.rs (GetBanListMessage, BanListMessage)
BIP Support
Implemented Bitcoin Improvement Proposals:
- BIP152: Compact Block Relay
- BIP157: Client-side Block Filtering
- BIP158: Compact Block Filters
- BIP173/350/351: Bech32/Bech32m Address Encoding
- BIP70: Payment Protocol
Code: bip157.rs
Protocol Evolution
The protocol layer supports protocol evolution:
- Version Support: Multiple protocol versions
- Feature Management: Enable/disable features based on version
- Breaking Changes: Track and manage protocol evolution
- Backward Compatibility: Maintain compatibility with existing nodes
Usage Example
#![allow(unused)]
fn main() {
use blvm_protocol::{BitcoinProtocolEngine, ProtocolVersion};
// Create a mainnet protocol engine
let engine = BitcoinProtocolEngine::new(ProtocolVersion::BitcoinV1)?;
// Get network parameters
let params = engine.get_network_params();
println!("Network: {}", params.network_name);
println!("Port: {}", params.default_port);
// Check feature support
if engine.supports_feature("segwit") {
println!("SegWit is supported");
}
}
See Also
- Protocol Architecture - Protocol layer design and components
- Network Protocol - Transport abstraction and protocol details
- Message Formats - P2P message specifications
- Protocol Specifications - BIP implementations
- Node Configuration - Configuring protocol variants
Protocol Layer Architecture
The protocol layer (blvm-protocol) provides Bitcoin protocol abstraction that enables multiple Bitcoin variants and protocol evolution.
Architecture Position
This is Tier 3 of the 6-tier Bitcoin Commons architecture (BLVM technology stack):
1. Orange Paper (mathematical foundation)
2. blvm-consensus (pure math implementation)
3. blvm-protocol (Bitcoin abstraction) ← THIS CRATE
4. blvm-node (full node implementation)
5. blvm-sdk (developer toolkit)
6. blvm-commons (governance enforcement)
Purpose
The blvm-protocol sits between the pure mathematical consensus rules (blvm-consensus) and the full Bitcoin implementation (blvm-node). It provides:
Protocol Abstraction
- Multiple Variants: Support for mainnet, testnet, and regtest
- Network Parameters: Magic bytes, ports, genesis blocks, difficulty targets
- Feature Flags: SegWit, Taproot, RBF, and other protocol features
- Validation Rules: Protocol-specific size limits and validation logic
Protocol Evolution
- Version Support: Bitcoin V1, V2 (planned), and experimental variants
- Feature Management: Enable/disable features based on protocol version
- Breaking Changes: Track and manage protocol evolution
Core Components
Protocol Variants
- BitcoinV1: Production Bitcoin mainnet
- Testnet3: Bitcoin test network
- Regtest: Regression testing network
Network Parameters
- Magic Bytes: P2P protocol identification
- Ports: Default network ports
- Genesis Blocks: Network-specific genesis blocks
- Difficulty: Proof-of-work targets
- Halving: Block subsidy intervals
For more details, see the blvm-protocol README.
See Also
- Protocol Overview - Protocol layer introduction
- Network Protocol - Transport and protocol details
- Message Formats - P2P message specifications
- Consensus Architecture - Underlying consensus layer
- Node Configuration - Protocol variant configuration
Message Formats
The protocol layer defines message formats for Bitcoin P2P protocol communication.
Protocol Variants
Each protocol variant (mainnet, testnet, regtest) has specific message formats and network parameters:
- Magic Bytes: Unique identifier for each network variant
- Message Headers: Standard Bitcoin message header format
- Message Types: version, verack, inv, getdata, tx, block, etc.
Network Parameters
Protocol-specific parameters include:
- Default ports (mainnet: 8333, testnet: 18333, regtest: 18444)
- Genesis block hashes
- Difficulty adjustment intervals
- Block size limits
- Feature activation heights
For detailed protocol specifications, see the blvm-protocol README.
See Also
- Protocol Architecture - Protocol layer design
- Network Protocol - Transport and message handling
- Protocol Overview - Protocol layer introduction
- Protocol Specifications - BIP implementations
Network Protocol
The protocol layer abstracts Bitcoin’s P2P network protocol, supporting multiple network variants. See Protocol Overview for details.
Protocol Abstraction
The blvm-protocol abstracts P2P message formats (standard Bitcoin wire protocol), connection management, peer discovery, block synchronization, and transaction relay. See Protocol Architecture for details.
Network Variants
Mainnet (BitcoinV1)
- Production Bitcoin network
- Full consensus rules
- Real economic value
Testnet3
- Bitcoin test network
- Same consensus rules as mainnet
- Different network parameters
- No real economic value
Regtest
- Regression testing network
- Configurable difficulty
- Isolated from other networks
- Fast block generation for testing
For implementation details, see the blvm-protocol README.
Transport Abstraction Layer
The network layer uses multiple transport protocols through a unified abstraction (see Transport Abstraction):
NetworkManager
└── Transport Trait (abstraction)
├── TcpTransport (Bitcoin P2P compatible)
└── IrohTransport (QUIC-based, optional)
Transport Options
TCP Transport (Default): Bitcoin P2P protocol compatibility using traditional TCP sockets. Maintains Bitcoin wire protocol format and is compatible with standard Bitcoin nodes. See Transport Abstraction.
Iroh Transport: QUIC-based transport using Iroh for P2P networking with public key-based peer identity and NAT traversal support. See Transport Abstraction.
Transport Selection
Configure transport via node configuration:
[network]
transport_preference = "tcp_only" # or "iroh_only", "hybrid"
Modes: tcp_only (default, Bitcoin compatible), iroh_only (experimental), hybrid (both simultaneously)
The protocol adapter serializes between blvm-consensus NetworkMessage types and transport-specific wire formats. The message bridge processes messages and generates responses. Default is TCP-only; enable Iroh via iroh feature flag.
See Also
- Protocol Architecture - Protocol layer design
- Message Formats - P2P message specifications
- Protocol Overview - Protocol layer introduction
- Node Configuration - Network and transport configuration
- Protocol Specifications - BIP implementations
Node Implementation Overview
The node implementation (blvm-node) is a minimal reference Bitcoin node: it adds only non-consensus infrastructure on top of the consensus and protocol layers. Treat mainnet and high-value deployments like any consensus-adjacent system—hardening, monitoring, and review are required. Consensus logic comes from blvm-consensus, and protocol abstraction from blvm-protocol.
Architecture
The node follows a layered architecture:
Node Components
├── Network Manager – P2P networking, peer management
├── Storage Layer – Block/UTXO storage
├── RPC Server – JSON-RPC 2.0 API
├── Module Manager – Process-isolated modules
├── Mempool Manager – Transaction mempool
├── Mining Coordinator – Block template generation
└── Payment Processor – CTV support

The Network Manager, Storage Layer, Mempool Manager, Mining Coordinator, and Payment Processor all sit on blvm-protocol (protocol abstraction), which delegates to blvm-consensus (consensus validation). The Module Manager orchestrates the network, storage, and mempool components, while the RPC Server serves data from storage, the mempool, and the mining coordinator.
Key Components
Network Manager
- P2P protocol implementation (Bitcoin wire protocol)
- Multi-transport support (TCP, Quinn QUIC, Iroh)
- Peer connection management
- Message routing and relay
- Privacy protocols (Dandelion++, Fibre)
- Package relay (BIP331)
Code: network/mod.rs (module root), network_manager.rs (connection and message handling)
Storage Layer
- Database abstraction with multiple backends (see Storage Backends)
- Automatic backend fallback on failure
- Block storage and indexing
- UTXO set management
- Chain state tracking
- Transaction indexing
- Pruning support
Code: mod.rs
RPC Server
- JSON-RPC 2.0 compliant API (see RPC API Reference)
- REST API (optional feature, runs alongside JSON-RPC)
- Optional QUIC transport support (see QUIC RPC)
- Authentication and rate limiting
- Method coverage
Code: mod.rs
Module System
- Process-isolated modules (see Module System Architecture)
- IPC communication (Unix domain sockets, see Module IPC Protocol)
- Security sandboxing
- Permission-based API access
- Hot reload support
Code: manager.rs
Mempool Manager
- Transaction validation and storage
- Fee-based transaction selection
- RBF (Replace-By-Fee) support with 4 configurable modes (Disabled, Conservative, Standard, Aggressive)
- Comprehensive mempool policies and limits
- Transaction expiry
- Advanced indexing (address and value range indexing)
Code: mempool.rs
Mining Coordinator
- Block template generation
- Stratum V2 protocol support
- Mining job distribution
Code: miner.rs
Payment Processing
- CTV (CheckTemplateVerify) support
- Lightning Network integration
- Payment vaults
- Covenant support
- Payment state management
Code: mod.rs
Governance Integration
- Optional [governance] configuration (e.g. Commons URL, relay toggles) and NODE_GOVERNANCE P2P capability for extensions such as ban list sharing
- Module-visible governance events (proposal lifecycle, webhooks, fork detection) for optional out-of-process modules
Code: config/governance.rs, network/peer_manager.rs (governance feature), P2P governance extensions
Design Principles
- Zero Consensus Re-implementation: All consensus logic delegated to blvm-consensus
- Protocol Abstraction: Uses blvm-protocol for variant support (mainnet, testnet, regtest)
- Pure Infrastructure: Adds storage, networking, RPC, orchestration only
- Feature-complete infrastructure: Full node–style behavior (storage, P2P, RPC, modules) with performance optimizations; not a substitute for operational security review before production
Features
Network Features
- Multi-transport architecture (TCP, QUIC)
- Privacy-preserving relay (Dandelion++)
- High-performance block relay (Fibre)
- Package relay (BIP331)
- UTXO commitments support
- LAN peering system (automatic local network discovery for faster IBD when LAN peers exist)
Storage Features
- Multiple database backends with abstraction layer (redb, sled, rocksdb)
- Common on-disk chain layouts via RocksDB backend
- Automatic backend fallback on failure
- Pruning support
- Advanced transaction indexing (address and value range indexes)
- UTXO set management
Security Features
- IBD bandwidth protection (per-peer/IP/subnet limits, reputation scoring)
Module Features
- Process isolation
- IPC communication
- Security sandboxing
- Hot reload
- Module registry
Mining Features
- Block template generation
- Stratum V2 protocol
- Merge mining support
- Mining pool coordination
Payment Features
- Lightning Network module support
- Payment vault management
- Covenant enforcement
- Payment state machines
Integration Features
- Governance webhook integration
- ZeroMQ notifications (optional)
- REST API alongside JSON-RPC
- Module registry (P2P discovery)
Node Lifecycle
- Initialization: Load configuration, initialize storage, create network manager
- Startup: Connect to P2P network, discover peers, load modules
- Sync: Download and validate blockchain history
- Running: Validate blocks/transactions, relay messages, serve RPC requests
- Shutdown: Graceful shutdown of all components
Code: mod.rs
Metrics and Monitoring
The node includes metrics collection:
- Network Metrics: Peer count, bytes sent/received, connection statistics
- Storage Metrics: Block count, UTXO count, database size
- RPC Metrics: Request count, error rate, response times
- Performance Metrics: Block validation time, transaction processing time
- System Metrics: CPU usage, memory usage, disk I/O
Code: metrics.rs
See Also
- Installation - Installing the node
- Quick Start - Running your first node
- Node Configuration - Configuration options
- Node Operations - Node management and operations
- RPC API Reference - JSON-RPC API documentation
- Mining Integration - Mining functionality
- Module System - Module system architecture
- Storage Backends - Storage backend details
Node Configuration
BLVM node configuration supports different use cases.
Protocol Variants
The node supports multiple Bitcoin protocol variants: Regtest (default, regression testing network for development), Testnet3 (Bitcoin test network), and BitcoinV1 (production Bitcoin mainnet). See Protocol Variants for details.
Configuration Precedence
CLI > ENV > config file > defaults
Environment variables (e.g. BLVM_DATA_DIR, BLVM_IBD_EVICTION) override config file values. See Environment variables in the configuration reference for the full list. Some options (relay, fibre, dandelion) are config-file-only; use CLI flags like --enable-dandelion for common overrides.
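The precedence chain amounts to a layered lookup. A minimal sketch (the `resolve` function is illustrative, not the actual loader):

```rust
/// Layered lookup following the documented precedence:
/// CLI > environment > config file > built-in default.
fn resolve<'a>(
    cli: Option<&'a str>,
    env: Option<&'a str>,
    file: Option<&'a str>,
    default: &'a str,
) -> &'a str {
    cli.or(env).or(file).unwrap_or(default)
}
```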
Path Expansion
Config path fields (storage.data_dir, modules.modules_dir, ibd.dump_dir, etc.) support ~ expansion to the home directory when loading from file. Example: data_dir = "~/.local/share/blvm-mainnet" resolves to /home/user/.local/share/blvm-mainnet on Unix.
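The expansion behaves like the usual shell `~` substitution. A minimal sketch, assuming only the leading-tilde cases described above (the real loader may differ in edge cases):

```rust
/// Expand a leading `~` or `~/` in a config path to the home directory.
/// Paths without a leading tilde are returned unchanged.
fn expand_tilde(path: &str, home: &str) -> String {
    if path == "~" {
        home.to_string()
    } else if let Some(rest) = path.strip_prefix("~/") {
        format!("{}/{}", home.trim_end_matches('/'), rest)
    } else {
        path.to_string()
    }
}
```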
Configuration File
Create a blvm.toml configuration file:
[network]
listen_addr = "127.0.0.1:8333" # Network listening address (default: 127.0.0.1:8333)
protocol_version = "BitcoinV1" # Protocol version: "BitcoinV1" (mainnet), "Testnet3" (testnet), "Regtest" (regtest)
transport_preference = "tcp_only" # Transport preference (default: "tcp_only")
max_peers = 100 # Maximum number of peers (default: 100)
enable_self_advertisement = true # Send own address to peers (default: true)
[storage]
data_dir = "/var/lib/blvm"
database_backend = "auto" # Selects by build features: RocksDB when rocksdb enabled, then TidesDB, Redb, Sled
[rpc]
enabled = true
port = 8332
host = "127.0.0.1" # Bind address
[mining]
enabled = false
Default Values:
- listen_addr: 127.0.0.1:8333 (localhost, mainnet port)
- protocol_version: "BitcoinV1" (Bitcoin mainnet)
- transport_preference: "tcp_only" (TCP transport only)
- max_peers: 100 (maximum peer connections)
- enable_self_advertisement: true (advertise own address to peers)
Configuration is organized in logical sections (network, storage, rpc, mempool, ibd, governance, etc.) in the node codebase. Initial block download uses parallel IBD only.
IBD Configuration
Parallel IBD settings (ENV overrides config):
[ibd]
chunk_size = 16
download_timeout_secs = 30
mode = "parallel"
eviction = "fifo" # dynamic, fifo, lifo
max_blocks_in_transit_per_peer = 16
headers_timeout_secs = 30
headers_max_failures = 10
See Environment variables for IBD-related BLVM_* overrides.
Protocol Limits
Tune P2P message limits for constrained networks:
[protocol_limits]
max_protocol_message_length = 33554432 # 32 MB default
max_addr_to_send = 1000
max_inv_sz = 50000
max_headers_results = 2000
Environment Variables
You can also configure via environment variables (ENV overrides config file):
export BLVM_NETWORK=testnet
export BLVM_DATA_DIR=/var/lib/blvm
export BLVM_RPC_ADDR=127.0.0.1:8332
export BLVM_IBD_EVICTION=dynamic
export BLVM_NETWORK_TARGET_PEER_COUNT=125
Common ENV vars: BLVM_DATA_DIR, BLVM_NETWORK, BLVM_LISTEN_ADDR, BLVM_RPC_ADDR, BLVM_LOG_LEVEL, BLVM_NODE_MAX_PEERS, BLVM_IBD_*, BLVM_NETWORK_TARGET_PEER_COUNT, BLVM_REQUEST_*, BLVM_MODULE_MAX_*, RPC_AUTH_TOKENS, COMMONS_API_KEY, RUST_LOG.
See Environment variables for the complete list.
Command Line Options
Precedence: CLI > ENV > config file > defaults
Global Options
| Option | Short | Default | Description |
|---|---|---|---|
| --network | -n | regtest | Network: regtest, testnet, mainnet |
| --rpc-addr | -r | 127.0.0.1:18332 | RPC server bind address |
| --listen-addr | -l | 0.0.0.0:8333 | P2P listen address |
| --data-dir | -d | — | Data directory (overrides ENV and config) |
| --config | -c | — | Configuration file path (TOML or JSON) |
| --verbose | -v | false | Enable verbose logging |
Feature Flags
--enable-stratum-v2, --enable-bip158, --enable-dandelion, --enable-sigop and corresponding --disable-* flags.
Advanced Options
--assumevalid, --noassumevalid, --assumeutxo, --target-peer-count, --async-request-timeout, --module-max-cpu-percent, --module-max-memory-bytes.
Commands
start (default), status, health, version, chain, peers, network, sync, config show|validate|path, rpc.
blvm --network mainnet -d /var/lib/blvm
blvm config show
blvm status --rpc-addr 127.0.0.1:8332
See Command-line arguments in the configuration reference for the full CLI tables, or run blvm --help.
Storage Backends
The node uses multiple storage backends with automatic fallback:
Database Backends
- auto (default): Resolve by build features: RocksDB when the rocksdb feature is enabled, then TidesDB, Redb, Sled (see Configuration Reference)
- redb, rocksdb, sled, tidesdb: Force a specific backend (see Storage Backends)
Storage Configuration
[storage]
data_dir = "/var/lib/blvm"
database_backend = "auto" # or "redb", "sled", "rocksdb", "tidesdb"
[storage.cache]
block_cache_mb = 100
utxo_cache_mb = 50
header_cache_mb = 10
[storage.pruning]
enabled = false
keep_blocks = 288 # Keep last 288 blocks (2 days)
Backend Selection
When database_backend = "auto", the node selects by build features: RocksDB (if rocksdb feature enabled), then TidesDB, Redb, Sled. Falls back to the next option if the preferred backend is unavailable.
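The "auto" resolution is a first-match walk over a fixed preference order. A sketch under stated assumptions: availability flags here stand in for both compile-time features and runtime open failures, and `select_backend` is an illustrative name, not the node's API:

```rust
/// Feature-driven backend choice with fallback, mirroring the
/// documented "auto" order: RocksDB, then TidesDB, Redb, Sled.
fn select_backend(available: &[(&'static str, bool)]) -> Option<&'static str> {
    const ORDER: [&str; 4] = ["rocksdb", "tidesdb", "redb", "sled"];
    ORDER
        .into_iter()
        .find(|name| available.iter().any(|&(n, ok)| ok && n == *name))
}
```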
Cache Configuration
Storage cache sizes can be configured:
- Block / UTXO / header cache: See Configuration Reference for canonical defaults (e.g. 100 / 50 / 10 MB).
Pruning
Pruning reduces storage requirements by removing old block data:
[storage.pruning]
enabled = true
keep_blocks = 288 # Keep last 288 blocks (2 days)
Pruning Modes:
- Disabled (default): Keep all blocks (full archival node)
- Light client: Keep last N blocks (configurable)
- Full pruning: Remove all blocks, keep only UTXO set (planned)
Note: Pruning reduces storage but limits ability to serve historical blocks to peers.
Network Configuration
Transport Options
Configure transport selection (see Transport Abstraction):
[network]
transport_preference = "tcp_only" # Options: "tcp_only" (default), "iroh_only" (requires iroh feature), "quinn_only" (requires quinn feature), "hybrid" (requires iroh feature), "all" (requires both iroh and quinn features)
Available Transport Options:
- "tcp_only" – TCP transport only (default, Bitcoin P2P compatible)
- "iroh_only" – Iroh QUIC transport only (requires iroh feature)
- "quinn_only" – Quinn QUIC transport only (requires quinn feature)
- "hybrid" – TCP + Iroh hybrid mode (requires iroh feature)
- "all" – All transports enabled (requires both iroh and quinn features)
Feature Requirements:
- iroh feature: Enables Iroh QUIC transport with NAT traversal
- quinn feature: Enables standalone Quinn QUIC transport
RBF Configuration
Configure Replace-By-Fee (RBF) behavior with 4 modes: Disabled, Conservative, Standard (default), and Aggressive.
RBF Modes
Disabled: No RBF replacements allowed
[rbf]
mode = "disabled"
Conservative: Strict rules with higher fee requirements
[rbf]
mode = "conservative"
min_fee_rate_multiplier = 2.0
min_fee_bump_satoshis = 5000
min_confirmations = 1
max_replacements_per_tx = 3
cooldown_seconds = 300
Standard (default): BIP125-compliant RBF
[rbf]
mode = "standard"
min_fee_rate_multiplier = 1.1
min_fee_bump_satoshis = 1000
Aggressive: Relaxed rules for miners
[rbf]
mode = "aggressive"
min_fee_rate_multiplier = 1.05
min_fee_bump_satoshis = 500
allow_package_replacements = true
See RBF and Mempool Policies for complete configuration guide.
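The fee checks the modes share can be sketched as a single predicate: a replacement must beat the original fee rate by the configured multiplier AND add at least the minimum absolute fee bump. The struct fields mirror the config keys above; the function name and exact semantics are illustrative, not the node's implementation:

```rust
/// Fee-related RBF policy knobs, mirroring the config keys.
struct RbfPolicy {
    min_fee_rate_multiplier: f64,
    min_fee_bump_satoshis: u64,
}

/// A replacement passes only if it clears both the relative fee-rate
/// bar and the absolute fee-bump bar.
fn replacement_allowed(
    policy: &RbfPolicy,
    old_fee_rate: f64, // sat/vB of the transaction being replaced
    new_fee_rate: f64, // sat/vB of the replacement
    old_fee: u64,      // total fee of the original, in satoshis
    new_fee: u64,      // total fee of the replacement, in satoshis
) -> bool {
    new_fee_rate >= old_fee_rate * policy.min_fee_rate_multiplier
        && new_fee >= old_fee + policy.min_fee_bump_satoshis
}
```

Conservative mode tightens both knobs (2.0x, 5000 sats) and layers on confirmation, replacement-count, and cooldown limits not shown here.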
Advanced Indexing
Enable address and value range indexing for efficient queries:
[storage.indexing]
enable_address_index = true
enable_value_index = true
strategy = "eager" # or "lazy"
max_indexed_addresses = 1000000
Module Configuration
Configure process-isolated modules:
[modules]
enabled = true # Enable module system (default: true)
modules_dir = "modules" # Directory containing module binaries (default: "modules")
data_dir = "data/modules" # Directory for module data/state (default: "data/modules")
socket_dir = "data/modules/sockets" # Directory for IPC sockets (default: "data/modules/sockets")
# Paths support ~ expansion (e.g. "~/.local/share/blvm-modules")
enabled_modules = ["blvm-lightning", "blvm-mesh"] # List of enabled modules (empty = auto-discover all)
Module Resource Limits (optional):
[modules.resource_limits]
default_max_cpu_percent = 50 # Max CPU usage per module (default: 50%)
default_max_memory_bytes = 536870912 # Max memory per module (default: 512 MB)
default_max_file_descriptors = 256 # Max file descriptors per module (default: 256)
default_max_child_processes = 10 # Max child processes per module (default: 10)
module_startup_wait_millis = 100 # Wait time for module startup (default: 100ms)
module_socket_timeout_seconds = 5 # IPC socket timeout (default: 5s)
module_socket_check_interval_millis = 100 # Socket check interval (default: 100ms)
module_socket_max_attempts = 50 # Max socket connection attempts (default: 50)
See Module System for module configuration details.
See Also
- Node Overview - Node features and architecture
- Node Operations - Running and managing your node
- Storage Backends - Detailed storage backend information
- Transport Abstraction - Transport options
- Network Protocol - Protocol variants and network configuration
- Configuration Reference - Complete configuration reference
- Getting Started - Installation guide
- Troubleshooting - Common configuration issues
RBF and Mempool Policies
Configure Replace-By-Fee (RBF) behavior and mempool policies to control transaction acceptance, eviction, and limits.
RBF and Mempool Flow
An incoming transaction first passes the configured RBF mode: disabled rejects all replacements, while conservative (2x fee, 1 confirmation), standard (1.1x fee, BIP125), and aggressive (1.05x fee, packages) apply progressively looser replacement rules. Accepted candidates then go through mempool checks: if the size limit is reached, an eviction strategy (lowest fee rate, oldest first, largest first, no descendants first, or hybrid) frees space; transactions that clear the fee threshold are accepted into the mempool, and those below it are rejected.
RBF Configuration
RBF allows transactions to be replaced by new transactions that spend the same inputs but pay higher fees. BLVM supports 4 configurable RBF modes.
RBF Modes
Disabled
No RBF replacements are allowed. All transactions are final once added to the mempool.
Use Cases:
- Enterprise/compliance requirements
- Nodes that prioritize transaction finality
- Exchanges with strict security policies
Configuration:
[rbf]
mode = "disabled"
Conservative
Strict RBF rules with higher fee requirements and additional safety checks.
Features:
- 2x fee rate multiplier (100% increase required)
- 5000 sat minimum absolute fee bump
- 1 confirmation minimum before allowing replacement
- Maximum 3 replacements per transaction
- 300 second cooldown period
Use Cases:
- Exchanges
- Wallets prioritizing user safety
- Nodes that want to prevent RBF spam
Configuration:
[rbf]
mode = "conservative"
min_fee_rate_multiplier = 2.0
min_fee_bump_satoshis = 5000
min_confirmations = 1
max_replacements_per_tx = 3
cooldown_seconds = 300
Standard (Default)
BIP125-compliant RBF with standard fee requirements.
Features:
- 1.1x fee rate multiplier (10% increase, BIP125 minimum)
- 1000 sat minimum absolute fee bump (BIP125 MIN_RELAY_FEE)
- No confirmation requirement
- Maximum 10 replacements per transaction
- 60 second cooldown period
Use Cases:
- General purpose nodes
- Default configuration
- Familiar defaults for operators coming from common node configs
Configuration:
[rbf]
mode = "standard"
min_fee_rate_multiplier = 1.1
min_fee_bump_satoshis = 1000
Aggressive
Relaxed RBF rules for miners and high-throughput nodes.
Features:
- 1.05x fee rate multiplier (5% increase)
- 500 sat minimum absolute fee bump
- Package replacement support
- Maximum 10 replacements per transaction
- 60 second cooldown period
Use Cases:
- Mining pools
- High-throughput nodes
- Nodes prioritizing fee revenue
Configuration:
[rbf]
mode = "aggressive"
min_fee_rate_multiplier = 1.05
min_fee_bump_satoshis = 500
allow_package_replacements = true
max_replacements_per_tx = 10
cooldown_seconds = 60
RBF Configuration Parameters
| Parameter | Description | Default |
|---|---|---|
| `mode` | RBF mode: disabled, conservative, standard, aggressive | standard |
| `min_fee_rate_multiplier` | Minimum fee rate multiplier for replacement | Mode-specific |
| `min_fee_bump_satoshis` | Minimum absolute fee bump in satoshis | Mode-specific |
| `min_confirmations` | Minimum confirmations before allowing replacement | 0 |
| `allow_package_replacements` | Allow package replacements | false |
| `max_replacements_per_tx` | Maximum replacements per transaction | Mode-specific |
| `cooldown_seconds` | Replacement cooldown period | Mode-specific |
BIP125 Compliance
All modes enforce BIP125 rules:
- Existing transaction must signal RBF (at least one input with sequence < 0xfffffffe)
- New transaction must have higher fee rate
- New transaction must have higher absolute fee
- New transaction must conflict with existing transaction
- No new unconfirmed dependencies
Mode-specific requirements are applied in addition to BIP125 rules.
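As a sketch of how the BIP125 floor and the mode-specific parameters compose, the core fee checks can be expressed as follows. This is an illustrative helper, not the actual blvm-consensus API; the real implementation also checks conflicts, dependencies, and replacement counts.

```rust
/// Hypothetical mode parameters (see the RBF Configuration Parameters table).
struct RbfParams {
    min_fee_rate_multiplier: f64,
    min_fee_bump_satoshis: u64,
}

/// `true` if `new` may replace `old` under the fee-related BIP125 rules,
/// tightened by the mode's multiplier and minimum absolute bump.
fn may_replace(
    old_signals_rbf: bool, // any input sequence < 0xfffffffe
    old_fee: u64, old_vsize: u64,
    new_fee: u64, new_vsize: u64,
    params: &RbfParams,
) -> bool {
    if !old_signals_rbf {
        return false; // non-signaling transactions are never replaceable
    }
    let old_rate = old_fee as f64 / old_vsize as f64;
    let new_rate = new_fee as f64 / new_vsize as f64;
    new_rate >= old_rate * params.min_fee_rate_multiplier
        && new_fee >= old_fee + params.min_fee_bump_satoshis
}
```

With the standard mode's parameters (1.1x, 1000 sats), a 2000-sat replacement of a 1000-sat transaction at equal size passes; the same replacement of a non-signaling transaction does not.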
Mempool Policies
Configure mempool size limits, fee thresholds, eviction strategies, and transaction expiry.
Size Limits
[mempool]
max_mempool_mb = 300 # Maximum mempool size in MB (default: 300)
max_mempool_txs = 100000 # Maximum number of transactions (default: 100000)
Fee Thresholds
[mempool]
min_relay_fee_rate = 1 # Minimum relay fee rate (sat/vB, default: 1)
min_tx_fee = 1000 # Minimum transaction fee (satoshis, default: 1000)
incremental_relay_fee = 1000 # Incremental relay fee (satoshis, default: 1000)
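Under these settings, a transaction must satisfy both the rate-based and the absolute minimum. A minimal sketch of that check (illustrative only; field names mirror the config above, not the actual code):

```rust
/// Returns `true` if a transaction clears both fee thresholds:
/// its total fee meets the per-vbyte relay rate and the absolute minimum.
fn passes_fee_thresholds(
    fee_sats: u64,
    vsize: u64,
    min_relay_fee_rate: u64, // sat/vB
    min_tx_fee: u64,         // satoshis
) -> bool {
    // Rate requirement: vsize * min_relay_fee_rate satoshis in total.
    let required_by_rate = vsize * min_relay_fee_rate;
    fee_sats >= required_by_rate && fee_sats >= min_tx_fee
}
```

For example, a 250 vB transaction at the defaults (1 sat/vB, 1000 sat minimum) needs at least 1000 sats total, while a 2000 vB transaction needs at least 2000 sats.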
Eviction Strategies
Choose from 5 eviction strategies when mempool limits are reached:
Lowest Fee Rate (Default)
Evicts transactions with the lowest fee rate first. Maximizes average fee rate of remaining transactions.
Best for:
- Mining pools
- Nodes prioritizing fee revenue
- Familiar defaults for operators coming from common node configs
[mempool]
eviction_strategy = "lowest_fee_rate"
Oldest First (FIFO)
Evicts the oldest transactions first, regardless of fee rate.
Best for:
- Nodes with strict time-based policies
- Preventing transaction aging issues
[mempool]
eviction_strategy = "oldest_first"
Largest First
Evicts the largest transactions first to free the most space quickly.
Best for:
- Nodes with limited memory
- Quick space recovery
[mempool]
eviction_strategy = "largest_first"
No Descendants First
Evicts transactions with no descendants first. Prevents orphaning dependent transactions.
Best for:
- Nodes prioritizing transaction package integrity
- Preventing cascading evictions
[mempool]
eviction_strategy = "no_descendants_first"
Hybrid
Combines fee rate and age with configurable weights.
Best for:
- Custom eviction policies
- Balancing multiple factors
[mempool]
eviction_strategy = "hybrid"
Ancestor/Descendant Limits
Prevent transaction package spam and ensure mempool stability:
[mempool]
max_ancestor_count = 25 # Maximum ancestor count (default: 25)
max_ancestor_size = 101000 # Maximum ancestor size in bytes (default: 101000)
max_descendant_count = 25 # Maximum descendant count (default: 25)
max_descendant_size = 101000 # Maximum descendant size in bytes (default: 101000)
Ancestors: Transactions that a given transaction depends on (parent transactions)
Descendants: Transactions that depend on a given transaction (child transactions)
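Enforcing `max_ancestor_count` amounts to walking the in-mempool dependency graph. A toy sketch (the real mempool tracks this incrementally; the parent-map representation here is an assumption for illustration):

```rust
use std::collections::{HashMap, HashSet};

/// Count a transaction's in-mempool ancestors given a map from each
/// txid to the txids of its in-mempool parents. Illustrative only.
fn ancestor_count(tx: u32, parents: &HashMap<u32, Vec<u32>>) -> usize {
    let mut seen = HashSet::new();
    let mut stack = parents.get(&tx).cloned().unwrap_or_default();
    while let Some(p) = stack.pop() {
        if seen.insert(p) {
            // Ancestors of a parent are ancestors of the transaction too.
            stack.extend(parents.get(&p).cloned().unwrap_or_default());
        }
    }
    seen.len()
}
```

A chain of three unconfirmed transactions (1 ← 2 ← 3) gives the tip two ancestors; a package would be rejected once this count exceeded `max_ancestor_count`.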
Transaction Expiry
[mempool]
mempool_expiry_hours = 336 # Transaction expiry in hours (default: 336 = 14 days)
Mempool Persistence
Persist mempool across restarts:
[mempool]
persist_mempool = true
mempool_persistence_path = "data/mempool.dat"
Configuration Examples
Exchange Node (Conservative)
[rbf]
mode = "conservative"
min_fee_rate_multiplier = 2.0
min_fee_bump_satoshis = 5000
min_confirmations = 1
max_replacements_per_tx = 3
cooldown_seconds = 300
[mempool]
max_mempool_mb = 500
max_mempool_txs = 200000
min_relay_fee_rate = 2
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 25
max_descendant_count = 25
persist_mempool = true
Mining Pool (Aggressive)
[rbf]
mode = "aggressive"
min_fee_rate_multiplier = 1.05
min_fee_bump_satoshis = 500
allow_package_replacements = true
max_replacements_per_tx = 10
cooldown_seconds = 60
[mempool]
max_mempool_mb = 1000
max_mempool_txs = 500000
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 50
max_descendant_count = 50
Standard Node (Default)
[rbf]
mode = "standard"
[mempool]
max_mempool_mb = 300
max_mempool_txs = 100000
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 25
max_descendant_count = 25
Best Practices
- Exchanges: Use conservative RBF and higher fee thresholds
- Miners: Use aggressive RBF and larger mempool sizes
- General Users: Use standard/default settings
- High-Throughput Nodes: Increase size limits and use aggressive eviction
Default policy alignment
These defaults match widely used mainnet mempool parameters:
- `max_mempool_mb`: 300 MB
- `min_relay_fee_rate`: 1 sat/vB
- `max_ancestor_count`: 25
- `max_ancestor_size`: 101 kB
- `max_descendant_count`: 25
- `max_descendant_size`: 101 kB
- `eviction_strategy`: `lowest_fee_rate`
See Also
- Node Configuration - Complete node configuration guide
- Node Overview - Node features and architecture
- Mempool Manager - Mempool implementation details
Node Operations
Operational guide for running and maintaining a BLVM node.
Starting the Node
Basic Startup
# Regtest mode (default, safe for development)
blvm
# Testnet mode
blvm --network testnet
# Mainnet mode (use with caution)
blvm --network mainnet
With Configuration
blvm --config blvm.toml
Node Lifecycle
The node follows a lifecycle with multiple states and transitions.
Lifecycle States
The node operates in the following states:
Initial → Headers → Blocks → Synced
↓ ↓ ↓ ↓
Error Error Error Error
State Descriptions:
- Initial: Node starting up, initializing components
- Headers: Downloading and validating block headers
- Blocks: Downloading and validating full blocks
- Synced: Fully synchronized, normal operation
- Error: Error state (can transition from any state)
Code: sync.rs
State Transitions
State transitions are managed by the SyncStateMachine:
- Initial → Headers: When sync begins
- Headers → Blocks: When headers are complete (30% progress)
- Blocks → Synced: When blocks are complete (60% progress)
- Any → Error: On error conditions
Code: sync.rs
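The transitions above can be sketched as a plain state enum. This is a simplified model of the `SyncStateMachine` in `sync.rs`, which carries progress and error context not shown here.

```rust
/// Simplified model of the sync lifecycle states.
#[derive(Debug, Clone, Copy, PartialEq)]
enum SyncState {
    Initial,
    Headers,
    Blocks,
    Synced,
    Error,
}

/// Advance to the next state on successful completion of the current phase.
fn advance(state: SyncState) -> SyncState {
    match state {
        SyncState::Initial => SyncState::Headers,
        SyncState::Headers => SyncState::Blocks,
        SyncState::Blocks => SyncState::Synced,
        // Synced stays put; Error requires explicit recovery, not advancement.
        s => s,
    }
}
```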
Initial Sync
When starting for the first time, the node will:
- Initialize Components: Storage, network, RPC, modules
- Connect to P2P Network: Discover peers via DNS seeds or persistent peers
- Download Headers: Request and validate block headers
- Download Blocks: Request and validate blocks
- Build UTXO Set: Construct UTXO set from validated blocks
- Sync to Current Height: Continue until caught up with network
Code: sync.rs
Running State
Once synced, the node maintains:
- Peer Connections: Active P2P connections
- Block Validation: Validates and relays new blocks (via blvm-consensus)
- Transaction Processing: Validates and relays transactions
- Chain State Updates: Updates chain tip and height
- RPC Requests: Serves JSON-RPC API requests
- Health Monitoring: Periodic health checks
Code: mod.rs
Health States
The node tracks health status for each component:
- Healthy: Component operating normally
- Degraded: Component functional but with issues
- Unhealthy: Component not functioning correctly
- Down: Component not responding
Code: health.rs
Error Recovery
The node implements graceful error recovery:
- Network Errors: Automatic reconnection with exponential backoff
- Storage Errors: Timeout protection, graceful degradation
- Validation Errors: Logged and reported, node continues operation
- Disk Space: Periodic checks with warnings
Code: mod.rs
Monitoring
Health Checks
# Check if node is running
curl http://localhost:8332/health
# Get blockchain info via RPC
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1}'
Logging
The node uses structured logging. Set log level via environment variable:
# Set log level
RUST_LOG=info blvm
# Debug mode
RUST_LOG=debug blvm
# Trace all operations
RUST_LOG=trace blvm
Maintenance
Database Maintenance
The node automatically maintains block storage, UTXO set, chain indexes, and transaction indexes.
Backup
Regular backups are recommended:
# Backup data directory
tar -czf blvm-backup-$(date +%Y%m%d).tar.gz /var/lib/blvm
Updates
When updating the node:
- Stop the node gracefully
- Backup data directory
- Download new binary from GitHub Releases
- Replace old binary with new one
- Restart node
Troubleshooting
See Troubleshooting for detailed solutions to common issues.
See Also
- Node Configuration - Configuration options
- Node Overview - Node architecture and features
- RPC API Reference - Complete RPC API documentation
- Troubleshooting - Common issues and solutions
- Performance Optimizations - Performance tuning
RPC API Reference
BLVM node provides both a JSON-RPC 2.0 interface (conventional Bitcoin RPC surface) and a modern REST API for interacting with the node.
API Overview
- JSON-RPC 2.0: Methods aligned with widely documented Bitcoin node RPC docs
  - Mainnet: http://localhost:8332 (default)
  - Testnet/Regtest: http://localhost:18332 (default)
- REST API: Modern RESTful interface at http://localhost:8080/api/v1/
Both APIs provide access to the same functionality, with the REST API offering better type safety, clearer error messages, and improved developer experience.
Connection
Default RPC endpoints:
- Mainnet: http://localhost:8332
- Testnet/Regtest: http://localhost:18332
RPC ports are configurable. See Node Configuration for details.
Authentication
For production use, configure RPC authentication:
[rpc]
enabled = true
username = "rpcuser"
password = "rpcpassword"
Example Requests
Get Blockchain Info
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "getblockchaininfo",
"params": [],
"id": 1
}'
Get Block
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "getblock",
"params": ["000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"],
"id": 1
}'
Get Network Info
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "getnetworkinfo",
"params": [],
"id": 1
}'
Available Methods
The following RPC methods are implemented, grouped by category:
Blockchain Methods
- `getblockchaininfo` - Get blockchain information
- `getblock` - Get block by hash
- `getblockhash` - Get block hash by height
- `getblockheader` - Get block header by hash
- `getbestblockhash` - Get best block hash
- `getblockcount` - Get current block height
- `getdifficulty` - Get current difficulty
- `gettxoutsetinfo` - Get UTXO set statistics
- `verifychain` - Verify blockchain database
- `getblockfilter` - Get block filter (BIP158)
- `getindexinfo` - Get index information
- `getblockchainstate` - Get blockchain state
- `invalidateblock` - Invalidate a block
- `reconsiderblock` - Reconsider a previously invalidated block
- `waitfornewblock` - Wait for a new block
- `waitforblock` - Wait for a specific block
- `waitforblockheight` - Wait for a specific block height
Raw Transaction Methods
- `getrawtransaction` - Get transaction by txid
- `sendrawtransaction` - Submit transaction to mempool
- `testmempoolaccept` - Test if transaction would be accepted
- `decoderawtransaction` - Decode raw transaction hex
- `createrawtransaction` - Create a raw transaction
- `gettxout` - Get UTXO information
- `gettxoutproof` - Get merkle proof for transaction
- `verifytxoutproof` - Verify merkle proof
Mempool Methods
- `getmempoolinfo` - Get mempool statistics
- `getrawmempool` - List transactions in mempool
- `savemempool` - Persist mempool to disk
- `getmempoolancestors` - Get mempool ancestors of a transaction
- `getmempooldescendants` - Get mempool descendants of a transaction
- `getmempoolentry` - Get mempool entry for a transaction
Network Methods
- `getnetworkinfo` - Get network information
- `getpeerinfo` - Get connected peers
- `getconnectioncount` - Get number of connections
- `ping` - Ping connected peers
- `addnode` - Add/remove node from peer list
- `disconnectnode` - Disconnect specific node
- `getnettotals` - Get network statistics
- `clearbanned` - Clear banned nodes
- `setban` - Ban/unban a subnet
- `listbanned` - List banned nodes
- `getaddednodeinfo` - Get information about manually added nodes
- `getnodeaddresses` - Get known node addresses
- `setnetworkactive` - Enable or disable network activity
Mining Methods
- `getmininginfo` - Get mining information
- `getblocktemplate` - Get block template for mining
- `submitblock` - Submit a mined block
- `estimatesmartfee` - Estimate smart fee rate
- `prioritisetransaction` - Prioritize a transaction in mempool
Control Methods
- `stop` - Stop the node
- `uptime` - Get node uptime
- `getmemoryinfo` - Get memory usage information
- `getrpcinfo` - Get RPC server information
- `help` - Get help for RPC methods
- `logging` - Control logging levels
- `gethealth` - Get node health status
- `getmetrics` - Get node metrics
Address Methods
- `validateaddress` - Validate a Bitcoin address
- `getaddressinfo` - Get detailed address information
Transaction Methods
- `gettransactiondetails` - Get detailed transaction information
Payment Methods (BIP70)
- `createpaymentrequest` - Create a BIP70 payment request (requires `bip70-http` feature)
Error Codes
The RPC API uses conventional JSON-RPC 2.0 error codes (same families as common Bitcoin node docs):
Standard JSON-RPC Errors
| Code | Name | Description |
|---|---|---|
| -32700 | Parse error | Invalid JSON was received |
| -32600 | Invalid Request | The JSON sent is not a valid Request object |
| -32601 | Method not found | The method does not exist |
| -32602 | Invalid params | Invalid method parameter(s) |
| -32603 | Internal error | Internal JSON-RPC error |
Bitcoin-Specific Errors
| Code | Name | Description |
|---|---|---|
| -1 | Transaction already in chain | Transaction is already in blockchain |
| -1 | Transaction missing inputs | Transaction references non-existent inputs |
| -5 | Block not found | Block hash not found |
| -5 | Transaction not found | Transaction hash not found |
| -5 | UTXO not found | UTXO does not exist |
| -25 | Transaction rejected | Transaction rejected by consensus rules |
| -27 | Transaction already in mempool | Transaction already in mempool |
Code: errors.rs
Error Response Format
{
"jsonrpc": "2.0",
"error": {
"code": -32602,
"message": "Invalid params",
"data": {
"param": "blockhash",
"reason": "Invalid hex string"
}
},
"id": 1
}
Code: errors.rs
Authentication
RPC authentication is optional but recommended for production:
Token-Based Authentication
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1}'
Certificate-Based Authentication
TLS client certificates can be used for authentication when QUIC transport is enabled.
Code: auth.rs
Rate Limiting
Rate limiting is enforced per IP, per user, and per method:
- Authenticated users: 100 burst, 10 req/sec
- Unauthenticated: 50 burst, 5 req/sec
- Per-method limits: May override defaults for specific methods
Code: server.rs
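The burst-plus-rate limits above match a classic token bucket. A minimal sketch, assuming the documented numbers (burst 100, 10 req/sec for authenticated users); the actual limiter in `server.rs` may differ in structure:

```rust
/// Minimal token bucket: `burst` is capacity, `rate` is tokens added per second.
struct TokenBucket {
    tokens: f64,
    burst: f64,
    rate: f64,
    last: f64, // seconds since some epoch
}

impl TokenBucket {
    fn new(burst: f64, rate: f64) -> Self {
        Self { tokens: burst, burst, rate, last: 0.0 }
    }

    /// Refill based on elapsed time, then try to spend one token.
    fn allow(&mut self, now: f64) -> bool {
        self.tokens = (self.tokens + (now - self.last) * self.rate).min(self.burst);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

A bucket with burst 2 and rate 1 allows two immediate requests, rejects a third, and allows another one second later.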
Request/Response Format
Request Format
{
"jsonrpc": "2.0",
"method": "getblockchaininfo",
"params": [],
"id": 1
}
Response Format
Success Response:
{
"jsonrpc": "2.0",
"result": {
"chain": "regtest",
"blocks": 123456,
"headers": 123456,
"bestblockhash": "0000...",
"difficulty": 4.656542373906925e-10
},
"id": 1
}
Error Response:
{
"jsonrpc": "2.0",
"error": {
"code": -32602,
"message": "Invalid params"
},
"id": 1
}
Code: types.rs
Batch Requests
Multiple requests can be sent in a single batch:
[
{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1},
{"jsonrpc": "2.0", "method": "getblockhash", "params": [100], "id": 2},
{"jsonrpc": "2.0", "method": "getblock", "params": ["0000..."], "id": 3}
]
Responses are returned in the same order as requests.
Implementation Status
The RPC API implements JSON-RPC 2.0 methods documented in the Available Methods section above.
REST API
Overview
The REST API provides a modern, developer-friendly interface alongside the JSON-RPC API. It uses standard HTTP methods and status codes, with JSON request/response bodies.
Base URL: http://localhost:8080/api/v1/
Code: rest/mod.rs
Authentication
REST API authentication works the same as JSON-RPC:
# Token-based authentication
curl -H "Authorization: Bearer <token>" http://localhost:8080/api/v1/node/uptime
# Basic authentication (if configured)
curl -u username:password http://localhost:8080/api/v1/node/uptime
Rate Limiting
Rate limiting is enforced per IP, per user, and per endpoint:
- Authenticated users: 100 burst, 10 req/sec
- Unauthenticated: 50 burst, 5 req/sec
- Per-endpoint limits: Stricter limits for write operations
Code: rest/server.rs
Response Format
All REST API responses follow a consistent format:
Success Response:
{
"status": "success",
"data": {
"chain": "regtest",
"blocks": 123456
},
"request_id": "550e8400-e29b-41d4-a716-446655440000"
}
Error Response:
{
"status": "error",
"error": {
"code": "NOT_FOUND",
"message": "Block not found",
"details": "Block hash 0000... does not exist"
},
"request_id": "550e8400-e29b-41d4-a716-446655440000"
}
Code: rest/types.rs
Endpoints
Node Endpoints
Code: rest/node.rs
- `GET /api/v1/node/uptime` - Get node uptime
- `GET /api/v1/node/memory` - Get memory information
- `GET /api/v1/node/memory?mode=detailed` - Get detailed memory info
- `GET /api/v1/node/rpc-info` - Get RPC server information
- `GET /api/v1/node/help` - Get help for all commands
- `GET /api/v1/node/help?command=getblock` - Get help for specific command
- `GET /api/v1/node/logging` - Get logging configuration
- `POST /api/v1/node/logging` - Update logging configuration
- `POST /api/v1/node/stop` - Stop the node
Example:
curl http://localhost:8080/api/v1/node/uptime
Chain Endpoints
- `GET /api/v1/chain/info` - Get blockchain information
- `GET /api/v1/chain/blockhash/{height}` - Get block hash by height
- `GET /api/v1/chain/blockcount` - Get current block height
- `GET /api/v1/chain/difficulty` - Get current difficulty
- `GET /api/v1/chain/txoutsetinfo` - Get UTXO set statistics
- `POST /api/v1/chain/verify` - Verify blockchain database
Example:
curl http://localhost:8080/api/v1/chain/info
Block Endpoints
Code: rest/blocks.rs
- `GET /api/v1/blocks/{hash}` - Get block by hash
- `GET /api/v1/blocks/{hash}/transactions` - Get block transactions
- `GET /api/v1/blocks/{hash}/header` - Get block header
- `GET /api/v1/blocks/{hash}/header?verbose=true` - Get verbose block header
- `GET /api/v1/blocks/{hash}/stats` - Get block statistics
- `GET /api/v1/blocks/{hash}/filter` - Get BIP158 block filter
- `GET /api/v1/blocks/{hash}/filter?filtertype=basic` - Get specific filter type
- `GET /api/v1/blocks/height/{height}` - Get block by height
- `POST /api/v1/blocks/{hash}/invalidate` - Invalidate block
- `POST /api/v1/blocks/{hash}/reconsider` - Reconsider invalidated block
Example:
curl http://localhost:8080/api/v1/blocks/000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
Transaction Endpoints
- `GET /api/v1/transactions/{txid}` - Get transaction by txid
- `GET /api/v1/transactions/{txid}?verbose=true` - Get verbose transaction
- `POST /api/v1/transactions` - Submit raw transaction
- `POST /api/v1/transactions/test` - Test if transaction would be accepted
- `GET /api/v1/transactions/{txid}/out/{n}` - Get UTXO information
Example:
curl -X POST http://localhost:8080/api/v1/transactions \
-H "Content-Type: application/json" \
-d '{"hex": "0100000001..."}'
Address Endpoints
- `GET /api/v1/addresses/{address}/balance` - Get address balance
- `GET /api/v1/addresses/{address}/transactions` - Get address transaction history
- `GET /api/v1/addresses/{address}/utxos` - Get address UTXOs
Example:
curl http://localhost:8080/api/v1/addresses/1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa/balance
Mempool Endpoints
- `GET /api/v1/mempool/info` - Get mempool information
- `GET /api/v1/mempool/transactions` - List transactions in mempool
- `GET /api/v1/mempool/transactions?verbose=true` - List verbose transactions
- `POST /api/v1/mempool/save` - Persist mempool to disk
Example:
curl http://localhost:8080/api/v1/mempool/info
Network Endpoints
Code: rest/network.rs
- `GET /api/v1/network/info` - Get network information
- `GET /api/v1/network/peers` - Get connected peers
- `GET /api/v1/network/connections/count` - Get connection count
- `GET /api/v1/network/totals` - Get network statistics
- `GET /api/v1/network/nodes` - Get added node information
- `GET /api/v1/network/nodes?dns=true` - Get added nodes with DNS lookup
- `GET /api/v1/network/nodes/addresses` - Get node addresses
- `GET /api/v1/network/nodes/addresses?count=10` - Get N node addresses
- `GET /api/v1/network/bans` - List banned nodes
- `POST /api/v1/network/ping` - Ping connected peers
- `POST /api/v1/network/nodes` - Add node to peer list
- `POST /api/v1/network/active` - Activate node connection
- `POST /api/v1/network/bans` - Ban/unban a subnet
- `DELETE /api/v1/network/nodes/{address}` - Remove node from peer list
- `DELETE /api/v1/network/bans` - Clear all banned nodes
Example:
curl http://localhost:8080/api/v1/network/info
Fee Estimation Endpoints
- `GET /api/v1/fees/estimate` - Estimate fee rate
- `GET /api/v1/fees/estimate?blocks=6` - Estimate fee for N blocks
- `GET /api/v1/fees/smart` - Get smart fee estimate
Example:
curl http://localhost:8080/api/v1/fees/estimate?blocks=6
Payment Endpoints (BIP70 HTTP)
Requires: --features bip70-http
Code: rest/payment.rs
- `GET /api/v1/payments/{payment_id}` - Get payment status
- `POST /api/v1/payments` - Create payment request
- `POST /api/v1/payments/{payment_id}/pay` - Submit payment
- `POST /api/v1/payments/{payment_id}/cancel` - Cancel payment
Vault Endpoints (CTV)
Requires: --features ctv
Code: rest/vault.rs
- `GET /api/v1/vaults` - List vaults
- `GET /api/v1/vaults/{vault_id}` - Get vault information
- `POST /api/v1/vaults` - Create vault
- `POST /api/v1/vaults/{vault_id}/deposit` - Deposit to vault
- `POST /api/v1/vaults/{vault_id}/withdraw` - Withdraw from vault
Pool Endpoints (CTV)
Requires: --features ctv
Code: rest/pool.rs
- `GET /api/v1/pools` - List pools
- `GET /api/v1/pools/{pool_id}` - Get pool information
- `POST /api/v1/pools` - Create pool
- `POST /api/v1/pools/{pool_id}/join` - Join pool
- `POST /api/v1/pools/{pool_id}/leave` - Leave pool
Congestion Control Endpoints (CTV)
Requires: --features ctv
Code: rest/congestion.rs
- `GET /api/v1/congestion/status` - Get congestion status
- `GET /api/v1/batches` - List pending batches
- `POST /api/v1/batches` - Create batch
- `POST /api/v1/batches/{batch_id}/submit` - Submit batch
Security Headers
The REST API includes security headers by default:
- `X-Content-Type-Options: nosniff`
- `X-Frame-Options: DENY`
- `X-XSS-Protection: 1; mode=block`
- `Strict-Transport-Security: max-age=31536000` (when TLS enabled)
Code: rest/server.rs
Error Codes
REST API uses standard HTTP status codes:
| Status Code | Meaning |
|---|---|
| 200 | Success |
| 400 | Bad Request (invalid parameters) |
| 401 | Unauthorized (authentication required) |
| 404 | Not Found (resource doesn’t exist) |
| 429 | Too Many Requests (rate limit exceeded) |
| 500 | Internal Server Error |
| 503 | Service Unavailable (feature not enabled) |
See Also
- Node Overview - Node implementation details
- Node Configuration - RPC configuration options
- Node Operations - Node management
- Getting Started - Quick start guide
- API Index - Cross-reference to all APIs
- Troubleshooting - Common RPC issues
- Commons Mesh Module - Payment endpoints and mesh networking
- Module System - Module architecture and IPC
Storage Backends
Overview
The node supports multiple database backends for persistent storage of blocks, UTXO set, and chain state. When database_backend = "auto" (the default), the backend is chosen by build features: RocksDB when the rocksdb feature is enabled, then TidesDB, Redb, Sled. See Configuration Reference for the full database_backend options. The system falls back gracefully if the preferred backend is unavailable.
Supported Backends
redb
redb is a production-ready embedded database (selected by auto when the redb feature is enabled and RocksDB/TidesDB are not):
- Pure Rust: No C dependencies
- ACID Compliance: Full ACID transactions
- Production Ready: Stable, well-tested
- Performance: Optimized for read-heavy workloads
- Storage: Efficient key-value storage
Code: database/mod.rs
sled (Fallback)
sled is available as a fallback option:
- Beta Quality: Not recommended for production
- Pure Rust: No C dependencies
- Performance: Good for development and testing
- Storage: Key-value storage with B-tree indexing
Code: database/mod.rs (redb/sled backends)
rocksdb (Optional, common on-disk layouts)
rocksdb is an optional high-performance backend that can interoperate with typical Bitcoin chain on-disk layouts:
- LevelDB-format chain state: RocksDB can read LevelDB-format databases used by common reference deployments
- Automatic detection: Detects and uses existing data directories when present
- Block files: Direct access to raw block files (`blk*.dat`) where supported
- Format parsing: Parsers for common internal key layouts
- High performance: Optimized for large-scale blockchain data
- System dependency: Requires `libclang` for build
- Feature flag: `rocksdb` (optional, not enabled by default)
Interop notes:
- Detects standard data directory conventions
- Uses RocksDB (not LevelDB directly) with LevelDB-format compatibility where applicable
- Accesses block files (`blk*.dat`) with lazy indexing
- Supports mainnet, testnet, regtest, and signet networks
Important: Deployment-specific paths and formats vary; verify against your data source.
Code: database/mod.rs, bitcoin_core_storage.rs
Note: RocksDB requires the rocksdb feature flag. RocksDB and erlay features are mutually exclusive due to dependency conflicts (both require libclang/LLVM).
Backend Selection
When database_backend = "auto", the node chooses the backend by build features in this order:
- RocksDB (if the `rocksdb` feature is enabled)
- TidesDB (if the `tidesdb` feature is enabled and RocksDB is not)
- Redb (if the `redb` feature is enabled and neither RocksDB nor TidesDB is)
- Sled (if the `sled` feature is enabled and no other backend is)
At least one backend feature must be enabled at build time. If the chosen backend fails to open (e.g. missing data dir or lock), the node may fall back to another enabled backend where implemented.
Interop: When RocksDB is enabled, the node may detect and use existing LevelDB-format chain data. That is separate from the auto selection order above.
Code: database/mod.rs (default_backend(), fallback_backend()), storage/mod.rs
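The selection order above amounts to a first-match over enabled features. A sketch (illustrative; the real logic is `default_backend()` in `database/mod.rs`, and the booleans stand in for compile-time feature flags):

```rust
/// Choose the backend for "auto" by the documented precedence:
/// RocksDB > TidesDB > Redb > Sled. Returns None if no backend
/// feature was enabled at build time (a configuration error).
fn auto_backend(rocksdb: bool, tidesdb: bool, redb: bool, sled: bool) -> Option<&'static str> {
    if rocksdb {
        Some("rocksdb")
    } else if tidesdb {
        Some("tidesdb")
    } else if redb {
        Some("redb")
    } else if sled {
        Some("sled")
    } else {
        None
    }
}
```

So a build with only `redb` and `sled` enabled selects redb, while enabling `rocksdb` takes precedence over everything else.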
Automatic Fallback
If the backend chosen by auto fails to open, the node may fall back to another enabled backend (see fallback_backend() in code).
// Backend is chosen by default_backend() when using "auto"; fallback on open failure
let storage = Storage::new(data_dir)?;
Code: storage/mod.rs
Database Abstraction
The storage layer uses a unified database abstraction:
Database Trait
pub trait Database: Send + Sync {
    fn open_tree(&self, name: &str) -> Result<Box<dyn Tree>>;
    fn flush(&self) -> Result<()>;
}
Code: database/mod.rs (Database trait)
Tree Trait
pub trait Tree: Send + Sync {
    fn insert(&self, key: &[u8], value: &[u8]) -> Result<()>;
    fn get(&self, key: &[u8]) -> Result<Option<Vec<u8>>>;
    fn remove(&self, key: &[u8]) -> Result<()>;
    fn contains_key(&self, key: &[u8]) -> Result<bool>;
    fn len(&self) -> Result<usize>;
    fn iter(&self) -> Box<dyn Iterator<Item = Result<(Vec<u8>, Vec<u8>)>> + '_>;
}
Code: database/mod.rs (Tree trait)
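To make the abstraction concrete, here is a toy in-memory store with the same shape as the Tree trait's core methods. It is not a real backend and the `String` error type is a stand-in for the crate's actual `Result`:

```rust
use std::collections::BTreeMap;
use std::sync::Mutex;

// Stand-in for the crate's Result type; illustration only.
type Result<T> = std::result::Result<T, String>;

/// Toy in-memory tree: &self methods over interior mutability,
/// byte-slice keys and values, matching the trait's core shape.
struct MemTree {
    map: Mutex<BTreeMap<Vec<u8>, Vec<u8>>>,
}

impl MemTree {
    fn new() -> Self {
        Self { map: Mutex::new(BTreeMap::new()) }
    }

    fn insert(&self, key: &[u8], value: &[u8]) -> Result<()> {
        self.map.lock().map_err(|e| e.to_string())?.insert(key.to_vec(), value.to_vec());
        Ok(())
    }

    fn get(&self, key: &[u8]) -> Result<Option<Vec<u8>>> {
        Ok(self.map.lock().map_err(|e| e.to_string())?.get(key).cloned())
    }

    fn len(&self) -> Result<usize> {
        Ok(self.map.lock().map_err(|e| e.to_string())?.len())
    }
}
```

Any backend (redb, sled, RocksDB, TidesDB) plugs in by providing these same operations over its own on-disk structures.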
Storage Components
BlockStore
Stores blocks by hash:
- Key: Block hash (32 bytes)
- Value: Serialized block data
- Indexing: Hash-based lookup
Code: blockstore.rs
UtxoStore
Manages UTXO set:
- Key: OutPoint (36 bytes: txid + output index)
- Value: UTXO data (script, amount)
- Operations: Add, remove, query UTXOs
Code: utxostore.rs
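The 36-byte key layout can be sketched as txid followed by the output index. The little-endian byte order below is an assumption for illustration; check `utxostore.rs` for the actual encoding:

```rust
/// Build an illustrative 36-byte UtxoStore key: a 32-byte txid
/// followed by the 4-byte output index (little-endian assumed here).
fn utxo_key(txid: &[u8; 32], vout: u32) -> [u8; 36] {
    let mut key = [0u8; 36];
    key[..32].copy_from_slice(txid);
    key[32..].copy_from_slice(&vout.to_le_bytes());
    key
}
```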
ChainState
Tracks chain metadata:
- Tip Hash: Current chain tip
- Height: Current block height
- Chain Work: Cumulative proof-of-work
- UTXO Stats: Cached UTXO set statistics
Code: chainstate.rs
TxIndex
Transaction indexing:
- Key: Transaction ID (32 bytes)
- Value: Transaction data and metadata
- Lookup: Fast transaction retrieval
Code: txindex.rs
Configuration
Backend Selection
[storage]
data_dir = "/var/lib/blvm"
database_backend = "auto" # or "redb", "sled", "rocksdb", "tidesdb"
Options:
"auto": Select by build features (RocksDB whenrocksdbenabled, then TidesDB, Redb, Sled)"rocksdb": Force RocksDB (requiresrocksdbfeature)"tidesdb": Force TidesDB (requirestidesdbfeature)"redb": Force redb backend"sled": Force sled backend
Code: config (storage / database_backend)
RocksDB Configuration
Enable RocksDB with the rocksdb feature:
cargo build --features rocksdb
System Requirements:
- libclang must be installed (required for RocksDB FFI bindings)
- On Ubuntu/Debian: sudo apt-get install libclang-dev
- On Arch: sudo pacman -S clang
- On macOS: brew install llvm
Default data directories (common layouts): The system can detect typical Bitcoin-style data directories:
- Mainnet: ~/.bitcoin/ or ~/Library/Application Support/Bitcoin/
- Testnet: ~/.bitcoin/testnet3/ or ~/Library/Application Support/Bitcoin/testnet3/
- Regtest: ~/.bitcoin/regtest/ or ~/Library/Application Support/Bitcoin/regtest/
Code: bitcoin_core_detection.rs
Cache Configuration
[storage.cache]
block_cache_mb = 100
utxo_cache_mb = 50
header_cache_mb = 10
Cache Sizes: See Configuration Reference for canonical defaults (e.g. block 100 MB, UTXO 50 MB, header 10 MB).
Code: config
Performance Characteristics
redb Backend
- Read Performance: Excellent for sequential and random reads
- Write Performance: Good for batch writes
- Storage Efficiency: Efficient key-value storage
- Memory Usage: Moderate memory footprint
- Production Ready: Recommended for production
sled Backend
- Read Performance: Good for sequential reads
- Write Performance: Good for batch writes
- Storage Efficiency: Efficient with B-tree indexing
- Memory Usage: Higher memory footprint
- Production Ready: Beta quality, not recommended for production
Migration
Backend Migration
To migrate between backends manually:
- Export Data: Export all data from the current backend
- Import Data: Import the data into the new backend
- Verify: Verify data integrity
Pruning Support
All backends support pruning:
[storage.pruning]
[storage.pruning.mode]
type = "normal"
keep_from_height = 0
min_recent_blocks = 288 # Keep last 288 blocks (~2 days)
auto_prune = true
auto_prune_interval = 144
Pruning Modes:
- Disabled: Keep all blocks (archival node)
- Normal: Conservative pruning (keep recent blocks)
- Aggressive: Prune with UTXO commitments (requires utxo-commitments feature)
- Custom: Fine-grained control over what to keep
Code: pruning.rs
Error Handling
The storage layer handles backend failures gracefully:
- Automatic Fallback: Falls back to alternative backend if primary fails
- Error Recovery: Attempts to recover from transient errors
- Data Integrity: Verifies data integrity on startup
- Corruption Detection: Detects and reports database corruption
Code: storage/mod.rs
See Also
- Node Configuration - Storage configuration options
- Node Operations - Storage operations and maintenance
- Pruning — pruning configuration on this page; see also Node configuration
Transaction Indexing
Overview
The node provides advanced transaction indexing capabilities for efficient querying of blockchain data. Indexes are built on-demand and support both address-based and value-based queries.
Index Types
Transaction Hash Index
Basic transaction lookup by hash:
- Key: Transaction hash (32 bytes)
- Value: Transaction metadata (block hash, height, index, size, weight)
- Lookup: O(1) hash-based lookup
- Always Enabled: Core indexing functionality
Code: txindex.rs
Address Index (Optional)
Indexes transactions by output addresses:
- Key: Address hash (20 bytes for P2PKH, 32 bytes for P2SH/P2WPKH)
- Value: List of (transaction hash, output index) pairs
- Lookup: Fast address balance and transaction history queries
- Lazy Indexing: Built on-demand when first queried
- Configuration: Enable with storage.indexing.enable_address_index = true
Code: txindex.rs
Value Range Index (Optional)
Indexes transactions by output value ranges:
- Key: Value bucket (logarithmic buckets: 0-1, 1-10, 10-100, 100-1000, etc.)
- Value: List of (transaction hash, output index, value) tuples
- Lookup: Efficient queries for transactions in specific value ranges
- Lazy Indexing: Built on-demand when first queried
- Configuration: Enable with storage.indexing.enable_value_index = true
Code: txindex.rs
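The logarithmic bucketing idea can be sketched as below. The exact bucket boundaries used in txindex.rs may differ; this illustrates how a value maps to a power-of-ten range so that a range query only touches a handful of buckets.

```rust
#![allow(unused)]
// Illustrative bucket mapping: 0 => [0,1), 1 => [1,10), 2 => [10,100),
// 3 => [100,1000), and so on (satoshi values).
fn value_bucket(satoshis: u64) -> u32 {
    if satoshis == 0 {
        return 0;
    }
    let mut bucket = 1;
    let mut upper = 10u64;
    while satoshis >= upper {
        bucket += 1;
        match upper.checked_mul(10) {
            Some(next) => upper = next,
            None => break, // value larger than any finite bucket boundary
        }
    }
    bucket
}

fn main() {
    // 5 and 9 sats fall in the same bucket; 500 sats in a higher one.
    assert_eq!(value_bucket(5), value_bucket(9));
    assert!(value_bucket(500) > value_bucket(5));
}
```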
Indexing Strategy
Lazy Indexing
Indexes are built on-demand to minimize impact on block processing:
- Basic Indexing: All transactions are indexed with basic metadata (hash, block, height)
- On-Demand: Address and value indexes are built when first queried
- Caching: Indexed addresses are cached to avoid re-indexing
- Batch Operations: Multiple transactions indexed together for efficiency
Code: txindex.rs
Batch Indexing
Block-level indexing optimizations:
- Single Pass: Processes all transactions in a block at once
- Deduplication: Uses HashSet for O(1) duplicate checking
- Batching: Groups updates per unique address/bucket to reduce DB I/O
- Conditional Writes: Only writes to DB if updates were made
Code: txindex.rs
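The single-pass grouping described above can be sketched like this. Types are simplified for illustration (string addresses, bare txid arrays); the point is the HashSet deduplication and the per-address batching that yields one DB write per unique address.

```rust
#![allow(unused)]
use std::collections::{HashMap, HashSet};

type Txid = [u8; 32];

// One pass over a block's outputs: deduplicate (txid, vout) pairs with a
// HashSet, then group updates per address so each address is written once.
fn group_updates(outputs: &[(Txid, u32, String)]) -> HashMap<String, Vec<(Txid, u32)>> {
    let mut updates: HashMap<String, Vec<(Txid, u32)>> = HashMap::new();
    let mut seen: HashSet<(Txid, u32)> = HashSet::new();
    for (txid, vout, addr) in outputs {
        // O(1) duplicate check; only the first sighting of an outpoint counts.
        if seen.insert((*txid, *vout)) {
            updates.entry(addr.clone()).or_default().push((*txid, *vout));
        }
    }
    updates
}

fn main() {
    let outs = vec![
        ([1u8; 32], 0, "addr-a".to_string()),
        ([1u8; 32], 0, "addr-a".to_string()), // duplicate, skipped
        ([2u8; 32], 1, "addr-a".to_string()),
    ];
    let updates = group_updates(&outs);
    assert_eq!(updates["addr-a"].len(), 2);
}
```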
Configuration
Enable Indexing
[storage.indexing]
enable_address_index = true
enable_value_index = true
Index Statistics
Query indexing statistics:
#![allow(unused)]
fn main() {
use blvm_node::storage::txindex::TxIndex;
let stats = txindex.get_stats()?;
println!("Total transactions: {}", stats.total_transactions);
println!("Indexed addresses: {}", stats.indexed_addresses);
println!("Indexed value buckets: {}", stats.indexed_value_buckets);
}
Code: txindex.rs
Usage
Query by Address
#![allow(unused)]
fn main() {
use blvm_node::storage::txindex::TxIndex;
// Query all transactions for an address
let address = "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa";
let transactions = txindex.query_by_address(&address)?;
}
Query by Value Range
#![allow(unused)]
fn main() {
// Query transactions with outputs in value range [1000, 10000] satoshis
let transactions = txindex.query_by_value_range(1000, 10000)?;
}
Query Transaction Metadata
#![allow(unused)]
fn main() {
// Get transaction metadata by hash
let tx_hash = Hash::from_hex("...")?;
let metadata = txindex.get_metadata(&tx_hash)?;
}
Performance Characteristics
- Hash Lookup: O(1) constant time
- Address Lookup: O(1) after initial indexing, O(n) for first query (indexes on-demand)
- Value Range Lookup: O(log n) for bucket lookup, O(m) for results (where m is number of matches)
- Index Building: Lazy, only builds what’s queried
- Storage Overhead: Minimal for basic index, grows with address/value index usage
See Also
- Storage Backends - Database backend options
- Node Configuration - Indexing configuration options
- Node Operations - Index maintenance and operations
IBD Bandwidth Protection
Overview
The node implements comprehensive protection against Initial Block Download (IBD) bandwidth exhaustion attacks. This prevents malicious peers from forcing a node to upload the entire blockchain multiple times, which could cause ISP data cap overages and economic denial-of-service.
Protection Mechanisms
Per-Peer Bandwidth Limits
Tracks bandwidth usage per peer with configurable daily and hourly limits:
- Daily Limit: Maximum bytes a peer can request per day
- Hourly Limit: Maximum bytes a peer can request per hour
- Automatic Throttling: Blocks requests when limits are exceeded
- Legitimate Node Protection: First request always allowed, reasonable limits for legitimate sync
Code: ibd_protection.rs
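A per-peer limit check of this kind reduces to a budget test before serving each request. This is a minimal sketch with only a daily counter; the real accounting in ibd_protection.rs also tracks hourly windows and resets counters over time.

```rust
#![allow(unused)]
// Hedged sketch of a per-peer daily bandwidth budget (bytes).
struct PeerBandwidth {
    used_today: u64,
    daily_limit: u64,
}

impl PeerBandwidth {
    // Returns true and records usage if the request fits in the budget,
    // false (throttle) if serving it would exceed the daily limit.
    fn try_serve(&mut self, bytes: u64) -> bool {
        if self.used_today.saturating_add(bytes) > self.daily_limit {
            return false;
        }
        self.used_today += bytes;
        true
    }
}

fn main() {
    let mut peer = PeerBandwidth { used_today: 0, daily_limit: 100 };
    assert!(peer.try_serve(60));
    assert!(!peer.try_serve(50)); // would exceed the daily budget
}
```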
Per-IP Bandwidth Limits
Tracks bandwidth usage per IP address to prevent single-IP attacks:
- IP-Based Tracking: Monitors all peers from the same IP
- Aggregate Limits: Combined daily/hourly limits for all peers from an IP
- Attack Detection: Identifies coordinated attacks from single IP
Code: ibd_protection.rs
Per-Subnet Bandwidth Limits
Tracks bandwidth usage per subnet to prevent distributed attacks:
- IPv4 Subnets: Tracks /24 subnets (256 addresses)
- IPv6 Subnets: Tracks /64 subnets
- Subnet Aggregation: Combines bandwidth from all IPs in subnet
- Distributed Attack Mitigation: Prevents coordinated attacks from subnet
Code: ibd_protection.rs
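The subnet granularity above can be captured by a simple key derivation: all addresses sharing a key share a bandwidth budget. This sketch truncates to the stated prefixes (/24 for IPv4, /64 for IPv6); the real keying in ibd_protection.rs may differ in representation.

```rust
#![allow(unused)]
use std::net::IpAddr;

// Illustrative subnet key: first 3 octets for IPv4 (/24),
// first 8 bytes for IPv6 (/64).
fn subnet_key(ip: IpAddr) -> Vec<u8> {
    match ip {
        IpAddr::V4(v4) => v4.octets()[..3].to_vec(),
        IpAddr::V6(v6) => v6.octets()[..8].to_vec(),
    }
}

fn main() {
    let a: IpAddr = "192.168.1.10".parse().unwrap();
    let b: IpAddr = "192.168.1.200".parse().unwrap();
    assert_eq!(subnet_key(a), subnet_key(b)); // same /24, shared budget
}
```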
Concurrent IBD Serving Limits
Limits how many peers can simultaneously request IBD:
- Concurrent Limit: Maximum number of peers serving IBD at once
- Queue Management: Queues additional requests when limit reached
- Fair Serving: Rotates serving to queued peers
Code: ibd_protection.rs
Peer Reputation Scoring
Tracks peer behavior to identify malicious patterns:
- Reputation System: Scores peers based on behavior
- Suspicious Pattern Detection: Identifies rapid reconnection with new peer IDs
- Cooldown Periods: Enforces cooldown after suspicious activity
- Legitimate Node Protection: First-time sync always allowed
Code: ibd_protection.rs
Configuration
Default Limits
[ibd_protection]
max_bandwidth_per_peer_per_day_gb = 50
max_bandwidth_per_peer_per_hour_gb = 10
max_bandwidth_per_ip_per_day_gb = 100
max_bandwidth_per_ip_per_hour_gb = 20
max_bandwidth_per_subnet_per_day_gb = 500
max_bandwidth_per_subnet_per_hour_gb = 100
max_concurrent_ibd_serving = 3
ibd_request_cooldown_seconds = 3600
suspicious_reconnection_threshold = 3
reputation_ban_threshold = -100
enable_emergency_throttle = false
emergency_throttle_percent = 50
Configuration Options
- max_bandwidth_per_peer_per_day_gb: Daily limit per peer (default: 50 GB)
- max_bandwidth_per_peer_per_hour_gb: Hourly limit per peer (default: 10 GB)
- max_bandwidth_per_ip_per_day_gb: Daily limit per IP (default: 100 GB)
- max_bandwidth_per_ip_per_hour_gb: Hourly limit per IP (default: 20 GB)
- max_bandwidth_per_subnet_per_day_gb: Daily limit per subnet (default: 500 GB)
- max_bandwidth_per_subnet_per_hour_gb: Hourly limit per subnet (default: 100 GB)
- max_concurrent_ibd_serving: Maximum concurrent IBD serving (default: 3)
- ibd_request_cooldown_seconds: Cooldown period after suspicious activity (default: 3600 seconds)
- suspicious_reconnection_threshold: Number of reconnections in 1 hour to be considered suspicious (default: 3)
- reputation_ban_threshold: Reputation score below which peer is banned (default: -100)
- enable_emergency_throttle: Enable emergency bandwidth throttling (default: false)
- emergency_throttle_percent: Percentage of bandwidth to throttle when emergency throttle is enabled (default: 50)
Code: ibd_protection.rs
Attack Mitigation
Single IP Attack
Attack: Attacker runs multiple fake nodes from the same IP.
Protection: Per-IP bandwidth limits aggregate all peers from that IP.
Result: Blocked after the IP limit is reached.
Subnet Attack
Attack: Attacker distributes fake nodes across a subnet.
Protection: Per-subnet bandwidth limits aggregate all IPs in the subnet.
Result: Blocked after the subnet limit is reached.
Rapid Reconnection Attack
Attack: Attacker disconnects and reconnects with a new peer ID.
Protection: Reputation scoring detects the pattern and enforces a cooldown.
Result: Blocked during the cooldown period.
Distributed Attack
Attack: Coordinated attack from multiple IPs/subnets.
Protection: Concurrent serving limits prevent serving too many peers simultaneously.
Result: Additional requests are queued and serving is rotated fairly.
Legitimate New Node
Scenario: A legitimate new node requests a full sync.
Protection: The first request is always allowed, and the limits are sized to accommodate legitimate sync.
Result: Allowed to sync within the limits.
Integration
The IBD protection is automatically integrated into the network manager:
- Automatic Tracking: Tracks bandwidth when serving Headers/Block messages
- Request Protection: Protects GetHeaders and GetData requests
- Cleanup: Automatically cleans up tracking on peer disconnect
Code: network_manager.rs (IBD protection wiring and config)
LAN Peer Prioritization
LAN peers are automatically discovered and prioritized for IBD, but still respect bandwidth protection limits:
- Priority Assignment: LAN peers get priority within bandwidth limits
- Score Multiplier: LAN peers receive a higher score in the progressive trust system
- Bandwidth Limits: LAN peers still respect per-peer, per-IP, and per-subnet limits
- Reputation Scoring: LAN peer behavior affects reputation scoring
Code: parallel_ibd/mod.rs
For details on LAN peering discovery, security, and configuration, see LAN Peering System.
See Also
- LAN Peering System - Automatic local network discovery and prioritization
- Network Operations - General network operations
- Node Configuration - IBD protection configuration
- Security Controls - Security system overview
LAN Peering System
Overview
The LAN peering system automatically discovers and prioritizes local network (LAN) Bitcoin nodes for Initial Block Download (IBD), reducing sync time when a local node is available. Security is maintained through checkpoint validation and peer diversity requirements.
Benefits
- Lower latency: LAN peers have much lower latency than remote internet peers
- Local throughput: Local network capacity exceeds most residential uplinks
- Stable connectivity: LAN peers are not subject to internet path failures
- Automatic Discovery: Scans local network automatically during startup
- Secure by Default: Internet checkpoint validation prevents eclipse attacks
How It Works
Automatic Discovery
During node startup, the system automatically:
- Detects Local Network Interfaces: Identifies private network interfaces (10.x, 172.16-31.x, 192.168.x)
- Scans Local Subnet: Scans /24 subnets (254 IPs per subnet) for Bitcoin nodes on port 8333
- Parallel Scanning: Uses up to 64 concurrent connection attempts for fast discovery
- Verifies Peers: Performs protocol handshake and chain verification before accepting
Code: lan_discovery.rs
LAN Peer Detection
A peer is considered a LAN peer if its IP address is in one of these ranges:
IPv4 Private Ranges:
- 10.0.0.0/8 - Class A private network
- 172.16.0.0/12 - Class B private network (172.16-31.x)
- 192.168.0.0/16 - Class C private network (most common for home networks)
- 127.0.0.0/8 - Loopback addresses
- 169.254.0.0/16 - Link-local addresses
IPv6 Private Ranges:
- ::1 - Loopback
- fd00::/8 - Unique Local Addresses (ULA)
- fe80::/10 - Link-local addresses
Code: peer_scoring.rs
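The classification over the ranges listed above can be sketched with the standard library's address predicates (manual masks are used for the IPv6 ULA and link-local ranges, since std's predicates for those are not all stable). The actual check lives in peer_scoring.rs.

```rust
#![allow(unused)]
use std::net::IpAddr;

// Sketch of LAN-peer classification over the documented ranges.
fn is_lan_peer(ip: IpAddr) -> bool {
    match ip {
        // 10/8, 172.16/12, 192.168/16, 127/8, 169.254/16
        IpAddr::V4(v4) => v4.is_private() || v4.is_loopback() || v4.is_link_local(),
        IpAddr::V6(v6) => {
            v6.is_loopback()
                || (v6.segments()[0] & 0xff00) == 0xfd00 // fd00::/8 (ULA)
                || (v6.segments()[0] & 0xffc0) == 0xfe80 // fe80::/10 link-local
        }
    }
}

fn main() {
    assert!(is_lan_peer("192.168.1.5".parse().unwrap()));
    assert!(!is_lan_peer("8.8.8.8".parse().unwrap()));
}
```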
Progressive Trust System
LAN peers start with limited trust and earn higher priority over time:
- Initial Trust (1.5x multiplier): Newly discovered LAN peers; whitelisted peers start at maximum trust instead
- Level 2 Trust (2.0x multiplier): After 1000 valid blocks received, indicating reliable peer behavior
- Maximum Trust (3.0x multiplier): After 10000 valid blocks AND 1 hour of connection time; maximum priority for block downloads
- Demoted (1.0x multiplier, no bonus): After 3 failures; the peer loses LAN status but remains connected
- Banned (0.0x multiplier, not used): Checkpoint validation failure; permanent ban (1 year duration)
Code: lan_security.rs
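The progressive trust ladder can be summarized as a pure function of peer history. The multipliers and thresholds are the ones documented on this page; the enforcing logic (including whitelist handling) lives in lan_security.rs, so treat this as a sketch.

```rust
#![allow(unused)]
// Trust multiplier for a discovered LAN peer, per the ladder above.
// Checks run from most to least severe: ban, demotion, then earned trust.
fn trust_multiplier(
    valid_blocks: u64,
    connected_secs: u64,
    failures: u32,
    checkpoint_failed: bool,
) -> f64 {
    if checkpoint_failed {
        return 0.0; // banned: checkpoint validation failure
    }
    if failures >= 3 {
        return 1.0; // demoted: no LAN bonus
    }
    if valid_blocks >= 10_000 && connected_secs >= 3_600 {
        return 3.0; // maximum trust
    }
    if valid_blocks >= 1_000 {
        return 2.0; // level 2 trust
    }
    1.5 // initial LAN trust
}

fn main() {
    assert_eq!(trust_multiplier(0, 0, 0, false), 1.5);
    assert_eq!(trust_multiplier(10_000, 3_600, 0, false), 3.0);
}
```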
Peer Prioritization
LAN peers receive priority for block downloads during IBD:
- IBD Optimization: LAN peers get priority chunks (first 50,000 blocks)
- Header Download: LAN peers prioritized for header sync
- Score Multiplier: Higher trust score for LAN peers in selection
- Bandwidth Allocation: LAN peers receive more bandwidth allocation
Code: parallel_ibd/mod.rs
Security Model
Hard Limits
The system enforces strict security limits to prevent eclipse attacks:
- Maximum 25% LAN Peers: Hard cap on LAN peer percentage
- Minimum 75% Internet Peers: Required for security
- Minimum 3 Internet Peers: Required for checkpoint validation
- Maximum 1 Discovered LAN Peer: Limits automatically discovered peers (whitelisted are separate)
Code: lan_security.rs
Checkpoint Validation
Internet checkpoints are the primary security mechanism. Even with discovery enabled, eclipse attacks are prevented through regular checkpoint validation:
- Block Checkpoints: Every 1000 blocks, validate block hash against internet peers
- Header Checkpoints: Every 10000 blocks, validate header hash against internet peers
- Consensus Requirement: Requires agreement from at least 3 internet peers
- Failure Response: Checkpoint failure results in permanent ban (1 year)
- Request Timeout: 5 seconds per checkpoint request
- Max Retries: 3 retry attempts per checkpoint
- Protocol Verify Timeout: 5 seconds for protocol handshake verification
- Headers Verify Timeout: 10 seconds for headers verification
- Max Header Divergence: 6 blocks maximum divergence allowed
Security Constants:
- BLOCK_CHECKPOINT_INTERVAL: 1000 blocks
- HEADER_CHECKPOINT_INTERVAL: 10000 blocks
- MIN_CHECKPOINT_PEERS: 3 internet peers required
- CHECKPOINT_FAILURE_BAN_DURATION: 1 year (31,536,000 seconds)
- CHECKPOINT_REQUEST_TIMEOUT: 5 seconds
- CHECKPOINT_MAX_RETRIES: 3 retries
- PROTOCOL_VERIFY_TIMEOUT: 5 seconds
- HEADERS_VERIFY_TIMEOUT: 10 seconds
- MAX_HEADER_DIVERGENCE: 6 blocks
Code: lan_security.rs (progressive trust and auto-trust thresholds)
Security Guarantees
- No Eclipse Attacks: 75% internet peer minimum ensures honest network connection
- Checkpoint Validation: Regular validation prevents chain divergence
- LAN Address Privacy: LAN addresses are never advertised to external peers
- Progressive Trust: New LAN peers start with limited trust
- Failure Handling: Multiple failures result in demotion or ban
Code: lan_security.rs
Configuration
Whitelisting
You can whitelist trusted LAN peers to start at maximum trust:
#![allow(unused)]
fn main() {
// Whitelisted peers start at maximum trust
policy.add_to_whitelist("192.168.1.100:8333".parse().unwrap());
}
Code: lan_security.rs
Discovery Control
LAN discovery is enabled by default. The system automatically discovers peers during startup, but you can control this behavior through the security policy.
Code: lan_security.rs
Use Cases
Home Networks
If you run multiple Bitcoin nodes on your home network (e.g., Start9, Umbrel, RaspiBlitz), the system can discover and prioritize them for IBD.
Example: Node on 192.168.1.50 automatically discovers node on 192.168.1.100 and uses it for fast IBD.
Docker/VM Environments
The system also checks common Docker/VM bridge networks:
- Docker default bridge: 172.17.0.1
- Common VM network: 10.0.0.1
Code: lan_discovery.rs
Local Development
For local development and testing, LAN peering speeds up blockchain sync when running multiple nodes locally.
Troubleshooting
LAN Peers Not Discovered
Problem: LAN peers are not being discovered automatically.
Solutions:
- Verify both nodes are on the same network (check IP ranges)
- Verify Bitcoin P2P port (default 8333) is open and accessible
- Check firewall rules (local network traffic may be blocked)
- Verify network interface detection (check logs for “Detected local interface”)
Code: lan_discovery.rs
Checkpoint Failures
Problem: LAN peer is being banned due to checkpoint failures.
Solutions:
- Verify LAN peer is on the correct chain (not a testnet/mainnet mismatch)
- Verify internet peers are available (need at least 3 for validation)
- Check network connectivity (LAN peer may be on different chain due to network issues)
- Verify LAN peer is not malicious (check logs for checkpoint failure details)
Code: lan_security.rs
Trust Level Not Increasing
Problem: LAN peer trust level is not increasing beyond initial.
Solutions:
- Verify peer is actually sending valid blocks (check block validation logs)
- Wait for required blocks (1000 for Level 2, 10000 for Maximum)
- Verify connection time (Maximum trust requires 1 hour of connection)
- Check for failures (3 failures result in demotion)
Code: lan_security.rs
Performance Issues
Problem: LAN peer is not being used or sync is slow.
Solutions:
- Verify network speed (check actual bandwidth between nodes)
- Check peer trust level (higher trust = more priority)
- Verify peer is not demoted (check trust level in logs)
- Check for network congestion (other traffic may affect performance)
Integration with IBD Protection
LAN peers are integrated with the IBD bandwidth protection system:
- Bandwidth Limits: LAN peers still respect per-peer bandwidth limits
- Priority Assignment: LAN peers get priority within bandwidth limits
- Reputation Scoring: LAN peer behavior affects reputation scoring
See IBD Bandwidth Protection for details.
Security Considerations
Eclipse Attack Prevention
The 25% LAN peer cap and 75% internet peer minimum ensure that even if all LAN peers are malicious, the node maintains connection to the honest network through internet peers.
Checkpoint Validation
Regular checkpoint validation ensures that LAN peers cannot diverge from the honest chain. Checkpoint failures result in immediate ban.
LAN Address Privacy
LAN addresses are never advertised to external peers, preventing information leakage about your local network topology.
Code: lan_security.rs
See Also
- IBD Bandwidth Protection - How LAN peers interact with bandwidth protection
- Network Operations - General network operations
- Security Threat Models - Security model details
- Node Configuration - Configuration options
Mining Integration
The reference node includes mining coordination functionality as part of the Bitcoin protocol. The system provides block template generation, mining coordination, and optional Stratum V2 protocol support.
Block Template Generation
Block templates are built from blvm-consensus helpers (e.g. template construction) aligned with Orange Paper Section 12.4, with tests and spec-lock proofs on the relevant consensus paths.
Algorithm Overview
- Get Chain State: Retrieve current chain tip, height, and difficulty
- Get Mempool Transactions: Fetch transactions from mempool
- Get UTXO Set: Load UTXO set for fee calculation
- Select Transactions: Choose transactions based on fee priority
- Create Coinbase: Generate coinbase transaction with subsidy + fees
- Calculate Merkle Root: Compute merkle root from transaction list
- Build Template: Construct block header with all components
Code: mining.rs
Transaction Selection
Transactions are selected using a fee-based priority algorithm:
- Prioritize by Fee Rate: Transactions sorted by fee rate (satoshis per byte)
- Size Limits: Respect maximum block size (1MB) and weight (4M weight units)
- Minimum Fee: Filter transactions below minimum fee rate (1 sat/vB default)
- UTXO Validation: Verify all transaction inputs exist in UTXO set
Code: miner.rs
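The selection steps above reduce to a sort-and-fill loop. This is a hedged sketch with illustrative fields and limits (the real selection in miner.rs also accounts for block weight and UTXO validation); note the cross-multiplication trick, which compares fee rates without floating point.

```rust
#![allow(unused)]
struct MempoolTx {
    fee: u64,  // total fee in satoshis
    size: u64, // serialized size in bytes
}

// Greedy fee-rate selection under a block size cap and a minimum fee rate
// (satoshis per byte).
fn select_txs(mut txs: Vec<MempoolTx>, max_block_size: u64, min_fee_rate: u64) -> Vec<MempoolTx> {
    // Sort descending by fee rate: compare a.fee/a.size vs b.fee/b.size
    // via cross-multiplication to avoid floats.
    txs.sort_by(|a, b| (b.fee * a.size).cmp(&(a.fee * b.size)));
    let mut selected = Vec::new();
    let mut used = 0;
    for tx in txs {
        if tx.fee < min_fee_rate * tx.size {
            continue; // below the minimum fee rate
        }
        if used + tx.size > max_block_size {
            continue; // would overflow the block size budget
        }
        used += tx.size;
        selected.push(tx);
    }
    selected
}

fn main() {
    let txs = vec![
        MempoolTx { fee: 1_000, size: 100 },   // 10 sat/B
        MempoolTx { fee: 100, size: 100 },     // 1 sat/B, below minimum
        MempoolTx { fee: 5_000, size: 1_000 }, // 5 sat/B
    ];
    let selected = select_txs(txs, 1_100, 2);
    assert_eq!(selected.len(), 2);
}
```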
Fee Calculation
Transaction fees are calculated using the UTXO set:
#![allow(unused)]
fn main() {
// Pseudocode: the fee is the input/output value difference, and the fee
// rate normalizes it by serialized size (satoshis per byte).
//   fee = sum(input_values) - sum(output_values)
//   fee_rate = fee / transaction_size
}
The coinbase transaction includes:
- Block Subsidy: Calculated based on halving schedule
- Transaction Fees: Sum of all fees from selected transactions
Code: miner.rs
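The subsidy half of the coinbase value follows the halving schedule (50 BTC initially, halving every 210,000 blocks). The consensus implementation is authoritative; this is an illustrative calculation in satoshis.

```rust
#![allow(unused)]
// Block subsidy in satoshis at a given height, per the halving schedule.
fn block_subsidy(height: u64) -> u64 {
    let halvings = height / 210_000;
    if halvings >= 64 {
        return 0; // the right-shift would discard all bits; subsidy is zero
    }
    (50 * 100_000_000u64) >> halvings
}

fn main() {
    assert_eq!(block_subsidy(0), 5_000_000_000); // 50 BTC at genesis
    assert_eq!(block_subsidy(210_000), 2_500_000_000); // first halving
}
```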
Block Template Structure
#![allow(unused)]
fn main() {
pub struct BlockHeader {
    version: i32,              // Template uses version 1
    prev_block_hash: [u8; 32],
    merkle_root: [u8; 32],
    timestamp: u32,
    bits: u32,
    nonce: u32,                // Left at 0; to be filled by the miner
}
pub struct Block {
    header: BlockHeader,
    transactions: Vec<Transaction>, // Coinbase first
}
}
Code: miner.rs
Mining Process
Template Generation
The getblocktemplate RPC method generates a block template:
- Uses create_block_template (or equivalent) from blvm-consensus, covered by tests and upstream verification policy
- Converts to JSON-RPC format (BIP 22/23)
- Returns template ready for mining
Code: mining.rs
Proof of Work
Mining involves finding a nonce that satisfies the difficulty target:
- Nonce Search: Iterate through nonce values (0 to 2^32-1)
- Hash Calculation: Compute SHA256(SHA256(block_header))
- Target Check: Verify hash < difficulty target
- Success: Return mined block with valid nonce
Code: miner.rs
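The nonce search above can be sketched as follows. FNV-1a stands in for double-SHA256 so the example runs without external crates; the real target check compares SHA256(SHA256(block_header)) as a 256-bit integer against the difficulty target.

```rust
#![allow(unused)]
// FNV-1a: a stand-in hash for illustration only (NOT double-SHA256).
fn fnv1a(data: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

// Iterate nonces until the hash of header||nonce falls below the target.
fn mine(header: &[u8], target: u64) -> Option<u32> {
    for nonce in 0u32..=u32::MAX {
        let mut candidate = header.to_vec();
        candidate.extend_from_slice(&nonce.to_le_bytes());
        if fnv1a(&candidate) < target {
            return Some(nonce); // found a nonce whose hash beats the target
        }
    }
    None // exhausted the 32-bit nonce space
}

fn main() {
    // An easy target so the search succeeds quickly.
    let nonce = mine(b"example header", u64::MAX / 4);
    assert!(nonce.is_some());
}
```

In real mining, exhausting the nonce space leads to changing the coinbase (extra nonce) or timestamp and retrying.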
Block Submission
Mined blocks are submitted via submitblock RPC method:
- Validation: Block validated against consensus rules
- Connection: Block connected to chain
- Confirmation: Block added to blockchain
Code: mining.rs
Mining Coordinator
The MiningCoordinator manages mining operations:
- Template Generation: Creates block templates from mempool
- Mining Loop: Continuously generates and mines blocks
- Stratum V2 Integration: Coordinates with Stratum V2 protocol
- Merge Mining: Available via the optional blvm-merge-mining module (paid plugin)
Code: miner.rs
Stratum V2 Support
Optional Stratum V2 protocol support provides:
- Binary Protocol: 50-66% bandwidth savings vs Stratum V1
- Encrypted Communication: TLS/QUIC encryption
- Multiplexed Channels: QUIC stream multiplexing
- Merge Mining: Simultaneous mining of multiple chains
Code: blvm-stratum-v2 (module); node listener: stratum_v2_listener.rs
Configuration
Mining Configuration
[mining]
enabled = false
mining_threads = 1
Stratum V2 Configuration
[stratum_v2]
enabled = true
listen_addr = "0.0.0.0:3333"
Code: mod.rs
See Also
- Node Operations - Node operation and management
- RPC API Reference - Mining-related RPC methods (getblocktemplate, submitblock)
- Stratum V2 + Merge Mining - Stratum V2 protocol details
- Node Configuration - Mining configuration options
- Protocol Specifications - Stratum V2 and mining protocols
Stratum V2 Mining Protocol
Overview
The Stratum V2 mining stack is implemented primarily in the blvm-stratum-v2 module repository (pool/server/protocol). The reference node integrates a TCP listener that accepts Stratum V2 TLV frames and forwards them into the node’s network layer; optional stratum-v2 feature hooks exist on the mining coordinator for a future client path.
Merge mining is a separate optional plugin (blvm-merge-mining) that depends on the Stratum V2 module.
Where the code lives
| Piece | Repository / path |
|---|---|
| Protocol, messages, pool, server, module API | blvm-stratum-v2 (src/protocol.rs, messages.rs, pool.rs, server.rs, module.rs, config.rs) |
| Node: TCP listener → network queue | stratum_v2_listener.rs |
| Node: StratumV2Config, [stratum_v2] in config | config/rpc.rs (type), config/mod.rs (top-level config) |
Stratum V2 Protocol
Protocol features
- Binary protocol: TLV framing (see module messages/protocol)
- Server and pool: Implemented in blvm-stratum-v2 (server.rs, pool.rs)
- Message encoding: protocol.rs, messages.rs
Listener (reference node)
The node does not ship a full blvm_node::network::stratum_v2::* tree on main by default. Incoming miner connections are handled by the listener, which reads TLV frames and emits NetworkMessage::StratumV2MessageReceived for the rest of the stack.
Code: stratum_v2_listener.rs, network/mod.rs (message variants)
Transport
Mining traffic uses the same transport stack as P2P; see Transport abstraction.
Code: transport.rs
Merge mining (optional plugin)
Merge mining is not part of the core node. It is provided by blvm-merge-mining, which builds on blvm-stratum-v2.
- Requires the Stratum V2 module
- Activation fee / revenue model: see module and marketplace docs
Documentation
- Module system
- Merge-mining module repository (if present in your workspace): blvm-merge-mining
Configuration
[stratum_v2]
enabled = true
pool_url = "tcp://pool.example.com:3333" # or "iroh://<nodeid>"
listen_addr = "0.0.0.0:3333" # Server / listener mode
Code: StratumV2Config
Usage
Integrate via the blvm-stratum-v2 crate as a node module (StratumV2Module, lib.rs). The snippets in older revisions that imported blvm_node::network::stratum_v2::StratumV2Server do not match the current on-disk layout; follow the module’s examples and the node’s stratum_v2 config + listener wiring above.
Benefits
- Bandwidth: Stratum V2 binary framing vs Stratum V1 text
- Modularity: Full server/pool logic in blvm-stratum-v2; the node provides the listener and config
- Optional merge mining: Separate commercial module
Multi-Transport Architecture
Overview
The transport abstraction layer provides a unified interface for multiple network transport protocols, enabling Bitcoin Commons to support both traditional TCP (Bitcoin P2P compatible) and modern QUIC-based transports simultaneously.
Architecture
NetworkManager
└── Transport Trait (abstraction)
├── TcpTransport (Bitcoin P2P compatible)
├── QuinnTransport (direct QUIC)
└── IrohTransport (QUIC with NAT traversal)
Transport Types
Transport Comparison
| Feature | TCP | Quinn QUIC | Iroh QUIC |
|---|---|---|---|
| Protocol | TCP/IP | QUIC | QUIC + DERP |
| Compatibility | Bitcoin P2P | Bitcoin P2P compatible | Commons-specific |
| Addressing | SocketAddr | SocketAddr | Public Key |
| NAT Traversal | ❌ No | ❌ No | ✅ Yes (DERP) |
| Multiplexing | ❌ No | ✅ Yes | ✅ Yes |
| Encryption | ❌ No (TLS optional) | ✅ Built-in | ✅ Built-in |
| Connection Migration | ❌ No | ✅ Yes | ✅ Yes |
| Latency | Standard | Lower | Lower |
| Bandwidth | Standard | Better | Better |
| Default | ✅ Yes | ❌ No | ❌ No |
| Feature Flag | Always enabled | quinn | iroh |
Code: transport.rs
TCP Transport
Traditional TCP transport for Bitcoin P2P protocol compatibility:
- Uses standard TCP sockets
- Maintains Bitcoin wire protocol format
- Compatible with standard Bitcoin nodes
- Default transport for backward compatibility
- No built-in encryption (TLS optional)
- No connection multiplexing
Code: tcp_transport.rs
Quinn QUIC Transport
Direct QUIC transport using the Quinn library:
- QUIC protocol benefits (multiplexing, encryption, connection migration)
- SocketAddr-based addressing (similar to TCP)
- Lower latency and better congestion control
- Built-in TLS encryption
- Stream multiplexing over single connection
- Optional feature flag:
quinn
Code: quinn_transport.rs
Iroh Transport
QUIC-based transport using Iroh for P2P networking:
- Public key-based peer identity
- NAT traversal support via DERP (Designated Encrypted Relay for Packets) relays
- Decentralized peer discovery
- Built-in encryption and multiplexing
- Connection migration support
- Optional feature flag:
iroh
Code: iroh_transport.rs
Performance Characteristics
TCP Transport:
- Latency: Standard (RTT-dependent)
- Throughput: Standard (TCP congestion control)
- Connection Overhead: Low (no encryption by default)
- Use Case: Bitcoin P2P compatibility, standard networking
Quinn QUIC Transport:
- Latency: Lower (0-RTT connection establishment)
- Throughput: Higher (better congestion control)
- Connection Overhead: Moderate (built-in encryption)
- Use Case: Modern applications, improved performance
Iroh QUIC Transport:
- Latency: Lower (0-RTT + DERP routing)
- Throughput: Higher (QUIC + optimized routing)
- Connection Overhead: Higher (DERP relay overhead)
- Use Case: NAT traversal, decentralized networking
Code: transport.rs
Transport Abstraction
Transport Trait
The Transport trait provides a unified interface:
#![allow(unused)]
fn main() {
pub trait Transport: Send + Sync {
fn connect(&self, addr: TransportAddr) -> Result<Box<dyn TransportConnection>>;
fn listen(&self, addr: TransportAddr) -> Result<Box<dyn TransportListener>>;
fn transport_type(&self) -> TransportType;
}
}
Code: transport.rs
TransportAddr
Unified address type supporting all transports:
#![allow(unused)]
fn main() {
pub enum TransportAddr {
Tcp(SocketAddr),
Quinn(SocketAddr),
Iroh(Vec<u8>), // Public key bytes
}
}
Code: transport.rs
TransportType
Runtime transport selection:
#![allow(unused)]
fn main() {
pub enum TransportType {
Tcp,
Quinn,
Iroh,
}
}
Code: transport.rs
Transport Selection
Transport Preference
Runtime preference for transport selection:
- TcpOnly: Use only TCP transport
- IrohOnly: Use only Iroh transport
- Hybrid: Prefer Iroh if available, fallback to TCP
Code: transport.rs
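The three preferences reduce to a small selection function. The types below mirror those in transport.rs but are simplified for illustration; in particular, modelling peer capability as a single boolean is an assumption.

```rust
#![allow(unused)]
#[derive(Clone, Copy, PartialEq, Debug)]
enum TransportType {
    Tcp,
    Iroh,
}

enum TransportPreference {
    TcpOnly,
    IrohOnly,
    Hybrid,
}

// Sketch of runtime selection: Hybrid prefers Iroh when the peer supports
// it and otherwise falls back to TCP; IrohOnly yields no transport for a
// peer without Iroh support.
fn choose_transport(pref: &TransportPreference, peer_supports_iroh: bool) -> Option<TransportType> {
    match pref {
        TransportPreference::TcpOnly => Some(TransportType::Tcp),
        TransportPreference::IrohOnly => peer_supports_iroh.then_some(TransportType::Iroh),
        TransportPreference::Hybrid => {
            if peer_supports_iroh {
                Some(TransportType::Iroh)
            } else {
                Some(TransportType::Tcp)
            }
        }
    }
}

fn main() {
    // Hybrid falls back to TCP when the peer lacks Iroh support.
    assert_eq!(
        choose_transport(&TransportPreference::Hybrid, false),
        Some(TransportType::Tcp)
    );
}
```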
Feature Negotiation
Peers negotiate transport capabilities during connection:
- Service flags indicate transport support
- Automatic fallback if preferred transport unavailable
- Transport-aware message routing
Code: protocol.rs
Protocol Adapter
The ProtocolAdapter handles message serialization between:
- Consensus-proof
NetworkMessagetypes - Transport-specific wire formats (TCP Bitcoin P2P vs Iroh message format)
Code: protocol_adapter.rs
Message Bridge
The MessageBridge bridges blvm-consensus message processing with transport layer:
- Converts messages to/from transport formats
- Processes incoming messages
- Generates responses
Code: message_bridge.rs
Network Manager Integration
The NetworkManager supports multiple transports:
- Runtime transport selection
- Transport-aware peer management
- Unified message routing
- Automatic transport fallback
Code: mod.rs
Benefits
- Backward Compatibility: TCP transport maintains Bitcoin P2P compatibility
- Modern Protocols: QUIC support for improved performance
- Flexibility: Runtime transport selection
- Unified Interface: Single API for all transports
- NAT Traversal: Iroh transport enables NAT traversal
- Extensible: Easy to add new transport types
Usage
Configuration
[network]
transport_preference = "Hybrid" # or "TcpOnly" or "IrohOnly"
[network.tcp]
enabled = true
listen_addr = "0.0.0.0:8333"
[network.iroh]
enabled = true
node_id = "..."
Code Example
#![allow(unused)]
fn main() {
use blvm_node::network::{NetworkManager, TransportAddr, TransportType};
let network_manager = NetworkManager::new(config);
// Connect via TCP
let tcp_addr = TransportAddr::tcp("127.0.0.1:8333".parse()?);
network_manager.connect(tcp_addr).await?;
// Connect via Iroh
let iroh_addr = TransportAddr::iroh(pubkey_bytes);
network_manager.connect(iroh_addr).await?;
}
Components
The transport abstraction includes:
- Transport trait definitions
- TCP transport implementation
- Quinn QUIC transport (optional)
- Iroh QUIC transport (optional)
- Protocol adapter for message conversion
- Message bridge for unified routing
- Network manager integration
Location: blvm-node/src/network/transport.rs, blvm-node/src/network/tcp_transport.rs, blvm-node/src/network/quinn_transport.rs, blvm-node/src/network/iroh_transport.rs
Privacy Relay Protocols
Overview
Bitcoin Commons implements multiple privacy-preserving and performance-optimized transaction relay protocols: Dandelion++, Fibre, and Package Relay. These protocols improve privacy, reduce bandwidth, and enable efficient transaction propagation.
Dandelion++
Overview
Dandelion++ provides privacy-preserving transaction relay with formal anonymity guarantees against transaction origin analysis. It operates in two phases: stem phase (obscures origin) and fluff phase (standard diffusion).
Code: dandelion.rs
Architecture
Dandelion++ operates in two phases:
- Stem Phase: Transaction relayed along a random path (obscures origin)
- Fluff Phase: Transaction broadcast to all peers (standard diffusion)
Stem Path Management
Each peer maintains a stem path to a randomly selected peer:
#![allow(unused)]
fn main() {
pub struct StemPath {
pub next_peer: String,
pub expiry: Instant,
pub hop_count: u8,
}
}
Code: dandelion.rs
Stem Phase Behavior
- Transactions relayed to next peer in stem path
- Random path selection obscures transaction origin
- Stem timeout: 10 seconds (default)
- Fluff probability: 10% per hop (default)
- Maximum stem hops: 2 (default)
Code: dandelion.rs
Fluff Phase Behavior
- Transaction broadcast to all peers
- Standard Bitcoin transaction diffusion
- Triggered by:
- Random probability at each hop
- Stem timeout expiration
- Maximum hop count reached
Code: dandelion.rs
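The stem-to-fluff transition rules above reduce to a single predicate over the stem state. The sketch below is illustrative only: the `StemState` fields and `should_fluff` signature are assumptions, not the dandelion.rs API, though the three trigger conditions match the documented defaults (10s timeout, 10% per-hop probability, 2 max hops).

```rust
use std::time::{Duration, Instant};

/// Per-transaction stem state (illustrative; see StemPath in dandelion.rs).
pub struct StemState {
    pub started: Instant,
    pub hops: u8,
}

/// Decide whether a stem-phase transaction should transition to fluff.
/// `coin_flip` is a uniform random sample in [0, 1) drawn per hop.
pub fn should_fluff(
    state: &StemState,
    stem_timeout: Duration,
    max_stem_hops: u8,
    coin_flip: f64,
    fluff_probability: f64,
) -> bool {
    state.started.elapsed() >= stem_timeout // stem timeout expired
        || state.hops >= max_stem_hops      // maximum hop count reached
        || coin_flip < fluff_probability    // random transition at this hop
}
```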
Configuration
[network.dandelion]
enabled = true
stem_timeout_secs = 10
fluff_probability = 0.1 # 10%
max_stem_hops = 2
Code: dandelion.rs
Benefits
- Privacy: Obscures transaction origin
- Formal Guarantees: Anonymity guarantees against origin analysis
- Backward Compatible: Falls back to standard relay if disabled
- Configurable: Adjustable timeouts and probabilities
Fibre
Overview
Fibre (Fast Internet Bitcoin Relay Engine) provides high-performance block relay using UDP transport with Forward Error Correction (FEC) encoding for packet loss tolerance.
Code: fibre.rs
Architecture
Fibre uses:
- UDP Transport: Low-latency UDP for block relay
- FEC Encoding: Reed-Solomon erasure coding for packet loss tolerance
- Chunk-based Transmission: Blocks split into chunks with parity shards
- Automatic Recovery: Missing chunks recovered via FEC
FEC Encoding
Blocks are encoded using Reed-Solomon erasure coding:
- Data Shards: Original block data split into shards
- Parity Shards: Redundant shards for error recovery
- Shard Size: Configurable (default: 1024 bytes)
- Parity Ratio: Configurable (default: 0.2 = 20% parity)
Code: fibre.rs
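The shard arithmetic implied by the defaults above (1024-byte shards, 20% parity) can be worked through concretely. This is a sketch of the counting only, not the fibre.rs API; the function name and signature are illustrative.

```rust
/// Compute (data_shards, parity_shards) for a serialized block.
/// With Reed-Solomon erasure coding, any `data_shards` of the
/// `data_shards + parity_shards` total suffice to reconstruct the block.
pub fn shard_counts(block_len: usize, shard_size: usize, parity_ratio: f64) -> (usize, usize) {
    // Ceiling division: enough data shards to cover the whole block.
    let data_shards = ((block_len + shard_size - 1) / shard_size).max(1);
    // Parity shards add redundancy proportional to the configured ratio.
    let parity_shards = ((data_shards as f64 * parity_ratio).ceil() as usize).max(1);
    (data_shards, parity_shards)
}
```

For a 1 MB block with the defaults, this yields 977 data shards and 196 parity shards, so up to 196 lost chunks can be recovered without retransmission.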
Block Encoding Process
- Serialize block to bytes
- Split into data shards
- Generate parity shards via FEC
- Create FEC chunks for transmission
- Send chunks via UDP
Code: fibre.rs
Block Assembly Process
- Receive FEC chunks via UDP
- Track received chunks per block
- When enough chunks received (data shards), reconstruct block
- Verify block hash matches
Code: fibre.rs
UDP Transport
Fibre uses UDP for low-latency transmission:
- Connection Tracking: Per-peer connection state
- Retry Logic: Automatic retry for lost chunks
- Sequence Numbers: Duplicate detection
- Timeout Handling: Connection timeout management
Code: fibre.rs
Configuration
[network.fibre]
enabled = true
bind_addr = "0.0.0.0:8334"
chunk_timeout_secs = 5
max_retries = 3
fec_parity_ratio = 0.2 # 20% parity
max_assemblies = 100
Code: fibre.rs
Statistics
Fibre tracks comprehensive statistics:
- Blocks sent/received
- Chunks sent/received
- FEC recoveries
- UDP errors
- Average latency
- Success rate
Code: fibre.rs
Benefits
- Low Latency: UDP transport reduces latency
- Packet Loss Tolerance: FEC recovers from lost chunks
- High Throughput: Efficient chunk-based transmission
- Automatic Recovery: No manual retry needed
Package Relay (BIP331)
Overview
Package Relay (BIP331) allows nodes to relay and validate groups of transactions together, enabling efficient fee-bumping (RBF) and CPFP (Child Pays For Parent) scenarios.
Code: package_relay.rs
Package Structure
A transaction package contains:
- Transactions: Ordered list (parents before children)
- Package ID: Combined hash of all transactions
- Combined Fee: Sum of all transaction fees
- Combined Weight: Total weight for fee rate calculation
Code: package_relay.rs
Package Validation
Packages are validated for:
- Size Limits: Maximum 25 transactions (BIP331)
- Weight Limits: Maximum 404,000 WU (BIP331)
- Fee Rate: Minimum fee rate requirement
- Ordering: Parents must precede children
- No Duplicates: No duplicate transactions
Code: package_relay.rs
Use Cases
- Fee-Bumping (RBF): Parent + child transaction for fee increase
- CPFP: Child transaction pays for parent’s fees
- Atomic Sets: Multiple transactions that must be accepted together
Code: package_relay.rs
Package ID Calculation
Package ID is calculated as double SHA256 of all transaction IDs:
#![allow(unused)]
fn main() {
pub fn from_transactions(transactions: &[Transaction]) -> PackageId {
// Hash all txids, then double hash
}
}
Code: package_relay.rs
Configuration
[network.package_relay]
enabled = true
max_package_size = 25
max_package_weight = 404000 # 404k WU
min_fee_rate = 1000 # 1 sat/vB
Code: package_relay.rs
Benefits
- Efficient Fee-Bumping: Better fee rate calculation for packages
- Reduced Orphans: Reduces orphan transactions in mempool
- Atomic Validation: Package validated as unit
- DoS Resistance: Size and weight limits prevent abuse
Integration
Relay Manager
The RelayManager coordinates all relay protocols:
- Standard block/transaction relay
- Dandelion++ integration (optional)
- Fibre integration (optional)
- Package relay support
Code: relay.rs
Protocol Selection
Relay protocols are selected based on:
- Feature flags (dandelion, fibre)
- Peer capabilities
- Configuration settings
- Runtime preferences
Code: relay.rs
Components
The privacy relay system includes:
- Dandelion++ stem/fluff phase management
- Fibre UDP transport with FEC encoding
- Package Relay (BIP331) validation
- Relay manager coordination
- Statistics tracking
Location: blvm-node/src/network/dandelion.rs, blvm-node/src/network/fibre.rs, blvm-node/src/network/package_relay.rs, blvm-node/src/network/relay.rs
Package Relay (BIP331)
Overview
Package Relay (BIP331) enables nodes to relay and validate groups of transactions together as atomic units. This is particularly useful for fee-bumping (RBF) transactions, CPFP (Child Pays For Parent) scenarios, and atomic transaction sets.
Specification: BIP 331
Architecture
Package Structure
A transaction package contains:
#![allow(unused)]
fn main() {
pub struct TransactionPackage {
pub transactions: Vec<Transaction>, // Ordered: parents first
pub package_id: PackageId,
pub combined_fee: u64,
pub combined_weight: usize,
}
}
Code: package_relay.rs
Package ID
Package ID is calculated as double SHA256 of all transaction IDs:
#![allow(unused)]
fn main() {
pub fn from_transactions(transactions: &[Transaction]) -> PackageId {
let mut hasher = Sha256::new();
for tx in transactions {
let txid = calculate_txid(tx);
hasher.update(txid);
}
let first = hasher.finalize();
let mut hasher2 = Sha256::new();
hasher2.update(first);
let final_hash = hasher2.finalize();
PackageId(final_hash.into())
}
}
Code: package_relay.rs
Validation Rules
Size Limits
- Maximum Transactions: 25 (BIP331 limit)
- Maximum Weight: 404,000 WU (~101,000 vB)
- Minimum Fee Rate: Configurable (default: 1 sat/vB)
Code: package_relay.rs
Ordering Requirements
Transactions must be ordered with parents before children:
- Each transaction’s inputs that reference in-package parents must reference earlier transactions
- Invalid ordering results in InvalidOrder rejection
Code: package_relay.rs
Fee Calculation
Package fee is calculated as sum of all transaction fees:
combined_fee = sum(inputs) - sum(outputs), over all transactions
Fee rate is calculated as:
fee_rate = combined_fee / combined_weight
Code: package_relay.rs
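The fee-rate arithmetic above can be sketched in a few lines. The function below is illustrative (integer division, unit-agnostic), not the package_relay.rs implementation; it shows why packaging helps CPFP: a low-fee parent is averaged with a high-fee child.

```rust
/// Combined package fee rate: total fees over total weight.
/// Units are whatever the caller uses consistently (e.g. sats and WU).
pub fn package_fee_rate(fees: &[u64], weights: &[u64]) -> u64 {
    let combined_fee: u64 = fees.iter().sum();
    let combined_weight: u64 = weights.iter().sum();
    // Guard against an empty package; integer division for the sketch.
    combined_fee / combined_weight.max(1)
}
```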
Use Cases
Fee-Bumping (RBF)
Parent transaction + child transaction that increases fee:
Package:
- Parent TX (low fee)
- Child TX (bumps parent fee)
CPFP (Child Pays For Parent)
Child transaction pays for parent’s fees:
Package:
- Parent TX (insufficient fee)
- Child TX (pays for parent)
Atomic Transaction Sets
Multiple transactions that must be accepted together:
Package:
- TX1 (parent)
- TX2 (spends TX1; only valid if TX1 is accepted)
Code: package_relay.rs
Package Manager
PackageRelay
The PackageRelay manager handles:
- Package validation
- Package state tracking
- Package acceptance/rejection
- Package relay to peers
Code: package_relay.rs
Package States
#![allow(unused)]
fn main() {
pub enum PackageStatus {
Pending, // Awaiting validation
Accepted, // Validated and accepted
Rejected { reason: PackageRejectReason },
}
}
Code: package_relay.rs
Rejection Reasons
#![allow(unused)]
fn main() {
pub enum PackageRejectReason {
TooManyTransactions,
WeightExceedsLimit,
FeeRateTooLow,
InvalidOrder,
DuplicateTransactions,
InvalidStructure,
}
}
Code: package_relay.rs
Validation Process
- Size Check: Verify transaction count ≤ 25
- Weight Check: Verify combined weight ≤ 404,000 WU
- Ordering Check: Verify parents before children
- Duplicate Check: Verify no duplicate transactions
- Fee Calculation: Calculate combined fee and fee rate
- Fee Rate Check: Verify fee rate ≥ minimum
- Structure Check: Verify valid package structure
Code: package_relay.rs
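The check order above can be sketched as a single validation function. The rejection variants mirror this section; the function signature, parameters, and fee-rate units are illustrative assumptions, not the package_relay.rs API.

```rust
#[derive(Debug, PartialEq)]
pub enum PackageRejectReason {
    TooManyTransactions,
    WeightExceedsLimit,
    DuplicateTransactions,
    FeeRateTooLow,
}

/// Validate a package against the BIP331-style limits documented above.
/// `min_fee_rate` is a minimum fee per weight unit (illustrative units).
pub fn validate_package(
    tx_count: usize,
    combined_weight: u64,
    combined_fee: u64,
    has_duplicates: bool,
    min_fee_rate: u64,
) -> Result<(), PackageRejectReason> {
    if tx_count > 25 {
        return Err(PackageRejectReason::TooManyTransactions); // BIP331 count limit
    }
    if combined_weight > 404_000 {
        return Err(PackageRejectReason::WeightExceedsLimit); // BIP331 weight limit
    }
    if has_duplicates {
        return Err(PackageRejectReason::DuplicateTransactions);
    }
    // Fee-rate check on the package as a whole; cross-multiplied to
    // avoid floating-point division.
    if combined_fee < min_fee_rate * combined_weight {
        return Err(PackageRejectReason::FeeRateTooLow);
    }
    Ok(())
}
```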
Network Integration
Package Messages
- PackageRelay: Relay package to peers
- PackageAccept: Package accepted by peer
- PackageReject: Package rejected with reason
Code: package_relay_handler.rs
Handler Integration
The PackageRelayHandler processes incoming package messages:
- Receives package relay requests
- Validates packages
- Accepts or rejects packages
- Relays accepted packages to other peers
Code: package_relay_handler.rs
Configuration
[network.package_relay]
enabled = true
max_package_size = 25
max_package_weight = 404000 # 404k WU
min_fee_rate = 1000 # 1 sat/vB
Code: package_relay.rs
Benefits
- Efficient Fee-Bumping: Better fee rate calculation for packages
- Reduced Orphans: Reduces orphan transactions in mempool
- Atomic Validation: Package validated as unit
- DoS Resistance: Size and weight limits prevent abuse
- CPFP Support: Enables child-pays-for-parent scenarios
Components
The Package Relay system includes:
- Package structure and validation
- Package ID calculation
- Fee and weight calculation
- Ordering validation
- Package manager
- Network message handling
Location: blvm-node/src/network/package_relay.rs, blvm-node/src/network/package_relay_handler.rs
Performance Optimizations
Overview
The node implements performance optimizations for initial block download (IBD), parallel validation, and efficient UTXO operations. Actual speedup depends on hardware, network, and workload. For current numbers, see benchmarks.thebitcoincommons.org (when available) or run benchmarks locally (Benchmarking).
Parallel Initial Block Download (IBD)
Overview
Parallel IBD significantly speeds up initial blockchain synchronization by downloading and validating blocks concurrently from multiple peers. The system uses checkpoint-based parallel header download, block pipelining, streaming validation, and efficient batch storage operations.
The node uses parallel IBD for initial sync. Code: parallel_ibd/mod.rs
Architecture
The parallel IBD system consists of several coordinated optimizations:
- Checkpoint Parallel Headers: Download headers in parallel using hardcoded checkpoints
- Block Pipelining: Download multiple blocks concurrently from each peer
- Streaming Validation: Validate blocks as they arrive using a reorder buffer
- Batch Storage: Use batch writes for efficient UTXO set updates
Checkpoint Parallel Headers
Headers are downloaded in parallel using hardcoded checkpoints at well-known block heights. This allows multiple header ranges to be downloaded simultaneously from different peers.
Checkpoints: Genesis, 11111, 33333, 74000, 105000, 134444, 168000, 193000, 210000 (first halving), 250000, 295000, 350000, 400000, 450000, 500000, 550000, 600000, 650000, 700000, 750000, 800000, 850000
Code: parallel_ibd/checkpoints.rs
Algorithm:
- Identify checkpoint ranges for the target height range
- Download headers in parallel for each range
- Each range uses the checkpoint hash as its starting locator
- Verification ensures continuity and checkpoint hash matching
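Step 1 of the algorithm above, partitioning a height range at the hardcoded checkpoints, can be sketched as follows. The function name and signature are illustrative, not the checkpoints.rs API; only the partitioning idea is taken from the text.

```rust
/// Split [start, target) into checkpoint-bounded sub-ranges that can be
/// downloaded from different peers in parallel. Each range begins at a
/// height whose block hash is known (a checkpoint or the sync start).
pub fn checkpoint_ranges(checkpoints: &[u64], start: u64, target: u64) -> Vec<(u64, u64)> {
    let mut bounds: Vec<u64> = checkpoints
        .iter()
        .copied()
        .filter(|&h| h > start && h < target)
        .collect();
    bounds.push(target);
    let mut ranges = Vec::new();
    let mut lo = start;
    for hi in bounds {
        ranges.push((lo, hi)); // header download for this range starts at `lo`
        lo = hi;
    }
    ranges
}
```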
Block Pipelining
Blocks are downloaded with deep pipelining per peer, allowing multiple outstanding block requests to hide network latency.
Configuration (see ParallelIBDConfig and IBD Configuration):
- chunk_size: blocks per chunk (default: 16; env BLVM_IBD_CHUNK_SIZE, range 16–2000)
- download_timeout_secs: timeout per block in seconds (default: 30)
- max_concurrent_per_peer: fixed at 64 in code (not in [ibd] config; see ParallelIBDConfig)
Code: ParallelIBDConfig
Dynamic Work Dispatch:
- Uses a shared work queue instead of static chunk assignment
- Fast peers automatically grab more work as they finish chunks
- FIFO ordering ensures lowest heights are processed first
- Natural load balancing across peers
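The shared work queue described above can be sketched with a mutex-guarded FIFO: chunks are enqueued lowest-height-first, and each peer task pops the next chunk whenever it finishes one, which yields the natural load balancing. The types and function below are illustrative, not the parallel_ibd API.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

/// One unit of download work: `count` blocks starting at `start_height`.
#[derive(Debug, PartialEq)]
pub struct Chunk {
    pub start_height: u64,
    pub count: u32,
}

/// Build the shared FIFO queue covering [start, end) in `chunk_size` steps.
/// Peer tasks clone the Arc and `pop_front` under the lock as they free up.
pub fn make_work_queue(start: u64, end: u64, chunk_size: u32) -> Arc<Mutex<VecDeque<Chunk>>> {
    let mut queue = VecDeque::new();
    let mut h = start;
    while h < end {
        let count = chunk_size.min((end - h) as u32); // last chunk may be short
        queue.push_back(Chunk { start_height: h, count });
        h += count as u64;
    }
    Arc::new(Mutex::new(queue))
}
```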
Streaming Validation with Reorder Buffer
Blocks may arrive out of order from parallel downloads. A reorder buffer ensures blocks are validated in sequential order while allowing downloads to continue.
Implementation:
- Reorder buffer (BTreeMap) holds blocks until next expected height; buffer limit is height-dependent (see memory.rs).
- Streaming validation: validates blocks in order as they become available.
- Backpressure: downloads pause when buffer is full.
Code: parallel_ibd (feeder, validation_loop when production feature enabled)
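A minimal reorder buffer in the spirit of the implementation notes above: out-of-order blocks are parked in a BTreeMap and drained in height order as soon as the next expected height arrives. This sketch omits the height-dependent buffer limit and backpressure; the type and method names are illustrative.

```rust
use std::collections::BTreeMap;

pub struct ReorderBuffer<B> {
    next_height: u64,
    pending: BTreeMap<u64, B>,
}

impl<B> ReorderBuffer<B> {
    pub fn new(start_height: u64) -> Self {
        Self { next_height: start_height, pending: BTreeMap::new() }
    }

    /// Insert a downloaded block; return every block that is now in order
    /// and ready for streaming validation.
    pub fn insert(&mut self, height: u64, block: B) -> Vec<B> {
        self.pending.insert(height, block);
        let mut ready = Vec::new();
        // Drain the run of consecutive heights starting at next_height.
        while let Some(block) = self.pending.remove(&self.next_height) {
            ready.push(block);
            self.next_height += 1;
        }
        ready
    }
}
```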
Batch Storage Operations
UTXO set updates use batch writes for efficient bulk operations (single transaction vs many).
BatchWriter Trait:
- Accumulates multiple put/delete operations
- Commits all operations atomically in a single transaction
- Ensures database consistency even on crash
Code: BatchWriter (trait and backend impls)
Usage:
#![allow(unused)]
fn main() {
let mut batch = tree.batch();
for (key, value) in utxo_updates {
batch.put(key, value);
}
batch.commit()?; // Single atomic commit
}
Peer Scoring and Filtering
The system tracks peer performance and filters out extremely slow peers during IBD:
- Latency Tracking: Monitors average block download latency per peer
- Slow Peer Filtering: Drops peers with >90s average latency (keeps at least 2)
- Dynamic Selection: Fast peers automatically get more work
Code: parallel_ibd (peer scoring and filtering)
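The filtering rule above (drop peers averaging over 90s per block, but keep at least two) can be sketched as below. The representation of peers as (id, average latency) pairs is an assumption for illustration, not the parallel_ibd data model.

```rust
/// Keep peers with average block latency <= 90s; if fewer than two
/// qualify, fall back to the two fastest peers overall.
pub fn filter_slow_peers(mut peers: Vec<(String, f64)>) -> Vec<(String, f64)> {
    // Sort fastest first so the fallback keeps the best peers.
    peers.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    let fast: Vec<_> = peers.iter().filter(|(_, lat)| *lat <= 90.0).cloned().collect();
    if fast.len() >= 2 {
        fast
    } else {
        peers.into_iter().take(2).collect()
    }
}
```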
Configuration
[ibd]
chunk_size = 16
download_timeout_secs = 30
mode = "parallel"
eviction = "fifo"
max_blocks_in_transit_per_peer = 16
headers_timeout_secs = 30
headers_max_failures = 10
(max_concurrent_per_peer is fixed at 64 in the node; not in IbdConfig. See Node Configuration and configuration-reference.)
Code: ParallelIBDConfig
Parallel headers, pipelining, streaming validation, and batch storage all contribute to faster IBD compared to a single-threaded sequential sync. See benchmarks for current measurements.
Parallel Block Validation
Architecture
Blocks are validated in parallel when they are deep enough from the chain tip. This optimization uses Rayon for parallel execution.
Code: validation/mod.rs
Safety Conditions
Parallel validation is only used when:
- Blocks are beyond max_parallel_depth from tip (default in code: 100 blocks; see ParallelBlockValidator::default)
- Each block uses its own UTXO set snapshot (independent validation)
- Blocks are validated sequentially if too close to tip
Code: validation/mod.rs
Implementation
#![allow(unused)]
fn main() {
pub fn validate_blocks_parallel(
&self,
contexts: &[BlockValidationContext],
depth_from_tip: usize,
network: Network,
) -> Result<Vec<(ValidationResult, UtxoSet)>> {
if depth_from_tip <= self.max_parallel_depth {
return self.validate_blocks_sequential(contexts, network);
}
// Parallel validation using Rayon
use rayon::prelude::*;
contexts.par_iter().map(|context| {
connect_block(&context.block, ...)
}).collect()
}
}
Code: validation/mod.rs
Batch UTXO Operations
Batch Fee Calculation
Transaction fees are calculated in batches by pre-fetching all UTXOs before validation:
- Collect all prevouts from all transactions
- Batch UTXO lookup (single pass through HashMap)
- Cache UTXOs for fee calculation
- Calculate fees using cached UTXOs
Code: block/apply.rs
Implementation
#![allow(unused)]
fn main() {
// Pre-collect all prevouts for batch UTXO lookup
let all_prevouts: Vec<&OutPoint> = block
.transactions
.iter()
.filter(|tx| !is_coinbase(tx))
.flat_map(|tx| tx.inputs.iter().map(|input| &input.prevout))
.collect();
// Batch UTXO lookup (single pass)
let mut utxo_cache: HashMap<&OutPoint, &UTXO> =
HashMap::with_capacity(all_prevouts.len());
for prevout in &all_prevouts {
if let Some(utxo) = utxo_set.get(prevout) {
utxo_cache.insert(prevout, utxo);
}
}
}
Code: block/apply.rs
Configuration
[performance]
enable_batch_utxo_lookups = true
parallel_batch_size = 8
Code: config.rs
Assume-Valid Checkpoints
Overview
Assume-valid checkpoints skip expensive signature verification for blocks before a configured height, reducing IBD time when enabled.
Code: block/mod.rs
Safety
This optimization is safe because:
- These blocks are already validated by the network
- Block structure, Merkle roots, and PoW are still validated
- Only signature verification is skipped (the expensive operation)
Code: block/mod.rs
Configuration
[performance]
assume_valid_height = 700000 # Skip signatures before this height
Environment Variable:
ASSUME_VALID_HEIGHT=700000
Code: block/mod.rs
Signature verification is a major cost; skipping it for pre-checkpoint blocks speeds IBD. Only signature verification is skipped; structure, Merkle, and PoW are still validated. Can be disabled (set to 0) for maximum safety.
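The gating logic reduces to a height comparison; setting the checkpoint to 0 disables skipping entirely, as noted above. A sketch, with an assumed function name rather than the block/mod.rs API:

```rust
/// Whether signature verification may be skipped for a block at `height`
/// under the assume-valid policy. A checkpoint of 0 disables the
/// optimization; structure, Merkle, and PoW checks always still run.
pub fn skip_signature_checks(height: u64, assume_valid_height: u64) -> bool {
    assume_valid_height != 0 && height < assume_valid_height
}
```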
Parallel Transaction Validation
Architecture
Within a block, transaction validation is parallelized where safe:
- Parallel validation (read-only UTXO access): transaction structure, input validation, fee calculation, script verification.
- Sequential application (write operations): UTXO set updates and state transitions to maintain correctness.
Code: block/mod.rs
Implementation
#![allow(unused)]
fn main() {
#[cfg(feature = "rayon")]
{
use rayon::prelude::*;
// Parallel validation (read-only)
let validation_results: Vec<Result<...>> = block
.transactions
.par_iter()
.map(|tx| { check_transaction(tx)?; check_tx_inputs(tx, &utxo_cache, height)?; ... })
.collect();
// Sequential application (write operations)
for (tx, validation) in transactions.zip(validation_results) {
apply_transaction(tx, &mut utxo_set)?;
}
}
}
Code: block/mod.rs
Advanced Indexing
Address Indexing
Indexes transactions by address for fast lookup:
- Address Database: Maps addresses to transaction history
- Fast Lookup: O(1) address-to-transaction mapping
- Incremental Updates: Updates on each block
Code: txindex.rs, transaction-indexing.md
Value Range Indexing
Indexes UTXOs by value range for efficient queries:
- Range Queries: Find UTXOs in value ranges
- Optimized Lookups: Indexed by value range for efficient queries
- Memory Efficient: Sparse indexing structure
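One natural way to support the range queries described above is a BTreeMap keyed by value, whose ordered structure makes range scans efficient. This is an illustrative sketch, not the actual index implementation; the type and its string outpoint ids are assumptions.

```rust
use std::collections::BTreeMap;

/// Sparse value index: only values that actually occur get an entry.
pub struct ValueIndex {
    by_value: BTreeMap<u64, Vec<String>>, // value in sats -> outpoint ids
}

impl ValueIndex {
    pub fn new() -> Self {
        Self { by_value: BTreeMap::new() }
    }

    pub fn insert(&mut self, value: u64, outpoint: &str) {
        self.by_value.entry(value).or_default().push(outpoint.to_string());
    }

    /// All outpoints whose value lies in [min, max], via an ordered range scan.
    pub fn in_range(&self, min: u64, max: u64) -> Vec<&String> {
        self.by_value.range(min..=max).flat_map(|(_, ids)| ids).collect()
    }
}
```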
Runtime Optimizations
Constant Folding
Pre-computed constants avoid runtime computation:
#![allow(unused)]
fn main() {
pub mod precomputed_constants {
pub const U64_MAX: u64 = u64::MAX;
pub const MAX_MONEY_U64: u64 = MAX_MONEY as u64;
pub const BTC_PER_SATOSHI: f64 = 1.0 / (SATOSHIS_PER_BTC as f64);
}
}
Code: optimizations.rs
Bounds Check Optimization
Optimized bounds checking for proven-safe access patterns:
#![allow(unused)]
fn main() {
pub fn get_proven<T>(slice: &[T], index: usize, bound_check: bool) -> Option<&T> {
if bound_check {
slice.get(index)
} else {
// Unsafe only when bounds are statically proven
unsafe { ... }
}
}
}
Code: optimizations.rs
Cache-Friendly Memory Layouts
32-byte aligned hash structures for better cache performance:
#![allow(unused)]
fn main() {
#[repr(align(32))]
pub struct CacheAlignedHash([u8; 32]);
}
Code: optimizations.rs
Performance Configuration
Configuration Options
[performance]
# Script verification threads (0 = auto-detect)
script_verification_threads = 0
# Parallel batch size
parallel_batch_size = 8
# Enable SIMD optimizations
enable_simd_optimizations = true
# Enable cache optimizations
enable_cache_optimizations = true
# Enable batch UTXO lookups
enable_batch_utxo_lookups = true
Code: config.rs
Default Values
- script_verification_threads: 0 (auto-detect from CPU count)
- parallel_batch_size: 8 transactions per batch
- enable_simd_optimizations: true
- enable_cache_optimizations: true
- enable_batch_utxo_lookups: true
Code: config.rs
Benchmark Results
Benchmark results are published at benchmarks.thebitcoincommons.org, generated by workflows in the blvm-bench repository. Run benchmarks locally for your hardware; see Benchmarking.
Components
The performance optimization system includes:
- Parallel block validation
- Batch UTXO operations
- Assume-valid checkpoints
- Parallel transaction validation
- Advanced indexing (address, value range)
- Runtime optimizations (constant folding, bounds checks, cache-friendly layouts)
- Performance configuration
Location: blvm-consensus/src/optimizations.rs, blvm-consensus/src/block/, blvm-consensus/src/config.rs, blvm-node/src/validation/mod.rs. Storage default for IBD is RocksDB when the rocksdb feature is enabled.
See Also
- Node Overview - Node implementation details
- Node Configuration - Performance configuration options
- Benchmarking - Performance benchmarking
- Storage Backends - Storage performance
- Consensus Architecture - Optimization passes
- UTXO Commitments - UTXO proof verification and fast sync
- IBD Bandwidth Protection - IBD bandwidth management
QUIC RPC
Overview
Bitcoin Commons optionally supports JSON-RPC over QUIC using Quinn, providing improved performance and security compared to the standard TCP RPC server. QUIC RPC is an alternative transport protocol that runs alongside the standard TCP RPC server.
Features
- Encryption: Built-in TLS encryption via QUIC
- Multiplexing: Multiple concurrent requests over single connection
- Better Performance: Lower latency, better congestion control
- Backward Compatible: TCP RPC server always available
Code: QUIC_RPC.md
Usage
Basic (TCP Only - Default)
#![allow(unused)]
fn main() {
use blvm_node::rpc::RpcManager;
use std::net::SocketAddr;
let tcp_addr: SocketAddr = "127.0.0.1:8332".parse().unwrap();
let mut rpc_manager = RpcManager::new(tcp_addr);
rpc_manager.start().await?;
}
Code: QUIC_RPC.md
With QUIC Support
#![allow(unused)]
fn main() {
use blvm_node::rpc::RpcManager;
use std::net::SocketAddr;
let tcp_addr: SocketAddr = "127.0.0.1:8332".parse().unwrap();
let quinn_addr: SocketAddr = "127.0.0.1:18332".parse().unwrap();
// Option 1: Create with both transports
#[cfg(feature = "quinn")]
let mut rpc_manager = RpcManager::with_quinn(tcp_addr, quinn_addr);
// Option 2: Enable QUIC after creation
let mut rpc_manager = RpcManager::new(tcp_addr);
#[cfg(feature = "quinn")]
rpc_manager.enable_quinn(quinn_addr);
rpc_manager.start().await?;
}
Code: QUIC_RPC.md
Configuration
Feature Flag
QUIC RPC requires the quinn feature flag:
[dependencies]
blvm-node = { path = "../blvm-node", features = ["quinn"] }
Code: QUIC_RPC.md
Build with QUIC
cargo build --features quinn
Code: QUIC_RPC.md
QUIC RPC Server
Server Implementation
The QuinnRpcServer provides JSON-RPC over QUIC:
- Certificate Generation: Self-signed certificates for development
- Connection Handling: Accepts incoming QUIC connections
- Stream Management: Manages bidirectional streams
- Request Processing: Processes JSON-RPC requests
Code: quinn_server.rs
Certificate Management
QUIC uses TLS certificates:
- Development: Self-signed certificates
- Production: Should use proper certificate management
- Certificate Generation: Automatic certificate generation
Code: quinn_server.rs
Client Usage
QUIC Client
Clients need QUIC support. Example with quinn (requires quinn feature):
#![allow(unused)]
fn main() {
use quinn::Endpoint;
use std::net::SocketAddr;
let server_addr: SocketAddr = "127.0.0.1:18332".parse().unwrap();
let endpoint = Endpoint::client("0.0.0.0:0".parse().unwrap())?;
let connection = endpoint.connect(server_addr, "localhost")?.await?;
// Open bidirectional stream
let (mut send, mut recv) = connection.open_bi().await?;
// Send JSON-RPC request
let request = r#"{"jsonrpc":"2.0","method":"getblockchaininfo","params":[],"id":1}"#;
send.write_all(request.as_bytes()).await?;
send.finish().await?;
// Read response
let mut response = Vec::new();
recv.read_to_end(&mut response).await?;
let response_str = String::from_utf8(response)?;
}
Code: QUIC_RPC.md
Benefits Over TCP
- Encryption: Built-in TLS, no need for separate TLS layer
- Multiplexing: Multiple requests without head-of-line blocking
- Connection Migration: Survives IP changes
- Lower Latency: Better congestion control
- Stream-Based: Natural fit for request/response patterns
Code: QUIC_RPC.md
Limitations
- Ecosystem tooling: Most off-the-shelf JSON-RPC clients assume TCP HTTP to port 8332/18332
- Client Support: Requires QUIC-capable clients
- Certificate Management: Self-signed certs need proper handling for production
- Network Requirements: Some networks may block UDP/QUIC
Code: QUIC_RPC.md
Security Notes
- Self-Signed Certificates: Uses self-signed certificates for development. Production deployments require proper certificate management.
- Authentication: QUIC provides transport encryption but not application-level auth
- Same Security Boundaries: QUIC RPC has same security boundaries as TCP RPC (no wallet access)
Code: QUIC_RPC.md
When to Use
- High-Performance Applications: When you need better performance than TCP
- Modern Infrastructure: When all clients support QUIC
- Enhanced Security: When you want built-in encryption without extra TLS layer
- Internal Services: When you control both client and server
Code: QUIC_RPC.md
When Not to Use
- Legacy Clients: Scripts or apps that only speak TCP/HTTP RPC
- Simple Use Cases: TCP RPC is simpler and sufficient for most cases
Code: QUIC_RPC.md
Components
The QUIC RPC system includes:
- Quinn RPC server implementation
- Certificate generation and management
- Connection and stream handling
- JSON-RPC protocol over QUIC
- Client support examples
Location: blvm-node/src/rpc/quinn_server.rs, blvm-node/docs/QUIC_RPC.md
Developer SDK Overview
The developer SDK (blvm-sdk) provides governance infrastructure and a composition framework for Bitcoin. It includes reusable governance primitives and a composition framework for building alternative Bitcoin implementations.
Architecture Position
Tier 5 of the 6-tier Bitcoin Commons architecture:
1. Orange Paper (mathematical foundation)
2. blvm-consensus (pure math implementation)
3. blvm-protocol (Bitcoin abstraction)
4. blvm-node (full node implementation)
5. blvm-sdk (governance + composition) ← THIS LAYER
6. blvm-commons (governance enforcement)
Core Components
Module authoring (blvm-sdk + macros)
For node modules (process-isolated extensions), blvm-sdk provides:
- blvm_sdk::module::prelude and run_module!/run_module_main!: bootstrap, DB, IPC main loop without hand-written event plumbing.
- blvm-sdk-macros: #[module], #[command], #[rpc_method], #[on_event], #[config], #[migration], etc., to declare CLI, RPC, events, and config in one place.
Requires the node feature on blvm-sdk. See Module Development (especially SDK declarative style) and the hello-module example.
Governance Primitives
Cryptographic primitives for governance operations:
- Key Management: Generate and manage governance keypairs
- Signature Creation: Sign governance messages using Bitcoin-compatible standards
- Signature Verification: Verify signatures and multisig thresholds
- Multisig Logic: Threshold-based collective decision making
- Nested Multisig: Team-based governance with hierarchical multisig support
- Message Formats: Structured messages for releases, approvals, decisions
Code: signatures.rs
CLI Tools
Command-line tools for governance operations:
- blvm-keygen: Generate governance keypairs (PEM, JSON formats)
- blvm-sign: Sign governance messages (releases, approvals)
- blvm-verify: Verify signatures and multisig thresholds
- blvm-compose: Declarative node composition from modules
- blvm-sign-binary: Sign binary files for release verification
- blvm-verify-binary: Verify binary file signatures
- blvm-aggregate-signatures: Aggregate multiple signatures
Code: blvm-keygen.rs
Composition Framework
Declarative node composition from modules:
- Module Registry: Discover and manage available modules
- Lifecycle Management: Load, unload, reload modules at runtime
- Economic Integration: Merge mining revenue distribution
- Dependency Resolution: Automatic module dependency handling
Code: mod.rs
Key Features
Governance Primitives
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{
GovernanceKeypair, GovernanceMessage, Multisig
};
// Generate a keypair
let keypair = GovernanceKeypair::generate()?;
// Create a message to sign
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
// Sign the message
let signature = keypair.sign(&message.to_signing_bytes())?;
// Verify with multisig
let multisig = Multisig::new(6, 7, maintainer_keys)?;
let valid = multisig.verify(&message.to_signing_bytes(), &[signature])?;
}
Multisig Support
Threshold-based signature verification:
- N-of-M Thresholds: Configurable signature requirements (e.g., 6-of-7, see Multisig Configuration)
- Key Management: Maintainer key registration and rotation
- Signature Aggregation: Combine multiple signatures
- Verification: Cryptographic verification of threshold satisfaction
Code: multisig.rs
Bitcoin-Compatible Signing
Uses Bitcoin message signing standards:
- Message Format: Bitcoin message signing format
- Signature Algorithm: secp256k1 ECDSA
- Hash Function: Double SHA256
- Compatibility: Works with common PSBT/signing workflows used across the ecosystem
Code: signatures.rs
Design Principles
- Governance Crypto is Reusable: Clean library API for external consumers
- No GitHub Logic: SDK is pure cryptography + composition, not enforcement
- Bitcoin-Compatible: Uses Bitcoin message signing standards
- Test coverage: Treat governance crypto as security-critical—target exhaustive unit and integration tests before release
- Document for Consumers: Governance app developers are the customer
What This Is NOT
- NOT a general-purpose Bitcoin library
- NOT the GitHub enforcement engine (that’s blvm-commons)
- NOT handling wallet keys or user funds
- NOT competing with rust-bitcoin or BDK
Usage Examples
CLI Usage
# Generate a keypair
blvm-keygen --output alice.key --format pem
# Sign a release
blvm-sign release \
--version v1.0.0 \
--commit abc123 \
--key alice.key \
--output signature.txt
# Verify signatures
blvm-verify release \
--version v1.0.0 \
--commit abc123 \
--signatures sig1.txt,sig2.txt,sig3.txt,sig4.txt,sig5.txt,sig6.txt \
--threshold 6-of-7 \
--pubkeys keys.json
Library Usage
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{GovernanceKeypair, GovernanceMessage};
// Generate keypair
let keypair = GovernanceKeypair::generate()?;
// Sign message
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
let signature = keypair.sign(&message.to_signing_bytes())?;
}
See Also
- SDK Getting Started - Quick start guide
- API Reference - Complete SDK API documentation
- Module Development - Building modules
- SDK Examples - Usage examples
- Governance Overview - Governance system
SDK Getting Started
The developer SDK (blvm-sdk) provides governance infrastructure and cryptographic primitives for Bitcoin governance operations, plus module authoring (process-isolated node modules with CLI, RPC, and events). See Module Development for the declarative style.
Quick Start
As a Library
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{
GovernanceKeypair, GovernanceMessage, Multisig
};
// Generate a keypair
let keypair = GovernanceKeypair::generate()?;
// Create a message to sign
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
// Sign the message
let signature = keypair.sign(&message.to_signing_bytes())?;
// Verify with multisig
let multisig = Multisig::new(6, 7, maintainer_keys)?;
let valid = multisig.verify(&message.to_signing_bytes(), &[signature])?;
}
CLI Usage
# Generate a keypair
blvm-keygen --output alice.key --format pem
# Sign a release
blvm-sign release \
--version v1.0.0 \
--commit abc123 \
--key alice.key \
--output signature.txt
# Verify signatures
blvm-verify release \
--version v1.0.0 \
--commit abc123 \
--signatures sig1.txt,sig2.txt,sig3.txt,sig4.txt,sig5.txt,sig6.txt \
--threshold 6-of-7 \
--pubkeys keys.json
For more details, see the blvm-sdk README.
Authoring a node module
- Add blvm-sdk with the node feature and use the SDK declarative style (#[module], run_module!).
- Ship a binary + module.toml under the node’s modules directory (see Module Development).
- Optional: register CLI subcommands so users invoke your module via blvm <your-cli-group> … when the module is loaded (Module CLI under blvm).
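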
See Also
- SDK Overview - SDK introduction and architecture
- API Reference - Complete SDK API documentation
- Module Development - Building modules with the SDK
- SDK Examples - More usage examples
- Governance Overview - Governance system details
Module Development
The BTCDecoded blvm-node includes a process-isolated module system that enables optional features (Lightning, merge mining, privacy enhancements) without affecting consensus or base node stability. Modules run in separate processes with IPC communication, providing security through isolation.
Core Principles
- Process Isolation: Each module runs in a separate process with isolated memory
- API Boundaries: Modules communicate only through well-defined APIs
- Crash Containment: Module failures don’t propagate to the base node
- Consensus Isolation: Modules cannot modify consensus rules, UTXO set, or block validation
- State Separation: Module state is completely separate from consensus state
Communication
Modules communicate with the node via Inter-Process Communication (IPC) using Unix domain sockets. Protocol uses length-delimited binary messages (bincode serialization) with message types: Requests, Responses, Events. Connection is persistent for request/response pattern; events use pub/sub pattern for real-time notifications.
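A minimal sketch of length-delimited framing over a byte stream, assuming a 4-byte little-endian length prefix (the actual wire format, including bincode payload encoding, is defined by the node's IPC protocol):

```rust
// Write and read one length-prefixed frame. Standard library only; in the real
// protocol the payload bytes would be a bincode-serialized message.
use std::io::{Read, Result, Write};

fn write_frame<W: Write>(w: &mut W, payload: &[u8]) -> Result<()> {
    w.write_all(&(payload.len() as u32).to_le_bytes())?;
    w.write_all(payload)
}

fn read_frame<R: Read>(r: &mut R) -> Result<Vec<u8>> {
    let mut len = [0u8; 4];
    r.read_exact(&mut len)?; // length prefix first...
    let mut buf = vec![0u8; u32::from_le_bytes(len) as usize];
    r.read_exact(&mut buf)?; // ...then exactly that many payload bytes
    Ok(buf)
}

fn main() -> Result<()> {
    let mut wire = Vec::new();
    write_frame(&mut wire, b"hello")?;
    let mut cursor = &wire[..];
    assert_eq!(read_frame(&mut cursor)?, b"hello");
    Ok(())
}
```

Length-delimited framing is what lets both sides interleave requests, responses, and events on one persistent connection without ambiguity about message boundaries.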
Module Structure
Directory Layout
Each module should be placed in a subdirectory within the modules/ directory:
modules/
└── my-module/
├── Cargo.toml
├── src/
│ └── main.rs
└── module.toml # Module manifest (required)
Module Manifest (module.toml)
Every module must include a module.toml manifest file:
# ============================================================================
# Module Manifest
# ============================================================================
# ----------------------------------------------------------------------------
# Core Identity (Required)
# ----------------------------------------------------------------------------
name = "my-module"
version = "1.0.0"
entry_point = "my-module"
# ----------------------------------------------------------------------------
# Metadata (Optional)
# ----------------------------------------------------------------------------
description = "Description of what this module does"
author = "Your Name <your.email@example.com>"
# ----------------------------------------------------------------------------
# Capabilities
# ----------------------------------------------------------------------------
# Permissions this module requires to function
capabilities = [
"read_blockchain", # Query blockchain data
"subscribe_events", # Receive node events
]
# ----------------------------------------------------------------------------
# Dependencies
# ----------------------------------------------------------------------------
# Required dependencies (module cannot load without these)
[dependencies]
"blvm-lightning" = ">=1.0.0"
# Optional dependencies (module can work without these)
[optional_dependencies]
"blvm-mesh" = ">=0.5.0"
# ----------------------------------------------------------------------------
# Configuration Schema (Optional)
# ----------------------------------------------------------------------------
[config_schema]
poll_interval = "Polling interval in seconds (default: 5)"
Required Fields:
- name: Module identifier (alphanumeric with dashes/underscores)
- version: Semantic version (e.g., “1.0.0”)
- entry_point: Binary name or path
Optional Fields:
- description: Human-readable description
- author: Module author
- capabilities: List of required permissions
- dependencies: Required (hard) dependencies - module cannot load without them
- optional_dependencies: Optional (soft) dependencies - module can work without them
Dependency Version Constraints:
- >=1.0.0 - Greater than or equal to version
- <=2.0.0 - Less than or equal to version
- =1.2.3 - Exact version match
- ^1.0.0 - Compatible version (>=1.0.0 and <2.0.0)
- ~1.2.0 - Patch updates only (>=1.2.0 and <1.3.0)
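The caret and tilde rules can be expressed directly. This is an illustrative matcher over (major, minor, patch) triples, not the SDK's resolver, and it ignores the special-casing real semver applies to 0.x versions:

```rust
// Derived PartialOrd on a tuple struct compares fields left to right, which is
// exactly semver precedence for plain (major, minor, patch) triples.
#[derive(PartialEq, PartialOrd)]
struct Version(u64, u64, u64);

// ^1.2.3 means >=1.2.3 and <2.0.0: same major, at least the requested version
fn caret_matches(req: &Version, v: &Version) -> bool {
    v >= req && v.0 == req.0
}

// ~1.2.3 means >=1.2.3 and <1.3.0: same major and minor
fn tilde_matches(req: &Version, v: &Version) -> bool {
    v >= req && v.0 == req.0 && v.1 == req.1
}

fn main() {
    assert!(caret_matches(&Version(1, 0, 0), &Version(1, 9, 4)));
    assert!(!caret_matches(&Version(1, 0, 0), &Version(2, 0, 0)));
    assert!(tilde_matches(&Version(1, 2, 0), &Version(1, 2, 7)));
    assert!(!tilde_matches(&Version(1, 2, 0), &Version(1, 3, 0)));
}
```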
Module Development
SDK declarative style (recommended)
The blvm-sdk crate provides attribute macros and a run_module! macro so you can define CLI, RPC, and event handling in one place without manual IPC or event loops. This is the recommended way to build new modules.
Dependency: Add blvm-sdk with the node feature. Use the prelude:
#![allow(unused)]
fn main() {
use blvm_sdk::module::prelude::*;
}
Module struct and config:
- #[blvm_module] / #[module] on the struct: #[module(name = "my-module", config = MyConfig)]. Optional migrations = ((1, up_initial), (2, up_add_cache)) generates ModuleMeta for run_module_main!.
- #[module_config(name = "my-module")] / #[config(name = "my-module")] on a config struct: generates CONFIG_SECTION_NAME (matches node [modules.my-module]), apply_env_overrides(), and load(path). Field-level #[config_env] or #[config_env("ENV_NAME")] uses env vars to override (default: MODULE_CONFIG_<FIELD>).
Single impl for CLI, RPC, and events:
- #[module(name = "my-module")] on the impl block generates cli_spec(), dispatch_cli(), rpc_method_names(), dispatch_rpc(), event_types(), and dispatch_event() from one set of methods:
  - Methods with ctx: &InvocationContext (and no #[rpc_method] / #[on_event]) become CLI subcommands. Use #[command] to mark them explicitly. Parameters can use #[arg(long)], #[arg(short = 'n')], #[arg(default = "value")] for CLI parsing.
  - #[rpc_method] / #[rpc_method(name = "method_name")] marks RPC endpoints.
  - #[on_event(NewBlock, NewTransaction)] marks event handlers; use with #[event_handlers] on the impl to generate event_types() and dispatch_event().
  - Payload injection: For event types listed in the blvm-sdk-macros event_payload_map (same field names as EventPayload in blvm-node), a handler can take payload fields by name plus an optional ctx: &InvocationContext instead of only &EventMessage. The match is on &event.payload, so use reference types (e.g. packet_data: &[u8], peer_addr: &str). Example: #[on_event(MeshPacketReceived)] with (packet_data: &[u8], peer_addr: &str, ctx: &InvocationContext). The _ctx / _context names are also recognized for the legacy (&event, ctx) style.
Migrations: #[migration(version = N)] on a function; use with db.run_migrations(&[(1, up_initial), ...]) or via #[module(migrations = (...))].
Entry point:
- ModuleBootstrap::from_env() reads MODULE_ID, SOCKET_PATH, DATA_DIR when the node spawns the module; for manual runs you can use ModuleBootstrap::init_module("my-module") or parse CLI.
- ModuleDb::open(&bootstrap.data_dir) opens the module DB; then run_module! { bootstrap, module_name, module, module_type, db } runs the main loop (IPC connect, CLI/RPC/event dispatch, no manual event loop).
- run_module_main!(MyModule) — when your struct has #[module(config = MyConfig, migrations = (...))] and implements ModuleMeta, this macro expands to a full main that does bootstrap, migrations, config load, and run_module!.
Example (skeleton):
use blvm_sdk::module::prelude::*;
use blvm_sdk::module::{ModuleBootstrap, ModuleDb};
#[derive(Clone, Default, serde::Serialize, serde::Deserialize)]
#[config(name = "my-module")]
pub struct MyConfig { #[config_env] pub setting: String }
#[derive(Clone)]
#[module(name = "my-module", config = MyConfig)]
pub struct MyModule { config: MyConfig }
#[module(name = "my-module")]
impl MyModule {
#[command]
fn status(&self, _ctx: &InvocationContext) -> Result<String, ModuleError> {
Ok("ok".into())
}
#[rpc_method(name = "my_method")]
fn my_method(&self, params: &serde_json::Value, _db: &std::sync::Arc<dyn blvm_node::storage::database::Database>) -> Result<serde_json::Value, ModuleError> {
Ok(serde_json::json!({}))
}
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let bootstrap = ModuleBootstrap::from_env().unwrap_or_else(|_| ModuleBootstrap::init_module("my-module"));
let db = ModuleDb::open(&bootstrap.data_dir)?;
let config = MyConfig::load(bootstrap.data_dir.join("config.toml")).unwrap_or_default();
let module = MyModule { config };
blvm_sdk::run_module! {
bootstrap: &bootstrap,
module_name: "my-module",
module: module,
module_type: MyModule,
db: db.as_db(),
}?;
Ok(())
}
Code: blvm-sdk-macros (attribute definitions), hello-module example, selective-sync (real module using this style).
Module CLI under the blvm binary
Modules that expose CLI handlers (methods with InvocationContext / #[command]) register a CLI spec with the node when they connect over IPC. The main blvm binary discovers registered specs and dispatches invocations to the running module process (node RPC: e.g. listing specs and forwarding runmodulecli-style calls). Users run blvm <command-group> <subcommand> (e.g. blvm sync-policy list for selective-sync). The module must be loaded; otherwise those top-level commands are unavailable. See blvm-node module docs for the full CLI flow.
Basic module structure (integration API)
If you need more control than the SDK declarative style (e.g. custom bootstrap or no macros), you can implement the lifecycle and connect via IPC directly. Two approaches:
Using ModuleIntegration
use blvm_node::module::integration::ModuleIntegration;
use blvm_node::module::EventType;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Parse command-line arguments
let args = Args::parse();
// Connect to node using ModuleIntegration
// Note: socket_path must be PathBuf (convert from String if needed)
let socket_path = std::path::PathBuf::from(&args.socket_path);
let mut integration = ModuleIntegration::connect(
socket_path,
args.module_id.unwrap_or_else(|| "my-module".to_string()),
"my-module".to_string(),
env!("CARGO_PKG_VERSION").to_string(),
).await?;
// Subscribe to events
let event_types = vec![EventType::NewBlock, EventType::NewTransaction];
integration.subscribe_events(event_types).await?;
// Get NodeAPI
let node_api = integration.node_api();
// Get event receiver (broadcast::Receiver returns Result, not Option)
let mut event_receiver = integration.event_receiver();
// Main module loop
loop {
match event_receiver.recv().await {
Ok(ModuleMessage::Event(event_msg)) => {
// Process event
match event_msg.payload {
// Handle specific event types
_ => {}
}
}
Ok(_) => {} // Other message types
Err(tokio::sync::broadcast::error::RecvError::Lagged(skipped)) => {
warn!("Event receiver lagged, skipped {} messages", skipped);
}
Err(tokio::sync::broadcast::error::RecvError::Closed) => {
break; // Channel closed, exit loop
}
}
}
Ok(())
}
Using ModuleIpcClient + NodeApiIpc (Legacy)
use blvm_node::module::ipc::client::ModuleIpcClient;
use blvm_node::module::api::node_api::NodeApiIpc;
use blvm_node::module::ipc::protocol::{RequestMessage, RequestPayload, MessageType};
use std::sync::Arc;
use std::path::PathBuf;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Parse command-line arguments
let args = Args::parse();
// Connect to node IPC socket (PathBuf required)
let socket_path = PathBuf::from(&args.socket_path);
let mut ipc_client = ModuleIpcClient::connect(&socket_path).await?;
// Perform handshake
let correlation_id = ipc_client.next_correlation_id();
let handshake_request = RequestMessage {
correlation_id,
request_type: MessageType::Handshake,
payload: RequestPayload::Handshake {
module_id: "my-module".to_string(),
module_name: "my-module".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
},
};
let response = ipc_client.request(handshake_request).await?;
// Verify handshake response...
// Create NodeAPI wrapper (requires Arc<Mutex<ModuleIpcClient>> and module_id)
let ipc_client_arc = Arc::new(tokio::sync::Mutex::new(ipc_client));
let node_api = Arc::new(NodeApiIpc::new(ipc_client_arc.clone(), "my-module".to_string()));
// Subscribe to events using NodeAPI
let event_types = vec![EventType::NewBlock, EventType::NewTransaction];
let mut event_receiver = node_api.subscribe_events(event_types).await?;
// Main module loop (mpsc::Receiver returns Option)
while let Some(event) = event_receiver.recv().await {
match event {
ModuleMessage::Event(event_msg) => {
// Process event
}
_ => {}
}
}
Ok(())
}
Recommendation: Prefer the SDK declarative style for new modules. Otherwise use ModuleIntegration for simplicity. The legacy IPC client approach is still supported but requires more boilerplate.
Module Lifecycle
Modules receive command-line arguments (--module-id, --socket-path, --data-dir) and configuration via environment variables (MODULE_CONFIG_*). Lifecycle: Initialization (connect IPC) → Start (subscribe events) → Running (process events/requests) → Stop (clean shutdown).
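The initialization step can be sketched as follows. A lookup closure stands in for std::env::var so the fallback logic is easy to see; MODULE_ID, SOCKET_PATH, and DATA_DIR are the variable names ModuleBootstrap::from_env is documented to read:

```rust
// Toy version of bootstrap-from-environment: all three values must be present,
// otherwise the module was started manually and should fall back to CLI parsing.
use std::path::PathBuf;

struct Bootstrap {
    module_id: String,
    socket_path: PathBuf,
    data_dir: PathBuf,
}

fn bootstrap_from<F>(var: F) -> Option<Bootstrap>
where
    F: Fn(&str) -> Option<String>, // e.g. |k| std::env::var(k).ok()
{
    Some(Bootstrap {
        module_id: var("MODULE_ID")?,
        socket_path: PathBuf::from(var("SOCKET_PATH")?),
        data_dir: PathBuf::from(var("DATA_DIR")?),
    })
}

fn main() {
    let fake = |k: &str| match k {
        "MODULE_ID" => Some("my-module".to_string()),
        "SOCKET_PATH" => Some("/tmp/blvm.sock".to_string()),
        "DATA_DIR" => Some("/tmp/my-module".to_string()),
        _ => None,
    };
    let b = bootstrap_from(fake).expect("all three variables present");
    assert_eq!(b.module_id, "my-module");
    // a missing variable means the node did not spawn us
    assert!(bootstrap_from(|_| None).is_none());
}
```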
Querying Node Data
Modules can query blockchain data through the Node API. Recommended approach: Use NodeAPI methods directly:
#![allow(unused)]
fn main() {
// Get NodeAPI from integration
let node_api = integration.node_api();
// Get current chain tip
let chain_tip = node_api.get_chain_tip().await?;
// Get a block by hash
let block = node_api.get_block(&block_hash).await?;
// Get block header
let header = node_api.get_block_header(&block_hash).await?;
// Get transaction
let tx = node_api.get_transaction(&tx_hash).await?;
// Get UTXO
let utxo = node_api.get_utxo(&outpoint).await?;
// Get chain info
let chain_info = node_api.get_chain_info().await?;
}
Alternative (Low-Level IPC): For advanced use cases, you can use the IPC protocol directly:
#![allow(unused)]
fn main() {
// Note: This requires request_type field in RequestMessage
let request = RequestMessage {
correlation_id: client.next_correlation_id(),
request_type: MessageType::GetChainTip,
payload: RequestPayload::GetChainTip,
};
let response = client.send_request(request).await?;
}
Recommendation: Use NodeAPI methods for simplicity and type safety. Low-level IPC is only needed for custom protocols.
Available NodeAPI Methods:
Blockchain API:
- get_block(hash) - Get block by hash
- get_block_header(hash) - Get block header by hash
- get_transaction(hash) - Get transaction by hash
- has_transaction(hash) - Check if transaction exists
- get_chain_tip() - Get current chain tip hash
- get_block_height() - Get current block height
- get_block_by_height(height) - Get block by height
- get_utxo(outpoint) - Get UTXO by outpoint (read-only)
- get_chain_info() - Get chain information (tip, height, difficulty, etc.)
Mempool API:
- get_mempool_transactions() - Get all transaction hashes in mempool
- get_mempool_transaction(hash) - Get transaction from mempool by hash
- get_mempool_size() - Get mempool size information
- check_transaction_in_mempool(hash) - Check if transaction is in mempool
- get_fee_estimate(target_blocks) - Get fee estimate for target confirmation blocks
Network API:
- get_network_stats() - Get network statistics
- get_network_peers() - Get list of connected peers
P2P serve policy & sync (read + targeted writes):
These calls affect what the node serves on the Bitcoin P2P wire (getdata responses) and sync introspection. They do not change consensus validation; withheld blocks/transactions are still validated if present locally. Use with care: broad denylists or bans affect relay and peer relationships.
- Block getdata denylist — additive merge, bounded snapshot, clear, or replace full-hash sets. Peers requesting a denied block hash get notfound instead of a full block message.
  - merge_block_serve_denylist(hashes)
  - get_block_serve_denylist_snapshot()
  - clear_block_serve_denylist()
  - replace_block_serve_denylist(hashes)
- Transaction getdata denylist — same pattern for full tx serves on getdata.
  - merge_tx_serve_denylist(hashes)
  - get_tx_serve_denylist_snapshot()
  - clear_tx_serve_denylist()
  - replace_tx_serve_denylist(hashes)
- Sync status — finer-grained view than get_chain_info() alone (coordinator phase / progress); see SyncStatus in the trait.
  - get_sync_status()
- Operational maintenance — when enabled, the node refuses all full-block answers on getdata (coarse knob for degraded operation).
  - set_block_serve_maintenance_mode(enabled)
- Peer ban — request a ban by peer address string; optional duration (None = permanent). High-impact; subject to node policy and review.
  - ban_peer(peer_addr, ban_duration_seconds)
Corresponding IPC MessageType / RequestPayload names match the NodeAPI methods (see Module IPC Protocol). Implementation: traits.rs, getdata_serve.rs.
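The denylist semantics above (additive merge vs. full replace vs. clear) can be pictured as plain set operations. This data-only sketch is not the node's implementation, which also bounds snapshot sizes:

```rust
// A serve denylist as a set of 32-byte hashes. merge is additive; replace
// swaps the whole set; clear empties it; a denied hash answers "notfound".
use std::collections::HashSet;

struct ServeDenylist(HashSet<[u8; 32]>);

impl ServeDenylist {
    fn merge(&mut self, hashes: &[[u8; 32]]) {
        self.0.extend(hashes.iter().copied());
    }
    fn replace(&mut self, hashes: &[[u8; 32]]) {
        self.0 = hashes.iter().copied().collect();
    }
    fn clear(&mut self) {
        self.0.clear();
    }
    fn is_denied(&self, h: &[u8; 32]) -> bool {
        self.0.contains(h) // true -> respond notfound instead of the full block/tx
    }
}

fn main() {
    let mut dl = ServeDenylist(HashSet::new());
    dl.merge(&[[1u8; 32]]);
    dl.merge(&[[2u8; 32]]); // additive: [1u8; 32] stays denied
    assert!(dl.is_denied(&[1u8; 32]) && dl.is_denied(&[2u8; 32]));
    dl.replace(&[[3u8; 32]]); // replace drops earlier entries
    assert!(!dl.is_denied(&[1u8; 32]) && dl.is_denied(&[3u8; 32]));
    dl.clear();
    assert!(!dl.is_denied(&[3u8; 32]));
}
```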
Storage API:
- storage_open_tree(name) - Open a storage tree (isolated per module)
- storage_insert(tree_id, key, value) - Insert a key-value pair
- storage_get(tree_id, key) - Get a value by key
- storage_remove(tree_id, key) - Remove a key-value pair
- storage_contains_key(tree_id, key) - Check if key exists
- storage_iter(tree_id) - Iterate over all key-value pairs
- storage_transaction(tree_id, operations) - Execute atomic batch of operations
Filesystem API:
- read_file(path) - Read a file from module’s data directory
- write_file(path, data) - Write data to a file
- delete_file(path) - Delete a file
- list_directory(path) - List directory contents
- create_directory(path) - Create a directory
- get_file_metadata(path) - Get file metadata (size, type, timestamps)
Module Communication API:
- call_module(target_module_id, method, params) - Call an API method on another module
- publish_event(event_type, payload) - Publish an event to other modules
- register_module_api(api) - Register module API for other modules to call
- unregister_module_api() - Unregister module API
- discover_modules() - Discover all available modules
- get_module_info(module_id) - Get information about a specific module
- is_module_available(module_id) - Check if a module is available
RPC API:
- register_rpc_endpoint(method, description) - Register a JSON-RPC endpoint
- unregister_rpc_endpoint(method) - Unregister an RPC endpoint
Timers API:
- register_timer(interval_seconds, callback) - Register a periodic timer
- cancel_timer(timer_id) - Cancel a registered timer
- schedule_task(delay_seconds, callback) - Schedule a one-time task
Metrics API:
- report_metric(metric) - Report a metric to the node
- get_module_metrics(module_id) - Get module metrics
- get_all_metrics() - Get aggregated metrics from all modules
Lightning & Payment API:
- get_lightning_node_url() - Get Lightning node connection info
- get_lightning_info() - Get Lightning node information
- get_payment_state(payment_id) - Get payment state by payment ID
Network Integration API:
- send_mesh_packet_to_peer(peer_addr, packet_data) — send a mesh packet to a peer (supported path for IPC modules).
- send_mesh_packet_to_module(module_id, packet_data, peer_addr) — may be unimplemented for out-of-process modules (NodeApiIpc); use peer-targeted sends where available.
For complete API reference, see NodeAPI trait.
Subscribing to events
Modules subscribe with SubscribeEvents and receive EventType / EventPayload streams (chain, mempool, network, payments, mining, governance, maintenance, etc.). Events are notifications; changing serve policy or sync-adjacent behavior uses the NodeAPI methods above (denylists, maintenance mode, bans), not events alone.
Modules can subscribe to real-time node events. The approach depends on which integration method you’re using:
Using ModuleIntegration
#![allow(unused)]
fn main() {
// Subscribe to events
let event_types = vec![EventType::NewBlock, EventType::NewTransaction];
integration.subscribe_events(event_types).await?;
// Get event receiver
let mut event_receiver = integration.event_receiver();
// Receive events in main loop
while let Some(event) = event_receiver.recv().await {
match event {
ModuleMessage::Event(event_msg) => {
// Handle event
}
_ => {}
}
}
}
Using ModuleClient
#![allow(unused)]
fn main() {
// Subscribe to events
let event_types = vec![EventType::NewBlock, EventType::NewTransaction];
client.subscribe_events(event_types).await?;
// Get event receiver
let mut event_receiver = client.event_receiver();
// Receive events in main loop
while let Some(event) = event_receiver.recv().await {
match event {
ModuleMessage::Event(event_msg) => {
// Handle event
}
_ => {}
}
}
}
Available Event Types:
Core Blockchain Events:
- NewBlock - New block connected to chain
- NewTransaction - New transaction in mempool
- BlockDisconnected - Block disconnected (chain reorg)
- ChainReorg - Chain reorganization occurred
Payment Events:
PaymentRequestCreated, PaymentSettled, PaymentFailed, PaymentVerified, PaymentRouteFound, PaymentRouteFailed, ChannelOpened, ChannelClosed
Mining Events:
BlockMined, BlockTemplateUpdated, MiningDifficultyChanged, MiningJobCreated, ShareSubmitted, MergeMiningReward, MiningPoolConnected, MiningPoolDisconnected
Network Events:
PeerConnected, PeerDisconnected, PeerBanned, MessageReceived, MessageSent, BroadcastStarted, BroadcastCompleted, RouteDiscovered, RouteFailed
Module Lifecycle Events:
ModuleLoaded, ModuleUnloaded, ModuleCrashed, ModuleDiscovered, ModuleInstalled, ModuleUpdated, ModuleRemoved
Configuration & Lifecycle Events:
ConfigLoaded, NodeStartupCompleted, NodeShutdown, NodeShutdownCompleted
Maintenance & Resource Events:
DataMaintenance, MaintenanceStarted, MaintenanceCompleted, HealthCheck, DiskSpaceLow, ResourceLimitWarning
Governance Events:
GovernanceProposalCreated, GovernanceProposalVoted, GovernanceProposalMerged, WebhookSent, WebhookFailed, GovernanceForkDetected
Consensus Events:
BlockValidationStarted, BlockValidationCompleted, ScriptVerificationStarted, ScriptVerificationCompleted, DifficultyAdjusted, SoftForkActivated
Mempool Events:
MempoolTransactionAdded, MempoolTransactionRemoved, FeeRateChanged
And many more. For complete list, see EventType enum and Event System.
Configuration
Module system is configured in node config (see Node Configuration):
[modules]
enabled = true
modules_dir = "modules"
data_dir = "data/modules"
enabled_modules = [] # Empty = auto-discover all
[modules.module_configs.my-module]
setting1 = "value1"
Modules can have their own config.toml files, passed via environment variables.
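The MODULE_CONFIG_* override rule can be sketched as a simple precedence check. A map stands in for the process environment here; the SDK's apply_env_overrides generates the equivalent per-field logic:

```rust
// An environment value, when present, wins over the config-file value.
use std::collections::HashMap;

fn effective(field: &str, file_value: &str, env: &HashMap<String, String>) -> String {
    // default env-var name: MODULE_CONFIG_<FIELD> (uppercased field name)
    let key = format!("MODULE_CONFIG_{}", field.to_uppercase());
    env.get(&key).cloned().unwrap_or_else(|| file_value.to_string())
}

fn main() {
    let mut env = HashMap::new();
    env.insert("MODULE_CONFIG_SETTING1".to_string(), "override".to_string());
    assert_eq!(effective("setting1", "value1", &env), "override");
    assert_eq!(effective("setting2", "value2", &env), "value2"); // no override set
}
```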
Security Model
Permissions
Modules operate with whitelist-only access control. Each module declares required capabilities in its manifest. Capabilities use snake_case in module.toml and map to Permission enum variants.
Core Permissions:
- read_blockchain - Access to blockchain data
- read_utxo - Query UTXO set (read-only)
- read_chain_state - Query chain state (height, tip)
- subscribe_events - Subscribe to node events
- send_transactions - Submit transactions to mempool (future: may be restricted)
Additional Permissions:
- read_mempool - Read mempool data
- read_network - Read network data (peers, stats)
- network_access - Send network packets
- read_lightning - Read Lightning network data
- read_payment - Read payment data
- read_storage, write_storage, manage_storage - Storage access
- read_filesystem, write_filesystem, manage_filesystem - Filesystem access
- register_rpc_endpoint - Register RPC endpoints
- manage_timers - Manage timers and scheduled tasks
- report_metrics, read_metrics - Metrics access
- discover_modules - Discover other modules
- publish_events - Publish events to other modules
- call_module - Call other modules’ APIs
- register_module_api - Register module API for other modules to call
For complete list, see Permission enum.
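Whitelist-only access control reduces to set membership over the declared capabilities. A minimal sketch, not the node's actual enforcement code:

```rust
// A capability absent from the manifest is denied by default.
use std::collections::HashSet;

fn allowed(declared: &HashSet<&str>, required: &str) -> bool {
    declared.contains(required) // whitelist-only: absent means denied
}

fn main() {
    let declared: HashSet<&str> = ["read_blockchain", "subscribe_events"].into();
    assert!(allowed(&declared, "read_blockchain"));
    assert!(!allowed(&declared, "write_storage")); // not declared -> denied
}
```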
Sandboxing
Modules are sandboxed to ensure security:
- Process Isolation: Separate process, isolated memory
- File System: Access limited to module data directory
- Network: No network access (modules can only communicate via IPC)
- Resource Limits: CPU, memory, and file descriptor limits (configurable via node module_resource_limits; on Linux applied via prlimit after spawn)
Request Validation
All module API requests are validated:
- Permission checks (module has required permission)
- Consensus protection (no consensus-modifying operations)
- Resource limits (enforced per module); rate limiting (planned)
API Reference
NodeAPI Methods: See Querying Node Data section above for complete list of available methods.
Event Types: See Subscribing to Events section above for complete list of available event types.
Permissions: See Permissions section above for complete list of available permissions.
For detailed API reference, see blvm-node/src/module/ (traits, IPC protocol, Node API, security).
See Also
- SDK Overview - SDK introduction and capabilities
- SDK API Reference - Complete SDK API documentation
- SDK Examples - Module development examples
- Module System Architecture - Module system design
- Module IPC Protocol - IPC communication details
- Modules Overview - Available modules
- Node Configuration - Configuring modules
SDK API Reference
Complete API documentation for the BLVM Developer SDK, including governance primitives and composition framework.
Overview
The BLVM SDK provides two main API categories:
- Governance Primitives: Cryptographic operations for governance (keys, signatures, multisig)
- Composition Framework: Module registry and node composition APIs
For more API overview and cross-references, see API Index in this book.
Governance Primitives
Core Types
GovernanceKeypair
Cryptographic keypair for signing governance messages.
#![allow(unused)]
fn main() {
pub struct GovernanceKeypair {
// Private fields
}
}
Methods:
- generate() -> GovernanceResult<Self> - Generate a new random keypair
- from_secret_key(secret_bytes: &[u8]) -> GovernanceResult<Self> - Create from secret key bytes
- public_key(&self) -> PublicKey - Get the public key
- secret_key_bytes(&self) -> [u8; 32] - Get the secret key bytes (32 bytes)
- public_key_bytes(&self) -> [u8; 33] - Get the compressed public key bytes (33 bytes)
Example:
#![allow(unused)]
fn main() {
use blvm_sdk::GovernanceKeypair;
let keypair = GovernanceKeypair::generate()?;
let pubkey = keypair.public_key();
}
PublicKey
Public key for governance operations (Bitcoin-compatible secp256k1).
#![allow(unused)]
fn main() {
pub struct PublicKey {
// Private fields
}
}
Methods:
- from_bytes(bytes: &[u8]) -> GovernanceResult<Self> - Create from bytes
- to_bytes(&self) -> [u8; 33] - Get compressed public key bytes
- to_compressed_bytes(&self) -> [u8; 33] - Get compressed format
- to_uncompressed_bytes(&self) -> [u8; 65] - Get uncompressed format
Signature
Cryptographic signature for governance messages.
#![allow(unused)]
fn main() {
pub struct Signature {
// Private fields
}
}
Methods:
- from_bytes(bytes: &[u8]) -> GovernanceResult<Self> - Create from bytes
- to_bytes(&self) -> [u8; 64] - Get signature bytes (64 bytes)
- to_der_bytes(&self) -> Vec<u8> - Get signature in DER format
GovernanceMessage
Message types that can be signed for governance decisions.
#![allow(unused)]
fn main() {
pub enum GovernanceMessage {
Release {
version: String,
commit_hash: String,
},
ModuleApproval {
module_name: String,
version: String,
},
BudgetDecision {
amount: u64,
purpose: String,
},
}
}
Methods:
- to_signing_bytes(&self) -> Vec<u8> - Convert to bytes for signing
- description(&self) -> String - Get human-readable description
Multisig
Multisig configuration for threshold signatures.
#![allow(unused)]
fn main() {
pub struct Multisig {
// Private fields
}
}
Methods:
- new(threshold: usize, total: usize, public_keys: Vec<PublicKey>) -> GovernanceResult<Self> - Create new multisig (e.g., 3-of-5)
- verify(&self, message: &[u8], signatures: &[Signature]) -> GovernanceResult<bool> - Verify signatures meet threshold
- collect_valid_signatures(&self, message: &[u8], signatures: &[Signature]) -> GovernanceResult<Vec<usize>> - Get indices of valid signatures
- threshold(&self) -> usize - Get threshold (e.g., 3)
- total(&self) -> usize - Get total number of keys (e.g., 5)
- public_keys(&self) -> &[PublicKey] - Get all public keys
- is_valid_signature(&self, signature: &Signature, message: &[u8]) -> GovernanceResult<Option<usize>> - Check if signature is valid and return key index
Example:
#![allow(unused)]
fn main() {
use blvm_sdk::{Multisig, PublicKey};
let multisig = Multisig::new(3, 5, public_keys)?;
let valid = multisig.verify(&message_bytes, &signatures)?;
}
Functions
sign_message
Sign a message with a secret key.
#![allow(unused)]
fn main() {
pub fn sign_message(secret_key: &SecretKey, message: &[u8]) -> GovernanceResult<Signature>
}
Parameters:
- secret_key - The secret key to sign with
- message - The message bytes to sign
Returns: GovernanceResult<Signature> - The signature or an error
verify_signature
Verify a signature against a message and public key.
#![allow(unused)]
fn main() {
pub fn verify_signature(
signature: &Signature,
message: &[u8],
public_key: &PublicKey,
) -> GovernanceResult<bool>
}
Parameters:
- signature - The signature to verify
- message - The message that was signed
- public_key - The public key to verify against
Returns: GovernanceResult<bool> - true if signature is valid
Error Types
GovernanceError
Errors that can occur during governance operations.
#![allow(unused)]
fn main() {
pub enum GovernanceError {
InvalidKey(String),
SignatureVerification(String),
InvalidMultisig(String),
MessageFormat(String),
Cryptographic(String),
Serialization(String),
InvalidThreshold { threshold: usize, total: usize },
InsufficientSignatures { got: usize, need: usize },
InvalidSignatureFormat(String),
}
}
GovernanceResult<T>
Result type alias for governance operations.
#![allow(unused)]
fn main() {
pub type GovernanceResult<T> = Result<T, GovernanceError>;
}
Composition Framework
Module Registry
ModuleRegistry
Manages module discovery, installation, and dependency resolution.
#![allow(unused)]
fn main() {
pub struct ModuleRegistry {
// Private fields
}
}
Methods:
- new<P: AsRef<Path>>(modules_dir: P) -> Self - Create registry for modules directory
- discover_modules(&mut self) -> Result<Vec<ModuleInfo>> - Discover all modules in directory
- get_module(&self, name: &str, version: Option<&str>) -> Result<ModuleInfo> - Get module by name/version
- install_module(&mut self, source: ModuleSource) -> Result<ModuleInfo> - Install module from source
- update_module(&mut self, name: &str, new_version: &str) -> Result<ModuleInfo> - Update module to new version
- remove_module(&mut self, name: &str) -> Result<()> - Remove module
- list_modules(&self) -> Vec<ModuleInfo> - List all installed modules
- resolve_dependencies(&self, module_names: &[String]) -> Result<Vec<ModuleInfo>> - Resolve module dependencies
Example:
#![allow(unused)]
fn main() {
use blvm_sdk::ModuleRegistry;
let mut registry = ModuleRegistry::new("modules");
let modules = registry.discover_modules()?;
let module = registry.get_module("lightning-module", Some("1.0.0"))?;
}
ModuleInfo
Information about a discovered module.
#![allow(unused)]
fn main() {
pub struct ModuleInfo {
pub name: String,
pub version: String,
pub description: String,
pub author: String,
pub capabilities: Vec<String>,
pub dependencies: HashMap<String, String>,
pub entry_point: String,
pub source: ModuleSource,
pub status: ModuleStatus,
pub health: ModuleHealth,
}
}
Node Composition
NodeComposer
Composes nodes from module specifications.
#![allow(unused)]
fn main() {
pub struct NodeComposer {
// Private fields
}
}
Methods:
- `new<P: AsRef<Path>>(modules_dir: P) -> Self` - Create composer with module registry
- `validate_composition(&self, spec: &NodeSpec) -> Result<ValidationResult>` - Validate node composition
- `generate_config(&self) -> String` - Generate node configuration from composition
- `registry(&self) -> &ModuleRegistry` - Get module registry
- `registry_mut(&mut self) -> &mut ModuleRegistry` - Get mutable registry
NodeSpec
Specification for a composed node.
#![allow(unused)]
fn main() {
pub struct NodeSpec {
pub network_type: NetworkType,
pub modules: Vec<ModuleSpec>,
pub metadata: NodeMetadata,
}
}
ModuleSpec
Specification for a module in a composed node.
#![allow(unused)]
fn main() {
pub struct ModuleSpec {
pub name: String,
pub version: Option<String>,
pub config: HashMap<String, String>,
pub enabled: bool,
}
}
Module Lifecycle
ModuleLifecycle
Manages module lifecycle (start, stop, restart, health checks).
#![allow(unused)]
fn main() {
pub struct ModuleLifecycle {
// Private fields
}
}
Methods:
- `new(registry: ModuleRegistry) -> Self` - Create lifecycle manager
- `with_module_manager(mut self, manager: Arc<Mutex<ModuleManager>>) -> Self` - Attach module manager
- `start_module(&mut self, name: &str) -> Result<()>` - Start a module
- `stop_module(&mut self, name: &str) -> Result<()>` - Stop a module
- `restart_module(&mut self, name: &str) -> Result<()>` - Restart a module
- `module_status(&self, name: &str) -> Result<ModuleStatus>` - Get module status
- `module_health(&self, name: &str) -> Result<ModuleHealth>` - Get module health
- `registry(&self) -> &ModuleRegistry` - Get module registry
ModuleStatus
Module runtime status.
#![allow(unused)]
fn main() {
pub enum ModuleStatus {
Stopped,
Starting,
Running,
Stopping,
Error(String),
}
}
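One way to picture the lifecycle is as a small state machine over these variants. The `can_transition` helper below is an illustrative assumption, not the SDK's logic; it encodes the obvious Stopped → Starting → Running → Stopping → Stopped cycle, with `Error` reachable from anywhere and recoverable only via a restart.

```rust
// Self-contained sketch (assumption, not SDK code): one plausible way a
// lifecycle manager could gate transitions between the documented states.
#[derive(Debug, Clone, PartialEq)]
enum ModuleStatus {
    Stopped,
    Starting,
    Running,
    Stopping,
    Error(String),
}

/// Returns true if moving from `from` to `to` is a legal lifecycle step.
fn can_transition(from: &ModuleStatus, to: &ModuleStatus) -> bool {
    use ModuleStatus::*;
    matches!(
        (from, to),
        (Stopped, Starting)
            | (Starting, Running)
            | (Running, Stopping)
            | (Stopping, Stopped)
            // Any state may fall into Error; Error can only be restarted.
            | (_, Error(_))
            | (Error(_), Starting)
    )
}

fn main() {
    assert!(can_transition(&ModuleStatus::Stopped, &ModuleStatus::Starting));
    assert!(!can_transition(&ModuleStatus::Stopped, &ModuleStatus::Running));
    assert!(can_transition(&ModuleStatus::Error("crash".into()), &ModuleStatus::Starting));
}
```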
ModuleHealth
Module health information.
#![allow(unused)]
fn main() {
pub struct ModuleHealth {
pub is_healthy: bool,
pub last_heartbeat: Option<SystemTime>,
pub error_count: u64,
pub last_error: Option<String>,
}
}
CLI Tools
blvm-keygen
Generate governance keypairs.
blvm-keygen [OPTIONS]
Options:
-o, --output <OUTPUT> Output file [default: governance.key]
-f, --format <FORMAT> Output format (text, json) [default: text]
--seed <SEED> Generate deterministic keypair from seed
--show-private Show private key in output
blvm-sign
Sign governance messages.
blvm-sign [OPTIONS] <COMMAND>
Options:
-o, --output <OUTPUT> Output file [default: signature.txt]
-f, --format <FORMAT> Output format (text, json) [default: text]
-k, --key <KEY> Private key file
Commands:
release Sign a release message
module Sign a module approval message
budget Sign a budget decision message
blvm-verify
Verify governance signatures and multisig thresholds.
blvm-verify [OPTIONS] <COMMAND>
Options:
-f, --format <FORMAT> Output format (text, json) [default: text]
-s, --signatures <SIGS> Signature files (comma-separated)
--threshold <THRESHOLD> Threshold (e.g., "3-of-5")
--pubkeys <PUBKEYS> Public key files (comma-separated)
Commands:
release Verify a release message
module Verify a module approval message
budget Verify a budget decision message
Usage Examples
Basic Governance Operations
#![allow(unused)]
fn main() {
use blvm_sdk::{GovernanceKeypair, GovernanceMessage, sign_message, verify_signature};
// Generate keypair
let keypair = GovernanceKeypair::generate()?;
// Create message
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
// Sign message
let message_bytes = message.to_signing_bytes();
let signature = sign_message(&keypair.secret_key, &message_bytes)?;
// Verify signature
let verified = verify_signature(&signature, &message_bytes, &keypair.public_key())?;
assert!(verified);
}
Multisig Operations
#![allow(unused)]
fn main() {
use blvm_sdk::{GovernanceKeypair, GovernanceMessage, Multisig, sign_message};
// Generate 5 keypairs for 3-of-5 multisig
let keypairs: Vec<_> = (0..5)
.map(|_| GovernanceKeypair::generate().unwrap())
.collect();
let public_keys: Vec<_> = keypairs.iter()
.map(|kp| kp.public_key())
.collect();
// Create multisig
let multisig = Multisig::new(3, 5, public_keys)?;
// Create message
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
let message_bytes = message.to_signing_bytes();
// Sign with 3 keys
let signatures: Vec<_> = keypairs[0..3]
.iter()
.map(|kp| sign_message(&kp.secret_key_bytes(), &message_bytes).unwrap())
.collect();
// Verify multisig threshold
let verified = multisig.verify(&message_bytes, &signatures)?;
assert!(verified);
}
Module Registry Usage
#![allow(unused)]
fn main() {
use blvm_sdk::ModuleRegistry;
// Create registry
let mut registry = ModuleRegistry::new("modules");
// Discover modules
let modules = registry.discover_modules()?;
println!("Found {} modules", modules.len());
// Get specific module
let module = registry.get_module("lightning-module", Some("1.0.0"))?;
println!("Module: {} v{}", module.name, module.version);
// Resolve dependencies
let deps = registry.resolve_dependencies(&["lightning-module".to_string()])?;
}
See Also
- Module Development - Building modules that use these APIs
- SDK Examples - More usage examples
- API Index - Cross-reference to all BLVM APIs in this book
SDK Examples
The SDK provides examples for common governance operations and module development.
Complete Governance Workflow
Step 1: Generate Keypairs
Using CLI:
# Generate a keypair
blvm-keygen --output alice.key --format pem
# Generate multiple keypairs for a team
blvm-keygen --output bob.key --format pem
blvm-keygen --output charlie.key --format pem
Using Rust:
#![allow(unused)]
fn main() {
use blvm_sdk::governance::GovernanceKeypair;
// Generate a keypair
let keypair = GovernanceKeypair::generate()?;
// Save to file
keypair.save_to_file("alice.key", blvm_sdk::governance::KeyFormat::Pem)?;
// Get public key
let public_key = keypair.public_key();
println!("Public key: {}", public_key);
}
Step 2: Create a Release Message
Using CLI:
blvm-sign release \
--version v1.0.0 \
--commit abc123def456 \
--key alice.key \
--output alice-signature.txt
Using Rust:
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{GovernanceKeypair, GovernanceMessage};
// Load keypair
let keypair = GovernanceKeypair::load_from_file("alice.key")?;
// Create release message
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123def456".to_string(),
};
// Sign the message
let signature = keypair.sign(&message.to_signing_bytes())?;
// Save signature
std::fs::write("alice-signature.txt", signature.to_string())?;
}
Step 3: Collect Multiple Signatures
# Each maintainer signs independently
blvm-sign release --version v1.0.0 --commit abc123 --key alice.key --output sig1.txt
blvm-sign release --version v1.0.0 --commit abc123 --key bob.key --output sig2.txt
blvm-sign release --version v1.0.0 --commit abc123 --key charlie.key --output sig3.txt
Step 4: Verify Multisig Threshold
Using CLI:
blvm-verify release \
--version v1.0.0 \
--commit abc123 \
--signatures sig1.txt,sig2.txt,sig3.txt \
--threshold 3-of-5 \
--pubkeys maintainers.json
Using Rust:
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{Multisig, GovernanceMessage, PublicKey};
// Load public keys
let pubkeys = vec![
PublicKey::from_file("alice.pub")?,
PublicKey::from_file("bob.pub")?,
PublicKey::from_file("charlie.pub")?,
PublicKey::from_file("dave.pub")?,
PublicKey::from_file("eve.pub")?,
];
// Create multisig (3 of 5 threshold)
let multisig = Multisig::new(3, 5, pubkeys)?;
// Load signatures
let signatures = vec![
load_signature("sig1.txt")?,
load_signature("sig2.txt")?,
load_signature("sig3.txt")?,
];
// Verify
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
let valid = multisig.verify(&message.to_signing_bytes(), &signatures)?;
if valid {
println!("✓ Multisig verification passed (3/5 signatures)");
} else {
println!("✗ Multisig verification failed");
}
}
Nested Multisig Example
For team-based governance with hierarchical structure:
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{Multisig, NestedMultisig};
// Team 1: 2 of 3 members
let team1_keys = vec![alice_key, bob_key, charlie_key];
let team1 = Multisig::new(2, 3, team1_keys)?;
// Team 2: 2 of 3 members
let team2_keys = vec![dave_key, eve_key, frank_key];
let team2 = Multisig::new(2, 3, team2_keys)?;
// Organization: 2 of 2 teams
let nested = NestedMultisig::new(2, 2, vec![team1, team2])?;
// Verify with signatures from both teams
let valid = nested.verify(&message.to_signing_bytes(), &all_signatures)?;
}
Binary Signing Example
Sign and verify binary files for release verification:
# Sign a binary
blvm-sign-binary \
--file target/release/blvm \
--key maintainer.key \
--output blvm.sig
# Verify binary signature
blvm-verify-binary \
--file target/release/blvm \
--signature blvm.sig \
--pubkey maintainer.pub
For more examples, see the blvm-sdk examples directory.
Node module example (hello-module)
The hello-module example shows the full declarative pattern: `#[config]`, `#[module]` on a struct (with migrations), `#[module]` on an impl with `#[command]` and `#[rpc_method]`, `ModuleBootstrap`, `ModuleDb`, and `run_module!`. After building and loading, `blvm hello greet` is served by the running module (the top-level CLI group is derived from the module name, e.g. `HelloModule` → `hello`).
- Source: blvm-sdk/examples/hello-module
- Production-style reference: blvm-selective-sync (doc)
See Also
- SDK Overview - SDK introduction and architecture
- SDK Getting Started - Quick start guide
- API Reference - Complete SDK API documentation
- Module Development - Building modules with the SDK
- Module System Architecture - Module system design
- Modules Overview - Available modules
Modules Overview
Introduction
BLVM node uses a modular architecture where optional features run as separate, process-isolated modules. This extends node functionality without affecting consensus or base node stability.
Available Modules
The following modules are available for blvm-node:
Core Modules
- Lightning Network Module - Lightning Network payment processing with multiple provider support (LNBits, LDK, Stub), invoice verification, and payment state tracking
- Commons Mesh Module - Payment-gated mesh networking with routing fees, traffic classification, and anti-monopoly protection. Designed to support specialized modules (onion routing, mining pool coordination, messaging) via ModuleAPI
- Stratum V2 Module - Stratum V2 mining protocol support with complete network integration and mining pool management
- Datum Module - DATUM Gateway mining protocol module for Ocean pool integration (works with Stratum V2)
- Mining OS Module - Operating system-level mining optimizations and hardware management
- Selective Sync Module - Configurable IBD sync policy (e.g. skip flagged transaction content during IBD); provides the `blvm sync-policy …` CLI when the module is loaded
Module System Architecture
All modules run in separate processes with IPC communication (see Module System Architecture for details), providing:
- Process Isolation: Each module runs in isolated memory space
- Crash Containment: Module failures don’t affect the base node
- Consensus Isolation: Modules cannot modify consensus rules or UTXO set
- Security: Modules communicate only through well-defined APIs
For detailed information about the module system architecture, see Module System.
Installing Modules
Modules can be installed in several ways:
Via Cargo
cargo install blvm-lightning
cargo install blvm-mesh
cargo install blvm-stratum-v2
cargo install blvm-datum
cargo install blvm-miningos
cargo install blvm-selective-sync
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-lightning
cargo blvm-module install blvm-mesh
cargo blvm-module install blvm-stratum-v2
cargo blvm-module install blvm-datum
cargo blvm-module install blvm-miningos
cargo blvm-module install blvm-selective-sync
Manual Installation
1. Build the module: `cargo build --release`
2. Copy the binary to `modules/<module-name>/target/release/`
3. Create a `module.toml` manifest in the module directory
4. Restart the node or use runtime module loading
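As a starting point, a minimal `module.toml` might look like the following. The field names mirror the manifests shown in the individual module chapters (Lightning, Mesh); the values here are placeholders, not a real module.

```toml
# Illustrative manifest sketch — replace values with your module's details.
name = "my-module"
version = "0.1.0"
description = "Example module"
author = "Your Name"
entry_point = "my-module"
capabilities = [
    "read_blockchain",
    "subscribe_events",
]
```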
Module Configuration
Each module requires a config.toml file in its module directory where applicable. See individual module documentation (Lightning, Mesh, Stratum V2, Datum, Mining OS, Selective Sync). For blvm-mesh submodules, see the Mesh Module documentation.
Module Lifecycle
Modules can be:
- Loaded at node startup (if enabled in configuration)
- Loaded at runtime via RPC or module manager API
- Unloaded at runtime without affecting the base node
- Reloaded (hot reload) for configuration updates
See Also
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
- SDK Overview - SDK introduction and capabilities
- SDK API Reference - Complete SDK API documentation
- SDK Examples - Module development examples
- Node Configuration - Node-level module configuration
Lightning Network Module
Overview
The Lightning Network module (blvm-lightning) handles Lightning Network payment processing for blvm-node: invoice verification, payment routing, channel management, and payment state tracking. For information on developing custom modules, see Module Development.
Features
- Invoice Verification: Validates Lightning Network invoices (BOLT11) using multiple provider backends
- Payment Processing: Processes Lightning payments via LNBits API or LDK
- Provider Abstraction: Supports multiple Lightning providers (LNBits, LDK, Stub) with unified interface
- Payment State Tracking: Monitors payment lifecycle from request to settlement
Installation
Via Cargo
cargo install blvm-lightning
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-lightning
Manual Installation
1. Clone the repository:
   git clone https://github.com/BTCDecoded/blvm-lightning.git
   cd blvm-lightning
2. Build the module:
   cargo build --release
3. Install to the node modules directory:
   mkdir -p /path/to/node/modules/blvm-lightning/target/release
   cp target/release/blvm-lightning /path/to/node/modules/blvm-lightning/target/release/
Configuration
The module supports multiple Lightning providers. Create a config.toml file in the module directory:
LNBits Provider (Recommended)
[lightning]
provider = "lnbits"
[lightning.lnbits]
api_url = "https://lnbits.example.com"
api_key = "your_lnbits_api_key"
wallet_id = "optional_wallet_id" # Optional
LDK Provider (Rust-native)
[lightning]
provider = "ldk"
[lightning.ldk]
data_dir = "data/ldk"
network = "testnet" # or "mainnet" or "regtest"
node_private_key = "hex_encoded_private_key" # Optional, will generate if not provided
Stub Provider (Testing)
[lightning]
provider = "stub"
Configuration Options
- `provider` (required): Lightning provider to use (`"lnbits"`, `"ldk"`, or `"stub"`)
- LNBits: `api_url`, `api_key`, `wallet_id` (optional)
- LDK: `data_dir`, `network`, `node_private_key` (optional)
- Stub: No additional configuration needed
Provider Comparison
| Feature | LNBits | LDK | Stub |
|---|---|---|---|
| Status | Operational (REST) | Operational (Rust/LDK) | Stub / dev |
| API Type | REST (HTTP) | Rust-native (lightning-invoice) | None |
| Real Lightning | ✅ Yes | ✅ Yes | ❌ No |
| External Service | ✅ Yes | ❌ No | ❌ No |
| Invoice Creation | ✅ Via API | ✅ Native | ✅ Mock |
| Payment Verification | ✅ Via API | ✅ Native | ✅ Mock |
| Best For | Payment processing | Full control, Rust-native | Testing |
Switching Providers: All providers implement the same interface, so switching providers is just a configuration change. No code changes required.
Module Manifest
The module includes a module.toml manifest (see Module Development):
name = "blvm-lightning"
version = "0.1.0"
description = "Lightning Network payment processor"
author = "Bitcoin Commons Team"
entry_point = "blvm-lightning"
capabilities = [
"read_blockchain",
"subscribe_events",
]
Events
Subscribed Events
The module subscribes to the following node events:
- `PaymentRequestCreated` - New payment request created
- `PaymentSettled` - Payment confirmed on-chain
- `PaymentFailed` - Payment failed
Published Events
The module publishes the following events:
- `PaymentVerified` - Lightning payment verified
- `PaymentRouteFound` - Payment route discovered
- `PaymentRouteFailed` - Payment routing failed
- `ChannelOpened` - Lightning channel opened
- `ChannelClosed` - Lightning channel closed
Usage
Once installed and configured, the module automatically:
- Subscribes to payment-related events from the node (`PaymentRequestCreated`, `PaymentSettled`, `PaymentFailed`)
- Verifies Lightning invoices (BOLT11) when payment requests are created
- Processes payments using the configured provider (LNBits, LDK, or Stub)
- Publishes payment verification and status events (`PaymentVerified`, `PaymentRouteFound`, `PaymentRouteFailed`)
- Monitors the payment lifecycle and publishes status events
The module automatically selects the provider based on configuration. All providers implement the same interface, so switching providers requires only a configuration change.
Provider Selection
The module uses the LightningProcessor to handle payment processing. The processor:
- Reads provider configuration from `lightning.provider`
- Creates the appropriate provider instance (LNBits, LDK, or Stub)
- Routes all payment operations through the provider interface
- Stores provider configuration in module storage for persistence
Batch Payment Verification
The module supports batch payment verification for improved performance when processing multiple payments:
#![allow(unused)]
fn main() {
use blvm_lightning::processor::LightningProcessor;
// Verify multiple payments in parallel
let payments = vec![
("invoice1", "payment_id_1"),
("invoice2", "payment_id_2"),
("invoice3", "payment_id_3"),
];
let results = processor.verify_payments_batch(&payments).await?;
// Returns Vec<bool> with verification results in same order as inputs
}
Batch verification processes all payments concurrently, significantly improving throughput for high-volume payment processing scenarios.
API Integration
The module integrates with the node via ModuleClient and NodeApiIpc:
- Read-only blockchain access: Queries blockchain data for payment verification
- Event subscription: Receives real-time events from the node
- Event publication: Publishes Lightning-specific events
- Module storage: Stores provider configuration and channel statistics in the `lightning_config` module storage tree
Storage Usage
The module uses module storage to persist configuration and statistics:
- `provider_type`: Current provider type (lnbits, ldk, stub)
- `channel_count`: Number of active Lightning channels
- `total_capacity_sats`: Total channel capacity in satoshis
Troubleshooting
Module Not Loading
- Verify the module binary exists at the correct path
- Check the `module.toml` manifest is present and valid
- Verify the module has the required capabilities
- Check node logs for module loading errors
Payment Verification Failing
- LNBits Provider: Verify API URL and API key are correct, check LNBits service is accessible
- LDK Provider: Verify data directory permissions, check network configuration (mainnet/testnet/regtest)
- General: Verify the module has the `read_blockchain` capability; check node logs for detailed error messages
Provider-Specific Issues
- LNBits: Check API endpoint is accessible, verify wallet_id if specified, check API rate limits
- LDK: Verify data directory exists and is writable, check network matches node configuration
- Stub: No real verification - only for testing
Repository
- GitHub: blvm-lightning
- Version: 0.1.0
See Also
- Module System Overview - Overview of all available modules
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
- SDK Overview - SDK introduction and capabilities
- SDK API Reference - Complete SDK API documentation
- SDK Examples - Module development examples
Commons Mesh Module
Overview
The Commons Mesh module (blvm-mesh) implements payment-gated mesh networking for blvm-node. It implements the Commons Mesh routing protocol with fee distribution, traffic classification, and anti-monopoly protection. For information on developing custom modules, see Module Development.
Features
- Payment-Gated Routing: Routes traffic based on payment verification
- Traffic Classification: Distinguishes between free and paid traffic
- Fee Distribution: Distributes routing fees (60% destination, 30% routers, 10% treasury)
- Anti-Monopoly Protection: Prevents single entity from dominating routing
- Network State Tracking: Monitors mesh network topology and state
Installation
Via Cargo
cargo install blvm-mesh
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-mesh
Manual Installation
1. Clone the repository:
   git clone https://github.com/BTCDecoded/blvm-mesh.git
   cd blvm-mesh
2. Build the module:
   cargo build --release
3. Install to the node modules directory:
   mkdir -p /path/to/node/modules/blvm-mesh/target/release
   cp target/release/blvm-mesh /path/to/node/modules/blvm-mesh/target/release/
Configuration
Create a config.toml file in the module directory:
[mesh]
# Enable/disable module
enabled = true
# Mesh networking mode
# Options: "bitcoin_only", "payment_gated", "open"
mode = "payment_gated"
# Network listening address
listen_addr = "0.0.0.0:8334"
Configuration Options
- `enabled` (default: `true`): Enable or disable the module
- `mode` (default: `"payment_gated"`): Mesh networking mode
  - `"bitcoin_only"`: Bitcoin-only routing (no payment gating)
  - `"payment_gated"`: Payment-gated routing (default)
  - `"open"`: Open routing (no payment required)
- `listen_addr` (default: `"0.0.0.0:8334"`): Network address to listen on
Module Manifest
The module includes a module.toml manifest (see Module Development):
name = "blvm-mesh"
version = "0.1.0"
description = "Commons Mesh networking module"
author = "Bitcoin Commons Team"
entry_point = "blvm-mesh"
capabilities = [
"read_blockchain",
"subscribe_events",
]
Events
Subscribed Events
The module subscribes to 46+ node events including:
- Network Events: `PeerConnected`, `MessageReceived`, `PeerDisconnected`
- Payment Events: `PaymentRequestCreated`, `PaymentVerified`, `PaymentSettled`
- Chain Events: `NewBlock`, `ChainTipUpdated`, `BlockDisconnected`
- Mempool Events: `MempoolTransactionAdded`, `FeeRateChanged`, `MempoolTransactionRemoved`
Published Events
The module publishes the following events:
- `RouteDiscovered` - Payment route discovered through mesh network
- `RouteFailed` - Payment route discovery failed
- `PaymentVerified` - Payment verified for mesh routing
Routing Fee Distribution
Mesh routing fees are distributed as follows:
- 60% to destination node
- 30% to routing nodes (distributed proportionally based on route length)
- 10% to Commons treasury
Fee calculation is performed by the RoutingTable::calculate_fee() method, which takes into account:
- Route length (number of hops)
- Base routing cost
- Payment amount (for percentage-based fees)
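The 60/30/10 split can be sketched in a few lines of arithmetic. The real calculation lives in `RoutingTable::calculate_fee()`; the helper below is illustrative only, and the even per-router division is a simplifying assumption (the module distributes the router share proportionally based on route length).

```rust
// Illustrative sketch of the documented 60/30/10 routing-fee split.
// NOT the module's code: helper name and rounding policy are assumptions.
/// Split a total routing fee (in sats) into destination / per-router /
/// treasury shares, dividing the router share evenly across `hops` routers.
fn split_fee(total_fee_sats: u64, hops: u64) -> (u64, u64, u64) {
    let destination = total_fee_sats * 60 / 100;
    let routers_total = total_fee_sats * 30 / 100;
    let treasury = total_fee_sats - destination - routers_total; // remainder, ~10%
    let per_router = if hops > 0 { routers_total / hops } else { 0 };
    (destination, per_router, treasury)
}

fn main() {
    // A 1000-sat fee over a 3-hop route:
    let (dest, per_router, treasury) = split_fee(1000, 3);
    assert_eq!(dest, 600);       // 60% to the destination node
    assert_eq!(per_router, 100); // 30% split across 3 routing nodes
    assert_eq!(treasury, 100);   // 10% to the Commons treasury
}
```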
Anti-Monopoly Protection
The module implements anti-monopoly protections to prevent single entities from dominating routing:
- Maximum routing share limits: Per-entity limits on routing market share
- Diversification requirements: Routing paths must include multiple entities
- Fee distribution mechanisms: Fee distribution favors decentralized routing paths
- Route quality scoring: Routes are scored based on decentralization metrics
These protections are enforced by the RoutingPolicyEngine and RoutingTable components.
Usage
Once installed and configured, the module automatically:
- Subscribes to network, payment, chain, and mempool events
- Classifies traffic as free or paid based on payment verification
- Routes traffic through the mesh network with payment gating
- Distributes routing fees according to the fee distribution model
- Tracks network topology and publishes routing events
Architecture
Core Infrastructure
blvm-mesh provides the core infrastructure layer for payment-gated routing. The module exposes a ModuleAPI that allows other modules to build specialized functionality on top of the mesh infrastructure. This separation of concerns makes the system composable and allows each module to focus on its domain.
Core Components
MeshManager
Central coordinator for mesh networking operations:
- Payment-gated routing: Routes traffic based on payment verification
- Protocol detection: Detects protocol from packet headers
- Route discovery: Finds routes through mesh network
- Replay prevention: Prevents payment proof replay attacks
- Payment verification: Verifies Lightning and CTV payments
- Routing table management: Manages mesh network topology
Code: manager.rs
PaymentVerifier
Verifies payment proofs for mesh routing:
- Lightning payments: Verifies BOLT11 invoices with preimages via NodeAPI
- CTV payments: Verifies covenant proofs for instant settlement (requires CTV feature flag)
- Expiry checking: Validates payment proof timestamps to prevent expired proofs
- Amount verification: Confirms payment amount matches routing requirements
- NodeAPI integration: Uses NodeAPI to query blockchain for payment verification
Code: verifier.rs
ReplayPrevention
Prevents payment proof replay attacks:
- Hash-based tracking: Tracks payment proof hashes to detect replays
- Sequence numbers: Uses sequence numbers for additional replay protection
- Expiry cleanup: Removes expired payment proof hashes (24-hour expiry)
- Lock-free reads: Uses DashMap for concurrent access without blocking
Code: replay.rs
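Hash-based replay tracking with expiry can be sketched as follows. This is an illustrative stand-in for `replay.rs`, not its actual code: the module uses DashMap for lock-free concurrent access, while a plain `HashMap` keeps this sketch self-contained.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Illustrative sketch (not the module's implementation) of hash-based
// replay detection with an expiry window, as described above.
struct ReplayGuard {
    seen: HashMap<[u8; 32], Instant>,
    ttl: Duration,
}

impl ReplayGuard {
    fn new(ttl: Duration) -> Self {
        Self { seen: HashMap::new(), ttl }
    }

    /// Returns true the first time a proof hash is seen, false on replay.
    fn check_and_record(&mut self, proof_hash: [u8; 32]) -> bool {
        let now = Instant::now();
        // Drop hashes older than the expiry window (24 hours in the module).
        self.seen.retain(|_, t| now.duration_since(*t) < self.ttl);
        if self.seen.contains_key(&proof_hash) {
            return false; // replay detected
        }
        self.seen.insert(proof_hash, now);
        true
    }
}

fn main() {
    let mut guard = ReplayGuard::new(Duration::from_secs(24 * 3600));
    let hash = [7u8; 32];
    assert!(guard.check_and_record(hash));  // first use accepted
    assert!(!guard.check_and_record(hash)); // replay rejected
}
```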
RoutingTable
Manages mesh network routing:
- Direct peers: Tracks direct connections using DashMap for lock-free concurrent access
- Multi-hop routes: Discovers routes through intermediate nodes using distance vector routing
- Fee calculation: Calculates routing fees (60/30/10 split) based on route length and payment amount
- Route discovery: Finds optimal paths through network with route quality scoring
- Route expiry: Routes expire after 1 hour (configurable)
- Route caching: Caches discovered routes for performance
Code: routing.rs
RouteDiscovery
Implements route discovery protocol:
- Distance vector routing: Simple, scalable routing algorithm
- Route requests: Broadcasts route requests to find paths
- Route responses: Collects route responses from network
- Route advertisements: Advertises known routes to neighbors
- Timeout handling: 30-second timeout for route discovery
- Maximum hops: 10 hops maximum for route discovery
Code: discovery.rs
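The hop-limit idea can be shown with a runnable sketch. The real protocol is distributed (route requests and responses exchanged between nodes); a breadth-first search over a local adjacency map is an assumption that captures only the max-hops bound, and the `u32` node ID stands in for the module's 32-byte IDs.

```rust
use std::collections::{HashMap, VecDeque};

// Illustrative sketch: hop-limited route search over a known topology.
// NOT the module's distributed discovery protocol.
type NodeId = u32; // the module uses 32-byte node IDs; u32 keeps this short

fn find_route(
    graph: &HashMap<NodeId, Vec<NodeId>>,
    src: NodeId,
    dst: NodeId,
    max_hops: usize,
) -> Option<Vec<NodeId>> {
    let mut queue = VecDeque::from([vec![src]]);
    while let Some(path) = queue.pop_front() {
        let last = *path.last().unwrap();
        if last == dst {
            return Some(path);
        }
        if path.len() > max_hops {
            continue; // hop limit reached (10 in the module)
        }
        for &next in graph.get(&last).into_iter().flatten() {
            if !path.contains(&next) {
                let mut extended = path.clone();
                extended.push(next);
                queue.push_back(extended);
            }
        }
    }
    None
}

fn main() {
    let graph = HashMap::from([(1, vec![2]), (2, vec![3]), (3, vec![])]);
    assert_eq!(find_route(&graph, 1, 3, 10), Some(vec![1, 2, 3]));
    assert_eq!(find_route(&graph, 1, 3, 1), None); // unreachable within 1 hop
}
```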
RoutingPolicyEngine
Determines routing policy based on protocol and configuration:
- Protocol detection: Identifies protocol from packet headers
- Policy determination: Decides if routing requires payment
- Mode support: Bitcoin-only, payment-gated, or open routing
Code: routing_policy.rs
ModuleAPI
Overview
blvm-mesh exposes a ModuleAPI that other modules can call via inter-module IPC. This allows specialized modules to use mesh routing without implementing routing logic themselves.
Code: api.rs
Available Methods
send_packet
Send a packet through the mesh network.
Request: SendPacketRequest
- `destination: NodeId` - 32-byte destination node ID
- `payload: Vec<u8>` - Packet payload
- `payment_proof: Option<PaymentProof>` - Required for paid routing
- `protocol_id: Option<String>` - Optional protocol identifier
- `ttl: Option<u64>` - Time-to-live (seconds)
Response: SendPacketResponse
- `success: bool` - Whether the packet was sent successfully
- `packet_id: [u8; 32]` - Unique packet ID
- `route_length: usize` - Number of hops
- `estimated_cost_sats: u64` - Total routing cost
- `error: Option<String>` - Error message if failed
discover_route
Find a route to a destination node.
Request: DiscoverRouteRequest
- `destination: NodeId` - Destination node ID
- `max_hops: Option<u8>` - Maximum route length
- `timeout_seconds: Option<u64>` - Discovery timeout
Response: DiscoverRouteResponse
- `route: Option<Vec<NodeId>>` - Route path (`None` if not found)
- `route_cost_sats: u64` - Estimated routing cost
- `discovery_time_ms: u64` - Time taken to discover
register_protocol_handler
Register a protocol handler for incoming packets.
Request: RegisterProtocolRequest
- `protocol_id: String` - Protocol identifier (e.g., "onion-v1", "mining-pool-v1")
- `handler_method: String` - Module method to call when a packet arrives
Response: RegisterProtocolResponse
- `success: bool` - Whether registration succeeded
get_routing_stats
Get routing statistics.
Response: MeshStats
- `enabled: bool` - Whether mesh is enabled
- `mode: MeshMode` - Current mesh mode
- `routing: RoutingStats` - Routing statistics
- `replay: ReplayStats` - Replay prevention statistics
get_node_id
Get the mesh module’s node ID.
Response: NodeId - 32-byte node ID
Building on Mesh Infrastructure
The blvm-mesh module exposes a ModuleAPI that allows other modules to build specialized functionality on top of the core mesh infrastructure. Specialized modules can use the mesh routing system via inter-module IPC.
Using the Mesh ModuleAPI
Modules can call the mesh ModuleAPI in two ways:
Option 1: Direct NodeAPI Call
#![allow(unused)]
fn main() {
use blvm_node::module::traits::NodeAPI;
use blvm_mesh::api::SendPacketRequest;
// Call mesh module API directly
let request = SendPacketRequest {
destination: target_node_id,
payload: packet_data,
payment_proof: Some(payment),
protocol_id: Some("onion-v1".to_string()),
ttl: Some(300),
};
let response_data = node_api
.call_module(Some("blvm-mesh"), "send_packet", bincode::serialize(&request)?)
.await?;
let response: SendPacketResponse = bincode::deserialize(&response_data)?;
}
Option 2: MeshClient Helper (Recommended)
For convenience, the mesh module provides a MeshClient API wrapper that handles serialization:
#![allow(unused)]
fn main() {
use blvm_mesh::MeshClient;
// Create mesh client
let mesh_client = MeshClient::new(node_api, "blvm-mesh".to_string());
// Send packet
let response = mesh_client
.send_packet("my-module-id", destination, payload, payment_proof, Some("onion-v1".to_string()))
.await?;
// Discover route
let route = mesh_client
.discover_route("my-module-id", destination, Some(10))
.await?;
// Register protocol handler
mesh_client
.register_protocol_handler("my-module-id", "onion-v1".to_string(), "handle_packet".to_string())
.await?;
}
Code: client_api.rs
Example Use Cases
Specialized modules can be built to use blvm-mesh for:
- Onion Routing: Multi-layer encrypted packets with anonymous routing (inspired by Tor Project)
- Mining Pool Coordination: Decentralized mining pool operations via mesh
- P2P Messaging: Payment-gated messaging over mesh network
Edge adapters (Meshtastic, Reticulum, other radios)
Commons Mesh does not embed non–Bitcoin transports. For LoRa / Meshtastic, Reticulum, or similar, use a separate adapter process (MQTT, serial, or RNS) that maps frames to bytes your node already accepts, and use MeshClient / blvm-bridge patterns rather than duplicating mesh policy in the adapter. See the upstream blvm-mesh docs/edge-adapters.md in the mesh repository.
Integration Pattern
Any module can integrate with blvm-mesh by:
1. Using MeshClient: Create a `MeshClient` instance with `MeshClient::new(node_api, "blvm-mesh".to_string())`
2. Registering a protocol: Call `mesh_client.register_protocol_handler()` to register a protocol identifier (e.g., `"onion-v1"`, `"mining-pool-v1"`, `"messaging-v1"`)
3. Sending packets: Use `mesh_client.send_packet()` to route packets through the mesh network
4. Discovering routes: Use `mesh_client.discover_route()` to find routes to destination nodes
5. Receiving packets: Handle incoming packets via the registered protocol handler method
Implementation Details
The mesh module provides both internal routing via `MeshManager` and external API access via `MeshModuleAPI`:
- Internal routing: Processes incoming mesh packets via `handle_packet`, routes packets through the mesh network, verifies payments, and manages routing tables
- External API: Exposes `MeshModuleAPI` for other modules to call via inter-module IPC, providing methods for sending packets, discovering routes, and registering protocol handlers
- ModuleIntegration: Uses the new `ModuleIntegration` API for IPC communication, replacing the old `ModuleClient` and `NodeApiIpc` approach
For detailed information on the mesh implementation, see the API.md documentation. For developing modules that integrate with mesh routing, see Module Development.
API Integration
The module integrates with the node via `ModuleIntegration`:
- ModuleIntegration: Uses `ModuleIntegration::connect()` for IPC communication (replaces old `ModuleClient` and `NodeApiIpc`)
- NodeAPI access: Gets NodeAPI via `integration.node_api()` for blockchain queries and payment verification
- Event subscription: Subscribes to events via `integration.subscribe_events()` and receives via `integration.event_receiver()`
- Event publication: Publishes mesh-specific events via NodeAPI
- Inter-module IPC: Exposes ModuleAPI for other modules to call via `node_api.call_module()`
Troubleshooting
Module Not Loading
- Verify the module binary exists at the correct path
- Check `module.toml` manifest is present and valid
- Verify module has required capabilities
- Check node logs for module loading errors
Routing Not Working
- Verify mesh mode is correctly configured (`bitcoin_only`, `payment_gated`, or `open`)
- Check network listening address is accessible and not blocked by firewall
- Verify payment verification is working (if using payment-gated mode)
- Check node logs for routing errors
- Verify peers are connected and routing table has entries
- Check replay prevention isn’t blocking valid packets
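To illustrate the last point, replay prevention typically remembers recently seen packet IDs for a bounded window, so a valid re-send after expiry is accepted while an immediate duplicate is rejected. A minimal sketch (all names hypothetical, not the blvm-mesh implementation):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical replay guard: remembers recently seen packet IDs and
/// rejects duplicates until they expire.
struct ReplayGuard {
    seen: HashMap<[u8; 32], Instant>,
    ttl: Duration,
}

impl ReplayGuard {
    fn new(ttl: Duration) -> Self {
        Self { seen: HashMap::new(), ttl }
    }

    /// Returns true if the packet is fresh (and records it), false if it
    /// is a replay of a packet still inside the TTL window.
    fn check_and_record(&mut self, packet_id: [u8; 32]) -> bool {
        let now = Instant::now();
        let ttl = self.ttl;
        // Drop expired entries so valid re-sends are not blocked forever.
        self.seen.retain(|_, t| now.duration_since(*t) < ttl);
        if self.seen.contains_key(&packet_id) {
            return false;
        }
        self.seen.insert(packet_id, now);
        true
    }
}

fn main() {
    let mut guard = ReplayGuard::new(Duration::from_secs(60));
    let id = [7u8; 32];
    assert!(guard.check_and_record(id));   // first time: accepted
    assert!(!guard.check_and_record(id));  // immediate replay: rejected
    println!("replay guard ok");
}
```

If valid packets are being dropped, an overly long TTL (or a clock issue feeding the expiry check) is one place to look.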
Payment Verification Issues
- Verify Lightning node is accessible (if using Lightning payments)
- Check CTV covenant proofs are valid (if using CTV payments)
- Verify payment proof timestamps are not expired
- Check payment amounts match routing requirements
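The timestamp-expiry check above reduces to a small pure function. A sketch with hypothetical names (the real proof format lives in blvm-mesh):

```rust
/// Hypothetical freshness check: a payment proof carries a unix timestamp
/// and is rejected once older than `max_age_secs`.
fn proof_is_fresh(proof_timestamp: u64, now: u64, max_age_secs: u64) -> bool {
    now.checked_sub(proof_timestamp)
        .map(|age| age <= max_age_secs)
        .unwrap_or(false) // future-dated timestamps are rejected too
}

fn main() {
    let now = 1_700_000_000;
    assert!(proof_is_fresh(now - 30, now, 600));    // 30s old: fresh
    assert!(!proof_is_fresh(now - 3600, now, 600)); // 1h old: expired
    assert!(!proof_is_fresh(now + 60, now, 600));   // future-dated: rejected
    println!("freshness checks ok");
}
```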
Repository
External Resources
- Tor Project: https://www.torproject.org/ - Inspiration for onion routing concepts used in mesh submodules
- Tor Documentation: Tor Project Documentation - Tor network documentation and technical details
See Also
- Module System Overview - Overview of all available modules
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
- SDK Overview - SDK introduction and capabilities
- SDK API Reference - Complete SDK API documentation
Stratum V2 Module
Overview
The Stratum V2 module (blvm-stratum-v2) implements Stratum V2 mining protocol support for blvm-node: Stratum V2 server implementation, mining pool management, and mining job distribution. For information on developing custom modules, see Module Development.
Note: Merge mining is available as a separate paid plugin module (blvm-merge-mining) that integrates with the Stratum V2 module. It is not built into the Stratum V2 module itself.
Features
- Stratum V2 Server: Full Stratum V2 protocol server implementation
- Mining Pool Management: Manages connections to mining pools
- Mining Job Distribution: Distributes mining jobs to connected miners
- Network Integration: Fully integrated with node network layer (messages routed automatically)
Installation
Via Cargo
cargo install blvm-stratum-v2
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-stratum-v2
Manual Installation
1. Clone the repository:

   ```shell
   git clone https://github.com/BTCDecoded/blvm-stratum-v2.git
   cd blvm-stratum-v2
   ```

2. Build the module:

   ```shell
   cargo build --release
   ```

3. Install to node modules directory:

   ```shell
   mkdir -p /path/to/node/modules/blvm-stratum-v2/target/release
   cp target/release/blvm-stratum-v2 /path/to/node/modules/blvm-stratum-v2/target/release/
   ```
Configuration
Create a `config.toml` file in the module directory:

```toml
[stratum_v2]
# Enable/disable module
enabled = true

# Network listening address for Stratum V2 server
listen_addr = "0.0.0.0:3333"

# Mining pool URL (for pool mode)
pool_url = "stratum+tcp://pool.example.com:3333"
```
Configuration Options
- `enabled` (default: `true`): Enable or disable the module
- `listen_addr` (default: `"0.0.0.0:3333"`): Network address to listen on for Stratum V2 server
- `pool_url` (optional): Mining pool URL when operating in pool mode
Module Manifest
The module includes a `module.toml` manifest (see Module Development):

```toml
name = "blvm-stratum-v2"
version = "0.1.0"
description = "Stratum V2 mining protocol module"
author = "Bitcoin Commons Team"
entry_point = "blvm-stratum-v2"

capabilities = [
    "read_blockchain",
    "subscribe_events",
]
```
Events
Subscribed Events
The module subscribes to the following node events:
- `BlockMined` - Block successfully mined
- `BlockTemplateUpdated` - New block template available
- `MiningDifficultyChanged` - Mining difficulty changed
- `ChainTipUpdated` - Chain tip updated (new block)
Published Events
The module publishes the following events:
- `MiningJobCreated` - New mining job created
- `ShareSubmitted` - Mining share submitted
- `MiningPoolConnected` - Connected to mining pool
- `MiningPoolDisconnected` - Disconnected from mining pool
Note: Merge mining events (such as MergeMiningReward) are published by the separate blvm-merge-mining module, not by this module.
Stratum V2 Protocol
The module implements the Stratum V2 protocol specification, providing:
- Binary Protocol: 50-66% bandwidth savings compared to Stratum V1
- TLV Encoding: Tag-Length-Value encoding for efficient message serialization
- Encrypted Communication: TLS/QUIC encryption for secure connections
- Multiplexed Channels: QUIC stream multiplexing for multiple mining streams
- Template Distribution: Efficient block template distribution
- Share Submission: Optimized share submission protocol
- Channel Management: Multiple mining channels per connection
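To make the TLV idea concrete, here is a minimal tag-length-value roundtrip in Rust. The field widths chosen here (u16 tag, u32 length, little-endian) are illustrative only and are not the official Stratum V2 wire layout:

```rust
/// Encode one TLV frame: 2-byte tag, 4-byte length, then the value bytes.
fn tlv_encode(tag: u16, value: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(6 + value.len());
    out.extend_from_slice(&tag.to_le_bytes());
    out.extend_from_slice(&(value.len() as u32).to_le_bytes());
    out.extend_from_slice(value);
    out
}

/// Decode one TLV frame, returning None on truncated input.
fn tlv_decode(buf: &[u8]) -> Option<(u16, &[u8])> {
    if buf.len() < 6 {
        return None;
    }
    let tag = u16::from_le_bytes([buf[0], buf[1]]);
    let len = u32::from_le_bytes([buf[2], buf[3], buf[4], buf[5]]) as usize;
    buf.get(6..6 + len).map(|v| (tag, v))
}

fn main() {
    let frame = tlv_encode(0x0001, b"job-1");
    let (tag, value) = tlv_decode(&frame).expect("well-formed frame");
    assert_eq!(tag, 0x0001);
    assert_eq!(value, b"job-1");
    println!("tlv roundtrip ok");
}
```

Because every message is self-describing (tag plus explicit length), a receiver can skip unknown tags without parsing their payloads, which is part of why binary TLV framing is so much cheaper than Stratum V1's JSON.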
Protocol Components
- Server: `StratumV2Server` manages connections and job distribution
- Pool: `StratumV2Pool` manages miners, channels, and share validation
- Template Generator: `BlockTemplateGenerator` creates block templates from mempool
- Protocol Parser: Handles TLV-encoded Stratum V2 messages
For detailed information about the Stratum V2 protocol, see Stratum V2 Mining Protocol.
Merge Mining (Separate Plugin)
Merge mining is NOT part of the Stratum V2 module. It is available as a separate paid plugin module (blvm-merge-mining) that integrates with the Stratum V2 module.
For merge mining functionality, see:
- blvm-merge-mining README — merge mining module documentation (repository; optional checkout beside `blvm-docs`)
- Stratum V2 + Merge Mining - How merge mining integrates with Stratum V2
Usage
Once installed and configured, the module automatically:
- Subscribes to mining-related events from the node
- Receives Stratum V2 messages via the node’s network layer (automatic routing)
- Creates and distributes mining jobs to connected miners
- Manages mining pool connections (if configured)
- Tracks mining rewards and publishes mining events
Note: Merge mining is handled by a separate module (blvm-merge-mining) that integrates with this module.
The node’s network layer automatically detects Stratum V2 messages (via TLV format) and routes them to this module via the event system. No additional network configuration is required.
Integration with Other Modules
- blvm-datum: Works together with `blvm-datum` for DATUM Gateway mining. `blvm-stratum-v2` handles miner connections while `blvm-datum` handles pool communication.
- blvm-miningos: MiningOS can update pool configuration via this module’s inter-module API.
- blvm-merge-mining: Separate module that integrates with Stratum V2 for merge mining functionality.
API Integration
The module integrates with the node via ModuleClient and NodeApiIpc:
- Read-only blockchain access: Queries blockchain data for block templates
- Event subscription: Receives real-time mining events from the node
- Event publication: Publishes mining-specific events
Note: The module subscribes to MiningJobCreated and ShareSubmitted events for coordination with other modules (e.g., merge mining), but these events are also published by this module when jobs are created and shares are submitted.
Troubleshooting
Module Not Loading
- Verify the module binary exists at the correct path
- Check `module.toml` manifest is present and valid
- Verify module has required capabilities
- Check node logs for module loading errors
Mining Jobs Not Creating
- Verify node has `read_blockchain` capability
- Check that block template events (`BlockTemplateUpdated`) are being published by the node
- Verify listening address is accessible and not blocked by firewall
- Check node logs for mining job creation errors
- Verify node is synced and can generate block templates
- Check that miners are connected (if no miners, jobs may not be created)
Pool Connection Failing
- Verify pool URL is correct and accessible
- Check network connectivity to mining pool
- Verify pool supports Stratum V2 protocol
- Check node logs for connection errors
Repository
- GitHub: blvm-stratum-v2
- Version: 0.1.0
External Resources
- Stratum V2 Specification: Stratum V2 Protocol Specification - Official Stratum V2 mining protocol specification
- Stratum V2 Documentation: Stratum V2 Docs - Complete Stratum V2 protocol documentation
See Also
- Module System Overview - Overview of all available modules
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
- SDK Overview - SDK introduction and capabilities
- Stratum V2 + Merge Mining - Stratum V2 protocol documentation
- Mining Integration - Mining functionality
- Datum Module - DATUM Gateway mining protocol (works with Stratum V2)
Datum Module
Overview
The Datum module (blvm-datum) implements the DATUM Gateway mining protocol for blvm-node, enabling decentralized mining with Ocean pool support. This module handles pool communication only - miners connect via the blvm-stratum-v2 module. For information on developing custom modules, see Module Development.
Features
- DATUM Protocol Client: Encrypted communication with DATUM pools (Ocean)
- Decentralized Templates: Block templates generated locally via NodeAPI
- Coinbase Coordination: Coordinates coinbase payouts with DATUM pool
- Module Cooperation: Works with
blvm-stratum-v2for complete mining solution
Architecture
The module integrates with both the node and the Stratum V2 module:
```text
┌─────────────────┐
│   blvm-node     │
│  (Core Node)    │
└────────┬────────┘
         │ NodeAPI
         │ (get_block_template, submit_block)
         │
    ┌────┴────┐
    │         │
    ▼         ▼
┌─────────┐  ┌──────────────┐
│ blvm-   │  │  blvm-datum  │
│ stratum │  │   (Module)   │
│  v2     │  │              │
│         │  │ ┌──────────┐ │
│ ┌─────┐ │  │ │  DATUM   │ │◄─── DATUM Pool (Ocean)
│ │ SV2 │ │  │ │  Client  │ │     (Encrypted Protocol)
│ │Server│ │ │ └──────────┘ │
│ └─────┘ │  └──────────────┘
│         │
│    │    │
│    ▼    │
│ Mining  │
│Hardware │
└─────────┘
```
Key Points:
- `blvm-datum`: Handles DATUM pool communication only
- `blvm-stratum-v2`: Handles miner connections
- Both modules share block templates via NodeAPI
- Both modules can submit blocks independently
Installation
Via Cargo
cargo install blvm-datum
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-datum
Manual Installation
1. Clone the repository:

   ```shell
   git clone https://github.com/BTCDecoded/blvm-datum.git
   cd blvm-datum
   ```

2. Build the module:

   ```shell
   cargo build --release
   ```

3. Install to node modules directory:

   ```shell
   mkdir -p /path/to/node/modules/blvm-datum/target/release
   cp target/release/blvm-datum /path/to/node/modules/blvm-datum/target/release/
   ```
Configuration
Both blvm-stratum-v2 and blvm-datum modules should be enabled for full DATUM Gateway functionality. Create configuration in your node’s config.toml:
```toml
[modules.blvm-stratum-v2]
enabled = true
listen_addr = "0.0.0.0:3333"
mode = "solo"  # or "pool"

[modules.blvm-datum]
enabled = true
pool_url = "https://ocean.xyz/datum"
pool_username = "user"
pool_password = "pass"
pool_public_key = "hex_encoded_32_byte_public_key"  # Optional, for encryption

[modules.blvm-datum.mining]
coinbase_tag_primary = "DATUM Gateway"
coinbase_tag_secondary = "BLVM User"
pool_address = "bc1q..."  # Bitcoin address for pool payouts
```
Configuration Options
- `enabled` (default: `true`): Enable or disable the module
- `pool_url` (required): DATUM pool URL (e.g., `https://ocean.xyz/datum`)
- `pool_username` (required): Pool username
- `pool_password` (required): Pool password
- `pool_public_key` (optional): Pool public key (32-byte hex-encoded) for encryption
- `coinbase_tag_primary` (optional): Primary coinbase tag
- `coinbase_tag_secondary` (optional): Secondary coinbase tag
- `pool_address` (optional): Bitcoin address for pool payouts
Note: The blvm-stratum-v2 module must also be enabled and configured for miners to connect.
Module Manifest
The module includes a module.toml manifest (see Module Development):
```toml
name = "blvm-datum"
version = "0.1.0"
description = "DATUM Gateway mining protocol module for blvm-node"
author = "Bitcoin Commons Team"
entry_point = "blvm-datum"

capabilities = [
    "read_blockchain",
    "subscribe_events",
]
```
Events
Subscribed Events
The module subscribes to node events including:
- Chain Events: `NewBlock`, `ChainTipUpdated`, `BlockDisconnected`
- Mining Events: `BlockTemplateGenerated`, `BlockFound`
Published Events
The module publishes the following events:
- `DatumPoolConnected` - Successfully connected to DATUM pool
- `DatumPoolDisconnected` - Disconnected from DATUM pool
- `DatumTemplateReceived` - Received block template from pool
- `DatumBlockSubmitted` - Block submitted to pool
Dependencies
- `blvm-node`: Module system integration
- `sodiumoxide`: Encryption for DATUM protocol (Ed25519, X25519, ChaCha20Poly1305, NaCl sealed boxes)
- `ed25519-dalek`: Ed25519 signature verification
- `x25519-dalek`: X25519 key exchange
- `chacha20poly1305`: ChaCha20-Poly1305 authenticated encryption
- `tokio`: Async runtime
API Integration
The module integrates with the node via ModuleClient and NodeApiIpc:
- Read-only blockchain access: Queries blockchain data for template generation
- Event subscription: Receives real-time events from the node
- Event publication: Publishes DATUM-specific events
- NodeAPI calls: Uses `get_block_template` and `submit_block` via NodeAPI
- ModuleAPI registration: Registers `DatumModuleApi` for inter-module communication
Inter-Module Communication
The module exposes a ModuleAPI for other modules (e.g., blvm-stratum-v2) to query coinbase payout requirements:
- `get_coinbase_payout`: Returns the current coinbase payout structure (outputs, tags, unique ID) required by the DATUM pool
This allows other modules to construct block templates with the correct coinbase structure for DATUM pool coordination.
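A sketch of what such a payout structure and the template-side merge might look like. The types and field names here are hypothetical illustrations; the actual `DatumModuleApi` response may differ:

```rust
/// Hypothetical shape of a get_coinbase_payout response.
#[derive(Debug, Clone, PartialEq)]
struct CoinbaseOutput {
    address: String, // payout destination
    value_sats: u64, // amount in satoshis
}

#[derive(Debug, Clone)]
struct CoinbasePayout {
    outputs: Vec<CoinbaseOutput>,
    tag_primary: String,
    tag_secondary: String,
    unique_id: u64,
}

/// A template builder (e.g. blvm-stratum-v2) would fold the pool's
/// required outputs into its own coinbase before creating jobs.
fn merge_outputs(payout: &CoinbasePayout, own: Vec<CoinbaseOutput>) -> Vec<CoinbaseOutput> {
    let mut outputs = payout.outputs.clone();
    outputs.extend(own);
    outputs
}

fn main() {
    let payout = CoinbasePayout {
        outputs: vec![CoinbaseOutput { address: "bc1q-pool".into(), value_sats: 50_000 }],
        tag_primary: "DATUM Gateway".into(),
        tag_secondary: "BLVM User".into(),
        unique_id: 42,
    };
    let own = vec![CoinbaseOutput { address: "bc1q-self".into(), value_sats: 1_000 }];
    let merged = merge_outputs(&payout, own);
    assert_eq!(merged.len(), 2); // pool output first, then our own
    println!("coinbase merge ok");
}
```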
Integration with Stratum V2
The blvm-datum module works in conjunction with blvm-stratum-v2:
- blvm-stratum-v2: Handles miner connections via Stratum V2 protocol
- Miners connect to the Stratum V2 server
- Receives mining jobs and submits shares
- blvm-datum: Handles DATUM pool communication
- Communicates with Ocean pool via encrypted DATUM protocol
- Coordinates coinbase payouts
- Shared templates: Both modules use NodeAPI to get block templates independently
- Independent submission: Either module can submit blocks to the network
Architecture Flow:
```text
Miners → blvm-stratum-v2 (Stratum V2 server) → NodeAPI (block templates)
                                                      ↓
Ocean Pool ← blvm-datum (DATUM client) ← NodeAPI (block templates)
```
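The shared-template / independent-submission flow can be mocked in a few lines. This toy NodeAPI is illustrative only (the real API is asynchronous and returns full templates, not heights):

```rust
use std::cell::RefCell;

/// Toy NodeAPI: the "template" is just the next block height.
struct MockNodeApi {
    height: RefCell<u64>,
}

impl MockNodeApi {
    fn get_block_template(&self) -> u64 {
        *self.height.borrow() + 1 // template builds on the current tip
    }

    /// Accepts a block only if it extends the current tip.
    fn submit_block(&self, template_height: u64) -> bool {
        let mut h = self.height.borrow_mut();
        if template_height == *h + 1 {
            *h = template_height;
            true
        } else {
            false
        }
    }
}

fn main() {
    let api = MockNodeApi { height: RefCell::new(100) };
    // blvm-stratum-v2 and blvm-datum each request their own template.
    let sv2_template = api.get_block_template();
    let datum_template = api.get_block_template();
    assert_eq!(sv2_template, datum_template); // same tip, same template height
    // Whichever path finds a block first submits; the other's becomes stale.
    assert!(api.submit_block(sv2_template));
    assert!(!api.submit_block(datum_template));
    println!("independent submission ok");
}
```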
Status
🚧 In Development - Initial implementation
Troubleshooting
Module Not Loading
- Verify the module binary exists at the correct path
- Check `module.toml` manifest is present and valid
- Verify module has required capabilities
- Check node logs for module loading errors
- Ensure `blvm-stratum-v2` module is also enabled
Pool Connection Issues
- Verify pool URL is correct and accessible
- Check pool username and password are valid
- Verify network connectivity to pool
- Check node logs for connection errors
- Ensure encryption libraries (sodiumoxide) are properly installed
Template Generation Issues
- Verify NodeAPI is accessible and module has `read_blockchain` capability
- Check node is synced and can generate block templates via `get_block_template`
- Verify both `blvm-stratum-v2` and `blvm-datum` are enabled and configured correctly
- Check node logs for template generation errors
- Verify node is not in IBD (Initial Block Download) mode
- Check that the node has sufficient mempool transactions for template generation
Repository
- GitHub: blvm-datum
- Version: 0.1.0
- Status: 🚧 In Development
External Resources
- DATUM Gateway: Ocean DATUM Documentation - Official DATUM Gateway protocol documentation
- Ocean Pool: Ocean.xyz - Mining pool that supports DATUM Gateway protocol
See Also
- Module System Overview - Overview of all available modules
- Stratum V2 Module - Stratum V2 mining protocol (required for miners to connect)
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
- DATUM Gateway Documentation - Official DATUM documentation
Mining OS Module
Overview
The Mining OS module (blvm-miningos) provides bidirectional integration between BLVM and MiningOS (Mos), enabling BLVM to be managed as a MiningOS “rack” (worker) and exposing miners as “things”. For information on developing custom modules, see Module Development.
Features
- BLVM → MiningOS: Register BLVM as a MiningOS rack, expose miners as things, provide block templates
- MiningOS → BLVM: Execute actions (reboot, power management, pool config updates), query statistics, receive commands
- HTTP REST API Client: Full REST API integration with MiningOS app-node
- OAuth2 Authentication: Token-based authentication with automatic token refresh
- P2P Worker Bridge: Node.js bridge for Hyperswarm P2P integration
- Block Template Provider: Provides block templates to MiningOS
- Enhanced Statistics: Chain info, network stats, mempool statistics
- Action Execution System: Executes MiningOS actions (integrates with Stratum V2 for pool config updates)
- Data Conversion: Converts BLVM data to MiningOS “Thing” format
- Event Subscription: Subscribes to block mined, template updates, and other events
Architecture
The module uses a hybrid approach combining:
- Rust Module: Core integration logic, HTTP client, data conversion, action handling
- Node.js Bridge: P2P worker that extends `TetherWrkBase` for Hyperswarm integration
- IPC Communication: Unix socket-based JSON-RPC between Rust and Node.js
```text
┌─────────────────────┐
│      MiningOS       │
│    Orchestrator     │
│  (Hyperswarm P2P)   │
└──────────┬──────────┘
           │
           │ Hyperswarm
           │
┌──────────▼──────────┐     Unix Socket       ┌──────────────┐
│   Node.js Bridge    │ ◄───────────────────► │ Rust Module  │
│    (worker.js)      │       JSON-RPC        │              │
└─────────────────────┘                       └──────┬───────┘
                                                     │ IPC
                                                     │
                                              ┌──────▼───────┐
                                              │  BLVM Node   │
                                              └──────────────┘
```
Installation
Via Cargo
cargo install blvm-miningos
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-miningos
Manual Installation
1. Clone the repository:

   ```shell
   git clone https://github.com/BTCDecoded/blvm-miningos.git
   cd blvm-miningos
   ```

2. Build the Rust module:

   ```shell
   cargo build --release
   ```

3. Install Node.js bridge dependencies:

   ```shell
   cd bridge
   npm install
   ```

4. Install to node modules directory:

   ```shell
   mkdir -p /path/to/node/modules/blvm-miningos/target/release
   cp target/release/blvm-miningos /path/to/node/modules/blvm-miningos/target/release/
   cp -r bridge /path/to/node/modules/blvm-miningos/
   ```
Configuration
The module searches for configuration files in the following order (first found is used):
1. `{data_dir}/config/miningos.toml`
2. `{data_dir}/miningos.toml`
3. `./config/miningos.toml`
4. `./miningos.toml`
If no configuration file is found, the module uses default values.
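The first-found-wins lookup can be sketched as a small helper (hypothetical function; the candidate order matches the list above, with the filesystem check injected so the logic is testable without touching disk):

```rust
use std::path::{Path, PathBuf};

/// Return the first existing candidate config path, or None to signal
/// that the caller should fall back to defaults.
fn resolve_config(data_dir: &Path, exists: impl Fn(&Path) -> bool) -> Option<PathBuf> {
    let candidates = [
        data_dir.join("config/miningos.toml"),
        data_dir.join("miningos.toml"),
        PathBuf::from("./config/miningos.toml"),
        PathBuf::from("./miningos.toml"),
    ];
    candidates.into_iter().find(|p| exists(p))
}

fn main() {
    let data_dir = Path::new("/var/blvm/data");
    // Pretend only the second candidate exists.
    let found = resolve_config(data_dir, |p| p == Path::new("/var/blvm/data/miningos.toml"));
    assert_eq!(found, Some(PathBuf::from("/var/blvm/data/miningos.toml")));
    // No candidate exists: caller uses default values.
    assert_eq!(resolve_config(data_dir, |_| false), None);
    println!("config resolution ok");
}
```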
Create data/config/miningos.toml:
```toml
[miningos]
enabled = true

[p2p]
enabled = true
rack_id = "blvm-node-001"
rack_type = "miner"
auto_register = true

[http]
enabled = true
app_node_url = "https://api.mos.tether.io"
oauth_provider = "google"
oauth_client_id = "your-client-id"
oauth_client_secret = "your-client-secret"
token_cache_file = "miningos-token.json"

[stats]
enabled = true
collection_interval_seconds = 60

[template]
enabled = true
update_interval_seconds = 30
```
Configuration Options
- `enabled` (default: `true`): Enable or disable the module
- P2P Configuration:
  - `enabled` (default: `true`): Enable P2P worker bridge
  - `rack_id` (required): Unique identifier for this BLVM node in MiningOS
  - `rack_type` (default: `"miner"`): Type of rack (e.g., `"miner"`)
  - `auto_register` (default: `true`): Automatically register with MiningOS
- HTTP Configuration:
  - `enabled` (default: `true`): Enable HTTP REST API client
  - `app_node_url` (required): MiningOS app-node API URL
  - `oauth_provider` (required): OAuth2 provider (e.g., `"google"`)
  - `oauth_client_id` (required): OAuth2 client ID
  - `oauth_client_secret` (required): OAuth2 client secret
  - `token_cache_file` (default: `"miningos-token.json"`): Token cache file path
- Statistics Configuration:
  - `enabled` (default: `true`): Enable statistics collection
  - `collection_interval_seconds` (default: `60`): Statistics collection interval
- Template Configuration:
  - `enabled` (default: `true`): Enable block template provider
  - `update_interval_seconds` (default: `30`): Template update interval
Module Manifest
The module includes a module.toml manifest (see Module Development):
```toml
name = "blvm-miningos"
version = "0.1.0"
description = "MiningOS integration module for BLVM"
author = "Bitcoin Commons Team"
entry_point = "blvm-miningos"

capabilities = [
    "read_blockchain",
    "subscribe_events",
    "publish_events",
    "call_module",
]
```
Events
Subscribed Events
The module subscribes to node events including:
- Chain Events: `NewBlock`, `ChainTipUpdated`, `BlockDisconnected`
- Mining Events: `BlockTemplateGenerated`, `BlockFound`, `ShareSubmitted`
- Network Events: `PeerConnected`, `PeerDisconnected`
- Mempool Events: `MempoolTransactionAdded`, `MempoolTransactionRemoved`
Published Events
The module publishes the following events:
- `MiningOSRegistered` - Successfully registered with MiningOS
- `MiningOSActionExecuted` - Action executed from MiningOS
- `MiningOSStatsUpdated` - Statistics updated and sent to MiningOS
- `MiningOSTemplateUpdated` - Block template updated and sent to MiningOS
API Integration
The module integrates with the node via ModuleClient and NodeApiIpc:
- Read-only blockchain access: Queries blockchain data for statistics
- Event subscription: Receives real-time events from the node
- Event publication: Publishes MiningOS-specific events
- Module calls: Can call other modules (e.g., Stratum V2 for pool config updates) via `call_module`
- Block templates: Gets block templates via NodeAPI `get_block_template` method (no special permission required)
- Block submission: Submits mined blocks via NodeAPI `submit_block` method (no special permission required)
Note: get_block_template and submit_block are NodeAPI methods, not permissions. Modules can call these methods through the NodeAPI interface without requiring special capabilities.
Action Execution System
The module can execute actions from MiningOS:
- Reboot: System reboot commands
- Power Management: Power on/off commands
- Pool Config Update: Updates pool configuration via inter-module IPC to Stratum V2 module
- Statistics Query: Queries node statistics (chain info, network stats, mempool)
- Template Refresh: Refreshes block templates
Inter-Module Integration
The module integrates with other modules via inter-module IPC:
- Stratum V2: Can update pool configuration when MiningOS sends pool config update actions
- Node API: Queries blockchain data, network statistics, and mempool information
Usage
The module is automatically discovered and loaded by the BLVM node system when placed in the modules directory.
For manual testing:
```shell
./target/release/blvm-miningos \
  --module-id blvm-miningos \
  --socket-path ./data/modules/modules.sock \
  --data-dir ./data
```
Troubleshooting
Module Not Loading
- Verify the module binary exists at the correct path
- Check `module.toml` manifest is present and valid
- Verify module has required capabilities
- Check node logs for module loading errors
- Ensure Node.js bridge is properly installed
OAuth2 Authentication Issues
- Verify OAuth2 credentials are correct
- Check token cache file permissions
- Verify OAuth2 provider URL is accessible
- Check node logs for authentication errors
- Ensure token refresh is working correctly
P2P Bridge Issues
- Verify Node.js bridge is installed (`npm install` in `bridge/` directory)
- Check bridge process is running
- Verify Hyperswarm connectivity
- Check bridge logs for connection errors
- Ensure rack_id is unique
Statistics Collection Issues
- Verify node is synced and can provide statistics
- Check collection interval configuration
- Verify NodeAPI is accessible
- Check node logs for statistics collection errors
Repository
- GitHub: blvm-miningos
- Version: 0.1.0
- Documentation: QUICKSTART.md, Integration Guide
External Resources
- MiningOS: https://mos.tether.io/ - The open-source, self-hosted OS for Bitcoin mining and energy orchestration that this module integrates with
See Also
- Module System Overview - Overview of all available modules
- Stratum V2 Module - Stratum V2 mining protocol (integrates with MiningOS for pool config updates)
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
Selective Synchronization Module
The blvm-selective-sync module provides a configurable IBD sync policy: operators can avoid downloading certain flagged transaction content during initial block download while keeping full cryptographic validation of the chain. It integrates with the spam filter and registry concepts in blvm-protocol.
Requirements
- Node with modules enabled; blvm-selective-sync built and installed (see Modules Overview).
- Typical workspace: blvm-node, blvm-sdk, blvm-protocol (path dependencies).
Loading
Enable the module in node configuration (same patterns as other modules). After load, the module registers CLI with the node.
User-facing CLI (blvm sync-policy …)
With the module running, the blvm binary exposes subcommands (forwarded to the module over IPC). Examples:
| Command | Purpose |
|---|---|
| `blvm sync-policy list` | List subscribed registries and last refresh |
| `blvm sync-policy subscribe <url>` | Subscribe to a registry URL |
| `blvm sync-policy unsubscribe <url>` | Remove a registry |
| `blvm sync-policy refresh` | Refresh subscribed registries |
| `blvm sync-policy status` | Policy / sync status |
| `blvm sync-policy config-path` | Print path to policy config (e.g. for editing `sync-policy.json`) |
| `blvm sync-policy build-entry …` | Build a registry entry from transaction data (testing / tooling) |
| `blvm sync-policy build-registry …` | Build registry from block data with a spam-filter preset |
Exact flags vary by build; use blvm sync-policy --help when the module is loaded.
Configuration
- Policy and registry URLs are managed via the commands above; config lives under the module data directory (see `config-path`).
- Node may override module storage via `[modules.selective-sync]` (e.g. `database_backend`); see Node configuration.
Implementation notes
- Built with the SDK declarative style: `#[module]`, `#[command]`, `run_module!`.
- Withholding and P2P serve policy: policy logic can merge block hashes (and related sets) into the node’s `merge_block_serve_denylist` path via `NodeAPI` so that, after IBD gates, `getdata` for those hashes does not serve full `block` messages to peers (`notfound`), while consensus validation behavior remains unchanged. Transaction-level withholding uses the parallel `merge_tx_serve_denylist` surface when enabled for the build. See Module development (P2P serve policy & sync) and Module IPC Protocol.
- Repository: blvm-selective-sync.
See also
Governance Overview
The governance system enforces development processes cryptographically across Bitcoin Commons repositories. See Governance Model for details.
Bitcoin Commons Governance System
Central source of truth for governance rules across all Bitcoin Commons repositories (managed by BTCDecoded organization).
⚠️ ACTIVATION STATUS
For verified system status: See SYSTEM_STATUS.md in the BTCDecoded organization repository.
Current Status: Phase 1 (Infrastructure Building)
- ✅ Infrastructure Complete: All core components implemented
- ⚠️ Not Yet Activated: Governance rules are not enforced
- 🔧 Test Keys Only: No real cryptographic enforcement
- 📋 Development Phase: System is in rapid AI-assisted development
Timeline:
- Phase 2 Activation: 3-6 months (governance enforcement begins)
- Phase 3 Full Operation: 12+ months (mature, stable system)
Overview
This repository defines:
- Repository Governance (Binding): Who can merge what, and when
- Protocol Governance (Advisory): User signaling for consensus changes
- Emergency Response: Three-tiered system for critical issues
- Maintainer Lifecycle: Selection, removal, and rotation processes
Key Distinction: We govern repository access (binding) and provide guidance for protocol changes (advisory). Users remain sovereign over Bitcoin’s consensus rules.
Constitutional Governance Model
Bitcoin Commons implements a 5-tier constitutional governance system that makes Bitcoin governance 6x harder to capture than Bitcoin Core’s current model, with complete transparency through cryptographic audit trails and user-protective mechanisms.
Action Tiers
- Tier 1: Routine Maintenance (3-of-5, 7 days) - Bug fixes, documentation, performance
- Tier 2: Feature Changes (4-of-5, 30 days) - New RPC methods, P2P changes, wallet features
- Tier 3: Consensus-Adjacent (5-of-5, 90 days + economic node veto) - Changes affecting consensus validation
- Tier 4: Emergency Actions (4-of-5, 0 days review period) - Critical security patches, network threats
- Tier 5: Governance Changes (Special process, 180 days) - Changes to governance rules themselves
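As a sketch, the signature thresholds and review periods above reduce to a small lookup. This is illustrative only; the real classification rules live in the governance configuration (`action-tiers.yml`), and Tier 3's economic-node veto and Tier 5's special process are omitted here:

```rust
/// (required signatures, total maintainers, review days) per tier,
/// per the list above. Tier 5 follows a special process and is None.
fn tier_rules(tier: u8) -> Option<(u32, u32, u32)> {
    match tier {
        1 => Some((3, 5, 7)),
        2 => Some((4, 5, 30)),
        3 => Some((5, 5, 90)),
        4 => Some((4, 5, 0)),
        _ => None,
    }
}

/// A change merges only once it has enough signatures AND the review
/// period has elapsed.
fn can_merge(tier: u8, signatures: u32, days_elapsed: u32) -> bool {
    match tier_rules(tier) {
        Some((required, _total, review_days)) => {
            signatures >= required && days_elapsed >= review_days
        }
        None => false,
    }
}

fn main() {
    assert!(can_merge(1, 3, 7));   // routine fix: 3-of-5 after 7 days
    assert!(!can_merge(1, 3, 2));  // review period not yet elapsed
    assert!(!can_merge(3, 4, 90)); // consensus-adjacent needs all 5
    assert!(can_merge(4, 4, 0));   // emergency: no review period
    println!("tier checks ok");
}
```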
Layer Hierarchy
- Orange Paper (Layer 1) - Constitutional, 6-of-7, 180 days (365 for consensus)
- Consensus Proof (Layer 2) - Constitutional, 6-of-7, 180 days (365 for consensus)
- Protocol Engine (Layer 3) - Implementation, 4-of-5, 90 days
- Reference Node (Layer 4) - Application, 3-of-5, 60 days
- Developer SDK (Layer 5) - Extension, 2-of-3, 14 days
Directory Structure
```text
governance/
├── README.md (this file - navigation hub)
├── GOVERNANCE.md (core governance process)
├── LAYER_TIER_MODEL.md (how layers and tiers combine)
├── SCOPE.md (repository vs protocol governance)
├── LICENSE
│
├── guides/ (comprehensive guides)
│   ├── MAINTAINER_GUIDE.md
│   └── ECONOMIC_NODE_GUIDE.md
│
├── activation/ (activation process)
│   ├── PHASE_ACTIVATION.md
│   └── ACTIVATION_CHECKLIST.md (coming soon)
│
├── config/ (configuration files)
│   ├── action-tiers.yml (tier definitions)
│   ├── repository-layers.yml (layer definitions)
│   ├── tier-classification-rules.yml (PR classification)
│   ├── economic-nodes.yml
│   ├── emergency-tiers.yml
│   ├── governance-fork.yml
│   ├── ruleset-export-template.yml
│   ├── maintainers/ (by layer)
│   └── repos/ (by repository)
│
├── examples/ (workflow examples)
│   ├── consensus-change-workflow.md
│   ├── emergency-response.md
│   ├── tier1-routine-pr.md (coming soon)
│   ├── tier3-consensus-adjacent.md (coming soon)
│   └── economic-node-veto.md (coming soon)
│
└── architecture/ (design documentation)
    ├── CRYPTOGRAPHIC_GOVERNANCE.md
    ├── ECONOMIC_NODES.md
    ├── GOVERNANCE_FORK.md
    └── CROSS_LAYER_DEPENDENCIES.md (coming soon)
```
Quick Navigation
For New Users
- Main Governance Process - How governance works
- Layer + Tier Model - How layers and tiers combine
- Governance Scope - What we control vs what we influence
- Maintainer Guide - If you’re a maintainer
- Economic Node Guide - If you’re an economic node
- Examples - See governance in action
For Maintainers
- Maintainer Guide - Complete maintainer documentation
- Configuration Files - How to modify configuration
- Activation Process - When governance will be activated
- Examples - Practical workflows
For Economic Nodes
- Economic Node Guide - Complete economic node documentation
- Economic Node Configuration - Configuration details
- Veto Examples - How veto process works
- Activation Timeline - When to expect activation
For Developers
- Architecture Documentation - System design
- Configuration Reference - Technical configuration
- Governance App Documentation - Implementation details
- Audit Materials - Security and audit information
For Auditors
- Audit Materials - Complete audit documentation
- Architecture Documentation - System design
- Security Documentation - Security details
- Test Coverage - Testing information
Key Principles
- Dual-Track Governance: Repository (binding) + Protocol (advisory)
- User Sovereignty: No forced network upgrades
- Proportional Response: Emergency tiers match threat severity
- Transparency: All governance events logged and auditable
- Ostrom Compliance: Managing the codebase commons properly
For Maintainers
Your role:
- Steward the codebase (binding authority)
- Review and approve quality code
- Coordinate releases
- Advocate for good protocol design
- Serve users, don’t command them
You control: Repository merges, releases, code quality
You don’t control: Network adoption, user choices, Bitcoin consensus
For Users
Your role:
- Choose software that serves your needs
- Signal preferences through adoption
- Participate in protocol discussions
- Fork if dissatisfied
- Exercise sovereignty responsibly
You control: What software you run, Bitcoin consensus rules (via economic coordination)
You don’t control: What maintainers work on, release timing
Contributing
To propose governance changes:
- Open discussion in this repository
- Build consensus among maintainers and community
- Submit PR (requires 5-of-7 maintainers + 2-of-3 emergency keyholders)
- 90-day review + 30-day public comment period
See GOVERNANCE.md section on Meta-Governance for details.
Related Documentation
- Audit Materials - Security and audit information
- Governance App - Implementation details
- Developer SDK - Cryptographic primitives
- Orange Paper - Constitutional layer
- Consensus Proof - Constitutional layer
- Protocol Engine - Implementation layer
- Reference Node - Application layer
Attack Path Interception
Figure: Risk interception points across GitHub, Nostr, and OpenTimestamps. Multiple layers of verification prevent single points of failure.
See Also
- Review standards — Human review expectations and AI review intelligence (links to the governance repo)
- Governance Model - Governance architecture and rules
- Multisig Configuration - Configuring multisig thresholds
- Keyholder Procedures - Maintainer responsibilities
- Audit Trails - Audit logging and verification
- SDK API Reference - Governance primitives API
Review standards (maintainers and automation)
Human maintainer expectations and AI-assisted “review intelligence” for Bitcoin Commons code are canonical in the governance repository—they are not duplicated in this book.
- Review expectations — Expected practices for human reviewers and maintainers (guidelines, not rigid rules; see challenge mechanism there).
- Review intelligence — Operating document for AI-assisted review of Bitcoin implementations: true alternative vs Core fork taxonomy, flag structure, and alignment with the Orange Paper and the Bitcoin Commons Compact.
For PR tiers, cryptographic signatures, and merge rules, see PR process and Layer–tier model.
blvm-commons
Overview
blvm-commons is the governance enforcement system for Bitcoin Commons. It provides GitHub integration, OpenTimestamps verification, Nostr integration, and cross-layer validation for the Bitcoin Commons governance framework.
Key Features
- GitHub Integration: GitHub App for cryptographic signature verification and merge enforcement
- OpenTimestamps: Immutable audit trail for governance artifacts
- Nostr Integration: Decentralized governance communication and voting
- Cross-Layer Validation: Security controls and validation across all layers
- CI/CD Workflows: Reusable workflows for Bitcoin Commons repositories
Components
GitHub Integration
The GitHub App enforces cryptographic signatures on pull requests, verifies signature thresholds, and blocks merges until governance requirements are met.
Code: GitHub App
OpenTimestamps Integration
Provides immutable timestamping for governance artifacts, verification proofs, and audit trails.
Code: OpenTimestamps Integration
Nostr Integration
Enables decentralized governance communication, voting, and proposal distribution through Nostr relays.
Code: Nostr Integration
Security Controls
Validates code changes, detects placeholder implementations, and enforces security policies across all Bitcoin Commons repositories.
Code: Security Controls
Repository
GitHub: blvm-commons
See Also
- Governance Overview - Governance system introduction
- OpenTimestamps Integration - Audit trail system
- Nostr Integration - Decentralized communication
- Security Controls - Security validation
- CI/CD Workflows - Reusable workflows
Governance Model
Bitcoin Commons implements a constitutional governance model that makes Bitcoin governance 6x harder to capture.
BTCDecoded Governance Process
⚠️ ACTIVATION STATUS
Current Status: Phase 1 (Infrastructure Building)
- ✅ Infrastructure Complete: All core components implemented
- ⚠️ Not Yet Activated: Governance rules are not enforced
- 🔧 Test Keys Only: No real cryptographic enforcement
- 📋 Development Phase: System is in rapid AI-assisted development
Timeline:
- Phase 2 Activation: 3-6 months (governance enforcement begins)
- Phase 3 Full Operation: 12+ months (mature, stable system)
Constitutional Governance Model
Bitcoin Commons implements a 5-tier constitutional governance system that makes Bitcoin governance 6x harder to capture than Bitcoin Core’s current model, with complete transparency through cryptographic audit trails and user-protective mechanisms.
Core Innovation: Apply the same cryptographic enforcement to governance that Bitcoin applies to consensus - making power visible, capture expensive, and exit cheap.
How Governance Works
Action Tiers (Constitutional Model)
Tier 1: Routine Maintenance (3-of-5, 7 days)
- Bug fixes, documentation, performance optimizations
- Non-consensus changes only
Tier 2: Feature Changes (4-of-5, 30 days)
- New RPC methods, P2P changes, wallet features
- Must include technical specification
Tier 3: Consensus-Adjacent (5-of-5, 90 days + economic node veto)
- Changes affecting consensus validation code
- Economic nodes can veto (30%+ hashpower or 40%+ economic activity)
Tier 4: Emergency Actions (4-of-5, 0 days review period)
- Critical security patches, network-threatening bugs
- Real-time economic node oversight, post-mortem required
Tier 5: Governance Changes (Special process, 180 days)
- Changes to governance rules themselves
- Requires economic node signaling (50%+ hashpower, 60%+ economic activity)
Pull Request Process
- Developer opens PR
- Governance App classifies tier automatically (with temporary manual override)
- Maintainers review and sign: /governance-sign <signature>
- Review period elapses (tier-specific duration)
- Economic node veto period (Tier 3+)
- Requirements met → merge enabled
- PR merged
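The merge gate implied by the steps above can be sketched as a single predicate. Function and parameter names here are illustrative assumptions, not the Governance App's actual API:

```rust
/// Illustrative merge gate: a PR becomes mergeable only when both the
/// signature threshold and the tier-specific review period are satisfied.
/// Names are assumptions for illustration, not the Governance App API.
pub fn merge_allowed(signatures: usize, required: usize, days_elapsed: u32, review_days: u32) -> bool {
    signatures >= required && days_elapsed >= review_days
}

fn main() {
    // Tier 2: 4-of-5 signatures, 30-day review period.
    assert!(merge_allowed(4, 4, 30, 30));
    // Enough signatures, but the review period has not elapsed yet.
    assert!(!merge_allowed(5, 4, 12, 30));
}
```

Tier 3+ additionally holds the merge through the economic node veto window, which this sketch omits.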
Signature Requirements by Layer
- Layer 1-2 (Constitutional): 6-of-7 maintainers, 180 days (365 for consensus changes)
- Layer 3 (Implementation): 4-of-5 maintainers, 90 days
- Layer 4 (Application): 3-of-5 maintainers, 60 days
- Layer 5 (Extension): 2-of-3 maintainers, 14 days
Note: When both Layer and Tier requirements apply, the system uses the “most restrictive wins” rule. See LAYER_TIER_MODEL.md for detailed combination rules.
Layer + Tier Combination
The governance system combines two dimensions:
- Layers (Repository Architecture) - Which repository the change affects
- Tiers (Action Classification) - What type of change is being made
When both apply, the system takes the most restrictive (highest) requirements:
| Example | Layer | Tier | Final Signatures | Final Review | Source |
|---|---|---|---|---|---|
| Bug fix in Protocol Engine | 3 | 1 | 4-of-5 | 90 days | Layer 3 |
| New feature in Developer SDK | 5 | 2 | 4-of-5 | 30 days | Tier 2 |
| Consensus change in Orange Paper | 1 | 3 | 6-of-7 | 180 days | Layer 1 |
| Emergency fix in Reference Node | 4 | 4 | 4-of-5 | 0 days | Tier 4 |
See LAYER_TIER_MODEL.md for the complete decision matrix.
Emergency Tier System
Bitcoin Commons uses a three-tiered emergency response system for proportional handling of critical issues.
Tier 1: Critical Emergency (Network-Threatening)
Activation Criteria:
- Inflation bugs (CVE-2010-5139 class)
- Consensus fork risks (CVE-2018-17144 class)
- P2P network DoS vulnerabilities
- Remote code execution
- Private key extraction
Requirements:
- 0 day review period
- 4-of-7 maintainer signatures
- 5-of-7 emergency keyholders to activate
- 7 day maximum duration
- No extensions allowed
Post-Activation:
- Post-mortem required within 30 days
- Security audit required within 60 days
- Public disclosure after patch deployment
Historical Examples:
- CVE-2010-5139 (2010): Value overflow allowing creation of 184B BTC. Fixed in 5 hours with hard fork.
- CVE-2018-17144 (2018): Inflation bug allowing double-spend of same input. Same-day patch, coordinated disclosure.
Rationale: Some vulnerabilities threaten network survival and require immediate action. Historical incidents show response times measured in hours, not days or weeks. Tier 1 enables this while maintaining multi-signature security (4-of-7).
Tier 2: Urgent Security Issue
Activation Criteria:
- Memory corruption vulnerabilities
- Privacy leaks (transaction linkage)
- Crash exploits (non-DoS)
- Privilege escalation
- Data corruption bugs
Requirements:
- 7 day review period
- 5-of-7 maintainer signatures
- 5-of-7 emergency keyholders to activate
- 30 day maximum duration
- One extension allowed (30 days, requires 6-of-7)
Post-Activation:
- Post-mortem required within 60 days
- Public disclosure after majority node deployment
Historical Examples:
- BIP66 Consensus Fork (2015): Non-upgraded miners accepted invalid block. Required urgent but not immediate coordination over hours/days.
Tier 3: Elevated Priority
Activation Criteria:
- Competitive response (other implementations advancing)
- Important bug fixes (non-security)
- Performance degradation issues
- Ecosystem compatibility problems
- User experience issues affecting adoption
Requirements:
- 30 day review period
- 6-of-7 maintainer signatures
- 5-of-7 emergency keyholders to activate
- 90 day maximum duration
- Two extensions allowed (30 days each, requires 6-of-7)
Post-Activation:
- Post-mortem required within 90 days
- Immediate public disclosure
Emergency Activation Process
- Emergency keyholder submits activation request with evidence
- Other emergency keyholders review and sign (5-of-7 required)
- Governance App activates tier and adjusts requirements
- Status checks reflect emergency parameters
- PRs merged under emergency rules
- Post-activation requirements tracked
- Automatic expiration at max duration unless extended
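The automatic-expiration step can be sketched with day-granularity bookkeeping. The record layout and field names are simplifying assumptions; the real system's timekeeping may differ:

```rust
/// Hypothetical emergency activation record (day-granularity timestamps).
pub struct EmergencyActivation {
    pub activated_day: i64,
    pub max_duration_days: i64,
    pub extension_days: i64, // granted extensions, e.g. one 30-day block for Tier 2
}

impl EmergencyActivation {
    /// Emergencies lapse automatically once the (possibly extended)
    /// maximum duration passes; there are no indefinite emergency modes.
    pub fn is_active(&self, today: i64) -> bool {
        today < self.activated_day + self.max_duration_days + self.extension_days
    }
}

fn main() {
    // Tier 1: 7-day maximum duration, no extensions allowed.
    let t1 = EmergencyActivation { activated_day: 0, max_duration_days: 7, extension_days: 0 };
    assert!(t1.is_active(6));
    assert!(!t1.is_active(7)); // expired once day 7 is reached
}
```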
Safeguards
Abuse Prevention:
- All emergency activations logged in governance repository
- Post-mortem required for accountability
- Tier downgrades if criteria not met
- Community oversight via public disclosure
Automatic Expiration:
- No indefinite emergency modes
- Extensions require higher thresholds (6-of-7 vs 5-of-7)
- Multiple extensions discouraged
Escalation Path:
- Start with appropriate tier based on evidence
- Can escalate if situation worsens
- Cannot downgrade active emergency without resolution
See emergency-tiers.yml for complete configuration.
Consensus Rule Changes
Important: BTCDecoded governance is advisory only for Bitcoin protocol consensus changes. Maintainers cannot force network adoption.
Scope Clarification
What We Control (Repository Governance):
- Merge access to BTCDecoded repositories
- Maintainer selection and removal
- Official release creation
- Code quality standards
What We Don’t Control (Protocol Governance):
- Bitcoin network consensus rules
- User adoption decisions
- Node operator choices
- Miner signaling
Consensus Change Process
When changes affect consensus rules (consensus-rules/**, validation/**, block-acceptance/**):
Repository Requirements (Binding):
- 6-of-7 maintainer signatures
- 365 day review period
- BIP specification required
- Comprehensive test vectors required
- Security audit required
- Equivalence proof required (mathematical correctness)
User Activation (Advisory Only):
- Code Approved: Maintainers approve with 6-of-7 signatures
- Code Released: Published as optional upgrade
- Users Signal: Node operators choose whether to upgrade
- Activation Decision: Based on network signaling thresholds
Recommended Thresholds (Advisory):
- 75% node adoption
- 90% hash power signaling
- Economic majority (e.g., 5 of top 10 exchanges)
Measurement:
- BIP9-style version bits or node polling
- 90-day measurement period
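Taken together, the advisory thresholds reduce to a simple conjunction over the measurement window. The struct and field names below are illustrative, not part of any real tooling:

```rust
/// Illustrative snapshot of network signaling over the 90-day window.
pub struct NetworkSignal {
    pub node_adoption_pct: f64,
    pub hashpower_pct: f64,
    pub economic_majority: bool, // e.g. 5 of top 10 exchanges supporting
}

/// All three recommended (advisory) thresholds must hold before activation
/// is recommended; none of this binds users or overrides their choice.
pub fn activation_recommended(s: &NetworkSignal) -> bool {
    s.node_adoption_pct >= 75.0 && s.hashpower_pct >= 90.0 && s.economic_majority
}

fn main() {
    let s = NetworkSignal { node_adoption_pct: 80.0, hashpower_pct: 92.0, economic_majority: true };
    assert!(activation_recommended(&s));
}
```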
Clarification: Maintainers approve code for release. Users decide whether to run it. No amount of maintainer signatures forces network adoption. Users retain sovereignty to fork, run alternatives, or reject changes.
See SCOPE.md for detailed explanation of repository vs. protocol governance.
Formal Verification Requirements
Technical Prerequisites for Consensus Changes
BTCDecoded implements mathematical verification of consensus code to prevent capture and ensure correctness. This creates an objective, non-negotiable technical barrier that complements social governance.
Verification Stack
- Kani Model Checking (required)
  - Symbolic verification with bounded model checking
  - Proves mathematical invariants hold for all possible inputs
  - Cannot be bypassed or overridden
- Property-Based Testing (required)
  - Randomized testing with proptest
  - Discovers edge cases through fuzzing
  - Complements Kani with empirical coverage
- Mathematical Specifications (required)
  - Formal documentation of consensus rules
  - Invariants documented in code
  - Traceability to Orange Paper
Enforcement Levels
Level 1: CI Enforcement (Ostrom #4: Monitoring)
- Automated verification runs on every PR
- Blocks merge if verification fails
- No human override possible
- Technical correctness is non-negotiable
Level 2: Governance App (Ostrom #5: Graduated Sanctions)
- Validates verification passed before allowing signatures
- PRs without passing verification cannot progress
- Prevents maintainers from signing unverified code
Level 3: Meta-Governance (Ostrom #3: Collective Choice)
- Verification requirements set by maintainers collectively
- Changes require 5-of-7 signatures + 90-day review
- Community can propose improvements
Defense Against Capture
Formal verification makes Bitcoin governance 6x harder to capture:
- Technical Barrier: Must bypass automated verification
- Social Barrier: Must convince 6-of-7 maintainers
- Time Barrier: 180-365 day review periods
- Transparency Barrier: All verification results public
- Audit Barrier: OpenTimestamps immutable proof
- Community Barrier: Public review + economic node veto
Key Insight: An attacker cannot simply “convince maintainers” - they must also produce mathematically correct code that passes verification. This dramatically raises the bar for malicious changes.
Verification Status
Current verification coverage: [Link to docs/VERIFICATION.md]
See Cross-Layer Dependencies for separation rules.
Meta-Governance
Changes to governance rules themselves require:
- 5-of-7 maintainers + 2-of-3 emergency keyholders
- 90-day review period
- 30-day public comment period
- Rationale document required
Maintainer Lifecycle
Adding Maintainers:
- Nominated by existing maintainer
- 5-of-7 approval from current maintainers
- 30-day community comment period
- Must demonstrate technical competence and alignment with Bitcoin principles
Removing Maintainers:
- 6-of-7 vote (excluding subject maintainer)
- Formal warning logged in warnings/ directory
- 14-day notice period
- Reasons: inactivity (6+ months), misconduct, competence concerns
Rotation:
- Maintainers may voluntarily step down with 30-day notice
- Encouraged rotation every 3-5 years to prevent centralization
- Emeritus status available for advisory role
Ostrom Principles Compliance
BTCDecoded governance follows Elinor Ostrom’s principles for managing common-pool resources:
- Clearly Defined Boundaries: Maintainer roles and repository scope defined
- Proportional Equivalence: Higher-layer (constitutional) changes require more consensus
- Collective Choice: Maintainers participate in rule-making that affects them
- Monitoring: Governance App enforces rules transparently on GitHub
- Graduated Sanctions: Warning system before maintainer removal
- Conflict Resolution: Meta-governance process for disputes
- Minimal Recognition of Rights: GitHub org recognizes this governance structure
- Nested Enterprises: Layered architecture with appropriate rules per layer
Note: We govern the codebase commons (repository access), not the network commons (Bitcoin protocol). Users govern the network through voluntary adoption.
Governance Signature Thresholds
Figure: Signature thresholds by layer showing the graduated security model. Constitutional layers require 6-of-7, while extension layers require 2-of-3.
Governance Process Latency
Figure: Governance process latency showing review periods and decision timelines across different tiers.
PR Review Time Distribution
Figure: Pull request review time distribution. Long tails reveal why throughput stalls without process and tooling. Bitcoin Commons addresses this through structured review periods and automated tooling.
See Also
- PR Process - How governance applies to pull requests
- Layer-Tier Model - Layer and tier combination rules
- Multisig Configuration - Signature threshold configuration
- Governance Overview - Governance system introduction
- Keyholder Procedures - Maintainer responsibilities
Layer-Tier Governance Model
Overview
Bitcoin Commons implements dual-dimensional governance combining Layers (repository architecture) and Tiers (action classification). When both apply, the system uses the most restrictive wins rule, taking the highest signature requirement and longest review period.
Layer System
The layer system maps repository architecture to governance requirements:
| Layer | Repository | Purpose | Signatures | Review Period |
|---|---|---|---|---|
| 1 | blvm-spec | Constitutional | 6-of-7 | 180 days |
| 2 | blvm-consensus | Constitutional | 6-of-7 | 180 days |
| 3 | blvm-protocol | Implementation | 4-of-5 | 90 days |
| 4 | blvm-node / blvm | Application | 3-of-5 | 60 days |
| 5 | blvm-sdk | Extension | 2-of-3 | 14 days |
Note: For consensus rule changes, Layer 1-2 require 365 days review period.
Tier System
The tier system classifies changes by action type:
| Tier | Type | Signatures | Review Period |
|---|---|---|---|
| 1 | Routine Maintenance | 3-of-5 | 7 days |
| 2 | Feature Changes | 4-of-5 | 30 days |
| 3 | Consensus-Adjacent | 5-of-5 | 90 days |
| 4 | Emergency Actions | 4-of-5 | 0 days |
| 5 | Governance Changes | 5-of-5 | 180 days |
Combination Rules
When both Layer and Tier requirements apply, the system takes the most restrictive (highest) requirements:
| Layer | Tier | Final Signatures | Final Review | Source |
|---|---|---|---|---|
| 1 | 1 | 6-of-7 | 180 days | Layer 1 |
| 1 | 2 | 6-of-7 | 180 days | Layer 1 |
| 1 | 3 | 6-of-7 | 180 days | Layer 1 |
| 1 | 4 | 6-of-7 | 180 days | Layer 1 |
| 1 | 5 | 6-of-7 | 180 days | Layer 1 |
| 2 | 1 | 6-of-7 | 180 days | Layer 2 |
| 2 | 2 | 6-of-7 | 180 days | Layer 2 |
| 2 | 3 | 6-of-7 | 180 days | Layer 2 |
| 2 | 4 | 6-of-7 | 180 days | Layer 2 |
| 2 | 5 | 6-of-7 | 180 days | Layer 2 |
| 3 | 1 | 4-of-5 | 90 days | Layer 3 |
| 3 | 2 | 4-of-5 | 90 days | Layer 3 |
| 3 | 3 | 5-of-5 | 90 days | Tier 3 |
| 3 | 4 | 4-of-5 | 90 days | Layer 3 |
| 3 | 5 | 5-of-5 | 180 days | Tier 5 |
| 4 | 1 | 3-of-5 | 60 days | Layer 4 |
| 4 | 2 | 4-of-5 | 60 days | Tier 2 |
| 4 | 3 | 5-of-5 | 90 days | Tier 3 |
| 4 | 4 | 4-of-5 | 60 days | Layer 4 |
| 4 | 5 | 5-of-5 | 180 days | Tier 5 |
| 5 | 1 | 2-of-3 | 14 days | Layer 5 |
| 5 | 2 | 4-of-5 | 30 days | Tier 2 |
| 5 | 3 | 5-of-5 | 90 days | Tier 3 |
| 5 | 4 | 4-of-5 | 14 days | Layer 5 |
| 5 | 5 | 5-of-5 | 180 days | Tier 5 |
Examples
| Example | Layer | Tier | Result | Source |
|---|---|---|---|---|
| Bug fix in blvm-protocol | 3 (4-of-5, 90d) | 1 (3-of-5, 7d) | 4-of-5, 90d | Layer 3 |
| New feature in blvm-sdk | 5 (2-of-3, 14d) | 2 (4-of-5, 30d) | 4-of-5, 30d | Tier 2 |
| Consensus change in blvm-spec | 1 (6-of-7, 180d) | 3 (5-of-5, 90d) | 6-of-7, 180d | Layer 1 |
| Emergency fix in blvm-node | 4 (3-of-5, 60d) | 4 (4-of-5, 0d) | 4-of-5, 0d | Tier 4 |
Implementation
Code: threshold.rs
#![allow(unused)]
fn main() {
// Excerpt from the ThresholdValidator impl in threshold.rs; `Self` refers to that type.
pub fn get_combined_requirements(layer: i32, tier: u32) -> (usize, usize, i64) {
    let (layer_sigs_req, layer_sigs_total) = Self::get_threshold_for_layer(layer);
    let layer_review = Self::get_review_period_for_layer(layer, false);
    let (tier_sigs_req, tier_sigs_total) = Self::get_tier_threshold(tier);
    let tier_review = Self::get_tier_review_period(tier);
    // Take the most restrictive: highest signature counts, longest review period
    (layer_sigs_req.max(tier_sigs_req), layer_sigs_total.max(tier_sigs_total), layer_review.max(tier_review))
}
}
Test: cd blvm-commons && cargo test threshold
Configuration
- config/repository-layers.yml - Layer definitions
- config/action-tiers.yml - Tier definitions
- config/tier-classification-rules.yml - PR classification
See Also
- PR Process - How governance tiers apply to pull requests
- Governance Model - Governance system
- Multisig Configuration - Signature threshold configuration
- Governance Overview - Governance system introduction
Configuration System
Accuracy: This chapter describes the intended governance configuration architecture (YAML source of truth, tier rules, fallbacks). On current blvm-commons main, YAML is loaded through src/config/loader.rs (GovernanceConfigFiles and related types). Older doc links to src/governance/config_registry.rs, config_reader.rs, yaml_loader.rs, and similar files do not exist in that form; use loader.rs and the governance repo config/ tree as the live references.
Overview
The Bitcoin Commons configuration system provides a unified, type-safe interface for all governance-controlled parameters. The system uses YAML files as the source of truth with a database-backed registry for governance-controlled changes and a comprehensive fallback chain.
Architecture
The configuration system has three core components:
1. YAML Files (Source of Truth)
YAML configuration files in the governance/config/ directory serve as the authoritative source for all configuration defaults. These files are version-controlled and human-readable.
Key Files:
- action-tiers.yml - Tier definitions and signature requirements
- repository-layers.yml - Layer definitions and requirements
- emergency-tiers.yml - Emergency tier definitions
- governance-fork.yml - Governance fork configuration
- maintainers/*.yml - Maintainer configurations by layer
- repos/*.yml - Repository-specific configurations
2. ConfigRegistry (Database-Backed)
The ConfigRegistry stores all governance-controlled configuration parameters in a database, enabling governance-approved changes without modifying YAML files directly.
Features:
- Stores 87+ forkable governance variables
- Tracks change proposals and approvals
- Requires Tier 5 governance to modify
- Complete audit trail of all changes
- Automatic sync from YAML on startup
Code: config/loader.rs
3. ConfigReader (Unified Interface)
The ConfigReader provides a type-safe interface for reading configuration values with caching and fallback support.
Features:
- Type-safe accessors (get_i32(), get_f64(), get_bool(), get_string())
- In-memory caching (5-minute TTL)
- Automatic cache invalidation on changes
- Fallback chain support
Code: config/loader.rs
Fallback Chain
The system uses a four-tier fallback chain for configuration values:
1. Cache (in-memory, 5-minute TTL)
↓ (if not found)
2. Config Registry (database, governance-controlled)
↓ (if not found)
3. YAML Config (file-based, source of truth)
↓ (if not found)
4. Hardcoded Defaults (safety fallback)
Implementation: See src/config/loader.rs in blvm-commons
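The lookup order above can be sketched in a few lines. The real ConfigReader is async, typed, and cache-invalidating; this illustrates only the fallback chain, with made-up struct and field names:

```rust
use std::collections::HashMap;

/// Minimal sketch of the four-tier fallback chain; names are illustrative.
pub struct ConfigChain {
    pub cache: HashMap<String, i64>,    // 1. in-memory, 5-minute TTL
    pub registry: HashMap<String, i64>, // 2. database, governance-controlled
    pub yaml: HashMap<String, i64>,     // 3. file-based source of truth
}

impl ConfigChain {
    pub fn get_i64(&self, key: &str, default: i64) -> i64 {
        self.cache
            .get(key)
            .or_else(|| self.registry.get(key))
            .or_else(|| self.yaml.get(key))
            .copied()
            .unwrap_or(default) // 4. hardcoded safety fallback
    }
}

fn main() {
    let mut chain = ConfigChain { cache: HashMap::new(), registry: HashMap::new(), yaml: HashMap::new() };
    chain.yaml.insert("tier_3_review_period_days".to_string(), 90);
    assert_eq!(chain.get_i64("tier_3_review_period_days", 0), 90); // YAML hit
    chain.registry.insert("tier_3_review_period_days".to_string(), 120);
    assert_eq!(chain.get_i64("tier_3_review_period_days", 0), 120); // registry wins over YAML
    assert_eq!(chain.get_i64("missing_key", 7), 7); // hardcoded default
}
```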
Sync Mechanisms
sync_from_yaml()
On startup, the system automatically syncs YAML values into the database:
#![allow(unused)]
fn main() {
// Within an async context; `config_registry` and `config_path` are defined elsewhere.
config_registry.sync_from_yaml(config_path).await?;
}
This process:
- Loads all YAML configuration files
- Extracts configuration values using YamlConfigLoader
- Compares with database values
- Updates database if no governance history exists (preserves governance-approved changes)
Code: config/loader.rs
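The "preserves governance-approved changes" rule can be sketched as follows. The `(value, has_governance_history)` pair is an illustrative stand-in for the registry row, not loader.rs's actual schema:

```rust
use std::collections::HashMap;

/// Sketch of the sync rule: a YAML value overwrites the registry entry only
/// when the key has no governance-approved change history.
pub fn sync_value(db: &mut HashMap<String, (i64, bool)>, key: &str, yaml_value: i64) {
    match db.get(key) {
        Some((_, true)) => {} // governance history exists: keep the approved value
        _ => { db.insert(key.to_string(), (yaml_value, false)); }
    }
}

fn main() {
    let mut db: HashMap<String, (i64, bool)> = HashMap::new();
    sync_value(&mut db, "tier_2_review_period_days", 30);
    assert_eq!(db["tier_2_review_period_days"].0, 30); // fresh key: YAML value lands
    db.insert("tier_3_review_period_days".to_string(), (120, true)); // governance-approved
    sync_value(&mut db, "tier_3_review_period_days", 90);
    assert_eq!(db["tier_3_review_period_days"].0, 120); // approved value preserved
}
```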
sync_to_yaml()
When governance-approved changes are activated, the system can write changes back to YAML files. Full bidirectional sync is planned.
Code: config/loader.rs
Configuration Categories
Configuration parameters are organized into categories:
- FeatureFlags: Feature toggles (e.g., feature_governance_enforcement)
- Thresholds: Maintainer signature thresholds (e.g., tier_3_signatures_required)
- TimeWindows: Review periods and time limits (e.g., tier_3_review_period_days)
- Limits: Size and count limits (e.g., max_pr_size_bytes)
- Network: Network-related parameters
- Security: Security-related parameters
- Other: Miscellaneous parameters
87+ Forkable Variables
The system manages 87+ governance-controlled configuration variables, organized into categories:
Complete Configuration Schema
| Category | Variables | Description |
|---|---|---|
| Action Tier Thresholds | 15 | Signature requirements and review periods for each tier |
| Commons Contributor Thresholds | 8 | Qualification thresholds and weight calculation |
| Governance Phase Thresholds | 11 | Phase boundaries (Early, Growth, Mature) |
| Repository Layer Thresholds | 9 | Signature requirements per repository layer |
| Emergency Tier Thresholds | 10 | Emergency action thresholds and windows |
| Governance Review Policy | 10 | Review period policies and requirements |
| Feature Flags | 7 | Feature enable/disable flags |
| Network & Security | 3 | Network and security configuration |
Total: 87+ variables
Code: config/loader.rs
Action Tier Thresholds (15 variables)
| Variable | Default | Description |
|---|---|---|
| tier_1_signatures_required | 3 | Tier 1: Required signatures (out of 5) |
| tier_1_signatures_total | 5 | Tier 1: Total signatures available |
| tier_1_review_period_days | 7 | Tier 1: Review period (days) |
| tier_2_signatures_required | 4 | Tier 2: Required signatures (out of 5) |
| tier_2_signatures_total | 5 | Tier 2: Total signatures available |
| tier_2_review_period_days | 30 | Tier 2: Review period (days) |
| tier_3_signatures_required | 5 | Tier 3: Required signatures (unanimous) |
| tier_3_signatures_total | 5 | Tier 3: Total signatures available |
| tier_3_review_period_days | 90 | Tier 3: Review period (days) |
| tier_4_signatures_required | 4 | Tier 4: Required signatures (emergency) |
| tier_4_signatures_total | 5 | Tier 4: Total signatures available |
| tier_4_review_period_days | 0 | Tier 4: Review period (immediate) |
| tier_5_signatures_required | 5 | Tier 5: Required signatures (governance) |
| tier_5_signatures_total | 5 | Tier 5: Total signatures available |
| tier_5_review_period_days | 180 | Tier 5: Review period (days) |
Code: config/loader.rs
Tier 5 Fork Signaling
| Variable | Default | Description |
|---|---|---|
| signaling_tier_5_mining_percent | 50.0 | Tier 5: Fork signaling — mining / hashpower share (%) |
| signaling_tier_5_economic_percent | 60.0 | Tier 5: Fork signaling — participation-weight share (%) |
Code: config/loader.rs
Commons Contributor Thresholds (8 variables)
| Variable | Default | Description |
|---|---|---|
| commons_contributor_min_zaps_btc | 0.01 | Minimum zap contribution (BTC) |
| commons_contributor_min_marketplace_btc | 0.01 | Minimum marketplace contribution (BTC) |
| commons_contributor_measurement_period_days | 90 | Measurement period (days) |
| commons_contributor_qualification_logic | "OR" | Qualification logic (OR/AND) |
| commons_contributor_weight_formula | "linear" | Weight calculation formula |
| commons_contributor_weight_cap | 0.10 | Maximum weight per contributor (10%) |
Code: config/loader.rs
Governance Phase Thresholds (11 variables)
| Variable | Default | Description |
|---|---|---|
| phase_early_max_blocks | 50000 | Early phase: Maximum blocks |
| phase_early_max_contributors | 10 | Early phase: Maximum contributors |
| phase_growth_min_blocks | 50000 | Growth phase: Minimum blocks |
| phase_growth_max_blocks | 200000 | Growth phase: Maximum blocks |
| phase_growth_min_contributors | 10 | Growth phase: Minimum contributors |
| phase_growth_max_contributors | 100 | Growth phase: Maximum contributors |
| phase_mature_min_blocks | 200000 | Mature phase: Minimum blocks |
| phase_mature_min_contributors | 100 | Mature phase: Minimum contributors |
Code: config/loader.rs
Repository Layer Thresholds (9 variables)
| Variable | Default | Description |
|---|---|---|
| layer_1_2_signatures_required | 3 | Layer 1-2: Required signatures |
| layer_1_2_signatures_total | 5 | Layer 1-2: Total signatures |
| layer_1_2_review_period_days | 7 | Layer 1-2: Review period (days) |
| layer_3_signatures_required | 4 | Layer 3: Required signatures |
| layer_3_signatures_total | 5 | Layer 3: Total signatures |
| layer_3_review_period_days | 30 | Layer 3: Review period (days) |
| layer_4_signatures_required | 5 | Layer 4: Required signatures |
| layer_4_signatures_total | 5 | Layer 4: Total signatures |
| layer_4_review_period_days | 90 | Layer 4: Review period (days) |
| layer_5_signatures_required | 5 | Layer 5: Required signatures |
| layer_5_signatures_total | 5 | Layer 5: Total signatures |
| layer_5_review_period_days | 180 | Layer 5: Review period (days) |
Code: config/loader.rs
Complete Reference
Authoritative defaults and forkable parameters live in the governance repository under config/ (YAML) and ruleset-export-template*.yml. Use blvm-commons/src/config/loader.rs for what the node loads today.
Governance Change Workflow
Changing a configuration parameter requires Tier 5 governance approval:
- Proposal: Create a configuration change proposal via PR
- Review: 5-of-5 maintainer signatures required
- Review Period: 180-day review period
- Activation: Change activated in database via activate_change()
- Sync: Change optionally synced back to YAML files
Code: config/loader.rs
Usage Examples
Basic Configuration Access
#![allow(unused)]
fn main() {
// Note: these module paths reflect the older docs; see the accuracy note above.
use crate::governance::config_reader::ConfigReader;
use crate::governance::config_registry::ConfigRegistry;
use crate::governance::yaml_loader::YamlConfigLoader;
use std::sync::Arc;
// Initialize (assumes `pool` is an existing database connection pool)
let registry = Arc::new(ConfigRegistry::new(pool));
let yaml_loader = YamlConfigLoader::new(config_path);
let config = Arc::new(ConfigReader::with_yaml_loader(
registry.clone(),
Some(yaml_loader),
));
// Read a value (with fallback)
let review_period = config.get_i32("tier_3_review_period_days", 90).await?;
let enabled = config.get_bool("feature_governance_enforcement", false).await?;
}
Convenience Methods
#![allow(unused)]
fn main() {
// Get tier signatures
let (required, total) = config.get_tier_signatures(3).await?;
}
Integration with Validators
#![allow(unused)]
fn main() {
// ThresholdValidator with config support
let validator = ThresholdValidator::with_config(config.clone());
// All methods use config registry
let (req, total) = validator.get_tier_threshold(3).await?;
}
Caching Strategy
- Cache TTL: 5 minutes (configurable via cache_ttl)
- Cache Invalidation:
  - Automatic after config changes are activated
  - Manual via clear_cache() or invalidate_key()
- Cache Storage: In-memory HashMap<String, serde_json::Value>
Code: config/loader.rs
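A TTL cache of this shape can be sketched with the standard library. The entry layout and method bodies are assumptions (loader.rs stores serde_json::Value rather than String), kept only to show how the 5-minute TTL and manual invalidation interact:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Sketch of the in-memory TTL cache; layout is an illustrative assumption.
pub struct TtlCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>,
}

impl TtlCache {
    pub fn new(ttl: Duration) -> Self {
        TtlCache { ttl, entries: HashMap::new() }
    }
    pub fn put(&mut self, key: &str, value: String) {
        self.entries.insert(key.to_string(), (Instant::now(), value));
    }
    pub fn get(&self, key: &str) -> Option<&String> {
        self.entries.get(key).and_then(|(stored_at, v)| {
            // Entries older than the TTL behave as cache misses.
            if stored_at.elapsed() < self.ttl { Some(v) } else { None }
        })
    }
    /// Manual invalidation, mirroring clear_cache() above.
    pub fn clear_cache(&mut self) {
        self.entries.clear();
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_secs(300)); // 5-minute TTL
    cache.put("tier_3_review_period_days", "90".to_string());
    assert_eq!(cache.get("tier_3_review_period_days").map(String::as_str), Some("90"));
    cache.clear_cache();
    assert!(cache.get("tier_3_review_period_days").is_none());
}
```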
YAML Structure
YAML files use a structured format. Example from action-tiers.yml:
tiers:
- tier: 1
name: "Routine Maintenance"
signatures_required: 3
signatures_total: 5
review_period_days: 7
- tier: 3
name: "Consensus-Adjacent"
signatures_required: 5
signatures_total: 5
review_period_days: 90
The YamlConfigLoader extracts values from these files into a flat key-value structure for the registry.
Code: config/loader.rs
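That flattening step can be sketched as below. The Tier struct stands in for a parsed action-tiers.yml entry; the exact extraction behavior lives in loader.rs:

```rust
/// Illustrative stand-in for a parsed action-tiers.yml entry.
pub struct Tier {
    pub tier: u32,
    pub signatures_required: u32,
    pub signatures_total: u32,
    pub review_period_days: u32,
}

/// Flatten a structured tier into the registry's tier_{n}_{property} keys,
/// in the spirit of YamlConfigLoader (a sketch, not the real implementation).
pub fn flatten(t: &Tier) -> Vec<(String, u32)> {
    vec![
        (format!("tier_{}_signatures_required", t.tier), t.signatures_required),
        (format!("tier_{}_signatures_total", t.tier), t.signatures_total),
        (format!("tier_{}_review_period_days", t.tier), t.review_period_days),
    ]
}

fn main() {
    let t3 = Tier { tier: 3, signatures_required: 5, signatures_total: 5, review_period_days: 90 };
    let flat = flatten(&t3);
    assert_eq!(flat[0], ("tier_3_signatures_required".to_string(), 5));
    assert_eq!(flat[2], ("tier_3_review_period_days".to_string(), 90));
}
```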
Initialization
On system startup:
- Load YAML Files: System loads YAML configuration files
- Sync to Database: sync_from_yaml() populates database from YAML
- Initialize Defaults: initialize_governance_defaults() registers any missing configs
- Create ConfigReader: ConfigReader created with YAML loader for fallback access
Code: main.rs
Configuration Key Reference
All configuration keys follow a naming convention:
- Tier configs: tier_{n}_{property}
- Layer configs: layer_{n}_{property}
See the governance repo config/ tree for the live key set.
Benefits
- YAML as Source of Truth: Human-readable, version-controlled defaults
- Governance Control: Database enables governance-approved changes without YAML edits
- Type Safety: Type-safe accessors prevent configuration errors
- Performance: Caching reduces database queries
- Flexibility: Fallback chain ensures system always has valid configuration
- Audit Trail: Complete history of all configuration changes
Governance Fork System
Overview
The governance fork mechanism enables users to choose between different governance rulesets without affecting Bitcoin consensus. This provides an escape hatch for users who disagree with governance decisions while maintaining Bitcoin protocol integrity.
Fork Types
| Type | Definition | Compatibility | Examples |
|---|---|---|---|
| Soft Fork | Changes without breaking compatibility | Existing users continue, new users choose updated | Adding signature requirements, modifying time locks, updating thresholds |
| Hard Fork | Breaking changes | All users must choose, no backward compatibility | Changing signature schemes, modifying fundamental principles, removing tiers |
Ruleset Export
Export Format
Governance rulesets exported as versioned, signed packages in YAML format:
ruleset_version: "1.2.0"
export_timestamp: "YYYY-MM-DDTHH:MM:SSZ"
previous_ruleset_hash: "sha256:abc123..."
governance_rules:
action_tiers: { /* tier definitions */ }
repository_layers: { /* layer definitions */ }
maintainers: { /* maintainer registry */ }
emergency_procedures: { /* emergency protocols */ }
cryptographic_proofs:
maintainer_signatures: [ /* signed by maintainers */ ]
ruleset_hash: "sha256:def456..."
merkle_root: "sha256:ghi789..."
compatibility:
min_version: "1.0.0"
max_version: "2.0.0"
breaking_changes: false
Export Process
- Ruleset preparation (compile current governance rules from YAML files)
- Cryptographic signing (maintainers sign the ruleset)
- Hash calculation (generate tamper-evident hash)
- Merkle tree (create verification structure)
- Export generation (package for distribution)
- Publication (make available for download)
Code: export.rs
Versioning System
| Version Component | Meaning | Example |
|---|---|---|
| Major | Breaking changes (hard fork) | 2.0.0 (incompatible with 1.x) |
| Minor | New features (soft fork) | 1.2.0 (compatible with 1.x) |
| Patch | Bug fixes and improvements | 1.1.1 (compatible with 1.1.x) |
Compatibility: Compatible (upgrade without issues), Incompatible (must choose), Deprecated (removed), Supported (receives updates).
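The compatibility rule implied by the table can be sketched as a major-version check. The parse and compatible helpers are illustrative, not the actual ruleset tooling:

```rust
/// Parse a "major.minor.patch" ruleset version string (illustrative).
fn parse(version: &str) -> Option<(u32, u32, u32)> {
    let mut parts = version.split('.').map(|p| p.parse::<u32>().ok());
    Some((parts.next()??, parts.next()??, parts.next()??))
}

/// Same major version => compatible (soft-fork range);
/// different major => incompatible (hard fork, users must choose).
fn compatible(a: &str, b: &str) -> bool {
    match (parse(a), parse(b)) {
        (Some((maj_a, ..)), Some((maj_b, ..))) => maj_a == maj_b,
        _ => false,
    }
}

fn main() {
    assert!(compatible("1.2.0", "1.0.0"));  // minor bump: soft fork, upgrade freely
    assert!(!compatible("2.0.0", "1.9.3")); // major bump: hard fork, no back-compat
}
```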
Adoption Tracking
Track ruleset adoption through: node count, hash rate, user count, exchange support.
Public Dashboard: Current distribution, adoption trends, geographic distribution, exchange listings.
Fork Decision Process
User Choice
- Download ruleset package
- Verify maintainer signatures
- Validate ruleset integrity (hash)
- Configure client (set ruleset)
- Announce choice (publicly declare)
Client Implementation
- Ruleset loading (load chosen ruleset)
- Signature verification (verify maintainer signatures)
- Rule enforcement (apply governance rules)
- Status reporting (report chosen ruleset)
- Update mechanism (handle ruleset updates)
Code: executor.rs
Fork Resolution
Conflict Resolution
When forks occur:
- User notification (alert users to fork)
- Choice period (30 days to choose ruleset)
- Migration support (tools for ruleset migration)
- Documentation (clear migration guides)
- Support (community support during transition)
Fork Merging
Forks can be merged by: consensus building, gradual migration, feature adoption, clean slate.
Security Considerations
| Aspect | Requirements |
|---|---|
| Ruleset Integrity | Cryptographic signatures, hash verification, Merkle trees, timestamp anchoring |
| Fork Security | Replay protection, version validation, signature verification, threshold enforcement |
Examples
| Scenario | Type | Change | Result |
|---|---|---|---|
| Adding signature requirement | Soft Fork | Require 4-of-5 instead of 3-of-5 | Existing users continue with 3-of-5, new users use 4-of-5 |
| Changing signature scheme | Hard Fork | Switch from Ed25519 to Dilithium | Clean split into two governance models |
Configuration
- governance/config/governance-fork.yml - Fork configuration
- governance/fork-registry.yml - Registered forks
P2P Governance-Related Extensions
Overview
The node can advertise a governance-related P2P capability via the NODE_GOVERNANCE service bit in Version.services. Peers use that flag to identify nodes that participate in Commons-oriented extensions (for example ban list sharing: getbanlist / banlist). Relay and forwarding behavior is implemented in blvm-node networking code and gated by node configuration.
Architecture
Capability and peers
- Nodes set NODE_GOVERNANCE when configured to advertise this capability (see service flags / node config).
- PeerManager can track peers that advertised the governance bit for features that need governance-capable peers (e.g. ban-list gossip).
Concrete protocol surface today
- Ban list sharing: GetBanList / BanList (and the corresponding framed command strings) are part of the shared protocol stack.
- Other P2P commands follow the node’s allowlisted command set in network/protocol.rs and blvm-protocol’s node_tcp / wire layers.
Configuration
Optional [governance] settings in the node (e.g. commons_url, relay toggles) control whether the node forwards or integrates with blvm-commons-side HTTP APIs. Exact fields change over time; see the live configuration reference and blvm-node config sources.
Code references
| Area | Location |
|---|---|
| Service flag | blvm-protocol / blvm-node NODE_GOVERNANCE |
| Framed commands & ProtocolMessage | blvm-node/src/network/protocol.rs, blvm-protocol/src/node_tcp.rs |
| Peer selection for governance bit | blvm-node/src/network/peer_manager.rs (governance feature) |
See also
- Node overview — networking and configuration entry points
- Module system — EventType / governance-related events (proposal lifecycle, webhooks, fork detection)
OpenTimestamps Integration
Overview
Bitcoin Commons uses OpenTimestamps (OTS) to anchor governance registries to the Bitcoin blockchain, providing cryptographic proof that governance state existed at specific points in time. This creates immutable historical records that cannot be retroactively modified.
Purpose
OpenTimestamps integration serves as a temporal proof mechanism by:
- Anchoring governance registries to Bitcoin blockchain
- Providing cryptographic proof of governance state
- Creating immutable historical records
- Enabling verification of governance timeline
Architecture
Monthly Registry Anchoring
Anchoring Schedule:
- Frequency: Monthly on the 1st day of each month
- Content: Complete governance registry snapshot
- Proof: OpenTimestamps proof anchored to Bitcoin
- Storage: Local proof files and public registry
Code: anchor.rs
Registry Structure
{
"version": "YYYY-MM",
"timestamp": "YYYY-MM-DDTHH:MM:SSZ",
"previous_registry_hash": "sha256:abc123...",
"maintainers": [...],
"authorized_servers": [...],
"audit_logs": {...},
"multisig_config": {...}
}
Code: anchor.rs
OTS Client
Client Implementation
The OtsClient handles communication with OpenTimestamps calendar servers:
- Calendar Servers: Multiple calendar servers for redundancy
- Hash Submission: Submits SHA256 hashes for timestamping
- Proof Generation: Receives OpenTimestamps proofs
- Verification: Verifies proofs against Bitcoin blockchain
Code: client.rs
Calendar Servers
Default calendar servers:
- alice.btc.calendar.opentimestamps.org
- bob.btc.calendar.opentimestamps.org
Code: client.rs
Proof Generation
OTS Proof Format
- Format: Binary OpenTimestamps proof
- Extension: .json.ots (e.g., YYYY-MM.json.ots)
- Content: Cryptographic proof of registry existence
- Verification: Can be verified against Bitcoin blockchain
Proof Process
- Calculate Hash: SHA256 hash of registry JSON
- Submit to Calendar: POST hash to OpenTimestamps calendar
- Receive Proof: Calendar returns OTS proof
- Store Proof: Save proof file locally
- Publish: Make proof publicly available
Code: client.rs
Registry Anchorer
Monthly Anchoring
The RegistryAnchorer creates monthly governance registries:
- Registry Generation: Creates complete registry snapshot
- Hash Chain: Links to previous registry via hash
- OTS Stamping: Submits registry for timestamping
- Proof Storage: Stores proofs for verification
Code: anchor.rs
Registry Content
Monthly registries include:
- Maintainer information
- Authorized servers
- Audit log summaries
- Multisig configuration
- Previous registry hash (hash chain)
Code: anchor.rs
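The hash chain linking monthly registries can be sketched as follows, using std's DefaultHasher as a stand-in for the SHA-256 digests ("sha256:...") the real anchorer stores:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Simplified monthly registry; `previous_registry_hash` links the chain.
struct Registry {
    version: String,             // e.g. "2024-01"
    previous_registry_hash: u64, // stand-in for a "sha256:..." digest
    body: String,                // registry content (maintainers, servers, ...)
}

/// Digest of a registry, covering the back-link so tampering anywhere
/// breaks every later link. (DefaultHasher stands in for SHA-256.)
fn digest(r: &Registry) -> u64 {
    let mut h = DefaultHasher::new();
    r.version.hash(&mut h);
    r.previous_registry_hash.hash(&mut h);
    r.body.hash(&mut h);
    h.finish()
}

/// Each registry must reference the digest of its predecessor.
fn verify_chain(chain: &[Registry]) -> bool {
    chain.windows(2).all(|w| w[1].previous_registry_hash == digest(&w[0]))
}

fn main() {
    let genesis = Registry { version: "2024-01".into(), previous_registry_hash: 0, body: "a".into() };
    let next = Registry { version: "2024-02".into(), previous_registry_hash: digest(&genesis), body: "b".into() };
    assert!(verify_chain(&[genesis, next]));
}
```

Because each digest covers the previous digest, retroactively editing any month forces recomputation of every later registry, which the OpenTimestamps anchors would expose.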
Verification
Proof Verification
OTS proofs can be verified:
ots verify YYYY-MM.json.ots
Code: verify.rs
Verification Process
- Load Proof: Read OTS proof file
- Verify Structure: Validate proof format
- Check Calendar: Verify calendar server signatures
- Verify Bitcoin: Check Bitcoin blockchain anchor
- Verify Hash: Confirm hash matches registry
Integration with Governance
Audit Trail Anchoring
Audit log entries are anchored via monthly registries:
- Monthly Snapshots: Complete audit log state
- Hash Chain: Links between monthly registries
- Immutable History: Cannot be retroactively modified
- Public Verification: Anyone can verify proofs
Code: entry.rs
Governance State Proof
Monthly registries prove governance state:
- Maintainer List: Who had authority at that time
- Server Authorization: Which servers were authorized
- Configuration: Governance configuration snapshot
- Timeline: Historical record of changes
Configuration
[ots]
enabled = true
aggregator_url = "https://alice.btc.calendar.opentimestamps.org"
monthly_anchor_day = 1 # Anchor on 1st of each month
registry_path = "./registries"
proofs_path = "./proofs"
Code: config.rs
Benefits
- Immutability: Proofs anchored to Bitcoin blockchain
- Verifiability: Anyone can verify proofs independently
- Historical Record: Complete timeline of governance state
- Tamper-Evident: Any modification breaks hash chain
- Decentralized: No single point of failure
Components
The OpenTimestamps integration includes:
- OTS client for calendar communication
- Registry anchorer for monthly anchoring
- Proof verification
- Hash chain maintenance
- Proof storage and publishing
Location: blvm-commons/src/ots/, blvm-commons/src/audit/
Nostr Integration
Overview
Bitcoin Commons uses Nostr (Notes and Other Stuff Transmitted by Relays) for real-time transparency and decentralized governance communication. The system includes a multi-bot architecture for different types of announcements and status updates.
Purpose
Nostr integration serves as a transparency mechanism by:
- Publishing real-time governance status updates
- Providing public verification of server operations
- Enabling decentralized monitoring of governance events
- Creating an immutable public record of governance actions
Multi-Bot System
Bot Types
The system uses multiple bot identities for different purposes:
- gov: Governance announcements and status updates
- dev: Development updates and technical information
- research: Educational content (optional)
- network: Network metrics and statistics (optional)
Code: bot_manager.rs
Bot Configuration
[nostr.bots.gov]
nsec_path = "env:GOV_BOT_NSEC" # or file path
npub = "npub1..."
# Placeholder LN address (RFC 2606); use a real address in production.
lightning_address = "gov@example.org"
[nostr.bots.gov.profile]
name = "@BTCCommons_Gov"
about = "Bitcoin Commons Governance Bot"
picture = "https://bitcoincommons.org/logo.png"
Code: config.rs
Nostr Client
Client Implementation
The NostrClient manages connections to multiple Nostr relays:
- Multi-Relay Support: Connects to multiple relays for redundancy
- Event Publishing: Publishes events to all connected relays
- Error Handling: Handles relay failures gracefully
- Retry Logic: Automatic retry for failed publishes
Code: client.rs
Relay Management
#![allow(unused)]
fn main() {
// Connect to the configured relays, then publish a signed event to all of them
let client = NostrClient::new(nsec, relay_urls).await?;
client.publish_event(event).await?;
}
Code: client.rs
Event Types
Governance Status Events (Kind 30078)
Published hourly by each authorized server:
- Server health status
- Binary and config hashes
- Audit log status
- Tagged with d:governance-status
Code: events.rs
Server Health Events (Kind 30079)
Published when server status changes:
- Uptime metrics
- Last merge information
- Operational status
- Tagged with d:server-health
Code: events.rs
Audit Log Head Events (Kind 30080)
Published when audit log head changes:
- Current audit log head hash
- Entry count
- Tagged with d:audit-head
Code: events.rs
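The three event kinds above share one shape: a parameterized-replaceable kind plus a "d" tag. A minimal sketch (the GovernanceEvent struct is illustrative and omits signing, pubkeys, and timestamps):

```rust
/// Illustrative event shape; real Nostr events also carry id, pubkey,
/// created_at, and a Schnorr signature.
struct GovernanceEvent {
    kind: u32,
    tags: Vec<(String, String)>,
    content: String,
}

/// Build a governance-status event (kind 30078, tagged d:governance-status).
fn status_event(content: &str) -> GovernanceEvent {
    GovernanceEvent {
        kind: 30078,
        tags: vec![("d".into(), "governance-status".into())],
        content: content.into(),
    }
}

fn main() {
    let ev = status_event("{\"health\":\"ok\"}");
    assert_eq!(ev.kind, 30078);
    assert_eq!(ev.tags[0], ("d".to_string(), "governance-status".to_string()));
}
```

The "d" tag is what lets relays treat each hourly publish as a replacement of the previous one rather than an ever-growing log, so a query for the tag always returns the latest status.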
Governance Action Events
Published for governance actions:
- PR merges
- Review period notifications
- Keyholder announcements
Code: governance_publisher.rs
Governance Publisher
Status Publishing
The StatusPublisher publishes governance status:
- Hourly Updates: Regular status updates
- Event Signing: Events signed with server key
- Multi-Relay: Published to multiple relays
- Error Recovery: Handles relay failures
Code: publisher.rs
Action Publishing
The GovernanceActionPublisher publishes governance actions:
- PR Events: Merge and review events
- Keyholder Events: Signature announcements
- Fork Events: Governance fork decisions
Code: governance_publisher.rs
Zap Tracking
Zap Contributions
Zaps are tracked for contribution-based voting:
- Zap Tracker: Monitors Nostr zaps
- Contribution Recording: Records zap contributions
- Vote Conversion: Converts zaps to votes
- Real-Time Processing: Processes zaps as received
Code: zap_tracker.rs
Zap-to-Vote
Zaps to governance events become votes:
- Proposal Zaps: Zaps to governance event IDs
- Vote Weight: Calculated using quadratic formula
- Vote Type: Extracted from zap message
- Database Storage: Stored in proposal_zap_votes table
Code: zap_voting.rs
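Assuming vote weight is the integer square root of total sats zapped (the text names only a "quadratic formula"; the exact formula in zap_voting.rs is not specified here), the conversion can be sketched as:

```rust
/// Quadratic weighting sketch: weight = floor(sqrt(total sats zapped)).
/// f64 sqrt is fine for illustration; production code would use an
/// integer square root to avoid precision loss on very large amounts.
fn vote_weight(sats: u64) -> u64 {
    (sats as f64).sqrt() as u64
}

fn main() {
    assert_eq!(vote_weight(100), 10);
    assert_eq!(vote_weight(10_000), 100); // 100x the sats -> only 10x the weight
}
```

The square root is what makes the scheme "quadratic": buying n times the influence costs n² times the sats, which dampens plutocratic capture of proposal votes.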
Configuration
[nostr]
enabled = true
relays = [
"wss://relay.bitcoincommons.org",
"wss://nostr.bitcoincommons.org"
]
publish_interval_secs = 3600 # 1 hour
governance_config = "commons_mainnet"
[nostr.bots.gov]
nsec_path = "env:GOV_BOT_NSEC"
npub = "npub1..."
# Placeholder LN address (RFC 2606); use a real address in production.
lightning_address = "gov@example.org"
Code: config.rs
Real-Time Transparency
Public Monitoring
Anyone can monitor governance via Nostr:
- Event Filtering: Filter by event kind and tags
- Relay Queries: Query any Nostr relay
- Real-Time Updates: Receive updates as they happen
- Verification: Verify event signatures
Event Verification
All events are signed:
- Server Keys: Each server has Nostr keypair
- Event Signing: Events signed with server key
- Public Verification: Anyone can verify signatures
- Tamper-Evident: Cannot modify events without breaking signature
Benefits
- Decentralization: No single point of failure
- Censorship Resistance: Multiple relays, no central authority
- Real-Time: Immediate status updates
- Public Verification: Anyone can verify events
- Transparency: Complete public record of governance actions
Components
The Nostr integration includes:
- Multi-bot manager
- Nostr client with multi-relay support
- Event types (status, health, audit, actions)
- Governance publisher
- Status publisher
- Zap tracker and voting processor
Location: blvm-commons/src/nostr/
Multisig Configuration
Bitcoin Commons uses multisig thresholds for governance decisions, with different thresholds based on the layer and tier of the change. See Layer-Tier Model for details.
Layer-Based Thresholds
Constitutional Layers (Layer 1-2)
- Orange Paper (Layer 1): 6-of-7 maintainers, 180 days (365 for consensus changes)
- blvm-consensus (Layer 2): 6-of-7 maintainers, 180 days (365 for consensus changes)
Implementation Layer (Layer 3)
- blvm-protocol: 4-of-5 maintainers, 90 days
Application Layer (Layer 4)
- blvm-node: 3-of-5 maintainers, 60 days
Extension Layer (Layer 5)
- blvm-sdk: 2-of-3 maintainers, 14 days
- governance: 2-of-3 maintainers, 14 days
- blvm-commons: 2-of-3 maintainers, 14 days
Tier-Based Thresholds
Tier 1: Routine Maintenance
- Signatures: 3-of-5 maintainers
- Review Period: 7 days
- Scope: Bug fixes, documentation, performance optimizations
Tier 2: Feature Changes
- Signatures: 4-of-5 maintainers
- Review Period: 30 days
- Scope: New RPC methods, P2P changes, wallet features
Tier 3: Consensus-Adjacent
- Signatures: 5-of-5 maintainers
- Review Period: 90 days
- Scope: Changes affecting consensus validation code
Tier 4: Emergency Actions
- Signatures: 4-of-5 maintainers
- Review Period: 0 days (immediate)
- Scope: Critical security patches, network-threatening bugs
Tier 5: Governance Changes
- Signatures: 5-of-5 maintainers (special process)
- Review Period: 180 days
- Scope: Changes to governance rules themselves
Combined Model
When both a layer and a tier apply, the system uses the “most restrictive wins” rule. See Layer-Tier Model for the decision matrix.
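The combined rule can be sketched as follows. The Threshold type and the ratio-based comparison are illustrative, not the governance engine's actual representation:

```rust
/// Illustrative threshold: required-of-total signatures plus review period.
#[derive(Clone, Copy)]
struct Threshold {
    required: u8,
    total: u8,
    review_days: u32,
}

/// "Most restrictive wins": take the stricter signature requirement
/// (compared by required/total ratio) and the longer review period.
fn most_restrictive(a: Threshold, b: Threshold) -> Threshold {
    let stricter = if (a.required as u32) * (b.total as u32)
        >= (b.required as u32) * (a.total as u32) { a } else { b };
    Threshold {
        required: stricter.required,
        total: stricter.total,
        review_days: a.review_days.max(b.review_days),
    }
}

fn main() {
    // blvm-node (Layer 4: 3-of-5, 60 days) touched by a Tier 3 change (5-of-5, 90 days)
    let layer4 = Threshold { required: 3, total: 5, review_days: 60 };
    let tier3 = Threshold { required: 5, total: 5, review_days: 90 };
    let combined = most_restrictive(layer4, tier3);
    assert_eq!((combined.required, combined.total, combined.review_days), (5, 5, 90));
}
```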
Multisig Threshold Sensitivity
Figure: Multisig threshold sensitivity analysis showing how different threshold configurations affect security and decision-making speed.
Governance Signature Thresholds
Figure: Signature thresholds by layer showing the graduated security model.
For configuration details, see the governance config/ directory in the governance repository.
See Also
- Layer-Tier Model - How layers and tiers combine
- PR Process - How thresholds apply to PRs
- Governance Model - Governance system
- Keyholder Procedures - Maintainer signing process
- Governance Overview - Governance system introduction
Keyholder Procedures
Bitcoin Commons uses cryptographic keyholders (maintainers) to sign governance decisions. This section describes procedures for keyholders.
Maintainer Responsibilities
Maintainers are responsible for:
- Reviewing Changes: Understanding the impact of proposed changes
- Signing Decisions: Cryptographically signing approved changes
- Maintaining Keys: Securely storing and managing cryptographic keys
- Following Procedures: Adhering to governance processes and review periods
Signing Process
- Review PR: Understand the change and its impact
- Generate Signature: Use blvm-sign from blvm-sdk
- Post Signature: Comment /governance-sign <signature> on the PR
- Governance App Verifies: Cryptographically verifies the signature
- Status Check Updates: Shows signature count progress
Key Management
Key Generation
blvm-keygen --output maintainer.key --format pem
Key Storage
- Development: Test keys can be stored locally
- Production: Keys should be stored in HSMs (Hardware Security Modules)
- Backup: Secure backup procedures required
Key Rotation
Keys can be rotated through the governance process. See MAINTAINER_GUIDE.md for detailed procedures.
Emergency Keyholders
Emergency keyholders (5-of-7) can activate emergency mode for critical situations:
- Activation: 5-of-7 emergency keyholders required
- Duration: Maximum 90 days
- Review Periods: Reduced to 30 days during emergency
- Signature Thresholds: Unchanged
Release Pipeline Gate Strength
Figure: Gate strength across the release pipeline. Each gate requires specific signatures and review periods based on the change tier.
For detailed maintainer procedures, see MAINTAINER_GUIDE.md.
See Also
- PR Process - How maintainers sign PRs
- Multisig Configuration - Signature threshold requirements
- Layer-Tier Model - Governance tier system
- Governance Model - Governance system
- Governance Overview - Governance system introduction
Audit Trails
Bitcoin Commons maintains immutable audit trails of all governance decisions using cryptographic hash chains and Bitcoin blockchain anchoring.
Audit Log System
The governance system maintains tamper-evident audit logs that record:
- All Governance Decisions: Every PR merge and maintainer signature milestone
- Maintainer Actions: Key generation, rotation, and usage
- Emergency Activations: Emergency mode activations and deactivations
Cryptographic Properties
- Hash Chains: Each log entry includes hash of previous entry
- Bitcoin Anchoring: Monthly registry anchoring via OpenTimestamps
- Immutable: Logs cannot be modified without detection
- Verifiable: Anyone can verify log integrity
Audit Log Verification
Audit logs can be verified using:
blvm-commons verify-audit-log --log-path audit.log
Three-Layer Verification Architecture
The governance system implements three complementary verification layers:
Figure: Three-layer verification: GitHub merge control, real-time Nostr transparency, and OpenTimestamps historical proof.
Audit Trail Completeness
Figure: Audit-trail completeness across governance layers.
OpenTimestamps Integration
The system uses OpenTimestamps to anchor audit logs to the Bitcoin blockchain:
- Monthly Anchoring: Registry state anchored monthly
- Immutable Proof: Proof of existence at a specific time
- Public Verification: Anyone can verify timestamps
For detailed audit log documentation, see the blvm-commons repository documentation.
See Also
- Governance Overview - Governance system introduction
- Governance Model - Governance architecture
- Keyholder Procedures - Maintainer responsibilities
- OpenTimestamps Integration - OTS anchoring details
- Multisig Configuration - Signature requirements
Orange Paper
The Orange Paper provides the mathematical specification of Bitcoin consensus.
The Orange Paper: Bitcoin Protocol Specification
A Complete Mathematical Description of the Bitcoin Consensus System
Version 1.0
Based on Bitcoin Core Implementation Analysis
Authors: BTCDecoded.org, MyBitcoinFuture.com, @secsovereign
Abstract
This paper presents a complete mathematical specification of the Bitcoin consensus protocol as implemented in Bitcoin Core. Unlike previous descriptions, this specification is derived entirely from the current codebase and represents the protocol as it exists today, not as it was originally conceived. This “Orange Paper” serves as the definitive reference for Bitcoin’s consensus rules, state transitions, and economic model.
Table of Contents
- Introduction
- 1.1 Key Contributions
- 1.2 Document Structure
- System Model
- 2.1 Participants
- 2.2 Network Assumptions
- Mathematical Foundations
- 3.1 Basic Types
- 3.2 Core Data Structures
- 3.3 Script System
- Consensus Constants
- 4.1 Monetary Constants
- 4.2 Block Constants
- 4.3 Script Constants
- State Transition Functions
- 5.1 Transaction Validation
- 5.2 Script Execution
- 5.3 Block Validation
- 5.4 BIP Validation Rules
- 5.5 Sequence Locks (BIP68)
- Economic Model
- 6.1 Block Subsidy
- 6.2 Total Supply
- 6.3 Fee Market
- Proof of Work
- Security Properties
- Mempool Protocol
- Network Protocol
- 10.1 Message Types
- 10.2 Connection Management
- 10.3 Peer Discovery
- 10.4 Block Synchronization
- 10.5 Transaction Relay
- Advanced Features
- 11.1 Segregated Witness (SegWit)
- 11.2 Taproot
- 11.3 Chain Reorganization
- 11.3.1 Undo Log Pattern
- 11.4 UTXO Commitments
- Mining Protocol
- 12.1 Block Template Generation
- 12.2 Coinbase Transaction
- 12.3 Mining Process
- 12.4 Block Template Interface
- Implementation Considerations
- 13.1 Performance
- 13.2 Security
- Conclusion
- 14.1 Summary of Contributions
- 14.2 Applications
- Governance Model
- 15.1 Mathematical Foundations
- 15.2 Vote Aggregation
- 15.3 Security Properties
1. Introduction
Bitcoin is a distributed consensus system that maintains a shared ledger of transactions without requiring trusted intermediaries. The system achieves consensus through proof-of-work and enforces economic rules through cryptographic validation. This paper provides a complete mathematical description of how Bitcoin operates.
1.1 Key Contributions
- Complete State Machine: Formal specification of Bitcoin’s state transitions
- Economic Model: Mathematical description of the monetary system
- Validation Rules: Precise definition of all consensus-critical checks
- Security Properties: Formal statements of Bitcoin’s security guarantees
1.2 Document Structure
This specification is organized into four main parts:
- Foundations (Sections 2-4): Mathematical foundations, data structures, and constants
- Core Protocol (Sections 5-8): State transitions, economic model, proof-of-work, and security
- Network Layer (Sections 9-11): Mempool, P2P protocol, and advanced features
- Mining Protocol (Section 12): Block creation and mining process
Each section builds upon previous sections, with cross-references to maintain consistency.
2. System Model
2.1 Participants
- Miners: Create blocks and compete for block rewards
- Nodes: Validate transactions and maintain the blockchain
- Users: Create transactions to transfer value
2.2 Network Assumptions
- Asynchronous Network: Messages may be delayed or reordered
- Byzantine Fault Tolerance: Some participants may behave maliciously
- Economic Rationality: Participants act to maximize their utility
3. Mathematical Foundations
3.1 Basic Types
Hash Values: $\mathbb{H} = \{0,1\}^{256}$ - Set of 256-bit hashes
Byte Strings: $\mathbb{S} = \{0,1\}^{*}$ - Set of byte strings
Natural Numbers: $\mathbb{N} = \{0, 1, 2, \ldots\}$ - Set of natural numbers
Integers: $\mathbb{Z} = \{\ldots, -2, -1, 0, 1, 2, \ldots\}$ - Set of integers
Rational Numbers: $\mathbb{Q}$ - Set of rational numbers
Notation: Throughout this document, we use:
- $h \in \mathbb{N}$ for block height
- $tx \in \mathcal{TX}$ for transactions
- $us \in \mathcal{US}$ for UTXO sets
- $b \in \mathcal{B}$ for blocks
3.2 Core Data Structures
OutPoint: $\mathcal{O} = \mathbb{H} \times \mathbb{N}$ (see Transaction Input, Cartesian product)
Transaction Input: $\mathcal{I} = \mathcal{O} \times \mathbb{S} \times \mathbb{N}$ (see Script System)
Transaction Output: $\mathcal{T} = \mathbb{Z} \times \mathbb{S}$ (see Monetary Values)
Transaction: $\mathcal{TX} = \mathbb{N} \times \mathcal{I}^* \times \mathcal{T}^* \times \mathbb{N}$ (see Transaction Validation, Kleene star)
Block Header: $\mathcal{H} = \mathbb{Z} \times \mathbb{H} \times \mathbb{H} \times \mathbb{N} \times \mathbb{N} \times \mathbb{N}$ (see Block Validation)
Block: $\mathcal{B} = \mathcal{H} \times \mathcal{TX}^*$ (see Block Validation)
UTXO: $\mathcal{U} = \mathbb{Z} \times \mathbb{S} \times \mathbb{N}$ (see UTXO Set Invariant)
UTXO Set: $\mathcal{US} = \mathcal{O} \rightarrow \mathcal{U}$ (see State Transition Functions, function type)
3.3 Script System
Script: $\mathcal{SC} = \mathbb{S}$ (sequence of opcodes)
Witness: $\mathcal{W} = \mathbb{S}^{*}$ (stack of witness data)
Stack: $\mathcal{ST} = \mathbb{S}^{*}$ (execution stack, see Script Execution)
4. Consensus Constants
4.1 Monetary Constants
$C = 10^8$ (satoshis per BTC, see Economic Model)
$M_{max} = 21 \times 10^6 \times C$ (maximum money supply, see Supply Limit)
$H = 210,000$ (halving interval, see Block Subsidy)
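The supply limit can be checked directly from these constants: consensus code halves an integer-valued subsidy every $H$ blocks, so total issuance is $\sum_i H \cdot \lfloor 50 C / 2^i \rfloor$, just under $M_{max}$. A worked computation:

```rust
const C: u64 = 100_000_000; // satoshis per BTC
const H: u64 = 210_000;     // halving interval (blocks)

/// Sum the halving series with integer division, as the subsidy rule does.
fn total_supply() -> u64 {
    let mut subsidy = 50 * C; // initial block subsidy in satoshis
    let mut total = 0u64;
    while subsidy > 0 {
        total += H * subsidy;
        subsidy /= 2; // integer halving; truncation is why the sum stays under 21M
    }
    total
}

fn main() {
    let total = total_supply();
    assert!(total < 21_000_000 * C);            // strictly below M_max
    assert_eq!(total, 2_099_999_997_690_000);   // ≈ 20,999,999.9769 BTC
}
```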
4.2 Block Constants
$W_{max} = 4 \times 10^6$ (maximum block weight, see Block Validation)
$S_{max} = 80,000$ (maximum sigops per block, see Script Execution)
$R = 100$ (coinbase maturity requirement, see Transaction Validation)
4.3 Script Constants
$L_{script} = 10,000$ (maximum script length, see Script Security)
$L_{stack} = 1,000$ (maximum stack size, see Script Execution Bounds)
$L_{ops} = 201$ (maximum operations per script, see Script Execution Bounds)
$L_{element} = 520$ (maximum element size, see Script Execution)
5. State Transition Functions
5.1 Transaction Validation
CheckTransaction: $\mathcal{TX} \rightarrow \{\text{valid}, \text{invalid}\}$
A transaction $tx = (v, ins, outs, lt)$ is valid if and only if:
- $|ins| > 0 \land |outs| > 0$
- $\forall o \in outs: 0 \leq o.value \leq M_{max}$
- $\sum_{o \in outs} o.value \leq M_{max}$
- $\forall i,j \in ins: i \neq j \Rightarrow i.prevout \neq j.prevout$
- If $tx$ is coinbase: $2 \leq |ins[0].scriptSig| \leq 100$
- If $tx$ is not coinbase: $\forall i \in ins: \neg i.prevout.IsNull()$
Properties:
- Structure validation: $\text{CheckTransaction}(tx) = \text{valid} \implies |tx.\text{inputs}| > 0 \land |tx.\text{outputs}| > 0$
- Input bounds: $\text{CheckTransaction}(tx) = \text{valid} \implies |tx.\text{inputs}| \leq M_{\text{max\_inputs}}$
- Output bounds: $\text{CheckTransaction}(tx) = \text{valid} \implies |tx.\text{outputs}| \leq M_{\text{max\_outputs}}$
- Empty rejection: $|tx.\text{inputs}| = 0 \lor |tx.\text{outputs}| = 0 \implies \text{CheckTransaction}(tx) \neq \text{valid}$
- Output value bounds: $\text{CheckTransaction}(tx) = \text{valid} \implies \forall o \in tx.\text{outputs}: 0 \leq o.\text{value} \leq M_{\text{max}}$
- Total output sum: $\text{CheckTransaction}(tx) = \text{valid} \implies \sum_{o \in tx.\text{outputs}} o.\text{value} \leq M_{\text{max}}$
- No duplicate prevouts: $\text{CheckTransaction}(tx) = \text{valid} \implies \forall i,j \in tx.\text{inputs}: i \neq j \implies i.\text{prevout} \neq j.\text{prevout}$
- Coinbase scriptSig length: $\text{CheckTransaction}(tx) = \text{valid} \land \text{IsCoinbase}(tx) \implies 2 \leq |tx.\text{inputs}[0].\text{scriptSig}| \leq 100$
- Non-coinbase prevout: $\text{CheckTransaction}(tx) = \text{valid} \land \neg \text{IsCoinbase}(tx) \implies \forall i \in tx.\text{inputs}: \neg i.\text{prevout}.\text{IsNull}()$
- Deterministic: $tx_1 = tx_2 \implies \text{CheckTransaction}(tx_1) = \text{CheckTransaction}(tx_2)$ (same transaction → same result)
- Result type: $\text{CheckTransaction}(tx) \in \{\text{valid}, \text{invalid}\}$
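The structural rules above can be sketched directly. Types are simplified stand-ins (script and witness contents are omitted; only lengths and values matter to CheckTransaction):

```rust
use std::collections::HashSet;

const M_MAX: i64 = 21_000_000 * 100_000_000; // maximum money supply, in satoshis

#[derive(PartialEq, Eq, Hash, Clone)]
struct OutPoint { txid: [u8; 32], vout: u32 }

struct TxIn { prevout: OutPoint, script_sig: Vec<u8> }
struct TxOut { value: i64 }
struct Tx { inputs: Vec<TxIn>, outputs: Vec<TxOut> }

impl OutPoint {
    fn is_null(&self) -> bool { self.txid == [0u8; 32] && self.vout == u32::MAX }
}

impl Tx {
    fn is_coinbase(&self) -> bool {
        self.inputs.len() == 1 && self.inputs[0].prevout.is_null()
    }
}

fn check_transaction(tx: &Tx) -> bool {
    // |ins| > 0 and |outs| > 0
    if tx.inputs.is_empty() || tx.outputs.is_empty() { return false; }
    // per-output value bounds and running total bound
    let mut total: i64 = 0;
    for o in &tx.outputs {
        if o.value < 0 || o.value > M_MAX { return false; }
        total += o.value;
        if total > M_MAX { return false; }
    }
    // no duplicate prevouts
    let mut seen = HashSet::new();
    if !tx.inputs.iter().all(|i| seen.insert(i.prevout.clone())) { return false; }
    if tx.is_coinbase() {
        // coinbase scriptSig length: 2..=100 bytes
        (2..=100).contains(&tx.inputs[0].script_sig.len())
    } else {
        // non-coinbase inputs must not reference the null outpoint
        tx.inputs.iter().all(|i| !i.prevout.is_null())
    }
}

fn main() {
    let coinbase = Tx {
        inputs: vec![TxIn { prevout: OutPoint { txid: [0; 32], vout: u32::MAX }, script_sig: vec![0; 4] }],
        outputs: vec![TxOut { value: 50 * 100_000_000 }],
    };
    assert!(check_transaction(&coinbase));
    assert!(!check_transaction(&Tx { inputs: vec![], outputs: vec![] }));
}
```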
CheckTxInputs: $\mathcal{TX} \times \mathcal{US} \times \mathbb{N} \rightarrow \{\text{valid}, \text{invalid}\} \times \mathbb{Z}$
Properties:
- Coinbase fee: $\text{IsCoinbase}(tx) = \text{true} \implies \text{CheckTxInputs}(tx, us, h) = (\text{valid}, 0)$
- Value conservation: $\text{CheckTxInputs}(tx, us, h) = (\text{valid}, fee) \land \neg \text{IsCoinbase}(tx) \implies \sum_{i \in tx.\text{inputs}} us(i.\text{prevout}).\text{value} = \sum_{o \in tx.\text{outputs}} o.\text{value} + fee$
- Fee calculation: $\text{CheckTxInputs}(tx, us, h) = (\text{valid}, fee) \land \neg \text{IsCoinbase}(tx) \implies fee = \sum_{i \in tx.\text{inputs}} us(i.\text{prevout}).\text{value} - \sum_{o \in tx.\text{outputs}} o.\text{value}$
- Non-negative fee: $\text{CheckTxInputs}(tx, us, h) = (\text{valid}, fee) \implies fee \geq 0$
- Insufficient funds: $\text{CheckTxInputs}(tx, us, h) = (\text{invalid}, 0) \land \neg \text{IsCoinbase}(tx) \implies \sum_{i \in tx.\text{inputs}} us(i.\text{prevout}).\text{value} < \sum_{o \in tx.\text{outputs}} o.\text{value}$
- Deterministic: $tx_1 = tx_2 \land us_1 = us_2 \land h_1 = h_2 \implies \text{CheckTxInputs}(tx_1, us_1, h_1) = \text{CheckTxInputs}(tx_2, us_2, h_2)$
- Result type: $\text{CheckTxInputs}(tx, us, h) \in (\{\text{valid}\} \times \mathbb{Z}) \cup \{(\text{invalid}, 0)\}$
For transaction $tx$ with UTXO set $us$ at height $h$:
- If $tx$ is coinbase: return $(\text{valid}, 0)$
- Let $total_{in} = \sum_{i \in ins} us(i.prevout).value$
- Let $total_{out} = \sum_{o \in outs} o.value$
- If $total_{in} < total_{out}$: return $(\text{invalid}, 0)$
- Return $(\text{valid}, total_{in} - total_{out})$
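The steps above can be sketched as follows (simplified types; the UTXO set is a plain map and `None` stands in for $(\text{invalid}, 0)$):

```rust
use std::collections::HashMap;

type OutPoint = (u64, u32); // simplified (txid, vout)
struct Utxo { value: i64 }
struct Tx { is_coinbase: bool, inputs: Vec<OutPoint>, outputs: Vec<i64> }

/// Returns Some(fee) if the transaction's inputs cover its outputs,
/// None if a referenced UTXO is missing or funds are insufficient.
fn check_tx_inputs(tx: &Tx, us: &HashMap<OutPoint, Utxo>) -> Option<i64> {
    if tx.is_coinbase { return Some(0); }       // coinbase: (valid, 0)
    let mut total_in: i64 = 0;
    for prevout in &tx.inputs {
        total_in += us.get(prevout)?.value;     // missing UTXO -> invalid
    }
    let total_out: i64 = tx.outputs.iter().sum();
    if total_in < total_out { return None; }    // insufficient funds
    Some(total_in - total_out)                  // non-negative fee
}

fn main() {
    let mut us = HashMap::new();
    us.insert((1, 0), Utxo { value: 100_000 });
    let tx = Tx { is_coinbase: false, inputs: vec![(1, 0)], outputs: vec![90_000] };
    assert_eq!(check_tx_inputs(&tx, &us), Some(10_000)); // fee = in - out
}
```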
5.1.1 Transaction Sighash Calculation
CalculateSighash: $\mathcal{TX} \times \mathbb{N} \times \mathcal{US} \times \text{SighashType} \times \mathbb{N} \rightarrow \mathbb{H}$
Properties:
- Hash length: $\text{CalculateSighash}(tx, i, us, st, h) \in \mathbb{H}$ (a 32-byte hash)
- Input index requirement: $\text{CalculateSighash}(tx, i, us, st, h)$ requires $i < |tx.inputs|$ (valid input index)
- Deterministic: $tx_1 = tx_2 \land i_1 = i_2 \land us_1 = us_2 \land st_1 = st_2 \land h_1 = h_2 \implies \text{CalculateSighash}(tx_1, i_1, us_1, st_1, h_1) = \text{CalculateSighash}(tx_2, i_2, us_2, st_2, h_2)$
For transaction $tx$, input index $i$, UTXO set $us$, sighash type $st$, and height $h$:
$$\text{CalculateSighash}(tx, i, us, st, h) = \text{SHA256}(\text{SHA256}(\text{SighashPreimage}(tx, i, us, st, h)))$$
SighashScriptCode: $\mathcal{TX} \times \mathbb{N} \times \mathcal{US} \rightarrow \mathbb{S}$
Properties:
- P2SH handling: $\text{SighashScriptCode}(tx, i, us) = \text{RedeemScript}(tx, i) \iff \text{IsP2SH}(us(tx.\text{inputs}[i].\text{prevout}).\text{scriptPubkey})$ (P2SH uses redeem script)
- Non-P2SH handling: $\text{SighashScriptCode}(tx, i, us) = us(tx.\text{inputs}[i].\text{prevout}).\text{scriptPubkey} \iff \neg \text{IsP2SH}(us(tx.\text{inputs}[i].\text{prevout}).\text{scriptPubkey})$ (non-P2SH uses scriptPubkey)
- Input index requirement: $\text{SighashScriptCode}(tx, i, us)$ requires $i < |tx.\text{inputs}|$ (valid input index)
- UTXO existence: $\text{SighashScriptCode}(tx, i, us)$ requires $tx.\text{inputs}[i].\text{prevout} \in us$ (UTXO must exist)
- Deterministic: $tx_1 = tx_2 \land i_1 = i_2 \land us_1 = us_2 \implies \text{SighashScriptCode}(tx_1, i_1, us_1) = \text{SighashScriptCode}(tx_2, i_2, us_2)$
- Result type: $\text{SighashScriptCode}(tx, i, us) \in \mathbb{S}$ (returns script)
For transaction $tx$, input index $i$, and UTXO set $us$:
$$\text{SighashScriptCode}(tx, i, us) = \begin{cases} \text{RedeemScript}(tx, i) & \text{if } \text{IsP2SH}(us(tx.\text{inputs}[i].\text{prevout}).\text{scriptPubkey}) \\ us(tx.\text{inputs}[i].\text{prevout}).\text{scriptPubkey} & \text{otherwise} \end{cases}$$
Where $\text{RedeemScript}(tx, i)$ is the redeem script extracted from the stack after executing scriptSig for input $i$.
SighashType: $\mathbb{N}_{8} \times \mathbb{N} \rightarrow \text{SighashType}$
Properties:
- BIP66 legacy handling: $\text{SighashType}(0x00, h) = \text{AllLegacy} \iff h < H_{66}$ (legacy 0x00 only before BIP66)
- Standard types: $\text{SighashType}(byte, h) \in \{\text{All}, \text{None}, \text{Single}\} \iff byte \in \{0x01, 0x02, 0x03\}$ (standard types)
- AnyoneCanPay flag: $\text{SighashType}(byte, h)$ has the AnyoneCanPay flag $\iff byte \in \{0x81, 0x82, 0x83\}$ (AnyoneCanPay types)
- Invalid handling: $\text{SighashType}(byte, h) = \text{Invalid} \iff byte \notin \{0x00, 0x01, 0x02, 0x03, 0x81, 0x82, 0x83\} \lor (byte = 0x00 \land h \geq H_{66})$ (invalid bytes or post-BIP66 0x00)
- Deterministic: $byte_1 = byte_2 \land h_1 = h_2 \implies \text{SighashType}(byte_1, h_1) = \text{SighashType}(byte_2, h_2)$
- Result type: $\text{SighashType}(byte, h) \in \{\text{AllLegacy}, \text{All}, \text{None}, \text{Single}, \text{All} \mid \text{AnyoneCanPay}, \text{None} \mid \text{AnyoneCanPay}, \text{Single} \mid \text{AnyoneCanPay}, \text{Invalid}\}$
For sighash byte $byte$ and height $h$:
$$\text{SighashType}(byte, h) = \begin{cases} \text{AllLegacy} & \text{if } byte = 0x00 \land h < H_{66} \\ \text{All} & \text{if } byte = 0x01 \\ \text{None} & \text{if } byte = 0x02 \\ \text{Single} & \text{if } byte = 0x03 \\ \text{All} \mid \text{AnyoneCanPay} & \text{if } byte = 0x81 \\ \text{None} \mid \text{AnyoneCanPay} & \text{if } byte = 0x82 \\ \text{Single} \mid \text{AnyoneCanPay} & \text{if } byte = 0x83 \\ \text{Invalid} & \text{otherwise} \end{cases}$$
Where $H_{66}$ is the BIP66 activation height (mainnet: 363,724).
Early Bitcoin Legacy: In early Bitcoin (pre-BIP66), sighash type $0x00$ was accepted and treated as SIGHASH_ALL. This is represented as $\text{AllLegacy}$ to preserve the correct byte value for sighash computation.
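The byte-to-type mapping above is small enough to sketch directly; the string labels here are illustrative stand-ins for the SighashType variants:

```python
H_66 = 363_724  # BIP66 activation height used throughout this section

def sighash_type(byte, height):
    """Sketch of the SighashType mapping from a sighash byte and height."""
    if byte == 0x00:
        # 0x00 is only the legacy SIGHASH_ALL before BIP66 activation
        return "AllLegacy" if height < H_66 else "Invalid"
    table = {0x01: "All", 0x02: "None", 0x03: "Single",
             0x81: "All|AnyoneCanPay", 0x82: "None|AnyoneCanPay",
             0x83: "Single|AnyoneCanPay"}
    return table.get(byte, "Invalid")
```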
Theorem 5.1.1 (P2SH Redeem Script Sighash): For P2SH transactions, the sighash must use the redeem script instead of the scriptPubKey.
Proof: By construction, P2SH scriptPubKeys contain only a hash of the redeem script. The actual script logic is in the redeem script, which must be used for sighash calculation to ensure signatures validate correctly. This is proven by the requirement that $\text{SighashScriptCode}$ returns the redeem script for P2SH transactions.
Theorem 5.1.2 (Sighash Type AllLegacy): Early Bitcoin (pre-BIP66) accepted sighash type 0x00 as SIGHASH_ALL.
Proof: Historical Bitcoin blocks before BIP66 activation (block 363,724) contain transactions with sighash type 0x00. These transactions are valid and must be accepted. The $\text{SighashType}$ function maps $0x00$ to $\text{AllLegacy}$ for heights $< H_{66}$ to preserve compatibility with these historical transactions.
5.2 Script Execution
Bitcoin uses a stack-based scripting language for transaction validation. Scripts are executed to determine whether a transaction output can be spent.
EvalScript: $\mathcal{SC} \times \mathcal{ST} \times \mathbb{N} \rightarrow \{\text{true}, \text{false}\}$
Script execution follows a stack-based virtual machine:
- Initialize stack $S = \emptyset$
- For each opcode $op$ in script:
  - If $|S| > L_{stack}$: return $\text{false}$ (stack overflow)
  - If operation count $> L_{ops}$: return $\text{false}$ (operation limit exceeded)
  - Execute $op$ with current stack state
  - If execution fails: return $\text{false}$
- Return $|S| = 1 \land S[0] \neq 0$ (exactly one non-zero value on stack)
Properties:
- Success condition: $\text{EvalScript}(script, stack, flags) = \text{true} \iff |stack| = 1 \land stack[0] \neq 0$
- Stack bounds: $\text{EvalScript}(script, stack, flags) \implies |stack| \leq L_{\text{stack}}$ (stack never exceeds maximum size)
- Empty script: $|script| = 0 \implies \text{EvalScript}(script, stack, flags) = \text{false}$ (empty script always fails)
- Operation limit: $\text{EvalScript}(script, stack, flags)$ fails if operation count exceeds $L_{\text{ops}}$
- Stack overflow: If $|stack| > L_{\text{stack}}$ during execution, $\text{EvalScript}(script, stack, flags) = \text{false}$
- Boolean result: $\text{EvalScript}(script, stack, flags) \in \{\text{true}, \text{false}\}$
- Deterministic: $script_1 = script_2 \land stack_1 = stack_2 \land flags_1 = flags_2 \implies \text{EvalScript}(script_1, stack_1, flags_1) = \text{EvalScript}(script_2, stack_2, flags_2)$
- Stack preservation: During execution, stack size is bounded by $L_{\text{stack}}$
- Failure modes: $\text{EvalScript}(script, stack, flags) = \text{false}$ if stack overflow, operation limit exceeded, or opcode execution fails
Properties:
- Script verification correctness: $\text{VerifyScript}(ss, spk, w, f) = \text{true} \iff$ script execution succeeds with final stack having exactly one true value
- P2SH validation: $(f \land 0x01) \neq 0 \land \text{IsP2SH}(spk) \implies \text{P2SHPushOnlyCheck}(ss)$ must be valid
- Boolean result: $\text{VerifyScript}(ss, spk, w, f) \in \{\text{true}, \text{false}\}$
- Deterministic: $ss_1 = ss_2 \land spk_1 = spk_2 \land w_1 = w_2 \land f_1 = f_2 \implies \text{VerifyScript}(ss_1, spk_1, w_1, f_1) = \text{VerifyScript}(ss_2, spk_2, w_2, f_2)$
- Execution order: $\text{VerifyScript}(ss, spk, w, f)$ executes $ss$ first, then $spk$, then $w$ if present
- Stack initialization: $\text{VerifyScript}(ss, spk, w, f)$ starts with empty stack for $ss$ execution
- Final stack condition: $\text{VerifyScript}(ss, spk, w, f) = \text{true} \implies$ final stack has exactly one non-zero element
VerifyScript: $\mathcal{SC} \times \mathcal{SC} \times \mathcal{W} \times \mathbb{N} \rightarrow \{\text{true}, \text{false}\}$
For scriptSig $ss$, scriptPubKey $spk$, witness $w$, and flags $f$:
- P2SH Push-Only Validation: If $(f \land 0x01) \neq 0$ (SCRIPT_VERIFY_P2SH) and $\text{IsP2SH}(spk)$, then $\text{P2SHPushOnlyCheck}(ss)$ must be valid
- Execute $ss$ on empty stack
- Execute $spk$ on resulting stack
- If witness present: execute $w$ on stack
- Return final stack has exactly one true value
5.2.1 P2SH Push-Only Validation
P2SHPushOnlyCheck: $\mathbb{S} \rightarrow \{\text{valid}, \text{invalid}\}$
Properties:
- Push-only validation: $\text{P2SHPushOnlyCheck}(ss) = \text{valid} \iff \forall op \in ss : \text{IsPushOpcode}(op)$
- Boolean result: $\text{P2SHPushOnlyCheck}(ss) \in \{\text{valid}, \text{invalid}\}$
- Deterministic: $ss_1 = ss_2 \implies \text{P2SHPushOnlyCheck}(ss_1) = \text{P2SHPushOnlyCheck}(ss_2)$
- Empty script: $\text{P2SHPushOnlyCheck}(\emptyset) = \text{valid}$ (empty script is valid push-only)
- Non-push opcode: If $\exists op \in ss : \neg \text{IsPushOpcode}(op)$, then $\text{P2SHPushOnlyCheck}(ss) = \text{invalid}$
For P2SH scriptSig $ss$:
$$\text{P2SHPushOnlyCheck}(ss) = \begin{cases} \text{valid} & \text{if } \forall op \in ss : \text{IsPushOpcode}(op) \\ \text{invalid} & \text{otherwise} \end{cases}$$
Where $\text{IsPushOpcode}(op)$ returns true if $op$ is a push opcode (0x00-0x4e) with valid encoding:
- Direct push: $0x01 \leq op \leq 0x4b$ (push 1-75 bytes)
- OP_PUSHDATA1: $op = 0x4c$ (followed by 1-byte length)
- OP_PUSHDATA2: $op = 0x4d$ (followed by 2-byte length)
- OP_PUSHDATA4: $op = 0x4e$ (followed by 4-byte length)
- OP_0: $op = 0x00$ (push empty array)
P2SH Detection: $\text{IsP2SH}(spk) = (|spk| = 23) \land (spk[0] = 0xa9) \land (spk[1] = 0x14) \land (spk[22] = 0x87)$
Where:
- $0xa9$ is OP_HASH160
- $0x14$ is push 20 bytes
- $0x87$ is OP_EQUAL
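Both checks above operate on raw script bytes and can be sketched as follows. This is a simplified illustration of the definitions in this section (push opcodes 0x00-0x4e only, as defined above), not the BLVM implementation:

```python
def is_p2sh(spk: bytes) -> bool:
    """P2SH pattern: OP_HASH160 <20-byte push> OP_EQUAL, 23 bytes total."""
    return len(spk) == 23 and spk[0] == 0xA9 and spk[1] == 0x14 and spk[22] == 0x87

def p2sh_push_only_check(ss: bytes) -> bool:
    """Walk the scriptSig and verify every opcode is a push operation."""
    i = 0
    while i < len(ss):
        op = ss[i]
        i += 1
        if op <= 0x4B:            # OP_0 or direct push of 1-75 bytes
            i += op
        elif op == 0x4C:          # OP_PUSHDATA1: 1-byte length follows
            if i >= len(ss):
                return False
            i += 1 + ss[i]
        elif op == 0x4D:          # OP_PUSHDATA2: 2-byte length follows
            if i + 1 >= len(ss):
                return False
            i += 2 + int.from_bytes(ss[i:i + 2], "little")
        elif op == 0x4E:          # OP_PUSHDATA4: 4-byte length follows
            if i + 3 >= len(ss):
                return False
            i += 4 + int.from_bytes(ss[i:i + 4], "little")
        else:
            return False          # any non-push opcode invalidates the scriptSig
        if i > len(ss):
            return False          # push length runs past the end of the script
    return True
```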
Security Property: P2SH push-only validation prevents script injection attacks:
$$\forall ss, spk \in \mathbb{S}, w \in \mathcal{W}, f \in \mathbb{N}_{32} : (f \land 0x01) \neq 0 \land \text{IsP2SH}(spk) \land \neg \text{P2SHPushOnlyCheck}(ss) \implies \text{VerifyScript}(ss, spk, w, f) = \text{false}$$
Theorem 5.2.1 (P2SH Push-Only Security): P2SH scriptSig must contain only push operations to prevent script injection.
Proof: By construction, if a P2SH scriptSig $ss$ contains any non-push opcode, then $\text{P2SHPushOnlyCheck}(ss) = \text{invalid}$, causing $\text{VerifyScript}(ss, spk, w, f) = \text{false}$ before script execution. This prevents malicious opcodes from being executed, ensuring that only data (the redeem script) is pushed onto the stack.
Activation: Block 173,805 (mainnet) - Same as P2SH activation (BIP16)
5.2.2 Signature Operation Counting
Signature operations (sigops) are counted to enforce the MAX_BLOCK_SIGOPS_COST limit (80,000) per block. Sigops include OP_CHECKSIG, OP_CHECKSIGVERIFY, OP_CHECKMULTISIG, and OP_CHECKMULTISIGVERIFY operations.
CountSigOpsInScript: $\mathbb{S} \times \{\text{true}, \text{false}\} \rightarrow \mathbb{N}$
Properties:
- Sigop count bounded: $\text{CountSigOpsInScript}(s, a) \leq 20 \cdot |s|$ for all scripts $s$ (each multisig opcode counts at most MAX_PUBKEYS_PER_MULTISIG = 20 sigops)
- Non-negativity: $\text{CountSigOpsInScript}(s, a) \geq 0$ for all scripts $s$
- Empty script: $\text{CountSigOpsInScript}(\emptyset, a) = 0$
For script $s$ and accurate flag $a$:
$$\text{CountSigOpsInScript}(s, a) = \sum_{i=0}^{|s|-1} \text{SigOpCount}(s[i], s, i, a)$$
Where $\text{SigOpCount}(op, s, i, a)$ returns:
- $1$ if $op \in \{0xac, 0xad\}$ (OP_CHECKSIG, OP_CHECKSIGVERIFY)
- $n$ if $op \in \{0xae, 0xaf\}$ (OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY), where:
  - If $a = \text{true}$ and $i > 0$ and $s[i-1] \in [0x51, 0x60]$ (OP_1 to OP_16), then $n = s[i-1] - 0x50$
  - Otherwise, $n = 20$ (MAX_PUBKEYS_PER_MULTISIG)
- $0$ otherwise
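The counting rules above can be sketched over raw script bytes. This is an illustrative simplification: push payloads are skipped so data bytes are never misread as opcodes, and the accurate-mode lookback uses the previous opcode rather than the raw previous byte.

```python
MAX_PUBKEYS_PER_MULTISIG = 20

def count_sigops_in_script(s: bytes, accurate: bool) -> int:
    """Sketch of CountSigOpsInScript for legacy sigop counting."""
    count = 0
    i = 0
    last_op = None
    while i < len(s):
        op = s[i]
        i += 1
        if 0x01 <= op <= 0x4B:
            i += op                        # skip direct push payload
        elif op == 0x4C and i < len(s):
            i += 1 + s[i]                  # OP_PUSHDATA1
        elif op == 0x4D and i + 1 < len(s):
            i += 2 + int.from_bytes(s[i:i + 2], "little")
        elif op == 0x4E and i + 3 < len(s):
            i += 4 + int.from_bytes(s[i:i + 4], "little")
        elif op in (0xAC, 0xAD):           # OP_CHECKSIG(VERIFY)
            count += 1
        elif op in (0xAE, 0xAF):           # OP_CHECKMULTISIG(VERIFY)
            if accurate and last_op is not None and 0x51 <= last_op <= 0x60:
                count += last_op - 0x50    # preceding OP_n gives the key count
            else:
                count += MAX_PUBKEYS_PER_MULTISIG
        last_op = op
    return count
```

For example, `OP_2 OP_CHECKMULTISIG` counts 2 sigops in accurate mode but 20 in inaccurate (legacy) mode.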
GetLegacySigOpCount: $\mathcal{TX} \rightarrow \mathbb{N}$
Properties:
- Non-negativity: $\text{GetLegacySigOpCount}(tx) \geq 0$ for all transactions $tx$
- Coinbase sigops: $\text{IsCoinbase}(tx) = \text{true} \implies \text{GetLegacySigOpCount}(tx) \geq 0$ (coinbase may have sigops in scriptSig)
For transaction $tx$:
$$\text{GetLegacySigOpCount}(tx) = \sum_{i \in tx.\text{inputs}} \text{CountSigOpsInScript}(i.\text{scriptSig}, \text{false}) + \sum_{o \in tx.\text{outputs}} \text{CountSigOpsInScript}(o.\text{scriptPubkey}, \text{false})$$
GetP2SHSigOpCount: $\mathcal{TX} \times \mathcal{US} \rightarrow \mathbb{N}$
Properties:
- Coinbase zero: $\text{IsCoinbase}(tx) = \text{true} \implies \text{GetP2SHSigOpCount}(tx, us) = 0$
- Non-negativity: $\text{GetP2SHSigOpCount}(tx, us) \geq 0$ for all transactions $tx$ and UTXO sets $us$
- P2SH only: $\text{GetP2SHSigOpCount}(tx, us) > 0 \implies \exists i \in tx.inputs: \text{IsP2SH}(us(i.prevout).scriptPubkey)$
For transaction $tx$ and UTXO set $us$:
$$\text{GetP2SHSigOpCount}(tx, us) = \begin{cases} 0 & \text{if } \text{IsCoinbase}(tx) \\ \sum_{i \in tx.\text{inputs}} \text{P2SHSigOps}(i, us) & \text{otherwise} \end{cases}$$
Where $\text{P2SHSigOps}(i, us) = \begin{cases} \text{CountSigOpsInScript}(r, \text{true}) & \text{if } \text{IsP2SH}(us(i.\text{prevout}).\text{scriptPubkey}) \land \text{ExtractRedeemScript}(i.\text{scriptSig}) = r \\ 0 & \text{otherwise} \end{cases}$
GetTransactionSigOpCost: $\mathcal{TX} \times \mathcal{US} \times \mathcal{W}^? \times \mathbb{N}_{32} \rightarrow \mathbb{N}$
Properties:
- Non-negativity: $\text{GetTransactionSigOpCost}(tx, us, w, f) \geq 0$ for all valid inputs
- Cost formula: $\text{GetTransactionSigOpCost}(tx, us, w, f) = \text{GetLegacySigOpCount}(tx) \times 4 + \text{GetP2SHSigOpCount}(tx, us) \times 4 \times \text{IsP2SHEnabled}(f) + \text{CountWitnessSigOps}(tx, w, us, f)$
- Block limit: $\sum_{tx \in block.transactions} \text{GetTransactionSigOpCost}(tx, us, w, f) \leq S_{max}$ for valid blocks, where $S_{max} = 80,000$ (MAX_BLOCK_SIGOPS_COST)
For transaction $tx$, UTXO set $us$, witness $w$, and flags $f$:
$$\text{GetTransactionSigOpCost}(tx, us, w, f) = \text{GetLegacySigOpCount}(tx) \times 4 + \text{GetP2SHSigOpCount}(tx, us) \times 4 \times \text{IsP2SHEnabled}(f) + \text{CountWitnessSigOps}(tx, w, us, f)$$
Where:
- $\text{IsP2SHEnabled}(f) = (f \land 0x01) \neq 0$
- $\text{CountWitnessSigOps}(tx, w, us, f)$ counts sigops in witness scripts for SegWit transactions
Block SigOps Limit: For block $b$:
$$\sum_{tx \in b.\text{transactions}} \text{GetTransactionSigOpCost}(tx, us, w_{tx}, f) \leq S_{max}$$
Where $S_{max} = 80,000$ (MAX_BLOCK_SIGOPS_COST).
5.2.3 Script Verification Flags
CalculateScriptFlags: $\mathcal{TX} \times \mathcal{W}^? \times \mathbb{N} \times \text{Network} \rightarrow \mathbb{N}_{32}$
Properties:
- Flag activation: $\text{CalculateScriptFlags}(tx, w, h, n) = f \implies \forall flag \in f: h \geq H_{flag}(n)$
- Per-transaction calculation: flags are computed for each transaction individually; $\text{CalculateScriptFlags}(tx_1, w_1, h, n)$ and $\text{CalculateScriptFlags}(tx_2, w_2, h, n)$ may differ for $tx_1 \neq tx_2$
- Flag monotonicity: $h_1 \leq h_2 \implies \text{CalculateScriptFlags}(tx, w, h_1, n) \subseteq \text{CalculateScriptFlags}(tx, w, h_2, n)$
For transaction $tx$, witness $w$, height $h$, and network $n$:
$$\text{CalculateScriptFlags}(tx, w, h, n) = \bigcup_{flag \in \text{ActiveFlags}(tx, w, h, n)} flag$$
Where $\text{ActiveFlags}(tx, w, h, n)$ returns the set of flags active for this transaction:
$$\text{ActiveFlags}(tx, w, h, n) = \{f : f \in \text{AllFlags} \land \text{IsFlagActive}(f, tx, w, h, n)\}$$
Flag Activation: $\text{IsFlagActive}(f, tx, w, h, n) = (h \geq H_f(n)) \land \text{FlagCondition}(f, tx, w)$
Where:
- $H_f(n)$ is the activation height for flag $f$ on network $n$
- $\text{FlagCondition}(f, tx, w)$ is the transaction-specific condition for flag $f$
Flag Definitions:
- SCRIPT_VERIFY_P2SH ($f = 0x01$): $H_f(\text{mainnet}) = 173,805$, $\text{FlagCondition} = \text{true}$ (always active after activation)
- SCRIPT_VERIFY_STRICTENC ($f = 0x02$): $H_f(\text{mainnet}) = 363,724$ (BIP66), $\text{FlagCondition} = \text{true}$
- SCRIPT_VERIFY_DERSIG ($f = 0x04$): $H_f(\text{mainnet}) = 363,724$ (BIP66), $\text{FlagCondition} = \text{true}$
- SCRIPT_VERIFY_LOW_S ($f = 0x08$): $H_f(\text{mainnet}) = 363,724$ (BIP66), $\text{FlagCondition} = \text{true}$
- SCRIPT_VERIFY_NULLDUMMY ($f = 0x10$): $H_f(\text{mainnet}) = 481,824$ (BIP147), $\text{FlagCondition} = \text{true}$
- SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY ($f = 0x200$): $H_f(\text{mainnet}) = 388,381$ (BIP65), $\text{FlagCondition} = \text{true}$
- SCRIPT_VERIFY_CHECKSEQUENCEVERIFY ($f = 0x400$): $H_f(\text{mainnet}) = 481,824$ (BIP112), $\text{FlagCondition} = \text{true}$
- SCRIPT_VERIFY_WITNESS ($f = 0x800$): $H_f(\text{mainnet}) = 481,824$ (SegWit), $\text{FlagCondition} = (w \neq \emptyset \lor \text{IsSegWitTransaction}(tx))$
- SCRIPT_VERIFY_WITNESS_PUBKEYTYPE ($f = 0x8000$): $H_f(\text{mainnet}) = 709,632$ (Taproot), $\text{FlagCondition} = \exists o \in tx.\text{outputs} : \text{IsP2TR}(o.\text{scriptPubkey})$
P2TR Detection: $\text{IsP2TR}(spk) = (|spk| = 34) \land (spk[0] = 0x51) \land (spk[1] = 0x20)$
Where:
- $0x51$ is OP_1
- $0x20$ is push 32 bytes
Mathematical Property: Flags are calculated per-transaction, not per-block: for $tx_1 \neq tx_2$ in the same block, $\text{CalculateScriptFlags}(tx_1, w_1, h, n)$ and $\text{CalculateScriptFlags}(tx_2, w_2, h, n)$ are computed independently and may differ.
Theorem 5.2.2 (Per-Transaction Flag Calculation): Script verification flags must be calculated per-transaction based on transaction characteristics and block height.
Proof: By construction, flags depend on both block height (activation) and transaction characteristics (witness presence, output types). Different transactions in the same block may have different flags, so flags cannot be calculated once per block. This is proven by the implementation requirement that $\text{CalculateScriptFlags}$ is called for each transaction individually.
Activation Heights (Mainnet):
- P2SH: Block 173,805
- BIP66 (DER, STRICTENC, LOW_S): Block 363,724
- BIP65 (CLTV): Block 388,381
- SegWit (WITNESS, NULLDUMMY, CSV): Block 481,824
- Taproot (WITNESS_PUBKEYTYPE): Block 709,632
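The height-based part of the flag calculation can be sketched as a bitwise union over the table above (mainnet heights as listed in this section); the transaction-specific conditions are modeled as booleans for brevity, and this is an illustration rather than the BLVM implementation:

```python
# (flag bit, mainnet activation height) pairs from the flag definitions above
FLAGS = [
    (0x01,  173_805),   # SCRIPT_VERIFY_P2SH
    (0x02,  363_724),   # SCRIPT_VERIFY_STRICTENC
    (0x04,  363_724),   # SCRIPT_VERIFY_DERSIG
    (0x08,  363_724),   # SCRIPT_VERIFY_LOW_S
    (0x10,  481_824),   # SCRIPT_VERIFY_NULLDUMMY
    (0x200, 388_381),   # SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY
    (0x400, 481_824),   # SCRIPT_VERIFY_CHECKSEQUENCEVERIFY
]

def calculate_script_flags(height, has_witness=False, has_p2tr_output=False):
    """Sketch of CalculateScriptFlags as a bitwise union of active flags."""
    f = 0
    for flag, activation in FLAGS:
        if height >= activation:        # height condition; FlagCondition = true
            f |= flag
    if height >= 481_824 and has_witness:
        f |= 0x800                      # SCRIPT_VERIFY_WITNESS
    if height >= 709_632 and has_p2tr_output:
        f |= 0x8000                     # SCRIPT_VERIFY_WITNESS_PUBKEYTYPE
    return f
```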
5.3 Block Validation
ConnectBlock: $\mathcal{B} \times \mathcal{US} \times \mathbb{N} \rightarrow \{\text{valid}, \text{invalid}\} \times \mathcal{US}$
Properties:
- Block structure: $\text{ConnectBlock}(b, us, height) = \text{valid} \implies |b.transactions| > 0$ (block must have transactions)
- Coinbase requirement: $\text{ConnectBlock}(b, us, height) = \text{valid} \implies \text{IsCoinbase}(b.transactions[0]) = \text{true}$ (first transaction must be coinbase)
- UTXO consistency: $\text{ConnectBlock}(b, us, height) = (\text{valid}, us') \implies$ UTXO set $us'$ reflects all transactions in block $b$
- Transaction validation: $\text{ConnectBlock}(b, us, height) = \text{valid} \implies \forall tx \in b.transactions : \text{CheckTransaction}(tx) = \text{valid}$
- Input validation: $\text{ConnectBlock}(b, us, height) = \text{valid} \implies \forall tx \in b.transactions : \text{CheckTxInputs}(tx, us, height) = (\text{valid}, fee)$
- Script verification: $\text{ConnectBlock}(b, us, height) = \text{valid} \implies \forall tx \in b.transactions :$ all scripts verify successfully
- Coinbase fee: $\text{ConnectBlock}(b, us, height) = \text{valid} \implies$ coinbase output $\leq$ fees + subsidy
- Result type: $\text{ConnectBlock}(b, us, height) \in \{(\text{valid}, \mathcal{US}), (\text{invalid}, \mathcal{US})\}$
- Deterministic: $b_1 = b_2 \land us_1 = us_2 \land height_1 = height_2 \implies \text{ConnectBlock}(b_1, us_1, height_1) = \text{ConnectBlock}(b_2, us_2, height_2)$
- UTXO set growth: $\text{ConnectBlock}(b, us, height) = (\text{valid}, us') \implies |us'| = |us| - \sum_{tx \in b.transactions, \neg \text{IsCoinbase}(tx)} |tx.inputs| + \sum_{tx \in b.transactions} |tx.outputs|$
For block $b = (h, txs)$ with UTXO set $us$ at height $height$:
- Validate block header $h$
- For each transaction $tx \in txs$:
  - Validate $tx$ structure
  - Check inputs against $us$
  - Verify scripts
- Let $fees = \sum_{tx \in txs} \text{fee}(tx)$
- Let $subsidy = \text{GetBlockSubsidy}(height)$
- If coinbase output $> fees + subsidy$: return $(\text{invalid}, us)$
- Apply all transactions to $us$: $us' = \text{ApplyTransactions}(txs, us)$
- Return $(\text{valid}, us')$
5.3.1 Transaction Application Equivalence
Theorem 5.3.1 (ApplyTransaction Equivalence): The functions apply_transaction and apply_transaction_with_id produce identical results:
$$\forall tx \in \mathcal{TX}, us \in \mathcal{US}, h \in \mathbb{N}:$$ $$\text{ApplyTransaction}(tx, us, h) = \text{ApplyTransactionWithId}(tx, \text{CalculateTxId}(tx), us, h)$$
Proof: Both functions perform the same UTXO set transformations. The only difference is that apply_transaction_with_id accepts a pre-computed transaction ID, while apply_transaction computes it internally. The equivalence is verified by comparing the UTXO sets produced by both functions and confirming they are identical. This property is implicitly proven through the consistency proofs in the implementation.
Corollary 5.3.1.1: Transaction application is deterministic and side-effect-free, regardless of which function is used.
ApplyTransaction: $\mathcal{TX} \times \mathcal{US} \rightarrow \mathcal{US} \times \mathcal{UL}$ (returns the updated UTXO set and an undo log $ul \in \mathcal{UL}$)
Properties:
- Undo entries match inputs: $\text{ApplyTransaction}(tx, us) = (us', ul) \land \neg \text{IsCoinbase}(tx) \implies |ul| = |tx.inputs|$ (undo log has one entry per spent input)
- Coinbase undo: $\text{IsCoinbase}(tx) = \text{true} \implies \text{ApplyTransaction}(tx, us) = (us', \emptyset)$ (coinbase has no undo entries)
- UTXO consistency: $\text{ApplyTransaction}(tx, us) = (us', ul) \implies$ UTXO set $us'$ reflects transaction $tx$ applied to $us$
- Spent inputs removed: $\text{ApplyTransaction}(tx, us) = (us', ul) \land \neg \text{IsCoinbase}(tx) \implies \forall i \in tx.inputs : i.prevout \notin us'$ (spent inputs removed)
- Outputs added: $\text{ApplyTransaction}(tx, us) = (us', ul) \implies \forall i \in [0, |tx.outputs|) : (tx.id, i) \in us'$ (all outputs added)
- UTXO set size: $\text{ApplyTransaction}(tx, us) = (us', ul) \land \neg \text{IsCoinbase}(tx) \implies |us'| = |us| - |tx.inputs| + |tx.outputs|$
- Coinbase UTXO set size: $\text{ApplyTransaction}(tx, us) = (us', ul) \land \text{IsCoinbase}(tx) \implies |us'| = |us| + |tx.outputs|$
- Deterministic: $tx_1 = tx_2 \land us_1 = us_2 \implies \text{ApplyTransaction}(tx_1, us_1) = \text{ApplyTransaction}(tx_2, us_2)$
- Reversibility with undo: if $\text{ConnectBlock}(b, us, h) = (\text{valid}, us')$ with undo log $ul$, then $\text{DisconnectBlock}(b, ul, us') = us$
For transaction $tx$ and UTXO set $us$:
- If $tx$ is coinbase: $us' = us \cup \{(tx.id, i) \mapsto tx.outputs[i] : i \in [0, |tx.outputs|)\}$
- Otherwise: $us' = (us \setminus \{i.prevout : i \in tx.inputs\}) \cup \{(tx.id, i) \mapsto tx.outputs[i] : i \in [0, |tx.outputs|)\}$
- Return $us'$
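The UTXO transformation above can be sketched as a pure function over a dict (illustrative data shapes, not the BLVM API; the undo log is omitted for brevity):

```python
def apply_transaction(tx, us):
    """Sketch of ApplyTransaction: spend inputs, then add outputs.

    us maps (txid, index) -> output value; a new dict is returned and the
    input dict is left untouched (application is side-effect-free).
    """
    us2 = dict(us)
    if not tx["coinbase"]:
        for prevout in tx["inputs"]:
            del us2[prevout]                 # remove each spent input
    for n, out in enumerate(tx["outputs"]):
        us2[(tx["id"], n)] = out             # add each created output
    return us2
```

Note that for a non-coinbase transaction the size changes by exactly $|tx.outputs| - |tx.inputs|$, matching the UTXO set size property above.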
5.4 BIP Validation Rules
This section specifies the mathematical properties of critical Bitcoin Improvement Proposals (BIPs) that enforce consensus rules for block and transaction validation.
5.4.1 BIP30: Duplicate Coinbase Prevention
BIP30Check: $\mathcal{B} \times \mathcal{US} \times \mathbb{N} \times \text{Network} \rightarrow \{\text{valid}, \text{invalid}\}$
Properties:
- Deactivation height: $h > H_{30\text{-}deact}(n) \implies \text{BIP30Check}(b, us, h, n) = \text{valid}$ (after deactivation, always valid)
- Duplicate coinbase prevention: $\text{BIP30Check}(b, us, h, n) = \text{invalid} \implies$ duplicate coinbase transaction detected
- Validation correctness: $\text{BIP30Check}(b, us, h, n)$ prevents duplicate coinbase transactions before deactivation height
For block $b = (h, txs)$ with UTXO set $us$, height $h$, and network $n$:
$$\text{BIP30Check}(b, us, h, n) = \begin{cases} \text{valid} & \text{if } h > H_{30\text{-}deact}(n) \\ \text{invalid} & \text{if } h \leq H_{30\text{-}deact}(n) \land \exists tx \in txs : \text{IsCoinbase}(tx) \land \text{txid}(tx) \in \text{CoinbaseTxids}(us) \\ \text{valid} & \text{otherwise} \end{cases}$$
Where:
- $H_{30\text{-}deact}(n)$ is the BIP30 deactivation height for network $n$:
  - Mainnet: $H_{30\text{-}deact}(\text{mainnet}) = 91,722$
  - Testnet: $H_{30\text{-}deact}(\text{testnet}) = 0$ (never enforced)
  - Regtest: $H_{30\text{-}deact}(\text{regtest}) = 0$ (never enforced)
- $\text{CoinbaseTxids}(us)$ is the set of all coinbase transaction IDs that have created UTXOs in $us$.
Deactivation: BIP30 was disabled after block 91,722 (mainnet) to allow duplicate coinbases in blocks 91,842 and 91,880 (historical bug, grandfathered exception).
Mathematical Property: BIP30 ensures coinbase transaction uniqueness before deactivation:
$$\forall b_1, b_2 \in \mathcal{B}, b_1 \neq b_2, \text{ at heights } \leq H_{30\text{-}deact}(n), \text{ with coinbase transactions } tx_1 \in b_1, tx_2 \in b_2 : \text{txid}(tx_1) \neq \text{txid}(tx_2)$$
Theorem 5.4.1 (BIP30 Uniqueness): BIP30 prevents duplicate coinbase transactions before deactivation height.
Proof: By construction, if a coinbase transaction $tx$ at height $h \leq H_{30\text{-}deact}(n)$ has $\text{txid}(tx) \in \text{CoinbaseTxids}(us)$, then $\text{BIP30Check}(b, us, h, n) = \text{invalid}$, preventing the block from being accepted. Since coinbase transactions create new UTXOs, their transaction IDs are recorded in the UTXO set, ensuring uniqueness across all blocks before deactivation.
Activation: Block 0 (always active until deactivation)
Deactivation Heights:
- Mainnet: Block 91,722
- Testnet: Block 0 (never enforced)
- Regtest: Block 0 (never enforced)
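The check reduces to a membership test gated by the deactivation height, sketched below with illustrative data shapes (a set of coinbase txids standing in for $\text{CoinbaseTxids}(us)$):

```python
H30_DEACT = {"mainnet": 91_722, "testnet": 0, "regtest": 0}

def bip30_check(block_txs, coinbase_txids_in_utxo, height, network):
    """Sketch of BIP30Check: reject a duplicate coinbase txid before the
    deactivation height; always valid afterwards."""
    if height > H30_DEACT[network]:
        return "valid"                      # BIP30 no longer enforced
    for tx in block_txs:
        if tx["coinbase"] and tx["id"] in coinbase_txids_in_utxo:
            return "invalid"                # duplicate coinbase detected
    return "valid"
```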
5.4.2 BIP34: Block Height in Coinbase
BIP34Check: $\mathcal{B} \times \mathbb{N} \rightarrow \{\text{valid}, \text{invalid}\}$
Properties:
- Height requirement: $\text{BIP34Check}(b, h) = \text{valid} \implies h < H_{34} \lor (\forall tx \in b.transactions : \text{IsCoinbase}(tx) \implies \text{ExtractHeight}(tx) = h)$
- Coinbase height: $\text{BIP34Check}(b, h) = \text{invalid} \implies h \geq H_{34} \land \exists tx \in b.transactions : \text{IsCoinbase}(tx) \land \text{ExtractHeight}(tx) \neq h$
- Validation correctness: $\text{BIP34Check}(b, h)$ ensures coinbase contains correct block height after activation
For block $b = (h, txs)$ at height $h$:
$$\text{BIP34Check}(b, h) = \begin{cases} \text{invalid} & \text{if } h \geq H_{34} \land \exists tx \in txs : \text{IsCoinbase}(tx) \land \text{ExtractHeight}(tx) \neq h \\ \text{valid} & \text{otherwise} \end{cases}$$
Where:
- $H_{34}$ is the BIP34 activation height (mainnet: 227,836; testnet: 211,111; regtest: 0)
- $\text{ExtractHeight}(tx)$ extracts the block height from coinbase scriptSig using CScriptNum encoding
Height Encoding: The block height is encoded in the coinbase scriptSig as a script number:
$$\text{EncodeHeight}(h) = \text{CScriptNum}(h)$$
Where $\text{CScriptNum}$ encodes the height as a variable-length integer in the script format.
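For non-negative heights, the CScriptNum encoding is minimal little-endian with a sign-bit guard, which can be sketched as:

```python
def encode_height(h: int) -> bytes:
    """Sketch of EncodeHeight via CScriptNum for non-negative heights:
    minimal little-endian bytes, with a trailing 0x00 appended when the
    top bit of the most significant byte would read as a sign bit."""
    if h == 0:
        return b""                  # zero encodes as the empty array
    out = bytearray()
    n = h
    while n:
        out.append(n & 0xFF)        # emit least significant byte first
        n >>= 8
    if out[-1] & 0x80:
        out.append(0x00)            # keep the encoded number positive
    return bytes(out)
```

For example, height 128 needs the extra zero byte (`80 00`), while height 227,836 encodes as the three bytes `fc 79 03`.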
Mathematical Property: BIP34 ensures coinbase height consistency:
$$\forall b = (h, txs) \in \mathcal{B}, h \geq H_{34}, \forall tx \in txs : \text{IsCoinbase}(tx) \implies \text{ExtractHeight}(tx) = h$$
Theorem 5.4.2 (BIP34 Height Consistency): BIP34 ensures that coinbase transactions encode the correct block height.
Proof: For any block $b$ at height $h \geq H_{34}$, if the coinbase transaction $tx$ does not encode height $h$ in its scriptSig, then $\text{BIP34Check}(b, h) = \text{invalid}$, preventing block acceptance. This ensures that all blocks after activation height have consistent height encoding.
Activation Heights:
- Mainnet: Block 227,836
- Testnet: Block 211,111
- Regtest: Block 0 (always active)
5.4.3 BIP66: Strict DER Signature Validation
BIP66Check: $\mathbb{S} \times \mathbb{N} \rightarrow \{\text{valid}, \text{invalid}\}$
Properties:
- Strict DER requirement: $\text{BIP66Check}(sig, h) = \text{valid} \implies h < H_{66} \lor \text{IsStrictDER}(sig)$
- DER validation: $\text{BIP66Check}(sig, h) = \text{invalid} \implies h \geq H_{66} \land \neg \text{IsStrictDER}(sig)$
- Validation correctness: $\text{BIP66Check}(sig, h)$ enforces strict DER encoding after activation height
For signature $sig \in \mathbb{S}$ at block height $h$:
$$\text{BIP66Check}(sig, h) = \begin{cases} \text{invalid} & \text{if } h \geq H_{66} \land \neg \text{IsStrictDER}(sig) \\ \text{valid} & \text{otherwise} \end{cases}$$
Where:
- $H_{66}$ is the BIP66 activation height (mainnet: 363,724; testnet: 330,776; regtest: 0)
- $\text{IsStrictDER}(sig)$ verifies that $sig$ is strictly DER-encoded according to X.690 ASN.1 encoding rules
Strict DER Requirements:
- Sequence Structure: $sig$ must be a valid DER-encoded SEQUENCE
- Integer Encoding: Both $r$ and $s$ values must be strictly DER-encoded integers
- No Unnecessary Leading Zeros: Integers must not have leading zero bytes, except for a single zero byte prepended when the most significant byte has its high bit set (which would otherwise make the value read as negative)
- Minimal Length: Encoding must use minimal length representation
Mathematical Property: BIP66 enforces strict DER signature encoding:
$$\forall sig \in \mathbb{S}, h \geq H_{66} : \text{BIP66Check}(sig, h) = \text{valid} \implies \text{IsStrictDER}(sig)$$
Theorem 5.4.3 (BIP66 Strict DER Enforcement): BIP66 ensures all signatures after activation height are strictly DER-encoded.
Proof: For any signature $sig$ at height $h \geq H_{66}$, if $\neg \text{IsStrictDER}(sig)$, then $\text{BIP66Check}(sig, h) = \text{invalid}$, causing script validation to fail. This ensures that all signatures after activation conform to strict DER encoding, preventing signature malleability.
Activation Heights:
- Mainnet: Block 363,724
- Testnet: Block 330,776
- Regtest: Block 0 (always active)
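A structural sketch of $\text{IsStrictDER}$ following the well-known BIP66 signature-encoding rules is shown below; as in BIP66, the check operates on the signature with its trailing one-byte sighash type. This is an illustration of the requirements above, not the BLVM implementation:

```python
def is_strict_der(sig: bytes) -> bool:
    """Sketch of IsStrictDER per the BIP66 encoding rules
    (sig includes the trailing sighash-type byte)."""
    if len(sig) < 9 or len(sig) > 73:
        return False
    if sig[0] != 0x30 or sig[1] != len(sig) - 3:
        return False                    # SEQUENCE header must cover r and s
    len_r = sig[3]
    if 5 + len_r >= len(sig):
        return False
    len_s = sig[5 + len_r]
    if len_r + len_s + 7 != len(sig):
        return False                    # lengths must account for every byte
    if sig[2] != 0x02 or len_r == 0 or (sig[4] & 0x80):
        return False                    # r: INTEGER tag, non-empty, non-negative
    if len_r > 1 and sig[4] == 0x00 and not (sig[5] & 0x80):
        return False                    # r: no unnecessary leading zero
    if sig[len_r + 4] != 0x02 or len_s == 0 or (sig[len_r + 6] & 0x80):
        return False                    # s: INTEGER tag, non-empty, non-negative
    if len_s > 1 and sig[len_r + 6] == 0x00 and not (sig[len_r + 7] & 0x80):
        return False                    # s: no unnecessary leading zero
    return True

def bip66_check(sig: bytes, height: int, h66: int = 363_724) -> str:
    """Sketch of BIP66Check: strict DER is only enforced from h66 on."""
    return "invalid" if height >= h66 and not is_strict_der(sig) else "valid"
```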
5.4.4 BIP90: Block Version Enforcement
BIP90Check: $\mathcal{H} \times \mathbb{N} \rightarrow \{\text{valid}, \text{invalid}\}$
Properties:
- Version requirement: $\text{BIP90Check}(h, height) = \text{valid} \implies height < H_{34} \lor h.version \geq 2$
- Version enforcement: $\text{BIP90Check}(h, height) = \text{invalid} \implies height \geq H_{34} \land h.version < 2$
- Validation correctness: $\text{BIP90Check}(h, height)$ enforces minimum block version after BIP34 activation
For block header $h = (version, \ldots)$ at height $height$:
$$\text{BIP90Check}(h, height) = \begin{cases} \text{invalid} & \text{if } height \geq H_{34} \land version < 2 \\ \text{invalid} & \text{if } height \geq H_{66} \land version < 3 \\ \text{invalid} & \text{if } height \geq H_{65} \land version < 4 \\ \text{valid} & \text{otherwise} \end{cases}$$
Where:
- $H_{34}$ is BIP34 activation height (mainnet: 227,836)
- $H_{66}$ is BIP66 activation height (mainnet: 363,724)
- $H_{65}$ is BIP65 activation height (mainnet: 388,381)
Mathematical Property: BIP90 enforces minimum block versions:
$$\forall h = (version, \ldots) \in \mathcal{H}, height \in \mathbb{N} : \text{BIP90Check}(h, height) = \text{valid} \implies version \geq \text{MinVersion}(height)$$
Where $\text{MinVersion}(height)$ is the minimum required block version at height $height$:
$$\text{MinVersion}(height) = \begin{cases} 4 & \text{if } height \geq H_{65} \\ 3 & \text{if } height \geq H_{66} \\ 2 & \text{if } height \geq H_{34} \\ 1 & \text{otherwise} \end{cases}$$
Theorem 5.4.4 (BIP90 Version Enforcement): BIP90 ensures blocks use appropriate versions based on activation heights.
Proof: For any block header $h$ at height $height$, if $version < \text{MinVersion}(height)$, then $\text{BIP90Check}(h, height) = \text{invalid}$, preventing block acceptance. This ensures that blocks after each BIP activation use the correct minimum version, simplifying activation logic.
Activation Heights:
- Mainnet: Various (BIP34: 227,836; BIP66: 363,724; BIP65: 388,381)
- Testnet: Various
- Regtest: Block 0 (always active)
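MinVersion and BIP90Check reduce to a height-ordered lookup, sketched here with the mainnet heights used in this section:

```python
# Mainnet activation heights from this section (note H65 > H66 > H34,
# so the branches below are checked from newest to oldest)
H34, H66, H65 = 227_836, 363_724, 388_381

def min_version(height: int) -> int:
    """Sketch of MinVersion: the smallest block version valid at a height."""
    if height >= H65:
        return 4
    if height >= H66:
        return 3
    if height >= H34:
        return 2
    return 1

def bip90_check(version: int, height: int) -> str:
    """Sketch of BIP90Check: reject blocks below the minimum version."""
    return "valid" if version >= min_version(height) else "invalid"
```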
5.4.5 BIP147: NULLDUMMY Enforcement
BIP147Check: $\mathbb{S} \times \mathbb{S} \times \mathbb{N} \rightarrow \{\text{valid}, \text{invalid}\}$
Properties:
- NULLDUMMY requirement: $\text{BIP147Check}(scriptSig, scriptPubkey, h) = \text{valid} \implies h < H_{147} \lor \neg \text{ContainsMultisig}(scriptPubkey) \lor \text{IsNullDummy}(scriptSig)$
- Multisig validation: $\text{BIP147Check}(scriptSig, scriptPubkey, h) = \text{invalid} \implies h \geq H_{147} \land \text{ContainsMultisig}(scriptPubkey) \land \neg \text{IsNullDummy}(scriptSig)$
- Validation correctness: $\text{BIP147Check}(scriptSig, scriptPubkey, h)$ enforces NULLDUMMY for multisig after activation
For scriptSig $scriptSig$, scriptPubkey $scriptPubkey$ containing OP_CHECKMULTISIG, and block height $h$:
$$\text{BIP147Check}(scriptSig, scriptPubkey, h) = \begin{cases} \text{invalid} & \text{if } h \geq H_{147} \land \text{ContainsMultisig}(scriptPubkey) \land \neg \text{IsNullDummy}(scriptSig) \\ \text{valid} & \text{otherwise} \end{cases}$$
Where:
- $H_{147}$ is the BIP147 activation height (mainnet: 481,824; testnet: 834,624; regtest: 0)
- $\text{ContainsMultisig}(scriptPubkey)$ checks if $scriptPubkey$ contains OP_CHECKMULTISIG (0xae)
- $\text{IsNullDummy}(scriptSig)$ verifies that the dummy element (extra stack element consumed by OP_CHECKMULTISIG) is empty (OP_0)
OP_CHECKMULTISIG Stack Consumption: OP_CHECKMULTISIG consumes $m + n + 3$ stack elements:
- $m$ signatures
- $n$ public keys
- $m$ (signature threshold)
- $n$ (public key count)
- Dummy element (must be empty with BIP147)
Mathematical Property: BIP147 enforces NULLDUMMY for multisig scripts:
$$\forall scriptSig, scriptPubkey \in \mathbb{S}, h \geq H_{147} : \text{ContainsMultisig}(scriptPubkey) \land \text{BIP147Check}(scriptSig, scriptPubkey, h) = \text{valid} \implies \text{IsNullDummy}(scriptSig)$$
Theorem 5.4.5 (BIP147 NULLDUMMY Enforcement): BIP147 ensures that OP_CHECKMULTISIG dummy elements are empty after activation height.
Proof: For any scriptSig $scriptSig$ and scriptPubkey $scriptPubkey$ containing OP_CHECKMULTISIG at height $h \geq H_{147}$, if $\neg \text{IsNullDummy}(scriptSig)$, then $\text{BIP147Check}(scriptSig, scriptPubkey, h) = \text{invalid}$, causing script validation to fail. This ensures that all multisig scripts after activation use empty dummy elements, which is required for SegWit compatibility.
Activation Heights:
- Mainnet: Block 481,824 (SegWit activation)
- Testnet: Block 834,624
- Regtest: Block 0 (always active)
5.4.6 BIP119: OP_CHECKTEMPLATEVERIFY (CTV)
BIP119Check: $\mathcal{TX} \times \mathbb{N} \times \mathbb{H} \rightarrow {\text{valid}, \text{invalid}}$
Properties:
- Template hash validation: $\text{BIP119Check}(tx, i, h) = \text{valid} \iff \text{TemplateHash}(tx, i) = h$
- Input index requirement: $\text{BIP119Check}(tx, i, h)$ requires $i < |tx.inputs|$ (valid input index)
- Validation correctness: $\text{BIP119Check}(tx, i, h)$ validates template hash matches expected value
For transaction $tx$, input index $i$, and template hash $h$:
$$\text{BIP119Check}(tx, i, h) = \begin{cases} \text{valid} & \text{if } \text{TemplateHash}(tx, i) = h \\ \text{invalid} & \text{otherwise} \end{cases}$$
Template Hash Calculation:
$$\text{TemplateHash}(tx, i) = \text{SHA256}(\text{SHA256}(\text{TemplatePreimage}(tx, i)))$$
Where $\text{TemplatePreimage}(tx, i)$ is the serialized template structure:
$$\text{TemplatePreimage}(tx, i) = \text{Version}(tx) || \text{Inputs}(tx) || \text{Outputs}(tx) || \text{Locktime}(tx) || i$$
Template Preimage Serialization:
- Transaction version (4 bytes, little-endian): $\text{Version}(tx) = \text{LE32}(tx.\text{version})$
- Input count (varint): $\text{InputCount}(tx) = \text{VarInt}(|tx.\text{inputs}|)$
- Input serialization (for each input $in \in tx.\text{inputs}$):
  - Previous output hash (32 bytes): $in.\text{prevout}.\text{hash}$
  - Previous output index (4 bytes, little-endian): $\text{LE32}(in.\text{prevout}.\text{index})$
  - Sequence (4 bytes, little-endian): $\text{LE32}(in.\text{sequence})$
  - Note: $\text{scriptSig}$ is NOT included in the template (a key difference from sighash)
- Output count (varint): $\text{OutputCount}(tx) = \text{VarInt}(|tx.\text{outputs}|)$
- Output serialization (for each output $out \in tx.\text{outputs}$):
  - Value (8 bytes, little-endian): $\text{LE64}(out.\text{value})$
  - Script length (varint): $\text{VarInt}(|out.\text{scriptPubkey}|)$
  - Script bytes: $out.\text{scriptPubkey}$
- Locktime (4 bytes, little-endian): $\text{Locktime}(tx) = \text{LE32}(tx.\text{lockTime})$
- Input index (4 bytes, little-endian): $\text{LE32}(i)$
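The serialization above can be sketched in code. This follows the simplified preimage structure given in this document (the struct and function names are illustrative, and the final double-SHA256 step is omitted since it only wraps these bytes):

```rust
// Illustrative transaction types; scriptSig is deliberately absent from TxIn
// because the template does not commit to it.
struct OutPoint { hash: [u8; 32], index: u32 }
struct TxIn { prevout: OutPoint, sequence: u32 }
struct TxOut { value: u64, script_pubkey: Vec<u8> }
struct Tx { version: u32, inputs: Vec<TxIn>, outputs: Vec<TxOut>, lock_time: u32 }

/// Bitcoin-style variable-length integer encoding.
fn varint(n: u64, out: &mut Vec<u8>) {
    match n {
        0..=0xfc => out.push(n as u8),
        0xfd..=0xffff => { out.push(0xfd); out.extend(&(n as u16).to_le_bytes()); }
        0x1_0000..=0xffff_ffff => { out.push(0xfe); out.extend(&(n as u32).to_le_bytes()); }
        _ => { out.push(0xff); out.extend(&n.to_le_bytes()); }
    }
}

/// Serialize the template preimage exactly as listed above; the template
/// hash would be the double SHA256 of this byte vector.
fn template_preimage(tx: &Tx, input_index: u32) -> Vec<u8> {
    let mut p = Vec::new();
    p.extend(&tx.version.to_le_bytes());              // 4 bytes, little-endian
    varint(tx.inputs.len() as u64, &mut p);           // input count
    for i in &tx.inputs {
        p.extend(&i.prevout.hash);                    // 32 bytes
        p.extend(&i.prevout.index.to_le_bytes());     // 4 bytes LE
        p.extend(&i.sequence.to_le_bytes());          // 4 bytes LE; no scriptSig
    }
    varint(tx.outputs.len() as u64, &mut p);          // output count
    for o in &tx.outputs {
        p.extend(&o.value.to_le_bytes());             // 8 bytes LE
        varint(o.script_pubkey.len() as u64, &mut p); // script length
        p.extend(&o.script_pubkey);                   // script bytes
    }
    p.extend(&tx.lock_time.to_le_bytes());            // 4 bytes LE
    p.extend(&input_index.to_le_bytes());             // 4 bytes LE
    p
}
```

For a one-input, one-output transaction with a one-byte script, the preimage is 4 + 1 + 40 + 1 + 10 + 4 + 4 = 64 bytes.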
Mathematical Properties:
Theorem 5.4.6.1 (CTV Determinism): Template hash is deterministic:
$$\forall tx \in \mathcal{TX}, i \in \mathbb{N} : \text{TemplateHash}(tx, i) \text{ is unique and deterministic}$$
Proof: By construction, $\text{TemplateHash}$ uses SHA256, which is a deterministic cryptographic hash function. Given the same transaction structure and input index, the template preimage is identical, producing the same hash.
Theorem 5.4.6.2 (CTV Uniqueness): Different transactions produce different template hashes with overwhelming probability:
$$\forall tx_1, tx_2 \in \mathcal{TX}, tx_1 \neq tx_2 : P(\text{TemplateHash}(tx_1, i) = \text{TemplateHash}(tx_2, i)) \approx 2^{-256}$$
Proof: SHA256 is a cryptographic hash function with collision resistance. The probability of two different transactions producing the same template hash is approximately $2^{-256}$, which is negligible.
Theorem 5.4.6.3 (CTV Input-Specific): Template hash depends on input index:
$$\forall tx \in \mathcal{TX}, i_1, i_2 \in \mathbb{N}, i_1 \neq i_2 : \text{TemplateHash}(tx, i_1) \neq \text{TemplateHash}(tx, i_2) \text{ except with negligible probability}$$
Proof: The input index $i$ is included in the template preimage. Since $i_1 \neq i_2$, the preimages differ, and by the collision resistance of SHA256 the resulting hashes differ except with negligible probability.
Theorem 5.4.6.4 (CTV ScriptSig Independence): Template hash does not depend on scriptSig:
$$\forall tx_1, tx_2 \in \mathcal{TX}, i \in \mathbb{N} : (\text{Structure}(tx_1) = \text{Structure}(tx_2) \land tx_1.\text{inputs}[i].\text{scriptSig} \neq tx_2.\text{inputs}[i].\text{scriptSig}) \implies \text{TemplateHash}(tx_1, i) = \text{TemplateHash}(tx_2, i)$$
Where $\text{Structure}(tx)$ includes all fields except scriptSig.
Proof: By construction, scriptSig is not included in the template preimage. Therefore, changes to scriptSig do not affect the template hash, allowing the same template to be satisfied by different scriptSigs.
Opcode Behavior:
OP_CHECKTEMPLATEVERIFY (opcode 0xb3, repurposing OP_NOP4):
- Stack Input: $[h]$ where $h \in \mathbb{H}$ is a 32-byte template hash
- Stack Output: Nothing (opcode fails if template doesn’t match)
- Validation: $\text{BIP119Check}(tx, i, h) = \text{valid}$
Use Cases:
- Congestion Control: Transaction batching with predefined templates
- Vault Contracts: Time-locked withdrawals with specific output structures
- Payment Channels: State updates with committed transaction structures
- Smart Contracts: Covenants and state machines with transaction templates
Activation Heights:
- Mainnet: TBD (BIP9 activation pending)
- Testnet: TBD
- Regtest: Block 0 (always active for testing)
Implementation Notes:
- Template hash calculation must match BIP119 specification exactly
- Input index must be within bounds: $0 \leq i < |tx.\text{inputs}|$
- Transaction must have at least one input and one output
- Template hash is 32 bytes (SHA256 output)
- Opcode requires full transaction context (cannot be used in basic script execution)
5.4.7 BIP65: OP_CHECKLOCKTIMEVERIFY (CLTV)
BIP65Check: $\mathcal{TX} \times \mathbb{N} \times \mathbb{N} \times \mathbb{H} \rightarrow {\text{valid}, \text{invalid}}$
Properties:
- Zero locktime rejection: $tx.\text{lockTime} = 0 \implies \text{BIP65Check}(tx, i, lt, h) = \text{invalid}$ (zero locktime is always invalid)
- Type consistency: $\text{BIP65Check}(tx, i, lt, h) = \text{valid} \implies \text{LocktimeType}(tx.\text{lockTime}) = \text{LocktimeType}(lt)$ (types must match)
- Locktime ordering: $\text{BIP65Check}(tx, i, lt, h) = \text{valid} \implies tx.\text{lockTime} \leq lt$ (transaction locktime must be <= stack locktime)
- Input index requirement: $\text{BIP65Check}(tx, i, lt, h)$ requires $i < |tx.\text{inputs}|$ (valid input index)
- Deterministic: $tx_1 = tx_2 \land i_1 = i_2 \land lt_1 = lt_2 \land h_1 = h_2 \implies \text{BIP65Check}(tx_1, i_1, lt_1, h_1) = \text{BIP65Check}(tx_2, i_2, lt_2, h_2)$ (same inputs → same result)
- Result type: $\text{BIP65Check}(tx, i, lt, h) \in {\text{valid}, \text{invalid}}$
For transaction $tx$, input index $i$, locktime value $lt$, and block header $h$:
$$\text{BIP65Check}(tx, i, lt, h) = \begin{cases} \text{invalid} & \text{if } tx.\text{lockTime} = 0 \\ \text{invalid} & \text{if } \text{LocktimeType}(tx.\text{lockTime}) \neq \text{LocktimeType}(lt) \\ \text{invalid} & \text{if } tx.\text{lockTime} > lt \\ \text{valid} & \text{otherwise} \end{cases}$$
Where $\text{LocktimeType}(x)$ returns $\text{BlockHeight}$ if $x < 500000000$, otherwise $\text{Timestamp}$.
OP_CHECKLOCKTIMEVERIFY (opcode 0xb1):
- Stack Input: $[lt]$ where $lt$ is a locktime value (encoded as minimal byte string)
- Stack Output: Nothing (opcode fails if locktime check doesn’t pass)
- Validation: $\text{BIP65Check}(tx, i, \text{DecodeLocktime}(lt), h) = \text{valid}$
Locktime Type Determination:
$$\text{LocktimeType}(lt) = \begin{cases} \text{BlockHeight} & \text{if } lt < 500000000 \\ \text{Timestamp} & \text{otherwise} \end{cases}$$
Locktime Encoding/Decoding: Locktime values are encoded as minimal little-endian byte strings (max 5 bytes) on the script stack.
Theorem 5.4.7.1 (Locktime Encoding Round-Trip): Locktime encoding and decoding are inverse operations:
$$\forall lt \in \mathbb{N}_{32}: \text{DecodeLocktime}(\text{EncodeLocktime}(lt)) = lt$$
Proof: By construction, the encoding uses minimal little-endian representation and decoding reconstructs the value from the byte string. This is proven by blvm-spec-lock formal verification.
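The encoding, decoding, and type determination above can be sketched as follows, in the style of Bitcoin's CScriptNum restricted to non-negative values. All names are illustrative:

```rust
#[derive(Debug, PartialEq)]
enum LocktimeType { BlockHeight, Timestamp }

/// Values below 500,000,000 are block heights; the rest are Unix timestamps.
fn locktime_type(lt: u64) -> LocktimeType {
    if lt < 500_000_000 { LocktimeType::BlockHeight } else { LocktimeType::Timestamp }
}

/// Minimal little-endian encoding of a non-negative locktime (max 5 bytes).
fn encode_locktime(n: u32) -> Vec<u8> {
    let mut v = n as u64;
    let mut out = Vec::new();
    while v > 0 {
        out.push((v & 0xff) as u8);
        v >>= 8;
    }
    // If the sign bit of the top byte is set, append a zero byte so the
    // value is not read back as negative (minimal-encoding rule).
    if out.last().map_or(false, |&b| b & 0x80 != 0) {
        out.push(0x00);
    }
    out
}

/// Reconstruct the value from its little-endian byte string.
fn decode_locktime(bytes: &[u8]) -> u64 {
    bytes.iter().rev().fold(0u64, |acc, &b| (acc << 8) | b as u64)
}
```

Note the round-trip property from Theorem 5.4.7.1: `decode_locktime(&encode_locktime(lt))` returns `lt` for any 32-bit value, including the empty encoding of zero.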
Theorem 5.4.7.2 (Locktime Type Determination Correctness): Locktime type determination is correct:
$$\forall lt \in \mathbb{N}_{32}: \text{LocktimeType}(lt) = \begin{cases} \text{BlockHeight} & \text{if } lt < 500000000 \\ \text{Timestamp} & \text{otherwise} \end{cases}$$
Proof: By construction, the threshold $500000000$ correctly separates block heights (which are always $< 500000000$) from Unix timestamps (which are always $\geq 500000000$). This is proven by blvm-spec-lock formal verification.
Theorem 5.4.7.3 (CLTV Type Matching Requirement): CLTV requires matching locktime types:
$$\forall tx \in \mathcal{TX}, lt \in \mathbb{N}_{32}: \text{BIP65Check}(tx, i, lt, h) = \text{valid} \implies \text{LocktimeType}(tx.\text{lockTime}) = \text{LocktimeType}(lt)$$
Proof: By construction, if the types don’t match, $\text{BIP65Check}$ returns $\text{invalid}$. This ensures that block height locktimes are only compared with block heights, and timestamps are only compared with timestamps. This is proven by blvm-spec-lock formal verification.
Theorem 5.4.7.4 (CLTV Zero Locktime Rejection): CLTV always fails when transaction locktime is zero:
$$\forall tx \in \mathcal{TX}, lt \in \mathbb{N}_{32}: tx.\text{lockTime} = 0 \implies \text{BIP65Check}(tx, i, lt, h) = \text{invalid}$$
Proof: By construction, if $tx.\text{lockTime} = 0$, the check immediately returns $\text{invalid}$ regardless of the stack locktime value. This is proven by blvm-spec-lock formal verification.
Activation Heights:
- Mainnet: Block 388,381
- Testnet: Block 371,337
- Regtest: Block 0 (always active)
Corollary 5.4.1 (BIP Activation Consistency): All BIP validation rules are enforced consistently across the network after their respective activation heights, ensuring consensus compatibility.
Proof: Each BIP validation rule $P$ has an activation height $H_P$ such that for all blocks $b$ at height $h \geq H_P$, $P(b) = \text{valid}$ is required. Since all nodes enforce the same activation heights, consensus is maintained.
5.5 Sequence Locks (BIP68)
Sequence locks enforce relative locktime constraints using transaction input sequence numbers. Unlike absolute locktime (nLockTime), sequence locks are relative to when the input was confirmed.
Sequence Number Encoding: $nSequence \in \mathbb{N}_{32}$ (32-bit unsigned integer)
The sequence number encodes:
- Bit 31 (0x80000000): Disable flag - if set, sequence is not treated as relative locktime
- Bit 22 (0x00400000): Type flag - if set, locktime is time-based; otherwise block-based
- Bits 0-15 (0x0000ffff): Locktime value
ExtractSequenceLocktimeValue: $\mathbb{N}_{32} \rightarrow \mathbb{N}_{16}$
Properties:
- Value extraction: $\text{ExtractSequenceLocktimeValue}(seq) = seq \land 0x0000ffff$ (bits 0-15)
- Value range: $0 \leq \text{ExtractSequenceLocktimeValue}(seq) \leq 65535$ for all $seq \in \mathbb{N}_{32}$
- Bit masking: $\text{ExtractSequenceLocktimeValue}(seq)$ extracts lower 16 bits
$$\text{ExtractSequenceLocktimeValue}(seq) = seq \land 0x0000ffff$$
ExtractSequenceTypeFlag: $\mathbb{N}_{32} \rightarrow {\text{true}, \text{false}}$
Properties:
- Type flag extraction: $\text{ExtractSequenceTypeFlag}(seq) = \text{true} \iff (seq \land 0x00400000) \neq 0$
- Boolean result: $\text{ExtractSequenceTypeFlag}(seq) \in {\text{true}, \text{false}}$
- Bit 22: $\text{ExtractSequenceTypeFlag}(seq)$ extracts bit 22 (type flag)
$$\text{ExtractSequenceTypeFlag}(seq) = (seq \land 0x00400000) \neq 0$$
IsSequenceDisabled: $\mathbb{N}_{32} \rightarrow {\text{true}, \text{false}}$
Properties:
- Disabled flag extraction: $\text{IsSequenceDisabled}(seq) = \text{true} \iff (seq \land 0x80000000) \neq 0$
- Boolean result: $\text{IsSequenceDisabled}(seq) \in {\text{true}, \text{false}}$
- Bit 31: $\text{IsSequenceDisabled}(seq)$ extracts bit 31 (disable flag)
$$\text{IsSequenceDisabled}(seq) = (seq \land 0x80000000) \neq 0$$
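The three bit-field extractors above are simple masks; a sketch (function and constant names are illustrative):

```rust
// Bit layout of a BIP68 sequence number, as defined above.
const SEQUENCE_DISABLE_FLAG: u32 = 0x8000_0000;  // bit 31: disable relative locktime
const SEQUENCE_TYPE_FLAG: u32 = 0x0040_0000;     // bit 22: time-based if set
const SEQUENCE_LOCKTIME_MASK: u32 = 0x0000_ffff; // bits 0-15: locktime value

fn extract_sequence_locktime_value(seq: u32) -> u16 {
    (seq & SEQUENCE_LOCKTIME_MASK) as u16
}

fn extract_sequence_type_flag(seq: u32) -> bool {
    seq & SEQUENCE_TYPE_FLAG != 0
}

fn is_sequence_disabled(seq: u32) -> bool {
    seq & SEQUENCE_DISABLE_FLAG != 0
}
```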
GetMedianTimePast: $[\mathcal{H}] \rightarrow \mathbb{N}$
Properties:
- Median calculation: $\text{GetMedianTimePast}(headers) = \text{median}({h.\text{timestamp} : h \in \text{last_11}(headers)})$ (median of last 11 block timestamps)
- Empty input: $\text{GetMedianTimePast}([]) = 0$ (empty headers return 0)
- Bounds: $\text{GetMedianTimePast}(headers) \geq \min({h.\text{timestamp} : h \in headers}) \land \text{GetMedianTimePast}(headers) \leq \max({h.\text{timestamp} : h \in headers})$ (median within timestamp range)
- Last 11 blocks: $\text{GetMedianTimePast}(headers)$ uses at most the last 11 block headers (BIP113 requirement)
- Deterministic: $h_1 = h_2 \implies \text{GetMedianTimePast}(h_1) = \text{GetMedianTimePast}(h_2)$ (same headers → same result)
- Result type: $\text{GetMedianTimePast}(headers) \in \mathbb{N}$ (returns Unix timestamp)
For block headers $headers \in [\mathcal{H}]$:
$$\text{GetMedianTimePast}(headers) = \begin{cases} 0 & \text{if } |headers| = 0 \\ \text{median}({h.\text{timestamp} : h \in headers[\max(0, |headers| - 11):]}) & \text{otherwise} \end{cases}$$
Where $\text{median}(timestamps)$ is computed by sorting the timestamps in ascending order and taking the middle element:
- $\text{median}(timestamps) = \text{sorted}(timestamps)[\lfloor |timestamps|/2 \rfloor]$ (for even counts this selects the upper-middle element, matching Bitcoin Core's behavior)
BIP113 Reference: This function implements BIP113: Median Time-Past, which uses the median timestamp of the last 11 blocks to prevent time-warp attacks.
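A sketch of the median-time-past computation, assuming the caller passes header timestamps in chain order; following Bitcoin Core's convention, the last (up to) 11 timestamps are sorted and the middle element is taken:

```rust
/// Median time-past over the last 11 block timestamps (BIP113).
/// Returns 0 for an empty input, as specified above.
fn get_median_time_past(timestamps: &[u64]) -> u64 {
    if timestamps.is_empty() {
        return 0;
    }
    // Take at most the last 11 timestamps, then sort them.
    let start = timestamps.len().saturating_sub(11);
    let mut last: Vec<u64> = timestamps[start..].to_vec();
    last.sort_unstable();
    // Middle element of the sorted slice (index n/2).
    last[last.len() / 2]
}
```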
CalculateSequenceLocks: $\mathcal{TX} \times \mathbb{N} \times [\mathbb{N}] \times [\mathcal{H}]^? \rightarrow (\mathbb{Z}, \mathbb{Z})$
Properties:
- Heights match inputs: $\text{CalculateSequenceLocks}(tx, f, ph, rh) = (min_h, min_t) \implies |ph| = |tx.inputs|$ (heights must match input count)
- Lock calculation: $\text{CalculateSequenceLocks}(tx, f, ph, rh)$ calculates minimum height and time locks from sequence numbers
- Negative locks: $\text{CalculateSequenceLocks}(tx, f, ph, rh) = (min_h, min_t) \implies min_h \geq -1 \land min_t \geq -1$ (locks can be -1 if disabled)
For transaction $tx$, flags $f$, previous heights $ph \in [\mathbb{N}]$, and recent headers $rh \in [\mathcal{H}]^?$:
$$\text{CalculateSequenceLocks}(tx, f, ph, rh) = (\text{min_height}, \text{min_time})$$
Where:
- BIP68 is only enforced if $tx.\text{version} \geq 2$ and $(f \land 0x01) \neq 0$
- For each input $i \in tx.\text{inputs}$:
- If $\text{IsSequenceDisabled}(i.\text{sequence})$: skip input
- If $\text{ExtractSequenceTypeFlag}(i.\text{sequence})$ (time-based):
- $\text{locktime_value} = \text{ExtractSequenceLocktimeValue}(i.\text{sequence})$
- $\text{locktime_seconds} = \text{locktime_value} \times 512 = \text{locktime_value} \ll 9$ (bit shift for efficiency)
- $\text{coin_time} = \text{GetMedianTimePast}(ph[i], rh)$ (median time-past at the coin's confirmation height)
- $\text{required_time} = \text{coin_time} + \text{locktime_seconds} - 1$
- $\text{min_time} = \max(\text{min_time}, \text{required_time})$
- Else (block-based):
- $\text{locktime_value} = \text{ExtractSequenceLocktimeValue}(i.\text{sequence})$
- $\text{required_height} = ph[i] + \text{locktime_value} - 1$
- $\text{min_height} = \max(\text{min_height}, \text{required_height})$
EvaluateSequenceLocks: $\mathbb{N} \times \mathbb{N} \times (\mathbb{Z}, \mathbb{Z}) \rightarrow {\text{true}, \text{false}}$
Properties:
- Lock evaluation: $\text{EvaluateSequenceLocks}(height, time, (min_h, min_t)) = \text{true} \iff (min_h < 0 \lor height > min_h) \land (min_t < 0 \lor time > min_t)$
- Boolean result: $\text{EvaluateSequenceLocks}(height, time, (min_h, min_t)) \in {\text{true}, \text{false}}$
- Disabled locks: $\text{EvaluateSequenceLocks}(height, time, (-1, -1)) = \text{true}$ (disabled locks always pass)
$$\text{EvaluateSequenceLocks}(height, time, (min_h, min_t)) = \begin{cases} \text{true} & \text{if } (min_h < 0 \lor height > min_h) \land (min_t < 0 \lor time > min_t) \\ \text{false} & \text{otherwise} \end{cases}$$
Where:
- $min_h < 0$ (typically $-1$) indicates no height constraint
- $min_t < 0$ indicates no time constraint
- The comparison uses $>$ (strictly greater) because sequence locks use “last invalid” semantics (like nLockTime)
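The two functions can be sketched together. As a simplifying assumption, each input is passed as a `(sequence, coin height, coin median-time-past)` triple rather than through the `ph`/`rh` lookup; all names are illustrative:

```rust
/// Sketch of CalculateSequenceLocks: fold each input's relative locktime
/// into a single (min_height, min_time) pair, -1 meaning "no constraint".
fn calculate_sequence_locks(
    tx_version: u32,
    enforce_bip68: bool,
    inputs: &[(u32, i64, i64)], // (sequence, coin height, coin MTP)
) -> (i64, i64) {
    let (mut min_height, mut min_time) = (-1i64, -1i64);
    // BIP68 only applies to version >= 2 transactions with the flag set.
    if tx_version < 2 || !enforce_bip68 {
        return (min_height, min_time);
    }
    for &(seq, coin_height, coin_time) in inputs {
        if seq & 0x8000_0000 != 0 { continue; } // disable flag: skip input
        let value = (seq & 0x0000_ffff) as i64; // bits 0-15
        if seq & 0x0040_0000 != 0 {
            // Time-based: value is in units of 512 seconds (<< 9).
            min_time = min_time.max(coin_time + (value << 9) - 1);
        } else {
            // Block-based.
            min_height = min_height.max(coin_height + value - 1);
        }
    }
    (min_height, min_time)
}

/// Sketch of EvaluateSequenceLocks with the strictly-greater ("last
/// invalid") semantics described above.
fn evaluate_sequence_locks(height: i64, time: i64, locks: (i64, i64)) -> bool {
    let (min_h, min_t) = locks;
    (min_h < 0 || height > min_h) && (min_t < 0 || time > min_t)
}
```

For example, a sequence value of 10 on a coin confirmed at height 100 yields `min_height = 109`, which passes only from height 110 onward.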
Theorem 5.5.1 (Sequence Lock Arithmetic Safety): Sequence lock calculations never overflow for valid inputs:
$$\forall tx \in \mathcal{TX}, ph \in [\mathbb{N}], seq \in \mathbb{N}_{32}:$$ $$\text{CalculateSequenceLocks}(tx, f, ph, rh) \text{ does not overflow}$$
Proof: By construction, all arithmetic operations use checked addition/subtraction. The locktime value is bounded to 16 bits (0-65535), and block heights/times are bounded to 64-bit integers. This is proven by blvm-spec-lock formal verification.
Theorem 5.5.2 (Sequence Lock Correctness): Sequence locks correctly enforce relative locktime:
$$\forall tx \in \mathcal{TX}, ph \in [\mathbb{N}]:$$ $$\text{EvaluateSequenceLocks}(h, t, \text{CalculateSequenceLocks}(tx, f, ph, rh)) = \text{true}$$ $$\iff$$ $$\forall i \in tx.\text{inputs}: \text{IsSequenceDisabled}(i.\text{sequence}) \lor \text{LocktimeSatisfied}(i, ph[i], h, t)$$
Where $\text{LocktimeSatisfied}$ checks if the relative locktime constraint is met.
Proof: By construction, $\text{CalculateSequenceLocks}$ computes the minimum height/time required by all inputs, and $\text{EvaluateSequenceLocks}$ checks if current height/time meets these requirements. This is proven by blvm-spec-lock formal verification.
6. Economic Model
6.1 Block Subsidy
GetBlockSubsidy: $\mathbb{N} \rightarrow \mathbb{Z}$
Properties:
- Non-negative: $\text{GetBlockSubsidy}(h) \geq 0$ for all $h \in \mathbb{N}$
- Upper bound: $\text{GetBlockSubsidy}(h) \leq \text{INITIAL_SUBSIDY}$ for all $h \in \mathbb{N}$
- Genesis block: $h = 0 \implies \text{GetBlockSubsidy}(h) = \text{INITIAL_SUBSIDY}$
- After 64 halvings: $h \geq \text{HALVING_INTERVAL} \times 64 \implies \text{GetBlockSubsidy}(h) = 0$
- Halving schedule: For $h < \text{HALVING_INTERVAL} \times 64$, $\text{GetBlockSubsidy}(h) = \text{INITIAL_SUBSIDY} \gg \lfloor h / \text{HALVING_INTERVAL} \rfloor$
- First halving boundary: $h = \text{HALVING_INTERVAL} \implies \text{GetBlockSubsidy}(h) = \text{INITIAL_SUBSIDY} / 2$
- Second halving boundary: $h = \text{HALVING_INTERVAL} \times 2 \implies \text{GetBlockSubsidy}(h) = \text{INITIAL_SUBSIDY} / 4$
- Before 64 halvings: $h < \text{HALVING_INTERVAL} \times 64 \implies \text{GetBlockSubsidy}(h) > 0$
- Deterministic: $h_1 = h_2 \implies \text{GetBlockSubsidy}(h_1) = \text{GetBlockSubsidy}(h_2)$ (same height → same subsidy)
- Non-increasing: For $h_1 < h_2$, $\text{GetBlockSubsidy}(h_1) \geq \text{GetBlockSubsidy}(h_2)$; within the same halving period the subsidy is constant
$$\text{GetBlockSubsidy}(h) = \begin{cases} 0 & \text{if } h \geq 64 \times H \\ 50 \times C \times 2^{-\lfloor h/H \rfloor} & \text{otherwise} \end{cases}$$
Where $\lfloor h/H \rfloor$ represents the number of halvings that have occurred by height $h$.
Halving Schedule:
- Blocks 0-209,999: 50 BTC per block
- Blocks 210,000-419,999: 25 BTC per block
- Blocks 420,000-629,999: 12.5 BTC per block
- Blocks 630,000-839,999: 6.25 BTC per block
- Blocks 840,000+: 3.125 BTC per block
- Blocks 13,440,000+: 0 BTC per block (after 64 halvings)
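The schedule above reduces to a right-shift of the initial subsidy, as in typical implementations; a minimal sketch using mainnet constants:

```rust
const COIN: u64 = 100_000_000;          // satoshis per BTC
const HALVING_INTERVAL: u64 = 210_000;  // blocks between halvings (mainnet)

/// Subsidy at a given height: 50 BTC shifted right once per halving,
/// and zero after 64 halvings.
fn get_block_subsidy(height: u64) -> u64 {
    let halvings = height / HALVING_INTERVAL;
    if halvings >= 64 {
        return 0;
    }
    (50 * COIN) >> halvings
}
```

For instance, `get_block_subsidy(840_000)` is 312,500,000 satoshis (3.125 BTC).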
Properties:
- Non-negativity: $\text{GetBlockSubsidy}(h) \geq 0$ for all $h \in \mathbb{N}$
- Upper bound: $\text{GetBlockSubsidy}(h) \leq 50 \times C$ for all $h \in \mathbb{N}$
- Genesis block: $\text{GetBlockSubsidy}(0) = 50 \times C$
- After 64 halvings: $\text{GetBlockSubsidy}(h) = 0$ for all $h \geq 64 \times H$
- First halving: $\text{GetBlockSubsidy}(H) = 25 \times C$
- Second halving: $\text{GetBlockSubsidy}(2 \times H) = 12.5 \times C$
- Non-zero before 64 halvings: $\text{GetBlockSubsidy}(h) > 0$ for all $h < 64 \times H$
Theorem 6.1.1 (Halving Schedule Correctness): The block subsidy halves every 210,000 blocks:
$$\forall h \in \mathbb{N}, h < 64 \times H: \text{GetBlockSubsidy}(h + H) = \frac{\text{GetBlockSubsidy}(h)}{2}$$
Where $H = 210,000$ is the halving interval.
Proof: By construction, $\text{GetBlockSubsidy}(h) = 50 \times C \times 2^{-\lfloor h/H \rfloor}$. For $h + H$, we have $\lfloor (h+H)/H \rfloor = \lfloor h/H \rfloor + 1$, so $\text{GetBlockSubsidy}(h + H) = 50 \times C \times 2^{-(\lfloor h/H \rfloor + 1)} = \frac{50 \times C \times 2^{-\lfloor h/H \rfloor}}{2} = \frac{\text{GetBlockSubsidy}(h)}{2}$. This is proven by blvm-spec-lock formal verification.
6.2 Total Supply
TotalSupply: $\mathbb{N} \rightarrow \mathbb{Z}$
Properties:
- Non-negativity: $\text{TotalSupply}(h) \geq 0$ for all $h \in \mathbb{N}$
- Supply limit: $\text{TotalSupply}(h) \leq \text{MAX_MONEY}$ for all $h \in \mathbb{N}$ (critical security invariant)
- Genesis block: $\text{TotalSupply}(0) = 50 \times C = \text{INITIAL_SUBSIDY}$
- Monotonicity: $\text{TotalSupply}(h_1) \leq \text{TotalSupply}(h_2)$ for all $h_1 \leq h_2$ (monotonically increasing)
- Supply increase: For $h_2 > h_1$, $\text{TotalSupply}(h_2) = \text{TotalSupply}(h_1) + \sum_{i=h_1+1}^{h_2} \text{GetBlockSubsidy}(i)$
- Supply convergence: $\lim_{h \to \infty} \text{TotalSupply}(h) = 21 \times 10^6 \times C$ (converges to 21M BTC)
- After 64 halvings: For $h \geq \text{HALVING_INTERVAL} \times 64$, $\text{TotalSupply}(h) = \text{TotalSupply}(\text{HALVING_INTERVAL} \times 64)$ (constant after halvings stop)
- Deterministic: $h_1 = h_2 \implies \text{TotalSupply}(h_1) = \text{TotalSupply}(h_2)$ (same height → same supply)
- Supply increase bounded: $\text{TotalSupply}(h+1) - \text{TotalSupply}(h) = \text{GetBlockSubsidy}(h+1) \leq \text{INITIAL_SUBSIDY}$
$$\text{TotalSupply}(h) = \sum_{i=0}^{h} \text{GetBlockSubsidy}(i)$$
Theorem 6.2.1 (Total Supply Monotonicity): Total supply is monotonically increasing:
$$\forall h_1, h_2 \in \mathbb{N}, h_1 \leq h_2: \text{TotalSupply}(h_1) \leq \text{TotalSupply}(h_2)$$
Proof: By construction, $\text{TotalSupply}(h) = \sum_{i=0}^{h} \text{GetBlockSubsidy}(i)$. Since $\text{GetBlockSubsidy}(i) \geq 0$ for all $i$, adding more terms can only increase the sum. This is proven by blvm-spec-lock formal verification.
Theorem 6.2.2 (Total Supply Bounded): Total supply never exceeds MAX_MONEY:
$$\forall h \in \mathbb{N}: \text{TotalSupply}(h) \leq \text{MAX_MONEY}$$
Where $\text{MAX_MONEY} = 21 \times 10^6 \times C$ is the maximum Bitcoin supply.
Proof: By construction, the total supply converges to $21 \times 10^6 \times C$ as $h \to \infty$, and all block subsidies are non-negative. The implementation uses checked arithmetic to prevent overflow. This is proven by blvm-spec-lock formal verification.
Theorem 6.2.3 (Supply Convergence): $\lim_{h \to \infty} \text{TotalSupply}(h) = 21 \times 10^6 \times C$
Proof: The total supply can be expressed as a sum of geometric series. For each halving period $k$ (where $k = \lfloor h/H \rfloor$), the subsidy is $50 \times C \times 2^{-k}$ for $H$ consecutive blocks.
The total supply is: $$\text{TotalSupply}(\infty) = \sum_{k=0}^{63} H \times 50 \times C \times 2^{-k} = H \times 50 \times C \times \sum_{k=0}^{63} 2^{-k}$$
Since $\sum_{k=0}^{63} 2^{-k} = 2 - 2^{-63}$, which is just below 2: $$\text{TotalSupply}(\infty) \approx H \times 50 \times C \times 2 = 210{,}000 \times 50 \times 10^8 \times 2 = 21 \times 10^6 \times 10^8 = 21 \times 10^6 \times C$$ With integer-valued subsidies the actual limit is marginally below 21 million BTC.
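The convergence argument can be checked numerically. The sketch below sums the integer (right-shifted) subsidy over all 64 halving periods using mainnet constants; the exact limit with integer subsidies is 2,099,999,997,690,000 satoshis (20,999,999.9769 BTC), marginally below 21 million:

```rust
const COIN: u64 = 100_000_000;          // satoshis per BTC
const HALVING_INTERVAL: u64 = 210_000;  // blocks between halvings (mainnet)

/// Sum the per-block subsidy across all 64 halving periods; after the
/// shift reaches zero the remaining terms contribute nothing.
fn ultimate_supply() -> u64 {
    (0..64u32).map(|k| HALVING_INTERVAL * ((50 * COIN) >> k)).sum()
}
```

This confirms $\text{TotalSupply}(h) \leq \text{MAX_MONEY}$ for all heights, since the sum of all subsidies ever payable is below $21 \times 10^6 \times C$.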
6.3 Supply Limit Validation
ValidateSupplyLimit: $\mathbb{N} \rightarrow {\text{valid}, \text{invalid}}$
$$\text{ValidateSupplyLimit}(h) = \begin{cases} \text{valid} & \text{if } \text{TotalSupply}(h) \leq \text{MAX_MONEY} \\ \text{invalid} & \text{otherwise} \end{cases}$$
Validates that the total supply at height $h$ does not exceed the maximum money supply.
Properties:
- Validation correctness: $\text{ValidateSupplyLimit}(h) = \text{valid} \iff \text{TotalSupply}(h) \leq \text{MAX_MONEY}$
- Supply limit invariant: $\text{ValidateSupplyLimit}(h) = \text{valid}$ for all $h \in \mathbb{N}$ (critical security property)
- Boolean result: $\text{ValidateSupplyLimit}(h) \in {\text{valid}, \text{invalid}}$
- Deterministic: $h_1 = h_2 \implies \text{ValidateSupplyLimit}(h_1) = \text{ValidateSupplyLimit}(h_2)$ (same height → same result)
- Always valid: Since $\text{TotalSupply}(h) \leq \text{MAX_MONEY}$ for all $h$, $\text{ValidateSupplyLimit}(h) = \text{valid}$ for all $h$
Theorem 6.3.1 (Supply Limit Correctness): The supply limit validation is correct:
$$\forall h \in \mathbb{N}: \text{ValidateSupplyLimit}(h) = \text{valid} \iff \text{TotalSupply}(h) \leq \text{MAX_MONEY}$$
Proof: By construction, the validation function directly checks the condition. This is proven by blvm-spec-lock formal verification.
6.4 Coinbase Detection
IsCoinbase: $\mathcal{TX} \rightarrow {\text{true}, \text{false}}$
A transaction $tx = (v, ins, outs, lt)$ is a coinbase transaction if and only if:
$$\text{IsCoinbase}(tx) = \begin{cases} \text{true} & \text{if } |ins| = 1 \land ins[0].\text{prevout}.\text{hash} = 0^{32} \land ins[0].\text{prevout}.\text{index} = 2^{32} - 1 \\ \text{false} & \text{otherwise} \end{cases}$$
Where:
- $0^{32}$ is the 32-byte zero hash (all zeros)
- $2^{32} - 1$ is the maximum 32-bit unsigned integer (0xFFFFFFFF)
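The definition above translates directly into code; the types are illustrative, not the blvm-consensus API:

```rust
// Minimal transaction types for the coinbase check.
struct OutPoint { hash: [u8; 32], index: u32 }
struct TxIn { prevout: OutPoint }
struct Tx { inputs: Vec<TxIn> }

/// A coinbase has exactly one input whose prevout is the null hash with
/// index 0xFFFFFFFF.
fn is_coinbase(tx: &Tx) -> bool {
    tx.inputs.len() == 1
        && tx.inputs[0].prevout.hash == [0u8; 32]
        && tx.inputs[0].prevout.index == u32::MAX
}
```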
Properties:
- Definition correctness: $\text{IsCoinbase}(tx) = \text{true} \iff |tx.\text{inputs}| = 1 \land tx.\text{inputs}[0].\text{prevout}.\text{hash} = 0^{32} \land tx.\text{inputs}[0].\text{prevout}.\text{index} = 2^{32} - 1$
- Input count: $\text{IsCoinbase}(tx) = \text{true} \implies |tx.\text{inputs}| = 1$ (coinbase has exactly one input)
- Zero hash: $\text{IsCoinbase}(tx) = \text{true} \implies tx.\text{inputs}[0].\text{prevout}.\text{hash} = 0^{32}$ (null hash)
- Max index: $\text{IsCoinbase}(tx) = \text{true} \implies tx.\text{inputs}[0].\text{prevout}.\text{index} = 2^{32} - 1$ (0xFFFFFFFF)
- Boolean result: $\text{IsCoinbase}(tx) \in {\text{true}, \text{false}}$
- Deterministic: $tx_1 = tx_2 \implies \text{IsCoinbase}(tx_1) = \text{IsCoinbase}(tx_2)$ (same transaction → same result)
- Coinbase uniqueness: In any valid block $b$, exactly one transaction satisfies $\text{IsCoinbase}(tx) = \text{true}$
- Coinbase position: In valid blocks, coinbase is always the first transaction: $\text{IsCoinbase}(b.transactions[0]) = \text{true}$
- ScriptSig length: $\text{IsCoinbase}(tx) = \text{true} \land \text{CheckTransaction}(tx) = \text{valid} \implies 2 \leq |tx.inputs[0].scriptSig| \leq 100$
Theorem 6.4.1 (Coinbase Uniqueness): Each block contains exactly one coinbase transaction:
$$\forall b = (h, txs) \in \mathcal{B}: |{tx \in txs : \text{IsCoinbase}(tx) = \text{true}}| = 1$$
Proof: By Bitcoin consensus rules, each block must have exactly one coinbase transaction as its first transaction. This is proven by blvm-spec-lock formal verification.
6.5 Fee Market
Transaction Fee: $\mathcal{TX} \times \mathcal{US} \rightarrow \mathbb{Z}$
Properties:
- Fee formula: $\text{Fee}(tx, us) = \sum_{i \in tx.\text{inputs}} us(i.\text{prevout}).\text{value} - \sum_{o \in tx.\text{outputs}} o.\text{value}$
- Coinbase fee: $\text{IsCoinbase}(tx) = \text{true} \implies \text{Fee}(tx, us) = 0$
- Non-negative fee: $\text{Fee}(tx, us) \geq 0$ for valid transactions (inputs ≥ outputs)
- Value conservation: $\text{Fee}(tx, us) \geq 0 \implies \sum_{i \in tx.\text{inputs}} us(i.\text{prevout}).\text{value} = \sum_{o \in tx.\text{outputs}} o.\text{value} + \text{Fee}(tx, us)$
- Deterministic: $tx_1 = tx_2 \land us_1 = us_2 \implies \text{Fee}(tx_1, us_1) = \text{Fee}(tx_2, us_2)$
$$\text{Fee}(tx, us) = \sum_{i \in tx.inputs} us(i.prevout).value - \sum_{o \in tx.outputs} o.value$$
Fee Rate: $\mathcal{TX} \times \mathcal{US} \rightarrow \mathbb{Q}$
Properties:
- Fee rate formula: $\text{FeeRate}(tx, us) = \frac{\text{Fee}(tx, us)}{\text{Weight}(tx)}$
- Non-negative rate: $\text{FeeRate}(tx, us) \geq 0$ for valid transactions
- Zero fee rate: $\text{IsCoinbase}(tx) = \text{true} \implies \text{FeeRate}(tx, us) = 0$
- Deterministic: $tx_1 = tx_2 \land us_1 = us_2 \implies \text{FeeRate}(tx_1, us_1) = \text{FeeRate}(tx_2, us_2)$
$$\text{FeeRate}(tx, us) = \frac{\text{Fee}(tx, us)}{\text{Weight}(tx)}$$
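A sketch of the fee and fee-rate formulas against a UTXO view, with simplified types (a prevout is a `(txid, index)` tuple, the UTXO set maps prevouts to values, and all names are illustrative):

```rust
use std::collections::HashMap;

type OutPoint = ([u8; 32], u32);

/// Fee = sum of spent input values minus sum of output values.
/// Returns None if any referenced UTXO is missing from the view.
fn fee(
    input_prevouts: &[OutPoint],
    output_values: &[u64],
    utxos: &HashMap<OutPoint, u64>,
) -> Option<i64> {
    let in_sum: u64 = input_prevouts
        .iter()
        .map(|p| utxos.get(p).copied())
        .sum::<Option<u64>>()?;
    let out_sum: u64 = output_values.iter().sum();
    Some(in_sum as i64 - out_sum as i64)
}

/// Fee rate in satoshis per weight unit.
fn fee_rate(fee: i64, weight: u64) -> f64 {
    fee as f64 / weight as f64
}
```

Spending a 100,000-satoshi UTXO into a 99,000-satoshi output yields a 1,000-satoshi fee; at 400 weight units that is 2.5 sat/WU.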
Theorem 6.5.1 (Fee Non-Negativity): Transaction fees are always non-negative for valid transactions:
$$\forall tx \in \mathcal{TX}, us \in \mathcal{US}: \text{Fee}(tx, us) \geq 0$$
Proof: By construction, $\text{Fee}(tx, us) = \sum_{i \in tx.inputs} us(i.prevout).value - \sum_{o \in tx.outputs} o.value$. For a valid transaction, the sum of input values must be at least the sum of output values (otherwise the transaction would be invalid). This is proven by blvm-spec-lock formal verification.
7. Proof of Work
7.1 Difficulty Adjustment
ExpandTarget: $\mathbb{N} \rightarrow \mathbb{U256}$
Properties:
- Positive bits: $\text{ExpandTarget}(bits)$ requires $bits > 0$ (bits must be positive)
- Target expansion: $\text{ExpandTarget}(bits)$ expands compact difficulty representation to full 256-bit target
- Formula correctness: $\text{ExpandTarget}(bits) = \text{mantissa} \times 2^{8 \times (\text{exponent} - 3)}$ where exponent and mantissa extracted from bits
$$\text{ExpandTarget}(bits) = \text{mantissa} \times 2^{8 \times (\text{exponent} - 3)}$$
Where:
- $\text{exponent} = (bits \gg 24) \land 0xff$
- $\text{mantissa} = bits \land 0x00ffffff$
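A sketch of the expansion, representing the 256-bit target as a big-endian 32-byte array. For simplicity it treats exponents below 3 as out of range (Bitcoin's compact format also permits them via right-shifts of the mantissa); names are illustrative:

```rust
/// Expand compact difficulty bits into a 256-bit target (big-endian bytes).
/// Returns None for exponents outside this sketch's supported range.
fn expand_target(bits: u32) -> Option<[u8; 32]> {
    let exponent = ((bits >> 24) & 0xff) as usize;
    let mantissa = bits & 0x00ff_ffff;
    if exponent < 3 || exponent > 32 {
        return None;
    }
    let mut target = [0u8; 32];
    // target = mantissa * 256^(exponent - 3): the most significant mantissa
    // byte lands `exponent` bytes from the end of the array.
    let bytes = [(mantissa >> 16) as u8, (mantissa >> 8) as u8, mantissa as u8];
    for (i, b) in bytes.iter().enumerate() {
        target[32 - exponent + i] = *b;
    }
    Some(target)
}
```

For the well-known maximum-target bits `0x1d00ffff`, the expanded target begins `0x00000000ffff00...`.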
Note: This function converts the compact difficulty representation (used in block headers) to a full 256-bit target value. The compact format uses 1 byte for the exponent and 3 bytes for the mantissa. Correctness is proven by blvm-spec-lock formal verification.
GetNextWorkRequired: $\mathcal{H} \times \mathcal{H}^* \rightarrow \mathbb{N}$
Properties:
- Minimum headers: $\text{GetNextWorkRequired}(h, prev)$ requires $|prev| \geq 2$ for adjustment (otherwise returns initial difficulty)
- Difficulty bounds: $\text{GetNextWorkRequired}(h, prev) \leq \text{maxTarget}$ (difficulty never exceeds maximum)
- Positive difficulty: $\text{GetNextWorkRequired}(h, prev) > 0$ (difficulty is always positive)
- Deterministic: $h_1 = h_2 \land prev_1 = prev_2 \implies \text{GetNextWorkRequired}(h_1, prev_1) = \text{GetNextWorkRequired}(h_2, prev_2)$
- Time-based adjustment: $\text{GetNextWorkRequired}(h, prev)$ adjusts difficulty based on time span between blocks
For block header $h$ and previous headers $prev$:
- If $|prev| < 2$: return the initial difficulty
- Let $timeSpan = h.\text{time} - prev[0].\text{time}$
- Let $expectedTime = 14 \times 24 \times 60 \times 60$ (2 weeks)
- Clamp $timeSpan$ to $[expectedTime/4, 4 \times expectedTime]$
- Let $adjustment = \frac{timeSpan}{expectedTime}$
- Let $newTarget = \text{ExpandTarget}(h.\text{bits}) \times adjustment$
- Return $\min(newTarget, maxTarget)$
Theorem 7.1 (Difficulty Adjustment Bounds): The difficulty adjustment is bounded by a factor of 4 in either direction.
Proof: The implementation clamps the observed time span to the interval $[\frac{expectedTime}{4}, 4 \times expectedTime]$: $$\frac{expectedTime}{4} \leq timeSpan \leq 4 \times expectedTime$$
Therefore: $$\frac{1}{4} \leq adjustment \leq 4$$
Corollary 7.1: The difficulty can change by at most a factor of 4 between any two difficulty adjustment periods.
Theorem 7.1.1 (Target Expansion Bounds): For valid difficulty bits, target expansion produces valid targets:
$$\forall bits \in \mathbb{N}, 0x03000000 \leq bits \leq 0x1d00ffff:$$ $$\text{ExpandTarget}(bits) \text{ produces valid target } \land \text{ExpandTarget}(bits) \leq 0x00ffffff \times 256^{exponent-3}$$
Where $exponent = (bits \gg 24) \land 0xff$ and $mantissa = bits \land 0x00ffffff$.
Proof: By construction, the target expansion formula ensures that valid bits produce valid targets within the specified bounds. Invalid bits may produce errors, which is acceptable. This is proven by blvm-spec-lock formal verification.
Theorem 7.1.2 (Difficulty Adjustment Bounds Enforcement): Difficulty adjustment respects maximum and minimum bounds:
$$\forall h \in \mathcal{H}, prev \in [\mathcal{H}]:$$ $$\text{GetNextWorkRequired}(h, prev) \leq \text{MAX_TARGET} \land \text{GetNextWorkRequired}(h, prev) > 0$$
Proof: By construction, the difficulty adjustment algorithm clamps the result to ensure it never exceeds $\text{MAX_TARGET}$ and is always positive. This is proven by blvm-spec-lock formal verification.
Theorem 7.2 (Difficulty Convergence): Under constant hash rate, the difficulty converges to the target block time.
Proof: Let $H$ be the constant hash rate and $D$ be the current difficulty. The expected time for the next block is: $$E[T] = \frac{D \times 2^{256}}{H}$$
If $E[T] > targetTime$, then $timeSpan > expectedTime$, so $adjustment > 1$, increasing difficulty. If $E[T] < targetTime$, then $adjustment < 1$, decreasing difficulty. This creates a negative feedback loop that converges to $E[T] = targetTime$.
7.2 Block Validation
CheckProofOfWork: $\mathcal{H} \rightarrow \{\text{true}, \text{false}\}$
$$\text{CheckProofOfWork}(h) = \text{SHA256}(\text{SHA256}(h)) < \text{ExpandTarget}(h.bits)$$
Where SHA256 is the Secure Hash Algorithm and $\text{ExpandTarget}$ converts the compact difficulty representation to a full 256-bit target.
Properties:
- PoW correctness: $\text{CheckProofOfWork}(h) = \text{true} \iff \text{SHA256}(\text{SHA256}(h)) < \text{ExpandTarget}(h.bits)$
- Hash comparison: $\text{CheckProofOfWork}(h)$ compares double-SHA256 hash against expanded target
- Boolean result: $\text{CheckProofOfWork}(h) \in \{\text{true}, \text{false}\}$
- Deterministic: $h_1 = h_2 \implies \text{CheckProofOfWork}(h_1) = \text{CheckProofOfWork}(h_2)$ (same header → same result)
- Target requirement: $\text{CheckProofOfWork}(h)$ requires valid target expansion (bits must be valid)
- Hash length: $\text{SHA256}(\text{SHA256}(h))$ produces 32-byte hash for comparison
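The check above (double-SHA256 compared against an expanded compact target) can be sketched as follows. Treating the hash as a little-endian integer and leaving the exponent arithmetic unguarded are simplifying assumptions:

```python
import hashlib

def expand_target(bits: int) -> int:
    """Expand compact difficulty bits (exponent/mantissa) into a full target."""
    exponent = (bits >> 24) & 0xFF
    mantissa = bits & 0x00FFFFFF
    return mantissa * 256 ** (exponent - 3)

def check_proof_of_work(header: bytes, bits: int) -> bool:
    """Double-SHA256 the serialized header and compare against the target."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    # The hash is interpreted as a little-endian 256-bit integer.
    return int.from_bytes(digest, "little") < expand_target(bits)
```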
8. Security Properties
8.1 Economic Security
Conservation of Value: For any valid transaction $tx$: $$\sum_{i \in tx.inputs} us(i.prevout).value \geq \sum_{o \in tx.outputs} o.value$$
Theorem 8.1 (UTXO Set Invariant): The UTXO set maintains the invariant that the sum of all UTXO values equals the total money supply.
Proof: Let $US_h$ be the UTXO set at height $h$. We prove by induction:
Base case: At height 0 (genesis block), the UTXO set contains only the coinbase output, so the invariant holds.
Inductive step: Assume the invariant holds at height $h-1$. For block $b$ at height $h$:
- Non-coinbase transactions: Each transaction $tx$ satisfies: $$\sum_{i \in tx.inputs} us(i.prevout).value = \sum_{o \in tx.outputs} o.value + \text{fee}(tx)$$
- Coinbase transaction: Only adds value (block subsidy + fees) without spending any inputs.
- UTXO set update: $$\sum_{utxo \in US_h} utxo.value = \sum_{utxo \in US_{h-1}} utxo.value + \text{GetBlockSubsidy}(h) + \sum_{tx \in b.transactions} \text{fee}(tx)$$
Therefore, the total UTXO value increases by exactly the block subsidy plus fees, maintaining the invariant.
Supply Limit: For any height $h$: $$\text{TotalSupply}(h) \leq 21 \times 10^6 \times C$$
Theorem 8.2 (Supply Convergence): The total supply converges to exactly 21 million BTC.
Proof: From Theorem 6.2.3, we have: $$\lim_{h \to \infty} \text{TotalSupply}(h) = 21 \times 10^6 \times C$$
Since the subsidy is clamped to 0 after 64 halvings (height $64 \times 210{,}000 = 13{,}440{,}000$) and each halving period spans 210,000 blocks, the total supply at that height is: $$\text{TotalSupply}(13{,}440{,}000) = 210{,}000 \times 50 \times C \times \sum_{i=0}^{63} \left(\frac{1}{2}\right)^i = 21 \times 10^6 \times C \times (1 - 2^{-64})$$
For practical purposes, $2^{-64} \approx 0$, so the total supply is effectively 21 million BTC.
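The convergence claim can be checked numerically. The constants below ($C = 10^8$ base units per coin, a 210,000-block halving interval, a 64-halving clamp) mirror the spec's parameters but are assumptions of this sketch:

```python
# Numeric check of the supply limit; constants are assumptions mirroring the spec.
C = 100_000_000            # base units per coin
HALVING_INTERVAL = 210_000
INITIAL_SUBSIDY = 50 * C

def get_block_subsidy(height: int) -> int:
    """Halve the subsidy every 210,000 blocks; clamp to zero after 64 halvings."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:
        return 0
    return INITIAL_SUBSIDY >> halvings   # integer halving

def total_supply(height: int) -> int:
    """Sum subsidies for blocks 0..height-1, one halving period at a time."""
    supply, h = 0, 0
    while h < height:
        period_end = min(height, (h // HALVING_INTERVAL + 1) * HALVING_INTERVAL)
        supply += get_block_subsidy(h) * (period_end - h)
        h = period_end
    return supply
```

Because the subsidy halves by integer right-shift, it reaches zero well before the 64-halving clamp, so the realized supply sits slightly below the $21 \times 10^6 \times C$ bound.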
8.2 Integration and Round-Trip Properties
8.2.1 Integration Properties
Integration properties verify that multiple functions work together correctly in multi-function workflows.
Property (Economic Block Integration): For valid blocks, economic rules are consistently enforced:
$$\forall b \in \mathcal{B}, h \in \mathbb{N}: \text{ConnectBlock}(b, us, h) = \text{valid} \implies$$ $$\text{GetBlockSubsidy}(h) + \sum_{tx \in b.transactions} \text{Fee}(tx, us) \geq \sum_{o \in b.transactions[0].outputs} o.value$$
Where $b.transactions[0]$ is the coinbase transaction.
Property (ConnectBlock-DisconnectBlock Idempotency): Connect and disconnect operations are perfect inverses:
$$\forall b \in \mathcal{B}, us \in \mathcal{US}, h \in \mathbb{N}, ul \in \mathcal{UL}:$$ $$\text{ConnectBlock}(b, us, h) = (\text{valid}, us') \implies$$ $$\text{DisconnectBlock}(b, ul, us') = us$$
Where $ul$ is the undo log created during $\text{ConnectBlock}$.
Property (BIP65 + BIP112 Locktime Consistency): Locktime encoding/decoding is consistent across BIPs:
$$\forall lt \in \mathbb{N}_{32}: \text{DecodeLocktime}(\text{EncodeLocktime}(lt)) = lt$$
Property (RBF Conflict Requirement): RBF replacement requires transaction conflict:
$$\forall tx_1, tx_2 \in \mathcal{TX}:$$ $$\text{ReplacementChecks}(tx_1, tx_2) = \text{true} \implies$$ $$\exists i \in tx_1.inputs, j \in tx_2.inputs: i.prevout = j.prevout$$
8.2.2 Round-Trip Properties
Round-trip properties ensure that encoding/decoding and serialization/deserialization operations are inverse operations.
Property (Transaction Serialization Round-Trip): Transaction serialization and deserialization are inverse operations:
$$\forall tx \in \mathcal{TX}: \text{DeserializeTransaction}(\text{SerializeTransaction}(tx)) = tx$$
Property (Block Header Serialization Round-Trip): Block header serialization and deserialization are inverse operations:
$$\forall h \in \mathcal{H}: \text{DeserializeHeader}(\text{SerializeHeader}(h)) = h$$
Property (Serialization Determinism): Serialization is deterministic:
$$\forall tx_1, tx_2 \in \mathcal{TX}: tx_1 = tx_2 \iff \text{SerializeTransaction}(tx_1) = \text{SerializeTransaction}(tx_2)$$
Property (Locktime Encoding Round-Trip): Locktime encoding and decoding are inverse operations:
$$\forall lt \in \mathbb{N}_{32}: \text{DecodeLocktime}(\text{EncodeLocktime}(lt)) = lt$$
8.3 Cryptographic Security
Signature Verification: For public key $pk$, signature $sig$, and message hash $m$: $$\text{VerifySignature}(pk, sig, m) = \text{secp256k1_verify}(pk, sig, m)$$
Where secp256k1 is the elliptic curve used by Bitcoin and ECDSA is the signature algorithm.
Theorem 8.3 (Signature Security): Assuming the discrete logarithm problem is hard in the secp256k1 group, signature forgery is computationally infeasible.
Proof: This follows directly from the security of ECDSA with secp256k1. Any successful signature forgery would imply a solution to the discrete logarithm problem in the secp256k1 group, which is believed to be computationally infeasible.
Script Security: For script $s$ and flags $f$: $$\text{ScriptSecure}(s, f) = |s| \leq L_{script} \land \text{OpCount}(s) \leq L_{ops}$$
Theorem 8.4 (Script Execution Bounds): Script execution is bounded in time and space.
Proof: From the script limits:
- Maximum script size: $L_{script} = 10,000$ bytes
- Maximum operations: $L_{ops} = 201$
- Maximum stack size: $L_{stack} = 1,000$
Since each operation takes constant time and the stack size is bounded, script execution performs at most $O(L_{ops})$ steps; with all limits fixed constants, worst-case execution time is $O(1)$.
8.4 Merkle Tree Security
Theorem 8.5 (Merkle Tree Integrity): The merkle root commits to all transactions in the block.
Proof: The merkle root is computed as: $$\text{MerkleRoot}(txs) = \text{ComputeMerkleRoot}({\text{Hash}(tx) : tx \in txs})$$
Any change to any transaction would result in a different merkle root, assuming SHA-256 is collision-resistant.
Theorem 8.6 (Merkle Tree Malleability): Bitcoin’s merkle tree implementation is vulnerable to CVE-2012-2459.
Proof: The vulnerability occurs when the number of hashes at a given level is odd, causing the last hash to be duplicated. This can result in different transaction lists producing the same merkle root. The implementation mitigates this by detecting when identical hashes are hashed together and treating such blocks as invalid.
Corollary 8.1: The merkle tree provides cryptographic commitment to transaction inclusion but requires additional validation to prevent malleability attacks.
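The mitigation described in Theorem 8.6 can be sketched as follows. Flagging equal hashes only when both occupy real positions in a level (so the legitimate odd-last duplication is not flagged) is an assumption modeled on common practice, not the blvm-consensus code:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double-SHA256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def compute_merkle_root(hashes: list) -> bytes:
    """Merkle root with a CVE-2012-2459 mutation check (illustrative sketch)."""
    if not hashes:
        raise ValueError("empty hash list")
    level = list(hashes)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            left = level[i]
            if i + 1 < len(level):
                right = level[i + 1]
                # Two identical hashes in real positions signal a possible
                # mutated block (CVE-2012-2459): reject it.
                if left == right:
                    raise ValueError("duplicate pair: possible mutation")
            else:
                # Odd level: the last hash pairs with itself; this synthesized
                # duplicate is legitimate and is not flagged.
                right = left
            nxt.append(sha256d(left + right))
        level = nxt
    return level[0]
```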
8.5 Deterministic Properties
Many consensus functions must be deterministic to ensure all nodes reach the same results.
Theorem 8.5.1 (Proof of Work Determinism): Proof of work validation is deterministic:
$$\forall h \in \mathcal{H}: \text{CheckProofOfWork}(h) \text{ is deterministic}$$
Proof: The function uses only the block header and deterministic hash functions (SHA256). Given the same header, it always produces the same result. This is proven by blvm-spec-lock formal verification.
Theorem 8.5.2 (Transaction Application Determinism): Transaction application is deterministic:
$$\forall tx \in \mathcal{TX}, us \in \mathcal{US}, h \in \mathbb{N}:$$ $$\text{ApplyTransaction}(tx, us, h) \text{ is deterministic}$$
Proof: Transaction application uses only the transaction, UTXO set, and height. All operations (UTXO removal, UTXO addition) are deterministic. The consistency and correctness of transaction application is proven by blvm-spec-lock formal verification.
Theorem 8.5.3 (Block Connection Determinism): Block connection is deterministic:
$$\forall b \in \mathcal{B}, us \in \mathcal{US}, h \in \mathbb{N}:$$ $$\text{ConnectBlock}(b, us, h) \text{ is deterministic}$$
Proof: Block connection applies transactions deterministically and performs deterministic validation checks. This ensures all nodes reach the same consensus state.
9. Mempool Protocol
9.1 Mempool Validation
AcceptToMemoryPool: $\mathcal{TX} \times \mathcal{US} \rightarrow \{\text{accepted}, \text{rejected}\}$
Properties:
- Acceptance correctness: $\text{AcceptToMemoryPool}(tx, us) = \text{accepted} \implies \text{CheckTransaction}(tx) = \text{valid} \land \neg \text{IsCoinbase}(tx)$
- Coinbase rejection: $\text{IsCoinbase}(tx) = \text{true} \implies \text{AcceptToMemoryPool}(tx, us) = \text{rejected}$
- Result type: $\text{AcceptToMemoryPool}(tx, us) \in \{\text{accepted}, \text{rejected}\}$
A transaction $tx$ is accepted to the mempool if and only if:
- Basic Validation: $tx$ passes CheckTransaction
- Non-Coinbase: $\neg \text{IsCoinbase}(tx)$
- Standard Transaction: $\text{IsStandardTx}(tx)$ (see Standard Transaction Rules)
- Size Limits: $|\text{Serialize}(tx)| \geq 65$ bytes (minimum non-witness size)
- Finality: $\text{CheckFinalTxAtTip}(tx)$ (see Transaction Finality)
- Fee Requirements: $\text{FeeRate}(tx) \geq \text{minRelayFeeRate}$
- SigOps Limit: $\text{SigOpsCount}(tx) \leq \text{MAX_STANDARD_TX_SIGOPS_COST}$
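The acceptance predicate above is a conjunction of independent gates. A minimal sketch, with the individual checks supplied as callables (the dictionary shape and constant values are assumptions of this illustration, not the blvm-node API):

```python
# Hypothetical illustration of the mempool acceptance gates; the helper
# checks are stand-ins for the spec functions, supplied by the caller.
MIN_TX_SIZE = 65                       # minimum non-witness serialized size
MAX_STANDARD_TX_SIGOPS_COST = 80_000   # illustrative sigops limit

def accept_to_memory_pool(tx, checks) -> bool:
    """Apply the gates in order; reject on the first failure."""
    return (
        checks["check_transaction"](tx)
        and not checks["is_coinbase"](tx)
        and checks["is_standard_tx"](tx)
        and checks["serialized_size"](tx) >= MIN_TX_SIZE
        and checks["is_final_at_tip"](tx)
        and checks["fee_rate"](tx) >= checks["min_relay_fee_rate"]
        and checks["sigops_count"](tx) <= MAX_STANDARD_TX_SIGOPS_COST
    )
```

Short-circuit evaluation mirrors the fail-fast behavior a real mempool needs: cheap structural checks run before fee and sigops accounting.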
9.2 Standard Transaction Rules
IsStandardTx: $\mathcal{TX} \rightarrow \{\text{true}, \text{false}\}$
Properties:
- Standard version: $\text{IsStandardTx}(tx) = \text{true} \implies tx.version \in \{1, 2\}$ (version must be 1 or 2)
- Standard scripts: $\text{IsStandardTx}(tx) = \text{true} \implies$ all outputs use standard script types (P2PKH, P2SH, P2WPKH, P2WSH, P2TR)
- Boolean result: $\text{IsStandardTx}(tx) \in \{\text{true}, \text{false}\}$
- Deterministic: $tx_1 = tx_2 \implies \text{IsStandardTx}(tx_1) = \text{IsStandardTx}(tx_2)$ (same transaction → same result)
- Standard transaction subset: $\text{IsStandardTx}(tx) = \text{true} \implies \text{CheckTransaction}(tx) = \text{valid}$ (standard implies valid)
- Non-standard rejection: $\text{IsStandardTx}(tx) = \text{false} \implies$ transaction may be rejected by mempool
A transaction is standard if:
- Version: $tx.version \in \{1, 2\}$
- Script Types: All outputs use standard script types:
  - P2PKH: OP_DUP OP_HASH160 <20-byte-hash> OP_EQUALVERIFY OP_CHECKSIG
  - P2SH: OP_HASH160 <20-byte-hash> OP_EQUAL
  - P2WPKH: OP_0 <20-byte-hash>
  - P2WSH: OP_0 <32-byte-hash>
  - P2TR: OP_1 <32-byte-hash>
- Data Carrier: OP_RETURN outputs $\leq$ 83 bytes
- Dust Threshold: All outputs $\geq$ dust threshold
- Multisig: $\leq$ 3 keys for bare multisig
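The script templates above can be matched byte-for-byte. A template-only sketch (a real classifier would also validate pushes and key encodings; the function name is illustrative):

```python
# Template matching for the standard output-script types (sketch only).
def classify_standard_script(spk: bytes):
    """Return the standard type name for a scriptPubKey, or None."""
    if len(spk) == 25 and spk[:3] == b"\x76\xa9\x14" and spk[23:] == b"\x88\xac":
        return "P2PKH"   # OP_DUP OP_HASH160 <20> OP_EQUALVERIFY OP_CHECKSIG
    if len(spk) == 23 and spk[:2] == b"\xa9\x14" and spk[22] == 0x87:
        return "P2SH"    # OP_HASH160 <20> OP_EQUAL
    if len(spk) == 22 and spk[:2] == b"\x00\x14":
        return "P2WPKH"  # OP_0 <20>
    if len(spk) == 34 and spk[:2] == b"\x00\x20":
        return "P2WSH"   # OP_0 <32>
    if len(spk) == 34 and spk[:2] == b"\x51\x20":
        return "P2TR"    # OP_1 <32>
    return None
```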
9.3 Replace-By-Fee (RBF)
ReplacementChecks: $\mathcal{TX} \times \mathcal{TX} \rightarrow \{\text{true}, \text{false}\}$
Properties:
- RBF requirement: $\text{ReplacementChecks}(tx_1, tx_2) = \text{true} \implies \exists i \in tx_1.inputs : i.sequence < \text{SEQUENCE_FINAL}$ (RBF signaling required)
- Fee bump requirement: $\text{ReplacementChecks}(tx_1, tx_2) = \text{true} \implies \text{FeeRate}(tx_2) > \text{FeeRate}(tx_1)$ (higher fee rate required)
- Boolean result: $\text{ReplacementChecks}(tx_1, tx_2) \in \{\text{true}, \text{false}\}$
- Deterministic: $tx_{1a} = tx_{1b} \land tx_{2a} = tx_{2b} \implies \text{ReplacementChecks}(tx_{1a}, tx_{2a}) = \text{ReplacementChecks}(tx_{1b}, tx_{2b})$ (same pair → same result)
- Conflict requirement: $\text{ReplacementChecks}(tx_1, tx_2) = \text{true} \implies$ $tx_1$ and $tx_2$ must conflict (share inputs)
- Same transaction ID: $\text{ReplacementChecks}(tx_1, tx_2) = \text{true} \implies tx_1.id \neq tx_2.id$ (different transactions)
Transaction $tx_2$ can replace $tx_1$ if:
- RBF Signaling: $tx_1$ has any input with $nSequence < \text{SEQUENCE_FINAL}$
- Fee Bump: $\text{FeeRate}(tx_2) > \text{FeeRate}(tx_1)$
- Absolute Fee: $\text{Fee}(tx_2) > \text{Fee}(tx_1) + \text{minRelayFee}$
- Conflicts: $tx_2$ spends at least one input from $tx_1$
- No New Unconfirmed: All inputs of $tx_2$ are confirmed or from $tx_1$
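The first four replacement rules above can be sketched directly. The transaction shape (dicts with `inputs` as `(prevout, sequence)` pairs, `fee`, and `size`) is a hypothetical representation for this illustration, and the "no new unconfirmed inputs" rule is omitted because it needs mempool/UTXO context:

```python
# BIP125-style replacement gates (sketch; omits the unconfirmed-inputs rule).
SEQUENCE_FINAL = 0xFFFFFFFF

def replacement_checks(tx1, tx2, min_relay_fee: int) -> bool:
    """Can tx2 replace tx1? Checks signaling, conflict, and fee rules."""
    # RBF signaling: tx1 must have at least one non-final sequence number.
    signals_rbf = any(seq < SEQUENCE_FINAL for _, seq in tx1["inputs"])
    # Conflict: tx2 must spend at least one of tx1's prevouts.
    prevouts1 = {p for p, _ in tx1["inputs"]}
    prevouts2 = {p for p, _ in tx2["inputs"]}
    conflicts = bool(prevouts1 & prevouts2)
    fee_rate_1 = tx1["fee"] / tx1["size"]
    fee_rate_2 = tx2["fee"] / tx2["size"]
    return (
        signals_rbf
        and conflicts
        and fee_rate_2 > fee_rate_1                      # fee-rate bump
        and tx2["fee"] > tx1["fee"] + min_relay_fee      # absolute fee bump
    )
```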
10. Network Protocol
The Bitcoin network protocol enables nodes to synchronize the blockchain and relay transactions. The protocol operates over TCP connections and uses a message-based communication system.
10.1 Message Types
NetworkMessage: $\mathcal{M} = \{\text{version}, \text{verack}, \text{addr}, \text{inv}, \text{getdata}, \text{tx}, \text{block}, \text{headers}, \text{getheaders}, \text{ping}, \text{pong}\}$
Message Flow:
- Connection: Nodes establish TCP connections
- Handshake: Exchange `version` and `verack` messages
- Synchronization: Request and receive blocks/headers
- Transaction Relay: Broadcast new transactions
- Maintenance: Periodic `ping`/`pong` to maintain connections
10.2 Connection Management
Connection Types:
- Outbound: Active connections to other nodes
- Inbound: Passive connections from other nodes
- Feeler: Short-lived connections for peer discovery
- Block-Relay: Connections that only relay blocks
10.3 Peer Discovery
AddrMan: Address manager maintaining peer database
GetAddr: Request peer addresses from connected nodes
Addr: Broadcast known peer addresses
10.4 Block Synchronization
GetHeaders: Request block headers from a specific point
Headers: Response containing block headers
GetBlocks: Request block inventory (deprecated)
Inv: Inventory message listing available objects
GetData: Request specific objects (blocks, transactions)
Block: Full block data
MerkleBlock: Block with merkle proof for filtered nodes
10.5 Transaction Relay
Tx: Broadcast transaction to peers
MemPool: Request mempool contents
FeeFilter: Set minimum fee rate for transaction relay
10.6 Dandelion++ k-Anonymity
Adversary Model: Passive observer capable of monitoring network traffic, operating nodes, and performing graph analysis.
k-Anonymity Definition: A transaction $tx$ satisfies k-anonymity if, from the adversary’s perspective, $tx$ could have originated from at least $k$ distinct nodes with equal probability.
Formally, let:
- $O$ = set of nodes that could have originated $tx$ (from adversary’s view)
- $P(O = N_i | \text{Evidence})$ = probability that node $N_i$ originated $tx$ given observed evidence
Then $tx$ has k-anonymity if:
- $|O| \geq k$
- $\forall N_i, N_j \in O: P(O = N_i | \text{Evidence}) = P(O = N_j | \text{Evidence})$
Stem Phase Parameters:
- $p_{\text{fluff}} \in [0, 1]$: Probability of transitioning to fluff at each hop (default: 0.1)
- $\text{max_stem_hops} \in \mathbb{N}$: Maximum number of stem hops before forced fluff (default: 2)
- $\text{stem_timeout} \in \mathbb{R}^+$: Maximum duration (seconds) in stem phase before timeout fluff
Stem Phase Algorithm: $$\text{stem_phase_relay}(tx, \text{current_peer}, \text{peers}) \rightarrow \text{Option}<\text{Peer}>$$
- If $tx$ already in stem phase:
  - If $\text{elapsed_time}(tx) > \text{stem_timeout}$: return $\text{None}$ (fluff via timeout)
  - If $\text{hop_count}(tx) \geq \text{max_stem_hops}$: return $\text{None}$ (fluff via hop limit)
  - If $\text{random}() < p_{\text{fluff}}$: return $\text{None}$ (fluff via probability)
  - Otherwise: $\text{advance_stem}(tx)$ → return $\text{Some}(\text{next_peer})$
- Else: $\text{start_stem_phase}(tx)$ → return $\text{Some}(\text{next_peer})$
Fluff Phase: When algorithm returns $\text{None}$, transaction enters fluff phase and is broadcast to all peers (standard Bitcoin relay).
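The stem-phase decision procedure above can be sketched as follows. Keeping per-transaction state in a plain dict and injecting the random source are conveniences of this illustration, not the blvm-node types:

```python
import random

# Sketch of the Dandelion++ stem-phase decision; constants mirror the
# defaults above, the timeout value is illustrative.
P_FLUFF = 0.1
MAX_STEM_HOPS = 2
STEM_TIMEOUT = 30.0  # seconds

def stem_phase_relay(state: dict, peers: list, now: float, rng=random.random):
    """Return the next stem peer, or None to transition to fluff."""
    if state.get("in_stem"):
        if now - state["start_time"] > STEM_TIMEOUT:
            return None                  # fluff via timeout
        if state["hops"] >= MAX_STEM_HOPS:
            return None                  # fluff via hop limit
        if rng() < P_FLUFF:
            return None                  # fluff via coin flip
        state["hops"] += 1               # advance the stem
    else:
        state.update(in_stem=True, hops=0, start_time=now)
    return random.choice(peers)          # single-peer relay (no broadcast)
```

Returning a single peer or `None` enforces Theorem 4 by construction: the stem phase never broadcasts to multiple peers.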
Theorem 1 (Stem Phase Anonymity): During the stem phase, if the adversary observes a transaction at node $N_i$, the set of possible originators includes all nodes that have been on the stem path up to $N_i$.
Proof Sketch: The adversary cannot distinguish between:
- $tx$ originated at $N_i$ and is in its first stem hop
- $tx$ originated at any previous node $N_j$ ($j < i$) and is being forwarded
The random peer selection ensures uniform probability distribution over all possible originators in the path.
Theorem 2 (Minimum k-Anonymity): For a stem path of length $h$ hops, the minimum k-anonymity is $k \geq h + 1$.
Proof: A stem path $N_0 \rightarrow N_1 \rightarrow \ldots \rightarrow N_h$ contains $h + 1$ nodes. From the adversary’s perspective at $N_h$, any of these $h + 1$ nodes could have originated $tx$. Therefore, $k \geq h + 1$.
Corollary: With $\text{max_stem_hops} = 2$, we guarantee $k \geq 3$ (3-anonymity).
Theorem 3 (Timeout Guarantee): Even if the adversary controls all peers except the originator, the stem phase will terminate within $\text{stem_timeout}$ seconds.
Proof: The timeout check ensures $tx$ transitions to fluff phase within $\text{stem_timeout}$ seconds regardless of peer behavior.
Theorem 4 (No Premature Broadcast): During the stem phase, a transaction is never broadcast to multiple peers simultaneously.
Proof: The algorithm returns $\text{Option}<\text{Peer}>$ where $\text{Some}(\text{peer})$ indicates single-peer relay and $\text{None}$ indicates transition to fluff. The fluff phase is the only mechanism for broadcast.
Implementation Invariants (BLVM Specification Lock Verified):
- No Premature Broadcast: $\forall tx, \text{phase}: \text{phase} = \text{Stem} \implies \text{broadcast_count}(tx) = 0$
- Bounded Stem Length: $\forall tx: \text{stem_hops}(tx) \leq \text{max_stem_hops}$
- Timeout Enforcement: $\forall tx: \text{elapsed_time}(tx) > \text{stem_timeout} \implies \text{phase}(tx) = \text{Fluff}$
- Single Stem State: $\forall tx: |\text{stem_states}(tx)| \leq 1$
- Eventual Fluff: $\forall tx: \exists t: \text{phase_at_time}(tx, t) = \text{Fluff}$
11. Advanced Features
11.1 Segregated Witness (SegWit)
Witness Data: $\mathcal{W} = \mathbb{S}^*$ (stack of witness elements)
Witness Commitment: Coinbase transaction includes witness root hash $$\text{WitnessRoot} = \text{ComputeMerkleRoot}(\{\text{Hash}(tx.witness) : tx \in block.transactions\})$$
Weight Calculation: $$\text{Weight}(tx) = 3 \times |\text{Serialize}(tx \setminus witness)| + |\text{Serialize}(tx)|$$
11.1.1 Weight and Size Calculations
CalculateTransactionWeight: $\mathcal{TX} \times \mathcal{W}^? \rightarrow \mathbb{N}$
Properties:
- Weight formula: $\text{CalculateTransactionWeight}(tx, w) = 3 \times \text{BaseSize}(tx) + \text{TotalSize}(tx, w)$
- Non-negativity: $\text{CalculateTransactionWeight}(tx, w) \geq 0$ for all valid transactions
- Minimum weight: $\text{CalculateTransactionWeight}(tx, w) \geq 4$ (one byte of non-witness data contributes 4 weight units)
- Weight bounds: $\text{CalculateTransactionWeight}(tx, w) \leq W_{\text{max_tx_weight}}$ for valid transactions
- Base size component: $\text{CalculateTransactionWeight}(tx, w) \geq 3 \times \text{BaseSize}(tx)$ (base size is always included)
- Total size component: $\text{CalculateTransactionWeight}(tx, w) \geq \text{TotalSize}(tx, w)$ (total size is always included)
- Deterministic: $tx_1 = tx_2 \land w_1 = w_2 \implies \text{CalculateTransactionWeight}(tx_1, w_1) = \text{CalculateTransactionWeight}(tx_2, w_2)$ (same inputs → same result)
- Witness impact: $\text{CalculateTransactionWeight}(tx, \text{Some}(w)) \geq \text{CalculateTransactionWeight}(tx, \text{None})$ (witness data increases weight)
For transaction $tx$ and witness $w$:
$$\text{CalculateTransactionWeight}(tx, w) = 3 \times \text{BaseSize}(tx) + \text{TotalSize}(tx, w)$$
Where:
- $\text{BaseSize}(tx) = |\text{Serialize}(tx \setminus witness)|$ (transaction size without witness data)
- $\text{TotalSize}(tx, w) = |\text{Serialize}(tx)|$ (transaction size with witness data)
This matches BIP141: weight equals three times the stripped (non-witness) size plus the full serialized size, so a transaction without witness data has weight exactly four times its serialized size.
WeightToVSize: $\mathbb{N} \rightarrow \mathbb{N}$
Properties:
- Ceiling division: $\text{WeightToVSize}(w) = \lceil w / 4 \rceil = (w + 3) / 4$
- Lower bound: $\text{WeightToVSize}(w) \geq w / 4$ for all $w \in \mathbb{N}$
- Upper bound: $\text{WeightToVSize}(w) \leq (w / 4) + 1$ for all $w \in \mathbb{N}$
- Zero weight: $\text{WeightToVSize}(0) = 0$
- Exact division: $w \bmod 4 = 0 \implies \text{WeightToVSize}(w) = w / 4$
For weight $w$:
$$\text{WeightToVSize}(w) = \lceil w / 4 \rceil$$
Implemented as: $\text{WeightToVSize}(w) = (w + 3) / 4$ (integer ceiling division).
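The weight and vsize formulas above reduce to two one-liners (a minimal sketch; the function names are illustrative):

```python
# BIP141 weight and virtual size (sketch).
MAX_BLOCK_WEIGHT = 4_000_000

def transaction_weight(base_size: int, total_size: int) -> int:
    """Weight = 3 x stripped (non-witness) size + full serialized size."""
    return 3 * base_size + total_size

def weight_to_vsize(weight: int) -> int:
    """Virtual size is weight / 4, rounded up (integer ceiling division)."""
    return (weight + 3) // 4
```

For a legacy transaction (no witness), `total_size == base_size`, so weight is exactly four times the size and vsize equals the serialized size.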
CalculateBlockWeight: $\mathcal{B} \times \mathcal{W}^* \rightarrow \mathbb{N}$
Properties:
- Weight positivity: $\text{CalculateBlockWeight}(b, w_1, \ldots, w_n) > 0$ for all valid blocks
- Minimum weight: $\text{CalculateBlockWeight}(b, w_1, \ldots, w_n) \geq |b.transactions| \times 4$ (minimum weight per transaction)
- Block limit: $\text{CalculateBlockWeight}(b, w_1, \ldots, w_n) \leq W_{\text{max}}$ for valid blocks
- Sum property: $\text{CalculateBlockWeight}(b, w_1, \ldots, w_n) = \sum_{i=1}^{|b.transactions|} \text{CalculateTransactionWeight}(b.transactions[i], w_i)$
- Deterministic: $b_1 = b_2 \land (w_{1a}, \ldots) = (w_{2a}, \ldots) \implies \text{CalculateBlockWeight}(b_1, w_{1a}, \ldots) = \text{CalculateBlockWeight}(b_2, w_{2a}, \ldots)$ (same inputs → same result)
- Witness count: $\text{CalculateBlockWeight}(b, w_1, \ldots, w_n)$ requires $n = |b.transactions|$ (one witness per transaction)
- Monotonicity: Adding transactions increases weight: if $b_2$ contains all of $b_1$'s transactions plus at least one more, then $\text{CalculateBlockWeight}(b_1, \ldots) < \text{CalculateBlockWeight}(b_2, \ldots)$
For block $b$ and witnesses $w_1, \ldots, w_n$:
$$\text{CalculateBlockWeight}(b, w_1, \ldots, w_n) = \sum_{i=1}^{|b.\text{transactions}|} \text{CalculateTransactionWeight}(b.\text{transactions}[i], w_i)$$
Block Weight Limit: For block $b$:
$$\text{CalculateBlockWeight}(b, w_1, \ldots, w_n) \leq W_{max}$$
Where $W_{max} = 4,000,000$ (MAX_BLOCK_WEIGHT).
11.1.2 Witness Structure Validation
ValidateSegWitWitnessStructure: $\mathcal{W} \rightarrow \{\text{true}, \text{false}\}$
Properties:
- Element size bounds: $\text{ValidateSegWitWitnessStructure}(w) = \text{true} \iff \forall e \in w : |e| \leq 520$
- Empty witness: $\text{ValidateSegWitWitnessStructure}(\emptyset) = \text{true}$ (empty witness is valid)
- Structure validation: $\text{ValidateSegWitWitnessStructure}(w) = \text{true} \implies$ all witness elements respect size limits
For witness $w$:
$$\text{ValidateSegWitWitnessStructure}(w) = \forall e \in w : |e| \leq 520$$
Where 520 is MAX_SCRIPT_ELEMENT_SIZE (maximum witness element size per BIP141).
IsWitnessEmpty: $\mathcal{W} \rightarrow \{\text{true}, \text{false}\}$
Properties:
- Empty definition: $\text{IsWitnessEmpty}(w) = \text{true} \iff (|w| = 0) \lor (\forall e \in w : |e| = 0)$
- Boolean result: $\text{IsWitnessEmpty}(w) \in \{\text{true}, \text{false}\}$
- Empty witness: $\text{IsWitnessEmpty}(\emptyset) = \text{true}$
For witness $w$:
$$\text{IsWitnessEmpty}(w) = (|w| = 0) \lor (\forall e \in w : |e| = 0)$$
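Both predicates above translate directly to code (a minimal sketch, modeling a witness as a list of byte strings):

```python
# Witness structure checks from 11.1.2 (sketch).
MAX_SCRIPT_ELEMENT_SIZE = 520  # per BIP141

def validate_witness_structure(witness: list) -> bool:
    """Every witness element must respect the 520-byte element limit."""
    return all(len(e) <= MAX_SCRIPT_ELEMENT_SIZE for e in witness)

def is_witness_empty(witness: list) -> bool:
    """Empty stack, or a stack containing only zero-length elements."""
    return len(witness) == 0 or all(len(e) == 0 for e in witness)
```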
11.1.3 Witness Program Extraction
ExtractWitnessVersion: $\mathbb{S} \rightarrow \{\text{None}, \text{SegWitV0}, \text{TaprootV1}\}$
Properties:
- Version range: $\text{ExtractWitnessVersion}(s) \neq \text{None} \implies |s| \geq 2 \land (s[0] = 0x00 \lor s[0] = 0x51)$
- Valid version: $\text{ExtractWitnessVersion}(s) = \text{SegWitV0} \implies s[0] = 0x00 \land |s| \geq 2$
- Taproot version: $\text{ExtractWitnessVersion}(s) = \text{TaprootV1} \implies s[0] = 0x51 \land |s| \geq 2$
For script $s$:
$$\text{ExtractWitnessVersion}(s) = \begin{cases} \text{SegWitV0} & \text{if } |s| \geq 2 \land s[0] = 0x00 \\ \text{TaprootV1} & \text{if } |s| \geq 2 \land s[0] = 0x51 \\ \text{None} & \text{otherwise} \end{cases}$$
ExtractWitnessProgram: $\mathbb{S} \times \{\text{SegWitV0}, \text{TaprootV1}\} \rightarrow \mathbb{S}^?$
Properties:
- Program extraction: $\text{ExtractWitnessProgram}(s, v) = \text{Some}(p) \implies |s| \geq 3$ (minimum script length)
- SegWit program: $\text{ExtractWitnessProgram}(s, \text{SegWitV0}) = \text{Some}(p) \implies s[1] \in \{0x14, 0x20\} \land |s| \geq 3$
- Taproot program: $\text{ExtractWitnessProgram}(s, \text{TaprootV1}) = \text{Some}(p) \implies s[1] = 0x20 \land |s| \geq 3$
For script $s$ and version $v$:
$$\text{ExtractWitnessProgram}(s, v) = \begin{cases} s[2..|s|] & \text{if } v = \text{SegWitV0} \land s[1] \in \{0x14, 0x20\} \\ s[2..|s|] & \text{if } v = \text{TaprootV1} \land s[1] = 0x20 \\ \text{None} & \text{otherwise} \end{cases}$$
ValidateWitnessProgramLength: $\mathbb{S} \times \{\text{SegWitV0}, \text{TaprootV1}\} \rightarrow \{\text{true}, \text{false}\}$
Properties:
- Valid program length: $\text{ValidateWitnessProgramLength}(p, v) = \text{true} \implies |p| > 0$
- SegWit length: $\text{ValidateWitnessProgramLength}(p, \text{SegWitV0}) = \text{true} \iff |p| = 20 \lor |p| = 32$
- Taproot length: $\text{ValidateWitnessProgramLength}(p, \text{TaprootV1}) = \text{true} \iff |p| = 32$
For program $p$ and version $v$:
$$\text{ValidateWitnessProgramLength}(p, v) = \begin{cases} |p| = 20 \lor |p| = 32 & \text{if } v = \text{SegWitV0} \\ |p| = 32 & \text{if } v = \text{TaprootV1} \\ \text{false} & \text{otherwise} \end{cases}$$
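The three extraction/validation functions of this subsection can be sketched together (version names as strings and the unchecked push byte are simplifications of this illustration):

```python
# Witness version/program extraction from 11.1.3 (sketch).
def extract_witness_version(script: bytes):
    """Map the leading opcode to a witness version (OP_0 or OP_1 only here)."""
    if len(script) < 2:
        return None
    if script[0] == 0x00:
        return "SegWitV0"
    if script[0] == 0x51:
        return "TaprootV1"
    return None

def extract_witness_program(script: bytes, version: str):
    """Return the pushed program bytes when the push length is well-formed."""
    if len(script) < 3:
        return None
    push = script[1]
    if version == "SegWitV0" and push in (0x14, 0x20):
        return script[2:]
    if version == "TaprootV1" and push == 0x20:
        return script[2:]
    return None

def validate_witness_program_length(program: bytes, version: str) -> bool:
    """SegWit v0 programs are 20 or 32 bytes; Taproot programs are 32 bytes."""
    if version == "SegWitV0":
        return len(program) in (20, 32)
    if version == "TaprootV1":
        return len(program) == 32
    return False
```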
11.1.4 Witness Merkle Root
ComputeWitnessMerkleRoot: $\mathcal{B} \times \mathcal{W}^* \rightarrow \mathbb{H}$
Properties:
- Root length: $\text{ComputeWitnessMerkleRoot}(b, w_1, \ldots, w_n) = root \implies |root| = 32$ (32-byte hash)
- Non-zero root: $\text{ComputeWitnessMerkleRoot}(b, w_1, \ldots, w_n) = root \implies root \neq 0^{32}$ (unless all witnesses empty)
- Block requirement: $\text{ComputeWitnessMerkleRoot}(b, w_1, \ldots, w_n)$ requires $|b.transactions| > 0$
For block $b$ and witnesses $w_1, \ldots, w_n$:
$$\text{ComputeWitnessMerkleRoot}(b, w_1, \ldots, w_n) = \text{ComputeMerkleRoot}(\{\text{Hash}(w_i) : i \in [1, |b.\text{transactions}|]\})$$
Where:
- $\text{Hash}(w_1) = [0]^{32}$ (coinbase witness is empty, hash is zero)
- $\text{Hash}(w_i) = \text{SHA256}(\text{SHA256}(\text{Serialize}(w_i)))$ for $i > 1$
11.1.5 Witness Commitment Validation
ValidateWitnessCommitment: $\mathcal{TX} \times \mathbb{H} \rightarrow \{\text{true}, \text{false}\}$
Properties:
- Coinbase requirement: $\text{ValidateWitnessCommitment}(tx, root) = \text{true} \implies \text{IsCoinbase}(tx) = \text{true}$
- Commitment existence: $\text{ValidateWitnessCommitment}(tx, root) = \text{true} \iff \exists o \in tx.outputs : \text{IsWitnessCommitment}(o.scriptPubkey, root)$
- Boolean result: $\text{ValidateWitnessCommitment}(tx, root) \in \{\text{true}, \text{false}\}$
For coinbase transaction $tx$ and witness root $root$:
$$\text{ValidateWitnessCommitment}(tx, root) = \exists o \in tx.\text{outputs} : \text{IsWitnessCommitment}(o.\text{scriptPubkey}, root)$$
Where $\text{IsWitnessCommitment}(spk, root) = (|spk| \geq 38) \land (spk[0] = 0x6a) \land (spk[1] = 0x24) \land (spk[2..6] = \text{0xaa21a9ed}) \land (spk[6..38] = \text{SHA256}(\text{SHA256}(root \| nonce)))$, with $nonce$ the 32-byte witness reserved value carried in the coinbase input's witness (BIP141)
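Per BIP141, the commitment output is `OP_RETURN`, a 36-byte push, the 4-byte header `0xaa21a9ed`, and the double-SHA256 of the witness root concatenated with the witness reserved value. A sketch of that check (function names are illustrative):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

WITNESS_COMMITMENT_HEADER = bytes.fromhex("aa21a9ed")

def is_witness_commitment(spk: bytes, witness_root: bytes, nonce: bytes) -> bool:
    """BIP141 commitment: OP_RETURN (0x6a), push-36 (0x24), 4-byte header,
    then SHA256d(witness_root || witness_nonce)."""
    if len(spk) < 38 or spk[0] != 0x6A or spk[1] != 0x24:
        return False
    if spk[2:6] != WITNESS_COMMITMENT_HEADER:
        return False
    return spk[6:38] == sha256d(witness_root + nonce)
```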
11.1.6 SegWit Transaction Detection
IsSegWitTransaction: $\mathcal{TX} \rightarrow \{\text{true}, \text{false}\}$
Properties:
- Output detection: $\text{IsSegWitTransaction}(tx) = \text{true} \iff \exists o \in tx.outputs : \text{IsSegWitOutput}(o.scriptPubkey)$
- Boolean result: $\text{IsSegWitTransaction}(tx) \in \{\text{true}, \text{false}\}$
- Witness presence: $\text{IsSegWitTransaction}(tx) = \text{true} \implies$ transaction may have witness data
For transaction $tx$:
$$\text{IsSegWitTransaction}(tx) = \exists o \in tx.\text{outputs} : \text{IsSegWitOutput}(o.\text{scriptPubkey})$$
Where $\text{IsSegWitOutput}(spk) = (spk[0] = 0x00) \land ((|spk| = 22 \land spk[1] = 0x14) \lor (|spk| = 34 \land spk[1] = 0x20))$ (the push length must match the overall script length)
11.1.7 Block Validation
ValidateSegWitBlock: $\mathcal{B} \times \mathcal{W}^* \times \mathbb{H} \rightarrow \{\text{valid}, \text{invalid}\}$
Properties:
- Validation correctness: $\text{ValidateSegWitBlock}(b, w_1, \ldots, w_n, root) = \text{valid} \iff \text{ComputeWitnessMerkleRoot}(b, w_1, \ldots, w_n) = root \land \text{CalculateBlockWeight}(b, w_1, \ldots, w_n) \leq W_{\text{max}}$
- Boolean result: $\text{ValidateSegWitBlock}(b, w_1, \ldots, w_n, root) \in \{\text{valid}, \text{invalid}\}$
- Weight limit: $\text{ValidateSegWitBlock}(b, w_1, \ldots, w_n, root) = \text{valid} \implies \text{CalculateBlockWeight}(b, w_1, \ldots, w_n) \leq W_{\text{max}}$
For block $b$, witnesses $w_1, \ldots, w_n$, and witness root $root$:
$$\text{ValidateSegWitBlock}(b, w_1, \ldots, w_n, root) = \begin{cases} \text{valid} & \text{if } \text{ComputeWitnessMerkleRoot}(b, w_1, \ldots, w_n) = root \land \text{CalculateBlockWeight}(b, w_1, \ldots, w_n) \leq W_{max} \\ \text{invalid} & \text{otherwise} \end{cases}$$
11.1.8 Nested SegWit (P2WSH-in-P2SH, P2WPKH-in-P2SH)
Nested SegWit: SegWit outputs can be wrapped in P2SH, creating nested SegWit transactions.
P2WPKH-in-P2SH: Pay-to-Witness-Public-Key-Hash wrapped in P2SH
For P2WPKH-in-P2SH:
- Redeem Script Format: $[0x00, 0x14, h_{20}]$ where $h_{20} \in \{0,1\}^{160}$
- $0x00$ is OP_0
- $0x14$ is push 20 bytes
- $h_{20}$ is the 20-byte pubkey hash
- Witness: Contains signature and public key (2 elements)
- Validation: Witness program is 20 bytes, witness contains signature + pubkey
P2WSH-in-P2SH: Pay-to-Witness-Script-Hash wrapped in P2SH
For P2WSH-in-P2SH:
- Redeem Script Format: $[0x00, 0x20, h_{32}]$ where $h_{32} \in \{0,1\}^{256}$
- $0x00$ is OP_0
- $0x20$ is push 32 bytes
- $h_{32}$ is the 32-byte script hash
- Witness: Contains witness script as last element
- Validation: Witness program is 32 bytes, witness script (last witness element) must hash to program
Nested SegWit Detection: $\text{IsNestedSegWit}(redeem) = (redeem[0] = 0x00) \land ((redeem[1] = 0x14) \lor (redeem[1] = 0x20))$
Theorem 11.1.1 (Nested SegWit Validation): Nested SegWit transactions validate the witness program hash in the P2SH redeem script, then execute witness validation.
Proof: By construction, nested SegWit transactions first validate that the redeem script hash matches the P2SH scriptPubKey. Then, the witness program (20 or 32 bytes) is extracted from the redeem script, and witness validation proceeds as for direct SegWit transactions. For P2WSH-in-P2SH, the witness script is the last witness element and must hash to the 32-byte program.
Activation: Block 481,824 (mainnet) - Same as SegWit activation
11.2 Taproot
Taproot Output: P2TR script OP_1 <32-byte-hash>
P2TR Script Format: $\text{P2TR} = [0x51, 0x20, h_{32}]$ where $h_{32} \in \{0,1\}^{256}$
P2TR Detection: $\text{IsP2TR}(spk) = (|spk| = 34) \land (spk[0] = 0x51) \land (spk[1] = 0x20)$
Empty ScriptSig Requirement: For Taproot transactions, scriptSig must be empty:
$$\forall tx, tx' \in \mathcal{TX},\ i, j \in \mathbb{N} : tx.\text{inputs}[i].\text{prevout} = (\text{txid}(tx'), j) \land \text{IsP2TR}(tx'.\text{outputs}[j].\text{scriptPubkey}) \implies tx.\text{inputs}[i].\text{scriptSig} = \emptyset$$
Key Aggregation: $$\text{OutputKey} = \text{InternalPubKey} + \text{TaprootTweak}(\text{MerkleRoot}) \times G$$
Script Path: Alternative spending path using merkle proof
11.2.1 Taproot Script Validation
ValidateTaprootScript: $\mathbb{S} \rightarrow \{\text{true}, \text{false}\}$
For script $s$:
$$\text{ValidateTaprootScript}(s) = (|s| = 34) \land (s[0] = 0x51) \land (s[1] = 0x20)$$
Properties:
- Length validation: $\text{ValidateTaprootScript}(s) = \text{true} \iff |s| = 34$
- Format correctness: $\text{ValidateTaprootScript}(s) = \text{true} \implies s[0] = 0x51 \land s[1] = 0x20$
- Invalid length: $|s| \neq 34 \implies \text{ValidateTaprootScript}(s) = \text{false}$
ExtractTaprootOutputKey: $\mathbb{S} \rightarrow (\{0,1\}^{256})^?$
Properties:
- Key extraction: $\text{ExtractTaprootOutputKey}(s) = \text{Some}(k) \implies \text{ValidateTaprootScript}(s) = \text{true}$
- Key length: $\text{ExtractTaprootOutputKey}(s) = \text{Some}(k) \implies |k| = 32$ (32-byte public key)
- Script validation: $\text{ExtractTaprootOutputKey}(s) = \text{Some}(k) \implies |s| = 34 \land s[0] = 0x51 \land s[1] = 0x20$
For script $s$:
$$\text{ExtractTaprootOutputKey}(s) = \begin{cases} \text{Some}(s[2..34]) & \text{if } \text{ValidateTaprootScript}(s) \\ \text{None} & \text{otherwise} \end{cases}$$
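These two functions can be sketched together as follows (illustrative Python, assuming scripts are byte strings; Python's None stands in for the $\text{None}$ case):

```python
def validate_taproot_script(s: bytes) -> bool:
    """ValidateTaprootScript: exactly 34 bytes, OP_1 (0x51) then a 32-byte push (0x20)."""
    return len(s) == 34 and s[0] == 0x51 and s[1] == 0x20

def extract_taproot_output_key(s: bytes):
    """ExtractTaprootOutputKey: the 32-byte x-only output key s[2..34], or None."""
    return s[2:34] if validate_taproot_script(s) else None

# A well-formed P2TR scriptPubKey with a dummy 32-byte output key
p2tr_script = bytes([0x51, 0x20]) + bytes(range(32))
```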
IsTaprootOutput: $\mathcal{T} \rightarrow \{\text{true}, \text{false}\}$
Properties:
- Output detection: $\text{IsTaprootOutput}(o) = \text{true} \iff \text{ValidateTaprootScript}(o.scriptPubkey) = \text{true}$
- Boolean result: $\text{IsTaprootOutput}(o) \in \{\text{true}, \text{false}\}$
- Script validation: $\text{IsTaprootOutput}(o) = \text{true} \implies |o.scriptPubkey| = 34 \land o.scriptPubkey[0] = 0x51$
For transaction output $o$:
$$\text{IsTaprootOutput}(o) = \text{ValidateTaprootScript}(o.\text{scriptPubkey})$$
11.2.2 Taproot Key Operations
ComputeTaprootTweak: $\{0,1\}^{256} \times \mathbb{H} \rightarrow \{0,1\}^{256}$
Properties:
- Tweak length: $\text{ComputeTaprootTweak}(pk, root) = t \implies |t| = 32$ (32-byte tweak)
- Deterministic: $\text{ComputeTaprootTweak}(pk_1, root_1) = \text{ComputeTaprootTweak}(pk_2, root_2) \iff pk_1 = pk_2 \land root_1 = root_2$
- Hash property: $\text{ComputeTaprootTweak}(pk, root)$ uses tagged hash for domain separation
For internal public key $pk$ and merkle root $root$:
$$\text{ComputeTaprootTweak}(pk, root) = \text{TaggedHash}(\text{``TapTweak''}, pk, root)$$
Where $\text{TaggedHash}$ is the BIP340 tagged hash function.
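For concreteness, the BIP340 construction $\text{TaggedHash}(tag, m) = \text{SHA256}(\text{SHA256}(tag) \| \text{SHA256}(tag) \| m)$ can be sketched as follows (illustrative; the actual tweak is then used in the point arithmetic of the next section):

```python
import hashlib

def tagged_hash(tag: str, *chunks: bytes) -> bytes:
    """BIP340 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg).
    The repeated tag digest provides domain separation between uses."""
    tag_digest = hashlib.sha256(tag.encode()).digest()
    h = hashlib.sha256(tag_digest + tag_digest)
    for chunk in chunks:
        h.update(chunk)
    return h.digest()

def compute_taproot_tweak(internal_pubkey: bytes, merkle_root: bytes) -> bytes:
    """ComputeTaprootTweak(pk, root) = TaggedHash("TapTweak", pk, root)."""
    return tagged_hash("TapTweak", internal_pubkey, merkle_root)
```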
ValidateTaprootKeyAggregation: $\{0,1\}^{256} \times \{0,1\}^{256} \times \mathbb{H} \rightarrow \{\text{true}, \text{false}\}$
Properties:
- Key aggregation correctness: $\text{ValidateTaprootKeyAggregation}(pk, out, root) = \text{true} \iff out = pk + \text{ComputeTaprootTweak}(pk, root) \times G$
- Boolean result: $\text{ValidateTaprootKeyAggregation}(pk, out, root) \in \{\text{true}, \text{false}\}$
- Elliptic curve operation: $\text{ValidateTaprootKeyAggregation}(pk, out, root)$ validates elliptic curve point addition
For internal public key $pk$, output key $out$, and merkle root $root$:
$$\text{ValidateTaprootKeyAggregation}(pk, out, root) = (out = pk + \text{ComputeTaprootTweak}(pk, root) \times G)$$
11.2.3 Taproot Script Path
ValidateTaprootScriptPath: $\mathbb{S} \times [\mathbb{H}]^* \times \{0,1\}^{256} \rightarrow \{\text{true}, \text{false}\}$
Properties:
- Merkle proof validation: $\text{ValidateTaprootScriptPath}(s, proof, out) = \text{true} \iff \text{ComputeScriptMerkleRoot}(s, proof) = \text{ExtractMerkleRoot}(out)$
- Boolean result: $\text{ValidateTaprootScriptPath}(s, proof, out) \in \{\text{true}, \text{false}\}$
- Script path correctness: $\text{ValidateTaprootScriptPath}(s, proof, out)$ validates script is in Taproot merkle tree
For script $s$, merkle proof $proof$, and output key $out$:
$$\text{ValidateTaprootScriptPath}(s, proof, out) = \begin{cases} \text{true} & \text{if } \text{ComputeScriptMerkleRoot}(s, proof) = \text{ExtractMerkleRoot}(out) \\ \text{false} & \text{otherwise} \end{cases}$$
11.2.4 Taproot Witness Structure
ValidateTaprootWitnessStructure: $\mathcal{W} \times \{\text{true}, \text{false}\} \rightarrow \{\text{true}, \text{false}\}$
Properties:
- Key path structure: $\text{ValidateTaprootWitnessStructure}(w, \text{false}) = \text{true} \iff |w| = 1 \land |w[0]| = 64$ (single 64-byte signature)
- Script path structure: $\text{ValidateTaprootWitnessStructure}(w, \text{true}) = \text{true} \iff |w| \geq 2 \land |w[|w|-1]| \geq 33 \land (|w[|w|-1]| - 33) \bmod 32 = 0$
- Boolean result: $\text{ValidateTaprootWitnessStructure}(w, is\_script) \in \{\text{true}, \text{false}\}$
For witness $w$ and script path flag $is\_script$:
$$\text{ValidateTaprootWitnessStructure}(w, is\_script) = \begin{cases} |w| = 1 \land |w[0]| = 64 & \text{if } \neg is\_script \text{ (key path)} \\ |w| \geq 2 \land |w[|w|-1]| \geq 33 \land (|w[|w|-1]| - 33) \bmod 32 = 0 & \text{if } is\_script \text{ (script path)} \end{cases}$$
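The two witness shapes can be checked directly; this sketch mirrors the formula above (the last script-path element is the control block of $33 + 32m$ bytes for $m$ merkle-path nodes):

```python
def validate_taproot_witness_structure(witness: list, is_script_path: bool) -> bool:
    """Key path: a single 64-byte signature. Script path: at least two elements,
    the last being a control block of 33 + 32*m bytes (m merkle-path nodes)."""
    if not is_script_path:
        return len(witness) == 1 and len(witness[0]) == 64
    if len(witness) < 2:
        return False
    control_block = witness[-1]
    return len(control_block) >= 33 and (len(control_block) - 33) % 32 == 0
```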
11.2.5 Taproot Transaction Validation
ValidateTaprootTransaction: $\mathcal{TX} \times \mathcal{W}^? \rightarrow \{\text{valid}, \text{invalid}\}$
Properties:
- ScriptSig empty: $\text{ValidateTaprootTransaction}(tx, w) = \text{valid} \implies \forall i \in tx.inputs : \text{IsP2TR}(tx.outputs[j].scriptPubkey) \implies i.scriptSig = \emptyset$
- Witness structure: $\text{ValidateTaprootTransaction}(tx, w) = \text{valid} \implies \text{ValidateTaprootWitnessStructure}(w, \text{IsScriptPath}(w)) = \text{true}$
- Validation correctness: $\text{ValidateTaprootTransaction}(tx, w)$ validates all Taproot-specific rules
For transaction $tx$ and witness $w$:
$$\text{ValidateTaprootTransaction}(tx, w) = \begin{cases} \text{valid} & \text{if } \forall i \in tx.\text{inputs} : (\text{IsP2TR}(tx.\text{outputs}[j].\text{scriptPubkey}) \implies i.\text{scriptSig} = \emptyset) \land \text{ValidateTaprootWitnessStructure}(w, \text{IsScriptPath}(w)) \\ \text{invalid} & \text{otherwise} \end{cases}$$
11.2.6 Taproot Signature Hash
ComputeTaprootSignatureHash: $\mathcal{TX} \times \mathbb{N} \times \mathcal{US} \times \mathbb{N}_{32} \times \mathbb{H}^? \rightarrow \mathbb{H}$
Properties:
- Hash length: $\text{ComputeTaprootSignatureHash}(tx, i, us, type, leaf) = h \implies |h| = 32$ (32-byte hash)
- Deterministic: $\text{ComputeTaprootSignatureHash}(tx_1, i_1, us_1, type_1, leaf_1) = \text{ComputeTaprootSignatureHash}(tx_2, i_2, us_2, type_2, leaf_2) \iff tx_1 = tx_2 \land i_1 = i_2 \land us_1 = us_2 \land type_1 = type_2 \land leaf_1 = leaf_2$
- Tagged hash: $\text{ComputeTaprootSignatureHash}(tx, i, us, type, leaf)$ uses BIP340 tagged hash for domain separation
For transaction $tx$, input index $i$, UTXO set $us$, sighash type $type$, and leaf hash $leaf$:
$$\text{ComputeTaprootSignatureHash}(tx, i, us, type, leaf) = \text{TaggedHash}(\text{``TapSighash''}, tx, i, us(i.\text{prevout}), type, leaf)$$
Theorem 11.2.1 (Taproot Empty ScriptSig): Taproot transactions require empty scriptSig for all inputs spending P2TR outputs.
Proof: By construction, Taproot validation happens entirely through witness data (key path or script path). The scriptPubKey OP_1 <32-byte-hash> is not executable as a script, so scriptSig must be empty. If scriptSig is non-empty, validation fails before witness processing.
Activation: Block 709,632 (mainnet)
11.3 Chain Reorganization
Chain Selection: Choose chain with most cumulative work $$\text{BestChain} = \arg\max_{chain} \sum_{block \in chain} \text{Work}(block)$$
Reorganization: When a longer chain is found:
- Disconnect blocks from current tip
- Connect blocks from new chain
- Update UTXO set accordingly
11.3.1 Undo Log Pattern
Chain reorganization requires disconnecting blocks from the current chain and connecting blocks from the new chain. To efficiently reverse the effects of ConnectBlock, we use an undo log pattern that records all UTXO set changes made by a block.
Undo Entry: $\mathcal{UE} = \mathcal{O} \times \mathcal{U}^? \times \mathcal{U}^?$
An undo entry records:
- outpoint: The outpoint that was changed
- previous_utxo: The UTXO that existed before (None if created)
- new_utxo: The UTXO that exists after (None if spent)
Block Undo Log: $\mathcal{UL} = \mathcal{UE}^*$
A block undo log contains all undo entries for a block, stored in reverse order (most recent first) to allow efficient undo by iterating forward.
DisconnectBlock: $\mathcal{B} \times \mathcal{UL} \times \mathcal{US} \rightarrow \mathcal{US}$
Properties:
- Undo correctness: $\text{DisconnectBlock}(b, ul, us) = us' \implies$ UTXO set $us'$ reflects state before block $b$ was connected
- Idempotency: $\text{DisconnectBlock}(b, ul, \text{ConnectBlock}(b, us, h)) = us$ (perfect inverse, where $ul$ is the undo log recorded by ConnectBlock)
- Change accounting: each undo entry reverses exactly one UTXO change, so $|\text{DisconnectBlock}(b, ul, us)| = |us| - n_{created} + n_{spent}$, where $n_{created}$ and $n_{spent}$ count the entries in $ul$ with a new or a previous UTXO respectively
For block $b$, undo log $ul$, and UTXO set $us$:
$$\text{DisconnectBlock}(b, ul, us) = \begin{cases} us' & \text{where } us' = \text{ApplyUndoLog}(ul, us) \\ \text{error} & \text{if undo log is invalid} \end{cases}$$
Where $\text{ApplyUndoLog}$ processes each entry $e \in ul$ in order:
- If $e.\text{new\_utxo} \neq \text{None}$: $us' = us' \setminus \{e.\text{outpoint}\}$ (remove UTXO created by block)
- If $e.\text{previous\_utxo} \neq \text{None}$: $us' = us' \cup \{e.\text{outpoint} \mapsto e.\text{previous\_utxo}\}$ (restore UTXO spent by block)
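A minimal sketch of the undo pattern, modeling the UTXO set as a dict and undo entries as (outpoint, previous_utxo, new_utxo) tuples (these representation choices are ours):

```python
def apply_undo_log(undo_log, utxo_set):
    """Reverse a block's UTXO changes. Entries are stored most-recent-first,
    so iterating forward undoes changes in reverse order of application."""
    us = dict(utxo_set)
    for outpoint, previous_utxo, new_utxo in undo_log:
        if new_utxo is not None:
            us.pop(outpoint, None)        # remove a UTXO the block created
        if previous_utxo is not None:
            us[outpoint] = previous_utxo  # restore a UTXO the block spent
    return us

# Toy block: spends outpoint ("a", 0) and creates ("b", 0)
before = {("a", 0): 50}
after = {("b", 0): 25}
undo = [(("b", 0), None, 25), (("a", 0), 50, None)]  # most recent change first
```

Applying the undo log to the post-connect state restores the pre-connect state, which is exactly the inverse property of Theorem 11.3.1.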
Theorem 11.3.1 (Disconnect/Connect Idempotency): Disconnect and connect operations are perfect inverses:
$$\forall b \in \mathcal{B}, us \in \mathcal{US}, h \in \mathbb{N}:$$ $$\text{DisconnectBlock}(b, ul, \text{ConnectBlock}(b, us, h)) = us$$ where $ul$ is the undo log recorded during $\text{ConnectBlock}(b, us, h)$.
Proof: By construction, the undo log $ul$ created during $\text{ConnectBlock}$ records all UTXO changes. When $\text{DisconnectBlock}$ applies the undo log, it reverses each change exactly, restoring the original UTXO set. This is proven by blvm-spec-lock formal verification.
Corollary 11.3.1.1: Undo logs enable perfect historical state restoration without re-validating blocks.
11.4 UTXO Commitments
UTXO commitments provide cryptographic commitments to the UTXO set using Merkle trees, enabling efficient UTXO set synchronization and verification without requiring full blockchain download.
UTXO Commitment: $\mathcal{UC} = \mathbb{H} \times \mathbb{N} \times \mathbb{H} \times \mathbb{N} \times \mathbb{N}$
A UTXO commitment contains:
- merkle_root: Root hash of the UTXO Merkle tree
- block_height: Block height at which commitment was created
- block_hash: Hash of the block at commitment height
- total_supply: Total supply committed (sum of all UTXO values)
- utxo_count: Number of UTXOs in the commitment
UTXO Merkle Tree: Sparse Merkle tree where:
- Key: OutPoint hash (256 bits)
- Value: Serialized UTXO (value, script_pubkey, height)
- Root: Merkle root hash committing to entire UTXO set
GenerateCommitment: $\mathcal{US} \times \mathbb{H} \times \mathbb{N} \rightarrow \mathcal{UC}$
Properties:
- Merkle root correctness: $\text{GenerateCommitment}(us, bh, h).\text{merkle_root} = \text{BuildMerkleTree}(us)$ (merkle root commits to entire UTXO set)
- Height consistency: $\text{GenerateCommitment}(us, bh, h).\text{block_height} = h$ (height matches input)
- UTXO count: $\text{GenerateCommitment}(us, bh, h).\text{utxo_count} = |us|$ (count matches UTXO set size)
- Total supply: $\text{GenerateCommitment}(us, bh, h).\text{total_supply} = \sum_{u \in us} u.\text{value}$ (total supply equals sum of UTXO values)
- Block hash: $\text{GenerateCommitment}(us, bh, h).\text{block_hash} = bh$ (block hash matches input)
- Deterministic: $\text{GenerateCommitment}(us_1, bh_1, h_1) = \text{GenerateCommitment}(us_2, bh_2, h_2) \iff us_1 = us_2 \land bh_1 = bh_2 \land h_1 = h_2$
- Merkle root length: $\text{GenerateCommitment}(us, bh, h).\text{merkle_root}$ is 32 bytes (SHA256 hash)
- Supply consistency: $\text{GenerateCommitment}(us, bh, h).\text{total_supply} \leq \text{MAX_MONEY}$ (supply respects maximum)
For UTXO set $us$, block hash $bh$, and height $h$:
$$\text{GenerateCommitment}(us, bh, h) = \begin{cases} uc & \text{where } uc.\text{merkle\_root} = \text{BuildMerkleTree}(us) \\ & uc.\text{block\_height} = h \\ & uc.\text{block\_hash} = bh \\ & uc.\text{total\_supply} = \sum_{utxo \in us} utxo.\text{value} \\ & uc.\text{utxo\_count} = |us| \end{cases}$$
FindConsensus: $[\mathcal{UC}] \times [0,1] \rightarrow \mathcal{UC}^?$
Properties:
- Consensus existence: $\text{FindConsensus}(cs, t) = \text{Some}(c) \implies \frac{|\{c' \in cs : c' = c\}|}{|cs|} \geq t$ (consensus requires threshold agreement)
- Threshold requirement: $\text{FindConsensus}(cs, t) = \text{Some}(c) \implies$ at least $\lceil |cs| \times t \rceil$ commitments match $c$ (integer threshold)
- No consensus: $\text{FindConsensus}(cs, t) = \text{None} \implies \nexists c \in cs: \frac{|\{c' \in cs : c' = c\}|}{|cs|} \geq t$ (no commitment meets threshold)
- Minimum peers: $\text{FindConsensus}(cs, t)$ requires $|cs| \geq \text{min_peers}$ (enough peers for consensus)
- Deterministic: $\text{FindConsensus}(cs_1, t_1) = \text{FindConsensus}(cs_2, t_2) \iff cs_1 = cs_2 \land t_1 = t_2$
- Result type: $\text{FindConsensus}(cs, t) \in \{\text{Some}(\mathcal{UC}), \text{None}\}$
- Threshold range: $\text{FindConsensus}(cs, t)$ requires $t \in [0, 1]$ (threshold must be valid probability)
For commitments $cs \in [\mathcal{UC}]$ and threshold $t \in [0,1]$:
$$\text{FindConsensus}(cs, t) = \begin{cases} \text{Some}(c) & \text{if } \exists c \in cs: \frac{|\{c' \in cs : c' = c\}|}{|cs|} \geq t \\ \text{None} & \text{otherwise} \end{cases}$$
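A sketch of the threshold check, using a ceiling for the integer threshold described in Theorem 11.4.1 (commitments modeled as hashable values; names are ours):

```python
import math
from collections import Counter

def find_consensus(commitments, threshold):
    """Return (commitment, agreement_count) if some commitment is reported by
    at least ceil(n * threshold) of the n peers, else None."""
    if not commitments:
        return None
    required = math.ceil(len(commitments) * threshold)
    commitment, count = Counter(commitments).most_common(1)[0]
    return (commitment, count) if count >= required else None
```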
VerifyConsensusCommitment: $\mathcal{UC} \times [\mathcal{H}] \rightarrow \{\text{valid}, \text{invalid}\}$
Properties:
- PoW verification: $\text{VerifyConsensusCommitment}(uc, hs) = \text{valid} \implies \text{VerifyPoW}(uc.\text{block_hash}, hs) = \text{true}$ (PoW must be valid)
- Header chain: $\text{VerifyConsensusCommitment}(uc, hs) = \text{valid} \implies uc.\text{block_hash} \in hs$ (block hash in header chain)
- Commitment validity: $\text{VerifyConsensusCommitment}(uc, hs) = \text{valid} \implies$ commitment $uc$ is cryptographically valid
- Supply verification: $\text{VerifyConsensusCommitment}(uc, hs) = \text{valid} \implies \text{VerifySupply}(uc.\text{total_supply}, uc.\text{block_height}) = \text{true}$
- Result type: $\text{VerifyConsensusCommitment}(uc, hs) \in \{\text{valid}, \text{invalid}\}$
- Deterministic: $\text{VerifyConsensusCommitment}(uc_1, hs_1) = \text{VerifyConsensusCommitment}(uc_2, hs_2) \iff uc_1 = uc_2 \land hs_1 = hs_2$
- Non-empty headers: $\text{VerifyConsensusCommitment}(uc, hs)$ requires $|hs| > 0$ (header chain must not be empty)
For commitment $uc$ and headers $hs$:
$$\text{VerifyConsensusCommitment}(uc, hs) = \begin{cases} \text{valid} & \text{if } \text{VerifyPoW}(uc.\text{block\_hash}, hs) \land \text{VerifySupply}(uc.\text{total\_supply}, uc.\text{block\_height}) \\ \text{invalid} & \text{otherwise} \end{cases}$$
Theorem 11.4.1 (Consensus Threshold Correctness): Consensus threshold calculation using integer arithmetic is correct:
$$\forall cs \in [\mathcal{UC}], t \in [0,1]:$$ $$\text{FindConsensus}(cs, t) = c \iff \lceil |cs| \times t \rceil \text{ peers agree on } c$$
Proof: The threshold check uses integer arithmetic: $required = \lceil |cs| \times t \rceil$. If $agreement_count \geq required$, then $agreement_count / |cs| \geq t$ (within floating-point precision). This avoids floating-point precision issues and is proven by blvm-spec-lock formal verification.
Integer Arithmetic for Threshold Calculations: To avoid floating-point precision issues in consensus-critical calculations, we use integer arithmetic with ceiling operations. For threshold $t \in [0,1]$ and count $n \in \mathbb{N}$:
$$required = \lceil n \times t \rceil$$
Theorem 11.4.2 (Integer Threshold Correctness): Integer threshold calculation correctly implements consensus thresholds:
$$\forall n \in \mathbb{N}, t \in [0,1]:$$ $$required = \lceil n \times t \rceil \implies \forall agreement \in \mathbb{N}:$$ $$(agreement \geq required \implies \frac{agreement}{n} \geq t - \epsilon) \land$$ $$(agreement < required \implies \frac{agreement}{n} < t + \epsilon)$$
Where $\epsilon$ is floating-point precision error (typically $< 10^{-15}$).
Proof: By properties of ceiling function and floating-point arithmetic. The integer calculation ensures we err on the side of requiring more agreement, which is safer for consensus. This is proven by blvm-spec-lock formal verification.
Theorem 11.4.3 (Commitment Verification): UTXO commitments can be verified without full UTXO set:
$$\forall us \in \mathcal{US}, uc = \text{GenerateCommitment}(us, bh, h):$$ $$\text{VerifyCommitment}(uc, merkle_proof, outpoint, utxo) = \text{valid}$$ $$\iff$$ $$utxo \in us \land us[\text{outpoint}] = utxo$$
Proof: By construction, the Merkle tree provides cryptographic commitment. A Merkle proof for a specific outpoint can verify inclusion without revealing the entire UTXO set.
12. Mining Protocol
12.1 Block Template Generation
CreateNewBlock: $\mathcal{US} \times \mathcal{TX}^* \rightarrow \mathcal{B}$
Properties:
- Block structure: $\text{CreateNewBlock}(us, mempool)$ returns a block with at least one transaction (coinbase)
- Coinbase requirement: First transaction in block is coinbase: $\text{CreateNewBlock}(us, mempool) = block \implies \text{IsCoinbase}(block.transactions[0]) = \text{true}$
- Transaction ordering: Coinbase is first, followed by mempool transactions
- Block validity: $\text{CreateNewBlock}(us, mempool) = block \implies \text{CheckTransaction}(block.transactions[0]) = \text{valid}$ (coinbase is valid)
- Difficulty validity: $\text{CreateNewBlock}(us, mempool) = block \implies block.header.bits > 0$ (valid difficulty)
- Minimum transactions: $\text{CreateNewBlock}(us, mempool) = block \implies |block.transactions| \geq 1$ (at least coinbase)
- Deterministic structure: Block structure follows deterministic rules (coinbase first, then mempool transactions)
- Merkle root: $\text{CreateNewBlock}(us, mempool) = block \implies block.header.hashMerkleRoot = \text{BlockMerkleRoot}(block.transactions)$
For UTXO set $us$ and mempool transactions $mempool$:
- Initialize Block: Create empty block with dummy coinbase
- Set Version: $block.version = \text{ComputeBlockVersion}(prevBlock)$
- Set Time: $block.time = \text{CurrentTime}()$
- Add Transactions: Select transactions from mempool respecting weight limits
- Create Coinbase: Generate coinbase transaction (see Coinbase Transaction)
- Set Header: $block.hashPrevBlock = prevBlock.hash$, $block.nBits = \text{GetNextWorkRequired}()$
- Initialize Nonce: $block.nNonce = 0$
12.2 Coinbase Transaction
Coinbase Transaction: Special transaction with no inputs that creates new bitcoins
Structure:
- Input: Single input with $prevout = \text{null}$, $scriptSig = \langle height, OP_0 \rangle$
- Output: Single output with $value = \text{GetBlockSubsidy}(height) + \text{totalFees}$
- LockTime: $nLockTime = height - 1$
Validation Rules:
- Height Encoding: $scriptSig$ must encode current block height
- No Inputs: Must have exactly one input with null $prevout$
- Value Limit: $value \leq \text{GetBlockSubsidy}(height) + \text{totalFees}$
- LockTime: Must equal $height - 1$
12.3 Mining Process
MineBlock: $\mathcal{B} \times \mathbb{N} \rightarrow \mathcal{B} \times \{\text{success}, \text{failure}\}$
Properties:
- PoW success: $\text{MineBlock}(block, maxTries) = (block’, \text{success}) \implies \text{CheckProofOfWork}(block’) = \text{true}$ (mined block passes PoW)
- Merkle root: $\text{MineBlock}(block, maxTries) = (block’, _) \implies block’.\text{hashMerkleRoot} = \text{BlockMerkleRoot}(block’)$ (merkle root is correct)
- Nonce modification: $\text{MineBlock}(block, maxTries) = (block’, \text{success}) \implies block’.\text{nonce} \neq block.\text{nonce}$ (nonce changed during mining)
- Max attempts: $\text{MineBlock}(block, maxTries)$ requires $maxTries > 0$ (must have at least one attempt)
- Result type: $\text{MineBlock}(block, maxTries) \in \{(\mathcal{B}, \text{success}), (\mathcal{B}, \text{failure})\}$
- Failure condition: $\text{MineBlock}(block, maxTries) = (block’, \text{failure}) \implies$ all $maxTries$ attempts failed to find valid PoW
- Success condition: $\text{MineBlock}(block, maxTries) = (block’, \text{success}) \implies$ found valid PoW within $maxTries$ attempts
- Block structure preserved: $\text{MineBlock}(block, maxTries) = (block’, _) \implies$ all block fields except nonce and hash remain unchanged
For block template $block$ and max attempts $maxTries$:
- Set Merkle Root: $block.hashMerkleRoot = \text{BlockMerkleRoot}(block)$
- Proof of Work: While $maxTries > 0$ and $\neg \text{CheckProofOfWork}(block)$:
- Increment $block.nNonce$
- Decrement $maxTries$
- Return Result: If valid proof found, return $(block, \text{success})$, else $(\bot, \text{failure})$
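The grind loop above can be sketched with a toy header and an integer target (illustrative only: real headers are exactly 80 bytes with the nonce at a fixed offset, and the target is derived from nBits):

```python
import hashlib

def check_proof_of_work(header: bytes, target: int) -> bool:
    """Double-SHA256 the header and compare it, as an integer, to the target."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "little") <= target

def mine_block(header_base: bytes, target: int, max_tries: int):
    """Try successive nonces appended to header_base until PoW succeeds."""
    for nonce in range(max_tries):
        header = header_base + nonce.to_bytes(4, "little")
        if check_proof_of_work(header, target):
            return header, "success"
    return None, "failure"
```

With an easy target (e.g. $2^{255}$, which roughly half of all hashes satisfy) the loop succeeds quickly; with an unreachable target it exhausts maxTries and returns failure.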
12.4 Block Template Interface
BlockTemplate: Interface for mining software
Required Methods:
- getBlockHeader(): Return block header for hashing
- getBlock(): Return complete block (with dummy coinbase)
- getCoinbaseTx(): Return actual coinbase transaction
- getCoinbaseCommitment(): Return witness commitment
- submitSolution(version, timestamp, nonce, coinbase): Submit mining solution
Consensus Requirements:
- SegWit Support: Must include witness commitment in coinbase
- Version Bits: Must respect BIP9 deployment states
- Weight Limits: Must not exceed $W_{max} = 4 \times 10^6$ weight units
- Transaction Selection: Must respect mempool fee policies
13. Implementation Considerations
13.1 Performance
- UTXO Set: Maintain in-memory for $O(1)$ lookups
- Script Caching: Cache verification results
- Parallel Validation: Validate transactions concurrently
13.2 Security
- Malleability: Prevented through SegWit
- DoS Protection: Resource limits on size and operations
- Replay Protection: Sequence numbers and locktime
13.3 Engineering-Specific Edge Cases
While the Orange Paper focuses on mathematical consensus rules (~95% coverage), there are engineering-specific edge cases that are consensus-critical but not purely mathematical. These must be handled identically to Bitcoin Core to prevent network divergence.
13.3.1 Integer Arithmetic Overflow/Underflow
Critical Requirement: All monetary value arithmetic must use checked operations to prevent overflow/underflow.
Edge Cases:
- Value Summation: Input/output value summation can overflow i64::MAX when combining many large UTXOs
- Fee Calculation: total_in - total_out can underflow or overflow near boundaries
- Coinbase Value: subsidy + fees can exceed MAX_MONEY if not checked
- Fee Accumulation: Summing fees across block transactions can overflow
Implementation: Use checked_add() and checked_sub() for all value arithmetic. Match Bitcoin Core's CAmount behavior exactly.
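Python integers do not overflow, so the rule can only be illustrated by emulating i64 semantics; this sketch mirrors the behavior of Rust's checked_add (the constants and function names are ours):

```python
I64_MAX = 2**63 - 1
I64_MIN = -(2**63)
MAX_MONEY = 21_000_000 * 100_000_000  # 21M BTC expressed in satoshis

def checked_add(a: int, b: int):
    """i64-style checked addition: None on overflow instead of wrapping."""
    total = a + b
    return total if I64_MIN <= total <= I64_MAX else None

def add_money(a: int, b: int):
    """Sum two monetary amounts, rejecting overflow and out-of-range results."""
    total = checked_add(a, b)
    if total is None or not (0 <= total <= MAX_MONEY):
        return None
    return total
```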
13.3.2 Serialization/Deserialization Correctness
Critical Requirement: Wire format must match Bitcoin Core byte-for-byte.
Edge Cases:
- VarInt Encoding: Boundary values (0xfc, 0xfd, 0xfe, 0xff) must use correct encoding format
- Little-Endian: All integers must be serialized as little-endian
- Block Header: Must be exactly 80 bytes
- Transaction Format: Must match Bitcoin Core's exact byte layout
Implementation: Consolidated serialization module with round-trip correctness guarantees. See docs/ENGINEERING_EDGE_CASES.md for details.
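The CompactSize (VarInt) boundary behavior can be sketched as follows (an illustrative encoder/decoder; it does not enforce the canonical minimal-encoding rule that full validation also requires):

```python
def encode_varint(n: int) -> bytes:
    """Bitcoin CompactSize: 1 byte below 0xfd, otherwise a prefix byte plus a
    little-endian 2-, 4-, or 8-byte integer."""
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xffffffff:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")

def decode_varint(data: bytes):
    """Return (value, bytes_consumed); reject truncated input deterministically."""
    if not data:
        raise ValueError("empty input")
    prefix = data[0]
    if prefix < 0xfd:
        return prefix, 1
    size = {0xfd: 2, 0xfe: 4, 0xff: 8}[prefix]
    if len(data) < 1 + size:
        raise ValueError("truncated varint")
    return int.from_bytes(data[1:1 + size], "little"), 1 + size
```

Note the boundary: 0xfc is the last single-byte value, while 0xfd itself must be encoded with the 0xfd prefix.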
Theorem 13.3.2.1 (Serialization Round-Trip Correctness): Serialization and deserialization are inverse operations:
$$\forall x \in \mathcal{D}: \text{deserialize}(\text{serialize}(x)) = x$$
Where $\mathcal{D}$ is the domain of serializable data structures (block headers, transactions, etc.).
Proof: By construction, the serialization format is designed to be lossless and reversible. All fields are encoded in a deterministic format that can be exactly reconstructed. This is proven by blvm-spec-lock formal verification.
Theorem 13.3.2.2 (Serialization Determinism): Serialization is deterministic:
$$\forall x \in \mathcal{D}: \text{serialize}(x) \text{ is deterministic (same input always produces same output)}$$
Proof: The serialization process uses only the input data structure and deterministic encoding rules. There are no random elements or non-deterministic operations. This is proven by blvm-spec-lock formal verification.
13.3.3 Resource Limit Enforcement
Critical Requirement: DoS protection limits must be enforced deterministically at exact boundaries.
Edge Cases:
- Script Operation Limit: Exactly 201 operations must fail (limit check happens after increment)
- Stack Size Limit: Exactly 1000 stack items must fail before next push
- Transaction Size: Exactly 1,000,000 bytes must pass, 1,000,001 must fail
- Coinbase ScriptSig: Must be exactly 2-100 bytes (boundary validation)
Implementation: All limits checked before resource exhaustion. Boundary behavior matches Bitcoin Core exactly.
13.3.4 Parser Determinism
Critical Requirement: Malformed data must be rejected deterministically. All nodes must agree on invalid inputs.
Edge Cases:
- Truncated Data: EOF at any point must be rejected with clear error
- Invalid Length Fields: Length > remaining bytes, invalid VarInt encodings
- Malformed Structures: Negative counts, maximum value abuse
Implementation: Wire-format parser with comprehensive error handling. All rejection scenarios tested. See tests/engineering/parser_edge_cases.rs.
Reference: See docs/ENGINEERING_EDGE_CASES.md for complete documentation of all engineering-specific edge cases, test coverage, and Bitcoin Core alignment.
13.3.5 Integration Proofs
Integration proofs verify that different consensus modules work together correctly, ensuring that cross-module interactions maintain mathematical correctness.
Theorem 13.3.5.1 (BIP65/BIP112 Locktime Consistency): BIP65 (CLTV) and BIP112 (CSV) use shared locktime logic consistently:
$$\forall lt \in \mathbb{N}_{32}:$$ $$\text{DecodeLocktime}(\text{EncodeLocktime}(lt)) = lt \land$$ $$\text{LocktimeType}(lt) \text{ is consistent for CLTV and CSV}$$
Proof: Both BIP65 and BIP112 use the same locktime encoding/decoding and type determination functions. The shared implementation ensures consistency. This is proven by blvm-spec-lock formal verification.
Theorem 13.3.5.2 (Locktime/Script Integration): Locktime validation integrates correctly with script execution:
$$\forall tx \in \mathcal{TX}, script \in \mathcal{SC}, lt \in \mathbb{N}_{32}:$$ $$\text{ExecuteScript}(script, tx, lt) \text{ uses consistent locktime validation}$$
Proof: Script execution uses the same locktime validation functions as standalone locktime checks, ensuring consistency between script-level and transaction-level locktime validation. This is proven by blvm-spec-lock formal verification.
Theorem 13.3.5.3 (Economic/Block Integration): Economic rules integrate correctly with block validation:
$$\forall b \in \mathcal{B}, h \in \mathbb{N}:$$ $$\text{ConnectBlock}(b, us, h) \text{ enforces economic invariants (subsidy, fees, supply limits)}$$
Proof: Block connection validates economic rules (subsidy calculation, fee validation, supply limits) as part of the block validation process, ensuring economic correctness is maintained. This is proven by blvm-spec-lock formal verification.
13.4 Peer Consensus Protocol
The peer consensus protocol implements an N-of-M consensus model for UTXO commitment verification. It discovers diverse peers and finds consensus among them to verify UTXO commitments without trusting any single peer. This protocol is used in conjunction with UTXO commitments (section 11.4) to enable efficient UTXO set synchronization.
Peer Information: $\mathcal{PI} = \mathbb{IP} \times \mathbb{N}^? \times \mathbb{S}^? \times \mathbb{S}^? \times \mathbb{N}$
A peer information structure contains:
- address: IP address of the peer
- asn: Autonomous System Number (optional)
- country: Country code (optional)
- implementation: Bitcoin implementation identifier (optional)
- subnet: Subnet identifier for diversity checks (/16 for IPv4, /32 for IPv6)
DiscoverDiversePeers: $[\mathcal{PI}] \times \mathbb{N} \times \mathbb{N} \rightarrow [\mathcal{PI}]$
Properties:
- Subset property: $\text{DiscoverDiversePeers}(peers, max_asn, target) \subseteq peers$ (no new peers created)
- ASN diversity: $\forall p_1, p_2 \in \text{DiscoverDiversePeers}(peers, max_asn, target): p_1.\text{asn} = p_2.\text{asn} \implies |{p \in \text{result} : p.\text{asn} = p_1.\text{asn}}| \leq max_asn$ (ASN limit enforced)
- Subnet diversity: $\forall p_1, p_2 \in \text{DiscoverDiversePeers}(peers, max_asn, target): p_1 \neq p_2 \implies p_1.\text{subnet} \neq p_2.\text{subnet}$ (no duplicate subnets)
- Size bound: $|\text{DiscoverDiversePeers}(peers, max_asn, target)| \leq \min(|peers|, target)$ (result size bounded)
- Deterministic: $\text{DiscoverDiversePeers}(peers_1, max_asn_1, target_1) = \text{DiscoverDiversePeers}(peers_2, max_asn_2, target_2) \iff peers_1 = peers_2 \land max_asn_1 = max_asn_2 \land target_1 = target_2$
For peer list $peers$, maximum peers per ASN $max_asn$, and target number $target$:
$$\text{DiscoverDiversePeers}(peers, max\_asn, target) = \begin{cases} result & \text{where } result \subseteq peers \land \\ & \quad \forall asn: |\{p \in result : p.\text{asn} = asn\}| \leq max\_asn \land \\ & \quad \forall p_1, p_2 \in result: p_1 \neq p_2 \implies p_1.\text{subnet} \neq p_2.\text{subnet} \land \\ & \quad |result| \leq target \\ peers & \text{if } |peers| \leq target \land \text{diversity constraints satisfied} \end{cases}$$
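A greedy selection satisfying these constraints can be sketched as follows (peers as (address, asn, subnet) tuples; the real implementation's ordering and data types may differ):

```python
def discover_diverse_peers(peers, max_per_asn, target):
    """Greedily pick peers: at most max_per_asn per ASN, at most one per
    subnet, and at most target peers overall."""
    selected = []
    asn_counts = {}
    seen_subnets = set()
    for peer in peers:
        if len(selected) >= target:
            break
        _, asn, subnet = peer
        if subnet in seen_subnets or asn_counts.get(asn, 0) >= max_per_asn:
            continue  # would violate a diversity constraint
        selected.append(peer)
        seen_subnets.add(subnet)
        asn_counts[asn] = asn_counts.get(asn, 0) + 1
    return selected
```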
DetermineCheckpointHeight: $[\mathbb{N}] \times \mathbb{N} \rightarrow \mathbb{N}$
Properties:
- Median bounds: $\text{DetermineCheckpointHeight}(tips, margin) = h \implies h \in [\min(tips) - margin, \max(tips)]$ (checkpoint within tip range)
- Non-negative: $\text{DetermineCheckpointHeight}(tips, margin) \geq 0$ (checkpoint is non-negative)
- Empty input: $\text{DetermineCheckpointHeight}([], margin) = 0$ (empty tips return genesis)
- Safety margin: $\text{DetermineCheckpointHeight}(tips, margin) \leq \text{median}(tips)$ (checkpoint at or below median)
- Deterministic: $\text{DetermineCheckpointHeight}(tips_1, margin_1) = \text{DetermineCheckpointHeight}(tips_2, margin_2) \iff tips_1 = tips_2 \land margin_1 = margin_2$
For peer tips $tips \in [\mathbb{N}]$ and safety margin $margin \in \mathbb{N}$:
$$\text{DetermineCheckpointHeight}(tips, margin) = \begin{cases} 0 & \text{if } |tips| = 0 \\ \max(0, \text{median}(tips) - margin) & \text{if } \text{median}(tips) > margin \\ 0 & \text{otherwise} \end{cases}$$
Where $\text{median}(tips)$ is the median value of sorted $tips$:
- If $|tips|$ is odd: $\text{median}(tips) = tips[\lfloor |tips|/2 \rfloor]$
- If $|tips|$ is even: $\text{median}(tips) = \lfloor (tips[|tips|/2 - 1] + tips[|tips|/2]) / 2 \rfloor$
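The median-with-margin rule can be sketched as (illustrative, using the integer median just defined):

```python
def median(values):
    """Integer median: middle element, or floor of the mean of the two middles."""
    ordered = sorted(values)
    n = len(ordered)
    if n % 2 == 1:
        return ordered[n // 2]
    return (ordered[n // 2 - 1] + ordered[n // 2]) // 2

def determine_checkpoint_height(tips, margin):
    """Checkpoint at median(tips) - margin, clamped to genesis (height 0)."""
    if not tips:
        return 0
    m = median(tips)
    return m - margin if m > margin else 0
```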
FindConsensus: $[\mathcal{PC}] \times \mathbb{N} \times [0,1] \rightarrow \mathcal{CR}^?$
Where $\mathcal{PC} = \mathcal{PI} \times \mathcal{UC}$ is a peer commitment (peer info + UTXO commitment) and $\mathcal{CR} = \mathcal{UC} \times \mathbb{N} \times \mathbb{N} \times [0,1]$ is a consensus result (commitment, agreement count, total peers, agreement ratio).
Properties:
- Minimum peers: $\text{FindConsensus}(peer\_commitments, min\_peers, threshold) = \text{Some}(result) \implies |peer\_commitments| \geq min\_peers$ (requires minimum peers)
- Threshold requirement: $\text{FindConsensus}(peer\_commitments, min\_peers, threshold) = \text{Some}(result) \implies result.\text{agreement\_ratio} \geq threshold$ (meets threshold)
- Agreement count: $\text{FindConsensus}(peer\_commitments, min\_peers, threshold) = \text{Some}(result) \implies result.\text{agreement\_count} \geq \lceil |peer\_commitments| \times threshold \rceil$ (integer threshold)
- Consensus commitment: $\text{FindConsensus}(peer\_commitments, min\_peers, threshold) = \text{Some}(result) \implies$ at least $\lceil |peer\_commitments| \times threshold \rceil$ peers agree on $result.\text{commitment}$
- No consensus: $\text{FindConsensus}(peer\_commitments, min\_peers, threshold) = \text{None} \implies$ no commitment meets threshold
- Deterministic: $pc_1 = pc_2 \land min_1 = min_2 \land t_1 = t_2 \implies \text{FindConsensus}(pc_1, min_1, t_1) = \text{FindConsensus}(pc_2, min_2, t_2)$ (equal inputs yield equal outputs)
For peer commitments $peer\_commitments \in [\mathcal{PC}]$, minimum peers $min\_peers \in \mathbb{N}$, and threshold $threshold \in [0,1]$:
$$\text{FindConsensus}(peer\_commitments, min\_peers, threshold) = \begin{cases} \text{Some}(result) & \text{if } |peer\_commitments| \geq min\_peers \land {} \\ & \quad \exists c \in \mathcal{UC}: \frac{|\{pc \in peer\_commitments : pc.\text{commitment} = c\}|}{|peer\_commitments|} \geq threshold \\ \text{None} & \text{otherwise} \end{cases}$$
Where $result = (c, agreement\_count, |peer\_commitments|, agreement\_count / |peer\_commitments|)$ and $c$ is the commitment with highest agreement.
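A minimal Rust sketch of this grouping-and-threshold logic; the `Commitment` alias and all names are illustrative stand-ins for the real blvm types, and a production implementation would also break ties between equally popular commitments deterministically:

```rust
use std::collections::HashMap;

// Illustrative stand-in: a commitment modeled as an opaque 32-byte value.
type Commitment = [u8; 32];

struct ConsensusResult {
    commitment: Commitment,
    agreement_count: usize,
    total_peers: usize,
    agreement_ratio: f64,
}

fn find_consensus(
    commitments: &[Commitment],
    min_peers: usize,
    threshold: f64,
) -> Option<ConsensusResult> {
    if commitments.len() < min_peers {
        return None; // requires minimum peers
    }
    // Group identical commitments and count agreement per group.
    let mut counts: HashMap<Commitment, usize> = HashMap::new();
    for c in commitments {
        *counts.entry(*c).or_insert(0) += 1;
    }
    // Take the commitment with the highest agreement. (A real implementation
    // must break ties deterministically; HashMap iteration order is not.)
    let (best, count) = counts.iter().max_by_key(|entry| *entry.1).map(|(k, v)| (*k, *v))?;
    let ratio = count as f64 / commitments.len() as f64;
    if ratio >= threshold {
        Some(ConsensusResult {
            commitment: best,
            agreement_count: count,
            total_peers: commitments.len(),
            agreement_ratio: ratio,
        })
    } else {
        None // no commitment meets threshold
    }
}
```

Since `count` is an integer, `ratio >= threshold` is equivalent to the spec's $agreement\_count \geq \lceil |peer\_commitments| \times threshold \rceil$.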
VerifyConsensusCommitment: $\mathcal{CR} \times [\mathcal{H}] \rightarrow \{\text{valid}, \text{invalid}\}$
Properties:
- PoW verification: $\text{VerifyConsensusCommitment}(consensus, headers) = \text{valid} \implies \text{VerifyPoW}(headers) = \text{true}$ (PoW must be valid)
- Supply verification: $\text{VerifyConsensusCommitment}(consensus, headers) = \text{valid} \implies \text{VerifySupply}(consensus.\text{commitment}.\text{total\_supply}, consensus.\text{commitment}.\text{block\_height}) = \text{true}$ (supply must be valid)
- Block hash match: $\text{VerifyConsensusCommitment}(consensus, headers) = \text{valid} \implies headers[consensus.\text{commitment}.\text{block\_height}].\text{hash} = consensus.\text{commitment}.\text{block\_hash}$ (block hash matches)
- Height bounds: $\text{VerifyConsensusCommitment}(consensus, headers) = \text{valid} \implies consensus.\text{commitment}.\text{block\_height} < |headers|$ (height within header chain)
- Deterministic: $c_1 = c_2 \land h_1 = h_2 \implies \text{VerifyConsensusCommitment}(c_1, h_1) = \text{VerifyConsensusCommitment}(c_2, h_2)$ (equal inputs yield equal outputs)
For consensus result $consensus \in \mathcal{CR}$ and header chain $headers \in [\mathcal{H}]$:
$$\text{VerifyConsensusCommitment}(consensus, headers) = \begin{cases} \text{valid} & \text{if } \text{VerifyPoW}(headers) \land {} \\ & \quad \text{VerifySupply}(consensus.\text{commitment}.\text{total\_supply}, consensus.\text{commitment}.\text{block\_height}) \land {} \\ & \quad consensus.\text{commitment}.\text{block\_height} < |headers| \land {} \\ & \quad headers[consensus.\text{commitment}.\text{block\_height}].\text{hash} = consensus.\text{commitment}.\text{block\_hash} \\ \text{invalid} & \text{otherwise} \end{cases}$$
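The four conjuncts amount to a short-circuiting check. In the sketch below, `Header`, `Commitment`, and the injected PoW/supply predicates are stand-ins for the real blvm-consensus types and for VerifyPoW/VerifySupply:

```rust
// Stand-in types for the sketch; the real blvm-consensus types differ.
struct Header {
    hash: [u8; 32],
}

struct Commitment {
    block_height: usize,
    block_hash: [u8; 32],
    total_supply: u64,
}

// PoW and supply predicates are injected so the sketch stays self-contained.
fn verify_consensus_commitment(
    c: &Commitment,
    headers: &[Header],
    verify_pow: impl Fn(&[Header]) -> bool,
    verify_supply: impl Fn(u64, usize) -> bool,
) -> bool {
    verify_pow(headers)
        && verify_supply(c.total_supply, c.block_height)
        // Height must be within the header chain before indexing into it.
        && c.block_height < headers.len()
        && headers[c.block_height].hash == c.block_hash
}
```

Note that the bounds check must precede the header lookup; `&&` short-circuits, so the index is never out of range.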
VerifyUTXOProofs: $\mathcal{CR} \times [(\mathcal{O}, \mathcal{U}, \mathcal{MP})] \rightarrow \{\text{valid}, \text{invalid}\}$
Where $\mathcal{MP}$ is a Merkle proof for UTXO inclusion.
Properties:
- Proof validity: $\text{VerifyUTXOProofs}(consensus, proofs) = \text{valid} \implies \forall (outpoint, utxo, proof) \in proofs: \text{VerifyMerkleProof}(consensus.\text{commitment}.\text{merkle\_root}, outpoint, utxo, proof) = \text{true}$ (all proofs valid)
- UTXO inclusion: $\text{VerifyUTXOProofs}(consensus, proofs) = \text{valid} \implies \forall (outpoint, utxo, proof) \in proofs: utxo \in \text{UTXOSet}(consensus.\text{commitment})$ (all UTXOs in commitment)
- Deterministic: $c_1 = c_2 \land p_1 = p_2 \implies \text{VerifyUTXOProofs}(c_1, p_1) = \text{VerifyUTXOProofs}(c_2, p_2)$ (equal inputs yield equal outputs)
For consensus result $consensus \in \mathcal{CR}$ and UTXO proofs $proofs \in [(\mathcal{O}, \mathcal{U}, \mathcal{MP})]$:
$$\text{VerifyUTXOProofs}(consensus, proofs) = \begin{cases} \text{valid} & \text{if } \forall (outpoint, utxo, proof) \in proofs: \\ & \quad \text{VerifyMerkleProof}(consensus.\text{commitment}.\text{merkle\_root}, outpoint, utxo, proof) = \text{true} \\ \text{invalid} & \text{otherwise} \end{cases}$$
Theorem 13.4.1 (Peer Diversity Correctness): Diverse peer discovery ensures network diversity:
$$\forall peers \in [\mathcal{PI}], max\_asn \in \mathbb{N}, target \in \mathbb{N}:$$ $$\text{DiscoverDiversePeers}(peers, max\_asn, target) \text{ ensures ASN and subnet diversity}$$
Proof: By construction, the algorithm filters peers to ensure no more than $max\_asn$ peers per ASN and no duplicate subnets. This ensures geographic and network diversity, reducing the risk of coordinated attacks. This is proven by blvm-spec-lock formal verification.
Theorem 13.4.2 (Checkpoint Safety): Checkpoint height determination prevents deep reorganizations:
$$\forall tips \in [\mathbb{N}], margin \in \mathbb{N}: \text{median}(tips) > margin \implies$$ $$\text{DetermineCheckpointHeight}(tips, margin) \leq \text{median}(tips) - margin$$
Proof: The checkpoint is calculated as $\max(0, \text{median}(tips) - margin)$, so whenever the median tip exceeds the margin the checkpoint sits at least $margin$ blocks behind it; otherwise it falls back to genesis. This provides a safety margin against deep reorganizations. This is proven by blvm-spec-lock formal verification.
Theorem 13.4.3 (Consensus Threshold Correctness): Consensus finding correctly implements threshold-based agreement:
$$\forall peer\_commitments \in [\mathcal{PC}], min\_peers \in \mathbb{N}, threshold \in [0,1]:$$ $$\text{FindConsensus}(peer\_commitments, min\_peers, threshold) = \text{Some}(result) \iff$$ $$|peer\_commitments| \geq min\_peers \land \frac{result.\text{agreement\_count}}{|peer\_commitments|} \geq threshold$$
Proof: The algorithm groups commitments by their values and finds the group with highest agreement. If the agreement ratio meets the threshold, consensus is found. This ensures that a sufficient fraction of peers agree on the commitment. This is proven by blvm-spec-lock formal verification.
14. Conclusion
This Orange Paper provides a complete mathematical specification of the Bitcoin consensus protocol. The mathematical formalism makes it suitable for formal verification and provides a solid foundation for understanding Bitcoin’s security properties.
14.1 Summary of Contributions
Complete Protocol Specification: We have mathematically formalized all consensus-critical aspects of Bitcoin, including:
- Transaction and block validation rules
- Script execution semantics
- Economic model with formal proofs
- Proof-of-work and difficulty adjustment
- Network protocol and mempool management
- Mining process and block template generation
Validation Against Implementation: All specifications have been validated against the actual Bitcoin Core implementation, ensuring accuracy and completeness.
Mathematical Rigor: The specification uses formal mathematical notation throughout, making it suitable for:
- Formal verification tools
- Academic research
- Protocol analysis and development
- Security auditing
14.2 Applications
This specification can be used for:
- Formal Verification: Proving correctness properties of Bitcoin implementations
- Protocol Analysis: Understanding the security and economic properties of Bitcoin
- Implementation Reference: Building new Bitcoin-compatible software
- Academic Research: Studying distributed consensus and cryptocurrency systems
The specification covers all aspects of Bitcoin’s operation, from basic transaction validation to complex economic rules. It serves as both a reference implementation and a formal specification that can be used for security analysis and protocol development.
15. Governance Model
The Bitcoin Commons governance system implements a 5-tier constitutional governance model with cryptographic enforcement, multisig security, and layer-based repository governance. The system applies the same cryptographic enforcement to governance that Bitcoin applies to consensus.
Core Innovation: Make power visible, capture expensive, and exit cheap through cryptographic audit trails and user-protective mechanisms.
15.1 Layer + Tier System
The governance system combines two dimensions:
Layer System (Repository Architecture):
- Layer 1-2 (Constitutional): blvm-spec, blvm-consensus – 6-of-7 signatures, 180 days (365 for consensus changes)
- Layer 3 (Implementation): blvm-protocol – 4-of-5 signatures, 90 days
- Layer 4 (Application): blvm-node, bllvm – 3-of-5 signatures, 60 days
- Layer 5 (Extension): blvm-sdk – 2-of-3 signatures, 14 days
Tier System (Action Classification):
- Tier 1: Routine Maintenance - 3-of-5 signatures, 7 days
- Tier 2: Feature Changes - 4-of-5 signatures, 30 days
- Tier 3: Consensus-Adjacent - 5-of-5 signatures, 90 days
- Tier 4: Emergency Actions - 4-of-5 signatures, 0 days
- Tier 5: Governance Changes - 5-of-5 signatures, 180 days
Combination Rule: When both Layer and Tier apply, the system uses the most restrictive (highest) requirements.
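As an illustration of the combination rule, taking the most restrictive of each dimension might look like the following; the `Requirement` struct and `combine` function are hypothetical, not the governance tooling's actual representation:

```rust
// Hypothetical representation of a signature/review-period rule; the real
// governance tooling's types may differ.
#[derive(Clone, Copy, Debug)]
struct Requirement {
    sigs_required: u8, // e.g. the "6" in 6-of-7
    sigs_total: u8,    // e.g. the "7" in 6-of-7
    review_days: u32,
}

// Take the most restrictive of each dimension: the scheme demanding more
// signatures wins, and the longer review period always applies.
fn combine(layer: Requirement, tier: Requirement) -> Requirement {
    let stricter_sigs = if layer.sigs_required >= tier.sigs_required { layer } else { tier };
    Requirement {
        sigs_required: stricter_sigs.sigs_required,
        sigs_total: stricter_sigs.sigs_total,
        review_days: layer.review_days.max(tier.review_days),
    }
}
```

For example, a Tier 2 feature change (4-of-5, 30 days) in a constitutional Layer 1-2 repository (6-of-7, 180 days) would require 6-of-7 signatures and 180 days.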
15.2 Cryptographic Enforcement
All governance actions require:
- Cryptographic Signatures: secp256k1 signatures (same curve as Bitcoin)
- Multisig Thresholds: Multiple maintainers must sign critical actions
- Public Verification: All signatures and actions are publicly verifiable
- Tamper Evidence: Hash chain and Merkle tree audit trails
- Blockchain Anchoring: OpenTimestamps for immutable proof
Signature Requirements:
- Maintainer keypairs for PR approval, maintainer management, emergency actions
- BIP39 mnemonic seed phrases (24 words) for key generation
- BIP32 hierarchical deterministic key derivation
- Regular key rotation (6 months for maintainers, 3 months for emergency keyholders)
See governance repository for complete governance specification, implementation details, and current status.
References
Bitcoin Protocol
- Bitcoin Core Implementation
- Bitcoin Improvement Proposals (BIPs)
- Satoshi Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System” (2008)
Cryptographic Standards
- FIPS 180-4: Secure Hash Standard
- SEC 2: Recommended Elliptic Curve Domain Parameters
- RFC 6979: Deterministic Usage of DSA and ECDSA
Mathematical Concepts
Security Vulnerabilities
This document represents the current state of Bitcoin. The protocol continues to evolve through the BIP process.
Protocol Specifications
Bitcoin Improvement Proposals (BIPs) implemented in BLVM. Consensus-critical behavior lives in blvm-consensus with tests, review, and BLVM Specification Lock proofs. See Formal Verification.
Consensus-Critical BIPs
Script Opcodes:
- BIP65 (CLTV, opcode 0xb1): Locktime validation (blvm-consensus/src/script/)
- BIP112 (CSV, opcode 0xb2): Relative locktime via sequence numbers (blvm-consensus/src/script/)
- BIP68: Relative locktime sequence encoding (used by BIP112)
Time Validation:
- BIP113: Median time-past for CLTV timestamp validation (blvm-consensus/src/block/mod.rs)
Transaction Features:
- BIP125 (RBF): Replace-by-fee with all 5 requirements (blvm-consensus/src/mempool.rs), with tests
- BIP141/143 (SegWit): Witness validation, weight calculation, P2WPKH/P2WSH (blvm-consensus/src/segwit.rs)
- BIP340/341/342 (Taproot): P2TR validation framework (blvm-consensus/src/taproot.rs)
Network Protocol BIPs
- BIP152: Compact block relay - short transaction IDs, block reconstruction (see Compact Blocks)
- BIP157/158: Client-side block filtering - GCS filter construction, integrated with network layer, works over all transports (see BIP157/158)
- BIP331: Package relay - efficient transaction relay (see Package Relay)
Application-Level BIPs
- BIP21: Bitcoin URI scheme (blvm-node/src/bip21.rs)
- BIP32/39/44: HD wallets, mnemonic phrases, standard derivation paths (blvm-node/src/wallet/)
- BIP70: Payment protocol (full reimplementation, blvm-node/src/network/bip70_handler.rs)
- BIP174: PSBT format for hardware wallet support (blvm-node/src/psbt.rs)
- BIP350/351: Bech32m for Taproot (P2TR), Bech32 for SegWit (blvm-node/src/bech32m.rs)
Experimental Features
Available in experimental build variant: UTXO commitments, BIP119 CTV (CheckTemplateVerify), Dandelion++ privacy relay, Stratum V2 mining protocol.
Configuration Reference
Reference for BLVM node configuration options. Configuration can be provided via TOML file, JSON file, command-line arguments, or environment variables. See Node Configuration for usage examples.
Precedence: CLI > ENV > config file > defaults. Canonical defaults: This reference is the source of truth; other docs (e.g. first-node, storage-backends) give examples—use this page when you need exact defaults.
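The precedence chain for any single option reduces to a first-`Some`-wins lookup. A hedged sketch (not the actual blvm configuration loader):

```rust
// Sketch of per-option precedence resolution: CLI > ENV > config file > default.
fn resolve<T>(cli: Option<T>, env: Option<T>, file: Option<T>, default: T) -> T {
    // Option::or keeps the first populated source; unwrap_or supplies the default.
    cli.or(env).or(file).unwrap_or(default)
}
```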
Path expansion: Path fields (storage.data_dir, modules.modules_dir, ibd.dump_dir, ibd.snapshot_dir) expand ~ to the home directory when loading from file.
Configuration File Format
Configuration files support both TOML (.toml) and JSON (.json) formats. TOML is recommended for readability.
Example Configuration File
# blvm.toml
listen_addr = "127.0.0.1:8333"
transport_preference = "tcp_only"
max_peers = 100
protocol_version = "BitcoinV1"
[storage]
data_dir = "/var/lib/blvm"
database_backend = "auto"
[storage.cache]
block_cache_mb = 100
utxo_cache_mb = 50
header_cache_mb = 10
[storage.pruning]
mode = { type = "normal", keep_from_height = 0, min_recent_blocks = 288 }
auto_prune = true
auto_prune_interval = 144
[modules]
enabled = true
modules_dir = "modules"
data_dir = "data/modules"
[rpc_auth]
required = false
rate_limit_burst = 100
rate_limit_rate = 10
Core Configuration
Network Settings
listen_addr
- Type: SocketAddr (e.g., "127.0.0.1:8333")
- Default: "127.0.0.1:8333"
- Description: Network address to listen on for incoming P2P connections.
- Example: listen_addr = "0.0.0.0:8333" (listen on all interfaces)
transport_preference
- Type: string (enum)
- Default: "tcp_only"
- Options:
  - "tcp_only" - Use only TCP transport (Bitcoin P2P compatible, default)
  - "quinn_only" - Use only Quinn/QUIC transport (requires quinn feature)
  - "iroh_only" - Use only Iroh transport (requires iroh feature, experimental)
  - "hybrid" - Use both TCP and Iroh simultaneously (requires iroh feature)
  - "all" - Use all available transports (requires both quinn and iroh features)
- Description: Transport protocol selection. See Transport Abstraction for details.
max_peers
- Type: integer
- Default: 100
- Description: Maximum number of simultaneous peer connections.
protocol_version
- Type: string
- Default: "BitcoinV1"
- Options: "BitcoinV1" (mainnet), "Testnet3" (testnet), "Regtest" (regtest)
- Description: Bitcoin protocol variant. See Network Variants.
persistent_peers
- Type: array of SocketAddr
- Default: []
- Description: List of peer addresses to connect to on startup. Format: ["192.168.1.1:8333", "example.com:8333"]
- Example: persistent_peers = ["192.168.1.1:8333", "10.0.0.1:8333"]
enable_self_advertisement
- Type: boolean
- Default: true
- Description: Whether to advertise own address to peers. Set to false for privacy.
IBD Configuration
Initial block download uses parallel IBD only.
- [ibd] (top-level): chunk_size, download_timeout_secs, mode, eviction, max_blocks_in_transit_per_peer, headers_timeout_secs, headers_max_failures; optional: preferred_peers, max_ahead_blocks, memory_only, dump_dir, snapshot_dir, yield_interval, earliest_first, prefetch_*, utxo_prefetch_lookahead
- [ibd_protection] (top-level): bandwidth limits per peer/IP/subnet
- max_concurrent_per_peer is fixed at 64 in code (not in config)

See Node Configuration and IBD Protection; ENV: BLVM_IBD_*.
Storage Configuration
storage.data_dir
- Type: string (path)
- Default: "data"
- Description: Directory for storing blockchain data (blocks, UTXO set, indexes).
storage.database_backend
- Type: string (enum)
- Default: "auto"
- Options:
  - "auto" - Select by build features: RocksDB when rocksdb feature enabled (typical default), else TidesDB, else Redb, else Sled
  - "rocksdb" - Use RocksDB (requires rocksdb feature; reads common LevelDB/blk*.dat layouts)
  - "redb" - Use redb database (production-ready)
  - "sled" - Use sled database (beta, fallback option)
  - "tidesdb" - Use TidesDB (if available)
- Description: Database backend selection. System automatically falls back if preferred backend fails.
Storage Cache
storage.cache.block_cache_mb
- Type: integer (megabytes)
- Default: 100
- Description: Size of block cache in megabytes. Caches recently accessed blocks.
storage.cache.utxo_cache_mb
- Type: integer (megabytes)
- Default: 50
- Description: Size of UTXO cache in megabytes. Caches frequently accessed UTXOs.
storage.cache.header_cache_mb
- Type: integer (megabytes)
- Default: 10
- Description: Size of header cache in megabytes. Caches block headers.
Pruning Configuration
storage.pruning.mode
- Type: object (enum with variants)
- Default: Aggressive (configurable; for full archival nodes use Disabled or Normal)
- Options: Disabled, Normal (keep_from_height, min_recent_blocks), Aggressive (keep_from_height, keep_commitments, keep_filtered_blocks, min_blocks), Custom (fine-grained control)
- Description: Pruning mode configuration. See Pruning Modes below.
storage.pruning.auto_prune
- Type: boolean
- Default: true (if mode is Aggressive), false otherwise
- Description: Automatically prune old blocks periodically as chain grows.
storage.pruning.auto_prune_interval
- Type: integer (blocks)
- Default: 144 (~1 day at 10 min/block)
- Description: Prune every N blocks when auto_prune is enabled.
storage.pruning.min_blocks_to_keep
- Type: integer (blocks)
- Default: 144 (~1 day at 10 min/block)
- Description: Minimum number of blocks to keep as safety margin, even with aggressive pruning.
storage.pruning.prune_on_startup
- Type: boolean
- Default: false
- Description: Prune old blocks when node starts (if they exceed configured limits).
storage.pruning.incremental_prune_during_ibd
- Type: boolean
- Default: true (if Aggressive mode)
- Description: Prune old blocks incrementally during initial block download (IBD), keeping only a sliding window. Requires UTXO commitments.
storage.pruning.prune_window_size
- Type: integer (blocks)
- Default: 144 (~1 day)
- Description: Number of recent blocks to keep during incremental pruning (sliding window).
storage.pruning.min_blocks_for_incremental_prune
- Type: integer (blocks)
- Default: 288 (~2 days)
- Description: Minimum blocks before starting incremental pruning during IBD.
Pruning Modes
Disabled Mode
[storage.pruning]
mode = { type = "disabled" }
Keep all blocks. No pruning performed.
Normal Mode
[storage.pruning]
mode = { type = "normal", keep_from_height = 0, min_recent_blocks = 288 }
- keep_from_height: Keep blocks from this height onwards (default: 0)
- min_recent_blocks: Keep at least this many recent blocks (default: 288 = ~2 days)
Aggressive Mode
[storage.pruning]
mode = { type = "aggressive", keep_from_height = 0, keep_commitments = true, keep_filtered_blocks = false, min_blocks = 144 }
Requires: utxo-commitments feature enabled.
- keep_from_height: Keep blocks from this height onwards (default: 0)
- keep_commitments: Keep UTXO commitments for pruned blocks (default: true)
- keep_filtered_blocks: Keep spam-filtered blocks for pruned range (default: false)
- min_blocks: Minimum blocks to keep as safety margin (default: 144 = ~1 day)
Custom Mode
[storage.pruning.mode]
type = "custom"
keep_headers = true # Always required for PoW verification
keep_bodies_from_height = 0
keep_commitments = false
keep_filters = false
keep_filtered_blocks = false
keep_witnesses = false
keep_tx_index = false
Fine-grained control over what data to keep:
- keep_headers: Keep block headers (always required, default: true)
- keep_bodies_from_height: Keep block bodies from this height onwards
- keep_commitments: Keep UTXO commitments (if feature enabled)
- keep_filters: Keep BIP158 filters (if feature enabled)
- keep_filtered_blocks: Keep spam-filtered blocks
- keep_witnesses: Keep witness data (for SegWit verification)
- keep_tx_index: Keep transaction index
UTXO Commitments Pruning (Experimental)
Requires: utxo-commitments feature enabled.
[storage.pruning.utxo_commitments]
keep_commitments = true
keep_filtered_blocks = false
generate_before_prune = true
max_commitment_age_days = 0 # 0 = keep forever
BIP158 Filter Pruning (Experimental)
Requires: bip158 feature enabled.
[storage.pruning.bip158_filters]
keep_filters = true
keep_filter_headers = true # Always required for verification
max_filter_age_days = 0 # 0 = keep forever
Module System Configuration
modules.enabled
- Type: boolean
- Default: true
- Description: Enable the module system. Set to false to disable all modules.
modules.modules_dir
- Type: string (path)
- Default: "modules"
- Description: Directory containing module binaries and manifests.
modules.data_dir
- Type: string (path)
- Default: "data/modules"
- Description: Directory for module data (state, configs, logs).
modules.socket_dir
- Type: string (path)
- Default: "data/modules/sockets"
- Description: Directory for IPC sockets used for module communication.
modules.enabled_modules
- Type: array of string
- Default: [] (empty = auto-discover all modules)
- Description: List of module names to enable. Empty list enables all discovered modules.
- Example: enabled_modules = ["lightning-module", "mining-module"]
modules.module_configs
- Type: object (nested key-value pairs)
- Default: {}
- Description: Module-specific configuration overrides.
- Example:

[modules.module_configs.lightning-module]
port = "9735"
network = "mainnet"
Module Resource Limits
[module_resource_limits]
default_max_cpu_percent = 50 # CPU limit (0-100%)
default_max_memory_bytes = 536870912 # Memory limit (512 MB)
default_max_file_descriptors = 256 # File descriptor limit
default_max_child_processes = 10 # Child process limit
module_startup_wait_millis = 100 # Startup wait time
module_socket_timeout_seconds = 5 # Socket timeout
module_socket_check_interval_millis = 100
module_socket_max_attempts = 50
RPC Configuration
rpc_auth.required
- Type: boolean
- Default: false
- Description: Require authentication for RPC requests. Set to true for production.
rpc_auth.tokens
- Type: array of string
- Default: []
- Description: Valid authentication tokens for RPC access.
- Example: tokens = ["token1", "token2"]
rpc_auth.certificates
- Type: array of string
- Default: []
- Description: Valid certificate fingerprints for certificate-based authentication.
rpc_auth.rate_limit_burst
- Type: integer
- Default: 100
- Description: RPC rate limit burst size (token bucket).
rpc_auth.rate_limit_rate
- Type: integer
- Default: 10
- Description: RPC rate limit (requests per second).
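The burst/rate pair describes a standard token bucket: the bucket holds at most rate_limit_burst tokens, refills at rate_limit_rate tokens per second, and each request spends one token. A minimal sketch of those semantics (not the node's actual limiter implementation):

```rust
// Token-bucket sketch of rate_limit_burst / rate_limit_rate semantics.
struct TokenBucket {
    tokens: f64,
    burst: f64,
    rate: f64,
}

impl TokenBucket {
    fn new(burst: u32, rate: u32) -> Self {
        // Start full: an idle client may immediately send a burst of requests.
        Self { tokens: burst as f64, burst: burst as f64, rate: rate as f64 }
    }

    // Refill for the elapsed time (capped at the burst size), then try to
    // spend one token for an incoming RPC request.
    fn allow(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.rate).min(self.burst);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```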
Network Configuration
Network Timing
[network_timing]
target_peer_count = 8 # Target outbound peers (typical deployments use a similar range)
peer_connection_delay_seconds = 2 # Wait before connecting to database peers
addr_relay_min_interval_seconds = 8640 # Min interval between addr broadcasts (2.4 hours)
max_addresses_per_addr_message = 1000 # Max addresses per addr message
max_addresses_from_dns = 100 # Max addresses from DNS seeds
Request Timeouts
[request_timeouts]
async_request_timeout_seconds = 300 # Timeout for async requests (getheaders, getdata)
utxo_commitment_request_timeout_seconds = 30
request_cleanup_interval_seconds = 60 # Cleanup interval for expired requests
pending_request_max_age_seconds = 300 # Max age before cleanup
DoS Protection
[dos_protection]
max_connections_per_window = 10 # Max connections per IP per window
window_seconds = 60 # Time window for rate limiting
max_message_queue_size = 10000 # Max message queue size
max_active_connections = 200 # Max active connections
auto_ban_threshold = 3 # Violations before auto-ban
ban_duration_seconds = 3600 # Ban duration (1 hour)
Relay Configuration
[relay]
max_relay_age = 3600 # Max age for relayed items (1 hour)
max_tracked_items = 10000 # Max items to track
enable_block_relay = true # Enable block relay
enable_tx_relay = true # Enable transaction relay
enable_dandelion = false # Enable Dandelion++ privacy relay
Address Database
[address_database]
max_addresses = 10000 # Max addresses to store
expiration_seconds = 86400 # Address expiration (24 hours)
Peer Rate Limiting
[peer_rate_limiting]
default_burst = 100 # Token bucket burst size
default_rate = 10 # Messages per second
Ban List Sharing
[ban_list_sharing]
enabled = true # Enable ban list sharing
share_mode = "periodic" # "immediate", "periodic", or "disabled"
periodic_interval_seconds = 300 # Sharing interval (5 minutes)
min_ban_duration_to_share = 3600 # Min ban duration to share (1 hour)
Experimental Features
Dandelion++ Privacy Relay
Requires: dandelion feature enabled.
[dandelion]
stem_timeout_seconds = 10 # Stem phase timeout
fluff_probability = 0.1 # Probability of fluffing at each hop (10%)
max_stem_hops = 2 # Max stem hops before forced fluff
Stratum V2 Mining
Requires: stratum-v2 feature enabled.
[stratum_v2]
enabled = false
pool_url = "tcp://pool.example.com:3333" # Pool URL for miner mode
listen_addr = "127.0.0.1:3333" # Listen address for server mode
transport_preference = "tcp_only"
merge_mining_enabled = false
secondary_chains = []
Command-Line Arguments
Configuration can be overridden via command-line arguments. CLI overrides ENV and config file.
Global: --network / -n, --rpc-addr / -r, --listen-addr / -l, --data-dir / -d, --config / -c, --verbose / -v
Advanced: --assumevalid, --noassumevalid, --assumeutxo, --target-peer-count, --async-request-timeout, --module-max-cpu-percent, --module-max-memory-bytes
Feature flags: --enable-stratum-v2, --enable-bip158, --enable-dandelion, --enable-sigop and --disable-* counterparts
Commands: start (default), status, health, version, chain, peers, network, sync, config show|validate|path, rpc
blvm --config /path/to/config.toml
blvm --network mainnet --data-dir /var/lib/blvm
blvm config show
CLI behavior is documented in this section; run blvm --help for the full generated flag list.
Environment Variables
Configuration can also be set via environment variables (prefixed with BLVM_). ENV overrides config file.
export BLVM_NETWORK=testnet
export BLVM_DATA_DIR=/var/lib/blvm
export BLVM_RPC_ADDR=127.0.0.1:8332
export BLVM_IBD_EVICTION=dynamic
export BLVM_NETWORK_TARGET_PEER_COUNT=125
Key ENV categories: Node (BLVM_DATA_DIR, BLVM_NETWORK, BLVM_LISTEN_ADDR, BLVM_RPC_ADDR), Network timing (BLVM_NETWORK_TARGET_PEER_COUNT, BLVM_NETWORK_PEER_CONNECTION_DELAY), Request timeouts (BLVM_REQUEST_ASYNC_TIMEOUT, etc.), Module limits (BLVM_MODULE_MAX_*), IBD (BLVM_IBD_*), Storage (BLVM_DBCACHE_MB, BLVM_ROCKSDB_*), External (RPC_AUTH_TOKENS, COMMONS_API_KEY, RUST_LOG).
Additional or experimental BLVM_* names may exist; use blvm --help and the node’s config schema as the source of truth in this repository.
Configuration Precedence
- Command-line arguments (highest priority)
- Environment variables (e.g. BLVM_DATA_DIR, BLVM_IBD_EVICTION)
- Configuration file
- Default values (lowest priority)
Config-file-only options: relay, fibre, dandelion, peer_rate_limiting, rest_api, ban_list_sharing have no ENV overrides. Use CLI flags (e.g. --enable-dandelion) or config file.
Validation
The node validates configuration on startup. Invalid configurations will cause the node to exit with an error message indicating the problem.
Common validation errors:
- Pruning mode requires features that aren’t enabled
- Invalid network addresses
- Resource limits set to zero
- Conflicting transport preferences
See Also
- Node Configuration - Quick start guide
- Storage Backends - Backend selection details
- Transport Abstraction - Transport options
- Module Development - Module system details
API Index
Quick reference and cross-references to all BLVM APIs across the ecosystem.
Complete API Documentation
API reference in this book (hosted at docs.thebitcoincommons.org):
- blvm-primitives - Foundation crate: types, serialization, crypto, opcodes, constants (shared by consensus and protocol; blvm-consensus re-exports for API compatibility)
- blvm-consensus - Consensus layer APIs (transaction validation, block validation, script execution)
- blvm-protocol - Protocol abstraction layer APIs (network variants, message handling)
- blvm-node - Node implementation APIs (storage, networking, RPC, modules)
- blvm-sdk - Developer SDK APIs (governance primitives, composition framework)
For full Rust type/signature documentation, run cargo doc --open in each crate repository.
Quick Reference by Component
Foundation (blvm-primitives)
Shared types, serialization, and crypto live in blvm-primitives. blvm-consensus depends on primitives and re-exports many types; blvm-protocol and blvm-node use those types through the Cargo dependency graph on consensus and protocol crates—not ad hoc duplicated definitions.
Key areas: types (Transaction, Block, BlockHeader, UTXO, Script, etc.), serialization, cryptographic operations, opcodes, constants.
Documentation: See Consensus Architecture and System Overview.
Consensus Layer (blvm-consensus)
Block and script logic live in the block/ and script/ submodules (directories), not single files. Canonical types and crypto are in blvm-primitives and re-exported by consensus.
Core Functions (Orange Paper spec names):
- CheckTransaction - Validate transaction structure and signatures
- ConnectBlock - Validate and connect block to chain
- EvalScript - Execute Bitcoin script
- VerifyScript - Verify script execution results
Note: These are Orange Paper mathematical specification names (PascalCase). The Rust API uses ConsensusProof struct methods (see API Usage Patterns below).
Key Types:
- Transaction, Block, BlockHeader
- UTXO, OutPoint
- Script, ScriptOpcode
- ValidationResult
Documentation: See Consensus Overview, Consensus Architecture, and Formal Verification.
Protocol Layer (blvm-protocol)
Core Abstractions:
- BitcoinProtocolEngine - Protocol engine for network variants
- NetworkMessage - P2P message types
- ProtocolVersion - Network variant (BitcoinV1, Testnet3, Regtest)
Key Types:
- NetworkMessage, MessageType
- PeerConnection, ConnectionState
- BlockTemplate (for mining)
Documentation: See Protocol Overview, Network Protocol, and Protocol Architecture.
Node Implementation (blvm-node)
Node API
Main Node Type:
- Node - Main node orchestrator
Key Methods:
- Node::new(protocol_version: Option<ProtocolVersion>) -> Result<Self> - Create new node
- Node::start() -> Result<()> - Start the node
- Node::stop() -> Result<()> - Stop the node gracefully
Module System API
NodeAPI Trait - Interface for modules to query node state:
pub trait NodeAPI {
    async fn get_block(&self, hash: &Hash) -> Result<Option<Block>, ModuleError>;
    async fn get_block_header(&self, hash: &Hash) -> Result<Option<BlockHeader>, ModuleError>;
    async fn get_transaction(&self, hash: &Hash) -> Result<Option<Transaction>, ModuleError>;
    async fn has_transaction(&self, hash: &Hash) -> Result<bool, ModuleError>;
    async fn get_chain_tip(&self) -> Result<Hash, ModuleError>;
    async fn get_block_height(&self) -> Result<u64, ModuleError>;
    async fn get_utxo(&self, outpoint: &OutPoint) -> Result<Option<UTXO>, ModuleError>;
    async fn subscribe_events(&self, event_types: Vec<EventType>) -> Result<Receiver<ModuleMessage>, ModuleError>;
    // … plus P2P serve denylists, get_sync_status, ban_peer, maintenance mode (see the trait definition)
}
The full NodeAPI surface includes events (subscribe_events) and targeted writes for P2P policy (block/tx getdata denylists), sync status, peer ban, and maintenance mode; see Module development.
Event Types: Core Blockchain Events:
- EventType::NewBlock - New block connected to chain
- EventType::NewTransaction - New transaction in mempool
- EventType::BlockDisconnected - Block disconnected (chain reorg)
- EventType::ChainReorg - Chain reorganization occurred
Payment Events:
EventType::PaymentRequestCreated, EventType::PaymentSettled, EventType::PaymentFailed, EventType::PaymentVerified, EventType::PaymentRouteFound, EventType::PaymentRouteFailed, EventType::ChannelOpened, EventType::ChannelClosed
Mining Events:
EventType::BlockMined, EventType::BlockTemplateUpdated, EventType::MiningDifficultyChanged, EventType::MiningJobCreated, EventType::ShareSubmitted, EventType::MergeMiningReward, EventType::MiningPoolConnected, EventType::MiningPoolDisconnected
Network Events:
EventType::PeerConnected, EventType::PeerDisconnected, EventType::MessageReceived, EventType::MessageSent, EventType::BroadcastStarted, EventType::BroadcastCompleted, EventType::RouteDiscovered, EventType::RouteFailed
Module Lifecycle Events:
EventType::ModuleLoaded, EventType::ModuleUnloaded, EventType::ModuleCrashed, EventType::ModuleDiscovered, EventType::ModuleInstalled, EventType::ModuleUpdated, EventType::ModuleRemoved
And many more. For complete list, see EventType enum and Event System.
ModuleContext - Context provided to modules:
#![allow(unused)]
fn main() {
pub struct ModuleContext {
pub module_id: String,
pub socket_path: String,
pub data_dir: String,
pub config: HashMap<String, String>,
}
}
Documentation: See Module Development for complete module API details.
RPC API
RPC Methods: JSON-RPC methods aligned with widely documented Bitcoin node APIs. See RPC API Reference for the list.
Key Categories:
- Blockchain methods (9): getblockchaininfo, getblock, getblockhash, getblockheader, getbestblockhash, getblockcount, getdifficulty, gettxoutsetinfo, verifychain
- Raw transaction methods (7): getrawtransaction, sendrawtransaction, testmempoolaccept, decoderawtransaction, gettxout, gettxoutproof, verifytxoutproof
- Mempool methods (3): getmempoolinfo, getrawmempool, savemempool
- Network methods (10): getnetworkinfo, getpeerinfo, getconnectioncount, ping, addnode, disconnectnode, getnettotals, clearbanned, setban, listbanned
- Mining methods (4): getmininginfo, getblocktemplate, submitblock, estimatesmartfee
Documentation: See RPC API Reference.
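Since the interface is standard JSON-RPC 2.0, a request is just a small JSON document POSTed to the node's RPC endpoint. A minimal sketch of building such a body (the endpoint, port, and authentication are deployment-specific and not shown here):

```rust
// Build a JSON-RPC 2.0 request body for a method such as getblockcount.
// `params` is passed as an already-serialized JSON array string.
fn jsonrpc_request(method: &str, params: &str, id: u64) -> String {
    format!(
        r#"{{"jsonrpc":"2.0","id":{},"method":"{}","params":{}}}"#,
        id, method, params
    )
}

fn main() {
    let body = jsonrpc_request("getblockcount", "[]", 1);
    // POST this body to the node's RPC endpoint with any HTTP client.
    println!("{}", body);
}
```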
Storage API
Storage Trait:
- Storage - Storage backend interface
Key Methods:
- get_block(&self, hash: &Hash) -> Result<Option<Block>>
- get_block_header(&self, hash: &Hash) -> Result<Option<BlockHeader>>
- get_utxo(&self, outpoint: &OutPoint) -> Result<Option<UTXO>>
- get_chain_tip(&self) -> Result<Hash>
- get_block_height(&self) -> Result<u64>
Backends: Selected by database_backend (default auto resolves by build features: RocksDB when rocksdb enabled, then TidesDB, Redb, Sled). See Storage Backends and Configuration Reference.
Documentation: See Storage Backends and Node Configuration.
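The auto resolution order can be pictured as a simple preference scan. In the real crate the set of available backends is fixed at compile time via Cargo features; this sketch models the enabled features as a runtime list purely for illustration.

```rust
// Sketch of the documented `auto` resolution: the first backend whose build
// feature is enabled wins, in the order RocksDB, TidesDB, Redb, Sled.
// The real crate selects via #[cfg(feature = "...")] at compile time.
fn resolve_backend(configured: &str, enabled_features: &[&str]) -> Option<String> {
    if configured != "auto" {
        // An explicit setting is used as-is.
        return Some(configured.to_string());
    }
    for candidate in ["rocksdb", "tidesdb", "redb", "sled"] {
        if enabled_features.contains(&candidate) {
            return Some(candidate.to_string());
        }
    }
    None
}

fn main() {
    // With only redb and sled compiled in, `auto` resolves to redb.
    println!("selected: {:?}", resolve_backend("auto", &["redb", "sled"]));
}
```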
Developer SDK (blvm-sdk)
Module authoring (node feature)
- Crate: blvm-sdk (feature node); procedural macros in blvm-sdk-macros (#[module], #[command], #[rpc_method], #[on_event], #[config], #[migration], etc.)
- Entry: blvm_sdk::module::prelude::*, run_module!, run_module_main!, ModuleBootstrap, ModuleDb, InvocationContext
- Documentation: Module Development, hello-module example
Governance Primitives
Core Types:
- GovernanceKeypair - Keypair for signing
- PublicKey - Public key (secp256k1)
- Signature - Cryptographic signature
- GovernanceMessage - Message types (Release, ModuleApproval, BudgetDecision)
- Multisig - Threshold signature configuration
Functions:
- sign_message(secret_key: &SecretKey, message: &[u8]) -> GovernanceResult<Signature>
- verify_signature(signature: &Signature, message: &[u8], public_key: &PublicKey) -> GovernanceResult<bool>
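A Multisig configuration ultimately reduces to counting valid signatures against an m-of-n threshold. The sketch below shows only that counting step; it is not the real blvm-sdk API, and the `valid` slice stands in for the per-signature results that verify_signature would produce.

```rust
// Illustrative m-of-n threshold check. `valid` holds one verification result
// per collected signature; the threshold is the `m` in m-of-n.
fn threshold_met(valid: &[bool], threshold: usize) -> bool {
    valid.iter().filter(|&&ok| ok).count() >= threshold
}

fn main() {
    // 3-of-5: three valid signatures out of five slots satisfies the policy.
    let results = [true, true, false, true, false];
    println!("3-of-5 satisfied: {}", threshold_met(&results, 3));
}
```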
Documentation: See SDK API Reference.
Composition Framework
Core Types:
- ModuleRegistry - Module discovery and management
- NodeComposer - Node composition from modules
- ModuleLifecycle - Module lifecycle management
- NodeSpec, ModuleSpec - Composition specifications
Documentation: See SDK API Reference.
API Usage Patterns
Consensus Validation
#![allow(unused)]
fn main() {
use blvm_consensus::{ConsensusProof, Transaction, Block};
// Create consensus proof instance
let proof = ConsensusProof::new();
// Validate transaction
let result = proof.validate_transaction(&tx)?;
// Validate and connect block
let (result, new_utxo_set) = proof.validate_block(&block, utxo_set, height)?;
}
Alternative: Direct module functions are also available:
#![allow(unused)]
fn main() {
use blvm_consensus::{transaction, block, types::*};
// Validate transaction using direct module function
let result = transaction::check_transaction(&tx)?;
// Connect block using direct module function
let (result, new_utxo_set, _undo_log) = block::connect_block(
&block,
&witnesses,
utxo_set,
height,
None,
network_time,
network,
)?;
}
Protocol Abstraction
#![allow(unused)]
fn main() {
use blvm_protocol::{BitcoinProtocolEngine, ProtocolVersion};
// Create protocol engine for testnet
let engine = BitcoinProtocolEngine::new(ProtocolVersion::Testnet3)?;
}
Module Development
#![allow(unused)]
fn main() {
use blvm_node::module::traits::NodeAPI;
// In module code, use NodeAPI trait through IPC
let block = node_api.get_block(&hash).await?;
let tip = node_api.get_chain_tip().await?;
}
Governance Operations
#![allow(unused)]
fn main() {
use blvm_sdk::{GovernanceKeypair, GovernanceMessage, Multisig};
// Generate keypair and sign message
let keypair = GovernanceKeypair::generate()?;
let message = GovernanceMessage::Release { version, commit_hash };
let signature = sign_message(&keypair.secret_key_bytes(), &message.to_signing_bytes())?;
}
API Stability
Stable APIs:
- Consensus layer (blvm-consensus) - Stable API; validated with tests and spec-lock proofs
- Protocol layer (blvm-protocol) - Stable, Bitcoin-compatible
- Node storage APIs - Stable
Development APIs:
- Module system APIs - Stable interface; implementation evolves with releases
- Composition framework - Active development
- Experimental features - Subject to change
Error Handling
All APIs use consistent error types:
- blvm_consensus::ConsensusError - Consensus validation errors
- blvm_protocol::ProtocolError - Protocol layer errors
- blvm_node::module::ModuleError - Module system errors
- blvm_sdk::GovernanceError - Governance operation errors
See Also
- SDK API Reference - Detailed SDK documentation
- Module Development - Module API usage
- RPC API Reference - RPC method documentation
- Configuration Reference - Configuration APIs
Glossary
Key terms and concepts used throughout the BLVM documentation.
BLVM Components
BLVM (Bitcoin Low-Level Virtual Machine) - Compiler-like infrastructure for Bitcoin implementations. The Orange Paper is the mathematical specification (IR); the implementation is validated against it via formal verification (blvm-spec-lock), not generated or transformed from the IR. Similar to how LLVM provides compiler infrastructure, BLVM provides Bitcoin implementation infrastructure.
Orange Paper - Mathematical specification of Bitcoin’s consensus protocol, serving as the “intermediate representation” (IR) in BLVM’s compiler-like architecture. The implementation is validated against this spec via formal verification; code is not generated or transformed from the IR. See Orange Paper.
Optimization Passes - Runtime optimizations applied to the blvm-consensus implementation (e.g. constant folding, memory layout, SIMD vectorization, bounds check optimization, dead code elimination). They optimize the implementation code; the implementation is validated against the Orange Paper, not generated from it. See Optimization Passes.
blvm-primitives - Shared foundation crate: Bitcoin types, serialization, crypto, opcodes, constants. Used by blvm-consensus and blvm-protocol; consensus re-exports for API compatibility. See API Index.
blvm-consensus - Optimized mathematical implementation of Bitcoin consensus rules with formal verification. Builds on blvm-primitives; block/script logic in block/ and script/ submodules. See Consensus Overview.
blvm-protocol - Protocol abstraction layer for multiple Bitcoin variants (mainnet, testnet, regtest) while maintaining consensus compatibility. Uses blvm-primitives. See Protocol Overview.
blvm-node - Bitcoin node implementation with storage, networking, RPC, and mining capabilities. Intended as the reference full node; treat production deployment like any consensus-adjacent system (hardening, monitoring, System Status). See Node Overview.
blvm-sdk - Developer toolkit: governance primitives, node module authoring (macros, run_module!, node feature), composition, and CLI tools (keygen, sign, compose, etc.). See SDK Overview.
Governance
Bitcoin Commons - Forkable governance framework applying Elinor Ostrom’s commons management principles through cryptographic enforcement. See Governance Overview.
5-Tier Governance Model - Constitutional governance system with graduated signature thresholds (3-of-5 to 6-of-7) and review periods (7 days to 365 days) based on change impact. See Layer-Tier Model.
Forkable Governance - Governance rules can be forked by users if they disagree with decisions, creating exit competition and preventing capture. See Governance Fork.
Cryptographic Enforcement - All governance actions require cryptographic signatures from maintainers, making power visible and accountable. See Keyholder Procedures.
Technical Concepts
Formal Verification - BLVM Specification Lock: Z3-backed proofs tying spec-locked consensus code to Orange Paper contracts. See Formal Verification.
Proofs Locked to Code - Spec-lock proofs live with the functions they verify; code changes require proof updates. See Formal Verification.
Spec Drift Detection - Automated detection when implementation code diverges from the Orange Paper mathematical specification.
Compiler-Like Architecture - The Orange Paper is the spec (IR); blvm-consensus is the implementation, validated against that spec through tests, review, and BLVM Specification Lock. Optimization passes optimize the implementation. No code is generated from the IR. See System Overview.
Process Isolation - Module system design where each module runs in a separate process with isolated memory, preventing failures from propagating to the base node.
IPC (Inter-Process Communication) - Communication mechanism between modules and the node using Unix domain sockets with length-delimited binary messages. See Module IPC Protocol.
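Length-delimited framing means each message on the socket is preceded by its byte length, so the receiver knows exactly how much to read. The sketch below assumes a 4-byte big-endian length prefix; the actual prefix width and endianness of the IPC protocol are not specified here, so treat this as the framing idea only.

```rust
// Encode: 4-byte big-endian length prefix followed by the payload bytes.
fn encode_frame(payload: &[u8]) -> Vec<u8> {
    let mut out = (payload.len() as u32).to_be_bytes().to_vec();
    out.extend_from_slice(payload);
    out
}

// Decode: read the prefix, then slice out exactly that many payload bytes.
// Returns None if the buffer is truncated.
fn decode_frame(buf: &[u8]) -> Option<&[u8]> {
    if buf.len() < 4 {
        return None;
    }
    let len = u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    buf.get(4..4 + len)
}

fn main() {
    let frame = encode_frame(b"ping");
    // Round-trips: the decoded slice is the original payload.
    println!("decoded: {:?}", decode_frame(&frame));
}
```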
Storage & Networking
Storage Backends - Database backends for blockchain data. database_backend = auto selects by build features (RocksDB when enabled, then TidesDB, Redb, Sled). See Storage Backends and Configuration Reference.
Pruning - Storage optimization that removes old block data while keeping the UTXO set. Configurable to keep last N blocks.
Transport Abstraction - Unified abstraction supporting multiple transport protocols: TCP (default, Bitcoin P2P compatible) and Iroh/QUIC (experimental). See Transport Abstraction.
Network Variants - Bitcoin network types: Mainnet (BitcoinV1, production), Testnet3 (test network), Regtest (regression testing, isolated).
Consensus & Protocol
Consensus Rules - Mathematical rules that all Bitcoin nodes must follow to maintain network consensus. Defined in the Orange Paper and implemented in blvm-consensus.
BIP (Bitcoin Improvement Proposal) - Standards for Bitcoin protocol changes. BLVM implements numerous BIPs including BIP30, BIP34, BIP66, BIP90, BIP147, BIP141/143, BIP340/341/342. See Protocol Specifications.
SegWit (Segregated Witness) - BIP141/143 implementation separating witness data from transaction data, enabling transaction malleability fixes and capacity improvements.
Taproot - BIP340/341/342 implementation providing Schnorr signatures, Merkle tree scripts, and improved privacy.
RBF (Replace-By-Fee) - BIP125 implementation allowing transaction replacement with higher fees before confirmation.
Development
Module System - Process-isolated system supporting optional features (Lightning, merge mining, privacy enhancements) without affecting consensus or base node stability.
Module Manifest (module.toml) - Configuration file defining module metadata, capabilities, dependencies, and entry point.
Capabilities - Permissions system for modules. Capabilities use snake_case in module.toml and map to Permission enum variants. Core capabilities include: read_blockchain, read_utxo, read_chain_state, subscribe_events, send_transactions, read_mempool, read_network, network_access, read_lightning, read_payment, read_storage, write_storage, manage_storage, read_filesystem, write_filesystem, manage_filesystem, register_rpc_endpoint, manage_timers, report_metrics, read_metrics, discover_modules, publish_events, call_module, register_module_api. See Permission System for complete list.
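The snake_case-to-Permission mapping can be pictured as a simple match over the manifest strings. Only a small subset of capabilities is shown below, and the variant names are assumptions for illustration, not the actual Permission enum.

```rust
// Illustrative subset of the capability -> Permission mapping. Variant names
// are assumed; the real enum lives in the node's permission system.
#[derive(Debug, PartialEq)]
enum Permission {
    ReadBlockchain,
    ReadUtxo,
    SubscribeEvents,
}

// Parse a snake_case capability string from module.toml into a permission.
// Unknown capabilities are rejected rather than silently granted.
fn parse_capability(cap: &str) -> Option<Permission> {
    match cap {
        "read_blockchain" => Some(Permission::ReadBlockchain),
        "read_utxo" => Some(Permission::ReadUtxo),
        "subscribe_events" => Some(Permission::SubscribeEvents),
        _ => None,
    }
}

fn main() {
    println!("{:?}", parse_capability("read_blockchain"));
}
```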
RPC (Remote Procedure Call) - JSON-RPC 2.0 interface for interacting with the node. Methods follow conventions widely used by Bitcoin node RPC documentation.
Governance Status
Governance activation - Governance rules are not yet activated; test keys are used. When activated, real cryptographic keys and keyholder onboarding enforce governance. The system is experimental until then. See System Status.
Contributing to BLVM
Thank you for your interest in contributing to BLVM (Bitcoin Low-Level Virtual Machine)! This guide covers the complete developer workflow from setting up your environment to getting your changes merged.
Code of Conduct
This project and everyone participating in it is governed by our Code of Conduct. By participating, you are expected to uphold this code.
Getting Started
Prerequisites
- Rust 1.83 or later (recommended; some crates require at least 1.82 per Cargo.toml rust-version) - check with rustc --version
- Git - For version control
- Cargo - Included with Rust
- Text editor or IDE - Your choice
Development Setup
- Fork the repository you want to contribute to (e.g., blvm-consensus, blvm-protocol, blvm-node)
- Clone your fork: git clone https://github.com/YOUR_USERNAME/blvm-consensus.git && cd blvm-consensus
- Add upstream remote: git remote add upstream https://github.com/BTCDecoded/blvm-consensus.git
- Build the project: cargo build
- Run tests: cargo test
Contribution Workflow
1. Create a Feature Branch
Always create a new branch from main:
git checkout main
git pull upstream main
git checkout -b feature/your-feature-name
Branch naming conventions:
- feature/ - New features
- fix/ - Bug fixes
- docs/ - Documentation changes
- refactor/ - Code refactoring
- test/ - Test additions
2. Make Your Changes
Follow these guidelines when making changes:
Code Style
- Follow Rust conventions - Use cargo fmt to format code
- Run clippy - Use cargo clippy -- -D warnings to check for improvements
- Write clear, self-documenting code - Code should be readable without excessive comments
Testing
- Write tests for all new functionality - See Testing Infrastructure for details
- Ensure existing tests continue to pass - Run cargo test before committing
- Add integration tests for complex features
- Aim for high test coverage - Consensus-critical code requires >95% coverage
Documentation
- Document all public APIs - Use Rust doc comments (///)
- Update README files when adding features
- Include code examples in documentation
- Follow Rust documentation conventions
3. Commit Your Changes
Use conventional commit format:
type(scope): description
[optional body]
[optional footer]
Commit types:
- feat - New feature
- fix - Bug fix
- docs - Documentation changes
- test - Test additions/changes
- refactor - Code refactoring
- perf - Performance improvements
- ci - CI/CD changes
- chore - Maintenance tasks
Examples:
feat(consensus): add OP_CHECKSIGVERIFY implementation
fix(node): resolve connection timeout issue
docs(readme): update installation instructions
test(block): add edge case tests for block validation
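A quick way to sanity-check the type(scope): description shape before committing is a small validator. This sketch is illustrative and may be looser than whatever CI actually enforces:

```rust
// Minimal conventional-commit subject check: `type: description` or
// `type(scope): description`, with the types listed above.
fn is_conventional(subject: &str) -> bool {
    const TYPES: [&str; 8] = ["feat", "fix", "docs", "test", "refactor", "perf", "ci", "chore"];
    // Split off "type[(scope)]" from the description.
    let Some((head, desc)) = subject.split_once(": ") else {
        return false;
    };
    if desc.is_empty() {
        return false;
    }
    // Strip an optional "(scope)" suffix from the type.
    let ty = head.split_once('(').map_or(head, |(t, rest)| {
        if rest.ends_with(')') { t } else { head }
    });
    TYPES.contains(&ty)
}

fn main() {
    println!("{}", is_conventional("feat(consensus): add OP_CHECKSIGVERIFY implementation"));
}
```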
4. Push and Create Pull Request
git push origin feature/your-feature-name
Then open a Pull Request on GitHub. See the PR Process for details on governance tiers, review periods, and maintainer signatures. Your PR should include:
- Clear title - Describes what the PR does
- Detailed description - Explains the changes and why
- Reference issues - Link to related issues if applicable
- Checklist - Mark items as you complete them (see PR Checklist below)
Repository-Specific Guidelines
blvm-consensus
Critical: This code implements Bitcoin consensus rules. Any changes must:
- Match mainnet consensus rules — no undocumented deviations in consensus code
- Not deviate from the Orange Paper specifications - Mathematical correctness required
- Handle all edge cases correctly - Consensus code must be bulletproof
- Maintain mathematical precision - No approximations
Additional requirements:
- Exact Version Pinning: All consensus-critical dependencies must be pinned to exact versions
- Pure Functions: All functions must remain side-effect-free
- Testing: All mathematical functions must be thoroughly tested (see Testing Infrastructure)
- Formal Verification: Consensus-critical changes may require Z3 proofs (via BLVM Specification Lock)
blvm-protocol
- Protocol Abstraction: Changes must maintain clean abstraction
- Variant Support: Ensure all Bitcoin variants continue to work
- Backward Compatibility: Avoid breaking changes to protocol interfaces
blvm-node
- Consensus Integrity: Never modify consensus rules (use blvm-consensus for that)
- Production Readiness: Consider production deployment implications
- Performance: Maintain reasonable performance characteristics
Pull Request Checklist
Before submitting your PR, ensure:
- All tests pass - Run cargo test locally
- Code is formatted - Run cargo fmt
- No clippy warnings - Run cargo clippy -- -D warnings
- Documentation is updated - Public APIs documented, README updated if needed
- Commit messages follow conventions - Use conventional commit format
- Changes are focused and atomic - One logical change per PR
- Repository-specific guidelines followed - See section above
Review Process
Canonical review docs (maintainer expectations vs AI “review intelligence”) live in the governance repository; this book summarizes PR mechanics only. See Review standards in the Governance section for links.
What Happens After You Submit a PR
- Automated CI runs - Tests, linting, and checks run automatically
- Governance tier classification - Your PR is automatically classified into a governance tier
- Maintainers review - Code review by project maintainers
- Signatures required - Maintainers must cryptographically sign approval (see PR Process)
- Review period - Tier-specific review period must elapse (see PR Process for details)
- Merge - Once all requirements are met, your PR is merged
Review Criteria
Reviewers will check:
- Correctness - Does the code work as intended?
- Consensus compliance — Does it match the Orange Paper and observed mainnet behavior? (for consensus code)
- Test coverage - Are all cases covered?
- Performance - No regressions?
- Documentation - Is it clear and complete?
- Security - Any potential vulnerabilities?
Getting Your PR Reviewed
- Be patient - Review periods vary by tier (7-180 days)
- Respond to feedback - Address review comments promptly
- Keep PRs small - Smaller PRs are reviewed faster
- Update PR description - Keep it current as you make changes
Governance Tiers
Your PR will be automatically classified into a governance tier based on the changes. See PR Process for detailed information about:
- Tier 1: Routine Maintenance - Bug fixes, documentation, performance optimizations (7 day review, see Layer-Tier Model)
- Tier 2: Feature Changes - New RPC methods, P2P changes, wallet features (30 day review)
- Tier 3: Consensus-Adjacent - Changes affecting consensus validation code (90 day review)
- Tier 4: Emergency Actions - Critical security patches (0 day review)
- Tier 5: Governance Changes - Changes to governance rules (180 day review)
Testing Your Changes
See Testing Infrastructure for testing documentation. Key points:
- Unit tests - Test individual functions
- Integration tests - Test cross-module functionality
- Property-based testing - Test with generated inputs
- Fuzzing - Find edge cases automatically
- Differential testing — Cross-check against a reference full node when applicable
CI/CD Workflows
When you push code or open a PR, automated workflows run:
- Tests - All test suites run
- Linting - Code style and quality checks
- Coverage - Test coverage analysis
- Build verification - Ensures code compiles
See CI/CD Workflows for detailed information about what runs and how to debug failures.
Getting Help
- Discussions - Use GitHub Discussions for questions
- Issues - Use GitHub Issues for bugs and feature requests
- Security - Use private channels for security issues (see SECURITY.md in each repo)
Recognition
Contributors will be recognized in:
- Repository CONTRIBUTORS.md files
- Release notes for significant contributions
- Organization acknowledgments
Questions?
If you have questions about contributing:
- Check existing discussions and issues
- Open a new discussion
- Contact maintainers privately for sensitive matters
Thank you for contributing to BLVM!
Contributing to Documentation
For documentation-specific contributions (improving docs, fixing typos, adding examples), see Contributing to Documentation in the Appendices section. This guide covers:
- Documentation standards and style guidelines
- Where to contribute (source repos vs. unified docs)
- Documentation workflow
- Local testing of documentation changes
Note: Code contributions (this page) and documentation contributions (linked above) follow different workflows but both are welcome!
See Also
- PR Process - Pull request review process and governance tiers
- CI/CD Workflows - What happens when you push code
- Testing Infrastructure - Testing guides
- Release Process - How releases are created
- Contributing to Documentation - Documentation contribution guide
CI/CD Workflows
This document explains what happens when you push code or open a Pull Request, how to interpret CI results, and how to debug failures.
Overview
BLVM uses GitHub Actions for continuous integration and deployment. All workflows run on self-hosted Linux x64 runners to ensure security and deterministic builds.
What Happens When You Push Code
On Push to Any Branch
When you push code to any branch, the following workflows may trigger:
- CI Workflow - Runs tests, linting, and build verification
- Coverage Workflow - Calculates test coverage
- Security Workflow - Runs security checks (if configured)
On Push to Main Branch
In addition to the above, pushing to main triggers:
- Release Workflow - Automatically creates a new release (see Release Process)
- Version Bumping - Auto-increments patch version
- Cargo Publishing - Publishes dependencies to crates.io
- Git Tagging - Tags all repositories with the new version
Repository-Specific CI Workflows
blvm-consensus
Workflows:
- ci.yml - Runs test suite, linting, and build verification
- coverage.yml - Calculates test coverage
What Runs:
- Unit tests
- Integration tests
- Property-based tests
- BLVM Specification Lock formal verification (optional, can be enabled)
- Code formatting check (cargo fmt --check)
- Linting check (cargo clippy)
blvm-protocol
Workflows:
- ci.yml - Runs test suite and build verification
- coverage.yml - Calculates test coverage
What Runs:
- Unit tests
- Integration tests
- Protocol compatibility tests
- Build verification
blvm-node
Workflows:
- ci.yml - Runs test suite and build verification
- coverage.yml - Calculates test coverage
What Runs:
- Unit tests
- Integration tests
- Node functionality tests
- Network protocol tests
- Build verification
blvm (Main Repository)
Workflows:
- ci.yml - Runs tests across all components
- coverage.yml - Aggregates coverage from all repos
- release.yml - Official release workflow
- prerelease.yml - Prerelease workflow
- nightly-prerelease.yml - Scheduled nightly builds
Reusable Workflows (blvm-commons)
The blvm-commons repository provides reusable workflows that other repositories call:
verify_consensus.yml
Purpose: Runs tests and optional BLVM Specification Lock verification for consensus code
Inputs:
- repo - Repository name
- ref - Git reference (branch/tag)
- blvm-spec-lock - Boolean to enable BLVM Specification Lock verification
What It Does:
- Checks out the repository
- Runs test suite
- Optionally runs BLVM Specification Lock formal verification
- Reports results
build_lib.yml
Purpose: Deterministic library build with artifact hashing
Inputs:
- repo - Repository name
- ref - Git reference
- package - Cargo package name
- features - Feature flags to enable
- verify_deterministic - Optional: rebuild and compare hashes
What It Does:
- Builds the library with cargo build --locked --release
- Hashes outputs to SHA256SUMS
- Optionally verifies deterministic builds (rebuild and compare)
build_docker.yml
Purpose: Builds Docker images
Inputs:
- repo - Repository name
- ref - Git reference
- tag - Docker image tag
- image_name - Docker image name
- push - Boolean to push to registry
What It Does:
- Builds Docker image
- Optionally pushes to registry
Workflow Dependencies and Ordering
Builds follow Cargo’s dependency graph (simplified view):
1. blvm-primitives — foundation types/crypto shared by consensus & protocol
↓
2. blvm-consensus — depends on primitives
↓
3. blvm-protocol — depends on consensus + primitives
↓
4. blvm-node — depends on protocol + consensus
↓
5. blvm (CLI) — depends on blvm-node
blvm-sdk — depends on blvm-protocol + blvm-consensus (and optionally blvm-node); not a separate “no-deps” lane
↓
blvm-commons — depends on blvm-sdk + blvm-protocol
Security Gates: Consensus verification (tests + optional BLVM Specification Lock) must pass before downstream builds proceed.
Self-Hosted Runners
All workflows run on self-hosted Linux x64 runners:
- Security: Code never leaves our infrastructure
- Performance: Faster builds, no rate limits
- Deterministic: Consistent build environment
- Labels: Optional labels (rust, docker, blvm-spec-lock) optimize job assignment (note: labels use lowercase for technical compatibility)
Runner Policy:
- All jobs run on [self-hosted, linux, x64] runners
- Workflows handle installation as fallback if labeled runners unavailable
- Repos should restrict Actions to self-hosted in settings
Deterministic Builds
All builds use deterministic build practices:
- Locked Dependencies: cargo build --locked ensures exact dependency versions
- Toolchain Pinning: Per-repo rust-toolchain.toml defines exact Rust version
- Artifact Hashing: All outputs hashed to SHA256SUMS
- Verification: Optional deterministic verification (rebuild and compare hashes)
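The rebuild-and-compare verification step boils down to diffing two SHA256SUMS listings (hash, two spaces, filename per line). A sketch of that comparison; the file names and hashes below are made up for illustration:

```rust
use std::collections::HashMap;

// Parse a SHA256SUMS-style listing into filename -> hash.
fn parse_sums(text: &str) -> HashMap<&str, &str> {
    text.lines()
        .filter_map(|l| l.split_once("  ").map(|(h, f)| (f, h)))
        .collect()
}

// Return the (sorted) files whose hash changed or disappeared between runs.
fn mismatches<'a>(first: &'a str, second: &'a str) -> Vec<&'a str> {
    let old = parse_sums(first);
    let new = parse_sums(second);
    let mut bad: Vec<&str> = old
        .iter()
        .filter(|&(file, hash)| new.get(file) != Some(hash))
        .map(|(file, _)| *file)
        .collect();
    bad.sort();
    bad
}

fn main() {
    let first = "aaaa  libblvm.rlib\nbbbb  blvm-node";
    let second = "aaaa  libblvm.rlib\ncccc  blvm-node";
    // A non-empty result means the build was not reproduced bit-for-bit.
    println!("non-deterministic artifacts: {:?}", mismatches(first, second));
}
```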
Interpreting CI Results
✅ Success
All checks pass:
- ✅ Tests pass
- ✅ Linting passes
- ✅ Build succeeds
- ✅ Coverage meets threshold
Action: Your PR is ready for review (subject to governance requirements).
❌ Test Failures
One or more tests fail:
- Check the test output in the workflow logs
- Look for error messages and stack traces
- Run tests locally to reproduce: cargo test
Common Causes:
- Logic errors in your code
- Test environment differences
- Flaky tests (timing issues)
❌ Linting Failures
Code style or quality issues:
- Formatting: Run cargo fmt locally
- Clippy warnings: Run cargo clippy -- -D warnings and fix issues
Action: Fix locally and push again.
❌ Build Failures
Code doesn’t compile:
- Check compiler errors in workflow logs
- Build locally: cargo build
- Check for missing dependencies or version conflicts
Action: Fix compilation errors and push again.
⚠️ Coverage Below Threshold
Test coverage is below the required threshold:
- Add more tests to cover untested code
- Check coverage report to see what’s missing
Action: Add tests to increase coverage.
Debugging CI Failures
1. Check Workflow Logs
Click on the failed check in your PR to see detailed logs:
- Expand failed job
- Look for error messages
- Check which step failed
2. Reproduce Locally
Run the same commands locally:
# Run tests
cargo test
# Check formatting
cargo fmt --check
# Run clippy
cargo clippy -- -D warnings
# Build
cargo build --release
3. Check for Environment Differences
CI runs in a clean environment:
- Dependencies are fresh
- No local configuration
- Specific Rust toolchain version
Solution: Use rust-toolchain.toml to pin Rust version.
4. Common Issues
Issue: Tests pass locally but fail in CI
- Cause: Timing issues, environment differences
- Solution: Make tests more robust, check for race conditions
Issue: Build works locally but fails in CI
- Cause: Dependency version mismatch
- Solution: Ensure Cargo.lock is committed and use the --locked flag
Issue: Coverage calculation fails
- Cause: Coverage tool issues
- Solution: Check coverage tool version, ensure tests run successfully
Workflow Status Checks
PRs require all status checks to pass before merging:
- Required Checks: Must pass (configured per repository)
- Optional Checks: Can fail but won’t block merge
- Status: Shown in PR checks section
Note: Even if all checks pass, PRs still require:
- Maintainer signatures (see PR Process)
- Review period to elapse
Best Practices
Before Pushing
- Run tests locally: cargo test
- Check formatting: cargo fmt
- Run clippy: cargo clippy -- -D warnings
- Build: cargo build --release
During Development
- Push frequently: Small commits are easier to debug
- Check CI early: Don’t wait until PR is “done”
- Fix issues immediately: Don’t let failures accumulate
When CI Fails
- Don’t panic: CI failures are normal during development
- Read the logs: Errors name the failing step or crate
- Reproduce locally: Fix the issue, then push again
- Ask for help: If stuck, ask in discussions or PR comments
Workflow Configuration
Workflows are configured in .github/workflows/ in each repository:
- Trigger conditions: When workflows run
- Job definitions: What each job does
- Runner requirements: Which runners to use
- Dependencies: Job ordering
Note: Workflows in blvm-commons are reusable and called by other repositories via workflow_call.
Workflow Optimization
Caching Strategies
For self-hosted runners, local caching can provide significant performance improvements:
Local Caching System
Using /tmp/runner-cache with rsync provides 10-100x faster cache operations than GitHub Actions cache:
- No API rate limits: Local filesystem access
- Faster restore: rsync is much faster than GitHub cache API
- Works offline: Once cached, no network needed
- Preserves symlinks: Better than GitHub cache for complex builds
Shared Setup Jobs
Use a single setup job that all other jobs depend on:
- Checkout dependencies once: Avoid redundant checkouts
- Generate cache keys once: Share keys via job outputs
- Parallel execution: Other jobs can run in parallel after setup
Cross-Repo Build Artifact Caching
Cache target/ directories for dependencies across workflow runs:
- Don’t rebuild dependencies: Cache blvm-consensus and blvm-protocol build artifacts
- Faster incremental builds: Only rebuild what changed
- Shared across repos: Same cache can be used by multiple repositories
Cache Key Strategy
Use deterministic cache keys based on:
- Cargo.lock hash (for dependency changes)
- Rust toolchain version (for toolchain changes)
- Combined key: ${DEPS_KEY}-${TOOLCHAIN}
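The ${DEPS_KEY}-${TOOLCHAIN} scheme can be sketched as follows. DefaultHasher stands in for the workflow's real content hash purely for illustration: it is not stable across Rust releases, so an actual pipeline would use sha256sum or similar over Cargo.lock.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Compose a cache key from the Cargo.lock contents and the toolchain version.
// Same inputs always yield the same key, so caches hit across runs.
fn cache_key(cargo_lock: &str, toolchain: &str) -> String {
    let mut h = DefaultHasher::new();
    cargo_lock.hash(&mut h);
    format!("{:016x}-{}", h.finish(), toolchain)
}

fn main() {
    // Changing either Cargo.lock or the toolchain invalidates the cache.
    println!("{}", cache_key("[[package]]\nname = \"serde\"", "1.83.0"));
}
```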
Disk Space Management
For long-running runners, implement cache cleanup:
- Automatic cleanup: Remove caches older than N days
- Keep recent caches: Maintain last N cache entries
- Emergency cleanup: Check disk space and clean if >80% full
Performance Improvements
With proper caching optimization:
- Dependency checkout: ~30s (once in setup job)
- Cache restore: ~5s per job (local cache vs ~20s for GitHub cache)
- Dependency build: ~30s (cached artifacts vs ~5min without cache)
- Total overhead: ~2min vs ~35min without optimization
Estimated speedup: ~17x faster for setup overhead
Additional Resources
- Testing Infrastructure - Comprehensive testing documentation
- Release Process - How releases are created
- PR Process - Pull request review process
Pull Request Process
This document explains the PR review process, governance tiers, signature requirements, and how to get your PR reviewed and merged.
For human maintainer expectations and the AI review intelligence operating document (alternative implementation vs Core fork, flags, Compact alignment), see Review standards.
Overview
BLVM uses a 5-tier constitutional governance model with cryptographic signatures to ensure secure, transparent, and accountable code changes. Every PR is automatically classified into a governance tier based on the scope and impact of the changes.
PR Lifecycle
1. Developer Opens PR
When you open a Pull Request:
- Automated CI runs - Tests, linting, and build verification
- Governance tier classification - PR is automatically classified (with temporary manual override available)
- Status checks appear - Shows what needs to happen for merge
2. Maintainers Review and Sign
Maintainers review your code and cryptographically sign approval:
- Review PR - Understand the change and its impact
- Generate signature - Use blvm-sign from blvm-sdk
- Post signature - Comment /governance-sign <signature> on PR
- Governance App verifies - Cryptographically verifies signature
- Status check updates - Shows signature count progress
3. Review Period Elapses
Each tier has a specific review period that must elapse:
- Tier 1: 7 days
- Tier 2: 30 days
- Tier 3: 90 days
- Tier 4: 0 days (immediate)
- Tier 5: 180 days
The review period starts when the PR is opened; all required signatures must also be collected before merge.
4. Requirements Met → Merge Enabled
Once all requirements are met:
- Required signatures collected
- Review period elapsed
- All CI checks pass
The PR can be merged.
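These gates can be summarized as a single predicate. The sketch below uses hypothetical struct and field names, not the Governance App's actual API:

```rust
// Hypothetical sketch of the merge gate: all three requirements must hold.
// Names are illustrative; the real checks live in the Governance App.
struct PrStatus {
    signatures_collected: u8,
    signatures_required: u8,
    review_days_elapsed: u32,
    review_days_required: u32,
    ci_checks_green: bool,
}

fn merge_enabled(pr: &PrStatus) -> bool {
    pr.signatures_collected >= pr.signatures_required
        && pr.review_days_elapsed >= pr.review_days_required
        && pr.ci_checks_green
}

fn main() {
    // Tier 1 PR: 3-of-5 signatures, 7-day review period
    let pr = PrStatus {
        signatures_collected: 3,
        signatures_required: 3,
        review_days_elapsed: 7,
        review_days_required: 7,
        ci_checks_green: true,
    };
    println!("merge enabled: {}", merge_enabled(&pr));
}
```

Note that all three conditions are independent: collecting signatures early does not shorten the review period, and green CI alone never enables merge.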
Governance Tiers
Tier 1: Routine Maintenance
Scope: Bug fixes, documentation, performance optimizations
Requirements:
- Signatures: 3-of-5 maintainers
- Review Period: 7 days
- Restriction: Non-consensus changes only
Examples:
- Fixing a typo in documentation
- Performance optimization in non-consensus code
- Bug fix in node networking code
- Code refactoring
Tier 2: Feature Changes
Scope: New RPC methods, P2P changes, wallet features
Requirements:
- Signatures: 4-of-5 maintainers
- Review Period: 30 days
- Requirement: Must include technical specification
Examples:
- Adding a new RPC method
- Implementing a new P2P protocol feature
- Adding wallet functionality
- New SDK features
Tier 3: Consensus-Adjacent
Scope: Changes affecting consensus validation code
Requirements:
- Signatures: 5-of-5 maintainers
- Review Period: 90 days
- Requirement: Formal verification (BLVM Specification Lock) required
Examples:
- Changes to consensus validation logic
- Modifications to block/transaction validation
- Updates to consensus-critical algorithms
Note: This tier requires the most scrutiny because changes can affect network consensus.
Tier 4: Emergency Actions
Scope: Critical security patches, network-threatening bugs
Requirements:
- Signatures: 4-of-5 maintainers
- Review Period: 0 days (immediate)
- Requirement: Post-mortem required
Sub-tiers:
- Critical Emergency: Network-threatening (7 day maximum duration)
- Urgent Security: Security issues (30 day maximum duration)
- Elevated Priority: Important fixes (90 day maximum duration)
Examples:
- Critical security vulnerability
- Network-threatening bug
- Consensus-breaking issue requiring immediate fix
Tier 5: Governance Changes
Scope: Changes to governance rules themselves
Requirements:
- Signatures: Special process (5-of-7 maintainers + 2-of-3 emergency keyholders)
- Review Period: 180 days
Examples:
- Changing signature requirements
- Modifying review periods
- Updating governance tier definitions
Layer + Tier Combination
The governance system combines two dimensions:
- Layers (Repository Architecture) - Which repository the change affects
- Tiers (Action Classification) - What type of change is being made
When both apply, the system uses the “most restrictive wins” rule:
| Example | Layer | Tier | Final Signatures | Final Review | Source |
|---|---|---|---|---|---|
| Bug fix in Protocol Engine | 3 | 1 | 4-of-5 | 90 days | Layer 3 |
| New feature in Developer SDK | 5 | 2 | 4-of-5 | 30 days | Tier 2 |
| Consensus change in Orange Paper | 1 | 3 | 6-of-7 | 180 days | Layer 1 |
| Emergency fix in Reference Node | 4 | 4 | 4-of-5 | 0 days | Tier 4 |
See Layer-Tier Model for the complete decision matrix.
Signature Requirements by Layer
In addition to tier requirements, layers have their own signature requirements:
- Layer 1-2 (Constitutional): 6-of-7 maintainers, 180 days (365 for consensus changes)
- Layer 3 (Implementation): 4-of-5 maintainers, 90 days
- Layer 4 (Application): 3-of-5 maintainers, 60 days
- Layer 5 (Extension): 2-of-3 maintainers, 14 days
The most restrictive requirement (layer or tier) applies.
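A simplified sketch of that rule follows; it takes the stricter of each field and does not model special processes such as Tier 5's emergency-keyholder requirement:

```rust
// Hypothetical sketch of "most restrictive wins": given a layer's and a
// tier's requirements, the final policy takes the stricter of each field.
#[derive(Debug, PartialEq)]
struct Requirements {
    signatures_required: u8, // e.g. the 4 in "4-of-5"
    signers_total: u8,       // e.g. the 5 in "4-of-5"
    review_days: u32,
}

fn most_restrictive(layer: Requirements, tier: Requirements) -> Requirements {
    // A threshold is more restrictive when it demands more signatures;
    // a review period is more restrictive when it is longer.
    let (sig, total) = if layer.signatures_required >= tier.signatures_required {
        (layer.signatures_required, layer.signers_total)
    } else {
        (tier.signatures_required, tier.signers_total)
    };
    Requirements {
        signatures_required: sig,
        signers_total: total,
        review_days: layer.review_days.max(tier.review_days),
    }
}

fn main() {
    // Bug fix (Tier 1: 3-of-5, 7 days) in a Layer 3 repo (4-of-5, 90 days)
    let layer3 = Requirements { signatures_required: 4, signers_total: 5, review_days: 90 };
    let tier1 = Requirements { signatures_required: 3, signers_total: 5, review_days: 7 };
    let final_req = most_restrictive(layer3, tier1);
    println!(
        "{}-of-{} signatures, {} days",
        final_req.signatures_required, final_req.signers_total, final_req.review_days
    );
}
```

Applied to the table's first row, this yields 4-of-5 signatures and a 90-day review, both sourced from Layer 3.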
Maintainer Signing Process
How Maintainers Sign
- Review PR: Understand the change and its impact
- Generate signature: Use `blvm-sign` from blvm-sdk:
  `blvm-sign --message "Approve PR #123" --key ~/.blvm/maintainer.key`
- Post signature: Comment on the PR:
  `/governance-sign <signature>`
- Governance App verifies: Cryptographically verifies signature
- Status check updates: Shows signature count progress
Signature Verification
The Governance App cryptographically verifies each signature:
- Uses secp256k1 ECDSA (Bitcoin-compatible)
- Verifies signature matches maintainer’s public key
- Ensures signature is for the correct PR
- Prevents signature reuse
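To see why binding a signature to one PR matters, here is an illustrative sketch: the signed message commits to the repository, PR number, and head commit, so a valid signature cannot be replayed on a different PR or after a force-push. `DefaultHasher` is a stand-in digest for illustration only; the actual scheme uses secp256k1 ECDSA as described above:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative only: the approval digest commits to (repo, PR, head commit),
// so a signature over it is useless anywhere else.
fn approval_digest(repo: &str, pr_number: u64, head_commit: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    (repo, pr_number, head_commit).hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let d1 = approval_digest("blvm-consensus", 123, "abc123");
    let d2 = approval_digest("blvm-consensus", 124, "abc123"); // different PR
    let d3 = approval_digest("blvm-consensus", 123, "abc123"); // same inputs
    assert_ne!(d1, d2); // a signature for PR #123 cannot be replayed on PR #124
    assert_eq!(d1, d3); // deterministic, so verification is repeatable
    println!("digests differ across PRs: {}", d1 != d2);
}
```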
Emergency Procedures
The system includes a three-tiered emergency response system:
Tier 1: Critical Emergency (Network-threatening)
- Review period: 0 days
- Signatures: 4-of-7 maintainers
- Activation: 5-of-7 emergency keyholders required
- Maximum duration: 7 days
Tier 2: Urgent Security Issue
- Review period: 7 days
- Signatures: 5-of-7 maintainers
- Maximum duration: 30 days
Tier 3: Elevated Priority
- Review period: 30 days
- Signatures: 6-of-7 maintainers
- Maximum duration: 90 days
How to Get Your PR Reviewed
1. Ensure PR is Ready
- All CI checks pass
- Code is well-documented
- Tests are included
- PR description is clear
2. Be Patient
Review periods vary by tier:
- Tier 1: 7 days minimum
- Tier 2: 30 days minimum
- Tier 3: 90 days minimum
- Tier 4: 0 days (immediate)
- Tier 5: 180 days minimum
3. Respond to Feedback
- Address review comments promptly
- Update PR as needed
- Keep PR description current
4. Keep PRs Small
- Smaller PRs are reviewed faster
- Easier to understand
- Less risk of issues
5. Communicate
- Update PR description if scope changes
- Respond to questions
- Ask for help if stuck
PR Status Indicators
Your PR will show status indicators:
- Signature progress: `3/5 signatures collected`
- Review period: `5 days remaining`
- CI status: All checks passing/failing
Common Questions
How do I know what tier my PR is?
The Governance App automatically classifies your PR. You’ll see the tier in the PR status checks.
Can I speed up the review process?
No. Review periods are fixed by tier to ensure adequate scrutiny. However, you can:
- Ensure your PR is ready (all checks pass)
- Respond to feedback quickly
- Keep PRs small and focused
What if I disagree with the tier classification?
Contact maintainers. There’s a temporary manual override available for tier classification.
Can I merge my own PR?
No. All PRs require maintainer signatures and review period to elapse, regardless of who opened it.
Additional Resources
- Contributing Guide - Complete developer workflow
- Governance Model - Detailed governance documentation
- Layer-Tier Model - Complete decision matrix
Release Process
This document explains how BLVM releases are created, what variants are available, version numbering, and how to verify releases.
Overview
BLVM uses an automated release pipeline that builds and releases the entire ecosystem when code is merged to main in any repository. The system uses Cargo’s dependency management to build repositories in the correct order.
Release Triggers
Automatic Release (Push to Main)
The release pipeline automatically triggers when:
- A commit is pushed to the `main` branch in any repository
- The commit changes code files (not just documentation)
- Paths ignored: `**.md`, `.github/**`, `docs/**`
What happens:
- Version is auto-incremented (patch version: X.Y.Z → X.Y.(Z+1))
- Dependencies are published to crates.io
- All repositories are built in dependency order
- Release artifacts are created
- GitHub release is created
- All repositories are tagged with the version
Manual Release (Workflow Dispatch)
You can manually trigger a release with:
- Custom version tag (e.g., `v0.2.0`)
- Platform selection (linux, windows, or both)
- Option to skip tagging (for testing)
When to use:
- Major or minor version bumps
- Coordinated releases
- Testing release process
Version Numbering
Automatic Version Bumping
When triggered by a push to main:
- Reads current version from `blvm/versions.toml` (from the `blvm-consensus` version)
- Auto-increments the patch version (X.Y.Z → X.Y.(Z+1))
- Generates a release set ID (e.g., `set-2025-0123`)
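The bump itself is simple string surgery. A minimal sketch, assuming a plain `vX.Y.Z` string (the real pipeline reads `blvm/versions.toml`, which is not parsed here):

```rust
// Hypothetical sketch of the automatic patch bump: X.Y.Z -> X.Y.(Z+1).
// Accepts an optional leading 'v'; returns None for malformed input.
fn bump_patch(version: &str) -> Option<String> {
    let mut parts = version.trim_start_matches('v').splitn(3, '.');
    let major: u64 = parts.next()?.parse().ok()?;
    let minor: u64 = parts.next()?.parse().ok()?;
    let patch: u64 = parts.next()?.parse().ok()?;
    Some(format!("v{}.{}.{}", major, minor, patch + 1))
}

fn main() {
    // v0.1.0 -> v0.1.1, matching the auto-increment described above
    println!("{}", bump_patch("v0.1.0").unwrap());
}
```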
Manual Version Override
When using workflow dispatch:
- Provide a specific version tag (e.g., `v0.2.0`)
- The pipeline uses your provided version instead of auto-incrementing
Semantic Versioning
BLVM uses Semantic Versioning:
- MAJOR (X.0.0): Breaking changes
- MINOR (0.X.0): New features, backward compatible
- PATCH (0.0.X): Bug fixes, backward compatible
Build Process
Dependency Order
Publishing and local builds follow each crate’s Cargo dependency graph (not a single linear list). In practice:
- Foundation: blvm-primitives is shared by blvm-consensus and blvm-protocol.
- Core node path: blvm-consensus → blvm-protocol → blvm-node → `blvm` CLI binary (the `blvm` crate depends on `blvm-node`).
- SDK / governance: blvm-sdk depends on blvm-protocol and blvm-consensus (and optionally blvm-node via features). blvm-commons depends on blvm-sdk and blvm-protocol.
So blvm-sdk is not a leaf with “no dependencies”; it sits beside the node stack and pulls protocol/consensus crates.
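As an illustration, the edges above can be ordered with a Kahn-style pass: emit a crate once everything it depends on has been emitted. The `publish_order` helper is hypothetical; the real order comes from each crate's Cargo.toml:

```rust
use std::collections::HashMap;

// Hypothetical topological ordering over the dependency edges listed above.
fn publish_order(deps: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    let mut emitted: Vec<String> = Vec::new();
    let mut names: Vec<&str> = deps.keys().copied().collect();
    names.sort(); // deterministic iteration order
    while emitted.len() < deps.len() {
        let before = emitted.len();
        for &name in &names {
            if emitted.iter().any(|e| e == name) {
                continue;
            }
            // Emit once every dependency has already been emitted.
            if deps[name].iter().all(|d| emitted.iter().any(|e| e == d)) {
                emitted.push(name.to_string());
            }
        }
        assert!(emitted.len() > before, "dependency cycle detected");
    }
    emitted
}

fn main() {
    let deps: HashMap<&str, Vec<&str>> = HashMap::from([
        ("blvm-primitives", vec![]),
        ("blvm-consensus", vec!["blvm-primitives"]),
        ("blvm-protocol", vec!["blvm-consensus", "blvm-primitives"]),
        ("blvm-node", vec!["blvm-protocol", "blvm-consensus"]),
        ("blvm-sdk", vec!["blvm-protocol", "blvm-consensus"]),
        ("blvm-commons", vec!["blvm-sdk", "blvm-protocol"]),
        ("blvm", vec!["blvm-node"]),
    ]);
    let order = publish_order(&deps);
    // blvm-primitives must come first; the blvm CLI crate can only come after blvm-node.
    println!("{}", order.join(" -> "));
}
```

Any order this produces is one valid publish sequence; real automation may batch independent crates (e.g. blvm-sdk beside blvm-node) in parallel.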
Build Variants
Each release includes two variants:
Base Variant
Purpose: Default stable build (standard optimizations, no experimental-only features)
Features:
- Core functionality
- Production-oriented optimizations
- Standard Bitcoin-compatible feature set for the variant
Use for: Production-style deployments after you apply your own security review, monitoring, and hardening—not a blanket “certified production” artifact.
Experimental Variant
Purpose: Full-featured with experimental features
Features: All base features plus:
- UTXO commitments
- Dandelion++ privacy relay
- BIP119 CheckTemplateVerify (CTV)
- Stratum V2 mining
- BIP158 compact block filters
- Signature operations counting
- Iroh transport support
Use for: Development, testing, advanced features
Platforms
Both variants are built for:
- Linux x86_64 (native)
- Windows x86_64 (cross-compiled with MinGW)
Release Artifacts
Binaries Included
Both variants include:
- `blvm` - Bitcoin reference node
- `blvm-keygen` - Key generation tool
- `blvm-sign` - Message signing tool
- `blvm-verify` - Signature verification tool
- `blvm-commons` - Governance application server (Linux only)
- `key-manager` - Key management utility
- `test-content-hash` - Content hash testing tool
- `test-content-hash-standalone` - Standalone content hash test
Archive Formats
Each platform/variant combination produces:
- `.tar.gz` archive (Linux/Unix)
- `.zip` archive (Windows/universal)
- `SHA256SUMS` file for verification
Release Notes
Automatically generated RELEASE_NOTES.md includes:
- Release date
- Component versions
- Build variant descriptions
- Installation instructions
- Verification instructions
Quality Assurance
Deterministic Build Verification
The pipeline verifies builds are reproducible by:
- Building once and saving binary hashes
- Cleaning and rebuilding
- Comparing hashes (must match exactly)
Note: Non-deterministic builds are warnings (not failures) but should be fixed for production.
Test Execution
All repositories run their test suites:
- Unit tests
- Integration tests
- Library and binary tests
- Excluded: Doctests (for build speed)
Test Requirements:
- All tests must pass
- 30-minute timeout per repository
- Single-threaded execution to avoid resource contention
Git Tagging
Automatic Tagging
When a release succeeds, the pipeline:
- Creates git tags in all repositories with the version tag
- Tags are annotated with release message
- Pushes tags to origin
Repositories Tagged:
- `blvm-consensus`
- `blvm-protocol`
- `blvm-node`
- `blvm`
- `blvm-sdk`
- `blvm-commons`
Tag Format
- Format: `vX.Y.Z` (e.g., `v0.1.0`)
- Semantic versioning
- Immutable once created
GitHub Release
Release Creation
The pipeline creates a GitHub release with:
- Tag: Version tag (e.g., `v0.1.0`)
- Title: `Bitcoin Commons v0.1.0`
- Body: Generated from `RELEASE_NOTES.md`
- Artifacts: All binary archives and checksums
- Type: Official release (not prerelease)
Release Location
Releases are created in the blvm repository as the primary release point for the ecosystem.
Cargo Publishing
Publishing Strategy
To avoid compiling all dependencies when building the final blvm binary, all library dependencies are published to crates.io as part of the release process.
Publishing Order (respect Cargo edges; automation can batch steps):
- blvm-primitives (shared foundation)
- blvm-consensus (depends on primitives)
- blvm-protocol (depends on consensus + primitives)
- blvm-node (depends on protocol + consensus)
- blvm-sdk (depends on protocol + consensus; optional blvm-node via features)—not independent of the consensus stack
- blvm-commons (depends on sdk + protocol)
- `blvm` binary crate (depends on blvm-node) when publishing the CLI
Publishing Process
The release pipeline automatically:
- Publishes dependencies in dependency order to crates.io
- Waits for publication to complete before building dependents
- Updates Cargo.toml in dependent repos to use published versions
- Builds final binary using published crates (no compilation of dependencies)
Benefits
- Faster builds: Final binary uses pre-built dependencies
- Better caching: Cargo can cache published crates
- Version control: Exact versions published and tracked
- Reproducibility: Same versions available to all users
- Distribution: Users can depend on published crates directly
Crate Names
Published crates use the same names as the repositories:
- `blvm-consensus` → `blvm-consensus`
- `blvm-protocol` → `blvm-protocol`
- `blvm-node` → `blvm-node`
- `blvm-sdk` → `blvm-sdk`
Version Coordination
versions.toml
The blvm/versions.toml file tracks:
- Current version of each repository
- Dependency requirements
- Release set ID
Updating Versions
For major/minor version bumps:
- Manually edit `versions.toml`
- Update version numbers
- Trigger release with workflow dispatch
- Provide the new version tag
For patch releases:
- Automatic via push to main
- Patch version auto-increments
Release Verification
Verifying Release Artifacts
- Download artifacts from GitHub release
- Download SHA256SUMS file
- Verify checksums: `sha256sum -c SHA256SUMS`
- Verify signatures (if GPG signing is enabled)
Verifying Deterministic Builds
For deterministic build verification:
- Check release notes for deterministic build status
- Compare hashes from multiple builds (if available)
- Rebuild from source and compare hashes
Getting Notified of Releases
GitHub Notifications
- Watch repository: Get notified of all releases
- Release notifications: GitHub will notify you of new releases
Release Announcements
Announce releases through:
- GitHub release notes
- Project website
- Community channels (if configured)
Best Practices
When to Release
- Automatic: After merging PRs to main (recommended)
- Manual: For major/minor version bumps
- Skip: For documentation-only changes (auto-ignored)
Version Strategy
- Patch: Bug fixes, minor improvements (auto-increment)
- Minor: New features, backward compatible (manual)
- Major: Breaking changes (manual)
Release Frequency
- Regular: After each merge to main (automatic)
- Scheduled: For coordinated releases (manual)
- Emergency: For critical fixes (manual with version override)
Troubleshooting
Build Failures
Common Issues:
- Missing dependencies: Check all repos are cloned
- Cargo config issues: Pipeline auto-fixes common problems
- Windows cross-compile: Verify MinGW is installed
Solutions:
- Check build logs in GitHub Actions
- Verify all repositories are accessible
- Ensure Rust toolchain is up to date
Test Failures
Common Issues:
- Flaky tests: Check for timing issues
- Resource contention: Tests run single-threaded
- Timeout: Tests have 30-minute limit
Solutions:
- Review test output in logs
- Check for CI-specific test issues
- Consider skipping problematic tests temporarily
Tagging Failures
Common Issues:
- Tag already exists: Pipeline skips gracefully
- Permission issues: Verify `REPO_ACCESS_TOKEN` has write access
Solutions:
- Check if tag exists before release
- Verify token permissions
- Use the `skip_tagging` option for testing
Additional Resources
- CI/CD Workflows - Detailed CI/CD documentation
- Contributing Guide - Developer workflow
- GitHub Releases - All releases
Testing Infrastructure
Overview
Bitcoin Commons uses BLVM Specification Lock, property-based testing, fuzzing, integration tests, runtime assertions, and MIRI. Proof scope: PROOF_LIMITATIONS.md.
Testing Strategy
Layered Verification
- Formal Verification: Z3 proofs via BLVM Specification Lock on spec-locked consensus code
- Property-Based Testing (Proptest): Randomized invariant checks
- Fuzzing (libFuzzer): Random input exploration
- Integration Tests: End-to-end scenarios
- Unit Tests: Per-function tests
- Runtime Assertions: Optional invariant checks (feature-gated)
- MIRI: Undefined-behavior detection on selected tests
Test Types
Unit Tests
Unit tests verify individual functions in isolation:
- Location: `tests/` directory, `#[test]` functions
- Examples: Transaction validation, block validation, script execution
Code: scripts/README.md (test data helpers); VERIFICATION.md (verification workflows)
Property-Based Tests
Property-based tests verify mathematical invariants:
- Location: `tests/consensus_property_tests.rs` and other property test files
- Coverage: Mathematical invariants
- Tool: Proptest
Code: consensus_property_tests.rs
Integration Tests
Integration tests verify end-to-end correctness:
- Location: `tests/integration/` directory
- Coverage: Multi-component scenarios
- Examples: BIP compliance, historical replay, mempool mining
Code: mod.rs
Fuzz tests
Coverage-guided fuzzing (libFuzzer / cargo-fuzz). Inventory: fuzz/Cargo.toml in each fuzz crate—see Fuzzing.
Formal Verification (spec-lock)
Formal verification uses blvm-spec-lock / BLVM Specification Lock in blvm-consensus:
- Location: `src/`, `tests/`
- Command: `cargo spec-lock verify` (tiered: strong / medium / slow)
- Inventory: VERIFICATION.md
- Tool: blvm-spec-lock
See also: UTXO Commitments
Code: formal-verification.md
Runtime Assertions
Runtime assertions catch violations during execution:
- Coverage: Critical paths with runtime assertions
- Production: Available via feature flag
MIRI Integration
MIRI detects undefined behavior:
- CI Integration: Automated undefined behavior detection
- Coverage: Property tests and critical unit tests
- Tool: MIRI interpreter
Coverage Statistics
Overall Coverage
| Verification Technique | Status |
|---|---|
| Formal Proofs (spec-lock) | ✅ Tiered Z3 proofs on spec-locked code |
| Property Tests | ✅ Broad invariant coverage |
| Runtime Assertions | ✅ Feature-gated on selected paths |
| Fuzz Targets | ✅ Critical validation surfaces |
| MIRI Integration | ✅ UB checks on selected tests |
| Mathematical Specs | ✅ Orange Paper + docs |
Coverage by Consensus Area
Economic rules, PoW, transactions, blocks, scripts, reorg, crypto, mempool, SegWit, and serialization are covered by unit, property, integration, and fuzz tests, with BLVM Specification Lock on critical spec-locked paths. Details: VERIFICATION.md.
Running Tests
Run All Tests
cd blvm-consensus
cargo test
Run Specific Test Type
# Unit tests
cargo test --lib
# Property tests
cargo test --test consensus_property_tests
# Integration tests
cargo test --test integration
# Fuzz (example; target name from fuzz/Cargo.toml)
cd fuzz && cargo +nightly fuzz run <target_name>
Run with MIRI
cargo +nightly miri test
Run blvm-spec-lock Proofs
cargo spec-lock verify
Code: formal-verification.md
Run Spec-Lock Verification
# Run spec-lock verification (requires cargo-spec-lock)
cargo spec-lock verify --crate-path .
Coverage Goals
Target Coverage
Ongoing expansion of spec-lock coverage, property tests, fuzz corpora, runtime assertions, and integration scenarios. Status: VERIFICATION.md, PROOF_LIMITATIONS.md.
Test Organization
Directory Structure
blvm-consensus/
├── src/ # Source; spec-lock on marked functions
├── tests/
│ ├── consensus_property_tests.rs # Main property tests
│ ├── integration/ # Integration tests
│ ├── unit/ # Unit tests
│ ├── fuzzing/ # Fuzzing helpers
│ └── verification/ # Verification tests
└── fuzz/
└── fuzz_targets/ # Fuzz targets
Code: tests/
Edge Case Coverage
Beyond Proof Bounds
Edge cases beyond blvm-spec-lock proof bounds are covered by:
- Property-Based Testing: Random inputs of various sizes
- Mainnet Block Tests: Real Bitcoin mainnet blocks
- Integration Tests: Realistic scenarios
- Fuzz Testing: Random generation
Code: PROOF_LIMITATIONS.md
Differential Testing
Cross-implementation checks
Differential tests compare blvm-consensus with an independent full node over RPC (see Differential testing).
- Location: `tests/integration/differential_tests.rs` (skeleton); full harnesses in blvm-bench
- Purpose: Catch consensus divergences empirically
Code: differential_tests.rs
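As a sketch of the idea, a differential test feeds identical inputs to two implementations and fails on the first disagreement. The two validators below are illustrative stand-ins (a toy value-range check), not the real RPC harness:

```rust
// Toy stand-ins for two independent implementations of the same rule.
// In the real harness, one side is blvm-consensus and the other is a
// reference full node queried over RPC.
fn impl_a_is_valid_value(v: i64) -> bool {
    (0..=21_000_000i64 * 100_000_000).contains(&v)
}

fn impl_b_is_valid_value(v: i64) -> bool {
    v >= 0 && v <= 2_100_000_000_000_000
}

fn differential_check(values: &[i64]) -> Result<(), i64> {
    for &v in values {
        if impl_a_is_valid_value(v) != impl_b_is_valid_value(v) {
            return Err(v); // divergence found: the input itself is the bug report
        }
    }
    Ok(())
}

fn main() {
    // Probe around the boundaries where implementations most often disagree.
    let samples = [-1, 0, 1, 2_100_000_000_000_000, 2_100_000_000_000_001];
    match differential_check(&samples) {
        Ok(()) => println!("no divergence on {} samples", samples.len()),
        Err(v) => println!("implementations disagree on value {}", v),
    }
}
```

The interesting inputs are boundary values and randomly generated cases; a single divergent input is a complete, reproducible consensus bug report.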
CI Integration
Automated Testing
All tests run in CI:
- Unit Tests: Required for merge
- Property Tests: Required for merge
- Integration Tests: Required for merge
- Fuzz Tests: Run on schedule
- blvm-spec-lock Proofs: Run separately, not blocking
- MIRI: Run on property tests and critical unit tests
Code: formal-verification.md
Test Metrics
- Property Test Functions: Multiple functions across all files
- Runtime Assertions: Multiple assertions (`assert!` and `debug_assert!`)
- Fuzz Targets: Multiple fuzz targets
Code: VERIFICATION.md
Components
The testing infrastructure includes:
- Unit tests for all public functions
- Property-based tests for mathematical invariants
- Integration tests for end-to-end scenarios
- Fuzz tests for edge case discovery
- blvm-spec-lock proofs for formal verification
- Runtime assertions for execution-time checks
- MIRI integration for undefined behavior detection
- Differential tests (see differential-testing.md)
Location: blvm-consensus/tests/, blvm-consensus/fuzz/, blvm-consensus/src/
See Also
- Property-Based Testing - Verify mathematical invariants
- Fuzzing Infrastructure - Automated bug discovery
- Differential Testing - Cross-check vs reference RPC
- Benchmarking - Performance measurement
- Snapshot Testing - Output consistency verification
- Formal Verification - blvm-spec-lock model checking
- UTXO Commitments - Spec-lock verification for UTXO operations
- Contributing - Testing requirements for contributions
Fuzzing
Coverage-guided fuzzing uses libFuzzer via cargo-fuzz on unstructured byte inputs. It complements spec-lock, unit tests, and property tests; it does not replace them.
Source of truth
Harness names and crate wiring live in each repo’s fuzz/Cargo.toml ([[bin]] entries). Implementation sources are under fuzz/fuzz_targets/. Do not treat prose (here or in READMEs) as an inventory—it goes stale.
| Crate | Location |
|---|---|
| blvm-consensus | blvm-consensus/fuzz |
| blvm-node | blvm-node/fuzz |
Quick start (consensus)
cd blvm-consensus/fuzz
./init_corpus.sh # optional: seed corpora
cargo +nightly fuzz run <target_name>
Pick <target_name> from fuzz/Cargo.toml. The fuzz/ directory also contains scripts (e.g. campaign runners, corpus helpers, sanitizer build helpers)—use what matches your workflow.
CI
Fuzz jobs are defined in the relevant repository’s GitHub Actions. Matrix steps and timeouts may not exercise every harness on every run; read the workflow for actual behavior.
See also
- Testing — how techniques fit together
- Differential testing — cross-implementation checks
- Formal verification — spec-lock scope
Property-Based Testing
Overview
Bitcoin Commons uses property-based testing with Proptest to verify mathematical invariants across thousands of random inputs. The system includes property tests in the main test file and property test functions across all test files.
Property Test Categories
Economic Rules
- `prop_block_subsidy_halving_schedule` - Verifies subsidy halves every 210,000 blocks
- `prop_total_supply_monotonic_bounded` - Verifies supply increases monotonically and is bounded
- `prop_block_subsidy_non_negative_decreasing` - Verifies subsidy is non-negative and decreasing
Proof of Work
- `prop_pow_target_expansion_valid_range` - Verifies target expansion within valid range
- `prop_target_expansion_deterministic` - Verifies target expansion is deterministic
Transaction Validation
- `prop_transaction_output_value_bounded` - Verifies output values are bounded
- `prop_transaction_non_empty_inputs_outputs` - Verifies transactions have inputs and outputs
- `prop_transaction_size_bounded` - Verifies transaction size is bounded
- `prop_coinbase_script_sig_length` - Verifies coinbase script sig length limits
- `prop_transaction_validation_deterministic` - Verifies validation is deterministic
Script Execution
- `prop_script_execution_deterministic` - Verifies script execution is deterministic
- `prop_script_size_bounded` - Verifies script size is bounded
- `prop_script_execution_performance_bounded` - Verifies script execution performance
Performance
- `prop_sha256_performance_bounded` - Verifies SHA256 performance
- `prop_double_sha256_performance_bounded` - Verifies double SHA256 performance
- `prop_transaction_validation_performance_bounded` - Verifies transaction validation performance
- `prop_script_execution_performance_bounded` - Verifies script execution performance
- `prop_block_subsidy_constant_time` - Verifies block subsidy calculation is constant-time
- `prop_target_expansion_performance_bounded` - Verifies target expansion performance
Deterministic Execution
- `prop_transaction_validation_deterministic` - Verifies transaction validation determinism
- `prop_block_subsidy_deterministic` - Verifies block subsidy determinism
- `prop_total_supply_deterministic` - Verifies total supply determinism
- `prop_target_expansion_deterministic` - Verifies target expansion determinism
- `prop_fee_calculation_deterministic` - Verifies fee calculation determinism
Integer Overflow Safety
- `prop_fee_calculation_overflow_safety` - Verifies fee calculation overflow safety
- `prop_output_value_overflow_safety` - Verifies output value overflow safety
- `prop_total_supply_overflow_safety` - Verifies total supply overflow safety
Temporal/State Transition
- `prop_supply_never_decreases_across_blocks` - Verifies supply never decreases
- `prop_reorganization_preserves_supply` - Verifies reorganization preserves supply
- `prop_supply_matches_expected_across_blocks` - Verifies supply matches expected values
Compositional Verification
- `prop_connect_block_composition` - Verifies block connection composition
- `prop_disconnect_connect_idempotency` - Verifies disconnect/connect idempotency
SHA256 Correctness
- `sha256_matches_reference` - Verifies SHA256 matches reference implementation
- `double_sha256_matches_reference` - Verifies double SHA256 matches reference
- `sha256_idempotent` - Verifies SHA256 idempotency
- `sha256_deterministic` - Verifies SHA256 determinism
- `sha256_output_length` - Verifies SHA256 output length
- `double_sha256_output_length` - Verifies double SHA256 output length
Location: blvm-consensus/tests/consensus_property_tests.rs
Proptest Integration
Basic Usage
#![allow(unused)]
fn main() {
use proptest::prelude::*;
proptest! {
#[test]
fn prop_function_invariant(input in strategy) {
let result = function_under_test(input);
prop_assert!(result.property_holds());
}
}
}
Strategy Generation
Proptest generates random inputs using strategies:
#![allow(unused)]
fn main() {
// Integer strategy
let height_strategy = 0u64..10_000_000;
// Vector strategy
let tx_strategy = prop::collection::vec(tx_strategy, 1..1000);
// Custom strategy
let block_strategy = (height_strategy, tx_strategy).prop_map(|(h, txs)| {
Block::new(h, txs)
});
}
Property Assertions
#![allow(unused)]
fn main() {
// Basic assertion
prop_assert!(condition);
// Assertion with message
prop_assert!(condition, "Property failed: {}", reason);
// Assertion with equality
prop_assert_eq!(actual, expected);
}
Property Test Patterns
Invariant Testing
Test that invariants hold across all inputs:
#![allow(unused)]
fn main() {
proptest! {
#[test]
fn prop_subsidy_non_negative(height in 0u64..10_000_000) {
let subsidy = get_block_subsidy(height);
prop_assert!(subsidy >= 0);
}
}
}
Round-Trip Properties
Test that operations are reversible:
#![allow(unused)]
fn main() {
proptest! {
#[test]
fn prop_serialization_round_trip(tx in tx_strategy()) {
let serialized = serialize(&tx);
let deserialized = deserialize(&serialized)?;
prop_assert_eq!(tx, deserialized);
}
}
}
Determinism Properties
Test that functions are deterministic:
#![allow(unused)]
fn main() {
proptest! {
#[test]
fn prop_deterministic(input in input_strategy()) {
let result1 = function(input.clone());
let result2 = function(input);
prop_assert_eq!(result1, result2);
}
}
}
Bounds Properties
Test that values stay within bounds:
#![allow(unused)]
fn main() {
proptest! {
#[test]
fn prop_value_bounded(value in 0i64..MAX_MONEY) {
let result = process_value(value);
prop_assert!(result >= 0 && result <= MAX_MONEY);
}
}
}
Additional Property Tests
Comprehensive Property Tests
Location: tests/unit/comprehensive_property_tests.rs
- Multiple proptest! blocks covering comprehensive scenarios
Script Opcode Property Tests
Location: tests/unit/script_opcode_property_tests.rs
- Multiple proptest! blocks for script opcode testing
SegWit/Taproot Property Tests
Location: tests/unit/segwit_taproot_property_tests.rs
- Multiple proptest! blocks for SegWit and Taproot
Edge Case Property Tests
Multiple files with edge case testing:
- `tests/unit/block_edge_cases.rs`: Multiple proptest! blocks
- `tests/unit/economic_edge_cases.rs`: Multiple proptest! blocks
- `tests/unit/reorganization_edge_cases.rs`: Multiple proptest! blocks
- `tests/unit/transaction_edge_cases.rs`: Multiple proptest! blocks
- `tests/unit/utxo_edge_cases.rs`: Multiple proptest! blocks
- `tests/unit/difficulty_edge_cases.rs`: Multiple proptest! blocks
- `tests/unit/mempool_edge_cases.rs`: Multiple proptest! blocks
Cross-BIP Property Tests
Location: tests/cross_bip_property_tests.rs
- Multiple proptest! blocks for cross-BIP validation
Statistics
- Property Test Blocks: Multiple proptest! blocks across all test files
- Property Test Functions: Multiple prop_* functions across all test files
Running Property Tests
Run All Property Tests
cargo test --test consensus_property_tests
Run Specific Property Test
cargo test --test consensus_property_tests prop_block_subsidy_halving_schedule
Run with Verbose Output
cargo test --test consensus_property_tests -- --nocapture
Run with MIRI
cargo +nightly miri test --test consensus_property_tests
Shrinking
Proptest automatically shrinks failing inputs to minimal examples:
- Initial Failure: Large random input fails
- Shrinking: Proptest reduces input size
- Minimal Example: Smallest input that still fails
- Debugging: Minimal example is easier to debug
Configuration
Test Cases
Default: 256 test cases per property test
#![allow(unused)]
fn main() {
proptest! {
#![proptest_config(ProptestConfig::with_cases(1000))]
#[test]
fn prop_test(input in strategy) {
// ...
}
}
}
Max Shrink Iterations
Default: 65536 shrink iterations
#![allow(unused)]
fn main() {
proptest! {
#![proptest_config(ProptestConfig {
max_shrink_iters: 10000,
..ProptestConfig::default()
})]
#[test]
fn prop_test(input in strategy) {
// ...
}
}
}
Integration with Formal Verification
Property tests complement BLVM Specification Lock (Z3 proofs on spec-locked code):
- Spec-lock: Formal proofs tied to Orange Paper contracts
- Proptest: Randomized invariant sampling over strategies
- Together: Complementary layers; see PROOF_LIMITATIONS.md
Components
The property-based testing system includes:
- Property tests in main test file
- Property test blocks across all files
- Property test functions
- Proptest integration
- Strategy generation
- Automatic shrinking
- MIRI integration
Location: blvm-consensus/tests/consensus_property_tests.rs, blvm-consensus/tests/unit/
See Also
- Testing Infrastructure - Overview of all testing techniques
- Fuzzing Infrastructure - Automated bug discovery
- Differential Testing - Cross-check vs reference RPC
- Formal Verification - BLVM Specification Lock verification
- Contributing - Testing requirements for contributions
Benchmarking Infrastructure
Overview
Bitcoin Commons maintains a comprehensive benchmarking infrastructure to measure and track performance across all components. Benchmarks are automatically generated and published at benchmarks.thebitcoincommons.org.
Benchmark Infrastructure
blvm-bench Repository
The benchmarking infrastructure is maintained in a separate repository (blvm-bench) that:
- Runs performance benchmarks across all BLVM components
- Executes benchmarks in parallel for efficient testing
- Provides differential testing infrastructure (optional cross-check vs a reference full node)
- Includes FIBRE protocol performance benchmarks
- Generates benchmark reports and visualizations
- Publishes results to
benchmarks.thebitcoincommons.org - Tracks performance over time
- Optional A/B comparisons when a second implementation is available in your bench setup
Automated Benchmark Generation
Benchmarks are generated automatically via GitHub Actions workflows:
- Scheduled Runs: Regular benchmark runs on schedule
- PR Triggers: Benchmarks run on pull requests
- Release Triggers: Comprehensive benchmarks before releases
- Results Publishing: Automatic publishing to benchmark website
Published Benchmarks
Benchmark Website
All benchmark results are published at:
- URL: benchmarks.thebitcoincommons.org
- Content: Performance metrics, comparisons, historical trends
- Format: Interactive charts and detailed reports
Benchmark Categories
Benchmarks cover:
- Consensus Performance
  - Block validation speed
  - Transaction validation speed
  - Script execution performance
  - UTXO operations
- Network Performance
  - P2P message handling
  - Block propagation
  - Transaction relay
  - Network protocol overhead
- Storage Performance
  - Database operations
  - Index operations
  - Cache performance
  - Disk I/O
- Memory Performance
  - Memory usage
  - Allocation patterns
  - Cache efficiency
  - Memory leaks
Running Benchmarks Locally
Prerequisites
# Criterion is a dev-dependency declared in Cargo.toml; optionally install
# the cargo-criterion CLI for extended reporting
cargo install cargo-criterion
# Build benchmark dependencies
cargo build --release --benches
Run All Benchmarks
cd blvm-consensus
cargo bench
Run Specific Benchmark
# Run specific benchmark suite
cargo bench --bench block_validation
# Run specific benchmark
cargo bench --bench block_validation -- block_connect
Benchmark Configuration
Benchmarks can be configured via environment variables:
# Set benchmark iterations
export BENCH_ITERATIONS=1000
# Set benchmark warmup time
export BENCH_WARMUP_SECS=5
# Set benchmark measurement time
export BENCH_MEASUREMENT_SECS=10
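These variables are project conventions rather than anything Criterion reads on its own; a bench harness can wire them in along these lines (the `bench_config` helper and defaults here are hypothetical):

```rust
use std::time::Duration;

// Hypothetical helper: collect the env vars above into one settings struct.
struct BenchConfig {
    iterations: u64,
    warmup: Duration,
    measurement: Duration,
}

fn bench_config() -> BenchConfig {
    // Fall back to defaults when a variable is unset or unparsable.
    let get = |key: &str, default: u64| {
        std::env::var(key).ok().and_then(|v| v.parse().ok()).unwrap_or(default)
    };
    BenchConfig {
        iterations: get("BENCH_ITERATIONS", 100),
        warmup: Duration::from_secs(get("BENCH_WARMUP_SECS", 3)),
        measurement: Duration::from_secs(get("BENCH_MEASUREMENT_SECS", 10)),
    }
}
```

A Criterion bench could then feed `warmup` and `measurement` into `Criterion::default().warm_up_time(...)` and `.measurement_time(...)`.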
Benchmark Structure
Criterion Benchmarks
Benchmarks use the Criterion.rs framework:
```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_block_validation(c: &mut Criterion) {
    c.bench_function("block_connect", |b| {
        let block = create_test_block();
        b.iter(|| {
            black_box(validate_block(&block));
        });
    });
}

criterion_group!(benches, benchmark_block_validation);
criterion_main!(benches);
```
Benchmark Groups
Benchmarks are organized into groups:
- Block Validation: Block connection, header validation
- Transaction Validation: Transaction parsing, input validation
- Script Execution: Script VM performance, opcode execution
- Cryptographic: SHA256, double SHA256, signature verification
- UTXO Operations: UTXO set updates, lookups, batch operations
Interpreting Results
Performance Metrics
Benchmarks report:
- Throughput: Operations per second
- Latency: Time per operation
- Memory: Memory usage per operation
- CPU: CPU utilization
Comparisons
When configured, benchmark runs can be compared against a reference build or historical BLVM baselines:
- Relative performance: Speedup/slowdown vs baseline
- Regression detection: Catch performance cliffs across commits
Historical Trends
Benchmark results track performance over time:
- Performance Regression Detection: Identify performance regressions
- Optimization Validation: Verify optimization effectiveness
- Release Impact: Measure performance impact of releases
Benchmark Workflows
GitHub Actions
Benchmark workflows in blvm-bench:
- Scheduled Benchmarks: Daily/weekly benchmark runs
- PR Benchmarks: Benchmark on pull requests
- Release Benchmarks: Comprehensive benchmarks before releases
- Results Publishing: Automatic publishing to website
Benchmark Artifacts
Workflows generate:
- Benchmark Reports: Detailed performance reports
- Visualizations: Charts and graphs
- Comparison data: Baseline vs current (when enabled)
- Historical Data: Performance trends
Performance Targets
Consensus Performance
- Block Validation: Target <100ms per block (mainnet average)
- Transaction Validation: Target <1ms per transaction
- Script Execution: Target <10ms per script (average complexity)
Network Performance
- Block Propagation: Target <1s for block propagation
- Transaction Relay: Target <100ms for transaction relay
- P2P Overhead: Target <5% protocol overhead
Storage Performance
- Database Operations: Target <10ms for common queries
- Index Operations: Target <1ms for index lookups
- Cache Hit Rate: Target >90% cache hit rate
Benchmark Best Practices
Benchmark Design
- Isolate Components: Benchmark individual components
- Use Realistic Data: Use real-world data when possible
- Warm Up: Include warmup iterations
- Multiple Runs: Run benchmarks multiple times
- Statistical Analysis: Use statistical methods for accuracy
Benchmark Maintenance
- Regular Updates: Update benchmarks with code changes
- Performance Monitoring: Monitor for regressions
- Documentation: Document benchmark methodology
- Reproducibility: Ensure benchmarks are reproducible
Components
The benchmarking infrastructure includes:
- Criterion.rs benchmark framework
- Automated benchmark generation (GitHub Actions)
- Benchmark website (benchmarks.thebitcoincommons.org)
- Performance tracking and visualization
- Optional external baselines when wired in blvm-bench
- Historical performance trends
Location: blvm-bench repository, benchmark results at benchmarks.thebitcoincommons.org
See Also
- Testing Infrastructure - Overview of all testing techniques
- Node Performance - Performance optimizations
- CI/CD Workflows - CI integration
- Contributing - Performance requirements for contributions
Differential Testing
Overview
Differential testing compares blvm-consensus validation results against an independent full node over JSON-RPC, so disagreements surface as empirical failures. Primary tooling lives in blvm-bench (parallel runs, harnesses, optional reference-binary integration).
Purpose
- Cross-check local validation against an external node's `testmempoolaccept`, `submitblock`, etc.
- Catch divergences before mainnet exposure
- Exercise real blocks and scripted scenarios, not only unit tests
Implementation
Primary: blvm-bench — differential harnesses, regtest helpers, RPC client, historical/BIP-focused tests.
Skeleton in consensus: differential_tests.rs — placeholder; use blvm-bench for real runs.
Comparison flow
- Validate in blvm-consensus.
- Submit the same payload to a reference RPC (configure URL/credentials for your environment).
- Compare accept/reject and errors; treat any mismatch as a bug or intentional policy difference to document.
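The flow reduces to comparing two verdicts per payload; a minimal sketch of that comparison step (the types here are illustrative, not the blvm-bench API):

```rust
/// Outcome of validating one payload against one implementation.
#[derive(Debug, PartialEq)]
enum Verdict {
    Accept,
    Reject(String), // error code or message
}

/// Any disagreement is surfaced for triage: either a consensus bug
/// or an intentional policy difference to document.
fn compare(local: &Verdict, reference: &Verdict) -> Result<(), String> {
    if local == reference {
        Ok(())
    } else {
        Err(format!("divergence: local={:?} reference={:?}", local, reference))
    }
}
```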
Rust examples in these docs may use names like `CoreRpcConfig`; configure whatever full node you use for cross-validation—this documentation does not require a specific vendor.
Differential fuzzing (internal)
The differential_fuzzing cargo-fuzz target checks internal consistency (round-trips, idempotence). It does not call an external node. See Fuzzing and differential_fuzzing.rs.
Public JSON test vectors
Consensus tests reuse widely circulated transaction/script JSON vectors (same families many implementations use). Paths and provenance: TEST_DATA_SOURCES.md in blvm-consensus.
Mainnet and historical tests
Real mainnet blocks and era-specific scenarios are exercised in integration and bench tooling; see blvm-consensus tests/ and blvm-bench for entry points.
Running (bench)
cd blvm-bench
cargo test --features differential
# or targeted suites as provided by blvm-bench
Prerequisites: a reference full node binary and RPC reachable from the harness when RPC comparison is enabled; see blvm-bench README for env vars (e.g. path to the binary, RPC URL).
Results
Treat divergence as: investigate, fix consensus if BLVM is wrong, or document policy differences (mempool policy is not identical across implementations).
Snapshot Testing
Overview
Bitcoin Commons uses snapshot testing to verify that complex data structures and outputs don’t change unexpectedly. Snapshot tests capture the output of functions and compare them against stored snapshots, making it easy to detect unintended changes.
Purpose
Snapshot testing serves to:
- Detect Regressions: Catch unexpected changes in output
- Verify Complex Outputs: Test complex data structures without writing detailed assertions
- Document Behavior: Snapshots serve as documentation of expected behavior
- Review Changes: Interactive review of snapshot changes
Code: snapshot_tests.rs (insta snapshots; inline mod validation_snapshot_tests)
Architecture
Snapshot Testing Library
Bitcoin Commons uses insta for snapshot testing:
- Snapshot Storage: Snapshots stored in `.snap` files
- Version Control: Snapshots committed to git
- Interactive Review: Review changes before accepting
- Format Support: Text, JSON, YAML, and custom formats
Code: TESTING_SETUP.md
Usage
Creating Snapshots
```rust
use insta::assert_snapshot;

#[test]
fn test_example() {
    let result = compute_something();
    assert_snapshot!("snapshot_name", result);
}
```
Code: TESTING_SETUP.md
Snapshot Examples
Content Hash Snapshot
```rust
#[test]
fn test_content_hash_snapshot() {
    let validator = ContentHashValidator::new();
    let content = b"test content for snapshot";
    let hash = validator.compute_file_hash(content);
    assert_snapshot!("content_hash", hash);
}
```
Code: snapshot_tests.rs
Directory Hash Snapshot
```rust
#[test]
fn test_directory_hash_snapshot() {
    let validator = ContentHashValidator::new();
    let files = vec![
        ("file1.txt".to_string(), b"content1".to_vec()),
        ("file2.txt".to_string(), b"content2".to_vec()),
        ("file3.txt".to_string(), b"content3".to_vec()),
    ];
    let result = validator.compute_directory_hash(&files);
    assert_snapshot!("directory_hash", format!(
        "file_count: {}\ntotal_size: {}\nmerkle_root: {}",
        result.file_count,
        result.total_size,
        result.merkle_root
    ));
}
```
Code: snapshot_tests.rs
Version Format Snapshot
```rust
#[test]
fn test_version_format_snapshot() {
    let validator = VersionPinningValidator::default();
    let format = validator.generate_reference_format(
        "v1.2.3",
        "abc123def456",
        "sha256:fedcba9876543210"
    );
    assert_snapshot!("version_format", format);
}
```
Code: snapshot_tests.rs
Running Snapshot Tests
Run Tests
cargo test --test snapshot_tests
Or using Makefile:
make test-snapshot
Code: TESTING_SETUP.md
Updating Snapshots
Interactive Review
When snapshots change (expected changes):
cargo insta review
This opens an interactive review where you can:
- Accept changes
- Reject changes
- See diffs
Code: TESTING_SETUP.md
Update Command
make update-snapshots
Code: TESTING_SETUP.md
Snapshot Files
File Location
- Location: `tests/snapshots/`
- Format: `.snap` files
- Version Controlled: Yes
Code: TESTING_SETUP.md
File Structure
Snapshot files are organized by test module:
tests/snapshots/
├── validation_snapshot_tests/
│ ├── content_hash.snap
│ ├── directory_hash.snap
│ └── version_format.snap
└── github_snapshot_tests/
└── ...
Best Practices
1. Commit Snapshots
- Commit `.snap` files to version control
- Review snapshot changes in PRs
- Don’t ignore snapshot files
Code: TESTING_SETUP.md
2. Review Changes
- Always review snapshot changes before accepting
- Understand why snapshots changed
- Verify changes are expected
Code: TESTING_SETUP.md
3. Use Descriptive Names
- Use clear snapshot names
- Include context in snapshot names
- Group related snapshots
4. Test Complex Outputs
- Use snapshots for complex data structures
- Test formatted output
- Test serialized data
Troubleshooting
Snapshots Failing
If snapshots fail unexpectedly:
- Review changes: `cargo insta review`
- If changes are expected, accept them
- If changes are unexpected, investigate
Code: TESTING_SETUP.md
Snapshot Not Found
If snapshot file is missing:
- Run test to generate snapshot
- Review generated snapshot
- Accept if correct
CI Integration
GitHub Actions
Snapshot tests run in CI:
- On PRs: Run snapshot tests
- On Push: Run snapshot tests
- Fail on Mismatch: Tests fail if snapshots don’t match
Local CI Simulation
# Run snapshot tests (like CI)
make test-snapshot
Code: TESTING_SETUP.md
Configuration
Insta Configuration
Configuration file: .insta.yml
# Insta configuration
snapshot_path: tests/snapshots
Code: .insta.yml
Test Suites
Validation Snapshots
Tests for validation functions:
- Content hash computation
- Directory hash computation
- Version format generation
- Version parsing
Code: snapshot_tests.rs
GitHub Snapshots
Tests for GitHub integration:
- PR comment formatting
- Status check formatting
- Webhook processing
Code: snapshot_tests.rs
See Also
- Testing Infrastructure - Overview of all testing techniques
- Property-Based Testing - Verify invariants with random inputs
- Fuzzing Infrastructure - Automated bug discovery
- Contributing - Testing requirements for contributions
Benefits
- Easy Regression Detection: Catch unexpected changes easily
- Complex Output Testing: Test complex structures without detailed assertions
- Documentation: Snapshots document expected behavior
- Interactive Review: Review changes before accepting
- Version Control: Track changes over time
Components
The snapshot testing system includes:
- Insta snapshot testing library
- Snapshot test suites
- Snapshot file management
- Interactive review tools
- CI integration
Location: blvm-commons/tests/snapshot/, blvm-commons/docs/testing/TESTING_SETUP.md
Known Issues
This document tracks known technical issues in the codebase that require attention. These are validated issues confirmed through code inspection and static analysis.
Critical Issues
MutexGuard Held Across Await Points
Status: Known issue
Severity: Critical
Location: blvm-node/src/network/mod.rs and related files
Problem
Multiple instances where std::sync::Mutex guards are held across await points, causing deadlock risks. The async runtime may yield while holding a blocking mutex guard, and another task trying to acquire the same lock will block, potentially causing deadlock.
Code Pattern
```rust
// Problematic pattern
let mut peer_states = self.peer_states.lock().unwrap(); // std::sync::Mutex
// ... code that uses peer_states ...
if let Err(e) = self.send_to_peer(peer_addr, wire_msg).await { // AWAIT WITH LOCK HELD!
    // MutexGuard still held here - DEADLOCK RISK
}
```
Impact
- Deadlock risk: Holding a `std::sync::Mutex` guard across an `await` point can cause deadlocks
- The async runtime may yield, and another task trying to acquire the same lock will block
- If that task is on the same executor thread, deadlock occurs
Root Cause
- `peer_states` uses `Arc<Mutex<...>>` with `std::sync::Mutex` instead of `tokio::sync::Mutex`
- The guard is held while calling the async function `send_to_peer().await`
Recommended Fix
```rust
// Option 1: Drop guard before await
{
    let mut peer_states = self.peer_states.lock().unwrap();
    // ... use peer_states ...
} // Guard dropped here
if let Err(e) = self.send_to_peer(peer_addr, wire_msg).await {
    // ...
}

// Option 2: Use tokio::sync::Mutex (preferred for async code)
// Change field type to Arc<tokio::sync::Mutex<...>>
let mut peer_states = self.peer_states.lock().await;
```
Affected Locations
- `blvm-node/src/network/mod.rs`: Multiple locations
- `blvm-node/src/network/utxo_commitments_client.rs`: Lines 156, 165, 257, 349, 445
Mixed Mutex Types
Status: Known issue
Severity: Critical
Location: blvm-node/src/network/mod.rs
Problem
NetworkManager uses Arc<Mutex<...>> with std::sync::Mutex (blocking) in async contexts, causing deadlock risks. All Mutex fields in NetworkManager are std::sync::Mutex but used in async code.
Current State
```rust
pub struct NetworkManager {
    peer_manager: Arc<Mutex<PeerManager>>,  // std::sync::Mutex
    peer_states: Arc<Mutex<HashMap<...>>>,  // std::sync::Mutex
    // ... many more Mutex fields
}
```
Recommended Fix
- Audit all Mutex fields in NetworkManager
- Convert to tokio::sync::Mutex for async contexts
- Update all `.lock().unwrap()` to `.lock().await`
- Remove blocking locks from async functions
Affected Fields
- `peer_manager: Arc<Mutex<PeerManager>>`
- `peer_states: Arc<Mutex<HashMap<...>>>`
- `persistent_peers: Arc<Mutex<HashSet<...>>>`
- `ban_list: Arc<Mutex<HashMap<...>>>`
- `socket_to_transport: Arc<Mutex<HashMap<...>>>`
- `pending_requests: Arc<Mutex<HashMap<...>>>`
- `request_id_counter: Arc<Mutex<u32>>`
- `address_database: Arc<Mutex<...>>`
- And more…
Unwrap() on Mutex Locks
Status: Known issue
Severity: High
Location: Multiple files
Problem
Using .unwrap() on mutex locks can cause panics if the lock is poisoned (a thread panicked while holding the lock).
```rust
let mut db = self.address_database.lock().unwrap(); // Can panic!
let peer_states = network.peer_states.lock().unwrap(); // Can panic!
```
Impact
- If a thread panics while holding a Mutex, the lock becomes “poisoned”
- `.unwrap()` will panic, potentially crashing the entire node
- No graceful error handling
Recommended Fix
```rust
// Option 1: Handle poisoning gracefully
match self.address_database.lock() {
    Ok(guard) => { /* use guard */ }
    Err(poisoned) => {
        warn!("Mutex poisoned, recovering...");
        let guard = poisoned.into_inner();
        // use guard
    }
}

// Option 2: Use tokio::sync::Mutex (doesn't poison)
```
Affected Locations
- `blvm-node/src/network/mod.rs`: Multiple locations (19+ instances)
- `blvm-node/src/network/utxo_commitments_client.rs`: Lines 156, 165, 257, 349, 445
- `blvm-consensus/src/script/`: Multiple locations (script logic is in the `script/` directory)
Medium Priority Issues
Transport Abstraction Not Fully Integrated
Status: Known issue
Severity: Medium
Location: blvm-node/src/network/
Problem
Transport abstraction exists (Transport trait, TcpTransport, IrohTransport), but Peer struct still uses raw TcpStream directly in some places, not using the transport abstraction consistently.
Impact
- Code duplication
- Inconsistent error handling
- Harder to add new transports
Recommended Fix
- Audit all `Peer` creation sites
- Ensure all use `from_transport_connection`
- Remove direct `TcpStream` usage
Nested Locking Patterns
Status: Known issue
Severity: Medium
Location: blvm-node/src/network/utxo_commitments_client.rs
Problem
Nested locking where RwLock read guard is held while acquiring inner Mutex locks, which can cause deadlocks.
```rust
let network = network_manager.read().await; // RwLock read
// ...
network.socket_to_transport.lock().unwrap(); // Mutex lock inside
```
Recommended Fix
- Review locking strategy
- Consider flattening the lock hierarchy
- Or ensure consistent lock ordering
Testing Gaps
Missing Concurrency Tests
Status: Known gap
Severity: Low
Problem
- No tests for Mutex deadlock scenarios
- No tests for lock ordering
- No stress tests for concurrent access
Recommendation
- Add tests that spawn multiple tasks accessing shared Mutex
- Test lock ordering to prevent deadlocks
- Add timeout tests for lock acquisition
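A minimal std-threads sketch of such a stress test, with a deadline on lock acquisition standing in for a timeout test (the real code paths are async/tokio; this only illustrates the shape):

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

/// Spawn `threads` workers that each perform `increments` locked updates;
/// a bounded spin on try_lock() fails the test if acquisition stalls.
fn concurrent_counter(threads: usize, increments: usize) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..increments {
                    let deadline = Instant::now() + Duration::from_secs(5);
                    loop {
                        if let Ok(mut guard) = counter.try_lock() {
                            *guard += 1;
                            break;
                        }
                        assert!(Instant::now() < deadline, "lock acquisition timed out");
                        thread::yield_now();
                    }
                }
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}
```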
Priority Summary
Priority 1 (Critical - Fix Immediately)
- Fix MutexGuard held across await
- Convert all `std::sync::Mutex` to `tokio::sync::Mutex` in async contexts
- Replace `.unwrap()` on locks with proper error handling
Priority 2 (High - Fix Soon)
- Review and fix nested locking patterns
- Complete transport abstraction integration
Priority 3 (Medium - Fix When Possible)
- Add concurrency stress tests
Files Requiring Immediate Attention
- blvm-node/src/network/mod.rs - Multiple critical issues
- blvm-node/src/network/utxo_commitments_client.rs - MutexGuard across await
- blvm-consensus/src/script/ - Unwrap() on locks
See Also
- Contributing Guide - How to contribute fixes
- Testing Guide - Testing practices
- PR Process - Pull request workflow
Security Controls System
Overview
Bitcoin Commons implements a security controls system that automatically classifies pull requests based on affected security controls and determines required governance tiers. This embeds security controls directly into the governance system, making it self-enforcing.
Architecture
Security Control Mapping
Security controls are defined in a YAML configuration file that maps file patterns to security controls:
- File Patterns: Glob patterns matching code files
- Control Definitions: Security control metadata
- Priority Levels: P0 (Critical), P1 (High), P2 (Medium), P3 (Low)
- Categories: Control categories (consensus_integrity, cryptographic, etc.)
Code: security_controls.rs
Security Control Structure
security_controls:
- id: "A-001"
name: "Genesis Block Implementation"
category: "consensus_integrity"
priority: "P0"
description: "Proper genesis blocks"
files:
- "blvm-protocol/**/*.rs"
required_signatures: "7-of-7"
review_period_days: 180
requires_security_audit: true
requires_formal_verification: true
requires_cryptography_expert: false
Code: security_controls.rs
Priority Levels
P0 (Critical)
Highest priority security controls:
- Impact: Blocks production deployment and security audit
- Requirements: Security audit, formal verification, cryptographer approval
- Governance Tier: `security_critical`
- Examples: Genesis block implementation, cryptographic primitives
Code: security_controls.rs
P1 (High)
High priority security controls:
- Impact: Medium impact, may require cryptography expert
- Requirements: Security review, formal verification
- Governance Tier: `cryptographic` or `security_enhancement`
- Examples: Signature verification, key management
Code: security_controls.rs
P2 (Medium)
Medium priority security controls:
- Impact: Low impact
- Requirements: Security review by maintainer
- Governance Tier: `security_enhancement`
- Examples: Access control, rate limiting
Code: security_controls.rs
P3 (Low)
Low priority security controls:
- Impact: Minimal impact
- Requirements: Standard review
- Governance Tier: None (standard process)
- Examples: Logging, monitoring
Code: security_controls.rs
Control Categories
Consensus Integrity
Controls related to consensus-critical code:
- Max Priority: P0
- Examples: Block validation, transaction validation, UTXO management
- Requirements: Formal verification, security audit
Cryptographic
Controls related to cryptographic operations:
- Max Priority: P0
- Examples: Signature verification, key generation, hash functions
- Requirements: Cryptographer approval, side-channel analysis
Access Control
Controls related to authorization and access:
- Max Priority: P1
- Examples: Maintainer authorization, server authorization
- Requirements: Security review
Network Security
Controls related to network protocols:
- Max Priority: P1
- Examples: P2P message validation, relay security
- Requirements: Security review
Security Control Validator
Impact Analysis
The SecurityControlValidator analyzes security impact of changed files:
- File Matching: Matches changed files against control patterns
- Control Identification: Identifies affected security controls
- Priority Calculation: Determines highest priority affected
- Tier Determination: Determines required governance tier
- Requirement Collection: Collects additional requirements
Code: security_controls.rs
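Conceptually, impact analysis maps changed paths to controls and takes the highest affected priority; a simplified sketch with path prefixes standing in for the real glob patterns (the types and control entries here are illustrative, not the validator's actual API):

```rust
#[derive(Clone, Copy, Debug, PartialEq, PartialOrd)]
enum Priority { P3, P2, P1, P0 } // declared in ascending severity

struct Control {
    path_prefix: &'static str, // stand-in for the YAML glob pattern
    priority: Priority,
}

/// Highest-priority control touched by any changed file, if any.
fn highest_priority(changed: &[&str], controls: &[Control]) -> Option<Priority> {
    let mut best: Option<Priority> = None;
    for file in changed {
        for control in controls {
            if file.starts_with(control.path_prefix)
                && best.map_or(true, |b| control.priority > b)
            {
                best = Some(control.priority);
            }
        }
    }
    best
}
```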
Impact Levels
```rust
pub enum ImpactLevel {
    None,     // No controls affected
    Low,      // P2 controls
    Medium,   // P1 controls
    High,     // P0 controls
    Critical, // Multiple P0 controls
}
```
Code: security_controls.rs
Governance Tier Mapping
Impact levels map to governance tiers:
- Critical/High: `security_critical` tier
- Medium (crypto): `cryptographic` tier
- Medium (other): `security_enhancement` tier
- Low: `security_enhancement` tier
- None: Standard tier
Code: security_controls.rs
Placeholder Detection
Placeholder Patterns
The validator detects placeholder implementations in security-critical files:
- `PLACEHOLDER`
- `0x00 [PLACEHOLDER`
- `0x02 [PLACEHOLDER`
- `0x03 [PLACEHOLDER`
- `0x04 [PLACEHOLDER`
- `return None as a placeholder`
- `return vec![] as a placeholder`
- `This is a placeholder`

See Threat Models for comprehensive security documentation.
Code: security_controls.rs
Placeholder Violations
Placeholder violations block PRs affecting P0 controls:
- Detection: Automatic scanning of changed files
- Blocking: Blocks production deployment
- Reporting: Detailed violation reports
Code: security_controls.rs
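Detection is essentially substring matching over changed files; a simplified sketch (the pattern list is abbreviated and the function name is hypothetical):

```rust
/// Placeholder markers found in a file's contents.
fn placeholder_hits(contents: &str) -> Vec<&'static str> {
    const PATTERNS: &[&str] = &[
        "PLACEHOLDER",
        "return None as a placeholder",
        "return vec![] as a placeholder",
        "This is a placeholder",
    ];
    PATTERNS
        .iter()
        .copied()
        .filter(|pattern| contents.contains(*pattern))
        .collect()
}
```

A non-empty result for a file matched by a P0 control would then block the PR.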
Security Gate CLI
Status Check
Check security control status:
security-gate status
security-gate status --detailed
Code: security-gate.rs
PR Impact Analysis
Analyze security impact of a PR:
security-gate check-pr 123
security-gate check-pr 123 --format json
Code: security-gate.rs
Placeholder Check
Check for placeholder implementations:
security-gate check-placeholders
security-gate check-placeholders --fail-on-placeholder
Code: security-gate.rs
Production Readiness
Verify production readiness:
security-gate verify-production-readiness
security-gate verify-production-readiness --format json
Code: security-gate.rs
Integration with Governance
Automatic Classification
Security controls automatically classify PRs:
- File Analysis: Analyzes changed files
- Control Matching: Matches files to controls
- Tier Assignment: Assigns governance tier
- Requirement Collection: Collects requirements
Code: security_controls.rs
PR Comments
The validator generates PR comments with security impact:
- Impact Level: Visual indicator of impact
- Affected Controls: List of affected controls
- Required Tier: Governance tier required
- Additional Requirements: List of requirements
- Blocking Status: Production/audit blocking status
Code: security_controls.rs
Control Requirements
Security Critical Tier
Requirements for security_critical tier:
- All affected P0 controls must be certified
- No placeholder implementations in diff
- Formal verification proofs passing
- Security audit report attached to PR
- Cryptographer approval required
Code: security_controls.rs
Cryptographic Tier
Requirements for cryptographic tier:
- Cryptographer approval required
- Test vectors from standard specifications
- Side-channel analysis performed
- Formal verification proofs passing
Code: security_controls.rs
Security Enhancement Tier
Requirements for security_enhancement tier:
- Security review by maintainer
- Comprehensive test coverage
- No placeholder implementations
Code: security_controls.rs
Production Blocking
P0 Control Blocking
P0 controls block production deployment:
- Blocks Production: Cannot deploy to production
- Blocks Audit: Cannot proceed with security audit
- Requires Certification: Must be certified before merge
Code: security_controls.rs
Components
The security controls system includes:
- Security control mapping (YAML configuration)
- Security control validator (impact analysis)
- Placeholder detection
- Security gate CLI tool
- Governance tier integration
- PR comment generation
Location: blvm-commons/src/validation/security_controls.rs, blvm-commons/src/bin/security-gate.rs
See Also
- Threat Models - Security threat analysis
- Developer Security Checklist - Security checklist for developers
- Security Architecture Review Template - Architecture review process
- Security Testing Template - Security testing guidelines
- Contributing - Development workflow
- PR Process - Security review in PR process
Threat Models
Overview
Bitcoin Commons implements security boundaries and threat models to protect against various attack vectors. The system uses defense-in-depth principles with multiple layers of security.
Security Boundaries
Node Security Boundaries
What blvm-node Handles:
- Consensus validation (delegated to blvm-consensus)
- Network protocol (P2P message parsing, peer management)
- Storage layer (block storage, UTXO set, chain state)
- RPC interface (JSON-RPC 2.0 API)
- Module orchestration (loading, IPC, lifecycle management)
- Mempool management
- Mining coordination
What blvm-node NEVER Handles:
- Consensus rule validation (delegated to blvm-consensus)
- Protocol variant selection (delegated to blvm-protocol)
- Private key management (no wallet functionality)
- Cryptographic key generation (delegated to blvm-sdk or modules)
- Governance enforcement (delegated to blvm-commons)
Code: SECURITY.md
Module System Security Boundaries
Process Isolation:
- Modules run in separate processes with isolated memory
- Node consensus state is protected and read-only to modules
- Module crashes are isolated and do not affect the base node
Code: MODULE_SYSTEM.md
What Modules Cannot Do:
- Modify consensus rules
- Modify UTXO set
- Access node private keys
- Bypass security boundaries
- Affect other modules
Code: MODULE_SYSTEM.md
Threat Model: Pre-Production Testing
Environment
- Network: Trusted network only
- Timeline: Extended testing before production use
- Threats: Limited to development and testing scenarios
Threats NOT Applicable (Trusted Network)
- Eclipse attacks
- Sybil attacks
- Network partitioning attacks
- Malicious peer injection
Code: SECURITY.md
Threats That Apply
- Code vulnerabilities in consensus validation
- Memory corruption in parsing
- Integer overflow in calculations
- Resource exhaustion (DoS)
- Supply chain attacks on dependencies
Code: SECURITY.md
Threat Model: Mainnet Deployment
Environment
- Network: Public Bitcoin network
- Timeline: After security audit and hardening
- Threats: Full Bitcoin network threat model
Additional Threats for Mainnet
- Eclipse attacks - malicious peers isolate node
- Sybil attacks - fake peer identities
- Network partitioning - routing attacks
- Resource exhaustion - memory/CPU DoS
- Protocol manipulation - malformed messages
Code: SECURITY.md
Attack Vectors
Eclipse Attacks
Threat: Malicious peers isolate node from honest network
Mitigations:
- IP diversity tracking
- Limits connections from same IP range
- LAN peering security: 25% LAN peer cap, 75% internet peer minimum, checkpoint validation
- Geographic diversity requirements
- ASN diversity tracking
Code: SECURITY.md
Sybil Attacks
Threat: Attacker creates many fake peer identities
Mitigations:
- Connection rate limiting
- Per-IP connection limits
- Peer reputation tracking
- Ban list sharing
Code: SECURITY.md
Resource Exhaustion (DoS)
Threat: Attacker exhausts node resources (memory, CPU, network)
Mitigations:
- Connection rate limiting (token bucket)
- Message queue limits
- Auto-ban for abusive peers
- Resource monitoring
- Per-user RPC rate limiting
Code: SECURITY.md
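The token-bucket limiter mentioned above can be sketched as follows (parameters are illustrative; the node's actual limiter lives in blvm-node):

```rust
use std::time::Instant;

/// Minimal token bucket: `capacity` bounds the burst,
/// `refill_per_sec` sets the sustained rate.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }

    /// True if the connection attempt (or message) is allowed.
    fn allow(&mut self) -> bool {
        let now = Instant::now();
        let refill = now.duration_since(self.last).as_secs_f64() * self.refill_per_sec;
        self.tokens = (self.tokens + refill).min(self.capacity);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

A per-IP map of such buckets yields the per-IP connection limits listed above.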
Protocol Manipulation
Threat: Attacker sends malformed messages to exploit parsing bugs
Mitigations:
- Input validation and sanitization
- Fuzzing (overview)
- Formal verification
- Property-based testing
- Network protocol validation
Code: SECURITY.md
Memory Corruption
Threat: Buffer overflows, use-after-free, double-free
Mitigations:
- Rust memory safety
- MIRI integration (undefined behavior detection)
- Fuzzing with sanitizers (ASAN, UBSAN, MSAN)
- Runtime assertions
Code: PROOF_LIMITATIONS.md
Integer Overflow
Threat: Integer overflow in calculations causing consensus divergence
Mitigations:
- Checked arithmetic
- Formal verification (Z3 proofs via BLVM Specification Lock)
- Property-based testing
- Runtime assertions
Code: PROOF_LIMITATIONS.md
Supply Chain Attacks
Threat: Malicious dependencies compromise node
Mitigations:
- Dependency pinning (exact versions)
- Regular security audits (cargo audit)
- Minimal dependency set
- Trusted dependency sources
Code: SECURITY.md
Security Hardening
Pre-Production (Current)
- Fix signature verification with real transaction hashes
- Implement proper Bitcoin double SHA256 hashing
- Pin all dependencies to exact versions
- Add network protocol input validation
- Replace sled with redb (production-ready database)
- Add DoS protection mechanisms
- Add RPC authentication
- Implement rate limiting
- Add comprehensive fuzzing
- Add eclipse attack prevention
- Add storage bounds checking
Code: SECURITY.md
Production Readiness
- All pre-production items completed
- Professional security audit (external, requires security firm)
- Formal verification of critical paths
- Advanced peer management
Code: SECURITY.md
Module System Security
Process Isolation
Modules run in separate processes:
- Isolated Memory: Each module has separate memory space
- IPC Communication: Modules communicate only via IPC
- Crash Isolation: Module crashes don’t affect node
- Resource Limits: CPU, memory, and network limits enforced
Code: mod.rs
Sandboxing
Modules are sandboxed:
- File System: Restricted file system access
- Network: Network access controlled
- Process: Resource limits enforced
- Capabilities: Permission-based access control
Code: mod.rs
Permission System
Modules require explicit permissions:
- Capability Checks: Permission validator checks capabilities
- Tier Validation: Tier-based permission system
- Resource Limits: Enforced resource limits
- Request Validation: All requests validated
Code: MODULE_SYSTEM.md
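As a sketch, the deny-by-default capability check described above might look like the following. The `Capability` variants and the `ModulePermissions` type are illustrative assumptions; the actual validator lives in the module system code (see MODULE_SYSTEM.md).

```rust
// Hypothetical sketch of a capability check; types and variant names are
// assumptions, not the node's actual API.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Capability {
    ReadChain,
    WriteStorage,
    NetworkAccess,
}

struct ModulePermissions {
    granted: Vec<Capability>,
}

impl ModulePermissions {
    /// Deny by default: a request passes only if the capability was
    /// explicitly granted to the module.
    fn check(&self, requested: Capability) -> Result<(), String> {
        if self.granted.contains(&requested) {
            Ok(())
        } else {
            Err(format!("capability {:?} not granted", requested))
        }
    }
}
```

The key property is the default: an ungranted capability is rejected without any special-case logic.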
RPC Security
Authentication
RPC authentication implemented:
- Token-Based: Token-based authentication
- Certificate-Based: Certificate-based authentication
- Configurable: Authentication method configurable
Code: SECURITY.md
Rate Limiting
RPC rate limiting implemented:
- Per-User: Per-user rate limiting
- Token Bucket: Token bucket algorithm
- Configurable: Rate limits configurable
Code: SECURITY.md
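The token bucket algorithm mentioned above can be sketched in a few lines. This is a minimal illustration of the idea, not the node's actual rate limiter; the struct and parameter names are assumptions.

```rust
// Minimal token-bucket sketch: a bucket refills at a fixed rate and each
// request spends one token; an empty bucket means the caller is throttled.
struct TokenBucket {
    capacity: f64,    // maximum burst size
    tokens: f64,      // tokens currently available
    refill_rate: f64, // tokens added per second
}

impl TokenBucket {
    fn new(capacity: f64, refill_rate: f64) -> Self {
        TokenBucket { capacity, tokens: capacity, refill_rate }
    }

    /// Refill according to elapsed time, then try to spend one token.
    /// Returns false when the request should be rejected.
    fn try_request(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_rate).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

Per-user limiting then amounts to keeping one bucket per authenticated user, with capacity and refill rate taken from configuration.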
Input Validation
RPC input validation:
- Sanitization: Input sanitization
- Validation: Input validation
- Access Control: Access control via authentication
Code: SECURITY.md
Network Security
DoS Protection
DoS protection mechanisms:
- Connection Rate Limiting: Token bucket, per-IP connection limits
- Message Queue Limits: Limits on message queue size
- Auto-Ban: Automatic banning of abusive peers
- Resource Monitoring: Resource usage monitoring
Code: SECURITY.md
Eclipse Attack Prevention
Eclipse attack prevention:
- IP Diversity Tracking: Tracks IP diversity
- Subnet Limits: Limits connections from same IP range
- Geographic Diversity: Geographic diversity requirements
- ASN Diversity: ASN diversity tracking
Code: SECURITY.md
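The subnet-limit idea can be illustrated with a small check that groups peers by /16 prefix and caps connections per group. The grouping granularity and the per-subnet cap are assumptions for the sketch, not the node's actual policy.

```rust
use std::collections::HashMap;
use std::net::Ipv4Addr;

// Illustrative cap on peers per /16 netgroup (an assumption for the sketch).
const MAX_PEERS_PER_SUBNET: usize = 2;

fn subnet_key(ip: Ipv4Addr) -> (u8, u8) {
    let o = ip.octets();
    (o[0], o[1]) // group by /16 prefix
}

/// Returns true if connecting to `candidate` keeps the per-subnet limit.
fn may_connect(existing: &[Ipv4Addr], candidate: Ipv4Addr) -> bool {
    let mut counts: HashMap<(u8, u8), usize> = HashMap::new();
    for ip in existing {
        *counts.entry(subnet_key(*ip)).or_insert(0) += 1;
    }
    counts.get(&subnet_key(candidate)).copied().unwrap_or(0) < MAX_PEERS_PER_SUBNET
}
```

Geographic and ASN diversity follow the same pattern with a different grouping key.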
Storage Security
Database Security
Storage layer security:
- redb Default: Production-ready database (pure Rust, ACID)
- sled Fallback: Available as fallback (beta quality)
- Database Abstraction: Allows switching backends
- Storage Bounds: Storage bounds checking
Code: SECURITY.md
LAN Peering Security
The LAN peering system includes multiple security mechanisms to prevent eclipse attacks while allowing fast local network sync:
Security Limits
- 25% LAN Peer Cap: Maximum percentage of peers that can be LAN peers (hard limit)
- 75% Internet Peer Minimum: Minimum percentage of peers that must be internet peers
- Minimum 3 Internet Peers: Required for checkpoint validation consensus
- Maximum 1 Discovered LAN Peer: Limits automatically discovered peers (whitelisted are separate)
Code: lan_security.rs
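Combining the limits above, an admission check for a candidate LAN peer might look like this sketch. The function name and counting convention are illustrative; see lan_security.rs for the real logic.

```rust
/// Sketch of the ratio checks above: at most 25% LAN peers, at least 75%
/// internet peers, and at least 3 internet peers for checkpoint consensus.
fn lan_peer_allowed(lan_peers: usize, internet_peers: usize) -> bool {
    let total = lan_peers + 1 + internet_peers; // count the candidate too
    let lan_after = lan_peers + 1;
    internet_peers >= 3                    // checkpoint validation needs 3+
        && lan_after * 4 <= total          // LAN peers capped at 25%
        && internet_peers * 4 >= total * 3 // internet peers at least 75%
}
```

With 3 internet peers, exactly one LAN peer passes the check; a second would push LAN peers past the 25% cap.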
Checkpoint Validation
Internet checkpoints are the primary security mechanism for LAN peering:
- Block Checkpoints: Every 1000 blocks, validate block hash against internet peers
- Header Checkpoints: Every 10000 blocks, validate header hash against internet peers
- Consensus Requirement: Requires agreement from at least 3 internet peers
- Failure Response: Checkpoint failure results in permanent ban (1 year duration)
Code: lan_security.rs
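As an illustration of block-checkpoint validation (header checkpoints at 10,000-block intervals follow the same pattern), the following sketch compares the locally synced hash against hashes reported by internet peers and requires at least 3 matches. Names and signatures are assumptions, not the real API.

```rust
// Illustrative checkpoint constants from the documentation above.
const BLOCK_CHECKPOINT_INTERVAL: u64 = 1_000;
const MIN_AGREEING_PEERS: usize = 3;

fn is_checkpoint_height(height: u64) -> bool {
    height > 0 && height % BLOCK_CHECKPOINT_INTERVAL == 0
}

/// A checkpoint passes only if enough internet peers report the same hash.
/// A failure would trigger the permanent ban described above.
fn checkpoint_passes(local_hash: &[u8; 32], peer_hashes: &[[u8; 32]]) -> bool {
    let agreeing = peer_hashes.iter().filter(|h| *h == local_hash).count();
    agreeing >= MIN_AGREEING_PEERS
}
```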
Progressive Trust System
LAN peers start with limited trust and earn higher priority over time:
- Initial Trust: 1.5x multiplier for newly discovered peers
- Level 2 Trust: 2.0x multiplier after 1000 valid blocks
- Maximum Trust: 3.0x multiplier after 10000 blocks AND 1 hour connection
- Demotion: After 3 failures, peer loses LAN status
- Banning: Checkpoint failure results in permanent ban
Code: lan_security.rs
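The trust schedule above maps naturally to a small function. This sketch uses the documented thresholds but an illustrative signature; the real implementation is in lan_security.rs.

```rust
/// Progressive trust sketch: 1.5x initial, 2.0x after 1,000 valid blocks,
/// 3.0x after 10,000 valid blocks AND one hour of connection.
fn lan_trust_multiplier(valid_blocks: u64, connected_secs: u64) -> f64 {
    if valid_blocks >= 10_000 && connected_secs >= 3_600 {
        3.0 // maximum trust
    } else if valid_blocks >= 1_000 {
        2.0 // level 2 trust
    } else {
        1.5 // initial trust for newly discovered peers
    }
}
```

Note that maximum trust requires both conditions: 10,000 valid blocks relayed quickly still caps out at 2.0x until an hour of connection time has passed.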
Eclipse Attack Prevention
The security model ensures eclipse attack prevention:
- Internet Peer Majority: 75% minimum ensures connection to honest network
- Checkpoint Validation: Regular validation prevents chain divergence
- LAN Address Privacy: LAN addresses never advertised to external peers
- Failure Handling: Multiple failures result in demotion or ban
Code: lan_security.rs
For complete documentation, see LAN Peering System.
See Also
- LAN Peering System - Complete LAN peering documentation
- Security Controls - Security control implementation
- Developer Security Checklist - Security checklist for developers
- Security Architecture Review Template - Architecture review process
- Security Testing Template - Security testing guidelines
- Node Overview - Node security features
- Contributing - Security in development workflow
Components
The threat model and security boundaries include:
- Node security boundaries (what node handles vs. never handles)
- Module system security (process isolation, sandboxing)
- Threat models (pre-production, mainnet)
- Attack vectors and mitigations
- Security hardening roadmap
- RPC security (authentication, rate limiting)
- Network security (DoS protection, eclipse prevention)
- Storage security
Location: blvm-node/SECURITY.md, blvm-node/src/module/, blvm-node/docs/MODULE_SYSTEM.md
Developer Security Checklist
Use this checklist when writing new code or modifying existing code to ensure security best practices.
Before Writing Code
- Understand the security implications of your changes
- Identify affected security controls (check governance/config/security-control-mapping.yml)
- Review relevant security documentation
- Consider threat model for your changes
Input Validation
- Validate all user inputs at boundaries
- Sanitize inputs before processing
- Use type-safe APIs (Rust’s type system)
- Reject invalid inputs early
- Validate data from external sources (network, files, databases)
Examples:
// ✅ Good: Validate input
fn process_amount(amount: u64) -> Result<u64, Error> {
    if amount > MAX_AMOUNT {
        return Err(Error::AmountTooLarge);
    }
    Ok(amount)
}

// ❌ Bad: No validation
fn process_amount(amount: u64) -> u64 {
    amount // Accepts any value, even above MAX_AMOUNT
}
Authentication & Authorization
- Implement proper authentication (if applicable)
- Check authorization before sensitive operations
- Use principle of least privilege
- Verify permissions at every boundary
- Don’t trust client-side authorization checks
Examples:
// ✅ Good: Check authorization
fn transfer_funds(from: Account, to: Account, amount: u64) -> Result<(), Error> {
    if !from.has_permission(Permission::Transfer) {
        return Err(Error::Unauthorized);
    }
    // ... transfer logic
    Ok(())
}

// ❌ Bad: No authorization check
fn transfer_funds(from: Account, to: Account, amount: u64) {
    // ... transfer logic without checking permissions
}
Cryptographic Operations
- Use well-tested cryptographic libraries (secp256k1, bitcoin_hashes)
- Never hardcode keys or secrets
- Use cryptographically secure random number generation
- Follow Bitcoin standards (BIP32, BIP39, BIP44)
- Verify signatures completely
- Use constant-time operations where needed (avoid timing attacks)
Examples:
// ✅ Good: Use secure random
use rand::rngs::OsRng;
let mut rng = OsRng;
let key = secp256k1::SecretKey::new(&mut rng);

// ❌ Bad: Hardcoded, predictable key material
let key = secp256k1::SecretKey::from_slice(&[1, 2, 3, ...])?;
Consensus & Protocol
- Implement consensus rules exactly as specified
- Validate all protocol messages
- Handle network errors gracefully
- Prevent DoS attacks (rate limiting, resource limits)
- Don’t bypass consensus validation
Examples:
// ✅ Good: Validate consensus rules
fn validate_block(block: &Block) -> Result<(), ConsensusError> {
    if !block.verify_merkle_root() {
        return Err(ConsensusError::InvalidMerkleRoot);
    }
    // ... more validation
    Ok(())
}

// ❌ Bad: Skip validation
fn validate_block(block: &Block) -> Result<(), ConsensusError> {
    Ok(()) // No validation!
}
Memory Safety
- Prefer safe Rust code
- Document and justify any unsafe code
- Ensure proper resource cleanup (Drop trait)
- Avoid memory leaks (use RAII patterns)
- Check bounds before array/vector access
Examples:
// ✅ Good: Safe, checked access
let value = vec.get(index).ok_or(Error::OutOfBounds)?;

// ❌ Bad: Unchecked indexing
let value = vec[index]; // Panics if index is out of bounds
Error Handling
- Don’t leak sensitive information in errors
- Use specific error types
- Handle all error cases
- Fail securely (default deny)
- Log errors appropriately (no sensitive data)
Examples:
// ✅ Good: Generic error message
return Err(Error::AuthenticationFailed); // Doesn't reveal why

// ❌ Bad: Leaks information
return Err(Error::InvalidPassword("user123")); // Reveals username
Dependencies
- Use minimal dependencies
- Keep dependencies up-to-date
- Pin consensus-critical dependencies to exact versions
- Check for known vulnerabilities (cargo audit)
- Review dependency licenses
Examples:
# ✅ Good: Pin critical dependencies
[dependencies]
secp256k1 = "=0.28.0" # Exact version for consensus-critical
# ❌ Bad: Allow version ranges for critical code
[dependencies]
secp256k1 = "^0.28" # Could break consensus
Testing
- Write security-focused tests
- Test edge cases and boundary conditions
- Test error handling paths
- Include fuzzing for consensus/protocol code
- Test with malicious inputs
- Achieve adequate test coverage
Examples:
#[test]
fn test_amount_overflow() {
    assert!(process_amount(u64::MAX).is_err());
}

#[test]
fn test_invalid_signature() {
    let invalid_sig = vec![0u8; 64];
    assert!(verify_signature(&invalid_sig).is_err());
}
Documentation
- Document security assumptions
- Document threat model considerations
- Document security implications of design decisions
- Update security documentation if adding new controls
- Document configuration security requirements
Code Review
- Request security review for security-sensitive code
- Address security review feedback
- Update security control mapping if needed
- Ensure appropriate governance tier is selected
Post-Implementation
- Verify security tests pass
- Check for new security advisories
- Update threat model if needed
- Document any security trade-offs
Security Control Categories
Category A: Consensus Integrity
- Genesis block implementation
- SegWit witness verification
- Taproot support
- Script execution limits
- UTXO set validation
Category B: Cryptographic
- Maintainer key management
- Emergency signature verification
- Multisig threshold enforcement
- Key derivation and storage
Category C: Governance
- Tier classification logic
- Database query implementation
- Cross-layer file verification
Category D: Data Integrity
- Audit log hash chain
- OTS timestamping
- State synchronization
Category E: Input Validation
- GitHub webhook signature verification
- Input sanitization
- SQL injection prevention
- API rate limiting
Resources
- Security Controls System
- Threat Models
- blvm-node SECURITY.md (reference node security practices)
Security Architecture Review Template
Use this template when conducting security architecture reviews for new features, major changes, or system components.
Review Information
Component/Feature: [Name of component or feature]
Reviewer: [Name]
Date: [Date]
Review Type: [Initial / Follow-up / Final]
Affected Security Controls: [List control IDs, e.g., A-001, B-002]
Executive Summary
Brief Description: [One-paragraph summary of the component/feature and its security implications]
Security Risk Level:
- Low
- Medium
- High
- Critical
Recommendation:
- Approve
- Approve with conditions
- Request changes
- Reject
Architecture Overview
Component Description
[Detailed description of the component, its purpose, and how it fits into the system]
Data Flow
[Describe how data flows through the component, including inputs, outputs, and transformations]
Threat Model
[Identify potential threats, attackers, and attack vectors]
Security Analysis
Authentication & Authorization
Current Implementation: [Describe how authentication and authorization are handled]
Security Assessment:
- Authentication is properly implemented
- Authorization checks are present at all boundaries
- Principle of least privilege is followed
- No privilege escalation vulnerabilities
- Session management is secure (if applicable)
Issues Found: [List any authentication/authorization issues]
Recommendations: [List recommendations for improvement]
Cryptographic Operations
Current Implementation: [Describe cryptographic operations used]
Security Assessment:
- Cryptographic primitives are appropriate and well-tested
- Key management follows best practices
- No hardcoded keys or secrets
- Random number generation is secure
- Signature verification is complete
- Constant-time operations used where needed
Issues Found: [List any cryptographic issues]
Recommendations: [List recommendations for improvement]
Input Validation & Sanitization
Current Implementation: [Describe input validation approach]
Security Assessment:
- All inputs are validated at boundaries
- Input sanitization is appropriate
- No injection vulnerabilities (SQL, command, etc.)
- Path traversal is prevented
- Buffer overflows are prevented
- Integer overflow/underflow is handled
Issues Found: [List any input validation issues]
Recommendations: [List recommendations for improvement]
Data Protection
Current Implementation: [Describe how sensitive data is protected]
Security Assessment:
- Sensitive data is encrypted at rest (if applicable)
- Sensitive data is encrypted in transit
- No sensitive data in logs
- No sensitive data in error messages
- Proper data retention and deletion
Issues Found: [List any data protection issues]
Recommendations: [List recommendations for improvement]
Error Handling
Current Implementation: [Describe error handling approach]
Security Assessment:
- Errors don’t leak sensitive information
- Error handling is comprehensive
- Fail-secure defaults are used
- No information disclosure through errors
Issues Found: [List any error handling issues]
Recommendations: [List recommendations for improvement]
Network Security
Current Implementation: [Describe network security measures]
Security Assessment:
- Network communication is encrypted (TLS)
- DoS protection is implemented
- Rate limiting is appropriate
- Network message validation is complete
- Protocol security is maintained
Issues Found: [List any network security issues]
Recommendations: [List recommendations for improvement]
Consensus & Protocol Compliance
Current Implementation: [Describe consensus/protocol implementation]
Security Assessment:
- Consensus rules are correctly implemented
- No consensus bypass vulnerabilities
- Protocol compliance is maintained
- Network compatibility is preserved
Issues Found: [List any consensus/protocol issues]
Recommendations: [List recommendations for improvement]
Security Controls Mapping
Affected Controls: [List all security controls affected by this component]
| Control ID | Control Name | Priority | Status | Notes |
|---|---|---|---|---|
| A-001 | Genesis Block | P0 | ✅ Complete | - |
| B-002 | Emergency Signatures | P0 | ⚠️ Partial | Needs review |
Required Actions:
- Security audit required (P0 controls)
- Formal verification required (consensus-critical)
- Cryptography expert review required
Testing & Validation
Current Testing: [Describe existing tests]
Security Testing Assessment:
- Security tests are included
- Edge cases are tested
- Fuzzing is appropriate (if applicable)
- Integration tests cover security scenarios
- Test coverage is adequate
Recommendations: [List testing recommendations]
Dependencies
Dependencies: [List security-sensitive dependencies]
Security Assessment:
- Dependencies are up-to-date
- No known vulnerabilities
- Consensus-critical dependencies are pinned
- Licenses are compatible
Issues Found: [List dependency issues]
Compliance & Governance
Governance Tier: [Identify required governance tier]
Compliance:
- Appropriate governance tier is selected
- Required signatures are identified
- Review period is appropriate
Risk Assessment
Identified Risks
| Risk | Severity | Likelihood | Impact | Mitigation |
|---|---|---|---|---|
| Example risk | High | Medium | Critical | Mitigation strategy |
Risk Summary
[Overall risk assessment and summary]
Recommendations
Critical (Must Fix)
[List critical issues that must be fixed before approval]
High Priority
[List high-priority recommendations]
Medium Priority
[List medium-priority recommendations]
Low Priority
[List low-priority recommendations]
Approval
Reviewer Signature: [Name]
Date: [Date]
Status: [Approved / Conditionally Approved / Rejected]
Conditions (if applicable): [List any conditions for approval]
Follow-up
Required Actions: [List actions required before final approval]
Follow-up Review Date: [Date for follow-up review, if needed]
References
- Security Controls System
- Threat Models
- Developer Security Checklist
- Security Testing Template
- blvm-node SECURITY.md (reference node security practices)
Security Testing Template
Use this template to plan and document security testing for new features, components, or security-sensitive changes.
Test Information
Component/Feature: [Name of component or feature]
Tester: [Name]
Date: [Date]
Test Type: [Unit / Integration / Fuzzing / Penetration / Review]
Affected Security Controls: [List control IDs]
Test Objectives
Primary Objectives:
- Verify input validation
- Verify authentication/authorization
- Verify cryptographic operations
- Verify consensus compliance
- Verify error handling
- Verify data protection
- Verify DoS resistance
Secondary Objectives: [List any additional testing objectives]
Test Scope
In Scope: [List what is being tested]
Out of Scope: [List what is explicitly not being tested]
Assumptions: [List any assumptions made during testing]
Test Environment
Environment Details:
- OS: [Operating system]
- Rust Version: [Version]
- Dependencies: [Key dependencies and versions]
- Network: [Network configuration if applicable]
Test Data: [Describe test data used]
Test Cases
Input Validation Tests
Test Case 1: Valid Input
- Description: Test with valid inputs
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 2: Invalid Input - Boundary Values
- Description: Test with boundary values (min, max, zero)
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 3: Invalid Input - Type Mismatch
- Description: Test with wrong data types
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 4: Invalid Input - Injection Attempts
- Description: Test for SQL injection, command injection, etc.
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Authentication & Authorization Tests
Test Case 5: Valid Authentication
- Description: Test successful authentication
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 6: Invalid Authentication
- Description: Test with invalid credentials
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 7: Authorization Bypass
- Description: Test attempts to bypass authorization
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 8: Privilege Escalation
- Description: Test for privilege escalation vulnerabilities
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Cryptographic Tests
Test Case 9: Signature Verification
- Description: Test signature verification with valid signatures
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 10: Invalid Signature
- Description: Test signature verification with invalid signatures
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 11: Key Management
- Description: Test key generation, storage, and usage
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 12: Random Number Generation
- Description: Test cryptographic random number generation
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Consensus & Protocol Tests
Test Case 13: Consensus Rule Compliance
- Description: Test consensus rule implementation
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 14: Protocol Message Validation
- Description: Test protocol message validation
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 15: Consensus Bypass Attempts
- Description: Test attempts to bypass consensus rules
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Error Handling Tests
Test Case 16: Error Information Disclosure
- Description: Test that errors don’t leak sensitive information
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 17: Error Recovery
- Description: Test error recovery mechanisms
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
DoS Resistance Tests
Test Case 18: Resource Exhaustion
- Description: Test resistance to resource exhaustion attacks
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 19: Rate Limiting
- Description: Test rate limiting mechanisms
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 20: Memory Exhaustion
- Description: Test resistance to memory exhaustion
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Fuzzing Tests
Fuzzing Tool: [Tool used, e.g., cargo-fuzz, AFL]
Fuzzing Duration: [Duration]
Coverage: [Code coverage achieved]
Issues Found: [List issues found during fuzzing]
Fuzzing Results: [Summary of fuzzing results]
Penetration Tests
Penetration Test Scope: [Describe penetration testing scope]
Issues Found: [List issues found during penetration testing]
Penetration Test Results: [Summary of penetration test results]
Test Results Summary
Total Test Cases: [Number]
Passed: [Number]
Failed: [Number]
Blocked: [Number]
Critical Issues: [Number]
High Issues: [Number]
Medium Issues: [Number]
Low Issues: [Number]
Issues Found
Critical Issues
Issue 1: [Title]
- Description: [Description]
- Impact: [Impact]
- Steps to Reproduce: [Steps]
- Recommendation: [Recommendation]
- Status: [Open / Fixed / Deferred]
High Issues
[List high-priority issues]
Medium Issues
[List medium-priority issues]
Low Issues
[List low-priority issues]
Recommendations
Immediate Actions: [List immediate actions required]
Short-term Actions: [List short-term actions]
Long-term Actions: [List long-term actions]
Test Coverage
Code Coverage: [Percentage]
Security Control Coverage: [Percentage]
Coverage Gaps: [List areas with insufficient coverage]
Sign-off
Tester: [Name]
Date: [Date]
Status: [Pass / Fail / Conditional Pass]
Approval: [Approval from security team/maintainers]
References
- Security Controls System
- Threat Models
- Developer Security Checklist
- Security Architecture Review Template
- blvm-node SECURITY.md (reference node security practices)
Migration Guides
Migration guides for upgrading between BLVM versions are documented here.
Migration guides are provided as needed for version upgrades. When available, they cover:
- Configuration changes between versions
- Database schema migrations
- API changes and compatibility
- Breaking changes and upgrade procedures
Frequently Asked Questions
General Questions
What is Bitcoin Commons?
Bitcoin Commons is a project that solves Bitcoin’s governance asymmetry through two complementary innovations: BLVM (the technical stack providing mathematical rigor) and Bitcoin Commons (the governance framework providing coordination without civil war). Together, they enable safe alternative Bitcoin implementations with forkable governance. See Introduction and Governance Overview for details.
How does this relate to other Bitcoin implementations?
The widely deployed reference stack combines a single mature codebase with informal governance. Bitcoin Commons adds: (1) BLVM — mathematical rigor and a normative Orange Paper, (2) Commons — forkable governance. The goals are implementation diversity and verifiable specs alongside strong consensus testing.
Is this a fork of Bitcoin?
No. Neither BLVM nor Bitcoin Commons forks Bitcoin’s blockchain or consensus rules. BLVM provides mathematical specification enabling safe alternative implementations. Bitcoin Commons provides governance framework enabling coordination. Both maintain full Bitcoin consensus compatibility.
Is the system production ready?
BLVM provides a complete node implementation with core components, formal verification tooling, and broad tests. Readiness depends on your deployment: governance is not universally activated, and you must apply your own security review, hardening, RPC authentication, and monitoring. See Node Configuration, Security, and System Status.
How do BLVM and Bitcoin Commons work together?
BLVM provides the mathematical foundation and compiler-like architecture (Orange Paper as spec/IR; implementation validated against it). Bitcoin Commons provides the governance framework (coordination without civil war). The modular architecture is where both meet: BLVM supplies the spec and verification stack; Commons supplies governance. Production deployments need engineering and operations like any node.
What are the two innovations?
BLVM (Bitcoin Low-Level Virtual Machine): Technical innovation combining the Orange Paper (normative math spec), formal verification (BLVM Specification Lock / Z3), and a compiler-like split between spec and implementation. See Introduction and Consensus Overview.
Bitcoin Commons (Cryptographic Commons): Governance innovation providing forkable governance through Ostrom’s principles, cryptographic enforcement, 5-tier governance model, and transparent audit trails. This ensures coordination. See Governance Overview for details.
What’s the relationship between Bitcoin Commons and BTCDecoded?
Bitcoin Commons is the governance framework; BTCDecoded is the first complete implementation of both innovations (BLVM + Commons). Think of BLVM as the technical foundation, Bitcoin Commons as the governance constitution, and BTCDecoded as the first “government” built on both. Other implementations can adopt the same framework.
What is Bitcoin Commons (the governance framework)?
Bitcoin Commons is a forkable governance framework that applies Elinor Ostrom’s proven commons management principles through cryptographic enforcement. It solves Bitcoin’s governance asymmetry by making development governance as robust as technical consensus. It provides coordination without civil war through forkable rules, cryptographic signatures, and transparent audit trails.
How does Bitcoin Commons governance work?
Bitcoin Commons uses a 5-tier constitutional governance model with graduated signature thresholds (3-of-5 for routine maintenance, up to 6-of-7 for consensus changes) and review periods (7 days to 365 days). All governance actions are cryptographically signed and transparently auditable. Users can fork governance rules if they disagree, creating exit competition.
What makes Bitcoin Commons governance “6x harder to capture”?
Multiple mechanisms: (1) Forkable governance rules allow users to exit if governance is captured, (2) Multiple implementations compete, preventing monopoly, (3) Cryptographic enforcement makes power visible and accountable, (4) Economic alignment through merge mining, (5) Graduated thresholds prevent rapid changes, (6) Transparent audit trails.
How does forkable governance work?
Users can fork the governance rules (not just the code) if they disagree with decisions. This creates exit competition: if governance is captured, users can fork to a better governance model while maintaining Bitcoin consensus compatibility. The threat of forking prevents capture.
What are Ostrom’s principles?
Elinor Ostrom’s Nobel Prize-winning research identified 8 principles for managing commons successfully. Bitcoin Commons applies these through: clearly defined boundaries, proportional equivalence, collective choice, monitoring, graduated sanctions, conflict resolution, minimal recognition of rights, and nested enterprises.
Why do you need both BLVM and Bitcoin Commons?
BLVM addresses the technical problem (shared Orange Paper spec, layered verification, alternative implementations). Bitcoin Commons addresses the governance problem (coordination without civil war). They are designed to work together.
How does the modular architecture combine both innovations?
The modular architecture has three layers: (1) Mandatory Consensus (shared blvm-consensus rules and verification policy), (2) Optional Modules (Commons enables competition), (3) Economic Coordination (module marketplace funds infrastructure). Consensus stays in one layer; Commons coordinates changes and releases. The architecture is where both meet.
Can you use BLVM without Bitcoin Commons governance?
Yes. BLVM is a technical stack usable on its own. Without Bitcoin Commons governance you do not get this project’s governance model. BLVM supplies the spec and implementation stack; Commons supplies coordination between alternatives.
Can you use Bitcoin Commons governance without BLVM?
The governance framework can apply to other implementations. BLVM’s Orange Paper and verification stack give a shared spec and Z3-backed proofs on spec-locked code; any codebase still needs correct implementation and review to stay aligned with mainnet.
What happens if governance is captured?
Forkable governance means users can fork to a better governance model. This creates exit competition: captured governance loses users to better-governed implementations. The threat of forking prevents capture. Here you can fork governance rules, not only application code.
How does economic alignment work?
Through the module marketplace. Module authors receive 75% of sales, Commons receives 15% for infrastructure, and node operators receive 10%. This creates sustainable funding while incentivizing quality module development.
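As a worked example, a 100,000-satoshi sale would yield 75,000 to the author, 15,000 to Commons, and 10,000 to node operators. In integer arithmetic the split might be sketched as follows (the rounding rule, remainder to the author, is an assumption for the sketch):

```rust
/// Hypothetical sketch of the 75/15/10 marketplace split, in satoshis.
fn split_sale(sats: u64) -> (u64, u64, u64) {
    let commons = sats * 15 / 100;           // infrastructure share
    let operators = sats * 10 / 100;         // node operator share
    let author = sats - commons - operators; // 75% plus any rounding dust
    (author, commons, operators)
}
```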
What is merge mining?
Merge mining is available as a separate paid plugin module (blvm-merge-mining). It allows miners to mine multiple blockchains simultaneously using the same proof-of-work. However, merge mining is not a Commons funding model: revenue goes to the module developer, not to Commons infrastructure.
What features does BLVM provide?
The Orange Paper, blvm-consensus (with formal verification tooling), blvm-protocol, blvm-node, blvm-sdk, and blvm-commons (governance enforcement) exist as implemented layers. Governance rules are not yet activated in production; treat the stack as experimental until your deployment’s activation criteria are met. See System Status and Governance Overview.
How is Bitcoin Commons governance implemented?
Bitcoin Commons governance uses a 5-tier constitutional model with cryptographic enforcement. Governance rules are defined, the governance-app is implemented, and cryptographic primitives are available. Governance activation requires a suitable cohort of keyholders to be onboarded. See Governance Overview for details.
How does governance activation work?
Governance activation requires a suitable cohort of keyholders to be onboarded. This involves security audits, keyholder onboarding, governance app deployment, and community testing. See Governance Overview and Keyholder Procedures for details.
How can I contribute?
Review BLVM code and formal proofs, review Bitcoin Commons governance rules, submit issues and pull requests, help with testing and security audits, build your own implementation using both innovations, or participate in governance discussions.
Can I build my own implementation?
Yes! You can use BLVM’s technical stack (Orange Paper, blvm-consensus) and adopt Bitcoin Commons governance framework. Fork the governance model, customize it for your organization, and build your own Bitcoin-compatible implementation. See the Implementations Registry.
Where is the code?
All code is open source on GitHub under the BTCDecoded organization. Key repositories: BLVM (blvm-spec/Orange Paper, blvm-consensus, blvm-protocol, blvm-node, blvm-sdk) and Commons (governance, governance-app).
What documentation should I read?
White Paper for complete technical and governance overview, Unified Documentation for technical documentation, and Governance Docs for governance rules and processes.
Why “commons”?
Bitcoin’s codebase is a commons: a shared resource that benefits everyone but that no one owns. Unmanaged commons can fail through the tragedy of the commons; Elinor Ostrom showed how commons can be governed successfully. Bitcoin Commons applies those principles through cryptographic enforcement.
How does this relate to cypherpunk philosophy?
Cypherpunks focused on eliminating trusted third parties in transactions. Bitcoin Commons extends this to development: reduce reliance on trusted parties in governance through cryptographic enforcement, transparency, and forkability. BLVM extends this to implementation: open specs, tests, review, and BLVM Specification Lock where applied—not a single blanket proof of every line.
Technical Questions
What is BLVM?
BLVM (Bitcoin Low-Level Virtual Machine) is a compiler-like infrastructure for Bitcoin implementations. It includes: (1) Orange Paper—complete mathematical specification serving as the IR (intermediate representation), (2) blvm-consensus—implementation validated against that spec (not generated from the IR), (3) optimization passes—runtime optimizations on the implementation, (4) blvm-protocol—Bitcoin abstraction layer, (5) blvm-node—full node implementation, (6) blvm-sdk—developer toolkit.
What is the Orange Paper?
The Orange Paper is a complete mathematical specification of Bitcoin’s consensus protocol, produced from analysis of the widely deployed implementation using AI-assisted extraction. It serves as the “intermediate representation” (IR) in BLVM’s compiler-like architecture. The implementation is validated against this spec (not generated from it). blvm-consensus implements those rules with tests, review, and BLVM Specification Lock proofs on spec-locked code.
How does formal verification work in BLVM?
BLVM uses BLVM Specification Lock (Z3) for formal proofs on spec-locked consensus functions, together with tests and review. The Orange Paper is the specification; blvm-consensus implements it. See Formal Verification.
How is BLVM different from a typical full-node codebase?
Many deployments embed consensus in a large C++ codebase without a single companion IR like the Orange Paper. BLVM provides: (1) mathematical specification (Orange Paper), (2) BLVM Specification Lock (Z3 proofs on spec-locked code), (3) proofs co-located with code, (4) a compiler-like split (spec vs implementation) for alternative implementations. BLVM is a different development and verification stack, not a drop-in replacement for any one node.
What does “compiler-like architecture” mean?
Like a compiler has a spec (IR) and implementation (machine code), BLVM has the Orange Paper as the spec (IR) and blvm-consensus as the implementation. The implementation is validated against the Orange Paper—it is not generated or transformed from the IR. Optimization passes optimize the implementation code. Multiple implementations can target the same Orange Paper; operating on mainnet also requires sound deployment and operations.
What is formal verification in BLVM?
BLVM Specification Lock produces Z3 proofs for spec-locked functions against Orange Paper contracts. The Orange Paper remains the normative spec for the full rule set. See Formal Verification.
How many formal proofs does BLVM have?
The set of spec-locked functions grows over time. Run cargo spec-lock verify in blvm-consensus or see VERIFICATION.md.
What does “proofs locked to code” mean?
Spec-lock proofs live next to the functions they verify. Changing those functions requires updating proofs. Full proof scope: PROOF_LIMITATIONS.md.
How does BLVM prevent consensus bugs?
Through multiple layers: (1) Orange Paper specifies the rules, (2) Tests and integration catch regressions, (3) BLVM Specification Lock proves spec-locked consensus code against those rules, (4) Consensus logic lives in blvm-consensus so the node does not reimplement rules, (5) Review and tooling catch what automation misses. See Formal Verification.
How does cryptographic enforcement work?
All governance actions require cryptographic signatures from maintainers. The governance-app (GitHub App) verifies signatures, enforces thresholds (e.g., 6-of-7), and blocks merges until requirements are met. This makes power visible and accountable: you can see who signed what, when.
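A minimal sketch of the threshold rule (counting only; real signature verification and keyholder identity checks are done by the governance-app, and the variable names here are invented for illustration):

```shell
# Sketch: an M-of-N threshold gate like the 6-of-7 example above.
# VALID_SIGS would come from verifying maintainer signatures; hardcoded here.
REQUIRED=6
VALID_SIGS=6
if [ "$VALID_SIGS" -ge "$REQUIRED" ]; then
  echo "threshold met: merge allowed"
else
  echo "threshold not met: merge blocked"
fi
```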
What BIPs are implemented?
BLVM implements numerous Bitcoin Improvement Proposals. See Protocol Specifications for a complete list, including consensus-critical BIPs (BIP65, BIP112, BIP68, BIP113, BIP125, BIP141/143, BIP340/341/342), network protocol BIPs (BIP152, BIP157/158, BIP331), and application-level BIPs (BIP21, BIP32/39/44, BIP174, BIP350/351).
What storage backends are supported?
The node supports multiple storage backends. With database_backend = "auto" (default), the backend is chosen by build features: RocksDB when the rocksdb feature is enabled, then TidesDB, Redb, Sled. Options include rocksdb (can read common LevelDB-format chain state and blk*.dat layouts), redb, sled, and tidesdb. See Storage Backends and Configuration Reference for details.
What transport protocols are supported?
The network layer supports multiple transport protocols: TCP (default, Bitcoin P2P compatible) and Iroh/QUIC (experimental). See Network Protocol for details.
How do I install BLVM?
Pre-built binaries are available from GitHub Releases. See Installation for platform-specific instructions.
What experimental features are available?
The experimental build variant includes: UTXO commitments, BIP119 CTV (CheckTemplateVerify), Dandelion++ privacy relay, BIP158, Stratum V2 mining protocol, and enhanced signature operations counting. See Installation for details.
How do I configure the node?
Configuration can be done via config file (blvm.toml), environment variables, or command-line options. See Node Configuration for complete configuration options.
What RPC methods are available?
The node implements many JSON-RPC methods aligned with widely documented Bitcoin node RPC conventions across blockchain, raw transaction, mempool, network, mining, control, address, transaction, and payment categories. See RPC API Reference for the list.
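For example, a request in the conventional Bitcoin JSON-RPC style looks like the sketch below; the method name, port, and credentials are assumptions to check against the RPC API Reference:

```shell
# Sketch: build a JSON-RPC payload in the conventional Bitcoin style.
# Method availability, port, and credentials are assumptions; see RPC API Reference.
PAYLOAD='{"jsonrpc":"1.0","id":"doc","method":"getblockcount","params":[]}'
echo "$PAYLOAD"
# To send it to a running node (hypothetical credentials/port):
# curl --user rpcuser:rpcpass --data-binary "$PAYLOAD" \
#   -H 'content-type: text/plain;' http://127.0.0.1:8332/
```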
How does the module system work?
The node includes a process-isolated module system that enables optional features (Lightning, merge mining, privacy enhancements) without affecting consensus or base node stability. Modules run in separate processes with IPC communication. See Module Development for details.
How do I troubleshoot issues?
See Troubleshooting for common issues and solutions, including node startup problems, storage issues, network connectivity, RPC configuration, module system issues, and performance optimization.
Troubleshooting
Common issues and solutions when running BLVM nodes. See Node Operations for operational details.
Node Won’t Start
Port Already in Use
Error: Address already in use or Port 8332 already in use
Solution:
# Use a different port
blvm --rpc-port 8333
# Or find and stop the process using the port
lsof -i :8332
kill <PID>
Permission Denied
Error: Permission denied when accessing data directory
Solution:
# Fix directory permissions
sudo chown -R $USER:$USER /var/lib/blvm
# Or use a user-writable directory
blvm --data-dir ~/.blvm
Storage Issues
Database Backend Fails
Error: Failed to initialize database backend
Solution:
- The system automatically falls back to alternative backends when the chosen one fails
- Check data directory permissions and sufficient disk space
- Set the backend explicitly in config under [storage]: database_backend = "redb" (or "rocksdb", "sled", "tidesdb"). See Configuration Reference.
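Pinning the backend might look like the following sketch; the config file location (~/.blvm/blvm.toml) is an assumption, so check Node Configuration for where your deployment reads its config:

```shell
# Sketch: pin the storage backend in blvm.toml.
# The path ~/.blvm/blvm.toml is an assumption; see Node Configuration.
mkdir -p "$HOME/.blvm"
cat >> "$HOME/.blvm/blvm.toml" <<'EOF'
[storage]
database_backend = "redb"   # or "rocksdb", "sled", "tidesdb"
EOF
grep 'database_backend' "$HOME/.blvm/blvm.toml"
```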
Corrupted Database
Error: Database corruption or inconsistent state
Solution:
# Stop the node
# Remove corrupted database files (backup first!)
rm -rf data/blocks data/chainstate
# Restart and resync
blvm
Network Issues
No Peer Connections
Symptoms: Node starts but shows 0 connections
Solutions:
- Check firewall settings (port 8333 for mainnet, 18333 for testnet)
- Verify network connectivity
- Try adding manual peers:
blvm --addnode <peer-ip>:8333
- Check DNS seed resolution
Connection Drops
Symptoms: Connections established but immediately drop
Solutions:
- Check network stability
- Verify protocol version compatibility
- Review node logs for specific error messages
- Try a different transport:
--transport tcp_only
RPC Issues
RPC Connection Refused
Error: Connection refused when calling RPC
Solutions:
- Verify RPC is enabled: --rpc-enabled true
- Check the RPC port: --rpc-port 8332
- Verify the bind address: --rpc-host 127.0.0.1 (default)
- Check firewall rules for the RPC port
RPC Authentication Errors
Error: Unauthorized or authentication failures
Solutions:
- Configure RPC credentials in config file
- Use correct username/password in requests
- For development, RPC can run without auth (not recommended for production)
Module System Issues
Module Not Loading
Error: Module fails to load or start
Solutions:
- Verify module.toml exists and is valid
- Check that the module binary exists at the expected path
- Review module logs in data/modules/logs/
- Verify the module has the required permissions in its manifest
- Check IPC socket directory permissions
IPC Connection Failures
Error: Module cannot connect to node IPC
Solutions:
- Ensure the socket directory exists: data/modules/sockets/
- Check file permissions on the socket directory
- Verify module process has access to socket
- Restart node to recreate sockets
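The checks above can be scripted; the socket path comes from this guide and will differ if you run with a custom data directory:

```shell
# Sketch: verify the module IPC socket directory exists and is writable.
# Path data/modules/sockets/ per this guide; adjust for your --data-dir.
SOCK_DIR="data/modules/sockets"
mkdir -p "$SOCK_DIR"    # the node normally creates this on startup
[ -d "$SOCK_DIR" ] && [ -w "$SOCK_DIR" ] && echo "socket dir OK"
ls -ld "$SOCK_DIR"
```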
Performance Issues
Slow Initial Sync
Symptoms: Node takes a very long time to sync
Solutions:
- Use pruning: --pruning enabled --pruning-keep-blocks 288
- Increase cache sizes in config
- Use a storage backend suited to your workload (see Storage Backends)
- Check network bandwidth and latency
High Memory Usage
Symptoms: Node uses excessive memory
Solutions:
- Reduce cache sizes in config
- Enable pruning to reduce data size
- Check for memory leaks in logs
- Consider using a lighter storage backend
Getting Help
- Check node logs: data/logs/ or console output
- Review Configuration for options
- See RPC API for available methods
- Check GitHub issues for known problems
See Also
- Node Operations - Node management and operations
- Node Configuration - Configuration options
- Getting Started - First node setup
- FAQ - Frequently asked questions
- Migration Guides - Migration from other implementations
Contributing to BLVM Documentation
Thank you for your interest in improving BLVM documentation!
Documentation Philosophy
Documentation is maintained in source repositories alongside code. This repository (blvm-docs) aggregates that documentation into a unified site.
Where to contribute:
- Component-specific documentation → Edit in the source repository (e.g., blvm-consensus/docs/)
- Cross-cutting documentation → Edit in this repository (e.g., blvm-docs/src/architecture/)
- Navigation structure → Edit SUMMARY.md in this repository
Documentation Standards
Content principles (keep docs timeless and accurate)
- Current state only — Describe how things work and where things live now. Do not describe what was removed, refactored, or “we recently changed X.”
- No plan artifacts — No task IDs, “Phase 2”, “we removed X”, or references to internal plans or WIP.
- No unsubstantiated numbers — Do not claim specific speedups (e.g. “10-50x faster”) unless citing published benchmarks. Describe optimizations and point to benchmarks.thebitcoincommons.org or local runs.
- Accurate feature status — Do not label features as “deprecated” when they are actively reimplemented (e.g. BIP70).
- IR vs implementation — The Orange Paper is the spec (IR). The implementation is validated against it (e.g. blvm-spec-lock). Do not say the IR is “transformed” or “generated” into code.
- API reference — The canonical API reference is this book (API Index, SDK API Reference). Do not point users to docs.rs as the primary API docs; link in-book or docs.thebitcoincommons.org.
- Storage default — database_backend = "auto" resolves by build features: RocksDB (if the rocksdb feature is enabled) → TidesDB → Redb → Sled. Do not describe "redb" as the default without this context.
- Paths — Code links must use actual paths: block/, script/ (dirs), node/parallel_ibd/ (dir), blvm-protocol for spam_filter/utxo_commitments; no block.rs, script.rs, or parallel_ibd.rs as single files, no utxostore_proofs.
- Brittle links — Prefer file or module links without line-number anchors (#L123). Line numbers break as code changes; use them only when pointing to a stable, narrow section, and prefer "see path/to/file.rs" when the exact line is not critical.
When in doubt, follow the Content principles subsection above and the Contributing chapter.
Markdown Format
- Use standard Markdown (no mdBook-specific syntax in source repos)
- Follow consistent heading hierarchy
- Use relative links for internal documentation
- Include code examples where helpful
Style Guidelines
- Clarity: Write clearly and concisely
- Completeness: Cover all important aspects
- Examples: Include practical examples
- Links: Link to related documentation
- Code: Include testable code examples where possible
File Organization
Each source repository should maintain documentation in:
repository-root/
├── README.md # High-level overview
├── docs/
│ ├── README.md # Documentation index
│ ├── architecture.md # Component architecture
│ ├── guides/ # How-to guides
│ ├── reference/ # Reference documentation
│ └── examples/ # Code examples
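The layout above can be scaffolded in one step from a repository root:

```shell
# Scaffold the recommended documentation layout described above.
mkdir -p docs/guides docs/reference docs/examples
touch README.md docs/README.md docs/architecture.md
ls -R docs
```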
Contribution Workflow
For Source Repository Documentation
- Fork the source repository (e.g., blvm-consensus)
- Make documentation improvements
- Submit a pull request to the source repository
- After merge, changes will appear in the unified documentation site (via {{#include}} directives)
For Cross-Cutting Documentation
- Fork this repository (blvm-docs)
- Edit files in the src/ directory (not in submodules)
- Submit a pull request
- After merge, GitHub Actions will automatically rebuild and deploy
For Navigation Changes
- Edit src/SUMMARY.md to add/remove/modify navigation
- Create corresponding content files if needed
- Submit a pull request
Local Testing
Before submitting changes:
- Clone the repository: git clone https://github.com/BTCDecoded/blvm-docs.git
- Set up the includes for the Orange Paper and governance. mdbook build fails if these paths are missing: modules/blvm-spec/THE_ORANGE_PAPER.md (included from Orange Paper), and modules/governance/README.md and modules/governance/GOVERNANCE.md (included from Governance Overview and Governance Model). Clone them from GitHub if needed (https://github.com/BTCDecoded/blvm-spec, https://github.com/BTCDecoded/governance). With sibling checkouts, from blvm-docs/modules/: ln -sf ../../blvm-spec blvm-spec and ln -sf ../../governance governance. Point the targets at your local clones (paths may differ).
- Serve locally: mdbook serve
- Review changes at http://localhost:3000
- Check for broken links: mdbook test
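The include setup can be done with two symlinks, assuming blvm-spec and governance are checked out as siblings of blvm-docs (adjust the relative paths for your machine):

```shell
# Sketch: wire up the Orange Paper and governance includes via symlinks,
# assuming blvm-spec and governance sit next to the blvm-docs checkout.
cd blvm-docs/modules
ln -sf ../../blvm-spec blvm-spec
ln -sf ../../governance governance
ls -l blvm-spec governance
```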
modules/blvm submodule
The modules/blvm submodule is the meta-repo (the blvm build/orchestration tree). Its docs/ tree covers umbrella workflows and release tooling and is not the same as this book's src/. Prefer editing cross-cutting narrative in blvm-docs/src/ unless the change belongs only in meta-repo CI or release docs.
Review Process
- All documentation changes require review
- Maintainers will review for clarity, completeness, and accuracy
- Technical accuracy is especially important for consensus and protocol documentation
Major documentation update checklist
When refreshing docs for a release or large refactor, explicitly verify (not only path fixes):
| Area | Ask |
|---|---|
| SDK / modules | Are blvm-sdk module APIs documented? (#[module], run_module!, prelude, blvm-sdk-macros) |
| User CLI | Do modules that register CLI document blvm <group> … and that the module must be loaded? |
| New crates | Is every user-facing crate listed in stack overview, glossary, and api-index? |
| First-class modules | Does each shipped module have a book page (not only a GitHub link)? |
| Composition | Is blvm-compose still accurately described if the composition API changed? |
| Node config | Do defaults (IBD, storage, pruning) match code and configuration-reference? |
| Optional features | If a feature is user-visible (e.g. WASM modules, extra transports), is it mentioned in the right node/sdk section? |
Add missing sections rather than assuming “the plan” covered developer ergonomics—those are easy to omit.
Questions?
- Open an issue for questions about documentation structure
- Ask in GitHub Discussions for general questions
- Contact maintainers for repository-specific questions
Thank you for helping improve BLVM documentation!