Introduction
Welcome to the BLVM (Bitcoin Low-Level Virtual Machine) documentation!
BLVM:
- Implements Bitcoin consensus rules directly from the Orange Paper
- Provides protocol abstraction for multiple Bitcoin variants
- Delivers a production-ready node with full P2P networking
- Includes a developer SDK for custom implementations
- Enforces cryptographic governance for transparent development
What is BLVM?
BLVM (Bitcoin Low-Level Virtual Machine) is compiler-like infrastructure for Bitcoin implementations. Just as LLVM transforms source code into optimized machine code through optimization passes, BLVM transforms the Orange Paper's mathematical specification into optimized, production-ready code.
Compiler-Like Architecture
Just as a compiler transforms source code → IR → optimized machine code, BLVM's pipeline consists of:
- Orange Paper - Mathematical specification (IR/intermediate representation)
- BLVM Specification Lock - Formal verification tooling linking code to Orange Paper specifications using Z3
- Optimization Passes - Transform spec into optimized code
- blvm-consensus - Optimized implementation with formal verification
- blvm-protocol - Protocol abstraction for mainnet, testnet, regtest
- blvm-node - Full Bitcoin node with storage, networking, RPC
- blvm-sdk - Developer toolkit and module composition
- Governance - Cryptographic governance enforcement
Why “LVM”?
Like LLVM’s compiler infrastructure, BLVM provides Bitcoin implementation infrastructure with optimization passes. The Orange Paper serves as the intermediate representation (IR) transformed into production-ready code, enabling safe alternative implementations while maintaining consensus correctness.
Documentation Structure
This documentation is organized into several sections:
- Getting Started - Installation and quick start guides
- Architecture - System-wide design and component relationships
- Component Documentation - Detailed documentation for each layer
- Developer Guides - SDK usage and module development
- Governance - Governance model and procedures
- Reference - Specifications, API documentation, and glossary
Documentation Sources
This unified documentation site aggregates content from multiple source repositories:
- Documentation is maintained in source repositories alongside code
- Changes to source documentation automatically propagate here
- Each component’s documentation is authored by its maintainers
Getting Help
Report bugs or request features on GitHub Issues, ask questions in GitHub Discussions, or report security issues to security@btcdecoded.org.
Key Features
Core Components
- blvm-consensus - Pure mathematical implementation with formal verification, BIP integration (BIP30, BIP34, BIP66, BIP90, BIP147)
- blvm-protocol - Protocol variants (mainnet, testnet, regtest) and network messages
- blvm-node - Full Bitcoin node with RPC, storage, and module system
- blvm-sdk - Governance primitives and CLI tools (blvm-keygen, blvm-sign, blvm-verify)
- blvm-commons - Governance enforcement system with GitHub integration, OpenTimestamps, Nostr, and cross-layer validation
Module System
BLVM includes a process-isolated module system enabling optional features:
- blvm-lightning - Lightning Network module (LDK implementation)
- blvm-mesh - Mesh networking module with submodules (onion routing, mining pool, messaging)
- blvm-stratum-v2 - Stratum V2 mining module
- blvm-datum - Datum blockchain module
- blvm-miningos - Mining OS module
Key Capabilities
BLVM includes comprehensive Bitcoin node functionality:
- Module System: Process-isolated modules for enhanced security and crash containment
- RBF and Mempool Policies: Configurable replacement-by-fee modes with 5 eviction strategies
- Payment Processing: CTV (CheckTemplateVerify) support for advanced payment flows
- Advanced Indexing: Address and value range indexing for efficient queries
- Formal Verification: Formal proofs for critical consensus functions
- Differential Testing: Infrastructure for comparing against Bitcoin Core
- FIBRE Protocol: High-performance relay protocol support
License
This documentation is licensed under the MIT License, same as the BLVM codebase.
Installation
This guide covers installing BLVM from pre-built binaries available on GitHub releases.
Prerequisites
Pre-built binaries are available for Linux, macOS, and Windows. No Rust installation is required - binaries are pre-compiled and ready to use.
Installing blvm-node
The reference node is the main entry point for running a BLVM node.
Quick Start
1. Download the latest release from GitHub Releases
2. Extract the archive for your platform:
# Linux
tar -xzf blvm-*-linux-x86_64.tar.gz
# macOS
tar -xzf blvm-*-macos-x86_64.tar.gz
# Windows
# Extract the .zip file using your preferred tool
3. Move the binary to your PATH (optional but recommended):
# Linux/macOS
sudo mv blvm /usr/local/bin/
# Or add to your local bin directory
mkdir -p ~/.local/bin
mv blvm ~/.local/bin/
export PATH="$HOME/.local/bin:$PATH" # Add to ~/.bashrc or ~/.zshrc
4. Verify installation:
blvm --version
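The download-and-extract steps can be scripted. Below is a small sketch that maps a platform to a release archive name; the naming scheme (blvm-{version}-{platform}.tar.gz) is assumed from the examples in this guide, so verify it against the actual release assets before use:

```shell
#!/bin/sh
# Sketch: pick the release archive name for a given platform.
# The asset naming scheme is assumed from this guide's examples
# (blvm-{version}-{platform}.tar.gz); check it against the release page.
asset_name() {
  version="$1"
  platform="${2:-$(uname -s)-$(uname -m)}"   # default: detect current platform
  case "$platform" in
    Linux-x86_64)   echo "blvm-${version}-linux-x86_64.tar.gz" ;;
    Linux-aarch64)  echo "blvm-${version}-linux-arm64.tar.gz" ;;
    Darwin-x86_64)  echo "blvm-${version}-macos-x86_64.tar.gz" ;;
    Darwin-arm64)   echo "blvm-${version}-macos-arm64.tar.gz" ;;
    *) echo "unsupported platform: $platform" >&2; return 1 ;;
  esac
}

asset_name v1.0.0 Linux-x86_64   # prints blvm-v1.0.0-linux-x86_64.tar.gz
```

Called with no second argument, the helper detects the current platform via `uname`.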
Release Variants
Releases include two variants:
Base Variant (blvm-{version}-{platform}.tar.gz)
Stable, minimal release with core Bitcoin node functionality, production optimizations, standard storage backends, and process sandboxing. Use for production deployments prioritizing stability.
Experimental Variant (blvm-experimental-{version}-{platform}.tar.gz)
Full-featured build with experimental features: UTXO commitments, BIP119 CTV, Dandelion++, BIP158, Stratum V2, and enhanced signature operations counting. See Protocol Specifications for details.
Use for development, testing, or when experimental capabilities are required.
Installing blvm-sdk Tools
The SDK tools (blvm-keygen, blvm-sign, blvm-verify) are included in the blvm-node release archives.
After extracting the release archive, you’ll find:
- blvm - Bitcoin reference node
- blvm-keygen - Generate governance keypairs
- blvm-sign - Sign governance messages
- blvm-verify - Verify signatures and multisig thresholds
All tools are in the same directory. Move them to your PATH as described above.
Platform-Specific Notes
Linux
- x86_64: Standard 64-bit Linux
- ARM64: For ARM-based systems (Raspberry Pi, AWS Graviton, etc.)
- glibc 2.31+: Required for Linux binaries
macOS
- x86_64: Intel Macs
- ARM64: Apple Silicon (M1/M2/M3)
- macOS 11.0+: Required for macOS binaries
Windows
- x86_64: 64-bit Windows
- Extract the .zip file and run blvm.exe from the extracted directory
- Add the directory to your PATH for command-line access
Verifying Installation
After installation, verify everything works:
# Check blvm-node version
blvm --version
# Check SDK tools
blvm-keygen --help
blvm-sign --help
blvm-verify --help
Building from Source (Advanced)
Building from source requires Rust 1.70+ and is primarily for development. Clone the blvm repository and follow the build instructions in its README.
Next Steps
- See Quick Start for running your first node
- See Node Configuration for detailed setup options
See Also
- Quick Start - Run your first node
- First Node Setup - Complete setup guide
- Node Configuration - Configuration options
- Node Overview - Node features and capabilities
- Release Process - How releases are created
- GitHub Releases - Download releases
Quick Start
Get up and running with BLVM in minutes.
Running Your First Node
After installing the binary, you can start a node:
Regtest Mode (Recommended for Development)
Regtest mode is safe for development - it creates an isolated network:
blvm
Or explicitly:
blvm --network regtest
Starts a node in regtest mode (default), creating an isolated network with instant block generation for testing and development.
Testnet Mode
Connect to Bitcoin testnet:
blvm --network testnet
Mainnet Mode
⚠️ Warning: Only use mainnet if you understand the risks.
blvm --network mainnet
Basic Node Operations
Checking Node Status
Once the node is running, check its status via RPC:
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1}'
Example Response:
{
"jsonrpc": "2.0",
"result": {
"chain": "regtest",
"blocks": 100,
"headers": 100,
"bestblockhash": "0f9188f13cb7b2c71f2a335e3a4fc328bf5beb436012afca590b1a11466e2206",
"difficulty": 4.656542373906925e-10,
"mediantime": 1231006505,
"verificationprogress": 1.0,
"chainwork": "0000000000000000000000000000000000000000000000000000000000000064",
"pruned": false,
"initialblockdownload": false
},
"id": 1
}
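For quick checks, individual fields can be pulled out of a response without extra tooling. A dependency-free sketch using a trimmed copy of the sample response above (for real use, prefer a proper JSON parser such as jq):

```shell
#!/bin/sh
# Extract the "blocks" field from a getblockchaininfo response.
# grep/sed keep this dependency-free; jq is the better tool for real scripts.
response='{"jsonrpc": "2.0", "result": {"chain": "regtest", "blocks": 100, "headers": 100}, "id": 1}'
blocks=$(printf '%s' "$response" | grep -o '"blocks": *[0-9]*' | sed 's/[^0-9]*//g')
echo "$blocks"   # prints 100
```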
Getting Peer Information
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getpeerinfo", "params": [], "id": 2}'
Getting Mempool Information
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getmempoolinfo", "params": [], "id": 3}'
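The curl calls above differ only in the method name and id, so a tiny helper can build the request body. This is a convenience sketch, not part of the blvm tooling; params must be passed pre-encoded as a JSON array:

```shell
#!/bin/sh
# Build a JSON-RPC 2.0 request body like the ones used in the curl examples.
# params defaults to an empty array, id defaults to 1.
rpc_body() {
  method="$1"; params="${2:-[]}"; id="${3:-1}"
  printf '{"jsonrpc": "2.0", "method": "%s", "params": %s, "id": %s}' \
    "$method" "$params" "$id"
}

# Use with curl exactly like the examples above:
#   curl -X POST http://localhost:8332 \
#     -H "Content-Type: application/json" \
#     -d "$(rpc_body getmempoolinfo)"
rpc_body getblockchaininfo
```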
Verifying Installation
blvm --version # Verify installation
blvm --help # View available options
The node connects to the P2P network, syncs blockchain state, accepts RPC commands on port 8332 (mainnet default) or 18332 (testnet/regtest), and can mine blocks if configured.
Using the SDK
Generate a Governance Keypair
blvm-keygen --output my-key.key
Sign a Message
blvm-sign release \
--version v1.0.0 \
--commit abc123 \
--key my-key.key \
--output signature.txt
Verify Signatures
blvm-verify release \
--version v1.0.0 \
--commit abc123 \
--signatures sig1.txt,sig2.txt,sig3.txt \
--threshold 3-of-5 \
--pubkeys keys.json
Next Steps
- First Node Setup - Detailed configuration guide
- Node Configuration - Full configuration options
- RPC API Reference - Interact with your node
See Also
- Installation - Installing BLVM
- First Node Setup - Complete setup guide
- Node Configuration - Configuration options
- Node Operations - Managing your node
- RPC API Reference - Complete RPC API
- Troubleshooting - Common issues
First Node Setup
Complete guide for setting up and configuring your first BLVM node.
Step-by-Step Setup
Step 1: Create Configuration Directory
mkdir -p ~/.config/blvm
cd ~/.config/blvm
Step 2: Create Basic Configuration
Create a configuration file blvm.toml:
[network]
protocol = "regtest" # Start with regtest for safe testing
port = 18444 # Regtest default port
[storage]
data_dir = "~/.local/share/blvm"
backend = "auto" # Auto-select best available backend
[rpc]
enabled = true
port = 18332 # RPC port (regtest default, configurable)
host = "127.0.0.1" # Only listen on localhost
[logging]
level = "info" # info, debug, trace, warn, error
Step 3: Start the Node
blvm --config ~/.config/blvm/blvm.toml
Expected Output:
[INFO] Starting BLVM node
[INFO] Network: regtest
[INFO] Data directory: ~/.local/share/blvm
[INFO] RPC server listening on 127.0.0.1:18332
[INFO] Connecting to network...
[INFO] Connected to 0 peers
Step 4: Verify Node is Running
In another terminal, check the node status:
curl -X POST http://localhost:18332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1}'
Expected Response:
{
"jsonrpc": "2.0",
"result": {
"chain": "regtest",
"blocks": 0,
"headers": 0,
"bestblockhash": "...",
"difficulty": 4.656542373906925e-10,
"mediantime": 1231006505,
"verificationprogress": 1.0,
"chainwork": "0000000000000000000000000000000000000000000000000000000000000001",
"pruned": false,
"initialblockdownload": false
},
"id": 1
}
Configuration Examples
Development Node (Regtest)
[network]
protocol = "regtest"
port = 18444
[storage]
data_dir = "~/.local/share/blvm"
backend = "auto"
[rpc]
enabled = true
port = 18332 # Regtest RPC port (configurable)
host = "127.0.0.1"
[rbf]
mode = "standard" # Standard RBF for development
[mempool]
max_mempool_mb = 100
min_relay_fee_rate = 1
Testnet Node
[network]
protocol = "testnet"
port = 18333
[storage]
data_dir = "~/.local/share/blvm-testnet"
backend = "redb"
[rpc]
enabled = true
port = 18332 # Testnet RPC port (configurable)
host = "127.0.0.1"
[rbf]
mode = "standard"
[mempool]
max_mempool_mb = 300
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
Production Mainnet Node
[network]
protocol = "mainnet"
port = 8333
[storage]
data_dir = "/var/lib/blvm"
backend = "redb"
[storage.cache]
block_cache_mb = 200
utxo_cache_mb = 100
header_cache_mb = 20
[rpc]
enabled = true
port = 8332 # Mainnet RPC port (configurable)
host = "127.0.0.1"
# Enable authentication for production
# auth_required = true
[rbf]
mode = "standard"
[mempool]
max_mempool_mb = 300
max_mempool_txs = 100000
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 25
max_descendant_count = 25
See Node Configuration for complete configuration options.
Storage
The node stores blockchain data (blocks, UTXO set, chain state, and indexes) in the configured data directory. See Storage Backends for configuration options.
Network Connection
The node automatically discovers peers, connects to the network, syncs blockchain state, and relays transactions and blocks.
RPC Interface
Once running, interact with the node via JSON-RPC:
# Get blockchain info (mainnet uses port 8332, testnet/regtest use 18332)
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1}'
See RPC API Reference for complete API documentation.
See Also
- Node Configuration - Complete configuration options
- Node Operations - Running and managing your node
- RPC API Reference - Complete API documentation
- Troubleshooting - Common issues and solutions
Security Considerations
⚠️ Important: This implementation is designed for pre-production testing and development. Additional hardening is required for production mainnet use. Use regtest or testnet for development, never expose RPC to untrusted networks, configure RPC authentication, and keep software updated.
Troubleshooting
See Troubleshooting for common issues and solutions.
RBF Configuration Example
Complete example of configuring RBF (Replace-By-Fee) for different use cases.
Exchange Node Configuration
For exchanges that need to protect users from unexpected transaction replacements:
[rbf]
mode = "conservative"
min_fee_rate_multiplier = 2.0
min_fee_bump_satoshis = 5000
min_confirmations = 1
max_replacements_per_tx = 3
cooldown_seconds = 300
[mempool]
max_mempool_mb = 500
max_mempool_txs = 200000
min_relay_fee_rate = 2
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 25
max_descendant_count = 25
persist_mempool = true
Why This Configuration:
- Conservative RBF: Requires 2x fee increase, preventing low-fee replacements
- 1 Confirmation Required: Additional safety check before allowing replacement
- Higher Fee Threshold: 2 sat/vB minimum relay fee filters low-priority transactions
- Mempool Persistence: Survives restarts for better reliability
Mining Pool Configuration
For mining pools that want to maximize fee revenue:
[rbf]
mode = "aggressive"
min_fee_rate_multiplier = 1.05
min_fee_bump_satoshis = 500
allow_package_replacements = true
max_replacements_per_tx = 10
cooldown_seconds = 60
[mempool]
max_mempool_mb = 1000
max_mempool_txs = 500000
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 50
max_descendant_count = 50
Why This Configuration:
- Aggressive RBF: Only 5% fee increase required, maximizing fee opportunities
- Package Replacements: Allows parent+child transaction replacements
- Larger Mempool: 1GB capacity for more transaction opportunities
- Relaxed Ancestor Limits: 50 transactions for larger packages
Standard Node Configuration
For general-purpose nodes with Bitcoin Core compatibility:
[rbf]
mode = "standard"
min_fee_rate_multiplier = 1.1
min_fee_bump_satoshis = 1000
[mempool]
max_mempool_mb = 300
max_mempool_txs = 100000
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 25
max_descendant_count = 25
mempool_expiry_hours = 336
Why This Configuration:
- Standard RBF: BIP125-compliant with 10% fee increase
- Bitcoin Core Defaults: Matches Bitcoin Core mempool settings
- Balanced: Good for most use cases
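The standard-mode thresholds can be checked by hand. The following sketch reproduces only the arithmetic implied by the two settings above (min_fee_rate_multiplier = 1.1, min_fee_bump_satoshis = 1000); it is illustrative, not the node's actual policy code:

```shell
#!/bin/sh
# Standard-mode replacement check: the new transaction must raise the fee
# rate by at least 10% AND add at least 1000 sats of absolute fee.
# Illustrative only; not the node's policy implementation.
replacement_ok() {
  # args: old_fee_sats old_vsize new_fee_sats new_vsize
  awk -v of="$1" -v ov="$2" -v nf="$3" -v nv="$4" 'BEGIN {
    rate_ok = (nf / nv) >= 1.1 * (of / ov)   # min_fee_rate_multiplier
    bump_ok = (nf - of) >= 1000              # min_fee_bump_satoshis
    exit !(rate_ok && bump_ok)
  }'
}

replacement_ok 1000 200 2100 200 && echo accepted || echo rejected  # 5 -> 10.5 sat/vB, +1100 sats
replacement_ok 1000 200 1050 200 && echo accepted || echo rejected  # +50 sats: fails both checks
```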
Testing RBF Configuration
Test Transaction Replacement
- Create initial transaction:
# Send transaction with RBF signaling (sequence < 0xffffffff)
bitcoin-cli sendtoaddress <address> 0.001 "" "" true
- Replace with higher fee:
# Create replacement with higher fee
bitcoin-cli bumpfee <txid> --fee_rate 20
- Verify replacement:
# Check mempool
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getmempoolentry", "params": ["<new_txid>"], "id": 1}'
Monitor RBF Activity
# Get mempool info
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getmempoolinfo", "params": [], "id": 1}'
Expected Response:
{
"jsonrpc": "2.0",
"result": {
"size": 1234,
"bytes": 567890,
"usage": 1234567,
"maxmempool": 314572800,
"mempoolminfee": 0.00001,
"minrelaytxfee": 0.00001
},
"id": 1
}
See Also
- RBF and Mempool Policies - Complete configuration guide
- Node Configuration - All configuration options
- Node Operations - Managing your node
System Overview
Bitcoin Commons is a Bitcoin implementation ecosystem with six tiers building on the Orange Paper mathematical specifications. The system implements consensus rules directly from the spec, provides protocol abstraction, delivers a minimal reference implementation, and includes a developer SDK.
6-Tier Component Architecture
Figure: 6-tier component architecture: Orange Paper (Mathematical Foundation) → blvm-consensus (Pure Math Implementation) → blvm-protocol (Protocol Abstraction) → blvm-node (Full Node Implementation) → blvm-sdk (Developer Toolkit) → blvm-commons (Governance Enforcement).
BLVM Stack Architecture
Figure: BLVM architecture showing blvm-spec (Orange Paper) as the foundation, blvm-consensus as the core implementation with verification paths (Z3 proofs via BLVM Specification Lock, spec drift detection, hash verification), and dependent components (blvm-protocol, blvm-node, blvm-sdk) building on the verified consensus layer.
Tiered Architecture
Figure: Tiered architecture: Tier 1 = Orange Paper + Consensus Proof (mathematical foundation); Tier 2 = Protocol Engine (protocol abstraction); Tier 3 = Reference Node (complete implementation); Tier 4 = Developer SDK + Governance (developer toolkit + governance enforcement).
Component Overview
Tier 1: Orange Paper (Mathematical Foundation)
- Mathematical specifications for Bitcoin consensus rules
- Source of truth for all implementations
- Timeless, immutable consensus rules
Tier 2: blvm-consensus (Pure Math Implementation)
- Direct implementation of Orange Paper functions
- Formal proofs verify mathematical correctness
- Side-effect-free, deterministic functions
- Consensus-critical dependencies pinned to exact versions
Code: README.md
Tier 3: blvm-protocol (Protocol Abstraction)
- Bitcoin protocol abstraction for multiple variants
- Supports mainnet, testnet, regtest
- Commons-specific protocol extensions (UTXO commitments, ban list sharing)
- BIP implementations (BIP152, BIP157, BIP158, BIP173/350/351)
Code: README.md
Tier 4: blvm-node (Node Implementation)
- Minimal, production-ready Bitcoin node
- Storage layer (database abstraction with multiple backends)
- Network manager (multi-transport: TCP, QUIC, Iroh)
- RPC server (JSON-RPC 2.0 with Bitcoin Core compatibility)
- Module system (process-isolated runtime modules)
- Payment processing with CTV (CheckTemplateVerify) support
- RBF and mempool policies (4 configurable modes)
- Advanced indexing (address and value range indexing)
- Mining coordination (Stratum V2, merge mining)
- P2P governance message relay
- Governance integration (webhooks, user signaling)
- ZeroMQ notifications (optional)
Code: README.md
Tier 5: blvm-sdk (Developer Toolkit)
- Governance primitives (key management, signatures, multisig)
- CLI tools (blvm-keygen, blvm-sign, blvm-verify)
- Composition framework (declarative node composition)
- Bitcoin-compatible signing standards
Code: README.md
Tier 6: blvm-commons (Governance Enforcement)
- GitHub App for governance enforcement
- Cryptographic signature verification
- Multisig threshold enforcement
- Audit trail management
- OpenTimestamps integration
Data Flow
- Orange Paper provides mathematical consensus specifications
- blvm-consensus directly implements mathematical functions
- blvm-protocol wraps blvm-consensus with protocol-specific parameters
- blvm-node uses blvm-protocol and blvm-consensus for validation
- blvm-sdk provides governance primitives
- blvm-commons uses blvm-sdk for cryptographic operations
Cross-Layer Validation
- Dependencies between layers are strictly enforced
- Consensus rule modifications are prevented in application layers
- Equivalence proofs required between Orange Paper and blvm-consensus
- Version coordination ensures compatibility across layers
Key Features
Mathematical Rigor
- Direct implementation of Orange Paper specifications
- Formal verification with BLVM Specification Lock
- Property-based testing for mathematical invariants
- Formal proofs verify critical consensus functions
Protocol Abstraction
- Multiple Bitcoin variants (mainnet, testnet, regtest)
- Commons-specific protocol extensions
- BIP implementations (BIP152, BIP157, BIP158)
- Protocol evolution support
Production Ready
- Bitcoin node functionality
- Performance optimizations (PGO, parallel validation)
- Multiple storage backends with automatic fallback
- Multi-transport networking (TCP, QUIC, Iroh)
- Payment processing infrastructure
- REST API alongside JSON-RPC
Governance Infrastructure
- Cryptographic governance primitives
- Multisig threshold enforcement
- Transparent audit trails
- Forkable governance rules
See Also
- Component Relationships - Detailed component interactions
- Design Philosophy - Core design principles
- Module System - Module system architecture
- Node Overview - Node implementation details
- Consensus Overview - Consensus layer details
- Protocol Overview - Protocol layer details
Component Relationships
BLVM implements a 6-tier layered architecture where each tier builds upon the previous one.
Dependency Graph
Figure: Dependency graph. Consensus stack: blvm-consensus (no dependencies) → blvm-protocol (depends on consensus) → blvm-node (depends on protocol and consensus). Governance stack: blvm-sdk (no dependencies) → blvm-commons (depends on SDK).
Layer Descriptions
Tier 1: Orange Paper (blvm-spec)
- Purpose: Mathematical foundation - timeless consensus rules
- Type: Documentation and specification
- Governance: Layer 1 (Constitutional - 6-of-7 maintainers, 180 days, see Layer-Tier Model)
Tier 2: Consensus Layer (blvm-consensus)
- Purpose: Pure mathematical implementation of Orange Paper functions
- Type: Rust library (pure functions, no side effects)
- Dependencies: None (foundation layer)
- Governance: Layer 2 (Constitutional - 6-of-7 maintainers, 180 days, see Layer-Tier Model)
- Key Functions: CheckTransaction, ConnectBlock, EvalScript, VerifyScript
Tier 3: Protocol Layer (blvm-protocol)
- Purpose: Protocol abstraction layer for multiple Bitcoin variants
- Type: Rust library
- Dependencies: blvm-consensus (exact version)
- Governance: Layer 3 (Implementation - 4-of-5 maintainers, 90 days, see Layer-Tier Model)
- Supports: mainnet, testnet, regtest, and additional protocol variants
Tier 4: Node Implementation (blvm-node)
- Purpose: Minimal, production-ready Bitcoin implementation
- Type: Rust binaries (full node)
- Dependencies: blvm-protocol, blvm-consensus (exact versions)
- Governance: Layer 4 (Application - 3-of-5 maintainers, 60 days, see Layer-Tier Model)
- Components: Block validation, storage, P2P networking, RPC, mining
Tier 5: Developer SDK (blvm-sdk)
- Purpose: Developer toolkit and governance cryptographic primitives
- Type: Rust library and CLI tools
- Dependencies: Standalone (no consensus dependencies)
- Governance: Layer 5 (Extension - 2-of-3 maintainers, 14 days, see Layer-Tier Model)
- Components: Key generation, signing, verification, multisig operations
Tier 6: Governance Infrastructure (blvm-commons)
- Purpose: Cryptographic governance enforcement
- Type: Rust service (GitHub App)
- Dependencies: blvm-sdk
- Governance: Layer 5 (Extension - 2-of-3 maintainers, 14 days, see Layer-Tier Model)
- Components: GitHub integration, signature verification, status checks
Data Flow
Figure: End-to-end data flow through Reference Node, Consensus Proof, Protocol Engine, modules, and governance.
- Orange Paper provides mathematical consensus specifications
- blvm-consensus directly implements mathematical functions
- blvm-protocol wraps blvm-consensus with protocol-specific parameters
- blvm-node uses blvm-protocol and blvm-consensus for validation
- blvm-sdk provides governance primitives
- blvm-commons uses blvm-sdk for cryptographic operations
Cross-Layer Validation
- Dependencies between layers are strictly enforced
- Consensus rule modifications are prevented in application layers
- Equivalence proofs required between Orange Paper and blvm-consensus
- Version coordination ensures compatibility across layers
See Also
- System Overview - High-level architecture overview
- Design Philosophy - Core design principles
- Consensus Architecture - Consensus layer details
- Protocol Architecture - Protocol layer details
- Node Overview - Node implementation details
Design Philosophy
BLVM is built on core principles that guide all design decisions.
Core Principles
1. Mathematical Correctness First
- Direct implementation of Orange Paper specifications
- No interpretation or approximation
- Formal verification ensures correctness
- Pure functions with no side effects
2. Layered Architecture
- Clear separation of concerns
- Each layer builds on previous layers
- No circular dependencies
- Independent versioning where possible
3. Zero Consensus Re-implementation
- All consensus logic comes from blvm-consensus
- Application layers cannot modify consensus rules
- Protocol abstraction enables variants without consensus changes
- Clear security boundaries
4. Cryptographic Governance
- Apply Bitcoin’s cryptographic primitives to governance
- Make power visible, capture expensive, exit cheap
- Multi-signature requirements for all changes
- Transparent audit trails
5. User Sovereignty
- Users control what software they run
- No forced network upgrades
- Forkable governance model
Design Decisions
Why Pure Functions?
Pure functions are:
- Testable: Same input always produces same output
- Verifiable: Mathematical properties can be proven
- Composable: Can be combined without side effects
- Predictable: No hidden state or dependencies
Why Layered Architecture?
Layered architecture provides:
- Separation of Concerns: Each layer has a single responsibility
- Reusability: Lower layers can be used independently
- Testability: Each layer can be tested in isolation
- Evolution: Protocol can evolve without consensus changes
Why Formal Verification?
Formal verification ensures:
- Correctness: Mathematical proofs of correctness
- Security: Prevents consensus violations
- Confidence: High assurance in critical code
- Auditability: Immutable proof of verification
Why Cryptographic Governance?
Cryptographic governance provides:
- Transparency: All decisions are cryptographically verifiable
- Accountability: Clear audit trail of all actions
- Resistance to Capture: Multi-signature requirements make capture expensive
- User Protection: Forkable governance allows users to exit if they disagree
Trade-offs
Performance vs Correctness
- Choice: Correctness first
- Rationale: Consensus violations are catastrophic
- Mitigation: Optimize after verification
Flexibility vs Safety
- Choice: Safety first
- Rationale: Bitcoin consensus must be stable
- Mitigation: Protocol abstraction enables experimentation
Simplicity vs Features
- Choice: Simplicity where possible
- Rationale: Complex code is harder to verify
- Mitigation: Add features only when necessary
Design Evolution
BLVM is designed to support Bitcoin’s evolution for the next 500 years:
- Protocol Evolution: New variants without consensus changes
- Feature Addition: New capabilities through protocol abstraction
- Governance Evolution: Governance rules can evolve through proper process
- User Choice: Multiple implementations can coexist
See Also
- System Overview - High-level architecture overview
- Component Relationships - Layer dependencies and data flow
- Consensus Architecture - Mathematical correctness implementation
- Governance Overview - Cryptographic governance system
- Orange Paper - Mathematical foundation
Module System
Overview
The module system supports optional features (Lightning Network, merge mining, privacy enhancements) without affecting consensus or base node stability. Modules run in separate processes with IPC communication, providing security through isolation.
Available Modules
The following modules are available for blvm-node:
- Lightning Network Module - Lightning Network payment processing, invoice verification, payment routing, and channel management
- Commons Mesh Module - Payment-gated mesh networking with routing fees, traffic classification, and anti-monopoly protection
- Stratum V2 Module - Stratum V2 mining protocol support and mining pool management
- Datum Module - DATUM Gateway mining protocol
- Mining OS Module - MiningOS integration
- Merge Mining Module - Merge mining, available as a separate paid plugin (blvm-merge-mining)
For detailed documentation on each module, see the Modules section.
Architecture
Process Isolation
Each module runs in a separate process with isolated memory. The base node consensus state is protected and read-only to modules.
Figure: Module architecture: the base node process holds the consensus state (protected, read-only to modules) plus the Module Manager (orchestration), Network Manager, Storage Manager, and RPC Manager; each module process (blvm-lightning, blvm-mesh, blvm-stratum-v2) has isolated state in its own memory and a sandbox enforcing resource limits, and communicates with the Module Manager over IPC (Unix sockets).
Code: mod.rs
Core Components
ModuleManager
Orchestrates all modules, handling lifecycle, runtime loading/unloading/reloading, and coordination.
Features:
- Module discovery and loading
- Process spawning and monitoring
- IPC server management
- Event subscription management
- Dependency resolution
- Registry integration
Code: manager.rs
Process Isolation
Modules run in separate processes via ModuleProcessSpawner:
- Separate memory space
- Isolated execution environment
- Resource limits enforced
- Crash containment
Code: spawner.rs
IPC Communication
Modules communicate with the base node via Unix domain sockets (Unix) or named pipes (Windows):
- Request/response protocol
- Event subscription system
- Correlation IDs for async operations
- Type-safe message serialization
Code: protocol.rs
Security Sandbox
Modules run in sandboxed environments with:
- Resource limits (CPU, memory, file descriptors)
- Filesystem restrictions
- Network restrictions
- Permission-based API access
Code: network.rs
Permission System
Modules request capabilities that are validated before API access. Capabilities use snake_case in module.toml (e.g., read_blockchain) and map to Permission enum variants (e.g., ReadBlockchain).
Core Permissions:
- `read_blockchain` / `ReadBlockchain` - Read-only blockchain access (blocks, headers, transactions)
- `read_utxo` / `ReadUTXO` - Query UTXO set (read-only)
- `read_chain_state` / `ReadChainState` - Query chain state (height, tip)
- `subscribe_events` / `SubscribeEvents` - Subscribe to node events
- `send_transactions` / `SendTransactions` - Submit transactions to mempool (future: may be restricted)
Mempool & Network Permissions:
- `read_mempool` / `ReadMempool` - Read mempool data (transactions, size, fee estimates)
- `read_network` / `ReadNetwork` - Read network data (peers, stats)
- `network_access` / `NetworkAccess` - Send network packets (mesh packets, etc.)
Lightning & Payment Permissions:
- `read_lightning` / `ReadLightning` - Read Lightning network data
- `read_payment` / `ReadPayment` - Read payment data
Storage Permissions:
- `read_storage` / `ReadStorage` - Read from module storage
- `write_storage` / `WriteStorage` - Write to module storage
- `manage_storage` / `ManageStorage` - Manage storage (create/delete trees, manage quotas)
Filesystem Permissions:
- `read_filesystem` / `ReadFilesystem` - Read files from module data directory
- `write_filesystem` / `WriteFilesystem` - Write files to module data directory
- `manage_filesystem` / `ManageFilesystem` - Manage filesystem (create/delete directories, manage quotas)
RPC & Timers Permissions:
- `register_rpc_endpoint` / `RegisterRpcEndpoint` - Register RPC endpoints
- `manage_timers` / `ManageTimers` - Manage timers and scheduled tasks
Metrics Permissions:
- `report_metrics` / `ReportMetrics` - Report metrics
- `read_metrics` / `ReadMetrics` - Read metrics
Module Communication Permissions:
- `discover_modules` / `DiscoverModules` - Discover other modules
- `publish_events` / `PublishEvents` - Publish events to other modules
- `call_module` / `CallModule` - Call other modules’ APIs
- `register_module_api` / `RegisterModuleApi` - Register module API for other modules to call
Code: permissions.rs
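The snake_case-to-variant mapping and the capability check can be sketched as follows. This is an illustrative sketch, not the actual `permissions.rs` implementation: the variant subset, `parse_capability`, and `check_permission` names are assumptions for demonstration.

```rust
use std::collections::HashSet;

// Illustrative subset of the Permission enum described above.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Permission {
    ReadBlockchain,
    ReadUTXO,
    SubscribeEvents,
    SendTransactions,
}

// Map a snake_case capability string from module.toml to a Permission variant.
fn parse_capability(s: &str) -> Option<Permission> {
    match s {
        "read_blockchain" => Some(Permission::ReadBlockchain),
        "read_utxo" => Some(Permission::ReadUTXO),
        "subscribe_events" => Some(Permission::SubscribeEvents),
        "send_transactions" => Some(Permission::SendTransactions),
        _ => None, // unknown capabilities are rejected at manifest validation
    }
}

// Validate an API call against the capabilities the module was granted.
fn check_permission(granted: &HashSet<Permission>, needed: Permission) -> Result<(), String> {
    if granted.contains(&needed) {
        Ok(())
    } else {
        Err(format!("permission denied: {:?}", needed))
    }
}

fn main() {
    // A module that declared only read_blockchain and subscribe_events:
    let granted: HashSet<Permission> = ["read_blockchain", "subscribe_events"]
        .iter()
        .filter_map(|s| parse_capability(s))
        .collect();
    assert!(check_permission(&granted, Permission::ReadBlockchain).is_ok());
    assert!(check_permission(&granted, Permission::SendTransactions).is_err());
}
```

The key property is that every API request is checked against the granted set before dispatch, so an undeclared capability fails closed.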
Module Lifecycle
```
Discovery → Verification → Loading → Execution → Monitoring
    │            │            │           │           │
    ▼            ▼            ▼           ▼           ▼
Registry       Signer       Loader     Process     Monitor
```
Discovery
Modules discovered through:
- Local filesystem (`modules/` directory)
- Module registry (REST API)
- Manual installation
Code: discovery.rs
Verification
Each module verified through:
- Hash verification (binary integrity)
- Signature verification (multisig maintainer signatures)
- Permission checking (capability validation)
- Compatibility checking (version requirements)
Code: manifest_validator.rs
Loading
Module loaded into isolated process:
- Sandbox creation (resource limits)
- IPC connection establishment
- API subscription setup
Code: manager.rs
Execution
Module runs in isolated process:
- Separate memory space
- Resource limits enforced
- IPC communication only
- Event subscription active
Monitoring
Module health monitored:
- Process status tracking
- Resource usage monitoring
- Error tracking
- Crash isolation
Code: monitor.rs
Security Model
Consensus Isolation
Modules cannot:
- Modify consensus rules
- Modify UTXO set
- Access node private keys
- Bypass security boundaries
- Affect other modules
Guarantee: Module failures are isolated and cannot affect consensus.
Crash Containment
Module crashes are isolated and do not affect the base node. The ModuleProcessMonitor detects crashes and automatically removes failed modules.
Code: manager.rs
Security Flow
```
Module Binary
      │
      ├─→ Hash Verification ──→ Integrity Check
      │
      ├─→ Signature Verification ──→ Multisig Check ──→ Maintainer Verification
      │
      ├─→ Permission Check ──→ Capability Validation
      │
      └─→ Sandbox Creation ──→ Resource Limits ──→ Isolation
```
Module Manifest
Module manifests use TOML format:
```toml
# Module Identity
name = "lightning-network"
version = "1.2.3"
description = "Lightning Network implementation"
author = "Alice <alice@example.com>"

# Governance
[governance]
tier = "application"
maintainers = ["alice", "bob", "charlie"]
threshold = "2-of-3"
review_period_days = 14

# Signatures
[signatures]
maintainers = [
    { name = "alice", key = "02abc...", signature = "..." },
    { name = "bob", key = "03def...", signature = "..." }
]
threshold = "2-of-3"

# Binary
[binary]
hash = "sha256:abc123..."
size = 1234567
download_url = "https://registry.bitcoincommons.org/modules/lightning-network/1.2.3"

# Dependencies
[dependencies]
"blvm-node" = ">=1.0.0"
"another-module" = ">=0.5.0"

# Compatibility
[compatibility]
min_consensus_version = "1.0.0"
min_protocol_version = "1.0.0"
min_node_version = "1.0.0"
tested_with = ["1.0.0", "1.1.0"]

# Capabilities
capabilities = [
    "read_blockchain",
    "subscribe_events"
]
```
Code: manifest.rs
API Hub
The ModuleApiHub routes API requests from modules to the appropriate handlers:
- Blockchain API (blocks, headers, transactions)
- Governance API (proposals, votes)
- Communication API (P2P messaging)
Code: hub.rs
Event System
The module event system provides a comprehensive, consistent, and reliable way for modules to receive notifications about node state changes, blockchain events, and system lifecycle events.
Event Subscription
Modules subscribe to events they need during initialization:
```rust
let event_types = vec![
    EventType::NewBlock,
    EventType::NewTransaction,
    EventType::ModuleLoaded,
    EventType::ConfigLoaded,
];
client.subscribe_events(event_types).await?;
```
Event Categories
Core Blockchain Events:
- `NewBlock` - Block connected to chain
- `NewTransaction` - Transaction in mempool
- `BlockDisconnected` - Block disconnected (reorg)
- `ChainReorg` - Chain reorganization
Payment Events:
- `PaymentRequestCreated` - Payment request created
- `PaymentSettled` - Payment settled (confirmed on-chain)
- `PaymentFailed` - Payment failed
- `PaymentVerified` - Lightning payment verified
- `PaymentRouteFound` - Payment route discovered
- `PaymentRouteFailed` - Payment routing failed
- `ChannelOpened` - Lightning channel opened
- `ChannelClosed` - Lightning channel closed
Mining Events:
- `BlockMined` - Block mined successfully
- `BlockTemplateUpdated` - Block template updated
- `MiningDifficultyChanged` - Mining difficulty changed
- `MiningJobCreated` - Mining job created
- `ShareSubmitted` - Mining share submitted
- `MergeMiningReward` - Merge mining reward received
- `MiningPoolConnected` - Mining pool connected
- `MiningPoolDisconnected` - Mining pool disconnected
Mesh Networking Events:
- `MeshPacketReceived` - Mesh packet received from network
Stratum V2 Events:
- `StratumV2MessageReceived` - Stratum V2 message received from network
Module Lifecycle Events:
- `ModuleLoaded` - Module loaded (published after subscription)
- `ModuleUnloaded` - Module unloaded
- `ModuleCrashed` - Module crashed
- `ModuleDiscovered` - Module discovered
- `ModuleInstalled` - Module installed
- `ModuleUpdated` - Module updated
- `ModuleRemoved` - Module removed
Configuration Events:
- `ConfigLoaded` - Node configuration loaded/changed
Node Lifecycle Events:
- `NodeStartupCompleted` - Node fully operational
- `NodeShutdown` - Node shutting down
- `NodeShutdownCompleted` - Shutdown complete
Maintenance Events:
- `DataMaintenance` - Unified cleanup/flush event (replaces `StorageFlush` + `DataCleanup`)
- `MaintenanceStarted` - Maintenance started
- `MaintenanceCompleted` - Maintenance completed
- `HealthCheck` - Health check performed
Resource Management Events:
- `DiskSpaceLow` - Disk space low
- `ResourceLimitWarning` - Resource limit warning
Governance Events:
- `GovernanceProposalCreated` - Proposal created
- `GovernanceProposalVoted` - Vote cast
- `GovernanceProposalMerged` - Proposal merged
- `EconomicNodeRegistered` - Economic node registered
- `EconomicNodeStatus` - Economic node status query/response
- `EconomicNodeForkDecision` - Economic node fork decision
- `EconomicNodeVeto` - Economic node veto signal
- `VetoThresholdReached` - Veto threshold reached
- `GovernanceForkDetected` - Governance fork detected
- `WebhookSent` - Webhook sent
- `WebhookFailed` - Webhook delivery failed
Network Events:
- `PeerConnected` - Peer connected
- `PeerDisconnected` - Peer disconnected
- `PeerBanned` - Peer banned
- `PeerUnbanned` - Peer unbanned
- `MessageReceived` - Network message received
- `MessageSent` - Network message sent
- `BroadcastStarted` - Broadcast started
- `BroadcastCompleted` - Broadcast completed
- `RouteDiscovered` - Route discovered
- `RouteFailed` - Route failed
- `ConnectionAttempt` - Connection attempt (success/failure)
- `AddressDiscovered` - New peer address discovered
- `AddressExpired` - Peer address expired
- `NetworkPartition` - Network partition detected
- `NetworkReconnected` - Network partition reconnected
- `DoSAttackDetected` - DoS attack detected
- `RateLimitExceeded` - Rate limit exceeded
Consensus Events:
- `BlockValidationStarted` - Block validation started
- `BlockValidationCompleted` - Block validation completed (success/failure)
- `ScriptVerificationStarted` - Script verification started
- `ScriptVerificationCompleted` - Script verification completed
- `UTXOValidationStarted` - UTXO validation started
- `UTXOValidationCompleted` - UTXO validation completed
- `DifficultyAdjusted` - Network difficulty adjusted
- `SoftForkActivated` - Soft fork activated (SegWit, Taproot, CTV, etc.)
- `SoftForkLockedIn` - Soft fork locked in (BIP9)
- `ConsensusRuleViolation` - Consensus rule violation detected
Sync Events:
- `HeadersSyncStarted` - Headers sync started
- `HeadersSyncProgress` - Headers sync progress update
- `HeadersSyncCompleted` - Headers sync completed
- `BlockSyncStarted` - Block sync started (IBD)
- `BlockSyncProgress` - Block sync progress update
- `BlockSyncCompleted` - Block sync completed
Mempool Events:
- `MempoolTransactionAdded` - Transaction added to mempool
- `MempoolTransactionRemoved` - Transaction removed from mempool
- `FeeRateChanged` - Fee rate changed
Additional Event Categories:
- Dandelion++ Events (DandelionStemStarted, DandelionStemAdvanced, DandelionFluffed, etc.)
- Compact Blocks Events (CompactBlockReceived, BlockReconstructionStarted, etc.)
- FIBRE Events (FibreBlockEncoded, FibreBlockSent, FibrePeerRegistered)
- Package Relay Events (PackageReceived, PackageRejected)
- UTXO Commitments Events (UtxoCommitmentReceived, UtxoCommitmentVerified)
- Ban List Sharing Events (BanListShared, BanListReceived)
For a complete list of all event types, see EventType enum.
Event Delivery Guarantees
At-Most-Once Delivery:
- Events are delivered at most once per subscriber
- If channel is full, event is dropped (not retried)
- If channel is closed, module is removed from subscriptions
Best-Effort Delivery:
- Events are delivered on a best-effort basis
- No guaranteed delivery (modules can be slow/dead)
- Statistics track delivery success/failure rates
Ordering Guarantees:
- Events are delivered in order per module (single channel)
- No cross-module ordering guarantees
- ModuleLoaded events are ordered: subscription → ModuleLoaded
Event Timing and Consistency
ModuleLoaded Event Timing:
- `ModuleLoaded` events are only published AFTER a module has subscribed (after startup is complete)
- This ensures modules are fully ready before receiving `ModuleLoaded` events
- Hotloaded modules automatically receive all already-loaded modules when subscribing
Event Flow:
- Module process is spawned
- Module connects via IPC and sends Handshake
- Module sends `SubscribeEvents` request
- At subscription time:
  - Module receives `ModuleLoaded` events for all already-loaded modules (hotloaded modules get existing modules)
  - `ModuleLoaded` is published for the newly subscribing module (if it’s loaded)
- Module is now fully operational
Event Delivery Reliability
Channel Buffering:
- 100-event buffer per module (prevents unbounded memory growth)
- Non-blocking delivery (publisher never blocks)
- Channel full events are tracked in statistics
Error Handling:
- Channel Full: Event dropped with warning, module subscription NOT removed (module is slow, not dead)
- Channel Closed: Module subscription removed, statistics track failed delivery
- Serialization Errors: Event dropped with warning, module subscription NOT removed
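The channel-full and channel-closed behavior can be illustrated with a bounded standard-library channel. This is only a sketch of the `try_send`/drop semantics (the node's event manager uses an async channel with a 100-event buffer; `try_deliver` is an illustrative name, not the real API):

```rust
use std::sync::mpsc::{sync_channel, SyncSender, TrySendError};

// Deliver an event without blocking the publisher; returns true if it was enqueued.
fn try_deliver<T>(tx: &SyncSender<T>, event: T) -> bool {
    match tx.try_send(event) {
        Ok(()) => true,
        Err(TrySendError::Full(_)) => {
            // Channel full: drop the event, keep the subscription (module is slow, not dead).
            false
        }
        Err(TrySendError::Disconnected(_)) => {
            // Channel closed: the module is dead; the caller removes the subscription.
            false
        }
    }
}

fn main() {
    // Buffer of 2 events for this sketch (the node uses 100 per module).
    let (tx, rx) = sync_channel(2);
    assert!(try_deliver(&tx, "NewBlock"));
    assert!(try_deliver(&tx, "NewTransaction"));
    // Third event exceeds the buffer and is dropped, not retried.
    assert!(!try_deliver(&tx, "FeeRateChanged"));
    // Buffered events are still delivered in order.
    assert_eq!(rx.recv().unwrap(), "NewBlock");
}
```

The design choice is to protect the publisher: a slow subscriber loses events rather than stalling event delivery to every other module.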
Delivery Statistics:
- Track success/failure/channel-full counts per module
- Available via `EventManager::get_delivery_stats()`
- Useful for monitoring and debugging
Code: events.rs
For detailed event system documentation, see:
- Event System Integration - Complete integration guide
- Event Consistency - Event timing and consistency guarantees
- Janitorial Events - Maintenance and lifecycle events
Module Registry
Modules can be discovered and installed from a module registry:
- REST API client for module discovery
- Binary download and verification
- Dependency resolution
- Signature verification
Code: client.rs
Usage
Loading a Module
```rust
use blvm_node::module::{ModuleManager, ModuleMetadata};

let mut manager = ModuleManager::new(
    modules_dir,
    data_dir,
    socket_dir,
);
manager.start(socket_path, node_api).await?;
manager.load_module(
    "lightning-network",
    binary_path,
    metadata,
    config,
).await?;
```
Auto-Discovery
```rust
// Automatically discover and load all modules
manager.auto_load_modules().await?;
```
Code: manager.rs
Benefits
- Consensus Isolation: Modules cannot affect consensus rules
- Crash Containment: Module failures don’t affect base node
- Security: Process isolation and permission system
- Extensibility: Add features without consensus changes
- Flexibility: Load/unload modules at runtime
- Governance: Modules subject to governance approval
Use Cases
- Lightning Network: Payment channel management
- Merge Mining: Auxiliary chain support
- Privacy Enhancements: Transaction mixing, coinjoin
- Alternative Mempool Policies: Custom transaction selection
- Smart Contracts: Layer 2 contract execution
Components
The module system includes:
- Process isolation
- IPC communication
- Security sandboxing
- Permission system
- Module registry
- Event system
- API hub
Location: blvm-node/src/module/
IPC Communication
Modules communicate with the node via the Module IPC Protocol:
- Protocol: Length-delimited binary messages over Unix domain sockets
- Message Types: Requests, Responses, Events, Logs
- Security: Process isolation, permission-based API access, resource sandboxing
- Performance: Persistent connections, concurrent requests, correlation IDs
Integration Approaches
There are two approaches for modules to integrate with the node:
1. ModuleIntegration (Recommended for New Modules)
The ModuleIntegration API provides a simplified, unified interface for module integration:
```rust
use blvm_node::module::integration::ModuleIntegration;

// Connect to node (socket_path must be PathBuf)
let socket_path = std::path::PathBuf::from(socket_path);
let mut integration = ModuleIntegration::connect(
    socket_path,
    module_id,
    module_name,
    version,
).await?;

// Subscribe to events
integration.subscribe_events(event_types).await?;

// Get NodeAPI
let node_api = integration.node_api();

// Get event receiver
let mut event_receiver = integration.event_receiver();
```
Benefits:
- Single unified API for all integration needs
- Automatic handshake and connection management
- Simplified event subscription
- Direct access to NodeAPI and event receiver
Used by: blvm-mesh and its submodules (blvm-onion, blvm-mining-pool, blvm-messaging, blvm-bridge)
2. ModuleClient + NodeApiIpc (Legacy Approach)
The traditional approach uses separate components:
```rust
use blvm_node::module::ipc::client::ModuleIpcClient;
use blvm_node::module::api::node_api::NodeApiIpc;

// Connect to IPC socket
let mut ipc_client = ModuleIpcClient::connect(&socket_path).await?;

// Perform handshake manually
let handshake_request = RequestMessage { /* ... */ };
let response = ipc_client.request(handshake_request).await?;

// Create NodeAPI wrapper
// NodeApiIpc requires Arc<Mutex<ModuleIpcClient>> and module_id
let ipc_client_arc = Arc::new(tokio::sync::Mutex::new(ipc_client));
let node_api = Arc::new(NodeApiIpc::new(ipc_client_arc, "my-module".to_string()));

// Create ModuleClient for event subscription
let mut client = ModuleClient::connect(/* ... */).await?;
client.subscribe_events(event_types).await?;
let mut event_receiver = client.event_receiver();
```
Benefits:
- More granular control over IPC communication
- Direct access to IPC client for custom requests
- Established, stable API
Used by: blvm-lightning, blvm-stratum-v2, blvm-datum, blvm-miningos
Migration: New modules should use ModuleIntegration. Existing modules can continue using ModuleClient + NodeApiIpc, but migration to ModuleIntegration is recommended for consistency and simplicity.
For detailed protocol documentation, see Module IPC Protocol.
See Also
- Module IPC Protocol - Complete IPC protocol documentation
- Modules Overview - Overview of all available modules
- Lightning Network Module - Lightning Network payment processing
- Commons Mesh Module - Payment-gated mesh networking
- Stratum V2 Module - Stratum V2 mining protocol
- Datum Module - DATUM Gateway mining protocol
- Mining OS Module - MiningOS integration
- Module Development - Guide for developing custom modules
Module IPC Protocol
Overview
The Module IPC (Inter-Process Communication) protocol enables secure communication between process-isolated modules and the base node. Modules run in separate processes and communicate via Unix domain sockets using a length-delimited binary message protocol.
Architecture
```
┌─────────────────────────────────────┐
│         blvm-node Process           │
│  ┌───────────────────────────────┐  │
│  │      Module IPC Server        │  │
│  │    (Unix Domain Socket)       │  │
│  └───────────────────────────────┘  │
└─────────────────────────────────────┘
              │ IPC Protocol
              │ (Unix Domain Socket)
              │
┌─────────────┴───────────────────────┐
│     Module Process (Isolated)       │
│  ┌───────────────────────────────┐  │
│  │      Module IPC Client        │  │
│  │    (Unix Domain Socket)       │  │
│  └───────────────────────────────┘  │
└─────────────────────────────────────┘
```
Protocol Format
Message Encoding
Messages use length-delimited binary encoding:
```
[4-byte length][message payload]
```
- Length: 4-byte little-endian integer (message size)
- Payload: Binary-encoded message (bincode serialization)
Code: mod.rs
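The framing described above can be sketched with standard-library I/O. This is an illustrative sketch of the length-prefix scheme only (`write_frame`/`read_frame` are assumed names, and the real implementation serializes the payload with bincode before framing):

```rust
use std::io::{Cursor, Read, Write};

// Write one frame: 4-byte little-endian length, then the payload bytes.
fn write_frame<W: Write>(w: &mut W, payload: &[u8]) -> std::io::Result<()> {
    let len = payload.len() as u32;
    w.write_all(&len.to_le_bytes())?;
    w.write_all(payload)
}

// Read one frame back, returning the payload.
fn read_frame<R: Read>(r: &mut R) -> std::io::Result<Vec<u8>> {
    let mut len_buf = [0u8; 4];
    r.read_exact(&mut len_buf)?;
    let len = u32::from_le_bytes(len_buf) as usize;
    let mut payload = vec![0u8; len];
    r.read_exact(&mut payload)?;
    Ok(payload)
}

fn main() {
    let mut buf = Vec::new();
    write_frame(&mut buf, b"hello").unwrap();
    // First 4 bytes are the little-endian length (5), then the payload.
    assert_eq!(&buf[..4], &5u32.to_le_bytes());
    let mut cursor = Cursor::new(buf);
    assert_eq!(read_frame(&mut cursor).unwrap(), b"hello");
}
```

Because the length travels first, the reader always knows exactly how many bytes belong to the next message, which makes stream parsing over a socket unambiguous.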
Message Types
The protocol supports four message types:
- Request: Module → Node (API calls)
- Response: Node → Module (API responses)
- Event: Node → Module (event notifications)
- Log: Module → Node (logging)
Code: protocol.rs
Message Structure
Request Message
```rust
pub struct RequestMessage {
    pub correlation_id: CorrelationId,
    pub request_type: MessageType,
    pub payload: RequestPayload,
}
```
Request Types:
- `GetBlock` - Get block by hash
- `GetBlockHeader` - Get block header by hash
- `GetTransaction` - Get transaction by hash
- `GetChainTip` - Get current chain tip
- `GetBlockHeight` - Get current block height
- `GetUTXO` - Get UTXO by outpoint
- `SubscribeEvents` - Subscribe to node events
- `GetMempoolTransactions` - Get mempool transaction hashes
- `GetNetworkStats` - Get network statistics
- `GetNetworkPeers` - Get connected peers
- `GetChainInfo` - Get chain information
Code: protocol.rs
Response Message
```rust
pub struct ResponseMessage {
    pub correlation_id: CorrelationId,
    pub payload: ResponsePayload,
}
```
Response Types:
- `Success` - Request succeeded with data
- `Error` - Request failed with error details
- `NotFound` - Resource not found
Code: protocol.rs
Event Message
```rust
pub struct EventMessage {
    pub event_type: EventType,
    pub payload: EventPayload,
}
```
Event Types (46+ event types):
- Network events: `PeerConnected`, `MessageReceived`, `PeerDisconnected`
- Payment events: `PaymentRequestCreated`, `PaymentVerified`, `PaymentSettled`
- Chain events: `NewBlock`, `ChainTipUpdated`, `BlockDisconnected`
- Mempool events: `MempoolTransactionAdded`, `FeeRateChanged`, `MempoolTransactionRemoved`
Code: protocol.rs
Log Message
```rust
pub struct LogMessage {
    pub level: LogLevel,
    pub message: String,
    pub module_id: String,
}
```
Log Levels: Error, Warn, Info, Debug, Trace
Code: protocol.rs
Communication Flow
Request-Response Pattern
- Module sends Request: Module sends request message with correlation ID
- Node processes Request: Node processes request and generates response
- Node sends Response: Node sends response with matching correlation ID
- Module receives Response: Module matches response to request using correlation ID
Code: server.rs
Event Subscription Pattern
- Module subscribes: Module sends `SubscribeEvents` request with event types
- Node confirms: Node sends subscription confirmation
- Node publishes Events: Node sends event messages as they occur
- Module receives Events: Module processes events asynchronously
Code: server.rs
Connection Management
Handshake
On connection, modules send a handshake message:
```rust
pub struct HandshakeMessage {
    pub module_id: String,
    pub capabilities: Vec<String>,
    pub version: String,
}
```
Code: server.rs
Connection Lifecycle
- Connect: Module connects to Unix domain socket
- Handshake: Module sends handshake, node validates
- Active: Connection active, ready for requests/events
- Disconnect: Connection closed (graceful or error)
Code: server.rs
Security
Process Isolation
- Modules run in separate processes with isolated memory
- No shared memory between node and modules
- Module crashes don’t affect the base node
Code: spawner.rs
Permission System
Modules request capabilities that are validated before API access:
- `ReadBlockchain` - Read-only blockchain access
- `ReadUTXO` - Query UTXO set (read-only)
- `ReadChainState` - Query chain state (height, tip)
- `SubscribeEvents` - Subscribe to node events
- `SendTransactions` - Submit transactions to mempool
Code: permissions.rs
Sandboxing
Modules run in sandboxed environments with:
- Resource limits (CPU, memory, file descriptors)
- Filesystem restrictions
- Network restrictions (modules cannot open network connections)
- Permission-based API access
Code: mod.rs
Error Handling
Error Types
```rust
pub enum ModuleError {
    ConnectionError(String),
    ProtocolError(String),
    PermissionDenied(String),
    ResourceExhausted(String),
    Timeout(String),
}
```
Code: traits.rs
Error Recovery
- Connection Errors: Automatic reconnection with exponential backoff
- Protocol Errors: Clear error messages, connection termination
- Permission Errors: Detailed error messages, request rejection
- Timeout Errors: Request timeout, connection remains active
Code: client.rs
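The exponential-backoff schedule for reconnection can be sketched as a pure function. The base delay, cap, and `backoff_delay` name are illustrative assumptions, not values taken from `client.rs`:

```rust
use std::time::Duration;

// Compute the reconnect delay for a given attempt: base * 2^attempt, capped.
fn backoff_delay(attempt: u32, base_ms: u64, cap_ms: u64) -> Duration {
    // Clamp the exponent so the shift cannot overflow for large attempt counts.
    let uncapped = base_ms.saturating_mul(1u64 << attempt.min(16));
    Duration::from_millis(uncapped.min(cap_ms))
}

fn main() {
    // First retry waits the base delay, later retries double each time.
    assert_eq!(backoff_delay(0, 100, 30_000), Duration::from_millis(100));
    assert_eq!(backoff_delay(3, 100, 30_000), Duration::from_millis(800));
    // The cap keeps long outages from producing absurd waits.
    assert_eq!(backoff_delay(12, 100, 30_000), Duration::from_millis(30_000));
}
```

A reconnect loop would sleep for `backoff_delay(attempt, ...)` after each failed connection attempt and reset `attempt` to zero once a connection succeeds.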
Performance
Message Serialization
- Format: bincode (binary encoding)
- Size: Compact binary representation
- Speed: Fast serialization/deserialization
Code: protocol.rs
Connection Pooling
- Persistent Connections: Connections remain open for multiple requests
- Concurrent Requests: Multiple requests can be in-flight simultaneously
- Correlation IDs: Match responses to requests asynchronously
Code: client.rs
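Matching out-of-order responses to in-flight requests reduces to a pending-request table keyed by correlation ID. This sketch uses strings where the real client stores async response channels; the `PendingRequests` type and its method names are illustrative:

```rust
use std::collections::HashMap;

// Pending-request table: correlation ID → where the response should go.
struct PendingRequests {
    next_id: u64,
    pending: HashMap<u64, &'static str>,
}

impl PendingRequests {
    fn new() -> Self {
        Self { next_id: 0, pending: HashMap::new() }
    }

    // Register an in-flight request and hand back its correlation ID.
    fn register(&mut self, request_name: &'static str) -> u64 {
        let id = self.next_id;
        self.next_id += 1;
        self.pending.insert(id, request_name);
        id
    }

    // Match an incoming response to its request by correlation ID.
    fn complete(&mut self, correlation_id: u64) -> Option<&'static str> {
        self.pending.remove(&correlation_id)
    }
}

fn main() {
    let mut table = PendingRequests::new();
    let a = table.register("GetBlock");
    let b = table.register("GetChainTip");
    // Responses can arrive out of order; the correlation ID disambiguates them.
    assert_eq!(table.complete(b), Some("GetChainTip"));
    assert_eq!(table.complete(a), Some("GetBlock"));
    assert_eq!(table.complete(a), None); // already completed
}
```

This is what allows multiple concurrent requests over one persistent connection: the socket carries interleaved frames, and the table reassociates each response with its caller.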
Implementation Details
IPC Server
The node-side IPC server:
- Listens on Unix domain socket
- Accepts module connections
- Routes requests to NodeAPI implementation
- Publishes events to subscribed modules
Code: server.rs
IPC Client
The module-side IPC client:
- Connects to Unix domain socket
- Sends requests and receives responses
- Subscribes to events
- Handles connection errors
Code: client.rs
See Also
- Module System - Module system architecture
- Module Development - Building modules
- Modules Overview - Available modules
Event System Integration
Overview
The module event system is designed to handle common integration pain points in distributed module architectures. This document covers all integration scenarios, reliability guarantees, and best practices.
Integration Pain Points Addressed
1. Event Delivery Reliability
Problem: Events can be lost if modules are slow or channels are full.
Solution:
- Channel Buffering: 100-event buffer per module (configurable)
- Non-Blocking Delivery: Uses `try_send` to avoid blocking the publisher
- Channel Full Handling: Events are dropped with warning (module is slow, not dead)
- Channel Closed Detection: Automatically removes dead modules from subscriptions
- Delivery Statistics: Track success/failure rates per module
Code:
```rust
// EventManager tracks delivery statistics
let stats = event_manager.get_delivery_stats("module_id").await;
// Returns: Option<(successful_deliveries, failed_deliveries, channel_full_count)>
```
2. Event Ordering and Timing
Problem: Events might arrive out of order or modules might miss events during startup.
Solution:
- ModuleLoaded Timing: Only published AFTER module subscribes (startup complete)
- Hotloaded Modules: Automatically receive all already-loaded modules when subscribing
- Consistent Ordering: Subscription → ModuleLoaded events (guaranteed order)
Flow:
- Module loads → Recorded in `loaded_modules`
- Module subscribes → Receives all already-loaded modules
- ModuleLoaded published → After subscription (startup complete)
3. Event Channel Backpressure
Problem: Fast publishers can overwhelm slow consumers.
Solution:
- Bounded Channels: 100-event buffer prevents unbounded memory growth
- Non-Blocking: Publisher never blocks, events dropped if channel full
- Statistics Tracking: Monitor channel full events to identify slow modules
- Automatic Cleanup: Dead modules automatically removed
Monitoring:
```rust
let stats = event_manager.get_delivery_stats("module_id").await;
if let Some((_, _, channel_full_count)) = stats {
    if channel_full_count > 100 {
        warn!("Module {} is slow, dropping events", module_id);
    }
}
```
4. Missing Events During Startup
Problem: Modules that start later miss events from earlier modules.
Solution:
- Hotloaded Module Support: Newly subscribing modules receive all already-loaded modules
- Event Replay: ModuleLoaded events sent to newly subscribing modules
- Consistent State: All modules have consistent view of loaded modules
5. Event Type Coverage
Problem: Not all events have corresponding payloads or are published.
Solution:
- Complete Coverage: All EventType variants have corresponding EventPayload variants
- Governance Events: All governance events are published
- Network Events: All network events are published
- Lifecycle Events: All lifecycle events are published
Event Categories
Core Blockchain Events
- `NewBlock`: Block connected to chain
- `NewTransaction`: Transaction in mempool
- `BlockDisconnected`: Block disconnected (reorg)
- `ChainReorg`: Chain reorganization
Governance Events
- `GovernanceProposalCreated`: Proposal created
- `GovernanceProposalVoted`: Vote cast
- `GovernanceProposalMerged`: Proposal merged
- `GovernanceForkDetected`: Fork detected
Network Events
- `PeerConnected`: Peer connected
- `PeerDisconnected`: Peer disconnected
- `PeerBanned`: Peer banned
- `MessageReceived`: Network message received
- `BroadcastStarted`: Broadcast started
- `BroadcastCompleted`: Broadcast completed
Module Lifecycle Events
- `ModuleLoaded`: Module loaded (after subscription)
- `ModuleUnloaded`: Module unloaded
- `ModuleCrashed`: Module crashed
- `ModuleHealthChanged`: Health status changed
Maintenance Events
- `DataMaintenance`: Unified cleanup/flush (replaces `StorageFlush` + `DataCleanup`)
- `MaintenanceStarted`: Maintenance started
- `MaintenanceCompleted`: Maintenance completed
- `HealthCheck`: Health check performed
Resource Management Events
- `DiskSpaceLow`: Disk space low
- `ResourceLimitWarning`: Resource limit warning
Event Delivery Guarantees
At-Most-Once Delivery
- Events are delivered at most once per subscriber
- If channel is full, event is dropped (not retried)
- If channel is closed, module is removed from subscriptions
Best-Effort Delivery
- Events are delivered on a best-effort basis
- No guaranteed delivery (modules can be slow/dead)
- Statistics track delivery success/failure rates
Ordering Guarantees
- Events are delivered in order per module (single channel)
- No cross-module ordering guarantees
- ModuleLoaded events are ordered: subscription → ModuleLoaded
Error Handling
Channel Full
- Event is dropped with warning
- Module subscription is NOT removed (module is slow, not dead)
- Statistics track channel full count
Channel Closed
- Module subscription is removed
- Statistics track failed delivery count
- Module is automatically cleaned up
Serialization Errors
- Event is dropped with warning
- Module subscription is NOT removed
- Error is logged for debugging
Monitoring and Debugging
Delivery Statistics
```rust
// Get statistics for a module
let stats = event_manager.get_delivery_stats("module_id").await;
// Returns: Option<(successful, failed, channel_full)>

// Get statistics for all modules
let all_stats = event_manager.get_all_delivery_stats().await;
// Returns: HashMap<module_id, (successful, failed, channel_full)>

// Reset statistics (for testing)
event_manager.reset_delivery_stats("module_id").await;
```
Event Subscribers
```rust
// Get list of subscribers for an event type
let subscribers = event_manager.get_subscribers(EventType::NewBlock).await;
// Returns: Vec<module_id>
```
Best Practices
For Module Developers
- Subscribe Early: Subscribe to events as soon as possible after handshake
- Handle Events Quickly: Keep event handlers fast and non-blocking
- Monitor Statistics: Check delivery statistics to ensure events are received
- Handle ModuleLoaded: Always handle ModuleLoaded to know about other modules
- Graceful Shutdown: Handle NodeShutdown and DataMaintenance (urgency: “high”)
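A module's event loop following these practices can be sketched as below. The `Event` enum here is a local stand-in for the real `EventType`/`EventPayload` types, and `handle_event` is an illustrative pattern, not a required API:

```rust
// Illustrative event enum; the real types live in blvm-node.
enum Event {
    NewBlock,
    DataMaintenance { urgency: &'static str },
    NodeShutdown,
}

// Process one event quickly and tell the caller whether to keep running.
fn handle_event(event: Event) -> bool {
    match event {
        Event::NewBlock => {
            // Keep handlers fast: hand heavy work off to a background task.
            true
        }
        Event::DataMaintenance { urgency } => {
            if urgency == "high" {
                // High urgency (e.g. shutdown flush): persist state promptly,
                // within the event's timeout if one was given.
            }
            true
        }
        Event::NodeShutdown => {
            // Stop the event loop so the module exits gracefully.
            false
        }
    }
}

fn main() {
    assert!(handle_event(Event::NewBlock));
    assert!(handle_event(Event::DataMaintenance { urgency: "high" }));
    // NodeShutdown is the only event that stops the loop.
    assert!(!handle_event(Event::NodeShutdown));
}
```

Keeping the handler non-blocking matters because of the delivery guarantees above: a handler that stalls fills the 100-event channel and causes subsequent events to be dropped.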
For Node Developers
- Publish Consistently: Publish events at consistent points in the code
- Use EventPublisher: Use EventPublisher for all event publishing
- Monitor Statistics: Monitor delivery statistics to identify slow modules
- Handle Errors: Log warnings for failed event deliveries
- Test Integration: Test event delivery in integration tests
Common Integration Scenarios
Scenario 1: Module Startup
- Module process spawned
- Module connects via IPC
- Module sends Handshake
- Module subscribes to events
- Module receives ModuleLoaded for all already-loaded modules
- ModuleLoaded published for this module (after subscription)
Scenario 2: Hotloaded Module
- Module B loads while Module A is already running
- Module B subscribes to events
- Module B receives ModuleLoaded for Module A
- ModuleLoaded published for Module B
- Module A receives ModuleLoaded for Module B
Scenario 3: Slow Module
- Module receives events slowly
- Event channel fills up (100 events)
- New events are dropped with warning
- Statistics track channel full count
- Module subscription is NOT removed (module is slow, not dead)
Scenario 4: Dead Module
- Module process crashes
- Event channel is closed
- Event delivery fails
- Module subscription is automatically removed
- Statistics track failed delivery count
Scenario 5: Governance Event Flow
- Network receives governance event
- Event published to governance module
- Governance module processes event
- Governance module may publish additional events
- All events delivered via same reliable channel
Configuration
Channel Buffer Size
Currently hardcoded to 100 events per module. Can be made configurable in the future.
Event Statistics
Statistics are kept in memory and reset on node restart. Can be persisted in the future.
Future Improvements
- Configurable Buffer Size: Make channel buffer size configurable per module
- Event Persistence: Persist events for replay after module restart
- Event Filtering: Allow modules to filter events by criteria
- Event Priority: Add priority queue for critical events
- Event Metrics: Add Prometheus metrics for event delivery
- Event Replay: Allow modules to replay missed events
See Also
- Module System - Module system architecture
- Event Consistency - Event timing and consistency guarantees
- Janitorial Events - Maintenance and lifecycle events
- Module IPC Protocol - IPC communication details
Module Event System Consistency
Overview
The module event system is designed to be consistent, minimal, and extensible. All events follow a clear pattern and timing to ensure modules can integrate seamlessly with the node.
Event Timing and Consistency
ModuleLoaded Event
Key Principle: ModuleLoaded events are only published AFTER a module has subscribed (after startup is complete).
Flow:
- Module process is spawned
- Module connects via IPC and sends Handshake
- Module sends SubscribeEvents request
- At subscription time:
  - Module receives ModuleLoaded events for all already-loaded modules (hotloaded modules get existing modules)
  - ModuleLoaded is published for the newly subscribing module (if it’s loaded)
- Module is now fully operational
Why this design?
- Ensures ModuleLoaded only happens after module is fully ready (subscribed)
- Hotloaded modules automatically receive all existing modules
- Consistent event ordering: subscription → ModuleLoaded
- No race conditions: modules can’t miss events
Example Flow
Startup (Module A loads first):
- Module A process spawned
- Module A connects and handshakes
- Module A subscribes to events
- ModuleLoaded published for Module A (no other modules yet)
Hotload (Module B loads later):
- Module B process spawned
- Module B connects and handshakes
- Module B subscribes to events
- Module B receives ModuleLoaded for Module A (already loaded)
- ModuleLoaded published for Module B (all modules get it)
Unified Events
DataMaintenance (Unified Cleanup/Flush)
Replaces: StorageFlush and DataCleanup (unified into one extensible event)
Purpose: Single event for all data maintenance operations
Payload:
- operation: “flush”, “cleanup”, or “both”
- urgency: “low”, “medium”, or “high”
- reason: “periodic”, “shutdown”, “low_disk”, “manual”
- target_age_days: Optional (for cleanup operations)
- timeout_seconds: Optional (for high urgency operations)
Usage Examples:
- Shutdown: DataMaintenance { operation: "flush", urgency: "high", reason: "shutdown", timeout_seconds: Some(5) }
- Periodic Cleanup: DataMaintenance { operation: "cleanup", urgency: "low", reason: "periodic", target_age_days: Some(30) }
- Low Disk: DataMaintenance { operation: "both", urgency: "high", reason: "low_disk", target_age_days: Some(7), timeout_seconds: Some(10) }
Benefits:
- Single event for all maintenance operations
- Extensible: easy to add new operation types or urgency levels
- Clear semantics: operation + urgency + reason
- Modules can handle all maintenance in one place
Event Categories
1. Node Lifecycle
- NodeStartupCompleted: Node is fully operational
- NodeShutdown: Node is shutting down (modules should clean up)
- NodeShutdownCompleted: Shutdown finished
2. Module Lifecycle
- ModuleLoaded: Module loaded and subscribed (after startup complete)
- ModuleUnloaded: Module unloaded
- ModuleReloaded: Module reloaded
- ModuleCrashed: Module crashed
3. Configuration
- ConfigLoaded: Node configuration loaded/changed
4. Maintenance
- DataMaintenance: Unified cleanup/flush event (replaces StorageFlush + DataCleanup)
- MaintenanceStarted: Maintenance operation started
- MaintenanceCompleted: Maintenance operation completed
- HealthCheck: Health check performed
5. Resource Management
- DiskSpaceLow: Disk space is low
- ResourceLimitWarning: Resource limit approaching
Best Practices
- Subscribe Early: Modules should subscribe to events as soon as possible after handshake
- Handle ModuleLoaded: Always handle ModuleLoaded to know about other modules
- DataMaintenance: Handle all maintenance operations in one place using DataMaintenance
- Graceful Shutdown: Always handle NodeShutdown and DataMaintenance (urgency: “high”)
- Non-Blocking: Keep event handlers fast and non-blocking
Consistency Guarantees
- ModuleLoaded Timing: Always happens after subscription (startup complete)
- Hotloaded Modules: Always receive all already-loaded modules
- Event Ordering: Consistent ordering (subscription → ModuleLoaded)
- No Race Conditions: Events are delivered reliably
- Unified Maintenance: Single event for all maintenance operations
Extensibility
The event system is designed to be easily extensible:
- Add New Events: Add to the EventType enum and EventPayload enum
- Add Event Publishers: Add methods to EventPublisher
- Add Event Handlers: Modules subscribe and handle events
- Unified Patterns: Follow existing patterns (e.g., DataMaintenance)
Migration from Old Events
Old: StorageFlush + DataCleanup
New: DataMaintenance with operation and urgency fields
Migration:
```rust
// Old
match event_type {
    EventType::StorageFlush => { flush_data().await?; }
    EventType::DataCleanup => { cleanup_data().await?; }
    _ => {}
}

// New
match event_type {
    EventType::DataMaintenance => {
        if let EventPayload::DataMaintenance { operation, .. } = payload {
            if operation == "flush" || operation == "both" {
                flush_data().await?;
            }
            if operation == "cleanup" || operation == "both" {
                cleanup_data().await?;
            }
        }
    }
    _ => {}
}
```
See Also
- Module System - Module system architecture
- Event System Integration - Complete integration guide
- Janitorial Events - Maintenance and lifecycle events
- Module IPC Protocol - IPC communication details
Janitorial and Maintenance Events
Overview
The module system provides comprehensive janitorial and maintenance events that allow modules to participate in node lifecycle, resource management, and data maintenance operations. This ensures modules can perform their own cleanup, maintenance, and resource management in sync with the node.
Event Categories
1. Node Lifecycle Events
NodeShutdown
When: Node is shutting down (before components stop)
Purpose: Allow modules to clean up gracefully
Payload:
- reason: String - Shutdown reason (“graceful”, “signal”, “rpc”, “error”)
- timeout_seconds: u64 - Graceful shutdown timeout
Module Action:
- Save state
- Close connections
- Flush data
- Clean up resources
NodeShutdownCompleted
When: Node shutdown is complete
Purpose: Notify modules that shutdown finished
Payload:
- duration_ms: u64 - Shutdown duration
NodeStartupCompleted
When: Node startup is complete (all components initialized)
Purpose: Notify modules that node is fully operational
Payload:
- duration_ms: u64 - Startup duration
- components: Vec&lt;String&gt; - Components that were initialized
Module Action:
- Initialize connections
- Load state
- Start processing
2. Storage Events
DataMaintenance (Unified)
When: Data maintenance is requested (shutdown, periodic, low disk, manual)
Purpose: Allow modules to flush data and/or clean up old data
Payload:
- operation: String - “flush”, “cleanup”, or “both”
- urgency: String - “low”, “medium”, or “high”
- reason: String - “periodic”, “shutdown”, “low_disk”, “manual”
- target_age_days: Option - Target age for cleanup (if operation includes cleanup)
- timeout_seconds: Option - Timeout for high urgency operations
Module Action:
- Flush: Write pending data to disk
- Cleanup: Delete old data based on target_age_days
- Both: Flush and cleanup
Urgency Levels:
- Low: Periodic maintenance, can be done asynchronously
- Medium: Scheduled maintenance, should complete soon
- High: Urgent (shutdown, low disk), must complete quickly
3. Maintenance Events
MaintenanceStarted
When: Maintenance operation started
Purpose: Allow modules to prepare for maintenance
Payload:
- maintenance_type: String - “backup”, “cleanup”, “prune”
- estimated_duration_seconds: Option - Estimated duration
Module Action:
- Pause non-critical operations
- Prepare for maintenance
MaintenanceCompleted
When: Maintenance operation completed
Purpose: Notify modules that maintenance finished
Payload:
- maintenance_type: String - Maintenance type
- success: bool - Success status
- duration_ms: u64 - Duration in milliseconds
- results: Option - Results/statistics (optional JSON)
Module Action:
- Resume normal operations
- Process results if needed
HealthCheck
When: Health check performed
Purpose: Allow modules to report their health status
Payload:
- check_type: String - “periodic”, “manual”, “startup”
- node_healthy: bool - Node health status
- health_report: Option - Health report (optional JSON)
Module Action:
- Report module health status
- Perform internal health checks
4. Resource Management Events
DiskSpaceLow
When: Disk space is low
Purpose: Allow modules to clean up data to free space
Payload:
- available_bytes: u64 - Available space in bytes
- total_bytes: u64 - Total space in bytes
- percent_free: f64 - Percentage free
- disk_path: String - Disk path
Module Action:
- Clean up old data
- Reduce data retention
- Flush and compress data
ResourceLimitWarning
When: Resource limit approaching
Purpose: Allow modules to reduce resource usage
Payload:
- resource_type: String - “memory”, “cpu”, “disk”, “network”
- usage_percent: f64 - Current usage percentage
- current_usage: u64 - Current usage value
- limit: u64 - Limit value
- threshold_percent: f64 - Warning threshold percentage
Module Action:
- Reduce resource usage
- Clean up resources
- Optimize operations
Usage Examples
Handling Shutdown
```rust
match event_type {
    EventType::NodeShutdown => {
        if let EventPayload::NodeShutdown { reason, timeout_seconds } = payload {
            info!("Node shutting down: {}, timeout: {}s", reason, timeout_seconds);
            // Save state
            save_state().await?;
            // Close connections
            close_connections().await?;
            // Flush data
            flush_data().await?;
        }
    }
    _ => {}
}
```
Handling Data Maintenance
```rust
use std::time::Duration;

match event_type {
    EventType::DataMaintenance => {
        if let EventPayload::DataMaintenance { operation, urgency, target_age_days, timeout_seconds, .. } = payload {
            // Wrap the maintenance work in one future so a high-urgency
            // request can run the whole operation under the requested timeout.
            let work = async {
                if operation == "flush" || operation == "both" {
                    flush_pending_data().await?;
                }
                if operation == "cleanup" || operation == "both" {
                    let age_days = target_age_days.unwrap_or(30);
                    cleanup_old_data(age_days).await?;
                }
                Ok::<(), Error>(()) // Error: the handler's error type
            };
            match (urgency.as_str(), timeout_seconds) {
                // High urgency - must complete quickly; respect timeout_seconds
                ("high", Some(timeout)) => {
                    tokio::time::timeout(Duration::from_secs(timeout), work).await??
                }
                _ => work.await?,
            }
        }
    }
    _ => {}
}
```
Handling Disk Space Low
```rust
match event_type {
    EventType::DiskSpaceLow => {
        if let EventPayload::DiskSpaceLow { available_bytes, percent_free, .. } = payload {
            warn!("Disk space low: {} bytes available, {:.2}% free", available_bytes, percent_free);
            // Clean up old data, keeping only the last 7 days
            cleanup_old_data(7).await?;
            // Compress remaining data
            compress_data().await?;
        }
    }
    _ => {}
}
```
Best Practices
- Always Handle Shutdown: Modules must handle NodeShutdown and DataMaintenance (urgency: “high”)
- Non-Blocking Operations: Keep maintenance operations fast and non-blocking
- Respect Timeouts: For high urgency operations, respect timeout_seconds
- Clean Up Resources: Always clean up resources on shutdown
- Monitor Health: Report health status during HealthCheck events
Integration Timing
Startup Sequence
- Node starts
- Modules load
- Modules subscribe to events
- NodeStartupCompleted published
- Modules can start processing
Shutdown Sequence
- NodeShutdown published (with timeout)
- Modules clean up (within timeout)
- DataMaintenance published (urgency: “high”, operation: “flush”)
- Modules flush data
- Node components stop
- NodeShutdownCompleted published
Periodic Maintenance
- DataMaintenance published (urgency: “low”, operation: “cleanup”, reason: “periodic”)
- Modules clean up old data
- MaintenanceCompleted published
See Also
- Module System - Module system architecture
- Event System Integration - Complete integration guide
- Event Consistency - Event timing and consistency guarantees
- Module IPC Protocol - IPC communication details
Consensus Layer Overview
The consensus layer (blvm-consensus) provides a pure mathematical implementation of Bitcoin consensus rules from the Orange Paper. All functions are deterministic, side-effect-free, and directly implement the mathematical specifications without interpretation.
Architecture Position
Tier 2 of the 6-tier Bitcoin Commons architecture:
1. Orange Paper (mathematical foundation)
2. blvm-consensus (pure math implementation) ← THIS LAYER
3. blvm-protocol (Bitcoin abstraction)
4. blvm-node (full node implementation)
5. blvm-sdk (developer toolkit)
6. blvm-commons (governance enforcement)
Core Functions
Implements major Bitcoin consensus functions from the Orange Paper:
Transaction Validation
- CheckTransaction: Transaction structure and limit validation
- CheckTxInputs: Input validation against UTXO set
- EvalScript: Script execution engine
- VerifyScript: Script verification with witness data
Code: transaction.rs
Block Validation
- ConnectBlock: Block connection and validation
- ApplyTransaction: Transaction application to UTXO set
- CheckProofOfWork: Proof of work verification
- ShouldReorganize: Chain reorganization logic
Code: block.rs
Economic Model
- GetBlockSubsidy: Block reward calculation with halving
- TotalSupply: Total supply computation
- GetNextWorkRequired: Difficulty adjustment calculation
Code: economic.rs
Mempool Protocol
- AcceptToMemoryPool: Transaction mempool validation
- IsStandardTx: Standard transaction checks
- ReplacementChecks: RBF (Replace-By-Fee) logic
Code: mempool.rs
Mining Protocol
- CreateNewBlock: Block creation from mempool
- MineBlock: Block mining and nonce finding
- GetBlockTemplate: Block template generation
Code: mining.rs
Advanced Features
- SegWit: Witness data validation and weight calculation
- Taproot: P2TR output validation and key aggregation
Code: segwit.rs
Design Principles
- Pure Functions: All functions are deterministic and side-effect-free
- Mathematical Accuracy: Direct implementation of Orange Paper specifications
- Optimization Passes: LLVM-like optimization passes transform specifications into optimized code
- Exact Version Pinning: All consensus-critical dependencies pinned to exact versions
- Comprehensive Testing: Extensive test coverage with unit tests, property-based tests, and integration tests
- No Consensus Rule Interpretation: Only mathematical implementation
- Formal Verification: BLVM Specification Lock and property-based testing ensure correctness
Formal Verification
Implements mathematical verification of Bitcoin consensus rules:
Recent Improvements
- Strong Tier System: Critical proofs prioritized with AWS spot instance integration
- Spam Filtering: Always available (removed feature gate dependency)
- Parallel Proof Execution: Tiered scheduling for efficient verification
Code: block.rs
Verification Coverage
Chain Selection: should_reorganize, calculate_chain_work verified
Block Subsidy: get_block_subsidy halving schedule verified
Proof of Work: check_proof_of_work, target expansion verified
Transaction Validation: check_transaction structure rules verified
Block Connection: connect_block UTXO consistency verified
Code: VERIFICATION.md
BIP Implementation
Critical Bitcoin Improvement Proposals (BIPs) implemented:
- BIP30: Duplicate coinbase prevention (integrated in connect_block())
- BIP34: Block height in coinbase (integrated in connect_block())
- BIP66: Strict DER signatures (enforced via script verification)
- BIP90: Block version enforcement (integrated in connect_block())
- BIP147: NULLDUMMY enforcement (enforced via script verification)
Code: block.rs
Performance Optimizations
Profile-Guided Optimization (PGO)
For maximum performance:
./scripts/pgo-build.sh
Expected gain: Significant performance improvement
Optimization Passes
LLVM-like optimization passes transform Orange Paper specifications:
- Constant Folding: Compile-time constant evaluation
- Memory Layout Optimization: Cache-friendly data structures
- SIMD Vectorization: Parallel processing where applicable
- Bounds Check Optimization: Eliminate unnecessary checks
- Dead Code Elimination: Remove unused code paths
Code: mod.rs
Mathematical Lock
Implementation is mathematically locked to the Orange Paper:
Chain of Trust:
Orange Paper (Math Spec) → BLVM Specification Lock (Z3 Proof) → Implementation → Bitcoin Consensus
Every function implements a mathematical specification, every critical function has a Z3 proof (via BLVM Specification Lock), and all proofs reference Orange Paper sections.
Dependencies
All consensus-critical dependencies are pinned to exact versions:
```toml
# Consensus-critical cryptography - EXACT VERSIONS
secp256k1 = "=0.28.2"
sha2 = "=0.10.9"
ripemd = "=0.1.3"
bitcoin_hashes = "=0.11.0"
```
Code: Cargo.toml
See Also
- Consensus Architecture - Consensus layer design
- Formal Verification - Verification methodology
- Mathematical Correctness - Verification approach and coverage
- UTXO Commitments - UTXO commitment system
- Orange Paper - Mathematical foundation
Consensus Layer Architecture
The consensus layer is designed as a pure mathematical implementation with no side effects.
Design Principles
- Pure Functions: All functions are deterministic and side-effect-free
- Mathematical Accuracy: Direct implementation of Orange Paper specifications
- Optimization Passes: LLVM-like optimization passes transform the Orange Paper specification into optimized code (constant folding, memory layout optimization, SIMD vectorization, bounds check optimization, dead code elimination)
- Exact Version Pinning: All consensus-critical dependencies pinned to exact versions
- Testing: Test coverage with unit tests, property-based tests, and integration tests
- No Consensus Rule Interpretation: Only mathematical implementation
- Formal Verification: BLVM Specification Lock and property-based testing ensure correctness
Core Functions
Transaction Validation
- Transaction structure and limit validation
- Input validation against UTXO set
- Script execution and verification
Block Validation
- Block connection and validation
- Transaction application to UTXO set
- Proof of work verification
Economic Model
- Block reward calculation
- Total supply computation
- Difficulty adjustment
Mempool Protocol
- Transaction mempool validation
- Standard transaction checks
- Transaction replacement (RBF) logic
Mining Protocol
- Block creation from mempool
- Block mining and nonce finding
- Block template generation
Chain Management
- Chain reorganization handling
- P2P network message processing
Advanced Features
- SegWit: Witness data validation and weight calculation (see BIP141)
- Taproot: P2TR output validation and key aggregation (see BIP341)
Optimization Passes
BLVM applies optimizations to transform the Orange Paper specification into optimized, production-ready code:
- Constant Folding - Pre-computed constants and constant propagation
- Memory Layout Optimization - Cache-aligned structures and compact stack frames
- SIMD Vectorization - Batch hash operations with parallel processing
- Bounds Check Optimization - Removes redundant runtime bounds checks using BLVM Specification Lock-proven bounds
- Dead Code Elimination - Removes unused code paths
- Inlining Hints - Aggressive inlining of hot functions
Mathematical Protections
Mathematical protection mechanisms ensure correctness through formal verification. See Mathematical Specifications for details.
Spec Maintenance Workflow
Figure: Specification maintenance workflow showing how changes are detected, verified, and integrated.
See Also
- Consensus Overview - Consensus layer introduction
- Formal Verification - Verification methodology and tools
- Mathematical Correctness - Verification approach and coverage
- Orange Paper - Mathematical foundation
- Protocol Architecture - Protocol layer built on consensus
Formal Verification
The consensus layer implements formal verification for Bitcoin consensus rules using a multi-layered approach combining mathematical specifications, symbolic verification, and property-based testing.
Verification Stack
Verification approach follows: “Rust + Tests + Math Specs = Source of Truth”
Figure: Three-layer verification stack. Layer 1, Empirical Testing: unit tests (comprehensive coverage), property tests (randomized edge cases), and integration tests (cross-system validation). Layer 2, Symbolic Verification: BLVM Specification Lock (tiered execution), driven by the mathematical specifications of the Orange Paper, with state space exploration of all execution paths. Layer 3, CI Enforcement: automated testing (required for merge), formal proofs (separate execution), and OpenTimestamps (immutable audit trail). Test results and proof artifacts feed the OpenTimestamps audit trail.
Layer 1: Empirical Testing
- Unit tests: Comprehensive test coverage for all consensus functions
- Property-based tests: Randomized testing with proptest to discover edge cases
- Integration tests: Cross-system validation between consensus components
Layer 2: Symbolic Verification
- BLVM Specification Lock: Formal verification tool using Z3 SMT solver with tiered execution
- Mathematical specifications: Formal documentation of consensus rules
- State space exploration: Verification of all possible execution paths
Layer 3: CI Enforcement
- Automated testing: All tests must pass before merge
- Formal proofs: Run separately with tiered execution (strong/medium/slow tiers)
- OpenTimestamps audit logging: Immutable proof of verification artifacts
Verification Statistics
Formal Proofs
All critical consensus functions are verified across multiple files with a tiered execution system (strong/medium/slow tiers).
Verification Command:
# Run BLVM Specification Lock verification
cargo spec-lock verify
For tiered execution:
# Run all Z3 proofs (uses tiered execution)
cargo spec-lock verify
# Run specific tier
cargo spec-lock verify --tier strong
Tier System:
- Strong Tier: Critical consensus proofs (AWS spot instance integration)
- Medium Tier: Important proofs (parallel execution)
- Slow Tier: Comprehensive coverage proofs
Infrastructure:
- AWS spot instance integration for expensive proof execution
- Parallel proof execution with tiered scheduling
- Automated proof verification in CI/CD
Property-Based Tests
Property-based tests in tests/consensus_property_tests.rs cover economic rules, proof of work, transaction validation, script execution, performance, deterministic execution, integer overflow safety, temporal/state transitions, compositional verification, and SHA256 correctness.
Verification Command:
cargo test --test consensus_property_tests
Runtime Assertions
Runtime assertions provide invariant checking during execution.
Runtime Invariant Feature Flag:
- #[cfg(any(debug_assertions, feature = "runtime-invariants"))] enables assertions
- src/block.rs: Supply invariant checks in connect_block
Verification: Runtime assertions execute during debug builds and can be enabled in production with --features runtime-invariants.
Fuzz Targets (libFuzzer)
Fuzz targets include:
- block_validation.rs
- compact_block_reconstruction.rs
- differential_fuzzing.rs
- economic_validation.rs
- mempool_operations.rs
- pow_validation.rs
- script_execution.rs
- script_opcodes.rs
- segwit_validation.rs
- serialization.rs
- transaction_validation.rs
- utxo_commitments.rs
Location: fuzz/fuzz_targets/
Verification Command:
cd fuzz
cargo +nightly fuzz run transaction_validation
MIRI Runtime Checks
Status: Integrated in CI
Location: .github/workflows/verify.yml
Checks:
- Property tests under MIRI
- Critical unit tests under MIRI
- Undefined behavior detection
Verification Command:
cargo +nightly miri test --test consensus_property_tests
Mathematical Specifications
Multiple functions have complete formal documentation.
Location: docs/MATHEMATICAL_SPECIFICATIONS_COMPLETE.md
Documented Functions:
- Economic: get_block_subsidy, total_supply, calculate_fee
- Proof of Work: expand_target, compress_target, check_proof_of_work
- Transaction: check_transaction, is_coinbase
- Block: connect_block, apply_transaction
- Script: eval_script, verify_script
- Reorganization: calculate_chain_work, should_reorganize
- Cryptographic: SHA256
Mathematical Specifications
Chain Selection (src/reorganization.rs)
Mathematical Specification:
∀ chains C₁, C₂: work(C₁) > work(C₂) ⇒ select(C₁)
Invariants:
- Selected chain has maximum cumulative work
- Work calculation is deterministic
- Empty chains are rejected
- Chain work is always non-negative
Verified Functions:
- should_reorganize: Proves most-work chain selection
- calculate_chain_work: Verifies cumulative work calculation
- expand_target: Handles difficulty target edge cases
Block Subsidy (src/economic.rs)
Mathematical Specification:
∀ h ∈ ℕ: subsidy(h) = 50 * 10^8 * 2^(-⌊h/210000⌋) if ⌊h/210000⌋ < 64 else 0
Invariants:
- Subsidy halves every 210,000 blocks
- After 64 halvings, subsidy becomes 0
- Subsidy is always non-negative
- Total supply approaches 21M BTC asymptotically
Verified Functions:
- get_block_subsidy: Verifies halving schedule
- total_supply: Proves monotonic increase
- validate_supply_limit: Ensures supply cap compliance
Proof of Work (src/pow.rs)
Mathematical Specification:
∀ header H: CheckProofOfWork(H) = SHA256(SHA256(H)) < ExpandTarget(H.bits)
Target Compression/Expansion:
∀ bits ∈ [0x03000000, 0x1d00ffff]:
Let expanded = expand_target(bits)
Let compressed = compress_target(expanded)
Let re_expanded = expand_target(compressed)
Then:
- re_expanded ≤ expanded (compression truncates, never increases)
- re_expanded.0[2] = expanded.0[2] ∧ re_expanded.0[3] = expanded.0[3]
(significant bits preserved exactly)
- Precision loss in words 0, 1 is acceptable (compact format limitation)
Invariants:
- Hash must be less than target for valid proof of work
- Target expansion handles edge cases correctly
- Target compression preserves significant bits (words 2, 3) exactly
- Target compression may lose precision in lower bits (words 0, 1)
- Difficulty adjustment respects bounds [0.25, 4.0]
- Work calculation is deterministic
Verified Functions:
- check_proof_of_work: Verifies hash &lt; target
- expand_target: Handles compact target representation
- compress_target: Implements Bitcoin Core GetCompact() exactly
- target_expand_compress_round_trip: Formally verified - proves significant bits preserved
- get_next_work_required: Respects difficulty bounds
Transaction Validation (src/transaction.rs)
Mathematical Specification:
∀ tx ∈ 𝒯𝒳: CheckTransaction(tx) = valid ⟺
(|tx.inputs| > 0 ∧ |tx.outputs| > 0 ∧
∀o ∈ tx.outputs: 0 ≤ o.value ≤ M_max ∧
|tx.inputs| ≤ M_max_inputs ∧ |tx.outputs| ≤ M_max_outputs ∧
|tx| ≤ M_max_tx_size)
Invariants:
- Valid transactions have non-empty inputs and outputs
- Output values are bounded [0, MAX_MONEY]
- Input/output counts respect limits
- Transaction size respects limits
- Coinbase transactions have special validation rules
Verified Functions:
- check_transaction: Validates structure rules
- check_tx_inputs: Handles coinbase correctly
- is_coinbase: Correctly identifies coinbase transactions
Block Connection (src/block.rs)
Mathematical Specification:
∀ block B, UTXO set US, height h: ConnectBlock(B, US, h) = (valid, US') ⟺
(ValidateHeader(B.header) ∧
∀ tx ∈ B.transactions: CheckTransaction(tx) ∧ CheckTxInputs(tx, US, h) ∧
VerifyScripts(tx, US) ∧
CoinbaseOutput ≤ TotalFees + GetBlockSubsidy(h) ∧
US' = ApplyTransactions(B.transactions, US))
Invariants:
- Valid blocks have valid headers and transactions
- UTXO set consistency is preserved
- Coinbase output respects economic rules
- Transaction application is atomic
Verified Functions:
- connect_block: Validates complete block
- apply_transaction: Preserves UTXO consistency
- calculate_tx_id: Deterministic transaction identification
Verification Tools
BLVM Specification Lock
Purpose: Formal verification tool using Z3 SMT solver for mathematical proof of correctness
Usage: cargo spec-lock verify
Coverage: All functions with #[spec_locked] attributes
Strategy: Links Rust code to Orange Paper specifications and verifies contracts using Z3
Proptest Property Testing
Purpose: Randomized testing to discover edge cases
Usage: cargo test (runs automatically)
Coverage: All proptest! macros
Strategy: Generates random inputs within specified bounds
Example:
```rust
proptest! {
    #[test]
    fn prop_function_invariant(input in strategy) {
        let result = function_under_test(input);
        prop_assert!(result.property_holds());
    }
}
```
CI Integration
Verification Workflow
The .github/workflows/verify.yml workflow enforces verification:
1. Unit & Property Tests (required)
   - cargo test --all-features
   - Must pass for CI to succeed
2. BLVM Specification Lock Verification (release verification)
   - cargo spec-lock verify
   - Verifies all Z3 proofs for functions with #[spec_locked] attributes
   - Full verification run before each release
   - Slower runs may be deferred between major releases
   - Not required for merge
3. OpenTimestamps Audit (non-blocking)
   - Collect verification artifacts
   - Timestamp proof bundle with ots stamp
   - Upload artifacts for transparency
Local Development
Run all tests:
cargo test --all-features
Run BLVM Specification Lock verification:
cargo spec-lock verify
Run specific verification:
cargo test --test property_tests
cargo spec-lock verify --proof <function_name>
Verification Coverage
Critical consensus functions are formally verified or property-tested across economic rules, proof of work, transaction validation, block validation, script execution, chain reorganization, cryptographic operations, mempool, SegWit, and serialization, using formal proofs, property tests, runtime assertions, and fuzz targets.
Network Protocol Verification
Network protocol message parsing, serialization, and processing are formally verified using BLVM Specification Lock, extending verification beyond consensus to the network layer. See Network Protocol for transport details.
Verified Properties: Message header parsing (magic, command, length, checksum), checksum validation, size limit enforcement, round-trip properties (parse(serialize(msg)) == msg).
Verified Messages: Phase 1: Version, VerAck, Ping, Pong. Phase 2: Transaction, Block, Headers, Inv, GetData, GetHeaders.
Mathematical Specifications: Round-trip property ∀ msg: parse(serialize(msg)) = msg, checksum validation rejects invalid checksums, size limits enforced for all messages.
Verification runs automatically in CI. Proofs excluded from release builds via verify feature.
Consensus Coverage Comparison
Figure: Consensus coverage comparison: Bitcoin Core achieves coverage through testing alone. Bitcoin Commons achieves formal verification coverage (Z3 proofs via BLVM Specification Lock) plus comprehensive test coverage. Commons uses consensus-focused test files with extensive test functions compared to Core’s total files. The mathematical specification enables both formal verification and comprehensive testing.
Proof Maintenance Cost
Figure: Proof maintenance cost: proofs changed per change by area; highlights refactor hotspots; Commons aims for lower proof churn than Core.
Spec Drift vs Test Coverage
Figure: Spec drift decreases as test coverage increases. Higher test coverage reduces the likelihood of specification drift over time.
See Also
- Consensus Overview - Consensus layer introduction
- Consensus Architecture - Consensus layer design
- Mathematical Specifications - Mathematical spec details
- Mathematical Correctness - Correctness guarantees
- Property-Based Testing - Property-based testing
- Fuzzing - Fuzzing infrastructure
- Testing Infrastructure - Complete testing overview
Peer Consensus Protocol
Overview
Bitcoin Commons implements an N-of-M peer consensus protocol for UTXO set verification. The protocol discovers diverse peers and finds consensus among them to verify UTXO commitments without trusting any single peer.
Architecture
N-of-M Consensus Model
The protocol uses an N-of-M consensus model:
- N: Minimum number of peers required
- M: Target number of diverse peers
- Threshold: Consensus threshold (e.g., 70% agreement)
- Diversity: Peers must be diverse across ASNs, subnets, geographic regions
Code: peer_consensus.rs
Peer Information
Peer information tracks diversity:
```rust
pub struct PeerInfo {
    pub address: IpAddr,
    pub asn: Option<u32>,               // Autonomous System Number
    pub country: Option<String>,        // Country code (ISO 3166-1 alpha-2)
    pub implementation: Option<String>, // Bitcoin implementation
    pub subnet: u32,                    // /16 subnet for diversity checks
}
```
Code: peer_consensus.rs
Diverse Peer Discovery
Diversity Requirements
Peers must be diverse across:
- ASNs: Maximum N peers per ASN
- Subnets: No peers from same /16 subnet
- Geographic Regions: Geographic diversity
- Bitcoin Implementations: Implementation diversity
Code: peer_consensus.rs
Discovery Process
- Collect All Peers: Gather all available peers
- Filter by ASN: Limit peers per ASN
- Filter by Subnet: Remove duplicate subnets
- Select Diverse Set: Select diverse peer set
- Stop at Target: Stop when target number reached
Code: peer_consensus.rs
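The five discovery steps reduce to a single filtering pass over candidate peers. The sketch below is illustrative only (`Peer` and `select_diverse` are hypothetical names, not the `peer_consensus.rs` API), but it shows the ASN cap, /16 subnet uniqueness, and stop-at-target rules together:

```rust
use std::collections::{HashMap, HashSet};
use std::net::Ipv4Addr;

#[derive(Clone)]
struct Peer {
    addr: Ipv4Addr,
    asn: u32,
}

/// Select up to `target` peers, allowing at most `max_per_asn` peers per ASN
/// and at most one peer per /16 subnet.
fn select_diverse(peers: &[Peer], target: usize, max_per_asn: usize) -> Vec<Peer> {
    let mut by_asn: HashMap<u32, usize> = HashMap::new();
    let mut subnets: HashSet<[u8; 2]> = HashSet::new();
    let mut selected = Vec::new();
    for peer in peers {
        if selected.len() >= target {
            break; // stop when target number reached
        }
        let subnet = [peer.addr.octets()[0], peer.addr.octets()[1]]; // /16 prefix
        let asn_count = by_asn.entry(peer.asn).or_insert(0);
        // Accept only if the ASN cap is not hit and the /16 subnet is new.
        if *asn_count < max_per_asn && subnets.insert(subnet) {
            *asn_count += 1;
            selected.push(peer.clone());
        }
    }
    selected
}

fn main() {
    let peers = vec![
        Peer { addr: Ipv4Addr::new(1, 2, 3, 4), asn: 100 },
        Peer { addr: Ipv4Addr::new(1, 2, 9, 9), asn: 100 }, // same /16: rejected
        Peer { addr: Ipv4Addr::new(5, 6, 7, 8), asn: 100 },
        Peer { addr: Ipv4Addr::new(9, 9, 1, 1), asn: 100 }, // third from ASN 100: rejected
        Peer { addr: Ipv4Addr::new(8, 8, 8, 8), asn: 200 },
    ];
    assert_eq!(select_diverse(&peers, 10, 2).len(), 3);
}
```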
Consensus Finding
Commitment Grouping
Commitments are grouped by their values:
- Merkle Root: UTXO commitment Merkle root
- Total Supply: Total Bitcoin supply
- UTXO Count: Number of UTXOs
- Block Height: Block height of commitment
Code: peer_consensus.rs
Consensus Threshold
Consensus threshold check:
- Threshold: Configurable threshold (e.g., 70%)
- Agreement Count: Number of peers agreeing
- Required Count: ceil(total_peers * threshold)
- Verification: Check if agreement_count >= required_count
Code: peer_consensus.rs
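The threshold check can be computed without floating point by expressing the threshold as a fraction; `required_agreement` and `consensus_met` below are hypothetical names for illustration (the documented implementation instead applies `.ceil()` to an `f64` product once and then compares integers):

```rust
/// required_agreement_count = ceil(total_peers * threshold), with the threshold
/// expressed as num/den so the ceiling is exact integer arithmetic:
/// ceil(a * num / den) = (a * num + den - 1) / den.
fn required_agreement(total_peers: usize, num: usize, den: usize) -> usize {
    ((total_peers * num + den - 1) / den).max(1) // invariant: required >= 1
}

fn consensus_met(agreement_count: usize, total_peers: usize, num: usize, den: usize) -> bool {
    agreement_count >= required_agreement(total_peers, num, den)
}

fn main() {
    // 70% threshold over 10 peers: 7 must agree
    assert_eq!(required_agreement(10, 7, 10), 7);
    // 70% over 9 peers: ceil(6.3) = 7
    assert_eq!(required_agreement(9, 7, 10), 7);
    assert!(consensus_met(7, 10, 7, 10));
    assert!(!consensus_met(6, 10, 7, 10));
}
```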
Mathematical Invariants
Consensus finding maintains invariants:
- required_agreement_count <= total_peers
- required_agreement_count >= 1
- best_agreement_count <= total_peers
- If agreement_count >= required_agreement_count, then agreement_count / total_peers >= threshold
Code: peer_consensus.rs
Checkpoint Height Determination
Median-Based Checkpoint
Checkpoint height determined from peer chain tips:
- Median Calculation: Uses median of peer tips
- Safety Margin: Subtracts safety margin to prevent deep reorgs
- Mathematical Invariants:
- Median is always between min(tips) and max(tips)
- Checkpoint height is always >= 0
- Checkpoint height <= median_tip
Code: peer_consensus.rs
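A minimal sketch of the checkpoint rule, assuming a non-empty list of peer tips (`checkpoint_height` is an illustrative name, not the `determine_checkpoint_height` signature):

```rust
/// Checkpoint = max(0, median(tips) - safety_margin); the even-length median
/// averages the two middle tips.
fn checkpoint_height(mut tips: Vec<u64>, safety_margin: u64) -> u64 {
    assert!(!tips.is_empty(), "needs at least one peer tip");
    tips.sort_unstable();
    let n = tips.len();
    let median = if n % 2 == 0 {
        (tips[n / 2 - 1] + tips[n / 2]) / 2 // average of the two middle tips
    } else {
        tips[n / 2]
    };
    median.saturating_sub(safety_margin) // clamps at 0, so checkpoint >= 0
}

fn main() {
    assert_eq!(checkpoint_height(vec![100, 200, 300], 50), 150);
    assert_eq!(checkpoint_height(vec![100, 200], 2016), 0); // margin exceeds median
}
```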
Ban List Sharing
Ban List Protocol
Nodes share ban lists to protect against malicious peers:
- Ban List Messages: GetBanList, BanList protocol messages
- Hash Verification: Ban list hash verification
- Merging: Ban list merging from multiple peers
- Network-Wide Protection: Protects entire network
Code: mod.rs
Ban List Validation
Ban list entries are validated:
- Entry Validation: Each entry validated
- Hash Verification: Ban list hash verified
- Merging Logic: Merged with local ban list
- Duplicate Prevention: Duplicate entries prevented
Code: mod.rs
Ban List Merging
Ban lists are merged from multiple peers:
- Hash Verification: Verify ban list hash
- Entry Validation: Validate each ban entry
- Merging: Merge with local ban list
- Conflict Resolution: Resolve conflicts (longest ban wins)
Code: ban_list_merging.rs
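The merge rules can be sketched with a map from address to ban expiry. Hash verification and per-entry validation are omitted here, and `merge_ban_lists` is a hypothetical name, not the `ban_list_merging.rs` API:

```rust
use std::collections::HashMap;
use std::net::IpAddr;

/// Merge a peer's ban list into the local one. Conflicts resolve to the
/// longest remaining ban ("longest ban wins"); duplicates collapse naturally.
fn merge_ban_lists(
    local: &mut HashMap<IpAddr, u64>, // addr -> ban expiry (unix seconds)
    remote: &[(IpAddr, u64)],
) {
    for &(addr, expiry) in remote {
        local
            .entry(addr)
            .and_modify(|e| *e = (*e).max(expiry)) // longest ban wins
            .or_insert(expiry);
    }
}

fn main() {
    let a: IpAddr = "10.0.0.1".parse().unwrap();
    let b: IpAddr = "10.0.0.2".parse().unwrap();
    let mut local = HashMap::from([(a, 1_000)]);
    merge_ban_lists(&mut local, &[(a, 2_000), (b, 500)]);
    assert_eq!(local[&a], 2_000); // remote ban was longer
    assert_eq!(local[&b], 500);   // new entry added
}
```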
Filtered Blocks
Filtered Block Protocol
Nodes can request filtered blocks:
- GetFilteredBlock: Request filtered block
- FilteredBlock: Response with filtered block
- Efficiency: More efficient than full blocks
- Privacy: Better privacy for light clients
Code: protocol.rs
Network-Wide Malicious Peer Protection
Protection Mechanisms
Network-wide protection against malicious peers:
- Ban List Sharing: Share ban lists across network
- Peer Reputation: Track peer reputation
- Auto-Ban: Automatic banning of abusive peers
- Eclipse Prevention: Prevent eclipse attacks
Code: SECURITY.md
Configuration
Consensus Configuration
```rust
pub struct ConsensusConfig {
    pub min_peers: usize,              // Minimum peers required
    pub target_peers: usize,           // Target number of diverse peers
    pub consensus_threshold: f64,      // Consensus threshold (0.0-1.0)
    pub max_peers_per_asn: usize,      // Max peers per ASN
    pub safety_margin_blocks: Natural, // Safety margin for checkpoint
}
```
Code: peer_consensus.rs
Benefits
- No Single Point of Trust: No need to trust any single peer
- Diversity: Diverse peer set reduces attack surface
- Consensus: Majority agreement ensures correctness
- Network Protection: Ban list sharing protects entire network
- Efficiency: Filtered blocks reduce bandwidth
Components
The peer consensus protocol includes:
- N-of-M consensus model
- Diverse peer discovery
- Consensus finding algorithm
- Checkpoint height determination
- Ban list sharing
- Filtered block protocol
- Network-wide malicious peer protection
Location: blvm-consensus/src/utxo_commitments/peer_consensus.rs, blvm-node/src/network/ban_list_merging.rs, blvm-node/src/network/mod.rs
See Also
- Consensus Overview - Consensus layer introduction
- UTXO Commitments - UTXO commitment system
- Mathematical Specifications - Mathematical spec details
- Network Protocol - Network layer details
Mathematical Specifications
Overview
Bitcoin Commons implements formal mathematical specifications for all critical consensus functions. These specifications provide precise mathematical definitions that serve as the source of truth for consensus behavior.
Specification Format
Mathematical specifications use formal notation to define consensus rules:
- Quantifiers: Universal (∀) and existential (∃) quantifiers
- Functions: Mathematical function definitions
- Invariants: Properties that must always hold
- Constraints: Bounds and limits
Code: VERIFICATION.md
Core Specifications
Chain Selection
Mathematical Specification:
∀ chains C₁, C₂: work(C₁) > work(C₂) ⇒ select(C₁)
Invariants:
- Selected chain has maximum cumulative work
- Work calculation is deterministic
- Empty chains are rejected
- Chain work is always non-negative
Verified Functions:
- should_reorganize: Proves longest chain selection
- calculate_chain_work: Verifies cumulative work calculation
- expand_target: Handles difficulty target edge cases
Code: VERIFICATION.md
Block Subsidy
Mathematical Specification:
∀ h ∈ ℕ: subsidy(h) = 50 * 10^8 * 2^(-⌊h/210000⌋) if ⌊h/210000⌋ < 64 else 0
Invariants:
- Subsidy halves every 210,000 blocks
- Subsidy is non-negative
- Subsidy decreases monotonically
- Total supply converges to 21 million BTC
Code: VERIFICATION.md
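The specification translates directly into code. This is a sketch of the halving rule only, not the `get_block_subsidy` signature used in blvm-consensus:

```rust
/// Block subsidy in satoshis: 50 BTC, halved every 210,000 blocks,
/// zero after 64 halvings.
fn block_subsidy(height: u64) -> u64 {
    let halvings = height / 210_000;
    if halvings >= 64 {
        0
    } else {
        (50 * 100_000_000) >> halvings // 2^(-halvings) as a right shift
    }
}

fn main() {
    assert_eq!(block_subsidy(0), 5_000_000_000);        // 50 BTC
    assert_eq!(block_subsidy(209_999), 5_000_000_000);  // last pre-halving block
    assert_eq!(block_subsidy(210_000), 2_500_000_000);  // 25 BTC after first halving
    assert_eq!(block_subsidy(64 * 210_000), 0);         // emission ends
}
```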
Total Supply
Mathematical Specification:
∀ h ∈ ℕ: total_supply(h) = Σ(i=0 to h) subsidy(i)
Invariants:
- Total supply is monotonic (never decreases)
- Total supply is bounded (≤ 21 * 10^6 * 10^8 satoshis)
- Total supply converges to 21 million BTC
Code: CONSENSUS_COVERAGE_ASSESSMENT.md
Difficulty Adjustment
Mathematical Specification:
target_new = target_old * (timespan / expected_timespan)
timespan_clamped = clamp(timespan, expected/4, expected*4)
Invariants:
- Target is always positive
- Timespan is clamped to [expected/4, expected*4]
- Difficulty adjustment is deterministic
Code: MATHEMATICAL_PROTECTIONS.md
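The clamping step alone can be sketched as follows (illustrative, not the `get_next_work_required` implementation):

```rust
/// Clamp the measured retarget timespan to [expected/4, expected*4], limiting
/// any single adjustment to a 4x difficulty change in either direction.
fn clamped_timespan(actual: u64, expected: u64) -> u64 {
    actual.clamp(expected / 4, expected * 4)
}

fn main() {
    let expected = 14 * 24 * 60 * 60; // two-week target timespan, in seconds
    assert_eq!(clamped_timespan(1, expected), expected / 4);              // too fast
    assert_eq!(clamped_timespan(100 * expected, expected), expected * 4); // too slow
    assert_eq!(clamped_timespan(expected, expected), expected);           // on schedule
}
```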
Consensus Threshold
Mathematical Specification:
required_agreement_count = ceil(total_peers * threshold)
consensus_met ⟺ agreement_count >= required_agreement_count
Invariants:
- 1 <= required_agreement_count <= total_peers
- agreement_count >= required ⟺ ratio >= threshold
- Integer comparison is deterministic
Code: MATHEMATICAL_PROTECTIONS.md
Median Calculation
Mathematical Specification:
median(tips) = {
tips[n/2] if n is odd,
(tips[n/2-1] + tips[n/2]) / 2 if n is even
}
Invariants:
- min(tips) <= median <= max(tips)
- Median is deterministic
- Checkpoint = max(0, median - safety_margin)
Code: MATHEMATICAL_PROTECTIONS.md
Specification Coverage
Functions with Specifications
Multiple functions have formal mathematical specifications:
- Chain selection (should_reorganize, calculate_chain_work)
- Block subsidy (get_block_subsidy)
- Total supply (total_supply)
- Difficulty adjustment (get_next_work_required, expand_target)
- Transaction validation (check_transaction, check_tx_inputs)
- Block validation (connect_block, apply_transaction)
- Script execution (eval_script, verify_script)
- Consensus threshold (find_consensus)
- Median calculation (determine_checkpoint_height)
Code: CONSENSUS_COVERAGE_ASSESSMENT.md
Mathematical Protections
Integer-Based Arithmetic
Floating-point comparisons are avoided in consensus checks: the threshold ratio is applied once to derive an integer count, and every subsequent comparison is exact integer arithmetic:
```rust
// Integer-based threshold calculation: f64 is used once to take the ceiling,
// then all consensus checks compare integers.
let required_agreement_count = ((total_peers as f64) * threshold).ceil() as usize;
if agreement_count >= required_agreement_count {
    // Consensus met
}
```
Code: MATHEMATICAL_PROTECTIONS.md
Runtime Assertions
Runtime assertions verify mathematical invariants:
- Threshold calculation bounds
- Consensus result invariants
- Median calculation bounds
- Checkpoint bounds
Code: MATHEMATICAL_PROTECTIONS.md
Checked Arithmetic
Checked arithmetic prevents overflow/underflow:
```rust
// Median calculation with overflow protection
let median_tip = if sorted_tips.len() % 2 == 0 {
    let mid = sorted_tips.len() / 2;
    let lower = sorted_tips[mid - 1];
    let upper = sorted_tips[mid];
    (lower + upper) / 2 // Safe: Natural type prevents overflow
} else {
    sorted_tips[sorted_tips.len() / 2]
};
```
Code: MATHEMATICAL_PROTECTIONS.md
Formal Verification
Z3 Proofs
Z3 proofs (via BLVM Specification Lock) verify mathematical specifications:
- Threshold Calculation: Verifies integer-based threshold correctness
- Median Calculation: Verifies median bounds
- Consensus Result: Verifies consensus result invariants
- Economic Rules: Verifies subsidy and supply calculations
Code: MATHEMATICAL_PROTECTIONS.md
Property-Based Tests
Property-based tests verify invariants:
- Generate random inputs
- Verify properties hold
- Discover edge cases
- Test mathematical correctness
Code: MATHEMATICAL_PROTECTIONS.md
Documentation
Specification Documents
Mathematical specifications are documented in:
- MATHEMATICAL_SPECIFICATIONS_COMPLETE.md: Complete formal specifications
- VERIFICATION.md: Verification methodology
- MATHEMATICAL_PROTECTIONS.md: Protection mechanisms
- PROTECTION_COVERAGE.md: Coverage statistics
Code: README.md
Components
The mathematical specifications system includes:
- Formal mathematical notation for consensus functions
- Mathematical invariants documentation
- Integer-based arithmetic (prevents floating-point bugs)
- Runtime assertions (verify invariants)
- Checked arithmetic (prevents overflow)
- Z3 proofs (formal verification via BLVM Specification Lock)
- Property-based tests (invariant verification)
Location: blvm-consensus/docs/VERIFICATION.md, blvm-consensus/docs/MATHEMATICAL_SPECIFICATIONS_COMPLETE.md, blvm-consensus/docs/MATHEMATICAL_PROTECTIONS.md
See Also
- Consensus Overview - Consensus layer introduction
- Consensus Architecture - Consensus layer design
- Formal Verification - BLVM Specification Lock verification details
- Mathematical Correctness - Correctness guarantees
- Property-Based Testing - Property-based testing
Mathematical Correctness
The consensus layer implements mathematical correctness through formal verification and comprehensive testing.
Verification Approach
Our verification approach follows: “Rust + Tests + Math Specs = Source of Truth”
Layer 1: Empirical Testing (Required, Must Pass)
- Unit tests: Comprehensive test coverage for all consensus functions
- Property-based tests: Randomized testing with
proptestto discover edge cases - Integration tests: Cross-system validation between consensus components
Layer 2: Symbolic Verification (Required, Must Pass)
- BLVM Specification Lock: Bounded symbolic verification with mathematical invariants using Z3
- Mathematical specifications: Formal documentation of consensus rules
- State space exploration: Verification of all possible execution paths
Layer 3: CI Enforcement (Required, Blocks Merge)
- Automated verification: All tests and proofs must pass before merge
- OpenTimestamps audit logging: Immutable proof of verification artifacts
- No human override: Technical correctness is non-negotiable
Verified Functions
✅ Chain Selection: should_reorganize, calculate_chain_work verified
✅ Block Subsidy: get_block_subsidy halving schedule verified
✅ Proof of Work: check_proof_of_work, target expansion verified
✅ Transaction Validation: check_transaction structure rules verified
✅ Block Connection: connect_block UTXO consistency verified
Protection Coverage
{{#include ../../../blvm-consensus/docs/PROTECTION_COVERAGE.md}}
See Also
- Consensus Architecture - Consensus layer design
- Formal Verification - Verification methodology and tools
- Consensus Overview - Consensus layer introduction
- Orange Paper - Mathematical foundation
Spam Filtering
Overview
Spam filtering provides transaction-level filtering for bandwidth optimization and non-monetary transaction detection. The system filters spam transactions to achieve 40-60% bandwidth savings during ongoing sync while maintaining consensus correctness.
Code: spam_filter.rs
Note: While the implementation is located in the utxo_commitments module for organizational purposes, spam filtering is a general-purpose feature that can be used independently of UTXO commitments.
Spam Detection Types
1. Ordinals/Inscriptions (SpamType::Ordinals)
Detects data embedded in Bitcoin transactions:
- Witness Scripts: Detects data embedded in witness scripts (SegWit v0 or Taproot) - PRIMARY METHOD
- OP_RETURN Outputs: Detects OP_RETURN outputs with large data pushes
- Envelope Protocol: Detects envelope protocol patterns (OP_FALSE OP_IF … OP_ENDIF)
- Pattern Detection: Large scripts (>100 bytes) or OP_RETURN with >80 bytes
- Witness Detection: Large witness stacks (>1000 bytes) or suspicious data patterns
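The OP_RETURN size rule above can be sketched as a standalone check. This is a simplified heuristic that counts raw bytes after the OP_RETURN opcode; the actual `SpamFilter` may parse push opcodes more precisely:

```rust
/// Flag an output script that starts with OP_RETURN (0x6a) and carries more
/// than 80 bytes of payload, per the default threshold above.
fn is_large_op_return(script_pubkey: &[u8]) -> bool {
    const OP_RETURN: u8 = 0x6a;
    match script_pubkey.first() {
        Some(&OP_RETURN) => script_pubkey.len() - 1 > 80,
        _ => false,
    }
}

fn main() {
    let mut small = vec![0x6a, 0x04];
    small.extend_from_slice(&[0u8; 4]); // OP_RETURN + 4-byte push
    let mut large = vec![0x6a, 0x4c, 100]; // OP_RETURN OP_PUSHDATA1 100
    large.extend_from_slice(&[0u8; 100]);
    assert!(!is_large_op_return(&small));
    assert!(is_large_op_return(&large));
    assert!(!is_large_op_return(&[0x76, 0xa9])); // not OP_RETURN
}
```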
2. Dust Outputs (SpamType::Dust)
Filters outputs below threshold:
- Threshold: Default 546 satoshis (configurable)
- Detection: All outputs must be below threshold for transaction to be considered dust
- Configuration: SpamFilterConfig::dust_threshold
3. BRC-20 Tokens (SpamType::BRC20)
Detects BRC-20 token transactions:
- Pattern Matching: Detects BRC-20 JSON patterns in OP_RETURN outputs
- Patterns: "p":"brc-20", "op":"mint", "op":"transfer", "op":"deploy"
4. Large Witness Data (SpamType::LargeWitness)
Detects transactions with suspiciously large witness data:
- Threshold: Default 1000 bytes (configurable)
- Indication: Potential data embedding in witness data
- Configuration: SpamFilterConfig::max_witness_size
5. Low Fee Rate (SpamType::LowFeeRate)
Detects transactions with suspiciously low fee rates:
- Detection: Low fee rate relative to transaction size
- Indication: Non-monetary transactions often pay minimal fees
- Threshold: Default 1 sat/vbyte (configurable)
- Configuration: SpamFilterConfig::min_fee_rate
- Status: Disabled by default (can be too aggressive)
6. High Size-to-Value Ratio (SpamType::HighSizeValueRatio)
Detects transactions with very large size relative to value transferred:
- Pattern: >1000 bytes per satoshi (default threshold)
- Indication: Non-monetary use (large data, small value)
- Configuration: SpamFilterConfig::max_size_value_ratio
7. Many Small Outputs (SpamType::ManySmallOutputs)
Detects transactions with many small outputs:
- Pattern: >10 outputs below dust threshold (default)
- Indication: Common in token distributions and Ordinal transfers
- Configuration: SpamFilterConfig::max_small_outputs
Critical Design: Output-Only Filtering
Important: Spam filtering applies to OUTPUTS only, not entire transactions.
When processing a spam transaction:
- ✅ INPUTS are ALWAYS removed from UTXO tree (maintains consistency)
- ❌ OUTPUTS are filtered out (bandwidth savings)
This ensures UTXO set consistency even when spam transactions spend non-spam inputs.
Implementation: blvm-consensus/src/utxo_commitments/initial_sync.rs (lines 206-310)
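The input/output asymmetry can be sketched with a toy UTXO map. The types here are illustrative stand-ins, not those used in initial_sync.rs:

```rust
use std::collections::HashMap;

struct TxOut { value: u64 }
struct Tx { inputs: Vec<u64>, outputs: Vec<TxOut> } // inputs name spent outpoint ids

/// Apply one transaction to a toy UTXO map (outpoint id -> value).
/// Inputs are ALWAYS removed; outputs are added only for non-spam transactions.
fn apply_tx(utxo_set: &mut HashMap<u64, u64>, tx: &Tx, tx_is_spam: bool, next_id: &mut u64) {
    for outpoint in &tx.inputs {
        utxo_set.remove(outpoint); // always: keeps the UTXO set consistent
    }
    if !tx_is_spam {
        for out in &tx.outputs {
            utxo_set.insert(*next_id, out.value); // spam outputs are filtered out
            *next_id += 1;
        }
    }
}

fn main() {
    let mut utxos = HashMap::from([(1u64, 50_000u64)]);
    let mut next_id = 2;
    // A spam transaction that spends a non-spam input:
    let spam = Tx { inputs: vec![1], outputs: vec![TxOut { value: 546 }] };
    apply_tx(&mut utxos, &spam, true, &mut next_id);
    assert!(utxos.is_empty()); // input removed, spam output never added
}
```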
Configuration
Default Configuration
```rust
use blvm_consensus::utxo_commitments::spam_filter::{SpamFilter, SpamFilterConfig};

// Default configuration (all detection methods enabled except low_fee_rate)
let filter = SpamFilter::new();
```
Custom Configuration
```rust
let config = SpamFilterConfig {
    filter_ordinals: true,
    filter_dust: true,
    filter_brc20: true,
    filter_large_witness: true,         // Detect large witness stacks
    filter_low_fee_rate: false,         // Disabled by default (too aggressive)
    filter_high_size_value_ratio: true, // Detect high size/value ratio
    filter_many_small_outputs: true,    // Detect many small outputs
    dust_threshold: 546,                // satoshis
    min_output_value: 546,              // satoshis
    min_fee_rate: 1,                    // satoshis per vbyte
    max_witness_size: 1000,             // bytes
    max_size_value_ratio: 1000.0,       // bytes per satoshi
    max_small_outputs: 10,              // count
};
let filter = SpamFilter::with_config(config);
```
Witness Data Support
For improved detection accuracy, especially for Taproot/SegWit-based Ordinals, use is_spam_with_witness():
```rust
use blvm_consensus::witness::Witness;

let filter = SpamFilter::new();
let witnesses: Vec<Witness> = /* witness data for each input */;

// Better detection with witness data
let result = filter.is_spam_with_witness(&tx, Some(&witnesses));

// Backward compatible (works without witness data)
let result = filter.is_spam(&tx);
```
Usage
Basic Usage
```rust
use blvm_consensus::utxo_commitments::spam_filter::SpamFilter;

let filter = SpamFilter::new();
let result = filter.is_spam(&transaction);

if result.is_spam {
    println!("Transaction is spam: {:?}", result.spam_type);
    for spam_type in &result.detected_types {
        println!("  - {:?}", spam_type);
    }
}
```
Block Filtering
```rust
let spam_filter = SpamFilter::new();
let (filtered_txs, spam_summary) = spam_filter.filter_block(&block.transactions);

// Spam summary provides statistics:
// - filtered_count: Number of transactions filtered
// - filtered_size: Total bytes filtered
// - by_type: Breakdown by spam type (ordinals, dust, brc20)
```
Block Filtering with Witness Data
```rust
let spam_filter = SpamFilter::new();
let witnesses: Vec<Vec<Witness>> = /* witness data for each transaction */;
let (filtered_txs, spam_summary) = spam_filter.filter_block_with_witness(
    &block.transactions,
    Some(&witnesses),
);
```
Mempool-Level Spam Filtering
In addition to block-level filtering, spam filtering can be applied at the mempool entry point to reject spam transactions before they enter the mempool.
Configuration
Enable mempool-level spam filtering in MempoolConfig:
```rust
use blvm_consensus::config::MempoolConfig;

let mut config = MempoolConfig::default();
config.reject_spam_in_mempool = true; // Enable spam rejection at mempool entry

// Optional: Customize spam filter configuration
#[cfg(feature = "utxo-commitments")]
{
    use blvm_consensus::utxo_commitments::config::SpamFilterConfigSerializable;
    config.spam_filter_config = Some(SpamFilterConfigSerializable {
        filter_ordinals: true,
        filter_dust: true,
        filter_brc20: true,
        // ... other spam filter settings
    });
}
```
Standard Transaction Checks
The mempool also enforces stricter standard transaction checks:
OP_RETURN Limits
- Maximum OP_RETURN size: 80 bytes (Bitcoin Core standard, configurable)
- Multiple OP_RETURN rejection: By default, transactions with more than 1 OP_RETURN output are rejected
- Configuration: MempoolConfig::max_op_return_size, max_op_return_outputs, reject_multiple_op_return
Envelope Protocol Rejection
- Envelope protocol detection: Rejects scripts starting with OP_FALSE OP_IF (used by Ordinals)
- Configuration: MempoolConfig::reject_envelope_protocol (default: true)
Script Size Limits
- Maximum standard script size: 200 bytes (configurable)
- Configuration: MempoolConfig::max_standard_script_size
Per-Peer Transaction Rate Limiting
To prevent peer flooding, transaction rate limiting is enforced per peer:
- Burst limit: 10 transactions (configurable)
- Rate limit: 1 transaction per second (configurable)
- Configuration: MempoolPolicyConfig::tx_rate_limit_burst, tx_rate_limit_per_sec
- Location: blvm-node/src/network/mod.rs
Transactions exceeding the rate limit are dropped before processing.
Example Configuration
```toml
[mempool]
# Enable spam filtering at mempool entry
reject_spam_in_mempool = true

# OP_RETURN limits
max_op_return_size = 80
max_op_return_outputs = 1
reject_multiple_op_return = true

# Standard script checks
max_standard_script_size = 200
reject_envelope_protocol = true

# Fee rate requirements for large transactions
min_fee_rate_large_tx = 2
large_tx_threshold_bytes = 1000

[mempool_policy]
# Per-peer transaction rate limiting
tx_rate_limit_burst = 10
tx_rate_limit_per_sec = 1

# Per-peer byte rate limiting
tx_byte_rate_limit = 100000  # 100 KB/s
tx_byte_rate_burst = 1000000 # 1 MB burst

# Spam-aware eviction
eviction_strategy = "spamfirst"

[spam_ban]
# Spam-specific peer banning
spam_ban_threshold = 10
spam_ban_duration_seconds = 3600 # 1 hour
```
Integration Points
UTXO Commitments
Spam filtering is used in UTXO commitment processing to reduce bandwidth during sync:
- Location: blvm-consensus/src/utxo_commitments/initial_sync.rs
- Usage: Filters outputs when processing blocks for UTXO commitments
- Benefit: 40-60% bandwidth reduction during ongoing sync
Protocol Extensions
Spam filtering is used in protocol extensions for filtered block generation:
- Location: blvm-node/src/network/protocol_extensions.rs
- Usage: Generates filtered blocks for network peers
- Benefit: Reduces bandwidth for filtered block relay
Mempool Entry
Spam filtering can be applied at mempool entry to reject spam transactions:
- Location: blvm-consensus/src/mempool.rs::accept_to_memory_pool_with_config()
- Usage: Optional spam check before accepting transactions to mempool
- Benefit: Prevents spam from entering mempool, reducing memory usage
- Status: Opt-in (default: disabled) to maintain backward compatibility
Bandwidth Savings
- 40-60% bandwidth reduction during ongoing sync
- Maintains consensus correctness
- Enables efficient UTXO commitment synchronization
- Reduces storage requirements for filtered block relay
Performance Characteristics
- CPU Overhead: Minimal (pattern matching)
- Memory: O(1) per transaction
- Detection Speed: Fast (heuristic-based pattern matching)
Use Cases
- UTXO Commitment Sync: Reduce bandwidth during initial sync
- Ongoing Sync: Skip spam transactions in filtered blocks
- Bandwidth Optimization: For nodes with limited bandwidth
- Storage Optimization: Reduce data that needs to be stored
- Network Efficiency: Reduce bandwidth for filtered block relay
- Mempool Management: Reject spam transactions at mempool entry (opt-in)
- Peer Flooding Prevention: Rate limit transactions per peer to prevent DoS
Additional Spam Mitigation
Already Implemented
- Input/Output Limits: Consensus-level limits (MAX_INPUTS = 1000, MAX_OUTPUTS = 1000) prevent transactions with excessive inputs/outputs
- Ancestor/Descendant Limits: Package limits prevent transaction package spam (default: 25 transactions, 101 kB)
- DoS Protection: Automatic peer banning for connection rate violations
- Per-Peer Byte Rate Limiting: Limits bytes per second per peer (default: 100 KB/s, 1 MB burst)
- Fee Rate Requirements for Large Transactions: Requires higher fees for large transactions (default: 2 sat/vB for transactions >1000 bytes)
- Spam-Aware Eviction: Evict spam transactions first when mempool is full (eviction strategy: SpamFirst)
- Spam-Specific Peer Banning: Auto-ban peers that repeatedly send spam transactions (default: ban after 10 spam transactions, 1 hour duration)
Per-Peer Byte Rate Limiting
Prevents large transaction flooding by limiting bytes per second per peer:
```toml
[mempool_policy]
tx_byte_rate_limit = 100000  # 100 KB/s
tx_byte_rate_burst = 1000000 # 1 MB burst
```
Fee Rate Requirements for Large Transactions
Large transactions must pay higher fees to discourage spam:
```toml
[mempool]
min_fee_rate_large_tx = 2       # 2 sat/vB (higher than standard 1 sat/vB)
large_tx_threshold_bytes = 1000 # Transactions >1 KB require higher fees
```
Spam-Aware Eviction Strategy
When mempool is full, spam transactions are evicted first:
```toml
[mempool_policy]
eviction_strategy = "spamfirst" # Evict spam transactions first
```
Note: Requires utxo-commitments feature. Falls back to lowest_fee_rate if feature is disabled.
Spam-Specific Peer Banning
Tracks spam violations per peer and auto-bans repeat offenders:
```toml
[spam_ban]
spam_ban_threshold = 10          # Ban after 10 spam transactions
spam_ban_duration_seconds = 3600 # Ban for 1 hour
```
Peers that repeatedly send spam transactions are automatically banned for the configured duration.
See Also
- UTXO Commitments - How spam filtering integrates with UTXO commitments
- Consensus Overview - Consensus layer introduction
- Network Protocol - Network protocol details
UTXO Commitments
Overview
UTXO Commitments enable fast synchronization of the Bitcoin UTXO set without requiring full blockchain download. The system uses cryptographic Merkle tree commitments with peer consensus verification, achieving 98% bandwidth savings compared to traditional full block download.
Architecture
Core Components
- Merkle Tree: Sparse Merkle Tree for incremental UTXO set updates
- Peer Consensus: N-of-M diverse peer verification model
- Spam Filtering: Filters spam transactions from commitments
- Verification: PoW-based commitment verification
- Network Integration: Works with TCP and Iroh transports
Code: mod.rs
Merkle Tree Implementation
Sparse Merkle Tree
The system uses a sparse Merkle tree for efficient incremental updates:
- Incremental Updates: Insert/remove UTXOs without full tree rebuild
- Proof Generation: Generate Merkle proofs for UTXO inclusion
- Root Calculation: Efficient root hash calculation
- SHA256 Hashing: Uses SHA256 for all hashing operations
Code: merkle_tree.rs
Usage
```rust
use blvm_consensus::utxo_commitments::{UtxoCommitmentSet, UtxoCommitment};

// Create UTXO commitment set
let mut commitment_set = UtxoCommitmentSet::new();

// Add UTXO
let outpoint = OutPoint { hash: [1; 32], index: 0 };
let utxo = UTXO { value: 1000, script_pubkey: vec![], height: 0 };
commitment_set.insert(outpoint, utxo)?;

// Generate commitment
let commitment = commitment_set.generate_commitment(block_hash, height)?;
```
Peer Consensus Protocol
N-of-M Verification Model
The peer consensus protocol discovers diverse peers and finds consensus among them to verify UTXO commitments without trusting any single peer.
Peer Diversity
Peers are selected for diversity across:
- ASN (Autonomous System Number): Maximum 2 peers per ASN
- Country: Geographic distribution
- Subnet: /16 subnet distribution
- Implementation: Different Bitcoin implementations (Bitcoin Core, btcd, etc.)
Code: peer_consensus.rs
Consensus Configuration
```rust
pub struct ConsensusConfig {
    pub min_peers: usize,         // Minimum: 5
    pub target_peers: usize,      // Target: 10
    pub consensus_threshold: f64, // 0.8 (80% agreement)
    pub max_peers_per_asn: usize, // 2
    pub safety_margin: Natural,   // 2016 blocks (~2 weeks)
}
```
Code: peer_consensus.rs
Consensus Process
- Discover Diverse Peers: Find peers across different ASNs, countries, subnets
- Request Commitments: Query each peer for UTXO commitment at checkpoint height
- Group Responses: Group commitments by value (merkle root + supply + count + height)
- Find Consensus: Identify group with highest agreement
- Verify Threshold: Check if agreement meets consensus threshold (80%)
- Verify Commitment: Verify consensus commitment against block headers and PoW
Code: peer_consensus.rs
Fast Sync Protocol
Initial Sync Process
- Download Headers: Download block headers from genesis to tip
- Select Checkpoint: Choose checkpoint height (safety margin back from tip)
- Request UTXO Sets: Query diverse peers for UTXO commitment at checkpoint
- Find Consensus: Use peer consensus to verify commitment
- Verify Commitment: Verify against block headers and PoW
- Sync Forward: Download filtered blocks from checkpoint to tip
- Update Incrementally: Update UTXO set incrementally for each block
Code: initial_sync.rs
Bandwidth Savings
The fast sync protocol achieves 98% bandwidth savings by:
- Headers Only: Download headers instead of full blocks (~80 bytes vs ~1 MB per block)
- Filtered Blocks: Download only relevant transactions (~2% of block size)
- Incremental Updates: Only download UTXO changes, not full set
Calculation:
- Traditional: ~500 GB (full blockchain)
- Fast Sync: ~10 GB (headers + filtered blocks)
- Savings: 98%
Spam Filtering Integration
UTXO Commitments use spam filtering to reduce bandwidth during sync. Spam filtering is a general-purpose feature that can be used independently of UTXO commitments.
For detailed spam filtering documentation, see: Spam Filtering
Integration with UTXO Commitments
When processing blocks for UTXO commitments, spam filtering is applied:
- Location: initial_sync.rs
- Process: All transactions are processed, but spam outputs are filtered out
- Benefit: 40-60% bandwidth reduction during ongoing sync
- Critical Design: INPUTS are always removed (maintains UTXO consistency), OUTPUTS are filtered (bandwidth savings)
Bandwidth Savings
- 40-60% bandwidth reduction during ongoing sync
- Maintains consensus correctness
- Enables efficient UTXO commitment synchronization
BIP158 Compact Block Filters
The node implements BIP158 compact block filters for light client support. While this is implemented at the node level, it integrates with UTXO commitments for efficient filtered block serving.
Location
- Node Implementation:
blvm-node/src/bip158.rs - Service:
blvm-node/src/network/filter_service.rs - Integration: Used for light client support
Capabilities
Filter Generation
- Golomb-Rice Coded Sets (GCS) for efficient encoding
- False Positive Rate: ~1 in 524,288 (P=19)
- Filter Contents:
- All spendable output scriptPubKeys in the block
- All scriptPubKeys from outputs spent by block’s inputs
Filter Header Chain
- Maintains filter header chain for efficient verification
- Checkpoints every 1000 blocks (per BIP157)
- Enables light clients to verify filter integrity
Algorithm
- Collect Scripts: All output scriptPubKeys from block transactions and all scriptPubKeys from UTXOs being spent
- Hash to Range: Hash each script with SHA256, map to range [0, N*M) where N = number of elements, M = 2^19
- Golomb-Rice Encoding: Sort hashed values, compute differences, encode using Golomb-Rice
- Filter Matching: Light clients hash their scripts and check if script hash is in set
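Step 2's hash-to-range mapping avoids modulo bias by using a widening multiply. A sketch with the parameters quoted above (N elements, M = 2^19); `map_to_range` is an illustrative name:

```rust
/// Map a 64-bit script hash uniformly onto [0, N*M) with a widening
/// multiply-and-shift instead of a biased (and slower) modulo reduction.
fn map_to_range(hash: u64, n: u64, m: u64) -> u64 {
    ((hash as u128 * (n * m) as u128) >> 64) as u64
}

fn main() {
    let (n, m) = (100u64, 1u64 << 19); // N elements, M = 2^19 as above
    assert_eq!(map_to_range(0, n, m), 0);                // smallest hash -> 0
    assert_eq!(map_to_range(u64::MAX, n, m), n * m - 1); // largest hash -> top of range
    assert!(map_to_range(0x1234_5678_9abc_def0, n, m) < n * m);
}
```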
Integration with UTXO Commitments
BIP158 filters can be included in FilteredBlockMessage alongside spam-filtered transactions and UTXO commitments, enabling efficient light client synchronization.
Code: bip158.rs
Verification
Verification Levels
- Minimal: Peer consensus only
- Standard: Peer consensus + PoW + supply checks
- Paranoid: All checks + background genesis verification
Code: config.rs
Verification Checks
- PoW Verification: Verify block headers have valid proof-of-work
- Supply Verification: Verify total supply matches expected value
- Header Chain Verification: Verify commitment height matches header chain
- Merkle Root Verification: Verify Merkle root matches UTXO set
Code: verification.rs
Network Integration
Transport Support
UTXO Commitments work with both TCP and Iroh transports via the transport abstraction layer:
- TCP: Bitcoin P2P compatible
- Iroh/QUIC: QUIC with NAT traversal and DERP
Code: utxo_commitments_client.rs
Network Messages
- GetUTXOSet: Request UTXO commitment from peer
- UTXOSet: Response with UTXO commitment
- GetFilteredBlock: Request filtered block (spam-filtered)
- FilteredBlock: Response with filtered block
Code: network_integration.rs
Configuration
Sync Modes
- PeerConsensus: Use peer consensus for initial sync (fast, trusts N of M peers)
- Genesis: Sync from genesis (slow, but no trust required)
- Hybrid: Use peer consensus but verify from genesis in background
Code: config.rs
Configuration Example
[utxo_commitments]
sync_mode = "PeerConsensus" # or "Genesis" or "Hybrid"
verification_level = "Standard" # or "Minimal" or "Paranoid"
[utxo_commitments.consensus]
min_peers = 5
target_peers = 10
consensus_threshold = 0.8
max_peers_per_asn = 2
safety_margin = 2016
[utxo_commitments.spam_filter]
min_value = 546 # dust threshold
min_fee_rate = 1 # sat/vB
Code: config.rs
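The consensus parameters above (min_peers, consensus_threshold, max_peers_per_asn) combine roughly as sketched below. This is a hedged illustration of the N-of-M check, not the crate's actual implementation; the `consensus_reached` function is hypothetical.

```rust
use std::collections::HashMap;

/// Given (commitment root, peer ASN) pairs, decide whether a quorum of
/// ASN-diverse peers agrees on a single commitment root.
fn consensus_reached(
    commitments: &[([u8; 32], u32)],
    min_peers: usize,
    threshold: f64,
    max_per_asn: usize,
) -> Option<[u8; 32]> {
    // Enforce ASN diversity: keep at most max_per_asn responses per ASN.
    let mut per_asn: HashMap<u32, usize> = HashMap::new();
    let filtered: Vec<&[u8; 32]> = commitments
        .iter()
        .filter(|(_, asn)| {
            let c = per_asn.entry(*asn).or_insert(0);
            *c += 1;
            *c <= max_per_asn
        })
        .map(|(root, _)| root)
        .collect();
    if filtered.len() < min_peers {
        return None;
    }
    // Count votes per commitment root.
    let mut votes: HashMap<&[u8; 32], usize> = HashMap::new();
    for &root in &filtered {
        *votes.entry(root).or_insert(0) += 1;
    }
    votes
        .into_iter()
        .find(|(_, n)| (*n as f64) / (filtered.len() as f64) >= threshold)
        .map(|(root, _)| *root)
}

fn main() {
    let root_a = [1u8; 32];
    let root_b = [2u8; 32];
    let peers = vec![
        (root_a, 100), (root_a, 200), (root_a, 300),
        (root_a, 400), (root_a, 500), (root_b, 600),
    ];
    // 5 of 6 ASN-diverse peers agree (83% >= 80%): consensus on root_a.
    assert_eq!(consensus_reached(&peers, 5, 0.8, 2), Some(root_a));
    // Too few peers: no consensus.
    assert_eq!(consensus_reached(&peers[..4], 5, 0.8, 2), None);
}
```

The ASN cap is what prevents a single hosting provider from manufacturing a quorum.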
Formal Verification
The UTXO Commitments module includes blvm-spec-lock proofs verifying:
- Merkle tree operations (insert, remove, root calculation)
- Commitment generation
- Verification logic
- Peer consensus calculations
Location: blvm-consensus/src/utxo_commitments/
UTXO Proof Verification
Overview
UTXO proof verification provides cryptographic proofs that UTXO set operations maintain correctness properties. The system uses blvm-spec-lock to formally verify storage operations against mathematical specifications from the Orange Paper.
Code: utxostore_proofs.rs
Verified Properties
The proof verification system verifies the following mathematical properties:
1. UTXO Uniqueness (Orange Paper Theorem 5.3.1)
Mathematical Specification: ∀ outpoint: has_utxo(outpoint) ⟹ get_utxo(outpoint) = Some(utxo)
Verification: Spec-lock verifies that if a UTXO exists for an outpoint, retrieving it returns the same UTXO that was stored.
Code: verify_utxo_uniqueness
2. Add/Remove Consistency
Mathematical Specification: add_utxo(op, utxo); remove_utxo(op); has_utxo(op) = false
Verification: Spec-lock verifies that adding and then removing a UTXO results in the UTXO no longer existing.
Code: verify_add_remove_consistency
3. Spent Output Tracking
Mathematical Specification: mark_spent(op); is_spent(op) = true
Verification: Spec-lock verifies that marking an output as spent correctly updates the spent state.
Code: verify_spent_output_tracking
4. Value Conservation (Orange Paper Theorem 5.3.2)
Mathematical Specification: total_value(us) = Σ_{op ∈ dom(us)} us(op).value
Verification: Spec-lock verifies that the total value of the UTXO set equals the sum of all individual UTXO values.
Code: verify_value_conservation
5. Count Accuracy
Mathematical Specification: utxo_count() = |{utxo : has_utxo(utxo)}|
Verification: Spec-lock verifies that the UTXO count matches the number of UTXOs that exist.
Code: verify_count_accuracy
6. Round-Trip Storage (Orange Paper Theorem 5.3.3)
Mathematical Specification: load_utxo_set(store_utxo_set(us)) = us
Verification: Spec-lock verifies that storing and then loading a UTXO set results in the same set.
Code: verify_roundtrip_storage
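Several of these properties can be exercised against a toy in-memory store. The `UtxoStore` type below is a hypothetical stand-in for illustration, not the crate's storage type; the spec-lock proofs verify these properties symbolically rather than by example.

```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq, Debug)]
struct Utxo {
    value: u64,
}

#[derive(Default)]
struct UtxoStore {
    map: HashMap<(String, u32), Utxo>,
}

impl UtxoStore {
    fn add_utxo(&mut self, op: (String, u32), utxo: Utxo) {
        self.map.insert(op, utxo);
    }
    fn remove_utxo(&mut self, op: &(String, u32)) {
        self.map.remove(op);
    }
    fn has_utxo(&self, op: &(String, u32)) -> bool {
        self.map.contains_key(op)
    }
    fn get_utxo(&self, op: &(String, u32)) -> Option<&Utxo> {
        self.map.get(op)
    }
    /// Value conservation: total equals the sum of individual values.
    fn total_value(&self) -> u64 {
        self.map.values().map(|u| u.value).sum()
    }
    fn utxo_count(&self) -> usize {
        self.map.len()
    }
}

fn main() {
    let mut store = UtxoStore::default();
    let op = ("txid00".to_string(), 0u32);
    store.add_utxo(op.clone(), Utxo { value: 50_000 });
    store.add_utxo(("txid01".to_string(), 1), Utxo { value: 25_000 });
    // Property 1 (uniqueness): retrieving returns the stored UTXO.
    assert_eq!(store.get_utxo(&op), Some(&Utxo { value: 50_000 }));
    // Property 4 (value conservation) and 5 (count accuracy).
    assert_eq!(store.total_value(), 75_000);
    assert_eq!(store.utxo_count(), 2);
    // Property 2 (add/remove consistency).
    store.remove_utxo(&op);
    assert!(!store.has_utxo(&op));
}
```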
Verification Workflow
The proof verification system works as follows:
- Proof Generation: blvm-spec-lock verifies each #[spec_locked] function
- Property Verification: Each proof verifies a specific mathematical property
- Integration: Proofs are integrated into the codebase and verified during CI/CD
- Runtime Assertions: Verified properties can be checked at runtime for additional safety
Usage
Proof verification is automatic and integrated into the build system:
# Run spec-lock verification
cargo spec-lock verify --crate-path .
Benefits
- Mathematical Correctness: Properties are proven, not just tested
- Orange Paper Compliance: Proofs verify compliance with Orange Paper specifications
- Runtime Safety: Verified properties can be checked at runtime
- CI/CD Integration: Proofs run automatically in continuous integration
Integration with UTXO Commitments
UTXO proof verification ensures that UTXO set operations used by UTXO commitments maintain correctness:
- Merkle Tree Operations: Verified to maintain UTXO uniqueness and value conservation
- Storage Operations: Verified to maintain round-trip consistency
- Commitment Generation: Uses verified UTXO set operations
Code: utxostore_proofs.rs
Usage
Initial Sync
#![allow(unused)]
fn main() {
use blvm_consensus::utxo_commitments::InitialSync;
let sync = InitialSync::new(
peer_consensus,
network_client,
config,
);
// Sync from checkpoint
let commitment = sync.sync_from_checkpoint(
header_chain,
diverse_peers,
).await?;
// Complete sync forward with full validation
// Note: checkpoint_utxo_set should be obtained from the verified commitment
// For now, passing None starts with empty set (commitment verified at checkpoint)
sync.complete_sync_from_checkpoint(
&mut utxo_tree,
checkpoint_height,
current_tip,
network_client,
get_block_hash_fn,
peer_id,
Network::Mainnet,
network_time,
Some(&header_chain),
None, // checkpoint_utxo_set - can be obtained separately if needed
).await?;
}
Update After Block
#![allow(unused)]
fn main() {
use blvm_consensus::utxo_commitments::update_commitments_after_block;
update_commitments_after_block(
&mut utxo_tree,
block,
height,
)?;
}
Code: initial_sync.rs
Benefits
- Fast Sync: 98% bandwidth savings vs full blockchain download
- Security: N-of-M peer consensus prevents single peer attacks
- Efficiency: Incremental updates, no full set download
- Flexibility: Multiple sync modes and verification levels
- Transport Agnostic: Works with TCP or QUIC
- Formal Verification: blvm-spec-lock proofs ensure correctness
Components
The UTXO Commitments system includes:
- Sparse Merkle Tree with incremental updates
- Peer consensus protocol (N-of-M verification)
- Spam filtering
- Commitment verification
- Network integration (TCP and Iroh)
- Fast sync protocol
- blvm-spec-lock proofs
Location: blvm-consensus/src/utxo_commitments/, blvm-node/src/network/utxo_commitments_client.rs
See Also
- Consensus Overview - Consensus layer introduction
- Consensus Architecture - Consensus layer design
- Network Protocol - Network protocol details
- Node Configuration - UTXO commitment configuration
- Storage Backends - Storage backend details
- Performance Optimizations - IBD optimizations and batch storage
- Formal Verification - Spec-lock verification system
Protocol Layer Overview
The protocol layer (blvm-protocol) abstracts the Bitcoin protocol to support multiple network variants and protocol evolution. It sits between the pure mathematical consensus rules (blvm-consensus) and the Bitcoin node implementation (blvm-node), supporting mainnet, testnet, regtest, and future protocol variants.
Architecture Position
Tier 3 of the 6-tier Bitcoin Commons architecture:
1. Orange Paper (mathematical foundation)
2. blvm-consensus (pure math implementation)
3. blvm-protocol (Bitcoin abstraction) ← THIS LAYER
4. blvm-node (full node implementation)
5. blvm-sdk (developer toolkit)
6. blvm-commons (governance enforcement)
Protocol Variants
The protocol layer supports multiple Bitcoin network variants:
| Variant | Network Name | Default Port | Purpose |
|---|---|---|---|
| BitcoinV1 | mainnet | 8333 | Production Bitcoin network |
| Testnet3 | testnet | 18333 | Bitcoin test network |
| Regtest | regtest | 18444 | Regression testing network |
Network Parameters
Each variant has specific network parameters:
- Magic Bytes: P2P protocol identification (mainnet: 0xf9beb4d9, testnet: 0x0b110907, regtest: 0xfabfb5da)
- Genesis Blocks: Network-specific genesis block hashes
- Difficulty Targets: Proof-of-work difficulty adjustment
- Halving Intervals: Block subsidy halving schedule (210,000 blocks)
- Feature Activation: SegWit, Taproot activation heights
Code: network_params.rs
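The per-variant parameters and the halving schedule can be sketched as follows. The struct and function names are illustrative assumptions (the real types live in network_params.rs); the magic bytes, ports, and the 210,000-block interval are the documented values, and regtest's 150-block halving interval is assumed to follow the standard regtest convention.

```rust
#[derive(Debug, PartialEq)]
enum ProtocolVariant {
    BitcoinV1,
    Testnet3,
    Regtest,
}

struct NetworkParams {
    magic: [u8; 4],
    default_port: u16,
    halving_interval: u32,
}

fn params_for(variant: &ProtocolVariant) -> NetworkParams {
    match variant {
        ProtocolVariant::BitcoinV1 => NetworkParams {
            magic: [0xf9, 0xbe, 0xb4, 0xd9],
            default_port: 8333,
            halving_interval: 210_000,
        },
        ProtocolVariant::Testnet3 => NetworkParams {
            magic: [0x0b, 0x11, 0x09, 0x07],
            default_port: 18333,
            halving_interval: 210_000,
        },
        ProtocolVariant::Regtest => NetworkParams {
            magic: [0xfa, 0xbf, 0xb5, 0xda],
            default_port: 18444,
            halving_interval: 150, // assumed regtest convention
        },
    }
}

/// Block subsidy under the halving schedule: 50 BTC (in satoshis),
/// halved once per interval.
fn block_subsidy(height: u32, halving_interval: u32) -> u64 {
    let halvings = height / halving_interval;
    if halvings >= 64 {
        return 0;
    }
    (50 * 100_000_000u64) >> halvings
}

fn main() {
    let params = params_for(&ProtocolVariant::BitcoinV1);
    assert_eq!(params.default_port, 8333);
    // Genesis subsidy is 50 BTC; after 4 halvings it is 3.125 BTC.
    assert_eq!(block_subsidy(0, params.halving_interval), 5_000_000_000);
    assert_eq!(block_subsidy(840_000, params.halving_interval), 312_500_000);
}
```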
Core Components
Protocol Engine
The BitcoinProtocolEngine is the main interface:
#![allow(unused)]
fn main() {
pub struct BitcoinProtocolEngine {
version: ProtocolVersion,
network_params: NetworkParams,
config: ProtocolConfig,
}
}
Features:
- Protocol variant selection
- Network parameter access
- Feature flag management
- Validation rule enforcement
Code: lib.rs
Network Messages
Supports Bitcoin P2P protocol messages:
Core Messages:
- Version, VerAck - Connection handshake
- Addr, GetAddr - Peer address management
- Inv, GetData, NotFound - Inventory management
- Block, Tx - Block and transaction relay
- GetHeaders, Headers, GetBlocks - Header synchronization
- Ping, Pong - Connection keepalive
- MemPool, FeeFilter - Mempool synchronization
BIP152 (Compact Block Relay):
- SendCmpct - Compact block negotiation
- CmpctBlock - Compact block transmission
- GetBlockTxn, BlockTxn - Transaction reconstruction
FIBRE Protocol:
- FIBREPacket - High-performance relay protocol
- Packet format and serialization
- Performance optimizations
Governance Messages:
- Governance messages via P2P protocol
- Message format and routing
- Integration with governance system
Commons Extensions:
- GetUTXOSet, UTXOSet - UTXO commitment protocol
- GetFilteredBlock, FilteredBlock - Spam-filtered blocks
- GetBanList, BanList - Distributed ban list sharing
Code: messages.rs
Service Flags
Service flags indicate node capabilities:
Standard Flags:
- NODE_NETWORK - Full node with all blocks
- NODE_WITNESS - SegWit support
- NODE_COMPACT_FILTERS - BIP157/158 support
- NODE_NETWORK_LIMITED - Pruned node
Commons Flags:
- NODE_UTXO_COMMITMENTS - UTXO commitment support
- NODE_BAN_LIST_SHARING - Ban list sharing
- NODE_FIBRE - FIBRE protocol support
- NODE_DANDELION - Dandelion++ privacy relay
- NODE_PACKAGE_RELAY - BIP331 package relay
Code: mod.rs
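Service flags are a bitmask advertised during the version handshake. The sketch below uses the bit positions standardized for the Bitcoin P2P protocol for the standard flags; how the Commons flags map to bits is an assumption here, so only the standard ones are shown.

```rust
// Standard Bitcoin P2P service bits.
const NODE_NETWORK: u64 = 1 << 0;          // full node with all blocks
const NODE_WITNESS: u64 = 1 << 3;          // SegWit support
const NODE_COMPACT_FILTERS: u64 = 1 << 6;  // BIP157/158 support
const NODE_NETWORK_LIMITED: u64 = 1 << 10; // pruned node

/// A peer supports a capability iff every bit of the flag is set.
fn supports(services: u64, flag: u64) -> bool {
    services & flag == flag
}

fn main() {
    let peer_services = NODE_NETWORK | NODE_WITNESS;
    assert!(supports(peer_services, NODE_WITNESS));
    assert!(!supports(peer_services, NODE_COMPACT_FILTERS));
    println!("peer services bitmask: {:#x}", peer_services);
}
```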
Validation Rules
Protocol-specific validation rules:
- Size Limits: Block (4MB), transaction (1MB), script (10KB)
- Feature Flags: SegWit, Taproot, RBF support
- Fee Rules: Minimum and maximum fee rates
- DoS Protection: Message size limits, address count limits
Code: mod.rs
Commons-Specific Extensions
UTXO Commitments
Protocol messages for UTXO set synchronization:
- GetUTXOSet - Request UTXO set at specific height
- UTXOSet - UTXO set response with merkle proof
Code: utxo_commitments.rs
Filtered Blocks
Spam-filtered block relay for efficient syncing:
- GetFilteredBlock - Request filtered block
- FilteredBlock - Filtered block with spam transactions removed
Code: filtered_blocks.rs
Ban List Sharing
Distributed ban list management:
- GetBanList - Request ban list
- BanList - Ban list response with signatures
Code: ban_list.rs
BIP Support
Implemented Bitcoin Improvement Proposals:
- BIP152: Compact Block Relay
- BIP157: Client-side Block Filtering
- BIP158: Compact Block Filters
- BIP173/350/351: Bech32/Bech32m Address Encoding
- BIP70: Payment Protocol
Code: mod.rs
Protocol Evolution
The protocol layer supports protocol evolution:
- Version Support: Multiple protocol versions
- Feature Management: Enable/disable features based on version
- Breaking Changes: Track and manage protocol evolution
- Backward Compatibility: Maintain compatibility with existing nodes
Usage Example
#![allow(unused)]
fn main() {
use blvm_protocol::{BitcoinProtocolEngine, ProtocolVersion};
// Create a mainnet protocol engine
let engine = BitcoinProtocolEngine::new(ProtocolVersion::BitcoinV1)?;
// Get network parameters
let params = engine.get_network_params();
println!("Network: {}", params.network_name);
println!("Port: {}", params.default_port);
// Check feature support
if engine.supports_feature("segwit") {
println!("SegWit is supported");
}
}
See Also
- Protocol Architecture - Protocol layer design and components
- Network Protocol - Transport abstraction and protocol details
- Message Formats - P2P message specifications
- Protocol Specifications - BIP implementations
- Node Configuration - Configuring protocol variants
Protocol Layer Architecture
The protocol layer (blvm-protocol) provides Bitcoin protocol abstraction that enables multiple Bitcoin variants and protocol evolution.
Architecture Position
This is Tier 3 of the 6-tier Bitcoin Commons architecture (BLVM technology stack):
1. Orange Paper (mathematical foundation)
2. blvm-consensus (pure math implementation)
3. blvm-protocol (Bitcoin abstraction) ← THIS CRATE
4. blvm-node (full node implementation)
5. blvm-sdk (developer toolkit)
6. blvm-commons (governance enforcement)
Purpose
The blvm-protocol sits between the pure mathematical consensus rules (blvm-consensus) and the full Bitcoin implementation (blvm-node). It provides:
Protocol Abstraction
- Multiple Variants: Support for mainnet, testnet, and regtest
- Network Parameters: Magic bytes, ports, genesis blocks, difficulty targets
- Feature Flags: SegWit, Taproot, RBF, and other protocol features
- Validation Rules: Protocol-specific size limits and validation logic
Protocol Evolution
- Version Support: Bitcoin V1, V2 (planned), and experimental variants
- Feature Management: Enable/disable features based on protocol version
- Breaking Changes: Track and manage protocol evolution
Core Components
Protocol Variants
- BitcoinV1: Production Bitcoin mainnet
- Testnet3: Bitcoin test network
- Regtest: Regression testing network
Network Parameters
- Magic Bytes: P2P protocol identification
- Ports: Default network ports
- Genesis Blocks: Network-specific genesis blocks
- Difficulty: Proof-of-work targets
- Halving: Block subsidy intervals
For more details, see the blvm-protocol README.
See Also
- Protocol Overview - Protocol layer introduction
- Network Protocol - Transport and protocol details
- Message Formats - P2P message specifications
- Consensus Architecture - Underlying consensus layer
- Node Configuration - Protocol variant configuration
Message Formats
The protocol layer defines message formats for Bitcoin P2P protocol communication.
Protocol Variants
Each protocol variant (mainnet, testnet, regtest) has specific message formats and network parameters:
- Magic Bytes: Unique identifier for each network variant
- Message Headers: Standard Bitcoin message header format
- Message Types: version, verack, inv, getdata, tx, block, etc.
Network Parameters
Protocol-specific parameters include:
- Default ports (mainnet: 8333, testnet: 18333, regtest: 18444)
- Genesis block hashes
- Difficulty adjustment intervals
- Block size limits
- Feature activation heights
For detailed protocol specifications, see the blvm-protocol README.
See Also
- Protocol Architecture - Protocol layer design
- Network Protocol - Transport and message handling
- Protocol Overview - Protocol layer introduction
- Protocol Specifications - BIP implementations
Network Protocol
The protocol layer abstracts Bitcoin’s P2P network protocol, supporting multiple network variants. See Protocol Overview for details.
Protocol Abstraction
The blvm-protocol abstracts P2P message formats (standard Bitcoin wire protocol), connection management, peer discovery, block synchronization, and transaction relay. See Protocol Architecture for details.
Network Variants
Mainnet (BitcoinV1)
- Production Bitcoin network
- Full consensus rules
- Real economic value
Testnet3
- Bitcoin test network
- Same consensus rules as mainnet
- Different network parameters
- No real economic value
Regtest
- Regression testing network
- Configurable difficulty
- Isolated from other networks
- Fast block generation for testing
For implementation details, see the blvm-protocol README.
Transport Abstraction Layer
The network layer uses multiple transport protocols through a unified abstraction (see Transport Abstraction):
NetworkManager
└── Transport Trait (abstraction)
├── TcpTransport (Bitcoin P2P compatible)
└── IrohTransport (QUIC-based, optional)
Transport Options
TCP Transport (Default): Bitcoin P2P protocol compatibility using traditional TCP sockets. Maintains Bitcoin wire protocol format and is compatible with standard Bitcoin nodes. See Transport Abstraction.
Iroh Transport: QUIC-based transport using Iroh for P2P networking with public key-based peer identity and NAT traversal support. See Transport Abstraction.
Transport Selection
Configure transport via node configuration:
[network]
transport_preference = "tcp_only" # or "iroh_only", "hybrid"
Modes: tcp_only (default, Bitcoin compatible), iroh_only (experimental), hybrid (both simultaneously)
The protocol adapter serializes between blvm-consensus NetworkMessage types and transport-specific wire formats. The message bridge processes messages and generates responses. Default is TCP-only; enable Iroh via iroh feature flag.
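The mapping from the `transport_preference` setting to the set of active transports can be sketched as follows; the enum and function names are illustrative assumptions, not the node's actual configuration types.

```rust
/// Transport preference as set in the [network] config section.
enum TransportPreference {
    TcpOnly,
    IrohOnly,
    Hybrid,
}

/// Resolve a preference to the transports the NetworkManager would start.
fn transports(pref: &TransportPreference) -> Vec<&'static str> {
    match pref {
        // Default: Bitcoin P2P compatible TCP only.
        TransportPreference::TcpOnly => vec!["tcp"],
        // Experimental: Iroh/QUIC only.
        TransportPreference::IrohOnly => vec!["iroh"],
        // Hybrid: both transports run simultaneously.
        TransportPreference::Hybrid => vec!["tcp", "iroh"],
    }
}

fn main() {
    assert_eq!(transports(&TransportPreference::TcpOnly), vec!["tcp"]);
    assert_eq!(transports(&TransportPreference::Hybrid), vec!["tcp", "iroh"]);
    println!("hybrid transports: {:?}", transports(&TransportPreference::Hybrid));
}
```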
See Also
- Protocol Architecture - Protocol layer design
- Message Formats - P2P message specifications
- Protocol Overview - Protocol layer introduction
- Node Configuration - Network and transport configuration
- Protocol Specifications - BIP implementations
Node Implementation Overview
The node implementation (blvm-node) is a minimal, production-ready Bitcoin node that adds only non-consensus infrastructure to the consensus and protocol layers. Consensus logic comes from blvm-consensus, and protocol abstraction from blvm-protocol.
Architecture
The node follows a layered architecture:
blvm-node components
├── Network Manager    - P2P networking, peer management
├── Storage Layer      - Block/UTXO storage
├── RPC Server         - JSON-RPC 2.0 API
├── Module Manager     - Process-isolated modules
├── Mempool Manager    - Transaction mempool
├── Mining Coordinator - Block template generation
└── Payment Processor  - CTV support
        │
        ▼
blvm-protocol (protocol abstraction)
        │
        ▼
blvm-consensus (consensus validation)
The networking, storage, mempool, mining, and payment components all go through blvm-protocol, which delegates validation to blvm-consensus. The Module Manager orchestrates the network, storage, and mempool components, while the RPC Server reads from storage, the mempool, and the mining coordinator.
Key Components
Network Manager
- P2P protocol implementation (Bitcoin wire protocol)
- Multi-transport support (TCP, Quinn QUIC, Iroh)
- Peer connection management
- Message routing and relay
- Privacy protocols (Dandelion++, Fibre)
- Package relay (BIP331)
Code: mod.rs
Storage Layer
- Database abstraction with multiple backends (see Storage Backends)
- Automatic backend fallback on failure
- Block storage and indexing
- UTXO set management
- Chain state tracking
- Transaction indexing
- Pruning support
Code: mod.rs
RPC Server
- JSON-RPC 2.0 compliant API (see RPC API Reference)
- REST API (optional feature, runs alongside JSON-RPC)
- Optional QUIC transport support (see QUIC RPC)
- Authentication and rate limiting
- Method coverage
Code: mod.rs
Module System
- Process-isolated modules (see Module System Architecture)
- IPC communication (Unix domain sockets, see Module IPC Protocol)
- Security sandboxing
- Permission-based API access
- Hot reload support
Code: manager.rs
Mempool Manager
- Transaction validation and storage
- Fee-based transaction selection
- RBF (Replace-By-Fee) support with 4 configurable modes (Disabled, Conservative, Standard, Aggressive)
- Comprehensive mempool policies and limits
- Transaction expiry
- Advanced indexing (address and value range indexing)
Code: mempool.rs
Mining Coordinator
- Block template generation
- Stratum V2 protocol support
- Mining job distribution
Code: miner.rs
Payment Processing
- CTV (CheckTemplateVerify) support
- Lightning Network integration
- Payment vaults
- Covenant support
- Payment state management
Code: mod.rs
Governance Integration
- P2P governance message relay
- Webhook handlers for governance events
- User signaling support
Code: mod.rs
Design Principles
- Zero Consensus Re-implementation: All consensus logic delegated to blvm-consensus
- Protocol Abstraction: Uses blvm-protocol for variant support (mainnet, testnet, regtest)
- Pure Infrastructure: Adds storage, networking, RPC, orchestration only
- Production Ready: Full Bitcoin node functionality with performance optimizations
Features
Network Features
- Multi-transport architecture (TCP, QUIC)
- Privacy-preserving relay (Dandelion++)
- High-performance block relay (Fibre)
- Package relay (BIP331)
- UTXO commitments support
- LAN peering system (automatic local network discovery, 10-50x IBD speedup)
Storage Features
- Multiple database backends with abstraction layer (redb, sled, rocksdb)
- Bitcoin Core compatibility via RocksDB backend
- Automatic backend fallback on failure
- Pruning support
- Advanced transaction indexing (address and value range indexes)
- UTXO set management
Security Features
- IBD bandwidth protection (per-peer/IP/subnet limits, reputation scoring)
Module Features
- Process isolation
- IPC communication
- Security sandboxing
- Hot reload
- Module registry
Mining Features
- Block template generation
- Stratum V2 protocol
- Merge mining support
- Mining pool coordination
Payment Features
- Lightning Network module support
- Payment vault management
- Covenant enforcement
- Payment state machines
Integration Features
- Governance webhook integration
- ZeroMQ notifications (optional)
- REST API alongside JSON-RPC
- Module registry (P2P discovery)
Node Lifecycle
- Initialization: Load configuration, initialize storage, create network manager
- Startup: Connect to P2P network, discover peers, load modules
- Sync: Download and validate blockchain history
- Running: Validate blocks/transactions, relay messages, serve RPC requests
- Shutdown: Graceful shutdown of all components
Code: mod.rs
Metrics and Monitoring
The node includes metrics collection:
- Network Metrics: Peer count, bytes sent/received, connection statistics
- Storage Metrics: Block count, UTXO count, database size
- RPC Metrics: Request count, error rate, response times
- Performance Metrics: Block validation time, transaction processing time
- System Metrics: CPU usage, memory usage, disk I/O
Code: metrics.rs
See Also
- Installation - Installing the node
- Quick Start - Running your first node
- Node Configuration - Configuration options
- Node Operations - Node management and operations
- RPC API Reference - JSON-RPC API documentation
- Mining Integration - Mining functionality
- Module System - Module system architecture
- Storage Backends - Storage backend details
Node Configuration
BLVM node configuration covers networking, storage, RPC, mining, mempool, and module settings to support different use cases.
Protocol Variants
The node supports multiple Bitcoin protocol variants: Regtest (default, regression testing network for development), Testnet3 (Bitcoin test network), and BitcoinV1 (production Bitcoin mainnet). See Protocol Variants for details.
Configuration File
Create a blvm.toml configuration file:
[network]
listen_addr = "127.0.0.1:8333" # Network listening address (default: 127.0.0.1:8333)
protocol_version = "BitcoinV1" # Protocol version: "BitcoinV1" (mainnet), "Testnet3" (testnet), "Regtest" (regtest)
transport_preference = "tcp_only" # Transport preference (default: "tcp_only")
max_peers = 100 # Maximum number of peers (default: 100)
enable_self_advertisement = true # Send own address to peers (default: true)
[storage]
data_dir = "/var/lib/blvm"
backend = "auto" # Auto-select best available backend (prefers redb)
[rpc]
enabled = true
port = 8332
host = "127.0.0.1" # Bind address
[mining]
enabled = false
Default Values:
- listen_addr: 127.0.0.1:8333 (localhost, mainnet port)
- protocol_version: "BitcoinV1" (Bitcoin mainnet)
- transport_preference: "tcp_only" (TCP transport only)
- max_peers: 100 (maximum peer connections)
- enable_self_advertisement: true (advertise own address to peers)
Environment Variables
You can also configure via environment variables:
export BLVM_NETWORK=testnet
export BLVM_DATA_DIR=/var/lib/blvm
export BLVM_RPC_PORT=8332
Command Line Options
# Start with specific network
blvm --network testnet
# Use custom config file
blvm --config /path/to/config.toml
# Override data directory
blvm --data-dir /custom/path
Storage Backends
The node uses multiple storage backends with automatic fallback:
Database Backends
- redb (default, recommended): Production-ready embedded database (see Storage Backends)
- sled: Beta, fallback option (see Storage Backends)
- auto: Auto-select based on availability (prefers redb, falls back to sled)
Storage Configuration
[storage]
data_dir = "/var/lib/blvm"
backend = "auto" # or "redb", "sled"
[storage.cache]
block_cache_mb = 100
utxo_cache_mb = 50
header_cache_mb = 10
[storage.pruning]
enabled = false
keep_blocks = 288 # Keep last 288 blocks (2 days)
Backend Selection
The system automatically selects the best available backend:
- Attempts to use redb (default)
- Falls back to sled if redb fails and sled is available
- Returns error if no backend is available
Cache Configuration
Storage cache sizes can be configured:
- Block cache: Default 100 MB, caches recently accessed blocks
- UTXO cache: Default 50 MB, caches frequently accessed UTXOs
- Header cache: Default 10 MB, caches block headers
Pruning
Pruning reduces storage requirements by removing old block data:
[storage.pruning]
enabled = true
keep_blocks = 288 # Keep last 288 blocks (2 days)
Pruning Modes:
- Disabled (default): Keep all blocks (full archival node)
- Light client: Keep last N blocks (configurable)
- Full pruning: Remove all blocks, keep only UTXO set (planned)
Note: Pruning reduces storage but limits ability to serve historical blocks to peers.
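The keep_blocks retention rule amounts to a simple height threshold: with keep_blocks = 288, any block more than 288 blocks below the tip becomes prunable. A hedged sketch (the `prunable` helper is hypothetical, not the node's API):

```rust
/// A block is prunable once it falls outside the last `keep_blocks`
/// blocks, i.e. heights (tip - keep_blocks + 1)..=tip are retained.
fn prunable(height: u64, tip_height: u64, keep_blocks: u64) -> bool {
    height + keep_blocks <= tip_height
}

fn main() {
    // With keep_blocks = 288 and tip at 500, heights 213..=500 are kept.
    assert!(prunable(100, 500, 288));
    assert!(!prunable(300, 500, 288));
    // Early in the chain nothing is prunable yet.
    assert!(!prunable(0, 100, 288));
    println!("retention window ok");
}
```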
Network Configuration
Transport Options
Configure transport selection (see Transport Abstraction):
[network]
transport_preference = "tcp_only" # Options: "tcp_only" (default), "iroh_only" (requires iroh feature), "quinn_only" (requires quinn feature), "hybrid" (requires iroh feature), "all" (requires both iroh and quinn features)
Available Transport Options:
"tcp_only"- TCP transport only (default, Bitcoin P2P compatible)"iroh_only"- Iroh QUIC transport only (requiresirohfeature)"quinn_only"- Quinn QUIC transport only (requiresquinnfeature)"hybrid"- TCP + Iroh hybrid mode (requiresirohfeature)"all"- All transports enabled (requires bothirohandquinnfeatures)
Feature Requirements:
- iroh feature: Enables Iroh QUIC transport with NAT traversal
- quinn feature: Enables standalone Quinn QUIC transport
RBF Configuration
Configure Replace-By-Fee (RBF) behavior with 4 modes: Disabled, Conservative, Standard (default), and Aggressive.
RBF Modes
Disabled: No RBF replacements allowed
[rbf]
mode = "disabled"
Conservative: Strict rules with higher fee requirements
[rbf]
mode = "conservative"
min_fee_rate_multiplier = 2.0
min_fee_bump_satoshis = 5000
min_confirmations = 1
max_replacements_per_tx = 3
cooldown_seconds = 300
Standard (default): BIP125-compliant RBF
[rbf]
mode = "standard"
min_fee_rate_multiplier = 1.1
min_fee_bump_satoshis = 1000
Aggressive: Relaxed rules for miners
[rbf]
mode = "aggressive"
min_fee_rate_multiplier = 1.05
min_fee_bump_satoshis = 500
allow_package_replacements = true
See RBF and Mempool Policies for complete configuration guide.
Advanced Indexing
Enable address and value range indexing for efficient queries:
[storage.indexing]
enable_address_index = true
enable_value_index = true
strategy = "eager" # or "lazy"
max_indexed_addresses = 1000000
Module Configuration
Configure process-isolated modules:
[modules]
enabled = true # Enable module system (default: true)
modules_dir = "modules" # Directory containing module binaries (default: "modules")
data_dir = "data/modules" # Directory for module data/state (default: "data/modules")
socket_dir = "data/modules/sockets" # Directory for IPC sockets (default: "data/modules/sockets")
enabled_modules = ["blvm-lightning", "blvm-mesh"] # List of enabled modules (empty = auto-discover all)
Module Resource Limits (optional):
[modules.resource_limits]
default_max_cpu_percent = 50 # Max CPU usage per module (default: 50%)
default_max_memory_bytes = 536870912 # Max memory per module (default: 512 MB)
default_max_file_descriptors = 256 # Max file descriptors per module (default: 256)
default_max_child_processes = 10 # Max child processes per module (default: 10)
module_startup_wait_millis = 100 # Wait time for module startup (default: 100ms)
module_socket_timeout_seconds = 5 # IPC socket timeout (default: 5s)
module_socket_check_interval_millis = 100 # Socket check interval (default: 100ms)
module_socket_max_attempts = 50 # Max socket connection attempts (default: 50)
See Module System for module configuration details.
See Also
- Node Overview - Node features and architecture
- Node Operations - Running and managing your node
- Storage Backends - Detailed storage backend information
- Transport Abstraction - Transport options
- Network Protocol - Protocol variants and network configuration
- Configuration Reference - Complete configuration reference
- Getting Started - Installation guide
- Troubleshooting - Common configuration issues
RBF and Mempool Policies
Configure Replace-By-Fee (RBF) behavior and mempool policies to control transaction acceptance, eviction, and limits.
RBF and Mempool Flow
An incoming transaction (new or a replacement) is first routed by RBF mode: disabled rejects all replacements outright; conservative requires a 2x fee bump and 1 confirmation; standard applies the BIP125 1.1x rule; aggressive accepts 1.05x bumps and package replacements. Transactions that pass the RBF check proceed to mempool admission: if the size limit is exceeded, an eviction strategy (lowest fee rate, oldest first, largest first, no descendants first, or hybrid) frees space first; transactions below the fee threshold are rejected, and the rest are accepted into the mempool.
RBF Configuration
RBF allows transactions to be replaced by new transactions that spend the same inputs but pay higher fees. BLVM supports 4 configurable RBF modes.
RBF Modes
Disabled
No RBF replacements are allowed. All transactions are final once added to the mempool.
Use Cases:
- Enterprise/compliance requirements
- Nodes that prioritize transaction finality
- Exchanges with strict security policies
Configuration:
[rbf]
mode = "disabled"
Conservative
Strict RBF rules with higher fee requirements and additional safety checks.
Features:
- 2x fee rate multiplier (100% increase required)
- 5000 sat minimum absolute fee bump
- 1 confirmation minimum before allowing replacement
- Maximum 3 replacements per transaction
- 300 second cooldown period
Use Cases:
- Exchanges
- Wallets prioritizing user safety
- Nodes that want to prevent RBF spam
Configuration:
[rbf]
mode = "conservative"
min_fee_rate_multiplier = 2.0
min_fee_bump_satoshis = 5000
min_confirmations = 1
max_replacements_per_tx = 3
cooldown_seconds = 300
Standard (Default)
BIP125-compliant RBF with standard fee requirements.
Features:
- 1.1x fee rate multiplier (10% increase, BIP125 minimum)
- 1000 sat minimum absolute fee bump (BIP125 MIN_RELAY_FEE)
- No confirmation requirement
- Maximum 10 replacements per transaction
- 60 second cooldown period
Use Cases:
- General purpose nodes
- Default configuration
- Bitcoin Core compatibility
Configuration:
[rbf]
mode = "standard"
min_fee_rate_multiplier = 1.1
min_fee_bump_satoshis = 1000
Aggressive
Relaxed RBF rules for miners and high-throughput nodes.
Features:
- 1.05x fee rate multiplier (5% increase)
- 500 sat minimum absolute fee bump
- Package replacement support
- Maximum 10 replacements per transaction
- 60 second cooldown period
Use Cases:
- Mining pools
- High-throughput nodes
- Nodes prioritizing fee revenue
Configuration:
[rbf]
mode = "aggressive"
min_fee_rate_multiplier = 1.05
min_fee_bump_satoshis = 500
allow_package_replacements = true
max_replacements_per_tx = 10
cooldown_seconds = 60
RBF Configuration Parameters
| Parameter | Description | Default |
|---|---|---|
| mode | RBF mode: disabled, conservative, standard, aggressive | standard |
| min_fee_rate_multiplier | Minimum fee rate multiplier for replacement | Mode-specific |
| min_fee_bump_satoshis | Minimum absolute fee bump in satoshis | Mode-specific |
| min_confirmations | Minimum confirmations before allowing replacement | 0 |
| allow_package_replacements | Allow package replacements | false |
| max_replacements_per_tx | Maximum replacements per transaction | Mode-specific |
| cooldown_seconds | Replacement cooldown period | Mode-specific |
BIP125 Compliance
All modes enforce BIP125 rules:
- Existing transaction must signal RBF (at least one input with sequence < 0xfffffffe)
- New transaction must have higher fee rate
- New transaction must have higher absolute fee
- New transaction must conflict with existing transaction
- No new unconfirmed dependencies
Mode-specific requirements are applied in addition to BIP125 rules.
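The two fee conditions can be sketched as a standalone check. This is a hypothetical helper, not the actual blvm-node API; the multiplier and bump values come from the mode-specific `[rbf]` configuration shown above:

```rust
/// Illustrative sketch of the BIP125 fee checks with mode-specific knobs.
/// Fees are in satoshis, sizes in vbytes; not the real blvm-node API.
fn replacement_fees_ok(
    old_fee: u64,
    old_vsize: u64,
    new_fee: u64,
    new_vsize: u64,
    min_fee_rate_multiplier: f64,
    min_fee_bump_satoshis: u64,
) -> bool {
    let old_rate = old_fee as f64 / old_vsize as f64;
    let new_rate = new_fee as f64 / new_vsize as f64;
    // Fee rate must rise by at least the configured multiplier (1.1x in standard mode)
    let rate_ok = new_rate >= old_rate * min_fee_rate_multiplier;
    // Absolute fee must rise by at least the configured bump (1000 sats in standard mode)
    let bump_ok = new_fee >= old_fee + min_fee_bump_satoshis;
    rate_ok && bump_ok
}

fn main() {
    // Standard mode: 2200 sats replacing 1000 sats at the same size passes
    assert!(replacement_fees_ok(1000, 200, 2200, 200, 1.1, 1000));
    // A 500-sat bump fails the 1000-sat absolute minimum
    assert!(!replacement_fees_ok(1000, 200, 1500, 200, 1.1, 1000));
}
```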
Mempool Policies
Configure mempool size limits, fee thresholds, eviction strategies, and transaction expiry.
Size Limits
[mempool]
max_mempool_mb = 300 # Maximum mempool size in MB (default: 300)
max_mempool_txs = 100000 # Maximum number of transactions (default: 100000)
Fee Thresholds
[mempool]
min_relay_fee_rate = 1 # Minimum relay fee rate (sat/vB, default: 1)
min_tx_fee = 1000 # Minimum transaction fee (satoshis, default: 1000)
incremental_relay_fee = 1000 # Incremental relay fee (satoshis, default: 1000)
Eviction Strategies
Choose from 5 eviction strategies when mempool limits are reached:
Lowest Fee Rate (Default)
Evicts transactions with the lowest fee rate first. Maximizes average fee rate of remaining transactions.
Best for:
- Mining pools
- Nodes prioritizing fee revenue
- Bitcoin Core compatibility
[mempool]
eviction_strategy = "lowest_fee_rate"
Oldest First (FIFO)
Evicts the oldest transactions first, regardless of fee rate.
Best for:
- Nodes with strict time-based policies
- Preventing transaction aging issues
[mempool]
eviction_strategy = "oldest_first"
Largest First
Evicts the largest transactions first to free the most space quickly.
Best for:
- Nodes with limited memory
- Quick space recovery
[mempool]
eviction_strategy = "largest_first"
No Descendants First
Evicts transactions with no descendants first. Prevents orphaning dependent transactions.
Best for:
- Nodes prioritizing transaction package integrity
- Preventing cascading evictions
[mempool]
eviction_strategy = "no_descendants_first"
Hybrid
Combines fee rate and age with configurable weights.
Best for:
- Custom eviction policies
- Balancing multiple factors
[mempool]
eviction_strategy = "hybrid"
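A minimal sketch of what a hybrid score might look like, where transactions with the lowest score are evicted first. The formula and weights here are illustrative assumptions, not the actual blvm-node implementation:

```rust
/// Illustrative hybrid eviction score: a higher fee rate raises the score,
/// age lowers it. The weights are assumed knobs, not real config keys.
fn hybrid_score(fee_rate_sat_vb: f64, age_hours: f64, fee_weight: f64, age_weight: f64) -> f64 {
    fee_weight * fee_rate_sat_vb - age_weight * age_hours
}

fn main() {
    // With equal fee rates, the older transaction scores lower (evicted first)
    let fresh = hybrid_score(5.0, 1.0, 1.0, 0.5);
    let stale = hybrid_score(5.0, 10.0, 1.0, 0.5);
    assert!(stale < fresh);
    // A high enough fee rate can outweigh age
    assert!(hybrid_score(20.0, 10.0, 1.0, 0.5) > fresh);
}
```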
Ancestor/Descendant Limits
Prevent transaction package spam and ensure mempool stability:
[mempool]
max_ancestor_count = 25 # Maximum ancestor count (default: 25)
max_ancestor_size = 101000 # Maximum ancestor size in bytes (default: 101000)
max_descendant_count = 25 # Maximum descendant count (default: 25)
max_descendant_size = 101000 # Maximum descendant size in bytes (default: 101000)
Ancestors: Transactions that a given transaction depends on (parent transactions)
Descendants: Transactions that depend on a given transaction (child transactions)
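To make the terminology concrete, here is a small sketch (assumed data shapes, not blvm-node code) that counts a transaction's in-mempool ancestors by walking parent links:

```rust
use std::collections::{HashMap, HashSet};

/// Count all in-mempool ancestors of `tx` given a parent map (txid -> parents).
/// Illustrative only; blvm-node tracks these counts incrementally per entry.
fn ancestor_count(tx: &str, parents: &HashMap<&str, Vec<&str>>) -> usize {
    let mut seen: HashSet<&str> = HashSet::new();
    let mut stack: Vec<&str> = parents.get(tx).cloned().unwrap_or_default();
    while let Some(p) = stack.pop() {
        if seen.insert(p) {
            stack.extend(parents.get(p).cloned().unwrap_or_default());
        }
    }
    seen.len()
}

fn main() {
    // Chain a -> b -> c: c has two ancestors, a has none
    let mut parents: HashMap<&str, Vec<&str>> = HashMap::new();
    parents.insert("b", vec!["a"]);
    parents.insert("c", vec!["b"]);
    assert_eq!(ancestor_count("c", &parents), 2);
    assert_eq!(ancestor_count("a", &parents), 0);
}
```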
Transaction Expiry
[mempool]
mempool_expiry_hours = 336 # Transaction expiry in hours (default: 336 = 14 days)
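The expiry check itself is simple; a sketch using Unix timestamps (hypothetical helper, not the actual blvm-node function):

```rust
/// True if a mempool entry has outlived the configured expiry window.
/// Hypothetical helper; blvm-node performs this during mempool maintenance.
fn is_expired(entry_time_secs: u64, now_secs: u64, expiry_hours: u64) -> bool {
    now_secs.saturating_sub(entry_time_secs) >= expiry_hours * 3600
}

fn main() {
    let entered = 1_000_000;
    // 336 hours = 14 days = 1_209_600 seconds
    assert!(!is_expired(entered, entered + 1_209_599, 336));
    assert!(is_expired(entered, entered + 1_209_600, 336));
}
```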
Mempool Persistence
Persist mempool across restarts:
[mempool]
persist_mempool = true
mempool_persistence_path = "data/mempool.dat"
Configuration Examples
Exchange Node (Conservative)
[rbf]
mode = "conservative"
min_fee_rate_multiplier = 2.0
min_fee_bump_satoshis = 5000
min_confirmations = 1
max_replacements_per_tx = 3
cooldown_seconds = 300
[mempool]
max_mempool_mb = 500
max_mempool_txs = 200000
min_relay_fee_rate = 2
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 25
max_descendant_count = 25
persist_mempool = true
Mining Pool (Aggressive)
[rbf]
mode = "aggressive"
min_fee_rate_multiplier = 1.05
min_fee_bump_satoshis = 500
allow_package_replacements = true
max_replacements_per_tx = 10
cooldown_seconds = 60
[mempool]
max_mempool_mb = 1000
max_mempool_txs = 500000
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 50
max_descendant_count = 50
Standard Node (Default)
[rbf]
mode = "standard"
[mempool]
max_mempool_mb = 300
max_mempool_txs = 100000
min_relay_fee_rate = 1
eviction_strategy = "lowest_fee_rate"
max_ancestor_count = 25
max_descendant_count = 25
Best Practices
- Exchanges: Use conservative RBF and higher fee thresholds
- Miners: Use aggressive RBF and larger mempool sizes
- General Users: Use standard/default settings
- High-Throughput Nodes: Increase size limits and use aggressive eviction
Bitcoin Core Compatibility
Default values match Bitcoin Core defaults:
- max_mempool_mb: 300 MB
- min_relay_fee_rate: 1 sat/vB
- max_ancestor_count: 25
- max_ancestor_size: 101 kB
- max_descendant_count: 25
- max_descendant_size: 101 kB
- eviction_strategy: lowest_fee_rate
See Also
- Node Configuration - Complete node configuration guide
- Node Overview - Node features and architecture
- Mempool Manager - Mempool implementation details
Node Operations
Operational guide for running and maintaining a BLVM node.
Starting the Node
Basic Startup
# Regtest mode (default, safe for development)
blvm
# Testnet mode
blvm --network testnet
# Mainnet mode (use with caution)
blvm --network mainnet
With Configuration
blvm --config blvm.toml
Node Lifecycle
The node follows a lifecycle with multiple states and transitions.
Lifecycle States
The node operates in the following states:
Initial → Headers → Blocks → Synced
↓ ↓ ↓ ↓
Error Error Error Error
State Descriptions:
- Initial: Node starting up, initializing components
- Headers: Downloading and validating block headers
- Blocks: Downloading and validating full blocks
- Synced: Fully synchronized, normal operation
- Error: Error state (can transition from any state)
Code: sync.rs
State Transitions
State transitions are managed by the SyncStateMachine:
- Initial → Headers: When sync begins
- Headers → Blocks: When headers are complete (30% progress)
- Blocks → Synced: When blocks are complete (60% progress)
- Any → Error: On error conditions
Code: sync.rs
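The transitions above can be sketched as a simple state function. This is illustrative; the real SyncStateMachine in sync.rs carries additional context such as progress tracking:

```rust
/// Illustrative sync states mirroring the lifecycle diagram.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum SyncState {
    Initial,
    Headers,
    Blocks,
    Synced,
    Error,
}

/// Advance the state machine; any state can fall into Error.
fn advance(state: SyncState, headers_done: bool, blocks_done: bool, error: bool) -> SyncState {
    if error {
        return SyncState::Error;
    }
    match state {
        SyncState::Initial => SyncState::Headers,
        SyncState::Headers if headers_done => SyncState::Blocks,
        SyncState::Blocks if blocks_done => SyncState::Synced,
        other => other, // stay put until the current phase completes
    }
}

fn main() {
    let s = advance(SyncState::Initial, false, false, false);
    assert_eq!(s, SyncState::Headers);
    assert_eq!(advance(s, true, false, false), SyncState::Blocks);
    assert_eq!(advance(SyncState::Blocks, true, true, false), SyncState::Synced);
    assert_eq!(advance(SyncState::Synced, true, true, true), SyncState::Error);
}
```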
Initial Sync
When starting for the first time, the node will:
- Initialize Components: Storage, network, RPC, modules
- Connect to P2P Network: Discover peers via DNS seeds or persistent peers
- Download Headers: Request and validate block headers
- Download Blocks: Request and validate blocks
- Build UTXO Set: Construct UTXO set from validated blocks
- Sync to Current Height: Continue until caught up with network
Code: sync.rs
Running State
Once synced, the node maintains:
- Peer Connections: Active P2P connections
- Block Validation: Validates and relays new blocks (via blvm-consensus)
- Transaction Processing: Validates and relays transactions
- Chain State Updates: Updates chain tip and height
- RPC Requests: Serves JSON-RPC API requests
- Health Monitoring: Periodic health checks
Code: mod.rs
Health States
The node tracks health status for each component:
- Healthy: Component operating normally
- Degraded: Component functional but with issues
- Unhealthy: Component not functioning correctly
- Down: Component not responding
Code: health.rs
Error Recovery
The node implements graceful error recovery:
- Network Errors: Automatic reconnection with exponential backoff
- Storage Errors: Timeout protection, graceful degradation
- Validation Errors: Logged and reported, node continues operation
- Disk Space: Periodic checks with warnings
Code: mod.rs
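Reconnection backoff of this kind typically doubles the delay per attempt up to a cap; a sketch (parameter values here are assumptions, not blvm-node's actual constants):

```rust
/// Exponential backoff with a ceiling: delay = base * 2^attempt, capped at max.
/// Illustrative; the real reconnection policy lives in the node's network code.
fn backoff_delay_secs(attempt: u32, base_secs: u64, max_secs: u64) -> u64 {
    let factor = 1u64.checked_shl(attempt.min(16)).unwrap_or(u64::MAX);
    base_secs.saturating_mul(factor).min(max_secs)
}

fn main() {
    assert_eq!(backoff_delay_secs(0, 1, 60), 1);
    assert_eq!(backoff_delay_secs(3, 1, 60), 8);
    assert_eq!(backoff_delay_secs(10, 1, 60), 60); // capped at the maximum
}
```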
Monitoring
Health Checks
# Check if node is running
curl http://localhost:8332/health
# Get blockchain info via RPC
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1}'
Logging
The node uses structured logging. Set log level via environment variable:
# Set log level
RUST_LOG=info blvm
# Debug mode
RUST_LOG=debug blvm
# Trace all operations
RUST_LOG=trace blvm
Maintenance
Database Maintenance
The node automatically maintains block storage, UTXO set, chain indexes, and transaction indexes.
Backup
Regular backups recommended:
# Backup data directory
tar -czf blvm-backup-$(date +%Y%m%d).tar.gz /var/lib/blvm
Updates
When updating the node:
- Stop the node gracefully
- Backup data directory
- Download new binary from GitHub Releases
- Replace old binary with new one
- Restart node
Troubleshooting
See Troubleshooting for detailed solutions to common issues.
See Also
- Node Configuration - Configuration options
- Node Overview - Node architecture and features
- RPC API Reference - Complete RPC API documentation
- Troubleshooting - Common issues and solutions
- Performance Optimizations - Performance tuning
RPC API Reference
BLVM node provides both a JSON-RPC 2.0 interface (Bitcoin Core compatible) and a modern REST API for interacting with the node.
API Overview
- JSON-RPC 2.0: Bitcoin Core-compatible interface
  - Mainnet: http://localhost:8332 (default)
  - Testnet/Regtest: http://localhost:18332 (default)
- REST API: Modern RESTful interface at http://localhost:8080/api/v1/
Both APIs provide access to the same functionality, with the REST API offering better type safety, clearer error messages, and improved developer experience.
Connection
Default RPC endpoints:
- Mainnet: http://localhost:8332
- Testnet/Regtest: http://localhost:18332
RPC ports are configurable. See Node Configuration for details.
Authentication
For production use, configure RPC authentication:
[rpc]
enabled = true
username = "rpcuser"
password = "rpcpassword"
Example Requests
Get Blockchain Info
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "getblockchaininfo",
"params": [],
"id": 1
}'
Get Block
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "getblock",
"params": ["000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"],
"id": 1
}'
Get Network Info
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Content-Type: application/json" \
-d '{
"jsonrpc": "2.0",
"method": "getnetworkinfo",
"params": [],
"id": 1
}'
Available Methods
The following Bitcoin Core-compatible RPC methods are implemented, grouped by category.
Blockchain Methods
- getblockchaininfo - Get blockchain information
- getblock - Get block by hash
- getblockhash - Get block hash by height
- getblockheader - Get block header by hash
- getbestblockhash - Get best block hash
- getblockcount - Get current block height
- getdifficulty - Get current difficulty
- gettxoutsetinfo - Get UTXO set statistics
- verifychain - Verify blockchain database
- getblockfilter - Get block filter (BIP158)
- getindexinfo - Get index information
- getblockchainstate - Get blockchain state
- invalidateblock - Invalidate a block
- reconsiderblock - Reconsider a previously invalidated block
- waitfornewblock - Wait for a new block
- waitforblock - Wait for a specific block
- waitforblockheight - Wait for a specific block height
Raw Transaction Methods
- getrawtransaction - Get transaction by txid
- sendrawtransaction - Submit transaction to mempool
- testmempoolaccept - Test if transaction would be accepted
- decoderawtransaction - Decode raw transaction hex
- createrawtransaction - Create a raw transaction
- gettxout - Get UTXO information
- gettxoutproof - Get merkle proof for transaction
- verifytxoutproof - Verify merkle proof
Mempool Methods
- getmempoolinfo - Get mempool statistics
- getrawmempool - List transactions in mempool
- savemempool - Persist mempool to disk
- getmempoolancestors - Get mempool ancestors of a transaction
- getmempooldescendants - Get mempool descendants of a transaction
- getmempoolentry - Get mempool entry for a transaction
Network Methods
- getnetworkinfo - Get network information
- getpeerinfo - Get connected peers
- getconnectioncount - Get number of connections
- ping - Ping connected peers
- addnode - Add/remove node from peer list
- disconnectnode - Disconnect specific node
- getnettotals - Get network statistics
- clearbanned - Clear banned nodes
- setban - Ban/unban a subnet
- listbanned - List banned nodes
- getaddednodeinfo - Get information about manually added nodes
- getnodeaddresses - Get known node addresses
- setnetworkactive - Enable or disable network activity
Mining Methods
- getmininginfo - Get mining information
- getblocktemplate - Get block template for mining
- submitblock - Submit a mined block
- estimatesmartfee - Estimate smart fee rate
- prioritisetransaction - Prioritize a transaction in mempool
Control Methods
- stop - Stop the node
- uptime - Get node uptime
- getmemoryinfo - Get memory usage information
- getrpcinfo - Get RPC server information
- help - Get help for RPC methods
- logging - Control logging levels
- gethealth - Get node health status
- getmetrics - Get node metrics
Address Methods
- validateaddress - Validate a Bitcoin address
- getaddressinfo - Get detailed address information
Transaction Methods
- gettransactiondetails - Get detailed transaction information
Payment Methods (BIP70, feature-gated)
- createpaymentrequest - Create a BIP70 payment request (requires bip70-http feature)
Error Codes
The RPC API uses Bitcoin Core-compatible JSON-RPC 2.0 error codes:
Standard JSON-RPC Errors
| Code | Name | Description |
|---|---|---|
| -32700 | Parse error | Invalid JSON was received |
| -32600 | Invalid Request | The JSON sent is not a valid Request object |
| -32601 | Method not found | The method does not exist |
| -32602 | Invalid params | Invalid method parameter(s) |
| -32603 | Internal error | Internal JSON-RPC error |
Bitcoin-Specific Errors
| Code | Name | Description |
|---|---|---|
| -1 | Transaction already in chain | Transaction is already in blockchain |
| -1 | Transaction missing inputs | Transaction references non-existent inputs |
| -5 | Block not found | Block hash not found |
| -5 | Transaction not found | Transaction hash not found |
| -5 | UTXO not found | UTXO does not exist |
| -25 | Transaction rejected | Transaction rejected by consensus rules |
| -27 | Transaction already in mempool | Transaction already in mempool |
Code: errors.rs
Error Response Format
{
"jsonrpc": "2.0",
"error": {
"code": -32602,
"message": "Invalid params",
"data": {
"param": "blockhash",
"reason": "Invalid hex string"
}
},
"id": 1
}
Code: errors.rs
Authentication
RPC authentication is optional but recommended for production:
Token-Based Authentication
# Mainnet uses port 8332, testnet/regtest use 18332
curl -X POST http://localhost:8332 \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1}'
Certificate-Based Authentication
TLS client certificates can be used for authentication when QUIC transport is enabled.
Code: auth.rs
Rate Limiting
Rate limiting is enforced per IP, per user, and per method:
- Authenticated users: 100 burst, 10 req/sec
- Unauthenticated: 50 burst, 5 req/sec
- Per-method limits: May override defaults for specific methods
Code: server.rs
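The documented limits (a burst capacity plus a steady per-second rate) fit a token-bucket model. A minimal sketch, assuming one token per request; field names and logic are illustrative, not the server.rs implementation:

```rust
/// Minimal token bucket: starts full at `capacity` (the burst), refills at
/// `refill_per_sec`, and each request consumes one token. Illustrative only.
struct TokenBucket {
    tokens: f64,
    capacity: f64,
    refill_per_sec: f64,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { tokens: capacity, capacity, refill_per_sec }
    }

    /// Advance the clock by `elapsed_secs`, then try to admit one request.
    fn allow(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Tiny bucket for illustration; unauthenticated defaults above are 50 burst, 5 req/sec
    let mut bucket = TokenBucket::new(2.0, 1.0);
    assert!(bucket.allow(0.0));
    assert!(bucket.allow(0.0));
    assert!(!bucket.allow(0.0)); // burst exhausted
    assert!(bucket.allow(1.0)); // one second refills one token
}
```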
Request/Response Format
Request Format
{
"jsonrpc": "2.0",
"method": "getblockchaininfo",
"params": [],
"id": 1
}
Response Format
Success Response:
{
"jsonrpc": "2.0",
"result": {
"chain": "regtest",
"blocks": 123456,
"headers": 123456,
"bestblockhash": "0000...",
"difficulty": 4.656542373906925e-10
},
"id": 1
}
Error Response:
{
"jsonrpc": "2.0",
"error": {
"code": -32602,
"message": "Invalid params"
},
"id": 1
}
Code: types.rs
Batch Requests
Multiple requests can be sent in a single batch:
[
{"jsonrpc": "2.0", "method": "getblockchaininfo", "params": [], "id": 1},
{"jsonrpc": "2.0", "method": "getblockhash", "params": [100], "id": 2},
{"jsonrpc": "2.0", "method": "getblock", "params": ["0000..."], "id": 3}
]
Responses are returned in the same order as requests.
Implementation Status
The RPC API implements Bitcoin Core-compatible JSON-RPC 2.0 methods. See the Available Methods section above for a complete list of implemented methods.
REST API
Overview
The REST API provides a modern, developer-friendly interface alongside the JSON-RPC API. It uses standard HTTP methods and status codes, with JSON request/response bodies.
Base URL: http://localhost:8080/api/v1/
Code: rest/mod.rs
Authentication
REST API authentication works the same as JSON-RPC:
# Token-based authentication
curl -H "Authorization: Bearer <token>" http://localhost:8080/api/v1/node/uptime
# Basic authentication (if configured)
curl -u username:password http://localhost:8080/api/v1/node/uptime
Rate Limiting
Rate limiting is enforced per IP, per user, and per endpoint:
- Authenticated users: 100 burst, 10 req/sec
- Unauthenticated: 50 burst, 5 req/sec
- Per-endpoint limits: Stricter limits for write operations
Code: rest/server.rs
Response Format
All REST API responses follow a consistent format:
Success Response:
{
"status": "success",
"data": {
"chain": "regtest",
"blocks": 123456
},
"request_id": "550e8400-e29b-41d4-a716-446655440000"
}
Error Response:
{
"status": "error",
"error": {
"code": "NOT_FOUND",
"message": "Block not found",
"details": "Block hash 0000... does not exist"
},
"request_id": "550e8400-e29b-41d4-a716-446655440000"
}
Code: rest/types.rs
Endpoints
Node Endpoints
Code: rest/node.rs
- GET /api/v1/node/uptime - Get node uptime
- GET /api/v1/node/memory - Get memory information
- GET /api/v1/node/memory?mode=detailed - Get detailed memory info
- GET /api/v1/node/rpc-info - Get RPC server information
- GET /api/v1/node/help - Get help for all commands
- GET /api/v1/node/help?command=getblock - Get help for specific command
- GET /api/v1/node/logging - Get logging configuration
- POST /api/v1/node/logging - Update logging configuration
- POST /api/v1/node/stop - Stop the node
Example:
curl http://localhost:8080/api/v1/node/uptime
Chain Endpoints
- GET /api/v1/chain/info - Get blockchain information
- GET /api/v1/chain/blockhash/{height} - Get block hash by height
- GET /api/v1/chain/blockcount - Get current block height
- GET /api/v1/chain/difficulty - Get current difficulty
- GET /api/v1/chain/txoutsetinfo - Get UTXO set statistics
- POST /api/v1/chain/verify - Verify blockchain database
Example:
curl http://localhost:8080/api/v1/chain/info
Block Endpoints
Code: rest/blocks.rs
- GET /api/v1/blocks/{hash} - Get block by hash
- GET /api/v1/blocks/{hash}/transactions - Get block transactions
- GET /api/v1/blocks/{hash}/header - Get block header
- GET /api/v1/blocks/{hash}/header?verbose=true - Get verbose block header
- GET /api/v1/blocks/{hash}/stats - Get block statistics
- GET /api/v1/blocks/{hash}/filter - Get BIP158 block filter
- GET /api/v1/blocks/{hash}/filter?filtertype=basic - Get specific filter type
- GET /api/v1/blocks/height/{height} - Get block by height
- POST /api/v1/blocks/{hash}/invalidate - Invalidate block
- POST /api/v1/blocks/{hash}/reconsider - Reconsider invalidated block
Example:
curl http://localhost:8080/api/v1/blocks/000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f
Transaction Endpoints
- GET /api/v1/transactions/{txid} - Get transaction by txid
- GET /api/v1/transactions/{txid}?verbose=true - Get verbose transaction
- POST /api/v1/transactions - Submit raw transaction
- POST /api/v1/transactions/test - Test if transaction would be accepted
- GET /api/v1/transactions/{txid}/out/{n} - Get UTXO information
Example:
curl -X POST http://localhost:8080/api/v1/transactions \
-H "Content-Type: application/json" \
-d '{"hex": "0100000001..."}'
Address Endpoints
- GET /api/v1/addresses/{address}/balance - Get address balance
- GET /api/v1/addresses/{address}/transactions - Get address transaction history
- GET /api/v1/addresses/{address}/utxos - Get address UTXOs
Example:
curl http://localhost:8080/api/v1/addresses/1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa/balance
Mempool Endpoints
- GET /api/v1/mempool/info - Get mempool information
- GET /api/v1/mempool/transactions - List transactions in mempool
- GET /api/v1/mempool/transactions?verbose=true - List verbose transactions
- POST /api/v1/mempool/save - Persist mempool to disk
Example:
curl http://localhost:8080/api/v1/mempool/info
Network Endpoints
Code: rest/network.rs
- GET /api/v1/network/info - Get network information
- GET /api/v1/network/peers - Get connected peers
- GET /api/v1/network/connections/count - Get connection count
- GET /api/v1/network/totals - Get network statistics
- GET /api/v1/network/nodes - Get added node information
- GET /api/v1/network/nodes?dns=true - Get added nodes with DNS lookup
- GET /api/v1/network/nodes/addresses - Get node addresses
- GET /api/v1/network/nodes/addresses?count=10 - Get N node addresses
- GET /api/v1/network/bans - List banned nodes
- POST /api/v1/network/ping - Ping connected peers
- POST /api/v1/network/nodes - Add node to peer list
- POST /api/v1/network/active - Activate node connection
- POST /api/v1/network/bans - Ban/unban a subnet
- DELETE /api/v1/network/nodes/{address} - Remove node from peer list
- DELETE /api/v1/network/bans - Clear all banned nodes
Example:
curl http://localhost:8080/api/v1/network/info
Fee Estimation Endpoints
- GET /api/v1/fees/estimate - Estimate fee rate
- GET /api/v1/fees/estimate?blocks=6 - Estimate fee for N blocks
- GET /api/v1/fees/smart - Get smart fee estimate
Example:
curl http://localhost:8080/api/v1/fees/estimate?blocks=6
Payment Endpoints (BIP70 HTTP)
Requires: --features bip70-http
Code: rest/payment.rs
- GET /api/v1/payments/{payment_id} - Get payment status
- POST /api/v1/payments - Create payment request
- POST /api/v1/payments/{payment_id}/pay - Submit payment
- POST /api/v1/payments/{payment_id}/cancel - Cancel payment
Vault Endpoints (CTV)
Requires: --features ctv
Code: rest/vault.rs
- GET /api/v1/vaults - List vaults
- GET /api/v1/vaults/{vault_id} - Get vault information
- POST /api/v1/vaults - Create vault
- POST /api/v1/vaults/{vault_id}/deposit - Deposit to vault
- POST /api/v1/vaults/{vault_id}/withdraw - Withdraw from vault
Pool Endpoints (CTV)
Requires: --features ctv
Code: rest/pool.rs
- GET /api/v1/pools - List pools
- GET /api/v1/pools/{pool_id} - Get pool information
- POST /api/v1/pools - Create pool
- POST /api/v1/pools/{pool_id}/join - Join pool
- POST /api/v1/pools/{pool_id}/leave - Leave pool
Congestion Control Endpoints (CTV)
Requires: --features ctv
Code: rest/congestion.rs
- GET /api/v1/congestion/status - Get congestion status
- GET /api/v1/batches - List pending batches
- POST /api/v1/batches - Create batch
- POST /api/v1/batches/{batch_id}/submit - Submit batch
Security Headers
The REST API includes security headers by default:
- X-Content-Type-Options: nosniff
- X-Frame-Options: DENY
- X-XSS-Protection: 1; mode=block
- Strict-Transport-Security: max-age=31536000 (when TLS enabled)
Code: rest/server.rs
Error Codes
REST API uses standard HTTP status codes:
| Status Code | Meaning |
|---|---|
| 200 | Success |
| 400 | Bad Request (invalid parameters) |
| 401 | Unauthorized (authentication required) |
| 404 | Not Found (resource doesn’t exist) |
| 429 | Too Many Requests (rate limit exceeded) |
| 500 | Internal Server Error |
| 503 | Service Unavailable (feature not enabled) |
See Also
- Node Overview - Node implementation details
- Node Configuration - RPC configuration options
- Node Operations - Node management
- Getting Started - Quick start guide
- API Index - Cross-reference to all APIs
- Troubleshooting - Common RPC issues
- Commons Mesh Module - Payment endpoints and mesh networking
- Module System - Module architecture and IPC
Storage Backends
Overview
The node supports multiple database backends for persistent storage of blocks, UTXO set, and chain state. The system automatically selects the best available backend with graceful fallback.
Supported Backends
redb (Default, Recommended)
redb is the default, production-ready embedded database:
- Pure Rust: No C dependencies
- ACID Compliance: Full ACID transactions
- Production Ready: Stable, well-tested
- Performance: Optimized for read-heavy workloads
- Storage: Efficient key-value storage
Code: database.rs
sled (Fallback)
sled is available as a fallback option:
- Beta Quality: Not recommended for production
- Pure Rust: No C dependencies
- Performance: Good for development and testing
- Storage: Key-value storage with B-tree indexing
Code: database.rs
rocksdb (Optional, Bitcoin Core Compatible)
rocksdb is an optional high-performance backend with Bitcoin Core compatibility:
- Bitcoin Core Compatibility: Uses RocksDB to read Bitcoin Core’s LevelDB databases (RocksDB has backward compatibility with LevelDB format)
- Automatic Detection: Automatically detects and uses Bitcoin Core data if present
- Block File Access: Direct access to Bitcoin Core block files (blk*.dat)
- Format Parsing: Parses Bitcoin Core’s internal data formats
- High Performance: Optimized for large-scale blockchain data
- System Dependency: Requires libclang for build
- Feature Flag: rocksdb (optional, not enabled by default)
Bitcoin Core Integration:
- Automatically detects Bitcoin Core data directories
- Reads Bitcoin Core’s LevelDB databases via RocksDB, which is backward compatible with the LevelDB on-disk format
- Accesses block files (blk*.dat) with lazy indexing
- Supports mainnet, testnet, regtest, and signet networks
Code: database.rs, bitcoin_core_storage.rs
Note: RocksDB requires the rocksdb feature flag. RocksDB and erlay features are mutually exclusive due to dependency conflicts (both require libclang/LLVM).
Backend Selection
The system automatically selects the best available backend in this order:
- Bitcoin Core Detection (if RocksDB feature enabled): Checks for existing Bitcoin Core data and, if found, reads it via RocksDB
- redb (default, preferred): Attempts to use redb as the primary backend
- sled (fallback): Falls back to sled if redb fails and sled is available
- RocksDB (fallback): Falls back to RocksDB if available and other backends fail
- Error: Returns error if no backend is available
Backend Selection Logic:
- The "auto" backend option follows this selection order
- Bitcoin Core detection happens first (if RocksDB enabled) to preserve compatibility
- redb is always preferred over sled when both are available
- Automatic fallback ensures the node can start even if the preferred backend fails
Code: database.rs, mod.rs
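The selection order can be sketched as a single function. This is illustrative; the real logic in database.rs also depends on which features were compiled in, and the booleans here stand in for runtime probes:

```rust
/// Illustrative "auto" backend selection mirroring the order described above.
fn select_backend(
    rocksdb_feature: bool,
    bitcoin_core_data_found: bool,
    redb_opens: bool,
    sled_opens: bool,
    rocksdb_opens: bool,
) -> Option<&'static str> {
    if rocksdb_feature && bitcoin_core_data_found {
        return Some("rocksdb"); // reuse existing Bitcoin Core data
    }
    if redb_opens {
        return Some("redb"); // preferred default
    }
    if sled_opens {
        return Some("sled"); // fallback
    }
    if rocksdb_opens {
        return Some("rocksdb"); // last resort
    }
    None // no backend available: startup error
}

fn main() {
    assert_eq!(select_backend(true, true, true, true, true), Some("rocksdb"));
    assert_eq!(select_backend(false, false, true, true, false), Some("redb"));
    assert_eq!(select_backend(false, false, false, true, false), Some("sled"));
    assert_eq!(select_backend(false, false, false, false, false), None);
}
```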
Automatic Fallback
// System automatically tries redb first, falls back to sled if needed
let storage = Storage::new(data_dir)?;
Code: mod.rs
Database Abstraction
The storage layer uses a unified database abstraction:
Database Trait
pub trait Database: Send + Sync {
    fn open_tree(&self, name: &str) -> Result<Box<dyn Tree>>;
    fn flush(&self) -> Result<()>;
}
Code: database.rs
Tree Trait
pub trait Tree: Send + Sync {
    fn insert(&self, key: &[u8], value: &[u8]) -> Result<()>;
    fn get(&self, key: &[u8]) -> Result<Option<Vec<u8>>>;
    fn remove(&self, key: &[u8]) -> Result<()>;
    fn contains_key(&self, key: &[u8]) -> Result<bool>;
    fn len(&self) -> Result<usize>;
    fn iter(&self) -> Box<dyn Iterator<Item = Result<(Vec<u8>, Vec<u8>)>> + '_>;
}
Code: database.rs
Storage Components
BlockStore
Stores blocks by hash:
- Key: Block hash (32 bytes)
- Value: Serialized block data
- Indexing: Hash-based lookup
Code: blockstore.rs
UtxoStore
Manages UTXO set:
- Key: OutPoint (36 bytes: txid + output index)
- Value: UTXO data (script, amount)
- Operations: Add, remove, query UTXOs
Code: utxostore.rs
ChainState
Tracks chain metadata:
- Tip Hash: Current chain tip
- Height: Current block height
- Chain Work: Cumulative proof-of-work
- UTXO Stats: Cached UTXO set statistics
Code: chainstate.rs
TxIndex
Transaction indexing:
- Key: Transaction ID (32 bytes)
- Value: Transaction data and metadata
- Lookup: Fast transaction retrieval
Code: txindex.rs
Configuration
Backend Selection
[storage]
data_dir = "/var/lib/blvm"
backend = "auto" # or "redb", "sled", "rocksdb"
Options:
"auto": Auto-select based on availability (checks Bitcoin Core data, prefers redb, falls back to sled/rocksdb)"redb": Force redb backend"sled": Force sled backend"rocksdb": Force rocksdb backend (requiresrocksdbfeature)
Code: mod.rs
RocksDB Configuration
Enable RocksDB with the rocksdb feature:
cargo build --features rocksdb
System Requirements:
- libclang must be installed (required for RocksDB FFI bindings)
- On Ubuntu/Debian: sudo apt-get install libclang-dev
- On Arch: sudo pacman -S clang
- On macOS: brew install llvm
Bitcoin Core Detection: The system automatically detects Bitcoin Core data directories:
- Mainnet: ~/.bitcoin/ or ~/Library/Application Support/Bitcoin/
- Testnet: ~/.bitcoin/testnet3/ or ~/Library/Application Support/Bitcoin/testnet3/
- Regtest: ~/.bitcoin/regtest/ or ~/Library/Application Support/Bitcoin/regtest/
Code: bitcoin_core_detection.rs
Cache Configuration
[storage.cache]
block_cache_mb = 100
utxo_cache_mb = 50
header_cache_mb = 10
Cache Sizes:
- Block Cache: Default 100 MB, caches recently accessed blocks
- UTXO Cache: Default 50 MB, caches frequently accessed UTXOs
- Header Cache: Default 10 MB, caches block headers
Code: mod.rs
Performance Characteristics
redb Backend
- Read Performance: Excellent for sequential and random reads
- Write Performance: Good for batch writes
- Storage Efficiency: Efficient key-value storage
- Memory Usage: Moderate memory footprint
- Production Ready: Recommended for production
sled Backend
- Read Performance: Good for sequential reads
- Write Performance: Good for batch writes
- Storage Efficiency: Efficient with B-tree indexing
- Memory Usage: Higher memory footprint
- Production Ready: Beta quality, not recommended for production
Migration
Backend Migration
To migrate between backends:
- Export Data: Export all data from current backend
- Import Data: Import data into new backend
- Verify: Verify data integrity
Note: Manual migration is supported. Export data from the current backend and import into the new backend.
Pruning Support
Both backends support pruning:
[storage.pruning]
enabled = true
keep_blocks = 288 # Keep last 288 blocks (2 days)
Pruning Modes:
- Disabled: Keep all blocks (archival node)
- Normal: Conservative pruning (keep recent blocks)
- Aggressive: Prune with UTXO commitments (requires utxo-commitments feature)
- Custom: Fine-grained control over what to keep
Code: pruning.rs
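With keep_blocks = 288, a block becomes prunable once it falls more than 288 blocks behind the tip; a sketch (hypothetical helper, not the pruning.rs API):

```rust
/// True if the block at `height` may be pruned while still retaining the
/// last `keep_blocks` blocks up to `tip_height`. Hypothetical helper.
fn prunable(height: u64, tip_height: u64, keep_blocks: u64) -> bool {
    tip_height >= keep_blocks && height <= tip_height - keep_blocks
}

fn main() {
    // Tip at 1000, keep 288: heights 713..=1000 (288 blocks) are retained
    assert!(prunable(712, 1000, 288));
    assert!(!prunable(713, 1000, 288));
    // A chain younger than the retention window prunes nothing
    assert!(!prunable(0, 100, 288));
}
```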
Error Handling
The storage layer handles backend failures gracefully:
- Automatic Fallback: Falls back to alternative backend if primary fails
- Error Recovery: Attempts to recover from transient errors
- Data Integrity: Verifies data integrity on startup
- Corruption Detection: Detects and reports database corruption
Code: mod.rs
See Also
- Node Configuration - Storage configuration options
- Node Operations - Storage operations and maintenance
- Pruning - Pruning configuration and usage
Transaction Indexing
Overview
The node provides advanced transaction indexing capabilities for efficient querying of blockchain data. Indexes are built on-demand and support both address-based and value-based queries.
Index Types
Transaction Hash Index
Basic transaction lookup by hash:
- Key: Transaction hash (32 bytes)
- Value: Transaction metadata (block hash, height, index, size, weight)
- Lookup: O(1) hash-based lookup
- Always Enabled: Core indexing functionality
Code: txindex.rs
Address Index (Optional)
Indexes transactions by output addresses:
- Key: Address hash (20 bytes for P2PKH, 32 bytes for P2SH/P2WPKH)
- Value: List of (transaction hash, output index) pairs
- Lookup: Fast address balance and transaction history queries
- Lazy Indexing: Built on-demand when first queried
- Configuration: Enable with storage.indexing.enable_address_index = true
Code: txindex.rs
Value Range Index (Optional)
Indexes transactions by output value ranges:
- Key: Value bucket (logarithmic buckets: 0-1, 1-10, 10-100, 100-1000, etc.)
- Value: List of (transaction hash, output index, value) tuples
- Lookup: Efficient queries for transactions in specific value ranges
- Lazy Indexing: Built on-demand when first queried
- Configuration: Enable with storage.indexing.enable_value_index = true
Code: txindex.rs
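The logarithmic bucketing described above maps each output value to a power-of-ten bin; a sketch (the exact bucket boundaries in txindex.rs may differ):

```rust
/// Map an output value in satoshis to a logarithmic bucket:
/// bucket 0 = 0 sats, bucket 1 = 1..9, bucket 2 = 10..99, and so on.
/// Illustrative only; the real bucketing lives in txindex.rs.
fn value_bucket(sats: u64) -> u32 {
    let mut bucket = 0;
    let mut bound = 1u64;
    while sats >= bound {
        bucket += 1;
        match bound.checked_mul(10) {
            Some(next) => bound = next,
            None => break, // value exceeds the largest power-of-ten bound
        }
    }
    bucket
}

fn main() {
    assert_eq!(value_bucket(0), 0);
    assert_eq!(value_bucket(5), 1);
    assert_eq!(value_bucket(10), 2);
    assert_eq!(value_bucket(999), 3);
}
```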
Indexing Strategy
Lazy Indexing
Indexes are built on-demand to minimize impact on block processing:
- Basic Indexing: All transactions are indexed with basic metadata (hash, block, height)
- On-Demand: Address and value indexes are built when first queried
- Caching: Indexed addresses are cached to avoid re-indexing
- Batch Operations: Multiple transactions indexed together for efficiency
Code: txindex.rs
Batch Indexing
Block-level indexing optimizations:
- Single Pass: Processes all transactions in a block at once
- Deduplication: Uses HashSet for O(1) duplicate checking
- Batching: Groups updates per unique address/bucket to reduce DB I/O
- Conditional Writes: Only writes to DB if updates were made
Code: txindex.rs
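A minimal sketch of the batching and deduplication described above; the `group_updates` helper and its types are illustrative, not the actual txindex.rs code:

```rust
use std::collections::{HashMap, HashSet};

// Group (txid, vout) entries per address in memory, deduplicating with a
// HashSet, so the DB sees one write per unique address instead of one
// write per output.
fn group_updates(
    entries: &[(String, [u8; 32], u32)],
) -> HashMap<String, HashSet<([u8; 32], u32)>> {
    let mut batches: HashMap<String, HashSet<([u8; 32], u32)>> = HashMap::new();
    for (addr, txid, vout) in entries {
        batches
            .entry(addr.clone())
            .or_default()
            .insert((*txid, *vout)); // HashSet gives O(1) duplicate checks
    }
    batches
}

fn main() {
    let tx = [0u8; 32];
    let entries = vec![
        ("addr_a".to_string(), tx, 0),
        ("addr_a".to_string(), tx, 0), // duplicate, dropped by the set
        ("addr_b".to_string(), tx, 1),
    ];
    let batches = group_updates(&entries);
    assert_eq!(batches.len(), 2);           // one DB write per address
    assert_eq!(batches["addr_a"].len(), 1); // duplicate removed
}
```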
Configuration
Enable Indexing
[storage.indexing]
enable_address_index = true
enable_value_index = true
Index Statistics
Query indexing statistics:
#![allow(unused)]
fn main() {
use blvm_node::storage::txindex::TxIndex;
let stats = txindex.get_stats()?;
println!("Total transactions: {}", stats.total_transactions);
println!("Indexed addresses: {}", stats.indexed_addresses);
println!("Indexed value buckets: {}", stats.indexed_value_buckets);
}
Code: txindex.rs
Usage
Query by Address
#![allow(unused)]
fn main() {
use blvm_node::storage::txindex::TxIndex;
// Query all transactions for an address
let address = "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa";
let transactions = txindex.query_by_address(&address)?;
}
Query by Value Range
#![allow(unused)]
fn main() {
// Query transactions with outputs in value range [1000, 10000] satoshis
let transactions = txindex.query_by_value_range(1000, 10000)?;
}
Query Transaction Metadata
#![allow(unused)]
fn main() {
// Get transaction metadata by hash
let tx_hash = Hash::from_hex("...")?;
let metadata = txindex.get_metadata(&tx_hash)?;
}
Performance Characteristics
- Hash Lookup: O(1) constant time
- Address Lookup: O(1) after initial indexing, O(n) for first query (indexes on-demand)
- Value Range Lookup: O(log n) for bucket lookup, O(m) for results (where m is number of matches)
- Index Building: Lazy, only builds what’s queried
- Storage Overhead: Minimal for basic index, grows with address/value index usage
See Also
- Storage Backends - Database backend options
- Node Configuration - Indexing configuration options
- Node Operations - Index maintenance and operations
IBD Bandwidth Protection
Overview
The node implements comprehensive protection against Initial Block Download (IBD) bandwidth exhaustion attacks. This prevents malicious peers from forcing a node to upload the entire blockchain multiple times, which could cause ISP data cap overages and economic denial-of-service.
Protection Mechanisms
Per-Peer Bandwidth Limits
Tracks bandwidth usage per peer with configurable daily and hourly limits:
- Daily Limit: Maximum bytes a peer can request per day
- Hourly Limit: Maximum bytes a peer can request per hour
- Automatic Throttling: Blocks requests when limits are exceeded
- Legitimate Node Protection: First request always allowed, reasonable limits for legitimate sync
Code: ibd_protection.rs
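A rolling-window limiter of the kind described above might look like this sketch; `BandwidthWindow` is a hypothetical type, not the ibd_protection.rs implementation:

```rust
use std::time::{Duration, Instant};

// Record byte counts with timestamps, prune entries older than the
// window, and block a request once the windowed total would exceed
// the configured limit.
struct BandwidthWindow {
    window: Duration,
    limit_bytes: u64,
    samples: Vec<(Instant, u64)>,
}

impl BandwidthWindow {
    fn new(window: Duration, limit_bytes: u64) -> Self {
        Self { window, limit_bytes, samples: Vec::new() }
    }

    fn used(&mut self, now: Instant) -> u64 {
        self.samples.retain(|(t, _)| now.duration_since(*t) < self.window);
        self.samples.iter().map(|(_, b)| *b).sum()
    }

    /// Returns true (and records the usage) if the request fits the limit.
    fn try_consume(&mut self, now: Instant, bytes: u64) -> bool {
        if self.used(now) + bytes > self.limit_bytes {
            return false; // throttled
        }
        self.samples.push((now, bytes));
        true
    }
}

fn main() {
    // A tiny 100-byte hourly limit, for illustration only.
    let mut w = BandwidthWindow::new(Duration::from_secs(3600), 100);
    let now = Instant::now();
    assert!(w.try_consume(now, 60));
    assert!(w.try_consume(now, 40));  // exactly at the limit: allowed
    assert!(!w.try_consume(now, 1));  // over the limit: blocked
}
```

In practice a node would keep one hourly and one daily window per peer and allow a request only if both accept it.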
Per-IP Bandwidth Limits
Tracks bandwidth usage per IP address to prevent single-IP attacks:
- IP-Based Tracking: Monitors all peers from the same IP
- Aggregate Limits: Combined daily/hourly limits for all peers from an IP
- Attack Detection: Identifies coordinated attacks from single IP
Code: ibd_protection.rs
Per-Subnet Bandwidth Limits
Tracks bandwidth usage per subnet to prevent distributed attacks:
- IPv4 Subnets: Tracks /24 subnets (256 addresses)
- IPv6 Subnets: Tracks /64 subnets
- Subnet Aggregation: Combines bandwidth from all IPs in subnet
- Distributed Attack Mitigation: Prevents coordinated attacks from subnet
Code: ibd_protection.rs
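Subnet aggregation implies a tracking key derived from the address prefix, /24 for IPv4 and /64 for IPv6. A sketch, where `subnet_key` is a hypothetical helper:

```rust
use std::net::IpAddr;

// Derive the tracking key for a peer's subnet: the first 24 bits for
// IPv4 (/24) and the first 64 bits for IPv6 (/64), as described above.
fn subnet_key(ip: IpAddr) -> Vec<u8> {
    match ip {
        IpAddr::V4(v4) => v4.octets()[..3].to_vec(), // first 24 bits
        IpAddr::V6(v6) => v6.octets()[..8].to_vec(), // first 64 bits
    }
}

fn main() {
    let a: IpAddr = "192.168.1.10".parse().unwrap();
    let b: IpAddr = "192.168.1.200".parse().unwrap();
    let c: IpAddr = "192.168.2.10".parse().unwrap();
    assert_eq!(subnet_key(a), subnet_key(b)); // same /24: shared budget
    assert_ne!(subnet_key(a), subnet_key(c)); // different /24: separate budget
}
```

All bandwidth consumed by peers that map to the same key counts against one shared subnet budget.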
Concurrent IBD Serving Limits
Limits how many peers can simultaneously request IBD:
- Concurrent Limit: Maximum number of peers serving IBD at once
- Queue Management: Queues additional requests when limit reached
- Fair Serving: Rotates serving to queued peers
Code: ibd_protection.rs
Peer Reputation Scoring
Tracks peer behavior to identify malicious patterns:
- Reputation System: Scores peers based on behavior
- Suspicious Pattern Detection: Identifies rapid reconnection with new peer IDs
- Cooldown Periods: Enforces cooldown after suspicious activity
- Legitimate Node Protection: First-time sync always allowed
Code: ibd_protection.rs
Configuration
Default Limits
[network.ibd_protection]
max_bandwidth_per_peer_per_day_gb = 50.0
max_bandwidth_per_peer_per_hour_gb = 10.0
max_bandwidth_per_ip_per_day_gb = 100.0
max_bandwidth_per_ip_per_hour_gb = 20.0
max_bandwidth_per_subnet_per_day_gb = 500.0
max_bandwidth_per_subnet_per_hour_gb = 100.0
max_concurrent_ibd_serving = 3
ibd_request_cooldown_seconds = 3600
suspicious_reconnection_threshold = 3
reputation_ban_threshold = -100
enable_emergency_throttle = false
emergency_throttle_percent = 50
Configuration Options
- max_bandwidth_per_peer_per_day_gb: Daily limit per peer (default: 50 GB)
- max_bandwidth_per_peer_per_hour_gb: Hourly limit per peer (default: 10 GB)
- max_bandwidth_per_ip_per_day_gb: Daily limit per IP (default: 100 GB)
- max_bandwidth_per_ip_per_hour_gb: Hourly limit per IP (default: 20 GB)
- max_bandwidth_per_subnet_per_day_gb: Daily limit per subnet (default: 500 GB)
- max_bandwidth_per_subnet_per_hour_gb: Hourly limit per subnet (default: 100 GB)
- max_concurrent_ibd_serving: Maximum concurrent IBD serving (default: 3)
- ibd_request_cooldown_seconds: Cooldown period after suspicious activity (default: 3600 seconds)
- suspicious_reconnection_threshold: Number of reconnections in 1 hour to be considered suspicious (default: 3)
- reputation_ban_threshold: Reputation score below which peer is banned (default: -100)
- enable_emergency_throttle: Enable emergency bandwidth throttling (default: false)
- emergency_throttle_percent: Percentage of bandwidth to throttle when emergency throttle is enabled (default: 50)
Code: ibd_protection.rs
Attack Mitigation
Single IP Attack
Attack: Attacker runs multiple fake nodes from the same IP.
Protection: Per-IP bandwidth limits aggregate all peers from the IP.
Result: Blocked after the IP limit is reached.
Subnet Attack
Attack: Attacker distributes fake nodes across a subnet.
Protection: Per-subnet bandwidth limits aggregate all IPs in the subnet.
Result: Blocked after the subnet limit is reached.
Rapid Reconnection Attack
Attack: Attacker disconnects and reconnects with a new peer ID.
Protection: Reputation scoring detects the pattern and enforces a cooldown.
Result: Blocked during the cooldown period.
Distributed Attack
Attack: Coordinated attack from multiple IPs/subnets.
Protection: Concurrent serving limits prevent serving too many peers simultaneously.
Result: Additional requests are queued and serving is rotated fairly.
Legitimate New Node
Scenario: Legitimate new node requests a full sync.
Protection: First request always allowed; reasonable limits accommodate legitimate sync.
Result: Allowed to sync within limits.
Integration
The IBD protection is automatically integrated into the network manager:
- Automatic Tracking: Tracks bandwidth when serving Headers/Block messages
- Request Protection: Protects GetHeaders and GetData requests
- Cleanup: Automatically cleans up tracking on peer disconnect
Code: mod.rs
LAN Peer Prioritization
LAN peers are automatically discovered and prioritized for IBD, but still respect bandwidth protection limits:
- Priority Assignment: LAN peers get priority within bandwidth limits
- Score Multiplier: LAN peers receive up to 3x score multiplier (progressive trust system)
- Bandwidth Limits: LAN peers still respect per-peer, per-IP, and per-subnet limits
- Reputation Scoring: LAN peer behavior affects reputation scoring
Code: parallel_ibd.rs
For details on LAN peering discovery, security, and configuration, see LAN Peering System.
See Also
- LAN Peering System - Automatic local network discovery and prioritization
- Network Operations - General network operations
- Node Configuration - IBD protection configuration
- Security Controls - Security system overview
LAN Peering System
Overview
The LAN peering system automatically discovers and prioritizes local network (LAN) Bitcoin nodes for faster Initial Block Download (IBD) while maintaining security through checkpoint validation and peer diversity requirements. This can speed up IBD by 10-50x when a local Bitcoin node is available on your network.
Benefits
- 10-50x IBD Speedup: LAN peers typically have <10ms latency vs 100-5000ms for internet peers
- High Throughput: ~1 Gbps local network vs ~10-100 Mbps internet
- 100% Reliability: No connection drops compared to internet peers
- Automatic Discovery: Scans local network automatically during startup
- Secure by Default: Internet checkpoint validation prevents eclipse attacks
How It Works
Automatic Discovery
During node startup, the system automatically:
- Detects Local Network Interfaces: Identifies private network interfaces (10.x, 172.16-31.x, 192.168.x)
- Scans Local Subnet: Scans /24 subnets (254 IPs per subnet) for Bitcoin nodes on port 8333
- Parallel Scanning: Uses up to 64 concurrent connection attempts for fast discovery
- Verifies Peers: Performs protocol handshake and chain verification before accepting
Code: lan_discovery.rs
LAN Peer Detection
A peer is considered a LAN peer if its IP address is in one of these ranges:
IPv4 Private Ranges:
- 10.0.0.0/8 - Class A private network
- 172.16.0.0/12 - Class B private network (172.16-31.x)
- 192.168.0.0/16 - Class C private network (most common for home networks)
- 127.0.0.0/8 - Loopback addresses
- 169.254.0.0/16 - Link-local addresses
IPv6 Private Ranges:
- ::1 - Loopback
- fd00::/8 - Unique Local Addresses (ULA)
- fe80::/10 - Link-local addresses
Code: peer_scoring.rs
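A sketch of the range check above using only the standard library; `is_lan_peer` is a hypothetical helper, and the ULA check follows the fd00::/8 range listed above:

```rust
use std::net::IpAddr;

// True if the address falls in one of the private/local ranges listed
// above. `is_private()` covers 10/8, 172.16/12, and 192.168/16; loopback
// and link-local are checked separately.
fn is_lan_peer(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => v4.is_private() || v4.is_loopback() || v4.is_link_local(),
        IpAddr::V6(v6) => {
            v6.is_loopback()
                || v6.octets()[0] == 0xfd                 // fd00::/8 (ULA)
                || (v6.segments()[0] & 0xffc0) == 0xfe80  // fe80::/10 link-local
        }
    }
}

fn main() {
    assert!(is_lan_peer("192.168.1.100".parse().unwrap()));
    assert!(is_lan_peer("10.0.0.1".parse().unwrap()));
    assert!(!is_lan_peer("8.8.8.8".parse().unwrap())); // public internet peer
}
```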
Progressive Trust System
LAN peers start with limited trust and earn higher priority over time:
- Initial Trust (1.5x multiplier):
  - Newly discovered LAN peers
  - Whitelisted peers start at maximum trust instead
- Level 2 Trust (2.0x multiplier):
  - After 1000 valid blocks received
  - Indicates reliable peer behavior
- Maximum Trust (3.0x multiplier):
  - After 10000 valid blocks AND 1 hour of connection time
  - Maximum priority for block downloads
- Demoted (1.0x multiplier, no bonus):
  - After 3 failures
  - Loses LAN status but remains connected
- Banned (0.0x multiplier, not used):
  - Checkpoint validation failure
  - Permanent ban (1 year duration)
Code: lan_security.rs
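The trust levels above can be sketched as an enum; the names, the `from_stats` helper, and its signature are illustrative, with thresholds taken from the figures listed above:

```rust
#[derive(Debug, PartialEq)]
enum LanTrust {
    Initial, // newly discovered
    Level2,  // >= 1000 valid blocks
    Maximum, // >= 10000 valid blocks and >= 1 hour connected
    Demoted, // 3 failures: loses the LAN bonus
    Banned,  // checkpoint failure: never used
}

impl LanTrust {
    fn multiplier(&self) -> f64 {
        match self {
            LanTrust::Initial => 1.5,
            LanTrust::Level2 => 2.0,
            LanTrust::Maximum => 3.0,
            LanTrust::Demoted => 1.0,
            LanTrust::Banned => 0.0,
        }
    }

    // Derive the current level from observed peer behavior.
    fn from_stats(valid_blocks: u64, connected_secs: u64, failures: u32, checkpoint_failed: bool) -> Self {
        if checkpoint_failed {
            LanTrust::Banned
        } else if failures >= 3 {
            LanTrust::Demoted
        } else if valid_blocks >= 10_000 && connected_secs >= 3_600 {
            LanTrust::Maximum
        } else if valid_blocks >= 1_000 {
            LanTrust::Level2
        } else {
            LanTrust::Initial
        }
    }
}

fn main() {
    assert_eq!(LanTrust::from_stats(500, 60, 0, false).multiplier(), 1.5);
    assert_eq!(LanTrust::from_stats(1_000, 60, 0, false).multiplier(), 2.0);
    assert_eq!(LanTrust::from_stats(10_000, 3_600, 0, false).multiplier(), 3.0);
    assert_eq!(LanTrust::from_stats(10_000, 3_600, 3, false).multiplier(), 1.0);
}
```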
Peer Prioritization
LAN peers receive priority for block downloads during IBD:
- IBD Optimization: LAN peers get priority chunks (first 50,000 blocks)
- Header Download: LAN peers prioritized for header sync (10-100x faster)
- Score Multiplier: Up to 3x score multiplier for peer selection
- Bandwidth Allocation: LAN peers receive more bandwidth allocation
Code: parallel_ibd.rs
Security Model
Hard Limits
The system enforces strict security limits to prevent eclipse attacks:
- Maximum 25% LAN Peers: Hard cap on LAN peer percentage
- Minimum 75% Internet Peers: Required for security
- Minimum 3 Internet Peers: Required for checkpoint validation
- Maximum 1 Discovered LAN Peer: Limits automatically discovered peers (whitelisted are separate)
Code: lan_security.rs
Checkpoint Validation
Internet checkpoints are the primary security mechanism. Even with discovery enabled, eclipse attacks are prevented through regular checkpoint validation:
- Block Checkpoints: Every 1000 blocks, validate block hash against internet peers
- Header Checkpoints: Every 10000 blocks, validate header hash against internet peers
- Consensus Requirement: Requires agreement from at least 3 internet peers
- Failure Response: Checkpoint failure results in permanent ban (1 year)
- Request Timeout: 5 seconds per checkpoint request
- Max Retries: 3 retry attempts per checkpoint
- Protocol Verify Timeout: 5 seconds for protocol handshake verification
- Headers Verify Timeout: 10 seconds for headers verification
- Max Header Divergence: 6 blocks maximum divergence allowed
Security Constants:
- BLOCK_CHECKPOINT_INTERVAL: 1000 blocks
- HEADER_CHECKPOINT_INTERVAL: 10000 blocks
- MIN_CHECKPOINT_PEERS: 3 internet peers required
- CHECKPOINT_FAILURE_BAN_DURATION: 1 year (31,536,000 seconds)
- CHECKPOINT_REQUEST_TIMEOUT: 5 seconds
- CHECKPOINT_MAX_RETRIES: 3 retries
- PROTOCOL_VERIFY_TIMEOUT: 5 seconds
- HEADERS_VERIFY_TIMEOUT: 10 seconds
- MAX_HEADER_DIVERGENCE: 6 blocks
Code: lan_security.rs
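The consensus requirement above (agreement from at least 3 internet peers) can be sketched as follows; `checkpoint_passes` is a hypothetical helper, not the lan_security.rs API:

```rust
// A LAN peer's block hash at a checkpoint height must match the hash
// reported by at least MIN_CHECKPOINT_PEERS internet peers.
const MIN_CHECKPOINT_PEERS: usize = 3;

fn checkpoint_passes(lan_hash: [u8; 32], internet_hashes: &[[u8; 32]]) -> bool {
    let agreeing = internet_hashes.iter().filter(|h| **h == lan_hash).count();
    agreeing >= MIN_CHECKPOINT_PEERS
}

fn main() {
    let honest = [1u8; 32];
    let divergent = [2u8; 32];
    assert!(checkpoint_passes(honest, &[honest, honest, honest]));
    // A divergent LAN peer fails even when internet peers agree: ban.
    assert!(!checkpoint_passes(divergent, &[honest, honest, honest]));
    // Too few internet peers: validation cannot pass.
    assert!(!checkpoint_passes(honest, &[honest, honest]));
}
```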
Security Guarantees
- No Eclipse Attacks: 75% internet peer minimum ensures honest network connection
- Checkpoint Validation: Regular validation prevents chain divergence
- LAN Address Privacy: LAN addresses are never advertised to external peers
- Progressive Trust: New LAN peers start with limited trust
- Failure Handling: Multiple failures result in demotion or ban
Code: lan_security.rs
Configuration
Whitelisting
You can whitelist trusted LAN peers to start at maximum trust:
#![allow(unused)]
fn main() {
// Whitelisted peers start at maximum trust (3x multiplier)
policy.add_to_whitelist("192.168.1.100:8333".parse().unwrap());
}
Code: lan_security.rs
Discovery Control
LAN discovery is enabled by default. The system automatically discovers peers during startup, but you can control this behavior through the security policy.
Code: lan_security.rs
Use Cases
Home Networks
If you run multiple Bitcoin nodes on your home network (e.g., Start9, Umbrel, RaspiBlitz), the system will automatically discover and prioritize them for faster sync.
Example: Node on 192.168.1.50 automatically discovers node on 192.168.1.100 and uses it for fast IBD.
Docker/VM Environments
The system also checks common Docker/VM bridge networks:
- Docker default bridge: 172.17.0.1
- Common VM network: 10.0.0.1
Code: lan_discovery.rs
Local Development
For local development and testing, LAN peering speeds up blockchain sync when running multiple nodes locally.
Troubleshooting
LAN Peers Not Discovered
Problem: LAN peers are not being discovered automatically.
Solutions:
- Verify both nodes are on the same network (check IP ranges)
- Verify Bitcoin P2P port (default 8333) is open and accessible
- Check firewall rules (local network traffic may be blocked)
- Verify network interface detection (check logs for “Detected local interface”)
Code: lan_discovery.rs
Checkpoint Failures
Problem: LAN peer is being banned due to checkpoint failures.
Solutions:
- Verify LAN peer is on the correct chain (not a testnet/mainnet mismatch)
- Verify internet peers are available (need at least 3 for validation)
- Check network connectivity (LAN peer may be on different chain due to network issues)
- Verify LAN peer is not malicious (check logs for checkpoint failure details)
Code: lan_security.rs
Trust Level Not Increasing
Problem: LAN peer trust level is not increasing beyond initial.
Solutions:
- Verify peer is actually sending valid blocks (check block validation logs)
- Wait for required blocks (1000 for Level 2, 10000 for Maximum)
- Verify connection time (Maximum trust requires 1 hour of connection)
- Check for failures (3 failures result in demotion)
Code: lan_security.rs
Performance Issues
Problem: LAN peer is not providing expected speedup.
Solutions:
- Verify network speed (check actual bandwidth between nodes)
- Check peer trust level (higher trust = more priority)
- Verify peer is not demoted (check trust level in logs)
- Check for network congestion (other traffic may affect performance)
Integration with IBD Protection
LAN peers are integrated with the IBD bandwidth protection system:
- Bandwidth Limits: LAN peers still respect per-peer bandwidth limits
- Priority Assignment: LAN peers get priority within bandwidth limits
- Reputation Scoring: LAN peer behavior affects reputation scoring
See IBD Bandwidth Protection for details.
Security Considerations
Eclipse Attack Prevention
The 25% LAN peer cap and 75% internet peer minimum ensure that even if all LAN peers are malicious, the node maintains connection to the honest network through internet peers.
Checkpoint Validation
Regular checkpoint validation ensures that LAN peers cannot diverge from the honest chain. Checkpoint failures result in immediate ban.
LAN Address Privacy
LAN addresses are never advertised to external peers, preventing information leakage about your local network topology.
Code: lan_security.rs
See Also
- IBD Bandwidth Protection - How LAN peers interact with bandwidth protection
- Network Operations - General network operations
- Security Threat Models - Security model details
- Node Configuration - Configuration options
Mining Integration
The reference node includes mining coordination functionality as part of the Bitcoin protocol. The system provides block template generation, mining coordination, and optional Stratum V2 protocol support.
Block Template Generation
Block templates are created using a formally verified algorithm from blvm-consensus that ensures correctness per Orange Paper Section 12.4.
Algorithm Overview
- Get Chain State: Retrieve current chain tip, height, and difficulty
- Get Mempool Transactions: Fetch transactions from mempool
- Get UTXO Set: Load UTXO set for fee calculation
- Select Transactions: Choose transactions based on fee priority
- Create Coinbase: Generate coinbase transaction with subsidy + fees
- Calculate Merkle Root: Compute merkle root from transaction list
- Build Template: Construct block header with all components
Code: mining.rs
Transaction Selection
Transactions are selected using a fee-based priority algorithm:
- Prioritize by Fee Rate: Transactions sorted by fee rate (satoshis per byte)
- Size Limits: Respect maximum block size (1MB) and weight (4M weight units)
- Minimum Fee: Filter transactions below minimum fee rate (1 sat/vB default)
- UTXO Validation: Verify all transaction inputs exist in UTXO set
Code: miner.rs
Fee Calculation
Transaction fees are calculated using the UTXO set:
#![allow(unused)]
fn main() {
// fee = sum(input values) - sum(output values), in satoshis
let fee: u64 = input_values.iter().sum::<u64>() - output_values.iter().sum::<u64>();
// fee rate in satoshis per byte
let fee_rate = fee / transaction_size;
}
The coinbase transaction includes:
- Block Subsidy: Calculated based on halving schedule
- Transaction Fees: Sum of all fees from selected transactions
Code: miner.rs
Block Template Structure
#![allow(unused)]
fn main() {
pub struct Block {
header: BlockHeader {
version: 1,
prev_block_hash: [u8; 32],
merkle_root: [u8; 32],
timestamp: u32,
bits: u32,
nonce: 0, // To be filled by miner
},
transactions: Vec<Transaction>, // Coinbase first
}
}
Code: miner.rs
Mining Process
Template Generation
The getblocktemplate RPC method generates a block template:
- Uses the formally verified create_block_template from blvm-consensus
- Converts to JSON-RPC format (BIP 22/23)
- Returns a template ready for mining
Code: mining.rs
Proof of Work
Mining involves finding a nonce that satisfies the difficulty target:
- Nonce Search: Iterate through nonce values (0 to 2^32-1)
- Hash Calculation: Compute SHA256(SHA256(block_header))
- Target Check: Verify hash < difficulty target
- Success: Return mined block with valid nonce
Code: miner.rs
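The nonce search above can be sketched as a loop; to keep the example dependency-free, a toy mixing function stands in for SHA256(SHA256(header)), and is explicitly not the real hash:

```rust
// NOT a real hash: FNV-style mixing for illustration only. The actual
// miner computes SHA256(SHA256(header)) and compares the 256-bit digest
// against the difficulty target.
fn toy_hash(header: &[u8], nonce: u32) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in header.iter().chain(nonce.to_le_bytes().iter()) {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

/// Iterate nonces (0..=2^32-1) until the hash falls below the target.
fn mine(header: &[u8], target: u64) -> Option<u32> {
    (0..=u32::MAX).find(|&nonce| toy_hash(header, nonce) < target)
}

fn main() {
    let header = b"block header bytes";
    // An easy target so the toy search finishes quickly.
    let target = u64::MAX / 16;
    let nonce = mine(header, target).expect("nonce found");
    assert!(toy_hash(header, nonce) < target);
}
```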
Block Submission
Mined blocks are submitted via submitblock RPC method:
- Validation: Block validated against consensus rules
- Connection: Block connected to chain
- Confirmation: Block added to blockchain
Code: mining.rs
Mining Coordinator
The MiningCoordinator manages mining operations:
- Template Generation: Creates block templates from mempool
- Mining Loop: Continuously generates and mines blocks
- Stratum V2 Integration: Coordinates with Stratum V2 protocol
- Merge Mining: Available via the optional blvm-merge-mining module (paid plugin)
Code: miner.rs
Stratum V2 Support
Optional Stratum V2 protocol support provides:
- Binary Protocol: 50-66% bandwidth savings vs Stratum V1
- Encrypted Communication: TLS/QUIC encryption
- Multiplexed Channels: QUIC stream multiplexing
- Merge Mining: Simultaneous mining of multiple chains
Code: mod.rs
Configuration
Mining Configuration
[mining]
enabled = false
mining_threads = 1
Stratum V2 Configuration
[stratum_v2]
enabled = true
listen_addr = "0.0.0.0:3333"
Code: mod.rs
See Also
- Node Operations - Node operation and management
- RPC API Reference - Mining-related RPC methods (getblocktemplate, submitblock)
- Stratum V2 + Merge Mining - Stratum V2 protocol details
- Node Configuration - Mining configuration options
- Protocol Specifications - Stratum V2 and mining protocols
Stratum V2 Mining Protocol
Overview
Bitcoin Commons implements the Stratum V2 mining protocol, enabling efficient mining coordination. Merge mining is available as a separate paid plugin module (blvm-merge-mining) that integrates with Stratum V2.
Stratum V2 Protocol
Protocol Features
- Binary Protocol: 50-66% bandwidth savings compared to Stratum V1
- Encrypted Communication: TLS/QUIC encryption for secure connections
- Multiplexed Channels: QUIC stream multiplexing for multiple mining streams
- Template Distribution: Efficient block template distribution
- Share Submission: Optimized share submission protocol
Code: mod.rs
Message Encoding
Stratum V2 uses Tag-Length-Value (TLV) encoding:
[4-byte length][2-byte tag][4-byte length][payload]
Code: protocol.rs
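One reading of the framing above is an outer 4-byte frame length, the 2-byte tag, a 4-byte payload length, then the payload. A sketch of an encoder under that assumption (byte order here is also an assumption):

```rust
// Encode one frame: [4-byte frame length][2-byte tag][4-byte payload
// length][payload], little-endian. The frame length covers everything
// after itself (tag + length + payload).
fn encode_frame(tag: u16, payload: &[u8]) -> Vec<u8> {
    let inner_len = payload.len() as u32;
    let frame_len = (2 + 4 + payload.len()) as u32;
    let mut out = Vec::with_capacity(4 + frame_len as usize);
    out.extend_from_slice(&frame_len.to_le_bytes());
    out.extend_from_slice(&tag.to_le_bytes());
    out.extend_from_slice(&inner_len.to_le_bytes());
    out.extend_from_slice(payload);
    out
}

fn main() {
    let frame = encode_frame(0x0001, b"share");
    assert_eq!(frame.len(), 4 + 2 + 4 + 5);           // header + payload
    assert_eq!(&frame[4..6], &0x0001u16.to_le_bytes()); // tag field
}
```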
Transport Support
Stratum V2 works with both TCP and QUIC transports via the transport abstraction layer:
- TCP: Bitcoin P2P compatible
- Quinn QUIC: Direct QUIC transport
- Iroh/QUIC: QUIC with NAT traversal and DERP
Code: mod.rs
Merge Mining (Optional Plugin)
Overview
Merge mining is NOT built into the core node. It is available as a separate, optional paid plugin module (blvm-merge-mining) that integrates with the Stratum V2 module.
Key Points
- Separate Module: blvm-merge-mining is not part of the core node
- Requires Stratum V2: The merge mining module depends on the blvm-stratum-v2 module
- One-Time Activation Fee: 100,000 sats (0.001 BTC) required to activate
- Revenue Model: Module developer receives a fixed percentage (default 5%) of merge mining rewards
- Not a Commons Funding Model: Merge mining revenue goes to the module developer, not to Commons infrastructure
Installation
To use merge mining:
- Install Stratum V2 module (required dependency)
- Install merge mining module: blvm-merge-mining
- Pay activation fee: 100,000 sats one-time payment
- Configure: Set up secondary chains and revenue share
Documentation
For complete merge mining documentation, see:
- blvm-merge-mining README - Module documentation
- Module System - How modules work
Server Implementation
StratumV2Server
The server accepts miner connections and coordinates mining operations:
- Accepts miner connections
- Distributes mining jobs
- Validates share submissions
- Coordinates merge mining channels
Code: server.rs
Pool Implementation
The StratumV2Pool manages mining pool operations:
- Job distribution
- Share validation
- Channel management
- Connection pooling
Code: pool.rs
Client Implementation
StratumV2Miner
The miner client connects to pools and submits shares:
- Connection management
- Job reception
- Share submission
Code: miner.rs
StratumV2Client
The client handles protocol communication:
- Message encoding/decoding
- Connection establishment
- Channel management
- Error handling
Code: client.rs
Configuration
StratumV2Config
[stratum_v2]
enabled = true
pool_url = "tcp://pool.example.com:3333" # or "iroh://<nodeid>"
listen_addr = "0.0.0.0:3333" # Server mode
Note: Merge mining configuration is handled by the blvm-merge-mining module, not in Stratum V2 config.
Code: mod.rs
Usage
Server Mode
#![allow(unused)]
fn main() {
use blvm_node::network::stratum_v2::StratumV2Server;
let server = StratumV2Server::new(
network_manager,
mining_coordinator,
listen_addr,
);
server.start().await?;
}
Miner Mode
#![allow(unused)]
fn main() {
use blvm_node::network::stratum_v2::StratumV2Miner;
let miner = StratumV2Miner::new(pool_url);
miner.connect().await?;
miner.start_mining().await?;
}
Benefits
- Bandwidth Efficiency: 50-66% bandwidth savings vs Stratum V1
- Security: Encrypted communication via TLS/QUIC
- Efficiency: Multiplexed channels for simultaneous mining
- Flexibility: Support for multiple mining streams
- Transport Choice: Works with TCP or QUIC
Components
The Stratum V2 system includes:
- Protocol encoding/decoding
- Server and client implementations
- QUIC multiplexed channels
- Transport abstraction support
Location: blvm-node/src/network/stratum_v2/
Note: Merge mining functionality is provided by the separate blvm-merge-mining module, not by the core Stratum V2 implementation.
Multi-Transport Architecture
Overview
The transport abstraction layer provides a unified interface for multiple network transport protocols, enabling Bitcoin Commons to support both traditional TCP (Bitcoin P2P compatible) and modern QUIC-based transports simultaneously.
Architecture
NetworkManager
└── Transport Trait (abstraction)
├── TcpTransport (Bitcoin P2P compatible)
├── QuinnTransport (direct QUIC)
└── IrohTransport (QUIC with NAT traversal)
Transport Types
Transport Comparison
| Feature | TCP | Quinn QUIC | Iroh QUIC |
|---|---|---|---|
| Protocol | TCP/IP | QUIC | QUIC + DERP |
| Compatibility | Bitcoin P2P | Bitcoin P2P compatible | Commons-specific |
| Addressing | SocketAddr | SocketAddr | Public Key |
| NAT Traversal | ❌ No | ❌ No | ✅ Yes (DERP) |
| Multiplexing | ❌ No | ✅ Yes | ✅ Yes |
| Encryption | ❌ No (TLS optional) | ✅ Built-in | ✅ Built-in |
| Connection Migration | ❌ No | ✅ Yes | ✅ Yes |
| Latency | Standard | Lower | Lower |
| Bandwidth | Standard | Better | Better |
| Default | ✅ Yes | ❌ No | ❌ No |
| Feature Flag | Always enabled | quinn | iroh |
Code: transport.rs
TCP Transport
Traditional TCP transport for Bitcoin P2P protocol compatibility:
- Uses standard TCP sockets
- Maintains Bitcoin wire protocol format
- Compatible with standard Bitcoin nodes
- Default transport for backward compatibility
- No built-in encryption (TLS optional)
- No connection multiplexing
Code: tcp_transport.rs
Quinn QUIC Transport
Direct QUIC transport using the Quinn library:
- QUIC protocol benefits (multiplexing, encryption, connection migration)
- SocketAddr-based addressing (similar to TCP)
- Lower latency and better congestion control
- Built-in TLS encryption
- Stream multiplexing over single connection
- Optional feature flag: quinn
Code: quinn_transport.rs
Iroh Transport
QUIC-based transport using Iroh for P2P networking:
- Public key-based peer identity
- NAT traversal support via DERP (Designated Encrypted Relay for Packets)
- Decentralized peer discovery
- Built-in encryption and multiplexing
- Connection migration support
- Optional feature flag: iroh
Code: iroh_transport.rs
Performance Characteristics
TCP Transport:
- Latency: Standard (RTT-dependent)
- Throughput: Standard (TCP congestion control)
- Connection Overhead: Low (no encryption by default)
- Use Case: Bitcoin P2P compatibility, standard networking
Quinn QUIC Transport:
- Latency: Lower (0-RTT connection establishment)
- Throughput: Higher (better congestion control)
- Connection Overhead: Moderate (built-in encryption)
- Use Case: Modern applications, improved performance
Iroh QUIC Transport:
- Latency: Lower (0-RTT + DERP routing)
- Throughput: Higher (QUIC + optimized routing)
- Connection Overhead: Higher (DERP relay overhead)
- Use Case: NAT traversal, decentralized networking
Code: transport.rs
Transport Abstraction
Transport Trait
The Transport trait provides a unified interface:
#![allow(unused)]
fn main() {
pub trait Transport: Send + Sync {
fn connect(&self, addr: TransportAddr) -> Result<Box<dyn TransportConnection>>;
fn listen(&self, addr: TransportAddr) -> Result<Box<dyn TransportListener>>;
fn transport_type(&self) -> TransportType;
}
}
Code: transport.rs
TransportAddr
Unified address type supporting all transports:
#![allow(unused)]
fn main() {
pub enum TransportAddr {
Tcp(SocketAddr),
Quinn(SocketAddr),
Iroh(Vec<u8>), // Public key bytes
}
}
Code: transport.rs
TransportType
Runtime transport selection:
#![allow(unused)]
fn main() {
pub enum TransportType {
Tcp,
Quinn,
Iroh,
}
}
Code: transport.rs
Transport Selection
Transport Preference
Runtime preference for transport selection:
- TcpOnly: Use only TCP transport
- IrohOnly: Use only Iroh transport
- Hybrid: Prefer Iroh if available, fallback to TCP
Code: transport.rs
Feature Negotiation
Peers negotiate transport capabilities during connection:
- Service flags indicate transport support
- Automatic fallback if preferred transport unavailable
- Transport-aware message routing
Code: protocol.rs
Protocol Adapter
The ProtocolAdapter handles message serialization between:
- Consensus-proof NetworkMessage types
- Transport-specific wire formats (TCP Bitcoin P2P vs Iroh message format)
Code: protocol_adapter.rs
Message Bridge
The MessageBridge bridges blvm-consensus message processing with transport layer:
- Converts messages to/from transport formats
- Processes incoming messages
- Generates responses
Code: message_bridge.rs
Network Manager Integration
The NetworkManager supports multiple transports:
- Runtime transport selection
- Transport-aware peer management
- Unified message routing
- Automatic transport fallback
Code: mod.rs
Benefits
- Backward Compatibility: TCP transport maintains Bitcoin P2P compatibility
- Modern Protocols: QUIC support for improved performance
- Flexibility: Runtime transport selection
- Unified Interface: Single API for all transports
- NAT Traversal: Iroh transport enables NAT traversal
- Extensible: Easy to add new transport types
Usage
Configuration
[network]
transport_preference = "Hybrid" # or "TcpOnly" or "IrohOnly"
[network.tcp]
enabled = true
listen_addr = "0.0.0.0:8333"
[network.iroh]
enabled = true
node_id = "..."
Code Example
#![allow(unused)]
fn main() {
use blvm_node::network::{NetworkManager, TransportAddr, TransportType};
let network_manager = NetworkManager::new(config);
// Connect via TCP
let tcp_addr = TransportAddr::tcp("127.0.0.1:8333".parse()?);
network_manager.connect(tcp_addr).await?;
// Connect via Iroh
let iroh_addr = TransportAddr::iroh(pubkey_bytes);
network_manager.connect(iroh_addr).await?;
}
Components
The transport abstraction includes:
- Transport trait definitions
- TCP transport implementation
- Quinn QUIC transport (optional)
- Iroh QUIC transport (optional)
- Protocol adapter for message conversion
- Message bridge for unified routing
- Network manager integration
Location: blvm-node/src/network/transport.rs, blvm-node/src/network/tcp_transport.rs, blvm-node/src/network/quinn_transport.rs, blvm-node/src/network/iroh_transport.rs
Privacy Relay Protocols
Overview
Bitcoin Commons implements multiple privacy-preserving and performance-optimized transaction relay protocols: Dandelion++, Fibre, and Package Relay. These protocols improve privacy, reduce bandwidth, and enable efficient transaction propagation.
Dandelion++
Overview
Dandelion++ provides privacy-preserving transaction relay with formal anonymity guarantees against transaction origin analysis. It operates in two phases: stem phase (obscures origin) and fluff phase (standard diffusion).
Code: dandelion.rs
Architecture
Dandelion++ operates in two phases:
- Stem Phase: Transaction relayed along a random path (obscures origin)
- Fluff Phase: Transaction broadcast to all peers (standard diffusion)
Stem Path Management
Each peer maintains a stem path to a randomly selected peer:
#![allow(unused)]
fn main() {
pub struct StemPath {
pub next_peer: String,
pub expiry: Instant,
pub hop_count: u8,
}
}
Code: dandelion.rs
Stem Phase Behavior
- Transactions relayed to next peer in stem path
- Random path selection obscures transaction origin
- Stem timeout: 10 seconds (default)
- Fluff probability: 10% per hop (default)
- Maximum stem hops: 2 (default)
Code: dandelion.rs
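The stem-to-fluff decision above can be sketched as a pure function; `should_fluff` is a hypothetical name, and the random draw is passed in as an argument so the example stays deterministic:

```rust
use std::time::{Duration, Instant};

// Switch from stem to fluff when any trigger fires: the per-hop coin
// flip succeeds, the stem timeout expires, or the maximum hop count
// is reached.
fn should_fluff(
    rand_draw: f64,         // uniform draw in [0, 1)
    fluff_probability: f64, // 0.1 by default
    entered_stem: Instant,
    stem_timeout: Duration, // 10 seconds by default
    hop_count: u8,
    max_stem_hops: u8,      // 2 by default
    now: Instant,
) -> bool {
    rand_draw < fluff_probability
        || now.duration_since(entered_stem) >= stem_timeout
        || hop_count >= max_stem_hops
}

fn main() {
    let t0 = Instant::now();
    let timeout = Duration::from_secs(10);
    // Coin flip fails, timeout not reached, hops under limit: stay in stem.
    assert!(!should_fluff(0.5, 0.1, t0, timeout, 1, 2, t0));
    // Max hops reached: fluff regardless of the coin flip.
    assert!(should_fluff(0.5, 0.1, t0, timeout, 2, 2, t0));
    // Coin flip succeeds: fluff.
    assert!(should_fluff(0.05, 0.1, t0, timeout, 1, 2, t0));
}
```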
Fluff Phase Behavior
- Transaction broadcast to all peers
- Standard Bitcoin transaction diffusion
- Triggered by:
- Random probability at each hop
- Stem timeout expiration
- Maximum hop count reached
Code: dandelion.rs
Configuration
[network.dandelion]
enabled = true
stem_timeout_secs = 10
fluff_probability = 0.1 # 10%
max_stem_hops = 2
Code: dandelion.rs
Benefits
- Privacy: Obscures transaction origin
- Formal Guarantees: Anonymity guarantees against origin analysis
- Backward Compatible: Falls back to standard relay if disabled
- Configurable: Adjustable timeouts and probabilities
Fibre
Overview
Fibre (Fast Internet Bitcoin Relay Engine) provides high-performance block relay using UDP transport with Forward Error Correction (FEC) encoding for packet loss tolerance.
Code: fibre.rs
Architecture
Fibre uses:
- UDP Transport: Low-latency UDP for block relay
- FEC Encoding: Reed-Solomon erasure coding for packet loss tolerance
- Chunk-based Transmission: Blocks split into chunks with parity shards
- Automatic Recovery: Missing chunks recovered via FEC
FEC Encoding
Blocks are encoded using Reed-Solomon erasure coding:
- Data Shards: Original block data split into shards
- Parity Shards: Redundant shards for error recovery
- Shard Size: Configurable (default: 1024 bytes)
- Parity Ratio: Configurable (default: 0.2 = 20% parity)
Code: fibre.rs
Block Encoding Process
- Serialize block to bytes
- Split into data shards
- Generate parity shards via FEC
- Create FEC chunks for transmission
- Send chunks via UDP
Code: fibre.rs
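The shard arithmetic implied by the defaults above (1024-byte shards, 20% parity) can be sketched as follows. The helper is illustrative; the actual fibre.rs shard layout may differ:

```rust
// Illustrative shard-count arithmetic for the encoding process above.
pub fn shard_counts(block_len: usize, shard_size: usize, parity_ratio: f64) -> (usize, usize) {
    // Data shards: ceil(block_len / shard_size)
    let data = (block_len + shard_size - 1) / shard_size;
    // Parity shards: at least one, proportional to the parity ratio
    let parity = ((data as f64 * parity_ratio).ceil() as usize).max(1);
    (data, parity)
}
```

For example, a 1 MB block with 1024-byte shards and a 0.2 parity ratio yields 977 data shards and 196 parity shards, so the block survives losing up to 196 packets.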
Block Assembly Process
- Receive FEC chunks via UDP
- Track received chunks per block
- When enough chunks have been received (at least the number of data shards), reconstruct the block
- Verify block hash matches
Code: fibre.rs
UDP Transport
Fibre uses UDP for low-latency transmission:
- Connection Tracking: Per-peer connection state
- Retry Logic: Automatic retry for lost chunks
- Sequence Numbers: Duplicate detection
- Timeout Handling: Connection timeout management
Code: fibre.rs
Configuration
[network.fibre]
enabled = true
bind_addr = "0.0.0.0:8334"
chunk_timeout_secs = 5
max_retries = 3
fec_parity_ratio = 0.2 # 20% parity
max_assemblies = 100
Code: fibre.rs
Statistics
Fibre tracks comprehensive statistics:
- Blocks sent/received
- Chunks sent/received
- FEC recoveries
- UDP errors
- Average latency
- Success rate
Code: fibre.rs
Benefits
- Low Latency: UDP transport reduces latency
- Packet Loss Tolerance: FEC recovers from lost chunks
- High Throughput: Efficient chunk-based transmission
- Automatic Recovery: No manual retry needed
Package Relay (BIP331)
Overview
Package Relay (BIP331) allows nodes to relay and validate groups of transactions together, enabling efficient fee-bumping (RBF) and CPFP (Child Pays For Parent) scenarios.
Code: package_relay.rs
Package Structure
A transaction package contains:
- Transactions: Ordered list (parents before children)
- Package ID: Combined hash of all transactions
- Combined Fee: Sum of all transaction fees
- Combined Weight: Total weight for fee rate calculation
Code: package_relay.rs
Package Validation
Packages are validated for:
- Size Limits: Maximum 25 transactions (BIP331)
- Weight Limits: Maximum 404,000 WU (BIP331)
- Fee Rate: Minimum fee rate requirement
- Ordering: Parents must precede children
- No Duplicates: No duplicate transactions
Code: package_relay.rs
Use Cases
- Fee-Bumping (RBF): Parent + child transaction for fee increase
- CPFP: Child transaction pays for parent’s fees
- Atomic Sets: Multiple transactions that must be accepted together
Code: package_relay.rs
Package ID Calculation
Package ID is calculated as double SHA256 of all transaction IDs:
#![allow(unused)]
fn main() {
pub fn from_transactions(transactions: &[Transaction]) -> PackageId {
// Hash all txids, then double hash
}
}
Code: package_relay.rs
Configuration
[network.package_relay]
enabled = true
max_package_size = 25
max_package_weight = 404000 # 404k WU
min_fee_rate = 1000 # sat/kvB (1000 sat/kvB = 1 sat/vB)
Code: package_relay.rs
Benefits
- Efficient Fee-Bumping: Better fee rate calculation for packages
- Reduced Orphans: Reduces orphan transactions in mempool
- Atomic Validation: Package validated as unit
- DoS Resistance: Size and weight limits prevent abuse
Integration
Relay Manager
The RelayManager coordinates all relay protocols:
- Standard block/transaction relay
- Dandelion++ integration (optional)
- Fibre integration (optional)
- Package relay support
Code: relay.rs
Protocol Selection
Relay protocols are selected based on:
- Feature flags (dandelion, fibre)
- Peer capabilities
- Configuration settings
- Runtime preferences
Code: relay.rs
Components
The privacy relay system includes:
- Dandelion++ stem/fluff phase management
- Fibre UDP transport with FEC encoding
- Package Relay (BIP331) validation
- Relay manager coordination
- Statistics tracking
Location: blvm-node/src/network/dandelion.rs, blvm-node/src/network/fibre.rs, blvm-node/src/network/package_relay.rs, blvm-node/src/network/relay.rs
Package Relay (BIP331)
Overview
Package Relay (BIP331) enables nodes to relay and validate groups of transactions together as atomic units. This is particularly useful for fee-bumping (RBF) transactions, CPFP (Child Pays For Parent) scenarios, and atomic transaction sets.
Specification: BIP 331
Architecture
Package Structure
A transaction package contains:
#![allow(unused)]
fn main() {
pub struct TransactionPackage {
pub transactions: Vec<Transaction>, // Ordered: parents first
pub package_id: PackageId,
pub combined_fee: u64,
pub combined_weight: usize,
}
}
Code: package_relay.rs
Package ID
Package ID is calculated as double SHA256 of all transaction IDs:
#![allow(unused)]
fn main() {
pub fn from_transactions(transactions: &[Transaction]) -> PackageId {
let mut hasher = Sha256::new();
for tx in transactions {
let txid = calculate_txid(tx);
hasher.update(txid);
}
let first = hasher.finalize();
let mut hasher2 = Sha256::new();
hasher2.update(first);
let final_hash = hasher2.finalize();
PackageId(final_hash.into())
}
}
Code: package_relay.rs
Validation Rules
Size Limits
- Maximum Transactions: 25 (BIP331 limit)
- Maximum Weight: 404,000 WU (~101,000 vB)
- Minimum Fee Rate: Configurable (default: 1 sat/vB)
Code: package_relay.rs
Ordering Requirements
Transactions must be ordered with parents before children:
- Any input that spends an output of an in-package parent must reference a transaction that appears earlier in the list
- Invalid ordering results in InvalidOrder rejection
Code: package_relay.rs
Fee Calculation
Package fee is calculated as sum of all transaction fees:
combined_fee = sum(inputs) - sum(outputs), summed over all transactions
Fee rate is calculated as:
fee_rate = combined_fee / combined_weight
Code: package_relay.rs
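A worked example of the formulas above, with illustrative fees and weights (the helper name is hypothetical):

```rust
// Combined fee rate across a package, per the formulas above.
pub fn package_fee_rate(fees: &[u64], weights: &[usize]) -> f64 {
    let combined_fee: u64 = fees.iter().sum();
    let combined_weight: usize = weights.iter().sum();
    combined_fee as f64 / combined_weight as f64
}
```

A low-fee parent (200 sats, 800 WU) bumped by a high-fee child (2000 sats, 400 WU) yields a package rate of 2200/1200 ≈ 1.83 sat/WU, far above the parent's standalone 0.25 sat/WU.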
Use Cases
Fee-Bumping (RBF)
Parent transaction + child transaction that increases fee:
Package:
- Parent TX (low fee)
- Child TX (bumps parent fee)
CPFP (Child Pays For Parent)
Child transaction pays for parent’s fees:
Package:
- Parent TX (insufficient fee)
- Child TX (pays for parent)
Atomic Transaction Sets
Multiple transactions that must be accepted together:
Package:
- TX1 (depends on TX2)
- TX2 (depends on TX1)
Code: package_relay.rs
Package Manager
PackageRelay
The PackageRelay manager handles:
- Package validation
- Package state tracking
- Package acceptance/rejection
- Package relay to peers
Code: package_relay.rs
Package States
#![allow(unused)]
fn main() {
pub enum PackageStatus {
Pending, // Awaiting validation
Accepted, // Validated and accepted
Rejected { reason: PackageRejectReason },
}
}
Code: package_relay.rs
Rejection Reasons
#![allow(unused)]
fn main() {
pub enum PackageRejectReason {
TooManyTransactions,
WeightExceedsLimit,
FeeRateTooLow,
InvalidOrder,
DuplicateTransactions,
InvalidStructure,
}
}
Code: package_relay.rs
Validation Process
- Size Check: Verify transaction count ≤ 25
- Weight Check: Verify combined weight ≤ 404,000 WU
- Ordering Check: Verify parents before children
- Duplicate Check: Verify no duplicate transactions
- Fee Calculation: Calculate combined fee and fee rate
- Fee Rate Check: Verify fee rate ≥ minimum
- Structure Check: Verify valid package structure
Code: package_relay.rs
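The first few checks in this sequence reduce to simple comparisons. A condensed sketch follows; real package_relay.rs types, variant names, and ordering of checks may differ:

```rust
// Condensed sketch of the validation order above (subset of checks).
#[derive(Debug, PartialEq)]
pub enum Reject {
    TooManyTransactions,
    WeightExceedsLimit,
    FeeRateTooLow,
}

pub fn check_package(
    tx_count: usize,
    combined_weight: usize,
    combined_fee: u64,
    min_fee_rate: f64, // sat per weight unit
) -> Result<(), Reject> {
    if tx_count > 25 {
        return Err(Reject::TooManyTransactions); // BIP331 size limit
    }
    if combined_weight > 404_000 {
        return Err(Reject::WeightExceedsLimit); // BIP331 weight limit
    }
    if (combined_fee as f64 / combined_weight as f64) < min_fee_rate {
        return Err(Reject::FeeRateTooLow);
    }
    Ok(())
}
```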
Network Integration
Package Messages
- PackageRelay: Relay package to peers
- PackageAccept: Package accepted by peer
- PackageReject: Package rejected with reason
Code: package_relay_handler.rs
Handler Integration
The PackageRelayHandler processes incoming package messages:
- Receives package relay requests
- Validates packages
- Accepts or rejects packages
- Relays accepted packages to other peers
Code: package_relay_handler.rs
Configuration
[network.package_relay]
enabled = true
max_package_size = 25
max_package_weight = 404000 # 404k WU
min_fee_rate = 1000 # sat/kvB (1000 sat/kvB = 1 sat/vB)
Code: package_relay.rs
Benefits
- Efficient Fee-Bumping: Better fee rate calculation for packages
- Reduced Orphans: Reduces orphan transactions in mempool
- Atomic Validation: Package validated as unit
- DoS Resistance: Size and weight limits prevent abuse
- CPFP Support: Enables child-pays-for-parent scenarios
Components
The Package Relay system includes:
- Package structure and validation
- Package ID calculation
- Fee and weight calculation
- Ordering validation
- Package manager
- Network message handling
Location: blvm-node/src/network/package_relay.rs, blvm-node/src/network/package_relay_handler.rs
Performance Optimizations
Overview
Bitcoin Commons implements performance optimizations for faster initial block download (IBD), parallel validation, and efficient UTXO operations. These optimizations provide 10-50x speedups for common operations while maintaining consensus correctness.
Parallel Initial Block Download (IBD)
Overview
Parallel IBD significantly speeds up initial blockchain synchronization by downloading and validating blocks concurrently from multiple peers. The system uses checkpoint-based parallel header download, block pipelining, streaming validation, and efficient batch storage operations.
Code: parallel_ibd.rs
Architecture
The parallel IBD system consists of several coordinated optimizations:
- Checkpoint Parallel Headers: Download headers in parallel using hardcoded checkpoints
- Block Pipelining: Download multiple blocks concurrently from each peer
- Streaming Validation: Validate blocks as they arrive using a reorder buffer
- Batch Storage: Use batch writes for efficient UTXO set updates
Checkpoint Parallel Headers
Headers are downloaded in parallel using hardcoded checkpoints at well-known block heights. This allows multiple header ranges to be downloaded simultaneously from different peers.
Checkpoints: Genesis, 11111, 33333, 74000, 105000, 134444, 168000, 193000, 210000 (first halving), 250000, 295000, 350000, 400000, 450000, 500000, 550000, 600000, 650000, 700000, 750000, 800000, 850000
Code: MAINNET_CHECKPOINTS
Algorithm:
- Identify checkpoint ranges for the target height range
- Download headers in parallel for each range
- Each range uses the checkpoint hash as its starting locator
- Verification ensures continuity and checkpoint hash matching
Performance: 4-8x faster header download vs sequential
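Step 1 of the algorithm, identifying checkpoint ranges, can be sketched like this. The function name and signature are illustrative, not the actual parallel_ibd.rs API:

```rust
// Split a header sync into parallel (start, end] ranges at checkpoint heights.
pub fn checkpoint_ranges(checkpoints: &[u64], target: u64) -> Vec<(u64, u64)> {
    let mut ranges = Vec::new();
    let mut start = 0u64;
    for &cp in checkpoints.iter().filter(|&&cp| cp <= target) {
        if cp > start {
            ranges.push((start, cp)); // each range can go to a different peer
            start = cp;
        }
    }
    if start < target {
        ranges.push((start, target)); // tail beyond the last checkpoint
    }
    ranges
}
```

Each resulting range uses its checkpoint hash as the starting locator, so ranges can be downloaded from different peers and verified independently.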
Block Pipelining
Blocks are downloaded with deep pipelining per peer, allowing multiple outstanding block requests to hide network latency.
Configuration:
- max_concurrent_per_peer: 64 concurrent downloads per peer (default)
- chunk_size: 100 blocks per chunk (default)
- download_timeout_secs: 60 seconds per block (default)
Code: ParallelIBDConfig
Dynamic Work Dispatch:
- Uses a shared work queue instead of static chunk assignment
- Fast peers automatically grab more work as they finish chunks
- FIFO ordering ensures lowest heights are processed first
- Natural load balancing across peers
Performance: 4-8x improvement vs sequential block requests
Streaming Validation with Reorder Buffer
Blocks may arrive out of order from parallel downloads. A reorder buffer ensures blocks are validated in sequential order while allowing downloads to continue.
Implementation:
- Bounded channel: 1000 blocks max in flight (~500MB-1GB memory)
- Reorder buffer: BTreeMap maintains blocks until next expected height
- Streaming validation: Validates blocks as they become available in order
- Natural backpressure: Downloads pause when buffer is full
Code: streaming validation
Memory Bounds: ~500MB-1GB maximum (1000 blocks × ~500KB average)
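A minimal version of the reorder buffer described above. The real implementation tracks full blocks and applies backpressure through a bounded channel; this sketch uses placeholder byte payloads:

```rust
use std::collections::BTreeMap;

// Blocks arrive keyed by height and are released to validation in order.
pub struct ReorderBuffer {
    next_height: u64,
    pending: BTreeMap<u64, Vec<u8>>, // height -> serialized block (placeholder)
}

impl ReorderBuffer {
    pub fn new(start: u64) -> Self {
        Self { next_height: start, pending: BTreeMap::new() }
    }

    /// Insert a possibly out-of-order block; returns every block that is
    /// now ready for in-order validation.
    pub fn insert(&mut self, height: u64, block: Vec<u8>) -> Vec<(u64, Vec<u8>)> {
        self.pending.insert(height, block);
        let mut ready = Vec::new();
        // Drain the run of consecutive heights starting at next_height.
        while let Some(block) = self.pending.remove(&self.next_height) {
            ready.push((self.next_height, block));
            self.next_height += 1;
        }
        ready
    }
}
```

Blocks 102 and 101 arriving before 100 are held; once 100 arrives, all three are released in order.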
Batch Storage Operations
UTXO set updates use batch writes for efficient bulk operations. Batch writes are 10-100x faster than individual inserts.
BatchWriter Trait:
- Accumulates multiple put/delete operations
- Commits all operations atomically in a single transaction
- Ensures database consistency even on crash
Code: BatchWriter
Performance:
- Individual Tree::insert(): ~1ms per operation (transaction overhead)
- BatchWriter: ~1ms total for thousands of operations (single transaction)
Usage:
#![allow(unused)]
fn main() {
let mut batch = tree.batch();
for (key, value) in utxo_updates {
batch.put(key, value);
}
batch.commit()?; // Single atomic commit
}
Peer Scoring and Filtering
The system tracks peer performance and filters out extremely slow peers during IBD:
- Latency Tracking: Monitors average block download latency per peer
- Slow Peer Filtering: Drops peers with >90s average latency (keeps at least 2)
- Dynamic Selection: Fast peers automatically get more work
Code: peer filtering
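The filtering rule above (drop peers averaging over 90 seconds, but always keep at least two) can be sketched as follows; this is illustrative, not the actual peer-filtering API:

```rust
// Keep peers with acceptable latency; fall back to the two fastest peers.
pub fn filter_slow_peers(latencies: Vec<(String, f64)>) -> Vec<String> {
    let mut sorted = latencies;
    // Fastest first, so the "keep at least 2" fallback keeps the best peers.
    sorted.sort_by(|a, b| a.1.partial_cmp(&b.1).unwrap());
    let fast: Vec<String> = sorted
        .iter()
        .filter(|(_, lat)| *lat <= 90.0)
        .map(|(id, _)| id.clone())
        .collect();
    if fast.len() >= 2 {
        fast
    } else {
        sorted.into_iter().take(2).map(|(id, _)| id).collect()
    }
}
```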
Configuration
[ibd]
# Number of parallel workers (default: CPU count)
num_workers = 8
# Chunk size in blocks (default: 100)
chunk_size = 100
# Maximum concurrent downloads per peer (default: 64)
max_concurrent_per_peer = 64
# Checkpoint interval in blocks (default: 10,000)
checkpoint_interval = 10000
# Timeout for block download in seconds (default: 60)
download_timeout_secs = 60
Code: ParallelIBDConfig
Performance Impact
- Parallel Headers: 4-8x faster header download
- Block Pipelining: 4-8x improvement vs sequential requests
- Streaming Validation: Enables concurrent download + validation
- Batch Storage: 10-100x faster UTXO updates
- Overall IBD: 10-50x faster than sequential IBD
Parallel Block Validation
Architecture
Blocks are validated in parallel when they are deep enough from the chain tip. This optimization uses Rayon for parallel execution.
Code: mod.rs
Safety Conditions
Parallel validation is only used when:
- Blocks are beyond max_parallel_depth from tip (default: 6 blocks)
- Each block uses its own UTXO set snapshot (independent validation)
- Blocks are validated sequentially if too close to tip
Code: mod.rs
Implementation
#![allow(unused)]
fn main() {
pub fn validate_blocks_parallel(
&self,
contexts: &[BlockValidationContext],
depth_from_tip: usize,
network: Network,
) -> Result<Vec<(ValidationResult, UtxoSet)>> {
if depth_from_tip <= self.max_parallel_depth {
return self.validate_blocks_sequential(contexts, network);
}
// Parallel validation using Rayon
use rayon::prelude::*;
contexts.par_iter().map(|context| {
connect_block(&context.block, ...)
}).collect()
}
}
Code: mod.rs
Batch UTXO Operations
Batch Fee Calculation
Transaction fees are calculated in batches by pre-fetching all UTXOs before validation:
- Collect all prevouts from all transactions
- Batch UTXO lookup (single pass through HashMap)
- Cache UTXOs for fee calculation
- Calculate fees using cached UTXOs
Code: block.rs
Implementation
#![allow(unused)]
fn main() {
// Pre-collect all prevouts for batch UTXO lookup
let all_prevouts: Vec<&OutPoint> = block
.transactions
.iter()
.filter(|tx| !is_coinbase(tx))
.flat_map(|tx| tx.inputs.iter().map(|input| &input.prevout))
.collect();
// Batch UTXO lookup (single pass)
let mut utxo_cache: HashMap<&OutPoint, &UTXO> =
HashMap::with_capacity(all_prevouts.len());
for prevout in &all_prevouts {
if let Some(utxo) = utxo_set.get(prevout) {
utxo_cache.insert(prevout, utxo);
}
}
}
Code: block.rs
Configuration
[performance]
enable_batch_utxo_lookups = true
parallel_batch_size = 8
Code: config.rs
Assume-Valid Checkpoints
Overview
Assume-valid checkpoints skip expensive signature verification for blocks before a configured height, providing 10-50x faster IBD.
Code: block.rs
Safety
This optimization is safe because:
- These blocks are already validated by the network
- Block structure, Merkle roots, and PoW are still validated
- Only signature verification is skipped (the expensive operation)
Code: block.rs
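The gating logic above reduces to a height comparison. A minimal sketch, assuming a setting of 0 disables the optimization (the function name is illustrative):

```rust
// Signature verification is skipped only below the assume-valid height;
// a setting of 0 disables the optimization entirely.
pub fn should_verify_signatures(block_height: u64, assume_valid_height: u64) -> bool {
    assume_valid_height == 0 || block_height >= assume_valid_height
}
```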
Configuration
[performance]
assume_valid_height = 700000 # Skip signatures before this height
Environment Variable:
ASSUME_VALID_HEIGHT=700000
Code: block.rs
Performance Impact
- 10-50x faster IBD: Signature verification is the bottleneck
- Safe: Only skips signatures, validates everything else
- Configurable: Can be disabled (set to 0) for maximum safety
Parallel Transaction Validation
Architecture
Within a block, transaction validation is parallelized where safe:
- Phase 1: Parallel Validation (read-only UTXO access)
  - Transaction structure validation
  - Input validation
  - Fee calculation
  - Script verification (read-only)
- Phase 2: Sequential Application (write operations)
  - UTXO set updates
  - State transitions
  - Maintains correctness
Code: block.rs
Implementation
#![allow(unused)]
fn main() {
#[cfg(feature = "rayon")]
{
use rayon::prelude::*;
// Phase 1: Parallel validation (read-only)
let validation_results: Vec<Result<...>> = block
.transactions
.par_iter()
.enumerate()
.map(|(i, tx)| {
// Validate transaction structure (read-only)
check_transaction(tx)?;
// Check inputs and calculate fees (read-only UTXO access)
check_tx_inputs(tx, &utxo_cache, height)?;
Ok(())
})
.collect();
// Phase 2: Sequential application (write operations)
for (tx, validation) in transactions.zip(validation_results) {
apply_transaction(tx, &mut utxo_set)?;
}
}
}
Code: block.rs
Advanced Indexing
Address Indexing
Indexes transactions by address for fast lookup:
- Address Database: Maps addresses to transaction history
- Fast Lookup: O(1) address-to-transaction mapping
- Incremental Updates: Updates on each block
Code: INDEXING_OPTIMIZATIONS.md
Value Range Indexing
Indexes UTXOs by value range for efficient queries:
- Range Queries: Find UTXOs in value ranges
- Optimized Lookups: Faster than scanning all UTXOs
- Memory Efficient: Sparse indexing structure
Runtime Optimizations
Constant Folding
Pre-computed constants avoid runtime computation:
#![allow(unused)]
fn main() {
pub mod precomputed_constants {
pub const U64_MAX: u64 = u64::MAX;
pub const MAX_MONEY_U64: u64 = MAX_MONEY as u64;
pub const BTC_PER_SATOSHI: f64 = 1.0 / (SATOSHIS_PER_BTC as f64);
}
}
Code: optimizations.rs
Bounds Check Optimization
Optimized bounds checking for proven-safe access patterns:
#![allow(unused)]
fn main() {
pub fn get_proven<T>(slice: &[T], index: usize, bound_check: bool) -> Option<&T> {
if bound_check {
slice.get(index)
} else {
// Unsafe only when bounds are statically proven
unsafe { Some(slice.get_unchecked(index)) }
}
}
}
Code: optimizations.rs
Cache-Friendly Memory Layouts
32-byte aligned hash structures for better cache performance:
#![allow(unused)]
fn main() {
#[repr(align(32))]
pub struct CacheAlignedHash([u8; 32]);
}
Code: optimizations.rs
Performance Configuration
Configuration Options
[performance]
# Script verification threads (0 = auto-detect)
script_verification_threads = 0
# Parallel batch size
parallel_batch_size = 8
# Enable SIMD optimizations
enable_simd_optimizations = true
# Enable cache optimizations
enable_cache_optimizations = true
# Enable batch UTXO lookups
enable_batch_utxo_lookups = true
Code: config.rs
Default Values
- script_verification_threads: 0 (auto-detect from CPU count)
- parallel_batch_size: 8 transactions per batch
- enable_simd_optimizations: true
- enable_cache_optimizations: true
- enable_batch_utxo_lookups: true
Code: config.rs
Benchmark Results
Benchmark results are available at benchmarks.thebitcoincommons.org, generated by workflows in blvm-bench.
Performance Improvements
- Parallel Validation: 4-8x speedup for deep blocks
- Batch UTXO Lookups: 2-3x speedup for fee calculation
- Assume-Valid Checkpoints: 10-50x faster IBD
- Cache-Friendly Layouts: 10-30% improvement for hash operations
Components
The performance optimization system includes:
- Parallel block validation
- Batch UTXO operations
- Assume-valid checkpoints
- Parallel transaction validation
- Advanced indexing (address, value range)
- Runtime optimizations (constant folding, bounds checks, cache-friendly layouts)
- Performance configuration
Location: blvm-consensus/src/optimizations.rs, blvm-consensus/src/block.rs, blvm-consensus/src/config.rs, blvm-node/src/validation/mod.rs
See Also
- Node Overview - Node implementation details
- Node Configuration - Performance configuration options
- Benchmarking - Performance benchmarking
- Storage Backends - Storage performance
- Consensus Architecture - Optimization passes
- UTXO Commitments - UTXO proof verification and fast sync
- IBD Bandwidth Protection - IBD bandwidth management
QUIC RPC
Overview
Bitcoin Commons optionally supports JSON-RPC over QUIC using Quinn, providing improved performance and security compared to the standard TCP RPC server. QUIC RPC is an alternative transport protocol that runs alongside the standard TCP RPC server.
Features
- Encryption: Built-in TLS encryption via QUIC
- Multiplexing: Multiple concurrent requests over single connection
- Better Performance: Lower latency, better congestion control
- Backward Compatible: TCP RPC server always available
Code: QUIC_RPC.md
Usage
Basic (TCP Only - Default)
#![allow(unused)]
fn main() {
use blvm_node::rpc::RpcManager;
use std::net::SocketAddr;
let tcp_addr: SocketAddr = "127.0.0.1:8332".parse().unwrap();
let mut rpc_manager = RpcManager::new(tcp_addr);
rpc_manager.start().await?;
}
Code: QUIC_RPC.md
With QUIC Support
#![allow(unused)]
fn main() {
use blvm_node::rpc::RpcManager;
use std::net::SocketAddr;
let tcp_addr: SocketAddr = "127.0.0.1:8332".parse().unwrap();
let quinn_addr: SocketAddr = "127.0.0.1:18332".parse().unwrap();
// Option 1: Create with both transports
#[cfg(feature = "quinn")]
let mut rpc_manager = RpcManager::with_quinn(tcp_addr, quinn_addr);
// Option 2: Enable QUIC after creation
let mut rpc_manager = RpcManager::new(tcp_addr);
#[cfg(feature = "quinn")]
rpc_manager.enable_quinn(quinn_addr);
rpc_manager.start().await?;
}
Code: QUIC_RPC.md
Configuration
Feature Flag
QUIC RPC requires the quinn feature flag:
[dependencies]
blvm-node = { path = "../blvm-node", features = ["quinn"] }
Code: QUIC_RPC.md
Build with QUIC
cargo build --features quinn
Code: QUIC_RPC.md
QUIC RPC Server
Server Implementation
The QuinnRpcServer provides JSON-RPC over QUIC:
- Certificate Generation: Self-signed certificates for development
- Connection Handling: Accepts incoming QUIC connections
- Stream Management: Manages bidirectional streams
- Request Processing: Processes JSON-RPC requests
Code: quinn_server.rs
Certificate Management
QUIC uses TLS certificates:
- Development: Self-signed certificates
- Production: Should use proper certificate management
- Certificate Generation: Automatic certificate generation
Code: quinn_server.rs
Client Usage
QUIC Client
Clients need QUIC support. Example with quinn (requires quinn feature):
#![allow(unused)]
fn main() {
use quinn::Endpoint;
use std::net::SocketAddr;
let server_addr: SocketAddr = "127.0.0.1:18332".parse().unwrap();
let endpoint = Endpoint::client("0.0.0.0:0".parse().unwrap())?;
let connection = endpoint.connect(server_addr, "localhost")?.await?;
// Open bidirectional stream
let (mut send, mut recv) = connection.open_bi().await?;
// Send JSON-RPC request
let request = r#"{"jsonrpc":"2.0","method":"getblockchaininfo","params":[],"id":1}"#;
send.write_all(request.as_bytes()).await?;
send.finish().await?;
// Read response
let mut response = Vec::new();
recv.read_to_end(&mut response).await?;
let response_str = String::from_utf8(response)?;
}
Code: QUIC_RPC.md
Benefits Over TCP
- Encryption: Built-in TLS, no need for separate TLS layer
- Multiplexing: Multiple requests without head-of-line blocking
- Connection Migration: Survives IP changes
- Lower Latency: Better congestion control
- Stream-Based: Natural fit for request/response patterns
Code: QUIC_RPC.md
Limitations
- Bitcoin Core Compatibility: Bitcoin Core only supports TCP RPC
- Client Support: Requires QUIC-capable clients
- Certificate Management: Self-signed certs need proper handling for production
- Network Requirements: Some networks may block UDP/QUIC
Code: QUIC_RPC.md
Security Notes
- Self-Signed Certificates: Uses self-signed certificates for development. Production deployments require proper certificate management.
- Authentication: QUIC provides transport encryption but not application-level auth
- Same Security Boundaries: QUIC RPC has same security boundaries as TCP RPC (no wallet access)
Code: QUIC_RPC.md
When to Use
- High-Performance Applications: When you need better performance than TCP
- Modern Infrastructure: When all clients support QUIC
- Enhanced Security: When you want built-in encryption without extra TLS layer
- Internal Services: When you control both client and server
Code: QUIC_RPC.md
When Not to Use
- Bitcoin Core Compatibility: Need compatibility with Bitcoin Core tooling
- Legacy Clients: Clients that only support TCP/HTTP
- Simple Use Cases: TCP RPC is simpler and sufficient for most cases
Code: QUIC_RPC.md
Components
The QUIC RPC system includes:
- Quinn RPC server implementation
- Certificate generation and management
- Connection and stream handling
- JSON-RPC protocol over QUIC
- Client support examples
Location: blvm-node/src/rpc/quinn_server.rs, blvm-node/docs/QUIC_RPC.md
Developer SDK Overview
The developer SDK (blvm-sdk) provides governance infrastructure and a composition framework for Bitcoin: reusable cryptographic governance primitives, plus tooling for composing alternative Bitcoin implementations from modules.
Architecture Position
Tier 5 of the 6-tier Bitcoin Commons architecture:
1. Orange Paper (mathematical foundation)
2. blvm-consensus (pure math implementation)
3. blvm-protocol (Bitcoin abstraction)
4. blvm-node (full node implementation)
5. blvm-sdk (governance + composition) ← THIS LAYER
6. blvm-commons (governance enforcement)
Core Components
Governance Primitives
Cryptographic primitives for governance operations:
- Key Management: Generate and manage governance keypairs
- Signature Creation: Sign governance messages using Bitcoin-compatible standards
- Signature Verification: Verify signatures and multisig thresholds
- Multisig Logic: Threshold-based collective decision making
- Nested Multisig: Team-based governance with hierarchical multisig support
- Message Formats: Structured messages for releases, approvals, decisions
Code: signatures.rs
CLI Tools
Command-line tools for governance operations:
- blvm-keygen: Generate governance keypairs (PEM, JSON formats)
- blvm-sign: Sign governance messages (releases, approvals)
- blvm-verify: Verify signatures and multisig thresholds
- blvm-compose: Declarative node composition from modules
- blvm-sign-binary: Sign binary files for release verification
- blvm-verify-binary: Verify binary file signatures
- blvm-aggregate-signatures: Aggregate multiple signatures
Code: blvm-keygen.rs
Composition Framework
Declarative node composition from modules:
- Module Registry: Discover and manage available modules
- Lifecycle Management: Load, unload, reload modules at runtime
- Economic Integration: Merge mining revenue distribution
- Dependency Resolution: Automatic module dependency handling
Code: mod.rs
Key Features
Governance Primitives
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{
GovernanceKeypair, GovernanceMessage, Multisig
};
// Generate a keypair
let keypair = GovernanceKeypair::generate()?;
// Create a message to sign
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
// Sign the message
let signature = keypair.sign(&message.to_signing_bytes())?;
// Verify with multisig
let multisig = Multisig::new(6, 7, maintainer_keys)?;
let valid = multisig.verify(&message.to_signing_bytes(), &[signature])?;
}
Multisig Support
Threshold-based signature verification:
- N-of-M Thresholds: Configurable signature requirements (e.g., 6-of-7, see Multisig Configuration)
- Key Management: Maintainer key registration and rotation
- Signature Aggregation: Combine multiple signatures
- Verification: Cryptographic verification of threshold satisfaction
Code: multisig.rs
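Once each individual signature has been cryptographically verified, threshold satisfaction is a counting check over distinct maintainer keys. A minimal sketch, not the multisig.rs API:

```rust
// N-of-M threshold check over per-key verification results.
// In the real flow, each entry reflects a cryptographically verified
// signature from a distinct registered maintainer key.
pub fn threshold_met(valid_signers: &[bool], threshold: usize) -> bool {
    valid_signers.iter().filter(|&&ok| ok).count() >= threshold
}
```

For the 6-of-7 configuration referenced above, six valid signatures satisfy the threshold and five do not.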
Bitcoin-Compatible Signing
Uses Bitcoin message signing standards:
- Message Format: Bitcoin message signing format
- Signature Algorithm: secp256k1 ECDSA
- Hash Function: Double SHA256
- Compatibility: Compatible with Bitcoin Core signing
Code: signatures.rs
Design Principles
- Governance Crypto is Reusable: Clean library API for external consumers
- No GitHub Logic: SDK is pure cryptography + composition, not enforcement
- Bitcoin-Compatible: Uses Bitcoin message signing standards
- Test Everything: Governance crypto needs 100% test coverage
- Document for Consumers: Governance app developers are the customer
What This Is NOT
- NOT a general-purpose Bitcoin library
- NOT the GitHub enforcement engine (that’s blvm-commons)
- NOT handling wallet keys or user funds
- NOT competing with rust-bitcoin or BDK
Usage Examples
CLI Usage
# Generate a keypair
blvm-keygen --output alice.key --format pem
# Sign a release
blvm-sign release \
--version v1.0.0 \
--commit abc123 \
--key alice.key \
--output signature.txt
# Verify signatures
blvm-verify release \
--version v1.0.0 \
--commit abc123 \
--signatures sig1.txt,sig2.txt,sig3.txt,sig4.txt,sig5.txt,sig6.txt \
--threshold 6-of-7 \
--pubkeys keys.json
Library Usage
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{GovernanceKeypair, GovernanceMessage};
// Generate keypair
let keypair = GovernanceKeypair::generate()?;
// Sign message
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
let signature = keypair.sign(&message.to_signing_bytes())?;
}
See Also
- SDK Getting Started - Quick start guide
- API Reference - Complete SDK API documentation
- Module Development - Building modules
- SDK Examples - Usage examples
- Governance Overview - Governance system
SDK Getting Started
The developer SDK (blvm-sdk) provides governance infrastructure and cryptographic primitives for Bitcoin governance operations.
Quick Start
As a Library
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{
GovernanceKeypair, GovernanceMessage, Multisig
};
// Generate a keypair
let keypair = GovernanceKeypair::generate()?;
// Create a message to sign
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
// Sign the message
let signature = keypair.sign(&message.to_signing_bytes())?;
// Verify with multisig
let multisig = Multisig::new(6, 7, maintainer_keys)?;
let valid = multisig.verify(&message.to_signing_bytes(), &[signature])?;
}
CLI Usage
# Generate a keypair
blvm-keygen --output alice.key --format pem
# Sign a release
blvm-sign release \
--version v1.0.0 \
--commit abc123 \
--key alice.key \
--output signature.txt
# Verify signatures
blvm-verify release \
--version v1.0.0 \
--commit abc123 \
--signatures sig1.txt,sig2.txt,sig3.txt,sig4.txt,sig5.txt,sig6.txt \
--threshold 6-of-7 \
--pubkeys keys.json
For more details, see the blvm-sdk README.
See Also
- SDK Overview - SDK introduction and architecture
- API Reference - Complete SDK API documentation
- Module Development - Building modules with the SDK
- SDK Examples - More usage examples
- Governance Overview - Governance system details
Module Development
The BTCDecoded blvm-node includes a process-isolated module system that enables optional features (Lightning, merge mining, privacy enhancements) without affecting consensus or base node stability. Modules run in separate processes with IPC communication, providing security through isolation.
Core Principles
- Process Isolation: Each module runs in a separate process with isolated memory
- API Boundaries: Modules communicate only through well-defined APIs
- Crash Containment: Module failures don’t propagate to the base node
- Consensus Isolation: Modules cannot modify consensus rules, UTXO set, or block validation
- State Separation: Module state is completely separate from consensus state
Communication
Modules communicate with the node via Inter-Process Communication (IPC) over Unix domain sockets. The protocol uses length-delimited binary messages (bincode serialization) with three message types: requests, responses, and events. The connection is persistent for the request/response pattern; events use a pub/sub pattern for real-time notifications.
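Length-delimited framing can be sketched as below. The 4-byte little-endian length prefix is an assumption for illustration; the actual wire format in blvm-node (prefix width, endianness) may differ:

```rust
// Prepend a 4-byte little-endian length prefix to a serialized payload.
pub fn frame(payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(4 + payload.len());
    out.extend_from_slice(&(payload.len() as u32).to_le_bytes());
    out.extend_from_slice(payload);
    out
}

/// Split one frame off the front of a buffer; returns (payload, rest),
/// or None if the buffer does not yet hold a complete frame.
pub fn deframe(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    if buf.len() < 4 {
        return None;
    }
    let len = u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    if buf.len() < 4 + len {
        return None;
    }
    Some((&buf[4..4 + len], &buf[4 + len..]))
}
```

Returning `None` on a partial read lets the caller keep accumulating bytes from the socket until a complete message is available.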
Module Structure
Directory Layout
Each module should be placed in a subdirectory within the modules/ directory:
modules/
└── my-module/
├── Cargo.toml
├── src/
│ └── main.rs
└── module.toml # Module manifest (required)
Module Manifest (module.toml)
Every module must include a module.toml manifest file:
# ============================================================================
# Module Manifest
# ============================================================================
# ----------------------------------------------------------------------------
# Core Identity (Required)
# ----------------------------------------------------------------------------
name = "my-module"
version = "1.0.0"
entry_point = "my-module"
# ----------------------------------------------------------------------------
# Metadata (Optional)
# ----------------------------------------------------------------------------
description = "Description of what this module does"
author = "Your Name <your.email@example.com>"
# ----------------------------------------------------------------------------
# Capabilities
# ----------------------------------------------------------------------------
# Permissions this module requires to function
capabilities = [
"read_blockchain", # Query blockchain data
"subscribe_events", # Receive node events
]
# ----------------------------------------------------------------------------
# Dependencies
# ----------------------------------------------------------------------------
# Required dependencies (module cannot load without these)
[dependencies]
"blvm-lightning" = ">=1.0.0"
# Optional dependencies (module can work without these)
[optional_dependencies]
"blvm-mesh" = ">=0.5.0"
# ----------------------------------------------------------------------------
# Configuration Schema (Optional)
# ----------------------------------------------------------------------------
[config_schema]
poll_interval = "Polling interval in seconds (default: 5)"
Required Fields:
- name: Module identifier (alphanumeric with dashes/underscores)
- version: Semantic version (e.g., “1.0.0”)
- entry_point: Binary name or path
Optional Fields:
- description: Human-readable description
- author: Module author
- capabilities: List of required permissions
- dependencies: Required (hard) dependencies - module cannot load without them
- optional_dependencies: Optional (soft) dependencies - module can work without them
Dependency Version Constraints:
- >=1.0.0 - Greater than or equal to version
- <=2.0.0 - Less than or equal to version
- =1.2.3 - Exact version match
- ^1.0.0 - Compatible version (>=1.0.0 and <2.0.0)
- ~1.2.0 - Patch updates only (>=1.2.0 and <1.3.0)
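The caret and tilde rules can be made concrete with a small sketch. These are hypothetical helper functions for illustration only (the real resolver lives in the module loader), and the special caret behavior for 0.x versions is ignored here.

```rust
// ^MAJOR.MINOR.PATCH: >= requested version and < next major version.
// Simplified sketch; real SemVer caret ranges treat 0.x versions specially.
fn caret_matches(req: (u64, u64, u64), v: (u64, u64, u64)) -> bool {
    v >= req && v.0 == req.0
}

// ~MAJOR.MINOR.PATCH: >= requested version and < next minor version.
fn tilde_matches(req: (u64, u64, u64), v: (u64, u64, u64)) -> bool {
    v >= req && v.0 == req.0 && v.1 == req.1
}

fn main() {
    assert!(caret_matches((1, 0, 0), (1, 9, 3)));  // ^1.0.0 allows 1.9.3
    assert!(!caret_matches((1, 0, 0), (2, 0, 0))); // but not 2.0.0
    assert!(tilde_matches((1, 2, 0), (1, 2, 7)));  // ~1.2.0 allows 1.2.7
    assert!(!tilde_matches((1, 2, 0), (1, 3, 0))); // but not 1.3.0
}
```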
Module Development
Basic Module Structure
A minimal module implements the module lifecycle and connects to the node via IPC. There are two approaches:
Using ModuleIntegration (Recommended)
use blvm_node::module::integration::ModuleIntegration;
use blvm_node::module::EventType;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Parse command-line arguments
let args = Args::parse();
// Connect to node using ModuleIntegration
// Note: socket_path must be PathBuf (convert from String if needed)
let socket_path = std::path::PathBuf::from(&args.socket_path);
let mut integration = ModuleIntegration::connect(
socket_path,
args.module_id.unwrap_or_else(|| "my-module".to_string()),
"my-module".to_string(),
env!("CARGO_PKG_VERSION").to_string(),
).await?;
// Subscribe to events
let event_types = vec![EventType::NewBlock, EventType::NewTransaction];
integration.subscribe_events(event_types).await?;
// Get NodeAPI
let node_api = integration.node_api();
// Get event receiver (broadcast::Receiver returns Result, not Option)
let mut event_receiver = integration.event_receiver();
// Main module loop
loop {
match event_receiver.recv().await {
Ok(ModuleMessage::Event(event_msg)) => {
// Process event
match event_msg.payload {
// Handle specific event types
_ => {}
}
}
Ok(_) => {} // Other message types
Err(tokio::sync::broadcast::error::RecvError::Lagged(skipped)) => {
warn!("Event receiver lagged, skipped {} messages", skipped);
}
Err(tokio::sync::broadcast::error::RecvError::Closed) => {
break; // Channel closed, exit loop
}
}
}
Ok(())
}
Using ModuleIpcClient + NodeApiIpc (Legacy)
use blvm_node::module::ipc::client::ModuleIpcClient;
use blvm_node::module::api::node_api::NodeApiIpc;
use blvm_node::module::ipc::protocol::{RequestMessage, RequestPayload, MessageType};
use std::sync::Arc;
use std::path::PathBuf;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Parse command-line arguments
let args = Args::parse();
// Connect to node IPC socket (PathBuf required)
let socket_path = PathBuf::from(&args.socket_path);
let mut ipc_client = ModuleIpcClient::connect(&socket_path).await?;
// Perform handshake
let correlation_id = ipc_client.next_correlation_id();
let handshake_request = RequestMessage {
correlation_id,
request_type: MessageType::Handshake,
payload: RequestPayload::Handshake {
module_id: "my-module".to_string(),
module_name: "my-module".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
},
};
let response = ipc_client.request(handshake_request).await?;
// Verify handshake response...
// Create NodeAPI wrapper (requires Arc<Mutex<ModuleIpcClient>> and module_id)
let ipc_client_arc = Arc::new(tokio::sync::Mutex::new(ipc_client));
let node_api = Arc::new(NodeApiIpc::new(ipc_client_arc.clone(), "my-module".to_string()));
// Subscribe to events using NodeAPI
let event_types = vec![EventType::NewBlock, EventType::NewTransaction];
let mut event_receiver = node_api.subscribe_events(event_types).await?;
// Main module loop (mpsc::Receiver returns Option)
while let Some(event) = event_receiver.recv().await {
match event {
ModuleMessage::Event(event_msg) => {
// Process event
}
_ => {}
}
}
Ok(())
}
Recommendation: New modules should use ModuleIntegration for simplicity and consistency. The legacy approach is still supported for existing modules but requires more boilerplate code.
Module Lifecycle
Modules receive command-line arguments (--module-id, --socket-path, --data-dir) and configuration via environment variables (MODULE_CONFIG_*). The lifecycle is: Initialization (connect IPC) → Start (subscribe to events) → Running (process events/requests) → Stop (clean shutdown).
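Collecting the MODULE_CONFIG_* variables can be sketched as follows. The prefix is the convention described above; the key normalization (stripping the prefix and lowercasing) is an assumption for illustration.

```rust
use std::collections::HashMap;

// Extract module configuration from environment-style (key, value) pairs.
// Only keys with the MODULE_CONFIG_ prefix are kept; the prefix is stripped
// and the remainder lowercased (normalization is assumed, not specified).
fn module_config<I: Iterator<Item = (String, String)>>(vars: I) -> HashMap<String, String> {
    vars.filter_map(|(k, v)| {
        k.strip_prefix("MODULE_CONFIG_")
            .map(|name| (name.to_lowercase(), v))
    })
    .collect()
}

fn main() {
    // In a real module this would be std::env::vars().
    let vars = vec![
        ("MODULE_CONFIG_POLL_INTERVAL".to_string(), "5".to_string()),
        ("PATH".to_string(), "/usr/bin".to_string()),
    ];
    let config = module_config(vars.into_iter());
    assert_eq!(config.get("poll_interval").map(String::as_str), Some("5"));
    assert_eq!(config.len(), 1); // unrelated variables are ignored
}
```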
Querying Node Data
Modules can query blockchain data through the Node API. Recommended approach: Use NodeAPI methods directly:
#![allow(unused)]
fn main() {
// Get NodeAPI from integration
let node_api = integration.node_api();
// Get current chain tip
let chain_tip = node_api.get_chain_tip().await?;
// Get a block by hash
let block = node_api.get_block(&block_hash).await?;
// Get block header
let header = node_api.get_block_header(&block_hash).await?;
// Get transaction
let tx = node_api.get_transaction(&tx_hash).await?;
// Get UTXO
let utxo = node_api.get_utxo(&outpoint).await?;
// Get chain info
let chain_info = node_api.get_chain_info().await?;
}
Alternative (Low-Level IPC): For advanced use cases, you can use the IPC protocol directly:
#![allow(unused)]
fn main() {
// Note: This requires request_type field in RequestMessage
let request = RequestMessage {
correlation_id: client.next_correlation_id(),
request_type: MessageType::GetChainTip,
payload: RequestPayload::GetChainTip,
};
let response = client.send_request(request).await?;
}
Recommendation: Use NodeAPI methods for simplicity and type safety. Low-level IPC is only needed for custom protocols.
Available NodeAPI Methods:
Blockchain API:
- get_block(hash) - Get block by hash
- get_block_header(hash) - Get block header by hash
- get_transaction(hash) - Get transaction by hash
- has_transaction(hash) - Check if transaction exists
- get_chain_tip() - Get current chain tip hash
- get_block_height() - Get current block height
- get_block_by_height(height) - Get block by height
- get_utxo(outpoint) - Get UTXO by outpoint (read-only)
- get_chain_info() - Get chain information (tip, height, difficulty, etc.)
Mempool API:
- get_mempool_transactions() - Get all transaction hashes in mempool
- get_mempool_transaction(hash) - Get transaction from mempool by hash
- get_mempool_size() - Get mempool size information
- check_transaction_in_mempool(hash) - Check if transaction is in mempool
- get_fee_estimate(target_blocks) - Get fee estimate for target confirmation blocks
Network API:
- get_network_stats() - Get network statistics
- get_network_peers() - Get list of connected peers
Storage API:
- storage_open_tree(name) - Open a storage tree (isolated per module)
- storage_insert(tree_id, key, value) - Insert a key-value pair
- storage_get(tree_id, key) - Get a value by key
- storage_remove(tree_id, key) - Remove a key-value pair
- storage_contains_key(tree_id, key) - Check if key exists
- storage_iter(tree_id) - Iterate over all key-value pairs
- storage_transaction(tree_id, operations) - Execute atomic batch of operations
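The atomic-batch semantics of storage_transaction (all operations apply, or none do) can be illustrated with an in-memory mock. This is not the real API, which goes over IPC; the types here are local stand-ins.

```rust
use std::collections::HashMap;

// Mock batch operations mirroring the storage_transaction concept.
enum Op {
    Insert(String, Vec<u8>),
    Remove(String),
}

// In-memory stand-in for a per-module storage tree.
struct MockTree {
    data: HashMap<String, Vec<u8>>,
}

impl MockTree {
    // Apply a batch atomically: stage against a copy and swap in at the end
    // (the real implementation would abort the whole batch on any error,
    // leaving `data` untouched).
    fn transaction(&mut self, ops: Vec<Op>) {
        let mut staged = self.data.clone();
        for op in ops {
            match op {
                Op::Insert(k, v) => { staged.insert(k, v); }
                Op::Remove(k) => { staged.remove(&k); }
            }
        }
        self.data = staged;
    }
}

fn main() {
    let mut tree = MockTree { data: HashMap::new() };
    tree.transaction(vec![
        Op::Insert("tip".into(), vec![1]),
        Op::Insert("height".into(), vec![2]),
    ]);
    assert_eq!(tree.data.len(), 2);
    tree.transaction(vec![Op::Remove("tip".into())]);
    assert_eq!(tree.data.len(), 1);
}
```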
Filesystem API:
- read_file(path) - Read a file from module’s data directory
- write_file(path, data) - Write data to a file
- delete_file(path) - Delete a file
- list_directory(path) - List directory contents
- create_directory(path) - Create a directory
- get_file_metadata(path) - Get file metadata (size, type, timestamps)
Module Communication API:
- call_module(target_module_id, method, params) - Call an API method on another module
- publish_event(event_type, payload) - Publish an event to other modules
- register_module_api(api) - Register module API for other modules to call
- unregister_module_api() - Unregister module API
- discover_modules() - Discover all available modules
- get_module_info(module_id) - Get information about a specific module
- is_module_available(module_id) - Check if a module is available
RPC API:
- register_rpc_endpoint(method, description) - Register a JSON-RPC endpoint
- unregister_rpc_endpoint(method) - Unregister an RPC endpoint
Timers API:
- register_timer(interval_seconds, callback) - Register a periodic timer
- cancel_timer(timer_id) - Cancel a registered timer
- schedule_task(delay_seconds, callback) - Schedule a one-time task
Metrics API:
- report_metric(metric) - Report a metric to the node
- get_module_metrics(module_id) - Get module metrics
- get_all_metrics() - Get aggregated metrics from all modules
Lightning & Payment API:
- get_lightning_node_url() - Get Lightning node connection info
- get_lightning_info() - Get Lightning node information
- get_payment_state(payment_id) - Get payment state by payment ID
Network Integration API:
- send_mesh_packet_to_module(module_id, packet_data, peer_addr) - Send mesh packet to a module
For complete API reference, see NodeAPI trait.
Subscribing to Events
Modules can subscribe to real-time node events. The approach depends on which integration method you’re using:
Using ModuleIntegration
#![allow(unused)]
fn main() {
// Subscribe to events
let event_types = vec![EventType::NewBlock, EventType::NewTransaction];
integration.subscribe_events(event_types).await?;
// Get event receiver
let mut event_receiver = integration.event_receiver();
// Receive events in main loop (broadcast::Receiver yields Result, not Option)
while let Ok(event) = event_receiver.recv().await {
match event {
ModuleMessage::Event(event_msg) => {
// Handle event
}
_ => {}
}
}
}
Using ModuleClient
#![allow(unused)]
fn main() {
// Subscribe to events
let event_types = vec![EventType::NewBlock, EventType::NewTransaction];
client.subscribe_events(event_types).await?;
// Get event receiver
let mut event_receiver = client.event_receiver();
// Receive events in main loop
while let Some(event) = event_receiver.recv().await {
match event {
ModuleMessage::Event(event_msg) => {
// Handle event
}
_ => {}
}
}
}
Available Event Types:
Core Blockchain Events:
- NewBlock - New block connected to chain
- NewTransaction - New transaction in mempool
- BlockDisconnected - Block disconnected (chain reorg)
- ChainReorg - Chain reorganization occurred
Payment Events:
PaymentRequestCreated, PaymentSettled, PaymentFailed, PaymentVerified, PaymentRouteFound, PaymentRouteFailed, ChannelOpened, ChannelClosed
Mining Events:
BlockMined, BlockTemplateUpdated, MiningDifficultyChanged, MiningJobCreated, ShareSubmitted, MergeMiningReward, MiningPoolConnected, MiningPoolDisconnected
Network Events:
PeerConnected, PeerDisconnected, PeerBanned, MessageReceived, MessageSent, BroadcastStarted, BroadcastCompleted, RouteDiscovered, RouteFailed
Module Lifecycle Events:
ModuleLoaded, ModuleUnloaded, ModuleCrashed, ModuleDiscovered, ModuleInstalled, ModuleUpdated, ModuleRemoved
Configuration & Lifecycle Events:
ConfigLoaded, NodeStartupCompleted, NodeShutdown, NodeShutdownCompleted
Maintenance & Resource Events:
DataMaintenance, MaintenanceStarted, MaintenanceCompleted, HealthCheck, DiskSpaceLow, ResourceLimitWarning
Governance Events:
GovernanceProposalCreated, GovernanceProposalVoted, GovernanceProposalMerged, EconomicNodeRegistered, EconomicNodeVeto, VetoThresholdReached
Consensus Events:
BlockValidationStarted, BlockValidationCompleted, ScriptVerificationStarted, ScriptVerificationCompleted, DifficultyAdjusted, SoftForkActivated
Mempool Events:
MempoolTransactionAdded, MempoolTransactionRemoved, FeeRateChanged
Many more event types exist; for the complete list, see the EventType enum and Event System.
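Conceptually, subscription acts as a filter: the node delivers only the event kinds a module subscribed to. A self-contained sketch using a local mock of a small EventType subset (the real enum lives in blvm-node):

```rust
// Local mock of a few event kinds, for illustration only.
#[derive(Debug, Clone, PartialEq)]
enum EventType {
    NewBlock,
    NewTransaction,
    PeerConnected,
}

// Deliver only the events the module subscribed to.
fn filter_events(subscribed: &[EventType], incoming: Vec<EventType>) -> Vec<EventType> {
    incoming
        .into_iter()
        .filter(|e| subscribed.contains(e))
        .collect()
}

fn main() {
    let subs = vec![EventType::NewBlock];
    let delivered = filter_events(&subs, vec![
        EventType::NewBlock,
        EventType::PeerConnected,
        EventType::NewTransaction,
    ]);
    assert_eq!(delivered, vec![EventType::NewBlock]);
}
```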
Configuration
The module system is configured in the node config (see Node Configuration):
[modules]
enabled = true
modules_dir = "modules"
data_dir = "data/modules"
enabled_modules = [] # Empty = auto-discover all
[modules.module_configs.my-module]
setting1 = "value1"
Modules can also have their own config.toml files, whose values are passed to the module via environment variables.
Security Model
Permissions
Modules operate with whitelist-only access control. Each module declares required capabilities in its manifest. Capabilities use snake_case in module.toml and map to Permission enum variants.
Core Permissions:
- read_blockchain - Access to blockchain data
- read_utxo - Query UTXO set (read-only)
- read_chain_state - Query chain state (height, tip)
- subscribe_events - Subscribe to node events
- send_transactions - Submit transactions to mempool (future: may be restricted)
Additional Permissions:
- read_mempool - Read mempool data
- read_network - Read network data (peers, stats)
- network_access - Send network packets
- read_lightning - Read Lightning network data
- read_payment - Read payment data
- read_storage, write_storage, manage_storage - Storage access
- read_filesystem, write_filesystem, manage_filesystem - Filesystem access
- register_rpc_endpoint - Register RPC endpoints
- manage_timers - Manage timers and scheduled tasks
- report_metrics, read_metrics - Metrics access
- discover_modules - Discover other modules
- publish_events - Publish events to other modules
- call_module - Call other modules’ APIs
- register_module_api - Register module API for other modules to call
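The snake_case-to-variant mapping can be sketched as follows. The variant names and parsing function here are hypothetical; only capability strings from the docs are used, and unknown capabilities are rejected, matching the whitelist-only model.

```rust
// Hypothetical local mirror of two Permission variants, for illustration.
#[derive(Debug, PartialEq)]
enum Permission {
    ReadBlockchain,
    SubscribeEvents,
}

// Map a manifest capability string to its enum variant; anything not on the
// whitelist is rejected at load time.
fn parse_capability(s: &str) -> Option<Permission> {
    match s {
        "read_blockchain" => Some(Permission::ReadBlockchain),
        "subscribe_events" => Some(Permission::SubscribeEvents),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_capability("read_blockchain"), Some(Permission::ReadBlockchain));
    assert_eq!(parse_capability("format_disk"), None); // not whitelisted
}
```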
For complete list, see Permission enum.
Sandboxing
Modules are sandboxed to ensure security:
- Process Isolation: Separate process, isolated memory
- File System: Access limited to module data directory
- Network: No network access (modules can only communicate via IPC)
- Resource Limits: CPU, memory, file descriptor limits (Phase 2+)
Request Validation
All module API requests are validated:
- Permission checks (module has required permission)
- Consensus protection (no consensus-modifying operations)
- Resource limits (rate limiting, Phase 2+)
API Reference
NodeAPI Methods: See Querying Node Data section above for complete list of available methods.
Event Types: See Subscribing to Events section above for complete list of available event types.
Permissions: See Permissions section above for complete list of available permissions.
For detailed API reference, see blvm-node/src/module/ (traits, IPC protocol, Node API, security).
See Also
- SDK Overview - SDK introduction and capabilities
- SDK API Reference - Complete SDK API documentation
- SDK Examples - Module development examples
- Module System Architecture - Module system design
- Module IPC Protocol - IPC communication details
- Modules Overview - Available modules
- Node Configuration - Configuring modules
SDK API Reference
Complete API documentation for the BLVM Developer SDK, including governance primitives and composition framework.
Overview
The BLVM SDK provides two main API categories:
- Governance Primitives: Cryptographic operations for governance (keys, signatures, multisig)
- Composition Framework: Module registry and node composition APIs
For detailed Rust API documentation, see blvm-sdk on docs.rs.
Governance Primitives
Core Types
GovernanceKeypair
Cryptographic keypair for signing governance messages.
#![allow(unused)]
fn main() {
pub struct GovernanceKeypair {
// Private fields
}
}
Methods:
- generate() -> GovernanceResult<Self> - Generate a new random keypair
- from_secret_key(secret_bytes: &[u8]) -> GovernanceResult<Self> - Create from secret key bytes
- public_key(&self) -> PublicKey - Get the public key
- secret_key_bytes(&self) -> [u8; 32] - Get the secret key bytes (32 bytes)
- public_key_bytes(&self) -> [u8; 33] - Get the compressed public key bytes (33 bytes)
Example:
#![allow(unused)]
fn main() {
use blvm_sdk::GovernanceKeypair;
let keypair = GovernanceKeypair::generate()?;
let pubkey = keypair.public_key();
}
PublicKey
Public key for governance operations (Bitcoin-compatible secp256k1).
#![allow(unused)]
fn main() {
pub struct PublicKey {
// Private fields
}
}
Methods:
- from_bytes(bytes: &[u8]) -> GovernanceResult<Self> - Create from bytes
- to_bytes(&self) -> [u8; 33] - Get compressed public key bytes
- to_compressed_bytes(&self) -> [u8; 33] - Get compressed format
- to_uncompressed_bytes(&self) -> [u8; 65] - Get uncompressed format
Signature
Cryptographic signature for governance messages.
#![allow(unused)]
fn main() {
pub struct Signature {
// Private fields
}
}
Methods:
- from_bytes(bytes: &[u8]) -> GovernanceResult<Self> - Create from bytes
- to_bytes(&self) -> [u8; 64] - Get signature bytes (64 bytes)
- to_der_bytes(&self) -> Vec<u8> - Get signature in DER format
GovernanceMessage
Message types that can be signed for governance decisions.
#![allow(unused)]
fn main() {
pub enum GovernanceMessage {
Release {
version: String,
commit_hash: String,
},
ModuleApproval {
module_name: String,
version: String,
},
BudgetDecision {
amount: u64,
purpose: String,
},
}
}
Methods:
- to_signing_bytes(&self) -> Vec<u8> - Convert to bytes for signing
- description(&self) -> String - Get human-readable description
Multisig
Multisig configuration for threshold signatures.
#![allow(unused)]
fn main() {
pub struct Multisig {
// Private fields
}
}
Methods:
- new(threshold: usize, total: usize, public_keys: Vec<PublicKey>) -> GovernanceResult<Self> - Create new multisig (e.g., 3-of-5)
- verify(&self, message: &[u8], signatures: &[Signature]) -> GovernanceResult<bool> - Verify signatures meet threshold
- collect_valid_signatures(&self, message: &[u8], signatures: &[Signature]) -> GovernanceResult<Vec<usize>> - Get indices of valid signatures
- threshold(&self) -> usize - Get threshold (e.g., 3)
- total(&self) -> usize - Get total number of keys (e.g., 5)
- public_keys(&self) -> &[PublicKey] - Get all public keys
- is_valid_signature(&self, signature: &Signature, message: &[u8]) -> GovernanceResult<Option<usize>> - Check if signature is valid and return key index
Example:
#![allow(unused)]
fn main() {
use blvm_sdk::{Multisig, PublicKey};
let multisig = Multisig::new(3, 5, public_keys)?;
let valid = multisig.verify(&message_bytes, &signatures)?;
}
Functions
sign_message
Sign a message with a secret key.
#![allow(unused)]
fn main() {
pub fn sign_message(secret_key: &SecretKey, message: &[u8]) -> GovernanceResult<Signature>
}
Parameters:
- secret_key - The secret key to sign with
- message - The message bytes to sign
Returns: GovernanceResult<Signature> - The signature or an error
verify_signature
Verify a signature against a message and public key.
#![allow(unused)]
fn main() {
pub fn verify_signature(
signature: &Signature,
message: &[u8],
public_key: &PublicKey,
) -> GovernanceResult<bool>
}
Parameters:
- signature - The signature to verify
- message - The message that was signed
- public_key - The public key to verify against
Returns: GovernanceResult<bool> - true if signature is valid
Error Types
GovernanceError
Errors that can occur during governance operations.
#![allow(unused)]
fn main() {
pub enum GovernanceError {
InvalidKey(String),
SignatureVerification(String),
InvalidMultisig(String),
MessageFormat(String),
Cryptographic(String),
Serialization(String),
InvalidThreshold { threshold: usize, total: usize },
InsufficientSignatures { got: usize, need: usize },
InvalidSignatureFormat(String),
}
}
GovernanceResult<T>
Result type alias for governance operations.
#![allow(unused)]
fn main() {
pub type GovernanceResult<T> = Result<T, GovernanceError>;
}
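The structured variants (InvalidThreshold, InsufficientSignatures) carry the numbers needed for actionable error messages. A self-contained sketch, with two variants reproduced locally and a hypothetical describe helper:

```rust
// Two variants of GovernanceError reproduced locally so this compiles alone.
#[derive(Debug)]
enum GovernanceError {
    InvalidThreshold { threshold: usize, total: usize },
    InsufficientSignatures { got: usize, need: usize },
}

// Hypothetical helper: turn a structured error into a user-facing message.
fn describe(err: &GovernanceError) -> String {
    match err {
        GovernanceError::InvalidThreshold { threshold, total } => {
            format!("invalid threshold {}-of-{}", threshold, total)
        }
        GovernanceError::InsufficientSignatures { got, need } => {
            format!("got {} signatures, need {}", got, need)
        }
    }
}

fn main() {
    let err = GovernanceError::InsufficientSignatures { got: 2, need: 3 };
    assert_eq!(describe(&err), "got 2 signatures, need 3");
}
```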
Composition Framework
Module Registry
ModuleRegistry
Manages module discovery, installation, and dependency resolution.
#![allow(unused)]
fn main() {
pub struct ModuleRegistry {
// Private fields
}
}
Methods:
- new<P: AsRef<Path>>(modules_dir: P) -> Self - Create registry for modules directory
- discover_modules(&mut self) -> Result<Vec<ModuleInfo>> - Discover all modules in directory
- get_module(&self, name: &str, version: Option<&str>) -> Result<ModuleInfo> - Get module by name/version
- install_module(&mut self, source: ModuleSource) -> Result<ModuleInfo> - Install module from source
- update_module(&mut self, name: &str, new_version: &str) -> Result<ModuleInfo> - Update module to new version
- remove_module(&mut self, name: &str) -> Result<()> - Remove module
- list_modules(&self) -> Vec<ModuleInfo> - List all installed modules
- resolve_dependencies(&self, module_names: &[String]) -> Result<Vec<ModuleInfo>> - Resolve module dependencies
Example:
#![allow(unused)]
fn main() {
use blvm_sdk::ModuleRegistry;
let mut registry = ModuleRegistry::new("modules");
let modules = registry.discover_modules()?;
let module = registry.get_module("lightning-module", Some("1.0.0"))?;
}
ModuleInfo
Information about a discovered module.
#![allow(unused)]
fn main() {
pub struct ModuleInfo {
pub name: String,
pub version: String,
pub description: String,
pub author: String,
pub capabilities: Vec<String>,
pub dependencies: HashMap<String, String>,
pub entry_point: String,
pub source: ModuleSource,
pub status: ModuleStatus,
pub health: ModuleHealth,
}
}
Node Composition
NodeComposer
Composes nodes from module specifications.
#![allow(unused)]
fn main() {
pub struct NodeComposer {
// Private fields
}
}
Methods:
- new<P: AsRef<Path>>(modules_dir: P) -> Self - Create composer with module registry
- validate_composition(&self, spec: &NodeSpec) -> Result<ValidationResult> - Validate node composition
- generate_config(&self) -> String - Generate node configuration from composition
- registry(&self) -> &ModuleRegistry - Get module registry
- registry_mut(&mut self) -> &mut ModuleRegistry - Get mutable registry
NodeSpec
Specification for a composed node.
#![allow(unused)]
fn main() {
pub struct NodeSpec {
pub network_type: NetworkType,
pub modules: Vec<ModuleSpec>,
pub metadata: NodeMetadata,
}
}
ModuleSpec
Specification for a module in a composed node.
#![allow(unused)]
fn main() {
pub struct ModuleSpec {
pub name: String,
pub version: Option<String>,
pub config: HashMap<String, String>,
pub enabled: bool,
}
}
Module Lifecycle
ModuleLifecycle
Manages module lifecycle (start, stop, restart, health checks).
#![allow(unused)]
fn main() {
pub struct ModuleLifecycle {
// Private fields
}
}
Methods:
- new(registry: ModuleRegistry) -> Self - Create lifecycle manager
- with_module_manager(mut self, manager: Arc<Mutex<ModuleManager>>) -> Self - Attach module manager
- start_module(&mut self, name: &str) -> Result<()> - Start a module
- stop_module(&mut self, name: &str) -> Result<()> - Stop a module
- restart_module(&mut self, name: &str) -> Result<()> - Restart a module
- module_status(&self, name: &str) -> Result<ModuleStatus> - Get module status
- module_health(&self, name: &str) -> Result<ModuleHealth> - Get module health
- registry(&self) -> &ModuleRegistry - Get module registry
ModuleStatus
Module runtime status.
#![allow(unused)]
fn main() {
pub enum ModuleStatus {
Stopped,
Starting,
Running,
Stopping,
Error(String),
}
}
ModuleHealth
Module health information.
#![allow(unused)]
fn main() {
pub struct ModuleHealth {
pub is_healthy: bool,
pub last_heartbeat: Option<SystemTime>,
pub error_count: u64,
pub last_error: Option<String>,
}
}
CLI Tools
blvm-keygen
Generate governance keypairs.
blvm-keygen [OPTIONS]
Options:
-o, --output <OUTPUT> Output file [default: governance.key]
-f, --format <FORMAT> Output format (text, json) [default: text]
--seed <SEED> Generate deterministic keypair from seed
--show-private Show private key in output
blvm-sign
Sign governance messages.
blvm-sign [OPTIONS] <COMMAND>
Options:
-o, --output <OUTPUT> Output file [default: signature.txt]
-f, --format <FORMAT> Output format (text, json) [default: text]
-k, --key <KEY> Private key file
Commands:
release Sign a release message
module Sign a module approval message
budget Sign a budget decision message
blvm-verify
Verify governance signatures and multisig thresholds.
blvm-verify [OPTIONS] <COMMAND>
Options:
-f, --format <FORMAT> Output format (text, json) [default: text]
-s, --signatures <SIGS> Signature files (comma-separated)
--threshold <THRESHOLD> Threshold (e.g., "3-of-5")
--pubkeys <PUBKEYS> Public key files (comma-separated)
Commands:
release Verify a release message
module Verify a module approval message
budget Verify a budget decision message
Usage Examples
Basic Governance Operations
#![allow(unused)]
fn main() {
use blvm_sdk::{GovernanceKeypair, GovernanceMessage, sign_message, verify_signature};
// Generate keypair
let keypair = GovernanceKeypair::generate()?;
// Create message
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
// Sign message
let message_bytes = message.to_signing_bytes();
let signature = sign_message(&keypair.secret_key, &message_bytes)?;
// Verify signature
let verified = verify_signature(&signature, &message_bytes, &keypair.public_key())?;
assert!(verified);
}
Multisig Operations
#![allow(unused)]
fn main() {
use blvm_sdk::{GovernanceKeypair, GovernanceMessage, Multisig, sign_message};
// Generate 5 keypairs for 3-of-5 multisig
let keypairs: Vec<_> = (0..5)
.map(|_| GovernanceKeypair::generate().unwrap())
.collect();
let public_keys: Vec<_> = keypairs.iter()
.map(|kp| kp.public_key())
.collect();
// Create multisig
let multisig = Multisig::new(3, 5, public_keys)?;
// Create message
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
let message_bytes = message.to_signing_bytes();
// Sign with 3 keys
let signatures: Vec<_> = keypairs[0..3]
.iter()
.map(|kp| sign_message(&kp.secret_key_bytes(), &message_bytes).unwrap())
.collect();
// Verify multisig threshold
let verified = multisig.verify(&message_bytes, &signatures)?;
assert!(verified);
}
Module Registry Usage
#![allow(unused)]
fn main() {
use blvm_sdk::ModuleRegistry;
// Create registry
let mut registry = ModuleRegistry::new("modules");
// Discover modules
let modules = registry.discover_modules()?;
println!("Found {} modules", modules.len());
// Get specific module
let module = registry.get_module("lightning-module", Some("1.0.0"))?;
println!("Module: {} v{}", module.name, module.version);
// Resolve dependencies
let deps = registry.resolve_dependencies(&["lightning-module".to_string()])?;
}
See Also
- Module Development - Building modules that use these APIs
- SDK Examples - More usage examples
- API Index - Cross-reference to all APIs
- docs.rs/blvm-sdk - Complete Rust API documentation
SDK Examples
The SDK provides examples for common governance operations and module development.
Complete Governance Workflow
Step 1: Generate Keypairs
Using CLI:
# Generate a keypair
blvm-keygen --output alice.key --format pem
# Generate multiple keypairs for a team
blvm-keygen --output bob.key --format pem
blvm-keygen --output charlie.key --format pem
Using Rust:
#![allow(unused)]
fn main() {
use blvm_sdk::governance::GovernanceKeypair;
// Generate a keypair
let keypair = GovernanceKeypair::generate()?;
// Save to file
keypair.save_to_file("alice.key", blvm_sdk::governance::KeyFormat::Pem)?;
// Get public key
let public_key = keypair.public_key();
println!("Public key: {}", public_key);
}
Step 2: Create a Release Message
Using CLI:
blvm-sign release \
--version v1.0.0 \
--commit abc123def456 \
--key alice.key \
--output alice-signature.txt
Using Rust:
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{GovernanceKeypair, GovernanceMessage};
// Load keypair
let keypair = GovernanceKeypair::load_from_file("alice.key")?;
// Create release message
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123def456".to_string(),
};
// Sign the message
let signature = keypair.sign(&message.to_signing_bytes())?;
// Save signature
std::fs::write("alice-signature.txt", signature.to_string())?;
}
Step 3: Collect Multiple Signatures
# Each maintainer signs independently
blvm-sign release --version v1.0.0 --commit abc123 --key alice.key --output sig1.txt
blvm-sign release --version v1.0.0 --commit abc123 --key bob.key --output sig2.txt
blvm-sign release --version v1.0.0 --commit abc123 --key charlie.key --output sig3.txt
Step 4: Verify Multisig Threshold
Using CLI:
blvm-verify release \
--version v1.0.0 \
--commit abc123 \
--signatures sig1.txt,sig2.txt,sig3.txt \
--threshold 3-of-5 \
--pubkeys maintainers.json
Using Rust:
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{Multisig, GovernanceMessage, PublicKey};
// Load public keys
let pubkeys = vec![
PublicKey::from_file("alice.pub")?,
PublicKey::from_file("bob.pub")?,
PublicKey::from_file("charlie.pub")?,
PublicKey::from_file("dave.pub")?,
PublicKey::from_file("eve.pub")?,
];
// Create multisig (3 of 5 threshold)
let multisig = Multisig::new(3, 5, pubkeys)?;
// Load signatures
let signatures = vec![
load_signature("sig1.txt")?,
load_signature("sig2.txt")?,
load_signature("sig3.txt")?,
];
// Verify
let message = GovernanceMessage::Release {
version: "v1.0.0".to_string(),
commit_hash: "abc123".to_string(),
};
let valid = multisig.verify(&message.to_signing_bytes(), &signatures)?;
if valid {
println!("✓ Multisig verification passed (3/5 signatures)");
} else {
println!("✗ Multisig verification failed");
}
}
Nested Multisig Example
For team-based governance with hierarchical structure:
#![allow(unused)]
fn main() {
use blvm_sdk::governance::{Multisig, NestedMultisig};
// Team 1: 2 of 3 members
let team1_keys = vec![alice_key, bob_key, charlie_key];
let team1 = Multisig::new(2, 3, team1_keys)?;
// Team 2: 2 of 3 members
let team2_keys = vec![dave_key, eve_key, frank_key];
let team2 = Multisig::new(2, 3, team2_keys)?;
// Organization: 2 of 2 teams
let nested = NestedMultisig::new(2, 2, vec![team1, team2])?;
// Verify with signatures from both teams
let valid = nested.verify(&message.to_signing_bytes(), &all_signatures)?;
}
Binary Signing Example
Sign and verify binary files for release verification:
# Sign a binary
blvm-sign-binary \
--file target/release/blvm \
--key maintainer.key \
--output blvm.sig
# Verify binary signature
blvm-verify-binary \
--file target/release/blvm \
--signature blvm.sig \
--pubkey maintainer.pub
For more examples, see the blvm-sdk examples directory.
See Also
- SDK Overview - SDK introduction and architecture
- SDK Getting Started - Quick start guide
- API Reference - Complete SDK API documentation
- Module Development - Building modules with the SDK
- Module System Architecture - Module system design
- Modules Overview - Available modules
Modules Overview
Introduction
The BLVM node uses a modular architecture in which optional features run as separate, process-isolated modules. This extends node functionality without affecting consensus or base-node stability.
Available Modules
The following modules are available for blvm-node:
Core Modules
- Lightning Network Module - Lightning Network payment processing with multiple provider support (LNBits, LDK, Stub), invoice verification, and payment state tracking
- Commons Mesh Module - Payment-gated mesh networking with routing fees, traffic classification, and anti-monopoly protection. Designed to support specialized modules (onion routing, mining pool coordination, messaging) via ModuleAPI
- Stratum V2 Module - Stratum V2 mining protocol support with complete network integration and mining pool management
- Datum Module - DATUM Gateway mining protocol module for Ocean pool integration (works with Stratum V2)
- Mining OS Module - Operating system-level mining optimizations and hardware management
Module System Architecture
All modules run in separate processes with IPC communication (see Module System Architecture for details), providing:
- Process Isolation: Each module runs in isolated memory space
- Crash Containment: Module failures don’t affect the base node
- Consensus Isolation: Modules cannot modify consensus rules or UTXO set
- Security: Modules communicate only through well-defined APIs
For detailed information about the module system architecture, see Module System.
Installing Modules
Modules can be installed in several ways:
Via Cargo
cargo install blvm-lightning
cargo install blvm-mesh
cargo install blvm-stratum-v2
cargo install blvm-datum
cargo install blvm-miningos
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-lightning
cargo blvm-module install blvm-mesh
cargo blvm-module install blvm-stratum-v2
cargo blvm-module install blvm-datum
cargo blvm-module install blvm-miningos
Manual Installation
- Build the module: cargo build --release
- Copy the binary to modules/<module-name>/target/release/
- Create a module.toml manifest in the module directory
- Restart the node or use runtime module loading
Module Configuration
Each module requires a config.toml file in its module directory. See individual module documentation (Lightning, Mesh, Stratum V2, Datum, Mining OS) for configuration options. For blvm-mesh submodules, see the Mesh Module documentation.
Module Lifecycle
Modules can be:
- Loaded at node startup (if enabled in configuration)
- Loaded at runtime via RPC or module manager API
- Unloaded at runtime without affecting the base node
- Reloaded (hot reload) for configuration updates
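For example, enabling a module at node startup is a configuration change. A minimal fragment, following the [modules.<name>] convention used by blvm-node elsewhere in this documentation (exact keys vary by module):

```toml
# Enable the Lightning module at node startup (illustrative fragment)
[modules.blvm-lightning]
enabled = true
```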
See Also
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
- SDK Overview - SDK introduction and capabilities
- SDK API Reference - Complete SDK API documentation
- SDK Examples - Module development examples
- Node Configuration - Node-level module configuration
Lightning Network Module
Overview
The Lightning Network module (blvm-lightning) handles Lightning Network payment processing for blvm-node: invoice verification, payment routing, channel management, and payment state tracking. For information on developing custom modules, see Module Development.
Features
- Invoice Verification: Validates Lightning Network invoices (BOLT11) using multiple provider backends
- Payment Processing: Processes Lightning payments via LNBits API or LDK
- Provider Abstraction: Supports multiple Lightning providers (LNBits, LDK, Stub) with unified interface
- Payment State Tracking: Monitors payment lifecycle from request to settlement
Installation
Via Cargo
cargo install blvm-lightning
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-lightning
Manual Installation
- Clone the repository:
git clone https://github.com/BTCDecoded/blvm-lightning.git
cd blvm-lightning
- Build the module:
cargo build --release
- Install to node modules directory:
mkdir -p /path/to/node/modules/blvm-lightning/target/release
cp target/release/blvm-lightning /path/to/node/modules/blvm-lightning/target/release/
Configuration
The module supports multiple Lightning providers. Create a config.toml file in the module directory:
LNBits Provider (Recommended)
[lightning]
provider = "lnbits"
[lightning.lnbits]
api_url = "https://lnbits.example.com"
api_key = "your_lnbits_api_key"
wallet_id = "optional_wallet_id" # Optional
LDK Provider (Rust-native)
[lightning]
provider = "ldk"
[lightning.ldk]
data_dir = "data/ldk"
network = "testnet" # or "mainnet" or "regtest"
node_private_key = "hex_encoded_private_key" # Optional, will generate if not provided
Stub Provider (Testing)
[lightning]
provider = "stub"
Configuration Options
- provider (required): Lightning provider to use ("lnbits", "ldk", or "stub")
- LNBits: api_url, api_key, wallet_id (optional)
- LDK: data_dir, network, node_private_key (optional)
- Stub: No additional configuration needed
Provider Comparison
| Feature | LNBits | LDK | Stub |
|---|---|---|---|
| Status | ✅ Production-ready | ✅ Fully implemented | ✅ Testing |
| API Type | REST (HTTP) | Rust-native (lightning-invoice) | None |
| Real Lightning | ✅ Yes | ✅ Yes | ❌ No |
| External Service | ✅ Yes | ❌ No | ❌ No |
| Invoice Creation | ✅ Via API | ✅ Native | ✅ Mock |
| Payment Verification | ✅ Via API | ✅ Native | ✅ Mock |
| Best For | Payment processing | Full control, Rust-native | Testing |
Switching Providers: All providers implement the same interface, so switching providers is just a configuration change. No code changes required.
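Since every provider implements the same interface, a provider swap is purely configuration. A minimal sketch of what such a unified interface could look like (trait and function names here are hypothetical, not the actual blvm-lightning API; the real interface is async and richer):

```rust
// Hypothetical unified provider trait; names are illustrative, not the
// actual blvm-lightning API.
trait LightningProvider {
    fn verify_payment(&self, invoice: &str, payment_id: &str) -> Result<bool, String>;
}

struct StubProvider;

impl LightningProvider for StubProvider {
    // The stub provider accepts everything; useful only for testing.
    fn verify_payment(&self, _invoice: &str, _payment_id: &str) -> Result<bool, String> {
        Ok(true)
    }
}

// Selecting a provider from configuration reduces to a match on the
// configured provider string.
fn make_provider(name: &str) -> Result<Box<dyn LightningProvider>, String> {
    match name {
        "stub" => Ok(Box::new(StubProvider)),
        other => Err(format!("unknown provider: {other}")),
    }
}

fn main() {
    let provider = make_provider("stub").unwrap();
    assert!(provider.verify_payment("lnbc1...", "payment_id_1").unwrap());
}
```

Callers depend only on the trait, which is why no code changes are needed when switching between LNBits, LDK, and Stub.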
Module Manifest
The module includes a module.toml manifest (see Module Development):
name = "blvm-lightning"
version = "0.1.0"
description = "Lightning Network payment processor"
author = "Bitcoin Commons Team"
entry_point = "blvm-lightning"
capabilities = [
"read_blockchain",
"subscribe_events",
]
Events
Subscribed Events
The module subscribes to the following node events:
- PaymentRequestCreated - New payment request created
- PaymentSettled - Payment confirmed on-chain
- PaymentFailed - Payment failed
Published Events
The module publishes the following events:
- PaymentVerified - Lightning payment verified
- PaymentRouteFound - Payment route discovered
- PaymentRouteFailed - Payment routing failed
- ChannelOpened - Lightning channel opened
- ChannelClosed - Lightning channel closed
Usage
Once installed and configured, the module automatically:
- Subscribes to payment-related events from the node (PaymentRequestCreated, PaymentSettled, PaymentFailed)
- Verifies Lightning invoices (BOLT11) when payment requests are created
- Processes payments using the configured provider (LNBits, LDK, or Stub)
- Publishes payment verification and status events (PaymentVerified, PaymentRouteFound, PaymentRouteFailed)
- Monitors payment lifecycle and publishes status events
The module automatically selects the provider based on configuration. All providers implement the same interface, so switching providers requires only a configuration change.
Provider Selection
The module uses the LightningProcessor to handle payment processing. The processor:
- Reads provider configuration from lightning.provider
- Creates the appropriate provider instance (LNBits, LDK, or Stub)
- Routes all payment operations through the provider interface
- Stores provider configuration in module storage for persistence
Batch Payment Verification
The module supports batch payment verification for improved performance when processing multiple payments:
use blvm_lightning::processor::LightningProcessor;

// `processor` is an existing LightningProcessor instance
// Verify multiple payments in parallel
let payments = vec![
    ("invoice1", "payment_id_1"),
    ("invoice2", "payment_id_2"),
    ("invoice3", "payment_id_3"),
];
let results = processor.verify_payments_batch(&payments).await?;
// Returns Vec<bool> with verification results in the same order as the inputs
Batch verification processes all payments concurrently, significantly improving throughput for high-volume payment processing scenarios.
API Integration
The module integrates with the node via ModuleClient and NodeApiIpc:
- Read-only blockchain access: Queries blockchain data for payment verification
- Event subscription: Receives real-time events from the node
- Event publication: Publishes Lightning-specific events
- Module storage: Stores provider configuration and channel statistics in the lightning_config module storage tree
Storage Usage
The module uses module storage to persist configuration and statistics:
- provider_type: Current provider type (lnbits, ldk, stub)
- channel_count: Number of active Lightning channels
- total_capacity_sats: Total channel capacity in satoshis
Troubleshooting
Module Not Loading
- Verify the module binary exists at the correct path
- Check the module.toml manifest is present and valid
- Verify the module has the required capabilities
- Check node logs for module loading errors
Payment Verification Failing
- LNBits Provider: Verify API URL and API key are correct, check LNBits service is accessible
- LDK Provider: Verify data directory permissions, check network configuration (mainnet/testnet/regtest)
- General: Verify the module has the read_blockchain capability, check node logs for detailed error messages
Provider-Specific Issues
- LNBits: Check API endpoint is accessible, verify wallet_id if specified, check API rate limits
- LDK: Verify data directory exists and is writable, check network matches node configuration
- Stub: No real verification - only for testing
Repository
- GitHub: blvm-lightning
- Version: 0.1.0
See Also
- Module System Overview - Overview of all available modules
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
- SDK Overview - SDK introduction and capabilities
- SDK API Reference - Complete SDK API documentation
- SDK Examples - Module development examples
Commons Mesh Module
Overview
The Commons Mesh module (blvm-mesh) provides payment-gated mesh networking for blvm-node. It implements the Commons Mesh routing protocol with fee distribution, traffic classification, and anti-monopoly protection. For information on developing custom modules, see Module Development.
Features
- Payment-Gated Routing: Routes traffic based on payment verification
- Traffic Classification: Distinguishes between free and paid traffic
- Fee Distribution: Distributes routing fees (60% destination, 30% routers, 10% treasury)
- Anti-Monopoly Protection: Prevents single entity from dominating routing
- Network State Tracking: Monitors mesh network topology and state
Installation
Via Cargo
cargo install blvm-mesh
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-mesh
Manual Installation
- Clone the repository:
git clone https://github.com/BTCDecoded/blvm-mesh.git
cd blvm-mesh
- Build the module:
cargo build --release
- Install to node modules directory:
mkdir -p /path/to/node/modules/blvm-mesh/target/release
cp target/release/blvm-mesh /path/to/node/modules/blvm-mesh/target/release/
Configuration
Create a config.toml file in the module directory:
[mesh]
# Enable/disable module
enabled = true
# Mesh networking mode
# Options: "bitcoin_only", "payment_gated", "open"
mode = "payment_gated"
# Network listening address
listen_addr = "0.0.0.0:8334"
Configuration Options
- enabled (default: true): Enable or disable the module
- mode (default: "payment_gated"): Mesh networking mode
  - "bitcoin_only": Bitcoin-only routing (no payment gating)
  - "payment_gated": Payment-gated routing (default)
  - "open": Open routing (no payment required)
- listen_addr (default: "0.0.0.0:8334"): Network address to listen on
Module Manifest
The module includes a module.toml manifest (see Module Development):
name = "blvm-mesh"
version = "0.1.0"
description = "Commons Mesh networking module"
author = "Bitcoin Commons Team"
entry_point = "blvm-mesh"
capabilities = [
"read_blockchain",
"subscribe_events",
]
Events
Subscribed Events
The module subscribes to 46+ node events including:
- Network Events: PeerConnected, MessageReceived, PeerDisconnected
- Payment Events: PaymentRequestCreated, PaymentVerified, PaymentSettled
- Chain Events: NewBlock, ChainTipUpdated, BlockDisconnected
- Mempool Events: MempoolTransactionAdded, FeeRateChanged, MempoolTransactionRemoved
Published Events
The module publishes the following events:
- RouteDiscovered - Payment route discovered through mesh network
- RouteFailed - Payment route discovery failed
- PaymentVerified - Payment verified for mesh routing
Routing Fee Distribution
Mesh routing fees are distributed as follows:
- 60% to destination node
- 30% to routing nodes (distributed proportionally based on route length)
- 10% to Commons treasury
Fee calculation is performed by the RoutingTable::calculate_fee() method, which takes into account:
- Route length (number of hops)
- Base routing cost
- Payment amount (for percentage-based fees)
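As a rough sketch, the 60/30/10 split described above amounts to the following (the function name is hypothetical; the real logic lives in RoutingTable::calculate_fee() and weights routers by route length rather than splitting evenly):

```rust
// Illustrative 60/30/10 mesh routing fee split. Integer satoshi math;
// the real blvm-mesh implementation weights routers by route position.
fn split_fee(total_fee_sats: u64, num_routers: u64) -> (u64, u64, u64) {
    let destination = total_fee_sats * 60 / 100; // 60% to destination node
    let treasury = total_fee_sats * 10 / 100;    // 10% to Commons treasury
    // Routers share the remainder (~30%); split evenly here for simplicity.
    let routers_total = total_fee_sats - destination - treasury;
    let per_router = if num_routers > 0 { routers_total / num_routers } else { 0 };
    (destination, per_router, treasury)
}

fn main() {
    // A 1000-sat fee over a 3-router path: 600 / 3x100 / 100.
    let (dest, per_router, treasury) = split_fee(1000, 3);
    assert_eq!((dest, per_router, treasury), (600, 100, 100));
}
```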
Anti-Monopoly Protection
The module implements anti-monopoly protections to prevent single entities from dominating routing:
- Maximum routing share limits: Per-entity limits on routing market share
- Diversification requirements: Routing paths must include multiple entities
- Fee distribution mechanisms: Fee distribution favors decentralized routing paths
- Route quality scoring: Routes are scored based on decentralization metrics
These protections are enforced by the RoutingPolicyEngine and RoutingTable components.
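A per-entity routing share cap, the first protection listed above, can be sketched as follows (illustrative only; the actual checks live in the RoutingPolicyEngine and use real traffic accounting):

```rust
use std::collections::HashMap;

// Hypothetical per-entity routing-share cap check. `routed` maps an
// entity identifier to the traffic it has carried (arbitrary units).
fn exceeds_share_cap(routed: &HashMap<String, u64>, entity: &str, cap_percent: u64) -> bool {
    let total: u64 = routed.values().sum();
    if total == 0 {
        return false; // no traffic yet, nothing to cap
    }
    let entity_share = routed.get(entity).copied().unwrap_or(0);
    entity_share * 100 / total > cap_percent
}

fn main() {
    let mut routed = HashMap::new();
    routed.insert("entity-a".to_string(), 70);
    routed.insert("entity-b".to_string(), 30);
    // entity-a carries 70% of traffic, above a 50% cap
    assert!(exceeds_share_cap(&routed, "entity-a", 50));
    assert!(!exceeds_share_cap(&routed, "entity-b", 50));
}
```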
Usage
Once installed and configured, the module automatically:
- Subscribes to network, payment, chain, and mempool events
- Classifies traffic as free or paid based on payment verification
- Routes traffic through the mesh network with payment gating
- Distributes routing fees according to the fee distribution model
- Tracks network topology and publishes routing events
Architecture
Core Infrastructure
blvm-mesh provides the core infrastructure layer for payment-gated routing. The module exposes a ModuleAPI that allows other modules to build specialized functionality on top of the mesh infrastructure. This separation of concerns makes the system composable and allows each module to focus on its domain.
Core Components
MeshManager
Central coordinator for mesh networking operations:
- Payment-gated routing: Routes traffic based on payment verification
- Protocol detection: Detects protocol from packet headers
- Route discovery: Finds routes through mesh network
- Replay prevention: Prevents payment proof replay attacks
- Payment verification: Verifies Lightning and CTV payments
- Routing table management: Manages mesh network topology
Code: manager.rs
PaymentVerifier
Verifies payment proofs for mesh routing:
- Lightning payments: Verifies BOLT11 invoices with preimages via NodeAPI
- CTV payments: Verifies covenant proofs for instant settlement (requires CTV feature flag)
- Expiry checking: Validates payment proof timestamps to prevent expired proofs
- Amount verification: Confirms payment amount matches routing requirements
- NodeAPI integration: Uses NodeAPI to query blockchain for payment verification
Code: verifier.rs
ReplayPrevention
Prevents payment proof replay attacks:
- Hash-based tracking: Tracks payment proof hashes to detect replays
- Sequence numbers: Uses sequence numbers for additional replay protection
- Expiry cleanup: Removes expired payment proof hashes (24-hour expiry)
- Lock-free reads: Uses DashMap for concurrent access without blocking
Code: replay.rs
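The hash-tracking-with-expiry idea can be sketched in a few lines (illustrative only; the real ReplayPrevention uses DashMap for lock-free concurrent reads, a 24-hour expiry, and also checks sequence numbers):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Minimal sketch of hash-based replay tracking with expiry.
// blvm-mesh's actual ReplayPrevention uses DashMap instead of HashMap.
struct ReplayTracker {
    seen: HashMap<[u8; 32], Instant>,
    expiry: Duration,
}

impl ReplayTracker {
    fn new(expiry: Duration) -> Self {
        Self { seen: HashMap::new(), expiry }
    }

    /// Returns true if the proof hash is fresh (and records it),
    /// false if it has been seen before (a replay).
    fn check_and_record(&mut self, proof_hash: [u8; 32]) -> bool {
        let now = Instant::now();
        // Drop expired entries so the map stays bounded.
        self.seen.retain(|_, t| now.duration_since(*t) < self.expiry);
        if self.seen.contains_key(&proof_hash) {
            return false; // replayed payment proof
        }
        self.seen.insert(proof_hash, now);
        true
    }
}

fn main() {
    let mut tracker = ReplayTracker::new(Duration::from_secs(24 * 60 * 60));
    let hash = [7u8; 32];
    assert!(tracker.check_and_record(hash));  // first use accepted
    assert!(!tracker.check_and_record(hash)); // replay rejected
}
```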
RoutingTable
Manages mesh network routing:
- Direct peers: Tracks direct connections using DashMap for lock-free concurrent access
- Multi-hop routes: Discovers routes through intermediate nodes using distance-vector routing
- Fee calculation: Calculates routing fees (60/30/10 split) based on route length and payment amount
- Route discovery: Finds optimal paths through the network with route quality scoring
- Route expiry: Routes expire after 1 hour (configurable)
- Route caching: Caches discovered routes for performance
Code: routing.rs
RouteDiscovery
Implements route discovery protocol:
- Distance vector routing: Simple, scalable routing algorithm
- Route requests: Broadcasts route requests to find paths
- Route responses: Collects route responses from network
- Route advertisements: Advertises known routes to neighbors
- Timeout handling: 30-second timeout for route discovery
- Maximum hops: 10 hops maximum for route discovery
Code: discovery.rs
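A toy distance-vector step illustrates the selection rule: among neighbors advertising a route to the destination, keep the one with the fewest hops, subject to the hop cap (names are illustrative, not the blvm-mesh API):

```rust
// Toy distance-vector selection step. `ads` maps a neighbor to the hop
// count it advertises to the destination; our own hop count is theirs + 1.
// Function and parameter names are illustrative, not the blvm-mesh API.
fn best_next_hop<'a>(ads: &[(&'a str, u8)], max_hops: u8) -> Option<(&'a str, u8)> {
    ads.iter()
        .copied()
        .filter(|&(_, hops)| hops + 1 <= max_hops) // enforce the hop cap
        .min_by_key(|&(_, hops)| hops)             // shortest advertised path
        .map(|(neighbor, hops)| (neighbor, hops + 1))
}

fn main() {
    let ads = [("peer-a", 3), ("peer-b", 1), ("peer-c", 12)];
    // The 10-hop cap mirrors the discovery limit noted above;
    // peer-c's 12-hop advertisement is rejected.
    assert_eq!(best_next_hop(&ads, 10), Some(("peer-b", 2)));
    assert_eq!(best_next_hop(&[("peer-c", 12)], 10), None);
}
```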
RoutingPolicyEngine
Determines routing policy based on protocol and configuration:
- Protocol detection: Identifies protocol from packet headers
- Policy determination: Decides if routing requires payment
- Mode support: Bitcoin-only, payment-gated, or open routing
Code: routing_policy.rs
ModuleAPI
Overview
blvm-mesh exposes a ModuleAPI that other modules can call via inter-module IPC. This allows specialized modules to use mesh routing without implementing routing logic themselves.
Code: api.rs
Available Methods
send_packet
Send a packet through the mesh network.
Request: SendPacketRequest
- destination: NodeId - 32-byte destination node ID
- payload: Vec<u8> - Packet payload
- payment_proof: Option<PaymentProof> - Required for paid routing
- protocol_id: Option<String> - Optional protocol identifier
- ttl: Option<u64> - Time-to-live (seconds)
Response: SendPacketResponse
- success: bool - Whether packet was sent successfully
- packet_id: [u8; 32] - Unique packet ID
- route_length: usize - Number of hops
- estimated_cost_sats: u64 - Total routing cost
- error: Option<String> - Error message if failed
discover_route
Find a route to a destination node.
Request: DiscoverRouteRequest
- destination: NodeId - Destination node ID
- max_hops: Option<u8> - Maximum route length
- timeout_seconds: Option<u64> - Discovery timeout
Response: DiscoverRouteResponse
- route: Option<Vec<NodeId>> - Route path (None if not found)
- route_cost_sats: u64 - Estimated routing cost
- discovery_time_ms: u64 - Time taken to discover
register_protocol_handler
Register a protocol handler for incoming packets.
Request: RegisterProtocolRequest
- protocol_id: String - Protocol identifier (e.g., "onion-v1", "mining-pool-v1")
- handler_method: String - Module method to call when a packet arrives
Response: RegisterProtocolResponse
- success: bool - Whether registration succeeded
get_routing_stats
Get routing statistics.
Response: MeshStats
- enabled: bool - Whether mesh is enabled
- mode: MeshMode - Current mesh mode
- routing: RoutingStats - Routing statistics
- replay: ReplayStats - Replay prevention statistics
get_node_id
Get the mesh module’s node ID.
Response: NodeId - 32-byte node ID
Building on Mesh Infrastructure
The blvm-mesh module exposes a ModuleAPI that allows other modules to build specialized functionality on top of the core mesh infrastructure. Specialized modules can use the mesh routing system via inter-module IPC.
Using the Mesh ModuleAPI
Modules can call the mesh ModuleAPI in two ways:
Option 1: Direct NodeAPI Call
use blvm_node::module::traits::NodeAPI;
use blvm_mesh::api::{SendPacketRequest, SendPacketResponse};

// Call mesh module API directly
let request = SendPacketRequest {
    destination: target_node_id,
    payload: packet_data,
    payment_proof: Some(payment),
    protocol_id: Some("onion-v1".to_string()),
    ttl: Some(300),
};
let response_data = node_api
    .call_module(Some("blvm-mesh"), "send_packet", bincode::serialize(&request)?)
    .await?;
let response: SendPacketResponse = bincode::deserialize(&response_data)?;
Option 2: MeshClient Helper (Recommended)
For convenience, the mesh module provides a MeshClient API wrapper that handles serialization:
use blvm_mesh::MeshClient;

// Create mesh client
let mesh_client = MeshClient::new(node_api, "blvm-mesh".to_string());

// Send packet
let response = mesh_client
    .send_packet("my-module-id", destination, payload, payment_proof, Some("onion-v1".to_string()))
    .await?;

// Discover route
let route = mesh_client
    .discover_route("my-module-id", destination, Some(10))
    .await?;

// Register protocol handler
mesh_client
    .register_protocol_handler("my-module-id", "onion-v1".to_string(), "handle_packet".to_string())
    .await?;
Code: client_api.rs
Example Use Cases
Specialized modules can be built to use blvm-mesh for:
- Onion Routing: Multi-layer encrypted packets with anonymous routing (inspired by Tor Project)
- Mining Pool Coordination: Decentralized mining pool operations via mesh
- P2P Messaging: Payment-gated messaging over mesh network
Integration Pattern
Any module can integrate with blvm-mesh by:
- Using MeshClient: Create a MeshClient instance with MeshClient::new(node_api, "blvm-mesh".to_string())
- Registering a protocol: Call mesh_client.register_protocol_handler() to register a protocol identifier (e.g., "onion-v1", "mining-pool-v1", "messaging-v1")
- Sending packets: Use mesh_client.send_packet() to route packets through the mesh network
- Discovering routes: Use mesh_client.discover_route() to find routes to destination nodes
- Receiving packets: Handle incoming packets via the registered protocol handler method
Implementation Details
The mesh module provides both internal routing via MeshManager and external API access via MeshModuleAPI:
- Internal routing: Processes incoming mesh packets via handle_packet, routes packets through the mesh network, verifies payments, and manages routing tables
- External API: Exposes MeshModuleAPI for other modules to call via inter-module IPC, providing methods for sending packets, discovering routes, and registering protocol handlers
- ModuleIntegration: Uses the new ModuleIntegration API for IPC communication, replacing the old ModuleClient and NodeApiIpc approach
For detailed information on the mesh implementation, see the API.md documentation. For developing modules that integrate with mesh routing, see Module Development.
API Integration
The module integrates with the node via ModuleIntegration:
- ModuleIntegration: Uses ModuleIntegration::connect() for IPC communication (replaces the old ModuleClient and NodeApiIpc)
- NodeAPI access: Gets NodeAPI via integration.node_api() for blockchain queries and payment verification
- Event subscription: Subscribes to events via integration.subscribe_events() and receives them via integration.event_receiver()
- Event publication: Publishes mesh-specific events via NodeAPI
- Inter-module IPC: Exposes ModuleAPI for other modules to call via node_api.call_module()
Troubleshooting
Module Not Loading
- Verify the module binary exists at the correct path
- Check the module.toml manifest is present and valid
- Verify the module has the required capabilities
- Check node logs for module loading errors
Routing Not Working
- Verify mesh mode is correctly configured (bitcoin_only, payment_gated, or open)
- Check the network listening address is accessible and not blocked by a firewall
- Verify payment verification is working (if using payment-gated mode)
- Check node logs for routing errors
- Verify peers are connected and routing table has entries
- Check replay prevention isn’t blocking valid packets
Payment Verification Issues
- Verify Lightning node is accessible (if using Lightning payments)
- Check CTV covenant proofs are valid (if using CTV payments)
- Verify payment proof timestamps are not expired
- Check payment amounts match routing requirements
Repository
External Resources
- Tor Project: https://www.torproject.org/ - Inspiration for onion routing concepts used in mesh submodules
- Tor Documentation: Tor Project Documentation - Tor network documentation and technical details
See Also
- Module System Overview - Overview of all available modules
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
- SDK Overview - SDK introduction and capabilities
- SDK API Reference - Complete SDK API documentation
Stratum V2 Module
Overview
The Stratum V2 module (blvm-stratum-v2) implements Stratum V2 mining protocol support for blvm-node: Stratum V2 server implementation, mining pool management, and mining job distribution. For information on developing custom modules, see Module Development.
Note: Merge mining is available as a separate paid plugin module (blvm-merge-mining) that integrates with the Stratum V2 module. It is not built into the Stratum V2 module itself.
Features
- Stratum V2 Server: Full Stratum V2 protocol server implementation
- Mining Pool Management: Manages connections to mining pools
- Mining Job Distribution: Distributes mining jobs to connected miners
- Network Integration: Fully integrated with node network layer (messages routed automatically)
Installation
Via Cargo
cargo install blvm-stratum-v2
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-stratum-v2
Manual Installation
- Clone the repository:
git clone https://github.com/BTCDecoded/blvm-stratum-v2.git
cd blvm-stratum-v2
- Build the module:
cargo build --release
- Install to node modules directory:
mkdir -p /path/to/node/modules/blvm-stratum-v2/target/release
cp target/release/blvm-stratum-v2 /path/to/node/modules/blvm-stratum-v2/target/release/
Configuration
Create a config.toml file in the module directory:
[stratum_v2]
# Enable/disable module
enabled = true
# Network listening address for Stratum V2 server
listen_addr = "0.0.0.0:3333"
# Mining pool URL (for pool mode)
pool_url = "stratum+tcp://pool.example.com:3333"
Configuration Options
- enabled (default: true): Enable or disable the module
- listen_addr (default: "0.0.0.0:3333"): Network address to listen on for the Stratum V2 server
- pool_url (optional): Mining pool URL when operating in pool mode
Module Manifest
The module includes a module.toml manifest (see Module Development):
name = "blvm-stratum-v2"
version = "0.1.0"
description = "Stratum V2 mining protocol module"
author = "Bitcoin Commons Team"
entry_point = "blvm-stratum-v2"
capabilities = [
"read_blockchain",
"subscribe_events",
]
Events
Subscribed Events
The module subscribes to the following node events:
- BlockMined - Block successfully mined
- BlockTemplateUpdated - New block template available
- MiningDifficultyChanged - Mining difficulty changed
- ChainTipUpdated - Chain tip updated (new block)
Published Events
The module publishes the following events:
- MiningJobCreated - New mining job created
- ShareSubmitted - Mining share submitted
- MiningPoolConnected - Connected to mining pool
- MiningPoolDisconnected - Disconnected from mining pool
Note: Merge mining events (such as MergeMiningReward) are published by the separate blvm-merge-mining module, not by this module.
Stratum V2 Protocol
The module implements the Stratum V2 protocol specification, providing:
- Binary Protocol: 50-66% bandwidth savings compared to Stratum V1
- TLV Encoding: Tag-Length-Value encoding for efficient message serialization
- Encrypted Communication: TLS/QUIC encryption for secure connections
- Multiplexed Channels: QUIC stream multiplexing for multiple mining streams
- Template Distribution: Efficient block template distribution
- Share Submission: Optimized share submission protocol
- Channel Management: Multiple mining channels per connection
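The Tag-Length-Value framing mentioned above can be illustrated with a simplified encoder/decoder (the field widths here, u16 tag and u32 length in little-endian, are an assumption for illustration; consult the Stratum V2 specification for the exact wire format):

```rust
// Simplified TLV framing sketch. Field widths are illustrative, not the
// exact Stratum V2 wire format.
fn encode_tlv(tag: u16, value: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(6 + value.len());
    out.extend_from_slice(&tag.to_le_bytes());                  // 2-byte tag
    out.extend_from_slice(&(value.len() as u32).to_le_bytes()); // 4-byte length
    out.extend_from_slice(value);                               // value bytes
    out
}

fn decode_tlv(buf: &[u8]) -> Option<(u16, &[u8])> {
    if buf.len() < 6 {
        return None; // too short to hold tag + length
    }
    let tag = u16::from_le_bytes([buf[0], buf[1]]);
    let len = u32::from_le_bytes([buf[2], buf[3], buf[4], buf[5]]) as usize;
    buf.get(6..6 + len).map(|value| (tag, value))
}

fn main() {
    let encoded = encode_tlv(0x0001, b"job-1");
    let (tag, value) = decode_tlv(&encoded).unwrap();
    assert_eq!(tag, 0x0001);
    assert_eq!(value, b"job-1");
}
```

Binary framing like this is one reason Stratum V2 achieves large bandwidth savings over the JSON-based Stratum V1.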
Protocol Components
- Server: StratumV2Server manages connections and job distribution
- Pool: StratumV2Pool manages miners, channels, and share validation
- Template Generator: BlockTemplateGenerator creates block templates from the mempool
- Protocol Parser: Handles TLV-encoded Stratum V2 messages
For detailed information about the Stratum V2 protocol, see Stratum V2 Mining Protocol.
Merge Mining (Separate Plugin)
Merge mining is NOT part of the Stratum V2 module. It is available as a separate paid plugin module (blvm-merge-mining) that integrates with the Stratum V2 module.
For merge mining functionality, see:
- blvm-merge-mining README - Merge mining module documentation
- Stratum V2 + Merge Mining - How merge mining integrates with Stratum V2
Usage
Once installed and configured, the module automatically:
- Subscribes to mining-related events from the node
- Receives Stratum V2 messages via the node’s network layer (automatic routing)
- Creates and distributes mining jobs to connected miners
- Manages mining pool connections (if configured)
- Tracks mining rewards and publishes mining events
Note: Merge mining is handled by a separate module (blvm-merge-mining) that integrates with this module.
The node’s network layer automatically detects Stratum V2 messages (via TLV format) and routes them to this module via the event system. No additional network configuration is required.
Integration with Other Modules
- blvm-datum: Works together with blvm-datum for DATUM Gateway mining. blvm-stratum-v2 handles miner connections while blvm-datum handles pool communication.
- blvm-miningos: MiningOS can update pool configuration via this module's inter-module API.
- blvm-merge-mining: Separate module that integrates with Stratum V2 for merge mining functionality.
API Integration
The module integrates with the node via ModuleClient and NodeApiIpc:
- Read-only blockchain access: Queries blockchain data for block templates
- Event subscription: Receives real-time mining events from the node
- Event publication: Publishes mining-specific events
Note: The module subscribes to MiningJobCreated and ShareSubmitted events for coordination with other modules (e.g., merge mining), but these events are also published by this module when jobs are created and shares are submitted.
Troubleshooting
Module Not Loading
- Verify the module binary exists at the correct path
- Check the module.toml manifest is present and valid
- Verify the module has the required capabilities
- Check node logs for module loading errors
Mining Jobs Not Creating
- Verify the module has the read_blockchain capability
- Check that block template events (BlockTemplateUpdated) are being published by the node
- Verify the listening address is accessible and not blocked by a firewall
- Check node logs for mining job creation errors
- Verify node is synced and can generate block templates
- Check that miners are connected (if no miners, jobs may not be created)
Pool Connection Failing
- Verify pool URL is correct and accessible
- Check network connectivity to mining pool
- Verify pool supports Stratum V2 protocol
- Check node logs for connection errors
Repository
- GitHub: blvm-stratum-v2
- Version: 0.1.0
External Resources
- Stratum V2 Specification: Stratum V2 Protocol Specification - Official Stratum V2 mining protocol specification
- Stratum V2 Documentation: Stratum V2 Docs - Complete Stratum V2 protocol documentation
See Also
- Module System Overview - Overview of all available modules
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
- SDK Overview - SDK introduction and capabilities
- Stratum V2 + Merge Mining - Stratum V2 protocol documentation
- Mining Integration - Mining functionality
- Datum Module - DATUM Gateway mining protocol (works with Stratum V2)
Datum Module
Overview
The Datum module (blvm-datum) implements the DATUM Gateway mining protocol for blvm-node, enabling decentralized mining with Ocean pool support. This module handles pool communication only - miners connect via the blvm-stratum-v2 module. For information on developing custom modules, see Module Development.
Features
- DATUM Protocol Client: Encrypted communication with DATUM pools (Ocean)
- Decentralized Templates: Block templates generated locally via NodeAPI
- Coinbase Coordination: Coordinates coinbase payouts with DATUM pool
- Module Cooperation: Works with
blvm-stratum-v2for complete mining solution
Architecture
The module integrates with both the node and the Stratum V2 module:
┌─────────────────┐
│ blvm-node │
│ (Core Node) │
└────────┬────────┘
│ NodeAPI
│ (get_block_template, submit_block)
│
┌────┴────┐
│ │
▼ ▼
┌─────────┐ ┌──────────────┐
│ blvm- │ │ blvm-datum │
│ stratum │ │ (Module) │
│ v2 │ │ │
│ │ │ ┌──────────┐ │
│ ┌─────┐ │ │ │ DATUM │ │◄─── DATUM Pool (Ocean)
│ │ SV2 │ │ │ │ Client │ │ (Encrypted Protocol)
│ │Server│ │ │ └──────────┘ │
│ └─────┘ │ └──────────────┘
│ │
│ │ │
│ ▼ │
│ Mining │
│Hardware │
└─────────┘
Key Points:
- blvm-datum: Handles DATUM pool communication only
- blvm-stratum-v2: Handles miner connections
- Both modules can submit blocks independently
Installation
Via Cargo
cargo install blvm-datum
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-datum
Manual Installation
- Clone the repository:
git clone https://github.com/BTCDecoded/blvm-datum.git
cd blvm-datum
- Build the module:
cargo build --release
- Install to node modules directory:
mkdir -p /path/to/node/modules/blvm-datum/target/release
cp target/release/blvm-datum /path/to/node/modules/blvm-datum/target/release/
Configuration
Both blvm-stratum-v2 and blvm-datum modules should be enabled for full DATUM Gateway functionality. Create configuration in your node’s config.toml:
[modules.blvm-stratum-v2]
enabled = true
listen_addr = "0.0.0.0:3333"
mode = "solo" # or "pool"
[modules.blvm-datum]
enabled = true
pool_url = "https://ocean.xyz/datum"
pool_username = "user"
pool_password = "pass"
pool_public_key = "hex_encoded_32_byte_public_key" # Optional, for encryption
[modules.blvm-datum.mining]
coinbase_tag_primary = "DATUM Gateway"
coinbase_tag_secondary = "BLVM User"
pool_address = "bc1q..." # Bitcoin address for pool payouts
Configuration Options
- `enabled` (default: `true`): Enable or disable the module
- `pool_url` (required): DATUM pool URL (e.g., `https://ocean.xyz/datum`)
- `pool_username` (required): Pool username
- `pool_password` (required): Pool password
- `pool_public_key` (optional): Pool public key (32-byte hex-encoded) for encryption
- `coinbase_tag_primary` (optional): Primary coinbase tag
- `coinbase_tag_secondary` (optional): Secondary coinbase tag
- `pool_address` (optional): Bitcoin address for pool payouts
Note: The blvm-stratum-v2 module must also be enabled and configured for miners to connect.
Module Manifest
The module includes a module.toml manifest (see Module Development):
name = "blvm-datum"
version = "0.1.0"
description = "DATUM Gateway mining protocol module for blvm-node"
author = "Bitcoin Commons Team"
entry_point = "blvm-datum"
capabilities = [
"read_blockchain",
"subscribe_events",
]
Events
Subscribed Events
The module subscribes to node events including:
- Chain Events: `NewBlock`, `ChainTipUpdated`, `BlockDisconnected`
- Mining Events: `BlockTemplateGenerated`, `BlockFound`
Published Events
The module publishes the following events:
- `DatumPoolConnected` - Successfully connected to DATUM pool
- `DatumPoolDisconnected` - Disconnected from DATUM pool
- `DatumTemplateReceived` - Received block template from pool
- `DatumBlockSubmitted` - Block submitted to pool
Dependencies
- `blvm-node`: Module system integration
- `sodiumoxide`: Encryption for DATUM protocol (Ed25519, X25519, ChaCha20Poly1305, NaCl sealed boxes)
- `ed25519-dalek`: Ed25519 signature verification
- `x25519-dalek`: X25519 key exchange
- `chacha20poly1305`: ChaCha20-Poly1305 authenticated encryption
- `tokio`: Async runtime
API Integration
The module integrates with the node via ModuleClient and NodeApiIpc:
- Read-only blockchain access: Queries blockchain data for template generation
- Event subscription: Receives real-time events from the node
- Event publication: Publishes DATUM-specific events
- NodeAPI calls: Uses `get_block_template` and `submit_block` via NodeAPI
- ModuleAPI registration: Registers `DatumModuleApi` for inter-module communication
Inter-Module Communication
The module exposes a ModuleAPI for other modules (e.g., blvm-stratum-v2) to query coinbase payout requirements:
- `get_coinbase_payout`: Returns the current coinbase payout structure (outputs, tags, unique ID) required by the DATUM pool
This allows other modules to construct block templates with the correct coinbase structure for DATUM pool coordination.
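The shape of this exchange can be sketched as a plain trait. This is an illustrative model only — the struct `CoinbasePayout`, the stub `DatumStub`, and all field names are assumptions; the real `DatumModuleApi` is registered over the node's IPC layer:

```rust
// Hypothetical sketch of the inter-module coinbase query.
// All names and fields here are illustrative, not the module's actual API.

/// Payout structure a DATUM pool requires in the coinbase (illustrative fields).
#[derive(Debug, Clone, PartialEq)]
struct CoinbasePayout {
    outputs: Vec<(String, u64)>, // (address, value in sats)
    tag_primary: String,
    tag_secondary: String,
    unique_id: u64,
}

trait DatumModuleApi {
    /// Returns the coinbase structure the pool currently requires.
    fn get_coinbase_payout(&self) -> CoinbasePayout;
}

/// Stub standing in for the blvm-datum module.
struct DatumStub;

impl DatumModuleApi for DatumStub {
    fn get_coinbase_payout(&self) -> CoinbasePayout {
        CoinbasePayout {
            outputs: vec![("bc1qexample".to_string(), 625_000_000)],
            tag_primary: "DATUM Gateway".to_string(),
            tag_secondary: "BLVM User".to_string(),
            unique_id: 42,
        }
    }
}

/// What blvm-stratum-v2 would do before building a template:
fn build_coinbase_tags(datum: &dyn DatumModuleApi) -> String {
    let payout = datum.get_coinbase_payout();
    format!("{}/{}", payout.tag_primary, payout.tag_secondary)
}

fn main() {
    let tags = build_coinbase_tags(&DatumStub);
    assert_eq!(tags, "DATUM Gateway/BLVM User");
    println!("{tags}");
}
```

The key design point survives the simplification: the Stratum V2 side never hardcodes pool payout rules; it asks the DATUM module at template-build time.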
Integration with Stratum V2
The blvm-datum module works in conjunction with blvm-stratum-v2:
- blvm-stratum-v2: Handles miner connections via Stratum V2 protocol
- Miners connect to the Stratum V2 server
- Receives mining jobs and submits shares
- blvm-datum: Handles DATUM pool communication
- Communicates with Ocean pool via encrypted DATUM protocol
- Coordinates coinbase payouts
- Shared templates: Both modules use NodeAPI to get block templates independently
- Independent submission: Either module can submit blocks to the network
Architecture Flow:
Miners → blvm-stratum-v2 (Stratum V2 server) → NodeAPI (block templates)
↓
Ocean Pool ← blvm-datum (DATUM client) ← NodeAPI (block templates)
Status
🚧 In Development - Initial implementation phase
Troubleshooting
Module Not Loading
- Verify the module binary exists at the correct path
- Check `module.toml` manifest is present and valid
- Verify module has required capabilities
- Check node logs for module loading errors
- Ensure `blvm-stratum-v2` module is also enabled
Pool Connection Issues
- Verify pool URL is correct and accessible
- Check pool username and password are valid
- Verify network connectivity to pool
- Check node logs for connection errors
- Ensure encryption libraries (sodiumoxide) are properly installed
Template Generation Issues
- Verify NodeAPI is accessible and module has `read_blockchain` capability
- Check node is synced and can generate block templates via `get_block_template`
- Verify both `blvm-stratum-v2` and `blvm-datum` are enabled and configured correctly
- Check node logs for template generation errors
- Verify node is not in IBD (Initial Block Download) mode
- Check that the node has sufficient mempool transactions for template generation
Repository
- GitHub: blvm-datum
- Version: 0.1.0
- Status: 🚧 In Development
External Resources
- DATUM Gateway: Ocean DATUM Documentation - Official DATUM Gateway protocol documentation
- Ocean Pool: Ocean.xyz - Mining pool that supports DATUM Gateway protocol
See Also
- Module System Overview - Overview of all available modules
- Stratum V2 Module - Stratum V2 mining protocol (required for miners to connect)
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
- DATUM Gateway Documentation - Official DATUM documentation
Mining OS Module
Overview
The Mining OS module (blvm-miningos) provides bidirectional integration between BLVM and MiningOS (Mos), enabling BLVM to be managed as a MiningOS “rack” (worker) and exposing miners as “things”. For information on developing custom modules, see Module Development.
Features
- BLVM → MiningOS: Register BLVM as a MiningOS rack, expose miners as things, provide block templates
- MiningOS → BLVM: Execute actions (reboot, power management, pool config updates), query statistics, receive commands
- HTTP REST API Client: Full REST API integration with MiningOS app-node
- OAuth2 Authentication: Token-based authentication with automatic token refresh
- P2P Worker Bridge: Node.js bridge for Hyperswarm P2P integration
- Block Template Provider: Provides block templates to MiningOS
- Enhanced Statistics: Chain info, network stats, mempool statistics
- Action Execution System: Executes MiningOS actions (integrates with Stratum V2 for pool config updates)
- Data Conversion: Converts BLVM data to MiningOS “Thing” format
- Event Subscription: Subscribes to block mined, template updates, and other events
Architecture
The module uses a hybrid approach combining:
- Rust Module: Core integration logic, HTTP client, data conversion, action handling
- Node.js Bridge: P2P worker that extends `TetherWrkBase` for Hyperswarm integration
- IPC Communication: Unix socket-based JSON-RPC between Rust and Node.js
┌─────────────────────┐
│ MiningOS │
│ Orchestrator │
│ (Hyperswarm P2P) │
└──────────┬──────────┘
│
│ Hyperswarm
│
┌──────────▼──────────┐ Unix Socket ┌──────────────┐
│ Node.js Bridge │ ◄───────────────────► │ Rust Module │
│ (worker.js) │ JSON-RPC │ │
└─────────────────────┘ └──────┬───────┘
│ IPC
│
┌──────▼───────┐
│ BLVM Node │
└──────────────┘
Installation
Via Cargo
cargo install blvm-miningos
Via Module Installer
cargo install cargo-blvm-module
cargo blvm-module install blvm-miningos
Manual Installation
1. Clone the repository:

   ```bash
   git clone https://github.com/BTCDecoded/blvm-miningos.git
   cd blvm-miningos
   ```

2. Build the Rust module:

   ```bash
   cargo build --release
   ```

3. Install Node.js bridge dependencies:

   ```bash
   cd bridge
   npm install
   ```

4. Install to the node modules directory:

   ```bash
   mkdir -p /path/to/node/modules/blvm-miningos/target/release
   cp target/release/blvm-miningos /path/to/node/modules/blvm-miningos/target/release/
   cp -r bridge /path/to/node/modules/blvm-miningos/
   ```
Configuration
The module searches for configuration files in the following order (first found is used):
1. `{data_dir}/config/miningos.toml`
2. `{data_dir}/miningos.toml`
3. `./config/miningos.toml`
4. `./miningos.toml`
If no configuration file is found, the module uses default values.
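The first-found-wins search can be sketched with `std::path` alone. The helper name `find_miningos_config` is ours for illustration, not the module's actual function:

```rust
use std::path::{Path, PathBuf};

/// Return the first existing config path from the documented search order.
/// Name and signature are illustrative, not the module's actual API.
fn find_miningos_config(data_dir: &Path) -> Option<PathBuf> {
    let candidates = [
        data_dir.join("config/miningos.toml"),
        data_dir.join("miningos.toml"),
        PathBuf::from("./config/miningos.toml"),
        PathBuf::from("./miningos.toml"),
    ];
    // First found wins; None means "fall back to default values".
    candidates.into_iter().find(|p| p.exists())
}

fn main() {
    // With no candidate files present, the module falls back to defaults.
    let found = find_miningos_config(Path::new("/nonexistent-data-dir"));
    println!("{found:?}");
}
```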
Create data/config/miningos.toml:
[miningos]
enabled = true
[p2p]
enabled = true
rack_id = "blvm-node-001"
rack_type = "miner"
auto_register = true
[http]
enabled = true
app_node_url = "https://api.mos.tether.io"
oauth_provider = "google"
oauth_client_id = "your-client-id"
oauth_client_secret = "your-client-secret"
token_cache_file = "miningos-token.json"
[stats]
enabled = true
collection_interval_seconds = 60
[template]
enabled = true
update_interval_seconds = 30
Configuration Options
- `enabled` (default: `true`): Enable or disable the module
- P2P Configuration:
  - `enabled` (default: `true`): Enable P2P worker bridge
  - `rack_id` (required): Unique identifier for this BLVM node in MiningOS
  - `rack_type` (default: `"miner"`): Type of rack (e.g., `"miner"`)
  - `auto_register` (default: `true`): Automatically register with MiningOS
- HTTP Configuration:
  - `enabled` (default: `true`): Enable HTTP REST API client
  - `app_node_url` (required): MiningOS app-node API URL
  - `oauth_provider` (required): OAuth2 provider (e.g., `"google"`)
  - `oauth_client_id` (required): OAuth2 client ID
  - `oauth_client_secret` (required): OAuth2 client secret
  - `token_cache_file` (default: `"miningos-token.json"`): Token cache file path
- Statistics Configuration:
  - `enabled` (default: `true`): Enable statistics collection
  - `collection_interval_seconds` (default: `60`): Statistics collection interval
- Template Configuration:
  - `enabled` (default: `true`): Enable block template provider
  - `update_interval_seconds` (default: `30`): Template update interval
Module Manifest
The module includes a module.toml manifest (see Module Development):
name = "blvm-miningos"
version = "0.1.0"
description = "MiningOS integration module for BLVM"
author = "Bitcoin Commons Team"
entry_point = "blvm-miningos"
capabilities = [
"read_blockchain",
"subscribe_events",
"publish_events",
"call_module",
]
Events
Subscribed Events
The module subscribes to node events including:
- Chain Events: `NewBlock`, `ChainTipUpdated`, `BlockDisconnected`
- Mining Events: `BlockTemplateGenerated`, `BlockFound`, `ShareSubmitted`
- Network Events: `PeerConnected`, `PeerDisconnected`
- Mempool Events: `MempoolTransactionAdded`, `MempoolTransactionRemoved`
Published Events
The module publishes the following events:
- `MiningOSRegistered` - Successfully registered with MiningOS
- `MiningOSActionExecuted` - Action executed from MiningOS
- `MiningOSStatsUpdated` - Statistics updated and sent to MiningOS
- `MiningOSTemplateUpdated` - Block template updated and sent to MiningOS
API Integration
The module integrates with the node via ModuleClient and NodeApiIpc:
- Read-only blockchain access: Queries blockchain data for statistics
- Event subscription: Receives real-time events from the node
- Event publication: Publishes MiningOS-specific events
- Module calls: Can call other modules (e.g., Stratum V2 for pool config updates) via `call_module`
- Block templates: Gets block templates via the NodeAPI `get_block_template` method (no special permission required)
- Block submission: Submits mined blocks via the NodeAPI `submit_block` method (no special permission required)
Note: `get_block_template` and `submit_block` are NodeAPI methods, not permissions. Modules can call these methods through the NodeAPI interface without requiring special capabilities.
Action Execution System
The module can execute actions from MiningOS:
- Reboot: System reboot commands
- Power Management: Power on/off commands
- Pool Config Update: Updates pool configuration via inter-module IPC to Stratum V2 module
- Statistics Query: Queries node statistics (chain info, network stats, mempool)
- Template Refresh: Refreshes block templates
Inter-Module Integration
The module integrates with other modules via inter-module IPC:
- Stratum V2: Can update pool configuration when MiningOS sends pool config update actions
- Node API: Queries blockchain data, network statistics, and mempool information
Usage
The module is automatically discovered and loaded by the BLVM node system when placed in the modules directory.
For manual testing:
./target/release/blvm-miningos \
--module-id blvm-miningos \
--socket-path ./data/modules/modules.sock \
--data-dir ./data
Troubleshooting
Module Not Loading
- Verify the module binary exists at the correct path
- Check `module.toml` manifest is present and valid
- Verify module has required capabilities
- Check node logs for module loading errors
- Ensure Node.js bridge is properly installed
OAuth2 Authentication Issues
- Verify OAuth2 credentials are correct
- Check token cache file permissions
- Verify OAuth2 provider URL is accessible
- Check node logs for authentication errors
- Ensure token refresh is working correctly
P2P Bridge Issues
- Verify Node.js bridge is installed (`npm install` in `bridge/` directory)
- Check bridge process is running
- Verify Hyperswarm connectivity
- Check bridge logs for connection errors
- Ensure `rack_id` is unique
Statistics Collection Issues
- Verify node is synced and can provide statistics
- Check collection interval configuration
- Verify NodeAPI is accessible
- Check node logs for statistics collection errors
Repository
- GitHub: blvm-miningos
- Version: 0.1.0
- Documentation: QUICKSTART.md, Integration Guide
External Resources
- MiningOS: https://mos.tether.io/ - The open-source, self-hosted OS for Bitcoin mining and energy orchestration that this module integrates with
See Also
- Module System Overview - Overview of all available modules
- Stratum V2 Module - Stratum V2 mining protocol (integrates with MiningOS for pool config updates)
- Module System Architecture - Detailed module system documentation
- Module Development - Guide for developing custom modules
Governance Overview
The governance system enforces development processes cryptographically across Bitcoin Commons repositories. See Governance Model for details.
{{#include ../../../modules/governance/README.md}}
Attack Path Interception
Figure: Risk interception points across GitHub, Nostr, and OpenTimestamps. Multiple layers of verification prevent single points of failure.
See Also
- Governance Model - Governance architecture and rules
- Multisig Configuration - Configuring multisig thresholds
- Keyholder Procedures - Maintainer responsibilities
- Audit Trails - Audit logging and verification
- SDK API Reference - Governance primitives API
blvm-commons
Overview
blvm-commons is the governance enforcement system for Bitcoin Commons. It provides GitHub integration, OpenTimestamps verification, Nostr integration, and cross-layer validation for the Bitcoin Commons governance framework.
Key Features
- GitHub Integration: GitHub App for cryptographic signature verification and merge enforcement
- OpenTimestamps: Immutable audit trail for governance artifacts
- Nostr Integration: Decentralized governance communication and voting
- Cross-Layer Validation: Security controls and validation across all layers
- CI/CD Workflows: Reusable workflows for Bitcoin Commons repositories
Components
GitHub Integration
The GitHub App enforces cryptographic signatures on pull requests, verifies signature thresholds, and blocks merges until governance requirements are met.
Code: GitHub App
OpenTimestamps Integration
Provides immutable timestamping for governance artifacts, verification proofs, and audit trails.
Code: OpenTimestamps Integration
Nostr Integration
Enables decentralized governance communication, voting, and proposal distribution through Nostr relays.
Code: Nostr Integration
Security Controls
Validates code changes, detects placeholder implementations, and enforces security policies across all Bitcoin Commons repositories.
Code: Security Controls
Repository
GitHub: blvm-commons
See Also
- Governance Overview - Governance system introduction
- OpenTimestamps Integration - Audit trail system
- Nostr Integration - Decentralized communication
- Security Controls - Security validation
- CI/CD Workflows - Reusable workflows
Governance Model
Bitcoin Commons implements a constitutional governance model that makes Bitcoin governance 6x harder to capture.
{{#include ../../../modules/governance/GOVERNANCE.md}}
Governance Signature Thresholds
Figure: Signature thresholds by layer showing the graduated security model. Constitutional layers require 6-of-7, while extension layers require 2-of-3.
Governance Process Latency
Figure: Governance process latency showing review periods and decision timelines across different tiers.
PR Review Time Distribution
Figure: Pull request review time distribution. Long tails reveal why throughput stalls without process and tooling. Bitcoin Commons addresses this through structured review periods and automated tooling.
See Also
- PR Process - How governance applies to pull requests
- Layer-Tier Model - Layer and tier combination rules
- Multisig Configuration - Signature threshold configuration
- Governance Overview - Governance system introduction
- Keyholder Procedures - Maintainer responsibilities
Layer-Tier Governance Model
Overview
Bitcoin Commons implements dual-dimensional governance combining Layers (repository architecture) and Tiers (action classification). When both apply, the system follows a most-restrictive-wins rule: it takes the higher signature requirement and the longer review period.
Layer System
The layer system maps repository architecture to governance requirements:
| Layer | Repository | Purpose | Signatures | Review Period |
|---|---|---|---|---|
| 1 | blvm-spec | Constitutional | 6-of-7 | 180 days |
| 2 | blvm-consensus | Constitutional | 6-of-7 | 180 days |
| 3 | blvm-protocol | Implementation | 4-of-5 | 90 days |
| 4 | blvm-node / blvm | Application | 3-of-5 | 60 days |
| 5 | blvm-sdk | Extension | 2-of-3 | 14 days |
Note: For consensus rule changes, Layer 1-2 require 365 days review period.
Tier System
The tier system classifies changes by action type:
| Tier | Type | Signatures | Review Period |
|---|---|---|---|
| 1 | Routine Maintenance | 3-of-5 | 7 days |
| 2 | Feature Changes | 4-of-5 | 30 days |
| 3 | Consensus-Adjacent | 5-of-5 | 90 days |
| 4 | Emergency Actions | 4-of-5 | 0 days |
| 5 | Governance Changes | 5-of-5 | 180 days |
Combination Rules
When both Layer and Tier requirements apply, the system takes the most restrictive (highest) requirements:
| Layer | Tier | Final Signatures | Final Review | Source |
|---|---|---|---|---|
| 1 | 1 | 6-of-7 | 180 days | Layer 1 |
| 1 | 2 | 6-of-7 | 180 days | Layer 1 |
| 1 | 3 | 6-of-7 | 180 days | Layer 1 |
| 1 | 4 | 6-of-7 | 180 days | Layer 1 |
| 1 | 5 | 6-of-7 | 180 days | Layer 1 |
| 2 | 1 | 6-of-7 | 180 days | Layer 2 |
| 2 | 2 | 6-of-7 | 180 days | Layer 2 |
| 2 | 3 | 6-of-7 | 180 days | Layer 2 |
| 2 | 4 | 6-of-7 | 180 days | Layer 2 |
| 2 | 5 | 6-of-7 | 180 days | Layer 2 |
| 3 | 1 | 4-of-5 | 90 days | Layer 3 |
| 3 | 2 | 4-of-5 | 90 days | Layer 3 |
| 3 | 3 | 5-of-5 | 90 days | Tier 3 |
| 3 | 4 | 4-of-5 | 90 days | Layer 3 |
| 3 | 5 | 5-of-5 | 180 days | Tier 5 |
| 4 | 1 | 3-of-5 | 60 days | Layer 4 |
| 4 | 2 | 4-of-5 | 60 days | Tier 2 |
| 4 | 3 | 5-of-5 | 90 days | Tier 3 |
| 4 | 4 | 4-of-5 | 60 days | Layer 4 |
| 4 | 5 | 5-of-5 | 180 days | Tier 5 |
| 5 | 1 | 2-of-3 | 14 days | Layer 5 |
| 5 | 2 | 4-of-5 | 30 days | Tier 2 |
| 5 | 3 | 5-of-5 | 90 days | Tier 3 |
| 5 | 4 | 4-of-5 | 14 days | Layer 5 |
| 5 | 5 | 5-of-5 | 180 days | Tier 5 |
Examples
| Example | Layer | Tier | Result | Source |
|---|---|---|---|---|
| Bug fix in blvm-protocol | 3 (4-of-5, 90d) | 1 (3-of-5, 7d) | 4-of-5, 90d | Layer 3 |
| New feature in blvm-sdk | 5 (2-of-3, 14d) | 2 (4-of-5, 30d) | 4-of-5, 30d | Tier 2 |
| Consensus change in blvm-spec | 1 (6-of-7, 180d) | 3 (5-of-5, 90d) | 6-of-7, 180d | Layer 1 |
| Emergency fix in blvm-node | 4 (3-of-5, 60d) | 4 (4-of-5, 0d) | 4-of-5, 60d | Tier 4 signatures, Layer 4 review |
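The most-restrictive-wins rule behind these examples can be reproduced with a small std-only sketch. The lookup tables below mirror the Layer and Tier tables in this section; function names are ours, not the implementation's:

```rust
/// (required, total, review_days) per layer, per the Layer System table.
fn layer_req(layer: u32) -> (usize, usize, i64) {
    match layer {
        1 | 2 => (6, 7, 180),
        3 => (4, 5, 90),
        4 => (3, 5, 60),
        _ => (2, 3, 14), // layer 5
    }
}

/// (required, total, review_days) per tier, per the Tier System table.
fn tier_req(tier: u32) -> (usize, usize, i64) {
    match tier {
        1 => (3, 5, 7),
        2 => (4, 5, 30),
        3 => (5, 5, 90),
        4 => (4, 5, 0),
        _ => (5, 5, 180), // tier 5
    }
}

/// Most restrictive wins: highest signature counts, longest review.
fn combined(layer: u32, tier: u32) -> (usize, usize, i64) {
    let (lr, lt, lrev) = layer_req(layer);
    let (tr, tt, trev) = tier_req(tier);
    (lr.max(tr), lt.max(tt), lrev.max(trev))
}

fn main() {
    // Bug fix in blvm-protocol: Layer 3 + Tier 1 -> 4-of-5, 90 days
    assert_eq!(combined(3, 1), (4, 5, 90));
    // New feature in blvm-sdk: Layer 5 + Tier 2 -> 4-of-5, 30 days
    assert_eq!(combined(5, 2), (4, 5, 30));
    // Consensus change in blvm-spec: Layer 1 + Tier 3 -> 6-of-7, 180 days
    assert_eq!(combined(1, 3), (6, 7, 180));
    println!("ok");
}
```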
Implementation
Code: threshold.rs
```rust
// Excerpt from an impl block: combine layer and tier requirements.
pub fn get_combined_requirements(layer: i32, tier: u32) -> (usize, usize, i64) {
    let (layer_sigs_req, layer_sigs_total) = Self::get_threshold_for_layer(layer);
    let layer_review = Self::get_review_period_for_layer(layer, false);
    let (tier_sigs_req, tier_sigs_total) = Self::get_tier_threshold(tier);
    let tier_review = Self::get_tier_review_period(tier);
    // Take the most restrictive of each component
    (
        layer_sigs_req.max(tier_sigs_req),
        layer_sigs_total.max(tier_sigs_total),
        layer_review.max(tier_review),
    )
}
```
Test: cd blvm-commons && cargo test threshold
Configuration
- `config/repository-layers.yml` - Layer definitions
- `config/action-tiers.yml` - Tier definitions
- `config/tier-classification-rules.yml` - PR classification
See Also
- PR Process - How governance tiers apply to pull requests
- Governance Model - Governance system
- Multisig Configuration - Signature threshold configuration
- Governance Overview - Governance system introduction
Configuration System
Overview
The Bitcoin Commons configuration system provides a unified, type-safe interface for all governance-controlled parameters. The system uses YAML files as the source of truth with a database-backed registry for governance-controlled changes and a comprehensive fallback chain.
Architecture
The configuration system has three core components:
1. YAML Files (Source of Truth)
YAML configuration files in the governance/config/ directory serve as the authoritative source for all configuration defaults. These files are version-controlled and human-readable.
Key Files:
- `action-tiers.yml` - Tier definitions and signature requirements
- `repository-layers.yml` - Layer definitions and requirements
- `emergency-tiers.yml` - Emergency tier definitions
- `commons-contributor-thresholds.yml` - Commons contributor thresholds
- `governance-fork.yml` - Governance fork configuration
- `maintainers/*.yml` - Maintainer configurations by layer
- `repos/*.yml` - Repository-specific configurations
2. ConfigRegistry (Database-Backed)
The ConfigRegistry stores all governance-controlled configuration parameters in a database, enabling governance-approved changes without modifying YAML files directly.
Features:
- Stores 87+ forkable governance variables
- Tracks change proposals and approvals
- Requires Tier 5 governance to modify
- Complete audit trail of all changes
- Automatic sync from YAML on startup
Code: config_registry.rs
3. ConfigReader (Unified Interface)
The ConfigReader provides a type-safe interface for reading configuration values with caching and fallback support.
Features:
- Type-safe accessors (`get_i32()`, `get_f64()`, `get_bool()`, `get_string()`)
- In-memory caching (5-minute TTL)
- Automatic cache invalidation on changes
- Fallback chain support
Code: config_reader.rs
Fallback Chain
The system uses a four-tier fallback chain for configuration values:
1. Cache (in-memory, 5-minute TTL)
↓ (if not found)
2. Config Registry (database, governance-controlled)
↓ (if not found)
3. YAML Config (file-based, source of truth)
↓ (if not found)
4. Hardcoded Defaults (safety fallback)
Implementation: `blvm-commons/src/governance/config_reader.rs` (lines 76-110)
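The chain can be sketched with std maps standing in for the three stores. The struct and method names here are illustrative, not the `ConfigReader` API:

```rust
use std::collections::HashMap;

/// Illustrative four-tier lookup: cache -> registry -> YAML -> hardcoded default.
struct ConfigChain {
    cache: HashMap<String, i64>,
    registry: HashMap<String, i64>, // stands in for the database
    yaml: HashMap<String, i64>,     // stands in for YAML files
}

impl ConfigChain {
    fn get_i64(&mut self, key: &str, default: i64) -> i64 {
        // 1. Cache hit short-circuits everything else.
        if let Some(v) = self.cache.get(key) {
            return *v;
        }
        // 2-4. Registry, then YAML, then the hardcoded default.
        let v = self
            .registry
            .get(key)
            .or_else(|| self.yaml.get(key))
            .copied()
            .unwrap_or(default);
        self.cache.insert(key.to_string(), v); // populate cache for next read
        v
    }
}

fn main() {
    let mut chain = ConfigChain {
        cache: HashMap::new(),
        registry: HashMap::new(),
        yaml: HashMap::from([("tier_3_review_period_days".to_string(), 90)]),
    };
    // Falls through cache and registry to the YAML value.
    assert_eq!(chain.get_i64("tier_3_review_period_days", 0), 90);
    // Unknown key falls back to the hardcoded default.
    assert_eq!(chain.get_i64("unknown_key", 7), 7);
    println!("ok");
}
```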
Sync Mechanisms
sync_from_yaml()
On startup, the system automatically syncs YAML values into the database:
```rust
config_registry.sync_from_yaml(config_path).await?;
```
This process:
- Loads all YAML configuration files
- Extracts configuration values using `YamlConfigLoader`
- Compares with database values
- Updates the database only if no governance history exists (preserving governance-approved changes)
Code: config_registry.rs
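The preserve-governance-history rule is the subtle part of the steps above. A minimal sketch, with the function signature and the history set as our own modeling assumptions:

```rust
use std::collections::{HashMap, HashSet};

/// Illustrative sync: copy YAML values into the registry, but never
/// overwrite a key that has governance-approved history.
fn sync_from_yaml(
    yaml: &HashMap<String, i64>,
    registry: &mut HashMap<String, i64>,
    has_governance_history: &HashSet<String>,
) {
    for (key, value) in yaml {
        if has_governance_history.contains(key) {
            continue; // preserve the governance-approved value
        }
        registry.insert(key.clone(), *value);
    }
}

fn main() {
    let yaml = HashMap::from([
        ("tier_1_review_period_days".to_string(), 7),
        ("tier_3_review_period_days".to_string(), 90),
    ]);
    // tier_3 was changed via Tier 5 governance to 120 days.
    let mut registry = HashMap::from([("tier_3_review_period_days".to_string(), 120)]);
    let history = HashSet::from(["tier_3_review_period_days".to_string()]);

    sync_from_yaml(&yaml, &mut registry, &history);
    assert_eq!(registry["tier_1_review_period_days"], 7);   // synced from YAML
    assert_eq!(registry["tier_3_review_period_days"], 120); // preserved
    println!("ok");
}
```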
sync_to_yaml()
When governance-approved changes are activated, the system can write changes back to YAML files. Full bidirectional sync is planned.
Code: config_registry.rs
Configuration Categories
Configuration parameters are organized into categories:
- FeatureFlags: Feature toggles (e.g., `feature_governance_enforcement`)
- Thresholds: Signature and veto thresholds (e.g., `tier_3_signatures_required`)
- TimeWindows: Review periods and time limits (e.g., `tier_3_review_period_days`)
- Limits: Size and count limits (e.g., `max_pr_size_bytes`)
- Network: Network-related parameters
- Security: Security-related parameters
- Other: Miscellaneous parameters
87+ Forkable Variables
The system manages 87+ governance-controlled configuration variables, organized into categories:
Complete Configuration Schema
| Category | Variables | Description |
|---|---|---|
| Action Tier Thresholds | 15 | Signature requirements and review periods for each tier |
| Commons Contributor Thresholds | 8 | Qualification thresholds and weight calculation |
| Governance Phase Thresholds | 11 | Phase boundaries (Early, Growth, Mature) |
| Repository Layer Thresholds | 9 | Signature requirements per repository layer |
| Emergency Tier Thresholds | 10 | Emergency action thresholds and windows |
| Governance Review Policy | 10 | Review period policies and requirements |
| Feature Flags | 7 | Feature enable/disable flags |
| Network & Security | 3 | Network and security configuration |
Total: 87+ variables
Code: config_defaults.rs
Action Tier Thresholds (15 variables)
| Variable | Default | Description |
|---|---|---|
| tier_1_signatures_required | 3 | Tier 1: Required signatures (out of 5) |
| tier_1_signatures_total | 5 | Tier 1: Total signatures available |
| tier_1_review_period_days | 7 | Tier 1: Review period (days) |
| tier_2_signatures_required | 4 | Tier 2: Required signatures (out of 5) |
| tier_2_signatures_total | 5 | Tier 2: Total signatures available |
| tier_2_review_period_days | 30 | Tier 2: Review period (days) |
| tier_3_signatures_required | 5 | Tier 3: Required signatures (unanimous) |
| tier_3_signatures_total | 5 | Tier 3: Total signatures available |
| tier_3_review_period_days | 90 | Tier 3: Review period (days) |
| tier_4_signatures_required | 4 | Tier 4: Required signatures (emergency) |
| tier_4_signatures_total | 5 | Tier 4: Total signatures available |
| tier_4_review_period_days | 0 | Tier 4: Review period (immediate) |
| tier_5_signatures_required | 5 | Tier 5: Required signatures (governance) |
| tier_5_signatures_total | 5 | Tier 5: Total signatures available |
| tier_5_review_period_days | 180 | Tier 5: Review period (days) |
Code: config_defaults.rs
Signaling Thresholds

| Variable | Default | Description |
|---|---|---|
| signaling_tier_5_mining_percent | 50.0 | Tier 5: Mining hashpower for support (%) |
| signaling_tier_5_economic_percent | 60.0 | Tier 5: Economic activity for support (%) |
Code: config_defaults.rs
Commons Contributor Thresholds (8 variables)
| Variable | Default | Description |
|---|---|---|
| commons_contributor_min_zaps_btc | 0.01 | Minimum zap contribution (BTC) |
| commons_contributor_min_marketplace_btc | 0.01 | Minimum marketplace contribution (BTC) |
| commons_contributor_measurement_period_days | 90 | Measurement period (days) |
| commons_contributor_qualification_logic | "OR" | Qualification logic (OR/AND) |
| commons_contributor_weight_formula | "linear" | Weight calculation formula |
| commons_contributor_weight_cap | 0.10 | Maximum weight per contributor (10%) |
Code: config_defaults.rs
Governance Phase Thresholds (11 variables)
| Variable | Default | Description |
|---|---|---|
| phase_early_max_blocks | 50000 | Early phase: Maximum blocks |
| phase_early_max_contributors | 10 | Early phase: Maximum contributors |
| phase_growth_min_blocks | 50000 | Growth phase: Minimum blocks |
| phase_growth_max_blocks | 200000 | Growth phase: Maximum blocks |
| phase_growth_min_contributors | 10 | Growth phase: Minimum contributors |
| phase_growth_max_contributors | 100 | Growth phase: Maximum contributors |
| phase_mature_min_blocks | 200000 | Mature phase: Minimum blocks |
| phase_mature_min_contributors | 100 | Mature phase: Minimum contributors |
Code: config_defaults.rs
Repository Layer Thresholds (9 variables)
| Variable | Default | Description |
|---|---|---|
| layer_1_2_signatures_required | 3 | Layer 1-2: Required signatures |
| layer_1_2_signatures_total | 5 | Layer 1-2: Total signatures |
| layer_1_2_review_period_days | 7 | Layer 1-2: Review period (days) |
| layer_3_signatures_required | 4 | Layer 3: Required signatures |
| layer_3_signatures_total | 5 | Layer 3: Total signatures |
| layer_3_review_period_days | 30 | Layer 3: Review period (days) |
| layer_4_signatures_required | 5 | Layer 4: Required signatures |
| layer_4_signatures_total | 5 | Layer 4: Total signatures |
| layer_4_review_period_days | 90 | Layer 4: Review period (days) |
| layer_5_signatures_required | 5 | Layer 5: Required signatures |
| layer_5_signatures_total | 5 | Layer 5: Total signatures |
| layer_5_review_period_days | 180 | Layer 5: Review period (days) |
Code: config_defaults.rs
Complete Reference
For the complete list of all 87+ variables with descriptions and default values, see:
- YAML Source: `governance/config/FORKABLE_VARIABLES.md`
- Default Values: `governance/config/DEFAULT_VALUES_REFERENCE.md`
- Implementation: `blvm-commons/src/governance/config_defaults.rs`
Governance Change Workflow
Changing a configuration parameter requires Tier 5 governance approval:
- Proposal: Create a configuration change proposal via PR
- Review: 5-of-5 maintainer signatures required
- Review Period: 180 days
- Activation: Change activated in the database via `activate_change()`
- Sync: Change optionally synced back to YAML files
Code: config_registry.rs
Usage Examples
Basic Configuration Access
```rust
use crate::governance::config_reader::ConfigReader;
use crate::governance::config_registry::ConfigRegistry;
use std::sync::Arc;

// Initialize
let registry = Arc::new(ConfigRegistry::new(pool));
let yaml_loader = YamlConfigLoader::new(config_path);
let config = Arc::new(ConfigReader::with_yaml_loader(
    registry.clone(),
    Some(yaml_loader),
));

// Read a value (with fallback)
let review_period = config.get_i32("tier_3_review_period_days", 90).await?;
let veto_threshold = config.get_f64("veto_tier_3_mining_percent", 30.0).await?;
let enabled = config.get_bool("feature_governance_enforcement", false).await?;
```
Convenience Methods
```rust
// Get tier signatures
let (required, total) = config.get_tier_signatures(3).await?;

// Get Commons contributor threshold (with YAML fallback)
let threshold = config.get_commons_contributor_threshold("zaps").await?;
```
Integration with Validators
```rust
// ThresholdValidator with config support
let validator = ThresholdValidator::with_config(config.clone());

// All methods use the config registry
let (req, total) = validator.get_tier_threshold(3).await?;
```
Caching Strategy
- Cache TTL: 5 minutes (configurable via `cache_ttl`)
- Cache Invalidation:
  - Automatic after config changes are activated
  - Manual via `clear_cache()` or `invalidate_key()`
- Cache Storage: In-memory `HashMap<String, serde_json::Value>`
Code: config_reader.rs
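A minimal TTL cache in the spirit of the strategy above can be built on `std::time::Instant`. The struct and methods are a sketch, not the `ConfigReader` internals, though `invalidate_key` mirrors the documented method name:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Illustrative TTL cache, in the spirit of ConfigReader's 5-minute cache.
struct TtlCache {
    ttl: Duration,
    entries: HashMap<String, (i64, Instant)>,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    /// Return the value only while its entry is younger than the TTL.
    fn get(&self, key: &str) -> Option<i64> {
        self.entries
            .get(key)
            .filter(|(_, at)| at.elapsed() < self.ttl)
            .map(|(v, _)| *v)
    }

    fn put(&mut self, key: &str, value: i64) {
        self.entries.insert(key.to_string(), (value, Instant::now()));
    }

    /// Drop a single entry after a config change is activated.
    fn invalidate_key(&mut self, key: &str) {
        self.entries.remove(key);
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_secs(300));
    cache.put("tier_3_review_period_days", 90);
    assert_eq!(cache.get("tier_3_review_period_days"), Some(90));
    cache.invalidate_key("tier_3_review_period_days");
    assert_eq!(cache.get("tier_3_review_period_days"), None);
    println!("ok");
}
```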
YAML Structure
YAML files use a structured format. Example from action-tiers.yml:
```yaml
tiers:
  - tier: 1
    name: "Routine Maintenance"
    signatures_required: 3
    signatures_total: 5
    review_period_days: 7
  - tier: 3
    name: "Consensus-Adjacent"
    signatures_required: 5
    signatures_total: 5
    review_period_days: 90
```
The YamlConfigLoader extracts values from these files into a flat key-value structure for the registry.
Code: yaml_loader.rs
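The flattening step can be sketched as follows. The `TierEntry` struct and `flatten_tiers` helper are illustrative stand-ins for the loader's parsed YAML, but the flat keys follow the documented `tier_{n}_{property}` convention:

```rust
use std::collections::HashMap;

/// One tier entry as parsed from action-tiers.yml (illustrative struct).
struct TierEntry {
    tier: u32,
    signatures_required: i64,
    signatures_total: i64,
    review_period_days: i64,
}

/// Flatten structured tier entries into the registry's flat key space,
/// following the tier_{n}_{property} naming convention.
fn flatten_tiers(tiers: &[TierEntry]) -> HashMap<String, i64> {
    let mut flat = HashMap::new();
    for t in tiers {
        flat.insert(format!("tier_{}_signatures_required", t.tier), t.signatures_required);
        flat.insert(format!("tier_{}_signatures_total", t.tier), t.signatures_total);
        flat.insert(format!("tier_{}_review_period_days", t.tier), t.review_period_days);
    }
    flat
}

fn main() {
    let tiers = [TierEntry {
        tier: 3,
        signatures_required: 5,
        signatures_total: 5,
        review_period_days: 90,
    }];
    let flat = flatten_tiers(&tiers);
    assert_eq!(flat["tier_3_signatures_required"], 5);
    assert_eq!(flat["tier_3_review_period_days"], 90);
    println!("ok");
}
```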
Initialization
On system startup:
- Load YAML Files: System loads YAML configuration files
- Sync to Database: `sync_from_yaml()` populates the database from YAML
- Initialize Defaults: `initialize_governance_defaults()` registers any missing configs
- Create ConfigReader: ConfigReader created with YAML loader for fallback access
Code: main.rs
Configuration Key Reference
All configuration keys follow a naming convention:
- Tier configs: `tier_{n}_{property}`
- Layer configs: `layer_{n}_{property}`
- Veto configs: `veto_tier_{n}_{type}_percent`
- Commons contributor: `commons_contributor_threshold_{type}`
Complete Reference: See governance/config/DEFAULT_VALUES_REFERENCE.md for all keys and default values.
Benefits
- YAML as Source of Truth: Human-readable, version-controlled defaults
- Governance Control: Database enables governance-approved changes without YAML edits
- Type Safety: Type-safe accessors prevent configuration errors
- Performance: Caching reduces database queries
- Flexibility: Fallback chain ensures system always has valid configuration
- Audit Trail: Complete history of all configuration changes
Governance Fork System
Overview
The governance fork mechanism enables users to choose between different governance rulesets without affecting Bitcoin consensus. This provides an escape hatch for users who disagree with governance decisions while maintaining Bitcoin protocol integrity.
Fork Types
| Type | Definition | Compatibility | Examples |
|---|---|---|---|
| Soft Fork | Changes without breaking compatibility | Existing users continue, new users choose updated | Adding signature requirements, modifying time locks, updating thresholds |
| Hard Fork | Breaking changes | All users must choose, no backward compatibility | Changing signature schemes, modifying fundamental principles, removing tiers |
Ruleset Export
Export Format
Governance rulesets are exported as versioned, signed packages in YAML format:
ruleset_version: "1.2.0"
export_timestamp: "YYYY-MM-DDTHH:MM:SSZ"
previous_ruleset_hash: "sha256:abc123..."
governance_rules:
action_tiers: { /* tier definitions */ }
repository_layers: { /* layer definitions */ }
maintainers: { /* maintainer registry */ }
emergency_procedures: { /* emergency protocols */ }
cryptographic_proofs:
maintainer_signatures: [ /* signed by maintainers */ ]
ruleset_hash: "sha256:def456..."
merkle_root: "sha256:ghi789..."
compatibility:
min_version: "1.0.0"
max_version: "2.0.0"
breaking_changes: false
Export Process
- Ruleset preparation (compile current governance rules from YAML files)
- Cryptographic signing (maintainers sign the ruleset)
- Hash calculation (generate tamper-evident hash)
- Merkle tree (create verification structure)
- Export generation (package for distribution)
- Publication (make available for download)
Code: export.rs
Versioning System
| Version Component | Meaning | Example |
|---|---|---|
| Major | Breaking changes (hard fork) | 2.0.0 (incompatible with 1.x) |
| Minor | New features (soft fork) | 1.2.0 (compatible with 1.x) |
| Patch | Bug fixes and improvements | 1.1.1 (compatible with 1.1.x) |
Compatibility: Compatible (upgrade without issues), Incompatible (must choose), Deprecated (removed), Supported (receives updates).
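Under this scheme, a client can check compatibility between its ruleset version and a published one by comparing major versions. A minimal sketch under that assumption (the real client may instead apply the min_version/max_version range from the export):

```rust
/// Parse "major.minor.patch" into numeric components.
fn parse_version(v: &str) -> Option<(u32, u32, u32)> {
    let mut parts = v.split('.').map(|p| p.parse::<u32>().ok());
    Some((parts.next()??, parts.next()??, parts.next()??))
}

/// Assumed semantics per the table above: same major version means
/// compatible (soft fork); a major bump means a hard fork.
fn is_compatible(client: &str, ruleset: &str) -> bool {
    match (parse_version(client), parse_version(ruleset)) {
        (Some((cm, _, _)), Some((rm, _, _))) => cm == rm,
        _ => false,
    }
}
```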
Adoption Tracking
Track ruleset adoption through: node count, hash rate, user count, exchange support.
Public Dashboard: Current distribution, adoption trends, geographic distribution, exchange listings.
Fork Decision Process
User Choice
- Download ruleset package
- Verify maintainer signatures
- Validate ruleset integrity (hash)
- Configure client (set ruleset)
- Announce choice (publicly declare)
Client Implementation
- Ruleset loading (load chosen ruleset)
- Signature verification (verify maintainer signatures)
- Rule enforcement (apply governance rules)
- Status reporting (report chosen ruleset)
- Update mechanism (handle ruleset updates)
Code: executor.rs
Fork Resolution
Conflict Resolution
When forks occur:
- User notification (alert users to fork)
- Choice period (30 days to choose ruleset)
- Migration support (tools for ruleset migration)
- Documentation (clear migration guides)
- Support (community support during transition)
Fork Merging
Forks can be merged by: consensus building, gradual migration, feature adoption, clean slate.
Security Considerations
| Aspect | Requirements |
|---|---|
| Ruleset Integrity | Cryptographic signatures, hash verification, Merkle trees, timestamp anchoring |
| Fork Security | Replay protection, version validation, signature verification, threshold enforcement |
Examples
| Scenario | Type | Change | Result |
|---|---|---|---|
| Adding signature requirement | Soft Fork | Require 4-of-5 instead of 3-of-5 | Existing users continue with 3-of-5, new users use 4-of-5 |
| Changing signature scheme | Hard Fork | Switch from Ed25519 to Dilithium | Clean split into two governance models |
Configuration
- governance/config/governance-fork.yml - Fork configuration
- governance/fork-registry.yml - Registered forks
P2P Governance Messages
Overview
Bitcoin Commons nodes relay governance messages through the P2P network, enabling decentralized governance communication without requiring direct connection to the governance infrastructure. Nodes forward governance messages to other peers and optionally to the governance application.
Architecture
Message Flow
Node
│
├─→ P2P Network (Bitcoin Protocol)
│ │
│ ├─→ Node A (relays to peers)
│ ├─→ Node B (relays to peers)
│ └─→ Node C (relays to peers)
│
└─→ Governance Application (blvm-commons)
(if governance relay enabled)
Two-Mode Operation
- Gossip Mode: Messages relayed to governance-enabled peers only
- Relay Mode: Messages forwarded to governance application via VPN/API
Code: mod.rs
Governance Message Types
Code: mod.rs
Gossip Protocol
Peer Selection
Nodes gossip governance messages to:
- Governance-enabled peers only
- Excluding the sender
- Using Bitcoin P2P protocol
Code: mod.rs
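The selection rule above can be sketched as follows, using a simplified Peer type in place of the node's real peer structures:

```rust
/// Simplified stand-in for the node's peer record.
struct Peer {
    id: u64,
    governance_enabled: bool,
}

/// Select gossip targets: governance-enabled peers only, excluding the
/// peer the message came from.
fn gossip_targets(peers: &[Peer], sender: u64) -> Vec<u64> {
    peers
        .iter()
        .filter(|p| p.governance_enabled && p.id != sender)
        .map(|p| p.id)
        .collect()
}
```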
Message Serialization
Messages are serialized using JSON for gossip:
let msg_json = serde_json::to_vec(msg)?;
peer.send_message(msg_json).await?;
Code: mod.rs
Governance Relay
Configuration
[governance]
enabled = true
commons_url = "https://commons.example.com/api"
vpn_enabled = true
Code: mod.rs
Relay Process
- Receive Message: Node receives governance message from peer
- Check Configuration: Verify governance relay enabled
- Forward to Commons: Send message to governance application via API
- Gossip to Peers: Also gossip message to other governance-enabled peers
Code: mod.rs
Message Deduplication
Duplicate Detection
The governance application deduplicates messages:
- Message ID: Unique identifier per message
- Sender Tracking: Tracks message origin
- Timestamp: Prevents replay attacks
Code: message_dedup.rs
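A minimal deduplication sketch using a set of message IDs; the real message_dedup.rs also tracks sender and timestamp:

```rust
use std::collections::HashSet;

/// Minimal duplicate detector keyed on message ID.
struct Dedup {
    seen: HashSet<String>,
}

impl Dedup {
    fn new() -> Self {
        Dedup { seen: HashSet::new() }
    }

    /// Returns true the first time an ID is seen, false for duplicates.
    fn accept(&mut self, message_id: &str) -> bool {
        self.seen.insert(message_id.to_string())
    }
}
```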
P2P Receiver
Message Processing
The governance application receives messages via P2P receiver:
- HTTP Endpoint: Receives forwarded messages
- Validation: Validates message structure
- Storage: Stores messages in database
- Processing: Processes governance actions
Code: p2p_receiver.rs
Network Integration
Protocol Messages
Governance messages are integrated into Bitcoin P2P protocol:
- Message Types: New protocol message types for governance
- Backward Compatible: Non-governance nodes ignore messages
- Service Flags: Nodes advertise governance capability
Code: protocol.rs
Peer Management
Nodes track governance-enabled peers:
- Service Flags: Identify governance-capable peers
- Peer List: Maintain list of governance peers
- Connection Management: Handle peer connections/disconnections
Code: mod.rs
Benefits
- Decentralization: Governance messages flow through P2P network
- Resilience: No single point of failure
- Privacy: Messages relayed without revealing origin
- Scalability: Gossip protocol scales to many nodes
- Backward Compatibility: Non-governance nodes unaffected
Components
The P2P governance message system includes:
- Governance message types (registration, veto, status, fork decision)
- Gossip protocol for peer-to-peer relay
- Governance relay to application
- Message deduplication
- P2P receiver in governance application
- Network protocol integration
Location: blvm-node/src/network/mod.rs, blvm-node/src/network/protocol.rs, blvm-commons/src/governance/p2p_receiver.rs, blvm-commons/src/governance/message_dedup.rs
Privacy-Preserving Voting
Overview
Bitcoin Commons implements privacy-preserving voting through contribution-based voting and zap-to-vote mechanisms. The system uses quadratic voting (square root of contribution amount) to prevent vote buying while allowing contributors to express preferences.
Voting Mechanisms
Contribution-Based Voting
Contributors receive voting weight based on their contributions:
- Zaps: Lightning Network zap contributions (tracked for transparency, don’t affect governance)
Code: contributions.rs
Zap-to-Vote
Zaps to governance events are converted into votes:
- Proposal Zaps: Zaps to governance event IDs
- Vote Types: Support, Veto, Abstain
- Quadratic Weight: Vote weight = sqrt(zap_amount_btc)
- Message Parsing: Vote type extracted from zap message
Code: zap_voting.rs
Vote Weight Calculation
Quadratic Formula
Vote weight uses quadratic formula to prevent vote buying:
let vote_weight = zap_amount_btc.sqrt();
Code: weight_calculator.rs
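The practical effect of the square root: a 100x larger zap buys only a 10x larger vote weight. A small illustration:

```rust
/// Quadratic vote weight as described above: weight = sqrt(amount in BTC).
fn vote_weight(zap_amount_btc: f64) -> f64 {
    zap_amount_btc.sqrt()
}
```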
Participation Weight
Base participation weight from contributions:
- 90-Day Window: Contributions within 90 days
- Contribution Types: Zaps (tracked for transparency only)
- Cooling-Off Period: New contributions have reduced weight
Code: weight_calculator.rs
Combined Weight
Proposal vote weight uses higher of zap or participation:
let base_weight = zap_weight.max(participation_weight * 0.1);
let final_weight = apply_weight_cap(base_weight, total_system_weight);
Code: weight_calculator.rs
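A sketch of this combination rule. The 5% cap inside apply_weight_cap below is an assumed illustrative value, not the documented cap; the real cap logic lives in weight_calculator.rs:

```rust
/// Illustrative cap: no single voter exceeds 5% of total system weight
/// (assumed value for this sketch).
fn apply_weight_cap(weight: f64, total_system_weight: f64) -> f64 {
    weight.min(total_system_weight * 0.05)
}

/// Combined proposal weight: the higher of zap weight or 10% of
/// participation weight, then capped.
fn combined_weight(zap_weight: f64, participation_weight: f64, total_system_weight: f64) -> f64 {
    let base = zap_weight.max(participation_weight * 0.1);
    apply_weight_cap(base, total_system_weight)
}
```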
Vote Types
Support
Default vote type for proposal zaps:
- Default: If no message, vote is support
- Message Keywords: “support”, “yes”, “approve”
- Weight: Calculated from zap amount
Veto
Opposition vote:
- Message Keywords: “veto”, “oppose”, “against”
- Threshold: 40% of zap votes blocks proposal
- Independent: Zap veto independent of participation votes
Code: vote_aggregator.rs
Abstain
Neutral vote:
- Message Keywords: “abstain”, “neutral”
- Weight: Counted but doesn’t affect threshold
- Purpose: Express neutrality without blocking
Code: zap_voting.rs
Vote Processing
Zap Vote Processing
- Receive Zap: Zap contribution received via Nostr
- Check Proposal Zap: Verify zap is for governance event
- Calculate Weight: Weight = sqrt(amount_btc)
- Parse Vote Type: Extract from zap message
- Check Duplicate: Prevent duplicate votes
- Record Vote: Store in database
Code: zap_voting.rs
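Step 4 (vote-type parsing) can be sketched with the keywords listed above; a message with no recognized keyword defaults to support:

```rust
#[derive(Debug, PartialEq)]
enum VoteType {
    Support,
    Veto,
    Abstain,
}

/// Extract a vote type from a zap message using the documented keywords.
/// Defaults to Support, which also covers "support", "yes", "approve".
fn parse_vote_type(message: &str) -> VoteType {
    let m = message.to_lowercase();
    let has = |words: &[&str]| words.iter().any(|w| m.contains(*w));
    if has(&["veto", "oppose", "against"]) {
        VoteType::Veto
    } else if has(&["abstain", "neutral"]) {
        VoteType::Abstain
    } else {
        VoteType::Support
    }
}
```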
Vote Aggregation
Votes are aggregated for proposals:
- Get Zap Votes: All zap votes for proposal
- Get Participation Votes: Participation-based votes
- Combine Totals: Sum support, veto, abstain weights
- Check Threshold: Verify threshold met
- Check Veto: Verify no veto blocking
Code: vote_aggregator.rs
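The aggregation steps above can be sketched as follows. Treating the approval threshold as a fraction of total vote weight is an assumption for illustration; the 40% zap-veto rule is the one documented below:

```rust
/// Aggregated vote weights for one proposal.
struct Tally {
    support: f64,
    veto: f64,
    abstain: f64,
}

impl Tally {
    fn total(&self) -> f64 {
        self.support + self.veto + self.abstain
    }

    /// Veto blocks when veto votes reach 40% of all vote weight.
    fn veto_blocks(&self) -> bool {
        self.total() > 0.0 && self.veto / self.total() >= 0.40
    }

    /// Approved when the threshold is met and no veto blocks.
    fn approved(&self, threshold_fraction: f64) -> bool {
        !self.veto_blocks()
            && self.total() > 0.0
            && self.support / self.total() >= threshold_fraction
    }
}
```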
Privacy Features
Pseudonymous Voting
Votes are linked to Nostr pubkeys, not real identities:
- Pubkey-Based: Votes tracked by sender pubkey
- No KYC: No identity verification required
- Privacy: Real identity not revealed
Quadratic Voting
Quadratic formula prevents vote buying:
- Square Root: Vote weight = sqrt(contribution)
- Diminishing Returns: Large contributions have proportionally less weight
- Fairness: Prevents wealthy contributors from dominating
Code: weight_calculator.rs
Cooling-Off Period
New contributions have reduced weight:
- Age Check: Contributions must be old enough
- Reduced Weight: New contributions use participation weight only
- Prevents Gaming: Prevents last-minute contribution manipulation
Code: weight_calculator.rs
Vote Aggregation
Vote Totals
pub struct VoteTotals {
    pub support_weight: f64,
    pub veto_weight: f64,
    pub abstain_weight: f64,
    pub total_weight: f64,
    pub support_count: u32,
    pub veto_count: u32,
    pub abstain_count: u32,
    pub total_count: u32,
}
Code: zap_voting.rs
Proposal Vote Result
pub struct ProposalVoteResult {
    pub pr_id: i32,
    pub tier: u8,
    pub threshold: u32,
    pub total_votes: f64,
    pub support_votes: f64,
    pub veto_votes: f64,
    pub abstain_votes: f64,
    pub zap_vote_count: u32,
    pub participation_vote_count: u32,
    pub threshold_met: bool,
    pub veto_blocks: bool,
}
Code: vote_aggregator.rs
Veto Mechanisms
Zap Veto
Zap votes can veto proposals:
- Threshold: 40% of zap votes must be veto
- Independent: Independent of participation votes
- Blocking: Veto blocks proposal approval
Code: vote_aggregator.rs
Database Schema
Proposal Zap Votes Table
CREATE TABLE proposal_zap_votes (
id INTEGER PRIMARY KEY,
pr_id INTEGER NOT NULL,
governance_event_id TEXT NOT NULL,
sender_pubkey TEXT NOT NULL,
amount_msat INTEGER NOT NULL,
amount_btc REAL NOT NULL,
vote_weight REAL NOT NULL, -- sqrt(amount_btc)
vote_type TEXT NOT NULL, -- 'support', 'veto', 'abstain'
timestamp DATETIME NOT NULL,
verified BOOLEAN DEFAULT FALSE
);
Code: 005_governance_contributions.sql
Integration with Nostr
Zap Tracking
Zaps are tracked via Nostr integration:
- Zap Tracker: Monitors Nostr zaps
- Event Filtering: Filters zaps to governance events
- Vote Conversion: Converts zaps to votes
Code: zap_tracker.rs
Governance Events
Governance events on Nostr:
- Event IDs: Unique identifiers for proposals
- Zap Targets: Zaps to event IDs become votes
- Real-Time: Votes processed in real-time
Benefits
- Privacy: Pseudonymous voting via Nostr pubkeys
- Fairness: Quadratic voting prevents vote buying
- Accessibility: Anyone can vote via Lightning zaps
- Transparency: All votes recorded on-chain/off-chain
- Resilience: No single point of failure
Components
The privacy-preserving voting system includes:
- Zap-to-vote processor
- Vote weight calculator (quadratic formula)
- Vote aggregator
- Participation weight calculation
- Cooling-off period enforcement
- Zap tracking (for transparency, governance is maintainer-only multisig)
Location: blvm-commons/src/nostr/zap_voting.rs, blvm-commons/src/governance/weight_calculator.rs, blvm-commons/src/governance/vote_aggregator.rs
OpenTimestamps Integration
Overview
Bitcoin Commons uses OpenTimestamps (OTS) to anchor governance registries to the Bitcoin blockchain, providing cryptographic proof that governance state existed at specific points in time. This creates immutable historical records that cannot be retroactively modified.
Purpose
OpenTimestamps integration serves as a temporal proof mechanism by:
- Anchoring governance registries to Bitcoin blockchain
- Providing cryptographic proof of governance state
- Creating immutable historical records
- Enabling verification of governance timeline
Architecture
Monthly Registry Anchoring
Anchoring Schedule:
- Frequency: Monthly on the 1st day of each month
- Content: Complete governance registry snapshot
- Proof: OpenTimestamps proof anchored to Bitcoin
- Storage: Local proof files and public registry
Code: anchor.rs
Registry Structure
{
"version": "YYYY-MM",
"timestamp": "YYYY-MM-DDTHH:MM:SSZ",
"previous_registry_hash": "sha256:abc123...",
"maintainers": [...],
"authorized_servers": [...],
"audit_logs": {...},
"multisig_config": {...}
}
Code: anchor.rs
OTS Client
Client Implementation
The OtsClient handles communication with OpenTimestamps calendar servers:
- Calendar Servers: Multiple calendar servers for redundancy
- Hash Submission: Submits SHA256 hashes for timestamping
- Proof Generation: Receives OpenTimestamps proofs
- Verification: Verifies proofs against Bitcoin blockchain
Code: client.rs
Calendar Servers
Default calendar servers:
- alice.btc.calendar.opentimestamps.org
- bob.btc.calendar.opentimestamps.org
Code: client.rs
Proof Generation
OTS Proof Format
- Format: Binary OpenTimestamps proof
- Extension: .json.ots (e.g., YYYY-MM.json.ots)
- Content: Cryptographic proof of registry existence
- Verification: Can be verified against Bitcoin blockchain
Proof Process
- Calculate Hash: SHA256 hash of registry JSON
- Submit to Calendar: POST hash to OpenTimestamps calendar
- Receive Proof: Calendar returns OTS proof
- Store Proof: Save proof file locally
- Publish: Make proof publicly available
Code: client.rs
Registry Anchorer
Monthly Anchoring
The RegistryAnchorer creates monthly governance registries:
- Registry Generation: Creates complete registry snapshot
- Hash Chain: Links to previous registry via hash
- OTS Stamping: Submits registry for timestamping
- Proof Storage: Stores proofs for verification
Code: anchor.rs
Registry Content
Monthly registries include:
- Maintainer information
- Authorized servers
- Audit log summaries
- Multisig configuration
- Previous registry hash (hash chain)
Code: anchor.rs
Verification
Proof Verification
OTS proofs can be verified:
ots verify YYYY-MM.json.ots
Code: verify.rs
Verification Process
- Load Proof: Read OTS proof file
- Verify Structure: Validate proof format
- Check Calendar: Verify calendar server signatures
- Verify Bitcoin: Check Bitcoin blockchain anchor
- Verify Hash: Confirm hash matches registry
Integration with Governance
Audit Trail Anchoring
Audit log entries are anchored via monthly registries:
- Monthly Snapshots: Complete audit log state
- Hash Chain: Links between monthly registries
- Immutable History: Cannot be retroactively modified
- Public Verification: Anyone can verify proofs
Code: entry.rs
Governance State Proof
Monthly registries prove governance state:
- Maintainer List: Who had authority at that time
- Server Authorization: Which servers were authorized
- Configuration: Governance configuration snapshot
- Timeline: Historical record of changes
Configuration
[ots]
enabled = true
aggregator_url = "https://alice.btc.calendar.opentimestamps.org"
monthly_anchor_day = 1 # Anchor on 1st of each month
registry_path = "./registries"
proofs_path = "./proofs"
Code: config.rs
Benefits
- Immutability: Proofs anchored to Bitcoin blockchain
- Verifiability: Anyone can verify proofs independently
- Historical Record: Complete timeline of governance state
- Tamper-Evident: Any modification breaks hash chain
- Decentralized: No single point of failure
Components
The OpenTimestamps integration includes:
- OTS client for calendar communication
- Registry anchorer for monthly anchoring
- Proof verification
- Hash chain maintenance
- Proof storage and publishing
Location: blvm-commons/src/ots/, blvm-commons/src/audit/
Nostr Integration
Overview
Bitcoin Commons uses Nostr (Notes and Other Stuff Transmitted by Relays) for real-time transparency and decentralized governance communication. The system includes a multi-bot architecture for different types of announcements and status updates.
Purpose
Nostr integration serves as a transparency mechanism by:
- Publishing real-time governance status updates
- Providing public verification of server operations
- Enabling decentralized monitoring of governance events
- Creating an immutable public record of governance actions
Multi-Bot System
Bot Types
The system uses multiple bot identities for different purposes:
- gov: Governance announcements and status updates
- dev: Development updates and technical information
- research: Educational content (optional)
- network: Network metrics and statistics (optional)
Code: bot_manager.rs
Bot Configuration
[nostr.bots.gov]
nsec_path = "env:GOV_BOT_NSEC" # or file path
npub = "npub1..."
lightning_address = "gov@bitcoincommons.org"
[nostr.bots.gov.profile]
name = "@BTCCommons_Gov"
about = "Bitcoin Commons Governance Bot"
picture = "https://bitcoincommons.org/logo.png"
Code: config.rs
Nostr Client
Client Implementation
The NostrClient manages connections to multiple Nostr relays:
- Multi-Relay Support: Connects to multiple relays for redundancy
- Event Publishing: Publishes events to all connected relays
- Error Handling: Handles relay failures gracefully
- Retry Logic: Automatic retry for failed publishes
Code: client.rs
Relay Management
let client = NostrClient::new(nsec, relay_urls).await?;
client.publish_event(event).await?;
Code: client.rs
Event Types
Governance Status Events (Kind 30078)
Published hourly by each authorized server:
- Server health status
- Binary and config hashes
- Audit log status
- Tagged with d:governance-status
Code: events.rs
Server Health Events (Kind 30079)
Published when server status changes:
- Uptime metrics
- Last merge information
- Operational status
- Tagged with d:server-health
Code: events.rs
Audit Log Head Events (Kind 30080)
Published when audit log head changes:
- Current audit log head hash
- Entry count
- Tagged with d:audit-head
Code: events.rs
Governance Action Events
Published for governance actions:
- PR merges
- Review period notifications
- Keyholder announcements
Code: governance_publisher.rs
Governance Publisher
Status Publishing
The StatusPublisher publishes governance status:
- Hourly Updates: Regular status updates
- Event Signing: Events signed with server key
- Multi-Relay: Published to multiple relays
- Error Recovery: Handles relay failures
Code: publisher.rs
Action Publishing
The GovernanceActionPublisher publishes governance actions:
- PR Events: Merge and review events
- Keyholder Events: Signature announcements
- Fork Events: Governance fork decisions
Code: governance_publisher.rs
Zap Tracking
Zap Contributions
Zaps are tracked for contribution-based voting:
- Zap Tracker: Monitors Nostr zaps
- Contribution Recording: Records zap contributions
- Vote Conversion: Converts zaps to votes
- Real-Time Processing: Processes zaps as received
Code: zap_tracker.rs
Zap-to-Vote
Zaps to governance events become votes:
- Proposal Zaps: Zaps to governance event IDs
- Vote Weight: Calculated using quadratic formula
- Vote Type: Extracted from zap message
- Database Storage: Stored in proposal_zap_votes table
Code: zap_voting.rs
Configuration
[nostr]
enabled = true
relays = [
"wss://relay.bitcoincommons.org",
"wss://nostr.bitcoincommons.org"
]
publish_interval_secs = 3600 # 1 hour
governance_config = "commons_mainnet"
[nostr.bots.gov]
nsec_path = "env:GOV_BOT_NSEC"
npub = "npub1..."
lightning_address = "gov@bitcoincommons.org"
Code: config.rs
Real-Time Transparency
Public Monitoring
Anyone can monitor governance via Nostr:
- Event Filtering: Filter by event kind and tags
- Relay Queries: Query any Nostr relay
- Real-Time Updates: Receive updates as they happen
- Verification: Verify event signatures
Event Verification
All events are signed:
- Server Keys: Each server has Nostr keypair
- Event Signing: Events signed with server key
- Public Verification: Anyone can verify signatures
- Tamper-Evident: Cannot modify events without breaking signature
Benefits
- Decentralization: No single point of failure
- Censorship Resistance: Multiple relays, no central authority
- Real-Time: Immediate status updates
- Public Verification: Anyone can verify events
- Transparency: Complete public record of governance actions
Components
The Nostr integration includes:
- Multi-bot manager
- Nostr client with multi-relay support
- Event types (status, health, audit, actions)
- Governance publisher
- Status publisher
- Zap tracker and voting processor
Location: blvm-commons/src/nostr/
Multisig Configuration
Bitcoin Commons uses multisig thresholds for governance decisions, with different thresholds based on the layer and tier of the change. See Layer-Tier Model for details.
Layer-Based Thresholds
Constitutional Layers (Layer 1-2)
- Orange Paper (Layer 1): 6-of-7 maintainers, 180 days (365 for consensus changes)
- blvm-consensus (Layer 2): 6-of-7 maintainers, 180 days (365 for consensus changes)
Implementation Layer (Layer 3)
- blvm-protocol: 4-of-5 maintainers, 90 days
Application Layer (Layer 4)
- blvm-node: 3-of-5 maintainers, 60 days
Extension Layer (Layer 5)
- blvm-sdk: 2-of-3 maintainers, 14 days
- governance: 2-of-3 maintainers, 14 days
- blvm-commons: 2-of-3 maintainers, 14 days
Tier-Based Thresholds
Tier 1: Routine Maintenance
- Signatures: 3-of-5 maintainers
- Review Period: 7 days
- Scope: Bug fixes, documentation, performance optimizations
Tier 2: Feature Changes
- Signatures: 4-of-5 maintainers
- Review Period: 30 days
- Scope: New RPC methods, P2P changes, wallet features
Tier 3: Consensus-Adjacent
- Signatures: 5-of-5 maintainers
- Review Period: 90 days
- Scope: Changes affecting consensus validation code
Tier 4: Emergency Actions
- Signatures: 4-of-5 maintainers
- Review Period: 0 days (immediate)
- Scope: Critical security patches, network-threatening bugs
Tier 5: Governance Changes
- Signatures: 5-of-5 maintainers (special process)
- Review Period: 180 days
- Scope: Changes to governance rules themselves
Combined Model
When both layer and tier apply, the system uses “most restrictive wins” rule. See Layer-Tier Model for the decision matrix.
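A sketch of the "most restrictive wins" rule: take the higher signature requirement and the longer review period of the applicable layer and tier. The struct and example values are illustrative; see the Layer-Tier Model for the full matrix:

```rust
#[derive(Clone, Copy)]
struct Threshold {
    signatures_required: u32,
    review_period_days: u32,
}

/// Combine a layer threshold and a tier threshold, keeping whichever
/// component is stricter.
fn most_restrictive(layer: Threshold, tier: Threshold) -> Threshold {
    Threshold {
        signatures_required: layer.signatures_required.max(tier.signatures_required),
        review_period_days: layer.review_period_days.max(tier.review_period_days),
    }
}
```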
Multisig Threshold Sensitivity
Figure: Multisig threshold sensitivity analysis showing how different threshold configurations affect security and decision-making speed.
Governance Signature Thresholds
Figure: Signature thresholds by layer showing the graduated security model.
For configuration details, see the governance configuration files.
See Also
- Layer-Tier Model - How layers and tiers combine
- PR Process - How thresholds apply to PRs
- Governance Model - Governance system
- Keyholder Procedures - Maintainer signing process
- Governance Overview - Governance system introduction
Keyholder Procedures
Bitcoin Commons uses cryptographic keyholders (maintainers) to sign governance decisions. This section describes procedures for keyholders.
Maintainer Responsibilities
Maintainers are responsible for:
- Reviewing Changes: Understanding the impact of proposed changes
- Signing Decisions: Cryptographically signing approved changes
- Maintaining Keys: Securely storing and managing cryptographic keys
- Following Procedures: Adhering to governance processes and review periods
Signing Process
- Review PR: Understand the change and its impact
- Generate Signature: Use blvm-sign from blvm-sdk
- Post Signature: Comment /governance-sign <signature> on PR
- Governance App Verifies: Cryptographically verifies signature
- Status Check Updates: Shows signature count progress
Key Management
Key Generation
blvm-keygen --output maintainer.key --format pem
Key Storage
- Development: Test keys can be stored locally
- Production: Keys should be stored in HSMs (Hardware Security Modules)
- Backup: Secure backup procedures required
Key Rotation
Keys can be rotated through the governance process. See MAINTAINER_GUIDE.md for detailed procedures.
Emergency Keyholders
Emergency keyholders (5-of-7) can activate emergency mode for critical situations:
- Activation: 5-of-7 emergency keyholders required
- Duration: Maximum 90 days
- Review Periods: Reduced to 30 days during emergency
- Signature Thresholds: Unchanged
Release Pipeline Gate Strength
Figure: Gate strength across the release pipeline. Each gate requires specific signatures and review periods based on the change tier.
For detailed maintainer procedures, see MAINTAINER_GUIDE.md.
See Also
- PR Process - How maintainers sign PRs
- Multisig Configuration - Signature threshold requirements
- Layer-Tier Model - Governance tier system
- Governance Model - Governance system
- Governance Overview - Governance system introduction
Audit Trails
Bitcoin Commons maintains immutable audit trails of all governance decisions using cryptographic hash chains and Bitcoin blockchain anchoring.
Audit Log System
The governance system maintains tamper-evident audit logs that record:
- All Governance Decisions: Every PR merge, signature, and veto
- Maintainer Actions: Key generation, rotation, and usage
- Emergency Activations: Emergency mode activations and deactivations
Cryptographic Properties
- Hash Chains: Each log entry includes hash of previous entry
- Bitcoin Anchoring: Monthly registry anchoring via OpenTimestamps
- Immutable: Logs cannot be modified without detection
- Verifiable: Anyone can verify log integrity
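The hash-chain property can be sketched as follows; std's DefaultHasher stands in for the SHA-256 used by the real audit log, so the mechanics (not the hash function) are what this illustrates:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Each entry's hash commits to the previous entry's hash, so altering any
/// entry changes every hash after it. DefaultHasher is a stand-in for the
/// SHA-256 used by the real audit log.
fn entry_hash(prev_hash: u64, payload: &str) -> u64 {
    let mut h = DefaultHasher::new();
    prev_hash.hash(&mut h);
    payload.hash(&mut h);
    h.finish()
}

/// Recompute the chain from genesis and compare against the recorded head.
fn verify_chain(entries: &[&str], recorded_head: u64) -> bool {
    let head = entries.iter().fold(0u64, |prev, e| entry_hash(prev, e));
    head == recorded_head
}
```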
Audit Log Verification
Audit logs can be verified using:
blvm-commons verify-audit-log --log-path audit.log
Three-Layer Verification Architecture
The governance system implements three complementary verification layers:
Figure: Three-layer verification: GitHub merge control, real-time Nostr transparency, and OpenTimestamps historical proof.
Audit Trail Completeness
Figure: Audit-trail completeness across governance layers.
OpenTimestamps Integration
The system uses OpenTimestamps to anchor audit logs to the Bitcoin blockchain:
- Monthly Anchoring: Registry state anchored monthly
- Immutable Proof: Proof of existence at specific time
- Public Verification: Anyone can verify timestamps
For detailed audit log documentation, see the blvm-commons repository documentation.
See Also
- Governance Overview - Governance system introduction
- Governance Model - Governance architecture
- Keyholder Procedures - Maintainer responsibilities
- OpenTimestamps Integration - OTS anchoring details
- Multisig Configuration - Signature requirements
Orange Paper
The Orange Paper provides the mathematical specification of Bitcoin consensus.
{{#include ../../blvm-spec/THE_ORANGE_PAPER.md}}
Protocol Specifications
Bitcoin Improvement Proposals (BIPs) implemented in BLVM. Consensus-critical BIPs are formally verified. See Formal Verification for verification details.
Consensus-Critical BIPs
Script Opcodes:
- BIP65 (CLTV, opcode 0xb1): Locktime validation (blvm-consensus/src/script.rs)
- BIP112 (CSV, opcode 0xb2): Relative locktime via sequence numbers (blvm-consensus/src/script.rs)
- BIP68: Relative locktime sequence encoding (used by BIP112)
Time Validation:
- BIP113: Median time-past for CLTV timestamp validation (blvm-consensus/src/block.rs)
Transaction Features:
- BIP125 (RBF): Replace-by-fee with all 5 requirements (blvm-consensus/src/mempool.rs), with tests
- BIP141/143 (SegWit): Witness validation, weight calculation, P2WPKH/P2WSH (blvm-consensus/src/segwit.rs)
- BIP340/341/342 (Taproot): P2TR validation framework (blvm-consensus/src/taproot.rs)
Network Protocol BIPs
- BIP152: Compact block relay - short transaction IDs, block reconstruction (see Compact Blocks)
- BIP157/158: Client-side block filtering - GCS filter construction, integrated with network layer, works over all transports (see BIP157/158)
- BIP331: Package relay - efficient transaction relay (see Package Relay)
Application-Level BIPs
- BIP21: Bitcoin URI scheme (blvm-node/src/bip21.rs)
- BIP32/39/44: HD wallets, mnemonic phrases, standard derivation paths (blvm-node/src/wallet/)
- BIP70: Payment protocol (deprecated, legacy compatibility only, blvm-node/src/bip70.rs)
- BIP174: PSBT format for hardware wallet support (blvm-node/src/psbt.rs)
- BIP350/351: Bech32m for Taproot (P2TR), Bech32 for SegWit (blvm-node/src/bech32m.rs)
Experimental Features
Available in experimental build variant: UTXO commitments, BIP119 CTV (CheckTemplateVerify), Dandelion++ privacy relay, Stratum V2 mining protocol.
Configuration Reference
Reference for BLVM node configuration options. Configuration can be provided via TOML file, JSON file, command-line arguments, or environment variables. See Node Configuration for usage examples.
Configuration File Format
Configuration files support both TOML (.toml) and JSON (.json) formats. TOML is recommended for readability.
Example Configuration File
# blvm.toml
listen_addr = "127.0.0.1:8333"
transport_preference = "tcp_only"
max_peers = 100
protocol_version = "BitcoinV1"
[storage]
data_dir = "/var/lib/blvm"
database_backend = "auto"
[storage.cache]
block_cache_mb = 100
utxo_cache_mb = 50
header_cache_mb = 10
[storage.pruning]
mode = { type = "normal", keep_from_height = 0, min_recent_blocks = 288 }
auto_prune = true
auto_prune_interval = 144
[modules]
enabled = true
modules_dir = "modules"
data_dir = "data/modules"
[rpc_auth]
required = false
rate_limit_burst = 100
rate_limit_rate = 10
Core Configuration
Network Settings
listen_addr
- Type: SocketAddr (e.g., "127.0.0.1:8333")
- Default: "127.0.0.1:8333"
- Description: Network address to listen on for incoming P2P connections.
- Example: listen_addr = "0.0.0.0:8333" (listen on all interfaces)
transport_preference
- Type: string (enum)
- Default: "tcp_only"
- Options:
  - "tcp_only" - Use only TCP transport (Bitcoin P2P compatible, default)
  - "quinn_only" - Use only Quinn/QUIC transport (requires quinn feature)
  - "iroh_only" - Use only Iroh transport (requires iroh feature, experimental)
  - "hybrid" - Use both TCP and Iroh simultaneously (requires iroh feature)
  - "all" - Use all available transports (requires both quinn and iroh features)
- Description: Transport protocol selection. See Transport Abstraction for details.
max_peers
- Type: integer
- Default: 100
- Description: Maximum number of simultaneous peer connections.
protocol_version
- Type: string
- Default: "BitcoinV1"
- Options: "BitcoinV1" (mainnet), "Testnet3" (testnet), "Regtest" (regtest)
- Description: Bitcoin protocol variant. See Network Variants.
persistent_peers
- Type: array of SocketAddr
- Default: []
- Description: List of peer addresses to connect to on startup. Format: ["192.168.1.1:8333", "example.com:8333"]
- Example: persistent_peers = ["192.168.1.1:8333", "10.0.0.1:8333"]
enable_self_advertisement
- Type: boolean
- Default: true
- Description: Whether to advertise own address to peers. Set to false for privacy.
Storage Configuration
storage.data_dir
- Type: string (path)
- Default: "data"
- Description: Directory for storing blockchain data (blocks, UTXO set, indexes).
storage.database_backend
- Type: string (enum)
- Default: "auto"
- Options:
  - "auto" - Auto-select based on availability (prefers redb, falls back to sled)
  - "redb" - Use redb database (default, recommended, production-ready)
  - "sled" - Use sled database (beta, fallback option)
- Description: Database backend selection. System automatically falls back if preferred backend fails.
Storage Cache
storage.cache.block_cache_mb
- Type: integer (megabytes)
- Default: 100
- Description: Size of the block cache in megabytes. Caches recently accessed blocks.
storage.cache.utxo_cache_mb
- Type: integer (megabytes)
- Default: 50
- Description: Size of the UTXO cache in megabytes. Caches frequently accessed UTXOs.
storage.cache.header_cache_mb
- Type: integer (megabytes)
- Default: 10
- Description: Size of the header cache in megabytes. Caches block headers.
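Taken together, the storage options above might be written as follows (a sketch using the documented defaults; the [storage] and [storage.cache] section names are inferred from the dotted key paths):

```toml
[storage]
data_dir = "data"
database_backend = "redb"

[storage.cache]
block_cache_mb = 100
utxo_cache_mb = 50
header_cache_mb = 10
```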
Pruning Configuration
storage.pruning.mode
- Type: object (enum with variants)
- Default: Aggressive mode (if UTXO commitments enabled), otherwise Normal mode
- Description: Pruning mode configuration. See Pruning Modes below.
storage.pruning.auto_prune
- Type: boolean
- Default: true (if mode is Aggressive), false otherwise
- Description: Automatically prune old blocks periodically as the chain grows.
storage.pruning.auto_prune_interval
- Type: integer (blocks)
- Default: 144 (~1 day at 10 min/block)
- Description: Prune every N blocks when auto_prune is enabled.
storage.pruning.min_blocks_to_keep
- Type: integer (blocks)
- Default: 144 (~1 day at 10 min/block)
- Description: Minimum number of blocks to keep as a safety margin, even with aggressive pruning.
storage.pruning.prune_on_startup
- Type: boolean
- Default: false
- Description: Prune old blocks when the node starts (if they exceed configured limits).
storage.pruning.incremental_prune_during_ibd
- Type: boolean
- Default: true (if Aggressive mode)
- Description: Prune old blocks incrementally during initial block download (IBD), keeping only a sliding window. Requires UTXO commitments.
storage.pruning.prune_window_size
- Type: integer (blocks)
- Default: 144 (~1 day)
- Description: Number of recent blocks to keep during incremental pruning (sliding window).
storage.pruning.min_blocks_for_incremental_prune
- Type: integer (blocks)
- Default: 288 (~2 days)
- Description: Minimum blocks before starting incremental pruning during IBD.
Pruning Modes
Disabled Mode
[storage.pruning]
mode = { type = "disabled" }
Keep all blocks. No pruning performed.
Normal Mode
[storage.pruning]
mode = { type = "normal", keep_from_height = 0, min_recent_blocks = 288 }
- keep_from_height: Keep blocks from this height onwards (default: 0)
- min_recent_blocks: Keep at least this many recent blocks (default: 288 = ~2 days)
Aggressive Mode
[storage.pruning]
mode = { type = "aggressive", keep_from_height = 0, keep_commitments = true, keep_filtered_blocks = false, min_blocks = 144 }
Requires: utxo-commitments feature enabled.
- keep_from_height: Keep blocks from this height onwards (default: 0)
- keep_commitments: Keep UTXO commitments for pruned blocks (default: true)
- keep_filtered_blocks: Keep spam-filtered blocks for the pruned range (default: false)
- min_blocks: Minimum blocks to keep as a safety margin (default: 144 = ~1 day)
Custom Mode
[storage.pruning.mode]
type = "custom"
keep_headers = true          # Always required for PoW verification
keep_bodies_from_height = 0
keep_commitments = false
keep_filters = false
keep_filtered_blocks = false
keep_witnesses = false
keep_tx_index = false
Fine-grained control over what data to keep:
- keep_headers: Keep block headers (always required, default: true)
- keep_bodies_from_height: Keep block bodies from this height onwards
- keep_commitments: Keep UTXO commitments (if feature enabled)
- keep_filters: Keep BIP158 filters (if feature enabled)
- keep_filtered_blocks: Keep spam-filtered blocks
- keep_witnesses: Keep witness data (for SegWit verification)
- keep_tx_index: Keep the transaction index
UTXO Commitments Pruning (Experimental)
Requires: utxo-commitments feature enabled.
[storage.pruning.utxo_commitments]
keep_commitments = true
keep_filtered_blocks = false
generate_before_prune = true
max_commitment_age_days = 0 # 0 = keep forever
BIP158 Filter Pruning (Experimental)
Requires: bip158 feature enabled.
[storage.pruning.bip158_filters]
keep_filters = true
keep_filter_headers = true # Always required for verification
max_filter_age_days = 0 # 0 = keep forever
Module System Configuration
modules.enabled
- Type: boolean
- Default: true
- Description: Enable the module system. Set to false to disable all modules.
modules.modules_dir
- Type: string (path)
- Default: "modules"
- Description: Directory containing module binaries and manifests.
modules.data_dir
- Type: string (path)
- Default: "data/modules"
- Description: Directory for module data (state, configs, logs).
modules.socket_dir
- Type: string (path)
- Default: "data/modules/sockets"
- Description: Directory for IPC sockets used for module communication.
modules.enabled_modules
- Type: array of string
- Default: [] (empty = auto-discover all modules)
- Description: List of module names to enable. An empty list enables all discovered modules.
- Example: enabled_modules = ["lightning-module", "mining-module"]
modules.module_configs
- Type: object (nested key-value pairs)
- Default: {}
- Description: Module-specific configuration overrides.
- Example:
[modules.module_configs.lightning-module]
port = "9735"
network = "mainnet"
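A combined module-system configuration might look like this sketch (the [modules] section header is an assumption inferred from the modules.* key paths; values are the documented defaults plus an example override):

```toml
[modules]
enabled = true
modules_dir = "modules"
data_dir = "data/modules"
socket_dir = "data/modules/sockets"
enabled_modules = ["lightning-module", "mining-module"]

[modules.module_configs.lightning-module]
port = "9735"
network = "mainnet"
```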
Module Resource Limits
[module_resource_limits]
default_max_cpu_percent = 50 # CPU limit (0-100%)
default_max_memory_bytes = 536870912 # Memory limit (512 MB)
default_max_file_descriptors = 256 # File descriptor limit
default_max_child_processes = 10 # Child process limit
module_startup_wait_millis = 100 # Startup wait time
module_socket_timeout_seconds = 5 # Socket timeout
module_socket_check_interval_millis = 100
module_socket_max_attempts = 50
RPC Configuration
rpc_auth.required
- Type: boolean
- Default: false
- Description: Require authentication for RPC requests. Set to true for production.
rpc_auth.tokens
- Type: array of string
- Default: []
- Description: Valid authentication tokens for RPC access.
- Example: tokens = ["token1", "token2"]
rpc_auth.certificates
- Type: array of string
- Default: []
- Description: Valid certificate fingerprints for certificate-based authentication.
rpc_auth.rate_limit_burst
- Type: integer
- Default: 100
- Description: RPC rate limit burst size (token bucket).
rpc_auth.rate_limit_rate
- Type: integer
- Default: 10
- Description: RPC rate limit (requests per second).
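A production-leaning [rpc_auth] section based on the options above (the token value is a placeholder, not a real credential):

```toml
[rpc_auth]
required = true                               # require authentication in production
tokens = ["replace-with-a-long-random-token"] # placeholder; generate your own
rate_limit_burst = 100
rate_limit_rate = 10
```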
Network Configuration
Network Timing
[network_timing]
target_peer_count = 8 # Target number of peers (Bitcoin Core uses 8-125)
peer_connection_delay_seconds = 2 # Wait before connecting to database peers
addr_relay_min_interval_seconds = 8640 # Min interval between addr broadcasts (2.4 hours)
max_addresses_per_addr_message = 1000 # Max addresses per addr message
max_addresses_from_dns = 100 # Max addresses from DNS seeds
Request Timeouts
[request_timeouts]
async_request_timeout_seconds = 300 # Timeout for async requests (getheaders, getdata)
utxo_commitment_request_timeout_seconds = 30
request_cleanup_interval_seconds = 60 # Cleanup interval for expired requests
pending_request_max_age_seconds = 300 # Max age before cleanup
DoS Protection
[dos_protection]
max_connections_per_window = 10 # Max connections per IP per window
window_seconds = 60 # Time window for rate limiting
max_message_queue_size = 10000 # Max message queue size
max_active_connections = 200 # Max active connections
auto_ban_threshold = 3 # Violations before auto-ban
ban_duration_seconds = 3600 # Ban duration (1 hour)
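The auto-ban rule above can be sketched as follows: each violation by a peer is counted, and once the count reaches auto_ban_threshold the peer is banned. This is an illustrative sketch, not the actual BLVM implementation (the type and method names here are hypothetical):

```rust
use std::collections::HashMap;

// Illustrative sketch of the auto_ban_threshold rule (not BLVM's real code).
struct DosTracker {
    threshold: u32,
    violations: HashMap<String, u32>,
}

impl DosTracker {
    fn new(threshold: u32) -> Self {
        Self { threshold, violations: HashMap::new() }
    }

    // Record a violation; returns true when the peer should now be banned.
    fn record_violation(&mut self, peer: &str) -> bool {
        let count = self.violations.entry(peer.to_string()).or_insert(0);
        *count += 1;
        *count >= self.threshold
    }
}

fn main() {
    let mut tracker = DosTracker::new(3); // auto_ban_threshold = 3
    assert!(!tracker.record_violation("203.0.113.7:8333"));
    assert!(!tracker.record_violation("203.0.113.7:8333"));
    // Third violation triggers the ban (for ban_duration_seconds).
    assert!(tracker.record_violation("203.0.113.7:8333"));
}
```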
Relay Configuration
[relay]
max_relay_age = 3600 # Max age for relayed items (1 hour)
max_tracked_items = 10000 # Max items to track
enable_block_relay = true # Enable block relay
enable_tx_relay = true # Enable transaction relay
enable_dandelion = false # Enable Dandelion++ privacy relay
Address Database
[address_database]
max_addresses = 10000 # Max addresses to store
expiration_seconds = 86400 # Address expiration (24 hours)
Peer Rate Limiting
[peer_rate_limiting]
default_burst = 100 # Token bucket burst size
default_rate = 10 # Messages per second
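The burst/rate pair above follows standard token-bucket semantics: the bucket holds up to default_burst tokens, refills at default_rate tokens per second, and each message consumes one token. A minimal sketch of those semantics (illustrative only; not the actual BLVM rate limiter):

```rust
// Token-bucket sketch for default_burst / default_rate (hypothetical code).
struct TokenBucket {
    tokens: f64,
    burst: f64, // default_burst: bucket capacity
    rate: f64,  // default_rate: tokens refilled per second
    last: f64,  // time of last refill, in seconds
}

impl TokenBucket {
    fn new(burst: u32, rate: u32) -> Self {
        Self { tokens: burst as f64, burst: burst as f64, rate: rate as f64, last: 0.0 }
    }

    // Returns true if a message arriving at time `now` (seconds) is allowed.
    fn allow(&mut self, now: f64) -> bool {
        // Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = (self.tokens + (now - self.last) * self.rate).min(self.burst);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut tb = TokenBucket::new(100, 10);
    // A burst of 100 messages at t=0 is allowed; the 101st is dropped.
    let allowed = (0..101).filter(|_| tb.allow(0.0)).count();
    assert_eq!(allowed, 100);
    // One second later, 10 tokens have been refilled.
    assert!(tb.allow(1.0));
}
```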
Ban List Sharing
[ban_list_sharing]
enabled = true # Enable ban list sharing
share_mode = "periodic" # "immediate", "periodic", or "disabled"
periodic_interval_seconds = 300 # Sharing interval (5 minutes)
min_ban_duration_to_share = 3600 # Min ban duration to share (1 hour)
Experimental Features
Dandelion++ Privacy Relay
Requires: dandelion feature enabled.
[dandelion]
stem_timeout_seconds = 10 # Stem phase timeout
fluff_probability = 0.1 # Probability of fluffing at each hop (10%)
max_stem_hops = 2 # Max stem hops before forced fluff
Stratum V2 Mining
Requires: stratum-v2 feature enabled.
[stratum_v2]
enabled = false
pool_url = "tcp://pool.example.com:3333" # Pool URL for miner mode
listen_addr = "127.0.0.1:3333" # Listen address for server mode
transport_preference = "tcp_only"
merge_mining_enabled = false
secondary_chains = []
Command-Line Arguments
Configuration can be overridden via command-line arguments:
blvm --config /path/to/config.toml # Load config from file
blvm --network testnet # Override protocol version
blvm --data-dir /custom/path # Override data directory
Environment Variables
Configuration can also be set via environment variables (prefixed with BLVM_):
export BLVM_NETWORK=testnet
export BLVM_DATA_DIR=/var/lib/blvm
export BLVM_RPC_PORT=8332
Configuration Precedence
- Command-line arguments (highest priority)
- Environment variables
- Configuration file
- Default values (lowest priority)
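The precedence order above means the highest-priority source that provides a value wins. As a sketch (illustrative; not the actual BLVM configuration code):

```rust
// Sketch of the configuration precedence rule: CLI > env > file > default.
fn resolve(cli: Option<&str>, env: Option<&str>, file: Option<&str>, default: &str) -> String {
    cli.or(env).or(file).unwrap_or(default).to_string()
}

fn main() {
    // No CLI flag; BLVM_NETWORK=testnet in the environment; config file says regtest.
    let network = resolve(None, Some("testnet"), Some("regtest"), "mainnet");
    // The environment variable beats the config file.
    assert_eq!(network, "testnet");
}
```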
Validation
The node validates configuration on startup. Invalid configurations will cause the node to exit with an error message indicating the problem.
Common validation errors:
- Pruning mode requires features that aren’t enabled
- Invalid network addresses
- Resource limits set to zero
- Conflicting transport preferences
See Also
- Node Configuration - Quick start guide
- Storage Backends - Backend selection details
- Transport Abstraction - Transport options
- Module Development - Module system details
API Index
Quick reference and cross-references to all BLVM APIs across the ecosystem.
Complete API Documentation
For detailed Rust API documentation with full type signatures, examples, and implementation details:
- blvm-consensus - Consensus layer APIs (transaction validation, block validation, script execution)
- blvm-protocol - Protocol abstraction layer APIs (network variants, message handling)
- blvm-node - Node implementation APIs (storage, networking, RPC, modules)
- blvm-sdk - Developer SDK APIs (governance primitives, composition framework)
Quick Reference by Component
Consensus Layer (blvm-consensus)
Core Functions (Orange Paper spec names):
- CheckTransaction - Validate transaction structure and signatures
- ConnectBlock - Validate and connect a block to the chain
- EvalScript - Execute Bitcoin script
- VerifyScript - Verify script execution results
Note: These are Orange Paper mathematical specification names (PascalCase). The Rust API uses ConsensusProof struct methods (see API Usage Patterns below).
Key Types:
- Transaction, Block, BlockHeader
- UTXO, OutPoint
- Script, ScriptOpcode
- ValidationResult
Documentation: See Consensus Overview, Consensus Architecture, and Formal Verification.
Protocol Layer (blvm-protocol)
Core Abstractions:
- BitcoinProtocolEngine - Protocol engine for network variants
- NetworkMessage - P2P message types
- ProtocolVersion - Network variant (BitcoinV1, Testnet3, Regtest)
Key Types:
- NetworkMessage, MessageType
- PeerConnection, ConnectionState
- BlockTemplate (for mining)
Documentation: See Protocol Overview, Network Protocol, and Protocol Architecture.
Node Implementation (blvm-node)
Node API
Main Node Type:
Node- Main node orchestrator
Key Methods:
- Node::new(protocol_version: Option<ProtocolVersion>) -> Result<Self> - Create a new node
- Node::start() -> Result<()> - Start the node
- Node::stop() -> Result<()> - Stop the node gracefully
Module System API
NodeAPI Trait - Interface for modules to query node state:
#![allow(unused)]
fn main() {
pub trait NodeAPI {
async fn get_block(&self, hash: &Hash) -> Result<Option<Block>, ModuleError>;
async fn get_block_header(&self, hash: &Hash) -> Result<Option<BlockHeader>, ModuleError>;
async fn get_transaction(&self, hash: &Hash) -> Result<Option<Transaction>, ModuleError>;
async fn has_transaction(&self, hash: &Hash) -> Result<bool, ModuleError>;
async fn get_chain_tip(&self) -> Result<Hash, ModuleError>;
async fn get_block_height(&self) -> Result<u64, ModuleError>;
async fn get_utxo(&self, outpoint: &OutPoint) -> Result<Option<UTXO>, ModuleError>;
async fn subscribe_events(&self, event_types: Vec<EventType>) -> Result<Receiver<ModuleMessage>, ModuleError>;
}
}
Event Types:
Core Blockchain Events:
- EventType::NewBlock - New block connected to chain
- EventType::NewTransaction - New transaction in mempool
- EventType::BlockDisconnected - Block disconnected (chain reorg)
- EventType::ChainReorg - Chain reorganization occurred
Payment Events:
- EventType::PaymentRequestCreated, EventType::PaymentSettled, EventType::PaymentFailed, EventType::PaymentVerified, EventType::PaymentRouteFound, EventType::PaymentRouteFailed, EventType::ChannelOpened, EventType::ChannelClosed
Mining Events:
- EventType::BlockMined, EventType::BlockTemplateUpdated, EventType::MiningDifficultyChanged, EventType::MiningJobCreated, EventType::ShareSubmitted, EventType::MergeMiningReward, EventType::MiningPoolConnected, EventType::MiningPoolDisconnected
Network Events:
- EventType::PeerConnected, EventType::PeerDisconnected, EventType::MessageReceived, EventType::MessageSent, EventType::BroadcastStarted, EventType::BroadcastCompleted, EventType::RouteDiscovered, EventType::RouteFailed
Module Lifecycle Events:
- EventType::ModuleLoaded, EventType::ModuleUnloaded, EventType::ModuleCrashed, EventType::ModuleDiscovered, EventType::ModuleInstalled, EventType::ModuleUpdated, EventType::ModuleRemoved
And many more. For the complete list, see the EventType enum and Event System.
ModuleContext - Context provided to modules:
#![allow(unused)]
fn main() {
pub struct ModuleContext {
pub module_id: String,
pub socket_path: String,
pub data_dir: String,
pub config: HashMap<String, String>,
}
}
Documentation: See Module Development for complete module API details.
RPC API
RPC Methods: Bitcoin Core-compatible JSON-RPC methods. See RPC API Reference for complete list.
Key Categories:
- Blockchain methods (9): getblockchaininfo, getblock, getblockhash, getblockheader, getbestblockhash, getblockcount, getdifficulty, gettxoutsetinfo, verifychain
- Raw transaction methods (7): getrawtransaction, sendrawtransaction, testmempoolaccept, decoderawtransaction, gettxout, gettxoutproof, verifytxoutproof
- Mempool methods (3): getmempoolinfo, getrawmempool, savemempool
- Network methods (10): getnetworkinfo, getpeerinfo, getconnectioncount, ping, addnode, disconnectnode, getnettotals, clearbanned, setban, listbanned
- Mining methods (4): getmininginfo, getblocktemplate, submitblock, estimatesmartfee
Documentation: See RPC API Reference.
Storage API
Storage Trait:
Storage- Storage backend interface
Key Methods:
- get_block(&self, hash: &Hash) -> Result<Option<Block>>
- get_block_header(&self, hash: &Hash) -> Result<Option<BlockHeader>>
- get_utxo(&self, outpoint: &OutPoint) -> Result<Option<UTXO>>
- get_chain_tip(&self) -> Result<Hash>
- get_block_height(&self) -> Result<u64>
Backends:
- redb - Production-ready embedded database (default; see Storage Backends)
- sled - Beta fallback option (see Storage Backends)
Documentation: See Storage Backends and Node Configuration.
Developer SDK (blvm-sdk)
Governance Primitives
Core Types:
- GovernanceKeypair - Keypair for signing
- PublicKey - Public key (secp256k1)
- Signature - Cryptographic signature
- GovernanceMessage - Message types (Release, ModuleApproval, BudgetDecision)
- Multisig - Threshold signature configuration
Functions:
- sign_message(secret_key: &SecretKey, message: &[u8]) -> GovernanceResult<Signature>
- verify_signature(signature: &Signature, message: &[u8], public_key: &PublicKey) -> GovernanceResult<bool>
Documentation: See SDK API Reference.
Composition Framework
Core Types:
- ModuleRegistry - Module discovery and management
- NodeComposer - Node composition from modules
- ModuleLifecycle - Module lifecycle management
- NodeSpec, ModuleSpec - Composition specifications
Documentation: See SDK API Reference.
API Usage Patterns
Consensus Validation
#![allow(unused)]
fn main() {
use blvm_consensus::{ConsensusProof, Transaction, Block};
// Create consensus proof instance
let proof = ConsensusProof::new();
// Validate transaction
let result = proof.validate_transaction(&tx)?;
// Validate and connect block
let (result, new_utxo_set) = proof.validate_block(&block, utxo_set, height)?;
}
Alternative: Direct module functions are also available:
#![allow(unused)]
fn main() {
use blvm_consensus::{transaction, block, types::*};
// Validate transaction using direct module function
let result = transaction::check_transaction(&tx)?;
// Connect block using direct module function
let (result, new_utxo_set, _undo_log) = block::connect_block(
&block,
&witnesses,
utxo_set,
height,
None,
network_time,
network,
)?;
}
Protocol Abstraction
#![allow(unused)]
fn main() {
use blvm_protocol::{BitcoinProtocolEngine, ProtocolVersion};
// Create protocol engine for testnet
let engine = BitcoinProtocolEngine::new(ProtocolVersion::Testnet3)?;
}
Module Development
#![allow(unused)]
fn main() {
use blvm_node::module::traits::NodeAPI;
// In module code, use NodeAPI trait through IPC
let block = node_api.get_block(&hash).await?;
let tip = node_api.get_chain_tip().await?;
}
Governance Operations
#![allow(unused)]
fn main() {
use blvm_sdk::{GovernanceKeypair, GovernanceMessage, Multisig};
// Generate keypair and sign message
let keypair = GovernanceKeypair::generate()?;
let message = GovernanceMessage::Release { version, commit_hash };
let signature = sign_message(&keypair.secret_key_bytes(), &message.to_signing_bytes())?;
}
API Stability
Stable APIs:
- Consensus layer (blvm-consensus) - Stable, formally verified
- Protocol layer (blvm-protocol) - Stable, Bitcoin-compatible
- Node storage APIs - Stable
Development APIs:
- Module system APIs - Stable interface, implementation may evolve
- Composition framework - Active development
- Experimental features - Subject to change
Error Handling
All APIs use consistent error types:
- blvm_consensus::ConsensusError - Consensus validation errors
- blvm_protocol::ProtocolError - Protocol layer errors
- blvm_node::module::ModuleError - Module system errors
- blvm_sdk::GovernanceError - Governance operation errors
See Also
- SDK API Reference - Detailed SDK documentation
- Module Development - Module API usage
- RPC API Reference - RPC method documentation
- Configuration Reference - Configuration APIs
Glossary
Key terms and concepts used throughout the BLVM documentation.
BLVM Components
BLVM (Bitcoin Low-Level Virtual Machine) - Compiler-like infrastructure for Bitcoin implementations. Transforms the Orange Paper (IR) through optimization passes into optimized code, with formal verification ensuring correctness. Similar to how LLVM provides compiler infrastructure, BLVM provides Bitcoin implementation infrastructure.
Orange Paper - Mathematical specification of Bitcoin’s consensus protocol, serving as the “intermediate representation” (IR) in BLVM’s compiler-like architecture. Transformed through optimization passes into optimized code. See Orange Paper.
Optimization Passes - Runtime optimization passes in blvm-consensus that transform the Orange Paper specification into optimized code: Pass 2 (Constant Folding), Pass 3 (Memory Layout Optimization), Pass 5 (SIMD Vectorization), plus bounds check optimization, dead code elimination, and inlining hints. See Optimization Passes.
blvm-consensus - Optimized mathematical implementation of Bitcoin consensus rules with formal verification. Includes optimization passes that transform the Orange Paper specification into production-ready code. Foundation layer with no dependencies. See Consensus Overview.
blvm-protocol - Protocol abstraction layer for multiple Bitcoin variants (mainnet, testnet, regtest) while maintaining consensus compatibility. See Protocol Overview.
blvm-node - Bitcoin node implementation with storage, networking, RPC, and mining capabilities. Production-ready reference implementation. See Node Overview.
blvm-sdk - Developer toolkit providing governance cryptographic primitives, module composition framework, and CLI tools for key management and signing. See SDK Overview.
Governance
Bitcoin Commons - Forkable governance framework applying Elinor Ostrom’s commons management principles through cryptographic enforcement. See Governance Overview.
5-Tier Governance Model - Constitutional governance system with graduated signature thresholds (3-of-5 to 6-of-7) and review periods (7 days to 365 days) based on change impact. See Layer-Tier Model.
Forkable Governance - Governance rules can be forked by users if they disagree with decisions, creating exit competition and preventing capture. See Governance Fork.
Cryptographic Enforcement - All governance actions require cryptographic signatures from maintainers, making power visible and accountable. See Keyholder Procedures.
Technical Concepts
Formal Verification - Mathematical proof of code correctness. BLVM uses formal verification for critical consensus paths.
Proofs Locked to Code - Formal verification proofs are embedded in the code itself, ensuring correctness is maintained as code changes.
Spec Drift Detection - Automated detection when implementation code diverges from the Orange Paper mathematical specification.
Compiler-Like Architecture - Architecture where Orange Paper (IR) → optimization passes → blvm-consensus → blvm-node, similar to source code → IR → optimization passes → machine code in compilers. See System Overview.
Process Isolation - Module system design where each module runs in a separate process with isolated memory, preventing failures from propagating to the base node.
IPC (Inter-Process Communication) - Communication mechanism between modules and the node using Unix domain sockets with length-delimited binary messages. See Module IPC Protocol.
Storage & Networking
Storage Backends - Database backends for blockchain data: redb (default, production-ready), sled (beta, fallback), auto (auto-select based on availability). See Storage Backends.
Pruning - Storage optimization that removes old block data while keeping the UTXO set. Configurable to keep last N blocks.
Transport Abstraction - Unified abstraction supporting multiple transport protocols: TCP (default, Bitcoin P2P compatible) and Iroh/QUIC (experimental). See Transport Abstraction.
Network Variants - Bitcoin network types: Mainnet (BitcoinV1, production), Testnet3 (test network), Regtest (regression testing, isolated).
Consensus & Protocol
Consensus Rules - Mathematical rules that all Bitcoin nodes must follow to maintain network consensus. Defined in the Orange Paper and implemented in blvm-consensus.
BIP (Bitcoin Improvement Proposal) - Standards for Bitcoin protocol changes. BLVM implements numerous BIPs including BIP30, BIP34, BIP66, BIP90, BIP147, BIP141/143, BIP340/341/342. See Protocol Specifications.
SegWit (Segregated Witness) - BIP141/143 implementation separating witness data from transaction data, enabling transaction malleability fixes and capacity improvements.
Taproot - BIP340/341/342 implementation providing Schnorr signatures, Merkle tree scripts, and improved privacy.
RBF (Replace-By-Fee) - BIP125 implementation allowing transaction replacement with higher fees before confirmation.
Development
Module System - Process-isolated system supporting optional features (Lightning, merge mining, privacy enhancements) without affecting consensus or base node stability.
Module Manifest (module.toml) - Configuration file defining module metadata, capabilities, dependencies, and entry point.
Capabilities - Permissions system for modules. Capabilities use snake_case in module.toml and map to Permission enum variants. Core capabilities include: read_blockchain, read_utxo, read_chain_state, subscribe_events, send_transactions, read_mempool, read_network, network_access, read_lightning, read_payment, read_storage, write_storage, manage_storage, read_filesystem, write_filesystem, manage_filesystem, register_rpc_endpoint, manage_timers, report_metrics, read_metrics, discover_modules, publish_events, call_module, register_module_api. See Permission System for complete list.
RPC (Remote Procedure Call) - JSON-RPC 2.0 interface for interacting with the node. BLVM implements Bitcoin Core-compatible methods.
Governance Phases
Phase 1 (Infrastructure Building) - All core components are implemented. Governance is not activated. Test keys are used.
Phase 2 (Governance Activation) - Governance rules are enforced with real cryptographic keys and keyholder onboarding.
Phase 3 (Full Operation) - Mature, stable system with battle-tested governance and production deployment.
Contributing to BLVM
Thank you for your interest in contributing to BLVM (Bitcoin Low-Level Virtual Machine)! This guide covers the complete developer workflow from setting up your environment to getting your changes merged.
Code of Conduct
This project and everyone participating in it is governed by our Code of Conduct. By participating, you are expected to uphold this code.
Getting Started
Prerequisites
- Rust 1.70 or later - Check with rustc --version
- Git - For version control
- Cargo - Included with Rust
- Text editor or IDE - Your choice
Development Setup
- Fork the repository you want to contribute to (e.g., blvm-consensus, blvm-protocol, blvm-node)
- Clone your fork:
  git clone https://github.com/YOUR_USERNAME/blvm-consensus.git
  cd blvm-consensus
- Add upstream remote:
  git remote add upstream https://github.com/BTCDecoded/blvm-consensus.git
- Build the project: cargo build
- Run tests: cargo test
Contribution Workflow
1. Create a Feature Branch
Always create a new branch from main:
git checkout main
git pull upstream main
git checkout -b feature/your-feature-name
Branch naming conventions:
- feature/ - New features
- fix/ - Bug fixes
- docs/ - Documentation changes
- refactor/ - Code refactoring
- test/ - Test additions
2. Make Your Changes
Follow these guidelines when making changes:
Code Style
- Follow Rust conventions - Use cargo fmt to format code
- Run clippy - Use cargo clippy -- -D warnings to check for improvements
- Write clear, self-documenting code - Code should be readable without excessive comments
Testing
- Write tests for all new functionality - See Testing Infrastructure for details
- Ensure existing tests continue to pass - Run cargo test before committing
- Add integration tests for complex features
- Aim for high test coverage - Consensus-critical code requires >95% coverage
Documentation
- Document all public APIs - Use Rust doc comments (///)
- Update README files when adding features
- Include code examples in documentation
- Follow Rust documentation conventions
3. Commit Your Changes
Use conventional commit format:
type(scope): description
[optional body]
[optional footer]
Commit types:
- feat - New feature
- fix - Bug fix
- docs - Documentation changes
- test - Test additions/changes
- refactor - Code refactoring
- perf - Performance improvements
- ci - CI/CD changes
- chore - Maintenance tasks
Examples:
feat(consensus): add OP_CHECKSIGVERIFY implementation
fix(node): resolve connection timeout issue
docs(readme): update installation instructions
test(block): add edge case tests for block validation
4. Push and Create Pull Request
git push origin feature/your-feature-name
Then open a Pull Request on GitHub. See the PR Process for details on governance tiers, review periods, and maintainer signatures. Your PR should include:
- Clear title - Describes what the PR does
- Detailed description - Explains the changes and why
- Reference issues - Link to related issues if applicable
- Checklist - Mark items as you complete them (see PR Checklist below)
Repository-Specific Guidelines
blvm-consensus
Critical: This code implements Bitcoin consensus rules. Any changes must:
- Match Bitcoin Core behavior exactly - No deviations
- Not deviate from the Orange Paper specifications - Mathematical correctness required
- Handle all edge cases correctly - Consensus code must be bulletproof
- Maintain mathematical precision - No approximations
Additional requirements:
- Exact Version Pinning: All consensus-critical dependencies must be pinned to exact versions
- Pure Functions: All functions must remain side-effect-free
- Testing: All mathematical functions must be thoroughly tested (see Testing Infrastructure)
- Formal Verification: Consensus-critical changes may require Z3 proofs (via BLVM Specification Lock)
blvm-protocol
- Protocol Abstraction: Changes must maintain clean abstraction
- Variant Support: Ensure all Bitcoin variants continue to work
- Backward Compatibility: Avoid breaking changes to protocol interfaces
blvm-node
- Consensus Integrity: Never modify consensus rules (use blvm-consensus for that)
- Production Readiness: Consider production deployment implications
- Performance: Maintain reasonable performance characteristics
Pull Request Checklist
Before submitting your PR, ensure:
- All tests pass - Run cargo test locally
- Code is formatted - Run cargo fmt
- No clippy warnings - Run cargo clippy -- -D warnings
- Documentation is updated - Public APIs documented, README updated if needed
- Commit messages follow conventions - Use conventional commit format
- Changes are focused and atomic - One logical change per PR
- Repository-specific guidelines followed - See section above
Review Process
What Happens After You Submit a PR
- Automated CI runs - Tests, linting, and checks run automatically
- Governance tier classification - Your PR is automatically classified into a governance tier
- Maintainers review - Code review by project maintainers
- Signatures required - Maintainers must cryptographically sign approval (see PR Process)
- Review period - Tier-specific review period must elapse (see PR Process for details)
- Merge - Once all requirements are met, your PR is merged
Review Criteria
Reviewers will check:
- Correctness - Does the code work as intended?
- Consensus compliance - Does it match Bitcoin Core? (for consensus code)
- Test coverage - Are all cases covered?
- Performance - No regressions?
- Documentation - Is it clear and complete?
- Security - Any potential vulnerabilities?
Getting Your PR Reviewed
- Be patient - Review periods vary by tier (7-180 days)
- Respond to feedback - Address review comments promptly
- Keep PRs small - Smaller PRs are reviewed faster
- Update PR description - Keep it current as you make changes
Governance Tiers
Your PR will be automatically classified into a governance tier based on the changes. See PR Process for detailed information about:
- Tier 1: Routine Maintenance - Bug fixes, documentation, performance optimizations (7 day review, see Layer-Tier Model)
- Tier 2: Feature Changes - New RPC methods, P2P changes, wallet features (30 day review)
- Tier 3: Consensus-Adjacent - Changes affecting consensus validation code (90 day review)
- Tier 4: Emergency Actions - Critical security patches (0 day review)
- Tier 5: Governance Changes - Changes to governance rules (180 day review)
Testing Your Changes
See Testing Infrastructure for testing documentation. Key points:
- Unit tests - Test individual functions
- Integration tests - Test cross-module functionality
- Property-based testing - Test with generated inputs
- Fuzzing - Find edge cases automatically
- Differential testing - Compare with Bitcoin Core behavior
CI/CD Workflows
When you push code or open a PR, automated workflows run:
- Tests - All test suites run
- Linting - Code style and quality checks
- Coverage - Test coverage analysis
- Build verification - Ensures code compiles
See CI/CD Workflows for detailed information about what runs and how to debug failures.
Getting Help
- Discussions - Use GitHub Discussions for questions
- Issues - Use GitHub Issues for bugs and feature requests
- Security - Use private channels for security issues (see SECURITY.md in each repo)
Recognition
Contributors will be recognized in:
- Repository CONTRIBUTORS.md files
- Release notes for significant contributions
- Organization acknowledgments
Questions?
If you have questions about contributing:
- Check existing discussions and issues
- Open a new discussion
- Contact maintainers privately for sensitive matters
Thank you for contributing to BLVM!
Contributing to Documentation
For documentation-specific contributions (improving docs, fixing typos, adding examples), see Contributing to Documentation in the Appendices section. This guide covers:
- Documentation standards and style guidelines
- Where to contribute (source repos vs. unified docs)
- Documentation workflow
- Local testing of documentation changes
Note: Code contributions (this page) and documentation contributions (linked above) follow different workflows but both are welcome!
See Also
- PR Process - Pull request review process and governance tiers
- CI/CD Workflows - What happens when you push code
- Testing Infrastructure - Testing guides
- Release Process - How releases are created
- Contributing to Documentation - Documentation contribution guide
CI/CD Workflows
This document explains what happens when you push code or open a Pull Request, how to interpret CI results, and how to debug failures.
Overview
BLVM uses GitHub Actions for continuous integration and deployment. All workflows run on self-hosted Linux x64 runners to ensure security and deterministic builds.
What Happens When You Push Code
On Push to Any Branch
When you push code to any branch, the following workflows may trigger:
- CI Workflow - Runs tests, linting, and build verification
- Coverage Workflow - Calculates test coverage
- Security Workflow - Runs security checks (if configured)
On Push to Main Branch
In addition to the above, pushing to main triggers:
- Release Workflow - Automatically creates a new release (see Release Process)
- Version Bumping - Auto-increments patch version
- Cargo Publishing - Publishes dependencies to crates.io
- Git Tagging - Tags all repositories with the new version
Repository-Specific CI Workflows
blvm-consensus
Workflows:
- `ci.yml` - Runs test suite, linting, and build verification
- `coverage.yml` - Calculates test coverage
What Runs:
- Unit tests
- Integration tests
- Property-based tests
- BLVM Specification Lock formal verification (optional, can be enabled)
- Code formatting check (`cargo fmt --check`)
- Linting check (`cargo clippy`)
blvm-protocol
Workflows:
- `ci.yml` - Runs test suite and build verification
- `coverage.yml` - Calculates test coverage
What Runs:
- Unit tests
- Integration tests
- Protocol compatibility tests
- Build verification
blvm-node
Workflows:
- `ci.yml` - Runs test suite and build verification
- `coverage.yml` - Calculates test coverage
What Runs:
- Unit tests
- Integration tests
- Node functionality tests
- Network protocol tests
- Build verification
blvm (Main Repository)
Workflows:
- `ci.yml` - Runs tests across all components
- `coverage.yml` - Aggregates coverage from all repos
- `release.yml` - Official release workflow
- `prerelease.yml` - Prerelease workflow
- `nightly-prerelease.yml` - Scheduled nightly builds
Reusable Workflows (blvm-commons)
The blvm-commons repository provides reusable workflows that other repositories call:
verify_consensus.yml
Purpose: Runs tests and optional BLVM Specification Lock verification for consensus code
Inputs:
- `repo` - Repository name
- `ref` - Git reference (branch/tag)
- `blvm-spec-lock` - Boolean to enable BLVM Specification Lock verification
What It Does:
- Checks out the repository
- Runs test suite
- Optionally runs BLVM Specification Lock formal verification
- Reports results
build_lib.yml
Purpose: Deterministic library build with artifact hashing
Inputs:
- `repo` - Repository name
- `ref` - Git reference
- `package` - Cargo package name
- `features` - Feature flags to enable
- `verify_deterministic` - Optional: rebuild and compare hashes
What It Does:
- Builds the library with `cargo build --locked --release`
- Hashes outputs to `SHA256SUMS`
- Optionally verifies deterministic builds (rebuild and compare)
build_docker.yml
Purpose: Builds Docker images
Inputs:
- `repo` - Repository name
- `ref` - Git reference
- `tag` - Docker image tag
- `image_name` - Docker image name
- `push` - Boolean to push to registry
What It Does:
- Builds Docker image
- Optionally pushes to registry
Workflow Dependencies and Ordering
Builds follow a strict dependency order:
```text
1. blvm-consensus (L2) - No dependencies
         ↓
2. blvm-protocol (L3) - Depends on blvm-consensus
         ↓
3. blvm-node (L4) - Depends on blvm-protocol + blvm-consensus
         ↓
4. blvm (main) - Depends on blvm-node

Parallel:
5. blvm-sdk - No dependencies
         ↓
6. blvm-commons - Depends on blvm-sdk
```
Security Gates: Consensus verification (tests + optional BLVM Specification Lock) must pass before downstream builds proceed.
Self-Hosted Runners
All workflows run on self-hosted Linux x64 runners:
- Security: Code never leaves our infrastructure
- Performance: Faster builds, no rate limits
- Deterministic: Consistent build environment
- Labels: Optional labels (`rust`, `docker`, `blvm-spec-lock`) optimize job assignment (labels are lowercase for technical compatibility)
Runner Policy:
- All jobs run on `[self-hosted, linux, x64]` runners
- Workflows install required tooling as a fallback if labeled runners are unavailable
- Repos should restrict Actions to self-hosted runners in their settings
Deterministic Builds
All builds use deterministic build practices:
- Locked Dependencies: `cargo build --locked` ensures exact dependency versions
- Toolchain Pinning: Per-repo `rust-toolchain.toml` defines the exact Rust version
- Artifact Hashing: All outputs hashed to `SHA256SUMS`
- Verification: Optional deterministic verification (rebuild and compare hashes)
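The hash-and-compare pattern behind deterministic verification can be sketched in plain shell. This is a minimal, offline illustration: the file names are stand-ins, and a real run would hash cargo's release artifacts instead.

```shell
set -eu
workdir=$(mktemp -d)

# First "build": write an artifact and record its hash
printf 'artifact-bytes' > "$workdir/blvm"        # stand-in for a release binary
hash1=$(sha256sum "$workdir/blvm" | cut -d' ' -f1)
echo "$hash1  blvm" > "$workdir/SHA256SUMS"      # same role as the released SHA256SUMS

# Second "build": a deterministic rebuild produces identical bytes
printf 'artifact-bytes' > "$workdir/blvm"
hash2=$(sha256sum "$workdir/blvm" | cut -d' ' -f1)

# Verification passes only if the hashes match exactly
[ "$hash1" = "$hash2" ] && echo "deterministic: hashes match"
```

Any byte-level difference between the two builds (timestamps, embedded paths, unlocked dependency versions) changes the hash and fails this comparison, which is why the workflows pin toolchains and build with `--locked`.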
Interpreting CI Results
✅ Success
All checks pass:
- ✅ Tests pass
- ✅ Linting passes
- ✅ Build succeeds
- ✅ Coverage meets threshold
Action: Your PR is ready for review (subject to governance requirements).
❌ Test Failures
One or more tests fail:
- Check the test output in the workflow logs
- Look for error messages and stack traces
- Run tests locally to reproduce: `cargo test`
Common Causes:
- Logic errors in your code
- Test environment differences
- Flaky tests (timing issues)
❌ Linting Failures
Code style or quality issues:
- Formatting: Run `cargo fmt` locally
- Clippy warnings: Run `cargo clippy -- -D warnings` and fix issues
Action: Fix locally and push again.
❌ Build Failures
Code doesn’t compile:
- Check compiler errors in workflow logs
- Build locally: `cargo build`
- Check for missing dependencies or version conflicts
Action: Fix compilation errors and push again.
⚠️ Coverage Below Threshold
Test coverage is below the required threshold:
- Add more tests to cover untested code
- Check coverage report to see what’s missing
Action: Add tests to increase coverage.
Debugging CI Failures
1. Check Workflow Logs
Click on the failed check in your PR to see detailed logs:
- Expand failed job
- Look for error messages
- Check which step failed
2. Reproduce Locally
Run the same commands locally:
```shell
# Run tests
cargo test

# Check formatting
cargo fmt --check

# Run clippy
cargo clippy -- -D warnings

# Build
cargo build --release
```
3. Check for Environment Differences
CI runs in a clean environment:
- Dependencies are fresh
- No local configuration
- Specific Rust toolchain version
Solution: Use `rust-toolchain.toml` to pin the Rust version.
4. Common Issues
Issue: Tests pass locally but fail in CI
- Cause: Timing issues, environment differences
- Solution: Make tests more robust, check for race conditions
Issue: Build works locally but fails in CI
- Cause: Dependency version mismatch
- Solution: Ensure `Cargo.lock` is committed and use the `--locked` flag
Issue: Coverage calculation fails
- Cause: Coverage tool issues
- Solution: Check coverage tool version, ensure tests run successfully
Workflow Status Checks
PRs require all status checks to pass before merging:
- Required Checks: Must pass (configured per repository)
- Optional Checks: Can fail but won’t block merge
- Status: Shown in PR checks section
Note: Even if all checks pass, PRs still require:
- Maintainer signatures (see PR Process)
- Review period to elapse
Best Practices
Before Pushing
- Run tests locally: `cargo test`
- Check formatting: `cargo fmt`
- Run clippy: `cargo clippy -- -D warnings`
- Build: `cargo build --release`
During Development
- Push frequently: Small commits are easier to debug
- Check CI early: Don’t wait until PR is “done”
- Fix issues immediately: Don’t let failures accumulate
When CI Fails
- Don’t panic: CI failures are normal during development
- Read the logs: Error messages are usually clear
- Reproduce locally: Fix the issue, then push again
- Ask for help: If stuck, ask in discussions or PR comments
Workflow Configuration
Workflows are configured in .github/workflows/ in each repository:
- Trigger conditions: When workflows run
- Job definitions: What each job does
- Runner requirements: Which runners to use
- Dependencies: Job ordering
Note: Workflows in blvm-commons are reusable and called by other repositories via workflow_call.
Workflow Optimization
Caching Strategies
For self-hosted runners, local caching can provide significant performance improvements:
Local Caching System
Using /tmp/runner-cache with rsync provides 10-100x faster cache operations than GitHub Actions cache:
- No API rate limits: Local filesystem access
- Faster restore: rsync is much faster than GitHub cache API
- Works offline: Once cached, no network needed
- Preserves symlinks: Better than GitHub cache for complex builds
Shared Setup Jobs
Use a single setup job that all other jobs depend on:
- Checkout dependencies once: Avoid redundant checkouts
- Generate cache keys once: Share keys via job outputs
- Parallel execution: Other jobs can run in parallel after setup
Cross-Repo Build Artifact Caching
Cache target/ directories for dependencies across workflow runs:
- Don’t rebuild dependencies: Cache blvm-consensus and blvm-protocol build artifacts
- Faster incremental builds: Only rebuild what changed
- Shared across repos: Same cache can be used by multiple repositories
Cache Key Strategy
Use deterministic cache keys based on:
- `Cargo.lock` hash (for dependency changes)
- Rust toolchain version (for toolchain changes)
- Combined key: `${DEPS_KEY}-${TOOLCHAIN}`
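A minimal sketch of deriving such a key in shell. The toolchain value and the 16-character hash prefix are assumptions for illustration, not the exact scheme the workflows use.

```shell
set -eu
dir=$(mktemp -d)
# Fabricate a Cargo.lock so the example is self-contained
printf '[[package]]\nname = "demo"\nversion = "0.1.0"\n' > "$dir/Cargo.lock"

DEPS_KEY=$(sha256sum "$dir/Cargo.lock" | cut -c1-16)   # short hash of the dependency set
TOOLCHAIN="1.75.0"                                     # e.g. parsed from rust-toolchain.toml
CACHE_KEY="${DEPS_KEY}-${TOOLCHAIN}"
echo "cache key: $CACHE_KEY"
```

Any change to `Cargo.lock` or to the pinned toolchain produces a different key, so stale caches are never restored over a mismatched build.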
Disk Space Management
For long-running runners, implement cache cleanup:
- Automatic cleanup: Remove caches older than N days
- Keep recent caches: Maintain last N cache entries
- Emergency cleanup: Check disk space and clean if >80% full
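One way the cleanup policy could look in shell. The retention window, the 80% threshold, and the use of a temporary directory in place of `/tmp/runner-cache` are all illustrative.

```shell
set -eu
CACHE_ROOT=$(mktemp -d)   # real runners would use /tmp/runner-cache
MAX_AGE_DAYS=14           # illustrative retention window

# Automatic cleanup: remove cache entries untouched for more than N days
find "$CACHE_ROOT" -mindepth 1 -maxdepth 1 -mtime +"$MAX_AGE_DAYS" -exec rm -rf {} +

# Emergency cleanup: wipe the cache if the filesystem is over 80% full
usage=$(df --output=pcent "$CACHE_ROOT" | tail -1 | tr -dc '0-9')
if [ "$usage" -gt 80 ]; then
  rm -rf "$CACHE_ROOT"/*
fi
echo "cleanup done, disk ${usage}% full"
```

Run on a schedule (e.g. a cron entry on each runner), this keeps long-lived runners from silently exhausting disk.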
Performance Improvements
With proper caching optimization:
- Dependency checkout: ~30s (once in setup job)
- Cache restore: ~5s per job (local cache vs ~20s for GitHub cache)
- Dependency build: ~30s (cached artifacts vs ~5min without cache)
- Total overhead: ~2min vs ~35min without optimization
Estimated speedup: ~17x faster for setup overhead
Additional Resources
- Testing Infrastructure - Comprehensive testing documentation
- Release Process - How releases are created
- PR Process - Pull request review process
Pull Request Process
This document explains the PR review process, governance tiers, signature requirements, and how to get your PR reviewed and merged.
Overview
BLVM uses a 5-tier constitutional governance model with cryptographic signatures to ensure secure, transparent, and accountable code changes. Every PR is automatically classified into a governance tier based on the scope and impact of the changes.
PR Lifecycle
1. Developer Opens PR
When you open a Pull Request:
- Automated CI runs - Tests, linting, and build verification
- Governance tier classification - PR is automatically classified (with temporary manual override available)
- Status checks appear - Shows what needs to happen for merge
2. Maintainers Review and Sign
Maintainers review your code and cryptographically sign approval:
- Review PR - Understand the change and its impact
- Generate signature - Use `blvm-sign` from blvm-sdk
- Post signature - Comment `/governance-sign <signature>` on the PR
- Governance App verifies - Cryptographically verifies the signature
- Status check updates - Shows signature count progress
3. Review Period Elapses
Each tier has a specific review period that must elapse:
- Tier 1: 7 days
- Tier 2: 30 days
- Tier 3: 90 days
- Tier 4: 0 days (immediate)
- Tier 5: 180 days
The review period starts when the PR is opened and all required signatures are collected.
4. Requirements Met → Merge Enabled
Once all requirements are met:
- Required signatures collected
- Review period elapsed
- All CI checks pass
The PR can be merged.
Governance Tiers
Tier 1: Routine Maintenance
Scope: Bug fixes, documentation, performance optimizations
Requirements:
- Signatures: 3-of-5 maintainers
- Review Period: 7 days
- Restriction: Non-consensus changes only
Examples:
- Fixing a typo in documentation
- Performance optimization in non-consensus code
- Bug fix in node networking code
- Code refactoring
Tier 2: Feature Changes
Scope: New RPC methods, P2P changes, wallet features
Requirements:
- Signatures: 4-of-5 maintainers
- Review Period: 30 days
- Requirement: Must include technical specification
Examples:
- Adding a new RPC method
- Implementing a new P2P protocol feature
- Adding wallet functionality
- New SDK features
Tier 3: Consensus-Adjacent
Scope: Changes affecting consensus validation code
Requirements:
- Signatures: 5-of-5 maintainers
- Review Period: 90 days
- Requirement: Formal verification (BLVM Specification Lock) required
Examples:
- Changes to consensus validation logic
- Modifications to block/transaction validation
- Updates to consensus-critical algorithms
Note: This tier requires the most scrutiny because changes can affect network consensus.
Tier 4: Emergency Actions
Scope: Critical security patches, network-threatening bugs
Requirements:
- Signatures: 4-of-5 maintainers
- Review Period: 0 days (immediate)
- Requirement: Post-mortem required
Sub-tiers:
- Critical Emergency: Network-threatening (7 day maximum duration)
- Urgent Security: Security issues (30 day maximum duration)
- Elevated Priority: Important fixes (90 day maximum duration)
Examples:
- Critical security vulnerability
- Network-threatening bug
- Consensus-breaking issue requiring immediate fix
Tier 5: Governance Changes
Scope: Changes to governance rules themselves
Requirements:
- Signatures: Special process (5-of-7 maintainers + 2-of-3 emergency keyholders)
- Review Period: 180 days
Examples:
- Changing signature requirements
- Modifying review periods
- Updating governance tier definitions
Layer + Tier Combination
The governance system combines two dimensions:
- Layers (Repository Architecture) - Which repository the change affects
- Tiers (Action Classification) - What type of change is being made
When both apply, the system uses the “most restrictive wins” rule:
| Example | Layer | Tier | Final Signatures | Final Review | Source |
|---|---|---|---|---|---|
| Bug fix in Protocol Engine | 3 | 1 | 4-of-5 | 90 days | Layer 3 |
| New feature in Developer SDK | 5 | 2 | 4-of-5 | 30 days | Tier 2 |
| Consensus change in Orange Paper | 1 | 3 | 6-of-7 | 180 days | Layer 1 |
| Emergency fix in Reference Node | 4 | 4 | 4-of-5 | 0 days | Tier 4 |
See Layer-Tier Model for the complete decision matrix.
Signature Requirements by Layer
In addition to tier requirements, layers have their own signature requirements:
- Layer 1-2 (Constitutional): 6-of-7 maintainers, 180 days (365 for consensus changes)
- Layer 3 (Implementation): 4-of-5 maintainers, 90 days
- Layer 4 (Application): 3-of-5 maintainers, 60 days
- Layer 5 (Extension): 2-of-3 maintainers, 14 days
The most restrictive requirement (layer or tier) applies.
Maintainer Signing Process
How Maintainers Sign
- Review PR: Understand the change and its impact
- Generate signature: Use `blvm-sign` from blvm-sdk: `blvm-sign --message "Approve PR #123" --key ~/.blvm/maintainer.key`
- Post signature: Comment on the PR: `/governance-sign <signature>`
- Governance App verifies: Cryptographically verifies the signature
- Status check updates: Shows signature count progress
Signature Verification
The Governance App cryptographically verifies each signature:
- Uses secp256k1 ECDSA (Bitcoin-compatible)
- Verifies signature matches maintainer’s public key
- Ensures signature is for the correct PR
- Prevents signature reuse
Emergency Procedures
The system includes a three-tiered emergency response system:
Tier 1: Critical Emergency (Network-threatening)
- Review period: 0 days
- Signatures: 4-of-7 maintainers
- Activation: 5-of-7 emergency keyholders required
- Maximum duration: 7 days
Tier 2: Urgent Security Issue
- Review period: 7 days
- Signatures: 5-of-7 maintainers
- Maximum duration: 30 days
Tier 3: Elevated Priority
- Review period: 30 days
- Signatures: 6-of-7 maintainers
- Maximum duration: 90 days
How to Get Your PR Reviewed
1. Ensure PR is Ready
- All CI checks pass
- Code is well-documented
- Tests are included
- PR description is clear
2. Be Patient
Review periods vary by tier:
- Tier 1: 7 days minimum
- Tier 2: 30 days minimum
- Tier 3: 90 days minimum
- Tier 4: 0 days (immediate)
- Tier 5: 180 days minimum
3. Respond to Feedback
- Address review comments promptly
- Update PR as needed
- Keep PR description current
4. Keep PRs Small
- Smaller PRs are reviewed faster
- Easier to understand
- Less risk of issues
5. Communicate
- Update PR description if scope changes
- Respond to questions
- Ask for help if stuck
PR Status Indicators
Your PR will show status indicators:
- Signature progress: `3/5 signatures collected`
- Review period: `5 days remaining`
- CI status: All checks passing/failing
Common Questions
How do I know what tier my PR is?
The Governance App automatically classifies your PR. You’ll see the tier in the PR status checks.
Can I speed up the review process?
No. Review periods are fixed by tier to ensure adequate scrutiny. However, you can:
- Ensure your PR is ready (all checks pass)
- Respond to feedback quickly
- Keep PRs small and focused
What if I disagree with the tier classification?
Contact maintainers. There’s a temporary manual override available for tier classification.
Can I merge my own PR?
No. All PRs require maintainer signatures and review period to elapse, regardless of who opened it.
Additional Resources
- Contributing Guide - Complete developer workflow
- Governance Model - Detailed governance documentation
- Layer-Tier Model - Complete decision matrix
Release Process
This document explains how BLVM releases are created, what variants are available, version numbering, and how to verify releases.
Overview
BLVM uses an automated release pipeline that builds and releases the entire ecosystem when code is merged to main in any repository. The system uses Cargo’s dependency management to build repositories in the correct order.
Release Triggers
Automatic Release (Push to Main)
The release pipeline automatically triggers when:
- A commit is pushed to the `main` branch in any repository
- The commit changes code files (not just documentation)
- Paths ignored: `**.md`, `.github/**`, `docs/**`
What happens:
- Version is auto-incremented (patch version: X.Y.Z → X.Y.(Z+1))
- Dependencies are published to crates.io
- All repositories are built in dependency order
- Release artifacts are created
- GitHub release is created
- All repositories are tagged with the version
Manual Release (Workflow Dispatch)
You can manually trigger a release with:
- Custom version tag (e.g., `v0.2.0`)
- Platform selection (linux, windows, or both)
- Option to skip tagging (for testing)
When to use:
- Major or minor version bumps
- Coordinated releases
- Testing release process
Version Numbering
Automatic Version Bumping
When triggered by a push to main:
- Reads current version from `blvm/versions.toml` (from the `blvm-consensus` version)
- Auto-increments the patch version (X.Y.Z → X.Y.(Z+1))
- Generates a release set ID (e.g., `set-2025-0123`)
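The patch bump itself is simple string arithmetic. A sketch with a hard-coded current version standing in for the value read from `versions.toml`:

```shell
set -eu
current="0.1.7"            # would be read from blvm/versions.toml
major=${current%%.*}       # "0"
rest=${current#*.}         # "1.7"
minor=${rest%%.*}          # "1"
patch=${rest#*.}           # "7"
next="${major}.${minor}.$((patch + 1))"
echo "next release: v${next}"   # -> next release: v0.1.8
```

Major and minor bumps deliberately stay manual (workflow dispatch), so only the patch component is ever incremented automatically.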
Manual Version Override
When using workflow dispatch:
- Provide a specific version tag (e.g., `v0.2.0`)
- The pipeline uses your provided version instead of auto-incrementing
Semantic Versioning
BLVM uses Semantic Versioning:
- MAJOR (X.0.0): Breaking changes
- MINOR (0.X.0): New features, backward compatible
- PATCH (0.0.X): Bug fixes, backward compatible
Build Process
Dependency Order
The build follows Cargo’s dependency graph:
```text
1. blvm-consensus (no dependencies)
         ↓
2. blvm-protocol (depends on blvm-consensus)
         ↓
3. blvm-node (depends on blvm-protocol + blvm-consensus)
         ↓
4. blvm (depends on blvm-node)

Parallel:
5. blvm-sdk (no dependencies)
         ↓
6. blvm-commons (depends on blvm-sdk)
```
Build Variants
Each release includes two variants:
Base Variant
Purpose: Stable, production-ready
Features:
- Core functionality
- Production optimizations
- All standard Bitcoin features
Use for: Production deployments
Experimental Variant
Purpose: Full-featured with experimental features
Features: All base features plus:
- UTXO commitments
- Dandelion++ privacy relay
- BIP119 CheckTemplateVerify (CTV)
- Stratum V2 mining
- BIP158 compact block filters
- Signature operations counting
- Iroh transport support
Use for: Development, testing, advanced features
Platforms
Both variants are built for:
- Linux x86_64 (native)
- Windows x86_64 (cross-compiled with MinGW)
Release Artifacts
Binaries Included
Both variants include:
- `blvm` - Bitcoin reference node
- `blvm-keygen` - Key generation tool
- `blvm-sign` - Message signing tool
- `blvm-verify` - Signature verification tool
- `blvm-commons` - Governance application server (Linux only)
- `key-manager` - Key management utility
- `test-content-hash` - Content hash testing tool
- `test-content-hash-standalone` - Standalone content hash test
Archive Formats
Each platform/variant combination produces:
- `.tar.gz` archive (Linux/Unix)
- `.zip` archive (Windows/universal)
- `SHA256SUMS` file for verification
Release Notes
Automatically generated RELEASE_NOTES.md includes:
- Release date
- Component versions
- Build variant descriptions
- Installation instructions
- Verification instructions
Quality Assurance
Deterministic Build Verification
The pipeline verifies builds are reproducible by:
- Building once and saving binary hashes
- Cleaning and rebuilding
- Comparing hashes (must match exactly)
Note: Non-deterministic builds produce warnings (not failures) but should be fixed before production releases.
Test Execution
All repositories run their test suites:
- Unit tests
- Integration tests
- Library and binary tests
- Excluded: Doctests (for Phase 1 speed)
Test Requirements:
- All tests must pass
- 30-minute timeout per repository
- Single-threaded execution to avoid resource contention
Git Tagging
Automatic Tagging
When a release succeeds, the pipeline:
- Creates git tags in all repositories with the version tag
- Tags are annotated with release message
- Pushes tags to origin
Repositories Tagged:
- blvm-consensus
- blvm-protocol
- blvm-node
- blvm
- blvm-sdk
- blvm-commons
Tag Format
- Format: `vX.Y.Z` (e.g., `v0.1.0`)
- Semantic versioning
- Immutable once created
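The annotated-tag step can be sketched against a throwaway repository; the real pipeline does this once per repository and then pushes the tag to origin.

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "release commit"   # placeholder release commit

VERSION="v0.1.0"
# Annotated tag: carries a tagger, date, and release message
git -c user.email=ci@example.com -c user.name=ci \
    tag -a "$VERSION" -m "Release $VERSION"
git tag --list "$VERSION"
# The real pipeline then runs: git push origin "$VERSION"
```

Annotated tags (rather than lightweight ones) are the conventional choice for releases because they record who created the tag and when.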
GitHub Release
Release Creation
The pipeline creates a GitHub release with:
- Tag: Version tag (e.g., `v0.1.0`)
- Title: `Bitcoin Commons v0.1.0`
- Body: Generated from `RELEASE_NOTES.md`
- Artifacts: All binary archives and checksums
- Type: Official release (not prerelease)
Release Location
Releases are created in the blvm repository as the primary release point for the ecosystem.
Cargo Publishing
Publishing Strategy
To avoid compiling all dependencies when building the final blvm binary, all library dependencies are published to crates.io as part of the release process.
Publishing Order:
- blvm-consensus (no dependencies) → Published first
- blvm-protocol (depends on blvm-consensus) → Published after
- blvm-node (depends on blvm-protocol) → Published after
- blvm-sdk (no dependencies) → Published in parallel
Publishing Process
The release pipeline automatically:
- Publishes dependencies in dependency order to crates.io
- Waits for publication to complete before building dependents
- Updates Cargo.toml in dependent repos to use published versions
- Builds final binary using published crates (no compilation of dependencies)
Benefits
- Faster builds: Final binary uses pre-built dependencies
- Better caching: Cargo can cache published crates
- Version control: Exact versions published and tracked
- Reproducibility: Same versions available to all users
- Distribution: Users can depend on published crates directly
Crate Names
Published crates use the same names as the repositories:
- `blvm-consensus` → `blvm-consensus`
- `blvm-protocol` → `blvm-protocol`
- `blvm-node` → `blvm-node`
- `blvm-sdk` → `blvm-sdk`
Version Coordination
versions.toml
The blvm/versions.toml file tracks:
- Current version of each repository
- Dependency requirements
- Release set ID
Updating Versions
For major/minor version bumps:
- Manually edit `versions.toml`
- Update version numbers
- Trigger release with workflow dispatch
- Provide the new version tag
For patch releases:
- Automatic via push to main
- Patch version auto-increments
Release Verification
Verifying Release Artifacts
- Download artifacts from GitHub release
- Download SHA256SUMS file
- Verify checksums: `sha256sum -c SHA256SUMS`
- Verify signatures (if GPG signing is enabled)
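The checksum step can be rehearsed offline. This self-contained sketch fabricates an archive (the file name is illustrative) and verifies it with the same command used on real release downloads:

```shell
set -eu
dir=$(mktemp -d)
cd "$dir"
echo demo > blvm-0.1.0-linux-x86_64.tar.gz   # stand-in for a downloaded archive
sha256sum blvm-0.1.0-linux-x86_64.tar.gz > SHA256SUMS
sha256sum -c SHA256SUMS                      # prints "<file>: OK" on success
```

With a real release, download both the archive and `SHA256SUMS` from the GitHub release page into one directory before running `sha256sum -c`; a tampered or corrupted download fails the check with a non-zero exit code.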
Verifying Deterministic Builds
For deterministic build verification:
- Check release notes for deterministic build status
- Compare hashes from multiple builds (if available)
- Rebuild from source and compare hashes
Getting Notified of Releases
GitHub Notifications
- Watch repository: Get notified of all releases
- Release notifications: GitHub will notify you of new releases
Release Announcements
Releases may be announced via:
- GitHub release notes
- Project website
- Community channels (if configured)
Best Practices
When to Release
- Automatic: After merging PRs to main (recommended)
- Manual: For major/minor version bumps
- Skip: For documentation-only changes (auto-ignored)
Version Strategy
- Patch: Bug fixes, minor improvements (auto-increment)
- Minor: New features, backward compatible (manual)
- Major: Breaking changes (manual)
Release Frequency
- Regular: After each merge to main (automatic)
- Scheduled: For coordinated releases (manual)
- Emergency: For critical fixes (manual with version override)
Troubleshooting
Build Failures
Common Issues:
- Missing dependencies: Check all repos are cloned
- Cargo config issues: Pipeline auto-fixes common problems
- Windows cross-compile: Verify MinGW is installed
Solutions:
- Check build logs in GitHub Actions
- Verify all repositories are accessible
- Ensure Rust toolchain is up to date
Test Failures
Common Issues:
- Flaky tests: Check for timing issues
- Resource contention: Tests run single-threaded
- Timeout: Tests have 30-minute limit
Solutions:
- Review test output in logs
- Check for CI-specific test issues
- Consider skipping problematic tests temporarily
Tagging Failures
Common Issues:
- Tag already exists: Pipeline skips gracefully
- Permission issues: Verify `REPO_ACCESS_TOKEN` has write access
Solutions:
- Check if tag exists before release
- Verify token permissions
- Use the `skip_tagging` option for testing
Additional Resources
- CI/CD Workflows - Detailed CI/CD documentation
- Contributing Guide - Developer workflow
- GitHub Releases - All releases
Testing Infrastructure
Overview
Bitcoin Commons uses a multi-layered testing strategy combining formal verification, property-based testing, fuzzing, integration tests, and runtime assertions. This approach ensures correctness across consensus-critical code.
Testing Strategy
Layered Verification
The testing strategy uses multiple complementary techniques:
- Formal Verification: Proves correctness for all inputs (bounded)
- Property-Based Testing (Proptest): Verifies invariants with random inputs (unbounded)
- Fuzzing (libFuzzer): Discovers edge cases through random generation
- Integration Tests: Verifies end-to-end correctness
- Unit Tests: Tests individual functions
- Runtime Assertions: Catches violations during execution
- MIRI Integration: Detects undefined behavior
Code: CONSENSUS_COVERAGE_ASSESSMENT.md
Test Types
Unit Tests
Unit tests verify individual functions in isolation:
- Location: `tests/` directory, `#[test]` functions
- Coverage: Public functions
- Examples: Transaction validation, block validation, script execution
Code: estimate_test_coverage.py
Property-Based Tests
Property-based tests verify mathematical invariants:
- Location: `tests/consensus_property_tests.rs` and other property test files
- Coverage: Mathematical invariants
- Tool: Proptest
Code: consensus_property_tests.rs
Integration Tests
Integration tests verify end-to-end correctness:
- Location: `tests/integration/` directory
- Coverage: Multi-component scenarios
- Examples: BIP compliance, historical replay, mempool mining
Code: mod.rs
Fuzz Tests
Fuzz tests discover edge cases through random generation:
- Location: `fuzz/fuzz_targets/` directory
- Tool: libFuzzer
- Coverage: Critical consensus functions
Code: README.md
Formal Verification
Formal verification verifies correctness for all inputs:
- Location: `src/` and `tests/` directories
- Formal Verification: Proofs with tiered execution system
- Coverage: Critical consensus functions
- Tool: Formal verification tooling
Code: formal-verification.md
Spec-Lock Formal Verification
blvm-spec-lock provides formal verification of consensus and UTXO operations:
- Coverage: Functions with `#[spec_locked("section")]` annotations
- Tool: blvm-spec-lock (Z3-based)
- Mathematical Specifications: Verifies compliance with Orange Paper sections
Verified Properties (via spec-lock):
- Consensus rules (transaction, block, script validation)
- UTXO set operations where annotated
- Cryptographic primitives
Code: blvm-spec-lock
See Also: UTXO Commitments - Verification workflow
Runtime Assertions
Runtime assertions catch violations during execution:
- Coverage: Critical paths with runtime assertions
- Production: Available via feature flag
Code: CONSENSUS_COVERAGE_ASSESSMENT.md
MIRI Integration
MIRI detects undefined behavior:
- CI Integration: Automated undefined behavior detection
- Coverage: Property tests and critical unit tests
- Tool: MIRI interpreter
Code: CONSENSUS_COVERAGE_ASSESSMENT.md
Coverage Statistics
Overall Coverage
| Verification Technique | Status |
|---|---|
| Formal Proofs | ✅ Critical functions |
| Property Tests | ✅ All mathematical invariants |
| Runtime Assertions | ✅ All critical paths |
| Fuzz Targets | ✅ Edge case discovery |
| MIRI Integration | ✅ Undefined behavior detection |
| Mathematical Specs | ✅ Complete formal documentation |
Code: CONSENSUS_COVERAGE_ASSESSMENT.md
Coverage by Consensus Area
Verification coverage includes all major consensus areas:
- Economic Rules: Formal proofs, property tests, runtime assertions, and fuzz targets
- Proof of Work: Formal proofs, property tests, runtime assertions, and fuzz targets
- Transaction Validation: Formal proofs, property tests, runtime assertions, and fuzz targets
- Block Validation: Formal proofs, property tests, runtime assertions, and fuzz targets
- Script Execution: Formal proofs, property tests, runtime assertions, and fuzz targets
- Chain Reorganization: Formal proofs, property tests, and runtime assertions
- Cryptographic: Formal proofs, property tests, and runtime assertions
- Mempool: Formal proofs, runtime assertions, and fuzz targets
- SegWit: Formal proofs, runtime assertions, and fuzz targets
- Serialization: Formal proofs, runtime assertions, and fuzz targets
Code: EXACT_VERIFICATION_COUNTS.md
Running Tests
Run All Tests
```shell
cd blvm-consensus
cargo test
```
Run Specific Test Type
```shell
# Unit tests
cargo test --lib

# Property tests
cargo test --test consensus_property_tests

# Integration tests
cargo test --test integration

# Fuzz tests
cargo +nightly fuzz run transaction_validation
```
Run with MIRI
```shell
cargo +nightly miri test
```
Run blvm-spec-lock Proofs
```shell
cargo blvm-spec-lock
```
Code: formal-verification.md
Run Spec-Lock Verification
```shell
# Run spec-lock verification (requires cargo-spec-lock)
cargo spec-lock verify --crate-path .
```
Coverage Goals
Target Coverage
- blvm-spec-lock Proofs: All critical consensus functions
- Property Tests: All mathematical invariants
- Fuzz Targets: All critical validation paths
- Runtime Assertions: All critical code paths
- Integration Tests: All major workflows
Current Status
All coverage goals met:
- ✅ Formal proofs covering all critical functions
- ✅ Property test functions covering all invariants
- ✅ Fuzz targets covering all critical paths
- ✅ Runtime assertions in all critical paths
- ✅ Comprehensive integration test suite
Code: CONSENSUS_COVERAGE_ASSESSMENT.md
Test Organization
Directory Structure
blvm-consensus/
├── src/ # Source code with blvm-spec-lock proofs
├── tests/
│ ├── consensus_property_tests.rs # Main property tests
│ ├── integration/ # Integration tests
│ ├── unit/ # Unit tests
│ ├── fuzzing/ # Fuzzing helpers
│ └── verification/ # Verification tests
└── fuzz/
└── fuzz_targets/ # Fuzz targets
Edge Case Coverage
Beyond Proof Bounds
Edge cases beyond blvm-spec-lock proof bounds are covered by:
- Property-Based Testing: Random inputs of various sizes
- Mainnet Block Tests: Real Bitcoin mainnet blocks
- Integration Tests: Realistic scenarios
- Fuzz Testing: Random generation
Code: PROOF_LIMITATIONS.md
Differential Testing
Bitcoin Core Comparison
Differential tests compare behavior with Bitcoin Core:
- Location: tests/integration/differential_tests.rs
- Purpose: Verify consistency with Bitcoin Core
- Coverage: Critical consensus functions
Code: differential_tests.rs
CI Integration
Automated Testing
All tests run in CI:
- Unit Tests: Required for merge
- Property Tests: Required for merge
- Integration Tests: Required for merge
- Fuzz Tests: Run on schedule
- blvm-spec-lock Proofs: Run separately, not blocking
- MIRI: Run on property tests and critical unit tests
Code: formal-verification.md
Test Metrics
- Property Test Functions: Multiple functions across all files
- Runtime Assertions: Multiple assertions (assert! and debug_assert!)
- Fuzz Targets: Multiple fuzz targets
Code: EXACT_VERIFICATION_COUNTS.md
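The two assertion flavors counted above differ in when they run: assert! is enforced in all builds, while debug_assert! is compiled out of release builds. A minimal sketch, where MAX_MONEY and the function are illustrative stand-ins rather than the blvm-consensus API:

```rust
// Illustrative only: MAX_MONEY and record_output_value are hypothetical
// stand-ins, not the real blvm-consensus identifiers.
const MAX_MONEY: u64 = 21_000_000 * 100_000_000; // total supply in satoshis

fn record_output_value(value: u64) -> u64 {
    // Consensus-critical invariant: enforced in every build profile.
    assert!(value <= MAX_MONEY, "output exceeds MAX_MONEY");
    // Development-only sanity check: compiled out in release builds.
    debug_assert!(MAX_MONEY / 100_000_000 == 21_000_000);
    value
}

fn main() {
    assert_eq!(record_output_value(5_000), 5_000);
    println!("value accepted");
}
```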
Components
The testing infrastructure includes:
- Unit tests for all public functions
- Property-based tests for mathematical invariants
- Integration tests for end-to-end scenarios
- Fuzz tests for edge case discovery
- blvm-spec-lock proofs for formal verification
- Runtime assertions for execution-time checks
- MIRI integration for undefined behavior detection
- Differential tests for Bitcoin Core comparison
Location: blvm-consensus/tests/, blvm-consensus/fuzz/, blvm-consensus/src/
See Also
- Property-Based Testing - Verify mathematical invariants
- Fuzzing Infrastructure - Automated bug discovery
- Differential Testing - Compare with Bitcoin Core
- Benchmarking - Performance measurement
- Snapshot Testing - Output consistency verification
- Formal Verification - blvm-spec-lock model checking
- UTXO Commitments - Spec-lock verification for UTXO operations
- Contributing - Testing requirements for contributions
Fuzzing Infrastructure
Overview
Bitcoin Commons implements fuzzing infrastructure using libFuzzer for automated bug discovery. The system includes 19 fuzz targets covering all critical consensus validation functions, with sanitizer support and corpus management.
Fuzz Targets
Core Consensus (Critical)
- transaction_validation - Transaction parsing and validation
- block_validation - Block validation and connection
- script_execution - Script VM execution
- script_opcodes - Individual opcode execution
Advanced Features
- segwit_validation - SegWit weight calculations and witness validation
- mempool_operations - Mempool acceptance, RBF, standardness checks
- utxo_commitments - UTXO commitment verification
Infrastructure
- serialization - Serialization/deserialization round-trips
- pow_validation - Proof of Work validation and difficulty adjustment
- economic_validation - Supply and fee calculations
- compact_block_reconstruction - Compact block parsing
- differential_fuzzing - Internal consistency testing
- block_header_validation - Block header validation
- merkle_validation - Merkle tree validation
- signature_verification - Signature verification
- taproot_validation - Taproot validation
- transaction_input_validation - Transaction input validation
- transaction_output_validation - Transaction output validation
- reorganization - Chain reorganization handling
Location: blvm-consensus/fuzz/fuzz_targets/
Quick Start
Initialize Corpus
cd blvm-consensus/fuzz
./init_corpus.sh
This creates corpus directories and adds basic seed inputs for all targets.
Run a Fuzzing Campaign
# Run single target (5 minutes)
cargo +nightly fuzz run transaction_validation
# Run with corpus
cargo +nightly fuzz run transaction_validation fuzz/corpus/transaction_validation
# Run all targets (24 hours each, background)
./run_campaigns.sh --background
# Run with test runner (parallel execution)
python3 test_runner.py fuzz/corpus/ --parallel
Build with Sanitizers
# AddressSanitizer (ASAN)
./build_with_sanitizers.sh asan
# UndefinedBehaviorSanitizer (UBSAN)
./build_with_sanitizers.sh ubsan
# MemorySanitizer (MSAN)
./build_with_sanitizers.sh msan
# All sanitizers
./build_with_sanitizers.sh all
libFuzzer Integration
Primary Fuzzing Engine
libFuzzer is the primary fuzzing engine, providing:
- Coverage-guided fuzzing
- Automatic corpus management
- Crash reproduction
- Mutation-based input generation
Fuzz Target Structure
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Parse and validate input
    if let Ok(transaction) = parse_transaction(data) {
        // Test validation function
        let _ = validate_transaction(&transaction);
    }
});
Location: blvm-consensus/fuzz/fuzz_targets/
Sanitizer Support
AddressSanitizer (ASAN)
Detects memory errors:
- Use-after-free
- Buffer overflows
- Memory leaks
- Double-free
Usage:
RUSTFLAGS="-Zsanitizer=address" cargo +nightly fuzz run transaction_validation
UndefinedBehaviorSanitizer (UBSAN)
Detects undefined behavior:
- Integer overflow
- Null pointer dereference
- Invalid shifts
- Type mismatches
Usage:
RUSTFLAGS="-Zsanitizer=undefined" cargo +nightly fuzz run transaction_validation
MemorySanitizer (MSAN)
Detects uninitialized memory reads:
- Uninitialized stack reads
- Uninitialized heap reads
- Uninitialized memory in structs
Usage:
RUSTFLAGS="-Zsanitizer=memory" cargo +nightly fuzz run transaction_validation
Corpus Management
Corpus Structure
fuzz/corpus/
├── transaction_validation/
├── block_validation/
├── script_execution/
├── script_opcodes/
├── segwit_validation/
├── mempool_operations/
├── utxo_commitments/
├── serialization/
├── pow_validation/
├── economic_validation/
├── compact_block_reconstruction/
└── differential_fuzzing/
Corpus Initialization
The init_corpus.sh script:
- Creates corpus directories for all targets
- Adds basic seed inputs
- Sets up corpus structure
Code: blvm-consensus/fuzz/init_corpus.sh
Corpus Growth
Corpus grows automatically as libFuzzer discovers new code paths:
- Coverage-guided selection
- Mutation-based generation
- Automatic deduplication
- Persistent storage
Test Runner
Parallel Execution
The test_runner.py script provides:
- Parallel fuzzing across targets
- Corpus management
- Crash reproduction
- Sanitizer integration
- Progress tracking
Usage:
python3 test_runner.py fuzz/corpus/ --parallel
Code: blvm-consensus/fuzz/test_runner.py
Sequential Execution
For debugging or resource-constrained environments:
python3 test_runner.py fuzz/corpus/ --sequential
Differential Fuzzing
Internal Consistency Testing
Differential fuzzing verifies internal consistency without relying on Bitcoin Core:
- Multiple implementations of same function
- Round-trip properties
- Invariant checking
- Cross-component validation
Code: blvm-consensus/fuzz/fuzz_targets/differential_fuzzing.rs
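The round-trip flavor of these checks can be sketched with a toy codec; the varint encoder/decoder pair below is illustrative, not the real blvm-consensus serialization:

```rust
// Hedged sketch of a round-trip differential check: encode then decode
// must reproduce the original value. The LEB128-style varint codec here
// is a stand-in, not the real blvm-consensus format.
fn encode_varint(mut v: u64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte);
            return out;
        }
        out.push(byte | 0x80); // continuation bit
    }
}

fn decode_varint(bytes: &[u8]) -> Option<u64> {
    let mut v: u64 = 0;
    for (i, b) in bytes.iter().enumerate() {
        if i >= 10 {
            return None; // overlong encoding for a u64
        }
        v |= ((b & 0x7f) as u64) << (7 * i);
        if b & 0x80 == 0 {
            return Some(v);
        }
    }
    None // truncated input
}

fn main() {
    // Differential property: decode(encode(x)) == x for every x tried.
    for &x in &[0u64, 1, 127, 128, 300, u64::MAX] {
        assert_eq!(decode_varint(&encode_varint(x)), Some(x));
    }
}
```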
CI Integration
Continuous Fuzzing
Fuzzing runs in CI via GitHub Actions:
- Automated corpus updates
- Crash detection
- Sanitizer builds
- Coverage tracking
Location: .github/workflows/fuzz.yml
Running Fuzzing Campaigns
Short Verification (5 minutes each)
./run_campaigns.sh 300
Full Campaigns (24 hours each)
./run_campaigns.sh --background
Individual Target
cargo +nightly fuzz run transaction_validation -- -max_total_time=3600
Code: blvm-consensus/fuzz/run_campaigns.sh
Crash Reproduction
Reproducing Crashes
# Run with crash input
cargo +nightly fuzz run transaction_validation crash_inputs/crash-abc123
# Run with sanitizer for detailed error
RUSTFLAGS="-Zsanitizer=address" cargo +nightly fuzz run transaction_validation crash_inputs/crash-abc123
Crash Analysis
Crashes are automatically:
- Saved to fuzz/artifacts/
- Tagged with target name
- Reproducible with exact input
- Analyzable with sanitizers
Coverage Tracking
Coverage Reports
Generate coverage reports:
cargo +nightly fuzz coverage transaction_validation
Coverage Analysis
- Identify untested code paths
- Guide fuzzing improvements
- Measure fuzzing effectiveness
- Track coverage over time
Best Practices
Seed Inputs
Provide diverse seed inputs:
- Valid transactions/blocks
- Edge cases
- Boundary conditions
- Real-world examples
Corpus Maintenance
- Regularly update corpus
- Remove redundant inputs
- Add interesting inputs manually
- Share corpus across runs
Sanitizer Usage
- Run with ASAN regularly
- Use UBSAN for undefined behavior
- Use MSAN for uninitialized memory
- Combine sanitizers for comprehensive testing
Components
The fuzzing infrastructure includes:
- 19 fuzz targets covering all critical functions
- libFuzzer integration
- Sanitizer support (ASAN, UBSAN, MSAN)
- Corpus management
- Test runner for automation
- Differential fuzzing
- CI integration
Location: blvm-consensus/fuzz/
See Also
- Testing Infrastructure - Overview of all testing techniques
- Property-Based Testing - Verify invariants with random inputs
- Differential Testing - Compare with Bitcoin Core
- Formal Verification - BLVM Specification Lock verification
- Contributing - Testing requirements for contributions
Property-Based Testing
Overview
Bitcoin Commons uses property-based testing with Proptest to verify mathematical invariants across thousands of random inputs. Property tests are concentrated in the main consensus_property_tests.rs file, with additional property test functions spread across the remaining test files.
Property Test Categories
Economic Rules
- prop_block_subsidy_halving_schedule - Verifies subsidy halves every 210,000 blocks
- prop_total_supply_monotonic_bounded - Verifies supply increases monotonically and is bounded
- prop_block_subsidy_non_negative_decreasing - Verifies subsidy is non-negative and decreasing
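The halving invariant can be sketched without the proptest harness; block_subsidy below is a simplified stand-in for the consensus function, assuming the standard 50 BTC initial subsidy and the 210,000-block halving interval:

```rust
// Simplified stand-in for the consensus subsidy function (the real
// blvm-consensus signature may differ).
fn block_subsidy(height: u64) -> u64 {
    const INITIAL_SUBSIDY: u64 = 50 * 100_000_000; // 50 BTC in satoshis
    const HALVING_INTERVAL: u64 = 210_000;
    let halvings = height / HALVING_INTERVAL;
    if halvings >= 64 {
        return 0; // shifting a u64 by >= 64 bits would be undefined
    }
    INITIAL_SUBSIDY >> halvings
}

fn main() {
    // Invariant: the subsidy halves exactly at each 210,000-block boundary.
    for halvings in 1..10u64 {
        let boundary = halvings * 210_000;
        assert_eq!(block_subsidy(boundary), block_subsidy(boundary - 1) / 2);
    }
    // Invariant: the subsidy is non-increasing with height.
    assert!(block_subsidy(0) >= block_subsidy(210_000));
}
```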
Proof of Work
- prop_pow_target_expansion_valid_range - Verifies target expansion within valid range
- prop_target_expansion_deterministic - Verifies target expansion is deterministic
Transaction Validation
- prop_transaction_output_value_bounded - Verifies output values are bounded
- prop_transaction_non_empty_inputs_outputs - Verifies transactions have inputs and outputs
- prop_transaction_size_bounded - Verifies transaction size is bounded
- prop_coinbase_script_sig_length - Verifies coinbase script sig length limits
- prop_transaction_validation_deterministic - Verifies validation is deterministic
Script Execution
- prop_script_execution_deterministic - Verifies script execution is deterministic
- prop_script_size_bounded - Verifies script size is bounded
- prop_script_execution_performance_bounded - Verifies script execution performance
Performance
- prop_sha256_performance_bounded - Verifies SHA256 performance
- prop_double_sha256_performance_bounded - Verifies double SHA256 performance
- prop_transaction_validation_performance_bounded - Verifies transaction validation performance
- prop_script_execution_performance_bounded - Verifies script execution performance
- prop_block_subsidy_constant_time - Verifies block subsidy calculation is constant-time
- prop_target_expansion_performance_bounded - Verifies target expansion performance
Deterministic Execution
- prop_transaction_validation_deterministic - Verifies transaction validation determinism
- prop_block_subsidy_deterministic - Verifies block subsidy determinism
- prop_total_supply_deterministic - Verifies total supply determinism
- prop_target_expansion_deterministic - Verifies target expansion determinism
- prop_fee_calculation_deterministic - Verifies fee calculation determinism
Integer Overflow Safety
- prop_fee_calculation_overflow_safety - Verifies fee calculation overflow safety
- prop_output_value_overflow_safety - Verifies output value overflow safety
- prop_total_supply_overflow_safety - Verifies total supply overflow safety
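Overflow safety of this kind typically relies on Rust's checked arithmetic; a hedged sketch, where MAX_MONEY and the function shape are assumptions rather than the real API:

```rust
// Illustrative fee calculation with explicit overflow/underflow checks.
// MAX_MONEY and checked_fee are stand-ins for the real consensus code.
const MAX_MONEY: u64 = 21_000_000 * 100_000_000;

fn checked_fee(input_sum: u64, output_sum: u64) -> Option<u64> {
    // A valid fee requires both sums within MAX_MONEY...
    if input_sum > MAX_MONEY || output_sum > MAX_MONEY {
        return None;
    }
    // ...and inputs >= outputs; checked_sub returns None on underflow.
    input_sum.checked_sub(output_sum)
}

fn main() {
    assert_eq!(checked_fee(1_000, 900), Some(100));
    assert_eq!(checked_fee(900, 1_000), None); // outputs exceed inputs
    assert_eq!(checked_fee(u64::MAX, 0), None); // beyond MAX_MONEY
}
```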
Temporal/State Transition
- prop_supply_never_decreases_across_blocks - Verifies supply never decreases
- prop_reorganization_preserves_supply - Verifies reorganization preserves supply
- prop_supply_matches_expected_across_blocks - Verifies supply matches expected values
Compositional Verification
- prop_connect_block_composition - Verifies block connection composition
- prop_disconnect_connect_idempotency - Verifies disconnect/connect idempotency
SHA256 Correctness
- sha256_matches_reference - Verifies SHA256 matches reference implementation
- double_sha256_matches_reference - Verifies double SHA256 matches reference
- sha256_idempotent - Verifies SHA256 idempotency
- sha256_deterministic - Verifies SHA256 determinism
- sha256_output_length - Verifies SHA256 output length
- double_sha256_output_length - Verifies double SHA256 output length
Location: blvm-consensus/tests/consensus_property_tests.rs
Proptest Integration
Basic Usage
use proptest::prelude::*;

proptest! {
    #[test]
    fn prop_function_invariant(input in strategy) {
        let result = function_under_test(input);
        prop_assert!(result.property_holds());
    }
}
Strategy Generation
Proptest generates random inputs using strategies:
// Integer strategy
let height_strategy = 0u64..10_000_000;

// Vector strategy
let tx_strategy = prop::collection::vec(tx_strategy, 1..1000);

// Custom strategy
let block_strategy = (height_strategy, tx_strategy).prop_map(|(h, txs)| {
    Block::new(h, txs)
});
Property Assertions
// Basic assertion
prop_assert!(condition);

// Assertion with message
prop_assert!(condition, "Property failed: {}", reason);

// Assertion with equality
prop_assert_eq!(actual, expected);
Property Test Patterns
Invariant Testing
Test that invariants hold across all inputs:
proptest! {
    #[test]
    fn prop_subsidy_non_negative(height in 0u64..10_000_000) {
        let subsidy = get_block_subsidy(height);
        prop_assert!(subsidy >= 0);
    }
}
Round-Trip Properties
Test that operations are reversible:
proptest! {
    #[test]
    fn prop_serialization_round_trip(tx in tx_strategy()) {
        let serialized = serialize(&tx);
        let deserialized = deserialize(&serialized).expect("round-trip deserialization");
        prop_assert_eq!(tx, deserialized);
    }
}
Determinism Properties
Test that functions are deterministic:
proptest! {
    #[test]
    fn prop_deterministic(input in input_strategy()) {
        let result1 = function(input.clone());
        let result2 = function(input);
        prop_assert_eq!(result1, result2);
    }
}
Bounds Properties
Test that values stay within bounds:
proptest! {
    #[test]
    fn prop_value_bounded(value in 0i64..MAX_MONEY) {
        let result = process_value(value);
        prop_assert!(result >= 0 && result <= MAX_MONEY);
    }
}
Additional Property Tests
Comprehensive Property Tests
Location: tests/unit/comprehensive_property_tests.rs
- Multiple proptest! blocks covering comprehensive scenarios
Script Opcode Property Tests
Location: tests/unit/script_opcode_property_tests.rs
- Multiple proptest! blocks for script opcode testing
SegWit/Taproot Property Tests
Location: tests/unit/segwit_taproot_property_tests.rs
- Multiple proptest! blocks for SegWit and Taproot
Edge Case Property Tests
Multiple files with edge case testing:
- tests/unit/block_edge_cases.rs: Multiple proptest! blocks
- tests/unit/economic_edge_cases.rs: Multiple proptest! blocks
- tests/unit/reorganization_edge_cases.rs: Multiple proptest! blocks
- tests/unit/transaction_edge_cases.rs: Multiple proptest! blocks
- tests/unit/utxo_edge_cases.rs: Multiple proptest! blocks
- tests/unit/difficulty_edge_cases.rs: Multiple proptest! blocks
- tests/unit/mempool_edge_cases.rs: Multiple proptest! blocks
Cross-BIP Property Tests
Location: tests/cross_bip_property_tests.rs
- Multiple proptest! blocks for cross-BIP validation
Statistics
- Property Test Blocks: Multiple proptest! blocks across all test files
- Property Test Functions: Multiple prop_* functions across all test files
Running Property Tests
Run All Property Tests
cargo test --test consensus_property_tests
Run Specific Property Test
cargo test --test consensus_property_tests prop_block_subsidy_halving_schedule
Run with Verbose Output
cargo test --test consensus_property_tests -- --nocapture
Run with MIRI
cargo +nightly miri test --test consensus_property_tests
Shrinking
Proptest automatically shrinks failing inputs to minimal examples:
- Initial Failure: Large random input fails
- Shrinking: Proptest reduces input size
- Minimal Example: Smallest input that still fails
- Debugging: Minimal example is easier to debug
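The shrinking loop can be illustrated with a dependency-free sketch; the hypothetical bug and the element-removal strategy below are simplifications of proptest's strategy-aware shrinkers:

```rust
// Hypothetical bug for illustration: any input containing 0x42 fails.
fn fails(input: &[u8]) -> bool {
    input.contains(&0x42)
}

// Greedy shrinker: repeatedly drop elements while the failure persists,
// stopping at a locally minimal failing input. Proptest's real shrinkers
// are strategy-aware and more sophisticated.
fn shrink(mut input: Vec<u8>) -> Vec<u8> {
    loop {
        let mut reduced = false;
        for i in 0..input.len() {
            let mut candidate = input.clone();
            candidate.remove(i);
            if fails(&candidate) {
                input = candidate; // keep the smaller failing input
                reduced = true;
                break;
            }
        }
        if !reduced {
            return input; // minimal failing example
        }
    }
}

fn main() {
    let original = vec![1, 2, 0x42, 3, 4, 0x42, 5];
    let minimal = shrink(original);
    assert_eq!(minimal, vec![0x42]); // smallest input that still fails
}
```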
Configuration
Test Cases
Default: 256 test cases per property test
proptest! {
    #![proptest_config(ProptestConfig::with_cases(1000))]
    #[test]
    fn prop_test(input in strategy) {
        // ...
    }
}
Max Shrink Iterations
Default: 65536 shrink iterations
proptest! {
    #![proptest_config(ProptestConfig {
        max_shrink_iters: 10000,
        ..ProptestConfig::default()
    })]
    #[test]
    fn prop_test(input in strategy) {
        // ...
    }
}
Integration with Formal Verification
Property tests complement Z3 proofs:
- BLVM Specification Lock (Z3): Proves correctness for all inputs within stated bounds
- Property Tests: Checks invariants on random samples from otherwise unbounded input spaces
- Combined: Comprehensive verification coverage
Components
The property-based testing system includes:
- Property tests in main test file
- Property test blocks across all files
- Property test functions
- Proptest integration
- Strategy generation
- Automatic shrinking
- MIRI integration
Location: blvm-consensus/tests/consensus_property_tests.rs, blvm-consensus/tests/unit/
See Also
- Testing Infrastructure - Overview of all testing techniques
- Fuzzing Infrastructure - Automated bug discovery
- Differential Testing - Compare with Bitcoin Core
- Formal Verification - BLVM Specification Lock verification
- Contributing - Testing requirements for contributions
Benchmarking Infrastructure
Overview
Bitcoin Commons maintains a comprehensive benchmarking infrastructure to measure and track performance across all components. Benchmarks are automatically generated and published at benchmarks.thebitcoincommons.org.
Benchmark Infrastructure
blvm-bench Repository
The benchmarking infrastructure is maintained in a separate repository (blvm-bench) that:
- Runs performance benchmarks across all BLVM components
- Executes benchmarks in parallel for efficient testing
- Provides differential testing infrastructure (comparing against Bitcoin Core)
- Benchmarks FIBRE protocol performance
- Generates benchmark reports and visualizations
- Publishes results to benchmarks.thebitcoincommons.org
- Tracks performance over time
- Compares performance with Bitcoin Core
Automated Benchmark Generation
Benchmarks are generated automatically via GitHub Actions workflows:
- Scheduled Runs: Regular benchmark runs on schedule
- PR Triggers: Benchmarks run on pull requests
- Release Triggers: Comprehensive benchmarks before releases
- Results Publishing: Automatic publishing to benchmark website
Published Benchmarks
Benchmark Website
All benchmark results are published at:
- URL: benchmarks.thebitcoincommons.org
- Content: Performance metrics, comparisons, historical trends
- Format: Interactive charts and detailed reports
Benchmark Categories
Benchmarks cover:
- Consensus Performance
  - Block validation speed
  - Transaction validation speed
  - Script execution performance
  - UTXO operations
- Network Performance
  - P2P message handling
  - Block propagation
  - Transaction relay
  - Network protocol overhead
- Storage Performance
  - Database operations
  - Index operations
  - Cache performance
  - Disk I/O
- Memory Performance
  - Memory usage
  - Allocation patterns
  - Cache efficiency
  - Memory leaks
Running Benchmarks Locally
Prerequisites
# Optional: install the cargo-criterion CLI (Criterion.rs itself is a
# dev-dependency of the crate, not a cargo-installable tool)
cargo install cargo-criterion
# Build benchmark dependencies
cargo build --release --benches
Run All Benchmarks
cd blvm-consensus
cargo bench
Run Specific Benchmark
# Run specific benchmark suite
cargo bench --bench block_validation
# Run specific benchmark
cargo bench --bench block_validation -- block_connect
Benchmark Configuration
Benchmarks can be configured via environment variables:
# Set benchmark iterations
export BENCH_ITERATIONS=1000
# Set benchmark warmup time
export BENCH_WARMUP_SECS=5
# Set benchmark measurement time
export BENCH_MEASUREMENT_SECS=10
Benchmark Structure
Criterion Benchmarks
Benchmarks use the Criterion.rs framework:
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn benchmark_block_validation(c: &mut Criterion) {
    c.bench_function("block_connect", |b| {
        let block = create_test_block();
        b.iter(|| {
            black_box(validate_block(&block));
        });
    });
}

criterion_group!(benches, benchmark_block_validation);
criterion_main!(benches);
Benchmark Groups
Benchmarks are organized into groups:
- Block Validation: Block connection, header validation
- Transaction Validation: Transaction parsing, input validation
- Script Execution: Script VM performance, opcode execution
- Cryptographic: SHA256, double SHA256, signature verification
- UTXO Operations: UTXO set updates, lookups, batch operations
Interpreting Results
Performance Metrics
Benchmarks report:
- Throughput: Operations per second
- Latency: Time per operation
- Memory: Memory usage per operation
- CPU: CPU utilization
Comparison with Bitcoin Core
Benchmarks include comparisons with Bitcoin Core:
- Relative Performance: Speedup/slowdown vs Bitcoin Core
- Feature Parity: Functional equivalence verification
- Optimization Impact: Performance impact of optimizations
Historical Trends
Benchmark results track performance over time:
- Performance Regression Detection: Identify performance regressions
- Optimization Validation: Verify optimization effectiveness
- Release Impact: Measure performance impact of releases
Benchmark Workflows
GitHub Actions
Benchmark workflows in blvm-bench:
- Scheduled Benchmarks: Daily/weekly benchmark runs
- PR Benchmarks: Benchmark on pull requests
- Release Benchmarks: Comprehensive benchmarks before releases
- Results Publishing: Automatic publishing to website
Benchmark Artifacts
Workflows generate:
- Benchmark Reports: Detailed performance reports
- Visualizations: Charts and graphs
- Comparison Data: Bitcoin Core comparisons
- Historical Data: Performance trends
Performance Targets
Consensus Performance
- Block Validation: Target <100ms per block (mainnet average)
- Transaction Validation: Target <1ms per transaction
- Script Execution: Target <10ms per script (average complexity)
Network Performance
- Block Propagation: Target <1s for block propagation
- Transaction Relay: Target <100ms for transaction relay
- P2P Overhead: Target <5% protocol overhead
Storage Performance
- Database Operations: Target <10ms for common queries
- Index Operations: Target <1ms for index lookups
- Cache Hit Rate: Target >90% cache hit rate
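Whether a measurement meets these targets is a simple conversion between per-operation latency and throughput; a small sketch with hypothetical numbers:

```rust
// Convert a measured per-operation latency into throughput, then check
// it against the targets above. The measurements here are hypothetical.
fn throughput_per_sec(latency_ms: f64) -> f64 {
    1_000.0 / latency_ms
}

fn main() {
    let block_validation_ms = 80.0; // hypothetical measurement
    assert!(block_validation_ms < 100.0, "block validation target missed");

    // 0.5 ms per transaction implies 2,000 tx/s, meeting the <1 ms target.
    let tps = throughput_per_sec(0.5);
    assert!(tps >= 1_000.0);
}
```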
Benchmark Best Practices
Benchmark Design
- Isolate Components: Benchmark individual components
- Use Realistic Data: Use real-world data when possible
- Warm Up: Include warmup iterations
- Multiple Runs: Run benchmarks multiple times
- Statistical Analysis: Use statistical methods for accuracy
Benchmark Maintenance
- Regular Updates: Update benchmarks with code changes
- Performance Monitoring: Monitor for regressions
- Documentation: Document benchmark methodology
- Reproducibility: Ensure benchmarks are reproducible
Components
The benchmarking infrastructure includes:
- Criterion.rs benchmark framework
- Automated benchmark generation (GitHub Actions)
- Benchmark website (benchmarks.thebitcoincommons.org)
- Performance tracking and visualization
- Bitcoin Core comparisons
- Historical performance trends
Location: blvm-bench repository, benchmark results at benchmarks.thebitcoincommons.org
See Also
- Testing Infrastructure - Overview of all testing techniques
- Node Performance - Performance optimizations
- CI/CD Workflows - CI integration
- Contributing - Performance requirements for contributions
Differential Testing
Overview
Bitcoin Commons implements differential testing to compare validation results with Bitcoin Core, providing empirical validation of consensus correctness. This ensures compatibility and catches consensus divergences.
Differential testing is fully implemented in blvm-bench, with parallel execution support and comprehensive Bitcoin Core comparison infrastructure.
Purpose
Differential testing serves to:
- Verify Compatibility: Ensure validation results match Bitcoin Core
- Catch Divergences: Detect consensus differences early
- Empirical Validation: Exercise validation against real-world data
- Automate Consistency: Run consistency checks without manual effort
Implementation Location
Primary Implementation: blvm-bench repository
- Full differential testing infrastructure
- Parallel execution support
- Bitcoin Core RPC integration
- Regtest node management
- Historical block replay testing
- BIP-specific differential tests
- FIBRE performance benchmarks
Code: differential_tests.rs (skeleton)
Architecture
Comparison Process
- Local Validation: Validate transaction/block locally
- Core RPC Call: Call Bitcoin Core RPC for validation
- Result Comparison: Compare local and Core results
- Divergence Detection: Report any divergences
Code: differential_tests.rs
Bitcoin Core RPC Integration
Differential tests use Bitcoin Core RPC:
- testmempoolaccept: Transaction validation
- submitblock: Block validation
- JSON-RPC 2.0: Standard RPC protocol
Code: differential_tests.rs
Transaction Validation Comparison
Comparison Function
pub async fn compare_transaction_validation(
    tx: &Transaction,
    config: &CoreRpcConfig,
) -> Result<ComparisonResult>
Process:
- Validate transaction locally using check_transaction
- Serialize transaction to hex
- Call Bitcoin Core's testmempoolaccept RPC
- Compare validation results
- Report divergence if results differ
Code: differential_tests.rs
Block Validation Comparison
Comparison Function
pub async fn compare_block_validation(
    block: &Block,
    config: &CoreRpcConfig,
) -> Result<ComparisonResult>
Process:
- Validate block locally using connect_block
- Serialize block to hex
- Call Bitcoin Core's submitblock RPC
- Compare validation results
- Report divergence if results differ
Code: differential_tests.rs
Configuration
Core RPC Configuration
pub struct CoreRpcConfig {
    pub url: String,              // e.g., "http://127.0.0.1:8332"
    pub username: Option<String>, // RPC username
    pub password: Option<String>, // RPC password
}
Default: http://127.0.0.1:8332 (local Bitcoin Core)
Code: differential_tests.rs
Differential Fuzzing
Fuzz Target
Differential fuzzing compares internal consistency:
- Serialization Round-Trips: Ensures serialize→deserialize preserves properties
- Validation Consistency: Same transaction validates the same way after round-trip
- Calculation Idempotency: Weight calculations, economic calculations are deterministic
- Cross-Validation: Different code paths agree on validation results
Code: differential_fuzzing.rs
Internal Consistency
Differential fuzzing tests internal consistency within blvm-consensus:
- No Bitcoin Core Dependency: Tests blvm-consensus independently
- Round-Trip Properties: Serialization round-trips
- Validation Consistency: Validation consistency across code paths
- Calculation Determinism: Deterministic calculations
Code: README.md
Bitcoin Core Test Vectors
Test Vector Integration
Bitcoin Core test vectors are integrated:
- Transaction Vectors: tx_valid.json, tx_invalid.json
- Script Vectors: script_valid.json, script_invalid.json
- Block Vectors: block_valid.json, block_invalid.json
Code: BLINDSPOT_COVERAGE_REPORT.md
Test Execution
Test vectors are executed:
- Parsing: Parse test vector JSON files
- Validation: Execute validation with test vectors
- Pass/Fail Reporting: Report test results
- Graceful Handling: Handle missing test data gracefully
Code: BLINDSPOT_COVERAGE_REPORT.md
Mainnet Block Tests
Real Block Validation
Real Bitcoin mainnet blocks are used:
- Genesis Block: Genesis block validation
- SegWit Activation: SegWit activation block validation
- Taproot Activation: Taproot activation block validation
- Historical Blocks: Blocks from all consensus eras
Code: BLINDSPOT_COVERAGE_REPORT.md
Historical Consensus Tests
Historical Validation
Historical consensus validation tests:
- CVE-2012-2459: Merkle tree duplicate hash test framework
- Pre-SegWit: Block validation (height < 481824)
- Post-SegWit: Block validation (height >= 481824)
- Post-Taproot: Block validation (height >= 709632)
- Halving Points: Historical block subsidy calculations
- Difficulty Adjustment: Historical difficulty adjustment tests
Code: BLINDSPOT_COVERAGE_REPORT.md
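The CVE-2012-2459 weakness can be demonstrated with a dependency-free sketch. DefaultHasher stands in for the real double-SHA256, and the point is that duplicating the last leaf of an odd level yields the same root as the undoubled list, which is why implementations must detect and reject the duplication:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for double-SHA256 (illustrative only).
fn combine(a: u64, b: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (a, b).hash(&mut h);
    h.finish()
}

// Bitcoin-style merkle root: odd levels duplicate their last element.
fn merkle_root(mut level: Vec<u64>) -> u64 {
    while level.len() > 1 {
        if level.len() % 2 == 1 {
            level.push(*level.last().unwrap()); // the problematic duplication
        }
        level = level.chunks(2).map(|p| combine(p[0], p[1])).collect();
    }
    level[0]
}

fn main() {
    // Three leaves vs. the same three with the last duplicated:
    // both produce the same root, so the root alone cannot
    // distinguish the two transaction lists.
    let a = merkle_root(vec![1, 2, 3]);
    let b = merkle_root(vec![1, 2, 3, 3]);
    assert_eq!(a, b);
}
```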
Usage
Running Differential Tests
Differential tests are run from the blvm-bench repository:
cd blvm-bench
cargo test --features differential
Or run specific BIP differential tests:
cargo test --test bip_differential
Prerequisites
- Bitcoin Core binary available (auto-discovered or via CORE_PATH environment variable)
- blvm-bench repository cloned
- Network connectivity for RPC calls (if using remote Core node)
Note: The placeholder in blvm-consensus is not functional and should not be used.
Interpretation
Comparison Results
pub struct ComparisonResult {
    pub local_valid: bool,
    pub core_valid: bool,
    pub divergence: bool,
    pub divergence_reason: Option<String>,
}
Code: differential_tests.rs
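A sketch of the divergence check this struct supports; the field names follow the struct above, while the compare constructor is an assumption made for illustration:

```rust
// Field names mirror the ComparisonResult struct in the docs; the
// compare() constructor is a hypothetical helper, not the real API.
#[derive(Debug)]
struct ComparisonResult {
    local_valid: bool,
    core_valid: bool,
    divergence: bool,
    divergence_reason: Option<String>,
}

fn compare(local_valid: bool, core_valid: bool) -> ComparisonResult {
    // Divergence means the two implementations disagree on validity.
    let divergence = local_valid != core_valid;
    ComparisonResult {
        local_valid,
        core_valid,
        divergence,
        divergence_reason: divergence
            .then(|| format!("local={} core={}", local_valid, core_valid)),
    }
}

fn main() {
    assert!(!compare(true, true).divergence); // agreement: no divergence
    assert!(compare(true, false).divergence); // disagreement is flagged
}
```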
Divergence Handling
When divergence is detected:
- Report: Detailed divergence report
- Investigation: Investigate root cause
- Fix: Fix consensus bug if found
- Verification: Re-run tests to verify fix
Automated Consistency Checks
CI Integration
Differential tests can be integrated into CI:
- On PRs: Run differential tests on pull requests
- On Schedule: Regular scheduled runs
- Divergence Detection: Fail CI on divergence
- Reporting: Report divergences with details
Benefits
- Compatibility: Ensures compatibility with Bitcoin Core
- Early Detection: Catches consensus divergences early
- Empirical Validation: Real-world validation testing
- Automated: Consistency checks run without manual intervention
- Comprehensive: Tests across all consensus eras
Components
The differential testing system includes:
- Transaction validation comparison
- Block validation comparison
- Bitcoin Core RPC integration
- Differential fuzzing
- Bitcoin Core test vector integration
- Mainnet block tests
- Historical consensus tests
Primary Location: blvm-bench repository
- blvm-bench/src/core_builder.rs - Bitcoin Core binary detection
- blvm-bench/src/regtest_node.rs - Regtest node management
- blvm-bench/src/core_rpc_client.rs - RPC client wrapper
- blvm-bench/src/differential.rs - Comparison framework
- blvm-bench/tests/integration/bip_differential.rs - BIP-specific tests
Placeholder Location: blvm-consensus/tests/integration/differential_tests.rs (skeleton, not fully implemented)
Differential Fuzzing: blvm-consensus/fuzz/fuzz_targets/differential_fuzzing.rs (internal consistency testing, not Core comparison)
See Also
- Testing Infrastructure - Overview of all testing techniques
- Fuzzing Infrastructure - Automated bug discovery
- Property-Based Testing - Verify invariants with random inputs
- Formal Verification - BLVM Specification Lock verification
- Contributing - Testing requirements for contributions
Snapshot Testing
Overview
Bitcoin Commons uses snapshot testing to verify that complex data structures and outputs don’t change unexpectedly. Snapshot tests capture the output of functions and compare them against stored snapshots, making it easy to detect unintended changes.
Purpose
Snapshot testing serves to:
- Detect Regressions: Catch unexpected changes in output
- Verify Complex Outputs: Test complex data structures without writing detailed assertions
- Document Behavior: Snapshots serve as documentation of expected behavior
- Review Changes: Interactive review of snapshot changes
Code: mod.rs
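Conceptually, a snapshot assertion compares current output against a stored reference file and flags any difference for human review. A std-only sketch of the mechanism (this illustrates the idea, not insta's actual implementation):

```rust
use std::fs;
use std::path::Path;

/// Minimal illustration of snapshot testing: compare `current` against a
/// stored "<name>.snap" file. On mismatch, write a "<name>.snap.new" file
/// so a human can review and accept the change; on first run, record the
/// snapshot and ask for review.
fn check_snapshot(dir: &Path, name: &str, current: &str) -> Result<(), String> {
    let snap = dir.join(format!("{name}.snap"));
    match fs::read_to_string(&snap) {
        Ok(stored) if stored == current => Ok(()),
        Ok(_) => {
            let _ = fs::write(dir.join(format!("{name}.snap.new")), current);
            Err(format!("snapshot '{name}' changed; review the .snap.new file"))
        }
        Err(_) => {
            // First run: record the snapshot for review and commit.
            let _ = fs::write(&snap, current);
            Err(format!("snapshot '{name}' created; review and commit it"))
        }
    }
}
```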
Architecture
Snapshot Testing Library
Bitcoin Commons uses insta for snapshot testing:
- Snapshot Storage: Snapshots stored in `.snap` files
- Version Control: Snapshots committed to git
- Interactive Review: Review changes before accepting
- Format Support: Text, JSON, YAML, and custom formats
Code: TESTING_SETUP.md
Usage
Creating Snapshots
```rust
use insta::assert_snapshot;

#[test]
fn test_example() {
    let result = compute_something();
    assert_snapshot!("snapshot_name", result);
}
```
Code: TESTING_SETUP.md
Snapshot Examples
Content Hash Snapshot
```rust
#[test]
fn test_content_hash_snapshot() {
    let validator = ContentHashValidator::new();
    let content = b"test content for snapshot";
    let hash = validator.compute_file_hash(content);
    assert_snapshot!("content_hash", hash);
}
```
Code: validation_snapshot_tests.rs
Directory Hash Snapshot
```rust
#[test]
fn test_directory_hash_snapshot() {
    let validator = ContentHashValidator::new();
    let files = vec![
        ("file1.txt".to_string(), b"content1".to_vec()),
        ("file2.txt".to_string(), b"content2".to_vec()),
        ("file3.txt".to_string(), b"content3".to_vec()),
    ];
    let result = validator.compute_directory_hash(&files);
    assert_snapshot!("directory_hash", format!(
        "file_count: {}\ntotal_size: {}\nmerkle_root: {}",
        result.file_count,
        result.total_size,
        result.merkle_root
    ));
}
```
Code: validation_snapshot_tests.rs
Version Format Snapshot
```rust
#[test]
fn test_version_format_snapshot() {
    let validator = VersionPinningValidator::default();
    let format = validator.generate_reference_format(
        "v1.2.3",
        "abc123def456",
        "sha256:fedcba9876543210"
    );
    assert_snapshot!("version_format", format);
}
```
Code: validation_snapshot_tests.rs
Running Snapshot Tests
Run Tests
cargo test --test snapshot_tests
Or using Makefile:
make test-snapshot
Code: TESTING_SETUP.md
Updating Snapshots
Interactive Review
When snapshots change (expected changes):
cargo insta review
This opens an interactive review where you can:
- Accept changes
- Reject changes
- See diffs
Code: TESTING_SETUP.md
Update Command
make update-snapshots
Code: TESTING_SETUP.md
Snapshot Files
File Location
- Location: `tests/snapshots/`
- Format: `.snap` files
- Version Controlled: Yes
Code: TESTING_SETUP.md
File Structure
Snapshot files are organized by test module:
```text
tests/snapshots/
├── validation_snapshot_tests/
│   ├── content_hash.snap
│   ├── directory_hash.snap
│   └── version_format.snap
└── github_snapshot_tests/
    └── ...
```
Best Practices
1. Commit Snapshots
- Commit `.snap` files to version control
- Review snapshot changes in PRs
- Don’t ignore snapshot files
Code: TESTING_SETUP.md
2. Review Changes
- Always review snapshot changes before accepting
- Understand why snapshots changed
- Verify changes are expected
Code: TESTING_SETUP.md
3. Use Descriptive Names
- Use clear snapshot names
- Include context in snapshot names
- Group related snapshots
4. Test Complex Outputs
- Use snapshots for complex data structures
- Test formatted output
- Test serialized data
Troubleshooting
Snapshots Failing
If snapshots fail unexpectedly:
- Review changes: `cargo insta review`
- If changes are expected, accept them
- If changes are unexpected, investigate
Code: TESTING_SETUP.md
Snapshot Not Found
If snapshot file is missing:
- Run test to generate snapshot
- Review generated snapshot
- Accept if correct
CI Integration
GitHub Actions
Snapshot tests run in CI:
- On PRs: Run snapshot tests
- On Push: Run snapshot tests
- Fail on Mismatch: Tests fail if snapshots don’t match
Local CI Simulation
```shell
# Run snapshot tests (like CI)
make test-snapshot
```
Code: TESTING_SETUP.md
Configuration
Insta Configuration
Configuration file: .insta.yml
```yaml
# Insta configuration
snapshot_path: tests/snapshots
```
Code: .insta.yml
Test Suites
Validation Snapshots
Tests for validation functions:
- Content hash computation
- Directory hash computation
- Version format generation
- Version parsing
Code: validation_snapshot_tests.rs
GitHub Snapshots
Tests for GitHub integration:
- PR comment formatting
- Status check formatting
- Webhook processing
Code: github_snapshot_tests.rs
See Also
- Testing Infrastructure - Overview of all testing techniques
- Property-Based Testing - Verify invariants with random inputs
- Fuzzing Infrastructure - Automated bug discovery
- Contributing - Testing requirements for contributions
Benefits
- Easy Regression Detection: Catch unexpected changes easily
- Complex Output Testing: Test complex structures without detailed assertions
- Documentation: Snapshots document expected behavior
- Interactive Review: Review changes before accepting
- Version Control: Track changes over time
Components
The snapshot testing system includes:
- Insta snapshot testing library
- Snapshot test suites
- Snapshot file management
- Interactive review tools
- CI integration
Location: blvm-commons/tests/snapshot/, blvm-commons/docs/testing/TESTING_SETUP.md
Known Issues
This document tracks known technical issues in the codebase that require attention. These are validated issues confirmed through code inspection and static analysis.
Critical Issues
MutexGuard Held Across Await Points
Status: Known issue
Severity: Critical
Location: blvm-node/src/network/mod.rs and related files
Problem
Multiple instances where std::sync::Mutex guards are held across await points, causing deadlock risks. The async runtime may yield while holding a blocking mutex guard, and another task trying to acquire the same lock will block, potentially causing deadlock.
Code Pattern
```rust
// Problematic pattern
let mut peer_states = self.peer_states.lock().unwrap(); // std::sync::Mutex
// ... code that uses peer_states ...
if let Err(e) = self.send_to_peer(peer_addr, wire_msg).await { // AWAIT WITH LOCK HELD!
    // MutexGuard still held here - DEADLOCK RISK
}
```
Impact
- Deadlock risk: Holding a `std::sync::Mutex` guard across an `await` point can cause deadlocks
- The async runtime may yield, and another task trying to acquire the same lock will block
- If that task is on the same executor thread, deadlock occurs
Root Cause
- `peer_states` uses `Arc<Mutex<...>>` with `std::sync::Mutex` instead of `tokio::sync::Mutex`
- The guard is held while calling the async function `send_to_peer().await`
Recommended Fix
```rust
// Option 1: Drop guard before await
{
    let mut peer_states = self.peer_states.lock().unwrap();
    // ... use peer_states ...
} // Guard dropped here
if let Err(e) = self.send_to_peer(peer_addr, wire_msg).await {
    // ...
}

// Option 2: Use tokio::sync::Mutex (preferred for async code)
// Change field type to Arc<tokio::sync::Mutex<...>>
let mut peer_states = self.peer_states.lock().await;
```
Affected Locations
- `blvm-node/src/network/mod.rs`: Multiple locations
- `blvm-node/src/network/utxo_commitments_client.rs`: Lines 156, 165, 257, 349, 445
Mixed Mutex Types
Status: Known issue
Severity: Critical
Location: blvm-node/src/network/mod.rs
Problem
NetworkManager uses Arc<Mutex<...>> with std::sync::Mutex (blocking) in async contexts, causing deadlock risks. All Mutex fields in NetworkManager are std::sync::Mutex but used in async code.
Current State
```rust
pub struct NetworkManager {
    peer_manager: Arc<Mutex<PeerManager>>,  // std::sync::Mutex
    peer_states: Arc<Mutex<HashMap<...>>>,  // std::sync::Mutex
    // ... many more Mutex fields
}
```
Recommended Fix
- Audit all Mutex fields in NetworkManager
- Convert to tokio::sync::Mutex for async contexts
- Update all `.lock().unwrap()` calls to `.lock().await`
- Remove blocking locks from async functions
Affected Fields
- `peer_manager: Arc<Mutex<PeerManager>>`
- `peer_states: Arc<Mutex<HashMap<...>>>`
- `persistent_peers: Arc<Mutex<HashSet<...>>>`
- `ban_list: Arc<Mutex<HashMap<...>>>`
- `socket_to_transport: Arc<Mutex<HashMap<...>>>`
- `pending_requests: Arc<Mutex<HashMap<...>>>`
- `request_id_counter: Arc<Mutex<u32>>`
- `address_database: Arc<Mutex<...>>`
- And more…
Unwrap() on Mutex Locks
Status: Known issue
Severity: High
Location: Multiple files
Problem
Using .unwrap() on mutex locks can cause panics if the lock is poisoned (a thread panicked while holding the lock).
```rust
let mut db = self.address_database.lock().unwrap();    // Can panic!
let peer_states = network.peer_states.lock().unwrap(); // Can panic!
```
Impact
- If a thread panics while holding a Mutex, the lock becomes "poisoned"
- `.unwrap()` will panic, potentially crashing the entire node
- No graceful error handling
Recommended Fix
```rust
// Option 1: Handle poisoning gracefully
match self.address_database.lock() {
    Ok(guard) => { /* use guard */ }
    Err(poisoned) => {
        warn!("Mutex poisoned, recovering...");
        let guard = poisoned.into_inner();
        // use guard
    }
}

// Option 2: Use tokio::sync::Mutex (doesn't poison)
```
Affected Locations
- `blvm-node/src/network/mod.rs`: Multiple locations (19+ instances)
- `blvm-node/src/network/utxo_commitments_client.rs`: Lines 156, 165, 257, 349, 445
- `blvm-consensus/src/script.rs`: Multiple locations
Medium Priority Issues
Transport Abstraction Not Fully Integrated
Status: Known issue
Severity: Medium
Location: blvm-node/src/network/
Problem
Transport abstraction exists (Transport trait, TcpTransport, IrohTransport), but Peer struct still uses raw TcpStream directly in some places, not using the transport abstraction consistently.
Impact
- Code duplication
- Inconsistent error handling
- Harder to add new transports
Recommended Fix
- Audit all `Peer` creation sites
- Ensure all use `from_transport_connection`
- Remove direct `TcpStream` usage
Nested Locking Patterns
Status: Known issue
Severity: Medium
Location: blvm-node/src/network/utxo_commitments_client.rs
Problem
Nested locking where RwLock read guard is held while acquiring inner Mutex locks, which can cause deadlocks.
```rust
let network = network_manager.read().await;  // RwLock read guard held
// ...
network.socket_to_transport.lock().unwrap(); // Mutex lock acquired inside
```
Recommended Fix
- Review locking strategy
- Consider flattening the lock hierarchy
- Or ensure consistent lock ordering
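The lock-ordering fix can be illustrated with a toy example (hypothetical data, not the node's actual types): as long as every code path acquires lock `a` before lock `b`, two threads can never deadlock by taking the pair in opposite orders.

```rust
use std::sync::{Arc, Mutex};

// Consistent lock ordering: lock A first, then B, in every code path.
// (`unwrap()` is acceptable in this single-purpose sketch; production
// code should handle poisoning as discussed above.)
fn transfer(a: &Arc<Mutex<u64>>, b: &Arc<Mutex<u64>>, amount: u64) {
    let mut from = a.lock().unwrap(); // lock A first, everywhere
    let mut to = b.lock().unwrap();   // then lock B
    if *from >= amount {
        *from -= amount;
        *to += amount;
    }
}
```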
Testing Gaps
Missing Concurrency Tests
Status: Known gap
Severity: Low
Problem
- No tests for Mutex deadlock scenarios
- No tests for lock ordering
- No stress tests for concurrent access
Recommendation
- Add tests that spawn multiple tasks accessing shared Mutex
- Test lock ordering to prevent deadlocks
- Add timeout tests for lock acquisition
Priority Summary
Priority 1 (Critical - Fix Immediately)
- Fix MutexGuard held across await
- Convert all `std::sync::Mutex` to `tokio::sync::Mutex` in async contexts
- Replace `.unwrap()` on locks with proper error handling
Priority 2 (High - Fix Soon)
- Review and fix nested locking patterns
- Complete transport abstraction integration
Priority 3 (Medium - Fix When Possible)
- Add concurrency stress tests
Files Requiring Immediate Attention
- blvm-node/src/network/mod.rs - Multiple critical issues
- blvm-node/src/network/utxo_commitments_client.rs - MutexGuard across await
- blvm-consensus/src/script.rs - Unwrap() on locks
See Also
- Contributing Guide - How to contribute fixes
- Testing Guide - Testing practices
- PR Process - Pull request workflow
Security Controls System
Overview
Bitcoin Commons implements a security controls system that automatically classifies pull requests based on affected security controls and determines required governance tiers. This embeds security controls directly into the governance system, making it self-enforcing.
Architecture
Security Control Mapping
Security controls are defined in a YAML configuration file that maps file patterns to security controls:
- File Patterns: Glob patterns matching code files
- Control Definitions: Security control metadata
- Priority Levels: P0 (Critical), P1 (High), P2 (Medium), P3 (Low)
- Categories: Control categories (consensus_integrity, cryptographic, etc.)
Code: security_controls.rs
Security Control Structure
```yaml
security_controls:
  - id: "A-001"
    name: "Genesis Block Implementation"
    category: "consensus_integrity"
    priority: "P0"
    description: "Proper genesis blocks"
    files:
      - "blvm-protocol/**/*.rs"
    required_signatures: "7-of-7"
    review_period_days: 180
    requires_security_audit: true
    requires_formal_verification: true
    requires_cryptography_expert: false
```
Code: security_controls.rs
Priority Levels
P0 (Critical)
Highest priority security controls:
- Impact: Blocks production deployment and security audit
- Requirements: Security audit, formal verification, cryptographer approval
- Governance Tier: `security_critical`
- Examples: Genesis block implementation, cryptographic primitives
Code: security_controls.rs
P1 (High)
High priority security controls:
- Impact: Medium impact, may require cryptography expert
- Requirements: Security review, formal verification
- Governance Tier: `cryptographic` or `security_enhancement`
- Examples: Signature verification, key management
Code: security_controls.rs
P2 (Medium)
Medium priority security controls:
- Impact: Low impact
- Requirements: Security review by maintainer
- Governance Tier: `security_enhancement`
- Examples: Access control, rate limiting
Code: security_controls.rs
P3 (Low)
Low priority security controls:
- Impact: Minimal impact
- Requirements: Standard review
- Governance Tier: None (standard process)
- Examples: Logging, monitoring
Code: security_controls.rs
Control Categories
Consensus Integrity
Controls related to consensus-critical code:
- Max Priority: P0
- Examples: Block validation, transaction validation, UTXO management
- Requirements: Formal verification, security audit
Cryptographic
Controls related to cryptographic operations:
- Max Priority: P0
- Examples: Signature verification, key generation, hash functions
- Requirements: Cryptographer approval, side-channel analysis
Access Control
Controls related to authorization and access:
- Max Priority: P1
- Examples: Maintainer authorization, server authorization
- Requirements: Security review
Network Security
Controls related to network protocols:
- Max Priority: P1
- Examples: P2P message validation, relay security
- Requirements: Security review
Security Control Validator
Impact Analysis
The SecurityControlValidator analyzes security impact of changed files:
- File Matching: Matches changed files against control patterns
- Control Identification: Identifies affected security controls
- Priority Calculation: Determines highest priority affected
- Tier Determination: Determines required governance tier
- Requirement Collection: Collects additional requirements
Code: security_controls.rs
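The matching and priority steps above can be sketched as follows. This is a hypothetical illustration: the `matches` helper handles only simple `prefix/**/*.ext` patterns, whereas the real validator uses proper glob matching.

```rust
// Match changed files against control patterns and report the highest
// affected priority (P0 is most severe).
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Priority { P3, P2, P1, P0 } // declared in ascending severity

struct Control { id: &'static str, pattern: &'static str, priority: Priority }

// Toy pattern check: "prefix/**/*.ext" becomes a prefix + extension test.
fn matches(pattern: &str, file: &str) -> bool {
    match pattern.split_once("**") {
        Some((prefix, suffix)) => {
            let ext = suffix.trim_start_matches('/').trim_start_matches('*');
            file.starts_with(prefix) && file.ends_with(ext)
        }
        None => pattern == file,
    }
}

fn highest_priority(controls: &[Control], changed: &[&str]) -> Option<Priority> {
    changed
        .iter()
        .flat_map(|f| controls.iter().filter(|c| matches(c.pattern, f)))
        .map(|c| c.priority)
        .max()
}
```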
Impact Levels
```rust
pub enum ImpactLevel {
    None,     // No controls affected
    Low,      // P2 controls
    Medium,   // P1 controls
    High,     // P0 controls
    Critical, // Multiple P0 controls
}
```
Code: security_controls.rs
Governance Tier Mapping
Impact levels map to governance tiers:
- Critical/High: `security_critical` tier
- Medium (crypto): `cryptographic` tier
- Medium (other): `security_enhancement` tier
- Low: `security_enhancement` tier
- None: Standard tier
Code: security_controls.rs
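The mapping above can be written as a single match. Tier names come from this document; the function shape is an assumption, not the validator's actual API:

```rust
#[derive(Clone, Copy)]
enum Impact { None, Low, Medium, High, Critical }

// Map an impact level (plus whether the change touches cryptographic
// controls) to the required governance tier; None means standard process.
fn required_tier(impact: Impact, touches_crypto: bool) -> Option<&'static str> {
    match impact {
        Impact::Critical | Impact::High => Some("security_critical"),
        Impact::Medium if touches_crypto => Some("cryptographic"),
        Impact::Medium | Impact::Low => Some("security_enhancement"),
        Impact::None => None, // standard process
    }
}
```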
Placeholder Detection
Placeholder Patterns
The validator detects placeholder implementations in security-critical files:
- `PLACEHOLDER`
- `0x00 [PLACEHOLDER`, `0x02 [PLACEHOLDER`, `0x03 [PLACEHOLDER`, `0x04 [PLACEHOLDER`
- `return None as a placeholder`
- `return vec![] as a placeholder`
- `This is a placeholder`
See Threat Models for comprehensive security documentation.
Code: security_controls.rs
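A toy sketch of the scanning step: report (line number, line) pairs in changed text that contain a placeholder marker. The marker list here is a subset for illustration; the real validator scans diffs of files mapped to security controls.

```rust
// Markers that indicate a placeholder implementation (illustrative subset).
const PLACEHOLDER_MARKERS: &[&str] =
    &["PLACEHOLDER", "as a placeholder", "This is a placeholder"];

// Scan source text line by line and collect violations with 1-based
// line numbers for the report.
fn placeholder_violations(source: &str) -> Vec<(usize, String)> {
    source
        .lines()
        .enumerate()
        .filter(|(_, line)| PLACEHOLDER_MARKERS.iter().any(|m| line.contains(m)))
        .map(|(i, line)| (i + 1, line.to_string()))
        .collect()
}
```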
Placeholder Violations
Placeholder violations block PRs affecting P0 controls:
- Detection: Automatic scanning of changed files
- Blocking: Blocks production deployment
- Reporting: Detailed violation reports
Code: security_controls.rs
Security Gate CLI
Status Check
Check security control status:
```shell
security-gate status
security-gate status --detailed
```
Code: security-gate.rs
PR Impact Analysis
Analyze security impact of a PR:
```shell
security-gate check-pr 123
security-gate check-pr 123 --format json
```
Code: security-gate.rs
Placeholder Check
Check for placeholder implementations:
```shell
security-gate check-placeholders
security-gate check-placeholders --fail-on-placeholder
```
Code: security-gate.rs
Production Readiness
Verify production readiness:
```shell
security-gate verify-production-readiness
security-gate verify-production-readiness --format json
```
Code: security-gate.rs
Integration with Governance
Automatic Classification
Security controls automatically classify PRs:
- File Analysis: Analyzes changed files
- Control Matching: Matches files to controls
- Tier Assignment: Assigns governance tier
- Requirement Collection: Collects requirements
Code: security_controls.rs
PR Comments
The validator generates PR comments with security impact:
- Impact Level: Visual indicator of impact
- Affected Controls: List of affected controls
- Required Tier: Governance tier required
- Additional Requirements: List of requirements
- Blocking Status: Production/audit blocking status
Code: security_controls.rs
Control Requirements
Security Critical Tier
Requirements for security_critical tier:
- All affected P0 controls must be certified
- No placeholder implementations in diff
- Formal verification proofs passing
- Security audit report attached to PR
- Cryptographer approval required
Code: security_controls.rs
Cryptographic Tier
Requirements for cryptographic tier:
- Cryptographer approval required
- Test vectors from standard specifications
- Side-channel analysis performed
- Formal verification proofs passing
Code: security_controls.rs
Security Enhancement Tier
Requirements for security_enhancement tier:
- Security review by maintainer
- Comprehensive test coverage
- No placeholder implementations
Code: security_controls.rs
Production Blocking
P0 Control Blocking
P0 controls block production deployment:
- Blocks Production: Cannot deploy to production
- Blocks Audit: Cannot proceed with security audit
- Requires Certification: Must be certified before merge
Code: security_controls.rs
Components
The security controls system includes:
- Security control mapping (YAML configuration)
- Security control validator (impact analysis)
- Placeholder detection
- Security gate CLI tool
- Governance tier integration
- PR comment generation
Location: blvm-commons/src/validation/security_controls.rs, blvm-commons/src/bin/security-gate.rs
See Also
- Threat Models - Security threat analysis
- Developer Security Checklist - Security checklist for developers
- Security Architecture Review Template - Architecture review process
- Security Testing Template - Security testing guidelines
- Contributing - Development workflow
- PR Process - Security review in PR process
Threat Models
Overview
Bitcoin Commons implements security boundaries and threat models to protect against various attack vectors. The system uses defense-in-depth principles with multiple layers of security.
Security Boundaries
Node Security Boundaries
What blvm-node Handles:
- Consensus validation (delegated to blvm-consensus)
- Network protocol (P2P message parsing, peer management)
- Storage layer (block storage, UTXO set, chain state)
- RPC interface (JSON-RPC 2.0 API)
- Module orchestration (loading, IPC, lifecycle management)
- Mempool management
- Mining coordination
What blvm-node NEVER Handles:
- Consensus rule validation (delegated to blvm-consensus)
- Protocol variant selection (delegated to blvm-protocol)
- Private key management (no wallet functionality)
- Cryptographic key generation (delegated to blvm-sdk or modules)
- Governance enforcement (delegated to blvm-commons)
Code: SECURITY.md
Module System Security Boundaries
Process Isolation:
- Modules run in separate processes with isolated memory
- Node consensus state is protected and read-only to modules
- Module crashes are isolated and do not affect the base node
Code: MODULE_SYSTEM.md
What Modules Cannot Do:
- Modify consensus rules
- Modify UTXO set
- Access node private keys
- Bypass security boundaries
- Affect other modules
Code: MODULE_SYSTEM.md
Threat Model: Pre-Production Testing
Environment
- Network: Trusted network only
- Timeline: 6-12 months testing phase
- Threats: Limited to development and testing scenarios
Threats NOT Applicable (Trusted Network)
- Eclipse attacks
- Sybil attacks
- Network partitioning attacks
- Malicious peer injection
Code: SECURITY.md
Threats That Apply
- Code vulnerabilities in consensus validation
- Memory corruption in parsing
- Integer overflow in calculations
- Resource exhaustion (DoS)
- Supply chain attacks on dependencies
Code: SECURITY.md
Threat Model: Mainnet Deployment
Environment
- Network: Public Bitcoin network
- Timeline: After security audit and hardening
- Threats: Full Bitcoin network threat model
Additional Threats for Mainnet
- Eclipse attacks - malicious peers isolate node
- Sybil attacks - fake peer identities
- Network partitioning - routing attacks
- Resource exhaustion - memory/CPU DoS
- Protocol manipulation - malformed messages
Code: SECURITY.md
Attack Vectors
Eclipse Attacks
Threat: Malicious peers isolate node from honest network
Mitigations:
- IP diversity tracking
- Limits connections from same IP range
- LAN peering security: 25% LAN peer cap, 75% internet peer minimum, checkpoint validation
- Geographic diversity requirements
- ASN diversity tracking
Code: SECURITY.md
Sybil Attacks
Threat: Attacker creates many fake peer identities
Mitigations:
- Connection rate limiting
- Per-IP connection limits
- Peer reputation tracking
- Ban list sharing
Code: SECURITY.md
Resource Exhaustion (DoS)
Threat: Attacker exhausts node resources (memory, CPU, network)
Mitigations:
- Connection rate limiting (token bucket)
- Message queue limits
- Auto-ban for abusive peers
- Resource monitoring
- Per-user RPC rate limiting
Code: SECURITY.md
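The token-bucket rate limiting mentioned above can be sketched as follows; the capacity and refill rate are illustrative, not the node's real configuration:

```rust
use std::time::Instant;

// Minimal token bucket: a request consumes one token; tokens refill at a
// fixed rate up to the bucket's capacity.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }

    /// Returns true if a connection/request may proceed, consuming one token.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let refill = now.duration_since(self.last).as_secs_f64() * self.refill_per_sec;
        self.tokens = (self.tokens + refill).min(self.capacity);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```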
Protocol Manipulation
Threat: Attacker sends malformed messages to exploit parsing bugs
Mitigations:
- Input validation and sanitization
- Fuzzing (19 fuzz targets)
- Formal verification
- Property-based testing (141 property tests)
- Network protocol validation
Code: SECURITY.md
Memory Corruption
Threat: Buffer overflows, use-after-free, double-free
Mitigations:
- Rust memory safety
- MIRI integration (undefined behavior detection)
- Fuzzing with sanitizers (ASAN, UBSAN, MSAN)
- Runtime assertions
Code: CONSENSUS_COVERAGE_ASSESSMENT.md
Integer Overflow
Threat: Integer overflow in calculations causing consensus divergence
Mitigations:
- Checked arithmetic
- Formal verification (Z3 proofs via BLVM Specification Lock)
- Property-based testing
- Runtime assertions
Code: CONSENSUS_COVERAGE_ASSESSMENT.md
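A sketch of checked arithmetic applied to a consensus-style output sum: overflow returns an error instead of silently wrapping. `MAX_MONEY` reflects Bitcoin's 21,000,000 BTC supply cap in satoshis; the function itself is illustrative.

```rust
// Bitcoin's supply cap expressed in satoshis.
const MAX_MONEY: u64 = 21_000_000 * 100_000_000;

// Sum output values with checked addition and a range check, so neither
// integer overflow nor an out-of-range total can pass silently.
fn sum_outputs(values: &[u64]) -> Result<u64, &'static str> {
    let mut total: u64 = 0;
    for &v in values {
        total = total.checked_add(v).ok_or("value overflow")?;
        if total > MAX_MONEY {
            return Err("total exceeds MAX_MONEY");
        }
    }
    Ok(total)
}
```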
Supply Chain Attacks
Threat: Malicious dependencies compromise node
Mitigations:
- Dependency pinning (exact versions)
- Regular security audits (cargo audit)
- Minimal dependency set
- Trusted dependency sources
Code: SECURITY.md
Security Hardening
Phase 1: Pre-Production (Current)
- Fix signature verification with real transaction hashes
- Implement proper Bitcoin double SHA256 hashing
- Pin all dependencies to exact versions
- Add network protocol input validation
- Replace sled with redb (production-ready database)
- Add DoS protection mechanisms
- Add RPC authentication
- Implement rate limiting
- Add comprehensive fuzzing
- Add eclipse attack prevention
- Add storage bounds checking
Code: SECURITY.md
Phase 2: Production Readiness
- All Phase 1 items completed
- Professional security audit (external, requires security firm)
- Formal verification of critical paths
- Advanced peer management
Code: SECURITY.md
Module System Security
Process Isolation
Modules run in separate processes:
- Isolated Memory: Each module has separate memory space
- IPC Communication: Modules communicate only via IPC
- Crash Isolation: Module crashes don’t affect node
- Resource Limits: CPU, memory, and network limits enforced
Code: mod.rs
Sandboxing
Modules are sandboxed:
- File System: Restricted file system access
- Network: Network access controlled
- Process: Resource limits enforced
- Capabilities: Permission-based access control
Code: mod.rs
Permission System
Modules require explicit permissions:
- Capability Checks: Permission validator checks capabilities
- Tier Validation: Tier-based permission system
- Resource Limits: Enforced resource limits
- Request Validation: All requests validated
Code: MODULE_SYSTEM.md
RPC Security
Authentication
RPC authentication implemented:
- Token-Based: Token-based authentication
- Certificate-Based: Certificate-based authentication
- Configurable: Authentication method configurable
Code: SECURITY.md
Rate Limiting
RPC rate limiting implemented:
- Per-User: Per-user rate limiting
- Token Bucket: Token bucket algorithm
- Configurable: Rate limits configurable
Code: SECURITY.md
Input Validation
RPC input validation:
- Sanitization: Input sanitization
- Validation: Input validation
- Access Control: Access control via authentication
Code: SECURITY.md
Network Security
DoS Protection
DoS protection mechanisms:
- Connection Rate Limiting: Token bucket, per-IP connection limits
- Message Queue Limits: Limits on message queue size
- Auto-Ban: Automatic banning of abusive peers
- Resource Monitoring: Resource usage monitoring
Code: SECURITY.md
Eclipse Attack Prevention
Eclipse attack prevention:
- IP Diversity Tracking: Tracks IP diversity
- Subnet Limits: Limits connections from same IP range
- Geographic Diversity: Geographic diversity requirements
- ASN Diversity: ASN diversity tracking
Code: SECURITY.md
Storage Security
Database Security
Storage layer security:
- redb Default: Production-ready database (pure Rust, ACID)
- sled Fallback: Available as fallback (beta quality)
- Database Abstraction: Allows switching backends
- Storage Bounds: Storage bounds checking
Code: SECURITY.md
LAN Peering Security
The LAN peering system includes multiple security mechanisms to prevent eclipse attacks while allowing fast local network sync:
Security Limits
- 25% LAN Peer Cap: Maximum percentage of peers that can be LAN peers (hard limit)
- 75% Internet Peer Minimum: Minimum percentage of peers that must be internet peers
- Minimum 3 Internet Peers: Required for checkpoint validation consensus
- Maximum 1 Discovered LAN Peer: Limits automatically discovered peers (whitelisted are separate)
Code: lan_security.rs
Checkpoint Validation
Internet checkpoints are the primary security mechanism for LAN peering:
- Block Checkpoints: Every 1000 blocks, validate block hash against internet peers
- Header Checkpoints: Every 10000 blocks, validate header hash against internet peers
- Consensus Requirement: Requires agreement from at least 3 internet peers
- Failure Response: Checkpoint failure results in permanent ban (1 year duration)
Code: lan_security.rs
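The consensus requirement above reduces to a simple count: a LAN peer's hash at a checkpoint height must match the hash reported by at least 3 internet peers. A sketch (the function shape is an assumption):

```rust
// A checkpoint passes only if at least `min_agree` internet peers report
// the same hash the LAN peer gave us (3 in the documented configuration).
fn checkpoint_ok(lan_hash: &str, internet_hashes: &[&str], min_agree: usize) -> bool {
    internet_hashes.iter().filter(|h| **h == lan_hash).count() >= min_agree
}
```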
Progressive Trust System
LAN peers start with limited trust and earn higher priority over time:
- Initial Trust: 1.5x multiplier for newly discovered peers
- Level 2 Trust: 2.0x multiplier after 1000 valid blocks
- Maximum Trust: 3.0x multiplier after 10000 blocks AND 1 hour connection
- Demotion: After 3 failures, peer loses LAN status
- Banning: Checkpoint failure results in permanent ban
Code: lan_security.rs
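The trust schedule above can be sketched as a pure function. The block and time thresholds are taken from the text; the function shape and units are assumptions:

```rust
// Progressive trust: newly discovered LAN peers start at 1.5x, reach 2.0x
// after 1000 valid blocks, and 3.0x only after 10000 blocks AND one hour
// of connection time.
fn lan_trust_multiplier(valid_blocks: u64, connected_secs: u64) -> f64 {
    if valid_blocks >= 10_000 && connected_secs >= 3_600 {
        3.0 // maximum trust
    } else if valid_blocks >= 1_000 {
        2.0 // level 2 trust
    } else {
        1.5 // initial trust for newly discovered peers
    }
}
```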
Eclipse Attack Prevention
The security model ensures eclipse attack prevention:
- Internet Peer Majority: 75% minimum ensures connection to honest network
- Checkpoint Validation: Regular validation prevents chain divergence
- LAN Address Privacy: LAN addresses never advertised to external peers
- Failure Handling: Multiple failures result in demotion or ban
Code: lan_security.rs
For complete documentation, see LAN Peering System.
See Also
- LAN Peering System - Complete LAN peering documentation
- Security Controls - Security control implementation
- Developer Security Checklist - Security checklist for developers
- Security Architecture Review Template - Architecture review process
- Security Testing Template - Security testing guidelines
- Node Overview - Node security features
- Contributing - Security in development workflow
Components
The threat model and security boundaries include:
- Node security boundaries (what node handles vs. never handles)
- Module system security (process isolation, sandboxing)
- Threat models (pre-production, mainnet)
- Attack vectors and mitigations
- Security hardening roadmap
- RPC security (authentication, rate limiting)
- Network security (DoS protection, eclipse prevention)
- Storage security
Location: blvm-node/SECURITY.md, blvm-node/src/module/, blvm-node/docs/MODULE_SYSTEM.md
Developer Security Checklist
Use this checklist when writing new code or modifying existing code to ensure security best practices.
Before Writing Code
- Understand the security implications of your changes
- Identify affected security controls (check `governance/config/security-control-mapping.yml`)
- Review relevant security documentation
- Consider threat model for your changes
Input Validation
- Validate all user inputs at boundaries
- Sanitize inputs before processing
- Use type-safe APIs (Rust’s type system)
- Reject invalid inputs early
- Validate data from external sources (network, files, databases)
Examples:
```rust
// ✅ Good: Validate input
fn process_amount(amount: u64) -> Result<u64, Error> {
    if amount > MAX_AMOUNT {
        return Err(Error::AmountTooLarge);
    }
    Ok(amount)
}

// ❌ Bad: No validation
fn process_amount(amount: u64) -> u64 {
    amount // Could overflow downstream calculations
}
```
Authentication & Authorization
- Implement proper authentication (if applicable)
- Check authorization before sensitive operations
- Use principle of least privilege
- Verify permissions at every boundary
- Don’t trust client-side authorization checks
Examples:
```rust
// ✅ Good: Check authorization
fn transfer_funds(from: Account, to: Account, amount: u64) -> Result<(), Error> {
    if !from.has_permission(Permission::Transfer) {
        return Err(Error::Unauthorized);
    }
    // ... transfer logic
}

// ❌ Bad: No authorization check
fn transfer_funds(from: Account, to: Account, amount: u64) {
    // ... transfer logic without checking permissions
}
```
Cryptographic Operations
- Use well-tested cryptographic libraries (secp256k1, bitcoin_hashes)
- Never hardcode keys or secrets
- Use cryptographically secure random number generation
- Follow Bitcoin standards (BIP32, BIP39, BIP44)
- Verify signatures completely
- Use constant-time operations where needed (avoid timing attacks)
Examples:
```rust
// ✅ Good: Use secure random
use rand::rngs::OsRng;
let mut rng = OsRng;
let key = secp256k1::SecretKey::new(&mut rng);

// ❌ Bad: Insecure random
let key = secp256k1::SecretKey::from_slice(&[1, 2, 3, ...])?;
```
Consensus & Protocol
- Implement consensus rules exactly as specified
- Validate all protocol messages
- Handle network errors gracefully
- Prevent DoS attacks (rate limiting, resource limits)
- Don’t bypass consensus validation
Examples:
```rust
// ✅ Good: Validate consensus rules
fn validate_block(block: &Block) -> Result<(), ConsensusError> {
    if !block.verify_merkle_root() {
        return Err(ConsensusError::InvalidMerkleRoot);
    }
    // ... more validation
}

// ❌ Bad: Skip validation
fn validate_block(block: &Block) -> Result<(), ConsensusError> {
    Ok(()) // No validation!
}
```
Memory Safety
- Prefer safe Rust code
- Document and justify any `unsafe` code
- Ensure proper resource cleanup (Drop trait)
- Avoid memory leaks (use RAII patterns)
- Check bounds before array/vector access
Examples:
```rust
// ✅ Good: Safe Rust
let value = vec.get(index).ok_or(Error::OutOfBounds)?;

// ❌ Bad: Unchecked indexing
let value = vec[index]; // Could panic
```
Error Handling
- Don’t leak sensitive information in errors
- Use specific error types
- Handle all error cases
- Fail securely (default deny)
- Log errors appropriately (no sensitive data)
Examples:
```rust
// ✅ Good: Generic error message
return Err(Error::AuthenticationFailed); // Doesn't reveal why

// ❌ Bad: Leaks information
return Err(Error::InvalidPassword("user123")); // Reveals username
```
Dependencies
- Use minimal dependencies
- Keep dependencies up-to-date
- Pin consensus-critical dependencies to exact versions
- Check for known vulnerabilities (cargo audit)
- Review dependency licenses
Examples:
# ✅ Good: Pin critical dependencies
[dependencies]
secp256k1 = "=0.28.0" # Exact version for consensus-critical
# ❌ Bad: Allow version ranges for critical code
[dependencies]
secp256k1 = "^0.28" # Could break consensus
Testing
- Write security-focused tests
- Test edge cases and boundary conditions
- Test error handling paths
- Include fuzzing for consensus/protocol code
- Test with malicious inputs
- Achieve adequate test coverage
Examples:
#![allow(unused)]
fn main() {
#[test]
fn test_amount_overflow() {
assert!(process_amount(u64::MAX).is_err());
}
#[test]
fn test_invalid_signature() {
let invalid_sig = vec![0u8; 64];
assert!(verify_signature(&invalid_sig).is_err());
}
}
Documentation
- Document security assumptions
- Document threat model considerations
- Document security implications of design decisions
- Update security documentation if adding new controls
- Document configuration security requirements
Code Review
- Request security review for security-sensitive code
- Address security review feedback
- Update security control mapping if needed
- Ensure appropriate governance tier is selected
Post-Implementation
- Verify security tests pass
- Check for new security advisories
- Update threat model if needed
- Document any security trade-offs
Security Control Categories
Category A: Consensus Integrity
- Genesis block implementation
- SegWit witness verification
- Taproot support
- Script execution limits
- UTXO set validation
Category B: Cryptographic
- Maintainer key management
- Emergency signature verification
- Multisig threshold enforcement
- Key derivation and storage
Category C: Governance
- Tier classification logic
- Database query implementation
- Cross-layer file verification
Category D: Data Integrity
- Audit log hash chain
- OTS timestamping
- State synchronization
Category E: Input Validation
- GitHub webhook signature verification
- Input sanitization
- SQL injection prevention
- API rate limiting
Resources
- Security Controls System
- Threat Models
- Security Review Checklist (in main repo)
Security Architecture Review Template
Use this template when conducting security architecture reviews for new features, major changes, or system components.
Review Information
Component/Feature: [Name of component or feature]
Reviewer: [Name]
Date: [Date]
Review Type: [Initial / Follow-up / Final]
Affected Security Controls: [List control IDs, e.g., A-001, B-002]
Executive Summary
Brief Description: [One-paragraph summary of the component/feature and its security implications]
Security Risk Level:
- Low
- Medium
- High
- Critical
Recommendation:
- Approve
- Approve with conditions
- Request changes
- Reject
Architecture Overview
Component Description
[Detailed description of the component, its purpose, and how it fits into the system]
Data Flow
[Describe how data flows through the component, including inputs, outputs, and transformations]
Threat Model
[Identify potential threats, attackers, and attack vectors]
Security Analysis
Authentication & Authorization
Current Implementation: [Describe how authentication and authorization are handled]
Security Assessment:
- Authentication is properly implemented
- Authorization checks are present at all boundaries
- Principle of least privilege is followed
- No privilege escalation vulnerabilities
- Session management is secure (if applicable)
Issues Found: [List any authentication/authorization issues]
Recommendations: [List recommendations for improvement]
Cryptographic Operations
Current Implementation: [Describe cryptographic operations used]
Security Assessment:
- Cryptographic primitives are appropriate and well-tested
- Key management follows best practices
- No hardcoded keys or secrets
- Random number generation is secure
- Signature verification is complete
- Constant-time operations used where needed
Issues Found: [List any cryptographic issues]
Recommendations: [List recommendations for improvement]
Input Validation & Sanitization
Current Implementation: [Describe input validation approach]
Security Assessment:
- All inputs are validated at boundaries
- Input sanitization is appropriate
- No injection vulnerabilities (SQL, command, etc.)
- Path traversal is prevented
- Buffer overflows are prevented
- Integer overflow/underflow is handled
Issues Found: [List any input validation issues]
Recommendations: [List recommendations for improvement]
Data Protection
Current Implementation: [Describe how sensitive data is protected]
Security Assessment:
- Sensitive data is encrypted at rest (if applicable)
- Sensitive data is encrypted in transit
- No sensitive data in logs
- No sensitive data in error messages
- Proper data retention and deletion
Issues Found: [List any data protection issues]
Recommendations: [List recommendations for improvement]
Error Handling
Current Implementation: [Describe error handling approach]
Security Assessment:
- Errors don’t leak sensitive information
- Error handling is comprehensive
- Fail-secure defaults are used
- No information disclosure through errors
Issues Found: [List any error handling issues]
Recommendations: [List recommendations for improvement]
Network Security
Current Implementation: [Describe network security measures]
Security Assessment:
- Network communication is encrypted (TLS)
- DoS protection is implemented
- Rate limiting is appropriate
- Network message validation is complete
- Protocol security is maintained
Issues Found: [List any network security issues]
Recommendations: [List recommendations for improvement]
Consensus & Protocol Compliance
Current Implementation: [Describe consensus/protocol implementation]
Security Assessment:
- Consensus rules are correctly implemented
- No consensus bypass vulnerabilities
- Protocol compliance is maintained
- Network compatibility is preserved
Issues Found: [List any consensus/protocol issues]
Recommendations: [List recommendations for improvement]
Security Controls Mapping
Affected Controls: [List all security controls affected by this component]
| Control ID | Control Name | Priority | Status | Notes |
|---|---|---|---|---|
| A-001 | Genesis Block | P0 | ✅ Complete | - |
| B-002 | Emergency Signatures | P0 | ⚠️ Partial | Needs review |
Required Actions:
- Security audit required (P0 controls)
- Formal verification required (consensus-critical)
- Cryptography expert review required
Testing & Validation
Current Testing: [Describe existing tests]
Security Testing Assessment:
- Security tests are included
- Edge cases are tested
- Fuzzing is appropriate (if applicable)
- Integration tests cover security scenarios
- Test coverage is adequate
Recommendations: [List testing recommendations]
Dependencies
Dependencies: [List security-sensitive dependencies]
Security Assessment:
- Dependencies are up-to-date
- No known vulnerabilities
- Consensus-critical dependencies are pinned
- Licenses are compatible
Issues Found: [List dependency issues]
Compliance & Governance
Governance Tier: [Identify required governance tier]
Compliance:
- Appropriate governance tier is selected
- Required signatures are identified
- Review period is appropriate
Risk Assessment
Identified Risks
| Risk | Severity | Likelihood | Impact | Mitigation |
|---|---|---|---|---|
| Example risk | High | Medium | Critical | Mitigation strategy |
Risk Summary
[Overall risk assessment and summary]
Recommendations
Critical (Must Fix)
[List critical issues that must be fixed before approval]
High Priority
[List high-priority recommendations]
Medium Priority
[List medium-priority recommendations]
Low Priority
[List low-priority recommendations]
Approval
Reviewer Signature: [Name]
Date: [Date]
Status: [Approved / Conditionally Approved / Rejected]
Conditions (if applicable): [List any conditions for approval]
Follow-up
Required Actions: [List actions required before final approval]
Follow-up Review Date: [Date for follow-up review, if needed]
References
- Security Controls System
- Threat Models
- Developer Security Checklist
- Security Testing Template
- Security Review Checklist (in main repo)
Security Testing Template
Use this template to plan and document security testing for new features, components, or security-sensitive changes.
Test Information
Component/Feature: [Name of component or feature]
Tester: [Name]
Date: [Date]
Test Type: [Unit / Integration / Fuzzing / Penetration / Review]
Affected Security Controls: [List control IDs]
Test Objectives
Primary Objectives:
- Verify input validation
- Verify authentication/authorization
- Verify cryptographic operations
- Verify consensus compliance
- Verify error handling
- Verify data protection
- Verify DoS resistance
Secondary Objectives: [List any additional testing objectives]
Test Scope
In Scope: [List what is being tested]
Out of Scope: [List what is explicitly not being tested]
Assumptions: [List any assumptions made during testing]
Test Environment
Environment Details:
- OS: [Operating system]
- Rust Version: [Version]
- Dependencies: [Key dependencies and versions]
- Network: [Network configuration if applicable]
Test Data: [Describe test data used]
Test Cases
Input Validation Tests
Test Case 1: Valid Input
- Description: Test with valid inputs
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 2: Invalid Input - Boundary Values
- Description: Test with boundary values (min, max, zero)
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 3: Invalid Input - Type Mismatch
- Description: Test with wrong data types
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 4: Invalid Input - Injection Attempts
- Description: Test for SQL injection, command injection, etc.
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Authentication & Authorization Tests
Test Case 5: Valid Authentication
- Description: Test successful authentication
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 6: Invalid Authentication
- Description: Test with invalid credentials
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 7: Authorization Bypass
- Description: Test attempts to bypass authorization
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 8: Privilege Escalation
- Description: Test for privilege escalation vulnerabilities
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Cryptographic Tests
Test Case 9: Signature Verification
- Description: Test signature verification with valid signatures
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 10: Invalid Signature
- Description: Test signature verification with invalid signatures
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 11: Key Management
- Description: Test key generation, storage, and usage
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 12: Random Number Generation
- Description: Test cryptographic random number generation
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Consensus & Protocol Tests
Test Case 13: Consensus Rule Compliance
- Description: Test consensus rule implementation
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 14: Protocol Message Validation
- Description: Test protocol message validation
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 15: Consensus Bypass Attempts
- Description: Test attempts to bypass consensus rules
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Error Handling Tests
Test Case 16: Error Information Disclosure
- Description: Test that errors don’t leak sensitive information
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 17: Error Recovery
- Description: Test error recovery mechanisms
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
DoS Resistance Tests
Test Case 18: Resource Exhaustion
- Description: Test resistance to resource exhaustion attacks
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 19: Rate Limiting
- Description: Test rate limiting mechanisms
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Test Case 20: Memory Exhaustion
- Description: Test resistance to memory exhaustion
- Steps: [Test steps]
- Expected Result: [Expected behavior]
- Actual Result: [Actual behavior]
- Status: [Pass / Fail / Blocked]
Fuzzing Tests
Fuzzing Tool: [Tool used, e.g., cargo-fuzz, AFL]
Fuzzing Duration: [Duration]
Coverage: [Code coverage achieved]
Issues Found: [List issues found during fuzzing]
Fuzzing Results: [Summary of fuzzing results]
Penetration Tests
Penetration Test Scope: [Describe penetration testing scope]
Issues Found: [List issues found during penetration testing]
Penetration Test Results: [Summary of penetration test results]
Test Results Summary
Total Test Cases: [Number]
Passed: [Number]
Failed: [Number]
Blocked: [Number]
Critical Issues: [Number]
High Issues: [Number]
Medium Issues: [Number]
Low Issues: [Number]
Issues Found
Critical Issues
Issue 1: [Title]
- Description: [Description]
- Impact: [Impact]
- Steps to Reproduce: [Steps]
- Recommendation: [Recommendation]
- Status: [Open / Fixed / Deferred]
High Issues
[List high-priority issues]
Medium Issues
[List medium-priority issues]
Low Issues
[List low-priority issues]
Recommendations
Immediate Actions: [List immediate actions required]
Short-term Actions: [List short-term actions]
Long-term Actions: [List long-term actions]
Test Coverage
Code Coverage: [Percentage]
Security Control Coverage: [Percentage]
Coverage Gaps: [List areas with insufficient coverage]
Sign-off
Tester: [Name]
Date: [Date]
Status: [Pass / Fail / Conditional Pass]
Approval: [Approval from security team/maintainers]
References
- Security Controls System
- Threat Models
- Developer Security Checklist
- Security Architecture Review Template
- Security Review Checklist (in main repo)
Migration Guides
Migration guides for upgrading between BLVM versions are documented here.
Migration guides are provided as needed for version upgrades. When available, they cover:
- Configuration changes between versions
- Database schema migrations
- API changes and compatibility
- Breaking changes and upgrade procedures
Frequently Asked Questions
General Questions
What is Bitcoin Commons?
Bitcoin Commons is a project that solves Bitcoin’s governance asymmetry through two complementary innovations: BLVM (the technical stack providing mathematical rigor) and Bitcoin Commons (the governance framework providing coordination without civil war). Together, they enable safe alternative Bitcoin implementations with forkable governance. See Introduction and Governance Overview for details.
How is this different from Bitcoin Core?
Bitcoin Core is a single implementation with informal governance. Bitcoin Commons provides: (1) BLVM - Mathematical rigor enabling safe alternatives, (2) Commons - Forkable governance enabling coordination. Bitcoin Core has excellent consensus security; Bitcoin Commons adds governance security and implementation diversity.
Is this a fork of Bitcoin?
No. Neither BLVM nor Bitcoin Commons forks Bitcoin’s blockchain or consensus rules. BLVM provides mathematical specification enabling safe alternative implementations. Bitcoin Commons provides governance framework enabling coordination. Both maintain full Bitcoin consensus compatibility.
Is the system production ready?
BLVM provides a complete Bitcoin node implementation with all core components. The system includes formal verification, comprehensive testing, and production-ready features. For production deployment, ensure proper security hardening, RPC authentication, and monitoring are configured. See Node Configuration and Security for production considerations.
How do BLVM and Bitcoin Commons work together?
BLVM provides the mathematical foundation and compiler-like architecture (Orange Paper as IR, formal verification passes). Bitcoin Commons provides the governance framework (coordination without civil war). The modular architecture is where both meet: BLVM ensures correctness through architectural enforcement; Commons ensures coordination through governance rules. You can’t have safe alternative implementations without BLVM’s mathematical rigor, and you can’t have coordination without Commons’ governance framework.
What are the two innovations?
BLVM (Bitcoin Low-Level Virtual Machine): Technical innovation providing mathematical rigor through the Orange Paper (mathematical specification), formal verification (Z3 proofs via BLVM Specification Lock), proofs locked to code, and a compiler-like architecture. This ensures correctness. See Introduction and Consensus Overview for details.
Bitcoin Commons (Cryptographic Commons): Governance innovation providing forkable governance through Ostrom’s principles, cryptographic enforcement, 5-tier governance model, and transparent audit trails. This ensures coordination. See Governance Overview for details.
What’s the relationship between Bitcoin Commons and BTCDecoded?
Bitcoin Commons is the governance framework; BTCDecoded is the first complete implementation of both innovations (BLVM + Commons). Think of BLVM as the technical foundation, Bitcoin Commons as the governance constitution, and BTCDecoded as the first “government” built on both. Other implementations can adopt the same framework.
What is Bitcoin Commons (the governance framework)?
Bitcoin Commons is a forkable governance framework that applies Elinor Ostrom’s proven commons management principles through cryptographic enforcement. It solves Bitcoin’s governance asymmetry by making development governance as robust as technical consensus. It provides coordination without civil war through forkable rules, cryptographic signatures, and transparent audit trails.
How does Bitcoin Commons governance work?
Bitcoin Commons uses a 5-tier constitutional governance model with graduated signature thresholds (3-of-5 for routine maintenance, up to 6-of-7 for consensus changes) and review periods (7 days to 365 days). All governance actions are cryptographically signed and transparently auditable. Users can fork governance rules if they disagree, creating exit competition.
What makes Bitcoin Commons governance “6x harder to capture”?
Multiple mechanisms: (1) Forkable governance rules allow users to exit if governance is captured, (2) Multiple implementations compete, preventing monopoly, (3) Cryptographic enforcement makes power visible and accountable, (4) Economic alignment through merge mining, (5) Graduated thresholds prevent rapid changes, (6) Transparent audit trails.
How does forkable governance work?
Users can fork the governance rules (not just the code) if they disagree with decisions. This creates exit competition: if governance is captured, users can fork to a better governance model while maintaining Bitcoin consensus compatibility. The threat of forking prevents capture.
What are Ostrom’s principles?
Elinor Ostrom’s Nobel Prize-winning research identified 8 principles for managing commons successfully. Bitcoin Commons applies these through: clearly defined boundaries, proportional equivalence, collective choice, monitoring, graduated sanctions, conflict resolution, minimal recognition of rights, and nested enterprises.
Why do you need both BLVM and Bitcoin Commons?
BLVM solves the technical problem (mathematical rigor, safe alternative implementations). Bitcoin Commons solves the governance problem (coordination without civil war). You can’t have safe alternatives without BLVM’s mathematical foundation, and you can’t have coordination without Commons’ governance framework. They enable each other.
How does the modular architecture combine both innovations?
The modular architecture has three layers: (1) Mandatory Consensus (BLVM ensures correctness), (2) Optional Modules (Commons enables competition), (3) Economic Coordination (module marketplace funds infrastructure). BLVM ensures correctness through architectural enforcement; Commons ensures coordination through governance rules. The architecture is where both meet.
Can you use BLVM without Bitcoin Commons governance?
Technically yes: BLVM is a technical stack that can be used independently. However, without Bitcoin Commons governance, you’d still have the governance capture problem. The innovations are designed to work together: BLVM enables safe alternatives, Commons enables coordination between alternatives.
Can you use Bitcoin Commons governance without BLVM?
The governance framework could theoretically be applied to other implementations, but BLVM’s mathematical rigor (Orange Paper, formal verification) is what makes alternative implementations safe. Without BLVM, you’d have governance but still risk consensus bugs from informal implementations.
What happens if governance is captured?
Forkable governance means users can fork to a better governance model. This creates exit competition: captured governance loses users to better-governed implementations. The threat of forking prevents capture. Unlike Bitcoin Core, you can fork the governance rules, not just the code.
How does economic alignment work?
Through the module marketplace. Module authors receive 75% of sales, Commons receives 15% for infrastructure, and node operators receive 10%. This creates sustainable funding while incentivizing quality module development.
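The split above is plain integer arithmetic; a minimal sketch (the function name and smallest-unit representation are assumptions for illustration, not part of any BLVM API):

```rust
// Illustration only: split a sale amount (in the smallest currency
// unit) using the documented 75% / 15% / 10% distribution.
fn split_sale(amount: u64) -> (u64, u64, u64) {
    let author = amount * 75 / 100;  // module author: 75%
    let commons = amount * 15 / 100; // Commons infrastructure: 15%
    // Give the remainder to node operators so the three parts
    // always sum exactly to `amount` despite integer division.
    let operators = amount - author - commons;
    (author, commons, operators)
}

fn main() {
    let (author, commons, operators) = split_sale(10_000);
    println!("author={author} commons={commons} operators={operators}");
}
```

For a 10,000-unit sale this yields 7,500 / 1,500 / 1,000; routing the rounding remainder to the last recipient keeps the parts summing to the whole.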
What is merge mining?
Merge mining is available as a separate paid plugin module (blvm-merge-mining). It allows miners to mine multiple blockchains simultaneously using the same proof-of-work. However, merge mining is not a Commons funding model - revenue goes to the module developer, not to Commons infrastructure.
What features does BLVM provide?
Orange Paper complete, blvm-consensus with formal proofs, blvm-protocol, blvm-node, and blvm-sdk all implemented. All 6 tiers are functional and production-ready.
How is Bitcoin Commons governance implemented?
Bitcoin Commons governance uses a 5-tier constitutional model with cryptographic enforcement. Governance rules are defined, the governance-app is implemented, and cryptographic primitives are available. Governance activation requires a suitable cohort of keyholders to be onboarded. See Governance Overview for details.
How does governance activation work?
Governance activation requires a suitable cohort of keyholders to be onboarded. This involves security audits, keyholder onboarding, governance app deployment, and community testing. See Governance Overview and Keyholder Procedures for details.
How can I contribute?
Review BLVM code and formal proofs, review Bitcoin Commons governance rules, submit issues and pull requests, help with testing and security audits, build your own implementation using both innovations, or participate in governance discussions.
Can I build my own implementation?
Yes! You can use BLVM’s technical stack (Orange Paper, blvm-consensus) and adopt Bitcoin Commons governance framework. Fork the governance model, customize it for your organization, and build your own Bitcoin-compatible implementation. See the Implementations Registry.
Where is the code?
All code is open source on GitHub under the BTCDecoded organization. Key repositories: BLVM (blvm-spec/Orange Paper, blvm-consensus, blvm-protocol, blvm-node, blvm-sdk) and Commons (governance, governance-app).
What documentation should I read?
White Paper for complete technical and governance overview, Unified Documentation for technical documentation, and Governance Docs for governance rules and processes.
Why “commons”?
Bitcoin’s codebase is a commons: a shared resource that benefits everyone but no one owns. Traditional commons fail due to tragedy of the commons. Ostrom showed how to manage commons successfully. Bitcoin Commons applies these proven principles through cryptographic enforcement.
How does this relate to cypherpunk philosophy?
Cypherpunks focused on eliminating trusted third parties in transactions. Bitcoin Commons extends this to development: eliminate trusted parties in governance through cryptographic enforcement, transparency, and forkability. BLVM extends this to implementation: eliminate trusted implementations through mathematical proof.
Technical Questions
What is BLVM?
BLVM (Bitcoin Low-Level Virtual Machine) is a compiler-like infrastructure for Bitcoin implementations, similar to how LLVM provides compiler infrastructure for programming languages. It includes: (1) Orange Paper - complete mathematical specification serving as the IR (intermediate representation), (2) Optimization Passes - runtime optimization passes (constant folding, memory layout optimization, SIMD vectorization, bounds check optimization, dead code elimination), (3) blvm-consensus - optimized mathematical implementation with formal verification, (4) blvm-protocol - Bitcoin abstraction layer, (5) blvm-node - full node implementation, (6) blvm-sdk - developer toolkit. The optimization passes transform the Orange Paper specification into optimized, production-ready code.
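One of the passes named above, constant folding, can be shown in miniature. The toy `Expr` type below is purely illustrative and is not BLVM's internal representation:

```rust
// Toy expression tree standing in for an IR; not BLVM's actual types.
#[derive(Debug, PartialEq)]
enum Expr {
    Const(u64),
    Add(Box<Expr>, Box<Expr>),
}

/// Constant folding: replace `Add` nodes whose operands are both
/// known constants with a single precomputed `Const` node.
fn fold(e: Expr) -> Expr {
    match e {
        Expr::Add(a, b) => match (fold(*a), fold(*b)) {
            (Expr::Const(x), Expr::Const(y)) => Expr::Const(x + y),
            (a, b) => Expr::Add(Box::new(a), Box::new(b)),
        },
        other => other,
    }
}

fn main() {
    let e = Expr::Add(
        Box::new(Expr::Const(2)),
        Box::new(Expr::Add(Box::new(Expr::Const(3)), Box::new(Expr::Const(4)))),
    );
    println!("{:?}", fold(e)); // the whole tree collapses to Const(9)
}
```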
What is the Orange Paper?
The Orange Paper is a complete mathematical specification of Bitcoin’s consensus protocol, extracted from Bitcoin Core using AI-assisted analysis. It serves as the “intermediate representation” (IR) in BLVM’s compiler-like architecture. It enables safe alternative implementations by providing formal, verifiable consensus rules that can be mathematically proven correct.
How does formal verification work in BLVM?
BLVM uses BLVM Specification Lock (with Z3) to formally verify consensus-critical code. The Orange Paper provides the mathematical specification; blvm-consensus implements it with proofs locked to code. All consensus decisions flow through verified functions, and the dependency chain prevents bypassing verification. This provides mathematical proof of correctness, not just testing.
How is BLVM different from Bitcoin Core?
Bitcoin Core embeds consensus rules in 350,000+ lines of C++ with no mathematical specification. BLVM provides: (1) Mathematical specification (Orange Paper), (2) Formal verification (Z3 proofs via BLVM Specification Lock), (3) Proofs locked to code, (4) Compiler-like architecture enabling safe alternative implementations. BLVM doesn’t replace Bitcoin Core; it enables safe alternatives.
What does “compiler-like architecture” mean?
Like a compiler has source code → IR → optimization passes → machine code, BLVM has: Bitcoin Core code → Orange Paper (IR) → optimization passes → blvm-consensus → blvm-node. The Orange Paper serves as the intermediate representation that gets transformed through optimization passes (constant folding, memory layout optimization, SIMD vectorization, bounds check optimization, dead code elimination) into optimized code. Just like multiple compilers can target the same LLVM IR, multiple Bitcoin implementations can target the Orange Paper specification. This enables implementation diversity while maintaining consensus correctness through shared mathematical foundations.
What is formal verification in BLVM?
BLVM uses BLVM Specification Lock (with Z3) to mathematically prove code correctness. The Orange Paper provides the specification; blvm-consensus implements it with proofs. All consensus decisions flow through verified functions. This provides mathematical proof, not just testing.
How many formal proofs does BLVM have?
BLVM has comprehensive formal proofs in the source code, providing formal verification coverage of consensus-critical functions. The proofs are embedded directly in the codebase and verified continuously.
What does “proofs locked to code” mean?
Formal verification proofs are embedded in the code itself, not separate documentation. The proofs verify that the code matches the Orange Paper specification. If code changes, proofs must be updated, ensuring correctness is maintained.
How does BLVM prevent consensus bugs?
Through multiple layers: (1) Orange Paper provides mathematical specification, (2) Formal verification proves implementation matches spec, (3) Proofs locked to code prevent drift, (4) Dependency chain forces all consensus through verified functions, (5) Spec drift detection alerts if code diverges from spec.
How does cryptographic enforcement work?
All governance actions require cryptographic signatures from maintainers. The governance-app (GitHub App) verifies signatures, enforces thresholds (e.g., 6-of-7), and blocks merges until requirements are met. This makes power visible and accountable: you can see who signed what, when.
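The threshold check at the heart of this reduces to counting distinct, authorized, valid signatures. A minimal sketch (every type and name here is hypothetical, not the governance-app's actual API; real signature verification is reduced to a boolean flag):

```rust
use std::collections::HashSet;

// Hypothetical m-of-n gate. A "signature" is reduced to a signer id
// plus a validity flag standing in for real cryptographic checks.
struct Sig {
    signer: &'static str,
    valid: bool,
}

fn merge_allowed(sigs: &[Sig], maintainers: &HashSet<&str>, threshold: usize) -> bool {
    let signers: HashSet<&str> = sigs
        .iter()
        .filter(|s| s.valid && maintainers.contains(s.signer))
        .map(|s| s.signer)
        .collect(); // a set, so each maintainer counts at most once
    signers.len() >= threshold
}

fn main() {
    let maintainers: HashSet<&str> = HashSet::from(["a", "b", "c", "d", "e", "f", "g"]);
    let sigs = [
        Sig { signer: "a", valid: true },
        Sig { signer: "a", valid: true }, // duplicate: still counts once
        Sig { signer: "b", valid: true },
        Sig { signer: "z", valid: true }, // not an authorized maintainer
    ];
    // 6-of-7 required, but only 2 distinct valid maintainer signatures
    println!("merge allowed: {}", merge_allowed(&sigs, &maintainers, 6));
}
```

Deduplicating by signer and filtering against the maintainer set is what prevents one keyholder from padding the count.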
What BIPs are implemented?
BLVM implements numerous Bitcoin Improvement Proposals. See Protocol Specifications for a complete list, including consensus-critical BIPs (BIP65, BIP112, BIP68, BIP113, BIP125, BIP141/143, BIP340/341/342), network protocol BIPs (BIP152, BIP157/158, BIP331), and application-level BIPs (BIP21, BIP32/39/44, BIP174, BIP350/351).
What storage backends are supported?
The node supports multiple storage backends with automatic fallback: redb (default, recommended), sled (beta, fallback option), and rocksdb (optional, Bitcoin Core compatible - can read Bitcoin Core’s LevelDB databases). The system automatically selects the best available backend. See Storage Backends for complete details.
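The fallback behavior amounts to trying each backend in preference order and taking the first that works. A sketch (the enum and simulated availability are illustrative; the real node probes each backend by actually opening the database):

```rust
// Illustration of preference-ordered backend fallback.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Backend {
    Redb,
    Sled,
    RocksDb,
}

fn select_backend(available: &[Backend]) -> Option<Backend> {
    // Documented preference order: redb first, then sled, then rocksdb.
    [Backend::Redb, Backend::Sled, Backend::RocksDb]
        .into_iter()
        .find(|b| available.contains(b))
}

fn main() {
    // redb unavailable in this simulation, so selection falls back to sled
    println!("{:?}", select_backend(&[Backend::Sled, Backend::RocksDb]));
}
```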
What transport protocols are supported?
The network layer supports multiple transport protocols: TCP (default, Bitcoin P2P compatible) and Iroh/QUIC (experimental). See Network Protocol for details.
How do I install BLVM?
Pre-built binaries are available from GitHub Releases. See Installation for platform-specific instructions.
What experimental features are available?
The experimental build variant includes: UTXO commitments, BIP119 CTV (CheckTemplateVerify), Dandelion++ privacy relay, BIP158, Stratum V2 mining protocol, and enhanced signature operations counting. See Installation for details.
How do I configure the node?
Configuration can be done via config file (blvm.toml), environment variables, or command-line options. See Node Configuration for complete configuration options.
What RPC methods are available?
The node implements numerous Bitcoin Core-compatible JSON-RPC methods across blockchain, raw transaction, mempool, network, mining, control, address, transaction, and payment categories. See RPC API Reference for the complete list of all available methods.
How does the module system work?
The node includes a process-isolated module system that enables optional features (Lightning, merge mining, privacy enhancements) without affecting consensus or base node stability. Modules run in separate processes with IPC communication. See Module Development for details.
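A module manifest might look roughly like the fragment below. The field names are hypothetical, shown only to convey the shape of a manifest (identity, binary location, declared permissions); see Module Development for the actual schema.

```toml
# Hypothetical module.toml manifest; field names are illustrative.
[module]
name = "lightning"
binary = "modules/lightning/bin/run"

[permissions]
ipc = true
network = true
```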
How do I troubleshoot issues?
See Troubleshooting for common issues and solutions, including node startup problems, storage issues, network connectivity, RPC configuration, module system issues, and performance optimization.
Troubleshooting
Common issues and solutions when running BLVM nodes. See Node Operations for operational details.
Node Won’t Start
Port Already in Use
Error: Address already in use or Port 8332 already in use
Solution:
# Use a different port
blvm --rpc-port 8333
# Or find and stop the process using the port
lsof -i :8332
kill <PID>
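If lsof is unavailable, a quick portable check can be scripted. The helper below is a generic sketch (not a BLVM tool): it simply attempts a TCP connection to see whether anything is already listening on the port.

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0
```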
Permission Denied
Error: Permission denied when accessing data directory
Solution:
# Fix directory permissions
sudo chown -R $USER:$USER /var/lib/blvm
# Or use a user-writable directory
blvm --data-dir ~/.blvm
Storage Issues
Database Backend Fails
Error: Failed to initialize database backend
Solution:
- The system automatically falls back to alternative backends
- Check data directory permissions
- Ensure sufficient disk space
- Try specifying a backend explicitly: --storage-backend sled
Corrupted Database
Error: Database corruption or inconsistent state
Solution:
# Stop the node
# Remove corrupted database files (backup first!)
rm -rf data/blocks data/chainstate
# Restart and resync
blvm
Network Issues
No Peer Connections
Symptoms: Node starts but shows 0 connections
Solutions:
- Check firewall settings (port 8333 for mainnet, 18333 for testnet)
- Verify network connectivity
- Try adding manual peers: blvm --addnode <peer-ip>:8333
- Check DNS seed resolution
Connection Drops
Symptoms: Connections established but immediately drop
Solutions:
- Check network stability
- Verify protocol version compatibility
- Review node logs for specific error messages
- Try a different transport: --transport tcp_only
RPC Issues
RPC Connection Refused
Error: Connection refused when calling RPC
Solutions:
- Verify RPC is enabled: --rpc-enabled true
- Check the RPC port: --rpc-port 8332
- Verify the bind address: --rpc-host 127.0.0.1 (default)
- Check the firewall for the RPC port
RPC Authentication Errors
Error: Unauthorized or authentication failures
Solutions:
- Configure RPC credentials in config file
- Use correct username/password in requests
- For development, RPC can run without auth (not recommended for production)
Module System Issues
Module Not Loading
Error: Module fails to load or start
Solutions:
- Verify module.toml exists and is valid
- Check that the module binary exists at the expected path
- Review module logs in data/modules/logs/
- Verify the module has the required permissions in its manifest
- Check IPC socket directory permissions
IPC Connection Failures
Error: Module cannot connect to node IPC
Solutions:
- Ensure the socket directory exists: data/modules/sockets/
- Check file permissions on the socket directory
- Verify module process has access to socket
- Restart node to recreate sockets
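To verify socket access from outside the node, a small generic probe can help. This is an illustrative diagnostic sketch, not a BLVM utility; the socket path shown in its docstring is an assumption based on the directory layout above.

```python
import os
import socket

def can_connect(sock_path):
    """Return True if a Unix domain socket (e.g. under data/modules/sockets/)
    exists and accepts connections."""
    if not os.path.exists(sock_path):
        return False
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        try:
            s.connect(sock_path)
            return True
        except OSError:
            return False
```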
Performance Issues
Slow Initial Sync
Symptoms: Node takes very long to sync
Solutions:
- Use pruning: --pruning enabled --pruning-keep-blocks 288
- Increase cache sizes in config
- Use faster storage backend (redb recommended)
- Check network bandwidth and latency
High Memory Usage
Symptoms: Node uses excessive memory
Solutions:
- Reduce cache sizes in config
- Enable pruning to reduce data size
- Check for memory leaks in logs
- Consider using lighter storage backend
Getting Help
- Check node logs: data/logs/ or console output
- Review Configuration for options
- See RPC API for available methods
- Check GitHub issues for known problems
See Also
- Node Operations - Node management and operations
- Node Configuration - Configuration options
- Getting Started - First node setup
- FAQ - Frequently asked questions
- Migration Guides - Migration from other implementations
Contributing to BLVM Documentation
Thank you for your interest in improving BLVM documentation!
Documentation Philosophy
Documentation is maintained in source repositories alongside code. This repository (blvm-docs) aggregates that documentation into a unified site.
Where to contribute:
- Component-specific documentation → Edit in the source repository (e.g., blvm-consensus/docs/)
- Cross-cutting documentation → Edit in this repository (e.g., blvm-docs/src/architecture/)
- Navigation structure → Edit SUMMARY.md in this repository
Documentation Standards
Markdown Format
- Use standard Markdown (no mdBook-specific syntax in source repos)
- Follow consistent heading hierarchy
- Use relative links for internal documentation
- Include code examples where helpful
Style Guidelines
- Clarity: Write clearly and concisely
- Completeness: Cover all important aspects
- Examples: Include practical examples
- Links: Link to related documentation
- Code: Include testable code examples where possible
File Organization
Each source repository should maintain documentation in:
repository-root/
├── README.md # High-level overview
├── docs/
│ ├── README.md # Documentation index
│ ├── architecture.md # Component architecture
│ ├── guides/ # How-to guides
│ ├── reference/ # Reference documentation
│ └── examples/ # Code examples
Contribution Workflow
For Source Repository Documentation
- Fork the source repository (e.g., blvm-consensus)
- Make documentation improvements
- Submit a pull request to the source repository
- After merge, changes will appear in the unified documentation site (via {{#include}} directives)
For Cross-Cutting Documentation
- Fork this repository (blvm-docs)
- Edit files in the src/ directory (not in submodules)
- Submit a pull request
- After merge, GitHub Actions will automatically rebuild and deploy
For Navigation Changes
- Edit src/SUMMARY.md to add/remove/modify navigation
- Create corresponding content files if needed
- Submit a pull request
Local Testing
Before submitting changes:
- Clone the repository: git clone https://github.com/BTCDecoded/blvm-docs.git
- Serve locally: mdbook serve
- Review changes at http://localhost:3000
- Check for broken links: mdbook test
Review Process
- All documentation changes require review
- Maintainers will review for clarity, completeness, and accuracy
- Technical accuracy is especially important for consensus and protocol documentation
Questions?
- Open an issue for questions about documentation structure
- Ask in GitHub Discussions for general questions
- Contact maintainers for repository-specific questions
Thank you for helping improve BLVM documentation!