L2 Block Format Specification
Standardized block format for L2 rollups with compression, batch aggregation, and AI metadata
Abstract
This LP defines a standardized block format specification for L2 rollups on Lux Network, optimizing for compression, efficient state transitions, and specialized metadata for AI computations. The format supports multiple transaction types including standard transfers, smart contract calls, AI inference requests, and distributed training updates. It implements advanced compression techniques achieving 10-20x reduction in data size, batch aggregation for efficient L1 settlement, and cryptographic commitments for state integrity. The specification includes extensions for cross-rollup communication and quantum-resistant signatures.
Activation
| Parameter | Value |
|---|---|
| Flag string | lp505-l2-block-format |
| Default in code | false until block X |
| Deployment branch | v0.0.0-lp505 |
| Roll‑out criteria | 1M blocks produced |
| Back‑off plan | Legacy format support |
Motivation
Current L2 block formats lack:
- Compression Optimization: Inefficient data encoding
- AI Metadata: No support for compute workload tracking
- Batch Efficiency: Poor aggregation for L1 submission
- Cross-rollup Standards: Incompatible formats between L2s
- Future-proofing: No quantum-resistant signature support
This specification provides:
- 90% Size Reduction: Advanced compression techniques
- AI-Native Support: Built-in compute metadata fields
- Efficient Aggregation: Optimized for batch submission
- Interoperability: Standard format across rollups
- Quantum Resistance: Support for post-quantum signatures
Specification
Core Block Structure
interface IL2BlockFormat {
struct L2Block {
BlockHeader header;
TransactionData[] transactions;
StateTransition stateTransition;
ComputeMetadata computeMetadata;
CrossRollupMessages[] messages;
bytes witness; // Merkle proofs for light clients
}
struct BlockHeader {
uint256 blockNumber;
bytes32 parentHash;
bytes32 stateRoot;
bytes32 transactionsRoot;
bytes32 receiptsRoot;
address sequencer;
uint256 timestamp;
uint256 l1BlockNumber;
uint256 gasUsed;
uint256 gasLimit;
bytes32 extraData;
bytes signature; // Sequencer signature
}
struct StateTransition {
bytes32 previousStateRoot;
bytes32 newStateRoot;
AccountDelta[] accountDeltas;
StorageDelta[] storageDeltas;
bytes32 globalStateHash;
}
struct AccountDelta {
address account;
uint256 balanceDelta; // Signed integer encoded
uint256 nonceDelta;
bytes32 codeHash; // Only if contract deployed
}
struct StorageDelta {
address contract;
bytes32[] keys;
bytes32[] values;
bytes proof; // Merkle proof of changes
}
}
Transaction Format
interface ITransactionFormat {
enum TransactionType {
Legacy,
EIP2930, // Access list
EIP1559, // Dynamic fees
AIInference, // AI compute request
Training, // Distributed training
CrossRollup, // Cross-L2 message
Compressed, // Batch compressed
Quantum // Post-quantum signed
}
struct TransactionData {
TransactionType txType;
bytes payload; // Type-specific encoding
bytes signature;
bytes metadata;
}
struct StandardTransaction {
uint256 nonce;
uint256 gasPrice;
uint256 gasLimit;
address to;
uint256 value;
bytes data;
uint256 chainId;
}
struct AIInferenceTransaction {
address model;
bytes input;
uint256 maxGas;
uint256 maxPrice;
bytes32 expectedOutputHash; // For verification
ComputeRequirements requirements;
}
struct TrainingTransaction {
bytes32 jobId;
uint256 epoch;
bytes gradients; // Compressed gradients
bytes32 datasetHash;
uint256 samplesProcessed;
bytes proof; // Proof of computation
}
struct ComputeRequirements {
uint256 minFlops;
uint256 minMemory;
bool requiresTEE;
bool requiresGPU;
string acceleratorType;
}
}
Compression Schemes
interface ICompressionSchemes {
enum CompressionType {
None,
Gzip,
Snappy,
Zstandard,
LZ4,
Brotli,
Custom
}
struct CompressionConfig {
CompressionType algorithm;
uint256 compressionLevel; // 1-9 for most algorithms
uint256 dictionarySize; // For dictionary-based compression
bytes dictionary; // Shared dictionary
}
struct CompressedBlock {
bytes32 uncompressedHash;
uint256 uncompressedSize;
CompressionType compression;
bytes compressedData;
}
struct BatchCompression {
TransactionData[] transactions;
bytes commonData; // Extracted common fields
bytes[] deltas; // Only differences
CompressionConfig config;
}
// Compression operations
function compressBlock(
L2Block calldata block,
CompressionConfig calldata config
) external pure returns (CompressedBlock memory);
function decompressBlock(
CompressedBlock calldata compressed
) external pure returns (L2Block memory);
// Transaction compression
function batchCompressTransactions(
TransactionData[] calldata txs
) external pure returns (BatchCompression memory);
function extractCommonFields(
TransactionData[] calldata txs
) external pure returns (bytes memory common, bytes[] memory deltas);
// State compression
function compressStateDeltas(
AccountDelta[] calldata accounts,
StorageDelta[] calldata storage
) external pure returns (bytes memory);
function rleEncode(bytes calldata data) external pure returns (bytes memory);
function huffmanEncode(bytes calldata data) external pure returns (bytes memory);
}
AI Compute Metadata
interface IAIComputeMetadata {
struct ComputeMetadata {
uint256 totalFlops; // Total FLOPs in block
uint256 totalInferences; // Number of AI inferences
uint256 totalTrainingSteps; // Training steps completed
ModelExecution[] executions;
ResourceUsage resources;
bytes performanceMetrics;
}
struct ModelExecution {
bytes32 modelHash;
uint256 inferenceCount;
uint256 totalLatency; // Cumulative latency
uint256 avgTokensPerSecond; // For LLMs
bytes32 checkpointHash; // For training
}
struct ResourceUsage {
uint256 cpuCycles;
uint256 gpuUtilization; // Percentage (basis points)
uint256 memoryPeak; // Peak memory usage
uint256 networkBandwidth; // Bytes transferred
uint256 storageIO; // Read/write operations
}
struct ModelMetrics {
bytes32 modelId;
uint256 accuracy; // Basis points
uint256 loss; // Fixed point decimal
uint256[] layerLatencies; // Per-layer timing
bytes activationStats; // Statistical summary
}
// Metadata operations
function aggregateComputeMetrics(
ComputeMetadata[] calldata metrics
) external pure returns (ComputeMetadata memory);
function validateComputeProof(
ModelExecution calldata execution,
bytes calldata proof
) external view returns (bool);
function benchmarkModel(
bytes32 modelHash,
bytes calldata testInput
) external returns (ModelMetrics memory);
}
Batch Aggregation
interface IBatchAggregation {
struct BatchedBlocks {
uint256 startBlock;
uint256 endBlock;
bytes32 batchRoot; // Merkle root of blocks
L2Block[] blocks;
bytes aggregatedProof; // Single proof for batch
bytes32 l1SubmissionHash;
}
struct AggregationStrategy {
uint256 maxBlocks; // Max blocks per batch
uint256 maxSize; // Max batch size in bytes
uint256 maxDelay; // Max time before submission
bool compressFirst;
bool aggregateProofs;
}
struct MerkleMultiProof {
bytes32 root;
bytes32[] leaves;
bytes32[][] proofs; // Proof for each leaf
uint256[] indices; // Leaf positions
}
// Batch operations
function createBatch(
L2Block[] calldata blocks,
AggregationStrategy calldata strategy
) external returns (BatchedBlocks memory);
function verifyBatch(
BatchedBlocks calldata batch
) external view returns (bool);
// Merkle operations
function computeBatchRoot(
bytes32[] calldata blockHashes
) external pure returns (bytes32);
function generateMultiProof(
bytes32[] calldata leaves,
uint256[] calldata indices
) external pure returns (MerkleMultiProof memory);
function verifyMultiProof(
MerkleMultiProof calldata proof
) external pure returns (bool);
// Aggregated signatures
function aggregateSignatures(
bytes[] calldata signatures
) external pure returns (bytes memory);
function verifyAggregatedSignature(
bytes calldata aggregated,
bytes32[] calldata messages,
address[] calldata signers
) external view returns (bool);
}
Cross-Rollup Messaging
interface ICrossRollupMessaging {
struct CrossRollupMessage {
uint256 sourceChainId;
uint256 targetChainId;
address sender;
address target;
uint256 nonce;
bytes payload;
bytes32 messageHash;
bytes proof; // Inclusion proof
}
struct MessageBatch {
CrossRollupMessage[] messages;
bytes32 batchHash;
uint256 blockHeight;
bytes aggregatedProof;
}
struct RoutingInfo {
uint256[] path; // Chain IDs for routing
uint256[] fees; // Fee per hop
uint256 deadline;
bytes metadata;
}
// Message handling
function packMessage(
CrossRollupMessage calldata message
) external pure returns (bytes memory);
function unpackMessage(
bytes calldata packed
) external pure returns (CrossRollupMessage memory);
function routeMessage(
CrossRollupMessage calldata message,
RoutingInfo calldata routing
) external returns (bytes32);
// Batch processing
function batchMessages(
CrossRollupMessage[] calldata messages
) external pure returns (MessageBatch memory);
function verifyMessageInclusion(
CrossRollupMessage calldata message,
bytes32 blockRoot,
bytes calldata proof
) external view returns (bool);
}
Quantum-Resistant Extensions
interface IQuantumResistant {
enum PostQuantumAlgorithm {
SPHINCS_SHAKE256, // Hash-based signatures (NIST SLH-DSA)
DILITHIUM3, // Lattice-based (NIST ML-DSA-65)
FALCON512, // Lattice-based (NIST alternate)
// Rainbow REMOVED - cryptanalytically broken March 2022 (Ward Beullens attack)
McEliece // Code-based
}
struct QuantumSignature {
PostQuantumAlgorithm algorithm;
bytes publicKey;
bytes signature;
uint256 securityLevel; // Bits of security
}
struct HybridSignature {
bytes classicalSig; // ECDSA/EdDSA
QuantumSignature quantumSig;
bool requireBoth; // Both must verify
}
struct QuantumProof {
bytes32 statement;
bytes witness;
PostQuantumAlgorithm zkAlgorithm;
bytes proof;
}
// Signature operations
function signQuantumResistant(
bytes32 message,
PostQuantumAlgorithm algorithm
) external returns (QuantumSignature memory);
function verifyQuantumSignature(
bytes32 message,
QuantumSignature calldata sig
) external view returns (bool);
function createHybridSignature(
bytes32 message
) external returns (HybridSignature memory);
// Migration support
function upgradeToQuantum(
address account,
QuantumSignature calldata newAuth
) external;
function isQuantumReady(
address account
) external view returns (bool);
}
Encoding Specifications
interface IEncodingSpecs {
// RLP encoding for Ethereum compatibility
function rlpEncodeBlock(L2Block calldata block) external pure returns (bytes memory);
function rlpDecodeBlock(bytes calldata encoded) external pure returns (L2Block memory);
// SSZ encoding for efficiency
function sszEncodeBlock(L2Block calldata block) external pure returns (bytes memory);
function sszDecodeBlock(bytes calldata encoded) external pure returns (L2Block memory);
// Protobuf for cross-platform compatibility
function protobufEncodeBlock(L2Block calldata block) external pure returns (bytes memory);
function protobufDecodeBlock(bytes calldata encoded) external pure returns (L2Block memory);
// Custom binary format optimized for size
function binaryEncodeBlock(L2Block calldata block) external pure returns (bytes memory);
function binaryDecodeBlock(bytes calldata encoded) external pure returns (L2Block memory);
// Canonical encoding for hashing
function canonicalEncode(L2Block calldata block) external pure returns (bytes memory);
function computeBlockHash(L2Block calldata block) external pure returns (bytes32);
}
Rationale
Block Structure Design
The nested structure provides:
- Modularity: Separate concerns (header, transactions, state)
- Efficiency: Optimal field ordering for compression
- Extensibility: Metadata fields for future additions
- Compatibility: Support for existing Ethereum transactions
Compression Strategy
Multi-level compression approach:
- Field Extraction: Common fields stored once
- Delta Encoding: Only store differences
- Dictionary Compression: Shared dictionaries for common patterns
- Algorithm Selection: Choose optimal per data type
Expected compression ratios:
- Standard transfers: 15-20x
- Smart contract calls: 8-12x
- AI metadata: 5-8x
- State deltas: 10-15x
AI Metadata Inclusion
Following emerging standards for AI workload tracking:
- Resource Accounting: Precise compute usage tracking
- Performance Metrics: Latency and throughput monitoring
- Model Versioning: Checkpoint and version tracking
- Verification: Proofs of correct computation
Quantum Resistance
Preparing for post-quantum era:
- Hybrid Approach: Classical + quantum signatures
- Algorithm Agility: Support multiple PQ algorithms
- Graceful Migration: Upgrade path for existing accounts
- NIST Standards: Following standardized algorithms
Backwards Compatibility
The format maintains compatibility through:
- Version Field: Indicates format version
- Legacy Support: Can encode as standard Ethereum blocks
- Progressive Enhancement: New fields are optional
- Migration Tools: Converters between formats
Test Cases
Block Encoding Tests
describe("Block Format", () => {
it("should encode and decode block correctly", async () => {
const block = {
header: {
blockNumber: 1000,
parentHash: keccak256("parent"),
stateRoot: keccak256("state"),
timestamp: Date.now(),
gasUsed: 10000000,
gasLimit: 30000000
},
transactions: generateTransactions(100),
stateTransition: generateStateTransition(),
computeMetadata: generateComputeMetadata()
};
const encoded = await encoder.binaryEncodeBlock(block);
const decoded = await encoder.binaryDecodeBlock(encoded);
expect(decoded.header.blockNumber).to.equal(block.header.blockNumber);
expect(decoded.transactions.length).to.equal(100);
});
it("should achieve target compression ratio", async () => {
const block = generateLargeBlock(1000); // 1000 transactions
const original = await encoder.canonicalEncode(block);
const compressed = await compression.compressBlock(block, {
algorithm: CompressionType.Zstandard,
compressionLevel: 9,
dictionarySize: 100000
});
const ratio = original.length / compressed.compressedData.length;
expect(ratio).to.be.gt(10); // At least 10x compression
});
});
Transaction Compression Tests
describe("Transaction Compression", () => {
it("should batch compress similar transactions", async () => {
// Generate similar transactions (same to address)
const txs = [];
for(let i = 0; i < 100; i++) {
txs.push({
txType: TransactionType.EIP1559,
payload: encodeTransaction({
to: commonAddress,
value: ethers.parseEther("1"),
gasLimit: 21000,
maxFeePerGas: ethers.parseUnits("100", "gwei"),
nonce: i
})
});
}
const batch = await compression.batchCompressTransactions(txs);
const originalSize = txs.reduce((sum, tx) => sum + tx.payload.length, 0);
const compressedSize = batch.commonData.length +
batch.deltas.reduce((sum, d) => sum + d.length, 0);
expect(compressedSize).to.be.lt(originalSize / 10);
});
it("should extract common fields", async () => {
const txs = generateSimilarTransactions(50);
const [common, deltas] = await compression.extractCommonFields(txs);
// Common fields should include repeated values
expect(common).to.include(commonAddress);
expect(common).to.include(ethers.toBeHex(21000)); // Common gas limit
// Deltas should be small
for(let delta of deltas) {
expect(delta.length).to.be.lt(100);
}
});
});
AI Metadata Tests
describe("AI Compute Metadata", () => {
it("should track model execution metrics", async () => {
const executions = [
{
modelHash: keccak256("gpt-4"),
inferenceCount: 100,
totalLatency: 50000, // 50 seconds total
avgTokensPerSecond: 150
},
{
modelHash: keccak256("stable-diffusion"),
inferenceCount: 20,
totalLatency: 120000, // 2 minutes
avgTokensPerSecond: 0
}
];
const metadata = {
totalFlops: ethers.parseUnits("500", 15), // 500 PFLOPS
totalInferences: 120,
executions,
resources: {
gpuUtilization: 8500, // 85%
memoryPeak: 32 * 1024 * 1024 * 1024 // 32 GB
}
};
const aggregated = await aiMetadata.aggregateComputeMetrics([metadata, metadata]);
expect(aggregated.totalInferences).to.equal(240);
expect(aggregated.totalFlops).to.equal(ethers.parseUnits("1", 18)); // 1 EFLOPS
});
it("should validate compute proof", async () => {
const execution = {
modelHash: keccak256("bert"),
inferenceCount: 10,
totalLatency: 5000
};
const proof = generateComputeProof(execution);
expect(await aiMetadata.validateComputeProof(execution, proof)).to.be.true;
});
});
Cross-Rollup Tests
describe("Cross-Rollup Messaging", () => {
it("should pack and unpack messages", async () => {
const message = {
sourceChainId: 1,
targetChainId: 2,
sender: alice,
target: bob,
nonce: 1,
payload: "0x1234",
messageHash: keccak256("message")
};
const packed = await crossRollup.packMessage(message);
expect(packed.length).to.be.lt(200); // Efficient packing
const unpacked = await crossRollup.unpackMessage(packed);
expect(unpacked.sender).to.equal(alice);
expect(unpacked.payload).to.equal("0x1234");
});
it("should batch messages efficiently", async () => {
const messages = [];
for(let i = 0; i < 50; i++) {
messages.push(generateCrossRollupMessage(i));
}
const batch = await crossRollup.batchMessages(messages);
expect(batch.messages.length).to.equal(50);
expect(batch.aggregatedProof).to.not.be.null;
});
});
Quantum Signature Tests
describe("Quantum Resistant Signatures", () => {
it("should create and verify quantum signature", async () => {
const message = keccak256("quantum-safe-message");
const sig = await quantum.signQuantumResistant(
message,
PostQuantumAlgorithm.DILITHIUM3
);
expect(sig.signature.length).to.be.gt(2000); // Large PQ signatures
expect(await quantum.verifyQuantumSignature(message, sig)).to.be.true;
});
it("should create hybrid signature", async () => {
const message = keccak256("hybrid-message");
const hybrid = await quantum.createHybridSignature(message);
expect(hybrid.classicalSig).to.not.be.null;
expect(hybrid.quantumSig).to.not.be.null;
// Both signatures must verify
const classicalValid = await verifyECDSA(message, hybrid.classicalSig);
const quantumValid = await quantum.verifyQuantumSignature(
message,
hybrid.quantumSig
);
expect(classicalValid && quantumValid).to.be.true;
});
});
Reference Implementation
Available at:
- https://github.com/luxfi/l2-block-format
- https://github.com/luxfi/block-compression
- https://github.com/luxfi/quantum-signatures
Key components:
src/encoding/: Various encoding implementationssrc/compression/: Compression algorithmssrc/crypto/quantum/: Post-quantum cryptographytools/: Block explorer and debugging toolsbenchmarks/: Performance benchmarks
Security Considerations
Data Integrity
- Hash Verification: Every component has merkle proofs
- Signature Validation: Multiple signature schemes supported
- Compression Safety: Decompression bombs prevented
- Canonical Encoding: Prevents malleability attacks
Compression Security
- Size Limits: Maximum decompressed size enforced
- Dictionary Validation: Shared dictionaries verified
- Algorithm Safety: Only approved algorithms
- DoS Prevention: Complexity limits on decompression
Quantum Security
- Algorithm Selection: NIST-approved algorithms only
- Key Sizes: Minimum 256-bit security level
- Hybrid Mode: Classical + quantum for transition
- Side Channels: Constant-time implementations
Cross-Rollup Security
- Message Authentication: Signatures required
- Replay Protection: Nonces and chain IDs
- Inclusion Proofs: Merkle proofs of message
- Timeout Handling: Messages expire after deadline
Economic Impact
Storage Cost Reduction
| Data Type | Original Size | Compressed | Savings |
|---|---|---|---|
| Transfers | 200 bytes | 10 bytes | 95% |
| Contract Calls | 500 bytes | 50 bytes | 90% |
| State Updates | 1 KB | 100 bytes | 90% |
| AI Metadata | 2 KB | 400 bytes | 80% |
L1 Settlement Costs
- Before: $10 per 100 transactions
- After: $0.50 per 100 transactions
- Annual Savings: $10M+ at 100K tx/day
Performance Improvements
- Block Production: 10x faster
- State Sync: 5x faster
- Light Client Verification: 20x faster
Open Questions
- Optimal Compression Dictionary: Size vs efficiency tradeoff
- Quantum Migration Timeline: When to enforce PQ signatures
- Cross-Rollup Standards: Industry-wide standardization
- Compression Algorithm Updates: Upgrade mechanism
References
- Buterin, V. (2022). "The Different Types of ZK-EVMs"
- StarkWare (2021). "Volition: Hybrid Data Availability"
- Google (2020). "Snappy Compression Algorithm"
- Facebook (2021). "Zstandard Compression"
- NIST (2022). "Post-Quantum Cryptography Standards"
- Bernstein, D.J., et al. (2019). "SPHINCS+: Practical stateless hash-based signatures"
- Polygon (2023). "Polygon zkEVM Block Format"
- Optimism (2023). "Bedrock Block Format Specification"
- Arbitrum (2023). "Nitro Block Structure"
- Ethereum Foundation (2023). "SSZ Specification"
Implementation
Status: Specification stage - implementation planned for future release
Planned Locations:
- Block format core:
~/work/lux/l2-block-format/(to be created) - Compression library:
~/work/lux/l2-block-format/compression/ - Encoding utilities:
~/work/lux/l2-block-format/encoding/ - Quantum extensions:
~/work/lux/l2-block-format/quantum/ - Contracts:
~/work/lux/standard/src/l2-block-format/
Build on Existing Infrastructure:
- Merkle tree utilities from
~/work/lux/database/merkle/ - Cryptographic primitives from
~/work/lux/crypto/ - Post-quantum signatures from Q-Chain
- Existing L1 block structures as reference
Core Implementation Modules:
-
Block Encoder/Decoder (~800 LOC)
- RLP encoding (Ethereum compatibility)
- SSZ encoding (efficiency)
- Protobuf encoding (cross-platform)
- Custom binary format
-
Compression Engine (~1500 LOC)
- Multi-algorithm support (gzip, Zstandard, Snappy, LZ4, Brotli)
- Dictionary-based compression
- Delta encoding for transactions
- Field extraction and common data pooling
-
State Delta Compression (~1000 LOC)
- Account balance/nonce compression
- Storage key-value compression
- Merkle proof optimization
- RLE and Huffman encoding
-
AI Compute Metadata (~900 LOC)
- Model execution tracking
- Resource usage aggregation
- Performance metrics collection
- Checkpoint and version management
-
Batch Aggregation (~1200 LOC)
- Multi-proof aggregation (BLS)
- Merkle multi-proofs
- Batch size optimization
- L1 submission packing
-
Cross-Rollup Messaging (~800 LOC)
- Message packing/unpacking
- Routing information handling
- Batch message processing
- Inclusion proof verification
-
Quantum-Resistant Extensions (~600 LOC)
- Post-quantum signature support (ML-DSA, SPHINCS, etc.)
- Hybrid classical + quantum signing
- Algorithm agility and migration
Compression Performance Targets:
- Standard transfers: 15-20x compression
- Smart contract calls: 8-12x compression
- AI metadata: 5-8x compression
- State deltas: 10-15x compression
- Overall L2 block: 10-20x reduction
Encoding Compatibility:
- Full RLP compatibility with Ethereum transactions
- SSZ efficient serialization (Beacon Chain compatible)
- Protobuf for language-agnostic APIs
- Custom format optimized for Lux
Testing Strategy:
- Fuzz testing for encoding/decoding
- Compression ratio benchmarking
- Cross-encoding round-trip testing
- AI metadata aggregation verification
- Quantum signature validation
- Cross-rollup message delivery simulation
Deployment Strategy:
- Phase 1: Core block format + RLP/SSZ encoding
- Phase 2: Compression algorithms + field optimization
- Phase 3: State delta compression
- Phase 4: AI compute metadata collection
- Phase 5: Batch aggregation optimization
- Phase 6: Quantum-resistant extensions
- Phase 7: Cross-rollup messaging integration
Integration with L2 Rollups:
- Works with LP-500 L2 Rollup Framework
- Compatible with LP-501 Data Availability Layer
- Uses fraud proofs from LP-502
- Interacts with sequencers from LP-504
Performance Metrics:
- Encoding time: < 10ms per block
- Decoding time: < 10ms per block
- Compression/decompression: < 50ms per block
- Merkle proof generation: < 20ms per 100 state deltas
Copyright
Copyright and related rights waived via CC0.