Polynomial PoRs for Subspace v2

The Hitchhiker’s Guide to Subspace — Episode I

Jeremiah Wagstaff
Subspace Network

--

Special thanks to our Research Engineer, Dariia Porechna, for proofreading and adding examples to this article, and our Research Partner, Dr. Chen Feng, for proofreading.

This is an introductory guide for those seeking to understand the technical details of our v2 consensus, which constructs proofs-of-replication (PoRs) and proofs-of-archival-storage (PoAS), among several other exciting things, from polynomial schemes such as Reed-Solomon erasure codes and Kate-Zaverucha-Goldberg (KZG) commitments. This work is also a starting point for our version of The Hitchhiker’s Guide to Ethereum.

Introduction

As part of our transition to v2 consensus (now called Dilithium, stay tuned!), we took a hard look at polynomial schemes and their applications and concluded (as have many others in the industry) that these primitives will be as central to the next decade of blockchain design as hash functions, Merkle trees, and ECC signatures have been to the last decade. Moreover, I firmly believe that Subspace is uniquely positioned to employ these primitives more naturally and effectively than any other project.

This is an accidental yet fortuitous result of our proof-of-archival-storage (PoAS) consensus and its novel mechanism for dynamically adjusting the cost of blockspace. The permissionless nature of farming and the dynamic cost of storage on our chain create a positive feedback loop, or flywheel, which allows the Subspace Network to scale in step with demand, such that the cost of blockspace remains constant. Polynomial schemes will allow us to scale blockspace linearly with the number of farmers on our network far more easily than we had ever imagined.

We employ two specific polynomial-based schemes in our v2 consensus, Reed-Solomon (RS) erasure coding and Kate-Zaverucha-Goldberg (KZG) commitments. RS codes treat some useful data as a polynomial, allowing us to “extend” the data such that we can efficiently recover the data in the presence of either errors (communication) or erasures (storage). KZG commitments also treat the data as a polynomial, allowing us to create a tiny fingerprint of some larger data set (or file) and efficiently prove that any subset of the data matches the fingerprint (sometimes referred to as a Kate proof or witness).

The key takeaway is that both RS and KZG treat the underlying data as a polynomial, albeit for different reasons, but with exciting and often complementary side effects that we may compose into more extensive protocols.

What can you do with Polynomials?

Below is a taste of the exciting things now possible with polynomial schemes. These tricks are currently being explored for ETH upgrades, Polygon Avail, and nearly all Zero-Knowledge (ZK) protocols. Additionally, it is important (and exciting) to note that new schemes and applications are still being discovered, and there is still plenty of room for protocol designers to imagine new ones.

  • Employing KZG as a vector commitment may replace Merkle proofs in many schemes, allowing a reduction of the proof size from logarithmic to constant, typically 48 bytes.
  • As noted in a recent paper, KZG-based polynomial commitment schemes (PCS) have become an essential ingredient for zk-SNARKs such as SONIC, Marlin, DARK, and PLONK. As a result, there are already several efficient implementations of the underlying KZG primitives and various reusable aggregation (batch proofs) techniques.
  • If we use RS to erasure code some data, say a block, and commit to the block data using KZG, we can succinctly prove that an erasure-coded block fragment belongs to the original commitment. This could be a Zero-Knowledge (ZK) proof of correct erasure coding.
  • We can leverage the information above to obviate the need for fraud proofs in 2D-RS schemes, such as those used by Celestia. This is primarily the direction ETH upgrades and Polygon Avail have gone for light client Data Availability Sampling (DAS) schemes.
  • Various 2D-RS-KZG constructions can remove the scalability bottleneck of restricting the block size to the expected network bandwidth of the single block producer, as proposed (and later revised) for ETH data sharding upgrades.
  • KZG commitments may be used to manage not only the history of a blockchain (as described above) but also the state database of user and contract accounts through a “stateless” database, significantly reducing the storage overhead of full nodes. Ethereum plans to employ Verkle trees (KZG-VC-based Merkle trees) and is also exploring the use of aggregatable subvector commitments built from KZG.

What Subspace unlocks with Polynomial Schemes

The following is an elaboration of our specific plans (roughly in priority/roadmap order) to integrate polynomial schemes more fundamentally within the Subspace tech stack. Our long-term goal is to connect the dots and deduplicate the workflow between archiving, plotting, proofs-of-archival-storage, data availability sampling, and data sharding within a unified EC-KZG scheme.

  • When archiving the history of Subspace, we replace Merkle roots with KZG commitments. Farmers can then provide constant-sized Kate proofs to clients of the Distributed Storage Network (DSN) as the witness for their pledged archival storage space.
  • We construct generic proofs-of-replication (PoR) from RS-KZG schemes and extend these into an extremely simple and efficient proof-of-archival-storage (PoAS).

Our immediate priority is to implement and deploy the two points mentioned above for testing during Gemini III.

  • We will eventually upgrade to a distributed archiving scheme, where a single large block is erasure coded and committed to under KZG in sections across farmer subnets (data shards). This approach will allow us to scale from ~ 2.5 MiB/sec of total storage bandwidth to > 1 GiB/sec without compromising security or decentralization. This architecture will be specified before the mainnet beta launch and implemented before the transition to mainnet omega.
  • We will likely eventually (and incrementally) transition to a KZG-based stateless version of Subspace to fully and finally resolve the Farmer’s Dilemma. Existing schemes can already handle simple numeric accounting, constituting most of the native logic on our primary and secondary chains and across many of our core domains. These upgrades may be explored post-mainnet omega in roughly that order. Stateless open domains could be explored in parallel by the community at their own pace, similar to the roll-up landscape in Ethereum.

Background Materials for Self-Study

While I will do my best to describe our new primitives at the “black-box” level, the following resources will be helpful for those seeking an inside look. If you’re fine with high-level abstractions and are not willing to cram several semesters of graduate-level math and crypto into a weekend, feel free to skip this section : )

For those brave souls who wish to continue, the guide below attempts to systemize the required background knowledge based on lessons learned from my self-study over the last few years. Much of this is a curated catalog of writing and talks from a handful of intellectual heavyweights in the field. Given that I have no formal background in these subjects, there are likely errors and misrepresentations, which I am happy to correct and refine.

Primer on Abstract Algebra (AA)

To understand how the internals of polynomial schemes work, you need to be familiar with finite (prime) fields, matrix multiplication, basic group theory, and bilinear maps. This was the steepest part of the learning curve for me, and worth spending the time to master if you would like to understand much of modern cryptography and information theory intuitively.

Primer on Polynomials and other Hard Math

We all studied polynomials in high school algebra. If you can recall the basics, you can follow along with most of the information below. For me, however, the hardest part was switching my mental model from standard arithmetic to finite field arithmetic.

  • Intro to KZG by Subspace Labs’ very own Özgün Özerk provides a refresher on notation and helps demystify the basic terminology.
  • Overview of polynomials for cryptography by Alin Tomescu is terse and comprehensive.
  • Moving back to AA, Mary Wootters’ lecture on Polynomials over Finite Fields helps us understand how we can represent polynomials within finite fields to work with real data.
  • Alin provides a good overview of Lagrange Interpolation, a technique to derive a polynomial from some abstract data.
  • This video by vcubingx provides an example of Lagrange interpolation for RS codes.
  • Vitalik’s post on Fast Fourier Transforms (FFTs) explains the theory and practice behind the famous technique for interpolating and evaluating polynomials over finite fields.

Reed-Solomon Erasure Codes

Once you understand polynomials and finite fields, RS codes are far easier to grok than KZG. Note that both “Erasure Codes” and “Reed-Solomon” refer to families of algorithms rather than a single universal technique.

🤯 Different papers, articles, and presentations often use slightly different notation and terminology, which can be confusing when comparing schemes. For example, the original message M is sometimes called a block, consisting of information, data, or source symbols. Symbols themselves are variously referred to as shards, fragments, or pieces. Finally, the symbols of the coded message, or code word, are typically called coded or parity symbols.

  • This short video from Backblaze provides a great intro, which is elaborated on in this blog post. Our existing implementation of RS in Rust is based on a port of Backblaze’s Java code.
  • Vishesh’s article on Erasure Coding for the Masses provides an excellent step-by-step walkthrough of a systematic RS code based on how polynomial interpolation works.
  • RS, however, is one of the few cases where I do not recommend reading the original paper.
  • The course mentioned above on Algebraic Coding Theory (ACT) provides an excellent overview of the broader topic of coding theory, which builds up to RS by lecture six.

Polynomial Commitment Schemes and KZG

This part was genuinely hard for me to grasp end-to-end. Only after reading several different explanations that approached the construction from different angles did it finally make sense to me.

  • Dan Boneh’s lecture from the DeFi MOOC provides a sound basis for understanding commitment schemes.
  • Dankrad Feist’s Intro to KZG is the most widely read, but it can be hard to unpack for those lacking a Ph.D. — luckily, Özgün was nice enough to break it down for the rest of us in his KZG explainer.
  • Nothing beats reading the original KZG paper, though I didn’t fully comprehend it for some time.
  • Alin provides another terse but comprehensive overview of KZG schemes in his blog.
  • Tom’s Primer on Kate Commitments is also helpful, though it approaches the topic more from the zk-SNARK setting.
  • Alin’s overview of how to build Bilinear Accumulators from first principles helped me understand how you can do far more with KZG than what is described in the original paper.

Mathematical Preliminaries

The critical mathematical concept is that any vector of data (i.e., a list of integer values) can be represented as a polynomial, and the polynomial may take multiple forms or views.

The most straightforward approach is known as the coefficient form of the polynomial, where we treat the data values as the coefficients of the polynomial. In a trivial example, each data point could be a single byte (a u8 in Rust); we can extend this idea by employing much larger numbers to reduce the degree of our polynomial while increasing the amount of data we can represent as a single polynomial.
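As a minimal sketch (not the Subspace implementation), the coefficient form can be as simple as the following Rust snippet, which treats a handful of bytes as coefficients over a toy prime field and evaluates the resulting polynomial with Horner’s rule; the modulus 65537 is chosen purely for readability.

```rust
/// Toy prime modulus, chosen for readability; real schemes use much larger fields.
const P: u64 = 65537;

/// Evaluate p(x) = c0 + c1*x + ... over GF(P) using Horner's rule.
fn eval_poly(coeffs: &[u64], x: u64) -> u64 {
    coeffs.iter().rev().fold(0, |acc, &c| (acc * x + c) % P)
}

fn main() {
    // Eight data points (one byte each) interpreted as coefficients c0..c7.
    let data: [u64; 8] = [17, 42, 7, 0, 255, 3, 99, 128];
    for x in 0..4 {
        println!("p({}) = {}", x, eval_poly(&data, x));
    }
}
```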

Why is this useful?

As a quick aside, why would we want to treat data as polynomials in the first place? First, instead of using the abstract term data, let’s imagine we have some specific data, which we often call a file. Then we represent the file as a polynomial.

In one key application, we may evaluate a polynomial at a set of points, say {0, 1, 2, …, n}, to generate a replica of the file, which we can later use to reconstruct the file using some other math tricks. The replica can be larger than the original file to handle the event where some part of the file is erased, hence the name erasure coding.

In another key application, we can commit to the polynomial by evaluating it at one specific point and later provide a proof or witness with a mathematical relationship to the commitment. This algorithm is known as a polynomial commitment scheme (PCS). We can then extend the PCS into a vector commitment and obtain something akin to a Merkle Hash Tree but with other interesting properties.

From coefficient form to evaluation form

Representing the data as coefficients of a polynomial is the most straightforward way to think about things. In practice, however, it is often more efficient and useful to work with the evaluation form of the polynomial. In this form, we treat the data as evaluations of some unknown polynomial and then interpolate (solve for) the polynomial using one of several possible mathematical techniques.

These techniques revolve around the problem of solving systems of linear equations to extract the coefficients of the interpolated polynomial of degree n. The most straightforward way to understand this is to use Lagrange Interpolation, which has O(n²) complexity. Many implementations employ Gaussian elimination with O(n³) complexity. When the order (degree) of the polynomial is large, Fast Fourier Transforms are often preferred, as they have O(n log n) complexity.

In practice, these techniques are typically used to create a generator matrix, which is then used to evaluate or interpolate the polynomial, and they lead to efficient implementations for Galois fields GF(2ⁿ), whose elements are n-bit strings. It is important to note that the generator matrix depends solely on the polynomial degree and is independent of the actual data. However, since most implementations are meant to be general-purpose, they will often construct these matrices on the fly. Commonly used matrices include the Vandermonde matrix (and its inverse), the Cauchy matrix, and the Toeplitz matrix.
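To make the O(n²) Lagrange approach concrete, here is a self-contained toy sketch in Rust that recovers the coefficient form from a set of evaluations; the modulus 65537 and the four sample points are illustrative choices, not anything a production implementation would use.

```rust
/// Toy prime modulus, chosen for readability only.
const P: u64 = 65537;

fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1u64;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 { acc = acc * base % P; }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}
// Modular inverse via Fermat's little theorem (P is prime).
fn inv_mod(a: u64) -> u64 { pow_mod(a, P - 2) }

/// Recover the coefficients of the unique polynomial of degree < n
/// passing through the n points (xs[i], ys[i]), in O(n^2) time.
fn lagrange_interpolate(xs: &[u64], ys: &[u64]) -> Vec<u64> {
    let n = xs.len();
    let mut coeffs = vec![0u64; n];
    for i in 0..n {
        // Build the numerator prod_{j != i} (x - x_j) one factor at a time,
        // tracking the scalar denominator prod_{j != i} (x_i - x_j).
        let mut basis = vec![0u64; n];
        basis[0] = 1;
        let mut denom = 1u64;
        for j in 0..n {
            if i == j { continue; }
            let neg_xj = (P - xs[j] % P) % P;
            let mut next = vec![0u64; n];
            for k in 0..n {
                if basis[k] == 0 { continue; }
                if k + 1 < n {
                    next[k + 1] = (next[k + 1] + basis[k]) % P; // times x
                }
                next[k] = (next[k] + basis[k] * neg_xj) % P; // times -x_j
            }
            basis = next;
            denom = denom * ((xs[i] + P - xs[j]) % P) % P;
        }
        // Add (y_i / denom) * basis to the running sum of coefficients.
        let scale = ys[i] * inv_mod(denom) % P;
        for k in 0..n {
            coeffs[k] = (coeffs[k] + basis[k] * scale) % P;
        }
    }
    coeffs
}

fn main() {
    // Four data values treated as evaluations at x = 0, 1, 2, 3 of
    // p(x) = 5 + 2x + 2x^2; the output is its coefficient form.
    let xs = [0u64, 1, 2, 3];
    let ys = [5u64, 9, 17, 29];
    assert_eq!(lagrange_interpolate(&xs, &ys), vec![5, 2, 2, 0]);
}
```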

Roots of unity

One approach to representing the data, as in the example above, is as evaluations at a list of sequential indices {0, 1, 2, …, d − 1}.

In many applications, however, we can choose the evaluation points. For example, if we are working in a finite field, we can treat the data as evaluations at the roots of unity of the polynomial domain:

dᵢ = p(wⁱ), for i ∈ {0, 1, …, d − 1}

(i.e., the i-th data point reflects an evaluation at the i-th power of the d-th root of unity). A primitive d-th root of unity, w, is an element of the field such that wᵈ = 1 and wⁱ ≠ 1 for any 0 < i < d.

For practical purposes, the root of unity serves as an iterator over the field elements in a set of a given size. Such an enumeration method allows us to perform many operations in finite fields more efficiently.
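Here is a small illustrative sketch of that enumeration: finding a primitive d-th root of unity in an NTT-friendly prime field and listing the evaluation domain it generates. The prime 998244353 and generator 3 are common textbook choices, not the parameters Subspace uses.

```rust
/// NTT-friendly prime: 998244353 = 119 * 2^23 + 1, with primitive root 3.
const P: u64 = 998244353;
const GENERATOR: u64 = 3;

fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1u64;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 { acc = acc * base % P; }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}

/// A primitive d-th root of unity, assuming d divides P - 1
/// (any power of two up to 2^23 works for this prime).
fn root_of_unity(d: u64) -> u64 {
    assert_eq!((P - 1) % d, 0);
    pow_mod(GENERATOR, (P - 1) / d)
}

fn main() {
    let d = 8;
    let w = root_of_unity(d);
    // The powers w^0, w^1, ..., w^(d-1) enumerate the evaluation domain,
    // and w^d wraps back around to 1.
    let domain: Vec<u64> = (0..d).map(|i| pow_mod(w, i)).collect();
    println!("domain = {:?}", domain);
    assert_eq!(pow_mod(w, d), 1);
}
```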

Interpolation using FFT

Fast Fourier Transforms (FFTs) are often used for polynomial interpolation when the order (degree) of the polynomial is large, as they have O(n log n) complexity. The FFT algorithm converts a polynomial from coefficient form to evaluation form by evaluating the polynomial at the points of the evaluation domain. The coefficients of the polynomial can be recovered from the evaluation form using the inverse FFT algorithm.

In polynomial commitment schemes, an FFT can be used to interpolate a polynomial over a set of opening points.

Given a set of points H = {x₀, x₁, …, x_d} and a set of corresponding values V = {y₀, y₁, …, y_d}, an FFT can be used to efficiently compute the coefficients of the degree-d polynomial p(x) that interpolates the values V over the points H, i.e., p(xᵢ) = yᵢ. Once the coefficients of p(x) are computed, they can be used to compute the polynomial commitment or a proof of evaluation at any given point, inside or outside the domain. When working in finite fields, the set of evaluation points H is usually a set of powers of a primitive root of unity for efficiency. Vitalik’s post on FFTs provides an excellent overview of the details.

Technically, the Discrete Fourier Transform (DFT) is the correct name for the operation, whereas the FFT is an algorithm that implements it efficiently. However, people often use FFT to refer to both the operation and the algorithm.

In summary, FFTs are efficient for computing polynomial interpolation and crucial for commitment and proof computation. Subspace v2 utilizes this approach in archiving and plotting, which will be described in detail in our next blog post.
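For a concrete (if simplified) picture of the round trip between coefficient and evaluation form, below is a toy radix-2 number-theoretic transform, i.e., an FFT over a prime field, in Rust. The prime 998244353 and generator 3 are illustrative; none of this is the Subspace codebase.

```rust
/// NTT-friendly prime: 998244353 = 119 * 2^23 + 1, with primitive root 3.
const P: u64 = 998244353;
const GENERATOR: u64 = 3;

fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1u64;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 { acc = acc * base % P; }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}
fn inv_mod(a: u64) -> u64 { pow_mod(a, P - 2) }

/// In-place NTT. `invert = false` evaluates the coefficients at the n-th
/// roots of unity; `invert = true` interpolates back to coefficient form.
fn ntt(a: &mut [u64], invert: bool) {
    let n = a.len();
    assert!(n.is_power_of_two());
    // Bit-reversal permutation.
    let mut j = 0usize;
    for i in 1..n {
        let mut bit = n >> 1;
        while j & bit != 0 {
            j ^= bit;
            bit >>= 1;
        }
        j |= bit;
        if i < j { a.swap(i, j); }
    }
    // Butterfly passes.
    let mut len = 2;
    while len <= n {
        let mut w_len = pow_mod(GENERATOR, (P - 1) / len as u64);
        if invert { w_len = inv_mod(w_len); }
        for start in (0..n).step_by(len) {
            let mut w = 1u64;
            for k in 0..len / 2 {
                let u = a[start + k];
                let v = a[start + k + len / 2] * w % P;
                a[start + k] = (u + v) % P;
                a[start + k + len / 2] = (u + P - v) % P;
                w = w * w_len % P;
            }
        }
        len <<= 1;
    }
    if invert {
        let n_inv = inv_mod(n as u64);
        for x in a.iter_mut() { *x = *x * n_inv % P; }
    }
}

fn main() {
    // Coefficients -> evaluations over the 8th roots of unity, and back.
    let coeffs = vec![1u64, 2, 3, 4, 0, 0, 0, 0];
    let mut values = coeffs.clone();
    ntt(&mut values, false); // evaluation form
    let mut recovered = values.clone();
    ntt(&mut recovered, true); // back to coefficient form
    assert_eq!(recovered, coeffs);
    println!("evaluations = {:?}", values);
}
```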

The Barycentric Interpolation

Ethereum suggests another approach to committing, evaluating, and opening polynomials in the Lagrange basis without FFTs, by employing the Barycentric Formula and roots of unity. The takeaway is that we don’t actually have to interpolate the polynomial anymore.

We are only introducing this method for background information, but it also has other nice properties that mesh well with KZG commitments.

For a step-by-step explanation of how the Barycentric approach can be applied to KZG commitments, see Evaladas’ post for IOTA. This method also requires a one-time setup cost to transform the KZG SRS (Structured Reference String, public parameters) from the monomial basis to the Lagrange basis. Commitments to Lagrange polynomials can also be computed from existing public parameters as described by Alin Tomescu here.

For a more detailed example of the Barycentric method applied to Verkle Trees and some of its advantages, see Dankrad’s blog post on PCS Multiproofs.

Reed-Solomon Erasure Codes

Reed-Solomon (RS) codes are a family of polynomial-based codes that allow us to make information more robust by protecting it from errors and erasures. They were initially designed to handle corruption (or errors) within a digital message sent over a noisy channel, so they are often referred to as error-correcting codes. In the (typically distributed) storage setting, they are referred to as erasure codes, though they may also include error-correcting properties, depending on the implementation.

Key Terminology

An erasure code transforms a message M, consisting of m symbols, into a longer message (or code word) with n symbols, such that the original message may be recovered from any k symbols, where m ≤ k < n. The fraction r = m/n is called the code rate.

An optimal erasure code, or Maximum Distance Separable (MDS) code, has the property that m = k, i.e., any m-of-n (equivalently, any k-of-n) symbols may be used to reconstruct the original message. Linear erasure codes, where every coded symbol is a linear combination of the first m data symbols, are often MDS codes.

A systematic erasure code has the property that the first m coded symbols are the original data symbols, i.e., the data symbols are left unchanged, and the new parity symbols are either interleaved with the original symbols or appended to the end of the original message. Systematic erasure codes are often linear erasure codes as well.

RS codes are a family of linear MDS codes that can be implemented in either a systematic or non-systematic manner. RS codes treat the data as a polynomial, either in the coefficient form (non-systematic) or the evaluation form (systematic), sometimes referred to as the dual view of Reed-Solomon.

Bringing this back to our earlier example, you can think of the message M as our data vector D or some file F that we wish to encode. The m symbols will reflect a polynomial of degree d = m-1, where each symbol is a large number representing a data point.

Non-systematic or Coefficient View of RS

To encode, we treat the m source chunks of the message as the coefficients of a polynomial and evaluate that polynomial over the integers {0, …, (n − 1)} to obtain the n parity chunks.

Note that we only store the n parity chunks and discard the m source chunks in this scheme.

To decode or recover, we need any k parity chunks (and their indices), from which we can interpolate the original polynomial, extract the m source chunks as the coefficients of that polynomial, and finally concatenate them to reconstruct the original message.

Information Dispersal with Provable Retrievability for Rollups: https://arxiv.org/abs/2111.12323?context=cs
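In symbols (a generic textbook presentation of the coefficient view, with all arithmetic in the code’s finite field), encoding treats the message symbols Mᵢ as coefficients and evaluates:

$$
p(x) = \sum_{i=0}^{m-1} M_i \, x^{i}, \qquad c_j = p(j) \quad \text{for } j = 0, 1, \dots, n-1,
$$

and decoding interpolates p(x) from any k of the pairs (j, cⱼ) and reads the source symbols back off as its coefficients.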

Systematic or Evaluation View of RS

To encode, we treat the m source chunks as evaluations of a polynomial at the indices {0, …, (m − 1)} and interpolate to extract the coefficients of the polynomial. We then evaluate that polynomial at the indices {m, …, (n − 1)} to produce the n − m parity chunks.

We now store the encoded file as all n chunks: the m source chunks plus the n − m parity chunks. Note that in different implementations, the source and parity chunks may be kept distinct or interleaved while stored across one or more drives, servers, or nodes.

To recover, we need any k of the source or parity chunks (and their indices), from which we can repeat the interpolation procedure to extract the polynomial coefficients, as in encoding. We can then evaluate that polynomial at the indices of the missing source chunks to recover them, and finally concatenate the first m chunks to reconstruct the original message.
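Below is a toy Rust sketch of the systematic view with illustrative parameters (a tiny prime field with modulus 65537, m = 4 source chunks, n = 8 total chunks; none of these are Subspace’s actual parameters). For brevity, it evaluates the interpolated polynomial directly with the Lagrange formula instead of extracting its coefficients first.

```rust
/// Toy prime modulus, for readability only.
const P: u64 = 65537;

fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
    let mut acc = 1u64;
    base %= P;
    while exp > 0 {
        if exp & 1 == 1 { acc = acc * base % P; }
        base = base * base % P;
        exp >>= 1;
    }
    acc
}
fn inv_mod(a: u64) -> u64 { pow_mod(a, P - 2) }

/// Evaluate at `x` the unique polynomial passing through (xs[i], ys[i]).
fn lagrange_eval(xs: &[u64], ys: &[u64], x: u64) -> u64 {
    let mut total = 0u64;
    for i in 0..xs.len() {
        let mut num = 1u64;
        let mut den = 1u64;
        for j in 0..xs.len() {
            if i == j { continue; }
            num = num * ((x + P - xs[j]) % P) % P;
            den = den * ((xs[i] + P - xs[j]) % P) % P;
        }
        total = (total + ys[i] * num % P * inv_mod(den)) % P;
    }
    total
}

fn main() {
    let m = 4usize; // source chunks
    let n = 8usize; // total chunks after encoding
    let source: Vec<u64> = vec![10, 20, 30, 40]; // evaluations at x = 0..3

    // Encode: extend the source evaluations to the parity indices m..n-1.
    let src_xs: Vec<u64> = (0..m as u64).collect();
    let mut encoded = source.clone();
    for x in m as u64..n as u64 {
        encoded.push(lagrange_eval(&src_xs, &source, x));
    }

    // Recover: suppose only chunks 0, 3, 5, and 7 survive. Any m surviving
    // chunks determine the degree-(m-1) polynomial, so we can re-evaluate
    // it at the missing source indices.
    let surviving: Vec<(u64, u64)> =
        vec![(0, encoded[0]), (3, encoded[3]), (5, encoded[5]), (7, encoded[7])];
    let (xs, ys): (Vec<u64>, Vec<u64>) = surviving.into_iter().unzip();
    assert_eq!(lagrange_eval(&xs, &ys, 1), source[1]);
    assert_eq!(lagrange_eval(&xs, &ys, 2), source[2]);
    println!("encoded = {:?}", encoded);
}
```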

Trade-offs between Views

Non-systematic RS has an encoding complexity of O(n) since it avoids the computationally expensive interpolation step in the forward direction. While the complexity of decoding is equivalent to systematic RS, decoding must always be performed, regardless of whether errors or erasures have occurred. In other words, encoding is always a low-cost operation for the sender, and decoding is always a high-cost operation for the receiver. In the distributed storage setting, archiving is always low-cost, while data retrieval is always high-cost, making this scheme better suited to distributed storage settings where recovery is infrequent (i.e., cold storage).

Systematic RS has a naive encoding complexity of O(n³) when Gaussian elimination is used to generate the encoding matrix. However, this can be reduced to O(n²) with Lagrange Interpolation and O(n log n) with the Discrete Fourier Transform (DFT) for higher-degree polynomials. If the polynomial degree is constant, these matrices may be pre-generated and hard-coded into the implementation for a small storage overhead, further reducing the cost of interpolation and evaluation to O(n). See research notes by Vitalik here, here, and here.

While the complexity of systematic decoding is equivalent to non-systematic RS, decoding is only needed in the presence of errors or erasures, with cost proportional to the degree of data loss or corruption. In other words, encoding is always a low-cost operation for the sender, and decoding is typically a zero-cost operation for the receiver. In the distributed storage setting, archiving is always low-cost, while data retrieval is typically zero-cost, making this scheme better suited to distributed storage settings where recovery is more frequent (i.e., hot storage).

Systematic RS codes also have the added benefit of direct compatibility with polynomial commitment schemes, such as the KZG scheme treated as a vector commitment, since they share the same interpolated polynomial. This symmetry can be exploited to efficiently prove and verify that some parity chunk corresponds to a KZG commitment to the original data.

KZG Polynomial Commitments

KZG schemes are notoriously difficult to grok. However, consider them a general-purpose hash function with some added magical properties. For example, they provide the same functionality as a common hash function like SHA256 by creating a short (48 vs. 32 bytes) commitment for any arbitrary data of variable sizes, such as a file. Like normal hash functions, the commitment is guaranteed to be random (unpredictable) and unique, analogous to a file’s digital fingerprint. However, unlike normal hash functions, the commitment also encapsulates some aspects of the algebraic structure of the underlying data, making it more analogous to the digital DNA of a file. While this analogy has its limits — we can’t recreate the file from the KZG commitment, similar to how we can’t recreate an organism from its DNA — we can still retain the high-level blueprints for the file, perhaps making it more akin to a digital phenotype of some data, rather than its digital genotype.

The Good — Special Properties

Concretely, KZG allows us to create a succinct 48-byte commitment to a polynomial p(X) and then provide a constant-sized 48-byte proof of evaluation (or witness), regardless of the size (or degree) of the polynomial. For example, you can prove that p(x)=y for any random x. Moreover, you can combine many proofs for the same polynomial into a single proof that is still just 48 bytes. The conceptual leap many of us have struggled with is figuring out how to represent any arbitrary data as a polynomial to take advantage of these properties.
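For reference, the textbook single-point construction looks as follows in standard bracket notation (this is the generic KZG scheme, not Subspace-specific notation). Given a structured reference string of powers of a secret τ, the commitment to p, the witness for an evaluation p(z) = y, and the pairing check are:

$$
\mathrm{SRS} = \big([\tau^0]_1, [\tau^1]_1, \dots, [\tau^d]_1,\ [1]_2, [\tau]_2\big), \qquad
C = [p(\tau)]_1 = \sum_{i=0}^{d} c_i \, [\tau^i]_1,
$$

$$
q(X) = \frac{p(X) - y}{X - z}, \qquad \pi = [q(\tau)]_1, \qquad
e\big(C - [y]_1,\ [1]_2\big) = e\big(\pi,\ [\tau]_2 - [z]_2\big).
$$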

KZG commitment schemes may also be used to construct vector commitments (VCs), a form in which they are often compared to Merkle Hash Trees; however, this analogy only goes so far. While KZG-based VCs provide similar (and better) features to a Merkle tree, they are constructed entirely differently. A better analogy to the Merkle tree for KZG would be the Verkle Tree (a portmanteau of Vector Commitment and Merkle Tree), which constructs a k-ary Merkle tree by replacing the standard cryptographic hash function with a KZG-based VC scheme. Verkle trees will be utilized heavily within ETH to replace the Merkle Patricia Trie within the Ethereum state database.

The most interesting (and most challenging to understand) features of KZG schemes revolve around their additive homomorphic properties. This so-called magical moon math allows us to add two commitments to two different polynomials and obtain the same commitment as if we had manually added the polynomials and committed to their sum. Moreover, the same property holds for witnesses for evaluations at the same index of different polynomials and allows for very efficient proof-batching techniques. While these properties have been documented for over a decade, researchers are still untangling their implications. For example, Vitalik has written several posts that speculate about their potential.
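In symbols, the additive homomorphism amounts to the following: because commitments and witnesses are group elements that are linear in the polynomial,

$$
\mathrm{com}(p) + \mathrm{com}(q) = \mathrm{com}(p + q), \qquad
\pi_{p,z} + \pi_{q,z} = \pi_{p+q,\,z},
$$

where the combined witness verifies against the combined value p(z) + q(z) at the same evaluation point z.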

The Bad — Witness Generation

Nevertheless, KZG has its shortcomings. The biggest challenge, which most protocol designers don’t fully grasp until they’ve built an implementation, is the computational cost of witness generation. For a degree-n polynomial used as a vector commitment, the cost of generating the commitment is O(n). If we also wish to generate a witness for the evaluation at each index (as is often done with Merkle trees), the total cost becomes O(n²). This cost can be reduced to O(n log n) with efficient batch-proof techniques, which we discuss in more detail below.

It is also worth noting that batch proofs are not the silver bullet they are often presented as. While they ensure a constant-sized witness for any number of elements and constant-time verification of any single element, the verification time for all elements in the batch is still linear, as is the full proof size, which must include all the elements. Moreover, batch proofs require significantly more complex polynomial math to verify, which raises the burden for light clients and smart contracts. For these reasons, many recent protocols (which we extend) employ novel two-dimensional constructions to reduce the effective cost back to O(n), or leverage the homomorphic properties of KZG to either interpolate witnesses or avoid witness generation entirely.

The Ugly — Trusted Setup

The biggest drawback of KZG schemes is that they require a trusted setup to generate the public parameters, known as a Structured Reference String (SRS), similar to the ceremony conducted by Zcash and many other ZK protocols. This requirement can be mitigated by employing a secure multi-party computation (MPC) protocol among n participants, in which only one participant needs to be honest. Existing trusted SRS parameters may also be upgraded to derive unique parameters for new protocols.

When compared to the benefits of KZG, the drawback of a trusted setup is seen as acceptable by many projects and their communities — when conducted with a proper MPC ceremony. Vitalik provides an excellent overview of how these ceremonies work under the hood. Alin also comprehensively compares the trade-offs among KZG and other PCS and VC schemes in his talk on aSVC. The key takeaway is that despite significant research over the last decade, a scheme with a trustless setup that provides the same properties as KZG does not yet exist.

Multiple Views within the KZG PCS

Similar to what we saw earlier with Reed-Solomon erasure codes, several different forms or views may be employed to represent some data vector as a polynomial, each with unique properties and trade-offs suited for different applications. Recall the title of the original KZG paper: “Constant-Sized Commitments to Polynomials and their Applications.” KZG is best understood as a mathematical primitive that we can use to construct application-specific cryptographic protocols.

While I’m not aware of any KZG-based protocols that treat the data directly as the coefficients of a polynomial (as non-systematic RS codes do), random polynomials (effectively random data) may be employed to reduce the communication complexity of Verifiable Secret Sharing (VSS) schemes, as described in the original paper.

Suppose we treat the data as the evaluations of a polynomial. In that case, we can interpolate the polynomial and construct a VC scheme in the same manner as systematic RS codes do. So far, this is the killer app of KZG. However, we can further enhance the VC scheme if we convert the SRS from the monomial to the Lagrange basis and use roots of unity as the evaluation points rather than [0, 1, …, n − 1]. This enumeration allows us to efficiently commit to the polynomial, generate witnesses, and evaluate the polynomial at any random point, as described here.
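Concretely, once the SRS is expressed in the Lagrange basis over the roots-of-unity domain, the commitment can be computed directly from the data evaluations yᵢ, with no interpolation step:

$$
C = [p(\tau)]_1 = \sum_{i=0}^{n-1} y_i \, [L_i(\tau)]_1,
$$

where the Lᵢ are the Lagrange basis polynomials of the domain.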

If we instead treat the data as elements of a set, we can view those elements as the roots of a polynomial (not to be confused with the roots of unity) and commit to the data as a zero-knowledge set. This approach allows for efficient proofs of membership and non-membership, as described in the original paper and explained by Alin in his post on bilinear accumulators.

Batch Proof Schemes for KZG

One important detail about any pairing-based commitment scheme (which KZG is), often implied rather than explicitly stated, is that the values in the committed vector are bounded by the order of the elliptic-curve group used for the pairings. For the BLS12-381 curve used in practical instantiations of the scheme, that limit is slightly below the full 32 bytes: committed values must fit into at most 254 bits, so in practice one can only commit to 31-byte values without overflow. Thus, the data one wishes to commit to must be partitioned into 254-bit (or, practically, 31-byte) chunks. It is often desirable to prove membership of data larger than 31 bytes, e.g., to prove the inclusion of 1 KiB data pieces in some larger dataset.

One way to address this is to hash pieces of the desired size down to 31-byte values. For instance, Filecoin employs this tactic, using the same curve for SNARKs. However, this approach impedes the ability to prove that the committed data was correctly erasure coded and is therefore not applicable in the Subspace case, where we want to preserve this feature of KZG commitments.

The second approach is to batch-prove the inclusion of multiple 31-byte chunks with a single proof, i.e., a batch proof of 33 chunks would serve as the proof for a ~1 KiB data piece. Currently, there exist three generic classes of batched proof evaluation approaches. Each approach to batching has several use cases, described in the papers cited below, and may be instantiated with different trade-offs in space and computation complexity.
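As a minimal illustration (not the Subspace codec), partitioning a piece into 31-byte chunks might look like the following, where every chunk is padded to 32 bytes with a zero byte so that it is guaranteed to lie below the ~255-bit BLS12-381 scalar field modulus:

```rust
/// Split a data piece into 31-byte chunks, each padded to 32 bytes with one
/// zero byte so the resulting value always fits below the scalar field
/// modulus. Which end receives the padding depends on the endianness
/// convention of the field library in use.
fn chunk_piece(piece: &[u8]) -> Vec<[u8; 32]> {
    piece
        .chunks(31)
        .map(|c| {
            let mut scalar = [0u8; 32];
            scalar[..c.len()].copy_from_slice(c);
            scalar
        })
        .collect()
}

fn main() {
    // 33 chunks * 31 bytes = 1023 bytes, i.e. the ~1 KiB piece size
    // mentioned above.
    let piece = vec![0xABu8; 1023];
    assert_eq!(chunk_piece(&piece).len(), 33);
}
```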

Single Polynomial at Multiple Points

aka subvector commitments, same-commitment aggregation

In literature: aSVC (TAB+20), Hyperproofs (SCP+21), Balanceproofs (WUP22) and KZG10

Proof of inclusion of a batch of t chunks is achieved by interpolating a polynomial I(x) of degree less than t over the t opening points, such that the committed polynomial p(x) is equal to I(x) over the domain H of these points, and proving that the difference polynomial (p − I)(x) is divisible by the vanishing polynomial 𝓩ʜ(x) of this domain.

The prover computes a commitment to the quotient polynomial

q(x) = (p(x) − I(x)) / 𝓩ʜ(x)

and proves that p(x) − I(x) = q(x) · 𝓩ʜ(x) by opening the relation at a random challenge point.

Multiple Polynomials at the Same Point

aka cross-commitment aggregation

Achieved by computing a (pseudo)random linear combination of the individual proofs.

In literature:

Pointproofs (GRWZ20): A batch proof is computed as a combination of the individual proofs,

π = ∏ᵢ πᵢ^tᵢ

where each scalar tᵢ is the hash of something known to both prover and verifier (e.g., commitments, outputs, and whatnot).

Succinct Multi-Point Proofs (ADVZ21, Sec IV.D): Achieved by interpolating over the original commitments, evaluating that polynomial at a random point, and proving the evaluation.

A proof πᵣ for a random point r would prove a value

y = Σᵢ yᵢ · Lagrangeᵢ(r)

corresponding to the commitment

C = Σᵢ Lagrangeᵢ(r) · Cᵢ

where the yᵢ are values at the same point of the polynomials with commitments Cᵢ, and the Lagrangeᵢ are the Lagrange basis polynomials.

Multiple Polynomials at Multiple Points

In literature: Pointproofs (GRWZ20), Multiproofs (Feist21), and BDFG20

Achieved by combining the previous two methods into a composite two-step protocol.

Combining Reed Solomon with KZG Commitments

Subspace v1 already employed a simple 1D-RS erasure coding scheme for archiving the blockchain history, combined with a standard Merkle Hash Tree to extend Proofs-of-Replication (PoRs) into Proofs-of-Archival-Storage (PoAS). In Subspace v2, we will still use RS codes but under a multi-dimensional scheme. We will also replace Merkle Hash Trees with KZG Vector Commitments for more efficient and verifiable PoAS. We call this verifiable archiving.

Separately, our v2 consensus will also replace the SLOTH-based PoR and Binary Search Tree (BST) audit with an entirely new PoR based on RS codes that will leverage the homomorphic properties of KZG for efficient audits and proofs. We call this polynomial PoRs.

It is imperative to treat these two upgrades as conceptually distinct. Even though both rely on the relationship between erasure coding and KZG, they solve different problems in different ways.

Verifiable Erasure Coding

It is common practice to apply erasure coding to distributed data storage. In the blockchain setting, however, we need a way to ensure both the correctness of the erasure coding and the availability of the underlying data to guarantee the data is recoverable. For example, a malicious block producer could erasure code some random data instead of the blockchain history. Moreover, if they also withheld enough data, no one could recover the block or even know whether the erasure coding was computed correctly.

The standard technique of committing to data with Merkle trees does not solve this problem. The original solution, proposed in the seminal paper on Data Availability Sampling (DAS), combines Merkle trees with a novel 2D-RS scheme that allows for the creation of erasure-coding fraud proofs. This technique is employed in the LazyLedger protocol, which eventually became the Celestia project.

Compared to 2D-RS with Merkle trees, 1D-RS with KZG commitments removes the need for optimistic fraud proofs and lets us immediately verify the correctness of the erasure coding with a short proof. This works by committing to a single polynomial whose evaluations cover both the source and the extended data, so a valid opening proves that they lie on the same polynomial (i.e., that the data was extended correctly).

For example, given 128 chunks of data (d₀, …, d₁₂₇), treated as evaluations of a polynomial, there exists a unique polynomial of degree at most 127 that passes through these evaluations. We can then extend this data using a systematic RS code to add 128 more evaluations (e₀, …, e₁₂₇), which also lie on the same polynomial. This is achieved by evaluating the polynomial that represents the source data outside its domain, also called polynomial oversampling.
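In symbols, writing the source domain as powers of a root of unity ω (as in the roots-of-unity discussion above), this rate-1/2 example is:

$$
d_i = p(\omega^i), \ \ i = 0, \dots, 127, \qquad e_j = p(x_j), \ \ j = 0, \dots, 127,
$$

where p has degree at most 127, the xⱼ are 128 evaluation points outside the source domain, and any 128 of the 256 total evaluations determine p, and hence the source data.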

Stay tuned!

We hope this was a good overview of the algorithms we use in Subspace v2. Regardless, this is just the beginning. Please stay tuned for part 2 of this series for a deeper look at Dilithium, the Subspace Network’s v2 consensus.

Subspace Network Forum

Our research discussions have migrated to the Subspace Network Forum, so don’t miss the chance to follow the discussions driving innovation in blockchain technology.

Let’s Connect!

Website — https://www.subspace.network
GitHub — https://www.github.com/subspace
Discord — https://discord.gg/subspace-network
Telegram — https://t.me/subspace_network
Twitter — https://www.twitter.com/NetworkSubspace
