Introduction and Overview
In this post we continue with the description of our proposed solution for the problem of "Attributable consensus". Please check the First Post for all the necessary background!
In the First Post we introduced the problem and provided a high-level overview of our mechanism. After that, we went on to describe in detail how the cluster operators create their Optimistic Participation Proofs. An important component of the solution is given by the concept of cslots and cepochs. These are collections of Beacon chain slots that are used to bound the duration of different steps in our protocol.
In this post we explain how the cluster operators can use their Optimistic Participation Proofs (PPs). In short, during a cepoch, in what we call a Point Settlement Phase, cluster operators make public the PPs created during the previous cepoch. These PPs are publicized as blob data (leveraging EIP-4844). Along with these, the operators claim a certain amount of "points". Points are a metric used later to determine how many rewards are assigned to each cluster operator. The way rewards are distributed as a function of points earned will be described in our next blog post for the Attributable Consensus solution.
To ensure that operators do not lie about the contents of their PPs and the points they are claiming, we design a whistleblower mechanism. This allows the protocol to proceed optimistically, i.e. accepting in principle the validity of the PPs and claimed points. Third parties can raise an alarm when they detect a problem (in exchange for a reward). In this event, the validity or falsehood of the PPs and claimed points is checked onchain. This fraud-proof approach allows us to save a huge amount of gas. In this post we also provide most of the details of this whistleblower mechanism.
In Post 3, besides describing our point-to-reward solution, we will describe a second type of Participation Proof, which is based on Verifiable Delay Functions. These are proofs of activity that can be reliably created without the need to interact with any other operator (see Post 1 for more information).
Point settlement phase
This phase lasts for a cepoch, and its goal is to settle the points operators earned during the previous cepoch.
The phase runs in parallel with the rest of the activity clusters are meant to perform during a cepoch.
It consists of 2 subphases:
 Submission of blob data and points.
 Whistleblower challenges.
Each subphase lasts for a number of L1 blocks, \Delta_{data} and \Delta_{accuse} respectively.
Cluster coalitions — We describe first the simplest approach, where each cluster submits its own blob data and its own point updates. A more complex approach allows clusters to form coalitions and to create a single blob of data encompassing the data of all clusters in the coalition.
This saves on transaction and computation costs. It is not trivial to design due to different attacks that can occur.
Submission of blob data and claiming of points — \Delta_{data}
Notation: Recall we let B[C, s] denote the optimistic Participation Proof of cluster C for cslot s. We also let B[C] be the list (B[C,1],\ldots, B[C,M]) where M is the number of cslots in the cepoch.
Once the L1 SC records the end of a cepoch, it records the current block number Bnum and sets its state to "Point settlement phase — Blob submission".
This phase lasts \Delta_{data} L1 blocks.
Each cluster C creates a threshold signature TS(B[C]) on the data B[C] (first, to agree on the data B[C], the cluster uses a consensus protocol, which we select to be HotStuff). Then some operator O in the cluster ("chosen" via a race mechanism):

Creates an Ethereum transaction that includes the list B[C] and TS(B[C]) as blob data (using EIP-4844). Later in this post we describe the exact details of how B[C] is structured into blob data.

The transaction also calls the L1 SC and has it store the versioned hash of B[C] and the KZG commitment to B[C]. This is done by updating 32-byte and 48-byte variables in the L1 SC reserved for cluster C.
A whistleblower can raise an alarm later if the hash of \text{KZG}(B[C]) does not match the versioned hash of B[C].

The L1 SC also stores the identity of the operator submitting the transaction (for compensation/penalization purposes), and reverts if the submitter does not belong to the cluster C.
(note the L1 SC knows what operators are in each cluster)

The L1 SC checks that the tx is processed before block number Bnum + \Delta_{data}. If it is not, it reverts.
The submission of this transaction is done via a race among the operators in the cluster. This is similar to how the commitments to participation proofs are sent to the L2 SC.
The operator whose transaction is accepted earns a small reward. The gas costs of the transaction are split evenly among the cluster.
The signature TS(B[C])
This is an aggregate BLS signature, together with a bitfield indicating which operators' signatures have been aggregated. We require that there are at least \lceil 2n/3\rceil such signatures.
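As a minimal sketch of the signer-count requirement (in Python; `enough_signers` is a hypothetical helper, not part of the protocol spec), the bitfield check reduces to counting set bits:

```python
import math

def enough_signers(bitfield: list[bool], n: int) -> bool:
    """Return True iff at least ceil(2n/3) of the n operators signed.

    `bitfield` models the per-operator flag vector attached to TS(B[C]).
    """
    return sum(bitfield) >= math.ceil(2 * n / 3)
```

For example, with n = 10 operators the threshold is \lceil 20/3 \rceil = 7 signatures.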
Converting B[C] into blob data format
A blob of data is a sequence of 254-bit strings (w_1, \ldots, w_m), which we call words. There are up to 3900 blob entries in such a sequence (because a blob of data stores at most 125 KB as of October 27).
We organize B[C] into such format as follows:

For each cslot number s, let
k_s = \left\lceil \frac{|B[C,s]|}{253} \right\rceil, where |B[C,s]| denotes the bit-length of B[C,s].

Then we store B[C,1] in the blob entries w_1,\ldots, w_{k_1}. In each w_i we use at most 253 bits, and keep the first bit as 0.
We do this because we use words w_i starting with 1 to denote the end of a participation proof B[C,s].

For the word w_{k_1+1}, we store the string 1\overset{254}{\ldots}1 (254 copies of 1), to indicate that B[C,1] ends right before this entry.

We proceed similarly with the rest of B[C].

Once this is done, we store 1\overset{254}{\ldots}1 in the next word, and we then store the threshold signature TS(B[C]) (possibly across more than one word).
Overall, the blob of data looks as follows:
where \widetilde{B[C, s]} denotes the data B[C,s] split into k_{s} consecutive 254-bit words, each starting with the prefix 0. Similarly, \widetilde{TS(B[C])} denotes TS(B[C]) split into 254-bit chunks (in this case we do not reserve any special prefix).
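The packing above can be sketched in Python, modeling each 254-bit word as an integer (the zero-padding of a final partial chunk is our assumption; the post does not specify it):

```python
SEPARATOR = (1 << 254) - 1  # the all-ones word marking the end of a proof B[C, s]

def pack_proof(bits: str) -> list[int]:
    """Split one proof's bitstring into words: prefix bit 0 + up to 253 payload bits."""
    words = []
    for i in range(0, len(bits), 253):
        chunk = bits[i:i + 253].ljust(253, "0")  # assumption: zero-pad the last chunk
        words.append(int("0" + chunk, 2))        # leading 0 keeps the word below 2^253
    return words

def pack_blob(proofs: list[str], signature_bits: str) -> list[int]:
    """Lay out B[C,1], 1...1, B[C,2], 1...1, ..., B[C,M], 1...1, TS(B[C])."""
    blob = []
    for proof in proofs:
        blob.extend(pack_proof(proof))
        blob.append(SEPARATOR)
    # the threshold signature chunks carry no reserved prefix bit
    for i in range(0, len(signature_bits), 254):
        blob.append(int(signature_bits[i:i + 254].ljust(254, "0"), 2))
    return blob
```

A two-cslot toy blob then contains one word per short proof, a separator word after each, and finally the signature chunks.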
Remark
The reason why we enforce this relatively rigid structure is due to how we access its contents later. Precisely, we use a KZG precompile which allows one to "open" a word in the blob at a specified word index. Hence, we need a fixed canonical map from word indices to cslot numbers. The details of this "opening" scheme are given later in this post.
Using more efficient methods for dividing the data B[C] into words may result in the need to read the whole blob, if the indices ↔ cslots mapping is not available.
Blob size and requirements
We should make sure that (B[C], TS(B[C])) can always fit into a sequence (w_1, \ldots, w_m) with m\leq 3900. The exact number of 254-bit blob entries w_i needed to store (B[C], TS(B[C])) is
In Post 1 we estimated the size of B[C,s] for any s. We saw the size is dominated by the signatures of the operators. For 1000 cluster validators and n operators we had, in expectation,
For a threshold signature of less than 512 bits we obtain that (1) is at most, in expectation,
for clusters of at most \lfloor 1.93 \cdot 253\rfloor = 17 operators. For clusters with up to 270 operators, the value becomes 58.
Point submission
Again using the same race mechanism as before, for each cluster, an operator O does the following:

O looks at all the optimistic Participation Proofs (and also all the VDF-based Participation Proofs, though we will describe those in a future post) submitted by its cluster C during the last cepoch, and computes the amount of points each operator in the cluster earned.
We will explain how these points are to be computed in an upcoming post, when we have given the details of how the VDFbased PPs work.

Then it sends this information to the L1 SC, which records it.

The L1 SC also records an identifier of the operator sending this data. This is for reimbursement and compensation purposes.

The L1 SC reverts if the information is provided after block number Bnum + \Delta_{data}.

It also reverts if the data is provided by someone that is not in the cluster C.
Costs
This step triggers once every 14 days and requires, for each cluster:

Updating at most \lambda\cdot n bits of information (corresponding to points earned by each operator), where:
 n is the number of operators in the cluster.
 2^\lambda is an upper bound for the number of points an operator can gain in a cepoch.
Assuming a cluster that controls 2000 validators, with 10 operators, and assuming the overestimate that each validator performs two tasks per epoch, each worth 4 points, this amounts to updating \leq 26 bits.

Storing the identifier of the operator sending the data (which could be 12 bytes in size), and checking that the identifier corresponds to an operator in the cluster. It is assumed that the operators' addresses are stored in the L1 SC during the setup of the cluster.

Checking that the current block number is not larger than a certain integer.
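To see where a figure like 26 bits can come from, here is a back-of-the-envelope sketch. The epoch count per cepoch (14 days of 6.4-minute Beacon epochs) and the use of the cluster-wide total as a loose per-operator bound are our assumptions, not stated in the text:

```python
import math

# Hypothetical parameters, matching the example in the text
validators = 2000                 # N_v: validators controlled by the cluster
tasks_per_epoch = 2               # overestimate: tasks per validator per epoch
points_per_task = 4
epochs_per_cepoch = 14 * 24 * 3600 // 384  # ~14 days of 384-second Beacon epochs

# Loose upper bound on points accrued in a cepoch (cluster-wide)
max_points = validators * tasks_per_epoch * points_per_task * epochs_per_cepoch
lam = math.ceil(math.log2(max_points))  # lambda: bits per operator's point counter
```

This yields epochs_per_cepoch = 3150, max_points = 50,400,000 and lam = 26, consistent with the 26-bit figure above.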
Phase for whistleblower challenges — \Delta_{accuse}
Due to its length and complexity, this phase is described separately later in this post.
Amortizing costs via cluster coalitions
We allow clusters to form coalitions. A coalition is a collection of clusters \mathcal{C}=\{C_1, \ldots, C_m\}. At the end of a cepoch, one operator in the coalition, which we call the "aggregator" (chosen via race), gathers the data
(recall TS(B[C]) is a threshold signature on B[C]) and organizes it into a blob of data. The exact details of how this is done are given later in this post.
Given a coalition cluster C\in\mathcal{C}, we denote
Similarly as before, the aggregator O then:

Creates an Ethereum transaction that includes \mathbb{B}_\mathcal{C} as blob data (using EIP-4844).

The transaction also calls the L1 SC and has it store the versioned hash of \mathbb{B}_\mathcal{C} and the KZG commitment to \mathbb{B}_\mathcal{C}.
For the latter, O submits the KZG commitment \text{KZG}(\mathbb{B}_\mathcal{C}) to \mathbb{B}_\mathcal{C}. A whistleblower can raise an alarm later if the hash of \text{KZG}(\mathbb{B}_\mathcal{C}) does not match the versioned hash of \mathbb{B}_\mathcal{C}.

The L1 SC stores the identity of O. This is for compensation/penalization purposes.

The L1 SC stores the tx cost (which will be split evenly among cluster operators). See here for an explanation of how to do that.

The L1 SC checks that the tx is processed before block number Bnum + \Delta_{data}. If it is not, it reverts.
Escape hatch
After block Bnum + \Delta_{data}, there is an extra time window \Delta_{data}' during which each cluster C is allowed to submit its data B[C] and threshold signature TS(B[C]).
The process is the same as in the "coalitionless" case.
If the cluster C uses this "escape hatch", then the data (B[C], TS(B[C])) submitted with this mechanism supersedes the data \mathbb{B}_\mathcal{C}[C] submitted by the aggregator. This allows the cluster C to provide its info if something goes wrong with the aggregator O.
Incentives design and gas costs
The gas costs of the blob transaction are split evenly among all operators in the clusters C that have a valid threshold signature TS(B[C]) in \mathbb{B}_\mathcal{C}.
Aggregator incentives
To incentivize the aggregator O to include the data of all clusters in the coalition \mathcal{C}, we give them a small reward for each cluster C\in \mathcal{C} such that:
 The threshold signature TS(B[C]) in \mathbb{B}_\mathcal{C}[C] is correct. (If it is incorrect, O is penalized for having submitted an incorrect signature.)
 C didn't supersede its data in \mathbb{B}_\mathcal{C} via the escape hatch mechanism.
This way, a rational aggregator O will always include all the data in the cluster coalition.
Escape hatch incentives
A cluster C that supersedes the data provided by the aggregator will pay for the gas costs of doing so. In this case, there is no penalty for the aggregator O, other than losing the small reward that O would get from submitting C's data (assuming O submits correct threshold signatures).
Using the escape hatch when there is no need for it is irrational, since the cluster ends up paying more gas than necessary. There is also no apparent serious attack towards the coalition aggregator O: unnecessarily using the escape hatch would only cause minor grief to O.
Converting \mathbb{B}_\mathcal{C} into blob format
We follow a similar approach as in the coalitionless setting, but now we use one extra prefix bit to indicate the separation between the data \mathbb{B}_\mathcal{C}[C_i] and \mathbb{B}_\mathcal{C}[C_{i+1}] corresponding to two clusters C_i, C_{i+1}.
Precisely, if \mathcal{C}=\{C_1,\ldots,C_m\} and there are M cslots in the cepoch:

For each cluster C_i and cslot number s, we split B[C_i,s] into 252-bit (as opposed to 253-bit as before) chunks
k_{C_i,s} = \left\lceil \frac{|B[C_i,s]|}{252} \right\rceil, where |B[C_i,s]| denotes the bit-length of B[C_i,s].

The chunks are stored in consecutive entries from the list (w_1,\ldots, w_{3900}). Since the chunks have 252-bit length, the first two bits of the used entries w_j are always 00.

To separate B[C_i,s] and B[C_i, s+1], we add the entry 01\overset{253}{\ldots}1 between the chunks corresponding to B[C_i,s] and the chunks corresponding to B[C_i, s+1], for all s=1,\ldots, M-1.

After storing B[C_i,M] we append the entry 01\overset{253}{\ldots}1 and then the threshold signature TS(B[C_i]).

Then, we append the entry 1\overset{254}{\ldots}1.

We repeat the whole process with B[C_{i+1}], TS(B[C_{i+1}]), and so on.
Overall, the blob of data looks as follows:
where \widetilde{B[C_i, s]} denotes the data B[C_i,s] split into k_{C_i, s} consecutive 254-bit words, each starting with the prefix 00. Similarly, \widetilde{TS(B[C_i])} denotes TS(B[C_i]) split into 254-bit chunks (no special prefix is used this time).
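As an illustrative Python sketch of how a reader of a coalition blob can use the two separator words (the constant names and the rendering of the bit patterns as integers are ours):

```python
WORD_BITS = 254
PROOF_SEP = (1 << 253) - 1          # 01...1: ends a B[C_i, s] or precedes TS(B[C_i])
CLUSTER_SEP = (1 << WORD_BITS) - 1  # 1...1: ends one cluster's section

def split_clusters(blob: list[int]) -> list[list[int]]:
    """Group a coalition blob's words into per-cluster sections using CLUSTER_SEP."""
    sections, current = [], []
    for word in blob:
        if word == CLUSTER_SEP:
            sections.append(current)
            current = []
        else:
            current.append(word)
    return sections
```

Each returned section still contains its internal PROOF_SEP markers, which a second pass can use to split out the individual B[C_i, s] and TS(B[C_i]).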
Blob size and requirements
As in the coalitionless case, we must make sure the data \mathbb{B}_{\mathcal{C}} fits in a single blob. Recall a blob consists of about 3900 words, where a word is a 254-bit string.
A similar computation as in the coalitionless case yields the following upper bound on the number of blob entries required, in expectation:
(the initial term (m-1) counts the number of strings of the form 1\overset{254}{\ldots}1)
Since the number of blob entries has to be at most 3900, the maximum number m of allowable clusters in a coalition (in expectation) is 88 or 67, depending on cluster size.
Whistleblower mechanisms
In general, Participation Proofs and the points claimed with them are optimistically accepted as correct. An actual verification of correctness is only performed if a whistleblower requests it.
In general, the workflow is as follows (though some cases differ a bit). We give the general outline, and afterwards provide details on a case-by-case basis.
 The whistleblower calls a specific function of the L2 Smart Contract, and deposits a small bond.
 The whistleblower submits a subset of some blob of data to the L2 SC + some metadata.
 The L2 SC requests a proof that the submitted subset of blob data is actually a subset of the published blob data. This proof is checked by the L1 SC using the KZG evaluation precompile that comes with EIP-4844. If the verification passes, the L1 SC sends a message to the L2 SC saying that everything is OK.
 The L2 SC makes pertinent checks onchain with the subset of blob data.
 If the checks fail, the "accused" Participation Proof is considered false.
We defer the penalty and bond/compensation discussion to a further post in this series.
The specific types of alarms that can be raised by a whistleblower are:

Incorrect subset of blob data.

An optimistic Participation Proof (L, \mathcal{S}) is incorrect. More precisely either:

\mathcal{S} contains an invalid signature.

\mathcal{S} contains fewer than \lceil 2n/3\rceil signatures.

L.start contains incorrect information. I.e. the L2 block number and header at which the cslot started is incorrect.

L.attestations > N_v \cdot N_B\cdot \Delta_{L2block}/6.4, where N_v is the number of cluster validators, and N_B is the number of L2 blocks in the cslot.

Informally, some of the lists L.slots.i falsely indicate that a cluster validator performed a task of type i at a slot.
Precisely, for some i=2,3,4 and some 1\leq t\leq n_i, no validator in the cluster performed a task of type i at the slot index Index(t, L.slots.i).


Incorrect VDF Participation Proof submitted. This will be explained in detail in the 3rd post of this series.

Incorrect amount of points claimed by a cluster.

Incorrect threshold signature TS(B[C]) on a clusterâ€™s cepoch data B[C].

The hash of a stored KZG commitment to blob data does not match the stored versioned hash of the data.
Auxiliary process — Getting a piece of blob data into the L2 SC
Let C_i be a cluster in a coalition, and let s be a cslot number for the last cepoch. Suppose a whistleblower wants to make B[C_i, s] available to the L2 SC, in a verifiable way.
Recall we let B[C_i, s] denote the blob data recorded for cluster C_i at cslot s, as explained previously. It corresponds to the string \widetilde{B[C_i,s]} in the blob
The whistleblower and the L1 and L2 SCs proceed as follows:

The whistleblower calls an appropriate function of the L1 SC and submits i, s, and B[C_i,s] as function calldata.
It calls the L2 SC and submits the same data.
Additionally, it deposits a bond on the L2 SC. The amount of bond to deposit, and the economics of the whistleblower mechanism, will be described in an upcoming part of this series of posts.

Intuitively, the L1 SC uses the KZG precompile from EIP-4844 to verify that the received B[C_i,s] is the entry in \text{BLOB} corresponding to cluster C_i and cslot s.
Precisely, the L1 SC proceeds as follows:

Decomposes B[C_i,s] into the format \widetilde{B[C_i, s]} (this is a sequence of 254-bit words, each starting with the prefix 00).

Computes the word indices where the data for cluster C_i and slot s is stored in \text{BLOB}. Say these are a_{C_i,s}, a_{C_i,s}+1,\ldots, b_{C_i,s}.

Uses the KZG point evaluation precompile to check that
\begin{align*}p_{\text{BLOB}}(a_{C_i,s}) &= \widetilde{B[C_i, s]}_{1}, \\ &\ldots\\ p_{\text{BLOB}}(b_{C_i,s})&= \widetilde{B[C_i, s]}_{b_{C_i,s}-a_{C_i,s}+1}\end{align*} where p_{\text{BLOB}}(X) is the polynomial interpolating the elements in \text{BLOB}.


If the check passes, the L1 SC computes Hash_{L1}(B[C_i,s]) and sends it to the L2 SC.
If it does not, the whistleblower loses its bond.

The L2 SC computes Hash_{L1}(B[C_i,s]) using the data B[C_i, s] provided by the whistleblower. Then the L2 SC asserts that it equals the hash received from the L1 SC.
If the check fails, the whistleblower loses its bond.
Otherwise, we can be sure that the blob data B[C_i,s] that was submitted by the whistleblower to the L2 SC is legitimate.
Note: At Steps 3 and 4, we could have the L1 SC send the whole B[C_i,s] to the L2 SC (instead of computing Hash_{L1}(B[C_i,s]) and sending it to the L2 SC). This would be computationally cheaper, but would require sending more data to the L2 SC. Additionally, this variation allows the whistleblower to skip the submission to the L2 SC, and avoids computing on the L2 (where Hash_{L1} may be a hard function to compute).
Which variation is optimal depends on the L2 being used.
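The hash relay in Steps 3 and 4 can be sketched as follows, using SHA-256 as a stand-in for Hash_{L1} (the real choice of hash function is L1-specific; the helper names are ours):

```python
import hashlib

def hash_l1(data: bytes) -> bytes:
    """Stand-in for Hash_L1; we assume SHA-256 here purely for illustration."""
    return hashlib.sha256(data).digest()

def l2_accepts(whistleblower_data: bytes, hash_from_l1: bytes) -> bool:
    """Step 4: the L2 SC recomputes Hash_L1 over the data the whistleblower
    submitted to it, and asserts it equals the hash relayed by the L1 SC."""
    return hash_l1(whistleblower_data) == hash_from_l1
```

If the two submissions differ in even one byte, the hashes mismatch and the whistleblower loses its bond.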
Issue 1 — Incorrect subset of blob data
The whistleblower complains if there is a cluster C_i and cslot number c such that: Hash_{L2}(B[C_i, c]) does not match the hash recorded on the L2 SC for cluster C_i at cslot c.
The whistleblower, L1 SC, and L2 SC, proceed as follows:

The data B[C_i, c] is made available to the L2 SC using the previous auxiliary subprocess.

The L2 SC computes Hash_{L2}(B[C_i,c]), and asserts that it equals the hash recorded on the L2 SC for cluster C_i at cslot c.
If the check fails, then the cluster is deemed guilty of having submitted bogus data (either bogus participation proofs or bogus commitments to them).
Issues 2.a, 2.b, 2.d
Here the whistleblower points to an optimistic Participation Proof (L, \mathcal{S}) such that either:
 The aggregate signature in \mathcal{S} is incorrect.
 There are fewer than \lceil 2n/3\rceil operators that have contributed to the signature (i.e. there are fewer than \lceil 2n/3\rceil ones in the bitfield of \mathcal{S}).
 L.attestations > N_v \cdot N_B\cdot \Delta_{L2block}/6.4.
In that case:
 The L1 SC asserts that:

The signature in \mathcal{S} is valid, using a precompile for BLS12-381 pairings.
Note: Checking the correctness of an aggregate BLS signature costs about 8 USD (roughly 170k gas).

There are at least \lceil 2n/3\rceil ones in the bitfield of \mathcal{S}.

L.attestations \leq N_v \cdot N_B\cdot \Delta_{L2block}/6.4.

If all checks pass, the Participation Proof is accepted as correct, and the whistleblower is penalized by taking its bond.
Otherwise, the optimistic PP is considered invalid and penalties and compensation are distributed appropriately (this is to be described in an upcoming post).
Note: Here we do not use the L2 SC.
Issue 2.e — Fake cluster duties
Here a whistleblower points to a task type i=2,3,4 and slot index t, and claims that L.slots.i registered a validator task for slot index t, but this task was not performed by any validator.
The whistleblower, L1 SC, and L2 SC proceed as follows:
 The optimistic PP (L, \mathcal{S}) is made available to the L2 SC via the "auxiliary process" described previously (recall L is an entry in \text{BLOB} of the form B[C_i, c] for some C_i, c).
 The L2 SC checks that a task of type i at slot index t was performed by a validator in the cluster.
How this is done exactly will be explained in the last blog post of this series. For now, we give an example for block proposal duties.
The slot where a block proposal was supposed to be performed is in the optimistic participation lists. Once we know the slot, we can get the relevant block root into the L2 SC (the details of how to do that will be given in the last blog post of the series).
The block proposer information can be found in BeaconBlock > proposer_index: ValidatorIndex.
Note that this is not the validator's public key. If we keep a record of the indices of our validators as in the beacon state validator registry, we are done. Otherwise, we need to get the public keys of the validators from the validator registry. We envision that Obol could, when deploying the L2 SC associated with a cluster, hardcode the indices of all validators controlled by the cluster, and update this information if necessary.
Suppose that the L2 SC has a record of the indices of all the validators of the cluster. A whistleblower that wants to claim that there is a problem with a block proposal at a specific slot j_{2,\ell}, for a list L starting at block s, should proceed as follows:
 the whistleblower gets the block root r for slot j_{2,\ell} into the L2 SC.
 the whistleblower submits the relevant list L such that L.start = s to the L2 SC by using the auxiliary process.
 the L2 SC parses L and identifies the slot j_{2, \ell}. If it cannot be found, the whistleblower loses the claim.
 the cluster has some time period to submit a Merkle proof (against the root r) that certifies that proposer_index is idx. If it fails to do so, the whistleblower wins the claim.
 the L2 SC checks the Merkle proof, and checks that idx is the index of some validator controlled by the cluster. If any of the checks fail, the whistleblower wins the claim.
The whistleblower has some time to claim that the data submitted is inconsistent with the blob data published or the L2 hash commitment.
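A minimal sketch of the Merkle check in the cluster's defence (the branch layout and helper names are hypothetical; real proofs against a Beacon block root use SSZ generalized indices):

```python
import hashlib

def verify_merkle_branch(leaf: bytes, branch: list[bytes], index: int, root: bytes) -> bool:
    """Hash the leaf up the tree along `branch`, going left/right depending on `index`."""
    node = leaf
    for sibling in branch:
        if index % 2 == 0:
            node = hashlib.sha256(node + sibling).digest()
        else:
            node = hashlib.sha256(sibling + node).digest()
        index //= 2
    return node == root

def proposer_check(idx: int, leaf: bytes, branch: list[bytes], index: int,
                   root: bytes, cluster_indices: set[int]) -> bool:
    """Accept the defence iff the proof verifies against the block root r
    and idx belongs to the cluster's recorded validator indices."""
    return verify_merkle_branch(leaf, branch, index, root) and idx in cluster_indices
```

If either the proof fails against r or idx is not a cluster validator, the whistleblower wins the claim.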
Issue 3 — Incorrect VDF proofs
This will be described in our next blog post, where we will describe VDF PPs in full detail.
Issue 4 — Incorrect amount of points provided
The whistleblower calls the L2 SC, pointing to a cluster C_i that, according to the whistleblower, has reported an incorrect number of earned points during the past cepoch. That is, the cluster did not compute the points of some operator O_j according to the formulas specified by the protocol (which we will explain in an upcoming post).
In that case:
 All participation proofs corresponding to C_i in \text{BLOB} are made available to the L2 SC via the auxiliary process, and all relevant VDF proofs are also made available. This is done by the whistleblower. The L2 SC also verifies this information against the L2 commitments submitted by the cluster and the operators at the end of the cslots.
 The L2 SC pulls from the L1 SC the points that the cluster C_i submitted.
 The L2 SC examines all the Optimistic and VDF Proofs and computes the amount of points operator O_j should get.
If the amounts match the information pulled from the L1, the whistleblower can make additional claims. Claims about the reality of optimistic participation proof duties are already covered and should be made separately. What remains is the correctness of the blob data, which is handled in Issue 7.
If all checks go through, we deem the whistleblower incorrect, and we take its bond.
Otherwise, we penalize the operator that submitted the point amounts.
Issue 6 — Incorrect threshold signature TS(B[C]) on a cluster's cepoch data B[C]
Here the whistleblower claims that there is a problem with the threshold signature on the cluster's cepoch data submitted to L1 as blob data. The whistleblower should get the data B[C], TS(B[C]) to the L1 SC by using the auxiliary procedure.
The L1 SC verifies TS(B[C]) against B[C] and the BLS threshold key of the cluster, by using BLS precompiles.
Issue 7 — Incorrect versioned hash of the KZG commitment to a cluster's cepoch data
Whenever the whistleblower submits some cluster's cepoch data, the only check that ensures the correctness of the data is the KZG evaluation precompile applied to the KZG commitment stored on the L1 SC. This KZG commitment was submitted to the L1 SC by a cluster operator, so we need to make sure that it is the correct one.
If the whistleblower complains about the correctness of the KZG commitment, the L1 SC hashes the KZG commitment provided by the cluster operator that submitted the blob data, and checks that it is consistent with the versioned hash stored in L1 for that blob data.
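Per EIP-4844, the versioned hash is the version byte 0x01 followed by the last 31 bytes of the SHA-256 of the 48-byte commitment, so this consistency check can be sketched as:

```python
import hashlib

VERSIONED_HASH_VERSION_KZG = b"\x01"  # version byte defined by EIP-4844

def kzg_to_versioned_hash(commitment: bytes) -> bytes:
    """versioned_hash = 0x01 || sha256(commitment)[1:] (EIP-4844)."""
    return VERSIONED_HASH_VERSION_KZG + hashlib.sha256(commitment).digest()[1:]

def commitment_matches(commitment: bytes, stored_versioned_hash: bytes) -> bool:
    """The L1 SC's check: hash the submitted KZG commitment and compare it
    against the versioned hash stored for the blob data."""
    return kzg_to_versioned_hash(commitment) == stored_versioned_hash
```

Any mismatch here means the operator committed to a blob other than the one actually published.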