Proposal: Attributable Consensus Solution for DV Clusters – Part 3

Albert Garreta, Ignacio Manzur, and Aikaterini Panagiota-Stouka on behalf of Nethermind Research.

Introduction and overview

This is the third installment of our series of posts on the topic of “Attributable consensus”. If you haven’t done so already, please check the First and Second Posts for all the necessary background!

In the first two posts we 1) introduced the problem; 2) described the main components of our solution; 3) explained how operators create so-called Optimistic Participation Proofs; 4) described how clusters claim participation points and how these proofs are used when doing so; and 5) explained how whistleblowers can trigger an on-chain verification in case some proof (or the amount of points claimed) is incorrect.

In this post we introduce a new type of participation proof, which we call a VDF-Participation Proof, where VDF stands for Verifiable Delay Function. These proofs can be created by an operator O individually, without the need to interact with any other cluster operator. This creates an “escape hatch” for the operator O, in that it allows O to prove its participation (or readiness to participate) regardless of whether other operators are offline or maliciously ignoring O. Hence, VDF PPs provide resilience against some of the challenges we set out to tackle in Post 1. However, VDF PPs are computationally more costly than Optimistic PPs, and so, as we will see, we envision a scenario where, as long as things run smoothly, operators rely solely on Optimistic PPs and rarely submit VDF PPs. The main idea is that, whenever O detects a large number of inactive cluster operators, O starts computing VDF PPs. Then, at the end of a c-epoch, if it is not possible for O to get its points via the submission of Optimistic Participation Proofs, O submits its VDF PPs instead. As with Optimistic PPs, we rely on whistleblowers to make sure the content of the VDF PPs is correct.

After discussing VDF PPs, we proceed to describe how cluster rewards are assigned to the cluster operators as a function of the points each operator has earned. Then, we explain how to reward whistleblowers that correctly raised an alarm, and how to penalize the parties at fault in that case.

VDF-based Participation Proofs

In this section we describe how and when the VDF PPs are created, and how they are used afterwards. We further discuss how whistleblowers check the validity of these proofs, and how on-chain verification is performed in case it is needed. We also discuss how operators can gain participation points with their VDF PPs.

Preliminaries on VDFs

A Verifiable Delay Function (VDF) is a primitive allowing a party to produce a succinct proof that it performed a certain computation that took a certain amount of time. This computation must possess additional properties ensuring that an adversary cannot, say, produce a proof faster than it can perform the computation. Oftentimes, VDFs are constructed by sequentially chaining computations that each take a minimal, incompressible amount of time to perform. This could be hashing, or squaring in a group of unknown order. The VDF is then a way to produce a succinct proof that one performed a certain number of iterations of that computation. We refer to this paper for an overview of the topic.

The idea we present is that if we periodically make the state of a blockchain (or, in our case, the L2 block headers) part of the input to these chained sequential computations, then by having the prover submit the VDF proof at a specific point in time we can ensure that it had access to the chain state. This is particularly useful when the cluster is inactive and DV nodes have no way of reaching consensus.

A recent discussion with members of the EF has brought our attention to the fact that the security properties of VDFs are not that well studied and understood. In particular, the candidate VDF for Ethereum, MinRoot, has been shown to have weaker security guarantees than anticipated. We can instead define hash-based VDFs, but we should keep in mind that the security (in particular, the sequentiality) of these cryptosystems has yet to be proven formally.

That being said, a well-studied VDF is the RSA-based type. These VDFs compute successive exponentiation in an RSA group, and we believe that the trusted setup of the RSA group could be performed by Obol, for all clusters to use.

RSA-based VDF

As we mentioned, the setup is an RSA group \mathbb{G}=\mathbb{Z}/N\mathbb{Z} where N=pq is the product of two large primes (with additional properties; these could be generated by Obol), together with an efficiently computable hash function H:\mathcal{X}\to \mathbb{G}, where \mathcal{X} is the set of 256-bit integers. We propose to use Wesolowski’s scheme despite the security loss incurred by applying Fiat–Shamir, because the proof is a single group element independently of the length of the computation. The prover has to perform more computation (compared to Pietrzak), but the information published on L2 is significantly reduced.

The difficulty parameter T is fixed. In the original Wesolowski scheme, the claim by the prover is as follows: “I know h, g\in\mathbb{G} such that h=g^{2^T}, where g=H(x) for some x\in\mathcal{X}”. The interactive scheme works as follows, and can be made non-interactive using Fiat–Shamir:

  1. The verifier checks that g,h \in \mathbb{G} and outputs reject if not.
  2. The verifier sends to the prover a random prime \ell sampled from \mathrm{Primes}(\lambda) (the set formed by the first 2^\lambda primes).
  3. The prover computes q,r\in \mathbb{Z} such that 2^T=q\ell+r with 0\leq r < \ell, and sends \pi\gets g^q to the verifier.
  4. The verifier computes r\gets 2^T \text{ mod } \ell and outputs accept if \pi \in \mathbb{G} and h=\pi^\ell g^r in \mathbb{G}.

:warning: Important. In the non-interactive version, obtained by applying the Fiat-Shamir heuristic, the random prime MUST BE SAMPLED FROM \mathrm{Primes}(2\lambda) (see the previously mentioned survey for an explanation)
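For concreteness, here is a toy Python sketch of the non-interactive scheme. The modulus, the hash-to-group map, and the challenge-prime derivation are all simplifications chosen for illustration: a real deployment would use a ~2000-bit trusted-setup modulus and a proper sampler for \mathrm{Primes}(2\lambda).

```python
import hashlib

# Toy RSA modulus -- in practice N would be ~2000 bits and come from a
# trusted setup (e.g. run by Obol); these small primes are for illustration.
N = 1000003 * 1000033

def H(x: bytes) -> int:
    """Hash into the group (stand-in for H: X -> G)."""
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % N

def _is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def challenge_prime(g: int, h: int, T: int) -> int:
    """Fiat-Shamir: derive the prime l from (G, g, h, T). We take the first
    prime at or above a hash-derived 20-bit value -- a simplification of
    sampling from Primes(2*lambda)."""
    seed = int.from_bytes(
        hashlib.sha256(f"{N}|{g}|{h}|{T}".encode()).digest(), "big")
    l = max(seed % (1 << 20), 2)
    while not _is_prime(l):
        l += 1
    return l

def prove(x: bytes, T: int):
    g = H(x)
    h = g
    for _ in range(T):              # the sequential part: T squarings
        h = h * h % N
    l = challenge_prime(g, h, T)
    q, _ = divmod(1 << T, l)        # 2^T = q*l + r
    return g, h, pow(g, q, N)       # pi = g^q

def verify(g: int, h: int, pi: int, T: int) -> bool:
    l = challenge_prime(g, h, T)
    r = pow(2, T, l)                # r = 2^T mod l, cheap for the verifier
    return h == pow(pi, l, N) * pow(g, r, N) % N
```

Note the asymmetry the protocol relies on: the prover performs T sequential squarings, while the verifier only does two modular exponentiations with small exponents.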

Creating VDF-based PPs

In this section we discuss how an operator can create a VDF-based proof during a c-epoch.

We fix a number of L2 blocks M such that every M blocks, the prover computing a VDF PP needs to include information about the latest L2 block. The computation starts at a block s, and the prover needs to include the L2 information at blocks s, s+M, s+2M, \dots until it stops the computation. We reward VDFs only with respect to the time interval they cover, independently of which duties were supposed to be performed (as explained later in this post). To ensure sequentiality, we need the computation to be chained (the output of the previous VDF should contribute to the input of the next VDF computation, and so on) and submitted right after the computation ends (to prevent attacks). If the operators include L2 information too often, or if the submission period happens at large time intervals, we need to post too much information. Operators will upload the relevant VDF proof information at the end of the c-slot (see below for the details). The number M should be chosen such that even in the worst-case scenario —which happens when an operator notices it is alone just after the beginning of a c-slot—, the operator does not need to include the L2 state more than k times, for some small integer k (e.g. k\sim 20).

Once the RSA group \mathbb{G} and M are chosen, we should also choose a time difficulty T which approximately corresponds to the time that passes in M blocks. Because the computation of the Wesolowski VDF with difficulty T takes time (1+1/s)T with s processors and space s (see this survey for more details), we could choose the minimum integer T such that 2T (this is the worst case, but it could be tweaked) exceeds the time that passes between M blocks. Of course, some operators will compute these faster, but not faster than in time T (so they can cheat and come back online after M/2 blocks have passed, but that is it), and they will need to wait for the remaining blocks until we hit the next block of the form s+kM to continue the computation.
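As a back-of-the-envelope sketch of this choice of T (all numbers below are illustrative assumptions, not measured values):

```python
# Hypothetical parameters: 2-second L2 blocks, M = 4000 blocks between
# checkpoints, and an assumed upper bound of 1,000,000 modular squarings
# per second for the fastest realistic prover.
l2_block_time = 2                      # seconds per L2 block (assumed)
M = 4000                               # blocks between L2-state inclusions
squarings_per_second = 1_000_000       # assumed prover speed bound

wall_time = M * l2_block_time          # seconds that pass in M blocks
# Minimum T (in squarings) such that even at full speed, 2T squarings
# take strictly longer than M blocks:
T = wall_time * squarings_per_second // 2 + 1   # T = 4_000_000_001 here
```

With these numbers, even the fastest prover needs about T/speed ≈ 4000 seconds per chained segment, i.e. at least half of the M-block interval, matching the “come back online after M/2 blocks at the earliest” observation above.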

With M and T chosen, the prover that wants to show that it was active between L2 blocks s and s+kM will need to provide the L2 block number s, and 2k group elements h_1,\pi_1, \dots, h_k, \pi_k\in\mathbb{G} such that:

  • For all 1\leq i\leq k, h_i=(g_i)^{2^T}, where we set g_1=H(b_1), and for all 2\leq i\leq k, we have g_i=h_{i-1}+H(b_i), where b_i is the L2 block header at slot s+(i-1)\cdot M.
  • For all i, \pi_i is a proof of the correctness of the corresponding equation, computed according to the non-interactive version of Wesolowski’s scheme. To be precise, \pi_i=g_i^{q_i}, where 2^T=q_i\ell_i+r_i (0\leq r_i<\ell_i) and \ell_i has been computed via the Fiat–Shamir heuristic as \mathcal{H}(\mathbb{G}, g_i, h_i, T) for some hash function \mathcal{H}: \{0,1\}^*\to \mathrm{Primes}(2\lambda).

When should the prover stop its computation and submit this information? The prover should continue the computation until the last block of the form s+i\cdot M before the end of the c-slot. In general, if the end of the c-slot happens between blocks s+kM and s+(k+1)M, the prover stops at the VDF proof that included the L2 block header at block s+kM, and submits a commitment Hash_{L2}(s, h_1,\pi_1, \dots, h_k,\pi_k) to the L2 SC in the appropriate time window. At the end of the c-epoch, the operators will submit the vector in full in a blob on L1. For typical RSA moduli (we can allow ourselves N\sim 1000 bits), assuming k\leq 20, this vector is at most 40K bits, or 5kB.

If a VDF computation started at block s, from the length of the vector of proofs one can deduce the integer k, and therefore the last block s+kM where the operator stopped the computation. This must be the last slot of the form s+i\cdot M before the block where the submission period started, otherwise the computation is not considered valid. This prevents some attacks where operators start computing VDFs late.
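The chaining g_1=H(b_1), g_i=h_{i-1}+H(b_i) can be sketched in Python as follows (reusing a toy modulus and difficulty; the headers are placeholders):

```python
import hashlib

N = 1000003 * 1000033   # toy RSA modulus; a real one comes from a trusted setup
T = 500                 # toy difficulty

def H(x: bytes) -> int:
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % N

def vdf_eval(g: int) -> int:
    """h = g^(2^T) via T sequential squarings."""
    h = g
    for _ in range(T):
        h = h * h % N
    return h

def chained_outputs(headers: list) -> list:
    """Compute h_1, ..., h_k with g_1 = H(b_1) and g_i = h_{i-1} + H(b_i).
    The Wesolowski proofs pi_i would be produced alongside each h_i."""
    hs, h_prev = [], 0
    for b in headers:
        g = (h_prev + H(b)) % N      # mix in the L2 header at block s+(i-1)M
        h_prev = vdf_eval(g)
        hs.append(h_prev)
    return hs

# Placeholder headers for the blocks s, s+M, s+2M:
headers = [f"l2-header-{i}".encode() for i in range(3)]
hs = chained_outputs(headers)
```

Because each g_i depends on the previous output h_{i-1}, no segment can be started before the previous one finishes, which is what makes the whole chain sequential.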

VDF-based PPs at the end of a c-slot, and of a c-epoch

The commitment to the VDF proofs is sent to the L2 Smart Contract at the end of the c-slot; this is the hash of the VDF proof data for the c-slot (if any). Then, at the end of the epoch, each operator is responsible for submitting its full VDF proof(s) as blob data (this is outside the race for submitting the blob data corresponding to Optimistic Participation Proofs), with the same time constraints \Delta_{data}. The versioned hash of the KZG commitment to that blob data also needs to be stored in the L1 contract (see our Post 2).

Because the VDF reward (see Points for VDF duties) gives strictly fewer rewards than attestations (and we do not check whether attestations actually took place), a possible strategy for an operator that notices it is alone is to optimistically believe that the cluster will come back online before the end of the c-slot, and not submit the VDF, so that the cluster can claim points for the whole cluster through participation proofs at the end of the c-slot. This is a risky strategy, because the time of the end of the c-slot is a uniform random variable with expected value equal to one day. There is no guarantee that the cluster will come back before the end of the c-slot. In general, operators should compute and submit VDFs if they are alone (especially close to the end of a c-slot). These VDFs are then valid if there is no Optimistic Participation Proof at the end of the c-slot which covers the same time interval.

VDF-based PPs at the end of a c-epoch

Next, we describe how the VDF-based PPs are used at the end of a c-epoch. Recall that, at the end of a c-epoch, a point settlement phase begins. During this phase, in normal circumstances, clusters create Optimistic Participation Proofs and make them publicly available as part of blob data. Additionally, the clusters submit the amount of points each operator has earned. A set of whistleblowers monitors the proofs and the claimed point amounts and triggers on-chain verifications if a problem is detected.

Additionally, each operator may also opt to submit a VDF-based PP as blob data. More precisely, if the operator O_j has computed m VDF proofs VDF_j=(s^1,h_1^1, \pi_1^1, \dots, h_{k_1}^1, \pi_{k_1}^1), \dots, (s^m,h_1^m, \pi_1^m, \dots, h_{k_m}^m, \pi_{k_m}^m) (meaning it has submitted the corresponding hashes of the VDF proofs to the L2 SC at the end of some c-slot) during the c-epoch, then it should:

  • Create an Ethereum blob transaction that includes VDF_j and a BLS signature \sigma_j=\sigma(VDF_j) as blob data (created with its own VDF BLS key; it could even be one of its partial validator keys)
  • The transaction also calls the L1 SC and has it store the versioned hash of VDF_j and the KZG commitment to VDF_j.
  • The L1 SC stores an identifier for the operator submitting the transaction (for compensation/penalization purposes), and reverts if the submitter does not belong to the cluster C.
  • The L1 SC checks that the tx is processed before block number B_{epoch-end} + \Delta_{data}. If it is not, it reverts. The number B_{epoch-end} is the number of the block at which the L1 SC receives the END_OF_EPOCH message from the L2 SC.
  • The L1 SC stores the gas cost of the tx (since this cost will be shared evenly among cluster members). (See here for an explanation of how this can be done)

The gas cost of the transaction is covered by the operator.

Remark: In some scenarios, submitting the VDF proof data as a blob may be too expensive. This is because each block has a small number of blobs attached, and the cost of attaching a blob is the same regardless of whether the blob contains a lot of data or not. Hence, depending on the cost of blob gas and the amount of VDF data, it could be more profitable for the operator to submit the data as calldata instead.

Alternatively, we think it is reasonable to assume there will be a third-party service that aggregates small pieces of data from different users into a big blob. The existence of such a service is only natural, as it allows blob space to be used much more efficiently, and the provider of such a service can potentially earn good money by charging a fee for the aggregation of data into full blobs. If such a service exists, then the operator submitting VDF proofs should use it.

Converting VDF proofs into blob data format

Here we detail how VDF PPs are organized into blob data in a way that is later accessible in a verifiable manner. The approach is very similar to the one used for Optimistic Participation Proofs, see Post 2.

A VDF proof for one operator O_j is a collection, over c-slots c, of vectors VDF_j(c)=(s^1,h_1^1, \pi_1^1, \dots, h_{k_1}^1, \pi_{k_1}^1), \dots, (s^m,h_1^m, \pi_1^m, \dots, h_{k_m}^m, \pi_{k_m}^m), together with a BLS signature \sigma_j(VDF_j) on VDF_j, the concatenation over all c-slots in the c-epoch of the VDF_j(c). Usually, m=1 (i.e. the operator was online for the whole duration of the computation), but it can happen that the operator goes offline and then back online and restarts a VDF computation. In any case, we expect m to be a very small integer.

Then we proceed similarly as in Post 2, that is:

  • For each c-slot c, let

    k_c=\left\lceil\frac{|VDF_j(c)|}{253}\right\rceil

    where |VDF_j(c)| denotes the bit-length of VDF_j(c).

  • Then we store VDF_j(c) in the blob entries w_1,\ldots, w_{k_c}. In each w_i we use at most 253 bits, and keep the first bit as 0.

  • For the word w_{k_c+1}, we store the string 1\overset{254}{\ldots}1 (254 copies of 1), to indicate that VDF_j(c) ends right before this entry.

  • We proceed similarly with the rest of the VDF proofs.

  • Once this is done, we store 1\overset{254}{\ldots}1 in the next word, and we then store the operator’s signature \sigma_j(VDF_j).

Overall, the blob of data looks as follows:

( \widetilde{VDF_j(s_1)},1\overset{254}{\ldots}1, \ldots, \widetilde{VDF_j(s_m)}, 1\overset{254}{\ldots}1,\widetilde{\sigma_j(VDF_j)} )

where \widetilde{VDF_j(s)} denotes the data VDF_j(s) split into k_s consecutive 254-bit strings, each starting with the prefix 0. Similarly, \widetilde{\sigma_j(VDF_j)} denotes \sigma_j(VDF_j) split into 254-bit chunks (in this case we do not reserve any special prefix).
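A sketch of this packing, with byte strings standing in for the serialized proofs and signature (for simplicity we reuse the same 253-bit chunking for the signature, even though the format allows full 254-bit chunks there):

```python
SEPARATOR = (1 << 254) - 1   # the word 11...1 (254 ones)

def pack_proof(data: bytes) -> list:
    """Split one serialized proof into words of at most 253 payload bits;
    every word keeps its top bit 0, so it can never equal SEPARATOR."""
    bits = "".join(f"{b:08b}" for b in data)
    return [int(bits[i:i + 253], 2) for i in range(0, len(bits), 253)]

def encode_blob(proofs: list, signature: bytes) -> list:
    """Lay out (VDF_1, SEP, ..., VDF_m, SEP, signature) as described above."""
    words = []
    for p in proofs:
        words.extend(pack_proof(p))
        words.append(SEPARATOR)
    words.extend(pack_proof(signature))
    return words

blob = encode_blob([b"proof-one", b"proof-two"], b"bls-signature")
```

Keeping the top bit of every payload word at 0 is what makes the all-ones separator unambiguous when scanning the blob back.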

Whistleblowers and VDF-based PPs

Here the whistleblower claims that some VDF proof (s, h_1, \pi_1, \dots, h_k,\pi_k) submitted by operator O_j is incorrect. The whistleblower has all the relevant information: it knows M and T, it can deduce from the vector (s, h_1, \pi_1, \dots, h_k, \pi_k) the initial block/slot s as well as k, it can recompute any of the g_i’s by looking at the relevant chain state, and it can check the correctness of the computation against \pi_i.

More precisely, the whistleblower can complain about the following properties:

  1. For some i, h_i\neq \pi_i^{\ell_i}\cdot (H(b_i)+h_{i-1})^{r_i} (we set h_0=0), where b_i is the L2 block header (resp. the beacon chain state) of block number (resp. slot) s+(i-1)\cdot M, \ell_i=\mathcal{H}(\mathbb{G}, H(b_i)+h_{i-1}, h_i, T) and q_i, 0\leq r_i< \ell_i are such that 2^T=q_i\ell_i+r_i.
  2. The slot s+kM is not the last slot of the form s+i\cdot M before the end of the c-slot.
  3. The commitment submitted to L2 and the data submitted are inconsistent.

For all three claims, the relevant blob data is made available to the L2 SC through the auxiliary procedure. The whistleblower performs the auxiliary procedure. The L2 SC verifies the signature on the VDF proof. Then:

For 1., the verification procedure should be as follows. The contract should compute (we assume that the L2 SC has access to the execution environment) \ell_i = \mathcal{H}(\mathbb{G},H(b_i)+h_{i−1},h_i,T), then it should compute the Euclidean division 2^T = q_i\ell_i + r_i, and finally it should check whether h_i \overset{?}{=} \pi_i^{\ell_i} \cdot (H(b_i) + h_{i−1})^{r_i}.

For 2., the contract has the information of the slot s' where the c-slot ended; it can compute s + (k + 1)M and check whether it is bigger than s'.

For 3., the contract computes Hash_{L2} of the VDF proofs and compares with the commitment.

Additionally, the whistleblower can also complain that the blob data commitments stored on L1 are inconsistent with the information provided. This is handled in Issue 7.

Points-to-rewards

Recall that, at the end of a c-epoch, a number of points are assigned to each cluster operator. Later on, the cluster accumulated rewards are distributed among the operators unevenly, depending on the amount of points earned by each operator. In this section we explain how points are assigned and how the reward distribution is made. Our design choices are made to fulfill each of the following goals:

  • It should be irrational for operators to exclude other operators from the participation proofs.
  • An operator that has a very small amount of faults should receive a negligible penalty.
  • An operator with a significant amount of faults should receive almost no rewards.

These goals are in accordance with the general objective we set out to solve in Post 1.

We proceed to describe our solution. First we treat the case where there were no VDF proofs (we will see that VDF proofs do not modify the system in a significant way). In this situation, operators P_1, \dots, P_n claim, through optimistic participation proofs, that over some c-epoch there were:

  • d_1 attestation duties, worth k_1 points each.
  • d_2 block proposal duties, worth k_2 points each.
  • d_3 beacon aggregation duties, worth k_3 each.
  • d_4 sync aggregation duties, worth k_4 each.
  • d_5 sync committee duties, worth k_5 points each.

The total amount of points that the cluster should receive with perfect participation is P_{total}=n\cdot\sum_{i=1}^5d_i\cdot k_i. The cluster claims that each operator O_i obtained p_i points. The claimed points p_i have to be coherent with the participation lists signed by the operators. We provide the details of how p_i is to be computed later in this section, when we discuss how to incorporate VDF proofs in our point-reward system.

A key idea is to reduce the total reward pool by a factor which depends on the participation of the operators in the cluster. This is to prevent rational operators from excluding other operators from the Optimistic Participation Proofs. In detail, we compute:

\begin{align*} \rho&=\frac{1}{P_{total}}\sum_{i=1}^n{p_i} \end{align*}

and reduce the amount of rewards to be distributed by the factor \rho, i.e. we distribute R rewards where R=\rho\cdot R_{total}, and R_{total} is the total amount of CL rewards obtained by the cluster in the prior c-epoch.
This does not include leftover rewards from eventual missed duties from previous c-epochs.

Next, we compute, for each operator O_i, its score S(p_i) as the following logistic function:

\forall i,\,\,S(p_i)=(1+\exp(-\alpha({p_i}-m)))^{-1}

where \alpha>0 and m are globally fixed parameters. In the figures below we depict the function S(p_i) and how different choices of \alpha and m affect the function. Intuitively, \alpha is a shape parameter that controls how sharp the drop is around m; and m represents the amount of points necessary to obtain a score of 1/2, which we should express as a percentage of an estimate of the number of points that each cluster operator should get during a c-epoch.


This graph shows the effect of changing the shape parameter \alpha. Here the total number of points is 225 (the number of validator attestation duties in a day) and m=112.5.


This graph shows the effect of modifying m instead.

We finally define, for all i, the weights w_i=S(p_i)/n, and then assign rewards R_i to operator O_i, where

R_i=w_i\cdot R.

It is clear that \rho\cdot w_i< 1/n for all i (equality is never reached because of the exponential, but we can get very close). Furthermore, it is clear that for all i, the function (\rho\cdot w_i)(p_1, \dots,p_n) is strictly increasing in p_j for every j\neq i, and this is also easy to see for the p_i variable. This means that, independently of the number of duties, their respective value, and whether these duties correspond to real on-chain duties, operators have an incentive to include as many operators as possible in every duty. Hence, our first design goal is met. Further, the other two design goals are met because of the logistic shape (i.e. “S” shape) of the score function.

Overall, the points-to-rewards system invites operators to uphold a certain standard of individual (controlled by the m parameter) and collective (controlled by the \rho parameter) participation. It leads to more interesting and fair dynamics regarding reward distribution. In the two scenarios in the figure below, if we had a strictly proportional reward distribution each node would get the same amount of rewards (1/4 of the total) in each case. However, it is clear that in the second scenario one operator should get less rewards, as is the case with our reward scheme.


Two scenarios that showcase the kind of dynamic that can arise with our reward distribution scheme.
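A compact sketch of the whole points-to-rewards pipeline, with the reward pool normalized to 1 and the \alpha, m values from the first figure (\alpha=0.1 is our choice for the example):

```python
import math

def reward_split(points, p_total, r_total, alpha, m):
    """Compute rho, the logistic scores S(p_i), and the rewards R_i."""
    rho = sum(points) / p_total                 # collective participation
    R = rho * r_total                           # reduced reward pool
    n = len(points)
    scores = [1 / (1 + math.exp(-alpha * (p - m))) for p in points]
    return [R * s / n for s in scores]          # R_i = w_i * R

# 4 operators, 225 attestation duties worth 1 point each, m = 112.5.
# Three operators participate fully; one earns only 120 points.
split = reward_split([225, 225, 225, 120],
                     p_total=4 * 225, r_total=1.0, alpha=0.1, m=112.5)
# The full participants each receive just under rho/4 of the pool,
# while the lagging operator receives noticeably less.
```

Here \rho=795/900\approx 0.88, so even the perfect participants receive slightly less than 1/4 of the full pool, illustrating the collective-participation pressure discussed above.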

Taking a power of \rho

One consequence of this choice of reward scheme is that it is difficult to obtain a 1/n fraction of the rewards even if one individually participates perfectly. If one participates in all possible duties, one should have a score very close to 1. If we look at the case where 2n/3 operators participate perfectly, and n/3 participate just enough —meaning m times, where m is chosen as a 70\% fraction of the total number of duties (for simplicity assume all duties are worth 1 point)—, we get \rho=0.9. The operators that participate perfectly get a 0.9/n fraction of the rewards.

To mitigate this effect, we can take a power \rho^{r} for 0<r<1, and compute R=\rho^r R_{total}. We leave this discussion for later.

How to compute the points earned by each operator

Every c-epoch, the operators claim the amount of points for each operator in the cluster. This is also done optimistically in a race, where a designated operator claims that each operator O_j (including itself) should get p_j points for its participation in the last c-epoch.

The operators submit VDF proofs at the end of the c-slot. These proofs are then superseded, without any further condition, by Optimistic Participation Proofs submitted at the end of the same c-slot. The point computation formula is cumbersome to write down, but conceptually it is very simple. For each operator and each c-slot, if there was no Optimistic Participation Proof for that c-slot, then all points come from VDF proofs. If there was an Optimistic Participation Proof, it supersedes all VDF proofs, and the operator receives points depending on the number of duties of each type that were completed, provided it signed the Optimistic Participation Proof. Formally, the point computation looks like this:

\tilde p_j=\sum_{c\in\{c-slots\}}\left(\chi_{VDF}(c)\cdot pts_{VDF}(\sum_k\mathsf{len}(VDF_j^k))+(1- \chi_{VDF}(c))\cdot\chi_{sgn}(j, L_c)\cdot \left(p_1(N_v\cdot N- L_c.attestations) + \sum_{i=2}^4 \#L_c.slots.i\cdot p_{i}\right)\right)

where:

  • \chi_{VDF}(c) is 1 if there was at least one VDF submission during c-slot c and no participation proof for c-slot c; and 0 if there was a participation proof submission for c-slot c.
  • pts_{VDF}(\sum_k\mathsf{len}(VDF_j^k)) is computed as specified in the next section, if operator O_j submitted correct VDF participation proofs VDF_j^1, \dots during c-slot c covering \sum_k\mathsf{len}(VDF_j^k) L2 blocks.
  • \chi_{sgn}(j, L_c) is 1 if O_j signed the optimistic participation proof L_c, and it is 0 otherwise.
  • \#L_c.slots.i is the length of the vector L_c.slots.i, and each task of type i is worth p_{i} points.

Points for VDF duties

Points for VDF computations do not modify the reward reduction coefficient \rho. Instead, VDF proofs simply increase the amount of points that each operator that computed a VDF gets. Again, these extra points are not involved in the computation of \rho, they only influence the score S_j of each operator O_j that computed valid VDF proofs for a c-slot in which there was no optimistic participation proof.

To prevent some attacks, and to clearly establish that VDF computations are less rewarding than following the consensus protocol (fast and slow), we set the points for a VDF computation that spans b_{L2} L2 blocks to be:

0.95\times \#\text{epochs}\times\#\text{validators}\times \min_i(k_i)

where, recall, the k_i 's are the task weights (defined at the beginning of this “Points-to-Rewards” section); \#\text{validators} is the number of validators owned by the operator; and the number \#\text{epochs} of epochs is computed as:

\left\lceil\frac{t_{L2}}{t_e}\right\rceil

where t_{L2} is the time in seconds that passed during the b_{L2} L2 blocks (if the L2 block time is variable, we should use an upper bound; typically, Starknet is moving towards a constant block time, and Arbitrum’s block time is consistently around 0.26±0.03 seconds); and t_e is the length of an Ethereum epoch (384 seconds).
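The formula can be sketched directly; the 2-second block time and single validator in the example are assumptions for illustration:

```python
import math

ETH_EPOCH_SECONDS = 384

def vdf_points(b_l2: int, l2_block_time: float,
               n_validators: int, min_task_weight: float) -> float:
    """0.95 * #epochs * #validators * min_i(k_i), where
    #epochs = ceil(t_L2 / 384) and t_L2 = b_l2 * block time
    (use an upper bound on the block time if it is variable)."""
    t_l2 = b_l2 * l2_block_time
    n_epochs = math.ceil(t_l2 / ETH_EPOCH_SECONDS)
    return 0.95 * n_epochs * n_validators * min_task_weight

# A VDF covering a full day of 2-second L2 blocks, for an operator with
# one validator and minimum task weight 1: 0.95 * 225 = 213.75 points,
# strictly below the 225 points of perfect attestation participation.
pts = vdf_points(b_l2=43_200, l2_block_time=2.0,
                 n_validators=1, min_task_weight=1.0)
```

The 0.95 factor is what keeps VDF participation strictly less rewarding than actually following the consensus protocol.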

Rewards and penalties after successful whistleblower complaints

As already mentioned, the Participation Proofs (PP) and the points claimed with them are verified only when a whistleblower raises an alarm (by calling the L2 Smart Contract and depositing a bond b).

If the whistleblower’s alarm about a cluster’s PP or claimed points is proved to be correct then:

  1. The cluster rewards and the points are adapted taking into account the data submitted by the whistleblower. Thus, when a whistleblower spots a malicious action and raises an alarm, then the rewards R that will be allocated to the cluster operators and principal providers will not be higher than the rewards that would be shared if the cluster had submitted the correct PPs and points. In more detail:
    • Issue 1: the PP that corresponds to the incorrect blob data reported by the whistleblower is not taken into account in the calculation of the points.
    • Issues 2.a, 2.b, 2.d: the PP that is incorrect is not taken into account in the calculation of the points.
    • Issues 2.e: the fake duties reported by the whistleblower will not be taken into account in the calculation of the points.
    • Issue 3: the incorrect VDF is not taken into account in the calculation of the points.
    • Issue 4: the points are updated based on the whistleblower’s claim.
    • Issue 6., 7. : the PP that corresponds to the incorrect threshold signature or the incorrect KZG commitment is not taken into account in the calculation of the points.
  2. A fraction f_w of the cluster rewards R is removed from the cluster and given to the whistleblower. Recall that R denotes the cluster rewards after taking into account the data submitted by the whistleblower.
  3. A fraction f_{obol} of R is removed from the cluster and kept by the network (in our case, Obol). Note that we do not give all the rewards to the whistleblower, because we cannot exclude a scenario where the whistleblower and the cluster operators are the same entity (in optimistic rollups a similar approach is followed, where the challenger does not receive the whole bond that is removed from the asserter; some amount is burned in order to prevent collusion between the asserter and the challenger).
  4. The bond b is returned to the whistleblower.
  5. The whistleblower is reimbursed for the gas that it spent to raise the alarm.

If the whistleblower’s claim is proved to be false, then it loses its bond and it does not get reimbursed for the gas it needed to raise the alarm.

In every case, we assume that the whistleblower incurs an extra cost c to report the claim that is not reimbursed (e.g. the cost to monitor the PPs and spot errors).

In the following paragraphs we describe sufficient conditions on f_{obol}, f_w to disincentivize the cluster operators from deviating from the honest behaviour (described in the previous sections) when they construct and submit their PPs and their points. By this we mean, among other things, that at an equilibrium the cluster operators construct and submit their PPs and their points honestly, and the whistleblower raises an alarm only if the cluster operators deviate from the honest behaviour.

Sufficient Conditions

We need to select f_{obol}, f_w such that:

  1. f_{obol}>0,
  2. f_w\cdot R-c>0, where c is the cost of the whistleblower that is not reimbursed.
  3. b>0, where b is the bond.

Analysis

In order to perform a game theoretic analysis we will consider a game of two rounds.

In the first round, all the cluster operators play simultaneously and choose either to play honestly or to perform at least one deviation described in the issues in the previous section.

In the second round, the whistleblower learns the strategies of the cluster operators in the first round, and it can select either to post a bond b, incur cost c and raise an alarm, or to do nothing. Following the model described in Section 7 here, we can consider its strategy as an algorithm that takes as input the outcome of the first round and outputs either \mathsf{report} or \mathsf{notReport}.

We assume that every coalition (i.e. every set of cluster operators and/or whistleblower that collude) tries to maximize its joint utility which is defined as the joint profit of all the members in the coalition.

We argue that if the above sufficient conditions hold, then we have:

  1. The following strategy profile \vec S (vector that in position i has the strategy of player i) is a Subgame Perfect Equilibrium (even if we examine coalitions):

    the cluster operators construct and submit their PPs and their points as described in the previous sections (honestly), and the whistleblower raises an alarm only if the cluster operators deviate from the honest behaviour (in terms of correct construction and submission of PPs and points).

  2. A whistleblower who spots a malicious behaviour in the first round will have a positive profit if it raises an alarm.

  3. A malicious whistleblower (who does not care about maximizing its profit) cannot spam the system by submitting false alarms without a cost.

Next we explain why these hold:

Property 1. holds because:

  • A whistleblower cannot increase its utility by raising an alarm when the PPs and the points are constructed and submitted honestly (i.e. when the cluster operators chose a strategy as described in \vec S). This is because if the alarm is proved to be false, then it does not earn any reward, it loses its bond and it does not get reimbursed for the gas it incurred to call the L2 Smart Contract.
  • Any coalition that consists of the cluster operators and potentially the whistleblower cannot increase its utility by deviating because: (i) when a whistleblower spots a malicious action and raises an alarm then the rewards R that will be allocated to the cluster operators and principal providers will not be higher than the rewards that would be shared if the cluster had submitted correct PPs and points. (ii) f_{obol}>0. Note that even if the whistleblower colludes with the cluster operators and gives them back a fraction of f_w\cdot R that it receives, there is still an amount equal to f_{obol} \cdot R that the cluster will lose.

Property 2. holds because f_w\cdot R-c>0.

Finally, Property 3. holds because b>0, and thus when a whistleblower raises a false alarm it loses a nonzero bond (apart from the gas that it has spent).
