
CIP-0381? | Plutus support for pairings over curve Bls12-381 #220

Merged: 12 commits, Jul 15, 2022

Conversation

@iquerejeta (Contributor):

This CIP proposes to include Plutus support for pairings over curve BLS12-381.

@L-as (Contributor) commented Feb 15, 2022:

I don't think this approach is viable. Even if you add support for pairings, executing the algorithms in question in the CEK machine would be horrifically slow. It is a much better idea to add support for a single universal proof system, and I highly recommend Halo 2. The main blocker is the API for recursive proofs not being done yet, but that should change soon AFAICT.

@iquerejeta (Contributor, Author):

Thanks for the comment. Indeed, it was something that we considered. However, what we want to support with pairings is not only proof verification. Proof verification is not even first on the list of what we can achieve with pairing support. Our first goal is to be able to verify BLS signatures (or similar), whose verification consists of only two pairings. We've thought about including functions such as AggregateSignatures, VerifySignature, etc. The problem is that the use cases we have in the backlog differ in their verification mechanisms (see ATMS, Mithril or anonymous credentials), and require different verifying logic.
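
For concreteness, a minimal sketch of such a two-pairing BLS verification (minimal-pubkey variant: public keys in G1, signatures in G2), written against the builtin names that emerge later in this thread; g1Generator is an assumed constant, not something proposed in the CIP:

```haskell
-- Sketch only: exactly two Miller loops plus one final exponentiation.
blsVerify :: Bls12381G1Element   -- public key, pk = sk * g1
          -> ByteString          -- message
          -> Bls12381G2Element   -- signature, sig = sk * H(msg)
          -> Bool
blsVerify pk msg sig =
  BLS12_381_final_verify
    (BLS12_381_ppairing_ml g1Generator sig)          -- e(g1, sig)
    (BLS12_381_ppairing_ml pk (BLS12_381_H2G2 msg))  -- e(pk, H(msg))
```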

I think it is worth considering adding individual functions for each of these use cases, but from my perspective it might be limiting in the long run. Every time we want to perform a verification slightly different from an existing one, we would need to wait until the next hard fork. Providing pairing support will enable a wide range of use cases.

Regarding zero-knowledge proofs, I agree that verification might become excessively costly, and we should consider adding verification of a universal proof system. But this is not trivial either. If we go ahead with your suggestion, Halo 2 (with IPA commitments), this would result in proofs that are logarithmic in size with respect to the computation we want to prove, and a verifier that is linear in that computation. I'm not sure this level of complexity can be supported within the computation and space limits of Plutus. Even if we go ahead with Plonk with KZG10 (which is more concise and has a cheaper verifier), there are still many details to nail down. I was speaking with @davidnevadoc (core developer of the plonk library that IOHK is working on), and it turns out that verifiers compiled with different custom gates are not cross-compatible (at least for now), so we would find ourselves in a similar scenario: if we want to include custom gates, we would need new verifiers each time.

I don't think this is an either-or choice. We can expose the low-level functions for some use cases (e.g. signatures), and work on support for a universal proof system depending on our short-term goals (do we want recursion? do we want efficient batch verification? a trusted setup? etc.). I'm planning to include a back-of-the-envelope calculation of the number of operations listed in the CIP required to perform some of the functionalities we are seeking. This might help with this discussion.

@iquerejeta (Contributor, Author):

I've added estimates of the number of operations for each of the applications listed in the CIP. Even for Plonk verification I get estimates that could fit within the Plutus execution budget. In particular, the significant operations in a verification are:

  • 6 hash computations
  • 1 vanishing polynomial (1 scalar exponentiation)
  • Quotient polynomial evaluation (1 scalar inverse)
  • First part of batch poly commitment (7 scalar mults and 6 additions in G1)
  • Fully batched poly commitment (7 scalar mults and 8 additions over G1)
  • Group encoded batch verification (1 scalar mult over G1)
  • Batch-validate all evaluations (3 scalar mults and 4 additions over G1, plus two pairing evaluations + equality check; see the sketch after this list)
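
As referenced in the last bullet, the closing check (two pairing evaluations plus an equality check) maps directly onto the partial-pairing and final-verify builtins proposed later in this thread. A sketch with assumed names, where lhs and rhs stand for the G1 points accumulated in the preceding steps, and g2, xG2 for the verifier's G2 points:

```haskell
-- Sketch only: e(lhs, g2) =? e(rhs, xG2), costed as two Miller loops
-- plus a single final exponentiation.
batchValidate :: Bls12381G1Element -> Bls12381G2Element
              -> Bls12381G1Element -> Bls12381G2Element -> Bool
batchValidate lhs g2 rhs xG2 =
  BLS12_381_final_verify
    (BLS12_381_ppairing_ml lhs g2)
    (BLS12_381_ppairing_ml rhs xG2)
```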

@L-as, could you elaborate on why the CIP is not a viable approach in general for the other primitives, please? And could you also elaborate on why you think that Halo 2 (with IPA) is the best choice for a universal proof system (over Plonk, for instance)? I believe that verification of relatively complex circuits with Halo 2 will most probably exceed the Plutus execution budget.

Thanks!

* Check that $n\leq t - k$
* $n\cdot \log_2(t)$ hash computations
* Equality check of two hashes
* $n$ $\mathbb{G}_1$ additions and $1$ $\mathbb{G}_1$ negation [WE NEED TO ADD THIS]
Review comment (Contributor):

good thing you checked!

## Mithril (using trivial proving system)
Let $k$ be the number of submitted signatures (as above, $k$ is a security parameter determining the number of required signatures). However, in Mithril $k << N$ where $N$ is the total number of eligible signers. Mithril verification goes as follows:
* Check that $k$ is large enough
* $t$ $\mathbb{G}_1$ additions
Review comment (Contributor):

k?

Review comment (Contributor):

and below

@michaelpj (Contributor):

Could you also include the estimates you had for how long the various primitives take? And maybe then add that up for the examples?

It looks like as long as we're not talking about milliseconds per operation we're probably okay.

@iquerejeta (Contributor, Author):

Ok, yes. I will add that as well. It's true that the numbers with respect to ed25519 might not be very helpful.


An [EIP](https://eips.ethereum.org/EIPS/eip-2537) for precompiles of curve BLS12-381 already exists, but has been stagnant for a while. Nonetheless, Zcash, MatterLabs and Consensys support BLS12-381 curve, so it is certainly widely used in the space.

Further reading regarding curve BLS12-381 can be found [here](https://hackmd.io/@benjaminion/bls12-381) and in the references cited therein. For the level of detail required in this document, it is sufficient to understand that one can define a pairing over BLS12-381, where a pairing is a map $e:G_{1}\times G_{2}\rightarrow G_{T}$ which satisfies the following properties:
Review comment (Contributor):

Github markdown doesn't support latex maths, so these look pretty bad throughout. Maybe we're okay just asking people to render the latex in their heads, but it might be nice to try and use ascii.

Review comment (Contributor):

You can use inline SVG IIRC.

@L-as (Contributor) commented Feb 16, 2022:

And also, could you elaborate on why you think that Halo 2 (with IPA) is the best choice for a universal proof system (over Plonk, for instance)? I believe that verification of relatively complex circuits with Halo2 will most probably exceed plutus execution budget.

If we are to support Halo 2 directly in the interpreter, it would have a budget defined by the interpreter. It wouldn't be extremely fast, but we are free to choose whatever cost model we want when putting it into the interpreter. In practice, it would probably be half the CPU and memory budget + 1 or something similar.

As for why Halo 2, there isn't really any competitor. The four important properties are recursive proofs, trustlessness, open source license, and quality code base. There isn't any alternative to Halo 2 that accomplishes all of this.

@iquerejeta (Contributor, Author):

As for why Halo 2, there isn't really any competitor.

Plonk is being used in several applications, so I would consider it as a competitor.

The four important properties are recursive proofs, trustlessness, open source license, and quality code base.

Agreed; however, the Plonk trusted setup is universal (meaning it has to be performed only once) and updatable, which, IMO, is enough for a lot of scenarios (maybe not all). What concerns me about Halo 2 is the time it takes to verify a proof. It is my understanding that the Plutus execution budget is 5-10 ms. I don't have recent benchmarks at hand, but what was published in the Halo paper indicates that verifying a circuit of 2^11 constraints (quite limiting) takes 0.1 seconds, and growth is linear; verifying 2^17 constraints takes close to 1 second. I know these numbers are definitely better with Halo 2, but I couldn't find any benchmarks. Maybe you have some numbers you can share to support the statement that it will be around half the CPU budget? For reference, using Plonk, times are 4.13 ms and 5.15 ms respectively.

@michaelpj (Contributor):

It wouldn't be extremely fast, but we are free to choose whatever cost model we want when putting it into the interpreter. In practice, it would probably be half the CPU and memory budget + 1 or something similar.

We certainly can't just pick whatever cost model we like. It's important that it's at least somewhat accurate, because ultimately those limits are what stop us from blowing out our block diffusion times. If Halo 2 takes 2ms to verify something, it's going to have to be costed like that.

@L-as (Contributor) commented Mar 10, 2022:

@iquerejeta I think I agree now that we should just add support for the curve to UPLC now instead of adding support for the verification directly. However, do the Pasta curves also work for your use case?

@iquerejeta (Contributor, Author):

The support that we will be including in this CIP is only for the BLS12-381 curve, not the Pasta curves. However, other curves could be considered if we see that they provide attractive properties. I'm guessing that the main attraction of the Pasta curves is their recursive nature, right? If that is the case, considering plonky2 might be worth it: it combines ZKPs using FRI commitments (for speed in producing the proof) and batches them using a Plonk proof with KZG10 commitments (for succinctness in the proof size). I'm no expert in the field, but I believe this would provide attractive performance when batching several proofs, and would be compatible with the introduction of the BLS12-381 curve. I would be more than happy to have a more in-depth discussion on this and the general topic of ZKPs and rollups.

@L-as (Contributor) commented Mar 11, 2022 via email.

@KtorZ KtorZ changed the title Plutus support for pairings over curve Bls12-381 CIP-43? | Plutus support for pairings over curve Bls12-381 Mar 17, 2022
@davidnevadoc left a comment:

I have added some comments in the Plonk section with suggestions to make it a bit more precise.
In general, I think this is a very good and useful proposal! :)

limits. However, it is hard to make individual functions for each possible statement accepted.

## Plonk verification
Plonk verification is quite complex, and might differ depending on the number of custom gates, or specific wire

Review comment:

The specific wire connections do not affect the verifier algorithm complexity. The connections are enforced as copy constraints with a permutation argument whose complexity is the same no matter the permutation we are using.

## Plonk verification
Plonk verification is quite complex, and might differ depending on the number of custom gates, or specific wire
connections. In this section we expose the verifier complexity as described
[here](https://hackmd.io/Vr0DUYPCR0iWADpO8kpkAA?view#Verifier-Algorithm). We count every challenge computation

Review comment:

Better to use a reference to the paper, as it is more up to date.
In particular, the current version has a new way of computing the linearisation polynomial that saves some operations for the verifier.


* 6 hash computations
* 1 vanishing polynomial (1 scalar exponantiation)
* Quotient polynomial evaluation (1 scalar inverse)

Review comment:

No need to compute the quotient evaluation in the newest version.

* 6 hash computations
* 1 vanishing polynomial (1 scalar exponantiation)
* Quotient polynomial evaluation (1 scalar inverse)
* First part of batch poly commitment (7 scalar mults and 6 additions in G_1)

Review comment:

Now: 10 scalar mults and 9 additions in G_1

* 1 vanishing polynomial (1 scalar exponantiation)
* Quotient polynomial evaluation (1 scalar inverse)
* First part of batch poly commitment (7 scalar mults and 6 additions in G_1)
* Fully batched poly commitment (7 scalar mults and 8 additions over G_1)
@davidnevadoc commented Apr 9, 2022:

I've counted 9 scalar mults in the previous version.
Now: 6 scalar mults and 5 additions in G_1

@iquerejeta (Contributor, Author):

I count 9 and 9 in the first part of the poly com, and 5 and 5 in the second part. Thanks for the suggestion of referencing the paper, will do that 👍

* Batch-validate all evaluations (3 scalar mults and 4 additions over G_1, plus two pairing evaluations + equality check)


Implementations may differ, and adding custom gates or additional wires will affect these estimates. Pairing

Review comment:

This is a good estimate as a reference, but we have to expect higher costs, because this is the most basic version of Plonk and real-life implementations will likely be more complex.
For example, adding lookup gates introduces tables (with their corresponding commitments), an extra permutation argument (increasing the number of scalar multiplications and additions) and the need for new challenges (more hashes).

Review comment:

Blinder costs should also be considered, as well as the custom gates that any implementation uses.

From my perspective, Plonk is "completely agnostic to this PR", in the sense that any project will be free to implement its own Plonk verifier contract/lib with its own custom-gate design picks for its use cases.
What I think would be really valuable is to first make sure that this is indeed doable for vanilla Plonk or turbo Plonk (excluding custom gates), as we still don't have any idea of how much this execution will cost, or even whether it is possible to execute all of that inside a contract.

@iquerejeta (Contributor, Author):

Thanks. Indeed, as you say, this CIP is agnostic to what the pairings are used for. This list served as an estimate for some of the use cases that could leverage pairings. On your second point, I completely agree as well. Once we get the pairings running on a test version of Plutus (and we have the functions costed), we'll see how it performs when verifying a vanilla Plonk proof, and how much of the execution budget we spend.

@michaelpj (Contributor) left a comment:

I think this is starting to look viable and we should maybe move to a prototype implementation.

* `BLS12_381_G1_add :: Bls12381G1Element -> Bls12381G1Element -> Bls12381G1Element`
* `BLS12_381_G1_mult :: Integer -> Bls12381G1Element -> Bls12381G1Element`
* `BLS12_381_G1_neg :: Bls12381G1Element -> Bls12381G1Element`
* `BLS12_381_H2G1 :: ByteArray -> Bls12381G1Element`
Review comment (Contributor):

I assume this is the hashing function. It should say in the name.

* `Bls12381GTElement`

We need to support the binary operation of G_1 and G_2 (which are additive groups), as well as the binary operation
over G_T (which is represented as a multiplicative group). We also want to enable hashing to G_1 and G_2.
Review comment (Contributor):

What is the hashing algorithm?

* Equality functions:
* `eq :: Bls12381G1Element -> Bls12381G1Element -> bool`
* `eq :: Bls12381G2Element -> Bls12381G2Element -> bool`
* `eq :: Bls12381GTElement -> Bls12381GTElement -> bool -- This perform the final exponantiation`
Review comment (Contributor):

Not sure if the trailing comment belongs here? If it does I don't know what it means.

Review comment (Contributor):

I found it below - reference the section!

@iquerejeta (Contributor, Author):

Thanks. As suggested by Tobias, we'll change the name of this function, as performing the 'final exponantiation' in an eq was a bit off.

among the different types.

* Deserialisation:
* `deserializeG1 :: ByteString -> Either BLSTError (Bls12381G1Element)`
Review comment (Contributor):

We don't have Either, so I assume the implication is that these will just fail with an error.

Review comment (Contributor):

Or is it important to be able to handle the error?

Review comment (Contributor):

If it is, you could Scott-encode the Either, i.e. deserializeG1 :: ByteString -> (BLSTError -> a) -> (Bls12381G1Element -> a) -> a
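
To illustrate the suggestion, a Haskell rendering of the Scott-encoded Either (illustrative only; as the follow-up comment notes, the rank-2 type is exactly what makes this problematic as a builtin signature):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Scott-encoded Either is represented by how it eliminates:
-- the consumer supplies one continuation per constructor.
type EitherS e a = forall r. (e -> r) -> (a -> r) -> r

leftS :: e -> EitherS e a
leftS e onErr _ = onErr e

rightS :: a -> EitherS e a
rightS a _ onOk = onOk a

-- Example consumer: recover from the error case.
fromEitherS :: EitherS e a -> (e -> a) -> a
fromEitherS res recover = res recover id
```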

@iquerejeta (Contributor, Author):

I think I will go with the latter. An error in deserialization should be considered misbehaviour (or an invalid signature), so we might want Plutus scripts to be able to handle such cases (e.g. disqualifying parties).

Review comment (Contributor):

Ignore me, you can't lambda-encode it, that's a higher-order function. This is actually a bit tricky.

@iquerejeta (Contributor, Author):

But then, if we want to handle the error, what would be your recommendation?

performed with the Integral type. In particular, we need the following functions:

* Group operations:
* `BLS12_381_G1_add :: Bls12381G1Element -> Bls12381G1Element -> Bls12381G1Element`
Review comment (Contributor):

You write BLS12-381 in a lot of different ways in this proposal:

  • Bls12-381
  • BLS12_381
  • Bls12381

I don't know what the normal conventions are, but it would be good to pick one and stick to it.

and we verify that the result is the identity element. In other words, the equality check function, `eq`, first
computes an inverse, then a multiplication, and finally a final exponentiation.

For this reason it is important to limit what can be done with GTElements, as the pairing really is not the full
Review comment (Contributor):

Does this limit us if we wanted to add more operations on such elements in future?

@iquerejeta (Contributor, Author):

All current use cases in the roadmap use pairings for checking relations between points (i.e. to check equality of two distinct GT elements, or a linear relation amongst them), rather than using the GT element for anything else. However, if that changes in the future, we would need to add a function to fully evaluate the pairing. I believe the efficiency benefits listed above are worth it.

Review comment (Contributor):

So essentially, you'd add BLS12_381_full_pairing? Shouldn't BLS12_381_pairing just have a different name then?

@iquerejeta (Contributor, Author):

Yes, right. If there is the possibility that we would add the full pairing in the future, we should name this one differently. Maybe something like BLS12_381_ppairing_ml (standing for partial pairing, Miller loop), and then have clear documentation of the function.

Review comment (Contributor):

@michaelpj Thoughts on this? Should we try to make a complete API or should we add it in another Plutus language version in the future if someone needs it?

Review comment (Contributor):

bump

Review comment (Contributor):

It's hard to say in advance. I think the efficiency arguments are compelling, but it would indeed be good to identify clearly what the functions that we are adding do, e.g. naming and documenting them appropriately. That will also reduce confusion if we do add more functions later.

81, 11, 100, 122, 227, 209, 119, 11, 172, 3, 38, 168, 5, 187, 239, 212, 128, 86, 200, 193, 33, 189, 184]
```

#### An important note on GT elements
Review comment (Contributor):

I'm unsure if this is worth it. Looking at the document history, it looks like adding this allowed you to go from "2 pairings plus an equality check = 1100us" to "2 pairings plus an equality check = 1000us", which is a pretty modest improvement. Is that the main benefit we get here?

@iquerejeta (Contributor, Author) commented Apr 12, 2022:

What we had before (only the equality check) allowed us to compare two pairing evaluations, e.g. e(A,B) =? e(C,D), where e(,) is the full pairing function. However, in some cases we want more complex linear relations over pairing evaluations, e.g. e(A, B) * e(C, D) =? e(E, F) * e(G, H). This is where the benefit of this type of representation comes in. If we did not use it, we would take around 700-800 µs for each full pairing, resulting in ~3 ms for the whole check. Instead, by using this representation we expect around 1.6 ms.

I'll clarify this in the section.

@iquerejeta (Contributor, Author):

Just to clarify, the previous function (pairing_check) was already performing this trick of breaking the pairing evaluation down into its two blocks. The main benefit we get with this representation is that we can generalise the trick to any linear relation between two or more pairings, whereas pairing_check only allowed an equality check of exactly two pairings.
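
A sketch of that generalisation, checking e(A,B) * e(C,D) =? e(E,F) * e(G,H). The name BLS12_381_GT_mul for the GT binary operation is hypothetical (the CIP only states that the GT operation is supported); the point is that Miller-loop outputs can be multiplied before a single shared final exponentiation:

```haskell
-- Sketch only: four Miller loops and one final exponentiation,
-- instead of four full pairings; this is where the ~3 ms vs ~1.6 ms
-- difference mentioned above comes from.
pairingProductCheck
  :: Bls12381G1Element -> Bls12381G2Element   -- (A, B)
  -> Bls12381G1Element -> Bls12381G2Element   -- (C, D)
  -> Bls12381G1Element -> Bls12381G2Element   -- (E, F)
  -> Bls12381G1Element -> Bls12381G2Element   -- (G, H)
  -> Bool
pairingProductCheck a b c d e f g h =
  BLS12_381_final_verify
    (BLS12_381_GT_mul (BLS12_381_ppairing_ml a b) (BLS12_381_ppairing_ml c d))
    (BLS12_381_GT_mul (BLS12_381_ppairing_ml e f) (BLS12_381_ppairing_ml g h))
```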

@L-as (Contributor) commented Apr 15, 2022:

One very important thing to note here is that we're likely going to add more curves in the future, notably the Pasta curves (Pallas and Vesta) and secp256k1.

The core question I have then is, should we for each curve add the same primitives again separately?
Should basic primitives like addition work for many curves (and even integers)?

among the G1 and G2 types.

* Deserialisation:
* `deserializeG1 :: ByteString -> (BLSTError -> a) -> (Bls12_381G1Element -> a) -> a`
Review comment (Contributor):

Why not just deserialiseG1 :: ByteString -> BuiltinList Bls12_381G1Element, where the output is empty if it errs and otherwise a single element? We don't really care about error handling too much on-chain.
You could also just make it partial, deserialiseG1 :: ByteString -> Bls12_381G1Element looks reasonable to me.

Review comment (Contributor):

I can't think of a single case where handling the error is necessary.

@iquerejeta (Contributor, Author):

I think that the only handling that needs to be done is if deserialization worked or not. If this can be done with the output being empty, I'm totally fine with that option. I don't have a strong opinion here as long as the smart contract can react in case the deserialization failed.

Review comment (Contributor):

Why can't we just fail the whole thing if deserialization fails? Surely this indicates malformed input of some kind?

Review comment (Contributor):

I am very much for just failing the whole thing.

@iquerejeta (Contributor, Author):

Yes, absolutely. I'll follow your advice here. What would the function signature be in that case? (I'll change it accordingly.)

In particular, we expose the hash to curve (which we denote with `H2G1` and `H2G2` for G1 and G2
respectively) algorithm as described in
[hash to curve draft](https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-hash-to-curve#section-8),
using `BLS12381G1_XMD:SHA-256_SSWU_RO_` and `BLS12381G2_XMD:SHA-256_SSWU_RO_` for G1 and G2 respectively.
Review comment (Contributor):

Shouldn't we use BLAKE2 as is common in Cardano AFAICT? There are a lot of reasons to avoid SHA2, see zcash/zcash#706.

@iquerejeta (Contributor, Author):

We are using the recommendation of the standard for BLS12-381, which is also what is implemented in the blst library. BLS signatures depend on this choice, and the goal is to follow the standard in order to be compatible with as many implementations of BLS signatures over BLS12-381 as possible.

While BLAKE is the common and default choice in Cardano, there are some exceptions. These exceptions are made to follow the standard specifications and make Cardano verification procedures compatible with other implementations. Notable exceptions are ed25519 and the VRF (both use SHA-512 to compute the challenge).

* `BLS12_381_G2_add :: BLS12_381G2Element -> BLS12_381G2Element -> BLS12_381G2Element`
* `BLS12_381_G2_mult :: Integer -> BLS12_381G2Element -> BLS12_381G2Element`
* `BLS12_381_G2_neg :: BLS12_381G2Element -> BLS12_381G2Element`
* `BLS12_381_H2G2 :: ByteArray -> BLS12_381G2Element`
Review comment (Contributor):

Shouldn't this be ByteString instead of ByteArray?

* Pairing operations:
* `BLS12_381_ppairing_ml :: BLS12_381G1Element -> BLS12_381G2Element -> BLS12_381GTElement`
* `BLS12_381_final_verify :: BLS12_381GTElement -> BLS12_381GTElement -> bool` This performs the final
exponantiation (see section `An important note on GT elements` below).
Review comment (Contributor):

typo


* Deserialisation:
* `deserialiseG1 :: ByteString -> Bls12_381G1Element`
* `deserialiseG2 :: ByteString -> Bls12_381G2Element`
Review comment (Contributor):

Do we want serialisation? That way we can hash G1 and G2.

@iquerejeta (Contributor, Author):

Yes we do, good catch. I was just discussing this with @davidnevadoc and others this morning. We will include serialisation of G1 and G2 elements, and probably deserialisation of target elements, to save some pairing computations in, for example, Groth16 verification.
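
A sketch of the hashing use case raised above, assuming a serialiseG1 builtin that returns the 48-byte compressed encoding used by blst, composed with an existing Plutus hash builtin:

```haskell
-- Sketch only: serialise the point, then hash the resulting bytes.
hashG1 :: Bls12381G1Element -> ByteString
hashG1 = blake2b_256 . serialiseG1
```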

@KtorZ KtorZ changed the title CIP-43? | Plutus support for pairings over curve Bls12-381 CIP-0043? | Plutus support for pairings over curve Bls12-381 May 11, 2022
@KtorZ KtorZ changed the title CIP-0043? | Plutus support for pairings over curve Bls12-381 CIP-0381? | Plutus support for pairings over curve Bls12-381 May 11, 2022
@L-as (Contributor) commented May 16, 2022:

What happened to the PR name?

@L-as (Contributor) commented Jun 7, 2022:

https://github.com/cardano-foundation/CIPs/pull/220/files#r865771606

How complete should the API be? Should we add the full functionality in later versions? Should we add all the basic operations initially?

@KtorZ (Member) commented Jun 7, 2022:

@L-as, regarding the PR name: CIP numbers are assigned on a best-effort basis. It sometimes happens that two CIPs are submitted around the same time and their authors allocate the same number. That's why we encourage authors not to assign numbers themselves; we do it as part of the housekeeping. Plus, a number is only truly settled once the CIP is accepted and merged, hence the ? on candidate CIPs. This is summarized in the main README.

@iquerejeta (Contributor, Author):

@KtorZ , do you think the PR is ready to be merged? We are going to start doing some prototyping, so IMO we could merge this as a draft, WDYT?

@KtorZ (Member) commented Jun 14, 2022:

@iquerejeta we can add it to the agenda for next session (cc @mangelsjover) for review.

I have yet to go through the comments and discussions that happened as part of this one, but I can already make a few points that would need to be fixed before we merge it:

  • Change the name & folder to CIP-0381
  • Rename Reason for choosing BLS12_381 as Rationale and move it after the Specification section.
  • Rename Moving to Proposed as Path to Proposed and consider perhaps having a Path to Active as well (though for this one, I don't see much beyond seeing this actually implemented in Plutus and released in an upcoming update).
  • I am not quite sure about the purpose of the Some operation count section? It seems to sit in between the realm of recommendations for some of the considered use cases and a justification of the use of pairings in those situations; so perhaps this last section ought to be nested within the Rationale section?

@iquerejeta (Contributor, Author):

I've included your suggestions. What I'm not entirely convinced about is the Some operation count section. I've removed it, and added the operation count to the motivation (at the beginning of the document). Do you think it fits well there? Happy to make further changes, and thanks for looking into it 👍

@KtorZ (Member) commented Jul 5, 2022:

We're happy to move forward with this one as proposed.

@iquerejeta may I ask you to kindly make a few minor editorial adjustments:

  • Please rename the main file to README.md (under the same directory).

  • Remove the headline and 'tags':

    ###### tags: `CIP`
    # CIP-0381
    
    
  • Change status to Proposed

@mangelsjover added the 'Waiting for Author' label (Proposal showing lack of activity or participation from their authors) on Jul 13, 2022
@rphair (Collaborator) left a comment:

author has completed changes requested in #220 (comment).

@rphair removed the 'Candidate CIP' and 'Waiting for Author' labels on Jul 15, 2022
@rphair merged commit 0dd5ad8 into cardano-foundation:master on Jul 15, 2022
rphair added a commit that referenced this pull request Jul 15, 2022
@dcoutts (Contributor) commented Apr 14, 2023:

In addition to security audits of the proposed implementation, it is important that we review the inclusion of the library for practical issues of build systems and long-term maintainability.

In particular, in this case the library will be used in the chain format and in chain verification. This means we have the special constraint that the format and verification logic must never change. More specifically: it must remain possible, forever, to verify the existing uses on the chain. So even if there are upgrades or format changes in future, it must still be possible to decode and verify the old uses. This is not a constraint that most software has, and so many libraries are not designed to work within it.

The proposed implementation is https://github.com/supranational/blst

  • The implementation is in C and assembly for x86 and ARM v8. This is good for the existing Haskell implementation of the Cardano node because integrating with C libraries is relatively straightforward, it's well supported by the build and CI system and does not pull in too many extra dependencies. (Contrast for example if it were a Rust library where we would have serious practical problems on this front.)
  • The implementation appears to have been designed with blockchain implementations in mind. This is a good sign for the long term maintainability because it probably means that the library authors will continue to support the existing format and semantics even if there are changes or improvements.
  • The implementation is claimed to be compliant with draft IETF specifications. There is a risk that the specs may change before being declared final, and the library may be updated to follow. There is a risk that the Cardano community will have to support the old version forever. Though given the point above, it's probably the case that library updates would still provide compatibility with the current IETF drafts and serialisation formats.

So overall this seems like something the core development team and Cardano community could support long term without too high a cost. Though we should be aware of the risk that we may have to support an old version of the library, if the library gets changed in incompatible ways.

To ensure no compatibility surprises, it is worth considering forking the repository at a specific commit/version and building the node using that version. This is to guarantee the version remains available. Then for any future updates, the fork repo could be updated to a new version explicitly, with appropriate compatibility checks to ensure the existing on-chain uses are still compatible.

@iquerejeta (Contributor, Author):

To ensure no compatibility surprises, it is worth considering forking the repository at a specific commit/version and building the node using that version.

I'm not sure I see the benefit of forking here. This would only be useful if the repo maintainers overwrote history, right? As long as that does not happen, we are fine sticking to a particular version/commit of the original repo without the need to fork it.

@michaelpj (Contributor):

@iquerejeta can you fold Duncan's argument into the CIP? I think it's a good extended justification for why the library choice is okay and worth making part of the Rationale.
