Elliptic Curves In-Depth (Part 9)
Alright! We left off the previous installment with a wrap on the very specific way pairings are defined.
After the dust settled, we were left with a couple of suspiciously simple definitions for the Weil and Tate pairings. For such complex constructions, it feels almost poetic that we wind up with expressions such as this one:

$$w(P,Q) = \frac{f(D_Q)}{g(D_P)}$$
However, and as you’ve probably come to expect by now, appearances are deceiving: we have in fact said nothing about how to compute pairings out of these simple expressions.
And that’s actually the most important part, if we want to have any hope of putting pairings to good use!
Today, we’ll be looking at the technique that makes pairing computation possible, along with a few more details that will finally bring all the elements together.
We’re much closer to the finish line now. Just a little more!
Computation
Let’s focus on the Weil pairing for a moment:

$$w(P,Q) = \frac{f(D_Q)}{g(D_P)}$$
In the previous article, we saw that there’s a natural way to evaluate a function on a divisor — and that’s exactly what we’d need to do to compute this pairing. That is, for a divisor $D = \sum_i n_i(P_i)$ and a function $f$:

$$f(D) = \prod_i f(P_i)^{n_i}$$
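As a quick example of how that rule plays out, take a made-up divisor with just two points in its support, say $D = 2(P_1) - 3(P_2)$. Evaluating $f$ at $D$ simply means:

$$f(D) = \frac{f(P_1)^2}{f(P_2)^3}$$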
Those $f$ and $g$ in the pairing expression are not just any functions — we said they were exactly the functions whose divisors were:

$$(f) = r(P) - r(\mathcal{O}) \qquad (g) = r(Q) - r(\mathcal{O})$$
What we never explained is how to obtain them: last time, we merely stated that such functions can be constructed, but not how. It’s clear that once we have them, it’s just a matter of plugging them into the definition. But how do we find them?
Let’s see how we can get there.
Building the Functions
To get us started, we’ll need to go back to one of the claims at the start of the previous article: that we can build a function $f_m$ whose divisor (which is principal) is of this form:

$$(f_m) = m(P) - ([m]P) - (m-1)(\mathcal{O})$$
What we’ll do here is kind of an inductive process: assuming we have $f_m$, can we build $f_{m+1}$?
The answer is yes — and in quite an elegant fashion.
We’ll make use of a couple of tricks we have already seen, which involve two very simple functions. First, let’s take the line $\ell$ going through $[m]P$ and $P$, whose divisor is just:

$$(\ell) = ([m]P) + (P) + (-[m+1]P) - 3(\mathcal{O})$$
Second, take the vertical line $v$ going through $[m+1]P$:

$$(v) = ([m+1]P) + (-[m+1]P) - 2(\mathcal{O})$$
Using these two divisors, we can find a very neat relation between $(f_{m+1})$ and $(f_m)$:

$$(f_{m+1}) = (f_m) + (\ell) - (v)$$
Our prior knowledge of divisors reveals something amazing about this observation: the divisor equality can be mapped directly into an algebraic expression we can work with:

$$f_{m+1} = f_m \cdot \frac{\ell}{v}$$
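It’s worth checking the bookkeeping explicitly. Writing out each of the divisors from above, everything cancels out just right:

$$\begin{aligned}
(f_m) + (\ell) - (v) &= \big[m(P) - ([m]P) - (m-1)(\mathcal{O})\big] + \big[([m]P) + (P) + (-[m+1]P) - 3(\mathcal{O})\big] \\
&\quad - \big[([m+1]P) + (-[m+1]P) - 2(\mathcal{O})\big] \\
&= (m+1)(P) - ([m+1]P) - m(\mathcal{O}) = (f_{m+1})
\end{aligned}$$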
And that’s pretty much it! Given any value of $m$, we can start at the function $f_1 = 1$ (whose divisor is empty), and work our way up to the desired function. Piece o’ cake, am I right?

Not so fast, cowboy.
Where it Gets Tricky
Remember, our goal with all this was to construct a function whose divisor is:

$$(f_r) = r(P) - ([r]P) - (r-1)(\mathcal{O}) = r(P) - r(\mathcal{O})$$

(the simplification works because $[r]P = \mathcal{O}$ for any point in the r-torsion).
That value $r$ right there is exactly the order of the r-torsion subgroups we work with. In practice, $r$ will be huge (we’re talking hundreds of bits), which means that finding our functions by performing one update at a time will get us nowhere.
Said differently, by using the strategy above, pairing computation would be infeasible in practice, so all we’d have is a nice bit of theory that serves no purpose.
However, pairings are used in practice for various cryptographic protocols — so there must be some trick to all this. And sure as hell, pairing computation was made possible by the clever ideas of a guy called Victor S. Miller.
And that’s what we shall see next.
Miller’s Algorithm
Rather than trying to add one to $m$ on each iteration, what would happen if we tried to double it?
Let’s see. We begin with some function $f_m$:

$$(f_m) = m(P) - ([m]P) - (m-1)(\mathcal{O})$$
Even when going one iteration at a time, we’ll eventually get to the following function:

$$(f_{2m}) = 2m(P) - ([2m]P) - (2m-1)(\mathcal{O})$$
The interesting bit happens when we try squaring our original function. By the properties of divisors we already know, we can work out the resulting divisor to be:

$$(f_m^2) = 2(f_m) = 2m(P) - 2([m]P) - (2m-2)(\mathcal{O})$$
Which is almost suspiciously similar to the divisor we want to get to!
In fact, we can work out their difference to be:

$$(f_{2m}) - (f_m^2) = 2([m]P) - ([2m]P) - (\mathcal{O})$$
That should look familiar by now, right?

Let me save you the trouble: it’s the divisor of the line tangent to the curve at $[m]P$, minus the divisor of the vertical line through $[2m]P$. In essence, the functions used to double $[m]P$.
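To double-check, write $\ell$ for the tangent line at $[m]P$ and $v$ for the vertical line through $[2m]P$ (just shorthand on my part). Their divisors are:

$$(\ell) = 2([m]P) + (-[2m]P) - 3(\mathcal{O}) \qquad (v) = ([2m]P) + (-[2m]P) - 2(\mathcal{O})$$

and subtracting one from the other gives exactly $2([m]P) - ([2m]P) - (\mathcal{O})$, the difference we just computed.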
This brilliant insight enables us to double the current value of $m$ (see the compact formula after this list) by:
- Squaring the current function,
- Multiplying by the line tangent to the curve at $[m]P$,
- Dividing by the vertical line through $[2m]P$ (which, once evaluated, is just a scalar value).
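In symbols, and reusing $\ell$ for the tangent line at $[m]P$ and $v$ for the vertical line through $[2m]P$, a single doubling step reads:

$$f_{2m} = f_m^2 \cdot \frac{\ell}{v}$$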
We can get to any value of $m$ in logarithmic time by doubling-and-adding, using a single add-one step when needed.
For example, to get to $m = 28$, we could do double, add, double, add, double, double.
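Written out step by step, starting from $m = 1$, that chain looks like this:

$$1 \xrightarrow{\text{double}} 2 \xrightarrow{\text{add}} 3 \xrightarrow{\text{double}} 6 \xrightarrow{\text{add}} 7 \xrightarrow{\text{double}} 14 \xrightarrow{\text{double}} 28$$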
This should be very reminiscent of the double-and-add algorithm we used for fast multiplication of elliptic curve points. Behind the scenes, both use the tangent rule for duplication — it’s just that in Miller’s algorithm, we’re building functions along the way.
As a final remark, it should be noted that as we go deeper into the algorithm, the resulting functions will have high degree, so storing them in full becomes prohibitively expensive.
The trick is not to store the full function, but its evaluation at some divisor. Naturally, all we have to do is choose the divisor at which we want to evaluate the pairing functions!
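To make this more tangible, here’s a minimal Python sketch of the whole procedure over a toy curve. Everything in it is a placeholder of my own choosing (the prime, the coefficients, the helper names), meant only to show the double-and-add structure of the loop and the evaluate-as-you-go trick: we never store the functions themselves, only their running value at the chosen point. A real pairing implementation also needs extension field arithmetic and quite a bit more care.

```python
# A toy sketch of Miller's algorithm. The prime and curve coefficients are
# placeholders (y^2 = x^3 + a*x + b over F_p), chosen only for illustration;
# b is only needed when picking actual points on the curve.
p = 1_000_003
a, b = 1, 7
O = None  # the point at infinity

def add(P, Q):
    """Chord-and-tangent addition of affine points (or O) on the curve."""
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O  # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p  # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def line(T, R, Q):
    """Evaluate at Q the line through T and R (the tangent line if T == R)."""
    (x1, y1), (x2, y2), (xq, yq) = T, R, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return (xq - x1) % p  # the line is vertical
    if T == R:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    return (yq - y1 - lam * (xq - x1)) % p

def vertical(S, Q):
    """Evaluate at Q the vertical line through S (constant 1 if S is O)."""
    return 1 if S is O else (Q[0] - S[0]) % p

def miller(P, Q, m):
    """Evaluate the Miller function f_{m,P} at the point Q.

    Assumes the supports stay disjoint, i.e. Q never lands on a zero or
    pole of the intermediate lines.
    """
    f = 1  # f_1 is just the constant function 1
    T = P  # the running multiple, currently [1]P
    for bit in bin(m)[3:]:  # binary digits of m, skipping the leading 1
        # Doubling step: f <- f^2 * (tangent at T) / (vertical at [2]T)
        f = f * f * line(T, T, Q) * pow(vertical(add(T, T), Q), -1, p) % p
        T = add(T, T)
        if bit == "1":
            # Addition step: f <- f * (line through T and P) / (vertical at T + P)
            f = f * line(T, P, Q) * pow(vertical(add(T, P), Q), -1, p) % p
            T = add(T, P)
    return f
```

Calling something like `miller(P, Q, r)` with actual r-torsion points (and divisors arranged so the supports are disjoint) produces the kind of evaluation that the pairing definitions from earlier then combine.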
Miller’s elegant algorithm is what ultimately enables practical pairing computation. But this isn’t all there is to pairings — there are a few more things we need to take into account.
Pairing Types
One thing we've left forgotten on the shelf is the definition of the trace and anti-trace maps. It seems we went through all that effort for no apparent reason.
Until now.
You see, when we define a pairing as:

$$e: \mathbb{G}_1 \times \mathbb{G}_2 \rightarrow \mathbb{G}_3$$
we need to consider what these groups actually are. Sure, we know they belong to the r-torsion, but is it okay for us to pick any torsion subgroup? Or should we be mindful of our decisions?
As you might expect, care indeed has to be taken when selecting those groups. Doing otherwise might cause some complications.
This will ultimately lead us to the idea of pairing types, which essentially describe the relationship between $\mathbb{G}_1$ and $\mathbb{G}_2$. But to fully grasp what’s at stake, we must first understand a few things about what we may be using pairings for.
Group Requirements
When building cryptographic protocols with pairings, there are two fundamental operations we’ll need to perform.
First, we’ll need to hash into our groups of choice. By this, we mean taking arbitrary data — like a user’s name, email address, some message, or just about anything — and deterministically mapping it to a point in $\mathbb{G}_1$ or $\mathbb{G}_2$.
For example, this is essential for schemes like identity-based encryption, where a person’s identity itself becomes their public key through hashing.
We also want this hashing process to be efficient. Some group choices make it very hard to do, forcing us to work with fixed generators and scalar multiples instead, which severely limits what protocols we can build. That is, we can only create points of the form $[k]G$ for some fixed generator $G$, and we can’t handle arbitrary inputs directly — a dealbreaker for most applications.
Second, we occasionally need efficient homomorphisms between groups. Having an efficiently computable map like this one:

$$\psi: \mathbb{G}_2 \rightarrow \mathbb{G}_1$$
can prove extremely useful.
The reason for this is that some protocols require checking relationships or performing operations that only make sense when elements are in the same group. Additionally, operations in some groups are much faster than in others — so if we can move between groups for faster computation, it’s a big win!
Kinda like a computational cheat code!
But of course, there’s the catch. There’s always a catch.
In this case, the problem is that we can’t have both things easily at the same time. Different choices of groups provide different balances in this tradeoff. And that’s exactly what the different pairing types are about.
With this, we’re now ready to formally present them.
Type 1: Symmetric Pairings
The simplest case we could imagine is when

$$\mathbb{G}_1 = \mathbb{G}_2 = \mathcal{G}_1$$

Recall that $\mathcal{G}_1$ is the base field subgroup.
Since we use the same group for both and , it’s no surprise these are called symmetric pairings.
Let’s see how they fare against our requirements. First, hashing into $\mathcal{G}_1$ is relatively efficient, since the entire group lives in the base field — so that’s a win. And since both groups are the same, we only need one hash function.
In terms of maps between groups, we actually have a trivial one — the identity map! I guess that doesn’t really count? Heck, let’s just say it’s another win.
So what’s the tradeoff then? Everything seems pretty good...

Well, remember that for pairings to work, the divisor supports must be disjoint. When both $P$ and $Q$ come from $\mathcal{G}_1$, ensuring disjoint supports becomes problematic — we’re essentially pairing a group with itself.
The solution to this is to use what’s called a distortion map: an efficiently computable map $\phi$ that takes a point in $\mathcal{G}_1$ and moves it into a different r-torsion subgroup. With this, we could compute:

$$e(P, \phi(Q))$$
And the disjoint supports condition would be much easier to satisfy.
So here’s the real problem: distortion maps only exist for supersingular curves.

Supersingular curves have small embedding degrees: typically $k = 2$, and never more than $6$. Small embedding degrees mean the pairing outputs land in a small field extension, which in turn causes the discrete logarithm problem in the output group to become tractable, resulting in curves that are vulnerable to attacks.
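To see why this is a problem, here’s the gist of the classic MOV-style argument, sketched with generic notation: suppose you’re given $P$ and $[s]P$ in $\mathcal{G}_1$, and you want to recover the secret scalar $s$. Pairing both against a suitable fixed point $Q$ gives:

$$e([s]P, Q) = e(P, Q)^s$$

so the elliptic curve discrete logarithm turns into a discrete logarithm in the multiplicative group of $\mathbb{F}_{q^k}$, where index calculus methods apply. For small $k$, that field is small enough for the attack to be practical.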
In short, type 1’s simplicity comes at the cost of security. As more secure alternatives emerged, type 1 pairings have fallen out of favor in practice.
Type 2: Asymmetric with Isomorphism
We now move on to asymmetric pairings for types 2, 3, and 4. In all three cases, we’ll take $\mathbb{G}_1$ to be $\mathcal{G}_1$, the base field subgroup.
In type 2 pairings, $\mathbb{G}_2$ is chosen to be one of the other r-torsion subgroups, except for $\mathcal{G}_2$ (the trace zero subgroup).
Again, let’s check on our requirements:
- In terms of available isomorphisms, we do have an efficiently computable map $\psi: \mathbb{G}_2 \rightarrow \mathbb{G}_1$, and you’ve already seen it — it’s the trace map! This allows us to move elements from $\mathbb{G}_2$ into $\mathbb{G}_1$ when protocols require it. We can even go in the reverse direction using the anti-trace map for certain operations, though this doesn’t map directly back into $\mathbb{G}_2$ (it lands in $\mathcal{G}_2$ instead).
- However, in terms of hashing, this choice struggles: there’s no known efficient way to hash directly into these other r-torsion subgroups.
So hashing is the real offender here. The best we can do is, as previously mentioned, choose a fixed generator $G$ and create elements via scalar multiplication, as $[k]G$. But this means every element we produce has a known discrete logarithm relative to $G$, which breaks protocols that require random or hash-derived points.
This single limitation is so critical that it has prevented type 2 pairings from seeing widespread practical use — most protocols simply need the ability to hash into both groups, which immediately rules this type out.
Type 3: Asymmetric without Isomorphism
Now, for this type, we take $\mathbb{G}_2$ to be $\mathcal{G}_2$ — the trace zero subgroup itself.
This seemingly small change has profound implications — it actually flips the tradeoff completely on its head. Let’s see:
- Unlike type 2, we can now hash into $\mathbb{G}_2$. The mechanism involves first hashing into the entire r-torsion, and then simply applying the anti-trace map to land in $\mathcal{G}_2$.
- Of course, we must pay a price for this convenience. In this case, that translates into not having an efficiently computable isomorphism $\psi: \mathbb{G}_2 \rightarrow \mathbb{G}_1$ at our disposal.
It’s important to point out that such an isomorphism must exist — after all, both $\mathbb{G}_1$ and $\mathbb{G}_2$ have the same size. The problem is that there’s no known efficient way to compute it.
Ironically, the only group we can efficiently hash into (besides $\mathcal{G}_1$) is the only subgroup we cannot efficiently map out of.
Thus, we lose the computational shortcuts and protocol flexibility that type 2 offered. And no less important is the fact that most security proofs rely on actually having such a map, so to prove the robustness of type 3 pairings, protocols need to be designed around different hardness assumptions.
In practice, this tradeoff has proven to be acceptable, as the ability to hash into both groups is the most critical aspect for most protocols out there.
Type 3 is the gold standard for modern pairing-based cryptography.
Type 4: The Full Torsion
Finally, type 4 takes $\mathbb{G}_2$ to be the entire r-torsion, $E[r]$.
- Hashing into $E[r]$ is possible, but it’s less efficient than hashing into $\mathcal{G}_1$ or $\mathcal{G}_2$. Since the group is larger (it has order $r^2$, as we’ve already seen), there’s more computational overhead.
- Then, we can look at the structure of $E[r]$. In this sense, it has a distinctive feature: it’s not cyclic. We already know it’s isomorphic to $\mathbb{Z}_r \times \mathbb{Z}_r$, so there’s no single generator that can produce all elements in the group. Some protocols rely on cyclicity to work, so they cannot use type 4 pairings.
Curiously, when hashing into $E[r]$, the probability of landing in either $\mathcal{G}_1$ or $\mathcal{G}_2$ is roughly $2/r$ — negligible for the huge values of $r$ used in practice. Certain specialized protocols might need to guarantee that specific subgroups are avoided, and in those cases, type 4 pairings can be useful.
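The quick count behind that claim (assuming our hash behaves roughly uniformly over the whole torsion): $E[r]$ has $r^2$ elements, while $\mathcal{G}_1$ and $\mathcal{G}_2$ together account for only $2r - 1$ of them (they share just the identity), so:

$$\Pr[\text{landing in } \mathcal{G}_1 \cup \mathcal{G}_2] = \frac{2r - 1}{r^2} \approx \frac{2}{r}$$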
The efficiency costs and lack of cyclicity mean that type 4 pairings are rarely used in practice.
All in all, type 3 pairings are the predominant variant in modern pairing-based cryptography. And truth be told, we hardly ever worry about which type of pairing we’re using. It’s just that if we have to get really technical in our protocol design, we might have to consider what using each group choice implies!
Now that we understand the different pairing types and their tradeoffs, let’s address one more critical concern, before closing the chapter on these amazing constructions.
Degeneracy
We’ve talked about pairing types and how to construct the functions needed for computation, but there’s one more critical property we need to ensure: that our pairing actually works.

What do I mean by works? Remember that pairings are meant to be bilinear maps. But there’s a trivial way to satisfy bilinearity: just map everything to $1$!
While such a pairing would technically satisfy bilinearity, it’s completely useless. We call such a pairing degenerate.
And of course, a pairing is non-degenerate if there exist points $P$ and $Q$ such that $e(P, Q) \neq 1$.
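In fact, bilinearity makes this an all-or-nothing affair: if the pairing outputs $1$ on a pair of generators $P$ and $Q$ of our (cyclic) groups, then it outputs $1$ everywhere, since any other inputs are just scalar multiples of those generators:

$$e([a]P, [b]Q) = e(P, Q)^{ab} = 1^{ab} = 1$$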
Should I Worry?
After we went through the trouble of defining pairings in these convoluted and complex ways, it would seem quite unlikely that our pairings would be degenerate, right?
Well... Not quite. If we’re not careful about how we choose and , we can end up with a degenerate pairing even when following all the construction steps correctly. The mathematics might be sound, but the result is cryptographically useless.
A concrete example of what can go wrong happens when we try to use the Weil pairing with $\mathbb{G}_1 = \mathbb{G}_2 = \mathcal{G}_1$, but we’re working with an ordinary (non-supersingular) curve that has no distortion map.
When the Weil pairing operates on two points from the same Frobenius eigenspace (in this case, $\mathcal{G}_1$), the pairing always evaluates to $1$, regardless of which points you choose. At the cost of glossing over some of the details, here’s the short explanation:
- First, we know that the Frobenius endomorphism $\pi$ acts trivially on points in $\mathcal{G}_1$, so for any $P$ and $Q$ in $\mathcal{G}_1$, we have $\pi(P) = P$ and $\pi(Q) = Q$.
- Then, for rational functions on the curve (defined over the base field), the Frobenius map has a special property:
$$f(\pi(P)) = f(P)^q$$
- We can plug this property into the Weil pairing itself, and looking at the resulting divisors, we obtain:
$$w(P, Q)^q = w(\pi(P), \pi(Q))$$
- But since $P$ and $Q$ are in $\mathcal{G}_1$:
$$w(P, Q)^q = w(P, Q)$$
What we’re saying is that whatever the output of the pairing is, it’s fixed by the q-th power map. In other words, it has to be an element of the base field, for if $w(P, Q)$ belonged to a proper field extension, then raising it to the q-th power would not yield the same result.
We also know that $w(P, Q)$ must be an r-th root of unity.
Recall that for a field to contain non-trivial r-th roots of unity, $r$ must divide the cardinality of the multiplicative group of the field, so in this case, $q - 1$.
That’s the final nail in the coffin: we usually choose $r$ so that it does not divide $q - 1$, meaning that there are no r-th roots of unity in the base field — except for the trivial one: the multiplicative identity. Thus, $w(P, Q)$ must equal $1$.
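Putting the whole argument into a single chain (with $\pi$ the Frobenius map and $w$ the Weil pairing, as above):

$$w(P,Q)^q = w(\pi(P), \pi(Q)) = w(P,Q) \;\Rightarrow\; w(P,Q) \in \mathbb{F}_q, \qquad w(P,Q)^r = 1 \;\text{and}\; r \nmid q - 1 \;\Rightarrow\; w(P,Q) = 1$$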
This is precisely why Type 1 pairings require supersingular curves with distortion maps — without them, degeneracy is unavoidable.
Requirements
Okay! That was certainly a lot. Sorry for that!

Beyond the specific example, there are some general conditions we should check to ensure non-degeneracy. Let’s round things off by looking at them.
- First and most important: $\mathbb{G}_1$ and $\mathbb{G}_2$ must be disjoint (except, of course, for the identity $\mathcal{O}$). If they share non-trivial points, the pairing degenerates on those points. We already saw the extreme case: $\mathbb{G}_1 = \mathbb{G}_2$ without a distortion map — and the pairing becomes completely degenerate.
This is why the trace and anti-trace decomposition is so valuable: it naturally allows us to use $\mathcal{G}_1$ and $\mathcal{G}_2$, which are disjoint by definition.
- Second: the divisor supports must be disjoint. We’ve mentioned this before, but there’s no harm in repeating it: if the supports aren’t disjoint, the function evaluation collapses to $0$ or blows up to infinity, causing undefined behavior that can result in degeneracy. Choosing $P$ and $Q$ from different eigenspaces naturally helps ensure this.
- Third: the embedding degree must be large enough. Recall that the embedding degree is the smallest positive integer $k$ such that $r$ divides $q^k - 1$, and it determines where pairing values live. Essentially, we must ensure that the field where pairing values live actually contains the r-th roots of unity needed for a non-degenerate pairing.
In practice, choosing the right pairing type (especially type 3) and using properly constructed pairing-friendly curves automatically satisfies all these conditions. The mathematical machinery ensures non-degeneracy, so we can focus on the cryptographic applications rather than worrying about the edge cases.
Still, understanding why these conditions matter may help us appreciate the careful design that goes into pairing-based cryptography. One step in the wrong direction, and the entire thing can fall apart!
Summary
With this, we’ve completed our exploration of pairings, by covering some of the most important practical aspects that make them usable in cryptography.
As always, there’s more to say. In particular, I want to point out that we still need to find suitable curves for these constructions. Most of the time, we assume we can find them, but it’s not necessarily an easy task.
Still, with these past few articles, you should have a rough idea of what pairings are all about. It’s interesting how we set out to find functions with a seemingly simple and harmless property — bilinearity — but ended up having to get into pretty deep stuff to make it work.
Part of what makes math beautiful, I guess: everything ends up clicking nicely!
To summarize today’s article, remember: most of the time, you’ll be using type 3 pairings, with Miller’s algorithm working in the background for actual computation, and you’ll be using curves and embedding degrees that ensure non-degeneracy.
While we could say much, much more about elliptic curves (heck, there are entire books written about them, if you’re interested), I don’t wish to cover everything in this series.
However, I believe some practical considerations would be a good wrap-up. Thus, next article will be dedicated to understanding some of the curves with the most widespread adoption and their particularities.
I’ll see you on the finish line!