
The Controversy Surrounding Hybrid Cryptography

Did you know that, in the Furry Fandom, the most popular choice in species for one’s fursona is actually a hybrid?

Source: FurScience

Of course, we’re not talking about that kind of hybrid today. I just thought it was an amusing coincidence.

Art: Lynx vs Jackalope

Nor are we talking about what comes to mind for engineers accustomed to classical cryptography when you say "hybrid."

(Such engineers typically envision some combination of asymmetric key encapsulation with symmetric encryption, because too many people encrypt with RSA directly and the sane approach is often described as a Hybrid Cryptosystem in the literature.)

Rather, Hybrid Cryptography in today’s context refers to:

Cryptographic systems that use a post-quantum cryptography algorithm, combined with one of the algorithms deployed today that aren't resistant to quantum computers.

Why Hybrid Cryptosystems?

At some point in the future (years or decades from now), humanity may build a practical quantum computer. This will be a complete disaster for all of the cryptography deployed on the Internet today.

In response to this distant existential threat, cryptographers have been hard at work designing and attacking algorithms that remain secure even when quantum computers arrive. These algorithms are classified as post-quantum cryptography (mostly to distinguish them from techniques that use quantum computers to facilitate cryptography rather than attack it, which is "quantum cryptography" and not really worth our time talking about).

However, a lot of post-quantum cryptography designs are relatively new or comparatively less studied than their classical (pre-quantum) counterparts. Several of the Round 1 candidates to NIST's post-quantum cryptography project were broken immediately (PDF). The exploit code referenced in the PDF is duplicated below:

#!/usr/bin/env python3
import binascii, struct

def recover_bit(ct, bit):
    # Each plaintext bit corresponds to a fixed-size block of 16-bit samples
    # in the ciphertext; a bias in the sample averages leaks the bit.
    assert bit < len(ct) // 4000
    ts = [struct.unpack('<H', ct[4000*bit + 4*i : 4000*bit + 4*i + 2])[0] for i in range(1000)]
    xs, ys = ts[:len(ts)//2], ts[len(ts)//2:]
    return sum(xs) / len(xs) >= sum(ys) / len(ys)

def decrypt(ct):
    # Recover the entire message, one bit at a time, from the ciphertext alone.
    res = sum(recover_bit(ct, b) << b for b in range(len(ct) // 4000))
    return int.to_bytes(res, len(ct) // 4000 // 8, 'little')

kat = 0
for l in open('EncryptKAT.rsp'):  # known-answer tests from the submission package
    if l.startswith('msg = '):
        msg = binascii.unhexlify(l[len('msg = '):].strip())
    elif l.startswith('c = '):
        ct = binascii.unhexlify(l[len('c = '):].strip())

        print('test case {}'.format(kat))
        print('correct plaintext:   {}'.format(binascii.hexlify(msg).decode()))
        plain = decrypt(ct)
        print('recovered plaintext: {} ({})'.format(binascii.hexlify(plain).decode(), plain == msg))

        kat += 1

More pertinent to our discussion: Rainbow, one of the Round 3 finalists for post-quantum digital signature algorithms, was discovered in 2020 to be much easier to attack than previously thought. Specifically, the attack cost against each of the third-round parameter sets was reduced by a substantial factor.

That security reduction is a tad more concerning than a Round 1 candidate being totally broken, since by the third round NIST had concluded that Rainbow was a good signature algorithm, right up until that attack was discovered. Maybe there are similar attacks just waiting to be found?

Given that new cryptography is accompanied by less confidence than incumbent cryptography, hybrid designs are an excellent way to mitigate the risk of attack advancements in post-quantum cryptography:

If attacking your system requires breaking both the cryptography used today AND one of the new-fangled designs, you'll always be at least as secure as the stronger algorithm.

Art: Lynx vs Jackalope

Why Is Hybrid Cryptography Controversial?

Despite the risks of greenfield cryptographic algorithms, the NSA has begun recommending a strictly-PQ approach to cryptography and has explicitly stated that it will not require hybrid designs.

Another pushback on hybrid cryptography comes from Uri Blumenthal of MIT’s Lincoln Labs (the acronym CRQC expands to “Cryptographically-Relevant Quantum Computer”) on the IETF CFRG mailing list:

Here are the possibilities and their relation to the usefulness of the Hybrid approach.

1. CRQC arrived, Classic hold against classic attacks, PQ algorithms hold – Hybrid is useless.

2. CRQC arrived, Classic hold against classic attacks, PQ algorithms fail – Hybrid is useless.

3. CRQC arrived, Classic broken against classic attacks, PQ algorithms hold – Hybrid is useless.

4. CRQC arrived, Classic broken against classic attacks, PQ algorithms broken – Hybrid is useless.

5. CRQC doesn’t arrive, Classic hold against classic attacks, PQ algorithms hold – Hybrid is useless.

6. CRQC doesn’t arrive, Classic hold against classic attacks, PQ algorithms broken – Hybrid helps.

7. CRQC doesn’t arrive, Classic broken against classic attacks, PQ algorithms hold – Hybrid is useless.

8. CRQC doesn’t arrive, Classic broken against classic attacks, PQ algorithms broken – Hybrid is useless.

Uri Blumenthal, IETF CFRG mailing list, December 2021 (link)

Why Hybrid Is Actually A Damn Good Idea

Art: Scruff Kerfluff

Uri’s risk analysis is, of course, flawed. And I’m not the first to disagree with him.

First, Uri's framing sort of implies that each of the 8 possible combinations of these 3 boolean variables is a relatively equally likely outcome.

It’s very tempting to look at this and think, “Wow, that’s a lot of work for something that only helps in 12.5% of possible outcomes!” Uri didn’t explicitly state this assumption, and he might not even believe that, but it is a cognitive trap that emerges in the structure of his argument, so watch your step.

Second, for many candidate algorithms, we’re already in scenario 6 that Uri outlined! It’s not some hypothetical future, it’s the present state of affairs.

To wit: The advances in cryptanalysis on Rainbow don’t totally break it in a practical sense, but they do reduce the security by a devastating margin (which will require significantly larger parameter sets and performance penalties to remedy).

For many post-quantum algorithms, we're still uncertain about which scenario is most relevant. But since PQ algorithms are already being successfully attacked and a cryptographically-relevant quantum computer still hasn't arrived, "hybrid helps" is very clearly the world we inhabit today, and likely will be for many years (at least until the arrival of such a quantum computer finally settles the question).

From what I’ve read, the NSA didn’t make as strong an argument as Uri; they said hybrid would not be required, but didn’t go so far as to attack it.

Compared with strictly-PQ and non-PQ approaches, hybrid cryptography is the long-term bet that will protect the most users from cryptanalytic advancements.

Why The Hybrid Controversy Remains Unsettled

Even if we can all agree that hybrid is the way to go, there’s still significant disagreement on exactly how to do it.

Hybrid KEMs

There are two schools of thought on hybrid Key Encapsulation Mechanisms (KEMs):

  1. Wrap the post-quantum KEM in the encrypted channel created by the classical KEM.
  2. Use both the post-quantum KEM and classical KEM as inputs to a secure KDF, then use a single encrypted channel secured by both.

The first option (layered) has the benefit of making migrations smoother. You can begin with pre-quantum cryptography (e.g., ECDHE for TLS ciphersuites), which is what most systems online support today. Then you can do your post-quantum cryptography inside the existing channel to create a post-quantum-secure channel. This also lends itself to opportunistic upgrades (which might not be a good idea).

The second option (composite) has the benefit of making the security of your protocol all-or-nothing: you cannot attack the weak part now and the strong part later, because an attacker must break both algorithms to recover the session keys and access the plaintext. Additionally, you only need a single layer; the complexity lives entirely within the handshake, rather than in every packet.
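
To make the composite approach concrete, here's a minimal sketch in Python. It is only an illustration under stated assumptions: pq_kem is a hypothetical post-quantum KEM binding with an encapsulate() function, while the classical half uses X25519 from the "cryptography" package and HKDF folds both shared secrets into one session key.

# Composite hybrid KEM sketch (illustrative, not a standard).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

import pq_kem  # hypothetical post-quantum KEM binding

def composite_encapsulate(classical_pk, pq_pk):
    # Classical half: ephemeral X25519 key agreement.
    eph = X25519PrivateKey.generate()
    classical_ss = eph.exchange(classical_pk)

    # Post-quantum half: KEM encapsulation against the recipient's PQ key.
    pq_ct, pq_ss = pq_kem.encapsulate(pq_pk)

    # Both shared secrets feed a single KDF, so recovering the session
    # key requires breaking BOTH algorithms.
    session_key = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"example-composite-hybrid-kem",
    ).derive(classical_ss + pq_ss)

    return (eph.public_key(), pq_ct), session_key

The recipient would perform the mirror-image operation (X25519 exchange plus PQ decapsulation) and run the same KDF over the concatenated secrets to arrive at the same session key.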

Personally, I think composite is a better option for security than layered.

Hybrid Signatures

There are, additionally, two different schools of thought on hybrid digital signature algorithms. However, the difference is more subtle than with KEMs.

  1. Require separate classical signatures and post-quantum signatures.
  2. Specify a composite mode that combines the two together and treat it as a distinct algorithm.

To better illustrate what this looks like, I outlined what a composite hybrid digital signature algorithm could look like on the CFRG mailing list:

primary_seed := randombytes_buf(64) // store this

ed25519_seed := hash_sha512256(PREFIX_PREQUANTUM || primary_seed)
pq_seed := hash_sha512256(PREFIX_POSTQUANTUM || primary_seed)

ed25519_keypair := crypto_sign_seed_keypair(ed25519_seed)
pq_keypair := pqcrypto_sign_seed_keypair(pq_seed)

Your composite public key would be your Ed25519 public key, followed by your post-quantum public key. Since Ed25519 public keys are always 32 bytes, this is easy to implement securely.

Every composite signature would be an Ed25519 signature concatenated with the post-quantum signature. Since Ed25519 signatures are always 64 bytes, this leads to a predictable signature size relative to the post-quantum signature.
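
As a rough illustration of how those concatenated keys and signatures might be handled, here's a sketch in Python. The pq_sig module is hypothetical (imagine a binding for some post-quantum signature scheme); the Ed25519 half uses the "cryptography" package, and the fixed Ed25519 lengths make splitting unambiguous.

# Composite hybrid signature sketch (illustrative, not a standard).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

import pq_sig  # hypothetical post-quantum signature binding

ED25519_PK_LEN = 32   # Ed25519 public keys are always 32 bytes
ED25519_SIG_LEN = 64  # Ed25519 signatures are always 64 bytes

def composite_sign(ed25519_sk: Ed25519PrivateKey, pq_sk, message: bytes) -> bytes:
    # Fixed-length Ed25519 signature first, post-quantum signature after.
    return ed25519_sk.sign(message) + pq_sig.sign(pq_sk, message)

def composite_verify(composite_pk: bytes, message: bytes, signature: bytes) -> bool:
    ed_pk = Ed25519PublicKey.from_public_bytes(composite_pk[:ED25519_PK_LEN])
    pq_pk = composite_pk[ED25519_PK_LEN:]
    try:
        ed_pk.verify(signature[:ED25519_SIG_LEN], message)
    except InvalidSignature:
        return False
    # The composite signature is only valid if BOTH halves verify.
    return pq_sig.verify(pq_pk, message, signature[ED25519_SIG_LEN:])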

The main motivation for preferring a composite hybrid signature over a detached hybrid signature is to push the hybridization of cryptography lower in the stack so developers don’t have to think about these details. They just select HYBRIDSIG1 or HYBRIDSIG2 in their ciphersuite configuration, and cryptographers get to decide what that means.

TL;DR

Hybrid designs of post-quantum crypto are good, and I think composite hybrid designs make the most sense for both KEMs and signatures.


