
Soatok’s Guide to Side-Channel Attacks

If you’re ever tasked with implementing a cryptography feature–whether a high-level protocol or a low-level primitive–you will have to take special care to ensure you’re not leaking secret information through side-channels.

The descriptions of algorithms you learn in a classroom or textbook are not sufficient for real-world use. (Yes, that means your toy RSA implementation based on GMP from your computer science 101 class isn’t production-ready. Don’t deploy it.)

But what are these elusive side-channels exactly, and how do you prevent them? And in cases where you cannot prevent them, how can you mitigate the risk to your users?

Art by Swizz.

Contents

  • Cryptographic Side-Channels
    • Timing Leaks
    • Power Usage
    • Electromagnetic Emissions
  • Side-Channel Prevention and Mitigation
    • Prevention vs. Mitigation
    • What is Constant-Time?
    • Malicious Environments and Algorithmic Constant-Time
    • Mitigation with Blinding Techniques
  • Design Patterns for Algorithmic Constant-Time Code
    • Constant-Time String Comparison
    • Alternative: “Double HMAC” String Comparison
    • Constant-Time Conditional Select
    • Constant-Time String Inequality Comparison
    • Constant-Time Integer Multiplication
    • Constant-Time Integer Division
    • Constant-Time Modular Inversion
  • Further Reading and Online Resources

Cryptographic Side-Channels

The concept of a side-channel isn’t inherently cryptographic, as Taylor Hornby demonstrates, but a side-channel can be a game over vulnerability in a system meant to maintain confidentiality (even if only for its cryptography keys).

Cryptographic side-channels allow an attacker to learn secret data from your cryptography system. To accomplish this, the attacker doesn’t necessarily study the system’s output (i.e. ciphertext); instead, they observe some other measurement, such as how much time or power was spent performing an operation, or what kind of electromagnetic radiation was emitted.

Important: While being resistant to side-channels is a prerequisite for implementations to be secure, it isn’t in and of itself sufficient for security. The underlying design of the primitives, constructions, and high-level protocols needs to be secure first, and that requires a clear and specific threat model for what you’re building.

Constant-time ECDSA doesn’t help you if you reuse k-values like it’s going out of style, but variable-time ECDSA still leaks your secret key to anyone who cares to probe your response times. Secure cryptography is very demanding.

Art by Riley.

Timing Leaks

Timing side-channels leak secrets through how much time it takes for an operation to complete.

There are many different flavors of timing leakage, including:

  • Fast-failing comparison functions (memcmp() in C)
  • Cache-timing vulnerabilities (e.g. software AES)
  • Conditional branches controlled by secrets

The bad news about timing leaks is that they’re almost always visible to an attacker over the network (including over the Internet (PDF)).

The good news is that most of them can be prevented or mitigated in software.

Art by Kyume.

Power Usage

Different algorithms or processor operations may require different amounts of power.

For example, squaring a large number may take less power than multiplying two different large numbers. This observation has led to the development of power analysis attacks against RSA.

Power analysis is especially relevant for embedded systems and smart cards, which are easier to extract a meaningful signal from than your desktop computer.

Some information leakage through power usage can be prevented through careful engineering (for example: BearSSL, which uses Montgomery multiplication instead of square-and-multiply).

But that’s not always an option, so generally these risks are mitigated.

My reaction when I first learned of power leaks: WATT (Art by Swizz)

Electromagnetic Emissions

Your computer is a reliable source of electromagnetic emissions (such as radio waves). Some of these emissions may reveal information about your cryptographic secrets, especially to an attacker with physical proximity to your device.

The good news is that research into EM emission side-channels isn't as mature as research into timing or power side-channels. The bad news is that, when practical attacks do emerge, mitigating them will generally require hardware (e.g. electromagnetic shielding).

Aren’t computers terrifying? (Art by Swizz)

Side-Channel Prevention and Mitigation

Now that we’ve established a rough sense of the types of side-channels that are possible, we can begin to identify what causes them and aspire to prevent the leaks from happening–and where we can’t, to mitigate the risk to a reasonable level.

Prevention vs. Mitigation

Preventing a side-channel means eliminating the conditions that allow the information leak to occur in the first place. For timing leaks, this means making all algorithms constant-time.

There are entire classes of side-channel leaks that aren't possible or practical to prevent in software. When you encounter one, the best you can hope to do is mitigate the risk.

Ideally, you want to make the attack more expensive to pull off than the reward an attacker will gain from it.

What is Constant-Time?

Toto, I don’t think we’re in Tanelorn Kansas anymore.

When an implementation is said to be constant-time, what we mean is that the execution time of the code is not a function of its secret inputs.

Vulnerable AES implementations use table look-ups to implement the S-Box, and those look-ups leak information through the CPU cache. Constant-time AES is either implemented in hardware, or is bitsliced in software.
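To make that concrete, here's a contrived PHP sketch (not from any library) contrasting a secret-dependent branch with a branchless equivalent; the same masking idea comes back in the conditional select pattern below:

<?php
// Variable-time: which instructions run depends on the secret bit.
function leaky_select(int $secretBit, int $a, int $b): int
{
    if ($secretBit === 1) {
        return $a;
    }
    return $b;
}

// Algorithmically constant-time: the same instructions run either way.
function steady_select(int $secretBit, int $a, int $b): int
{
    $mask = -$secretBit;               // all ones if 1, all zeroes if 0
    return ($a & $mask) | ($b & ~$mask);
}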

Malicious Environments and Algorithmic Constant-Time

One of the greatest challenges with writing constant-time code is distinguishing between algorithmic constant-time and provably constant-time. The main difference between the two is that you cannot trust your compiler (especially a JIT compiler), which may attempt to optimize your code in a way that reintroduces the side-channel you aspired to remove.

A sufficiently advanced compiler optimization is indistinguishable from an adversary.

John Regehr, possibly with apologies to Arthur C. Clarke

For compiled languages, this is a tractable but expensive problem to solve: You simply have to formally verify everything from the source code to the compiler to the silicon chips that the code will be deployed on, and then audit your supply chain to prevent malicious tampering from going undetected.

For interpreted languages (e.g. PHP and JavaScript), this formal verification strategy isn’t really an option, unless you want to formally verify the runtime that interprets scripts and prove that the operations remain constant-time on top of all the other layers of distrust.

Is this level of paranoia really worth the effort?

For our cases, anyway! (Art by Khia.)

For that reason, we’re going to assume that algorithmic constant-time is adequate for the duration of this blog post.

If your threat model prevents you from accepting this assumption, feel free to put in the extra effort yourself and tell me how it goes. After all, as a furry who writes blog posts in my spare time for fun, I don’t exactly have the budget for massive research projects in formal verification.

Mitigation with Blinding Techniques

The best mitigation for some side-channels is called blinding: Obfuscating the inputs with some random data, then deobfuscating the outputs with the same random data, such that your keys are not revealed.

Two well-known examples include RSA decryption and ECDSA signing. I’ll focus on the latter, since it’s not as widely covered in the literature (although several cryptographers I’ve talked with were somehow knowledgeable about it; I suspect gatekeeping is involved).

Blinded ECDSA Signing

In typical ECDSA implementations, you will convert a point on a Weierstrass curve (an affine point (x, y)) to a Jacobian coordinate system (X, Y, Z).

The exact conversion formula is (X, Y, Z) = (x·Z², y·Z³, Z). The conversion almost makes intuitive sense.

Where does Z come from though?

Art by circuitslime

It turns out, the choice for Z is totally arbitrary. Libraries typically set it equal to 1 (for best performance), but you can also set it to a random number. (You cannot set it to 0, however, for obvious reasons.)

Choosing a random Z means the calculations performed over Jacobian coordinates will be obscured by a randomly chosen factor (and thus, if Z is only used once per signing, the bitwise signal the attackers rely on will be lost).
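As a rough illustration, here's what that randomized conversion might look like with PHP's GMP extension (a sketch; the function name and parameters are mine, and $z is assumed to be a fresh random nonzero field element from a CSPRNG, used once per signature):

<?php
function to_blinded_jacobian(GMP $x, GMP $y, GMP $z, GMP $p): array
{
    $z2 = gmp_mod(gmp_mul($z, $z), $p);            // Z^2 mod p
    $z3 = gmp_mod(gmp_mul($z2, $z), $p);           // Z^3 mod p
    return [
        gmp_mod(gmp_mul($x, $z2), $p),             // X = x * Z^2 mod p
        gmp_mod(gmp_mul($y, $z3), $p),             // Y = y * Z^3 mod p
        $z,                                        // Z
    ];
}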

Blinding techniques are cool. (Art by Khia.)

I think it’s really cool how one small tweak to the runtime of an algorithm can make it significantly harder to attack.

Design Patterns for Algorithmic Constant-Time Code

Mitigation techniques are cool, but preventing side-channels is a better value-add for most software.

To that end, let’s look at some design patterns for constant-time software. Some of these are relatively common; others, not so much.

Art by Scout Pawfoot.

Constant-Time String Comparison

Rather than using string comparison (== in most programming languages, memcmp() in C), you want to compare cryptographic secrets and/or calculated integrity checks with a secure compare algorithm, which looks like this:

  1. Initialize a variable (let’s call it D) to zero.
  2. For each byte of the two strings:
    1. Calculate (left[i] XOR right[i])
    2. Bitwise OR the current value of D with the result of the XOR, store the output in D
  3. When the loop has concluded, D will be equal to 0 if and only if the two strings are equal.

In code form, it looks like this:
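(A sketch of the loop above; ct_equals() is just an illustrative name.)

<?php
function ct_equals(string $left, string $right): bool
{
    $length = mb_strlen($left, '8bit');
    if (mb_strlen($right, '8bit') !== $length) {
        return false;
    }
    $d = 0;
    for ($i = 0; $i < $length; ++$i) {
        // Accumulate every byte difference into $d; never exit the loop early
        $d |= unpack('C', $left[$i])[1] ^ unpack('C', $right[$i])[1];
    }
    return $d === 0;
}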

In this example, I’m using PHP’s unpack() function to avoid cache-timing leaks with ord() and chr(). Of course, you can simply use hash_equals() instead of writing it yourself (PHP 5.6.0+).

Alternative: “Double HMAC” String Comparison

If the previous algorithm won’t work (i.e. because you’re concerned your JIT compiler will optimize it away), there is a popular alternative to consider. It’s called “Double HMAC” because it was traditionally used with Encrypt-Then-HMAC schemes.

The algorithm looks like this:

  1. Generate a random 256-bit key, K. (This can be cached between invocations, but it should be unpredictable.)
  2. Calculate HMAC-SHA256(K, left).
  3. Calculate HMAC-SHA256(K, right).
  4. Return true if the outputs of step 2 and 3 are equal.

This is provably secure, so long as HMAC-SHA256 is a secure pseudo-random function and the key K is unknown to the attacker.

In code form, the Double HMAC compare function looks like this:
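(A sketch; hmac_compare() is an illustrative name, and $key is assumed to be 32 unpredictable bytes, e.g. from random_bytes(32), reused between invocations.)

<?php
function hmac_compare(string $left, string $right, string $key): bool
{
    // HMAC both inputs under the same secret key...
    $leftMac  = hash_hmac('sha256', $left, $key, true);
    $rightMac = hash_hmac('sha256', $right, $key, true);
    // ...then compare the MACs. Even a leaky comparison here reveals nothing
    // useful about $left or $right, since the attacker doesn't know $key.
    return $leftMac === $rightMac;
}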

Constant-Time Conditional Select

I like to imagine a conversation between a cryptography engineer and a Zen Buddhist, that unfolds like so:

  • CE: “I want to eliminate branching side-channels from my code.”
  • ZB: “Then do not have branches in your code.”

And that is precisely what we intend to do with a constant-time conditional select: Eliminate branches by conditionally returning between one of two strings, without an IF statement.

Mind. Blown. (Art by Khia.)

This isn’t as tricky as it sounds. We’re going to use XOR and two’s complement to achieve this.

The algorithm looks like this:

  1. Convert the selection bit (TRUE/FALSE) into a mask value (-1 for TRUE, 0 for FALSE). Bitwise, -1 looks like 111111111…1111111111, while 0 looks like 00000000…00000000.
  2. Copy the right string into a buffer, call it tmp.
  3. Calculate left XOR right, call it x.
  4. Return (tmp XOR (x AND mask)).

Once again, in code this algorithm looks like this:
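(A sketch; ct_select() and its argument order are illustrative, chosen to read like a ternary: condition, then left, then right.)

<?php
function ct_select(bool $returnLeft, string $left, string $right): string
{
    $length = mb_strlen($left, '8bit');
    if (mb_strlen($right, '8bit') !== $length) {
        throw new InvalidArgumentException('Both strings must be the same length');
    }
    // Step 1: expand the selection bit into a byte mask (0xff for TRUE, 0x00 for FALSE)
    $mask = str_repeat(chr((0 - (int) $returnLeft) & 0xff), $length);
    // Steps 2 and 3: the "tmp" buffer is $right, x = left XOR right
    $x = $left ^ $right;
    // Step 4: right XOR (x AND mask) yields $left when the mask is all ones, $right otherwise
    return $right ^ ($x & $mask);
}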

You can test this code for yourself here. The function was designed to read intuitively like a ternary operator.

Constant-Time String Inequality Comparison

Sometimes you don’t just need to know if two strings are equal, you also need to know which one is larger than the other.

To accomplish this in constant-time, we need to maintain two state variables:

  1. gt (initialized to 0, will be set to 1 at some point if left > right)
  2. eq (initialized to 1, will be set to 0 at some point if left != right)

Endian-ness will dictate the direction our algorithm goes, but we’re going to perform two operations in each cycle:

  1. gt should be bitwise ORed with (eq AND ((right[i] – left[i]) right shifted 8 places))
  2. eq should be bitwise ANDed with (((right[i] XOR left[i]) – 1) right shifted 8 places)

If right and left are ever different, eq will be set to 0.

If the first time they’re different the value for lefti is greater than the value for righti, then the subtraction will produce a negative number. Right shifting a negative number 8 places then bitwise ANDing the result with eq (which is only 1 until two bytes differ, and then 0 henceforth if they do) will result in a value for 1 with gt. Thus, if (righti – lefti) is negative, gt will be set to 1. Otherwise, it remains 0.

At the end of this loop, return (gt + gt + eq) – 1. This will result in the following possible values:

  • left < right: -1
  • left == right: 0
  • left > right: 1

The arithmetic based on the possible values of gt and eq should be straightforward.

  • Different (eq == 0) but not greater (gt == 0) means left < right, -1.
  • Different (eq == 0) and greater (gt == 1) means left > right, 1.
  • If eq == 1, no bytes ever differed, so left == right, 0.

A little endian implementation is as follows:

<?php
function ct_compare(string $left, string $right): int
{
    $gt = 0;
    $eq = 1;
    $i = min(mb_strlen($left, '8bit'), mb_strlen($right, '8bit'));
    while ($i > 0) {
        --$i;
        $leftCharCode = unpack('C', $left[$i])[1];
        $rightCharCode = unpack('C', $right[$i])[1];
        // gt picks up a 1 if right[i] - left[i] goes negative while eq is still 1
        $gt |= (($rightCharCode - $leftCharCode) >> 8) & $eq;
        // eq drops to 0 (and stays there) once any bytes differ
        $eq &= (($rightCharCode ^ $leftCharCode) - 1) >> 8;
    }
    return ($gt + $gt + $eq) - 1;
}

Demo for this function is available here.

Constant-Time Integer Multiplication

Multiplying two integers is one of those arithmetic operations that should be constant-time. But on many older processors, it isn’t.

Of course there’s a microarchitecture timing leak! (Art by Khia.)

Fortunately, there is a workaround. It involves an algorithm called Ancient Egyptian Multiplication in some places or Peasant Multiplication in others.

Multiplying two numbers a and b this way looks like this:

  1. Determine the number of operations you need to perform. Generally, this is either known ahead of time or the bit length of the operands.
  2. Set out to 0.
  3. Until the operation count reaches zero:
    1. If the lowest bit of b is set, add a to out.
    2. Left shift a by 1.
    3. Right shift b by 1.
  4. Return out.

The main caveat here is that you want to use bitwise operators in step 3.1 to remove the conditional branch.

Rather than bundle example code in our blog post, please refer to the implementation in sodium_compat (a pure PHP polyfill for libsodium).
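If you just want to see the shape of that branchless loop, here is a minimal sketch (illustrative only, not sodium_compat's actual code; it assumes the product fits in PHP's native integer):

<?php
function ct_multiply(int $a, int $b, int $bits = 32): int
{
    $out = 0;
    for ($i = 0; $i < $bits; ++$i) {
        // -($b & 1) is all ones when the lowest bit of $b is set, zero otherwise,
        // so the "conditional add" happens without a secret-dependent branch.
        $out += $a & -($b & 1);
        $a <<= 1;
        $b >>= 1;
    }
    return $out;
}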

For big number libraries, implementing Karatsuba on top of this integer multiplying function should be faster than attempting to multiply bignums this way.

Constant-Time Integer Division

Although some cryptography algorithms call for integer division, division isn’t usually expected to be constant-time.

However, if you look up a division algorithm for unsigned integers with a remainder, you’ll likely encounter this algorithm, which is almost constant-time:

if D = 0 then error(DivisionByZeroException) end
Q := 0                  -- Initialize quotient and remainder to zero
R := 0
for i := n − 1 .. 0 do  -- Where n is number of bits in N
  R := R << 1           -- Left-shift R by 1 bit
  R(0) := N(i)          -- Set the least-significant bit of R equal to bit i of the numerator
  if R >= D then
    R := R − D
    Q(i) := 1
  end
end

If we use the tricks we learned from implementing constant-time string inequality with constant-time conditional selection, we can implement this algorithm without timing leaks.

Our constant-time version of this algorithm looks like this:

if D = 0 then error(DivisionByZeroException) end
Q := 0                  -- Initialize quotient and remainder to zero
R := 0
for i := n − 1 .. 0 do  -- Where n is number of bits in N
  R := R << 1           -- Left-shift R by 1 bit
  R(0) := N(i)          -- Set the least-significant bit of R equal to bit i of the numerator

  -- if R >  D then compared ==  1, swap = 1
  -- if R == D then compared ==  0, swap = 1
  -- if R <  D then compared == -1, swap = 0
  compared := ct_compare(R, D)
  swap := (1 - ((compared >> 31) & 1))

  -- R' = R - D
  -- Q' = Q, Q[i] = 1
  Rprime := R - D
  Qprime := Q
  Qprime(i) := 1 -- The i'th bit is set to 1

  -- Replace (R with R', Q with Q') if swap == 1
  R = ct_select(swap, Rprime, R)
  Q = ct_select(swap, Qprime, Q)
end

It’s approximately twice as slow as the original, but it’s constant-time.

(Art by Khia.)

Constant-Time Modular Inversion

Modular inversion is the calculation of x^(-1) mod p for some prime p. This is used in a lot of places, but especially in elliptic curve cryptography and RSA.

Daniel J. Bernstein and Bo-Yin Yang published a paper on fast constant-time GCD and Modular Inversion in 2019. The algorithm in question is somewhat straightforward to implement (although determining whether or not that implementation is safe is left as an exercise to the rest of us).

A simpler technique is to use Fermat's Little Theorem: x^p ≡ x (mod p) for prime p, which gives x^(p-2) ≡ x^(-1) (mod p). This only works with prime fields, and is slower than a Binary GCD (which isn't necessarily constant-time, as OpenSSL discovered).
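For illustration only, here's what the Fermat approach looks like with PHP's GMP extension (a sketch; note that gmp_powm() makes no constant-time guarantees, so this demonstrates the arithmetic rather than a side-channel-free inverse):

<?php
function fermat_inverse(GMP $x, GMP $p): GMP
{
    // x^(p-2) mod p equals x^(-1) mod p when p is prime and x is nonzero
    return gmp_powm($x, gmp_sub($p, 2), $p);
}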

BearSSL provides an implementation (and accompanying documentation) for a constant-time modular inversion algorithm based on Binary GCD.

(In the future, I may update this section of this blog post with an implementation in PHP, using the GMP extension.)

Further Reading and Online Resources

If you’re at all interested in cryptographic side-channels, your hunger for knowledge probably won’t be sated by a single blog post. Here’s a collection of articles, papers, books, etc. worth reading.

  • BearSSL’s Documentation on Constant-Time Code — A must-read for anyone interested in this topic
  • Cryptographically Secure PHP Development — How to write secure cryptography in languages that cryptographers largely neglect
  • Meltdown and Spectre — Two vulnerabilities that placed side-channels in the scope of most of infosec that isn’t interested in cryptography
  • Serious Cryptography — For anyone who lacks the background knowledge to fully understand what I’m talking about on this page

I hope you find this guide to side-channels helpful.

Thanks for reading!

Follow my blog for more Defense Against the Bark Arts posts in the future.


