Saturday, June 13, 2020

CryptoSEED Bug 2020(tm) - Patches chip flaw that could leak your cryptographic secrets : RND Security 2020-06-12

Core RNDSEED Security MEGA BOMB Hits all SSL Certificates #SecurityNEWS #RS

RND security possibly compromised


"Notably, memory addresses that have been accessed recently typically
end up cached inside the chip, to speed up access in case they’re
needed again soon, because that improves performance a lot. Therefore
the speed with which memory locations can be accessed generally gives
away information about how recently they were peeked at – and thus
what memory address values were used – even if that “peeking” was
speculative and was retrospectively cancelled internally for security
reasons."

The flaws come down to the cache:

So what is the theory of the cure? Clear the cache; as per my Meltdown
fix, XOR is a potential solution.
However, crypto keys frequently seed from cache data, so clearing it
leaves a massive security flaw in the entropy pool.
So what do we do?
Potentially we AES-hash a pool of data hashes - hashing a key with a re-sort.
In short, like this: X = the cache buffer, 16 KB to 64 KB in length;
fragment it & sort by Factor Xn.

Factor Xn is RAM fullness (a 32-bit integer) * %CPU usage, expressed as a fraction of 100.

The hash is then plausibly meaningless but viable for RND random use.

Factor Xn is then fed into a roll loop to factor the cryptographic key RND.
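
A minimal sketch of that idea in C follows. It is illustrative only: the 256-byte fragment size, the FNV-1a hash standing in for the AES/SHA hashing step, and the linear-congruential "roll loop" are my own assumptions, not a fixed design.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define FRAG_SIZE 256   /* assumed fragment size, not specified in the post */

/* Factor Xn: RAM fullness (32-bit integer) * %CPU usage as a fraction of 100. */
static uint32_t factor_xn(uint32_t ram_fullness, double cpu_usage_percent)
{
    return (uint32_t)(ram_fullness * (cpu_usage_percent / 100.0));
}

/* FNV-1a stands in here for the AES/SHA hashing of the pooled data. */
static uint64_t fnv1a(const uint8_t *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

typedef struct { const uint8_t *ptr; uint32_t key; } frag_t;

static int frag_cmp(const void *a, const void *b)
{
    const frag_t *fa = a, *fb = b;
    return (fa->key > fb->key) - (fa->key < fb->key);
}

/* Fragment the buffer, key each fragment with a "roll" of Xn, sort by that
 * key, then hash the fragments in their new order into one seed value.    */
static uint64_t seed_from_buffer(const uint8_t *buf, size_t len, uint32_t xn)
{
    size_t nfrag = len / FRAG_SIZE;
    if (nfrag == 0)
        return 0;

    frag_t *frags = malloc(nfrag * sizeof *frags);
    if (frags == NULL)
        return 0;

    uint32_t roll = xn;
    for (size_t i = 0; i < nfrag; i++) {
        roll = roll * 1664525u + 1013904223u;  /* the "roll loop": an LCG, my assumption */
        frags[i].ptr = buf + i * FRAG_SIZE;
        frags[i].key = roll;
    }
    qsort(frags, nfrag, sizeof *frags, frag_cmp);

    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < nfrag; i++) {
        h ^= fnv1a(frags[i].ptr, FRAG_SIZE);
        h *= 0x100000001b3ULL;
    }
    free(frags);
    return h;
}

int main(void)
{
    /* Hypothetical inputs: in practice the buffer would be cache-resident data
     * and the two Xn inputs would come from /proc/meminfo and /proc/stat.     */
    static uint8_t buffer[16 * 1024];
    for (size_t i = 0; i < sizeof buffer; i++)
        buffer[i] = (uint8_t)(i * 7 + 13);

    uint32_t xn = factor_xn(0x8f3a21c4u, 37.5);
    printf("seed = %016llx\n",
           (unsigned long long)seed_from_buffer(buffer, sizeof buffer, xn));
    return 0;
}
```

The point is simply that the fragment order, and therefore the final hash, depends on Factor Xn as well as on the buffer contents.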

Meaning that the cache-clear fix can potentially reveal MASSIVE
random/entropic pools.

(Free for Linux kernels & RND seed generators based upon Ubuntu code. The Ubuntu code can be supported through donations of support & development, money or service, and specifically does not have to be GPL, but Ubuntu is charitable : RS)

(c) Rupert Summerskill



https://science.n-helix.com/2024/03/gofetch.html

https://science.n-helix.com/2020/06/cryptoseed.html

RDRAND & RDSEED & EGETKEY
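
These are the instructions whose freshly generated values can leak. For reference, calling the first two from user code looks roughly like this - a minimal C sketch using the GCC/Clang intrinsics, built with -mrdrnd -mrdseed on a CPU that supports both; the retry loops are my own convention, since either instruction can transiently fail. EGETKEY is only usable from inside an SGX enclave, so it is not shown.

```c
#include <stdint.h>
#include <stdio.h>
#include <immintrin.h>

/* RDRAND: fast output from the on-chip DRBG (reseeded from hardware entropy). */
static int get_rdrand64(uint64_t *out)
{
    unsigned long long v;
    for (int i = 0; i < 10; i++) {          /* the instruction may transiently fail */
        if (_rdrand64_step(&v)) {
            *out = v;
            return 1;
        }
    }
    return 0;
}

/* RDSEED: slower, meant to seed a software random number generator. */
static int get_rdseed64(uint64_t *out)
{
    unsigned long long v;
    for (int i = 0; i < 100; i++) {
        if (_rdseed64_step(&v)) {
            *out = v;
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    uint64_t r, s;
    if (get_rdrand64(&r))
        printf("RDRAND: %016llx\n", (unsigned long long)r);
    if (get_rdseed64(&s))
        printf("RDSEED: %016llx\n", (unsigned long long)s);
    return 0;
}
```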



Intel patches chip flaw that could leak your cryptographic secrets
12 JUN 2020


"Vulnerability

Previous: Facebook paid for a 0-day to help FBI unmask child predator
by Paul Ducklin
This week, Intel patched a CPU security bug that hasn’t attracted a
funky name, even though the bug itself is admittedly pretty funky.

Known as CVE-2020-0543 for short, or Special Register Buffer Data
Sampling in its full title, it serves as one more reminder that as we
expect processor makers to produce ever-faster chips that can churn
through ever more code and data in ever less time…

…we sometimes pay a cybersecurity price, at least in theoretical terms.

If you’re a regular Naked Security reader, you’re probably familiar
with the term speculative execution, which refers to the fact that
modern CPUs often race ahead of themselves by performing internal
calculations, or partial calculations, that might nevertheless turn
out to be redundant.

The idea isn’t as weird as it sounds because modern chips typically
break down operations, that look to the programmer like one machine
code instruction, into numerous subinstructions, and they can work on
many of these so-called microarchitectural operations on multiple CPU
cores at the same time.

If, for example, your program is reading through an array of data to
perform a complex calculation based on all the values in it, the
processor needs to make sure that you don’t read past the end of your
memory buffer, because that could allow someone else’s private data to
leak into your computation.

In theory, the CPU should freeze your program every time you peek at
the next byte in the array, perform a security check that you are
authorised to see it, and only then allow your program to proceed.
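
Concretely, the kind of bounds check being described looks something like this in C (an illustrative fragment of mine, not from the article); the point is that speculative execution may perform the load before the check has actually been resolved:

```c
#include <stdint.h>
#include <stdio.h>

/* Bounds-checked array read: the "security check" that speculation races ahead of. */
static uint8_t read_element(const uint8_t *buf, size_t buf_len, size_t i)
{
    if (i < buf_len)      /* the check that keeps the read inside our own buffer  */
        return buf[i];    /* speculatively, this load may be issued before the    */
                          /* check resolves, then be cancelled if the check fails */
    return 0;
}

int main(void)
{
    uint8_t data[4] = { 10, 20, 30, 40 };
    printf("%u\n", read_element(data, 4, 2));   /* in bounds     -> 30 */
    printf("%u\n", read_element(data, 4, 9));   /* out of bounds -> 0  */
    return 0;
}
```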

But every time there’s a delay in finishing the security check, all
the microarchitectural calculation units that your program would
otherwise have been using to keep the computation flying along would
be sitting idle – even though the outcome of their calculations would
not be visible outside the chip core.

Speculative execution says, amongst other things, “Let’s allow
internal calculations to carry on ahead of the security checks, on the
grounds that if the checks ultimately pass, we’re ahead in the race
and can release the final output quickly.”

The theory is that if the checks fail, the chip can just discard the
internal data that it now knows is tainted by insecurity, so there’s a
possible performance boost without a security risk given that the
security checks will ultimately prevent secret data being disclosed
anyway.

The vast majority of code that churns through arrays doesn’t read off
the end of its allotted memory, so the typical performance boost is
huge, and there doesn’t seem to be a downside.

Except for the inconvenient fact that the tainted data sometimes
leaves behind ghostly echoes of its presence that are detectable
outside the chip, even though the data itself was never officially
emitted as the output of a machine code instruction.

Notably, memory addresses that have been accessed recently typically
end up cached inside the chip, to speed up access in case they’re
needed again soon, because that improves performance a lot. Therefore
the speed with which memory locations can be accessed generally gives
away information about how recently they were peeked at – and thus
what memory address values were used – even if that “peeking” was
speculative and was retrospectively cancelled internally for security
reasons.
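
As a rough illustration of that timing difference, a sketch like the one below (my own example, assuming an x86-64 CPU and the GCC/Clang intrinsics) typically shows a cached access completing in far fewer cycles than one whose cache line has just been flushed:

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

/* Time a single load of *p in cycles using the TSC. */
static uint64_t time_access(volatile uint8_t *p)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                              /* the memory access being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    static uint8_t probe[64];

    probe[0] = 1;                          /* touch the line: it is now cached      */
    printf("cached:  %llu cycles\n", (unsigned long long)time_access(probe));

    _mm_clflush(probe);                    /* evict the line from every cache level */
    _mm_mfence();                          /* make sure the flush has completed     */
    printf("flushed: %llu cycles\n", (unsigned long long)time_access(probe));
    return 0;
}
```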

Discernible traces
Unfortunately, any security shortcuts taken inside the core of the
chip may inadvertently leave discernible traces that could allow
untrusted software to make later inferences about some of that data.

Even if all an attacker can do is guess, say, that the first and last
bits of your secret decryption key must be zero, or that the very last
cell in your spreadsheet has a value that is larger than 32,767 but
smaller than 1,048,576, there’s still a serious security risk there.

That risk is often compounded in cases like this because attackers may
be able to refine those guesses by making millions or billions of
inferences and greatly improving their reckoning over time.

Imagine, for instance, that your decryption key is rotated leftwards
by one bit every so often, and that the attacker gets to “re-infer”
the value of its first and last bits every time that rotation happens.

Given enough time and a sufficiently accurate series of inferences,
the attackers may gradually figure out more and more about your secret
key until they are well-placed enough to guess it successfully.

(If you recover 16 bits of a decryption key that was supposed to
withstand 10 years of concerted cracking, you can probably break it
2^16 or 65,536 times faster than before, which means you now only need
a few hours.)
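
Back-of-the-envelope, using the 10-year figure above (my own arithmetic, not the article's):

```c
#include <stdio.h>

int main(void)
{
    double ten_years_in_hours = 10.0 * 365.0 * 24.0;       /* roughly 87,600 hours     */
    double speedup = 65536.0;                               /* 2^16, from 16 known bits */
    printf("%.1f hours\n", ten_years_in_hours / speedup);   /* prints about 1.3         */
    return 0;
}
```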

What about CVE-2020-0543?
In the case of the Special Register Buffer Data Sampling bug, or
CVE-2020-0543, the internal data that might accidentally leak out –
or, more precisely, be coaxed out – of the processor chip includes
recent output values from the following machine code instructions:

RDRAND. This instruction code is short for ReaD secure hardware RANDom
number. Ironically, RDRAND was designed to produce top-quality
hardware random numbers, based on the physics of electronic thermal
noise, which is generally regarded as impossible to model
realistically. This makes it a more trusted source of random data than
software-derived sources such as keystroke and mouse timing (which
doesn’t exist on servers), network latency (which depends on software
that itself follows pre-programmed patterns), and so on. If another
program running on the same CPU as yours can figure out or guess some
of the random numbers you’ve knitted into your recent cryptographic
calculations, they might get a handy head start at cracking your keys.
RDSEED. This is short for ReaD random number SEED, an instruction that
operates more slowly and relies on more thermal noise than RDRAND.
It’s designed for cases where you want to use a software random number
generator but would like to initialise it with what’s known as a
“seed” to kickstart its randomness or entropy. An attacker who knows
your software random generator seed could reconstruct the entire
sequence, which might enable or at least greatly assist future
cryptographic cracking.
EGETKEY. This stands for Enclave GET encryption KEY. Enclave means
it’s part of Intel’s much vaunted SGX set of instructions which are
supposed to provide a sealed-off block of memory that even the
operating system kernel can’t look inside. This means an SGX enclave
acts as a sort of tamper-proof security module like the specialised
chips used in smart cards or mobile phones for storing lock codes and
other secrets. In theory, only software that is already running in the
enclave can read data stored inside it, and can’t write that data
outside it, so that encryption keys generated inside the enclave can’t
escape – neither by accident nor by design. An attacker who could make
inferences about random cryptographic keys inside an enclave of yours
could end up with access to secret data that even you aren’t supposed
to be able to read out!
How bad is this?
The good news is that guessing someone else’s most recent RDRAND
values doesn’t automatically and instantly give you the power to
decrypt all their files and network traffic.

The bad news, as Intel itself admits:

RDRAND and RDSEED may be used in methods that rely on the data
returned being kept secret from potentially malicious actors on other
physical cores. For example, random numbers from RDRAND or RDSEED may
be used as the basis for a session encryption key. If these values are
leaked, an adversary potentially may be able to derive the encryption
key.

And researchers at the Vrije Universiteit Amsterdam and ETH Zurich
have published a paper called CROSSTALK: Speculative data leaks across
cores are real (they did come up with a funky name!) which explains
how the CVE-2020-0543 flaw could be exploited, concluding that:

The cryptographically-secure RDRAND and RDSEED instructions turn out
to leak their output to attackers […] on many Intel CPUs, and we have
demonstrated that this is a realistic attack. We have also seen that
[…] it is almost trivial to apply these attacks to break code running
in Intel’s secure SGX enclaves.

What to do?
Intel has released a series of microcode updates for affected chips
that dial back speed in favour of security to mitigate these
“CROSSTALK” attacks.

Simply put, secret data generated inside the chip as part of the
random generator circuitry will be aggressively purged after use so it
doesn’t leave behind those “ghostly echoes” that might be detected
thanks to speculative execution.

Also, access to the random data generated for RDRAND and RDSEED (and
consumed by EGETKEY) will be more strictly regulated so that the
random numbers generated for multiple programs running in parallel
will only be made available in the order that those programs made
their requests.

That may reduce performance slightly – every program wanting RDRAND
numbers will have to wait its turn instead of going in parallel – but
ensures that the internal “secret data” used to generate process X’s
random numbers will have been purged from the chip before process X+1
gets a look in.

Where to get your microcode updates depends on your computer and your
operating system.

Linux distros will typically bundle and distribute the fixes as part
of a kernel update (mine turned up yesterday, for example); for other
operating systems you may need to download a BIOS update from the
vendor of your computer or its motherboard – so please consult your
computer maker for advice.

(Intel says that, “in general, Intel Core-family […] and Intel Xeon E3
processors […] may be affected”, and has published a list of at-risk
processor chips if you happen to know which chip is in your computer.)"
