Latest SP 800-90B tests

BLUF:-

Still

Useless stamp.

so don’t use 90B’s ea_non_iid at all. De-correlate instead.

This page's analysis of ea_non_iid pertains to v1.0 of the NIST SP 800-90B suite released 21 May 2019, located here at GitHub. We are mainly looking at non-IID entropy measurement of four widely different correlated sample distributions, plus one perfectly uniform IID distribution just for fun. The previous analyses were carried out on a much earlier codebase. The suite seems to have improved a little, but it still remains pretty much useless.

Perfect IID uniform distribution

Why? Because we immediately see a problem with the non-IID test. Assume a perfectly random and IID 1 MB file created as dd if=/dev/urandom of=/tmp/urandom bs=1000 count=1000. Theoretically $H_{\infty} = 8$ bits/byte. From ent we measure the normalised autocorrelation ($R$) to be -0.002010. As can be seen from the ea_iid test output below, NIST considers the file to be independent and identically distributed. Yet if $R=-0.002010$ and not zero, we are justified in running ea_non_iid and expect $H_{\infty} \approx 8$ bits/byte. The anecdotal normalised threshold for an IID determination is at $R \approx 10^{-3}$.
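
As an aside, the normalised autocorrelation figure can be cross-checked without ent. A minimal sketch (a plain lag-1 serial correlation estimate over the /tmp/urandom file created above; it will not match ent's value to every decimal place):-

# Minimal sketch: normalised lag-1 autocorrelation of a byte file.
with open('/tmp/urandom', 'rb') as f:
    data = f.read()

mean = sum(data) / len(data)
num = sum((data[i] - mean) * (data[i + 1] - mean) for i in range(len(data) - 1))
den = sum((b - mean) ** 2 for b in data)
print('Normalised lag-1 autocorrelation R =', num / den)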

Yet from ea_non_iid below we have $H_{\infty} = 7.048241$ bits/byte. Nearly an entire bit of entropy has disappeared when the non-IID test is run: a 12% entropy loss simply from using an alternative set of algorithms. Entropy and autocorrelation are continuously varying properties of a sample, and we expect the measurement of a sample's min.entropy to decrease continuously and smoothly as autocorrelation increases from the ideal $R=0$. There should not be a discontinuity in $H_{\infty}$ between the ea_iid and ea_non_iid programs, as charted below:-

NIST IID, non-IID and ent test output over the perfectly random file:-

Read about nuances of ent entropy reporting.

Discontinuity in entropy measurement between ea_iid and ea_non_iid.

Non-IID Gaussian distribution

A synthetic Gaussian $\big( N(127, 10) \big)$ data file was generated with strong autocorrelation. This is the most common frequency distribution for analogue entropy sources. See its similarity to the Zener distribution of our uber simple Quantum entropy source on a breadboard and our Zenerglass. The Python code creating the simple autoregressive model of order $p=3$ with $\varphi = 1$ is:-

import random

MEAN = 127      # mean of the first sample in each block
STDEV = 10      # standard deviation of every step
VARPHI = 1      # autoregressive coefficient, phi

# Write 1,000,000 bytes as blocks of four samples, where each subsequent
# sample is drawn from a Gaussian centred on the previous sample's value.
with open('/tmp/non_iid_gaussian.bin', 'wb') as f:
    for i in range(250_000):
        value1 = round(random.gauss(MEAN, STDEV))
        f.write(value1.to_bytes(1, byteorder='big'))
        value2 = round(VARPHI * random.gauss(value1, STDEV))
        f.write(value2.to_bytes(1, byteorder='big'))
        value3 = round(VARPHI * random.gauss(value2, STDEV))
        f.write(value3.to_bytes(1, byteorder='big'))
        value4 = round(VARPHI * random.gauss(value3, STDEV))
        f.write(value4.to_bytes(1, byteorder='big'))

Frequency distribution of the non-IID Gaussian data file.

Notice that each subsequent sample is constructed around the value of the previous one. Consequently correlation extends out to a lag of n = 3, with a maximum autocorrelation coefficient of 0.60 at n = 1:-

Autocorrelation of the non-IID Gaussian data file, with a correlation lag of 3.

Pretty correlated then[†].

NIST ea_non_iid test output over the non-IID Gaussian sample file:-

The test gives $H_{\infty}=4.094$ bits/byte. Not unreasonable. But this happens to be $8 \times H_{\text{bitstring}}$, and ironically $H_{\text{bitstring}}$ is calculated via a (weakly implemented) compression test. That test is based on the Maurer Universal Statistic, modified to calculate min.entropy rather than Shannon entropy (ref. 800-90B §6.3.4):-

Compression Test Estimate (bit string) = 0.511761 / 1 bit(s)
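
Explicitly, the byte-wise figure is just this bit-string estimate scaled up to octets:-

$$ H_{\infty} = 8 \times H_{\text{bitstring}} = 8 \times 0.511761 \approx 4.094 \text{ bits/byte} $$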

Comparing $H_{\infty}=4.094$ bits/byte with a Shannon entropy value obtained via strong compression:-

$ ./cmix_v17  -c /tmp/non_iid_gaussian.bin   non_iid_gaussian.cmix
1000000 bytes -> 675105 bytes in 1438.83 s.
cross entropy: 5.401

This seems sane, but we are left wondering about the relationship between min.entropy and Shannon entropy in the general case. We will always be wondering how much further strong compression might go. Even worse, the NIST Compression Test Estimate is based on a weak algorithm that executes within seconds. Based on past experience of myriad empirical distributions, we believe $\frac{H_{Compression}}{H_{\infty}} \ngtr 2$ in most cases. If we hold to compressive entropy measurement and our conservative ratio of two, we arrive at $H_{\infty | Compression} = 2.7005$ bits/byte for this Gaussian non-IID file.
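
That is, simply halving the cmix cross entropy measured above:-

$$ H_{\infty | Compression} = \frac{H_{Compression|cmix}}{2} = \frac{5.401}{2} = 2.7005 \text{ bits/byte} $$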

Non-IID binary distribution

By binary, we mean an entropy source that produces only two possible outputs, as $X_i \in \{\text{"0"}, \text{"1"}\}^{5 \rightarrow 20}$ in text form or $X_i \in \{48, 49\}^{5 \rightarrow 20}$ as the corresponding ASCII byte values, creating a relaxation period of 5.

Such a format is common in thresholded sampling and oversampled sources like our example Zener signal. The output is unbiased but highly correlated, as results from an overzealous $(\epsilon, \tau)$ sampling regime where $\tau$ is too short an interval. It was created like this:-

import random

MIN = 5    # minimum run length
MAX = 20   # maximum run length

# Write alternating runs of "0"s and "1"s, each run having a uniformly
# random length between MIN and MAX characters.
with open('/tmp/non_iid.txt', 'w') as f:
    for i in range(100_000):
        zeros = random.randint(MIN, MAX)
        f.write("0" * zeros)
        ones = random.randint(MIN, MAX)
        f.write("1" * ones)
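
As a sanity check on the construction (a minimal sketch reading back the file written above, not part of the 90B suite), the mean run length should come out near the expectation of 12.5 used below:-

from itertools import groupby

# Read the file back and confirm the run-length statistics.
with open('/tmp/non_iid.txt') as f:
    text = f.read()

runs = [len(list(g)) for _, g in groupby(text)]
print('number of runs:', len(runs))
print('mean run length:', sum(runs) / len(runs))
print('min/max run length:', min(runs), max(runs))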

Frequency distribution of the non-IID binary data file.

Autocorrelation of the non-IID binary file, with lags of 4 to 19.

The NIST ea_non_iid test over the non-IID binary file gives:-

Collision Test Estimate = 0.062150 / 1 bit(s)

which is also 0.062150 bits/character, given that the file stores one character per octet.

~/cmix_v17  -c /tmp/non_iid.txt non_iid_txt.cmix
2399253 bytes -> 103019 bytes in 2128.56 s.
cross entropy: 0.344

Knowing the generating algorithm, and given that the expectation of a uniform distribution $U(a, b)$ is $\frac{a + b}{2}$, we can deduce that the expected run length is $E(X) = \frac{5 + 20}{2} = 12.5$. Is therefore $H_{\infty} = \frac{1}{12.5} = 0.08$ bits/byte, seemingly matching the above 90B result, allowing for some random quantum fluctuations?

Make of this what you will[††]. $\frac{H_{Compression|cmix}}{H_{\infty}} = 5.5$, but we have no other way to verify this.
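
For the record, that ratio is simply:-

$$ \frac{H_{Compression|cmix}}{H_{\infty}} = \frac{0.344}{0.062150} \approx 5.5 $$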

Shakespeare’s Works

Someone recently asked:-

“Can you show a sample that fools NIST SP 800-90B? NIST SP 800-90B has been reported to give ‘massive underestimates’?”

Yes, very much so. We have an intractably correlated ASCII text file of Shakespeare's complete works (~5.4 MB), in the following format:-

  Chor. Two households, both alike in dignity,
    In fair Verona, where we lay our scene,
    From ancient grudge break to new mutiny,
    Where civil blood makes civil hands unclean.
    From forth the fatal loins of these two foes
    A pair of star-cross'd lovers take their life;
    Whose misadventur'd piteous overthrows
    Doth with their death bury their parents' strife.
    The fearful passage of their death-mark'd love,
    And the continuance of their parents' rage,
    Which, but their children's end, naught could remove,
    Is now the two hours' traffic of our stage;
    The which if you with patient ears attend,
    What here shall miss, our toil shall strive to mend...

ASCII frequency distribution of the file of Shakespeare's works.

Autocorrelation of the file of Shakespeare's works.

The NIST ea_non_iid test over Shakespeare's complete works gives:-

min(H_original, 7 X H_bitstring): 0.028690 bits/character.

And now by strong compression (cmix):-

$ ./cmix_v17  -c /tmp/shakespeare.txt   shakey.cmix
5458199 bytes -> 1034003 bytes in 6701.17 s.
cross entropy: 1.516
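
Converting the compressed size to a rate (assuming the same ~66 byte cmix container overhead as in note [†] below) agrees with the reported cross entropy:-

$$ H_{Compression|cmix} = \frac{(1{,}034{,}003 - 66) \times 8}{5{,}458{,}199} \approx 1.52 \text{ bits/character} $$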

And we know from Shannon's 1951 work that general English text has an entropy of approximately 0.6 - 1.3 bits/character. Schürmann and Grassberger put the asymptotic entropy of these works at 1.7 bits/character. The current (world class) cmix compression algorithm estimates the entropy of enwik9 to be 0.93 bits/byte.

We can conclude that cmix and other strong compression algorithms are fairly accurate in determining at least Shannon entropy, given our current state of knowledge. It certainly appears to be of the order of ~1 bit/character. Therefore it is difficult to accept the 90B $H_{\infty} = 0.028690$ bits/character result, given that it would imply $\frac{H_{Compression|cmix}}{H_{\infty}} > 52$! But as highlighted previously, what is its exact relationship to min.entropy?

Photonic Instrument JPEGs

Wacky JPEG frequency distribution from the Photonic Instrument.

NIST ea_non_iid test output over 50 concatenated JPEG frames:-

This latest version still gives $H_{\infty} = 0.023$ bits/byte. Exactly as before. And so $\frac{H_{Compression|paq8}}{H_{\infty}} = \frac{4.94}{0.023} > 214$! Does this make any sense? We think not.

Bugs

Simple code and test problems still persist…

NIST ea_iid test output over NIST's version of pi:-

The igamc: UNDERFLOW error ('incomplete gamma integral', a form of arithmetic underflow) is repeatable and inexcusable. But the more worrying fact is that the test data itself is still incorrect. The binary expansion of $\pi$ behaves as perfectly pseudo-random under every known statistical test, and forms one of the gold standards for testing test suites. Yet NIST's version of $\pi$ is determined to be non-IID. How can we have confidence in a test suite that was validated against dodgy test data?

Furthermore, the kind of spelling mistake below only serves to undermine confidence in NIST's quality control. It may seem trivial at first glance, but 'Permutation' is an important word when developing a permutation test.

Beginning permutation tests... these may take some time
100.00% of Permutuation test rounds,  31.58% of Permutuation tests  

Focus on detail is vital for scientific programming. If you've misspelt your output for years, how can a typical user trust that you've not miscoded your algorithms? Imagine if 'Rolls-Royce' were misspelt on all of their cars and turbofans.

Meta-Analysis

The ubiquitous definition of min.entropy is:-

$$ H_{\infty} = -\log_2(p_{max}) $$

But that's not quite the whole picture. Determination of the apparently simple $p_{max}$ is not straightforward, as all the algorithms used to calculate it attest. Not to mention the 26 pages of 800-90B dedicated to measuring non-IID $H_{\infty}$ and justifying themselves.
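
For an IID source, $p_{max}$ really is simple to obtain. A minimal sketch (plain byte counting with a plug-in estimate, ignoring the confidence-bound adjustment that 90B's most common value estimate applies; the file path is just the earlier /tmp/urandom example):-

import math
from collections import Counter

# Plug-in min.entropy of a byte file under an IID assumption:
# count symbol frequencies and take the most probable symbol.
with open('/tmp/urandom', 'rb') as f:
    data = f.read()

p_max = max(Counter(data).values()) / len(data)
print('H_min =', -math.log2(p_max), 'bits/byte')

It is precisely this simplicity that evaporates once the samples are correlated.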

The issue is confounded by ea_non_iid reporting entropy against various symbol widths such as 1 bit, 7 bits and 8 bits, yet entropic signals are typically consumed in octets, so how are 7-bit packets of entropy to be managed or extracted from? For example:-

Compression Test Estimate (bit string) = 0.511761 / 1 bit(s)
Collision Test Estimate = 0.062150 / 1 bit(s)
T-Tuple Test Estimate = 0.028690 / 7 bit(s)
T-Tuple Test Estimate = 0.023100 / 8 bit(s)

Even with what we have, the global confidence bounds are pretty relaxed, leading to uncertainty as to the efficacy of these algorithms, compressive or otherwise, and thus as to any accurate measurement of general min.entropy.


Must watch: Things become much clearer (but more worrisome) if the reader watches John Kelsey and Kerry McKay’s presentations at the NIST Random Bit Generation Workshop 2016, Day 1, Parts 1 & 2. Not the slides themselves, but the unguarded comments the presenters make. A précis:-

  • The new 2016 confidence criterion (p = 0.001) only applies independently to each specific test and predictor. It is not an aggregate universal bound on $H_{\infty}$.
  • Predictors are based on Shannon's work of the 1950s, some seven decades ago.
  • ea_non_iid has features to detect limited local failures in the source signal via 'local' predictors. These conflict with non-uniform distributions such as the Gaussian.
  • ea_non_iid has been tested on real world non-uniform distributions. However, “We don’t know what the answers should have been” - Kerry McKay.
  • There are elements of health checking and source failure compensation, diluting the purpose of the tool. “Jack of all..?”

Conclusion

“Estimating the entropy is non-trivial in the presence of complex and long range correlations.”

-Thomas Schürmann and Peter Grassberger.

NIST has been struggling for a long time to measure randomness.

It’s an impossible thing to do.

-John Kelsey, NIST.

Unfortunately then, $H_{\infty} = -\log_2(p_{max})$ is not all it's cracked up to be. The ease of determining $p_{max}$ in the IID case sadly does not extend to the non-IID case. What does min.entropy even mean in the general case, given that correlated data is a superset of IID data with respect to an autocorrelation parameter? The most common bit, nibble, byte or tuple? And which tuples, in what combinations? And what is their cardinality for data sets like the works of Shakespeare or JPEG files?

'The truth points to itself' - Kosh, Vorlon Ambassador to Babylon 5.

Babylon 5 fans will be familiar with Kosh, the Vorlon Ambassador. He said “The truth points to itself”. This is Aristotle's concept of the scientific method: what is true can be arrived at via many different and independent routes, and each one confirms that truth. With non-IID data sets though, we only have one path to follow (800-90B's ea_non_iid program). There are many paths on the map, but only NIST's is commonly promoted. How can third parties independently verify the measurements then?

This all leads to ostensibly the greatest of mysteries in cryptography:-

$$ X = \frac{H_{\infty}}{H_{Shannon}} , Y = \frac{H_{Actual}}{H_{Estimate}} $$

Just what are $X$ and $Y$ numerically? A question perhaps even harder than the $P \enspace \text{versus} \enspace NP$ debate, as most scientists have already coalesced around $P \ne NP$. $X$ and $Y$ remain unknown and open questions, other than $X, Y < 1$. Yet so many entropy measurement algorithms are founded upon compressive techniques. DIY entropy source makers should take heed, stay well away and de-correlate instead.

We therefore conclude that for entropy measurement of real world (autocorrelated) sources, NIST 800-90B ea_non_iid is still pretty much:-

Useless stamp.

Notes:-

[†] Which is also an autocorrelation of 10% in terms of bitwise entropy shmear. I.e.

$H_{Shannon|ent} = 6.02$ bits/byte and $H_{Compression|cmix} = \frac{(675,105 - 66) \times 8}{1,000,000} = 5.40$ bits/byte.

This gives an entropy shmear, $ES = \frac{6.02 - 5.40}{6.02} = 0.103$, or $\approx 10 \%$, where the compression file overhead ($k$) in using cmix v17 to compress a $10^6$ byte file = 66 bytes.

[††] Entropy shmear, $ES = 65 \%$ in this case.