Edited the OP for clarity.
Why a nano-second clock? Are you thinking of applying to CERN or similar?
Nah! Just for use in Spectre/Meltdown type exploits, Musher ... systems still with speculative execution enabled ... only kidding
Actually I was looking at using the least significant digit(s) (LSD) of the nano-second clock as a means to generate my own version of some random data (bytes). Even running the exact same code repeatedly will have different nano-second timings, so the LSD is in effect a real random value: odd digit, output a '1' bit; even digit, output a '0' bit; shift right 1 and repeat to build up sequences of random bytes, where the code/activity being timed is the program itself.

If that takes around 1000 nano-seconds per bit produced, then that's 1 million real random bits produced per second (roughly 125KB of random bytes/second). But in practice it was too slow for my needs, so for the time being I've opted to revert to using /dev/urandom encrypted with a key of around 170 random characters (the base64 encoding of a 128-byte /dev/random value) instead.
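A minimal sketch of that bit-building loop, assuming GNU date with %N nano-second support (the function name is my own, and forking date each pass is far slower than the ~1000 nano-seconds per bit figure above, so treat it as mechanics only):

Code: Select all
#!/bin/bash
# Build one byte from nano-second clock parity: odd last digit -> '1' bit, even -> '0'.
random_byte() {
    local byte=0 i ns
    for i in {1..8}; do
        ns=$(date +%N)      # nano-seconds, zero-padded to 9 digits
        # 10# forces decimal (a leading zero would otherwise mean octal);
        # & 1 gives the parity of the least significant digit
        byte=$(( (byte >> 1) | ( (10#$ns & 1) << 7 ) ))   # shift right 1 and insert
    done
    printf '%02x' "$byte"
}

for n in {1..16}; do random_byte; done; echo   # e.g. emit 16 bytes as hex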
Code: Select all
dd status=none if=/dev/urandom bs=65536 count=$C | pv --bytes | \
openssl enc -aes-256-ctr -pass \
pass:"$(dd if=/dev/random bs=128 count=1 2>/dev/null | base64)" \
-nosalt > outfile
(dd block size of 65536 - because that's the default buffer size that Linux uses for pipes; iflag=fullblock on the inner dd guards against /dev/random returning a short read and so weakening the key).
One factor however: that's not appropriate for longer-term encrypted storage. Attacks have already brought the cost of cracking aes-256 down by some considerable factors, so stored data perhaps has a shelf life of maybe 5 years or less before having to be decrypted and re-encrypted using a 'better' method.
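As a rough sketch of such a re-encryption pass (old.key is assumed to hold the original passphrase; new.key and the choice of chacha20 are just placeholders, and -chacha20/-pbkdf2 need OpenSSL 1.1.1 or later):

Code: Select all
# Decrypt with the original parameters, then re-encrypt under a newer scheme.
openssl enc -d -aes-256-ctr -nosalt -pass file:old.key < outfile | \
openssl enc -chacha20 -pbkdf2 -pass file:new.key > outfile.new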
/dev/random alone is OK'ish
Code: Select all
dd if=/dev/random bs=32 iflag=fullblock | pv --bytes | head -c $FILESIZE >outfile
possibly more so if mixed in with the above (nano-second LSD), as /dev/random blocks output until the pool is estimated to have high entropy (low predictability). But when using a Vernam cipher (one-time-pad style) you need a random stream at least as large as the data being encoded, and /dev/random is very slow at producing such large amounts of random data. It's also more open to side-channel attacks such as embedded 'distortions/predictable bias'.
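For what it's worth, a byte-by-byte XOR is both how the two sources could be mixed and all a Vernam encode amounts to. A minimal sketch (the helper name is mine, and it's far too slow for bulk data; it's only to show the mechanics):

Code: Select all
# XOR file $1 with pad $2 (pad must be at least as long as the data).
xor_files() {
    paste <(xxd -p -c1 "$1") <(xxd -p -c1 "$2") |
    while read -r a b; do
        [ -z "$b" ] && break                 # one stream exhausted
        printf '%02x' $(( 16#$a ^ 16#$b ))
    done | xxd -r -p
}

xor_files plainfile pad.bin > outfile   # decode: xor_files outfile pad.bin > plainfile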
Off-the-shelf methods could be used, but again they're subject to back-doors and the algorithms are widely known. Better to use a bespoke algorithm which is itself part of the 'secret': keys on one server, pads on another, algorithms on yet another, and all three have to be brought together in order to be able to 'open' the data.
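Purely as an illustration of that split (file names are placeholders; it reuses the xor_files helper and the openssl command from above), pad.bin would live on one server, key.txt on another, and the script itself on a third:

Code: Select all
# Layer 1: one-time-pad XOR; layer 2: aes-256-ctr under a separate key.
SIZE=$(stat -c%s plainfile)
dd if=/dev/random bs=32 iflag=fullblock 2>/dev/null | head -c "$SIZE" > pad.bin
xor_files plainfile pad.bin | \
openssl enc -aes-256-ctr -nosalt -pass file:key.txt > outfile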