linux - Does 'urandom' share the same entropy of 'random'?

  • Xiè Jìléi

    Is the entropy pool used by /dev/random the same as the one used by /dev/urandom?

    I want to

    mknod /dev/random c 1 9
    

    to replace the slow random device. I think there is enough entropy in the pool right now; if urandom is based on the same entropy, and all subsequent random numbers are generated from that entropy, I don't think there will be any vulnerability.
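
    For reference, here is roughly what I have in mind (checking the kernel's entropy estimate via the standard /proc path, then pointing /dev/random at urandom's character device, major 1, minor 9; moving the original node aside first is just a precaution I'm assuming):

    # how much entropy the kernel currently estimates is in the pool (in bits)
    cat /proc/sys/kernel/random/entropy_avail

    # keep the original node around, then recreate /dev/random as urandom (char 1,9)
    mv /dev/random /dev/random.blocking
    mknod /dev/random c 1 9
    chmod 666 /dev/random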

  • Answers
  • T.J. Crowder

    At the end of the day, what urandom gives you may well be implementation-specific, but the man page says that it will use the available entropy if it's there, and only fall back to the PRNG when it runs out of entropy. So if you have enough entropy, you should get as good a result as if you'd used random instead.

    But, and this is a big but: You have to assume you're getting a purely pseudo-random value with no genuine entropy at all, because the entropy pool may be empty. Therefore, you have to treat urandom as a PRNG, even though it may do better than that in any given situation. Whether it does is not deterministic (within the confines of your code), and you have to expect that the worst case will apply. After all, if you were sure there was enough entropy in the pool, you'd use random, right? So the act of using urandom means you're okay with a PRNG, and that means a potentially, theoretically crackable result.
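
    If you want to see the practical difference for yourself, a quick check on a Linux box looks something like this (the byte counts are arbitrary, and the first read may simply hang until the pool refills):

    # blocks as soon as the kernel's entropy estimate runs out
    dd if=/dev/random of=/dev/null bs=512 count=1

    # always returns immediately, falling back to the PRNG when entropy is short
    dd if=/dev/urandom of=/dev/null bs=512 count=1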

  • grawity

    random uses only the collected entropy, and urandom is a PRNG. While both may be "secure enough", I seriously would not use urandom for generating keys, for example.
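
    If you do need long-lived key material and are willing to wait, pulling it straight from the blocking device is the conservative option; a rough sketch (32 bytes is just an illustrative key size):

    # draw 32 bytes of key material from the blocking pool and hex-encode it
    dd if=/dev/random bs=1 count=32 2>/dev/null | xxd -p -c 32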

  • The Spooniest

    urandom uses the same entropy pool that random does, and if there is enough entropy in the pool at the moment you call it, it returns the same kinds of results that random would.

    However, you might be surprised at just how big of an if that can be, and it's not something that you have any direct control over. Most computers are not equipped with hardware that constantly gathers any kind of reliable entropy, and gathering enough of it from non-constant but reliable sources can take a while. When there isn't enough, urandom falls back on a PRNG, with all the problems (including predictability) that go with it.

    For a lot of applications (most games, for example), that's still good enough. But there are important applications where it isn't, and I assure you, your machine does use those applications behind the scenes even if you don't consciously see/use them. For that reason, it's not a good idea to just use urandom everywhere.

    Out of curiosity, what makes you think random is so slow? Where is your computer locking up?
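
    If you want to put numbers on that, something along these lines shows how long a blocking read actually takes on your machine (the read size is arbitrary; on an idle, headless box this can take a long time):

    # time a modest read from the blocking device
    time dd if=/dev/random of=/dev/null bs=128 count=1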

  • Erwan Legrand

    The problem here is not that /dev/urandom is a PRNG. The problem is that /dev/urandom will not block until enough entropy has been gathered to seed it.

    Thus, you don't want to use either /dev/random or /dev/urandom on Linux. You need something which provides a replacement for these, be it a kernel module or a daemon.

    Another option is to switch to FreeBSD where both /dev/random and /dev/urandom do what you want, i.e. they provide cryptographically strong pseudo-random numbers and block until they are seeded.
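
    For the daemon route, one commonly used option is an entropy-gathering daemon such as haveged, which keeps the kernel pool topped up rather than replacing the device nodes (the package and service names below assume a Debian-style system):

    # install and start an entropy-gathering daemon (Debian/Ubuntu names assumed)
    apt-get install haveged
    service haveged start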


  • Related Question

    How to investigate the stuff that Linux kernel does with my hardware?
  • kagali-san

    Looking to see, illustrated, how the kernel accesses a PCI sound card: e.g., device I/O, device registers, function calls (including calls to DMA; the data itself is not required, it can be guessed from the sources). I want to get a log, <...read it, write some script to make a Graphviz chart>,

    I can set up a virtual machine for testing, and I have already picked out several things in the ALSA code to look at, but I still have no idea how to get the whole thing traced in real time.

    The ideal debugging workflow for me would be: enter debugging mode, load the modules, call aplay to send data to the sound card, unload the modules, exit debugging mode, and dump the debug log to a file. Any kind of recommendation would be fine.


  • Related Answers
  • kagali-san

    systemtap and source code. source code and systemtap.
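
    To make that concrete, a minimal SystemTap sketch, assuming the ALSA PCM code is built as the snd_pcm module and that logging function entries is enough to start from:

    # print every call into the snd_pcm module, with the calling process name
    stap -v -e 'probe module("snd_pcm").function("*") {
        printf("%s -> %s\n", execname(), probefunc())
    }'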