On Mon, 2006-09-18 at 16:00 -0500, Jim C. Nasby wrote:
> BTW, at a former company we used SHA1s to identify files that had been
> uploaded. We were wondering on the odds of 2 different files hashing to
> the same value and found some statistical comparisons of probabilities.
> I don't recall the details, but the odds of duplicating a SHA1 (1 in
> 2^160) are so insanely small that it's hard to find anything in the
> physical world that compares. To duplicate random 256^256 numbers you'd
> probably have to search until the heat-death of the universe.
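
For scale (my back-of-the-envelope numbers, not Jim's): for n random
160-bit values, the usual birthday approximation puts the collision
probability at roughly n^2 / 2^161. A quick C sketch of that estimate:

/*
 * Birthday-bound approximation for 160-bit hashes (illustrative only):
 * P(collision) ~= n^2 / 2^161.  At a billion files this is about 3.4e-31.
 */
#include <math.h>
#include <stdio.h>

int
main(void)
{
    double n = 1e9;                      /* hypothetical number of files */
    double p = (n * n) / pow(2.0, 161);  /* approximate collision probability */

    printf("P(collision) ~= %g\n", p);   /* prints about 3.4e-31 */
    return 0;
}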
That assumes you have good random data. There is usually a tradeoff
between randomness and performance. If you read /dev/random each time,
that rules out applications that need to generate UUIDs very quickly. If
you use pseudorandom data, you are vulnerable if a clock is set back or
the data repeats.

Regards,
	Jeff Davis
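
P.S. A minimal C sketch of that tradeoff (the function names are mine,
not anything in PostgreSQL): /dev/random can block until enough entropy
is available, while a clock-seeded PRNG is fast but can repeat its stream.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* May block waiting for entropy; returns 0 on success, -1 on failure. */
static int
uuid_bytes_from_dev_random(unsigned char *buf, size_t len)
{
    FILE *f = fopen("/dev/random", "rb");
    if (f == NULL)
        return -1;
    size_t got = fread(buf, 1, len, f);
    fclose(f);
    return (got == len) ? 0 : -1;
}

/*
 * Fast, but seeded from the clock: setting the clock back (or reusing a
 * seed) can reproduce the same byte stream.
 */
static void
uuid_bytes_from_prng(unsigned char *buf, size_t len)
{
    srand((unsigned) time(NULL));
    for (size_t i = 0; i < len; i++)
        buf[i] = (unsigned char) (rand() & 0xff);
}

int
main(void)
{
    unsigned char uuid[16];

    if (uuid_bytes_from_dev_random(uuid, sizeof(uuid)) != 0)
        uuid_bytes_from_prng(uuid, sizeof(uuid));

    for (size_t i = 0; i < sizeof(uuid); i++)
        printf("%02x", uuid[i]);
    printf("\n");
    return 0;
}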