On 04/09/2015 05:33 AM, janhein.vanderb...@gmail.com wrote:
On Thursday 19 February 2015 19:25:14 UTC+1, Dave Angel wrote:
I wrote the following pair of functions:

   <snip>

Here are a couple of ranges of output showing that the 7-bit scheme does
better for values between 384 and 16379.
Thanks for this test; I obviously should have done it myself.
Please have a look at 
http://optarbvalintenc.blogspot.nl/2015/04/inputs-from-complangpython.html and 
the next two postings.
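
(For anyone skimming the thread: here is a rough sketch of the kind of 7-bit
continuation scheme being compared above. The function names and details are
mine for illustration, not the code from the earlier posts.)

def encode_varint(n):
    """Encode a non-negative integer, 7 bits per byte; high bit = more follows."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # continuation bit set
        else:
            out.append(byte)          # final byte
            return bytes(out)

def decode_varint(data):
    """Decode bytes produced by encode_varint back into an integer."""
    n = 0
    shift = 0
    for byte in data:
        n |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            break
    return n

# Byte counts by range: 0..127 -> 1, 128..16383 -> 2, 16384..2097151 -> 3.
for v in (127, 128, 16383, 16384):
    print(v, len(encode_varint(v)))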


I still don't see where you have declared what your goal actually is. As with the "recursive compression" scheme [1], if you don't have a specific goal in mind, you'll never actually be sure you've achieved it, even though you might be able to fool the patent examiners.

Any method of encoding will be worse for some values in order to be better for others. Without specifying a distribution over the inputs, you cannot tell whether a "typical" set of integers comes out smaller with one method than with another.

For example, if the values are uniformly distributed over 0 to 256**n - 1, you will not be able to beat straight n-byte binary storage.
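
A quick illustration (the 7-bit scheme here is my own strawman for
comparison): summing the encoded sizes over a uniform distribution of
0 .. 256**2 - 1 shows plain 2-byte storage winning.

def varint_len(n):
    """Bytes used by a 7-bit-per-byte continuation encoding of n."""
    length = 1
    while n > 0x7F:
        n >>= 7
        length += 1
    return length

N = 256 ** 2
fixed_total = 2 * N                                  # two bytes per value
varint_total = sum(varint_len(v) for v in range(N))  # 1, 2 or 3 bytes per value

print(fixed_total, varint_total)    # 131072 versus 180096: fixed storage wins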

Other than that, I make no claims that any of the schemes previously discussed in this thread is unbeatable.

You also haven't made it clear whether the compressed bit stream is required to occupy an integral number of bytes. If your goal is to store a bunch of these arbitrary-length integers in a file of minimal size, then you're talking about classic compression techniques. Or perhaps the thing to minimize is the time needed to convert such a bit stream to and from a conventional one.
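
If whole bytes are not required, the bookkeeping looks roughly like this
(a sketch with made-up code lengths, not anybody's actual scheme): pack the
codes into one bit string and pad only at the very end.

def pack_bits(codes):
    """codes: list of (value, nbits) pairs. Returns bytes, padded at the end."""
    bits = "".join(format(value, "0{}b".format(nbits)) for value, nbits in codes)
    bits += "0" * (-len(bits) % 8)                   # pad to a whole byte
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

# Three integers of 9, 3 and 14 bits respectively: 26 bits -> 4 bytes on disk.
print(pack_bits([(300, 9), (5, 3), (12345, 14)]))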

I suggest you study Huffman encoding [2] and see what makes it tick. It assumes that there is a finite set of symbols, that each symbol has a known probability, and that each code word occupies an integral number of *bits*.
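
To make that concrete, here is a bare-bones Huffman construction (the sample
text, and therefore the frequencies, are made up):

import heapq
from collections import Counter

def huffman_codes(freqs):
    """Map each symbol to a bit string, given a {symbol: count} mapping."""
    # Heap entries: (weight, tiebreaker, {symbol: code-so-far}).
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, codes1 = heapq.heappop(heap)
        w2, _, codes2 = heapq.heappop(heap)
        # Prefix the two cheapest subtrees' codes with 0 and 1, then merge.
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        count += 1
        heapq.heappush(heap, (w1 + w2, count, merged))
    return heap[0][2]

text = "this is an example of a huffman tree"
codes = huffman_codes(Counter(text))
print(sorted(codes.items(), key=lambda kv: len(kv[1])))
# Frequent symbols (the space, 'a', 'e') get short codes; rare ones get long codes.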

Then study arithmetic coding [3], which no longer assumes that a single symbol occupies a whole number of bits. A mind-blowing concept. Incidentally, it introduces a "stop symbol" which is given a very low probability.
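
And a toy version of the idea, with a stop symbol, just to show the flavour.
Floating point limits it to short messages (real coders use integer arithmetic
with renormalization), and the probabilities below are invented.

PROBS = {"a": 0.5, "b": 0.3, "c": 0.15, "$": 0.05}   # "$" is the stop symbol

def cum_ranges(probs):
    """Give each symbol a [lo, hi) slice of [0, 1)."""
    ranges, lo = {}, 0.0
    for sym, p in probs.items():
        ranges[sym] = (lo, lo + p)
        lo += p
    return ranges

def encode(message, probs=PROBS):
    ranges = cum_ranges(probs)
    lo, hi = 0.0, 1.0
    for sym in message + "$":            # append the stop symbol
        width = hi - lo
        s_lo, s_hi = ranges[sym]
        lo, hi = lo + width * s_lo, lo + width * s_hi
    return (lo + hi) / 2                 # any number inside the final interval

def decode(x, probs=PROBS):
    ranges = cum_ranges(probs)
    out = []
    while True:
        for sym, (s_lo, s_hi) in ranges.items():
            if s_lo <= x < s_hi:
                if sym == "$":
                    return "".join(out)
                out.append(sym)
                x = (x - s_lo) / (s_hi - s_lo)   # zoom into that symbol's slice
                break

print(decode(encode("abacab")))          # -> abacab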

See the book "Text Compression", 1990, by Bell, Cleary, and Witten.

[1] - http://gailly.net/05533051.html
[2] - http://en.wikipedia.org/wiki/Huffman_coding
[3] - http://en.wikipedia.org/wiki/Arithmetic_coding

If you're going to continue the discussion on python-list, you probably should start a new thread and state your actual goals.


--
DaveA

