On 4/11/18 9:29 PM, cuddlycave...@gmail.com wrote:
I’m replying to your post on January 28th
Nice carefully chosen non-random numbers, Steven D'Aprano.
Was just doing what you asked, but you don’t remember 😂😂😂
Best practice is to include a quote of the thing you are replying to.
It makes it m
--
https://mail.python.org/mailman/listinfo/python-list
On Tue, 10 Apr 2018 23:36:27 -0700, cuddlycaveman wrote:
[snip a number of carefully chosen, non-random numbers shown in binary]
> Don’t know if that helps
Helps what?
With no context, we don't know who you are replying to, what they asked,
or why you think this is helpful.
According to my a
387420479
00110011 00111000 00110111 00110100 00110010 00110000 00110100 00110111 00111001
72 bits
Equal to
(9^9)-10
00101000 00111001 01011110 00111001 00101001 00101101 00110001 00110000
64 bits
387420499
00110011 00111000 00110111 00110100 00110010 00110000 00110100 00111001 00111001
72 bits
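The bit strings above are just the ASCII bytes of the decimal digits; a short sketch (the function name is mine) makes that reproducible:

```python
def ascii_bits(s):
    # Each character of the decimal string is one 8-bit ASCII byte.
    return " ".join(format(ord(ch), "08b") for ch in s)

print(ascii_bits("387420479"))
print(len("387420479") * 8, "bits")  # 72 bits
```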
On Fri, 09 Feb 2018 17:52:33 -0800, Dan Stromberg wrote:
> Perhaps:
>
> import lzma
> lzc = lzma.LZMACompressor()
Ah, thanks for the suggestion!
--
Steve
, out2, out3, out4])
?
lzma compresses harder than bzip2, but it's probably slower too.
On Fri, Feb 9, 2018 at 5:36 PM, Steven D'Aprano
wrote:
> I want to compress a sequence of bytes one byte at a time. (I am already
> processing the bytes one byte at a time, for other reasons.) I
I want to compress a sequence of bytes one byte at a time. (I am already
processing the bytes one byte at a time, for other reasons.) I don't
particularly care *which* compression method is used, and in fact I'm not
even interested in the compressed data itself, only its length. So I
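Dan's LZMACompressor suggestion fits that byte-at-a-time loop directly. A minimal sketch with made-up sample data; note the compressor buffers internally, so most compress() calls return b"":

```python
import lzma

# Feed one byte at a time to an incremental compressor, tracking only
# the length of the compressed output, not the output itself.
lzc = lzma.LZMACompressor()
data = b"hello world, " * 20
compressed_len = 0
for i in range(len(data)):
    # compress() buffers internally; it usually returns b"" until flush()
    compressed_len += len(lzc.compress(data[i:i + 1]))
compressed_len += len(lzc.flush())  # emit whatever is still buffered
print(compressed_len, "compressed bytes for", len(data), "input bytes")
```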
On Sat, 27 Jan 2018 21:26:06 -0800 (PST), pendrysamm...@gmail.com wrote:
> If it is then show him this
>
> 387,420,489
> =
> 00110011 00111000 00110111 00101100 00110100 00110010 00110000 ...
To save the casual reader a moment of disorientation, the
above binary string is just the ASCII represent
On 2018-01-28, pendrysamm...@gmail.com wrote:
> I have it in my head, just need someone to write the program for me,
> I know nothing about data compression or binary data other than 1s
> and 0s and that you can not take 2 number without a possible value
> more or less than the
On Sat, 27 Jan 2018 21:50:24 -0800, pendrysammuel wrote:
> 387,420,489 is a number with only 2 repeating binary sequences
Okay. Now try these two numbers:
387420479
387420499
--
Steve
On Sat, 27 Jan 2018 22:14:46 -0800, pendrysammuel wrote:
> I have it in my head, just need someone to write the program for me,
Sure, my rate is $150 an hour.
> I
> know nothing about data compression or binary data other than 1s and 0s
> and that you can not take 2 number withou
Lawrence D’Oliveiro
In other words yes, I just need to be sober first.
I have it in my head, just need someone to write the program for me, I know
nothing about data compression or binary data other than 1s and 0s and that you
can not take 2 number without a possible value more or less than them selves
and compress them, I have been working for 1 1/2 years on a
387,420,489 is a number with only 2 repeating binary sequences
In binary 387,420,489 is expressed as 00110011 00111000 00110111 00101100
00110100 00110010 0011 00101100 00110100 00111000 00111001
387,420,489 can be simplified to 9^9, or nine to the power of nine
In binary 9^9 is represented
On Sun, Jan 28, 2018 at 4:26 PM, wrote:
> If it is then show him this
>
> 387,420,489
> =
> 00110011 00111000 00110111 00101100 00110100 00110010 00110000 00101100
> 00110100 00111000 00111001
>
> 9^9 = ⬇️ (^ = to the power of)
> = 387,420,489
>
> But
>
> 9^9
> =
> 00111001 01011110 00111001
I
If it is then show him this
387,420,489
=
00110011 00111000 00110111 00101100 00110100 00110010 00110000 00101100
00110100 00111000 00111001
9^9 = ⬇️ (^ = to the power of)
= 387,420,489
But
9^9
=
00111001 01011110 00111001
> a probability distribution, and calculate an entropy.
>
>> I think the argument that you can't compress arbitrary data is simpler
>> ... it's obvious that it includes the results of previous
>> compressions.
>
> What? I don't see how "results of pr
On Sun, 29 Oct 2017 01:56 pm, Stefan Ram wrote:
> If the entropy of an individual message is not defined,
> than it is still available to be defined. I define it
> to be log2(1/p), where p is the probability of this
> message. I also choose a unit for it, which I call "bit".
That is exact
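Stefan's definition translates to a one-line function (assuming p is the known probability of the message):

```python
import math

def self_information(p):
    # Self-information ("surprisal") of a message with probability p, in bits.
    return math.log2(1 / p)

print(self_information(0.5))  # a fair-coin outcome carries 1.0 bit
print(self_information(1.0))  # a certain message carries 0.0 bits
```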
On Sun, 29 Oct 2017 06:03 pm, Chris Angelico wrote:
> On Sun, Oct 29, 2017 at 6:00 PM, Ian Kelly wrote:
>> On Oct 28, 2017 5:53 PM, "Chris Angelico" wrote:
>>> One bit. It might send the message, or it might NOT send the message.
>>
>> Not sending the message is equivalent to having a second pos
On Sun, 29 Oct 2017 02:31 pm, Gregory Ewing wrote:
> Steve D'Aprano wrote:
>> I don't think that's right. The entropy of a single message is a
>> well-defined quantity, formally called the self-information.
>>
>> https://en.wikipedia.org/wiki/Self-information
>
> True, but it still depends on kn
Chris Angelico wrote:
One bit. It might send the message, or it might NOT send the message.
The entropy formula assumes that you are definitely
going to send one of the possible messages. If not
sending a message is a possibility, then you need
to include an empty message in the set of messages
On Sun, Oct 29, 2017 at 6:00 PM, Ian Kelly wrote:
> On Oct 28, 2017 5:53 PM, "Chris Angelico" wrote:
>> One bit. It might send the message, or it might NOT send the message.
>
> Not sending the message is equivalent to having a second possible message.
Okay, now we're getting seriously existenti
On Oct 28, 2017 5:53 PM, "Chris Angelico" wrote:
> One bit. It might send the message, or it might NOT send the message.
Not sending the message is equivalent to having a second possible message.
On Sun, Oct 29, 2017 at 2:08 PM, Gregory Ewing
wrote:
> Stefan Ram wrote:
>>
>> Well, then one can ask about the entropy of a data source
>> that only is emitting this message.
>
>
> You can, but it's still the *source* that has the entropy,
> not the message.
>
> (And the answer in that case
Steve D'Aprano wrote:
I don't think that's right. The entropy of a single message is a well-defined
quantity, formally called the self-information.
https://en.wikipedia.org/wiki/Self-information
True, but it still depends on knowing (or assuming) the
probability of getting that particular me
Stefan Ram wrote:
Well, then one can ask about the entropy of a data source
that only is emitting this message.
You can, but it's still the *source* that has the entropy,
not the message.
(And the answer in that case is that the entropy is zero.
If there's only one possible message you can
On Sun, Oct 29, 2017 at 1:32 PM, Chris Angelico wrote:
> On Sun, Oct 29, 2017 at 1:18 PM, Gregory Ewing
> wrote:
>> You're missing something fundamental about what
>> entropy is in information theory.
>>
>> It's meaningless to talk about the entropy of a single
>> message. Entropy is a function o
On Sun, Oct 29, 2017 at 1:18 PM, Gregory Ewing
wrote:
> You're missing something fundamental about what
> entropy is in information theory.
>
> It's meaningless to talk about the entropy of a single
> message. Entropy is a function of the probability
> distribution of *all* the messages you might
Steve D'Aprano wrote:
Random data = any set of data generated by "a source of random".
Any set of data generated by Grant Thompson?
https://www.youtube.com/user/01032010814
--
Greg
danceswithnumb...@gmail.com wrote:
10101011
This equals
61611
This can be represented using
0-6 log2(7)*5= 14.0367746103 bits
11010101
This equals
54543
This can be represented using
0-5 log2(6)*5= 12.9248125036 bits
You're missing something fundamental about what
entropy is
compress arbitrary data is simpler
... it's obvious that it includes the results of previous
compressions.
What? I don't see how "results of previous compressions" comes
into it. The source has an entropy even if you're not doing
compression at all.
--
Greg
On Oct 28, 2017 10:30 AM, "Stefan Ram" wrote:
> Well, then one can ask about the entropy of a data source
> that only is emitting this message. (If it needs to be endless:
> that only is emitting this message repeatedly.)
If there is only one possible message then the entropy is zero.
-1.0 * l
On Sun, 29 Oct 2017 07:03 am, Peter Pearson wrote:
> On Thu, 26 Oct 2017 19:26:11 -0600, Ian Kelly wrote:
>>
>> . . . Shannon entropy is correctly calculated for a data source,
>> not an individual message . . .
>
> Thank you; I was about to make the same observation. When
> people talk about t
On Thu, 26 Oct 2017 19:26:11 -0600, Ian Kelly wrote:
>
> . . . Shannon entropy is correctly calculated for a data source,
> not an individual message . . .
Thank you; I was about to make the same observation. When
people talk about the entropy of a particular message, you
can bet they're headed
ing error there; it should be "a source of random data".)
Yes, that's a fine definition, but it has the disadvantage of not being
a verifiable property of the thing defined -- you can't know, from the
data themselves, if they constitute random data. You would not care
about a compr
On Fri, 27 Oct 2017 09:53 am, Ben Bacarisse wrote:
> A source of random can be defined but "random data" is much more
> elusive.
Random data = any set of data generated by "a source of random".
--
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, thing
On Thu, Oct 26, 2017 at 8:48 PM, wrote:
> Shouldn't that be?
>
> py> 16 * (-7/16 * math.log2(7/16) - 6/16 * math.log2(6/16)) =
No, that's failing to account for 3/16 of the probability space.
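Ian's point is that the probabilities must cover the whole space (sum to 1) before the entropy formula applies. A quick sketch using the 7/16, 6/16, 3/16 split from this exchange:

```python
import math

def entropy_bits(probs):
    # Shannon entropy H = -sum(p * log2(p)); needs a full distribution.
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * math.log2(p) for p in probs if p)

# Including the missing 3/16 of the probability space:
full = 16 * entropy_bits([7 / 16, 6 / 16, 3 / 16])
print(full)
```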
On Thu, Oct 26, 2017 at 8:19 PM, wrote:
> It looks like that averages my two examples.
I don't know how you can look at two numbers and then look at a third
number that is larger than both of them and conclude it is the
average.
> H by the way that equation is really cool... why does it ret
Marko Rauhamaa writes:
> Ben Bacarisse :
>
>>> In this context, "random data" really means "uniformly distributed
>>> data", i.e. any bit sequence is equally likely to be presented as
>>> input. *That's* what information theory says can't be compressed.
>>
>> But that has to be about the process
Ben Bacarisse :
>> In this context, "random data" really means "uniformly distributed
>> data", i.e. any bit sequence is equally likely to be presented as
>> input. *That's* what information theory says can't be compressed.
>
> But that has to be about the process that gives rise to the data, not
Shouldn't that be?
py> 16 * (-7/16 * math.log2(7/16) - 6/16 * math.log2(6/16)) =
It looks like that averages my two examples. H by the way that equation is
really cool... why does it return a high bit count when compared to >>>dec to
bin?
On Thu, Oct 26, 2017 at 2:38 PM, wrote:
>
> Thomas Jollans
>
> On 2017-10-25 23:22, danceswi...@gmail.com wrote:
>> With every transform the entropy changes,
>
> That's only true if the "transform" loses or adds information.
>
> If it loses informa
I can, as a parlour trick, compress and
recover this "random data" because I chose it.
A source of random can be defined but "random data" is much more
elusive.
>> I think "arbitrary data" (thereby including the results of compression
>> by said alg
Thomas Jollans
On 2017-10-25 23:22, danceswi...@gmail.com wrote:
> With every transform the entropy changes,
That's only true if the "transform" loses or adds information.
If it loses information, that's lossy compression, which is only useful
in very specific (bu
0" which requires 10
> bits.
Right. This is most obvious in Huffman encoding, where each symbol is
replaced by a sequence of bits which is directly related to the
frequency of that symbol. So the letter 'e' might be encoded in 3 or 4
bits (in a corpus of text I happen to hav
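A minimal Huffman sketch of that idea, built on heapq (the sample text here is made up; a real corpus would give 'e' the 3-4 bit codes described above):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # Frequent symbols get shorter bit strings, rare symbols longer ones.
    freq = Counter(text)
    # Heap entries are (count, tiebreak, tree); a tree is a symbol or a pair.
    heap = [(c, i, sym) for i, (sym, c) in enumerate(freq.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        c1, _, t1 = heapq.heappop(heap)
        c2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (c1 + c2, nxt, (t1, t2)))
        nxt += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("this is an example of huffman coding")
# The most frequent symbol (the space) gets one of the shortest codes:
print(sorted(codes.items(), key=lambda kv: len(kv[1]))[:3])
```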
On 2017-10-25 23:22, danceswithnumb...@gmail.com wrote:
> With every transform the entropy changes,
That's only true if the "transform" loses or adds information.
If it loses information, that's lossy compression, which is only useful
in very specific (but also extremel
On Thu, 26 Oct 2017 08:22 am, danceswithnumb...@gmail.com wrote:
> with each pass you can compress until the entropy is so random it can no
> longer be compressed.
Which is another way of saying that you cannot compress random binary data.
--
Steve
“Cheer up,” they said, “things could be wo
So if the theoretical min compression limit (log2(n)*(x)) has a 3% margin but
your transform has a less than 3% inflate rate at most then there is room for
the transform to compress below the theoretical min. With every transform the
entropy changes, the potential for greater compression also
Whatever you do, you'll find that *on average* you
will need *at least* 34 bits to be able to represent
all possible 10-digit decimal numbers. Some might
be shorter, but then others will be longer, and
the average won't be less than 34.
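The 34-bit floor quoted above is just the ceiling of log2 of the number of possibilities, easy to check:

```python
import math

# All 10-digit decimal numbers: 10**10 possibilities.
bits_needed = math.ceil(math.log2(10 ** 10))
print(bits_needed)  # 34
```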
The theoretical limit for arbitrary numbers 0 - 9 must
On 10/24/17, Richard Damon wrote:
> My understanding of the 'Random Data Comprehensibility' challenge is
> that it requires that the compression take ANY/ALL strings of up to N
> bits, and generate an output stream no longer than the input stream, and
> sometime less.
That
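Richard's requirement is exactly where the counting (pigeonhole) argument bites: there are more n-bit strings than strings strictly shorter than n bits, so no lossless scheme can shrink every input. A sketch:

```python
# Count inputs of exactly n bits vs. all strictly shorter outputs.
n = 16
exact = 2 ** n                           # 65536 strings of length n
shorter = sum(2 ** k for k in range(n))  # 2**0 + ... + 2**(n-1) = 2**n - 1
print(exact, shorter)  # 65536 65535
# One input too many: some n-bit string cannot get a shorter output.
assert exact == shorter + 1
```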
lowed to alternate between two compression
methods, then the way you decompress
music.mp3.zip.zip.tgz.zip...tgz.zip.tgz
is to output 0 each time zip was applied and 1 each
time tar/gz was applied.
You may be able to take some shortcuts in some
cases, e.g. anything beginning with "mov
finitions being used.
In this context, "random data" really means "uniformly distributed
data", i.e. any bit sequence is equally likely to be presented as
input. *That's* what information theory says can't be compressed.
I think "arbitrary data" (thereby in
Steve D'Aprano wrote:
- Encrypted data looks very much like random noise.
There's actually a practical use for that idea. If you can feed
the output of an encryption algorithm through a compressor and
make it smaller, it means there is a cryptographic weakness
in the algorithm that could potent
ible bit sequences", but a heavily restricted subset
where there are lots of 00 pairs and fewer 01, 10, and 11 pairs.
My understanding of the 'Random Data Comprehensibility' challenge is
that it requires that the compression take ANY/ALL strings of up to N
bits, and generate an
On Wed, Oct 25, 2017 at 9:11 AM, Steve D'Aprano
wrote:
> On Wed, 25 Oct 2017 02:40 am, Lele Gaifax wrote:
>
>> Steve D'Aprano writes:
>>
>>> But given an empty file, how do you distinguish the empty file you get
>>> from 'music.mp3' and the identical empty file you get from 'movie.avi'?
>>
>> Tha
On Wed, 25 Oct 2017 07:09 am, Peter J. Holzer wrote:
> On 2017-10-23 04:21, Steve D'Aprano wrote:
>> On Mon, 23 Oct 2017 02:29 pm, Stefan Ram wrote:
>>>
>> If the probability of certain codes (either single codes, or sequences of
>> codes) are non-equal, then you can take advantage of that by enc
On Wed, 25 Oct 2017 02:40 am, Lele Gaifax wrote:
> Steve D'Aprano writes:
>
>> But given an empty file, how do you distinguish the empty file you get
>> from 'music.mp3' and the identical empty file you get from 'movie.avi'?
>
> That's simple enough: of course one empty file would be
> "music.m
On Tue, Oct 24, 2017 at 12:20 AM, Gregory Ewing
wrote:
> danceswithnumb...@gmail.com wrote:
>>
>> I did that quite a while ago. 352,954 kb.
>
>
> Are you sure? Does that include the size of all the
> code, lookup tables, etc. needed to decompress it?
My bet is that danceswithnumbers does indeed h
On Tue, 24 Oct 2017 14:51:37 +1100, Steve D'Aprano wrote:
On Tue, 24 Oct 2017 01:27 pm, danceswithnumb...@gmail.com wrote:
> Yes! Decode reverse is easy..sorry so excited i could shout.
Then this should be easy for you:
http://marknelson.us/2012/10/09/the-random-compression-c
On 2017-10-23 04:21, Steve D'Aprano wrote:
> On Mon, 23 Oct 2017 02:29 pm, Stefan Ram wrote:
>>
> If the probability of certain codes (either single codes, or sequences of
> codes) are non-equal, then you can take advantage of that by encoding the
> common cases into a short representation, and th
On 24/10/2017 16:40, Lele Gaifax wrote:
Steve D'Aprano writes:
But given an empty file, how do you distinguish the empty file you get
from 'music.mp3' and the identical empty file you get from 'movie.avi'?
That's simple enough: of course one empty file would be
"music.mp3.zip.zip.zip", while
Steve D'Aprano writes:
> But given an empty file, how do you distinguish the empty file you get
> from 'music.mp3' and the identical empty file you get from 'movie.avi'?
That's simple enough: of course one empty file would be
"music.mp3.zip.zip.zip", while the other would be
"movie.avi.zip.zip.z
Steve D'Aprano writes:
> On Tue, 24 Oct 2017 06:46 pm, danceswithnumb...@gmail.com wrote:
>
>> Greg, you're very smart, but you are missing a big key. I'm not padding,
>> you are still thinking inside the box, and will never solve this by doing
>> so. Yes! At least you see my accomplishment, thi
experience) to endless
attempts to define random data. My preferred way out of that is to talk
about algorithmic complexity but for your average "I've got a perfect
compression algorithm" poster, that is a step too far.
I think "arbitrary data" (thereby including the result
On Tue, 24 Oct 2017 06:46 pm, danceswithnumb...@gmail.com wrote:
> Greg, you're very smart, but you are missing a big key. I'm not padding,
> you are still thinking inside the box, and will never solve this by doing
> so. Yes! At least you see my accomplishment, this will compress any random
> fi
its very nature, random data is
> not interesting. What people want is a reversible compression algorithm
> that works on *arbitrary data* -- i.e. on *any* file at all, no matter
> how structured and *non-random* it is.
In a sense you are right. Compressing randomly generated data
for fun, if you are able to guarantee compressing
>> arbitrary data, then
>
> It's a small point, but you are replying to a post of mine and saying
> "you". That could make people think that /I/ am claiming to have a perfect
> compression algorithm.
Sorry. I intende
e replying to a post of mine and saying
"you". That could make people think that /I/ am claiming to have a perfect
compression algorithm.
> 1. Take a document you want to compress.
> 2. Compress it using your magic algorithm. The result is smaller.
> 3. Compress the compressed d
On 24 October 2017 at 11:23, Ben Bacarisse wrote:
> For example, run the complete works of Shakespeare through your program.
> The result is very much not random data, but that's the sort of data
> people want to compress. If you can compress the output of your
> compressor you have made a good s
On Tue, 24 Oct 2017 05:20 pm, Gregory Ewing wrote:
> danceswithnumb...@gmail.com wrote:
>> I did that quite a while ago. 352,954 kb.
>
> Are you sure? Does that include the size of all the
> code, lookup tables, etc. needed to decompress it?
>
> But even if you have, you haven't disproved the th
danceswithnumb...@gmail.com writes:
> Finally figured out how to turn this into a random binary compression
> program. Since my transform can compress more than dec to binary. Then
> i took a random binary stream,
Forget random data. For one thing it's hard to define, but more
On 24 October 2017 at 09:43, Gregory Ewing wrote:
> Paul Moore wrote:
>>
>> But that's not "compression", that's simply using a better encoding.
>> In the technical sense, "compression" is about looking at redundancies
>> that go beyond th
Paul Moore wrote:
But that's not "compression", that's simply using a better encoding.
In the technical sense, "compression" is about looking at redundancies
that go beyond the case of how effectively you pack data into the
bytes available.
There may be a diffe
danceswithnumb...@gmail.com wrote:
My 8 year old can decode this back into base 10,
Keep in mind that your 8 year old has more information
than just the 32 bits you wrote down -- he can also
see that there *are* 32 bits and no more. That's
hidden information that you're not counting.
--
Greg
No leading zeroes are being dropped off... wish this board had an edit button.
ticle, that string is incompressible
by a particular algorithm. I can see no more general claims.
Here's a compression algorithm that manages to compress that string into
a 0-bit string:
* If the original string is 12344321 (whatever that means),
return the empty
Greg, you're very smart, but you are missing a big key. I'm not padding, you
are still thinking inside the box, and will never solve this by doing so. Yes!
At least you see my accomplishment, this will compress any random file.
34 bits to be able to represent
all possible 10-digit decimal numbers. Some might
be shorter, but then others will be longer, and
the average won't be less than 34.
New compression method:
11000101
11000111
0100
A full byte less than bin.
You need to be *very* careful about what you
Gregory Ewing :
> What you *can't* do is compress 16 random decimal digits to less than
> 6.64 bytes.
More precisely:
Regardless of the compression scheme, the probability of shortening
the next bit sequence is less than 0.5 if the bits are distributed
evenly, r
danceswithnumb...@gmail.com wrote:
I did that quite a while ago. 352,954 kb.
Are you sure? Does that include the size of all the
code, lookup tables, etc. needed to decompress it?
But even if you have, you haven't disproved the theorem about
compressing random data. All you have is a program t
danceswithnumb...@gmail.com wrote:
12344321
It only takes seven 8 bit bytes to represent this
This is not surprising. The theoretical minimum size
for 16 arbitrary decimal digits is:
log2(10) * 16 = 53.15 bits = 6.64 bytes
I think you misunderstand what is meant by the phrase
"random
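The 53.15-bit arithmetic above checks out with a couple of lines of Python:

```python
import math

min_bits = math.log2(10) * 16           # 16 arbitrary decimal digits
print(round(min_bits, 2), "bits")       # 53.15 bits
print(round(min_bits / 8, 2), "bytes")  # 6.64 bytes
```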
On Tue, Oct 24, 2017 at 2:28 AM, Paul Moore wrote:
> Hope this helps put the subject into context. Compression is a very
> technical subject, to "do it right". Special cases can be worked out,
> sure, but the "hidden assumptions" in a method are what make the
> d
On Tue, 24 Oct 2017 03:13 pm, danceswithnumb...@gmail.com wrote:
> I did that quite a while ago. 352,954 kb.
Sure you did. Let's see the code you used.
--
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.
I did that quite a while ago. 352,954 kb.
On Tue, 24 Oct 2017 01:27 pm, danceswithnumb...@gmail.com wrote:
> Finally figured out how to turn this into a random binary compression
> program. Since my transform can compress more than dec to binary. Then i
> took a random binary stream, changed it to a decimal stream 0-9 tranfo
Finally figured out how to turn this into a random binary compression program.
Since my transform can compress more than dec to binary. Then i took a random
binary stream, changed it to a decimal stream 0-9, transformed it into a
compressed/encrypted binary stream 23.7% smaller. Yes! Decode
On Mon, Oct 23, 2017 at 1:42 PM, wrote:
> Wow, do programmers actually use ZSCII? That is huge. So much wasted space.
Not really. ZSCII is only relevant if you're writing Z-code or a
Z-code interpreter. Those in turn are only relevant if you're writing
Infocom games.
Wow, do programmers actually use ZSCII? That is huge. So much wasted space.
se?
>>>>> Paul
>>>>
>>>> I would suspect he is using BCD & storing 2 values in each
>>>> byte that is not what is meant by you can't compress random
>>>> data. his compression is simply removing redundant space
>>>> from
Good point
I hope it has a use, other than a cute toy... I don't see it yet.
On 2017-10-23 17:39, danceswithnumb...@gmail.com wrote:
> Thanks Paul...blunt to the point.
>
> My 8 year old can decode this back into base 10, i still have to help him a
> bit going from base 10 to 8 bit bytes... it's incredibly simple to decode. No
> dictionary, can easily be done with pencil
On 2017-10-23 07:39, Steve D'Aprano wrote:
> By the way: here is a very clever trick for hiding information in the file
> system:
>
> http://www.patrickcraig.co.uk/other/compression.php
>
>
> but as people point out, the information in the file, plus the information in
> the file system, ends up
Thanks Paul...blunt to the point.
My 8 year old can decode this back into base 10, i still have to help him a bit
going from base 10 to 8 bit bytes... it's incredibly simple to decode. No
dictionary, can easily be done with pencil and paper, does not rely on
redundancies.
Jon Hutton
Just trying to find a practical application for this alg. Not real useful as it
stands now.
Jon Hutton
nformation-theoretic meaning of the word), you need to be *really*
careful not to include hidden information in your assumptions.
For example, if I have a string made up of only the numbers 0-7, then
I can trivially (octal) store that data in 3 bits per digit. But
that's not compression, as
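The 3-bits-per-digit example above can be sketched as a round trip (the function names and the leading sentinel bit are my own choices, not from the thread):

```python
def pack_octal(digits):
    # Pack digits 0-7 into an int, 3 bits each; a leading 1 bit acts as a
    # sentinel so leading zero digits survive the round trip.
    n = 1
    for d in digits:
        assert 0 <= d <= 7
        n = (n << 3) | d
    return n

def unpack_octal(n):
    digits = []
    while n > 1:
        digits.append(n & 0b111)
        n >>= 3
    return digits[::-1]

packed = pack_octal([3, 0, 7, 7, 1, 5])
print(unpack_octal(packed))  # [3, 0, 7, 7, 1, 5]
```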
ry or change it to binary,
one would unflate the other would only get it down to
Compress this:
4135124325
Bin to dec...still very large
0110
0000
1101
01100101
New compression method:
11000101
11000111
0100
A full byte less than bin.
I know many are skeptical... that's okay... thi
gt;
>>>>> It only takes seven 8 bit bytes to represent this
>>>>
>>>> Would you care to provide the seven 8-bit bytes you propose to use?
>>>> Paul
>>>
>>> I would suspect he is using BCD & storing 2 values in each byte that
>>
vide the seven 8-bit bytes you propose
>>> to use?
>>> Paul
>>
>> I would suspect he is using BCD & storing 2 values in each
>> byte that is not what is meant by you can't compress random
>> data. his compression is simply removing redundant space from
& storing 2 values in each byte
> that is not what is meant by you can't compress random data.
> his compression is simply removing redundant space from an inefficient
> coding
I suspect he is using ASCII and storing one value in each byte.
ChrisA
1 - 100 of 342 matches