On Fri, Nov 8, 2013 at 8:48 AM, wrote:
> 3^2-2^2=5
How do you intend to encode 3**2 - 2**2 in such a way that it is more
compact than simply encoding 5? If you actually have an algorithm,
you should share it instead of dropping these cryptic one-line
non-explanations and leaving us guessing abo
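To make the objection concrete, a minimal Python sketch (the expression string and the byte counts are just one possible encoding, not anything the proposal has specified):

    # Serialise both forms and compare sizes.
    value = 5
    expression = "3**2-2**2"          # an arithmetic formula that evaluates to 5

    assert eval(expression) == value  # both denote the same number

    # The bare value fits in one byte (three bits, really); the formula takes
    # nine characters before you even define a grammar for decoding it.
    print(len(expression.encode("ascii")))                             # 9
    print(len(value.to_bytes((value.bit_length() + 7) // 8, "big")))   # 1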
On Friday, November 8, 2013 9:18:05 PM UTC+5:30, jonas.t...@gmail.com wrote:
> On Friday, 8 November 2013 at 03:43:17 UTC+1, zipher wrote:
> > >> I am not sure if it is just stupidness or laziness that prevent you from
> > >> seeing that 4^8=65536.
> > > I can see that 4^8 = 65536. Now how
On Friday, 8 November 2013 at 03:43:17 UTC+1, zipher wrote:
> >> I am not sure if it is just stupidness or laziness that prevent you from
> >> seeing that 4^8=65536.
>
> > I can see that 4^8 = 65536. Now how are you going to render 65537? You
> > claimed that you could render *a
On Fri, Nov 8, 2013 at 6:09 PM, Gregory Ewing wrote:
> You've got me thinking now about how viable a compression
> scheme this would be, efficiency issues aside. I suppose
> it would depend on things like the average density of primes
> and the average number of prime factors a number has.
> Any n
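A rough experiment in that spirit: compare the bits needed to store n directly with the bits needed to store its prime factorisation as (prime, exponent) pairs. The cost model here (bit length of each prime plus bit length of each exponent, no delimiters) is an assumption, and it still flatters the scheme:

    def factorise(n):
        """Prime factorisation of n as a list of (prime, exponent) pairs."""
        factors, p = [], 2
        while p * p <= n:
            if n % p == 0:
                e = 0
                while n % p == 0:
                    n //= p
                    e += 1
                factors.append((p, e))
            p += 1
        if n > 1:
            factors.append((n, 1))
        return factors

    def factored_bits(n):
        return sum(p.bit_length() + e.bit_length() for p, e in factorise(n))

    LIMIT = 100_000
    wins = sum(factored_bits(n) < n.bit_length() for n in range(2, LIMIT))
    print(wins, "of", LIMIT - 2, "numbers come out shorter")   # only a small minority do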
Mark Janssen wrote:
Technically, the universe could expand temporarily or reconfigure to
allow it;
Oh, no! If Jonas ever succeeds in getting his algorithm to
work, the Void will expand and swallow us all!
http://en.wikipedia.org/wiki/The_Dreaming_Void
--
Greg
Steven D'Aprano wrote:
Of course, to reverse the
compression you need to keep a table of the prime numbers somewhere,
That's not strictly necessary -- you could calculate them
as needed. It wouldn't be *fast*, but it would work...
You've got me thinking now about how viable a compression
schem
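A minimal sketch of "calculate them as needed", assuming nothing cleverer than trial division (no precomputed table, and certainly not fast):

    from itertools import count, islice

    def primes():
        """Yield 2, 3, 5, 7, ... by trial division against the primes found so far."""
        found = []
        for n in count(2):
            if all(n % p for p in found if p * p <= n):
                found.append(n)
                yield n

    def nth_prime(k):
        """1-based: nth_prime(1) == 2."""
        return next(islice(primes(), k - 1, None))

    print(nth_prime(1000))   # 7919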
On Thu, 07 Nov 2013 18:25:18 -0800, jonas.thornvall wrote:
> Please, you are he obnoxious, so fuck off
Pot, meet kettle.
> I am not sure if it is just stupidness or laziness that prevent you from
> seeing that 4^8=65536.
And what is it that prevents you from seeing that 4**8=65536 is
irrelevan
On Thu, 07 Nov 2013 18:43:17 -0800, Mark Janssen wrote:
>>> I am not sure if it is just stupidness or laziness that prevent you
>>> from seeing that 4^8=65536.
>>
>> I can see that 4^8 = 65536. Now how are you going to render 65537? You
>> claimed that you could render *any* number efficiently. Wh
On Thu, 7 Nov 2013 18:43:17 -0800, Mark Janssen wrote:
I think the idea would be to find the prime factorization for a given
number, which has been proven to be available (and unique) for any and
every number. Most numbers can compress given this technique. Prime
numbers, of course, woul
On Fri, Nov 8, 2013 at 3:05 PM, R. Michael Weylandt wrote:
>
> On Nov 7, 2013, at 22:24, Chris Angelico wrote:
>
>> On Fri, Nov 8, 2013 at 1:43 PM, R. Michael Weylandt wrote:
>>> Chris's point is more subtle: the typical computer will store the number
>>> 65536 in a single byte, but it w
On Nov 7, 2013, at 22:24, Chris Angelico wrote:
> On Fri, Nov 8, 2013 at 1:43 PM, R. Michael Weylandt wrote:
>> Chris's point is more subtle: the typical computer will store the number
>> 65536 in a single byte, but it will also store 4 and 8 in one byte.
>
> Well, 65536 won't fit in a s
On Fri, Nov 8, 2013 at 1:43 PM, R. Michael Weylandt wrote:
> Chris's point is more subtle: the typical computer will store the number
> 65536 in a single byte, but it will also store 4 and 8 in one byte. So if
> your choice is between sending "65536" and "(4,8)", you actually lose
> efficienc
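To put byte counts on that, a small sketch (the "send the pair versus send the number" framing is an illustration, not anyone's actual protocol):

    # 65536 does not fit in one byte; it needs three.  The pair (4, 8) fits in
    # two bytes, but a real format would also need some marker saying "this is
    # a base/exponent pair, not a literal" -- and that overhead eats the saving.
    n = 65536
    print(n.bit_length())                      # 17 bits
    print((n.bit_length() + 7) // 8)           # 3 bytes
    print((4).bit_length(), (8).bit_length())  # 3 bits and 4 bits
    # And 65537 = 2**16 + 1 has no (base, exponent) form at all:
    print(any(b ** e == 65537 for b in range(2, 257) for e in range(2, 17)))   # False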
In article ,
Chris Angelico wrote:
> 2) How do you factor large numbers efficiently? Trust me, if you've
> solved this one, there are a *LOT* of people who want to know. People
> with money, like Visa.
Not to mention the NSA.
On Fri, Nov 8, 2013 at 1:43 PM, Mark Janssen wrote:
>>> I am not sure if it is just stupidness or laziness that prevent you from
>>> seeing that 4^8=65536.
>>
>> I can see that 4^8 = 65536. Now how are you going to render 65537? You
>> claimed that you could render *any* number efficiently. What
>> I am not sure if it is just stupidness or laziness that prevent you from
>> seeing that 4^8=65536.
>
> I can see that 4^8 = 65536. Now how are you going to render 65537? You
> claimed that you could render *any* number efficiently. What you've
> proven is that a small subset of numbers can be r
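A quick way to see how small that subset is, counting only exact powers b**e (one reading of the claim, not a spec from the thread):

    # How many integers below 2**16 can be written as b**e with e >= 2?
    LIMIT = 2 ** 16
    powers = set()
    b = 2
    while b * b < LIMIT:
        value = b * b
        while value < LIMIT:
            powers.add(value)
            value *= b
        b += 1
    print(len(powers), "of", LIMIT)   # 297 of 65536 -- under half a percent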
On Nov 7, 2013, at 21:25, jonas.thornv...@gmail.com wrote:
> On Friday, 8 November 2013 at 03:17:36 UTC+1, Chris Angelico wrote:
>> On Fri, Nov 8, 2013 at 1:05 PM, wrote:
>>
>>> I guess what matter is how fast an algorithm can encode and decode a big
>>> number, at least if you want
On Friday, November 8, 2013 7:55:18 AM UTC+5:30, jonas wrote:
> On Friday, 8 November 2013 at 03:17:36 UTC+1, Chris Angelico wrote:
> > On Fri, Nov 8, 2013 at 1:05 PM, jonas.thornvall wrote:
> > > I guess what matter is how fast an algorithm can encode and decode a big
> > > number, at le
On Fri, Nov 8, 2013 at 1:25 PM, wrote:
> Please, you are he obnoxious, so fuck off or go learn about reformulation of
> problems. Every number has an infinite number of arithmetical solutions. So
> every number do has a shortest arithmetical encoding. And that is not the
> hard part to figure
On Friday, 8 November 2013 at 03:17:36 UTC+1, Chris Angelico wrote:
> On Fri, Nov 8, 2013 at 1:05 PM, wrote:
>
> > I guess what matter is how fast an algorithm can encode and decode a big
> > number, at least if you want to use it for very big sets of random data, or
> > losless video
On Fri, Nov 8, 2013 at 1:24 PM, Mark Janssen wrote:
> On Thu, Nov 7, 2013 at 6:17 PM, Chris Angelico wrote:
>> On Fri, Nov 8, 2013 at 1:05 PM, wrote:
>>> I guess what matter is how fast an algorithm can encode and decode a big
>>> number, at least if you want to use it for very big sets of ran
On Thu, Nov 7, 2013 at 6:17 PM, Chris Angelico wrote:
> On Fri, Nov 8, 2013 at 1:05 PM, wrote:
>> I guess what matter is how fast an algorithm can encode and decode a big
>> number, at least if you want to use it for very big sets of random data, or
>> losless video compression?
>
> I don't ca
On Fri, Nov 8, 2013 at 1:05 PM, wrote:
> I guess what matter is how fast an algorithm can encode and decode a big
> number, at least if you want to use it for very big sets of random data, or
> losless video compression?
I don't care how fast. I care about the laws of physics :) You can't
stuf
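The counting argument behind that, as a sketch (standard pigeonhole reasoning, not code from the thread):

    # There are 2**n bit strings of length n, but only 2**n - 1 strings of
    # length strictly less than n (counting the empty string).  So no lossless
    # scheme can map every length-n input to a shorter output.
    n = 16
    inputs = 2 ** n                           # 65536 possible 16-bit messages
    outputs = sum(2 ** k for k in range(n))   # 65535 strictly shorter messages
    print(inputs, outputs, inputs > outputs)  # 65536 65535 True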
On Thursday, 7 November 2013 at 23:26:45 UTC+1, Chris Angelico wrote:
> On Fri, Nov 8, 2013 at 5:59 AM, Mark Janssen wrote:
> > I think the idea is that you could take any arbitrary input sequence,
> > view it as a large number, and then find what exponential equation can
> > pr
On Fri, Nov 8, 2013 at 5:59 AM, Mark Janssen wrote:
> I think the idea is that you could take any arbitrary input sequence,
> view it as a large number, and then find what exponential equation can
> produce that result. The equation becomes the "compression".
Interesting idea, but I don't see ho
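For what it's worth, "view it as a large number and look for an exponential form" comes out roughly like this in Python; the brute-force search over exponents is a naive reading, not Mark's actual method:

    def as_power(data: bytes):
        """Return (base, exponent) if the bytes, read as an integer, are an exact power."""
        n = int.from_bytes(data, "big")
        if n < 4:
            return None
        for exponent in range(n.bit_length(), 1, -1):
            base = round(n ** (1.0 / exponent))
            for b in (base - 1, base, base + 1):     # guard against float rounding
                if b >= 2 and b ** exponent == n:
                    return b, exponent
        return None

    print(as_power((65536).to_bytes(3, "big")))   # (2, 16)
    print(as_power((65537).to_bytes(3, "big")))   # None -- and most inputs look like this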
Mark Janssen wrote:
>>> Well let me try to explain why it is working and i have implemented one.
>>> I only need to refresh my memory it was almost 15 years ago.
>>> This is not the solution but this is why it is working.
>>> 65536=256^2=16^4=***4^8***=2^16
>>
>> I think the idea is that you could
>>Well let me try to explain why it is working and i have implemented one.
>>I only need to refresh my memory it was almost 15 years ago.
>>This is not the solution but this is why it is working.
>>65536=256^2=16^4=***4^8***=2^16
>
> All of those values are indeed the same, and yet that is complete
jonas.thornv...@gmail.com wrote:
>
>Well let me try to explain why it is working and i have implemented one.
>I only need to refresh my memory it was almost 15 years ago.
>This is not the solution but this is why it is working.
>65536=256^2=16^4=***4^8***=2^16
All of those values are indeed the sa
On Tue, 05 Nov 2013 04:33:46 +, Steven D'Aprano wrote:
> On Mon, 04 Nov 2013 14:34:23 -0800, jonas.thornvall wrote:
>
>> On Monday, 4 November 2013 at 15:27:19 UTC+1, Dave Angel wrote:
>>> On Mon, 4 Nov 2013 05:53:28 -0800 (PST), jonas.thornv...@gmail.com
>>> wrote:
> [...]
>>> > This
On Mon, 04 Nov 2013 14:34:23 -0800, jonas.thornvall wrote:
> On Monday, 4 November 2013 at 15:27:19 UTC+1, Dave Angel wrote:
>> On Mon, 4 Nov 2013 05:53:28 -0800 (PST), jonas.thornv...@gmail.com
>> wrote:
[...]
>> > This is not the solution but this is why it is working.
>>
>> > 65536=256
On Mon, 4 Nov 2013 14:34:23 -0800 (PST), jonas.thornv...@gmail.com wrote:
e is an approximation... and your idea is not general for any n.
e is certainly not an approximation, and I never mentioned n.
--
DaveA
On Monday, 4 November 2013 at 15:27:19 UTC+1, Dave Angel wrote:
> On Mon, 4 Nov 2013 05:53:28 -0800 (PST), jonas.thornv...@gmail.com wrote:
> > On Saturday, 2 November 2013 at 22:31:09 UTC+1, Tim Roberts wrote:
> > > Here's another way to look at it. If f(x) is smal
On Monday, November 4, 2013 7:57:19 PM UTC+5:30, Dave Angel wrote:
> On Mon, 4 Nov 2013 05:53:28 -0800 (PST), Jonas wrote:
> > Well let me try to explain why it is working and i have implemented one.
> > I only need to refresh my memory it was almost 15 years ago.
> > This is not the solution but t
On Mon, 4 Nov 2013 05:53:28 -0800 (PST), jonas.thornv...@gmail.com wrote:
On Saturday, 2 November 2013 at 22:31:09 UTC+1, Tim Roberts wrote:
> Here's another way to look at it. If f(x) is smaller than x for every x,
> that means there MUST be multiple values of x that produce the
On Monday, 4 November 2013 at 14:53:28 UTC+1, jonas.t...@gmail.com wrote:
> On Saturday, 2 November 2013 at 22:31:09 UTC+1, Tim Roberts wrote:
> > jonas.thornv...@gmail.com wrote:
> > >
> > >Well then i have news for you.
> >
> > Well,
On Saturday, 2 November 2013 at 22:31:09 UTC+1, Tim Roberts wrote:
> jonas.thornv...@gmail.com wrote:
> >
> >Well then i have news for you.
>
> Well, then, why don't you share it?
>
> Let me try to get you to understand WHY what you say is impossible. Let's
> say you d
On 2013-11-03 19:40, Mark Janssen wrote:
> But you cheated by using a piece of information from "outside the
> system": length. A generic compression algorithm doesn't have this
> information beforehand.
By cheating with outside information, you can perfectly compress any
one data-set down to 1 b
> Note that I *can* make a "compression" algorithm that takes any
> length-n sequence and compresses all but one sequence by at least one
> bit, and does not ever expand the data.
>
> "00" -> ""
> "01" -> "0"
> "10" -> "1"
> "11" -> "00"
>
> This, obviously, is just 'cause the length is an extra pi
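That table can be written out directly, generalised to inputs of exactly n bits (the generalisation is an illustration; the point stands that the decoder only works because n is known out of band):

    from itertools import product

    def all_strings_up_to(n):
        """All bit strings ordered by length, then lexicographically: '', '0', '1', '00', ..."""
        for length in range(n + 1):
            for bits in product("01", repeat=length):
                yield "".join(bits)

    def make_codec(n):
        inputs = ["".join(bits) for bits in product("01", repeat=n)]
        outputs = list(all_strings_up_to(n))[: len(inputs)]
        encode = dict(zip(inputs, outputs))
        decode = {v: k for k, v in encode.items()}
        return encode, decode

    encode, decode = make_codec(2)
    print(encode)   # {'00': '', '01': '0', '10': '1', '11': '00'}
    assert all(decode[encode[s]] == s for s in encode)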
On 3 November 2013 15:34, Joshua Landau wrote:
>I can genuinely compress
> the whole structure by N log2 Y items.
By which I mean 2N items.
On 3 November 2013 03:17, Steven D'Aprano wrote:
> On Sat, 02 Nov 2013 14:31:09 -0700, Tim Roberts wrote:
>
>> jonas.thornv...@gmail.com wrote:
>>>
>>>Well then i have news for you.
>>
>> Well, then, why don't you share it?
>>
>> Let me try to get you to understand WHY what you say is impossible.
On 11/03/2013 12:09 AM, Mark Janssen wrote:
>>> Congratulations Jonas. My kill file for this list used to have only one
>>> name, but now has 2.
>>
>> You have more patience than I! Jonas just made mine seven. :)
>
> Gosh, don't kill the guy. It's not an obvious thing to hardly anyone
> but co
On Sunday 03 November 2013 04:40:45 Ethan Furman did opine:
> On 10/30/2013 01:32 PM, Gene Heskett wrote:
> > Congratulations Jonas. My kill file for this list used to have only
> > one name, but now has 2.
>
> You have more patience than I! Jonas just made mine seven. :)
>
> --
> ~Ethan~
Ye
>> Congratulations Jonas. My kill file for this list used to have only one
>> name, but now has 2.
>
> You have more patience than I! Jonas just made mine seven. :)
Gosh, don't kill the guy. It's not an obvious thing to hardly anyone
but computer scientists. It's an easy mistake to make.
--
On 10/30/2013 12:23 PM, jonas.thornv...@gmail.com wrote:
What i actually saying is that you are indeed... [insult snipped]
*plonk*
On 10/30/2013 01:32 PM, Gene Heskett wrote:
Congratulations Jonas. My kill file for this list used to have only one
name, but now has 2.
You have more patience than I! Jonas just made mine seven. :)
--
~Ethan~
On Sun, Nov 3, 2013 at 2:17 PM, Steven D'Aprano wrote:
> There is a way to apparently get around these limits: store data
> externally, perhaps inside the compression application itself. Then, if
> you just look at the compressed file (the "data.zip" equivalent, although
> I stress that zip compre
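A toy illustration of hiding the payload in the application (a deliberate strawman, with made-up file names): the "archive" is empty, because the data has been smuggled into the generated decompressor.

    def compress(data: bytes, archive="data.xyz", tool="decompress.py"):
        with open(archive, "wb"):
            pass                                   # the archive itself is empty
        with open(tool, "w") as f:
            f.write("PAYLOAD = %r\n" % data)       # the data lives in the tool
            f.write("def decompress(path):\n")
            f.write("    return PAYLOAD\n")
        # Judged by the archive alone the ratio looks miraculous; counting the
        # bytes written to both files, it is of course worse than no compression.

    compress(b"some incompressible-looking bytes")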
On Sat, 02 Nov 2013 14:31:09 -0700, Tim Roberts wrote:
> jonas.thornv...@gmail.com wrote:
>>
>>Well then i have news for you.
>
> Well, then, why don't you share it?
>
> Let me try to get you to understand WHY what you say is impossible.
[snip reasons]
Expanding on Tim's post... the first scena
> Let me try to get you to understand WHY what you say is impossible. Let's
> say you do have a function f(x) that can produce a compressed output y for
> any given x, such that y is always smaller than x. If that were true, then
> I could call f() recursively:
> f(f(...f(f(f(f(f(x)...))
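Tim's recursion argument, sketched with zlib standing in for the claimed f() (zlib of course does not always shrink its input, which is exactly what the loop shows):

    import zlib

    data = bytes(range(256)) * 4          # 1024 bytes
    sizes = [len(data)]
    for _ in range(10):
        data = zlib.compress(data)        # stand-in for the hypothetical f(x)
        sizes.append(len(data))
    print(sizes)   # shrinks once, then stops shrinking
    # If some f() really did shrink *every* input, iterating it would drive any
    # file down to a few bytes -- and a few bytes cannot name every possible file.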
jonas.thornv...@gmail.com wrote:
>
>Well then i have news for you.
Well, then, why don't you share it?
Let me try to get you to understand WHY what you say is impossible. Let's
say you do have a function f(x) that can produce a compressed output y for
any given x, such that y is always smaller t
On Thursday, October 31, 2013 3:00:24 AM UTC+5:30, Joshua Landau wrote:
> What I'm confounded about is this list's inability to recognise a
> troll when it slaps it vocally in the face.
> This isn't like Nikos. There's no "troll vs. incompetent" debate to be
> had.
It's usually called "entertain
On 30/10/2013 14:21, jonas.thornv...@gmail.com wrote:
> I am searching for the program or algorithm that makes the best possible of
> completly (diffused data/random noise) and wonder what the state of art
> compression is.
>
> I understand this is not the correct forum but since i think i have
On Thu, Oct 31, 2013 at 10:01 AM, Tim Chase wrote:
> On 2013-10-30 21:30, Joshua Landau wrote:
>> started talking about compressing *random data*
>
> If it's truly random bytes, as long as you don't need *the same*
> random data, you can compress it quite easily. Lossy compression is
> acceptable
On 2013-10-30 21:30, Joshua Landau wrote:
> started talking about compressing *random data*
If it's truly random bytes, as long as you don't need *the same*
random data, you can compress it quite easily. Lossy compression is
acceptable for images, so why not random files? :-)
import os
inn
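The code above is cut off in the archive; the gag presumably continued along these lines (a guess at the spirit, only the "import os" is original):

    import os

    def compress(random_data: bytes) -> int:
        """Lossy 'compression' of random bytes: keep only the length."""
        return len(random_data)

    def decompress(length: int) -> bytes:
        """You wanted random bytes back, and random bytes you shall have."""
        return os.urandom(length)

    blob = os.urandom(1_000_000)
    restored = decompress(compress(blob))
    assert len(restored) == len(blob)   # same size, equally random, not the same bytes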
On 30/10/2013 21:30, Joshua Landau wrote:
On 30 October 2013 19:18, Mark Lawrence wrote:
On 30/10/2013 19:01, jonas.thornv...@gmail.com wrote:
And your still a stupid monkey i dare you to go test your IQ.
It's you're as in you are and not your as in belongs to me.
I have no intention of ge
On 30 October 2013 19:18, Mark Lawrence wrote:
> On 30/10/2013 19:01, jonas.thornv...@gmail.com wrote:
>>
>> And your still a stupid monkey i dare you to go test your IQ.
>
> It's you're as in you are and not your as in belongs to me.
>
> I have no intention of getting my IQ tested, but I do know
On Wed, Oct 30, 2013 at 11:21 AM, wrote:
> I am searching for the program or algorithm that makes the best possible of
> completly (diffused data/random noise) and wonder what the state of art
> compression is.
Is this an April Fool's Joke? A key idea of "completely" random is
that you *can't
On 2013-10-30, jonas.thornv...@gmail.com wrote:
> I am searching for the program or algorithm that makes the best
> possible of completly (diffused data/random noise) and wonder what
> the state of art compression is.
[...]
> It is of course lossless compression i am speaking of.
For completel
On Wednesday 30 October 2013 16:29:12 jonas.thornv...@gmail.com did opine:
> On Wednesday, 30 October 2013 at 20:46:57 UTC+1, Modulok wrote:
> > On Wed, Oct 30, 2013 at 12:21 PM, wrote:
> >
> > I am searching for the program or algorithm that makes the best
> > possible of comple
On 30-10-13 20:01, jonas.thornv...@gmail.com wrote:
On Wednesday, 30 October 2013 at 19:53:59 UTC+1, Mark Lawrence wrote:
On 30/10/2013 18:21, jonas.thornv...@gmail.com wrote:
I am searching for the program or algorithm that makes the best possible of
completly (diffused data/random no
On Wednesday, 30 October 2013 at 20:46:57 UTC+1, Modulok wrote:
> On Wed, Oct 30, 2013 at 12:21 PM, wrote:
>
> I am searching for the program or algorithm that makes the best possible of
> completly (diffused data/random noise) and wonder what the state of art
> compression is.
On Wed, Oct 30, 2013 at 12:21 PM, wrote:
> I am searching for the program or algorithm that makes the best possible
> of completly (diffused data/random noise) and wonder what the state of art
> compression is.
>
> I understand this is not the correct forum but since i think i have an
> algorithm
On Wednesday, 30 October 2013 at 20:46:57 UTC+1, Modulok wrote:
> On Wed, Oct 30, 2013 at 12:21 PM, wrote:
>
> I am searching for the program or algorithm that makes the best possible of
> completly (diffused data/random noise) and wonder what the state of art
> compression is.
On Wednesday, 30 October 2013 at 20:35:59 UTC+1, Tim Delaney wrote:
> On 31 October 2013 05:21, wrote:
>
> I am searching for the program or algorithm that makes the best possible of
> completly (diffused data/random noise) and wonder what the state of art
> compression is.
On 30/10/2013 19:23, jonas.thornv...@gmail.com wrote:
On Wednesday, 30 October 2013 at 20:18:30 UTC+1, Mark Lawrence wrote:
On 30/10/2013 19:01, jonas.thornv...@gmail.com wrote:
And your still a stupid monkey i dare you to go test your IQ.
It's you're as in you are and not you
On 30/10/2013 19:22, jonas.thornv...@gmail.com wrote:
On Wednesday, 30 October 2013 at 20:18:30 UTC+1, Mark Lawrence wrote:
On 30/10/2013 19:01, jonas.thornv...@gmail.com wrote:
And your still a stupid monkey i dare you to go test your IQ.
It's you're as in you are and not you
On 31 October 2013 05:21, wrote:
> I am searching for the program or algorithm that makes the best possible
> of completly (diffused data/random noise) and wonder what the state of art
> compression is.
>
> I understand this is not the correct forum but since i think i have an
> algorithm that ca
xz compression is pretty hard to beat, if a little bit slow. Also, if you want
really stellar compression ratios and you don't care about time to
compress, you might check out one of the many paq implementations.
I have a module that does xz compression in 4 different ways:
http://stromberg.dnsalias.org/
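Since this is a Python list: the standard library's lzma module exposes xz directly, and it also makes the thread's underlying point in two lines (the sample data here is made up):

    import lzma, os

    text = b"the quick brown fox jumps over the lazy dog " * 1000
    noise = os.urandom(len(text))

    print(len(lzma.compress(text, preset=9)))    # far smaller than 44000
    print(len(lzma.compress(noise, preset=9)))   # slightly *larger* than 44000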
On Wednesday, 30 October 2013 at 20:18:30 UTC+1, Mark Lawrence wrote:
> On 30/10/2013 19:01, jonas.thornv...@gmail.com wrote:
> >
> > And your still a stupid monkey i dare you to go test your IQ.
>
> It's you're as in you are and not your as in belongs to me.
>
> I h
On Wednesday, 30 October 2013 at 20:18:30 UTC+1, Mark Lawrence wrote:
> On 30/10/2013 19:01, jonas.thornv...@gmail.com wrote:
> >
> > And your still a stupid monkey i dare you to go test your IQ.
>
> It's you're as in you are and not your as in belongs to me.
>
> I h
On 30/10/2013 19:01, jonas.thornv...@gmail.com wrote:
And your still a stupid monkey i dare you to go test your IQ.
It's you're as in you are and not your as in belongs to me.
I have no intention of getting my IQ tested, but I do know that it's a
minimum of 120 as that was required for me t
On Wednesday, 30 October 2013 at 19:53:59 UTC+1, Mark Lawrence wrote:
> On 30/10/2013 18:21, jonas.thornv...@gmail.com wrote:
>
> > I am searching for the program or algorithm that makes the best possible of
> > completly (diffused data/random noise) and wonder what the state of art
> > com
On 30/10/2013 18:21, jonas.thornv...@gmail.com wrote:
I am searching for the program or algorithm that makes the best possible of
completly (diffused data/random noise) and wonder what the state of art
compression is.
I understand this is not the correct forum but since i think i have an
algo