On 11/27/17 1:57 PM, bartc wrote:
On 27/11/2017 17:41, Chris Angelico wrote:
On Tue, Nov 28, 2017 at 2:14 AM, bartc <b...@freeuk.com> wrote:
JPEG uses lossy compression. The resulting recovered data is an
approximation of the original.

Ah, but it is a perfect representation of the JPEG stream. Any given
compressed stream must always decode to the same output. The lossiness
is in the ENcoding, not the DEcoding.
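
That much is easy to check for any single decoder: decode the same stream twice and compare the raw bytes. A minimal sketch, assuming Pillow is installed and the test image linked further down is saved as card2.jpg:

from PIL import Image

# Decode the same JPEG stream twice with the same decoder and
# compare the raw RGB bytes.
a = Image.open("card2.jpg").convert("RGB").tobytes()
b = Image.open("card2.jpg").convert("RGB").tobytes()
print("identical decodes:", a == b)

Of course, this only shows that one decoder agrees with itself; whether two independent decoders agree is a separate question.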

You make it sound as exacting as currency calculations, where different software must produce results that always match to the nearest cent.

We're talking about perhaps a +1 or -1 difference in the least significant bit of one channel of one pixel. If the calculation is consistent, then you will not know anything is amiss.

By +1 or -1, I mean compared with the same JPEG converted by independent means.

I also passed the same JPEG through another C program (not mine) using its own algorithms. There, some pixels varied by up to +/-9 from the others (looking at the first 512 bytes of the conversion to PPM).
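
A comparison like that is easy to script: skip the PPM headers and diff the pixel bytes. A rough sketch, assuming two binary (P6) PPM conversions of the same JPEG under the hypothetical names out_a.ppm and out_b.ppm, with no '#' comment lines in their headers:

def read_ppm_pixels(path):
    # Return the raw pixel bytes of a binary (P6) PPM file.
    with open(path, "rb") as f:
        data = f.read()
    # The header is four whitespace-separated tokens (magic, width,
    # height, maxval), then a single whitespace byte before the pixels.
    pos, tokens = 0, []
    while len(tokens) < 4:
        while data[pos:pos + 1].isspace():
            pos += 1
        start = pos
        while not data[pos:pos + 1].isspace():
            pos += 1
        tokens.append(data[start:pos])
    assert tokens[0] == b"P6", "not a binary PPM"
    return data[pos + 1:]

a = read_ppm_pixels("out_a.ppm")
b = read_ppm_pixels("out_b.ppm")
print("max difference over the first 512 bytes:",
      max(abs(x - y) for x, y in zip(a[:512], b[:512])))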

Here's my test image: https://github.com/bartg/langs/blob/master/card2.jpg (nothing naughty).

Tell me what the definitive values of the pixels in this part of the image should be (I believe this corresponds to roughly the leftmost 180 pixels of the top line; there are two million pixels in all). And tell me WHY you think those are the definitive ones.
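
For anyone who wants to compare notes, here is a sketch that dumps that span using Pillow; Pillow is just one decoder among several, so its values carry no special authority either:

from PIL import Image

img = Image.open("card2.jpg").convert("RGB")
# The leftmost 180 pixels of the top line, as (R, G, B) tuples.
print([img.getpixel((x, 0)) for x in range(180)])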

Bear in mind also that this is not intended to be the world's most perfect software, but it does a good enough job. And the other two Python versions I have at the minute don't work at all.



Surely the details of JPEG decompression, and the possible presence of flaws in certain decoders, are beyond the scope of this thread?

--Ned.