Mike Ignatiev wrote:
Basically, the third step is where the "pixel" comes into play. In theory,
one can write a codec for an arbitrary colour depth; in practice, I have yet
to see one. Since all "observable" codecs work on 24-bit colour, there's good
reason to think of JPEG as a 24-bit format (if you want to be able to read it
back, anyway).

No, no, no. :-)


JPEG is a *lossy* standard. The fact that all "observable" codecs work on 24 bit images sets an *upper limit* on the colour depth of an image produced from a JPEG file. Talking about the colour depth of a JPEG file itself is meaningless, due to the way the compression works.

In practice, the representation of colours in an image *will* be affected by JPEG encoding, but it won't be a simple function of colour depth, it'll appear as quantisation noise. (This isn't the same as the "bright noise" surrounding sharp edges that is caused by the spatial quantisation, although the latter is usually more of an issue.)
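To make that concrete, here is a toy sketch in plain Python of the effect. It uses a 1-D DCT on a single row of 8 samples (real JPEG uses a 2-D 8x8 DCT per channel, and the step size q = 16 here is just a made-up value standing in for one entry of a quantisation table): the reconstruction error varies from sample to sample depending on the signal, instead of being a uniform truncation of the low bits.

```python
import math

def dct(block):
    # Naive orthonormal DCT-II of a 1-D block.
    N = len(block)
    out = []
    for k in range(N):
        s = sum(x * math.cos(math.pi * (n + 0.5) * k / N)
                for n, x in enumerate(block))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

def idct(coefs):
    # Naive orthonormal inverse (DCT-III).
    N = len(coefs)
    out = []
    for n in range(N):
        s = coefs[0] * math.sqrt(1 / N)
        s += sum(math.sqrt(2 / N) * c * math.cos(math.pi * (n + 0.5) * k / N)
                 for k, c in enumerate(coefs) if k > 0)
        out.append(s)
    return out

block = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of 8-bit samples
q = 16                                      # hypothetical quantisation step

coefs = dct(block)
quantised = [round(c / q) * q for c in coefs]   # the lossy step
restored = [round(x) for x in idct(quantised)]

errors = [r - b for r, b in zip(restored, block)]
print("per-sample errors:", errors)
```

The errors are small and signal-dependent, which is exactly why "what colour depth does this JPEG have?" has no clean answer: the loss lives in the transform coefficients, not in the pixel bits.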

JPEG is only "8-bit" in that with the current generation of codecs, you cannot obtain a colour depth greater than that.

Bringing the discussion back to William's original question, the fact that JPEG codecs usually only code from and to 8-bit-per-channel RGB images is probably not the reason for the loss of quality mentioned. Some more details about the specific problem would help narrow down the cause some more. :-)

S
