According to upstream, there is a qualitative change in coding strategy
when -lossiness is increased above 100, rather than just a quantitative
one.  So I don't think the problem you're worrying about will actually
occur.  But you might want to consider using a less aggressive setting,
like -lossless or -clean, and see how much compression is actually
sacrificed.
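
For example, if the encoder here is cjb2 from DjVuLibre (a guess on my
part based on the option names; it exposes this knob as -losslevel, and
the file names below are made up), a quick size comparison could look
like this:

#!/usr/bin/env python3
# Rough sketch: encode one bitonal scan at several loss levels and
# compare the resulting file sizes.  Assumes cjb2 from DjVuLibre.
import os
import subprocess
import sys

def encode(page_pbm, out_djvu, losslevel):
    # cjb2 accepts bitonal PBM (or TIFF) input; -losslevel 0 is lossless.
    subprocess.run(["cjb2", "-losslevel", str(losslevel), page_pbm, out_djvu],
                   check=True)
    return os.path.getsize(out_djvu)

if __name__ == "__main__":
    page = sys.argv[1]              # e.g. scan-0001.pbm
    for level in (0, 1, 100, 200):  # lossless, -clean, -lossy, maximum
        size = encode(page, "page-l%d.djvu" % level, level)
        print("losslevel %3d: %d bytes" % (level, size))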

When I scan works like that myself, I use -lossless, save those files,
and then compress further for distribution if necessary.  That way I
can always get back to the original scans and re-encode them later with
future versions that offer features not available today.
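
In script form, that workflow is roughly the following (again assuming
the DjVuLibre tools cjb2 and ddjvu, with placeholder file names):

#!/usr/bin/env python3
# Sketch of the archive-then-distribute workflow described above.
import subprocess

def archive(scan_pbm, master_djvu):
    # Lossless master copy (-losslevel 0); this is the file to keep.
    subprocess.run(["cjb2", "-losslevel", "0", scan_pbm, master_djvu],
                   check=True)

def distribute(master_djvu, dist_djvu, losslevel=100):
    # Recover the original bitmap from the lossless master, then
    # re-encode it more aggressively for the distribution copy.
    subprocess.run(["ddjvu", "-format=pbm", master_djvu, "page.pbm"],
                   check=True)
    subprocess.run(["cjb2", "-losslevel", str(losslevel), "page.pbm",
                    dist_djvu], check=True)

if __name__ == "__main__":
    archive("scan-0001.pbm", "master-0001.djvu")
    distribute("master-0001.djvu", "dist-0001.djvu")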

But bottom line: according to upstream, the issue you are worrying
about should not be a problem.
--
Barak A. Pearlmutter <[EMAIL PROTECTED]>
 Hamilton Institute & Dept Comp Sci, NUI Maynooth, Co. Kildare, Ireland
 http://www-bcl.cs.nuim.ie/~barak/

