Mike Hommey ([EMAIL PROTECTED]):
> On Mon, Dec 06, 2004 at 11:23:48AM +0000, Anders Karlsson <[EMAIL PROTECTED]>
> wrote:
> > Well, yes and no. If you are a normal user, you would not have to
> > tweak anything. If you mess with printing/image manipulating a lot,
> > then you might have to tweak two sets of values, one to tell X what
> > DPI the screen actually is, and one to tell the desktop env what DPI
> > you want the fonts displayed in.
>
> I still fail to see the advantage of having 2 settings of the same
> thing, being the number of dots per inch.
Mike, I think this is the crux of the argument. A key part of my
position has been the assumption that using the actual "dots per inch"
of the display device for rendering the user interface is not useful.
I have offered the following as evidence for this claim:

1. Other operating systems do not use the screen's DPI when rendering
   fonts. On Windows, a different call is used to determine the real
   DPI of the display, separate from the DPI used in text rendering
   (a rough sketch of the two queries is below). This seems to work
   well in practice.

2. Font hints are designed for specific font sizes at certain common
   DPIs, so there is value in using a small set of "standard" DPI
   values for UI rendering (see http://scanline.ca/dpi/fonts.html).

3. The notion of a single DPI becomes more complicated with display
   devices such as data projectors.

Originally, I argued that the DPI values from the X server and from
Xft should be the same thing, and that this should be the "canonical
DPI" used for UI rendering. As many people have pointed out in this
thread, there are other uses for the "real DPI", and there is some
prior art in that these are two separate values on Windows. So keeping
a separately configured DPI value for Xft makes sense (illustrated
below), as does choosing a default canonical value of 96 DPI.
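To make point 1 concrete: I don't remember the exact call applications
tend to use, but the split is visible even through plain
GetDeviceCaps(). A rough, untested sketch:

  /* Win32: the logical DPI GDI uses for text layout vs. a "physical"
   * DPI computed from the screen size the driver reports. */
  #include <stdio.h>
  #include <windows.h>

  int main(void)
  {
      HDC hdc = GetDC(NULL);        /* device context for the screen */

      /* Logical DPI used for font/UI layout (typically 96 or 120). */
      int logical_dpi = GetDeviceCaps(hdc, LOGPIXELSY);

      /* "Physical" DPI derived from the reported screen height in mm. */
      double physical_dpi =
          GetDeviceCaps(hdc, VERTRES) * 25.4 / GetDeviceCaps(hdc, VERTSIZE);

      printf("logical DPI (text rendering): %d\n", logical_dpi);
      printf("physical DPI (screen size)  : %.1f\n", physical_dpi);

      ReleaseDC(NULL, hdc);
      return 0;
  }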
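On the X side, the two values can be queried side by side in much the
same way. Another rough sketch, assuming Xft picks up its DPI from the
Xft.dpi resource (which is my understanding of how libXft behaves):

  /* Xlib: the DPI the server derives from the screen's physical size
   * vs. the Xft.dpi resource used for font rendering when it is set. */
  #include <stdio.h>
  #include <X11/Xlib.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy) {
          fprintf(stderr, "cannot open display\n");
          return 1;
      }
      int scr = DefaultScreen(dpy);

      /* "Real" DPI as reported by the X server. */
      double server_dpi =
          DisplayWidth(dpy, scr) * 25.4 / DisplayWidthMM(dpy, scr);

      /* DPI configured for font rendering, e.g. "Xft.dpi: 96" in
       * ~/.Xresources; NULL if the resource is unset. */
      char *xft_dpi = XGetDefault(dpy, "Xft", "dpi");

      printf("X server DPI: %.1f\n", server_dpi);
      printf("Xft.dpi     : %s\n", xft_dpi ? xft_dpi : "(not set)");

      XCloseDisplay(dpy);
      return 0;
  }

-Billy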