Re: [XeTeX] OldStyle Numbers not changeable to Lining Numbers

2010-08-17 Thread M. Niedermair

Hi,

try

\documentclass{article}
\usepackage[osf]{libertine}

\newfontfamily\libertineX[Mapping=tex-text,
  RawFeature=+liga% ;+pnum
 ]{Linux Libertine O}

\begin{document}
A0123456789

{\libertineX
B0123456789
}

A0123456789

\end{document}

tested with libertine.sty 4.8.4 and xelatex 3.1415926-2.2-0.9997.4 (TeX 
Live 2010).


You can change the RawFeature string as you like:
smcp, frac, hlig, dlig, lnum, pnum, zero, ...
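For example, a variant along the same lines (just a sketch, not tested here;
which tags actually do anything depends on the font) that switches on small
caps and the slashed zero instead:

% \libertineSC is only an illustrative name; the feature string is what matters
\newfontfamily\libertineSC[Mapping=tex-text,
  RawFeature={+liga;+smcp;+zero}
 ]{Linux Libertine O}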

Bye
Michael



--
Subscriptions, Archive, and List information, etc.:
 http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Good Text Font + Math Font Combination

2010-08-17 Thread Peter Dyballa
Does Thunderbird have a way to send a message to this list without
usurping an existing thread? Some users on this list, including me,
prefer to have clean threads in their e-mail applications, so let us not
start a new topic by replying to an old message and merely replacing the
original subject.


Because some "metadata" is kept in the new message...

--
With peaceful greetings

  Pete

Windows is a bit like Beaujolais Nouveau: with every new vintage you
know it's going to be disgusting, but you take it anyway, out of
masochism.





--
Subscriptions, Archive, and List information, etc.:
 http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Good Text Font + Math Font Combination

2010-08-17 Thread Philipp Stephani
Am 17.08.2010 um 10:31 schrieb Peter Dyballa:

> Has Thunderbird a way to send a message to this list without usurping an 
> existing thread?

Posting a new message (without using the reply function) to xetex@tug.org 
should start a new thread.


--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


[XeTeX] OldStyle Numbers not changeable to Lining Numbers

2010-08-17 Thread Georg A. Duffner

Peter Baker wrote:


…
The various OpenType
features are supposed to do this:

lnum (Lining Numbers) converts oldstyle-height numbers to full-height
tnum (Tabular Numbers) converts proportional-width numbers to fixed-width
pnum (Proportional Numbers) converts fixed-width numbers to
proportional-width
onum (Old-Style numbers) converts full-height numbers to oldstyle-height

The trouble is, the lookups are ordered exactly as above. So lnum
(Lining Numbers) can *never* be invoked, since it can operate only on
one.oldstyle or one.taboldstyle, which can only be produced by lookups
that come later in the sequence. Likewise, tnum (Tabular Numbers) can
never be invoked. pnum is supposed to convert one to one.fitted or
one.taboldstyle to one.oldstyle, but the latter substitution can never
happen, since one.taboldstyle is only produced by the later onum lookup.
Finally onum, being last in the list, works as expected, converting
either one to one.taboldstyle or one.fitted to one.oldstyle.
…


This does indeed seem to be the problem. I have put the features in a 
new order, and this solved the issue. You may try the result at 
http://github.com/georgd/Linux-Libertine
If there’s interest, I can fix the other font files too. Later today I 
will file a bug on SourceForge.
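For anyone who wants to try the reordered font, a minimal check (only a
sketch; it assumes the rebuilt Linux Libertine O is the copy that XeTeX
actually picks up) is the example from the original report, which should
now switch to lining figures:

\documentclass{article}
\usepackage{fontspec}
\defaultfontfeatures{Numbers={OldStyle}}
\setmainfont[Mapping=tex-text]{Linux Libertine O}
\begin{document}
A0123456789            % old-style figures

{\addfontfeatures{Numbers={Lining}}A0123456789} % should now come out lining
\end{document}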


Georg


--
Subscriptions, Archive, and List information, etc.:
 http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Problems with thickness of \frac rule and width of accents (\hat) with XeLaTeX

2010-08-17 Thread Khaled Hosny
On Mon, Aug 16, 2010 at 12:15:35AM +0200, Ulrik Vieth wrote:
> Hi,
> 
> I tested it with both XeLaTeX and LuaLaTeX (both from TL2010 pretest).
> In short, the problem only occurs in XeLaTeX, but not in LuaLaTeX,
> despite using the same macro packages and fonts for both engines.
> 
> I do not really understand the problem with the fraction rule thickness.
> It probably should be the same on both engines (despite conceptual
> differences), as it should be based on the same OpenType parameter
> FractionRuleThickness, but it seems to be different nonetheless:

Likely XeTeX does not check this parameter at all and resorts to some
hard coded default rule thickness.

Regards,
 Khaled

-- 
 Khaled Hosny
 Arabic localiser and member of Arabeyes.org team
 Free font developer


--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] xe(la)tex to epub?

2010-08-17 Thread Khaled Hosny
On Mon, Aug 16, 2010 at 10:37:18AM -0700, Michiel Kamermans wrote:
> Hi all,
> 
> just wondering: is there an output driver that will generate an epub
> rather than a pdf file from xe(la)tex source? I know it's less precise
> than pdf files in terms of boxing, but epub demand is high, and it
> allows reflowing much more naturally than pdf, making it a far more
> suitable format for documents that are released for small-screen
> devices (regardless of whether we call them ereaders, tablet pcs,
> slates, or etch-a-sketches). Basically it'd be a good complement to
> the standard pdf output when generating public documents (parallel
> format generation makes users and customers happy), but I don't think
> I've seen an output driver for this... just custom "latex to .epub
> converters", which aren't really useful. Something that operates on
> xdv/dvi would be far superior.

AFAIK, epub is just a subset of XHTML with a subset of CSS2, so IMO not
the kind of output format that is very well suited for TeX (well, I
hardly consider HTML an output format at all; the output is what the
browser renders out of it).

Regards,
 Khaled

-- 
 Khaled Hosny
 Arabic localiser and member of Arabeyes.org team
 Free font developer


--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Good Text Font + Math Font Combination

2010-08-17 Thread Khaled Hosny
On Tue, Aug 17, 2010 at 12:05:17AM +0200, Tobias Schoel wrote:
> Hi,
> 
> as there seems to be only Asana Math and XITS Math as free and
> complete OpenType Math fonts distributed along texlive the following
> question arises for me (as I used to use Linux Libertine as text
> font):
> 
> What is a good text font in Combination with Asana Math respectively
> XITS Math?

Asana is based on Palatino, so any Palatino-style font would be good;
TeX Gyre Pagella in particular should do (though it has terrible Greek).
As for XITS, it is already a complete family: you have XITS for regular
text and XITS Math for math, all on CTAN (and TeX Live 2010).
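For reference, a minimal preamble along those lines (only a sketch; it
assumes XeLaTeX with unicode-math and that the fonts are installed under
these names):

\documentclass{article}
\usepackage{unicode-math}
\setmainfont{TeX Gyre Pagella}  % text face to pair with Asana
\setmathfont{Asana Math}
% or, for the other pairing:
% \setmainfont{XITS}
% \setmathfont{XITS Math}
\begin{document}
Some text with math: $\int_0^1 x^{2}\,dx = \frac{1}{3}$.
\end{document}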

Regards,
 Khaled

-- 
 Khaled Hosny
 Arabic localiser and member of Arabeyes.org team
 Free font developer


--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Problems with thickness of \frac rule and width of accents (\hat) with XeLaTeX

2010-08-17 Thread Ulrik Vieth

On 08/17/2010 04:32 PM, Khaled Hosny wrote:


Likely XeTeX does not check this parameter at all and resorts to some
hard coded default rule thickness.


No, that cannot be the reason. I know for sure that XeTeX does load some 
(but not all) of the OpenType font parameters and maps them to TeX 
fontdimen parameters. I have seen that myself in the web2c sources:


http://www.tug.org/svn/texlive/trunk/Build/source/texk/web2c/xetexdir/XeTeXOTMath.cpp?view=markup

However, I have now realized that the problem is caused by a mismatch 
between the unicode-math LaTeX package and the XeTeX engine:


When LaTeX starts out, it has LM/CM fonts preloaded in families 0-3.
When unicode-math loads another OpenType math font with \setmathfont, it 
loads the font into a new family, e.g. family 4.


Now XeTeX's math engine still behaves very much like a traditional TeX,
which expects math font parameters to come from families 2 and 3,
e.g. rule thickness from \fontdimen8\textfont3 (xi_8 in Appendix G).

Since families 2 and 3 still contain the unchanged TFM-based CM fonts 
with default parameters, we get the default rule thickness of 0.4pt.
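(A quick way to check this, as a sketch: \fontdimen8 of the family-3
text font is exactly this parameter, so

$x$  % force LaTeX to set up the math fonts for the current size
\showthe\fontdimen8\textfont3  % reports about 0.4pt while CM sits in family 3

in the document body shows which font is actually supplying the rule
thickness; after the reassignment in the attached file it reports the
value from the OpenType font instead.)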


What would be needed is to load the selected OpenType math font not 
just into a new family 4 (for access to the glyphs in unicode-math), but 
also into families 2 and 3 (for access to the font metric parameters).


Unfortunately, there seems to be a major conceptual difference between 
XeTeX and LuaTeX here with respect to font loading of OT math fonts, 
which cannot be resolved quickly without major changes to the engine.


The most feasible short-term solution probably would be to apply a 
XeTeX-specific workaround in the font loading code in unicode-math.


Regards, Ulrik.


P.S.: The attached test file shows a quick workaround, which reassigns
the fonts in families 2 and 3 to the font loaded in family 4.

The first equation shows the incorrect fraction rule thickness from CM, 
and the second one the correct thickness from XITS Math.
\documentclass[12pt,a4paper]{article}
\usepackage{unicode-math}

\setmainfont{XITS}
\setmathfont{XITS Math}

\begin{document}
\tracingoutput=1
\showboxbreadth=\maxdimen
\showboxdepth=\maxdimen

\begin{displaymath}
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \quad \nabla \cdot \mathbf{D} = \rho_f
\end{displaymath}

\textfont2=\textfont4
\scriptfont2=\scriptfont4
\scriptscriptfont2=\scriptscriptfont4
\textfont3=\textfont4
\scriptfont3=\scriptfont4
\scriptscriptfont3=\scriptscriptfont4

\begin{displaymath}
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \quad \nabla \cdot \mathbf{D} = \rho_f
\end{displaymath}

\end{document}

--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Problems with thickness of \frac rule and width of accents (\hat) with XeLaTeX

2010-08-17 Thread Khaled Hosny
On Tue, Aug 17, 2010 at 09:54:00PM +0200, Ulrik Vieth wrote:
> On 08/17/2010 04:32 PM, Khaled Hosny wrote:
> >
> >Likely XeTeX does not check this parameter at all and resorts to some
> >hard coded default rule thickness.
> 
> No, that cannot be the reason. I know for sure that XeTeX does load
> some (but not all) of the OpenType font parameters and maps them to
> TeX fontdimen parameters. I have seen that myself in the web2c
> sources:
> 
> http://www.tug.org/svn/texlive/trunk/Build/source/texk/web2c/xetexdir/XeTeXOTMath.cpp?view=markup
> 
> However, I have now realized that the problem is caused by a
> mismatch between the unicode-math LaTeX package and the XeTeX
> engine:
> 
> When LaTeX starts out, it has LM/CM fonts preloaded in families 0-3.
> When unicode-math loads another OpenType math font with
> \setmathfont, it loads the font into a new family, e.g. family 4.

I thought about that possibility, but I don't have the expertise to
check it myself. Thanks.

-- 
 Khaled Hosny
 Arabic localiser and member of Arabeyes.org team
 Free font developer


--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Problems with thickness of \frac rule and width of accents (\hat) with XeLaTeX

2010-08-17 Thread Khaled Hosny
On Tue, Aug 17, 2010 at 09:54:00PM +0200, Ulrik Vieth wrote:
[...]
> Unfortunately, there seems to be a major conceptual difference
> between XeTeX and LuaTeX here with respect to font loading of OT
> math fonts, which cannot be resolved quickly without major changes
> to the engine.
> 
> The most feasible short-term solution probably would be to apply a
> XeTeX-specific workaround in the font loading code in unicode-math.

The test file seems to work OK on both engines; if there are no major
issues that would result from assigning to families 2 and 3 in LuaTeX,
then I think unicode-math should always do that.

Hmm, thinking a bit more, this is likely to break legacy math control
sequences that have no equivalent in unicode-math yet and currently just
grab a glyph from CM. More seriously, it will break \overbrace and the
like, since XeTeX support for those seems not to be working and
unicode-math simply uses the CM constructs for them.

-- 
 Khaled Hosny
 Arabic localiser and member of Arabeyes.org team
 Free font developer


--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] xe(la)tex to epub?

2010-08-17 Thread Michiel Kamermans

Khaled,


AFAIK, epub is just a subset of XHTML with a subset of CSS2, so IMO not the kind 
of output format that is very well suited for TeX (well, I hardly consider HTML 
an output format at all; the output is what the browser renders out of it).


True, but CSS uses a box model too, so it should be possible to create 
an "initial view" document that -- provided the render engine is 
properly compliant -- looks essentially the same as a generated pdf 
(barring special pdf commands, of course). Given the pretty rigid 
description in the W3C documentation of how CSS should be rendered, any 
x(ht)ml+css document is a proper format, be that for print or screen 
(note that there are a number of stand-alone CSS render engines which 
don't rely on browsers, but are meant for integration into reader 
applications and devices, for instance). The difference between 
something like epub and pdf is that the layout in the former is mutable. 
For digital readers, with many different viewport sizes and aspect 
ratios, that's highly desirable. For print media the epub format is, of 
course, nonsense. Hence the desire for parallel format generation.


- Mike


--
Subscriptions, Archive, and List information, etc.:
 http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Problems with thickness of \frac rule and width of accents (\hat) with XeLaTeX

2010-08-17 Thread Ulrik Vieth

On 08/17/2010 10:12 PM, Khaled Hosny wrote:


Hmm, thinking a bit more, this is likely to break legacy math control
sequences that have no equivalent in unicode-math yet and currently just
grab a glyph from CM. More seriously, it will break \overbrace and the
like, since XeTeX support for those seems not to be working and
unicode-math simply uses the CM constructs for them.


Good question. Perhaps one might argue that unicode-math should not rely 
on anything being taken from CM.  If something such as \overbrace is not 
readily available in XeTeX, it should probably provide its own 
redefinition in terms of Unicode slots (if that is possible). However, 
this might also get tricky in terms of which building blocks are encoded 
as Unicode slots and which exist only in the Private Use Area.


Hmmm, this probably still needs more thinking...

Regards, Ulrik.



--
Subscriptions, Archive, and List information, etc.:
 http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] xe(la)tex to epub?

2010-08-17 Thread Khaled Hosny
On Tue, Aug 17, 2010 at 01:16:02PM -0700, Michiel Kamermans wrote:
> Khaled,
> 
> >AFAIK, epub is just a subset of XHTML with a subset of CSS2, so IMO not the 
> >kind of output format that is very well suited for TeX (well, I hardly 
> >consider HTML an output format at all; the output is what the browser 
> >renders out of it).
> 
> True, but CSS uses a box model too, so it should be possible to
> create an "initial view" document that -- provided the render engine
> is properly compliant -- essentially looks the same as a generated
> pdf (barring special pdf commands, of course). Given the pretty
> rigid description of how CSS should be rendered by the w3c
> documentation for it, any x(ht)ml+css document is a proper format
> (be that for print or screen; note that there are a number of
> stand-alone CSS render engines which don't rely on browsers, but are
> meant for integration into reader applications and devices, for
> instance). The difference between something like epub and pdf is
> that the layout in the former is mutable. For digital readers, with
> many different viewport sizes and aspect ratios, that's highly desirable.
> For print media the epub format is, of course, nonsense. Hence the
> desire for parallel format generation.

I understand the benefits of EPUB; what I don't understand is the need
for TeX at all. (X)HTML is dynamic by nature: you should be able to
resize or change the text size and the layout will re-flow, so forcing a
rigid, box-based layout that is a direct translation of TeX output just
does not make much sense to me. I have the feeling that you are looking
for the wrong solution to the problem. One of the strengths of TeX that
I miss in almost all HTML renderers is decent line-breaking and
hyphenation algorithms. While I don't know of any HTML engines,
especially browsers, that have given much attention to this, there are
JavaScript implementations of TeX's line-breaking and hyphenation
algorithms; assuming EPUB readers can execute JavaScript, I think this
is a good compromise. See [1] for an example (some interesting links
near the end, too).

[1] http://typophile.com/node/71247

Regards,
 Khaled

-- 
 Khaled Hosny
 Arabic localiser and member of Arabeyes.org team
 Free font developer


--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] xe(la)tex to epub?

2010-08-17 Thread Ross Moore
Hi Khaled and Michiel,

On 18/08/2010, at 6:58 AM, Khaled Hosny wrote:

> On Tue, Aug 17, 2010 at 01:16:02PM -0700, Michiel Kamermans wrote:
>> Khaled,
>> 
>>> AFAIK, epub is just a subset of XHTML with a subset of CSS2, so IMO not the 
>>> kind of output format that is very well suited for TeX (well, I hardly 
>>> consider HTML an output format at all; the output is what the browser 
>>> renders out of it).

>> For print media the epub format is, of course, nonsense. Hence the
>> desire for parallel format generation.
> 
> I understand the benefits of EPUB, what I don't understand is the need
> for TeX at all.

To me the problem is not about using TeX for formatting,
it is about obtaining different output formats from
the same (La)TeX sources --- especially when math formulas,
and other 2-dimensional layouts, are involved.

Since ePub, and similar, are XML- or XHTML-based, you want the
detailed structure of the tagging to be produced automatically,
without having to make edits on each output result, to "get it right".
You want to enter your information in just one place, in a language
that the author already understands and can use effectively.
Software should then do the rest, modulo possible minor tweaking 
at the end.


This is not simply a matter of redefining macros, because the
structure rules for the markup can be quite different for different
output formats. So any translation software needs some kind of knowledge
about what macros are being used for, and what kinds of things will
follow after them.
Since LaTeX, with PDF as a major form of output, figures to be the
input format that authors are comfortable with, it is desirable for
encoding the author's work --- though some may say it ought to be in XML.

And since TeX already understands the expansion of macros and their 
arguments, it is attractive to want to use it as a starting point
for generating other formats; but certainly it cannot be the 
whole shebang.

For instance, in my work for Tagged PDF, an XML version will be able
to be exported (using Adobe Acrobat Pro) from the complete PDF.
Mathematics will be fully tagged as MathML, in this view.
Some PDF readers may only see the rendered pages, but others may
be able to use the tagging to extract an alternative view suitable
to their own display screen.

> (X)HTML is dynamic by nature, you should be able to
> resize or change text size and the layout will re-flow, forcing a rigid,
> box based layout that is a direct translation of TeX output just does
> not make much sense to me.

I agree that it is not the TeX *output* that needs to be further 
processed, but the input source --- or something intermediate 
that can be generated and written to a file as a by-product 
of LaTeX processing, with extra packages loaded to achieve this.

TeX4ht works by putting extra information into the .dvi file, 
to encode the required tagging. An extra post-processor is required
to extract this information, producing HTML or XML or whatever.
That is very similar to what I do for Tagged PDF, where the 
extra post-processor is Acrobat Pro. This is even more flexible
than TeX4ht, since Acrobat can export into a range of formats, 
whereas TeX4ht only produces the format that was specified when 
the .dvi was being created.


> I've the feeling that you are looking for the
> wrong solution to the problem. One of the strengths of TeX that I mis
> in almost all HTML renderers is decent line breaking and hyphenation
> algorithms. While I don't know any any HTML engines, especially
> browsers, that have given much attention to this, there are JavaScript
> implementations of TeX's line breaking and hyphenation algorithms,
> assuming EPUB readers can execute JavaScript, I think this is a good
> compromise. See [1] for example (some interesting links near the end,
> too).
> 
> [1] http://typophile.com/node/71247
> 
> Regards,
> Khaled



Hope this helps,

Ross


Ross Moore   ross.mo...@mq.edu.au 
Mathematics Department   office: E7A-419  
Macquarie University tel: +61 (0)2 9850 8955
Sydney, Australia  2109  fax: +61 (0)2 9850 8114







--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] xe(la)tex to epub?

2010-08-17 Thread Khaled Hosny
On Wed, Aug 18, 2010 at 08:11:06AM +1000, Ross Moore wrote:
> Hi Khaled and Michiel,
> 
> On 18/08/2010, at 6:58 AM, Khaled Hosny wrote:
> 
> > On Tue, Aug 17, 2010 at 01:16:02PM -0700, Michiel Kamermans wrote:
> >> Khaled,
> >> 
> >>> AFAIK, epub is just a subset of XHTML with a subset of CSS2, so IMO not the 
> >>> kind of output format that is very well suited for TeX (well, I hardly 
> >>> consider HTML an output format at all; the output is what the browser 
> >>> renders out of it).
> 
> >> For print media the epub format is, of course, nonsense. Hence the
> >> desire for parallel format generation.
> > 
> > I understand the benefits of EPUB, what I don't understand is the need
> > for TeX at all.
> 
> To me the problem is not about using TeX for formatting,
> it is about obtaining different output formats from
> the same (La)TeX sources --- especially when math formulas,
> and other 2-dimensional layouts, are involved.
> 
> Since ePub, and similar, are XML- or XHTML-based, you want the
> detailed structure of the tagging to be produced automatically,
> without having to make edits on each output result, to "get it right".
> You want to enter your information in just one place, in a language
> that the author already understands and can use effectively.
> Software should then do the rest, modulo possible minor tweaking 
> at the end.

If that is the case, I wouldn't start with TeX as the input format, but
with something else that is easier to parse with third-party tools to
get different output formats. XML is preferred by industry, and there
are structured XML-based formats like DocBook, with tools to convert
them to many output formats, including HTML and LaTeX or even EPUB.

However, if I were to do that myself, I'd try something much simpler,
like Markdown.

> This is not just simply a matter of redefining macros, because the
> structure rules for the markup can be quite different for different
> output formats. So some kind of knowledge about what macros are being
> used for, and what kinds of things will follow after, is required 
> of any translation software. 
> Since LaTeX, processing to PDF as a major form of output, figures
> to be the comfortable input format, this is desirable for encoding
> the author's work --- though some may say it ought to be in XML.
> 
> And since TeX already understands the expansion of macros and their 
> arguments, it is attractive to want to use it as a starting point
> for generating other formats; but certainly it cannot be the 
> whole shebang.

Trying to parse TeX input is not something I'd attempt while in my
right mind, but others have done it; plasTeX seems to work nicely and
generates clean HTML. But since you lose all the visual formatting of
TeX, the remaining structural formatting is not worth the trouble: you
can get it with more parser-friendly formats.

> For instance, in my work for Tagged PDF, an XML version will be able
> to be exported (using Adobe Acrobat Pro) from the complete PDF.
> Mathematics will be fully tagged as MathML, in this view.
> Other PDF readers may only see the rendered pages, but others may
> be able to use the tagging to extract an alternative view suitable
> to their own display screen.
> 
> > (X)HTML is dynamic by nature, you should be able to
> > resize or change text size and the layout will re-flow, forcing a rigid,
> > box based layout that is a direct translation of TeX output just does
> > not make much sense to me.
> 
> I agree that it is not the TeX *output* that needs to be further 
> processed, but the input source --- or something intermediate 
> that can be generated and written to a file as a by-product 
> of LaTeX processing, with extra packages loaded to achieve this.
> 
> TeX4Ht works by putting extra information into the .dvi file, 
> to encode the required tagging. An extra post-processor is required
> to extract this information, producing HTML or XML or whatever.
> That is very similar to what I do for Tagged PDF, where the 
> extra post-processor is Acrobat Pro. This is even more flexible
> than TeX4HT, since Acrobat can export into a range of formats, 
> whereas TeX4ht only produces the format that was specified when 
> the .dvi was being created.

As I wrote above, if it is about the structural formatting, then it is
not worth the trouble; it can be achieved with almost every tool and
document format out there (even office suites can build structured
documents). It is the visual, precise output where TeX excels, and that
is totally lost during such conversions.

Such a conversion can be useful, however, if one has existing TeX
material that needs to be processed into another output format, though
one can still argue that converting it once to some sort of XML is a
much better long-term plan.

Don't get me wrong, I like TeX syntax and find it easier to author
with than many other markups, but I accept that it does not fit every
need.

Regards,
 Khaled

-- 
 Khaled Hosny
 Arabic localiser and member of Arabeyes.org team
 Free font developer

Re: [XeTeX] xe(la)tex to epub?

2010-08-17 Thread Michiel Kamermans

Khaled, Ross,


As I wrote above, if it is about the structural formatting, then it is
not worth the trouble; it can be achieved with almost every tool and
document format out there (even office suites can build structured
documents). It is the visual, precise output where TeX excels, and that
is totally lost during such conversions.

Such a conversion can be useful, however, if one has existing TeX
material that needs to be processed into another output format, though
one can still argue that converting it once to some sort of XML is a
much better long-term plan.

Don't get me wrong, I like TeX syntax and find it easier to author
with than many other markups, but I accept that it does not fit every
need.


I write textbooks. I write these in TeX, because that allows me to 
easily modify very large, structured documents. I have used DocBook in 
the past, and the best way I can summarise it is "easy to write 
initially, migraine-inducingly insane to update or revise". It is really 
easy to mark up a document as DocBook, and it is then very hard to 
modify the structure without getting so frustrated with the utterly 
inadequate DocBook editors on the market that you resort to completely 
wiping the document's markup, moving everything around, and then 
reapplying all the markup.


TeX, on the other hand, is "steep learning curve for the initial 
document, child's play to revise". It's why I gave up DocBook in favour 
of TeX. So that's my situation. My texts are in TeX, and we take it from 
there: I would like to generate not just pdf, but also epub from these 
sources, without having to write a completely different book using a 
completely different toolchain that gives me two completely different 
documents with the same words in them... I can't even begin to imagine 
the potential for errors and inconsistencies that introduces =)


It's not really about a favourite "syntax", it's about having a tool that 
already produces a device-independent document format that then gets 
converted to a specific device-readable format. Can that independent 
format also be converted to epub? If it can't, that's unfortunate, and 
perhaps someone will end up writing a dvi-to-epub driver (limited in its 
functionality by what epub offers for document layout). If it can, then 
that's great and I'd like to start using it as soon as possible.


- Mike


--
Subscriptions, Archive, and List information, etc.:
 http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] xe(la)tex to epub?

2010-08-17 Thread Ross Moore
Hi Michiel,

On 18/08/2010, at 10:28 AM, Michiel Kamermans wrote:

> Khaled, Ross,

>> Such a conversion can be useful, however, if one has existing TeX material that needs
>> to be processed into another output format, though one can still argue that
>> converting it once to some sort of XML is a much better long-term plan.
>> 
>> Don't get me wrong, I like TeX syntax and find it easier to author
>> with than many other markups, but I accept that it does not fit every
>> need.
>>   
> 
> I write textbooks. I write these in TeX, because that allows me to easily 
> modify very large, structured documents. I have used DocBook in the past, and 
> the best way I can summarise it, is "easy to write initially, migraine 
> inducingly insane to update or revise". It is really easy to mark up a 
> document as DocBook, and it is then very hard to modify the structure without 
> getting so frustrated with the utterly inadequate DocBook editors on the 
> market that you resort to completely wiping the document's markup, moving 
> everything around, and then reapplying all the markup.
> 
> TeX, on the other hand, is "steep learning curve for the initial document, 
> child's play to revise". It's why I gave up DocBook in favour of TeX. So 
> that's my situation. My texts are in TeX, and we take it from there: I would 
> like to generate not just pdf, but also epub from these sources, without 
> having to write a completely different book using a completely different 
> toolchain that gives me two completely different documents with the same 
> words in them... I can't even begin to imagine the potential for errors and 
> inconsistencies that introduces =)

This is exactly what I expected.
Use of computer software for books, documentation, etc.
should be about what is convenient for the author to write
and maintain, not what is best suited to machine transfer.

> 
> It's not really about a favourite "syntax", it's about having a tool that 
> already produces a device-independent document format that then gets 
> converted to a specific device-readable format. Can that independent format 
> also be converted to epub? If it can't, that's unfortunate, and perhaps someone 
> will end up writing a dvi-to-epub driver (limited in its functionality by 
> what epub offers for document layout).

Agreed. Getting onto other devices is about having well-written
translators between formats. It certainly should not require
the author to re-write large chunks of manuscript.

Computers are meant to help people do a better job.
They should not be mandating requirements that force authors 
to do more work, of a repetitious nature that adds little 
extra value to work already done.

Of course someone needs to write those translators, and make
them sufficiently flexible as well as being robust.
But that's what computer scientists and programmers are
paid for, surely.   :-)

> If it can, then that's great and I'd like to start using it as soon as 
> possible.

Sorry, I cannot promise time-lines for things that do not yet
exist.

> 
> - Mike


Cheers,

Ross


Ross Moore   ross.mo...@mq.edu.au 
Mathematics Department   office: E7A-419  
Macquarie University tel: +61 (0)2 9850 8955
Sydney, Australia  2109  fax: +61 (0)2 9850 8114







--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] xe(la)tex to epub?

2010-08-17 Thread Khaled Hosny
On Wed, Aug 18, 2010 at 10:48:59AM +1000, Ross Moore wrote:
> Hi Michiel,
> 
> On 18/08/2010, at 10:28 AM, Michiel Kamermans wrote:
> 
> > Khaled, Ross,
> 
> >> Such a conversion can be useful, however, if one has existing TeX material that needs
> >> to be processed into another output format, though one can still argue that
> >> converting it once to some sort of XML is a much better long-term plan.
> >> 
> >> Don't get me wrong, I like TeX syntax and find it easier to author
> >> with than many other markups, but I accept that it does not fit every
> >> need.
> >>   
> > 
> > I write textbooks. I write these in TeX, because that allows me to easily 
> > modify very large, structured documents. I have used DocBook in the past, 
> > and the best way I can summarise it, is "easy to write initially, migraine 
> > inducingly insane to update or revise". It is really easy to mark up a 
> > document as DocBook, and it is then very hard to modify the structure 
> > without getting so frustrated with the utterly inadequate DocBook editors 
> > on the market that you resort to completely wiping the document's markup, 
> > moving everything around, and then reapplying all the markup.
> > 
> > TeX, on the other hand, is "steep learning curve for the initial document, 
> > child's play to revise". It's why I gave up DocBook in favour of TeX. So 
> > that's my situation. My texts are in TeX, and we take it from there: I 
> > would like to generate not just pdf, but also epub from these sources, 
> > without having to write a completely different book using a completely 
> > different toolchain that gives me two completely different documents with 
> > the same words in it... I can't even being to imagine the potential for 
> > errors and inconsistencies that introduces =)
> 
> This is exactly what I expected.
> Use of computer software for books, documentation, etc.
> should be about what is convenient for the author to write
> and maintain, not what is best suited to machine transfer.

True, but then you have to accept its limitations. I have a very slow
typing speed; it is far more convenient for me to write on paper than to
type on a keyboard, but I know I can't get nicely printed documents from
my handwriting, so I have to trade my personal convenience for the ease
of getting ready-to-print documents.

> > It's not really about a favourite "syntax", it's about having a tool that 
> > already produces a device-independent document format that then gets 
> > converted to a specific device-readable format. Can that independent format 
> > also be converted to epub? If it can't, that's unfortunate, and perhaps 
> > someone will end up writing a dvi-to-epub driver (limited in its 
> > functionality by what epub offers for document layout).
> 
> Agreed. Getting onto other devices is about having well-written
> translators between formats. It certainly should not require
> the author to re-write large chunks of manuscript.

I wish it were easier to "translate" TeX, but unless you restrict your
TeX input to a managed subset, it is nearly impossible to translate it
with anything but TeX itself. However, starting from TeX output, whether
DVI or PDF, you lose much important information: not only the structure
of your document, but the actual textual material. It is almost
impossible to retrieve the original Arabic text from a PDF, for example,
unless every word has been tagged with an /ActualText entry or some
other form of saving the original text alongside the visual output. The
same goes for any complex textual material, like mathematical formulas,
for example.
> 
> Computers are meant to help people do a better job.
> They should not be mandating requirements that force authors 
> to do more work, of a repetitious nature that adds little 
> extra value to work already done.
> 
> Of course someone needs to write those translators, and make
> them sufficiently flexible as well as being robust.
> But that's what computer scientists and programmers are
> paid for, surely.   :-)

Good luck parsing TeX macros :) (there are certainly reasons why there
are no "sufficiently flexible as well as being robust" (La)TeX-to-anything
translators out there.)

I suggest trying plasTeX [1], though. Last time I checked, they had
everything implemented in Python, with all the TeX translation done on
their own, generating clean HTML, which I think should not be hard to
repackage as EPUB.

Regards,
 Khaled

-- 
 Khaled Hosny
 Arabic localiser and member of Arabeyes.org team
 Free font developer


--
Subscriptions, Archive, and List information, etc.:
  http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] unicode-math (I think) redefines \slash

2010-08-17 Thread Will Robertson
On 2010-08-12 04:54:45 +0930, "Joel C. Salomon" 
 said:



Using all packages as of TL ’10 pretest.

At one point in my document I needed an optional line-break after a
slash, but
blah blah overfull line UNIX\slash Linux
didn’t break.  Inserting the line
\show\slash
showed me that \slash was now simply the character ‘/’.  Even
re-defining it in my preamble didn’t work right away; I needed
\AtBeginDocument{\def\slash{/\penalty \exhyphenpenalty}}
since unicode-math also does its manipulations then.


Hi Joel,

Thanks for reporting this. (And noted in the bug tracker -- thanks for 
that especially so I don't forget!). Definitely a problem with 
unicode-math. I'll take a look when I can.


Cheers,
Will




--
Subscriptions, Archive, and List information, etc.:
 http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] Checking for existence of font feature

2010-08-17 Thread Will Robertson
On 2010-08-12 16:16:54 +0930, Khaled Hosny 
 said:



Ah, thank you. Are these features (relatively) new? Rather
embarrassing not to have spotted them, but anyway…


All were introduced in version 2, I think.


Yep, very new.


I've got a problem using these commands. Using XeTeX v0.9995.1
texlive svn 15079 on MiKTeX 2.8, fontspec 2010/08/01, none of these
“lower-level” fontspec features I've tried work.

I've copied the appropriate example from the manual:

\fontspec_if_feature:nTF {smcp} {True} {False}.



Since those commands use expl3 syntax, you will need \ExplSyntaxOn before
using them.


Right; something like this:

\ExplSyntaxOn
% note that spaces are completely ignored in here!
\newcommand \testcaps [2] {
 \fontspec_if_feature:nTF {smcp} {#1}{#2}
}
\ExplSyntaxOff
...
\fontspec{Times New Roman}
\testcaps{Caps}{No Caps}

Hope this helps,
Will




--
Subscriptions, Archive, and List information, etc.:
 http://tug.org/mailman/listinfo/xetex


Re: [XeTeX] OldStyle Numbers not changeable to Lining Numbers

2010-08-17 Thread Will Robertson
On 2010-08-17 08:23:15 +0930, Tobias Schoel 
 said:



\defaultfontfeatures{Numbers=OldStyle} gives medieval numbers;
\addfontfeatures{Numbers=Lining} afterwards keeps medieval numbers.

Minimal Example:

\documentclass{article}
\usepackage{fontspec}
\defaultfontfeatures{Numbers={OldStyle}}
\setmainfont[Mapping=tex-text]{Linux Libertine O}
\begin{document}
A0123456789

\addfontfeatures{Numbers={Lining}}A0123456789
\end{document}


"Mixing" conflicting font features like this is not very reliable in 
fontspec, because after saying Numbers=Lining, fontspec is not smart 
enough to deactivate Numbers=OldStyle. (Unfortunately. Work is planned, 
theoretically, to fix this problem.)


So you end up with the font having to make the decision about what to 
do with BOTH features active at the same time. Which is all a little 
frustrating, since fontspec is supposed to be making things easier to 
use, but oh well.
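
In the meantime, one workaround (only a sketch, untested here) is to
bypass the high-level keys for the switch and set the raw features
explicitly, so old-style figures are really turned off before lining
figures are turned on:

% hypothetical workaround: turn onum off and lnum on at the engine level
{\addfontfeatures{RawFeature={-onum;+lnum}}A0123456789}

This leans on the engine's "+feature"/"-feature" syntax rather than on
fontspec resolving the conflict.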


On a related note...

On 2010-08-17 16:31:35 +0930, "M. Niedermair" 
 said:



\newfontfamily\libertineX[Mapping=tex-text,
   RawFeature=+liga% ;+pnum
  ]{Linux Libertine O}
...
You can change the rawfeature as you like.
smcp, frac, hlig, dlig, lnum, pnum, zero, ...


While there's nothing wrong with this, I'll just point out that these 
are all equivalent to fontspec features such as Ligatures=Common, 
Numbers=Lining, and so on.


There's no error checking on the raw features (yet), which is the main 
reason I recommend using the fontspec features, but I also find the 
OpenType features harder to remember. Whichever you're more comfortable 
with, though.


W




--
Subscriptions, Archive, and List information, etc.:
 http://tug.org/mailman/listinfo/xetex