Georg Baum wrote:
Abdelrazak Younes wrote:

Georg Baum wrote:
It looks to me as if you want to convert almost everything to docstring.
That would have been better done with some global search/replace.
I really think we should have done just that. We are just complicating
our lives for no real benefit. What's the problem if we use unicode-capable
strings to handle ASCII? This is just a container. If you want, we can
force them to be ASCII via well-placed ASSERTs.
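
For illustration, such an ASSERT could look roughly like the sketch below
(the helper name and the typedef are made up here, not code from the tree):

#include <cassert>
#include <string>

// Stand-in typedef for illustration only; the real LyX docstring is a
// basic_string of a 32-bit character type, but the exact type does not
// matter for the check.
typedef std::basic_string<char32_t> docstring;

// Hypothetical helper: assert that a docstring which is supposed to
// carry plain ASCII really holds nothing outside the 7-bit range.
inline void assertAscii(docstring const & s)
{
	for (docstring::size_type i = 0; i < s.size(); ++i)
		assert(s[i] < 0x80);
}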

I am repeating myself: that is not the point (for me). I take the unicode
conversion as a chance to get code that is easier to understand: if it is a
string, I know that it is ASCII (assuming that we eliminate all UTF-8
strings internally); if it is a docstring, I don't.

My point is that if you need to know, then there is a use-case problem. Using std::string is not really the solution; the solution is, as you proposed yourself, a stronger type. I understand that my point is not interesting to either of you or to Lars, though...
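
To make that concrete, here is a minimal sketch of what such a stronger type
could look like (class name and interface invented purely for illustration):

#include <cassert>
#include <string>

// Hypothetical "stronger type": a string that can only ever hold
// 7-bit ASCII, checked once at construction instead of at every use.
class AsciiString {
public:
	explicit AsciiString(std::string const & s) : data_(s)
	{
		for (std::string::size_type i = 0; i < s.size(); ++i)
			assert(static_cast<unsigned char>(s[i]) < 0x80);
	}

	std::string const & str() const { return data_; }

private:
	std::string data_;
};

A function that takes an AsciiString then states the ASCII assumption in its
signature, so the reader no longer has to guess.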

The step-by-step conversion only makes sense if you don't do it blindly but
look at each conversion. I already found problematic cases (e.g. where a
file's contents are interpreted as UTF-8 without knowing the file encoding).
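
As an illustration of that kind of problem, a correct conversion has to go
through the file's actual encoding; the sketch below shows the Latin-1 case
only as an example (the typedef again stands in for the real docstring):

#include <string>

// Stand-in typedef for LyX's docstring, for illustration only.
typedef std::basic_string<char32_t> docstring;

// Correct for Latin-1 input: every byte maps directly to the Unicode
// code point with the same value.  Feeding the same bytes through a
// UTF-8 decoder instead would mangle anything above 0x7f.
docstring fromLatin1(std::string const & bytes)
{
	docstring result;
	result.reserve(bytes.size());
	for (std::string::size_type i = 0; i < bytes.size(); ++i)
		result += static_cast<char32_t>(
			static_cast<unsigned char>(bytes[i]));
	return result;
}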
I would have preferred the other way around: convert everything and then
look at the remaining problems.

Then I suggest that you don't get involved anymore in the converting
business; that will be easier for all of us.

I might do that indeed.

If you run out of work you could, for example, fix the bidi stuff
(assuming you know something about right-to-left languages, though I am
not sure that is true).

No, I used to know when I was a child, but not any more, I'm afraid. That could be a good occasion to come back to it, though :-)


Or you could think how we could deal with different encodings for LaTeX
output.

Ouch, LaTeX is unknown territory for me...

Abdel.
