-Original Message-
From: Vasily I. Volchenko <[EMAIL PROTECTED]>
To: Luiz Americo Pereira Camara <[EMAIL PROTECTED]>
Date: Tue, 20 Nov 2007 17:44:51 +0300
Subject: Re[2]: [lazarus] Plots
>
> > Hi,
> >
> > Thanks, I got the packages and successf
Wiki pages are being constructed. Here is an example project showing all 3
components.
ptest.tgz
Description: Binary data
I'll change it (though it is a bad idea to save text files in UTF-8 on Linux
but in ANSI on Windows; besides, UTF-8 is rarely used in final Russian files,
even though it works well for internal processing). Lazarus-ccr may be a good
place (the code is in one of my sf.net projects, but only as a small part).
Why not? Of course it is supported. But... First, the doubled size makes it
noticeable (especially on mobile phones), though that is the least of the
problems. Traditionally one character is treated as 1 byte (which is more than
enough for Latin and Cyrillic), and this creates many incompatibilities with a
good old
> UTF8 can be resynced. So if you lose one char (partially), software can
> detect it and restart at the next char.
It can. But not always, and not all software does.
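To be concrete: UTF-8 continuation bytes always have the form 10xxxxxx, so a
decoder can skip them until it reaches the next lead byte and continue from
there. A rough FPC sketch (just an illustration, the names are made up):

program ResyncDemo;
{$mode objfpc}{$H+}

// Return the index of the next UTF-8 lead byte at or after Start.
// Continuation bytes are 10xxxxxx ($80..$BF); anything else starts a char.
function NextUTF8CharStart(const S: string; Start: Integer): Integer;
begin
  Result := Start;
  while (Result <= Length(S)) and ((Ord(S[Result]) and $C0) = $80) do
    Inc(Result);
end;

var
  Damaged: string;
begin
  // 'абв' in UTF-8 with the first byte of 'а' lost:
  Damaged := #$B0#$D0#$B1#$D0#$B2;
  // Skip the orphaned continuation byte; decoding restarts at 'б'.
  WriteLn('decoding restarts at byte ', NextUTF8CharStart(Damaged, 1));
end.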
> > At last, the fact that Lazarus compiled for Windows uses ANSI (cp1251)
> > while Lazarus for Linux uses UTF-8 is compl
OK, I understand the problems of using UTF-8 on Windows, but I am speaking
about something else. Why should Lazarus save (and load) Pascal files (and
only Pascal, not lfm/lrs, not po) in the internal encoding? If the gtk2
version supports UTF-8, it forces the user to save all files in UTF-8, while
if the Windows
> At first I was against UTF8 but now I understand its power. UTF8 is
> small: since most text is 7-bit ASCII there is no increase there;
> that's why the web uses UTF8, and it's also best for storage.
> Microsoft chose UTF16 (actually UCS2) because they thought UCS2 would be
> enough for all chara
> LCL internal encoding is not the same thing as source code encoding,
> which is not the same thing as arbitrary text data used by your
> application.
>
Of course. There are:
1. LCL internal encoding (encoding of LCL sources),
2. Lazarus IDE internal encoding,
3. encoding of .po files,
4. Syn
>
> Vasily I. Volchenko wrote:
> > And the lazarus team is trying to force the UTF8 introduction as a
> > revolution, supporting neither old projects nor saving files (and only
> > saving) in a format compatible with other projects. Besides, that revolutionary
>
> Did you notice that
> Office applications also use Unicode for storage and nobody seems to
> complain about it?
OK, office applications do. There were some (or rather, a lot of) problems
with them; when MSOffice 97 was introduced, most people kept it alongside an
old Office.
And these probl
I was speaking not just about text files in the IDE (that is not really
interesting), but about Pascal project files. Where should the encoding be
stored?
It is a good question. The easiest way is to add a string to the .lpi file
which contains the locale of the whole project (but it is not so easy to
completely con
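Purely as a sketch of what such a per-project setting might look like in the
.lpi XML (this element does not exist; it is only an illustration):

<?xml version="1.0"?>
<CONFIG>
  <ProjectOptions>
    <General>
      <!-- hypothetical element, not part of the real .lpi format -->
      <Encoding Value="cp1251"/>
    </General>
  </ProjectOptions>
</CONFIG>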
> Use a text editor with multiple encoding support like Notepad++
>
> Open the file in one encoding and save it in another. Done.
>
Good. It is "very convenient", especially if the project contains many files.
> A trivial pascal app can be built to automate that if you have many files.
>
Of course,
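Roughly like this; a minimal sketch assuming cp1251 sources and the
CP1251ToUTF8 function from Lazarus' LConvEncoding unit (my choice of helper,
nothing prescribed in this thread):

program BatchToUTF8;
{$mode objfpc}{$H+}

uses
  Classes, LConvEncoding;

// Convert one file from cp1251 to UTF-8 in place.
procedure ConvertFile(const FileName: string);
var
  SL: TStringList;
begin
  SL := TStringList.Create;
  try
    SL.LoadFromFile(FileName);
    SL.Text := CP1251ToUTF8(SL.Text);
    SL.SaveToFile(FileName);
  finally
    SL.Free;
  end;
end;

var
  i: Integer;
begin
  // File names are passed on the command line: batchtoutf8 unit1.pas unit2.pas ...
  for i := 1 to ParamCount do
    ConvertFile(ParamStr(i));
end.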
This is only about Unicode text files. Can you imagine a BOM being in a text
file given to the Pascal compiler?
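For reference, the UTF-8 BOM is the three bytes EF BB BF at the very start of
a file; a small helper like this sketch could strip it before the file
reaches the compiler:

// Remove a leading UTF-8 BOM (EF BB BF), if present.
function StripUTF8BOM(const S: string): string;
begin
  if (Length(S) >= 3) and (S[1] = #$EF) and (S[2] = #$BB) and (S[3] = #$BF) then
    Result := Copy(S, 4, Length(S) - 3)
  else
    Result := S;
end;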
> About autoconversion:
>
> Changing the encoding of text files is pretty easy. For example: use
> the 'recode' tool. An alternative would be to extend the IDE to
> load/save files of other encodings and extend the compiler to auto
> convert the strings.
> After that you must check all places, whe
My lconv.pas can use libc because that unit was added by someone else (in the
LCL); I have only fixed the working switch combination. It must work without
it. Anyway, lconv.pas is a bad workaround; at present it fits my purposes,
but not others'.
> - When a TCodeBuffer loads a file it must find out the encoding,
> convert it to UTF-8 and on saving convert it back.
OK, this patch may be a start. Of course, lconv should be either extended or
changed to something more suitable.
Presently I am checking only the first line
mycp.patch.gz
Descript
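Roughly, the load/save conversion around a code buffer could look like this
sketch. It is not the patch itself: it assumes ConvertEncoding from the
LConvEncoding unit and looks for an {%encoding xxx} style directive on the
first line only, as described above; everything else is made up.

uses
  Classes, SysUtils, LConvEncoding;

// Look for an {%encoding xxx} directive on the first line; fall back to cp1251.
function DetectEncoding(const FirstLine: string): string;
var
  p: Integer;
begin
  Result := 'cp1251';   // assumed project default, illustration only
  p := Pos('{%encoding ', LowerCase(FirstLine));
  if p > 0 then
  begin
    Result := Copy(FirstLine, p + Length('{%encoding '), MaxInt);
    Result := Trim(Copy(Result, 1, Pos('}', Result) - 1));
  end;
end;

// Load a file, remember its on-disk encoding, keep the buffer in UTF-8.
procedure LoadAsUTF8(Lines: TStrings; const FileName: string;
  out DiskEncoding: string);
begin
  Lines.LoadFromFile(FileName);
  if Lines.Count > 0 then
    DiskEncoding := DetectEncoding(Lines[0])
  else
    DiskEncoding := 'cp1251';
  Lines.Text := ConvertEncoding(Lines.Text, DiskEncoding, 'utf8');
end;

// On saving, convert the UTF-8 buffer back to the original encoding.
procedure SaveFromUTF8(Lines: TStrings; const FileName: string;
  const DiskEncoding: string);
var
  Raw: TStringList;
begin
  Raw := TStringList.Create;
  try
    Raw.Text := ConvertEncoding(Lines.Text, 'utf8', DiskEncoding);
    Raw.SaveToFile(FileName);
  finally
    Raw.Free;
  end;
end;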
Besides, in the current implementation UTF-8 may have a disadvantage compared
with 2-byte+ encodings. Those encodings use the WideString format, and
conversion to the old string type can be done either automatically or via
special procedures (as it seems to be on Kylix). UTF-8 is implemented as a
plain string. It has some a
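To make the "implemented as a string" point concrete: with UTF-8 kept in a
plain string, Length returns bytes, not characters, so UTF-8-aware helpers
are needed (a sketch; UTF8Length is in LCLProc in older LCL versions, in
LazUTF8 in newer ones):

program UTF8LenDemo;
{$mode objfpc}{$H+}

uses
  LCLProc;

var
  S: string;
begin
  S := 'привет';          // this source file itself is saved as UTF-8
  WriteLn(Length(S));     // 12: bytes, because S is a plain 1-byte string
  WriteLn(UTF8Length(S)); // 6: characters, via the UTF-8 aware helper
end.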
It is a question of using UTF-8. Using non-UTF-8 in Lazarus is not allowed (I
am not far from making an import patch, but illness prevents me from
finishing it). But, as I see it, this is a GTK-2 problem. Anyway, the gtk2
widgetset works only with UTF-8, period (all arguments for making it more
universal were leave
It's a known problem: Win32 (for now, except when compiled with the option
-dWindowsUnicodeSupport or something like it) and the gtk-1 interface use the
native 1-byte encoding, while the gtk2 interface uses UTF-8 regardless (even
if LANG=ru_RU.CP1251).
It seems like that. Maybe such translation functions could be used or
written, but at present they are compatible neither with sources using "high
char" comments/strings nor with reading non-UTF texts.
I noted that this is not acceptable; the answer is that win32 will be moved
to UTF8
OK, here is another patch. It allows working with (or just importing or
exporting) files with different encodings. All you need is to add
{&encoding=<...>} (tested with <...>=cp1251 and koi8-r) at any place in the
source.
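For example, a unit using the directive could start like this (my
illustration; on disk the comment and the string constant would be stored as
cp1251 bytes):

{&encoding=cp1251}
unit MainUnit;

{$mode objfpc}{$H+}

interface

implementation

const
  // this comment and the string below are cp1251 on disk
  Greeting = 'Привет, мир!';

end.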
As for Delphi import, I have done some work, but presently the Delphi import
does not work well e
OK.
>Does it mean that the source of all current projects for Win32 should
>be reedited?
What do you mean by reediting?
Let's classify all the problems of such a change:
1. Source problems (both national comments and string constants). It will be
possible to fix this with my patch only by including {&encoding=ANSI} in
Here is a patch against svn Lazarus. It enables the {%encoding xxx}
mechanism. Some changes are not very good, but... It also enables a hack
which allows using cp1251/koi8r LFMs under gtk2's pseudo-UTF handling
(Hint='{%encoding=cp1251}'). That hack works only partially on win32. Anyway,
it is good for translating old/win32 proj
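As an illustration of the Hint hack (the property layout is only sketched; on
disk the Caption bytes would be cp1251):

object Form1: TForm1
  Caption = 'Главное окно'
  Hint = '{%encoding=cp1251}'
end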
>This will find the string even in strings and comments.
Yes. But this won't be useful in the future, when all widgetsets are in one
encoding AND/OR when there is another mechanism to translate .lfm files (this
hack is a bad idea, I agree, but it WORKS).
Maybe LFM comments?
Besides, placing {%e
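For example, a plain substring scan would also pick up something like this,
which is not a real directive (illustration only):

// A naive Pos()-based scan would match the directive text here,
// inside a string constant:
const
  NotADirective = 'just text: {%encoding cp1251}';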