On Sat, 28 May 2016 01:53 am, Rustom Mody wrote:
> On Friday, May 27, 2016 at 7:21:41 PM UTC+5:30, Random832 wrote:
>> On Fri, May 27, 2016, at 05:56, Steven D'Aprano wrote:
>> > On Fri, 27 May 2016 05:04 pm, Marko Rauhamaa wrote:
>> >
> > > They are all ASCII derivatives. Those that aren't don't exist.
On Saturday, May 28, 2016 at 12:34:14 AM UTC+5:30, Marko Rauhamaa wrote:
> Random832 :
>
> > On Fri, May 27, 2016, at 05:56, Steven D'Aprano wrote:
> >> On Fri, 27 May 2016 05:04 pm, Marko Rauhamaa wrote:
> >> > They are all ASCII derivatives. Those that aren't don't exist.
> >> *plonk*
> >
> > That's a bit harsh, considering that this argument started when you
> > invented your own definition of "ASCII"
Random832 :
> On Fri, May 27, 2016, at 05:56, Steven D'Aprano wrote:
>> On Fri, 27 May 2016 05:04 pm, Marko Rauhamaa wrote:
>> > They are all ASCII derivatives. Those that aren't don't exist.
>> *plonk*
>
> That's a bit harsh,
Everybody has a right to plonk anybody -- and even declare it
ceremonially.
On Sat, May 28, 2016 at 2:09 AM, Random832 wrote:
> On Fri, May 27, 2016, at 11:53, Rustom Mody wrote:
>> And coding systems are VERY political.
>> Sure what characters are put in (and not) is political
>> But more invisible but equally political is the collating order.
>>
>> eg No one understands what jmf's gripes are... My guess is that a Euro
>> costs
On Fri, May 27, 2016, at 11:53, Rustom Mody wrote:
> And coding systems are VERY political.
> Sure what characters are put in (and not) is political
> But more invisible but equally political is the collating order.
>
> eg No one understands what jmf's gripes are... My guess is that a Euro
> costs
On Friday, May 27, 2016 at 7:21:41 PM UTC+5:30, Random832 wrote:
> On Fri, May 27, 2016, at 05:56, Steven D'Aprano wrote:
> > On Fri, 27 May 2016 05:04 pm, Marko Rauhamaa wrote:
> >
> > > They are all ASCII derivatives. Those that aren't don't exist.
> >
> > *plonk*
>
> That's a bit harsh, considering that this argument started when you
> invented your own definition of "ASCII"
On Fri, May 27, 2016, at 05:56, Steven D'Aprano wrote:
> On Fri, 27 May 2016 05:04 pm, Marko Rauhamaa wrote:
>
> > They are all ASCII derivatives. Those that aren't don't exist.
>
> *plonk*
That's a bit harsh, considering that this argument started when you
invented your own definition of "ASCII".
On Fri, 27 May 2016 05:04 pm, Marko Rauhamaa wrote:
> They are all ASCII derivatives. Those that aren't don't exist.
*plonk*
--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
Steven D'Aprano :
> I don't mind being corrected if I make a genuine mistake, in fact I
> appreciate correction. But being corrected for something I already
> acknowledged? That's just arguing for the sake of arguing.
> [...]
>> ASCII derivatives are in wide use in the Americas and Antarctica as
>
On Fri, 27 May 2016 04:10 pm, Marko Rauhamaa wrote:
> Steven D'Aprano :
>> This concept of ASCII = "all character sets", or "nearly all", or
>> "okay, maybe not nearly all of them, but just the important ones" is
>> terribly Euro-centric. The very idea would be laughable in Japan and
>> other East Asian countries, where Shift-JIS and Big5 still dominate.
Steven D'Aprano :
> This concept of ASCII = "all character sets", or "nearly all", or
> "okay, maybe not nearly all of them, but just the important ones" is
> terribly Euro-centric. The very idea would be laughable in Japan and
> other East Asian countries, where Shift-JIS and Big5 still dominate.
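The point about Shift-JIS and Big5 is easy to check from Python itself: both legacy encodings keep the 7-bit ASCII range byte-compatible, but encode the same CJK characters as completely different byte sequences (codec names below are CPython's bundled ones; a minimal sketch, not a survey of either encoding):

```python
# Both East Asian legacy encodings are ASCII-compatible for bytes 0-127...
assert "hello".encode("shift_jis") == "hello".encode("big5") == b"hello"

# ...but encode the same CJK characters as entirely different bytes.
jp = "\u65e5\u672c"                           # 日本 ("Japan")
assert jp.encode("shift_jis") == b"\x93\xfa\x96\x7b"
assert jp.encode("shift_jis") != jp.encode("big5")
```

So "byte-oriented and ASCII-compatible" describes both, yet neither is remotely interchangeable with the other outside the 7-bit range.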
Erik writes:
> On 26/05/16 08:21, Jussi Piitulainen wrote:
>> UTF-8 ASCII is nice
>>
>> UTF-16 ASCII is weird.
>
> I am dumbstruck.
I'm joking, of course.
But those statements do make sense when one knows to distinguish a
character set from its encoding as bytes, and then the UTF-8 encoding of
A
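The character-set-versus-encoding distinction behind the joke can be shown in a few lines of Python: the same ASCII-only string is byte-identical to ASCII under UTF-8, but not under UTF-16:

```python
s = "ASCII text"

# UTF-8 is a strict superset of ASCII: pure-ASCII strings encode to
# exactly the same bytes, hence "UTF-8 ASCII is nice".
assert s.encode("utf-8") == s.encode("ascii")

# UTF-16 widens every code unit to two bytes, interleaving NUL bytes,
# hence "UTF-16 ASCII is weird".
utf16 = s.encode("utf-16-le")                 # explicit endianness, no BOM
assert len(utf16) == 2 * len(s)
assert b"\x00" in utf16
```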
On Fri, 27 May 2016 07:12 am, Marko Rauhamaa wrote:
> However, I must correct myself slightly: ASCII refers to any
> byte-oriented character encoding scheme *largely coinciding with ASCII
> proper*. But since all of them *are* derivatives of ASCII proper,
> mentioning it is somewhat redundant.
"All"
Erik :
> On 26/05/16 10:20, Marko Rauhamaa wrote:
>> ASCII has taken new meanings. For most coders, in relaxed style, it
>> refers to any byte-oriented character encoding scheme. In C terms,
>>
>> ASCII == char *
>
> Is this really true? So by "taken new meanings" you are saying that it
> has
On 26/05/16 08:21, Jussi Piitulainen wrote:
UTF-8 ASCII is nice
UTF-16 ASCII is weird.
I am dumbstruck.
E.
On 26/05/16 10:20, Marko Rauhamaa wrote:
ASCII has taken new meanings. For most coders, in relaxed style, it
refers to any byte-oriented character encoding scheme. In C terms,
ASCII == char *
Is this really true? So by "taken new meanings" you are saying that it
has actually lost all meaning?
On Thursday, May 26, 2016 at 1:41:41 PM UTC+5:30, Erik wrote:
> On 26/05/16 02:28, Dennis Lee Bieber wrote:
> > On Wed, 25 May 2016 22:03:34 +0100, Erik
> > declaimed the following:
> >
> >> Indeed - at that time, I was working with COBOL on an IBM S/370. On that
> >> system, we used EBCDIC ASCII.
Erik :
> To break it down, Stephen was making the observation that people call
> all sorts of extended ASCII encodings (including proprietary things)
> "ASCII". So I took it to the extreme and called something that had
> nothing to do with ASCII a type of ASCII.
ASCII has taken new meanings. For
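Whatever one calls them, the "extended ASCII" code pages do share one concrete property that this relaxed usage leans on: bytes 0-127 decode identically everywhere, while bytes 128-255 are whatever each code page says they are. A quick sketch using CPython's codec names:

```python
# Bytes 0-127 mean the same thing under every "extended ASCII" code page.
for codec in ("latin-1", "cp1252", "koi8-r"):
    assert "plain text".encode(codec) == b"plain text"

# Bytes 128-255 are a free-for-all: the same byte is a different
# character under each code page.
assert b"\xe9".decode("latin-1") == "\u00e9"   # é  LATIN SMALL LETTER E WITH ACUTE
assert b"\xe9".decode("koi8-r") == "\u0418"    # И  CYRILLIC CAPITAL LETTER I
```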
On Thu, May 26, 2016 at 7:11 PM, Marko Rauhamaa wrote:
> Python didn't come out unscathed, either. Multithreading is being
> replaced with asyncio
Incorrect. Threading is still important - it's not being replaced.
Asynchronous code support is being added to an existing pool of
multiprocessing techniques.
Jussi Piitulainen :
> UTF-16 ASCII is weird. Wierd. Probably all right in an environment
> that is otherwise set to use UTF-16.
>
> Nothing is as weird as a mix of different encodings of a foreign
> script in the same "plain text" file, said to be "Unicode".
Some children are just born under unl
On 26/05/16 02:28, Dennis Lee Bieber wrote:
On Wed, 25 May 2016 22:03:34 +0100, Erik
declaimed the following:
Indeed - at that time, I was working with COBOL on an IBM S/370. On that
system, we used EBCDIC ASCII. That was the wierdest ASCII of all ;)
It would have to be... Extended Binary Coded Decimal Interchange Code.
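For anyone who hasn't met EBCDIC: it is not an "extended ASCII" at all, the two share no byte layout whatsoever, which is what makes "EBCDIC ASCII" funny. Python ships EBCDIC codecs (cp500 is the international variant), so the mismatch is easy to see:

```python
# Even 'A' lives at a different byte in EBCDIC.
assert "A".encode("ascii") == b"\x41"
assert "A".encode("cp500") == b"\xc1"

# So ASCII bytes read as EBCDIC come out as gibberish, not as text.
assert b"HELLO".decode("cp500") != "HELLO"
```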
On Thursday, May 26, 2016 at 12:52:09 PM UTC+5:30, Jussi Piitulainen wrote:
> UTF-16 ASCII is weird. Wierd. Probably all right in an environment that
> is otherwise set to use UTF-16.
In http://blog.languager.org/2015/03/whimsical-unicode.html
are some examples of why UTF-16 is bug-inviting
[ sect
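One concrete way UTF-16 invites bugs: anything outside the Basic Multilingual Plane needs a surrogate pair, so the tempting assumption "one character == one 16-bit unit" fails silently. A minimal Python demonstration:

```python
emoji = "\U0001F600"                    # GRINNING FACE, code point U+1F600

assert len(emoji) == 1                  # one code point...
utf16 = emoji.encode("utf-16-le")
assert len(utf16) == 4                  # ...but two 16-bit code units
# High surrogate D83D, low surrogate DE00 (little-endian byte order):
assert utf16 == b"\x3d\xd8\x00\xde"
```

Code that indexes or slices UTF-16 buffers by 16-bit units will happily split that pair in half.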
Erik writes:
> On 25/05/16 11:19, Steven D'Aprano wrote:
>> On Wednesday 25 May 2016 19:10, Christopher Reimer wrote:
>>
>>> Back in the early 1980's, I grew up on 8-bit processors and latin-1
>>> was all we had for ASCII.
>>
>> It really, truly wasn't. But you can be forgiven for not knowing
>> that, since until the rise of the public Internet most people weren't
>> exposed
On 25/05/16 11:19, Steven D'Aprano wrote:
On Wednesday 25 May 2016 19:10, Christopher Reimer wrote:
Back in the early 1980's, I grew up on 8-bit processors and latin-1 was all
we had for ASCII.
It really, truly wasn't. But you can be forgiven for not knowing that, since
until the rise of the public Internet most people weren't exposed
On Wed, May 25, 2016 at 8:19 PM, Steven D'Aprano
wrote:
> While the code page system was necessary at
> the time, the legacy of them today continues to plague computer users, causing
> moji-bake, errors on file systems[1], and holding back the adoption of
> Unicode.
>
> [1] I'm speaking from experience.
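The mojibake Steven mentions is trivially easy to reproduce: write UTF-8, read it back with a legacy code page, and every non-ASCII character sprouts debris (cp1252 stands in here for any Windows code page):

```python
original = "na\u00efve caf\u00e9"       # "naïve café"

# Decode UTF-8 bytes as if they were Windows-1252:
garbled = original.encode("utf-8").decode("cp1252")
assert garbled == "na\u00c3\u00afve caf\u00c3\u00a9"   # "naÃ¯ve cafÃ©"
```

Each two-byte UTF-8 sequence becomes two spurious Latin-1-ish characters, the classic "Ã©"-for-"é" signature.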
On Wednesday 25 May 2016 19:10, Christopher Reimer wrote:
> Back in the early 1980's, I grew up on 8-bit processors and latin-1 was all
> we had for ASCII.
It really, truly wasn't. But you can be forgiven for not knowing that, since
until the rise of the public Internet most people weren't exposed