RE: Convert to unicode

2016-04-08 Thread Joaquin Alzola
Thanks Peter. Much appreciated; I will look into codecs and how they work.

-Original Message-
From: Python-list 
[mailto:python-list-bounces+joaquin.alzola=lebara@python.org] On Behalf Of 
Peter Otten
Sent: 07 April 2016 19:14
To: python-list@python.org
Subject: Re: Convert to unicode

Joaquin Alzola wrote:

> Hi People
>
> I need to covert this string:
>
> hello  there
> this is a test
>
> (also \n important)
>
> To this Unicode:
>
> 00680065006c006c006f0020002000740068006500720065000a00740068006900730020006900730020006100200074006500730074000a
> Without the \u and space.
>
> https://www.branah.com/unicode-converter
>
> I seem not to be able to do that conversion.
>
> Help to guide me will be appreciated.

>>> import codecs
>>> s = u"hello  there\nthis is a test\n"
>>> codecs.encode(s.encode("utf-16-be"), "hex")
'00680065006c006c006f0020002000740068006500720065000a00740068006900730020006900730020006100200074006500730074000a'
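In Python 3 the hex codec is no longer a str-to-str codec, so the codecs call above needs adjusting; bytes.hex() (available since Python 3.5) gives the same result directly:

```python
# Python 3 equivalent of the snippet above: UTF-16-BE bytes, then hex.
s = "hello  there\nthis is a test\n"
hex_form = s.encode("utf-16-be").hex()
print(hex_form)
# → 00680065006c006c006f0020002000740068006500720065000a00740068006900730020006900730020006100200074006500730074000a
```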


--
https://mail.python.org/mailman/listinfo/python-list
This email is confidential and may be subject to privilege. If you are not the 
intended recipient, please do not copy or disclose its content but contact the 
sender immediately upon receipt.


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Paul Rubin
Marko Rauhamaa  writes:
> On the surface, the garbage collection scheme looks dubious, but maybe
> it works perfect in practice.

It looked suspicious at first glance but I think it is ok.  Basically on
at most every timeout event (scheduling, expiration, or cancellation),
it does an O(n) operation (scanning and re-heapifying the timeout list)
with probability O(1/n) where n is the queue size, which itself changes
(by 0, +1 or -1) when a timeout event happens.  That is, its overhead is
a constant factor unless I'm missing something.  There are some
efficiency gains possible but it seems par for the course for Python
code.
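One common shape of such a lazy-cancellation scheme (a sketch of the general technique, not the actual code under review) keeps cancelled entries in the heap and rebuilds only when garbage dominates, which is what makes the amortized overhead a constant factor:

```python
import heapq

class TimerQueue:
    """Timeout queue with lazy cancellation: cancelled timers stay in
    the heap until an O(n) sweep rebuilds it. Illustrative sketch."""

    def __init__(self):
        self._heap = []          # (deadline, tid) pairs
        self._cancelled = set()  # tids cancelled but still in the heap

    def schedule(self, deadline, tid):
        heapq.heappush(self._heap, (deadline, tid))

    def cancel(self, tid):
        self._cancelled.add(tid)
        # Sweep only when garbage dominates: the O(n) rebuild happens
        # with probability O(1/n) per event, so amortized cost is O(1).
        if len(self._cancelled) > len(self._heap) // 2:
            self._heap = [(d, t) for (d, t) in self._heap
                          if t not in self._cancelled]
            heapq.heapify(self._heap)
            self._cancelled.clear()

    def pop_due(self, now):
        # Return one expired, non-cancelled timer id, or None.
        while self._heap and self._heap[0][0] <= now:
            deadline, tid = heapq.heappop(self._heap)
            if tid not in self._cancelled:
                return tid
            self._cancelled.discard(tid)
        return None
```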
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: From email addresses sometimes strange on this list - was Re: [beginner] What's wrong?

2016-04-08 Thread Cameron Simpson

On 05Apr2016 08:58, Chris Angelico  wrote:

On Tue, Apr 5, 2016 at 8:55 AM, Michael Torrie  wrote:

Usenet-orginating posts look fine.  For example:

From: Marko Rauhamaa 
Newsgroups: comp.lang.python

Whereas email ones are sometimes looking like this:

From: Mark Lawrence via Python-list 
Reply-To: Mark Lawrence 


Oh! That probably explains it. It's because of Yahoo and mailing
lists. Yahoo did stuff that breaks stuff, so Mailman breaks stuff
differently to make sure that only Yahoo people get messed up a bit.
It means their names and addresses get slightly obscured, but delivery
works.


It is yahoo and mailman and a funky spec called DKIM or DMARC (related, not 
identical).  This makes a signature related to the originating host, and if 
mailman forwarded the message unchanged the signature would break - people 
honouring it would decide the mailman hosts were forging Mark's email.


Fortunately you can fix all this up on receipt, which is why I wasn't noticing 
this myself (I had in the past, and wrote myself a recipe for repair - my mail 
folders contain the repaired messages).


For Mark's messages I am using these mailfiler rules (the latter I think):

 from:s/.*/$reply_to/
   X-Yahoo-Newman-Id:/.
   from:python-list@python.org,python-id...@python.org,tu...@python.org

 from:s/.*/$reply_to/
   DKIM-Signature:/.
   from:python-list@python.org,python-id...@python.org,tu...@python.org

which just replaces the contents of the From: line with the contents of the 
Reply-To: line for this kind of message via the python lists.


Yahoo do something equivalent but more aggressive to lists hosted on Yahoo 
itself, such as sed-users. For that I have a couple of scripts - fix-dkim-from:


 https://bitbucket.org/cameron_simpson/css/src/tip/bin/fix-dkim-from

which is a sed script, and fix-dkim-from-swap:

 https://bitbucket.org/cameron_simpson/css/src/tip/bin/fix-dkim-from-swap

The former works on messages whose From: header is enough - it can be reversed 
in place. The latter is for messages where the from isn't enough, but there is 
another header containing the original (default "X-Original-From").


You can use these in systems like procmail, eg:

 :0whf
 * from:.*
 | fix-dkim-from

It is annoying, and I'm happy to help people utilise these recipes if possible.  
Most all-in-one mail readers (Thunderbird, GMail, Apple Mail etc) are a bit too 
dumb, but if you can do your mail collection separately from your reader you 
can usually insert something in the processing.


It is nowhere near as annoying as the usenet<->mail gateway which is eating 
message-ids; that is truly uncivilised.


Cheers,
Cameron Simpson 
--
https://mail.python.org/mailman/listinfo/python-list


Python,ping,csv

2016-04-08 Thread Smith

Hello to all,
I have this little script that pings certain ip addresses.
Considering that I am a newbie to the Python programming language, can 
you help me change these lines in order to put the output into a csv file?

Sorry for unclear English
Thanks in advance


import subprocess

for ping in range(1, 254):
    address = "10.24.59." + str(ping)
    res = subprocess.call(['ping', '-c', '3', address])
    if res == 0:
        print("ping to", address, "OK")
    elif res == 2:
        print("no response from", address)
    else:
        print("ping to", address, "failed!")
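One way to adapt the loop for CSV output (a sketch; the file name, column names, and the status_for helper are illustrative choices, not from the original post):

```python
import csv
import subprocess

def status_for(res):
    # Same three cases as the script above, as a reusable helper.
    if res == 0:
        return "OK"
    if res == 2:
        return "no response"
    return "failed"

def scan_to_csv(path):
    # One CSV row per address: address, status.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["address", "status"])
        for ping in range(1, 254):
            address = "10.24.59." + str(ping)
            res = subprocess.call(["ping", "-c", "3", address])
            writer.writerow([address, status_for(res)])
```

The newline="" argument to open() is the csv module's recommended way to avoid blank lines on Windows.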
--
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Antoon Pardon
Op 08-04-16 om 00:21 schreef Chris Angelico:
> On Fri, Apr 8, 2016 at 6:56 AM, Antoon Pardon
>  wrote:
>> That solution will mean I will have to do about 100% more comparisons
>> than previously.
> Try it regardless. You'll probably find that performance is fine.
> Don't prematurely optimize!
>
> ChrisA

But it was already working and optimized. The python3 approach forces
me to make changes to working code and make the performance worse.

-- 
Antoon Pardon
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Chris Angelico
On Fri, Apr 8, 2016 at 5:35 PM, Antoon Pardon
 wrote:
> Op 08-04-16 om 00:21 schreef Chris Angelico:
>> On Fri, Apr 8, 2016 at 6:56 AM, Antoon Pardon
>>  wrote:
>>> That solution will mean I will have to do about 100% more comparisons
>>> than previously.
>> Try it regardless. You'll probably find that performance is fine.
>> Don't prematurely optimize!
>>
>> ChrisA
>
> But it was already working and optimized. The python3 approach forces
> me to make changes to working code and make the performance worse.

Is performance actually worse because you're doing two comparisons?
Visibly worse? If so, you probably have an overly-complex comparison
function, and a tree is *always* going to be suboptimal. Have you
actually measured a performance hit?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Marko Rauhamaa
Paul Rubin :

> Marko Rauhamaa  writes:
>> On the surface, the garbage collection scheme looks dubious, but
>> maybe it works perfect in practice.
>
> It looked suspicious at first glance but I think it is ok. Basically
> on at most every timeout event (scheduling, expiration, or
> cancellation), it does an O(n) operation (scanning and re-heapifying
> the timeout list) with probability O(1/n) where n is the queue size,
> which itself changes (by 0, +1 or -1) when a timeout event happens.
> That is, its overhead is a constant factor unless I'm missing
> something. There are some efficiency gains possible but it seems par
> for the course for Python code.

I compared the performance of AVL trees, the older heapq technique as
well as the "GC scheme" in 2014 with a challenging but realistic
test scenario. AVL trees and the GC scheme had pretty much the same
throughput (and the older, simple heapq was slower).

With AVL trees, it's easier to be convinced about worst-case
performance. It is more difficult to see the potential pathological
cases with the GC scheme.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Antoon Pardon
Op 07-04-16 om 23:08 schreef Ben Finney:
> Antoon Pardon  writes:
>
>> With this method I have to traverse the two tuples almost always
>> twice. Once to find out if they are equal and if not a second time to
>> find out which is greater.
> You are essentially describing the new internal API of comparison
> operators. That's pretty much unavoidable.

And nobody thought about these kinds of cases, or found them important enough?

> If you want to avoid repeating an expensive operation – the computation
> of the comparison value for an object – you could add an LRU cache to
> that function. See ‘functools.lru_cache’.

I'll have a look.
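A minimal sketch of that ‘functools.lru_cache’ suggestion, assuming the expensive part is computing a per-object key (the casefold() body is only an illustrative placeholder):

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def sort_key(word):
    # Stand-in for an expensive per-object key computation;
    # casefold() here is just a placeholder.
    return word.casefold()

# Repeated comparisons of the same values recompute nothing:
words = ["Beta", "alpha", "Beta", "alpha"]
assert sorted(words, key=sort_key) == ["alpha", "alpha", "Beta", "Beta"]
```

Note that lru_cache requires the cached function's arguments to be hashable.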

-- 
Antoon

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Ben Finney
Antoon Pardon  writes:

> But it was already working and optimized. The python3 approach forces
> me to make changes to working code and make the performance worse.

Yes, changing from Python 2 to Python 3 entails changing working code,
and entails different implementations for some things.

As for worse performance, that is something you can objectively measure.
What is the size of the performance reduction you have objectively
measured from this change?

-- 
 \“Spam will be a thing of the past in two years' time.” —Bill |
  `\ Gates, 2004-01-24 |
_o__)  |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Ben Finney
Antoon Pardon  writes:

> Op 07-04-16 om 23:08 schreef Ben Finney:
> > You are essentially describing the new internal API of comparison
> > operators. That's pretty much unavoidable.
>
> And nobody thought about this kind of cases

I'm quite confident the API changes were thought about by many people.

> or found them important enough?

Important enough for what? You still haven't demonstrated what actual
harm is done by these API changes.

-- 
 \  “Reichel's Law: A body on vacation tends to remain on vacation |
  `\unless acted upon by an outside force.” —Carol Reichel |
_o__)  |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Untrusted code execution

2016-04-08 Thread Lele Gaifax
Paul Rubin  writes:

> Lua is supposed to be easy to embed and sandbox.  It might be
> interesting to write Python bindings for the Lua interpreter sometime.

Isn't this similar to the already-existing
https://pypi.python.org/pypi/lupa/?

ciao, lele.
-- 
nickname: Lele Gaifax | Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas | comincerò ad aver paura di chi mi copia.
l...@metapensiero.it  | -- Fortunato Depero, 1929.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Marko Rauhamaa
Antoon Pardon :

> In python2 descending the tree would only involve at most one
> expensive comparison, because using cmp would codify that comparison
> into an integer which would then be cheap to compare with 0. Now in
> python3, I may need to do two expensive comparisons, because there is
> no __cmp__ method, to make such a codefication.

I think you should base your tree implementation on key.__lt__() only.
Only compare keys using <, nothing else, ever.

It may lead to faster or slower performance, depending on the ordering
function, but I think it is more minimalistic, and thus philosophically
more appealing than __cmp__.
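Equality can itself be derived from '<' alone, the same convention C++'s sorted containers use with strict weak ordering; a tiny sketch:

```python
def eq_from_lt(a, b):
    # Equality derived from '<' alone: neither value is less than
    # the other.
    return not (a < b) and not (b < a)

assert eq_from_lt(3, 3)
assert not eq_from_lt((1, 2), (1, 3))
```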


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Paul Rubin
Marko Rauhamaa  writes:
> With AVL trees, it's easier to be convinced about worst-case
> performance.

I'd have thought the main reason to use AVL trees was persistence, so
you could have multiple slightly different trees sharing most of their
structures.

> It is more difficult to see the potential pathological cases with the
> GC scheme.

How bad can the GC scheme be, if the constants are picked properly?
I'll think about this tomorrow, it's late here now.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python,ping,csv

2016-04-08 Thread Joel Goldstick
On Fri, Apr 8, 2016 at 3:25 AM, Smith  wrote:
> Hello to all,
> I have this little script that pings certain ip addresses.
> Considering that I am a newbie to the Python programming language, can you
> help me change these lines in order to put the output into a csv file?
> Sorry for unclear English
> Thanks in advance
>
>
> import subprocess
>
> for ping in range(1,254):
>     address = "10.24.59." + str(ping)
>     res = subprocess.call(['ping', '-c', '3', address])
>     if res == 0:
>         print("ping to", address, "OK")
>     elif res == 2:
>         print("no response from", address)
>     else:
>         print("ping to", address, "failed!")
> --
> https://mail.python.org/mailman/listinfo/python-list

What do you want in your CSV file? Show a sample output.

-- 
Joel Goldstick
http://joelgoldstick.com/blog
http://cc-baseballstats.info/stats/birthdays
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Steven D'Aprano
On Fri, 8 Apr 2016 06:34 pm, Marko Rauhamaa wrote:

> Antoon Pardon :
> 
>> In python2 descending the tree would only involve at most one
>> expensive comparison, because using cmp would codify that comparison
>> into an integer which would then be cheap to compare with 0. Now in
>> python3, I may need to do two expensive comparisons, because there is
>> no __cmp__ method, to make such a codefication.
> 
> I think you should base your tree implementation on key.__lt__() only.
> Only compare keys using <, nothing else, ever.

I believe that's how list.sort() and sorted() work:

py> class Spam(object):
... def __init__(self, n):
... self.n = n
... def __lt__(self, other):
... return self.n < other.n
... def __repr__(self):
... return repr(self.n)
...
py> L = [Spam(5), Spam(3), Spam(9), Spam(1), Spam(2)]
py> L
[5, 3, 9, 1, 2]
py> sorted(L)
[1, 2, 3, 5, 9]


as well as max() and min().



-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Python, Linux, default search places.

2016-04-08 Thread Frantisek . Fridrich
Hello.

Thank you to Karim and to Wildman for the responses. 

I will describe my problem in more detail. Python on my computer is 
installed in the /usr directory. My Python contains additional modules such as 
numpy, scipy, matplotlib, h5py, etc. Python runs correctly.

The third-party program opens Python, but I can't import numpy and so on. 
Python can't find numpy. After modifying PYTHONPATH, Python can find numpy 
but shows the following error message:
/usr/lib64/python2.6/lib-dynload/math.so: undefined symbol: PyFPE_jbuf
I think Python can't find some libraries. 

I don't know what the third-party program does with environment variables, 
but I think I should correctly define the environment either in the shell 
before I start the third-party program, or in Python inside the program.

That is the reason why I would like to know the default:
 - search path for libraries,
 - search path for Python modules,
in case LD_LIBRARY_PATH and PYTHONPATH are empty or not defined.

I think that the default
 - search path for executables
is an easier task. I would also like to know its correct default definition. 

OS: SUSE Linux Enterprise Server 11. 
HW: HP DL160 Gen8 SFF CTO.
Python 2.6.

Frantisek

Disclaimer: This email and any files transmitted with it are confidential 
and intended solely for the use of the individual or entity to whom they 
are addressed. Distribution only by express authority of a Rubena company.

-- 
https://mail.python.org/mailman/listinfo/python-list


MySQL - Django can not display international characters

2016-04-08 Thread asimkon .
I have successfully connected MySQL with Django after installing the MySQL
module via the easy_install command. But unfortunately I have a problem
getting the MySQL db to properly recognise Greek characters in Django. In my
settings.py file I have included the following options:

'OPTIONS': {
    'charset': 'utf8',
    'use_unicode': True,
},

Instead of utf8 in the charset I also tried windows-1253 and iso-8859-1, but
with no results.

I have stored database records successfully in Greek via CMD, but when I
try to do the opposite in CMD again via the ORM I get strange characters. Any
kind of help?


Regards
Kostas Asimakopoulos
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: MySQL - Django can not display international characters

2016-04-08 Thread INADA Naoki
How did you create DB?
http://dev.mysql.com/doc/refman/5.7/en/charset-database.html
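For reference, a hedged sketch of the settings.py fragment in question (the database name and credentials are placeholders). Note the database itself must also be created with a UTF-8 default character set, per the link above; the connection charset alone is not enough:

```python
# Illustrative Django settings fragment; NAME/USER/PASSWORD are
# placeholders, not values from the original post.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "secret",
        "OPTIONS": {
            "charset": "utf8",       # connection character set
            "use_unicode": True,     # return unicode strings
        },
    }
}
```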

On Fri, Apr 8, 2016 at 7:26 PM, asimkon .  wrote:

> I have connected successfully MySQL with Django after installing MySQL
> module via easy_install command. But unfortunately I have a problem setting
> mysql db to properly recognise greek characters in django. In my setting.py
> file I have included the following options:
>
> 'OPTIONS': { 'charset': 'utf8', 'use_unicode': True, }, } instead of utf8
> in the charset I tried using windows-1253 and iso-8859-1 but with no
> results.
>
> I have stored database records successfully in greek via CMD, but when i
> try to do the opposite in CMD again via ORM i get strange characters. Any
> kind of help?
>
>
> Regards
> Kostas Asimakopoulos
> --
> https://mail.python.org/mailman/listinfo/python-list
>



-- 
INADA Naoki  
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python, Linux, default search places.

2016-04-08 Thread Karim



On 08/04/2016 12:01, frantisek.fridr...@rubena.cgs.cz wrote:

Hello.

Thank you to Karim, thank you to Wildman for response.

I will describe my problem in more detail. Python on my computer is
installed in /usr directory. My Python contains additional modules such as
numpy, scipy, matplotlib, h5py, … . Python runs correctly.

Third party program opens Python but I can’t import numpy and so on.
Python can’t find numpy. After modifying PYTHONPATH, Python can find numpy
but shows the following error message:
/usr/lib64/python2.6/lib-dynload/math.so: undefined symbol: PyFPE_jbuf
I think Python can’t find some libraries.

I don’t know what the third party program does with environment variables
but I think I should correctly define environment either in shell before I
start third party program or in Python in third party program.

That is the reason why I would like to know default:
  - search path for libraries,
  - search path for Python modules,
in case LD_LIBRARARY_PATH and PYTHONPATH is empty not defined.

I think that default
  - search path for executables
is easier task. I also would like to know the correct default definition.

OS: SUSE Linux Enterprise Server 11.
HW: HP DL160 Gen8 SFF CTO.
Python 2.6.

Frantisek

Disclaimer: This email and any files transmitted with it are confidential
and intended solely for the use of the individual or entity to whom they
are addressed. Distribution only by express authority of a Rubena company.


Hi Frantisek,

If it can help you, whatever platform you are on, running the Python site 
module directly gives you information about all third-party module paths and 
dynamic loads:


$ python /usr/lib/python2.7/site.py

sys.path = [
'/usr/lib/python2.7',
'/home/karim/project/pyparsing-2.0.3',
'/usr/lib/python2.7/plat-x86_64-linux-gnu',
'/usr/lib/python2.7/lib-tk',
'/usr/lib/python2.7/lib-old',
'/usr/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages/PILcompat',
'/usr/lib/python2.7/dist-packages/gtk-2.0',
'/usr/lib/pymodules/python2.7',
'/usr/lib/python2.7/dist-packages/ubuntu-sso-client',
]
USER_BASE: '/home/karim/.local' (exists)
USER_SITE: '/home/karim/.local/lib/python2.7/site-packages' (doesn't exist)
ENABLE_USER_SITE: True

Karim
--
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Steven D'Aprano
On Fri, 8 Apr 2016 05:35 pm, Antoon Pardon wrote:

> Op 08-04-16 om 00:21 schreef Chris Angelico:
>> On Fri, Apr 8, 2016 at 6:56 AM, Antoon Pardon
>>  wrote:
>>> That solution will mean I will have to do about 100% more comparisons
>>> than previously.
>> Try it regardless. You'll probably find that performance is fine.
>> Don't prematurely optimize!
>>
>> ChrisA
> 
> But it was already working and optimized. The python3 approach forces
> me to make changes to working code and make the performance worse.


What exactly is the problem here? Is it just that the built-in "cmp"
function is gone? Then define your own:

def cmp(a, b):
    """Return negative if a < b, zero if a == b, positive if a > b."""
    return (b < a) - (a < b)


That's pretty much how it works in terms of Python operators. It may be very
slightly different in some corner cases, but you may not notice unless
you're using some weird objects.
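For sorting code specifically, the standard library's functools.cmp_to_key (available since Python 3.2) wraps an old-style comparison function such as the shim above:

```python
from functools import cmp_to_key

def cmp(a, b):
    """Return negative if a < b, zero if a == b, positive if a > b."""
    return (b < a) - (a < b)

data = [3, 1, 2]
assert sorted(data, key=cmp_to_key(cmp)) == [1, 2, 3]
# Swapping the arguments sorts in descending order:
assert sorted(data, key=cmp_to_key(lambda a, b: cmp(b, a))) == [3, 2, 1]
```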

If that's not good enough, you can copy the code from the built-in cmp from
the 2.7 release and make a C extension. Or just duplicate the built-in cmp
semantics even more closely. To do that, you have to look at the C code.

In Python 2.7, the built-in cmp is implemented as PyObject_Cmp.

https://hg.python.org/cpython/file/2.7/Python/bltinmodule.c

PyObject_Cmp does some error-checking, then calls PyObject_Compare:

https://hg.python.org/cpython/file/2.7/Objects/abstract.c

PyObject_Compare does some error-checking, then it checks for object
identity (as an optimization), then calls do_cmp:

https://hg.python.org/cpython/file/2.7/Objects/object.c

do_cmp has a bunch of logic to decide whether to use rich comparisons or the
legacy __cmp__ method, and then calls one of a bunch of functions. If you
care, I recommend that you read them yourself, because I'm not fluent with
C. But as near as I can tell, the logic is basically:

(1) if both arguments a and b are the same, and they define __cmp__, 
then return the result of type(a).__cmp__(a, b);

(2) otherwise try in this order a == b, a < b, a > b, and return
the appropriate value for the first which succeeds (if any);

(3) otherwise (there are no comparison operators defined at all),
if a and b are the same type, return 
cmp(address of a, address of b) (yes, this is insane);

(4) if they're different types, there are a bunch of arbitrary rules 
that decide which object comes first, all subject to change 
without warning:

- None is smaller than everything;
- numbers are smaller than other things;
- otherwise compare type names;
- if the type names happen to be the same, or if the numeric values
  are incompatible, then compare the object addresses.


I've simplified a lot: there's extra handling for classic classes, and
warnings if __cmp__ doesn't return -1, 0 or 1, and lots of error checking.
In a nutshell, cmp in Python 2 is a tangled mess. I'm not surprised the
devs wanted to get rid of it.


-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Steven D'Aprano
On Fri, 8 Apr 2016 05:45 pm, Antoon Pardon wrote:

> Op 07-04-16 om 23:08 schreef Ben Finney:
>> Antoon Pardon  writes:
>>
>>> With this method I have to traverse the two tuples almost always
>>> twice. Once to find out if they are equal and if not a second time to
>>> find out which is greater.
>> You are essentially describing the new internal API of comparison
>> operators. That's pretty much unavoidable.
> 
> And nobody thought about this kind of cases or found them important
> enough?

Probably not.

But you know, if you can demonstrate a genuine and severe slowdown with no
easy work-around, you should report it as a bug. It wouldn't be the first
time that functions removed from Python 3 have been re-added because it
turned out that they were needed.



>> If you want to avoid repeating an expensive operation – the computation
>> of the comparison value for an object – you could add an LRU cache to
>> that function. See ‘functools.lru_cache’.
> 
> I'll have a look.

I would be stunned if tuple comparisons with only a handful of values were
slow enough that it's worth caching their results with an lru_cache. But try
it, and see how you go.




-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Marko Rauhamaa
Steven D'Aprano :

> I would be stunned if tuple comparisons with only a handful of values
> were slow enough that its worth caching their results with an
> lru_cache. But try it, and see how you go.

There are two ways your Python program can be slow:

 * You are doing something stupid, like using an O(exp(n)) algorithm when
   an O(n**2) one is available.

 * Python is inherently slower than C by a constant factor.

If you are doing something stupid, do refactor your code. But you
shouldn't try to make artificial optimizations to your code to make it
perform more like C. Write it in C, maybe partially, if you have to.

For example, your performance measurements might show that Python's
method calls are slow. That may be true but it is no invitation to
eliminate method calls from your code.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


I'd like to add -march=native to my pip builds

2016-04-08 Thread Neal Becker
I'd like to add -march=native to my pip builds.  How can I do this?


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Antoon Pardon
Op 08-04-16 om 09:47 schreef Ben Finney:
> Antoon Pardon  writes:
>
>> But it was already working and optimized. The python3 approach forces
>> me to make changes to working code and make the performance worse.
> Yes, changing from Python 2 to Python 3 entails changing working code,
> and entails different implementations for some things.
>
> As for worse performance, that is something you can objectively measure.
> What is the size of the performance reduction you have objectively
> measured from this change?

Well, taking a list of 1000 Sequence-like objects, each sequence
containing between 1 and 100 numbers, and comparing each sequence
to each other 100 times, I get the following results.

Doing it as follows:
seq1 < seq2
seq2 < seq1

takes about 110 seconds.


Doing it like this:
delta = cmp(seq1, seq2)
delta < 0
delta > 0

takes about 50 seconds.

Comparing was done by just iterating over the two sequences and, the
first time the two numbers differed, returning the difference of the
numbers.

Granted, this test just lifted the comparison code from the module.
What was interesting was that the worst case in python3 was comparable
to the better case in python2.
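A scaled-down re-creation of the benchmark's shape might look like the following (a sketch with illustrative sizes, not the original code; absolute timings vary by machine, and the gap depends heavily on whether the element comparisons run at C or Python speed):

```python
import random
import timeit

class Seq:
    # Sequence-like object, roughly as described in the benchmark above.
    def __init__(self, nums):
        self.nums = nums
    def __lt__(self, other):
        return self.nums < other.nums
    def __eq__(self, other):
        return self.nums == other.nums

def cmp_seq(a, b):
    # Single pass, like a Python 2 __cmp__: sign of the first
    # differing pair, falling back to comparing lengths.
    for x, y in zip(a.nums, b.nums):
        if x != y:
            return -1 if x < y else 1
    return (len(a.nums) > len(b.nums)) - (len(a.nums) < len(b.nums))

random.seed(0)
seqs = [Seq([random.randrange(10) for _ in range(random.randint(1, 100))])
        for _ in range(50)]

def two_lt():       # Python 3 style: two '<' tests per pair
    for a in seqs:
        for b in seqs:
            a < b
            b < a

def cmp_style():    # Python 2 style: one pass, then cheap int tests
    for a in seqs:
        for b in seqs:
            delta = cmp_seq(a, b)
            delta < 0
            delta > 0

print("two '<':", timeit.timeit(two_lt, number=3))
print("cmp    :", timeit.timeit(cmp_style, number=3))
```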

-- 
Antoon.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: I'd like to add -march=native to my pip builds

2016-04-08 Thread Stefan Behnel
Neal Becker wrote on 08.04.2016 at 15:27:
> I'd like to add -march=native to my pip builds.  How can I do this?

First of all, make sure you don't install binary packages and wheels.
Changing the C compiler flags will require source builds.

Then, it should be enough to set the CFLAGS environment variable, e.g.

  CFLAGS="-O3 -march=native"  pip install  --no-use-wheel  numpy

Stefan


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: I'd like to add -march=native to my pip builds

2016-04-08 Thread Neal Becker
Stefan Behnel wrote:

> CFLAGS="-O3 -march=native"  pip install  --no-use-wheel

Thanks, not bad.  But no way to put this in a config file so I don't have to 
remember it, I guess?

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Marko Rauhamaa
Antoon Pardon :

> Well having a list of 1000 Sequence like object. Each sequence
> containing between 1 and 100 numbers. Comparing each sequence
> to each other a 100 times. I get the following results.
>
> Doing it as follows:
> seq1 < seq2
> seq2 < seq1
>
> takes about 110 seconds.
>
> Doing it like this:
> delta = cmp(seq1, seq2)
> delta < 0
> delta > 0
>
> takes about 50 seconds.

Looks like a completely artificial scenario.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Chris Angelico
On Fri, Apr 8, 2016 at 11:31 PM, Antoon Pardon
 wrote:
> Doing it as follows:
> seq1 < seq2
> seq2 < seq1
>
> takes about 110 seconds.
>
>
> Doing it like this:
> delta = cmp(seq1, seq2)
> delta < 0
> delta > 0
>
> takes about 50 seconds.

Why are you comparing in both directions, though? cmp() is more
equivalent to this:

seq1 == seq2
seq1 < seq2

You only need ONE comparison, and the other is presumed to be its
opposite. When, in the Python 3 version, would you need to compare
twice?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Ian Kelly
On Fri, Apr 8, 2016 at 3:23 AM, Steven D'Aprano  wrote:
> On Fri, 8 Apr 2016 06:34 pm, Marko Rauhamaa wrote:
>
>> Antoon Pardon :
>>
>>> In python2 descending the tree would only involve at most one
>>> expensive comparison, because using cmp would codify that comparison
>>> into an integer which would then be cheap to compare with 0. Now in
>>> python3, I may need to do two expensive comparisons, because there is
>>> no __cmp__ method, to make such a codefication.
>>
>> I think you should base your tree implementation on key.__lt__() only.
>> Only compare keys using <, nothing else, ever.
>
> I believe that's how list.sort() and sorted() work:
>
> py> class Spam(object):
> ... def __init__(self, n):
> ... self.n = n
> ... def __lt__(self, other):
> ... return self.n < other.n
> ... def __repr__(self):
> ... return repr(self.n)
> ...
> py> L = [Spam(5), Spam(3), Spam(9), Spam(1), Spam(2)]
> py> L
> [5, 3, 9, 1, 2]
> py> sorted(L)
> [1, 2, 3, 5, 9]
>
>
> as well as max() and min().

That's fine for those operations and probably insert, but how do you
search an AVL tree for a specific key without also using __eq__?
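One conventional answer, also used by C++'s sorted containers, is to treat "neither key < node.key nor node.key < key" as equality, so lookup needs only '<' and at most two tests per node (a sketch; the Node class is illustrative, with AVL balancing fields omitted):

```python
class Node:
    # Minimal illustrative node; rebalancing machinery omitted.
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value = key, value
        self.left, self.right = left, right

def search(node, key):
    # Only '<' is used; "neither is less" means the keys are equal.
    while node is not None:
        if key < node.key:
            node = node.left
        elif node.key < key:
            node = node.right
        else:
            return node.value
    raise KeyError(key)

root = Node(5, "five", Node(3, "three"), Node(8, "eight"))
assert search(root, 3) == "three"
```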
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Antoon Pardon
Op 08-04-16 om 16:08 schreef Chris Angelico:
> On Fri, Apr 8, 2016 at 11:31 PM, Antoon Pardon
>  wrote:
>> Doing it as follows:
>> seq1 < seq2
>> seq2 < seq1
>>
>> takes about 110 seconds.
>>
>>
>> Doing it like this:
>> delta = cmp(seq1, seq2)
>> delta < 0
>> delta > 0
>>
>> takes about 50 seconds.
> Why are you comparing in both directions, though? cmp() is more
> equivalent to this:
>
> seq1 == seq2
> seq1 < seq2

That doesn't make a difference.

> You only need ONE comparison, and the other is presumed to be its
> opposite. When, in the Python 3 version, would you need to compare
> twice?

About 50% of the time. When I traverse the tree I go left when the
argument key is smaller than the node key, I go right when it is
greater than the node key and I have found the node I want when
they are equal.

-- 
Antoon Pardon

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Antoon Pardon
Op 08-04-16 om 15:52 schreef Marko Rauhamaa:
> Antoon Pardon :
>
>> Well having a list of 1000 Sequence like object. Each sequence
>> containing between 1 and 100 numbers. Comparing each sequence
>> to each other a 100 times. I get the following results.
>>
>> Doing it as follows:
>> seq1 < seq2
>> seq2 < seq1
>>
>> takes about 110 seconds.
>>
>> Doing it like this:
>> delta = cmp(seq1, seq2)
>> delta < 0
>> delta > 0
>>
>> takes about 50 seconds.
> Looks like a completely artificial scenario.
>
It is the code I run when I traverse a tree and decide whether to
go left, go right, or conclude I have found the node I am looking
for.

And yes I have worked with keys like that.

-- 
Antoon

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Ian Kelly
On Fri, Apr 8, 2016 at 8:08 AM, Chris Angelico  wrote:
> On Fri, Apr 8, 2016 at 11:31 PM, Antoon Pardon
>  wrote:
>> Doing it as follows:
>> seq1 < seq2
>> seq2 < seq1
>>
>> takes about 110 seconds.
>>
>>
>> Doing it like this:
>> delta = cmp(seq1, seq2)
>> delta < 0
>> delta > 0
>>
>> takes about 50 seconds.
>
> Why are you comparing in both directions, though? cmp() is more
> equivalent to this:
>
> seq1 == seq2
> seq1 < seq2
>
> You only need ONE comparison, and the other is presumed to be its
> opposite. When, in the Python 3 version, would you need to compare
> twice?

When there are three possible code paths depending on the result.

def search(key, node):
    if node is None:
        raise KeyError(key)
    if key < node.key:
        return search(key, node.left)
    elif key == node.key:
        return node
    else:
        return search(key, node.right)

How would you implement this with only one comparison?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Chris Angelico
On Sat, Apr 9, 2016 at 12:20 AM, Antoon Pardon
 wrote:
>> You only need ONE comparison, and the other is presumed to be its
>> opposite. When, in the Python 3 version, would you need to compare
>> twice?
>
> About 50% of the time. When I traverse the tree I go left when the
> argument key is smaller than the node key, I go right when it is
> greater than the node key and I have found the node I want when
> they are equal.

How about this:

You have found the node if they are equal.
Otherwise, go left if your argument is smaller than the node.
Otherwise, go right.

You don't have to do three comparisons, only two - and one of them is
an equality, rather than an inequality, which is often cheaper. But
hey. If you really can't handle the double comparison, *write your
own* special-purpose comparison function - nobody's stopping you! It's
just not something that exists *in the language*. If your specific
objects need this, write it!
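
Concretely, the two-comparison lookup described above might look like this — a sketch, with Node as a hypothetical key/left/right record:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def find(node, key):
    while node is not None:
        if key == node.key:      # often cheap when objects differ early
            return node
        # One ordering test decides the direction
        node = node.left if key < node.key else node.right
    raise KeyError(key)
```
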

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Random832
On Fri, Apr 8, 2016, at 10:08, Chris Angelico wrote:
> seq1 == seq2
> seq1 < seq2
> 
> You only need ONE comparison, and the other is presumed to be its
> opposite. When, in the Python 3 version, would you need to compare
> twice?

== might be just as expensive as the others, particularly if the
sequences are unlikely to share object identity. I suspect he chose "s1
< s2; s2 < s1" precisely because someone suggested in another post on
this thread to exclusively use the less-than operator on purity grounds.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Chris Angelico
On Sat, Apr 9, 2016 at 12:22 AM, Ian Kelly  wrote:
>> seq1 == seq2
>> seq1 < seq2
>>
>> You only need ONE comparison, and the other is presumed to be its
>> opposite. When, in the Python 3 version, would you need to compare
>> twice?
>
> When there are three possible code paths depending on the result.
>
> def search(key, node):
>     if node is None:
>         raise KeyError(key)
>     if key < node.key:
>         return search(key, node.left)
>     elif key == node.key:
>         return node
>     else:
>         return search(key, node.right)
>
> How would you implement this with only one comparison?

I was assuming that the equality check could be a lot cheaper than the
inequality, which is often the case when it is false (it's pretty easy
to prove that two enormous objects are different - any point of
difference proves it). Doing the equality check first generally means
you're paying the price of one expensive lookup each time.

If proving that x != y is expensive too, then call it two comparisons
rather than three. But you still don't need to check both directions
of inequality.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Python 3.4 problem with requests module

2016-04-08 Thread 1leefig
Hi all,

I would appreciate any thoughts that you may have regarding a troublesome build 
error. I am at my wits' end.

For some strange reason I get a single error on importing. It's to do with the 
requests module and pyopenssl.py

The comment block indicates:
This needs the following packages installed:

* pyOpenSSL (tested with 0.13)
* ndg-httpsclient (tested with 0.3.2)
* pyasn1 (tested with 0.1.6)

I have done this.

The requests module then imports:
from __future__ import absolute_import

try:
    from ndg.httpsclient.ssl_peer_verification import SUBJ_ALT_NAME_SUPPORT
    from ndg.httpsclient.subj_alt_name import SubjectAltName as BaseSubjectAltName
except SyntaxError as e:
    raise ImportError(e)

import OpenSSL.SSL
from pyasn1.codec.der import decoder as der_decoder
from pyasn1.type import univ, constraint
from socket import _fileobject, timeout, error as SocketError

But I get a python exception on this last line:

ImportError was unhandled by user code
Message: cannot import name '_fileobject'


I am running Python 3.4 and had no problems previously. 

I am not sure if it's to do with the csv fetcher functionality and have tried 
everything.

Thanks
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.4 problem with requests module

2016-04-08 Thread Steven D'Aprano
On Sat, 9 Apr 2016 02:00 am, 1lee...@gmail.com wrote:

> import OpenSSL.SSL
> from pyasn1.codec.der import decoder as der_decoder
> from pyasn1.type import univ, constraint
> from socket import _fileobject, timeout, error as SocketError
> 
> But I get a python exception on this last line:
> 
> ImportError was unhandled by user code
> Message: cannot import name '_fileobject'

If you run this:


import socket
print(socket.__file__)


what does it say?




-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.4 problem with requests module

2016-04-08 Thread Chris Angelico
On Sat, Apr 9, 2016 at 2:00 AM,  <1lee...@gmail.com> wrote:
> from socket import _fileobject, timeout, error as SocketError
>
> But I get a python exception on this last line:
>
> ImportError was unhandled by user code
> Message: cannot import name '_fileobject'
>
>
> I am running Python 3.4 and had no problems previously.

This looks like a shadowed import problem. Try running this:

import socket
print(socket.__file__)

If all's well, you should get a file path that points to your Python
installation (for me, "/usr/local/lib/python3.6/socket.py"). If it
tells you something about your current directory, check for a file
called socket.py or socket.pyc and rename or delete it - you've
shadowed the standard library module.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Marko Rauhamaa
Ian Kelly :

> That's fine for those operations and probably insert, but how do you
> search an AVL tree for a specific key without also using __eq__?

Not needed:


if key < node.key:
    look_right()
elif node.key < key:
    look_left()
else:
    found_it()



Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python programs and relative imports

2016-04-08 Thread Rob Gaddi
Rob Gaddi wrote:

> Does anyone know the history of why relative imports are only available
> for packages and not for "programs"?  It certainly complicates life.
>

Really, no one?  It seems like a fairly obvious thing to have included;
all of the reasons that you want to be explicit in saying:

  from . import mypkg

in a package apply just as well in an executable script.  But instead,
they've got different semantics such that you expressly _cannot_ use
relative imports in a script.  This feels like such a glaring oversight
that there must have been some rationale behind it.

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Ian Kelly
On Fri, Apr 8, 2016 at 10:33 AM, Marko Rauhamaa  wrote:
> Ian Kelly :
>
>> That's fine for those operations and probably insert, but how do you
>> search an AVL tree for a specific key without also using __eq__?
>
> Not needed:
>
> 
> if key < node.key:
>     look_right()
> elif node.key < key:
>     look_left()
> else:
>     found_it()
> 

That makes me a little nervous since it assumes that the keys are
totally ordered and could return an incorrect node if they aren't.
Granted, the keys *should* be totally ordered if the data structure is
being used properly, but an explicit equality check ensures that the
worst that could happen is the node simply isn't found despite being
present.

More to the contextual point, this is still doing two comparisons,
even if both of them are less than, so it doesn't really solve the
OP's issue.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python programs and relative imports

2016-04-08 Thread Chris Angelico
On Sat, Apr 9, 2016 at 2:59 AM, Rob Gaddi
 wrote:
> Rob Gaddi wrote:
>
>> Does anyone know the history of why relative imports are only available
>> for packages and not for "programs"?  It certainly complicates life.
>>
>
> Really, no one?  It seems like a fairly obvious thing to have included;
> all of the reasons that you want to be explicit in saying:
>
>   from . import mypkg
>
> in a package apply just as well in an executable script.  But instead,
> they've got different semantics such that you expressly _cannot_ use
> relative imports in a script.  This feels like such a glaring oversight
> that there must have been some rationale behind it.

You can use the simple "import mypkg" syntax to load these up. I'm not
sure what you're looking for - do you want to prevent that syntax from
working, to prevent accidental shadowing?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: how to convert code that uses cmp to python3

2016-04-08 Thread Marko Rauhamaa
Ian Kelly :

> On Fri, Apr 8, 2016 at 10:33 AM, Marko Rauhamaa  wrote:
>> Ian Kelly :
>>
>>> That's fine for those operations and probably insert, but how do you
>>> search an AVL tree for a specific key without also using __eq__?
>>
>> Not needed:
>>
>> 
>> if key < node.key:
>>     look_right()
>> elif node.key < key:
>>     look_left()
>> else:
>>     found_it()
>> 
>
> That makes me a little nervous since it assumes that the keys are
> totally ordered and could return an incorrect node if they aren't.

That's all the more reason to tie the order explicitly to < and nothing
else.

> Granted, the keys *should* be totally ordered if the data structure is
> being used properly, but an explicit equality check ensures that the
> worst that could happen is the node simply isn't found despite being
> present.

Well, how do you know how __eq__ and __lt__ are related?

Better simply *define* a *match* as

   not key < node.key and not node.key < key
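
As a helper, that definition is one line — a sketch (the name keys_match is hypothetical), assuming only that < gives a strict ordering on the keys:

```python
def keys_match(a, b):
    # Equivalence derived purely from the ordering relation,
    # so __eq__ never has to agree with __lt__.
    return not a < b and not b < a
```
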

> More to the contextual point, this is still doing two comparisons,
> even if both of them are less than, so it doesn't really solve the
> OP's issue.

I'm not sure the OP has a real issue. If *that* is a real issue, I
recommend a different programming language. Thing is, I bet method calls
are the single most expensive Python operation, yet would anyone suggest
avoiding method calls?


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode normalisation [was Re: [beginner] What's wrong?]

2016-04-08 Thread Peter Pearson
On Fri, 08 Apr 2016 16:00:10 +1000, Steven D'Aprano  wrote:
> On Fri, 8 Apr 2016 02:51 am, Peter Pearson wrote:
>> 
>> The Unicode consortium was certifiably insane when it went into the
>> typesetting business.
>
> They are not, and never have been, in the typesetting business. Perhaps
> characters are not the only things easily confused *wink*

Defining codepoints that deal with appearance but not with meaning is
going into the typesetting business.  Examples: ligatures, and spaces of
varying widths with specific typesetting properties like being non-breaking.

Typesetting done in MS Word using such Unicode codepoints will never
be more than a goofy approximation to real typesetting (e.g., TeX), but
it will cost a huge amount of everybody's time, with the current discussion
of ligatures in variable names being just a straw in the wind.  Getting
all the world's writing systems into a single, coherent standard was
an extraordinarily ambitious, monumental undertaking, and I'm baffled
that the urge to broaden its scope in this irrelevant direction was
entertained at all.

(Should this have been in cranky-geezer font?)

-- 
To email me, substitute nowhere->runbox, invalid->com.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode normalisation [was Re: [beginner] What's wrong?]

2016-04-08 Thread Marko Rauhamaa
Peter Pearson :

> On Fri, 08 Apr 2016 16:00:10 +1000, Steven D'Aprano  
> wrote:
>> They are not, and never have been, in the typesetting business.
>> Perhaps characters are not the only things easily confused *wink*
>
> Defining codepoints that deal with appearance but not with meaning is
> going into the typesetting business. Examples: ligatures, and spaces
> of varying widths with specific typesetting properties like being
> non-breaking.
>
> Typesetting done in MS Word using such Unicode codepoints will never
> be more than a goofy approximation to real typesetting (e.g., TeX),
> but it will cost a huge amount of everybody's time, with the current
> discussion of ligatures in variable names being just a straw in the
> wind. Getting all the world's writing systems into a single, coherent
> standard was an extraordinarily ambitious, monumental undertaking, and
> I'm baffled that the urge to broaden its scope in this irrelevant
> direction was entertained at all.

I agree completely but at the same time have a lot of understanding for
the reasons why Unicode had to become such a mess. Part of it is
historical, part of it is political, yet part of it is in the
unavoidable messiness of trying to define what a character is.

For example, is "ä" one character or two: "a" plus "¨"? Is "i" one
character or two: "ı" plus "˙"? Is writing linear or two-dimensional?

Unicode heroically and definitively solved the problems ASCII had posed
but introduced a bag of new, trickier problems.

(As for ligatures, I understand that there might be quite a bit of
legacy software that dedicated code points and code pages for ligatures.
Translating that legacy software to Unicode was made more
straightforward by introducing analogous codepoints to Unicode. Unicode
has quite many such codepoints: µ, K, Ω etc.)


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode normalisation [was Re: [beginner] What's wrong?]

2016-04-08 Thread Chris Angelico
On Sat, Apr 9, 2016 at 3:44 AM, Marko Rauhamaa  wrote:
> Unicode heroically and definitively solved the problems ASCII had posed
> but introduced a bag of new, trickier problems.
>
> (As for ligatures, I understand that there might be quite a bit of
> legacy software that dedicated code points and code pages for ligatures.
> Translating that legacy software to Unicode was made more
> straightforward by introducing analogous codepoints to Unicode. Unicode
> has quite many such codepoints: µ, K, Ω etc.)

More specifically, Unicode solved the problems that *codepages* had
posed. And one of the principles of its design was that every
character in every legacy encoding had a direct representation as a
Unicode codepoint, allowing bidirectional transcoding for
compatibility. Perhaps if Unicode had existed from the dawn of
computing, we'd have less characters; but backward compatibility is
way too important to let a narrow purity argument sway it.
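
Those compatibility codepoints can be inspected with the standard library's unicodedata module; under NFKC normalization each folds onto its ordinary counterpart (a quick sketch, using the micro, Kelvin and ohm signs mentioned above):

```python
import unicodedata

# MICRO SIGN, KELVIN SIGN and OHM SIGN exist for legacy round-tripping;
# NFKC maps them to the ordinary Greek/Latin letters.
for ch in "\u00b5\u212a\u2126":   # µ K Ω
    norm = unicodedata.normalize("NFKC", ch)
    print(unicodedata.name(ch), "->", unicodedata.name(norm))
```
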

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode normalisation [was Re: [beginner] What's wrong?]

2016-04-08 Thread Rustom Mody
On Friday, April 8, 2016 at 10:24:17 AM UTC+5:30, Chris Angelico wrote:
> On Fri, Apr 8, 2016 at 2:43 PM, Rustom Mody  wrote:
> > No I am not clever/criminal enough to know how to write a text that is 
> > visually
> > close to
> > print "Hello World"
> > but is internally closer to
> > rm -rf /
> >
> > For me this:
> > >>> Α = 1
> > >>> A = 2
> > >>> Α + 1 == A
> > True
> 
> >
> >
> > is cure enough that I am not amused
> 
> To me, the above is a contrived example. And you can contrive examples
> that are just as confusing while still being ASCII-only, like
> swimmer/swirnmer in many fonts, or I and l, or any number of other
> visually-confusing glyphs. I propose that we ban the letters 'r' and
> 'l' from identifiers, to ensure that people can't mess with
> themselves.

swirnmer and swimmer are distinguished by squinting a bit
А and A only by digging down into the hex.
If you categorize them as similar/same... well I am not arguing...
will come to you when I am short of straw...


> 
> > Specifically as far as I am concerned if python were to throw back say
> > a ligature in an identifier as a syntax error -- exactly what python2 does 
> > --
> > I think it would be perfectly fine and a more sane choice
> 
> The ligature is handled straight-forwardly: it gets decomposed into
> its component letters. I'm not seeing a problem here.

Yes... there is no problem... HERE [I did say python gets this right where
haskell, for example, gets it wrong]
What's wrong is the whole approach of swallowing gobs of characters that
need not be legal at all and then getting indigestion:

Note the "non-normative" in
https://docs.python.org/3/reference/lexical_analysis.html#identifiers

If a language reference is not normative, what is?
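
The NFKC rule under discussion is easy to observe: a ligature in an identifier is folded into its component letters at compile time. A quick sketch (the variable names are arbitrary):

```python
import unicodedata

src = "\ufb01x = 42"        # identifier spelled with the U+FB01 'fi' ligature
assert unicodedata.normalize("NFKC", "\ufb01x") == "fix"

ns = {}
exec(src, ns)               # Python 3 NFKC-normalizes identifiers while parsing
print("fix" in ns)          # the binding appears under the plain ASCII name
```
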
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python programs and relative imports

2016-04-08 Thread Rob Gaddi
Chris Angelico wrote:

> On Sat, Apr 9, 2016 at 2:59 AM, Rob Gaddi
>  wrote:
>> Rob Gaddi wrote:
>>
>>> Does anyone know the history of why relative imports are only available
>>> for packages and not for "programs"?  It certainly complicates life.
>>>
>>
>> Really, no one?  It seems like a fairly obvious thing to have included;
>> all of the reasons that you want to be explicit in saying:
>>
>>   from . import mypkg
>>
>> in a package apply just as well in an executable script.  But instead,
>> they've got different semantics such that you expressly _cannot_ use
>> relative imports in a script.  This feels like such a glaring oversight
>> that there must have been some rationale behind it.
>
> You can use the simple "import mypkg" syntax to load these up. I'm not
> sure what you're looking for - do you want to prevent that syntax from
> working, to prevent accidental shadowing?
>
> ChrisA

Sort of.  If I've got a directory full of files (in a package)
that I'm working on, the relative import semantics change based on
whether I'm one directory up and importing the package or in the same
directory and importing the files locally.  That is to say if I've got:

pkg/
  __init__.py
  a.py
  usedbya.py

then there is no single syntax I can use in a.py that allows me to both
sit in the pkg directory at the shell and poke at things and import pkg
from the higher level.

If the 'from . import usedbya' syntax were always available, then it
would work the same in either context.  And if as I refactored things,
as they moved in and out of packages, it would all still "just work" for
files that haven't moved relative to one another.

But it would also address the accidental shadowing issue.  If you could
use "from __future__ import force_relative_imports" in an executable,
then the import semantics would ALWAYS be that "import xxx" looks in
sys.path and "from . import xxx" looks locally.  This is akin to what
the C preprocessor has done for decades by differentiating
  #include <stdio.h>
  #include "localdefs.h"

As is, it's a bit of a hodge podge.
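
The asymmetry described above can be reproduced with a throwaway package on disk — a sketch following the pkg/a.py/usedbya.py layout, with VALUE as a made-up module attribute:

```python
import importlib
import os
import sys
import tempfile

# Build pkg/__init__.py, pkg/usedbya.py and pkg/a.py in a temp directory
root = tempfile.mkdtemp()
pkg = os.path.join(root, "pkg")
os.mkdir(pkg)
for name, body in [("__init__.py", ""),
                   ("usedbya.py", "VALUE = 42\n"),
                   ("a.py", "from . import usedbya\n")]:
    with open(os.path.join(pkg, name), "w") as f:
        f.write(body)

sys.path.insert(0, root)
a = importlib.import_module("pkg.a")   # fine: a.py is imported as part of pkg
print(a.usedbya.VALUE)

# But the same file run as a top-level script has no parent package:
with open(os.path.join(pkg, "a.py")) as f:
    src = f.read()
try:
    exec(src, {"__name__": "__main__"})
except ImportError as e:
    print("direct run fails:", e)
```
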

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python programs and relative imports

2016-04-08 Thread Chris Angelico
On Sat, Apr 9, 2016 at 3:50 AM, Rob Gaddi
 wrote:
> Sort of.  If I've got a directory full of files (in a package)
> that I'm working on, the relative import semantics change based on
> whether I'm one directory up and importing the package or in the same
> directory and importing the files locally.  That is to say if I've got:
>
> pkg/
>   __init__.py
>   a.py
>   usedbya.py
>
> then there is no single syntax I can use in a.py that allows me to both
> sit in the pkg directory at the shell and poke at things and import pkg
> from the higher level.
>
> If the 'from . import usedbya' syntax were always available, then it
> would work the same in either context.  And if as I refactored things,
> as they moved in and out of packages, it would all still "just work" for
> files that haven't moved relative to one another.

Ah, I see what you mean. You're working inside an actual package here.
So you can "cd ..; python3 -m pkg.a", or you can "python3 a.py", but
not both.

The simplest fix for that would be to allow "python3 -m .a" to mean
"current directory is a package". I don't think there's currently a
way to spell that, but it ought to be completely backward compatible.
You could raise this on python-ideas and see what people say.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode normalisation [was Re: [beginner] What's wrong?]

2016-04-08 Thread Peter Pearson
On Sat, 9 Apr 2016 03:50:16 +1000, Chris Angelico  wrote:
> On Sat, Apr 9, 2016 at 3:44 AM, Marko Rauhamaa  wrote:
[snip]
>> (As for ligatures, I understand that there might be quite a bit of
>> legacy software that dedicated code points and code pages for ligatures.
>> Translating that legacy software to Unicode was made more
>> straightforward by introducing analogous codepoints to Unicode. Unicode
>> has quite many such codepoints: µ, K, Ω etc.)
>
> More specifically, Unicode solved the problems that *codepages* had
> posed. And one of the principles of its design was that every
> character in every legacy encoding had a direct representation as a
> Unicode codepoint, allowing bidirectional transcoding for
> compatibility. Perhaps if Unicode had existed from the dawn of
> computing, we'd have less characters; but backward compatibility is
> way too important to let a narrow purity argument sway it.

I guess with that historical perspective the current situation
seems almost inevitable.  Thanks.  And thanks to Steven D'Aprano
for other relevant insights.

-- 
To email me, substitute nowhere->runbox, invalid->com.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode normalisation [was Re: [beginner] What's wrong?]

2016-04-08 Thread Rustom Mody
On Friday, April 8, 2016 at 11:14:21 PM UTC+5:30, Marko Rauhamaa wrote:
> Peter Pearson :
> 
> > On Fri, 08 Apr 2016 16:00:10 +1000, Steven D'Aprano  wrote:
> >> They are not, and never have been, in the typesetting business.
> >> Perhaps characters are not the only things easily confused *wink*
> >
> > Defining codepoints that deal with appearance but not with meaning is
> > going into the typesetting business. Examples: ligatures, and spaces
> > of varying widths with specific typesetting properties like being
> > non-breaking.
> >
> > Typesetting done in MS Word using such Unicode codepoints will never
> > be more than a goofy approximation to real typesetting (e.g., TeX),
> > but it will cost a huge amount of everybody's time, with the current
> > discussion of ligatures in variable names being just a straw in the
> > wind. Getting all the world's writing systems into a single, coherent
> > standard was an extraordinarily ambitious, monumental undertaking, and
> > I'm baffled that the urge to broaden its scope in this irrelevant
> > direction was entertained at all.
> 
> I agree completely but at the same time have a lot of understanding for
> the reasons why Unicode had to become such a mess. Part of it is
> historical, part of it is political, yet part of it is in the
> unavoidable messiness of trying to define what a character is.

There are standards and standards.
Just because they are standard does not make them useful, well-designed,
reasonable etc..

It's reasonably likely that all our keyboards start QWERT...
 Doesn't make it a sane design.

Likewise, using NFKC to define the equivalence relation on identifiers
is analogous to saying: since QWERTY has been in use for over a hundred years,
it's a perfectly good design. Just because NFKC has the stamp of the Unicode
consortium does not straightaway make it useful for all purposes
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode normalisation [was Re: [beginner] What's wrong?]

2016-04-08 Thread Rustom Mody
On Friday, April 8, 2016 at 11:33:38 PM UTC+5:30, Peter Pearson wrote:
> On Sat, 9 Apr 2016 03:50:16 +1000, Chris Angelico wrote:
> > On Sat, Apr 9, 2016 at 3:44 AM, Marko Rauhamaa  wrote:
> [snip]
> >> (As for ligatures, I understand that there might be quite a bit of
> >> legacy software that dedicated code points and code pages for ligatures.
> >> Translating that legacy software to Unicode was made more
> >> straightforward by introducing analogous codepoints to Unicode. Unicode
> >> has quite many such codepoints: µ, K, Ω etc.)
> >
> > More specifically, Unicode solved the problems that *codepages* had
> > posed. And one of the principles of its design was that every
> > character in every legacy encoding had a direct representation as a
> > Unicode codepoint, allowing bidirectional transcoding for
> > compatibility. Perhaps if Unicode had existed from the dawn of
> > computing, we'd have less characters; but backward compatibility is
> > way too important to let a narrow purity argument sway it.
> 
> I guess with that historical perspective the current situation
> seems almost inevitable.  Thanks.  And thanks to Steven D'Aprano
> for other relevant insights.

Strange view.
In fact the Unicode standard itself encourages not using the standard in its
entirety:

5.12 Deprecation

In the Unicode Standard, the term deprecation is used somewhat differently than 
it is in some other standards. Deprecation is used to mean that a character or 
other feature is strongly discouraged from use. This should not, however, be 
taken as indicating that anything has been removed from the standard, nor that 
anything is planned for removal from the standard. Any such change is 
constrained by the Unicode Consortium Stability Policies [Stability].

For the Unicode Character Database, there are two important types of 
deprecation to be noted. First, an encoded character may be deprecated. Second, 
a character property may be deprecated.

When an encoded character is strongly discouraged from use, it is given the 
property value Deprecated=True. The Deprecated property is a binary property 
defined specifically to carry this information about Unicode characters. Very 
few characters are ever formally deprecated this way; it is not enough that a 
character be uncommon, obsolete, disliked, or not preferred. Only those few 
characters which have been determined by the UTC to have serious architectural 
defects or which have been determined to cause significant implementation 
problems are ever deprecated. Even in the most severe cases, such as the 
deprecated format control characters (U+206A..U+206F), an encoded character is 
never removed from the standard. Furthermore, although deprecated characters 
are strongly discouraged from use, and should be avoided in favor of other, 
more appropriate mechanisms, they may occur in data. Conformant implementations 
of Unicode processes such as Unicode normalization must handle even deprecated 
characters correctly.

I read this as saying that -- in addition to officially deprecated chars --
there ARE "uncommon, obsolete, disliked, or not preferred" chars
which sensible users should avoid using even though Unicode as a standard is
compelled to keep supporting them.

Which translates into
- python as a language *implementing* unicode (eg in strings) needs to
do it completely if it is to be standard compliant
- python as a *user* of unicode (eg in identifiers) can (and IMHO should)
use better judgement
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode normalisation [was Re: [beginner] What's wrong?]

2016-04-08 Thread Rustom Mody
Adding link

On Friday, April 8, 2016 at 11:48:07 PM UTC+5:30, Rustom Mody wrote:

> 5.12 Deprecation
> 
> In the Unicode Standard, the term deprecation is used somewhat differently 
> than it is in some other standards. Deprecation is used to mean that a 
> character or other feature is strongly discouraged from use. This should not, 
> however, be taken as indicating that anything has been removed from the 
> standard, nor that anything is planned for removal from the standard. Any 
> such change is constrained by the Unicode Consortium Stability Policies 
> [Stability].
> 
> For the Unicode Character Database, there are two important types of 
> deprecation to be noted. First, an encoded character may be deprecated. 
> Second, a character property may be deprecated.
> 
> When an encoded character is strongly discouraged from use, it is given the 
> property value Deprecated=True. The Deprecated property is a binary property 
> defined specifically to carry this information about Unicode characters. Very 
> few characters are ever formally deprecated this way; it is not enough that a 
> character be uncommon, obsolete, disliked, or not preferred. Only those few 
> characters which have been determined by the UTC to have serious 
> architectural defects or which have been determined to cause significant 
> implementation problems are ever deprecated. Even in the most severe cases, 
> such as the deprecated format control characters (U+206A..U+206F), an encoded 
> character is never removed from the standard. Furthermore, although 
> deprecated characters are strongly discouraged from use, and should be 
> avoided in favor of other, more appropriate mechanisms, they may occur in 
> data. Conformant implementations of Unicode processes such as Unicode 
> normalization must handle even deprecated characters correctly.



Link: http://unicode.org/reports/tr44/#Deprecation
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 3.4 problem with requests module

2016-04-08 Thread Chris Angelico
On Sat, Apr 9, 2016 at 4:24 AM, Lee Fig <1lee...@gmail.com> wrote:
> print(socket.__file__)
>
> seems to confirm that all is well. It refers to my Lib folder:
> C:\work\tools\WinPython-64bit-3.4.4.1\python-3.4.4.amd64\Lib\socket.py
>
How frustrating. I will Google shadowed imports as that's a new one on me.
Please feel free to mention any further thoughts.

Please respond to the list, so everyone can see things.

Did you run that from the exact same place that your imports were failing?

Try, immediately after the print call above, "import requests". Does
it still fail?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python programs and relative imports

2016-04-08 Thread Ian Kelly
On Fri, Apr 8, 2016 at 11:50 AM, Rob Gaddi
 wrote:
> Sort of.  If I've got a directory full of files (in a package)
> that I'm working on, the relative import semantics change based on
> whether I'm one directory up and importing the package or in the same
> directory and importing the files locally.  That is to say if I've got:
>
> pkg/
>   __init__.py
>   a.py
>   usedbya.py
>
> then there is no single syntax I can use in a.py that allows me to both
> sit in the pkg directory at the shell and poke at things and import pkg
> from the higher level.
>
> If the 'from . import usedbya' syntax were always available, then it
> would work the same in either context.

Not necessarily. Inside the package, 'from . import usedbya' is
effectively equivalent to 'import pkg.usedbya as usedbya'. Without the
package, all of these modules are at the top level, and 'from . import
usedbya' would conceptually be equivalent to 'import usedbya'. But
there's no guarantee that the 'usedbya' module at the top level of the
module tree is the same 'usedbya.py' file in the current directory; it
could be shadowed by some other module. Whereas with the package, the
packaging ensures that you'll get the module you expect.
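That shadowing behaviour can be demonstrated directly: whichever directory appears first on sys.path wins. A self-contained sketch using throwaway temp directories; the module name `usedbya` is borrowed from the example above:

```python
import os
import sys
import tempfile

# Two directories, each containing a module named 'usedbya'.
d1 = tempfile.mkdtemp()
d2 = tempfile.mkdtemp()
with open(os.path.join(d1, "usedbya.py"), "w") as f:
    f.write("WHO = 'first'\n")
with open(os.path.join(d2, "usedbya.py"), "w") as f:
    f.write("WHO = 'second'\n")

sys.path.insert(0, d2)
sys.path.insert(0, d1)   # d1 is now searched before d2

import usedbya
print(usedbya.WHO)       # 'first' -- d2's copy is silently shadowed
```

With a package, 'from . import usedbya' sidesteps this entirely because the lookup is anchored to the package, not to sys.path order.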
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode normalisation [was Re: [beginner] What's wrong?]

2016-04-08 Thread Steven D'Aprano
On Sat, 9 Apr 2016 03:21 am, Peter Pearson wrote:

> On Fri, 08 Apr 2016 16:00:10 +1000, Steven D'Aprano 
> wrote:
>> On Fri, 8 Apr 2016 02:51 am, Peter Pearson wrote:
>>> 
>>> The Unicode consortium was certifiably insane when it went into the
>>> typesetting business.
>>
>> They are not, and never have been, in the typesetting business. Perhaps
>> characters are not the only things easily confused *wink*
> 
> Defining codepoints that deal with appearance but not with meaning is
> going into the typesetting business.  Examples: ligatures, and spaces of
> varying widths with specific typesetting properties like being
> non-breaking.

Both of which are covered by the requirement that Unicode is capable of
representing legacy encodings/code pages.

Examples: MacRoman contains fl and fi ligatures, and NBSP. 

Non-breaking space is not so much a typesetting property as a semantic
property, that is, it deals with *meaning* (exactly what you suggested it
doesn't deal with). It is a space which doesn't break words.

Ligatures are a good example -- the Unicode consortium have explicitly
refused to add other ligatures beyond the handful needed for backwards
compatibility because they maintain that it is a typesetting issue that is
best handled by the font. There's even a FAQ about that very issue, and I
quote:

"The existing ligatures exist basically for compatibility and round-tripping
with non-Unicode character sets. Their use is discouraged. No more will be
encoded in any circumstances."

http://www.unicode.org/faq/ligature_digraph.html#Lig2
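The compatibility status of those legacy ligatures is visible from Python itself: NFKC normalization folds them to ordinary letters, while NFC leaves them alone. A minimal sketch using the stdlib `unicodedata` module:

```python
import unicodedata

lig = "\ufb01"  # U+FB01 LATIN SMALL LIGATURE FI
print(unicodedata.name(lig))                     # LATIN SMALL LIGATURE FI
print(unicodedata.normalize("NFKC", lig))        # folds to plain 'fi'
print(unicodedata.normalize("NFC", lig) == lig)  # True: NFC preserves it
```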


Unicode currently contains something of the order of one hundred and ten
thousand defined code points. I'm sure that if you went through the entire
list, with a sufficiently loose definition of "typesetting", you could
probably find some that exist only for presentation, and aren't covered by
the legacy encoding clause. So what? One swallow does not mean the season
is spring. Unicode makes an explicit rejection of being responsible for
typesetting. See their discussion on presentation forms:

http://www.unicode.org/faq/ligature_digraph.html#PForms

But I will grant you that sometimes there's a grey area between presentation
and semantics, and the Unicode consortium has to make a decision one way or
another. Those decisions may not always be completely consistent, and may
be driven by political and/or popular demand.

E.g. the Consortium explicitly state that stylistic issues such as bold,
italic, superscript etc are up to the layout engine or markup, and
shouldn't be part of the Unicode character set. They insist that they only
show representative glyphs for code points, and that font designers and
vendors are free (within certain limits) to modify the presentation as
desired. Nevertheless, there are specialist characters with distinct
formatting, and variant selectors for specifying a specific glyph, and
emoji modifiers for specifying skin tone.

But when you get down to fundamentals, character sets and alphabets have
always blurred the line between presentation and meaning. W ("double-u")
was, once upon a time, UU and & (ampersand) started off as a ligature
of "et" (Latin for "and"). There are always going to be cases where
well-meaning people can agree to disagree on whether or not adding the
character to Unicode was justified or not.




-- 
Steven

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Unicode normalisation [was Re: [beginner] What's wrong?]

2016-04-08 Thread Marko Rauhamaa
Steven D'Aprano :

> But when you get down to fundamentals, character sets and alphabets have
> always blurred the line between presentation and meaning. W ("double-u")
> was, once upon a time, UU

But as every Finnish-speaker now knows, "w" is only an old-fashioned
typographic variant of the glyph "v". We still have people who write
"Wirtanen" or "Waltari" to make their last names look respectable and
19th-century-ish.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [beginner] What's wrong?

2016-04-08 Thread sohcahtoa82
On Friday, April 1, 2016 at 3:57:40 PM UTC-7, Mark Lawrence wrote:
> On 01/04/2016 23:44, sohcahto...@gmail.com wrote:
> > On Friday, April 1, 2016 at 3:10:51 PM UTC-7, Michael Okuntsov wrote:
> >> Nevermind. for j in range(1,8) should be for j in range(8).
> >
> > I can't tell you how many times I've gotten bit in the ass with that 
> > off-by-one mistake whenever I use a range that doesn't start at zero.
> >
> > I know that if I want to loop 10 times and I either want to start at zero 
> > or just don't care about the actual number, I use `for i in range(10)`.  
> > But if I want to loop from 10 to 20, my first instinct is to write `for i 
> > in range(10, 20)`, and then I'm left figuring out why my loop isn't 
> > executing the last step.
> >
> 
> "First instinct"?  "I expected"?  The Python docs might not be perfect, 
> but they were certainly adequate enough to get me going 15 years ago, 
> and since then they've improved.  So where is the problem, other than 
> failure to RTFM?
> 
> -- 
> My fellow Pythonistas, ask not what our language can do for you, ask
> what you can do for our language.
> 
> Mark Lawrence

Holy hell, why such an aggressive tone?

I understand how range(x, y) works.  It's just a simple mistake I frequently 
make and have to correct after the first time I run it.  
It's not like I'm saying that the implementation needs to change.  I'm just 
saying that if I want to loop from 10 to 20, my first thought is to use 
range(10, 20).  It is slightly unintuitive.

*YES*, I know it is wrong.  *YES*, I understand why the correct usage would be 
range(10, 21) to get that list from 10 to 20.

Get off your high horse.  Not everybody is like you and has been using Python 
for 15 years and apparently never makes mistakes.
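For what it's worth, the half-open convention is easy to paper over when an inclusive range reads more naturally. A trivial sketch; `closed_range` is not a stdlib name:

```python
def closed_range(start, stop, step=1):
    # Like range(), but includes the stop value.
    return range(start, stop + step, step)

print(list(range(10, 20)))         # 10..19 -- the usual surprise
print(list(closed_range(10, 20)))  # 10..20 inclusive
```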
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [beginner] What's wrong?

2016-04-08 Thread Mark Lawrence via Python-list

On 08/04/2016 23:59, sohcahto...@gmail.com wrote:

On Friday, April 1, 2016 at 3:57:40 PM UTC-7, Mark Lawrence wrote:

On 01/04/2016 23:44, sohcahto...@gmail.com wrote:

On Friday, April 1, 2016 at 3:10:51 PM UTC-7, Michael Okuntsov wrote:

Nevermind. for j in range(1,8) should be for j in range(8).


I can't tell you how many times I've gotten bit in the ass with that off-by-one 
mistake whenever I use a range that doesn't start at zero.

I know that if I want to loop 10 times and I either want to start at zero or 
just don't care about the actual number, I use `for i in range(10)`.  But if I 
want to loop from 10 to 20, my first instinct is to write `for i in range(10, 
20)`, and then I'm left figuring out why my loop isn't executing the last step.



"First instinct"?  "I expected"?  The Python docs might not be perfect,
but they were certainly adequate enough to get me going 15 years ago,
and since then they've improved.  So where is the problem, other than
failure to RTFM?

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence


Holy hell, why such an aggressive tone?

I understand how range(x, y) works.  It's just a simple mistake that I 
frequently do it wrong and have to correct it after the first time I run it.  
It's not like I'm saying that the implementation needs to change.  I'm just 
saying that if I want to loop from 10 to 20, my first thought is to use 
range(10, 20).  It is slightly unintuitive.

*YES*, I know it is wrong.  *YES*, I understand why the correct usage would be 
range(10, 21) to get that list from 10 to 20.

Get off your high horse.  Not everybody is like you and has been using Python 
for 15 years and apparently never makes mistakes.



*plonk*

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


(Python 3.5) Asyncio and an attempt to run loop.run_until_complete() from within a running loop

2016-04-08 Thread Alexander Myodov
Hello.

TLDR: how can I use something like loop.run_until_complete(coro), to execute a 
coroutine synchronously, while the loop is already running?

More on this:

I was trying to create an aio_map(coro, iterable) function (which would 
asynchronously launch a coroutine for each iteration over iterable, and collect 
the data; similarly to gevent.pool.Group.imap() from another async world), but 
got stuck while attempting to make it work well both from outside the async 
event loop and from inside one - any help?

My code is at http://paste.pound-python.org/show/EQvN2cSDp0xqXK56dUPy/ - and I 
got stuck around lines 22-28, with the problem that loop.run_until_complete() 
cannot be executed when the loop is running already (raising "RuntimeError: 
Event loop is running."). Is it normal? and why is it so restrictive? And what 
can I do to wait for `coros` Future to be finished?

I tried various mixes of loop.run_forever() and even loop._run_once() there, 
but was not able to create a stable working code. Am I doing something 
completely wrong here? Am I expected to create a totally new event loop for 
synchronously waiting for the Future, if the current event loop is running - 
and if so, won't the previous event loop miss any of its events?


Thank you in advance.
Alexander
-- 
https://mail.python.org/mailman/listinfo/python-list


QWERTY was not designed to intentionally slow typists down (was: Unicode normalisation [was Re: [beginner] What's wrong?])

2016-04-08 Thread Ben Finney
Dennis Lee Bieber  writes:

> [The QWERTY keyboard layout] was a sane design -- for early mechanical
> typewriters. It fulfills its goal of slowing down a typist to reduce
> jamming print-heads at the platen.

This is an often-repeated myth, with citations back as far as the 1970s.
It is false.

The design is intended to reduce jamming the print heads together, but
the goal of this is not to reduce speed, but to enable *fast* typing.

It aims to maximise the frequency in which (English-language) text has
consecutive letters alternating either side of the middle of the
keyboard. This should thus reduce collisions of nearby heads — and hence
*increase* the effective typing speed that can be achieved on such a
mechanical typewriter.

The degree to which this maximum was achieved is arguable. Certainly the
relevance to keyboards today, with no connection from the layout to
whether print heads will jam, is negligible.

What is not arguable is that there is no evidence the design had any
intention of *slowing* typists in any way. Quite the opposite, in fact.

<http://www.straightdope.com/columns/read/221/was-the-qwerty-keyboard-purposely-designed-to-slow-typists>,
and other links from the Wikipedia article
<https://en.wikipedia.org/wiki/QWERTY#History_and_purposes>, should
allow interested people to get the facts right on this canard.

-- 
 \ “I used to think that the brain was the most wonderful organ in |
  `\   my body. Then I realized who was telling me this.” —Emo Philips |
_o__)  |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Repair??

2016-04-08 Thread Amaya McLean
After I install Python, I try to run it, and it always says it needs to be
repaired, and when I do, it still doesn't fix the problem.
If you could help me out, that would be great!
Thanks,
Amaya
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Repair??

2016-04-08 Thread Ben Finney
Amaya McLean  writes:

> After I install Python

How, specifically, are you installing Python? There are many ways, and
we can't guess which you use.

Which particular Python installation have you obtained? From what
specific URL? Python is available from many sources and we can't guess
which you obtained.

Onto which platform? Python is available for many, and we can't guess
which is relevant to your case.

> I try to run it

How do you try to run it? What action do you do? We can't guess that
either.

> and it always says it needs to be repaired, and when I do, it still
> doesn't fix the problem.

With some relevant details we may be able to help.

> If you could help me out, that would be great! Thanks, Amaya

Please also note that this is the community forum for Python, and by
posting here you are inviting open discussion among volunteers. This is
not an issue tracker nor a commercial support service. You're welcome
here, but I wanted to be clear on what you can expect :-)

-- 
 \ “[H]ow deep can a truth be — indeed, how true can it be — if it |
  `\ is not built from facts?” —Kathryn Schulz, 2015-10-19 |
_o__)  |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


test post please ignore

2016-04-08 Thread Random832
Testing posting from an email address other than the one I'm subscribed
in, to determine whether it's possible to post to the list without being
subscribed.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Repair??

2016-04-08 Thread Random832
I suspect that the reason that a lot of people who report issues like
this don't seem to follow up is that they may not be subscribed to the
list, and replies are sent to the list exclusively.

Quoting the entire reply so they see it.

On Fri, Apr 8, 2016, at 21:24, Ben Finney wrote:
> Amaya McLean  writes:
> 
> > After I install Python
> 
> How, specifically, are you installing Python? There are many ways, and
> we can't guess which you use.
> 
> Which particular Python installation have you obtained? From what
> specific URL? Python is available from many sources and we can't guess
> which you obtained.
> 
> Onto which platform? Python is available for many, and we can't guess
> which is relevant to your case.
> 
> > I try to run it
> 
> How do you try to run it? What action do you do? We can't guess that
> either.
> 
> > and it always says it needs to be repaired, and when I do, it still
> > doesn't fix the problem.
> 
> With some relevant details we may be able to help.
> 
> > If you could help me out, that would be great! Thanks, Amaya
> 
> Please also note that this is the community forum for Python, and by
> posting here you are inviting open discussion among volunteers. This is
> not an issue tracker nor a commercial support service. You're welcome
> here, but I wanted to be clear on what you can expect :-)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: test post please ignore

2016-04-08 Thread Ethan Furman

On 04/08/2016 06:32 PM, Random832 wrote:


Testing posting from an email address other than the one I'm subscribed
in, to determine whether it's possible to post to the list without being
subscribed.


Kinda.  :)

--
~Ethan~

--
https://mail.python.org/mailman/listinfo/python-list


Re: QWERTY was not designed to intentionally slow typists down (was: Unicode normalisation [was Re: [beginner] What's wrong?])

2016-04-08 Thread Steven D'Aprano
On Sat, 9 Apr 2016 10:43 am, Ben Finney wrote:

> Dennis Lee Bieber  writes:
> 
>> [The QWERTY keyboard layout] was a sane design -- for early mechanical
>> typewriters. It fulfills its goal of slowing down a typist to reduce
>> jamming print-heads at the platen.
> 
> This is an often-repeated myth, with citations back as far as the 1970s.
> It is false.
> 
> The design is intended to reduce jamming the print heads together, but
> the goal of this is not to reduce speed, but to enable *fast* typing.

And how did it enable fast typing? By *slowing down the typist*, and thus
having fewer jams.

Honestly, I have the greatest respect for the Straight Dope, but this is one
of those times when they miss the forest for the trees. The conventional
wisdom about typewriters isn't wrong -- or at least there's no evidence
that it's wrong.

As far as I can tell, *every single* argument against the conventional wisdom
comes down to an argument that it is ridiculous or silly that anyone might
have wanted to slow typing down. For example, Wikipedia links to this page:

http://www.smithsonianmag.com/arts-culture/fact-of-fiction-the-legend-of-the-qwerty-keyboard-49863249/?no-ist

which quotes researchers:

“The speed of Morse receiver should be equal to the Morse sender, of course.
If Sholes really arranged the keyboard to slow down the operator, the
operator became unable to catch up the Morse sender. We don’t believe that
Sholes had such a nonsense intention during his development of
Type-Writer.”

This is merely argument from personal incredibility:

http://rationalwiki.org/wiki/Argument_from_incredulity

and is trivially answerable: how well do you think the receiver can keep up
with the sender if they have to stop every few dozen keystrokes to unjam
the typewriter?

Wikipedia states:

"Contrary to popular belief, the QWERTY layout was not designed to slow the
typist down,[3]"

with the footnote [3] linking to

http://www.maltron.com/media/lillian_kditee_001.pdf

which clearly and prominently states in the THIRD paragraph:

"It has been said of the Sholes letter layout [QWERTY] that it would
probably have been chosen if the objective was to find the least
efficient -- in terms of learning time and speed achievable -- and the most
error producing character arrangement. This is not surprising when one
considers that a team of people spent one year developing this layout so
that it should provide THE GREATEST INHIBITION TO FAST KEYING. [Emphasis
added.] This was no Machiavellian plot, but necessary because the mechanism
of the early typewriters required slow operation."

This is the power of the "slowing typists down is a myth" meme: the same
Wikipedia contributor takes an article which *clearly and obviously*
repeats the conventional narrative that QWERTY was designed to decrease the
number of key presses per second, and uses that to defend the counter-myth
that QWERTY wasn't designed to decrease the number of key presses per
second!

These are the historical facts:

- early typewriters had varying layouts, some of which allow much more rapid
keying than QWERTY;

- early typewriters were prone to frequent and difficult jamming;

- Sholes spent significant time developing a layout which reduced the number
of jams by intentionally moving frequently typed characters far apart,
which has the effect of slowing down the rate at which the typist can hit
keys;

- which results in greater typing speed due to a reduced number of jams.

In other words the conventional story.

Jams have such a massively negative effect on typing speed that reducing the
number of jams gives you a *huge* win on overall speed even if the rate of
keying is significantly lower. At first glance, it may seem paradoxical,
but it's not. Which is faster?

- typing at a steady speed of (lets say) 100 words per minute;

- typing in bursts of (say) 200 wpm for a minute, followed by three minutes
of 0 wpm.

The second case averages half the speed of the first, even though the typist
is hitting keys at a faster rate. This shouldn't be surprising to any car
driver who has raced from one red light to the next, only to be caught up
and even overtaken by somebody driving at a more sedate speed who caught
nothing but green lights. Or to anyone who has heard the story of the
Tortoise and the Hare.
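The arithmetic behind that comparison is simple (both figures in words per minute, averaged over the same four-minute window):

```python
# Steady typist: 100 wpm for all four minutes.
steady = (100 * 4) / 4
# Bursty typist: 200 wpm for one minute, then three minutes jammed at 0 wpm.
bursty = (200 * 1 + 0 * 3) / 4
print(steady, bursty)   # 100.0 50.0
```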

The moral of QWERTY is "less haste, more speed".

The myth of the "QWERTY myth" is based on the idea that people are unable to
distinguish between peak speed and average speed. But ironically, in my
experience, it's only those repeating the myth who seem confused by that
difference (as in the quote from the Smithsonian above). Most people don't
need the conventional narrative explained:

"Speed up typing by slowing the typist down? Yeah, that makes sense. When I
try to do things in a rush, I make more mistakes and end up taking longer
than I otherwise would have. This is exactly the same sort of principle."

while others, like our dear Cecil from the Straight Dope, wrongly imagine
that o

Re: (Python 3.5) Asyncio and an attempt to run loop.run_until_complete() from within a running loop

2016-04-08 Thread Frank Millman
"Alexander Myodov"  wrote in message 
news:33e44698-2625-47c4-9595-00a8c79f2...@googlegroups.com...



Hello.


TLDR: how can I use something like loop.run_until_complete(coro), to 
execute a coroutine synchronously, while the loop is already running?


I am no expert, but does this help?

"If you're handling coroutines there is an asyncio facility for "background 
tasks".  asyncio.ensure_future() will take a coroutine, attach it to a Task, 
and return a future to you that resolves when the coroutine is complete."


This advice was given to me a while back when I wanted to run a background 
task. It works perfectly for me.
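A minimal sketch of the aio_map() idea along those lines: launch each coroutine as a task with ensure_future() and await them together with asyncio.gather(), instead of calling run_until_complete() from inside the loop. (This uses asyncio.run(), added in Python 3.7; on 3.5 the top-level call would be loop.run_until_complete() instead. The helper names are illustrative.)

```python
import asyncio

async def aio_map(coro_func, iterable):
    # From inside a running loop: just await this coroutine directly.
    tasks = [asyncio.ensure_future(coro_func(item)) for item in iterable]
    return await asyncio.gather(*tasks)   # results in input order

async def double(x):
    await asyncio.sleep(0)
    return x * 2

# From outside the loop: hand the whole thing to the loop once.
result = asyncio.run(aio_map(double, range(5)))
print(result)   # [0, 2, 4, 6, 8]
```

The key point is that there is only ever one run_until_complete()/run() call, at the very top; everything inside the loop composes by awaiting, never by re-entering the loop.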


Frank Millman


--
https://mail.python.org/mailman/listinfo/python-list