Re: ImportError: cannot import name 'RAND_egd'

2016-02-10 Thread shaunak . bangale
On Tuesday, February 9, 2016 at 1:33:23 PM UTC-7, Ian wrote:
> On Tue, Feb 9, 2016 at 7:55 AM,   wrote:
> > Hi,
> >
> > I am trying to run a 60-line Python script which runs on a Mac machine, 
> > but on a Windows machine I get this error when I run it in the 
> > shell (open file and run module). I have Python 3.5 installed.
> >
> >from _ssl import RAND_status, RAND_egd, RAND_add
> > ImportError: cannot import name 'RAND_egd'
> 
> Why are you importing these directly from the "_ssl" C module and not
> from the "ssl" wrapper module? Anything that starts with an _ should
> be considered a private implementation detail and shouldn't be relied
> upon.
> 
> > From forums, I found that it is a common error but could not find a good 
> > solution that will work for me.
> >
> > One of the ways was to create a Scripts folder, put easy_install.exe in it, 
> > and then run easy_install pip, but that gave me a syntax error.
> >
> > Please advise. Thanks in advance.
> 
> The ssl module in the standard library has this:
> 
> try:
> from _ssl import RAND_egd
> except ImportError:
> # LibreSSL does not provide RAND_egd
> pass
> 
> So it looks like you cannot depend on ssl.RAND_egd to be present.

Hi Ian,
Thanks for your reply.
I wasn't trying to import it from _ssl; that was part of the error message. My 
code did not have RAND_egd. I think it was just about the ssl package being 
missing. After installing the Anaconda distribution, it stopped throwing this 
particular error at least.
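For what it's worth, a portable pattern here is to feature-test the public ssl wrapper module rather than importing the private name, mirroring the stdlib's own guarded import. A minimal sketch:

```python
import ssl

# RAND_egd may or may not exist depending on how Python's SSL support
# was built (LibreSSL omits it), so feature-test instead of importing
# the name unconditionally.
egd_available = hasattr(ssl, 'RAND_egd')
```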
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Cygwin and Python3

2016-02-10 Thread Mark Lawrence

On 10/02/2016 03:39, Mike S via Python-list wrote:

On 2/9/2016 7:26 PM, Larry Hudson wrote:

On 02/09/2016 08:41 AM, Fillmore wrote:


Hi, I am having a hard time making my Cygwin run Python 3.5 (or Python
2.7 for that matter).
The command will hang and nothing happens.



Just curious...

Since Python runs natively in Windows, why are you trying to run it with
Cygwin?
I'm not implying that you shouldn't, just offhand I don't see a reason
for it.

  -=- Larry -=-



Have you seen this?
http://www.davidbaumgold.com/tutorials/set-up-python-windows/



I have now, but I'm perfectly happy with the free versions of Visual Studio.

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


pylint -> ImportError: No module named lazy_object_proxy

2016-02-10 Thread Michael Ströder
HI!

Hmm, I've used pylint before but my current installation gives me an 
ImportError:

$ pylint
Traceback (most recent call last):
  File "/usr/bin/pylint", line 3, in <module>
    run_pylint()
  File "/usr/lib/python2.7/site-packages/pylint/__init__.py", line 22, in run_pylint
    from pylint.lint import Run
  File "/usr/lib/python2.7/site-packages/pylint/lint.py", line 44, in <module>
    import astroid
  File "/usr/lib/python2.7/site-packages/astroid/__init__.py", line 54, in <module>
    from astroid.nodes import *
  File "/usr/lib/python2.7/site-packages/astroid/nodes.py", line 39, in <module>
    from astroid.node_classes import (
  File "/usr/lib/python2.7/site-packages/astroid/node_classes.py", line 24, in <module>
    import lazy_object_proxy
ImportError: No module named lazy_object_proxy

Can anybody here give me a hint what's missing?
six, astroid and tk modules are installed.
Any more dependencies?

Ciao, Michael.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: pylint -> ImportError: No module named lazy_object_proxy

2016-02-10 Thread Peter Otten
Michael Ströder wrote:

> HI!
> 
> Hmm, I've used pylint before but my current installation gives me an
> ImportError:
> 
> $ pylint
> Traceback (most recent call last):
>   File "/usr/bin/pylint", line 3, in <module>
>     run_pylint()
>   File "/usr/lib/python2.7/site-packages/pylint/__init__.py", line 22, in run_pylint
>     from pylint.lint import Run
>   File "/usr/lib/python2.7/site-packages/pylint/lint.py", line 44, in <module>
>     import astroid
>   File "/usr/lib/python2.7/site-packages/astroid/__init__.py", line 54, in <module>
>     from astroid.nodes import *
>   File "/usr/lib/python2.7/site-packages/astroid/nodes.py", line 39, in <module>
>     from astroid.node_classes import (
>   File "/usr/lib/python2.7/site-packages/astroid/node_classes.py", line 24, in <module>
>     import lazy_object_proxy
> ImportError: No module named lazy_object_proxy
> 
> Can anybody here give me a hint what's missing?
> six, astroid and tk modules are installed.
> Any more dependencies?

How about the dependencies' dependencies? Grepping through the astroid 
source finds

./astroid/__pkginfo__.py:install_requires = ['six', 'lazy_object_proxy', 
'wrapt']

But doesn't pip take care of these?



-- 
https://mail.python.org/mailman/listinfo/python-list


Copying void * string to

2016-02-10 Thread Martin Phillips
I am writing a Python wrapper to go around a C library. I have encountered a 
problem that I have been unable to resolve with
countless web searches.

 

Several functions in the C library return pointers to dynamically allocated 
wchar_t null-terminated strings. I need to copy the
string to a Python variable and call an existing library function that will 
free the dynamically allocated memory.

 

My test code for this is

 

def Test(fno, item):
    func = mylib.MyFunc
    func.restype = ct.c_void_p
    s = func(fno, item)
    result = s
    mylib.free(s)
    return result

 

The problem is with the line that sets the result variable. I need this to make 
a copy of the dynamically allocated string, not the
pointer to it.

 

Thanks in advance.

 

 

Martin

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Copying void * string to

2016-02-10 Thread Ian Kelly
On Wed, Feb 10, 2016 at 5:07 AM, Martin Phillips
 wrote:
> I am writing a Python wrapper to go around a C library. I have encountered a 
> problem that I have been unable to resolve with
> countless web searches.
>
>
>
> Several functions in the C library return pointers to dynamically allocated 
> wchar_t null-terminated strings. I need to copy the
> string to a Python variable and call an existing library function that will 
> free the dynamically allocated memory.
>
>
>
> My test code for this is
>
>
>
> def Test(fno, item):
>     func = mylib.MyFunc
>     func.restype = ct.c_void_p
>     s = func(fno, item)
>     result = s
>     mylib.free(s)
>     return result
>
>
>
> The problem is with the line that sets the result variable. I need this to 
> make a copy of the dynamically allocated string, not the
> pointer to it.

Does ctypes.wstring_at(s) do what you want?
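For anyone reading along, here is a self-contained sketch of what ctypes.wstring_at does; a ctypes buffer stands in for the C-allocated string, since the poster's mylib isn't available:

```python
import ctypes as ct

# create_unicode_buffer stands in for the wchar_t* the C library returns.
buf = ct.create_unicode_buffer('hello')
addr = ct.cast(buf, ct.c_void_p).value   # the raw address, as c_void_p yields it

# wstring_at copies the NUL-terminated wchar_t data into an independent
# Python str, so the original memory could then be freed safely.
copied = ct.wstring_at(addr)
```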
-- 
https://mail.python.org/mailman/listinfo/python-list


Python Twitter Error

2016-02-10 Thread Andra-Irina Vasile
Hello!

Could you be so kind to help me with an issue? I have some problems with
python when I am trying to use the twitter api... I have Python 3.5.1 (32
bit). I tried to reinstall the module, but I am still receiving the same
errors... I attached in this email some screenshots with them.

Thank you in advance!

Best regards,
Andra-Irina Vasile
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python Twitter Error

2016-02-10 Thread Joel Goldstick
On Wed, Feb 10, 2016 at 11:31 AM, Andra-Irina Vasile <
andra.irina.vas...@gmail.com> wrote:

> Hello!
>
> Could you be so kind to help me with an issue? I have some problems with
> python when I am trying to use the twitter api... I have Python 3.5.1 (32
> bit). I tried to reinstall the module, but I am still receiving the same
> errors... I attached in this email some screenshots with them.
>
> Thank you in advance!
>
> Best regards,
> Andra-Irina Vasile
> --
> https://mail.python.org/mailman/listinfo/python-list
>

This list doesn't take attachments.  Please copy and paste the error into your
question.


-- 
Joel Goldstick
http://joelgoldstick.com/stats/birthdays
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: pylint -> ImportError: No module named lazy_object_proxy

2016-02-10 Thread Michael Ströder
Peter Otten wrote:
> Michael Ströder wrote:
> 
>> HI!
>>
>> Hmm, I've used pylint before but my current installation gives me an
>> ImportError:
>>
>> $ pylint
>> [..]
>> ImportError: No module named lazy_object_proxy
>>
>> Can anybody here give me a hint what's missing?
>> six, astroid and tk modules are installed.
>> Any more dependencies?
> 
> How about the dependencies' dependencies? Grepping through the astroid 
> source finds
> 
> ../astroid/__pkginfo__.py:install_requires = ['six', 'lazy_object_proxy', 
> 'wrapt']

Ah, overlooked this. Thanks.

> But doesn't pip care of these?

Yes, likely, but...

I've added new openSUSE packages python-lazy_object_proxy and python-wrapt which
hopefully will appear in devel:languages:python repo soon.

Ciao, Michael.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Asyncio thought experiment

2016-02-10 Thread Sven R. Kunze

On 08.02.2016 23:13, Marko Rauhamaa wrote:

As I stated in an earlier post, a normal subroutine may turn out to be
blocking. To make it well-behaved under asyncio, you then dutifully tag
the subroutine with "async" and adorn the blocking statement with
"await". Consequently, you put "await" in front of all calls to the
subroutine and cascade the "async"s and "await"s all the way to the top
level.

Now what would prevent you from making *every* function an "async" and
"await"ing *every* function call? Then, you would never fall victim to
the cascading async/await.

And if you did that, why bother sprinkling async's and await's
everywhere? Why not make every single function call an await implicitly
and every single subroutine an async? In fact, that's how everything
works in multithreading: blocking statements don't need to be ornamented
in any manner.


So? :)

Best,
Sven
--
https://mail.python.org/mailman/listinfo/python-list


Re: Heap Implementation

2016-02-10 Thread Sven R. Kunze

Hi Cem,

On 08.02.2016 02:37, Cem Karan wrote:

My apologies for not writing sooner, but work has been quite busy lately (and 
likely will be for some time to come).


no problem here. :)


I read your approach, and it looks pretty good, but there may be one issue with 
it; how do you handle the same item being pushed into the heap more than once?  
In my simple simulator, I'll push the same object into my event queue multiple 
times in a row.  The priority is the moment in the future when the object will 
be called.  As a result, items don't have unique priorities.  I know that there 
are methods of handling this from the client-side (tuples with unique counters 
come to mind), but if your library can handle it directly, then that could be 
useful to others as well.


I've pondered about that in the early design phase. I considered it a 
slowdown for my use-case without benefit.


Why? Because I always push a fresh object ALTHOUGH it might be equal 
comparing attributes (priority, deadline, etc.).



That's the reason why I need to ask again: why pushing the same item on 
a heap?



Are we talking about function objects? If so, then your concern is 
valid. Would you accept a solution that would involve wrapping the 
function in another object carrying the priority? Would you prefer a 
wrapper that's defined by xheap itself so you can just use it?
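As a sketch of the wrapper idea (not xheap's actual API, just an illustration using the stdlib heapq): wrap each item with its priority plus a tiebreaker counter, so the same object can sit in the heap several times without needing unique priorities:

```python
import heapq
import itertools

_counter = itertools.count()

class Entry:
    """Wraps an item with a priority; ties are broken by insertion
    order, so the same item can be pushed more than once."""
    __slots__ = ('priority', 'order', 'item')

    def __init__(self, priority, item):
        self.priority = priority
        self.order = next(_counter)
        self.item = item

    def __lt__(self, other):
        return (self.priority, self.order) < (other.priority, other.order)

heap = []
event = object()                       # the same object, pushed twice
heapq.heappush(heap, Entry(5.0, event))
heapq.heappush(heap, Entry(2.0, event))
first = heapq.heappop(heap)            # the Entry with the lower priority
```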



Best,
Sven
--
https://mail.python.org/mailman/listinfo/python-list


Re: Python's import situation has driven me to the brink of imsanity

2016-02-10 Thread Sivan Greenberg
Not entirely on-topic here, but is distutils still in active use "in
the wild"?

-Sivan

On Wed, Feb 10, 2016 at 2:31 AM, Oscar Benjamin 
wrote:

> On 8 February 2016 at 00:38,   wrote:
> > Running python setup.py develop doesn't work, it gives me this error:
> error: invalid command 'develop'
>
> This is presumably because your setup.py script uses distutils rather
> than setuptools: distutils doesn't have the develop command.
>
> > Running pip install -e . does work.
>
> That's because pip "injects setuptools" so that when you import
> distutils in your setup.py you're actually importing a monkey-patched
> setuptools. You may as well import setuptools in your setup.py, but
> either way the recommended invocation is "pip install -e .".
>
> --
> Oscar
> --
> https://mail.python.org/mailman/listinfo/python-list
>



-- 
Sivan Greenberg
Co founder & CTO
Vitakka Consulting
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Importing two modules of same name

2016-02-10 Thread Tim Johnson
* dieter  [160209 23:03]:
> Carl Meyer  writes:
> > ...
> > If you omit the future-import in Python 2.7, `import config` will import
> > the neighboring app/config.py by default, and there is no way to import
> > the top-level config.py.
> 
> There is the "__import__" builtin function which allows to specify
> the "parent package" indirectly via its "globals" parameter. This
> way, you can import the "top-level" config (passing an empty "globals").
  Thanks.
  I used __import__ as part of a custom load() function that I used
  in my own framework when I was developing CGI sites. I never used
  the globals parameter tho'.
  If I start
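For illustration, the call dieter describes looks roughly like this; json stands in for the top-level module, since the poster's config.py isn't available here:

```python
# Passing empty globals gives the import machinery no package context,
# so the name is resolved as top-level even when called from inside a
# package (where Python 2's implicit relative import would otherwise
# pick a neighbouring module of the same name).
top_level = __import__('json', {})
```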

-- 
Tim 
http://www.akwebsoft.com, http://www.tj49.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python's import situation has driven me to the brink of imsanity

2016-02-10 Thread Mark Lawrence

On 10/02/2016 19:05, Sivan Greenberg wrote:

not entirely on-topic here, but is distutils still in active use "in
the wild"?

-Sivan



Given that there was distutils2 which took the same course as the 
Norwegian Blue, I would say no, distutils is not active.  I'll happily 
stand corrected.


As a slight aside, please don't top post, it's irritating, thanks :)

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: Cygwin and Python3

2016-02-10 Thread Benoit Izac
Larry Hudson  writes:

>> Hi, I am having a hard time making my Cygwin run Python 3.5 (or
>> Python 2.7 for that matter).
>> The command will hang and nothing happens.
>
> Just curious...
>
> Since Python runs natively in Windows, why are you trying to run it
> with Cygwin? I'm not implying that you shouldn't, just offhand I don't
> see a reason for it.

I do it because it's easier to install third-party packages, those that
need an external library to run. Cygwin comes with a lot of lib* and
lib*-devel packages that let you just run `pip install xxx' if the package
isn't already packaged. I gave the native Windows version and Anaconda a
try, but there is at least one package that I could not run (and I lost
a lot of time compiling a bunch of libraries).

Example of package: pyproj (proj4), openpyxl with lxml (libxml2,
libxslt) and pillow (libjpeg, zlib, libtiff, ...), psycopg2 (libpq).

--
Benoit Izac
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Cygwin and Python3

2016-02-10 Thread Terry Reedy

On 2/10/2016 4:26 PM, Benoit Izac wrote:

Larry Hudson  writes:



Since Python runs natively in Windows, why are you trying to run it
with Cygwin? I'm not implying that you shouldn't, just offhand I don't
see a reason for it.


I do it because it's easier to install third-party packages, those that
need an external library to run. Cygwin comes with a lot of lib* and
lib*-devel packages that let you just run `pip install xxx' if the package
isn't already packaged. I gave the native Windows version and Anaconda a
try, but there is at least one package that I could not run (and I lost
a lot of time compiling a bunch of libraries).

Example of package: pyproj (proj4), openpyxl with lxml (libxml2,
libxslt) and pillow (libjpeg, zlib, libtiff, ...), psycopg2 (libpq).


I believe these are all available at
http://www.lfd.uci.edu/~gohlke/pythonlibs/

--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


Re: Python Twitter Error

2016-02-10 Thread Joel Goldstick
On Wed, Feb 10, 2016 at 5:12 PM, Andra-Irina Vasile <
andra.irina.vas...@gmail.com> wrote:

> Thank you for your quick answer! These are my errors... I just followed
> the basic steps to pip install twitter and the other libraries... I just
> started to learn about python:
>
> AttributeError: module 'twitter' has no attribute 'Twitter'
>
> ImportError: no module named '_file_cache'
>
> Trimis de pe iPhone-ul meu
>
> Pe 10 feb. 2016, la 17:50, Joel Goldstick  a
> scris:
>
>
> if you are using Python 3.x you need to use pip3 to install the correct
> packages

>
> On Wed, Feb 10, 2016 at 11:31 AM, Andra-Irina Vasile <
> andra.irina.vas...@gmail.com> wrote:
>
>> Hello!
>>
>> Could you be so kind to help me with an issue? I have some problems with
>> python when I am trying to use the twitter api... I have Python 3.5.1 (32
>> bit). I tried to reinstall the module, but I am still receiving the same
>> errors... I attached in this email some screenshots with them.
>>
>> Thank you in advance!
>>
>> Best regards,
>> Andra-Irina Vasile
>> --
>> https://mail.python.org/mailman/listinfo/python-list
>>
>
> This list doesn't take attachments.  Please copy and paste the error into
> your question
>
>
> --
> Joel Goldstick
> http://joelgoldstick.com/stats/birthdays
>
>


-- 
Joel Goldstick
http://joelgoldstick.com/stats/birthdays
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Cygwin and Python3

2016-02-10 Thread Mike S via Python-list

On 2/10/2016 5:05 AM, Mark Lawrence wrote:

On 10/02/2016 03:39, Mike S via Python-list wrote:

On 2/9/2016 7:26 PM, Larry Hudson wrote:

On 02/09/2016 08:41 AM, Fillmore wrote:


Hi, I am having a hard time making my Cygwin run Python 3.5 (or Python
2.7 for that matter).
The command will hang and nothing happens.



Just curious...

Since Python runs natively in Windows, why are you trying to run it with
Cygwin?
I'm not implying that you shouldn't, just offhand I don't see a reason
for it.

  -=- Larry -=-



Have you seen this?
http://www.davidbaumgold.com/tutorials/set-up-python-windows/



I have now, but I'm perfectly happy with the free versions of Visual
Studio.


I was referring to the procedure in general, and what looked like an 
important step to include.


So it looks like the terminal can find ssh and git, but not python. 
That's understandable, since we didn't use Cygwin to install Python. To 
tell Cygwin how to find Python, run the following command:

$ echo "PATH=\$PATH:/cygdrive/c/Python32" >> .bash_profile

I don't see any references to VS on that page so I don't know what 
you're referring to.



--
https://mail.python.org/mailman/listinfo/python-list


Re: Copying void * string to

2016-02-10 Thread eryk sun
On Wed, Feb 10, 2016 at 6:07 AM, Martin Phillips
 wrote:
>
> Several functions in the C library return pointers to dynamically allocated
> wchar_t null-terminated strings. I need to copy the string to a Python
> variable and call an existing library function that will free the
> dynamically allocated memory.
>
> My test code for this is
>
> def Test(fno, item):
>     func = mylib.MyFunc
>     func.restype = ct.c_void_p
>     s = func(fno, item)
>     result = s
>     mylib.free(s)
>     return result
>
> The problem is with the line that sets the result variable. I need this to 
> make a copy of
> the dynamically allocated string, not the pointer to it.

There are several options, but I think the simplest is to use a
subclass of ctypes.c_wchar_p. Subclasses of simple types don't get
converted automatically. Copy the string using the "value" attribute,
then free() the pointer. But only copy and free the result if it isn't
NULL (i.e. a false boolean value).

import ctypes

mylib = ctypes.CDLL('path/to/mylib')

class MyLibError(Exception):
    pass

class my_wchar_p(ctypes.c_wchar_p):
    pass

mylib.MyFunc.restype = my_wchar_p

def test(fno, item):
    s = mylib.MyFunc(fno, item)
    if s:
        result = s.value
        mylib.free(s)
        return result
    raise MyLibError('mylib.MyFunc returned NULL')
-- 
https://mail.python.org/mailman/listinfo/python-list


Handling transactions in Python DBI module

2016-02-10 Thread Israel Brewster
I am working on implementing a Python DB API module, and am hoping I can get 
some help with figuring out the workflow of handling transactions. In my 
experience (primarily with psycopg2) the workflow goes like this:

- When you open a connection (or is it when you get a cursor? I *think* it is 
on opening a connection), a new transaction is started
- When you close a connection, an implicit ROLLBACK is performed
- After issuing SQL statements that modify the database, you call commit() on 
the CONNECTION object, not the cursor.

My primary confusion is that at least for the DB I am working on, to 
start/rollback/commit a transaction, you execute the appropriate SQL statement 
(the C library I'm using doesn't have any transactional commands, not that it 
should). However, to execute the statement, you need a cursor. So how is this 
*typically* handled? Does the connection object keep an internal cursor that it 
uses to manage transactions?

I'm assuming, since it is called on the connection, not the cursor, that any 
COMMIT/ROLLBACK commands called affect all cursors on that connection. Is that 
correct? Or is this DB specific?

Finally, how do other DB API modules, like psycopg2, ensure that ROLLBACK is 
called if the user never explicitly calls close()?

Thanks for any assistance that can be provided.
---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---




-- 
https://mail.python.org/mailman/listinfo/python-list


RegExp help

2016-02-10 Thread Larry Martell
Given this string:

>>> s = """|Type=Foo
... |Side=Left"""
>>> print s
|Type=Foo
|Side=Left

I can match with this:

>>> m = re.search(r'^\|Type=(.*)$\n^\|Side=(.*)$',s,re.MULTILINE)
>>> print m.group(0)
|Type=Foo
|Side=Left
>>> print m.group(1)
Foo
>>> print m.group(2)
Left

But when I try and sub it doesn't work:

>>> rn = re.sub(r'^\|Type=(.*)$^\|Side=(.*)$', r'|Side Type=\2 \1', s, re.MULTILINE)
>>> print rn
|Type=Foo
|Side=Left

What very stupid thing am I doing wrong?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RegExp help

2016-02-10 Thread MRAB

On 2016-02-11 02:48, Larry Martell wrote:

Given this string:


s = """|Type=Foo

... |Side=Left"""

print s

|Type=Foo
|Side=Left

I can match with this:


m = re.search(r'^\|Type=(.*)$\n^\|Side=(.*)$',s,re.MULTILINE)
print m.group(0)

|Type=Foo
|Side=Left

print m.group(1)

Foo

print m.group(2)

Left

But when I try and sub it doesn't work:


rn = re.sub(r'^\|Type=(.*)$^\|Side=(.*)$', r'|Side Type=\2 \1',s,re.MULTILINE)
print rn

|Type=Foo
|Side=Left

What very stupid thing am I doing wrong?


The 4th argument of re.sub is the count.
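In other words, re.MULTILINE was silently consumed as the count argument, not as a flag. A minimal demonstration of the pitfall and the fix:

```python
import re

s = '|Type=Foo\n|Side=Left'
pattern = r'^\|Type=(.*)$\n^\|Side=(.*)$'

# The positional 4th argument of re.sub is *count*, so the flag is
# swallowed, the anchors never match mid-string, and nothing is replaced.
wrong = re.sub(pattern, r'|Side Type=\2 \1', s, re.MULTILINE)

# Passing the flag by keyword makes the substitution work as intended.
right = re.sub(pattern, r'|Side Type=\2 \1', s, flags=re.MULTILINE)
```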

--
https://mail.python.org/mailman/listinfo/python-list


Re: RegExp help

2016-02-10 Thread Larry Martell
On Wed, Feb 10, 2016 at 10:00 PM, MRAB  wrote:
> On 2016-02-11 02:48, Larry Martell wrote:
>>
>> Given this string:
>>
> s = """|Type=Foo
>>
>> ... |Side=Left"""
>
> print s
>>
>> |Type=Foo
>> |Side=Left
>>
>> I can match with this:
>>
> m = re.search(r'^\|Type=(.*)$\n^\|Side=(.*)$',s,re.MULTILINE)
> print m.group(0)
>>
>> |Type=Foo
>> |Side=Left
>
> print m.group(1)
>>
>> Foo
>
> print m.group(2)
>>
>> Left
>>
>> But when I try and sub it doesn't work:
>>
> rn = re.sub(r'^\|Type=(.*)$^\|Side=(.*)$', r'|Side Type=\2
> \1',s,re.MULTILINE)
> print rn
>>
>> |Type=Foo
>> |Side=Left
>>
>> What very stupid thing am I doing wrong?
>>
> The 4th argument of re.sub is the count.


Thanks. Turned out that this site is running 2.6 and that doesn't
support the flags arg to sub. So I had to change it to:

re.sub(r'\|Type=(.*)\n\|Side=(.*)', r'\|Side Type=\2 \1',s)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: RegExp help

2016-02-10 Thread MRAB

On 2016-02-11 03:09, Larry Martell wrote:

On Wed, Feb 10, 2016 at 10:00 PM, MRAB  wrote:

On 2016-02-11 02:48, Larry Martell wrote:


Given this string:


s = """|Type=Foo


... |Side=Left"""


print s


|Type=Foo
|Side=Left

I can match with this:


m = re.search(r'^\|Type=(.*)$\n^\|Side=(.*)$',s,re.MULTILINE)
print m.group(0)


|Type=Foo
|Side=Left


print m.group(1)


Foo


print m.group(2)


Left

But when I try and sub it doesn't work:


rn = re.sub(r'^\|Type=(.*)$^\|Side=(.*)$', r'|Side Type=\2
\1',s,re.MULTILINE)
print rn


|Type=Foo
|Side=Left

What very stupid thing am I doing wrong?


The 4th argument of re.sub is the count.



Thanks. Turned out that this site is running 2.6 and that doesn't
support the flags arg to sub. So I had to change it to:

re.sub(r'\|Type=(.*)\n\|Side=(.*)', r'\|Side Type=\2 \1',s)


You could've used the inline flag "(?m)" in the pattern:

  rn = re.sub(r'(?m)^\|Type=(.*)$^\|Side=(.*)$', r'|Side Type=\2 \1',s)

--
https://mail.python.org/mailman/listinfo/python-list


Re: Handling transactions in Python DBI module

2016-02-10 Thread Frank Millman
"Israel Brewster"  wrote in message 
news:92d3c964-0323-46ee-b770-b89e7e7e6...@ravnalaska.net...


I am working on implementing a Python DB API module, and am hoping I can 
get some help with figuring out the workflow of handling transactions. In 
my experience (primarily with

psycopg2) the workflow goes like this:

- When you open a connection (or is it when you get a cursor? I *think* it 
is on opening a connection), a new transaction is started

- When you close a connection, an implicit ROLLBACK is performed
- After issuing SQL statements that modify the database, you call commit() 
on the CONNECTION object, not the cursor.


My primary confusion is that at least for the DB I am working on, to 
start/rollback/commit a transaction, you execute the appropriate SQL 
statement (the C library I'm using doesn't
have any transactional commands, not that it should). However, to execute 
the statement, you need a cursor. So how is this *typically* handled? Does 
the connection object keep an internal cursor that it uses to manage 
transactions?


I'm assuming, since it is called on the connection, not the cursor, that 
any COMMIT/ROLLBACK commands called affect all cursors on that connection. 
Is that correct? Or is this DB

specific?

Finally, how do other DB API modules, like psycopg2, ensure that ROLLBACK 
is called if the user never explicitly calls close()?


Rather than try to answer your questions point-by-point, I will describe the 
results of some investigations I carried out into this subject a while ago.


I currently support 3 databases, so I use 3 DB API modules - 
PostgreSQL/psycopg2, Sql Server/pyodbc, and sqlite3/sqlite3. The following 
applies specifically to psycopg2, but I applied the lessons learned to the 
other 2 as well, and have had no issues.


A connection has 2 possible states - 'in transaction', or 'not in 
transaction'. When you create the connection it starts off as 'not'.


When you call cur.execute(), it checks to see what state it is in. If the 
state is 'not', it silently issues a 'BEGIN TRANSACTION' before executing 
your statement. This applies for SELECT as well as other statements.


All subsequent statements form part of the transaction, until you issue 
either conn.commit() or conn.rollback(). This performs the required action, 
and resets the state to 'not'.


I learned the hard way that it is important to use conn.commit() and not 
cur.execute('commit'). Both succeed in committing, but the second does not 
reset the state, therefore the next statement does not trigger a 'BEGIN', 
with possible unfortunate side-effects.
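Frank's state machine can be sketched in a few lines; the raw_execute callable below stands in for the underlying C-library call, so this is an illustration rather than a real driver:

```python
class Connection:
    """Minimal sketch of the implicit-BEGIN bookkeeping described above."""

    def __init__(self, raw_execute):
        self.raw_execute = raw_execute   # stands in for the C library call
        self.in_transaction = False

    def execute(self, sql):
        # Silently open a transaction before the first statement.
        if not self.in_transaction:
            self.raw_execute('BEGIN TRANSACTION')
            self.in_transaction = True
        self.raw_execute(sql)

    def commit(self):
        self.raw_execute('COMMIT')
        # The crucial reset that cur.execute('commit') would skip.
        self.in_transaction = False

    def rollback(self):
        self.raw_execute('ROLLBACK')
        self.in_transaction = False

log = []
conn = Connection(log.append)
conn.execute('SELECT 1')   # triggers the implicit BEGIN
conn.commit()
conn.execute('SELECT 2')   # state was reset, so a fresh BEGIN is issued
```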


HTH

Frank Millman


--
https://mail.python.org/mailman/listinfo/python-list


Re: Handling transactions in Python DBI module

2016-02-10 Thread Chris Angelico
On Thu, Feb 11, 2016 at 4:06 PM, Frank Millman  wrote:
> A connection has 2 possible states - 'in transaction', or 'not in
> transaction'. When you create the connection it starts off as 'not'.
>
> When you call cur.execute(), it checks to see what state it is in. If the
> state is 'not', it silently issues a 'BEGIN TRANSACTION' before executing
> your statement. This applies for SELECT as well as other statements.
>
> All subsequent statements form part of the transaction, until you issue
> either conn.commit() or conn.rollback(). This performs the required action,
> and resets the state to 'not'.
>
> I learned the hard way that it is important to use conn.commit() and not
> cur.execute('commit'). Both succeed in committing, but the second does not
> reset the state, therefore the next statement does not trigger a 'BEGIN',
> with possible unfortunate side-effects.

When I advise my students on basic databasing concepts, I recommend
this structure:

conn = psycopg2.connect(...)

with conn, conn.cursor() as cur:
cur.execute(...)

The transaction block should always start at the 'with' block and end
when it exits. As long as you never nest them (including calling other
database-using functions from inside that block), it's easy to reason
about the database units of work - they always correspond perfectly to
the code blocks.

Personally, I'd much rather the structure were "with
conn.transaction() as cur:", because I've never been able to
adequately explain what a cursor is/does. It's also a bit weird that
"with conn:" doesn't close the connection at the end (just closes the
transaction within that connection). But I guess we don't need a
"Python DB API 3.0".

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Handling transactions in Python DBI module

2016-02-10 Thread Frank Millman
"Chris Angelico"  wrote in message 
news:captjjmphjvtckub6qr-vp_1epewxbgqxmfkepmohqp3papg...@mail.gmail.com...


When I advise my students on basic databasing concepts, I recommend
this structure:

conn = psycopg2.connect(...)

with conn, conn.cursor() as cur:
cur.execute(...)



Does this automatically issue a 'conn.commit()' on exit?

I have found that this is important, for the following reason.

If you issue a bare SELECT to PostgreSQL, it executes it without creating 
any locks. However, if it is inside a transaction, it does create a lock (I 
forget exactly which one).


Because psycopg2 silently executes 'BEGIN TRANSACTION', your SELECTs always 
happen inside a transaction whether you specify it or not. If you do not 
issue a conn.commit(), the locks do not get cleared.


Frank


--
https://mail.python.org/mailman/listinfo/python-list


Re: Handling transactions in Python DBI module

2016-02-10 Thread Chris Angelico
On Thu, Feb 11, 2016 at 4:28 PM, Frank Millman  wrote:
> "Chris Angelico"  wrote in message
> news:captjjmphjvtckub6qr-vp_1epewxbgqxmfkepmohqp3papg...@mail.gmail.com...
>>
>>
>> When I advise my students on basic databasing concepts, I recommend
>> this structure:
>>
>> conn = psycopg2.connect(...)
>>
>> with conn, conn.cursor() as cur:
>> cur.execute(...)
>>
>
> Does this automatically issue a 'conn.commit()' on exit?

If there was no exception raised, yes. If an exception starts to
bubble across that boundary, the transaction's rolled back instead
(and then the exception continues).

> I have found that this is important, for the following reason.
>
> If you issue a bare SELECT to PostgreSQL, it executes it without creating
> any locks. However, if it is inside a transaction, it does create a lock (I
> forget exactly which one).

That's not strictly true. A SELECT query always acquires some locks,
albeit weak ones; but if you're not currently inside a transaction,
you get an implicit transaction that gets immediately ended. So by the
time you get to the next line of Python code, the locks have been
released.

> Because psycopg2 silently executes 'BEGIN TRANSACTION', your SELECTs always
> happen inside a transaction whether you specify it or not. If you do not
> issue a conn.commit(), the locks do not get cleared.

Correct. And that's how you should _always_ work. When I started
databasing seriously, it was with DB2 on OS/2, and if you didn't
explicitly BEGIN TRANSACTION, one would be begun - and not ended. So
the model was: Do your stuff, then commit/rollback. Then do more
stuff. I never used BEGIN TRANSACTION unless I needed to add
parameters to it (picking an isolation level or something).

With Postgres, it's pretty similar, only you execute BEGIN TRANSACTION
more explicitly. But that can be buried in the interface library,
giving you a nice tidy system of "here, this is how we do some work...
and now we're done". Which I strongly recommend.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio and blocking - an update

2016-02-10 Thread Frank Millman

"Frank Millman"  wrote in message news:n9c4p3$gmp$1...@ger.gmane.org...


Some of you may have been following my attempts to modify my asyncio app 
so that it does not block when accessing the database. Here is an update.




Here is an update to my update ...

I came up with what felt like a good idea. Run the database handler in a 
separate thread, pass requests to it using a queue.Queue, and get it to 
pass results back using an asyncio.Queue.


It works, but I had a vague sense that performance was a bit sluggish, so 
I tried the 'recommended' approach of using asyncio.run_in_executor() to 
execute database calls in a separate thread. It felt a bit faster.


Now I have written a proper timing test, and the recommended approach is 
much faster. I am not 100% sure of the reason, but I think the problem is 
that, with my method, when the database tries to 'put' a row on the return 
queue, it has to use 'loop.call_soon_threadsafe()', and this seems to 
create a bottleneck.
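For reference, a stripped-down runnable sketch of that first design
(run_query and its fake rows are placeholders, not the actual database
handler): the worker thread pushes every row back across threads with
loop.call_soon_threadsafe, which is the call suspected of being the
bottleneck, since it fires once per row.

```python
import asyncio
import queue
import threading

request_queue = queue.Queue()

def run_query(sql):
    # Placeholder for a blocking cursor iteration.
    return [(i,) for i in range(3)]

def db_worker():
    # Runs in its own thread; blocks on the request queue.
    while True:
        item = request_queue.get()
        if item is None:          # shutdown sentinel
            break
        loop, sql, return_queue = item
        for row in run_query(sql):
            # One cross-thread call per row - the suspected bottleneck.
            loop.call_soon_threadsafe(return_queue.put_nowait, row)
        loop.call_soon_threadsafe(return_queue.put_nowait, None)  # end marker

async def fetch(sql):
    return_queue = asyncio.Queue()
    request_queue.put((asyncio.get_running_loop(), sql, return_queue))
    rows = []
    while (row := await return_queue.get()) is not None:
        rows.append(row)
    return rows

threading.Thread(target=db_worker, daemon=True).start()
rows = asyncio.run(fetch('SELECT ...'))
request_queue.put(None)
print(rows)  # [(0,), (1,), (2,)]
```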




I have come up with a plan that seems to provide a solution.

Instead of 'putting' one row at a time, let the database handler build up a 
block of rows, and then 'put' the block.


I tried a block of 10, and it ran a lot faster. I increased it to 50, and it 
ran faster again. I tried 100 and there was not much improvement, so 50 
seems like an optimum number. The speed is now only slightly slower than 
run_in_executor(), and it is more truly asynchronous.


Frank


--
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio and blocking - an update

2016-02-10 Thread Chris Angelico
On Thu, Feb 11, 2016 at 4:45 PM, Frank Millman  wrote:
> I have come up with a plan that seems to provide a solution.
>
> Instead of 'putting' one row at a time, let the database handler build up a
> block of rows, and then 'put' the block.
>
> I tried a block of 10, and it ran a lot faster. I increased it to 50, and it
> ran faster again. I tried 100 and there was not much improvement, so 50
> seems like an optimum number. The speed is now only slightly slower than
> run_in_executor(), and it is more truly asynchronous.

Something worth checking would be real-world database performance
metrics: what's time-to-first-row versus time-to-subsequent-rows? When
you submit a query, the server first has to parse it and check for
errors, then do all its optimization and stuff, and figure out an
access plan. Then it goes and fetches stuff. If your query is a simple
"select * from tablename" on a huge table, then it's entirely possible
that you save a lot of time by fetching subsequent rows
asynchronously; but if there's an ORDER BY that can't be resolved from
an index (maybe involving a table join or a non-optimizable function
call), the database might have to read everything from the disk before
it can return a single row, so the time from first row to last row is
a tiny fraction of the time to first row. What are your actual
real-world queries like?

Most likely, the database is reading rows in pages, and you'll have no
way of predicting how many usable result rows are on any page. I
wonder, is there any way you can actually operate this the other way
around? Presumably the database is connected to you via a socket of
some sort (TCP if you're going over a network, maybe a Unix socket for
local connections), which you could select() on same as any other; if
you can react to the arrival of more rows, that might be the easiest
solution. That would probably require a dedicated async database
library, though.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio and blocking - an update

2016-02-10 Thread Frank Millman
"Chris Angelico"  wrote in message 
news:CAPTjJmrVCkKAEevc9TW8FYYTnZgRUMPHectz+bD=dqrphxy...@mail.gmail.com...


Something worth checking would be real-world database performance metrics


[snip lots of valid questions]

My approach is guided by something I read a long time ago, and I don't know 
how true it is, but it feels plausible. This is a rough paraphrase.


Modern databases are highly optimised to execute a query and return the 
result as quickly as possible. A properly written database adaptor will work 
in conjunction with the database to optimise the retrieval of the result. 
Therefore the quickest way to get the result is to let the adaptor iterate 
over the cursor and let it figure out how best to achieve it.


Obviously you still have to tune your query to make sure it is 
efficient, using indexes etc. But there is no point in trying to 
second-guess the database adaptor in figuring out the quickest way to get 
the result.


My theory rests on an assumption which may be faulty. I have assumed that, 
in order to execute a query using run_in_executor(), the way to get the 
result is to use cur.fetchall(). Maybe there are alternatives. However, 
based on that assumption, my theory contrasts the following two approaches -


1. In a separate thread, perform the following -

   cur.execute('SELECT ...')
   rows = cur.fetchall()
   return rows

   The awaiting function will perform the following -

   future = loop.run_in_executor(None, run_query, 'SELECT ...')  # run_query wraps the steps above
   await future
   rows = future.result()
   for row in rows:
   process row

   The SELECT will not block, because it is run in a separate thread. But 
it will return all the rows in a single list, and the calling function will 
block while it processes the rows, unless it takes the extra step of turning 
the list into an Asynchronous Iterator.
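As a concrete, runnable illustration of approach 1, here is a sketch
using the stdlib sqlite3 module in place of a real database adaptor (the
table, query, and run_query helper are invented for the demo):

```python
import asyncio
import sqlite3

def run_query(sql):
    # Blocking DB-API work, executed in the default ThreadPoolExecutor.
    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE t (n INTEGER)')
    conn.executemany('INSERT INTO t VALUES (?)', [(i,) for i in range(5)])
    rows = conn.execute(sql).fetchall()
    conn.close()
    return rows

async def main():
    loop = asyncio.get_running_loop()
    # run_in_executor takes (executor, callable, *args);
    # None selects the loop's default thread pool.
    rows = await loop.run_in_executor(
        None, run_query, 'SELECT n FROM t ORDER BY n')
    return [n * 10 for (n,) in rows]   # "process row"

result = asyncio.run(main())
print(result)  # [0, 10, 20, 30, 40]
```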


2. In a separate thread, perform the following -

   cur.execute('SELECT ...')
   for row in cur:
   build up block of 50 rows
   loop.call_soon_threadsafe(return_queue.put_nowait, block)

   The awaiting function will call the following -

   rows = AsyncCursor('SELECT ...')
   async for row in rows:
   process row

   AsyncCursor looks like this (abridged) -
   def __init__(self, sql, params):
   loop = asyncio.get_event_loop()
   self.return_queue = asyncio.Queue()
   request_queue.put((loop, sql, params, self.return_queue))
   self.rows = []
   async def __aiter__(self):
   return self
   async def __anext__(self):
   if self.rows:
   return self.rows.pop(0)
   self.rows = await self.return_queue.get()
   return self.rows.pop(0)
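Filling in the elided pieces, a self-contained version of the
block-based design might look like this (run_query and the block size of
2 are demo stand-ins for the real database call and the ~50-row blocks;
__aiter__ is written as a plain method, as Python 3.5.2+ requires):

```python
import asyncio
import queue
import threading

BLOCK_SIZE = 2                    # demo value; ~50 was the sweet spot
request_queue = queue.Queue()

def run_query(sql, params):
    return iter([(i,) for i in range(5)])   # fake cursor

def db_worker():
    # Database thread: batches rows into blocks before crossing threads.
    while True:
        item = request_queue.get()
        if item is None:          # shutdown sentinel
            break
        loop, sql, params, return_queue = item
        block = []
        for row in run_query(sql, params):
            block.append(row)
            if len(block) == BLOCK_SIZE:
                loop.call_soon_threadsafe(return_queue.put_nowait, block)
                block = []
        block.append(None)        # end-of-rows sentinel
        loop.call_soon_threadsafe(return_queue.put_nowait, block)

class AsyncCursor:
    def __init__(self, sql, params=None):
        loop = asyncio.get_running_loop()
        self.return_queue = asyncio.Queue()
        request_queue.put((loop, sql, params, self.return_queue))
        self.rows = []

    def __aiter__(self):
        return self

    async def __anext__(self):
        if not self.rows:
            self.rows = await self.return_queue.get()
        row = self.rows.pop(0)
        if row is None:
            raise StopAsyncIteration
        return row

async def main():
    return [row async for row in AsyncCursor('SELECT ...')]

threading.Thread(target=db_worker, daemon=True).start()
rows = asyncio.run(main())
request_queue.put(None)
print(rows)  # [(0,), (1,), (2,), (3,), (4,)]
```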

Hope this makes sense.

Frank


--
https://mail.python.org/mailman/listinfo/python-list


Re: asyncio and blocking - an update

2016-02-10 Thread Chris Angelico
On Thu, Feb 11, 2016 at 5:36 PM, Frank Millman  wrote:
> "Chris Angelico"  wrote in message
> news:CAPTjJmrVCkKAEevc9TW8FYYTnZgRUMPHectz+bD=dqrphxy...@mail.gmail.com...
>>
>>
>> Something worth checking would be real-world database performance metrics
>
>
> [snip lots of valid questions]
>
> My approach is guided by something I read a long time ago, and I don't know
> how true it is, but it feels plausible. This is a rough paraphrase.
>
> Modern databases are highly optimised to execute a query and return the
> result as quickly as possible. A properly written database adaptor will work
> in conjunction with the database to optimise the retrieval of the result.
> Therefore the quickest way to get the result is to let the adaptor iterate
> over the cursor and let it figure out how best to achieve it.
>
> Obviously you still have to tune your query to make sure it is
> efficient, using indexes etc. But there is no point in trying to
> second-guess the database adaptor in figuring out the quickest way to get
> the result.

As far as that goes, it's sound. (It's pretty obvious that collecting
all the rows into a list is going to take (at least) as long to give
the first row as iteration would take to give the last row, simply
because you could always implement one on top of the other, and
iteration has flexibility that fetchall doesn't.) The only question
is, what price are you paying for that?

> 1.
>future = loop.run_in_executor(None, run_query, 'SELECT ...')
>await future
>rows = future.result()
>for row in rows:
>process row
>
>The SELECT will not block, because it is run in a separate thread. But it
> will return all the rows in a single list, and the calling function will
> block while it processes the rows, unless it takes the extra step of turning
> the list into an Asynchronous Iterator.

This is beautifully simple.

> 2.
>rows = AsyncCursor('SELECT ...')
>async for row in rows:
>process row

Also beautifully simple. But this one comes with much more complexity
cost in your second thread and your AsyncCursor.

So really, the question is: Is this complexity buying you enough
performance that it's worthwhile? My questions about real-world stats
are based on the flip side of your assumption - to quote it again:

> Modern databases are highly optimised to execute a query and return the
> result as quickly as possible. A properly written database adaptor will work
> in conjunction with the database to optimise the retrieval of the result.
> Therefore the quickest way to get the result is to let the adaptor iterate
> over the cursor and let it figure out how best to achieve it.

A properly-built database will optimize for two things: Time to first
row, and time to query completion. (And other things, like memory
usage, which don't directly affect this discussion.) In some cases,
they'll be very different figures, and then you'll get a lot of
benefit from iteration. In other cases, they'll be virtually the same
- imagine a query that involves a number of tables and lots of
aggregate functions, governed by a big GROUP BY that gathers them all
up into, say, three rows, sorted by one of the aggregate functions (eg
"show me these categories, sorted by the total value of sales per
category"). How long does it take for the database to get the first
row? It has to execute the entire query. How long to get the other
two? Just return 'em from memory. So there's basically no benefit to
this query of iteration above fetchall. Most queries will be 
somewhere in between, hence the question about real-world
significance. If it costs you little to iterate, great! But if you're
paying a high price, it's something to consider.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Cygwin and Python3

2016-02-10 Thread blindanagram
On 10/02/2016 23:05, Mike S wrote:
> On 2/10/2016 5:05 AM, Mark Lawrence wrote:

[snip]
>>> Have you seen this?
>>> http://www.davidbaumgold.com/tutorials/set-up-python-windows/
>>>
>>
>> I have now, but I'm perfectly happy with the free versions of Visual
>> Studio.

[snip]
> I don't see any references to VS on that page so I don't know what
> you're referring to.

I suspect that Mark is reacting indirectly to the misleading implication
on that page that it is necessary to install Cygwin if you want to
develop Python code on Windows.

-- 
https://mail.python.org/mailman/listinfo/python-list