why zip64_limit defined as 1<<31 -1?

2015-01-28 Thread jesse
Shouldn't it be 1<<32 - 1 (4 GB)?

The normal zip archive format should be able to support files up to 4 GB.

thanks
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: why zip64_limit defined as 1<<31 -1?

2015-01-29 Thread jesse
The official zip format spec clearly states that a normal zip file can be up
to 4 GB in size, not 2 GB.  I just can't believe Python has such an obvious
bug.

People have suggested monkey-patching the ZIP64_LIMIT value to get past 2 GB,
but I am not sure what the ramifications would be.
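
A minimal sketch of what that patch would look like, next to the supported
alternative of just enabling ZIP64 extensions (the file names here are made
up):

import zipfile

# unsupported: raise the threshold zipfile checks from 2 GB to 4 GB
zipfile.ZIP64_LIMIT = (1 << 32) - 1

# supported: let zipfile emit ZIP64 extensions for large files instead
with zipfile.ZipFile('big.zip', 'w', allowZip64=True) as zf:
    zf.write('large_input.bin')
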
On Jan 28, 2015 1:37 PM, "Chris Angelico"  wrote:

> On Thu, Jan 29, 2015 at 5:53 AM, jesse  wrote:
> > should not it be 1<<32 -1(4g)?
> >
> > normal zip archive format should be able to support 4g file.
> >
> > thanks
>
> 1<<31-1 is the limit for a signed 32-bit integer. You'd have to look
> into the details of the zip file format to see whether that's the
> official limit or not; it might simply be that some (un)archivers have
> problems with >2GB files, even if the official stance is that it's
> unsigned.
>
> ChrisA
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: why zip64_limit defined as 1<<31 -1?

2015-01-29 Thread jesse
On Jan 29, 2015 9:27 AM, "Ian Kelly"  wrote:
>
> On Wed, Jan 28, 2015 at 2:36 PM, Chris Angelico  wrote:
> > On Thu, Jan 29, 2015 at 5:53 AM, jesse  wrote:
> >> should not it be 1<<32 -1(4g)?
> >>
> >> normal zip archive format should be able to support 4g file.
> >>
> >> thanks
> >
> > 1<<31-1 is the limit for a signed 32-bit integer. You'd have to look
> > into the details of the zip file format to see whether that's the
> > official limit or not; it might simply be that some (un)archivers have
> > problems with >2GB files, even if the official stance is that it's
> > unsigned.
>
> The bug in which zip64 support was added indicates that the value was
> indeed chosen as the limit of a signed 32-bit integer:
>
> http://bugs.python.org/issue1446489

OK, then why a signed 32-bit integer instead of an unsigned 32-bit integer? Is
there a technical limitation behind it? The chosen 2 GB boundary does not
conform to the zip standard specification.
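
For concreteness, the two candidate limits (a quick sketch):

SIGNED_MAX = (1 << 31) - 1    # 2147483647, ~2 GB: what zipfile uses
UNSIGNED_MAX = (1 << 32) - 1  # 4294967295, ~4 GB: what the spec's 32-bit fields hold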

> --
> https://mail.python.org/mailman/listinfo/python-list
-- 
https://mail.python.org/mailman/listinfo/python-list


any visualization web framework ?

2015-04-27 Thread jesse
show task execution;
data visualization;
easy to set up;

thanks
-- 
https://mail.python.org/mailman/listinfo/python-list


Popen and wget, problems

2007-05-12 Thread Jesse
Hi all, I have a problem using wget and Popen. I hope someone can help.


-- Problem --
I want to use the command:
wget -nv -O "dir/cpan.txt" "http://search.cpan.org"
and capture all its stdout+stderr.
(Note that option -O requires 'dir' to be existing before wget is executed)

Popen doesn't work, while os.system and shell do. Popen will give the error:
dir/cpan.txt: No such file or directory

While os.system and shell will give the correct result:
06:52:40 URL:http://search.cpan.org/ [3657/3657] -> "dir1/cpan.txt" [1]



-- Background info about wget --
- Option -nv: --no-verbose; turns off verboseness, without being quiet.
- Option -O: --output-document=FILE; writes documents to FILE.

Note that wget requires any directories in the file path of option -O to
exist before the wget command is executed.


-- Python Code using Popen with cmd arg list --
# imports
import os
from subprocess import Popen, PIPE

# vars and create dir
cmd_set = ['wget', '-nv', '-O dir/cpan.txt', 'http://search.span.org']
cmd = ' '.join(cmd_set)
print "cmd: " + cmd
try:
    os.makedirs('dir')
except:
    print 'dir already exists'


# execute using Popen (does NOT work)
proc = Popen(cmd_set, stdout=PIPE, stderr=PIPE)
return_code = proc.wait()
if return_code == 0:
    print "Success:\n%s" % (proc.stdout.read() + proc.stderr.read())
else:
    print "Failure %s:\n%s" % (return_code, proc.stderr.read() + proc.stdout.read())


# execute using os.system (does work)
os.system(cmd)


-- Python code output of Popen --
Failure 1:
 dir/cpan.txt: No such file or directory


-- Question --
Why is Popen unable to correctly execute wget, while os.system can?



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Popen and wget, problems

2007-05-13 Thread Jesse
Thanks, Rob!

Your solution works perfectly!
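
For the archives, a minimal sketch of both working variants from Rob's
reply below (same URL and paths as in my original code):

from subprocess import Popen, PIPE

# variant 1: one command string with shell=True (the os.system counterpart)
proc = Popen("wget -nv -O dir/cpan.txt http://search.span.org",
             shell=True, stdout=PIPE, stderr=PIPE)

# variant 2: an argument list where '-O' and the path are separate items,
# so wget receives them as two argv entries
cmd_set = ['wget', '-nv', '-O', 'dir/cpan.txt', 'http://search.span.org']
proc = Popen(cmd_set, stdout=PIPE, stderr=PIPE)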


"Rob Wolfe" <[EMAIL PROTECTED]> wrote in message 
news:[EMAIL PROTECTED]
> "Jesse" <[EMAIL PROTECTED]> writes:
>
>> Hi all, I have a problem using wget and Popen. I hope someone can help.
>>
>>
>> -- Problem --
>> I want to use the command:
>> wget -nv -O "dir/cpan.txt" "http://search.cpan.org"
>> and capture all it's stdout+stderr.
>> (Note that option -O requires 'dir' to be existing before wget is 
>> executed)
>>
>> Popen doesn't work, while os.system and shell do. Popen will give the 
>> error:
>> dir/cpan.txt: No such file or directory
>>
>> While os.system and shell will give the correct result:
>> 06:52:40 URL:http://search.cpan.org/ [3657/3657] -> "dir1/cpan.txt" [1]
>
> [...]
>
>> -- Python Code using Popen with cmd arg list --
>> # imports
>> import os
>> from subprocess import Popen, PIPE
>>
>> # vars and create dir
>> cmd_set = ['wget', '-nv', '-O dir/cpan.txt', 'http://search.span.org']
>> cmd = ' '.join(cmd_set)
>> print "cmd: " + cmd
>> try:
>> os.makedirs('dir')
>> except:
>> print 'dir already exists'
>>
>>
>> # execute using Popen (does NOT work)
>> proc = Popen(cmd_set, stdout=PIPE, stderr=PIPE)
>> return_code = proc.wait()
>> if return_code == 0:
>> print "Success:\n%s" % (proc.stdout.read() + proc.stderr.read())
>> else:
>> print "Failure %s:\n%s" % (return_code, proc.stderr.read() +
>> proc.stdout.read())
>>
>>
>> # execute using os.system (does work)
>> os.system(cmd)
>>
>>
>> -- Python code output of Popen --
>> Failure 1:
>> dir/cpan.txt: No such file or directory
>>
>>
>> -- Question --
>> Why is Popen unable to correctly execute the wget, while os.system can?
>
> I don't know exactly why in this case Popen doesn't work,
> but the counterpart of os.system is Popen with option shell=True
> and the first parameter should be a string instead of list.
> That seems to work:
> proc = Popen("wget -nv -O dir/cpan.txt http://search.span.org",
>              shell=True, stdout=PIPE, stderr=PIPE)
>
> and this variant seems to work too:
> cmd_set = ['wget', '-nv', '-O', 'dir/cpan.txt', 'http://search.span.org']
>
> -- 
> HTH,
> Rob 

-- 
http://mail.python.org/mailman/listinfo/python-list


SendKeys-0.3.win32-py2.1.exe

2008-10-25 Thread Jesse
I can't seem to install this using Python 2.6. Are there any known errors that
won't let me select the Python installation to use? It just opens a blank
dialog and won't let me continue. Do I need to downgrade Python?

thanks in advance
--
http://mail.python.org/mailman/listinfo/python-list


C extension using GSL

2009-03-26 Thread jesse
I give up. I cannot find my memory leak! I'm hoping that someone out
there has come across something similar. Let me lay out the basic
setup:

I'm performing multiple simulations on a model. Each iteration
involves solving a system of differential equations. For this I use
the GNU Scientific Library (GSL) -- specifically the rk4imp ODE
solver. After the ODE is solved the array is returned to python and is
analyzed. As you may have guessed, my problem is that over the course
of the iterations the memory keeps climbing until python crashes.

Note: *The extension does not keep running.* It returns an object (a
list) and is done until the next iteration. AFAIK, any memory
allocated during execution of the extension should be released.
Question: Since the extension was run from within Python, is memory
allocated within an extension part of Python's heap?  Would this have
an adverse or unpredictable effect on any memory allocated during the
running of the extension? One hypothesis I have, since most other
possibilities have been eliminated, is that GSL's creation of its own
data structures (e.g., gsl_vector) is messing up Python's control of
the heap. Is this possible? If so, how would one be able to fix this
issue?

It may help some nice folks out there who are good enough to look at
this post if I layout the flow, broken up by where stuff happens
(i.e., in the Python code or C code):

1) Python: Set up the simulation and basic data structures (numpy
arrays with initial conditions, coupling matrices for the ODE's, dicts
with parameters, etc).
2) Python: Pass these to the C extension
3) C: Python objects passed in are converted to C arrays, floats,
etc.
4) C: A PyList object, L,  is created (new reference!). This will hold
the solution vector for the ODE
5) C: Initialize GSL ODE solver and stepper functions. Solve the ODE,
at each step use PyList_Append(L, current_state) to append the current
state to L.
6) C: After the ODE solver finishes, free GSL objects, free coupling
matrices, free last state vector.
7) C: Return L to Python with return Py_BuildValue("N", L).
8) Python: Convert returned list L to array A, delete L, work with A.
8.1) Python: Step 8) includes plotting. (I have a small suspicion that
matplotlib holds onto lots of data, but I've used clf() and close(fig)
on all plots, so I think I'm safe here. )
8.2) Python: save analysis results from A, save A. (At this point
there should be no more use of A. In fact, at point 8) in the next
iteration A is replaced by a new array.)
9) Python: Change any parameters or initial conditions and goto 1).
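
One pattern worth double-checking in step 5 (a hedged sketch; building the
state tuple this way is hypothetical): PyList_Append does not steal a
reference, so if current_state is a new reference it must be released after
appending, or it leaks once per step:

PyObject *current_state = Py_BuildValue("(dd)", y[0], y[1]);  /* new ref */
if (current_state == NULL)
    return NULL;                       /* propagate the error */
if (PyList_Append(L, current_state) < 0) {
    Py_DECREF(current_state);
    return NULL;
}
Py_DECREF(current_state);              /* the list holds its own reference */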

thanks for any help,
-Jesse


--
http://mail.python.org/mailman/listinfo/python-list


Re: C extension using GSL

2009-03-27 Thread jesse
On Mar 27, 9:30 am, Nick Craig-Wood  wrote:
> jesse  wrote:
> >  I give up. I cannot find my memory leak! I'm hoping that someone out
> >  there has come across something similar. Let me lay out the basic
> >  setup:
>
> >  I'm performing multiple simulations on a model. Each iteration
> >  involves solving a system of differential equations. For this I use
> >  the GNU Scientific Library (GSL) -- specifically the rk4imp ODE
> >  solver. After the ODE is solved the array is returned to python and is
> >  analyzed. As you may have guessed, my problem is that over the course
> >  of the iterations the memory keeps climbing until python crashes.
>
> >  Note: *The extension does not keep running.* It returns object (a
> >  list) and is done until the next iteration. AFAIK, any memory
> >  allocated during execution of the extension should be released.
> >  Question: Since the extension was run from within python is memory
> >  allocated within an extension part of python's heap?
>
> No, on the normal C heap
>
> >  Would this have
> >  an adverse or unpredictable affect on any memory allocated during the
> >  running of the extension?
>
> If the library passes you data and ownership of a heap block, you'll
> need to free it.
>
> > One hypothesis I have, since most other
> >  possibilities have been eliminated, is that GSL's creation of it's own
> >  data structures (eg., gsl_vector) is messing up Python's control of
> >  the heap. Is this possible?
>
> Only if GSL is buggy
>
> > If so, how would one be able to fix this
> >  issue?
>
> Valgrind
>
>
>
> >  It may help some nice folks out there who are good enough to look at
> >  this post if I layout the flow, broken up by where stuff happens
> >  (i.e., in the Python code or C code):
>
> >  1) Python: Set up the simulation and basic data structures (numpy
> >  arrays with initial conditions, coupling matrices for the ODE's, dicts
> >  with parameters, etc).
> >  2) Python: Pass these to the C extension
> >  3) C: Python objects passed in are converted to C arrays, floats,
> >  etc.
> >  4) C: A PyList object, L,  is created (new reference!). This will hold
> >  the solution vector for the ODE
> >  5) C: Initialize GSL ODE solver and stepper functions. Solve the ODE,
> >  at each step use PyList_Append(L, current_state) to append the current
> >  state to L.
> >  6) C: After the ODE solver finishes, free GSL objects, free coupling
> >  matrices, free last state vector.
> >  7) C: Return L to Python with return Py_BuildValue("N", L).
> >  8) Python: Convert returned list L to array A, delete L, work with A.
> >  8.1) Python: Step 8) includes plotting. (I have a small suspicion that
> >  matplotlib holds onto lots of data, but I've used clf() and close(fig)
> >  on all plots, so I think I'm safe here. )
> >  8.2) Python: save analysis results from A, save A. (At this point
> >  there should be no more use of A. In fact, at point 8) in the next
> >  iteration A is replaced by a new array.)
> >  9) Python: Change any parameters or initial conditions and goto 1).
>
> At every point a memory allocation is made check who owns the memory
> and that it is freed through all possible error paths.
>
> Valgrind will help you find the memory leaks.  It works well once
> you've jumped the flaming hoops of fire that setting it up is!
>
> Another thing you can try is to run your process until it leaks loads,
> then make it dump core.  Examine the core dump with a hex editor and
> see what it is full of!  This technique works surprisingly often.
>
> --
> Nick Craig-Wood  -- http://www.craig-wood.com/nick

Thanks for the suggestions.
-J
--
http://mail.python.org/mailman/listinfo/python-list


Why does this hang sometimes?

2012-04-04 Thread Jesse Jaggars
I am just playing around with threading and subprocess and found that
the following program will hang up and never terminate every now and
again.

import threading
import subprocess
import time

def targ():
    p = subprocess.Popen(["/bin/sleep", "2"])
    while p.poll() is None:
        time.sleep(1)

t1 = threading.Thread(target=targ)
t2 = threading.Thread(target=targ)
t1.start()
t2.start()

t1.join()
t2.join()


I found this bug, and while it sounds similar it seems that it was
closed during python 2.5 (I'm using 2.7.2):
http://bugs.python.org/issue1404925

Thanks!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why does this hang sometimes?

2012-04-12 Thread Jesse Jaggars
Possibly. I wonder what the differences are?

On Sat, Apr 7, 2012 at 5:54 PM, Jason Friedman  wrote:
>> I am just playing around with threading and subprocess and found that
>> the following program will hang up and never terminate every now and
>> again.
>>
>> import threading
>> import subprocess
>> import time
>>
>> def targ():
>>     p = subprocess.Popen(["/bin/sleep", "2"])
>>     while p.poll() is None:
>>         time.sleep(1)
>>
>> t1 = threading.Thread(target=targ)
>> t2 = threading.Thread(target=targ)
>> t1.start()
>> t2.start()
>>
>> t1.join()
>> t2.join()
>>
>>
>> I found this bug, and while it sounds similar it seems that it was
>> closed during python 2.5 (I'm using 2.7.2):
>> http://bugs.python.org/issue1404925
>
> I can confirm hanging on my installation of 2.7.2.  I also ran this
> code 100 times on 3.2.2 without experiencing a hang.  Is version 3.x a
> possibility for you?
-- 
http://mail.python.org/mailman/listinfo/python-list


ctypes: point to buffer in structure

2011-07-09 Thread Jesse R
Hey, I've been trying to convert this to run through ctypes and I'm
having a hard time.

typedef struct _SYSTEM_PROCESS_ID_INFORMATION
{
HANDLE ProcessId;
UNICODE_STRING ImageName;
} SYSTEM_PROCESS_IMAGE_NAME_INFORMATION,
*PSYSTEM_PROCESS_IMAGE_NAME_INFORMATION;

to

class SYSTEM_PROCESS_ID_INFORMATION(ctypes.Structure):
    _fields_ = [('pid', ctypes.c_ulong),
                ('imageName', ctypes.c_wchar_p)]

processNameBuffer = ctypes.create_unicode_buffer(0x100)
pidInfo = SYSTEM_PROCESS_ID_INFORMATION(pid,
                                        ctypes.byref(processNameBuffer))
status = ntdll.NtQuerySystemInformation(0x58, ctypes.byref(pidInfo),
                                        ctypes.sizeof(pidInfo), None)

does anyone know how to get this working?
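
One direction that might get this working (a sketch, untested): mirror the
native UNICODE_STRING layout instead of collapsing it to c_wchar_p, and
point its Buffer at the preallocated buffer with MaximumLength filled in:

import ctypes
from ctypes import wintypes

class UNICODE_STRING(ctypes.Structure):
    _fields_ = [('Length', ctypes.c_ushort),
                ('MaximumLength', ctypes.c_ushort),
                ('Buffer', ctypes.c_wchar_p)]

class SYSTEM_PROCESS_ID_INFORMATION(ctypes.Structure):
    _fields_ = [('ProcessId', wintypes.HANDLE),
                ('ImageName', UNICODE_STRING)]

buf = ctypes.create_unicode_buffer(0x100)
pidInfo = SYSTEM_PROCESS_ID_INFORMATION()
pidInfo.ProcessId = pid
pidInfo.ImageName.MaximumLength = ctypes.sizeof(buf)
pidInfo.ImageName.Buffer = ctypes.cast(buf, ctypes.c_wchar_p)
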
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Web Scrapping : Within href readonly those value that have href in it

2017-01-16 Thread Jesse Alama
To complement what Peter wrote: I'd approach this problem using
XPath. XPath is a query language for XML/HTML documents; it's a great
tool to have in your web scraping toolbox (among other tasks). With
Python's excellent lxml library you can do some XPath processing. Here's
how I might tackle this problem:

== [ scrape.py ] ==

from lxml import etree

# ...somehow get HTML/XML into the variable xml

root = etree.HTML(xml)

hrefs = root.xpath("//a[@href and starts-with(@href, 'http://')]/@href")

# magic =>  ^^

print(hrefs) # if you want to see what this looks like

== [ end scrape.py ] ==

The argument to the xpath method here is an XPath expression.  The
overall form is:

//a[.]/@href

The '//a' at the beginning means: starting at the root node of the
document, find all a (anchor) elements that match the condition
specified by ".".  The '/@href' at the end means: give me the href
attribute of the nodes (if any) that remain.

Looking inside the square brackets (what's known as the predicate in the
XPath world), we find

@href and starts-with(@href, 'http://')

The 'and' bit should be clear (there are two conditions that need to be
checked).  The first part says: the a element should have an href
attribute.  The second part says that the value of the href element had
better start with 'http://'.

In fact, we could simplify the predicate to

  starts-with(@href, 'http://')

If an element does not even have an href attribute, its value does not
start with 'http://'. It's not an error, and no exception will be
thrown, when the XPath evaluator applies the starts-with function to an
a element that does not have an href attribute.

Hope this helps.

Best regards,

Jesse

--
Jesse Alama
http://xml.sh
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Embedding Python in C

2019-07-17 Thread Jesse Ibarra
On Wednesday, July 17, 2019 at 11:55:28 AM UTC-6, Barry Scott wrote:
> > On 17 Jul 2019, at 16:57,  wrote:
> > 
> > I am using Python3.6:
> > 
> > [jibarra@redsky ~]$ python3.6
> > Python 3.6.8 (default, Apr 25 2019, 21:02:35) 
> > [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux
> > Type "help", "copyright", "credits" or "license" for more information.
> > 
> > 
> > I am 
> > referencing:https://docs.python.org/3.6/extending/embedding.html#beyond-very-high-level-embedding-an-overview
> > 
> > Is there a way to call a shared C lib using PyObjects?
> 
> If what you want to call is simple enough then you can use the ctypes library
> that ships with python.
> 
> If the code you want to call is more complex you will want to use one of a 
> number of libraries to help
> you create a module that you can import.
> 
> I use PyCXX for this purpose that allows me to write C++ code that can call 
> C++ and C libs and interface
> easily with python. Home page http://cxx.sourceforge.net/ 
>  the source kit contains demo code that you shows
> how to cerate a module, a class and function etc. 
> 
> Example code: 
> https://sourceforge.net/p/cxx/code/HEAD/tree/trunk/CXX/Demo/Python3/simple.cxx
>  
> 
> 
> Barry
> PyCXX maintainer
> 
> > 
> > Please advise.
> > 
> > Thank you.
> > -- 
> > https://mail.python.org/mailman/listinfo/python-list
> >

My options seem rather limited. I need to make a pipeline from (Smalltalk -> C
-> Python) and then go back (Smalltalk <- C <- Python). Since Smalltalk does not
support Python directly, I have to settle for the C/Python API
(https://docs.python.org/3.6/extending/embedding.html#beyond-very-high-level-embedding-an-overview).
Any suggestions?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Embedding Python in C

2019-07-18 Thread Jesse Ibarra
On Wednesday, July 17, 2019 at 2:20:51 PM UTC-6, Christian Gollwitzer wrote:
> Am 17.07.19 um 20:39 schrieb Jesse Ibarra:
> > My options seem rather limited, I need to make a Pipeline from (Smalltalk 
> > -> C -> Python) then go back (Smalltalk <- C <- Python). Since Smalltalk 
> > does not support Python directly I have to settle with the C/Python API 
> > (https://docs.python.org/3.6/extending/embedding.html#beyond-very-high-level-embedding-an-overview).
> >  Any suggestions?
> > 
> 
> Ah, now you finally tell us your problem!
> 
> Depending on, how complete / advanced / efficient the bridge needs to 
> be, it can be easy or hard.
> 
> What level of integration do you want to achieve? Do you want
> 
> a) to call Python functions from Smalltalk
> b) call Smalltalk functions from Python
> c) pass callbacks around, e.g. use a Smalltalk function within a Python 
> list comprehension, and if so, which way
> d) integrate the class systems - derive a Python class from a Smalltalk 
> base or the other way round
> 
> e) ?
> 
> 
> The most basic thing is a), but even getting that right might be 
> non-trivial, since both C APIs will have different type systems which 
> you need to match. I don't speak Smalltalk, so can't comment in detail 
> on this - but in practice it will also depend on the implementation you 
> are using.
> 
> Best regards,
> 
>   Christian

For right now, I need to call a .so file from Smalltalk. I can't explicitly use
Python libraries, since Smalltalk does not support Python. I need to use the
C/Python API (in Smalltalk) to create a bridge where I can call a .so file and
a function in that .so file with a PyObject (this will be called back to
Smalltalk from the .so file). I can't figure out a way to do that, since the
C API can't call C or .so files.

Sorry for the confusion on my part.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Embedding Python in C

2019-07-19 Thread Jesse Ibarra
On Thursday, July 18, 2019 at 2:01:39 PM UTC-6, Chris Angelico wrote:
> On Fri, Jul 19, 2019 at 5:51 AM Christian Gollwitzer  wrote:
> > Once you can do this, you can proceed to call a Python function, which
> > in C means that you invoke the function PyObject_CallObject(). A basic
> > example is shown here:
> >
> > https://docs.python.org/2/extending/embedding.html#pure-embedding
> >
> 
> Or, better:
> 
> https://docs.python.org/3/extending/embedding.html#pure-embedding
> 
> ChrisA

I need to call a .so file, but I don't know a way to do that with PyObject. I
can only seem to bring in .py files.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Embedding Python in C

2019-07-19 Thread Jesse Ibarra
On Thursday, July 18, 2019 at 1:46:05 PM UTC-6, Christian Gollwitzer wrote:
> Am 18.07.19 um 16:18 schrieb Jesse Ibarra:
> > On Wednesday, July 17, 2019 at 2:20:51 PM UTC-6, Christian Gollwitzer wrote:
> >> What level of integration do you want to achieve? Do you want
> >>
> >> a) to call Python functions from Smalltalk
> >> b) call Smalltalk functions from Python
> >> c) pass callbacks around, e.g. use a Smalltalk function within a Python
> >> list comprehension, and if so, which way
> >> d) integrate the class systems - derive a Python class from a Smalltalk
> >> base or the other way round
> >>
> 
> > 
> > For right now, I need to call a .so file from Smalltalk. I can't explicitly 
> > use Python libraries since Smalltalk does not support Python. I need to use 
> > the C/Python API (in Smalltalk) to create a bridge where I can call a .so 
> > and a function in that .so file with a PyObject (This will be called back 
> > to Smalltalk from the .so file). I can't figure out a way to do that since 
> > the C/API can't call C or .so files.
> > 
> > sorry for the confusion on my part
> > 
> 
> I still don't get it, sorry. To me it is unclear which part of the 
> integration you manage to do so far, and which part is the problem.
> 
> Which Smalltalk interpreter are you using? The answer to the following 
> will heavily depend on that.
> 
> 
> Suppose I'd give you a C file with the following simple function:
> 
> 
> double add(double a, double b) {
>   return a+b;
> }
> 
> Do you know how to compile this code and make it usable from Smalltalk?
> 
> One level up, consider a C function working on an array:
> 
> double arrsum(int n, double *arr) {
>   double sum=0.0;
>   for (int i=0; i<n; i++) sum += arr[i];
>   return sum;
> }
> 
> 
> How would you compile and link this with your Smalltalk implementation, 
> such that you can pass it a Smalltalk array?
> 
> Once you can do this, you can proceed to call a Python function, which 
> in C means that you invoke the function PyObject_CallObject(). A basic 
> example is shown here:
> 
> https://docs.python.org/2/extending/embedding.html#pure-embedding
> 
>   Christian

I am using VisualWorks 8.3. I can write that double arrsum function in
Smalltalk. No problem.

I can also make a shared library and bring those functions into Smalltalk with
C (since VW 8.3 supports C and C types). No problem.

Smalltalk does not support Python, so I can use the C API to bring in Python
libs, types and files. No problem.

Now, I need to bring in shared libraries using the C/Python API from Smalltalk.
It seems like I can't directly bring in C shared libraries (.so files). PROBLEM.

HOW CAN I BRING THESE IN DIRECTLY?

Why do I want to call a C shared lib (.so file) using the C/Python API from
Smalltalk?

Because I want to make C shared libs that bring individual Python libraries
(SciPy, NumPy, etc.) into Smalltalk, which can then be called using the
C/Python API in Smalltalk.


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Embedding Python in C

2019-07-19 Thread Jesse Ibarra
On Friday, July 19, 2019 at 8:17:43 AM UTC-6, Chris Angelico wrote:
> On Sat, Jul 20, 2019 at 12:16 AM Jesse Ibarra
>  wrote:
> >
> > On Thursday, July 18, 2019 at 2:01:39 PM UTC-6, Chris Angelico wrote:
> > > On Fri, Jul 19, 2019 at 5:51 AM Christian Gollwitzer  
> > > wrote:
> > > > Once you can do this, you can proceed to call a Python function, which
> > > > in C means that you invoke the function PyObject_CallObject(). A basic
> > > > example is shown here:
> > > >
> > > > https://docs.python.org/2/extending/embedding.html#pure-embedding
> > > >
> > >
> > > Or, better:
> > >
> > > https://docs.python.org/3/extending/embedding.html#pure-embedding
> > >
> > > ChrisA
> >
> > I need to call a .so file, but I don't know I way to do that with PyObject. 
> > I can only seem to bring in .py files
> 
> I have vastly insufficient context to be able to even attempt to answer this.
> 
> ChrisA

Thank you for your help.

If it helps: I need PyImport_Import to bring in a C shared lib (.so file).
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Embedding Python in C

2019-07-19 Thread Jesse Ibarra
Sorry, I am not understanding. Smalltalk VW 8.3 does not support Python. I can
only call Python code through the C/Python API.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Embedding Python in C

2019-07-22 Thread Jesse Ibarra
On Saturday, July 20, 2019 at 1:11:51 PM UTC-6, Stefan Behnel wrote:
> Jesse Ibarra schrieb am 20.07.19 um 04:12:
> > Sorry, I am not understanding. Smalltlak VW 8.3 does not support Python.
> > I can only call Pyhton code through C/Python API.
> 
> Ok, but that doesn't mean you need to write code that uses the C-API of
> Python. All you need to do is:
> 
> 1) Start up a CPython runtime from Smalltalk (see the embedding example I
> posted) and make it import an extension module that you write (e.g. using
> the "inittab" mechanism [1]).
> 
> 2) Use Cython to implement this extension module to provide an interface
> between your Smalltalk code and your Python code. Use the Smalltalk C-API
> from your Cython code to call into Smalltalk and exchange data with it.
> 
> Now you can execute Python code inside of Python and make it call back and
> forth into your Smalltalk code, through the interface module. And there is
> no need to use the Python C-API for anything beyond step 1), which is about
> 5 lines of Python C-API code if you write it yourself. Everything else can
> be implemented in Cython and Python.
> 
> Stefan
> 
> 
> [1]
> https://docs.python.org/3/extending/embedding.html?highlight=PyImport_appendinittab#extending-embedded-python

This cleared up so much, @Stefan, thank you. I just need some clarification, if
you don't mind.

In (1), when you say "import an extension module that you write", do you mean
the Python library that was created ("import emb")? Is that going to be written
in Cython or as a standalone .c file?

In (2), what do you mean when you said "Use the Smalltalk C-API from your
Cython code to call into Smalltalk and exchange data with it"?

Please advise,

Thank you so much for your help.
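
For reference while I try this, here is a minimal sketch of step 1) as I
understand it from the linked docs ("emb" and PyInit_emb are placeholder
names for the Cython-generated module):

#include <Python.h>

extern PyObject *PyInit_emb(void);   /* generated by Cython */

int start_python(void)
{
    /* register the built-in module before initializing the runtime */
    if (PyImport_AppendInittab("emb", PyInit_emb) < 0)
        return -1;
    Py_Initialize();
    PyObject *mod = PyImport_ImportModule("emb");
    if (mod == NULL) {
        PyErr_Print();
        return -1;
    }
    Py_DECREF(mod);
    return 0;
}
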
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Embedding Python in C

2019-07-24 Thread Jesse Ibarra
On Tuesday, July 23, 2019 at 2:20:45 PM UTC-6, Stefan Behnel wrote:
> Jesse Ibarra schrieb am 22.07.19 um 18:12:
> > On Saturday, July 20, 2019 at 1:11:51 PM UTC-6, Stefan Behnel wrote:
> >> Jesse Ibarra schrieb am 20.07.19 um 04:12:
> >>> Sorry, I am not understanding. Smalltlak VW 8.3 does not support Python.
> >>> I can only call Pyhton code through C/Python API.
> >>
> >> Ok, but that doesn't mean you need to write code that uses the C-API of
> >> Python. All you need to do is:
> >>
> >> 1) Start up a CPython runtime from Smalltalk (see the embedding example I
> >> posted) and make it import an extension module that you write (e.g. using
> >> the "inittab" mechanism [1]).
> >>
> >> 2) Use Cython to implement this extension module to provide an interface
> >> between your Smalltalk code and your Python code. Use the Smalltalk C-API
> >> from your Cython code to call into Smalltalk and exchange data with it.
> >>
> >> Now you can execute Python code inside of Python and make it call back and
> >> forth into your Smalltalk code, through the interface module. And there is
> >> no need to use the Python C-API for anything beyond step 1), which is about
> >> 5 lines of Python C-API code if you write it yourself. Everything else can
> >> be implemented in Cython and Python.
> >>
> >> Stefan
> >>
> >>
> >> [1]
> >> https://docs.python.org/3/extending/embedding.html?highlight=PyImport_appendinittab#extending-embedded-python
> > 
> > This cleared so much @Stefan, thank you. I just need some clarification if 
> > you don't mind.
> >  
> > In (1), when you say  "import an extension module that you write",  do you 
> > mean the Python library that was created "import emb"? Is that gonna be 
> > written in Cython or standalone .C file?
> 
> Yes. In Cython.
> 
> 
> > in (2), what do to mean when you said "Use the Smalltalk C-API from your 
> > Cython code to call into Smalltalk and exchange data with it."? 
> 
> Not sure what part exactly you are asking about, but you somehow have to
> talk to the Smalltalk runtime from your Cython/Python code if you want to
> interact with it. I assume that this will be done through the C API that
> Smalltalk provides.
> 
> Just in case, did you check if there is already a bridge for your purpose?
> A quick web search let me find this, not sure if it helps.
> 
> https://github.com/ObjectProfile/PythonBridge
> 
> Stefan

Yes, I think that can be done through the "inittab" mechanism you recommended.
I will try it.

Yes, I am trying to implement the same thing as shown in the GitHub link, but
instead of Pharo I will use the VisualWorks IDE.

Thank you
-- 
https://mail.python.org/mailman/listinfo/python-list


Background process for ssh port forwarding

2005-10-01 Thread Jesse Rosenthal
Hello all,

I'm writing a script which will backup data from my machine to a server
using rsync. It checks to see if I am on the local network. If I am, it
runs rsync over ssh to 192.168.2.6 using the pexpect module to log in.
That's the easy part.

Now, when I'm not on the local network, I first want to open up an ssh
connection to do port forwarding, so something like this:

def hostforward():
    # This is based on the assumption that the passfile is the gnus
    # authinfo file, or has a similar format...
    f = open(PASS_FILE, "r")
    f_list = f.read().split(' ')
    f.close()
    # Now, we get the entry after "password" (it'd be slicker to make it a
    # dictionary, but maybe wouldn't work as well).
    pass_index = f_list.index('password') + 1
    forwardpass = f_list[pass_index]
    # now we connect
    command = 'ssh -l %s -L 2022:%s:22 %s' % \
              (login, my_server, forwarding_server)
    connection = pexpect.spawn(command)
    connection.expect('.*assword:')
    connection.sendline(forwardpass)

If I end this with 'connection.interact()', I will end up logged in to the
forwarding server. But what I really want is to go on and run rsync to
localhost port 2022, which will forward to my_server port 22. So, how can
I put the ssh connection I set up in hostforward() in the background?
I need to make sure that connection is made before I can run the rsync
command.

I've looked at threading, but that seems excessive. There must be an
easier way. Whatever I do, though, I'll need to use pexpect to spawn the
processes, since I'll need to log in to ssh servers with a password.

Thanks for any help.

--Jesse


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Background process for ssh port forwarding

2005-10-02 Thread Jesse Rosenthal
On Sun, 02 Oct 2005 08:44:48 +0200, Fredrik Lundh wrote:

> Jesse Rosenthal wrote:
> 
>> If I end this with 'connection.interact()', I will end up logged in to the
>> forwarding server. But what I really want is to go on and run rsync to
>> localhost port 2022, which will forward to my_server port 22. So, how can
>> I put the ssh connection I set up in hostforward() in the background?
>> I need to make sure that connection is made before I can run the rsync
>> command.
> 
> $ man ssh
> 
> ...
> 
>  -f  Requests ssh to go to background just before command execution.
>  This is useful if ssh is going to ask for passwords or
>  passphrases, but the user wants it in the background.  This
>  implies -n.  The recommended way to start X11 programs at a
>  remote site is with something like ssh -f host xterm.
> 
> ...

Thanks for the response. I had tried that, but the problem is that it
requires that I specify some command on the server (in this case, the
forwarding server) which it executes with ssh in the bg, and then exits.
But maybe I could just run "sleep 10" or something on the server to keep
the port open long enough to ssh through it? I'll give a few things like
that a try.


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Background process for ssh port forwarding

2005-10-04 Thread Jesse Rosenthal
On Tue, 04 Oct 2005 12:40:27 -0700, John Hazen wrote:

> I think what's happening is that when you return from 'hostforward', the
> connection is being closed because of garbage collection.  Python uses
> (among other stuff) reference counting to tell it when to delete
> objects.  After hostforward returns from execution, there are no longer
> any references to 'connection', so it gets deleted, which cleans up the
> connection.
> 
> You probably want to add:
> 
> return connection
> 
> at the end of hostforward, and call it like:
> 
> connection = hostforward()
> my_rsync_function()
> connection.close()   # or whatever the approved pexpect cleanup is

Thanks, John. This makes a lot of sense -- I'll have to give it a try.

I ended up taking an uglier approach. First I set up key pairs
with my forwarding server so I don't have to log in. So no more need for
pexpect at this point. I then set up an ssh connection with a "-f sleep
10" option. This backgrounds the connection to the forwarding server and
runs "sleep 10" on the forwarding server, which gives me 10 seconds to
log in to my forwarded port (localhost 2022). 10 seconds is actually
probably excessive, since logging in is the next command in the script.
The connection then stays open so long as I'm logged in.
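
In code, the kludge boils down to something like this (host variables as in
my original hostforward(); the rsync paths are made up):

import os

# background an ssh that just runs 'sleep 10' remotely, keeping the
# forwarded port open long enough for the next command to connect
os.system('ssh -f -l %s -L 2022:%s:22 %s sleep 10' %
          (login, my_server, forwarding_server))

# connect through the forwarded port before the 10 seconds run out
os.system('rsync -av -e "ssh -p 2022" mydata/ localhost:backup/')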

It works, but it seems rather kludgey. So I'll definitely give your
approach a try.

Thanks again
Jesse


-- 
http://mail.python.org/mailman/listinfo/python-list


Weighted "random" selection from list of lists

2005-10-08 Thread Jesse Noller
Hello -

I'm probably missing something here, but I have a problem where I am
populating a list of lists like this:

list1 = [ 'a', 'b', 'c' ]
list2 = [ 'dog', 'cat', 'panda' ]
list3 = [ 'blue', 'red', 'green' ]

main_list = [ list1, list2, list3 ]

Once main_list is populated, I want to build a sequence from items
within the lists, "randomly", with a defined percentage of the sequence
coming from the various lists. For example, if I want a 6 item
sequence, I might want:

60% from list 1 (main_list[0])
30% from list 2 (main_list[1])
10% from list 3 (main_list[2])

I know how to pull a random sequence (using random()) from the lists,
but I'm not sure how to pick it with the desired percentages.
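
The closest I've gotten is drawing a uniform number and walking cumulative
weights (a sketch, untested; is this the idiomatic way?):

import random

weights = [0.6, 0.3, 0.1]    # desired fraction of picks per sub-list

def weighted_pick(lists, weights):
    r = random.random()
    cum = 0.0
    for sub, w in zip(lists, weights):
        cum += w
        if r < cum:
            return random.choice(sub)
    return random.choice(lists[-1])    # guard against float rounding

sequence = [weighted_pick(main_list, weights) for _ in range(6)]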

Any help is appreciated, thanks

-jesse
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Hello World-ish

2005-11-26 Thread Jesse Lands
On 26 Nov 2005 03:19:55 -0800
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:

> from os import *
> print "Working path: %s" % os.getcwd();
> 
> Just wondering how you would do that .. in theory, if you get what I
> mean?
> I get
> NameError: name 'os' is not defined
> currently, which I don't know how to fix.. anyone?
> 

You're using getcwd through the os name, but 'from os import *' never
binds the name 'os' itself. Change the 'from os import *' to 'import os'.


i.e.

import os
print "Working path: %s" % os.getcwd()


-- 
JLands
Slackware 9.1
Registered Linux User #290053
"If you were plowing a field, which would you rather use? Two strong
oxen or 1024 chickens?"
- Seymour Cray (1925-1996), father of supercomputing
-- 
http://mail.python.org/mailman/listinfo/python-list


OT: Boston-Area QA/Python Job

2005-06-24 Thread Jesse Noller
Sorry for the off-topic post everyone. The company I work for has a
job opening for a Senior QA/Automation person, and they are looking
for someone strong in Python to help develop tests/testing
frameworks/etc.

The complete job description follows - you can feel free to email
resumes and questions to jnoller at gmail dot com the company website
is http://www.archivas.com

Job description:

Senior QA Engineer – Test Automation 

Archivas is actively seeking a highly motivated, self-starting QA
Engineer with a strong development background to help in the
development of automated tests. In this position, you will be
responsible for defining, executing and developing tests to help
increase our test coverage and to ensure product quality. This will
require strong development skills and solid QA skills. You will also
be responsible for the analysis, report generation, and communication
of test results.

Responsibilities
This position will be responsible for the following:
- Develop test plans and test cases to ensure product quality.
- Automate tests (preferably in Python or Java) to be executed in our
Automated Test Framework (written in Python)
- Test development and execution at all levels (unit, component,
integration and system, white box and black box)
- Develop and maintain testing tools/scripts (preferably written in
Python or Java)
- Report product defects and track defects to closure.
- Work closely with development to resolve escalated issues.
- Help prioritize product defects and feature requests.
- Help review and contribute to product documentation.
- Act as backup and escalation path for Technical Support.
- Assist in the support of beta customers.

Position requirements include:

- 5+ years experience testing systems products 
- Comprehensive knowledge and application of Software Quality
Assurance methodologies
- Experience developing automated test cases
- Strong development experience in scripting or programming languages
(specifically: Python or Java).
- Experience testing and developing automated tests for a mixed Linux
(*nix) and Windows environments required.
- Strong written and verbal communication skills. 
- Computer Science Degree or equivalent experience.

Additional Skills (a plus):
- Experience with relational databases such as PostgresQL or SQL
- Experience with QA testing tools: code coverage analysis tools,
performance testing tools, automated test tools
- Experience with software development tools: source code control
system such as Perforce or ClearCase, build tools such as ant or make,
and bugtracking tools like Bugzilla or ClearQuest.

Desired Personal Characteristics: 
- Independent: Able to grasp high level product requirements and
translate these to detailed test plans. Capable of gathering required
information from engineers and product marketing to perform job
function.
- Strong sense of ownership: Sees defects throughout their lifecycle;
from detection to resolution.
- Assertive: Champions causes that will benefit the company. Willing
to engage engineers and management in discussions on bug resolution.
- Communicative: Takes an "open source" attitude to quality assurance.
Keeps others in the company informed and up to date on his or her
priorities, current tasks and work completed. Encourages constructive
criticism on his or her work.
- Accountable: He or she should be a results-oriented team player who
leads by example, holds themselves accountable for performance, takes
absolute ownership, and champions all aspects of quality initiatives.
- Sense of urgency. Escalates quality issues when appropriate;
maintains a sense of "bug ownership" to see all issues through to a
successful resolution. Strives to turn around issues with an efficient
and effective approach.
- Flexible and adaptable. Should be able to switch gears in various
high-stress situations and apply themselves to quickly learning new
technologies and adopting new methodologies.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I import a py script by its absolute path name?

2005-07-14 Thread Jesse Noller
A question in a similar vein:

I have appended 2 different directories to my path (via
sys.path.append) now - without knowing the names of the files in those
directories, I want to force an import of the libraries ala:

for f in os.listdir(os.path.abspath(libdir)):
    module_name = f.strip('.py')
    import module_name

Obviously, this throws:

ImportError: No module named module_name

Is there some way to do this?
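
One direction I've seen suggested is the builtin __import__, which takes
the module name as a string (a sketch, untested; note f[:-3] instead of
strip('.py'), since strip() removes a set of characters, not a suffix):

import os
import sys

libdir_abs = os.path.abspath(libdir)
sys.path.append(libdir_abs)
for f in os.listdir(libdir_abs):
    if f.endswith('.py'):
        module_name = f[:-3]
        module = __import__(module_name)    # import via the name string
        globals()[module_name] = module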

thanks
-jesse
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to realize ssh & scp by Python

2005-07-24 Thread Jesse Noller
On 7/23/05, 刚 王 <[EMAIL PROTECTED]> wrote:
> I would like to write a Python code like this:
> 
> It can login a host by SSH
> after login the host, use SCP to get a remote file, so it can deliver file
> to the host.
> then execute the program
> then leave the host
> 
> For example :
> 
> STEP 1. ssh _yyy at 123.45.67.89
> STEP 2. Enter the password automatically 
> STEP 3. run " scp -r zzz at 123.45.67.90:/home/xxx/program ."
> STEP 4. Enter the password for SCP automatically 
> STEP 5. run "./program"
> STEP 6. run " exit"
> 
> I know telnetlib can help us with telnet, and how to deal with this SSH
> situation in Python?
> 
> Thanks a lot for your help :)
> 

I would recommend looking at the following utilities:

http://www.theether.org/pssh/
http://sourceforge.net/projects/pyssh

Setting up private/public key authentication is going to allow for a
great deal of secure automation. Barring that, use the pexpect
module to do the prompt handling.

-jesse
-- 
http://mail.python.org/mailman/listinfo/python-list

NDMP Library?

2005-09-09 Thread Jesse Noller
Does anyone know of a python module/library for communicating with the
NDMP (ndmp.org) protocol? I'm doing some research and anything would
be a great help, thanks!

-jesse
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to extract contents of inner text of html tag?

2014-06-27 Thread Jesse Adam
I don't have BeautifulSoup installed so I am unable to tell whether

a) for line in all_kbd:
processes one line at a time as given in the input, or do you get the clean
text in single lines in a list as shown in the example in the doc 
http://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-the-tree


b) for inside_line in line:
  Does this process one token at a time? 

In any case, it looks like the reason you got "None" in the output is
because you assume that every single line contains <kbd> and </kbd> tags.
This may not be case all the time, so, prior to printing extract_code
perhaps you could check whether that is None.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing in a while loop?

2014-06-27 Thread Jesse Adam

Could you post 
a) what the output looks like now (sans the logging part)
b) what output do you expect


In any event, this routine does not look right to me:

def consume_queue(queue_name):
    conn = boto.connect_sqs()
    q = conn.get_queue(queue_name)
    m = q.read()
    while m is not None:
        yield m
        q.delete_message(m)
        logger.debug('message deleted')
        m = q.read()
-- 
https://mail.python.org/mailman/listinfo/python-list


httplib and large file uploads

2006-10-02 Thread Jesse Noller
Hey All,

I'm working on a script that will generate a file of N size (where N is
1k-1gig) in small chunks, in memory (and hash the data on the fly), and pass
it to an httplib object for upload. I don't want to store the file on the
disk, or completely in memory, at any time. The problem arises after getting
the http connection (PUT) - and then trying to figure out how to iterate/hand
the chunks I am generating to the httplib connection's send() call. For
example (this code does not work as is):

chunksize = 1024
size = 10024
http = httplib.HTTP(url, '80')
http.putrequest("PUT", save_url)
http.putheader("Content-Length", str(size))
http.endheaders()
for i in xrange(size / chunksize):
    chunk = ur.read(chunksize)
    http.send(chunk)
errcode, errmsg, headers = http.getreply()
http.close()

In this case, "ur" is a file handle pointing to /dev/urandom. Obviously, the
problem lies in the multiple send(chunk) calls. I'm wondering if it is
possible to hand http.send() an iterator/generator which can pass chunks in
as needed.

Thanks in advance,
-jesse
-- 
http://mail.python.org/mailman/listinfo/python-list

Simple SMTP server

2005-06-09 Thread Jesse Noller
Hello -

I am looking at implementing a simple SMTP server in python - I know
about the smtpd module, but I am looking for code examples/snippets as
the documentation is sparse.

The server I am looking at writing is really simple - it just needs to
fork/thread appropriately for handling large batches of email, and all
it has to do is write the incoming emails to the disk (no
relaying/proxying/etc).

If anyone has any good examples/recipes I'd greatly appreciate it.
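
A sketch of the shape I have in mind with the smtpd module (single-process
asyncore loop, no forking yet; the filename scheme is made up):

import asyncore
import smtpd
import time

class DiskSMTPServer(smtpd.SMTPServer):
    # called once per fully received message
    def process_message(self, peer, mailfrom, rcpttos, data):
        fname = 'msg_%f.eml' % time.time()
        f = open(fname, 'w')
        f.write(data)
        f.close()

# None for the remote address means no relaying upstream
server = DiskSMTPServer(('0.0.0.0', 2525), None)
asyncore.loop()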

Thanks

-jesse
-- 
http://mail.python.org/mailman/listinfo/python-list


logging anomaly

2007-06-26 Thread Jesse James

I have some fairly simply code in a turbogears controller that uploads
files. In this code, I inserted a 'start uploading' and a 'done
uploading' log record like this:

logger.info('- start uploading file: ' + Filename)
# copy file to specified location.
while 1:
    # Read blocks of 8KB at a time to avoid memory problems with large files.
    data = Filedata.file.read(1024 * 8)
    if not data:
        break
    fp.write(data)
logger.info('- done uploading file: ' + Filename)
fp.close()

It is nice to occasionally see the upload time for a large file...but
what I am seeing is that the 'start' message is not being logged until
just immediately before the 'done' message:

2007-06-26 07:59:38,192 vor.uploader INFO - start uploading file:
7_Canyons_Clip_1.flv
2007-06-26 07:59:38,206 vor.uploader INFO - done uploading file:
7_Canyons_Clip_1.flv

I know this is wrong because this is a large file that took almost a
minute to upload.

Given my experience with log4j, upon which I believe the Python logging
module was modeled, it seems like this should just work as expected.

What am I missing here?

Jesse.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: division by 7 efficiently ???

2007-02-02 Thread Jesse Chounard
On 31 Jan 2007 19:13:14 -0800, [EMAIL PROTECTED]
<[EMAIL PROTECTED]> wrote:
> Its not an homework. I appeared for EA sports interview last month. I
> was asked this question and I got it wrong. I have already fidlled
> around with the answer but I don't know the correct reasoning behind
> it.

I think this works:

>>> def div7 (N):
... return ((N * 9362) >> 16) + 1
...
>>> div7(7)
1
>>> div7(14)
2
>>> div7(700)
100
>>> div7(7)
1

The coolest part about that (whether it works or not) is that it's my
first Python program.  I wrote it in C first and had to figure out how
to convert it.  :)
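
A quick brute-force check of the trick (note that it rounds up rather than
down for non-multiples of 7, and the fixed-point error eventually breaks it;
by my arithmetic the identity first fails at N = 32775):

def div7(N):
    return ((N * 9362) >> 16) + 1

# matches ceiling division by 7 for N up to 32768
for n in range(1, 32769):
    assert div7(n) == (n + 6) // 7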

Jesse
-- 
http://mail.python.org/mailman/listinfo/python-list


recording sound with python

2007-11-08 Thread jesse j
Hello Everyone,

I'm new to python.  I have worked through some tutorials and played around
with the language a little bit but I'm stuck.

I want to know how I can make Python run a program.  More specifically, I
want to get Python to work with SoX to record a sound through the
microphone, save the sound, and then play the sound by typing one command
into a Linux terminal (Ubuntu).

Can someone give me some hints on how this can be done?
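
Something like this is what I have in mind, if subprocess is the right
tool (the command lines are my guess at SoX's usual rec/play front ends):

import subprocess

# record 5 seconds from the default microphone into a WAV file
subprocess.call(['rec', 'sound.wav', 'trim', '0', '5'])

# play it back
subprocess.call(['play', 'sound.wav'])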

Thank You.

jessej
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Efficient: put Content of HTML file into mysql database

2007-11-19 Thread Jesse Jaggars
Fabian López wrote:
> Hi colegues,
> do you know the most efficient way to put the content of an html file 
> into a mySQL database?Could it be this one?:
> 1.- I have the html document in my hard disk.
> 2.- Then I Open the file (maybe with fopen??)
> 3.- Read the content (fread or similar)
> 4.- Write all the content it in a SQL sentence.
>
> What happens if the html file is very big?
>
>
> Thanks!
> FAbian
>
>
I don't understand why you would want to store an entire static html 
page in the database. All that accomplishes is adding the overhead of 
calling MySQL too.

If you want to serve HTML, just let your webserver serve HTML.

Is there more to the situation that would help me and others
understand why you want to do this?
-- 
http://mail.python.org/mailman/listinfo/python-list


MySQLdb autocommit and SELECT problems

2007-11-28 Thread Jesse Lehrman
I'm experiencing strange behavior using MySQLdb.

If I turn off autocommit for my connection, I get stale data when I perform
multiple, identical SELECTs. If autocommit is enabled, if I follow my SELECT
statement with a COMMIT statement, or if I create a new connection, I don't
get this behavior.

Example:

TABLE EMPLOYEES (InnoDB)
|| ID | NAME ||
|| 1  | bob   ||

import MySQLdb

db = MySQLdb.connect(host=host, user=user, passwd=passwd, db=schema)

cursor = db.cursor()
cursor.execute("SELECT @@autocommit")
autocommit = cursor.fetchall()

cursor.execute("SELECT name FROM EMPLOYEES WHERE id=1")
firstResult = cursor.fetchall()
cursor.close()

# I now go and change 'bob' to 'dave' using some other MySQL client.
# I also confirm that the data was changed with that client.

cursor = db.cursor()
cursor.execute("SELECT name FROM EMPLOYEES WHERE id=1")
secondResult = cursor.fetchall()
cursor.close()

# I append a COMMIT statement to the SELECT
cursor = db.cursor()
cursor.execute("SELECT name FROM EMPLOYEES WHERE id=1; COMMIT;")
thirdResult = cursor.fetchall()
cursor.close()

print autocommit
print firstResult
print secondResult
print thirdResult

>> ((0),)
>> ((bob),)
>> ((bob),)
>> ((dave),)

It doesn't make sense to me that a COMMIT should affect the behavior of a
SELECT statement at all. I tried replicating this using the MySQL Query
Browser, but that gives me the expected result (i.e., I see the change). I
know that the second SELECT was successful because MySQL increments its
SELECT counter.

I realize the easy fix is to just add a COMMIT to all my SELECT statements but 
I'm trying to understand why it's doing this.

Python v2.4.3 (WinXP 64-bit)
MySQLdb v 1.2.2 (also tried v1.2.1)
MySQL 4.1.12 and 5.0.45 (Linux)

thanks in advance!

Jesse
[EMAIL PROTECTED]

-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Source formatting fixer?

2007-12-11 Thread Jesse Jaggars
Bret wrote:
> Does anyone know of a package that can be used to "fix" bad formatting
> in Python code?  I don't mean actual errors, just instances where
> someone did things that violate the style guide and render the code
> harder to read.
>
> If nothing exists, I'll start working on some sed scripts or something
> to add spaces back in but I'm hoping someone out there has already
> done something like this.
>
> Thanks!
>
>
> Bret Wortman
>   
This may not be exactly what you want, but if you use Gedit there is a 
handy reindent plugin.


http://live.gnome.org/Gedit/Plugins/Reindent
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pywin32 : scheduled weakup from standby/hiberate ?

2006-04-18 Thread Jesse Hager
robert wrote:
> On Windows the task scheduler tool can program (the BIOS?) to wake up 
> the machine from standby/hibernate at certain pre-configured times. Can 
> this be done directly through the (py)win32 API?
> 
> robert

What you need is a Waitable Timer.

The APIs to manipulate these are:

win32event.CreateWaitableTimer()
win32event.SetWaitableTimer()
win32event.CancelWaitableTimer()

The wakeup feature is enabled when the last parameter of the call to 
SetWaitableTimer is True.
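
Putting those together (a sketch, untested; relative due times are negative
and in 100-nanosecond units, and the final True is the wakeup flag):

import win32event

timer = win32event.CreateWaitableTimer(None, 0, None)
# fire (and wake the machine) 60 seconds from now, with no period
win32event.SetWaitableTimer(timer, -10000000 * 60, 0, None, None, True)
win32event.WaitForSingleObject(timer, win32event.INFINITE)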

However, pywin32 seems to be missing a binding for the 
SetThreadExecutionState() function so unless a person moves the mouse or 
keyboard within a minute or two of the wakeup, the system just goes back 
to sleep.

You should be able to call the SetThreadExecutionState function using 
ctypes.

Search for Power Management in the MSDN library for info on these functions.

--
Jesse Hager
email = "[EMAIL PROTECTED]".decode("rot13")
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: midipy.py on linux

2006-04-27 Thread Jesse Hager
Will Hurt wrote:
> Hi
> Ive been using midipy in my blender3d python scripts on windowsXP, now
> im trying to run them from ubuntu and i cant find the midipy.py module
> compiled for linux anywhere. 
> Is it possible to complie it under linux and how would i go about doing
> it --or--
> Is there another module which does the same thing available for linux[ie
> i can get raw midi data in as a list] and thats why no-ones bothered to
> compile midipy under linux?
> 
> Thanks
> Will
> 

The MIDI IO module I use, pyPortMidi, supposedly supports Linux, but I
have only ever used it on Windows.

The links in the Python Cheese Shop seem broken, so here's the site address:

http://alumni.media.mit.edu/~harrison/code.html

The C library it is based on is here:

http://www.cs.cmu.edu/~music/portmusic/

There doesn't seem to be any documentation for pyPortMidi, so you will
need to read the comments in the portmidi.h from the original C library
and the test.py file included in the distribution.

http://www.cs.cmu.edu/~music/portmusic/portmidi/portmidi.h

The Read() method of the input stream object returns a list of events,
each event consists of 4 bytes of data and a timestamp. Each event
contains a single MIDI message (not all of the 4 bytes will be used for
most messages).  Sysex commands are broken up into 4 byte pieces and
returned as multiple events, one for each 4 bytes.

Format is like this:

[[[byte0,byte1,byte2,byte3],time],...]

Not sure if it uses lists or tuples, since I use it mainly for output
and I don't have MIDI input on this machine to test it...
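
A rough, untested sketch of polling for events (the module name "pypm"
and device id 0 are assumptions taken from the bundled test.py):

import pypm

pypm.Initialize()
midi_in = pypm.Input(0)  # open MIDI input device 0

while True:
    if midi_in.Poll():  # any events waiting?
        # Read(n) returns up to n events: [[[b0,b1,b2,b3],time],...]
        for event in midi_in.Read(32):
            data, timestamp = event[0], event[1]
            print timestamp, data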

Hope this helps.

-- 
Jesse Hager
email = "[EMAIL PROTECTED]".decode("rot13")
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multi-Monitor Support

2006-05-02 Thread Jesse Hager
Mark rainess wrote:
> Hello,
> 
> 
> Does Python or wxPython include any support for multiple monitors. In my
> application multiple monitors are located at a distance. It is not
> convenient to move a window from the main monitor to one of the others.
> I want to have the option to open an an application on any connected
> monitor. I am using wxPython. Does anyone know how to do it?
> 
> Thanks
> 
> Mark Rainess

import wx

app = wx.App()

#To get the count of displays
num_displays = wx.Display.GetCount()

#Open a frame on each display
for display_num in range(num_displays):
    #Get a display object
    display = wx.Display(display_num)

    #To get a wx.Rect that gives the geometry of a display
    geometry = display.GetGeometry()

    #Create a frame on the display
    frame = wx.Frame(None, -1, "Display %d" % display_num,
                     geometry.GetTopLeft(), geometry.GetSize())

    #Make the frame visible
    frame.Show()

app.MainLoop()

Creating a window on a different display is the same as creating one on
the main display, all you need to do is specify coordinates somewhere on
the alternate display when calling the constructor.

Use wx.Display to find out about the displays on the system, it also
lets you query and set the current video mode for a display.

Hope this helps.

-- 
Jesse Hager
email = "[EMAIL PROTECTED]".decode("rot13")
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: stripping

2006-05-02 Thread Jesse Hager
[EMAIL PROTECTED] wrote:
> hi
> i have a file test.dat eg
> 
> abcdefgh
> ijklmn
>  <-newline
> opqrs
> tuvwxyz
>   <---newline
> 
> 
> I wish to print the contents of the file such that it appears:
> abcdefgh
> ijklmn
> opqrs
> tuvwxyz
> 
> here is what i did:
> f = open("test.dat")
> while 1:
>     line = f.readline().rstrip("\n")
>     if line == '':
>         break
>     #if not re.findall(r'^$',line):
>     print line
> 
> but it always give me first 2 lines, ie
> abcdefgh
> ijklmn
> 
> What can i do to make it print all..?
> thanks
> 
Change the 'break' statement to a 'continue' - and add a separate
end-of-file check, since readline() also returns '' at EOF, so a bare
'continue' would loop forever once the file is exhausted.
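
For reference, a sketch of a loop with both fixes applied - iterating
over the file object sidesteps the EOF question entirely:

f = open("test.dat")
for line in f:  # stops cleanly at end-of-file
    line = line.rstrip("\n")
    if line == '':
        continue  # skip blank lines instead of stopping
    print line
f.close()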

-- 
Jesse Hager
email = "[EMAIL PROTECTED]".decode("rot13")
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: creating a new database with mysqldb

2006-05-17 Thread Jesse Hager
John Salerno wrote:
> Since the connect method of mysqldb requires a database name, it seems
> like you can't use it without having a database already created. So is
> there a way to connect to your mysql server (without a specified
> database) in order to create a new database (i.e., the CREATE DATABASE
> query)?
> 
> Thanks.

In every MySQL library I have ever seen, the database parameter is
optional.  You may either omit it or pass an empty string.  It is just a
shortcut so the application does not need to send a "USE" command to
select the active database.

>>>import MySQLdb
>>>db = MySQLdb.connect("host","username","password")
>>>c = db.cursor()

To get the currently selected database:

>>>c.execute("SELECT DATABASE()")
1L
>>>c.fetchall()
((None,),)
  ^^^
None (NULL) indicates no currently selected database.

To set the default database:

>>>c.execute("USE mysql")
0L

Getting the database again:

>>>c.execute("SELECT DATABASE()")
1L
>>>c.fetchall()
(('mysql',),)
  ^^^
A string indicates that a database is currently selected.

Hope this helps.

-- 
Jesse Hager
email = "[EMAIL PROTECTED]".decode("rot13")
-- 
http://mail.python.org/mailman/listinfo/python-list


Weird cgi error

2008-02-24 Thread Jesse Aldridge
I uploaded the following script, called "test.py", to my webhost.
It works fine except when I input the string "python ".  Note that's
the word "python" followed by a space.  If I submit that I get a 403
error.  It seems to work fine with any other string.
What's going on here?

Here's the script in action: http://crookedgames.com/cgi-bin/test.py

Here's the code:

#!/usr/bin/python
print "Content-Type: text/html\n"
print """
<!-- [form markup lost in archiving: a text entry field and a submit button] -->
"""
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Weird cgi error

2008-02-25 Thread Jesse Aldridge

> If you can't get access to the apache (?) error_log, you can put this in
> your code:
> import cgitb
> cgitb.enable()
>
> Which should trap what is being written on the error stream and put it on
> the cgi output.
>
> Gerardo

I added that.  I get no errors.  It still doesn't work.  Well, I do
get several errors about the missing favicon, but I'm pretty sure
that's unrelated.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Weird cgi error

2008-02-25 Thread Jesse Aldridge
On Feb 25, 11:42 am, Jesse Aldridge <[EMAIL PROTECTED]> wrote:
> > If you can't get access to the apache (?) error_log, you can put this in
> > your code:
> > import cgitb
> > cgitb.enable()
>
> > Which should trap what is being written on the error stream and put it on
> > the cgi output.
>
> > Gerardo
>
> I added that.  I get no errors.  It still doesn't work.  Well, I do
> get several errors about the missing favicon, but I'm pretty sure
> that's unrelated.

Oh, and yeah, the server's running Apache.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Weird cgi error

2008-02-25 Thread Jesse Aldridge

> This is some kind of crooked game, right? Your code works fine on a
> local server, and there's no reason why it shouldn't work just fine on
> yours either. All you are changing is the standard input to the process.
>
> Since you claim to have spotted this specific error, perhaps you'd like
> to explain just exactly how you came across it. I mean that's a pretty
> specific input to test with ...
>
> Frankly I am not sure you are telling the truth about the code behind
> that page. If you *are* then you'd better provide specifics: Python
> version, Apache version, httpd.conf file, and so on. April 1 is still
> over a month away.
>
> regards
>   Steve
>
> PS: consider closing the <textarea> tag on the same line as the opening
> tag to avoid spurious spaces in your pristine form.
> --
> Steve Holden+1 571 484 6266   +1 800 494 3119
> Holden Web LLC  http://www.holdenweb.com/

Thanks for the reply.

No, it's not a game, crookedgames.com is a mostly defunct games site
that I was working on for a while.  I'm just hosting the script
there.  What I am actually working on is a tool used to compare
various things.  Check it out here: 
http://crookedgames.com/cgi-bin/Language_Comparison.py
Here's some input you can use to test with:

Cats
  +2 Fuzzy
  -1 Medium Maintenance

Fish
  +1 Low Maintenance
  -1 Stupid

Dogs
  +2 Fuzzy
  -2 High Maintenance

(note that there are supposed to be two spaces before the +/- symbols --
in case my formatting doesn't go through)

I originally created that tool because I wanted to compare programming
languages, python among them, thus leading me discover this issue.

Now, I'm very new to this web development stuff (this is my first real
app), so it's quite likely that I'm just doing something stupid, but I
can't figure out what.

I'm using LunarPages.  CPanel reports my Apache version as: 1.3.37
(Unix)

I added the line "print sys.version" to the test script, and that
spits out: "2.3.4 (#1, Dec 11 2007, 05:27:57) [GCC 3.4.6 20060404 (Red
Hat 3.4.6-9)]"

I can't find any file called httpd.conf.  It would be in /etc, right?
I guess I don't have one.

Still having the same problem.

Here's the new contents of test.py:

#!/usr/bin/python
import cgitb, sys
cgitb.enable()

print "Content-Type: text/html\n"
print sys.version
print """
<!-- [form markup lost in archiving: a text entry field and a submit button] -->
"""

It's not a joke, honest :)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing vs. distributed processing

2009-01-16 Thread Jesse Noller
On Fri, Jan 16, 2009 at 12:52 AM, James Mills
 wrote:
> I've noticed over the past few weeks lots of questions
> asked about multi-processing (including myself).
>
> For those of you new to multi-processing, perhaps this
> thread may help you. Some things I want to start off
> with to point out are:
>
> "multiprocessing will not always help you get things done faster."
>
> "be aware of I/O bound applications vs. CPU bound"
>
> "multiple CPUs (cores) can compute multiple concurrent expressions -
> not read 2 files concurrently"
>
> "in some cases, you may be after distributed processing rather than
> multi or parallel processing"
>
> cheers
> James

James is quite correct, and maybe I need to amend the multiprocessing
documentation to reflect this fact.

While distributed programming and parallel programming may cross paths
in a lot of problems/applications, you have to know when to use one
versus the other. Multiprocessing only provides some basic primitives
to help you get started with distributed programming; it is not its
primary focus, nor is it a complete solution for distributed
applications.

That being said, there is no reason why you could not use it in
conjunction with something like Kamaelia, pyro, $ipc mechanism/etc.

Ultimately, it's a tool in your toolbox, and you have to judge and
experiment to see which tool is best applied to your problem. In my
own work/code, I use both processes *and* threads - one works better
than the other depending on the problem.

For example, a web testing tool. This is something that needs to
generate hundreds of thousands of HTTP requests - not a problem you
want to use multiprocessing for given that A> It's primarily I/O bound
and B> You can generate that many threads on a single machine.
However, if I wanted to, say, generate hundreds of threads across
multiple machines, I would (and do) use multiprocessing + paramiko to
construct a grid of machines and coordinate work.

That all being said: multiprocessing isn't set in stone - there's room
for improvement in the docs, tests and code, and all patches are
welcome.

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: process/thread instances and attributes

2009-01-16 Thread Jesse Noller
On Thu, Jan 15, 2009 at 10:37 PM, James Mills
 wrote:
> After some work ... I've taken Laszlo's suggestion of using Value
> (shared memory) objects
> to share state between the -pseudo- Process (manager) object and it's
> underlying multiprocessing.Process
> instance (and subsequent process):
>
> Here is the code:
>
> 
> #!/usr/bin/env python
>
> import os
> from time import sleep
>
> from threading import activeCount as threads
> from threading import Thread as _Thread
>
> from multiprocessing import Value
> from multiprocessing import Process as _Process
> from multiprocessing import active_children as processes
>
> class Process(object):
>
>def __init__(self, *args, **kwargs):
>super(Process, self).__init__(*args, **kwargs)
>
>self.running = Value("b", False)
>self.thread = _Thread(target=self.run)
>self.process = _Process(target=self._run, args=(self.running,))
>
>def _run(self, running):
>self.thread.start()
>
>try:
>while running.value:
>try:
>sleep(1)
>print "!"
>except SystemExit:
>running.acquire()
>running.value = False
>running.release()
>break
>except KeyboardInterrupt:
>running.acquire()
>running.value = False
>running.release()
>break
>finally:
>running.acquire()
>running.value = False
>running.release()
>self.thread.join()
>
>def start(self):
>self.running.acquire()
>self.running.value = True
>self.running.release()
>self.process.start()
>
>def run(self):
>pass
>
>def stop(self):
>print "%s: Stopping ..." % self
>self.running.acquire()
>self.running.value = False
>self.running.release()
>
>def isAlive(self):
>return self.running.value
>
> class A(Process):
>
>def run(self):
>while self.isAlive():
>sleep(5)
>self.stop()
>
> a = A()
> a.start()
>
> N = 0
>
> while a.isAlive():
>sleep(1)
>print "."
>print "threads:   %d" % threads()
>print "processes: %d" % len(processes())
>
> print "DONE"
> 
>
> Here is the result of running this:
>
> 
> $ python test3.py
> !
> .
> threads:   1
> processes: 1
> .
> threads:   1
> processes: 1
> !
> !
> .
> threads:   1
> processes: 1
> .
> threads:   1
> processes: 1
> !
> .
> threads:   1
> processes: 1
> !
> <__main__.A object at 0x80de42c>: Stopping ...
> !
> .
> threads:   1
> processes: 0
> DONE
> 
>
> This appears to work as I intended.
>
> Thoughts / Comments ?
>
> cheers
> James


Personally, rather than using a value to indicate whether to run or
not, I would tend to use an event to coordinate start/stop state.
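
A minimal sketch of what I mean (the class name is made up, and this
is not a drop-in replacement for the code above):

from multiprocessing import Event

class Runner(object):
    def __init__(self):
        self.running = Event()

    def start(self):
        self.running.set()  # workers loop while is_set() is True
        # ... start the underlying multiprocessing.Process here ...

    def stop(self):
        self.running.clear()  # every process sees the flag drop at once

    def isAlive(self):
        return self.running.is_set()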

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.6's multiprocessing lock not working on second use?

2009-01-19 Thread Jesse Noller
On Mon, Jan 19, 2009 at 8:16 AM, Frédéric Sagnes  wrote:
> On Jan 19, 11:53 am, Frédéric Sagnes  wrote:
>> On Jan 17, 11:32 am, "Gabriel Genellina" 
>> wrote:
>>
>>
>>
>> > En Fri, 16 Jan 2009 14:41:21 -0200, escribiste en el grupo
>> > gmane.comp.python.general
>>
>> > > I ran a few tests on the new Python 2.6multiprocessingmodule before
>> > > migrating a threading code, and found out the locking code is not
>> > > working well. In this case, a pool of 5 processes is running, each
>> > > trying to get the lock and releasing it after waiting 0.2 seconds
>> > > (action is repeated twice). It looks like themultiprocessinglock
>> > > allows multiple locking after the second pass. Running the exact same
>> > > code with threads works correctly.
>>
>> > I've tested your code on Windows and I think the problem is on the Queue
>> > class. If you replace the Queue with some print statements or write to a
>> > log file, the sequence lock/release is OK.
>> > You should file a bug report onhttp://bugs.python.org/
>>
>> > --
>> > Gabriel Genellina
>>
>> Thanks for your help gabriel, I just tested it without the queue and
>> it works! I'll file a bug about the queues.
>>
>> Fred
>>
>> For those interested, the code that works (well, it always did, but
>> this shows the real result):
>>
>> class test_lock_process(object):
>> def __init__(self, lock):
>> self.lock = lock
>> self.read_lock()
>>
>> def read_lock(self):
>> for i in xrange(5):
>> self.lock.acquire()
>> logging.info('Got lock')
>> time.sleep(.2)
>> logging.info('Released lock')
>> self.lock.release()
>>
>> if __name__ == "__main__":
>> logging.basicConfig(format='[%(process)0...@%(relativeCreated)04d] %
>> (message)s', level=logging.DEBUG)
>>
>> lock = Lock()
>>
>> processes = []
>> for i in xrange(2):
>> processes.append(Process(target=test_lock_process, args=
>> (lock,)))
>>
>> for t in processes:
>> t.start()
>>
>> for t in processes:
>> t.join()
>
> Opened issue #4999 [http://bugs.python.org/issue4999] on the matter,
> referencing this thread.
>

Thanks, I've assigned it to myself. Hopefully I can get a fix put
together soonish, time permitting.
-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.6's multiprocessing lock not working on second use?

2009-01-19 Thread Jesse Noller
On Mon, Jan 19, 2009 at 1:32 PM, Nick Craig-Wood  wrote:
> Jesse Noller  wrote:
>> > Opened issue #4999 [http://bugs.python.org/issue4999] on the matter,
>> > referencing this thread.
>>
>>  Thanks, I've assigned it to myself. Hopefully I can get a fix put
>>  together soonish, time permitting.
>
> Sounds like it might be hard or impossible to fix to me.  I'd love to
> be proved wrong though!
>
> If you were thinking of passing time.time() /
> clock_gettime(CLOCK_MONOTONIC) along in the Queue too, then you'll
> want to know that it can differ by significant amounts on different
> processors :-(
>
> Good luck!
>

Consider my parade rained on. And after looking at it this morning,
yes - this is going to be hard, and should be fixed for a FIFO queue
:\

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: malloc (error code=12)

2009-01-21 Thread Jesse Noller
On Wed, Jan 21, 2009 at 9:38 PM, Arash Arfaee  wrote:
>
> Hi All,
>
> I am writing a multiprocessing program using python 2.6. It works in most
> cases, however when my input is large sometimes I get this message again and
> again:
>
> Python(15492,0xb0103000) malloc: *** mmap(size=393216) failed (error
> code=12)
> *** error: can't allocate region
>
> and at the end I have these messages:
>
>
> Python(15492,0xb0103000) malloc: *** mmap(size=393216) failed (error
> code=12)
> *** error: can't allocate region
> *** set a breakpoint in malloc_error_break to debug
> Exception in thread Thread-2:
> Traceback (most recent call last):
>   File
> "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/threading.py",
> line 524, in __bootstrap_inner
> self.run()
>   File
> "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/threading.py",
> line 479, in run
> self.__target(*self.__args, **self.__kwargs)
>   File
> "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/multiprocessing/pool.py",
> line 259, in _handle_results
> task = get()
> MemoryError
>
> Any idea what is wrong here? I didn't attach the code since it is a big
> program and I don't know exactly which part of my program causes this error.
> And since it is multiprocessing I can't debug it and run it line by line!
>
> Thanks,
> Arash
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>


wow. How big are these objects/input?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Addition of multiprocessing ill-advised? (was: Python 3.0.1)

2009-01-28 Thread Jesse Noller
On Tue, Jan 27, 2009 at 11:36 PM, James Mills
 wrote:
> On Wed, Jan 28, 2009 at 1:49 PM, Ben Finney  wrote:
>> Steve Holden  writes:
>>> I think that [Python 2.6 was a rushed release]. 2.6 showed it in the
>>> inclusion (later recognizable as somewhat ill-advised so late in the
>>> day) of multiprocessing […]
>
> Steve: It's just a new package - it used to be available
> as a 3rd-party package. I dare say it most definitely was
> -not- ill-advised. It happens to be a great addition to the
> standard library.
>
>> What was ill-advised about the addition of the 'multiprocessing'
>> module to Python 2.6? I ask because I haven't yet used it in anger,
>> and am not sure what problems have been found in it.
>
> I have found no problems with it - I've recently integrated it with my
> event/component framework (1). In my library I use Process, Pipe
> and Value.
>
> cheers
> James
>

Awesome James, I'll be adding this to both the multiprocessing talk,
and the distributed talk. Let me know if you have any issues.

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: Addition of multiprocessing ill-advised? (was: Python 3.0.1)

2009-01-28 Thread Jesse Noller
On Tue, Jan 27, 2009 at 10:49 PM, Ben Finney  wrote:
> (Continuing a side topic of a different discussion)
>
> Steve Holden  writes:
>
>> I think that [Python 2.6 was a rushed release]. 2.6 showed it in the
>> inclusion (later recognizable as somewhat ill-advised so late in the
>> day) of multiprocessing […]
>
> What was ill-advised about the addition of the 'multiprocessing'
> module to Python 2.6? I ask because I haven't yet used it in anger,
> and am not sure what problems have been found in it.
>
> --
>  \  "Holy bouncing boiler-plated fits, Batman!" —Robin |
>  `\   |
> _o__)  |
> Ben Finney
> --
> http://mail.python.org/mailman/listinfo/python-list
>

I might write a longer blog post about this later, but I can see
Steve's point of view. The fact is, pyprocessing/multiprocessing was a
late addition to Python 2.6. Personally, I was game to put it into
either 2.7 or 2.6, but I felt inclusion into 2.6 wasn't completely out
of the question - and others agreed with me.

See these mail threads:

http://mail.python.org/pipermail/python-dev/2008-May/079417.html
http://mail.python.org/pipermail/python-dev/2008-June/080011.html

And so on.

All of that being said; the initial conversion and merging of the code
into core exposed a lot of bugs I and others didn't realize were there
in the first place. I take full responsibility for that - however some
of those bugs were in python-core itself (deadlock after fork
anyone?).

So, the road to inclusion was a bit rougher than I initially thought -
I relied heavily on the skills of people who had more experience in
the core than I did, and it was disruptive to the release schedule of
python 2.6 due to both the bugs and instability.

I however; disagree that this was ultimately a bad decision, or that
it was some how indicative of a poorly managed or rushed 2.6 release.
All releases have bugs, and towards the end of the 2.6 cycle,
multiprocessing *was not* the release blocker.

After 2.6 went out, I had a small wave of bugs filed against
multiprocessing that I've been working through bit by bit (I still
need to work on BSD/Solaris issues) and some of the bugs have exposed
issues I simply wish weren't there but I think this is true of any
package, especially one as complex as multiprocessing is.

I know of plenty of people using the package now, and I know of
several groups switching to 2.6 as quickly as possible due to its new
features, bug fixes/etc. Multiprocessing as a package is not bug free
- I'm the first to admit that - however it is useful, and being used
and frankly, I maintain that it is just one step in a larger project
to bring additional concurrency and distributed "stuff" into
python-core over time.

So yes, I see Steve's point - multiprocessing *was* disruptive, and its
inclusion late in the game siphoned off resources that could have been
used elsewhere. Again, I'll take the responsibility for soiling the
pool this way. I do however think, that python 2.6 is overall a
*fantastic* release both feature wise, quality wise and is quite
useful for people who want to "get things done" (tm).

Now I'm going to go back to fixing bugs.

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: Addition of multiprocessing ill-advised?

2009-01-28 Thread Jesse Noller
On Wed, Jan 28, 2009 at 8:32 AM, Steve Holden  wrote:
...snip...
>> I have found no problems with it - I've recently integrated it with my
>> event/component framework (1). In my library I use Process, Pipe
>> and Value.
>>
> It will be a great library in time, but the code was immature and
> insufficiently tested before the 2.6 release. The decision to include it
> late in the release cycle
>
> There are 32 outstanding issues on multiprocessing, two of them critical
> and four high. Many of them are platform-specific, so if they don't hit
> your platform you won't mind.
>

See my reply to the thread I just sent out; I don't disagree with you.
However, there are not 32 open bugs:

http://bugs.python.org/issue?%40search_text=&title=&%40columns=title&id=&%40columns=id&creation=&creator=&activity=&%40columns=activity&%40sort=activity&actor=&nosy=&type=&components=&versions=&dependencies=&assignee=jnoller&keywords=&priority=&%40group=priority&status=1&%40columns=status&resolution=&%40pagesize=50&%40startwith=0&%40queryname=&%40old-queryname=&%40action=search

Man, I hope that url comes through. As of this writing, there are 18
open bugs assigned to me for resolution. Of those, 2 are critical, but
should not be - one is an enhancement I am on the fence about, and one
is for platforms which have issues with semaphore support.

However, I agree that there are bugs, and there will continue to be
bugs. I think the quality has greatly increased since the port to core
started, and we did find bugs in core as well. I also think it is more
than ready for use now.

> Jesse did a great job in the time available. It would have been more
> sensible to wait until 2.7 to include it in the library, IMHO, or make
> the decision to include it in 2.6 in a more timely fashion. The one
> advantage of the inclusion is that the issues have been raised now, so
> as long as maintenance continues the next round will be better.

Again, I don't disagree. Alas, the PEP resolution and proposal were
greatly delayed due to, well, someone paying me money to do something
else ;) - that being said, yes, the decision was made late in the
game, and was disruptive.

Maintenance is going to continue as long as I continue to have an
internet connection. Heck, there are enhancements to it I really want
to add, but I swore off those until the bugs are closed/resolved and
my pycon talks are in the can.

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: Terminating a Python program that uses multi-process, multi-threading

2009-01-29 Thread Jesse Noller
On Wed, Jan 28, 2009 at 3:46 PM, akineko  wrote:
> Hello Python experts,
>
> I have a program that uses three processes (invoked by
> multiprocessing) and several threads.
> The program is terminated when ^C is typed (KeyboardInterrupt).
> The main process takes the KeyboardInterrupt Exception and it orderly
> shutdown the program.
>
> It works fine in normal situation.
>
> However, KeyboardInterrupt is not accepted when, for example, the
> program is stuck somewhere due to error in network. I understand that
> the KeyboardInterrupt won't be processed until the program moves out
> from an atomic step in a Python program.
>
> Probably this is partly my fault as I use some blocking statements
> (without timeout) as they should not block the program in normal
> situation.
>
> As the program freezes up, I cannot do anything except killing three
> processes using kill command.
> So, I cannot tell which statement is actually blocking when this
> occurs (and there are many suspects).
>
> Is there any good way to deal with this kind of problem?
> Killing three processes when it hangs is not a fun thing to do.
>
> Any suggestions will be greatly appreciated.
>
> Best regards,
> Aki Niimura
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>

See also:
http://jessenoller.com/2009/01/08/multiprocessingpool-and-keyboardinterrupt/
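
The usual mitigation is to give every blocking call a timeout, so the
interpreter gets a chance to deliver the KeyboardInterrupt between
attempts. A sketch with a queue-based worker (the names are
illustrative):

import Queue  # the exception type is shared by multiprocessing queues

def drain(work_queue):
    while True:
        try:
            item = work_queue.get(timeout=1)
        except Queue.Empty:
            continue  # ^C can be processed here
        if item is None:
            break  # sentinel requests an orderly shutdown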

jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: Where to host a (Python) project?

2009-01-31 Thread Jesse Noller
On Sat, Jan 31, 2009 at 4:30 PM, andrew cooke  wrote:
> On Jan 31, 4:50 pm, "Giampaolo Rodola'"  wrote:
>> Google Code.
>>
>> --- Giampaolohttp://code.google.com/p/pyftpdlib
>
> thanks - that's a nice example.  i'm a bit concerned about the whole
> google corporation thing, but reading through the ideological check-
> sheet at savannah convinced me i wasn't worthy and your project looks
> good (i admit i haven't seen that many google projects, but they all
> seemed abandoned/bare/hostile).  so i'll follow the majority here and
> give google code a go.
>
> cheers,
> andrew
> --
> http://mail.python.org/mailman/listinfo/python-list
>

Bitbucket: http://bitbucket.org/ (I use this, moved from google code)
Github: http://github.com/
Launchpad: https://launchpad.net/
FreeHG: http://freehg.org/

Google is nice due to the groups/mailing list options, but I find I
don't miss mailing lists all that much after being subscribed to so
many.

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: PyYaml in standard library?

2009-02-18 Thread Jesse Noller
On Wed, Feb 18, 2009 at 4:11 AM, Brendan Miller  wrote:
> I'm just curious whether PyYaml is likely to end up in the standard
> library at some point?
> --
> http://mail.python.org/mailman/listinfo/python-list
>

Personally, loving the hell out of yaml (and pyyaml) as much as I do,
I'd love to see this; however interested people should pass the idea
to python-ideas, and write a PEP. It would need a dedicated maintainer
as well as the other things stdlib modules require.

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing module and os.close(sys.stdin.fileno())

2009-02-18 Thread Jesse Noller
On Tue, Feb 17, 2009 at 10:34 PM, Graham Dumpleton
 wrote:
> Why is the multiprocessing module, i.e., multiprocessing/process.py, in
> _bootstrap() doing:
>
>  os.close(sys.stdin.fileno())
>
> rather than:
>
>  sys.stdin.close()
>
> Technically it is feasible that stdin could have been replaced with
> something other than a file object, where the replacement doesn't have
> a fileno() method.
>
> In that sort of situation an AttributeError would be raised, which
> isn't going to be caught as either OSError or ValueError, which is all
> the code watches out for.
>
> Graham
> --
> http://mail.python.org/mailman/listinfo/python-list
>

I don't know why it was implemented that way. File an issue on the
tracker and assign it to me (jnoller) please.
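
For anyone following along, a tiny illustration of the failure mode
Graham describes (StringIO stands in for any file-like replacement):

import sys
from StringIO import StringIO

sys.stdin = StringIO()  # file-like, but has no fileno() method
sys.stdin.fileno()      # AttributeError - not OSError or ValueError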
--
http://mail.python.org/mailman/listinfo/python-list


Re: Building Python 2.6 on AIX 5.2

2008-10-06 Thread Jesse Noller
Looks like AIX is missing sem_timedwait - see:
http://bugs.python.org/issue3876

Please add your error to the bug report just so I can track it.

-jesse

On Mon, Oct 6, 2008 at 4:16 AM, brasse <[EMAIL PROTECTED]> wrote:
> Hello!
>
> I am having some trouble building Python 2.6 on AIX. The steps I have
> taken are:
>
> export PATH=/usr/bin/:/usr/vacpp/bin/
> ./configure --with-gcc=xlc_r --with-cxx=xlC_r --disable-ipv6
> make
>
> This is the error message I'm seeing:
> ./Modules/ld_so_aix xlc_r -bI:Modules/python.exp build/
> temp.aix-5.2-2.6/home/mabr/Python-2.6/Modules/_multiprocessing/
> multiprocessing.o build/temp.aix-5.2-2.6/home/mabr/Python-2.6/Modules/
> _multiprocessing/socket_connection.o build/temp.aix-5.2-2.6/home/mabr/
> Python-2.6/Modules/_multiprocessing/semaphore.o -L/usr/local/lib -o
> build/lib.aix-5.2-2.6/_multiprocessing.so
> ld: 0711-317 ERROR: Undefined symbol: .sem_timedwait
> ld: 0711-317 ERROR: Undefined symbol: .CMSG_SPACE
> ld: 0711-317 ERROR: Undefined symbol: .CMSG_LEN
> ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more
> information.
> *** WARNING: renaming "_multiprocessing" since importing it failed: No
> such file or directory
> error: No such file or directory
> make: The error code from the last command is 1.
>
> Have someone on this list had similar problems? Am I missing some
> libraries? The configure script runs without errors, I would have
> expected some kind of error there if I was missing something.
>
> Regards,
> Mattias
> --
> http://mail.python.org/mailman/listinfo/python-list
>
--
http://mail.python.org/mailman/listinfo/python-list


Re: Building Python 2.6 on AIX 5.2

2008-10-07 Thread Jesse Noller
Thanks for posting this to the tracker mattias - as soon as I can
steal some time, I'll dig into it and see if I can get it teed up for
the patch release.

On Tue, Oct 7, 2008 at 6:24 AM, brasse <[EMAIL PROTECTED]> wrote:
> On Oct 6, 10:16 am, brasse <[EMAIL PROTECTED]> wrote:
>> Hello!
>>
>> I am having some trouble building Python 2.6 on AIX. The steps I have
>> taken are:
>>
>> export PATH=/usr/bin/:/usr/vacpp/bin/
>> ./configure --with-gcc=xlc_r --with-cxx=xlC_r --disable-ipv6
>> make
>>
>> This is the error message I'm seeing:
>> ./Modules/ld_so_aix xlc_r -bI:Modules/python.exp build/
>> temp.aix-5.2-2.6/home/mabr/Python-2.6/Modules/_multiprocessing/
>> multiprocessing.o build/temp.aix-5.2-2.6/home/mabr/Python-2.6/Modules/
>> _multiprocessing/socket_connection.o build/temp.aix-5.2-2.6/home/mabr/
>> Python-2.6/Modules/_multiprocessing/semaphore.o -L/usr/local/lib -o
>> build/lib.aix-5.2-2.6/_multiprocessing.so
>> ld: 0711-317 ERROR: Undefined symbol: .sem_timedwait
>> ld: 0711-317 ERROR: Undefined symbol: .CMSG_SPACE
>> ld: 0711-317 ERROR: Undefined symbol: .CMSG_LEN
>> ld: 0711-345 Use the -bloadmap or -bnoquiet option to obtain more
>> information.
>> *** WARNING: renaming "_multiprocessing" since importing it failed: No
>> such file or directory
>> error: No such file or directory
>> make: The error code from the last command is 1.
>>
>> Have someone on this list had similar problems? Am I missing some
>> libraries? The configure script runs without errors, I would have
>> expected some kind of error there if I was missing something.
>>
>> Regards,
>> Mattias
>
> OK. I have made some changes in the source that let me build on AIX
> 5.2. I thought I could post the patch here and perhaps someone can
> tell me if I am on the wrong track or if this is an OK fix on AIX.
>
> Basically I have changed setup.py to define HAVE_SEM_TIMEDWAIT=0 on
> AIX. I have also defined CMSG_SPACE and CMSG_LEN in terms of
> _CMSG_ALIGN (see http://homepage.mac.com/cjgibbons/rubyonaixhowto/x72.html)
> in multiprocessing.c. (I realise that this breaks some other platforms,
> but right now I just need to build on AIX).
>
> Here is a patch:
>
> diff -Naur Python-2.6/Modules/_multiprocessing/multiprocessing.c Python-2.6-clean-patch/Modules/_multiprocessing/multiprocessing.c
> --- Python-2.6/Modules/_multiprocessing/multiprocessing.c  2008-06-14 00:38:33.0 +0200
> +++ Python-2.6-clean-patch/Modules/_multiprocessing/multiprocessing.c  2008-10-07 12:23:55.0 +0200
> @@ -8,6 +8,13 @@
>
>  #include "multiprocessing.h"
>
> +#ifndef CMSG_SPACE
> +#define CMSG_SPACE(len) (_CMSG_ALIGN(sizeof(struct cmsghdr)) + _CMSG_ALIGN(len))
> +#endif
> +#ifndef CMSG_LEN
> +#define CMSG_LEN(len) (_CMSG_ALIGN(sizeof(struct cmsghdr)) + (len))
> +#endif
> +
>  PyObject *create_win32_namespace(void);
>
>  PyObject *pickle_dumps, *pickle_loads, *pickle_protocol;
> diff -Naur Python-2.6/setup.py Python-2.6-clean-patch/setup.py
> --- Python-2.6/setup.py 2008-09-30 02:15:45.0 +0200
> +++ Python-2.6-clean-patch/setup.py 2008-10-07 12:23:34.0 +0200
> @@ -1277,6 +1277,14 @@
>  )
>  libraries = []
>
> +elif platform.startswith('aix'):
> +    macros = dict(
> +        HAVE_SEM_OPEN=1,
> +        HAVE_SEM_TIMEDWAIT=0,
> +        HAVE_FD_TRANSFER=1
> +        )
> +    libraries = ['rt']
> +
>  else:   # Linux and other unices
>      macros = dict(
>          HAVE_SEM_OPEN=1,
>
> Perhaps this should go to some other list?
>
> :.:: mattias
> --
> http://mail.python.org/mailman/listinfo/python-list
>
--
http://mail.python.org/mailman/listinfo/python-list


Re: Using multiprocessing

2008-10-10 Thread Jesse Noller
On Fri, Oct 10, 2008 at 4:32 PM, nhwarriors <[EMAIL PROTECTED]> wrote:
> I am attempting to use the (new in 2.6) multiprocessing package to
> process 2 items in a large queue of items simultaneously. I'd like to
> be able to print to the screen the results of each item before
> starting the next one. I'm having trouble with this so far.
>
> Here is some (useless) example code that shows how far I've gotten by
> reading the documentation:
>
> from multiprocessing import Process, Queue, current_process
>
> def main():
>     facs = []
>     for i in range(5,50005):
>         facs.append(i)
>
>     tasks = [(fac, (i,)) for i in facs]
>     task_queue = Queue()
>     done_queue = Queue()
>
>     for task in tasks:
>         task_queue.put(task)
>
>     for i in range(2):
>         Process(target = worker, args = (task_queue, done_queue)).start()
>
>     for i in range(len(tasks)):
>         print done_queue.get()
>
>     for i in range(2):
>         task_queue.put('STOP')
>
> def worker(input, output):
>     for func, args in iter(input.get, 'STOP'):
>         result = func(*args)
>         output.put(result)
>
> def fac(n):
>     f = n
>     for i in range(n-1,1,-1):
>         f *= i
>     return 'fac('+str(n)+') done on '+current_process().name
>
> if __name__ == '__main__':
>     main()
>
> This works great, except that nothing can be output until everything
> in the queue is finished. I'd like to write out the result of fac(n)
> for each item in the queue as it happens.
>
> I'm probably approaching the problem all wrong - can anyone set me on
> the right track?

I'm not quite following: If you run this, the results are printed by
the main thread, unordered, as they are put on the results queue -
this works as intended (and the example this is based on works the
same way) .

For example:
result put Process-2
result put Process-1
fac(5) done on Process-1
result put Process-2
fac(50001) done on Process-2
result put Process-1
fac(50003) done on Process-1
result put Process-2
fac(50002) done on Process-2
fac(50004) done on Process-2

You can see this if you expand the range:

result put Process-1
result put Process-2
result put Process-2
fac(50001) done on Process-2
result put Process-1
fac(5) done on Process-1
fac(50003) done on Process-2
result put Process-2
result put Process-1
fac(50004) done on Process-2
result put Process-2
fac(50006) done on Process-2
result put Process-1
fac(50002) done on Process-1
fac(50005) done on Process-1
fac(50007) done on Process-1
result put Process-2
result put Process-1
fac(50008) done on Process-2
result put Process-2
result put Process-1
fac(50010) done on Process-2
result put Process-2
result put Process-1
fac(50009) done on Process-1
fac(50011) done on Process-1
fac(50013) done on Process-1
result put Process-2
fac(50012) done on Process-2
fac(50014) done on Process-2

One trick I use when I have a results queue to manage: I spawn an
additional process to read off of the results queue and deal with the
results. This is mainly so I can process the results outside of the
main thread, as they appear on the results queue.
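
A sketch of that trick against the example above (passing the expected
count is just one simple way to know when to stop):

from multiprocessing import Process

def printer(done_queue, n_results):
    # runs in its own process; prints each result the moment it lands
    for i in range(n_results):
        print done_queue.get()

# in main(), instead of draining done_queue in the main thread:
#   reader = Process(target=printer, args=(done_queue, len(tasks)))
#   reader.start()
#   ... enqueue 'STOP' sentinels for the workers, then reader.join()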

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.6, multiprocessing module and BSD

2008-10-22 Thread Jesse Noller
On Tue, Oct 21, 2008 at 6:45 PM,  <[EMAIL PROTECTED]> wrote:
> It seems that the multiprocessing module in 2.6 is broken for *BSD;
> I've seen issue 3770 regarding this. I'm curious if there are more
> details on this issue since the posts in 3770 were a bit unclear. For
> example, one post claimed that the problem was that sem_open isn't
> implemented in *BSD, but it is available on FreeBSD 7 (I checked). I'd
> be willing to help get this working if someone could point me in the
> right direction.
> --
> http://mail.python.org/mailman/listinfo/python-list
>

The BSD issue was raised late in the cycle for 2.6. The problem is
that FreeBSD's POSIX semaphore support is "very experimental", as Philip
points out - and OpenBSD doesn't have it at all.

Due to the lateness of the issue and a finite amount of time I have to
work on things, I chose to disable support for this on the various
*BSDs until I can cook up a stable patch or have one provided by
someone more familiar with the inner workings of Free-BSD. OpenBSD
support is a non-starter.

Ideally, I would like to get this fixed and put on the 2.6 maint
branch ASAP, but I haven't had a chance to circle back to it.

Also note Nick's comment in that bug: "Unfortunately, our OpenBSD and
FreeBSD buildbots are so unreliable that they don't get much attention
when they go red"

Stable reliable buildbots and a few more volunteers more familiar with
BSDs might be a great and welcome addition to python-dev.

As for getting this working - I would love a patch. You are going to
want to start with python-trunk and look in setup.py. You are going to
want to adjust the flags the package uses:

elif platform in ('freebsd5', 'freebsd6', 'freebsd7', 'freebsd8'):
    # FreeBSD's P1003.1b semaphore support is very experimental
    # and has many known problems. (as of June 2008)
    macros = dict(  # FreeBSD
        HAVE_SEM_OPEN=0,
        HAVE_SEM_TIMEDWAIT=0,
        HAVE_FD_TRANSFER=1,
        )
    libraries = []

You will also need to look at: Lib/multiprocessing/synchronize.py to
disable the import error - Modules/_multiprocessing/multiprocessing.h
will need to be updated for the proper ifdefs for the bsd(s) as well.
Finally, the core of the semaphore usage is in
Modules/_multiprocessing/semaphore.c

I apologize we/I could not get this in for 2.6

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.6, multiprocessing module and BSD

2008-10-22 Thread Jesse Noller
On Wed, Oct 22, 2008 at 10:31 AM,  <[EMAIL PROTECTED]> wrote:
> On Oct 22, 8:11 am, "Jesse Noller" <[EMAIL PROTECTED]> wrote:
>> On Tue, Oct 21, 2008 at 6:45 PM,  <[EMAIL PROTECTED]> wrote:
>> > It seems that the multiprocessing module in 2.6 is broken for *BSD;
>> > I've seen issue 3770 regarding this. I'm curious if there are more
>> > details on this issue since the posts in 3770 were a bit unclear. For
>> > example, one post claimed that the problem was that sem_open isn't
>> > implemented in *BSD, but it is available on FreeBSD 7 (I checked). I'd
>> > be willing to help get this working if someone could point me in the
>> > right direction.
>> > --
>> >http://mail.python.org/mailman/listinfo/python-list
>>
>> The BSD issue was raised late in the cycle for 2.6. The problem is
>> that FBSD's support is "very experimental" as Phillip points out - and
>> OpenBSD doesn't even have them.
>>
>> Due to the lateness of the issue and a finite amount of time I have to
>> work on things, I chose to disable support for this on the various
>> *BSDs until I can cook up a stable patch or have one provided by
>> someone more familiar with the inner workings of Free-BSD. OpenBSD
>> support is a non-starter.
>>
>> Ideally, I would like to get this fixed and put on the 2.6 maint
>> branch ASAP, but I haven't had a chance to circle back to it.
>>
>> Also note Nick's comment in that bug: "Unfortunately, our OpenBSD and
>> FreeBSD buildbots are so unreliable that they don't get much attention
>> when they go red"
>>
>> Stable reliable buildbots and a few more volunteers more familiar with
>> BSDs might be a great and welcome addition to python-dev.
>>
>> As for getting this working - I would love a patch. You are going to
>> want to start with python-trunk and look in setup.py. You are going to
>> want to adjust the flags the package uses:
>>
>> elif platform in ('freebsd5', 'freebsd6', 'freebsd7', 'freebsd8'):
>>     # FreeBSD's P1003.1b semaphore support is very experimental
>>     # and has many known problems. (as of June 2008)
>>     macros = dict(  # FreeBSD
>>         HAVE_SEM_OPEN=0,
>>         HAVE_SEM_TIMEDWAIT=0,
>>         HAVE_FD_TRANSFER=1,
>>         )
>>     libraries = []
>>
>> You will also need to look at: Lib/multiprocessing/synchronize.py to
>> disable the import error - Modules/_multiprocessing/multiprocessing.h
>> will need to be updated for the proper ifdefs for the bsd(s) as well.
>> Finally, the core of the semaphore usage is in
>> Modules/_multiprocessing/semaphore.c
>>
>> I apologize we/I could not get this in for 2.6
>>
>> -jesse
>
> This is exactly the sort of response I was hoping for. Thanks for the
> additional background on the problem. I'll take a look at the code and
> see if I can figure out a patch. I'll also read up on the buildbot
> issue ( I think I saw a link about that)...I might have a stable co-
> located FreeBSD box that could be used.
> -Alan
> --
> http://mail.python.org/mailman/listinfo/python-list
>

I really appreciate the additional set of eyes on this, so thanks Alan!
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.6, multiprocessing module and BSD

2008-10-22 Thread Jesse Noller
On Wed, Oct 22, 2008 at 11:06 AM, Philip Semanchuk <[EMAIL PROTECTED]> wrote:
>> The BSD issue was raised late in the cycle for 2.6. The problem is
>> that FBSD's support is "very experimental" as Phillip points out - and
>> OpenBSD doesn't even have them.
>>
>> Due to the lateness of the issue and a finite amount of time I have to
>> work on things, I chose to disable support for this on the various
>> *BSDs until I can cook up a stable patch or have one provided by
>> someone more familiar with the inner workings of Free-BSD. OpenBSD
>> support is a non-starter.
>
> Hi Jesse,
> I wasn't aware of the multiprocessing module. It looks slick! Well done.
>

The credit goes to R. Oudkerk, the original author of the pyprocessing
library - I'm simply a rabid user who managed to wrangle it into
Python-Core. See: http://www.python.org/dev/peps/pep-0371/

> I don't know if you clicked on the link I gave for my posix_ipc module, but
> it looks like we're duplicating effort to some degree. My module makes POSIX
> semaphore & shared memory primitives available to Python programs.
> Obviously, what you've done is much more sophisticated.
>

I actually saw your stuff cross the 'tubes - it looks darned nice as a
lower-level interface. The MP package is obviously meant to be
much more high level (and "thread like"); MP goes out of its way to
hide the gritty internals of the semaphore management/etc - posix_ipc
is much more low level than that.

> One oversight I noticed the multiprocessing module docs is that a
> semaphore's acquire() method shouldn't have a timeout on OS X as
> sem_timedwait() isn't supported on that platform. (You note OS X's lack of
> support for sem_getvalue() elsewhere.)

Please file a ticket or update http://bugs.python.org/issue4012 so I
don't lose it; my memory is increasingly lossy. Good catch.

>
> A question - how do you handle the difference in error messages on various
> platforms? For instance, sem_trywait() raises error 35, "Resource
> temporarily unavailable" under OS X but error 11 under Ubuntu. Right now I'm
> just passing these up to the (Python) caller as OSErrors. This makes it
> really hard for the Python programmer to write cross-platform code.
>

If you look at the code, we're pretty much raising OSError - it's
possible we could enhance this in later versions, but given MP is
supposed to be a cross-platform as possible and protect the user from
the seedy underbelly of semaphores/pipes/etc - when an OSError does
occur, it's generally a bug in our code, not the users.

> The only solution I can think of (which I haven't coded yet) is to compile &
> run a series of small C programs during setup.py that test things like
> sem_trywait() to see what errors occur, and provide those constants to my
> main .c module so that it can detect those errors exactly and wrap them into
> a specific, custom error for the Python caller.
>
> Any thoughts on this?
>

That's actually (while feeling hacky) a possibly sensible idea; the
problem is that you'd need to maintain documentation to tell users
the exceptions for their platform.
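
For the EAGAIN case specifically, the errno module already gives you a
portable name for those numbers (the wrapper below is a hypothetical
stand-in for a call that fails when the semaphore is busy):

import errno

def sem_trywait_wrapper():
    # hypothetical: fails the way sem_trywait() does on a busy semaphore
    raise OSError(errno.EAGAIN, "Resource temporarily unavailable")

try:
    sem_trywait_wrapper()
except OSError, e:
    if e.errno == errno.EAGAIN:  # 11 on Linux, 35 on OS X - same name
        pass  # busy; the caller can retry
    else:
        raise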

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-22 Thread Jesse Noller
On Wed, Oct 22, 2008 at 12:32 PM, Andy <[EMAIL PROTECTED]> wrote:
> And, yes, I'm aware of the multiprocessing module added in 2.6, but
> that stuff isn't lightweight and isn't suitable at all for many
> environments (including ours).  The bottom line is that if you want to
> perform independent processing (in python) on different threads, using
> the machine's multiple cores to the fullest, then you're out of luck
> under python 2.

So, as the guy-on-the-hook for multiprocessing, I'd like to know what
you might suggest for it to make it more apt for your - and other -
environments.

Additionally, have you looked at:
https://launchpad.net/python-safethread
http://code.google.com/p/python-safethread/w/list
(By Adam Olsen)

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-22 Thread Jesse Noller
On Wed, Oct 22, 2008 at 5:34 PM, Terry Reedy <[EMAIL PROTECTED]> wrote:
> The *current* developers seem to be more interested in exploiting multiple
> processors with multiprocessing.  Note that Google choose that route for
> Chrome (as I understood their comic introduction). 2.6 and 3.0 come with a
> new multiprocessing module that mimics the threading module api fairly
> closely.  It is now being backported to run with 2.5 and 2.4.

That's not exactly correct. Multiprocessing was added to 2.6 and 3.0
as an *additional* method for parallel/concurrent programming that
allows you to use multiple cores - however, as I noted in the PEP:

"In the future, the package might not be as relevant should the
CPython interpreter enable "true" threading, however for some
applications, forking an OS process may sometimes be more
desirable than using lightweight threads, especially on those
platforms where process creation is fast and optimized."

Multiprocessing is not a replacement for a "free threading" future
(ergo my mentioning Adam Olsen's work) - it is a tool in the
"batteries included" box. I don't want my cheerleading and driving of
this to somehow implicate that the rest of Python-Dev thinks this is
the "silver bullet" or final answer in concurrency.

However, a free-threaded python has a lot of implications, and if we
were to do it, it requires we not only "drop" the GIL - it also
requires we consider the ramifications of enabling true threading ala
Java et al - just having "true threads" lying around is great if
you've spent a ton of time learning locking, avoiding shared data/etc,
stepping through and cursing poor debugger support for multiple
threads, etc.

This is why I've been a fan of Adam's approach - enabling free
threading via GIL removal is actually secondary to the project's
stated goal: Enable Safe Threading.

In any case, I've jumped the rails - let's just say there's room in
python for multiprocessing, threading and possibly a concurrent
package ala java.util.concurrent - but it really does have to be
thought out and done right.

Speaking of which: If you wanted "real" threads, you could use a
combination of JCC (http://pypi.python.org/pypi/JCC/) and Jython. :)

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Jesse Noller
On Fri, Oct 24, 2008 at 10:40 AM, Andy O'Meara <[EMAIL PROTECTED]> wrote:
>> > 2) Barriers to "free threading".  As Jesse describes, this is simply
>> > just the GIL being in place, but of course it's there for a reason.
>> > It's there because (1) doesn't hold and there was never any specs/
>> > guidance put forward about what should and shouldn't be done in multi-
>> > threaded apps
>>
>> No, it's there because it's necessary for acceptable performance
>> when multiple threads are running in one interpreter. Independent
>> interpreters wouldn't mean the absence of a GIL; it would only
>> mean each interpreter having its own GIL.
>>
>
> I see what you're saying, but let's note that what you're talking
> about at this point is an interpreter containing protection from the
> client level violating (supposed) direction put forth in python
> multithreaded guidelines.  Glenn Linderman's post really gets at
> what's at hand here.  It's really important to consider that it's not
> a given that python (or any framework) has to be designed against
> hazardous use.  Again, I refer you to the diagrams and guidelines in
> the QuickTime API:
>
> http://developer.apple.com/technotes/tn/tn2125.html
>
> They tell you point-blank what you can and can't do, and it's that's
> simple.  Their engineers can then simply create the implementation
> around those specs and not weigh any of the implementation down with
> sync mechanisms.  I'm in the camp that simplicity and convention wins
> the day when it comes to an API.  It's safe to say that software
> engineers expect and assume that a thread that doesn't have contact
> with other threads (except for explicit, controlled message/object
> passing) will run unhindered and safely, so I raise an eyebrow at the
> GIL (or any internal "helper" sync stuff) holding up an thread's
> performance when the app is designed to not need lower-level global
> locks.
>
> Anyway, let's talk about solutions.  My company looking to support
> python dev community endeavor that allows the following:
>
> - an app makes N worker threads (using the OS)
>
> - each worker thread makes its own interpreter, pops scripts off a
> work queue, and manages exporting (and then importing) result data to
> other parts of the app.  Generally, we're talking about CPU-bound work
> here.
>
> - each interpreter has the essentials (e.g. math support, string
> support, re support, and so on -- I realize this is open-ended, but
> work with me here).
>
> Let's guesstimate about what kind of work we're talking about here and
> if this is even in the realm of possibility.  If we find that it *is*
> possible, let's figure out what level of work we're talking about.
> >From there, I can get serious about writing up a PEP/spec, paid
> support, and so on.

Point of order! Just for my own sanity if anything :) I think some
minor clarifications are in order.

What are "threads" within Python:

Python has built in support for POSIX light weight threads. This is
what most people are talking about when they see, hear and say
"threads" - they mean Posix Pthreads
(http://en.wikipedia.org/wiki/POSIX_Threads) this is not what you
(Adam) seem to be asking for. PThreads are attractive due to the fact
they exist within a single interpreter, can share memory all "willy
nilly", etc.

Python does in fact, use OS-Level pthreads when you request multiple threads.

The Global Interpreter Lock is fundamentally designed to make the
interpreter easier to maintain and safer: Developers do not need to
worry about other code stepping on their namespace. This makes things
thread-safe, inasmuch as having multiple PThreads within the same
interpreter space modifying global state and variables at once is,
well, bad. A c-level module, on the other hand, can sidestep/release
the GIL at will, and go on its merry way and process away.

POSIX Threads/pthreads/threads as we get from Java, allow unsafe
programming styles. These programming styles are of the "shared
everything deadlock lol" kind. The GIL *partially* protects against
some of the pitfalls. You do not seem to be asking for pthreads :)

http://www.python.org/doc/faq/library/#can-t-we-get-rid-of-the-global-interpreter-lock
http://en.wikipedia.org/wiki/Multi-threading

However, then there are processes.

The difference between threads and processes is that they do *not
share memory* but they can share state via shared queues/pipes/message
passing. What you seem to be asking for is the ability to
completely fork independent Python interpreters, with their own
namespace and coordina

Re: 2.6, 3.0, and truly independent intepreters

2008-10-24 Thread Jesse Noller
On Fri, Oct 24, 2008 at 12:30 PM, Jesse Noller <[EMAIL PROTECTED]> wrote:
> On Fri, Oct 24, 2008 at 10:40 AM, Andy O'Meara <[EMAIL PROTECTED]> wrote:
>>> > 2) Barriers to "free threading".  As Jesse describes, this is simply
>>> > just the GIL being in place, but of course it's there for a reason.
>>> > It's there because (1) doesn't hold and there was never any specs/
>>> > guidance put forward about what should and shouldn't be done in multi-
>>> > threaded apps
>>>
>>> No, it's there because it's necessary for acceptable performance
>>> when multiple threads are running in one interpreter. Independent
>>> interpreters wouldn't mean the absence of a GIL; it would only
>>> mean each interpreter having its own GIL.
>>>
>>
>> I see what you're saying, but let's note that what you're talking
>> about at this point is an interpreter containing protection from the
>> client level violating (supposed) direction put forth in python
>> multithreaded guidelines.  Glenn Linderman's post really gets at
>> what's at hand here.  It's really important to consider that it's not
>> a given that python (or any framework) has to be designed against
>> hazardous use.  Again, I refer you to the diagrams and guidelines in
>> the QuickTime API:
>>
>> http://developer.apple.com/technotes/tn/tn2125.html
>>
>> They tell you point-blank what you can and can't do, and it's that's
>> simple.  Their engineers can then simply create the implementation
>> around those specs and not weigh any of the implementation down with
>> sync mechanisms.  I'm in the camp that simplicity and convention wins
>> the day when it comes to an API.  It's safe to say that software
>> engineers expect and assume that a thread that doesn't have contact
>> with other threads (except for explicit, controlled message/object
>> passing) will run unhindered and safely, so I raise an eyebrow at the
>> GIL (or any internal "helper" sync stuff) holding up an thread's
>> performance when the app is designed to not need lower-level global
>> locks.
>>
>> Anyway, let's talk about solutions.  My company looking to support
>> python dev community endeavor that allows the following:
>>
>> - an app makes N worker threads (using the OS)
>>
>> - each worker thread makes its own interpreter, pops scripts off a
>> work queue, and manages exporting (and then importing) result data to
>> other parts of the app.  Generally, we're talking about CPU-bound work
>> here.
>>
>> - each interpreter has the essentials (e.g. math support, string
>> support, re support, and so on -- I realize this is open-ended, but
>> work with me here).
>>
>> Let's guesstimate about what kind of work we're talking about here and
>> if this is even in the realm of possibility.  If we find that it *is*
>> possible, let's figure out what level of work we're talking about.
>> >From there, I can get serious about writing up a PEP/spec, paid
>> support, and so on.
>
> Point of order! Just for my own sanity if anything :) I think some
> minor clarifications are in order.
>
> What are "threads" within Python:
>
> Python has built in support for POSIX light weight threads. This is
> what most people are talking about when they see, hear and say
> "threads" - they mean Posix Pthreads
> (http://en.wikipedia.org/wiki/POSIX_Threads) this is not what you
> (Adam) seem to be asking for. PThreads are attractive due to the fact
> they exist within a single interpreter, can share memory all "willy
> nilly", etc.
>
> Python does in fact, use OS-Level pthreads when you request multiple threads.
>
> The Global Interpreter Lock is fundamentally designed to make the
> interpreter easier to maintain and safer: Developers do not need to
> worry about other code stepping on their namespace. This makes things
> thread-safe, inasmuch as having multiple PThreads within the same
> interpreter space modifying global state and variables at once is,
> well, bad. A c-level module, on the other hand, can sidestep/release
> the GIL at will, and go on its merry way and process away.
>
> POSIX Threads/pthreads/threads as we get from Java, allow unsafe
> programming styles. These programming styles are of the "shared
> everything deadlock lol" kind. The GIL *partially* protects against
> some of the pitfalls. You do not seem to be asking for pthreads :)
>
> h

Re: 2.6, 3.0, and truly independent interpreters

2008-10-24 Thread Jesse Noller
On Fri, Oct 24, 2008 at 3:17 PM, Andy O'Meara <[EMAIL PROTECTED]> wrote:

> I'm a lousy writer sometimes, but I feel bad if you took the time to
> describe threads vs processes.  The only reason I raised IPC with my
> "messaging isn't very attractive" comment was to respond to Glenn
> Linderman's points regarding tradeoffs of shared memory vs no.
>

I actually took the time to bring anyone listening in up to speed, and
to clarify so I could better understand your use case. Don't feel bad,
things in the thread are moving fast and I just wanted to clear it up.

Ideally, we all want to improve the language, and the interpreter.
However, trying to push it towards a particular use case is dangerous
given the idea of "general use".

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent interpreters

2008-10-24 Thread Jesse Noller
On Fri, Oct 24, 2008 at 4:51 PM, Andy O'Meara <[EMAIL PROTECTED]> wrote:

>> In the module multiprocessing environment could you not use shared
>> memory, then, for the large shared data items?
>>
>
> As I understand things, the multiprocessing puts stuff in a child
> process (i.e. a separate address space), so the only way to get stuff
> to/from it is via IPC, which can include a shared/mapped memory region.
> Unfortunately, a shared address region doesn't work when you have
> large and opaque objects (e.g. a rendered CoreVideo movie in the
> QuickTime API or 300 megs of audio data that just went through a
> DSP).  Then you've got the hit of serialization if you've got
> intricate data structures (that would normally need to be
> serialized, such as a hashtable or something).  Also, if I may speak
> for commercial developers out there who are just looking to get the
> job done without new code, it's usually preferable to use just a
> single high-level sync object (for when the job is complete) than to
> start child processes and use IPC.  The former is just WAY less
> code, plain and simple.
>

Are you familiar with the API at all? Multiprocessing was designed to
mimic threading in about every way possible; the only restriction on
shared data is that it must be serializable, but even then you can
override or customize the behavior.

Also, inter process communication is done via pipes. It can also be
done with messages if you want to tweak the manager(s).
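
For anyone skimming later, a minimal sketch of how closely the API
mirrors threading -- my example, not code from the thread; swap
multiprocessing.Process back to threading.Thread and it reads the same:

from multiprocessing import Process, Pipe

def render(conn):
    # whatever is sent here is pickled and shipped over an OS-level pipe
    conn.send("frame rendered")
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=render, args=(child_end,))
    p.start()
    print(parent_end.recv())   # "frame rendered"
    p.join()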

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent interpreters

2008-10-30 Thread Jesse Noller
On Wed, Oct 29, 2008 at 8:05 PM, Glenn Linderman <[EMAIL PROTECTED]> wrote:
> On approximately 10/29/2008 3:45 PM, came the following characters from the
> keyboard of Patrick Stinson:
>>
>> If you are dealing with "lots" of data like in video or sound editing,
>> you would just keep the data in shared memory and send the reference
>> over IPC to the worker process. Otherwise, if you marshal and send you
>> are looking at a temporary doubling of the memory footprint of your
>> app because the data will be copied, and marshaling overhead.
>
> Right.  Sounds, and is, easy, if the data is all directly allocated by the
> application.  But when pieces are allocated by 3rd party libraries, that use
> the C-runtime allocator directly, then it becomes more difficult to keep
> everything in shared memory.
>
> One _could_ replace the C-runtime allocator, I suppose, but that could have
> some adverse effects on other code, that doesn't need its data to be in
> shared memory.  So it is somewhat between a rock and a hard place.
>
> By avoiding shared memory, such problems are sidestepped... until you run
> smack into the GIL.

If you do not have shared memory, you don't need threads, ergo you
don't get penalized by the GIL. Threads are only useful when you need
large in-memory data structures shared and modified by a pool of
workers.

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent interpreters

2008-10-30 Thread Jesse Noller
Please correct me if I am wrong in understanding what you want: You
are making threads in another language (not via the threading API),
embed python in those threads, but you want to be able to share
objects/state between those threads, and independent interpreters. You
want to be able to pass state from one interpreter to another via
shared memory (e.g. pointers/contexts/etc).

Example:

ParentAppFoo makes 10 threads (in C)
Each thread gets an itty bitty python interpreter
ParentAppFoo gets a object(video) to render
Rather than marshal that object, you pass a pointer to the object to
the children
You want to pass that pointer to an existing, or newly created itty
bitty python interpreter for mangling
Itty bitty python interpreter passes the object back to a C module via
a pointer/context

If the above is wrong, I think possibly outlining it in the above form
may help people conceptualize it - I really don't think you're talking
about python-level processes or threads.

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: 2.6, 3.0, and truly independent interpreters

2008-10-30 Thread Jesse Noller
On Thu, Oct 30, 2008 at 1:54 PM, Andy O'Meara <[EMAIL PROTECTED]> wrote:
> On Oct 30, 1:00 pm, "Jesse Noller" <[EMAIL PROTECTED]> wrote:
>
>>
>> Multiprocessing is written in C, so as for the "less agile" - I don't
>> see how it's any less agile than what you've talked about.
>
> Sorry for not being more specific there, but by "less agile" I meant
> that an app's codebase is less agile if python is an absolute
> requirement.  If I was told tomorrow that for some reason we had to
> drop python and go with something else, it's my job to have chosen a
> codebase path/roadmap such that my response back isn't just "well,
> we're screwed then."  Consider modern PC games.  They have huge code
> bases that use DirectX and OpenGL and having a roadmap of flexibility
> is paramount so packages they choose to use are used in a contained
> and hedged fashion.  It's a survival tactic for a company not to
> entrench themselves in a package or technology if they don't have to
> (and that's what I keep trying to raise in the thread--that the python
> dev community should embrace development that makes python a leading
> candidate for lightweight use).  Companies want to build flexible,
> powerful codebases that are married to as few components as
> possible.
>
>>
>> > - Shared memory -- for the reasons listed in my other posts, IPC or a
>> > shared/mapped memory region doesn't work for our situation (and I
>> > venture to say, for many real world situations otherwise you'd see end-
>> > user/common apps use forking more often than threading).
>>
>> I would argue that the reason most people use threads as opposed to
>> processes is simply based on "ease of use and entry" (which is ironic,
>> given how many problems it causes).
>
> No, we're in agreement here -- I was just trying to offer a more
> detailed explanation of "ease of use".  It's "easy" because memory is
> shared and no IPC, serialization, or special allocator code is
> required.  And as we both agree, it's far from "easy" once those
> threads interact with each other.  But again, my goal here is to
> stay on the "embarrassingly easy" parallelization scenarios.
>

That's why when I'm using threads, I stick to Queues. :)
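
Concretely, "stick to Queues" means the workers never touch shared
state directly -- a quick sketch of my own, not code from the thread:

import threading
import Queue   # the module is named "queue" in Python 3

tasks = Queue.Queue()
results = Queue.Queue()

def worker():
    while True:
        item = tasks.get()
        if item is None:          # sentinel: shut this worker down
            break
        results.put(item * item)  # all sharing goes through the queues

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):
    tasks.put(n)
for _ in threads:
    tasks.put(None)
for t in threads:
    t.join()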

>
>>
>> I would argue that most of the people taking part in this discussion
>> are working on "real world" applications - sure, multiprocessing as it
>> exists today, right now - may not support your use case, but it was
>> evaluated to fit *many* use cases.
>
> And as I've mentioned, it's a totally great endeavor to be super proud
> of.  That suite of functionality alone opens some *huge* doors for
> python and I hope folks that use it appreciate how much time and
> thought that undoubtedly had to go into it.  You get total props, for
> sure, and your work is a huge and unique credit to the community.
>

Thanks - I'm just a cheerleader and pusher-into-core, R Oudkerk is the
implementor. He and everyone else who has helped deserve more credit
than me by far.

My main interest, and the reason I brought it up (again) is that I'm
interested in making it better :)

>>
>> Please correct me if I am wrong in understanding what you want: You
>> are making threads in another language (not via the threading API),
>> embed python in those threads, but you want to be able to share
>> objects/state between those threads, and independent interpreters. You
>> want to be able to pass state from one interpreter to another via
>> shared memory (e.g. pointers/contexts/etc).
>>
>> Example:
>>
>> ParentAppFoo makes 10 threads (in C)
>> Each thread gets an itty bitty python interpreter
>> ParentAppFoo gets a object(video) to render
>> Rather than marshal that object, you pass a pointer to the object to
>> the children
>> You want to pass that pointer to an existing, or newly created itty
>> bitty python interpreter for mangling
>> Itty bitty python interpreter passes the object back to a C module via
>> a pointer/context
>>
>> If the above is wrong, I think possibly outlining it in the above form
>> may help people conceptualize it - I really don't think you're talking
>> about python-level processes or threads.
>>
>
> Yeah, you have it right-on there, with the added fact that the C and
> python execution (and data access) are highly intertwined (so getting
> and releasing the GIL would have to be happening all over).  For
> example, consider the dynamics, logic, algorithms, and data
> structures associated with image and video effects and image and
> video recognition/analysis.

okie doke!
--
http://mail.python.org/mailman/listinfo/python-list


Can someone explain this behavior to me?

2009-02-26 Thread Jesse Aldridge
I have one module called foo.py
-
class Foo:
foo = None

def get_foo():
return Foo.foo

if __name__ == "__main__":
import bar
Foo.foo = "foo"
bar.go()
-
And another one called bar.py
-
import foo

def go():
assert foo.get_foo() == "foo"
--
When I run foo.py, the assertion in bar.py fails.  Why?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Can someone explain this behavior to me?

2009-02-27 Thread Jesse Aldridge
Ah, I get it.
Thanks for clearing that up, guys.
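
For anyone who finds this later: running foo.py executes the file as
the module __main__, while bar's "import foo" loads the same file a
second time under the name "foo". That leaves two independent Foo
classes, and only __main__'s copy ever gets Foo.foo = "foo". A minimal
illustration of my own -- save it as foo.py and run "python foo.py":

import sys

class Foo:
    foo = None

if __name__ == "__main__":
    import foo                    # re-imports this very file as "foo"
    Foo.foo = "foo"               # only touches __main__'s copy
    print(sys.modules["__main__"] is sys.modules["foo"])   # False
    print(foo.Foo.foo)            # None -- this is the copy bar sees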
--
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 382: Namespace Packages

2009-04-06 Thread Jesse Noller
On Thu, Apr 2, 2009 at 4:33 PM, M.-A. Lemburg  wrote:
> On 2009-04-02 17:32, Martin v. Löwis wrote:
>> I propose the following PEP for inclusion to Python 3.1.
>
> Thanks for picking this up.
>
> I'd like to extend the proposal to Python 2.7 and later.
>

-1 to adding it to the 2.x series. There was much discussion around
adding features to both 2.x and 3.0, and the consensus seemed to be
*not* to add new features to 2.x, and instead use those new features
as carrots to help lead people into 3.0.

jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] PEP 382: Namespace Packages

2009-04-06 Thread Jesse Noller
On Mon, Apr 6, 2009 at 9:26 AM, Barry Warsaw  wrote:
>
> On Apr 6, 2009, at 9:21 AM, Jesse Noller wrote:
>
>> On Thu, Apr 2, 2009 at 4:33 PM, M.-A. Lemburg  wrote:
>>>
>>> On 2009-04-02 17:32, Martin v. Löwis wrote:
>>>>
>>>> I propose the following PEP for inclusion to Python 3.1.
>>>
>>> Thanks for picking this up.
>>>
>>> I'd like to extend the proposal to Python 2.7 and later.
>>>
>>
>> -1 to adding it to the 2.x series. There was much discussion around
>> adding features to 2.x *and* 3.0, and the consensus seemed to *not*
>> add new features to 2.x and use those new features as carrots to help
>> lead people into 3.0.
>
> Actually, isn't the policy just that nothing can go into 2.7 that isn't
> backported from 3.1?  Whether the actual backport happens or not is up to
> the developer though.  OTOH, we talked about a lot of things and my
> recollection is probably fuzzy.
>
> Barry

That *is* the official policy, but there were discussions around not
backporting any further features from 3.1 into 2.x, thereby providing
more of an upgrade incentive.
--
http://mail.python.org/mailman/listinfo/python-list


Re: pyprocessing and exceptions

2009-04-15 Thread Jesse Noller
On Wed, Apr 15, 2009 at 11:32 AM, garyrob  wrote:
> Hi,
>
> We're still using Python 2.5 so this question is about the
> pyprocessing module rather than the multiprocessing module, but I'm
> guessing the answer is the same.
>
> I tend to use the Pool() object to create slave processes. If
> something goes wrong in the slave, an exception is raised there, which
> is then raised in the master or parent process, which is great.
>
> The problem is that if the master aborts due to the exception, it
> doesn't show the usual stack trace info for the slave, which would
> show (among other things) the line number the error occurred on.
> Instead, it shows the line in the master where work was sent to the
> slave (such as a call to pool.map()).
>
> I'm wondering what the recommended way is to write code that will
> reveal what went wrong in the slave. One obvious possibility is to
> have functions that are invoked in the slave incorporate their own
> exception handling that prints a stack trace. But I'd rather handle
> this issue in the master, rather than have to handle it in every
> function in the slave module that the master may invoke.
>
> Is there a way to do that? If not, what's the recommended approach?
>
> Thanks,
> Gary
>
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>

You should handle the exception in the child.
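
For example, a wrapper along these lines -- my sketch, not a
pyprocessing/multiprocessing feature, with do_work standing in for
your real slave function -- preserves the slave-side traceback:

import traceback
from multiprocessing import Pool

def do_work(arg):
    return 1 / arg                  # placeholder that can fail

def capture_traceback(arg):
    try:
        return do_work(arg)
    except Exception:
        # ship the formatted slave traceback back to the master
        raise Exception(traceback.format_exc())

if __name__ == "__main__":
    pool = Pool(2)
    # still raises in the master, but the exception message now
    # carries the slave's full stack trace, line numbers included
    pool.map(capture_traceback, [1, 2, 0])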

Also, multiprocessing was backported to python 2.5 and earlier.

http://pypi.python.org/pypi/multiprocessing/

jesse
--
http://mail.python.org/mailman/listinfo/python-list


regex alternation problem

2009-04-17 Thread Jesse Aldridge
import re

s1 = "I am an american"

s2 = "I am american an "

for s in [s1, s2]:
print re.findall(" (am|an) ", s)

# Results:
# ['am']
# ['am', 'an']

---

I want the results to be the same for each string.  What am I doing
wrong?
--
http://mail.python.org/mailman/listinfo/python-list


Re: regex alternation problem

2009-04-17 Thread Jesse Aldridge
On Apr 17, 5:30 pm, Paul McGuire  wrote:
> On Apr 17, 5:28 pm, Paul McGuire wrote:
>
> > Your find pattern includes (and consumes) a leading AND trailing space
> > around each word.  In the first string "I am an american", there is a
> > leading and trailing space around "am", but the trailing space for
> > "am" is the leading space for "an", so " an "- Hide quoted text -
>
> Oops, sorry, ignore debris after sig...

Alright, I got it.  Thanks for the help guys.
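
For the archive, the usual fix is to assert the delimiting spaces
without consuming them, using lookarounds (or \b word boundaries) --
my addition, not from the thread:

import re

s1 = "I am an american"
s2 = "I am american an "

for s in [s1, s2]:
    # (?<= ) and (?= ) match the surrounding spaces without consuming
    # them, so adjacent words no longer steal each other's delimiter
    print(re.findall(r"(?<= )(am|an)(?= )", s))

# Results:
# ['am', 'an']
# ['am', 'an']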
--
http://mail.python.org/mailman/listinfo/python-list


Handling NameError in a list gracefully

2009-04-20 Thread Jesse Aldridge
from my_paths import *

def get_selected_paths():
return [home, desktop, project1, project2]

---

So I have a function like this which returns a list containing a bunch
of variables.  The real list has around 50 entries.  Occasionally I'll
remove a variable from my_paths and cause get_selected_paths to throw
a NameError.  For example, say I delete the "project1" variable from
my_paths; now I'll get a NameError when I call get_selected_paths.
So everything that depends on the get_selected_paths function is
crashing.  I am wondering if there is an easy way to just ignore the
variable if it's not found.  So, in the example case I would want to
somehow handle the exception in a way that I end up returning just
[home, desktop, project2].
Yes, I realize there are a number of ways to reimplement this, but I'm
wanting to get it working with minimal changes to the code.  Any
suggestions?
--
http://mail.python.org/mailman/listinfo/python-list


Re: Handling NameError in a list gracefully

2009-04-20 Thread Jesse Aldridge
Nevermind, I figured it out right after I clicked the send button :\


from my_paths import *

def get_selected_paths():
    return [globals()[s] for s in
            ["home", "desktop", "project1", "project2"]
            if s in globals()]
--
http://mail.python.org/mailman/listinfo/python-list


Re: Handling NameError in a list gracefully

2009-04-20 Thread Jesse Aldridge
On Apr 20, 3:46 pm, Chris Rebert  wrote:
> On Mon, Apr 20, 2009 at 1:36 PM, Jesse Aldridge  
> wrote:
> > from my_paths import *
>
> > def get_selected_paths():
> >    return [home, desktop, project1, project2]
>
> > ---
>
> > So I have a function like this which returns a list containing a bunch
> > of variables.  The real list has around 50 entries.  Occasionally I'll
> > remove a variable from my_paths and cause get_selected_paths to throw
> > a NameError.  For example, say I delete the "project1" variable from
> > my_paths; now I'll get a NameError when I call get_selected_paths.
> > So everything that depends on the get_selected_paths function is
> > crashing.  I am wondering if there is an easy way to just ignore the
> > variable if it's not found.  So, in the example case I would want to
> > somehow handle the exception in a way that I end up returning just
> > [home, desktop, project2].
> > Yes, I realize there are a number of ways to reimplement this, but I'm
> > wanting to get it working with minimal changes to the code.  Any
> > suggestions?
>
> def get_selected_paths():
>     variables = "home desktop project1 project2".split()
>     vals = []
>     for var in variables:
>         try:
>             vals.append(getattr(my_paths, var))
>         except AttributeError:
>             pass
>     return vals
>
> --
> I have a blog:http://blog.rebertia.com

Hey, that's even better.  Thanks.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Ending data exchange through multiprocessing pipe

2009-04-23 Thread Jesse Noller
On Thu, Apr 23, 2009 at 5:15 AM, Michal Chruszcz  wrote:
> On Apr 22, 10:30 pm, Scott David Daniels 
> wrote:
>> Michal Chruszcz wrote:
>> > ... First idea, which came to my mind, was using a queue. I've got many
>> > producers (all of the workers) and one consumer. Seems quite simple,
>> > but it isn't, at least for me. I presumed that each worker will put()
>> > its results to the queue, and finally will close() it, while the
>> > parent process will get() them as long as there is an active
>> > subprocess
>>
>> Well, if the protocol for a worker is:
>>      <loop>:
>>           <produce a result>
>>           queue.put(result)
>>      queue.put(<sentinel>)
>>      queue.close()
>>
>> Then you can keep count of how many have finished in the consumer.
>
> Yes, I could, but I don't like the idea of using a sentinel, if I
> still need to close the queue. I mean, if I mark queue closed or close
> a connection through a pipe why do I still have to "mark" it closed
> using a sentinel? From my point of view it's a duplication. Thus I
> dare to say multiprocessing module misses something quite important.
>
> Probably it is possible to retain a pipe state using a multiprocessing
> manager, thus omitting the state exchange duplication, but I haven't
> tried it yet.
>
> Best regards,
> Michal Chruszcz
> --
> http://mail.python.org/mailman/listinfo/python-list
>

Using a sentinel, or looping on get/Empty pattern are both valid, and
correct suggestions.
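
A compressed sketch of that sentinel protocol -- mine, restating
Scott's pseudo-code above in runnable form:

from multiprocessing import Process, Queue

SENTINEL = None

def worker(q):
    for result in range(3):      # stand-in for real work
        q.put(result)
    q.put(SENTINEL)              # announce that this producer is done

if __name__ == "__main__":
    q = Queue()
    workers = [Process(target=worker, args=(q,)) for _ in range(4)]
    for w in workers:
        w.start()
    finished = 0
    while finished < len(workers):
        item = q.get()
        if item is SENTINEL:
            finished += 1        # count completed producers
        else:
            print(item)
    for w in workers:
        w.join()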

If you think it's a bug, or you want a new feature, post it,
preferably with a patch, to bugs.python.org. Add me to the +noisy
list, or assign it to me if you can.

Jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: multiprocessing, pool and process crashes

2009-04-30 Thread Jesse Noller
On Wed, Apr 29, 2009 at 10:50 AM,   wrote:
> I want to use the multiprocessing.Pool object to run multiple tasks in
> separate processes.
>
> The problem is that I want to call an external C function (from a
> shared library, with help from ctypes) and this function tends to
> crash (SIGSEGV,SIGFPE,etc.) on certain data sets (the purpose of this
> thing is to test the behavior of the function and see when it
> crashes).
>
> My question is if the Pool object is able to spawn a new process in
> place of the crashed one and resume operation.  I killed the
> subprocesses with SIGSEGV and the script locked up :(
>
> Any ideas/alternatives?
>

You're going to want to use a custom pool, not the built-in pool. In
your custom pool, you'll need to capture the signals/errors you want
and handle them accordingly. The built-in pool was not designed for
your use case.
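
One way to capture a crash instead of hanging -- a sketch of my own,
not the Pool API; risky_c_function is a placeholder for the ctypes
call under test:

from multiprocessing import Process, Queue

def risky_c_function(data):
    return data * 2          # replace with the real ctypes call

def call_unsafe(q, data):
    q.put(risky_c_function(data))

def run_guarded(data, timeout=60):
    q = Queue()
    p = Process(target=call_unsafe, args=(q, data))
    p.start()
    p.join(timeout)
    if p.is_alive():
        p.terminate()        # the C call wedged; kill the child
        return ("hung", None)
    if p.exitcode != 0:      # a negative exitcode names the fatal signal
        return ("crashed", p.exitcode)
    return ("ok", q.get())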

-jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing Pool and functions with many arguments

2009-04-30 Thread Jesse Noller
On Wed, Apr 29, 2009 at 2:01 PM, psaff...@googlemail.com
 wrote:
> I'm trying to get to grips with the multiprocessing module, having
> only used ParallelPython before.
>
> based on this example:
>
> http://docs.python.org/library/multiprocessing.html#using-a-pool-of-workers
>
> what happens if I want my "f" to take more than one argument? I want
> to have a list of tuples of arguments and have these correspond the
> arguments in f, but it keeps complaining that I only have one argument
> (the tuple). Do I have to pass in a tuple and break it up inside f? I
> can't use multiple input lists, as I would with regular map.
>
> Thanks,
>
> Peter

Perhaps these articles will help you:

http://www.doughellmann.com/PyMOTW/multiprocessing/communication.html#pool-map
http://www.doughellmann.com/PyMOTW/multiprocessing/mapreduce.html#simplemapreduce
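
The usual workaround is to pass tuples and unpack them inside a
wrapper -- a minimal sketch of mine, not from the linked articles
(Python 3.3 later added Pool.starmap for exactly this):

from multiprocessing import Pool

def f(x, y):
    return x + y

def f_star(args):
    return f(*args)    # unpack the argument tuple into f's signature

if __name__ == "__main__":
    pool = Pool(4)
    print(pool.map(f_star, [(1, 2), (3, 4), (5, 6)]))   # [3, 7, 11]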

jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: [Python-Dev] [RELEASED] Python 3.1 beta 1

2009-05-07 Thread Jesse Noller
On Thu, May 7, 2009 at 9:12 AM, Scott David Daniels
 wrote:
> Daniel Fetchinson wrote:
>>>
>>> Other features include an ordered dictionary implementation
>>
>> Are there plans for backporting this to python 2.x just as
>> multiprocessing has been?
>
> Why not grab the 3.1 code and do it yourself for your 2.X's?
> It should be far less work than attempting something as
> fidgety as multiprocessing.
>
> --Scott David Daniels
> scott.dani...@acm.org

Don't worry, we gave multiprocessing some ADHD meds :)
--
http://mail.python.org/mailman/listinfo/python-list


Re: Get multiprocessing.Queue to do priorities

2009-05-10 Thread Jesse Noller
On Sat, May 9, 2009 at 6:11 PM, uuid  wrote:
> The Queue module, apparently, is thread safe, but *not* process safe. If you
> try to use an ordinary Queue, it appears inaccessible to the worker process.
> (Which, after all, is quite logical, since methods for moving items between
> the threads of the same process are quite different from inter-process
> communication.) It appears that creating a manager that holds a shared queue
> might be an option
> (http://stackoverflow.com/questions/342556/python-2-6-multiprocessing-queue-compatible-with-threads).

Using a manager, or submitting a patch which adds a priority queue to
the multiprocessing.queues module, is the correct solution for this.
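
A minimal sketch of the manager route -- my example, not a tested
patch: register the stdlib PriorityQueue with a custom manager and
share the resulting proxy between processes:

from multiprocessing.managers import BaseManager
from Queue import PriorityQueue   # the module is named "queue" in Python 3

class QueueManager(BaseManager):
    pass

QueueManager.register("PriorityQueue", PriorityQueue)

if __name__ == "__main__":
    manager = QueueManager()
    manager.start()
    pq = manager.PriorityQueue()   # a proxy usable from worker processes
    pq.put((2, "low priority"))
    pq.put((1, "high priority"))
    print(pq.get())                # (1, 'high priority')
    manager.shutdown()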

You can file an enhancement in the tracker, and assign/add me to it,
but without a patch it may take me a bit (wicked busy right now).

jesse
--
http://mail.python.org/mailman/listinfo/python-list


Re: unladen swallow: python and llvm

2009-06-04 Thread Jesse Noller
You can email these questions to the unladen-swallow mailing list.
They're very open to answering questions.

2009/6/4 Luis M. González :
> I am very excited by this project (as well as by pypy) and I read all
> their plan, which looks quite practical and impressive.
> But I must confess that I can't understand why LLVM is so great for
> python and why it will make a difference.
>
AFAIK, LLVM is a lot of things at the same time (a compiler
> infrastructure, a compilation strategy, a virtual instruction set,
> etc).
I am also confused by their use of the term "jit" (is LLVM a jit? Can
> it be used to build a jit?).
> Is it something like the .NET or JAVA jit? Or it can be used to
> implement a custom jit (ala psyco, for example)?
>
> Also, as some pypy folk said, it seems they intend to do "upfront
> compilation". How?
> Is it something along the lines of the V8 javascript engine (no
> interpreter, no intermediate representation)?
> Or it will be another interpreter implementation? If so, how will it
> be any better...?
>
Well, these are a lot of questions and they only show my confusion...
> I would highly appreciate if someone knowledgeable sheds some light on
> this for me...
>
> Thanks in advance!
> Luis
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Best way to modify code without breaking stuff.

2008-06-04 Thread Jesse Aldridge
I've got a module that I use regularly.  I want to make some extensive
changes to this module but I want all of the programs that depend on
the module to keep working while I'm making my changes.  What's the
best way to accomplish this?
--
http://mail.python.org/mailman/listinfo/python-list


dynamic method question

2008-06-12 Thread Jesse Aldridge
So in the code below, I'm binding some events to a text control in
wxPython.  The way I've been doing it is demonstrated with the
Lame_Event_Widget class.  I want to factor out the repeating
patterns.  Cool_Event_Widget is my attempt at this.  It pretty much
works, but I have a feeling there's a better way of doing this.  Also,
I couldn't get the Cool_Text_Control to override the on_text method.
How would I do that?

-

import textwrap, new

import wx


class Lame_Event_Widget:
def bind_events(self):
self.Bind(wx.EVT_LEFT_DOWN, self.on_left_down)
self.Bind(wx.EVT_RIGHT_DOWN, self.on_right_down)
self.Bind(wx.EVT_MIDDLE_DOWN, self.on_middle_down)
self.Bind(wx.EVT_LEFT_UP, self.on_left_up)
self.Bind(wx.EVT_RIGHT_UP, self.on_right_up)
self.Bind(wx.EVT_TEXT, self.on_text)

def on_left_down(self, e):
print "lame normal method: on_left_down"
e.Skip()

def on_right_down(self, e):
print "lame normal method: on_right_down"
e.Skip()

def on_middle_down(self, e):
print "lame normal method: on_middle_down"
e.Skip()

def on_left_up(self, e):
print "lame normal method: on_left_up"
e.Skip()

def on_right_up(self, e):
print "lame normal method: on_right_up"
e.Skip()

def on_text(self, e):
print "lame normal method: on_text"
e.Skip()


class Cool_Event_Widget:
def bind_events(self):
method_names = textwrap.dedent(
"""
on_left_down, on_right_down, on_middle_down,
on_left_up, on_right_up, on_middle_up,
on_text"""
).replace("\n", "").split(", ")

for name in method_names:
event_name = name.partition("_")[2]
event_name = "wx.EVT_" + event_name.upper()

exec("def " + name + "(self, e):\n" +
 "print 'cool dynamic method: " + name + "'\n" +
 "e.Skip()\n"
 "self." + name + " = new.instancemethod(" + name + ",
self, self.__class__)\n"
 "self.Bind(" + event_name + ", self." + name + ")")


if __name__ == "__main__":
app = wx.App()
frame = wx.Frame(None)
panel = wx.Panel(frame)
sizer = wx.BoxSizer(wx.VERTICAL)
panel.SetSizer(sizer)

class Cool_Text_Control(wx.TextCtrl, Cool_Event_Widget):
def __init__(self, parent):
wx.TextCtrl.__init__(self, parent)
self.bind_events()

def on_text(self, e):
print "modified text in Cool_Text_Control"

class Lame_Text_Control(wx.TextCtrl, Lame_Event_Widget):
def __init__(self, parent):
wx.TextCtrl.__init__(self, parent)
self.bind_events()

def on_text(self, e):
print "modified text in Lame_Text_Control"

sizer.Add(Cool_Text_Control(panel))
sizer.Add(Lame_Text_Control(panel))

frame.Show()
app.MainLoop()
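
One possible answer to both questions -- my suggestion, sketched and
untested against wxPython: skip exec and new.instancemethod entirely,
bind closures for the defaults, and let getattr find a subclass
override such as Cool_Text_Control.on_text first:

class Closure_Event_Widget:
    EVENT_MAP = {
        "on_left_down": wx.EVT_LEFT_DOWN,
        "on_right_down": wx.EVT_RIGHT_DOWN,
        "on_text": wx.EVT_TEXT,
        # ... remaining events elided
    }

    def bind_events(self):
        for name, event in self.EVENT_MAP.items():
            handler = getattr(self, name, None)   # subclass override wins
            if handler is None:
                def handler(e, name=name):        # default fallback
                    print("closure method: " + name)
                    e.Skip()
            self.Bind(event, handler)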
--
http://mail.python.org/mailman/listinfo/python-list


Python Data Utils

2008-04-05 Thread Jesse Aldridge
In an effort to experiment with open source, I put a couple of my
utility files up here:
http://github.com/jessald/python_data_utils/tree/master
What do you think?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Data Utils

2008-04-06 Thread Jesse Aldridge
Thanks for the detailed feedback.  I made a lot of modifications based
on your advice.  Mind taking another look?

> Some names are a bit obscure - "universify"?
> Docstrings would help too, and blank lines

I changed the name of universify and added a docstrings to every
function.

> ...PEP8

I made a few changes in this direction, feel free to take it the rest
of the way ;)

> find_string is a much slower version of the find method of string objects,  

Got rid of find_string, and contains.  What are the others?

> And I don't see what you gain from things like:
> def count( s, sub ):
>      return s.count( sub )

Yeah, got rid of that stuff too.  I ported these files from Java a
while ago, so there was a bit of junk like this lying around.

> delete_string, as a function, looks like it should delete some string, not  
> return a character; I'd use a string constant DELETE_CHAR, or just DEL,  
> it's name in ASCII.

Got rid of that too :)

> In general, None should be compared using `is` instead of `==`, and  
> instead of `type(x) is type(0)` or `type(x) == type(0)` I'd use  
> `isinstance(x, int)` (unless you use Python 2.1 or older, int, float, str,  
> list... are types themselves)

Changed.

So, yeah, hopefully things are better now.

Soon developers will flock from all over the world to build this into
the greatest data manipulation library the world has ever seen!  ...or
not...

I'm tired.  Making code for other people is too much work :)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Data Utils

2008-04-06 Thread Jesse Aldridge
On Apr 6, 6:14 am, "Konstantin Veretennicov" <[EMAIL PROTECTED]>
wrote:
> On Sun, Apr 6, 2008 at 7:43 AM, Jesse Aldridge <[EMAIL PROTECTED]> wrote:
> > In an effort to experiment with open source, I put a couple of my
> >  utility files up here:
> >  http://github.com/jessald/python_data_utils/tree/master
> >  What do you think?
>
> Would you search for, install, learn and use these modules if *someone
> else* created them?
>
> --
> kv

Yes, I would.  I searched a bit for a library that offered similar
functionality.  I didn't find anything.  Maybe I'm just looking in the
wrong place.  Any suggestions?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Data Utils

2008-04-06 Thread Jesse Aldridge
> Docstrings go *after* the def statement.

Fixed.

> changing "( " to "(" and " )" to ")".

Changed.


I attempted to take out everything that could be trivially implemented
with the standard library.
This has left me with... 4 functions in S.py.  One of them is used
internally, and the others aren't terribly awesome :\  But I think the
ones that remain are at least a bit useful :)

> The penny drops :-)

yeah, yeah

> Not in all places ... look at the ends_with function. BTW, this should
> be named something like "fuzzy_ends_with".

fixed

> fuzzy_match(None, None) should return False.

changed

> 2. make_fuzzy function: first two statements should read "s =
> s.replace(.)" instead of "s.replace(.)".

fixed

> 3. Fuzzy matching functions are specialised to an application; I can't
> imagine that anyone would be particularly interested in those that you
> provide.

I think it's useful in many cases.  I use it all the time.  It helps
guard against annoying input errors.

> A basic string normalisation-before-comparison function would
> usefully include replacing multiple internal whitespace characters by
> a single space.

I added this functionality.


> 5. Casual inspection of your indentation function gave the impression
> that it was stuffed

Fixed

Thanks for the feedback.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Data Utils

2008-04-07 Thread Jesse Aldridge
> But then you introduced more.

oops.  old habits...

> mxTextTools.

This looks cool, so does the associated book - "Text Processing in
Python".  I'll look into them.

> def normalise_whitespace(s):
>     return ' '.join(s.split())

Ok, fixed.

> a.replace('\xA0', ' ') in there somewhere.

Added.

Thanks again.
-- 
http://mail.python.org/mailman/listinfo/python-list

