What is the preferred way to sort/compare custom objects?

2011-02-24 Thread Jeremy
I just discovered the wiki page on sorting 
(http://wiki.python.org/moin/HowTo/Sorting/).  This describes the new way of 
sorting a container instead of using the cmp function.  But what do I do for 
custom objects?  If I write __lt__, __gt__, etc. functions for my objects, will 
these be used?  Is this better than defining a key for sorting my custom 
objects?  

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Preferred method of sorting/comparing custom objects

2011-02-24 Thread Jeremy
I recently found the wiki page on sorting 
(http://wiki.python.org/moin/HowTo/Sorting/).  This page describes the new key 
parameter to the sort and sorted functions.  

What about custom objects?  Can I just write __lt__, __gt__, etc. functions and 
not have to worry about the key parameter?  Is that the preferred (i.e., 
fastest) way to do things or should I use a lambda function similar to what is 
given as an example on the wiki page?

For my custom objects, I would prefer to write the comparison functions as that 
seems easiest in my situation, but I would like to know what is the 
preferred/accepted way.

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is the preferred way to sort/compare custom objects?

2011-02-24 Thread Jeremy
Sorry for double posting.  Google Groups was acting funny this morning.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Preferred method of sorting/comparing custom objects

2011-02-24 Thread Jeremy
Sorry for double posting.  Google Groups was acting funny this morning.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is the preferred way to sort/compare custom objects?

2011-02-24 Thread Jeremy
On Thursday, February 24, 2011 10:09:53 AM UTC-7, Chris Rebert wrote:
> On Thu, Feb 24, 2011 at 8:27 AM, Jeremy  wrote:
> > I just discovered the wiki page on sorting 
> > (http://wiki.python.org/moin/HowTo/Sorting/).  This describes the new way 
> > of sorting a container instead of using the cmp function.  But what do I do 
> > for custom objects?
> > If I write __lt__, __gt__, etc. functions for my objects, will these be 
> >used?
> 
> s/functions/methods/
> Yes, they will. As Bill Nye would say: "Try It!". The REPL exists for a 
> reason.
> 
> If you're using Python 2.7+, you may want to use
> functools.total_ordering()
> [http://docs.python.org/library/functools.html#functools.total_ordering
> ] for convenience.
> 
> > Is this better than defining a key for sorting my custom objects?
> 
> Unless there are multiple "obvious" ways to sort your objects, yes.
> Third-party code will be able to sort+compare your objects. Sorting
> your objects in your own code will be more concise. And you'll be able
> to use the comparison operators on your objects.

I implemented __eq__ and __lt__ (and used functools, thanks for the suggestion) 
and sorting works.  Thanks for the help.  Most importantly, I wanted to make 
sure I was doing this the right way and your post helped.
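
For the record, a minimal sketch of that approach (the class and its field are invented for illustration): once __eq__ and __lt__ are defined, functools.total_ordering supplies the remaining comparison methods and sort() works with no key function.

```python
from functools import total_ordering

@total_ordering
class Particle(object):
    """Illustrative object, ordered by its energy attribute."""
    def __init__(self, energy):
        self.energy = energy
    def __eq__(self, other):
        return self.energy == other.energy
    def __lt__(self, other):
        return self.energy < other.energy
    # total_ordering derives __le__, __gt__ and __ge__ from the two above.

particles = [Particle(3.0), Particle(1.0), Particle(2.0)]
particles.sort()  # uses __lt__, no key= needed
print([p.energy for p in particles])  # [1.0, 2.0, 3.0]
```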

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I define __getattr__ to operate on all items of container and pass arguments?

2011-02-24 Thread Jeremy
On Tuesday, February 15, 2011 2:58:11 PM UTC-7, Jeremy wrote:
> 
> > So the arguments haven't yet been passed when __getattr__() is
> > invoked. Instead, you must return a function from __getattr__(); this
> > function will then get called with the arguments. Thus (untested):
> > 
> > def __getattr__(self, name):
> > def _multiplexed(*args, **kwargs):
> > return [getattr(item, name)(*args, **kwargs) for item in self.items]
> > return _multiplexed
> 

Sorry to resurrect an old(ish) thread, but I have found a problem with this 
approach.  Defining __getattr__ in this manner allows me to access methods of 
the contained objects---this was the original goal.  But it has introduced a 
problem: I can no longer access the documentation for my class.  The error 
I get is copied below.  It seems like the help function is trying to access an 
attribute of the class but can no longer get it.  Any suggestions?

Thanks,
Jeremy

/home/jlconlin/CustomPython/trunk/Collect.py in ()

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site.pyc in 
__call__(self, *args, **kwds)
455 def __call__(self, *args, **kwds):
456 import pydoc
--> 457 return pydoc.help(*args, **kwds)
458 
459 def sethelper():

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pydoc.pyc in 
__call__(self, request)
   1721 def __call__(self, request=None):
   1722 if request is not None:
-> 1723 self.help(request)
   1724 else:
   1725 self.intro()

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pydoc.pyc in 
help(self, request)
   1768 elif request: doc(request, 'Help on %s:')
   1769 elif isinstance(request, Helper): self()
-> 1770 else: doc(request, 'Help on %s:')
   1771 self.output.write('\n')
   1772 

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pydoc.pyc in 
doc(thing, title, forceload)
   1506 """Display text documentation, given an object or a path to an 
object."""
   1507 try:
-> 1508 pager(render_doc(thing, title, forceload))
   1509 except (ImportError, ErrorDuringImport), value:
   1510 print value

/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/pydoc.pyc in 
render_doc(thing, title, forceload)
   1483 desc = describe(object)
   1484 module = inspect.getmodule(object)
-> 1485 if name and '.' in name:
   1486 desc += ' in ' + name[:name.rfind('.')]
   1487 elif module and module is not object:

TypeError: argument of type 'function' is not iterable
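
The traceback makes sense on inspection: pydoc looks up attributes such as __name__ with getattr(), and a __getattr__ written as above happily manufactures a multiplexing function for every name it is asked for, so render_doc receives a function where it expected a string. A common fix, sketched here on a hypothetical container like the one in the earlier snippet, is to refuse special (dunder) names:

```python
class Collection(object):
    """Hypothetical container; items are assumed to share the multiplexed method."""
    def __init__(self, items):
        self.items = items

    def __getattr__(self, name):
        # Refuse special/dunder lookups so introspection tools (pydoc, help(),
        # inspect) get a normal AttributeError instead of a bogus function.
        if name.startswith('__') and name.endswith('__'):
            raise AttributeError(name)
        def _multiplexed(*args, **kwargs):
            return [getattr(item, name)(*args, **kwargs) for item in self.items]
        return _multiplexed

c = Collection(['ab', 'cd'])
print(c.upper())  # ['AB', 'CD']
```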
-- 
http://mail.python.org/mailman/listinfo/python-list


inheriting file object

2005-07-06 Thread Jeremy
Hello all,
I am trying to inherit the file object and don't know how to do it.  I 
need to open a file and perform operations on it in the class I am 
writing.  I know the simple syntax is:

class MyClass(file):
...

but I don't know how to make it open the file for reading/writing.  Can 
anyone help me out with this?
Thanks,
Jeremy

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: inheriting file object

2005-07-06 Thread Jeremy
Jeremy Jones wrote:

> Something like this?  I put the following code in test_file.py:
> 
> class MyFile(file):
> def doing_something(self):
> print "in my own method"
> 
> 
> And used it like this:
> 
> In [1]: import test_file
> 
> In [2]: f = test_file.MyFile("foobar.file", "w")
> 
> In [3]: f.write("foo\n")
> 
> In [4]: f.doing_something()
> in my own method
> 
> 
> But do you really need to subclass file, or can you just use a file 
> instance in your class?
> 
> 
> Jeremy Jones  
I don't know if I should be inheriting file or just using a file object. 
How would I determine which one would be more appropriate?
Thanks,
Jeremy

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: inheriting file object

2005-07-06 Thread Jeremy
harold fellermann wrote:
>>I don't know if I should be inheriting file or just using a file 
>>object.
>>  How would I determine which one would be more appropriate?
> 
> 
> Inheritance is often referred to as an IS relation, whereas using an
> attribute is a HAS relation.
> 
> If you inherit from file, all operations for files should be valid for
> your class also. Usually the file operations would be directly inherited
> and not overwritten.
> 
> However, if you don't want to expose all file functionality, a HAS
> relation is more appropriate. If you plan to use your class as a file
> handle, e.g. for formatting output in a special way, I would prefer to
> make the file an attribute:
> 
> If you would tell us your use case, it would be easier to give you
> advice.

That is an excellent explanation and the example is similar to what I 
want to do.  I have a file I want to look through and change if needed. 
I think I will follow your suggestion and not inherit from the file object.
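
To illustrate the HAS relation harold recommends, here is a minimal sketch (class and method names invented for the example): the file stays an internal detail and only the operations the class actually needs are exposed.

```python
import os
import tempfile

class EditableFile(object):
    """HAS relation: wraps a path/file instead of inheriting from file."""
    def __init__(self, path):
        self.path = path

    def replace_in_file(self, old, new):
        # Read, transform, write back; the file object never leaks out.
        with open(self.path) as f:
            text = f.read()
        with open(self.path, 'w') as f:
            f.write(text.replace(old, new))

# quick demonstration in a temporary directory
path = os.path.join(tempfile.mkdtemp(), 'demo.txt')
with open(path, 'w') as f:
    f.write('foo bar foo')
EditableFile(path).replace_in_file('foo', 'baz')
print(open(path).read())  # baz bar baz
```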
Thanks,
Jeremy

-- 
http://mail.python.org/mailman/listinfo/python-list


regular expression questions in Python

2005-07-11 Thread Jeremy
I am (very) new to regular expressions and I am having a difficult time 
understanding how to use them.  I have the following in my script:

zaidsearch = r'''^ {5,}([\d]{4,5})(.\d{2,2}c)'''
ZAIDSearch = re.compile(zaidsearch, re.IGNORECASE)

When I do ZAIDSearch.search(...), this works fine.  I would like to write 
it as:

zaidsearch = r'''^ {5,}([\d]{4,5})  #My comments
  (.\d{2,2}c)#More of my comments'''
ZAIDSearch = re.compile(zaidsearch, re.VERBOSE)

but this doesn't work.  I get the following error:

 raise error, v # invalid expression
sre_constants.error: nothing to repeat


So I guess my question is: how do I use the VERBOSE option to make my 
regular expression easier to understand for a human?  Secondly, how can 
I use both the VERBOSE and IGNORECASE options?
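
The "nothing to repeat" error has a simple cause: under re.VERBOSE, unescaped whitespace in the pattern is ignored, so '^ {5,}' collapses to '^{5,}' and the quantifier has nothing to apply to. Writing the space as a character class (or escaping it) keeps it significant, and the two flags are combined with bitwise OR (the sample input line below is invented):

```python
import re

zaidsearch = r'''^[ ]{5,}      # at least five leading spaces (kept significant)
                 (\d{4,5})     # a 4- or 5-digit ZAID
                 (.\d{2}c)     # cross-section suffix, e.g. '.70c'
              '''
ZAIDSearch = re.compile(zaidsearch, re.VERBOSE | re.IGNORECASE)

m = ZAIDSearch.search('      92235.70c   illustrative xsdir-style line')
print(m.groups())  # ('92235', '.70c')
```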
Thanks,
Jeremy

-- 
http://mail.python.org/mailman/listinfo/python-list


readlines() doesn't read entire file

2005-07-14 Thread Jeremy
I have a most aggravating problem.  I don't understand what is causing 
readlines() not to read all the lines in the file.  I have the following 
syntax:



# some initial stuff
XS = xsdir(Datapath + '/xsdir', options.debug)
# some more stuff

class xsdir(object):    #{{{1
    """This class handles all of the data and methods for reading
    the xsdir file."""

    def __init__(self, Datapath, debug=False):
        self.xsdir = file(Datapath, 'r')    # File object
        self.lines = self.xsdir.readlines()
        if debug:
            print self.lines
# and then other stuff as well


I can see all the lines in the list self.lines, but they are not all the 
lines in the file.  When I look at the file in Vim, I can see all the 
lines, but Python cannot.  Can someone help me with this one?
Thanks,
Jeremy

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: readlines() doesn't read entire file

2005-07-14 Thread Jeremy
Peter Hansen wrote:
> Jeremy wrote:
> 
>>I have a most aggravating problem.  I don't understand what is causing 
>>readlines() not to read all the lines in the file.  I have the following 
>>syntax:
>>
> 
> ...
> 
>>self.xsdir = file(Datapath, 'r')# File object
>>
>>I can see all the lines in the list self.lines, but they are not all the 
>>lines in the file.  When I look at the file in Vim, I can see all the 
>>lines, but Python cannot.  Can someone help me with this one?
> 
> 
> What platform?  What version of Python?
> 
> You're opening the file in "text" mode.  If you are on Windows and the 
> file actually contains a ^Z (byte 26) it is treated as EOF.  Is that the 
> problem?
> 
> If not, have you tried cutting parts out of the file, to produce the 
> smallest file that still shows the problem?  At that point you will 
> likely resolve the issue on your own.  Also, does the same thing happen 
> if you use the interactive interpreter to read the file "manually"?
> 
> These are all basic troubleshooting techniques you can use at any time 
> on any problem...
> 
> -Peter
Well, now I have to make a humbling retraction.  I realized I wasn't 
reading the file I thought I was, but one similar to it.  Now that I am 
reading the correct file, I am getting all the lines I expected to.
Thanks,
Jeremy

-- 
http://mail.python.org/mailman/listinfo/python-list


re.IGNORECASE and re.VERBOSE

2005-07-18 Thread Jeremy
I am using regular expressions and I would like to use both
re.IGNORECASE and re.VERBOSE options.  I want to do something like the
following (which doesn't work):

matsearch = r'''^\ {0,4}([mM]\d+) '''
MatSearch = re.compile(matsearch, re.VERBOSE, re.IGNORECASE)

Does anyone have any suggestions?
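
re.compile() takes a single flags argument, so multiple flags are combined with the bitwise OR operator rather than passed separately. A sketch (sample input invented):

```python
import re

matsearch = r'''^\ {0,4}    # up to four leading spaces; escaped so that
                            # VERBOSE mode does not discard the space
                (m\d+)      # material card, e.g. m12
             '''
MatSearch = re.compile(matsearch, re.VERBOSE | re.IGNORECASE)

print(MatSearch.match('  M12 1001 0.5').group(1))  # M12
```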
Thanks,
Jeremy


-- 
http://mail.python.org/mailman/listinfo/python-list


Newbie Developing a Python Extension

2006-11-24 Thread Jeremy
Hi,

I have been working on Linux 2.6.9 to adapt a C++ module to work as a Python
extension with the following setup.py file:

from distutils.core import setup, Extension

sm = Extension(
    'tdma',
    define_macros=[('__USE_POSIX199309', '1')],
    include_dirs=['/usr/include', '/usr/include/python2.3'],
    library_dirs=['/usr/lib'],
    sources=['Bitstrea.cpp', 'bytequeu.cpp', 'debug.cpp', 'dlist.cpp',
             'GrPort.cpp', 'IoPort.cpp', 'LMEmu.cpp', 'LMEmuPdu.cpp',
             'MacPyIf.cpp', 'per_os_clk.cpp', 'timer.cpp'])

setup(name='MySm',
      version='0.1.0',
      description='TDMA MAC',
      ext_modules=[sm])

The extension uses the POSIX call clock_gettime() and things seem fine when
generating the tdma.so file. However, when running the Python program, at a 
line 'from tdma import init,txdn,txup,...', I get an error message saying 
'ImportError: /home/.../tdma.so: undefined symbol: clock_gettime'.

What is wrong here? Is the from-import statement right?

Why is it, when I use 'import sm' or 'from sm import...', I get the message 
'ImportError: No module named sm'?
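
Two notes that may help here. First, the unresolved clock_gettime symbol usually means librt was not linked in; declaring it on the Extension (sketched below, source list abbreviated) lets the linker resolve it. Second, the importable name is the first argument to Extension, 'tdma', not the Python variable sm, which is why 'import sm' fails.

```python
from distutils.core import setup, Extension

sm = Extension(
    'tdma',                       # this, not the variable name, is what you import
    define_macros=[('__USE_POSIX199309', '1')],
    include_dirs=['/usr/include', '/usr/include/python2.3'],
    library_dirs=['/usr/lib'],
    libraries=['rt'],             # librt provides clock_gettime() on older glibc
    sources=['Bitstrea.cpp', 'bytequeu.cpp'])  # remaining sources as before

setup(name='MySm',
      version='0.1.0',
      description='TDMA MAC',
      ext_modules=[sm])
```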

Thanks,
Jeremy 


-- 
http://mail.python.org/mailman/listinfo/python-list


Floating Exception

2006-11-27 Thread Jeremy
Hi,

I just changed some previously-working Python program to add a C++ 
extension.

Now, when I call __init__() in a Python class I did not change and with the 
same arguments passed, I get the screen message 'Floating exception' and the 
program seems to stop. The exact point of the crash in __init__() seems to 
depend on from which directory I run the Python program.

What is wrong? And what is a floating exception?

Thanks,
Jeremy 


-- 
http://mail.python.org/mailman/listinfo/python-list


binascii in C++

2006-01-31 Thread Jeremy
I'm working on a project to create a keyfinder program that finds the 
Windows CD Key in the registry and decodes it.  I prototyped it in 
Python and it worked great but for several reasons I've decided to 
rewrite it in C++.  I use the module binascii extensively in the Python 
version but I can't find an equivalent module in C++.  I'm not a 
professional developer so maybe I'm overlooking something simple.

In particular I'm trying to find an equivalent to the binascii.b2a_hex() 
and binascii.unhexlify() functions.
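
For porting purposes, both calls are small enough to restate exactly; a pure-Python rendering of their behaviour (which the C++ version must reproduce) is:

```python
import binascii

def b2a_hex(data):
    # Equivalent of binascii.b2a_hex: two lowercase hex digits per byte.
    return ''.join('%02x' % b for b in bytearray(data))

def unhexlify(s):
    # Equivalent of binascii.unhexlify: pairs of hex digits back to bytes.
    return bytes(bytearray(int(s[i:i + 2], 16) for i in range(0, len(s), 2)))

print(b2a_hex(b'\x01\xab\xff'))            # 01abff
print(binascii.b2a_hex(b'\x01\xab\xff'))   # b'01abff' in Python 3
```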

Thanks,

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Error 14 on OS Call

2007-03-12 Thread Jeremy
Hi,

I recently started using Python and am working on Python 2.3.6 on Redhat. I
have developed a fat C++ extension for it and have the following problem:

The main module is in the Python code which gets a packet pk of string type
from the extension. It then passes pk to an IP tunnel using
os.write(self.tun_fd,pk).

Sometimes there is a crash at the os.write() call with the message 'OSError:
[Errno 14] Bad address'. When I check the integrity of tun_fd and pk by
copying them just before write(), there seems to be no problem, so it looks
to me as if os or the function reference to write() is bad.

I guess this may be due to a pointer problem in the C++ code. I have been
trying several ways of debugging this problem, but have yet to succeed.

One way I have been trying is to use Purify on Python. However, this leads 
to a segmentation fault in rtlib.o when running make install on the 
instrumented Python code.

Regards,
Jeremy




-- 
http://mail.python.org/mailman/listinfo/python-list


Python Threads and C Semaphores

2007-01-15 Thread Jeremy
Hello,

I have a fat C++ extension to a Python 2.3.4 program. In all, I count
five threads. Of these, two are started in Python using
thread.start_new_thread(), and both of these wait on semaphores in the C++
extension using sem_wait(). There are also two other Python threads and
one thread running wholly in the extension.

I notice that when one of the Python threads calls the extension and waits
on a semaphore, all threads except the C++ one halt, even though they are
not waiting on any semaphore. How do we get this working right?

Thank you,
Jeremy


-- 
http://mail.python.org/mailman/listinfo/python-list


Adding a Par construct to Python?

2009-05-17 Thread jeremy
From a user point of view I think that adding a 'par' construct to
Python for parallel loops would add a lot of power and simplicity,
e.g.

par i in list:
updatePartition(i)

There would be no locking and it would be the programmer's
responsibility to ensure that the loop was truly parallel and correct.

The intention of this would be to speed up Python execution on multi-
core platforms. Within a few years we will see 100+ core processors as
standard and we need to be ready for that.

There could also be parallel versions of map, filter and reduce
provided.

BUT...none of this would be possible with the current implementation
of Python with its Global Interpreter Lock, which effectively rules
out true parallel processing.
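
For thread-based code that is true, but the standard library's multiprocessing module already provides a process-based parallel map that sidesteps the GIL, and its thread-backed twin shows the shared-memory flavour proposed here. A small sketch with an invented worker function:

```python
from multiprocessing.dummy import Pool  # thread pool: shared memory, like 'par'

def update_partition(i):
    # Stand-in for the real per-partition work.
    return i * i

pool = Pool(4)
results = pool.map(update_partition, range(10))  # a ready-made 'pmap'
pool.close()
pool.join()
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Swapping the import to multiprocessing.Pool runs the workers in separate processes, which gives real multi-core speedup for CPU-bound functions at the cost of pickling arguments and results.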

See: 
http://jessenoller.com/2009/02/01/python-threads-and-the-global-interpreter-lock/

What do others think?

Jeremy Martin
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-17 Thread jeremy
On 17 May, 13:05, jer...@martinfamily.freeserve.co.uk wrote:
> From a user point of view I think that adding a 'par' construct to
> Python for parallel loops would add a lot of power and simplicity,
> e.g.
>
> par i in list:
>     updatePartition(i)
>
...actually, thinking about this further, I think it would be good to
add a 'sync' keyword which causes a thread rendezvous within a
parallel loop. This would allow parallel loops to run for longer in
certain circumstances without having the overhead of stopping and
restarting all the threads, e.g.

par i in list:
for j in iterations:
   updatePartition(i)
   sync
   commitBoundaryValues(i)
   sync

This example is a typical iteration over a grid, e.g. finite elements,
calculation, where the boundary values need to be read by neighbouring
partitions before they are updated. It assumes that the new values of
the boundary values are stored in temporary variables until they can
be safely updated.
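
In current Python the 'sync' rendezvous sketched above corresponds to threading.Barrier. A minimal illustration (worker names invented) in which all threads finish phase j before any begins phase j+1:

```python
import threading

NWORKERS, NITER = 4, 3
barrier = threading.Barrier(NWORKERS)   # the 'sync' point
log = []
lock = threading.Lock()

def worker(i):
    for j in range(NITER):
        # update_partition(i) would go here
        barrier.wait()                  # sync: all updates done...
        # commit_boundary_values(i) would go here
        barrier.wait()                  # ...all commits done
        with lock:
            log.append(j)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NWORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))  # 12, i.e. NWORKERS * NITER
```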

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-18 Thread jeremy
o use or can
delegate that decision to the system). Parallel pmap and pfilter would
be implemented in much the same way, although the resultant list might
have to be reassembled from the partial results returned from each
thread. As people have pointed out, parallel reduce is a tricky option
because it requires the binary operation to be associative in which
case it can be parallelised by calculating the result using a tree-
based evaluation strategy.
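
The tree strategy is easy to sketch. It is written sequentially here, but every pairing within one level is independent and could be handed to a separate thread; associativity is what makes the regrouping legal:

```python
import operator

def tree_reduce(op, seq):
    """Reduce by combining neighbouring pairs level by level.
    Requires op to be associative: operands are regrouped, never reordered."""
    items = list(seq)
    if not items:
        raise TypeError('tree_reduce of empty sequence')
    while len(items) > 1:
        # Every op() call below is independent of the others, so a real
        # 'preduce' could evaluate the whole level in parallel.
        items = [op(items[i], items[i + 1]) if i + 1 < len(items) else items[i]
                 for i in range(0, len(items), 2)]
    return items[0]

print(tree_reduce(operator.add, range(10)))  # 45
```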

I have used all of OpenMP, MPI, and Occam in the past. OpenMP adds
parallelism to programs by the use of special comment strings, MPI by
explicit calls to library routines, and Occam by explicit syntactical
structures. Each has its advantages. I like the simplicity of OpenMP,
the cross-language portability of MPI and the fact that concurrency is
built in to the Occam language. What I am proposing here is a hybrid
of the OpenMP and Occam approaches - a change to the language which is
very natural and yet is easy for programmers to understand.
Concurrency is generally regarded as the hardest concept for
programmers to grasp.

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-18 Thread jeremy
On 18 May, 19:58, George Sakkis  wrote:
> On May 18, 5:27 am, jer...@martinfamily.freeserve.co.uk wrote:
>
> > My suggestion is primarily about using multiple threads and sharing
> > memory - something akin to the OpenMP directives that one of you has
> > mentioned. To do this efficiently would involve removing the Global
> > Interpreter Lock, or switching to Jython or Iron Python as you
> > mentioned.
>
> > However I *do* actually want to add syntax to the language.
>
> Good luck with that. The GIL is not going away any time soon (or
> probably ever) and as long as CPython is the "official"
> implementation, there are almost zero chances of adding syntax support
> for this. Besides, Guido and other py-devs are not particularly keen
> on threads as a parallelization mechanism.
>
> George

Hi George,

> The GIL is not going away any time soon (or probably ever) and as long as 
> CPython is
> the "official" implementation, there are almost zero chances of adding syntax 
> support
> for this.

What concerns me about this statement is that, if it is true, Python
risks falling behind when other languages which can exploit multicore
effectively start to come to the fore. I know that Microsoft is
actively researching in this area and they are hoping that F# will
offer good ways to exploit multi-core architectures.

As I understand it the reason for the GIL is to prevent problems with
garbage collection in multi-threaded applications. Without it the
reference count method is prone to race conditions. However it seems
like a fairly crude mechanism to solve this problem. Individual
semaphores could be used for each object reference counter, as in
Java.

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-18 Thread jeremy
On 18 May, 21:07, Terry Reedy  wrote:
> George Sakkis wrote:
> > On May 18, 5:27 am, jer...@martinfamily.freeserve.co.uk wrote:
>
> >> My suggestion is primarily about using multiple threads and sharing
> >> memory - something akin to the OpenMP directives that one of you has
> >> mentioned. To do this efficiently would involve removing the Global
> >> Interpreter Lock, or switching to Jython or Iron Python as you
> >> mentioned.
>
> >> However I *do* actually want to add syntax to the language.
>
> I can understand you having a preference, but you may have to choose
> between fighting over that method or achieving results.  I agree with ...
>
> > Good luck with that. The GIL is not going away any time soon (or
> > probably ever) and as long as CPython is the "official"
> > implementation, there are almost zero chances of adding syntax support
> > for this. Besides, Guido and other py-devs are not particularly keen
> > on threads as a parallelization mechanism.
>
> Parallel processes can run on multiple processors as well as multiple
> cores within a processor.  Some problems, like massive search, require
> multiple disks (or memories, or IO ports) as well as multiple processing
> units.  There is debate over how useful massively multicore processors
> will actually be and for which types of problems.
>
> tjr

Hi Terry,

> Parallel processes can run on multiple processors as well as multiple
> cores within a processor.  Some problems, like massive search, require
> multiple disks (or memories, or IO ports) as well as multiple processing
> units.  There is debate over how useful massively multicore processors
> will actually be and for which types of problems.

I agree with this. My approach is in the same space as OpenMP - a
simple way for users to define shared memory parallelism. There is no
reason why it would not work with multiple disks or IO ports on the
same shared memory server. However for distributed memory hardware it
would be a non-starter. In that case we would need something like the
Message Passing Interface.

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-19 Thread jeremy
On 19 May, 00:32, Steven D'Aprano  wrote:
> On Mon, 18 May 2009 02:27:06 -0700, jeremy wrote:
> > However I *do* actually want to add syntax to the language. I think that
> > 'par' makes sense as an official Python construct - we already have had
> > this in the Occam programming language for twenty-five years. The reason
> > for this is ease of use. I would like to make it easy for amateur
> > programmers to exploit natural parallelism in their algorithms. For
> > instance somebody who wishes to calculate a property of each member from
> > a list of chemical structures using the Python Daylight interface: with
> > my suggestion they could potentially get a massive speed up just by
> > changing 'for' to 'par' or 'map' to 'pmap'. (Or map with a parallel
> > keyword argument set as suggested). At present they would have to
> > manually chop up their work and run it as multiple processes in order to
> > achieve the same - fine for expert programmers but not reasonable for
> > people working in other domains who wish to use Python as a utility
> > because of its fantastic productivity and ease of use.
>
> There seems to be some discrepancy between this, and what you wrote in
> your first post:
>
> "There would be no locking and it would be the programmer's
> responsibility to ensure that the loop was truly parallel and correct."
>
> So on the one hand, you want par to be utterly simple-minded and to do no
> locking. On the other hand you want it so simple to use that amateurs can
> mechanically replace 'for' with 'par' in their code and everything will
> Just Work, no effort or thought required. But those two desires are
> inconsistent.
>
> Concurrency is an inherently complicated problem: deadlocks and race
> conditions abound, and are notoriously hard to reproduce, let alone
> debug. If par is simple, and does no locking, then the programmer needs
> to worry about those complications. If you want programmers to ignore
> those complications, either (1) par needs to be very complicated and
> smart, to do the Right Thing in every case, or (2) you're satisfied if
> par produces buggy code when used in the fashion you recommend.
>
> The third option is, make par really simple and put responsibility on the
> user to write code which is concurrent. I think that's the right
> solution, but it means a simplistic "replace `for` with `par` and your
> code will run faster" will not work. It might run faster three times out
> of five, but the other two times it will hang in a deadlock, or produce
> incorrect results, or both.
>
> --
> Steven

Hi Steven,

> you want it so simple to use that amateurs can mechanically replace 'for' 
> with 'par' in their
> code and everything will Just Work, no effort or thought required.

Yes I do want the par construction to be simple, but of course you
can't just replace a for loop with a par loop in the general case.
This issue arises when people use OpenMP: you can take a correct piece
of code, add a comment to indicate that a loop is 'parallel', and if
you get it wrong the code will no longer work correctly. With my 'par'
construct the programmer's intention is made explicit in the code,
rather than by a compiler directive and so I think that is clearer
than OpenMP.

As I wrote before, concurrency is one of the hardest things for
professional programmers to grasp. For 'amateur' programmers we need
to make it as simple as possible, and I think that a parallel loop
construction and the dangers that lurk within would be reasonably
straightforward to explain: there are no locks to worry about, no
message passing. The only advanced concept is the 'sync' keyword,
which would be used to rendezvous all the threads. That would only be
used to speed up certain codes in order to avoid having to repeatedly
shut down and start up gangs of threads.

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-19 Thread jeremy
On 19 May, 03:31, Carl Banks  wrote:
> On May 18, 1:52 pm, jer...@martinfamily.freeserve.co.uk wrote:
>
> > As I understand it the reason for the GIL is to prevent problems with
> > garbage collection in multi-threaded applications.
>
> Not really.  It's main purpose to prevent context switches from
> happening in the middle of some C code (essentially it makes all C
> code a "critical" section, except when the C code explicitly releases
> the GIL).  This tremendously simplifies the task of writing extension
> modules, not to mention the Python core.
>
> > Without it the
> > reference count method is prone to race conditions. However it seems
> > like a fairly crude mechanism to solve this problem. Individual
> > semaphores could be used for each object reference counter, as in
> > Java.
>
> Java doesn't use reference counting.
>
> Individual locks on the refcounts would be prohibitively expensive in
> Python, a cost that Java doesn't have.
>
> Even if you decided to accept the penalty and add locking to
> refcounts, you still have to be prepared for context switching at any
> time when writing C code, which means in practice you have to lock any
> object that's being accessed--that's in addition to the refcount lock.
>
> I am not disagreeing with your assessment in general, it would be
> great if Python were better able to take advantage of multiple cores,
> but it's not as simple a thing as you're making it out to be.
>
> Carl Banks

Hi Carl,

> I am not disagreeing with your assessment in general, it would be
> great if Python were better able to take advantage of multiple cores,
> but it's not as simple a thing as you're making it out to be.

Thanks for explaining a few things to me. So it would seem that
replacing the GIL with something which allows better scalability of
multi-threaded applications, would be very complicated. The paper by
Jesse Nolle which I referenced in my original posting includes the
following:

"In 1999 Greg Stein created a patch set for the interpreter that
removed the GIL, but added granular locking around sensitive
interpreter operations. This patch set had the direct effect of
speeding up threaded execution, but made single threaded execution two
times slower."

Source: 
http://jessenoller.com/2009/02/01/python-threads-and-the-global-interpreter-lock/

That was ten years ago - do you have any idea as to how things have
been progressing in this area since then?

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-19 Thread jeremy
On 19 May, 10:24, Steven D'Aprano
 wrote:
> On Mon, 18 May 2009 02:27:06 -0700, jeremy wrote:
> > Let me clarify what I think par, pmap, pfilter and preduce would mean
> > and how they would be implemented.
>
> [...]
>
> Just for fun, I've implemented a parallel-map function, and done a couple
> of tests. Comments, criticism and improvements welcome!
>
> import threading
> import Queue
> import random
> import time
>
> def f(arg):  # Simulate a slow function.
>     time.sleep(0.5)
>     return 3*arg-2
>
> class PMapThread(threading.Thread):
>     def __init__(self, clients):
>         super(PMapThread, self).__init__()
>         self._clients = clients
>     def start(self):
>         super(PMapThread, self).start()
>     def run(self):
>         while True:
>             try:
>                 data = self._clients.get_nowait()
>             except Queue.Empty:
>                 break
>             target, where, func, arg = data
>             result = func(arg)
>             target[where] = result
>
> class VerbosePMapThread(PMapThread):
>     def __init__(self, clients):
>         super(VerbosePMapThread, self).__init__()
>         print "Thread %s created at %s" % (self.getName(), time.ctime())
>     def start(self):
>         super(VerbosePMapThread, self).start()
>         print "Thread %s starting at %s" % (self.getName(), time.ctime())
>     def run(self):
>         super(VerbosePMapThread, self).run()
>         print "Thread %s finished at %s" % (self.getName(), time.ctime())
>
> def pmap(func, seq, verbose=False, numthreads=4):
>     size = len(seq)
>     results = [None]*size
>     if verbose:
>         print "Initiating threads"
>         thread = VerbosePMapThread
>     else:
>         thread = PMapThread
>     datapool = Queue.Queue(size)
>     for i in xrange(size):
>         datapool.put( (results, i, func, seq[i]) )
>     threads = [thread(datapool) for i in xrange(numthreads)]
>     if verbose:
>         print "All threads created."
>     for t in threads:
>         t.start()
>     # Block until all threads are done.
>     while any([t.isAlive() for t in threads]):
>         if verbose:
>             time.sleep(0.25)
>             print results
>     return results
>
> And here's the timing results:
>
> >>> from timeit import Timer
> >>> setup = "from __main__ import pmap, f; data = range(50)"
> >>> min(Timer('map(f, data)', setup).repeat(repeat=5, number=3))
> 74.999755859375
> >>> min(Timer('pmap(f, data)', setup).repeat(repeat=5, number=3))
>
> 20.490942001342773
>
> --
> Steven

Hi Steven,

I am impressed by this - it shows the potential speedup that pmap
could give, although the GIL would be a problem for speeding up pure
Python code. Do Jython and IronPython include the threading module?
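For what it's worth, today's CPython can sidestep the GIL for this kind of
parallel map by using processes rather than threads. A minimal Python 3
sketch (the function f and the pool size are illustrative, not part of
Steven's code):

```python
# A GIL-free parallel map using worker processes instead of threads.
from multiprocessing import Pool

def f(arg):
    # Stand-in for CPU-bound work; 3*arg - 2 mirrors Steven's example.
    return 3 * arg - 2

def pmap(func, seq, numprocs=2):
    # Pool.map partitions seq across the worker processes and
    # reassembles the results in order, much like the threaded pmap.
    with Pool(numprocs) as pool:
        return pool.map(func, seq)

if __name__ == '__main__':
    print(pmap(f, range(10)))
```

Because the work runs in separate processes, pure-Python functions genuinely
run in parallel, at the cost of pickling arguments and results.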

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-19 Thread jeremy
On 17 May, 13:37, jer...@martinfamily.freeserve.co.uk wrote:
> On 17 May, 13:05, jer...@martinfamily.freeserve.co.uk wrote:> From a user 
> point of view I think that adding a 'par' construct to
> > Python for parallel loops would add a lot of power and simplicity,
> > e.g.
>
> > par i in list:
> >     updatePartition(i)
>
> ...actually, thinking about this further, I think it would be good to
> add a 'sync' keyword which causes a thread rendezvous within a
> parallel loop. This would allow parallel loops to run for longer in
> certain circumstances without having the overhead of stopping and
> restarting all the threads, e.g.
>
> par i in list:
>     for j in iterations:
>        updatePartion(i)
>        sync
>        commitBoundaryValues(i)
>        sync
>
> This example is a typical iteration over a grid, e.g. finite elements,
> calculation, where the boundary values need to be read by neighbouring
> partitions before they are updated. It assumes that the new values of
> the boundary values are stored in temporary variables until they can
> be safely updated.
>
> Jeremy

I have coded up a (slightly) more realistic example. Here is a code to
implement the numerical solution to Laplace's equation, which can
calculate the values of a potential field across a rectangular region
given fixed boundary values:

xmax = 200
ymax = 200
niterations = 200

# Initialisation
old = [[0.0 for y in range(ymax)] for x in range(xmax)]
for x in range(xmax):
    old[x][0] = 1.0
for y in range(ymax):
    old[0][y] = 1.0
new = [[old[x][y] for y in range(ymax)] for x in range(xmax)]

# Iterations
for i in range(1, niterations):
    print "Iteration: ", i
    for x in range(1, xmax-1):
        for y in range(1, ymax-1):
            new[x][y] = \
                0.25*(old[x-1][y] + old[x+1][y] + old[x][y-1] + old[x][y+1])
    # Swap over the new and old storage arrays
    tmp = old
    old = new
    new = tmp


# Print results
for y in range(ymax):
    for x in range(xmax):
        print str(old[x][y]).rjust(7),
    print

In order to parallelise this with the par construct would require a
single alteration to the iteration section:

for i in range(1, niterations):
    print "Iteration: ", i
    par x in range(1, xmax-1):
        for y in range(1, ymax-1):
            new[x][y] = \
                0.25*(old[x-1][y] + old[x+1][y] + old[x][y-1] + old[x][y+1])
    # Swap over the new and old storage arrays
    tmp = old
    old = new
    new = tmp

The par command tells python that it may choose to fire up multiple
threads and automatically partition the data between them. So, for
instance, if there were ten threads created each one would work on a
sub-range of x values: thread 1 takes x from 1 to 100, thread 2 takes
x from 101 to 200, etc.
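Today one can emulate that partitioning by hand with concurrent.futures.
This Python 3 sketch (entirely mine, with a tiny 20x20 grid and a made-up
helper update_rows) gives each thread a disjoint set of rows, so one sweep
needs no locking:

```python
from concurrent.futures import ThreadPoolExecutor

xmax = ymax = 20
old = [[0.0] * ymax for _ in range(xmax)]
for x in range(xmax):
    old[x][0] = 1.0
for y in range(ymax):
    old[0][y] = 1.0
new = [row[:] for row in old]

def update_rows(xs):
    # Each worker owns a disjoint slice of x values, like the
    # thread-per-subrange scheme described above.
    for x in xs:
        for y in range(1, ymax - 1):
            new[x][y] = 0.25 * (old[x-1][y] + old[x+1][y] +
                                old[x][y-1] + old[x][y+1])

# Partition the interior rows into 4 interleaved chunks, one per thread.
chunks = [range(1, xmax - 1)[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as ex:
    list(ex.map(update_rows, chunks))

print(new[1][1])  # 0.5: average of the two boundary 1.0s and two 0.0s
```

Threads suffice here only because the per-cell work releases no speedup
from the GIL anyway; the chunking idea carries over unchanged to processes.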

However with this approach there is an overhead in each iteration of
starting up the threads and shutting them down again. Depending on the
impact of this overhead, it might be better to keep the threads
running between iterations by modifying the code like this, adding a
'sync' command to synchronise the threads at the end of each iteration
and also making sure that only one of the threads performs the swap of
the data arrays.

par x in range(1, xmax-1):
    for i in range(1, niterations):
        if __thread__ == 0: print "Iteration: ", i
        for y in range(1, ymax-1):
            new[x][y] = \
                0.25*(old[x-1][y] + old[x+1][y] + old[x][y-1] + old[x][y+1])
        sync
        # Swap over the new and old storage arrays
        if __thread__ == 0:
            tmp = old
            old = new
            new = tmp
        sync

Jeremy




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Adding a Par construct to Python?

2009-05-20 Thread jeremy
On 20 May, 03:43, Steven D'Aprano
 wrote:
> On Tue, 19 May 2009 03:57:43 -0700, jeremy wrote:
> >> you want it so simple to use that amateurs can mechanically replace
> >> 'for' with 'par' in their code and everything will Just Work, no effort
> >> or thought required.
>
> > Yes I do want the par construction to be simple, but of course you can't
> > just replace a for loop with a par loop in the general case.
>
> But that's exactly what you said you wanted people to be able to do:
>
> "with my suggestion they could potentially get a massive speed up just by
> changing 'for' to 'par' or 'map' to 'pmap'."
>
> I am finding this conversation difficult because it seems to me you don't
> have a consistent set of requirements.
>
> > This issue
> > arises when people use OpenMP: you can take a correct piece of code, add
> > a comment to indicate that a loop is 'parallel', and if you get it wrong
> > the code with no longer work correctly.
>
> How will 'par' be any different? It won't magically turn code with
> deadlocks into bug-free code.
>
> > With my 'par' construct the
> > programmer's intention is made explicit in the code, rather than by a
> > compiler directive and so I think that is clearer than OpenMP.
>
> A compiler directive is just as clear about the programmer's intention as
> a keyword. Possibly even more so.
>
> #$ PARALLEL-LOOP
> for x in seq:
>     do(x)
>
> Seems pretty obvious to me. (Not that I'm suggesting compiler directives
> is a good solution to this problem.)
>
> > As I wrote before, concurrency is one of the hardest things for
> > professional programmers to grasp. For 'amateur' programmers we need to
> > make it as simple as possible,
>
> The problem is that "as simple as possible" is Not Very Simple. There's
> no getting around the fact that concurrency is inherently complex. In
> some special cases, you can keep it simple, e.g. parallel-map with a
> function that has no side-effects. But in the general case, no, you can't
> avoid dealing with the complexity, at least a little bit.
>
> > and I think that a parallel loop
> > construction and the dangers that lurk within would be reasonably
> > straightforward to explain: there are no locks to worry about, no
> > message passing.
>
> It's *already* easy to explain. And having explained it, you still need
> to do something about it. You can't just say "Oh well, I've had all the
> pitfalls explained to me, so now I don't have to actually do anything
> about avoiding those pitfalls". You still need to actually avoid them.
> For example, you can choose one of four tactics:
>
> (1) the loop construct deals with locking;
>
> (2) the caller deals with locking;
>
> (3) nobody deals with locking, therefore the code is buggy and risks
> deadlocks; or
>
> (4) the caller is responsible for making sure he never shares data while
> looping over it.
>
> I don't think I've missed any possibilities. You have to pick one of
> those four.
>
> > The only advanced concept is the 'sync' keyword, which
> > would be used to rendezvous all the threads. That would only be used to
> > speed up certain codes in order to avoid having to repeatedly shut down
> > and start up gangs of threads.
>
> So now you want a second keyword as well.
>
> --
> Steven

Hi Steven,

You wrote:

> I am finding this conversation difficult because it seems to me you don't
> have a consistent set of requirements.

I think that my position has actually been consistent throughout this
discussion about what I would like to achieve. However I have learned
more about the inner workings of python than I knew before which have
made it clear that it would be difficult to implement (in CPython at
least). And also I never intended to present this as a fait accompli -
the intention was to start a debate as we have been doing. You also
wrote

> So now you want a second keyword as well

I actually described the 'sync' keyword in my second email before
anybody else contributed.

I *do* actually know a bit about concurrency and would never imply
that *any* for loop could be converted to a parallel one. The
intention of my remark "with my suggestion they could potentially get
a massive speed up just by changing 'for' to 'par' or 'map' to
'pmap'." is that it could be applied in the particular circumstances
where there are no dependencies between different iterations of the
loop.

Regarding yo

Re: Adding a Par construct to Python?

2009-05-22 Thread jeremy
On 22 May, 05:17, "Rhodri James"  wrote:
> On Wed, 20 May 2009 09:19:50 +0100,   
> wrote:
>
> > On 20 May, 03:43, Steven D'Aprano
> >  wrote:
> >> On Tue, 19 May 2009 03:57:43 -0700, jeremy wrote:
> >> > As I wrote before, concurrency is one of the hardest things for
> >> > professional programmers to grasp. For 'amateur' programmers we need  
> >> to
> >> > make it as simple as possible,
>
> Here, I think, is the fatal flaw in your plan.  As Steven pointed out,
> concurrency isn't simple.  All you are actually doing is making it
> easier for 'amateur' programmers to write hard-to-debug buggy code,
> since you seem to be trying to avoid making them think about how to
> write parallelisable code at all.
>
> > I *do* actually know a bit about concurrency and would never imply
> > that *any* for loop could be converted to a parallel one. The
> > intention of my remark "with my suggestion they could potentially get
> > a massive speed up just by changing 'for' to 'par' or 'map' to
> > 'pmap'." is that it could be applied in the particular circumstances
> > where there are no dependencies between different iterations of the
> > loop.
>
> If you can read this newsgroup for a week and still put your hand on
> your heart and say that programmers will check that there are no
> dependencies before swapping 'par' for 'for', I want to borrow your
> rose-coloured glasses.  That's not to say this isn't the right solution,
> but you must be aware that people will screw this up very, very
> regularly, and making the syntax easy will only up the frequency of
> screw-ups.
>
> > This shows why the sync event is needed - to avoid race conditions on
> > shared variables. It is borrowed from the BSP paradigm - although that
> > is a distributed memory approach. Without the sync clause, rule 5 would
> > just be the standard way of defining a parallelisable loop.
>
> Pardon my cynicism but sync would appear to have all the disadvantages
> of message passing (in terms of deadlock opportunities) with none of
> advantages (like, say, actual messages).  The basic single sync you put
> forward may be coarse-grained enough to be deadlock-proof, but I would
> need to be more convinced of that than I am at the moment before I was
> happy.
>
>
>
> > P.S. I have a couple of additional embellishments to share at this
> > stage:
> [snip]
> > 2. Scope of the 'sync' command. It was pointed out to me by a
> > colleague that I need to define what happens with sync when there are
> > nested par loops. I think that it should be defined to apply to the
> > innermost par construct which encloses the statement.
>
> What I said before about deadlock-proofing?  Forget it.  There's hours
> of fun to be had once you introduce scoping, not to mention the fact
> that your inner loops can't now be protected against common code in the
> outer loop accessing the shared variables.
>
> --
> Rhodri James *-* Wildebeeste Herder to the Masses

Hi Rhodri,

> If you can read this newsgroup for a week and still put your hand on
> your heart and say that programmers will check that there are no
> dependencies before swapping 'par' for 'for', I want to borrow your
> rose-coloured glasses.

I think this depends on whether we think that Python is a language for
people we trust to know what they are doing (like Perl) or whether it
is a language for people we don't trust to get things right (like
Java). I suspect it probably lies somewhere in the middle.

Actually the 'sync' command could lead to deadlock potentially:

par i in range(2):
    if i == 1:
        sync

In this case there are two threads (or virtual threads): one thread
waits for a sync, the other does not, hence deadlock.
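Python's threading.Barrier (added in 3.2) behaves much like the proposed
'sync', and it reproduces this deadlock exactly. The sketch below (mine,
not from the proposal) uses a timeout so the broken rendezvous raises
instead of hanging:

```python
import threading

barrier = threading.Barrier(2, timeout=0.2)   # a 'sync' point for 2 threads
broken = []

def worker(i):
    if i == 1:
        # Only one of the two threads ever reaches the barrier, so the
        # rendezvous can never complete -- the timeout turns the
        # would-be deadlock into a BrokenBarrierError.
        try:
            barrier.wait()
        except threading.BrokenBarrierError:
            broken.append(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(broken)  # [1]
```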

My view about deadlock avoidance is that it should not be built into
the language - that would make everything too restrictive - instead
people should use design patterns which guarantee freedom from
deadlock.

See http://www.wotug.org/docs/jeremy-martin/index.shtml

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


How to prevent re.split() from removing part of string

2009-11-30 Thread Jeremy
I am using re.split to... well, split a string into sections.  I want
to split when, following a new line, there are 4 or fewer spaces.  The
pattern I use is:

sections = re.split('\n\s{,4}[^\s]', lineoftext)

This splits appropriately but I lose the character matched by [^\s].  I
know I can put parentheses around [^\s] and keep the matched character,
but the character is placed in its own element of the list instead of
with the rest of the lineoftext.

Does anyone know how I can accomplish this without losing the matched
character?

Thanks,
Jeremy

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to prevent re.split() from removing part of string

2009-12-01 Thread Jeremy
On Nov 30, 5:24 pm, MRAB  wrote:
> Jeremy wrote:
> > I am using re.split to... well, split a string into sections.  I want
> > to split when, following a new line, there are 4 or fewer spaces.  The
> > pattern I use is:
>
> >         sections = re.split('\n\s{,4}[^\s]', lineoftext)
>
> > This splits appropriately but I lose the character matched by [^\s].  I
> > know I can put parentheses around [^\s] and keep the matched character,
> > but the character is placed in its own element of the list instead of
> > with the rest of the lineoftext.
>
> > Does anyone know how I can accomplish this without losing the matched
> > character?
>
> First of all, \s matches any character that's _whitespace_, such as
> space, "\t", "\n", "\r", "\f". There's also \S, which matches any
> character that's not whitespace.

Thanks for the reminder.  I knew \S existed, but must have forgotten
about it.
>
> But in answer to your question, use a look-ahead:
>
>      sections = re.split('\n {,4}(?=\S)', lineoftext)

Yep, that does the trick.  Thanks for the help!
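For anyone finding this thread later, here is the look-ahead in action
(sample text mine): (?=\S) asserts that a non-space character follows
without consuming it, so the first character of each section survives
the split.

```python
import re

text = "alpha\n  beta line\n      continuation\n bravo"
# Split on a newline followed by at most 4 spaces, peeking at (but not
# consuming) the non-space character that follows.
sections = re.split(r'\n {,4}(?=\S)', text)
print(sections)  # ['alpha', 'beta line\n      continuation', 'bravo']
```

The line with six leading spaces is not a split point, so it stays attached
to the preceding section.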

Jeremy

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Different number of matches from re.findall and re.split

2010-01-11 Thread Jeremy
On Jan 11, 8:44 am, Iain King  wrote:
> On Jan 11, 3:35 pm, Jeremy  wrote:
>
>
>
>
>
> > Hello all,
>
> > I am using re.split to separate some text into logical structures.
> > The trouble is that re.split doesn't find everything while re.findall
> > does; i.e.:
>
> > > found = re.findall('^ 1', line, re.MULTILINE)
> > > len(found)
> >    6439
> > > tables = re.split('^ 1', line, re.MULTILINE)
> > > len(tables)
> > > 1
>
> > Can someone explain why these two commands are giving different
> > results?  I thought I should have the same number of matches (or maybe
> > different by 1, but not 6000!)
>
> > Thanks,
> > Jeremy
>
> re.split doesn't take re.MULTILINE as a flag: it doesn't take any
> flags. It does take a maxsplit parameter, which you are passing the
> value of re.MULTILINE (which happens to be 8 in my implementation).
> Since your pattern is looking for line starts, and your first line
> presumably has more splits than the maxsplits you are specifying, your
> re.split never finds more than 1.

Yep.  Thanks for pointing that out.  I guess I just assumed that
re.split was similar to re.search/match/findall in what it accepted as
function parameters.  I guess I'll have to use a \n instead of a ^ for
split.

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Different number of matches from re.findall and re.split

2010-01-11 Thread Jeremy
On Jan 11, 9:28 am, Duncan Booth  wrote:
> MRAB  wrote:
> >> Yep.  Thanks for pointing that out.  I guess I just assumed that
> >> re.split was similar to re.search/match/findall in what it accepted as
> >> function parameters.  I guess I'll have to use a \n instead of a ^ for
> >> split.
>
> > You could use the .split method of a pattern object instead:
>
> >      tables = re.compile('^ 1', re.MULTILINE).split(line)
>
> or you might include the flag in the regular expression literal: '(?m)^ 1'

Another great solution.  This is what I will do.

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


What is built-in method sub

2010-01-11 Thread Jeremy
I just profiled one of my Python scripts and discovered that >99% of
the time was spent in

{built-in method sub}

What is this function and is there a way to optimize it?

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is built-in method sub

2010-01-11 Thread Jeremy
On Jan 11, 12:54 pm, Carl Banks  wrote:
> On Jan 11, 11:20 am, Jeremy  wrote:
>
> > I just profiled one of my Python scripts and discovered that >99% of
> > the time was spent in
>
> > {built-in method sub}
>
> > What is this function and is there a way to optimize it?
>
> I'm guessing this is re.sub (or, more likely, a method sub of an
> internal object that is called by re.sub).
>
> If all your script does is to make a bunch of regexp substitutions,
> then spending 99% of the time in this function might be reasonable.
> Optimize your regexps to improve performance.  (We can help you if you
> care to share any.)
>
> If my guess is wrong, you'll have to be more specific about what your
> sctipt does, and maybe share the profile printout or something.
>
> Carl Banks

Your guess is correct.  I had forgotten that I was using that
function.

I am using the re.sub command to remove trailing whitespace from lines
in a text file.  The commands I use are copied below.  If you have any
suggestions on how they could be improved, I would love to know.

Thanks,
Jeremy

lines = self._outfile.readlines()
self._outfile.close()

line = string.join(lines)

if self.removeWS:
    # Remove trailing white space on each line
    trailingPattern = '(\S*)\ +?\n'
    line = re.sub(trailingPattern, '\\1\n', line)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is built-in method sub

2010-01-11 Thread Jeremy
On Jan 11, 1:15 pm, "Diez B. Roggisch"  wrote:
> Jeremy schrieb:
>
>
>
>
>
> > On Jan 11, 12:54 pm, Carl Banks  wrote:
> >> On Jan 11, 11:20 am, Jeremy  wrote:
>
> >>> I just profiled one of my Python scripts and discovered that >99% of
> >>> the time was spent in
> >>> {built-in method sub}
> >>> What is this function and is there a way to optimize it?
> >> I'm guessing this is re.sub (or, more likely, a method sub of an
> >> internal object that is called by re.sub).
>
> >> If all your script does is to make a bunch of regexp substitutions,
> >> then spending 99% of the time in this function might be reasonable.
> >> Optimize your regexps to improve performance.  (We can help you if you
> >> care to share any.)
>
> >> If my guess is wrong, you'll have to be more specific about what your
> >> sctipt does, and maybe share the profile printout or something.
>
> >> Carl Banks
>
> > Your guess is correct.  I had forgotten that I was using that
> > function.
>
> > I am using the re.sub command to remove trailing whitespace from lines
> > in a text file.  The commands I use are copied below.  If you have any
> > suggestions on how they could be improved, I would love to know.
>
> > Thanks,
> > Jeremy
>
> > lines = self._outfile.readlines()
> > self._outfile.close()
>
> > line = string.join(lines)
>
> > if self.removeWS:
> >     # Remove trailing white space on each line
> >     trailingPattern = '(\S*)\ +?\n'
> >     line = re.sub(trailingPattern, '\\1\n', line)
>
> line = line.rstrip()?
>
> Diez

Yep.  I was trying to reinvent the wheel.  I just remove the trailing
whitespace before joining the lines.
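In other words (sketch mine, with made-up sample lines), the whole regex
collapses to a per-line rstrip before joining:

```python
lines = ["keep this   \n", "and this\t\n", "plain\n"]
# rstrip removes each line's trailing whitespace (including the newline),
# and join puts single newlines back between the lines.
text = '\n'.join(line.rstrip() for line in lines)
print(repr(text))  # 'keep this\nand this\nplain'
```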

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


distutils not finding all of my pure python modules

2010-01-21 Thread Jeremy
I have a small set of Python packages/modules that I am putting
together.  I'm having trouble in that when I run

python setup.py sdist

I don't get all of my pure python modules.  The setup.py script I use
is:

# =
from distutils.core import setup

purePythonModules = ['regex', 'gnuFile']

setup(name='PythonForSafeguards',
      version='0.9.1',
      description = 'Python code for MCNP and Safeguards analysis.',
      author = 'Jake the Snake',
      author_email = 'someth...@blah.com',
      packages = ['MCNP', 'Safeguards'],
      url='http://lanl.gov',
      py_modules = purePythonModules,
      )

# =

Everything seems to work fine except the gnuFile.py script does not get
put into the distribution.  I know the file exists in the same
directory as regex.py and has the same permissions.

Does anyone know what is going on here?  I'm using Python 2.6.4.

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


How to schedule system calls with Python

2009-10-15 Thread Jeremy
I need to write a Python script that will call some command line
programs (using os.system).  I will have many such calls, but I want
to control when the calls are made.  I won't know in advance how long
each program will run and I don't want to have 10 programs running
when I only have one or two processors.  I want to run one at a time
(or two if I have two processors), wait until it's finished, and then
call the next one.

How can I use Python to schedule these commands?

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to schedule system calls with Python

2009-10-15 Thread Jeremy
On Oct 15, 2:15 pm, TerryP  wrote:
> On Oct 15, 7:42 pm, Jeremy  wrote:
>
> > I need to write a Python script that will call some command line
> > programs (using os.system).  I will have many such calls, but I want
> > to control when the calls are made.  I won't know in advance how long
> > each program will run and I don't want to have 10 programs running
> > when I only have one or two processors.  I want to run one at a time
> > (or two if I have two processors), wait until it's finished, and then
> > call the next one.
>
> > How can I use Python to schedule these commands?
>
> > Thanks,
> > Jeremy
>
> External programs are not system calls; external programs are invoked
> through system calls; for example system() is a function call which
> when implemented under UNIX systems invokes some form of fork() and
> exec(), and likely spawn() under Windows NT.
>
> If you want simple sequential execution of external programs, use a
> suitable blocking function to execute them (like system) combined with
> a simple loop over the sequence of commands to run.
>
> for prog in ['cmd1', 'cmd2', 'cmd3']:
>     os.system(prog)
>
> blah.
>
> For anything more detailed (or complex) in response, try being more
> detailed yourself ;).

This is the solution I wanted.  I thought that os.system(prog) would
return immediately regardless of how long prog takes to run.  I should
have tried this simple solution first.  Thanks for being patient.

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to schedule system calls with Python

2009-10-15 Thread Jeremy
On Oct 15, 6:32 pm, MRAB  wrote:
> TerryP wrote:
> > On Oct 15, 7:42 pm, Jeremy  wrote:
> >> I need to write a Python script that will call some command line
> >> programs (using os.system).  I will have many such calls, but I want
> >> to control when the calls are made.  I won't know in advance how long
> >> each program will run and I don't want to have 10 programs running
> >> when I only have one or two processors.  I want to run one at a time
> >> (or two if I have two processors), wait until it's finished, and then
> >> call the next one.
>
> >> How can I use Python to schedule these commands?
>
> >> Thanks,
> >> Jeremy
>
> > External programs are not system calls; external programs are invoked
> > through system calls; for example system() is a function call which
> > when implemented under UNIX systems invokes some form of fork() and
> > exec(), and likely spawn() under Windows NT.
>
> > If you want simple sequential execution of external programs, use a
> > suitable blocking function to execute them (like system) combined with
> > a simple loop over the sequence of commands to run.
>
> > for prog in ['cmd1', 'cmd2', 'cmd3']:
> >     os.system(prog)
>
> > blah.
>
> > For anything more detailed (or complex) in response, try being more
> > detailed yourself ;).
>
> You could use multithreading: put the commands into a queue; start the
> same number of worker threads as there are processors; each worker
> thread repeatedly gets a command from the queue and then runs it using
> os.system(); if a worker thread finds that the queue is empty when it
> tries to get a command, then it terminates.

Yes, this is it.  If I have a list of strings which are system
commands, this seems like a more intelligent way of approaching it.
My previous response will work, but won't take advantage of multiple
cpus/cores in a machine without some manual manipulation.  I like this
idea.
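MRAB's queue-plus-workers pattern looks roughly like this sketch (mine,
with subprocess.call standing in for os.system and a trivial command
list):

```python
import queue
import subprocess
import sys
import threading

# Trivial stand-in commands; real use would queue the actual programs.
commands = [[sys.executable, '-c', 'pass'] for _ in range(4)]

q = queue.Queue()
for cmd in commands:
    q.put(cmd)

returncodes = []

def worker():
    while True:
        try:
            cmd = q.get_nowait()
        except queue.Empty:
            break                      # queue drained: this worker exits
        returncodes.append(subprocess.call(cmd))  # blocks until done

# One worker per (pretend) processor, so at most 2 commands run at once.
workers = [threading.Thread(target=worker) for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(returncodes)  # [0, 0, 0, 0]
```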

Thanks!
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to schedule system calls with Python

2009-10-15 Thread Jeremy
On Oct 15, 6:47 pm, Ishwor Gurung  wrote:
> Jeremy,
> Hi
>
> > I need to write a Python script that will call some command line
> > programs (using os.system).  I will have many such calls, but I want
> > to control when the calls are made.  I won't know in advance how long
> > each program will run and I don't want to have 10 programs running
> > when I only have one or two processors.  I want to run one at a time
> > (or two if I have two processors), wait until it's finished, and then
> > call the next one.
>
> Right.
>
> > How can I use Python to schedule these commands?
>
> If I were as lucky as you, I would have used multiprocessing module[1]
> (my platform does not have sem_open() syscall). Others suggestions are
> as good as it can be but yeah you could get a lot of work done using
> multiprocessing module(all the relevant bits are explained in the
> module doc).
>
> [1]http://docs.python.org/library/multiprocessing.html
> --
> Regards,
> Ishwor Gurung

Again another great suggestion.  I was not aware of the
multiprocessing module, and I'm not (yet) sure I understand why I
should use it instead of multithreading as explained in a previous post.

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Please help with regular expression finding multiple floats

2009-10-22 Thread Jeremy
I have text that looks like the following (but all in one string with
'\n' separating the lines):

1.E-08   1.58024E-06 0.0048
1.E-07   2.98403E-05 0.0018
1.E-06   8.85470E-06 0.0026
1.E-05   6.08120E-06 0.0032
1.E-03   1.61817E-05 0.0022
1.E+00   8.34460E-05 0.0014
2.E+00   2.31616E-05 0.0017
5.E+00   2.42717E-05 0.0017
  total  1.93417E-04 0.0012

I want to capture the two or three floating point numbers in each line
and store them in a tuple.  I want to find all such tuples such that I
have
[('1.E-08', '1.58024E-06', '0.0048'),
 ('1.E-07', '2.98403E-05', '0.0018'),
 ('1.E-06', '8.85470E-06', '0.0026'),
 ('1.E-05', '6.08120E-06', '0.0032'),
 ('1.E-03', '1.61817E-05', '0.0022'),
 ('1.E+00', '8.34460E-05', '0.0014'),
 ('2.E+00', '2.31616E-05', '0.0017'),
 ('5.E+00', '2.42717E-05', '0.0017')
 ('1.93417E-04', '0.0012')]

as a result.  I have the regular expression pattern

fp1 = '([-+]?\d*\.?\d+(?:[eE][-+]?\d+)?)\s+'

which can find a floating point number followed by some space.  I can
find three floats with

found = re.findall('%s%s%s' % (fp1, fp1, fp1), text)

My question is, how can I use regular expressions to find two OR three
or even an arbitrary number of floats without repeating %s?  Is this
possible?
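One alternative that avoids repeating %s altogether: apply findall line by
line and tuple each result, so every line can carry any number of floats.
(Sketch mine; note the pattern is adjusted from fp1 so that forms like
1.E-08, with no digits after the point, match whole.)

```python
import re

text = ("1.E-08   1.58024E-06 0.0048\n"
        "  total  1.93417E-04 0.0012")
# Accept "1.", ".5" and "1.5" style mantissas, plus an optional exponent.
fp = r'[-+]?(?:\d+\.?\d*|\.\d+)(?:[eE][-+]?\d+)?'
rows = [tuple(re.findall(fp, line)) for line in text.splitlines()]
print(rows)
```

Each line yields one tuple with however many floats it contains, and the
"total" line simply produces a shorter tuple.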

Thanks,
Jeremy




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Please help with regular expression finding multiple floats

2009-10-23 Thread Jeremy
On Oct 23, 3:48 am, Edward Dolan  wrote:
> On Oct 22, 3:26 pm, Jeremy  wrote:
>
> > My question is, how can I use regular expressions to find two OR three
> > or even an arbitrary number of floats without repeating %s?  Is this
> > possible?
>
> > Thanks,
> > Jeremy
>
> Any time you have tabular data such as your example, split() is
> generally the first choice. But since you asked, and I like fscking
> with regular expressions...
>
> import re
>
> # I modified your data set just a bit to show that it will
> # match zero or more space separated real numbers.
>
> data = """
> 1.E-08
>
> 1.E-08 1.58024E-06 0.0048 1.E-08 1.58024E-06 0.0048
> 1.E-07 2.98403E-05 0.0018
> foo bar baaz
> 1.E-06 8.85470E-06 0.0026
> 1.E-05 6.08120E-06 0.0032
> 1.E-03 1.61817E-05 0.0022
> 1.E+00 8.34460E-05 0.0014
> 2.E+00 2.31616E-05 0.0017
> 5.E+00 2.42717E-05 0.0017
> total 1.93417E-04 0.0012
> """
>
> ntuple = re.compile(r"""
>     # match beginning of line (re.M in the docs)
>     ^
>     # chew up anything before the first real (non-greedy -> ?)
>     .*?
>     # named match (turn the match into a named atom while allowing
>     # irrelevant (groups))
>     (?P<ntuple>
>       # match one real
>       [-+]?(\d*\.\d+|\d+\.\d*)([eE][-+]?\d+)?
>       # followed by zero or more space separated reals
>       ([ \t]+[-+]?(\d*\.\d+|\d+\.\d*)([eE][-+]?\d+)?)*
>     )
>     # match end of line (re.M in the docs)
>     $
>     """, re.X | re.M)  # re.X to allow comments and arbitrary whitespace
>
> print [tuple(mo.group('ntuple').split())
>        for mo in re.finditer(ntuple, data)]
>
> Now compare the previous post using split with this one. Even with the
> comments in the re, it's still a bit difficult to read. Regular
> expressions are brittle. My code works fine for the data above but if
> you change the structure the re will probably fail. At that point, you
> have to fiddle with the re to get it back on course.
>
> Don't get me wrong, regular expressions are hella fun to play with.
> You have to ask yourself, "Do I really _need_ to use a regular
> expression here?"

In this simplified example I don't really need regular expressions.
However I will need regular expressions for more complex problems and
I'm trying to become more proficient at using regular expressions.  I
tried to simplify this so as not to bother the mailing list too much.

Thanks for the great suggestion.  It looks like it will work fine, but
I can't get it to work.  I downloaded the simple script you put on
http://codepad.org/Z7eWBusl  but it only prints an empty list.  Am I
missing something?

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Please help with regular expression finding multiple floats

2009-10-26 Thread Jeremy
On Oct 24, 12:00 am, Edward Dolan  wrote:
> No, you're not missing a thing. I am ;) Something was happening with
> the triple-quoted strings when I pasted them. Here is, hopefully, the
> correct code: http://codepad.org/OIazr9lA
> The output is shown on that page as well.
> The output is shown on that page as well.
>
> Sorry for the line noise folks. One of these days I'm going to learn
> gnus.

Yep now that works.  Thanks for the help.
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Please help with MemoryError

2010-02-11 Thread Jeremy
I have been using Python for several years now and have never run into
memory errors…

until now.

My Python program now consumes over 2 GB of memory and then I get a
MemoryError.  I know I am reading lots of files into memory, but not
2GB worth.  I thought I didn't have to worry about memory allocation
in Python because of the garbage collector.  On this note I have a few
questions.  FYI I am using Python 2.6.4 on my Mac.

1. When I pass a variable to the constructor of a class does it
   copy that variable or is it just a reference/pointer?  I was under
   the impression that it was just a pointer to the data.
2. When do I need to manually allocate/deallocate memory and when can
   I trust Python to take care of it?
3. Any good practice suggestions?

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Please help with MemoryError

2010-02-12 Thread Jeremy
On Feb 11, 6:50 pm, Steven D'Aprano  wrote:
> On Thu, 11 Feb 2010 15:39:09 -0800, Jeremy wrote:
> > My Python program now consumes over 2 GB of memory and then I get a
> > MemoryError.  I know I am reading lots of files into memory, but not 2GB
> > worth.
>
> Are you sure?
>
> Keep in mind that Python has a comparatively high overhead due to its
> object-oriented nature. If you have a list of characters:
>
> ['a', 'b', 'c', 'd']
>
> there is the (small) overhead of the list structure itself, but each
> individual character is not a single byte, but a relatively large object:
>
>  >>> sys.getsizeof('a')
> 32
>
> So if you read (say) a 500MB file into a single giant string, you will
> have 500MB plus the overhead of a single string object (which is
> negligible). But if you read it into a list of 500 million single
> characters, you will have the overhead of a single list, plus 500 million
> strings, and that's *not* negligible: 32 bytes each instead of 1.
>
> So try to avoid breaking a single huge strings into vast numbers of tiny
> strings all at once.
>
> > I thought I didn't have to worry about memory allocation in
> > Python because of the garbage collector.
>
> You don't have to worry about explicitly allocating memory, and you
> almost never have to worry about explicitly freeing memory (unless you
> are making objects that, directly or indirectly, contain themselves --
> see below); but unless you have an infinite amount of RAM available of
> course you can run out of memory if you use it all up :)
>
> > On this note I have a few
> > questions.  FYI I am using Python 2.6.4 on my Mac.
>
> > 1.    When I pass a variable to the constructor of a class does it copy
> > that variable or is it just a reference/pointer?  I was under the
> > impression that it was just a pointer to the data.
>
> Python's calling model is the same whether you pass to a class
> constructor or any other function or method:
>
> x = ["some", "data"]
> obj = f(x)
>
> The function f (which might be a class constructor) sees the exact same
> list as you assigned to x -- the list is not copied first. However,
> there's no promise made about what f does with that list -- it might copy
> the list, or make one or more additional lists:
>
> def f(a_list):
>     another_copy = a_list[:]
>     another_list = map(int, a_list)
>
> > 2.    When do I need
> > to manually allocate/deallocate memory and when can I trust Python to
> > take care of it?
>
> You never need to manually allocate memory.
>
> You *may* need to deallocate memory if you make "reference loops", where
> one object refers to itself:
>
> l = []  # make an empty list
> l.append(l)  # add the list l to itself
>
> Python can break such simple reference loops itself, but for more
> complicated ones, you may need to break them yourself:
>
> a = []
> b = {2: a}
> c = (None, b)
> d = [1, 'z', c]
> a.append(d)  # a reference loop
>
> Python will deallocate objects when they are no longer in use. They are
> always considered in use any time you have them assigned to a name, or in
> a list or dict or other structure which is in use.
>
> You can explicitly remove a name with the del command. For example:
>
> x = ['my', 'data']
> del x
>
> After deleting the name x, the list object itself is no longer in use
> anywhere and Python will deallocate it. But consider:
>
> x = ['my', 'data']
> y = x  # y now refers to THE SAME list object
> del x
>
> Although you have deleted the name x, the list object is still bound to
> the name y, and so Python will *not* deallocate the list.
>
> Likewise:
>
> x = ['my', 'data']
> y = [None, 1, x, 'hello world']
> del x
>
> Although now the list isn't bound to a name, it is inside another list,
> and so Python will not deallocate it.
>
> > 3.    Any good practice suggestions?
>
> Write small functions. Any temporary objects created by the function will
> be automatically deallocated when the function returns.
>
> Avoid global variables. They are a good way to inadvertently end up with
> multiple long-lasting copies of data.
>
> Try to keep data in one big piece rather than lots of little pieces.
>
> But contradicting the above, if the one big piece is too big, it will be
> hard for the operating system to swap it in and out of virtual memory,
> causing thrashing, which is *really* slow. So aim for big, but not huge.
>
> (By "big" I mean megab
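The per-object overhead described above can be measured directly with sys.getsizeof (a quick sketch; the exact byte counts vary by Python version and build):

```python
import sys

big = "x" * 1000                 # one 1000-character string
one_piece = sys.getsizeof(big)   # payload plus a single object header

# cost of a single one-character string object, versus one byte of payload
per_char = sys.getsizeof("x")

print(one_piece, per_char)
# a list of 1000 single-character strings would need the list itself plus
# up to 1000 * per_char bytes, dwarfing the one-piece string
```
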

Can I specify regex group to return float or int instead of string?

2010-02-25 Thread Jeremy
I have a regular expression that searches for some numbers and puts
them into a dictionary, i.e.

'(?P\d+)\s+(?P\d+\.\d+)'

Is it possible to have the results of the matches returned as int or
float objects instead of strings?

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Can I specify regex group to return float or int instead of string?

2010-02-25 Thread Jeremy
On Feb 25, 9:41 am, Steven D'Aprano  wrote:
> On Thu, 25 Feb 2010 07:48:44 -0800, Jeremy wrote:
> > I have a regular expression that searches for some numbers and puts them
> > into a dictionary, i.e.
>
> > '(?P\d+)\s+(?P\d+\.\d+)'
>
> > Is it possible to have the results of the matches returned as int or
> > float objects instead of strings?
>
> No. Just convert the match with int() or float() before storing it in the
> dictionary. That is, instead of:
>
> d[key] = match
>
> use
>
> d[key] = float(match)
>
> or similar.

I was afraid that was the only option.  Oh well, thought I'd ask
anyway.  Thanks for your help.
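One common way to spell that conversion, mapping each group name to a converter (the group names here are illustrative, not the original poster's, which were lost in archiving):

```python
import re

# hypothetical group names standing in for the original pattern's
pat = re.compile(r'(?P<count>\d+)\s+(?P<value>\d+\.\d+)')
converters = {'count': int, 'value': float}

m = pat.search("42   3.14")
# apply the right converter to each captured string on the way in
d = dict((name, converters[name](text))
         for name, text in m.groupdict().items())
print(d)   # {'count': 42, 'value': 3.14}
```
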
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Dictionary or Database—Please advise

2010-02-26 Thread Jeremy
I have lots of data that I currently store in dictionaries.  However,
the memory requirements are becoming a problem.  I am considering
using a database of some sorts instead, but I have never used them
before.  Would a database be more memory efficient than a dictionary?
I also need platform independence without having to install a database
and Python interface on all the platforms I'll be using.  Is there
something built-in to Python that will allow me to do this?

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Dictionary or Database—Please advise

2010-02-26 Thread Jeremy
On Feb 26, 9:29 am, Chris Rebert  wrote:
> On Fri, Feb 26, 2010 at 7:58 AM, Jeremy  wrote:
> > I have lots of data that I currently store in dictionaries.  However,
> > the memory requirements are becoming a problem.  I am considering
> > using a database of some sorts instead, but I have never used them
> > before.  Would a database be more memory efficient than a dictionary?
> > I also need platform independence without having to install a database
> > and Python interface on all the platforms I'll be using.  Is there
> > something built-in to Python that will allow me to do this?
>
> If you won't be using the SQL features of the database, `shelve` might
> be another option; from what I can grok, it sounds like a dictionary
> stored mostly on disk rather than entirely in RAM (not 100% sure
> though): http://docs.python.org/library/shelve.html
>
> It's in the std lib and supports several native dbm libraries for its
> backend; one of them should almost always be present.
>
> Cheers,
> Chris
> --http://blog.rebertia.com

Shelve looks like an interesting option, but what might pose an issue
is that I'm reading the data from a disk instead of memory.  I didn't
mention this in my original post, but I was hoping that by using a
database it would be more memory efficient in storing data in RAM so I
wouldn't have to read from (or swap to/from) disk.  Would using the
shelve package make reading/writing data from disk faster since it is
in a binary format?
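For reference, shelve's dict-like interface looks like this (a minimal sketch writing to a temporary directory):

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "cache")

db = shelve.open(path)     # looks like a dict, but lives (mostly) on disk
db["alpha"] = [1, 2, 3]    # values are pickled transparently
db.close()

db = shelve.open(path)     # reopen: the data survived
value = db["alpha"]
print(value)               # [1, 2, 3]
db.close()
```
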

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


How can I define class methods outside of the class?

2010-12-01 Thread Jeremy
I have some methods that I need (would like) to define outside of the
class.  I know this can be done by defining the function and then
setting it equal to some member of an instance of the class.  But,
because of the complexity of what I'm doing (I have to set many
functions as class methods) I would rather not do this.  Can someone
show me how to do this?  Is it even possible?  Can decorators be used
here?

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I define class methods outside of the class?

2010-12-02 Thread Jeremy
On Dec 1, 10:47 pm, James Mills  wrote:
> On Thu, Dec 2, 2010 at 3:36 PM, Jeremy  wrote:
> > I have some methods that I need (would like) to define outside of the
> > class.  I know this can be done by defining the function and then
> > setting it equal to some member of an instance of the class.  But,
> > because of the complexity of what I'm doing (I have to set many
> > functions as class methods) I would rather not do this.  Can someone
> > show me how to do this?  Is it even possible?  Can decorators be used
> > here?
>
> Do you mean something like this ?
>
> @classmethod
> def foo(cls):
>     print "I am the foo classmethod on %r" % cls
>
> class Foo(object):
>     pass
>
> Foo.foo = foo
>
> cheers
> James

Thanks, James.  That is almost exactly what I want.  However, I want
to avoid doing

Foo.foo = foo

Is this going to be possible?  I'm trying to understand how decorators
are used.  Are they really necessary in this example?

Thanks,
Jeremy





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I define class methods outside of the class?

2010-12-02 Thread Jeremy
On Dec 2, 10:26 am, "bruno.desthuilli...@gmail.com"
 wrote:
> On 2 déc, 15:45, Jeremy  wrote:
>
>
>
>
>
> > On Dec 1, 10:47 pm, James Mills  wrote:
>
> > > On Thu, Dec 2, 2010 at 3:36 PM, Jeremy  wrote:
> > > > I have some methods that I need (would like) to define outside of the
> > > > class.  I know this can be done by defining the function and then
> > > > setting it equal to some member of an instance of the class.  But,
> > > > because of the complexity of what I'm doing (I have to set many
> > > > functions as class methods) I would rather not do this.  Can someone
> > > > show me how to do this?  Is it even possible?  Can decorators be used
> > > > here?
>
> > > Do you mean something like this ?
>
> > > @classmethod
> > > def foo(cls):
> > >     print "I am the foo classmethod on %r" % cls
>
> > > class Foo(object):
> > >     pass
>
> > > Foo.foo = foo
>
> > > cheers
> > > James
>
> > Thanks, James.  That is almost exactly what I want.  However, I want to 
> > avoid doing
>
> > Foo.foo = foo
>
> > Is this going to be possible?  
>
> def patch(cls):
>    def _patch(func):
>        setattr(cls, func.__name__, func)
>        return func
>    return _patch
>
> class Foo(object): pass
>
> @patch(Foo)
> def bar(self):
>     print self
>
> f = Foo()
> f.bar()

Yes!  This is what I was looking for.  Thanks!
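Bruno's decorator, spelled out as a self-contained, runnable sketch (print spelling modernised; the `bar` method is just a demonstration):

```python
def patch(cls):
    """Return a decorator that attaches the decorated function to cls."""
    def _patch(func):
        setattr(cls, func.__name__, func)
        return func
    return _patch

class Foo(object):
    pass

@patch(Foo)            # defines bar() outside the class, then attaches it
def bar(self):
    return "bar called on %s" % type(self).__name__

result = Foo().bar()
print(result)          # bar called on Foo
```
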

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Regular Expression for Finding and Deleting comments

2011-01-04 Thread Jeremy
I am trying to write a regular expression that finds and deletes (replaces with 
nothing) comments in a string/file.  A comment is either a line whose first 
non-whitespace character is a 'c', or everything from a dollar sign to the end 
of the line.  I want to replace these comments with nothing, which isn't too 
hard.  The trouble is, the comments are replaced with a new-line; or the 
new-line isn't captured in the regular expression.  

Below, I have copied a minimal example.  Can someone help?

Thanks,
Jeremy


import re

text = """ c
C - Second full line comment (first comment had no text)
c   Third full line comment
  F44:N 2$ Inline comments start with dollar sign and go to end of line"""

commentPattern = re.compile("""
(^\s*?c\s*?.*?| # Comment start with c or C
\$.*?)$\n   # Comment starting with $
""", re.VERBOSE|re.MULTILINE|re.IGNORECASE)

found = commentPattern.finditer(text)

print("\n\nCard:\n--\n%s\n--" %text)

if found:
   print("\nI found the following:")
   for f in found: print(f.groups())

else:
   print("\nNot Found")

print("\n\nComments replaced with ''")
replaced = commentPattern.sub('', text)
print("--\n%s\n--" %replaced)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Regular Expression for Finding and Deleting comments

2011-01-04 Thread Jeremy
On Tuesday, January 4, 2011 11:26:48 AM UTC-7, MRAB wrote:
> On 04/01/2011 17:11, Jeremy wrote:
> > I am trying to write a regular expression that finds and deletes (replaces 
> > with nothing) comments in a string/file.  Comments are defined by the first 
> > non-whitespace character is a 'c' or a dollar sign somewhere in the line.  
> > I want to replace these comments with nothing which isn't too hard.  The 
> > trouble is, the comments are replaced with a new-line; or the new-line 
> > isn't captured in the regular expression.
> >
> > Below, I have copied a minimal example.  Can someone help?
> >
> > Thanks,
> > Jeremy
> >
> >
> > import re
> >
> > text = """ c
> > C - Second full line comment (first comment had no text)
> > c   Third full line comment
> >F44:N 2$ Inline comments start with dollar sign and go to end of 
> > line"""
> >
> > commentPattern = re.compile("""
> >  (^\s*?c\s*?.*?| # Comment start with c or C
> >  \$.*?)$\n   # Comment starting with $
> >  """, re.VERBOSE|re.MULTILINE|re.IGNORECASE)
> >
> Part of the problem is that you're not using raw string literals or
> doubling the backslashes.
> 
> Try soemthing like this:
> 
> commentPattern = re.compile(r"""
>  (^[ \t]*c.*\n|  # Comment start with c or C
>  [ \t]*\$.*) # Comment starting with $
>  """, re.VERBOSE|re.MULTILINE|re.IGNORECASE)

Using a raw string literal fixed the problem for me.  Thanks for the 
suggestion.  Why is that so important?
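For what it's worth, the reason raw literals matter: in an ordinary string literal Python interprets recognised escape sequences itself, before the regex engine ever sees the pattern, so the backslash can silently disappear:

```python
import re

print(len("\t"))    # 1 -- Python turned the escape into a real tab character
print(len(r"\t"))   # 2 -- backslash plus 't', left for the regex engine

# both happen to match a tab only because re also understands \t; with
# sequences like \b ("backspace" vs. regex "word boundary") the two
# spellings behave very differently
assert re.match(r"\t", "\t")
assert len("\b") == 1 and len(r"\b") == 2
```
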

Jeremy

-- 
http://mail.python.org/mailman/listinfo/python-list


Convert unicode escape sequences to unicode in a file

2011-01-11 Thread Jeremy
I have a file that has unicode escape sequences, i.e., 

J\u00e9r\u00f4me

and I want to replace all of them in a file and write the results to a new 
file.  The simple script I've created is copied below.  However, I am getting 
the following error:

UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 
947: ordinal not in range(128)

It appears that the data isn't being converted when writing to the file.  Can 
someone please help?

Thanks,
Jeremy


import codecs
import re

if __name__ == "__main__":
    # 'filename' is defined elsewhere in the poster's script
    f = codecs.open(filename, 'r', 'unicode-escape')
    lines = f.readlines()
    line = ''.join(lines)
    f.close()

    utFound = re.sub('STRINGDECODE\((.+?)\)', r'\1', line)
    print(utFound[:1000])

    o = open('newDice.sql', 'w')
    o.write(utFound.decode('utf-8'))  # this decode() is the bug in question
    o.close()
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Convert unicode escape sequences to unicode in a file

2011-01-11 Thread Jeremy
On Tuesday, January 11, 2011 3:36:26 PM UTC-7, Alex wrote:

> 
> Are you _sure_ that your file contains the characters '\', 'u', '0',
> '0', 'e' and '9'? I expect that actually your file contains a byte
> with value 0xe9 and you have inspected the file using Python, which
> has printed the byte using a Unicode escape sequence. Open the file
> using a text editor or hex editor and look at the value at offset 947
> to be sure.
> 
> If so, you need to replace 'unicode-escape' with the actual encoding
> of the file.

Yeah, I'm sure that's what the file contains.  In fact, I solved my own problem 
while waiting for an answer.  When writing to the file I need to *en*code 
instead of *de*code; i.e.,

o = open('newDice.sql', 'w')
o.write(utFound.encode('utf-8'))
o.close()

That works!
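The same round-trip in Python 3 syntax (the thread above is Python 2), assuming the file really contains literal backslash sequences: decode with 'unicode-escape' on the way in, encode to UTF-8 on the way out.

```python
raw = b'J\\u00e9r\\u00f4me'            # bytes exactly as stored in the file
text = raw.decode('unicode-escape')    # interpret the \uXXXX sequences
utf8 = text.encode('utf-8')            # bytes that are safe to write out

print(text)    # Jérôme
```
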
-- 
http://mail.python.org/mailman/listinfo/python-list


How can I define __getattr__ to operate on all items of container and pass arguments?

2011-02-15 Thread Jeremy
I have a container object.  It is quite frequent that I want to call a function 
on each item in the container.  I would like to do this whenever I call a 
function on the container that doesn't exist, i.e., the container would return 
an attribute error.

For example

class Cont(object):
def __init__(self): 
self.items = []

def contMethod(self, args):
print("I'm in contMethod.")

def __getattr__(self, name):
for I in self.items:
# How can I pass arguments to I.__dict__[name]?
I.__dict__[name]


>>> C = Cont()
>>> # Add some items to C
>>> C.contMethod()
I'm in contMethod.
>>> C.itemMethod('abc')
??


The trouble I'm getting into is that I can't pass arguments to the attributes 
in the contained item.  In the example above, I can't pass 'abc' to the 
'itemMethod' method of each item in the container.  

Does someone know how I can accomplish this?

Thanks,
Jeremy

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How can I define __getattr__ to operate on all items of container and pass arguments?

2011-02-15 Thread Jeremy
On Tuesday, February 15, 2011 1:44:55 PM UTC-7, Chris Rebert wrote:
> On Tue, Feb 15, 2011 at 12:29 PM, Jeremy  wrote:
> > I have a container object.  It is quite frequent that I want to call a 
> > function on each item in the container.  I would like to do this whenever I 
> > call a function on the container that doesn't exist, i.e., the container 
> > would return an attribute error.
> 
> s/function/method/
> 
> > For example
> >
> > class Cont(object):
> >    def __init__(self):
> >        self.items = []
> >
> >    def contMethod(self, args):
> >        print("I'm in contMethod.")
> >
> >    def __getattr__(self, name):
> >        for I in self.items:
> >            # How can I pass arguments to I.__dict__[name]?
> >            I.__dict__[name]
> >
> 
> > The trouble I'm getting into is that I can't pass arguments to the 
> > attributes in the contained item.  In the example above, I can't pass 'abc' 
> > to the 'itemMethod' method of each item in the container.
> >
> > Does someone know how I can accomplish this?
> 
> Recall that:
> x.y(z)
> is basically equivalent to:
> _a = x.y
> _a(z)
> 
> So the arguments haven't yet been passed when __getattr__() is
> invoked. Instead, you must return a function from __getattr__(); this
> function will then get called with the arguments. Thus (untested):
> 
> def __getattr__(self, name):
>     def _multiplexed(*args, **kwargs):
>         return [getattr(item, name)(*args, **kwargs) for item in self.items]
>     return _multiplexed

Perfect, that's what I needed.  I realized that I didn't have the arguments to 
the function, but couldn't figure out how to do it.  This works like a charm.  
Thanks a lot!
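Chris's suggestion, filled out into a runnable sketch that uses plain strings as the contained items so their methods stand in for `itemMethod`:

```python
class Cont(object):
    def __init__(self, items):
        self.items = items

    def __getattr__(self, name):
        # called only for attributes NOT found normally; return a function
        # that fans the call (with its arguments) out to every item
        def _multiplexed(*args, **kwargs):
            return [getattr(item, name)(*args, **kwargs)
                    for item in self.items]
        return _multiplexed

c = Cont(["abc", "def"])
print(c.upper())             # ['ABC', 'DEF']
print(c.replace('b', 'B'))   # ['aBc', 'def']
```
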

Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


How to read file during module import?

2010-04-09 Thread Jeremy
I have a module that, when loaded, reads and parses a supporting
file.  The supporting file contains all the data for the module and
the function that reads/parses the file sets up the data structure for
the module.

How can I locate the file during the import statement.  The supporting
file is located in the same directory as the module, but when I import
I get a No such file or directory error.  I could hard code the path
to the filename, but that would make it only work on my machine.

A related question: Can I parse the data once and keep it somewhere
instead of reading the supporting file every time?  I tried pickling
but that wouldn't work because I have custom classes.  (Either that or
I just don't know how to pickle—this is a highly probable event.)

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to read file during module import?

2010-04-09 Thread Jeremy
On Apr 9, 4:02 pm, "Gabriel Genellina"  wrote:
> En Fri, 09 Apr 2010 18:04:59 -0300, Jeremy  escribió:
>
> > How can I locate the file during the import statement.  The supporting
> > file is located in the same directory as the module, but when I import
> > I get a No such file or directory error.  I could hard code the path
> > to the filename, but that would make it only work on my machine.
>
> The directory containing the current module is:
>
> module_dir = os.path.dirname(os.path.abspath(__file__))

I didn't know about  __file__ this works!  Thanks.
>
> so you could open your supporting file using:
>
> fn = os.path.join(module_dir, "supporting_file_name.ext")
> open(fn) ...
>
> > A related question: Can I parse the data once and keep it somewhere
> > instead of reading the supporting file every time?  I tried pickling
> > but that wouldn't work because I have custom classes.  (Either that or
> > I just don't know how to pickle—this is a highly probable event.)
>
> What kind of "custom classes"?

My custom classes are not very fancy.  They basically are dictionaries
and lists organizing the data in the supporting file.  I was actually
surprised they didn't pickle because the classes were so simple.

> An open file, or a socket, are examples of non pickleable objects; most  
> other basic built-in objects are pickleable. Instances of user-defined  
> classes are pickleable if they contain pickleable attributes. Micro recipe:
>
> # pickle some_object
> with open(filename, "wb") as f:
>    pickle.dump(some_object, f, -1)

When I did this I got the following error:

PicklingError: Can't pickle : it's not found
as __main__.element

Am I just being dumb?
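Gabriel's `__file__` recipe from earlier in the thread, as a runnable sketch; the standard json module stands in for the poster's own module (inside your module you would use its own `__file__`), and the data-file name is hypothetical:

```python
import os
import json   # stands in for "your module"; any imported module has __file__

# directory containing the module's source file
module_dir = os.path.dirname(os.path.abspath(json.__file__))
# build a path to a supporting file that lives next to the module
data_path = os.path.join(module_dir, "supporting_file_name.ext")

print(module_dir)
```
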

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


weakrefs, threads,, and object ids

2009-06-14 Thread Jeremy
Hello,

I'm using weakrefs in a small multi-threaded application.  I have been
using object IDs as dictionary keys with weakrefs to execute removal
code, and was glad to find out that this is in fact recommended
practice (http://docs.python.org/library/weakref.html)

> This simple example shows how an application can use objects IDs to retrieve
> objects that it has seen before. The IDs of the objects can then be used in 
> other
> data structures without forcing the objects to remain alive, but the objects 
> can
> still be retrieved by ID if they do.

After reading this, I realized it made no explicit mention of thread
safety in this section, whereas other sections made a note of correct
practice with threads.  The docs for the id() function specify

> Return the identity of an object.  This is guaranteed to be unique among
> simultaneously existing objects.  (Hint: it's the object's memory address.)

While guaranteed unique for simultaneously existing objects, how often
will an object assume an ID previously held by former object?  Given
that the ID is a memory address in Python's heap, I assume the answer
is either not often, or very often.

Specifically, I'm wondering if the case can occur that the callback
for a weakref is executed after another object has been allocated with
the same object identifier as a previous object.  If such an object is
inserted into a module-level dictionary, could it over-write a
previous entry with the same identifier and then get deleted whenever
the weakref callback happens to fire?



On a related note, what is a recommended way to obtain a weak
reference to a thread?
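A sketch of the id-keyed registry pattern under discussion (CPython-specific: with reference counting, the callback fires as soon as the last reference disappears, which is what makes the id-reuse race worth worrying about):

```python
import weakref

class Thing(object):
    pass

registry = {}

t = Thing()
key = id(t)

def on_death(ref, key=key):
    # if 'key' had already been reused and overwritten by a newer entry,
    # popping it here would clobber the wrong object -- the hazard above
    registry.pop(key, None)

registry[key] = weakref.ref(t, on_death)
del t                      # CPython reclaims immediately; callback runs

print(key in registry)     # False
```
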
-- 
http://mail.python.org/mailman/listinfo/python-list


Compiling/Installing Python 2.7 on OSX 10.6

2010-11-04 Thread Jeremy
I'm having trouble installing Python 2.7 on OSX 10.6  I was able to
successfully compile it from source, but ran into problems when I did
make install.  The error I got (I received many similar errors) was:

/usr/bin/install -c -m 644 ../LICENSE /home/jlconlin/Library/
Frameworks/Python.framework/Versions/2.7/lib/python2.7/LICENSE.txt
PYTHONPATH=/home/jlconlin/Library/Frameworks/Python.framework/Versions/
2.7/lib/python2.7  DYLD_FRAMEWORK_PATH=/home/jlconlin/src/Python-2.7/
build: \
./python -Wi -tt 
/home/jlconlin/Library/Frameworks/Python.framework/
Versions/2.7/lib/python2.7/compileall.py \
-d 
/home/jlconlin/Library/Frameworks/Python.framework/Versions/2.7/
lib/python2.7 -f \
-x 'bad_coding|badsyntax|site-packages|lib2to3/tests/data' \

/home/jlconlin/Library/Frameworks/Python.framework/Versions/2.7/lib/
python2.7
Listing /home/jlconlin/Library/Frameworks/Python.framework/Versions/
2.7/lib/python2.7 ...
Compiling /home/jlconlin/Library/Frameworks/Python.framework/Versions/
2.7/lib/python2.7/._BaseHTTPServer.py ...
Sorry: TypeError: ('compile() expected string without null bytes',)
Compiling /home/jlconlin/Library/Frameworks/Python.framework/Versions/
2.7/lib/python2.7/._Bastion.py ...
Sorry: TypeError: ('compile() expected string without null bytes',)
Compiling /home/jlconlin/Library/Frameworks/Python.framework/Versions/
2.7/lib/python2.7/._CGIHTTPServer.py ...
Sorry: TypeError: ('compile() expected string without null bytes',)

As you can see I am compiling/installing in my home directory instead
of for the whole system.  The script I used to compile Python 2.7 is:

#! /bin/sh

export CFLAGS="-arch x86_64"
export LDFLAGS="-arch x86_64"

../configure --prefix=$HOME/usr/local \
--enable-framework=$HOME/Library/Frameworks \
--disable-toolbox-glue \
MACOSX_DEPLOYMENT_TARGET=10.6

make
make install

Can anyone help me fix the install error?

Thanks,
Jeremy

PS. Python compiled correctly, but a few modules were not found/made
but I don't think they are important.

Python build finished, but the necessary bits to build these modules
were not found:
_bsddb             dl                 gdbm
imageop            linuxaudiodev      ossaudiodev
spwd               sunaudiodev
To find the necessary bits, look in setup.py in detect_modules() for
the module's name.




-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Compiling/Installing Python 2.7 on OSX 10.6

2010-11-04 Thread Jeremy
On Nov 4, 1:23 pm, Ned Deily  wrote:
> In article
> <3d9139ae-bd6f-4567-bb02-b21a8ba86...@o15g2000prh.googlegroups.com>,
>
>
>
>
>
>  Jeremy  wrote:
> > I'm having trouble installing Python 2.7 on OSX 10.6  I was able to
> > successfully compile it from source, but ran into problems when I did
> > make install.  The error I got (I received many similar errors) was:
>
> > /usr/bin/install -c -m 644 ../LICENSE /home/jlconlin/Library/
> > Frameworks/Python.framework/Versions/2.7/lib/python2.7/LICENSE.txt
> > PYTHONPATH=/home/jlconlin/Library/Frameworks/Python.framework/Versions/
> > 2.7/lib/python2.7  DYLD_FRAMEWORK_PATH=/home/jlconlin/src/Python-2.7/
> > build: \
> >            ./python -Wi -tt 
> > /home/jlconlin/Library/Frameworks/Python.framework/
> > Versions/2.7/lib/python2.7/compileall.py \
> >            -d 
> > /home/jlconlin/Library/Frameworks/Python.framework/Versions/2.7/
> > lib/python2.7 -f \
> >            -x 'bad_coding|badsyntax|site-packages|lib2to3/tests/data' \
> >            
> > /home/jlconlin/Library/Frameworks/Python.framework/Versions/2.7/lib/
> > python2.7
> > Listing /home/jlconlin/Library/Frameworks/Python.framework/Versions/
> > 2.7/lib/python2.7 ...
> > Compiling /home/jlconlin/Library/Frameworks/Python.framework/Versions/
> > 2.7/lib/python2.7/._BaseHTTPServer.py ...
> > Sorry: TypeError: ('compile() expected string without null bytes',)
> > Compiling /home/jlconlin/Library/Frameworks/Python.framework/Versions/
> > 2.7/lib/python2.7/._Bastion.py ...
> > Sorry: TypeError: ('compile() expected string without null bytes',)
> > Compiling /home/jlconlin/Library/Frameworks/Python.framework/Versions/
> > 2.7/lib/python2.7/._CGIHTTPServer.py ...
> > Sorry: TypeError: ('compile() expected string without null bytes',)
>
> How did you obtain and unpack the source?  It looks like you used
> something that created the old-style "._" hidden forks when extracting
> the source.  

I downloaded the source from python.org and extracted with 'tar -xzvf
Python-2.7.tgz'  My home space is on some network somewhere.  I think
the network filesystem creates the ._ at the beginning of the files.
It's really quite annoying.


> The path names look a little suspicious, too:
> /home/jlconlin.  What file system type are these files on?  You
> shouldn't run into problems if you use an HFS+ file system (for
> instance) and extract the tarball from the command line using
> /usr/bin/tar.

I am intentionally installing in my home directory (i.e., /home/
jlconlin) because I don't have access to /usr/local.  Supposedly this
is possible, and in fact common.

>
> > PS. Python compiled correctly, but a few modules were not found/made
> > but I don't think they are important.
>
> > Python build finished, but the necessary bits to build these modules
> > were not found:
> > _bsddb             dl                 gdbm
> > imageop            linuxaudiodev      ossaudiodev
> > spwd               sunaudiodev
>
> Yes, all of those are to be expected on an OS X 64-bit build.

Is it safe to ignore these modules then?

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Compiling/Installing Python 2.7 on OSX 10.6

2010-11-04 Thread Jeremy
On Nov 4, 5:08 pm, Ned Deily  wrote:
> In article <6f087ce1-5391-4ee3-b92a-5a499fdf0...@semanchuk.com>,
>  Philip Semanchuk  wrote:
>
> > You might want to try this before running tar to see if it inhibits the ._
> > files:
> > export COPYFILE_DISABLE=True
>
> > I know that tells tar to ignore those files (resource forks, no?) when
> > building a tarball. I don't know if it helps with extraction though.
>
> Interesting.  It's been so long since I've had to deal with ._ 's (which
> is where metadata for extended attributes including resource forks are
> stored), I had forgotten about that poorly documented option for 10.5
> and 10.6.
>
> A little experiment: from OS X 10.6, I NFS-mount a remote Linux (ext3)
> file system and have created files on it with extended attributes.  
> Using ls on either the OS X or the Linux side, the ._ files appear as
> regular files.  On the Linux side,  I use gnu tar to archive the files
> and move that archive back to OS X.  If I then use the stock Apple 10.6
> tar to extract that archive to an HFS+ directory, the extended
> attributes are by default restored properly (they can be viewed with ls
> -l@) and no '._' files - great!  If I first export
> COPYFILE_DISABLE=True, then the tar extraction appears to ignore the ._
> files: the extended attributes are not set and the ._ files still do not
> appear.
>
> So the COPYFILE_DISABLE trick may very well work for this issue.  It
> still raises the question of why the ._ files are being created in the
> first place.  They shouldn't be on the python.org tarball so it would
> seem most likely they are due to some operation on the OS X machine that
> causes extended attributes to be created.  Nothing wrong with that, just
> kind of interesting.
>
> --
>  Ned Deily,
>  n...@acm.org

What I have done is perform the installation on a local hard drive
(not network storage).  This prevents any ._* files from being
created.  Now I just have to copy the installation to ~/Library/
Frameworks or just link to the local copy.  I started the compilation
when I left, tomorrow I'll finish up and see how it went.  I don't
anticipate any more problems.
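For the record, the workaround discussed above amounts to exporting the flag before extracting; this snippet only sets and checks the variable, with the actual tar invocation (tarball name from the thread) left as a comment:

```shell
# keep Apple's tar (OS X 10.5/10.6) from materialising AppleDouble ._ files
export COPYFILE_DISABLE=True
echo "COPYFILE_DISABLE=$COPYFILE_DISABLE"

# then extract as usual, e.g.:
#   tar -xzf Python-2.7.tgz
```
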

Thanks,
Jeremy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: RE Module Performance

2013-07-25 Thread Jeremy Sanders
wxjmfa...@gmail.com wrote:

> Short example. Writing an editor with something like the
> FSR is simply impossible (properly).

http://www.gnu.org/software/emacs/manual/html_node/elisp/Text-Representations.html#Text-Representations

"To conserve memory, Emacs does not hold fixed-length 22-bit numbers that are 
codepoints of text characters within buffers and strings. Rather, Emacs uses a 
variable-length internal representation of characters, that stores each 
character as a sequence of 1 to 5 8-bit bytes, depending on the magnitude of 
its codepoint[1]. For example, any ASCII character takes up only 1 byte, a 
Latin-1 character takes up 2 bytes, etc. We call this representation of text 
multibyte.

...

[1] This internal representation is based on one of the encodings defined by 
the Unicode Standard, called UTF-8, for representing any Unicode codepoint, but 
Emacs extends UTF-8 to represent the additional codepoints it uses for raw 8-
bit bytes and characters not unified with Unicode.

"

Jeremy




Re: interactive plots

2011-07-06 Thread Jeremy Sanders
Mihai Badoiu wrote:

> How do I do interactive plots in python?  Say I have to plot f(x) and g(x)
> and I want in the plot to be able to click on f and make it disappear. 
> Any python library that does this?

You could try veusz, which is a python module and plotting program combined. 
You can also embed it in a PyQt program.

Jeremy




Re: Understanding memory location of Python variables

2018-06-18 Thread Jeremy Black
Also, I don't think you can rely on memory being allocated sequentially any
more now that everyone has implemented some level of ASLR.

https://en.wikipedia.org/wiki/Address_space_layout_randomization

On Sat, Jun 16, 2018 at 12:22 PM Alister via Python-list <
python-list@python.org> wrote:

> On Sat, 16 Jun 2018 13:19:04 -0400, Joel Goldstick wrote:
>
> > On Sat, Jun 16, 2018 at 12:38 PM,   wrote:
> >> Hi everyone,
> >>
> >> I'm intrigued by the output of the following code, which was totally
> >> contrary to my expectations. Can someone tell me what is happening?
> >>
> > myName = "Kevin"
> > id(myName)
> >> 47406848
> > id(myName[0])
> >> 36308576
> > id(myName[1])
> >> 2476000
> >>
> >> I expected myName[0] to be located at the same memory location as the
> >> myName variable itself. I also expected myName[1] to be located
> >> immediately after myName[0].
> >> --
> >> https://mail.python.org/mailman/listinfo/python-list
> >
> > Others can probably give a more complete explanation, but small numbers,
> > and apparently letters are cached since they are so common.
>
> also ID is not necessarily a memory location (at least not according to
> the language specification)
> the standard cpython implementation does use the memory location for an
> object's ID but this is an implementation detail
>
> if you are trying to make use of ID in any way to manipulate computer
> memory your program is fundamentally broken
>
>
>
> --
> Can you MAIL a BEAN CAKE?
> --
> https://mail.python.org/mailman/listinfo/python-list
>
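The caching mentioned in both replies can be observed directly (a small illustration added here; this is a CPython implementation detail, not a language guarantee):

```python
name = "Kevin"
k1 = name[0]    # indexing builds a 1-character string object
k2 = "Kirk"[0]  # a "K" obtained from a different string

# CPython caches single-character strings in the Latin-1 range, so both
# lookups return the very same object -- hence the "unexpected" ids.
print(id(name), id(k1), id(k2))
print(k1 is k2)  # True in CPython
```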


Re: Classes derived from dict and eval

2005-09-22 Thread Jeremy Sanders
On Tue, 20 Sep 2005 13:59:50 -0700, Robert Kern wrote:

> globals needs to be a real dictionary. The implementation uses the C
> API, it doesn't use the overridden __getitem__. The locals argument,
> apparently can be some other kind of mapping.

It seems that on Python 2.3, eval doesn't call the __getitem__ member of
either the globals or the locals dicts.

Jeremy



Wrapping classes

2005-09-22 Thread Jeremy Sanders
Is it possible to implement some sort of "lazy" creation of objects only
when the object is used, but behaving in the same way as the object?

For instance:

class Foo:
  def __init__(self, val):
"""This is really slow."""
self.num = val

# this doesn't call Foo.__init__ yet
a = lazyclass(Foo, 6)

# Foo is only initialised here
print a.num

What I really want to do is make an object which looks like a numarray,
but only computes its contents the first time it is used.
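One common approach (a rough sketch added here, assuming plain attribute access is enough) is a proxy whose __getattr__ constructs the real object on first use:

```python
class lazyclass:
    """Defer construction of cls(*args) until the first attribute access."""
    def __init__(self, cls, *args, **kwargs):
        self._cls, self._args, self._kwargs = cls, args, kwargs
        self._obj = None

    def __getattr__(self, name):
        # Only called for attributes not found on the proxy itself.
        if self._obj is None:
            self._obj = self._cls(*self._args, **self._kwargs)
        return getattr(self._obj, name)

class Foo:
    def __init__(self, val):
        """This is really slow."""
        self.num = val

a = lazyclass(Foo, 6)  # Foo.__init__ has not run yet
print(a.num)           # Foo is only initialised here; prints 6
```

Note that to mimic an array type fully you would also need to forward special methods (__add__, __getitem__, ...), since those bypass __getattr__ on new-style classes.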

Thanks

Jeremy




Re: Wrapping classes

2005-09-23 Thread Jeremy Sanders
Peter Hansen wrote:
 
> Almost anything is possible in Python, though whether the underlying
> design idea is sound is a completely different question.  (Translation:
> try the following pseudo-code, but I have my suspicions about whether
> what you're doing is a good idea. :-) )

What I'd like to do precisely is to be able to evaluate an expression like
"a+2*b" (using eval) where a and b are objects which behave like numarray
arrays, but whose values aren't computed until they're used.

I need to compute the values when used because the arrays could depend on
each other, and the easiest way to get the evaluation order correct is to
only evaluate them when they're used.

An alternative way is to do some string processing to replace a with
computearray("a") in the expression or something horrible like that.

Thanks

Jeremy

-- 
Jeremy Sanders
http://www.jeremysanders.net/


Re: Wrapping classes

2005-09-23 Thread Jeremy Sanders
Diez B. Roggisch wrote:

> It works - in python 2.4!! I tried subclassing dict, but my
> __getitem__-method wasn't called - most probably because it's a C-type,
> but I don't know for sure. Maybe someone can elaborate on that?

Yes - I tried that (see thread below). Unfortunately it needs Python 2.4,
and I can't rely on my users having that.

Traceback (most recent call last):
  File "test.py", line 15, in ?
print eval("10 * a + b", globals(), l)
TypeError: eval() argument 3 must be dict, not Foo

If you subclass dict it doesn't call the __getitem__ method.

Jeremy

-- 
Jeremy Sanders
http://www.jeremysanders.net/


Re: Wrapping classes

2005-09-23 Thread Jeremy Sanders
bruno modulix wrote:

> Could it work with a UserDict subclass ?

Unfortunately not:

Traceback (most recent call last):
  File "test.py", line 17, in ?
print eval("10 * a + b", globals(), l)
TypeError: eval() argument 3 must be dict, not instance

Thanks

Jeremy

-- 
Jeremy Sanders
http://www.jeremysanders.net/


Re: File processing

2005-09-23 Thread Jeremy Jones
Gopal wrote:

>Hello,
>
>I'm Gopal. I'm looking for a solution to the following problem:
>
>I need to create a text file config.txt having some parameters. I'm
>thinking of going with this format by having "Param Name - value". Note
>that the value is a string/number; something like this:
>
>PROJECT_ID = "E4208506"
>SW_VERSION = "18d"
>HW_VERSION = "2"
>
>In my script, I need to parse this config file and extract the Values
>of the parameters.
>
>I'm very new to python as you can understand from the problem. However,
>I've some project dealines. So I need your help in arriving at a simple
>and ready-made solution.
>
>Regards,
>Gopal.
>
>  
>
Would this 
(http://www.python.org/doc/current/lib/module-ConfigParser.html) do what 
you need?  It's part of the standard library.


- JMJ


Re: File processing

2005-09-23 Thread Jeremy Jones
Gopal wrote:

>Thanks for the reference. However, I'm not understanding how to use it.
>Could you please provide with an example? Like I open the file, read
>line and give it to parser?
>
>Please help me.
>
>  
>
I had thought of recommending what Peter Hansen recommended - just 
importing the text you have as a Python module.  I don't know why I 
recommended ConfigParser over that option.  However, if you don't like 
what Peter said and would still like to look at ConfigParser, here is a 
very simple example.  Here is the config file I created from your email:

[EMAIL PROTECTED]  8:36AM configparser % cat foo.txt
[main]
PROJECT_ID = "E4208506"
SW_VERSION = "18d"
HW_VERSION = "2"


Here is me running ConfigParser from a Python shell:

In [1]:  import ConfigParser

In [2]:  p = ConfigParser.ConfigParser()

In [3]:  p.read("foo.txt")
Out[3]:  ['foo.txt']

In [4]:  p.get("main", "PROJECT_ID")
Out[4]:  '"E4208506"'


Note that the value of ("main", "PROJECT_ID") is a string which contains 
double quotes in it.  If you take Peter's advice, you won't have that 
problem; the config file will preserve your types for you.
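A self-contained sketch of the quote-stripping (using the Python 3 module name configparser; on the Python of the original thread it was spelled ConfigParser):

```python
from configparser import ConfigParser

CONFIG_TEXT = '''\
[main]
PROJECT_ID = "E4208506"
SW_VERSION = "18d"
HW_VERSION = "2"
'''

p = ConfigParser()
p.read_string(CONFIG_TEXT)  # read("foo.txt") takes a filename instead
# The stored value includes the literal double quotes; strip them off:
project_id = p.get("main", "PROJECT_ID").strip('"')
print(project_id)  # E4208506
```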

HTH,

- JMJ


Re: Wrapping classes

2005-09-23 Thread Jeremy Sanders
Colin J. Williams wrote:

> Could you not have functions a and b each of which returns a NumArray
> instance?
> 
> Your expression would then be something like a(..)+2*b(..).

The user enters the expression (yes - I'm aware of the possible security
issues), as it is a scientific application. I don't think they'd like to
put () after each variable name.

I could always munge the expression after the user enters it, of course.

Jeremy

-- 
Jeremy Sanders
http://www.jeremysanders.net/


Re: batch mkdir using a file list

2005-09-23 Thread Jeremy Jones
DataSmash wrote:

>Hello,
>I think I've tried everything now and can't figure out how to do it.
>I want to read in a text list from the current directory,
>and for each line in the list, make a system directory for that name.
>
>My text file would look something like this:
>1144
>1145
>1146
>1147
>
>I simply want to create these 4 directories.
>It seems like something like the following
>code should work, but it doesn't.
>
>import os
>
>file = open("list.txt", "r")
>read = file.read()
>print "Creating directory " + str(read)
>os.mkdir(str(read))
>
>Appreciate any help you can give!
>R.D.  Harles
>
>  
>
Untested code:

import os
for line in open("list.txt", "r"):
    os.mkdir(line.strip())  # strip the newline, or mkdir gets "1144\n"


- JMJ


Re: 1 Million users.. I can't Scale!!

2005-09-28 Thread Jeremy Jones


[EMAIL PROTECTED] wrote:

>Damjan> Is there some python module that provides a multi process Queue?
>
>Not as cleanly encapsulated as Queue, but writing a class that does that
>shouldn't be all that difficult using a socket and the pickle module.
>
>Skip
>
>  
>
What about bsddb?  The example code below creates a multiprocess queue.  
Kick off two instances of it, one in each of two terminal windows.  Do a 
mp_db.consume_wait() in one first, then do a mp_db.append("foo or some 
other text here") in the other and you'll see the consumer get the 
data.  This keeps the stuff on disk,  which is not what the OP wants, 
but I *think* with flipping the flags or the dbenv, you can just keep 
stuff in memory:

#!/usr/bin/env python

import bsddb
import os

db_base_dir = "/home/jmjones/svn/home/source/misc/python/standard_lib/bsddb"

dbenv = bsddb.db.DBEnv(0)
dbenv.set_shm_key(40)
dbenv.open(os.path.join(db_base_dir, "db_env_dir"),
#bsddb.db.DB_JOINENV |
bsddb.db.DB_INIT_LOCK |
bsddb.db.DB_INIT_LOG |
bsddb.db.DB_INIT_MPOOL |
bsddb.db.DB_INIT_TXN |
#bsddb.db.DB_RECOVER |
bsddb.db.DB_CREATE |
#bsddb.db.DB_SYSTEM_MEM |
bsddb.db.DB_THREAD,
)

db_flags = bsddb.db.DB_CREATE | bsddb.db.DB_THREAD


mp_db = bsddb.db.DB(dbenv)
mp_db.set_re_len(1024)
mp_db.set_re_pad(0)
mp_db_id = mp_db.open(os.path.join(db_base_dir, "mp_db.db"), 
dbtype=bsddb.db.DB_QUEUE, flags=db_flags)



- JMJ


Re: 1 Million users.. I can't Scale!!

2005-09-28 Thread Jeremy Jones
[EMAIL PROTECTED] wrote:

>Damjan> Is there some python module that provides a multi process Queue?
>
>Skip> Not as cleanly encapsulated as Queue, but writing a class that
>Skip> does that shouldn't be all that difficult using a socket and the
>    Skip> pickle module.
>
>Jeremy> What about bsddb?  The example code below creates a multiprocess
>Jeremy> queue.
>
>I tend to think "multiple computers" when someone says "multi-process".  I
>realize that's not always the case, but I think you need to consider that
>case (it's the only practical way for a multi-process application to scale
>beyond a few processors).
>
>Skip
>  
>
Doh!  I'll buy that.  When I hear "multi-process", I tend to think of 
folks overcoming the scaling issues that accompany the GIL.  This, of 
course, won't scale across computers without a networking interface.
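As a historical footnote (added here, not part of the exchange): the multiprocessing module, added in Python 2.6, later provided exactly the encapsulated multi-process Queue being discussed, and its managers can also serve a queue over the network. A minimal single-machine sketch:

```python
import multiprocessing as mp

def worker(q):
    # Runs in a separate OS process; the Queue handles the IPC.
    q.put("hello from the child process")

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    print(q.get())  # blocks until the child has put an item
    p.join()
```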

- JMJ


Strange Extension Module Behavior

2005-09-28 Thread Jeremy Moles
Hey guys. I have an extension module written in C that abstracts and
simplifies a lot of what we do here. I'm observing some strange behavior
and wanted to know if anyone had any advice as to how I should start
tracking this down. More specific suggestions are obviously appreciated,
but I really don't have a lot of information to provide so I'm not
hoping for a lot. :)

The module works 100% of the time if I run it non-interactively.
However, any attempt to run it in the "interactive" python interpreter
fails, in some cases silently and in other cases raising exceptions that
really shouldn't be appearing.

Anyways, I guess the core issue here is:

   What could cause an extension module to work within a script
   and not interactively? Are there fundamental issues with the
   standard Python infrastructure I'm not grasping?



Re: return own type from Python extention?

2005-09-29 Thread Jeremy Moles
Building a fully-fledged, custom Python object in C isn't a trivial
task; it isn't a hard one either, but it isn't trivial. :) Basically, as
far as I know, you'll need to create a PyTypeObject structure, populate
it accordingly, and then call a few setup functions on it...

// 

static PyTypeObject WEE = {
...
};

PyType_Ready(&WEE);

module = Py_InitModule*(...) // There are a few versions of this

PyModule_AddObject(module, "WEE", (PyObject*)(&WEE));

// -

This is all from memory--I know I left out having to at least INCREF the
static WEE. At any rate, I'm sure you can find some samples easy enough.
And if not, I can let you try and grok some of my extension code...

On Thu, 2005-09-29 at 15:59 +0200, elho wrote:
> Thx, but my Problem is to get my own type before.
> If I have a C-Type, I know how to return it from the Python extension, 
> but how will it work with my own type?
> I expect something like the following:
> 
> static PyObject* wrap_MyFunction (PyObject* self, PyObject* args)
> {
>:
>MyPyType *myType = MyTypeNew ();
>return (PyObject*)myType;
> }
> 
> How will the constructor function for MyPyType look, to call this in C code?
> 
> ...In case I explained my problem confusingly, here is more information:
> I have one function for MyPyType to construct it by calling from python:
> 
> static PyObject* PyMyType_new(PyTypeObject *type, PyObject *args, 
> PyObject *kwds)
> {
>  PyMyType *self;
>  self = (PyMyType*)type->tp_alloc(type, 0);
>  if (self != NULL) {
>  self->test = 0;
>  }
>  return (PyObject *)self;
> }
> 
> ..but how do I have to call this from C code, or what would another 
> function for this look like?
> 
> 
> 
> Jeremy Moles wrote:
> > You can use Py_BuildValue for most of what you're probably going to need.
> > 
> > http://docs.python.org/api/arg-parsing.html
> > 
> > On Thu, 2005-09-29 at 15:39 +0200, elho wrote:
> > 
> >>I used the examples from the "Extending and Embedding the Python 
> >>Interpreter" tutorial and this works. I can use my types with python.
> >>But I do not know how to create my own Python variable in Python-
> >>extending C code. What do I have to do to create this and return it to my 
> >>Python program?
> > 
> > 



Re: return own type from Python extention?

2005-09-29 Thread Jeremy Moles
One more thing! :)

The _new method probably isn't where you want to modify the actual internal
C object; the initproc method actually gives you a pointer to it as its
first argument... (it's the tp_init member in the PyTypeObject structure.)

On Thu, 2005-09-29 at 15:59 +0200, elho wrote:
> Thx, but my Problem is to get my own type before.
> If I have a C-Type, I know how to return it from the Python extension, 
> but how will it work with my own type?
> I expect something like the following:
> 
> static PyObject* wrap_MyFunction (PyObject* self, PyObject* args)
> {
>:
>MyPyType *myType = MyTypeNew ();
>return (PyObject*)myType;
> }
> 
> How will the constructor function for MyPyType look, to call this in C code?
> 
> ...In case I explained my problem confusingly, here is more information:
> I have one function for MyPyType to construct it by calling from python:
> 
> static PyObject* PyMyType_new(PyTypeObject *type, PyObject *args, 
> PyObject *kwds)
> {
>  PyMyType *self;
>  self = (PyMyType*)type->tp_alloc(type, 0);
>  if (self != NULL) {
>  self->test = 0;
>  }
>  return (PyObject *)self;
> }
> 
> ..but how do I have to call this from C code, or what would another 
> function for this look like?
> 
> 
> 
> Jeremy Moles wrote:
> > You can use Py_BuildValue for most of what you're probably going to need.
> > 
> > http://docs.python.org/api/arg-parsing.html
> > 
> > On Thu, 2005-09-29 at 15:39 +0200, elho wrote:
> > 
> >>I used the examples from the "Extending and Embedding the Python 
> >>Interpreter" tutorial and this works. I can use my types with python.
> >>But I do not know how to create my own Python variable in Python-
> >>extending C code. What do I have to do to create this and return it to my 
> >>Python program?
> > 
> > 



Re: How to create temp file in memory???

2005-10-05 Thread Jeremy Jones
Wenhua Zhao wrote:

>A.T.T
>
>Thanks a lot.
>  
>
If you could elaborate a bit more, it might be helpful.  I'm guessing 
you want something like StringIO or cStringIO.


- jmj


Re: How to create temp file in memory???

2005-10-05 Thread Jeremy Jones
Wenhua Zhao wrote:

>I have a list of lines. I want to feed these lines into a function.
>The input of this function is a file.
>I want to create a temp file on disk, and write the list of lines into 
>this temp file, then reopen the file and feed it to the function.
>Can I create this temp file in memory???
>
>
>
>Jeremy Jones wrote:
>  
>
>>Wenhua Zhao wrote:
>>
>>
>>
>>>A.T.T
>>>
>>>Thanks a lot.
>>> 
>>>
>>>  
>>>
>>If you could elaborate a bit more, it might be helpful.  I'm guessing 
>>you want something like StringIO or cStringIO.
>>
>>
>>- jmj
>>
>>
If the function takes a file object as an argument, you should be able 
to use StringIO or cStringIO.
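A sketch with io.StringIO (the StringIO/cStringIO modules in the Python 2 of this thread); parse_file is a hypothetical stand-in for the function that expects a file:

```python
from io import StringIO

def parse_file(f):
    # Stand-in for a function that expects an open file object.
    return [line.strip() for line in f]

lines = ["first line\n", "second line\n"]
buf = StringIO("".join(lines))  # in-memory "file", never touches disk
print(parse_file(buf))  # ['first line', 'second line']
```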


- jmj


Re: New Python book

2005-10-05 Thread Jeremy Jones
Dick Moores wrote:

>(Sorry, my previous post should not have had "Tutor" in the subject header.)
>
>Magnus Lie Hetland's new book, _Beginning Python: From Novice to
>Professional_ was published by Apress on Sept. 26 (in the U.S.). My copy
>arrived in the mail a couple of days ago. Very much worth a look, IMHO.
>But what do the experts here think?
>
>
>
>Dick Moores
>[EMAIL PROTECTED]
>
>
>  
>
I don't know what "the experts" think, but I thought it was excellent.  
I had the pleasure of serving as tech editor/reviewer for this book.  My 
dead tree version hasn't arrived yet, but should be on its way. 

The style is extremely readable, not a hint of dryness in it at all.  
The concepts are clearly and thoroughly presented.  This is an excellent 
resource for someone starting Python, but definitely useful for those 
already familiar with Python.  One thing that kept coming to mind as I 
was reading it, especially toward the end during the projects at the 
end, was that this would probably also be an excellent educational 
resource for teachers in a classroom setting teaching students Python.  
I would be interested to hear some teachers' opinion on that to see if 
that's a correct assessment.

Anyway, I highly recommend this book.

- jmj


Absolutely confused...

2005-10-06 Thread Jeremy Moles
So, here is my relevant code:

PyArg_ParseTuple(args, "O!", &PyType_vector3d, &arg1)

And here ismy error message:

argument 1 must be pylf.core.vector3d, not pylf.core.vector3d

I know PyType_vector3d "works" (as I can use them in the interpreter all
day long), and I know I'm passing a pylf.core.vector3d (well, apparently
not...)

I've spent hours and hours on this and I'm finally just giving up and
asking. I've tried everything to get my program to verify that arg1 is
really a PyType_vector3d, but to no avail.

If I take out the "!" in the format string and just use "O", I can at
least get past PyArg_ParseTuple. Then I try something like...

PyObject_TypeCheck(arg1, &PyType_vector3d)

Which also fails, but I know for a fact that arg1's PyObject_Repr is
what it should be. (pylf.core.vector3d)

I guess my question is: what in the world could be causing this to fail?
It seems like I'm just not able to use ParseType or BuildValue to create
objects of my own type.

I know I haven't provided a lot of information, but does anyone have any
ideas or where I should start looking?



Re: Absolutely confused...

2005-10-06 Thread Jeremy Moles
All of these are runtime errors. Using GCC4 and compiling perfectly with
-Wall.

On Thu, 2005-10-06 at 09:12 -0500, Brandon K wrote:
> > If I take out the "!" in the format string and just use "O", I can at
> > least get past PyArg_ParseTuple. 
> 
> Is this a compile-time error? Or a runtime error?
> 
> 
> 
> == Posted via Newsgroups.com - Usenet Access to over 100,000 Newsgroups 
> ==
> Get Anonymous, Uncensored, Access to West and East Coast Server Farms! 
> == Highest Retention and Completion Rates! HTTP://WWW.NEWSGROUPS.COM 
> ==
> 
> 



Re: Absolutely confused...

2005-10-06 Thread Jeremy Moles
Thanks for the reply. :)

I may be missing something critical here, but I don't exactly grok what
you're saying; how is it even possible to have two instances of
PyType_vector3d? It is (like all the examples show and all the extension
modules I've done in the past) a static structure declared and assigned
to all at once, only once.

Am I misunderstanding the point? :)

/me ducks

On Thu, 2005-10-06 at 16:26 +0200, Thomas Heller wrote:
> Jeremy Moles <[EMAIL PROTECTED]> writes:
> 
> > So, here is my relevant code:
> >
> > PyArg_ParseTuple(args, "O!", &PyType_vector3d, &arg1)
> >
> > And here ismy error message:
> >
> > argument 1 must be pylf.core.vector3d, not pylf.core.vector3d
> >
> > I know PyType_vector3d "works" (as I can use them in the interpreter all
> > day long), and I know I'm passing a pylf.core.vector3d (well, apparently
> > not...)
> >
> > I've spent hours and hours on this and I'm finally just giving up and
> > asking. I've tried everything to get my program to verify that arg1 is
> > really a PyType_vector3d, but to no avail.
> >
> > If I take out the "!" in the format string and just use "O", I can at
> > least get past PyArg_ParseTuple. Then I try something like...
> >
> > PyObject_TypeCheck(arg1, &PyType_vector3d)
> >
> > Which also fails, but I know for a fact that arg1's PyObject_Repr is
> > what it should be. (pylf.core.vector3d)
> >
> > I guess my question is: what in the world could be causing this to fail?
> > It seems like I'm just not able to use ParseType or BuildValue to create
> > objects of my own type.
> >
> > I know I haven't provided a lot of information, but does anyone have any
> > ideas or where I should start looking?
> 
> Can it be that you have TWO instances of the pylf.core.vector3d object?
> Debugging should reveal it...
> 
> Thomas



Re: Absolutely confused...

2005-10-06 Thread Jeremy Moles
Well, there's certainly no doubting that all of you are right. I guess
now I need to track down how this is happening and either fix it or
understand it so that I can explain why I'm having to work around it. :)

Many, many thanks. :)

On Thu, 2005-10-06 at 16:48 +0200, Daniel Dittmar wrote:
> Jeremy Moles wrote:
> > So, here is my relevant code:
> > 
> > PyArg_ParseTuple(args, "O!", &PyType_vector3d, &arg1)
> > 
> > And here ismy error message:
> > 
> > argument 1 must be pylf.core.vector3d, not pylf.core.vector3d
> > 
> 
> It looks as if two PyType_vector3d exist in your system
> - the one that created the object passed to your routine
> - the one in your extension code
> 
> As PyType_vector3d probably comes from a shared object/DLL
> - does your code accesses really the same shared object that is also 
> loaded by the Python interpreter? It could be that you linked with a 
> specific file, but Python loads something different from $PYTHONPATH
> - on Windows, you couldn't simply import a variable from a DLL, you had 
> to call a special routine to get the pointer
> 
> One possible portable solution: in your module initialization
> - import pylf.core
> - create an object of type vector3d
> - use your knowledge about the inner structure of Python objects and get 
> the pointer to the PyType from the object
> - store it in a module static variable TypeVector3D
> - pass that variable to PyArg_ParseTuple
> 
> Browse the Python Extension API, maybe partts or all of this are already 
> available.
> 
> There's still a problem left when pylf.core gets reloaded (rare, but 
> possible). I assume the shared object also gets reloaded, which means 
> that the type objects gets loaded to a new address and PyArg_ParseTuple 
> will complain again. I'm not sure if there is a solution to this, 
> because there still could be objects create from the old module.
> 
> Maybe you should just check the type yourself by comparing the class 
> names buried in the PyType. You could cache one or two type pointers to 
> speed this up.
> 
> Daniel



Re: Absolutely confused...

2005-10-06 Thread Jeremy Moles
WELL, I figured it out--thanks to everyone's help. There were two instances
of the object and I am a total moron.

Thanks again to everyone who helped me stomp this out. :)

On Wed, 2005-10-05 at 21:58 -0400, Jeremy Moles wrote:
> So, here is my relevant code:
> 
>   PyArg_ParseTuple(args, "O!", &PyType_vector3d, &arg1)
> 
> And here ismy error message:
> 
>   argument 1 must be pylf.core.vector3d, not pylf.core.vector3d
> 
> I know PyType_vector3d "works" (as I can use them in the interpreter all
> day long), and I know I'm passing a pylf.core.vector3d (well, apparently
> not...)
> 
> I've spent hours and hours on this and I'm finally just giving up and
> asking. I've tried everything to get my program to verify that arg1 is
> really a PyType_vector3d, but to no avail.
> 
> If I take out the "!" in the format string and just use "O", I can at
> least get past PyArg_ParseTuple. Then I try something like...
> 
>   PyObject_TypeCheck(arg1, &PyType_vector3d)
> 
> Which also fails, but I know for a fact that arg1's PyObject_Repr is
> what it should be. (pylf.core.vector3d)
> 
> I guess my question is: what in the world could be causing this to fail?
> It seems like I'm just not able to use ParseType or BuildValue to create
> objects of my own type.
> 
> I know I haven't provided a lot of information, but does anyone have any
> ideas or where I should start looking?
> 



PyObject_New

2005-10-06 Thread Jeremy Moles
Hey guys, sorry to ask another question of this nature, but I can't find
the answer or a single example of it anywhere. I'm sure it's been asked
before, but my google-fu isn't strong enough to find anything.

I have the following:

struct MyType {
PyObject_HEAD
...
};

PyTypeObject PyType_MyType = { ... }

Now, elsewhere in the code I want to create an instance of this custom
type in C/C++ (rather than in the Python interpreter, where this all
seems to happen magically) and return it from a method. What I'm trying
now is the following:

PyObject* obj = _PyObject_New(&PyType_MyType);
obj = PyObject_Init(obj, &PyType_MyType);

...

return obj;

When "obj" gets back to the interpreter, Python sees it (or rather, it's
__repr__) in accordance with what it "should" be. However, any attempt
to USE the object results in a segfault. I feel my problem is simply
that I'm not allocating "obj" correctly in the C++ function.

If anyone has any advice, I would really appreciate it.



Re: New Python book

2005-10-07 Thread Jeremy Jones
Maurice LING wrote:

>I had the opportunity to glance through the book in Borders yesterday. 
>On the whole, I think it is well covered and is very readable. Perhaps I 
>was looking for a specific aspect, and I find that threads did not get 
>enough attention. Looking at the index pages, the topics on threads 
>(about 4-5 pages) is mainly found in the context of GUI programming.
>
>maurice
>
>  
>
I don't have my hard copy of the book, but from memory and grepping over 
the soft copy, you appear to be correct.  Remember, though, that this is 
a beginning book on Python and *I* would consider threading a more 
advanced topic.  I think putting threading in the context of GUI 
programming is just about right for an intro book.

- jmj


C/API Clarification

2005-10-07 Thread Jeremy Moles
First of all, let me say I really do appreciate--and frequently use--the
ample and easy to read Python documentation. However, there are a few
things I'm still unclear on, even after asking multiple questions here
on the list (thanks everyone!) and reading the "Extending" and
"Reference" docs from start to finish. I'm new to Python extension
writing--but I'm learning fast. Still, I have a few questions that would
make my weekend so much better if I had some guidance on them or,
better, examples of other people's working code. :) (Real code, not
over-simplified examples that don't ever make sense in the real
world) :) 

Most of my woes are the result of trying to wrap some highly dynamic C++
code. People, of course, are quick to say "use Boost"--which I'm sure is
great--but doesn't actually teach me anything about Python. :)

1. Nowhere in the docs was I able to see an example of a custom object
using anything but a METH_NOARGS method. This is pretty silly since I
would assume that most class methods take a few arguments. I'm using:

PyObject* foo(cstruct* self, PyObject* args, PyObject* kargs)

...and it works, but I'm not even really sure if this is right or where
in the world I got the idea from! According to the docs, METH_VARARGS is
(PyObject*, PyObject*)? My prototype shouldn't even compile... but it
does, even at -Wall. This is an area I really feel like the docs should
elaborate on. All of the tp_* functions are infinitely easier to
implement as they have very specific purposes. However, very little
attention is given to just general type methods, though it could be just
me missing something. :)

2. Nowhere in the docs does it show how to "correctly" create an
instance of your custom PyTypeObject in C (rather than in the
interpreter). I'm using:

PyObject* newobject = _PyObject_New(&PyType_myType);
PyObject* nulltuple = Py_BuildValue("()");
myType_init((cstruct*)(newobject), nulltuple, NULL);

...and it works (or appears to so far), but I really doubt this is how
it's done. I'm guessing I'm not supposed to go anywhere near the
_PyObject_New function. :)

3. I'm not able to return the "self" argument (useful, for instance,
when you want to chain a group of method calls) without screwing things
up really, really bad. For example:

PyObject* somefunc(cstruct* self, PyObject* args, PyObject* kargs) {
self->pointerToData->doSomeStuff();

return (PyObject*)(self);
}

...returns "something"; it has the methods you would expect, but trying
to use the object when it gets back to the interpreter is pretty much
undefined behavior. :) It's seen as a "" instance in the
interpreter, even though a dir() shows it still has the methods you
would expect.

I'm sorry if I come off as a newb or sound preachy--it certainly isn't
my intention. I love Python and will continue using it even if I never
quite grok its C API. :)

Weee.



Re: PyObject_New

2005-10-07 Thread Jeremy Moles
I just noticed this response right as I sent my other message. For some
reason my news reader didn't thread it, so it was all by itself...
Please disregard the rant concerning creation of objects in C. :)

/me hugs Martin
/me ducks and hides!

On Fri, 2005-10-07 at 09:57 +0200, "Martin v. Löwis" wrote:
> Jeremy Moles wrote:
> > PyObject* obj = _PyObject_New(&PyType_MyType);
> > obj = PyObject_Init(obj, &PyType_MyType);
> > 
> > ...
> > 
> > return obj;
> 
> The call to PyObject_Init is redundant: _PyObject_New
> is malloc+init. However, this shouldn't cause any crashes (except in the
> debug build). PyObject_Init is documented as
> 
> Initialize a newly-allocated object op with its type and initial 
> reference. Returns the initialized object. If type  indicates that the 
> object participates in the cyclic garbage detector, it is added to the 
> detector's set of observed objects. Other fields of the object are not 
> affected.
> 
> [I don't know where the mentioning of GC comes from - it appears to be
>   incorrect]
> 
> > When "obj" gets back to the interpreter, Python sees it (or rather, it's
> > __repr__) in accordance with what it "should" be. However, any attempt
> > to USE the object results in a segfault. I feel my problem is simply
> > that I'm not allocating "obj" correctly in the C++ function.
> 
> It doesn't crash because of the allocation - this code is correct.
> However, it is also incomplete: none of the state of the new object
> gets initialized in the fragment you are showing. So it likely crashes
> because the members of the object are stray pointers or some such,
> and accessing them causes a crash.
> 
> Regards,
> Martin

-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Pass a tuple (or list) to a C wrapper function

2005-10-12 Thread Jeremy Moles
It depends on how you want to manipulate the data in C. If you want
compile-time variable access to each long, yeah, 50 longs. :) 

Probably what you want to do though is just keep the tuple as is and
iterate over it using the PySequence_* protocol:

http://docs.python.org/api/sequence.html
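
On the Python side the call stays the same however long the tuple is; the
PySequence_* iteration described above corresponds to an ordinary indexed
loop. A plain-Python stand-in for the C wrapper (sum_longs is a hypothetical
name, not from the thread):

```python
def sum_longs(name, values):
    # C equivalent: PyArg_ParseTuple(args, "sO", &name, &seq) to grab the
    # tuple whole, then PySequence_Size/PySequence_GetItem to walk it --
    # no fixed-length "s(llll...)" format string needed.
    total = 0
    for i in range(len(values)):
        total += values[i]
    return total

print(sum_longs("data", tuple(range(50))))   # → 1225
```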

On Wed, 2005-10-12 at 13:06 -0700, Java and Swing wrote:
> I have a C function which takes an array of long values..
> 
> I understand that I can pass a tuple to a C wrapper function and in the
> C wrapper function have..
> 
> int ok = PyArg_ParseTuple(args, "s(ll)", &a, &b, &c);
> 
> ..that's great if my tuple only contained two longs..but what if it
> contained 50
> would I have to do..
> 
> int ok = PyArg_ParseTuple(args, "s(llll)", &a, &b,
> &c, &d...) ??
> 
> how can I handle this?
> 

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Yes, this is a python question, and a serious one at that (moving to Win XP)

2005-10-13 Thread Jeremy Jones
Kenneth McDonald wrote:

>For unfortunate reasons, I'm considering switching back to Win XP  
>(from OS X) as my "main" system. Windows has so many annoyances that  
>I can only compare it to driving in the Bay Area at rush hour (OS X  
>is like driving in Portland at rush hour--not as bad, but getting  
>there), but there are really only a couple of things that are really,  
>absolutely preventing me from making the switch. Number one is the  
>lack of a decent command line and command-line environment, and I'm  
>wondering (hoping) if perhaps someone has written a "Python shell"-- 
>something that will look like a regular shell, let users type in  
>commands, maybe have some of the nice features of bash etc. like tab  
>completion, etc, and will then execute an underlying python script  
>when the command is entered. I'm not thinking of IDLE, but something  
>that is really aimed more at being a system terminal, not a Python- 
>specific terminal.
>  
>
ipython -p pysh

IPython rocks as a Python shell.  I use zsh mostly, but IPython's pysh 
looks pretty good.  I hate to help you get back on Windows, though :-)
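
For anyone wanting to roll a tiny shell of their own rather than install
IPython, the standard library's cmd module gets you most of the look and
feel (readline tab completion comes for free when you run cmdloop()). The
PyShell class and its greet command below are made up for illustration:

```python
import cmd

class PyShell(cmd.Cmd):
    """A toy command shell -- not IPython's pysh, just a sketch."""
    prompt = "pysh> "

    def do_greet(self, arg):
        """greet NAME -- say hello (stands in for a real command)."""
        print("hello,", arg or "world")

    def do_EOF(self, arg):
        return True          # exit on Ctrl-D

# PyShell().cmdloop() would start the interactive loop; one-shot dispatch:
PyShell().onecmd("greet Kenneth")   # → hello, Kenneth
```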


- jmj
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to get a raised exception from other thread

2005-10-14 Thread Jeremy Moles
On non-Windows systems there are a ton of ways to do it--this is almost a
whole field unto itself. :) (D-BUS, fifos, sockets, shmfs, etc.) In
Windows, I wouldn't have a clue. 

I guess this is a hard question to answer without a bit more
information. :)

On Fri, 2005-10-14 at 14:45 -0700, dcrespo wrote:
> Hi all,
> 
> How can I get a raised exception from other thread that is in an
> imported module?
> 
> For example:
> 
> ---
> programA.py
> ---
> 
> import programB
> 
> thread = programB.MakeThread()
> thread.start()
> 
> ---
> programB.py
> ---
> import threading, time
> 
> class SomeException(Exception):
> pass
> 
> class MakeThread(threading.Thread):
> def __init__(self):
> threading.Thread.__init__(self)
> 
> def run(self):
> i = 0
> while 1:
> print i
> i += 1
> time.sleep(1) #wait a second to continue
> if i>10:
> raise SomeException()
> 
> 
> Thanks
> 
> Daniel
> 
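
A portable baseline that works on Windows too (my addition, not something
from the original thread): have the thread catch its own exception and hand
it back over a Queue. Shown in present-day Python 3 spelling -- in 2.x the
module is named Queue:

```python
import queue
import threading

class MakeThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.errors = queue.Queue()

    def run(self):
        try:
            raise ValueError("something went wrong")  # stands in for real work failing
        except Exception as exc:
            self.errors.put(exc)    # hand the exception to whoever join()s us

thread = MakeThread()
thread.start()
thread.join()
exc = thread.errors.get_nowait()
print(type(exc).__name__, exc)      # → ValueError something went wrong
```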

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: global interpreter lock

2005-10-18 Thread Jeremy Jones
[EMAIL PROTECTED] wrote:

>I just need confirmation that I think right.
>
>Is the files thread_xxx.h (xxx = nt, os2 or whatever) responsible for
>the
>global interpreter lock in a multithreaded environment?
>
>I'm currently writing my own thread_VW for VxWorks, thats why I'm
>asking. 
>
>//Tommy
>
>  
>
Someone can correct me if I'm wrong, but the lock actually lives in 
ceval.c, around here:



802 PyThread_release_lock(interpreter_lock);
803
804 /* Other threads may run now */
805
806 PyThread_acquire_lock(interpreter_lock, 1);


This was taken from what appears to be a 2.4.1 release rather than a CVS 
checkout.  It looks like the PyThread_type_lock is defined in the 
thread_xxx.h files, though.
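
The cadence of that release/acquire pair is tunable from Python code. In the
2.4 sources being quoted it was governed by sys.setcheckinterval (a bytecode
count); current interpreters expose it as a time slice instead, shown here in
the modern spelling:

```python
import sys

default = sys.getswitchinterval()   # seconds a thread may hold the GIL (default 0.005)
sys.setswitchinterval(0.01)         # lengthen the slice between forced GIL drops
print(default, "->", sys.getswitchinterval())
sys.setswitchinterval(default)      # restore the default
```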

HTH,

- jmj
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: write a loop in one line; process file paths

2005-10-19 Thread Jeremy Jones
Xah Lee wrote:

>Peter Hansen wrote:
>  
>
>>Xah Lee wrote:
>>
>>
>>>If you think i have a point, ...
>>>  
>>>
>>You have neither that, nor a clue.
>>
>>
>
>Dear Peter Hansen,
>
>My messages speak themselfs. You and your cohorts's stamping of it does
>not change its nature. And if this is done with repetitiousness, it
>gives away your nature.
>
>It is not necessary to shout against me. But if you must refute (and
>that is reasonable), try to put content into your posts.
>(see Philosophies of Netiquette at
>http://xahlee.org/UnixResource_dir/writ/phil_netiquette.html)
>  
>
Xah,

Thanks for the comic relief of this link.  The first item of comedy came 
from the following two sentences:

'''
Then at the other extreme is the relatively rare Victorian propensity 
where each post is a gem of literature carefully crafted and researched 
for an entire century of readers to appreciate and archive. Xah, Erik 
Naggum, and [censored] posts are exemplary of this style, to name a few 
acquaintances like myself.
'''

I really don't know which is funnier, that you stated these sentences at 
all, or that you probably believe them.  Several things disqualify you 
from gaining my classification of "scholarly" (not that you give a fart 
what I think):

- poor spelling
- poor grammar
- rambling style with lack of cohesive thought
- non-interesting, non-original ideas in your posts
- invalid or incorrect points in your discourse

The next piece of humor came from these sentences:

'''
Go to a newsgroup archive such as dejanews.com and search for your 
favorite poster.  If you find a huge quantity of terse posts that is 
tiring, boring, has little content, and in general requires you to 
carefully follow the entire thread to understand it, then you know 
you've bumped into a conversationalist.
'''

By your definition, you mostly fit into the "conversationalist" 
category.  The only thing that may keep you out of that category is that 
your ramblings are typically lengthy.  So, what you provide is a large 
number of lengthy, tiring, boring, content-less, non-cohesive posts.  
Funny that you bash "the conversationalists" when you have so much in 
common with them.

The third point of humor in this link was the paypal link at the top of 
the page:

'''
If you spend more than 30 minutes on this site, please send $1 to me. Go 
to http://paypal.com/ and make a payment to [EMAIL PROTECTED] Or send to: 
P. O. Box 390595, Mountain View, CA 94042-0290, USA.
'''

It's humorous to think of anyone spending more than 30 minutes on your 
site (apart from the obvious stunned amazement at the content, quite 
like the "can't stop watching the train wreck" phenomenon).  It's even 
more humorous to think of anyone gaining value from it.  But I wouldn't 
be surprised to hear that some people have actually sent you money.

>If you deem fit, create a alt.fan.XahLee, and spare the rest of Python
>community of your politics. I appreciate your fandom.
>
> Xah
> [EMAIL PROTECTED]
>∑ http://xahlee.org/
>
>  
>

sorry-folks-for-feeding-the-troll-ly y'rs,

- jmj

-- 
http://mail.python.org/mailman/listinfo/python-list

ANN: Veusz 0.8 released

2005-10-21 Thread Jeremy Sanders
Veusz 0.8
-
Velvet Ember Under Sky Zenith
-
http://home.gna.org/veusz/
 
Veusz is Copyright (C) 2003-2005 Jeremy Sanders <[EMAIL PROTECTED]>
Licenced under the GPL (version 2 or greater)
 
Veusz is a scientific plotting package written in Python (currently
100% Python). It uses PyQt for display and user-interfaces, and
numarray for handling the numeric data. Veusz is designed to produce
publication-ready Postscript output.
 
Veusz provides a GUI, command line and scripting interface (based on
Python) to its plotting facilities. The plots are built using an
object-based system to provide a consistent interface.
 
Changes from 0.7:
 Please refer to ChangeLog for all the changes.
 Highlights include:
  * Datasets can be linked together with expressions
  * SVG export
  * Edit/Copy/Cut support of widgets
  * Pan image with mouse
  * Click on graph to change settings
  * Lots of UI improvements
 
Features of package:
 * X-Y plots (with errorbars)
 * Images (with colour mappings)
 * Stepped plots (for histograms)
 * Line plots
 * Function plots
 * Fitting functions to data
 * Stacked plots and arrays of plots
 * Plot keys
 * Plot labels
 * LaTeX-like formatting for text
 * EPS output
 * Simple data importing
 * Scripting interface
 * Save/Load plots
 * Dataset manipulation
 * Embed Veusz within other programs
 
To be done:
 * Contour plots
 * UI improvements
 * Import filters (for qdp and other plotting packages, fits, csv)
 
Requirements:
 Python (probably 2.3 or greater required)
   http://www.python.org/
 Qt (free edition)
   http://www.trolltech.com/products/qt/
 PyQt (SIP is required to be installed first)
   http://www.riverbankcomputing.co.uk/pyqt/
   http://www.riverbankcomputing.co.uk/sip/
 numarray
   http://www.stsci.edu/resources/software_hardware/numarray
 Microsoft Core Fonts (recommended)
   http://corefonts.sourceforge.net/
 PyFITS (optional)
   http://www.stsci.edu/resources/software_hardware/pyfits
 
For documentation on using Veusz, see the "Documents" directory. The
manual is in pdf, html and text format (generated from docbook).
 
If you enjoy using Veusz, I would love to hear from you. Please join
the mailing lists at
 
https://gna.org/mail/?group=veusz
 
to discuss new features or if you'd like to contribute code. The
newest code can always be found in CVS.

-- 
http://mail.python.org/mailman/listinfo/python-list


  1   2   3   4   5   6   7   >