Is PEP 237 accepted?

2005-11-10 Thread malkarouri
I was just taking a pass through the PEPs when I noticed that the
"Unifying Long Integers and Integers" PEP is still a draft. With most
of it already implemented - as I understand it, banning the trailing
'L' is the only thing left - shouldn't it at least be flagged as
Accepted, if not Final?

karouri

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Considering moving from Delphi to Python [Some questions]

2005-07-06 Thread malkarouri
Dark Cowherd wrote:
> Stupid of me.
>
> I want some feedback on the following:
> anybody who has experience in writing SOAP servers in Python and data
> entry heavy web applications.
> Any suggestions?
> darkcowherd

Check ZSI or SOAPpy, both on the Python Web Services site:
http://pywebsvcs.sourceforge.net/
I usually use ZSI. You cannot generate WSDL files from Python code, so
if I were in your shoes I would write the Delphi side, generate the
WSDL file, and then use it in Python.

I have never tested the interoperability of Delphi and ZSI, by the
way, though I would love to. Please let us know if you do try it.

By the way, ElementTree has an attempt at this in elementsoap, but I
don't think it went very far.

cheers,
k



Re: Recursive function returning a list

2006-07-18 Thread malkarouri
Bruno Desthuilliers wrote:
> Boris Borcic a écrit :
> > Hello Bruno,
> >
> > Bruno Desthuilliers wrote:
> >
> >> Boris Borcic wrote:
> >>
>  Do you have any ideas?
> >>>
> >>>
> >>> you could use a recursive generator, like
> >>>
> >>> def genAllChildren(self) :
> >>> for child in self.children :
> >>> yield child
> >>> for childchild in child.genAllChildren() :
> >>> yield childchild
> >>
> >>
> >>
> >> Or how to *not* address the real problem...
> >>
> >> Boris, using a generator may be a pretty good idea, but *not* as a way
> >> to solve a problem that happens to be a FAQ !-)
> >>
> >
> > Sorry, but I don't understand your reasoning.
>
> It's quite simple. The OP's problem is well-known (it's a FAQ), and easy
> to solve. The righ answer to it is obviously to give a link to the FAQ
> (or take time to re-explain it for the zillionth time), not to propose a
> workaround.
>
> > How can you exclude that
> > the OP /may/ find that a generator neatly solves his problem ?
>
> I don't exclude it, and explicitly mentioned in whole letters that, I
> quote, it "may be a pretty good idea". And actually, the OP's problem is
> really with default values evaluation scheme - something that every
> Python programmer should know, because there are cases where you cannot
> solve it with a generator-based solution !-)
>
> > The use
> > of a default value was not an end in itself, was it ?
>
> If the OP has other reasons to want to use an accumulator based solution
> - which we don't know - then the possibility to use a default value is
> important.
>
> > - and the quirks of
> > default values being FAQ stuff don't change that. Sure if nobody had
> > covered that aspect, but a couple other posters did...
>
> Yes, but you forgot to mention that - and I would not have post any
> comment on your solution if you had explicitly mentioned the FAQ or
> these other answers.
>
> > Mmmmhhh somehow it feels like if there is any issue here, it is about
> > defending the credo "there ought to exist only one obvious way to do it"
> > ?...
>
> Nope, it's about trying to make sure that anyone googling for a similar
> problem will notice the canonical solution somehow.

Sorry, but I kind of agree with Boris here. Not that I am anybody
here, really.

If the question is how to use an accumulator-based solution, then yes,
the default-values answer is definitely the canonical solution.
If the question is how to write a recursive function that returns a
list, then an accumulator-based solution and a generator-based
solution are two different ways of doing that. I don't think there is
actually a FAQ saying you must use the accumulator solution.
Actually, the accumulator-based solution comes to me almost
automatically as the standard in any programming language, and I
believe it was established as the standard in Python _before_ the
introduction of generators.

Now, personally I find the generator-based solution more intuitive
(beauty is in the eye of the beholder :). And, looking at the subject
of the thread, guess what the question was?
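For concreteness, here is a minimal runnable sketch of the two styles side by side; the Node class and names are illustrative, not taken from the thread:

```python
class Node:
    def __init__(self, *children):
        self.children = list(children)

    def all_children_acc(self, acc=None):
        # accumulator style: create the list inside the call to avoid
        # the mutable-default-argument pitfall discussed in the thread
        if acc is None:
            acc = []
        for child in self.children:
            acc.append(child)
            child.all_children_acc(acc)
        return acc

    def all_children_gen(self):
        # generator style, in the spirit of genAllChildren above
        for child in self.children:
            yield child
            for grandchild in child.all_children_gen():
                yield grandchild

leaf1, leaf2 = Node(), Node()
root = Node(Node(leaf1), leaf2)
# both traversals visit the same nodes in the same order
assert root.all_children_acc() == list(root.all_children_gen())
print(len(root.all_children_acc()))  # 3
```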

k



Re: Recursive function returning a list

2006-07-19 Thread malkarouri
Bruno Desthuilliers wrote:
> [EMAIL PROTECTED] wrote:
[...]
> > Sorry, but I kinda agree with Boris here.
>
> On what ?

On the argument that you are (implicitly?) disagreeing with him on,
obviously: that the OP's problem is not necessarily the default-values
question. As you say:

> >>If the OP has other reasons to want to use an accumulator based solution
> >>- which we don't know - then the possibility to use a default value is
> >>important.

My emphasis is on "we don't know".

>
> > Not that I am anybody here,
> > really.
>
> Err... Are you you at least ?-)
>

I am. Thanks for your concern:)

> Note that the generator-based solution doesn't return a list. (And yes,
> I know, it's just a matter of wrapping the call to obj.genAllChildrens()
> in a list constructor).

And you can do the wrapping in a function that returns a list, to be
more pedantic. And yes, I know you know.

>
> > I don't think there is actually a FAQ
> > saying you must use the accumulator solution.
>
> Did I say so ? The FAQ I mention is about default values evaluation, and
> it's the problem the OP was facing. Please re-read my post more carefully.
>

Actually, I did read your post right the first time, and I do know
you didn't say that (a FAQ for accumulators); I raised it just in
case. What I don't agree with is that the default-values issue was the
problem the OP was facing. The discussion was: is the real problem the
default-values problem, for which he already got a solution? Or is it
the list-returning-recursion problem, for which he (or subsequent
Google searchers, who are more likely to come to this thread after
searching for "Recursive function returning a list") may benefit from
a generator approach?
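For readers landing here from a search, the default-values pitfall under discussion is the classic one; a minimal sketch with hypothetical function names:

```python
def all_items_buggy(node, acc=[]):
    # BUG: the default list is created once, at function definition
    # time, and shared across every call that omits acc
    acc.append(node)
    return acc

def all_items_fixed(node, acc=None):
    if acc is None:
        acc = []  # fresh list per call
    acc.append(node)
    return acc

print(all_items_buggy(1), all_items_buggy(2))  # shared: [1, 2] [1, 2]
print(all_items_fixed(1), all_items_fixed(2))  # independent: [1] [2]
```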

> > Actually, the accumulator based solution kind of comes to me
> > automatically as standard in any programming language, and I believe
> > that this was established as standard in python, _before_ the
> > introduction of generators.
>
> FWIW, you don't need to pass an accumulator around to solve this problem:
>
> def getAllChildren(self):
> children = []
> if self.children:
> children.extend(self.children)
> for child in self.children:
> children.extend(child.getAllChildren())
> return children

Thanks for the function, though I regard it as somewhat trivial for
the level of discussion we are having now. The problem solved by the
accumulator is _not_ building a list recursively; it is doing so
efficiently, which is definitely not achieved by creating multiple
temporary lists just to add them together. I am sure you know the
theory. More relevant is that the generator-based solution has the
same efficiency, so both are better than the trivial solution.
You have a point, though: your function is a solution. I just don't
regard it as the preferred solution for the problem as I see it. YMMV.

To recap, the OP (and subsequent Google searchers, if they pass by)
now has the solution you prefer. To my satisfaction, they will also
read the other solution, which looks more Pythonic to me. As I see it,
any additional discussion now suffers from diminishing returns.

Regards,
k



Re: Starting out.

2006-10-12 Thread malkarouri
Ahmer wrote:
> Hi all!
>
> I am a 15 year old High School Sophomore. I would like to start
> programming in Python. In school, we are learning Java (5) and I like
> to use the Eclipse IDE, I also am learning PHP as well.
>
> What are some ways to get started (books, sites, etc.)? I am usually on
> linux, but I have a windows box and am planning on getting a mac.

Generally good choices. I am not keen on PHP, however: for somebody
who is learning programming, it encourages sloppy habits.

I would suggest that you use PyDev (http://pydev.sourceforge.net/), a
Python plugin for Eclipse, for Python programming. It is a helpful
environment, especially for somebody already using Eclipse.

For resources, I suggest you take a look at
http://wiki.python.org/moin/BeginnersGuide/NonProgrammers which is
aimed at non-programmers. If you feel like checking other resources,
more are linked from http://wiki.python.org/moin/BeginnersGuide .

Best of luck,

k



identifying new not inherited methods

2006-09-26 Thread malkarouri
Hi,

I am writing a library in which I need to find the names of the
methods that are implemented in a class, rather than inherited from
another class. To explain further, and to find out if there is another
way of doing it, here is what I want to do. I am defining two classes,
say A and B, as:

class A(object):
    def list_cmds(self):
        'implementation needed'
        ?
    def __init__(self):
        ... (rest of class)

class B(A):
    def cmd1(self, args):
        pass
    def cmd2(self, args):
        pass

I need an implementation of list_cmds in A above so that I can get a
result:

>>> b = B()
>>> b.list_cmds()
['cmd1', 'cmd2']   # order not important

I will be happy if anybody can point me to any way of doing this,
using class attributes, metaclasses or otherwise. What I don't want to
do, if possible, is modify class B, which contains just the cmds.
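For what it's worth, one way to sketch such a list_cmds with the inspect module (an illustrative sketch in modern Python, assuming the commands are plain functions defined on the subclass):

```python
import inspect

class A(object):
    def list_cmds(self):
        # names defined on the instance's own class but not on A itself
        cls = type(self)
        return [name for name, obj in vars(cls).items()
                if inspect.isfunction(obj) and name not in vars(A)]

class B(A):
    def cmd1(self, args):
        pass
    def cmd2(self, args):
        pass

print(sorted(B().list_cmds()))  # ['cmd1', 'cmd2']
```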

Many thanks in advance.

k



Re: identifying new not inherited methods

2006-09-26 Thread malkarouri
George Sakkis wrote:
[...]
> I'd rather have it as a function, not attached to a specific class:
>

Thanks a lot George, that was what I was looking for. I have come to
understand/appreciate inspect more.
Of course, it works as a method too. So, other than keeping it as a
general utility, I presume there is no special reason to have it as a
function rather than a method...


Fredrik Lundh wrote:
[...]
> if the command methods can have any arbitrary names, change the test to
> filter out the methods you're not interested in.  it's usually easier to
> make sure that all commands use a common name prefix, though.

Thanks a lot, FL. I actually went down this road first. But I don't
want to use a common prefix, and filtering methods out feels to me
like a probable source of bugs: I would have to remember, whenever I
add a method to class A, to add it to the filter as well.

> and yes, if you haven't done so already, take a look at the "cmd" module
> before you build your own variant:
>
>  http://effbot.org/librarybook/cmd.htm

Thanks again. I know I am writing a cmd variant; actually, that's
exactly where I started, with your cmd module page.
The reason I am doing this is mainly to get a cmd variant decoupled
from stdin/stdout, to hook into PyShell or IPython (not decided yet).
Is there a way to use cmd without assuming stdin/stdout?
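On that last question: cmd.Cmd does accept alternative streams via its stdin and stdout arguments, provided use_rawinput is switched off so it stops calling raw_input/input. A minimal sketch in modern Python, with a hypothetical Greeter command class:

```python
import cmd
import io

class Greeter(cmd.Cmd):
    prompt = ''  # no prompt text written to the output stream

    def do_greet(self, arg):
        self.stdout.write('hello %s\n' % arg)

    def do_EOF(self, arg):
        return True  # stop the loop at end of input

inp = io.StringIO('greet world\nEOF\n')
out = io.StringIO()
shell = Greeter(stdin=inp, stdout=out)
shell.use_rawinput = False  # read from self.stdin instead of input()
shell.cmdloop(intro='')
print(out.getvalue())  # hello world
```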

Regards,

k



Re: identifying new not inherited methods

2006-09-27 Thread malkarouri
George Sakkis wrote:
[...]
> You're looking at it backwards; there's no particular reason this
> should be a method of class A since it can be used for any arbitrary
> object with no extra overhead. Now, if you intend to use it only for
> instances of A and its subclasses, the only difference would be
> syntactic; if you prefer x.list_cmds() from list_cmds(x), go with the
> method.

You are right, of course. Let's say it's just bad company: the group
I am working with is mainly Java developers.

k



Re: Tripoli: a Python-based triplespace implementation

2005-05-01 Thread malkarouri
Dominic Fox wrote:
> I have been working on a Python implementation of a modified Tuple
> Space (cf Linda, JavaSpaces) that contains only 3-tuples (triples),
> and that has operators for copying and moving graphs of triples as
> well as sets matching a given pattern. It's called Tripoli, ...

Interesting, but I wonder if you are aware of PyLinda
(http://www-users.cs.york.ac.uk/~aw/pylinda/) and, if so, what the
difference is?

Regards,

karouri



Re: Python Memory Usage

2007-06-30 Thread malkarouri
On Jun 20, 4:48 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
wrote:
> I am using Python to process particle data from a physics simulation.
> There are about 15 MB of data associated with each simulation, but
> there are many simulations.  I read the data from each simulation into
> Numpy arrays and do a simple calculation on them that involves a few
> eigenvalues of small matricies and quite a number of temporary
> arrays.  I had assumed that that generating lots of temporary arrays
> would make my program run slowly, but I didn't think that it would
> cause the program to consume all of the computer's memory, because I'm
> only dealing with 10-20 MB at a time.
>
> So, I have a function that reliably increases the virtual memory usage
> by ~40 MB each time it's run.  I'm measuring memory usage by looking
> at the VmSize and VmRSS lines in the /proc/[pid]/status file on an
> Ubuntu (edgy) system.  This seems strange because I only have 15 MB of
> data.
>
> I started looking at the difference between what gc.get_objects()
> returns before and after my function.  I expected to see zillions of
> temporary Numpy arrays that I was somehow unintentionally maintaining
> references to.  However, I found that only 27 additional objects  were
> in the list that comes from get_objects(), and all of them look
> small.  A few strings, a few small tuples, a few small dicts, and a
> Frame object.
>
> I also found a tool called heapy (http://guppy-pe.sourceforge.net/)
> which seems to be able to give useful information about memory usage
> in Python.  This seemed to confirm what I found from manual
> inspection: only a few new objects are allocated by my function, and
> they're small.
>
> I found Evan Jones article about the Python 2.4 memory allocator never
> freeing memory in certain circumstances:  
> http://evanjones.ca/python-memory.html.
> This sounds a lot like what's happening to me.  However, his patch was
> applied in Python 2.5 and I'm using Python 2.5.  Nevertheless, it
> looks an awful lot like Python doesn't think it's holding on to the
> memory, but doesn't give it back to the operating system, either.  Nor
> does Python reuse the memory, since each successive call to my
> function consumes an additional 40 MB.  This continues until finally
> the VM usage is gigabytes and I get a MemoryException.
>
> I'm using Python 2.5 on an Ubuntu edgy box, and numpy 1.0.3.  I'm also
> using a few routines from scipy 0.5.2, but for this part of the code
> it's just the eigenvalue routines.
>
> It seems that the standard advice when someone has a bit of Python
> code that progressively consumes all memory is to fork a process.  I
> guess that's not the worst thing in the world, but it certainly is
> annoying.  Given that others seem to have had this problem, is there a
> slick package to do this?  I envision:
> value = call_in_separate_process(my_func, my_args)
>
> Suggestions about how to proceed are welcome.  Ideally I'd like to
> know why this is going on and fix it.  Short of that workarounds that
> are more clever than the "separate process" one are also welcome.
>
> Thanks,
> Greg

I had almost the same problem. Will this do?

http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/511474

Any comments are welcome (I wrote the recipe with the help of fellow Pythonistas).

Regards,
Muhammad Alkarouri



Re: how to implementation latent semantic indexing in python..

2007-07-18 Thread malkarouri
On 13 Jul, 17:18, 78ncp <[EMAIL PROTECTED]> wrote:
> hi...
> how to implementation algorithm latent semantic indexing in python
> programming...??
>
> thank's for daniel who answered my question before..
>
> --
> View this message in 
> context:http://www.nabble.com/how-to-implementation-latent-semantic-indexing-...
> Sent from the Python - python-list mailing list archive at Nabble.com.

IIRC, there was an explanation of Latent Semantic Analysis (with
Python code) in an IEEE ReadyNotes document called "Introduction to
Python for Artificial Intelligence". It wasn't free, I'm afraid.

Of course, you are aware that LSA is patented...

Muhammad



Re: Problem with Closing TCP connection

2007-05-05 Thread malkarouri
On 5 May, 12:18, Madhur <[EMAIL PROTECTED]> wrote:
[...]
> as a sink to pump TCP messages. During which i have observed that the
> TCP close interface provided by Python is not closing the connection.
> This i confirmed by looking at the ethereal logs, which show proper 3
> way FIN ACK Handshake. But the netstat reports TIME_WAIT state for the
> TCP connection, which is hindering the messages to be pumped later. I
> would like to know whether the problem exists Python close and is
> there is turnaround? to the mentioned problem.

IIRC, this is normal operation for TCP connections. A very short
explanation is here:
http://www.unixguide.net/network/socketfaq/2.7.shtml
So it is not a problem with Python's close.
I don't know exactly what you want to do, but I suggest you look at
one of the following options:
- Either get raw IP packets and filter them, the same way ethereal
does. Probably complicated coding.
- Or, depending on your problem, try opening the TCP socket in
Python with the SO_REUSEADDR option.
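To illustrate the second option, here is a generic sketch of setting SO_REUSEADDR before bind (not tied to the OP's code; port 0 just asks the OS for any free port):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Allow bind() to succeed even if the address is still held by a
# previous connection lingering in TIME_WAIT
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('127.0.0.1', 0))  # port 0: let the OS pick a free port
s.listen(1)
print(s.getsockname())
s.close()
```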

Regards,
k



run function in separate process

2007-04-11 Thread malkarouri
Hi everyone,

I have written a function that runs other functions in separate
processes. I hope you can help me improve it, and I would like to
submit it to the Python Cookbook if its quality is good enough.

I was writing a numerical program (using numpy) which uses huge
amounts of memory, the memory increasing with time. The program
structure was essentially:

for radius in radii:
    result = do_work(params)

where do_work actually uses a large number of temporary arrays. The
variable params is large as well, and is the result of computations
done before the loop.

After playing with gc for some time, trying to convince it to release
the memory, I gave up. I will be happy, by the way, if somebody points
me to a web page/reference that explains how to call a function and
then reclaim all of its memory back in Python.

Meanwhile, the best that I could do is fork a process, compute the
results, and return them to the parent process. I implemented this in
the following function, which is kind of working for me now, but I am
sure it can be much improved. There should be a better way to return
the result than a temporary file, for example. I actually thought of
posting this after noticing that the pypy project had what I thought
was a similar thing in their testing, but they probably dealt with it
differently in the autotest driver [1]; I am not sure.

Here is the function:

def run_in_separate_process(f, *args, **kwds):
    from os import tmpnam, fork, waitpid, remove
    from sys import exit
    from pickle import load, dump
    from contextlib import closing
    fname = tmpnam()
    pid = fork()
    if pid > 0:  # parent
        waitpid(pid, 0)  # should have checked for correct finishing
        with closing(file(fname)) as f:
            result = load(f)
        remove(fname)
        return result
    else:  # child
        result = f(*args, **kwds)
        with closing(file(fname, 'w')) as f:
            dump(result, f)
        exit(0)


To be used as:

for radius in radii:
    result = run_in_separate_process(do_work, params)

[1] http://codespeak.net/pipermail/pypy-dev/2006q3/003273.html
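For comparison, the multiprocessing module (added to the standard library in Python 2.6, after this post) packages the same fork-compute-pickle-return pattern; a minimal sketch with an illustrative do_work:

```python
from multiprocessing import Pool

def do_work(x):
    # Runs in a worker process; its memory goes back to the OS
    # when the worker exits.
    return x * x

if __name__ == '__main__':
    # maxtasksperchild=1 recycles the worker after every call,
    # mimicking the fork-per-call approach above.
    with Pool(processes=1, maxtasksperchild=1) as pool:
        result = pool.apply(do_work, (7,))
    print(result)  # 49
```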



Regards,

Muhammad Alkarouri



Re: run function in separate process

2007-04-11 Thread malkarouri
Thanks, Mike, for your answer. I will use the occasion to add some
comments on the links and on my approach.

I am programming in Python 2.5, mainly to avoid the old bug whereby
memory arenas were never freed.
The program runs on both Mac OS X (Intel) and Linux, so I prefer
portable approaches.

On Apr 11, 3:34 pm, [EMAIL PROTECTED] wrote:
[...]
> I found a post on a similar topic that looks like it may give you some
> ideas:
>
> http://mail.python.org/pipermail/python-list/2004-October/285400.html

I see the comment about using mmap as valuable. I tried that using
numpy.memmap, but I wasn't successful; I don't remember why at the
moment.
The other tricks are problem-dependent, and my case is not like them
(I believe).

> http://www.artima.com/forums/flat.jsp?forum=106&thread=174099

Good ideas. I hope that Python will grow a replaceable GC one day. I
think PyPy already offers a choice at the moment.

> http://www.nabble.com/memory-manage-in-python-fu-t3386442.html

> http://www.thescripts.com/forum/thread620226.html

Bingo! This thread actually reaches more or less the same conclusion.
In fact, Alex Martelli describes the exact pattern in
http://mail.python.org/pipermail/python-list/2007-March/431910.html

I probably got the idea from a previous thread by him or somebody
else. It must have been much earlier than March, though, as my program
has been working since last year.

So, let's say the function I have written is an implementation of
Alex's architectural pattern. That probably makes it easier to get
into the Cookbook :)

Regards,

Muhammad



Re: run function in separate process

2007-04-11 Thread malkarouri
On Apr 11, 3:58 pm, [EMAIL PROTECTED] (Alex Martelli) wrote:
[...]
> That's my favorite way to ensure that all resources get reclaimed: let
> the operating system do the job.

Thanks a lot, Alex, for confirming the basic idea. I will be playing
with your function later today and will give more feedback.
I think I avoided the pipe on the mistaken belief that pipes cannot
carry binary data. I know, I should have tested. And I avoided pickle
at the time because I had a structure that was unpicklable (grown by
me using a mixture of Python, C, ctypes and Pyrex). The structure is
improved now, and I will go for the more standard approach.

Regards,

Muhammad



Re: run function in separate process

2007-04-11 Thread malkarouri
On Apr 11, 4:36 pm, [EMAIL PROTECTED] wrote:
[...]
> .. And I avoided pickle at the time
> because I had a structure that was unpicklable (grown by me using a
> mixture of python, C, ctypes and pyrex at the time). The structure is
> improved now, and I will go for the more standard approach..

Sorry, I was speaking about an older version of my code. The code is
already using pickle, and yes, cPickle is better.

Still trying the code. So far, after modifying the line:

cPickle.dump(f, -1)

to:

cPickle.dump(result, f, -1)

it is working.

Regards,

Muhammad



Re: run function in separate process

2007-04-11 Thread malkarouri
After playing with Alex's implementation, and adding some support for
exceptions, this is what I came up with. I hope I am not getting too
clever for my needs:

import os, cPickle

def run_in_separate_process_2(f, *args, **kwds):
    pread, pwrite = os.pipe()
    pid = os.fork()
    if pid > 0:
        os.close(pwrite)
        with os.fdopen(pread, 'rb') as f:
            status, result = cPickle.load(f)
        os.waitpid(pid, 0)
        if status == 0:
            return result
        else:
            raise result
    else:
        os.close(pread)
        try:
            result = f(*args, **kwds)
            status = 0
        except Exception, exc:
            result = exc
            status = 1
        with os.fdopen(pwrite, 'wb') as f:
            try:
                cPickle.dump((status, result), f,
                             cPickle.HIGHEST_PROTOCOL)
            except cPickle.PicklingError, exc:
                cPickle.dump((2, exc), f, cPickle.HIGHEST_PROTOCOL)
        os._exit(0)



Basically, the function is called in the child process, and a status
code is returned in addition to the result. The status is 0 if the
function returns normally, 1 if it raises an exception, and 2 if the
result is unpicklable. Some cases are deliberately not handled: a
SystemExit or a KeyboardInterrupt in the child shows up as an EOFError
during the unpickling in the parent. Some cases are inadvertently not
handled; these are called bugs. And the original exception traceback
is lost. Any comments?

Regards,

Muhammad Alkarouri



CPython and a C extension using Boehm GC

2007-12-25 Thread malkarouri
Hi everyone,

Is it possible to write a Python extension that uses the Boehm garbage
collector?
I have a C library that uses boehm-gc for memory management. To use
it, I have to call GC_INIT() at the start of any program that uses the
library. Now I want to encapsulate the library as a CPython extension.
The question really is: is that possible? Will there be conflicts
between boehm-gc and Python's memory management? And when should I
call GC_INIT()?

Best Regards,

Muhammad Alkarouri


List as FIFO in for loop

2008-03-08 Thread malkarouri
Hi everyone,

I have an algorithm in which I need to loop over a queue onto which I
push values within the loop, sort of:

while not q.empty():
    x = q.get()
    # process x to get zero or more y's
    # for each y:
    q.put(y)

The easiest thing I can do is use a list as a queue and a normal for
loop:

q = [a, b, c]

for x in q:
    # process x to get zero or more y's
    q.append(y)

It makes me feel kind of uncomfortable, though it seems to work. The
question is: is it guaranteed to work, or does Python assume that you
won't change the list inside the loop?
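As a self-contained illustration of the pattern in question (the tree data here is hypothetical, not from the post):

```python
# Breadth-first traversal using a list as a growing FIFO:
# the for loop walks the list while the body appends to it.
tree = {'a': ['b', 'c'], 'b': ['d'], 'c': [], 'd': []}
q = ['a']
visited = []
for x in q:            # q grows while we iterate over it
    visited.append(x)
    q.extend(tree[x])  # enqueue children
print(visited)  # ['a', 'b', 'c', 'd']
```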

Regards,

Muhammad Alkarouri


Re: List as FIFO in for loop

2008-03-08 Thread malkarouri
On Mar 8, 3:20 pm, Roel Schroeven <[EMAIL PROTECTED]>
wrote:
> malkarouri schreef:
>
>
>
> > Hi everyone,
>
> > I have an algorithm in which I need to use a loop over a queue on
> > which I push values within the loop, sort of:
>
> > while not(q.empty()):
> >     x = q.get()
> >     #process x to get zero or more y's
> >     #for each y:
> >     q.put(y)
>
> > The easiest thing I can do is use a list as a queue and a normal for
> > loop:
>
> > q = [a, b, c]
>
> > for x in q:
> >     #process x to get zero or more y's
> >     q.append(y)
>
> > It makes me feel kind of uncomfortable, though it seems to work. The
> > question is: is it guaranteed to work, or does Python expect that you
> > wouldn't change the list in the loop?
>
> Changing a loop while iterating over it is to be avoided, if possible.
> In any case, a deque is more efficient for this kind of use. I'd use it
> like this:
>
> from collections import deque
>
> q = deque([a, b, c])
> while q:
>      x = q.popleft()
>      # ...
>      q.append(y)
>
> --
> The saddest aspect of life right now is that science gathers knowledge
> faster than society gathers wisdom.
>    -- Isaac Asimov
>
> Roel Schroeven

Thanks for your response. I have the same feeling - avoid modifying
what you are looping over - but with no explicit reason.
Thanks for reminding me of deque, which I have never used before.
Alas, in terms of efficiency - which I need - I don't really need to
pop the value off the list/deque, and that additional step takes
enough time to slow the loop down a lot. So it's not ideal here.

Still, why avoid changing the sequence in the loop? Does Python treat
looping over a list differently from looping over an iterator, where
it cannot know whether the iterator will change while the loop is
running?

Regards,

Muhammad Alkarouri


Re: List as FIFO in for loop

2008-03-08 Thread malkarouri
On Mar 8, 3:52 pm, duncan smith <[EMAIL PROTECTED]> wrote:
> malkarouri wrote:
> > Hi everyone,
>
> > I have an algorithm in which I need to use a loop over a queue on
> > which I push values within the loop, sort of:
>
> > while not(q.empty()):
> >     x = q.get()
> >     #process x to get zero or more y's
> >     #for each y:
> >     q.put(y)
>
> > The easiest thing I can do is use a list as a queue and a normal for
> > loop:
>
> > q = [a, b, c]
>
> > for x in q:
> >     #process x to get zero or more y's
> >     q.append(y)
>
> > It makes me feel kind of uncomfortable, though it seems to work. The
> > question is: is it guaranteed to work, or does Python expect that you
> > wouldn't change the list in the loop?
>
> I have used exactly the same approach.  I think it's a clean (even
> elegant) solution.  I'd be surprised if it ceased to work in some future
> implementation of Python, but I don't know if that's absolutely guaranteed.
>
> Duncan

Thanks Duncan, I think I will go ahead and use it. The Python
tutorial, though, says otherwise in section 4.2:
"It is not safe to modify the sequence being iterated over in the loop
(this can only happen for mutable sequence types, such as lists). If
you need to modify the list you are iterating over (for example, to
duplicate selected items) you must iterate over a copy.".

More explicitly, in 7.3 of the Python Reference Manual:
"Warning: There is a subtlety when the sequence is being modified by
the loop (this can only occur for mutable sequences, i.e. lists). An
internal counter is used to keep track of which item is used next, and
this is incremented on each iteration. When this counter has reached
the length of the sequence the loop terminates. This means that if the
suite deletes the current (or a previous) item from the sequence, the
next item will be skipped (since it gets the index of the current item
which has already been treated). Likewise, if the suite inserts an
item in the sequence before the current item, the current item will be
treated again the next time through the loop."
This can be read as "don't play with the past". However, the part
"When this counter has reached the length of the sequence the loop
terminates" can be interpreted as either the starting sequence length
or the running sequence length.

Testing:

In [89]: x = range(4)

In [90]: for i in x:
   ....:     print i
   ....:     x.append(i+4)
   ....:     if i >= 8: break
   ....:
0
1
2
3
4
5
6
7
8

So it is the running sequence length. But I am still not sure whether
that is guaranteed.

Regards,

Muhammad Alkarouri


Re: List as FIFO in for loop

2008-03-08 Thread malkarouri
On Mar 8, 4:44 pm, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
...
> Notice that the language specification *deliberately* does not
> distinguish between deletion of earlier and later items, but
> makes modification of the sequence undefined behavior to allow
> alternative implementations. E.g. an implementation that would
> crash, erase your hard disk, or set your house in flames if you
> confront it with your code still might be a conforming Python
> implementation.

Really appreciated, Martin. It was exactly the *deliberately* part I
was interested in. That settles it for me.

Thanks,

Muhammad Alkarouri
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: List as FIFO in for loop

2008-03-08 Thread malkarouri
On Mar 8, 6:24 pm, rockingred <[EMAIL PROTECTED]> wrote:
> I think it's a bad practice to get into.  Did you intend to do the
> "process" step again over the added variables?  If not I would set a
> new variable, based on your awful naming convention, let's call it z.
> Then use z.append(y) within the for loop and after you are out of your
> for loop, q.append(z).

Thanks, rockingred, for the advice. I hope that you didn't assume that
I was a newbie, even if my question looks so. What I was trying to do
is write some Python code which I need to optimize as much as
possible. I am using Cython (Pyrex) and would probably end up
rewriting my whole module in C at one point, but it is really hard to
beat Python data structures at their home turf. So meanwhile, I am
making use of dubious optimizations - on my own responsibility. There
have been a lot of these over the years - like using variables leaking
out of list comprehensions (not possible anymore). Think of it as a goto.
Yes, I intend to do the process step again over the added variables.
The suggested deque is probably the best, though I need the speed
here.
What variable naming would you suggest for throwaway code - probably
anonymized for the sake of publishing on the web?

Cheers,

Muhammad Alkarouri
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: SoC project: Python-Haskell bridge - request for feedback

2008-03-27 Thread malkarouri
On 26 Mar, 08:46, Paul Rubin  wrote:
> A few thoughts.  The envisioned Python-Haskell bridge would have two
> directions: 1) calling Haskell code from Python; 2) calling Python
> code from Haskell.  The proposal spends more space on #1 but I think
> #1 is both more difficult and less interesting.

FWIW, I find #1 more interesting for me personally.
As a monad-challenged person, I find it much easier to develop
components using pure functional programming in a language like
Haskell and do all my I/O in Python than having it the other way
round.
Of course, if it is more difficult then I wouldn't expect it from a
SoC project, but that's that.

Muhammad Alkarouri
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Why shouldn't you put config options in py files

2008-12-04 Thread malkarouri
On 4 Dec, 19:35, HT <[EMAIL PROTECTED]> wrote:
> A colleague of mine is arguing that since it is easy to write config like:
>
> FOO = {'bar': {'a': 'b'}, 'abc': {'z': 'x'}}
>
> in config.py and just import it to get FOO, but difficult to achieve the
> same using an ini file and ConfigParser, and since Python files are just
> text, we should just write the config options in the Python file and
> import it.
>
> I can think of lots of arguments why this is a bad idea, but I don't
> seem to be able to think of a really convincing one.
>
> Anyone?

Some people actually do that. IIRC, ipython is now configured using a
python module.
The idea, however, is dangerous from a security viewpoint. Because
anybody can edit his configuration .py file, you are in effect
injecting arbitrary code into your program. Imagine that your program
starts with raw_input() and then goes on to execute whatever it gets -
the same class of problem as SQL injection.
So people prefer a much more controlled environment for
configuration. In particular, using json as Chris said should become a
best practice now that we have the json module.
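As a minimal sketch of that approach (the keys mirror the FOO example
above; the literal string stands in for a config file's contents), the
program parses the configuration instead of executing it:

```python
import json

# json.loads can only produce data (dicts, lists, strings, numbers),
# never executable code, so a malicious config file cannot inject logic.
config_text = '{"bar": {"a": "b"}, "abc": {"z": "x"}}'
FOO = json.loads(config_text)
assert FOO['bar']['a'] == 'b'
```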

Regards,

Muhammad Alkarouri
--
http://mail.python.org/mailman/listinfo/python-list


Re: RELEASED Python 2.6.1

2008-12-05 Thread malkarouri
On 5 Dec, 05:07, Barry Warsaw <[EMAIL PROTECTED]> wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hot on the heals of Python 3.0 comes the Python 2.6.1 bug-fix  
> release.

Nice work. Thanks.

> Source tarballs and Windows installers can be downloaded from the  
> Python 2.6.1 page

I note that OS X installers have not been released (yet). I don't know
if you plan to, but I think it is important to release installers that
do not suffer from the bug http://bugs.python.org/issue4017 which
renders Tkinter unusable in the 2.6.0 release and which (I believe) is
a build issue. Can we expect such an updated release?

Many thanks,

Muhammad Alkarouri
--
http://mail.python.org/mailman/listinfo/python-list


Re: how to get a beep, OS independent ?

2008-12-08 Thread malkarouri
On 6 Dec, 23:40, Stef Mientki <[EMAIL PROTECTED]> wrote:
> hello,
>
> I want to give a small beep,

Just to add to the options here: where ncurses works you can use:

python -c 'from curses import *;wrapper(lambda s:beep())'

To try it, just enter the whole line above at the command line.
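Outside a curses context, a rough fallback (assuming the terminal
honours the ASCII bell character, which some emulators disable) is
simply to write '\a':

```python
import sys

def beep(stream=sys.stdout):
    # '\a' is ASCII BEL (7); most terminals translate it into a beep,
    # though whether any sound is heard depends on terminal settings.
    stream.write('\a')
    stream.flush()

beep()
```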

Regards,

Muhammad Alkarouri
--
http://mail.python.org/mailman/listinfo/python-list


Re: When (and why) to use del?

2008-12-09 Thread malkarouri
On 9 Dec, 16:35, Albert Hopkins <[EMAIL PROTECTED]> wrote:
> I'm looking at a person's code and I see a lot of stuff like this:
>
>         def myfunction():
>             # do some stuff stuff
>             my_string = function_that_returns_string()
>             # do some stuff with my_string
>             del my_string
>             # do some other stuff
>             return
>
> and also
>
>         def otherfunction():
>             try:
>                 # some stuff
>             except SomeException, e:
>                 # more stuff
>                 del e
>             return
>
> I think this looks ugly, but also does it not hurt performance by
> preempting the gc?  My feeling is that this is a misuse of 'del'. Am I
> wrong?  Is there any advantage of doing the above?

As far as I understand it, this does not do anything. Let me explain.
The del statement doesn't actually free memory. It just removes the
binding from the corresponding namespace. So in your first example,
my_string cannot be used after the deletion. Of course, if the string
referenced by my_string was referenced by some other name then it will
still stay in memory.

In both your examples, the bindings are going to be removed anyway
immediately after the del statement, when returning from the function.
So, the del is redundant. If for some reason, however, you are not
breaking your work into functions (not advisable) you will have a huge
list of commands with no deletion and start to use huge amounts of
memory. Solution: divide your work into functions.

It is sometimes useful, however, to preempt the gc. This can be done
when you know it is a good time for housekeeping, e.g. during an
application's idle period. Calling gc.collect() is useful here.
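A small sketch of the binding point (the class name is made up for
illustration): del removes only the name, and in CPython the object
itself goes away when its last reference does:

```python
class Tracked(object):
    alive = 0
    def __init__(self):
        Tracked.alive += 1
    def __del__(self):
        Tracked.alive -= 1

a = Tracked()
b = a                       # two names bound to the same object
del a                       # unbinds 'a'; the object survives via 'b'
assert Tracked.alive == 1
del b                       # last reference gone; CPython's refcounting
assert Tracked.alive == 0   # frees it immediately, calling __del__
```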

Regards,

Muhammad Alkarouri
--
http://mail.python.org/mailman/listinfo/python-list


Re: function that accepts any amount of arguments?

2008-04-24 Thread malkarouri
On Apr 24, 12:43 pm, Bruno Desthuilliers  wrote:
[...]
> Not quite sure what's the best thing to do in the second case - raise a
> ValueError if args is empty, or silently return 0.0 - but I'd tend to
> choose the first solution (Python's Zen, verses 9-11).

What's wrong with raising ZeroDivisionError (not stopping the
exception in the first place)?
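For a hypothetical mean() like the one under discussion (a sketch, not
the code from the thread), letting the division fail on its own looks
like this:

```python
def mean(*args):
    # With no arguments len(args) == 0, so the division itself raises
    # ZeroDivisionError -- no explicit emptiness check is needed.
    return sum(args) / float(len(args))

assert mean(1, 2, 3) == 2.0

raised = False
try:
    mean()
except ZeroDivisionError:
    raised = True
assert raised   # the empty-argument case surfaces as ZeroDivisionError
```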

k
--
http://mail.python.org/mailman/listinfo/python-list


Re: Large Data Sets: Use base variables or classes? And some binding questions

2008-09-26 Thread malkarouri
On 26 Sep, 16:39, Patrick  Sullivan <[EMAIL PROTECTED]> wrote:
> Hello.
>
> I will be using some large data sets ("points" from 2 to 12 variables)
> and would like to use one class for each point rather than a list or
> dictionary. I imagine this is terribly inefficient, but how much?

I can't really get into details here, but I would suggest that you go
ahead and try first. As you know, premature optimization is the root
of all evil.

General points I would suggest:

- Use Numpy/Scipy (http://www.scipy.org). You will get better
efficiency more easily than if you use plain Python lists, and the
result is much easier to optimize later.
- Your questions about referencing classes and variables suggest that
perhaps you are coming from a C background, or maybe Java. Anyway,
as far as I know, it is not standard practice in Python to write a
method (you meant a normal bound method, right?) just to access a
variable. Use a normal Python attribute, and if you later need to put
logic behind the access, turn it into a property.
- Is the efficiency you are looking for in terms of time or memory?
The difference sometimes leads to different optimization tricks.
- By using Numpy there is probably another advantage to you: some
efficiency in the data representation, as the NumPy array stores data,
say integers, without memory overhead per member (point). Just an
array of integers. Of course there is additional constant memory per
array which is independent of the number of elements (points) you are
storing.
- Generally try to think in terms of arrays of data rather than single
points. If it helps, think in terms of matrices. That is more or less
the design of Matlab, and Numpy is more or less similar.
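The attribute-versus-method point can be sketched like this (Point and
its field are made up for illustration): start with a plain attribute
and promote it to a property only when access needs logic, without
changing any calling code:

```python
class Point(object):
    def __init__(self, x):
        self._x = x

    @property
    def x(self):
        # Callers still write 'p.x' as if it were a plain attribute,
        # but this accessor can now run validation, caching, etc.
        return self._x

p = Point(3)
assert p.x == 3   # no Java-style p.get_x() needed
```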


Now if you specify your problem further I am sure that you will get
better advice from the community here. Don't focus on the details,
probably the bigger picture will help. Working in graphics? Image
processing? Machine Learning/Statistics/Data Mining/ etc..?

--
Muhammad Alkarouri
--
http://mail.python.org/mailman/listinfo/python-list