On 20/09/12 05:11:11, Chris Angelico wrote:
> On Thu, Sep 20, 2012 at 7:09 AM, Ian Kelly wrote:
>> You could do:
>>
>> os.listdir("/proc/%d/fd" % os.getpid())
>>
>> This should work on Linux, AIX, and Solaris, but obviously not on Windows.
On MacOS X, you can use
os.listdir("/dev/fd")
This
2012/9/19 Christian Heimes :
>> So the question:
>> * If I execve a python script (from C), how can I retrieve the list of
>> files, and optionally the list of locks, from within the execve(d)
>> python process so that I can use them?
>
> Have a look at psutil:
>
On Thu, Sep 20, 2012 at 7:09 AM, Ian Kelly wrote:
> You could do:
>
> os.listdir("/proc/%d/fd" % os.getpid())
>
> This should work on Linux, AIX, and Solaris, but obviously not on Windows.
I'm not sure how cross-platform it is, but at least on Linux, you can
use /proc/self as an alias for "/proc/
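Putting the suggestions from this thread together, a minimal sketch (the helper name and fallback order are mine; Linux provides /proc/self/fd, macOS and some BSDs provide /dev/fd, and Windows has neither):

```python
import os

def open_fds():
    """Best-effort list of the file descriptors open in this process."""
    for fd_dir in ("/proc/self/fd", "/dev/fd"):
        if os.path.isdir(fd_dir):
            # listdir itself briefly opens a descriptor, so the snapshot
            # may contain one short-lived extra entry.
            return sorted(int(name) for name in os.listdir(fd_dir))
    raise OSError("no fd directory available on this platform")

print(open_fds())
```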
Am 19.09.2012 19:34, schrieb Ismael Farfán:
> Hello list
>
> From man 2 EXECVE
> "By default, file descriptors remain open across an execve()"
>
> And from man 2 FCNTL
> "Record locks are... preserved across an execve(2)."
>
> So the question:
>
2012/9/19 Ian Kelly :
> On Wed, Sep 19, 2012 at 2:36 PM, Ismael Farfán wrote:
>> It seems like I can use os.fstat to find out if a fd exists and also
>> get its type and mode (I'm getting some pipes too : )
>
> Sure, because files and pipes both use the file descriptor
> abstraction. If your pro
On Wed, Sep 19, 2012 at 2:36 PM, Ismael Farfán wrote:
> It seems like I can use os.fstat to find out if a fd exists and also
> get its type and mode (I'm getting some pipes too : )
Sure, because files and pipes both use the file descriptor
abstraction. If your process does any networking, you'l
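The os.fstat probing described here can be sketched as follows (the descriptor range and helper name are arbitrary choices of mine):

```python
import os
import stat

def describe_fds(max_fd=64):
    """Probe descriptors 0..max_fd-1 with os.fstat and classify them."""
    kinds = {}
    for fd in range(max_fd):
        try:
            st = os.fstat(fd)
        except OSError:
            continue  # not an open descriptor
        mode = st.st_mode
        if stat.S_ISFIFO(mode):
            kinds[fd] = "pipe"
        elif stat.S_ISSOCK(mode):
            kinds[fd] = "socket"
        elif stat.S_ISREG(mode):
            kinds[fd] = "regular file"
        else:
            kinds[fd] = "other"  # character devices, directories, ...
    return kinds

print(describe_fds())
```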
On Wed, Sep 19, 2012 at 11:34 AM, Ismael Farfán wrote:
> So the question:
> * If I execve a python script (from C), how can I retrieve the list of
> files, and optionally the list of locks, from within the execve(d)
> python process so that I can use them?
>
>
> Some more inf
2012/9/19 Ismael Farfán :
> Hello list
>
> From man 2 EXECVE
> "By default, file descriptors remain open across an execve()"
>
> And from man 2 FCNTL
> "Record locks are... preserved across an execve(2)."
>
> So the question:
> * If I execve a pyth
Hello list
>From man 2 EXECVE
"By default, file descriptors remain open across an execve()"
And from man 2 FCNTL
"Record locks are... preserved across an execve(2)."
So the question:
* If I execve a python script (from C), how can I retrieve the list of
files, and optio
On 8/31/2011 5:40 AM, Chris Withers wrote:
On 31/08/2011 13:33, Steven D'Aprano wrote:
I am using Linux desktops; both incidents were with Python 2.5. Do newer
versions of Python respond to this sort of situation more gracefully?
Ironically, Windows does better here and dumps you out with a
Me
On Wednesday, August 31, 2011 5:49:24 AM UTC-7, Benjamin Kaplan wrote:
> 32-bit or 64-bit Python? A 32-bit program will crash once memory hits
> 2GB. A 64-bit program will just keep consuming RAM until your computer
> starts thrashing. The problem isn't your program using more RAM than
> you have,
On 9/12/2011 7:40 AM, Roy Smith wrote:
In article<4e6dc66e$0$29986$c3e8da3$54964...@news.astraweb.com>,
Steven D'Aprano wrote:
mylist = [0]*12345678901234
[...]
Apart from "Then don't do that!", is there anything I can do to prevent
this sort of thing in the future? Like instruct Python no
On Wed, 31 Aug 2011 22:47:59 +1000, Steven D'Aprano wrote:
>> Linux seems to fare badly when programs use more memory than physically
>> available. Perhaps there's some per-process thing that can be used to
>> limit things on Linux?
>
> As far as I know, ulimit ("user limit") won't help. It can l
In article <4e6dc66e$0$29986$c3e8da3$54964...@news.astraweb.com>,
Steven D'Aprano wrote:
> > mylist = [0]*12345678901234
> [...]
> > Apart from "Then don't do that!", is there anything I can do to prevent
> > this sort of thing in the future? Like instruct Python not to request more
> > memory t
On Wed, 31 Aug 2011 10:33 pm Steven D'Aprano wrote:
> Twice in a couple of weeks, I have locked up my PC by running a Python 2.5
> script that tries to create a list that is insanely too big.
>
> In the first case, I (stupidly) did something like:
>
> mylist = [0]*12345678901234
[...]
> Apart fr
On 08/31/11 18:31, Gregory Ewing wrote:
The Python process should also be able to set its own
limits using resource.setrlimit().
A new corner of stdlib that I've never poked at. Thanks for the
suggestion. Disappointed though that it doesn't seem to have
docstrings on the functions, so I had
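As a concrete sketch of the resource.setrlimit() suggestion (the 4 GiB figure is an arbitrary illustrative cap): lowering the soft RLIMIT_AS limit makes an oversized allocation fail promptly with MemoryError instead of pushing the machine into swap.

```python
import resource

def cap_address_space(limit_bytes):
    """Lower the soft address-space limit; the hard limit is left alone."""
    _, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, hard))

cap_address_space(4 * 1024**3)  # ~4 GiB, an arbitrary cap

try:
    mylist = [0] * 12345678901234  # the list from the original post
    caught = False
except MemoryError:
    caught = True
print("MemoryError raised:", caught)
```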
Steven D'Aprano wrote:
As far as I know, ulimit ("user limit") won't help. It can limit the amount
of RAM available to a process, but that just makes the process start using
virtual memory more quickly.
ulimit -v is supposed to set the maximum amount of virtual
memory the process can use.
It
$ man limits.conf
Sent from my iPhone
On Aug 31, 2011, at 8:33 AM, Steven D'Aprano
wrote:
> Twice in a couple of weeks, I have locked up my PC by running a Python 2.5
> script that tries to create a list that is insanely too big.
>
> In the first case, I (stupidly) did something like:
>
> m
On 08/31/2011 02:40 PM, Chris Withers wrote:
On 31/08/2011 13:33, Steven D'Aprano wrote:
I am using Linux desktops; both incidents were with Python 2.5. Do newer
versions of Python respond to this sort of situation more gracefully?
Ironically, Windows does better here and dumps you out with a
Steven D'Aprano wrote:
> Twice in a couple of weeks, I have locked up my PC by running a Python 2.5
> script that tries to create a list that is insanely too big.
>
> In the first case, I (stupidly) did something like:
>
> mylist = [0]*12345678901234
>
> After leaving the machine for THREE DAYS
On Wed, Aug 31, 2011 at 8:40 AM, Chris Withers wrote:
>
> On 31/08/2011 13:33, Steven D'Aprano wrote:
>>
>> I am using Linux desktops; both incidents were with Python 2.5. Do newer
>> versions of Python respond to this sort of situation more gracefully?
>
> Ironically, Windows does better here and
Chris Withers wrote:
> On 31/08/2011 13:33, Steven D'Aprano wrote:
>> I am using Linux desktops; both incidents were with Python 2.5. Do newer
>> versions of Python respond to this sort of situation more gracefully?
>
> Ironically, Windows does better here and dumps you out with a
> MemoryError b
On 31/08/2011 13:33, Steven D'Aprano wrote:
I am using Linux desktops; both incidents were with Python 2.5. Do newer
versions of Python respond to this sort of situation more gracefully?
Ironically, Windows does better here and dumps you out with a
MemoryError before slowly recovering.
Linux
Twice in a couple of weeks, I have locked up my PC by running a Python 2.5
script that tries to create a list that is insanely too big.
In the first case, I (stupidly) did something like:
mylist = [0]*12345678901234
After leaving the machine for THREE DAYS (!!!) I eventually was able to get
to a
With this code:
#!/usr/bin/env python
from processing import Pool, Process, Manager, Lock
import sys
def slow_operation(i, q, lk):
    print "id: ", i
    # some really slow operation...
    print "acquiring lock..."
    try:
        lk.acquire(blocking=True)
        q.put("result")
        lk.
I'm trying to work out a multiple readers, one writer scenerio with a
bunch of objects. Basically "foo" objects are shared across processes.
Each foo object has a .lock variable, which holds a Mutex. In
creation, I'd like to call the SyncManager, get the dict() object
which hold object_ids->lock ma
ry.
>
> The basic function of the server is to serve a large dictionary -
> somewhat like a database. I have a couple theories as to why it locks
> up, but I'm not sure how to test them.
>
> Theories:
>Python is resizing the large dictionary
>Python is garbage
Paul Rubin wrote:
sturlamolden writes:
Python uses reference counting, not a generational GC like Java. A
Python object is destroyed when the refcount drops to 0. The GC only
collects cyclic references. If you create none, there are no GC delays
(you can in fact safely turn the GC off). Pyt
sturlamolden wrote:
On 9 Sep, 22:28, Zac Burns wrote:
Theories:
Python is resizing the large dictionary
Python is garbage collecting
Python uses reference counting, not a generational GC like Java.
The CPython implementation, that is. Jython, built on top of Java, uses
Java's GC. D
sturlamolden writes:
> Python uses reference counting, not a generational GC like Java. A
> Python object is destroyed when the refcount drops to 0. The GC only
> collects cyclic references. If you create none, there are no GC delays
> (you can in fact safely turn the GC off). Python does not sha
On 9 Sep, 22:28, Zac Burns wrote:
> Theories:
> Python is resizing the large dictionary
> Python is garbage collecting
Python uses reference counting, not a generational GC like Java. A
Python object is destroyed when the refcount drops to 0. The GC only
collects cyclic references. If you
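A small experiment (variable and class names are mine) makes that distinction concrete: plain refcounting frees an object the instant its count hits zero, while a reference cycle waits for the cycle collector.

```python
import gc
import weakref

class Node:
    pass

gc.disable()  # keep the demonstration deterministic

# Refcounting alone: the object dies as soon as the last reference goes.
n = Node()
alive = weakref.ref(n)
del n
immediate_dead = alive() is None        # destroyed with no collector involved

# A reference cycle keeps both refcounts above zero...
a, b = Node(), Node()
a.other, b.other = b, a
cyc = weakref.ref(a)
del a, b
cycle_alive_before_collect = cyc() is not None  # refcounting can't free it

gc.collect()                            # ...until the cycle detector runs
cycle_dead_after_collect = cyc() is None
gc.enable()

print(immediate_dead, cycle_alive_before_collect, cycle_dead_after_collect)
```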
On Wed, Sep 9, 2009 at 6:52 PM, David Stanek wrote:
> On Wed, Sep 9, 2009 at 4:28 PM, Zac Burns wrote:
>>
>> How would you suggest to figure out what is the problem?
>>
>
> I don't think you said your OS so I'll assume Linux.
>
> Sometimes it is more noise than value, but stracing the process may
On Wed, Sep 9, 2009 at 4:28 PM, Zac Burns wrote:
>
> How would you suggest to figure out what is the problem?
>
I don't think you said your OS so I'll assume Linux.
Sometimes it is more noise than value, but stracing the process may
shed light on what system calls are being made. This in turn may
> If it has been running continuously all that time then it might be that
> the dictionary has grown too big (is that possible?) or that it's a
> memory fragmentation problem. In the latter case it might be an idea to
> restart Python every so often; perhaps it could do that automatically
> during
function of the server is to serve a large dictionary -
somewhat like a database. I have a couple theories as to why it locks
up, but I'm not sure how to test them.
Theories:
Python is resizing the large dictionary
Python is garbage collecting
How would you suggest to figure out what i
to serve a large dictionary -
somewhat like a database. I have a couple theories as to why it locks
up, but I'm not sure how to test them.
Theories:
Python is resizing the large dictionary
Python is garbage collecting
How would you suggest to figure out what is the problem?
--
Za
G> Error is:
> >G> RuntimeError: Lock objects should only be shared between processes
> >G> through inheritance
>
> [code deleted]
>
> I guess you can't share locks (and probably other objects) between
> processes from a Pool. Maybe because there is no direct pa
een processes
>G> through inheritance
[code deleted]
I guess you can't share locks (and probably other objects) between
processes from a Pool. Maybe because there is no direct parent-child
relation or so (there is a separate thread involved). There is nothing
in the doc that explicitly
Hi All,
I am trying to understand multiprocessing, but I am getting a Runtime
error on the
code below. What am I missing or doing wrong?
Error is:
RuntimeError: Lock objects should only be shared between processes
through inheritance
I am using:
Python 2.6 (r26:66714, Nov 28 2008, 22:17:21)
[GCC
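One way around that RuntimeError, sketched here in modern Python 3 syntax with made-up function names: hand the Lock to the workers through the Pool initializer so the children inherit it, rather than passing it as a pickled task argument.

```python
import multiprocessing as mp

def init_worker(shared_lock):
    # Each worker stashes the inherited lock in a module-level global.
    global lock
    lock = shared_lock

def work(i):
    with lock:
        return i * i

if __name__ == "__main__":
    lock = mp.Lock()
    # Passing the lock via initializer/initargs lets the children inherit
    # it; passing it as a map() argument raises the RuntimeError above.
    with mp.Pool(2, initializer=init_worker, initargs=(lock,)) as pool:
        print(pool.map(work, range(5)))  # [0, 1, 4, 9, 16]
```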
>>> not be a consumer-producer scheme which I don't think a dict would be
>>> the natural choice anyway) they can block the writer from writing
>>> altogether.
>CB> You could implement some kind of fair ordering where whoever requests
>CB> a lock first is gua
> "Diez B. Roggisch" (DBR) wrote:
>>> This is a classical synchronization problem with a classical solution:
>>> You treat the readers as a group, and the writers individually. So you
>>> have a write lock that each writer has to acquire and release, but it is
>>> acquired only by the first r
rom writing
> altogether.
You could implement some kind of fair ordering where whoever requests
a lock first is guaranteed to get it first, but I can't think of a way
to do that without requiring all readers to acquire two locks.
Carl Banks
--
http://mail.python.org/mailman/listinfo/python-list
On Apr 6, 3:30 am, "Emanuele D'Arrigo" wrote:
> Python's approach with the GIL is both reasonable and disappointing.
> Reasonable because I understand how it can make things easier for its
> internals. Disappointing because it means that standard python cannot
> take advantage of the parallelism t
Python's approach with the GIL is both reasonable and disappointing.
Reasonable because I understand how it can make things easier for its
internals. Disappointing because it means that standard python cannot
take advantage of the parallelism that can more and more often be
afforded by today's com
This is a classical synchronization problem with a classical solution:
You treat the readers as a group, and the writers individually. So you
have a write lock that each writer has to acquire and release, but it is
acquired only by the first reader and released by the last one.
Therefore you need
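A sketch of that classical scheme in Python (class and method names are my own; note that writers can starve under a steady stream of readers):

```python
import threading

class ReadWriteLock:
    """Readers are treated as a group: the first reader in acquires the
    write lock, the last reader out releases it."""

    def __init__(self):
        self._readers = 0
        self._counter_lock = threading.Lock()
        self._write_lock = threading.Lock()

    def acquire_read(self):
        with self._counter_lock:
            self._readers += 1
            if self._readers == 1:    # first reader blocks writers
                self._write_lock.acquire()

    def release_read(self):
        with self._counter_lock:
            self._readers -= 1
            if self._readers == 0:    # last reader admits writers again
                self._write_lock.release()

    def acquire_write(self):
        self._write_lock.acquire()

    def release_write(self):
        self._write_lock.release()
```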
On Apr 6, 12:44 pm, Piet van Oostrum wrote:
> 3. See also http://code.activestate.com/recipes/465156/
Thank you for the useful suggestions Piet. In particular I just had a
look at the SharedLock class provided through the link above and it
seems to fit the bill quite nicely. I'll give it a go!
T
> "Emanuele D'Arrigo" (ED) wrote:
>ED> Hi everybody,
>ED> I'm having a threading-related design issue and I suspect it has a
>ED> name that I just don't know. Here's a description.
>ED> Let's assume a resource (i.e. a dictionary) that needs to be accessed
>ED> by multiple threads. A simple
On Apr 6, 7:49 am, "Diez B. Roggisch" wrote:
> The CPython-specific answer is that the GIL takes care of that for you
> right now anyway. So unless you plan for a distant future where some
> kind of swallows fly around that don't have a GIL, you are safe to
> simply read and write in threads witho
Emanuele D'Arrigo schrieb:
Hi everybody,
I'm having a threading-related design issue and I suspect it has a
name that I just don't know. Here's a description.
Let's assume a resource (i.e. a dictionary) that needs to be accessed
by multiple threads. A simple lock will do the job but in some
ci
Hi everybody,
I'm having a threading-related design issue and I suspect it has a
name that I just don't know. Here's a description.
Let's assume a resource (i.e. a dictionary) that needs to be accessed
by multiple threads. A simple lock will do the job but in some
circumstances it will create an
On Mar 16, 5:19 pm, Aaron Brady wrote:
> It's not one of the guarantees that Python
> makes. Operations aren't atomic by default, such as unless stated
> otherwise.
Well, in the documentation for RawArray:
"Note that setting and getting an element is potentially non-atomic –
use Array() instea
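A sketch of the difference (the counter layout is illustrative): multiprocessing.Array carries a lock that can make the read-modify-write of += safe, which RawArray does not.

```python
import multiprocessing as mp

def bump(arr, n):
    for _ in range(n):
        # += on a shared element is read-modify-write, i.e. not atomic;
        # the lock that Array carries serializes it.
        with arr.get_lock():
            arr[0] += 1

if __name__ == "__main__":
    arr = mp.Array("i", 1)  # unlike RawArray, this carries a lock
    procs = [mp.Process(target=bump, args=(arr, 1000)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(arr[0])  # 4000 with the lock; a RawArray could lose updates
```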
On Mar 15, 11:42 pm, Ahmad Syukri b wrote:
> On Mar 15, 6:19 am, Aaron Brady wrote:
>
>
>
> > Your code hung on my machine. The call to 'main()' should be in an
> > 'if __name__' block:
>
> > if __name__== '__main__':
> > main()
>
> > Is it possible you are just seeing the effects of the non
On Mar 15, 6:19 am, Aaron Brady wrote:
>
> Your code hung on my machine. The call to 'main()' should be in an
> 'if __name__' block:
>
> if __name__== '__main__':
> main()
>
> Is it possible you are just seeing the effects of the non-atomic
> '__iadd__' operation? That is, the value is read,
On Mar 14, 7:11 am, Ahmad Syukri bin Abdollah
wrote:
> I'm trying this on Python 3.0.1
> Consider the following code:
> """
> import multiprocessing as mp
>
> def jambu(b,i,gl):
>     for n in range(10):
>         with gl[i]:
>             b[i]+=2
>         with gl[3-i]:
>             b[3-i]-=1
I'm trying this on Python 3.0.1
Consider the following code:
"""
import multiprocessing as mp
def jambu(b,i,gl):
    for n in range(10):
        with gl[i]:
            b[i]+=2
        with gl[3-i]:
            b[3-i]-=1
def main():
    b = mp.RawArray('i',4)
    gl = []
    proc = []
    for i
"Diez B. Roggisch" writes:
> In python 2.5 and upwards, you can write this safer
> from __future__ import with_statement # only needed for py2.5
> with myInstance.lock:
>... critical section
Good point!
Paul Rubin wrote:
> reyjexter writes:
>> synchronize (myGroup) {
>> }
>>
>> but how do I do this in python? how can I name the lock that will be
>> used by the thread?
>
> You have to do it explicitly, for example with RLock:
>
> myInstance.lock = RLock()
> ...
> myInstance.lock.ac
reyjexter writes:
> synchronize (myGroup) {
> }
>
> but how do I do this in python? how can I name the lock that will be
> used by the thread?
You have to do it explicitly, for example with RLock:
myInstance.lock = RLock()
...
myInstance.lock.acquire()
... critical section ...
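To get something closer to Java's synchronized(myGroup), one can keep a registry of per-group locks (the registry and function names below are my own construction, not a standard API), using the with-statement form mentioned elsewhere in the thread:

```python
import threading

_registry_lock = threading.Lock()
_group_locks = {}

def lock_for(group):
    """Return the one lock for this group name, creating it on first use
    (guarded so two threads cannot create rival locks for one group)."""
    with _registry_lock:
        if group not in _group_locks:
            _group_locks[group] = threading.RLock()
        return _group_locks[group]

shared = {}

def update(group, key, value):
    with lock_for(group):  # all threads naming this group serialize here
        shared[key] = value

update("myGroup", "x", 1)
print(shared)
```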
Hello!
Is there a way to lock a certain block for threads in the same group?
In java this is done by like this:
synchronize (myGroup) {
}
but how do I do this in python? how can I name the lock that will be
used by the thread?
-rey
Greetings,
It seems that marshal.load will lock up the program if the file object
(in this case a pipe) is not ready to be read from - even if it's done
in a thread.
The use case here is in writing a scripting interface for perforce
using the -G option
(http://www.perforce.com/perforce/doc.072/manua
To who cares, I found out what my problem was.
Testing interactivity with Tk in a normal Python console gave proper
results, just like IPython. Also running "python -i" gives the
interactive behaviour I wanted. But running "python -i" from a subprocess
did not. I was startled, because it worked ou
I think my question was not very clear. I narrowed the problem down to
a reconstructable small example, consisting of a python script (a very
simple interpreter) and three lines to execute in it:
== start simple interpreter file ==
import os
import sys
import time
def run():
    while
yea... sorry... i just have all python stuff in the same folder and
messed up...
[EMAIL PROTECTED] wrote:
kalin> mailman has been locking one list out. the web interface just
kalin> hangs and it generates a bunch of locks. it seems that it can not
kalin> write to
Hi,
In wxpython, I made an interactive shell, which creates a remote python
subprocess
to do the interpreting. Communication is done via a pipe. The idea is that
the python
session is an actual process separate from the GUI, which has some
advantages,
like I can have multiple such shells in my app
You might also want to paste the output into a pastbin such as dpaste.com
On Wed, Sep 17, 2008 at 10:58 AM, <[EMAIL PROTECTED]> wrote:
>
>kalin> mailman has been locking one list out. the web interface just
>kalin> hangs and it generates a bunch of locks. it
kalin> mailman has been locking one list out. the web interface just
kalin> hangs and it generates a bunch of locks. it seems that it can not
kalin> write to a log but not sure which one. errors are like:
...
You'd probably be better off asking about Mailman prob
hi all...
mailman has been locking one list out.
the web interface just hangs and it generates a bunch of locks. it seems
that it can not write to a log but not sure which one. errors are like:
Sep 17 05:09:12 2008 (18481) musiclist.lock lifetime has expired, breaking
Sep 17 05:09:12 2008
On Mon, 08 Oct 2007 23:00:46 GMT, John Nagle <[EMAIL PROTECTED]> wrote:
>Chris Mellon wrote:
>> On 10/7/07, Michel Albert <[EMAIL PROTECTED]> wrote:
>>> On Oct 6, 4:21 am, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote:
En Fri, 05 Oct 2007 04:55:55 -0300, exhuma.twn <[EMAIL PROTECTED]>
es
Chris Mellon wrote:
> On 10/7/07, Michel Albert <[EMAIL PROTECTED]> wrote:
>> On Oct 6, 4:21 am, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote:
>>> En Fri, 05 Oct 2007 04:55:55 -0300, exhuma.twn <[EMAIL PROTECTED]> escribió:
>>>
[...] What I found
is that "libshout" is blocking, which sho
On 10/7/07, Michel Albert <[EMAIL PROTECTED]> wrote:
> On Oct 6, 4:21 am, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote:
> > En Fri, 05 Oct 2007 04:55:55 -0300, exhuma.twn <[EMAIL PROTECTED]> escribió:
> >
> > > [...] What I found
> > > is that "libshout" is blocking, which should be fine as the wh
On Oct 6, 4:21 am, "Gabriel Genellina" <[EMAIL PROTECTED]> wrote:
> En Fri, 05 Oct 2007 04:55:55 -0300, exhuma.twn <[EMAIL PROTECTED]> escribió:
>
> > [...] What I found
> > is that "libshout" is blocking, which should be fine as the whole
> > thing runs in its separate thread. But the application
En Fri, 05 Oct 2007 04:55:55 -0300, exhuma.twn <[EMAIL PROTECTED]> escribió:
> [...] What I found
> is that "libshout" is blocking, which should be fine as the whole
> thing runs in its separate thread. But the application hangs
> nevertheless while streaming. This effectively blocks out the othe
Unfortunately I don't have the code at hand on this box, but maybe
someone can give me a nudge in the right direction.
Some background: Last year I began to write a jukebox system that
provided a Telnet-like interface. I wrote this using "socket". Later
along the path I discovered Twisted, and due
On Wed, 14 Mar 2007 07:59:57 -0500
[EMAIL PROTECTED] wrote:
#>
#> Slawomir> When I execfile a file which contains a syntax error, the file
#> Slawomir> becomes locked and stays this way all the way until I exit the
#> Slawomir> interpreter (I am unable to delete it, for example). I ha
Slawomir> When I execfile a file which contains a syntax error, the file
Slawomir> becomes locked and stays this way all the way until I exit the
Slawomir> interpreter (I am unable to delete it, for example). I have
Slawomir> tried but failed to find any way to unlock the file... I
Hello,
When I execfile a file which contains a syntax error, the file becomes
locked and stays this way all the way until I exit the interpreter (I am
unable to delete it, for example). I have tried but failed to find any
way to unlock the file... Is this a bug in Python?
Is there *any* way to un
puter lockup).
Give Yview a parent, problem solved.
Changed pack to grid anyway.
jim-on-linux
http://www.inqvista.com
On Wednesday 25 October 2006 23:05, you wrote:
> > But, when I call it from another module it
> > locks
>
> methinks this "other module" has t
> But, when I call it from another module it locks
methinks this "other module" has the answer.
jim-on-linux wrote:
> py help,
>
> The file below will run as a stand alone file.
> It works fine as it is.
>
> But, when I call it from another module it locks
>
py help,
The file below will run as a stand alone file.
It works fine as it is.
But, when I call it from another module it locks
my computer, The off switch is the only
salvation.
This module when run as a stand alone, it will
open a jpeg image and add a vertical and
horizontal scrollbar
Update
The problem turned out to be the BIOS of the PC we were using. The
Python test program has been running fine for 5 days now (after we
upgraded the system BIOS) and is still running fine.
Sorry, I do not have any information as to what was fixed in the BIOS.
Also, I do not know exactly who
On 2006-05-10, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Grant
>
>> You might want to run some memory tests.
>
> We have multiple identical boxes and they all have the same problem.
Maybe whoever built them got a bad batch of RAM. Or maybe they
just used RAM with the wrong specs. It doesn't
On May 10, 2006, at 5:39 PM, [EMAIL PROTECTED] wrote:
> Grant
>
>> You might want to run some memory tests.
>
> We have multiple identical boxes and they all have the same problem.
>
> Olaf
They might all have flaky memory - I would follow the other poster's
advice and run memtest86 on them.
Grant
> You might want to run some memory tests.
We have multiple identical boxes and they all have the same problem.
Olaf
On 2006-05-10, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> So to make sure that no other software is involved we took the
> PC with the problem, reset the BIOS to its defaults (which
> also enables hyper-threading) and installed Windows XP
> Professional (SP2) on it (but no further driver softw
This is an update on what we found so far:
We located one other PC that was not identical to the PC with the
problem. So we installed Windows XP on it and ran the Python test
program. It ran fine all night w/o locking up.
Here is what winmsd reports for this PC:
winmsd:
OS Name: Microsoft Windo
I haven't try the files yet. But have got a similar problem before. The
situation is nearly the same.
Always at random time , it reported that the memory read or write
violently.But I use Windows 2000(Python 2.3, wxPython 2.4). Windows
issue says 2000 doesn't support HP, so I simply turn it off. I
[EMAIL PROTECTED] wrote:
> Thanks for trying and reporting the results. Both of you and Tim and
> Dave have run the .py program now (all on hyper-threaded CPUs) and
> none of you were able to reproduce the problem.
>
> So that indicates that there is something special about our PC. We
> are pla
No lockup for me after 28 hours.
Of course, I don't have HT. It's a dual Opteron system with WinXP.
Robin and Roel
Thanks for trying and reporting the results. Both of you and Tim and
Dave have run the .py program now (all on hyper-threaded CPUs) and none
of you were able to reproduce the problem.
So that indicates that there is something special about our PC. We are
planing to re-install Win
Dave send me the below as an email. Olaf
Hi Olaf,
I'm running your test for you - it's been going for about an hour now
and is continuing to generate output[1].
c:\>py
Python 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)]
on win32
Type "help", "copyright", "credits" or "license"
[EMAIL PROTECTED] schreef:
> Below are 2 files. The first is a Python program that isolates the
> problem within less than 1 hour (often just a few minutes). The second
> is a C++ program that shows that the Win32 Sleep() function works as
> expected (ran from Friday afternoon until Monday mornin
[EMAIL PROTECTED] wrote:
> Below are 2 files. The first is a Python program that isolates the
> problem within less than 1 hour (often just a few minutes). The second
> is a C++ program that shows that the Win32 Sleep() function works as
> expected (ran from Friday afternoon until Monday morning)
[EMAIL PROTECTED] wrote:
> Tried importing win32api instead of time and using the
> win32api.GetTickCount() and win32api.Sleep() methods.
What about win32api.SleepEx? What about
WaitForMultipleObjects
WaitForMultipleObjectsEx
WaitForSingleObject
WaitForSingleObjectEx
when the object is not expe
Tim
Many thanks for trying and reporting the details of your environment.
All our hyper-threading PC are identical. However, we identified one
that is different and we are installing Windows XP on it now ...
My hope is that other people will try this, too.
Olaf
[EMAIL PROTECTED]
> Below are 2 files. The first is a Python program that isolates the
> problem within less than 1 hour (often just a few minutes).
It does not on my box. I ran that program, from a DOS shell, using
the released Windows Python 2.4.3. After an hour, it was still
printing. I lef
Below are 2 files. The first is a Python program that isolates the
problem within less than 1 hour (often just a few minutes). The second
is a C++ program that shows that the Win32 Sleep() function works as
expected (ran from Friday afternoon until Monday morning).
Note, the Python programs hang
Hello,
I have been developing a blocking socket application with threading. The
main thread handles connections and inserts them into python's
protected queue as jobs for the thread pool to handle.
There is not much information on threading.local except that it states
that it maintains variable uniq
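For what it's worth, the core guarantee of threading.local can be demonstrated in a few lines (the worker and result names are mine): every thread assigns the same attribute, yet each reads back only its own value.

```python
import threading

local = threading.local()   # each thread gets its own attribute namespace

results = {}

def worker(n):
    local.value = n           # same attribute name in every thread...
    results[n] = local.value  # ...but each thread sees only its own value

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)
```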