Changes by Charles-François Natali :
Removed file: http://bugs.python.org/file21696/gc_trim.diff
___
Python tracker
<http://bugs.python.org/issue11849>
___
___
Python-bug
Changes by Charles-François Natali :
Removed file: http://bugs.python.org/file21858/pymem.diff
Python tracker
<http://bugs.python.org/issue11849>
Charles-François Natali added the comment:
> To revive this issue, I tried to write a unit test to verify the behaviour.
> Unfortunately, the test doesn't work and I don't understand why. I hope
> someone here is more enlightened than me...
The semantics of listen's bac
Charles-François Natali added the comment:
# A lock taken from the current thread should stay taken in the
# child process.
Note that I'm not sure of how to implement this.
After a fork, even releasing the lock can be unsafe, it must be re-initialized,
see following comment in gl
Charles-François Natali added the comment:
> Yes, we would need to keep track of the thread id and process id inside
> the lock. We also need a global variable of the main thread id after
> fork, and a per-lock "taken" flag.
>
> Synopsis:
>
> def _reinit_if_nee
Charles-François Natali added the comment:
Oops, for linuxthreads, you should of course read "different PIDs", not "same
PID".
--
___
Python tracker
<http://bu
Charles-François Natali added the comment:
> Also that addresses the issue of "two threads inside different malloc
> implementations at the same time": it is currently not allowed with
> PyMem_Malloc.
>
That's not true.
You can perfectly have one thread inside PyM
Charles-François Natali added the comment:
>> - what's current_thread_id ? If it's thread_get_ident (pthread_self),
>> since TID is not guaranteed to be inherited across fork, this won't
>> work
>
> Ouch, then the approach I'm proposing is probably
Charles-François Natali added the comment:
Please disregard my comment on PyEval_ReInitThreads and _after_fork:
it will of course still be necessary, because it does much more than
just reinitializing locks (e.g. stop threads).
Also, note that both approaches don't handle synchroniz
Charles-François Natali added the comment:
> Thanks for the tip. I added the unit test and uploaded my final patch
> (which includes all changes).
A couple comments (note that I'm not entitled to accept or commit a patch, so
feel free to ignore them if I'm just being a pain).
Charles-François Natali added the comment:
> You can not pickle individual objects larger than 2**31.
Indeed, but that's not what's happening here, the failure occurs with much
smaller objects (also, note the OP's "cPickle is perfectly capable of pickling
these ob
Changes by Charles-François Natali :
Removed file: http://bugs.python.org/file21901/pending_signals-2.patch
Python tracker
<http://bugs.python.org/issue8407>
Charles-François Natali added the comment:
Oops.
Victor, my mouse got stuck and I mistakenly removed your pending_signals-2
patch. I'm really sorry about this, could you re-post it?
To try to make up for this, a small comment:
In signal_sigwait, at the end of the function, you do
Charles-François Natali added the comment:
> I'll attach 11877.4.diff
A couple comments:
static PyObject *
posix_fsync(PyObject *self, PyObject *args, PyObject *kwargs)
{
    PyObject *retval = NULL;
    auto PyObject *fdobj;
    auto int full_fsync = 1;
Why are you using the "
Charles-François Natali added the comment:
It's a duplicate of issue #11432: http://bugs.python.org/issue11432
--
nosy: +neologix
resolution: -> out of date
status: open -> closed
superseder: -> webbrowser.open on unix fails.
___
Charles-François Natali added the comment:
Could someone explain to me what the risk is on a case-insensitive filesystem?
Since files are created with O_CREAT|O_EXCL, I don't see where the problem is.
--
nosy: +neologix
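The O_CREAT|O_EXCL behaviour mentioned above can be sketched as follows (a minimal illustration assuming a POSIX filesystem; the directory and file name are hypothetical, not from the module under discussion):

```python
import errno
import os
import tempfile

# With O_CREAT | O_EXCL, creation fails with EEXIST if the name is already
# taken, regardless of the filesystem's case rules, so a random-name
# collision is detected instead of silently reusing an existing file.
path = os.path.join(tempfile.mkdtemp(), "tmpname")  # hypothetical name

fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
os.close(fd)

try:
    # Second exclusive create of the same name must fail.
    os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
    raise AssertionError("second exclusive create unexpectedly succeeded")
except OSError as e:
    assert e.errno == errno.EEXIST
os.unlink(path)
```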
Charles-François Natali added the comment:
@Nick
I fully agree with you, but my point was that it doesn't make it less safe on
case-insensitive filesystems.
Apart from that, it's of course better to increase the length of the random
Charles-François Natali added the comment:
Hmm.
I think this was probably fixed by Gregory in issue #6643 (it's not in Python
2.7.1).
Could you try with Python 3.2, or a current snapshot?
--
nosy: +neologix
Charles-François Natali added the comment:
The patch looks fine to me: you just need to find someone interested
in reviewing and committing it (didn't find anyone listed as expert
for the socket module).
--
Charles-François Natali added the comment:
Steffen, you changed the default to doing a "full sync" in your last patch:
while I was favoring that initially, I now agree with Ronald and Antoine, and
think that we shouldn't change the default behaviour. The reason being that
A
Charles-François Natali added the comment:
There's just one thing I'm concerned with.
People using context managers tend to expect the __exit__ method to
perform cleanup actions and release corresponding resources if
necessary, for example closing the underlying file or socket.
So m
New submission from Charles-François Natali :
Lib/test/test_socket.py uses a custom _get_unused_port to return a port which
will likely be available for binding in some tests.
The same functionality is already provided by support.find_unused_port, let's
make use of it.
Patch att
Charles-François Natali added the comment:
I'm re-opening this issue, since Gregory agrees to change the current behaviour.
Patch attached (along with test and documentation update).
--
components: +Library (Lib)
keywords: +patch
resolution: rejected ->
status: closed ->
Charles-François Natali added the comment:
Calling fsync on a file descriptor referring to a tty doesn't make much sense.
On Linux, this fails with EINVAL:
$ python -c 'import os; os.fsync(1)'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
OSError: [Errno 22
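The EINVAL failure generalizes to other special files with no backing storage. A minimal sketch (assuming Linux semantics, where fsync() on a pipe fails in the same way as on a tty; the helper name is illustrative):

```python
import errno
import os

def try_fsync(fd):
    """Return None on success, or the errno when fsync isn't supported."""
    try:
        os.fsync(fd)
        return None
    except OSError as e:
        return e.errno

# A pipe, like a tty, has no backing storage: on Linux, fsync() on such a
# special file fails with EINVAL (the failure described above for fd 1).
r, w = os.pipe()
assert try_fsync(w) in (errno.EINVAL, errno.EROFS)
os.close(r)
os.close(w)
```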
Charles-François Natali added the comment:
> and if they do they thus really strive for data integrity, so call
> fsync() as a fallback for the security which Apple provides.
Why?
If I ask a full sync and it fails, I'd rather have an error returned so that I
can take the appropria
Charles-François Natali added the comment:
> If the C signal handler is called twice, the Python signal handler is only
> called once.
It's not the only shortcoming of the current implementation regarding
(real-time) signals. Another one is that they're delivered out-o
Charles-François Natali added the comment:
> Evaluate Python code in a signal handler is really not a good idea!
I know, you're limited to async-safe functions, among other things :-)
> because of the GIL, I don't think that we can do better. But with the
> GIL of Pyt
Charles-François Natali added the comment:
Antoine, I've got a couple questions concerning your patch:
- IIUC, the principle is to create a pipe for each worker process, so that when
the child exits the read-end - sentinel - becomes readable (EOF) from the
parent, so you know that a
Charles-François Natali added the comment:
> Hi,
>
Hello Nir,
> Option (2) makes sense but is probably not always applicable.
> Option (1) depends on being able to acquire locks in locking order, but how
> can we determine correct locking order across libraries?
>
There a
Charles-François Natali added the comment:
> Not exactly. The select is done on the queue's pipe and on the workers'
> fds *at the same time*. Thus there's no race condition.
You're right, I missed this part, it's perfectly safe.
But I think there's a pr
Charles-François Natali added the comment:
Something's missing in all the implementations presented:
to make sure that the new version of the file is available after a crash, fsync
must be called on the containing directory after the rename.
--
nosy: +neo
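The durable-rename sequence described above can be sketched like this (a minimal illustration assuming POSIX semantics; the atomic_write helper and temp-file naming are hypothetical, not taken from the patches under discussion):

```python
import os

def atomic_write(path, data):
    """Atomically replace `path` with `data`, durable across crashes."""
    tmp = path + ".tmp"  # hypothetical temp name for illustration
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())      # flush file contents to stable storage
    os.rename(tmp, path)          # atomic replacement of the old version
    # fsync the containing directory so the rename itself survives a crash
    dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dirfd)
    finally:
        os.close(dirfd)
```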
Charles-François Natali added the comment:
Just a detail, but with the last version, select is retried with the full
timeout (note that the signal you're the most likely to receive is SIGCHLD and
since it's ignored by default it won't cause EINTR, so this shouldn't happen
Charles-François Natali added the comment:
Interesting.
There's something weird with the first child:
=== Child #1 =
Thread 0x0445:
Thread 0x0444:
File "/home/haypo/cpython/Lib/threading.py", line 237 in wait
waiter.acquire()
File "/home/haypo/cpyt
Charles-François Natali added the comment:
This makes sense.
I was suspecting a system limit exhaustion, maybe OOM or maximum number of
threads, something like that.
But at least on Linux, in OOM condition, the process would either get nuked by
the OOM-killer, or pthread_create would bail out
Charles-François Natali added the comment:
Hello Steffen,
First, thanks for testing this on OS-X: I only have access to Linux
systems (I tested both the semaphore and the emulated semaphore
paths).
If I understand correctly, the patch works fine with the default build
option on OS-X.
Then
Charles-François Natali added the comment:
> Indeed, it isn't, Pipe objects are not meant to be safe against multiple
> access. Queue objects (in multiprocessing/queues.py) use locks so they
> are safe.
But if the write to the Pipe is not atomic, then the select isn't safe.
Charles-François Natali added the comment:
> a) We know the correct locking order in Python's std libraries so the problem
> there is kind of solved.
I think that you're greatly under-estimating the complexity of lock ordering.
If we were just implementing a malloc implemen
Charles-François Natali added the comment:
> if you used the pipe approach you'd need to deal with the case of the
> write blocking (or failing if nonblocking) when the pipe buffer is full.
Well, a pipe is 64K on Linux (4K on older kernels). Given that each signal
received consum
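The pipe-based signal wakeup under discussion can be sketched as follows (a rough illustration of the self-pipe pattern, assuming Linux; in real code the write would happen inside a C signal handler, and the pipe sizes quoted above bound how many pending wakeups fit):

```python
import errno
import os

# One byte is written per signal received; with a 64K (or even 4K) pipe
# buffer, the non-blocking write only fails once the reader is severely
# backlogged, and EAGAIN can then simply be ignored.
r, w = os.pipe()
os.set_blocking(w, False)
try:
    os.write(w, b"\0")      # what the C-level signal handler would do
except OSError as e:
    if e.errno != errno.EAGAIN:
        raise               # EAGAIN just means the buffer is full

# The main loop selects/reads on the other end to learn a signal arrived.
assert os.read(r, 1) == b"\0"
os.close(r)
os.close(w)
```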
Changes by Charles-François Natali :
Removed file: http://bugs.python.org/file21991/reinit_locks.diff
Python tracker
<http://bugs.python.org/issue6721>
Charles-François Natali added the comment:
> Is it possible the following issue is related to this one?
It's hard to tell, the original report is rather vague.
But the comment about the usage of the maxtasksperchild argument reminds me of
issue #10332 "Multiprocessing maxtasksper
Charles-François Natali added the comment:
(I'm not sure Rietveld sent the message, so I'm posting it here; sorry in case
of a duplicate.)
Steffen, I've made a quick review of your patch, in case you're interested.
I think that this functionality can be really useful to some people, a
New submission from Charles-François Natali :
Multiprocessing's MapResult and ApplyResult use a notification mechanism to
signal callers when the underlying value is available.
Instead of re-inventing the wheel, we could use threading.Event instead: this
leads to cleaner and simpler code
Charles-François Natali added the comment:
Closing as duplicate of issue #9205.
--
nosy: +neologix
resolution: -> duplicate
status: open -> closed
superseder: -> Parent process hanging in multiprocessing if children terminate
une
Charles-François Natali added the comment:
The sleep is too short:
def f():
    with cond:
        result = cond.wait_for(lambda : state==4)

for i in range(5):
    time.sleep(0.01)
    with cond:
        state += 1
        cond.notify()
If state
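A hedged sketch of the wait_for pattern being exercised, with an explicit timeout so a missed notification cannot hang the run (names and values are illustrative, not the actual test code):

```python
import threading

cond = threading.Condition()
state = 0
results = []

def f():
    with cond:
        # wait_for re-checks the predicate on every wakeup, so spurious
        # or early notifies are harmless; the timeout bounds a hang.
        results.append(cond.wait_for(lambda: state == 4, timeout=5))

t = threading.Thread(target=f)
t.start()
for i in range(4):
    with cond:
        state += 1
        cond.notify()
t.join()
assert results == [True]
```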
Charles-François Natali added the comment:
It's probably a libc bug, see
http://sources.redhat.com/bugzilla/show_bug.cgi?id=12453
Basically, when libraries are dynamically loaded in an interleaved way, this
can lead to TLS being returned uninitialized, hence leading to a segfault
Charles-François Natali added the comment:
Here's a one-liner patch using the later approach (that way we're
sure the test won't hang).
--
keywords: +patch
Added file: http://bugs.python.org/file22015/wait_for_race.diff
Charles-François Natali added the comment:
Under Linux, child processes are created with fork(), so they're run with the
exact same environment as the parent process (among which sys.flags.optimize).
I don't know Windows at all, but since I've heard it doesn't have fork(
Charles-François Natali added the comment:
> Importing uuid before importing the other modules does not result in Seg Fault
Alright.
In that case, I'm closing this bug as invalid.
Until distributions start shipping their glibc with this patch, the workaround
is simply to import uu
Charles-François Natali added the comment:
Here's a patch:
- those functions now accept and return str, not bytes arrays
- some of them were not declared static, it's now fixed
- use PyErr_SetFromErrno when errno is set
- add tests (return type, nonexistent interface name/index a
Charles-François Natali added the comment:
> In python3, one can still use fcntl(f.fileno(), FD_SET, FD_CLOEXEC)
Note that it's not atomic.
--
nosy: +neologix
Python tracker
<http://bugs.python.org
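The non-atomic two-step that fcntl requires can be sketched as follows (illustrative only; the race being pointed out is that another thread may fork() and exec() between the two calls, leaking the descriptor):

```python
import fcntl
import os

fd = os.open("/dev/null", os.O_RDONLY)

# Step 1: read the current descriptor flags.
flags = fcntl.fcntl(fd, fcntl.F_GETFD)
# Step 2: set FD_CLOEXEC. A fork()+exec() happening between the two
# fcntl() calls inherits the descriptor without close-on-exec.
fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)

assert fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC
os.close(fd)
```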
Charles-François Natali added the comment:
Here's a patch adding O_CLOEXEC to the os module, with test. This patch makes
it possible to open and set a FD CLOEXEC atomically.
O_CLOEXEC is part of POSIX.1-2008, supported by the Linux kernel since 2.6.23
and has been committed recent
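A minimal sketch of what the patch enables (assuming os.O_CLOEXEC is available, i.e. Python 3.3+ on Linux >= 2.6.23 or another POSIX.1-2008 system):

```python
import fcntl
import os

# Open a file with close-on-exec set atomically: there is no window
# between open() and a later fcntl() call during which another thread's
# fork()+exec() could inherit the descriptor.
fd = os.open("/dev/null", os.O_RDONLY | os.O_CLOEXEC)
try:
    cloexec = bool(fcntl.fcntl(fd, fcntl.F_GETFD) & fcntl.FD_CLOEXEC)
    assert cloexec
finally:
    os.close(fd)
```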
Charles-François Natali added the comment:
> Using spawn_python() to check that os.O_CLOEXEC flag is correctly set seems
> overkill. Why not just testing fcntl.fcntl(f.fileno(), fcntl.F_GETFL) &
> FD_CLOEXEC)?
Because I couldn't find a place where the CLOEXEC flag was fu
Charles-François Natali added the comment:
Hello Christophe,
First and foremost, I think that the FD_CLOEXEC approach is terminally broken,
as it should have been the default in Unix. Now, we're stuck with this bad
design.
But we can't simply change the default to FD_CLOEXEC, for t
Charles-François Natali added the comment:
> To exclude races (in concurrent threads), this two ops should be done under
> lock (GIL?)
That won't work, because open(), like other slow syscalls, is called without
the GIL held. Furthermore, it wouldn't be atomic anyway (imagin
Changes by Charles-François Natali :
Removed file: http://bugs.python.org/file22025/socket_if.diff
Python tracker
<http://bugs.python.org/issue1746656>
Charles-François Natali added the comment:
> You use UTF-8 encoding:
Here's an updated patch taking your comments into account (I'm really
blissfully ignorant when it comes to encoding issues, so I hope it will be OK
this time).
--
Added file: http://bugs.python.
Charles-François Natali added the comment:
> That's not the intention (that's a gordian knot I *will* be keeping
> a
> safe distance from). The intention is to create a saner default situation
> for most python programs.
I understand what you're saying, and I agre
Charles-François Natali added the comment:
> @neologix: You can commit it into Python 3.3. Tell me if you need
> help ;-)
My first commit :-)
What's the next step?
Can this issue be closed, or should I wait until the tests pass on
so
Changes by Charles-François Natali :
--
status: open -> closed
Python tracker
<http://bugs.python.org/issue1746656>
Changes by Charles-François Natali :
Removed file: http://bugs.python.org/file16758/urllib_redirect.diff
Python tracker
<http://bugs.python.org/issue8035>
Charles-François Natali added the comment:
Those URLs don't trigger the problem anymore, but AFAICT from the code, this
problem is still present in py3k.
Here's an updated patch.
--
Added file: http://bugs.python.org/file22040/urllib_red
Charles-François Natali added the comment:
It's actually an obvious case of heap fragmentation due to long-lived chunks
being realloc()ed to a smaller size. Some malloc implementations can choke on
this (e.g. OS-X's malloc is known to not shrink blocks when realloc() is called
with
Changes by Charles-François Natali :
Removed file: http://bugs.python.org/file16747/imaplib_read.diff
Python tracker
<http://bugs.python.org/issue1441530>
Charles-François Natali added the comment:
Thanks for reporting this, the current behaviour is clearly wrong. The child
process doesn't need to - and shouldn't - inherit the server socket.
The customary idiom when writing such code is to close the new socket (well, in
TCP) in the pare
Charles-François Natali added the comment:
$ cat test_sock.py
import socket
import fcntl
with socket.socket(socket.AF_INET, socket.SOCK_STREAM|socket.SOCK_CLOEXEC) as s:
    print(bool(fcntl.fcntl(s, fcntl.F_GETFD) & fcntl.FD_CLOEXEC))
$ ./python test_sock.py
Charles-François Natali added the comment:
> Patch looks ok. Is 3.x also affected? The I/O stack changed quite a bit in
> 3.x.
I think it's not affected, but I can't reproduce this behaviour with
glibc/eglibc, so don't just take my word for it.
The reason is that in
Charles-François Natali added the comment:
In the buffered reader case, the result buffer is actually pre-allocated with
the total size, making fragmentation even less likely.
--
Python tracker
<http://bugs.python.org/issue1441
Charles-François Natali added the comment:
Digging a little deeper, here's the conclusion:
- with py3k, fragmentation is less likely: the buffered reader returned by
makefile() ensures that we can allocate only one result buffer for the total
number of bytes read() (thanks to soc
Changes by Charles-François Natali :
Added file: http://bugs.python.org/file22063/imaplib_recv_27.diff
Python tracker
<http://bugs.python.org/issue1441530>
Changes by Charles-François Natali :
Removed file: http://bugs.python.org/file22044/imaplib_read.diff
Python tracker
<http://bugs.python.org/issue1441530>
Changes by Charles-François Natali :
Removed file: http://bugs.python.org/file22051/imaplib_ssl_makefile.diff
Python tracker
<http://bugs.python.org/issue1441
Charles-Francois Natali added the comment:
> it may be very convenient and the performance overhead may be barely
> noticeable.
Convenient for what?
If the remote end doesn't send a FIN or RST packet, then the TCP/IP stack has
no way of knowing the remote end is down.
Successful
Charles-Francois Natali added the comment:
> but sometimes socket.close will send TCP RST to disconnect the telnet and
> with wrong sequence number
This is called a "half-duplex" TCP close sequence. Your application is
probably closing the socket while there are still dat
Charles-Francois Natali added the comment:
From the documentation:
"This function returns random bytes from an OS-specific randomness source."
In your case, this problem shows up because of an OS misconfiguration: in that
case, the behaviour is undefined (not much Python ca
Charles-Francois Natali added the comment:
> Martin v. Löwis added the comment:
>
> I wonder why reading from /dev/urandom has a loop in the first place, though
> - isn't it guaranteed that you can read as many bytes as you want in one go?
> This goes back to #934711,
Charles-Francois Natali added the comment:
> Martin v. Löwis added the comment:
>
>> "It's a bug in random.c that doesn't check for signal pending inside the
>> read(2) code, so you have no chance to kill the process via signals until
>> the read(2) s
Charles-Francois Natali added the comment:
> python mmap objects issue msync() in destructor even if mapping was created
> with prot=mmap.PROT_READ only
Actually, the call to msync(2) from destructor has been removed altogether in
py3k. See http://bugs.python.org/issue2643.
The patc
Charles-Francois Natali added the comment:
> I have changed the title of the bug. This describes the problem more precisely.
Yes, it's not quite the same problem.
> I think, that flush() should be no-op if mapping is read-only.
This has already been done for py3k. See
http://svn.pyth
Charles-Francois Natali added the comment:
It's due to the way the python interpreter handles signals: when the signal is
received, python runs a stub signal handler that just sets a flag indicating
that the signal has been received: the actual handler is executed later,
synchron
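The deferred-handler behaviour described above can be sketched like this (illustrative, assuming a Unix platform with SIGUSR1; the C-level stub only sets a flag, and the Python handler runs later in the main thread):

```python
import os
import signal

hits = []

def handler(signum, frame):
    # This Python-level handler does NOT run inside the C signal handler;
    # it runs later, between bytecode instructions, in the main thread.
    hits.append(signum)

signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)
# By the time the next Python instruction executes, the interpreter has
# noticed the flag set by the C stub and run the Python handler.
assert hits == [signal.SIGUSR1]
```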
Charles-Francois Natali added the comment:
> Antoine Pitrou added the comment:
>
> Charles-François' analysis seems to be right. Note that the actual issue
> here is that read() always succeeds, returning a partial result (because
> you're executing a command,
Charles-Francois Natali added the comment:
It's now fixed in py3k, FD_CLOEXEC is set atomically (using pipe2 if available,
otherwise it still has the GIL protection). See
http://svn.python.org/view?view=rev&revision=87207
--
nosy: +neologi
Charles-Francois Natali added the comment:
> I cannot figure out why the closesocket's graceful
> shutdown is waiting for the Popen command to complete.
It doesn't. Your main process closes its socket. You could see it with a
netstat/lsof (don't know under Windows).
The proble
Charles-Francois Natali added the comment:
This is normal behaviour: stdout is normally line buffered (_IOLBF) only if
connected to a tty.
When it's not connected to a tty, it's fully buffered (_IOFBF). This is done on
purpose for performance reasons. To convince yourself, run
$ c
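The buffering difference can be illustrated with io's buffered layer (a sketch using io.BufferedWriter as a stand-in for a fully buffered stdout, not the C stdio machinery itself):

```python
import io

raw = io.BytesIO()
# A BufferedWriter behaves like a fully buffered (_IOFBF) stream: writes
# stay in the buffer until it fills up or flush() is called explicitly.
w = io.BufferedWriter(raw, buffer_size=4096)

w.write(b"hello")
assert raw.getvalue() == b""       # nothing has reached the "file" yet
w.flush()                          # what a tty-attached stream would do
assert raw.getvalue() == b"hello"  # per line automatically
```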
Charles-Francois Natali added the comment:
As explained by Jean-Paul, it's due to the fact that the closed TCP sockets
spend some time in TIME-WAIT state before being deallocated.
On Linux, this issue can be more or less worked-around using sysctl
(net.ipv4.tcp_tw_{reuse,recycle}).
Charles-Francois Natali added the comment:
It's a dupe of http://bugs.python.org/issue8035.
By the way, it works with 2.7 because urllib used HTTP 1.0 by default, and in
py3k it now uses HTTP 1.1.
And from what I understood (but I'm by no means an HTTP expert), in HTTP 1.0 the
Charles-Francois Natali added the comment:
I think this issue is related to http://bugs.python.org/issue11158, which is in
turn related to http://bugs.python.org/issue6721 (Locks in python standard
library should be sanitized on fork).
When a mutex created by a parent process is used from
Charles-Francois Natali added the comment:
I'm not sure that releasing the mutex is enough, it can still lead to a
segfault, as is probably the case in this issue:
http://bugs.python.org/issue11148
Quoting the pthread_atfork man page:
To understand the purpose of pthread_atfork, recall
Charles-Francois Natali added the comment:
> I was clearly wrong about a release being done in the child being the right
> thing to do (issue6643 proved that, the state held by a lock is not usable to
> another process on all platforms such that release even works).
Yeah, apparentl
Charles-Francois Natali added the comment:
Are you using a default gateway?
Are you sure this gateway supports multicast?
See for example http://www.sockets.com/err_lst1.htm#WSAENETUNREACH :
"""
WSAENETUNREACH (10051) Network is unreachable.
Berkeley description: A socke
Charles-Francois Natali added the comment:
The problem is due to the way urllib closes a FTP data transfer.
The data channel is closed, and a shutdown hook is called that waits for a
message on the control channel. But in that case, when the data connection is
closed while the transfer is in
Changes by Charles-Francois Natali :
Added file: http://bugs.python.org/file20815/urllib_ftp_close_27.diff
Python tracker
<http://bugs.python.org/issue11199>
Charles-Francois Natali added the comment:
> I just tested the patch under Python 2.6. It doesn't seem to solve the
> problem.
Are you sure the patch applied cleanly?
I tested both on 3.2 and 2.7, and it fixed the problem for me.
If not, could you submit a tcpd
Charles-Francois Natali added the comment:
> rg3 added the comment:
>
> I have to correct myself. I applied the patch manually to my Python 2.6
> installation. In Python 2.6, the line you moved is number 961, and I did the
> same change.
OK. For information, you can apply it
Charles-Francois Natali added the comment:
dup(2) returns the lowest numbered available file descriptor: if there's a
discontinuity in the FDs allocation, this code is going to close only the FDs
up to the first available FD.
Imagine for example the following:
open("/tmp/foo")
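The lowest-free-descriptor behaviour of dup(2) can be demonstrated directly (a minimal sketch, assuming a single-threaded process so no other code allocates descriptors in between):

```python
import os

# dup() returns the lowest-numbered free descriptor. A close loop that
# stops at dup()'s return value therefore misses descriptors that sit
# above a gap in the FD numbering.
a = os.open("/dev/null", os.O_RDONLY)
b = os.open("/dev/null", os.O_RDONLY)
os.close(a)          # create a gap below b
c = os.dup(b)        # fills the gap: c == a, even though b > c is open
assert c == a
os.close(b)
os.close(c)
```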
Charles-Francois Natali added the comment:
In the test script, simply changing
def emit(f, data=snips):
    for datum in data:
        f.write(datum)
to
def gemit(f, data=snips):
    datas = ''.join(data)
    f.write(datas)
improves direct gzip performance from
[1.179978
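A self-contained sketch of the two variants (here snips is a hypothetical stand-in for the test script's data; both produce identical compressed output, the joined write just crosses the gzip layer once instead of once per chunk):

```python
import gzip
import io

snips = [b"spam"] * 1000   # hypothetical data, standing in for the test's

def emit(f, data=snips):
    for datum in data:     # one gzip write call per chunk
        f.write(datum)

def gemit(f, data=snips):
    f.write(b"".join(data))  # a single gzip write call

b1, b2 = io.BytesIO(), io.BytesIO()
with gzip.GzipFile(fileobj=b1, mode="wb") as f:
    emit(f)
with gzip.GzipFile(fileobj=b2, mode="wb") as f:
    gemit(f)

# Same decompressed content either way; only the call pattern differs.
assert gzip.decompress(b1.getvalue()) == gzip.decompress(b2.getvalue())
```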
Charles-Francois Natali added the comment:
$ cat /tmp/test.py
import socket
SIZE = 10L
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    s.recv(SIZE)
finally:
    s.close()
$ python /tmp/test.py
Traceback (most recent call last):
File "/tmp/test.py",
Charles-Francois Natali added the comment:
A patch to make select calls EINTR-safe is attached, but:
- there are many modules that suffer from the same problem, so maybe a fix at
select level would be better
- if not, it could be a good idea to add this EINTR-retry handler to a given
module
Charles-Francois Natali added the comment:
In real life, you can receive for example SIGSTOP (strace, gdb, shell), but
mostly SIGCHLD (any process spawning children), etc. The attached patch just
restarts calls when EINTR is received, as is done in subprocess module. The
mailing list is a
Charles-Francois Natali added the comment:
This is because difflib.ndiff (called by difflib.HtmlDiff.make_table),
unlike difflib.unified_diff (and probably kdiff3), doesn't restrict
itself to contiguous lines, and searches for diffs even inside lines, so the
complexity is much worse
Charles-Francois Natali added the comment:
Alright, what happens is the following:
- the file you're trying to retrieve is actually redirected, so the server sends
an HTTP/1.X 302 Moved Temporarily
- in urllib, when we get a redirection, we call redirect_internal:
def redirect_internal