Olivier Delhomme added the comment:
Hi Marc-Andre,
Please note that setting PYTHONUTF8 with "export PYTHONUTF8=1":
* Is external to the program and user dependent
* It does not seem to work for my use case:
$ unset LANG
$ export PYTHONUTF8=1
$ python3
Python 3.6.4 (defau
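Worth noting: PYTHONUTF8 (PEP 540 UTF-8 mode) was only added in Python 3.7, so on the 3.6.4 interpreter shown above the variable is expected to have no effect. On 3.7+ a quick sanity check looks like this (a sketch, not part of the original report):

```python
# Quick check (Python 3.7+ only; UTF-8 mode does not exist in 3.6):
import sys

print(sys.flags.utf8_mode)          # 1 when PYTHONUTF8=1 or -X utf8 is set
print(sys.getfilesystemencoding())  # 'utf-8' when UTF-8 mode is active
```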
New submission from Olivier Delhomme :
$ python3 --version
Python 3.6.4
Setting LANG to en_US.UTF8 works like a charm
$ export LANG=en_US.UTF8
$ python3
Python 3.6.4 (default, Jan 11 2018, 16:45:55)
[GCC 4.8.5] on linux
Type "help", "copyright", "credits" or
Change by Olivier Le Floch :
--
nosy: +olivierlefloch
___
Python tracker
<https://bugs.python.org/issue43112>
___
___
Python-bugs-list mailing list
Unsubscribe:
Olivier Croquette added the comment:
I don't know what version of gendef is meant, but the one from MSYS2 / MinGW64
doesn't output the result on stdout, but rather writes the file "python38.def"
itself. So the commands are the following:
cd libs
gendef ..\python38.d
Olivier Dony added the comment:
Somehow the message identifiers in the code sample got messed up in the
previous comment; here's the actual code, for what it's worth ;-)
https://gist.github.com/odony/0323eab303dad2077c1277076ecc3733
--
Olivier Dony added the comment:
Further, under Python 3.8 the issue is not fully solved, as other
identification headers are still being folded in a non-RFC-conformant manner
(see OP for RFC references). This was indicated on the original PR by the
author: https://github.com/python/cpython
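For reference, a minimal way to inspect how the email package folds a long identification header (illustrative only; whether the fold is RFC-conformant depends on the Python version and policy in use, and the address is made up):

```python
from email.message import EmailMessage

msg = EmailMessage()
# A Message-ID long enough to exceed the default 78-character fold limit.
msg['Message-ID'] = '<' + 'a' * 80 + '@example.com>'
raw = msg.as_bytes()
print(raw.decode('ascii'))  # inspect where (and whether) the header was folded
```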
Olivier Dony added the comment:
With regard to msg349895, is there any chance this fix could be considered for
backport?
I imagine you could view it as a new feature, but it seems to be the only
official fix we have for the fact that Python 3 generates invalid SMTP
messages. And that'
Olivier Grisel added the comment:
As Victor said, the `time.sleep(1.0)` might lead to Heisen failures. I am not
sure how to write proper strong synchronization in this case but we could
instead go for something intermediate such as the following pattern:
...
p.terminate
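One possible shape for such an intermediate pattern (my own sketch, not the actual patch; assumes a fork-capable platform so the worker function needs no import guard): replace the fixed sleep with a bounded poll on the process state.

```python
import multiprocessing
import time

def target():
    time.sleep(30)  # stand-in for a worker that would otherwise linger

p = multiprocessing.Process(target=target)
p.start()
p.terminate()

# Poll with a deadline instead of a single time.sleep(1.0): fast on healthy
# machines, and it does not flake on slow or heavily loaded ones.
deadline = time.monotonic() + 10
while p.is_alive() and time.monotonic() < deadline:
    time.sleep(0.01)
p.join()
print(p.is_alive())  # False once the child has actually exited
```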
Olivier Grisel added the comment:
Adding such a hook would make it possible to reimplement
cloudpickle.CloudPickler by deriving from the fast _pickle.Pickler class
(instead of the slow pickle._Pickler as done currently). This would mean
rewriting most of the CloudPickler method to only rely
Olivier Chédru added the comment:
FWIW, I encountered the same kind of issue when using the mkstemp() function:
under the hood, it calls gettempdir() and this one is protected by a lock too.
Current thread 0x7ff10231f700 (most recent call first):
File "/usr/lib/python3.5/tempfi
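A workaround sketch for the contention described above (it does not fix the underlying lock, and the behaviour of the cache is an implementation detail of `tempfile`): call `gettempdir()` once up front so its result is cached, after which concurrent `mkstemp()` calls skip the locked directory search.

```python
import os
import tempfile
import threading

# Prime tempfile's cached temp directory once, before spawning threads,
# so mkstemp() in the workers does not contend on the detection lock.
tempfile.gettempdir()

results = []

def worker(i):
    fd, path = tempfile.mkstemp()  # no longer hits the candidate-dir scan
    os.close(fd)
    os.unlink(path)
    results.append(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2, 3]
```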
Olivier Grisel added the comment:
Thanks for the very helpful feedback and guidance during the review.
Olivier Grisel added the comment:
Shall we close this issue now that the PR has been merged to master?
Olivier Grisel added the comment:
Flushing the buffer at each frame commit will cause a medium-sized write every
64kB on average (instead of one big write at the end). So that might actually
cause a performance regression for some users if the individual file-object
writes induce significant
Olivier Grisel added the comment:
> While we are here, wouldn't be worth to flush the buffer in the C
> implementation to the disk always after committing a frame? This will save a
> memory when dump a lot of small objects.
I think it's a good idea. The C pickler would b
Olivier Grisel added the comment:
Thanks Antoine, I updated my code to what you suggested.
--
Python tracker <https://bugs.python.org/issue31993>
Olivier Grisel added the comment:
Alright, I found the source of my refcounting bug. I updated the PR to include
the C version of the dump for PyBytes.
I ran Serhiy's microbenchmarks on the C version and I could not detect any
overhead on small bytes objects while I get a ~20x speedup
Olivier Grisel added the comment:
I have tried to implement the direct write bypass for the C version of the
pickler but I get a segfault in a Py_INCREF on obj during the call to
memo_put(self, obj) after the call to _Pickler_write_large_bytes.
Here is the diff of my current version of the
Olivier Grisel added the comment:
BTW, I am looking at the C implementation at the moment. I think I can do it.
Olivier Grisel added the comment:
Alright, the last version has now ~4% overhead for small bytes.
Olivier Grisel added the comment:
Actually, I think this can still be improved while keeping it readable. Let me
try again :)
Olivier Grisel added the comment:
I have pushed a new version of the code that now has a 10% overhead for small
bytes (instead of 40% previously).
It could be possible to optimize further but I think that would render the code
much less readable so I would be tempted to keep it this way
Olivier Grisel added the comment:
In my last comment, I also reported the user times (time not spent in OS-level
disk access): the code of the PR is on the order of 300-400 ms while master is
around 800 ms or more.
Olivier Grisel added the comment:
More benchmarks with the unix time command:
```
(py37) ogrisel@ici:~/code/cpython$ git checkout master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
(py37) ogrisel@ici:~/code/cpython$ time python ~/tmp/large_p
Olivier Grisel added the comment:
Note that the time difference is not significant. I reran the last command and
got:
```
(py37) ogrisel@ici:~/code/cpython$ python ~/tmp/large_pickle_dump.py
--use-pypickle
Allocating source data...
=> peak memory usage: 2.014 GB
Dumping to disk...
done
Olivier Grisel added the comment:
I wrote a script to monitor the memory when dumping 2GB of data with python
master (C pickler and Python pickler):
```
(py37) ogrisel@ici:~/code/cpython$ python ~/tmp/large_pickle_dump.py
Allocating source data...
=> peak memory usage: 2.014 GB
Dumping
New submission from Olivier Grisel :
I noticed that both pickle.Pickler (C version) and pickle._Pickler (Python
version) make unnecessary memory copies when dumping large str, bytes and
bytearray objects.
This is caused by unnecessary concatenation of the opcode and size header with
the
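The idea behind the fix can be sketched with a hand-rolled pickle stream (illustrative only, not the patch itself; protocol 4, `BINBYTES8` opcode): write the opcode and size header separately from the payload, so the large buffer goes to the file object without an intermediate concatenation copy.

```python
import io
import pickle
import struct

def dump_large_bytes(obj, f):
    # Header and payload are written separately: no `header + obj`
    # concatenation, hence no extra copy of the large buffer.
    f.write(b'\x80\x04')                            # PROTO 4
    f.write(b'\x8e' + struct.pack('<Q', len(obj)))  # BINBYTES8 + 64-bit size
    f.write(obj)                                    # payload, handed over as-is
    f.write(b'.')                                   # STOP

payload = b'x' * (2 ** 20)
buf = io.BytesIO()
dump_large_bytes(payload, buf)
print(pickle.loads(buf.getvalue()) == payload)  # True
```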
Olivier Vielpeau added the comment:
Thanks for the reviews and the merge! :)
--
Python tracker <http://bugs.python.org/issue29738>
Olivier Vielpeau added the comment:
I've attached the PR on GitHub and signed the CLA; is there anything else
needed from me to get this reviewed? Thanks!
Changes by Olivier Vielpeau :
--
pull_requests: +433
New submission from Olivier Vielpeau:
The code snippet in #25569 reproduces the memory leak with Python 3.6.0 and
2.7.13. The current memory leak is a regression that was introduced in #26470.
I'm going to attach a PR on GitHub that fixes the issue shortly.
--
assignee: christian.heimes
Olivier Le Moign added the comment:
I guess this is fixed by https://pypi.python.org/pypi/rfc6266. I could have
looked better, sorry.
New submission from Olivier Le Moign:
According to RFC 5987 (http://tools.ietf.org/html/rfc5987), it's possible to use
encodings other than ASCII in header fields.
Specifically in the CGI library, posting files with non-ASCII characters will
lead the header to be (for example) filename*=
Olivier Matz added the comment:
By the way, I have my own implementation of the patch that I did before checking
the issue tracker. Instead of adding an argument to readline.redisplay(), it
adds a new function readline.forced_update_display().
I attach the patch for reference; I don't know
Olivier Matz added the comment:
Hi,
I'm also interested in this feature. Indeed, exporting
rl_forced_update_display() is the only way I've found to make
readline.set_completion_display_matches_hook() work properly. Without
forcing the redisplay, the prompt is not displayed. Thi
Olivier Grisel added the comment:
No problem. Thanks Antoine for the review!
--
Python tracker <http://bugs.python.org/issue21905>
Olivier Grisel added the comment:
New version of the patch to add an inline comment.
--
Added file: http://bugs.python.org/file35841/pickle_whichmodule_20140703.patch
New submission from Olivier Grisel:
`pickle.whichmodule` performs an iteration over `sys.modules` and tries to
perform `getattr` calls on those modules. Unfortunately some modules such as
those from the `six.moves` dynamic module can trigger imports when calling
`getattr` on them, hence
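The hazard described above can be reproduced without six, using a toy module whose attribute access has side effects (everything below is invented for illustration; in `six.moves` the side effect would be an actual import):

```python
import pickle
import sys
import types

lookups = []

class LazyModule(types.ModuleType):
    """Toy stand-in for six.moves: attribute access has side effects."""
    def __getattr__(self, name):
        lookups.append(name)        # six.moves would trigger an import here
        raise AttributeError(name)

sys.modules['fake_lazy_module'] = LazyModule('fake_lazy_module')

def f():
    pass

f.__module__ = None  # force whichmodule() to fall back to scanning sys.modules

result = pickle.whichmodule(f, 'f')
print(result)   # the module name whichmodule settled on
print(lookups)  # the scan touched our lazy module's __getattr__
```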
Olivier Grisel added the comment:
I applied issue19946_pep_451_multiprocessing_v2.diff and I confirm that it
fixes the problem that I reported initially.
Olivier Grisel added the comment:
For Python 3.4:
Maybe rather than raising ImportError, we could issue a warning to notify
users that names from the __main__ namespace could not be loaded and make
init_module_attrs return early.
This way a multiprocessing program that only calls
Olivier Grisel added the comment:
I can wait (or monkey-patch the stuff I need as a temporary workaround in my
code). My worry is that Python 3.4 will introduce a new feature that is very
crash-prone.
Take this simple program that uses the newly introduced `get_context` function
(the same
Olivier Grisel added the comment:
> The semantics are not going to change in python 3.4 and will just stay as
> they were in Python 3.3.
Well the semantics do change: in Python 3.3 the spawn and forkserver modes did
not exist at all. The "spawn" mode existed but only implicitl
Olivier Grisel added the comment:
Why has this issue been closed? Won't the spawn and forkserver modes work in
Python 3.4 for Python programs started by a Python script (which is probably the
majority of programs written in Python under unix)?
Is there any reason not to use the `imp.load_s
Olivier Grisel added the comment:
Here is a patch that uses `imp.load_source` when the first importlib name-based
lookup fails.
Apparently it fixes the issue on my box but I am not sure whether this is the
correct way to do it.
--
keywords: +patch
Added file: http://bugs.python.org
Olivier Grisel added the comment:
Note however that the problem is not specific to nose. If I rename my initial
'check_forserver.py' script to 'check_forserver', add the '#!/usr/bin/env
python' header and make it 'chmod +x' I get the same crash.
So
Olivier Grisel added the comment:
> what is sys.modules['__main__'] and sys.modules['__main__'].__file__ if you
> run under nose?
$ cat check_stuff.py
import sys

def test_main():
    print("sys.modules['__main__']=%r"
          % sys.modules['__main__'])
Olivier Grisel added the comment:
I agree that a failure to lookup the module should raise an explicit exception.
> Second, there is no way that 'nosetests' will ever succeed as an import
> since, as Oliver pointed out, it doesn't end in '.py' or any other
>
Olivier Grisel added the comment:
> So the question is exactly what module is being passed to
> importlib.find_spec() and why isn't it finding a spec/loader for that module.
The module is the `nosetests` python script. module_name == 'nosetests' in this
case. Howe
Changes by Olivier Grisel :
--
type: -> crash
Python tracker <http://bugs.python.org/issue19946>
New submission from Olivier Grisel:
Here is a simple python program that uses the new forkserver feature introduced
in 3.4b1:
name: checkforkserver.py
"""
import multiprocessing
import os

def do(i):
    print(i, os.getpid())

def test_forkserver():
    mp = multiprocess
Olivier Grisel added the comment:
I tested the patch on the current HEAD and it fixes a regression introduced
between 3.3 and 3.4b1 that prevented building scipy from source with "pip
install scipy".
--
nosy: +Olivier.Grisel
Olivier Grisel added the comment:
Richard Oudkerk: thanks for the clarification, that makes sense. I don't have
the time either in the coming month, maybe later.
Olivier Grisel added the comment:
The process pool executor [1] from the concurrent futures API would be suitable
to explicitly start and stop the helper process for the `forkserver` mode.
[1]
http://docs.python.org/3.4/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor
Olivier Grisel added the comment:
> Maybe it would be better to have separate contexts for each start method.
> That way joblib could use the forkserver context without interfering with the
> rest of the user's program.
Yes in general it would be great if libraries could
Olivier Grisel added the comment:
Related question: is there any good reason that would prevent passing a custom
`start_method` kwarg to the `Pool` constructor to make it use an alternative
`Popen` instance (that is, an instance different from the
`multiprocessing._Popen` singleton)?
This
Olivier Grisel added the comment:
> In 3.3 you can do
>
> from multiprocessing.forking import ForkingPickler
> ForkingPickler.register(MyType, reduce_MyType)
>
> Is this sufficient for you needs? This is private (and its definition has
> moved in 3.4) but it
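For completeness, the same registration works in 3.4+ where the class moved to `multiprocessing.reduction` (still a private detail, so treat this as a hedged sketch; `MyType` and its reducer are invented for the example):

```python
import io
import pickle
from multiprocessing.reduction import ForkingPickler  # moved here in 3.4

class MyType:
    def __init__(self, x):
        self.x = x

def reduce_MyType(obj):
    # Custom reduction: rebuild MyType from its single attribute.
    return (MyType, (obj.x,))

ForkingPickler.register(MyType, reduce_MyType)

buf = io.BytesIO()
ForkingPickler(buf, pickle.HIGHEST_PROTOCOL).dump(MyType(42))
clone = pickle.loads(buf.getvalue())
print(clone.x)  # 42
```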
Olivier Grisel added the comment:
I forgot to end a sentence in my last comment:
- detect mmap-backed numpy
should read:
- detect mmap-backed numpy arrays and pickle only the filename and other buffer
metadata to reconstruct a mmap-backed array in the worker processes instead of
copying the
Olivier Grisel added the comment:
I have implemented a custom subclass of the multiprocessing Pool to be able
to plug a custom pickling strategy for this specific use case in joblib:
https://github.com/joblib/joblib/blob/master/joblib/pool.py#L327
In particular it can:
- detect mmap-backed numpy
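The "pickle only the metadata" idea can be sketched without numpy (the `MmapBuffer` class and its reducer below are toy constructs for illustration, not joblib's actual implementation):

```python
import copyreg
import mmap
import os
import pickle
import tempfile

class MmapBuffer:
    """Toy mmap-backed buffer: a filename plus a length."""
    def __init__(self, filename, length):
        self.filename = filename
        self.length = length
        self._file = open(filename, 'r+b')
        self.mm = mmap.mmap(self._file.fileno(), length)

def _reduce_mmap_buffer(buf):
    # Pickle only filename and length; the receiving process re-opens the
    # same file instead of copying the data through the pickle stream.
    return (MmapBuffer, (buf.filename, buf.length))

copyreg.pickle(MmapBuffer, _reduce_mmap_buffer)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'\x00' * 1_000_000)
    path = f.name

buf = MmapBuffer(path, 1_000_000)
payload = pickle.dumps(buf)
clone = pickle.loads(payload)
print(len(payload) < 200, clone.length)  # tiny payload, same metadata

clone.mm.close()
buf.mm.close()
os.unlink(path)
```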
Olivier Gagnon added the comment:
Yes, I do have code that breaks because of this behaviour. I'm doing
evolutionary algorithms using a framework called DEAP. This framework creates a
type called individual at runtime by subclassing a container and adding a
fitness attribute to it.
Olivier Gagnon added the comment:
I can understand that the current behaviour can be correct with regard to the
added attributes of the object. However, should I open a new issue for the
following inheritance behaviour, which the reduce function also affects?
class myCounter(Counter):
def
Olivier Gagnon added the comment:
The dictionary and the set do not give the freedom to add dynamic attributes to
them. I agree that the Counter should have the same behaviour.
However, this will raise the same bug when we inherit from a Counter object.
>>> class mylist(list): pass
.
Olivier Hervieu added the comment:
Here is a fixed version for Python 2.7. Using StringIO instead of the io module
fixes the problem pointed out by Ezio.
The print_stats method dumps the stats either to sys.stdout, if the `Stats`
class is declared without a stream specification, or to the given stream
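The stream parameter described above can be exercised like this on Python 3 (a usage sketch, not the attached patch):

```python
import cProfile
import io
import pstats

pr = cProfile.Profile()
pr.enable()
sum(range(10000))  # something to profile
pr.disable()

# With a stream given, print_stats() writes there instead of sys.stdout.
stream = io.StringIO()
stats = pstats.Stats(pr, stream=stream)
stats.print_stats()
print(len(stream.getvalue()) > 0)  # True: output captured in the stream
```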
New submission from Olivier Gagnon:
The following code shows that the Counter is not deepcopied properly. The
same code with a user-defined class or a dict is copied with the "b" attribute.

import collections
import copy

count = collections.Counter()
count.b = 3
print(count.b)                 # 3
count2 = copy.deepcopy(count)  # the deepcopy step the report refers to
print(count2.b)                # AttributeError: no attribute 'b'
Olivier Berger added the comment:
Excellent. I've started playing with pygettext and msgfmt and it looks like
this works, from the initial tests I've made
--
nosy: +olberger
New submission from Olivier Berger:
The IDLE UI isn't internationalized, AFAICS.
This doesn't help when teaching Python to non-English native speakers.
While learning basic English skills is no problem for wannabe Python hackers, it
isn't so for young programmers being tou
olivier-mattelaer added the comment:
Thanks a lot Ronald.
Cheers,
Olivier
--
status: pending -> open
Olivier Berten added the comment:
Any idea why Mac CJK encodings still aren't available in Python 2.7 and 3.2?
--
components: -Build, Demos and Tools, Library (Lib), Macintosh
nosy: +ezio.melotti, olivier-berten
versions: +Python 2.7, Python 3.2 -Python 2.6, Pytho
New submission from olivier-mattelaer :
Hi Everyone,
I have found a strange behavior of the import command for the readline module.
The commands (put in the file test.py) are simply:
import readline
print readline.__doc__
If I run this program "normally" (i.e. python2.x test.py)
New submission from olivier :
Hi,
I tried to define a new key set in Python IDLE, and then Python 2.5.1 failed to
launch.
What I did: I defined a new key set, applied it, changed my mind, and removed
my key set named 'ole'.
I launched cmd
C:\Python25\Lib\idlelib>..\..\python idle.py
Olivier Refalo added the comment:
hum, your patch actually works on MSYS!
ok.. so I am pretty much having the very same issue.
Could not find platform dependent libraries
Consider setting $PYTHONHOME to [:]
Fatal Python error: Py_Initialize: unable to load the file system codec
LookupError
New submission from Olivier Berten :
I'm writing SwatchBooker <https://launchpad.net/swatchbooker>, an app that's
(among other things) reading some data from the web. When urllib.urlopen is
called first from within a secondary thread, the app crashes (or freezes). If
it
Olivier Berten added the comment:
Pleeese ;-)
--
nosy: +olivier-berten
Python tracker <http://bugs.python.org/issue2504>
New submission from Olivier Hervieu <[EMAIL PROTECTED]>:
Hi guys, I found something strange in the behavior of OptionParser.
If I have this sample code:
import sys
from optparse import OptionParser

if __name__ == '__main__':
    parser = OptionParser()
    parser.add_o
Olivier Croquette added the comment:
And about the decoding, sorry, it's clear from your snippets that
urlparse doesn't do it:
>>> print q.username
user%40xyz
Maybe it should do it, I am not sure. What do you think? It would save
work
Olivier Croquette added the comment:
See also the related bug on duplicity:
http://savannah.nongnu.org/bugs/?21475
Olivier Croquette added the comment:
Hi!
Thanks for the reply!
The problem right now is that urlparse silently parses a URL which is
not compliant, but does the wrong thing with it (since usernames can
contain @ and hostnames cannot, it's more logical to parse from
the right
New submission from Olivier Croquette:
Some servers allow the @ character in usernames. It gives URLs like:
ftp://[EMAIL PROTECTED]@host/dir
[EMAIL PROTECTED] could for example be an email address.
I am not sure if this is RFC compliant. What's sure is that it makes
trouble with url
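On current Python, `urllib.parse` splits the userinfo at the last '@' (as suggested later in this thread), provided the '@' inside the username is percent-encoded; decoding is still left to the caller. A hedged illustration with a made-up address:

```python
from urllib.parse import unquote, urlsplit

# Hypothetical URL: the '@' inside the username is percent-encoded as %40.
u = urlsplit('ftp://user%40example.com@host/dir')
print(u.username)           # 'user%40example.com' (still encoded)
print(u.hostname)           # 'host'
print(unquote(u.username))  # 'user@example.com' (caller decodes)
```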