On 07/18/2017 01:07 AM, dieter wrote:
"Jan Gosmann" writes:
[...]
fn = load_pyfile('fn.py')['fn']
[...]
"pickle" (and "cpickle") are serializing functions as so called
"global"s, i.e. as a module reference together with a name.
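A minimal illustration of that "global" mechanism (using a builtin so the module lookup is unambiguous):

```python
import pickle

# A pickled function contains only a (module, name) reference, not its code:
data = pickle.dumps(len)               # len lives in the builtins module
assert b"builtins" in data and b"len" in data

# Unpickling re-imports the module and looks the name up again, so a
# function that cannot be found under its recorded name fails to pickle.
def broken():
    pass
broken.__qualname__ = "no_such_name"   # break the name-based lookup
try:
    pickle.dumps(broken)
except pickle.PicklingError as exc:
    print("PicklingError:", exc)
```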
Hi,
today I came across some weird behaviour (a bug?) in Python 2.7.13 (on
Linux) with the cPickle module. The pure-Python pickle module works, and
so does pickle in Python 3.
I have a file fn.py with a minimal function definition:
```
def fn():
    pass
```
The actual code that I run is in a
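The `load_pyfile` helper itself isn't shown in the thread; here is a hypothetical sketch of what such a helper might look like, assuming it simply executes the file's source in a fresh namespace. Note that a function created this way may not carry a usable `__module__`, which hints at why the name-based pickling described above can go wrong:

```python
import os
import tempfile

# Hypothetical sketch of load_pyfile (the original implementation is not
# shown in the thread): exec the file's source in a fresh namespace and
# return that namespace as a dict.
def load_pyfile(path):
    namespace = {}
    with open(path) as f:
        exec(compile(f.read(), path, "exec"), namespace)
    return namespace

# Demo with a throwaway copy of the fn.py above:
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
    tmp.write("def fn():\n    pass\n")
fn = load_pyfile(tmp.name)["fn"]
os.unlink(tmp.name)
```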
On 03/30/2017 02:19 PM, INADA Naoki wrote:
FYI, this small patch may fix your issue:
https://gist.github.com/methane/8faf12621cdb2166019bbcee65987e99
I can verify that the patch fixes the issue for me.
Do you still need more information about the `transitive_closure`
function and my usage of s
On 03/29/2017 11:31 PM, Chris Angelico wrote:
On Thu, Mar 30, 2017 at 2:19 PM, Rick Johnson
wrote:
[...]
Really? How could your clients not notice 60 GB of memory
usage unless they are running some kind of mad-dog insane
top-of-the-line hardware? (Got Benjamins???) Of course, in
the case they a
That's great news. I'm busy with other things right now, but will look
into your findings in more detail later.
On 03/30/2017 02:09 PM, INADA Naoki wrote:
Filed an issue: https://bugs.python.org/issue29949
Thanks for your report, Jan.
On Fri, Mar 31, 2017 at 3:04 AM, INADA Naoki wrote:
May
On 29 Mar 2017, at 20:12, Steve D'Aprano wrote:
If you can demonstrate this effect using simple example code without
the
external dependencies (using nothing but the standard library) and
people
can replicate it, I think it should be reported as a bug.
I probably won't be able to demonstrate
On 28 Mar 2017, at 14:21, INADA Naoki wrote:
On Wed, Mar 29, 2017 at 12:29 AM, Jan Gosmann
wrote:
I suppose a smaller and faster benchmark is better for others looking
into it.
I already stopped the azure instance.
[...]
There is no maxrss difference in "smaller existing examples"
On 28 Mar 2017, at 6:11, INADA Naoki wrote:
I managed to install pyopencl and run the script. It takes more than
2 hours, and uses only 7GB RAM.
Maybe a faster backend for OpenCL is required?
I used Microsoft Azure Compute, Standard_A4m_v2 (4 cores, 32 GB
memory) instance.
I suppose that
On 28 Mar 2017, at 3:08, Peter Otten wrote:
> Perhaps numpy's default integer type has changed (assuming you are using
> integer arrays, I did look at, but not into your code)?
>
> You could compare
>
> >>> numpy.array([42]).itemsize
> 8
>
> for the two interpreters.
Both report 8 for integer and
On 27 Mar 2017, at 20:12, Chris Angelico wrote:
On Tue, Mar 28, 2017 at 11:00 AM, Chris Angelico
wrote:
In any case, I've installed nvidia-opencl-dev and it seems to be
happy. How long should I expect this to run for?
Now testing under CPython 3.7.
It ran for about 14 minutes, then memory
On 27 Mar 2017, at 20:00, Chris Angelico wrote:
By the way, since you're aiming this at recent Python versions, you
could skip the 'virtualenv' external dependency and use the built-in
'venv' package:
$ python3 -m venv env
Yeah, I know about venv. The last time I tried it, there was still som
On 27 Mar 2017, at 18:42, Chris Angelico wrote:
Are you able to share the program? I could try it on my system and see
if the same thing happens.
Yes, it is on GitHub (use the fixes branch):
https://github.com/ctn-archive/gosmann-frontiers2017/tree/fixes
Installation instructions are in the
On 27 Mar 2017, at 18:30, Peter Otten wrote:
Are you perchance comparing 32-bit Python 3.5 with 64-bit Python 3.6?
I don't think so.
[sys.maxsize](https://docs.python.org/3/library/platform.html#cross-platform)
indicates both to be 64-bit.
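The check referenced above, per the linked `platform` documentation, boils down to a one-liner on `sys.maxsize`:

```python
import platform
import sys

# Recommended cross-platform check: sys.maxsize is 2**63 - 1 on a
# 64-bit interpreter and 2**31 - 1 on a 32-bit one.
is_64bit = sys.maxsize > 2**32
print("64-bit interpreter:", is_64bit)
print("platform.architecture():", platform.architecture()[0])
```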
--
https://mail.python.org/mailman/listinfo/python-
Hi,
I have a program which uses twice as much memory when I run it in Python
3.6 as when I run it in Python 3.5 (about 60 GB instead of 30 GB). I
tried to find the reason for that, but the cumulated size (measured with
sys.getsizeof) of all objects returned by gc.get_objects accumulates only
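The measurement described here can be sketched as follows. Note that `sys.getsizeof` counts only each object's own footprint, not allocator overhead or memory held by C extensions, which is one reason such a total can fall far short of the process's real memory usage:

```python
import gc
import sys

# Sum the interpreter-visible sizes of all GC-tracked objects.
# This misses memory held by C extensions and the allocator itself,
# so the total can be far below the process's resident set size.
objs = gc.get_objects()
total = sum(sys.getsizeof(o) for o in objs)
print(f"{len(objs)} tracked objects, {total / 1e6:.1f} MB accounted for")
```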