Similarly, on macOS 10.12.3 Sierra:
% python3.5
Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 08:49:46)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> s = set(range(10))
>>> sys.getsizeof(s)
736
>>>
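To make the comparison above repeatable, a small script like the following (a sketch, not from the thread) can be run under both interpreters. Updating one set from another exercises CPython's set_merge, the code path discussed later in the thread:

```python
import sys

# Compare set memory footprints; run under both 3.5 and 3.6 and diff
# the output. merged.update(direct) with a set argument goes through
# CPython's set_merge, the routine changed by the commit under discussion.
for n in (10, 100, 1000, 10000):
    direct = set(range(n))
    merged = set()
    merged.update(direct)
    print(n, sys.getsizeof(direct), sys.getsizeof(merged))
```

The absolute numbers vary by build; what matters is how the two interpreters differ for the same n.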
On 03/30/2017 02:19 PM, INADA Naoki wrote:
FYI, this small patch may fix your issue:
https://gist.github.com/methane/8faf12621cdb2166019bbcee65987e99
I can verify that the patch fixes the issue for me.
Do you still need more information about the `transitive_closure`
function and my usage of s
On 03/29/2017 11:31 PM, Chris Angelico wrote:
On Thu, Mar 30, 2017 at 2:19 PM, Rick Johnson
wrote:
[...]
Really? How could your clients not notice 60 GB of memory
usage unless they are running some kind of mad-dog insane
top-of-the-line hardware? (Got Benjamins???) Of course, in
the case they a
On 3/30/17, INADA Naoki wrote:
> Maybe this commit caused this regression.
>
> https://github.com/python/cpython/commit/4897300276d870f99459c82b937f0ac22450f0b6
>
> Old:
> minused = (so->used + other->used)*2 (L619)
>
> New:
> minused = so->used + other->used (L620)
> minused = (minused > 5) ? minused * 2 : minused * 4; (L293)
On 2017-03-30 19:04, INADA Naoki wrote:
Maybe this commit caused this regression.
https://github.com/python/cpython/commit/4897300276d870f99459c82b937f0ac22450f0b6
Old:
minused = (so->used + other->used)*2 (L619)
New:
minused = so->used + other->used (L620)
minused = (minused > 5) ? minused * 2 : minused * 4; (L293)
FYI, this small patch may fix your issue:
https://gist.github.com/methane/8faf12621cdb2166019bbcee65987e99
--
https://mail.python.org/mailman/listinfo/python-list
That's great news. I'm busy with other things right now, but will look
into your findings in more detail later.
Filed an issue: https://bugs.python.org/issue29949
Thanks for your report, Jan.
On Fri, Mar 31, 2017 at 3:04 AM, INADA Naoki wrote:
> Maybe this commit caused this regression.
>
> https://github.com/python/cpython/commit/4897300276d870f99459c82b937f0ac22450f0b6
>
> Old:
> minused = (so->used + other->used)*2 (L619)
Maybe this commit caused this regression.
https://github.com/python/cpython/commit/4897300276d870f99459c82b937f0ac22450f0b6
Old:
minused = (so->used + other->used)*2 (L619)
New:
minused = so->used + other->used (L620)
minused = (minused > 5) ? minused * 2 : minused * 4; (L293)
So size of
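Reading the quoted lines as Python (a sketch only; the real code is C in CPython's set implementation, and these function names are made up), a small merge now requests up to four times the combined size instead of two:

```python
# Illustrative Python rendition of the two minused policies quoted above.
# "old_request" / "new_request" are invented names, not CPython symbols.
def old_request(so_used, other_used):
    # Old: minused = (so->used + other->used)*2
    return (so_used + other_used) * 2

def new_request(so_used, other_used):
    # New: minused = so->used + other->used
    #      minused = (minused > 5) ? minused * 2 : minused * 4
    minused = so_used + other_used
    return minused * 2 if minused > 5 else minused * 4

for a, b in [(1, 2), (3, 3), (10, 10), (1000, 1000)]:
    print(a, b, old_request(a, b), new_request(a, b))
```

For combined sizes of five or fewer, the new policy requests twice what the old one did; for larger merges the two formulas agree.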
I reproduced the issue.
This is a very common memory usage issue. Thrashing is just a result of
the large memory usage.
After the 1st pass of optimization, RAM usage is 20GB+ on Python 3.5 and
30GB on Python 3.6.
And Python 3.6 starts thrashing in the 2nd optimization pass.
I enabled tracemalloc while 1st pass.
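tracemalloc has been in the standard library since 3.4 and attributes allocations to source lines. A minimal sketch of enabling it around a workload (the workload below is a stand-in, not the model from the thread):

```python
import tracemalloc

tracemalloc.start()

# Stand-in workload; in the thread this was an optimization pass
# over a large model.
data = [set(range(100)) for _ in range(1000)]

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)  # allocation totals grouped by source line
```

Diffing snapshots taken under 3.5 and 3.6 would show which lines account for the extra memory.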
On 3/29/17, Jan Gosmann wrote:
> On 28 Mar 2017, at 14:21, INADA Naoki wrote:
>
>> On Wed, Mar 29, 2017 at 12:29 AM, Jan Gosmann
>> wrote:
>>
>> I suppose a smaller and faster benchmark is better for others looking
>> for it.
>> I already stopped the azure instance.
>> [...]
>> There are no maxrss d
>
> Running further trials indicate that the problem actually is related to
> swapping. If I reduce the model size in the benchmark slightly so that
> everything fits into the main memory, the problem disappears. Only when the
> memory usage exceeds the 32GB that I have, Python 3.6 will acquire way
On Thu, Mar 30, 2017 at 2:19 PM, Rick Johnson
wrote:
> On Wednesday, March 29, 2017 at 8:17:01 PM UTC-5, Jan Gosmann wrote:
>> On 29 Mar 2017, at 20:12, Steve D'Aprano wrote:
>>
>> > If you can demonstrate this effect using simple example
>> > code without the external dependencies (using nothing
On Wednesday, March 29, 2017 at 8:17:01 PM UTC-5, Jan Gosmann wrote:
> On 29 Mar 2017, at 20:12, Steve D'Aprano wrote:
>
> > If you can demonstrate this effect using simple example
> > code without the external dependencies (using nothing but
> > the standard library) and people can replicate it,
On 29 Mar 2017, at 20:12, Steve D'Aprano wrote:
If you can demonstrate this effect using simple example code without the
external dependencies (using nothing but the standard library) and people
can replicate it, I think it should be reported as a bug.
I probably won't be able to demonstrate
On Thu, 30 Mar 2017 07:19 am, Jan Gosmann wrote:
> Running further trials indicate that the problem actually is related to
> swapping. If I reduce the model size in the benchmark slightly so that
> everything fits into the main memory, the problem disappears. Only when
> the memory usage exceeds t
On 28 Mar 2017, at 14:21, INADA Naoki wrote:
On Wed, Mar 29, 2017 at 12:29 AM, Jan Gosmann
wrote:
I suppose a smaller and faster benchmark is better for others looking for
it.
I already stopped the azure instance.
[...]
There are no maxrss difference in "smaller existing examples"?
[...]
I wan
On Wed, Mar 29, 2017 at 12:29 AM, Jan Gosmann wrote:
> On 28 Mar 2017, at 6:11, INADA Naoki wrote:
>
>> I managed to install pyopencl and run the script. It takes more than
>> 2 hours, and uses only 7GB RAM.
>> Maybe, some faster backend for OpenCL is required?
>>
>> I used Microsoft Azure Comput
On 28 Mar 2017, at 6:11, INADA Naoki wrote:
I managed to install pyopencl and run the script. It takes more than
2 hours, and uses only 7GB RAM.
Maybe, some faster backend for OpenCL is required?
I used Microsoft Azure Compute, Standard_A4m_v2 (4 cores, 32 GB
memory) instance.
I suppose that
On 28 Mar 2017, at 3:08, Peter Otten wrote:
> Perhaps numpy's default integer type has changed (assuming you are using
> integer arrays, I did look at, but not into your code)?
>
> You could compare
>
> >>> numpy.array([42]).itemsize
> 8
>
> for the two interpreters.
Both report 8 for integer and
I can't reproduce it.
I managed to install pyopencl and run the script. It takes more than
2 hours, and uses only 7GB RAM.
Maybe, some faster backend for OpenCL is required?
I used Microsoft Azure Compute, Standard_A4m_v2 (4 cores, 32 GB
memory) instance.
An easier way to reproduce this is needed...
Jan Gosmann wrote:
> On 27 Mar 2017, at 18:30, Peter Otten wrote:
>
>> Are you perchance comparing 32-bit Python 3.5 with 64-bit Python 3.6?
>
> I don't think so.
> [sys.maxsize](https://docs.python.org/3/library/platform.html#cross-platform)
> indicates both to be 64-bit.
While my original ide
On 27 Mar 2017, at 20:12, Chris Angelico wrote:
On Tue, Mar 28, 2017 at 11:00 AM, Chris Angelico
wrote:
In any case, I've installed nvidia-opencl-dev and it seems to be
happy. How long should I expect this to run for?
Now testing under CPython 3.7.
It ran for about 14 minutes, then memory
On 27 Mar 2017, at 20:00, Chris Angelico wrote:
By the way, since you're aiming this at recent Python versions, you
could skip the 'virtualenv' external dependency and use the built-in
'venv' package:
$ python3 -m venv env
Yeah, I know about venv. The last time I tried it, there was still som
On Tue, Mar 28, 2017 at 11:00 AM, Chris Angelico wrote:
> In any case, I've installed nvidia-opencl-dev and it seems to be
> happy. How long should I expect this to run for?
>
> Now testing under CPython 3.7.
>
It ran for about 14 minutes, then memory usage spiked and went into
the page file. Use
On Tue, Mar 28, 2017 at 10:01 AM, Jan Gosmann wrote:
> Yes, it is on GitHub (use the fixes branch):
> https://github.com/ctn-archive/gosmann-frontiers2017/tree/fixes
> Installation instructions are in the readme.
> The command I'm running is python scripts/log_reduction.py spaun
Working on it.
B
On 27 Mar 2017, at 18:42, Chris Angelico wrote:
Are you able to share the program? I could try it on my system and see
if the same thing happens.
Yes, it is on GitHub (use the fixes branch):
https://github.com/ctn-archive/gosmann-frontiers2017/tree/fixes
Installation instructions are in the
On Tue, Mar 28, 2017 at 8:57 AM, Jan Gosmann wrote:
> I have a program which uses twice as much memory when I run it in Python 3.6
> than when I run it in Python 3.5 (about 60GB instead of 30GB). I tried to
> find the reason for that, but the cumulated size (measured with
> sys.getsizeof) of all o
On 27 Mar 2017, at 18:30, Peter Otten wrote:
Are you perchance comparing 32-bit Python 3.5 with 64-bit Python 3.6?
I don't think so.
[sys.maxsize](https://docs.python.org/3/library/platform.html#cross-platform)
indicates both to be 64-bit.
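Besides sys.maxsize, struct.calcsize("P") gives the pointer width in bytes; a quick check (a sketch, not from the thread) that could be run under both interpreters:

```python
import struct
import sys

# Both checks should agree: 8-byte pointers and maxsize == 2**63 - 1
# indicate a 64-bit build.
print(sys.maxsize == 2**63 - 1)
print(struct.calcsize("P") * 8, "bit")
```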
Jan Gosmann wrote:
> Hi,
>
> I have a program which uses twice as much memory when I run it in Python
> 3.6 than when I run it in Python 3.5 (about 60GB instead of 30GB). I
> tried to find the reason for that, but the cumulated size (measured with
> sys.getsizeof) of all objects returned by gc.get_objects
Hi,
I have a program which uses twice as much memory when I run it in Python
3.6 than when I run it in Python 3.5 (about 60GB instead of 30GB). I
tried to find the reason for that, but the cumulated size (measured with
sys.getsizeof) of all objects returned by gc.get_objects accumulates
only
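The measurement described above can undercount actual process memory: it sums only shallow sizes, and gc.get_objects returns only GC-tracked container objects, so allocator overhead and over-allocated table space are missed. A minimal version of the measurement (a sketch under those caveats):

```python
import gc
import sys

# Rough lower bound on heap usage: shallow sizes of GC-tracked objects.
# Untracked objects (e.g. ints, most strings) and allocator overhead are
# not included, so this usually reports far less than resident memory.
total = sum(sys.getsizeof(obj) for obj in gc.get_objects())
print(total / 1024 ** 2, "MiB (shallow, GC-tracked only)")
```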