> I had the (mis)pleasure of dealing with a multi-terabyte postgresql
> instance many years ago and figuring out why random scripts were eating
> up system memory became quite common.
>
> All of our "ETL" scripts were either written in Perl, Java, or Python
> but the results were always the sa
On 3/29/21 5:12 AM, Alexey wrote:
Hello everyone!
I'm experiencing problems with memory consumption.
I have a class which is doing an ETL job. What's happening inside:
- fetching existing objects from DB via SQLAlchemy
- iterate over raw data
- create new/update existing objects
- commit cha
Thursday, April 1, 2021 at 15:56:23 UTC+3, Marco Ippolito:
> > > Are you running with systemd?
> >
> > I really don't know.
> An example of how to check:
>
> ```
> $ readlink /sbin/init
> /lib/systemd/systemd
> ```
>
> You want to check which program runs as PID 1.
Thank you Marco
Thursday, April 1, 2021 at 15:46:21 UTC+3, Marco Ippolito:
> I suspect the high watermark of `` needs to be reachable still and,
> secondly, that a forceful constraint whilst running would crash the
> container?
Exactly.
Thursday, April 1, 2021 at 17:21:59 UTC+3, Mats Wichmann:
> On 4/1/21 5:50 AM, Alexey wrote:
> > Found it. As I said before the problem was lurking in the cache.
> > Few days ago I read about circular references and things like that and
> > I thought to myself that it might be the case. To buil
Thursday, April 1, 2021 at 16:02:15 UTC+3, Barry:
> > On 1 Apr 2021, at 13:46, Marco Ippolito wrote:
> >
> >
> >>
> What if you increase the machine's (operating system's) swap space? Does
> that take care of the problem in practice?
> >>>
> >>> I can't do that because it will aff
Thursday, April 1, 2021 at 15:27:01 UTC+3, Chris Angelico:
> On Thu, Apr 1, 2021 at 10:56 PM Alexey wrote:
> >
> > Found it. As I said before the problem was lurking in the cache.
> > Few days ago I read about circular references and things like that and
> > I thought to myself that it might
On 4/1/21 5:50 AM, Alexey wrote:
Found it. As I said before the problem was lurking in the cache.
Few days ago I read about circular references and things like that and
I thought to myself that it might be the case. To build the cache I was
using lots of 'setdefault' methods chained together
s
> On 1 Apr 2021, at 13:46, Marco Ippolito wrote:
>
>
>>
What if you increase the machine's (operating system's) swap space? Does
that take care of the problem in practice?
>>>
>>> I can't do that because it will affect other containers running on this
>>> host.
>>> In my opinion i
> > Are you running with systemd?
>
> I really don't know.
An example of how to check:
```
$ readlink /sbin/init
/lib/systemd/systemd
```
You want to check which program runs as PID 1.
```
ps 1
```
Thursday, April 1, 2021 at 14:57:29 UTC+3, Barry:
> > On 31 Mar 2021, at 09:42, Alexey wrote:
> >
> > Wednesday, March 31, 2021 at 01:20:06 UTC+3, Dan Stromberg:
> >>> On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
> >>>
> >>>
> >>> I'm sorry. I didn't understand your question right. If I have
> >> What if you increase the machine's (operating system's) swap space? Does
> >> that take care of the problem in practice?
> >
> > I can't do that because it will affect other containers running on this
> > host.
> > In my opinion it may significantly reduce their performance.
>
> Assuming thi
On Thu, Apr 1, 2021 at 10:56 PM Alexey wrote:
>
> Found it. As I said before the problem was lurking in the cache.
> Few days ago I read about circular references and things like that and
> I thought to myself that it might be the case. To build the cache I was
> using lots of 'setdefault' methods
> On 31 Mar 2021, at 09:42, Alexey wrote:
>
> Wednesday, March 31, 2021 at 01:20:06 UTC+3, Dan Stromberg:
>>> On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
>>>
>>>
>>> I'm sorry. I didn't understand your question right. If I have 4 workers,
>>> they require 4Gb
>>> in idle state and some ex
Found it. As I said before the problem was lurking in the cache.
Few days ago I read about circular references and things like that and
I thought to myself that it might be the case. To build the cache I was
using lots of 'setdefault' methods chained together
self.__cache.setdefault(cluster_name,
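A minimal sketch of the pattern being described (the keys and fields here are hypothetical, since the original line is cut off): a nested cache built from chained setdefault calls. Clearing it drops the references, but CPython can only hand an arena back to the OS once every object in that arena is gone.
```
raw_data = [
    {"cluster": "c1", "namespace": "ns1", "name": "obj1"},
    {"cluster": "c1", "namespace": "ns1", "name": "obj2"},
]

cache = {}
for row in raw_data:
    # Each setdefault creates the nested container on first access.
    (cache.setdefault(row["cluster"], {})
          .setdefault(row["namespace"], {})
          .setdefault(row["name"], [])
          .append(row))

cache.clear()  # releases the references, but not necessarily the arenas
```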
On 31/03/2021 09:35, Alexey wrote:
Wednesday, March 31, 2021 at 01:20:06 UTC+3, Dan Stromberg:
What if you increase the machine's (operating system's) swap space? Does
that take care of the problem in practice?
I can't do that because it will affect other containers running on this host.
In my
Wednesday, March 31, 2021 at 18:17:46 UTC+3, Dieter Maurer:
> Alexey wrote at 2021-3-31 02:43 -0700:
> >Wednesday, March 31, 2021 at 06:54:52 UTC+3, Inada Naoki:
> > ...
> >> You can get some hints from sys._debugmallocstats(). It prints
> >> obmalloc (allocator for small objects) stats to stderr.
> >
Alexey wrote at 2021-3-31 02:43 -0700:
>Wednesday, March 31, 2021 at 06:54:52 UTC+3, Inada Naoki:
> ...
>> You can get some hints from sys._debugmallocstats(). It prints
>> obmalloc (allocator for small objects) stats to stderr.
>> Try printing stats before and after 1st run, and after 2nd run. And
>>
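A small sketch of the comparison Inada Naoki suggests; sys._debugmallocstats() is a real CPython helper that writes obmalloc statistics to stderr, while run_etl() is only a placeholder for the job from the thread.
```
import sys

def snapshot(label):
    # obmalloc statistics go to stderr; label each dump so the
    # before/after runs can be compared.
    print("==== %s ====" % label, file=sys.stderr)
    sys._debugmallocstats()

snapshot("before first run")
# run_etl()   # placeholder for the ETL job discussed in the thread
snapshot("after first run")
# run_etl()
snapshot("after second run")
```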
Wednesday, March 31, 2021 at 14:16:30 UTC+3, Inada Naoki:
> > ** Before first run:
> > # arenas allocated total = 776
> > # arenas reclaimed = 542
> > # arenas highwater mark = 234
> > # arenas allocated current = 234
> > 234 arenas * 262144 bytes/arena = 61,341,696
> > ** After fi
> ** Before first run:
> # arenas allocated total = 776
> # arenas reclaimed = 542
> # arenas highwater mark = 234
> # arenas allocated current = 234
> 234 arenas * 262144 bytes/
Wednesday, March 31, 2021 at 11:52:43 UTC+3, Marco Ippolito:
> > > At which point does the problem start manifesting itself?
> > The problem spot is my cache(dict). I simplified my code to just load
> > all the objects to this dict and then clear it.
> What's the memory utilisation just _before_ per
Wednesday, March 31, 2021 at 06:54:52 UTC+3, Inada Naoki:
> First of all, I recommend upgrading your Python. Python 3.6 is a bit old.
I was thinking about that.
> As you say, Python cannot return the memory to the OS until the whole
> arena becomes unused.
> If your task releases all objects alloc
Wednesday, March 31, 2021 at 05:45:27 UTC+3, cameron...@gmail.com:
> Since everyone is talking about vague OS memory use and not at all about
> working set size of Python objects, let me ...
> On 29Mar2021 03:12, Alexey wrote:
> >I'm experiencing problems with memory consumption.
> >
> >I have a
> > At which point does the problem start manifesting itself?
> The problem spot is my cache(dict). I simplified my code to just load
> all the objects to this dict and then clear it.
What's the memory utilisation just _before_ performing this load? I am assuming
it's much less than this 1 GB you
Wednesday, March 31, 2021 at 01:20:06 UTC+3, Dan Stromberg:
> On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
>
> >
> > I'm sorry. I didn't understand your question right. If I have 4 workers,
> > they require 4Gb
> > in idle state and some extra memory when they execute other tasks. If I
> > inc
Tuesday, March 30, 2021 at 18:43:54 UTC+3, Alan Gauld:
> On 29/03/2021 11:12, Alexey wrote:
> The first thing you really need to tell us is which
> OS you are using. Memory management varies wildly
> depending on OS. Even different flavours of *nix
> do it differently.
I'm using Ubuntu(5.8.
Tuesday, March 30, 2021 at 18:43:51 UTC+3, Marco Ippolito:
> Have you tried to identify where in your code the surprising memory
> allocations
> are made?
Yes.
> You could "bisect search" by adding breakpoints:
>
> https://docs.python.org/3/library/functions.html#breakpoint
>
> At which po
On Mon, Mar 29, 2021 at 7:16 PM Alexey wrote:
>
> Problem. Before executing, my interpreter process weighs ~100Mb, after first
> run memory increases up to 500Mb
> and after second run it weighs 1Gb. If I will continue to run this class,
> memory won't increase, so I think
> it's not a memory lea
Since everyone is talking about vague OS memory use and not at all about
working set size of Python objects, let me ...
On 29Mar2021 03:12, Alexey wrote:
>I'm experiencing problems with memory consumption.
>
>I have a class which is doing an ETL job. What's happening inside:
> - fetching existing o
On Tue, Mar 30, 2021 at 1:25 AM Alexey wrote:
>
> I'm sorry. I didn't understand your question right. If I have 4 workers,
> they require 4Gb
> in idle state and some extra memory when they execute other tasks. If I
> increase workers
> count up to 16, they'll eat all the memory I have (16GB) on
On 30/03/2021 16:50, Chris Angelico wrote:
>> A 1GB process on modern computers is hardly a big problem?
>> Most machines have 4G and many have 16G or even 32G
>> nowadays.
>>
>
> Desktop systems maybe, but if you rent yourself a worker box, it might
> not have anything like that much. Especially
On Wed, Mar 31, 2021 at 2:44 AM Alan Gauld via Python-list
wrote:
>
> On 29/03/2021 11:12, Alexey wrote:
> > Hello everyone!
> > I'm experiencing problems with memory consumption.
> >
>
> The first thing you really need to tell us is which
> OS you are using. Memory management varies wildly
> depe
Have you tried to identify where in your code the surprising memory allocations
are made?
You could "bisect search" by adding breakpoints:
https://docs.python.org/3/library/functions.html#breakpoint
At which point does the problem start manifesting itself?
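One way to make that bisect search concrete (a sketch, not from the thread): pair the built-in breakpoint() with a quick peak-RSS readout from the standard resource module (Unix only), and move the breakpoints until the stage that inflates memory is isolated. The two stand-in stages below are hypothetical.
```
import resource

def peak_rss_mb():
    # ru_maxrss is the peak resident set size; on Linux it is reported in KiB.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

objs = [str(i) * 10 for i in range(1_000_000)]   # stand-in for "fetch objects"
print("after fetch:", peak_rss_mb(), "MB")
breakpoint()   # drops into pdb; inspect locals, then continue with 'c'

cache = {i: o for i, o in enumerate(objs)}       # stand-in for building the cache
print("after cache build:", peak_rss_mb(), "MB")
breakpoint()
```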
On 29/03/2021 11:12, Alexey wrote:
> Hello everyone!
> I'm experiencing problems with memory consumption.
>
The first thing you really need to tell us is which
OS you are using. Memory management varies wildly
depending on OS. Even different flavours of *nix
do it differently.
However, most do i
> I'm sorry. I didn't understand your question right. If I have 4 workers,
> they require 4Gb
> in idle state and some extra memory when they execute other tasks. If I
> increase workers
> count up to 16, they'll eat all the memory I have (16GB) on my machine and
> will crash as soon
> as system ge
Monday, March 29, 2021 at 19:56:52 UTC+3, Stestagg:
> > > 2. Can you try a test with 16 or 32 active workers (i.e. number of
> > > workers=2x available memory in GB), do they all still end up with 1gb
> > > usage? or do you get any other memory-related issues running this?
> > Yes. They wi
Monday, March 29, 2021 at 19:37:03 UTC+3, Dieter Maurer:
> Alexey wrote at 2021-3-29 06:26 -0700:
> >Monday, March 29, 2021 at 15:57:43 UTC+3, Julio Oña:
> >> It looks like the problem is on celery.
> >> The mentioned issue is still open, so not sure if it was corrected.
> >>
> >>
> > 2. Can you try a test with 16 or 32 active workers (i.e. number of
> > workers=2x available memory in GB), do they all still end up with 1gb
> > usage? or do you get any other memory-related issues running this?
> Yes. They will consume 1Gb each. It doesn't matter how many workers I
> have,
> t
Alexey wrote at 2021-3-29 06:26 -0700:
>Monday, March 29, 2021 at 15:57:43 UTC+3, Julio Oña:
>> It looks like the problem is on celery.
>> The mentioned issue is still open, so not sure if it was corrected.
>>
>> https://manhtai.github.io/posts/memory-leak-in-celery/
>
>As I mentioned in my f
Monday, March 29, 2021 at 17:19:02 UTC+3, Stestagg:
> On Mon, Mar 29, 2021 at 2:32 PM Alexey wrote:
> Some questions here to help understand more:
>
> 1. Do you have any actual problems caused by running 8 celery workers
> (beyond high memory reports)? What are they?
No. Everything work
On Mon, Mar 29, 2021 at 2:32 PM Alexey wrote:
> Monday, March 29, 2021 at 15:57:43 UTC+3, Julio Oña:
> > It looks like the problem is on celery.
> > The mentioned issue is still open, so not sure if it was corrected.
> >
> > https://manhtai.github.io/posts/memory-leak-in-celery/
>
> As I me
Monday, March 29, 2021 at 15:57:43 UTC+3, Julio Oña:
> It looks like the problem is on celery.
> The mentioned issue is still open, so not sure if it was corrected.
>
> https://manhtai.github.io/posts/memory-leak-in-celery/
As I mentioned in my first message, I tried to run
this task(cla
It looks like the problem is on celery.
The mentioned issue is still open, so not sure if it was corrected.
https://manhtai.github.io/posts/memory-leak-in-celery/
Julio
On Mon, Mar 29, 2021 at 08:31, Alexey (zen.supag...@gmail.com)
wrote:
> Hello Lars!
> Thanks for your interest.
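Not something raised in this thread, but if the workers themselves keep growing, Celery has standard settings for recycling them. A hypothetical configuration sketch:
```
from celery import Celery

app = Celery("etl")

# Recycle each worker process after 50 tasks, or once it exceeds ~512 MB
# (the value is in KiB). Both are standard Celery settings.
app.conf.worker_max_tasks_per_child = 50
app.conf.worker_max_memory_per_child = 512_000
```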
Hello Lars!
Thanks for your interest.
The problem appears when all celery workers
require 1Gb of RAM each in idle state. They
hold this memory constantly and when they do
something useful, they grab more memory. I
think 8Gb+ in idle state is quite a lot for my
app.
> Did it crash your system or p
Hello Alexej,
May I stupidly ask why you care about that in general? Please don't get
me wrong, I don't want to criticize you; this is rather meant to be a
thought-provoking question.
Normally your OS kernel and the Python interpreter get along pretty well,
and when there is free memory to be had,
Mayling ge wrote:
> Sorry. The code here is just to describe the issue and is just pseudo
> code,
That is the problem with your post. It's too vague for us to make sense of
it.
Can you provide a minimal example that shows what you think is a "memory
leak"? Then we can either help you avo
Sorry. The code here is just to describe the issue and is just pseudo
code, please forgive some typos. I list out the lines because I need the line
context.
On 07/05/2017 15:52, Albert-Jan Roskam wrote:
From: Python-list on behalf of Maylin
From: Python-list on
behalf of Mayling ge
Sent: Tuesday, July 4, 2017 9:01 AM
To: python-list
Subject: memory leak with re.match
Hi,
My function handles a file line by line. There are
multiple error patterns defined that need to be applied to each line. I us
Thanks. I actually commented out all the handling code. The loop ends with
re_pat.match and nothing follows it.
On 07/05/2017 08:31, Cameron Simpson wrote:
On 04Jul2017 17:01, Mayling ge wrote:
> My function is in the following way to handle file lin
On 04Jul2017 17:01, Mayling ge wrote:
My function handles a file line by line. There are
multiple error patterns defined that need to be applied to each line. I use
multiprocessing.Pool to handle the file in blocks.
The memory usage increases to 2 GB for a 1 GB file. A
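The original code isn't shown, so the names below are hypothetical; this is a sketch of the approach being described: precompile the error patterns once and feed the file to a multiprocessing.Pool lazily, so the 1 GB file never has to sit in memory as a whole.
```
import re
from multiprocessing import Pool

# Hypothetical error patterns, compiled once at import time.
ERROR_PATTERNS = [re.compile(p) for p in (r"ERROR: (\w+)", r"FATAL: (\w+)")]

def check_line(line):
    for pat in ERROR_PATTERNS:
        m = pat.match(line)
        if m:
            return m.group(1)
    return None

def scan(path):
    with open(path) as f, Pool(4) as pool:
        # imap pulls lines from the file lazily instead of reading it all at once.
        for hit in pool.imap(check_line, f, chunksize=1000):
            if hit:
                yield hit

# Usage: for hit in scan("big.log"): print(hit)
```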
Christian wrote:
> Hi,
>
> I'm wondering why Python blows up a dictionary structure so much.
>
> The ids and cat substructure could have 0..n entries, but in most cases
> they are <= 10, t is limited by <= 6.
>
> Thanks for any advice to save memory.
> Christian
>
>
> Example:
>
> {'0a0f7a3
On Friday, September 23, 2016 at 12:02:47 UTC+2, Chris Angelico wrote:
> On Fri, Sep 23, 2016 at 7:05 PM, Christian wrote:
> > I'm wondering why Python blows up a dictionary structure so much.
> >
> > The ids and cat substructure could have 0..n entries, but in most cases
> > they are <= 10, t is
On Fri, Sep 23, 2016 at 7:05 PM, Christian wrote:
> I'm wondering why Python blows up a dictionary structure so much.
>
> The ids and cat substructure could have 0..n entries, but in most cases
> they are <= 10, t is limited by <= 6.
>
> Example:
>
> {'0a0f7a3a0e09826caef1bff707785662': {'ids':
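A quick way to see where the bytes go (the record below is made up to mirror the example): sum sys.getsizeof over the nested containers. Every small dict and list carries its own fixed overhead, which is what makes such structures balloon.
```
import sys

def deep_size(obj, seen=None):
    """Rough recursive size of a nested dict/list structure, in bytes."""
    seen = set() if seen is None else seen
    if id(obj) in seen:
        return 0
    seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(deep_size(k, seen) + deep_size(v, seen) for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set)):
        size += sum(deep_size(i, seen) for i in obj)
    return size

record = {"0a0f7a3": {"ids": ["a1", "b2", "c3"], "cat": ["x", "y"]}}
print(deep_size(record), "bytes for one entry")
```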
Hi, Oscar,
Your feedback is very valuable to me since you dug into the problem itself.
Basically, we are trying to develop open source software with multiple
interfaces to several free solvers so that we can switch among them in case one
of them is not working or not efficient. The optimizati
On 15 August 2015 at 00:41, Ping Liu wrote:
> Dear All,
Hi Ping Liu,
> I am working on an optimization problem, where we are trying to minimize
> some indicators like energy usage, energy cost, CO2 emission. In this
> problem, we have a bunch of energy conversion technologies for electricity
> a
In a message of Tue, 18 Aug 2015 01:56:16 -0700, Rustom Mody writes:
>[How she (her mail client) manages to befuddle googlegroups thusly is
>quite a mystery...
>]
For me as well, as all I am doing is just replying to the mail ... And
I haven't changed my mail client at all in years and years ...
On Tuesday, August 18, 2015 at 3:40:11 AM UTC+5:30, Ping Liu wrote:
> Hi, Dieter,
>
> If I move from Python to Jython or IronPython, do I need to retool whatever I
> have done? If so, that may take quite a long time. This may make the
> reimplementation impossible.
Hi Ping
There is a message f
In a message of Tue, 18 Aug 2015 10:13:57 +1000, Chris Angelico writes:
>On Tue, Aug 18, 2015 at 8:09 AM, Ping Liu wrote:
>> If I move from Python to Jython or IronPython, do I need to retool whatever
>> I have done? If so, that may take quite a long time. This may make the
>> reimplementation i
Ping Liu writes:
> If I move from Python to Jython or IronPython, do I need to retool whatever I
> have done? If so, that may take quite a long time. This may make the
> reimplementation impossible.
As Chris already pointed out, you are still using Python -- i.e. the
base language does not ch
On Tue, Aug 18, 2015 at 8:09 AM, Ping Liu wrote:
> If I move from Python to Jython or IronPython, do I need to retool whatever I
> have done? If so, that may take quite a long time. This may make the
> reimplementation impossible.
You're not moving from Python to something else; you're moving f
Hi, Dieter,
If I move from Python to Jython or IronPython, do I need to retool whatever I
have done? If so, that may take quite a long time. This may make the
reimplementation impossible.
In a message of Mon, 17 Aug 2015 11:40:32 -0700, Ping Liu writes:
>> Discuss this more on pypy-...@python.org or the #pypy channel on freenode.
>> People on pypy-dev would appreciate not getting libreoffice spreadsheet
>> attachments but just the figures as plain text.
>>
>> Laura
>
>Hi, Laura,
>
On Saturday, August 15, 2015 at 11:56:22 AM UTC-7, Laura Creighton wrote:
> If the problem is that Python is using too much memory, then PyPy may
> be able to help you. PyPy is an alternative implementation of Python,
> and by default uses a minimark garbage collector.
> https://pypy.readthedocs.
If the problem is that Python is using too much memory, then PyPy may
be able to help you. PyPy is an alternative implementation of Python,
and by default uses a minimark garbage collector.
https://pypy.readthedocs.org/en/release-2.4.x/garbage_collection.html
You will have to write your own bind
On 15/08/2015 18:28, Terry Reedy wrote:
On 8/15/2015 3:21 AM, dieter wrote:
Ping Liu writes:
...
For small cases, Python works well. But if we consider a longer time
period, then it would fail due to memory usage issues. We have tested
several
case studies to check the memory use for differe
On 8/15/2015 3:21 AM, dieter wrote:
Ping Liu writes:
...
For small cases, Python works well. But if we consider a longer time period,
then it would fail due to memory usage issues. We have tested several
case studies to check the memory use for different time period, including
1) 2 hours in o
Ping Liu writes:
> ...
> For small cases, Python works well. But if we consider a longer time period,
> then it would fail due to memory usage issues. We have tested several
> case studies to check the memory use for different time period, including
> 1) 2 hours in one day, 2) 24 hours in one da
On Mon, Jun 8, 2015 at 3:32 AM, naren wrote:
> Memory Error while working with pandas dataframe.
>
> Description of environment: Windows 7, Python 3.4.2 (32-bit), pandas
> 0.16.0
>
> We are running into the error described below. Any help provided will be
> sincerely appreciated.
>
> We are ab
On 01/04/2015 11:15, mohan...@gmail.com wrote:
Hi All,
We have developed an IronPython script in Spotfire to execute some calculations
based on the marked rows. The script is holding memory and not releasing it. If
we mark 1000+ rows, it takes MBs of memory and keeps increasing.
Please help u
On 29/10/2014 02:18, Denis McMahon wrote:
> On Mon, 27 Oct 2014 10:16:43 -0700, kiuhnm03 wrote:
>
>> I'd like to write one or more scripts that analyze processes in memory
>> on Windows 7. I used to do these things in C++ by using native Win32 API
>> calls.
>> How should I proceed in python? Any p
On Wed, Oct 29, 2014 at 1:18 PM, Denis McMahon wrote:
> On Mon, 27 Oct 2014 10:16:43 -0700, kiuhnm03 wrote:
>
>> I'd like to write one or more scripts that analyze processes in memory
>> on Windows 7. I used to do these things in C++ by using native Win32 API
>> calls.
>> How should I proceed in p
On Mon, 27 Oct 2014 10:16:43 -0700, kiuhnm03 wrote:
> I'd like to write one or more scripts that analyze processes in memory
> on Windows 7. I used to do these things in C++ by using native Win32 API
> calls.
> How should I proceed in python? Any pointers?
This seems to be a very common request.
On Tuesday, October 28, 2014 3:37:19 AM UTC+1, Rustom Mody wrote:
> On Tuesday, October 28, 2014 12:41:40 AM UTC+5:30, kiuh...@yahoo.it wrote:
> > On Monday, October 27, 2014 6:24:19 PM UTC+1, Tim Golden wrote:
> > > psutil is definitely your friend:
> > >
> > > https://github.com/giampaolo/psut
On Tuesday, October 28, 2014 12:41:40 AM UTC+5:30, kiuh...@yahoo.it wrote:
> On Monday, October 27, 2014 6:24:19 PM UTC+1, Tim Golden wrote:
> > psutil is definitely your friend:
> >
> > https://github.com/giampaolo/psutil
> >
> > Although WMI can be quite handy too, depending on what you're tr
On Monday, October 27, 2014 6:24:19 PM UTC+1, Tim Golden wrote:
> psutil is definitely your friend:
>
> https://github.com/giampaolo/psutil
>
> Although WMI can be quite handy too, depending on what you're trying to do:
>
> http://timgolden.me.uk/python/wmi/
>
> TJG
Thanks for answering.
I
On 27/10/2014 17:16, kiuhn...@yahoo.it wrote:
> Hi! I'd like to write one or more scripts that analyze processes in
> memory on Windows 7. I used to do these things in C++ by using native
> Win32 API calls. How should I proceed in python? Any pointers?
>
psutil is definitely your friend:
https
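A small psutil sketch (not from the thread) that lists the processes with the largest resident memory; psutil works the same way on Windows 7 as elsewhere.
```
import psutil

procs = []
for p in psutil.process_iter(["pid", "name", "memory_info"]):
    mem = p.info["memory_info"]
    if mem is not None:          # None when access to the process is denied
        procs.append((mem.rss, p.info["pid"], p.info["name"]))

# Ten largest processes by resident set size.
for rss, pid, name in sorted(procs, reverse=True)[:10]:
    print(pid, name, round(rss / 1024 / 1024, 1), "MB")
```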
Jamie Mitchell writes:
> ...
> I then get a memory error:
>
> Traceback (most recent call last):
> File "", line 1, in
> File "/usr/local/sci/lib/python2.7/site-packages/scipy/stats/stats.py",
> line 2409, in pearsonr
> x = np.asarray(x)
> File "/usr/local/sci/lib/python2.7/site-packag
On 03/24/2014 04:32 AM, Jamie Mitchell wrote:
Hello all,
I'm afraid I am new to all this so bear with me...
I am looking to find the statistical significance between two large netCDF data
sets.
Firstly I've loaded the two files into python:
swh=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/cont
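A hedged sketch of the kind of thing being attempted (file and variable names are made up, since the real paths are truncated above): slicing a netCDF variable reads only that slice from disk, which keeps pearsonr from trying to materialise both full arrays at once.
```
import numpy as np
import netCDF4
from scipy.stats import pearsonr

ds_a = netCDF4.Dataset("swh_control.nc")   # hypothetical file names
ds_b = netCDF4.Dataset("swh_future.nc")

# Read one grid point's time series instead of the whole field.
a = np.asarray(ds_a.variables["swh"][:, 0, 0], dtype=np.float32)
b = np.asarray(ds_b.variables["swh"][:, 0, 0], dtype=np.float32)

r, p = pearsonr(a, b)
print("r =", r, "p =", p)
```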
On Monday, March 24, 2014 11:32:31 AM UTC, Jamie Mitchell wrote:
> Hello all,
>
> I'm afraid I am new to all this so bear with me...
>
> I am looking to find the statistical significance between two large netCDF
> data sets.
>
> Firstly I've loaded the two files into python:
Giorgos Tzampanakis writes:
> ...
> So it seems that the pickle module does keep some internal cache or
> something like that.
This is highly unlikely: the "ZODB" (Zope object database)
uses pickle (actually, it is "cPickle", the "C" implementation
of the "pickle" module) for serialization. The "
On 2013-06-15, Peter Otten wrote:
> Giorgos Tzampanakis wrote:
>
>> So it seems that the pickle module does keep some internal cache or
>> something like that.
>
> I don't think there's a global cache. The Pickler/Unpickler has a per-
> instance cache (the memo dict) that you can clear with the c
Giorgos Tzampanakis wrote:
> So it seems that the pickle module does keep some internal cache or
> something like that.
I don't think there's a global cache. The Pickler/Unpickler has a per-
instance cache (the memo dict) that you can clear with the clear_memo()
method, but that doesn't matter
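A minimal sketch of what Peter Otten describes: reuse one Pickler on the output stream and clear its memo between records, so the per-instance cache does not grow with the number of objects written. The record layout here is made up.
```
import pickle

records = ({"id": i, "payload": "x" * 100} for i in range(1000))

with open("records.pkl", "wb") as f:
    pickler = pickle.Pickler(f, protocol=pickle.HIGHEST_PROTOCOL)
    for rec in records:
        pickler.dump(rec)
        # The memo remembers every object already written; clearing it keeps
        # memory flat when the records don't share objects with each other.
        pickler.clear_memo()
```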
On 2013-06-15, Dave Angel wrote:
> On 06/14/2013 07:04 PM, Giorgos Tzampanakis wrote:
>> I have a program that saves lots (about 800k) objects into a shelve
>> database (I'm using sqlite3dbm for this since all the default python dbm
>> packages seem to be unreliable and effectively unusable, but t
On 2013-06-15, Peter Otten wrote:
> Giorgos Tzampanakis wrote:
>
>> I have a program that saves lots (about 800k) objects into a shelve
>> database (I'm using sqlite3dbm for this since all the default python dbm
>> packages seem to be unreliable and effectively unusable, but this is
>> another dis
Giorgos Tzampanakis wrote:
> I have a program that saves lots (about 800k) objects into a shelve
> database (I'm using sqlite3dbm for this since all the default python dbm
> packages seem to be unreliable and effectively unusable, but this is
> another discussion).
>
> The process takes about 10-
On 06/14/2013 07:04 PM, Giorgos Tzampanakis wrote:
I have a program that saves lots (about 800k) objects into a shelve
database (I'm using sqlite3dbm for this since all the default python dbm
packages seem to be unreliable and effectively unusable, but this is
another discussion).
The process ta
> Python version and OS please. And is the Python 32-bit or 64-bit? How
> much RAM does the computer have, and how big are the swapfiles?
Python 2.7.3
Ubuntu 12.04 64-bit
4GB RAM
> "Fairly big" is fairly vague. To some people, a list with 100k members
> is huge, but not to a modern
On 02/18/2013 10:29 AM, Sudheer Joseph wrote:
Hi,
I have been trying to compute the cross correlation between a time series
at a location f(1) and the time series of spatial data f(XYT), saving the
resulting correlation coefficients and lags in a 3-dimensional array which is
of fairly b
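The array dimensions aren't given in the post, so the shape below is hypothetical, but numpy's nbytes makes it easy to check whether the 3-D result will fit in 4 GB before running the job.
```
import numpy as np

nlat, nlon, nlags = 180, 360, 41            # hypothetical grid and lag count
corr = np.zeros((nlat, nlon, nlags))        # float64 correlation coefficients
print(corr.nbytes / 1024**2, "MB")          # ~20 MB for this shape

# float32 halves the footprint if the precision is acceptable.
corr32 = np.zeros((nlat, nlon, nlags), dtype=np.float32)
print(corr32.nbytes / 1024**2, "MB")
```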
On 23 January 2013 17:33, Isaac Won wrote:
> On Wednesday, January 23, 2013 10:51:43 AM UTC-6, Oscar Benjamin wrote:
>> On 23 January 2013 14:57, Isaac Won wrote:
>>
>> > On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:
>>
>> Unless I've misunderstood how this function is su
On Wednesday, January 23, 2013 10:51:43 AM UTC-6, Oscar Benjamin wrote:
> On 23 January 2013 14:57, Isaac Won wrote:
> > On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:
> >> On 23 January 2013 14:28, Isaac Won wrote:
> [SNIP]
> > Following is full
On 23 January 2013 14:57, Isaac Won wrote:
> On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:
>> On 23 January 2013 14:28, Isaac Won wrote:
>>
[SNIP]
>
> Following is the full error message after I adjusted it following Ulrich's advice:
>
> interp = interp1d(indices[not_nan], x[not_
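For context, a self-contained sketch of the usage pattern from this thread (values made up): fill the NaNs in a series by interpolating over the known points with scipy's interp1d; the quadratic spline needs noticeably more working memory than the linear case because a spline system is set up and solved.
```
import numpy as np
from scipy.interpolate import interp1d

x = np.array([1.0, np.nan, 2.5, np.nan, 4.0, 5.5])   # made-up series with gaps
indices = np.arange(len(x))
not_nan = ~np.isnan(x)

# Interpolate over the known points, then evaluate at every index.
interp = interp1d(indices[not_nan], x[not_nan], kind="quadratic")
filled = interp(indices)
print(filled)
```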
On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:
> On 23 January 2013 14:28, Isaac Won wrote:
> > On Wednesday, January 23, 2013 4:08:13 AM UTC-6, Oscar Benjamin wrote:
> > To Oscar
> > My actual error message is:
> > File
> > "/lustre/work/apps/python-2.7
On Wednesday, January 23, 2013 2:55:14 AM UTC-6, Ulrich Eckhardt wrote:
> On 23.01.2013 05:06, Isaac Won wrote:
> > I have tried to use different interpolation methods with Scipy. My
> > code seems just fine with linear interpolation, but shows memory
> > error with quadratic. I am a nov
On 23 January 2013 14:28, Isaac Won wrote:
> On Wednesday, January 23, 2013 4:08:13 AM UTC-6, Oscar Benjamin wrote:
>
> To Oscar
> My actual error message is:
> File
> "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py",
> line 311, in __init__
> se
On Wednesday, January 23, 2013 4:08:13 AM UTC-6, Oscar Benjamin wrote:
> On 23 January 2013 08:55, Ulrich Eckhardt
> > On 23.01.2013 05:06, Isaac Won wrote:
> >> I have tried to use different interpolation methods with Scipy. My
> >> code seems just fine with linear interpol
On Wednesday, January 23, 2013 4:08:13 AM UTC-6, Oscar Benjamin wrote:
> On 23 January 2013 08:55, Ulrich Eckhardt wrote:
> > On 23.01.2013 05:06, Isaac Won wrote:
> >> I have tried to use different interpolation methods with Scipy. My
> >> code seems just fine with linear i
On Tuesday, January 22, 2013 10:06:41 PM UTC-6, Isaac Won wrote:
> Hi all,
>
> I have tried to use different interpolation methods with Scipy. My code seems
> just fine with linear interpolation, but shows memory error with quadratic. I
> am a novice for python. I will appreciate any help.
On 23 January 2013 08:55, Ulrich Eckhardt
wrote:
> On 23.01.2013 05:06, Isaac Won wrote:
>
>> I have tried to use different interpolation methods with Scipy. My
>> code seems just fine with linear interpolation, but shows memory
>> error with quadratic. I am a novice for python. I will appreciat
On 23.01.2013 05:06, Isaac Won wrote:
I have tried to use different interpolation methods with Scipy. My
code seems just fine with linear interpolation, but shows memory
error with quadratic. I am a novice for python. I will appreciate any
help.
>
#code
f = open(filin, "r")
Check out the "w
Andrew Robinson r3dsolutions.com> writes:
>
> When Python3.2 is running, is there an easy way within Python to capture
> the *total* amount of heap space the program is actually using (eg:real
> memory)?
I'm not sure what you mean with "real memory" or how precise you want that
measurement t
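Two standard-library ways to approximate that from inside the process (a sketch, not a definitive answer to the question): resource.getrusage for the peak resident set size (Unix only; units differ by platform), and tracemalloc for Python-level allocations, though tracemalloc only arrived in 3.4, after the 3.2 mentioned above.
```
import resource

# Peak resident set size of the current process.
# On Linux ru_maxrss is in KiB; on macOS it is in bytes.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print("peak RSS:", peak)

# Python 3.4+: tracemalloc tracks memory allocated for Python objects.
import tracemalloc
tracemalloc.start()
data = ["x" * 100 for _ in range(10_000)]
current, peak_traced = tracemalloc.get_traced_memory()
print("traced bytes:", current, "peak:", peak_traced)
```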