Subject: Re: [GENERAL] ERROR: out of memory
On Thu, Apr 2, 2015 at 5:24 PM, Dzmitry Nikitsin wrote:
> Hey folks,
> I have 4 postgresql servers 9.3.6 (on master I use 9.3.5) configured with
> streaming replication - with 1 master (30GB RAM, processor - Intel Xeon E5-2680
> v2 ...
it's 4 different servers.
From: "David G. Johnston"
Date: Thursday, April 2, 2015 at 9:37 PM
To: Melvin Davidson
Cc: Bob Jones , "pgsql-general@postgresql.org"
Subject: Re: [GENERAL] ERROR: out of memory
On Thursday, April 2, 2015, Melvin Davidson wrote:
> Well, right off the bat, if your master shared_buffers = 7GB and 3 slaves
> shared_buffers = 10GB, that is 37GB total, which means you are guaranteed
> to exceed the 30GB physical limit on your machine.
>
I don't get why you are adding these together.
Well, right off the bat, if your master shared_buffers = 7GB and 3 slaves
shared_buffers = 10GB, that is 37GB total, which means you are guaranteed
to exceed the 30GB physical limit on your machine. The general recommendation
is to only allocate 1/4 of total memory for shared_buffers, so start by
cutting back ...
On Thu, Apr 2, 2015 at 5:24 PM, Dzmitry Nikitsin wrote:
> Hey folks,
> I have 4 postgresql servers 9.3.6 (on master I use 9.3.5) configured with
> streaming replication - with 1 master (30GB RAM, processor - Intel Xeon
> E5-2680 v2) and 3 slaves (61 Intel Xeon E5-2670 v2), all on Ubuntu 14.04.1
> LTS ...
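For reference, a minimal postgresql.conf sketch along the lines of the 1/4-of-RAM guideline discussed above, assuming a 30GB machine; the values are illustrative, not a recommendation from this thread:

    # shared_buffers: commonly ~1/4 of physical RAM; each server sizes this
    # against its own memory, so the master's and slaves' settings do not add up.
    shared_buffers = 7GB
    # effective_cache_size is only a planner hint about the OS cache, not an allocation.
    effective_cache_size = 22GB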
On 15/12/14 04:44, Andy Colson wrote:
On 12/13/2014 10:03 PM, wetter wetterana wrote:
Hi,
I'm passing rows from SAS to PostgreSQL (I assign a libname and use a
PROC APPEND). This works fine with smaller tables (~below 1 million
rows). However, as tables get larger I receive the following error messages ...
wetter wetterana writes:
> Help much appreciated!
The out-of-memory situation is definitely happening on the client side,
not the server side. A problem happening in the server would not result
in a message spelled quite that way, and it would not use an HY000 error
code either.
A plausible guess ...
On 12/14/2014 08:14 AM, wetter wetterana wrote:
Apologies,
I am using SAS, a statistical package/database management system. SAS
has the feature of connecting to a PostgreSQL server. It does so by
assigning what is called a libname (a 'library' connection telling SAS
that a particular folder is a data storage) ...
Apologies,
I am using SAS, a statistical package/database management system. SAS has
the feature of connecting to a PostgreSQL server. It does so by assigning
what is called a libname (a 'library' connection telling SAS that a
particular folder is a data storage). In this assignment, you specify
Ah! That would explain it.
Welp, this search was more helpful: "Out of memory while reading tuples"
http://stackoverflow.com/questions/22532149/vba-and-postgresql-connection
It says to include UseDeclareFetch=1 in the connect string, which sounds like it's
part of ODBC. How does SAS connect to PG?
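For anyone hitting the same wall, a rough sketch of what those psqlODBC options look like in an odbc.ini DSN; the DSN name, host, and database are made up here. UseDeclareFetch makes the driver read rows through a cursor in batches instead of buffering the whole result set on the client:

    # hypothetical DSN; option names are psqlODBC's, values are examples
    [pg_sas]
    Driver          = PostgreSQL Unicode
    Servername      = db.example.com
    Database        = analytics
    # stream rows through a cursor instead of buffering the whole result set
    UseDeclareFetch = 1
    # rows fetched per round trip
    Fetch           = 10000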
On 12/13/2014 10:03 PM, wetter wetterana wrote:
Hi,
I'm passing rows from SAS to PostgreSQL (I assign a libname and use a PROC
APPEND). This works fine with smaller tables (~below 1 million rows).
However, as tables get larger I receive the following error messages:
"ERROR: CLI describe er
On 12/13/2014 08:03 PM, wetter wetterana wrote:
Hi,
I'm passing rows from SAS to PostgreSQL (I assign a libname and use a
PROC APPEND). This works fine with smaller tables (~below 1 million
rows). However, as tables get larger I receive the following error
messages:
This will need some more
> Date: Fri, 22 Nov 2013 20:11:47 +0100
> Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of size
> ???
> From: t...@fuzzy.cz
> To: bwon...@hotmail.com
> CC: brick...@gmail.com; pgsql-general@postgresql.org
>
> On 19 November 2013, 5:30, Brian Wong wrote:
On 27 November 2013, 22:39, Brian Wong wrote:
>> Date: Fri, 22 Nov 2013 20:11:47 +0100
>> Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of
>> size ???
>> From: t...@fuzzy.cz
>> To: bwon...@hotmail.com
>> CC: brick...@gmail.com; pgsql-general@postgresql.org
November 18, 2013 7:25 PM
> To: "Brian Wong"
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of
> size ???
>
> On Mon, Nov 18, 2013 at 12:40 PM, Brian Wong wrote:
>
> We'd like to seek out your expertise ...
Hi,
On 22 November 2013, 20:09, Edson Richter wrote:
>
> Excuse me (or just ignore me) if it is a stupid question, but have you
> configured sysctl.conf accordingly?
> For instance, to use larger memory settings, I had to configure my EL as
> follows:
>
> # Controls the maximum shared segment size
... on the other end).
regards
Tomas
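For completeness, the kind of sysctl.conf entries being referred to might look like the sketch below (values illustrative, sized for a 16GB segment). This mainly matters for older releases: PostgreSQL 9.3 and later use mmap-based shared memory and need only a tiny System V segment.

    # maximum size of a single System V shared memory segment, in bytes
    kernel.shmmax = 17179869184
    # total shared memory the system may allocate, in 4kB pages
    kernel.shmall = 4194304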
>
> --- Original Message ---
>
> From: "bricklen"
> Sent: November 18, 2013 7:25 PM
> To: "Brian Wong"
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of
>
... when we tested the error there was no other
load whatsoever. Unfortunately, the error doesn't say what kind of
memory ran out.
--- Original Message ---
From: "bricklen"
Sent: November 18, 2013 7:25 PM
To: "Brian Wong"
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Brian Wong
Sent: Monday, November 18, 2013 11:30 PM
To: bricklen
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] ERROR: out of memory DETAIL: Failed on request of size
???
I've tried ...
Hello
I reported a similar problem a week ago - Postgres releases work_mem (assigned
for every SELECT in a union) only after the query finishes. So large SELECT UNION ALL
SELECT UNION ALL ... queries require a lot of memory. My customer reported
significant problems with 100 unions. He had to migrate to 64-bit pg ...
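A hedged illustration of the effect described above: work_mem is a per-sort/per-hash-node budget, so a plan with many such nodes (for example one per SELECT branch when each branch does its own grouping or sorting) can transiently use many multiples of it. Table and column names here are made up; SET LOCAL keeps the change scoped to one transaction:

    BEGIN;
    -- keep the per-node budget modest just for this query
    SET LOCAL work_mem = '32MB';
    -- each branch carries its own aggregate/sort node, and each such node
    -- may use up to work_mem, so peak memory scales with the number of branches
    SELECT user_id, count(*) FROM events_2013_01 GROUP BY user_id
    UNION ALL
    SELECT user_id, count(*) FROM events_2013_02 GROUP BY user_id
    UNION ALL
    SELECT user_id, count(*) FROM events_2013_03 GROUP BY user_id;
    COMMIT;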
On Mon, Nov 18, 2013 at 8:30 PM, Brian Wong wrote:
> I've tried any work_mem value from 1gb all the way up to 40gb, with no
> effect on the error. I'd like to think of this problem as a server process
> memory (not the server's buffers) or client process memory issue, primarily
> because when we tested the error there was no other load whatsoever.
On Mon, Nov 18, 2013 at 12:40 PM, Brian Wong wrote:
> We'd like to seek out your expertise on postgresql regarding this error
> that we're getting in an analytical database.
>
> Some specs:
> proc: Intel Xeon X5650 @ 2.67GHz, dual 6-core procs, hyperthreading on.
> memory: 48GB
> OS: Oracle Enterprise Linux ...
Replying to my own mail. Maybe we've found the root cause:
In one database there was a table with 200k records where each record
contained a 15kB bytea field. Auto-ANALYZE was running on that table
continuously (with statistics target 500). When we avoid the
auto-ANALYZE via UPDATE table set by ...
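If wide bytea columns are indeed what blows up ANALYZE, two hedged options (table and column names below are hypothetical) are to shrink the per-column statistics target or to make autovacuum analyze the table far less often:

    -- sample far fewer values of the wide column during ANALYZE
    ALTER TABLE big_docs ALTER COLUMN payload SET STATISTICS 10;
    -- and/or require many more row changes before auto-analyze kicks in
    ALTER TABLE big_docs SET (autovacuum_analyze_scale_factor = 0.5);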
Jakub Ouhrabka writes:
>>> They clearly were: notice the reference to "Autovacuum context" in the
>>> memory map. I think you are right to suspect that auto-analyze was
>>> getting blown out by the wide bytea columns. Did you have any
>>> expression indexes involving those columns?
> Yes, there
> They clearly were: notice the reference to "Autovacuum context" in the
> memory map. I think you are right to suspect that auto-analyze was
> getting blown out by the wide bytea columns. Did you have any
> expression indexes involving those columns?
Yes, there are two unique btree indexes:
(
Jakub Ouhrabka writes:
> Could it be that the failed connections were issued by autovacuum?
They clearly were: notice the reference to "Autovacuum context" in the
memory map. I think you are right to suspect that auto-analyze was
getting blown out by the wide bytea columns. Did you have any
expression indexes involving those columns?
2010/11/8 Jakub Ouhrabka :
> Replying to my own mail. Maybe we've found the root cause:
>
> In one database there was a table with 200k records where each record
> contained a 15kB bytea field. Auto-ANALYZE was running on that table
> continuously (with statistics target 500). When we avoid the auto
> Date: Mon, 8 Nov 2010 20:05:23 +0100
> From: k...@comgate.cz
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] ERROR: Out of memory - when connecting to database
>
> Replying to my own mail. Maybe we've found the root cause:
>
> In one database there
what's the work_mem?
64MB
that's *way* too much with 24GB of RAM and > 1k connections. Please
lower it to 32MB or even less.
Thanks for your reply. You are generally right. But in our case most of
the backends are only waiting for notify, so they are not consuming any work_mem.
The server is not swapping ...
On Mon, Nov 08, 2010 at 08:04:32PM +0100, Jakub Ouhrabka wrote:
> > is it 32bit or 64bit machine?
>
> 64bit
>
> > what's the work_mem?
>
> 64MB
that's *way* too much with 24GB of RAM and > 1k connections. Please
lower it to 32MB or even less.
Best regards,
depesz
--
Linkedin: http://www.lin
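The arithmetic behind that advice: work_mem is a per-sort/per-hash allowance, so with more than 1,000 backends a 64MB setting permits on the order of 64GB in the worst case, well beyond 24GB of RAM. One hedged way to keep the default low while still giving heavy queries room (the role name is invented):

    -- keep the cluster-wide default modest in postgresql.conf: work_mem = 32MB
    -- then raise it only for the roles that actually run big sorts
    ALTER ROLE reporting SET work_mem = '256MB';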
Replying to my own mail. Maybe we've found the root cause:
In one database there was a table with 200k records where each record
contained a 15kB bytea field. Auto-ANALYZE was running on that table
continuously (with statistics target 500). When we avoid the
auto-ANALYZE via UPDATE table set by ...
> is it 32bit or 64bit machine?
64bit
> what's the work_mem?
64MB
Kuba
On 8.11.2010 19:52, hubert depesz lubaczewski wrote:
On Mon, Nov 08, 2010 at 07:19:43PM +0100, Jakub Ouhrabka wrote:
Hi,
we have several instances of the following error in the server log:
2010-11-08 18:44:18 CET 5177 1 @
On Mon, Nov 08, 2010 at 07:19:43PM +0100, Jakub Ouhrabka wrote:
> Hi,
>
> we have several instances of the following error in the server log:
>
> 2010-11-08 18:44:18 CET 5177 1 @ ERROR: out of memory
> 2010-11-08 18:44:18 CET 5177 2 @ DETAIL: Failed on request of size 16384.
>
> It's always the first ...
On Nov 16, 2007 1:48 AM, Anton <[EMAIL PROTECTED]> wrote:
> My machine has 2G RAM. And I want to make postgres utilize it...
You're trying to tune your database based on philosophy. Making
postgresql use all the RAM may or may not make your machine run
faster. The OS caches a lot of data for you, so ...
On Nov 16, 2007, at 1:48 AM, Anton wrote:
Hi.
I got an error when I try to VACUUM ANALYZE a table.
# VACUUM ANALYZE n_traf;
ERROR: out of memory
DETAIL: Failed on request of size 536870910.
In logfile:
TopMemoryContext: 33464512 total in 12 blocks; 10560 free (61 chunks);
33453952 used
TopTransactionContext: ...
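The 536870910-byte request is almost exactly 512MB, which suggests maintenance_work_mem (the memory VACUUM and ANALYZE try to allocate up front) is set around 512MB, more than this 2GB box can hand out in one piece at that moment. A hedged first thing to try:

    -- shrink the VACUUM/ANALYZE work area for this session only and retry
    SET maintenance_work_mem = '128MB';
    VACUUM ANALYZE n_traf;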
Tom, You bet. I'll give it a go and report back.
On Tue, Apr 25, 2006 at 08:39:46PM -0400, Tom Lane wrote:
> I've applied the attached patch to 8.1.*,
> but it could use more testing --- do you want to patch locally and
> confirm it's OK for you?
Wayne Conrad <[EMAIL PROTECTED]> writes:
> I've got a 7.4 database that gives postgres an "out of memory" error
> when restoring into a 32-bit build of 8.1, yet restores into a 64-bit
> build of 8.1.
> Filesystem: -1367351296 total in 361 blocks; 34704 free (305 chunks);
> -1367386000 used
Now that
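A hedged back-of-the-envelope reading of those negative numbers: they look like a 32-bit signed byte counter wrapping around, i.e. a single memory context holding on the order of 2.7GB, which a 32-bit process generally cannot allocate but a 64-bit build can:

    -- interpret the dump's -1367351296 as an unsigned 32-bit value
    SELECT 4294967296 - 1367351296 AS approx_bytes;   -- 2927616000, roughly 2.7 GiB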