'COMPLETED', 'REJECTED') ORDER BY t.msg_status DESC,
> t.input_sequence ;
>
> EXPLAIN EXECUTE my_query('2024-05-17 00:00:00', 202406020168279904);
>
> I have an explanation for what I suspected was a memory leak. It seems
that systemd reports cached memory,
On Mon, Jun 3, 2024 at 9:12 AM Greg Sabino Mullane
wrote:
> On Mon, Jun 3, 2024 at 6:19 AM Radu Radutiu wrote:
>
>> Do you have any idea how to further debug the problem?
>>
>
> Putting aside the issue of non-reclaimed memory for now, can you show us
> the actual query? The explain analyze you p
On Mon, Jun 3, 2024 at 6:19 AM Radu Radutiu wrote:
> Do you have any idea how to further debug the problem?
>
Putting aside the issue of non-reclaimed memory for now, can you show us
the actual query? The explain analyze you provided shows it doing an awful
lot of joins and then returning 14+ mi
Hello,
I have an out of memory problem after upgrading from postgresql 12 to 16.3.
I have identified one query that can reproduce the error on demand. Once
the query starts, it will eventually exhaust all RAM and swap until the OOM
killer stops postgresql.
The setup is as follows:
- One
On Thu, Nov 2, 2023 at 4:13 AM Avi Weinberg wrote:
> Hi experts,
>
>
>
> I'm using Patroni Postgres installation and noticed that twice already
> postgres crashed due to out of memory. I'm using logical replication with
> around 30-40 active subscribers on this
On Thu, 2023-11-02 at 09:12 +, Avi Weinberg wrote:
> I'm using Patroni Postgres installation and noticed that twice already
> postgres
> crashed due to out of memory. I'm using logical replication with around 30-40
> active subscribers on this machine. The machine has
Hi experts,
I'm using Patroni Postgres installation and noticed that twice already postgres
crashed due to out of memory. I'm using logical replication with around 30-40
active subscribers on this machine. The machine has 128GB but only 32GB is
allocated to Postgres. How can I kn
Hello,
The problem Joe spotted is that a variable name is misspelled in one
> place. It should (probably) be external_complete_id, not
> _external_complete_id.
Oh, copy-paste issue :-)
> > better solution proposal?
>
> I think you can make that clearer by using IS [NOT] DISTINCT FROM:
>
>
On 2022-08-05 17:47:02 +0200, Aleš Zelený wrote:
> the construct surprised me when I saw it in the function the first time, but
> it
> is correct and works as expected - it allows writing the function as SQL
> instead of PLPGSQL while it ensures that for a parameter null value it
> evaluates to tr
Hello,
the construct surprised me when I saw it in the function the first time,
but it is correct and works as expected - it allows writing the function as
SQL instead of PLPGSQL while it ensures that for a parameter null value it
evaluates to true instead of filtering the resultset.
What is the i
On 8/4/22 08:34, Aleš Zelený wrote:
SELECT ... simple join of two tables...
WHERE opd.id_data_provider = _id_data_provider
AND CASE WHEN _external_id IS NULL
THEN external_id IS NULL
ELSE external_id = _external_id
END
AND CASE W
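For reference, the CASE construct quoted above can be written more compactly with IS [NOT] DISTINCT FROM, as suggested elsewhere in the thread. A sketch reusing the thread's column and parameter names (the surrounding SELECT is elided, as in the original):

```sql
-- Same truth table as the CASE expression: a NULL _external_id matches
-- only rows whose external_id IS NULL; otherwise plain equality applies.
... WHERE opd.id_data_provider = _id_data_provider
      AND external_id IS NOT DISTINCT FROM _external_id
      AND external_complete_id IS NOT DISTINCT FROM _external_complete_id
```

One caveat: the planner generally cannot use a plain btree index for IS NOT DISTINCT FROM the way it can for `=`, so the two spellings may produce different plans.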
Hello,
thanks for the information and the link!
Ales
On Thu, 4 Aug 2022 at 1:05, Tom Lane wrote:
> Aleš Zelený writes:
> > CREATE OR REPLACE FUNCTION tescase (_id_data_provider integer,
> _external_id
> > bigint DEFAULT NULL::bigint, _external_complete_id character v
Aleš Zelený writes:
> CREATE OR REPLACE FUNCTION tescase (_id_data_provider integer, _external_id
> bigint DEFAULT NULL::bigint, _external_complete_id character varying
> DEFAULT NULL::character varying)
> RETURNS TABLE(id_gm bigint, id_opp bigint, id_opp_state integer)
> LA
Hello,
I did some testing and the result is that on pg12 there are no such
problems. Pg13 and Pg 14.3 tests will follow, but based on monitoring
process RssAnon memory, I've found a correlation: only processes run by a
single user are suffering from the memory allocation (it looks to be a
memory
x-- [ anon ]
ff60 4K r-x-- [ anon ]
The machine has 40GB of RAM. It is a dedicated DB server, only a single PG
instance is running there. Since we again run out of memory from my last
reply, now it looks healthy:
-bash-4.2$ psql -c "select count(*) from pg_stat_activit
On 7/19/22 10:54 AM, ghoostin...@mail.com wrote:
Again reply to list also.
Ccing list.
What should I do, and what are the requirements?
———-
The link explains what you should do.
What requirements are you talking about?
This is probably something that the Odoo community will be better able
to
using Odoo ERP v12 Community and I want to migrate to v14. I have a
table containing more than 5M rows, and after 8 hours of execution it shows
the message "out of memory", so I need your help
And we need information:
1) Detail how you are doing migration.
2) Are you doing this on the same machine?
3) OS and version.
On 7/19/22 06:16, ghoostin...@mail.com wrote:
Hello, I'm using Odoo ERP v12 Community and I want to migrate to v14. I have a
table containing more than 5M rows, and after 8 hours of execution it shows the
message "out of memory", so I need your help
And we need information:
1) Detail how you are doing migration.
2
Hello, I'm using Odoo ERP v12 Community and I want to migrate to v14. I have a
table containing more than 5M rows, and after 8 hours of execution it shows the
message "out of memory", so I need your help
ing
actual leaks :-(. You can only believe there's a leak if the
reported usage doesn't level off after reaching the vicinity of
your shared memory size.
So back to why you're getting these out-of-memory failures:
we still don't have much info about that. I wonder whether
rw--- [ anon ]
> 02d15000     504K rw--- [ anon ]
> 02d93000 934476K rw--- [ anon ]
> 7fd989776000 311304K rw--- [ anon ]
> 7fd9a8d75000 32772K rw--- [ anon ]
> 7fd9acb65000 20K rw--- [ anon ]
> 7fd9affc1000 372K rw--- [ anon
7fd9acb65000 20K rw--- [ anon ]
7fd9affc1000    372K rw--- [ anon ]
>From previous observation I know, that the process RssAnon memory grew over
time, sometimes there are some steps. Still, generally, the growth is
linear until the process finishes or we run out of memory, a
Aleš Zelený writes:
> after some time, I've found a process consuming over 1GB of memory:
> -bash-4.2$ grep RssAnon /proc/*/status | sort -nk2 | tail
> /proc/17048/status:RssAnon: 1053952 kB
> Here are memory contexts for PID 17048:
> TopMemoryContext: 422592 total in 14 blo
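The grep-and-sort shown above generalizes into a small watch script; a sketch assuming a Linux /proc (RssAnon appears in /proc/<pid>/status on kernels >= 4.5):

```shell
#!/bin/sh
# Rank processes by anonymous RSS (RssAnon, in kB), as in the quoted
# diagnostic; the last line printed is the biggest consumer.
grep RssAnon /proc/[0-9]*/status 2>/dev/null | sort -nk2 | tail

# Break down one suspect's memory accounting (PID 17048 in the thread;
# /proc/self is used here as a stand-in).
grep -E 'RssAnon|RssFile|RssShmem|VmSwap' /proc/self/status
```

Sampling this periodically (cron or a watch loop) is enough to show whether RssAnon for the suspect backends grows linearly, as the thread describes.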
Saying that, you should be able to downgrade safely as there are no
>> > changes in WAL format or such that would break things. Saying that,
>> > the corruption issue caused by CONCURRENTLY is something you'd still
>> > have to face.
>> >
>&
`git diff REL_14_3..REL_14_4 -- *.c`
> >
> > Saying that, you should be able to downgrade safely as there are no
> > changes in WAL format or such that would break things. Saying that,
> > the corruption issue caused by CONCURRENTLY is something you'd sti
by CONCURRENTLY is something you'd still
> have to face.
>
>
> Thanks, good to know that, we can use it for a test case, since we
> already hit the CONCURRENTLY bug on 14.3.
>
> > 2022-07-02 14:48:07 CEST [3930]: [3-1] user=,db=,host=,app=
> ERROR:
>>
>> Saying that, you should be able to downgrade safely as there are no
>> changes in WAL format or such that would break things. Saying that,
>> the corruption issue caused by CONCURRENTLY is something you'd still
>> have to face.
>>
>>
> Thanks, good
corruption issue caused by CONCURRENTLY is something you'd still
> have to face.
>
>
Thanks, good to know that, we can use it for a test case, since we already
hit the CONCURRENTLY bug on 14.3.
> 2022-07-02 14:48:07 CEST [3930]: [3-1] user=,db=,host=,app= ERROR: out of
> > m
t, you should be able to downgrade safely as there are no
changes in WAL format or such that would break things. Saying that,
the corruption issue caused by CONCURRENTLY is something you'd still
have to face.
> 2022-07-02 14:48:07 CEST [3930]: [3-1] user=,db=,host=,app= ERROR: out of
> m
;> 2022-07-02 14:48:07 CEST [3930]: [3-1] user=,db=,host=,app= ERROR: out
>> of memory
>> 2022-07-02 14:48:07 CEST [3930]: [4-1] user=,db=,host=,app= DETAIL:
>> Failed on request of size 152094068 in memory context
>> "TopTransactionContext".
>> 2022-07-0
Hi
It looks like a memory leak
ErrorContext: 8192 total in 1 blocks; 7928 free (5 chunks); 264 used
> Grand total: 1456224 bytes in 195 blocks; 378824 free (165 chunks);
> 1077400 used
> 2022-07-02 14:48:07 CEST [3930]: [3-1] user=,db=,host=,app= ERROR: out of
> memory
>
> On Jul 11, 2022, at 2:50 AM, Aleš Zelený wrote:
>
>
> Hello,
>
> we are experiencing out-of-memory issues after a Postgres upgrade from 14.3 to
> 14.4 on CentOS 7.
>
> OS: CentOS 7
>
> No OOM killer messages in the syslog. SWAP is disabled.
>
> -b
Hello,
we are experiencing out-of-memory issues after a Postgres upgrade from 14.3
to 14.4 on CentOS 7.
OS: CentOS 7
No OOM killer messages in the syslog. SWAP is disabled.
-bash-4.2$ sysctl -q vm.overcommit_memory
vm.overcommit_memory = 2
[root@sts-uat-pgsql100 ~]# swapon -vs
[root@sts-uat
Hi,
Thanks a lot for information.
Best regards,
Jana
-Original Message-
From: Magnus Hagander
Sent: Tuesday, April 6, 2021 3:23 PM
To: Mihalidesová Jana
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Upgrade from 11.3 to 13.1 failed with out of memory
On Tue, Apr 6, 2021
On Tue, Apr 6, 2021 at 3:08 PM Mihalidesová Jana
wrote:
>
> Open
>
>
> Hi,
>
>
>
> I have an approximately 560GB database and am trying to upgrade it from 11.3 to
> 13.1. I've successfully upgraded the dev, test and ref environments, but on the
> production pg_dump failed w
Hi,
I have an approximately 560GB database and am trying to upgrade it from 11.3 to
13.1. I've successfully upgraded the dev, test and ref environments, but on the
production pg_dump failed with out of memory. Yes, of course, the dev, test and
ref databases are much, much smaller than the production database.
We
>I am aware that the behavior is different from what we've seen last week but
>this is how it looks today.
>Anything we missed or did not do correctly?
Finally this can be reproduced quite easily by installing this extension:
https://de.osdn.net/projects/pgstoreplans/downloads/72297/pg_store_plan
>> Here is a new one with bt at the end:
>That's just showing the stack when the backend is idle waiting for input.
>We need to capture the stack at the moment when the "out of memory" error
>is reported (errfinish() should be the top of stack).
Then I don't.
We need to capture the stack at the moment when the "out of memory" error
is reported (errfinish() should be the top of stack).
>> libraries to have gotten linked into a Postgres backend. What
>> extensions are you using?
> These are the extensions in use:
> plpythonu
From: Tom Lane
Sent: Wednesday, July 29, 2020 17:05
To: Daniel Westermann (DWE)
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Out of memory with "create extension postgis"
"Daniel Westermann (DWE)" writes:
>> So this is what we got today. In the log file the
"Daniel Westermann (DWE)" writes:
> So this is what we got today. In the log file there is this:
> 2020-07-29 16:33:23 CEST 101995 ERROR: out of memory
> 2020-07-29 16:33:23 CEST 101995 DETAIL: Failed on request of size 8265691 in
> memory context "PortalContext"
>>>> "Daniel Westermann (DWE)" writes:
>>>>> The process eats all the available memory and finally dies:
>>>>> # create extension postgis;
>>>>> ERROR: out of memory
>>>>> DETAIL: Failed on request of size 8265
>>> "Daniel Westermann (DWE)" writes:
>>>> The process eats all the available memory and finally dies:
>>>> # create extension postgis;
>>>> ERROR: out of memory
>>>> DETAIL: Failed on request of size 8265691 in memory
"Daniel Westermann (DWE)" writes:
>> "Daniel Westermann (DWE)" writes:
>>> The process eats all the available memory and finally dies:
>>> # create extension postgis;
>>> ERROR: out of memory
>>> DETAIL: Failed on request of
ory and finally dies:
>> # create extension postgis;
>> ERROR: out of memory
>> DETAIL: Failed on request of size 8265691 in memory context "PortalContext".
>> Time: 773569.877 ms (12:53.570)
>Quite odd. There should have been a memory context dump written t
> # create extension postgis;
> ERROR: out of memory
> DETAIL: Failed on request of size 8265691 in memory context "PortalContext".
> Time: 773569.877 ms (12:53.570)
Quite odd. There should have been a memory context dump written to the
postmaster's stderr, can you show tha
available memory and finally dies:
# create extension postgis;
ERROR: out of memory
DETAIL: Failed on request of size 8265691 in memory context "PortalContext".
Time: 773569.877 ms (12:53.570)
Stats file from /proc:
# cat status
Name: psql
Umask: 0022
State: S (sleeping)
Tgid:
not sure this was clear from the stack
trace.
From: Michael Lewis [mailto:mle...@entrata.com]
Sent: Friday, 3 April 2020 18:30
To: David Day
Cc: Pavel Stehule ; Jan Strube
; pgsql-general@lists.postgresql.org
Subject: Re: Out of memory in big transactions after upgrade to 12.2
If you didn't turn it off, you have parallel workers on by default with
v12. If work_mem is set high, memory use may be much higher as each node in
a complex plan could end up executing in parallel.
Also, do you use a connection pooler such as pgbouncer or pgpool? What is
max_connections set to?
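The multiplication behind that warning is worth making explicit: every memory-hungry node (sort, hash) in every cooperating process can claim up to work_mem. A back-of-the-envelope sketch with illustrative numbers (none are taken from the thread):

```shell
#!/bin/sh
# Worst-case memory for one query under parallel execution:
# each sort/hash node in each process may allocate up to work_mem.
work_mem_mb=64     # hypothetical setting
workers=2          # max_parallel_workers_per_gather
nodes=4            # sort/hash nodes visible in EXPLAIN
processes=$((workers + 1))               # the workers plus the leader
total=$((work_mem_mb * processes * nodes))
echo "worst case: ${total} MB for this one query"
```

With these numbers the worst case is 768 MB for a single query; multiply by the number of concurrent queries (bounded by max_connections or the pooler's pool size) to see why a work_mem that was safe before parallel query can exhaust RAM after an upgrade.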
n kernel: [3026711.345000] [ 1485] 0 148569911
99 180224 159 0 accounts-daemon
...
Feb 19 19:06:53 akran kernel: [3026711.345345] Out of memory: Kill process
25591 (postgres) score 697 or sacrifice child
Feb 19 19:06:53 akran kernel: [3026711.346563] Killed proce
On Tue, Feb 18, 2020 at 12:10 PM Nagaraj Raj wrote:
>
> Below are the same configurations in the .conf file before and after the upgrade
>
> show max_connections; = 1743
> show shared_buffers = "4057840kB"
> show effective_cache_size = "8115688kB"
> show maintenance_work_mem = "259MB"
> show checkpoin
288)"" Filter: (((data -> 'info'::text) ->> 'status'::text)
= 'CLOSE'::text)"
Thanks,
On Tuesday, February 18, 2020, 09:59:37 AM PST, Tomas Vondra
wrote:
On Tue, Feb 18, 2020 at 05:46:28PM +, Nagaraj Raj wrote:
On Tue, Feb 18, 2020 at 05:46:28PM +, Nagaraj Raj wrote:
after upgrading Postgres from v9.6.9 to v9.6.11, the DB is running into out-of-memory
issues; no workload change before or after the upgrade.
spec: RAM 16GB, 4 vCore
Any bug reported like this or suggestions on how to fix this issue? I
Dear mailing list,
I am currently testing an application for which I previously used PostgreSQL 10
with the current PostgreSQL 11.2 release. During the tests I experienced out of
memory errors of my database which I could not explain, they seem to affect
tables containing large BYTEA data.
I
On 11 Mar 2019 at 06:44, Nanda Kumar wrote:
Hello Team,
Can you please help with the issue below. The error occurred when I ran a
select statement over a huge data volume.
Error Details :
Ran out of memory retrieving query results.
you should provide more details, for instance
Hello Team,
Can you please help with the issue below. The error occurred when I ran a
select statement over a huge data volume.
Error Details :
Ran out of memory retrieving query results.
Regards
Nanda Kumar.M
SmartStream Technologies
Eastland Citadel | 5th Floor | 102 Hosur Road
On 12 February 2019 17:20:09 CET, Vikas Sharma wrote:
>Hello All,
>
>I have a 4 node PostgreSQL 9.6 cluster with streaming replication. we
>encounter today the Out of Memory Error on the Master which resulted
>in
>All postres processes restarted and cluster recovered itsel
but I can see that out of memory happened and
the postmaster invoked the OOM killer.
Regards
Vikas Sharma
On Tue, 12 Feb 2019 at 16:39, Adrian Klaver
wrote:
> On 2/12/19 8:20 AM, Vikas Sharma wrote:
> > Hello All,
> >
> > I have a 4 node PostgreSQL 9.6 cluster with streaming replication. we
&g
On 2/12/19 8:20 AM, Vikas Sharma wrote:
Hello All,
I have a 4 node PostgreSQL 9.6 cluster with streaming replication. We
encountered the Out of Memory error today on the master, which resulted in
all postgres processes restarting; the cluster recovered itself. Please
let me know the best way to
Hello All,
I have a 4 node PostgreSQL 9.6 cluster with streaming replication. We
encountered the Out of Memory error today on the master, which resulted in
all postgres processes restarting; the cluster recovered itself. Please let
me know the best way to diagnose this issue.
The error seen in
Thanks Tom Lane for your answer
Same issue for pg_dump and LOB was also reported in
https://postgrespro.com/list/thread-id/2211186
Concerning the 25% waste of space we experienced with LOB:
We are using LOB because we save jpeg2000 images into the DB.
We display them as thumbnails in a 6x10 multi
JMLessard wrote:
> What about updates where the bytea does not change? Will a new copy of the
> bytea be made in the TOAST table, or will the new row point to the original
> bytea?
> > https://www.postgresql.org/docs/current/storage-toast.html says:
> > The TOAST management code is triggered only
Thanks to Daniel Verite, nice answer, really helpful :)
It summarizes what I have read in the doc and blogs.
What about updates where the bytea does not change? Will a new copy of the bytea
be made in the TOAST table, or will the new row point to the original bytea?
> https://www.postgresql.org/do
Jean-Marc Lessard wrote:
> Another area where LOBs hurt is storage. LOBs are broken up and stored in 2K
> pieces.
> Due to the block header, only three 2K pieces fit in an 8K block, wasting 25%
> of the space (in fact pgstattuple reports ~ 20%).
Yes. bytea stored as TOAST is sliced into pieces
On 11/14/2018 11:14 AM, Jean-Marc Lessard wrote:
Adrien Nayrat wrote:
> With 17 million LO, it could eat lot of memory ;)
Yes it does.
I did several tests and here are my observations.
First memory settings are:
shared_buffers = 3GB
work_mem = 32Mb
maintenance_work_mem = 1GB
effective_cache_siz
Jean-Marc Lessard writes:
> Would you recommend bytea over LOB considering that the max LOB size is well
> below 1GB?
Yes, probably. The reason that pg_dump has trouble with lots of small
BLOBs is the 9.0-era decision to treat BLOBs as independent objects
having their own owners, privilege att
Adrien Nayrat wrote:
> With 17 million LO, it could eat lot of memory ;)
Yes it does.
I did several tests and here are my observations.
First memory settings are:
shared_buffers = 3GB
work_mem = 32Mb
maintenance_work_mem = 1GB
effective_cache_size = 9MB
bytea_output = 'escape'
The largest LO is
Hello,
On 11/10/18 12:49 AM, Jean-Marc Lessard wrote:
> The dumped table is 0.5TB, 17 million rows and LOB uses about 99% of the
> space.
>
If I understand, you have 17 million Large Object?
I do not recall exactly and maybe I am wrong. But it seems pg_dump has to
allocate memory for each obje
% of
the space.
The pg_dump consumes the entire system memory and swap, then
terminates with out of memory error
Is it a bug or normal behavior?
If I do not include LOB in the dump, it works fine.
Here is the dump output:
C:\Users\Administrator> pg_dump -h localhost -Fc -a -b -t signatures
consumes the entire system memory and swap, then terminates
with out of memory error
Is it a bug or normal behavior?
If I do not include LOB in the dump, it works fine.
Here is the dump output:
C:\Users\Administrator> pg_dump -h localhost -Fc -a -b -t signatures -v >
d:\postgresql\s
with
out of memory error
Is it a bug or normal behavior?
If I do not include LOB in the dump, it works fine.
Here is the dump output:
C:\Users\Administrator> pg_dump -h localhost -Fc -a -b -t signatures -v >
d:\postgresql\sig.dmp
pg_dump: last built-in OID is 16383
...
pg_dump: reading row se
On 2018-09-28 07:23:59 +0200, Laurenz Albe wrote:
> Rob Sargent wrote:
> > > Christoph Moench-Tegeder wrote:
> > > > ## Laurenz Albe (laurenz.a...@cybertec.at):
> > > >
> > > > > vm.overcommit_memory = 2
> > > > > vm_overcommit_ratio = 100
> > > > >
> > > > > Linux commits (swap * overcommit_rati
Rob Sargent wrote:
> > Christoph Moench-Tegeder wrote:
> > > ## Laurenz Albe (laurenz.a...@cybertec.at):
> > >
> > > > vm.overcommit_memory = 2
> > > > vm_overcommit_ratio = 100
> > > >
> > > > Linux commits (swap * overcommit_ratio * RAM / 100),
> > >
> > > ^
> > >
> On Sep 27, 2018, at 3:45 PM, Laurenz Albe wrote:
>
> Christoph Moench-Tegeder wrote:
>> ## Laurenz Albe (laurenz.a...@cybertec.at):
>>
>>> vm.overcommit_memory = 2
>>> vm_overcommit_ratio = 100
>>>
>>> Linux commits (swap * overcommit_ratio * RAM / 100),
>>
>> ^
>>
Christoph Moench-Tegeder wrote:
> ## Laurenz Albe (laurenz.a...@cybertec.at):
>
> > vm.overcommit_memory = 2
> > vm_overcommit_ratio = 100
> >
> > Linux commits (swap * overcommit_ratio * RAM / 100),
>
> ^
> That should be a "+".
Yes; shame on me for
, and you want to know about it right now.
A good way to know about this kind of mistake is having stuff fail
hard, obviously and loudly (i.e. Out Of Memory). Do not try to save
some day by swapping: stuff is "sometimes slow" and if you really need
the swap, everything crawls to a halt anywa
greigwise wrote:
> All right.. one more thing here. Any suggestions for how to set overcommit
> on a postgres db server with 16 GB of RAM and no swap? I think I want
> vm.overcommit_memory = 2, as I understand that prevents the OOM killer from
> zapping me. Is 100% the right way to go for over
All right.. one more thing here. Any suggestions for how to set overcommit
on a postgres db server with 16 GB of RAM and no swap? I think I want
vm.overcommit_memory = 2, as I understand that prevents the OOM killer from
zapping me. Is 100% the right way to go for overcommit_ratio? Is there a
I think I figured it out:
vm.overcommit_memory = 2
vm.overcommit_ratio = 50
Only allows me to use 50% of my RAM... ugh! I have 16 GB, so when only 8 is
left, I start seeing OOM. Will increase this setting and see if it helps.
Thanks everyone for the help.
Greig
--
Sent from: http://www.pos
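Greig's observation matches the commit-limit formula discussed above: with vm.overcommit_memory=2, Linux caps allocations at roughly swap + RAM * overcommit_ratio / 100. A quick check with the numbers from his message:

```shell
#!/bin/sh
# vm.overcommit_memory=2: allocations are refused beyond approximately
#   CommitLimit = swap + RAM * overcommit_ratio / 100
ram_gb=16      # the box in the message
swap_gb=0      # no swap configured
ratio=50       # the default vm.overcommit_ratio
limit=$((swap_gb + ram_gb * ratio / 100))
echo "CommitLimit ~ ${limit} GB"
```

That yields 8 GB, exactly where the out-of-memory errors started; raising vm.overcommit_ratio (toward 100 on a swapless box) raises the limit accordingly.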
Tom Lane-2 wrote
> greigwise <
> greigwise@
> > writes:
>> Is it possible that the fact that my stack size is limited is what is
>> causing my issue?
>
> No. If you were hitting that limit you'd get a message specifically
> talking about stack.
>
> regards, tom lane
Well
greigwise writes:
> Is it possible that the fact that my stack size is limited is what is
> causing my issue?
No. If you were hitting that limit you'd get a message specifically
talking about stack.
regards, tom lane
There is also this:
-bash-4.2$ prlimit -p 6590
RESOURCE  DESCRIPTION           SOFT       HARD       UNITS
AS        address space limit   unlimited  unlimited  bytes
CORE      max core file size    0          unlimited  blocks
CPU       CPU time
n on
> > the server with the out of memory issues. I have a script which just
> > periodically dumps the output of free -m to a text file.
> >
> > So, the output of free -m immediately before and after the out of memory
> > error looks like this:
> >
> > J
On Tue, 25 Sep 2018 11:34:19 -0700 (MST)
greigwise wrote:
> Well, I've been unsuccessful so far on creating a standalone test.
>
> I have put some scripting in place to capture some additional information on
> the server with the out of memory issues. I have a
Tom Lane-2 wrote
> greigwise <
> greigwise@
> > writes:
>> If I have nearly 8 GB of memory left, why am I getting out of memory
>> errors?
>
> Probably the postmaster is running under restrictive ulimit settings.
>
> regards, tom
greigwise writes:
> If I have nearly 8 GB of memory left, why am I getting out of memory errors?
Probably the postmaster is running under restrictive ulimit settings.
regards, tom lane
Well, I've been unsuccessful so far on creating a standalone test.
I have put some scripting in place to capture some additional information on
the server with the out of memory issues. I have a script which just
periodically dumps the output of free -m to a text file.
So, the outp
Well, we are 64-bit I'll see if I can make some kind of self contained
test to repeat it.
Thanks,
Greig
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
greigwise writes:
> Hello, I'm running postgres 9.6.10 on Centos 7. Seeing the occasional out
> of memory error trying to run a query. In the logs I see something like
> this:
> Grand total: 462104832 bytes in 795 blocks; 142439136 free (819860 chunks);
> 319665696 used
&
ulimit -a for the postgres user shows memory unlimited.
numactl --hardware gives command not found.
Thanks again.
Greig
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
My first two guesses are ulimit or numa. numactl --hardware will show you
how many nodes your box has and if you're exhausting any of them.
On Thu, Sep 20, 2018, 6:11 PM greigwise wrote:
> Hello, I'm running postgres 9.6.10 on Centos 7. Seeing the occasional out
> of memory
Hello, I'm running postgres 9.6.10 on Centos 7. Seeing the occasional out
of memory error trying to run a query. In the logs I see something like
this:
Grand total: 462104832 bytes in 795 blocks; 142439136 free (819860 chunks);
319665696 used
2018-09-20 18:08:01 UTC 5ba3e1a2.7a8a d
table "ofrrds" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: out of memory
DETAIL: Failed on request of size 1073741823.
pg_dump: The command was: COPY public.ofrrds (id, updateddate, bytes) TO
stdout;
There will be less memory pressure on the server if th
frrds" failed: PQgetResult() failed.
> pg_dump: Error message from server: ERROR: out of memory
> DETAIL: Failed on request of size 1073741823.
> pg_dump: The command was: COPY public.ofrrds (id, updateddate, bytes) TO
> stdout;
There will be less memory pressure on the server
ts of table "ofrrds" failed: PQgetResult() failed.
>pg_dump: Error message from server: ERROR: out of memory
>DETAIL: Failed on request of size 1073741823.
^^
pg_dump is trying to allocate 1GB. Obviously it can't if 1GB is
wildfire.backup wildfirep
g_dump: Dumping the contents of table "ofrrds" failed: PQgetResult()
failed.
pg_dump: Error message from server: ERROR: out of memory
DETAIL: Failed on request of size 1073741823.
pg_dump: The command was: COPY public.ofrrds (id, updateddate, bytes) TO
stdout;
I'
of table "ofrrds" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: out of memory
DETAIL: Failed on request of size 1073741823.
pg_dump: The command was: COPY public.ofrrds (id, updateddate, bytes) TO
stdout;
I'm not sure how to get this backup to run.
QgetResult()
failed.
pg_dump: Error message from server: ERROR: out of memory
DETAIL: Failed on request of size 1073741823.
pg_dump: The command was: COPY public.ofrrds (id, updateddate, bytes) TO
stdout;
I've been reducing my memory settings:
maintenance_work_mem = 80MB
work_mem = 5MB
shared_
ver: ERROR: out of memory
DETAIL: Failed on request of size 1073741823.
pg_dump: The command was: COPY public.ofrrds (id, updateddate, bytes) TO
stdout;
I've been reducing my memory settings:
maintenance_work_mem = 80MB
work_mem = 5MB
shared_buffers = 200MB
But it doesn't seem to make a
On Wednesday, 16 May 2018 at 09:48:54 UTC+2, ChatPristi wrote:
Dear all,
I have a SELECT command (on partitioned tables) that failed with:
psql:/tmp/query.txt:1: ERROR: out of memory
DETAIL: Cannot enlarge string buffer containing 1073741818 bytes by 32 more
bytes.
I got the error