On 7/19/22 10:54 AM, ghoostin...@mail.com wrote:
Again, please reply to the list as well. CCing the list.
What should I do, and what are the requirements?
———-
The link explains what you should do.
What requirements are you talking about?
This is probably something that the Odoo community will be better able
to answer.
On 7/19/22 9:56 AM, ghoostin...@mail.com wrote:
Please reply to the list as well. CCing the list.
> 1) Detail how you are doing the migration
1 - I'm using the OCA OpenUpgrade tool.
2 - Creating a new database and restoring my database into it (36 GB; some
tables have 5M+ rows).
3 - Executing an SQL script to uninstall
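(For context, an OpenUpgrade run for a single major-version step usually
looks roughly like the sketch below. The database name and config path are
placeholders, not taken from this thread, and v12 to v14 has to be done one
version at a time, 12 -> 13 -> 14.)

  $ git clone https://github.com/OCA/OpenUpgrade.git --branch 13.0
  $ cd OpenUpgrade
  $ ./odoo-bin --config=/etc/odoo/odoo.conf -d mydb --update all --stop-after-init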
On 7/19/22 06:16, ghoostin...@mail.com wrote:
Hello, I'm using Odoo ERP v12 Community and I want to migrate to v14. I have
tables containing more than 5M rows, and after 8 hours of execution it shows
the message "out of memory", so I need your help.
And we need information:
1) Detail how you are doing the migration.
2)
>I am aware that the behavior is different from what we've seen last week, but
>this is how it looks today.
>Anything we missed or did not do correctly?
Finally, this can be reproduced quite easily by installing this extension:
https://de.osdn.net/projects/pgstoreplans/downloads/72297/pg_store_plan
>> Here is a new one with bt at the end:
>That's just showing the stack when the backend is idle waiting for input.
>We need to capture the stack at the moment when the "out of memory" error
>is reported (errfinish() should be the top of stack).
Then I don't know what to do or how to do it. Here is a comp
"Daniel Westermann (DWE)" writes:
>> Umm ... you didn't issue a "bt" when you got to errfinish, so there's
>> no useful info here.
> Here is a new one with bt at the end:
That's just showing the stack when the backend is idle waiting for input.
We need to capture the stack at the moment when the "out of memory" error
is reported (errfinish() should be the top of stack).
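(A minimal gdb session for that, as an untested sketch: it assumes gdb and
PostgreSQL debug symbols are installed, and <PID> is the backend's process
id, e.g. from SELECT pg_backend_pid().)

  $ gdb -p <PID>
  (gdb) break errfinish      # stop whenever the backend reports an error
  (gdb) continue
  # ... now run the failing statement in the attached session ...
  (gdb) bt                   # when the breakpoint fires, capture the stack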
From: Tom Lane
Sent: Wednesday, July 29, 2020 17:05
To: Daniel Westermann (DWE)
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Out of memory with "create extension postgis"
"Daniel Westermann (DWE)" writes:
>> So this is what we got today. In the log file the
"Daniel Westermann (DWE)" writes:
> So this is what we got today. In the log file there is this:
> 2020-07-29 16:33:23 CEST 101995 ERROR: out of memory
> 2020-07-29 16:33:23 CEST 101995 DETAIL: Failed on request of size 8265691 in
> memory context "PortalContext".
> 2020-07-29 16:33:23 CEST 10
"Daniel Westermann (DWE)" writes:
> The process eats all the available memory and finally dies:
> # create extension postgis;
> ERROR: out of memory
> DETAIL: Failed on request of size 8265691 in memory context
> "PortalContext".
> Time: 773569.877 ms (12:53.570)
>
>>> "Daniel Westermann (DWE)" writes:
The process eats all the available memory and finally dies:
# create extension postgis;
ERROR: out of memory
DETAIL: Failed on request of size 8265691 in memory context
"PortalContext".
Time: 773569.877 ms (12:53.570)
>> There
"Daniel Westermann (DWE)" writes:
>> "Daniel Westermann (DWE)" writes:
>>> The process eats all the available memory and finally dies:
>>> # create extension postgis;
>>> ERROR: out of memory
>>> DETAIL: Failed on request of size 8265691 in memory context
>>> "PortalContext".
>>> Time: 773569.
>"Daniel Westermann (DWE)" writes:
>> we have a very strange behavior on PostgreSQL 12.3 when we try to create the
>> extension postgis. Postgres and postgis have both been installed from
>> packages:
>> ...
>> The process eats all the available memory and finally dies:
>> # create extension pos
"Daniel Westermann (DWE)" writes:
> we have a very strange behavior on PostgreSQL 12.3 when we try to create the
> extension postgis. Postgres and postgis have both been installed from
> packages:
> ...
> The process eats all the available memory and finally dies:
> # create extension postgis;
>
If you didn't turn it off, you have parallel workers on by default with
v12. If work_mem is set high, memory use may be much higher, as each node in
a complex plan could end up executing in parallel and each worker gets its
own work_mem allowance.
Also, do you use a connection pooler such as pgbouncer or pgpool? What is
max_connections set to?
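(To check those settings, something like this sketch should do; it assumes
you can run psql against the database in question, and <your query> is a
placeholder:)

  $ psql -c 'SHOW work_mem;'
  $ psql -c 'SHOW max_parallel_workers_per_gather;'
  $ psql -c 'SHOW max_connections;'
  # take parallelism out of the picture for a single test run:
  $ psql -c 'SET max_parallel_workers_per_gather = 0; EXPLAIN ANALYZE <your query>;'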
On 12 February 2019 17:20:09 CET, Vikas Sharma wrote:
>Hello All,
>
>I have a 4-node PostgreSQL 9.6 cluster with streaming replication. We
>encountered the out-of-memory error on the master today, which resulted
>in all postgres processes restarting and the cluster recovering itself.
>Please let me know the best way to
Thank you, Adrian, for the reply. I did check the postgres processes running
around the time the OOM was invoked; there were lots of high-CPU postgres
processes running long-running selects.
I am not sure how to interpret the memory terms appearing in Linux
dmesg or /var/log/messages b
On 2/12/19 8:20 AM, Vikas Sharma wrote:
Hello All,
I have a 4-node PostgreSQL 9.6 cluster with streaming replication. We
encountered the out-of-memory error on the master today, which resulted in
all postgres processes restarting and the cluster recovering itself. Please
let me know the best way to
On 2018-09-28 07:23:59 +0200, Laurenz Albe wrote:
> Rob Sargent wrote:
> > > Christoph Moench-Tegeder wrote:
> > > > ## Laurenz Albe (laurenz.a...@cybertec.at):
> > > >
> > > > > vm.overcommit_memory = 2
> > > > > vm.overcommit_ratio = 100
> > > > >
> > > > > Linux commits (swap * overcommit_rati
Rob Sargent wrote:
> > Christoph Moench-Tegeder wrote:
> > > ## Laurenz Albe (laurenz.a...@cybertec.at):
> > >
> > > > vm.overcommit_memory = 2
> > > > vm.overcommit_ratio = 100
> > > >
> > > > Linux commits (swap * overcommit_ratio * RAM / 100),
> > >
> > > ^
> > >
> On Sep 27, 2018, at 3:45 PM, Laurenz Albe wrote:
>
> Christoph Moench-Tegeder wrote:
>> ## Laurenz Albe (laurenz.a...@cybertec.at):
>>
>>> vm.overcommit_memory = 2
>>> vm.overcommit_ratio = 100
>>>
>>> Linux commits (swap * overcommit_ratio * RAM / 100),
>>
>> ^
>>
Christoph Moench-Tegeder wrote:
> ## Laurenz Albe (laurenz.a...@cybertec.at):
>
> > vm.overcommit_memory = 2
> > vm.overcommit_ratio = 100
> >
> > Linux commits (swap * overcommit_ratio * RAM / 100),
>
> ^
> That should be a "+".
Yes; shame on me for
## Laurenz Albe (laurenz.a...@cybertec.at):
> vm.overcommit_memory = 2
> vm.overcommit_ratio = 100
>
> Linux commits (swap * overcommit_ratio * RAM / 100),
                      ^
That should be a "+".
See Documentation/sysctl/vm.txt and Documentation/vm/overcommit-accounting
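(Spelled out with the corrected formula; the meminfo numbers below are
illustrative, not from this thread:)

  $ sudo sysctl -w vm.overcommit_memory=2
  $ sudo sysctl -w vm.overcommit_ratio=100
  $ grep -E 'CommitLimit|Committed_AS' /proc/meminfo
  CommitLimit:    16326656 kB   # swap + RAM * overcommit_ratio / 100
  Committed_AS:    8123456 kB   # address space committed so far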
greigwise wrote:
> All right.. one more thing here. Any suggestions for how to set overcommit
> on a postgres db server with 16 GB of RAM and no swap? I think I want
> vm.overcommit_memory = 2, as I understand that prevents the OOM killer from
> zapping me. Is 100% the right way to go for over
All right.. one more thing here. Any suggestions for how to set overcommit
on a postgres db server with 16 GB of RAM and no swap? I think I want
vm.overcommit_memory = 2, as I understand that prevents the OOM killer from
zapping me. Is 100% the right way to go for overcommit_ratio? Is there a
I think I figured it out:
vm.overcommit_memory = 2
vm.overcommit_ratio = 50
Only allows me to use 50% of my RAM... ugh! I have 16 GB, so when only 8 is
left, I start seeing OOM. Will increase this setting and see if it helps.
Thanks everyone for the help.
Greig
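(For the archives, the arithmetic behind that, using the corrected formula
from earlier in the thread, with no swap and 16 GB of RAM:)

  # CommitLimit = swap + RAM * overcommit_ratio / 100
  # ratio 50:   0 + 16 GB *  50 / 100 =  8 GB   <- the limit that was hit
  # ratio 100:  0 + 16 GB * 100 / 100 = 16 GB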
Tom Lane-2 wrote:
> greigwise <greigwise@...> writes:
>> Is it possible that the fact that my stack size is limited is what is
>> causing my issue?
>
> No. If you were hitting that limit you'd get a message specifically
> talking about stack.
>
> regards, tom lane
Well
greigwise writes:
> Is it possible that the fact that my stack size is limited is what is
> causing my issue?
No. If you were hitting that limit you'd get a message specifically
talking about stack.
regards, tom lane
There is also this:
-bash-4.2$ prlimit -p 6590
RESOURCE  DESCRIPTION          SOFT       HARD       UNITS
AS        address space limit  unlimited  unlimited  bytes
CORE      max core file size   0          unlimited  blocks
CPU       CPU time
On Tue, Sep 25, 2018 at 2:05 PM PT wrote:
> On Tue, 25 Sep 2018 11:34:19 -0700 (MST)
> greigwise wrote:
>
> > Well, I've been unsuccessful so far on creating a standalone test.
> >
> > I have put some scripting in place to capture some additional
> > information on the server with the out of m
On Tue, 25 Sep 2018 11:34:19 -0700 (MST)
greigwise wrote:
> Well, I've been unsuccessful so far on creating a standalone test.
>
> I have put some scripting in place to capture some additional information on
> the server with the out of memory issues. I have a script which just
> periodically dumps the output of free -m to a text file.
Tom Lane-2 wrote:
> greigwise <greigwise@...> writes:
>> If I have nearly 8 GB of memory left, why am I getting out of memory
>> errors?
>
> Probably the postmaster is running under restrictive ulimit settings.
>
> regards, tom lane
If I log in as the user which runs pos
greigwise writes:
> If I have nearly 8 GB of memory left, why am I getting out of memory errors?
Probably the postmaster is running under restrictive ulimit settings.
regards, tom lane
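(One way to verify that is to look at the limits of the running postmaster
rather than your login shell; a sketch, assuming the first line of
$PGDATA/postmaster.pid holds the postmaster's PID:)

  $ PID=$(head -1 "$PGDATA/postmaster.pid")
  $ cat "/proc/$PID/limits"   # can differ from 'ulimit -a' in your shell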
Well, I've been unsuccessful so far on creating a standalone test.
I have put some scripting in place to capture some additional information on
the server with the out of memory issues. I have a script which just
periodically dumps the output of free -m to a text file.
So, the output of fr
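(Such a capture script can be as simple as this sketch; the log path and
interval are arbitrary:)

  $ while true; do date; free -m; sleep 60; done >> /var/tmp/free.log &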
Well, we are 64-bit. I'll see if I can make some kind of self-contained
test to repeat it.
Thanks,
Greig
greigwise writes:
> Hello, I'm running postgres 9.6.10 on CentOS 7. Seeing the occasional out
> of memory error trying to run a query. In the logs I see something like
> this:
> Grand total: 462104832 bytes in 795 blocks; 142439136 free (819860 chunks);
> 319665696 used
> 2018-09-20 18:08:01
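(That "Grand total" line is the tail of the per-context memory dump
PostgreSQL writes to stderr when an allocation fails. The same dump can be
forced on a live backend, as an untested sketch; <PID> is the backend in
question, and the output lands in the server log:)

  $ gdb --batch -p <PID> -ex 'call MemoryContextStats(TopMemoryContext)'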
ulimit -a for the postgres user shows memory unlimited.
numactl --hardware gives command not found.
Thanks again.
Greig
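(On CentOS 7 that tool ships in the numactl package, assuming standard yum
repositories:)

  $ sudo yum install -y numactl
  $ numactl --hardware   # lists NUMA nodes and memory size/free per node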
My first two guesses are ulimit or NUMA. numactl --hardware will show you
how many nodes your box has and whether you're exhausting any of them.
On Thu, Sep 20, 2018, 6:11 PM greigwise wrote:
> Hello, I'm running postgres 9.6.10 on Centos 7. Seeing the occasional out
> of memory error trying to r
On Wednesday, 16 May 2018 at 09:48:54 UTC+2, ChatPristi
wrote:
Dear all,
I have a SELECT command (on partitioned tables) that failed with:
psql:/tmp/query.txt:1: ERROR: out of memory
DETAIL: Cannot enlarge string buffer containing 1073741818 bytes by 32 more
bytes.
I got the error with PG 10.3 and 10.4 on CentOS 7 up-to-date.
Forgot to CC the list too, sorry, again...
>1073741818 is a bit less than 1GB and 1073741818+32 is a bit more. So
>you are obviously hitting a 1GB limit here.
>Given that 1GB is the maximum length of a character type value in
>PostgreSQL and the error message mentions a "string buffer", I sus
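(The 1GB cap on a single value is easy to bump into deliberately; a sketch,
and the exact error text differs across versions:)

  $ psql -c "SELECT repeat('x', 1100*1000*1000);"
  ERROR:  invalid memory alloc request size ...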
Forgot to CC the list, sorry...
>Well, instead of an explain output which takes 2.4MB compressed and
>9.6MB uncompressed (take it as unreadable), could you produce a
>self-contained test case with a glimpse of the schema you are using?
>Where does the OOM happen, and how did you change your pa
On 2018-05-15 15:02:48 +0000, ChatPristi wrote:
> I have a SELECT command (on partitioned tables) that failed with:
> psql:/tmp/query.txt:1: ERROR: out of memory
> DETAIL: Cannot enlarge string buffer containing 1073741818 bytes by 32 more
> bytes.
1073741818 is a bit less than 1GB and 10737418
On Tue, May 15, 2018 at 03:02:48PM +0000, ChatPristi wrote:
> I got the error with PG 10.3 and 10.4 on CentOS 7 up-to-date. The
> command works with a smaller-size database.
> The command works with the same database with PG 9.3.19 on RHEL 6.9
> up-to-date.
>
> I attach the EXPLAIN SELECT command.