Johann Spies writes:
> On 25 August 2017 at 13:48, Tom Lane wrote:
>> Remember that "work_mem" is "work memory per plan node", so a complex
>> query could easily chew up a multiple of that number --- and that's
>> with everything going according to plan. If, say, the planner
>> underestimates th
On 25 August 2017 at 13:48, Tom Lane wrote:
> How complex is "complex"? I can think of two likely scenarios:
> 1. You've stumbled across some kind of memory-leak bug in Postgres.
> 2. The query's just using too much memory. In this connection, it's
> not good that you've got
>> work_mem = 2GB
>
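A minimal sketch of the per-session override being discussed (the view name is taken from the thread below; 256MB is only an illustrative value, not a recommendation):
SET work_mem = '256MB';    -- instead of the global 2GB; each sort/hash node may use up to this much
REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv;
RESET work_mem;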
Johann Spies writes:
> While restoring a dump from our development server (768GB RAM) to the
> production server (PG 9.6.3 on Debian Stretch with 128GB RAM), the
> refreshing of a Materialized View fails like this:
> [local] js@wos=# REFRESH MATERIALIZED VIEW wos_2017_1.citation_window_mv ;
> server
## Johann Spies (johann.sp...@gmail.com):
> --
> 2017-08-24 19:23:26 SAST [7532-18] LOG: server process (PID 4890) was
> terminated by signal 9: Killed
That looks like out-of-memory. Check the kernel log/dmesg to verify.
If it's the dreaded OOM-killer, you should check your over
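A short sketch of those checks on a typical Linux box:
dmesg | grep -i -E 'out of memory|killed process'   # confirm the OOM killer fired
cat /proc/sys/vm/overcommit_memory                  # 0 = heuristic overcommit, 2 = strict accounting
cat /proc/sys/vm/overcommit_ratio                   # only consulted when overcommit_memory = 2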
Chris Roberts writes:
> Would someone tell me why I am seeing the following Postgres logs?
> 07:56:20 EST LOG: 08P00: incomplete message from client
> 07:56:20 EST LOCATION: pq_getmessage, src\backend\libpq\pqcomm.c:1143
> 07:56:20 EST ERROR: 54000: out of memory
> 07:56:20 EST DETAIL: Cannot
Chris Mair writes:
> ...
> Interestingly, if you combine these, it quickly blows up! The following query
> with a limit 1000 already
> has a RES of well over 1GB. With larger limits it quickly thrashes my machine.
> enrico=# explain analyze
> SELECT substring((field_id ->'comment')::text,1,1),
>
>> https://drive.google.com/file/d/0ByfjZX4TabhocUg2MFJ6a21qS2M/view?usp=sharing
> Note: due to an error in the dump script, if you are in a Linux/Unix environment, use
> this command for uncompressing the file:
>
> bzip2 -d -c comment_test.dump.bz2 |sed -e '12d' > comment_test.dump
Hi,
I've played a
On 01/16/2015 11:22 AM, Enrico Bianchi wrote:
https://drive.google.com/file/d/0ByfjZX4TabhocUg2MFJ6a21qS2M/view?usp=sharing
Note: due to an error in the dump script, if you are in a Linux/Unix environment,
use this command for uncompressing the file:
bzip2 -d -c comment_test.dump.bz2 |sed -e '12d' > c
On 01/16/2015 09:58 AM, Enrico Bianchi wrote:
I've asked permission for these data
I've obtained the permission; a subset of data large enough to replicate
the problem is available here (note: you can simply run the query
without the where clause):
https://drive.google.com/file/d/0ByfjZX4Tabh
On 01/16/2015 01:19 AM, John R Pierce wrote:
you didn't do EXPLAIN ANALYZE, so your query plan statistics are all
estimates.
I know, but the EXPLAIN ANALYZE has the same problem as the query
Enrico
On 01/16/2015 02:18 AM, Tom Lane wrote:
Can we see the map?
This is the log when executing the query with a subset of data:
< 2015-01-16 08:47:43.517 GMT >DEBUG: StartTransactionCommand
< 2015-01-16 08:47:43.517 GMT >DEBUG: StartTransaction
< 2015-01-16 08:47:43.517 GMT >DEBUG: name: unnamed
Enrico Bianchi writes:
> When I launch a query (the principal field is JSONb), the database
> return this:
> ERROR: out of memory
> DETAIL: Failed on request of size 110558.
That error should be associated with a memory usage map getting dumped to
postmaster stderr, where hopefully your loggin
On 1/15/2015 3:17 PM, Enrico Bianchi wrote:
When I launch a query (the principal field is JSONb), the database
return this:
ERROR: out of memory
DETAIL: Failed on request of size 110558.
it looks like your query is trying to return 7 million rows, although
you didn't do EXPLAIN ANALYZE, s
Carlos Henrique Reimer writes:
> Extracted ulimits values from postmaster pid and they look as expected:
> [root@2-NfseNet ~]# cat /proc/2992/limits
> Limit                     Soft Limit     Hard Limit     Units
> Max address space         102400         unlimited      bytes
So you'v
So if you watch processes sorted by memory in top or htop, can you see
your machine running out of memory? Do you have enough swap if needed?
48G is pretty small for a modern pgsql server
with as much data and tables as you have, so I'd assume you have
plenty of swap just in ca
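A sketch of that kind of monitoring, assuming a reasonably recent procps:
free -h                          # overall RAM and swap usage
ps aux --sort=-rss | head -15    # processes sorted by resident memory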
Extracted ulimits values from postmaster pid and they look as expected:
[root@2-NfseNet ~]# ps -ef | grep /postgres
postgres  2992     1  1 Nov30 ?        03:17:46 /usr/local/pgsql/bin/postgres -D /database/dbcluster
root     26694  1319  0 18:19 pts/0    00:00:00 grep /postgres
[root@2-N
Carlos Henrique Reimer writes:
> Yes, all lines of /etc/security/limits.conf are commented out and session
> ulimit -a indicates the defaults are being used:
I would not trust "ulimit -a" executed in an interactive shell to be
representative of the environment in which daemons are launched ...
ha
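One way to inspect the environment the daemon was actually launched with (the data directory path is taken from the ps output elsewhere in this thread):
head -1 /database/dbcluster/postmaster.pid                      # first line is the postmaster PID
cat /proc/$(head -1 /database/dbcluster/postmaster.pid)/limits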
Yes, all lines of /etc/security/limits.conf are commented out and session
ulimit -a indicates the defaults are being used:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pen
On Thu, Dec 11, 2014 at 12:05 PM, Carlos Henrique Reimer
wrote:
> That was exactly what the process was doing and the out of memory error
> happened while one of the merges to set 1 was being executed.
You sure you don't have a ulimit getting in the way?
That was exactly what the process was doing and the out of memory error
happened while one of the merges to set 1 was being executed.
On Thu, Dec 11, 2014 at 4:42 PM, Vick Khera wrote:
>
> On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane wrote:
>
>> needed to hold relcache entries for all 23000 table
On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane wrote:
> needed to hold relcache entries for all 23000 tables :-(. If so there
> may not be any easy way around it, except perhaps replicating subsets
> of the tables. Unless you can boost the memory available to the backend
>
I'd suggest this. Break
Slony version is 2.2.3
On Thu, Dec 11, 2014 at 3:29 PM, Scott Marlowe
wrote:
> Just wondering what slony version you're using?
>
--
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
Just wondering what slony version you're using?
Hi,
Yes, I agree, 8.3 has been out of support for a long time and this is the reason
we are trying to migrate to 9.3 using SLONY to minimize downtime.
I eliminated the possibility of data corruption as the limit/offset
technique indicated different rows each time it was executed. Actually, the
failure
Carlos Henrique Reimer writes:
> I've been facing an out of memory condition after running SLONY for several hours to
> get a 1TB database with about 23,000 tables replicated. The error occurs
> after about 50% of the tables were replicated.
I'd try bringing this up with the Slony crew.
> I guess postgre
I was reading into the parameter a little more and it appears that the
default for vm.overcommit_ratio is 50%. I am considering bumping this
up to 95% so the sums look like this:
max memory allocation for process = swap + ratio of physical memory
21 + (16 * 0.95) = 36.2GB
This in theory sho
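A sketch of applying and verifying that change (values follow the post; persist them in /etc/sysctl.conf if they work out):
sysctl -w vm.overcommit_memory=2
sysctl -w vm.overcommit_ratio=95
grep -E 'CommitLimit|Committed_AS' /proc/meminfo   # CommitLimit ~= swap + RAM * overcommit_ratio/100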
Hi,
On 16/06/2014 14:15, Andres Freund wrote:
Hi,
On 2014-06-16 13:56:23 +0100, Bruce McAlister wrote:
[1] 3 x ESX VM's
[a] 8 vCPU's each
[b] 16GB memory each
# Don't hand out more memory than necessary
vm.overcommit_memory = 2
So you haven't tuned overcom
Hi,
On 2014-06-16 13:56:23 +0100, Bruce McAlister wrote:
> [1] 3 x ESX VM's
> [a] 8 vCPU's each
> [b] 16GB memory each
> # Don't hand out more memory than necessary
> vm.overcommit_memory = 2
So you haven't tuned overcommit_ratio at all? Can you show
/proc/memin
I wanted to answer this for you but I didn't see a reply button on the site.
In pgadmin,
it's File ==> Options ==> Query tool ==> History file ==> default is 1024.
Try 4096 if you have more than 8G on your PC.
On 04.03.2013 18:25, "Merlin Moncure" wrote:
>
> On Sun, Mar 3, 2013 at 11:05 AM, G N wrote:
> > Hello Friends,
> >
> > Hope you are all well...
> >
> > I have a specific issue, where my query fails with below error while
trying
> > to export data from pgadmin SQL tool.
> >
> > There
On Sun, Mar 3, 2013 at 11:05 AM, G N wrote:
> Hello Friends,
>
> Hope you are all well...
>
> I have a specific issue, where my query fails with below error while trying
> to export data from pgadmin SQL tool.
>
> There are no such issues when the result set is small. But it returns error
> when
Eelke Klein writes:
> In a database of one of our customers we sometimes get out of memory
> errors. Below I have copy pasted one of these very long messages.
> The error doesn't always occur, when I copy paste the query and run it
> manually it works.
The memory map doesn't look out of the ordin
"Dara Olson" writes:
> This is the first 1/3 of the errors, so hopefully this will help diagnose
> where my problem may be. Any help would be greatly appreciated.
Well, you didn't show us the error that caused a COPY to fail, but it's
pretty obvious that you're attempting to load the dump into
LY spatial_ref_sys
ADD CONSTRAINT spatial_ref_sys_pkey PRIMARY KEY (srid);
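A hedged sketch of re-running such a restore so the first real failure is visible instead of the cascade of follow-on errors (file and database names are placeholders):
psql -v ON_ERROR_STOP=1 -f dumpall.sql postgres 2>&1 | tail -20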
This is the first 1/3 of the errors, so hopefully this will help diagnose where
my problem may be. Any help would be greatly appreciated.
Thank you in advance.
Dara
- Original Message -
From: Tom Lane
T
"Dara Olson" writes:
> I am attempting to create an exact copy of our production database/cluster on
> a different server for development. I created a dumpall file which is 8.7GB.
> When I attempt to run this in psql on the new server it seems okay and then I
> got a string of "invalid command
Mark Priest writes:
> However, I am still curious as to why I am getting an out of memory
> error. I can see how the performance might be terrible on such a
> query but I am surprised that postgres doesn't start using the disk at
> some point to reduce memory usage. Could it be that postgres tr
Thanks, Craig.
There are no triggers on the tables and the only constraints are the
primary keys.
I am thinking that the problem may be that I have too many full self
joins on the simple_group table. I am probably getting a
combinatorial explosion when postgres does cross joins on all the
deriv
Mark Priest writes:
> I am getting an Out of Memory error in my server connection process
> while running a large insert query.
> Postgres version: "PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
> GCC gcc.exe (GCC) 3.4.2 (mingw-special)"
> OS: Windows 7 Professional (v.6.1, build 7601 service
On 10/18/2011 02:52 PM, Mark Priest wrote:
I am getting an Out of Memory error in my server connection process
while running a large insert query.
Postgres version: "PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
GCC gcc.exe (GCC) 3.4.2 (mingw-special)"
OS: Windows 7 Professional (v.6.1, buil
On Aug 31, 2011, at 10:52 AM, Don wrote:
> I had always thought that a 32bit machine could access up to 4GB.
> So what is the limiting factor ?
- Half of your memory space may be given over to memory-mapped I/O. Now you're
down to 2GB.
- Your process's executable, plus any libraries it uses, pl
The server is 64 bit and client is 32 bit... I tried the select
* from table on the server and the query worked...
but I am puzzled why it does not work on the 32bit machine. I had
always thought that a 32bit machine could access up to 4GB.
So what is the limiting fac
Hello
2011/8/31 Don :
> Pavel...
>
> Thanks for the reply...
>
> This still did not solve the issue. It seems odd that a simple select
> command in psql accessing 32MB of records should cause a problem. I have
> tables much larger than this and may want to access them the same way.
>
so there a
On Aug 31, 2011, at 9:51 AM, Don wrote:
> Both machines are 64bit.
Are all your server & client builds 64-bit?
32M rows, unless the rows are <50 bytes each, you'll never be able to
manipulate that selection in memory with a 32-bit app.
--
Scott Ribe
scott_r...@elevated-dev.com
http://www.ele
Pavel...
Thanks for the reply...
This still did not solve the issue. It seems odd that a simple select
command in psql accessing 32MB of records should cause a problem. I
have tables much larger than this and may want to access them the same way.
I have 24 GB RAM on the server and 32GB RAM
On 08/30/11 7:28 AM, Don wrote:
I am trying a simple access of a table and get an out of memory
error. How do I avoid this issue? It seems I have some configuration
set wrong.
Our system has 24GB of memory and is dedicated to the postgres database.
Background information
aquarec=> explain
Hello
if the table is large, then the client can raise this exception too.
Try setting FETCH_COUNT to 1000:
http://www.postgresql.org/docs/8.4/interactive/app-psql.html
Regards
Pavel Stehule
2011/8/30 Don :
> I am trying a simple access of a table and get an out of memory error. How
> do I avoid this i
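A minimal psql sketch of that suggestion (the table name is a placeholder):
\set FETCH_COUNT 1000
SELECT * FROM big_table;   -- fetched via a cursor in 1000-row batches instead of buffered in full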
On 06/07/11 01:12, Geoffrey Myers wrote:
Wanted to add more specifics. Here is the actual code that generated the
error:
my $result = $conn->exec($select);
if ($result->resultStatus != PGRES_TUPLES_OK)
{
    $error = $conn->errorMessage;
    die "Error: <$error> Failed: <$select>";
}
That looks like
On 5/07/2011 11:12 PM, Geoffrey Myers wrote:
my $result = $conn->exec($select);
if ($result->resultStatus != PGRES_TUPLES_OK)
{
    $error = $conn->errorMessage;
    die "Error: <$error> Failed: <$select>";
}
So you're saying this select request failing would not be logged to the
postgres database log
Tom Lane wrote:
Geoffrey Myers writes:
Geoffrey Myers wrote:
out of memory for query result
One other note that is bothering me. There is no reference in the log
regarding the out of memory error. Should that not also show up in the
associated database log?
Not if it's a client-side er
Geoffrey Myers writes:
> Geoffrey Myers wrote:
>> out of memory for query result
> One other note that is bothering me. There is no reference in the log
> regarding the out of memory error. Should that not also show up in the
> associated database log?
Not if it's a client-side error.
(Whic
Craig Ringer wrote:
On 3/07/2011 6:00 PM, Geoffrey Myers wrote:
out of memory for query result
How is this possible?
Resource limits?
Could this message be generated because of shared memory issues?
The odd thing is the error was generated by a user process, but there is
no reference to
Geoffrey Myers wrote:
We have a process that we successfully ran on virtually identical
databases. The process completed fine on a machine with 8 gig of
memory. The process fails when run on another machine that has 16 gig
of memory with the following error:
out of memory for query result
One other note, there is no error in the postgres log for this database.
I would have expected to find an error there.
--
Until later, Geoffrey
"I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of
Alban Hertroys wrote:
On 3 Jul 2011, at 12:00, Geoffrey Myers wrote:
We have a process that we successfully ran on virtually identical
databases. The process completed fine on a machine with 8 gig of
memory. The process fails when run on another machine that has 16
gig of memory with the foll
On 3 Jul 2011, at 12:00, Geoffrey Myers wrote:
> We have a process that we successfully ran on virtually identical databases.
> The process completed fine on a machine with 8 gig of memory. The process
> fails when run on another machine that has 16 gig of memory with the
> following error:
>
On 3/07/2011 6:00 PM, Geoffrey Myers wrote:
out of memory for query result
How is this possible?
Resource limits?
Do you have a ulimit in place that applies to postgresql? You can check
by examining the resource limits of a running postgresql backend as
shown in /proc/$PG_PID where $PG_PID
On 07/03/2011 01:00 PM, Geoffrey Myers wrote:
We have a process that we successfully ran on virtually identical
databases. The process completed fine on a machine with 8 gig of
memory. The process fails when run on another machine that has 16 gig
of memory with the following error:
out of
Well after a few days of further investigation I still can't track the issue
down. The main problem is that I can only reproduce the error by running the whole
transaction, so I can't isolate the problem down to a simple use case or even a
smaller subset of the transaction, which would have been nice for pos
Hi Jeff,
> Where is the source to the function?
The source is located here: https://github.com/linz/linz_bde_uploader
The main function LDS_MaintainSimplifiedLayers that is being called on line
37 is in
https://github.com/linz/linz_bde_uploader/blob/master/sql/lds_layer_functions.sql.
T
Hi John,
> Does that all really have to be a single transaction?
Yes - I need to ensure that all of the changesets and denormalised tables are
created in the same transaction, so that if an error occurs the database is
rolled back to the last successfully applied changeset. I don't want to get
int
On 04/05/11 2:50 AM, Jeremy Palmer wrote:
I've been having repeated troubles trying to get a PostgreSQL app to play
nicely on Ubuntu. I recently posted a message on this list about an out of
memory error and got a resolution by reducing the work_mem setting. However I'm
now getting further out
On Tue, 2011-04-05 at 21:50 +1200, Jeremy Palmer wrote:
> Hi,
>
> I've been having repeated troubles trying to get a PostgreSQL app to play
> nicely on Ubuntu. I recently posted a message on this list about an out of
> memory error and got a resolution by reducing the work_mem setting. However
any given time.
Thanks,
Jeremy
-Original Message-
From: Jeremy Palmer
Sent: Saturday, 26 March 2011 9:57 p.m.
To: Scott Marlowe
Cc: pgsql-general@postgresql.org
Subject: RE: [GENERAL] Out of memory
Hi Scott,
It was the work_mem that was set too high. I reduced it to 32mb and the
leted.
>
> Thanks,
> Jeremy
>
>
> From: Scott Marlowe [scott.marl...@gmail.com]
> Sent: Friday, 25 March 2011 5:04 p.m.
> To: Jeremy Palmer
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Out of memory
>
> On Thu, Mar 24, 2011 at 9:23 PM, Jeremy Palm
memory after each sort operation has
completed.
Thanks,
Jeremy
From: Scott Marlowe [scott.marl...@gmail.com]
Sent: Friday, 25 March 2011 5:04 p.m.
To: Jeremy Palmer
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Out of memory
On Thu, Mar 24, 2011
On Thu, Mar 24, 2011 at 9:23 PM, Jeremy Palmer wrote:
> I’ve been getting database out of memory failures with some queries which
> deal with a reasonable amount of data.
>
> I was wondering what I should be looking at to stop this from happening.
>
> The typical messages I been getting are like t
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresql.org] On Behalf Of Sam Mason
> Sent: 05 July 2010 15:14
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Out of memory on update of a single column table
> containg jus
On Mon, Jul 05, 2010 at 01:52:20PM +, zeeshan.gha...@globaldatapoint.com
wrote:
> So, is this there a restriction with 32-bit PostgreSQL, a bug or
> configuration issue?
It's a restriction because of the 32bit address space. You've basically
got between two and three GB of useful space left
ent: 05 July 2010 14:39
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Out of memory on update of a single column table
> containg just one row.
>
> Hi,
>
> i tried a simple test:
> create temp table _t as select repeat('x',382637520) as test;
> upda
Hi,
i tried a simple test:
create temp table _t as select repeat('x',382637520) as test;
update _t set test=test||test;
pg 8.3 32bit fails with
[Error Code: 0, SQL State: 53200] ERROR: out of memory
Detail: Failed on request of size 765275088.
pg 8.4.4 64bit works fine
so upgrade to 64bit
> -Original Message-
> From: Thom Brown [mailto:thombr...@gmail.com]
> Sent: 05 July 2010 12:40
> To: Zeeshan Ghalib
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Out of memory on update of a single column table
> containg just one row.
> Hi Zeesh
On 5 July 2010 11:47, wrote:
> Hello Guys,
>
>
>
> We are trying to migrate from Oracle to Postgres. One of the major
> requirements of our database is the ability to generate XML feeds and some of
> our XML files are in the order of 500MB+.
>
>
>
> We are getting "Out of Memory" errors when doin
So for a system which was being used to serve many clients it would be
fine (web service, etc). But for my purposes where I am using a single
session to process large tables of data (such as a mammoth update
statement normalising and encoding 25 million rows of string data), the
32-bit version i
It does when you have many sessions. But each individual session can
only use "32 bits worth of memory", and shaared memory counts in all
processes. The memory can be used for *os level cache*, not postgresql
buffercache.
//Magnus
On Wed, Jun 2, 2010 at 16:08, Tom Wilcox wrote:
> Hi Stephen,
>
>
Hi Stephen,
The impression I was getting from Magnus Hagander's blog was that a 32-bit
version of Postgres could make use of >4Gb RAM when running on 64-bit
Windows due to the way PG passes on the responsibility for caching onto the
OS.. Is this definitely not the case then?
Here's where Im getti
* Tom Wilcox (hungry...@googlemail.com) wrote:
> My plan now is to try increasing the shared_buffers, work_mem,
> maintenance_work_mem and apparently checkpoint_segments and see if that
> fixes it.
er. work_mem and maintenance_work_mem aren't *limits*, they're
more like *targets*. The ou
I have now hit a new query that produces Out of memory errors in a
similar way to the last ones. Can anyone please suggest why I might be
getting this error and any way I can go about diagnosing or fixing it..
The error I get is:
ERROR: out of memory
SQL state: 53200
Detail: Failed on request
I am having difficulties. I have rerun my update that uses the python
functions..
(1) UPDATE nlpg.match_data SET org = normalise(org);
And some other similar queries on neighbouring fields in the table. They
have all now worked. Without any changes to the configuration. I have
done one thing
Thanks Bill,
That sounds like good advice. I am rerunning my query with the python
function peppered with plpy.notice("msg") calls.
Hopefully that'll shed some light on which inputs it's crashing on. Does
anyone know of a way to measure the memory being consumed by the
function/query so that
On 5/28/10 8:43:48 PM, Tom Wilcox wrote:
I ran this query:
EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;
And I got this result:
"Seq Scan on match_data (cost=0.00..9762191.68 rows=32205168 width=206)
(actual time=76873.592..357450.519 rows=2961 loops=1)"
"Total runtime: 8028212.36
I ran this query:
EXPLAIN ANALYZE UPDATE nlpg.match_data SET org = org;
And I got this result:
"Seq Scan on match_data (cost=0.00..9762191.68 rows=32205168 width=206)
(actual time=76873.592..357450.519 rows=2961 loops=1)"
"Total runtime: 8028212.367 ms"
On 28 May 2010 19:39, Tom Wilcox w
On 28 May 2010, at 20:39, Tom Wilcox wrote:
> out = ''
> for tok in toks:
>     ## full word replace
>     if tok == 'house':    out += 'hse' + ADDR_FIELD_DELIM
>     elif tok == 'ground': out += 'grd' + ADDR_FIELD_DELIM
>     elif tok == 'gnd':    out += 'grd' + ADDR_FIELD_DELIM
>
Oops. Sorry about that.
I am having this problem with multiple queries however I am confident that a
fair number may involve the custom plpython "normalise" function which I
have made myself. I didn't think it would be complicated enough to produce a
memory problem.. here it is:
-- Normalises com
In response to Tom Wilcox :
> In addition, I have discovered that the update query that runs on each row
> of a 27-million-row table and fails with an Out of memory error will work when
> limited to 1 million rows, in a much shorter period of time:
>
> EXPLAIN ANALYZE
> UPDATE nlpg.match_
In response to Tom Wilcox :
> Also, can anyone give me any pointers for configuring postgres to use
> ALL 96GB of RAM in my new machine? I would like to know it was using
> everything available.. especially when it is possible to load an entire
> 30m row table into memory! I am currently usin
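For what it's worth, a sketch of postgresql.conf settings that are commonly raised on a large dedicated box; the numbers below are illustrative assumptions, not tuning advice for this workload:
shared_buffers = 16GB          # often set to roughly 1/4 of RAM on a dedicated server
effective_cache_size = 64GB    # planner hint only; no memory is actually allocated
maintenance_work_mem = 2GB     # used by VACUUM, CREATE INDEX, ALTER TABLE ADD FOREIGN KEY
work_mem = 64MB                # per sort/hash node, per backend -- keep modest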
* Tom Wilcox (hungry...@googlemail.com) wrote:
> UPDATE tbl SET f1 = COALESCE(f2,'') || ' ' || COALESCE(f3);
>
> Can anyone suggest reasons why I might be running out of memory on such
> a simple query?
Do you have any triggers on that table? Or FK's?
Stephen
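A sketch of checking for the triggers and foreign keys being asked about (table name as in the thread):
\d nlpg.match_data                               -- the footer lists indexes, foreign keys and triggers
SELECT conname, contype FROM pg_constraint
 WHERE conrelid = 'nlpg.match_data'::regclass;   -- contype 'f' marks a foreign key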
yue peng writes:
> I encountered an out of memory error while executing an INSERT into
> table1(v1,v2,v3) SELECT c1,c2,c3 from table2 where .
Most likely the OOM is because of growth of the pending-trigger-event
queue --- do you have any foreign key references in that table?
Possible solut
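The suggestion is cut off above; one common workaround (an assumption here, not necessarily what was proposed) is to drop the foreign key for the bulk insert and re-add it afterwards, so the per-row trigger events are replaced by a single validation pass. Constraint and column names are placeholders:
BEGIN;
ALTER TABLE table1 DROP CONSTRAINT table1_v1_fkey;
INSERT INTO table1 (v1, v2, v3) SELECT c1, c2, c3 FROM table2 WHERE ...;
ALTER TABLE table1 ADD CONSTRAINT table1_v1_fkey
  FOREIGN KEY (v1) REFERENCES parent_table (id);
COMMIT;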
On 24 March 2010 10:57, yue peng wrote:
> Are there any other ways to still insert the same amount of data and avoid this
> OOM error?
>
>
I'd expect COPY to be the most effective way of bulk loading data into a
database. http://www.postgresql.org/docs/current/static/sql-copy.html
Or do inserts in
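A minimal sketch of the COPY route (the file path is a placeholder; column names follow the thread):
COPY table1 (v1, v2, v3) FROM '/tmp/table2_extract.csv' CSV;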
It seems that the process goes a little further when lowering shared_buffers,
but I've reached the minimum (128kB with max_connections = 2)
without reaching the end.
Are there any chances to break the 128kb limit ?
Or do I need to break this process into two smaller parts (not easy for me)?
The procedure is create_accessors_methods in the dbi_link package
which you can find at:
http://pgfoundry.org/projects/dbi-link/
I've slightly modified the code to adapt it better to Oracle.
Basically it is a procedure which builds a lot of views and tables based
on objects (synonyms in my case)
On 30/12/2009 6:35 PM, Nicola Farina wrote:
Hello
I am using PostgreSQL 8.3.7, compiled by Visual C++ build 1400 under
win32 on a pc with 2 gb ram.
I need to use a long-running plperlu stored procedure which actually
seems to make pg consume a lot of memory
until it reaches a point at which pg crashes.
Can
> Subject: Re: [GENERAL] Out of memory on pg_dump
> Date: Fri, 21 Aug 2009 11:29:48 -0400
> From: chopk...@cra.com
> To: t...@sss.pgh.pa.us
> CC: pgsql-general@postgresql.org
>
"Chris Hopkins" writes:
> Thanks Tom. Next question (and sorry if this is an ignorant one)...how
> would I go about doing that?
See the archives for previous discussions of corrupt-data recovery.
Basically it's divide-and-conquer to find the corrupt rows.
regards, tom lan
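A hedged illustration of the divide-and-conquer idea: dump the table in halves of its key range and recurse into whichever half fails, until the corrupt rows are isolated. Table, column and boundary values are placeholders:
COPY (SELECT * FROM bigtable WHERE id <  5000000) TO '/tmp/bigtable_lo.copy';
COPY (SELECT * FROM bigtable WHERE id >= 5000000) TO '/tmp/bigtable_hi.copy';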
[mailto:t...@sss.pgh.pa.us]
Sent: Friday, August 21, 2009 11:07 AM
To: Chris Hopkins
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Out of memory on pg_dump
"Chris Hopkins" writes:
> 2009-08-19 22:35:42 ERROR: out of memory
> 2009-08-19 22:35:42 DETAIL: Failed on
"Chris Hopkins" writes:
> 2009-08-19 22:35:42 ERROR: out of memory
> 2009-08-19 22:35:42 DETAIL: Failed on request of size 536870912.
> Is there an easy way to give pg_dump more memory?
That isn't pg_dump that's out of memory --- it's a backend-side message.
Unless you've got extremely wide fi
Paul Smith wrote:
It's actually ST_Intersects from PostGIS (some of the PostGIS function
names are still recognized without the leading "ST_").
Not for too much longer - these have been deprecated for a while ;)
http://postgis.refractions.net/documentation/manual-1.3/ch06.html#id2574404
# se
On Mon, Jul 6, 2009 at 7:26 PM, Paul Ramsey wrote:
> If you are on PostGIS < 1.3.4 there are substantial memory leaks in
> intersects() for point/polygon cases. Upgrading to 1.3.6 is
> recommended.
Thank you, that fixed it.
--
Paul Smith
http://www.pauladamsmith.com/
If you are on PostGIS < 1.3.4 there are substantial memory leaks in
intersects() for point/polygon cases. Upgrading to 1.3.6 is
recommended.
P
On Mon, Jul 6, 2009 at 1:39 PM, Paul Smith wrote:
> On Mon, Jul 6, 2009 at 3:34 PM, Tom Lane wrote:
>> Clearly a memory leak, but it's not so clear exactl
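A quick way to confirm which PostGIS build is in use before and after the upgrade:
SELECT postgis_full_version();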
On Mon, Jul 6, 2009 at 3:34 PM, Tom Lane wrote:
> Clearly a memory leak, but it's not so clear exactly what's causing it.
> What's that intersects() function? Can you put together a
> self-contained test case?
It's actually ST_Intersects from PostGIS (some of the PostGIS function
names are still