Chris Roberts writes:
> Would someone tell me why I am seeing the following Postgres logs?
> 07:56:20 EST LOG: 08P00: incomplete message from client
> 07:56:20 EST LOCATION: pq_getmessage, src\backend\libpq\pqcomm.c:1143
> 07:56:20 EST ERROR: 54000: out of memory
> 07:56:20 EST DETAIL: Cannot
Hello,
My deployment is Postgres 9.3.5 on a Windows machine.
Would someone tell me why I am seeing the following Postgres logs?
07:56:20 EST LOG: 0: execute : SELECT * FROM "c3p0"
07:56:20 EST LOCATION: exec_execute_message, src\backend\tcop\postgres.c:1906
07:56:20 EST LOG: 0: execut
Eelke Klein writes:
> In a database of one of our customers we sometimes get out of memory
> errors. Below I have copy pasted one of these very long messages.
> The error doesn't always occur, when I copy paste the query and run it
> manually it works.
The memory map doesn't look out of the ordin
Hello,
In a database of one of our customers we sometimes get out of memory
errors. Below I have copy pasted one of these very long messages.
The error doesn't always occur, when I copy paste the query and run it
manually it works.
The current server is an OpenSUSE 12.2 with postgresql 9.2.1 (we
"Dara Olson" writes:
> This is the first 1/3 of the errors, so hopefully this will help diagnose
> where my problem may be. Any help would be greatly appreciated.
Well, you didn't show us the error that caused a COPY to fail, but it's
pretty obvious that you're attempting to load the dump into
LY spatial_ref_sys
ADD CONSTRAINT spatial_ref_sys_pkey PRIMARY KEY (srid);
This is the first 1/3 of the errors, so hopefully this will help diagnose where
my problem may be. Any help would be greatly appreciated.
Thank you in advance.
Dara
- Original Message -
From: Tom Lane
T
"Dara Olson" writes:
> I am attempting to create an exact copy of our production database/cluster on
> a different server for development. I created a dumpall file which is 8.7GB.
> When I attempt to run this in psql on the new server it seems okay and then I
> got a string of "invalid command
Greetings.
I am attempting to create an exact copy of our production database/cluster on a
different server for development. I created a dumpall file which is 8.7GB.
When I attempt to run this in psql on the new server it seems okay and then I
got a string of "invalid command \N" lines and the
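The flood of invalid command \N lines is usually a downstream symptom: once an earlier statement (here apparently a COPY) fails, psql falls out of copy mode and starts interpreting the \N data values as backslash commands. A minimal way to surface the first real error, assuming the dump is a plain-SQL file with the hypothetical name dumpall.sql, is to make psql stop at the first failure:

-- hypothetical file name; stop at the first error instead of scrolling past it
\set ON_ERROR_STOP on
\i dumpall.sql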
Mark Priest writes:
> However, I am still curious as to why I am getting an out of memory
> error. I can see how the performance might be terrible on such a
> query but I am surprised that postgres doesn't start using the disk at
> some point to reduce memory usage. Could it be that postgres tr
Thanks, Craig.
There are no triggers on the tables and the only constraints are the
primary keys.
I am thinking that the problem may be that I have too many full self
joins on the simple_group table. I am probably getting a
combinatorial explosion when postgres does cross joins on all the
deriv
> Thanks, Craig.
>
> There are no triggers on the tables and the only constraints are the
> primary keys.
>
> I am thinking that the problem may be that I have too many full self
> joins on the simple_group table. I am probably getting a
> combinatorial explosion when postgres does cross joins
Mark Priest writes:
> I am getting an Out of Memory error in my server connection process
> while running a large insert query.
> Postgres version: "PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
> GCC gcc.exe (GCC) 3.4.2 (mingw-special)"
> OS: Windows 7 Professional (v.6.1, build 7601 service
On 10/18/2011 02:52 PM, Mark Priest wrote:
I am getting an Out of Memory error in my server connection process
while running a large insert query.
Postgres version: "PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
GCC gcc.exe (GCC) 3.4.2 (mingw-special)"
OS: Windows 7 Professional (v.6.1, buil
I am getting an Out of Memory error in my server connection process
while running a large insert query.
Postgres version: "PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
GCC gcc.exe (GCC) 3.4.2 (mingw-special)"
OS: Windows 7 Professional (v.6.1, build 7601 service pack 1)
The OS is 64 bit but t
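One way to test the combinatorial-explosion theory for the self joins described above, without running the insert until it exhausts memory, is to look at the planner's row estimates first. A minimal sketch, reusing the simple_group table name from the message above but with an invented join column, since the real query is not shown:

-- EXPLAIN only plans the query, so it is cheap even if the join would explode
EXPLAIN
SELECT g1.id
FROM simple_group g1
JOIN simple_group g2 ON g2.parent_id = g1.id   -- parent_id is a hypothetical column
JOIN simple_group g3 ON g3.parent_id = g2.id;
-- an enormous estimated row count on the top node confirms the joins multiply out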
On 06/07/11 01:12, Geoffrey Myers wrote:
Wanted to add more specifics. Here is the actual code that generated the
error:
my $result = $conn->exec($select);
if ($result->resultStatus != PGRES_TUPLES_OK)
{
$error = $conn->errorMessage;
die "Error: <$error> Failed: <$select>";
}
That looks like
On 5/07/2011 11:12 PM, Geoffrey Myers wrote:
my $result = $conn->exec($select);
if ($result->resultStatus != PGRES_TUPLES_OK)
{
$error = $conn->errorMessage;
die "Error: <$error> Failed: <$select>";
}
So you're saying this select request failing would not be logged to the
postgres database log
Tom Lane wrote:
Geoffrey Myers writes:
Geoffrey Myers wrote:
out of memory for query result
One other note that is bothering me. There is no reference in the log
regarding the out of memory error. Should that not also show up in the
associated database log?
Not if it's a client-side er
Geoffrey Myers writes:
> Geoffrey Myers wrote:
>> out of memory for query result
> One other note that is bothering me. There is no reference in the log
> regarding the out of memory error. Should that not also show up in the
> associated database log?
Not if it's a client-side error.
(Whic
Craig Ringer wrote:
On 3/07/2011 6:00 PM, Geoffrey Myers wrote:
out of memory for query result
How is this possible?
Resource limits?
Could this message be generated because of shared memory issues?
The odd thing is the error was generated by a user process, but there is
no reference to
Geoffrey Myers wrote:
We have a process that we successfully ran on virtually identical
databases. The process completed fine on a machine with 8 gig of
memory. The process fails when run on another machine that has 16 gig
of memory with the following error:
out of memory for query result
One other note, there is no error in the postgres log for this database.
I would have expected to find an error there.
--
Until later, Geoffrey
"I predict future happiness for America if they can prevent
the government from wasting the labors of the people under
the pretense of taking care of
Alban Hertroys wrote:
On 3 Jul 2011, at 12:00, Geoffrey Myers wrote:
We have a process that we successfully ran on virtually identical
databases. The process completed fine on a machine with 8 gig of
memory. The process fails when run on another machine that has 16
gig of memory with the foll
On 3 Jul 2011, at 12:00, Geoffrey Myers wrote:
> We have a process that we successfully ran on virtually identical databases.
> The process completed fine on a machine with 8 gig of memory. The process
> fails when run on another machine that has 16 gig of memory with the
> following error:
>
On 3/07/2011 6:00 PM, Geoffrey Myers wrote:
out of memory for query result
How is this possible?
Resource limits?
Do you have a ulimit in place that applies to postgresql? You can check
by examining the resource limits of a running postgresql backend as
shown in /proc/$PG_PID where $PG_PID
On 07/03/2011 01:00 PM, Geoffrey Myers wrote:
We have a process that we successfully ran on virtually identical
databases. The process completed fine on a machine with 8 gig of
memory. The process fails when run on another machine that has 16 gig
of memory with the following error:
out of
We have a process that we successfully ran on virtually identical
databases. The process completed fine on a machine with 8 gig of
memory. The process fails when run on another machine that has 16 gig
of memory with the following error:
out of memory for query result
How is this possible?
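"out of memory for query result" is the message the client library raises when it cannot buffer the entire result set, which fits Tom Lane's point elsewhere in this thread that a client-side error leaves no trace in the server log. A hedged sketch of the usual workaround, with a hypothetical table name, is to pull the rows through a cursor in batches instead of all at once:

BEGIN;
DECLARE big_cur CURSOR FOR
    SELECT * FROM big_table;      -- big_table is a hypothetical name
FETCH 10000 FROM big_cur;         -- repeat the FETCH until it returns no rows
CLOSE big_cur;
COMMIT;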
Paul Smith wrote:
It's actually ST_Intersects from PostGIS (some of the PostGIS function
names are still recognized without the leading "ST_").
Not for too much longer - these have been deprecated for a while ;)
http://postgis.refractions.net/documentation/manual-1.3/ch06.html#id2574404
# se
On Mon, Jul 6, 2009 at 7:26 PM, Paul Ramsey wrote:
> If you are on PostGIS < 1.3.4 there are substantial memory leaks in
> intersects() for point/polygon cases. Upgrading to 1.3.6 is
> recommended.
Thank you, that fixed it.
--
Paul Smith
http://www.pauladamsmith.com/
If you are on PostGIS < 1.3.4 there are substantial memory leaks in
intersects() for point/polygon cases. Upgrading to 1.3.6 is
recommended.
P
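For anyone hitting the same leak, a quick check of the installed PostGIS version and a query written with the non-deprecated spelling might look like the sketch below (table and column names are invented for illustration):

-- report the PostGIS build in use
SELECT postgis_full_version();
-- prefer ST_Intersects over the deprecated intersects()
SELECT p.id
FROM points p, regions r                 -- hypothetical tables with geometry columns
WHERE ST_Intersects(p.geom, r.geom);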
On Mon, Jul 6, 2009 at 1:39 PM, Paul Smith wrote:
> On Mon, Jul 6, 2009 at 3:34 PM, Tom Lane wrote:
>> Clearly a memory leak, but it's not so clear exactl
On Mon, Jul 6, 2009 at 3:34 PM, Tom Lane wrote:
> Clearly a memory leak, but it's not so clear exactly what's causing it.
> What's that intersects() function? Can you put together a
> self-contained test case?
It's actually ST_Intersects from PostGIS (some of the PostGIS function
names are still
Paul Smith writes:
> We have a query that's producing an "out of memory" error
> consistently. The detail of the error message is "Failed on request of
> size 16." We have 16 GB of RAM in our database server running 32-bit
> Debian lenny. Here's the query:
> ...
> ExecutorState: 460390096 to
We have a query that's producing an "out of memory" error
consistently. The detail of the error message is "Failed on request of
size 16." We have 16 GB of RAM in our database server running 32-bit
Debian lenny. Here's the query:
INSERT INTO db_newsitemlocation (news_item_id, location_id)
SELECT n
"Joost Kraaijeveld" <[EMAIL PROTECTED]> writes:
> I have a query that has run on 3 other *identical* machines (hardware,
> software, postgresql.conf identical, just other data in the database)
> that give me an "out of memory error" every time I try (see below).
> Anyone any idea of where or how
Hi,
I have a query that has run on 3 other *identical* machines (hardware,
software, postgresql.conf identical, just other data in the database)
that give me an "out of memory error" every time I try (see below).
Anyone any idea of where or how to look for the problem or the
solution?
>From th
Side note: Why does Thunderbird send HTML mails even though it is configured
to send plain text? Sorry for that! And sorry for being off-topic.
Regards,
Christian
--
Deriva GmbH Tel.: +49 551 489500-42
Financial IT and Consulting Fax: +49 551 489500-91
H
Mikko Partio wrote:
Isn't 128MB quite low considering the "current standard" of 25%
- 50% of total ram?
I had also read a statement about using this amount of memory as shared
buffers. Exactly that was the reason why I set it to such a high value,
but I am now convinced that this
On Fri, Aug 24, 2007 at 12:07:37PM +0300, Mikko Partio wrote:
> On 8/23/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
> > You've got it completely wrong. By setting shared_buffers to 2GB it
> > means no-one can use it. It's not postgres that's running out of
> > memory, it's the rest of you
On 8/23/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
>
>
> You've got it completely wrong. By setting shared_buffers to 2GB it
> means no-one can use it. It's not postgres that's running out of
> memory, it's the rest of your system. Set it to something sane like
> 128MB or maybe smaller.
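For reference, a minimal way to see and adjust the setting under discussion (the 128MB figure is the one suggested above, not a universal recommendation):

-- current value, viewable from any session
SHOW shared_buffers;
-- in postgresql.conf; shared_buffers only takes effect after a server restart:
--   shared_buffers = 128MB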
Martijn van Oosterhout wrote:
You've got it completely wrong.
Hm, you seem to be right. :(
I have now decreased the "shared_buffers" setting to 128 MB. I have also
found some tuning pages with warnings about not setting the value too
high. I'm sure that I have read these pages before, but I s
On Thu, Aug 23, 2007 at 08:30:46PM +0200, Christian Schröder wrote:
>Thanks for your tips! I have changed the "shared_buffers" setting back
>to 2 GB. It was set to 2 GB before, but we also had "out of memory"
>errors with this setting, so I raised it to 3 GB.
You've got it completely w
Tom Lane wrote:
Ok, I can do this, but why can more memory be harmful?
Because you've left no room for anything else? The kernel, the various
other daemons, the Postgres code itself, and the local memory for each
Postgres process all require more than zero space.
So doe
Christian Schröder <[EMAIL PROTECTED]> writes:
> hubert depesz lubaczewski wrote:
>> On Wed, Aug 22, 2007 at 07:07:20PM +0200, Christian Schröder wrote:
>>> These are the current settings from the server configuration:
>>> shared_buffers = 3GB
>>
>> this is *way* too much. i
hubert depesz lubaczewski wrote:
On Wed, Aug 22, 2007 at 07:07:20PM +0200, Christian Schröder wrote:
These are the current settings from the server configuration:
shared_buffers = 3GB
this is *way* too much. i would suggest lowering it to 1gig *at most*.
Ok, I can do this, but wh
On Wed, Aug 22, 2007 at 07:07:20PM +0200, Christian Schröder wrote:
> These are the current settings from the server configuration:
>shared_buffers = 3GB
this is *way* too much. i would suggest lowering it to 1gig *at most*.
>max memory size (kbytes, -m) 3441565
this looks like to
Hi list,
I am struggling with some "out of memory" errors in our PostgreSQL
database which I do not understand. Perhaps someone can give me a hint.
The application which causes the errors runs multi-threaded with 10
threads. Each of the threads performs several select statements on the
database
On Mon, Dec 11, 2006 at 05:50:53PM -0600, Kirk Wythers wrote:
> met_data=# SELECT count(*) FROM climate, sites, solar WHERE
> climate.id = sites.id AND solar.id = sites.id AND climate.year = 1999;
> --
> 33061700
> (1 row)
> psql(394) malloc: *** vm_allocate(size=396742656) failed (err
On Mon, 2006-12-11 at 17:50 -0600, Kirk Wythers wrote:
> SELECT count (*) returns 33,061,700
>
> met_data=# SELECT count(*) FROM climate, sites, solar WHERE
> climate.id = sites.id AND solar.id = sites.id AND climate.year = 1999;
> --
> 33061700
> (1 row)
>
> However attempting the join
On Dec 11, 2006, at 1:43 PM, Tom Lane wrote:
Kirk Wythers <[EMAIL PROTECTED]> writes:
I have an database (pg 8.1.0 on OS X) where a three table inner-join
gives the following errors:
psql(606) malloc: *** vm_allocate(size=8421376) failed (error code=3)
psql(606) malloc: *** error: can't all
Kirk Wythers <[EMAIL PROTECTED]> writes:
> I have an database (pg 8.1.0 on OS X) where a three table inner-join
> gives the following errors:
> psql(606) malloc: *** vm_allocate(size=8421376) failed (error code=3)
> psql(606) malloc: *** error: can't allocate region
> psql(606) malloc: *** set a
I have an database (pg 8.1.0 on OS X) where a three table inner-join
gives the following errors:
psql(606) malloc: *** vm_allocate(size=8421376) failed (error code=3)
psql(606) malloc: *** error: can't allocate region
psql(606) malloc: *** set a breakpoint in szone_error to debug
out of memory
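The malloc failures above come from psql itself trying to hold the roughly 33 million joined rows in client memory. One hedged workaround, assuming a psql version that supports the FETCH_COUNT variable, is to let psql retrieve the result in batches through a cursor behind the scenes:

-- in psql: fetch 10000 rows at a time instead of buffering the whole result set
\set FETCH_COUNT 10000
SELECT * FROM climate, sites, solar
WHERE climate.id = sites.id AND solar.id = sites.id AND climate.year = 1999;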
On Jun 22, 2006, at 4:02 PM, Relyea, Mike wrote:
Thanks Jim and Tom. At least now I've got a direction to head in. I
think for now I'll probably reduce work_mem as a stop-gap measure
to get
the query running again. This will buy me some time to redesign it.
I'll probably separate out each s
Thanks Jim and Tom. At least now I've got a direction to head in. I
think for now I'll probably reduce work_mem as a stop-gap measure to get
the query running again. This will buy me some time to redesign it.
I'll probably separate out each sub query and store the results in a
table (would a tem
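A rough sketch of the stop-gap described here, with placeholder names throughout; lowering work_mem only changes the per-sort/per-hash memory budget for the current session:

-- session-only change; 16MB is a hypothetical value
SET work_mem = '16MB';
-- stage one of the sub-queries in a temp table so it is computed only once
CREATE TEMP TABLE sub1 AS
    SELECT some_key, count(*) AS n       -- stand-in for one real sub-query
    FROM some_detail_table               -- hypothetical table
    GROUP BY some_key;
ANALYZE sub1;                            -- give the planner real statistics for later joins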
Tom Lane wrote:
Hmm. One of the things that's on my TODO list is to make the planner
smarter about drilling down into sub-selects to extract statistics.
I think that's what's called for here, but your example has eliminated
all the interesting details. Can you show us the actual query, its
EXP
On Jun 22, 2006, at 2:23 PM, Tom Lane wrote:
"Relyea, Mike" <[EMAIL PROTECTED]> writes:
I've zipped the results of EXPLAIN INSERT INTO "tblSummary" SELECT *
FROM "qrySummary"; for my case. It's a zip file that I've renamed to
.txt in order to get around the attachment being blocked by
certain
"Relyea, Mike" <[EMAIL PROTECTED]> writes:
> I've zipped the results of EXPLAIN INSERT INTO "tblSummary" SELECT *
> FROM "qrySummary"; for my case. It's a zip file that I've renamed to
> .txt in order to get around the attachment being blocked by certain mail
> servers.
Egad, what a mess :-(. By
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Well, that's the problem right there :-(. Have you ANALYZEd this table?
> My production table and query are more complex. In the original, the
> query above was in a sub-select; the work-around was to create a temp
> table with the
I've zipped the results of EXPLAIN INSERT INTO "tblSummary" SELECT *
FROM "qrySummary"; for my case. It's a zip file that I've renamed to
.txt in order to get around the attachment being blocked by certain mail
servers.
Tom Lane wrote:
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
oom_test=> explain select val,count(*) from oom_tab group by val;
QUERY PLAN
-
HashAggregate (cost=1163446.13..1163448.63 rows=200
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
> QUERY PLAN
> -
> HashAggregate (cost=1163446.13..1163448.63 rows=200 width=4)
> -> Seq Scan on oo
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
> oom_test=> explain select val,count(*) from oom_tab group by val;
> QUERY PLAN
> -
> HashAggregate (cost=1163446.13..1163448.63 rows=200 width=4)
>
Tom Lane wrote:
Misestimated hash aggregation, perhaps? What is the query and what does
EXPLAIN show for it? What have you got work_mem set to?
oom_test=> \d oom_tab
Table "public.oom_tab"
Column | Type | Modifiers
+-+---
val| integer |
oom_test=> explai
"Todd A. Cook" <[EMAIL PROTECTED]> writes:
> I am consistently running into out-of-memory issues in 8.1.4 running on
> RHEL3 and 8.0.5 on RHEL4. The logs show entries like this:
> AggContext: -2130714624 total in 271 blocks; 9688 free (269 chunks);
> -2130724312 used
> TupleHashTable: 893902872
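The negative "total" figures look like 32-bit counters wrapping around, and the underlying issue discussed in this thread is a hash aggregate sized from a badly wrong group-count estimate. Per Tom Lane's ANALYZE question elsewhere in the thread, a hedged first step is:

-- refresh the statistics the planner uses to guess the number of groups
ANALYZE oom_tab;
-- re-check the plan: the rows= estimate on the HashAggregate node should move
-- away from the default-looking 200 seen earlier in this thread
EXPLAIN SELECT val, count(*) FROM oom_tab GROUP BY val;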
ailto:[EMAIL PROTECTED]
Sent: Wednesday, June 21, 2006 11:01 PM
To: Relyea, Mike
Cc: pgsql-general@postgresql.org; Tom Lane
Subject: RE: [GENERAL] Out of memory error in 8.1.0 Win32
On Wed, 21 Jun 2006, Relyea, Mike wrote:
ExecutorState: 550339936 total in 123 blocks; 195003544 free (740135
c
So what's my next step? How do I track down what is causing this
problem?
-Original Message-
From: Qingqing Zhou [mailto:[EMAIL PROTECTED]
Sent: Wednesday, June 21, 2006 11:01 PM
To: Relyea, Mike
Cc: pgsql-general@postgresql.org; Tom Lane
Subject: RE: [GENERAL] Out of memory err
On Wed, 21 Jun 2006, Relyea, Mike wrote:
> ExecutorState: 550339936 total in 123 blocks; 195003544 free (740135
> chunks); 355336392 used
> HashBatchContext: 293593176 total in 44 blocks; 3107384 free (80
> chunks); 290485792 used
> TIDBitmap: 2088960 total in 8 blocks; 924720 free (27 chunks);
t: Re: [GENERAL] Out of memory error in 8.1.0 Win32
"Qingqing Zhou" <[EMAIL PROTECTED]> writes:
>> ExecutorState: 550339936 total in 123 blocks; 195005920 free (740144
>> chunks); 355334016 used
>> ...
>> HashBatchContext: 293593176 total in 44 blocks; 3107
"Qingqing Zhou" <[EMAIL PROTECTED]> writes:
>> ExecutorState: 550339936 total in 123 blocks; 195005920 free (740144
>> chunks); 355334016 used
>> ...
>> HashBatchContext: 293593176 total in 44 blocks; 3107384 free (80 chunks);
>> 290485792 used
> Er, looks like a huge hash-join but not sure if it
"Qingqing Zhou" <[EMAIL PROTECTED]> wrote
>
>
> ExecutorState: 550339936 total in 123 blocks; 195005920 free (740144
chunks); 355334016 used
> ...
> HashBatchContext: 293593176 total in 44 blocks; 3107384 free (80 chunks);
290485792 used
> TIDBitmap: 2088960 total in 8 blocks; 1012120 free (27 ch
""Relyea, Mike"" <[EMAIL PROTECTED]> wrote
> Is this what you're looking for?
No. I mean per-context memory usage output like this in your log file:
2006-06-08 16:33:09 LOG: autovacuum: processing database "ibox"
TopMemoryContext: 84400 total in 7 blocks; 12696 free (22 chunks); 71704 used
Oper
""Relyea, Mike"" <[EMAIL PROTECTED]> wrote
> I've just started receiving an out of memory error with my most complex
> query.
Can you post the memory usage log after the error the server reports?
Regards,
Qingqing
I've just started receiving an out of memory error with my most complex
query. It has been running fine for the past 9 months. It's a snapshot
materialized view that I update every night using the functions from
http://jonathangardner.net/PostgreSQL/materialized_views/matviews.html
The error I'm
I didn't see any strange number in your log message and 16777212 is 16M, which is not a scary number ... is there any other memory-exhausting program on the same machine?
Besides Windows 2003, PostgreSQL Server and the Oracle CLIENT libraries the only thing running is a script of mine (which uses aro
""Harald Armin Massa"" <[EMAIL PROTECTED]> wrote
> today postgresql 8.1.3 on win32 died:
> 2006-06-08 16:33:12 ERROR: out of memory
> 2006-06-08 16:33:12 DETAIL: Failed on request of size 16777212.
>
I didn't see any strange number in your log message and 16777212 is 16M,
which is not a scary n
Hello,
today postgresql 8.1.3 on win32 died:
2006-06-08 16:33:09 LOG: autovacuum: processing database "ibox"
TopMemoryContext: 84400 total in 7 blocks; 12696 free (22 chunks); 71704 used
Operator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used
TopTransactionContext: 8192 total in
8 Mar 2006 07:31:19 -0800, Nik <[EMAIL PROTECTED]>:
> [...]
> psql: ERROR: out of memory
> DETAIL: Failed on request of size 32.
>
I also have this kind of error (out of memory) during the restoration
of objects on my database. I use a 8.1.2 pg_dump on a 7.1.1 PostgreSQL
server. Size of the dump i
Yes, I was indeed out of memory. That is the problem: the postgres.exe
process corresponding to the pg_restore continuously consumes more and
more memory until it runs out and fails with the mentioned error. Since
I already have 4Gb of RAM, throwing more hardware at it is not a
feasible solution, s
Tom was exactly right.
I was trying to restore the dump file into an already created table
structure that did have three foreign key constraints. I removed the
primary key constraint to speed up the load, but was not aware of the
memory usage of the foreign keys.
I dropped the table and ran the p
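As Tom Lane points out in his reply, each row copied into a table with foreign keys queues an AFTER-trigger event for the constraint check, and tens of millions of queued events is what eats the memory. A common pattern, sketched here with hypothetical constraint and column names around the lane_data table from this thread, is to drop the foreign keys for the bulk load and re-add them afterwards:

-- hypothetical constraint and column names
ALTER TABLE lane_data DROP CONSTRAINT lane_data_parent_fk;
-- ... run the COPY / pg_restore of the data here ...
ALTER TABLE lane_data ADD CONSTRAINT lane_data_parent_fk
    FOREIGN KEY (parent_id) REFERENCES parent_table (id);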
other way is to set
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session
Manager\Memory Management
bigger values
but to restore a lot of data on windows take so many time
2006/3/8, Tom Lane <[EMAIL PROTECTED]>:
> "Nik" <[EMAIL PROTECTED]> writes:
> > pg_restore: ERROR: out of memory
> > D
"Nik" <[EMAIL PROTECTED]> writes:
> pg_restore: ERROR: out of memory
> DETAIL: Failed on request of size 32.
> CONTEXT: COPY lane_data, line 17345022: ""
A COPY command by itself shouldn't eat memory. I'm wondering if the
table being copied into has any AFTER triggers on it (eg for foreign key
Nik wrote:
I am running PostgreSQL 8.1.3 on Windows 2003 Server.
I am trying to transfer the data from a table in db1 on one machine to a
table in db2 on a different machine. The table size is about 22Gb
(about 280 million rows).
I was trying to do it by generating a backup file of the table in
I am running PostgreSQL 8.1.3 on Windows 2003 Server.
I am trying to transfer the data from a table in db1 on one machine to a
table in db2 on a different machine. The table size is about 22Gb
(about 280 million rows).
I was trying to do it by generating a backup file of the table in db1
and resto
Gabor Siklos <[EMAIL PROTECTED]> writes:
> I'm trying to update a table in transaction mode with
> 300 million records in it, and I'm getting an out of
> memory error.
The update per se shouldn't be a problem, but if you have AFTER ROW
triggers on the table then the list of pending trigger events
I'm trying to update a table in transaction mode with
300 million records in it, and I'm getting an out of
memory error.
ERROR: out of memory
DETAIL: Failed on request of size 32.
In transaction mode, I first delete all the records in
the table and then try to use COPY to populate it
again with t
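The reply above explains that each affected row on a table with AFTER ROW triggers (foreign-key checks included) adds one entry to the pending trigger event list, and 300 million entries is what runs the backend out of memory. One commonly suggested variant, offered only as a hedged sketch and not as this thread's confirmed resolution, is to TRUNCATE instead of DELETE when the whole table is being reloaded, since TRUNCATE fires no per-row triggers:

BEGIN;
TRUNCATE the_big_table;                          -- hypothetical name; no per-row trigger events
COPY the_big_table FROM '/path/to/data.copy';    -- server-side file path, also hypothetical
COMMIT;
-- note: TRUNCATE is refused if other tables reference this one via foreign keys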
On Thu, 1 Sep 2005, James Cradock wrote:
Hello.
I'm getting an Out Of Memory error when I try to create a GiST index on a
geometry column (PostGIS) with approximately 33,000,000 rows. I can truncate
the data in the table, create the GiST index on the empty table, but when I
try to reload the
Thanks.
I'm using PostgreSQL 8.0.2.
On Sep 1, 2005, at 6:35 AM, Richard Huxton wrote:
James Cradock wrote:
Hello.
I'm getting an Out Of Memory error when I try to create a GiST index
on a geometry column (PostGIS) with approximately 33,000,000 rows. I
can truncate the data in the table, cre
James Cradock wrote:
Hello.
I'm getting an Out Of Memory error when I try to create a GiST index on
a geometry column (PostGIS) with approximately 33,000,000 rows. I can
truncate the data in the table, create the GiST index on the empty
table, but when I try to reload the data via pg_restore
Hello.
I'm getting an Out Of Memory error when I try to create a GiST index on
a geometry column (PostGIS) with approximately 33,000,000 rows. I can
truncate the data in the table, create the GiST index on the empty
table, but when I try to reload the data via pg_restore I get the
error. I ge
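The failing step here is reloading data into a table that already carries the GiST index, so the generic bulk-load advice (not confirmed as this thread's resolution) is to load first and build the index afterwards; a sketch with hypothetical names:

-- drop the index, reload the data, then rebuild the index in one pass
DROP INDEX parcels_geom_gist;                                  -- hypothetical index name
-- ... reload the table contents (COPY / pg_restore) ...
CREATE INDEX parcels_geom_gist ON parcels USING gist (geom);   -- hypothetical table/column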
Werner Bohl <[EMAIL PROTECTED]> writes:
> Explain output:
> "HashAggregate (cost=881509.02..881510.02 rows=200 width=20)"
> " Filter: (count(*) > 1)"
> " -> Seq Scan on lssi_base (cost=0.00..872950.68 rows=1711668
> width=20)"
If this is just a one-time query just do
set enable_hashagg = o
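The setting being cut off above is enable_hashagg; turning it off for a one-time query forces a sort-based GroupAggregate, whose sort can spill to disk instead of keeping one hash entry per distinct value in memory. A minimal sketch using the names from this thread:

-- session-only escape hatch for a one-off query
SET enable_hashagg = off;
SELECT dup_field FROM dup_table
GROUP BY dup_field
HAVING count(*) > 1;
RESET enable_hashagg;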
On Tue, 2005-04-05 at 16:04, Werner Bohl wrote:
> I have a fairly large table (21M records). One field of type varchar(16)
> has some duplicate values, which I'm trying to identify.
> Executing select dup_field from dup_table group by dup_field having
> count(*) > 1 errs with Out of Memory error. S
I have a fairly large table (21M records). One field of type varchar(16)
has some duplicate values, which I'm trying to identify.
Executing select dup_field from dup_table group by dup_field having
count(*) > 1 errs with Out of Memory error. Server has 4GB memory, the
backend-process errs after 3.7
On Wednesday, 09.02.2005, at 10:39 -0200, Clodoaldo Pinto wrote:
> On Wed, 09 Feb 2005 13:18:44 +0100, Tino Wildenhain <[EMAIL PROTECTED]> wrote:
> >
> > Will this go into the same database?
>
> Yes, this *went* into the same database.
>
> > If so, you should probably use:
> >
> > CREATE TABLE
On Wed, 09 Feb 2005 13:18:44 +0100, Tino Wildenhain <[EMAIL PROTECTED]> wrote:
>
> Will this go into the same database?
Yes, this *went* into the same database.
> If so, you should probably use:
>
> CREATE TABLE targettable AS
> select data,
> usuario,
> sum(pontos) as sum_pont
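A completed sketch of the CREATE TABLE AS suggestion, filling in the column list from the psql script that appears further down in this thread (the alias names and the absence of a WHERE clause are assumptions):

CREATE TABLE targettable AS
SELECT data,
       usuario,
       sum(pontos) AS sum_pontos,
       sum(wus)    AS sum_wus
FROM usuarios
GROUP BY data, usuario;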
On Monday, 07.02.2005, at 20:05 -0200, Clodoaldo Pinto wrote:
> On Mon, 07 Feb 2005 13:51:46 -0800, Joshua D. Drake
> <[EMAIL PROTECTED]> wrote:
>
> > Well your first email didn't explain that you were doing the below :)
>
> In the first email I was not doing the insert. I was executing a psql
After much work I was able to do it:
The psql script was changed to:
\o '/KakaoStats/bak/groupdup1.txt'
select
data,
usuario,
sum(pontos),
sum(wus)
from usuarios
where data <= 2056
group by data, usuario
;
\o
\o '/KakaoStats/bak/groupdup2.txt'
select
data
On Tue, 8 Feb 2005 09:06:38 -0200, Clodoaldo Pinto
<[EMAIL PROTECTED]> wrote:
> I did:
> # /sbin/sysctl -w vm.overcommit_memory=2
> following
> http://www.postgresql.org/docs/7.4/static/kernel-resources.html#AEN17068
>
> And got the same error:
>
> ERROR: out of memory
> DETAIL: Failed on reque
I did:
# /sbin/sysctl -w vm.overcommit_memory=2
following
http://www.postgresql.org/docs/7.4/static/kernel-resources.html#AEN17068
And got the same error:
ERROR: out of memory
DETAIL: Failed on request of size 44.
CONTEXT: PL/pgSQL function "group_dup" line 9 at SQL statement
The difference n
On Mon, 07 Feb 2005 13:51:46 -0800, Joshua D. Drake
<[EMAIL PROTECTED]> wrote:
> Well your first email didn't explain that you were doing the below :)
In the first email I was not doing the insert. I was executing a psql script:
$ psql -e -f groupdup.psql ks2
This was the groupdup.psql script:
Clodoaldo Pinto wrote:
On Mon, 07 Feb 2005 09:32:47 -0800, Joshua D. Drake
<[EMAIL PROTECTED]> wrote:
Any advice on how to avoid it?
Use a cursor.
Same thing using a cursor:
Well your first email didn't explain that you were doing the below :)
The cursor will help you with large data pulls from a
On Mon, 07 Feb 2005 09:32:47 -0800, Joshua D. Drake
<[EMAIL PROTECTED]> wrote:
>
> >
> > Any advice on how to avoid it?
>
> Use a cursor.
>
Same thing using a cursor:
declare
rdata record;
begin
truncate table usuarios2;
for rdata in
select distinct on (data) data
from usuarios
loop
ins
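The loop body is cut off above; a hedged reconstruction of what it presumably does, aggregating one data value per iteration so the working set stays small (the column list of usuarios2 is not shown anywhere, so it is assumed here):

-- assumed body of the loop; usuarios2's columns are a guess based on the
-- psql script earlier in this thread
insert into usuarios2 (data, usuario, pontos, wus)
select data, usuario, sum(pontos), sum(wus)
from usuarios
where data = rdata.data
group by data, usuario;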