On Tue, Feb 9, 2010 at 11:51 PM, Greg Smith wrote:
> Asher wrote:
>>
>> Once loaded into the database the data will never be deleted or modified
>> and will typically be accessed over a particular date range for a particular
>> channel (e.g. "sample_time >= X AND sample_time <= Y AND channel=Z").
Hi,
I would consider telling Hibernate to log the SQL it is generating to a
file. This can be done by setting the logging category org.hibernate.SQL to
debug, and for the parameters used in the prepared statements I think you
must also enable org.hibernate.type on debug (I have not managed to get
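As a config sketch (property syntax assumes Hibernate 3.x logging through log4j; adjust for your logging backend, and note some Hibernate versions need TRACE rather than DEBUG on org.hibernate.type before bind parameters show up):

```properties
# log4j.properties -- sketch, assuming Hibernate 3.x with log4j
log4j.logger.org.hibernate.SQL=DEBUG
# bind parameters for prepared statements; some versions require TRACE here
log4j.logger.org.hibernate.type=DEBUG
```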
On Tue, Feb 09, 2010 at 08:19:51PM +0500, Anton Maksimenkov wrote:
> It means that on openbsd i386 we have about 2.2G of virtual space for
> malloc, shm*. So, postgres will use that space.
>
> But mmap() uses random addresses. So when you get a big chunk of memory
> for shared buffers (say, 2G) then
karsten vennemann wrote:
I have to write a 700 GB large database to a dump to clean out a lot
of dead records on an Ubuntu server with postgres 8.3.8. What is the
proper procedure to succeed with this - last time the dump stopped at
3.8 GB size I guess. Should I combine the -Fc option of pg_dum
On Wed, Feb 10, 2010 at 10:11 AM, Steve Atkins wrote:
>
> On Feb 9, 2010, at 10:38 PM, AI Rumman wrote:
>
>> Thanks for your quick answers.
>>
>> But if I use a file and then store the name in the database, is it possible
>> to use TEXT search tsvector and tsquery indexing on these external files?
On Feb 9, 2010, at 11:21 PM, Scott Marlowe wrote:
> On Wed, Feb 10, 2010 at 12:11 AM, Steve Atkins wrote:
>> A database isn't really the right way to do full text search for single
>> files that big. Even if they'd fit in the database it's way bigger than the
>> underlying index types tsquery
Hello
2010/2/10 karsten vennemann
> I have to write a 700 GB large database to a dump to clean out a lot of
> dead records on an Ubuntu server with postgres 8.3.8. What is the proper
> procedure to succeed with this - last time the dump stopped at 3.8 GB size I
> guess. Should I combine the -Fc
On Wed, Feb 10, 2010 at 12:11 AM, Steve Atkins wrote:
> A database isn't really the right way to do full text search for single files
> that big. Even if they'd fit in the database it's way bigger than the
> underlying index types tsquery uses are designed for.
>
> Are you sure that the document
I have to write a 700 GB large database to a dump to clean out a lot of
dead records on an Ubuntu server with postgres 8.3.8. What is the proper
procedure to succeed with this - last time the dump stopped at 3.8 GB size I
guess. Should I combine the -Fc option of pg_dump and the spli
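The usual pattern does combine -Fc with split; a sketch (database names and chunk sizes are illustrative, not from the thread):

```shell
# Dump in custom format and split the stream into ~1 GB chunks:
#   pg_dump -Fc mydb | split -b 1000m - mydb.dump.part.
# Reassemble the chunks when restoring:
#   cat mydb.dump.part.* | pg_restore -d newdb
# The split/reassemble mechanics themselves, shown on a stand-in file:
printf 'line %s\n' $(seq 1 10000) > /tmp/dummy.dump
split -b 16k /tmp/dummy.dump /tmp/dummy.part.
cat /tmp/dummy.part.* > /tmp/dummy.rejoined
cmp /tmp/dummy.dump /tmp/dummy.rejoined && echo "chunks reassemble losslessly"
```

split names its pieces with lexicographically ordered suffixes, so the shell glob feeds them back to cat in the original order.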
On Feb 9, 2010, at 10:38 PM, AI Rumman wrote:
> Thanks for your quick answers.
>
> But if I use a file and then store the name in the database, is it possible
> to use TEXT search tsvector and tsquery indexing on these external files?
No, no way at all.
A database isn't really the right way t
Hi all,
I'm playing with 8.5devel aka 9.0 and got that:
test=# with a as
(insert into table_a select s, md5(s::text) from
generate_series(0,25) s returning *),
b as
(insert into table_b select id, md5(value) from a where
substring(value,1,1) between '0' and '9')
sel
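The statement is cut off above, but its shape is the data-modifying-CTE pattern (which ultimately shipped in 9.1, not 9.0); a self-contained sketch with illustrative table definitions:

```sql
-- Sketch of a writable CTE chain; the table definitions are assumptions,
-- not from the original post. Requires PostgreSQL 9.1+.
CREATE TABLE table_a (id int, value text);
CREATE TABLE table_b (id int, value text);

WITH a AS (
    INSERT INTO table_a
    SELECT s, md5(s::text) FROM generate_series(0, 25) s
    RETURNING *
),
b AS (
    INSERT INTO table_b
    SELECT id, md5(value) FROM a
    WHERE substring(value, 1, 1) BETWEEN '0' AND '9'
    RETURNING *
)
SELECT count(*) FROM b;
```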
On Tue, Feb 9, 2010 at 11:51 PM, Pavel Stehule wrote:
> 2010/2/10 Scott Marlowe :
>> On Tue, Feb 9, 2010 at 11:26 PM, Steve Atkins wrote:
>>>
>>> On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:
>>>
On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman wrote:
> How to save 2 GB or more text string
2010/2/10 Scott Marlowe :
> On Tue, Feb 9, 2010 at 11:26 PM, Steve Atkins wrote:
>>
>> On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:
>>
>>> On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman wrote:
How to save 2 GB or more text string in Postgresql?
Which data type should I use?
>>>
>>> If yo
Asher wrote:
Once loaded into the database the data will never be deleted or
modified and will typically be accessed over a particular date range
for a particular channel (e.g. "sample_time >= X AND sample_time <= Y
AND channel=Z"). A typical query won't return more than a few million
rows and
Thanks for your quick answers.
But if I use a file and then store the name in the database, is it possible
to use TEXT search tsvector and tsquery indexing on these external files?
On Wed, Feb 10, 2010 at 12:26 PM, Steve Atkins wrote:
>
> On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:
>
> > On
On Tue, Feb 9, 2010 at 11:26 PM, Steve Atkins wrote:
>
> On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:
>
>> On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman wrote:
>>> How to save 2 GB or more text string in Postgresql?
>>> Which data type should I use?
>>
>> If you have to you can use either the lo
On Feb 9, 2010, at 9:52 PM, Scott Marlowe wrote:
> On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman wrote:
>> How to save 2 GB or more text string in Postgresql?
>> Which data type should I use?
>
> If you have to you can use either the lo interface, or you can use
> bytea. Large Object (i.e. lo) all
On Tue, Feb 9, 2010 at 9:38 PM, AI Rumman wrote:
> How to save 2 GB or more text string in Postgresql?
> Which data type should I use?
If you have to you can use either the lo interface, or you can use
bytea. Large Object (i.e. lo) allows for access much like fopen /
fseek etc in C, but the act
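A sketch of the two options (names illustrative; note a single text/bytea value is capped at 1 GB, so for a full 2 GB string the large-object interface is the in-database route):

```sql
-- Option 1: bytea -- the value is written and read in one piece
-- (or sliced with substring()); limited to 1 GB per value.
CREATE TABLE docs (id serial PRIMARY KEY, body bytea);
INSERT INTO docs (body) VALUES (decode('deadbeef', 'hex'));

-- Option 2: large objects -- stored out of line and streamed through an
-- fopen/fseek-style API; the table only keeps the object's OID.
CREATE TABLE docs_lo (id serial PRIMARY KEY, body_oid oid);
-- psql: \lo_import / \lo_export; SQL: lo_create(), lo_unlink();
-- libpq: lo_open(), lo_read(), lo_write(), lo_lseek().
```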
Quick note, please stick to text formatted email for the mailing list,
it's the preferred format.
On Tue, Feb 9, 2010 at 9:09 PM, Jayadevan M
wrote:
>
> Hello all,
> Apologies for the long mail.
> I work for a company that provides solutions mostly on a Java/Oracle
> platform. Recently we mov
If you will be selecting sets of data within a time range, it should also
improve performance if you can build a clustered index on the sample_time. It
may also be worth looking at whether partitioning by timestamp & channel offers
any advantages.
Brent Wood
Brent Wood
DBA/GIS consultant
NIWA
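A sketch of Brent's clustering suggestion (table and index names assumed; CLUSTER ... USING is the 8.3 syntax):

```sql
-- One-time physical reordering by time, so range scans touch fewer pages.
CREATE INDEX samples_time_idx ON samples (sample_time);
CLUSTER samples USING samples_time_idx;
-- CLUSTER is not maintained on insert; re-run it periodically, and
-- ANALYZE afterwards so the planner sees the new physical ordering.
```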
How to save 2 GB or more text string in Postgresql?
Which data type should I use?
Any suggestion would be appreciated.
Hello all,
Apologies for the long mail.
I work for a company that provides solutions mostly on a Java/Oracle
platform. Recently we moved one of our products to PostgreSQL. The main
reason was PostgreSQL's GIS capabilities and the inability of government
departments (especially road/traffic) to
Hi,
Is there a way to create a new composite type using already existing composite
type?
For example,
CREATE TABLE inventory_item (
    name        text,
    supplier_id integer REFERENCES suppliers,
    price       numeric CHECK (price > 0)
);
produces the following type automatic
On Tue, Feb 9, 2010 at 8:06 PM, BillR wrote:
> Is it also possible to denormalize by putting the 'channel' data in the
> first table (especially if it isn't very much)? Maintaining a foreign key
> constraint can impact performance significantly in most RDBMS's, even when
> deferring checking. I co
On Tue, 2010-02-09 at 17:14 -0800, Erik Jones wrote:
> Anyways, I realized that the dump run with zero_damaged_pages does
> actually finish.
Yeah, it should finish, it's just a question of whether the warnings
continue, and if you need to keep zero_damaged_pages on to keep reading.
> Also, I foun
On Feb 9, 2010, at 5:00 PM, Jeff Davis wrote:
> On Tue, 2010-02-09 at 15:28 -0800, Erik Jones wrote:
>> * Set zero_damaged_pages=on, run query that originally showed the
>> corruption. This reports 3 different blocks with invalid page headers
>> and reports that they are being zero'd out. Unfor
On Tue, 2010-02-09 at 15:28 -0800, Erik Jones wrote:
> * Set zero_damaged_pages=on, run query that originally showed the
> corruption. This reports 3 different blocks with invalid page headers
> and reports that they are being zero'd out. Unfortunately,
> subsequently querying the table the same
Greetings,
We've recently had a database server crash due to a heavy-duty disk failure, and
upon rebooting we now have a table showing corruption via the "invalid page
header in block X" message when querying one table in particular, the rest have
all checked out via full table dumps with no error
On Feb 9, 2010, at 3:28 PM, Erik Jones wrote:
> Greetings,
>
> We've recently had a database server crash due to a heavy-duty disk failure, and
> upon rebooting we now have a table showing corruption via the "invalid page
> header in block X" message when querying one table in particular, the res
On 2/9/2010 4:41 PM, Asher Hoskins wrote:
>
> Thanks for that, it looks like partitioning is the way to go. I'm
> assuming that I should try and keep my total_relation_sizes less than
> the memory size of the machine?
This depends on what the queries look like. As others have stated, when
partitio
On 09/02/2010 17:05, David Kerr wrote:
> Magnus Hagander wrote:
>> 2010/2/9 David Kerr :
>>> It seems like pg_stat_activity truncates the current_query to about
>>> 1024 characters.
>>>
>>> The field is a text, so i'm wondering if there is a way to see the
>>> full query?
>>>
>>> (I know i can t
Asher Hoskins wrote:
If I partition so that each partition holds data for a single channel
(and set a CHECK constraint for this) then I can presumably remove the
channel from the index since constraint exclusion will mean that only
partitions holding the channel I'm interested in will be search
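A minimal sketch of that layout using 8.3-era inheritance partitioning (names are illustrative):

```sql
CREATE TABLE samples (
    channel     int              NOT NULL,
    sample_time timestamptz      NOT NULL,
    value       double precision
);
-- One child per channel; the CHECK is what constraint exclusion uses.
CREATE TABLE samples_ch1 (CHECK (channel = 1)) INHERITS (samples);
CREATE TABLE samples_ch2 (CHECK (channel = 2)) INHERITS (samples);
-- With channel fixed per partition, the children only need a time index:
CREATE INDEX samples_ch1_time ON samples_ch1 (sample_time);
CREATE INDEX samples_ch2_time ON samples_ch2 (sample_time);
-- With constraint_exclusion = on, a query with "channel = 1" in its
-- WHERE clause skips samples_ch2 entirely.
```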
Boszormenyi Zoltan wrote:
> Korry Douglas wrote:
>
>>> I have the $SUBJECT problem loading my own
>>> module in PostgreSQL. The server is HP-UX/ia64,
>>> PostgreSQL 8.4.2 was compiled with HP CC.
>>> pl/PgSQL can be loaded fine.
>>>
>>> ...:/usr/local/pgsql/pgsql-cc-8.4/lib# ldd odbclink.so
>>>
On Tue, Feb 9, 2010 at 10:47 AM, Asher wrote:
> Hello.
>
> I'm putting together a database to store the readings from various
> measurement devices for later processing. Since these things (water pressure
> monitors attached to very large water pipes) take readings at 200Hz and are
> typically dep
Justin Graf wrote:
Well, first: 200Hz means 200 samples per channel per second.
That is very fast sampling for a pressure sensor; I would be surprised if
the meters are actually giving real results at that rate. I would look
at reducing that down to what the meter is actually capable
On Tue, Feb 9, 2010 at 1:55 PM, John R Pierce wrote:
> Guillaume Lelarge wrote:
>>>
>>> is this a 64bit postgres build?
>>>
>>> if not, you're probably running out of virtual address space in the 32
>>> bit user space, which is limited to like 2gb.
>>>
>>>
>>
>> IIRC, the virtual address space in
On Feb 9, 2010, at 9:55 AM, Seb wrote:
> On Tue, 9 Feb 2010 14:44:23 -0300,
> Alvaro Herrera wrote:
>
> [...]
>
>> Err, remove the renegotiation code from the server.
>
> Oh I meant something from a user's point of view, not hacking the source
> code. If there's not a user option, I don't mi
Guillaume Lelarge wrote:
is this a 64bit postgres build?
if not, you're probably running out of virtual address space in the 32
bit user space, which is limited to like 2gb.
IIRC, the virtual address space in 32bit platforms is 4GB.
it is, but within that 4gb, the kernel uses the to
I've only gone up to about a billion rows, but table partitioning seems
to be the way to go to me. I did per-day partitioning, and just had the
job that inserts the daily data add the table automatically. With the
partitioning, it only has to pull up the tables for the specific days,
and is t
On 2/9/2010 12:47 PM, Asher wrote:
> Hello.
>
> I'm putting together a database to store the readings from various
> measurement devices for later processing. Since these things (water
> pressure monitors attached to very large water pipes) take readings at
> 200Hz and are typically deployed ove
Crosstab is indeed very interesting. Thank you for the suggestion Vincenzo.
regards
Davor
"Vincenzo Romano" wrote in message
news:3eff28921002081133h4b0d7fabm96cc1bc08e579...@mail.gmail.com...
Look for crosstab in the documentation.
On Feb 8, 2010, 8:21 PM, "Davor J." wrote
Seb wrote:
> On Tue, 9 Feb 2010 14:44:23 -0300,
> Alvaro Herrera wrote:
>
> [...]
>
> > Err, remove the renegotiation code from the server.
>
> Oh I meant something from a user's point of view, not hacking the source
> code. If there's not a user option, I don't mind the connection loss as
Hello.
I'm putting together a database to store the readings from various
measurement devices for later processing. Since these things (water
pressure monitors attached to very large water pipes) take readings at
200Hz and are typically deployed over multiple sites for several months
at a tim
On Tue, 9 Feb 2010 14:44:23 -0300,
Alvaro Herrera wrote:
[...]
> Err, remove the renegotiation code from the server.
Oh I meant something from a user's point of view, not hacking the source
code. If there's not a user option, I don't mind the connection loss as
long as the data do get copied t
Seb wrote:
> On Tue, 9 Feb 2010 10:28:50 -0700,
> Scott Marlowe wrote:
>
> > Are there any PANICs in the log before this? Doesn't seem likely from
> > your description, but just wanted to be sure.
>
> No, there's no such lines in the log.
>
> AFAICS, the data did make it into the table, after
On Tue, 9 Feb 2010 10:28:50 -0700,
Scott Marlowe wrote:
> Are there any PANICs in the log before this? Doesn't seem likely from
> your description, but just wanted to be sure.
No, there's no such lines in the log.
AFAICS, the data did make it into the table, after psql re-established
connection w
On Tue, Feb 9, 2010 at 10:03 AM, Seb wrote:
> Hi,
>
> While copying a large'ish file (~ 7.6 million rows) into a table with:
>
> \copy [tablename] ([columns]) FROM '/path/to/large/file' NULL AS ''
>
> The command stopped running after a few minutes with the message:
>
> ---
On Mon, Feb 8, 2010 at 5:46 PM, David Kerr wrote:
> It seems like pg_stat_activity truncates the current_query to about 1024
> characters.
>
> The field is a text, so i'm wondering if there is a way to see the full query?
>
> (I know i can turn on log_statement=all, or log_min_duration_statement)
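For reference, the limit later became configurable: 8.4 added track_activity_query_size, whose 1024-byte default matches the truncation described here; in 8.3 and earlier the limit is compiled in. A config sketch:

```
# postgresql.conf (PostgreSQL 8.4+; changing it requires a server restart)
track_activity_query_size = 8192   # bytes of query text kept in pg_stat_activity
```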
Seb wrote:
> ------
> 2010-02-09 10:47:03 CST LOG: checkpoints are occurring too frequently (6
> seconds apart)
> 2010-02-09 10:47:03 CST HINT: Consider increasing the configuration
> parameter "checkpoint_segments".
> 201
Hi,
While copying a large'ish file (~ 7.6 million rows) into a table with:
\copy [tablename] ([columns]) FROM '/path/to/large/file' NULL AS ''
The command stopped running after a few minutes with the message:
------
server clo
I've a column that contains something formed by concatenating fields
from another table
create table stuff(
glued varchar(30),
...
);
insert into stuff select 'constA,' || field1 || ',' || field2 from
origin where ...;
insert into stuff select 'constB,' || field1 || ',' || field2 from
origin w
Korry Douglas wrote:
>> I have the $SUBJECT problem loading my own
>> module in PostgreSQL. The server is HP-UX/ia64,
>> PostgreSQL 8.4.2 was compiled with HP CC.
>> pl/PgSQL can be loaded fine.
>>
>> ...:/usr/local/pgsql/pgsql-cc-8.4/lib# ldd odbclink.so
>>libodbc.so.1 => /usr/local/pgsql/ru
Magnus Hagander wrote:
On Tue, Feb 9, 2010 at 09:53, David Kerr wrote:
Guillaume Lelarge wrote:
On 09/02/2010 09:35, David Kerr wrote:
Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to actua
Magnus Hagander wrote:
2010/2/9 David Kerr :
It seems like pg_stat_activity truncates the current_query to about 1024
characters.
The field is a text, so i'm wondering if there is a way to see the full query?
(I know i can turn on log_statement=all, or log_min_duration_statement) but i'd
lik
2010/2/9 Scott Marlowe :
> On Tue, Feb 9, 2010 at 3:18 AM, Anton Maksimenkov wrote:
>>> Isn't the usual advice here to log the ulimit setting from the pg
>>> startup script so you can see what it really is for the user at the moment
>> I think that "su" is enough:
> In previous discussions it was m
I have the $SUBJECT problem loading my own
module in PostgreSQL. The server is HP-UX/ia64,
PostgreSQL 8.4.2 was compiled with HP CC.
pl/PgSQL can be loaded fine.
...:/usr/local/pgsql/pgsql-cc-8.4/lib# ldd odbclink.so
libodbc.so.1 => /usr/local/pgsql/runtime/lib/libodbc.so.1
libpthrea
On 09/02/10 13:52, Chris Barnes wrote:
I have this error when selecting from the pg_stat_user_tables. I
restarted postgres and the error went away.
Has anyone else seen this error?
Chris Barnes
[postg...@preventdb02 londiste]$ cat
/data/pgsql/data/pg_log/postgresql-Tue.log
WARNING: pgstat
Hello
>
> I think that Pavel's stuff is more interesting (and effective):
>
> http://pgfoundry.org/projects/pstcollection/
>
sorry, it is in the orafce package (it could be in pstcollection too, in the future)
http://pgfoundry.org/projects/orafce
Regards
Pavel Stehule
> --
> Vincenzo Romano
> NotOrAnd
2010/2/9 Shoaib Mir :
> On Tue, Feb 9, 2010 at 10:31 PM, Richard Huxton wrote:
>>
>> On 09/02/10 09:51, AI Rumman wrote:
>>>
>>> I am using Postgresql 8.1.
>>>
>>> I have to reverse a string like 'abc' to 'cba'.
>>>
>>> Is there any function to do this?
>>
>> Nothing built-in, but you can write yo
I have this error when selecting from the pg_stat_user_tables. I restarted
postgres and the error went away.
Has anyone else seen this error?
Chris Barnes
[postg...@preventdb02 londiste]$ cat /data/pgsql/data/pg_log/postgresql-Tue.log
WARNING: pgstat wait timeout
WARNING: pgsta
On Tue, Feb 9, 2010 at 09:53, David Kerr wrote:
> Guillaume Lelarge wrote:
>>
>> On 09/02/2010 09:35, David Kerr wrote:
>>>
>>> Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
>
> David Kerr wrote:
maintenance_work_mem = 1GB
>>>
On 09/02/10 11:25, Ben Campbell wrote:
[snip]
I need to set all
those flags, but it takes _ages_ to do "UPDATE article SET
needs_indexing=true;"
[snip]
I _think_ the reason it takes so long is that postgresql doesn't modify
rows in place - it creates an entry for the modified row and zaps the
o
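That is indeed how MVCC works: a full-table UPDATE writes a new version of every row. A common mitigation is updating in batches with VACUUM between them so the dead versions can be reused; a sketch assuming an integer primary key named id:

```sql
-- Repeat with successive id ranges; skip rows already flagged.
UPDATE article
   SET needs_indexing = true
 WHERE id BETWEEN 1 AND 100000
   AND needs_indexing IS DISTINCT FROM true;
VACUUM article;  -- makes the superseded row versions reusable
```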
On Tue, Feb 9, 2010 at 10:31 PM, Richard Huxton wrote:
> On 09/02/10 09:51, AI Rumman wrote:
>
>> I am using Postgresql 8.1.
>>
>> I have to reverse a string like 'abc' to 'cba'.
>>
>> Is there any function to do this?
>>
>
> Nothing built-in, but you can write your own easily enough in plpgsql -
On 09/02/10 11:07, Willy-Bas Loos wrote:
Hi,
in my plpgsql function I'm looping through tables and their fields and I
want to return their field names and the corresponding values.
It all works fine, except I can't retrieve the values, which is really a
bummer.
I tried a couple of things (in vai
On Tue, Feb 9, 2010 at 11:21, Fredric Fredricson
wrote:
> Scott Marlowe wrote:
>
> On Tue, Feb 9, 2010 at 1:42 AM, Fredric Fredricson
> wrote:
>
>
> I use locale en_US.UTF-8 but why this should affect how leading characters
> in strings are ignored is beyond me.
>
>
> P.s. this page may shed some
I've got a database that holds a bunch of articles in a table called
'article'. It has a bunch of columns, and each row might hold a few KB
of data, say.
I'm maintaining a separate fulltext database, and so I added a boolean
flag, 'needs_indexing' to my 'article' table to keep track of which
ar
On 09/02/10 09:51, AI Rumman wrote:
I am using Postgresql 8.1.
I have to reverse a string like 'abc' to 'cba'.
Is there any function to do this?
Nothing built-in, but you can write your own easily enough in plpgsql -
google a bit for examples.
--
Richard Huxton
Archonet Ltd
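One possible plpgsql implementation for 8.1 (a sketch, not from the thread):

```sql
CREATE OR REPLACE FUNCTION reverse(str text) RETURNS text AS $$
DECLARE
    result text := '';
    i int;
BEGIN
    FOR i IN 1 .. length(str) LOOP
        -- prepend each character, so the last one ends up first
        result := substring(str FROM i FOR 1) || result;
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql IMMUTABLE STRICT;

-- SELECT reverse('abc');  -- 'cba'
```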
Hi,
I have the $SUBJECT problem loading my own
module in PostgreSQL. The server is HP-UX/ia64,
PostgreSQL 8.4.2 was compiled with HP CC.
pl/PgSQL can be loaded fine.
...:/usr/local/pgsql/pgsql-cc-8.4/lib# ldd odbclink.so
libodbc.so.1 => /usr/local/pgsql/runtime/lib/libodbc.so.1
lib
Hi,
in my plpgsql function I'm looping through tables and their fields and I
want to return their field names and the corresponding values.
It all works fine, except I can't retrieve the values, which is really a
bummer.
I tried a couple of things (in vain), of which this seemed the most
promising
On Tue, Feb 9, 2010 at 3:18 AM, Anton Maksimenkov wrote:
> 2010/1/28 Scott Marlowe :
>>> related to maximum per-process data space. I don't know BSD very well
>>> so I can't say if datasize is the only such value for BSD, but it'd be
>>> worth checking. (Hmm, on OS X which is at least partly BSD
Scott Marlowe wrote:
On Tue, Feb 9, 2010 at 1:42 AM, Fredric Fredricson
wrote:
I use locale en_US.UTF-8 but why this should affect how leading characters
in strings are ignored is beyond me.
P.s. this page may shed some light on the subject:
http://en.wikipedia.org/wiki/Collation
OK,
Hi,
I'm trying to log statements that are being fired from hibernate, so that I
can benchmark them without hibernate itself.
I want to fire the SQL at the database directly and see what difference I
can make by tweaking things.
But when i set log_statement = 'all', the logging looks like this:
20
2010/1/28 Scott Marlowe :
>> related to maximum per-process data space. I don't know BSD very well
>> so I can't say if datasize is the only such value for BSD, but it'd be
>> worth checking. (Hmm, on OS X which is at least partly BSDish, I see
>> -m and -v in addition to -d, so I'm suspicious Op
This was what I was after:
test=# select version();
version
PostgreSQL 8.3.9 on x86_64-pc-linux-gnu, compiled by GCC
gcc-4.3.real (Debian 4.3.2-1.1) 4.3.2
2010/2/9 David Kerr :
> It seems like pg_stat_activity truncates the current_query to about 1024
> characters.
>
> The field is a text, so i'm wondering if there is a way to see the full query?
>
> (I know i can turn on log_statement=all, or log_min_duration_statement) but
> i'd like
> something
I am using Postgresql 8.1.
I have to reverse a string like 'abc' to 'cba'.
Is there any function to do this?
I just answered this less than an hour ago... And please don't cross
post to multiple mailing lists.
On Tue, Feb 9, 2010 at 12:43 AM, AI Rumman wrote:
> I have created a index
> create index leadaddress_phone_idx on
> leadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,
> 'g'::te
On 09/02/2010 08:43, AI Rumman wrote:
> I have created a index
> create index leadaddress_phone_idx on
> leadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,
> 'g'::text));
>
> But the index is not being used.
>
> explain select * from leadaddress where
> regexp_replace(phone,'[^0-9
I have created a index
create index leadaddress_phone_idx on
leadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,
'g'::text));
But the index is not being used.
explain select * from leadaddress where
regexp_replace(phone,'[^0-9]*','','g') like '%2159438606';
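Two likely causes, sketched below: a btree index (including an expression index) can only serve left-anchored LIKE patterns, so the leading % rules it out regardless of whether the expression matches; and in a non-C locale the index must be built with text_pattern_ops for LIKE to use it at all:

```sql
-- Can use the expression index (left-anchored pattern, same expression):
EXPLAIN SELECT * FROM leadaddress
 WHERE regexp_replace(phone, '[^0-9]*', '', 'g') LIKE '2159438606%';

-- In a non-C locale, build the index with the pattern opclass:
CREATE INDEX leadaddress_phone_pattern_idx ON leadaddress
    ((regexp_replace(phone::text, '[^0-9]*', '', 'g')) text_pattern_ops);
-- A suffix match ('%2159438606') would instead need an index over the
-- reversed digit string, queried with a left-anchored pattern.
```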
On Tue, Feb 9, 2010 at 1:42 AM, Fredric Fredricson
wrote:
> I use locale en_US.UTF-8 but why this should affect how leading characters
> in strings are ignored is beyond me.
P.s. this page may shed some light on the subject:
http://en.wikipedia.org/wiki/Collation
On Tue, Feb 9, 2010 at 1:42 AM, Fredric Fredricson
wrote:
> Justin Graf wrote:
>
> On 2/8/2010 7:09 PM, Fredric Fredricson wrote:
>
> Hi!
> New to the list with a question that I cannot find the answer to in the
> manual or on the internet but I suspect is trivial. If somebody could point
> me in
Justin Graf wrote:
On 2/8/2010 7:09 PM, Fredric Fredricson wrote:
Hi!
New to the list with a question that I cannot find the answer to in
the manual or on the internet but I suspect is trivial. If somebody
could point me in the correct direction I would be grateful.
This is what I do (conden
Guillaume Lelarge wrote:
On 09/02/2010 09:35, David Kerr wrote:
Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to actually allocate 1GB, it can't do it.
Ergo, that setting is too high for your
On 09/02/2010 09:35, David Kerr wrote:
> Guillaume Lelarge wrote:
>> On 09/02/2010 05:49, John R Pierce wrote:
>>> David Kerr wrote:
>> maintenance_work_mem = 1GB
> So evidently, when it tries to actually allocate 1GB, it can't do it.
> Ergo, that setting is too high for your mach
Guillaume Lelarge wrote:
On 09/02/2010 05:49, John R Pierce wrote:
David Kerr wrote:
maintenance_work_mem = 1GB
So evidently, when it tries to actually allocate 1GB, it can't do it.
Ergo, that setting is too high for your machine.
...
seems like i've got 2GB free.
is this a 64bit postgre
On 09/02/2010 05:49, John R Pierce wrote:
> David Kerr wrote:
maintenance_work_mem = 1GB
>>>
>>> So evidently, when it tries to actually allocate 1GB, it can't do it.
>>> Ergo, that setting is too high for your machine.
>>> ...
>>
>> seems like i've got 2GB free.
>
>
> is this a 64bit pos