Re: [GENERAL] SELECT FOR UPDATE could see committed transaction partially.

2012-03-05 Thread Kiriakos Georgiou
This is correct behavior with MVCC.  Do a 'select * from x' in thread 2 to see 
why.  The select for update in thread 2 sees the data in table x as it was 
prior to thread 1 committing, thus it won't see the row with a=2.
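
Concretely (a sketch, run in thread 2 under the default READ COMMITTED level
after its SELECT ... FOR UPDATE has returned no rows), a fresh statement takes
a new snapshot and sees both halves of the committed transaction:

SELECT * FROM x;                           -- shows (1, FALSE) and (2, TRUE)
SELECT * FROM x WHERE b=TRUE FOR UPDATE;   -- now returns (2, TRUE)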

For further suggestions you'll have to explain what you are logically trying to 
accomplish.

Kiriakos


On Mar 5, 2012, at 1:41 AM, Sam Wong wrote:

> Hi,
>  
> I hit an UPDATE/LOCK issue in my application and the result has surprised me 
> somewhat…
> And for the repro, it boils down to this:
> ---
> CREATE TABLE x (a int, b bool);
> INSERT INTO x VALUES (1, TRUE);
> COMMIT;
>  
> _THREAD 1_:
> BEGIN;
> UPDATE x SET b=FALSE;
> INSERT INTO x VALUES (2, TRUE);
>  
> _THREAD 2_:
> BEGIN;
> SELECT * FROM x WHERE b=TRUE FOR UPDATE; -- It will be blocked, as expected
>  
> _THREAD 1_:
> COMMIT;
>  
> _THREAD 2_ will be unblocked.  It will return no rows.
> I expected it to return (2, TRUE) instead when I designed the program.
>  
> If I issue the same SELECT query in THREAD 2 right now, it does indeed return 
> (2, TRUE).
>  
> For the same SQL statement, in MySQL-InnoDB, it does return (2, TRUE) in the 
> first SELECT.
>  
> I understand why this happens in PgSQL (it first limits the selection and 
> locks the rows, and upon unlock it rechecks the condition)…
>  
> I don’t like THREAD 2 seeing only half of the committed transaction 
> (it sees the effect of the update but not the insert); is there anything I 
> could do?
>  
> I considered:
> * SERIALIZABLE isolation – but then thread 2 would abort with a deadlock…
> * Both threads do: LOCK TABLE x IN SHARE ROW EXCLUSIVE MODE – this does resolve 
> my issue but it creates a big lock contention problem, and relies on the app to 
> do the right thing.
> * Advisory lock – pretty much the same, except that I could unlock earlier to 
> make the locking period shorter, but nevertheless it’s the whole table lock…
>  
> Thoughts?
>  
> Thanks,
> Sam



Re: [GENERAL] atoi-like function: is there a better way to do this?

2012-03-05 Thread Tom Molesworth

On 05/03/12 05:08, Chris Angelico wrote:

On Mon, Mar 5, 2012 at 3:15 PM, Tom Molesworth  wrote:

Can you use to_number() here? It sounds like something along the lines of
cast(to_number('0' || field::varchar, '9.') as int) might give the
behaviour you're after, and a quick test seems to indicate that it's about
4x faster than the original function:

I looked at to_number but it seems to find digits anywhere inside the
field - to_number('12.34','9') returns 1234, but I want it to stop
at the decimal.


Right - that's why I included the . in the format string. Haven't done 
exhaustive testing but it seemed to do what you were after.
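
For what it's worth, if to_number's template handling proves fiddly, a 
regexp-based equivalent (just a sketch, not something from the original 
exchange) would be:

SELECT cast(coalesce(substring('12.34' from '^[0-9]+'), '0') as int);  -- 12
SELECT cast(coalesce(substring('abc'   from '^[0-9]+'), '0') as int);  -- 0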


cheers,

Tom


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Determine dead files

2012-03-05 Thread Andreas Brandl
Hi,

we have streaming replication running and kind of suspect that the slave 
contains dead files caused by an abort of a huge transaction. I'd like to ask 
how we can be sure that those files are dead.

The details are:

* PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc-4.6.real 
(Debian 4.6.2-9) 4.6.2, 64-bit
* streaming replication with hot-standby
* We moved a huge table from one tablespace to the other
* Unfortunately the disk space on the master host was exhausted during the 
transaction, which caused a transaction abort and, furthermore, a server crash 
because WAL files could not be written
* We added more disk-space and restarted the master
* After restarting the master, the slave continued to read the WAL until the 
transaction abort
* Now we have files on the slave which we suspect to be dead:

332166.27
...
332166.2
332166.1

* These files are located in the destination tablespace (where we tried to move 
the table to)

I suspect those files to be dead since the following query returns 0 rows:

# select * from pg_class where relfilenode=332166;

Is this a sufficient condition to delete all files $relfilenode.* ? Is 
relfilenode unique per database or per cluster?

Is there any explanation why this situation led to dead files?

Thank you!

Regards,
  Andreas

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] what Linux to run

2012-03-05 Thread r d
>>
>> If we move to Linux, what is the preferred Linux for running Postgres
>> on.  This machine would be dedicated to the database only.
>>
>> I'd like a recommendation for both a GUI hosted version and a non-GUI
>> version.  I haven't used Linux in the past but did spend several years
>> in a mixed Unix and IBM mainframe environment at the console level.


I run PostgreSQL on Fedora Core 16 64bit and have never had problems, now
or before.
From that point of view I can recommend FC, but I don't know how it compares
performance-wise to other distros.

I have been using the FC series since they split from the "RedHat Linux"
distribs at about "RedHat 9",
perhaps 10 years ago and have never missed anything, and seldom noticed
troublesome behavior.
My main criticism of FC is that the distro updates to a new version quite
often, 1-2 times per year,
and upgrades are seldom as smooth as they are supposed/advertised to be,
but they have become
much better.

Beyond that, the FC series has about everything you need for
development or anything else,
like running PG.

You can use FC both with GUI and without. It comes by default with GNOME.
It also has
KDE, which looks (and works) similar to Windows. Both Gnome and KDE run
atop X.
FC has the usual Unix shells like bash (default), sh, ksh, csh, tcsh ...
and if you need
to connect to your host, there are several 3270 emulators available, for X
and also text-mode.

Two components which do not mix well with FC are Java 7 (1.7.0x) and Oracle
RDBMS 11g.
For Java, stay with the 1.6 series until the problems of 1.7 are fixed. If
you need to use  the RDBMS
besides PG then FC is not your OS. Instead, look at what systems they
(Oracle) "support".

I hope this helps you with your decision.


[GENERAL] Memory usage and configuration settings

2012-03-05 Thread Mike C
Hi,

I have been using table 17-2, Postgres Shared Memory Usage
(http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
to calculate approximately how much memory the server will use. I'm
using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit system, with 8GB RAM.
Database is approximately 5GB, and is a mixture of read/write.
Postgres is occasionally being killed by the linux oom-killer. I am
trying to understand how much memory postgres could use, and how to
change the configuration to bring it down to a level that won't get it
killed.

Key configuration values are:

max_connections = 350
shared_buffers = 4GB
temp_buffers = 24MB
max_prepared_transactions = 211
work_mem = 16MB
maintenance_work_mem = 131MB
wal_buffers = -1
wal_keep_segments = 128
checkpoint_segments = 64
effective_cache_size = 4GB
autovacuum_max_workers = 4

which I have interpreted to be:

max_locks_per_transaction = 64
max_connections = 350
autovacuum_max_workers =4
max_prepared_transactions = 211 (I've since realised this can be 0; I
use prepared statements, not 2PC)
shared_buffers = 4294967296
wal_block_size = 8192
wal_buffers = 16777216 (actually, -1, but following the documentation
of max(16MB, shared_buffers/32) it should be 16MB).
and wal segment size = 16777216, block_size = 8192

And using the equations on the kernel resources page, I get:

Connections              = (1800 + 270 * max_locks_per_transaction) * max_connections
                         = (1800 + 270 * 64) * 350
                         = 6,678,000
Autovacuum Workers       = (1800 + 270 * max_locks_per_transaction) * autovacuum_max_workers
                         = (1800 + 270 * 64) * 4
                         = 76,320
Prepared Transactions    = (770 + 270 * max_locks_per_transaction) * max_prepared_transactions
                         = (770 + 270 * 64) * 211
                         = 3,808,550
Shared Disk Buffers      = (block_size + 208) * shared_buffers
                         = (8192 + 208) * 4294967296
                         = 36,077,725,286,400 (~33TB)
WAL Buffers              = (wal_block_size + 8) * wal_buffers
                         = (8192 + 8) * 16777216
                         = 137,573,171,200 (~128GB)
Fixed Space Requirements = 788,480
Overall                  = 36,215,309,808,950 bytes (~33.2 TB!)

33.2TB doesn't seem right, and while I know the equations are just
approximations, this seems too much. What have I done wrong? I read a
prior thread about this on the pgsql lists which seemed to indicate
the equations for shared disk and WAL buffers should be divided by
the block_size 8192, and looking at it closer, I wonder if the equation
for both should just be overhead + buffer?

Also what is the relationship between memory and work_mem (set to 16M
in my case). I understand work_mem is per sort, and in most cases our
queries only have a single sort. Does this mean an additional 16M per
sorting client (350 * 16M = 5.6GB), or presumably it only uses the
work memory as it needs it (i.e. does it preallocate all 16M for each
sort, or on an as-needed basis depending on the size of sorted data?)

Are there any other ways to calculate the worst case memory usage of a
given postgres configuration?

My gut feeling is to reduce shared_buffers to 1GB or less and reduce
connections to ~150-200 (to reduce worst case work_mem impact).

Kind Regards,

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Non inheritable check constraint

2012-03-05 Thread Clodoaldo Neto
I have gone through the issue of making a parent table not writable. After
discussing it (1) I adopted the trigger solution. But I think that a
trigger is just an invisible layer over the database model, so I'm
naively proposing a new syntax for PostgreSQL. It would declare that the
check constraint is not inheritable:

create table t (c integer check NOT INHERITABLE (false));
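
(For reference, the trigger workaround mentioned above is roughly of this
shape; a sketch with illustrative names, not the exact code I use:)

create table t_parent (c integer);
create table t_child () inherits (t_parent);

create function t_parent_no_write() returns trigger as $$
begin
    raise exception 'direct writes to parent table t_parent are not allowed';
end;
$$ language plpgsql;

create trigger t_parent_no_write
    before insert or update or delete on t_parent
    for each row execute procedure t_parent_no_write();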

I found a solution that is visible in the model, but it only prevents non-null
values from being inserted; it only prevents all inserts if that column is
declared not null:

create table tfk (c integer unique check(false));
create table t (c integer, foreign key (c) references tfk(c));

Regards, Clodoaldo

(1)
http://stackoverflow.com/questions/9545783/how-to-prevent-inserts-in-a-table


Re: [GENERAL] Return unknown resultset from a function

2012-03-05 Thread Merlin Moncure
On Sun, Mar 4, 2012 at 1:52 PM, Jan Meyland Andersen  wrote:
> How do I return an unknown resultset from a function?
>
> My main problem is that I do not know how many columns or the data type of
> the columns before runtime.
> Is this possible at all?
>
> I also tried to return the data as a text array but I also have trouble with
> that.

The basic deal with postgres is that while it's legal to return
untyped RECORDs from functions, only C functions can do that. Also,
when calling C functions you still have to decorate the returned
record with types when the query is called.  For a couple of good
examples of that, see (\d+) the pg_locks view, which wraps the
pg_lock_status() record-returning function, or check out dblink, which
makes heavy use of record-returning functions.

The only exception to this rule is cursors.  Reading from cursors via
FETCH allows you to pull data from a refcursor that was set up in a
previous function call and works pretty well, but comes with the giant
downside that the results can be directed only to the client.

For pure server-side manipulation of untyped structures you have to
flatten everything to text.  You can do it yourself:

CREATE OR REPLACE FUNCTION get_records(table_name TEXT) RETURNS SETOF TEXT AS
$$
DECLARE
  query TEXT;
BEGIN
  query = format('SELECT %s::text FROM %s', table_name, table_name);

  RETURN QUERY EXECUTE query;
END;
$$ LANGUAGE PLPGSQL STABLE;

select get_records('foo');

Once you have the record in text representation you can throw it
around until it has to get cast back to the 'foo' record type:

select (get_records('foo')::foo).*;

You can also do lots of wonderful things with the hstore type, or the
non plpgsql server-side languages (which basically flatten everything
to text).
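
For example, a small hstore sketch (assumes the hstore extension is installed;
the table is made up):

create temp table bar (a int, b text);
insert into bar values (1, 'hello');
select hstore(r) -> 'b' from bar r;  -- 'hello', without knowing bar's row type statically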

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] what Linux to run

2012-03-05 Thread mgould
Thanks for all of the help.  I will be doing some testing in VM's this
week before loading on my other server.

Michael Gould
Intermodal Software Solutions, LLC
904-226-0978
 
 


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage and configuration settings

2012-03-05 Thread Tom Lane
Mike C  writes:
> I have been using table 17-2, Postgres Shared Memory Usage
> (http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
> to calculate approximately how much memory the server will use. I'm
> using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit system, with 8GB RAM.
> Database is approximately 5GB, and is a mixture of read/write.
> Postgres is occasionally being killed by the linux oom-killer. I am
> trying to understand how much memory postgres could use, and how to
> change the configuration to bring it down to a level that won't get it
> killed.

Basically, you can't fix it that way, at least not if you want to have a
sane configuration.  The problem is misdesign of the OOM killer: it will
count the shared memory block against the postmaster *once for each
child process*.  The only realistic solution is to turn off OOM kill for
the postmaster (and maybe its children too, or maybe you'd rather have
them immune as well).  The former is pretty easy to do if you're
launching the postmaster from a root-privileged initscript.  I think
most prepackaged versions of PG are set up to be able to do this
already.  If you want the children OOM-killable it requires a
source-code tweak as well, since that property is normally inherited.

But anyway, your calculations are totally off:

> Shared Disk Buffers= 36,077,725,286,400
>  = (block_size + 208) * shared_buffers
>  = (8192 + 208) * 4294967296
>  = ~33TB

I think you've multiplied by the block size one time too many.  Ditto
for WAL buffers.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Return unknown resultset from a function

2012-03-05 Thread Tom Lane
Merlin Moncure  writes:
> The only exception to this rule is cursors.  Reading from cursors via
> FETCH allows you to pull data from a refcursor that was set up in a
> previous function call and works pretty well, but comes with the giant
> downside that the results can be directed only to the client.

Hmm, couldn't you do a FETCH into a record variable in plpgsql?  Not
that you wouldn't have problems manipulating the record variable, since
plpgsql is pretty strongly typed itself.
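
Something along these lines seems workable (a sketch with assumed names; the
cursor has to have been opened beforehand):

CREATE OR REPLACE FUNCTION peek_first(c refcursor) RETURNS text AS
$$
DECLARE
  r RECORD;
BEGIN
  FETCH c INTO r;   -- pull one row from the already-open cursor
  RETURN r::text;   -- flatten to text, since r's shape isn't known statically
END;
$$ LANGUAGE plpgsql;

-- usage:
BEGIN;
DECLARE curs CURSOR FOR SELECT relname, relkind FROM pg_class;
SELECT peek_first('curs');
COMMIT;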

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage and configuration settings

2012-03-05 Thread Mike C
On Mon, Mar 5, 2012 at 4:04 PM, Tom Lane  wrote:
> Mike C  writes:
>> I have been using table 17-2, Postgres Shared Memory Usage
>> (http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
>> to calculate approximately how much memory the server will use. I'm
>> using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit system, with 8GB RAM.
>> Database is approximately 5GB, and is a mixture of read/write.
>> Postgres is occasionally being killed by the linux oom-killer. I am
>> trying to understand how much memory postgres could use, and how to
>> change the configuration to bring it down to a level that won't get it
>> killed.
>
> Basically, you can't fix it that way, at least not if you want to have a
> sane configuration.  The problem is misdesign of the OOM killer: it will
> count the shared memory block against the postmaster *once for each
> child process*.  The only realistic solution is to turn off OOM kill for
> the postmaster (and maybe its children too, or maybe you'd rather have
> them immune as well).  The former is pretty easy to do if you're
> launching the postmaster from a root-privileged initscript.  I think
> most prepackaged versions of PG are set up to be able to do this
> already.  If you want the children OOM-killable it requires a
> source-code tweak as well, since that property is normally inherited.

Ok, that makes sense. With regards to work_mem, am I right in thinking
the child processes only allocate enough memory to meet the task at
hand, rather than the full 16M specified in the config file?

> But anyway, your calculations are totally off:
>
>> Shared Disk Buffers    = 36,077,725,286,400
>>                                  = (block_size + 208) * shared_buffers
>>                                  = (8192 + 208) * 4294967296
>>                                  = ~33TB
>
> I think you've multiplied by the block size one time too many.  Ditto
> for WAL buffers.

Yes spot on, removed the block_size and it is now the more sane ~4.1GB.

Thanks for your help,

Mike

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage and configuration settings

2012-03-05 Thread Tom Lane
Mike C  writes:
> Ok, that makes sense. With regards to work_mem, am I right in thinking
> the child processes only allocate enough memory to meet the task at
> hand, rather than the full 16M specified in the config file?

They only allocate what's needed ... but you have to keep in mind that
work_mem is *per operation*, eg per sort or hash.  A complex query could
require several such steps and thus eat several times work_mem.
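
A hypothetical illustration (throwaway tables, no real data): one query whose
plan can contain more than one work_mem-sized step, e.g. a hash for the join
plus a sort for the ORDER BY, each drawing on its own work_mem allowance:

CREATE TEMP TABLE customers (id int, region text);
CREATE TEMP TABLE orders (customer_id int, total numeric);

EXPLAIN
SELECT c.region, sum(o.total)
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.region
ORDER BY sum(o.total) DESC;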

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Return unknown resultset from a function

2012-03-05 Thread Merlin Moncure
On Mon, Mar 5, 2012 at 10:08 AM, Tom Lane  wrote:
> Merlin Moncure  writes:
>> The only exception to this rule is cursors.  Reading from cursors via
>> FETCH allows you to pull data from a refcursor that was set up in a
>> previous function call and works pretty well, but comes with the giant
>> downside that the results can be directed only to the client.
>
> Hmm, couldn't you do a FETCH into a record variable in plpgsql?  Not
> that you'd not have problems manipulating the record variable, since
> plpgsql is pretty strongly typed itself.

Yeah -- good point on both sides -- you can do it, but it's pretty
limiting: you can only fetch a row at a time and the result data can't
be further expressed in another query.  A CTE based FETCH has been
suggested a couple of times as a hypothetical workaround.

Whether the data is processed on the server or the client, the result is
essentially the same...you're forced into a highly iterative method of
programming that I try to avoid whenever possible.

TBH though I find the textual workarounds to the type system to work
pretty well, meaning that most of the useful things which were
historically only possible in C have been nicely wrapped or seem to be
just plain impossible (like handling mixed type variadic functions, or
receiving generic RECORDs as arguments).

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Memory usage and configuration settings

2012-03-05 Thread Merlin Moncure
On Mon, Mar 5, 2012 at 6:37 AM, Mike C  wrote:
> Hi,
>
> I have been using table 17-2, Postgres Shared Memory Usage
> (http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html)
> to calculate approximately how much memory the server will use. I'm
> using Postgres 9.1 on a Linux 2.6 (RHEL 6) 64bit system, with 8GB RAM.
> Database is approximately 5GB, and is a mixture of read/write.
> Postgres is occasionally being killed by the linux oom-killer. I am
> trying to understand how much memory postgres could use, and how to
> change the configuration to bring it down to a level that won't get it
> killed.
>
> Key configuration values are:
>
> max_connections = 350
> shared_buffers = 4GB
> temp_buffers = 24MB
> max_prepared_transactions = 211
> work_mem = 16MB
> maintenance_work_mem = 131MB
> wal_buffers = -1
> wal_keep_segments = 128
> checkpoint_segments = 64
> effective_cache_size = 4GB
> autovacuum_max_workers = 4
>
> which I have interpreted to be:
>
> max_locks_per_transaction = 64
> max_connections = 350
> autovacuum_max_workers =4
> max_prepared_transactions = 211 (I've since realised this can be 0; I
> use prepared statements, not 2PC)
> shared_buffers = 4294967296
> wal_block_size = 8192
> wal_buffers = 16777216 (actually, -1, but following the documentation
> of max(16MB, shared_buffers/32) it should be 16MB).
> and wal segment size = 16777216, block_size = 8192
>
> And using the equations on the kernel resources page, I get:
>
> Connections = 6,678,000
>                     = (1800 + 270 * max_locks_per_transaction) *
> max_connections
>                     = (1800 + 270 * 64) * 350
> Autovacuum Workers = 76,320
>                                    = (1800 + 270 *
> max_locks_per_transaction) * autovacuum_max_workers
>                                    = (1800 + 270 * 64) * 4
> Prepared Transactions = 3,808,550
>                                      = (770 + 270 *
> max_locks_per_transaction) * max_prepared_transactions
>                                      = (770 + 270 * 64) * 211
> Shared Disk Buffers      = 36,077,725,286,400
>                                 = (block_size + 208) * shared_buffers
>                                 = (8192 + 208) * 4294967296
>                                 = ~33TB
> WAL Buffers = 137,573,171,200
>                     = (wal_block_size + 8) * wal_buffers
>                     = (8192 + 8) * 16777216
>                     = ~128GB
> Fixed Space Requirements = 788,480
> Overall = 36,215,309,808,950 bytes (~33.2 TB!)
>
> 33.2TB doesn't seem right, and while I know the equations are just
> approximations, this seems too much. What have I done wrong? I read a
> prior thread about this on the pgsql lists which seemed to indicate
> the equations for shared disk and wall buffers should be divided by
> the block_size 8192, and looking at it closer, wonder if the equation
> for both should just be overhead + buffer?
>
> Also what is the relationship between memory and work_mem (set to 16M
> in my case). I understand work_mem is per sort, and in most cases our
> queries only have a single sort. Does this mean an additional 16M per
> sorting client (350 * 16M = 5.6GB), or presumably it only uses the
> work memory as it needs it (i.e. does it preallocate all 16M for each
> sort, or on an as-needed basis depending on the size of sorted data?)
>
> Are there any other ways to calculate the worst case memory usage of a
> given postgres configuration?
>
> My gut feeling is to reduce shared_buffer to 1GB or less and reduce
> connections to ~150-200 (to reduce worst case work_mem impact).

One easy thing to neglect when doing memory counting is backend
private memory.  Each postgres process typically eats around 1mb and
this will grow if your processes are long-lived as the backend starts
to cache various structures.  As a rule of thumb I tend to use 4mb per
backend (you can confirm this yourself by subtracting SHR from RES).
In absolutely pathological cases (like heavy plpgsql backends with a
lot of tables and views) it can be worse.  4mb * 350 = 1.4gb...so
you're cutting things fairly close.

Looking at your postgresql.conf, your memory settings for
shared_buffers are more aggressive than the often-suggested 25% rule,
but I bet it's the backend memory that's pushing you over the edge.

Rather than reducing backend count, I'd consider (carefully) using
pgbouncer to reduce overall connection count.  Or you can reduce
shared buffers, but in your case I'd probably cut it to 1GB if it was
me.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Lost data Folder, but have WAL files--- How to recover the database ?? Windows

2012-03-05 Thread chinnaobi
Hi All,

Recently I was doing streaming replication; I lost the data folder on both
the servers and was left with only the WAL archives (somehow).

Can anyone tell me how to recover the database with the WAL archives?

Thanks in advance.

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Lost-data-Folder-but-have-WAL-files-How-to-recover-the-database-Windows-tp5539703p5539703.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Single server multiple databases - extension

2012-03-05 Thread Brian Trudal
Hi

I have 2 databases running on a single server, and I installed the extension 
'hstore' in one database and it works fine. When I tried to use the same 
extension in another database, it gives an error saying the extension does not 
exist; nor does it allow me to install it, as it complains about its existence.

Any help ?

db1=# CREATE EXTENSION hstore;
ERROR:  type "hstore" already exists
db1=# DROP EXTENSION hstore;
ERROR:  extension "hstore" does not exist
db1=# create table foo(id hstore);
ERROR:  type "hstore" is only a shell
LINE 1: create table foo(id hstore);
    ^


Re: [GENERAL] Lost data Folder, but have WAL files--- How to recover the database ?? Windows

2012-03-05 Thread Chris Travers
On Mon, Mar 5, 2012 at 6:38 PM, chinnaobi  wrote:

> Hi All,
>
> Recently i was doing streaming replication, I lost the data folder on both
> the servers and left with WAL archives (some how).
>
> Can any one tell me how to recover database with WAL archives.
>

I don't think you can recover from only the WAL archives.  However, if you
have a base backup and a complete collection of WAL archives then recovery
should be possible.  At that point it's basically standard WAL recovery.

So I guess the first step is to determine if you have this base backup (you
would have taken one during replication setup and hopefully were taking
them from time to time).
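
If you do have one, the replay itself is driven by a recovery.conf placed in
the restored data directory, along the lines of the Windows example in the
PostgreSQL docs (the archive path below is only a placeholder for wherever
your WAL archive actually lives):

restore_command = 'copy "C:\\wal_archive\\%f" "%p"'  # placeholder path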

Best wishes,
Chris Travers

>
> Thanks in advance.
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/Lost-data-Folder-but-have-WAL-files-How-to-recover-the-database-Windows-tp5539703p5539703.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] ANALYSE on top-level of partitioned tables - required?

2012-03-05 Thread Toby Corkindale

Hi,
Say you have a master table which has many child (partition) tables, created 
like this:


CREATE TABLE foo (various columns...);
CREATE TABLE foo_1 () INHERITS (foo);
CREATE TABLE foo_2 () INHERITS (foo);

Now let's say you insert data directly into foo_X, eg with
COPY foo_1 FROM...
COPY foo_1 FROM...

Do you need to call ANALYZE manually on foo for PostgreSQL to 
build statistics on it?


I remember seeing something in the documentation about how the 
auto-analyser won't realise it needs to run on top-level tables because, 
as far as it's concerned, there have been no changes to it...


But, I wondered if that matters? I mean, since there's no data in it, 
but all the child tables are analysed, is that enough?



My experience so far indicates that it DOES matter (looking at query 
plans before and after calling ANALYSE), but I'd like to get some 
opinions from those more knowledgeable than I..
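
A minimal sketch of the kind of before/after check I mean (the column name and 
file path are placeholders):

COPY foo_1 FROM '/path/to/data.csv' CSV;           -- bulk load into a child
EXPLAIN SELECT * FROM foo WHERE some_column = 42;  -- plan before
ANALYZE foo;                                       -- analyse the parent explicitly
EXPLAIN SELECT * FROM foo WHERE some_column = 42;  -- plan after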


thanks in advance,
Toby

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general