In my experience, I configured a warm standby for a 2 TB Postgres cluster
(PostgreSQL 8.4).
Note: I do not know your database size and WAL archive generation rate.
Important considerations I made were as follows:
1. WAL archive transfer from production to standby depends on the network
bandwidth (a minimal config sketch follows below).
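For context, a minimal sketch of the 8.4-era warm-standby plumbing such a
setup relies on; the archive path, trigger-file location, and pg_standby
options here are illustrative assumptions, not details from the original
post:

# postgresql.conf on the primary: ship each completed WAL segment to the archive
archive_mode = on
archive_command = 'cp %p /mnt/server/archive/%f'

# recovery.conf on the standby: replay segments as they arrive, until the
# trigger file appears
restore_command = 'pg_standby -t /tmp/pgsql.trigger /mnt/server/archive %f %p %r'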
On 6 September 2011, 0:27, Rory Campbell-Lange wrote:
> On 05/09/11, Tomas Vondra (t...@fuzzy.cz) wrote:
>> Do I understand correctly that you compare a query with literal
>> parameters with a parametrized query wrapped in a plpgsql function?
>
> Yes! Certainly I need to make the function perform more
On 05/09/11, Rory Campbell-Lange (r...@campbell-lange.net) wrote:
> On 05/09/11, Tomas Vondra (t...@fuzzy.cz) wrote:
> > On 5 September 2011, 23:07, Rory Campbell-Lange wrote:
> ...
> > > The query itself runs in about a 1/3rd of a second. When running the
> > > query as a 'RETURN QUERY' function on Postgres 8.4, the function runs
> > > in over 100 seconds, about 300 times slower.
On 05/09/11, Tomas Vondra (t...@fuzzy.cz) wrote:
> On 5 September 2011, 23:07, Rory Campbell-Lange wrote:
...
> > The query itself runs in about a 1/3rd of a second. When running the
> > query as a 'RETURN QUERY' function on Postgres 8.4, the function runs in
> > over 100 seconds, about 300 times slower.
Hopefully it should be back after some time :)
---
Regards,
Raghavendra
EnterpriseDB Corporation
Blog: http://raghavt.blogspot.com/
On Tue, Sep 6, 2011 at 3:17 AM, Tomas Vondra wrote:
> On 2 September 2011, 7:36, Magnus Hagander wrote:
> > Yeah, all hub.org hosted services had a rather long downtime again
> > yesterday. They seem to be back up now.
On 2 September 2011, 7:36, Magnus Hagander wrote:
> Yeah, all hub.org hosted services had a rather long downtime again
> yesterday. They seem to be back up now.
And down again :-(
Tomas
The nodes communicate through 4 Gbps Ethernet, so I don't think there is an
issue there. Probably some kind of misconfiguration of DRBD has occurred. I
will check on that tomorrow. Thanks a lot :)
On 5 September 2011, 23:07, Rory Campbell-Lange wrote:
> I have a function wrapping a (fairly complex) query.
>
> The query itself runs in about a 1/3rd of a second. When running the
> query as a 'RETURN QUERY' function on Postgres 8.4, the function runs in
> over 100 seconds, about 300 times slower.
>
I have a function wrapping a (fairly complex) query.
The query itself runs in about a 1/3rd of a second. When running the
query as a 'RETURN QUERY' function on Postgres 8.4, the function runs in
over 100 seconds, about 300 times slower.
The function takes 3 input parameters: 2 dates and a boolean.
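A frequent cause of this fast-query, slow-function pattern is that plpgsql
prepares the statement with the parameters left as placeholders, so the
planner never sees the actual date values and can settle on a much worse
generic plan. Below is a minimal sketch of the usual workaround, forcing a
fresh plan per call with dynamic SQL; the function, table, and column names
are hypothetical, since the original query was not posted:

CREATE OR REPLACE FUNCTION report(d1 date, d2 date, flag boolean)
RETURNS SETOF some_table AS $$
BEGIN
    -- EXECUTE plans the statement anew with the concrete parameter values;
    -- RETURN QUERY EXECUTE ... USING is available from 8.4 on
    RETURN QUERY EXECUTE
        'SELECT * FROM some_table
          WHERE created BETWEEN $1 AND $2 AND active = $3'
        USING d1, d2, flag;
END;
$$ LANGUAGE plpgsql;

Comparing EXPLAIN ANALYZE of the literal query against a PREPAREd version
of it is a quick way to confirm whether the generic plan is the culprit.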
Phghght. Sorry, no, that didn't do it; I was typing too fast and skipped
updating the attributes table. That was definitely not the case with my
original database. Wasn't working. The table definition reported the update
I made. Insert did not work. Dropping the rules, restarting the database,
and recreating them is what made it work.
On Monday, September 05, 2011 1:48:58 pm Ron Peterson wrote:
> 2011-09-05_16:14:00-0400 Tom Lane :
> > Ron Peterson writes:
> > > I just dropped my logging rules, stopped the database and restarted it,
> > > put my rules back in place, and now it works. Not sure why. Cached
> > > query plan?
> >
On 02/09/11, David Johnston (pol...@yahoo.com) wrote:
> > In my "-1" example, am I right in assuming that I created a correlated
> > subquery rather than an uncorrelated one? I'm confused about the
> > difference.
> >
> Correlated: has a where clause that references the outer query
> Un-correlated: does not reference the outer query and can run on its own
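For illustration, a minimal pair of examples (tables and columns made up):

-- Correlated: the subquery's WHERE references the outer row (o.id),
-- so it is logically re-evaluated for every outer row
SELECT o.id
FROM orders o
WHERE EXISTS (SELECT 1 FROM order_lines l WHERE l.order_id = o.id);

-- Un-correlated: the subquery stands alone and can be evaluated once
SELECT o.id
FROM orders o
WHERE o.customer_id IN (SELECT id FROM customers WHERE active);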
2011-09-05_16:14:00-0400 Tom Lane :
> Ron Peterson writes:
> > I just dropped my logging rules, stopped the database and restarted it,
> > put my rules back in place, and now it works. Not sure why. Cached
> > query plan?
>
> Maybe. We'd need a reproducible test case to do more than speculate though.
Ron Peterson writes:
> I just dropped my logging rules, stopped the database and restarted it,
> put my rules back in place, and now it works. Not sure why. Cached
> query plan?
Maybe. We'd need a reproducible test case to do more than speculate
though.
regards, tom lane
On September 5, 2011, MirrorX wrote:
> Thanks a lot for your answer.
>
> Actually, DRBD is the solution I am trying to avoid, since I think the
> performance degrades a lot (I've used it in the past). I also have
> serious doubts about whether the data gets corrupted in case of the
> master's failure, if not all blocks have been replicated to the second node
2011-09-05_15:03:00-0400 Tom Lane :
> Ron Peterson writes:
> > I just updated a table to have a larger column size as follows.
>
> > alter table attributes_log alter column attribute_name type varchar(48);
>
> How come this refers to "attributes_log" while your failing command is
> an insert into "attributes"?
Ron Peterson writes:
> I just updated a table to have a larger column size as follows.
> alter table attributes_log alter column attribute_name type varchar(48);
How come this refers to "attributes_log" while your failing command is
an insert into "attributes"?
regards, tom lane
I just updated a table to have a larger column size as follows.
alter table attributes_log alter column attribute_name type varchar(48);
The size was previously 24.
iddb=> \d attributes
Table "iddb.attributes"
Column | Type
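For anyone hitting the same thing: a rewrite rule created against the old
column type can keep misbehaving after the ALTER, and what fixed it in this
thread was dropping and recreating the rules. A sketch of that dance; the
rule name and body are illustrative guesses, not Ron's actual definitions:

BEGIN;
-- drop the logging rule that was built against the old varchar(24) columns
DROP RULE log_attribute_insert ON attributes;
ALTER TABLE attributes     ALTER COLUMN attribute_name TYPE varchar(48);
ALTER TABLE attributes_log ALTER COLUMN attribute_name TYPE varchar(48);
-- recreate the rule against the new definitions
CREATE RULE log_attribute_insert AS ON INSERT TO attributes
    DO ALSO INSERT INTO attributes_log (attribute_name)
        VALUES (NEW.attribute_name);
COMMIT;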
Thanks a lot for your answer.
Actually, DRBD is the solution I am trying to avoid, since I think the
performance degrades a lot (I've used it in the past). I also have serious
doubts about whether the data gets corrupted in case of the master's
failure, if not all blocks have been replicated to the second node
MirrorX wrote:
> my bad...
> I read in the manual that the recovery process is constant and runs all
> the time. So the question now is: how many WALs can this procedure
> handle? For example, can it handle 100-200 GB
sure, if the master can handle that it's no problem for the client (same
hardware)
My bad...
I read in the manual that the recovery process is constant and runs all the
time. So the question now is:
how many WALs can this procedure handle? For example, can it handle 100-200 GB
every day? If it cannot, any other suggestions for HA? Thanks in advance
Asia writes:
> I would expect to have only one top-level CA cert in server's and client's
> root.crt and it was not possible to configure with 2-level intermediate CA.
This seems a little confused, since in your previous message you stated
that libpq worked correctly and JDBC did not, and now y
On Mon, 5 Sep 2011 10:54:21 -0400, John DeSoi wrote:
On Sep 5, 2011, at 7:05 AM, Radosław Smogura wrote:
Hello,
During testing of a (forked) driver we have seen the following strange
behaviour. The JDBC driver mainly invokes Fastpath to obtain LOBs; because
of insufficient privileges I get
1. Some bytes
On Sep 5, 2011, at 7:05 AM, Radosław Smogura wrote:
> Hello,
>
> During testing of a (forked) driver we have seen the following strange
> behaviour. The JDBC driver mainly invokes Fastpath to obtain LOBs; because
> of insufficient privileges I get
> 1. Some bytes
> 2. 'E' (error about privileges)
> 3. (sic!) 'S' application_name (driver throws exception)
Hello all,
I would like your advice on the following matter. If I am not wrong, by
implementing a warm standby (PG 8.4) the WAL archives are sent to the
failover server, and when the time comes the failover server, which already
has a copy of the primary's /data directory and all the WAL archives, starts
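Picking up where the question trails off: in the 8.4 warm-standby scheme
the standby keeps replaying archives until failover is signalled, typically
by creating the trigger file that pg_standby watches for (the path matches
the assumed config sketch earlier in this digest):

# on the standby: tells pg_standby to stop waiting and finish recovery
touch /tmp/pgsql.trigger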
> Asia writes:
> > Now the issue is that when using libpq it was enough to have only the root
> > certificate in the server's root.crt and it worked fine.
> > But when I tried the same with JDBC, it turned out that I need to put the
> > whole chain (2 certs) of Intermediate CA 1 in the server's root.crt.
>
On Mon, 5 Sep 2011 14:23:12 +0300, Oguz Yilmaz wrote:
Hi,
We need some handy method for compression of pgsql communication on
port 5432. For my case, database is available over the internet and
application logic has to reach the database remotely.
I have searched for it and found those threads:
Hi,
We need some handy method for compression of pgsql communication on
port 5432. For my case, database is available over the internet and
application logic has to reach the database remotely.
I have searched for it and found those threads:
http://archives.postgresql.org/pgsql-hackers/2002-05/ms
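One approach that usually comes up in those threads is an SSH tunnel with
compression enabled, since a plain libpq connection sends its traffic
uncompressed; a minimal sketch (host name and local port are assumptions):

# forward local port 5433 to the server's 5432, compressing with -C
ssh -C -N -L 5433:localhost:5432 user@db.example.com
# then point the application at localhost:5433 instead of the remote host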
I agree that there are better ways to do this.
But for me this works. (legacy-driven situation)
INSERT INTO tbinitialisatie (col1, col2)
SELECT 'x', 'y'
FROM tbinitialisatie
WHERE NOT EXISTS (SELECT * FROM tbinitialisatie
                  WHERE col1 = 'x' AND col2 = 'y')
LIMIT 1
Pau Marc Muñoz Torres
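For comparison, a minimal sketch of the same guard written without scanning
the target table in the FROM clause (same table and values as the snippet
above); note that without a unique constraint both forms are racy under
concurrent inserts:

INSERT INTO tbinitialisatie (col1, col2)
SELECT 'x', 'y'
WHERE NOT EXISTS (SELECT 1 FROM tbinitialisatie
                  WHERE col1 = 'x' AND col2 = 'y');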
Hello,
During testing of a (forked) driver we have seen the following strange
behaviour. The JDBC driver mainly invokes Fastpath to obtain LOBs; because
of insufficient privileges I get
1. Some bytes
2. 'E' (error about privileges)
3. (sic!) 'S' application_name (driver throws exception)
Now I analyse bu
OK, thanks Sim, now I see it
P
2011/9/5 Sim Zacks
> On 09/05/2011 01:37 PM, Pau Marc Muñoz Torres wrote:
>
> I don't see it clearly; let me give an example.
>
> I have the following table:
>
> molec varchar(30)
> seq varchar(100)
>
> where I insert my values
>
> let's imagine that I have a record
On 09/05/2011 01:37 PM, Pau Marc Muñoz Torres wrote:
I don't see it clearly; let me give an example.
I have the following table:
molec varchar(30)
seq varchar(100)
where I insert my values
let's imagine that I have a record introduced
I don't see it clearly; let me give an example.
I have the following table:
molec varchar(30)
seq varchar(100)
where I insert my values
Let's imagine that I have a record introduced as ('ubq', 'aadgylpittrs').
How can I prevent inserting another record where molec='ubq'?
thanks
2011/9/5 Thomas Ke
On 09/05/2011 12:38 PM, Pau Marc Muñoz Torres wrote:
Hi folks
I am trying to perform a conditional insert into a table; indeed,
what I'm trying to do is to not insert a record into the table if
that record exists
googling I found something like
Pau Marc Muñoz Torres, 05.09.2011 11:38:
Hi folks
I am trying to perform a conditional insert into a table; indeed, what I'm
trying to do is to not insert a record into the table if that record exists
googling I found something like
insert into XX values (1,2,3) where not exist (select
On 05/09/2011 10:38, Pau Marc Muñoz Torres wrote:
> Hi folks
>
> I am trying to perform a conditional insert into a table; indeed, what
> I'm trying to do is to not insert a record into the table if that record
> exists
>
> googling I found something like
>
> insert into XX values (1,2,3) where no
On Monday 05 September 2011 12:38:34, Pau Marc Muñoz Torres wrote:
> Hi folks
>
> I am trying to perform a conditional insert into a table; indeed, what I'm
> trying to do is to not insert a record into the table if that record exists
>
That's what primary/unique keys are for.
Isolate the columns that must be unique
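To make that concrete, a minimal sketch built on Pau's example table from
elsewhere in this thread (the table name is made up):

-- a unique constraint makes the database itself reject duplicates
CREATE TABLE molecules (
    molec varchar(30) UNIQUE,
    seq   varchar(100)
);

INSERT INTO molecules VALUES ('ubq', 'aadgylpittrs');  -- succeeds
INSERT INTO molecules VALUES ('ubq', 'aadgylpittrs');  -- fails: unique violation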
Hi folks
I am trying to perform a conditional insert into a table; indeed, what I'm
trying to do is to not insert a record into the table if that record exists
googling I found something like
insert into XX values (1,2,3) where not exist (select );
but I'm having an error near where...
an