I'm still getting my pg_dumps on the 9.1 hot standby cancelled
occasionally, despite hot_standby_feedback being set.
pg_stat_replication tells me the replication connection is not being
reset or anything.
The last one was:
pg_dump: Error message from server: ERROR: canceling statement due to
conflict with recovery
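For anyone hitting the same thing: hot_standby_feedback only prevents the vacuum-cleanup class of conflicts; lock conflicts (e.g. DDL on the primary) can still cancel standby queries. The knobs involved look roughly like this on a 9.1 standby (values are illustrative, not recommendations):

```
# postgresql.conf on the standby -- illustrative values only
hot_standby_feedback = on            # ask the primary not to vacuum away rows the standby still needs
max_standby_streaming_delay = 600s   # how long to delay WAL replay before cancelling conflicting queries
```

Raising max_standby_streaming_delay buys long-running dumps more time at the cost of replication lag during the dump.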
On Thu, Sep 6, 2012 at 6:06 PM, Adrian Klaver wrote:
> On 09/06/2012 04:19 PM, Scott Marlowe wrote:
>>
>> That shouldn't really matter. Either the db is just on the NAS in
>> which case as long as pg compiles on it then the client on the main
>> unit shouldn't matter, or the data is just stored t
Em 06/09/2012 15:40, John R Pierce escreveu:
On 09/06/12 5:30 AM, Edson Richter wrote:
You could change the default setting for the user with
ALTER ROLE someuser SET search_path=...
That is perfect! I can have separate users for each application, and
then they will have the correct search path
On Fri, Sep 7, 2012 at 4:10 AM, Andrew Barnham wrote:
> Scratch that. An immediate show stopping pitfall occurs to me: the
> necessity to match CPU/OS Architecture between primary server and replicate
> target. Doubtful that there are any consumer NAS products out there
> running linux on 64bit/i
On Wed, 2012-09-05 at 21:15 -0400, David Johnston wrote:
> On Sep 5, 2012, at 19:02, Sergio Basurto
> wrote:
>
>
>
> > On Tue, 2012-09-04 at 21:58 -0400, David Johnston wrote:
> >
> > > On Sep 4, 2012, at 21:39, Sergio Basurto wrote:
> > >
> > > > I am using regexp_matches in a function lik
Jeff Janes writes:
>> That sounds like you lack an index on the referencing column of the
>> foreign key constraint. Postgres doesn't require you to keep such
>> an index, but it's a really good idea if you ever update the referenced
>> column.
> For updating 20 million out of 500 million rows,
On Thursday, September 06, 2012 05:06:27 PM Jeff Janes wrote:
> For updating 20 million out of 500 million rows, wouldn't a full table
> scan generally be preferable to an index scan anyway?
>
Not one table scan for each row updated ...
--
Sent via pgsql-general mailing list (pgsql-general@po
On 09/06/2012 04:19 PM, Scott Marlowe wrote:
That shouldn't really matter. Either the db is just on the NAS in
which case as long as pg compiles on it then the client on the main
unit shouldn't matter, or the data is just stored there and the db is
on the main unit, client and all and again it w
>
>> There are also rare cases where I might want to make a correction. For
>> example, one of the columns is sample name which is a foreign key to a
>> samples table defined with " ON UPDATE CASCADE." I decided to change a
>> sample name in the samples table which should affect about 20 milli
That shouldn't really matter. Either the db is just on the NAS in
which case as long as pg compiles on it then the client on the main
unit shouldn't matter, or the data is just stored there and the db is
on the main unit, client and all and again it wouldn't matter.
But the client and server do N
On Sep 6, 2012, at 5:54 PM, Tom Lane wrote:
> That sounds like you lack an index on the referencing column of the
> foreign key constraint. Postgres doesn't require you to keep such
> an index, but it's a really good idea if you ever update the referenced
> column.
Thanks. You're right. That
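To make Tom's point concrete, the fix is an index on the referencing column; a minimal sketch with made-up table and column names matching the scenario described earlier in the thread:

```sql
-- hypothetical schema: a big table whose sample_name column references
-- samples(name) with ON UPDATE CASCADE
CREATE INDEX measurements_sample_name_idx ON measurements (sample_name);
-- without such an index, each cascaded update of samples.name forces a
-- full scan of the referencing table to find the rows to update
```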
Andrew Barnham writes:
> Scratch that. An immediate show stopping pitfall occurs to me: the
> necessity to match CPU/OS Architecture between primary server and replicate
> target. Doubtful that there are any consumer NAS products out there
> running linux on 64bit/intel
Maybe not, but there are
On Tue, 2012-09-04 at 21:58 -0400, David Johnston wrote:
> On Sep 4, 2012, at 21:39, Sergio Basurto wrote:
>
> > I am using regexp_matches in a function like this
> >
> > create or replace function test (v_string in text) returns varchar as
> > $$
> > declare
> > i_strings
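The quoted function is cut off above, but for readers unfamiliar with the call under discussion, regexp_matches returns a set of text arrays; a standalone example:

```sql
-- with the 'g' flag, every match becomes a row; each row is a text[]
-- holding the capture groups
SELECT regexp_matches('a=1;b=2;c=3', '(\w+)=(\w+)', 'g');
-- rows: {a,1}, {b,2}, {c,3}
```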
2012/9/4 Albe Laurenz
> Yvon Thoraval wrote:
> > on my computer I had a disk probleme, then i had to reinstall the system
> (Xubuntu 12.04).
> > I've backuped some parts of the disk, namely /etc, /var and /home.
> > I wonder if I'm able to recover my past databases in the
> /var/lib/postgresql/9.
Scratch that. An immediate show stopping pitfall occurs to me: the
necessity to match CPU/OS Architecture between primary server and replicate
target. Doubtful that there are any consumer NAS products out there
running linux on 64bit/intel
On Fri, Sep 7, 2012 at 8:23 AM, Andrew Barnham wrote:
>
Hi
I currently run a modest streaming replication target on a cheap, single
disk ASUS media center; replicating a 100GB PG database.
I want to add RAID via a consumer grade NAS device.
As far as I can tell consumer grade NAS devices these days appear to be
fairly rich & flexible embedded linux/f
Aram Fingal writes:
> I have a table which currently has about 500 million rows. For the most
> part, the situation is going to be that I will import a few hundred million
> more rows from text files once every few months but otherwise there won't be
> any insert, update or delete queries. I
Hi Pavel,
Hm... To me the workaround looks like exactly the same thing?
1) it uses Dynamic SQL to build the query (but returns a refcursor instead of text)
2) the client still needs to execute 2 commands (the second is a fetch instead of
execute 'result')
However, based on your name, and the name of the blog author :
I have a table which currently has about 500 million rows. For the most part,
the situation is going to be that I will import a few hundred million more rows
from text files once every few months but otherwise there won't be any insert,
update or delete queries. I have created five indexes, so
On 09/06/12 1:34 PM, Kobus Wolvaardt wrote:
Something that is curious is that if a DB takes long, it really takes
horribly long, like some kind of lock is holding it. It would sit at
a few kb dump size for 20 minutes and then run a bit and get stuck
again (as far as we can tell), what we do kn
Hi,
I hope this is the right list. I have read through a few pg_dump slow posts
but none of them seem to apply to our case. We have a nice big server
running windows server 2008 and postgres 8.4. The machine does nothing else
and every so often running the nightly backup takes 10 to 12 hours to
com
Bruce Momjian writes:
> On Thu, Sep 6, 2012 at 07:18:50PM +0200, Willy-Bas Loos wrote:
>> Is it possible to use the output of explain as text values?
> I think you have to do EXPLAIN in a function and call the function.
Yeah, IIRC you can use EXPLAIN as the source statement in a plpgsql
FOR loo
Hello
2012/9/6 Misa Simic :
> That is one of the most wanted features of PostgreSQL, which is not solved yet...
>
> But it seems it will be soon with the introduction of Stored Procedures...
>
I wish :)
> For now, you must "know" what result (columns) you expect...
>
> So the only option for now is to
On 09/06/12 5:30 AM, Edson Richter wrote:
You could change the default setting for the user with
ALTER ROLE someuser SET search_path=...
That is perfect! I can have separate users for each application, and
then they will have the correct search path.
You saved my day,
the default search_path
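Concretely, the per-role approach John describes looks like this; role names are made up, and the schema names follow the layout described elsewhere in the thread, so treat them as illustrative:

```sql
-- one role per application, each with its own default schema;
-- takes effect on the next session the role opens
ALTER ROLE app1_user SET search_path = schema1, main, public;
ALTER ROLE app2_user SET search_path = schema2, main, public;
```

This also plays well with connection pooling, since the setting follows the role rather than the individual connection.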
Bruce, thx for answering.
I can't find the example you mean.
Tried a function, but it won't work..
create or replace function test() returns setof record as $$
declare
t_rec record;
begin
for t_rec in (
explain
select *
from (values ('a'),('b'), ('c')) foo(x)
where x > 'a'
) loop
return next t_rec;
end loop;
return;
end;
$$ language plpgsql;
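For what it's worth, a variant of the failed attempt above that does run (assuming this is what Tom's FOR-loop suggestion amounts to) routes the EXPLAIN through EXECUTE and returns text rather than record:

```sql
-- sketch: capture EXPLAIN output line by line inside plpgsql
create or replace function test_explain() returns setof text as $$
declare
  t_line text;
begin
  for t_line in execute $q$
    explain
    select *
    from (values ('a'),('b'),('c')) foo(x)
    where x > 'a'
  $q$
  loop
    return next t_line;  -- one plan line per row
  end loop;
  return;
end;
$$ language plpgsql;

select * from test_explain();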
That is one of the most wanted features of PostgreSQL, which is not solved yet...
But it seems it will be soon with the introduction of Stored Procedures...
For now, you must "know" what result (columns) you expect...
So the only option for now is to use Dynamic SQL - to build your query
dynamically b
On Thu, Sep 6, 2012 at 07:18:50PM +0200, Willy-Bas Loos wrote:
> Hi,
>
> Is it possible to use the output of explain as text values?
> This won't work:
>
> explain select *
> from (values (1),(2),(3)) foo(x)
> where x > 2
>
> What I really want is to explain analyze a dynamic query that I build
correction.
What won't work is:
select y||'--some text'
from
(
explain select *
from (values (1),(2), (3)) foo(x)
where x > 2
) bar(y)
Cheers,
WBL
On Thu, Sep 6, 2012 at 7:18 PM, Willy-Bas Loos wrote:
> Hi,
>
> Is it possible to use the output of explain as text values?
> This won't work:
>
>
Hi,
Is it possible to use the output of explain as text values?
This won't work:
explain select *
from (values (1),(2),(3)) foo(x)
where x > 2
What I really want is to explain analyze a dynamic query that I build up in
a function.
If it returns a value I can do stuff with it, but I can't find ou
a very nice way is to use a cursor.
http://okbob.blogspot.cz/2008/08/using-cursors-for-generating-cross.html
HTH
WBL
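A sketch of the cursor approach from the linked post (function and cursor names here are mine, not from the blog):

```sql
-- open a cursor over the EXPLAIN output and hand it back to the client
CREATE OR REPLACE FUNCTION explain_cursor(q text) RETURNS refcursor AS $$
DECLARE
  c refcursor := 'explain_cur';
BEGIN
  OPEN c FOR EXECUTE 'EXPLAIN ANALYZE ' || q;
  RETURN c;
END;
$$ LANGUAGE plpgsql;

-- the cursor lives for the transaction, so fetch inside one
BEGIN;
SELECT explain_cursor('select 1');
FETCH ALL IN explain_cur;
COMMIT;
```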
On Thu, Sep 6, 2012 at 12:40 PM, Vincent Veyron wrote:
> Le jeudi 06 septembre 2012 à 00:40 -0700, Chris Travers a écrit :
> >
> >
> > On Wed, Sep 5, 2012 at 10:14 PM, punnoose
> Date: Mon, 3 Sep 2012 09:31:21 +0100
> Subject: Re: [GENERAL] UPDATE RULE to be invoked when UPDATE .. WHERE fails
> the WHERE predicate ?
> From: dean.a.rash...@gmail.com
> To: johnlu...@hotmail.com
> CC: pgsql-general@postgresql.org; pavan.deola...@
Em 06/09/2012 09:21, Albe Laurenz escreveu:
Edson Richter wrote:
2) Is there a way to specify the default schema in JDBC url
(or command I can issue to change
the default schema at runtime, like "set path...")?
SET search_path=schema1,schema2,public;
Problem is that my application uses JDBC an
Em 06/09/2012 02:34, Chris Travers escreveu:
On Wed, Sep 5, 2012 at 7:56 PM, Edson Richter wrote:
Em 05/09/2012 23:49, Chris Travers escreveu:
Regarding MySQL vs PostgreSQL:
MySQL is what you get when app developers build a database ser
Edson Richter wrote:
>>> 2) Is there a way to specify the default schema in JDBC url
>>> (or command I can issue to change
>>> the default schema at runtime, like "set path...")?
>> SET search_path=schema1,schema2,public;
> Problem is that my application uses JDBC and Connection Pooling. After
a
Em 06/09/2012 05:12, Albe Laurenz escreveu:
Edson Richter wrote:
That's what I want to do know: I would like to consolidate these 4
separate databases in 1
database with 5 schemas:
- Main schema: will have all shared tables, that will be
read only most of time;
- Schema1 to Schema4: will h
Le jeudi 06 septembre 2012 à 00:40 -0700, Chris Travers a écrit :
>
>
> On Wed, Sep 5, 2012 at 10:14 PM, punnoose
> wrote:
> I want to have a pivot-like function in which I should have
> a variable number
> of columns. I went for crosstab but it does not support a variable
>
Sireesha Modumudi wrote:
> I am using postgres 8.3.9 on SUSE 64 bit. By default max_connections
is 100, but I want to know if
> this can be increased, if so, what should we take into consideration?
It can be increased, but you have to restart the server for
the change to take effect.
It is not a
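The change itself is a one-line edit followed by a restart; the values below are illustrative, not a recommendation:

```
# postgresql.conf -- requires a full server restart to take effect
max_connections = 200
# considerations on 8.3: more connections mean more shared memory, so the
# kernel's SHMMAX may need raising, and each active connection can use up
# to work_mem per sort/hash, so total memory use grows with this setting
```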
Edson Richter wrote:
> That's what I want to do know: I would like to consolidate these 4
separate databases in 1
> database with 5 schemas:
>
> - Main schema: will have all shared tables, that will be
> read only most of time;
> - Schema1 to Schema4: will have their own tables, read write.
>
>
On Wed, Sep 5, 2012 at 10:14 PM, punnoose wrote:
> I want to have a pivot-like function in which I should have a variable number
> of columns. I went for crosstab but it does not support a variable number of
> columns. Can anybody suggest an alternative? Like if I have an event at a
> particular time of t
On Thu, Sep 6, 2012 at 10:44 AM, punnoose wrote:
> I want to have a pivot-like function in which I should have a variable number
> of columns. I went for crosstab but it does not support a variable number of
> columns. Can anybody suggest an alternative? Like if I have an event at a
> particular time of t
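For context on why crosstab can't do this: tablefunc's crosstab requires the output columns to be declared up front in the FROM clause, so the column count is fixed at query-write time. A sketch with hypothetical table and column names:

```sql
-- the AS ct(...) list fixes the number and types of output columns;
-- a source row with a time slot outside slot1..slot3 is simply dropped
SELECT *
FROM crosstab(
  $$ SELECT event_name, time_slot, event_count
     FROM events ORDER BY 1, 2 $$
) AS ct(event_name text, slot1 int, slot2 int, slot3 int);
```

A truly variable column list needs either dynamic SQL that generates the crosstab call, or returning the pivot as arrays/rows instead of columns.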