Hello,
I have a BDR setup with two nodes. If I bring one node down, I see that
the replication slot becomes inactive with the error below.
10.106.43.152(43253) nsxpostgres 79845 2016-05-25 23:58:19 GMT nsxdb DETAIL:
streaming transactions committing after 0/111A9148, reading WAL from 0/110F0
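A quick way to check slot state on the surviving node is to query
pg_replication_slots; this is only a sketch, with the database name taken
from the log line above:

```shell
# List replication slots: a downed peer's slot shows active = f,
# and its restart_lsn stops advancing (WAL is retained from there).
psql -U postgres -d nsxdb -c \
  "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"
```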
Hello,
while playing around with the parallel aggregates and seq scan in 9.6beta I
noticed that Postgres will stop using parallel plans when cpu_tuple_cost is set
to a very small number.
When using the defaults and max_parallel_degree = 4, the following (test) query
will be executed with 4 workers
On Fri, 27 May 2016, 9:26 p.m., Thomas Kellerer wrote:
> Hello,
>
> while playing around with the parallel aggregates and seq scan in 9.6beta
> I noticed that Postgres will stop using parallel plans when cpu_tuple_cost
> is set to a very small number.
>
> When using the defaults and max_parallel_degree = 4, the following (test)
Thomas Kellerer writes:
> while playing around with the parallel aggregates and seq scan in 9.6beta I
> noticed that Postgres will stop using parallel plans when cpu_tuple_cost is
> set to a very small number.
If you don't reduce the parallel-plan cost factors proportionally,
it's not very surprising.
Tom Lane schrieb am 27.05.2016 um 15:48:
Thomas Kellerer writes:
while playing around with the parallel aggregates and seq scan in
9.6beta I noticed that Postgres will stop using parallel plans when
cpu_tuple_cost is set to a very small number.
If you don't reduce the parallel-plan cost factors proportionally,
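Tom's point can be seen in a session like this (a sketch; the table name is
made up). Lowering cpu_tuple_cost alone makes the serial scan look nearly
free, so the parallel plan's fixed parallel_setup_cost and per-row
parallel_tuple_cost are no longer worth paying unless they are scaled
down too:

```shell
# Illustrative 9.6beta session. GUC names are the parallel-cost settings
# (defaults: parallel_setup_cost = 1000, parallel_tuple_cost = 0.1);
# the table name big_table is a placeholder.
psql -d test <<'SQL'
SET max_parallel_degree = 4;
SET cpu_tuple_cost = 0.00001;             -- serial scan now looks almost free
EXPLAIN SELECT count(*) FROM big_table;   -- likely no Gather node anymore
SET parallel_tuple_cost = 0.0001;         -- scale the parallel costs down...
SET parallel_setup_cost = 10;             -- ...proportionally
EXPLAIN SELECT count(*) FROM big_table;   -- parallel plan comes back
SQL
```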
Hello,
I am working to migrate 2 DB's (not the entire postgres instance), from 1
host to another... and I need some guidance on the best approach/practice.
I have migrated ~25 other DB's in this environment, and I was able to use
pg_dump/pg_restore for those, and it worked fine. These final 2 are
On Fri, May 27, 2016 at 4:56 PM, Jeff Baldwin wrote:
> Hello,
>
> I am working to migrate 2 DB's (not the entire postgres instance), from 1
> host to another... and I need some guidance on the best approach/practice.
>
> I have migrated ~25 other DB's in this environment, and I was able to use
> pg_dump/pg_restore for those, and it worked fine.
Melvin,
Thank you for taking the time to reply to my question.
Below are the details you have requested:
SOURCE:
CentOS release 4.6
Postgres 8.3
TARGET:
CentOS release 6.2
Postgres 8.3
Kind Regards,
Jeff
On Fri, May 27, 2016 at 5:05 PM Melvin Davidson wrote:
>
> On Fri, May 27, 2016 at 4:56
On Fri, May 27, 2016 at 5:09 PM, Jeff Baldwin wrote:
> Melvin,
>
> Thank you for taking the time to reply to my question.
>
> Below are the details you have requested:
>
> SOURCE:
> CentOS release 4.6
> Postgres 8.3
>
> TARGET:
> CentOS release 6.2
> Postgres 8.3
>
> Kind Regards,
> Jeff
>
> On Fri, May 27, 2016 at 5:05 PM Melvin Davidson wrote:
Thanks Melvin.
I have done just this, and the time required to dump/restore in this manner
far exceeds the outage window we can afford to have (max of 2hrs). I am
looking for alternatives to the standard dump/restore that might help me
save time.
For instance... if I could do a continuous rsync
On Fri, May 27, 2016 at 5:23 PM, Jeff Baldwin wrote:
> Thanks Melvin.
>
> I have done just this, and the time required to dump/restore in this
> manner far exceeds the outage window we can afford to have (max of 2hrs).
> I am looking for alternatives to the standard dump/restore that might help
> me save time.
On Friday, May 27, 2016 05:32:08 PM Melvin Davidson wrote:
> Well, Slony certainly will do the trick.
> Keep in mind you will need to do schema only first to the slave.
> You set up replication from the old server with the db on the new server as
> the slave. Then you initiate replication. It will
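The schema-only step Melvin mentions can be sketched as follows (hostnames
are placeholders; the database name is taken from the thread):

```shell
# Copy just the schema to the new server before subscribing it as a
# Slony slave; hostnames below are placeholders.
pg_dump -h old-host -U postgres --schema-only mls11 | \
  psql -h new-host -U postgres -d mls11
```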
Thank you for your time Alan.
I'd like to confirm my understanding of your statement, and ask a question.
To move the DB, you are suggesting something like this:
pg_dump -h dbms11 -U postgres -C mls11 | psql -h localhost -d mls11 -U
postgres
I'm not familiar with removing/adding indexes (I'm not a DBA, just trying
to pretend to be one).
Jeff,
is (temporarily) migrating the whole cluster an option? What I have in mind is
roughly this:
- rsync/copy complete db dir to target (with src still being in production),
throttle/repeat as necessary
- stop source db
- rsync again
- start src + target dbs
- drop moved databases in src
- dr
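Hannes's sequence might look roughly like this; the data directory path,
the target hostname, and the --bwlimit value are all illustrative:

```shell
# Pass 1: pre-seed while the source is still live. This copy is
# inconsistent, which is fine - only the final pass after shutdown
# must be consistent. Throttle with --bwlimit and repeat as needed.
rsync -a --bwlimit=20000 /var/lib/postgresql/data/ target:/var/lib/postgresql/data/
# Stop the source cluster cleanly, then take the final consistent pass.
pg_ctl -D /var/lib/postgresql/data stop -m fast
rsync -a --delete /var/lib/postgresql/data/ target:/var/lib/postgresql/data/
```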
Hannes,
Thank you for the message. I like your idea, but one thing I forgot to
mention is that my target postgres cluster already has production DB's
running on it. I think your solution would overwrite those? Or cause
other issues on the target side?
Perhaps I could stand up a 2nd p
> To move the DB, you are suggesting something like this:
> pg_dump -h dbms11 -U postgres -C mls11 | psql -h localhost -d mls11
Basically yes.
> I'm not familiar with removing/adding indexes (I'm not a DBA, just trying
> to pretend to be one
Thanks Greg,
Sounds like I've unknowingly stumbled onto a good path, the one you
suggested.
I actually installed v9.5 on the target server. I have it running on a
different port (5444) and using a different data directory than the v8.3
install.
I'm doing the dump, and forwarding it to the remo
I got that figured out, and the data is now going into my v9.5 cluster
(shiny and new!).
I happened to hit 'Enter' in my terminal window after it had been stagnant
for ~1hr, and it gave me this error:
psql: fe_sendauth: no password supplied
I corrected that with a .pgpass file and things are looking good.
Tha
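For reference, the .pgpass fix looks like this (host, port, database, and
credentials are placeholders; the 0600 mode is required or libpq ignores
the file):

```shell
# Create ~/.pgpass so non-interactive psql/pg_dump can authenticate.
# Field order is host:port:database:user:password; values are placeholders.
cat > ~/.pgpass <<'EOF'
dbms11:5444:mls11:postgres:secret
EOF
chmod 600 ~/.pgpass   # libpq refuses to use the file otherwise
```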
hi,
we have the following situation:
pg 9.3.11 on ubuntu.
we have a master and a slave.
the db is large-ish, but we're removing *most* of its data from all
across the tables, and lots of tables too.
while we're doing it, sometimes, we get LOTS of processes, but only on
the slave, never on the master, that spend l
Hi
2016-05-28 7:19 GMT+02:00 hubert depesz lubaczewski:
> hi,
> we have following situation:
> pg 9.3.11 on ubuntu.
> we have master and slave.
> the db is large-ish, but we're removing *most* of its data from all
> across the tables, and lots of tables too.
>
> while we're doing it, sometimes,
On Sat, May 28, 2016 at 07:25:18AM +0200, Pavel Stehule wrote:
> It looks like a spinlock issue.
> Try looking at it with "perf top".
First results look like:
Samples: 64K of event 'cpu-clock', Event count (approx.): 2394094576
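Pavel's suggestion can be run like this (a sketch; the process selection is
illustrative, and debug symbols are needed for perf to resolve function
names):

```shell
# Sample a busy slave backend with perf; without postgresql debug
# symbols the hot entries show up as bare addresses, not function names.
perf top -p "$(pgrep -f 'postgres: startup' | head -n1)"
# Or record system-wide with call graphs for ~10s and then browse:
perf record -a -g -- sleep 10 && perf report
```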
2016-05-28 7:45 GMT+02:00 hubert depesz lubaczewski:
> On Sat, May 28, 2016 at 07:25:18AM +0200, Pavel Stehule wrote:
> > It looks like a spinlock issue.
> > Try looking at it with "perf top".
>
> First results look like:
>
> Samples: 64K of event 'cpu-clock', Event count (approx.): 2394094576
>
On Sat, May 28, 2016 at 07:46:52AM +0200, Pavel Stehule wrote:
> you should install debug info - or compile with debug symbols
Installed debug info, and the problem stopped.
I don't think it's related - it could be just timing. I'll report back
if/when the problem reappears.
Best regards,