On Thu, Feb 21, 2019 at 09:14:24PM -0800, Adrian Klaver wrote:
> This would be a question for AWS RDS support.
And this also depends a lot on your schema, your column alignment, and
the level of bloat of your relations.
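For a rough idea of the on-disk footprint per relation before sizing the target instance, the standard size functions can be used (a minimal sketch; it lists the ten largest ordinary tables):
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size,
       pg_size_pretty(pg_relation_size(oid))       AS heap_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;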
--
Michael
On Thu, Feb 21, 2019 at 08:32:01PM +0100, Peter Eisentraut wrote:
> On 2019-02-21 05:47, Michael Paquier wrote:
>> if (conn->ssl_in_use)
>> + {
>> +     /*
>> +      * The server has offered SCRAM-SHA-256-PLUS,
>
On 2/21/19 9:08 PM, github kran wrote:
Hello Pgsql-General,
We currently have around 6 TB of data and have plans to move some
historic data into RDS, close to 1 TB of data. The total rows in the
partitioned tables is around 6 billion today, and we plan to keep
the data long ter
Hello Pgsql-General,
We currently have around 6 TB of data and have plans to move some
historic data into RDS, close to 1 TB of data. The total rows in the
partitioned tables is around 6 billion today, and we plan to keep
the data long term, which would be around 5-8 billion rows per
Bruce Momjian writes:
> On Thu, Feb 21, 2019 at 09:31:32PM -0500, Stephen Frost wrote:
>> * Bruce Momjian (br...@momjian.us) wrote:
>>> There was too much concern that users would accidentally start the old
>>> server at some later point, and its files would be hard linked to the
>>> new live serv
Thanks for the feedback.
On Tue, Feb 19, 2019 at 11:12 AM Michael Lewis wrote:
> 1) You can increase it as much as you want but (auto)analyze will take
> longer to examine the values of default_statistics_target * 300 rows and compute
> the most common values and the frequencies of those values. How m
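For reference, the statistics target can also be raised per column rather than globally, which keeps the extra (auto)analyze cost confined to the columns that actually need it (a minimal sketch; the table and column names are illustrative):
-- raise the sample target for one column only (hypothetical names)
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
-- or raise the default for the current session and re-analyze
SET default_statistics_target = 500;
ANALYZE orders;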
On Thu, Feb 21, 2019 at 09:31:32PM -0500, Stephen Frost wrote:
> Greetings,
>
> * Bruce Momjian (br...@momjian.us) wrote:
> > On Tue, Feb 19, 2019 at 12:25:24PM -0500, Stephen Frost wrote:
> > > Ah, right, I forgot that it did that, fair enough.
> > >
> > > I've never been thrilled with that part
Greetings,
* Bruce Momjian (br...@momjian.us) wrote:
> On Tue, Feb 19, 2019 at 12:25:24PM -0500, Stephen Frost wrote:
> > Ah, right, I forgot that it did that, fair enough.
> >
> > I've never been thrilled with that particular approach due to the
> > inherent risks of people messing directly with
On Tue, Feb 19, 2019 at 12:25:24PM -0500, Stephen Frost wrote:
> Ah, right, I forgot that it did that, fair enough.
>
> I've never been thrilled with that particular approach due to the
> inherent risks of people messing directly with files like pg_control,
> but that's how it is for now.
There w
On Sun, Feb 17, 2019 at 02:52:07PM -0700, legrand legrand wrote:
> Hello,
>
> It seems that pgss doesn't track commit (or rollback) commands from
> pl/pgsql blocks.
> Using psql in version 11.1:
>
> select pg_stat_statements_reset();
> do $$ begin commit; end $$;
> select calls,query from pg_sta
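For anyone wanting to try this, a complete form of the repro would look roughly as follows (a sketch assuming pg_stat_statements is loaded via shared_preload_libraries and the extension is created in the database):
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
SELECT pg_stat_statements_reset();
DO $$ BEGIN COMMIT; END $$;
SELECT calls, query FROM pg_stat_statements ORDER BY calls DESC;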
Tiffany, have you tried the clone_schema function? It seems to me it does
exactly what you need, with no dumping or restoring. There is
even an option to copy the data or not; the default is not to.
On Thu, Feb 21, 2019 at 3:23 PM Adrian Klaver
wrote:
> On 2/21/19 11:52 AM, Tiffany Thang wrote:
> > Thanks e
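For context, clone_schema is a community-contributed PL/pgSQL function rather than a built-in. Assuming the commonly circulated variant with a copy-data flag, usage would look roughly like this (the signature and schema names are assumptions):
-- hypothetical call; argument order and names depend on the clone_schema variant installed
SELECT clone_schema('public', 'public_copy', false);  -- false = do not copy data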
Hello again Rob,
Thank you for pointing that out.
Here is what I did:
1. Copied the server.crt created in the postgresqlSERVER's /var/lib/CA/server
directory to the client side.
2. Ran this command:
openssl x509 -in server.crt -out server.crt.der -outform der
3. keytool -keystore $JAVA_HOME/jre/lib/sec
Hi Tom,
I just cleaned house in my mailbox and found this email, which got buried. You
were spot on about the prepared transactions; we had some hung Hibernate
threads that were acting up.
Sorry for resurrecting this ages-old thread, but I wanted to say a big fat
*THANKS*!
Cheers,
Tamas Kalman.
On Thu
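For readers hitting the same symptom, orphaned prepared transactions can be spotted and cleared like this (a minimal sketch; the gid value is illustrative):
-- list prepared transactions left behind by the application
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;
-- roll back a specific one by its global identifier (hypothetical gid)
ROLLBACK PREPARED '131077_example_gid';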
On 2/21/19 11:52 AM, Tiffany Thang wrote:
Thanks everyone. Unfortunately the schema rename would not work since
the source database will be our production system. We have not gone live
yet but the system is expected to be constantly used.
I have multiple tables that I need to export ranging fr
Dear all, I have a question for you.
Even though hot_standby_feedback = on is enabled, queries continue to be
canceled. In addition to enabling this parameter, do I have to modify any of
these?
#max_standby_archive_delay = 30s
#max_standby_streaming_delay = 30s
#wal_receiver_status_interval = 10s
The id
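As a starting point, the standby itself records why queries are being canceled; checking that before changing the delay settings is usually worthwhile (a minimal sketch, run on the standby):
-- per-database counts of canceled queries, broken down by conflict type
SELECT datname, confl_snapshot, confl_lock, confl_bufferpin,
       confl_tablespace, confl_deadlock
FROM pg_stat_database_conflicts;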
Thanks everyone. Unfortunately the schema rename would not work since the
source database will be our production system. We have not gone live yet
but the system is expected to be constantly used.
I have multiple tables that I need to export ranging from 20GB to 60GB
each. The parallel will not wo
On 2019-02-21 05:47, Michael Paquier wrote:
> if (conn->ssl_in_use)
> + {
> +     /*
> +      * The server has offered SCRAM-SHA-256-PLUS, which is only
> +      * supported by the c
On 2019-02-20 17:45, Rob Nikander wrote:
>> On Feb 20, 2019, at 10:07 AM, Peter Eisentraut
>> wrote:
>>
>> You can run SET TRANSACTION ISOLATION LEVEL in a procedure.
>
> I tried that before but I get this error:
>
> create or replace procedure t_test(n integer)
> as $$
> begin
You
No doubt it'll take a while...
You said you have 36 databases. Could you move half of them using
pg_dump/pg_restore over a few outage windows? (Doing it in bite-sized
pieces reduces risk.)
On 2/21/19 2:27 AM, Julie Nishimura wrote:
Thank you for the suggestions! We realized we cannot add mo
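If it helps with deciding which half to move first, per-database sizes can be listed directly (a minimal sketch):
SELECT datname,
       pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
WHERE datistemplate = false
ORDER BY pg_database_size(datname) DESC;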
Greetings,
* Edson Carlos Ericksson Richter (rich...@simkorp.com.br) wrote:
> No backup solution (no matter which one you choose) is 100% guaranteed: your
> disks may fail, your network may fail, your memory may fail, files may get
> corrupted - so, set up a regular "restore" to a separate "test backup
On 21/02/2019 04:17, Julie Nishimura wrote:
Does anyone use this solution? Any recommendations?
Thanks!
We do use it.
IMHO, these are the minimum recommendations:
1) Start using it! It's easy and robust.
2) For minimal impact on production servers, set up replicated servers
and create y
Hi!
I need help investigating what happened here. I have two different master
servers with standbys,
version: PostgreSQL 9.6.10 on x86_64-pc-linux-gnu (Debian 9.6.10-1.pgdg90+1),
compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
and I create backups with pg_basebackup from the
On 21.02.19 at 08:17, Julie Nishimura wrote:
Does anyone use this solution? Any recommendations?
Thanks!
Sure, many of our customers do. Why not?
Regards, Andreas
--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com
Thank you for the suggestions! We realized we cannot add more space to the
existing cluster due to hardware limitations. So, we decided to go the
other route by introducing a new standby on a new host with a bigger data
volume (with pg_basebackup and putting the master into archive mode), then pro
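Once the new standby is up, its progress can be checked before any cutover (a minimal sketch; the wildcard is used because the streaming-status column names vary between versions):
-- on the primary: is the new standby connected and streaming?
SELECT * FROM pg_stat_replication;
-- on the standby: still in recovery?
SELECT pg_is_in_recovery();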
Hello all,
If I were in your situation, I would analyze whether only part of the 36
databases could be moved to the new disk:
* Either move some of the databases to the new disk,
* Or, for the largest databases, consider working with a multiple-
tablespace configuration, using the command
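The command referred to above is presumably along these lines (a sketch; the path, tablespace, table, and database names are illustrative, and moving a table takes an ACCESS EXCLUSIVE lock while it is copied):
-- create a tablespace on the new disk (hypothetical path, must already exist and be owned by the postgres user)
CREATE TABLESPACE newdisk LOCATION '/mnt/newdisk/pgdata';
-- move one large table onto it
ALTER TABLE big_table SET TABLESPACE newdisk;
-- or move everything in one database's default tablespace (requires no other connections to that database)
ALTER DATABASE mydb SET TABLESPACE newdisk;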