Please bottom-post, otherwise the thread becomes hard to read.
On Thu, 2018-10-11 at 08:44 +0530, Raghavendra Rao J S V wrote:
> We are using PostgreSQL 9.2. It doesn't have that option.
>
> Please suggest any other approaches to improve the performance of pg_dump.
>
> On Thu 11 Oct, 2018, 8
magodo wrote:
> OK... Just take another example:
>
> A B
> BASE-+-+--o1 (recover to A) 1
> | | C
> +.|...+---o2 (regret, recover to B) 2
>| |
>
Thank you very much for your prompt response, Christopher.
On Thu 11 Oct, 2018, 8:41 AM Christopher Browne wrote:
> On Wed, Oct 10, 2018, 10:32 PM Raghavendra Rao J S V <
> raghavendra...@gmail.com> wrote:
>
>> Hi All,
>>
>> pg_dump is taking more time. Please let me know which configuration
>>
We are using PostgreSQL 9.2. It doesn't have that option.
Please suggest any other approaches to improve the performance of pg_dump.
On Thu 11 Oct, 2018, 8:05 AM Pavan Teja wrote:
> Hi,
>
> You can use the -j (jobs) option to speed up the process.
>
> Hope it works.
>
> Regards,
> Pavan
>
> On Th
On Wed, Oct 10, 2018, 10:32 PM Raghavendra Rao J S V <
raghavendra...@gmail.com> wrote:
> Hi All,
>
> pg_dump is taking more time. Please let me know which configuration
> settings we need to modify to speed up the pg_dump backup. We are using
> version 9.2 on a CentOS box.
>
There certainly isn't a c
On Wed, 2018-10-10 at 18:21 +0200, Laurenz Albe wrote:
> magodo wrote:
> > > By default, recovery will stay on the timeline where it started.
> > > If you want to go to timeline 2 or 3, you have to specify
> > > recovery_target_timeline.
> > >
> > For me, the specified point corresponds to timel
On 10/10/2018 09:32 PM, Raghavendra Rao J S V wrote:
Hi All,
pg_dump is taking more time. Please let me know which configuration
settings we need to modify to speed up the pg_dump backup. We are using
version 9.2 on a CentOS box.
Is it taking "more time" because your database is bigger?
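A quick way to check, if in doubt (plain SQL, works on 9.2):

-- total on-disk size of the current database
SELECT pg_size_pretty(pg_database_size(current_database()));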
--
Angu
Hi,
You can use the -j (jobs) option to speed up the process.
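For example, something like this (a sketch; the database name, directory, and
job count are placeholders, and -j needs pg_dump 9.3 or later together with the
directory format, though a newer pg_dump client can usually dump an older server):

# parallel dump into a directory-format archive (pg_dump 9.3+)
pg_dump -Fd -j 4 -f /backups/mydb.dump.d mydb

# parallel restore of that dump
pg_restore -j 4 -d mydb /backups/mydb.dump.d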
Hope it works.
Regards,
Pavan
On Thu, Oct 11, 2018, 8:02 AM Raghavendra Rao J S V <
raghavendra...@gmail.com> wrote:
> Hi All,
>
> pg_dump is taking more time. Please let me know which configuration
> settings we need to modify to speed up
Hi All,
pg_dump is taking more time. Please let me know which configuration settings
we need to modify to speed up the pg_dump backup. We are using version 9.2 on
a CentOS box.
--
Regards,
Raghavendra Rao J S V
On 10/10/18 12:20 PM, Aleš Zelený wrote:
Hello, thanks for the testcase!
Test cases with added msg_handler
test_get_some_id)
Thanks to the chosen small table, it is fast enough:
=# select branch_id from ase.tds_tbl where branch_id::text =
'\x00038500875c3d60';
NOTICE: DB-Library notice: Msg #: 5
Thank you. Let me test it and see the benefit. We have a use case for this.
> On Oct 10, 2018, at 17:18 , Andres Freund wrote:
>
>
>
> On October 10, 2018 2:15:19 PM PDT, Ravi Krishna wrote:
>>>
>>> pg_restore doesn't take locks on the table for the COPY, it does so
>>> because creating th
On October 10, 2018 2:15:19 PM PDT, Ravi Krishna wrote:
>>
>> pg_restore doesn't take locks on the table for the COPY, it does so
>> because creating the table takes an exclusive lock.
>
>
>Interesting. I seem to recollect reading here that I can't have
>concurrent COPY on the same table beca
>
> pg_restore doesn't take locks on the table for the COPY, it does so
> because creating the table takes an exclusive lock.
Interesting. I seem to recollect reading here that I can't have concurrent
COPY on the same table because of the lock.
To give an example:
If I have a large file with
Hi,
On 2018-10-10 17:02:59 -0400, Ravi Krishna wrote:
> I am talking about pg_restore, which does COPY FROM and takes an exclusive
> lock on the table
> while the data is loaded into the table.
pg_restore doesn't take locks on the table for the COPY, it does so
because creating the table takes an exclusive lock.
>
> No, why would that seem to be the case? If it did so, then you could
> not run pg_dump to dump data while regular activity was going on.
Not sure. In fact I am now confused.
I am talking about pg_restore, which does COPY FROM and takes an exclusive lock
on the table while the data is loaded into the table.
On Wed, 10 Oct 2018 at 16:22, Ravi Krishna wrote:
> You obviously are referring to multiple connections running COPY on different
> tables, right? Like what pg_restore does with the -j option.
> Doesn't COPY take an exclusive lock on the table, which makes it incompatible
> with parallelization?
No
>
> You obviously can just copy the data into postgres over multiple
> connections if you need to speed COPY up. But that requires splitting up
> the data on the client side.
>
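A minimal sketch of that client-side splitting (the file, table, and database
names are placeholders, and it assumes GNU split and rows that can be loaded
independently):

# break data.csv into 4 pieces without splitting lines (GNU split)
split -n l/4 data.csv part_

# load the pieces over separate connections in parallel
for f in part_*; do
  psql -d mydb -c "\copy mytable FROM '$f' WITH (FORMAT csv)" &
done
wait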
You obviously are referring to multiple connections running COPY on different
tables, right? Like what pg_restore does with the -j option.
> On Oct 10, 2018, at 1:24 PM, Andres Freund wrote:
>
> On 2018-10-10 18:50:02 +0200, Laurenz Albe wrote:
>> Rob Sargent wrote:
>>> Can anyone here tell me whether or not the CopyManager facility in JDBC
>>> via org.postgresql:postgresql:42.1.4 is internally multithreaded?
>>> Running on Cen
On 2018-10-10 18:50:02 +0200, Laurenz Albe wrote:
> Rob Sargent wrote:
> > Can anyone here tell me whether or not the CopyManager facility in JDBC
> > via org.postgresql:postgresql:42.1.4 is internally multithreaded?
> > Running on CentOS 7 (all participants), java8, postgres 10.5
>
> It isn't,
Hello, thanks for the testcase!
First of all, some more environment information:
Foreign server:
[local]:5432 postgres@postgres:13713
=# \des+ ase
List of foreign servers
┌───┬──┬──┬──┬─┬
On 10/10/18 7:37 AM, Bryce Pepper wrote:
Sorry, I wasn't clear in the prior posts.
The stop script is running during reboot. The problem is the database is not
reachable when the stop script runs. The ctmdist server shutdown sequence is as
follows:
Stop control-m application
Stop control-m co
> On Oct 10, 2018, at 10:50 AM, Laurenz Albe wrote:
>
> Rob Sargent wrote:
>> Can anyone here tell me whether or not the CopyManager facility in JDBC
>> via org.postgresql:postgresql:42.1.4 is internally multithreaded?
>> Running on CentOS 7 (all participants), java8, postgres 10.5
>
> It is
Rob Sargent wrote:
> Can anyone here tell me whether or not the CopyManager facility in JDBC
> via org.postgresql:postgresql:42.1.4 is internally multithreaded?
> Running on CentOS 7 (all participants), java8, postgres 10.5
It isn't, and there would not be much reason for it to be, as COPY
in Po
magodo wrote:
> > By default, recovery will stay on the timeline where it started.
> > If you want to go to timeline 2 or 3, you have to specify
> > recovery_target_timeline.
> >
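A minimal recovery.conf sketch along those lines for 9.6 (the archive path,
target time, and timeline number are placeholders):

restore_command = 'cp /mnt/server/archivedir/%f %p'
recovery_target_time = '2018-10-10 11:00:00'
recovery_target_timeline = '2'   # or 'latest' to follow the newest timeline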
> For me, the specified point corresponds to timeline 1, because at that
> time, the timeline is 1 indeed (when there i
Hi Nicola,
I am of the impression that the problem of having waiting clients does not
depend on server_idle_timeout.
How is the load on both ends of pgbouncer? High? Low? No activity? A lot of
context switching, perhaps?
I think that a pool_size of 120 is very high, unless you have an XXL databas
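For comparison, a sketch of a more conservative configuration (the values are
illustrative only, not a recommendation for this particular workload):

[databases]
dev = host=10.10.10.1 port=5432 dbname=dev auth_user=pgbouncer pool_size=20

[pgbouncer]
default_pool_size = 20
server_idle_timeout = 600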
Hello,
we are running pgbouncer 1.9.1 connected to postgres 10.5
Sometimes we are seeing a lot of waiting connections with this query:
DISCARD ALL
This is our pgbouncer config:
[databases]
dev = host=10.10.10.1 port=5432 dbname=dev auth_user=pgbouncer pool_size=120
[pgbouncer]
listen_port =
Sorry, I wasn't clear in the prior posts.
The stop script is running during reboot. The problem is the database is not
reachable when the stop script runs. The ctmdist server shutdown sequence is as
follows:
Stop control-m application
Stop control-m configuration agent
Stop database
As you
On Wed, 2018-10-03 at 08:06 +0200, Laurenz Albe wrote:
> magodo wrote:
> > I read the document about recovery configuration, it says:
>
> [...]
>
> > Therefore, suppose I am going to recover to a point of some child
> > timeline, to identify the point, I have to specify either
> > recovery_targ
On 10/10/18 at 04:00, magodo wrote:
>
> Hello,
>
> I know there is a tool `pg_archivecleanup`, but it only accepts a WAL
> file name. What if I want to clean up all archived WAL before a
> specified date and time?
>
> Though I could check the ctime of each WAL file and figure out which to
> remove, but
Hello,
I know there is a tool `pg_archivecleanup`, but it only accepts a WAL
file name. What if I want to clean up all archived WAL before a
specified date and time?
Though I could check the ctime of each WAL file and figure out which to
remove, that does not look like an elegant way to do it.
Also I tried
On 10/10/18 12:00 AM, magodo wrote:
Hello,
I know there is a tool `pg_archivecleanup`, but it only accepts a WAL
file name. What if I want to clean up all archived WAL before a
specified date and time?
That is dangerous, given that a WAL segment is important not based on its age
but on whether its content
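A safer pattern (a sketch; the archive directory and the .backup file name are
placeholders) is to key the cleanup to the oldest base backup you still need,
rather than to file age:

# dry run: -n only lists the files that would be removed
pg_archivecleanup -n /mnt/server/archivedir 000000010000000000000010.00000020.backup

# remove archived WAL older than what that base backup needs
pg_archivecleanup /mnt/server/archivedir 000000010000000000000010.00000020.backup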
On Wed, 2018-10-10 at 11:24 +0800, magodo wrote:
> Hello,
>
> I'm figuring out how to do PITR on an HA setup for postgres 9.6. I
> have almost managed to do it, but I faced the following issue:
>
>
> [1. create basebackup]
>
> [2. insert 2 rows]
>
> on both primary and standby's pg_xlog
>
> 0001
On 05/10/18 at 06:50, Rijo Roy wrote:
> Hello Experts,
>
> I have a RHEL 6.9 server with Barman 2.3 installed, which is configured
> to take backups from PostgreSQL 10 installed on a remote host, and the
> same is working.
You should upgrade to 2.4, although I can't see anything relevant in th
On 10/10/18 1:31 AM, Aleš Zelený wrote:
Hello,
my fault, I forgot to mention that I have only a DSN and database
user/password credentials, with no access to the box with Sybase. I am trying
to reach the service vendor's support, but it might take some time, and I
hoped I had made some mistake on my side
Hello,
I'm figuring out how to do PITR on an HA setup for postgres 9.6. I
have almost managed to do it, but I faced the following issue:
[1. create basebackup]
[2. insert 2 rows]
on both primary and standby's pg_xlog
00010005
rmgr: Transaction len (rec/tot): 34/34, tx:
On 10/10/18 5:32 AM, Bryce Pepper wrote:
Adrian,
Thanks for the inquiry. The function (db_execute_sql) is coming from a vendor
(BMC) product called Control-M. It is a scheduling product.
The tmp file is deleted before I can see its contents, but I believe it is
trying to update some columns in the CMS_SYSPRM table.
Here are the contents of the query and the error:
[root@kccontrolmt01 tmp]# cat ctm.Xf9pQkg2
update CMS_SYSPRM set CURRENT_STATE='STOPPING',DESIRED_STATE='Down' where
DESIRED_STATE <> 'Ignored'
;
psql: could not connect to server: Connection refused
Is the server running on host "kccontrolmt01"
Adrian,
Thanks for the inquiry. The function (db_execute_sql) is coming from a vendor
(BMC) product called Control-M. It is a scheduling product.
The tmp file is deleted before I can see its contents, but I believe it is
trying to update some columns in the CMS_SYSPRM table.
I also think the pos
> On Oct 9, 2018, at 11:45 PM, Dilshan wrote:
>
> Hi Team,
> I am working on a requirement where I need to log each and every operation
> on the master db. So I was wondering if I could do this with pg_logical by setting
> my master db as publisher and setting a new db as subscriber and dropping
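A sketch of that publisher/subscriber setup using the built-in logical
replication of PostgreSQL 10 (names and the connection string are placeholders,
and the pglogical extension uses its own, different functions; note that this
replicates row changes (DML) only, not DDL or read-only activity):

-- on the master (publisher) database:
CREATE PUBLICATION audit_pub FOR ALL TABLES;

-- on the new database (subscriber):
CREATE SUBSCRIPTION audit_sub
    CONNECTION 'host=master.example.com dbname=appdb user=replicator'
    PUBLICATION audit_pub;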
Hello,
my fault, I forgot to mention that I have only a DSN and database
user/password credentials, with no access to the box with Sybase. I am trying to
reach the service vendor's support, but it might take some time, and I hoped I
had made some mistake on my side...
Kind Regards
Ales Zeleny
st 10. 10. 201
> "David" == David G Johnston writes:
>> SELECT ARRAY_AGG(hashed)
>> FROM words_nouns
>> WHERE added > TO_TIMESTAMP(1539100913)
>> UNION
>> SELECT ARRAY_AGG(hashed)
>> FROM words_verbs
>> WHERE added > TO_TIMESTAMP(1539100913);
David> SELECT array_agg(words) FROM (
David> SELECT has
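One way to get a single array across both tables (a sketch; it may differ from
the suggestion above) is to aggregate once over the UNION in a subquery:

SELECT ARRAY_AGG(hashed)
FROM (
    SELECT hashed FROM words_nouns WHERE added > TO_TIMESTAMP(1539100913)
    UNION
    SELECT hashed FROM words_verbs WHERE added > TO_TIMESTAMP(1539100913)
) AS w;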
Hi:
I am not sure it is OK to ask this question on this list.
I ran into this problem using RDS PostgreSQL (I think regular PostgreSQL
may have the same problem).
I have posted the question on StackOverflow here:
https://stackoverflow.com/questions/52589058/aws-rds-postgresql-transaction-logs-keep-g