Ha! So forcing compression to 0, I went from 644 minutes to 87 minutes. And this
time I backed it up to an AFP share, and from the looks of it I hit the ceiling on
that eth interface. The size of the backup went from 50 GB to 260 GB though, hehehe.
So something seems to have changed regarding the default compression.
Hi Tom
I'm honestly not sure about anything. =) I use the exact same flags as with 8.4
for the dump:
${BINARY_PATH}/pg_dump --host=localhost --user=postgres --no-password --blobs
--format=custom --verbose --file=${pg_dump_filename}_${database}.backup
${database}
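As a side note: if the slowdown really is a compression default change, the level can be pinned explicitly. Below is a minimal sketch (the paths and database name are hypothetical stand-ins; the flags themselves are documented pg_dump options) that adds `--compress=0` so the behaviour no longer depends on the server version's default:

```shell
# Hypothetical values for illustration; substitute your own.
BINARY_PATH=/usr/local/pgsql/bin
database=resolve_projects
pg_dump_filename=/backups/pg_dump

# Same flags as the script above, plus an explicit --compress=0
# (custom-format dumps otherwise apply zlib compression by default).
cmd="${BINARY_PATH}/pg_dump --host=localhost --username=postgres --no-password \
--blobs --format=custom --compress=0 --verbose \
--file=${pg_dump_filename}_${database}.backup ${database}"

echo "$cmd"   # inspect first; run with: eval "$cmd"
```

With `--compress=0` the dump is CPU-cheap but large; a middle ground such as `--compress=1` may also be worth trying.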
So unless the default behaviour
"Henrik Cednert (Filmlance)" writes:
> I'm not sure if I can attach screenshots here. Trying, screenshot from
> instruments after running for a few mins.
It looks like practically all of pg_dump's time is going into deflate(),
i.e. zlib. I don't find that terribly surprising in itself, but it off
Hello
Running it with format "directory" produced something I cannot import from the
host application, so I aborted that.
Running it now and recording with Instruments. Guess I'll have to leave it
cooking for the full procedure, but I've added an initial one to pastebin:
https://pastebin.com/QHR
Robert Haas writes:
> On Tue, Nov 21, 2017 at 12:01 PM, Tom Lane wrote:
>> Can you get a profile of where the machine is spending its time during the
>> dump run? On Linux I'd recommend "perf", but on macOS, hmm ...
>> You could use Activity Monitor, but as far as I can see that just captures
>>
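For what it's worth, one possible macOS route (an assumption on my part, not something from this thread) is the bundled `sample` utility, which records user-space call stacks for a named process:

```shell
# Sketch only: process name, duration (seconds) and output path are illustrative.
cmd="sample pg_dump 60 -file /tmp/pg_dump_profile.txt"
echo "$cmd"   # run the command itself on the macOS host while the dump is active
```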
On Tue, Nov 21, 2017 at 12:01 PM, Tom Lane wrote:
> "Henrik Cednert (Filmlance)" writes:
>> The last pg_dump with 8.4 took 212 minutes and 49 seconds. And now with 9.5
>> the very same pg_dump takes 644 minutes and 40 seconds. So it takes about
>> three times as long now and I have no idea to wh
On November 21, 2017 6:49:26 AM PST, Tom Lane wrote:
>Zakhar Shapurau writes:
>
>Presumably, we could examine the grouping sets to identify column(s)
>present in all sets, and then allow the optimization for clauses that
>reference only such columns. Or maybe I'm misreading the comment
>(but t
"Henrik Cednert (Filmlance)" writes:
> The last pg_dump with 8.4 took 212 minutes and 49 seconds. And now with 9.5
> the very same pg_dump takes 644 minutes and 40 seconds. So it takes about
> three times as long now and I have no idea why. Nothing in the system or
> hardware other than the p
From: Henrik Cednert (Filmlance) [mailto:henrik.cedn...@filmlance.se]
Sent: Tuesday, November 21, 2017 11:48 AM
To: pgsql-performance@lists.postgresql.org
Subject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade
I VACUUM every Sunday so that is done already. =/
Not sure I have the proper params though, since I'm not used to DBs, but I have
followed others' how-tos; these are the lines in my script for that:
${BINARY_PATH}/vacuumdb --analyze --host=localhost --username=postgres --echo
--verbose --no
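For completeness, a sketch of what such a maintenance call might look like in full (the paths and database name are hypothetical, and I am assuming the truncated flag above is `--no-password`, which vacuumdb does support):

```shell
# Hypothetical values for illustration.
BINARY_PATH=/usr/local/pgsql/bin
database=resolve_projects

# --analyze updates planner statistics in the same pass as the vacuum.
cmd="${BINARY_PATH}/vacuumdb --analyze --host=localhost --username=postgres \
--echo --verbose --no-password ${database}"

echo "$cmd"   # inspect first; run with: eval "$cmd"
```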
Guys,
Sorry to bother you but can anyone help me unsubscribe from this list?
I followed the instructions in the original email and got an error
message...
Thanks,
-- Shaul
On Tue, Nov 21, 2017 at 6:25 PM, Igor Neyman wrote:
>
>
> *From:* Henrik Cednert (Filmlance) [mailto:henrik.cedn...@filmla
From: Henrik Cednert (Filmlance) [mailto:henrik.cedn...@filmlance.se]
Sent: Tuesday, November 21, 2017 11:37 AM
To: pgsql-performance@lists.postgresql.org
Subject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade
RAID6. Doing a disk test I get 1000 MB/sec write and 1200 MB/sec read.
--
Henrik Cednert
cto | compositor
Filmlance International
mobile [ + 46 (0)704 71 89 54 ]
skype [ cednert ]
On 21 Nov 2017, at 17:34, Igor Neyman <iney...@perceptron.com> wrote:
From: Henrik Cednert (Filmlance) [mailto:henrik.cedn...@filmlance.se]
Sent: Tuesday, November 21, 2017 11:27 AM
To: pgsql-performance@lists.postgresql.org
Subject: Re: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade
Ahh! Nice catch Igor. Thanks. =)
Will try and see if Resolve can read that back in.
Still very curious about the 3x slowdown in 9.5 pg_dump though.
On 21 Nov 2017, at 17:25, Igor Neyman <iney...@perceptron.com> wrote:
From: Henrik Cednert (Filmlance) [mailto:henrik.cedn...@filmlance.se]
Sent: Tuesday, November 21, 2017 9:29 AM
To: pgsql-performance@lists.postgresql.org
Subject: pg_dump 3 times as slow after 8.4 -> 9.5 upgrade
Hello
We use a system in film production called DaVinci Resolve. It uses a pgsql
dat
Hi,
powa relies on extensions (pg_stat_statements, pg_qualstats) that need
to be installed in every database you want to monitor. Maybe you just
installed them only into the postgres database?
Best regards
Marco
On 18.11.2017 at 02:52, Neto pr wrote:
Dear all
I have successfully installed
Zakhar Shapurau writes:
> Using GROUPING SETS with more than one set disables predicate pushdown?
It looks like this is a case that no one's gotten round to yet.
The comment in the relevant code is
* In some cases we may want to transfer a HAVING clause into WHERE. We
* cannot do so if
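To illustrate the case being described (table and column names are hypothetical): with a single grouping set, a HAVING clause on the grouped column can be treated as WHERE and pushed down; with more than one set the clause currently stays in HAVING, even when, as here, the column appears in every set:

```sql
-- One set: "HAVING a = 1" is equivalent to "WHERE a = 1", so it can be
-- pushed down to the scan.
SELECT a, count(*) FROM t GROUP BY GROUPING SETS ((a)) HAVING a = 1;

-- Two sets: the planner keeps the clause in HAVING and scans all rows,
-- even though column a is present in both sets.
SELECT a, b, count(*) FROM t GROUP BY GROUPING SETS ((a), (a, b)) HAVING a = 1;
```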
Hello
We use a system in film production called DaVinci Resolve. It uses a pgsql
database when you work in a collaborative workflow and multiple people share
projects. Previously it was using pgsql 8.4, but for a new major upgrade they
recommend an upgrade to 9.5. Probably also due to some macOS limi
Hi!
First of all, thanks for the great work! PostgreSQL is amazing, and the
community is super helpful.
I found an unexpected behaviour in PostgreSQL, and on IRC I was advised to
post it to the performance mailing list.
Using GROUPING SETS with more than one set disables predicate pushdown?
Version:
Hi Jeff,
Thanks so much for writing. You've got some great points.
> On Nov 20, 2017, at 5:42 PM, Jeff Janes wrote:
> While I have not done exhaustive testing, from the tests I have done I've
> never found gist to be better than gin with trgm indexes.
Thanks, this helps considerably, as the do
Hi,
You should probably report your issue at
https://github.com/dalibo/powa/issues
KR
On 18/11/2017 at 02:52, Neto pr wrote:
Dear all
I have successfully installed POWA (http://dalibo.github.io/powa),
including all required extensions; see the following screenshot of
its operation of end