Cool. Good to know. I'll see if I can replicate these results in my
environment. Thanks, M.
From: Josh Berkus
Sent: Wednesday, April 08, 2015 1:05 PM
To: Mel Llaguno; Przemysław Deć
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM]
Care to elaborate? We usually do not recommend specific kernel versions
for our customers (who run on a variety of distributions). Thanks, M.
Mel Llaguno • Staff Engineer – Team Lead
Office: +1.403.264.9717 x310
www.coverity.com • Twitter: @coverity
Coverity by Synopsys
FYI - all my tests were conducted using Ubuntu 12.04 x64 LTS (which, I
believe, ships with 3.x series kernels).
On 4/6/15, 2:51 PM,
[garbled link parameters; the question concerned results for ext4-writeback and xfs, each with barriers enabled and disabled]
M.
From: Przemysław Deć
Sent: Wednesday, April 01, 2015 3:02 AM
To: Mel Llaguno
Cc: Josh Berkus; pgsql-performance@postgresql.org
Subject: Re
…Mercury Excelsior PCI-E SSD which hasn’t yet materialized ...
Thanks, M.
On 3/31/15, 1:52 PM, "Josh Berkus" wrote:
The reason I ask is that it seems to support deduplication/compression. I was
wondering if this would have any performance implications for PG operations.
Thanks, M.
Josh,
Thanks for the feedback. Given the prevalence of SSDs/VMs, it would be
useful to start collecting stats/tuning guidance for different operating
systems for things like fsync (and possibly bonnie++/dd). If the community
is interested, I've got a performance lab that I'd be willing to help run
tests on.
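For reference, a minimal C sketch of the kind of timing pg_test_fsync
automates - write an 8kB block, fsync it in a loop, and report ops/sec. The
file name and iteration count are arbitrary illustrative choices, not
anything pg_test_fsync itself uses:

/* minimal fsync timing sketch; roughly what pg_test_fsync automates */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const int iters = 2000;        /* arbitrary sample size */
    char buf[8192];                /* one 8kB block, PostgreSQL's page size */
    struct timespec t0, t1;
    int fd = open("fsync_test.dat", O_WRONLY | O_CREAT | O_TRUNC, 0600);

    if (fd < 0) { perror("open"); return 1; }
    memset(buf, 'x', sizeof(buf));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++)
    {
        if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t) sizeof(buf))
            { perror("pwrite"); return 1; }
        if (fsync(fd) != 0)
            { perror("fsync"); return 1; }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("fsync: %.0f ops/sec\n",
           iters / ((t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9));
    close(fd);
    unlink("fsync_test.dat");
    return 0;
}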
My 2 cents: the results are not surprising. In the Linux environment, the
I/O calls of pg_test_fsync use O_DIRECT (PG_O_DIRECT) together with O_SYNC
or O_DSYNC, so in practice each write waits for the "answer" from the
storage, bypassing the cache in sync mode, while on Mac OS X it…
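As a rough sketch of the Linux side of that difference (O_DIRECT is
Linux-specific here and requires block-aligned write buffers; this is an
illustration, not pg_test_fsync's actual code):

/* contrast the sync-direct open path with an ordinary buffered open */
#define _GNU_SOURCE            /* exposes O_DIRECT on Linux */
#include <fcntl.h>

int open_sync_direct(const char *path)
{
    /* every write waits for the storage; the page cache is bypassed */
    return open(path, O_WRONLY | O_CREAT | O_DIRECT | O_DSYNC, 0600);
}

int open_buffered(const char *path)
{
    /* write() returns once the data is in the kernel page cache */
    return open(path, O_WRONLY | O_CREAT, 0600);
}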
I was given anecdotal information that HFS+ performance under OSX is
unsuitable for production PG deployments, and that pg_test_fsync could be
used to measure its relative speed versus other operating systems (such as
Linux). In my performance lab, I have a number of similarly equipped Li…
…data + base application schema); seed information is present in
all tests. I guess my question is this: why would having existing data change
the bind behavior at all? Is it possible that the way indexes are created
changed between 8.4 -> 9.x?
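For context, "bind" here refers to the bind step of libpq's
parse/bind/execute cycle. A minimal sketch; the connection string, the
accounts table, and the parameter value are all placeholders:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder DSN */
    PGresult *res;
    const char *params[1] = {"42"};              /* placeholder value */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* parse: the statement is prepared once */
    res = PQprepare(conn, "s1",
                    "SELECT * FROM accounts WHERE id = $1", 1, NULL);
    PQclear(res);

    /* bind + execute: this is the step whose speed reportedly changes
     * depending on whether the tables hold existing data */
    res = PQexecPrepared(conn, "s1", 1, params, NULL, NULL, 0);
    printf("rows: %d\n", PQntuples(res));
    PQclear(res);

    PQfinish(conn);
    return 0;
}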
Thanks, M.
Mel Llaguno | Principal Engineer
…this upgraded 9.x DB into any PG instance in the
previously described scenarios also seems to fix the bind connection issue.
Please let me know if this clarifies my earlier post.
Thanks, M.
Mel Llaguno | Principal Engineer (Performance/Deployment)
Let me clarify further - when we reconstruct our schema (without the upgrade
step) via a SQL script, the problem still persists. Restoring an upgraded DB
which contains existing data into exactly the same instance will correct the
behavior.