On 11/30/22 19:41, Michael Loftis wrote:
On Wed, Nov 30, 2022 at 18:03 Mladen Gogala <gogala.mla...@gmail.com> wrote:
On 11/30/22 18:19, Hannes Erven wrote:
You could also use a filesystem that can do atomic snapshots - like ZFS.
Uh, oh. Not so sure about that. Here is a page from the world of the
big O: https://blog.docbert.org/oracle-on-zfs/
However, the same can be said about ZFS. ZFS snapshots will slow down
I/O considerably. I would definitely prefer snapshots done in
hardware rather than in software. My favorite file systems, depending on
the type of disk, are F2FS and XFS.
ZFS snapshots typically have little if any performance impact versus
not having a snapshot (given you're already on ZFS), because the
filesystem is already doing COW-style semantics.
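Because snapshots are just a frozen view of the existing COW block tree, taking one is a single atomic, near-instant operation. A minimal sketch (the pool and dataset names `tank/pgdata` and the snapshot label are hypothetical):

```shell
# Take a crash-consistent, atomic snapshot of a Postgres data
# directory living on the (hypothetical) dataset tank/pgdata:
zfs snapshot tank/pgdata@before-upgrade

# List snapshots to confirm; they consume no extra space until
# the live data diverges from the snapshotted blocks:
zfs list -t snapshot tank/pgdata

# Later, roll back in place, or clone the snapshot to test against:
# zfs rollback tank/pgdata@before-upgrade
# zfs clone tank/pgdata@before-upgrade tank/pgdata-test
```

Since the snapshot only pins existing blocks, the write cost shows up later as divergence, not at snapshot time.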
Postgres write performance on ZFS is difficult because it's critically
important to match up the underlying I/O sizes: the device/ZFS ashift,
the ZFS recordsize, and the DB's page/WAL page sizes. But getting this
wrong also causes performance issues without any snapshots, because
again, COW. If you're constantly splitting a record block or sector,
there's going to be a big impact. In my own testing it won't be any
worse whether or not you have snapshots. Snapshots on ZFS don't cause
any crazy write amplification by themselves (I'm not sure they cause
any extra writes at all; I'd have to do some sleuthing).
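To make the size-matching concrete, here is a sketch of the kind of alignment people usually aim for. The pool/dataset names and the device path are hypothetical, and the exact property choices (compression, logbias) are commonly cited tuning, not a one-size-fits-all recipe:

```shell
# ashift=12 -> 4 KiB minimum I/O, matching a 4 KiB-sector device
# (ashift is fixed at pool creation and cannot be changed later).
zpool create -o ashift=12 tank /dev/sdb

# Postgres heap pages are 8 KiB by default, so set recordsize=8K on
# the data directory's dataset to avoid read-modify-write of larger
# records on every page write:
zfs create -o recordsize=8K -o compression=lz4 tank/pgdata

# WAL writes are sequential appends; a separate dataset lets you tune
# it independently (some guides also suggest logbias=throughput here):
zfs create -o recordsize=8K tank/pgwal
```

The key point is the mismatch case: with the default 128 KiB recordsize, every 8 KiB Postgres page write forces ZFS to COW a full 128 KiB record, and that amplification happens with or without snapshots.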
Yes, ZFS will be slower than a raw disk (but that's not an option for Pg
anyway), and it may or may not be faster than a different filesystem on a HW
RAID volume or storage array volume. It absolutely takes more
care/clue/tuning to get Pg write performance on ZFS, and ZFS
duplicates some of Pg's resiliency, so there is duplicate work going on.
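One commonly discussed example of that duplicated resiliency: ZFS's copy-on-write prevents torn pages, which is the failure mode Postgres's full-page writes guard against. Some ZFS deployments therefore turn that Postgres feature off; this is an illustrative fragment, not a blanket recommendation, since it depends on the whole data directory (including WAL) being on ZFS:

```shell
# postgresql.conf fragment (illustrative). On ZFS, page writes are
# atomic at the record level thanks to COW, so torn heap pages
# cannot occur and full-page images in WAL become redundant work:
#
#   full_page_writes = off
#
# ZFS checksumming likewise overlaps with (but does not replace)
# Postgres data checksums, which still catch corruption above the
# filesystem layer.
```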
I wonder what percentage of /Big Databases/ (like the OP's and Vijaykumar's) are
still on physical servers, as opposed to VMs connected to SANs. Even many
physical servers are connected to SANs. (That is, of course, in the dreaded
Enterprise environment.)
--
Angular momentum makes the world go 'round.