On Tue, Feb 17, 2026 at 4:42 PM Riaan Stander <[email protected]> wrote:

> That's an expensive way to provide some HA. What's the business
> requirement? How does that tie into Postgres? Might be able to do it in
> other ways.
>
> We used to run a SAN shared between our host servers, but this was
> replaced with Storage Spaces. I think they don't trust Postgres native HA
> capabilities and want some hardware guarantee.
>
> Yikes! Yes, SSD would be a big win. It's orders of magnitude faster, and
> just removes so many problems.
>
> I assume it will help, but I fear the overhead of a 3-way mirror is not
> going to be solved just by adding SSD. I'm trying to get them to instead
> deploy direct-attached NVMe/SSD to each host and then use PG HA from
> there.
>
> Sorry, I have no numbers to provide you there, but I cannot imagine any
> amount of tuning is going to be as big a win as going to SSD.
>
> It does take a lot of convincing and arguing though, so concrete numbers
> help get the point across.
>
> Thanks for the response
>

Spinning disks + cache was the most common configuration before SSD came
along.  Burst performance is great, but if you overwhelm the cache, write
performance can fall off a cliff.  That sounds like exactly what is
happening to you; moving backups off just bought you some time.  Direct
attached SSD will completely smoke your current setup.

> I think they don't trust Postgres native HA capabilities and want some
> hardware guarantee.

What is this, 2005?  Properly configured HS/SR (hot standby / streaming
replication) setups are incredibly robust and are the default configuration
for Amazon RDS and many, many other platforms.  Reading between the lines
here, it sounds like your storage team bought overpriced garbage and is
refusing to admit it's not getting the job done.  Postgres failover gets
tested routinely across a vast array of systems; how many times has your
exact configuration been tested?

Is your storage shared with other systems?  Do you have any pgbench numbers
for reference?  What are your commit rates?  (see xact_commit in
pg_stat_database, tracked over time)
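To put a number on commit rates, you can sample the cumulative counter in
pg_stat_database and diff it over a known interval; a minimal sketch,
assuming you're connected to the database you care about:

```sql
-- Snapshot the cumulative commit counter for the current database.
-- Run this twice, a known interval apart, and divide the difference
-- in xact_commit by the elapsed seconds to get commits/sec.
SELECT datname,
       xact_commit,
       now() AS sampled_at
FROM pg_stat_database
WHERE datname = current_database();
```

A cron job writing these snapshots to a log table (or any monitoring stack
that scrapes pg_stat_database) gives you the over-time trend.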

merlin
