On Sat, Jul 27, 2019 at 11:49 AM farjad.farid <
farjad.fa...@checknetworks.com> wrote:

> With this kind of design requirement it is worth considering hardware
> "failure & recovery". Even SSDs can and do fail.
>
> It is not just a matter of speed. RAID disks of some kind, depending on
> the budget, are worth the effort.
>
>
>
> -----Original Message-----
> From: Alvaro Herrera <alvhe...@2ndquadrant.com>
> Sent: 2019 July 26 22:39
> To: Arya F <arya6...@gmail.com>
> Cc: Tom Lane <t...@sss.pgh.pa.us>; Ron <ronljohnso...@gmail.com>;
> pgsql-general@lists.postgresql.org
> Subject: Re: Hardware for writing/updating 12,000,000 rows per hour
>
> On 2019-Jul-26, Arya F wrote:
>
> > I think I can modify my application to do batch updates. Right now
> > the server has an HDD and it really can't handle a lot of updates and
> > inserts per second. Would switching to a regular SSD let it easily
> > handle 3,000 updates per second?
>
> That's a pretty hard question in isolation -- you need to consider how
> many indexes there are to update, whether the updated columns are
> themselves indexed, what the datatypes are, how much locality of access
> you'll have ... and I'm probably missing some other important factors.
> (Of course, you'll also have to tune various PG server settings to find
> your sweet spot.)
>
> I suggest measuring instead of trying to guess.  A reasonably cheap way
> is to rent a machine somewhere with the type of hardware you think
> you'll need, and run your workload there for long enough, making sure to
> carefully observe important metrics such as table size, accumulated
> bloat, checkpoint regime, overall I/O activity, and so on.
>
> --
> Álvaro Herrera                https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>
>

Hi Farjad

I was thinking of using physical or logical replication. Or is RAID a must
if I don't want to lose data?
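
For the logical option, is it essentially just a publication on the primary
and a subscription on the replica, like this? (Object names and the
connection string below are only placeholders.)

    -- On the primary (placeholder names):
    CREATE PUBLICATION my_pub FOR TABLE my_table;

    -- On the replica:
    CREATE SUBSCRIPTION my_sub
        CONNECTION 'host=primary.example.com dbname=mydb user=replicator'
        PUBLICATION my_pub;

My understanding is that replication protects against losing the whole
server, while RAID only protects against a single failed disk inside it --
is that the right way to think about it?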
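
On the batch-update side discussed above with Álvaro: would something along
these lines be the kind of batching that gets an SSD to 3,000 updates per
second -- many row changes in one statement and one commit, instead of one
UPDATE per row? (Table and column names are placeholders.)

    -- Placeholder names; the VALUES list would be built from pending changes.
    UPDATE my_table AS t
    SET    some_column = v.some_column
    FROM   (VALUES
              (101, 'new value for 101'),
              (102, 'new value for 102'),
              (103, 'new value for 103')
           ) AS v(id, some_column)
    WHERE  t.id = v.id;

The idea would be to build the VALUES list from a few hundred or a few
thousand pending changes at a time, so each commit and WAL flush is
amortised over the whole batch.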
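
And to measure rather than guess, as Álvaro suggests, would a small custom
pgbench script be a reasonable starting point before renting the real
hardware? Something like this (again placeholder names, and assuming about
a million rows):

    -- bench_update.sql: one random single-row update per transaction
    \set id random(1, 1000000)
    UPDATE my_table SET some_column = some_column + 1 WHERE id = :id;

run as, say, "pgbench -n -f bench_update.sql -c 8 -j 4 -T 60 mydb", comparing
the reported tps against the 3,000 updates/second target while watching
table size, bloat, checkpoint behaviour and overall I/O as suggested.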
