Hi,

On Fri, May 8, 2020 at 3:53 AM Laurenz Albe <laurenz.a...@cybertec.at> wrote:
> On Fri, 2020-05-08 at 03:47 -0300, Avinash Kumar wrote:
> > > Just set "autovacuum_max_workers" higher.
> >
> > No, that wouldn't help. If you just increase autovacuum_max_workers,
> > the total cost limit of autovacuum_vacuum_cost_limit (or
> > vacuum_cost_limit) is shared by that many workers, which further
> > delays autovacuum for each worker. Instead, you need to increase
> > autovacuum_vacuum_cost_limit as well when you increase the number of
> > workers.
>
> True, I should have mentioned that.
>
> > But if you do that and also increase the workers, you would easily
> > reach the limits of the disk. I am not sure it is anywhere advised to
> > have 20 autovacuum_max_workers unless I have a disk with lots of IOPS
> > and very tiny tables across all the databases.
>
> Sure, if you have a high database load, you will at some point exceed
> the limits of the machine, which is not surprising. What I am trying to
> say is that you have to ramp up the resources for autovacuum together
> with increasing the overall workload. You should consider autovacuum as
> part of that workload.
>
> If your machine cannot cope with the workload any more, you have to
> scale, which is easily done by adding more machines if you have many
> databases.

Agreed. Getting back to the original question asked by Sammy, I still
think it is a bad idea to create 2000 databases to store 2000 clients
(schemas) in a multi-tenant setup. (See the P.S. below for a quick
sketch of scaling the two autovacuum settings together.)

> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com

--
Regards,
Avinash Vallarapu
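P.S. To illustrate the shared cost limit: with the defaults
(autovacuum_vacuum_cost_limit = -1, which falls back to
vacuum_cost_limit = 200, and autovacuum_max_workers = 3), three busy
workers each effectively run with roughly a third of the 200-point
budget. A minimal sketch of raising both settings together; the numbers
are purely illustrative, not recommendations:

ALTER SYSTEM SET autovacuum_max_workers = 6;         -- takes effect only after a server restart
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 600; -- scaled up along with the worker count
SELECT pg_reload_conf();                             -- enough for the cost limit; max_workers needs the restart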