[ceph-users] Re: question about rgw delete speed

2020-11-13 Thread Adrian Nicolae
Hi Brent, Thanks for your input. We will use Swift instead of S3. The deletes are mainly done by our customers using the sync app (i.e. they sync their folders with their storage accounts, and every file change is translated to a delete in the cloud). We have a frontend cluster between th

[ceph-users] Re: question about rgw delete speed

2020-11-13 Thread Janne Johansson
On Wed 11 Nov 2020 at 21:42, Adrian Nicolae <adrian.nico...@rcs-rds.ro> wrote:
> Hey guys,
> - 6 OSD servers with 36 SATA 16TB drives each and 3 big NVME per server
> (1 big NVME for every 12 drives so I can reserve 300GB NVME storage for
> every SATA drive), 3 MON, 2 RGW with Epyc 7402p and 128
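A quick sanity check of the NVMe layout quoted above. This is only a sketch of the poster's stated arithmetic (36 SATA drives, 3 NVMe devices, 300 GB reserved per drive); the constants come straight from the message, not from any Ceph sizing rule.

```python
# Sanity-check the quoted NVMe-per-OSD layout: 1 NVMe serves 12 SATA
# drives, each reserving 300 GB of NVMe (e.g. for BlueStore DB/WAL).
SATA_DRIVES_PER_SERVER = 36
NVME_PER_SERVER = 3
GB_PER_DRIVE = 300  # reservation per SATA OSD, as stated in the post

drives_per_nvme = SATA_DRIVES_PER_SERVER // NVME_PER_SERVER
nvme_capacity_needed_gb = drives_per_nvme * GB_PER_DRIVE

# 12 drives share one NVMe, so each NVMe must supply at least 3600 GB.
print(drives_per_nvme, nvme_capacity_needed_gb)
```

So each "big NVME" needs roughly 3.6 TB of usable capacity just for the reservations, before any spare space is considered.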

[ceph-users] Re: question about rgw delete speed

2020-11-12 Thread Nathan Fish
From what we have experienced, our delete speed scales with the CPU available to the MDS. And the MDS only seems to scale to 2-4 CPUs per daemon, so for our biggest filesystem, we have 5 active MDS daemons. Migrations reduced performance a lot, but pinning fixed that. Even better is just getting t
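The pinning the post credits with fixing migration overhead is CephFS subtree pinning, set through the ceph.dir.pin virtual xattr. A minimal sketch of the command involved is below; the directory path is a hypothetical example, and this only builds the setfattr invocation rather than running it against a live mount.

```python
# Sketch: build the setfattr command that pins a CephFS directory
# subtree to a specific MDS rank (ceph.dir.pin is the real vxattr;
# the mount path used in the example is hypothetical).
def pin_command(directory: str, mds_rank: int) -> list:
    """Return the setfattr argv that pins `directory` to `mds_rank`.

    A rank of -1 removes the pin, letting the balancer migrate the
    subtree between active MDS daemons again.
    """
    return ["setfattr", "-n", "ceph.dir.pin", "-v", str(mds_rank), directory]

print(" ".join(pin_command("/mnt/cephfs/projects/build", 0)))
```

With subtrees pinned to specific ranks, each active MDS serves a fixed slice of the tree and the costly inter-MDS migrations the post mentions stop happening.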

[ceph-users] Re: question about rgw delete speed

2020-11-12 Thread Brent Kennedy
Ceph is definitely a good choice for storing millions of files. It sounds like you plan to use this like S3, so my first question would be: Are the deletes done for a specific reason? (e.g. the files are used for a process and discarded) If it's an age thing, you can set the files to expir
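The age-based expiry suggested here maps to S3 lifecycle rules, which RGW supports. A minimal sketch of such a rule is below; the 30-day window and the empty prefix are illustrative assumptions, and with boto3 the resulting dict would be handed to put_bucket_lifecycle_configuration() rather than applied per object.

```python
# Sketch of an S3 lifecycle configuration that expires objects by age,
# as the post suggests for RGW. The 30-day window and prefix are
# assumptions for illustration only.
def expiry_lifecycle(days: int, prefix: str = "") -> dict:
    """Build a lifecycle configuration expiring objects older than `days`."""
    return {
        "Rules": [
            {
                "ID": "expire-after-{}-days".format(days),
                "Filter": {"Prefix": prefix},  # empty prefix = whole bucket
                "Status": "Enabled",
                "Expiration": {"Days": days},
            }
        ]
    }

config = expiry_lifecycle(30)
```

Offloading deletes to lifecycle processing lets RGW trim old objects in the background instead of the client issuing millions of individual DELETE requests.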