[ceph-users] Re: Looking for Companies who are using Ceph as EBS alternative

2022-08-18 Thread Linh Vu
I've used RBD for OpenStack clouds from small to large scale since 2015. I've been through many upgrades and done many stupid things, and it's still rock solid. It's the most reliable part of Ceph, I'd say. On Fri, Aug 19, 2022 at 3:47 AM Abhishek Maloo wrote: > Hey Folks, > I have recently joined t

[ceph-users] Re: Is 100pg/osd still the rule of thumb?

2021-12-14 Thread Linh Vu
Pretty sure this rule of thumb was created during the days of 4TB and 6TB spinning disks. Newer spinning disks and SSD / NVMe are faster so they can have more PGs. Obviously a 16TB spinning disk isn't 4 times faster than a 4TB one, so it's not a linear increase, but I think going closer to 200 shou
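For reference, the sizing formula behind that rule of thumb works out like this (a minimal Python sketch; the per-OSD targets of 100 and 200 are just the figures discussed above, and on newer releases the pg_autoscaler will usually pick pg_num for you):

    def suggested_pg_num(num_osds, replica_size, target_pgs_per_osd=100):
        """Return a power-of-two pg_num giving roughly target_pgs_per_osd per OSD."""
        raw = (num_osds * target_pgs_per_osd) / replica_size
        pg_num = 1
        while pg_num < raw:  # round up to the next power of two, per the usual convention
            pg_num *= 2
        return pg_num

    # Example: 40 OSDs, size=3 replicated pool
    print(suggested_pg_num(40, 3, 100))  # 2048
    print(suggested_pg_num(40, 3, 200))  # 4096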

[ceph-users] Re: ceph-mon pacific doesn't enter to quorum of nautilus cluster

2021-12-14 Thread Linh Vu
May not be directly related to your error, but they slap a DO NOT UPGRADE FROM AN OLDER VERSION label on the Pacific release notes for a reason... https://docs.ceph.com/en/latest/releases/pacific/ It means please don't upgrade right now. On Wed, Dec 15, 2021 at 3:07 PM Michael Uleysky wrote: >

[ceph-users] Re: Experience reducing size 3 to 2 on production cluster?

2021-12-14 Thread Linh Vu
I haven't tested this in Nautilus 14.2.22 (or any nautilus) but in Luminous or older, if you go from a bigger size to a smaller size, there was either a bug or a "feature-not-bug" that didn't allow the OSDs to automatically purge the redundant PGs with data copies. I did this on a size=5 to size=3
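For anyone trying the same thing, the basic steps look like this (a minimal sketch, assuming a replicated pool called "mypool" and the ceph CLI on the path; on Luminous and older, compare the output of ceph df before and after, since the redundant copies were not always purged automatically as described above):

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return its stdout."""
        cmd = ["ceph", *args]
        print("+", " ".join(cmd))
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    ceph("osd", "pool", "set", "mypool", "size", "2")
    # min_size 1 means I/O continues with a single surviving copy; risky in production.
    ceph("osd", "pool", "set", "mypool", "min_size", "1")
    print(ceph("df", "detail"))  # check USED before/after to confirm space is reclaimed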

[ceph-users] quay.io vs quay.ceph.io for container images

2021-09-07 Thread Linh Vu
-ci/daemon tag:latest-pacific, it is successful (same version 16.2.5). So which one is the real official one that we should use and why the different names and tags? Personally I prefer the tag matching the actual minor release like v16.2.5 though... Cheers, Linh -- Linh Vu Assistant
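One way to see what the two registries actually serve is to pull both tags and compare their digests and labels (a minimal sketch assuming docker is installed; the exact repository paths below are guesses reconstructed from the truncated preview, so substitute the images you are actually comparing):

    import json, subprocess

    def inspect_image(image):
        """Pull an image and print its repo digests and labels."""
        subprocess.run(["docker", "pull", image], check=True)
        out = subprocess.run(["docker", "image", "inspect", image],
                             check=True, capture_output=True, text=True).stdout
        meta = json.loads(out)[0]
        print(image)
        print("  digests:", meta.get("RepoDigests"))
        print("  labels: ", meta.get("Config", {}).get("Labels"))

    inspect_image("quay.io/ceph/ceph:v16.2.5")
    inspect_image("quay.ceph.io/ceph-ci/daemon:latest-pacific")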

[ceph-users] Re: PSA: upgrading older clusters without CephFS

2021-08-05 Thread Linh Vu
This is a very interesting bug! Without personally knowing the history of a cluster, is there a way to check when, and on which release, it began life? Or to check whether such legacy data structures still exist in the mons? On Fri, Aug 6, 2021 at 1:45 PM Patrick Donnelly wrote: > If your cl

[ceph-users] Re: ceph's replicas question

2019-08-27 Thread Linh Vu
If you have decent CPU and RAM on the OSD nodes, you can try Erasure Coding. Even just 4:2 keeps the cost per GB/TB lower than 2:1 replication (EC 4:2 is basically 1.5:1 in raw-to-usable cost) and is much safer (same failure protection as 3:1 replication). We use that on our biggest production SSD pool.
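The arithmetic behind that comparison, as a small Python sketch (overhead here is raw bytes stored per usable byte; the trade-off is the extra CPU for EC encode and recovery noted above):

    def replica(size):
        return {"overhead": float(size), "failures_tolerated": size - 1}

    def erasure(k, m):
        return {"overhead": (k + m) / k, "failures_tolerated": m}

    print("size=2 replica:", replica(2))     # 2.0x raw, survives 1 failure
    print("size=3 replica:", replica(3))     # 3.0x raw, survives 2 failures
    print("EC k=4, m=2:   ", erasure(4, 2))  # 1.5x raw, survives 2 failures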