I've used RBD for OpenStack clouds from small to large scale since 2015.
I've been through many upgrades and done many stupid things, and it's still
rock solid. It's the most reliable part of Ceph, I'd say.
On Fri, Aug 19, 2022 at 3:47 AM Abhishek Maloo wrote:
> Hey Folks,
> I have recently joined t
Pretty sure this rule of thumb was created back in the days of 4TB and 6TB
spinning disks. Newer spinning disks and SSD / NVMe drives are faster, so they
can handle more PGs. Obviously a 16TB spinning disk isn't four times faster
than a 4TB one, so it's not a linear increase, but I think going closer to 200
should be fine.
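For context, the rule of thumb being discussed is usually stated as roughly 100 PGs per OSD, with a pool's pg_num derived as (OSDs x target PGs per OSD) / replica size, rounded to a power of two. Below is a minimal Python sketch of that arithmetic; the function name and defaults are illustrative, not an official Ceph calculator:

```python
# Minimal sketch of the classic PG-count rule of thumb:
# pool pg_num ~= (num_osds * target_pgs_per_osd) / replica_size,
# rounded to the nearest power of two. Names and defaults here are
# illustrative assumptions, not an official Ceph tool.

def suggested_pg_num(num_osds: int, replica_size: int,
                     target_pgs_per_osd: int = 100) -> int:
    """Return a power-of-two pg_num suggestion for a single pool."""
    raw = (num_osds * target_pgs_per_osd) / replica_size
    # Pick whichever neighbouring power of two is closer, never below 1.
    power = max(round(raw).bit_length() - 1, 0)
    lower, upper = 2 ** power, 2 ** (power + 1)
    return lower if (raw - lower) < (upper - raw) else upper

if __name__ == "__main__":
    # 40 OSDs, 3x replication, old ~100 PGs-per-OSD target -> 1024
    print(suggested_pg_num(40, 3, 100))
    # Same cluster with a ~200 PGs-per-OSD target for faster media -> 2048
    print(suggested_pg_num(40, 3, 200))
```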
May not be directly related to your error, but they slap a DO NOT UPGRADE
FROM AN OLDER VERSION label on the Pacific release notes for a reason...
https://docs.ceph.com/en/latest/releases/pacific/
It means please don't upgrade right now.
On Wed, Dec 15, 2021 at 3:07 PM Michael Uleysky wrote:
>
I haven't tested this in Nautilus 14.2.22 (or any Nautilus release), but in
Luminous or older, if you went from a bigger size to a smaller size, there was
either a bug or a "feature-not-bug" that didn't let the OSDs automatically
purge the now-redundant PG copies and their data. I did this on a size=5 to size=3
-ci/daemon tag:latest-pacific, it is
successful (same version 16.2.5).
So which one is the real official one that we should use and why the
different names and tags?
Personally I prefer the tag matching the actual minor release like v16.2.5
though...
Cheers,
Linh
This is a very interesting bug!
Without personally knowing the history of a cluster, is there a way to
check when it began life and on which release? Or to check whether
such legacy data structures still exist in the mons?
On Fri, Aug 6, 2021 at 1:45 PM Patrick Donnelly wrote:
> If your cl
If you have decent CPU and RAM on the OSD nodes, you can try Erasure Coding.
Even just a 4:2 profile should keep the cost per GB/TB lower than 2x replication
(4:2 is basically 1.5x raw per usable TB) while being much safer (it tolerates
two failures, the same as 3x replication). We use that on our biggest
production SSD pool.
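To make that cost comparison concrete, here's a tiny Python sketch of the raw-capacity overhead arithmetic; the function names are my own and the profiles are just examples:

```python
# Raw-capacity overhead: how many TB of raw disk you burn per usable TB.
# Illustrative arithmetic only; the profiles below are example parameters.

def replication_overhead(size: int) -> float:
    """size=3 means three full copies, i.e. 3.0x raw per usable TB."""
    return float(size)

def ec_overhead(k: int, m: int) -> float:
    """k data + m coding chunks -> (k + m) / k raw per usable TB."""
    return (k + m) / k

print(replication_overhead(2))  # 2.0x, tolerates 1 failure
print(replication_overhead(3))  # 3.0x, tolerates 2 failures
print(ec_overhead(4, 2))        # 1.5x, tolerates 2 failures (like 3x replication)
print(ec_overhead(8, 3))        # 1.375x, tolerates 3 failures
```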