I use EC 4+2 on the backup site; the production site is running replica 3.
I'm running 8 servers on the backup side and 12 on the production side,
with 16 OSDs per server on all of them.
Production has LACP-bonded 25G networking for the public and cluster
network; backup has just 10G networking with no redundancy.
You are correct, but that will involve massive data movement.
You can change the failure domain (osd/host/rack/datacenter/etc...).
You can change the replica count (size=2,3,4,5,6...).
You *CAN'T* change the EC value, e.g. 4+2, to something else.
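For reference, a minimal sketch of the corresponding commands; the pool name
"mypool", rule name "replicated_rack" and profile "ec_8_3" are placeholders,
not anything from this thread:

  # Point the pool at a rule with a different failure domain:
  ceph osd pool set mypool crush_rule replicated_rack
  # Change the replica count of a replicated pool:
  ceph osd pool set mypool size 3
  # The k+m of an EC pool is fixed at creation; a different profile means
  # creating a new pool and migrating the data, e.g.:
  ceph osd erasure-code-profile set ec_8_3 k=8 m=3 crush-failure-domain=host
  ceph osd pool create mypool-new erasure ec_8_3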
Kind regards,
Nino
On Fri, Jun 23, 2023 at 12:40 AM Angelo Hönge wrote:
The problem is just that some of your OSDs have too many PGs, and the pool
cannot recover as it cannot create more PGs:
[osd.214,osd.223,osd.548,osd.584] have slow ops.
too many PGs per OSD (330 > max 250)
I'd have to guess that the safest thing would be permanently or temporarily
adding more OSDs
a from the old pool.
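For context on the warning above: the 250 limit is the standard
mon_max_pg_per_osd guardrail, and a quick way to see how PGs are spread
across OSDs is (a sketch):

  # Per-OSD PG counts (PGS column) show which OSDs are over the limit:
  ceph osd df tree
  # The warning threshold and PG-creation guardrail (default 250); raising it
  # is only a temporary workaround while capacity is being added:
  ceph config get mon mon_max_pg_per_osd
  ceph config set global mon_max_pg_per_osd 320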
> Changing the crush rule doesn’t allow you to do that.
>
> > On 16. Jun 2023, at 23:32, Nino Kotur wrote:
> >
> > If you create a new crush rule for ssd/nvme/hdd and attach it to an existing
> > pool you should be able to do the migration seamlessly while everything is online...
After the cluster enters a healthy state the mgr should re-check stray
daemons; a lot of activities are on hold while the cluster is in a warning
state. In the event the warning does not disappear once the cluster is
healthy, then an mgr restart should help.
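If it comes to that, a minimal sketch of restarting the mgr with standard
commands:

  # Fail over to a standby mgr; the old active comes back as a standby:
  ceph mgr fail
  # Or, on a cephadm-managed cluster, restart the whole mgr service:
  ceph orch restart mgr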
Kind regards,
Nino
On Fri, Jun 16, 2023 at 10:24 PM Nicola Mori wrote:
If you create a new crush rule for ssd/nvme/hdd and attach it to an existing
pool you should be able to do the migration seamlessly while everything is
online... However, the impact on users will depend on storage device load and
network utilization, as it will create chaos on the cluster network.
Or did I get something wrong?
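To make that concrete, a minimal sketch of the device-class route; the rule
and pool names below are placeholders:

  # Create a replicated rule that only picks OSDs of a given device class:
  ceph osd crush rule create-replicated replicated_ssd default host ssd
  # Attach the new rule to the existing pool; data then migrates in the background:
  ceph osd pool set mypool crush_rule replicated_ssd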
+1 for this issue, I've managed to reproduce it on my test cluster.
Kind regards,
Nino Kotur
On Mon, Jun 12, 2023 at 2:54 PM farhad kh
wrote:
> I deployed the ceph cluster with 8 nodes (v17.2.6) and after adding all of the
> hosts, ceph created 5 mon daemon instances.
> I tried to decrease
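For reference, on a cephadm-managed cluster the monitor count is normally
adjusted through the orchestrator; a minimal sketch, with hostnames as
placeholders:

  # Pin the mon count and let cephadm pick the hosts:
  ceph orch apply mon 3
  # Or place exactly three mons on specific hosts:
  ceph orch apply mon --placement="host1 host2 host3"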
What kind of pool are you using, or do you have different pools for
different purposes... Do you have cephfs-only or rbd-only pools etc...
Describe your setup.
It is generally best practice to create new rules and apply them to pools
and not to modify existing pools, but that is possible as well. Below
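A minimal sketch of inspecting the current rules and pool assignments before
changing anything; the pool name "mypool" is a placeholder:

  # List rules and see which rule a given pool uses:
  ceph osd crush rule ls
  ceph osd pool get mypool crush_rule
  # Dump a rule to check its root, failure domain and device class:
  ceph osd crush rule dump replicated_rule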