Can someone please help respond to the query below?
Regards
Radha Krishnan S
Surprisingly, a Google search didn't seem to find the answer to this, so I guess
I should ask here:
What determines if an RBD is "100% busy"?
I have some backend OSDs and an iSCSI gateway serving out some RBDs.
iostat on the gateway says the rbd device is 100% utilized;
iostat on the individual OSDs only goes
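For what it's worth, iostat's %util comes from a single counter: the "time
spent doing I/O" field in /proc/diskstats. 100% only means the device had at
least one request in flight for the whole sampling interval; for a device
like an RBD that fans requests out to many OSDs in parallel, that does not
mean it is saturated. A quick sketch that reproduces the number (the device
name "rbd0" is just an example):

import time

def io_ticks(device: str) -> int:
    # Milliseconds the device spent with I/O in flight
    # (10th stats field after the device name in /proc/diskstats).
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[12])
    raise ValueError(f"device {device!r} not found")

def util_percent(device: str, interval: float = 5.0) -> float:
    t0 = io_ticks(device)
    time.sleep(interval)
    t1 = io_ticks(device)
    return 100.0 * (t1 - t0) / (interval * 1000.0)

print(f"rbd0: {util_percent('rbd0'):.1f}% util")

So a gateway that always keeps at least one request queued on the RBD will
show 100% there, while each individual OSD only sees a fraction of the load.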
Hi Lenz,
That PR will need a lot of rebasing, as there have been later changes to the
rbd controller.
Nevertheless, while working on that I found a few quick wins that could be
easily implemented (I'll try to come back to this in the coming weeks):
- Caching object instances and using flyweight objects (see the sketch below)
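In case it helps review, here is roughly what I mean by the flyweight bit.
This is only a hedged sketch with made-up names (RbdConfig, get_config), not
the actual controller code:

from functools import lru_cache

class RbdConfig:
    """Immutable per-pool settings, safe to share across requests."""
    def __init__(self, pool: str):
        self.pool = pool
        # expensive lookup of pool options would happen here

@lru_cache(maxsize=None)
def get_config(pool: str) -> RbdConfig:
    # Same key -> same shared instance, instead of a fresh object per request.
    return RbdConfig(pool)

assert get_config("rbd") is get_config("rbd")  # one flyweight per key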
Hi,
we're currently in the process of building a new Ceph cluster to back up RBD
images from multiple Ceph clusters.
We would like to start with just a single Ceph cluster to back up, which is about
50 TB. The compression ratio of the data is around 30% when using zlib. We need to
scale the backup cluster
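To make the sizing concrete, here is the back-of-envelope arithmetic,
assuming "30% ratio" means the compressed data is 30% of the original size
(if it means 30% savings, use 0.70 instead), and assuming 3x replication as
a placeholder:

source_tb = 50
ratio = 0.30          # compressed size / original size (assumption)
replication = 3       # size=3 pool; roughly 1.5 for a 4+2 EC profile

compressed_tb = source_tb * ratio
raw_tb = compressed_tb * replication
print(f"~{compressed_tb:.0f} TB logical, ~{raw_tb:.0f} TB raw")
# -> ~15 TB logical, ~45 TB raw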
Hi,
A. Will Ceph be able to recover over time? I am afraid that the 14 PGs
that are down will not recover.
If all OSDs come back (and stay stable), the recovery should eventually finish.
B. What caused the OSDs to go down and up during recovery after the
failed OSD node came back online? (step 2 above
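If it helps to watch whether those 14 PGs are actually clearing, here is a
small polling sketch using the librados Python binding (assumes a readable
ceph.conf and keyring on the node):

import json, time
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    while True:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "pg stat"}), b"")
        line = out.decode().strip()
        print(line)               # watch the down/degraded counts shrink
        if "down" not in line:
            break                 # no PGs reported down any more
        time.sleep(30)
finally:
    cluster.shutdown()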
On 10/01/2020 10:41, Ashley Merrick wrote:
> Once you have fixed the issue, you'll need to mark / archive the crash
> entries as seen here: https://docs.ceph.com/docs/master/mgr/crash/
Hi Ashley,
thanks, I didn't know this before...
It turned out there were quite a few old crashes (since I never archived
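For anyone finding this later, the cleanup itself is two commands from the
linked docs; a small sketch wrapping them (assumes an admin keyring on the
host):

import subprocess

# Show the still-unarchived crashes first.
subprocess.run(["ceph", "crash", "ls-new"], check=True)

# Archive them all; they remain visible via `ceph crash ls` but no
# longer trigger the RECENT_CRASH health warning.
subprocess.run(["ceph", "crash", "archive-all"], check=True)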
> On 10.01.2020 at 07:10, Mainor Daly wrote:
>
> Hi Stefan,
>
> before I give some suggestions, can you first describe the use case for
> which you want to use that setup? Also, which aspects are important to you?
It’s just the backup target of another Ceph cluster, used to sync snapshots once
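For that kind of target, one common way to ship the snapshots is
rbd export-diff piped into rbd import-diff against the backup cluster's
config; a hedged sketch with placeholder pool/image/snapshot names:

import subprocess

src = "rbd/vm-disk-1"
prev_snap, new_snap = "backup-2020-01-09", "backup-2020-01-10"

# Stream only the delta between the two snapshots ...
exp = subprocess.Popen(
    ["rbd", "export-diff", "--from-snap", prev_snap,
     f"{src}@{new_snap}", "-"],
    stdout=subprocess.PIPE)
# ... and apply it on the backup cluster (reads /etc/ceph/backup.conf).
subprocess.run(
    ["rbd", "--cluster", "backup", "import-diff", "-", src],
    stdin=exp.stdout, check=True)
exp.stdout.close()
exp.wait()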
Once you have fixed the issue, you'll need to mark / archive the crash entries as
seen here: https://docs.ceph.com/docs/master/mgr/crash/
On Fri, 10 Jan 2020 17:37:47 +0800 Simon Oosthoek wrote:
Hi,
last week I upgraded our Ceph to 14.2.5 (from 14.2.4), and either during
the procedure or shortly after it, some OSDs crashed. I re-initialised
them, and I thought that would be enough to fix everything.
I looked a bit further and I do see a lot of lines like this (which are
worrying, I suppose