Hi Peter,
I'm not a Rook expert, but are you asking how to stop Rook from
trying to delete the pool? Or is the pool already deleted from Ceph
itself?
We "bare" Ceph operators have multiple locks to avoid fat-finger mistakes, like:
ceph osd pool set cephfs_data nodelete 1
ceph config set mon mon_allow_pool_delete false
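As a hedged illustration (reusing the cephfs_data pool name from above; the
delete syntax below is just an example of how these guards behave, not from
this thread): with either lock in place a deletion attempt should be refused,
and both have to be explicitly relaxed before a delete can go through.

ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
# expected to fail while the nodelete flag or the mon guard is active
ceph osd pool set cephfs_data nodelete 0          # deliberately relax the per-pool lock
ceph config set mon mon_allow_pool_delete true    # deliberately relax the cluster-wide lock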
Hi Saber,
I don't think this is related. New assertion happens along the write
path while the original one occurred on allocator shutdown.
Unfortunately there is not much information to troubleshoot this...
Are you able to reproduce the case?
Thanks,
Igor
On 9/25/2020 4:21 AM, sa...@p
On 2020-09-25 04:40, Peter Sarossy wrote:
> hey folks,
>
> I have managed to fat finger a config apply command and accidentally
> deleted the CRD for one of my pools. The operator went ahead and tried to
> purge it, but fortunately since it's used by CephFS it was unable to.
>
> Redeploying the e
Thanks for the details folks.
Apologies, apparently yesterday definitely was not a day to be operating
anything for me, as I was meaning to send this to the rook users list
instead of the ceph users list :(
I will circle back with an answer for posterity once I figure it out.
On Fri, Sep 25,
Haha I figured out you were on Rook.
I think you need to add an annotation or label to the CRD. Just create an empty
one and do a kubectl get cephcluster -oyaml to see what it generates, then
figure out what the appropriate analog for the restored CRD is. Once the
operator sees the correct info
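A rough sketch of that comparison, assuming the operator runs in the usual
rook-ceph namespace and the pool resource is a CephBlockPool named replicapool
(both names are placeholders, not from this thread):

kubectl -n rook-ceph get cephcluster -o yaml                   # dump what the operator generated
kubectl -n rook-ceph get cephblockpool replicapool -o yaml     # same for the freshly created pool CRD
# diff the metadata sections (labels, annotations, finalizers) against your restored
# manifest, then patch the restored object to match, e.g. with a hypothetical annotation:
kubectl -n rook-ceph annotate cephblockpool replicapool example.com/restored=true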
Turns out there is no way to undo the deletion:
https://github.com/kubernetes/kubernetes/issues/69980
Time to rotate the pool under the folder and just let it do its thing...
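For posterity, a hedged sketch of that rotation on the CephFS side (the
filesystem name cephfs, the new pool name cephfs_data_new, the pg count, and the
mount path are all assumptions): create a replacement data pool, attach it to
the filesystem, and repoint the directory layout so new files land in the new pool.

ceph osd pool create cephfs_data_new 128            # replacement data pool (assumed pg count)
ceph fs add_data_pool cephfs cephfs_data_new        # make it available to the filesystem
setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/mydir
# only newly written files use the new pool; existing data stays put until copied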
On Fri, Sep 25, 2020 at 1:51 PM Brian Topping
wrote:
> Haha I figured out you were on Rook.
>
> I think you need to add
Hey folks!
Just shooting this out there in case someone has some advice. We're
just setting up RGW object storage for one of our new Ceph clusters (3
mons, 1072 OSDs, 34 nodes) and doing some benchmarking before letting
users on it.
We have 10Gb network to our two RGW nodes behind a single ip on
Can you share the object size details? Try increasing the size gradually to,
say, 1 GB and measure.
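As a rough illustration of that measurement (the endpoint, credentials profile,
bucket name, and sizes are all placeholders, assuming an S3-compatible client
such as awscli pointed at the RGW): generate objects of increasing size and
time each upload, watching where throughput levels off.

for size in 4M 64M 256M 1G; do
    dd if=/dev/zero of=/tmp/obj-$size bs=$size count=1 2>/dev/null
    echo "== object size: $size =="
    time aws --profile rgwtest --endpoint-url http://rgw.example.net \
        s3 cp /tmp/obj-$size s3://benchbucket/obj-$size
done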
Thanks
On Sat, 26 Sep, 2020, 1:10 am Dylan Griff, wrote:
> Hey folks!
>
> Just shooting this out there in case someone has some advice. We're
> just setting up RGW object storage for one of our new Ceph clus