[ceph-users] Re: pool pgp_num not updated

2020-10-08 Thread Eugen Block
Yes, after your cluster has recovered you'll be able to increase
pgp_num. Or your change may be applied automatically since you
already set it; I'm not sure, but you'll see.
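
For example, one way to watch the progress (the pool name here is assumed
from later in this thread) is to compare pgp_num with pgp_num_target:

ceph osd pool ls detail | grep 'rgw.buckets.data'   # pgp_num should creep towards pgp_num_target
ceph osd pool get <pool-name> pgp_num               # or watch the single value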



Zitat von Mac Wynkoop :


Well, backfilling sure, but will it allow me to actually change the pgp_num
as more space frees up? Because the issue is that I cannot modify that
value.

Thanks,
Mac Wynkoop, Senior Datacenter Engineer
*NetDepot.com:* Cloud Servers; Delivered
Houston | Atlanta | NYC | Colorado Springs

1-844-25-CLOUD Ext 806




On Wed, Oct 7, 2020 at 1:50 PM Eugen Block  wrote:


Yes, I think that’s exactly the reason. As soon as the cluster has
more space the backfill will continue.


Zitat von Mac Wynkoop :

> The cluster is currently in a warn state, here's the scrubbed output of
> ceph -s:
>
>   cluster:
>     id:     *redacted*
>     health: HEALTH_WARN
>             noscrub,nodeep-scrub flag(s) set
>             22 nearfull osd(s)
>             2 pool(s) nearfull
>             Low space hindering backfill (add storage if this doesn't
>             resolve itself): 277 pgs backfill_toofull
>             Degraded data redundancy: 32652738/3651947772 objects degraded
>             (0.894%), 281 pgs degraded, 341 pgs undersized
>             1214 pgs not deep-scrubbed in time
>             2647 pgs not scrubbed in time
>             2 daemons have recently crashed
>
>   services:
>     mon:         5 daemons, *redacted* (age 44h)
>     mgr:         *redacted*
>     osd:         162 osds: 162 up (since 44h), 162 in (since 4d);
>                  971 remapped pgs
>                  flags noscrub,nodeep-scrub
>     rgw:         3 daemons active *redacted*
>     tcmu-runner: 18 daemons active *redacted*
>
>   data:
>     pools:   10 pools, 2648 pgs
>     objects: 409.56M objects, 738 TiB
>     usage:   1.3 PiB used, 580 TiB / 1.8 PiB avail
>     pgs:     32652738/3651947772 objects degraded (0.894%)
>              517370913/3651947772 objects misplaced (14.167%)
>              1677 active+clean
>              477  active+remapped+backfill_wait
>              100  active+remapped+backfill_wait+backfill_toofull
>              80   active+undersized+degraded+remapped+backfill_wait
>              60   active+undersized+degraded+remapped+backfill_wait+backfill_toofull
>              42   active+undersized+degraded+remapped+backfill_toofull
>              33   active+undersized+degraded+remapped+backfilling
>              25   active+remapped+backfilling
>              25   active+remapped+backfill_toofull
>              24   active+undersized+remapped+backfilling
>              23   active+forced_recovery+undersized+degraded+remapped+backfill_wait
>              19   active+forced_recovery+undersized+degraded+remapped+backfill_wait+backfill_toofull
>              15   active+undersized+remapped+backfill_wait
>              14   active+undersized+remapped+backfill_wait+backfill_toofull
>              12   active+forced_recovery+undersized+degraded+remapped+backfill_toofull
>              12   active+forced_recovery+undersized+degraded+remapped+backfilling
>              5    active+undersized+remapped+backfill_toofull
>              3    active+remapped
>              1    active+undersized+remapped
>              1    active+forced_recovery+undersized+remapped+backfilling
>
>   io:
>     client:   287 MiB/s rd, 40 MiB/s wr, 1.94k op/s rd, 165 op/s wr
>     recovery: 425 MiB/s, 225 objects/s
>
> Now as you can see, we do have a lot of backfill operations going on at the
> moment. Does that actually prevent Ceph from modifying the pgp_num value of
> a pool?
>
> Thanks,
> Mac Wynkoop
>
>
>
> On Wed, Oct 7, 2020 at 8:57 AM Eugen Block  wrote:
>
>> What is the current cluster status, is it healthy? Maybe increasing
>> pg_num would hit the limit of mon_max_pg_per_osd? Can you share 'ceph
>> -s' output?
>>
>>
>> Zitat von Mac Wynkoop :
>>
>> > Right, both Norman and I set the pg_num before the pgp_num. For
>> > example, here is my current pool settings:
>> >
>> > *"pool 40 '*redacted*.rgw.buckets.data' erasure size 9 min_size 7
>> > crush_rule 2 object_hash rjenkins pg_num 2048 pgp_num 1024 pgp_num_target
>> > 2048 last_change 8458830 lfor 0/0/8445757 flags
>> > hashpspool,ec_overwrites,nodelete,backfillfull stripe_width 24576 fast_read
>> > 1 application rgw"*
>> >
>> > So, when I set:
>> >
>> >  "*ceph osd pool set hou-ec-1.rgw.buckets.data pgp_num 2048*"
>> >
>> > it returns:
>> >
>> > "*set pool 40 pgp_num to 2048*"
>> >
>> > But upon checking the pool details again:
>> >
>> > "*pool 40 '*redacted*.rgw.buckets.data' erasure size 9 min_size 7
>> > crush_rule 2 object_hash rjenkins pg_num 2048 pgp_num 1024 pgp_num_target
>> > 2048 last_change 8458870 lfor 0/0/8445757 flags
>> > hashpspool,ec_overwrites,nodelete,backfillfull stripe_width 24576 fast_read
>> > 1 application rgw*"
>> >
>> > and the pgp_num value does not increase. Am I just doing something
>> > totally wrong?
>> >
>> > Thanks,
>> > Mac Wynkoop
>> >
>> > On Tue, Oct 6, 2020 at 2:32 PM Marc Roos  wrote:
>> >
>> >> pg_num and pgp_num need to be the same, not?
>> >>
>> >> 3.5.1. Set the Number of PG

[ceph-users] Re: Wipe an Octopus install

2020-10-08 Thread Eugen Block
Hm, not really, that command only removes the ceph.conf on my admin
node. So it's the same as already reported in [2].



Zitat von Samuel Taylor Liston :


Eugen,
	That sounds promising.  I missed that in the man.  Thanks for  
pointing it out.


Sam Liston (sam.lis...@utah.edu)
==
Center for High Performance Computing - Univ. of Utah
155 S. 1452 E. Rm 405
Salt Lake City, Utah 84112 (801)232-6932
==




On Oct 7, 2020, at 2:32 AM, Eugen Block  wrote:

Hi,

I haven't had the opportunity to test it yet but have you tried:

cephadm rm-cluster

from cephadm man page [1]. But it doesn't seem to work properly yet [2].

Regards,
Eugen


[1] https://docs.ceph.com/en/latest/man/8/cephadm/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1881192


Zitat von Samuel Taylor Liston :

Wondering if anyone knows or has put together a way to wipe an  
Octopus install?  I’ve looked for documentation on the process,  
but if it exists, I haven’t found it yet.  I’m going through some  
test installs - working through the ins and outs of cephadm and  
containers and would love an easy way to tear things down and  
start over.
	In previous releases managed through ceph-deploy there were three  
very convenient commands that nuked the world.  I am looking for  
something as complete for Octopus.

Thanks,

Sam Liston (sam.lis...@utah.edu)
==
Center for High Performance Computing - Univ. of Utah
155 S. 1452 E. Rm 405
Salt Lake City, Utah 84112 (801)232-6932
==



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Wipe an Octopus install

2020-10-08 Thread Marc Roos
 

I honestly do not get what the problem is. Just yum remove the rpms, dd 
your OSD drives, and if there is something left in /var/lib/ceph or 
/etc/ceph, rm -rf those. Do a find / -iname "*ceph*" if there is still 
something there.
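
Roughly something like this (destructive, just a sketch; adjust the device
names to your own disks):

systemctl stop ceph.target             # stop all ceph daemons first
yum remove -y 'ceph*'                  # remove the packages
wipefs -a /dev/sdX                     # or: dd if=/dev/zero of=/dev/sdX bs=1M count=100
rm -rf /var/lib/ceph /etc/ceph         # leftover state and configs
find / -iname '*ceph*'                 # check nothing is left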



-Original Message-
To: Samuel Taylor Liston
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: Wipe an Octopus install

Hm, not really, that command only removes the ceph.conf on my admin node. 
So it's the same as already reported in [2].


Zitat von Samuel Taylor Liston :

> Eugen,
>   That sounds promising.  I missed that in the man.  Thanks for 
> pointing it out.
>
> Sam Liston (sam.lis...@utah.edu)
> ==
> Center for High Performance Computing - Univ. of Utah
> 155 S. 1452 E. Rm 405
> Salt Lake City, Utah 84112 (801)232-6932 
> ==
>
>
>
>> On Oct 7, 2020, at 2:32 AM, Eugen Block  wrote:
>>
>> Hi,
>>
>> I haven't had the opportunity to test it yet but have you tried:
>>
>> cephadm rm-cluster
>>
>> from cephadm man page [1]. But it doesn't seem to work properly yet 
[2].
>>
>> Regards,
>> Eugen
>>
>>
>> [1] https://docs.ceph.com/en/latest/man/8/cephadm/
>> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1881192
>>
>>
>> Zitat von Samuel Taylor Liston :
>>
>>> Wondering if anyone knows or has put together a way to wipe an 
>>> Octopus install?  I’ve looked for documentation on the process, but 
>>> if it exists, I haven’t found it yet.  I’m going through some test 
>>> installs - working through the ins and outs of cephadm and 
>>> containers and would love an easy way to tear things down and start 
>>> over.
>>> In previous releases managed through ceph-deploy there were three 
>>> very convenient commands that nuked the world.  I am looking for 
>>> something as complete for Octopus.
>>> Thanks,
>>>
>>> Sam Liston (sam.lis...@utah.edu)
>>> ==
>>> Center for High Performance Computing - Univ. of Utah
>>> 155 S. 1452 E. Rm 405
>>> Salt Lake City, Utah 84112 (801)232-6932 
>>> ==
>>>
>>>
>>>
>>> ___
>>> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 

>>> email to ceph-users-le...@ceph.io
>>
>>
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 
>> email to ceph-users-le...@ceph.io


___
ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an 
email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] What are mon.-safe containers?

2020-10-08 Thread Sebastian Luna Valero
Hi,

When I run `ceph orch ps` I see a couple of containers running on our MON
nodes whose names end with the `-safe` suffix, and I was wondering what
they are?

I couldn't find information about it in https://docs.ceph.com

This cluster is running Ceph 15.2.5, recently upgraded from 15.2.4

Many thanks,
Sebastian
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] el6 / centos6 rpm's for luminous?

2020-10-08 Thread Marc Roos


Nobody ever used luminous on el6?


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: el6 / centos6 rpm's for luminous?

2020-10-08 Thread Dan van der Ster
We had built some rpms locally for ceph-fuse, but AFAIR luminous needs
systemd so the server rpms would be difficult.

-- dan

On Thu, Oct 8, 2020 at 11:12 AM Marc Roos  wrote:
>
>
> Nobody ever used luminous on el6?
>
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Error "Operation not permitted" using rbd pool init command

2020-10-08 Thread Janne Johansson
On Thu, 8 Oct 2020 at 10:25, floda  wrote:

> Hi guys,
> I run the commands as the Linux root user and as the Ceph user
> client.admin (I have turned off apparmor and other hardening things as
> well). The Ceph user client.admin has the following setup in its keyring:
> [client.admin]
> key = .
> caps mds = "allow *"
> caps mgr = "allow *"
> caps mon = "allow *"
>
>
>
I don't see any OSD permissions in that list.
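
Something along these lines should fix it, assuming you really do want full
admin caps:

ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'

Note that 'ceph auth caps' replaces the whole cap list, so re-state the
existing caps as well.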

-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Error "Operation not permitted" using rbd pool init command

2020-10-08 Thread floda
Haha, how could I miss that? :)
Anyway, I really appreciate the help and I can confirm that it was the issue.

Thanks!

Best regards,
Fredrik




From: Janne Johansson 
Sent: 8 October 2020 13:01
To: floda 
Cc: ceph-users@ceph.io 
Subject: Re: [ceph-users] Error "Operation not permitted" using rbd pool init 
command

On Thu, 8 Oct 2020 at 10:25, floda (flo...@hotmail.com) wrote:
Hi guys,
I run the commands as the Linux root user and as the Ceph user client.admin (I 
have turned off apparmor and other hardening things as well). The Ceph user 
client.admin has the following setup in its keyring:
[client.admin]
key = .
caps mds = "allow *"
caps mgr = "allow *"
caps mon = "allow *"



Don't see any permissions of OSD in that list.

--
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: pool pgp_num not updated

2020-10-08 Thread Mac Wynkoop
OK, great. We'll keep tabs on it for now then and try again once we're
fully rebalanced.
Mac Wynkoop, Senior Datacenter Engineer
*NetDepot.com:* Cloud Servers; Delivered
Houston | Atlanta | NYC | Colorado Springs

1-844-25-CLOUD Ext 806




On Thu, Oct 8, 2020 at 2:08 AM Eugen Block  wrote:

> Yes, after your cluster has recovered you'll be able to increase
> pgp_num. Or your change will be applied automatically since you
> already set it, I'm not sure but you'll see.
>
>
> Zitat von Mac Wynkoop :
>
> > Well, backfilling sure, but will it allow me to actually change the
> pgp_num
> > as more space frees up? Because the issue is that I cannot modify that
> > value.
> >
> > Thanks,
> > Mac Wynkoop, Senior Datacenter Engineer
> > *NetDepot.com:* Cloud Servers; Delivered
> > Houston | Atlanta | NYC | Colorado Springs
> >
> > 1-844-25-CLOUD Ext 806
> >
> >
> >
> >
> > On Wed, Oct 7, 2020 at 1:50 PM Eugen Block  wrote:
> >
> >> Yes, I think that’s exactly the reason. As soon as the cluster has
> >> more space the backfill will continue.
> >>
> >>
> >> Zitat von Mac Wynkoop :
> >>
> >> > The cluster is currently in a warn state, here's the scrubbed output of
> >> > ceph -s:
> >> >
> >> > [ceph -s output snipped; see the full output earlier in this thread]
> >> >
> >> > Now as you can see, we do have a lot of backfill operations going on at
> >> > the moment. Does that actually prevent Ceph from modifying the pgp_num
> >> > value of a pool?
> >> > Thanks,
> >> > Mac Wynkoop
> >> >
> >> >
> >> >
> >> > On Wed, Oct 7, 2020 at 8:57 AM Eugen Block  wrote:
> >> >
> >> >> What is the current cluster status, is it healthy? Maybe increasing
> >> >> pg_num would hit the limit of mon_max_pg_per_osd? Can you share 'ceph
> >> >> -s' output?
> >> >>
> >> >>
> >> >> Zitat von Mac Wynkoop :
> >> >>
> >> >> > Right, both Norman and I set the pg_num before the pgp_num. For
> >> >> > example, here is my current pool settings:
> >> >> >
> >> >> > *"pool 40 '*redacted*.rgw.buckets.data' erasure size 9 min_size 7
> >> >> > crush_rule 2 object_hash rjenkins pg_num 2048 pgp

[ceph-users] Fwd: pool pgp_num not updated

2020-10-08 Thread Mac Wynkoop
Just making sure this makes the list:
Mac Wynkoop




-- Forwarded message -
From: 胡 玮文 
Date: Wed, Oct 7, 2020 at 9:00 PM
Subject: Re: pool pgp_num not updated
To: Mac Wynkoop 


Hi,

You can read about this behavior at
https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/

In short, Ceph will not increase pgp_num while misplaced > 5% (by default).
Once misplaced drops below 5%, it will increase pgp_num gradually until it
reaches the value you set. This 5% threshold is configured by the
target_max_misplaced_ratio config option.
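
For example (the value below is only illustrative):

ceph config get mgr target_max_misplaced_ratio       # default is 0.05
ceph config set mgr target_max_misplaced_ratio 0.07  # allow a bit more misplaced data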

On Oct 8, 2020, at 03:22, Mac Wynkoop  wrote:

Well, backfilling sure, but will it allow me to actually change the pgp_num
as more space frees up? Because the issue is that I cannot modify that
value.

Thanks,
Mac Wynkoop, Senior Datacenter Engineer
*NetDepot.com:* Cloud Servers; Delivered
Houston | Atlanta | NYC | Colorado Springs

1-844-25-CLOUD Ext 806




On Wed, Oct 7, 2020 at 1:50 PM Eugen Block  wrote:

Yes, I think that’s exactly the reason. As soon as the cluster has
more space the backfill will continue.

Zitat von Mac Wynkoop :

The cluster is currently in a warn state, here's the scrubbed output of
ceph -s:

[ceph -s output snipped; see the full output earlier in this digest]

Now as you can see, we do have a lot of backfill operations going on at the
moment. Does that actually prevent Ceph from modifying the pgp_num value of
a pool?


Thanks,

Mac Wynkoop




On Wed, Oct 7, 2020 at 8:57 AM Eugen Block  wrote:

What is the current cluster status, is it healthy? Maybe increasing
pg_num would hit the limit of mon_max_pg_per_osd? Can you share 'ceph
-s' output?

Zitat von Mac Wynkoop :

Right, both Norman and I set the pg_num before the pgp_num. For example,
here is my current pool settings:

*"pool 40 '*redacted*.rgw.buckets.data' erasure size 9 min_size 7
crush_rule 2 object_hash rjenkins pg_num 2048 pgp_num 1024 pgp_num_target
2048 last_change 8458830 lfor 0/0/8445757 flags
hashpspool,ec_overwrites,nodelete,backfillfull stripe_width 24576 fast_read
1 application rgw"*

So, when I set:

"*ceph osd pool set hou-ec-1.rgw.buckets.data pgp_num 2048*"

it returns:

"*set pool 40 pgp_num to 2048*"

But upon checking the pool details again:

"*pool 40 '*redacted*.rgw.buckets.data' erasure size 9 min_size 7
crush_rule 2 object_hash rjenkins pg_num 2048 pgp_num 1024 pgp_num_target
2048 last_change 8458870 lfor 0/0/8445757 flags
hashpspool,ec_overwrites,nodelete,backfillfull stripe_width 24576 fast_read
1 application rgw*"

and the pgp_num value does not increase. Am I just doing something
totally wrong?

Thanks,
Mac Wynkoop

On Tue, Oct 6, 2020 at 2

[ceph-users] Re: Wipe an Octopus install

2020-10-08 Thread Eugen Block
Ah, if you run 'cephadm rm-cluster --fsid ...' on each node it will  
remove all containers and configs (ceph-salt comes in handy with  
this). You'll still have to wipe the drives though, but nevertheless  
it's a little quicker than doing it all manually.
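
Roughly like this on every host (destructive; the fsid and device are
placeholders):

cephadm rm-cluster --fsid <fsid> --force    # removes this host's containers and configs
ceph-volume lvm zap /dev/sdX --destroy      # then wipe the OSD devices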



Zitat von Samuel Taylor Liston :


Eugen,
	That sounds promising.  I missed that in the man.  Thanks for  
pointing it out.


Sam Liston (sam.lis...@utah.edu)
==
Center for High Performance Computing - Univ. of Utah
155 S. 1452 E. Rm 405
Salt Lake City, Utah 84112 (801)232-6932
==




On Oct 7, 2020, at 2:32 AM, Eugen Block  wrote:

Hi,

I haven't had the opportunity to test it yet but have you tried:

cephadm rm-cluster

from cephadm man page [1]. But it doesn't seem to work properly yet [2].

Regards,
Eugen


[1] https://docs.ceph.com/en/latest/man/8/cephadm/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1881192


Zitat von Samuel Taylor Liston :

Wondering if anyone knows or has put together a way to wipe an  
Octopus install?  I’ve looked for documentation on the process,  
but if it exists, I haven’t found it yet.  I’m going through some  
test installs - working through the ins and outs of cephadm and  
containers and would love an easy way to tear things down and  
start over.
	In previous releases managed through ceph-deploy there were three  
very convenient commands that nuked the world.  I am looking for  
something as complete for Octopus.

Thanks,

Sam Liston (sam.lis...@utah.edu)
==
Center for High Performance Computing - Univ. of Utah
155 S. 1452 E. Rm 405
Salt Lake City, Utah 84112 (801)232-6932
==



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Multisite replication speed

2020-10-08 Thread Nicolas Moal
Hello everybody,

We have two Ceph object clusters replicating over a very long-distance WAN 
link. Our version of Ceph is 14.2.10.
Currently, replication speed seems to be capped around 70 MiB/s even if there's 
a 10Gb WAN link between the two clusters.
The clusters themselves don't seem to suffer from any performance issue.

The replication traffic leverages HAProxy VIPs, which means there's a single 
endpoint (the HAProxy VIP) in the multisite replication configuration.

So, my questions are:
- Is it possible to improve replication speed by adding more endpoints in the 
multisite replication configuration? The issue we are facing is that the 
secondary cluster is way behind the master cluster because of the relatively 
slow speed.
- Is there anything else I can do to optimize replication speed ?

Thanks for your comments !

Nicolas

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Wipe an Octopus install

2020-10-08 Thread Samuel Taylor Liston
Thanks.  PDSH will help too.

Sam Liston (sam.lis...@utah.edu)
===
Center for High Performance Computing
155 S. 1452 E. Rm 405
Salt Lake City, Utah 84112 (801)232-6932
===

On Oct 8, 2020, at 8:10 AM, Eugen Block  wrote:

Ah, if you run 'cephadm rm-cluster --fsid ...' on each node it will remove all 
containers and configs (ceph-salt comes in handy with this). You'll still have 
to wipe the drives though, but nevertheless it's a little quicker than doing it 
all manually.


Zitat von Samuel Taylor Liston :

Eugen,
   That sounds promising.  I missed that in the man.  Thanks for pointing it 
out.

Sam Liston (sam.lis...@utah.edu)
==
Center for High Performance Computing - Univ. of Utah
155 S. 1452 E. Rm 405
Salt Lake City, Utah 84112 (801)232-6932
==



On Oct 7, 2020, at 2:32 AM, Eugen Block  wrote:

Hi,

I haven't had the opportunity to test it yet but have you tried:

cephadm rm-cluster

from cephadm man page [1]. But it doesn't seem to work properly yet [2].

Regards,
Eugen


[1] https://docs.ceph.com/en/latest/man/8/cephadm/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1881192


Zitat von Samuel Taylor Liston :

Wondering if anyone knows or has put together a way to wipe an Octopus install? 
 I’ve looked for documentation on the process, but if it exists, I haven’t 
found it yet.  I’m going through some test installs - working through the ins 
and outs of cephadm and containers and would love an easy way to tear things 
down and start over.
   In previous releases managed through ceph-deploy there were three very 
convenient commands that nuked the world.  I am looking for something as 
complete for Octopus.
Thanks,

Sam Liston (sam.lis...@utah.edu)
==
Center for High Performance Computing - Univ. of Utah
155 S. 1452 E. Rm 405
Salt Lake City, Utah 84112 (801)232-6932
==



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Multisite replication speed

2020-10-08 Thread Paul Mezzanini
With a long distance link I would definitely look into switching to BBR for 
your congestion control as your first step.

Well, your _first_ step is to do an iperf run and establish a baseline.

A quick search turned up this link, which explains it reasonably well:
https://www.cyberciti.biz/cloud-computing/increase-your-linux-server-internet-speed-with-tcp-bbr-congestion-control/

We have used it before with great success for long distance, high throughput 
transfers.
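
On a reasonably recent kernel (4.9+) that boils down to something like this
(a sketch, not a full tuning guide; the remote host is a placeholder):

iperf3 -c <remote-host> -t 30                 # baseline first
sysctl -w net.core.default_qdisc=fq           # fq pairs well with BBR
sysctl -w net.ipv4.tcp_congestion_control=bbr
# persist the two sysctls in /etc/sysctl.d/ if the numbers improve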

-paul
--
Paul Mezzanini
Sr Systems Administrator / Engineer, Research Computing
Information & Technology Services
Finance & Administration
Rochester Institute of Technology
o:(585) 475-3245 | pfm...@rit.edu

CONFIDENTIALITY NOTE: The information transmitted, including attachments, is
intended only for the person(s) or entity to which it is addressed and may
contain confidential and/or privileged material. Any review, retransmission,
dissemination or other use of, or taking of any action in reliance upon this
information by persons or entities other than the intended recipient is
prohibited. If you received this in error, please contact the sender and
destroy any copies of this information.



From: Nicolas Moal 
Sent: Thursday, October 8, 2020 10:36 AM
To: ceph-users
Subject: [ceph-users] Multisite replication speed

Hello everybody,

We have two Ceph object clusters replicating over a very long-distance WAN 
link. Our version of Ceph is 14.2.10.
Currently, replication speed seems to be capped around 70 MiB/s even if there's 
a 10Gb WAN link between the two clusters.
The clusters themselves don't seem to suffer from any performance issue.

The replication traffic leverages HAProxy VIPs, which means there's a single 
endpoint (the HAProxy VIP) in the multisite replication configuration.

So, my questions are:
- Is it possible to improve replication speed by adding more endpoints in the 
multisite replication configuration? The issue we are facing is that the 
secondary cluster is way behind the master cluster because of the relatively 
slow speed.
- Is there anything else I can do to optimize replication speed ?

Thanks for your comments !

Nicolas

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: el6 / centos6 rpm's for luminous?

2020-10-08 Thread Marc Roos
 
Ok thanks Dan for letting me know.


-Original Message-
Cc: ceph-users
Subject: Re: [ceph-users] el6 / centos6 rpm's for luminous?

We had built some rpms locally for ceph-fuse, but AFAIR luminous needs 
systemd so the server rpms would be difficult.

-- dan

>
>
> Nobody ever used luminous on el6?
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Bluestore migration: per-osd device copy

2020-10-08 Thread Chris Dunlop

Hi,

The docs have scant detail on doing a migration to bluestore using a 
per-osd device copy:


https://docs.ceph.com/en/latest/rados/operations/bluestore-migration/#per-osd-device-copy

This mentions "using the copy function of ceph-objectstore-tool", but 
ceph-objectstore-tool doesn't have a copy function (all the way from v9 to 
current). 


Has anyone actually tried doing this?

Is there any further detail available on what is involved, e.g. a broad 
outline of the steps?


Of course, detailed instructions would be even better, even if accompanied 
by "here be dragons!" warnings.


Cheers,

Chris
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph OIDC Integration

2020-10-08 Thread Pritha Srivastava
Hello,

If it is possible for the uid that has been used for LDAP users to be the
same for OIDC users (which is based off the 'sub' field of the OpenID
connect token), then there are no extra migration steps needed.

Which version of Ceph are you using? In Octopus, offline token validation
has been introduced, where an incoming web token is validated using the
certificate of the IDP. Up until Octopus, there were no shadow users for OIDC
users, but we have introduced shadow user creation in the 'master' branch,
and that is done automatically when an AssumeRoleWithWebIdentity call is
made. So the metadata to look at right now would be $$buckets,
which stores the user stats. Make sure that the same uid is being used
across both LDAP and OIDC (if that is possible); otherwise there is a
radosgw-admin user rename command that will rename the user and update all
other metadata.
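
For example (both uids here are placeholders):

radosgw-admin user rename --uid=<ldap-uid> --new-uid=<oidc-uid>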

Also, please note that currently AssumeRoleWithWebIdentity has been tested
only with Keycloak. The documentation for STS in Octopus is here:
https://docs.ceph.com/en/octopus/radosgw/STS/

Thanks,
Pritha

On Mon, Oct 5, 2020 at 9:56 PM  wrote:

> Hello, we have integrated Ceph's RGW with LDAP and have authenticated
> users using the mail attribute successfully. We would like to shift to SSO
> and are evaluating the new OIDC feature in Ceph together with dexIdP with
> an LDAP connector as an upstream IdP.
>
> We are trying to understand the flow of the user authentication and how it
> will effect my current LDAP users buckets which are already created in Ceph
> as LDAP users.
>
> Will the Ceph RGW be able to pass the token to be verified to the IdP and
> what type of user will then be created in Ceph? Is this the intended way of
> OIDC integration?
>
> Thanks for any assistance
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] pg active+clean but can not handle io

2020-10-08 Thread 古轶特
Hi all:
I have a ceph cluster, the version is 12.2.12.
this is my ceph osd tree:
[root@node-1 ~]# ceph osd tree
ID  CLASS WEIGHT  TYPE NAME   STATUS REWEIGHT PRI-AFF
 -25   2.78760 root rack-test 
 -26   0.92920 rack rack_1
 -37   0.92920 host host_1
  12   ssd 0.92920 osd.12  up  1.0 1.0
 -27   0.92920 rack rack_2
 -38   0.92920 host host_2
   6   ssd 0.92920 osd.6   up  1.0 1.0
 -28   0.92920 rack rack_3
 -39   0.92920 host host_3
  18   ssd 0.92920 osd.18  up  1.0 1.0

I have a pool in cluster:
pool 14 'gyt-test' replicated size 2 min_size 1 crush_rule 2 object_hash 
rjenkins pg_num 128 pgp_num 128 last_change 5864 lfor 0/5828 flags hashpspool 
stripe_width 0
 removed_snaps [1~3]


crush_rule 2 dump:
{
"rule_id": 2,
"rule_name": "replicated_rule_rack",
"ruleset": 2,
"type": 1,
"min_size": 1,
"max_size": 10,
"steps": [
{
"op": "take",
"item": -25,
"item_name": "rack-test"
},
{
"op": "chooseleaf_firstn",
"num": 0,
"type": "rack"
},
{
"op": "emit"
}
]
}


I have an rbd image in the gyt-test pool:
[root@node-1 ~]# rbd ls gyt-test
gyt-test


Now I use the fio tool to test this rbd image:
[root@node-1 ~]# fio --ioengine=rbd --pool=gyt-test --rbdname=gyt-test 
--rw=randwrite --bs=4k --numjobs=1 --runtime=120 --iodepth=128 
--clientname=admin --direct=1 --name=test --time_based=1 --eta-newline 1
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=rbd, iodepth=128
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][2.5%][r=0KiB/s,w=42.2MiB/s][r=0,w=10.8k IOPS][eta 01m:57s]
Jobs: 1 (f=1): [w(1)][4.2%][r=0KiB/s,w=57.7MiB/s][r=0,w=14.8k IOPS][eta 01m:55s]
Jobs: 1 (f=1): [w(1)][5.8%][r=0KiB/s,w=52.4MiB/s][r=0,w=13.4k IOPS][eta 01m:53s]
Jobs: 1 (f=1): [w(1)][7.5%][r=0KiB/s,w=61.1MiB/s][r=0,w=15.6k IOPS][eta 01m:51s]
Jobs: 1 (f=1): [w(1)][9.2%][r=0KiB/s,w=30.0MiB/s][r=0,w=7927 IOPS][eta 01m:49s]
 Jobs: 1 (f=1): [w(1)][10.8%][r=0KiB/s,w=59.1MiB/s][r=0,w=15.1k IOPS][eta 
01m:47s]
Jobs: 1 (f=1): [w(1)][12.5%][r=0KiB/s,w=51.6MiB/s][r=0,w=13.2k IOPS][eta 
01m:45s]
Jobs: 1 (f=1): [w(1)][14.2%][r=0KiB/s,w=58.3MiB/s][r=0,w=14.9k IOPS][eta 
01m:43s]
Jobs: 1 (f=1): [w(1)][15.8%][r=0KiB/s,w=56.1MiB/s][r=0,w=14.4k IOPS][eta 
01m:41s]
Jobs: 1 (f=1): [w(1)][17.5%][r=0KiB/s,w=44.8MiB/s][r=0,w=11.5k IOPS][eta 
01m:39s]


This is normal.
And then, I move the host_1 bucket to the rack-test root:
[root@node-1 ~]# ceph osd crush move host_1 root=rack-test
moved item id -37 name 'host_1' to location {root=gyt-test} in crush map


[root@node-1 ~]# ceph osd tree
-25   2.78760 root rack-test  
-37   0.92920 host host_1
  12   ssd 0.92920 osd.12  up  1.0 1.0
 -26 0 rack rack_1
 -27   0.92920 rack rack_2
 -38   0.92920 host host_2
   6   ssd 0.92920 osd.6   up  1.0 1.0
 -28   0.92920 rack rack_3
 -39   0.92920 host host_3
  18   ssd 0.92920 osd.18  up  1.0 1.0



Then I use fio to test the gyt-test rbd image again:
[root@node-1 ~]# fio --ioengine=rbd --pool=gyt-test --rbdname=gyt-test 
--rw=randwrite --bs=4k --numjobs=1 --runtime=120 --iodepth=64 
--clientname=admin --direct=1 --name=test --time_based=1 --eta-newline 1
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=rbd, iodepth=64
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][2.5%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:57s]
Jobs: 1 (f=1): [w(1)][4.2%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:55s]
Jobs: 1 (f=1): [w(1)][5.8%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:53s]
Jobs: 1 (f=1): [w(1)][7.5%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:51s]
Jobs: 1 (f=1): [w(1)][9.2%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:49s]
Jobs: 1 (f=1): [w(1)][10.8%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:47s]
Jobs: 1 (f=1): [w(1)][12.5%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:45s]
Jobs: 1 (f=1): [w(1)][14.2%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:43s]
Jobs: 1 (f=1): [w(1)][15.8%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:41s]
Jobs: 1 (f=1): [w(1)][17.5%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:39s]
Jobs: 1 (f=1): [w(1)][19.2%][r=0KiB/s,w=0KiB/s][r=0,w=0 IOPS][eta 01m:37s]
Jobs: 1 (f

[ceph-users] Re: [Suspicious newsletter] Weird performance issue with long heartbeat and slow ops warnings

2020-10-08 Thread Void Star Nill
Thanks Istvan.

I did some more investigation, and what I found is that if I run FIO with
100% writes on an already warm volume, the performance degradation doesn't
happen. In other words, 100% write ops on an empty volume cause the
degradation, while subsequent reads/writes on a volume where the data was
already allocated do not. I tested this with thick-provisioned volumes too
and experienced the same problem.
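
For reference, the warm-up pass looks roughly like this (pool/image names
are placeholders, not my exact command line):

fio --name=prefill --ioengine=rbd --clientname=admin --pool=<pool> \
    --rbdname=<image> --rw=write --bs=4M --iodepth=32 --direct=1 --size=10G
# the random-write benchmark is then run against the now-warm volume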

Regards,
Shridhar


On Thu, 8 Oct 2020 at 18:31, Szabo, Istvan (Agoda) 
wrote:

> Hi,
>
> We have a quite serious issue regarding slow ops.
> In our case DB team used the cluster to read and write in the same pool at
> the same time and it made the cluster useless.
> When we ran fio, we realised that ceph doesn't like the read and write at
> the same time in the same pool, so we tested this with fio to create 2
> separate pool, put the read operation to 1 pool and the write to another
> one and magic happened, no slow ops and a weigh higher performance.
> We asked the db team also to split the read and write (as much as thay
> can) and issue solved (after 2 week).
>
> Thank you
> 
> From: Void Star Nill 
> Sent: Thursday, October 8, 2020 1:14 PM
> To: ceph-users
> Subject: [Suspicious newsletter] [ceph-users] Weird performance issue with
> long heartbeat and slow ops warnings
>
> Email received from outside the company. If in doubt don't click links nor
> open attachments!
> 
>
> Hello,
>
> I have a ceph cluster running 14.2.11. I am running benchmark tests with
> FIO concurrently on ~2000 volumes of 10G each. During the time initial
> warm-up FIO creates a 10G file on each volume before it runs the actual
> read/write I/O operations. During this time, I start seeing the Ceph
> cluster reporting about 35GiB/s write throughput for a while, but after
> some time I start seeing "long heartbeat" and "slow ops" warnings and in a
> few mins the throughput drops to ~1GB/s and stays there until all FIO runs
> complete.
>
> The cluster has 5 monitor nodes and 10 data nodes - each with 10x3.2TB NVME
> drives. I have setup 3 OSD for each NVME, so there are a total of 300 OSDs.
> Each server has 200GB uplink and there's no apparent network bottleneck as
> the network is set up to support over 1Tbps bandwidth. I dont see any CPU
> or memory issues also on the servers.
>
> There is a single manager instance running on one of the mons.
>
> The pool is configured for 3 replication factor with min_size of 2. I tried
> to use pg_num of 8192 and 16384 and saw the issue with both settings.
>
> Could you please suggest if this is a known issue or if I can tune any
> parameters?
>
>Long heartbeat ping times on back interface seen, longest is
> 1202.120 msec
> Long heartbeat ping times on front interface seen, longest is
> 1535.191 msec
> 35 slow ops, oldest one blocked for 122 sec, daemons
>
> [osd.135,osd.14,osd.141,osd.143,osd.149,osd.15,osd.151,osd.153,osd.157,osd.162]...
> have slow ops.
>
> Regards,
> Shridhar
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
> 
> This message is confidential and is for the sole use of the intended
> recipient(s). It may also be privileged or otherwise protected by copyright
> or other legal rules. If you have received it by mistake please let us know
> by reply email and delete it from your system. It is prohibited to copy
> this message or disclose its content to anyone. Any confidentiality or
> privilege is not waived or lost by any mistaken delivery or unauthorized
> disclosure of the message. All messages sent to and from Agoda may be
> monitored to ensure compliance with company policies, to protect the
> company's interests and to remove potential malware. Electronic messages
> may be intercepted, amended, lost or deleted, or contain viruses.
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: [Suspicious newsletter] Weird performance issue with long heartbeat and slow ops warnings

2020-10-08 Thread Szabo, Istvan (Agoda)
Hi,

We had a quite serious issue regarding slow ops.
In our case the DB team used the cluster to read and write in the same pool at 
the same time, and it made the cluster useless.
When we ran fio, we realised that Ceph doesn't like reads and writes at the 
same time in the same pool, so we tested this with fio: we created two separate 
pools, put the read workload in one pool and the writes in another, and magic 
happened: no slow ops and much higher performance.
We also asked the DB team to split the reads and writes (as much as they can), 
and the issue was solved (after two weeks).
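
The fio comparison can be reproduced roughly like this (pool/image names are
placeholders):

fio --name=writes --ioengine=rbd --clientname=admin --pool=bench-write \
    --rbdname=vol-w --rw=randwrite --bs=4k --iodepth=64 --direct=1 \
    --runtime=120 --time_based=1 &
fio --name=reads --ioengine=rbd --clientname=admin --pool=bench-read \
    --rbdname=vol-r --rw=randread --bs=4k --iodepth=64 --direct=1 \
    --runtime=120 --time_based=1
wait
# then run both jobs against the same pool and compare the latency/IOPS numbers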

Thank you

From: Void Star Nill 
Sent: Thursday, October 8, 2020 1:14 PM
To: ceph-users
Subject: [Suspicious newsletter] [ceph-users] Weird performance issue with long 
heartbeat and slow ops warnings

Email received from outside the company. If in doubt don't click links nor open 
attachments!


Hello,

I have a ceph cluster running 14.2.11. I am running benchmark tests with
FIO concurrently on ~2000 volumes of 10G each. During the time initial
warm-up FIO creates a 10G file on each volume before it runs the actual
read/write I/O operations. During this time, I start seeing the Ceph
cluster reporting about 35GiB/s write throughput for a while, but after
some time I start seeing "long heartbeat" and "slow ops" warnings and in a
few mins the throughput drops to ~1GB/s and stays there until all FIO runs
complete.

The cluster has 5 monitor nodes and 10 data nodes - each with 10x3.2TB NVME
drives. I have setup 3 OSD for each NVME, so there are a total of 300 OSDs.
Each server has 200GB uplink and there's no apparent network bottleneck as
the network is set up to support over 1Tbps bandwidth. I dont see any CPU
or memory issues also on the servers.

There is a single manager instance running on one of the mons.

The pool is configured for 3 replication factor with min_size of 2. I tried
to use pg_num of 8192 and 16384 and saw the issue with both settings.

Could you please suggest if this is a known issue or if I can tune any
parameters?

   Long heartbeat ping times on back interface seen, longest is
1202.120 msec
Long heartbeat ping times on front interface seen, longest is
1535.191 msec
35 slow ops, oldest one blocked for 122 sec, daemons
[osd.135,osd.14,osd.141,osd.143,osd.149,osd.15,osd.151,osd.153,osd.157,osd.162]...
have slow ops.

Regards,
Shridhar
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


This message is confidential and is for the sole use of the intended 
recipient(s). It may also be privileged or otherwise protected by copyright or 
other legal rules. If you have received it by mistake please let us know by 
reply email and delete it from your system. It is prohibited to copy this 
message or disclose its content to anyone. Any confidentiality or privilege is 
not waived or lost by any mistaken delivery or unauthorized disclosure of the 
message. All messages sent to and from Agoda may be monitored to ensure 
compliance with company policies, to protect the company's interests and to 
remove potential malware. Electronic messages may be intercepted, amended, lost 
or deleted, or contain viruses.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io