[ceph-users] Re: Clearing contents of OSDs without removing them?

2020-12-19 Thread Robert Sander
Hi,

Am 18.12.20 um 17:56 schrieb Dallas Jones:

> As you can see from the partial output of ceph -s, I left a bunch of crap
> spread across the OSDs...
> 
> pools:   8 pools, 32 pgs
> objects: 219 objects, 1.2 KiB

Just remove all pools and create new ones. Removing the pools also
removes the objects, and you can start fresh.
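
A rough sketch of what that could look like (pool names and PG counts
below are just placeholders, and deleting a pool requires
mon_allow_pool_delete to be enabled):

  ceph config set mon mon_allow_pool_delete true
  ceph osd pool ls        # list the leftover pools
  ceph osd pool rm <pool> <pool> --yes-i-really-really-mean-it
  ceph osd pool create cephfs_metadata 16
  ceph osd pool create cephfs_data 64
  ceph fs new cephfs cephfs_metadata cephfs_data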

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Mandatory information per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing director: Peer Heinlein -- Registered office: Berlin





[ceph-users] Re: Clearing contents of OSDs without removing them?

2020-12-19 Thread Eugen Block
Depending on your actual OSD setup (separate RocksDB/WAL), simply
deleting the pools won't immediately delete the remaining objects. The
DBs are cleaned up quite slowly, which can leave you with completely
saturated disks. This has been explained multiple times here; I just
don't have a link at hand. If this is just a test cluster, it could be
much faster to rebuild the OSDs. Or you can first try the pool deletion
and see how quickly you can rebuild your pools.
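
If you do decide to rebuild the OSDs, a minimal per-OSD sketch could
look like the following (IDs and device paths are placeholders; with
separate DB/WAL devices you would zap those as well and pass e.g.
--block.db when recreating):

  ceph osd out <id>
  systemctl stop ceph-osd@<id>
  ceph osd purge <id> --yes-i-really-mean-it
  ceph-volume lvm zap --destroy /dev/sdX
  ceph-volume lvm create --data /dev/sdX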


Quoting Dallas Jones:


I'm stumbling closer toward a usable production cluster with Ceph, but I
have yet another stupid n00b question I'm hoping you all will tolerate.

I have 38 OSDs up and in across 4 hosts. I (maybe prematurely) removed my
test filesystem as well as the metadata and data pools used by the deleted
filesystem.

This leaves me with 38 OSDs with a bunch of data on them.

Is there a simple way to just whack all of the data on all of those OSDs
before I create new pools and a new filesystem?

Version:
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus
(stable)

As you can see from the partial output of ceph -s, I left a bunch of crap
spread across the OSDs...

pools:   8 pools, 32 pgs
objects: 219 objects, 1.2 KiB
usage:   45 TiB used, 109 TiB / 154 TiB avail
pgs: 32 active+clean

Thanks in advance for a shove in the right direction.

-Dallas





[ceph-users] Re: cephfs flags question

2020-12-19 Thread Patrick Donnelly
On Fri, Dec 18, 2020 at 6:28 AM Stefan Kooman  wrote:
> >> I have searched through documentation but I don't see anything
> >> related. It's also not described / suggested in the part about upgrading
> >> the MDS cluster (IMHO that would be a logical place) [1].
> >
> > You're the first person I'm aware of asking for this. :)
>
> Somehow I'm not surprised :-). But on the other hand, I am. I will try to
> explain this. Maybe this works best with an example.
>
> Nautilus 14.2.4 cluster here (upgraded from luminous, mimic).
>
> relevant part of ceph -s:
>
>   mds: cephfs:1 {0=mds2=up:active} 1 up:standby-replay
>
> ^^ Two MDSes in this cluster: one active and one standby-replay.
>
> ceph fs get cephfs |grep flags
> flags   1e
>
> Let's try to enable standby replay support:
>
> ceph fs set cephfs allow_standby_replay true
>
> That worked, did flags change?
>
> ceph fs get cephfs |grep flags
> flags   3e
>
> Yes! But why?

Well, that's interesting. I don't have an explanation, unfortunately.
You upgraded the MDSs too, right? The only scenario I can think of that
could cause this is that the MDSs were never restarted/upgraded to
Nautilus.

> I would expect support for standby replay to have been enabled
> already. How else would it work without setting this fs feature? But
> apparently it does work, and it does not need this feature to be
> enabled like this. And that might explain why nobody ever wondered how
> to change the "ceph fs flags" in the first place. Is that correct?
>
> At this point I ask myself the question: who / what uses the cephfs
> flags, and what for? Do I, as a storage admin, need to care about this
> at all?

Operators should only care about "is X flag turned on", but we don't
really show that very well in the MDSMap dump. I'll make a note to
improve that. We'd really rather not have operators doing bitwise
arithmetic on the flags bitfield to determine which features are turned
on.

https://tracker.ceph.com/issues/48683
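
In the meantime, a quick-and-dirty way to peek at the bitfield from a
shell is sketched below. The bit names are only my reading of
mds/MDSMap.h around Nautilus, so treat them as assumptions and check
your source tree:

  flags=0x3e   # value reported by "ceph fs get cephfs | grep flags"
  # assumed bit order, low to high -- verify against mds/MDSMap.h:
  names=(not_joinable allow_snaps allow_multimds allow_dirfrags \
         allow_multimds_snaps allow_standby_replay)
  for i in "${!names[@]}"; do
      (( flags & (1 << i) )) && echo "bit $i set: ${names[$i]}"
  done

With 0x1e this prints bits 1-4; after enabling allow_standby_replay the
value becomes 0x3e and bit 5 shows up as well, which matches what you
observed.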

> But hey, here we are, and now I would like to undersand it.
>
> If, just for the sake of upgrading clusters to have identical features,
> I would like to "upgrade" the cephfs to support all ceph fs features, I
> seem not to be able to do that:
>
> ceph fs set cephfs
> Invalid command: missing required parameter
> var(max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client)
> fs set <fs_name>
> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client
> <val> {--yes-i-really-mean-it} :  set fs parameter <var> to <val>
> Error EINVAL: invalid command
>
> I can choose between these:
> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client
>
> Most of them I would not like to set at all (e.g. down, joinable,
> max_mds) as they are not "features" but merely a way to put the ceph fs
> in a certain STATE.
>
> So my question is: what do I need to enable to get an upgraded fs to,
> say, "flags 12" (Nautilus with snapshot support enabled, AFAIK)? Is
> that at all possible?

I'll also add a note to list features that can be turned on:
https://tracker.ceph.com/issues/48682

> The reason why I started this whole thread was to eliminate any
> Ceph-config-related difference between production and test. But maybe I
> should ask a different question: does a (ceph-fuse / kernel) client use
> the *cephfs flags* bits at all? If not, then we don't have to focus on
> this, and we can conclude we cannot reproduce the issue on our test
> environment.

The ceph-fuse/kernel clients don't use these flags; only the MDS does.

> I hope above makes sense to you ;-).
>
> Thanks,
>
> Gr. Stefan
>


-- 
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D


[ceph-users] PG inconsistent with empty inconsistent objects

2020-12-19 Thread Seena Fallah
Hi,

I'm facing something strange! One of the PGs in my pool became
inconsistent, and when I ran `rados list-inconsistent-obj $PG_ID
--format=json-pretty`, the `inconsistents` key was empty! What is this?
Is it a bug in Ceph, or..?
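
As far as I understand, list-inconsistent-obj only reports whatever the
most recent scrub recorded, so a rough next step (same $PG_ID as in the
command above) would be to re-run a deep scrub and query again,
including the snapset variant:

  ceph health detail | grep $PG_ID
  ceph pg deep-scrub $PG_ID
  # once the deep scrub has finished:
  rados list-inconsistent-obj $PG_ID --format=json-pretty
  rados list-inconsistent-snapset $PG_ID --format=json-pretty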

Thanks.


[ceph-users] failed to process reshard logs

2020-12-19 Thread Seena Fallah
Hi,

I used radosgw-admin reshard process to run a manual bucket reshard.
After it completes, it logs the error below:
ERROR: failed to process reshard logs, error=(2) No such file or directory

I had added the bucket to the resharding queue with radosgw-admin
reshard add --bucket bucket-tmp --num-shards 2053.
Is anything wrong with it?
Using Nautilus 14.2.14.
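
For reference, a rough way to check whether the reshard actually went
through (bucket name as in the reshard add above; I'm not sure these
checks cover everything):

  radosgw-admin reshard list
  radosgw-admin reshard status --bucket bucket-tmp
  radosgw-admin bucket limit check   # shows per-bucket shard count and fill status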

Thanks.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io