Re: [ceph-users] deleted snap dirs are back as _origdir_1099536400705

2019-12-16 Thread Marc Roos
Am I the only lucky one having this problem? Should I use the bugtracker system for this? -Original Message- From: Marc Roos Sent: 14 December 2019 10:05 Cc: ceph-users Subject: Re: [ceph-users] deleted snap dirs are back as _origdir_1099536400705 ceph tell mds.a scrub start / rec
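
For reference, the scrub command that gets cut off above has this general form on Nautilus; the exact options Marc used are not visible in the truncated preview, so the "recursive" flag below is an assumption:

    # Ask the active MDS (here named "a") to scrub the tree under "/".
    # "recursive" walks the whole subtree (assumed; repair mode not shown).
    ceph tell mds.a scrub start / recursive
    # Check progress afterwards.
    ceph tell mds.a scrub status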

Re: [ceph-users] deleted snap dirs are back as _origdir_1099536400705

2019-12-16 Thread Marc Roos
Yes, thanks! You are right: I deleted the snapshots created higher up the hierarchy, and they are now gone. -Original Message- Cc: ceph-users Subject: Re: [ceph-users] deleted snap dirs are back as _origdir_1099536400705 With just the one ls listing and my memory it's not totally clear, but I belie

Re: [ceph-users] help! pg inactive and slow requests after filestore to bluestore migration, version 12.2.12

2019-12-16 Thread Stefan Kooman
Quoting Jelle de Jong (jelledej...@powercraft.nl): > > It took three days to recover, and during this time clients were not > responsive. > > How can I migrate to bluestore without inactive PGs or slow requests? I have > several more filestore clusters and I would like to know how to migrate > witho
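
The usual way to keep clients responsive during that kind of rebuild is to throttle recovery and backfill before taking OSDs out; a hedged sketch of the commonly used knobs (the values are illustrative, not taken from the thread):

    # Limit concurrent backfills and recovery ops per OSD.
    ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    # Add a small sleep between recovery ops to leave IO headroom for clients.
    ceph tell 'osd.*' injectargs '--osd-recovery-sleep 0.1'
    # Optionally hold back data movement until the replacement OSDs are ready.
    ceph osd set norebalance
    ceph osd set nobackfill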

Re: [ceph-users] deleted snap dirs are back as _origdir_1099536400705

2019-12-16 Thread Gregory Farnum
With just the one ls listing and my memory it's not totally clear, but I believe this is the output you get when you delete a snapshot folder but it's still referenced by a different snapshot farther up the hierarchy. -Greg On Mon, Dec 16, 2019 at 8:51 AM Marc Roos wrote: > > > Am I the only lucky on
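
One way to see what Greg describes is to compare the .snap directories at different levels of the tree; the mount point, directory, and snapshot names below are only illustrative:

    # A snapshot taken on an ancestor directory still shows up in subdirectories
    # as a _name_<inode> entry until the parent-level snapshot is removed.
    ls /mnt/cephfs/.snap            # snapshots taken at the root
    ls /mnt/cephfs/somedir/.snap    # may list _origdir_<inode> style entries
    # Removing the parent-level snapshot makes the referenced entries disappear
    # ("mysnap" is a placeholder name):
    rmdir /mnt/cephfs/.snap/mysnap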

[ceph-users] Separate disk sets for high IO?

2019-12-16 Thread Philip Brown
Still relatively new to ceph, but have been tinkering for a few weeks now. If I'm reading the various docs correctly, then any RBD in a particular ceph cluster will be distributed across ALL OSDs, ALL the time. There is no way to designate a particular set of disks, AKA OSDs, to be a high perfo

Re: [ceph-users] Separate disk sets for high IO?

2019-12-16 Thread Marc Roos
You can classify OSDs, e.g. as ssd, and you can assign this class to a pool you create. This way you have RBDs running on only SSDs. I think there is also a class for nvme, and you can create custom classes. -Original Message- From: Philip Brown [mailto:pbr...@medata.com] Se
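
A minimal sketch of what Marc describes, assuming placeholder names for the rule and pools:

    # CRUSH rule that only selects OSDs of device class "ssd".
    ceph osd crush rule create-replicated ssd-only default host ssd
    # New pool placed on that rule ("rbd-ssd" and the PG counts are placeholders).
    ceph osd pool create rbd-ssd 128 128 replicated ssd-only
    # For an existing pool, switch its rule instead.
    ceph osd pool set rbd-existing crush_rule ssd-only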

Re: [ceph-users] Separate disk sets for high IO?

2019-12-16 Thread Nathan Fish
Indeed, you can set device class to pretty much arbitrary strings and specify them. By default, 'hdd', 'ssd', and I think 'nvme' are autodetected - though my Optanes showed up as 'ssd'. On Mon, Dec 16, 2019 at 4:58 PM Marc Roos wrote: > > > > You can classify osd's, eg as ssd. And you can assign
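
The autodetected assignment can be checked before building rules on top of it; these are standard commands on Luminous and later:

    # List all device classes known to the cluster.
    ceph osd crush class ls
    # List the OSDs carrying a given class (e.g. Optanes reported as "ssd").
    ceph osd crush class ls-osd ssd
    # The CLASS column of the tree view shows the per-OSD assignment.
    ceph osd tree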

Re: [ceph-users] Separate disk sets for high IO?

2019-12-16 Thread Philip Brown
Sounds very useful. Any online example documentation for this? Haven't found any so far. - Original Message - From: "Nathan Fish" To: "Marc Roos" Cc: "ceph-users" , "Philip Brown" Sent: Monday, December 16, 2019 2:07:44 PM Subject: Re: [ceph-users] Separate disk sets for high IO? Inde

Re: [ceph-users] Separate disk sets for high IO?

2019-12-16 Thread DHilsbos
Philip; There isn't any documentation that shows specifically how to do that, though the below comes close. Here's the documentation, for Nautilus, on CRUSH operations: https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/ About a third of the way down the page is a discussion of "D

Re: [ceph-users] Separate disk sets for high IO?

2019-12-16 Thread Philip Brown
Yes, I saw that, thanks. Unfortunately, that doesn't show use of "custom classes" as someone hinted at. - Original Message - From: dhils...@performair.com To: "ceph-users" Cc: "Philip Brown" Sent: Monday, December 16, 2019 3:38:49 PM Subject: RE: Separate disk sets for high IO? Philip;

Re: [ceph-users] Separate disk sets for high IO?

2019-12-16 Thread DHilsbos
Philip; Ah, ok. I suspect that isn't documented because the developers don't want average users doing it. It's also possible that it won't work as expected, as there is discussion on the web of device classes being changed at startup of the OSD daemon. That said... "ceph osd crush class crea
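
The command DHilsbos starts to quote is cut off above; a hedged sketch of how a custom class is usually assigned (the class name and OSD id are placeholders):

    # An OSD keeps only one class, so remove the existing one first.
    ceph osd crush rm-device-class osd.12
    # Assign an arbitrary custom class name.
    ceph osd crush set-device-class fast-nvme osd.12
    # A rule restricted to the custom class then works like the built-in ones.
    ceph osd crush rule create-replicated nvme-rule default host fast-nvme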

Re: [ceph-users] Separate disk sets for high IO?

2019-12-16 Thread Nathan Fish
https://ceph.io/community/new-luminous-crush-device-classes/ https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/#device-classes On Mon, Dec 16, 2019 at 5:42 PM Philip Brown wrote: > > Sounds very useful. > > Any online example documentation for this? > havent found any so far? > > > -

Re: [ceph-users] Separate disk sets for high IO?

2019-12-16 Thread Paul Mezzanini
We use custom device classes to split data nvme from metadata nvme drives. If a device has a class set, it does not get overwritten at startup. Once you set the class it works just like it says on the tin: put this pool on these classes, this other pool on this other class, etc. -- Paul Mezzan
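
A sketch of the split Paul describes, with class, rule, and pool names assumed:

    # One rule per custom class.
    ceph osd crush rule create-replicated data-nvme default host nvme-data
    ceph osd crush rule create-replicated meta-nvme default host nvme-meta
    # Pin each pool to its rule (pool names are placeholders).
    ceph osd pool set cephfs_data crush_rule data-nvme
    ceph osd pool set cephfs_metadata crush_rule meta-nvme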

Re: [ceph-users] Pool Max Avail and Ceph Dashboard Pool Useage on Nautilus giving different percentages

2019-12-16 Thread ceph
I have observed this in the ceph nautilus dashboard too, and think it is a display bug... but sometimes it shows the right values. Which nautilus version do you use? On 10 December 2019 14:31:05 CET, "David Majchrzak, ODERLAND Webbhotell AB" wrote: >Hi! > >While browsing /#/pool in nautilus ceph dashbo
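
A quick cross-check against the dashboard numbers is the CLI view, which reports per-pool MAX AVAIL and %USED directly from the cluster:

    ceph df detail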