Re: [ceph-users] cephfs quota limit

2018-11-07 Thread Jan Fajerski
On Tue, Nov 06, 2018 at 08:57:48PM +0800, Zhenshi Zhou wrote: Hi, I'm wondering whether cephfs have quota limit options. I use kernel client and ceph version is 12.2.8. Thanks CephFS has quota support, see http://docs.ceph.com/docs/luminous/cephfs/quota/. The kernel has recently gained C
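
For reference, CephFS quotas are set as extended attributes on a directory; a minimal example, with an illustrative mount point and values:

  setfattr -n ceph.quota.max_bytes -v 100000000000 /mnt/cephfs/some/dir   # limit to ~100 GB
  setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/some/dir          # limit to 10000 files
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/some/dir                   # read the limit back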

[ceph-users] [bug] mount.ceph man description is wrong

2018-11-07 Thread xiang . dai
Hi! I use ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic (stable) and I want to call `ls -ld` to read the whole directory size in CephFS. When I read man mount.ceph: rbytes Report the recursive size of the directory contents for st_size on directories. Default: on But without rbyte
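
For context, rbytes is a mount option of the kernel client, so whether `ls -ld` shows recursive sizes depends on how the filesystem was mounted; a sketch with an illustrative monitor address and secret file:

  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,rbytes
  ls -ld /mnt/cephfs/some/dir   # with rbytes, st_size is the recursive size of the directory contents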

Re: [ceph-users] librbd::image::CreateRequest: 0x55e4fc8bf620 handle_create_id_object: error creating RBD id object

2018-11-07 Thread Dengke Du
Thanks! This problem was fixed by your advice: 1. add 3 OSD services 2. link libcls_rbd.so to libcls_rbd.so.1.0.0, because I built ceph from source code according to Mykola's advice. On 2018/11/6 at 4:33 PM, Ashley Merrick wrote: Is that correct or have you added more than 1 OSD? CEPH is never goi
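
The symlink fix described above would look roughly like this; the rados-classes directory is an assumption and depends on the distribution or the source-build install prefix:

  # path is an assumption -- wherever the OSD loads its object-class plugins from
  cd /usr/lib/rados-classes
  ln -s libcls_rbd.so.1.0.0 libcls_rbd.so   # make the loader's expected name point at the built library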

[ceph-users] ceph 12.2.9 release

2018-11-07 Thread Dietmar Rieder
Hi, I wonder if there is any release announcement for ceph 12.2.9 that I missed. I just found the new packages on download.ceph.com, is this an official release? ~ Dietmar -- Dietmar Rieder, Mag.Dr., Innsbruck Medical University, Biocenter - D

Re: [ceph-users] cephfs quota limit

2018-11-07 Thread Luis Henriques
Jan Fajerski writes: > On Tue, Nov 06, 2018 at 08:57:48PM +0800, Zhenshi Zhou wrote: >> Hi, >> I'm wondering whether cephfs have quota limit options. >> I use kernel client and ceph version is 12.2.8. >> Thanks > CephFS has quota support, see > http://docs.ceph.com/docs/luminous/cephfs/q

Re: [ceph-users] cephfs quota limit

2018-11-07 Thread Zhenshi Zhou
Hi Jan, Thanks for the explanation. I think I would deploy a mimic cluster and test it on a client with kernel version above 4.17. Then I may do some planning on upgrading my current cluster if everything goes fine :) Thanks. Jan Fajerski wrote on Wednesday, Nov 7, 2018 at 4:50 PM: > On Tue, Nov 06, 2018 at 08:57:

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Konstantin Shalygin
I wonder if there is any release announcement for ceph 12.2.9 that I missed. I just found the new packages on download.ceph.com, is this an official release? This is because 12.2.9 has several bugs. You should avoid using this release and wait for 12.2.10. k

[ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter
Hi, I have several VM images sitting in a Ceph pool which are snapshotted. Is there a way to move such images from one pool to another and preserve the snapshots? Regards, Uwe

[ceph-users] scrub and deep scrub - not respecting end hour

2018-11-07 Thread Luiz Gustavo Tonello
Hello guys, Some days ago I created a time window for scrub execution on my OSDs, and for 2 days it worked perfectly. Yesterday, I saw a deep scrub running outside this period and I thought that maybe osd_scrub_begin_hour and osd_scrub_end_hour are only for scrub and not for deep scrub (am I righ
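
As far as I understand, osd_scrub_begin_hour and osd_scrub_end_hour gate when new scrubs (including deep scrubs) are scheduled; a scrub already dispatched can run past the end hour, and overdue PGs may bypass the window entirely. A configuration sketch, with illustrative hours:

  # ceph.conf, [osd] section -- only start new (deep-)scrubs between 22:00 and 06:00
  osd_scrub_begin_hour = 22
  osd_scrub_end_hour = 6
  # or change it at runtime on all OSDs
  ceph tell osd.* injectargs '--osd_scrub_begin_hour 22 --osd_scrub_end_hour 6'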

Re: [ceph-users] scrub and deep scrub - not respecting end hour

2018-11-07 Thread Konstantin Shalygin
Or is scrub still running until it finishes the process in the queue? Yes, this is due to the queue thresholds. If you want your scrubs to finish by 11, schedule the end hour to 10. k

Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Jason Dillaman
With the Mimic release, you can use "rbd deep-copy" to transfer the images (and associated snapshots) to a new pool. Prior to that, you could use "rbd export-diff" / "rbd import-diff" to manually transfer an image and its associated snapshots. On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter wrote: > > H
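
For concreteness, a sketch of both approaches, with illustrative pool, image, and snapshot names; the Mimic command is spelled `rbd deep cp` in the CLI as far as I know, and the pre-Mimic chain assumes the destination image already exists with a matching size:

  # Mimic and later: copy the image together with its snapshots
  rbd deep cp vms/vm-101-disk-1 vdisks/vm-101-disk-1

  # Pre-Mimic: replay the image snapshot by snapshot
  rbd create vdisks/vm-101-disk-1 --size 32G
  rbd export-diff vms/vm-101-disk-1@snap1 - | rbd import-diff - vdisks/vm-101-disk-1
  rbd export-diff --from-snap snap1 vms/vm-101-disk-1@snap2 - | rbd import-diff - vdisks/vm-101-disk-1
  rbd export-diff --from-snap snap2 vms/vm-101-disk-1 - | rbd import-diff - vdisks/vm-101-disk-1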

Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter
I'm still on luminous (12.2.8). I'll have a look at the commands. Thanks. On 07.11.18 at 14:31, Jason Dillaman wrote: > With the Mimic release, you can use "rbd deep-copy" to transfer the > images (and associated snapshots) to a new pool. Prior to that, you > could use "rbd export-diff" / "rbd im

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Simon Ironside
On 07/11/2018 10:59, Konstantin Shalygin wrote: I wonder if there is any release announcement for ceph 12.2.9 that I missed. I just found the new packages on download.ceph.com, is this an official release? This is because 12.2.9 have a several bugs. You should avoid to use this release and

[ceph-users] osd reweight = pgs stuck unclean

2018-11-07 Thread John Petrini
Hello, I've got a small development cluster that shows some strange behavior that I'm trying to understand. If I reduce the weight of an OSD using `ceph osd reweight X 0.9`, for example, Ceph will move data but recovery stalls and a few PGs remain stuck unclean. If I reset them all back to 1 ceph go
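
A minimal way to reproduce and inspect the behaviour described here (the OSD id is illustrative):

  ceph osd reweight 12 0.9      # lower the override weight of a single OSD
  ceph pg dump_stuck unclean    # list the PGs that stay stuck unclean once recovery stalls
  ceph health detail            # shows which PGs and OSDs are involved
  ceph osd reweight 12 1.0      # revert the override weight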

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Matthew Vernon
On 07/11/2018 10:59, Konstantin Shalygin wrote: >> I wonder if there is any release announcement for ceph 12.2.9 that I missed. >> I just found the new packages on download.ceph.com, is this an official >> release? > > This is because 12.2.9 have a several bugs. You should avoid to use this > rele

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Marc Roos
I don't see the problem. I only install ceph updates once others have done so and have been running for several weeks without problems. I noticed this 12.2.9 availability too, but did not see any release notes, so why install it? Especially with the recent issues of other releases. That bei

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Thomas White
One of the Ceph clusters my team manages is on 12.2.9 in a Proxmox environment and seems to be running fine with simple x3 replication and RBD. It would be interesting to know what issues have been encountered so far. All our OSDs are simple Filestore at present and our path to 12.2.9 was 10.2.7 -> 10.2.1

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Matthew Vernon
On 07/11/2018 14:16, Marc Roos wrote: > > > I don't see the problem. I am installing only the ceph updates when > others have done this and are running several weeks without problems. I > have noticed this 12.2.9 availability also, did not see any release > notes, so why install it? Especiall

Re: [ceph-users] Packages for debian in Ceph repo

2018-11-07 Thread Kevin Olbrich
On Wed., Nov 7, 2018 at 07:40, Nicolas Huillard < nhuill...@dolomede.fr> wrote: > > > It lists rbd but still fails with the exact same error. > > I stumbled upon the exact same error, and since there was no answer > anywhere, I figured it was a very simple problem: don't forget to > install t

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Dietmar Rieder
On 11/7/18 11:59 AM, Konstantin Shalygin wrote: >> I wonder if there is any release announcement for ceph 12.2.9 that I missed. >> I just found the new packages on download.ceph.com, is this an official >> release? > > This is because 12.2.9 have a several bugs. You should avoid to use this > rele

Re: [ceph-users] [bug] mount.ceph man description is wrong

2018-11-07 Thread Ilya Dryomov
On Wed, Nov 7, 2018 at 2:25 PM wrote: > > Hi! > > I use ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic > (stable) and i want to call `ls -ld` to read whole dir size in cephfs: > > When i man mount.ceph: > > rbytes Report the recursive size of the directory contents for st_si

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
I would agree with that. So, here is what I am planning on doing today. I will try this from scratch on a different OSD node from the very first step and log input and output for every step. Here is the outline of what I think (based on all the email exchanges so far) should happen. *** Try
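
A condensed sketch of the per-OSD conversion being worked out in this thread, assuming OSD 120 with Filestore data on /dev/sdh and an assumed SSD partition /dev/sda2 reserved for block.db; device names are illustrative and the zap/destroy steps are destructive:

  ceph osd out 120
  systemctl stop ceph-osd@120
  umount /var/lib/ceph/osd/ceph-120            # if the Filestore mount is still present
  ceph-volume lvm zap /dev/sdh --destroy       # wipe the old Filestore data disk
  ceph osd destroy 120 --yes-i-really-mean-it  # keep the OSD id and CRUSH position
  ceph-volume lvm create --bluestore --data /dev/sdh --block.db /dev/sda2 --osd-id 120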

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Ashley Merrick
ceph osd destroy 70 --yes-i-really-mean-it I am guessing that’s a copy-and-paste mistake and should say 120. Is the SSD at /dev/sdh fully for OSD 120, or is a partition on this SSD the journal while the other partitions are for other OSDs? On Wed, 7 Nov 2018 at 11:21 PM, Hayashida, Mami wrote: > I w

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
Yes, that was indeed a copy-and-paste mistake. I am trying to use /dev/sdh (hdd) for data and a part of /dev/sda (ssd) for the journal. That's how the Filestore is set up. So, for the Bluestore, data on /dev/sdh, wal and db on /dev/sda. On Wed, Nov 7, 2018 at 10:26 AM, Ashley Merrick wrote:

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Ashley Merrick
Sorry, my mix-up. Therefore you shouldn’t be running ZAP against /dev/sda, as this will wipe the whole SSD. I guess currently in its setup it’s using a partition on /dev/sda? Like /dev/sda2, for example. , Ashley On Wed, 7 Nov 2018 at 11:30 PM, Hayashida, Mami wrote: > Yes, that was indeed a copy-

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Gregory Farnum
On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside wrote: > > > On 07/11/2018 10:59, Konstantin Shalygin wrote: > >> I wonder if there is any release announcement for ceph 12.2.9 that I > missed. > >> I just found the new packages on download.ceph.com, is this an official > >> release? > > > > This is

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hector Martin
On 11/8/18 12:29 AM, Hayashida, Mami wrote: > Yes, that was indeed a copy-and-paste mistake.  I am trying to use > /dev/sdh (hdd) for data and a part of /dev/sda (ssd)  for the journal.  > That's how the Filestore is set-up.  So, for the Bluestore, data on > /dev/sdh,  wal and db on /dev/sda. /de

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Kevin Olbrich
On Wed., Nov 7, 2018 at 16:40, Gregory Farnum wrote: > On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside > wrote: > >> >> >> On 07/11/2018 10:59, Konstantin Shalygin wrote: >> >> I wonder if there is any release announcement for ceph 12.2.9 that I >> missed. >> >> I just found the new packages

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Christoph Adomeit
Hello everyone, we upgraded to 12.2.9 because it was in the official repos. Right after the update and some scrubs we had issues. This morning after the regular scrubs we had around 10% of all PGs inconsistent: pgs: 4036 active+clean, 380 active+clean+inconsistent. After repairing

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Simon Ironside
On 07/11/2018 15:39, Gregory Farnum wrote: On Wed, Nov 7, 2018 at 5:58 AM Simon Ironside > wrote: On 07/11/2018 10:59, Konstantin Shalygin wrote: >> I wonder if there is any release announcement for ceph 12.2.9 that I missed. >> I just found the

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Gregory Farnum
The specific bug you are known to be at risk of when installing the 12.2.9 packages is http://tracker.ceph.com/issues/36686. It only triggers when PGs are not active+clean and OSDs are running different minor versions. (Even more specifically, it seems to only show up when doing backfill from an OSD running
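
To gauge exposure during such a mixed-version upgrade, it may help to confirm what each daemon is running and to keep PGs active+clean while hosts are restarted; a sketch (the nobackfill flag is a general precaution, not advice taken from this thread):

  ceph versions                 # per-daemon version breakdown; mixed 12.2.8/12.2.9 OSDs are the risky state
  ceph status                   # confirm PGs are active+clean before restarting the next host
  ceph osd set nobackfill       # optionally hold off backfill while OSDs are being restarted
  ceph osd unset nobackfill     # re-enable it once every OSD is on the same version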

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Ashley Merrick
I am seeing this on the latest mimic on my test cluster as well. Every automatic deep scrub comes back as inconsistent, but doing another manual scrub comes back as fine and clear each time. Not sure if related or not... On Wed, 7 Nov 2018 at 11:57 PM, Christoph Adomeit < christoph.adom...@gatworks

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
Thank you very much. Yes, I am aware that zapping the SSD and converting it to LVM requires stopping all the FileStore OSDs whose journals are on that SSD first. I will add in the `hdparm` to my steps. I did run into remnants of gpt information lurking around when trying to re-use osd disks in th

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hector Martin
On 11/8/18 2:15 AM, Hayashida, Mami wrote: > Thank you very much.  Yes, I am aware that zapping the SSD and > converting it to LVM requires stopping all the FileStore OSDs whose > journals are on that SSD first.  I will add in the `hdparm` to my steps. > I did run into remnants of gpt information l
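
For clearing leftover GPT and filesystem signatures before re-using a disk, something like the following is commonly used; the device name is illustrative and the commands are destructive:

  sgdisk --zap-all /dev/sdh    # remove primary and backup GPT structures
  wipefs --all /dev/sdh        # clear any remaining filesystem/LVM signatures
  partprobe /dev/sdh           # have the kernel re-read the now-empty partition table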

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread David Turner
My big question is that we've had a few of these releases this year that are bugged and shouldn't be upgraded to... They don't have any release notes or announcement and the only time this comes out is when users finally ask about it weeks later. Why is this not proactively announced to avoid a pr

Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter
I've been reading a bit and trying around but it seems I'm not quite where I want to be. I want to migrate from pool "vms" to pool "vdisks".
# ceph osd pool ls
vms
vdisks
# rbd ls vms
vm-101-disk-1
vm-101-disk-2
vm-102-disk-1
vm-102-disk-2
# rbd snap ls vms/vm-102-disk-2
SNAPID NAME SIZE

Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Jason Dillaman
If your CLI supports "--export-format 2", you can just do "rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format 2 - vdisks/vm-102-disk-2" (you need to specify the data format on import otherwise it will assume it's copying a raw image). On Wed, Nov 7, 2018 at 2:38 PM Uwe Sau

Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter
I tried that but it fails:
# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format 2 - vdisks/vm-102-disk-2
rbd: import header failed.
Importing image: 0% complete...failed.
rbd: import failed: (22) Invalid argument
Exporting image: 0% complete...failed.
rbd: export error

Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter
Looks like I'm hitting this: http://tracker.ceph.com/issues/34536 On 07.11.18 at 20:46, Uwe Sauter wrote: I tried that but it fails: # rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format 2 - vdisks/vm-102-disk-2 rbd: import header failed. Importing image: 0% complet

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hayashida, Mami
Wow, after all of this, everything went well and I was able to convert osd.120-129 from Filestore to Bluestore. ***
root@osd2:~# ls -l /var/lib/ceph/osd/ceph-120
total 48
-rw-r--r-- 1 ceph ceph 384 Nov 7 14:34 activate.monmap
lrwxrwxrwx 1 ceph ceph 19 Nov 7 14:38 block -> /dev/hdd120/data120
lr

Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Jason Dillaman
There was a bug in "rbd import" where it disallowed the use of stdin for export-format 2. This has been fixed in v12.2.9 and is in the pending 13.2.3 release. On Wed, Nov 7, 2018 at 2:46 PM Uwe Sauter wrote: > > I tried that but it fails: > > # rbd export --export-format 2 vms/vm-102-disk-2 - | rb

Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter
I do have an empty disk in that server. Just go the extra step, save the export to a file and import that one? On 07.11.18 at 20:55, Jason Dillaman wrote: There was a bug in "rbd import" where it disallowed the use of stdin for export-format 2. This has been fixed in v12.2.9 and is in the pe

Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Jason Dillaman
Yes, that's it -- or upgrade your local Ceph client packages (if you are on luminous). On Wed, Nov 7, 2018 at 3:02 PM Uwe Sauter wrote: > > I do have an empty disk in that server. Just go the extra step, save the > export to a file and import that one? > > > > Am 07.11.18 um 20:55 schrieb Jason D
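
The file-based workaround confirmed here would look roughly like this; the mount point for the spare disk is illustrative:

  rbd export --export-format 2 vms/vm-102-disk-2 /mnt/spare/vm-102-disk-2.rbd2
  rbd import --export-format 2 /mnt/spare/vm-102-disk-2.rbd2 vdisks/vm-102-disk-2
  # once the copy and its snapshots are verified, the temporary file and the source image can be removed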

Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Alex Gorbachev
On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter wrote: > > I've been reading a bit and trying around but it seems I'm not quite where I > want to be. > > I want to migrate from pool "vms" to pool "vdisks". > > # ceph osd pool ls > vms > vdisks > > # rbd ls vms > vm-101-disk-1 > vm-101-disk-2 > vm-102-d

Re: [ceph-users] Move rdb based image from one pool to another

2018-11-07 Thread Uwe Sauter
On 07.11.18 at 21:17, Alex Gorbachev wrote: On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter wrote: I've been reading a bit and trying around but it seems I'm not quite where I want to be. I want to migrate from pool "vms" to pool "vdisks". # ceph osd pool ls vms vdisks # rbd ls vms vm-101-di

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Ken Dreyer
On Wed, Nov 7, 2018 at 8:57 AM Kevin Olbrich wrote: > We solve this problem by hosting two repos. One for staging and QA and one > for production. > Every release gets to staging (for example directly after building a scm tag). > > If QA passed, the stage repo is turned into the prod one. > Using

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Ricardo J. Barberis
On Wednesday 07/11/2018 at 11:05, Matthew Vernon wrote: > On 07/11/2018 10:59, Konstantin Shalygin wrote: > >> I wonder if there is any release announcement for ceph 12.2.9 that I > >> missed. > >> I just found the new packages on download.ceph.com, is this an official > >> release? > > > >

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Ricardo J. Barberis
On Wednesday 07/11/2018 at 11:28, Matthew Vernon wrote: > On 07/11/2018 14:16, Marc Roos wrote: > > > > > > I don't see the problem. I am installing only the ceph updates when > > others have done this and are running several weeks without problems. I > > have noticed this 12.2.9 availab

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Ricardo J. Barberis
On Wednesday 07/11/2018 at 10:58, Simon Ironside wrote: > On 07/11/2018 10:59, Konstantin Shalygin wrote: > >> I wonder if there is any release announcement for ceph 12.2.9 that I > >> missed. I just found the new packages on download.ceph.com, is this an > >> official release? > > > > This i

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Neha Ojha
For those on 12.2.9 - If you have successfully upgraded to 12.2.9, there is no reason for you to downgrade, since the bug appears while upgrading to 12.2.9 - http://tracker.ceph.com/issues/36686. We suggest that you not upgrade to 12.2.10, which reverts the feature that caused this bug. Also, 12.2.1

Re: [ceph-users] Filestore to Bluestore migration question

2018-11-07 Thread Hector Martin
On 11/8/18 4:54 AM, Hayashida, Mami wrote: > Wow, after all of this, everything went well and I was able to convert > osd.120-129 from Filestore to Bluestore. Glad to hear it works! Make sure you reboot and check that everything comes back up cleanly. FWIW, I expect most of the files under /var/
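
A quick check that the converted OSDs really came back as BlueStore after the reboot (OSD id illustrative):

  ceph osd metadata 120 | grep '"osd_objectstore"'   # should report "bluestore"
  ceph osd tree | grep -w 'osd.120'                  # confirm the OSD is back up and in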

[ceph-users] troubleshooting ceph rdma performance

2018-11-07 Thread Raju Rangoju
Hello All, I have been collecting performance numbers on our ceph cluster, and I have noticed very poor throughput with ceph async+rdma compared with TCP. I was wondering what tunings/settings I should apply to the cluster to improve ceph rdma (async+rdma) performance. Currently,
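
For reference, the async+rdma messenger is selected through ceph.conf options along these lines; the RDMA device name below is an assumption that depends on the NIC (e.g. as reported by ibv_devices), and all daemons plus clients need matching settings and sufficiently high memlock limits:

  [global]
  ms_type = async+rdma
  ms_async_rdma_device_name = mlx5_0   # assumption: substitute the local RDMA device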

[ceph-users] Migrate OSD journal to SSD partition

2018-11-07 Thread Dave.Chen
Hi all, I have been trying to migrate the journal to an SSD partition for a while. Basically I followed the guide here [1]; I have the below configuration defined in ceph.conf [osd.0] osd_journal = /dev/disk/by-partlabel/journal-1 And then create the journal in this way, # ceph-osd -i 0 -mk
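
A sketch of the usual flush-and-recreate sequence for moving a Filestore journal, using the OSD id and partition label from the post; the systemd unit name is an assumption about the deployment:

  ceph osd set noout
  systemctl stop ceph-osd@0
  ceph-osd -i 0 --flush-journal      # flush and close the old journal
  # point the OSD at the new device, either via osd_journal in ceph.conf (as above)
  # or by replacing the journal symlink:
  ln -sf /dev/disk/by-partlabel/journal-1 /var/lib/ceph/osd/ceph-0/journal
  ceph-osd -i 0 --mkjournal          # initialize the journal on the new partition
  systemctl start ceph-osd@0
  ceph osd unset noout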

Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Christoph Adomeit
So my question regarding the latest ceph releases still is: where do all these scrub errors come from, and do we have to worry about them? On Thu, Nov 08, 2018 at 12:16:05AM +0800, Ashley Merrick wrote: > I am seeing this on the latest mimic on my test cluster aswel. > > Every automatic deep-scrub c