[ceph-users] luminous -> nautilus upgrade path

2020-02-12 Thread Wolfgang Lendl
Hello, we plan to upgrade from Luminous to Nautilus. Does it make sense to do the Mimic step instead of going directly to Nautilus? BR Wolfgang

[ceph-users] Re: luminous -> nautilus upgrade path

2020-02-12 Thread ceph
Afaik you can migrate from 12 to 14 in a direct way. This is supported iirc. I will do that in a few months on my Ceph cluster. Hth Mehmet On 12 February 2020 09:19:53 CET, Wolfgang Lendl wrote: >hello, > >we plan to upgrade from luminous to nautilus. >does it make sense to do the mimic step

[ceph-users] Re: luminous -> nautilus upgrade path

2020-02-12 Thread Eugen Block
Hi, we also skipped Mimic when upgrading from L --> N and it worked fine. Quoting c...@elchaka.de: Afaik you can migrate from 12 to 14 in a direct way. This is supported iirc. I will do that in a few months on my Ceph cluster. Hth Mehmet On 12 February 2020 09:19:53 CET, Wolfgang L

[ceph-users] Re: luminous -> nautilus upgrade path

2020-02-12 Thread Massimo Sgaravatto
We went from Luminous to Nautilus, skipping Mimic. This is supported and documented. On Wed, Feb 12, 2020 at 9:30 AM Eugen Block wrote: > Hi, > > we also skipped Mimic when upgrading from L --> N and it worked fine. > > > Quoting c...@elchaka.de: > > > Afaik you can migrate from 12 to 14 in a
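For anyone planning the same jump, a minimal sketch of the documented preflight and wrap-up commands (ordering and names here are illustrative; the Nautilus release notes have the authoritative procedure):

  ceph versions                          # confirm what every daemon currently runs
  ceph osd set noout                     # avoid rebalancing while daemons restart
  # ... upgrade packages and restart mons, then mgrs, then OSDs, then MDS/RGW ...
  ceph osd require-osd-release nautilus  # once every OSD runs Nautilus
  ceph mon enable-msgr2                  # enable the v2 wire protocol (new in Nautilus)
  ceph osd unset noout
  ceph -s                                # verify HEALTH_OK before declaring victory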

[ceph-users] Re: luminous -> nautilus upgrade path

2020-02-12 Thread Dietmar Rieder
Worked fine for us as well. D. On 2020-02-12 09:33, Massimo Sgaravatto wrote: > We skipped from Luminous to Nautilus, skipping Mimic > This is supported and documented > > On Wed, Feb 12, 2020 at 9:30 AM Eugen Block wrote: > >> Hi, >> >> we also skipped Mimic when upgrading from L --> N and it

[ceph-users] Re: "mds daemon damaged" after restarting MDS - Filesystem DOWN

2020-02-12 Thread Dan van der Ster
Hi all, I'm helping Luca with this a bit and we made some progress. We currently have an MDS starting and we're able to see the files. But when browsing the filesystem we get a lot of "loaded dup inode" warnings, e.g. 2020-02-12 08:47:44.546063 mds.ceph-mon-01 [ERR] loaded dup inode 0x100
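For anyone hitting similar MDS damage, a few read-only commands that help gauge the state before attempting repairs (the MDS name/rank below is a placeholder):

  ceph fs status                    # which ranks are up, which are marked damaged
  ceph health detail                # full text of the "mds daemon damaged" warning
  ceph tell mds.<name> damage ls    # list the damage table entries the MDS has recorded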

[ceph-users] Re: cephfs slow, howto investigate and tune mds configuration?

2020-02-12 Thread Wido den Hollander
On 2/11/20 2:53 PM, Marc Roos wrote: > > Say I think my cephfs is slow when I rsync to it, slower than it used to > be. First of all, I do not get why it reads so much data. I assume the > file attributes need to come from the mds server, so the rsync backup > should mostly cause writes not?
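A hedged starting point for investigating whether the MDS is the bottleneck during such an rsync (daemon names and socket paths are placeholders and must match your deployment):

  ceph fs status                               # client count, request rate, cache size per rank
  ceph daemon mds.<name> perf dump mds         # request/reply and cache counters
  ceph daemon mds.<name> dump_ops_in_flight    # operations currently slow or stuck
  ceph daemon mds.<name> session ls            # per-client caps and request load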

[ceph-users] Re: cephfs slow, howto investigate and tune mds configuration?

2020-02-12 Thread Janne Johansson
> > > The problem is that rsync creates and renames files a lot. When doing > this with small files it can be very heavy for the MDS. > Perhaps run rsync with --inplace to prevent it from re-creating partial files as a temp entity named .dfg45terf.~tmp~ and then renaming it into the correct filen
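A minimal sketch of that invocation (paths are examples only; note that --inplace leaves partially transferred files in an inconsistent state if a run is interrupted):

  # write changed blocks directly into the destination file instead of
  # building .<name>.~tmp~ files and renaming them afterwards
  rsync -a --inplace --no-whole-file /data/ /mnt/cephfs/backup/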

[ceph-users] Re: cephfs slow, howto investigate and tune mds configuration?

2020-02-12 Thread Marc Roos
>> Say I think my cephfs is slow when I rsync to it, slower than it used >> to be. First of all, I do not get why it reads so much data. I assume >> the file attributes need to come from the mds server, so the rsync >> backup should mostly cause writes not? > Are you run

[ceph-users] Re: extract disk usage stats from running ceph cluster

2020-02-12 Thread mj
Hi Muhammad, Yes, that tool helps! Thank you for pointing it out! With a combination of openSeaChest_Info and smartctl I was able to extract the following stats from our cluster, and the numbers are very surprising to me. I hope someone here can explain what we see below: node1 AnnualWr
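For reference, the raw write counters can be pulled with something like the loop below (device range and attribute names are examples only; SMART attribute naming varies per vendor and differs again for NVMe):

  for dev in /dev/sd{a..l}; do
    echo "== $dev =="
    smartctl -a "$dev" | grep -Ei 'Total_LBAs_Written|Host_Writes|Percentage.?Used|Data Units Written'
  done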

[ceph-users] Re: extract disk usage stats from running ceph cluster

2020-02-12 Thread mj
On 2/12/20 11:23 AM, mj wrote: Better layout for the disk usage stats: https://pastebin.com/8V5VDXNt

[ceph-users] Ceph Erasure Coding - Stored vs used

2020-02-12 Thread Kristof Coucke
Hi all, I have an issue on my Ceph cluster. For one of my pools I have 107TiB STORED and 298TiB USED. This is strange, since I've configured erasure coding (6 data chunks, 3 coding chunks). So, in an ideal world this should result in approx. 160.5TiB USED. The question now is why this is the case

[ceph-users] Re: Ceph Erasure Coding - Stored vs used

2020-02-12 Thread Kristof Coucke
I just found an interesting thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-February/024589.html I assume this is the case I’m dealing with. The question is, can I safely adapt the parameter bluestore_min_alloc_size_hdd and how will the system react? Is this backwards compat
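One caveat worth noting (hedged, but consistent with the BlueStore documentation): bluestore_min_alloc_size_hdd is baked into an OSD at creation time, so changing the option only affects OSDs deployed afterwards; existing OSDs would have to be recreated to pick it up. Checking the currently configured value (OSD id is illustrative):

  ceph daemon osd.0 config get bluestore_min_alloc_size_hdd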

[ceph-users] CephFS hangs with access denied

2020-02-12 Thread Dietmar Rieder
Hi, we sometimes lose access to our cephfs mount and get permission denied if we try to cd into it. This happens apparently only on some of our HPC cephfs-client nodes (fs mounted via kernel client) when they are busy with calculation and I/O. When we then manually force unmount the fs and remou
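A few commands that may show whether the MDS has evicted (and the OSDs blacklisted) the affected kernel clients (the MDS name is a placeholder):

  ceph health detail                  # e.g. "clients failing to respond to capability release"
  ceph daemon mds.<name> session ls   # run on the active MDS; shows client sessions and caps
  ceph osd blacklist ls               # evicted clients show up here until the entry expires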

[ceph-users] Re: Ceph Erasure Coding - Stored vs used

2020-02-12 Thread Janne Johansson
On Wed 12 Feb 2020 at 12:58, Kristof Coucke wrote: > For one of my pools I have 107TiB STORED and 298TiB USED. > This is strange, since I've configured erasure coding (6 data chunks, 3 > coding chunks). > So, in an ideal world this should result in approx. 160.5TiB USED. > > There are 473+M obje

[ceph-users] Re: Ceph Erasure Coding - Stored vs used

2020-02-12 Thread Simon Leinen
Kristof Coucke writes: > I have an issue on my Ceph cluster. > For one of my pools I have 107TiB STORED and 298TiB USED. > This is strange, since I've configured erasure coding (6 data chunks, 3 > coding chunks). > So, in an ideal world this should result in approx. 160.5TiB USED. > The question n

[ceph-users] Re: Ceph Erasure Coding - Stored vs used

2020-02-12 Thread Kristof Coucke
Hi Simon and Janne, Thanks for the reply. It seems indeed related to the bluestore_min_alloc_size. In an old thread I've also found the following: S3 object saving pipeline: - S3 object is divided into multipart shards by client. - RGW shards each multipart shard into rados objects of siz
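A rough back-of-the-envelope check, using the ~473M objects mentioned earlier in the thread and assuming the default 64 KiB bluestore_min_alloc_size_hdd and objects near the average size, suggests allocation padding alone can plausibly explain most of the gap:

  average object size  ~= 107 TiB / 473M objects  ~= 243 KiB
  per data chunk (k=6) ~= 243 KiB / 6             ~= 40 KiB
  allocated per chunk   = 64 KiB (rounded up to min_alloc_size)
  allocated per object  = 9 chunks x 64 KiB        = 576 KiB
  expected USED        ~= 473M x 576 KiB          ~= 254 TiB   (vs. ~160 TiB ideal, ~298 TiB observed)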

[ceph-users] PR #26095 experience (backported/cherry-picked to Nauilus)

2020-02-12 Thread Simon Leinen
We have been using RadosGW with Keystone integration for a couple of years, to allow users of our OpenStack-based IaaS to create their own credentials for our object store. This has caused us a fair amount of performance headaches. Last year, James Weaver (BBC) contributed a patch (PR #26095
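For context, a typical RadosGW/Keystone integration in ceph.conf looks roughly like the sketch below (section name, URL and roles are placeholders; the exact option set depends on the release):

  [client.rgw.gateway1]
  rgw_s3_auth_use_keystone = true
  rgw_keystone_url = https://keystone.example.org:5000
  rgw_keystone_api_version = 3
  rgw_keystone_accepted_roles = member, admin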

[ceph-users] Re: cephfs slow, howto investigate and tune mds configuration?

2020-02-12 Thread Yan, Zheng
On Wed, Feb 12, 2020 at 6:08 PM Marc Roos wrote: >> Say I think my cephfs is slow when I rsync to it, slower than it used >> to be. First of all, I do not get why it reads so much data. I assume >> the file attributes need to come from the mds server, so the rsync

[ceph-users] Re: RBD-mirror instabilities

2020-02-12 Thread Oliver Freyermuth
Dear Cephalopodians, for those on the list also fighting rbd-mirror process instabilities: with 14.2.7 (but maybe it was also present before, it does not happen often), I very rarely encounter a case where neither of the two described hacks I use works anymore, since "ceph daemon /var/run/cep
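When the admin socket stops responding, mirroring health can still be checked from the rbd CLI and the daemon restarted via systemd (pool name, socket path and unit instance below are placeholders):

  rbd mirror pool status <pool> --verbose                             # per-image replay state
  ceph daemon /var/run/ceph/ceph-client.rbd-mirror.<id>.asok help     # list socket commands, if it still answers
  systemctl restart ceph-rbd-mirror@<id>                              # last resort: bounce the daemon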

[ceph-users] Re: cephfs slow, howto investigate and tune mds configuration?

2020-02-12 Thread Marc Roos
>> Say I think my cephfs is slow when I rsync to it, slower than it used >> to be. First of all, I do not get why it reads so much data. I assume >> the file attributes need to come from the mds server, so the rsync >> backup should mostly cause writes not?

[ceph-users] Re: RBD-mirror instabilities

2020-02-12 Thread Jason Dillaman
On Wed, Feb 12, 2020 at 11:55 AM Oliver Freyermuth wrote: > > Dear Cephalopodians, > > for those on the list also fighting rbd mirror process instabilities: With > 14.2.7 (but maybe it was also present before, it does not happen often), > I very rarely encounter a case where none of the two descr

[ceph-users] Re: RBD-mirror instabilities

2020-02-12 Thread Oliver Freyermuth
Dear Jason, On 12.02.20 at 19:29, Jason Dillaman wrote: > On Wed, Feb 12, 2020 at 11:55 AM Oliver Freyermuth > wrote: >> >> Dear Cephalopodians, >> >> for those on the list also fighting rbd mirror process instabilities: With >> 14.2.7 (but maybe it was also present before, it does not happen o

[ceph-users] Re: RBD-mirror instabilities

2020-02-12 Thread Jason Dillaman
On Wed, Feb 12, 2020 at 2:53 PM Oliver Freyermuth wrote: > > Dear Jason, > > On 12.02.20 at 19:29, Jason Dillaman wrote: > > On Wed, Feb 12, 2020 at 11:55 AM Oliver Freyermuth > > wrote: > >> > >> Dear Cephalopodians, > >> > >> for those on the list also fighting rbd mirror process instabilities

[ceph-users] Re: CephFS hangs with access denied

2020-02-12 Thread Dietmar Rieder
Hi, now we got a kernel crash (Oops) probably related to my issue, since it all seems to start with a hung MDS (see attached dmesg from the crashed client and mds log from the mds server): [281202.923064] Oops: 0002 [#1] SMP [281202.924952] Modules linked in: fuse xt_multiport squashfs loop overlay(T)

[ceph-users] [ceph-user] SSD disk utilization high on ceph-12.2.12

2020-02-12 Thread Amit Ghadge
Hello All, On one of our Ceph data nodes all OSDs are 90-100% disk utilized; they are all SSD drives, and traffic is normal compared to the other data nodes. How can we debug this?
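A hedged first round of checks on the busy node (OSD id is a placeholder):

  ceph osd perf                            # commit/apply latency for every OSD in the cluster
  iostat -x 1                              # per-device %util, await and queue depth on the node
  ceph daemon osd.<id> dump_historic_ops   # slowest recent operations on a suspect OSD
  ceph daemon osd.<id> perf dump           # detailed OSD/BlueStore counters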