Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-15 Thread Stefan Priebe - Profihost AG
Hello list, I also tested the current upstream/luminous branch and it happens as well. A clean install works fine. It only happens on upgraded bluestore OSDs. Greets, Stefan. On 14.01.19 at 20:35, Stefan Priebe - Profihost AG wrote: > while trying to upgrade a cluster from 12.2.8 to 12.2.10 I'm expe

[ceph-users] ceph-osd processes restart during Luminous -> Mimic upgrade on CentOS 7

2019-01-15 Thread Wido den Hollander
Hi, I'm in the middle of upgrading a 12.2.8 cluster to 13.2.4 and I've noticed that during the Yum/RPM upgrade the OSDs are being restarted. Jan 15 11:24:25 x yum[2348259]: Updated: 2:ceph-base-13.2.4-0.el7.x86_64 Jan 15 11:24:47 x systemd[1]: Stopped target ceph target allowing to start/

Re: [ceph-users] ceph-osd processes restart during Luminous -> Mimic upgrade on CentOS 7

2019-01-15 Thread Dan van der Ster
Hi Wido, `rpm -q --scripts ceph-selinux` will tell you why. It was the same from 12.2.8 to 12.2.10: http://tracker.ceph.com/issues/21672 And the problem is worse than you described, because the daemons are even restarted before all the package files have been updated. Our procedure on these upg
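For anyone following the thread: a minimal sketch of the pre-upgrade check and safety flags being discussed here (one possible sequence, not Dan's full procedure, which is cut off above):

    # show the RPM scriptlets that restart the daemons (the ceph-selinux %post)
    rpm -q --scripts ceph-selinux | less

    # set safety flags before the package upgrade restarts OSDs mid-update
    ceph osd set noout
    ceph osd set norebalance
    yum update 'ceph*'        # daemons may still be restarted during this step
    ceph osd unset norebalance
    ceph osd unset noout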

Re: [ceph-users] ceph-osd processes restart during Luminous -> Mimic upgrade on CentOS 7

2019-01-15 Thread Wido den Hollander
On 1/15/19 11:39 AM, Dan van der Ster wrote: > Hi Wido, > > `rpm -q --scripts ceph-selinux` will tell you why. > > It was the same from 12.2.8 to 12.2.10: http://tracker.ceph.com/issues/21672 > Thanks for pointing it out! > And the problem is worse than you described, because the daemons ar

[ceph-users] samsung sm863 vs cephfs rep.1 pool performance

2019-01-15 Thread Marc Roos
Is this result to be expected from CephFS, when comparing to a native SSD speed test? [results table truncated by the archive: 4k and 1024k random/sequential reads and writes, each reporting size, lat, iops and kB/s or MB/s]
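The results table is cut off by the archive; for context, numbers in that shape typically come from fio jobs along these lines (a sketch only -- file name, size and runtime are assumptions, not Marc's actual job):

    # 4k random read against a file on the CephFS mount (paths/sizes are examples)
    fio --name=4k-randread --filename=/mnt/cephfs/fio.test --size=4G \
        --bs=4k --rw=randread --ioengine=libaio --direct=1 \
        --iodepth=1 --numjobs=1 --runtime=60 --time_based --group_reporting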

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-15 Thread Marc Roos
I upgraded this weekend from 12.2.8 to 12.2.10 without such issues (OSDs are idle) -Original Message- From: Stefan Priebe - Profihost AG [mailto:s.pri...@profihost.ag] Sent: 15 January 2019 10:26 To: ceph-users@lists.ceph.com Cc: n.fahldi...@profihost.ag Subject: Re: [ceph-users] s

[ceph-users] Best practice creating pools / rbd images

2019-01-15 Thread Thomas
Hi, my use case for Ceph is serving as a central backup storage. This means I will back up multiple databases into the Ceph storage cluster. This is my question: What is the best practice for creating pools & images? Should I create multiple pools, i.e. one pool per database? Or should I create a single
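Whichever layout wins out, the mechanics are the same either way; a minimal sketch, with pool name, PG count and image sizes as placeholders (whether RBD images or CephFS suits database backups better depends on the backup tooling):

    # one pool holding one RBD image per database (names and sizes are examples)
    ceph osd pool create backups 64
    ceph osd pool application enable backups rbd
    rbd create backups/db1 --size 500G
    rbd create backups/db2 --size 500G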

Re: [ceph-users] mds0: Metadata damage detected

2019-01-15 Thread Yan, Zheng
On Tue, Jan 15, 2019 at 3:51 PM Sergei Shvarts wrote: > > Hello ceph users! > > A couple of days ago I've got a ceph health error - mds0: Metadata damage > detected. > Overall ceph cluster is fine: all pgs are clean, all osds are up and in, no > big problems. > Looks like there is not much infor
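The reply is truncated, but the usual starting point for "mds0: Metadata damage detected" is the MDS damage table; a sketch, assuming rank 0 as in the health message:

    # list the damage entries the MDS has recorded
    ceph tell mds.0 damage ls
    # once the affected metadata has been dealt with, entries can be cleared:
    # ceph tell mds.0 damage rm <damage_id>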

Re: [ceph-users] cephfs kernel client instability

2019-01-15 Thread Andras Pataki
An update on our cephfs kernel client troubles. After doing some heavier testing with a newer kernel (4.19.13), it seems like it also gets into a bad state when it can't connect to the monitors (all back-end processes are on 12.2.8): Jan 15 08:49:00 mon5 kernel: libceph: mon1 10.128.150.11:6789 ses

Re: [ceph-users] rocksdb mon stores growing until restart

2019-01-15 Thread Dan van der Ster
On Wed, Sep 19, 2018 at 7:01 PM Bryan Stillwell wrote: > > > On 08/30/2018 11:00 AM, Joao Eduardo Luis wrote: > > > On 08/30/2018 09:28 AM, Dan van der Ster wrote: > > > Hi, > > > Is anyone else seeing rocksdb mon stores slowly growing to >15GB, > > > eventually triggering the 'mon is using a lot
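For reference, the store size behind that warning can be checked on a monitor host, and a compaction can be triggered on demand; a sketch, assuming the default mon data path:

    # size of the RocksDB store backing a monitor (default path assumed)
    du -sh /var/lib/ceph/mon/*/store.db

    # ask one monitor to compact its store online
    ceph tell mon.$(hostname -s) compact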

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-15 Thread Stefan Priebe - Profihost AG
On 15.01.19 at 12:45, Marc Roos wrote: > > I upgraded this weekend from 12.2.8 to 12.2.10 without such issues > (OSDs are idle) It turns out this was a kernel bug. Updating to a newer kernel has solved this issue. Greets, Stefan > -Original Message- > From: Stefan Priebe - Pro

Re: [ceph-users] slow requests and high i/o / read rate on bluestore osds after upgrade 12.2.8 -> 12.2.10

2019-01-15 Thread Mark Nelson
On 1/15/19 9:02 AM, Stefan Priebe - Profihost AG wrote: On 15.01.19 at 12:45, Marc Roos wrote: I upgraded this weekend from 12.2.8 to 12.2.10 without such issues (OSDs are idle) It turns out this was a kernel bug. Updating to a newer kernel has solved this issue. Greets, Stefan Hi

[ceph-users] Recommendations for sharing a file system to a heterogeneous client network?

2019-01-15 Thread Ketil Froyn
Hi, I'm pretty new to Ceph - pardon the newbie question. I've done a bit of reading and searching, but I haven't seen an answer to this yet. Is anyone using ceph to power a filesystem shared among a network of Linux, Windows and Mac clients? How have you set it up? Is there a mature Windows drive

Re: [ceph-users] Recommendations for sharing a file system to a heterogeneous client network?

2019-01-15 Thread Robert Sander
Hi Ketil, use Samba/CIFS with multiple gateway machines clustered with CTDB. CephFS can be mounted with POSIX ACL support. Slides from my last Ceph Day talk are available here: https://www.slideshare.net/Inktank_Ceph/ceph-day-berlin-unlimited-fileserver-with-samba-ctdb-and-cephfs Regards -- Rob
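As a rough illustration of the gateway side of that setup, a minimal smb.conf share using Samba's vfs_ceph module (share name, path and cephx user are placeholders; the CTDB clustering itself needs separate configuration):

    [cephfs-share]
        # path is interpreted inside CephFS by vfs_ceph, not as a local directory
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no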

Re: [ceph-users] CEPH_FSAL Nfs-ganesha

2019-01-15 Thread Patrick Donnelly
On Mon, Jan 14, 2019 at 7:11 AM Daniel Gryniewicz wrote: > > Hi. Welcome to the community. > > On 01/14/2019 07:56 AM, David C wrote: > > Hi All > > > > I've been playing around with the nfs-ganesha 2.7 exporting a cephfs > > filesystem, it seems to be working pretty well so far. A few questions:
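For anyone trying the same setup, the nfs-ganesha side of a CephFS export looks roughly like this (a sketch of a ganesha.conf EXPORT block; export ID, pseudo path and cephx user are placeholders):

    EXPORT {
        Export_ID = 100;
        Path = /;                 # path within CephFS
        Pseudo = /cephfs;         # NFSv4 pseudo path seen by clients
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;
            User_Id = "ganesha";  # cephx user assumed to exist
        }
    }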

Re: [ceph-users] Recommendations for sharing a file system to a heterogeneous client network?

2019-01-15 Thread Ketil Froyn
Robert, Thanks, this is really interesting. Do you also have any details on how a solution like this performs? I've been reading a thread about samba/cephfs performance, and the stats aren't great - especially when creating/deleting many files - but being a rookie, I'm not 100% clear on the hardwa

Re: [ceph-users] Bluestore device’s device selector for Samsung NVMe

2019-01-15 Thread Vitaliy Filippov
Try lspci -vs and look for `Capabilities: [148] Device Serial Number 00-02-c9-03-00-4f-68-7e` in the output -- With best regards, Vitaliy Filippov
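Put together with the block device, that lookup goes roughly like this (device name and PCI slot are examples, not taken from the thread):

    # resolve the NVMe block device to its PCI slot, then dump its capabilities
    readlink -f /sys/block/nvme0n1/device/device
    lspci -vvs 0000:5e:00.0 | grep -A1 'Device Serial Number'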

Re: [ceph-users] Recommendations for sharing a file system to a heterogeneous client network?

2019-01-15 Thread Maged Mokhtar
Hi Ketil, I have not tested the creation/deletion, but the read/write performance was much better than the link you posted. Using a CTDB setup based on Robert's presentation, we were getting 800 MB/s write performance at queue depth = 1 and 2.2 GB/s at queue depth = 32 from a single CTDB/Samba g

Re: [ceph-users] cephfs kernel client instability

2019-01-15 Thread Kjetil Joergensen
Hi, you could try reducing "osd map message max"; some of the code paths that end up as -EIO (kernel: libceph: mon1 *** io error) are hit when a message exceeds include/linux/ceph/libceph.h:CEPH_MSG_MAX_{FRONT,MIDDLE,DATA}_LEN. This "worked for us" - YMMV. -KJ On Tue, Jan 15, 2019 at 6:14 AM Andras Pataki wrote: > An
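For completeness, that option can be set in ceph.conf or injected at runtime; a sketch (the value 10 is only an example, not a recommendation):

    # ceph.conf on the mons/OSDs
    [global]
        osd map message max = 10

    # or at runtime, without a restart
    # ceph tell osd.* injectargs '--osd_map_message_max 10'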

[ceph-users] Why does "df" on a cephfs not report same free space as "rados df" ?

2019-01-15 Thread David Young
Hi folks, My ceph cluster is used exclusively for cephfs, as follows: --- root@node1:~# grep ceph /etc/fstab node2:6789:/ /ceph ceph auto,_netdev,name=admin,secretfile=/root/ceph.admin.secret root@node1:~# --- "rados df" shows me the following: --- root@node1:~# rados df POOL_NAME USE
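The gap usually comes down to replication and to raw vs. stored space being counted differently by the two views; a sketch of the usual way to line the numbers up (pool name is an assumption):

    # cluster-wide and per-pool view, including MAX AVAIL
    ceph df detail

    # replication factor of the CephFS data pool (pool name assumed)
    ceph osd pool get cephfs_data size

    # what the kernel client reports for the mount
    df -h /ceph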

Re: [ceph-users] Ceph Call For Papers coordination pad

2019-01-15 Thread Kai Wagner
Hi all, just a friendly reminder to use this pad for CfP coordination. Right now it seems like I'm the only one who submitted something to Cephalocon and I can't believe that ;-) https://pad.ceph.com/p/cfp-coordination Thanks, Kai On 5/31/18 1:17 AM, Gregory Farnum wrote: > Short version: ht