Re: [ceph-users] Ceph is now declared stable in Rook v0.9

2018-12-10 Thread Kai Wagner
Congrats to everyone. Seems like we're getting closer to ponies, rainbows and ice cream for everyone! ;-) On 12/11/18 12:15 AM, Mike Perez wrote: > Hey all, > > Great news, the Rook team has declared Ceph to be stable in v0.9! Great work > from both communities in collaborating to make this poss

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-10 Thread Christian Balzer
On Mon, 10 Dec 2018 21:42:46 -0500 Tyler Bishop wrote: > All 4 of these SSDs that I've converted to Bluestore are behaving this > way. I have around 300 of these drives in a very large production > cluster and do not have this type of behavior with Filestore. > And the same/similar loading (as i

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-10 Thread Ashley Merrick
Do they have a form of disk write cache? I have seen really bad performance on HDDs with the disk write cache enabled, with the same symptom: the disk showing 100% busy and high latency. Disabling it via hdparm -W0 has solved my issue and a few others'. On Tue, 11 Dec 2018 at 10:43 AM, Tyler Bishop < tyler.bis...@beyondhosting.net
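A quick way to check and toggle the drive's volatile write cache with hdparm; the device name below is an example, adjust for your system:

    # Show whether the drive's write cache is currently enabled
    hdparm -W /dev/sdb
    # Disable the write cache; on some SSDs this noticeably reduces sync-write latency
    hdparm -W0 /dev/sdb
    # Re-enable it if performance gets worse instead
    hdparm -W1 /dev/sdb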

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-10 Thread Tyler Bishop
All 4 of these SSDs that I've converted to Bluestore are behaving this way. I have around 300 of these drives in a very large production cluster and do not have this type of behavior with Filestore. On the Filestore setup these SSDs are partitioned with 20 GB for journal and 800 GB for data. The systems

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-10 Thread Christian Balzer
On Mon, 10 Dec 2018 21:01:31 -0500 Tyler Bishop wrote: > LVM | dm-0 | busy 101% | read 137 | write 1761 | > KiB/r 4 | KiB/w 30 | MBr/s 0.1 | MBw/s 5.3 | avq > 185.42 | avio 5.31 ms | > DSK | sdb | busy 100% | read 127 | write 1208

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-10 Thread Tyler Bishop
LVM | dm-0 | busy 101% | read 137 | write 1761 | KiB/r 4 | KiB/w 30 | MBr/s 0.1 | MBw/s 5.3 | avq 185.42 | avio 5.31 ms | DSK | sdb | busy 100% | read 127 | write 1208 | KiB/r 4 | KiB/w 32 | MBr/s 0.1 | MBw/s 3.9 | a

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-10 Thread Tyler Bishop
Older Crucial/Micron M500/M600.

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-10 Thread Christian Balzer
Hello, On Mon, 10 Dec 2018 20:43:40 -0500 Tyler Bishop wrote: > I don't think that's my issue here because I don't see any IO to justify the > latency. Unless the IO is minimal and it's ceph issuing a bunch of discards > to the SSD and that's causing it to slow down while doing that. > What does at

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-10 Thread Tyler Bishop
I don't think that's my issue here because I don't see any IO to justify the latency. Unless the IO is minimal and it's ceph issuing a bunch of discards to the SSD and that's causing it to slow down while doing that. The log isn't showing anything useful and I have most debugging disabled. On Mon, Dec
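One way to rule discards in or out is to ask a running OSD whether BlueStore discard is enabled at all; a sketch, assuming a release that exposes the bdev_enable_discard option (it defaults to off where present) and using osd.0 and /dev/sdb as placeholders:

    # Check whether BlueStore is configured to issue discards on this OSD
    ceph daemon osd.0 config get bdev_enable_discard
    # Watch the raw device's queue and latency while the OSD is busy
    iostat -x 1 /dev/sdb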

Re: [ceph-users] SLOW SSD's after moving to Bluestore

2018-12-10 Thread Mark Nelson
Hi Tyler, I think we had a user a while back who reported they had background deletion work going on after upgrading their OSDs from Filestore to Bluestore due to PGs having been moved around. Is it possible that your cluster is doing a bunch of work (deletion or otherwise) beyond the regul
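A hedged way to look for leftover PG cleanup is to check the OSD's placement-group counters over the admin socket; counter names vary by release, so treat this as a sketch (osd.0 is a placeholder):

    # Dump the OSD's perf counters and look at the PG bookkeeping entries (e.g. numpg, numpg_stray)
    ceph daemon osd.0 perf dump | grep -E 'numpg'
    # Cluster-wide view of any recovery/backfill still in flight
    ceph -s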

[ceph-users] SLOW SSD's after moving to Bluestore

2018-12-10 Thread Tyler Bishop
Hi, I have an SSD-only cluster that I recently converted from Filestore to Bluestore, and performance has totally tanked. It was fairly decent before, with only a little more latency than expected. Since converting to Bluestore the latency is extremely high, SECONDS. I am trying to d
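For reference, per-OSD latency can be pulled straight from the cluster to confirm which OSDs are in the seconds range; osd.0 below is an example:

    # Per-OSD commit and apply latency in milliseconds
    ceph osd perf
    # Recent slow operations on one suspect OSD, with timing breakdowns
    ceph daemon osd.0 dump_historic_ops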

Re: [ceph-users] move directories in cephfs

2018-12-10 Thread Andras Pataki
Moving data between pools when a file is moved to a different directory is most likely problematic - for example an inode can be hard-linked from two different directories that are in two different pools - then what happens to the file? Unix/POSIX semantics don't really specify a parent director

[ceph-users] Ceph is now declared stable in Rook v0.9

2018-12-10 Thread Mike Perez
Hey all, Great news, the Rook team has declared Ceph to be stable in v0.9! Great work from both communities in collaborating to make this possible. https://blog.rook.io/rook-v0-9-new-storage-backends-in-town-ab952523ec53 I am planning on demos at the Rook booth by deploying object/block/fs in Lu

Re: [ceph-users] Cephalocon Barcelona 2019 CFP now open!

2018-12-10 Thread Mike Perez
On Mon, Dec 10, 2018 at 8:05 AM Wido den Hollander wrote: > > > > On 12/10/18 5:00 PM, Mike Perez wrote: > > Hello everyone! > > > > It gives me great pleasure to announce the CFP for Cephalocon Barcelona > > 2019 is now open [1]! > > > > Cephalocon Barcelona aims to bring together more than 800 t

Re: [ceph-users] cephday berlin slides

2018-12-10 Thread Mike Perez
On Mon, Dec 10, 2018 at 3:07 AM stefan wrote: > > Quoting Marc Roos (m.r...@f1-outsourcing.eu): > > > > Are there videos available (MeerKat, CTDB)? > > Nope, no recordings were made during the day. > > > PS. Disk health prediction link is not working > > ^^ Mike, can you check / fix this? Thanks

Re: [ceph-users] move directories in cephfs

2018-12-10 Thread Zhenshi Zhou
Hi Marc, Actually, all the directories are in the same pool, cephfs_data. Does the "mv" operation take effect? Thanks. Marc Roos wrote on Mon, 10 Dec 2018 at 11:56 PM: > > > Except if you have different pools on these directories. Then the data > is not moved (copied), which I think should be done. This shoul

Re: [ceph-users] Pool Available Capacity Question

2018-12-10 Thread Jay Munsterman
Thanks, Konstantin. Just verified in the lab that Centos 7.6 clients can access the cephfs with upmap enabled. On Sun, Dec 9, 2018 at 11:43 PM Konstantin Shalygin wrote: > Our cluster is Luminous. Does anyone know the mapping of ceph client > version to CentOS kernel? It looks like Redhat has a
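For anyone else checking upmap compatibility before enabling it, the usual sequence (assuming all connected clients already report luminous-capable feature bits) looks roughly like this:

    # See which feature bits the connected clients and daemons report
    ceph features
    # Require luminous-capable clients; needed before upmap can be used
    ceph osd set-require-min-compat-client luminous
    # Switch the balancer to upmap mode and enable it
    ceph balancer mode upmap
    ceph balancer on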

Re: [ceph-users] Cephalocon Barcelona 2019 CFP now open!

2018-12-10 Thread Sage Weil
On Mon, 10 Dec 2018, Wido den Hollander wrote: > On 12/10/18 5:00 PM, Mike Perez wrote: > > Hello everyone! > > > > It gives me great pleasure to announce the CFP for Cephalocon Barcelona > > 2019 is now open [1]! > > > > Cephalocon Barcelona aims to bring together more than 800 technologists > >

Re: [ceph-users] Cephalocon Barcelona 2019 CFP now open!

2018-12-10 Thread Wido den Hollander
On 12/10/18 5:00 PM, Mike Perez wrote: > Hello everyone! > > It gives me great pleasure to announce the CFP for Cephalocon Barcelona > 2019 is now open [1]! > > Cephalocon Barcelona aims to bring together more than 800 technologists > and adopters from across the globe to showcase Ceph’s histo

[ceph-users] Cephalocon Barcelona 2019 CFP now open!

2018-12-10 Thread Mike Perez
Hello everyone! It gives me great pleasure to announce the CFP for Cephalocon Barcelona 2019 is now open [1]! Cephalocon Barcelona aims to bring together more than 800 technologists and adopters from across the globe to showcase Ceph’s history and its future, demonstrate real-world application

Re: [ceph-users] move directories in cephfs

2018-12-10 Thread Marc Roos
Except if you have different pools on these directories. Then the data is not moved (copied), which I think it should be. This should be changed, because no one will expect a symlink to the old pool. -Original Message- From: Jack [mailto:c...@jack.fr.eu.org] Sent: 10 December 20
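For context, per-directory pools in CephFS are controlled through the layout xattr, and a file keeps the layout it was created with even after an mv; a sketch, assuming a client mount at /mnt/cephfs and a data pool named cephfs_ssd that has already been added to the filesystem with ceph fs add_data_pool:

    # Inspect a directory's layout (only present if one was explicitly set on it)
    getfattr -n ceph.dir.layout /mnt/cephfs/parent
    # Point files created under this directory from now on at a different data pool
    setfattr -n ceph.dir.layout.pool -v cephfs_ssd /mnt/cephfs/parent
    # Existing files keep their old layout; an mv into this directory does not rewrite their data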

Re: [ceph-users] move directories in cephfs

2018-12-10 Thread Zhenshi Zhou
Hi Jack, That means I simply execute the "mv" operation on the client side and modify the auth permissions on the server side. Just like a "mv" on a normal filesystem, only the metadata (inode) changes? I will give it a try, thanks! :) Jack wrote on Mon, 10 Dec 2018 at 10:14 PM: > Having the / mounted somewhere, y

Re: [ceph-users] How to troubleshoot rsync to cephfs via nfs-ganesha stalling

2018-12-10 Thread Daniel Gryniewicz
This isn't something I've seen before. rsync generally works fine, even over cephfs. More inline. On 12/09/2018 09:42 AM, Marc Roos wrote: This rsync command fails and makes the local nfs unavailable (Have to stop nfs-ganesha, kill all rsync processes on the client and then start nfs-ganesh

Re: [ceph-users] move directories in cephfs

2018-12-10 Thread Jack
Having the / mounted somewhere, you can simply "mv" directories around On 12/10/2018 02:59 PM, Zhenshi Zhou wrote: > Hi, > > Is there a way I can move sub-directories outside the directory. > For instance, a directory /parent contains 3 sub-directories > /parent/a, /parent/b, /parent/c. All these

[ceph-users] move directories in cephfs

2018-12-10 Thread Zhenshi Zhou
Hi, Is there a way I can move sub-directories outside their parent directory? For instance, a directory /parent contains 3 sub-directories /parent/a, /parent/b, /parent/c. All these directories have huge amounts of data in them. I'm going to move /parent/b to /b. I don't want to copy the whole directory outside because it w
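As the replies in this thread point out, a rename inside the same CephFS filesystem only touches metadata; a minimal sketch, assuming the filesystem root is mounted at /mnt/cephfs (the monitor address and credentials are examples):

    # Mount the CephFS root with the kernel client
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # Rename within the same filesystem: only the directory entry moves, the data is untouched
    mv /mnt/cephfs/parent/b /mnt/cephfs/b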

Re: [ceph-users] cephday berlin slides

2018-12-10 Thread stefan
Quoting Marc Roos (m.r...@f1-outsourcing.eu): > > Are there videos available (MeerKat, CTDB)? Nope, no recordings were made during the day. > PS. Disk health prediction link is not working ^^ Mike, can you check / fix this? Gr. Stefan -- | BIT BV http://www.bit.nl/ Kamer van Kooph

[ceph-users] yet another deep-scrub performance topic

2018-12-10 Thread Vladimir Prokofev
Hello list. Deep scrub totally kills cluster performance. First of all, it takes several minutes to complete: 2018-12-09 01:39:53.857994 7f2d32fde700 0 log_channel(cluster) log [DBG] : 4.75 deep-scrub starts 2018-12-09 01:46:30.703473 7f2d32fde700 0 log_channel(cluster) log [DBG] : 4.75 deep-scr
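Common knobs for taking the edge off deep scrubs can be injected into running OSDs; the values below are illustrative, not recommendations:

    # Pause between scrub chunks so client I/O can interleave
    ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
    # Restrict scrubbing to off-peak hours
    ceph tell osd.* injectargs '--osd_scrub_begin_hour 1 --osd_scrub_end_hour 6'
    # Skip starting new scrubs when the host load is already above this threshold
    ceph tell osd.* injectargs '--osd_scrub_load_threshold 0.3'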

Re: [ceph-users] cephday berlin slides

2018-12-10 Thread Marc Roos
Are there videos available (MeerKat, CTDB)? PS. The disk health prediction link is not working -Original Message- From: Stefan Kooman [mailto:ste...@bit.nl] Sent: 10 December 2018 11:22 To: Mike Perez Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] cephday berlin slides Quoti

Re: [ceph-users] [cephfs] Kernel outage / timeout

2018-12-10 Thread Jack
There is only a simple iptables conntrack there. Could it be something related to a timeout? /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established is currently 7875. Best regards, On 12/05/2018 02:47 AM, Yan, Zheng wrote: > > This is more like a network issue. Check if there is a firewall bet
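For comparison, the usual kernel default for that timeout is 432000 seconds (5 days), so a value as low as 7875 could in principle expire long-idle sessions; checking and raising it looks roughly like this:

    # Current value in seconds
    sysctl net.netfilter.nf_conntrack_tcp_timeout_established
    # Raise it back toward the kernel default
    sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=432000
    # Persist the setting across reboots
    echo 'net.netfilter.nf_conntrack_tcp_timeout_established = 432000' >> /etc/sysctl.d/99-conntrack.conf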

Re: [ceph-users] cephday berlin slides

2018-12-10 Thread Stefan Kooman
Quoting Mike Perez (mipe...@redhat.com): > Hi Serkan, > > I'm currently working on collecting the slides to have them posted to > the Ceph Day Berlin page as Lenz mentioned they would show up. I will > notify once the slides are available on mailing list/twitter. Thanks! FYI: The Ceph Day Berlin

Re: [ceph-users] Performance Problems

2018-12-10 Thread Stefan Kooman
Quoting Robert Sander (r.san...@heinlein-support.de): > On 07.12.18 18:33, Scharfenberg, Buddy wrote: > > > We have 3 nodes set up, 1 with several large drives, 1 with a handful of > > small ssds, and 1 with several nvme drives. > > This is a very unusual setup. Do you really have all your HDDs i

Re: [ceph-users] Performance Problems

2018-12-10 Thread Marc Roos
I think this is an April Fools' Day joke from someone who did not set their clock correctly. -Original Message- From: Robert Sander [mailto:r.san...@heinlein-support.de] Sent: 10 December 2018 09:49 To: ceph-users@lists.ceph.com Subject: Re: [ceph-users] Performance Problems On 07.12.1

Re: [ceph-users] Performance Problems

2018-12-10 Thread Robert Sander
On 07.12.18 18:33, Scharfenberg, Buddy wrote: > We have 3 nodes set up, 1 with several large drives, 1 with a handful of > small ssds, and 1 with several nvme drives. This is a very unusual setup. Do you really have all your HDDs in one node, the SSDs in another and NVMe in the third? How do you