Re: [ceph-users] linuxcon north america, ceph bluestore slides

2016-08-31 Thread Brian ::
Amazing improvements to performance in the preview now... I wonder whether there will be a filestore --> bluestore upgrade path... On Wed, Aug 31, 2016 at 6:32 AM, Alexandre DERUMIER wrote: > Hi, > > Here are the slides of the ceph bluestore presentation > > http://events.linuxfoundation.org/sites/events/f

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-31 Thread Nick Fisk
From: w...@globe.de [mailto:w...@globe.de] Sent: 30 August 2016 18:40 To: n...@fisk.me.uk; 'Alex Gorbachev' Cc: 'Horace Ng' Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance Hi Nick, here are my answers and questions... On 30.08.16 at 19:05, Nick Fisk wrote:

Re: [ceph-users] linuxcon north america, ceph bluestore slides

2016-08-31 Thread Wido den Hollander
> On 31 August 2016 at 9:51, "Brian ::" wrote: > > > Amazing improvements to performance in the preview now.. I wonder Indeed, great work! > will there be a filestore --> bluestore upgrade path... > Yes and no. Since the OSD's API doesn't change you can 'simply': 1. Shut down OSD 2. Wi
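
Roughly, the per-OSD cycle would look something like the sketch below, letting the cluster backfill between OSDs (the OSD id and device name are placeholders, and ceph-disk's --bluestore flag was still experimental at this point, so treat it as illustrative rather than a verified procedure):

  $ ceph osd out 12                          # let backfill drain the OSD
  $ systemctl stop ceph-osd@12
  $ ceph osd crush remove osd.12
  $ ceph auth del osd.12
  $ ceph osd rm 12
  $ ceph-disk zap /dev/sdX                   # wipe the old filestore disk
  $ ceph-disk prepare --bluestore /dev/sdX   # re-create the OSD on bluestore
  # wait for HEALTH_OK before repeating on the next OSD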

Re: [ceph-users] Ceph + VMware + Single Thread Performance

2016-08-31 Thread Nick Fisk
From: w...@globe.de [mailto:w...@globe.de] Sent: 31 August 2016 08:56 To: n...@fisk.me.uk; 'Alex Gorbachev' ; 'Horace Ng' Subject: Re: [ceph-users] Ceph + VMware + Single Thread Performance Nick, what do you think about Infiniband? I have read that with Infiniband the latency is around 1.2 µs

Re: [ceph-users] ceph-fuse "Transport endpoint is not connected" on Jewel 10.2.2

2016-08-31 Thread Dennis Kramer (DBS)
Hi all, I just want to confirm that the patch works in our environment. Thanks! On 08/30/2016 02:04 PM, Dennis Kramer (DBS) wrote: > Awesome Goncalo, that is very helpful. > > Cheers. > > On 08/30/2016 01:21 PM, Goncalo Borges wrote: >> Hi Dennis. >> >> That is the first issue we saw and has no

[ceph-users] how to print the incremental osdmap

2016-08-31 Thread Zhongyan Gu
Hi kefu, A quick question about the incremental osdmap. I used ceph-objectstore-tool with the op get-inc-osdmap to get the specific inc osdmap, but how do I view the content of the incremental osdmap? It seems osdmaptool --print incremental-osdmap can't output the content. Thanks Zhongyan
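
For completeness, one approach that is often suggested for raw incremental maps is ceph-dencoder rather than osdmaptool; a minimal sketch, assuming the inc map was already dumped to a file with ceph-objectstore-tool (the path and epoch are placeholders):

  $ ceph-dencoder type OSDMap::Incremental import /tmp/inc-osdmap.1234 decode dump_json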

Re: [ceph-users] cephfs page cache

2016-08-31 Thread Sean Redmond
It seems using the 'sync' mount option on the server uploader01 is also a valid workaround. Is it a problem that the metadata is available to other cephfs clients ahead of the file contents being flushed by the client doing the write? I think having an invalid page cache of zeros is a problem b
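
For reference, the workaround amounts to mounting CephFS with synchronous writes on the uploading host only; a rough sketch with placeholder monitor address, mount point and credentials:

  $ mount -t ceph mon1:6789:/ /mnt/cephfs -o name=uploader,secretfile=/etc/ceph/uploader.secret,sync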

[ceph-users] UID reset to root after chgrp on CephFS Ganesha export

2016-08-31 Thread Wido den Hollander
Hi, I have a CephFS filesystem which is re-exported through NFS Ganesha (v2.3.0) with Ceph 10.2.2 The export works fine, but when calling a chgrp on a file the UID is set to root. Example list of commands: $ chown www-data:www-data myfile That works, file is now owned by www-data/www-data $
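
The reproduction presumably continues along these lines on the NFS client (myfile is just a placeholder):

  $ chown www-data:www-data myfile    # ownership is www-data:www-data, as expected
  $ chgrp www-data myfile             # after this the UID flips back to root
  $ stat -c '%U:%G' myfile            # shows root:www-data instead of www-data:www-data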

Re: [ceph-users] UID reset to root after chgrp on CephFS Ganesha export

2016-08-31 Thread John Spray
On Wed, Aug 31, 2016 at 11:23 AM, Wido den Hollander wrote: > Hi, > > I have a CephFS filesystem which is re-exported through NFS Ganesha (v2.3.0) > with Ceph 10.2.2 > > The export works fine, but when calling a chgrp on a file the UID is set to > root. > > Example list of commands: > > $ chown

Re: [ceph-users] UID reset to root after chgrp on CephFS Ganesha export

2016-08-31 Thread Wido den Hollander
> On 31 August 2016 at 12:42, John Spray wrote: > > > On Wed, Aug 31, 2016 at 11:23 AM, Wido den Hollander wrote: > > Hi, > > > > I have a CephFS filesystem which is re-exported through NFS Ganesha > > (v2.3.0) with Ceph 10.2.2 > > > > The export works fine, but when calling a chgrp on a f

Re: [ceph-users] UID reset to root after chgrp on CephFS Ganesha export

2016-08-31 Thread Daniel Gryniewicz
I believe this is a Ganesha bug, as discussed on the Ganesha list. Daniel On 08/31/2016 06:55 AM, Wido den Hollander wrote: Op 31 augustus 2016 om 12:42 schreef John Spray : On Wed, Aug 31, 2016 at 11:23 AM, Wido den Hollander wrote: Hi, I have a CephFS filesystem which is re-exported th

Re: [ceph-users] cephfs page cache

2016-08-31 Thread Yan, Zheng
On Wed, Aug 31, 2016 at 12:49 AM, Sean Redmond wrote: > Hi, > > I have been able to pick through the process a little further and replicate > it via the command line. The flow looks like this: > > 1) The user uploads an image to the webserver 'uploader01' and it gets > written to a path such

[ceph-users] Antw: Re: rbd cache mode with qemu

2016-08-31 Thread Steffen Weißgerber
>>> Loris Cuoghi wrote on Tuesday, 30 August 2016 at 16:34: > Hello, > Hi Loris, thank you for your answer. > On 30/08/2016 at 14:08, Steffen Weißgerber wrote: >> Hello, >> >> after correcting the configuration for different qemu VMs with rbd disks >> (we removed the cache=writethrou
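
For context, the cache mode for an rbd disk is set per drive on the qemu command line (or via the libvirt <driver cache='...'/> attribute); an illustrative drive definition with placeholder pool, image and client names:

  -drive file=rbd:rbd/vm-disk-1:id=libvirt,format=raw,if=virtio,cache=writeback

With cache=writeback qemu lets librbd's rbd cache do the caching; with cache=none or cache=writethrough librbd is told to disable its cache or write through it.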

Re: [ceph-users] Antw: Re: rbd cache mode with qemu

2016-08-31 Thread Alexandre DERUMIER
>>Meanwhile I tried to update the viostor driver within the VM (a W2k8) >>but that >>results in a bluescreen. >>When booting via the recovery console and loading the new driver from a >>current >>qemu driver ISO the disks are all in writeback mode. >>So maybe the cache mode depends on the iodriver with

Re: [ceph-users] build and Compile ceph in development mode takes an hour

2016-08-31 Thread agung Laksono
Hi Brad, After exploring Ceph, I found that sometimes when I run the *make* command without removing the build folder, it ends with this error: /home/agung/project/samc/ceph/build/bin/init-ceph: ceph conf ./ceph.conf not found; system is not configured. rm -f core* ip 127.0.0.1 port 6789 NOTE: hostna

Re: [ceph-users] build and Compile ceph in development mode takes an hour

2016-08-31 Thread Lenz Grimmer
On 08/18/2016 12:42 AM, Brad Hubbard wrote: > On Thu, Aug 18, 2016 at 1:12 AM, agung Laksono > wrote: >> >> Is there a way to make the compiling process be faster? something >> like only compile a particular code that I change. > > Sure, just use the same build directory and run "make" again af
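
In other words, incremental rebuilds just reuse the existing cmake build directory; a minimal sketch, assuming the usual ./do_cmake.sh layout (the target name is only an example):

  $ cd ceph/build
  $ make -j$(nproc) ceph-osd     # rebuild only the binary you touched
  $ make -j$(nproc)              # or rebuild everything affected by the change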

[ceph-users] Antw: Re: Antw: Re: rbd cache mode with qemu

2016-08-31 Thread Steffen Weißgerber
>>> Alexandre DERUMIER wrote on Wednesday, 31 August 2016 at 16:10: >> >Meanwhile I tried to update the viostor driver within the VM (a W2k8) >>>but that >>>results in a bluescreen. > >>>When booting via the recovery console and loading the new driver from a >>>current >>>qemu driver ISO the disk

[ceph-users] /var/lib/mysql, CephFS vs RBD

2016-08-31 Thread Lazuardi Nasution
Hi, I'm looking for the pros and cons of mounting /var/lib/mysql on CephFS or RBD to get the best performance. MySQL stores its data as files in most configurations, but the I/O is effectively block access because the files stay open for as long as MySQL is running. This gives us both options for storing the data files. For RBD
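
For the RBD side of the comparison, the usual shape would be something like the following with the kernel client (the image name, size and filesystem are only placeholders for illustration):

  $ rbd create rbd/mysql-data --size 102400     # size in MB, i.e. ~100 GB
  $ rbd map rbd/mysql-data                      # returns a device such as /dev/rbd0
  $ mkfs.xfs /dev/rbd0
  $ mount /dev/rbd0 /var/lib/mysql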

Re: [ceph-users] cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_digest"

2016-08-31 Thread Sean Redmond
I have updated the tracker with some log extracts as I seem to be hitting this or a very similar issue. I was unsure of the correct syntax for the command ceph-objectstore-tool to try and extract that information. On Wed, Aug 31, 2016 at 5:56 AM, Brad Hubbard wrote: > > On Wed, Aug 31, 2016 at

Re: [ceph-users] /var/lib/mysql, CephFS vs RBD

2016-08-31 Thread RDS
In my testing, using RBD-NBD is faster than using RBD or CephFS. For a MySQL/sysbench OLTP test with 25 threads, over a 40G network between the client and Ceph, here are some of my results: Using ceph-rbd: transactions per sec: 8620 using ceph rbd-nbd: transactions per sec: 9359 using c
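
For anyone wanting to reproduce the rbd-nbd numbers, the image is mapped through the userspace rbd-nbd helper instead of the kernel rbd module, so librbd (and its cache) sits in the data path; a minimal sketch with placeholder names:

  $ rbd-nbd map rbd/mysql-data     # returns a device such as /dev/nbd0
  $ mkfs.xfs /dev/nbd0
  $ mount /dev/nbd0 /var/lib/mysql
  $ rbd-nbd unmap /dev/nbd0        # when done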

Re: [ceph-users] /var/lib/mysql, CephFS vs RBD

2016-08-31 Thread Lazuardi Nasution
Hi, Thank you for your opinion. I don't know if RBD-NBD is supported by OpenStack since my environment is OpenStack. What file system do you use in your test for RBD and RBD-NBD? Best regards, On Wed, Aug 31, 2016 at 10:11 PM, RDS wrote: > In my testing, using RBD-NBD is faster than using RBD

Re: [ceph-users] cephfs page cache

2016-08-31 Thread Sean Redmond
I am not sure how to tell? Server1 and Server2 mount the ceph file system using kernel client 4.7.2 and I can replicate the problem using '/usr/bin/sum' to read the file or a http GET request via a web server (apache). On Wed, Aug 31, 2016 at 2:38 PM, Yan, Zheng wrote: > On Wed, Aug 31, 2016 at

Re: [ceph-users] UID reset to root after chgrp on CephFS Ganesha export

2016-08-31 Thread Wido den Hollander
> On 31 August 2016 at 15:28, Daniel Gryniewicz wrote: > > > I believe this is a Ganesha bug, as discussed on the Ganesha list. > Ah, thanks. Do you maybe have a link or subject so I can chime in? Wido > Daniel > > On 08/31/2016 06:55 AM, Wido den Hollander wrote: > > > >> Op 31 augustu

Re: [ceph-users] UID reset to root after chgrp on CephFS Ganesha export

2016-08-31 Thread Daniel Gryniewicz
On 08/31/2016 02:15 PM, Wido den Hollander wrote: On 31 August 2016 at 15:28, Daniel Gryniewicz wrote: I believe this is a Ganesha bug, as discussed on the Ganesha list. Ah, thanks. Do you maybe have a link or subject so I can chime in? Wido https://sourceforge.net/p/nfs-ganesha/mai

Re: [ceph-users] Jewel - frequent ceph-osd crashes

2016-08-31 Thread Gregory Farnum
On Tue, Aug 30, 2016 at 2:17 AM, Andrei Mikhailovsky wrote: > Hello > > I've got a small cluster of 3 osd servers and 30 osds between them running > Jewel 10.2.2 on Ubuntu 16.04 LTS with stock kernel version 4.4.0-34-generic. > > I am experiencing rather frequent osd crashes, which tend to happen

Re: [ceph-users] /var/lib/mysql, CephFS vs RBD

2016-08-31 Thread RDS
xfs > On Aug 31, 2016, at 12:14 PM, Lazuardi Nasution > wrote: > > Hi, > > Thank you for your opinion. I don't know if RBD-NBD is supported by OpenStack > since my environment is OpenStack. What file system do you use in your test > for RBD and RBD-NBD? > > Best regards, > > On Wed, Aug 31,

[ceph-users] Slow Request on OSD

2016-08-31 Thread Reed Dier
After a power failure left our jewel cluster crippled, I have hit a sticking point in attempted recovery. Out of 8 osd’s, we likely lost 5-6, trying to salvage what we can. In addition to rados pools, we were also using CephFS, and the cephfs.metadata and cephfs.data pools likely lost plenty of

Re: [ceph-users] Jewel - frequent ceph-osd crashes

2016-08-31 Thread Wido den Hollander
> On 31 August 2016 at 22:14, Gregory Farnum wrote: > > > On Tue, Aug 30, 2016 at 2:17 AM, Andrei Mikhailovsky > wrote: > > Hello > > > > I've got a small cluster of 3 osd servers and 30 osds between them running > > Jewel 10.2.2 on Ubuntu 16.04 LTS with stock kernel version 4.4.0-34-gener

Re: [ceph-users] Slow Request on OSD

2016-08-31 Thread Wido den Hollander
> On 31 August 2016 at 22:56, Reed Dier wrote: > > > After a power failure left our jewel cluster crippled, I have hit a sticking > point in attempted recovery. > > Out of 8 osd’s, we likely lost 5-6, trying to salvage what we can. > That's probably too much. What do you mean by lost? Is XFS

Re: [ceph-users] Slow Request on OSD

2016-08-31 Thread Reed Dier
Multiple XFS corruptions, multiple leveldb issues. It looked to be the result of write cache settings, which have been adjusted now. You’ll see below that there are tons of PGs in bad states; it was slowly but surely bringing the number of bad PGs down, but it seems to have hit a brick wall with t

Re: [ceph-users] cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_digest"

2016-08-31 Thread Brad Hubbard
On Thu, Sep 1, 2016 at 1:08 AM, Sean Redmond wrote: > I have updated the tracker with some log extracts as I seem to be hitting > this or a very similar issue. I've updated the tracker asking for "rados list-inconsistent-obj" from the relevant pgs. That should give us a better idea of the nature
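
For reference, the commands in question take a pool name and a PG id respectively; something like the following, where the metadata pool name and PG id are only placeholders:

  $ rados list-inconsistent-pg cephfs_metadata
  $ rados list-inconsistent-obj 5.3d --format=json-pretty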

[ceph-users] HitSet - memory requirement

2016-08-31 Thread Kjetil Jørgensen
Hi, http://docs.ceph.com/docs/master/rados/operations/cache-tiering/ states > > Note A larger hit_set_count results in more RAM consumed by the ceph-osd > process. By how much, and of what order: KB, MB, or GB? After some spelunking I found osd_hit_set_max_size; is it fair to make the following a
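
For reference, the knobs being discussed are per-pool settings on the cache pool; an illustrative sketch only, with placeholder pool name and values:

  $ ceph osd pool set cachepool hit_set_type bloom
  $ ceph osd pool set cachepool hit_set_count 4
  $ ceph osd pool set cachepool hit_set_period 1200
  $ ceph osd pool set cachepool hit_set_fpp 0.05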

Re: [ceph-users] how to debug pg inconsistent state - no ioerrors seen

2016-08-31 Thread Goncalo Borges
Hi Kenneth, All Just an update for completeness on this topic. We have been hit again by this issue. I have been discussing it with Brad (RH staff) in another ML thread, and I have opened a tracker issue: http://tracker.ceph.com/issues/17177 I believe this is a bug since there are other peop

[ceph-users] the reweight value of OSD is always 1

2016-08-31 Thread 한승진
Hi Cephers! The reweight value of an OSD is always 1 when we create and activate an OSD daemon. I use the ceph-deploy tool whenever I deploy a ceph cluster. Is there a default reweight value in the ceph-deploy tool? Can we adjust the reweight value when we activate an OSD daemon? ID WEIGHT TYPE NAME

Re: [ceph-users] the reweight value of OSD is always 1

2016-08-31 Thread Henrik Korkuc
Hey, it is normal for the reweight value to be 1. You (with "ceph osd reweight OSDNUM newweight") or "ceph osd reweight-by-utilization" can decrease it to move some PGs off that OSD. The value that usually differs and depends on disk size is "weight". On 16-08-31 22:06, 한승진 wrote: Hi Cephers! The
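
To make the distinction concrete, a rough sketch of the commands involved (OSD id, values and threshold are placeholders):

  $ ceph osd reweight 12 0.85                 # temporary override in the 0-1 range
  $ ceph osd crush reweight osd.12 1.819      # CRUSH weight, conventionally the disk size in TiB
  $ ceph osd reweight-by-utilization 110      # lower reweight on OSDs above 110% of average utilisation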