Re: [ceph-users] Poor performance on all SSD cluster

2014-06-24 Thread Mark Kirkwood
On 24/06/14 18:15, Robert van Leeuwen wrote: All of which means that MySQL performance (looking at you binlog) may still suffer due to lots of small block size sync writes. Which begs the question: Anyone running a reasonably busy MySQL server on Ceph backed storage? We tried and it did not pe
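
For context, the small synchronous writes in question come from MySQL's durability settings; a minimal my.cnf sketch of the fully-durable configuration people usually benchmark with (illustrative, not taken from the thread):

    [mysqld]
    sync_binlog = 1                      # fsync the binary log on every commit
    innodb_flush_log_at_trx_commit = 1   # flush + fsync the InnoDB redo log on every commit
    innodb_flush_method = O_DIRECT       # bypass the page cache for data files

With these settings every commit costs at least one small synchronous write, which is exactly the pattern that exposes per-operation latency on RBD.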

Re: [ceph-users] Firefly OSDs : set_extsize: FSSETXATTR: (22) Invalid argument

2014-06-24 Thread Ilya Dryomov
On Tue, Jun 24, 2014 at 12:02 PM, Florent B wrote: > Hi all, > > On 2 Firefly cluster, I have a lot of errors like this on my OSDs : > > 2014-06-24 09:54:39.088469 7fb5b8628700 0 > xfsfilestorebackend(/var/lib/ceph/osd/ceph-4) set_extsize: FSSETXATTR: > (22) Invalid argument > > Both are using XF
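
For anyone hitting the same message: the warning comes from the filestore trying to set an XFS extent-size hint on object files, and the OSD simply carries on without the hint when the ioctl fails, so it is noise rather than damage. Assuming the option name is unchanged from the Firefly config reference, the hint can be turned off in ceph.conf:

    [osd]
    filestore xfs extsize = false   # don't attempt the FSSETXATTR extent-size hint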

Re: [ceph-users] Firefly OSDs : set_extsize: FSSETXATTR: (22) Invalid argument

2014-06-24 Thread Ilya Dryomov
On Tue, Jun 24, 2014 at 1:15 PM, Florent B wrote: > On 06/24/2014 11:13 AM, Ilya Dryomov wrote: >> On Tue, Jun 24, 2014 at 12:02 PM, Florent B wrote: >>> Hi all, >>> >>> On 2 Firefly cluster, I have a lot of errors like this on my OSDs : >>> >>> 2014-06-24 09:54:39.088469 7fb5b8628700 0 >>> xfsf

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-24 Thread Mark Kirkwood
On 23/06/14 19:16, Mark Kirkwood wrote: For database types (and yes I'm one of those)...you want to know that your writes (particularly your commit writes) are actually making it to persistent storage (that ACID thing you know). Now I see RBD cache very like battery backed RAID cards - your commi
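
To make the battery-backed-cache analogy concrete: librbd acknowledges ordinary writes from its cache but still honours flush requests from the guest, so a database commit (fsync -> FLUSH) is only acknowledged once the data is on the OSDs. A minimal client-side ceph.conf sketch (option names as of Firefly; the sizes are illustrative):

    [client]
    rbd cache = true
    rbd cache size = 67108864                  # 64 MB write-back cache
    rbd cache max dirty = 50331648             # start flushing at ~48 MB dirty
    rbd cache writethrough until flush = true  # stay write-through until the guest issues a flush

The last option protects guests that never send flushes; after the first flush arrives the cache switches to write-back.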

[ceph-users] 'osd pool set-quota' behaviour with CephFS

2014-06-24 Thread george.ryall
Last week I decided to take a look at the 'osd pool set-quota' option. I have a directory in cephFS that uses a pool called pool-2 (configured by following this: http://www.sebastien-han.fr/blog/2013/02/11/mount-a-specific-pool-with-cephfs/). I have a directory in that filled with cat picture
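
For reference, the two pieces involved are the CephFS directory layout (which pins new files under a directory to a specific data pool) and the Firefly pool quota. A sketch of both, assuming pool-2 has already been added as a CephFS data pool (paths and the 10 GB figure are just examples):

    # pin a directory to pool-2 (affects newly created files only)
    setfattr -n ceph.dir.layout.pool -v pool-2 /mnt/cephfs/cat-pictures

    # cap the pool at 10 GB
    ceph osd pool set-quota pool-2 max_bytes 10737418240

    # check usage and the configured quota
    ceph df detail
    ceph osd pool get-quota pool-2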

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-24 Thread Mark Nelson
On 06/24/2014 03:45 AM, Mark Kirkwood wrote: On 24/06/14 18:15, Robert van Leeuwen wrote: All of which means that MySQL performance (looking at you binlog) may still suffer due to lots of small block size sync writes. Which begs the question: Anyone running a reasonably busy MySQL server on Ce

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-24 Thread Mark Nelson
On 06/24/2014 04:46 AM, Mark Kirkwood wrote: On 23/06/14 19:16, Mark Kirkwood wrote: For database types (and yes I'm one of those)...you want to know that your writes (particularly your commit writes) are actually making it to persistent storage (that ACID thing you know). Now I see RBD cache ve

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-24 Thread Jake Young
On Mon, Jun 23, 2014 at 3:03 PM, Mark Nelson wrote: > Well, for random IO you often can't do much coalescing. You have to bite > the bullet and either parallelize things or reduce per-op latency. Ceph > already handles parallelism very well. You just throw more disks at the > problem and so lo
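
A quick way to see the parallelism point in practice is to drive an RBD image with fio at increasing queue depths: per-op latency stays roughly flat while aggregate IOPS scales until the OSDs saturate. A sketch (device name and sizes are placeholders):

    fio --name=randwrite --filename=/dev/rbd0 --rw=randwrite --bs=4k \
        --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting

Comparing iodepth=1 against iodepth=32 separates "my per-op latency is high" from "my cluster is out of IOPS".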

Re: [ceph-users] 'osd pool set-quota' behaviour with CephFS

2014-06-24 Thread Travis Rhoden
Hi George, I actually asked Sage about a similar scenario at the OpenStack summit in Atlanta this year -- namely if I could use the new pool quota functionality to enforce quotas on CephFS. The answer was no, that the pool quota functionality is mostly intended for radosgw and that the existing c

[ceph-users] CDS is Nigh!

2014-06-24 Thread Patrick McGarry
Hey cephers, We’re about 45 minutes from Ceph Developer Summit, Day 1 start.  If you have any problems joining the video conference please ask scuttlemonkey in #ceph-summit on oftc and we’ll work on getting you in.  Thanks! https://wiki.ceph.com/Planning/CDS/CDS_Giant_and_Hammer_(Jun_2014) B

[ceph-users] limitations of erasure coded pools

2014-06-24 Thread Chad Seys
Hi All, Could someone point me to a document (possibly a FAQ :) ) describing the limitations of erasure coded pools? Hopefully it would contain the when and how to use them as well. E.g. I read about people using replicated pools as a front end to erasure coded pools, but I don't know why
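
Short version of the usual answer, for anyone else searching the archives: erasure-coded pools in Firefly do not support partial overwrites, so RBD and CephFS cannot use them directly; radosgw can, and the common pattern for everything else is a replicated cache tier in front of the erasure-coded pool. A sketch of the commands (pool names, PG counts and k/m values are examples only):

    ceph osd erasure-code-profile set ecprofile k=4 m=2
    ceph osd pool create ecpool 256 256 erasure ecprofile
    ceph osd pool create cachepool 256 256 replicated

    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool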

Re: [ceph-users] Multiple hierarchies and custom placement

2014-06-24 Thread Gregory Farnum
There's not really a simple way to do this. There are functions in the OSDMap structure to calculate the location of a particular PG, but there are a lot of independent places that map objects into PGs. On Monday, June 23, 2014, Shayan Saeed wrote: > Thanks for getting back with a helpful reply.
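
From the command line there is at least a way to check where a given object would land, which helps when sanity-checking custom placement even if it doesn't help inside application code:

    # show the pool id, PG, and up/acting OSD sets for an object name
    ceph osd map <poolname> <objectname>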

[ceph-users] Continuing placement group problems

2014-06-24 Thread Peter Howell
We are using two instances of version 0.80 of Ceph, both on ZFS. We are frequently getting inconsistent placement groups on both Ceph clusters. We suspect that there is a problem with the network that is randomly corrupting the updates of placement groups. Does anyone have any suggestions as
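
Usual triage steps for inconsistent PGs (standard commands, not specific to the ZFS setup described):

    ceph health detail                            # list the inconsistent PGs
    ceph pg deep-scrub <pgid>                     # re-run the deep scrub on one PG
    ceph pg repair <pgid>                         # ask the primary to repair from its replicas
    grep -i ERR /var/log/ceph/ceph-osd.*.log      # see what the scrub actually found

If the errors keep appearing on random PGs across both clusters, checking NIC error counters and MTU consistency on the cluster network is a cheap next step; flaky networking does show up this way.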

Re: [ceph-users] Ceph RGW + S3 Client (s3cmd)

2014-06-24 Thread Francois Deppierraz
Hi Vickey, This really looks like a DNS issue. Are you sure that the host from which s3cmd is running is able to resolve the host 'bmi-pocfe2.scc.fi'? Does a regular ping work? $ ping bmi-pocfe2.scc.fi François On 23. 06. 14 16:24, Vickey Singh wrote: > # s3cmd ls > > WARNING: Retrying faile
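
Worth checking both the bare endpoint and a bucket-style hostname, since s3cmd builds virtual-host URLs from host_bucket (the bucket name below is just an example):

    ping -c1 bmi-pocfe2.scc.fi
    dig +short bmi-pocfe2.scc.fi
    dig +short mybucket.bmi-pocfe2.scc.fi   # needs a wildcard DNS record for bucket-style access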

Re: [ceph-users] Deep scrub versus osd scrub load threshold

2014-06-24 Thread David Zafman
Unfortunately, decreasing the osd_scrub_max_interval to 6 days isn’t going to fix it. There is a sort of quirk in the way the deep scrub is initiated: it doesn’t trigger a deep scrub until a regular scrub is about to start. So with osd_scrub_max_interval set to 1 week and a high load the next
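
Given that quirk, the knobs that interact are roughly these (option names as of Firefly; the values shown are the shipped defaults as far as I recall, so treat them as illustrative):

    [osd]
    osd scrub min interval = 86400      # try a scrub after 1 day if load allows
    osd scrub max interval = 604800     # force a scrub after 1 week regardless of load
    osd deep scrub interval = 604800    # promote a scrub to a deep scrub after 1 week
    osd scrub load threshold = 0.5      # skip opportunistic scrubs above this load average

A deep scrub can also be forced by hand with 'ceph osd deep-scrub osd.N' or 'ceph pg deep-scrub <pgid>'.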

Re: [ceph-users] Ceph RGW + S3 Client (s3cmd)

2014-06-24 Thread Stephan Fabel
On 06/23/2014 04:24 AM, Vickey Singh wrote: > host_bucket = %(bucket)s.bmi-pocfe2.scc.fi > Should there be a '.' (period) between %(bucket) and "s.bmi-pocfe2.scc.fi"? -Stephan
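
To spell out the answer: no extra dot is needed; the trailing 's' belongs to the Python %(bucket)s placeholder, not to the hostname. A working ~/.s3cfg fragment for a radosgw endpoint would look like this (hostname taken from the thread, the rest illustrative):

    host_base = bmi-pocfe2.scc.fi
    host_bucket = %(bucket)s.bmi-pocfe2.scc.fi
    use_https = False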

Re: [ceph-users] limitations of erasure coded pools

2014-06-24 Thread Blair Bethwaite
> Message: 24 > Date: Tue, 24 Jun 2014 09:39:50 -0500 > From: Chad Seys > To: "ceph-users@lists.ceph.com" > Subject: [ceph-users] limitations of erasure coded pools > Message-ID: <201406240939.50550.cws...@physics.wisc.edu> > Content-Type: Text/Plain; charset="us-ascii" > > Hi All, > Could som

Re: [ceph-users] Poor performance on all SSD cluster

2014-06-24 Thread Mark Kirkwood
On 24/06/14 23:39, Mark Nelson wrote: On 06/24/2014 03:45 AM, Mark Kirkwood wrote: On 24/06/14 18:15, Robert van Leeuwen wrote: All of which means that Mysql performance (looking at you binlog) may still suffer due to lots of small block size sync writes. Which begs the question: Anyone runni

[ceph-users] How to improve performance of ceph objcect storage cluster

2014-06-24 Thread wsnote
OS: CentOS 6.5 Version: Ceph 0.79 Hi, everybody! I have installed a ceph cluster with 10 servers. I tested the throughput of the ceph cluster within the same datacenter. Uploading 1GB files from one or several servers to one or several servers, the total is about 30MB/s. That is to say, the
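
One useful first step is to benchmark RADOS directly and compare it with the radosgw numbers, to see whether the 30MB/s ceiling is the cluster itself or the gateway/client path. A sketch (pool name and thread count are examples; note that a single s3cmd upload is single-threaded, so gateway tests need several clients in parallel):

    # 60 seconds of 4 MB object writes with 16 concurrent operations
    rados bench -p .rgw.buckets 60 write -t 16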

[ceph-users] CephFS ACL support status

2014-06-24 Thread Sean Crosby
I have recently deployed a Firefly CephFS cluster, and am trying out the POSIX ACL feature that is supposed to have come in as of kernel 3.14. I've mounted my CephFS volume on a machine with kernel 3.15. The ACL support seems to work (as in I can set and retrieve ACLs), but it seems kinda buggy, e
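
For anyone comparing notes, the basic round trip that should work with the 3.14+ kernel client is below (mount path and names are examples; my understanding is that the kernel client wants an 'acl' mount option, but check the mount options supported by your kernel):

    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,acl
    setfacl -m u:alice:rwx /mnt/cephfs/shared
    getfacl /mnt/cephfs/shared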