Re: [ceph-users] Sizing SSD's for ceph

2015-01-28 Thread Christian Balzer
On Thu, 29 Jan 2015 01:30:41 + Ramakrishna Nishtala (rnishtal) wrote: > Hi, > Apologize if something came up before like this. > Reading archives, it appears that 4 to 5 spinning disks are recommended > for single SSD. > It all depends on the SSDs and HDDs in question for one (how many HDDs c
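A rough sketch of how that ratio and the journal size are usually reasoned about (the bandwidth figures below are assumptions for illustration, not measurements from this thread):

  # If an SSD sustains ~400 MB/s of sync writes and each HDD ~100 MB/s,
  # then roughly 400 / 100 = 4 HDD journals per SSD before the SSD saturates.
  # Journal size rule of thumb from the Ceph docs:
  #   osd journal size = 2 * (expected throughput * filestore max sync interval)
  # e.g. 2 * 100 MB/s * 5 s = 1000 MB, so 5-10 GB per journal leaves headroom.
  [osd]
  osd journal size = 10240    # in MB, i.e. 10 GB per journal partition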

[ceph-users] Is this ceph issue ? snapshot freeze on save state

2015-01-28 Thread Zeeshan Ali Shah
https://bugs.launchpad.net/glance/+bug/1415679 -- Regards Zeeshan Ali Shah System Administrator - PDC HPC PhD researcher (IT security) Kungliga Tekniska Hogskolan +46 8 790 9115 http://www.pdc.kth.se/members/zashah

Re: [ceph-users] RGW region metadata sync prevents writes to non-master region

2015-01-28 Thread Mark Kirkwood
On 29/01/15 13:58, Mark Kirkwood wrote: However if I try to write to eu-west I get: Sorry - that should have said: However if I try to write to eu-*east* I get: The actual code is (see below) connecting to the endpoint for eu-east (ceph4:80), so seeing it redirected to us-*west* is pretty

Re: [ceph-users] Help:mount error

2015-01-28 Thread 于泓海
Thanks! I have resolved it with your suggestion. At 2015-01-28 22:38:21, "Yan, Zheng" wrote: >On Wed, Jan 28, 2015 at 10:35 PM, Yan, Zheng wrote: >> On Wed, Jan 28, 2015 at 2:48 PM, 于泓海 wrote: >>> Hi: >>> >>> I have completed the installation of ceph cluster,and the ceph health is >>>

[ceph-users] Sizing SSD's for ceph

2015-01-28 Thread Ramakrishna Nishtala (rnishtal)
Hi, Apologize if something came up before like this. Reading archives, it appears that 4 to 5 spinning disks are recommended for single SSD. I have two questions on the subject. * Some of the links suggest that we should use 'sync writes' to really size the journals. If true, then what
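For the "sync writes" sizing point, the usual approach is to benchmark the SSD with fio in direct+sync mode before deciding how many journals to put on it; a minimal sketch (the device name is a placeholder, and the test overwrites it, so only run it on an empty disk):

  fio --name=journal-sync-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based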

[ceph-users] RGW region metadata sync prevents writes to non-master region

2015-01-28 Thread Mark Kirkwood
Hi, I am following http://docs.ceph.com/docs/master/radosgw/federated-config/ using ceph 0.91 (0.91-665-g6f44f7a): - 2 regions (US and EU). US is the master region - 2 ceph clusters, one per region - 4 zones (us east and west, eu east and west) - 4 hosts (ceph1 + ceph2 being us-west + us-east
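When writes to one region end up redirected to another, the first thing worth checking is what the gateways actually believe the region map looks like; a few read-only inspection commands (region and zone names follow the layout described above):

  radosgw-admin regionmap get
  radosgw-admin region get --rgw-region=eu
  radosgw-admin zone get --rgw-zone=eu-east
  # "is_master" and the "endpoints" lists here should match what each gateway was configured with.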

[ceph-users] Survey re journals on SSD vs co-located on spinning rust

2015-01-28 Thread Anthony D'Atri
My apologies if this has been covered ad nauseam in the past; I wasn't finding a lot of relevant archived info. I'm curious how many people are using 1) OSD's on spinning disks, with journals on SSD's -- how many journals per SSD? 4-5? 2) OSD's on spinning disks, with [10GB] journals co-locate
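For reference, the two layouts in the survey are usually set up like this with ceph-deploy (host and device names are placeholders):

  # 1) journal on a dedicated SSD partition, one partition per OSD on the shared SSD:
  ceph-deploy osd prepare nodeA:sdb:/dev/sdf1
  # 2) journal co-located on the same spinner (ceph-disk carves a small journal partition):
  ceph-deploy osd prepare nodeA:sdc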

Re: [ceph-users] chattr +i not working with cephfs

2015-01-28 Thread Eric Eastman
On Wed, Jan 28, 2015 at 11:43 AM, Gregory Farnum wrote: > > On Wed, Jan 28, 2015 at 10:06 AM, Sage Weil wrote: > > On Wed, 28 Jan 2015, John Spray wrote: > >> On Wed, Jan 28, 2015 at 5:23 PM, Gregory Farnum wrote: > >> > My concern is whether we as the FS are responsible for doing anything > >>

Re: [ceph-users] Ceph Testing

2015-01-28 Thread Lincoln Bryant
Hi Raj, Sébastien Han has done some excellent Ceph benchmarking on his blog here: http://www.sebastien-han.fr/blog/2012/08/26/ceph-benchmarks/ Maybe that's a good place to start for your own testing? Cheers, Lincoln On Jan 28, 2015, at 12:59 PM, Jeripotula, Shashiraj wrote: > Resending, Guys,
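Besides the blog above, rados bench is the quickest built-in starting point; a minimal sketch (the pool name is a placeholder):

  # raw write throughput for 60 s with the default 4 MB objects, keeping the objects around:
  rados bench -p testpool 60 write --no-cleanup
  # then read them back sequentially (and randomly, on releases that support "rand"):
  rados bench -p testpool 60 seq
  rados bench -p testpool 60 rand
  # remove the benchmark objects afterwards (newer releases: "rados -p testpool cleanup")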

Re: [ceph-users] Ceph Testing

2015-01-28 Thread Jeripotula, Shashiraj
Resending, Guys, Please help me point to some good documentation. Thanks in advance. Regards Raj From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jeripotula, Shashiraj Sent: Tuesday, January 27, 2015 10:32 AM To: ceph-users@lists.ceph.com Subject: [ceph-users] Ceph Test

Re: [ceph-users] cephfs modification time

2015-01-28 Thread Christopher Armstrong
Thanks Greg. Perhaps this is a motivation for us to switch to ceph-fuse from the kernel client - at least that way, we could easily upgrade for bug fixes without waiting for a new kernel. Chris On Wed, Jan 28, 2015 at 9:32 AM, Gregory Farnum wrote: > This is in our testing branch and should go

Re: [ceph-users] chattr +i not working with cephfs

2015-01-28 Thread Gregory Farnum
On Wed, Jan 28, 2015 at 10:06 AM, Sage Weil wrote: > On Wed, 28 Jan 2015, John Spray wrote: >> On Wed, Jan 28, 2015 at 5:23 PM, Gregory Farnum wrote: >> > My concern is whether we as the FS are responsible for doing anything >> > more than storing and returning that immutable flag — are we suppos

Re: [ceph-users] chattr +i not working with cephfs

2015-01-28 Thread Sage Weil
On Wed, 28 Jan 2015, John Spray wrote: > On Wed, Jan 28, 2015 at 5:23 PM, Gregory Farnum wrote: > > My concern is whether we as the FS are responsible for doing anything > > more than storing and returning that immutable flag — are we supposed > > to block writes to anything that has it set? That

Re: [ceph-users] chattr +i not working with cephfs

2015-01-28 Thread John Spray
On Wed, Jan 28, 2015 at 5:23 PM, Gregory Farnum wrote: > My concern is whether we as the FS are responsible for doing anything > more than storing and returning that immutable flag — are we supposed > to block writes to anything that has it set? That could be much > trickier... The VFS layer is c

[ceph-users] OSDs not getting mounted back after reboot

2015-01-28 Thread J-P Methot
Hi, I'm having an issue quite similar to this old bug: http://tracker.ceph.com/issues/5194, except that I'm using CentOS 6. Basically, I set up a cluster using ceph-deploy to save some time (this is a 90+ OSD cluster). I rebooted a node earlier today and now all the drives are unmounted and a
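A sketch of the usual way to get such OSDs back without rebuilding them (device names are placeholders; normally udev triggers this automatically through ceph-disk):

  # re-activate every prepared-but-unmounted OSD data partition on the node:
  ceph-disk activate-all
  # or a single partition:
  ceph-disk activate /dev/sdb1
  # then confirm the OSDs rejoin:
  ceph osd tree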

Re: [ceph-users] cephfs modification time

2015-01-28 Thread Gregory Farnum
This is in our testing branch and should go to Linus the next time we send him stuff for merge. Unfortunately there's nobody doing CephFS kernel backports at this time so you'll need to wait for that to come out or spin your own. :( -Greg On Tue, Jan 27, 2015 at 10:46 AM, Christopher Armstrong wr

Re: [ceph-users] chattr +i not working with cephfs

2015-01-28 Thread Gregory Farnum
On Wed, Jan 28, 2015 at 5:24 AM, John Spray wrote: > We don't implement the GETFLAGS and SETFLAGS ioctls used for +i. > > Adding the ioctls is pretty easy, but then we need somewhere to put > the flags. Currently we don't store a "flags" attribute on inodes, > but maybe we could borrow the high b

[ceph-users] Ceph hunting for monitor on load

2015-01-28 Thread Erwin Lubbers
Hi, I'm running a small Ceph cluster (Emperor), with 3 servers, each running a monitor and two 280 GB OSDs (plus an SSD for the journals). Servers have 16 GB memory and an 8-core Xeon processor and are connected with 3x 1 Gbps (LACP trunk). As soon as I give the cluster some load from a client
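When clients start "hunting" for a monitor, it is worth confirming that they know about all of the mons and that the mons themselves stay in quorum under load; a minimal sketch (addresses are placeholders, and the last command assumes the mon id matches the short hostname):

  [global]
  mon host = 10.0.0.1,10.0.0.2,10.0.0.3
  # on a monitor node, watch quorum while the load is applied:
  ceph -s
  ceph daemon mon.$(hostname -s) mon_status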

Re: [ceph-users] chattr +i not working with cephfs

2015-01-28 Thread Eric Eastman
Thank you for the reply. This is a feature that we would like to see. Should I write a cephfs tracker report on this as a possible future enhancement? On Wed, Jan 28, 2015 at 6:24 AM, John Spray wrote: > We don't implement the GETFLAGS and SETFLAGS ioctls used for +i. > > Adding the ioctls is pr

Re: [ceph-users] Help:mount error

2015-01-28 Thread Yan, Zheng
On Wed, Jan 28, 2015 at 10:35 PM, Yan, Zheng wrote: > On Wed, Jan 28, 2015 at 2:48 PM, 于泓海 wrote: >> Hi: >> >> I have completed the installation of ceph cluster,and the ceph health is >> ok: >> >> cluster 15ee68b9-eb3c-4a49-8a99-e5de64449910 >> health HEALTH_OK >> monmap e1: 1 m

Re: [ceph-users] RBD over cache tier over EC pool: rbd rm doesn't remove objects

2015-01-28 Thread Irek Fasikhov
Hi,Sage. Yes, Firefly. [root@ceph05 ~]# ceph --version ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7) Yes, I have seen this behavior. [root@ceph08 ceph]# rbd info vm-160-disk-1 rbd image 'vm-160-disk-1': size 32768 MB in 8192 objects order 22 (4096 kB objects)

Re: [ceph-users] Help:mount error

2015-01-28 Thread Yan, Zheng
On Wed, Jan 28, 2015 at 2:48 PM, 于泓海 wrote: > Hi: > > I have completed the installation of ceph cluster,and the ceph health is > ok: > > cluster 15ee68b9-eb3c-4a49-8a99-e5de64449910 > health HEALTH_OK > monmap e1: 1 mons at {ceph01=10.194.203.251:6789/0}, election epoch 1, > quor
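For completeness, a typical kernel-client mount against the monitor shown above looks like this when cephx is enabled (the key values are placeholders, and CephFS also needs a running MDS, which "ceph mds stat" will show):

  mount -t ceph 10.194.203.251:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
  # or pass the key directly (less safe, it ends up in the process list):
  mount -t ceph 10.194.203.251:6789:/ /mnt/cephfs -o name=admin,secret=AQD...==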

Re: [ceph-users] RBD over cache tier over EC pool: rbd rm doesn't remove objects

2015-01-28 Thread Sage Weil
On Wed, 28 Jan 2015, Irek Fasikhov wrote: > Sage, > is it possible, when deleting objects, to bypass the cache tier pool? There's currently no knob or hint to do that. It would be pretty simple to add, but it's a heuristic that only works for certain workloads. sage > Thanks > > Wed Jan 28 2015
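Lacking such a knob, the workaround that usually comes up is to flush and evict the cache tier so the deletions reach the backing EC pool; a sketch (the pool name is a placeholder):

  rados -p cache-pool cache-flush-evict-all
  # then compare object counts per pool:
  rados df
  ceph df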

Re: [ceph-users] RBD over cache tier over EC pool: rbd rm doesn't remove objects

2015-01-28 Thread Irek Fasikhov
Sage, is it possible, when deleting objects, to bypass the cache tier pool? Thanks. Wed Jan 28 2015 at 5:13:36 PM, Irek Fasikhov : > Hi,Sage. > > Yes, Firefly. > [root@ceph05 ~]# ceph --version > ceph version 0.80.8 (69eaad7f8308f21573c604f121956e64679a52a7) > > Yes, I have seen this behavior. > > [root@

Re: [ceph-users] chattr +i not working with cephfs

2015-01-28 Thread John Spray
We don't implement the GETFLAGS and SETFLAGS ioctls used for +i. Adding the ioctls is pretty easy, but then we need somewhere to put the flags. Currently we don't store a "flags" attribute on inodes, but maybe we could borrow the high bits of the mode attribute for this if we wanted to implement
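For anyone wanting to reproduce the report, chattr/lsattr are thin wrappers around those ioctls, so on CephFS they currently fail where they succeed on ext4/xfs (the path is a placeholder):

  chattr +i /mnt/cephfs/somefile      # needs FS_IOC_GETFLAGS + FS_IOC_SETFLAGS; expected to fail on CephFS today
  lsattr /mnt/cephfs/somefile         # issues FS_IOC_GETFLAGS
  strace -e ioctl chattr +i /mnt/cephfs/somefile   # shows which ioctl fails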

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-28 Thread Mike Christie
On 01/28/2015 02:10 AM, Nick Fisk wrote: > Hi Mike, > > I've been working on some resource agents to configure LIO to use implicit > ALUA in an Active/Standby config across 2 hosts. After a week long crash > course in pacemaker and LIO, I now have a very sore head but it looks like > it's working

[ceph-users] Health warning : .rgw.buckets has too few pgs

2015-01-28 Thread Shashank Puntamkar
I am using ceph firefly (ceph version 0.80.7) with a single Radosgw instance, no RBD. I am facing the problem ".rgw.buckets has too few pgs". I have tried to increase the number of pgs using the command "ceph osd pool set <pool> pg_num <num>" but in vain. I also tried "ceph osd crush tunables optimal" but no effe
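In case it helps others hitting the same warning: the pool set command needs both the pool name and a value, and pgp_num usually has to follow pg_num before the new PGs are actually used for placement; a sketch (the value 512 is only an example and should be sized to the OSD count):

  ceph osd pool set .rgw.buckets pg_num 512
  ceph osd pool set .rgw.buckets pgp_num 512
  # verify:
  ceph osd pool get .rgw.buckets pg_num
  ceph osd pool get .rgw.buckets pgp_num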

Re: [ceph-users] Ceph, LIO, VMWARE anyone?

2015-01-28 Thread Nick Fisk
Hi Mike, I've been working on some resource agents to configure LIO to use implicit ALUA in an Active/Standby config across 2 hosts. After a week long crash course in pacemaker and LIO, I now have a very sore head but it looks like it's working fairly well. I hope to be in a position in the next f

Re: [ceph-users] Help:mount error

2015-01-28 Thread Lindsay Mathieson
Your mount command? Lindsay Mathieson -Original Message- From: "于泓海" Sent: 28/01/2015 4:48 PM To: "ceph-us...@ceph.com" Subject: [ceph-users] Help:mount error Hi: I have completed the installation of ceph cluster,and the ceph health is ok: cluster 15ee68b9-eb3c-4a49-8a9