Re: [ceph-users] Cuttlefish VS Bobtail performance series

2013-07-11 Thread Erwan Velu
On 10/07/2013 18:01, Mark Nelson wrote: Hello again! Part 2 is now out! We've got a whole slew of results for 4K FIO tests on RBD: http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-2-4k-rbd-performance/ Hey Mark, I'm really fond of this way of plotting performance results.
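
For anyone wanting to reproduce numbers in that ballpark, a minimal sketch of a 4K random-write FIO run against a mapped RBD device (the pool, image, device name and runtime are illustrative assumptions, not taken from the articles):

  $ rbd map test-pool/test-image        # kernel RBD; shows up as /dev/rbdN
  $ fio --name=rbd-4k-randwrite --filename=/dev/rbd0 --rw=randwrite \
        --bs=4k --ioengine=libaio --direct=1 --iodepth=16 --runtime=60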

[ceph-users] Change of Monitors IP Adresses

2013-07-11 Thread Joachim . Tork
Hi folks, I face the difficulty that I have to change IP addresses in the public network for the monitors. What needs to be done besides the change of ceph.conf? Best regards Joachim Tork

Re: [ceph-users] Live migration of VM using librbd and OpenStack

2013-07-11 Thread Maciej Gałkiewicz
On 12 March 2013 21:38, Josh Durgin wrote: > Yes, it works with true live migration just fine (even with caching). You > can use "virsh migrate" or even do it through the virt-manager gui. > Nova is just doing a check that doesn't make sense for volume-backed > instances with live migration there
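
For reference, a minimal sketch of the libvirt live migration being described (domain name and destination host are placeholders):

  $ virsh migrate --live guest1 qemu+ssh://dest-host/system

With RBD-backed volumes there is no local disk to copy, so only the guest's memory state moves.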

[ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-11 Thread Tom Verdaat
Hi guys, We want to use our Ceph cluster to create a shared disk file system to host VMs. Our preference would be to use CephFS, but since it is not considered stable I'm looking into alternatives. The most appealing alternative seems to be to create an RBD volume, format it with a cluster file sy

Re: [ceph-users] Change of Monitors IP Adresses

2013-07-11 Thread Joao Eduardo Luis
On 07/11/2013 09:03 AM, joachim.t...@gad.de wrote: Hi folks, I face the difficulty that I have to change IP addresses in the public network for the monitors. What needs to be done besides the change of ceph.conf? ceph.conf is only used by other daemons (that aren't the monitors) and client
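
One documented approach boils down to rewriting the monmap; a sketch, with the monitor id and the new address as placeholders:

  $ ceph mon getmap -o /tmp/monmap             # grab the current map while the cluster is still up
  # stop the monitors, then rewrite the entry for each monitor being renumbered:
  $ monmaptool --rm a /tmp/monmap
  $ monmaptool --add a 192.168.0.10:6789 /tmp/monmap
  $ ceph-mon -i a --inject-monmap /tmp/monmap  # repeat on every monitor, then start them again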

Re: [ceph-users] Cuttlefish VS Bobtail performance series

2013-07-11 Thread Mark Nelson
On 07/11/2013 02:36 AM, Erwan Velu wrote: On 10/07/2013 18:01, Mark Nelson wrote: Hello again! Part 2 is now out! We've got a whole slew of results for 4K FIO tests on RBD: http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-2-4k-rbd-performance/ Hey Mark, I'm really fond of thi

[ceph-users] Including pool_id in the crush hash ? FLAG_HASHPSPOOL ?

2013-07-11 Thread Sylvain Munaut
Hi, I'd like the pool_id to be included in the hash used for the PG, to try and improve the data distribution (I have 10 pools). I see that there is a flag named FLAG_HASHPSPOOL. Is it possible to enable it on existing pools? Cheers, Sylvain
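
A quick way to check whether a given pool already carries the flag (and, in releases where the per-pool toggle exists, to flip it; whether your version supports the toggle is an assumption to verify, and flipping it will move data around):

  $ ceph osd dump | grep '^pool'              # per-pool flags include hashpspool when set
  $ ceph osd pool set rbd hashpspool true     # pool name is a placeholder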

Re: [ceph-users] Hadoop/Ceph and DFS IO tests

2013-07-11 Thread Noah Watkins
On Wed, Jul 10, 2013 at 6:23 PM, ker can wrote: > > Now separating out the journal from data disk ... > > HDFS write numbers (3 disks/data node) > Average execution time: 466 > Best execution time : 426 > Worst execution time : 508 > > ceph write numbers (3 data disks/data node + 3 journal d

Re: [ceph-users] Hadoop/Ceph and DFS IO tests

2013-07-11 Thread ker can
Yep - that's right, 3 OSD daemons per node. On Thu, Jul 11, 2013 at 9:16 AM, Noah Watkins wrote: > On Wed, Jul 10, 2013 at 6:23 PM, ker can wrote: > > > > Now separating out the journal from data disk ... > > > > HDFS write numbers (3 disks/data node) > > Average execution time: 466 > > Best ex

Re: [ceph-users] storage pools ceph (bobtail) auth failure in xenserver SR creation

2013-07-11 Thread Dave Scott
[sorry I didn't manage to reply to the original message; I only just joined this list. Sorry if this breaks your threading!] On 10 Jul 2013 at 16:01 John Shen wrote: > I was following the tech preview of libvirt/ceph integration in xenserver, > but ran > into an issue with ceph auth in setting

Re: [ceph-users] Cuttlefish VS Bobtail performance series

2013-07-11 Thread Mark Nelson
And we've now got part 3 out showing 128K FIO results: http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-3-128k-rbd-performance/ Mark On 07/10/2013 11:01 AM, Mark Nelson wrote: Hello again! Part 2 is now out! We've got a whole slew of results for 4K FIO tests on RBD: http://ceph

Re: [ceph-users] RadosGW Logging

2013-07-11 Thread Derek Yarnell
>> It will never log anything to /var/log/ceph/radosgw.log. I am looking >> for the debug output which I have seen people post, does anyone have a >> pointer to what could be going on? > > You don't need the 'rgw_enable_ops_log' to have debug logs. The > log_file param should be enough. Do you ha
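
For context, a hedged sketch of the ceph.conf stanza usually involved here (the section name is the conventional placeholder, not necessarily what this deployment uses):

  [client.radosgw.gateway]
      log file = /var/log/ceph/radosgw.log
      debug rgw = 20
      rgw enable ops log = false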

[ceph-users] Tuning options for 10GE ethernet and ceph

2013-07-11 Thread Mihály Árva-Tóth
Hello, We are planning to use Intel 10 GbE Ethernet between the OSD nodes. The host operating system will be Ubuntu 12.04 x86_64. Are there any recommended tuning options (e.g. sysctl and ceph)? Thank you, Mihaly

Re: [ceph-users] Tuning options for 10GE ethernet and ceph

2013-07-11 Thread Mark Nelson
On 07/11/2013 10:04 AM, Mihály Árva-Tóth wrote: Hello, We are planning to use Intel 10 GbE Ethernet between the OSD nodes. The host operating system will be Ubuntu 12.04 x86_64. Are there any recommended tuning options (e.g. sysctl and ceph)? Thank you, Mihaly Hi, Generally if per
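
A hedged sketch of the usual 10GbE starting points people tune from; the values are illustrative, not recommendations made in this thread:

  # /etc/sysctl.d/10gbe.conf
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_rmem = 4096 87380 16777216
  net.ipv4.tcp_wmem = 4096 65536 16777216
  net.core.netdev_max_backlog = 30000
  # plus, often, jumbo frames on the cluster-facing NIC:
  # ip link set eth2 mtu 9000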

Re: [ceph-users] Tuning options for 10GE ethernet and ceph

2013-07-11 Thread Mihály Árva-Tóth
2013/7/11 Mark Nelson > On 07/11/2013 10:04 AM, Mihály Árva-Tóth wrote: > >> Hello, >> >> We are planning to use Intel 10 GbE Ethernet between the OSD nodes. The host >> operating system will be Ubuntu 12.04 x86_64. Are there any >> recommended tuning options (e.g. sysctl and ceph)? >>

Re: [ceph-users] Tuning options for 10GE ethernet and ceph

2013-07-11 Thread Mark Nelson
On 07/11/2013 10:27 AM, Mihály Árva-Tóth wrote: 2013/7/11 Mark Nelson <mark.nel...@inktank.com>: On 07/11/2013 10:04 AM, Mihály Árva-Tóth wrote: Hello, We are planning to use Intel 10 GbE Ethernet between the OSD nodes. The host operating system will be Ubu

Re: [ceph-users] storage pools ceph (bobtail) auth failure in xenserver SR creation

2013-07-11 Thread John Shen
Hi Dave, Thank you so much for getting back to me. The command returns the same errors: [root@xen02 ~]# virsh pool-create ceph.xml error: Failed to create pool from ceph.xml error: Invalid secret: virSecretFree [root@xen02 ~]# The secret was pre-created for the user admin that I use elsewhere wi

Re: [ceph-users] Cuttlefish VS Bobtail performance series

2013-07-11 Thread Erwan Velu
On 11/07/2013 16:56, Mark Nelson wrote: And we've now got part 3 out showing 128K FIO results: http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-3-128k-rbd-performance/ Hey Mark, Speaking of the 10GbE mentioned at the end of your document, I have the following questions for you.

Re: [ceph-users] Cuttlefish VS Bobtail performance series

2013-07-11 Thread Mark Nelson
On 07/11/2013 11:16 AM, Erwan Velu wrote: On 11/07/2013 16:56, Mark Nelson wrote: And we've now got part 3 out showing 128K FIO results: http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-3-128k-rbd-performance/ Hey Mark, Hi! Speaking of the 10GbE mentioned at the end of your docu

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-11 Thread Gilles Mocellin
On 11/07/2013 12:08, Tom Verdaat wrote: Hi guys, We want to use our Ceph cluster to create a shared disk file system to host VMs. Our preference would be to use CephFS, but since it is not considered stable I'm looking into alternatives. The most appealing alternative seems to be to creat

Re: [ceph-users] storage pools ceph (bobtail) auth failure in xenserver SR creation

2013-07-11 Thread Wido den Hollander
Hi. So, the problem here is a couple of things. First: libvirt doesn't handle RBD storage pools without auth. That's my bad, but I never resolved that bug: http://tracker.ceph.com/issues/3493 For now, make sure cephx is enabled. Also, the commands you are using don't seem to be right. It sh

Re: [ceph-users] storage pools ceph (bobtail) auth failure in xenserver SR creation

2013-07-11 Thread John Shen
Wido, Thanks! I tried again with your command syntax but the result is the same. [root@xen02 ~]# virsh secret-set-value $(cat uuid) $(cat client.admin.key) Secret value set [root@xen02 ~]# xe sr-create type=libvirt name-label=ceph device-config:xml-filename=ceph.xml Error code: libvirt Error para
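
For comparison, the usual libvirt secret setup looks roughly like this (the XML is a minimal sketch, <uuid> is a placeholder, and the key retrieval assumes cephx with a client.admin key):

  # secret.xml
  <secret ephemeral='no' private='no'>
    <usage type='ceph'>
      <name>client.admin secret</name>
    </usage>
  </secret>

  $ virsh secret-define secret.xml             # prints the secret's UUID
  $ virsh secret-set-value <uuid> "$(ceph auth get-key client.admin)"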

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-11 Thread Alex Bligh
On 11 Jul 2013, at 19:25, Gilles Mocellin wrote: > Hello, > > Yes, you missed that qemu can use directly RADOS volume. > Look here : > http://ceph.com/docs/master/rbd/qemu-rbd/ > > Create : > qemu-img create -f rbd rbd:data/squeeze 10G > > Use : > > qemu -m 1024 -drive format=raw,file=rbd:dat
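
A fuller form of that qemu invocation, as a sketch (the id, conf and cache options are assumptions worth checking against your qemu version, not quoted from the thread):

  $ qemu-img create -f rbd rbd:data/squeeze 10G
  $ qemu -m 1024 -drive format=raw,file=rbd:data/squeeze:id=admin:conf=/etc/ceph/ceph.conf,cache=writeback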

[ceph-users] Check creating

2013-07-11 Thread Mandell Degerness
Is there any command (in the shell or Python API) that can tell me if Ceph is still creating PGs, other than actually attempting a modification of the pg_num or pgp_num of a pool? I would like to minimize the number of errors I get and not keep retrying the commands until they succeed, if possible. Right no
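
Absent a dedicated call, one hedged way to poll for this from a shell (the exact output wording varies between releases):

  $ ceph pg stat | grep -c creating     # non-zero while placement groups are still being created
  $ ceph -s                             # the pgmap summary also lists a 'creating' state while it lasts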

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-11 Thread McNamara, Bradley
Correct me if I'm wrong (I'm new to this), but I think the distinction between the two methods is that using 'qemu-img create -f rbd' creates an RBD for either a VM to boot from, or for mounting within a VM. Whereas the OP wants a single RBD, formatted with a cluster file system, to use as a pl
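
In rough strokes, that single-RBD-plus-cluster-filesystem layout looks like the sketch below (names and size are placeholders, and the o2cb cluster configuration OCFS2 itself needs on every node is left out):

  $ rbd create vmstore --size 1048576        # size is in MB here, so roughly 1 TB
  $ rbd map vmstore                          # kernel client; shows up as /dev/rbdN
  $ mkfs.ocfs2 /dev/rbd0                     # run once, from a single node
  $ mount /dev/rbd0 /var/lib/nova/instances  # then mount on every hypervisor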

[ceph-users] Possible bug with image.list_lockers()

2013-07-11 Thread Mandell Degerness
I'm not certain what the correct behavior should be in this case, so maybe it is not a bug, but here is what is happening: When an OSD becomes full, a process fails and we unmount the RBD and attempt to remove the lock associated with the RBD for that process. The unmount works fine, but removing the l
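
The CLI equivalents are convenient for poking at the same lock state by hand (image name, lock id and locker below are placeholders):

  $ rbd lock list myimage
  $ rbd lock remove myimage mylockid client.4123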

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-11 Thread Tom Verdaat
You are right, I do want a single RBD, formatted with a cluster file system, to use as a place for multiple VM image files to reside. Doing everything straight from volumes would be more effective with regard to snapshots, CoW, etc., but unfortunately for now OpenStack Nova insists on having

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-11 Thread Tom Verdaat
Hi Alex, We're planning to deploy OpenStack Grizzly using KVM. I agree that running every VM directly from RBD devices would be preferable, but booting from volumes is not one of OpenStack's strengths, and configuring Nova to make boot-from-volume the default method that works automatically is not

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-11 Thread Darryl Bond
Tom, I'm no expert as I didn't set it up, but we are using OpenStack Grizzly with KVM/QEMU and RBD volumes for VMs. We boot the VMs from the RBD volumes and it all seems to work just fine. Migration works perfectly, although live (no-break) migration only works from the command-line tools. The

[ceph-users] Ceph-deploy

2013-07-11 Thread SUNDAY A. OLUTAYO
I would love to know the difference between "ceph-deploy new host" and "ceph-deploy new mon". I will appreciate your help. Sent from my LG Mobile "McNamara, Bradley" wrote: Correct me if I'm wrong (I'm new to this), but I think the distinction between the two methods is that using 'qemu-img crea

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-11 Thread Youd, Douglas
Depending on which hypervisor he's using, it may not be possible to mount the RBDs natively. For instance, the elephant in the room... ESXi. I've pondered several architectures for presenting Ceph to ESXi which may be related to this thread. 1) Large RBDs (2TB-512B), re-presented throug

[ceph-users] Turning off ceph journaling with xfs ?

2013-07-11 Thread ker can
Hi, Is it possible to turn off Ceph journaling if I switch to xfs? For use as a storage layer for Hadoop, we're concerned about the additional requirement for separate SSDs ($$) etc. In our testing we're seeing a performance hit when using the same disk for both journal + data ... so we're
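
For reference, journal placement is just a per-OSD setting in ceph.conf; a sketch of the two layouts being compared (paths and size are placeholders):

  [osd]
      osd journal = /var/lib/ceph/osd/$cluster-$id/journal   ; journal as a file on the data disk
      osd journal size = 1024                                ; in MB
      ; or point it at a partition on a separate (SSD) device instead:
      ; osd journal = /dev/sdb1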

Re: [ceph-users] Check creating

2013-07-11 Thread Sage Weil
On Thu, 11 Jul 2013, Mandell Degerness wrote: > Is there any command (in shell or python API), that can tell me if > ceph is still creating pgs other than actually attempting a > modification of the pg_num or pgp_num of a pool? I would like to > minimize the number of errors I get and not keep try

Re: [ceph-users] Turning off ceph journaling with xfs ?

2013-07-11 Thread Mark Nelson
Hi Ker, Unfortunately no. Ceph uses the journal for internal consistency and atomicity, and it can't use the XFS journal for that. On the BTRFS side, we've been investigating allowing the Ceph journal to be on the same disk as the OSD and doing a clone() operation to effectively reduce the jou

Re: [ceph-users] Turning off ceph journaling with xfs ?

2013-07-11 Thread Sage Weil
Note that you *can* disable the journal if you use btrfs, but your write latency will tend to be pretty terrible. This is only viable for bulk-storage use cases where throughput trumps all and latency is not an issue at all (it may be seconds). We are planning on eliminating the double-write f

[ceph-users] latency when OSD falls out of cluster

2013-07-11 Thread Edwin Peer
Hi there, We've been noticing nasty multi-second cluster-wide latencies if an OSD drops out of an active cluster (due to power failure, or even being stopped cleanly). We've also seen this problem occur when an OSD is inserted back into the cluster. Obviously, this has the effect of freezing

[ceph-users] Ceph-deploy

2013-07-11 Thread SUNDAY A. OLUTAYO
I am on my first exploration of Ceph, and I need help understanding these terms: "ceph-deploy new Host", "ceph-deploy new MON Host" and "ceph-deploy mon create Host". I will appreciate your help. Sent from my LG Mobile
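
A hedged sketch of how those commands are typically used together (the hostname is a placeholder; packages are assumed to be installed already, e.g. via ceph-deploy install):

  $ ceph-deploy new node1          # only writes an initial ceph.conf and monitor keyring, listing node1 as an initial monitor
  $ ceph-deploy mon create node1   # actually creates and starts the monitor daemon on node1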

[ceph-users] Num of PGs

2013-07-11 Thread Stefan Priebe - Profihost AG
Hello, is this calculation for the number of PGs correct? 36 OSDs, replication factor 3: 36 * 100 / 3 => 1200 PGs. But I then read that it should be a power of 2, so it should be 2048? Stefan
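
Worked out, the rule of thumb in question goes:

  (36 OSDs * 100) / 3 replicas = 1200
  next power of two >= 1200     -> 2048

  $ ceph osd pool create <poolname> 2048 2048   # pg_num and pgp_num; pool name is a placeholder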