Re: [ceph-users] I/O Speed Comparisons

2013-03-12 Thread Wolfgang Hennerbichler
On 03/11/2013 11:56 PM, Josh Durgin wrote: >> dd if=/dev/zero of=/bigfile bs=2M & >> >> Serial console gets jerky, VM gets unresponsive. It doesn't crash, but >> it's not 'healthy' either. CPU load isn't very high, it's in the waiting >> state a lot: > > Does this only happen with rbd_cache tur…
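For readers following the thread: rbd caching is a client-side setting. A minimal sketch of the ceph.conf stanza involved (the values here are illustrative, not taken from the thread):

    [client]
    rbd cache = true                 # enable client-side writeback caching
    rbd cache size = 33554432        # 32 MB cache; example value
    rbd cache max dirty = 25165824   # writeback threshold; example value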

Re: [ceph-users] I/O Speed Comparisons

2013-03-12 Thread Christopher Kunz
On 12.03.13 06:08, Stefan Priebe - Profihost AG wrote: > Hi Sage, > > I would like to see the high ping problem fixed in 0.56.4 > > Thanks! > For the record, we (filoo) reported this issue about 3 months ago. --ck

Re: [ceph-users] Gateway quick start

2013-03-12 Thread waed Albataineh
I see, but just to be exact: I may need to access the objects in the OSDs and try to move them manually. Can I still do that without the RESTful gateway? Thank you. --- On Mon, 3/11/13, Dan Mick wrote: From: Dan Mick Subject: Re: [ceph-users] Gateway quick start To: ceph-users@lists.ceph.com Dat…

[ceph-users] cluster-network documentation

2013-03-12 Thread Wolfgang Hennerbichler
Hi, I've a question on the cluster network as documented here: http://ceph.com/docs/master/rados/configuration/network-config-ref/ In the docs we learn for the cluster network directive: The IP address and netmask of the cluster (back-side) network (e.g., 10.20.30.41/24). Set in [global]. You may specif…
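For reference, the directives in question live in the [global] section of ceph.conf; a minimal sketch, with illustrative addresses:

    [global]
    public network = 192.168.0.0/24   # front-side client traffic
    cluster network = 10.20.30.0/24   # back-side replication/heartbeat traffic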

[ceph-users] Number of buckets limit per account

2013-03-12 Thread Ashish Kumar
Hi Guys, I am configuring Ceph and planning to use just one account for our app. One schema will map to one bucket, but before I do this, I need the answer to the following questions: 1) can the number of buckets/account be set to unlimited or a very high number which is practically not possible…

Re: [ceph-users] Number of buckets limit per account

2013-03-12 Thread Yehuda Sadeh
On Tue, Mar 12, 2013 at 6:16 AM, Ashish Kumar wrote: > > Hi Guys, > > > > I am configuring CEPH and planning to use just one account for our app. > One schemer will map to one bucket, but before I do this, I need the answer > of following > > questions: > > > > 1) can the number of buckets/account…
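Yehuda's reply is cut off above; for reference, the per-user bucket cap in radosgw can be adjusted with radosgw-admin. A sketch, assuming a user id of 'appuser' (verify the flag and whether 0 means unlimited against your radosgw version):

    radosgw-admin user modify --uid=appuser --max-buckets=0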

[ceph-users] cephfs set_layout behaviour

2013-03-12 Thread Varun Chandramouli
Hi All, I was running some experiments on my 3-node ceph cluster (mounted at /mnt/ceph), and noticed unexpected behavior regarding the cephfs set_layout command. root@varunc-virtual-machine:/mnt/ceph/folder1# touch test3 root@varunc-virtual-machine:/mnt/ceph/folder1# cephfs test3 set_layout -p 3 -s 4…
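For context, a sketch of a full set_layout invocation of the sort being tested (pool id 3 as in the post; the size flags and values are illustrative and should be checked against cephfs --help on your version):

    cephfs test3 set_layout -p 3 -s 4194304 -u 4194304 -c 1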

Re: [ceph-users] cephfs set_layout behaviour

2013-03-12 Thread Sage Weil
On Tue, 12 Mar 2013, Varun Chandramouli wrote: > Hi All, > I was running some experiments on my 3-node ceph cluster (mounted at > /mnt/ceph), and noticed unexpected behavior regarding cephfs set_layout > command. > > root@varunc-virtual-machine:/mnt/ceph/folder1# touch test3 > root@varunc-virtual-…

Re: [ceph-users] Gateway quick start

2013-03-12 Thread Dan Mick
Yes. On 03/12/2013 12:54 AM, waed Albataineh wrote: I see, but just to be exact: I may need to access the objects in the OSDs and try to move them manually. Can I still do that without the RESTful gateway? Thank you. --- On Mon, 3/11/13, Dan Mick wrote: From: Dan Mick Subject: Re: [ce…

[ceph-users] Live migration of VM using librbd and OpenStack

2013-03-12 Thread Travis Rhoden
Hey folks, I'm wondering if the following is possible. I have OpenStack (Folsom) configured to boot VMs from volume using Ceph as a backend for Cinder and Glance. My setup pretty much follows the Ceph guides for this verbatim. I've been using this setup for a while now, and it's all been really…

[ceph-users] Release Cadence

2013-03-12 Thread Patrick McGarry
Hey all, Just wanted to link to a few words on the new Ceph release cadence. http://ceph.com/community/ceph-settles-in-to-aggressive-release-cadence/ Feel free to hit us with any questions. Thanks. Best Regards, -- Patrick McGarry Director, Community Inktank http://ceph.com || http://ink…

Re: [ceph-users] Live migration of VM using librbd and OpenStack

2013-03-12 Thread tra26
Travis, The root disk (/var/lib/nova/instances) must be on shared storage to run the live migrate. You should be able to run block migration (which is a different form of the live-migration) that does not require shared storage. Take a look at: http://www.sebastien-han.fr/blog/2012/07/12/op…
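A sketch of the two nova client invocations being contrasted here (instance and host names are placeholders, for a Folsom-era client):

    # live migration; expects a shared /var/lib/nova/instances
    nova live-migration <instance-uuid> <target-host>

    # block migration; copies the disk, no shared instance directory needed
    nova live-migration --block-migrate <instance-uuid> <target-host>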

[ceph-users] btrfs for prod in with 3.8 kernel?

2013-03-12 Thread Jeppesen, Nelson
Ubuntu 13.04 will be using a 3.8 kernel. Do you guys think that btrfs is production ready for Ceph in Linux 3.8? Or would it be safer to use Ubuntu 12.04 and upgrade the kernel to 3.8? Or even stick with XFS? Thanks. Nelson Jeppesen Disney Technology Solutions and Services Phone 206-588-5…

Re: [ceph-users] Live migration of VM using librbd and OpenStack

2013-03-12 Thread Travis Rhoden
Thanks for the response, Trevor. > The root disk (/var/lib/nova/instances) must be on shared storage to run > the live migrate. > I would argue that it is on shared storage. It is an RBD stored in Ceph, and that's available at each host via librbd. You should be able to run block migration (whi…

Re: [ceph-users] Live migration of VM using librbd and OpenStack

2013-03-12 Thread Josh Durgin
On 03/12/2013 01:28 PM, Travis Rhoden wrote: Thanks for the response, Trevor. The root disk (/var/lib/nova/instances) must be on shared storage to run the live migrate. I would argue that it is on shared storage. It is an RBD stored in Ceph, and that's available at each host via librbd. A…

[ceph-users] Ceph Read Benchmark

2013-03-12 Thread Scott Kinder
When I try and do a rados bench, I see the following error: # rados bench -p data 300 seq Must write data before running a read benchmark! error during benchmark: -2 error 2: (2) No such file or directory There's been objects written to the data pool. What's required to get the read bench test to…

Re: [ceph-users] Live migration of VM using librbd and OpenStack

2013-03-12 Thread Travis Rhoden
Hi Josh, Thanks for the info. So if I want to do live migration with VMs that were launched with boot-from-volume, I'll need to use virsh to do the migration, rather than Nova. Okay, that should be doable. As an aside, I will probably want to look at the OpenStack DB and figure out how to tell…
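A sketch of the virsh-level live migration mentioned above (the domain name and target host are placeholders):

    virsh migrate --live instance-000004f3 qemu+ssh://target-host/system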

Re: [ceph-users] Ceph Read Benchmark

2013-03-12 Thread David Zafman
Try doing something like this first: rados bench -p data 300 write --no-cleanup David Zafman Senior Developer http://www.inktank.com On Mar 12, 2013, at 1:46 PM, Scott Kinder wrote: > When I try and do a rados bench, I see the following error: > > # rados bench -p data 300 seq > Mus…
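Putting the thread together, the working sequence is to write benchmark objects without cleaning them up, then run the read benchmark against them:

    rados bench -p data 300 write --no-cleanup
    rados bench -p data 300 seq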

Re: [ceph-users] Ceph Read Benchmark

2013-03-12 Thread Mark Nelson
On 03/12/2013 03:46 PM, Scott Kinder wrote: When I try and do a rados bench, I see the following error: # rados bench -p data 300 seq Must write data before running a read benchmark! error during benchmark: -2 error 2: (2) No such file or directory There's been objects written to the data pool.…

Re: [ceph-users] Ceph Read Benchmark

2013-03-12 Thread Scott Kinder
That did the trick, thanks David. On Tue, Mar 12, 2013 at 2:48 PM, David Zafman wrote: > > Try first doing something like this first. > > rados bench -p data 300 write --no-cleanup > > David Zafman > Senior Developer > http://www.inktank.com > > > > > On Mar 12, 2013, at 1:46 PM, Scott Kinder…

Re: [ceph-users] Live migration of VM using librbd and OpenStack

2013-03-12 Thread Josh Durgin
On 03/12/2013 01:48 PM, Travis Rhoden wrote: Hi Josh, Thanks for the info. So if I want to do live migration with VMs that were launched with boot-from-volume, I'll need to use virsh to do the migration, rather than Nova. Okay, that should be doable. As an aside, I will probably want to look…

Re: [ceph-users] Ceph Read Benchmark

2013-03-12 Thread Scott Kinder
A follow-up question. How do I cleanup the written data, after I finish up with my benchmarks? I notice there is a cleanup object command, though I'm unclear on how to use it. On Tue, Mar 12, 2013 at 2:59 PM, Scott Kinder wrote: > That did the trick, thanks David. > > > On Tue, Mar 12, 2013 at…

Re: [ceph-users] Ceph Read Benchmark

2013-03-12 Thread Sébastien Han
It's pretty straightforward, but you can 'simply' delete the pool :) (since it should be a test pool ;)). -- Regards, Sébastien Han. On Tue, Mar 12, 2013 at 10:11 PM, Scott Kinder wrote: > A follow-up question. How do I cleanup the written data, after I finish up > with my benchmarks? I notice t…

Re: [ceph-users] Ceph Read Benchmark

2013-03-12 Thread David Zafman
I would either make my own pool and delete it when done:
rados mkpool testpool
RUN BENCHMARKS
rados rmpool testpool testpool --yes-i-really-really-mean-it
or use the cleanup command, but I ended up having to also delete benchmark_last_metadata:
RUN BENCHMARKS
rados -p data ls … # Note the names…
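A consolidated sketch of the dedicated-pool approach David describes (pool name and durations are illustrative):

    rados mkpool testpool
    rados bench -p testpool 300 write --no-cleanup
    rados bench -p testpool 300 seq
    rados rmpool testpool testpool --yes-i-really-really-mean-it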

Re: [ceph-users] Gateway quick start

2013-03-12 Thread John Wilkins
Waed, Ceph will rebalance automatically. Once you have completed the 5-minute quick start guide, you can store an object with the rados command line and see where Ceph's CRUSH algorithm placed your data. Follow the instructions in this section: http://ceph.com/docs/master/rados/operations/monito…
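The steps John outlines map to something like the following (object and file names are illustrative; ceph osd map reports which placement group and OSDs CRUSH chose for the object):

    rados put test-object-1 testfile.txt --pool=data
    ceph osd map data test-object-1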

Re: [ceph-users] Live migration of VM using librbd and OpenStack

2013-03-12 Thread Travis Rhoden
On Tue, Mar 12, 2013 at 5:06 PM, Josh Durgin wrote: > On 03/12/2013 01:48 PM, Travis Rhoden wrote: > >> Hi Josh, >> >> Thanks for the info. So if I want to do live migration with VMs that were >> launched with boot-from-volume, I'll need to use virsh to do the >> migration, >> rather than Nova.…