Re: [ceph-users] Mounting CephFS - mount error 5 = Input/output error

2013-05-11 Thread Wyatt Gorman
So what I wound up doing was disabling authentication entirely. It's a test environment, so it doesn't really matter. It's working great now! I'm doing throughput testing, getting about 5.5 MB/s. Thanks everyone. On Thu, May 9, 2013 at 3:48 AM, Matt Chipman wrote: > The auth key needs to be c
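For anyone landing on this thread: a minimal sketch of the two mount variants discussed here. The monitor address, mount point, user name, and secret are all illustrative, not taken from the thread:

```shell
# With cephx enabled: pass the client name and its key
# (obtain the key with `ceph auth get-key client.admin`)
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secret=<key>

# With authentication disabled cluster-wide (auth supported = none
# in ceph.conf), no credentials are needed:
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs
```

Note that "mount error 5 = Input/output error" can also mean no MDS is up and active, so it is worth checking `ceph -s` before touching auth at all.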

Re: [ceph-users] Trouble with bobtail->cuttlefish upgrade

2013-05-11 Thread Smart Weblications GmbH - Florian Wiessner
Hi, On 11.05.2013 09:40, Pawel Stefanski wrote: > hello! > > I'm trying to upgrade my test cluster to cuttlefish, but I'm stuck with the mon > upgrade. > > Bobtail version - 0.56.6 (previous rolling upgrades) > cuttlefish version - 0.61.1 > > While starting the upgraded mon daemon it's faulting on

[ceph-users] monitor upgrade from 0.56.6 to 0.61.1 on squeeze failed!

2013-05-11 Thread Smart Weblications GmbH - Florian Wiessner
Hi, I upgraded from 0.56.6 to 0.61.1 and tried to restart one monitor: /etc/init.d/ceph start mon === mon.4 === Starting Ceph mon.4 on node05... [16366]: (33) Numerical argument out of domain failed: 'ulimit -n 8192; /usr/bin/ceph-mon -i 4 --pid-file /var/run/ceph/mon.4.pid -c /etc/ceph/ceph.con

Re: [ceph-users] Hardware recommendation / calculation for large cluster

2013-05-11 Thread Dimitri Maziuk
On 05/11/2013 08:42 AM, Tim Mohlmann wrote: > Each OSD server uses 4U and can take 36x3.5" drives. So in 36U I can put > 36/4=9 OSD servers, containing 9*36=324 HDDs. SuperMicro has a new 4U chassis w/ 72x3.5" drives (2/canister). You can double the number of drives. (With faster drives you may
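The rack arithmetic in this thread is easy to extend; here is a small worked version. The chassis numbers come from the posts; the per-drive size is an assumption added for illustration:

```python
# Rack-capacity arithmetic from the thread: 4U chassis, 36 x 3.5" drives each.
rack_units = 36            # usable rack space, from Tim's post
chassis_units = 4          # 4U per OSD server
drives_per_chassis = 36    # 36 x 3.5" bays
drive_tb = 4               # assumed drive size (not stated in the thread)

servers = rack_units // chassis_units   # 36U / 4U = 9 servers
drives = servers * drives_per_chassis   # 9 * 36 = 324 HDDs
raw_tb = drives * drive_tb              # raw capacity before replication

# With the 72-drive 4U chassis Dimitri mentions (2 drives per canister),
# the drive count per rack doubles:
dense_drives = servers * 72

print(servers, drives, raw_tb, dense_drives)
```

Divide `raw_tb` by your replication factor (typically 2 or 3) to get usable capacity.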

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-11 Thread Mike Lowe
Hmm, try searching the libvirt git for josh as an author; you should see the commit from Josh Durgin about whitelisting rbd migration. On May 11, 2013, at 10:53 AM, w sun wrote: > The reference Mike provided is not valid to me. Anyone else has the same > problem? --weiguo > > From: j.micha

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-11 Thread w sun
The reference Mike provided is not valid for me. Does anyone else have the same problem? --weiguo From: j.michael.l...@gmail.com Date: Sat, 11 May 2013 08:45:41 -0400 To: pi...@pioto.org CC: ceph-users@lists.ceph.com Subject: Re: [ceph-users] RBD vs RADOS benchmark performance I believe that this is f

Re: [ceph-users] Hardware recommendation / calculation for large cluster

2013-05-11 Thread Leen Besselink
Hi, Someone is going to correct me if I'm wrong, but I think you misread something. The Mon daemon doesn't need that much RAM: the 'RAM: 1 GB per daemon' is per Mon daemon, not per OSD daemon. The same goes for disk space. You should read this page again: http://ceph.com/docs/master/install/hardwa

[ceph-users] Hardware recommendation / calculation for large cluster

2013-05-11 Thread Tim Mohlmann
Hi, First of all, I am new to ceph and this mailing list. At the moment I am looking into the possibilities of getting involved in the storage business. I am trying to get an estimate of costs, and after that I will start to determine how to get sufficient income. First I will describe my case,

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-11 Thread Michael Lowe
I believe that this is fixed in the most recent versions of libvirt; sheepdog and rbd were erroneously marked as unsafe. http://libvirt.org/git/?p=libvirt.git;a=commit;h=78290b1641e95304c862062ee0aca95395c5926c Sent from my iPad On May 11, 2013, at 8:36 AM, Mike Kelly wrote: > (Sorry for send
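To check whether your libvirt build carries that fix, you can look it up directly in a clone of the libvirt repository. The commit hash is the one from the message above; the search term is a sketch of what Mike suggests:

```shell
# Inside a libvirt git checkout: find Josh's whitelist commit
git log --author=josh --oneline

# Or inspect the specific commit referenced in the thread:
git show 78290b1641e95304c862062ee0aca95395c5926c
```

If `git merge-base --is-ancestor <hash> <your-release-tag>` succeeds, the release you are running includes it.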

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-11 Thread Mike Kelly
(Sorry for sending this twice... Forgot to reply to the list) Is rbd caching safe to enable when you may need to do a live migration of the guest later on? It was my understanding that it wasn't, and that libvirt prevented you from doing the migration if it knew about the caching setting. If it i
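For context, the caching setting in question lives in ceph.conf on the client (hypervisor) side. A minimal fragment, as a sketch rather than a recommendation:

```
# ceph.conf on the qemu/libvirt host
[client]
    rbd cache = true
```

Libvirt's safety check refuses live migration when a disk's cache mode is anything other than none; newer virsh versions let you override the check explicitly with `virsh migrate --unsafe`, though whether that is actually safe for a given cache configuration is exactly the question being debated in this thread.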

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-11 Thread Greg
On 11/05/2013 13:24, Greg wrote: On 11/05/2013 02:52, Mark Nelson wrote: On 05/10/2013 07:20 PM, Greg wrote: On 11/05/2013 00:56, Mark Nelson wrote: On 05/10/2013 12:16 PM, Greg wrote: Hello folks, I'm in the process of testing CEPH and RBD, I have set up a small cluster of hosts r

[ceph-users] Maximums for Ceph architectures

2013-05-11 Thread Igor Laskovy
Hi all, Does anybody know where to learn about the maximums for Ceph architectures? For example, I'm trying to find out the maximum size of an rbd image and of a cephfs file. Additionally, I want to know the maximum size of a RADOS Gateway object (meaning a file for uploading). -- Igor Laskovy facebook.com/igor

Re: [ceph-users] RBD vs RADOS benchmark performance

2013-05-11 Thread Greg
On 11/05/2013 02:52, Mark Nelson wrote: On 05/10/2013 07:20 PM, Greg wrote: On 11/05/2013 00:56, Mark Nelson wrote: On 05/10/2013 12:16 PM, Greg wrote: Hello folks, I'm in the process of testing CEPH and RBD, I have set up a small cluster of hosts each running a MON and an OSD with bot

[ceph-users] RBD snapshot - time and consistent

2013-05-11 Thread Timofey Koolin
Does the snapshot time depend on the image size? Does a snapshot create a consistent state of the image as of the moment the snapshot starts? For example, if I have a file system and don't stop IO before starting the snapshot - is that worse than a loss of power during IO? -- Blog: www.rekby.ru
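The usual answer to this question is that an RBD snapshot is crash-consistent: it captures the image as if power had been cut at that instant, so a journaling filesystem should recover from it the same way it recovers from a power loss. For application-level consistency you quiesce the filesystem first. A sketch, with made-up pool/image names and guest mount point:

```shell
# Crash-consistent snapshot (equivalent to a power cut at that instant):
rbd snap create rbd/myimage@snap1

# Cleaner: freeze the guest filesystem around the snapshot
fsfreeze --freeze /mnt/data        # run inside the guest
rbd snap create rbd/myimage@snap1  # run on a ceph client
fsfreeze --unfreeze /mnt/data      # run inside the guest
```

Snapshot creation itself is a metadata operation, so it should not scale with image size the way a full copy would.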

[ceph-users] Trouble with bobtail->cuttlefish upgrade

2013-05-11 Thread Pawel Stefanski
hello! I'm trying to upgrade my test cluster to cuttlefish, but I'm stuck with the mon upgrade. Bobtail version - 0.56.6 (previous rolling upgrades) cuttlefish version - 0.61.1 While starting the upgraded mon daemon it's faulting on store conversion. [25622]: (33) Numerical argument out of domain in