[ceph-users] ceph-mon segfaulted

2013-06-20 Thread Artem Silenkov
Good day! Surprisingly, we encountered a ceph-mon core dump today. It was not peak load time and the system was technically in good state. Configuration Debian GNU/Linux 6.0 x64 Linux h01 2.6.32-19-pve #1 SMP Wed May 15 07:32:52 CEST 2013 x86_64 GNU/Linux ii ceph 0.61.3-
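If a core file was written, a backtrace is usually the most useful thing to attach to a report; a minimal sketch, assuming the matching ceph debug symbols are installed and /path/to/core is where the dump landed:

gdb /usr/bin/ceph-mon /path/to/core
(gdb) thread apply all bt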

Re: [ceph-users] Desktop or Enterprise SATA Drives?

2013-06-20 Thread James Harper
> Hi all > > I'm building a small ceph cluster with 3 nodes (my first ceph cluster). > Each Node with one System Disk, one Journal SSD Disk and one SATA OSD > Disk. > > My question is now should I use Desktop or Enterprise SATA Drives? > Enterprise Drives have a higher MTBF but the Firmware is ac

[ceph-users] How to change the journal size at run time?

2013-06-20 Thread Da Chun
Hi List, The default journal size is 1G, which I think is too small for my Gb network. I want to extend all the journal partitions to 2 or 4G. How can I do that? The osds were all created by commands like "ceph-deploy osd create ceph-node0:/dev/sdb". The journal partition is on the same disk tog
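For reference, the usual way to grow journals is a flush-and-recreate cycle, one OSD at a time; a rough sketch, with osd.0 standing in for each OSD in turn and the actual repartitioning step omitted because it depends on how ceph-deploy laid out the disk:

# in ceph.conf, "osd journal size" is in MB, e.g. 2048 or 4096
service ceph stop osd.0
ceph-osd -i 0 --flush-journal    # drain anything still sitting in the old journal
# ... enlarge or re-create the journal partition/file here ...
ceph-osd -i 0 --mkjournal        # initialize the new, larger journal
service ceph start osd.0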

Re: [ceph-users] radosgw placement groups

2013-06-20 Thread Mandell Degerness
It is possible to create all of the pools manually before starting radosgw. That allows control of the pg_num used. The pools are: .rgw, .rgw.control, .rgw.gc, .log, .intent-log, .usage, .users, .users.email, .users.swift, .users.uid On Wed, Jun 19, 2013 at 6:13 PM, Derek Yarnell wrote: > Hi,
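Pre-creating them is just a series of pool creates; a sketch, where 64 is only a placeholder pg_num to be sized for your cluster:

for p in .rgw .rgw.control .rgw.gc .log .intent-log .usage .users .users.email .users.swift .users.uid; do
    ceph osd pool create $p 64
done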

Re: [ceph-users] placing SSDs and SATAs in same hosts

2013-06-20 Thread Ugis
Thanks! Rethinking that first example, I think it is doable even as shown there. Nothing prevents mapping osds to host-like entities, whatever they are called. 2013/6/20 Gregory Farnum : > On Thursday, June 20, 2013, Edward Huyer wrote: >> >> > Hi, >> > >> > I am thinking how to make ceph with 2 p
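Concretely, the host-like-entities idea looks something like this in a decompiled CRUSH map, with one physical machine split into an SSD and a SATA bucket under separate roots (all names, ids and weights below are made-up placeholders):

host node1-ssd {
        id -10
        alg straw
        hash 0
        item osd.0 weight 1.000
}
host node1-sata {
        id -11
        alg straw
        hash 0
        item osd.1 weight 1.000
}
root ssd {
        id -20
        alg straw
        hash 0
        item node1-ssd weight 1.000
}
root sata {
        id -21
        alg straw
        hash 0
        item node1-sata weight 1.000
}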

[ceph-users] Openstack Multi-rbd storage backend

2013-06-20 Thread w sun
Has anyone seen the same issue as below? We are trying to test the multi-backend feature with two RBD pools on the Grizzly release. At this point, it seems that rbd.py does not take separate cephx users for the two RBD pools for authentication, as it defaults to the single ID defined in /etc/init/cind
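For context, a Grizzly-style multi-backend cinder.conf would look roughly like the following (pool, user and backend names are placeholders); whether rbd.py actually honours the per-backend rbd_user/rbd_secret_uuid rather than the single ID from the init script is exactly the question here:

[DEFAULT]
enabled_backends = rbd-sata,rbd-ssd

[rbd-sata]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-sata
rbd_pool = volumes-sata
rbd_user = volumes-sata
rbd_secret_uuid = <uuid of the libvirt secret for this user>

[rbd-ssd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-ssd
rbd_pool = volumes-ssd
rbd_user = volumes-ssd
rbd_secret_uuid = <uuid of the libvirt secret for this user>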

[ceph-users] Exclusive mount

2013-06-20 Thread Timofey Koolin
Is there a way to map an rbd exclusively? For example: I map it on host A, then I try to map it on host B. I want the map on host B to fail while it is mapped on host A. I have read about the lock command; I want an atomic lock-and-mount of the rbd for one host, with automatic unlock when host A fails. -- Blog: www.rekby.ru
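There is no built-in exclusive map in cuttlefish-era rbd; the closest thing is the advisory lock commands, which a wrapper script has to check before mapping (pool/image/lock-id names below are placeholders, and nothing here auto-releases on host failure):

rbd lock add rbd/myimage hostA
rbd lock list rbd/myimage
rbd map rbd/myimage                            # only after confirming we hold the lock
rbd lock remove rbd/myimage hostA client.4123  # locker id as printed by 'rbd lock list'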

Re: [ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Bo
Thank you, Mike, Sage and Greg. Completely different than everything I had heard or read. Clears it all up. :) Gracias, -bo On Thu, Jun 20, 2013 at 11:15 AM, Gregory Farnum wrote: > On Thursday, June 20, 2013, Bo wrote: > > > > Howdy! > > > > Loving working with ceph; learning a lot. :) > > >

Re: [ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Gregory Farnum
On Thursday, June 20, 2013, Bo wrote: > > Howdy! > > Loving working with ceph; learning a lot. :) > > I am curious about the quorum process because I seem to get conflicting > information from "experts". Those that I report to need a clear answer from > me which I am currently unable to give. > >

Re: [ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Sage Weil
On Thu, 20 Jun 2013, Bo wrote: > Howdy! > Loving working with ceph; learning a lot. :) > > I am curious about the quorum process because I seem to get conflicting > information from "experts". Those that I report to need a clear answer from > me which I am currently unable to give. > > Ceph needs

Re: [ceph-users] v0.61.4 released

2013-06-20 Thread Guido Winkelmann
I'm still using Fedora 17 on some machines that use Ceph, and it seems that the official Ceph RPM repository for Fedora 17 (http://eu.ceph.com/rpm-cuttlefish/fc17/x86_64/) hasn't seen any new releases since 0.61.2. Are you discontinuing RPMs for Fedora 17? Guido ___

Re: [ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Mike Lowe
Quorum means you need at least 51% participating, be it people following parliamentary procedures or mons in ceph. With one dead and two up you have 66% participating, or enough to have a quorum. An even number doesn't get you any additional safety but does give you one more thing that can fail v
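As a quick sanity check on the arithmetic (majority = floor(n/2) + 1):

monitors   needed for quorum   failures tolerated
   3               2                   1
   4               3                   1
   5               3                   2

which is why going from 3 to 4 mons adds a failure source without adding any failure tolerance.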

Re: [ceph-users] Changelog about 0.61.4...

2013-06-20 Thread Fabio - NS3 srl
On 20/06/13 15:40, Joao Eduardo Luis wrote: On 06/20/2013 01:23 PM, Fabio - NS3 srl wrote: Hi, is there a changelog for 0.61.4? There will be as soon as 0.61.4 is officially released. An announcement to ceph-devel, ceph-users and the blog at ceph.com usually accompanies the release.

[ceph-users] MON quorum a single point of failure?

2013-06-20 Thread Bo
Howdy! Loving working with ceph; learning a lot. :) I am curious about the quorum process because I seem to get conflicting information from "experts". Those that I report to need a clear answer from me which I am currently unable to give. Ceph needs an odd number of monitors in any given cluste

Re: [ceph-users] placing SSDs and SATAs in same hosts

2013-06-20 Thread Gregory Farnum
On Thursday, June 20, 2013, Edward Huyer wrote: > > Hi, > > > > I am thinking how to make ceph with 2 pools - fast and slow. > > Plan is to use SSDs and SATAs(or SAS) in the same hosts and define pools > that > > use fast and slow disks accordingly. Later it would be easy to grow > either pool > >

Re: [ceph-users] Relocation of a node

2013-06-20 Thread Gregory Farnum
On Thursday, June 20, 2013, Kurt Bauer wrote: > Hi, > > we run a 3 node cluster, every node runs a mon and 4 osds. 2 defined > pools, one with replication level 2, the second with replication level 3. > We now want to relocate one node from one datacenter to another, which > means a downtime of a

Re: [ceph-users] Where to put the journal partitions?

2013-06-20 Thread Sage Weil
On Thu, 20 Jun 2013, Da Chun wrote: > Hi List, > According to the doc, the journal partition is not recommended to be on the > same disk as the osd. Should I put it in a separate one? But the total > journal size is not big, 2 or 4g is enough for a gigabit network. It's > a tremendous waste to put i
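For what it's worth, ceph-deploy can already place the journal on a different device than the data disk; a sketch, with /dev/sdc1 standing in for a small partition on a shared SSD:

ceph-deploy osd create ceph-node0:/dev/sdb:/dev/sdc1

so several OSDs can share one SSD by giving each its own small journal partition.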

[ceph-users] v0.61.4 released

2013-06-20 Thread Sage Weil
We have resolved a number of issues that v0.61.x Cuttlefish users have been hitting and have prepared another point release, v0.61.4. This release fixes a rare data corruption during power cycle when using the XFS file system, a few monitor sync problems, several issues with ceph-disk and ceph-

[ceph-users] Desktop or Enterprise SATA Drives?

2013-06-20 Thread Stefan Schneebeli
Hi all I'm building a small ceph cluster with 3 nodes (my first ceph cluster). Each node with one System Disk, one Journal SSD Disk and one SATA OSD Disk. My question is now: should I use Desktop or Enterprise SATA Drives? Enterprise Drives have a higher MTBF but the Firmware is actually built for

[ceph-users] RGW snapshots

2013-06-20 Thread Mike Bryant
Hi, is there any way to create snapshots of individual buckets, that can be restored from piecemeal? i.e. if someone deletes objects by mistake? Cheers Mike -- Mike Bryant | Systems Administrator | Ocado Technology mike.bry...@ocado.com | 01707 382148 | www.ocadotechnology.com -- Notice: This

[ceph-users] Relocation of a node

2013-06-20 Thread Kurt Bauer
Hi, we run a 3 node cluster, every node runs a mon and 4 osds. 2 defined pools, one with replication level 2, the second with replication level 3. We now want to relocate one node from one datacenter to another, which means a downtime of about 4 hours for that specific node, which shouldn't hurt
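One knob worth mentioning for a planned outage like this: marking the cluster noout for the duration stops it from re-replicating the "missing" OSDs, at the cost of running degraded until the node is back:

ceph osd set noout
# ... shut the node down, relocate it, bring it back up ...
ceph osd unset noout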

Re: [ceph-users] placing SSDs and SATAs in same hosts

2013-06-20 Thread Edward Huyer
> Hi, > > I am thinking how to make ceph with 2 pools - fast and slow. > Plan is to use SSDs and SATAs(or SAS) in the same hosts and define pools that > use fast and slow disks accordingly. Later it would be easy to grow either > pool > by need. > > I found example for CRUSH map that does simila

[ceph-users] placing SSDs and SATAs in same hosts

2013-06-20 Thread Ugis
Hi, I am thinking about how to build ceph with 2 pools - fast and slow. The plan is to use SSDs and SATAs (or SAS) in the same hosts and define pools that use fast and slow disks accordingly. Later it would be easy to grow either pool as needed. I found an example CRUSH map that does a similar thing by defining
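Tying the pools to the disks then needs a CRUSH rule per speed class and the pool pointed at it; a sketch, assuming a root bucket named ssd already exists in the map, with 128 pgs and ruleset 3 purely as placeholders:

rule ssd {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}

ceph osd getcrushmap -o map.bin
crushtool -d map.bin -o map.txt      # add the ssd/sata buckets and rules here
crushtool -c map.txt -o map.new
ceph osd setcrushmap -i map.new
ceph osd pool create fast 128
ceph osd pool set fast crush_ruleset 3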

Re: [ceph-users] Changelog about 0.61.4...

2013-06-20 Thread Joao Eduardo Luis
On 06/20/2013 01:23 PM, Fabio - NS3 srl wrote: Hi, is there a changelog for 0.61.4? There will be as soon as 0.61.4 is officially released. An announcement to ceph-devel, ceph-users and the blog at ceph.com usually accompanies the release. -Joao -- Joao Eduardo Luis Software Engineer |

[ceph-users] Changelog about 0.61.4...

2013-06-20 Thread Fabio - NS3 srl
Hi, is there a changelog for 0.61.4? Thanks FabioFVZ ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Problem with multiple hosts RBD + Cinder

2013-06-20 Thread Sebastien Han
Hi, No, this must always be the same UUID. You can only specify one in cinder.conf. Btw, nova does the attachment; this is why it needs the uuid and secret. The first secret import generates a UUID, then always re-use the same one for all your compute nodes; do something like: 9e4c7795-0681-cd4f-cf36-8
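In practice that means defining the secret with an explicit uuid on every compute node instead of letting libvirt generate one; a sketch, assuming the client.volumes key has been saved to client.volumes.key and with the uuid below as a pure placeholder:

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>00000000-0000-0000-0000-000000000000</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret 00000000-0000-0000-0000-000000000000 --base64 $(cat client.volumes.key)

and the same uuid then goes into rbd_secret_uuid in cinder.conf.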

Re: [ceph-users] Help needed porting Ceph to RSockets

2013-06-20 Thread Joao Eduardo Luis
On 06/20/2013 10:09 AM, Matthew Anderson wrote: Hi All, I've had a few conversations on IRC about getting RDMA support into Ceph and thought I would give it a quick attempt to hopefully spur some interest. What I would like to accomplish is an RSockets only implementation so I'm able to use Ceph

[ceph-users] Problem with multiple hosts RBD + Cinder

2013-06-20 Thread Igor Laskovy
Hello list! I am trying to deploy Ceph RBD + OpenStack Cinder. Basically, my question relates to this section in the documentation: cat > secret.xml < client.volumes secret EOF sudo virsh secret-define --file secret.xml sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat cli

[ceph-users] Help needed porting Ceph to RSockets

2013-06-20 Thread Matthew Anderson
Hi All, I've had a few conversations on IRC about getting RDMA support into Ceph and thought I would give it a quick attempt to hopefully spur some interest. What I would like to accomplish is an RSockets only implementation so I'm able to use Ceph, RBD and QEMU at full speed over an Infiniband fa
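As a stop-gap while a native port is being discussed, librdmacm ships a preload shim that maps ordinary socket calls onto rsockets; the library path below is distribution-dependent, and whether the ceph daemons actually behave under it is an open question:

LD_PRELOAD=/usr/lib/rsocket/librspreload.so ceph-osd -i 0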