[ceph-users] No graceful handling of a maxed out cluster network with noup / nodown set.

2013-11-20 Thread Robert van Leeuwen
Hi, I'm playing with our new Ceph cluster and it seems that Ceph does not gracefully handle a maxed-out cluster network. I had some "flapping" nodes once every few minutes when pushing a lot of traffic to them, so I decided to set the noup and nodown flags as described in the docs. http://ceph.c
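
For readers finding this in the archive: the flags referred to above are set and cleared cluster-wide with the ceph CLI. A minimal sketch, run from any admin host:

    # stop the monitors from marking booting OSDs "up"
    ceph osd set noup
    # stop the monitors from marking unresponsive OSDs "down"
    ceph osd set nodown

    # clear both flags again once the network problem is fixed
    ceph osd unset noup
    ceph osd unset nodown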

[ceph-users] How to replace a failed OSD

2013-11-20 Thread Robert van Leeuwen
Hi, What is the easiest way to replace a failed disk / OSD? It looks like the documentation here is not really compatible with ceph-deploy: http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ It is talking about adding stuff to the ceph.conf while ceph-deploy works in a different way. (

Re: [ceph-users] How to replace a failed OSD

2013-11-20 Thread Daniel Schwager
Hi Robert, > What is the easiest way to replace a failed disk / OSD. > It looks like the documentation here is not really compatible with > ceph_deploy: > http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ I found the following thread useful: http://www.spinics.net/lists/ceph-u

Re: [ceph-users] How to replace a failed OSD

2013-11-20 Thread Mark Kirkwood
On 20/11/13 22:27, Robert van Leeuwen wrote: Hi, What is the easiest way to replace a failed disk / OSD. It looks like the documentation here is not really compatible with ceph_deploy: http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ It is talking about adding stuff to the ceph.conf

Re: [ceph-users] Mapping rbd's on boot

2013-11-20 Thread Laurent Barbe
Hello, Yes, with Ubuntu, the init script needs to be enabled with update-rc.d. If you still have this problem, could you try to add the "_netdev" option in your fstab? e.g.: UUID=2f6aca33-c957-452c-8534-7234dd1612c9 /mnt/testrbd xfs defaults,_netdev 0 0 Laurent Le 15/11/2013 0
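
For completeness, a rough sketch of the boot-time setup being discussed, assuming an image named testrbd in the default rbd pool (names are placeholders, not taken from the thread):

    # /etc/ceph/rbdmap -- read by the rbdmap init script at boot
    rbd/testrbd  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

    # enable the init script on Ubuntu
    update-rc.d rbdmap defaults

    # /etc/fstab -- _netdev defers the mount until the network is up
    /dev/rbd/rbd/testrbd  /mnt/testrbd  xfs  defaults,_netdev  0 0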

Re: [ceph-users] install three mon-nodes, two successed, one failed

2013-11-20 Thread Rzk
Hi, maybe you can try this: http://cephnotes.ksperis.com/blog/2013/08/29/mon-failed-to-start -- see whether your third monitor exists in Ceph. Root# ceph mon dump dumped monmap epoch 12 epoch 12 fsid b3ecd9c5-182b-4978-9272-d4b278454500 last_changed 2013-10-23 17:57:44.185915 created 2013-05-16 16:4
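
A hedged sketch of the checks implied above (mon3 is a placeholder hostname, not from the thread):

    # confirm whether the third monitor appears in the monmap
    ceph mon dump

    # if it is missing, deploy it from the admin node
    ceph-deploy mon create mon3

    # and verify that it joins the quorum
    ceph quorum_status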

Re: [ceph-users] How to replace a failed OSD

2013-11-20 Thread Alexis GÜNST HORN
Hello, It would be great to have a command like: ceph-deploy out osd.xx Physically change the drive, then ceph-deploy replace osd.xx What do you think? Best Regards - Cordialement Alexis 2013/11/20 Mark Kirkwood : > On 20/11/13 22:27, Robert van Leeuwen wrote: >> >> Hi, >> >> What is th

Re: [ceph-users] How to replace a failed OSD

2013-11-20 Thread Loic Dachary
Hi, Let's say disk /dev/sdb failed on node nodeA. I would hot-remove it, plug in a new one, and ceph-deploy osd create nodeA:/dev/sdb There is more context about how this is actually managed by Ceph and the operating system in http://dachary.org/?p=2428 Fully automated disks life cycle in a Ceph
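
Spelled out, the cycle described here might look roughly like this (osd.X, nodeA and /dev/sdb are placeholders, a sketch rather than a tested recipe):

    # remove the failed OSD from the CRUSH map and the cluster
    ceph osd out X
    ceph osd crush remove osd.X
    ceph auth del osd.X
    ceph osd rm X

    # after hot-swapping the drive, recreate the OSD
    ceph-deploy osd create nodeA:/dev/sdb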

[ceph-users] DFS Job Position

2013-11-20 Thread Andy Edmonds
Apologies for interrupting the normal business... Hi all, The ICCLab [1] has another new position open that perhaps you or someone you know might be interested in. Briefly, the position is an Applied Researcher in the area of Cloud Computing (more IaaS than PaaS) and would need particular skills

Re: [ceph-users] Size of RBD images

2013-11-20 Thread Bernhard Glomm
That might be; the manpage of ceph version 0.72.1 tells me it isn't, though. Anyhow, still running kernel 3.8.xx Bernhard On 19.11.2013 20:10:04, Wolfgang Hennerbichler wrote: > On Nov 19, 2013, at 3:47 PM, Bernhard Glomm <> bernhard.gl...@ecologic.eu> > > wrote: > > > Hi Nicolas > > just fyi > >

Re: [ceph-users] Size of RBD images

2013-11-20 Thread nicolasc
Thank you Bernhard and Wogri. My old kernel version also explains the format issue. Once again, sorry to have mixed that in the problem. Back to my original inquiries, I hope someone can help me understand why: * it is possible to create an RBD image larger than the total capacity of the cluste
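
(Context for the first question: RBD images are thin provisioned, so the size given at creation is only an upper bound and no space is consumed until data is written. A small illustration, with an invented image name and the size given in MB:)

    # creating a 10 TB image succeeds even on a much smaller cluster,
    # because objects are only allocated when they are written
    rbd create bigimage --pool rbd --size 10485760

    # space accounting happens at the pool level
    rados df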

Re: [ceph-users] ceph-deploy disk zap fails but succeeds on retry

2013-11-20 Thread Alfredo Deza
On Mon, Nov 18, 2013 at 1:12 PM, Gruher, Joseph R wrote: > >>-Original Message- >>From: Alfredo Deza [mailto:alfredo.d...@inktank.com] >>Sent: Monday, November 18, 2013 6:34 AM >>To: Gruher, Joseph R >>Cc: ceph-users@lists.ceph.com >>Subject: Re: [ceph-users] ceph-deploy disk zap fails but
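
For reference, the command under discussion takes a node:device pair; a minimal sketch with placeholder names:

    # destroy the partition table and labels on the target disk
    ceph-deploy disk zap cephnode1:/dev/sdb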

Re: [ceph-users] Size of RBD images

2013-11-20 Thread Josh Durgin
On 11/20/2013 06:53 AM, nicolasc wrote: Thank you Bernhard and Wogri. My old kernel version also explains the format issue. Once again, sorry to have mixed that in the problem. Back to my original inquiries, I hope someone can help me understand why: * it is possible to create an RBD image large

Re: [ceph-users] ceph-deploy disk zap fails but succeeds on retry

2013-11-20 Thread Gruher, Joseph R
>-Original Message- >From: Alfredo Deza [mailto:alfredo.d...@inktank.com] >Sent: Wednesday, November 20, 2013 7:17 AM >To: Gruher, Joseph R >Cc: ceph-users@lists.ceph.com >Subject: Re: [ceph-users] ceph-deploy disk zap fails but succeeds on retry > >On Mon, Nov 18, 2013 at 1:12 PM, Gruher

Re: [ceph-users] Big or small node ?

2013-11-20 Thread Martin B Nielsen
Hi, I'd almost always go with more, less-beefy nodes rather than bigger ones. You're much more vulnerable if the big one(s) die, and with smaller nodes replication will not impact your cluster as much. I also find it easier to extend a cluster with smaller nodes. At least it feels like you can grow at a smoother rate

Re: [ceph-users] Mapping rbd's on boot

2013-11-20 Thread Peter Matulis
On 11/20/2013 05:33 AM, Laurent Barbe wrote: > Hello, > > Yes, with ubuntu, the init script needs to be enabled with update-rc.d. > If you still have this problem, could you try to add "_netdev" option in > your fstab ? > > e.g. : > UUID=2f6aca33-c957-452c-8534-7234dd1612c9 /mnt/testrbd xfs > de

Re: [ceph-users] alternative approaches to CEPH-FS

2013-11-20 Thread Dimitri Maziuk
On 11/19/2013 08:02 PM, YIP Wai Peng wrote: > Hm, so maybe this nfsceph is not _that_ bad after all! :) Your read clearly > wins, so I'm guessing the DRBD write is the slow one. Which DRBD mode are > you using? Active/passive pair, meta-disk internal, protocol C over a 5"-long crossover cable on

[ceph-users] [s3] delete bucket with many files

2013-11-20 Thread Dominik Mostowiec
Hi, I plan to delete 2 buckets, with 5M and 15M files. Can this be dangerous if I do it via: radosgw-admin --bucket=largebucket1 --purge-objects bucket rm ? -- Regards, Dominik

Re: [ceph-users] [s3] delete bucket with many files

2013-11-20 Thread Yehuda Sadeh
It's not more dangerous than going through the RESTful interface. Yehuda On Wed, Nov 20, 2013 at 12:41 PM, Dominik Mostowiec wrote: > Hi, > I plan to delete 2 buckets, 5M and 15M files. > This can be dangerous if I do it via: > radosgw-admin --bucket=largebucket1 --purge-objects bucket rm > ? >
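
Written out in one line, the command from the original post would be (bucket name taken from the thread):

    # remove the bucket and purge all objects it contains
    radosgw-admin bucket rm --bucket=largebucket1 --purge-objects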

Re: [ceph-users] Intel 520/530 SSD for ceph

2013-11-20 Thread mdw
On Tue, Nov 19, 2013 at 09:02:41AM +0100, Stefan Priebe wrote: ... > >You might be able to vary this behavior by experimenting with sdparm, > >smartctl or other tools, or possibly with different microcode in the drive. > Which values or which settings do you think of? ... Off-hand, I don't know.
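
A hedged sketch of the sort of inspection being suggested (device name is a placeholder; only the tools named in the thread are used):

    # query the drive's write-cache enable (WCE) bit
    sdparm --get=WCE /dev/sdb

    # dump drive identity, firmware revision and SMART attributes
    smartctl -a /dev/sdb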

Re: [ceph-users] radosgw-agent AccessDenied 403

2013-11-20 Thread Mark Kirkwood
On 13/11/13 21:16, lixuehui wrote: Hi list, We have reported before that radosgw-agent kept failing to sync data. We paste the relevant log here to seek any help now. application/json; charset=UTF-8 Wed, 13 Nov 2013 07:24:45 GMT x-amz-copy-source:sss%2Frgwconf /sss/rgwconf 2013-11-13