Hi,
I'm playing with our new Ceph cluster and it seems that Ceph is not gracefully
handling a maxed-out cluster network.
I had some "flapping" nodes once every few minutes when pushing a lot of
traffic to the nodes, so I decided to set the noup and nodown flags as
described in the docs:
http://ceph.c
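For reference, those flags are set and cleared with the osd set/unset
commands, roughly:

  ceph osd set noup      # keep booting OSDs from being marked up
  ceph osd set nodown    # keep OSDs from being marked down by their peers
  # ...and once the network is no longer saturated:
  ceph osd unset noup
  ceph osd unset nodown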
Hi,
What is the easiest way to replace a failed disk / OSD.
It looks like the documentation here is not really compatible with ceph-deploy:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
It is talking about adding stuff to the ceph.conf while ceph-deploy works in a
different way.
(
Hi Robert,
> What is the easiest way to replace a failed disk / OSD.
> It looks like the documentation here is not really compatible with
> ceph-deploy:
> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
I found the following thread useful:
http://www.spinics.net/lists/ceph-u
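For what it's worth, the removal side described in the add-or-rm-osds doc
boils down to roughly this (osd.12 is just an example id, substitute your own):

  ceph osd out osd.12            # mark it out so data rebalances away
  # stop the osd daemon on its node via your init system
  ceph osd crush remove osd.12   # remove it from the CRUSH map
  ceph auth del osd.12           # delete its cephx key
  ceph osd rm osd.12             # remove the OSD entry itself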
On 20/11/13 22:27, Robert van Leeuwen wrote:
Hi,
What is the easiest way to replace a failed disk / OSD.
It looks like the documentation here is not really compatible with ceph-deploy:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
It is talking about adding stuff to the ceph.conf
Hello,
Yes, with Ubuntu, the init script needs to be enabled with update-rc.d.
If you still have this problem, could you try adding the "_netdev" option in
your fstab?
e.g.:
UUID=2f6aca33-c957-452c-8534-7234dd1612c9 /mnt/testrbd xfs defaults,_netdev 0 0
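The init script in question should be rbdmap, assuming it shipped with your
ceph packages, so enabling it would be something like:

  update-rc.d rbdmap defaults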
Laurent
On 15/11/2013 0
Hi,
maybe you can try this,
http://cephnotes.ksperis.com/blog/2013/08/29/mon-failed-to-start
-- see whether your third monitor exists in the monmap:
root# ceph mon dump
dumped monmap epoch 12
epoch 12
fsid b3ecd9c5-182b-4978-9272-d4b278454500
last_changed 2013-10-23 17:57:44.185915
created 2013-05-16 16:4
Hello,
It would be great to have a command like:
ceph-deploy out osd.xx
Physically change the drive, then
ceph-deploy replace osd.xx
What do you think ?
Best Regards - Cordialement
Alexis
2013/11/20 Mark Kirkwood :
> On 20/11/13 22:27, Robert van Leeuwen wrote:
>>
>> Hi,
>>
>> What is th
Hi,
Let's say disk /dev/sdb failed on node nodeA. I would hot-remove it, plug in a
new one, and run:
ceph-deploy osd create nodeA:/dev/sdb
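If the new disk still carries an old partition table, it may need a zap first,
e.g. something like:

  ceph-deploy disk zap nodeA:/dev/sdb

and then the create above.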
There is more context about how this is actually managed by ceph and the
operating system in http://dachary.org/?p=2428 Fully automated disks life
cycle in a Ceph
Apologies for interrupting the normal business...
Hi all,
The ICCLab [1] has another new position open that perhaps you or someone
you know might be interested in. Briefly, the position is an Applied
Researcher in the area of Cloud Computing (more IaaS than PaaS) and would
need particular skills
That might be; the manpage of ceph version 0.72.1 tells me it isn't, though.
Anyhow, still running kernel 3.8.xx.
Bernhard
On 19.11.2013 20:10:04, Wolfgang Hennerbichler wrote:
> On Nov 19, 2013, at 3:47 PM, Bernhard Glomm <bernhard.gl...@ecologic.eu>
> wrote:
>
> > Hi Nicolas
> > just fyi
> >
Thank you Bernhard and Wogri. My old kernel version also explains the
format issue. Once again, sorry to have mixed that into the problem.
Back to my original inquiries, I hope someone can help me understand why:
* it is possible to create an RBD image larger than the total capacity
of the cluster
On Mon, Nov 18, 2013 at 1:12 PM, Gruher, Joseph R
wrote:
>
>> -----Original Message-----
>>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>>Sent: Monday, November 18, 2013 6:34 AM
>>To: Gruher, Joseph R
>>Cc: ceph-users@lists.ceph.com
>>Subject: Re: [ceph-users] ceph-deploy disk zap fails but
On 11/20/2013 06:53 AM, nicolasc wrote:
Thank you Bernhard and Wogri. My old kernel version also explains the
format issue. Once again, sorry to have mixed that into the problem.
Back to my original inquiries, I hope someone can help me understand why:
* it is possible to create an RBD image large
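The usual explanation for the first point is that RBD images are
thin-provisioned: the size is only metadata, and objects are allocated lazily
as data is written, so for example

  rbd create --size 10485760 rbd/too-big   # a nominal 10 TB image (size in MB)

succeeds even on a much smaller cluster ("too-big" is just a made-up name here).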
> -----Original Message-----
>From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
>Sent: Wednesday, November 20, 2013 7:17 AM
>To: Gruher, Joseph R
>Cc: ceph-users@lists.ceph.com
>Subject: Re: [ceph-users] ceph-deploy disk zap fails but succeeds on retry
>
>On Mon, Nov 18, 2013 at 1:12 PM, Gruher
Hi,
I'd almost always go with more, less beefy nodes rather than fewer big ones.
You're much more vulnerable if the big one(s) die, and replication will not
impact your cluster as much.
I also find it easier to extend a cluster with smaller nodes. At least it
feels like you can grow capacity at a smoother rate.
On 11/20/2013 05:33 AM, Laurent Barbe wrote:
> Hello,
>
> Yes, with Ubuntu, the init script needs to be enabled with update-rc.d.
> If you still have this problem, could you try adding the "_netdev" option in
> your fstab?
>
> e.g. :
> UUID=2f6aca33-c957-452c-8534-7234dd1612c9 /mnt/testrbd xfs
> de
On 11/19/2013 08:02 PM, YIP Wai Peng wrote:
> Hm, so maybe this nfsceph is not _that_ bad after all! :) Your read clearly
> wins, so I'm guessing the DRBD write is the slow one. Which DRBD mode are
> you using?
Active/passive pair, meta-disk internal, protocol C over a 5"-long
crossover cable on
Hi,
I plan to delete 2 buckets, with 5M and 15M files respectively.
Can this be dangerous if I do it via:
radosgw-admin --bucket=largebucket1 --purge-objects bucket rm
?
--
Pozdrawiam
Dominik
It's not more dangerous than going through the RESTful interface.
Yehuda
On Wed, Nov 20, 2013 at 12:41 PM, Dominik Mostowiec
wrote:
> Hi,
> I plan to delete 2 buckets, with 5M and 15M files respectively.
> Can this be dangerous if I do it via:
> radosgw-admin --bucket=largebucket1 --purge-objects bucket rm
> ?
>
On Tue, Nov 19, 2013 at 09:02:41AM +0100, Stefan Priebe wrote:
...
> >You might be able to vary this behavior by experimenting with sdparm,
> >smartctl or other tools, or possibly with different microcode in the drive.
> Which values or which settings do you think of?
...
Off-hand, I don't know.
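The obvious knob to start with (my assumption, not something stated in this
thread) would be the drive's volatile write cache, e.g.:

  sdparm --get=WCE /dev/sdX    # show the write-cache-enable bit
  sdparm --set=WCE /dev/sdX    # enable the volatile write cache
  smartctl -x /dev/sdX         # extended SMART / device report

where /dev/sdX is a placeholder for the drive in question.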
On 13/11/13 21:16, lixuehui wrote:
Hi, list,
We previously reported that radosgw-agent data sync kept failing.
We are pasting the relevant log here now in the hope of getting some help.
application/json; charset=UTF-8
Wed, 13 Nov 2013 07:24:45 GMT
x-amz-copy-source:sss%2Frgwconf
/sss/rgwconf
2013-11-13