Hi,
On a freshly created 4-node cluster I'm struggling to get the 4th node
set up correctly. ceph-deploy is unable to create the OSDs on it, and
when logging in to the node and running `ceph -s` manually (after
copying the client.admin keyring) with debug parameters, it ends up
hanging.
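The debug invocation looks roughly like this (the debug levels are just common choices, and the monitor address below is a placeholder):

```shell
# Run the status command with client-side debug output to see where it hangs
ceph -s --debug-ms 1 --debug-monc 10 --debug-auth 10

# Basic monitor reachability checks from the problem node
# (10.0.0.1 is a placeholder monitor address):
ping -c 3 10.0.0.1
nc -z 10.0.0.1 6789   # 6789 is the default monitor port
```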
On 25 August 2014 10:31, Wido den Hollander wrote:
> On 08/24/2014 08:27 PM, Andrei Mikhailovsky wrote:
>>
>> Hello guys,
>>
>> I am planning to do off-site backups of RBD images with rbd export-diff, and
>> I was wondering if Ceph has checksumming functionality so that I can compare
>> source and destination.
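One approach (not a built-in Ceph checksum, just a sketch; the pool and image names are placeholders) is to export both sides to stdout and hash the stream:

```shell
# On the source cluster: export the image to stdout and hash the stream
rbd export src-pool/myimage - | sha256sum

# On the destination cluster: hash the copy the same way
rbd export backup-pool/myimage - | sha256sum
```

If the two digests match, the images are byte-identical; sha256sum streams its input, so large images don't need to fit in memory or on local disk.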
Hi,
After upgrading to Dumpling I appear unable to get the MDS cluster
running. The active server just sits in the rejoin state, spinning and
causing lots of I/O on the OSDs. Looking at the logs, it appears to be
checking a vast number of missing inodes.
2013-08-20 13:50:29.129624 7fd
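For what it's worth, the state can be watched while the daemon churns (a sketch; these are standard status commands):

```shell
# Watch the MDS map state while the daemon works through rejoin
ceph mds stat
ceph -w          # streams cluster events, including MDS state changes
```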
be good.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
> On Tue, Aug 20, 2013 at 5:51 AM, Damien Churchill wrote:
>> Hi,
>>
>> After upgrading to dumpling I appear unable to get the mds cluster
>> running. The active server just sits in the
On 15 October 2013 15:52, Guang wrote:
> [osd2.ceph.mobstor.bf1.yahoo.com][ERROR ] sudo: sgdisk: command not found
A complete guess, but this could be due to the PATH environment
variable not being set correctly for whichever user ceph-deploy logs
into the machine as.
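A quick way to test that theory (the remote user name "ceph" is a guess; non-interactive SSH sessions often get a much shorter PATH than a login shell):

```shell
# Check what PATH a non-interactive session gets and whether sgdisk is on it
ssh ceph@osd2.ceph.mobstor.bf1.yahoo.com 'echo $PATH; command -v sgdisk || echo sgdisk not found'

# sgdisk ships in the gdisk package on most distributions:
sudo yum install gdisk        # RHEL/CentOS
sudo apt-get install gdisk    # Debian/Ubuntu
```

Since the error comes from sudo, the `secure_path` setting in sudoers is also worth checking on that host.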
Hi,
I was wondering if anyone has had any experience in attempting to use an RBD
volume as a clustered drive in Windows Failover Clustering? I'm getting the
impression that it won't work, since it needs to be either an iSCSI LUN or a
SCSI LUN.
Thanks,
Damien
> RBD can be re-published via iSCSI using a gateway host to sit in between,
> for example using targetcli.
>
>
>
> On 2013-10-22 13:15, Damien Churchill wrote:
>
>> Hi,
>>
>> I was wondering if anyone has had any experience in attempting to use
>> a RBD volum
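A rough sketch of that gateway setup, assuming krbd on the gateway host and LIO via targetcli (the pool, image, and IQN below are placeholders):

```shell
# On the gateway host: map the RBD image as a local block device
rbd map mypool/myimage                 # e.g. appears as /dev/rbd0

# Export the mapped device as an iSCSI LUN with targetcli
targetcli /backstores/block create name=rbd0 dev=/dev/rbd0
targetcli /iscsi create iqn.2013-10.com.example:rbd-gw
targetcli /iscsi/iqn.2013-10.com.example:rbd-gw/tpg1/luns create /backstores/block/rbd0
```

The Windows nodes would then log in to that target as an ordinary iSCSI LUN; for real failover you'd want multiple gateways and multipathing, which is beyond this sketch.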
Hi,
Since the upgrade to 0.72 I've been experiencing an issue with a number of
volumes. It seems to occur with the librbd clients as well as the krbd
clients. From the kernel client, trying to access one of these
apparently corrupted images causes a panic, and you end up with an
assertion
or upgrade bug 6761"
> http://tracker.ceph.com/issues/6761#note-19
> (Forgive my brevity; it's late here. :)
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Thu, Nov 14, 2013 at 2:15 AM, Damien Churchill
> wrote:
> > Hi,
> >
On 19 November 2013 20:12, LaSalle, Jurvis
wrote:
>
> On 11/19/13, 2:10 PM, "Wolfgang Hennerbichler" wrote:
>
> >
> >On Nov 19, 2013, at 3:47 PM, Bernhard Glomm
> >wrote:
> >
> >> Hi Nicolas
> >> just fyi
> >> rbd format 2 is not supported yet by the linux kernel (module)
> >
> >I believe this i
On Tue Dec 30 2014 at 11:49:09 PM Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> On Tue, 30 Dec 2014 11:25:40 PM Erik Logtenberg wrote:
> > If you want to be able to start your OSDs with the /etc/init.d/ceph init
> > script, then you better make sure that /etc/ceph/ceph.conf does link
> > t
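i.e. something along these lines (the source path is a placeholder for wherever the real ceph.conf lives):

```shell
# Link the real config to the path the init script expects
# (/path/to/real/ceph.conf is a placeholder):
ln -s /path/to/real/ceph.conf /etc/ceph/ceph.conf
/etc/init.d/ceph start osd
```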
On 6 June 2013 15:02, Morgan KORCHIA wrote:
> As far as I know, thin provisioning is not available in Ubuntu 12.04, since
> it does not include LVM2.
Hi,
Fairly sure it does.
$ lvchange --version
LVM version: 2.02.66(2) (2010-05-20)
Library version: 1.02.48 (2010-05-20)
Driver version:
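Whether thin provisioning actually works depends on both the LVM2 userspace build and the kernel's dm-thin target, so it can be checked directly rather than inferred from the version string (the VG name vg0 below is a placeholder):

```shell
# Check whether this kernel and LVM build support thin provisioning:
dmsetup targets | grep thin        # kernel dm-thin target present?
lvcreate --help | grep -i thin     # userspace thin-pool support present?

# If both check out, create a thin pool and a thin volume inside it:
lvcreate -L 10G -T vg0/thinpool
lvcreate -V 100G -T vg0/thinpool -n thinvol
```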
Hi,
I've built a copy of Linux 3.10-rc6 (with the patch from
ceph-client/for-linus applied); however, when I try to map an rbd image
created with:
# rbd create test-format-2 --size 10240 --format 2
and then run a map on the machine running the new kernel:
# rbd map test-format-2
rbd: add failed: (22
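Error 22 is EINVAL, which for format 2 images usually means the running kernel's rbd module rejected the image. A few hedged checks:

```shell
# Confirm the newly built kernel is actually the one running
uname -r

# The kernel normally logs the specific reason for the map failure
dmesg | tail

# Double-check the image really is format 2 on the server side
rbd info test-format-2
```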
On 21 June 2013 16:32, Sage Weil wrote:
> On Fri, 21 Jun 2013, Damien Churchill wrote:
>> Hi,
>>
>> I've built a copy of linux 3.10-rc6 (and added the patch from
>> ceph-client/for-linus) however when I try and map a rbd image created
>> with:
>>