On Thu, May 12, 2016 at 2:16 PM, Wido den Hollander wrote:
> Hi,
Hi Wido,
>
> I am setting up a Jewel cluster in VMs with Ubuntu 16.04.
>
> ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
>
> After a reboot the Ceph Monitors don't start and I have to do so manually.
I had a simil
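A minimal sketch of what is usually worth checking for this symptom, assuming the
stock systemd units shipped with the Jewel packages on Ubuntu 16.04 (the mon id
below is just the short hostname, which may not match every setup):

# Is the per-daemon unit enabled, so it starts at boot?
systemctl status ceph-mon@$(hostname -s).service

# If not, enabling the target and the per-daemon unit normally takes care of
# boot-time startup:
sudo systemctl enable ceph.target
sudo systemctl enable ceph-mon.target
sudo systemctl enable ceph-mon@$(hostname -s).service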
Hi Wido,
Yes, you are right. After removing the down OSDs, reformatting them, and bringing
them up again, at least until 75% of the total OSDs were back in, my Ceph cluster
is healthy again. It seems there is a high probability of data safety if the number
of active PGs equals the total number of PGs and the number of degraded PGs equals the total un
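A quick, hedged way to check those totals with standard status commands (the
output layout differs a bit between releases):

# Per-state PG counts; the active+clean count should converge to the total:
ceph pg stat
ceph -s

# Anything still degraded, unfound or otherwise stuck is listed here:
ceph health detail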
> On 14 May 2016 at 11:53, Ruben Kerkhof wrote:
>
>
> On Thu, May 12, 2016 at 2:16 PM, Wido den Hollander wrote:
> > Hi,
>
> Hi Wido,
>
> >
> > I am setting up a Jewel cluster in VMs with Ubuntu 16.04.
> >
> > ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
> >
> > After a reboot the Ceph Monitors don't start and I have to do so manually.
On Sat, May 14, 2016 at 12:40 PM, Wido den Hollander wrote:
>
>> On 14 May 2016 at 11:53, Ruben Kerkhof wrote:
>>
>>
>> On Thu, May 12, 2016 at 2:16 PM, Wido den Hollander wrote:
>> > Hi,
>>
>> Hi Wido,
>>
>> >
>> > I am setting up a Jewel cluster in VMs with Ubuntu 16.04.
>> >
>> > ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)
> On 14 May 2016 at 12:40, Wido den Hollander wrote:
>
>
>
> > On 14 May 2016 at 11:53, Ruben Kerkhof wrote:
> >
> >
> > On Thu, May 12, 2016 at 2:16 PM, Wido den Hollander wrote:
> > > Hi,
> >
> > Hi Wido,
> >
> > >
> > > I am setting up a Jewel cluster in VMs with Ubuntu 16.04.
> > >
We have a rather urgent situation, and I need help doing one of two things:
1. Fix the MDS and regain a working cluster (ideal)
2. Find a way to extract the contents so I can move them to a new cluster (need
to do this anyway)
We have 4 physical storage machines: stor1, stor2, stor3, stor4
The MDS
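For option 1, a minimal sketch of the read-only status commands worth capturing
before changing anything, assuming a pre-Luminous release such as the 9.0.2
mentioned later in this thread (ceph mds dump was the command of that vintage):

# Which MDS ranks exist and what state they are in (active, replay, etc.):
ceph mds stat
ceph mds dump

# Overall cluster state, since a stuck MDS is often blocked by unhealthy PGs:
ceph -s
ceph health detail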
Hi Alex,
Thank you for your response! Yes, this is for a production environment... Do
you think the risk of data loss due to the single node would be different than
if it were an appliance or a Linux box with RAID/ZFS?
Cheers,
Mike
> On May 13, 2016, at 7:38 PM, Alex Gorbachev wrote:
>
>
>
>> On
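If the cluster really does have to run on one node for a while, one related
detail: by default CRUSH places replicas on distinct hosts, so a size-3 pool on a
single host never becomes clean. A minimal sketch, assuming the setting is applied
before the OSDs are created (it only changes placement; the single host stays a
single point of failure):

# ceph.conf, [global] section
osd crush chooseleaf type = 0    # choose replicas across OSDs rather than hosts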
On Sat, 14 May 2016 09:46:23 -0700 Mike Jacobacci wrote:
Hello,
> Hi Alex,
>
> Thank you for your response! Yes, this is for a production
> environment... Do you think the risk of data loss due to the single node
> would be different than if it were an appliance or a Linux box with RAID/ZFS?
>
Depends
Hi Christian,
Thank you, I know what I am asking isn't a good idea... I am just trying to
avoid waiting for all three nodes before I begin virtualizing our
infrastructure.
Again thanks for the responses!
Cheers,
Mike
> On May 14, 2016, at 9:56 AM, Christian Balzer wrote:
>
> On Sat, 14 May
Hi all,
I’m currently having trouble with an incomplete pg.
Our cluster has a replication factor of 3; however, somehow I found this pg to
be present on 9 different OSDs (being active in only 3 of them, of course).
Since I don’t really care about data loss, I was wondering if it’s possible to
get
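A hedged starting point for inspecting an incomplete PG, using read-only commands
only (the pg id 2.1a below is just a placeholder):

# List PGs stuck inactive/incomplete and the overall health detail:
ceph pg dump_stuck inactive
ceph health detail

# Full peering state for one PG; the recovery_state section usually names the
# OSDs the PG is still waiting for:
ceph pg 2.1a query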
One of the issues was a different version of Ceph on the nodes. They are now
all back to version 9.0.2 and things are looking a bit better.
We've had some help from a Ceph engineer, and the OSDs are now all up and
running:
[root@stor2 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME  UP/
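To confirm the version skew is really gone, the running daemons can be asked
directly rather than trusting the installed packages:

# Each OSD reports the version of the binary it is actually running:
ceph tell osd.* version

# Version of the locally installed ceph binary, for comparison:
ceph -v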