Hello
Is there a way to see running / active ceph.conf configuration items?
kind regards
Rob Fantini
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
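One way to check this, assuming the admin socket is reachable on the node in
question (the daemon name and socket path below are only examples, adjust to
your own daemons):

    # dump the running, in-memory configuration of a daemon
    ceph daemon osd.0 config show | less

    # the same thing via the admin socket directly
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show

    # show the configuration the ceph command-line client itself would use
    ceph --show-config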
Christian Balzer wrote:
>
> Hello,
>
> On Tue, 29 Jul 2014 06:33:14 -0400 Robert Fantini wrote:
>
> > Christian -
> > Thank you for the answer, I'll get around to reading 'Crush Maps' a
> > few times, it is important to have a good understanding of that part of ceph. [...]
[...]anced?
If not I'll stick with 2 each room until I understand how to configure things.
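As a sketch of what room-aware placement can look like, assuming the hosts
have already been moved under room buckets (bucket, host and rule names below
are placeholders, not taken from this cluster):

    # create room buckets and place hosts under them
    ceph osd crush add-bucket room1 room
    ceph osd crush move room1 root=default
    ceph osd crush move node1 room=room1

    # rule in the decompiled CRUSH map: pick replicas across rooms
    rule replicated_per_room {
            ruleset 1
            type replicated
            min_size 2
            max_size 3
            step take default
            step chooseleaf firstn 0 type room
            step emit
    }

Note that with only two rooms a rule like this can place at most two of the
three replicas, which is essentially the balancing problem discussed here.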
On Mon, Jul 28, 2014 at 9:19 PM, Christian Balzer wrote:
>
> On Mon, 28 Jul 2014 18:11:33 -0400 Robert Fantini wrote:
>
> > "target replication level of 3"
> > " with a mi
uired to allow a single room to operate.
>
> There's no way you can do a 3/2 MON split that doesn't risk the two nodes
> being up and unable to serve data while the three are down so you'd need to
> find a way to make it a 2/2/1 split instead.
>
> -Michael
>
>
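Spelling out the quorum arithmetic behind that (plain majority rule, nothing
specific to this cluster):

    5 monitors -> quorum needs floor(5/2)+1 = 3
    3/2 split over two rooms:   lose the 3-mon room -> 2 left   -> no quorum
    2/2/1 split over three rooms: lose any one room -> >= 3 left -> quorum holds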
Any other ideas on how to increase availability are welcome.
On Mon, Jul 28, 2014 at 12:29 PM, Christian Balzer wrote:
> On Mon, 28 Jul 2014 11:22:38 +0100 Joao Eduardo Luis wrote:
>
> > On 07/28/2014 08:49 AM, Christian Balzer wrote:
> > >
> > > Hello,
>
> [...]r will
> not serve those requests if quorum is not in place.
>
> -Joao
>
>
>
>> On 28/07/2014 12:22, Joao Eduardo Luis wrote:
>>
>>> On 07/28/2014 08:49 AM, Christian Balzer wrote:
>>>
>>>>
>>>> Hello,
>>>>
>>
>
> On Mon, 28 Jul 2014 04:19:16 -0400 Robert Fantini wrote:
>
> > I have 3 hosts that i want to use to test new setup...
> >
> > Currently they have 3-4 OSD's each.
> >
> How did you create the current cluster?
>
> ceph-deploy or something within [...]
I have 3 hosts that I want to use to test a new setup...
Currently they have 3-4 OSDs each.
Could you suggest a fast way to remove all the OSDs?
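A rough sketch of the usual removal sequence, assuming the data on the cluster
is disposable and the OSD ids on each host are known (the ids and the service
command are examples, adjust to your init system):

    for id in 0 1 2 3; do
        ceph osd out $id
        service ceph stop osd.$id      # or: stop ceph-osd id=$id on upstart
        ceph osd crush remove osd.$id
        ceph auth del osd.$id
        ceph osd rm $id
    done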
On Mon, Jul 28, 2014 at 3:49 AM, Christian Balzer wrote:
>
> Hello,
>
> On Sun, 27 Jul 2014 18:20:43 -0400 Robert Fantini wrote:
> [...]
> [...]ying DRBD where it makes more sense
> (IOPS/speed), while migrating everything else to Ceph.
>
> Anyway, lets look at your mail:
>
> On Fri, 25 Jul 2014 14:33:56 -0400 Robert Fantini wrote:
>
> > I've a question regarding advice from these threads:
> >
> https:/
I've a question regarding advice from these threads:
https://mail.google.com/mail/u/0/#label/ceph/1476b93097673ad7?compose=1476ec7fef10fd01
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg11011.html
Our current setup has 4 OSDs per node. When a drive fails the
cluster is almost unusable for data entry.
Hello Christian.
Our current setup has 4 OSDs per node. When a drive fails the
cluster is almost unusable for data entry. I want to change our setup
so that this never happens under any circumstances. We used DRBD for 8 years,
and our main concern is high availability. 1200bps modem spe[...]
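For what it's worth, the usual first step against a failed drive making the
cluster unusable is to throttle recovery and backfill so client I/O keeps
priority; a sketch with illustrative (not tuned) values:

    # at runtime, on all OSDs
    ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1'

    # persistently, in the [osd] section of ceph.conf
    osd max backfills = 1
    osd recovery max active = 1
    osd recovery op priority = 1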
Hello.
In this setup:
PowerEdge R720
RAID: PERC H710 eight-port, 6Gb/s
OSD drives: qty 4: Seagate Constellation ES.3 ST2000NM0023 2TB 7200 RPM
128MB Cache SAS 6Gb/s
Would it make sense to use these good SAS drives in RAID-1 for the journal?
Western Digital XE WD3001BKHG 300GB 10000 RPM 32MB Cache
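If the mirrored SAS pair ends up holding the journals, pointing the OSDs at it
is just a ceph.conf setting; a minimal sketch assuming the RAID-1 device is
mounted at /srv/journal (path and size are examples):

    [osd]
        osd journal = /srv/journal/$cluster-$id/journal
        osd journal size = 10240    # 10 GB per file-based journal

Whether this helps depends on whether the mirrored pair can absorb the
combined write stream of all four OSDs on the node.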
> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> Of Robert Fantini
> Sent: Wednesday, July 16, 2014 1:55 PM
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] PERC H710 raid card
>
> I've 2 dell systems with PERC H710 raid cards. Those are very good end
I've 2 Dell systems with PERC H710 raid cards. Those are very good high-end
cards, but do not support JBOD.
They support RAID 0, 1, 5, 6, 10, 50, 60.
lspci shows them as: LSI Logic / Symbios Logic MegaRAID SAS 2208
[Thunderbolt] (rev 05)
The firmware Dell uses on the card does not support JBOD.
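The common workaround on these LSI-based cards is to export each OSD disk as
its own single-drive RAID-0 virtual disk; a sketch with MegaCli (enclosure:slot
numbers and cache flags are examples, check your own with the first command;
Dell's OpenManage tooling can do the same):

    # list physical drives with their enclosure and slot ids
    MegaCli64 -PDList -aALL | egrep 'Enclosure Device ID|Slot Number'

    # one single-drive RAID-0 per OSD disk
    MegaCli64 -CfgLdAdd -r0 [32:0] WB RA Direct -a0
    MegaCli64 -CfgLdAdd -r0 [32:1] WB RA Direct -a0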