- Original Message -
From: "Mark Nelson"
To: "Barry O'Rourke"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, May 7, 2013 5:02:42 PM
Subject: Re: [ceph-users] Dell R515 performance and specification question
On 05/07/2013 03:36 PM, Barry O'Rourke wrote:
> Hi,
>
>> With so few disks and the inability to do 10GbE
round this restriction, correct?
Dave Spano
On 05/08/2013 07:08 AM, Barry O'Rourke wrote:
> Hi,
> I've been doing some numbers today and it looks like our choice is
> between 6 x R515's or 6 x R410's depending upon whether we want to allow
> for the possibility of adding more OSDs at a later date.
Yeah, tough call. I would expect that R410s or
Hi,
I've been doing some numbers today and it looks like our choice is
between 6 x R515's or 6 x R410's depending upon whether we want to allow
for the possibility of adding more OSDs at a later date.
Do you have any experience with the Dell H200 cards?
You mentioned earlier that the Dell S
On 05/07/2013 03:36 PM, Barry O'Rourke wrote:
> Hi,
With so few disks and the inability to do 10GbE, you may want to
consider doing something like 5-6 R410s or R415s and just using the
on-board controller with a couple of SATA disks and 1 SSD for the
journal. That should give you better aggregate
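As a point of reference for the layout Mark is suggesting, sharing one SSD between a few OSD journals is normally expressed in ceph.conf by giving each OSD its own partition on that SSD. A minimal sketch, assuming two OSDs and a hypothetical SSD at /dev/sdc (device names and paths are illustrative, not taken from this thread):

    [osd.0]
        osd data = /var/lib/ceph/osd/ceph-0
        ; journal on the first partition of the shared SSD
        osd journal = /dev/sdc1

    [osd.1]
        osd data = /var/lib/ceph/osd/ceph-1
        ; journal on the second partition of the same SSD
        osd journal = /dev/sdc2

If that one SSD dies, only osd.0 and osd.1 are affected, which is the trade-off discussed further down the thread.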
Hi,
> Here's a quick performance display with various block sizes on a host
> with 1 public 1GbE link and 1 1GbE link on the same VLAN as the Ceph
> cluster.
Thanks for taking the time to look into this for me, I'll compare it
with my existing set-up in the morning.
Thanks,
Barry
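For anyone who wants to produce numbers comparable to the block-size figures Dave mentions above, rados bench can be pointed at a test pool and run at a few different block sizes. A rough sketch; the pool name, duration and thread count are made up for illustration:

    # write benchmark at 4 KB, 64 KB and 4 MB block sizes, 16 concurrent ops
    for bs in 4096 65536 4194304; do
        rados bench -p testpool 30 write -b $bs -t 16
    done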
Hi,
On Tue, 2013-05-07 at 21:07 +0300, Igor Laskovy wrote:
> If I understand the idea correctly, when this 1 SSD fails the whole node
> with that SSD will fail. Correct?
Only OSDs that use that SSD for the journal will fail as they will lose
any writes still in the journal. If I only have 2 OSDs sha
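A quick way to see which OSDs would be taken out by the loss of a particular journal device is to look at the journal links under each OSD's data directory. A sketch, assuming the default data paths and that the journals are symlinks to partitions:

    # show where every OSD on this host keeps its journal
    for d in /var/lib/ceph/osd/ceph-*; do
        echo -n "$d -> "
        readlink -f "$d/journal"
    done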
Hi,
> With so few disks and the inability to do 10GbE, you may want to
> consider doing something like 5-6 R410s or R415s and just using the
> on-board controller with a couple of SATA disks and 1 SSD for the
> journal. That should give you better aggregate performance since in
> your case yo
If I understand the idea correctly, when this 1 SSD fails the whole node with
that SSD will fail. Correct?
What is the scenario for node recovery in this case?
Playing with "ceph-osd --flush-journal" and "ceph-osd --mkjournal" for each
osd?
On Tue, May 7, 2013 at 4:17 PM, Mark Nelson wrote:
> On 05/07/201
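For the planned case where the journal device is still readable (e.g. swapping an SSD before it dies), the sequence built around the two commands Igor mentions looks roughly like the sketch below, repeated per OSD; if the SSD has already failed there is nothing left to flush and the affected OSDs are normally rebuilt and backfilled instead. OSD id 0 and the init commands are illustrative:

    # stop the OSD and write out anything still sitting in its journal
    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal

    # swap the SSD / repoint the journal (symlink or ceph.conf entry),
    # then create a fresh journal and bring the OSD back up
    ceph-osd -i 0 --mkjournal
    service ceph start osd.0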
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, May 7, 2013 9:17:24 AM
Subject: Re: [ceph-users] Dell R515 performance and specification question
On 05/07/2013 06:50 AM, Barry O'Rourke wrote:
> Hi,
>
> I'm looking to purchase a production cluster of 3 Dell Poweredge R515's
>
On 05/07/2013 06:50 AM, Barry O'Rourke wrote:
> Hi,
> I'm looking to purchase a production cluster of 3 Dell Poweredge R515's
> which I intend to run in 3 x replication. I've opted for the following
> configuration:
> 2 x 6 core processors
> 32GB RAM
> H700 controller (1GB cache)
> 2 x SAS OS disks (in RAID1)
Hi,
> I'm running a somewhat similar configuration here. I'm wondering why you
> have left out SSDs for the journals?
I can't go into exact prices due to our NDA, but I can say that getting
a couple of decent SSD disks from Dell will increase the cost per server
by a four figure sum, and we're o
FWIW, here is what I have for my ceph cluster:
4 x HP DL 180 G6
12GB RAM
P411 with 512MB Battery Backed Cache
10GigE
4 HP MSA 60's with 12 x 1TB 7.2k SAS and SATA drives (bought at different times
so there is a mix)
2 HP D2600 with 12 x 3TB 7.2k SAS Drives
I'm currently running 79 qemu/kvm VMs
Hi,
> I'd be interested to hear from anyone running a similar configuration
I'm running a somewhat similar configuration here. I'm wondering why you
have left out SSDs for the journals?
I gather they would be quite important to achieve a level of performance
for hosting 100 virtual machines
Hi,
I'm looking to purchase a production cluster of 3 Dell Poweredge R515's
which I intend to run in 3 x replication. I've opted for the following
configuration:
2 x 6 core processors
32GB RAM
H700 controller (1GB cache)
2 x SAS OS disks (in RAID1)
2 x 1Gb ethernet (bonded for cluster network
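The bonded pair reserved for the cluster network in this spec maps onto the public/cluster network split in ceph.conf. A minimal sketch with made-up subnets, since the actual addressing isn't given in the thread:

    [global]
        ; client-facing traffic
        public network = 10.0.1.0/24
        ; replication and recovery traffic over the bonded pair
        cluster network = 10.0.2.0/24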