> www8.hp.com/h20195/v2/GetDocument.aspx?docname=c04128155
>
> (60 drives?)
>
> I think for a full SSD node, it'll be impossible to reach max performance;
> you'll be CPU bound.
>
>
> I think a small node with 6-8 SSD OSDs for 20 cores should be OK.
>
>
> --
Thanks for the tips.
Could anyone share their experience building an SSD pool or an SSD cache
tier with the HP SL4540 server?
rgds,
Sreenath
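
A rough check of the CPU-bound point, using only the figures quoted above (about 20 cores for 6-8 SSD OSDs, i.e. roughly 3 cores per SSD OSD); this is a back-of-the-envelope sketch, not official sizing guidance:

# Back-of-the-envelope check using the ~20 cores per 6-8 SSD OSDs figure
# quoted above (an assumption from this thread, not official guidance).
cores_per_ssd_osd = 20 / 7.0        # ~2.9 cores, midpoint of the 6-8 range
drive_bays = 60                     # fully populated SL4540
print("cores for 60 SSD OSDs: ~%d" % (drive_bays * cores_per_ssd_osd))   # ~171

# Turned around: how many SSD OSDs a 20-core node can reasonably drive.
print("SSD OSDs per 20-core node: ~%d" % (20 / cores_per_ssd_osd))       # ~7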
On 4/2/15, Christian Balzer wrote:
>
> Hello,
>
> On Wed, 1 Apr 2015 18:40:10 +0530 Sreenath BH wrote:
>
>> Hi all,
>>
>> we are considering building all SSD OSD servers for an RBD pool.
Hi all,
we are considering building all SSD OSD servers for an RBD pool.
A couple of questions:
Does Ceph have any recommendation for the number of cores/memory/GHz per
SSD drive, similar to what is usually followed for hard drives
(1 core / 1 GB RAM / 1 GHz per drive)?
thanks,
Sreenath
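
There is no equivalent official figure for SSDs; as a rough planning aid, the per-drive rule of thumb from the question can be written out directly, with the SSD multiplier left as an explicit assumption to be validated by benchmarks (illustrative sketch only):

def node_requirements(num_osds, cores_per_osd=1.0, ram_gb_per_osd=1.0):
    # Defaults encode the classic 1 core / 1 GB RAM per HDD OSD rule.
    # For SSD OSDs a larger cores_per_osd (the thread above suggests
    # roughly 3) is an assumption to validate with your own testing.
    return num_osds * cores_per_osd, num_osds * ram_gb_per_osd

print(node_requirements(12))                     # 12 HDD OSDs -> (12.0, 12.0)
print(node_requirements(8, cores_per_osd=3.0))   # 8 SSD OSDs  -> (24.0, 8.0)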
Thanks for the information.
-Sreenath

Date: Wed, 25 Mar 2015 04:11:11 +0100
From: Francois Lafont
To: ceph-users
Subject: Re: [ceph-users] PG calculator queries

Hi,

Sreenath BH wrote:
Hi,
consider the following values for a pool:
Size = 3
OSDs = 400
%Data = 100
Target PGs per OSD = 200 (This is default)
The PG calculator generates the number of PGs for this pool as 32768.
Questions:
1. The Ceph documentation recommends around 100 PGs/OSD, whereas the
calculator takes 200 as the default.
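
For reference, the 32768 can be reproduced by hand. The sketch below is an approximation of the calculator's arithmetic (raw count = OSDs x Target PGs per OSD x %Data / Size, rounded to a power of two), not its actual source code; with a target of 100 instead of 200 the same arithmetic gives 16384, which is what the ~100 PGs/OSD guidance in the docs corresponds to:

import math

def pg_count(osds, target_pgs_per_osd, size, pct_data=100.0):
    # Approximation of the PG calculator's arithmetic (assumed, not its
    # actual source): raw count scaled by %Data, rounded to the nearest
    # power of two, bumped up if that power is more than 25% below raw.
    raw = osds * target_pgs_per_osd * (pct_data / 100.0) / size
    power = 2 ** int(round(math.log(raw, 2)))
    if power < 0.75 * raw:
        power *= 2
    return power

print(pg_count(400, 200, 3))   # 32768, matching the calculator output above
print(pg_count(400, 100, 3))   # 16384, i.e. the ~100 PGs/OSD recommendation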
Hi,
Is there a ceiling on the number of placement groups per OSD beyond
which steady-state and/or recovery performance will start to suffer?
Example: I need to create a pool with 750 OSDs (25 OSDs per server, 50 servers).
The PG calculator gives me 65536 placement groups with 300 PGs per OSD.
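
If the 300 here is the calculator's Target PGs per OSD setting, the per-OSD count actually implied by 65536 PGs comes out a little lower, since each PG is replicated onto Size OSDs:

pgs, size, osds = 65536, 3, 750
print(pgs * size / osds)   # ~262 PGs per OSD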
> The only documentation I know of is the Federation guide
> (http://ceph.com/docs/giant/radosgw/federated-config/), but it only briefly
> mentions placement targets.
>
>
>
> On Thu, Mar 12, 2015 at 11:48 PM, Sreenath BH wrote:
>
>> Hi all,
>>
>> Can one Rados gateway support more than one pool for storing objects?
Hi all,
Can one Rados gateway support more than one pool for storing objects?
And as a follow-up question, is there a way to map different users to
separate rgw pools so that their objects get stored in different
pools?
thanks,
Sreenath
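
The Federation guide only touches on this, but the mechanism it hints at is placement targets. The snippet below is purely illustrative of the shape of that configuration, written as Python dictionaries; the field names are assumptions based on the Giant-era federated-config docs (verify against radosgw-admin region get / zone get on your own cluster), and the pool and target names are placeholders:

# Illustrative only: rough shape of region/zone placement-target config.
# Field names assumed from the Giant-era federated-config docs; pool and
# target names below are placeholders, not defaults to copy verbatim.
region_fragment = {
    "placement_targets": [
        {"name": "default-placement", "tags": []},
        {"name": "fast-placement", "tags": []},        # hypothetical second target
    ],
    "default_placement": "default-placement",
}

zone_fragment = {
    "placement_pools": [
        {"key": "default-placement",
         "val": {"index_pool": ".rgw.buckets.index",
                 "data_pool": ".rgw.buckets"}},
        {"key": "fast-placement",                       # hypothetical separate pools
         "val": {"index_pool": ".rgw.fast.index",
                 "data_pool": ".rgw.fast"}},
    ],
}

# A user's metadata carries a default_placement field; pointing different
# users at different targets is what lands their objects in different pools.
user_fragment = {"default_placement": "fast-placement"}

So a single gateway can serve several data pools; which pool a given user's buckets use follows from the placement target attached to that user (or chosen at bucket creation).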
When an RBD volume is deleted, does Ceph fill the used 4 MB chunks with zeros?
thanks,
Sreenath
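
For scale, the 4 MB figure comes from RBD striping an image over many 4 MB RADOS objects (the default object size, configurable per image), so the object count behind a volume is easy to work out:

# Number of 4 MB RADOS objects backing an RBD image of a given size
# (4 MB is the default RBD object size and can be changed per image).
image_size_gb = 100          # example image size
object_size_mb = 4
print(image_size_gb * 1024 // object_size_mb)   # 25600 objects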
Thanks for all the help. We will follow the more careful approach!
-Sreenath
On 11/26/14, Kyle Bader wrote:
>> Thanks for all the help. Can the move from a VLAN to separate
>> switches be done on a live cluster, or does there need to be
>> downtime?
>
> You can do it on a live cluster.
Thanks for all the help. Can the move from a VLAN to separate
switches be done on a live cluster, or does there need to be downtime?
-Sreenath
On 11/26/14, Kyle Bader wrote:
>> For a large network (say 100 servers and 2500 disks), are there any
>> strong advantages to using a separate switch and physical network instead of a VLAN?
Hi
For a large network (say 100 servers and 2500 disks), are there any
strong advantages to using a separate switch and physical network
instead of a VLAN?
Also, how difficult would it be to switch from a VLAN to separate
switches later?
-Sreenath
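
For context, on the Ceph side the split (whether it is carried on a VLAN or on physically separate switches) is expressed with the public/cluster network settings in ceph.conf; the subnets below are placeholders:

[global]
    # client-facing traffic (placeholder subnet)
    public network = 192.168.1.0/24
    # replication and recovery traffic (placeholder subnet)
    cluster network = 192.168.2.0/24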