Hello Christian,

On 01/10/2014 19:20, Christian Balzer wrote:
Hello,

On Wed, 01 Oct 2014 18:26:53 +0200 Massimiliano Cuttini wrote:

Dear all,

I need a few tips about the best drive-controller setup for Ceph.
I'm getting confused about IT mode, RAID and JBoD.
I've read many posts saying not to go for RAID but to use a JBoD
configuration instead.

I have 2 storage alternatives right now in my mind:

     *SuperStorage Server 2027R-E1CR24L*
     which uses SAS3 via an LSI 3008 AOC; IT mode/pass-through
     http://www.supermicro.nl/products/system/2U/2027/SSG-2027R-E1CR24L.cfm

and

     *SuperStorage Server 2027R-E1CR24N*
     which uses SAS3 via an LSI 3108 SAS3 AOC (in RAID mode?)
     http://www.supermicro.nl/products/system/2U/2027/SSG-2027R-E1CR24N.cfm

Firstly, both of these use an expander backplane.
So if you're planning on putting SSDs in there (even if just like 6 for
journals) you may be hampered by that.
The Supermicro homepage is vague as usual and the manual doesn't actually
have a section for that backplane. I guess it will be a 4-link connection,
so 4x12Gb/s, aka 4.8 GB/s.
If the disks are all going to be HDDs you're OK, but keep that bit in mind.
OK, I was thinking of connecting 24 SSDs over SATA3 (6Gbps).
This is why I chose an 8-port SAS3 LSI card on PCIe 3.0, which supports up to 12Gbps per port.
This should allow me to use the full speed of the SSDs (I guess).

I made this analysis:
- Total output: 8x12 = 96Gbps full speed available over the PCIe 3.0 link.
- So each disk should get a maximum of 96Gbps / 24 disks = 4Gbps.
- The disks are SATA3 (6Gbps), so there is a small bottleneck that limits me to 4Gbps per disk.
- However, a common SSD never hits the interface speed; they tend to top out around 450MB/s.

Average speed of an SSD (MB/s):

        Min   Avg   Max
Read    369   485   522
Write   162   428   504
Mixed   223   449   512


So a bottleneck at 4Gbps (which means roughly 400MB/s after encoding overhead) should be fine, if I'm not wrong.
Is my reasoning right?

I think that the only real bottleneck here is the 4x1Gb Ethernet connection.
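Here is a rough sketch of that arithmetic (pure back-of-the-envelope Python; the SSD figure and the "/10" rule of thumb for encoding/protocol overhead are assumptions, not measurements):

    # Back-of-the-envelope bandwidth budget for 24 SATA3 SSDs behind an
    # 8-lane SAS3 HBA and a 4x1GbE bonded network link.
    # All numbers are nominal; real throughput will be lower.

    LANES = 8            # SAS3 lanes on the LSI 3008/3108
    LANE_GBPS = 12       # Gb/s per SAS3 lane
    DISKS = 24
    SSD_AVG_MBS = 450    # assumed average SSD throughput, MB/s (from the table above)
    NIC_GBPS = 4 * 1     # 4x 1Gb Ethernet

    hba_gbps = LANES * LANE_GBPS         # 96 Gb/s total at the HBA
    # NOTE: if the expander backplane only has a 4-lane uplink (as Christian
    # suspects), halve this figure.
    per_disk_gbps = hba_gbps / DISKS     # 4 Gb/s per disk
    per_disk_mbs = per_disk_gbps * 100   # ~400 MB/s, rough /10 overhead rule
    nic_mbs = NIC_GBPS * 100             # ~400 MB/s usable on the wire, same rule

    print(f"Per-disk share : {per_disk_gbps:.0f} Gb/s (~{per_disk_mbs:.0f} MB/s) "
          f"vs SSD average {SSD_AVG_MBS} MB/s")
    print(f"All 24 SSDs    : ~{DISKS * SSD_AVG_MBS / 1000:.1f} GB/s "
          f"vs network ~{nic_mbs:.0f} MB/s")

With these numbers the 24 SSDs together could push around 10 GB/s, while the bonded 1GbE links top out around 400 MB/s, so the network is what caps client throughput long before the HBA does.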

OK, both solutions should support JBoD.
However, I read that only an LSI controller in HBA mode and/or flashed to IT
mode allows you to:

   * "plug&play" a new driver and see it already on a linux distribution
     (without recheck disks)
   * see S.M.A.R.T. data (because there is no volume layer between
     motherboard and disks)
smartctl can handle the LSI RAID stuff fine.
Good
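For reference, a minimal sketch of polling SMART data through smartctl's MegaRAID passthrough (the device path and slot range are assumptions for a 24-bay chassis; adjust for the real layout):

    # Minimal sketch: read SMART data for drives behind an LSI MegaRAID-class
    # controller (e.g. the 3108) via smartctl's "-d megaraid,N" passthrough.
    import subprocess

    CONTROLLER_DEV = "/dev/sda"   # block device the controller exposes (assumption)
    SLOTS = range(24)             # physical slots to probe (assumption: 24-bay chassis)

    for slot in SLOTS:
        result = subprocess.run(
            ["smartctl", "-a", "-d", f"megaraid,{slot}", CONTROLLER_DEV],
            capture_output=True, text=True,
        )
        # smartctl uses a bit-mask exit status, so don't treat non-zero as fatal;
        # just skip slots that return no device information.
        if "Device Model" in result.stdout or "Vendor" in result.stdout:
            print(f"--- slot {slot} ---")
            print(result.stdout)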


   * reduce the disk latency

Not sure about that; depending on the actual RAID and configuration, any
cache in the RAID subsystem might get used, which would improve things.

The most important reason to use IT mode, for me, would be in conjunction with
SSDs; none of the RAID controllers I'm aware of allow TRIM/DISCARD to work.
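A quick way to verify that on a given box is to look at what the kernel exposes in sysfs (a small sketch; it only shows whether DISCARD is plumbed through the block layer, nothing controller-specific):

    # Check which block devices expose TRIM/DISCARD through the Linux block layer.
    # A discard_max_bytes of 0 means DISCARD is not supported on that path,
    # which is the usual situation behind a RAID volume.
    from pathlib import Path

    for dev in sorted(Path("/sys/block").iterdir()):
        limit_file = dev / "queue" / "discard_max_bytes"
        if limit_file.exists():
            supported = int(limit_file.read_text()) > 0
            print(f"{dev.name}: DISCARD {'supported' if supported else 'not supported'}")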

Do you know if I can flash the LSI 3108 to IT mode?

Then I should probably avoid the LSI 3108 (which is in RAID mode by
default) and go for the LSI 3008 (already flashed in IT mode).

Of the 2 I would pick the IT mode one for a "classic" Ceph deployment.

OK, but why?
Can you suggest some good technical datasheets about IT mode?


Is that so, or am I completely wasting my time on useless specs?

It might be a good idea to tell us what your actual plans are.
As in, how many nodes (these are quite dense ones with 24 drives!), how
much storage in total, what kind of use pattern, clients.
Right now we are just testing and experimenting.
We would start with a non-production environment of 2 nodes, learn Ceph in depth, then replicate the tests and findings on another 2 nodes, upgrade to 10Gb Ethernet and go live. I don't want to start with bad hardware from the beginning, so I'm reading a lot to find the right config for our needs.
However, the LSI specs are a nightmare... they are completely confusing.

About the kind of use, keep in mind that we need Ceph to run Xen VMs with high availability (LUNs on a NAS); they commonly run MySQL and other low-latency applications.
We'll probably implement them with OpenStack in the near future.
Let me know if you need some more specs.

Thanks,
Massimiliano Cuttini

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
