On 15/12/14 17:44, ceph@panther-it.nl wrote:
> I have the following setup:
> Node1 = 8 x SSD
> Node2 = 6 x SATA
> Node3 = 6 x SATA
Having 1 node different from the rest is not going to help... you will
probably get better results if you sprinkle the SSDs across all 3 nodes
and use the SATA disks for OSD data.
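
For what it's worth, a minimal sketch of that layout using the
ceph-deploy HOST:DATA[:JOURNAL] form from that era; the node and device
names here are hypothetical, assuming the SSDs get redistributed and
partitioned for FileStore journals:

    # Hypothetical names: sdb is a SATA data disk on node2, /dev/sdg1 a
    # partition on one of the redistributed SSDs, used as its journal.
    # Repeat per SATA disk on each node.
    ceph-deploy osd prepare node2:sdb:/dev/sdg1
    ceph-deploy osd activate node2:/dev/sdb1:/dev/sdg1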
Hello,
There have been many, many threads about this.
Google is your friend, as is keeping an eye on threads in this ML.
On Mon, 15 Dec 2014 05:44:24 +0100 ceph@panther-it.nl wrote:
> I have the following setup:
> Node1 = 8 x SSD
> Node2 = 6 x SATA
> Node3 = 6 x SATA
> Client1
> All Cisco
I have the following setup:
Node1 = 8 x SSD
Node2 = 6 x SATA
Node3 = 6 x SATA
Client1
All Cisco UCS running RHEL6.5 + kernel 3.18.0 + ceph 0.88.
A "dd bs=4k oflag=direct" test directly on a OSD disk shows me:
Node1 = 60MB/s
Node2 = 30MB/s
Node3 = 30MB/s
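
For reference, a sketch of that kind of direct-I/O run; the OSD mount
point and byte count here are assumptions, not from the original post:

    # Hypothetical OSD mount point; writes 1 GiB in 4k blocks with
    # O_DIRECT (bypassing the page cache), prints throughput on exit,
    # then removes the test file.
    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=4k count=262144 oflag=direct
    rm -f /var/lib/ceph/osd/ceph-0/ddtest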
I've created 2 pools, each size=1, pg_num=