All of our DC S3500 and S3510 drives ran out of writes this week after being in
production for 1.5 years as journal drives for 4 disks each. Having 43 drives
report less than 1% of their rated writes left is scary. I'd recommend having a
monitoring check for your SSDs' durability in Ceph.
As a note
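The kind of check I mean can be scripted around smartctl. The sketch below is only an illustration, not something from our setup: the device list is a placeholder, and Media_Wearout_Indicator is the attribute name Intel drives use, so other vendors will need a different match.

#!/bin/bash
# Rough sketch of an SSD wear check; assumes smartmontools is installed
# and that this runs as root. Adjust the device list to your journal SSDs.
for dev in /dev/sdb /dev/sdc; do
    # Column 4 of "smartctl -A" is the normalized value (100 = new, 1 = worn out).
    wear=$(smartctl -A "$dev" | awk '/Media_Wearout_Indicator/ {print $4+0}')
    echo "$dev: ${wear:-unknown} percent of rated endurance left"
    if [ -n "$wear" ] && [ "$wear" -le 10 ]; then
        echo "WARNING: $dev is close to its write endurance limit" >&2
    fi
done

The same query can of course be wrapped into whatever monitoring system you already run (Nagios check, cron mail, etc.).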
Hi Christian,
Thanks for your reply.
At 2016-12-19 14:01:57, "Christian Balzer" wrote:
Hello,
On Mon, 19 Dec 2016 13:29:07 +0800 (CST) 马忠明 wrote:
> Hi guys,
>
> So recently I was testing our Ceph cluster, which is mainly used for block
> storage (RBD).
>
> We have 30 SSD drives in total (5 storage nodes, 6 SSD drives per node). However,
> the fio results are very poor.
>
All relevant details
Hi guys,
So recently I was testing our Ceph cluster, which is mainly used for block
storage (RBD).
We have 30 SSD drives in total (5 storage nodes, 6 SSD drives per node). However,
the fio results are very poor.
We tested the workload on the SSD pool with the following fio parameters:
"fio --size=50G \
--ioe
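For illustration, a typical 4k random-write run against an RBD image with fio's librbd engine looks roughly like the one below. The options shown are not our exact settings; the pool, image and client names are placeholders, and it needs a fio build with rbd support:

# Hypothetical example: pool "ssd-pool", image "fio-test" and client "admin"
# are made-up names; substitute your own.
fio --name=rbd-4k-randwrite \
    --ioengine=rbd --clientname=admin \
    --pool=ssd-pool --rbdname=fio-test \
    --rw=randwrite --bs=4k --iodepth=32 --numjobs=1 \
    --size=50G --runtime=120 --time_based \
    --group_reporting

Comparing that against a raw run on one of the SSDs themselves (e.g. with ioengine=libaio and direct=1) makes it easier to tell how much of the gap is Ceph overhead and how much is the drives.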