Dell calls those sorts of drives "Non-RAID" drives, and that's what you would set 
them to be in either the iDRAC or the PERC BIOS.
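
For reference, if the controller firmware supports it, the same change can also be 
made from the OS with perccli or megacli, roughly along these lines (a sketch only; 
the enclosure:slot numbers 32:0 and adapter 0 are placeholders, and the exact syntax 
varies by tool and firmware version):

# perccli: flip the disk in enclosure 32, slot 0 to Non-RAID (JBOD)
perccli /c0/e32/s0 set jbod

# megacli equivalent, on controllers that expose a JBOD mode
megacli -AdpSetProp EnableJBOD 1 -a0
megacli -PDMakeJBOD -PhysDrv[32:0] -a0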
 
 

Andrew Ferris
Network & System Management
UBC Centre for Heart & Lung Innovation
St. Paul's Hospital, Vancouver
http://www.hli.ubc.ca
 


>>> Steven Vacaroaia <ste...@gmail.com> 1/31/2018 8:45 AM >>>
Hi Sean,

Thanks for your willingness to help

I used RAID0 because HBA mode is not available on the PERC H710.
Did I misunderstand you?
How do you set the drives to Non-RAID mode?

Running fio with more jobs provides results closer to the expected throughput 
(450 MB/s) for the SSD drive:

fio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4k --numjobs=20 
--iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
fio-2.2.8
Starting 20 processes
Jobs: 20 (f=20): [W(20)] [100.0% done] [0KB/400.9MB/0KB /s] [0/103K/0 iops] 
[eta 00m:00s]


Steven

On 31 January 2018 at 11:25, Sean Redmond <sean.redmo...@gmail.com> wrote:


Hi Steven,

That's interesting. I use the same card, but in Non-RAID mode; that was a 
historical decision, so not much to share with you on that. It might be worth 
doing a fio test of RAID 0 vs Non-RAID mode to see what the difference is, if any.

I have not used the SSD you are using. Did you manage to find anyone else using 
the same one to compare fio tests with?

Did you test the SSD in another server or desktop with an HBA-mode controller, to 
show this only happens when using the PERC?

Thanks

On Wed, Jan 31, 2018 at 3:57 PM, Steven Vacaroaia <ste...@gmail.com> wrote:



Raid0

Hardware
Controller
ProductName : PERC H710 Mini(Bus 0, Dev 0)
SAS Address : 544a84203afa4a00
FW Package Version: 21.3.5-0002
Status : Optimal
BBU


On 31 January 2018 at 10:48, Sean Redmond <sean.redmo...@gmail.com> wrote:


Hi,

I have seen the Dell R730XD used extensively with a PERC controller for Ceph and 
have not had any real performance issues to speak of. Can you share the exact 
model of the PERC controller?

Are you exposing the disks as individual RAID 0 virtual disks or in Non-RAID mode?

Thanks

On Wed, Jan 31, 2018 at 3:39 PM, Steven Vacaroaia <ste...@gmail.com> wrote:


Hi,

Is there anyone using Dell servers with PERC controllers willing to provide 
advice on configuring them for good throughput performance?

I have 3 servers with 1 SSD and 3 HDDs each.
All drives are enterprise grade.

Connector : 00<Internal><Encl Pos 1 >: Slot 0
Vendor Id : TOSHIBA
Product Id : PX04SHB040
State : Online
Disk Type : SAS,Solid State Device
Capacity : 372.0 GB
Power State : Active

Connector : 00<Internal><Encl Pos 1 >: Slot 1
Vendor Id : TOSHIBA
Product Id : AL13SEB600
State : Online
Disk Type : SAS,Hard Disk Device
Capacity : 558.375 GB
Power State : Active


I created the OSDs with separate WAL (1 GB) and DB (15 GB) partitions on the SSD.
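
For reference, an OSD with that layout can be created with ceph-volume roughly as 
follows (a sketch only; /dev/sdb, /dev/sda1 and /dev/sda2 are placeholders for the 
data HDD and the WAL/DB partitions on the SSD):

ceph-volume lvm create --bluestore --data /dev/sdb --block.wal /dev/sda1 --block.db /dev/sda2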

rados bench results are abysmal.

The interesting part is that testing the drives with fio is also pretty bad; that 
is why I am thinking my controller config might be the culprit.

See below for results using various configurations.

Commands used 

megacli -LDInfo -LALL -a0

fio --filename=/dev/sd[a-b] --direct=1 --sync=1 --rw=write --bs=4k --numjobs=5 
--iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test



SSD drive
Current Cache Policy: WriteThrough, ReadAheadNone, Cached, No Write Cache if 
Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/125.2MB/0KB /s] [0/32.5K/0 iops] [eta 
00m:00s]

Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if 
Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/224.8MB/0KB /s] [0/57.6K/0 iops] [eta 
00m:00s]



HDD drive

Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/77684KB/0KB /s] [0/19.5K/0 iops] [eta 
00m:00s]


Current Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/89036KB/0KB /s] [0/22.3K/0 iops] [eta 
00m:00s]
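
For completeness, the cache policies compared above can be switched per virtual 
disk with megacli roughly along these lines (a sketch only; -L0 and -a0 are 
placeholders for the virtual disk and adapter, and exact flags vary by megacli 
version):

megacli -LDSetProp WT -L0 -a0
megacli -LDSetProp WB -L0 -a0
megacli -LDSetProp -Direct -L0 -a0
megacli -LDSetProp -Cached -L0 -a0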

rados bench -p rbd 120 write -t 64 -b 4096 --no-cleanup && rados bench -p rbd 
120 -t 64 seq

Total time run: 120.009091
Total writes made: 630542
Write size: 4096
Object size: 4096
Bandwidth (MB/sec): 20.5239
Stddev Bandwidth: 2.43418
Max bandwidth (MB/sec): 37.0391
Min bandwidth (MB/sec): 15.9336
Average IOPS: 5254
Stddev IOPS: 623
Max IOPS: 9482
Min IOPS: 4079
Average Latency(s): 0.0121797
Stddev Latency(s): 0.0208528
Max latency(s): 0.428262
Min latency(s): 0.000859286


Total time run: 88.954502
Total reads made: 630542
Read size: 4096
Object size: 4096
Bandwidth (MB/sec): 27.6889
Average IOPS: 7088
Stddev IOPS: 1701
Max IOPS: 8923
Min IOPS: 1413
Average Latency(s): 0.00901481
Max latency(s): 0.946848
Min latency(s): 0.000286236


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 

