Hi Guys,

I'm in the process of configuring a Ceph cluster and am getting some less than
ideal performance, so I need some help figuring it out!

This cluster will really only be used as backup storage for Veeam, so I don't
need a huge amount of random I/O, but good sequential write throughput would be ideal.

At the moment I am only getting about 50 MB/s according to rados bench.
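
For reference, the rados bench runs are along these lines (pool name, duration
and thread count here are illustrative rather than the exact values used):

  # 60 seconds of 4 MB object writes, 16 concurrent ops, against a test pool
  rados bench -p rbd 60 write -t 16 --no-cleanup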

Our environment looks like this:

3x OSD Servers:

*         1x E3 CPU

*         24 GB RAM

*         8x 3TB Enterprise Drives

*         2x SSDs for journals - I haven't used enterprise ones, just whatever
I had around the office. I figure this could be the bottleneck, but I'd have
thought I'd still get more than 50 MB/s (see the synchronous-write test
sketched after the hardware list).

*         1x 2-port 10 GbE NIC with separate public / cluster (replication) networks

3x Monitor Servers:

*         2x quad-core CPUs

*         16GB RAM

*         2x 300 GB SAS drives in RAID 1

*         1x 2-port 10 GbE NIC
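
On the journal SSDs mentioned above: a synchronous-write test roughly like the
one below should show whether those office-grade SSDs are the limit. The device
path is a placeholder, and writing to a raw device is destructive, so it needs
to point at a spare partition on the journal SSD.

  # O_DIRECT + O_DSYNC writes approximate how the filestore journal writes to the SSD
  # /dev/sdX is a placeholder for (a spare partition on) the journal SSD
  dd if=/dev/zero of=/dev/sdX bs=4k count=100000 oflag=direct,dsync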

My benchmark results are:

dd on the RBD device is showing ~50-60 MB/s (the rough dd invocations are sketched after the fio output)
dd run directly against the disks is showing ~160 MB/s
rados bench is showing ~50 MB/s
The network is showing 9.6 Gbit/s on all servers
fio is showing the below:

  WRITE: io=1024.0MB, aggrb=3099KB/s, minb=3099KB/s, maxb=3099KB/s, mint=338300msec, maxt=338300msec
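
The dd tests were simple sequential writes, roughly along these lines (device
paths, block size and count are placeholders rather than the exact commands):

  # sequential write through the mapped RBD image, bypassing the page cache
  dd if=/dev/zero of=/dev/rbd0 bs=4M count=1024 oflag=direct

  # the same write against one of the raw 3TB data disks for comparison
  dd if=/dev/zero of=/dev/sdb bs=4M count=1024 oflag=direct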


My Ceph config is as follows:

mon_host = 192.168.78.101,192.168.78.102,192.168.78.103
public_network = 192.168.78.0/24
cluster_network = 192.168.79.0/24
ms_bind_ipv6 = false
max_open_files = 131072
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_journal_size = 10000
osd_pool_default_size = 2  # Write an object n times.
osd_pool_default_min_size = 1 # Allow writing n copy in a degraded state.
osd_pool_default_pg_num = 672
osd_pool_default_pgp_num = 672
osd_crush_chooseleaf_type = 1
mon_osd_full_ratio = .75
mon_osd_nearfull_ratio = .65
osd_backfill_full_ratio = .65
mon_clock_drift_allowed = .15
mon_clock_drift_warn_backoff = 30
mon_osd_down_out_interval = 300
mon_osd_report_timeout = 300
filestore_xattr_use_omap = true
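
For reference, these are the sort of filestore/journal throughput knobs that
the tuning guides usually point at for sequential writes; the values below are
illustrative examples, not settings I'm actually running:

# example values only - taken from common filestore tuning guides
filestore_max_sync_interval = 10
filestore_queue_max_ops = 500
filestore_queue_max_bytes = 104857600
journal_max_write_bytes = 104857600
journal_max_write_entries = 1000
osd_op_threads = 4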


Any assistance would be greatly appreciated!

Cheers,

Ben