Re: [ceph-users] cephfs, low performances

2015-12-31 Thread Robert LeBlanc
Because Ceph is not perfectly distributed there will be more PGs/objects on one drive than on others. That drive then becomes a bottleneck for the entire cluster. The current IO scheduler poses some challenges in this regard. I've implemented a new scheduler with which I've seen much better drive utilization
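The imbalance described above can be illustrated with a toy placement simulation (this is a hash-based stand-in, not the actual CRUSH algorithm): even with uniform random placement, the busiest OSD ends up carrying more than its fair share of objects, and the cluster's throughput is gated by that one drive.

```python
import hashlib
from collections import Counter

def place(obj_id: str, num_osds: int) -> int:
    """Map an object to an OSD by hashing (a simplified stand-in for CRUSH)."""
    digest = hashlib.md5(obj_id.encode()).hexdigest()
    return int(digest, 16) % num_osds

# Distribute 10,000 objects across 10 OSDs and compare the busiest to the mean.
counts = Counter(place(f"obj-{i}", 10) for i in range(10_000))
mean = sum(counts.values()) / len(counts)
busiest, load = counts.most_common(1)[0]
print(f"busiest OSD {busiest}: {load} objects vs mean {mean:.0f}")
```

The busiest OSD saturates first, which is why a scheduler that accounts for per-drive load can improve whole-cluster utilization.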

Re: [ceph-users] Random Write Fio Test Delay

2015-12-31 Thread Jan Schermer
Is it only on the first run or on every run? Fio first creates the file, and that can take a while depending on how fallocate() works on your system. In other words, you are probably waiting for a 1G file to be written before the test actually starts. Jan > On 31 Dec 2015, at 04:49, Sam Huracan
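One way to avoid paying the file-layout cost on every run is to pre-create the test file once and point fio at it with `filename=`. A minimal sketch of the preallocation step on Linux (the path and 1 MiB size here are illustrative; the email's case involved a 1 GiB file):

```python
import os

# Pre-create and preallocate the file fio will reuse, so the benchmark
# skips the initial layout phase on subsequent runs.
path = "fio-testfile"
size = 1 << 20  # 1 MiB for illustration

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
try:
    # Linux-only: reserves the blocks up front without writing data.
    os.posix_fallocate(fd, 0, size)
finally:
    os.close(fd)

print(os.path.getsize(path))
```

With the file in place, passing `--filename=fio-testfile --size=1M` to fio should reuse it rather than laying it out again.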

[ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2015-12-31 Thread Maruthi Seshidhar
hi fellow users, I am setting up a ceph cluster with 3 monitors and 4 OSDs on CentOS 7.1. Each of the nodes has 2 NICs: 10.31.141.0/23 is the public n/w and 192.168.10.0/24 is the cluster n/w. Completed the "Preflight Checklist". But i
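The two subnets in the message map to the `public network` and `cluster network` settings in ceph.conf. A sketch using the addresses given (the comment reflects general Ceph behavior, not anything confirmed in this thread):

```ini
[global]
public network  = 10.31.141.0/23
cluster network = 192.168.10.0/24
# Monitors bind on the public network: mon addresses in ceph.conf must be
# 10.31.141.x addresses, not 192.168.10.x cluster-network addresses.
```

A common cause of "monitors have still not reached quorum" on dual-NIC hosts is monitor addresses resolving to the cluster network instead of the public one.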

Re: [ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2015-12-31 Thread Wade Holler
I assume you have tested with firewalld disabled? Best Regards Wade On Thu, Dec 31, 2015 at 9:13 PM Maruthi Seshidhar < maruthi.seshid...@gmail.com> wrote: > hi fellow users, > > I am setting up a ceph cluster with 3 monitors, 4 osds on CentOS 7.1 > > Each of the nodes have 2 NICs. > 10.31.141.0

Re: [ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2015-12-31 Thread Maruthi Seshidhar
hi Wade, Yes firewalld is disabled on all nodes. [ceph@ceph-mon1 ~]$ systemctl status firewalld firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled) Active: inactive (dead) thanks, Maruthi. On Fri, Jan 1, 2016 at 7:46
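With firewalld ruled out, a next step is verifying that each monitor is actually listening and reachable on its public-network address (the monitor port is 6789). A small connectivity sketch (the hostname in the example comment is illustrative):

```python
import socket

def mon_reachable(host: str, port: int = 6789, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the monitor port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage from the admin node:
#   mon_reachable("ceph-mon1")  # hostname is illustrative
```

If a monitor is unreachable here, check which interface it bound to (e.g. with `ss -tlnp`) before re-running `ceph-deploy mon create-initial`.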

Re: [ceph-users] Random Write Fio Test Delay

2015-12-31 Thread Sam Huracan
Yep, it happens on every run. I have checked on other VMs that do not use Ceph; they had almost no delay, although the results were roughly similar: 576 iops for Ceph's VM and 650 for the non-Ceph VM. I use one image for all tests, ubuntu 14.04.1, kernel 3.13.0-32-generic 2015-12-31 23:51 GMT+07:00 Jan Sch