Re: [ceph-users] cephfs, low performances

2015-12-31 Thread Robert LeBlanc
Because Ceph's data distribution is not perfectly even, some drives end up
with more PGs/objects than others, and those drives become a bottleneck for
the entire cluster. The current IO scheduler poses some challenges in this
regard. I've implemented a new scheduler with which I've seen much better
drive utilization across the cluster, a 3-17% performance increase, and a
substantial reduction in client performance deviation (all clients get
about the same amount of performance). Hopefully we will be able to get
that into Jewel.
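
A rough way to see that unevenness on a live cluster is to compare per-OSD
utilization; a minimal sketch, assuming the ceph CLI and an admin keyring are
present on the node, and a release where "ceph osd df" exists (Hammer and later):

```shell
# Sketch: show per-OSD weight and %USE (and PG counts in newer releases)
# to spot the over-full drive that bottlenecks the cluster.
# Guarded in case the ceph CLI is not installed on this node.
if command -v ceph >/dev/null 2>&1; then
    ceph osd df tree
else
    echo "ceph CLI not available here"
fi
```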

Robert LeBlanc

Sent from a mobile device; please excuse any typos.
On Dec 31, 2015 12:20 AM, "Francois Lafont"  wrote:

> Hi,
>
> On 30/12/2015 10:23, Yan, Zheng wrote:
>
> >> And it seems to me that I can see the bottleneck of my little cluster
> >> (only 5 OSD servers, each with 4 OSD daemons). According to the "atop"
> >> command, I can see that some disks (4TB SATA 7200rpm Western Digital
> >> WD4000FYYZ) are very busy. It's curious, because during the bench some
> >> disks are very busy and some other disks are not. But I think the reason
> >> is that it's a little cluster: with just 15 OSDs (the 5 other OSDs are
> >> full-SSD and dedicated to cephfs metadata), I can't expect a perfectly
> >> even distribution of data, especially when the bench concerns just a
> >> specific file of a few hundred MB.
> >
> > do these disks have the same size and performance? Large disks (with
> > higher weights) or slow disks are likely to be busy.
>
> The disks are exactly the same model with the same size (4TB SATA 7200rpm
> Western Digital WD4000FYYZ). I'm not completely sure, but it seems to me
> that in one specific node I have a disk which is a little slower than the
> others (maybe ~50-75 IOPS less), and it seems to be the busiest disk
> during a bench.
>
> Is it possible (or frequent) to have performance differences between disks
> of exactly the same model?
>
> >> That being said, when you talk about "using buffered IO", I'm not sure I
> >> understand which fio option that refers to. Is it the --buffered option?
> >> Because with this option I have noticed no change in IOPS. Personally, I
> >> was able to increase global IOPS only with the --numjobs option.
> >>
> >
> > I didn't make it clear. I actually meant buffered writes (add the
> > --rwmixread=0 option to fio).
>
> But with fio, if I set "--readwrite=randrw --rwmixread=0", isn't that
> completely equivalent to just setting "--readwrite=randwrite"?
>
> > In your test case, writes mix with reads.
>
> Yes indeed.
>
> > reads are synchronous when there is a cache miss.
>
> You mean that reads are synchronous (on a cache miss) if I set --direct=0,
> is that correct?
> Is that true for any file system, or just for cephfs?
>
> Regards.
>
> --
> François Lafont
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


Re: [ceph-users] Random Write Fio Test Delay

2015-12-31 Thread Jan Schermer
Is it only on the first run or on every run?
Fio first creates the file, and that can take a while depending on how
fallocate() works on your system. In other words, you are probably waiting for
a 1G file to be written before the test actually starts.
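
One way to rule that out is to take the file layout out of the measured window:
pre-create the data file, then point fio at the existing file. A minimal sketch;
the path and small size here are illustrative, not from the original post (use
the real 1G size for a real test):

```shell
# Sketch: pre-create the data file (fallocate, dd as fallback), then run
# fio against it so the timed job skips the layout step.
f=$(mktemp)
fallocate -l 8M "$f" 2>/dev/null || dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null
if command -v fio >/dev/null 2>&1; then
    # --size matches the pre-created file, so fio does not extend it
    fio --name=prelaid --filename="$f" --rw=randwrite --bs=4k --size=8M >/dev/null \
        && echo "fio run ok"
else
    echo "fio not installed; pre-created $(wc -c < "$f") bytes"
fi
rm -f "$f"
```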

Jan


> On 31 Dec 2015, at 04:49, Sam Huracan  wrote:
> 
> Hi Ceph-users.
> 
> I have an issue with my new Ceph storage, which is the backend for OpenStack
> (Cinder, Glance, Nova).
> When I test random writes in VMs with fio, there is a long delay (60s) before
> fio begins running.
> Here is my test script:
> 
> fio --directory=/root/ --direct=1 --rw=randwrite --bs=4k --size=1G 
> --numjobs=3 --iodepth=4 --runtime=60 --group_reporting --name=testIOPslan1 
> --output=testwriterandom
> 
> It is strange, because I have never seen this behavior when testing on a
> physical machine.
> 
> My Ceph system includes 5 nodes with 2 SAS 15k disks per node; the journal
> and filestore share one disk. Public network: 1 Gbps/node, replica network:
> 2 Gbps/node.
> 
> Here is my ceph.conf on the compute node:
> http://pastebin.com/raw/wsyDHiRw
> 
> 
> Could you help me solve this issue?
> 
> Thanks and regards.
> 
> 
> 


[ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2015-12-31 Thread Maruthi Seshidhar
hi fellow users,

I am setting up a Ceph cluster with 3 monitors and 4 OSDs on CentOS 7.1.

Each of the nodes has 2 NICs.
10.31.141.0/23 is the public n/w and 192.168.10.0/24 is the cluster n/w.

I completed the "Preflight Checklist". But in the "Storage Cluster Quick
Start", while doing "ceph-deploy create-initial", I see the error "Some
monitors have still not reached quorum".

[ceph@ceph-mgmt ceph-cluster]$ ceph-deploy --overwrite-conf mon
create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.30): /usr/bin/ceph-deploy
--overwrite-conf mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username  : None
[ceph_deploy.cli][INFO  ]  verbose   : False
[ceph_deploy.cli][INFO  ]  overwrite_conf: True
[ceph_deploy.cli][INFO  ]  subcommand: create-initial
[ceph_deploy.cli][INFO  ]  quiet : False
[ceph_deploy.cli][INFO  ]  cd_conf   :

[ceph_deploy.cli][INFO  ]  cluster   : ceph
[ceph_deploy.cli][INFO  ]  func  : 
[ceph_deploy.cli][INFO  ]  ceph_conf : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  keyrings  : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon1
ceph-mon2 ceph-mon3
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon1 ...
[ceph-mon1][DEBUG ] connection detected need for sudo
[ceph-mon1][DEBUG ] connected to host: ceph-mon1
[ceph-mon1][DEBUG ] detect platform information from remote host
[ceph-mon1][DEBUG ] detect machine type
[ceph-mon1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: CentOS Linux 7.1.1503 Core
[ceph-mon1][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon1][DEBUG ] get remote short hostname
[ceph-mon1][DEBUG ] deploying mon to ceph-mon1
[ceph-mon1][DEBUG ] get remote short hostname
[ceph-mon1][DEBUG ] remote hostname: ceph-mon1
[ceph-mon1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon1][DEBUG ] create the mon path if it does not exist
[ceph-mon1][DEBUG ] checking for done path:
/var/lib/ceph/mon/ceph-ceph-mon1/done
[ceph-mon1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon1][DEBUG ] create the init path if it does not exist
[ceph-mon1][INFO  ] Running command: sudo systemctl enable ceph.target
[ceph-mon1][INFO  ] Running command: sudo systemctl enable
ceph-mon@ceph-mon1
[ceph-mon1][INFO  ] Running command: sudo systemctl start ceph-mon@ceph-mon1
[ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph
--admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph-mon1][DEBUG ]

[ceph-mon1][DEBUG ] status for monitor: mon.ceph-mon1
[ceph-mon1][DEBUG ] {
[ceph-mon1][DEBUG ]   "election_epoch": 0,
[ceph-mon1][DEBUG ]   "extra_probe_peers": [
[ceph-mon1][DEBUG ] "192.168.10.3:6789/0",
[ceph-mon1][DEBUG ] "192.168.10.4:6789/0",
[ceph-mon1][DEBUG ] "192.168.10.5:6789/0"
[ceph-mon1][DEBUG ]   ],
[ceph-mon1][DEBUG ]   "monmap": {
[ceph-mon1][DEBUG ] "created": "0.00",
[ceph-mon1][DEBUG ] "epoch": 0,
[ceph-mon1][DEBUG ] "fsid": "d6ca9ac6-bfb9-4464-a128-459068637924",
[ceph-mon1][DEBUG ] "modified": "0.00",
[ceph-mon1][DEBUG ] "mons": [
[ceph-mon1][DEBUG ]   {
[ceph-mon1][DEBUG ] "addr": "10.31.141.76:6789/0",
[ceph-mon1][DEBUG ] "name": "ceph-mon1",
[ceph-mon1][DEBUG ] "rank": 0
[ceph-mon1][DEBUG ]   },
[ceph-mon1][DEBUG ]   {
[ceph-mon1][DEBUG ] "addr": "0.0.0.0:0/1",
[ceph-mon1][DEBUG ] "name": "ceph-mon2",
[ceph-mon1][DEBUG ] "rank": 1
[ceph-mon1][DEBUG ]   },
[ceph-mon1][DEBUG ]   {
[ceph-mon1][DEBUG ] "addr": "0.0.0.0:0/2",
[ceph-mon1][DEBUG ] "name": "ceph-mon3",
[ceph-mon1][DEBUG ] "rank": 2
[ceph-mon1][DEBUG ]   }
[ceph-mon1][DEBUG ] ]
[ceph-mon1][DEBUG ]   },
[ceph-mon1][DEBUG ]   "name": "ceph-mon1",
[ceph-mon1][DEBUG ]   "outside_quorum": [
[ceph-mon1][DEBUG ] "ceph-mon1"
[ceph-mon1][DEBUG ]   ],
[ceph-mon1][DEBUG ]   "quorum": [],
[ceph-mon1][DEBUG ]   "rank": 0,
[ceph-mon1][DEBUG ]   "state": "probing",
[ceph-mon1][DEBUG ]   "sync_provider": []
[ceph-mon1][DEBUG ] }
[ceph-mon1][DEBUG ]

[ceph-mon1][INFO  ] monitor: mon.ceph-mon1 is running
[ceph-mon1][INFO  ] Running command: sudo ceph --cluster=ceph
--admin-daemon /var/run/ceph/ceph-mon.ceph-mon1.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon2 ...
Warning:

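In the log above, ceph-mon2 and ceph-mon3 appear in the monmap as 0.0.0.0:0
while the probe peers are on the 192.168.10.x cluster network, which suggests
the monitors were declared with cluster-network addresses. One thing to check
is that ceph.conf declares the mons on the public network; a hypothetical
fragment (the ceph-mon2/ceph-mon3 addresses below are placeholders, not values
from this thread):

```ini
# Hypothetical ceph.conf fragment. Monitors must be reachable on the
# public network; the .77/.78 addresses are placeholders.
[global]
public network      = 10.31.141.0/23
cluster network     = 192.168.10.0/24
mon initial members = ceph-mon1, ceph-mon2, ceph-mon3
mon host            = 10.31.141.76, 10.31.141.77, 10.31.141.78
```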
Re: [ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2015-12-31 Thread Wade Holler
I assume you have tested with firewalld disabled?

Best Regards
Wade
On Thu, Dec 31, 2015 at 9:13 PM Maruthi Seshidhar <
maruthi.seshid...@gmail.com> wrote:

> [...]

Re: [ceph-users] ceph-deploy create-initial errors out with "Some monitors have still not reached quorum"

2015-12-31 Thread Maruthi Seshidhar
hi Wade,

Yes firewalld is disabled on all nodes.

[ceph@ceph-mon1 ~]$ systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
   Active: inactive (dead)
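
Beyond the firewall, it may be worth confirming that each monitor can actually
reach the others on TCP 6789 over the public addresses. A quick sketch, run
from each mon node (hostnames are the ones from this thread; they must resolve
to the 10.31.141.x public addresses, not the cluster network):

```shell
# Sketch: probe TCP 6789 on each monitor host using bash's /dev/tcp.
for h in ceph-mon1 ceph-mon2 ceph-mon3; do
    if timeout 3 bash -c "echo > /dev/tcp/$h/6789" 2>/dev/null; then
        echo "$h:6789 reachable"
    else
        echo "$h:6789 NOT reachable"
    fi
done
```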

thanks,
Maruthi.

On Fri, Jan 1, 2016 at 7:46 AM, Wade Holler  wrote:

> I assume you have tested with firewalld disabled ?
>
> Best Regards
> Wade
> On Thu, Dec 31, 2015 at 9:13 PM Maruthi Seshidhar <
> maruthi.seshid...@gmail.com> wrote:
>
>> [...]

Re: [ceph-users] Random Write Fio Test Delay

2015-12-31 Thread Sam Huracan
Yep, it happens on every run. I have checked on other VMs that do not use
Ceph, and they had almost no delay, although the results were about the same:
576 IOPS for the Ceph VM and 650 for the non-Ceph VM. I use one image for all
tests: Ubuntu 14.04.1, kernel 3.13.0-32-generic.
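
As a sanity check on those numbers: with --numjobs=3 and --iodepth=4 there are
12 writes in flight, so 576 IOPS implies roughly 21 ms of average write
latency, which seems plausible for 1 Gbps replication with journal and
filestore on the same spindles. The arithmetic (Little's law):

```shell
# Little's law: average latency = IOs in flight / throughput.
# Numbers from this thread: 3 jobs x 4 iodepth, 576 IOPS.
awk 'BEGIN { inflight = 3 * 4; iops = 576;
             printf "avg write latency ~ %.1f ms\n", 1000 * inflight / iops }'
```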



2015-12-31 23:51 GMT+07:00 Jan Schermer :

> Is it only on the first run or on every run?
> Fio first creates the file, and that can take a while depending on how
> fallocate() works on your system. In other words, you are probably waiting
> for a 1G file to be written before the test actually starts.
>
> Jan
>
>
> On 31 Dec 2015, at 04:49, Sam Huracan  wrote:
>
> [...]