Ray,
Just wondering, what’s the benefit of binding the ceph-osd daemon to a specific
CPU core?
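(For reference, the binding itself is usually done through the cpuset
controller; the sketch below assumes the cgroup v1 libcgroup tools, and the
group name, CPU number, and OSD PID are placeholders, not a recommended
layout.)

    # create a cpuset cgroup limited to CPU core 0 (and NUMA node 0)
    cgcreate -g cpuset:/osd0
    cgset -r cpuset.cpus=0 osd0
    cgset -r cpuset.mems=0 osd0
    # move an already-running ceph-osd process into that cgroup
    cgclassify -g cpuset:/osd0 <osd-pid>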
Thanks
Jian
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Ray Sun
Sent: Tuesday, June 30, 2015 12:19 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] How to use cgroup to bin
Do you mean cache tiering?
You can refer to http://ceph.com/docs/master/rados/operations/cache-tiering/
for the detailed command lines.
PGs won't migrate from pool to pool.
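(The basic setup from that page looks roughly like the following;
"cold-storage" and "hot-storage" are placeholder names for the backing pool
and the cache pool.)

    # attach the cache pool to the backing pool
    ceph osd tier add cold-storage hot-storage
    # put the cache pool in writeback mode
    ceph osd tier cache-mode hot-storage writeback
    # route client traffic through the cache pool
    ceph osd tier set-overlay cold-storage hot-storage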
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Chad
William Seys
Sent: Thu
We haven't tried Giant yet...
Thanks
Jian
-Original Message-
From: Sebastien Han [mailto:sebastien@enovance.com]
Sent: Tuesday, September 23, 2014 11:42 PM
To: Zhang, Jian
Cc: Alexandre DERUMIER; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance o
Thanks. The results look close to ours now.
Thanks
Jian
-Original Message-
From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
Sent: Friday, September 19, 2014 8:54 PM
To: Zhang, Jian
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD
Thanks for this great information.
We are using Firefly. We will also try this later.
Thanks
Jian
-Original Message-
From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
Sent: Friday, September 19, 2014 3:00 PM
To: Zhang, Jian
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users
Has anyone ever tested multi-volume performance on a *FULL* SSD setup?
We are able to get ~18K IOPS for 4K random read on a single volume with fio
(with the rbd engine) on a 12x DC3700 setup, but only get ~23K (peak) IOPS
even with multiple volumes.
Seems the maximum random write performan
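(For comparison, a single-volume 4K random-read run with fio's rbd engine
looks roughly like the sketch below; the client name, pool, image name, and
queue depth are placeholders, not our exact job file.)

    # 4K random read against one RBD image via librbd (no kernel mapping needed)
    fio --name=rbd-4k-randread --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=test-vol --rw=randread --bs=4k \
        --iodepth=32 --direct=1 --runtime=60 --time_based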