On Tue, May 9, 2017 at 6:01 PM, Webert de Souza Lima wrote:
> Hello all,
>
> I've been using cephfs for a while but never really evaluated its
> performance.
> As I put up a new ceph cluster, I thought that I should run a benchmark to
> see if I'm going the right way.
>
> From the results I got, I see
On Tue, May 9, 2017 at 5:23 PM, Webert de Souza Lima wrote:
> Hi,
>
> by issuing `ceph daemonperf mds.x` I see the following columns:
>
> -mds-- --mds_server-- ---objecter--- -mds_cache- ---mds_log
> rlat inos caps|hsr hcs hcr |writ read actv|recd recy stry purg|segs evts
>
Hi!
I increased pg_num and pgp_num for pool default.rgw.buckets.data from
2048 to 4096, and the situation seems to have become a bit better: the cluster
now dies after 20-30 PUTs rather than after the first one. Could someone please
give me some recommendations on how to rescue the cluster?
On 27.04.2017 09:59, Anton Dmitri
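For reference, the kind of commands involved in such a split look roughly like
this; a sketch only, assuming the standard ceph CLI, with the pool name taken
from the message above:

    # increase placement groups, then let placement follow
    ceph osd pool set default.rgw.buckets.data pg_num 4096
    ceph osd pool set default.rgw.buckets.data pgp_num 4096
    # watch the resulting backfill before making further changes
    ceph -s
    ceph -w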
Re-adding the list:
So with email, you're talking about lots of small reads and writes. In my
experience with DICOM data (thousands of 20KB files per directory), cephfs
doesn't perform very well at all on platter drives. I haven't experimented
with pure SSD configurations, so I can't comment on that.
On Tue, May 9, 2017 at 4:40 PM, Brett Niver wrote:
> What is your workload like? Do you have a single or multiple active
> MDS ranks configured?
User traffic is heavy. I can't really say in terms of MB/s or IOPS, but it's
an email server with 25k+ users, usually about 6k simultaneously connected.
> On 9 May 2017 at 20:26, Brady Deetz wrote:
>
>
> If I'm reading your cluster diagram correctly, I'm seeing a 1gbps
> interconnect, presumably cat6. Due to the additional latency of performing
> metadata operations, I could see cephfs performing at those speeds. Are you
> using jumbo frames?
What is your workload like? Do you have a single or multiple active
MDS ranks configured?
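For anyone following along, roughly how one can check and change the number of
active MDS ranks; "cephfs" is a placeholder filesystem name, and on releases of
that era multiple active ranks still had to be explicitly enabled:

    ceph mds stat                        # current ranks and their states
    ceph fs get cephfs | grep max_mds    # how many active ranks are allowed
    ceph fs set cephfs max_mds 2         # example: allow two active ranks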
On Tue, May 9, 2017 at 3:10 PM, Webert de Souza Lima wrote:
> That 1gbps link is the only option I have for those servers, unfortunately.
> It's all dedicated server rentals from OVH.
> I don't have inform
That 1gbps link is the only option I have for those servers, unfortunately.
It's all dedicated server rentals from OVH.
I don't have information regarding the internals of the vrack.
So from what you said, I understand that one should expect a performance drop
in comparison to ceph rbd using the sam
If I'm reading your cluster diagram correctly, I'm seeing a 1gbps
interconnect, presumably cat6. Due to the additional latency of performing
metadata operations, I could see cephfs performing at those speeds. Are you
using jumbo frames? Also are you routing?
If you're routing, the router will intr
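As a quick aside, one way to verify whether jumbo frames are actually in effect
end to end; "eth0" and the target address are placeholders:

    ip link show eth0 | grep mtu         # MTU should read 9000 for jumbo frames
    ping -M do -s 8972 <other_node_ip>   # 8972 + 28 bytes of headers = 9000;
                                         # this fails if any hop drops jumbo frames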
Hello all,
I've been using cephfs for a while but never really evaluated its
performance.
As I put up a new ceph cluster, I thought that I should run a benchmark to
see if I'm going the right way.
From the results I got, I see that RBD performs *a lot* better in comparison
to cephfs.
The cluster is
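For the curious, a minimal sketch of the sort of comparison being described;
pool and mount-point names are placeholders, and this is not necessarily the
exact benchmark used in the thread:

    # raw RADOS write baseline (60 seconds, keep objects for a later read pass)
    rados bench -p rbd 60 write --no-cleanup
    # small random writes against a cephfs mount
    fio --name=cephfs-test --directory=/mnt/cephfs \
        --rw=randwrite --bs=4k --size=1G --numjobs=4 --group_reporting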
Hi,
by issuing `ceph daemonperf mds.x` I see the following columns:
-mds-- --mds_server-- ---objecter--- -mds_cache- ---mds_log
rlat inos caps|hsr hcs hcr |writ read actv|recd recy stry purg|segs evts subm|
   0   95   41|  0   0   0 |   0    0    0|   0    0   25    0|   1
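The abbreviated column names map onto the MDS perf counters; the full names and
descriptions can be pulled from the same admin socket, using the same mds.x
name as above:

    ceph daemon mds.x perf dump     # raw counter values, grouped as above
    ceph daemon mds.x perf schema   # description of each counter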
You can modify the settings while a node is being added. It's actually a
good time to do it. Note that when you decrease the settings, it doesn't
stop the current PGs from backfilling; it just stops the next ones from
starting until there is a slot open on the OSD according to the new setting.
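A sketch of how those throttles are typically adjusted on a running cluster;
the values are only illustrative, and injectargs changes do not survive an OSD
restart:

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    ceph daemon osd.0 config get osd_max_backfills   # confirm on one OSD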
Thanks, I had a feeling one of these was too high. Once the current
node finishes I will try again with your recommended settings.
Dan
On 05/08/2017 05:03 PM, David Turner wrote:
WOW!!! Those are some awfully high backfilling settings you have
there. They are 100% the reason that your custo
Hi,
checking the actual value of osd_max_backfills on our cluster (0.94.9),
I also made a config diff of the OSD configuration (ceph daemon osd.0
config diff) and wondered why there's a displayed default of 10, which
differs from the documented default at
http://docs.ceph.com/docs/master/rados/conf
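Roughly how one can see what an OSD is actually running with, compared with the
built-in defaults (osd.0 as in the message above):

    ceph daemon osd.0 config get osd_max_backfills
    ceph daemon osd.0 config diff                    # only values differing from defaults
    ceph daemon osd.0 config show | grep backfill    # everything backfill-related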