On 15/08/2019 11.42, Peter Sarossy wrote:
hey folks,
I spent the past 2 hours digging through the forums and similar sources with no
luck...
I use ceph storage for docker stacks, and this issue has taken the whole thing
down, as I cannot mount their volumes back...
Starting yesterday, some of m
Oh no, it's not that bad. It's
$ ping -s 65000 dest.inati.on
on a VPN connection that has an MTU of 1300 via IPv6. So I suspect that I
only get an answer when all 51 fragments come back intact. It's clear
that big packets with lots of fragments are far more affected by packet loss
than a 64-byte ping.
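If it helps to narrow it down, a quick way to see where fragmentation starts hurting is to repeat the ping with fragmentation prohibited and a payload sized to the path MTU. A rough sketch using iputils ping; 1252 = 1300 minus the 40-byte IPv6 header and the 8-byte ICMPv6 header, and the hostname is just the placeholder from above:
# should fit a 1300-byte path MTU in a single, unfragmented packet
$ ping -6 -M do -c 5 -s 1252 dest.inati.on
# one byte more should fail (or report "message too long") instead of fragmenting
$ ping -6 -M do -c 5 -s 1253 dest.inati.on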
FYI, I just had an issue with radosgw / civetweb. I wanted to upload a 40 MB
file; it started with poor transfer speed, which kept decreasing and was
down to 20 KB/s by the time I stopped the transfer. I had to kill radosgw and
start it again to get 'normal' operation back.
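In case someone else hits this: on a systemd-based deployment, the kill/restart is usually just a unit restart like the sketch below (the instance name rgw.$(hostname -s) is only the common default and is an assumption here):
# restart a single radosgw instance
systemctl restart ceph-radosgw@rgw.$(hostname -s).service
# or restart every radosgw instance on this host
systemctl restart ceph-radosgw.target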
On Wed, Aug 14, 2019 at 12:12:36PM -0500, Reed Dier wrote:
> My main metrics source is the influx plugin, but I enabled the
> prometheus plugin to get access to the per-rbd image metrics. I may
> disable prometheus and see if that yields better stability, until
> possibly the influx plugin gets u
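For reference, toggling a mgr plugin is a single command each way; a minimal sketch, assuming the standard module names:
# see which mgr modules are currently enabled
ceph mgr module ls
# disable the prometheus exporter; "ceph mgr module enable prometheus" turns it back on
ceph mgr module disable prometheus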
Hi,
I'm playing around with the ceph balancer in Luminous and Nautilus. While
tuning some balancer settings I experienced some problems with Nautilus.
In Luminous I could configure the max_misplaced value like this:
ceph config-key set mgr/balancer/max_misplaced 0.002
With the same command in naut
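If it helps with the Nautilus side: as far as I can tell, the balancer there no longer reads mgr/balancer/max_misplaced; the threshold was folded into the global mgr option target_max_misplaced_ratio, so the rough equivalent would be:
# Nautilus: set the misplaced-objects threshold via the mgr option
ceph config set mgr target_max_misplaced_ratio 0.002
# check the value the mgr actually uses
ceph config get mgr target_max_misplaced_ratio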
I have a fairly dormant ceph luminous cluster on centos7 with stock
kernel, and thought about upgrading it before putting it to more use.
I can remember some page on the ceph website that had specific
instructions mentioning upgrading from luminous, but I can't find it
anymore; this page[0]
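Until that page turns up, the usual pre-flight steps before upgrading from Luminous look roughly like this (only a sketch, not a substitute for the release notes; nautilus as the target release is an assumption):
# confirm all daemons currently report the same (luminous) release
ceph versions
# avoid rebalancing while daemons restart during the upgrade
ceph osd set noout
# ...upgrade and restart mons, then mgrs, then OSDs, then MDS/RGW...
# once every OSD runs the new release:
ceph osd require-osd-release nautilus
ceph osd unset noout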
I had already disabled the prometheus plugin (again, I was only using it for the rbd stats),
but will also remove the rbd pool from the rbd_support module, as well as
disable the rbd_support module.
It seems slightly more stable so far, but still not as rock solid as it was before.
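For completeness, the per-image stats pools are controlled by the prometheus module's rbd_stats_pools option, so removing the pool would look roughly like this (assuming the option name hasn't changed in your release):
# list the pools currently enabled for per-rbd image stats
ceph config get mgr mgr/prometheus/rbd_stats_pools
# clear the option so no per-image stats are gathered
ceph config set mgr mgr/prometheus/rbd_stats_pools ""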
Thanks,
Reed
> On Aug 15,
Pfff, you are right, I don't even know which one is the latest;
indeed, it's Nautilus.
-----Original Message-----
Subject: Re: [ceph-users] Upgrade luminous -> mimic, any pointers?
Why would you go to Mimic instead of Nautilus?
>
>
>
> I have a fairly dormant ceph luminous cluster on ce
rbd -p kube bench kube/bench --io-type write --io-threads 1 --io-total
10G --io-pattern rand
elapsed:14 ops: 262144 ops/sec: 17818.16 bytes/sec: 72983201.32
It's a totally unrealistic number: 17,818 single-threaded write ops/sec works out
to roughly 56 µs per 4 KB write, which a replicated, networked cluster can't deliver.
Something is wrong with the test.
Test it with `fio` please:
fio -ioengine=rbd -name=test -bs=4k
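A fuller invocation might look like the sketch below; pool kube and image bench are taken from the rbd bench line above, clientname=admin assumes the default admin keyring, and iodepth=1 keeps it comparable to --io-threads 1:
# 4k random writes against the same image, queue depth 1, for 60 seconds
fio --ioengine=rbd --clientname=admin --pool=kube --rbdname=bench \
    --name=test --rw=randwrite --bs=4k --iodepth=1 --direct=1 \
    --runtime=60 --time_based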
Unfortunately the SCSI reset on this VM happened again last night, so this
hasn't resolved the issue.
Thanks for the suggestion though.
Rich
The overall latency in the cluster may be too high, but it was worth a
shot. I've noticed that these settings really narrow the latency
distribution, so it becomes more predictable, and they prevented some single
VMs from hanging for long periods of time while others worked just fine,
usually when on