Hello Robert,
My disks did not reach 100% on the last warning; they only climbed to 70-80%
utilization. But I see the rrqm/wrqm counters increasing...
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.00 4.00 0.00
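For reference, %util is the last column of `iostat -x` output; a quick way to flag saturation from a captured line (the sample values below are illustrative, not taken from the message above):

```shell
# Sample `iostat -x` data line (illustrative values, not from this message)
line="sda 0.00 4.00 0.00 12.00 0.00 64.00 10.67 0.05 4.17 0.00 4.17 1.33 1.60"
# The last field in this layout is %util; flag the disk if it exceeds 80%
util=$(echo "$line" | awk '{print $NF}')
awk -v u="$util" 'BEGIN { exit !(u > 80) }' && echo "saturated" || echo "ok"
```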
On Mon, Jun 10, 2019 at 1:00 AM BASSAGET Cédric <
cedric.bassaget...@gmail.com> wrote:
Hi Robert,
Before doing anything on my prod env, I generated r/w load on the Ceph cluster
using fio.
On my newest cluster, release 12.2.12, I did not manage to trigger
the (REQUEST_SLOW) warning, even when my OSD disk usage goes above 95% (fio
run from 4 different hosts)
On my prod cluster, release 12.2.9, as soo
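For anyone wanting to reproduce this kind of load test, here is a hypothetical fio job file using the rbd ioengine (the pool, image, and client names are placeholders, not from this thread):

```shell
# Write a hypothetical fio job file for the rbd ioengine
# (pool/image/client names are placeholders, not from the thread)
cat > rbd-stress.fio <<'EOF'
[rbd-stress]
ioengine=rbd
clientname=admin
pool=testpool
rbdname=testimg
rw=randwrite
bs=4k
iodepth=32
numjobs=4
runtime=120
time_based
EOF
# Run it from several hosts at once to push OSD utilization:
#   fio rbd-stress.fio
echo "wrote $(grep -c '=' rbd-stress.fio) settings"
```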
On Sun, 9 June 2019 at 18:29, wrote:
> Makes sense - makes the case for EC pools smaller, though.
>
> Sunday, 9 June 2019, 17.48 +0200 from paul.emmer...@croit.io <
> paul.emmer...@croit.io>:
>
> Caching is handled in BlueStore itself, erasure coding happens on a higher
> layer.
>
In your case, an update from 12.2.9 to 12.2.12 seems to have fixed the problem!
On Mon, Jun 10, 2019 at 12:25, BASSAGET Cédric wrote:
Quoting solarflow99 (solarflo...@gmail.com):
> can the bitmap allocator be set in ceph-ansible? I wonder why it is not
> the default in 12.2.12
We don't use ceph-ansible. But if ceph-ansible allows you to set specific
([osd]) settings in ceph.conf, I guess you can do it.
I don't know what the policy i
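If ceph-ansible can template arbitrary ceph.conf options, the setting in question would presumably end up as something like this (a sketch; option names as documented for luminous BlueStore, where the default allocator is "stupid"):

```ini
[osd]
# Sketch: switch BlueStore to the bitmap allocator (12.2.12 backport)
bluestore_allocator = bitmap
bluefs_allocator = bitmap
```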
Hi all!
Last week we ran into a terrible situation after adding 4 new nodes
to one of our clusters.
Trying to reduce PG movement, we set the noin flag.
Then we deployed the 4 new nodes, adding 30% more OSDs with reweight=0.
After that, a huge number of PGs stalled in the peering or activating state -
about 20%
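One common alternative to adding OSDs at reweight=0 and then flipping them in at once is to ramp the crush weight up in steps; a sketch (the commands are only echoed here, and osd.30 plus the step values are illustrative, not from the post):

```shell
# Sketch: ramp a new OSD's crush weight up gradually instead of all at once
# (osd.30 and the step values are illustrative, not from the post)
for w in 0.2 0.4 0.6 0.8 1.0; do
  echo "ceph osd crush reweight osd.30 $w"
  # in real use: wait for backfill to settle / HEALTH_OK before the next step
done
```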
PGs are not perfectly balanced per OSD, but I think that's expected/OK
due to setting crush_compat_metrics to bytes? Though realizing as I
type this that what I really want is equal percent-used, which may not
be possible given the slight variation in disk size (see below) in my
cluster?
# ceph os
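A quick way to quantify "not perfectly balanced" is the spread between the most- and least-full OSD; with made-up %USE numbers (not taken from the poster's cluster) piped through awk:

```shell
# Compute the max-min spread of per-OSD utilization
# (%USE values below are made up, not from this cluster)
printf 'osd.0 70.1\nosd.1 64.9\nosd.2 72.3\n' |
awk 'NR==1 { min = max = $2 }
     { if ($2 > max) max = $2; if ($2 < min) min = $2 }
     END { printf "spread: %.1f%%\n", max - min }'
```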
I'm glad it's working. To be clear, did you use wpq, or is it still the prio
queue?
Sent from a mobile device, please excuse any typos.
On Mon, Jun 10, 2019, 4:45 AM BASSAGET Cédric
wrote:
When I run:
rbd map --name client.lol poolname/somenamespace/imagename
The image is mapped to /dev/rbd0 and
/dev/rbd/poolname/imagename
I would expect the rbd to be mapped to (the rbdmap tool tries this name):
/dev/rbd/poolname/somenamespace/imagename
The current map point would not all
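For what it's worth, the symlink path the rbdmap tool looks for can be spelled out like this (names copied from the example above):

```shell
# Build the symlink path rbdmap expects for a namespaced image
# (pool/namespace/image names copied from the example in the message)
pool=poolname; ns=somenamespace; img=imagename
expected="/dev/rbd/${pool}/${ns}/${img}"
echo "$expected"
```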
On Mon, Jun 10, 2019 at 1:50 PM Jonas Jelten wrote:
On Mon, Jun 10, 2019 at 8:03 PM Jason Dillaman wrote:
Hey everyone,
We have extended the CFP for Ceph Day Netherlands to June 14! The event
itself will be taking place on July 2nd. You can find more information on
how to register for the event and apply for the CFP here:
https://ceph.com/cephdays/netherlands-2019/
We look forward to seeing you for
Hello Cephers,
for those who find it easy to stay connected to the community using Slack,
the OpenStack community in Latam has configured this on [1], in the channel
#ceph; you can auto-invite via [2].
Feel free to use and share.
[1] https://openstack-latam.slack.com
[2] https://latam.openstackday.m