Hi,
You are limited by your drives, so not much can be done, but it should
at least catch up a bit and reduce the number of PGs that have not
been deep scrubbed in time.
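As a rough illustration of the knobs involved (the values are examples
only, and assume the drives can absorb the extra scrub load):

# see which PGs are behind (listed under PG_NOT_DEEP_SCRUBBED)
$ ceph health detail

# allow more concurrent scrubs per OSD and relax the load gate at runtime
$ ceph tell osd.* injectargs '--osd_max_scrubs 2 --osd_scrub_load_threshold 5'

# make sure scrubbing is not restricted to certain hours
$ ceph tell osd.* injectargs '--osd_scrub_begin_hour 0 --osd_scrub_end_hour 24'

# manually kick a deep scrub on a lagging PG
$ ceph pg deep-scrub <pgid>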
On Wed, Apr 3, 2019 at 8:13 PM Michael Sudnick
wrote:
>
> Hi Alex,
>
> I'm okay myself with the number of scrubs performed, wo
thanks a lot, Jason.
How much performance loss should I expect by enabling rbd mirroring? I really
need to minimize any performance impact while using this disaster recovery
feature. Will a dedicated journal on an Intel Optane NVMe help? If so, how big
should it be?
cheers,
Samuel
huxia
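For reference, journal-based mirroring is enabled per image roughly like the
following; pool and image names are placeholders, and "fast-nvme" stands for a
hypothetical pool carved out of the Optane device:

# put the image's journal on the fast pool while enabling journaling
$ rbd feature enable mypool/myimage journaling --journal-pool fast-nvme

# enable mirroring for the pool in per-image mode, then for the image
$ rbd mirror pool enable mypool image
$ rbd mirror image enable mypool/myimage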
On Mon, 18 Mar 2019 at 16:42, Dan van der Ster wrote:
>
> The balancer optimizes # PGs / crush weight. That host looks already
> quite balanced for that metric.
>
> If the balancing is not optimal for a specific pool that has most of
> the data, then you can use the `optimize myplan <pool>` param.
>
>F
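The usual sequence with the balancer module looks roughly like this
(pool name is a placeholder):

# build a plan restricted to the pool holding most of the data
$ ceph balancer optimize myplan <pool>

# inspect the proposed upmap changes, then apply them
$ ceph balancer show myplan
$ ceph balancer execute myplan

# check the resulting distribution score
$ ceph balancer eval <pool>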
That log shows
2019-04-03 15:39:53.299 7f3733f18700 10 monclient: tick
2019-04-03 15:39:53.299 7f3733f18700 10 cephx: validate_tickets want 53 have 53
need 0
2019-04-03 15:39:53.299 7f3733f18700 20 cephx client: need_tickets: want=53
have=53 need=0
2019-04-03 15:39:53.299 7f3733f18700 10 monclie
Yeah, I agree... the auto balancer is definitely doing a poor job for me.
I have been experimenting with this for weeks, and I can make much better
optimizations than the balancer by looking at "ceph osd df tree" and
manually running various ceph upmap commands.
Too bad this is tedious work, and tend
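For anyone wanting to try the same by hand, the commands involved look
roughly like this (PG and OSD ids are placeholders):

# per-OSD utilisation grouped by the CRUSH tree
$ ceph osd df tree

# move one replica of a PG from an overfull OSD to an underfull one
$ ceph osd pg-upmap-items <pgid> <from-osd> <to-osd>

# drop the exception again if it is no longer wanted
$ ceph osd rm-pg-upmap-items <pgid>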
There are several more fixes queued up for v12.2.12:
16b7cc1bf9 osd/OSDMap: add log for better debugging
3d2945dd6e osd/OSDMap: calc_pg_upmaps - restrict optimization to
origin pools only
ab2dbc2089 osd/OSDMap: drop local pool filter in calc_pg_upmaps
119d8cb2a1 crush: fix upmap overkill
0729a7887
It was disabled in a fit of genetic debugging. I've now tried to revert
all config settings related to auth and signing to defaults.
I can't seem to change the auth_*_required settings. If I try to remove
them, they stay set. If I try to change them, I get both the old and new
settings:
root@t
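For comparison, with the central config these settings would normally be
inspected and cleared roughly as follows; whether that works here is exactly
what is in question:

# show the current values and where they come from
$ ceph config dump | grep auth_

# remove the central entries so the compiled-in defaults apply again
$ ceph config rm global auth_cluster_required
$ ceph config rm global auth_service_required
$ ceph config rm global auth_client_required

Values still present in a daemon's local ceph.conf override the monitor
config database, so the [global] section on each node is worth checking too.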
Thanks, I'll mess around with them and see what I can do.
-Michael
On Thu, 4 Apr 2019 at 05:58, Alexandru Cucu wrote:
> Hi,
>
> You are limited by your drives, so not much can be done, but it should
> at least catch up a bit and reduce the number of PGs that have not
> been deep scrubbed in time
On Thu, 4 Apr 2019, Shawn Edwards wrote:
> It was disabled in a fit of genetic debugging. I've now tried to revert
> all config settings related to auth and signing to defaults.
>
> I can't seem to change the auth_*_required settings. If I try to remove
> them, they stay set. If I try to change
On Wed, 3 Apr 2019 at 09:41, Iain Buclaw wrote:
>
> On Tue, 19 Feb 2019 at 10:11, Iain Buclaw wrote:
> >
> >
> > # ./radosgw-gc-bucket-indexes.sh master.rgw.buckets.index | wc -l
> > 7511
> >
> > # ./radosgw-gc-bucket-indexes.sh secondary1.rgw.buckets.index | wc -l
> > 3509
> >
> > # ./radosgw-gc
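The script itself isn't included here, but a check along these lines is
presumably what it does: compare the objects in the index pool (named
.dir.<bucket instance id>, possibly with a shard suffix) against the bucket
instances the metadata still knows about:

# bucket index objects currently present in the index pool
$ rados -p master.rgw.buckets.index ls | sort > index-objects.txt

# bucket instances still known to rgw metadata
$ radosgw-admin metadata list bucket.instance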
Hi cephers,
I'm working through our testing cycle to upgrade our main ceph cluster
from Luminous to Mimic, and I ran into a problem with ceph_fuse. With
Luminous, a single client can pretty much max out a 10Gbps network
connection writing sequentially on our cluster with ceph_fuse.
I think this got dealt with on irc, but for those following along at home:
I think the problem here is that you've set the central config to
disable authentication, but the client doesn't know what those config
options look like until it's connected — which it can't do, because
it's demanding encr
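In other words, the client needs the auth settings locally before it can
bootstrap. A sketch of the relevant entries in the client-side ceph.conf,
assuming the cluster really does run with authentication disabled:

[global]
    # must match the cluster, since the client cannot fetch this
    # from the central config before it has connected
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none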
I believe our community manager Mike is in charge of that?
On Wed, Apr 3, 2019 at 6:49 AM Raphaël Enrici wrote:
>
> Dear all,
>
> is there somebody in charge of the ceph hosting here, or someone who
> knows the guy who knows another guy who may know...
>
> Saw this while reading the FOSDEM 2019 p
On Mon, Apr 1, 2019 at 4:04 AM Paul Emmerich wrote:
>
> There are no problems with mixed bluestore_min_alloc_size; that's an
> abstraction layer lower than the concept of multiple OSDs. (Also, you
> always have that when mixing SSDs and HDDs)
>
> I'm not sure about the real-world impacts of a lowe
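For what it's worth, the value is baked in when an OSD is created, so changing
it only affects OSDs deployed afterwards; a sketch of checking and changing the
default (4096 is just an example value):

# config value currently seen by a running OSD (the effective on-disk
# value was frozen when that OSD was created and may differ)
$ ceph daemon osd.0 config get bluestore_min_alloc_size_hdd

# change the default used for OSDs created from now on
$ ceph config set osd bluestore_min_alloc_size_hdd 4096
$ ceph config set osd bluestore_min_alloc_size_ssd 4096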
Hi again,
Can anyone at all please confirm whether this is expected behaviour / a known
issue, or give any advice on how to diagnose this? As far as I can tell, my mon
and mgr are healthy. All rbd images have object-map and fast-diff enabled.
> I've been having an issue with the dashboard b
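For completeness, verifying and enabling those image features looks roughly
like this (pool and image names are placeholders; object-map needs
exclusive-lock):

# show which features an image currently has
$ rbd info mypool/myimage | grep features

# enable the missing ones (list only those not already present)
$ rbd feature enable mypool/myimage exclusive-lock object-map fast-diff

# rebuild the object map if it is flagged as invalid
$ rbd object-map rebuild mypool/myimage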
Hi Wes,
On 4/4/19 9:23 PM, Wes Cilldhaire wrote:
> Can anyone at all please confirm whether this is expected behaviour /
> a known issue, or give any advice on how to diagnose this? As far as
> I can tell, my mon and mgr are healthy. All rbd images have
> object-map and fast-diff enabled.
My g
Hi everybody!
There is a small mistake in the blog post about the PG autoscaler:
https://ceph.com/rados/new-in-nautilus-pg-merging-and-autotuning/
The command
$ ceph osd pool set foo target_ratio .8
should actually be
$ ceph osd pool set foo target_size_ratio .8
Thanks for this great improvement!
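Put together, the corrected workflow looks roughly like this (pool name "foo"
as in the post):

# make sure the autoscaler module is active
$ ceph mgr module enable pg_autoscaler

# let it manage the pool and tell it the expected share of the cluster
$ ceph osd pool set foo pg_autoscale_mode on
$ ceph osd pool set foo target_size_ratio .8

# review what it intends to do
$ ceph osd pool autoscale-status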
Hi Lenz,
Thanks for responding. I suspected that the number of rbd images might have
had something to do with it, so I cleaned up old disposable VM images I am no
longer using, taking the list down from ~30 to 16: 2 in the EC pool on HDDs and
the rest on the replicated SSD pool. They vary in