it's not
making any progress.
Mac Wynkoop, Senior Datacenter Engineer
*NetDepot.com:* Cloud Servers; Delivered
Houston | Atlanta | NYC | Colorado Springs
1-844-25-CLOUD Ext 806
On Wed, Oct 21, 2020 at 2:41 PM Mac Wynkoop wrote:
> We recently did some work on the Ceph cluster, a
2020-10-21 18:48:02.034430
    comment: waiting for pg acting set to change
1:
    name: Started
    enter_time: 2020-10-21 18:48:01.752957
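Editor's note, not from the original thread: state output like the above typically comes from querying the stuck PG directly. A sketch, using a placeholder PG id:

```shell
# Show detailed state for a single PG, including recovery_state
# entries such as "waiting for pg acting set to change".
# (7.1a is a hypothetical PG id, not one from this thread.)
ceph pg 7.1a query

# Quick overview of any PGs that are not active+clean:
ceph pg dump_stuck
```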
Any ideas?
Mac Wynkoop
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
We'll monitor
it for a bit to see how it reacts to the more aggressive settings.
Thanks,
Mac Wynkoop
On Tue, Oct 20, 2020 at 8:52 AM Eugen Block wrote:
> The default for max misplaced objects is this (5%):
>
> ceph-node1:~ # ceph config get mon target_max_misplaced_ratio
> 0.050000
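As a hedged sketch of what "more aggressive settings" could look like here: the misplaced-object ratio caps how much of a pgp_num change the cluster will apply at once, so raising it speeds up PG splitting at the cost of more concurrent backfill. The 0.10 value below is an example only; some releases document setting this on the mgr rather than the mon.

```shell
# Default is 0.05 (at most 5% of objects misplaced at a time).
ceph config get mon target_max_misplaced_ratio

# Raising it lets pgp_num advance in larger steps, at the cost
# of heavier backfill. Example value only:
ceph config set mon target_max_misplaced_ratio 0.10
```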
of pgp_num?
Because I'd like to turn up the rate at which it splits the PGs, but if
the autoscaler isn't doing it, I'd have no clue what to adjust. Any ideas?
Thanks,
Mac Wynkoop
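Editor's note: to tell whether the mgr/autoscaler is actually moving pgp_num toward its target, these commands are the usual starting point (pool name is a placeholder):

```shell
# Compare current vs. target PG counts; pg_num_target and
# pgp_num_target appear in Nautilus and later:
ceph osd pool ls detail | grep <pool>

# If the autoscaler manages the pool, its view is here:
ceph osd pool autoscale-status
```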
On Thu, Oct 8, 2020 at 8:16 AM Mac Wynkoop wrote:
> OK, great. We'll keep tabs on it for n
Just making sure this makes the list:
Mac Wynkoop
---------- Forwarded message ---------
From: 胡 玮文
Date: Wed, Oct 7, 2020 at 9:00 PM
Subject: Re: pool pgp_num not updated
To: Mac Wynkoop
Hi,
You can read about this behavior at
https://ceph.io/rados/new-in-nautilus-pg-merging-and
OK, great. We'll keep tabs on it for now then and try again once we're
fully rebalanced.
Mac Wynkoop
On Thu, Oct 8, 2020 at 2:08 AM Eugen Block wrote:
Well, backfilling sure, but will it allow me to actually change the pgp_num
as more space frees up? Because the issue is that I cannot modify that
value.
Thanks,
Mac Wynkoop
backfilling
io:
    client:   287 MiB/s rd, 40 MiB/s wr, 1.94k op/s rd, 165 op/s wr
    recovery: 425 MiB/s, 225 objects/s
Now as you can see, we do have a lot of backfill operations going on at the
moment. Does that actually prevent Ceph from modifying the pgp_num value of
a pool?
Thanks,
Mac Wynkoop
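Editor's note: since Nautilus, pgp_num is advanced toward pgp_num_target only while the fraction of misplaced objects stays under target_max_misplaced_ratio, so heavy backfill effectively pauses the increase rather than blocking the setting itself. A sketch for watching this (pool name is a placeholder):

```shell
# pgp_num vs. pgp_num_target shows whether the increase is pending:
ceph osd pool ls detail | grep <pool>

# The misplaced percentage gates how fast pgp_num moves:
ceph -s | grep misplaced
```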
w*"
and the pgp_num value does not increase. Am I just doing something
totally wrong?
Thanks,
Mac Wynkoop
On Tue, Oct 6, 2020 at 2:32 PM Marc Roos wrote:
> pg_num and pgp_num need to be the same, not?
>
> 3.5.1. Set the Number of PGs
>
> To set the number of placement grou
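A hedged sketch of the commands behind the quoted advice (pool name and counts are placeholders); note the pre-/post-Nautilus difference, which is the crux of this thread:

```shell
# Pre-Nautilus: both values must be set explicitly, pg_num first:
ceph osd pool set <pool> pg_num 256
ceph osd pool set <pool> pgp_num 256

# Nautilus and later: setting pg_num is enough; the mgr raises
# pgp_num gradually as data rebalances, so an immediate mismatch
# between the two is expected.
ceph osd pool set <pool> pg_num 256
```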
Hi everyone,
I'm seeing a similar issue here. Any ideas on this?
Mac Wynkoop,
On Sun, Sep 6, 2020 at 11:09 PM norman wrote:
> Hi guys,
>
> When I update the pg_num of a pool, I found it didn't work (no
> rebalancing happened). Does anyone know the reason? Pool's info:
>
> p
Anyone else have any insight on this? I'd also be interested to know about
this behavior.
Thanks,
On Mon, Dec 2, 2019 at 6:54 AM Tobias Urdin wrote:
> Hello,
>
> I'm trying to wrap my head around how having a multi-site (two zones in
> one zonegroup) with multiple placement
> targets but only w
Hi all,
I seem to be running into an issue when attempting to unlink a bucket from
a user; this is my output:
user@server ~ $ radosgw-admin bucket unlink --bucket=user_5493/LF-Store
--uid=user_5493
failure: 2019-11-26 15:19:48.689 7fda1c2009c0 0 bucket entry point user
mismatch, can't unlink bucket
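Editor's note: that error usually means the bucket's entry point records a different owner than the uid passed to `bucket unlink`. A sketch for checking, reusing the names from the output above:

```shell
# See which user RGW records as the bucket's owner:
radosgw-admin bucket stats --bucket=LF-Store
radosgw-admin metadata get bucket:LF-Store

# If the entry point lists a different owner, operate against
# that uid instead (placeholder shown):
radosgw-admin bucket unlink --bucket=LF-Store --uid=<actual-owner>
```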
Hi All,
So, I am trying to create a site-specifc zonegroup at my 2nd site's Ceph
cluster. Upon creating the zonegroup and a placeholder master zone at my
master site, I go to do a period update and commit, and this is what it
returns to me:
(hostname) ~ $ radosgw-admin period commit
2019-11-14 22
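Editor's note: a common sequence for the setup described above, as a sketch only; zonegroup, zone, and endpoint names are hypothetical, not taken from this thread:

```shell
# Create the site-specific zonegroup and a master zone for it,
# then publish the new period from the master site:
radosgw-admin zonegroup create --rgw-zonegroup=zg2 --endpoints=http://rgw2:80
radosgw-admin zone create --rgw-zonegroup=zg2 --rgw-zone=zone2 --master
radosgw-admin period update --commit
```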
2019 at 12:46 AM Konstantin Shalygin wrote:
> On 10/29/19 1:40 AM, Mac Wynkoop wrote:
> > So, I'm in the process of trying to migrate our rgw.buckets.data pool
> > from a replicated rule pool to an erasure coded pool. I've gotten the
> > EC pool set up, good EC pro
Hi Everyone,
So, I'm in the process of trying to migrate our rgw.buckets.data pool from
a replicated rule pool to an erasure coded pool. I've gotten the EC pool
set up, good EC profile and crush ruleset, pool created successfully, but
when I go to "rados cppool xxx.rgw.buckets.data xxx.rgw.buckets.
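Editor's note: `rados cppool` has known limitations for this kind of migration (it does not copy omap data, and copying into EC pools has historically been problematic), so hitting trouble there is not surprising. For reference, a sketch of the EC pool setup side, with placeholder profile values and the thread's `xxx` pool-name convention:

```shell
# Define an EC profile (k/m and failure domain are examples only):
ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host

# Create the destination pool and tag it for RGW:
ceph osd pool create xxx.rgw.buckets.data.new 128 128 erasure myprofile
ceph osd pool application enable xxx.rgw.buckets.data.new rgw
```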
When trying to modify a zone in one of my clusters to promote it to the
master zone, I get this error:
~ $ radosgw-admin zone modify --rgw-zone atl --master
failed to update zonegroup: 2019-10-09 15:41:53.409 7f9ecae26840 0 ERROR:
found existing zone name atl (94d26f94-d64c-40d1-9a33-56afa948d86a
Hi Everyone,
So it recently came to my attention that on one of our clusters, running
the command "radosgw-admin usage show" returns a blank response. What is
going on behind the scenes with this command, and why might it not be
seeing any of the buckets properly? The data is still accessible over
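Editor's note: `radosgw-admin usage show` reads from RGW's usage log, which is disabled by default, so a blank response on one cluster often just means the log was never enabled there. A sketch:

```shell
# In ceph.conf on the RGW daemons (section name is a placeholder):
#   [client.rgw.<name>]
#   rgw enable usage log = true
#
# After enabling, restarting RGW, and generating some traffic:
radosgw-admin usage show --show-log-entries=true
```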