Hi *,
are there any known limitations or impacts of (too) many objects per
PG? We're dealing with a performance decrease on Nautilus (I know, but
it can't be upgraded at this time) while pushing a million emails
(many small objects) into the cluster. At some point, maybe between
600.000 a
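(For anyone wanting to check a cluster like this, a rough sketch of how per-PG object counts can be inspected; the pool name is a placeholder:)

  # per-PG object and omap key counts for the mail pool
  ceph pg ls-by-pool <pool>
  # heaviest PGs by object count
  ceph pg ls-by-pool <pool> | sort -k2 -n | tail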
Thanks,
I found this thread [1] recommending offline compaction with large
OMAP/META data on the OSDs. We'll try that first and see if it helps.
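For reference, a sketch of the offline compaction procedure from that thread (OSD id and data path are placeholders; the OSD has to be stopped first):

  systemctl stop ceph-osd@<id>
  # compact the BlueStore RocksDB (omap/meta) of the stopped OSD
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-<id> compact
  systemctl start ceph-osd@<id>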
We're still missing some details about the degradation; I'll update
this thread when we know more.
The PGs are balanced quite perfectly and we do
Am 21.07.22 um 17:50 schrieb Ilya Dryomov:
On Thu, Jul 21, 2022 at 11:42 AM Peter Lieven wrote:
Am 19.07.22 um 17:57 schrieb Ilya Dryomov:
On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven wrote:
Am 24.06.22 um 16:13 schrieb Peter Lieven:
Am 23.06.22 um 12:59 schrieb Ilya Dryomov:
On Thu, Jun 2
On Sun, Jul 24, 2022 at 8:33 AM Yuri Weinstein wrote:
> Still seeking approvals for:
>
> rados - Travis, Ernesto, Adam
> rgw - Casey
> fs, kcephfs, multimds - Venky, Patrick
> ceph-ansible - Brad pls take a look
>
> Josh, upgrade/client-upgrade-nautilus-octopus failed, do we need to fix
> it, pls
Hello!
This is a bit of an older topic but we're just hitting it now. Our cluster is
still 14.2.22 (working on the upgrade) and we had the "mons are allowing
insecure global_id reclaim" health warning. It took us a while to update all our
clients but after doing so I have set “auth_allow_insecure
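For anyone following along, a sketch of the relevant mon-side commands (the option referred to here is auth_allow_insecure_global_id_reclaim):

  # check the current value on the mons
  ceph config get mon auth_allow_insecure_global_id_reclaim
  # once all clients are patched, disallow insecure reclaim to clear the warning
  ceph config set mon auth_allow_insecure_global_id_reclaim false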
Hi all,
We have 2 Ceph clusters in a multisite configuration; both are working fine
(syncing correctly), but 1 of them is showing a "32 large omap objects"
warning in the log pool.
This seems to be coming from the sync error list
for i in `rados -p wilxite.rgw.log ls`; do echo -n "$i:"; rados -p
wilxit
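In case it's useful to others, a sketch along the same lines as the loop above, counting omap keys per object in the log pool (the pool name is a placeholder):

  pool=<zone>.rgw.log
  for obj in $(rados -p "$pool" ls); do
      printf '%s: ' "$obj"
      # count the omap keys held by this object
      rados -p "$pool" listomapkeys "$obj" | wc -l
  done
  # the sync errors themselves can be listed with:
  radosgw-admin sync error list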
It looks like you’re setting environment variables that force your new
keyring, but you aren't telling the library to use your new CephX user. So
it opens your new keyring and looks for the default (client.admin) user and
doesn’t get anything.
-Greg
On Tue, Jul 26, 2022 at 7:54 AM Adam Carrgilson
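A minimal sketch of what that looks like in practice (user name, keyring path, and pool are placeholders): both the keyring and the CephX user need to be specified, otherwise client.admin is assumed:

  # one option: put both the keyring and the user into CEPH_ARGS
  export CEPH_ARGS="--keyring /etc/ceph/ceph.client.newuser.keyring --id newuser"
  ceph -s
  # or pass them explicitly per command
  rados --id newuser --keyring /etc/ceph/ceph.client.newuser.keyring -p <pool> ls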
Hi all,
I slowly worked my way through re-targeting any lingering ceph.git PRs
(there were 300+ of them) from the master to main branch. There were a
few dozen repos I wanted to rename the master branch on and the tool I
used did not automatically retarget existing PRs.
This means the time
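For anyone who needs to retarget their own PRs, a sketch with the GitHub CLI (the PR number is a placeholder):

  # change the base branch of an open PR from master to main
  gh pr edit <number> --repo ceph/ceph --base main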
Is rook/CSI still not using efficient rbd object maps ?
It could be that you started a new benchmark while Ceph was busy
(inefficiently) removing the old rbd images. This is quite a stretch but
could be worth exploring.
On Mon, Jul 25, 2022, 21:42 Mark Nelson wrote:
> I don't think so if this i
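For context, the object map is a per-image RBD feature; without it, deleting an image means probing every possible backing object. A sketch of how to check and enable it (pool/image are placeholders; exclusive-lock must already be enabled on the image):

  # see which features the image has
  rbd info <pool>/<image>
  # enable the object map (and fast-diff) on an existing image
  rbd feature enable <pool>/<image> object-map fast-diff
  # rebuild the map if it is flagged as invalid
  rbd object-map rebuild <pool>/<image>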
Afaik CSI is just some Go code that maps an RBD image; it does it the same way
you would from the command line. Then again, they don't really understand CSI
there and are just developing a Kubernetes 'driver'.
>
> Is rook/CSI still not using efficient rbd object maps ?
>
> It could be that you issue
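For comparison, a sketch of the command-line equivalent of what the CSI driver does when attaching a volume (pool, image, and user are placeholders):

  # map the image through krbd and get a /dev/rbdX block device
  rbd device map <pool>/<image> --id <user>
  # ... mkfs/mount happens here ...
  rbd device unmap <pool>/<image>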
On Thu, Jul 21, 2022 at 10:28 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/56484
> Release Notes - https://github.com/ceph/ceph/pull/47198
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw - Casey
> fs, kcephfs
We can’t do the final release until the recent mgr/volumes security fixes
get merged in, though.
https://github.com/ceph/ceph/pull/47236
On Tue, Jul 26, 2022 at 3:12 PM Ramana Krisna Venkatesh Raja <
rr...@redhat.com> wrote:
> On Thu, Jul 21, 2022 at 10:28 AM Yuri Weinstein
> wrote:
> >
> > Deta
On Tue, Jul 26, 2022 at 3:41 PM Yuri Weinstein wrote:
> Greg, I started testing this PR.
> What do you want to rerun for it? Are fs, kcephfs, multimds suites
> sufficient?
We just need to run the mgr/volumes tests — I think those are all in the fs
suite but Kotresh or Ramana can let us know.
-
On Wed, Jul 27, 2022 at 5:02 AM Gregory Farnum wrote:
> On Tue, Jul 26, 2022 at 3:41 PM Yuri Weinstein
> wrote:
>
>> Greg, I started testing this PR.
>> What do you want to rerun for it? Are fs, kcephfs, multimds suites
>> sufficient?
>
>
> We just need to run the mgr/volumes tests — I think th
Dear all
Are there any updates on this question?
In particular, since I am planning how to update my infrastructure, I would
be interested to know if there are plans to provide packages for centos9
stream for Pacific and/or Quincy
Thanks, Massimo
On Fri, Jun 10, 2022 at 6:46 PM Gregory Farnum