Not yet, but we have a theory and a test build in
https://tracker.ceph.com/issues/43364#note-6, if anybody would like to
give it a try.
Thanks,
Neha
On Fri, Dec 20, 2019 at 2:31 PM Sasha Litvak wrote:
>
> Was the root cause found and fixed? If so, will the fix be available in
> 14.2.6 or sooner?
Offline optimization uses the same underlying code that the ceph-mgr balancer
does, so it should for the most part produce the same results.
There is a special weight stored as "weight_set" in the CRUSH map that is set
by the crush-compat balancer. I'm not sure of the exact commands, but these
should be removed.
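As a sketch of what that removal might look like (assuming a Luminous or newer
cluster; the crush-compat weight-set shows up under "choose_args" in the CRUSH
dump):
# Check whether a compat weight-set exists
$ ceph osd crush dump | grep -A2 choose_args
# Remove the weight-set left behind by the crush-compat balancer
$ ceph osd crush weight-set rm-compat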
I just noticed that arm64 packages only exist for xenial. Is there a reason
why bionic packages aren't being built?
Thanks,
Bryan
> On Dec 20, 2019, at 4:22 PM, Bryan Stillwell wrote:
>
> I was going to try adding an OSD to my home cluster using one of the 4GB
> Raspberry Pis today, but it appears that the Ubuntu Bionic arm64 repo is
> missing a bunch of packages:
I was going to try adding an OSD to my home cluster using one of the 4GB
Raspberry Pis today, but it appears that the Ubuntu Bionic arm64 repo is
missing a bunch of packages:
$ sudo grep ^Package:
/var/lib/apt/lists/download.ceph.com_debian-nautilus_dists_bionic_main_binary-arm64_Packages
Packa
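For reference, a sketch of the APT source that package list corresponds to
(the file name is an assumption):
# /etc/apt/sources.list.d/ceph.list (assumed location)
deb https://download.ceph.com/debian-nautilus/ bionic main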
Was the root cause found and fixed? If so, will the fix be available in
14.2.6 or sooner?
On Thu, Dec 19, 2019 at 5:48 PM Mark Nelson wrote:
> Hi Paul,
>
>
> Thanks for gathering this! It looks to me like at the very least we
> should redo the fixed_u_to_string and fixed_to_string functions in
I managed to export the RocksDB and compact it. I just don't know how to put
it back in - I guess "ceph-bluestore-tool prime-osd-dir" is the closest thing
I can get to, but I can't specify what gets primed. :(
To export RocksDB from BlueStore:
$ ceph-bluestore-tool bluefs-export --path /var/lib/ceph/os
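A fuller sketch of that export (the OSD id and output directory are
placeholders; the OSD should be stopped first):
$ ceph-bluestore-tool bluefs-export \
    --path /var/lib/ceph/osd/ceph-0 \
    --out-dir /mnt/bluefs-export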
Hi,
I have a weird situation where an OSD's rocksdb fails to compact, because
the OSD became full and the osd-full-ratio was 1.0 (not a good idea, I
know).
Hitting "bluefs enospc" while compacting:
-376> 2019-12-18 15:48:16.492 7f2e0a5ac700 1 bluefs _allocate failed to
allocate 0x40da486 on b
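For context, a sketch of how the ratio mentioned here can be inspected and
adjusted (the value is a placeholder, and changing it does not by itself free
space on the full OSD):
# Show the full/backfillfull/nearfull ratios stored in the OSDMap
$ ceph osd dump | grep ratio
# Move the full ratio back toward the usual default
$ ceph osd set-full-ratio 0.95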
Sending this out to close the loop on this... (not filing a bug because I
think the case is uncommon)
We were using two different Prometheus clients to scrape the metrics while
transitioning from one metrics system to another.
Turning off one of the clients - thus using just one - solved the issue.
The default is used by radosgw/radosgw-admin when a --realm-id isn't
explicitly specified. The same goes for the default zone and zonegroup.
When your cluster only hosts a single zone, it can be convenient to set
its zone/zonegroup/realm as the default.
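As a sketch, marking an existing realm/zonegroup/zone as the default looks
like this (the names are placeholders):
$ radosgw-admin realm default --rgw-realm=myrealm
$ radosgw-admin zonegroup default --rgw-zonegroup=us
$ radosgw-admin zone default --rgw-zone=us-east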
On 12/20/19 4:17 AM, tda...@hotmail.com wrote:
Hi,
Like I said in an earlier mail to this list, we rebalanced ~60% of the
CephFS metadata pool to NVMe-backed devices: roughly 422 M objects (1.2
billion replicated). We have 512 PGs allocated to them. While
rebalancing we suffered from quite a few SLOW_OPS. Memory, CPU and
device IOPS capacity
Hi,
We have been rebalancing our cluster (a CRUSH policy change for the CephFS
metadata pool). The rebalancing is done ... but now there are PG
deep-scrubs going on (mostly on the pool that got almost completely
backfilled). We have had the no-(deep)-scrub flags active ... but that
does not prevent new (
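For reference, a sketch of the flags referred to here and how they are toggled:
# Stop new scrubs / deep-scrubs from being scheduled
$ ceph osd set noscrub
$ ceph osd set nodeep-scrub
# Re-enable scrubbing later
$ ceph osd unset noscrub
$ ceph osd unset nodeep-scrub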
Sorry I forgot to reply to this -- yes it's all good after you re-signed
and pushed.
Thanks!
dan
On Thu, Dec 12, 2019 at 9:57 PM David Galloway wrote:
> I just re-signed and pushed the 14.2.5 packages after adding the
> --no-database fix. Can you confirm ceph-debuginfo installs as expected
>
Thanks a lot Casey.
With only one realm set as the default, does that mean anything for whether
both radosgw instances can operate normally?
And thanks for the "period update --commit --realm-id" command.
I think that might do the trick. I will test it later today.
Hello David,
many thanks for testing the new balancer code with my OSDMap.
I have completed the task of setting the reweight of every OSD to 1.0, and
then I enabled the balancer.
However, there is no change to the PG distribution on my critical pool.
Therefore I started offline optimization with osdmaptool following
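A sketch of the steps described above, assuming the upmap balancer mode (the
OSD id is a placeholder):
# Reset the legacy reweight of an OSD back to 1.0
$ ceph osd reweight 0 1.0
# Enable the balancer in upmap mode
$ ceph balancer mode upmap
$ ceph balancer on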
Hi,
I have tested the PG-upmap offline optimization with one of my pools: ssd.
This pool is unbalanced; here's the output of ceph osd df tree before the
optimization:
root@ld3955:~# ceph osd df tree class ssd
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS
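For reference, a sketch of the offline upmap optimization workflow being
tested here (file names are placeholders; the pool name comes from the post):
# Grab the current OSDMap and compute upmap changes for the "ssd" pool
$ ceph osd getmap -o osd.map
$ osdmaptool osd.map --upmap upmap.sh --upmap-pool ssd
# Review the generated commands before applying them
$ cat upmap.sh
$ bash upmap.sh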