Quoting Ernesto Puerta (epuer...@redhat.com):
> The default behaviour is that only perf counters with priority
> PRIO_USEFUL (5) or higher are exposed (via the `get_all_perf_counters` API
> call) to ceph-mgr modules (including Dashboard, DiskPrediction, or the
> Prometheus/InfluxDB/Telegraf exporters).
>
Hi all;
Long story short, I have a cluster of 26 OSDs in 3 nodes (8+9+9). One of the
disks is showing some read errors, so I've added an OSD in the faulty node
(OSD.26) and set the (re)weight of the faulty OSD (OSD.12) to zero.
The cluster is now rebalancing, which is fine, but I now have 2 PGs in
backfill_toofull state.
what about the pool's backfill_full_ratio value?
Simone Lazzaris wrote on Mon, Dec 9, 2019 at 6:38 PM:
>
> Hi all;
>
> Long story short, I have a cluster of 26 OSDs in 3 nodes (8+9+9). One of the
> disks is showing some read errors, so I've added an OSD in the faulty node
> (OSD.26) and set the (re)weight of the faulty OSD (OSD.12) to zero.
On Monday, 9 December 2019 at 11:46:34 CET, huang jun wrote:
> what about the pool's backfill_full_ratio value?
>
That value, as far as I can see, is 0.9000, which is not reached by any OSD:
root@s1:~# ceph osd df
ID  CLASS  WEIGHT  REWEIGHT  SIZE  RAW USE  DATA  OMAP  META  AVAIL  %USE
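For anyone checking the same thing: on Luminous and later the cluster-wide
ratios are stored in the OSD map, so something like this should show them:

ceph osd dump | grep ratio
# prints full_ratio, backfillfull_ratio and nearfull_ratio for the cluster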
Hi,
since we upgraded our cluster to Nautilus we also see those messages
sometimes when it's rebalancing. There are several reports about this
[1] [2]; we didn't see it in Luminous. But eventually the rebalancing
finished and the error message cleared, so I'd say there's (probably)
nothing to worry about.
This is a (harmless) bug that existed since Mimic and will be fixed in
14.2.5 (I think?). The health error will clear up without any intervention.
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
I've increased the deep_scrub interval on the OSDs on our Nautilus cluster
with the following added to the [osd] section:
osd_deep_scrub_interval = 260
And I started seeing
1518 pgs not deep-scrubbed in time
in ceph -s. So I added
mon_warn_pg_not_deep_scrubbed_ratio = 1
since the default ratio was too low for the longer interval.
Hi all,
I want to attach another RBD image to the Qemu VM to be used as a disk.
However, it always fails. The VM definition XML is attached.
Could anyone tell me what I did wrong?
nstcc3@nstcloudcc3:~$ sudo virsh start ubuntu_18_04_mysql --console
error: Failed to start domain ubuntu_18_04_mysql
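For comparison, this is roughly the shape of an RBD disk element that libvirt
expects; the pool/image name, monitor address, cephx user name, secret UUID
and file name below are all placeholders, not values taken from the attached
XML:

cat > rbd-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='rbd.vps'>
    <secret type='ceph' uuid='REPLACE-WITH-SECRET-UUID'/>
  </auth>
  <source protocol='rbd' name='rbd/myimage'>
    <host name='10.0.0.1' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
# attach it to the defined (or running) domain
virsh attach-device ubuntu_18_04_mysql rbd-disk.xml --persistent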
Hi,
nice coincidence that you mention that today; I've just debugged the exact
same problem on a setup where deep_scrub_interval was increased.
The solution was to set the deep_scrub_interval directly on all pools
instead (which was better for this particular setup anyways):
ceph osd pool set d
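If it helps, applying that to every pool can be done in one small loop (just a
sketch; the 14-day value is only an example, pick whatever interval you
actually want):

for pool in $(ceph osd pool ls); do
    ceph osd pool set "$pool" deep_scrub_interval 1209600   # seconds, here 14 days
done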
On Mon, Dec 9, 2019 at 5:17 PM Robert LeBlanc wrote:
> I've increased the deep_scrub interval on the OSDs on our Nautilus cluster
> with the following added to the [osd] section:
>
I should have read the beginning of your email; you'll need to set the option
on the mons as well because they generate the health warning.
solved it: the warning is of course generated by ceph-mgr and not ceph-mon.
So for my problem that means: should have injected the option in ceph-mgr.
That's why it obviously worked when setting it on the pool...
The solution for you is to simply put the option under global and restart
ceph-mgr.
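Concretely, that could look something like this (option names as discussed in
this thread; the 14-day value is only an illustration):

# ceph.conf, under [global], so the mgr picks it up too
[global]
osd_deep_scrub_interval = 1209600
mon_warn_pg_not_deep_scrubbed_ratio = 1

# then restart the manager on the node running the active mgr
systemctl restart ceph-mgr.target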
On Mon, Dec 9, 2019 at 11:58 AM Paul Emmerich wrote:
> solved it: the warning is of course generated by ceph-mgr and not ceph-mon.
>
> So for my problem that means: should have injected the option in ceph-mgr.
> That's why it obviously worked when setting it on the pool...
>
> The solution for you is to simply put the option under global and restart
> ceph-mgr.
> How is that possible? I don't know how much more proof I need to present that
> there's a bug.
FWIW, your pastes are hard to read with all the ? in them. Pasting
non-7-bit-ASCII?
> I increased PGs and see no difference.
From what pgp_num to what new value? Numbers that are not a power of 2 leave
you with unevenly sized PGs.
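For reference, the current values are easy to check, and bumping to the next
power of 2 is just (pool name and target value are placeholders):

ceph osd pool get <pool> pg_num
ceph osd pool get <pool> pgp_num
ceph osd pool set <pool> pg_num 256
ceph osd pool set <pool> pgp_num 256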
I have a bunch of hard drives I want to use as OSDs, with ceph nautilus.
ceph-volume lvm create makes straight raw dev usage relatively easy, since you
can just do
ceph-volume lvm create --data /dev/sdc
or whatever.
It's nice that it takes care of all the LVM jiggery-pokery automatically.
But...
You can loop over creating fixed-size LVs on the SSD, then loop over creating
OSDs assigned to each of them. That is what we did; it wasn't bad.
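Roughly, assuming the SSD is /dev/nvme0n1 and is meant to hold the block.db
for four HDD OSDs (all device names and sizes here are placeholders, not what
we actually used):

# carve the SSD into fixed-size DB LVs
vgcreate ceph-db /dev/nvme0n1
for i in 0 1 2 3; do
    lvcreate -L 60G -n db-$i ceph-db
done

# create one OSD per HDD, each assigned its own DB LV
i=0
for dev in /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    ceph-volume lvm create --data "$dev" --block.db ceph-db/db-$i
    i=$((i + 1))
done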
On Mon, Dec 9, 2019 at 9:32 PM Philip Brown wrote:
>
> I have a bunch of hard drives I want to use as OSDs, with ceph nautilus.
>
> ceph-volume lvm create makes straight raw dev usage relatively easy
Hi Anthony!
On Mon, 9 Dec 2019 17:11:12 -0800, Anthony D'Atri wrote to ceph-users:
> > How is that possible? I don't know how much more proof I need to present
> > that there's a bug.
>
> FWIW, your pastes are hard to read with all the ? in them. Pasting
> non-7-bit-ASCII?
I don't see much "?" in
This should get you started with using rbd.
WDC WD40EFRX-68WT0N0
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.rbd.vps secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
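From there, the usual next step is to load the cephx key into that secret and
reference its UUID from the disk XML; the UUID placeholder below is whatever
secret-define printed, and the client name follows the snippet above:

virsh secret-set-value --secret <uuid-from-secret-define> \
    --base64 "$(ceph auth get-key client.rbd.vps)"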