Thanks Patrick,
Is this the bug you are referring to: https://tracker.ceph.com/issues/42515 ?
We also see performance issues, mainly on metadata operations such as file
stat operations; however, an mds perf dump shows no sign of any latencies.
Could this bug cause any performance issues?
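For reference, this is roughly how we are looking at the MDS latency counters
(mds.a is just a placeholder for the actual daemon name, and the counter
sections may differ between releases):
# ceph daemon mds.a perf dump mds_server
# ceph daemon mds.a dump_ops_in_flight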
It's probably a recently fixed openfiletable bug. Please upgrade to
v14.2.8 when it is released in the next week or so.
On Mon, Feb 24, 2020 at 1:46 PM Uday Bhaskar jalagam wrote:
Hello Patrick,
The file system was created around 4 months back. We are using ceph version 14.2.3.
[root@knode25 /]# ceph fs dump
dumped fsmap epoch 577
e577
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable
ranges,3=default file layou
Hi all,
A while back, I indicated we had an issue with our cluster filling up too
fast. After checking everything, we've concluded this was because we had a
lot of small files and the allocation size on the BlueStore OSDs was too high
(64 KB).
We are now recreating the OSDs (2 disks at a time) bu
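In case it is useful to others, a minimal sketch of how the allocation size
can be checked and lowered before redeploying (the 4 KB value is only an
assumption for a small-file workload, and the setting only takes effect for
OSDs created after it is changed):
# ceph config get osd bluestore_min_alloc_size_hdd
# ceph config set osd bluestore_min_alloc_size_hdd 4096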
On Mon, Feb 24, 2020 at 11:14 AM Uday Bhaskar jalagam wrote:
Hello Team,
I am frequently getting a "LARGE_OMAP_OBJECTS: 1 large omap objects" warning
in one of my cephfs metadata pools. Can anyone explain why this pool would
get into this state so frequently, and how I could prevent it in the future?
# ceph health detail
HEALTH_WARN 1 large omap objects
LARGE_OMAP_OB
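A rough sketch of how the offending object can usually be identified (the
pool and object names below are assumptions; in a CephFS metadata pool the
large omap object is often one of the MDS openfiles objects):
# rados -p cephfs_metadata ls | grep openfiles
# rados -p cephfs_metadata listomapkeys mds0_openfiles.0 | wc -l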
Hi,
we are currently creating a new cluster. This cluster is (as far as we can
tell) a config copy (via Ansible) of our existing cluster, just 5 years later,
with new hardware (NVMe instead of SSD, bigger disks, ...)
The setup:
* NVMe for Journals and "Cache"-Pool
* HDD with NVMe Journals for "Data"-P
Hello
I have ~300TB of data in the default.rgw.buckets.data k2m2 pool and I would
like to move it to a new k5m2 pool.
I found instructions using cache tiering [1], but they come with a vague,
scary warning, and it looks like EC-to-EC may not even be possible [2] (is
that still the case?).
Can anybody
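Not an answer to the migration question itself, but for reference, creating
the target k5m2 pool might look roughly like this (the profile name, pool
name, PG count and failure domain are all assumptions):
# ceph osd erasure-code-profile set ec52 k=5 m=2 crush-failure-domain=host
# ceph osd pool create default.rgw.buckets.data.new 1024 1024 erasure ec52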
Hey all, we're excited to be returning properly to SCaLE in
Pasadena[1] this year (March 5-8) with a Thursday Birds-of-a-Feather
session[2] and a booth in the expo hall. Please come by if you're
attending the conference or are in the area to get face time with
other area users and Ceph developers.
Hi Bryan,
Did you ever learn more about this, or see it again?
I'm facing 100% ceph-mon CPU usage now, and putting my observations
here: https://tracker.ceph.com/issues/42830
Cheers, Dan
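For anyone comparing notes, a minimal sketch of the kind of data that helps
narrow this down (assuming the mon ID matches the short hostname):
# top -H -p $(pidof ceph-mon)
# perf top -p $(pidof ceph-mon)
# ceph daemon mon.$(hostname -s) perf dump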
On Mon, Dec 16, 2019 at 10:58 PM Bryan Stillwell wrote:
>
> Sasha,
>
> I was able to get past it by restarti
I have tried to increase it to 16, with the same result:
# ceph osd pool set cephfs_data pg_num 16
set pool 1 pg_num to 16
# ceph osd pool get cephfs_data pg_num
pg_num: 8
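For reference, on Nautilus pg_num is increased gradually by the manager
toward the requested target, so the new value may not show up immediately;
a minimal sketch of the checks to run (the pool name is taken from above,
and the limits shown are only likely suspects):
# ceph osd pool ls detail | grep cephfs_data
# ceph osd pool autoscale-status
# ceph config get mon mon_max_pg_per_osd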
On 24/2/20 at 15:10, Gabryel Mason-Williams wrote:
Have you tried making a smaller increment instead of jumping from 8 to 128,
as that is quite a big leap?
Hi, I have a Nautilus installation (version 14.2.1) with a very unbalanced
cephfs pool. I have 430 OSDs in the cluster, but this pool only has 8 PGs
(and PGPs) and 118 TB used:
# ceph -s
cluster:
id: a2269da7-e399-484a-b6ae-4ee1a31a4154
health: HEALTH_WARN
1 nearfull osd(s)
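For scale, the usual rule of thumb of roughly 100 PGs per OSD suggests
something far larger than 8 for this pool (assuming replica size 3 and this
pool holding most of the data): 430 OSDs x 100 / 3 ≈ 14,300 PGs, which would
normally be rounded to a power of two such as 8192 or 16384.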
Sorry for the noise - problem was introduced by a missing iptables rule
:-(
On Fri, 2020-02-21 at 09:04 +0100, Andreas Haupt wrote:
> Dear all,
>
> we recently added two additional RGWs to our CEPH cluster (version
> 14.2.7). They work flawlessly, however they do not show up in 'ceph
> status':
>
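For anyone hitting something similar, a minimal sketch of the firewall
openings Ceph daemons typically need (these are the upstream default ports
and only an illustration, not the exact rule that was missing here):
# iptables -A INPUT -p tcp -m multiport --dports 3300,6789 -j ACCEPT
# iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT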
ceph version 12.2.13 luminous (stable)
My whole ceph cluster went into a kind of read-only state. Ceph status showed
that client reads were at 0 op/s for the whole cluster, while a normal amount
of writes was still going on.
I checked the health and it said:
# ceph health detail
HEALTH_WARN Reduced data availability:
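A sketch of how the affected PGs can be tracked down from a warning like
this (the PG ID in the last command is just a placeholder):
# ceph pg dump_stuck inactive
# ceph pg 1.2f query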