Dear Team,
I have a warning on my cluster, which I deployed using Ansible on Ubuntu
20.04 with the Pacific Ceph release. It says:
root@ceph-mon1:~# ceph health detail
HEALTH_WARN mon ceph-mon1 is low on available space
[WRN] MON_DISK_LOW: mon ceph-mon1 is low on available space
mon.ceph
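For context, a quick way to see how much space that monitor actually has and which threshold trips the warning (the data path below is an assumption for a non-containerized Ansible deployment on this host; adjust names and paths):
# Free space on the filesystem holding the mon store (path is an assumption)
df -h /var/lib/ceph/mon/ceph-ceph-mon1
# MON_DISK_LOW fires when available space drops below mon_data_avail_warn
# (30% by default)
ceph config get mon mon_data_avail_warn
# Compacting the monitor store can free some space
ceph tell mon.ceph-mon1 compact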
Hello Ceph Users,
I am hoping to get some advice, or at least a few questions answered,
about the Ceph disaster recovery process detailed in the docs. My
questions are as follows:
- Do all the steps need to be performed, or can I check the status of the
MDS after each until it rec
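Not an authoritative answer, but a small sketch of the status checks one would typically run between recovery steps (the filesystem name is a placeholder):
ceph -s                              # overall cluster health
ceph fs status                       # MDS ranks and their states
ceph mds stat                        # compact MDS map summary
ceph tell mds.<fs_name>:0 damage ls  # metadata damage recorded so far, if any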
So in other words - it's unsafe to apply quick-fix/repair in 16.2.6
only. You're safe if you applied it before, or if the OSDs were newly
deployed with v16 (even 16.2.6).
Igor
On 1/20/2022 5:22 PM, Jay Sullivan wrote:
I just verified the following in my other 16.2.6 cluster:
(S, per_pool_omap)
3
Hello Jay!
Just refreshed my memory - the bug was introduced in 16.2.6 by
https://github.com/ceph/ceph/pull/42956
So it was safe to apply quick-fix in 16.2.4, which explains why you're
fine now.
And OSDs deployed by Pacific wouldn't suffer from it at all as they've
got the new omap format fr
Hi Frank,
On Tue, Jan 18, 2022 at 4:54 AM Frank Schilder wrote:
>
> Hi Dan and Patrick,
>
> this problem seems to be developing into a nightmare. I executed a find on the file
> system and had some initial success. The number of stray files dropped by
> about 8%. Unfortunately, this is about it. I'm
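For anyone following along, a hedged way to watch whether the stray count is actually shrinking is the MDS perf counters, run on the active MDS host (the daemon name is a placeholder):
# num_strays and num_strays_delayed live under the mds_cache section
ceph daemon mds.<name> perf dump mds_cache | grep -E 'num_strays'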
Hi Michael,
To clarify a bit further: "ceph orch rm" works for removing services, and
"ceph orch daemon rm" works for removing daemons. In the command you ran
[ceph: root@osd16 /]# ceph orch rm "mds.cephmon03.local osd16.local
osd17.local osd18.local.onl26.drymjr"
the name you've given there is th
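In case it helps, the two forms look roughly like this (the service and daemon names below are made-up placeholders, not taken from your cluster):
# Remove a whole service, i.e. every daemon managed under that service spec
ceph orch rm mds.myfs
# Remove one daemon instance; exact daemon names come from "ceph orch ps"
ceph orch ps | grep mds
ceph orch daemon rm mds.myfs.host1.abcdef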
Hi Jake,
The diskprediction_cloud module is no longer available in Pacific.
There are efforts to enhance the diskprediction module using our
anonymized device telemetry data, aimed at building a dynamic,
large, diverse, free and open data set to help data scientists create
accurate failure p
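For reference, a minimal sketch of using the local predictor in the meantime (commands as documented for the Pacific diskprediction/devicehealth modules; the device id is a placeholder):
ceph mgr module enable diskprediction_local
ceph config set global device_failure_prediction_mode local
ceph device ls                               # list known devices and their ids
ceph device get-health-metrics <devid>       # raw SMART/health samples
ceph device predict-life-expectancy <devid>  # prediction from the local module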
Hello Ernesto,
I found the reason. One of the users had set a directory's permissions without the
+x bit (drw---). After running 'chmod 700' everything was OK again.
The MDS log did not help, but with the API call 'ls_dir?path=…' I was able
to walk down to the directory with the wrong permissions.
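In case someone hits the same symptom, a quick way to hunt for directories missing the execute bit from a client mount (the mount point is only an example):
# List directories under the CephFS mount that lack the owner execute bit
find /mnt/cephfs -type d ! -perm -u=x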
Reminder -- starting in a few minutes.
Agenda here (still pretty light!)
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes
-- dan
On Thu, Jan 13, 2022 at 7:31 PM Neha Ojha wrote:
>
> Hi everyone,
>
> This month's Ceph User + Dev Monthly meetup is next Thursday, January
> 20, 2022, 15:00-16:
I just verified the following in my other 16.2.6 cluster:
(S, per_pool_omap)
32|2|
0001
I set noout, stopped the OSD service, ran the "ceph-kvstore-tool bluestore-kv
get S per_pool_omap" command, and started the OSD back up.
Looking
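For completeness, a sketch of that procedure (the OSD id and data path are placeholders and assume a non-containerized deployment; the OSD must be stopped while ceph-kvstore-tool has the store open):
ceph osd set noout
systemctl stop ceph-osd@<id>
# Read the omap format marker from the OSD's key-value store
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-<id> get S per_pool_omap
systemctl start ceph-osd@<id>
ceph osd unset noout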
Dear All,
Is the cloud option for the diskprediction module deprecated in Pacific?
https://docs.ceph.com/en/pacific/mgr/diskprediction/
If so, is ProphetStor still contributing data to the local module, or
is this being updated by someone using data from Backblaze?
Do people find this modul
Dear all,
Recently, our dashboard has not been able to connect to our RGW anymore:
Error connecting to Object Gateway: RGW REST API failed request with
status code 404
(b'{"Code":"NoSuchKey","BucketName":"admin","RequestId":"tx0f84ffa8b34579fa'
b'a-0061e93872-4bc673c-ext-default-primary
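No root cause yet, but two hedged things worth checking first (the user id below is only an example): whether the access key the dashboard is configured with still belongs to an existing RGW user with admin caps, and whether the admin API is enabled on the gateway the dashboard talks to.
# Which access key is the dashboard currently configured with?
ceph dashboard get-rgw-api-access-key
# Does a matching RGW user still exist and carry admin caps?
radosgw-admin user info --uid=dashboard
# A 404 like this can also mean the gateway does not serve the admin API;
# check that "admin" is included in rgw_enable_apis for that RGW instance.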
Hi,
On 1/20/22 9:26 AM, Michal Strnad wrote:
Hi,
We are using CephFS in our Kubernetes clusters and now we are trying
to optimize permissions/caps in keyrings. Every guide which we found
contains something like - Create the file system by specifying the
desired settings for the metadata pool
P.S. We are using Nautilus on the Ceph side.
Michal Strnad
On 1/20/22 9:26 AM, Michal Strnad wrote:
Hi,
We are using CephFS in our Kubernetes clusters and now we are trying to
optimize permissions/caps in keyrings. Every guide which we found
contains something like - Create the file system by sp
Hi,
We are using CephFS in our Kubernetes clusters and now we are trying to
optimize permissions/caps in keyrings. Every guide which we found
contains something like - Create the file system by specifying the
desired settings for the metadata pool, data pool and admin keyring with
access to t
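Not a full answer, but the usual starting point for client caps is to let Ceph generate them per path with "ceph fs authorize", which is available on Nautilus (the filesystem name, client name and path below are placeholders):
# Create a keyring limited to one path of one file system
ceph fs authorize <fs_name> client.k8s-app /volumes/app1 rw
# Inspect the caps that were generated
ceph auth get client.k8s-app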