[ceph-users] Re: rbd persistent cache configuration

2024-01-05 Thread Peter
Thanks for the response! Yes, it is in use: "watcher=10.1.254.51:0/1544956346 client.39553300 cookie=140244238214096" indicates the client is connected to the image. I am using fio to perform a write workload on it. I guess the feature is not enabled correctly or some setting is incorrect. Should I
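For reference, a minimal sketch of the image-level persistent write-back (pwl) cache settings and a quick way to check whether the cache is actually active while fio runs; the pool/image names and cache path are placeholders, so verify the option values against the documentation for your release:

    # enable the write-back cache plugin and SSD mode for one image (placeholder names)
    rbd config image set rbd/test-img rbd_plugins pwl_cache
    rbd config image set rbd/test-img rbd_persistent_cache_mode ssd
    # cache file location and size on the client host
    rbd config image set rbd/test-img rbd_persistent_cache_path /mnt/pwl-cache
    rbd config image set rbd/test-img rbd_persistent_cache_size 10737418240
    # while the workload runs, the cache state should show up here
    rbd status rbd/test-img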

[ceph-users] Re: rbd persistent cache configuration

2024-01-05 Thread Ilya Dryomov
On Thu, Jan 4, 2024 at 4:41 PM Peter wrote: > > I followed the document below to set up the image-level rbd persistent cache, > however I get error output when using the commands provided by the document. > I have put my commands and descriptions below. > Can anyone give some instructions? Thanks in advance

[ceph-users] Re: mds crashes after up:replay state

2024-01-05 Thread Patrick Donnelly
Hi Lars, On Fri, Jan 5, 2024 at 9:53 AM Lars Köppel wrote: > > Hello everyone, > > we are running a small cluster with 3 nodes and 25 OSDs per node, on Ceph > version 17.2.6. > Recently the active MDS crashed, and since then the newly started MDS has > always been in the up:replay state. In the ou

[ceph-users] Pacific bluestore_volume_selection_policy

2024-01-05 Thread Reed Dier
After ~3 uneventful weeks following the upgrade from 15.2.17 to 16.2.14, I've started seeing OSD crashes with "cur >= fnode.size" and "cur >= p.length", which seem to be resolved in the next Pacific point release later this month, but until then I'd love to keep the OSDs from flapping. > $ for c
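If the goal is only to stop the asserts until that point release lands, the workaround usually discussed for this class of crashes is switching the volume selection policy back to the original one; this is a sketch, and the value is an assumption to confirm against the tracker issue for the exact assert you are hitting:

    # check what a given OSD currently uses
    ceph config get osd.0 bluestore_volume_selection_policy
    # fall back to the original policy for all OSDs (may need an OSD restart to take effect)
    ceph config set osd bluestore_volume_selection_policy rocksdb_original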

[ceph-users] mds crashes after up:replay state

2024-01-05 Thread Lars Köppel
Hello everyone, we are running a small cluster with 3 nodes and 25 OSDs per node, on Ceph version 17.2.6. Recently the active MDS crashed, and since then the newly started MDS has always been in the up:replay state. In the output of the command 'ceph tell mds.cephfs:0 status' you can see that the j
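As a rough sketch, replay progress is usually watched with something like the following (the daemon name is a placeholder); a genuinely stuck replay, as opposed to a slow one, normally needs the MDS log at a higher debug level to diagnose:

    # overall filesystem and rank state
    ceph fs status
    # per-rank detail; during up:replay this includes the journal position being read
    ceph tell mds.cephfs:0 status
    # raise MDS logging temporarily to see whether replay is advancing or stuck
    ceph config set mds.<daemon-name> debug_mds 10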

[ceph-users] Re: CEPH create a pool with 256 PGs stuck peering

2024-01-05 Thread Eugen Block
Hi, we need more information about your cluster (ceph osd tree) and the applied crush rule for this pool. What Ceph version is this? Regards, Eugen Quoting Phong Tran Thanh: Hi community. I'm running a Ceph cluster with 10 nodes and 180 OSDs, and I created an erasure-coded pool (4+2) with 2
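For completeness, the information being asked for can be gathered with something like this; the pool, rule, and profile names are placeholders:

    # cluster layout and running versions
    ceph osd tree
    ceph versions
    # which crush rule the EC pool uses, and its definition
    ceph osd pool get <pool-name> crush_rule
    ceph osd crush rule dump <rule-name>
    # the erasure-code profile behind the 4+2 pool
    ceph osd pool get <pool-name> erasure_code_profile
    ceph osd erasure-code-profile get <profile-name>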

[ceph-users] Re: Upgrading from 16.2.11?

2024-01-05 Thread Eugen Block
Hi, you can skip Quincy (17.2.X) entirely, Ceph supports upgrading across two releases. Check out the upgrade docs [1] for more details. It also shouldn't be necessary to upgrade to the latest Pacific first; you can go directly to the latest Reef (18.2.1). Regards, Eugen [1] https://docs.c
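Assuming a cephadm-managed cluster, the direct jump to 18.2.1 is typically just the orchestrator upgrade sketched below; package-based clusters follow the manual upgrade order from the docs instead:

    # start the upgrade straight to the latest Reef
    ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.1
    # follow progress
    ceph orch upgrade status
    ceph -W cephadm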

[ceph-users] Re: Reef Dashboard Recovery Throughput empty

2024-01-05 Thread Nizamudeen A
Hi, Is it possible that this is related to https://tracker.ceph.com/issues/63927 ? Regards, Nizam On Fri, Jan 5, 2024 at 4:22 PM Zoltán Beck wrote: > Hi All, > > we just upgraded to Reef, everything looks great, except the new > Dashboard. The Recovery Throughput graph is empty, the recovery

[ceph-users] Re: How to increment osd_deep_scrub_interval

2024-01-05 Thread Eugen Block
Hi, just omit the ".*" from your command: ceph config set osd osd_deep_scrub_interval 1209600 The asterisk (*) can be used for the 'ceph tell' command. Check out the docs [1] for more info about the runtime configuration. Regards, Eugen [1] https://docs.ceph.com/en/quincy/rados/configur
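To make the distinction concrete, a quick sketch of both forms (1209600 seconds is two weeks):

    # persist the new interval in the mon config database for all OSDs
    ceph config set osd osd_deep_scrub_interval 1209600
    # optionally push it to the running daemons right away; here the wildcard is valid
    ceph tell osd.* config set osd_deep_scrub_interval 1209600
    # confirm what a given OSD is actually using
    ceph config show osd.0 osd_deep_scrub_interval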

[ceph-users] Reef Dashboard Recovery Throughput empty

2024-01-05 Thread Zoltán Beck
Hi All, we just upgraded to Reef, everything looks great, except the new Dashboard. The Recovery Throughput graph is empty; the recovery has been ongoing for 18 hours and still no data. I tried to move the prometheus service to another node and redeployed it a couple of times, but still no data. Kind R
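For context, moving and redeploying the prometheus service on a cephadm cluster would typically look something like this; the node name is a placeholder and these are not necessarily the exact commands used in the thread:

    # see where prometheus currently runs
    ceph orch ps --daemon-type prometheus
    # move it by updating the placement, then redeploy
    ceph orch apply prometheus --placement="<other-node>"
    ceph orch redeploy prometheus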

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread Nizamudeen A
Ah, sorry for that. Outside the cephadm shell, if you run cephadm ls | grep "mgr.", that should give you the mgr container name. It should look something like this: [root@ceph-node-00 ~]# cephadm ls | grep "mgr." "name": "mgr.ceph-node-00.aoxbdg", "systemd_unit": "ceph-e877a630-abaa-11

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread duluxoz
Yeah, that's what I meant when I said I'm new to podman and containers - so, stupid Q: what is the "typical" name for a given container, e.g. if the server is "node1", is the management container "mgr.node1" or something similar? And thanks for the help - I really *do* appreciate it.  :-) On 05/0

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread Nizamudeen A
Ah yeah, it's usually inside the container, so you'll need to check the mgr container for the logs: cephadm logs -n <daemon-name>. Also, cephadm has its own log channel which can be used to get the logs: https://docs.ceph.com/en/quincy/cephadm/operations/#watching-cephadm-log-messages On Fri, Jan 5, 2024 at 2:54 
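A hedged example of pulling those mgr logs on the host that runs the daemon; the daemon name below is a placeholder, and cephadm ls shows the real one:

    # find the exact mgr daemon name on this host
    cephadm ls | grep '"name": "mgr'
    # dump that daemon's journald logs (placeholder name)
    cephadm logs -n mgr.node1.abcdef
    # or follow the cluster-wide cephadm log channel
    ceph -W cephadm --watch-debug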

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread duluxoz
Yeap, can do - are the relevant logs in the "usual" place or buried somewhere inside some sort of container (typically)?  :-) On 05/01/2024 20:14, Nizamudeen A wrote: no, the error message is not clear enough to deduce an error. could you perhaps share the mgr logs at the time of the error? It

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread Nizamudeen A
No, the error message is not clear enough to deduce an error. Could you perhaps share the mgr logs at the time of the error? They could have some tracebacks which can give more info to debug it further. Regards, On Fri, Jan 5, 2024 at 2:00 PM duluxoz wrote: > Hi Nizam, > > Yeap, done all that - w

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread duluxoz
Hi Nizam, Yeap, done all that - we're now at the point of creating the iSCSI Target(s) for the gateway (via the Dashboard and/or the CLI: see the error message in the OP) - any ideas?  :-) Cheers Dulux-Oz On 05/01/2024 19:10, Nizamudeen A wrote: Hi, You can find the APIs associated with t

[ceph-users] Re: REST API Endpoint Failure - Request For Where To Look To Resolve

2024-01-05 Thread Nizamudeen A
Hi, You can find the APIs associated with iSCSI here: https://docs.ceph.com/en/reef/mgr/ceph_api/#iscsi and if you create an iSCSI service through the dashboard or cephadm, it should add the iSCSI gateways to the dashboard. You can view them by issuing *ceph dashboard iscsi-gateway-list* and you can
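A small sketch of the dashboard gateway commands referenced above; the gateway name and URL are placeholders, and recent releases read the URL from a file rather than from the command line:

    # list gateways currently known to the dashboard
    ceph dashboard iscsi-gateway-list
    # add a gateway; the file contains a URL like https://admin:admin@gw1:5000
    echo "https://admin:admin@gw1:5000" > /tmp/gw-url
    ceph dashboard iscsi-gateway-add -i /tmp/gw-url gw1
    # remove it again if needed
    ceph dashboard iscsi-gateway-rm gw1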