Thanks for the response! Yes, it is in use.
"watcher=10.1.254.51:0/1544956346 client.39553300 cookie=140244238214096": this
indicates the client is connected to the image.
I am using fio to run a write workload on it.
I guess the feature is not enabled correctly, or some setting is wrong.
Should I
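For reference, this is roughly how I am double-checking the cache settings on
my side (pool/image names below are placeholders, adjust as needed):

# check whether the pwl_cache plugin and the cache mode are set for the image
# (they may also be set globally or per pool via ceph config / rbd config pool)
rbd config image get mypool/myimage rbd_plugins
rbd config image get mypool/myimage rbd_persistent_cache_mode
# once the cache is in use, rbd status should also report an image cache state
rbd status mypool/myimage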
On Thu, Jan 4, 2024 at 4:41 PM Peter wrote:
>
> I followed the document below to set up image-level rbd persistent cache,
> however I get error output when using the commands provided by the document.
> I have put my commands and descriptions below.
> Can anyone give some instructions? thanks in advance
Hi Lars,
On Fri, Jan 5, 2024 at 9:53 AM Lars Köppel wrote:
>
> Hello everyone,
>
> we are running a small cluster with 3 nodes and 25 osds per node, on Ceph
> version 17.2.6.
> Recently the active mds crashed and since then the newly started mds has
> always been in the up:replay state. In the ou
About three uneventful weeks after upgrading from 15.2.17 to 16.2.14 I've started
seeing OSD crashes with "cur >= fnode.size" and "cur >= p.length", which seem
to be resolved in the next point release for Pacific later this month, but
until then I'd love to keep the OSDs from flapping.
> $ for c
Hello everyone,
we are running a small cluster with 3 nodes and 25 osds per node, on Ceph
version 17.2.6.
Recently the active mds crashed and since then the newly started mds has
always been in the up:replay state. In the output of the command 'ceph tell
mds.cephfs:0 status' you can see that the j
Hi,
we need more information about your cluster (ceph osd tree) and the
applied crush rule for this pool. What ceph version is this?
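For example, the output of something like the following would cover that
(the pool name is a placeholder):

ceph osd tree
ceph osd pool get <pool-name> crush_rule
ceph osd crush rule dump
ceph versions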
Regards,
Eugen
Zitat von Phong Tran Thanh :
Hi community.
I am running a Ceph cluster with 10 nodes and 180 osds, and I created a pool
with erasure code 4+2 with 2
Hi,
you can skip Quincy (17.2.X) entirely; Ceph supports upgrading across
two releases at once. Check out the upgrade docs [1] for more details.
It also shouldn't be necessary to upgrade to the latest Pacific first,
and you can go directly to latest Reef (18.2.1).
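Just as an illustration, on a cephadm-managed cluster the upgrade would then be
started along these lines (adjust the version to whatever you target):

ceph orch upgrade start --ceph-version 18.2.1
ceph orch upgrade status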
Regards,
Eugen
[1]
https://docs.c
Hi,
Is it possible that this is related to https://tracker.ceph.com/issues/63927
?
Regards,
Nizam
On Fri, Jan 5, 2024 at 4:22 PM Zoltán Beck wrote:
> Hi All,
>
> we just upgraded to Reef, everything looks great, except the new
> Dashboard. The Recovery Throughput graph is empty, the recovery
Hi,
just omit the ".*" from your command:
ceph config set osd osd_deep_scrub_interval 1209600
The asterisk (*) can be used with the 'ceph tell' command. Check out
the docs [1] for more info about runtime configuration.
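In other words, roughly (the second form is applied at runtime to the running
OSDs and, as far as I know, is not stored in the mon config database):

ceph config set osd osd_deep_scrub_interval 1209600
ceph tell osd.* config set osd_deep_scrub_interval 1209600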
Regards,
Eugen
[1]
https://docs.ceph.com/en/quincy/rados/configur
Hi All,
we just upgraded to Reef, everything looks great, except the new Dashboard.
The Recovery Throughput graph is empty, the recovery is ongoing for 18 hours
and still no data. I tried to move the prometheus service to other node and
redeployed couple of times, but still no data.
Kind R
ah sorry for that. Outside the cephadm shell, if you do cephadm ls | grep
"mgr.", that should give you the mgr container name. It should look
something like this
[root@ceph-node-00 ~]# cephadm ls | grep "mgr."
"name": "mgr.ceph-node-00.aoxbdg",
"systemd_unit":
"ceph-e877a630-abaa-11
Yeah, that's what I meant when I said I'm new to podman and containers -
so, stupid Q: what is the "typical" name for a given container, e.g. if the
server is "node1", is the management container "mgr.node1" or something
similar?
And thanks for the help - I really *do* appreciate it. :-)
On 05/0
ah yeah, it's usually inside the container so you'll need to check the mgr
container for the logs.
cephadm logs -n
also cephadm has
its own log channel which can be used to get the logs.
https://docs.ceph.com/en/quincy/cephadm/operations/#watching-cephadm-log-messages
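iirc that boils down to something like (the mgr daemon name is just an example,
replace it with yours):

cephadm logs --name mgr.ceph-node-00.aoxbdg
# and for the cephadm log channel:
ceph -W cephadm
ceph log last cephadm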
On Fri, Jan 5, 2024 at 2:54
Yeap, can do - are the relevant logs in the "usual" place or buried
somewhere inside some sort of container (typically)? :-)
On 05/01/2024 20:14, Nizamudeen A wrote:
no, the error message is not clear enough to deduce an error. could
you perhaps share the mgr logs at the time of the error? It
no, the error message is not clear enough to deduce an error. could you
perhaps share the mgr logs at the time of the error? It could have some
tracebacks
which can give more info to debug it further.
Regards,
On Fri, Jan 5, 2024 at 2:00 PM duluxoz wrote:
> Hi Nizam,
>
> Yeap, done all that - w
Hi Nizam,
Yeap, done all that - we're now at the point of creating the iSCSI
Target(s) for the gateway (via the Dashboard and/or the CLI: see the
error message in the OP) - any ideas? :-)
Cheers
Dulux-Oz
On 05/01/2024 19:10, Nizamudeen A wrote:
Hi,
You can find the APIs associated with t
Hi,
You can find the APIs associated with iSCSI here:
https://docs.ceph.com/en/reef/mgr/ceph_api/#iscsi
and if you create the iSCSI service through the dashboard or cephadm, it should
add the iSCSI gateways to the dashboard.
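For example, with cephadm the iscsi service can be created along these lines
(pool name, API user/password and placement hosts are placeholders):

ceph orch apply iscsi iscsi-pool admin secretpassword --placement="host1,host2"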
you can view them by issuing *ceph dashboard iscsi-gateway-list* and you
can