Hello Tim! First of all, thanks for the detailed answer!
Yes, a setup of 4 nodes with 116 OSDs each probably looks a bit overloaded, but
what if I have 10 nodes? The nodes themselves are still heavy, but overall it
doesn't seem that dramatic, does it?
However, in the documentation I see that it is quite common for
Hello Dominique!
The OS is quite new - Ubuntu 22.04 with all the latest updates applied.
___
Hello everybody!
I have 4 nodes with 112 OSDs each, running 18.2.4. Each OSD consists of a DB on SSD and
data on an HDD.
For some reason, when I reboot a node, not all OSDs come up, because some VGs or LVs
are not active.
To make them alive again I manually run vgchange -ay $VG_NAME or lvchange -ay
$LV_NAME.
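In script form, the workaround boils down to roughly this (a sketch only: the VG/LV names and the OSD id are placeholders, and the restart line assumes a cephadm-managed daemon):
```
# re-activate the inactive LVM pieces, then restart the corresponding OSD
vgchange -ay "$VG_NAME"                 # activate the whole volume group, or:
lvchange -ay "$VG_NAME/$LV_NAME"        # activate just the one logical volume
ceph orch daemon restart "osd.$OSD_ID"  # assumes cephadm; adjust for other deployments
```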
I suspect
Thanks for the help, buddy! I really appreciate it! I'll try to wait; maybe
someone else jumps in.
___
I did. It says more or less the same:
Mar 06 10:44:05 node1.ec.mts conmon[10588]: 2025-03-06T10:44:05.769+
7faca5624640 -1 log_channel(cephadm) log [ERR] : Failed to apply
osd.node1.ec.mts_all_disks spec
DriveGroupSpec.from_json(yaml.safe_load('''service_type: osd
Mar 06 10:44:05 node1.ec.m
A bit more detail: now I've noticed that ceph health detail is telling me
[WRN] CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s):
osd.node1.ec.all_disks
osd.node1.ec.all_disks: Expecting value: line 1 column 2311 (char 2310)
Okay, I checked my spec but don't see anything suspicious.
I will provide any info you need, just give me a sign.
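One thing that might help narrow down where "char 2310" falls: export the stored OSD specs and run them through a parser locally. A rough sketch (the file name is arbitrary, and it needs PyYAML installed):
```
# dump the OSD service specs the orchestrator has stored and try to parse them locally
ceph orch ls osd --export > osd_specs.yaml
python3 -c "import yaml; list(yaml.safe_load_all(open('osd_specs.yaml'))); print('YAML parses cleanly')"
```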
My original post was about 19.2.0. I have since downgraded to 18.2.4 (a full reinstall, as this
is a completely new cluster I want to run), and it is the same story:
Mar 06 09:37:41 node1.ec.mts conmon[10588]: failed to collect metrics:
Mar 06 09:37:41 nod
yes, I do
.mgr     10    1    769 KiB    2    2.3 MiB    0    4.7 PiB
___
Hello everybody!
Running 19.2.0, I hit an issue that I still cannot get past.
It is: Module 'devicehealth' has failed: Expecting value: line 1 column
2378 (char 2377)
In the MGR log I see:
Mar 04 12:48:07 node2.ec.mts ceph-mgr[3821449]: Traceback (most recent call
last):
Found the answer here:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/QJHES7GKTI6O7BT6UBGCHK6WFTJRNJHE/
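For anyone landing here later, a rough sketch of checking the module state and retrying it; nothing version-specific is assumed, and the failover only helps once the actual cause from that thread has been addressed:
```
# see which mgr module failed and why
ceph health detail
# after fixing the underlying cause, fail over to a standby mgr;
# devicehealth is an always-on module, so it restarts with the new active mgr
ceph mgr fail
```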
___
Hi all!
I've run into a sad situation and I don't know where to dig. That is why I am here again,
hoping for hints.
Situation:
all the "orch" commands aint available and give error in mgr log
mgr.server reply reply (95) Operation not supported Module 'orchestrator' is
not enabled/loaded (required by comm
Yes, this is a bug, indeed.
https://www.spinics.net/lists/ceph-users/msg82468.html
> Remove mappings by:
> $ `ceph osd dump`
> For each pg_upmap_primary entry in the above output:
> $ `ceph osd rm-pg-upmap-primary <pgid>`
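A scripted version of the quoted steps, in case there are many mappings. This is only a sketch and assumes the plain-text `ceph osd dump` output prints one `pg_upmap_primary <pgid> <osd>` line per mapping:
```
# remove every pg_upmap_primary mapping listed in the osdmap
ceph osd dump | awk '$1 == "pg_upmap_primary" {print $2}' | while read -r pgid; do
    ceph osd rm-pg-upmap-primary "$pgid"
done
```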
___
fixed by https://www.spinics.net/lists/ceph-users/msg82468.html
CLOSED.
___
By increasing the debug level I found the following, but I have no idea how to
fix this issue:
```
src/osd/OSDMap.cc: 3242: FAILED ceph_assert(pg_upmap_primaries.empty())
```
There is only one topic about this on Google, and it has no answer.
___
Hello everybody,
I found an interesting thing: for some reason ALL the monitors crash when I try to
rbd map on a client host.
here is my pool:
root@ceph1:~# ceph osd pool ls
iotest
Here is my rbd in this pool:
root@ceph1:~# rbd ls -p iotest
test1
These are the client creds for connecting to this pool:
[cli
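For completeness, the rbd map call itself is nothing special, roughly this (the client name and keyring path are placeholders here, not the real creds):
```
# map the test image from the client; all the monitors crash as soon as this runs
rbd map iotest/test1 --id "$CLIENT_NAME" --keyring "/etc/ceph/ceph.client.$CLIENT_NAME.keyring"
```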
Ah! I guess I've got it!
So, once all the OSDs (created by the specification I'd like to delete) are gone, the
service will disappear as well, right?
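So the rough sequence, as I understand it, would be something like this (the OSD id is a placeholder; treat it as a sketch, not gospel):
```
# drain and remove each OSD that was created by the spec (optionally wiping the device)
ceph orch osd rm "$OSD_ID" --zap
# watch the removal queue
ceph orch osd rm status
# once the last OSD from that spec is gone, the osd.<spec-name> service
# should drop out of `ceph orch ls` on its own
ceph orch ls osd
```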
___
Hi everybody!
I've never seen this before and Google stays silent. I just found the same question from
2021, but there was no answer there.
So, with ceph orch ls I see:
root@ceph1:~/ceph-rollout# ceph orch ls
NAME          PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager
Hi Etienne,
indeed, even ```rados ls --pool test``` hangs on the same instruction
futex(0x7ffc2de0cb10, FUTEX_WAIT_BITSET_PRIVATE, 0, {tv_sec=10215, tv_nsec=619004859}, FUTEX_BITSET_MATCH_ANY
Yes, I have checked from the client side with netcat, and all the OSD ports succeed.
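The check was roughly a loop like this from the client (the host name is a placeholder; 6800-7300 is the default range OSDs bind to):
```
# probe the default Ceph OSD port range from the client host
for port in $(seq 6800 7300); do
    nc -z -w1 "$OSD_HOST" "$port" && echo "port $port open on $OSD_HOST"
done
```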
___