[ceph-users] CEPH monitor slow ops

2024-09-10 Thread Jan Marek
re is another 11 OSDs and they performed well. This is not the first occurrence of this problem; when it happened the first time, we tried to reload the whole server, but then we found that reloading the OSD container is enough... Sincerely Jan Marek -- Ing. Jan Marek University of South Bohemia Acade
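The workaround described above (restarting just the affected OSD container rather than the whole server) can be done through the orchestrator; a minimal sketch, assuming a cephadm deployment and osd.12 as a purely hypothetical daemon id:

    # restart only the affected OSD daemon through the orchestrator
    ceph orch daemon restart osd.12
    # or, directly on the OSD host, restart the container's systemd unit
    systemctl restart ceph-<fsid>@osd.12.service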

[ceph-users] Stuck in upgrade process to reef

2023-12-27 Thread Jan Marek
I've found the /var/lib/ceph/crash directories; I'm attaching to this message the files I found there. Please, can you advise what I can do now? It seems that RocksDB is either incompatible or corrupted :-( Thanks in advance. Sincerely Jan Marek -- Ing. Jan Marek University of S
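For inspecting what landed in /var/lib/ceph/crash without digging through the directories by hand, a minimal sketch using the built-in crash module (enabled by default on cephadm clusters):

    ceph crash ls              # list new and archived crash reports
    ceph crash info <crash-id> # backtrace and metadata of a single report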

[ceph-users] Re: Stuck in upgrade process to reef

2024-01-04 Thread Jan Marek
bzip2 format, how can I share it with you? It also contains the crash log from starting osd.1, but I can cut that out and send it to the list... Sincerely Jan Marek On Thu, Jan 04, 2024 at 02:43:48 CET, Jan Marek wrote: > Hi Igor, > > I've run this one-liner: > > for i in {0..12};

[ceph-users] Re: Stuck in upgrade process to reef

2024-01-17 Thread Jan Marek
settings to fix the issue. > > This is reproducible and fixable in my lab this way. > > Hope this helps. > > > Thanks, > > Igor > > > On 15/01/2024 12:54, Jan Marek wrote: > > Hi Igor, > > > > I've tried to start the ceph-osd daemon as you

[ceph-users] Scrubbing?

2024-01-22 Thread Jan Marek
this PG over and over again? And another question: is scrubbing part of the mClock scheduler? Many thanks for the explanation. Sincerely Jan Marek -- Ing. Jan Marek University of South Bohemia Academic Computer Centre Phone: +420389032080 http://www.gnu.org/philosophy/no-word-attachments.cs.html Jan 2
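A minimal sketch of checking which mClock profile is active and how the scrub knobs look, assuming a Quincy/Reef cluster and osd.0 as an example daemon (scrub scheduling itself is still driven by the osd_scrub_* options; mClock only shapes how much IO background work such as scrubbing may consume):

    ceph config get osd osd_mclock_profile        # high_client_ops / balanced / high_recovery_ops
    ceph config show osd.0 osd_scrub_min_interval
    ceph config show osd.0 osd_max_scrubs         # concurrent scrubs per OSD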

[ceph-users] Re: Scrubbing?

2024-01-25 Thread Jan Marek
problems. And there was a question whether the scheduler manages the CEPH cluster background (and client) operations in such a way that it stays usable for clients. I've tried to send feedback to the developers. Thanks for understanding. Sincerely Jan Marek On Wed, Jan 24, 2024 at 11:18:20 CET, Peter Grand

[ceph-users] Re: Scrubbing?

2024-01-25 Thread Jan Marek
Saturday I will change some networking settings and I will try to start the upgrade process, maybe with --limit=1, to be "soft" on the cluster and on our clients... > -Sridhar Sincerely Jan Marek -- Ing. Jan Marek University of South Bohemia Academic Computer Centre Phone: +42038903208
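A minimal sketch of such a staggered upgrade, assuming the cephadm orchestrator and Reef 18.2.1 as the target image (the --daemon-types / --limit flags are part of the staggered-upgrade feature available since Quincy):

    ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.1 --daemon-types osd --limit 1
    ceph orch upgrade status    # watch progress
    ceph orch upgrade stop      # pause if the cluster or the clients suffer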

[ceph-users] Re: Scrubbing?

2024-01-30 Thread Jan Marek
client_ops". When I was stucked in the upgrade process, I had in logs so many records, see attached file. Since upgrade is complete, this messages went away... Can be this reason of poor performance? Sincerely Jan Marek Dne Čt, led 25, 2024 at 02:31:41 CET napsal(a) Jan Marek: > Hello Sridhar, &

[ceph-users] Re: Scrubbing?

2024-01-30 Thread Jan Marek
Hello again, I'm sorry, I forgot to attach the file... :-( Sincerely Jan On Tue, Jan 30, 2024 at 11:09:44 CET, Jan Marek wrote: > Hello Sridhar, > > on Saturday I finished the upgrade process to 18.2.1. > > The cluster is now in HEALTH_OK state and performs well. > > A

[ceph-users] RoCE?

2024-02-05 Thread Jan Marek
pe to async+posix and restart ceph.target, the cluster converged to the HEALTH_OK state... Thanks for the advice... Sincerely Jan Marek -- Ing. Jan Marek University of South Bohemia Academic Computer Centre Phone: +420389032080 http://www.gnu.org/philosophy/no-word-attachments.cs.html 2024-02-05T09:56:50.2
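For reference, a minimal sketch of the messenger change described above (falling back from RDMA to the default async+posix cluster messenger), assuming the option was set globally:

    ceph config set global ms_cluster_type async+posix
    # then, on each host, restart all Ceph daemons so they rebind:
    systemctl restart ceph.target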

[ceph-users] Re: RoCE?

2024-02-20 Thread Jan Marek
Hello, we've found the problem: the systemd unit for the OSD is missing this line in the [Service] section: LimitMEMLOCK=infinity When I added this line to the systemd unit, the OSD daemon started and we have the HEALTH_OK state in the cluster status. Sincerely Jan Marek On Mon, Feb 05, 2024 at 11:
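A minimal sketch of applying that fix as a systemd drop-in for the whole template unit, instead of editing the generated unit file directly, assuming a cephadm deployment where the OSD units are named ceph-<fsid>@osd.<id>.service (check with `systemctl list-units 'ceph*'`):

    mkdir -p /etc/systemd/system/ceph-<fsid>@.service.d
    printf '[Service]\nLimitMEMLOCK=infinity\n' \
        > /etc/systemd/system/ceph-<fsid>@.service.d/memlock.conf
    systemctl daemon-reload
    systemctl restart ceph-<fsid>@osd.12.service   # osd.12 is a placeholder id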

[ceph-users] Stuck in upgrade

2022-10-03 Thread Jan Marek
but it cannot join the cluster... Sincerely Jan Marek -- Ing. Jan Marek University of South Bohemia Academic Computer Centre Phone: +420389032080 http://www.gnu.org/philosophy/no-word-attachments.cs.html

[ceph-users] Re: Stuck in upgrade

2022-10-07 Thread Jan Marek
dy set the require_osd_release parameter to octopus. I suggest using variant 1) and I'm sending the attached patch. There is another question: does the MON daemon have to check require_osd_release when it is joining the cluster, if it cannot raise its value? It is a potentially dangerous situation,
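A minimal sketch of checking and raising the flag by hand, assuming every OSD already runs octopus or newer (as in this thread):

    ceph osd dump | grep require_osd_release   # current value
    ceph osd require-osd-release octopus       # raise it explicitly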

[ceph-users] Re: Stuck in upgrade

2022-10-07 Thread Jan Marek
aised automatically when every MON daemon in the cluster has this version. Is there a reason not to raise the require-osd-release parameter automatically? Sincerely Jan Marek On Fri, Oct 07, 2022 at 11:08:52 CEST, Dan van der Ster wrote: > Hi Jan, > > It looks like you got into this situation

[ceph-users] Ceph networking

2022-11-28 Thread Jan Marek
them the addresses of the OSD nodes from the 192.168.1.0/24 network, or will it give them addresses randomly? Please, does someone have advice on how to set up this networking optimally? Thanks a lot. Sincerely Jan Marek -- Ing. Jan Marek University of South Bohemia Academic Computer Centre Phone: +420389032080
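A minimal sketch of pinning the networks explicitly instead of relying on autodetection, assuming 10.0.0.0/24 as a placeholder public network and the 192.168.1.0/24 network from the thread as the cluster (replication) network:

    ceph config set global public_network  10.0.0.0/24
    ceph config set global cluster_network 192.168.1.0/24
    # OSDs bind to these networks on their next restart
    ceph orch restart osd        # service name may differ, see `ceph orch ls`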

[ceph-users] Re: Ceph networking

2022-11-29 Thread Jan Marek
# ceph orch ps mon.host1 is not in the listing and mon.host1 is among the stray daemons? :-( And how can it be written in a YAML file for ceph orch apply? Sincerely Jan Marek On Mon, Nov 28, 2022 at 02:36:11 CET, Jan Marek wrote: > Hello, > > I have a CEPH cluster with 3 MONs and 6 OSD nodes
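A minimal sketch of how the MON placement could be written as a service spec, assuming placeholder host names host1..host3; saved as mon.yaml it would be applied with `ceph orch apply -i mon.yaml`:

    service_type: mon
    placement:
      hosts:
      - host1
      - host2
      - host3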

[ceph-users] Move ceph to new addresses and hostnames

2023-04-25 Thread Jan Marek
. Disabling old systemd unit ceph-osd@12...
Moving data...
Traceback (most recent call last):
  File "/usr/sbin/cephadm", line 9468, in <module>
    main()
  File "/usr/sbin/cephadm", line 9456, in main
    r = ctx.func(ctx)
  File "/usr/sbin/cephadm", line 2135, in _default_image

[ceph-users] Re: Move ceph to new addresses and hostnames

2023-04-26 Thread Jan Marek
6, then I tried the 'cephadm adopt' command once more and voilà! It works like a charm. I will try to configure the OSDs on node 1 to adopt the WAL and DB from the prepared LVM... Maybe after an upgrade to a newer version of CEPH it will be OK? Sincerely Jan Marek -- Ing. Jan Marek University of South

[ceph-users] CEPH orch made osd without WAL

2023-07-09 Thread Jan Marek
vices:
    paths:
    - /dev/nvme0n1
  filter_logic: AND
  objectstore: bluestore
Now I have 12 OSDs with the DB on the NVMe device, but without a WAL. How can I add a WAL to these OSDs? The NVMe device still has 128 GB of free space. Thanks a lot. Sincerely Jan Marek -- Ing. Jan Marek University of South Bohemia Aca
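One possible way to add a WAL on the remaining NVMe space after the fact (a sketch, not something taken from this thread): create a new LV for it, stop the OSD, attach the WAL, and start the OSD again. The LV name ceph-wal/wal-osd8 and the osd fsid are placeholders:

    ceph orch daemon stop osd.8
    cephadm shell --name osd.8 -- ceph-volume lvm new-wal \
        --osd-id 8 --osd-fsid <osd-fsid> --target ceph-wal/wal-osd8
    ceph orch daemon start osd.8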

[ceph-users] Planning cluster

2023-07-09 Thread Jan Marek
operational? I believe it is the better choice that it will be. And what if 2 of the 3 locations "die"? On this cluster there is a pool with CephFS - this is the main purpose of the CEPH cluster. Many thanks for your comments. Sincerely Jan Marek -- Ing. Jan Marek University of South Bohemia Academic Computer Centre Phone: +42

[ceph-users] Re: CEPH orch made osd without WAL

2023-07-09 Thread Jan Marek
...) Sincerely Jan Marek On Mon, Jul 10, 2023 at 08:10:58 CEST, Eugen Block wrote: > Hi, > > if you don't specify a different device for WAL it will be automatically > colocated on the same device as the DB. So you're good with this > configuration. > > Regards, >

[ceph-users] Re: CEPH orch made osd without WAL

2023-07-10 Thread Jan Marek
ng 'osd': expected string of the form TYPE.ID, valid types are: auth, mon, osd, mds, mgr, client\n" I'm on the host on which this OSD 8 is. My CEPH version is the latest (I hope) quincy: 17.2.6. Thanks a lot for the help. Sincerely Jan Marek > > > Quoting Ja
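The error above just means the target was given as a bare type; the parser wants TYPE.ID. A sketch, assuming the command was aimed at the local osd.8:

    ceph tell osd   perf dump   # rejected: expected string of the form TYPE.ID
    ceph tell osd.8 perf dump   # accepted: names one concrete daemon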

[ceph-users] Re: CEPH orch made osd without WAL

2023-07-10 Thread Jan Marek
"wal_used_bytes": 0, "files_written_wal": 535, "bytes_written_wal": 121443819520, "max_bytes_wal": 0, "alloc_unit_wal": 0, "read_random_disk_bytes_wal": 0, "read_disk_bytes_wal&

[ceph-users] How to properly remove of cluster_network

2023-09-25 Thread Jan Marek
a.b.7.0/24 mon advanced public_network a.b.7.0/24 ... How do I do this safely? Would it be correct to only set: ceph config set global cluster_network a.b.7.0/24 ? Do I then have to restart the mon processes and osd processes? Many thanks for the advice. Sincerely Jan -- Ing. Jan Marek University of
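For the removal itself, one possible sequence (a sketch, not advice given in the thread) is to drop the option and restart the daemons that bound to it, assuming the public network already carries all traffic:

    ceph config rm global cluster_network
    ceph config dump | grep -E 'public_network|cluster_network'
    # restart OSDs (and MONs, if they had cluster_network set) so they rebind
    ceph orch restart osd        # service name may differ, see `ceph orch ls`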

[ceph-users] Strange container restarts?

2024-11-08 Thread Jan Marek
the ceph host was prepared by ansible, thus the environment is the same. On every machine we have podman version 4.3.1+ds1-8+deb12u1 and conmon version 2.1.6+ds1-1. The OS is Debian bookworm. The attached logs were prepared by: grep exec_died /var/log/syslog Sincerely Jan Marek -- Ing. Jan Marek Uni

[ceph-users] OSD process in the "weird" state

2024-12-10 Thread Jan Marek
ng this CEPH cluster as storage for ProxMox virtualization and some virtual machines don't "survive" this situation, as their "disks" are not accessible :-(. Is there some solution we can try? Many thanks for the advice. Sincerely Jan Marek -- Ing. Jan Marek Universi

[ceph-users] Statistics?

2025-02-27 Thread Jan Marek
t the numerator and denominator of the fraction are swapped, aren't they? Sincerely Jan Marek -- Ing. Jan Marek University of South Bohemia Academic Computer Centre Phone: +420389032080 http://www.gnu.org/philosophy/no-word-attachments.cs.html

[ceph-users] Re: Statistics?

2025-02-27 Thread Jan Marek
+recovering+undersized+remapped Are these statistics OK? Sincerely Jan Marek On Thu, Feb 27, 2025 at 10:46:55 CET, Jan Marek wrote: > Hello, > > I have a newly created ceph cluster, I had some issues with disks, > and now I have this 'ceph -s' output: >