[ceph-users] Re: OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck

2023-02-28 Thread Jan Pekař - Imatic
-- From "Jan Pekař - Imatic" To m...@tuxis.nl; ceph-users@ceph.io Date 2/25/2023 4:14:54 PM Subject Re: [ceph-users] OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck Hi, I tried upgrade to Pacific now. The same result. OSD is not starting, stuck at 1500 keys. JP

[ceph-users] Re: OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck

2023-02-25 Thread Jan Pekař - Imatic
Hi, I tried upgrading to Pacific now. The same result: the OSD is not starting, stuck at 1500 keys. JP On 23/02/2023 00.16, Jan Pekař - Imatic wrote: Hi, I enabled debug and got the same - 1500 keys is where it ends. I also enabled debug_filestore and ... 2023-02-23T00:02:34.876+0100 7f8ef26d1700
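For context, raising debug output on a stuck OSD of this kind is typically done as below; a minimal sketch, assuming the subsystems named in the thread (debug_osd, debug_filestore) and an illustrative OSD ID of 0:

    # Run the OSD in the foreground with verbose logging to watch
    # the snap_mapper conversion (osd.0 and the levels are examples):
    ceph-osd -f -i 0 --debug_osd 20 --debug_filestore 20

    # Or raise the levels on an already-running daemon via its admin socket:
    ceph daemon osd.0 config set debug_osd 20
    ceph daemon osd.0 config set debug_filestore 20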

[ceph-users] Re: OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck

2023-02-22 Thread Jan Pekař - Imatic
...@tuxis.nl / +31 318 200208 -- Original Message -- From "Jan Pekař - Imatic" To ceph-users@ceph.io Date 1/12/2023 5:53:02 PM Subject [ceph-users] OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck Hi all, I have a problem upgrading nautilus to octopus on my

[ceph-users] OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck

2023-01-12 Thread Jan Pekař - Imatic
Hi all, I have a problem upgrading nautilus to octopus on my OSD. Upgrading mon and mgr was OK, but the first OSD got stuck on 2023-01-12T09:25:54.122+0100 7f49ff3eae00  1 osd.0 126556 init upgrade snap_mapper (first start as octopus) and there was no activity after that for more than 48 hours. No disk a

[ceph-users] Disable peering of some pool

2022-03-16 Thread Jan Pekař - Imatic
Hi all, we have a problem on our production cluster running nautilus (14.2.22). The cluster is almost full, and a few months ago we noticed issues with slow peering - when we restart any OSD (or host), it takes hours to finish the peering process instead of minutes. We noticed that some pool contains 90k
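Nautilus has no per-pool peering switch, so the usual workaround is cluster-wide flags around a restart plus inspecting which PGs are stuck; a hedged sketch (the thread's actual resolution is truncated away):

    # Blunt, cluster-wide instruments around a planned restart:
    ceph osd set noout          # do not mark restarting OSDs out
    ceph osd set norebalance    # avoid data movement meanwhile

    # See which PGs are stuck peering (the pool is the number
    # before the dot in the PG ID):
    ceph pg dump pgs_brief | grep peering

    # Clear the flags afterwards:
    ceph osd unset noout
    ceph osd unset norebalance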

[ceph-users] MonSession vs TCP connection

2021-05-11 Thread Jan Pekař - Imatic
Hi all, I would like to "pair" a MonSession with its TCP connection to find the real process that is using that session. I need this to identify processes with old ceph features. A MonSession looks like MonSession(client.84324148 [..IP...]:0/3096235764 is open allow *, features 0x27018fb86aa42ada (jewel)
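One way to do this pairing (a sketch, not necessarily the approach settled on in the thread; mon.a is an example daemon ID): dump the sessions on the monitor, then match the client address back to a process on the client host:

    # List open sessions with client address and negotiated features:
    ceph daemon mon.a sessions

    # Cluster-wide summary of feature bits per release:
    ceph features

    # On the client host, map the TCP connection back to a PID
    # (6789 is the default mon port; substitute the real mon IP):
    ss -tnp | grep '<mon-ip>:6789'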

[ceph-users] Re: Problem unusable after deleting pool with billion objects

2020-09-11 Thread Jan Pekař - Imatic
...finally removed. And yet another potential issue (or at least an additional factor) with your setup is a pretty high DB vs. main device ratio (1:11). Deletion procedures from multiple OSDs result in a pretty high load on the DB volume, which becomes overburdened... Thanks, Igor On 9/11/2020 3
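As a side note, the pressure Igor describes on the DB volume can be watched from the OSD's admin socket; a sketch with an illustrative OSD ID:

    # BlueFS counters show how much of the dedicated DB device is in
    # use and whether RocksDB is spilling onto the main device:
    ceph daemon osd.0 perf dump bluefs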

[ceph-users] Problem unusable after deleting pool with billion objects

2020-09-11 Thread Jan Pekař - Imatic
Hi all, I have built a testing cluster with 4 hosts, 1 SSD and 11 HDDs on each host, running ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable) on Ubuntu. Because we want to store small objects, I set bluestore_min_alloc_size to 8192 (it is maybe important in thi
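For reference, a minimal ceph.conf sketch of that setting; note that bluestore_min_alloc_size is baked in when an OSD is created, so it only affects OSDs deployed after the change:

    [osd]
    # 8192 bytes instead of the nautilus HDD default of 65536,
    # to reduce allocation overhead for small objects:
    bluestore_min_alloc_size = 8192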

[ceph-users] Re: Problem with OSD::osd_op_tp thread had timed out and other connected issues

2020-07-21 Thread Jan Pekař - Imatic
Hi Ben, we are not using an EC pool on that cluster. The OSD-out behavior almost stopped once we solved the memory issues (less memory allocated to the OSDs). We are not working on that cluster anymore, so we have no other info about that problem. Jan On 20/07/2020 07.59, Benoît Knecht wrote: Hi Jan,

[ceph-users] Re: Problem with OSD::osd_op_tp thread had timed out and other connected issues

2020-03-21 Thread Jan Pekař - Imatic
2020, at 5:46 AM, Jan Pekař - Imatic wrote: Each node has 64GB RAM so it should be enough (12 OSDs = 48GB used). On 21/03/2020 13.14, XuYun wrote: Bluestore requires more than 4G of memory per OSD, do you have enough memory? On 21 March 2020 at 8:09 PM, Jan Pekař - Imatic wrote: Hello, I

[ceph-users] Re: Problem with OSD::osd_op_tp thread had timed out and other connected issues

2020-03-21 Thread Jan Pekař - Imatic
Each node has 64GB RAM so it should be enough (12 OSDs = 48GB used). On 21/03/2020 13.14, XuYun wrote: Bluestore requires more than 4G of memory per OSD, do you have enough memory? On 21 March 2020 at 8:09 PM, Jan Pekař - Imatic wrote: Hello, I have ceph cluster version 1
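The knob in play here is most likely osd_memory_target (nautilus defaults it to 4GiB per OSD, so 12 OSDs already budget 48GB before anything else on the node); a hedged ceph.conf sketch with an illustrative value, not the thread's actual setting:

    [osd]
    # 3 GiB per OSD instead of the 4 GiB default, to leave
    # headroom on a 64GB node running 12 OSDs:
    osd_memory_target = 3221225472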

[ceph-users] Problem with OSD::osd_op_tp thread had timed out and other connected issues

2020-03-21 Thread Jan Pekař - Imatic
Hello, I have a ceph cluster, version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8) nautilus (stable), 4 nodes - each node with 11 HDDs, 1 SSD, and a 10Gbit network. The cluster was empty, a fresh install. We filled the cluster with data (small blocks) using RGW. The cluster is now used for testing, so no client was us

[ceph-users] Re: Problem with OSD - stuck in CPU loop after rbd snapshot mount

2020-02-05 Thread Jan Pekař - Imatic
8390 mlcod 60042'3048390 active+clean] log_op_stats osd_op(osd.9.60026:836317 2.c 2:311c8802:::rbd_data.09df7e2ae8944a.:7b5 [copy-get max 8388608] snapc 0=[] ondisk+read+rwordered+ignore_cache+ignore_overlay+map_snap_clone+known_if_redirected e60042) v8 inb 0 outb 4194423 lat 19.62

[ceph-users] Problem with OSD - stuck in CPU loop after rbd snapshot mount

2020-02-03 Thread Jan Pekař - Imatic
Hi all, I have a small cluster, and yesterday I tried to mount an older RBD snapshot to recover data. (I have approx. 230 daily snapshots of one RBD image on my small ceph.) After I did the mount and an ls operation, the cluster got stuck and I noticed that 2 of my OSDs were eating CPU and rising in memory usage (more
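For context, mounting an RBD snapshot usually looks like the following sketch; the pool, image, and snapshot names are placeholders, not from the thread:

    # List snapshots of the image:
    rbd snap ls rbd/myimage

    # Map the snapshot (snapshots map read-only) and mount it:
    rbd map rbd/myimage@snap-2020-01-01
    mount -o ro /dev/rbd0 /mnt/restore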