[ceph-users] Can I pause an ongoing rebalance process?

2021-12-07 Thread José H. Freidhof
Hello together, a question: I repaired some OSDs and now the rebalance process is running. We are now suffering performance problems. Can I pause the ongoing rebalance job and continue it at night? Thanks in advance.
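For reference, backfill and rebalance can usually be paused cluster-wide with OSD flags rather than per job; a minimal sketch, assuming shell access with an admin keyring:

  # pause data movement until the flags are unset again
  ceph osd set norebalance
  ceph osd set nobackfill
  # ... later, at night, resume:
  ceph osd unset nobackfill
  ceph osd unset norebalance

Degraded-object recovery can additionally be gated with the norecover flag; paused PGs simply wait in the queue until the flags are cleared.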

[ceph-users] after wal device crash can not recreate osd

2021-12-06 Thread José H. Freidhof
Hello together, I need to recreate the OSDs on one Ceph node because the NVMe WAL device has died. I replaced the NVMe with a brand-new one and am now trying to recreate the OSDs on this node, but I get an error while re-creating them. Can somebody tell me why I get this error? I never saw before th
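A common path for rebuilding OSDs after a shared WAL device dies is to remove, zap, and redeploy; a hedged sketch, assuming a cephadm-managed cluster and hypothetical IDs and device names:

  # remove the dead OSD but keep its ID reserved for the replacement
  ceph orch osd rm 12 --replace
  # wipe the old data disk and the new NVMe so the orchestrator sees them as available
  ceph orch device zap cd88-ceph-osdh-01 /dev/sdc --force
  # the existing OSD service spec (ceph orch apply osd ...) then re-creates the OSD

Leftover LVM volumes that still reference the dead WAL device typically have to be removed first, and are a frequent cause of re-creation errors of this kind.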

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-12 Thread José H. Freidhof
,max_background_compactions=64,level0_file_num_compaction_trigger=64,level0_slowdown_writes_trigger=128,level0_stop_writes_trigger=256,max_bytes_for_level_base=6GB,compaction_threads=32,flusher_threads=8,compaction_readahead_size=2MB On Tue, 12 Oct 2021 at 12:35, José H. Freidhof < harald.fr
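An options string like the one above is normally applied as a single value of bluestore_rocksdb_options; a sketch, with the tail of the value elided since the snippet is truncated:

  ceph config set osd bluestore_rocksdb_options \
    "...,max_background_compactions=64,level0_file_num_compaction_trigger=64,..."
  # OSDs must be restarted to pick up a changed RocksDB options string

Because the value replaces the entire built-in default string, silently losing a default option is an easy mistake with this setting.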

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-12 Thread José H. Freidhof
6 - it's irrelevant to your case. It was an unexpected ENOSPC result from an allocator which still had enough free space. But in your case the bluefs allocator doesn't have free space at all, as the latter is totally wasted by tons of WAL files. Thanks,
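To verify that the WAL device really is full of BlueFS WAL files, the admin socket exposes per-device usage; a sketch, assuming the osd.8 discussed in this thread:

  ceph daemon osd.8 bluefs stats
  ceph daemon osd.8 bluestore bluefs device info
  # compare the WAL device's used bytes against its size; usage close to
  # the device capacity matches the "wasted by WAL files" diagnosis above

These are the same commands Igor asks for elsewhere in the thread.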

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-12 Thread José H. Freidhof
akes 20 mins as well? Could you please share an OSD log containing both that long startup and the following (e.g. 1+ hour) regular operation? Preferably for OSD.2 (or whichever one has been using the default settings from the deployment). Thanks,
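A hedged way to capture such a log on a cephadm deployment, assuming osd.2 and a placeholder fsid:

  # raise BlueFS logging for the affected OSD (revert afterwards)
  ceph config set osd.2 debug_bluefs 5/5
  # the container's log goes to the journal; the unit name embeds the fsid
  journalctl -u ceph-<fsid>@osd.2.service --since "2 hours ago" > osd.2.log

The useful window is the slow startup plus roughly an hour of regular operation, as requested above.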

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-08 Thread José H. Freidhof
running stress load testing. The latter might tend to hold the resources and e.g. prevent internal housekeeping... Igor. On 10/8/2021 12:16 AM, José H. Freidhof wrote: Hi Igor, yes, the same problem is on osd.2. We have 3 OSD nodes..

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-07 Thread José H. Freidhof
Thanks, Igor. On 10/7/2021 12:46 PM, José H. Freidhof wrote: Good morning, I checked osd.8 today and the log again shows the same error: bluefs _allocate unable to allocate 0x10 on bdev 0, allocator name bluefs-wal, allocator type hyb

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-07 Thread José H. Freidhof
lock size 0x10, free 0xff000, fragmentation 0, allocated 0x0. Any idea why that could be? On Wed, 6 Oct 2021 at 22:23, José H. Freidhof < harald.freid...@googlemail.com> wrote: Hi Igor, today I repaired one osd node
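The free/fragmentation figures in that log line can be inspected directly from the admin socket; a sketch, assuming the allocator name from the log (bluefs-wal) is registered under exactly that name, which may vary by release:

  ceph daemon osd.8 bluestore allocator score bluefs-wal
  ceph daemon osd.8 bluestore allocator dump bluefs-wal
  # score is a 0..1 fragmentation rating; dump lists the remaining free extents

With free 0xff000 (roughly 1 MiB) left on the device, the allocator is effectively out of space regardless of fragmentation.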

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-07 Thread José H. Freidhof
be? On Wed, 6 Oct 2021 at 22:23, José H. Freidhof < harald.freid...@googlemail.com> wrote: Hi Igor, today I repaired one OSD node and all OSDs on the node, creating them new again. After that I waited for the rebalance/recovery process and the cluste
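For the waiting step, a minimal sketch that polls cluster health before touching the next node (assuming a shell with the admin keyring):

  # block until recovery/backfill has drained
  until ceph health | grep -q HEALTH_OK; do sleep 60; done
  ceph -s   # confirm PGs are active+clean

For tearing down OSDs one at a time, ceph osd safe-to-destroy <id> is the stricter per-OSD check.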

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-06 Thread José H. Freidhof
0 B 0 B 0 B 0 B 0 B 0
TOTALS 32 GiB 21 GiB 0 B 0 B 0 B 0 B 796
MAXIMUMS:
LOG 920 MiB 4.0 GiB 0 B 0 B 0 B 17 MiB
WAL 45 GiB 149 GiB 0 B 0 B 0 B

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-06 Thread José H. Freidhof
on Wed, 6 Oct 2021, 13:33: On 10/6/2021 2:16 PM, José H. Freidhof wrote: Hi Igor, yes, I have some OSD settings set :-) here is my ceph config dump. Those settings are from a Red Hat document for BlueStore devices; maybe it is that settin
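To review just the suspect overrides instead of the whole dump, filtering the config store works; a short sketch:

  ceph config dump | grep -i -e bluestore -e bluefs -e rocksdb
  # a single override can be returned to its default with:
  ceph config rm osd bluestore_rocksdb_options

ceph config rm <who> <option> is the documented way to drop an override that was set via ceph config set.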

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-06 Thread José H. Freidhof
offline OSD: "ceph-kvstore-tool bluestore-kv compact". Even if the above works great, please refrain from applying that compaction to every OSD - let's see how that "compacted" OSD evolves. Would the WAL grow again or not? Thanks, Igor
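ceph-kvstore-tool needs the OSD's data path as an argument; a sketch for a containerized deployment, with the fsid and OSD id as placeholders:

  systemctl stop ceph-<fsid>@osd.8.service
  cephadm shell --name osd.8 -- \
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-8 compact
  systemctl start ceph-<fsid>@osd.8.service

If the OSD still starts normally, an online alternative is the admin socket command ceph daemon osd.8 compact.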

[ceph-users] Re: bluefs _allocate unable to allocate

2021-10-06 Thread José H. Freidhof
Gb. Could you please share the output of the following commands:

  ceph daemon osd.N bluestore bluefs device info
  ceph daemon osd.N bluefs stats

Thanks, Igor. On 10/6/2021 12:24 PM, José H. Freidhof wrote: Hello together

[ceph-users] bluefs _allocate unable to allocate

2021-10-06 Thread José H. Freidhof
Hello together, we have a running Ceph Pacific 16.2.5 cluster and I found these messages in the service logs of the OSD daemons. We have three OSD nodes; each node has 20 OSDs as BlueStore with NVMe/SSD/HDD. Is this a bug, or do I maybe have some settings wrong? cd88-ceph-osdh-01 bash[6283]: debug 2
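On a cephadm cluster those per-daemon service logs can be pulled and filtered directly; a sketch, assuming the OSD name is osd.8 as elsewhere in the thread:

  cephadm logs --name osd.8 | grep 'bluefs _allocate'
  # recent cluster-level events:
  ceph log last 100

cephadm logs wraps journalctl for the daemon's systemd unit, so journalctl-style options apply there as well.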

[ceph-users] ceph-iscsi / tcmu-runner bad performance with VMware ESXi

2021-09-23 Thread José H. Freidhof
Hello together, I need some help with our Ceph 16.2.5 cluster as an iSCSI target with ESXi nodes. Background info:
- we have built 3 OSD nodes with 60 BlueStore OSDs: 60x 6 TB spinning disks, 12 SSDs and 3 NVMes.
- OSD nodes have 32 cores and 256 GB RAM.
- the OSD disks are connected to
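For the ESXi path specifically, the per-LUN buffer settings in ceph-iscsi are a common first tuning stop; a hedged sketch in gwcli, with the pool/image name as a placeholder:

  # inside gwcli: enlarge the tcmu-runner data area for a busy image
  /disks> reconfigure rbd/vmware_lun01 max_data_area_mb 128

max_data_area_mb is a documented ceph-iscsi disk attribute, but the image name here is hypothetical and the benefit depends on the workload; the thread itself does not confirm this as the fix.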

[ceph-users] rgw container status unknown but they are running

2021-09-04 Thread José H. Freidhof
Hello together, can somebody help me here? I have four running RGW containers applied and two of them show status "unknown". root@cd133-ceph-mon-01:~# ceph orch ps
NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE
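An "unknown" status in ceph orch ps often just means the manager's cached metadata for that host is stale; a sketch of the usual first checks, with the daemon name as a placeholder:

  ceph orch ps --daemon-type rgw --refresh
  ceph health detail
  # if a daemon really is stuck, redeploy just that one:
  ceph orch daemon redeploy rgw.myrealm.cd133-ceph-mon-01.abcdef

The redeploy target name is hypothetical; ceph orch ps prints the real daemon names.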