[ceph-users] Re: Orphaned rbd_data Objects

2025-02-04 Thread Stolte, Felix
a objects might be linked to namespaced images that can only be listed using the command: rbd ls --namespace I suggest checking this because the 'rbd' pool has historically been Ceph'
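A minimal sketch of the namespace check mentioned above; the pool name 'rbd' and namespace 'myns' are placeholders:

  # list the RBD namespaces defined in the pool
  rbd namespace ls rbd
  # list images inside one namespace
  rbd ls rbd/myns
  # equivalent long form
  rbd ls --pool rbd --namespace myns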

[ceph-users] Re: Orphaned rbd_data Objects

2025-01-30 Thread Stolte, Felix
0012f2f rbd_data.ed93e6548ca56b.000eef03 rbd_data.26f7c5d05af621.2adf …. On 28.01.2025 at 22:46, Alexander Patrakov wrote: Hi Felix, A dumb answer first: if you know the image names, have you tried "rbd rm $pool/

[ceph-users] Re: Orphaned rbd_data Objects

2025-01-30 Thread Stolte, Felix
A dumb answer first: if you know the image names, have you tried "rbd rm $pool/$imagename"? Or, is there any reason like concerns about iSCSI control data integrity that prevents you from trying that? Also, have you checked the rbd trash?

[ceph-users] Re: Orphaned rbd_data Objects

2025-01-29 Thread Stolte, Felix
Also, have you checked the rbd trash? On Tue, Jan 28, 2025 at 5:43 PM Stolte, Felix wrote: Hi guys, we have an rbd pool we used for images exported via ceph-iscsi on a 17.2.7 cluster. The pool uses 10 times the disk space I would expect it to, and after investigating we noticed a lot of rbd_da
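The trash check suggested above looks roughly like this; 'rbd' is a placeholder pool name:

  # list trashed images, including ones parked there by deferred deletion
  rbd trash ls --all rbd
  # a trashed image can then be purged or restored
  rbd trash rm <image-id> -p rbd
  rbd trash restore <image-id> -p rbd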

[ceph-users] Orphaned rbd_data Objects

2025-01-28 Thread Stolte, Felix
Hi guys, we have an rbd pool we used for images exported via ceph-iscsi on a 17.2.7 cluster. The pool uses 10 times the disk space I would expect it to, and after investigating we noticed a lot of rbd_data objects whose images are no longer present. I assume that the original images were del
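One way to narrow down which rbd_data prefixes no longer belong to any visible image is sketched below; this is only an illustration (pool name 'rbd' assumed), not a cleanup procedure, and trashed or namespaced images will also show up as apparent orphans:

  # prefixes referenced by rbd_data objects actually stored in the pool
  rados -p rbd ls | grep '^rbd_data\.' | cut -d. -f2 | sort -u > /tmp/prefixes_in_pool
  # prefixes belonging to images the pool still lists
  for img in $(rbd ls rbd); do
      rbd info rbd/"$img" | awk -F': ' '/block_name_prefix/ {print $2}'
  done | sed 's/^rbd_data\.//' | sort -u > /tmp/prefixes_known
  # anything only in the first list is a candidate orphan
  comm -23 /tmp/prefixes_in_pool /tmp/prefixes_known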

[ceph-users] UPGRADE_REDEPLOY_DAEMON: Upgrading daemon failed

2024-11-25 Thread Stolte, Felix
Hi folks, we did upgrade one of our clusters from Pacific to Quincy. Everything worked fine, but cephadm complains about one OSD not being upgraded: [WRN] UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.15 on host osd-dmz-k5-1 failed. Upgrade daemon: osd.15: cephadm exited with an error code:
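For context, retrying a single failed daemon usually looks roughly like this (daemon name taken from the warning above):

  # where the upgrade currently stands
  ceph orch upgrade status
  # ask cephadm to redeploy just the failed daemon with the target image
  ceph orch daemon redeploy osd.15
  # cephadm's own log channel often carries the underlying error
  ceph log last cephadm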

[ceph-users] Re: MDS crashes to damaged metadata

2024-06-04 Thread Stolte, Felix
wrote Patrick Donnelly: Hi Felix, On Sat, May 13, 2023 at 9:18 AM Stolte, Felix wrote: Hi Patrick, we have been running one daily snapshot since December and our cephfs crashed 3 times because of this: https://tracker.ceph.com/issues/38452 We currently have 19 files with corrupt metadata found by
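The usual inspection commands around damaged CephFS metadata, as a hedged sketch (the filesystem name 'cephfs' is a placeholder):

  # list the damage entries the MDS has recorded
  ceph tell mds.cephfs:0 damage ls
  # start a recursive scrub that attempts repairs, then watch it
  ceph tell mds.cephfs:0 scrub start / recursive,repair
  ceph tell mds.cephfs:0 scrub status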

[ceph-users] cephadm custom jinja2 service templates

2024-04-17 Thread Stolte, Felix
Hi folks, I would like to use a custom jinja2 template for an ingress service for rendering the keepalived and haproxy config. Can someone tell me how to override the default templates? Best regards Felix
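As far as I know the cephadm mgr module checks the config-key store for template overrides before falling back to its built-in jinja2 templates; a sketch under that assumption (the exact key names should be verified against the cephadm documentation for your release):

  # store customized templates for ingress services
  ceph config-key set mgr/cephadm/services/ingress/haproxy.cfg.j2 -i ./haproxy.cfg.j2
  ceph config-key set mgr/cephadm/services/ingress/keepalived.conf.j2 -i ./keepalived.conf.j2
  # redeploy the ingress service so the templates are re-rendered
  ceph orch redeploy <ingress-service-name>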

[ceph-users] Re: MDS crashes to damaged metadata

2023-05-13 Thread Stolte, Felix
e Melchior - - On 08.01.2023 at 02:14, Patrick Donnelly wrote: On Thu, Dec 15, 2022 at 9:32 AM Stolte, Felix wrote: Hi Patrick, we used your script to repair the damaged obje

[ceph-users] bluefs_db_type

2023-02-17 Thread Stolte, Felix
Hey guys, most of my OSDs have an HDD for block and an SSD for db. But according to "ceph osd metadata", bluefs_db_type = hdd and bluefs_db_rotational = 1. lsblk -o name,rota reveals the following (sdb is the db device for 3 HDDs): sdb
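A small sketch for comparing what the OSD reports with what the kernel reports (OSD id 0 and device names are placeholders):

  # what the OSD itself recorded about its DB device
  ceph osd metadata 0 | grep -E 'bluefs_db_(type|rotational)'
  # what the kernel currently reports
  lsblk -o NAME,ROTA,TYPE
  cat /sys/block/sdb/queue/rotational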

[ceph-users] Re: cephfs ceph.dir.rctime decrease

2022-12-20 Thread Stolte, Felix
g the rctimes but this got stuck and needs effort to bring it up to date: https://github.com/ceph/ceph/pull/37938 Cheers, dan On Sun, Dec 18, 2022 at 12:23 PM Stolte, Felix wrote: Hi guys, I want to use ceph.dir.rctime for backup purposes. Unfortunately there are some files in our filesystem which

[ceph-users] MDS: mclientcaps(revoke), pending pAsLsXsFsc issued pAsLsXsFsc

2022-12-20 Thread Stolte, Felix
Hi guys, I stumbled upon these log entries in my active MDS on a Pacific (16.2.10) cluster: 2022-12-20T10:06:52.124+0100 7f11ab408700 0 log_channel(cluster) log [WRN] : client.1207771517 isn't responding to mclientcaps(revoke), ino 0x10017e84452 pending pAsLsXsFsc issued pAsLsXsFsc, sent 62.
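A sketch of how the client behind such a warning is usually identified; the MDS name and client id below are taken from messages in this archive and stand in for real values:

  # list sessions and look for the client id from the warning
  ceph tell mds.mon-e2-1 session ls
  # same via the admin socket on the MDS host
  ceph daemon mds.mon-e2-1 session ls
  # as a last resort the client can be evicted (it will have to remount)
  ceph tell mds.mon-e2-1 client evict id=1207771517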

[ceph-users] cephfs ceph.dir.rctime decrease

2022-12-18 Thread Stolte, Felix
Hi guys, I want to use ceph.dir.rctime for backup purposes. Unfortunately there are some files in our filesystem which have a ctime years in the future. This is reflected correctly by ceph.dir.rctime. I changed the time of these files to now (just did a touch on the file), but rctime stay
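For reference, rctime and the underlying file times can be inspected like this (paths are placeholders); note that rctime is designed to only move forward, which is why fixing a file's ctime does not lower it:

  # recursive ctime of a directory as tracked by CephFS
  getfattr -n ceph.dir.rctime /mnt/cephfs/somedir
  # ctime/mtime of the offending file
  stat /mnt/cephfs/somedir/badfile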

[ceph-users] Re: MDS crashes to damaged metadata

2022-12-15 Thread Stolte, Felix
2022 at 20:08, Patrick Donnelly wrote: On Thu, Dec 1, 2022 at 5:08 PM Stolte, Felix wrote: The script has been running for ~2 hours and according to the line count in the memo file we are at 40% (cephfs is still online). We had to modify the script, putting a try/catch around the for loop in line 78

[ceph-users] Re: ceph-iscsi lock ping pong

2022-12-14 Thread Stolte, Felix
stable for us Thanks Joe >>> Xiubo Li 12/13/2022 4:21 AM >>> On 13/12/2022 18:57, Stolte, Felix wrote: Hi Xiubo, thanks for pointing me in the right direction. All involved ESX hosts seem to use the correct policy. I am going to detach the LUN on each

[ceph-users] MTU Mismatch between ceph Daemons

2022-12-13 Thread Stolte, Felix
Hi guys, we had some issues with our cephfs last, which probably have been caused by an MTU mismatch (partly at least). The scenario was the following: OSD servers: MTU 9000 on public and cluster network; MON+MDS: MTU 1500 on public network; CephFS clients (kernel mount): MTU 9000 on public network; RBD
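A quick sketch of how such a mismatch can be spotted from any node (interface name and peer address are placeholders):

  # MTU configured on the local interface
  ip link show dev eth0 | grep -o 'mtu [0-9]*'
  # test whether a 9000-byte path really works end to end
  # (8972 = 9000 minus 20 bytes IP header and 8 bytes ICMP header)
  ping -c 3 -M do -s 8972 <peer-ip>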

[ceph-users] Re: ceph-iscsi lock ping pong

2022-12-13 Thread Stolte, Felix
chior - - On 13.12.2022 at 13:21, Xiubo Li wrote: On 13/12/2022 18:57, Stolte, Felix wrote: Hi Xiubo, thanks for pointing me in the right direction. All involved ESX hosts seem to use the correct policy. I am going to detach th

[ceph-users] Re: ceph-iscsi lock ping pong

2022-12-13 Thread Stolte, Felix
A" you are using ? The ceph-iscsi couldn't implement the real AA, so if you use the RR I think it will be like this. - Xiubo On 12/12/2022 17:45, Stolte, Felix wrote: Hi guys, we are using ceph-iscsi to provide block storage for Microsoft Exchange and vmware vsphere. Ceph docs st

[ceph-users] ceph-iscsi lock ping pong

2022-12-12 Thread Stolte, Felix
Hi guys, we are using ceph-iscsi to provide block storage for Microsoft Exchange and VMware vSphere. The Ceph docs state that you need to configure the Windows iSCSI Initiator for fail-over-only, but there is no such guidance for VMware. In my tcmu-runner logs on both ceph-iscsi gateways I see the followin
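On the VMware side the path selection policy is usually what makes the lock bounce between gateways; a hedged sketch of checking and pinning it on an ESXi host (the device identifier is a placeholder):

  # show the current path selection policy for the ceph-iscsi LUN
  esxcli storage nmp device list --device naa.6001405xxxxxxxxxxxxxxxxxxxxxxxx
  # switch from round robin to most-recently-used so only one gateway holds the lock
  esxcli storage nmp device set --device naa.6001405xxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_MRU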

[ceph-users] Re: MDS crashes to damaged metadata

2022-12-01 Thread Stolte, Felix
- - On 01.12.2022 at 09:55, Stolte, Felix wrote: I set debug_mds=20 in ceph.conf and applied it on the running daemon via "ceph daemon mds.mon-e2-1 config set debug_mds 20". I have to check with my superiors if I am allowed to provide you the logs though. Regarding the tool

[ceph-users] Re: MDS crashes to damaged metadata

2022-11-30 Thread Stolte, Felix
- - On 30.11.2022 at 22:49, Patrick Donnelly wrote: On Wed, Nov 30, 2022 at 3:10 PM Stolte, Felix wrote: Hey guys, our mds daemons are crashing

[ceph-users] MDS crashes to damaged metadata

2022-11-30 Thread Stolte, Felix
Hey guys, our mds daemons are crashing constantly when someone is trying to delete a file: -26> 2022-11-29T12:32:58.807+0100 7f081b458700 -1 /build/ceph-16.2.10/src/mds/Server.cc: In function 'void Server::_unlink_local(MDRequestRef&, CDentry*, CDentry*)' thread 7f081b458700

[ceph-users] Re: osd set-require-min-compat-client

2022-11-30 Thread Stolte, Felix
Felix, This change won't trigger any rebalancing. It will prevent older clients from connecting, but since this isn't a crush tunable it won't directly affect data placement. Best, Dan On Wed, Nov 30, 2022, 12:33 Stolte, Felix wrote: Hey gu

[ceph-users] osd set-require-min-compat-client

2022-11-30 Thread Stolte, Felix
Hey guys, our ceph cluster is on Pacific, but started on Jewel years ago. While I was going through the logs of the mgr daemon I stumbled upon the following entry: [balancer ERROR root] execute error: r = -1, detail = min_compat_client jewel < luminous, which is required for pg-upmap
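The usual pre-check and the change itself look roughly like this:

  # which feature releases the currently connected clients report
  ceph features
  # raise the requirement once no pre-luminous clients are connected
  ceph osd set-require-min-compat-client luminous
  # pg-upmap (and therefore the balancer's upmap mode) needs luminous or newer clients
  ceph balancer status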

[ceph-users] Adding IPs to an existing iscsi gateway

2022-09-29 Thread Stolte, Felix
Hey guys, we are using ceph-iscsi and want to update our configuration to serve iSCSI to an additional network. I did set up everything via the gwcli command. Originally I created the gateway with "create gw-a 192.168.100.4". Now I want to add an additional IP to the existing gateway, but I don’

[ceph-users] cephfs and samba

2022-08-18 Thread Stolte, Felix
Hello there, is anybody sharing their ceph filesystem via Samba to Windows clients and willing to share their experience as well as settings in smb.conf and ceph.conf which have performance impacts? We have been running this setup for years now, but I think there is still room for improvement and learn

[ceph-users] snap-schedule reappearing

2022-06-13 Thread Stolte, Felix
Hi folks, I removed snapshot scheduling on a cephfs path (Pacific), but the schedules reappear the next day. I didn’t remove the retention for this path though. Does the retention on a path trigger the recreation of the snap schedule if it was removed? Is this intended? regards Felix
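For reference, schedule and retention are managed separately; a sketch with '/somepath' as a placeholder and an example retention spec:

  # show what is currently configured for the path
  ceph fs snap-schedule status /somepath
  ceph fs snap-schedule list /somepath
  # remove the schedule and, separately, the retention spec
  ceph fs snap-schedule remove /somepath
  ceph fs snap-schedule retention remove /somepath 7d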

[ceph-users] Convert existing folder on cephfs into subvolume

2022-06-07 Thread Stolte, Felix
Hey guys, we have been using the ceph filesystem since Luminous and export subdirectories via Samba as well as NFS. We upgraded to Pacific and want to use the subvolume feature. Is it possible to convert a subdirectory into a subvolume without using data? Best regards Felix --

[ceph-users] DM-Cache for spinning OSDs

2022-05-16 Thread Stolte, Felix
Hey guys, I have three servers with 12x 12 TB SATA HDDs and 1x 3.4 TB NVMe. I am thinking of putting DB/WAL on the NVMe as well as a 5 GB dm-cache for each spinning disk. Is anyone running something like this in a production environment? best regards Felix -
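A hedged sketch of what attaching a small lvmcache volume to one HDD-backed LV could look like; volume group and device names are made up, and older LVM releases want a cache pool (--cachepool) instead of --cachevol:

  # one VG per HDD, spanning the HDD and a slice of the NVMe
  vgcreate ceph-hdd-sdd /dev/sdd /dev/nvme0n1p1
  lvcreate -n osd-data -l 100%PVS ceph-hdd-sdd /dev/sdd
  lvcreate -n osd-cache -L 5G ceph-hdd-sdd /dev/nvme0n1p1
  # attach the cache volume (writethrough by default)
  lvconvert --type cache --cachevol osd-cache ceph-hdd-sdd/osd-data

The OSD would then be built on the cached LV, e.g. with ceph-volume lvm create --data ceph-hdd-sdd/osd-data.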

[ceph-users] How much IOPS can be expected on NVME OSDs

2022-05-12 Thread Stolte, Felix
Hi guys, we recently got new hardware with NVMe disks (Samsung MZPLL3T2HAJQ) and I am trying to figure out how to get the most out of them. The vendor states 180k IOPS for 4k random writes and my fio testing showed 160k (fine by me). I built a BlueStore OSD on top of that (WAL, DB, data all on the same disk)
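For comparison, the raw-device test was presumably something close to the following (destructive to any data on the device; the device name is a placeholder):

  fio --name=4k-randwrite --filename=/dev/nvme0n1 \
      --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
      --numjobs=4 --iodepth=32 --runtime=60 --time_based --group_reporting

A single OSD rarely saturates such a device; splitting the NVMe into several OSDs (ceph-volume lvm batch supports --osds-per-device) is a common way to get closer to the raw numbers.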

[ceph-users] Disable autostart of old services

2021-08-25 Thread Stolte, Felix
Hey guys, we have an OSD server with issues on its network interfaces. I marked out all OSDs on that server and disabled the ceph-osd@# services as well as ceph.target and ceph-osd.target. But after a reboot the OSD services start again, causing trouble. Which systemd unit do I need to d
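Besides ceph-osd@N.service, the ceph-volume activation units also start OSDs at boot; a sketch with placeholder ids:

  systemctl disable --now ceph-osd@15.service
  # ceph-volume creates one activation unit per OSD
  systemctl list-units --all 'ceph-volume@*'
  systemctl disable 'ceph-volume@lvm-15-<osd-fsid>.service'
  # masking prevents anything else from pulling the unit back in
  systemctl mask ceph-osd@15.service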

[ceph-users] Ceph on windows?

2020-08-20 Thread Stolte, Felix
Hey guys, it seems like there was a presentation called “ceph on windows” at the Cephalocon 2020, but I cannot find any information on that topic. Is there a video from the presentation out there or any other information? I only found https://ceph2020.sched.com/event/ZDUK/ceph-on-windows-alessa

[ceph-users] Convert existing rbd into a cinder volume

2020-08-19 Thread Stolte, Felix
Hello fellow cephers, I know this is not the OpenStack mailing list, but since many of us are using Ceph as a backend for OpenStack, maybe someone can help me out. I have a pool of rbds which are exported via iSCSI to some bare-metal Windows servers, and I take snapshots of those rbds regularly. Now I

[ceph-users] Re: Cephfs snapshots in Nautilus

2020-05-06 Thread Stolte, Felix
a thing can happen between upgrades. -----Original Message- From: Stolte, Felix [mailto:f.sto...@fz-juelich.de] Sent: 06 May 2020 09:09 To: ceph-users@ceph.io Subject: [ceph-users] Cephfs snapshots in Nautilus Hi Folks, I really like to use snapshot

[ceph-users] Cephfs snapshots Nautilus

2020-05-06 Thread Stolte, Felix
Hi Folks, I would really like to use snapshots on cephfs, but even in the Octopus release snapshots are still marked as an experimental feature. Is anyone using snapshots in production environments? Which issues did you encounter? Do I risk a corrupted filesystem or just non-working snapshots? We run
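For anyone unfamiliar, CephFS snapshots are plain directory operations once the feature is switched on; the filesystem name and paths below are placeholders:

  # allow snapshots on the filesystem (disabled by default on older releases)
  ceph fs set cephfs allow_new_snaps true
  # create and remove a snapshot of a directory
  mkdir /mnt/cephfs/mydir/.snap/before-change
  rmdir /mnt/cephfs/mydir/.snap/before-change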

[ceph-users] Cephfs snapshots in Nautilus

2020-05-06 Thread Stolte, Felix
smime.p7m Description: S/MIME encrypted message

[ceph-users] Re: MDS: cache pressure warnings with Ganesha exports

2020-04-16 Thread Stolte, Felix
Dr.-Ing. Harald Bolt - - On 15.04.20, 14:57, "Jeff Layton" wrote: On Wed, 2020-04-15 at 12:06 +0000, Stolte, Felix wrote: Hi Jeff, output of ganesha_stats inode:

[ceph-users] Re: MDS: cache pressure warnings with Ganesha exports

2020-04-16 Thread Stolte, Felix
smime.p7m Description: S/MIME encrypted message

[ceph-users] Re: MDS: cache pressure warnings with Ganesha exports

2020-04-15 Thread Stolte, Felix
- - On 14.04.20, 21:26, "Jeff Layton" wrote: On Tue, 2020-04-14 at 06:27 +0000, Stolte, Felix wrote: Hi Jeff, thank you for the hint. I set Entries_HWMark = 100 in the MDCACHE section of gan

[ceph-users] Re: MDS: cache pressure warnings with Ganesha exports

2020-04-13 Thread Stolte, Felix
- - On 09.04.20, 14:10, "Jeff Layton" wrote: On Tue, 2020-04-07 at 07:34 +0000, Stolte, Felix wrote: Hey folks, I keep getting ceph health warnings about clients failing to respond to c

[ceph-users] Using M2 SSDs as osds

2020-04-09 Thread Stolte, Felix
Hey guys, I am evaluating M.2 SSDs as OSDs for an all-flash pool. Is anyone using that in production and can elaborate on their experience? I am a little bit concerned about the lifetime of the M.2 disks. Best regards Felix
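Wear on NVMe M.2 devices can be tracked from SMART data, for example (device names are placeholders):

  # NVMe health log, including the percentage_used wear indicator
  nvme smart-log /dev/nvme0
  # or via smartmontools
  smartctl -a /dev/nvme0n1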

[ceph-users] MDS: cache pressure warnings with Ganesha exports

2020-04-07 Thread Stolte, Felix
Hey folks, I keep getting ceph health warnings about clients failing to respond to cache pressure. They always refer to sessions from ganesha exports. I've read all threads regarding this issue, but none of my changes resolved it. What I’ve done so far: Ganesha.conf: MDCACHE { Dir_Chunk =
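Independent of the Ganesha tuning, the caps each client session actually holds can be watched from the MDS side; a sketch with a placeholder daemon name:

  # caps held per client session (look at num_caps per entry)
  ceph tell mds.<name> session ls
  # cache-related MDS settings worth checking
  ceph config get mds mds_cache_memory_limit
  ceph config get mds mds_recall_max_caps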

[ceph-users] Ceph pool quotas

2020-03-18 Thread Stolte, Felix
Hey guys, a short question about pool quotas: do they apply to the stats attribute "stored" or to "bytes_used" (i.e. is replication counted or not)? Regards Felix
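The quota commands themselves, for reference (the pool name and limits are placeholders):

  # set byte and/or object quotas on a pool
  ceph osd pool set-quota rbd max_bytes 1099511627776
  ceph osd pool set-quota rbd max_objects 1000000
  # show configured quotas and current usage
  ceph osd pool get-quota rbd
  ceph df detail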

[ceph-users] Re: Extended security attributes on cephfs (nautilus) not working with kernel 5.3

2020-02-16 Thread Stolte, Felix
t 12:20 PM Stolte, Felix wrote: Hi guys, I am exporting cephfs with Samba using the vfs_acl_xattr module, which stores NTFS ACLs in the security extended attributes. This works fine using a cephfs kernel mount with kernel version 4.15. Using ke

[ceph-users] Extended security attributes on cephfs (nautilus) not working with kernel 5.3

2020-02-14 Thread Stolte, Felix
Hi guys, I am exporting cephfs with Samba using the vfs_acl_xattr module, which stores NTFS ACLs in the security extended attributes. This works fine using a cephfs kernel mount with kernel version 4.15. Using kernel 5.3 I cannot access the security.ntacl attributes anymore. Attributes in user or ceph
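A quick way to compare kernels is to poke the security namespace directly on the CephFS kernel mount (the path is a placeholder; security.NTACL is the attribute Samba's acl_xattr module uses by default):

  # write and read back an attribute in the security namespace (as root)
  setfattr -n security.NTACL -v test /mnt/cephfs/somefile
  getfattr -n security.NTACL /mnt/cephfs/somefile
  # the user namespace for comparison
  setfattr -n user.test -v test /mnt/cephfs/somefile
  getfattr -n user.test /mnt/cephfs/somefile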

[ceph-users] Renaming LVM Groups of OSDs

2020-01-28 Thread Stolte, Felix
Hi all, I would like to rename the logical volumes / volume groups used by my OSDs. Do I need to change anything other than the block and block.db links under /var/lib/ceph/osd/?
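A rough sketch of the rename and the pieces that reference the old names; note that ceph-volume also records device paths in LVM tags, so those should be checked too (names are made up, OSD stopped first):

  systemctl stop ceph-osd@3.service
  vgrename ceph-oldvg ceph-newvg
  lvrename ceph-newvg old-lv new-lv
  # re-point the symlink the OSD opens at startup
  ln -sf /dev/ceph-newvg/new-lv /var/lib/ceph/osd/ceph-3/block
  # inspect the tags ceph-volume relies on (ceph.block_device etc.)
  lvs -o lv_tags ceph-newvg/new-lv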

[ceph-users] Ceph-volume lvm batch: strategy changed after filtering

2020-01-24 Thread Stolte, Felix
Hey guys, I’m struggling with the ceph-volume command in Nautilus 14.2.6. I have 12 disks on each server, 3 of them SSDs (sda, sdb, sdc) and 9 spinning disks (sdd .. sdl). The initial deploy with ceph-volume batch works fine; one SSD is used for WAL and DB for 3 spinning disks. But running the ‘cep
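When the batch strategy misbehaves, a per-OSD fallback is to pre-create the DB LV and call ceph-volume explicitly; a hedged sketch with made-up VG/LV names and sizes:

  # one DB LV per HDD on the SSD's volume group
  lvcreate -n db-sdd -L 60G ceph-ssd-sda
  # create the OSD with an explicit data device and DB LV
  ceph-volume lvm create --bluestore --data /dev/sdd --block.db ceph-ssd-sda/db-sdd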