[ceph-users] Re: Issue Replacing OSD with cephadm: Partition Path Not Accepted

2024-09-02 Thread Robert Sander
Hi, On 9/2/24 20:24, Herbert Faleiros wrote: > /usr/bin/docker: stderr ceph-volume lvm batch: error: /dev/sdb1 is a partition, please pass LVs or raw block devices. A Ceph OSD nowadays needs a logical volume because it stores crucial metadata in the LV tags; this helps to activate the OSD. IMHO …
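
A minimal sketch of wrapping such a partition in LVM so that ceph-volume accepts it (/dev/sdb1 is taken from the error above; the VG/LV names are illustrative assumptions):

    pvcreate /dev/sdb1                          # make the partition an LVM physical volume
    vgcreate ceph-sdb1 /dev/sdb1                # illustrative VG name
    lvcreate -l 100%FREE -n osd-data ceph-sdb1  # one LV spanning the partition
    ceph-volume lvm prepare --data ceph-sdb1/osd-data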

[ceph-users] Re: squid 19.2.0 QE validation status

2024-09-02 Thread Brad Hubbard
On Sat, Aug 31, 2024 at 12:43 AM Yuri Weinstein wrote: > > Details of this release are summarized here: > > https://tracker.ceph.com/issues/67779#note-1 > > Release Notes - TBD > Gibba upgrade - TBD > LRC upgrade - TBD > > It was decided and agreed upon that there would be limited testing for > this …

[ceph-users] Re: MDS cache always increasing

2024-09-02 Thread Alexander Patrakov
MDS cannot release an inode if a client has cached it (and thus can have newer data than the OSDs have). The MDS needs to know at least which client to ask if someone else requests the same file. The MDS does ask clients to release caps, but sometimes this doesn't work, and there is no good troubleshooting …
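
A few hedged ways to inspect this from the MDS side (the daemon name mds.cephfs-a and the client id are placeholders):

    ceph tell mds.cephfs-a session ls             # per-client sessions, including num_caps
    ceph tell mds.cephfs-a cache status           # current MDS cache memory usage
    ceph tell mds.cephfs-a client evict id=4305   # last resort: evict a client that won't release caps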

[ceph-users] Re: Issue Replacing OSD with cephadm: Partition Path Not Accepted

2024-09-02 Thread Eugen Block
I would try it with a spec file that contains a path to the partition (limit the placement to that host only). Or have you tried it already? I don't use partitions for Ceph, but there have been threads from other users who do, and with spec files it seemed to work. You can generate …
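
A hedged sketch of such a spec (the hostname and service_id are illustrative):

    cat > osd-sdb1.yaml <<'EOF'
    service_type: osd
    service_id: osd-sdb1
    placement:
      hosts:
        - osd-host-01
    spec:
      data_devices:
        paths:
          - /dev/sdb1
    EOF
    ceph orch apply -i osd-sdb1.yaml --dry-run   # preview the result before applying for real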

[ceph-users] Re: Discovery (port 8765) service not starting

2024-09-02 Thread Eugen Block
Without having looked too closely, do you run Ceph with IPv6? There's a tracker issue: https://tracker.ceph.com/issues/66426 It will be backported to Reef. Quoting Matthew Vernon: Hi, I'm running reef, with locally-built containers based on upstream .debs. I've now enabled prometheus metrics …
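
If IPv6 is in play, a hedged first check is where the mgr module binds (the :: value below is an assumption, not the confirmed fix from that tracker):

    ceph config get mgr mgr/prometheus/server_addr
    ceph config set mgr mgr/prometheus/server_addr ::   # bind on all IPv6 addresses
    ceph mgr module disable prometheus
    ceph mgr module enable prometheus                   # restart the module to pick up the change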

[ceph-users] Issue Replacing OSD with cephadm: Partition Path Not Accepted

2024-09-02 Thread Herbert Faleiros
I am on a journey, so far successful, to update our clusters to supported versions. I started with Luminous on Ubuntu 16.04, and now we are on Reef with Ubuntu 20.04. We still have more updates to do, but at the moment I encountered an issue with an OSD, and it was necessary to replace a disk. Si…
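
For context, the usual cephadm disk-replacement flow looks roughly like this (OSD id 12 is illustrative):

    ceph orch osd rm 12 --replace --zap   # drain the OSD, mark it destroyed, wipe the device
    ceph orch osd rm status               # watch the draining progress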

[ceph-users] Re: lifecycle policy on non-replicated buckets

2024-09-02 Thread Soumya Koduri
On 9/2/24 8:41 PM, Christopher Durham wrote: > Asking again, does anyone know how to get this working? I have multisite sync set up between two sites. Due to bandwidth concerns, I have disabled replication on a given bucket that houses temporary data using a multisite sync policy. This works fine …
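
A couple of hedged commands for checking how RGW is (or isn't) processing lifecycle on that bucket (the bucket name is illustrative):

    radosgw-admin lc list                        # lifecycle processing status per bucket
    radosgw-admin lc process --bucket temp-data  # force a lifecycle run for one bucket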

[ceph-users] Re: ceph-ansible installation error

2024-09-02 Thread Tim Holloway
Sorry if that sounds trollish. It wasn't intended to be. Look at it this way: there are two approaches to running an IT installation. One is the free-wheeling independent approach. The other is the stuffy corporate approach. Free-wheeling shops run things like Ubuntu. Or even BSD (but that's …

[ceph-users] Re: ceph-ansible installation error

2024-09-02 Thread Anthony D'Atri
I should know not to feed the trolls, but here goes. I was answering a question asked to the list, not arguing for or against containers. > 2. Logs in containerized ceph almost all go straight to the system journal. > Specialized subsystems such as Prometheus can be configured in other ways, …
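
Hedged examples of reading those journal logs (the fsid and daemon name are placeholders):

    journalctl -u ceph-<fsid>@osd.3.service -f   # follow one containerized daemon's log
    cephadm logs --name osd.3                    # cephadm wrapper around the same journal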

[ceph-users] The journey to CephFS metadata pool’s recovery

2024-09-02 Thread m
> FYI: Also posted in the L1Techs forum: > https://forum.level1techs.com/t/recover-bluestore-osd-in-ceph-cluster/215715. ## The epic intro Through self-inflicted pain, I'm writing here to ask for volunteers in the journey of recovering the lost partitions housing the CephFS metadata pool. ## The setup …
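
For readers following along, the documented CephFS disaster-recovery tooling this journey will likely revolve around looks roughly like this (the fs and pool names are assumptions, and these commands can make things worse if run blindly):

    cephfs-journal-tool --rank=cephfs:0 journal export backup.bin   # back up the MDS journal first
    cephfs-journal-tool --rank=cephfs:0 event recover_dentries summary
    cephfs-data-scan scan_extents cephfs_data    # rebuild metadata from the data pool
    cephfs-data-scan scan_inodes cephfs_data
    cephfs-data-scan scan_links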

[ceph-users] Re: lifecycle policy on non-replicated buckets

2024-09-02 Thread Christopher Durham
Asking again, does anyone know how to get this working? I have multisite sync set up between two sites. Due to bandwidth concerns, I have disabled replication on a given bucket that houses temporary data using a multisite sync policy. This works fine. Most of the writing to this bucket is done on …
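
For reference, a hedged sketch of attaching a one-day expiration rule with the AWS CLI against an RGW endpoint (the bucket name, endpoint, and expiry are illustrative):

    aws --endpoint-url http://rgw.example.com:8080 \
        s3api put-bucket-lifecycle-configuration \
        --bucket temp-data \
        --lifecycle-configuration '{"Rules":[{"ID":"expire-temp","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":1}}]}'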

[ceph-users] Discovery (port 8765) service not starting

2024-09-02 Thread Matthew Vernon
Hi, I'm running reef, with locally-built containers based on upstream .debs. I've now enabled prometheus metrics thus: ceph mgr module enable prometheus And that seems to have worked (the active mgr is listening on port 9283); but per the docs[0] there should also be a service discovery endpoint …
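
A hedged way to check whether that endpoint is up (mgr-host is a placeholder; the URL path follows the cephadm monitoring docs):

    ss -tlnp | grep 8765     # is anything listening on the discovery port?
    curl 'http://mgr-host:8765/sd/prometheus/sd-config?service=mgr-prometheus'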

[ceph-users] Re: ceph-ansible installation error

2024-09-02 Thread Michael Worsham
I used the steps under this article for setting up a Ceph cluster in my homelab environment. It uses Ansible in a couple of ways, but honestly you could probably take a number of the manual steps and make your own playbook out of them. https://computingforgeeks.com/install-ceph-storage-cluster-on- …

[ceph-users] Re: MDS cache always increasing

2024-09-02 Thread Sake Ceph
The folders contain a couple of million files, but are really static. We have another folder with a lot of updates, and the MDS server for that folder does indeed show a continuous increase in memory usage. But I would focus on the app2 and app4 folders, because those have a lot fewer changes in them. But …
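
A hedged knob to watch here is the MDS cache limit itself (the 8 GiB value is illustrative):

    ceph config get mds mds_cache_memory_limit
    ceph config set mds mds_cache_memory_limit 8589934592   # 8 GiB; the MDS treats this as a target, not a hard cap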

[ceph-users] Re: MDS cache always increasing

2024-09-02 Thread Eugen Block
Can you tell if the number of objects increases in your CephFS between those bursts? I noticed something similar in a 16.2.15 cluster as well. It's not that heavily used, but it contains home directories and development working directories etc. And when one user checked out a git project, t…
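
A hedged way to watch for that (the pool name is an assumption):

    ceph df detail                 # per-pool object counts; run before and after a burst
    rados -p cephfs_metadata df    # the same, restricted to the metadata pool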