[ceph-users] Re: Accidentally created systemd units for OSDs

2024-08-16 Thread Tim Holloway
It depends on your available resources, but I really do recommend destroying and re-creating that OSD. If you have to spin up a VM and set up a temporary OSD just to keep the overall system happy, even that is a small price to pay. As I said, you can't unlink/disable the container systemd, because
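A rough sketch of that destroy-and-recreate path under cephadm, assuming a recent release where the orchestrator manages the OSDs (the OSD id, hostname and device below are placeholders, not taken from the thread):

    # remove the OSD and zap its device so it can be redeployed
    ceph orch osd rm 3 --zap
    # watch the drain/removal progress
    ceph orch osd rm status
    # once the device shows up as available again, redeploy on it
    ceph orch daemon add osd myhost:/dev/sdX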

[ceph-users] Re: Accidentally created systemd units for OSDs

2024-08-16 Thread Dan O'Brien
OK... I've been in the Circle of Hell where systemd lives and I *THINK* I have convinced myself I'm OK. I *REALLY* don't want to trash and rebuild the OSDs. In the manpage for systemd.unit, I found "UNIT GARBAGE COLLECTION: The system and service manager loads a unit's configuration automatically
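A short sketch of how one might check that the stray units really are gone after a reload, assuming they follow the stock ceph-osd@ template naming (instance numbers are placeholders):

    # reload unit definitions and drop state for units that no longer exist
    systemctl daemon-reload
    systemctl reset-failed 'ceph-osd@*'
    # confirm nothing is still enabled or loaded for the plain ceph-osd@ template
    systemctl list-unit-files 'ceph-osd@*'
    systemctl list-units 'ceph-osd@*' --all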

[ceph-users] Re: Accidentally created systemd units for OSDs

2024-08-16 Thread Tim Holloway
If it makes you feel better, that sounds exactly like what happened to me, and I have no idea how. Other than that I'd started with Octopus, which was a transitional release, there are conflicting instructions AND a reference in the Octopus docs to procedures using a tool that was no longer distributed w

[ceph-users] Re: Accidentally created systemd units for OSDs

2024-08-16 Thread Adam Tygart
I would expect it to be: systemctl disable ceph-osd@${instance}. If you want to disable them all, I believe you can even use wildcards: systemctl disable ceph-osd@\* -- Adam On 8/16/24 2:24 PM, Dan O'Brien wrote: I am 100% using cephadm and
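Spelled out as a sequence, and hedged since the exact instances depend on what ceph-volume enabled (wildcard support for disable may also depend on the systemd version):

    # disable one stray unit (the instance number is a placeholder)
    systemctl disable ceph-osd@5
    # or, per Adam's suggestion, all of them at once
    systemctl disable 'ceph-osd@*'
    # ceph-volume may also have enabled matching ceph-volume@lvm-<id>-<fsid> units; check for those too
    systemctl list-unit-files 'ceph-volume@*'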

[ceph-users] Re: Accidentally created systemd units for OSDs

2024-08-16 Thread Dan O'Brien
I am 100% using cephadm and containers and plan to continue to do so. Our original setup was all spinners, but after going to Ceph Days NYC, I pushed for SSDs for the WAL/RocksDB, and I'm in the process of migrating the WAL/RocksDB now. In general, it's been fairly straightforward -- IF YOU FO
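For reference, a sketch of the DB migration step along the lines of the linked procedure, assuming a containerised deployment where ceph-volume is run inside the OSD's cephadm shell; the OSD id, fsid and VG/LV names are placeholders, and the exact steps in the Clyso post may differ:

    # open a shell in the (stopped) OSD's container context
    cephadm shell --name osd.5
    # attach a new DB device to the existing OSD
    ceph-volume lvm new-db --osd-id 5 --osd-fsid <osd-fsid> --target cephdb-vg/db-lv
    # move the existing RocksDB data from the main device onto the new DB LV
    ceph-volume lvm migrate --osd-id 5 --osd-fsid <osd-fsid> --from data --target cephdb-vg/db-lv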

[ceph-users] Re: weird outage of ceph

2024-08-16 Thread Alwin Antreich
Hi Simon, On Fri, Aug 16, 2024, 11:14 Simon Oosthoek wrote: > Hi > > We had a really weird outage today of ceph and I wonder how it came about. > The problem seems to have started around midnight; I still need to look if > it was to the extent I found it this morning or if it grew more > grad

[ceph-users] Re: squid release codename

2024-08-16 Thread Nico Schottelius
Bike shedding at its best, so I've also got to get my paintbrush out for a good place on the shed... ... that said, naming a *release* of a piece of software after another well-known open source project is pure craziness. What's coming next? Ceph Redis? Ceph Apache? Or Apache Ceph? Seriously, do you

[ceph-users] Re: Bug with Cephadm module osd service preventing orchestrator start

2024-08-16 Thread Benjamin Huth
Just wanted to follow up on this: I am unfortunately still stuck with this and can't find where the JSON for this value is stored. I'm wondering if I should attempt to build a manager container with the code for this reverted to before the commit that introduced the original_weight argument. Pleas
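In case it helps anyone searching later: a guess at where to look, assuming cephadm keeps its persisted state in the mon config-key store (the key names below are from memory, not verified against the affected release):

    # list the cephadm-related keys
    ceph config-key ls | grep mgr/cephadm
    # the OSD removal queue (where original_weight would be serialized) may live here
    ceph config-key get mgr/cephadm/osd_remove_queue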

[ceph-users] Re: squid release codename

2024-08-16 Thread Tim Holloway
I find the Spongebob ideas amusing, and I agree that in an isolated world, "Squid" would be the logical next release name. BUT it's going to wreak havoc on search engines that can't tell when someone's looking up Ceph versus the long-established Squid Proxy. If we're going to look to the cartoon w

[ceph-users] Re: Accidentally created systemd units for OSDs

2024-08-16 Thread Tim Holloway
Been there/did that. Cried a lot. Fixed now. Personally, I recommend the containerised/cephadm-managed approach. In a lot of ways it's simpler, and it supports more than one fsid on a single host. The downside is that the systemd names are really gnarly (the full fsid is part of the unit name) and th
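A small sketch of what those gnarlier names look like in practice, assuming a stock cephadm deployment (the fsid is left as a placeholder rather than a real value):

    # list every cephadm-managed daemon on this host; the unit template embeds the cluster fsid
    systemctl list-units 'ceph-*@*' --all
    # a single OSD then shows up as: ceph-<fsid>@osd.3.service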

[ceph-users] Accidentally created systemd units for OSDs

2024-08-16 Thread Dan O'Brien
I was [poorly] following the instructions for migrating the wal/db to an SSD https://docs.clyso.com/blog/ceph-volume-create-wal-db-on-separate-device-for-existing-osd/ and I didn't add '--no-systemd' when I ran the 'ceph-volume lvm activate' command (3 f***ing times). The result is that I've "tw
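For anyone following the same post: a sketch of the activate step with the flag that was missed, so ceph-volume skips enabling the legacy systemd units (OSD id and fsid are placeholders):

    # run inside the OSD's container context on a cephadm deployment
    ceph-volume lvm activate 5 <osd-fsid> --no-systemd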

[ceph-users] memory leak in mds?

2024-08-16 Thread Dario Graña
Hi all, We’re experiencing an issue with CephFS. I think we are facing this issue. The main symptom is that the MDS starts using a lot of memory within a few minutes and finally gets killed by the OS (Out Of Memory). Sometimes it happens once a week and someti
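Not a fix, but a sketch of the knobs usually checked first in this situation, assuming a reasonably current release (the MDS name and the limit value are only example placeholders):

    # how much cache the MDS is allowed to hold (default is 4 GiB)
    ceph config get mds mds_cache_memory_limit
    # lower it if the node is memory-constrained, e.g. to 2 GiB
    ceph config set mds mds_cache_memory_limit 2147483648
    # see what the MDS is actually using
    ceph tell mds.<name> cache status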

[ceph-users] Re: Ceph Logging Configuration and "Large omap objects found"

2024-08-16 Thread Janek Bevendorff
I do, but it's a lot of OSDs (1393 to be precise). On 14/08/2024 13:58, Eugen Block wrote: Hm, then I don't see another way than to scan each OSD host for the omap message. Do you have centralized logging or some configuration management like Salt where you can target all hosts with a comman
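A throwaway sketch of the brute-force version, assuming SSH access to every OSD host and file logging enabled in the usual location (the hosts file and log path are placeholders and will differ per deployment):

    # grep every OSD host for the cluster-log warning about large omap objects
    for h in $(cat osd-hosts.txt); do
        ssh "$h" "grep -l 'Large omap object found' /var/log/ceph/*/ceph-osd.*.log" \
            && echo "match on $h"
    done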

[ceph-users] Re: squid release codename

2024-08-16 Thread Janne Johansson
On Thu, 15 Aug 2024 at 14:35, Alfredo Rezinovsky wrote: > > I think it is a very bad idea to name a release with the name of the most > popular HTTP cache. > It will make googling difficult. Just enter "ceph" squid <other terms you might want> and Google will make sure the word "ceph" is present, thi