[ceph-users] Cephadm flooding /var/log/ceph/cephadm.log

2025-04-07 Thread Alex
Hi everyone. My company has a paid Ceph support contract. The tech is saying: "...cephadm package of ceph 5 is having bug. So It generate debug logs even its set for 'info' logs..." I have two clusters running, one Ceph 5 and the other Ceph 6 (Quincy). Both of them are sending "DEBUG -" messages to c
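A hedged sketch of where to look first, assuming the mgr/cephadm/log_to_cluster_level option present in recent releases (verify the exact name with ceph config ls); note that the host-side /var/log/ceph/cephadm.log is written by the cephadm binary itself and may stay verbose regardless:

    # Check and lower the cephadm cluster-log level (assumed option name)
    ceph config get mgr mgr/cephadm/log_to_cluster_level
    ceph config set mgr mgr/cephadm/log_to_cluster_level info
    # Stopgap if the host-side file keeps growing: tighten its logrotate policy
    cat /etc/logrotate.d/cephadm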

[ceph-users] Re: Ceph squid fresh install

2025-04-07 Thread Anthony D'Atri
What kernel releases are running? > On Apr 7, 2025, at 4:34 PM, quag...@bol.com.br wrote: > Hi Anthony, thanks for your reply. I don't have any clients connected yet. > It's a fresh install. I only made a CephFS mount point on each machine where I > installed Ceph. How can I identify these cli
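A minimal sketch for identifying connected clients and their feature releases (ceph features groups sessions by release; uname -r on each mount host shows the kernel whose CephFS client is being used):

    # Feature release (luminous, reef, ...) of every connected session
    ceph features
    # Kernel release on each machine with a CephFS mount
    uname -r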

[ceph-users] Re: Ceph squid fresh install

2025-04-07 Thread Laura Flores
Hi Rafael, I would not force the min_compat_client to be reef when there are still luminous clients connected, as it is important for all clients to be >=Reef to understand/encode the pg_upmap_primary feature in the osdmap. As for checking which processes are still luminous, I am copying @Radosla
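A hedged sketch of the safe order of operations, assuming the standard commands (check first, and raise the bar only once no luminous sessions remain):

    # Confirm no client still reports "luminous" before raising the floor
    ceph features
    # Only then require reef-capable clients (needed for pg_upmap_primary)
    ceph osd set-require-min-compat-client reef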

[ceph-users] Re: NIH Datasets

2025-04-07 Thread Linas Vepstas
Hi Tim, I agree with all you say. My two cents as an old-timer: these ideas have been around for decades, and there have been many plans, attempts, screeds, projects (I can rattle off a random list, but so can search engines and Wikipedia). All with good intentions, heart in the right place. All got mire

[ceph-users] Re: ceph.log using seconds since epoch instead of date/time stamp

2025-04-07 Thread Eugen Block
Just to confirm, I redeployed my single-node test cluster with different versions. The logs are fine in 17.2.6 (CentOS 8), 18.2.0 (CentOS 8) and later. During the upgrade from 17.2.6 to 17.2.7 I see the log format switch after the MON has been upgraded: 2025-04-07T09:34:36.649854+ mgr
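For illustration, the two timestamp formats in question, side by side (sample lines, not from a real cluster; the epoch form is the one the thread subject complains about):

    tail -2 /var/log/ceph/ceph.log
    # epoch form:     1743845676.649854 mgr.ceph01 ... : cluster [DBG] pgmap ...
    # date/time form: 2025-04-07T09:34:36.649854+0000 mgr.ceph01 ... : cluster [DBG] pgmap ...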

[ceph-users] Update of MDS (non-cephadm cluster)

2025-04-07 Thread Massimo Sgaravatto
I have done the update of the Ceph software several times, but this is the first time I have to do an update (from Quincy to Reef) that also involves CephFS. I am not using cephadm. I have 3 MDSs (2 active and one in standby). I'm looking at point 5 of: https://ceph.io/en/news/blog/2023/v18-2-0-re
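A sketch of the documented manual MDS upgrade procedure (the fs name "cephfs" and the unit names are placeholders; verify each step against the Reef release notes):

    ceph fs set cephfs max_mds 1        # reduce to a single active MDS
    ceph status                         # wait until only rank 0 remains active
    systemctl stop ceph-mds@<standby>   # stop the standby daemons
    # upgrade the MDS packages, then restart the remaining active MDS
    systemctl restart ceph-mds@<active>
    ceph fs set cephfs max_mds 2        # restore the original max_mds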

[ceph-users] Re: NIH Datasets

2025-04-07 Thread Tim Holloway
Yeah, Ceph in its current form doesn't seem like a good fit. I think that what we need to support the world's knowledge in the face of enstupidification is some sort of distributed holographic datastore. So, like Ceph's PG replication, a torrent-like ability to pull from multiple unreliable so

[ceph-users] Re: Cephadm upgrade from 16.2.15 -> 17.2.0

2025-04-07 Thread Eugen Block
I haven't tried it this way yet, and I had hoped that Adam would chime in, but my approach would be to remove this key (it's not present when no upgrade is in progress): ceph config-key rm mgr/cephadm/upgrade_state Then roll back the two newer MGRs to Pacific as described before. If they co
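A hedged sketch of checking for leftover upgrade state before removing it (the key only exists while an interrupted upgrade is recorded):

    ceph orch upgrade status                        # is an upgrade still recorded?
    ceph config-key get mgr/cephadm/upgrade_state   # inspect before deleting
    ceph config-key rm mgr/cephadm/upgrade_state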

[ceph-users] Re: Cephadm upgrade from 16.2.15 -> 17.2.0

2025-04-07 Thread Jeremy Hansen
Thank you. The only thing I’m unclear on is the rollback to Pacific. Are you referring to https://docs.ceph.com/en/quincy/cephadm/troubleshooting/#manually-deploying-a-manager-daemon ? Thank you. I appreciate all the help. Should I wait for Adam to comment? At the moment, the cluster is fun

[ceph-users] Re: Cephadm upgrade from 16.2.15 -> 17.2.0

2025-04-07 Thread Eugen Block
Still no, just edit the unit.run file for the MGRs to use a different image. See Frédéric's instructions (now that I'm re-reading them, there's a little mistake with dots and hyphens): # Back up the unit.run file $ cp /var/lib/ceph/$(ceph fsid)/mgr.ceph01.eydqvm/unit.run{,.bak} # Change contain
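A sketch of how the edit might continue (the daemon name mgr.ceph01.eydqvm comes from the thread; the image tags are assumptions for this upgrade path):

    # Point the unit.run file at the old image and restart the daemon
    sed -i 's|quay.io/ceph/ceph:v17.2.0|quay.io/ceph/ceph:v16.2.15|' \
        /var/lib/ceph/$(ceph fsid)/mgr.ceph01.eydqvm/unit.run
    systemctl restart ceph-$(ceph fsid)@mgr.ceph01.eydqvm.service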

[ceph-users] Re: NIH Datasets

2025-04-07 Thread Robert Sander
On 4/5/25 at 05:39, Linas Vepstas wrote: OK. So here's the question: is it possible to (has anyone tried) set up an internet-wide Ceph cluster? This will not work, as the latency is too high. Regards -- Robert Sander Linux Consultant Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin

[ceph-users] write latency increase after upgrade from pacific to quincy

2025-04-07 Thread Nima AbolhassanBeigi
Hi dear Ceph community, we have encountered an issue with our Ceph cluster after upgrading from v16.2.13 to v17.2.7: the write latency on OSDs has increased significantly and doesn't come back down. The average write latency has almost doubled, and this has happened sin
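A minimal sketch for quantifying such a regression with standard tooling (osd.0 is a placeholder; op_w_latency is the write-op latency counter in the OSD perf dump):

    # Per-OSD commit/apply latency snapshot
    ceph osd perf
    # Cumulative write latency counters from one OSD
    ceph tell osd.0 perf dump | jq '.osd.op_w_latency'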

[ceph-users] Re: Update of MDS (non-cephadm cluster)

2025-04-07 Thread Robert Sander
On 4/7/25 at 10:16, Massimo Sgaravatto wrote: Did I understand correctly? But doesn't this mean there is a short period of time when the cluster is in an error state (in 2, during the restart), and therefore possible problems for the CephFS clients? Your understanding of the procedure and its

[ceph-users] Re: NIH Datasets

2025-04-07 Thread Tim Holloway
Additional features:
* No "master server". No single point of failure.
* Resource location. A small number of master servers kept in sync, like DNS, with tiers of secondary resources. I think blockchains also have a similar setup?
* Resource identification. A scheme like LDAP. For example: cn

[ceph-users] Data Loss Bug Identified on Squid

2025-04-07 Thread Adam Emerson
https://tracker.ceph.com/issues/70746 tracks a data loss bug on Squid when CopyObject is used to copy an object onto itself. S3 clients typically do this when they want to change the metadata of an existing object. Due to a regression caused by an earlier fix for https://tracker.ceph.com/issues/66
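For reference, the self-copy pattern that triggers the bug, as an AWS CLI sketch (bucket and key names are placeholders); avoid it on affected Squid releases until the fix lands:

    aws s3api copy-object \
        --bucket mybucket \
        --copy-source mybucket/mykey \
        --key mykey \
        --metadata-directive REPLACE \
        --metadata owner=alice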

[ceph-users] Re: NIH Datasets

2025-04-07 Thread Linas Vepstas
Thanks, Šarūnas, and all who responded. I guess general discussion will need to go off-list. But first, to summarize, the situation seems to be this: * As a general rule, principal investigators (PIs) always have a copy of their "master dataset", which is thus "safe" as long as they don't lose contr

[ceph-users] Re: Cephadm upgrade from 16.2.15 -> 17.2.0

2025-04-07 Thread Jeremy Hansen
Got it. Thank you. I forgot that you mentioned the run files. I’ll hold off a bit to see if there are more comments, but I feel like I at least have things to try. Thanks again. -jeremy > On Monday, Apr 07, 2025 at 1:26 AM, Eugen Block wrote: > Still no, just edit the unit

[ceph-users] Re: NIH Datasets

2025-04-07 Thread Šarūnas Burdulis
On 4/4/25 11:39 PM, Linas Vepstas wrote: OK, what you will read below might sound insane, but I am obliged to ask. There are 275 petabytes of NIH data at risk of being deleted: cancer research, medical data, HIPAA-type stuff. Currently unclear where it's located, how it's managed, who has access t

[ceph-users] Re: NIH Datasets

2025-04-07 Thread Alex Buie
MooseFS is the way to go here. I have it working on Android SD cards and of course normal Linux servers, over the internet and over the Yggdrasil network. One of my in-progress anarchy projects is a global hard drive for all of humanity's knowledge. I would LOVE to get involved with this preservatio