Hi everyone.
My company has paid Ceph support. The support tech is saying:
"...the cephadm package of Ceph 5 has a bug, so it generates debug logs
even when it is set to "info" level..."
I have two clusters running, one Ceph 5 and the other Ceph 6 (Quincy).
Both of them are sending "DEBUG -" messages to c
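If the DEBUG lines are coming through the cephadm mgr module's cluster-log
channel (an assumption; this is a generic check, not the confirmed fix for
the bug the support tech describes), the level can be inspected and pinned:
# Show the cephadm module's cluster-log level
$ ceph config get mgr mgr/cephadm/log_to_cluster_level
# Pin it to info
$ ceph config set mgr mgr/cephadm/log_to_cluster_level info
# Verify against the most recent cluster log entries
$ ceph log last 20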
What kernel releases are running?
> On Apr 7, 2025, at 4:34 PM, quag...@bol.com.br wrote:
>
> Hi Anthony, thanks for your reply. I don't have any clients connected yet.
> It's a fresh install. I only made a CephFS mount point on each machine where I
> installed Ceph. How can I identify these cli
Hi Rafael,
I would not force min_compat_client to Reef while there are still
Luminous clients connected, as it is important for all clients to be >= Reef
to understand/encode the pg_upmap_primary feature in the osdmap.
As for checking which processes are still luminous, I am copying @Radosla
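As a starting point for that check, a minimal sketch (the feature-to-release
mapping is coarse, so treat the output as a hint rather than a process list):
# Group connected daemons and client sessions by the release
# their advertised features correspond to (e.g. "luminous")
$ ceph features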
Hi Tim,
Agree with all you say. My two cents as an old-timer: these ideas have been
around for decades, and there have been many plans, attempts, screeds, and
projects (I can rattle off a random list, but so can search engines
and Wikipedia). All with good intentions, heart in the right place.
All got mire
Just to confirm, I redeployed my single-node test cluster with
different versions. The logs are fine in:
17.2.6 (CentOS 8)
18.2.0 (CentOS 8) and further
During the upgrade from 17.2.6 to 17.2.7 I see the log format switch
after the MON has been upgraded:
2025-04-07T09:34:36.649854+ mgr
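A sketch for watching that transition mid-upgrade, assuming a cephadm-managed
cluster like the test setup above:
# Per-daemon versions, to see when the MONs have moved to the new release
$ ceph versions
# Follow the cephadm cluster-log channel live and watch the format switch
$ ceph -W cephadm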
I have updated the Ceph software several times, but this is the
first time I have to do an update (from Quincy to Reef) that also
involves CephFS.
I am not using cephadm
I have 3 MDSs (2 active and one in standby)
I'm looking at point 5 of:
https://ceph.io/en/news/blog/2023/v18-2-0-re
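If point 5 is the usual CephFS step of reducing to a single active MDS before
upgrading (an assumption, since the link is cut off), a minimal sketch with
<fs_name> as a placeholder:
# Reduce to one active rank before upgrading the MDS daemons
$ ceph fs set <fs_name> max_mds 1
# Wait until only rank 0 remains active, then verify
$ ceph fs status <fs_name>
# ... upgrade and restart the MDS packages/daemons ...
# Restore the original rank count afterwards
$ ceph fs set <fs_name> max_mds 2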
Yeah, Ceph in its current form doesn't seem like a good fit.
I think that what we need to support the world's knowledge in the face
of enstupidification is some sort of distributed holographic datastore.
So, like Ceph's PG replication, a torrent-like ability to pull from
multiple unreliable so
I haven't tried it this way yet, and I had hoped that Adam would chime
in, but my approach would be to remove this key (it's not present when
no upgrade is in progress):
ceph config-key rm mgr/cephadm/upgrade_state
Then rollback the two newer MGRs to Pacific as described before. If
they co
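A hedged sketch of that sequence; the pre-check and the mgr failover at the
end are assumptions of mine, not steps from this thread:
# Confirm the key actually exists before removing it
$ ceph config-key get mgr/cephadm/upgrade_state
$ ceph config-key rm mgr/cephadm/upgrade_state
# Fail over the active mgr so the cephadm module drops its cached state
$ ceph mgr fail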
Thank you. The only thing I’m unclear on is the rollback to Pacific.
Are you referring to
> > > https://docs.ceph.com/en/quincy/cephadm/troubleshooting/#manually-deploying-a-manager-daemon
Thank you. I appreciate all the help. Should I wait for Adam to comment? At the
moment, the cluster is fun
Still no, just edit the unit.run file for the MGRs to use a different
image. See Frédéric's instructions (now that I'm re-reading them,
there's a little mistake with dots and hyphens):
# Backup the unit.run file
$ cp /var/lib/ceph/$(ceph fsid)/mgr.ceph01.eydqvm/unit.run{,.bak}
# Change contain
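Since the snippet is cut off, a sketch of how the image swap and restart
might look; the image tags are illustrative, not from the thread:
# Point the MGR container at the older image (tags are hypothetical)
$ sed -i 's|quay.io/ceph/ceph:v17.2.7|quay.io/ceph/ceph:v16.2.15|' \
    /var/lib/ceph/$(ceph fsid)/mgr.ceph01.eydqvm/unit.run
# Restart the daemon so it picks up the edited unit.run
$ systemctl restart ceph-$(ceph fsid)@mgr.ceph01.eydqvm.service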
On 4/5/25 at 05:39, Linas Vepstas wrote:
OK. So here's the question: is it possible (has anyone tried) to set
up an internet-wide Ceph cluster?
This will not work as the latency is too high.
Regards
--
Robert Sander
Linux Consultant
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
Hi dear Ceph community,
We have encountered an issue with our Ceph cluster after upgrading from
v16.2.13 to v17.2.7.
The issue is that the write latency on OSDs has increased significantly and
doesn't seem to drop back down.
The average write latency has almost doubled, and this has happened sin
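A starting point for quantifying that, hedged as generic latency triage
rather than a diagnosis of this particular regression:
# Per-OSD commit/apply latency snapshot
$ ceph osd perf
# Dump one OSD's internal perf counters for a closer look
# (run on the host where osd.0 lives; osd.0 is a placeholder)
$ ceph daemon osd.0 perf dump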
On 4/7/25 at 10:16, Massimo Sgaravatto wrote:
Did I understand correctly? Doesn't this mean there is a short period of
time when the cluster is in an error state (in step 2, during the restart),
and therefore possible problems for the CephFS clients?
Your understanding of the procedure and its
Additional features:
* No "master server". No Single Point of Failure.
* Resource location. A small number of master servers kept in sync like
DNS with tiers of secondary resources. I think blockchains also have a
similar setup?
* Resource identification. A scheme like LDAP. For example:
cn
https://tracker.ceph.com/issues/70746 tracks a data loss bug on Squid
when CopyObject is used to copy an object onto itself. S3 clients
typically do this when they want to change the metadata of an existing
object.
Due to a regression caused by an earlier fix for
https://tracker.ceph.com/issues/66
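For context on the client pattern mentioned above, a sketch of such a
self-copy that replaces metadata, using the AWS CLI against an RGW endpoint;
bucket, key, and metadata values are illustrative:
# Copy an object onto itself to change its metadata -- the pattern
# that triggers the tracker issue above on affected releases
$ aws s3api copy-object \
    --bucket demo-bucket --key demo-object \
    --copy-source demo-bucket/demo-object \
    --metadata-directive REPLACE \
    --metadata owner=alice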
Thanks Šarūnai and all who responded.
I guess general discussion will need to go off-list. But first:
To summarize, the situation seems to be this:
* As a general rule, principal investigators (PIs) always have a copy
of their "master dataset", which thus is "safe" as long as they don't
lose contr
Got it, thank you. I forgot that you mentioned the run files. I’ll hold off a
bit to see if there are more comments, but I feel like I at least have things to
try. Thanks again.
-jeremy
> On Monday, Apr 07, 2025 at 1:26 AM, Eugen Block <ebl...@nde.ag> wrote:
> Still no, just edit the unit
On 4/4/25 11:39 PM, Linas Vepstas wrote:
OK, what you will read below might sound insane, but I am obliged to ask.
There are 275 petabytes of NIH data at risk of being deleted. Cancer
research, medical data, HIPAA type stuff. Currently unclear where it's
located, how it's managed, who has access t
MooseFS is the way to go here.
I have it working on Android SD cards and of course normal Linux servers
over the internet and over the Yggdrasil network.
One of my in-progress anarchy projects is a global hard drive for all of
humanity’s knowledge.
I would LOVE to get involved with this preservatio