I was noting that there are other access modalities.
CephFS and RGW obviate the need for a client-side filesystem. I would think ZFS suits the
OP, allowing sophisticated data protection, compression, and straightforward
NFS export.
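For illustration, a minimal sketch of what that could look like on the NFS server (the pool/dataset names and subnet are made up, and the sharenfs option syntax varies a bit between platforms):

zfs create -o compression=lz4 tank/exports
zfs set sharenfs="rw=@192.168.1.0/24" tank/exports
showmount -e localhost    # confirm the export is visible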
> On Mar 28, 2025, at 7:48 PM, Tim Holloway wrote:
>
> Good point, al
One thing I did run into when upgrading was TLS issues pulling images. I
had to set HTTP_PROXY/HTTPS_PROXY and pull manually.
That may relate to this:
2025-03-26T10:52:16.547985+ mgr.dell02.zwnrme (mgr.18015288) 23874 :
cephadm [INF] Saving service prometheus spec with placement
dell02.mousetech.com
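In case it helps anyone else, a workaround of that sort would look roughly like this (the proxy address and image tag below are only placeholders, not what was actually used):

export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
cephadm --image quay.io/ceph/ceph:v19.2.1 pull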
Hi Eugen,
I’m not sure if this helps, and I would greatly appreciate any suggestions for
improving our setup, but so far we’ve had good luck with our service deployed
using:
ceph nfs cluster create cephfs "label:_admin" --ingress --virtual_ip <virtual_ip>
And then we manually updated the nfs.ceph
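Roughly, that kind of manual update amounts to exporting, editing, and re-applying the service spec (this is a sketch, not the exact change we made):

ceph orch ls nfs --export > nfs.yaml
# edit nfs.yaml (placement, port, virtual IP, ...)
ceph orch apply -i nfs.yaml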
Chris, did you test from a container? Or how do you configure a KRBD disk
for a VM?
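For context, by a KRBD disk I mean something along these lines (pool and image names are just examples), with the mapped device then handed to the VM:

rbd create rbd/vm-disk1 --size 20G
rbd map rbd/vm-disk1          # e.g. /dev/rbd0
# then attach /dev/rbd0 to the VM as a raw block device (libvirt, qemu, ...)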
On 20/3/25 at 15:15, Chris Palmer wrote:
I just ran that command on one of my VMs. Salient details:
* Ceph cluster 19.2.1 with 3 nodes, 4 x SATA disks with shared NVMe
DB/WAL, single 10g NICs
* Pr
Thanks Eugen,
One more question: should I remove the monitor and recreate it after the
Quincy packages are installed, or can I do it while still on Pacific and
have the new monitor created with RocksDB?
Regards, I
Yup, there have been some reports of it, and I have created a tracker:
https://tracker.ceph.com/issues/70473.
Regards,
Nizam
On Fri, Mar 21, 2025 at 7:31 PM Harry G Coin wrote:
> Has anyone else tried to change the sort order of columns in the
> cluster/osd display on 19.2.1? While the header
It might be a different issue; can you paste the entire stack trace
from the mgr when it's failing?
Also, you could go directly from Pacific to Reef; there's no need to
upgrade to Quincy (which is EOL). And I would also recommend upgrading
to the latest point release of a major version, e.g. 17.2.8, not .0, ot
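With cephadm the upgrade itself would be something like this (the version below is only an example; pick the current point release):

ceph orch upgrade start --ceph-version 18.2.4
ceph orch upgrade status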
Hi Danish,
The "unable to find head object data pool for..." could be an incorrect warning
since it pops out for 'most of the objects'. [1]
Regarding the only object named 'cursor.png' that fails to sync, one thing you
could try (since you can't delete it with an s3 client) is to rewrite it w
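My guess at where that truncated suggestion was going (treat it as an assumption, not the author's exact advice) is radosgw-admin, which works below the S3 layer:

radosgw-admin object stat --bucket=<bucket> --object=cursor.png
radosgw-admin object rewrite --bucket=<bucket> --object=cursor.png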
Glad to hear it!
Thanks Eugen
On Thu, Mar 20, 2025 at 1:57 AM Eugen Block wrote:
> I can't reproduce it either. I removed the buckets and created a new
> one, no issue anymore. Then I upgraded a different test cluster from
> Pacific to Reef with an existing bucket, again no issue. So I guess
> t
orch approved. The suite is obviously quite red, but the vast majority of
the failures are just due to the lack of a proper ignorelist in the orch
suite on reef.
On Mon, Mar 24, 2025 at 5:40 PM Yuri Weinstein wrote:
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issu
Dear All,
We will be having a Ceph science/research/big cluster call on Tuesday
March 25th.
This is an informal open call of community members mostly from
hpc/htc/research/big cluster environments, though anyone is welcome,
where we discuss whatever is on our minds regarding Ceph. Updates,
Please don't drop the list from your response.
The orchestrator might be down, but you should have an active mgr
(ceph -s). On the host running the mgr, you can just run 'cephadm logs
--name mgr.<name>', or 'journalctl -u
ceph-<fsid>@mgr.<name>'.
To get fresh logs, you can just run 'ceph mgr fail', look on w
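Spelled out with placeholders (the fsid and mgr name depend on your cluster):

ceph mgr fail
ceph -s | grep mgr                      # note the new active mgr
cephadm logs --name mgr.<name>          # run on the host with that mgr
journalctl -u ceph-<fsid>@mgr.<name> | grep -A 30 Traceback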
Hello,
I'm seeking guidance on best practices for major version upgrades with
Cephadm-managed clusters. I have a production Pacific 16.2.15 cluster that
has had minor version upgrades with Cephadm, but always just a single step
at a time. We plan to upgrade to Reef soon. Is it best to go straight t
Thank you so much for the detailed instructions. Here are the logs from the failover
to a new node.
Apr 05 20:06:08 cn02.ceph.xyz.corp
ceph-95f49c1c-b1e8-11ee-b5d0-0cc47a8f35c1-mgr-cn02-ceph-xyz-corp-ggixgj[2357414]:
:::192.168.47.72 - - [05/Apr/2025:20:06:08] "GET /metrics HTTP/1.1" 200 -
"" "P
All I can share is that it's been working great for years.
DAN ALBRIGHT wrote on Fri, Mar 28, 2025, at 18:25:
> Hi Ceph Community, looking to understand Ceph implementation as a
There's a lot going on with your cluster... You seem to have broken
the mgr, which is why you're not seeing any deployment attempts, I
assume. Sometimes the mgr can be "broken" if some processes never
finish; your missing ceph06 might cause that. It's hard to say, since
you've tried a lot of
I don't see the module error in the logs you provided. Has it cleared?
A mgr failover is often helpful in such cases; it's basically been the
first thing I've suggested for the last two or three years.
Quoting Jeremy Hansen:
Thank you so much for the detailed instructions. Here’s logs from
the failover
[ceph: root@cn01 /]# ceph orch host ls
Error ENOENT: Module not found
-jeremy
> On Saturday, Apr 05, 2025 at 1:27 PM, Eugen Block <ebl...@nde.ag> wrote:
> I don't see the module error in the logs you provided. Has it cleared?
> A mgr failover is often helpful in such cases, basically it'
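For what it's worth, if the orchestrator backend itself got disabled somehow, re-enabling it would look roughly like this (a guess at one possible cause, not a diagnosis of this cluster):

ceph mgr module enable cephadm
ceph orch set backend cephadm
ceph orch status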
No. Same issue unfortunately.
> On Saturday, Apr 05, 2025 at 1:27 PM, Eugen Block <ebl...@nde.ag> wrote:
> I don't see the module error in the logs you provided. Has it cleared?
> A mgr failover is often helpful in such cases, basically it's the
> first thing I suggest since two or three
The mgr logs should contain a stack trace; can you check again?
Quoting Jeremy Hansen:
No. Same issue unfortunately.
On Saturday, Apr 05, 2025 at 1:27 PM, Eugen Block <ebl...@nde.ag> wrote:
I don't see the module error in the logs you provided. Has it cleared?
A mgr failover is of