Hi Noe,
If the MDS has failed and you're sure that there are no pending
tasks or sessions associated with the failed MDS, you can try to make use
of `ceph mds rmfailed`. But beware: make sure this MDS really is doing nothing and
isn't linked to any file system, otherwise things can go wrong and can
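A rough sketch of that check-then-remove sequence, with placeholder names (the
exact role syntax may vary between releases):

# verify the rank really is marked failed and no daemon is holding it
ceph fs status
ceph mds stat
# remove the failed rank, here rank 0 of a file system called 'myfs' as an example
ceph mds rmfailed myfs:0 --yes-i-really-mean-it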
Hi,
I am trying to follow the documentation at
https://docs.ceph.com/en/reef/rbd/nvmeof-target-configure/ to deploy an
NVMe over Fabric service.
Step 2b of the configuration section is currently the showstopper.
First the command says:
error: the following arguments are required: --host-nam
Hi,
On 5/30/24 11:58, Robert Sander wrote:
I am trying to follow the documentation at
https://docs.ceph.com/en/reef/rbd/nvmeof-target-configure/ to deploy an
NVMe over Fabric service.
It looks like the cephadm orchestrator in this 18.2.2 cluster uses the image
quay.io/ceph/nvmeof:0.0.2 whic
Hi,
I've never heard of automatic data deletion. Maybe just some snapshots
were removed? Or someone deleted data on purpose because of the
nearfull state of some OSDs? And there's no trash function for cephfs
(for rbd there is). Do you use cephfs snapshots?
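If you want to rule those out, a quick sketch (pool, image and mount point are
placeholders):

# rbd does have a trash; see if anything was moved there
rbd trash ls -p <pool>
# list the snapshots of a given image
rbd snap ls <pool>/<image>
# cephfs snapshots live in the hidden .snap directory of the snapshotted folder
ls /mnt/cephfs/<some_dir>/.snap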
Quoting Prabu GJ:
Hi Team
Hello Robert,
You could try:
ceph config set mgr mgr/cephadm/container_image_nvmeof "quay.io/ceph/nvmeof:1.2.13"
or whatever image tag you need (1.2.13 is the current latest).
Another way to run the image is by editing the unit.run file of the service or
by directly running the container with podman.
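Roughly, with the nvmeof service name as a placeholder, the first approach would
look like this:

ceph config set mgr mgr/cephadm/container_image_nvmeof quay.io/ceph/nvmeof:1.2.13
# make cephadm pick up the new image for the existing daemons
ceph orch redeploy <nvmeof_service_name>
# verify which image the daemons are actually running
ceph orch ps --daemon-type nvmeof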
On Thursday, May 30, 2024 7:03:44 AM EDT Robert Sander wrote:
> Hi,
>
> On 5/30/24 11:58, Robert Sander wrote:
>
>
> > I am trying to follow the documentation at
> > https://docs.ceph.com/en/reef/rbd/nvmeof-target-configure/ to deploy an
> > NVMe over Fabric service.
>
>
> It looks like the
Dear Community,
I hope you can guide me to solve this error, or I can assist in solving a bug:
RBD images are not shown in my Dashboard.
- When accessing the dashboard page (Block -> Images), no images are listed and
the error "Failed to execute RBD [errno 19] error generating diff from snapshot
I've never used this feature, but I wanted to point out your command versus
the error message; gateway-name / gateway_name (dash versus underscore)
On Thu, May 30, 2024 at 5:07 AM Robert Sander
wrote:
> Hi,
>
> I am trying to follow the documentation at
> https://docs.ceph.com/en/reef/rbd/nvmeof
Hi,
On 5/30/24 14:18, Frédéric Nass wrote:
ceph config set mgr mgr/cephadm/container_image_nvmeof
"quay.io/ceph/nvmeof:1.2.13"
Thanks for the hint. With that the orchestrator deploys the current container
image.
But it suddenly listens on port 5499 instead of 5500, and:
# podman run -it q
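To double-check which port the gateway daemon actually listens on, something
along these lines on the gateway host (a diagnostic sketch only):

podman ps --format '{{.Names}} {{.Ports}}'
ss -tlnp | grep -E '5499|5500'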
Hi,
Following the introduction of an additional node to our Ceph cluster, we've
started to see unlink errors when taking an rbd mirror snapshot.
We've had RBD mirroring configured for over a year now and it's been working
flawlessly; however, after we created OSDs on a new node we've been receiving t
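For anyone looking into this, the mirroring and snapshot state can be inspected
with commands like these (pool and image names are placeholders):

rbd mirror pool status <pool> --verbose
rbd mirror image status <pool>/<image>
# mirror snapshots (including any left behind by a failed unlink) show up here
rbd snap ls --all <pool>/<image>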
There's a major NVMe effort underway but it's not even merged to
master yet, so I'm not sure how docs would have ended up in the Reef
doc tree. :/ Zac, any idea? Can we pull this out?
-Greg
On Thu, May 30, 2024 at 7:03 AM Robert Sander
wrote:
>
> Hi,
>
> On 5/30/24 14:18, Frédéric Nass wrote:
>
On Fri, May 24, 2024 at 7:09 PM Malcolm Haak wrote:
>
> When running a cephfs scrub the MDS will crash with the following backtrace
>
> -1> 2024-05-25T09:00:23.028+1000 7ef2958006c0 -1
> /usr/src/debug/ceph/ceph-18.2.2/src/mds/MDSRank.cc: In function 'void
> MDSRank::abort(std::string_view)' t
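For reference, the scrub in question would typically be started and monitored
with something like this (file system name is a placeholder):

ceph tell mds.<fs_name>:0 scrub start / recursive
ceph tell mds.<fs_name>:0 scrub status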
On Tue, May 28, 2024 at 8:54 AM Noe P. wrote:
>
> Hi,
>
> we ran into a bigger problem today with our ceph cluster (Quincy,
> Alma8.9).
> We have 4 filesystems and a total of 6 MDSs, the largest fs having
> two ranks assigned (i.e. one standby).
>
> Since we often have the problem of MDSs lagging be
The fix was actually backported to v18.2.3. The tracker was wrong.
On Wed, May 29, 2024 at 3:26 PM wrote:
>
> Hi,
>
> we have a stretched cluster (Reef 18.2.1) with 5 nodes (2 nodes on each side
> + witness). You can see our daemon placement below.
>
> [admin]
> ceph-admin01 labels="['_admin', 'm
Hi Peter,
The upcoming Reef minor release is delayed due to important bugs:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/FMFUZHKNFH4Z5DWS5BAYBPENHTNJCAYS/
On Wed, 29 May 2024 at 21:03, Peter Razumovsky
wrote:
> Hello! We're waiting for the brand new minor 18.2.3 due to
> https://git
I reran rados on the fix https://github.com/ceph/ceph/pull/57794/commits
and am seeking approvals from Radek and Laure
https://tracker.ceph.com/issues/65393#note-1
On Tue, May 28, 2024 at 2:12 PM Yuri Weinstein wrote:
>
> We have discovered some issues (#1 and #2) during the final stages of
> testi