Perhaps this? https://s3browser.com/
Regards,
Anthony Fecarotta
Founder & President
anth...@linehaul.ai
224-339-1182 | (855) 625-0300
1 Mid America Plz Flr 3, Oakbrook Terrace, IL 60181
Yes, all the errors and warnings list as 'suppressed'. This doesn't
affect the bug as reported below.
Of some interest, "OSD_UNREACHABLE" is not listed on the dashboard alert
roster of problems, but is in the command line health detail.
But really, when all the errors list as 'suppressed', what
Hi,
did you also mute the osd_unreachable warning?
ceph health mute OSD_UNREACHABLE 10w
That should bring the cluster back to HEALTH_OK for 10 weeks.
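The suggested workaround can be sketched as the following CLI sequence (the alert code and 10-week duration come from the message above; this is an ops sketch, not output from the cluster being discussed):

```shell
# Mute the health alert for 10 weeks (10w).
ceph health mute OSD_UNREACHABLE 10w

# Verify: the alert should now show as muted in health detail,
# and ceph -s should report HEALTH_OK if nothing else is failing.
ceph health detail
ceph -s

# Undo the mute later if desired:
ceph health unmute OSD_UNREACHABLE
```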
Quoting Harry G Coin:
Hi Nizam
Answers interposed below.
On 2/10/25 11:56, Nizamudeen A wrote:
Hey Harry,
Do you see that for every alert or for some of them? If some, what are
those? I just tried a couple of them locally and saw the dashboard
went to a happy state.
My sandbox/dev array has three chronic 'warnings
Hi Cephers,
These are the topics covered in today's meeting:
- *[Patrick] Any objections to running clean-ci now?* Do you have any
important branches in ceph-ci?
- https://github.com/ceph/ceph/pull/61709
- Joseph asked if it could delete branches similarly to GitHub UI/API so
th
The iSCSI gateway is likely to disappear in the future and is
definitely in minimal maintenance mode right now.
However, as with all removed features, if we do that it will have
plenty of warning — we will announce it is deprecated in a major
release without changing or removing it, and then remove
Hello,
I found that there is also a third-party iSCSI implementation that
claims to support RBD:
https://github.com/fujita/tgt
I use it in a non-Ceph context, and it works well.
Question: Does anyone have any experience running tgt as an
alternative to tcmu-runner? If so, which clients (iSCSI i
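For context, exporting an RBD image through tgt's rbd backing store is typically done with `tgtadm`. The following is a hypothetical sketch only: it assumes tgt was built with RBD support enabled, and the target IQN, pool name (`rbd`), and image name (`disk1`) are illustrative, not from the message above.

```shell
# Create an iSCSI target (IQN is an example placeholder).
tgtadm --lld iscsi --op new --mode target --tid 1 \
    -T iqn.2025-02.com.example:rbd-disk1

# Attach an RBD image as LUN 1 via the rbd backing-store type
# (assumes pool "rbd" contains an image "disk1").
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    --bstype rbd --backing-store rbd/disk1

# Allow all initiators to connect (tighten this in production).
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
```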
Hey Harry,
Do you see that for every alert or for some of them? If some, what are
those? I just tried a couple of them locally and saw the dashboard went to
a happy state.
Can you tell me what ceph health or ceph health detail looks like after
the muted alert? And also, does ceph -s report HEA
Hi Frédéric,
Another half year added to the previous half-year wait for basic IPv6
clusters, then. If only 'ceph health mute' accomplished the goal as a
workaround. Notice that even when all complaints are 'suppressed', the
dashboard continues to offer the 'flashing red warning dot', and the !
In the same code area: even if all the alerts are silenced, the
dashboard will not show 'green' but red or yellow, depending on the
nature of the silenced alerts.
On 2/10/25 04:18, Nizamudeen A wrote:
Thank you Chris,
I was able to reproduce this. We will look into it and send out a
ISCSI is still being used in the LRC (long running cluster) that is a
storage backend for parts of the ceph team's infrastructure, so I don't
think it's going to disappear in the near future. I believe the plan is to
eventually swap over to nvmeof instead (
https://docs.ceph.com/en/reef/rbd/nvmeof-
I don't think it's a memory leak anymore. But I created a tracker:
https://tracker.ceph.com/issues/69885
Quoting Eugen Block:
To me it looks like a memory leak which wasn't present in 16.2.11
(the previous Ceph version on this cluster). The usage hasn't
changed, so it must be Ceph. I've
Good morning,
I wanted to inquire about the status of the Ceph iSCSI gateway service.
We currently have several machines running this technology, and they are
working correctly, although I have seen that it appears to have been
discontinued since 2022. My question is whether to continue down th
Thank you Chris,
I was able to reproduce this. We will look into it and send out a fix.
Regards,
Nizam
On Fri, Feb 7, 2025 at 10:35 PM Chris Palmer wrote:
> Firstly thank you so much for the 19.2.1 release. Initial testing
> suggests that the blockers that we had in 19.2.0 have all been resolv
The question was posted here [0] as well. There is a tracker [1] with
a fix [2] which will be backported to Reef, but Quincy is EOL.
[0]
https://serverfault.com/questions/1172161/osds-stability-issues-post-upgrade-to-ceph-quincy-17-2-8
[1] https://tracker.ceph.com/issues/69764
[2] https://g