Hello Istvan,
Upon further reflection, I believe the script could also be used to remove
nodes/OSDs, not just to add new ones.
The key point is that you shouldn't remove the OSDs and purge them immediately
after running the script, as you did previously. Instead, onc
On Tue, Sep 10, 2024 at 5:36 PM Ilya Dryomov wrote:
>
> On Tue, Sep 10, 2024 at 1:23 PM Milind Changire wrote:
> >
> > Problem:
> > CephFS fallocate implementation does not actually reserve data blocks
> > when mode is 0.
> > It only truncates the file to the given size by setting the file size
>
Hi all,
The User + Dev Monthly Meetup is coming up on Sept. 25th!
This month, we will have a discussion on incorporating the upmap-remapped
[1] & pgremapper [2] tools with the balancer mgr module. There has been
interest in the community for a while in incorporating the logic from
these extern
Rados approved
On Tue, Sep 10, 2024 at 1:43 PM Laura Flores wrote:
> I have finished reviewing the upgrade and smoke suites. Most failures are
> already known/tracked:
> https://tracker.ceph.com/projects/rados/wiki/SQUID#v1920-build-3-httpstrackercephcomissues67779
>
> *Upgrade: Pending check fr
I have finished reviewing the upgrade and smoke suites. Most failures are
already known/tracked:
https://tracker.ceph.com/projects/rados/wiki/SQUID#v1920-build-3-httpstrackercephcomissues67779
*Upgrade: Pending check from FS team*
Two fs issues stood out. @Gregory Farnum @Venky Shankar
can you c
CLT discussion on Sep 09, 2024
19.2.0 release:
* Cherry picked patch: https://github.com/ceph/ceph/pull/59492
* Approvals requested for re-runs
CentOS Stream/distribution discussions ongoing
* Significant implications for build/test infrastructure, requiring
ongoing discussions/work to d
Hi,
I am currently setting up multi-site replication with rook-ceph.
My setup is the following:
- clusterA in regionA containing local buckets (let's call them bucketsA)
- clusterB in regionB containing local buckets (let's call them bucketsB)
- clusterC in regionC containing local buckets (
Rados reviewed here:
https://tracker.ceph.com/projects/rados/wiki/SQUID#v1920-build-3-httpstrackercephcomissues67779
I have asked @Radoslaw Zarzynski to take a look at
the summary and confirm there are no blockers.
Reviews of the upgrade and smoke suites are in progress; I will provide
another update for those so
On Tue, Sep 10, 2024 at 1:23 PM Milind Changire wrote:
>
> Problem:
> CephFS fallocate implementation does not actually reserve data blocks
> when mode is 0.
> It only truncates the file to the given size by setting the file size
> in the inode.
> So, there is no guarantee that writes to the file
Problem:
The CephFS fallocate implementation does not actually reserve data blocks
when mode is 0. It only truncates the file to the given size by setting the
file size in the inode, so there is no guarantee that writes to the file
will succeed.
Solution:
Since an immediate remediation of this problem
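To make the reported behavior concrete, here is a minimal sketch (my own
illustration, not Ceph code; the CephFS mount path and the 1 GiB size are
assumptions). posix_fallocate() takes the mode-0 preallocation path, where
POSIX expects later writes inside the reserved range not to fail with
ENOSPC; per the problem statement above, CephFS only updates the file size
in the inode.

# Illustrative sketch only (not Ceph code); path and size are assumptions.
import os

path = "/mnt/cephfs/testfile"                # assumed CephFS mount point
fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
try:
    # Mode-0 preallocation: ask the filesystem to reserve 1 GiB at offset 0.
    # POSIX expects subsequent writes in this range not to fail with ENOSPC.
    os.posix_fallocate(fd, 0, 1 << 30)

    # The file size is updated (this is what CephFS does today)...
    print("reported size:", os.fstat(fd).st_size)

    # ...but no data blocks are reserved on CephFS, so a later write into
    # this range can still fail if the pool runs out of space.
finally:
    os.close(fd)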
Thank you!
> On 10-09-2024 09:39 CEST, Redouane Kachach wrote:
>
>
> Seems like a BUG in cephadm: when deployed, the ceph-exporter doesn't specify
> its port, which is why it's not being opened automatically. You can see that
> in the cephadm logs (ports list is empty):
>
> 2024-09-09 04:39:48,
Hello,
we have a Ceph cluster with 144 NVMe "disks"; the "background" network
is RoCE. The cluster is version 18.2.2 and was installed via the Ceph
orchestrator cephadm; the container daemon is podman. The OS is Debian
bookworm. Podman was at version 4.3.1+ds1-8+b1, and we have now
installed version 4.3.1+ds1-8+deb12
Seems like a BUG in cephadm: when deployed, the ceph-exporter doesn't
specify its port, which is why it's not being opened automatically. You can
see that in the cephadm logs (ports list is empty):
2024-09-09 04:39:48,986 7fc2993d7740 DEBUG Loaded deploy configuration:
{'fsid': '250b9d7c-6e65-11ef-8e0
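For illustration, a minimal sketch of the mechanism being described (this is
not cephadm source code; the daemon name and the 9926 port are assumptions):
the firewall ports opened for a daemon come from the 'ports' list in the
loaded deploy configuration, so an empty list like the one in the log above
means nothing gets opened.

# Simplified illustration, NOT cephadm source code. Shows why an empty
# 'ports' list in the deploy configuration means no firewall port is opened.
deploy_config = {
    "fsid": "250b9d7c-6e65-11ef-8e0...",    # truncated, as in the log above
    "name": "ceph-exporter.node1",          # hypothetical daemon name
    "ports": [],                            # empty -> nothing to open
}

def open_firewall_ports(ports):
    """Pretend firewall helper: open each TCP port in the list."""
    for port in ports:
        print(f"opening {port}/tcp")

open_firewall_ports(deploy_config["ports"])  # prints nothing: no port opened

# A fixed deployment would pass a non-empty list, e.g. ports=[9926]
# (assumed default ceph-exporter port), and the port would then be opened.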