Hi everyone,
I'm seeing different results when reading files from an erasure-coded pool
in CephFS, depending on which OSDs are running, including some incorrect
reads even with all OSDs running. I'm running Ceph 17.2.6.
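To be concrete about what I mean by "different results", here is a rough
sketch of the kind of comparison involved (the file path, pool name, object
name and PG id below are placeholders, not taken from my setup):

  # read the same file twice, dropping the page cache in between
  sha256sum /mnt/cephfs/backup/file.bin
  echo 3 | sudo tee /proc/sys/vm/drop_caches
  sha256sum /mnt/cephfs/backup/file.bin

  # map the first object of the file to its PG and acting OSD set
  # (CephFS object names are <inode in hex>.<block number>)
  printf '%x\n' $(stat -c %i /mnt/cephfs/backup/file.bin)
  ceph osd map ec-data 10000000123.00000000

  # deep-scrub the PG reported above and check for inconsistencies
  ceph pg deep-scrub 12.7f
  ceph health detail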
# More detail
In particular, I have a relatively large backup of some f
Hi all,
another quick update: please use this link to download the script:
https://github.com/frans42/ceph-goodies/blob/main/scripts/pool-scrub-report
The one I sent originally does not track the latest version.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
Hi,
I've been searching and trying things but to no avail yet.
This is not critical because it's only a test cluster, but I'd still
like to have a solution in case this somehow makes it into our
production clusters.
It's an OpenStack Victoria cloud with a Ceph backend. If one tries to
remov
On Fri, Dec 15, 2023 at 12:52 PM Eugen Block wrote:
>
> Hi,
>
> I've been searching and trying things but to no avail yet.
> This is not critical because it's only a test cluster, but I'd still
> like to have a solution in case this somehow makes it into our
> production clusters.
> It's an Open
Ah of course, thanks for pointing that out, I somehow didn't think of
the remaining clones.
Thanks a lot!
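For the archives, in case someone else runs into this: the remaining clones
can be tracked down with the rbd CLI. A minimal sketch (pool, image and
snapshot names here are just examples, not from our cloud):

  # list snapshots of the image and check which ones still have children
  rbd snap ls images/<image-id>
  rbd children images/<image-id>@snap

  # a clone must be flattened (or deleted) before its parent snapshot
  # can be unprotected and the image removed
  rbd flatten vms/<clone-name>
  rbd snap unprotect images/<image-id>@snap
  rbd snap purge images/<image-id>
  rbd rm images/<image-id>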
Quoting Ilya Dryomov:
On Fri, Dec 15, 2023 at 12:52 PM Eugen Block wrote:
Hi,
I've been searching and trying things but to no avail yet.
This is not critical because it's only a test clu
Hello everyone! How are you doing?
I wasn't around for two years but I'm back and working on a new development.
I deployed two Ceph clusters:
1- user_data: 5x nodes [8x 4TB SATA SSD, 2x 25Gbit network],
2- data-gen: 3x nodes [8x 4TB SATA SSD, 2x 25Gbit network],
note: the hardware is not my choice and I kno
I found something useful and I think I need to dig into this and use it 100%:
https://docs.ceph.com/en/reef/cephfs/multimds/#dynamic-subtree-partitioning-with-balancer-on-specific-ranks
DYNAMIC SUBTREE PARTITIONING WITH BALANCER ON SPECIFIC RANKS
The CephFS file system provides the bal_rank_mask option
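If I read that section correctly, the setup would look something like this
(filesystem name, mask value and directory paths below are just examples,
assuming four active MDS ranks):

  # run four active MDS daemons
  ceph fs set cephfs max_mds 4

  # let the balancer migrate subtrees only between ranks 0 and 1
  # (0x3 is a bitmask covering ranks 0 and 1)
  ceph fs set cephfs bal_rank_mask 0x3

  # directories for the remaining ranks can still be pinned manually
  setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/projects
  setfattr -n ceph.dir.pin -v 3 /mnt/cephfs/scratch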