On Mon, Apr 15, 2024 at 13:09, Mitsumasa KONDO wrote:
> Hi Menguy-san,
>
> Thank you for your reply. Users who run large I/O workloads on tiny volumes
> are a nuisance to cloud providers.
>
> I checked my Ceph cluster, which has 40 SSDs. Each OSD on a 1 TB SSD has
> about 50 placement groups in my cluster. Therefo
Hi Anthony-san,
Thank you for your advice. I checked the settings of my Ceph cluster.
Autoscaler mode is on, so I had assumed the PG counts were already optimal.
But the autoscaler doesn't directly control the number of PGs per OSD; it
only adjusts pg_num for the storage pools. Is that right?
Regards,
--
Mitsumasa KONDO
On Mon, Apr 15, 2024 at 22
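A quick way to sanity-check the autoscaler question above, using stock
commands only (no pool names or values assumed):

    # What the autoscaler is doing per pool (PG_NUM, NEW PG_NUM, AUTOSCALE columns)
    ceph osd pool autoscale-status

    # Per-OSD placement group counts are the PGS column here
    ceph osd df

    # The target the autoscaler sizes pools against (default is 100 PGs per OSD)
    ceph config get mon mon_target_pg_per_osd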
Still waiting for approvals:
rados - Radek, Laura approved? Travis? Nizamudeen?
The ceph-volume issue was fixed by https://github.com/ceph/ceph/pull/56857
We do not plan to upgrade the LRC to 18.2.3, as we are very close to the
first squid RC and will be using that for this purpose.
Please speak up if th
smoke approved.
Infrastructure:
1. https://tracker.ceph.com/issues/64727 - suites/dbench.sh: Socket
exception: No route to host (113)
On Sun, Apr 14, 2024 at 2:08 PM Adam King wrote:
> orch approved
>
> On Fri, Apr 12, 2024 at 2:38 PM Yuri Weinstein
> wrote:
>
>> Details of this release ar
Is there a how-to document available on setting up HashiCorp's Vault for
Ceph, preferably in an HA configuration?
Due to some encryption needs, we need to set up LUKS, OSD encryption AND Ceph
bucket encryption as well. Yes, we know there will be a performance hit, but
encrypt-everything is a hard
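Not a how-to, but a rough sketch of the moving parts on the Ceph side (the
Vault address, token auth, transit engine and file names below are
placeholders, not a tested recipe; HA for Vault itself is configured on the
Vault side, e.g. with integrated Raft storage):

    # RGW -> Vault for S3 SSE-KMS
    ceph config set client.rgw rgw_crypt_s3_kms_backend vault
    ceph config set client.rgw rgw_crypt_vault_auth token
    ceph config set client.rgw rgw_crypt_vault_addr https://vault.example.com:8200
    ceph config set client.rgw rgw_crypt_vault_secret_engine transit
    ceph config set client.rgw rgw_crypt_vault_token_file /etc/ceph/vault.token
    # restart the RGW service afterwards so the daemons pick up the settings

    # OSD-level LUKS/dmcrypt at deploy time via a cephadm OSD spec (osd_spec.yaml):
    #   service_type: osd
    #   service_id: encrypted_osds
    #   placement:
    #     host_pattern: '*'
    #   spec:
    #     data_devices:
    #       all: true
    #     encrypted: true
    ceph orch apply -i osd_spec.yaml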
At Clyso we've been building a tool called Chorus that can migrate S3 data
around. Normally I wouldn't promote it here, but it's open source and it
sounds like it might be useful in this case. I don't work on it myself, but
thought I'd mention it:
https://github.com/clyso/chorus
One problem wi
Hello everyone!
We deployed a platform with Ceph Quincy, and now we need to give access to
some old nodes running CentOS 7 until 30/07/2024. I found two approaches: the
first is deploying NFS Ganesha and providing access through the NFS protocol;
the second is to use an older CephFS client, specif
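For the NFS Ganesha route, the orchestrator flow is roughly the following
(cluster id, host and file system names are placeholders; the flag form of
the export command is the Quincy-era syntax, worth checking against your
exact release):

    # Deploy a Ganesha cluster and export a CephFS file system through it
    ceph nfs cluster create cephnfs "1 nfs-host-1"
    ceph nfs export create cephfs --cluster-id cephnfs --pseudo-path /cephfs --fsname myfs

    # On the CentOS 7 node (NFSv4.1 is available there)
    mount -t nfs -o nfsvers=4.1,proto=tcp nfs-host-1:/cephfs /mnt/cephfs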
If you're using SATA/SAS SSDs, I would aim for 150-200 PGs per OSD, as shown
by `ceph osd df`.
If NVMe, 200-300, unless you're starved for RAM.
> On Apr 15, 2024, at 07:07, Mitsumasa KONDO wrote:
>
> Hi Menguy-san,
>
> Thank you for your reply. Users who run large I/O workloads on tiny volumes
> are a nu
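For what it's worth, a back-of-the-envelope way to turn a per-OSD target into
a pool pg_num (assuming the 40-OSD cluster from the quoted mail and 3x
replication, which is my assumption):

    # PGs per OSD ~= sum over pools of (pg_num * replica count) / number of OSDs
    # Target ~200 PGs/OSD: 200 * 40 / 3 ~= 2667 -> nearest power of two is 2048,
    # which gives 2048 * 3 / 40 ~= 154 PGs/OSD, inside the 150-200 range above.
    ceph osd pool set <pool> pg_num 2048   # <pool> is a placeholder
    ceph osd df                            # watch the PGS column settle near the target
    # If the autoscaler is enabled on the pool it may adjust this again;
    # either disable it for that pool or steer it with a target setting instead.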
Hello,
If you have quite a large amount of data, you could try Chorus from Clyso.
https://docs.clyso.com/blog/2024/01/24/opensourcing-chorus-project
https://github.com/clyso/chorus
Hi Menguy-san,
Thank you for your reply. Users who run large I/O workloads on tiny volumes
are a nuisance to cloud providers.
I checked my Ceph cluster, which has 40 SSDs. Each OSD on a 1 TB SSD has
about 50 placement groups in my cluster. Therefore, each PG has approximately
20 GB of space.
If we create a small
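If it helps to verify the ~20 GB-per-PG figure directly, the per-PG byte
counts on a single OSD can be listed (osd.0 is just an example id):

    ceph pg ls-by-osd osd.0   # the BYTES column shows data held by each PG on that OSD
    ceph osd df               # the PGS column shows the ~50 PGs per OSD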
On 12.04.2024 20:54, Wesley Dillingham wrote:
> Did you actually get this working? I am trying to replicate your steps but
> am not succeeding with multi-tenant.
This is what we are using; the second statement is so that the bucket owner
will have access to the objects that the user i
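The policy itself was cut off above, but for anyone finding this in the
archive, a two-statement multi-tenant policy of the kind described might look
roughly like this (tenant, user and bucket names are invented; the
tenant-qualified ARN forms follow the RGW bucket-policy docs, so double-check
them against your version):

    # policy.json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {"AWS": ["arn:aws:iam::tenant1:user/someuser"]},
          "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
          "Resource": ["arn:aws:s3::tenant2:mybucket", "arn:aws:s3::tenant2:mybucket/*"]
        },
        {
          "Effect": "Allow",
          "Principal": {"AWS": ["arn:aws:iam::tenant2:user/owner"]},
          "Action": ["s3:*"],
          "Resource": ["arn:aws:s3::tenant2:mybucket", "arn:aws:s3::tenant2:mybucket/*"]
        }
      ]
    }

    # Apply it with any S3 client that can set bucket policies, e.g.:
    aws --endpoint-url https://rgw.example.com s3api put-bucket-policy \
        --bucket mybucket --policy file://policy.json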