Thanks for your comments, Sake and David.
Depending on the customer's budget, we'll then either run some tests
with the documented stretch mode or build our own stretch mode in
"legacy" style by creating a suitable CRUSH rule.
Thanks!
Eugen
Zitat von Sake Ceph :
I believe they are working o
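A rough sketch of the "legacy"-style rule mentioned above, assuming two
datacenter buckets named dc1 and dc2 (hypothetical names) and a replicated
pool with size=4, could look like this after decompiling the CRUSH map:

    rule stretch_rule {
        id 2
        type replicated
        step take dc1
        step chooseleaf firstn 2 type host
        step emit
        step take dc2
        step chooseleaf firstn 2 type host
        step emit
    }

    # extract, edit, recompile and inject the CRUSH map
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt    # add the rule above to crush.txt
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new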
Hello everyone,
We operate two clusters that we installed with ceph-deploy on Nautilus
on Debian 10. We use them for external S3 storage (ownCloud) and RBD
disk images. We upgraded them to Octopus and Pacific on Debian 11 and
recently converted them to cephadm and upgraded
Build 4 with https://github.com/ceph/ceph/pull/54224 was built and I
ran the tests below and am asking for approvals:
smoke - Laura
rados/mgr - PASSED
rados/dashboard - Nizamudeen
orch - Adam King
See Build 4 runs - https://tracker.ceph.com/issues/63443#note-1
On Tue, Nov 14, 2023 at 12:21 AM Redou
Smoke approved. Failures tracked by:
- https://tracker.ceph.com/issues/63531
- https://tracker.ceph.com/issues/63488
On Tue, Nov 14, 2023 at 9:34 AM Yuri Weinstein wrote:
> Build 4 with https://github.com/ceph/ceph/pull/54224 was built and I
> ran the tests below and asking for approvals:
>
> s
orch approved. After reruns, orch/cephadm was just hitting two known
(nonblocker) issues and orch/rook teuthology suite is known to not be
functional currently.
On Tue, Nov 14, 2023 at 10:33 AM Yuri Weinstein wrote:
> Build 4 with https://github.com/ceph/ceph/pull/54224 was built and I
> ran th
dashboard approved. Failure known and unrelated!
On Tue, Nov 14, 2023, 22:34 Adam King wrote:
> orch approved. After reruns, orch/cephadm was just hitting two known
> (nonblocker) issues and orch/rook teuthology suite is known to not be
> functional currently.
>
> On Tue, Nov 14, 2023 at 10:33
On CentOS 7 systems with the CephFS kernel client, if the data pool has a
`nearfull` status, there is a reduction in write speeds (possibly
20-50% fewer IOPS).
On a similar Rocky 8 system with the CephFS kernel client, if the data pool
has `nearfull` status, a similar test shows write speeds
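To quantify that difference, a minimal write-IOPS test with fio on the
CephFS mount could look like the sketch below; the mount point /mnt/cephfs
and the job parameters are assumptions, not the ones used in the original
tests:

    fio --name=nearfull-write-test \
        --directory=/mnt/cephfs \
        --rw=randwrite --bs=4k --size=1G \
        --numjobs=4 --iodepth=16 \
        --ioengine=libaio --direct=1 \
        --group_reporting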
OK thx!
We have completed the approvals.
On Tue, Nov 14, 2023 at 9:13 AM Nizamudeen A wrote:
>
> dashboard approved. Failure known and unrelated!
>
> On Tue, Nov 14, 2023, 22:34 Adam King wrote:
>>
>> orch approved. After reruns, orch/cephadm was just hitting two known
>> (nonblocker) issues
Hi Jean Marc,
maybe look at the "rgw_enable_apis" parameter and check whether the values
you have correspond to the default (an RGW restart is needed after changing it):
https://docs.ceph.com/en/quincy/radosgw/config-ref/#confval-rgw_enable_apis
ceph config get client.rgw rgw_enable_apis
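If the current values differ from the default, a minimal sketch of
reverting to the default and restarting RGW could be the following (the
service name in the restart command is a placeholder):

    # drop any local override so the built-in default applies again
    ceph config rm client.rgw rgw_enable_apis
    # restart the RGW daemons so the change takes effect (cephadm-managed)
    ceph orch restart rgw.<service_name>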
Hi,
What's the correct way to migrate an OSD wal/db from a fast device to the
(slow) block device?
I have an OSD with wal/db on a fast LV device and block on a slow LV
device. I want to move the wal/db onto the block device so I can
reconfigure the fast device before moving the wal/db back t
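One possible approach, sketched here under the assumption that the cluster
is cephadm-managed and the OSD id is 12 (the id, fsid and LV names are
placeholders), is ceph-volume's "lvm migrate" subcommand:

    # stop the OSD before touching its devices
    ceph orch daemon stop osd.12
    # enter the OSD's container to get access to ceph-volume
    cephadm shell --name osd.12
    # merge the wal/db back into the main (block) LV
    ceph-volume lvm migrate --osd-id 12 --osd-fsid <osd-fsid> \
        --from db wal --target <vg_name>/<block_lv>
    # start the OSD again afterwards
    ceph orch daemon start osd.12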
Greetings group!
We recently reloaded a cluster from scratch using cephadm and Reef. The
cluster came up with no issues. We then decided to upgrade two existing
cephadm clusters that were on Quincy. Those two clusters came up just fine,
but there is an issue with the Grafana graphs on both cluste