On 21.02.2024 17:07, wodel youchi wrote:
- The Ceph documentation does not indicate which versions of Grafana,
Prometheus, etc. should be used with a given Ceph version.
- I am trying to deploy Quincy; I did a bootstrap to see which
containers were downloaded and their versions.
Cephadm does not have a variable that explicitly says it's an HCI
deployment. However, I believe the HCI variable in ceph-ansible only
controlled the osd_memory_target attribute, automatically setting it to
20% (HCI) or 70% (non-HCI) of the node's memory divided by the number
of OSDs.
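A rough cephadm equivalent, if the goal is the same memory headroom that
is_hci gave you, is to enable the orchestrator's memory autotuning and lower
its ratio; a sketch (0.2 mirrors the 20% HCI figure above, 0.7 is the default):
```
# Let cephadm derive osd_memory_target from each host's RAM and OSD count
ceph config set osd osd_memory_target_autotune true
# Fraction of host RAM given to OSDs; 0.2 is the usual hyperconverged value
ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2
```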
For the record, I tried both ways to configure it:
```
# edit the zonegroup's hostnames in place and feed the result back via stdin
radosgw-admin zonegroup get --rgw-zonegroup="dev" | \
jq '.hostnames |= ["dev.s3.localhost"]' | \
radosgw-admin zonegroup set --rgw-zonegroup="dev" -i -
```
```
ceph config set global rgw_dns_name dev.s3.localhost
```
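Whichever way it is set, a quick sanity check afterwards (reusing the
zonegroup name above) could be:
```
# Confirm the hostname is now part of the zonegroup
radosgw-admin zonegroup get --rgw-zonegroup="dev" | jq '.hostnames'
# Confirm the option landed in the config database
ceph config dump | grep rgw_dns_name
```
Note that the RGW daemons typically need a restart to pick up a changed
rgw_dns_name.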
On Wed., 21.
Hi Adam,
Thanks, you saved me from spending more time looking in the dark.
I’ll plan an update.
R
--
Ramon Orrù
Servizio di Calcolo
Laboratori Nazionali di Frascati
Istituto Nazionale di Fisica Nucleare
Via E. Fermi, 54 - 00044 Frascati (RM) Italy
Tel. +39 06 9403 2345
> On 21 Feb 2024, at 16:02
Hi folks,
I'm just trying to set up a new Ceph S3 multisite setup, and it looks to me
like DNS-style S3 is broken in multisite: when rgw_dns_name is
configured, the `radosgw-admin period update --commit` from the new member
will not succeed!
It looks like whenever hostnames is configured it breaks on t
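For context on where that command sits: the commit is normally the last step
of joining a new zone to the realm. A sketch of that sequence (URLs, keys and
zone names below are placeholders, not taken from this setup):
```
# On the new (secondary) site -- endpoints and credentials are placeholders
radosgw-admin realm pull --url=http://primary-rgw:8080 \
  --access-key=$SYNC_ACCESS_KEY --secret=$SYNC_SECRET_KEY
radosgw-admin zone create --rgw-zonegroup=myzonegroup --rgw-zone=secondary \
  --endpoints=http://secondary-rgw:8080 \
  --access-key=$SYNC_ACCESS_KEY --secret=$SYNC_SECRET_KEY
# The step reported to fail once hostnames / rgw_dns_name are set
radosgw-admin period update --commit
```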
Still seeking approvals:
rados - Radek, Junior, Travis, Adam King
All other product areas have been approved and are ready for the release step.
Pls also review the Release Notes: https://github.com/ceph/ceph/pull/55694
On Tue, Feb 20, 2024 at 7:58 AM Yuri Weinstein wrote:
>
> We have restart
Hi,
I have some questions about ceph using cephadm.
I used to deploy Ceph using ceph-ansible; now I have to move to cephadm, and I
am on my learning journey.
- How can I tell my cluster that it's part of an HCI deployment? With
ceph-ansible it was easy using is_hci: yes
- The documentat
Estimate on release timeline for 17.2.8?
- after pacific 16.2.15 and reef 18.2.2 hotfix
(https://tracker.ceph.com/issues/64339,
https://tracker.ceph.com/issues/64406)
Estimate on release timeline for 19.2.0?
- target April, depending on testing and RCs
- Testing plan for Squid beyond dev freeze (r
[mgr modules failing because pyO3 can't be imported more than once]
On 29/01/2024 12:27, Chris Palmer wrote:
I have logged this as https://tracker.ceph.com/issues/64213
I've noted there that it's related to
https://tracker.ceph.com/issues/63529 (an earlier report relating to the
dashboard).
It seems the quincy backport for that feature
(https://github.com/ceph/ceph/pull/53098) was merged Oct 1st 2023. According
to the quincy part of
https://docs.ceph.com/en/latest/releases/#release-timeline, that would mean
it is present in 17.2.7 but not 17.2.6.
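As a side note, one way to verify that kind of containment reasoning directly
(the commit hash below is a placeholder) is to ask git which release tags
include the backport's merge commit:
```
# In a ceph.git clone: list v17.x tags that contain the backport's merge commit
git tag --contains <merge-commit-sha> | grep '^v17\.'
```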
On Wed, Feb
On Tue, Feb 20, 2024 at 10:58 AM Yuri Weinstein wrote:
>
> We have restarted QE validation after fixing issues and merging several PRs.
> The new Build 3 (rebase of pacific) tests are summarized in the same
> note (see Build 3 runs) https://tracker.ceph.com/issues/64151#note-1
>
> Seeking approval
Hello,
I deployed RGW and NFSGW services on a Ceph (version 17.2.6) cluster. Both
services are accessed through 2 separate ingresses, which work as
expected when contacted by clients.
However, I'm experiencing some problems getting the two ingresses to work on
the same cluster.
ke
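In case it helps to compare notes, the part that usually matters is the pair
of ingress specs; a minimal sketch with made-up service IDs, VIPs and ports
(kept distinct per ingress) might look like:
```
# Illustrative specs only -- IDs, VIPs and ports are placeholders
cat > /tmp/ingress-specs.yaml <<'EOF'
service_type: ingress
service_id: rgw.myrgw
placement:
  count: 2
spec:
  backend_service: rgw.myrgw
  virtual_ip: 192.0.2.10/24
  frontend_port: 443
  monitor_port: 1967
---
service_type: ingress
service_id: nfs.mynfs
placement:
  count: 2
spec:
  backend_service: nfs.mynfs
  virtual_ip: 192.0.2.11/24
  frontend_port: 2049
  monitor_port: 9049
EOF
ceph orch apply -i /tmp/ingress-specs.yaml
```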
Update: we have run fsck and re-shard on all BlueStore volumes; it seems sharding
was not applied before.
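In case it is useful to others following this thread, the check/re-shard step
is presumably along the lines of ceph-bluestore-tool's fsck, show-sharding and
reshard commands; a sketch, with an example OSD path, run while the OSD is
stopped (the sharding string shown is the current default, see
bluestore_rocksdb_cfs):
```
# OSD must be stopped; the path is an example
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 fsck
# Inspect the current RocksDB column-family sharding, then apply the default scheme
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 show-sharding
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 \
  --sharding="m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" reshard
```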
Unfortunately, scrubs and deep-scrubs are still stuck on PGs of the pool that is
suffering the issue, while other PGs scrub fine.
The next step will be to remove the cache tier as suggested, but it's no
Hi,
Short summary
PG 404.bc is an EC 4+2 PG where s0 and s2 report a hash mismatch for 698
objects.
Ceph pg repair doesn't fix it: if you run a deep-scrub on the PG
after the repair has finished, it still reports scrub errors.
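For reference, the commands in play (PG id from the summary above), plus a way
to see exactly which objects and shards are flagged:
```
# Trigger a deep scrub and an (attempted) repair of the affected PG
ceph pg deep-scrub 404.bc
ceph pg repair 404.bc
# Inspect which objects/shards the scrub flagged as inconsistent
rados list-inconsistent-obj 404.bc --format=json-pretty
```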
Why can't ceph pg repair fix this? With 4 of the 6 shards intact it should be able
> 1. Write object A from client.
> 2. Fsync to primary device completes.
> 3. Ack to client.
> 4. Writes sent to replicas.
[...]
As mentioned in the discussion, this proposal is the opposite of
the current policy, which is to wait for all replicas
to be written before writes are acknowledg