Playing with multi-site zones for the Ceph Object Gateway.
Ceph version: 17.2.5
My setup: 3-zone multi-site; 3-way full sync mode;
each zone has 3 machines -> RGW+MON+OSD
Running a load test: 3000 concurrent uploads of 1 MB objects.
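For reference, a load of roughly that shape can be reproduced with a minimal shell sketch like the one below (assuming s3cmd is already configured for an S3 user against one zone's RGW endpoint; the bucket name is a placeholder and "1M" is read as 1 MB):
```
# Hedged sketch of the load pattern: 3000 parallel 1 MB uploads against one zone.
# Assumes s3cmd is configured for that zone's endpoint; bucket name is a placeholder.
dd if=/dev/urandom of=/tmp/obj.1m bs=1M count=1
s3cmd mb s3://loadtest
seq 1 3000 | xargs -P 3000 -I{} s3cmd put /tmp/obj.1m s3://loadtest/obj-{}
```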
After about 3-4 minutes of load the RGW machines get stuck, on 2 zones out of
rados approved.
Big thanks to Laura for helping with this!
On Thu, Apr 27, 2023 at 11:21 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/59542#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Radek, Laura
> rados -
Hi,
did you finally figure out what happened?
I do have the same behavior and we can't get the mds to start again...
Thanks,
Emmanuel
Hi,
I just inherited a Ceph storage cluster, so my level of confidence with the
tool is certainly less than ideal.
We currently have an MDS server that refuses to come back online. While
reviewing the logs, I can see that, upon MDS start, the recovery goes well:
```
-10> 2023-05-03T08:12:43.
Hello, I have a question: what happens when I delete a PG on which I
set a particular OSD as primary using the pg-upmap-primary command?
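For reference, this is the command pair in question (a minimal sketch; PG id 2.7 and osd.3 are placeholders, and rm-pg-upmap-primary clears the mapping again):
```
# Hedged sketch: pin osd.3 as primary for PG 2.7, then clear the pin again.
# PG id and OSD id are placeholders.
ceph osd pg-upmap-primary 2.7 3
ceph osd rm-pg-upmap-primary 2.7
```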
--
Nguetchouang Ngongang Kevin
ENS de Lyon
https://perso.ens-lyon.fr/kevin.nguetchouang/
The failure seen in ceph-volume tests isn't related.
That being said, it needs to be fixed to have a better view of the current
status.
On Wed, 3 May 2023 at 21:00, Laura Flores wrote:
> upgrade/octopus-x (pacific) is approved. Went over failures with Adam King
> and it was decided they are not
upgrade/octopus-x (pacific) is approved. Went over failures with Adam King
and it was decided they are not release blockers.
On Wed, May 3, 2023 at 1:53 PM Yuri Weinstein wrote:
> upgrade/octopus-x (pacific) - Laura
> ceph-volume - Guillaume
>
> + 2 PRs are the remaining issues
>
> Josh FYI
>
>
upgrade/octopus-x (pacific) - Laura
ceph-volume - Guillaume
+ 2 PRs are the remaining issues
Josh FYI
On Wed, May 3, 2023 at 11:50 AM Radoslaw Zarzynski wrote:
>
> rados approved.
>
> Big thanks to Laura for helping with this!
>
> On Thu, Apr 27, 2023 at 11:21 PM Yuri Weinstein wrote:
> >
> >
I know of two PRs that have been requested to be cherry-picked into 16.2.13:
https://github.com/ceph/ceph/pull/51232 -- fs
https://github.com/ceph/ceph/pull/51200 -- rgw
Casey, Venky - would you approve them?
On Wed, May 3, 2023 at 6:41 AM Venky Shankar wrote:
>
> On Tue, May 2, 2023 at 8:25 PM Yuri
On Wed, May 3, 2023 at 11:24 AM Kamil Madac wrote:
>
> Hi,
>
> We deployed pacific cluster 16.2.12 with cephadm. We experience following
> error during rbd map:
>
> [Wed May 3 08:59:11 2023] libceph: mon2 (1)[2a00:da8:ffef:1433::]:6789
> session established
> [Wed May 3 08:59:11 2023] libceph: a
On Tue, May 2, 2023 at 8:25 PM Yuri Weinstein wrote:
>
> Venky, I did plan to cherry-pick this PR if you approve this (this PR
> was used for a rerun)
OK. The fs suite failure is being looked into
(https://tracker.ceph.com/issues/59626).
>
> On Tue, May 2, 2023 at 7:51 AM Venky Shankar wrote:
>
Hello list,
I made a mistake: I drained a host instead of putting it into maintenance
mode (for an OS reboot). :-/
After "Stop Drain" and restoring the original "crush reweight" values, so far
everything looks fine.
cluster:
health: HEALTH_OK
services:
[..]
osd: 79 osds: 78 up (since 3h), 78 in
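For the record, what was intended looks roughly like this (a hedged sketch, assuming a cephadm-managed cluster; the hostname is a placeholder):
```
# Hedged sketch: put a host into maintenance mode for an OS reboot instead of draining it.
ceph orch host maintenance enter myhost   # stops the daemons on the host and flags it as under maintenance
# ... reboot the host ...
ceph orch host maintenance exit myhost
```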
On Wed, May 3, 2023 at 4:33 AM Janek Bevendorff
wrote:
>
> Hi Patrick,
>
> > I'll try that tomorrow and let you know, thanks!
>
> I was unable to reproduce the crash today. Even with
> mds_abort_on_newly_corrupt_dentry set to true, all MDS booted up
> correctly (though they took forever to rejoin
Dear Cephers,
We are planning the dist upgrade from Octopus to Quincy in the next few weeks.
The first step is the Linux version upgrade from Ubuntu 18.04 to Ubuntu 20.04
on some big OSD servers running this OS version.
We just had a look at "Upgrading non-cephadm clusters" [
https://ceph.io/en/
Hi,
I doubt that you will get a satisfying response to cache-tier-related
questions. It hasn't been maintained for quite some time and has been
considered deprecated for years. It will be removed in one of the
upcoming releases, maybe Reef.
Regards,
Eugen
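If migrating away from it is on the table, the usual teardown of a writeback cache tier looks roughly like this (a hedged sketch; "hot-pool" and "base-pool" are placeholders, and the flush step can take a long time):
```
# Hedged sketch: retire a writeback cache tier in front of a base pool (names are placeholders).
ceph osd tier cache-mode hot-pool proxy          # stop absorbing new writes into the cache
rados -p hot-pool cache-flush-evict-all          # flush and evict all cached objects
ceph osd tier remove-overlay base-pool           # stop redirecting client I/O
ceph osd tier remove base-pool hot-pool          # detach the cache pool
```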
Quoting lingu2008:
Hi all,
On o
On 5/2/2023 9:02 PM, Nikola Ciprich wrote:
However, probably worth noting: historically we've been using the following OSD options:
ceph config set osd bluestore_rocksdb_options
compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,compaction_s
Hi,
the question is whether both sites are used as primary clusters by
different clients or if it's for disaster recovery only (site1 fails,
make site2 primary). If both clusters are used independently with
different clients, I would prefer to separate the pools, so this option:
PoolA (site1
Hi,
seems like a recurring issue, e.g. [1] [2], but it seems to be
triggered by something different than [1] since you don't seem to have
discontinuous OSD numbers. Maybe a regression, I don't really know;
maybe file a new tracker issue for that?
Thanks,
Eugen
[1] https://tracker.ceph.c
Hi,
We deployed a Pacific cluster (16.2.12) with cephadm. We experience the following
error during rbd map:
[Wed May 3 08:59:11 2023] libceph: mon2 (1)[2a00:da8:ffef:1433::]:6789
session established
[Wed May 3 08:59:11 2023] libceph: another match of type 1 in addrvec
[Wed May 3 08:59:11 2023] libceph
Hi,
The goal is to sync some VMs from site1 to site2 and, vice versa, sync
some VMs the other way.
I am thinking of using rbd mirroring for that, but I have little experience
with Ceph management.
I am searching for the best way to do that.
I could create two pools on each site, and cross s
Hi mhnx.
> I also agree with you, Ceph is not designed for this kind of use case
> but I tried to continue what I know.
If your only tool is a hammer ...
Sometimes it's worth looking around.
While your tests show that a rep-1 pool is faster than a rep-2 pool, the values
are not exactly impressive
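For anyone reproducing such a comparison, a hedged sketch of how a size-1 benchmark pool would be created (pool name and PG counts are placeholders; Ceph requires explicit opt-in because replica 1 offers no redundancy):
```
# Hedged sketch: create a replica-1 pool purely for benchmarking (placeholders throughout).
ceph config set mon mon_allow_pool_size_one true
ceph osd pool create bench-rep1 64 64 replicated
ceph osd pool set bench-rep1 size 1 --yes-i-really-mean-it
```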
Hi Patrick,
I'll try that tomorrow and let you know, thanks!
I was unable to reproduce the crash today. Even with
mds_abort_on_newly_corrupt_dentry set to true, all MDS booted up
correctly (though they took forever to rejoin with logs set to 20).
To me it looks like the issue has resolved
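For reference, the toggle used for that test; a hedged sketch of setting and resetting it via the config store (as I understand it, when true the MDS aborts on encountering a newly corrupt dentry):
```
# Hedged sketch: the guard referenced above; reset it to false after the test.
ceph config set mds mds_abort_on_newly_corrupt_dentry true
# ... restart the MDS and observe ...
ceph config set mds mds_abort_on_newly_corrupt_dentry false
```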
I think I got it wrong with the locality setting. I'm still limited by
the number of hosts I have available in my test cluster, but as far as
I got with failure-domain=osd, I believe k=6, m=3, l=3 with
locality=datacenter could fit your requirement, at least with regard
to the recovery band
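A profile along those lines would be created roughly like this (a hedged sketch using the LRC erasure-code plugin; the profile and pool names are placeholders):
```
# Hedged sketch: LRC profile with k=6, m=3, l=3, locality at the datacenter level,
# failure domain at the OSD level (names and PG counts are placeholders).
ceph osd erasure-code-profile set lrc63l3 \
    plugin=lrc k=6 m=3 l=3 \
    crush-locality=datacenter crush-failure-domain=osd
ceph osd pool create ec-test 32 32 erasure lrc63l3
```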
Hi,
we had the NFS discussion a few weeks back [2] and at the Cephalocon I
talked to Zac about it.
@Zac: seems like not only NFS over CephFS is affected but CephFS in
general. Could you add that note about the application metadata to the
general CephFS docs as well?
Thanks,
Eugen
[2]
Hi,
just to clarify: do you mean that, in addition to the rbd mirroring, you want
to have another sync of different VMs between those clusters (potentially
within the same pools), or are you looking for one option only? Please
clarify. Anyway, I would use dedicated pools for rbd mirroring and
then a
Hi,
Thanks
I am trying to find out the best way to synchronize VMs between two
HCI Proxmox clusters.
Each cluster will contain 3 compute/storage nodes, and each node will
contain 4 NVMe OSD disks.
There will be a 10 Gb/s link between the two platforms.
The idea is to be able to sync VMs bet
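For what it's worth, a two-way, snapshot-based rbd mirroring setup between the two clusters looks roughly like this (a hedged sketch; the pool name, site names, token path, and image name are placeholders, and an rbd-mirror daemon must be running on each side):
```
# Hedged sketch: enable two-way snapshot-based mirroring for a pool named "vms".
# On site-a:
rbd mirror pool enable vms image
rbd mirror pool peer bootstrap create --site-name site-a vms > /tmp/bootstrap_token
# On site-b (after copying the token over):
rbd mirror pool enable vms image
rbd mirror pool peer bootstrap import --site-name site-b vms /tmp/bootstrap_token
# Then per image, on whichever side currently holds the primary:
rbd mirror image enable vms/vm-disk-1 snapshot
```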