[ceph-users] Re: Pacific 16.2.14 debian Incomplete

2023-08-30 Thread Zakhar Kirpichenko
It looks much better, at least for Ubuntu focal, thanks! /Z On Thu, 31 Aug 2023 at 03:48, Yuri Weinstein wrote: > We redeployed all packages again. > > Please confirm that the issue is resolved. > > Thank you for your help and patience! > > On Wed, Aug 30, 2023 at 3:44 PM Zakhar Kirpichenko >

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-30 Thread Venky Shankar
Thanks for the follow up! On Wed, Aug 30, 2023 at 11:49 PM Adiga, Anantha wrote: > > Hi Venky, > > > > “peer-bootstrap import” is working fine now. It was port 3300 blocked by > firewall. > > Thank you for your help. > > > > Regards, > > Anantha > > > > From: Adiga, Anantha > Sent: Monday, Augus

[ceph-users] Re: Quincy NFS ingress failover

2023-08-30 Thread Thorne Lawler
Here are the yaml files I used to create the NFS and ingress services:

nfs-ingress.yaml:

    service_type: ingress
    service_id: nfs.xcpnfs
    placement:
      count: 2
    spec:
      backend_service: nfs.xcpnfs
      frontend_port: 2049
      monitor_port: 9000
      virtual_ip: 172.16.172.199/24

nfs.yaml:

    service_type: nfs s
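
The nfs.yaml preview is cut off after "service_type: nfs". For comparison, a minimal NFS spec paired with such an ingress usually continues along these lines; the backend port below is an assumption, not taken from the original message (the service_id follows from backend_service: nfs.xcpnfs):

    # Hypothetical completion of the truncated nfs.yaml; port 12049 is a
    # placeholder for the backend port the ingress proxies to.
    cat > nfs.yaml <<'EOF'
    service_type: nfs
    service_id: xcpnfs
    placement:
      count: 2
    spec:
      port: 12049   # backend port; clients connect to 2049 on the ingress
    EOF
    ceph orch apply -i nfs.yaml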

[ceph-users] Re: Pacific 16.2.14 debian Incomplete

2023-08-30 Thread Yuri Weinstein
We redeployed all packages again. Please confirm that the issue is resolved. Thank you for your help and patience! On Wed, Aug 30, 2023 at 3:44 PM Zakhar Kirpichenko wrote: > > Now the release email comes and the repositories are still missing packages. > What a mess. > > /Z > > On Wed, 30 Aug

[ceph-users] Re: Quincy NFS ingress failover

2023-08-30 Thread Thorne Lawler
If there isn't any documentation for this yet, can anyone tell me:

* How do I inspect/change my NFS/haproxy/keepalived configuration?
* What is it supposed to look like? Does someone have a working example?

Thank you. On 31/08/2023 9:36 am, Thorne Lawler wrote: Sorry everyone, Is there any
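
For what it's worth, a sketch of where to look on a cephadm deployment; the daemon-directory layout is an assumption based on how cephadm typically places its generated configs, and the daemon names are placeholders:

    # Find the ingress daemons and the cluster fsid first.
    ceph orch ps | grep -E 'haproxy|keepalived'
    FSID=$(ceph fsid)
    # Generated configs usually live under the daemon directories on the
    # host running them (paths are an assumption; verify locally):
    cat /var/lib/ceph/$FSID/haproxy.nfs.xcpnfs.*/haproxy/haproxy.cfg
    cat /var/lib/ceph/$FSID/keepalived.nfs.xcpnfs.*/keepalived.conf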

[ceph-users] Re: Quincy NFS ingress failover

2023-08-30 Thread Thorne Lawler
Sorry everyone, Is there any more detailed documentation on the high availability NFS functionality in current Ceph? This is a pretty serious sticking point. Thank you. On 30/08/2023 9:33 am, Thorne Lawler wrote: Fellow cephalopods, I'm trying to get quick, seamless NFS failover happening

[ceph-users] Re: Pacific 16.2.14 debian Incomplete

2023-08-30 Thread Zakhar Kirpichenko
Now the release email comes and the repositories are still missing packages. What a mess. /Z On Wed, 30 Aug 2023 at 19:27, Yuri Weinstein wrote: > 16.2.14 has not been released yet. > > Please don't do any upgrades before we send an announcement email. > > TIA > > On Wed, Aug 30, 2023 at 8:45 A

[ceph-users] v16.2.14 Pacific released

2023-08-30 Thread Yuri Weinstein
We're happy to announce the 14th backport release in the Pacific series. https://ceph.io/en/news/blog/2023/v16-2-14-pacific-released/ Notable Changes: * CEPHFS: After recovering a Ceph File System following the disaster recovery procedure, the recovered files under lost+foun
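
For cephadm-managed clusters, the move to this release is typically driven by the orchestrator; a short sketch, assuming the default quay.io image registry:

    # Start a rolling upgrade to v16.2.14 and watch its progress.
    ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.14
    ceph orch upgrade status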

[ceph-users] Re: Pacific 16.2.14 debian Incomplete

2023-08-30 Thread Laura Flores
Hey users, To follow up on my previous email, on behalf of the Ceph team, we apologize for any confusion about pre-released packages. We are working on streamlining the release process to avoid this next time. - Laura On Wed, Aug 30, 2023 at 2:14 PM Paul Mezzanini wrote: > -> "At the minimum,

[ceph-users] Re: Multisite RGW setup not working when following the docs step by step

2023-08-30 Thread Zac Dover
Petr, My name is Zac Dover. I'm the upstream docs guy for the Ceph Foundation. I will begin the process of correcting this part of the documentation, starting with a review of the section "Creating a Secondary Zone". My schedule is full until Sunday, but I will raise an issue in tracker.ceph.co

[ceph-users] Re: Pacific 16.2.14 debian Incomplete

2023-08-30 Thread Paul Mezzanini
-> "At the minimum, publishing the versioned repos at $repourl/debian-16.2.14 but not cutting the symlink over for $repourl/debian-pacific until “ready” seems like a very easy and useful release process improvement to prevent these specific issues going forward." This should be standard proced

[ceph-users] Re: Pacific 16.2.14 debian Incomplete

2023-08-30 Thread Laura Flores
Hi, We are still in the process of creating the release. No artifacts are officially released yet. We will send the usual blog post and email when everything's ready. - Laura On Wed, Aug 30, 2023 at 1:16 PM Zakhar Kirpichenko wrote: > Hi, > > Please note that some packages have been pushed for

[ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung

2023-08-30 Thread Adiga, Anantha
Hi Venky, “peer-bootstrap import” is working fine now. It turned out port 3300 was blocked by a firewall. Thank you for your help. Regards, Anantha From: Adiga, Anantha Sent: Monday, August 7, 2023 1:29 PM To: Venky Shankar ; ceph-users@ceph.io Subject: RE: [ceph-users] Re: cephfs snapshot mirror peer_boots
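
Port 3300 is the monitors' msgr2 port; a quick way to confirm reachability from the peer site (MON_IP is a placeholder for a remote monitor address):

    # Check that msgr2 (3300) and, if needed, legacy msgr1 (6789) are open.
    nc -zv MON_IP 3300
    nc -zv MON_IP 6789
    # 'ceph mon dump' on the remote cluster shows the advertised addrvec.
    ceph mon dump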

[ceph-users] Re: Pacific 16.2.14 debian Incomplete

2023-08-30 Thread Zakhar Kirpichenko
Hi, Please note that some packages have been pushed for Ubuntu focal as well, but the repo is incomplete. I think it would be good if such things could be avoided in the future. /Z On Wed, 30 Aug 2023 at 19:27, Yuri Weinstein wrote: > 16.2.14 has not been released yet. > > Please don't do any

[ceph-users] Re: radosgw multisite multi zone configuration: current period realm name not same as in zonegroup

2023-08-30 Thread Adiga, Anantha
Update: There was a networking issue between the sites; after fixing it, the issue reported below did not occur. Thank you, Anantha From: Adiga, Anantha Sent: Thursday, August 24, 2023 2:40 PM To: ceph-users@ceph.io Subject: radosgw multisite multi zone configuration: current period realm name

[ceph-users] Multisite RGW setup not working when following the docs step by step

2023-08-30 Thread Petr Bena
Hello, My goal is to set up multisite RGW with 2 separate Ceph clusters in separate datacenters, where RGW data are being replicated. I created a lab for this purpose in both locations (with the latest Reef Ceph installed using cephadm) and tried to follow this guide: https://docs.ceph.com/en/reef/r
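
The guide in question boils down to a handful of radosgw-admin calls on the primary; a condensed sketch of the documented flow, with placeholder names and endpoints:

    # On the primary cluster: create realm, master zonegroup and zone,
    # then commit the period (names/endpoints are placeholders).
    radosgw-admin realm create --rgw-realm=myrealm --default
    radosgw-admin zonegroup create --rgw-zonegroup=myzg \
        --endpoints=http://rgw1:80 --master --default
    radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=primary \
        --endpoints=http://rgw1:80 --master --default
    radosgw-admin period update --commit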

[ceph-users] CLT Meeting minutes 2023-08-30

2023-08-30 Thread Nizamudeen A
Hello,

* Finish v18.2.0 upgrade on LRC? It seems to be running v18.1.3; not much of a difference in code commits.
* News on teuthology jobs hanging? cephfs issues because of network troubles; it's resolved by Patrick.
* User council discussion follow-up. Detailed info on this pad: https:

[ceph-users] Re: Pacific 16.2.14 debian Incomplete

2023-08-30 Thread Reed Dier
This is more the sentiment that I was hoping to convey. Sure, I have my finger on the pulse of the mailing list and the packages coming down the pipe, but it is not safe to assume that everyone does and/or will. At the minimum, publishing the versioned repos at $repourl/debian-16.2.14 b
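
The proposal amounts to staging the versioned tree first and flipping the distribution pointer only at announcement time; a sketch with placeholder paths, since the actual repo layout is not shown here:

    # Publish the versioned repo without touching the release pointer...
    rsync -a build/debian-16.2.14/ /srv/repo/debian-16.2.14/
    # ...then cut the symlink over only when the release is announced.
    ln -sfn debian-16.2.14 /srv/repo/debian-pacific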

[ceph-users] Re: Pacific 16.2.14 debian Incomplete

2023-08-30 Thread Burkhard Linke
Hi, On 8/30/23 18:26, Yuri Weinstein wrote: 16.2.14 has not been released yet. Please don't do any upgrades before we send an announcement email. Then stop pushing packages before the announcement. This is not the first time this problem has occurred. And given your answer I'm afraid it won't be

[ceph-users] Re: Pacific 16.2.14 debian Incomplete

2023-08-30 Thread Yuri Weinstein
16.2.14 has not been released yet. Please don't do any upgrades before we send an announcement email. TIA On Wed, Aug 30, 2023 at 8:45 AM Reed Dier wrote: > > It looks like 16.2.14 was released, but it looks like in an incomplete way in > the debian repo? > > I first noticed it because my nigh

[ceph-users] Pacific 16.2.14 debian Incomplete

2023-08-30 Thread Reed Dier
It looks like 16.2.14 was released, but in an incomplete way in the debian repo? I first noticed it because my nightly mirror snapshot picked it up and showed that the majority of packages were removed, and only a handful had a new version. > focal-ceph-pacific 230829 to 230830 >
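
One client-side way to spot a half-populated repo, assuming the stock download.ceph.com apt source, is to compare candidate versions across the core packages:

    # If only some of these show 16.2.14 as the candidate, the repo is
    # incomplete (package names assume the usual Debian/Ubuntu split).
    apt-get update
    apt-cache policy ceph-common ceph-mon ceph-osd ceph-mgr radosgw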

[ceph-users] Re: A couple OSDs not starting after host reboot

2023-08-30 Thread Alison Peisker
Hi, It looks like Igor is right, it does appear to be a corruption.

    ls /var/lib/ceph/252fcf9a-b169-11ed-87be-3cecef623f33/osd.665/
    ceph_fsid  config  fsid  keyring  ready  require_osd_release  type
    unit.configured  unit.created  unit.image  unit.meta  unit.poststop
    unit.run  unit.stop  whoami

    head -c 4096
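
If the suspicion is BlueStore-level corruption, one hedged way to check it on a cephadm deployment is an offline fsck from the daemon's shell (osd.665 as in the thread; the OSD must be stopped first):

    # Enter the container context of the failed OSD, then fsck its store.
    cephadm shell --name osd.665
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-665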

[ceph-users] Re: lack of RGW_API_HOST in ceph dashboard, 17.2.6, causes ceph mgr dashboard problems

2023-08-30 Thread Eugen Block
Hi, there have been multiple discussions on this list without any satisfying solution for all possible configurations. One of the changes [1] made in Pacific was to use the hostname instead of the IP, but it only uses the shortname (you can check the "hostname" in 'ceph service dump' output). But
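
To see the hostname the mgr has recorded for the RGW daemons (the value the dashboard resolves), the service map can be queried directly; jq is assumed to be available:

    # RGW daemons register their metadata, including "hostname", in the
    # mgr service map.
    ceph service dump | jq '.services.rgw.daemons'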

[ceph-users] Re: Reef - what happened to OSD spec?

2023-08-30 Thread Eugen Block
Hi, just a few days ago I replied to a thread [2] with some explanations of destroy, delete and purge. So if you "destroy" an OSD, it is meant to be replaced, reusing the ID. A failed drive may not be responsive at all, so an automated wipe might fail as well. If the db/wal is located on a d
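
In CLI terms, the distinction looks roughly like this (OSD id 12 is a placeholder):

    ceph osd destroy 12 --yes-i-really-mean-it   # keep the ID for a replacement drive
    ceph osd purge 12 --yes-i-really-mean-it     # remove the OSD entirely
    # With cephadm, drain + zap + mark for replacement in one step:
    ceph orch osd rm 12 --replace --zap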

[ceph-users] Re: Is there any way to fine tune peering/pg relocation/rebalance?

2023-08-30 Thread Louis Koo
Maybe the default value is OK; I think setting it to 1 is too aggressive.
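
The knobs usually meant in this kind of tuning discussion are the backfill/recovery throttles; a sketch with example values, with the caveat that which option "1" refers to is not visible in the preview:

    # Classic throttles for rebalance aggressiveness; note that with the
    # mclock scheduler (Quincy+) these may be overridden.
    ceph config set osd osd_max_backfills 2
    ceph config set osd osd_recovery_max_active 2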