[ceph-users] Re: Ceph pacific error when add new host

2024-11-11 Thread nguyenvandiep
Hi Tim, thank you for the suggestion. We solved our issue by rebooting the _admin node. Regards
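For anyone hitting a similar host-add failure, a minimal sketch of checks that are commonly run from the _admin node before retrying (hostname and IP are placeholders, not from the original thread):

    # verify cephadm can reach and validate the new host
    ceph cephadm check-host <newhost>
    # confirm the orchestrator back end is responding
    ceph orch status
    # retry adding the host once the checks pass
    ceph orch host add <newhost> <ip>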

[ceph-users] Ceph Reef 16 pgs not deep scrub and scrub

2024-11-11 Thread Saint Kid
Hello, I have exactly 16 PGs in this condition. Is there anything I can do? I have tried to initiate both deep and normal scrubbing, but they remain the same. HEALTH_WARN: 16 pgs not deep-scrubbed in time pg 8.14 not deep-scrubbed since 2024-09-27T19:42:47.463766+0700 pg 7.1b not deep-scrub
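A minimal sketch of how such PGs are usually inspected and re-queued, using one of the PG IDs from the health output above (whether the scrub actually starts depends on the cluster's scrub limits and load):

    # list the affected PGs and their last scrub times
    ceph health detail | grep 'not deep-scrubbed'
    # manually queue a deep scrub for one affected PG
    ceph pg deep-scrub 8.14
    # check whether the PG is actually scrubbing
    ceph pg 8.14 query | grep -i scrub
    # if scrubs never start, check how many concurrent scrubs each OSD allows
    ceph config get osd osd_max_scrubs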

[ceph-users] Re: multifs and snapshots

2024-11-11 Thread Dmitry Melekhov
On 11.11.2024 17:53, Toby Darling wrote: Did you make any progress with snapshots on multiple filesystems with separate pools? Hello! No, there were some plan changes - the old hardware is still in use :-(

[ceph-users] Re: Move block.db to new ssd

2024-11-11 Thread Alwin Antreich
Hi Roland, On Mon, Nov 11, 2024, 20:16 Roland Giesler wrote: > I have Ceph 17.2.6 on a Proxmox cluster and want to replace some SSDs > that are at end of life. I have some spinners that have their journals on > SSD. Each spinner has a 50GB SSD LVM partition and I want to move those > each to new c
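Before any move, a sketch of how the current DB device mapping is typically confirmed (the OSD ID is a placeholder):

    # show which LV/partition each OSD uses for block and block.db
    ceph-volume lvm list
    # or, per OSD, from the BlueStore metadata
    ceph osd metadata 12 | grep -E 'bluefs_db|devices'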

[ceph-users] Re: Cephadm Drive upgrade process

2024-11-11 Thread Anthony D'Atri
> 1. Pulled failed drive (after troubleshooting, of course) > > 2. Cephadm GUI - find OSD, purge OSD > 3. Wait for rebalance > 4. Insert new drive (let the cluster rebalance after it automatically adds > the drive as an OSD) (yes, we have auto-add on in the clusters) > I imagine wi
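For reference, a hedged sketch of the same replacement via the orchestrator CLI, which keeps the OSD ID so the new drive slots back in without a second full rebalance (OSD ID, host, and device names are placeholders):

    # drain the OSD and mark it "destroyed" instead of purging it
    ceph orch osd rm 42 --replace
    # watch the drain/removal queue
    ceph orch osd rm status
    # after swapping the physical drive, zap it so the service spec can redeploy onto it
    ceph orch device zap <host> /dev/sdX --force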

[ceph-users] Cephadm Drive upgrade process

2024-11-11 Thread brentk
We are contemplating an upgrade of 4TB HDD drives to 20TB HDD drives (cluster info below, size 3), but as part of that discussion we were trying to see if there was a more efficient way to do so. Our current process for failed drives is as follows: 1. Pulled failed drive (after trouble
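Whatever the flow, a small sketch of the checks commonly run before pulling a drive and again before moving on to the next one (the OSD ID is a placeholder):

    # is it safe to stop or destroy this OSD without reducing data availability?
    ceph osd ok-to-stop 42
    ceph osd safe-to-destroy 42
    # confirm recovery/backfill has finished before touching the next drive
    ceph -s | grep -E 'recovery|backfill'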

[ceph-users] Re: quincy v17.2.8 QE Validation status

2024-11-11 Thread Yuri Weinstein
I will start the next steps: building pre-release packages, release notes, etc. Josh, Guillaume, I marked upgrades and ceph-volume "approved" as we discussed before. Please speak up if you have any comments/additions. On Mon, Nov 11, 2024 at 12:33 PM Laura Flores wrote: > @Yuri Weinstei

[ceph-users] Re: quincy v17.2.8 QE Validation status

2024-11-11 Thread Laura Flores
@Yuri Weinstein *rados* and *p2p* approved. We determined that https://tracker.ceph.com/issues/68882 is not a blocker. https://tracker.ceph.com/issues/68897 came up in the p2p tests, but it is a test issue. Thanks, Laura On Mon, Nov 11, 2024 at 11:37 AM Guillaume ABRIOUX wrote: > Hi Yuri,

[ceph-users] Move block.db to new ssd

2024-11-11 Thread Roland Giesler
I have Ceph 17.2.6 on a Proxmox cluster and want to replace some SSDs that are at end of life. I have some spinners that have their journals on SSD. Each spinner has a 50GB SSD LVM partition and I want to move each of those to a new corresponding partition. The new 4TB SSDs I have split into volume
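For the move itself, a minimal sketch using ceph-volume's migrate subcommand, assuming the new DB LV already exists and the OSD is stopped first (OSD ID, FSID, and VG/LV names are placeholders, not from the original thread):

    # stop the OSD before touching its BlueFS devices
    systemctl stop ceph-osd@12
    # copy block.db from the old SSD partition to the new LV and update the OSD's links
    ceph-volume lvm migrate --osd-id 12 --osd-fsid <osd-fsid> --from db --target new-ssd-vg/db-12
    systemctl start ceph-osd@12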

[ceph-users] Re: quincy v17.2.8 QE Validation status

2024-11-11 Thread Guillaume ABRIOUX
Hi Yuri, ceph-volume approved - https://pulpito.ceph.com/gabrioux-2024-10-28_15:20:58-orch:cephadm-quincy-release-distro-default-smithi/ Thanks, -- Guillaume Abrioux Software Engineer From: Yuri Weinstein Sent: Friday, 1 November 2024 16:20 To: dev ; ceph-us

[ceph-users] Ceph Steering Committee 2024-11-11

2024-11-11 Thread Gregory Farnum
We had a short meeting today, with no pre-set topics. The main topic that came up is a bug detected on upgrades to squid when using the balancer in large clusters: https://tracker.ceph.com/issues/68657. The RADOS team would like to do a squid point release once our quincy release is out the door (i
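For operators who want to gauge their exposure to that issue before upgrading, a small hedged sketch (the tracker above has the authoritative guidance):

    # see whether the balancer is active and in which mode
    ceph balancer status
    # it can be paused for the duration of the upgrade if desired
    ceph balancer off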

[ceph-users] Re: Ceph pacific error when add new host

2024-11-11 Thread Tim Holloway
I have seen instances where the crash daemon is running under a container (using /var/lib/ceph/{fsid}/crash), but the daemon is trying to use the legacy location (/var/lib/ceph/crash). This can result in file access violations or "file not found" issues, which should show up in the system logs (jo
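A quick sketch of how this mismatch is typically spotted (fsid and hostname are placeholders; the unit name differs between cephadm-managed and legacy deployments):

    # cephadm-managed crash daemon logs
    journalctl -u ceph-<fsid>@crash.<hostname>.service
    # legacy crash daemon logs
    journalctl -u ceph-crash.service
    # compare the two locations the daemon might be using
    ls -ld /var/lib/ceph/<fsid>/crash /var/lib/ceph/crash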