Hello,
The Rook CI must be failing because a `ceph-bluestore-tool` backport [1] is
missing.
This backport was merged ~6 hours ago.
[1] https://github.com/ceph/ceph/pull/60543
Regards,
--
Guillaume Abrioux
Software Engineer
From: Travis Nielsen
Sent:
Hello Yuri,
ceph-volume approved
Regards,
--
Guillaume Abrioux
Software Engineer
From: Yuri Weinstein
Date: Friday, 27 December 2024 at 17:31
To: dev , ceph-users
Subject: [EXTERNAL] [ceph-users] Re: squid 19.2.1 RC QE validation status
Hello and Happy Holidays all!
We have merged several
Hi Yuri,
ceph-volume approved -
https://pulpito.ceph.com/gabrioux-2024-10-28_15:20:58-orch:cephadm-quincy-release-distro-default-smithi/
Thanks,
--
Guillaume Abrioux
Software Engineer
From: Yuri Weinstein
Sent: Friday, 1 November 2024 16:20
To: dev ; ceph
Hi Boris,
Unfortunately, there’s currently nothing in ceph-volume to detect
self-encrypted devices.
That being said, while I can’t give you a precise timeframe, it is something we
are looking into.
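In the meantime, for anyone who needs a stopgap, here is a rough sketch (not
ceph-volume code) of one way to probe a device for OPAL self-encryption from
Python, assuming the Drive-Trust-Alliance `sedutil-cli` tool is installed; the
helper name and the exact output check are assumptions on my side:

import subprocess

def is_self_encrypting(device: str) -> bool:
    # Hypothetical helper, not part of ceph-volume. Shells out to
    # `sedutil-cli --isValidSED`, which reports whether the device is a
    # valid self-encrypting drive (e.g. "/dev/nvme0 SED -2-" for OPAL 2).
    result = subprocess.run(
        ['sedutil-cli', '--isValidSED', device],
        capture_output=True, text=True,
    )
    return result.returncode == 0 and ' SED ' in result.stdout

print(is_self_encrypting('/dev/nvme0'))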
Regards,
--
Guillaume Abrioux
Software Engineer
From: Boris
Date: Wednesday, 28 August 2024
Hi Yuri,
ceph-volume approved https://jenkins.ceph.com/job/ceph-volume-test/601/
Regards,
--
Guillaume Abrioux
Software Engineer
From: Yuri Weinstein
Date: Monday, 5 August 2024 at 22:33
To: dev , ceph-users
Subject: [EXTERNAL] [ceph-users] squid 19.1.1 RC QE validation status
Details of
Hi Yuri,
ceph-volume approved -
https://pulpito.ceph.com/gabrioux-2024-07-03_14:50:01-orch:cephadm-squid-release-distro-default-smithi/
(the few failures are known issues)
Thanks!
--
Guillaume Abrioux
Software Engineer
From: Yuri Weinstein
Sent: 01 July 2024
Hi Vlad,
To be honest, this playbook hasn’t received any engineering attention in a
while. It's most likely broken.
Which version of this playbook are you using?
Regards,
--
Guillaume Abrioux
Software Engineer
From: Frédéric Nass
Date: Tuesday, 14 May 2024 at 10:12
To: vladimir fr
Hi Yuri,
The ceph-volume failure is a valid bug.
I'm investigating the root cause and will submit a patch.
Thanks!
--
Guillaume Abrioux
Software Engineer
From: Yuri Weinstein
Date: Monday, 29 January 2024 at 22:38
To: dev , ceph-users
Subject: [EXTERNAL] [ceph-users] pacific 16.2.15 QE
Hi Yuri,
Any chance we can include [1]? This patch fixes mpath device deployments; the
PR missed a merge and was only backported onto reef this morning.
Thanks,
[1]
https://github.com/ceph/ceph/pull/53539/commits/1e7223281fa044c9653633e305c0b344e4c9b3a4
--
Guillaume Abrioux
Software Engineer
Hi Yuri,
Backport PR [2] for reef has been merged.
Thanks,
[2] https://github.com/ceph/ceph/pull/54514/files
--
Guillaume Abrioux
Software Engineer
From: Guillaume Abrioux
Date: Wednesday, 15 November 2023 at 21:02
To: Yuri Weinstein , Nizamudeen A ,
Guillaume Abrioux , Travis Nielsen
Cc
r give
more details).
Another patch [2] is needed to fix this regression.
Let me know if more details are needed.
Thanks,
[1]
https://github.com/ceph/ceph/pull/54429/commits/ee26074a5e7e90b4026659bf3adb1bc973595e91
[2] https://github.com/ceph/ceph/pull/54514/files
--
Guillaume Abrioux
Hi Yuri,
ceph-volume approved https://jenkins.ceph.com/job/ceph-volume-test/566/
Regards,
--
Guillaume Abrioux
Software Engineer
From: Yuri Weinstein
Date: Monday, 16 October 2023 at 20:53
To: dev , ceph-users
Subject: [EXTERNAL] [ceph-users] quincy v17.2.7 QE Validation status
Details of
ceph-volume approved https://jenkins.ceph.com/job/ceph-volume-test/553/
On Wed, 3 May 2023 at 22:43, Guillaume Abrioux wrote:
> The failure seen in ceph-volume tests isn't related.
> That being said, it needs to be fixed to have a better view of the current
> status.
>
> O
--
*Guillaume Abrioux* Senior Software Engineer
> ├─ceph--3a336b8e--ed39--4532--a199--ac6a3730840b-osd--wal--5d845dba--8b55--4984--890b--547fbdaff10c 253:12 0 331.2G 0 lvm
>
>
> So it looks like it is using that LVM group right there. Yet, the
> dashboard doesn't show an NVMe. (please compare screenshot osd_232.png and
nt NVMe AGN MU AIC 6.4TB
>   filter_logic: AND
>   objectstore: bluestore
>   wal_devices:
>     model: Dell Ent NVMe AGN MU AIC 6.4TB
> status:
>   created: '2022-08-29T16:02:22.822027Z'
>   last_refresh: '2023-02-01T09:03:22.853860Z'
>   running: 306
>
On Tue, 31 Jan 2023 at 22:31, mailing-lists wrote:
> I am not sure. I didn't find it... It should be somewhere, right? I used
> the dashboard to create the osd service.
>
What does a `cephadm shell -- ceph orch ls osd --format yaml` say?
--
*Guillaume Abrioux* Senior Software Engineer
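As an aside on the lsblk output quoted above: device-mapper escapes every
literal dash inside a VG or LV name by doubling it, and a single dash
separates the VG from the LV, so those long `ceph--...` names decode cleanly.
A small generic helper (mine, not ceph code) to illustrate:

import re

def split_dm_name(dm_name: str) -> tuple[str, str]:
    # The single dash not flanked by other dashes separates VG from LV;
    # doubled dashes are literal dashes inside the names.
    vg, lv = re.split(r'(?<!-)-(?!-)', dm_name, maxsplit=1)
    return vg.replace('--', '-'), lv.replace('--', '-')

vg, lv = split_dm_name(
    'ceph--3a336b8e--ed39--4532--a199--ac6a3730840b'
    '-osd--wal--5d845dba--8b55--4984--890b--547fbdaff10c'
)
print(vg)  # ceph-3a336b8e-ed39-4532-a199-ac6a3730840b
print(lv)  # osd-wal-5d845dba-8b55-4984-890b-547fbdaff10c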
v6.0.10-stable-6.0-pacific-centos-stream8 (pacific 16.2.11) is now
available on quay.io
Thanks,
On Tue, 31 Jan 2023 at 13:43, Guillaume Abrioux wrote:
> On Tue, 31 Jan 2023 at 11:14, Jonas Nemeikšis
> wrote:
>
>> Hello Guillaume,
>>
>> A little bit sad news abou
not for now:)
>
> I would like to update and test Pacific's latest version.
>
Let me check if I can get these tags pushed quickly; I'll update this
thread.
Thanks,
--
*Guillaume Abrioux* Senior Software Engineer
--
*Guillaume Abrioux* Senior Software Engineer
mand that 'provisions' the OSDs, two podman containers spawned, one for
> each OSD, so it seems to have worked. I'm a bit confused but will be
> researching more into this.
> I may have messed up my dev env really badly initially, so maybe that's why
> it didn't previously work.
>
ractive community where I can find some people usually online
> and talk to them in real time, like Discord/Slack etc.? I tried IRC, but most are
> afk.
>
> Thanks
>
>     mgr: baloo-2(active, since 5m), standbys: baloo-3, baloo-1
>     mds: 1/1 daemons up, 1 standby
>     osd: 24 osds: 24 up (since 4m), 24 in (since 5m)
>     rgw: 1 daemon active (1 hosts, 1 zones)
>
>   data:
>     volumes: 1/1 healthy
>     pools:   7 pools, 177 pgs
>     objects: 213 objects, 584 K
- Neha, Laura
>> upgrade/pacific-p2p - Neha - Neha, Laura
>> powercycle - Brad
>> ceph-volume - Guillaume, Adam K
>>
>> Thx
>> YuriW
>>
ackages/ceph_volume/api/lvm.py", line 797, in get_all_devices_vgs
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr return [VolumeGroup(**vg) for vg in vgs]
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 797, in <listcomp>
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr return [VolumeGroup(**vg) for vg in vgs]
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr File "/usr/lib/python3.6/site-packages/ceph_volume/api/lvm.py", line 517, in __init__
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr raise ValueError('VolumeGroup must have a non-empty name')
> 2022-10-24 10:25:20,307 7fd9b0d92b80 INFO /bin/podman: stderr ValueError: VolumeGroup must have a non-empty name
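To make the failure above easier to read: the list comprehension at
lvm.py:797 builds a VolumeGroup per entry parsed from the `vgs` report, and
the constructor rejects any entry whose vg_name is empty. A paraphrased
sketch (the class and error message are from the traceback; the sample input
is a hypothetical illustration):

class VolumeGroup:
    def __init__(self, **kw):
        self.name = kw.get('vg_name', '')
        if not self.name:
            # The exact error raised in the log above.
            raise ValueError('VolumeGroup must have a non-empty name')

# A stray empty entry in the parsed `vgs` output (e.g. from a blank
# line) is enough to make the comprehension blow up:
vgs = [{'vg_name': 'ceph-block-0'}, {'vg_name': ''}]
volume_groups = [VolumeGroup(**vg) for vg in vgs]  # raises ValueError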
--
*Guillaume Abrioux* Senior Software Engineer
urge-cluster.yml
> to purge the cluster.
>
> Thanks,
> Zhongzhou Cai
--
*Guillaume Abrioux* Senior Software Engineer
to address them.
> >
> > Josh, Neha - LRC upgrade pending major suites approvals.
> > RC release - pending major suites approvals.
> >
> > Thx
> > YuriW
> >
bd approved.
>
> > krbd - missing packages, Adam Kr is looking into it
>
> It seems like a transient issue to me; I would just reschedule.
>
> Thanks,
>
> Ilya
>
>
--
*Guillaume Abrioux* Senior Software Engineer
to put in all.yml to get the octopus
> repository for ubuntu 20.04?
>
> Cheers
>
> /Simon
--
*Guillaume Abrioux* Senior Software Engineer
know why the
> playbook would have a hard time with this step?
>
> Thanks in advance!
be gold to have containers, I am not equally enthusiastic about
> > it for ceph.
> >
>
> Yeah, so I think it's good to discuss pros and cons and see what problem
> it solves, and what extra problems it creates.
>
> Gr. Stefan
--
*Guillaume Abrioux* Senior Software Engineer
Hi Jan,
I might be wrong, but I don't think download.ceph.com provides RPMs that can
be consumed on CentOS 8 at the moment.
Internally, for testing ceph@master on CentOS 8, we use RPMs hosted in
chacra.
Dimitri, who has worked a bit on this topic, might have more input.
Thanks,
*Guillaume Abrioux* Senior Software Engineer