Hello Jonas,
As far as I remember, these tags were pushed mostly for ceph-ansible / OSP.
By the way, the plan is to drop the ceph/daemon image. See corresponding
PRs in both ceph-ansible and ceph-container repositories [1] [2]
What stable tag are you looking for? I can trigger new builds if it ca
OK, the OSD is filled again: in and up, but it is not using the NVMe
WAL/DB anymore.
And it looks like the LVM volume group of the old OSD is still on the
NVMe drive. I suspect this because the two NVMe drives still have 9 LVM
volume groups each: 18 groups, but only 17 OSDs are using the NVMe (shown in
Hello Guillaume,
A bit of sad news about the drop. :/
We do have future plans to migrate to cephadm, but not right now. :)
I would like to update and test Pacific's latest version.
Thanks!
On Tue, Jan 31, 2023 at 11:23 AM Guillaume Abrioux
wrote:
> Hello Jonas,
>
> As far as I remember, these
What does your OSD service specification look like? Did your db/wal device
show as having free space prior to the OSD creation?
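(For reference, not the poster's actual spec: a minimal drive-group
style OSD spec with a separate DB device might look like the sketch
below; the service_id and the rotational-based device selection are
assumptions.)

cat > osd_spec.yml <<'EOF'
service_type: osd
service_id: osd_with_nvme_db   # hypothetical name
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF

ceph orch apply -i osd_spec.yml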
On Tue, Jan 31, 2023, at 04:01, mailing-lists wrote:
> OK, the OSD is filled again: in and up, but it is not using the NVMe
> WAL/DB anymore.
>
> And it looks like the
On Tue, 31 Jan 2023 at 11:14, Jonas Nemeikšis wrote:
> Hello Guillaume,
>
> A bit of sad news about the drop. :/
>
Why? That shouldn't change much in the end. You will be able to use
ceph/ceph:vXX images instead.
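As an illustration (a sketch; the tag is only an example, and the
ceph-ansible variable names are the usual ones, worth double-checking
for your version):

# pull the regular ceph image instead of ceph/daemon
podman pull quay.io/ceph/ceph:v16.2.11

# ceph-ansible group_vars equivalent:
# ceph_docker_registry: quay.io
# ceph_docker_image: ceph/ceph
# ceph_docker_image_tag: v16.2.11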
> We do have future plans to migrate to cephadm, but not right now. :)
>
> I would l
Please excuse the request: I don't have the previous email in this
thread and the archives seem to be currently unavailable.
Do you have a link to the issue which was resolved? I am wondering if
it might be related to the recent issue I discovered on CoreOS 37.
Thanks,
Matt
On Mon, 30 Jan 2023
v6.0.10-stable-6.0-pacific-centos-stream8 (pacific 16.2.11) is now
available on quay.io
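Assuming the quay.io/ceph/daemon repository these stable tags are
published under, pulling it would look like:

podman pull quay.io/ceph/daemon:v6.0.10-stable-6.0-pacific-centos-stream8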
Thanks,
On Tue, 31 Jan 2023 at 13:43, Guillaume Abrioux wrote:
> On Tue, 31 Jan 2023 at 11:14, Jonas Nemeikšis
> wrote:
>
>> Hello Guillaume,
>>
>> A bit of sad news about the drop. :/
>>
>
> Why? That shou
Did your db/wal device show as having free space prior to the OSD creation?
Yes.
root@ceph-a1-06:~# pvs
  PV           VG                                        Fmt  Attr PSize PFree
  /dev/nvme0n1 ceph-3a336b8e-ed39-4532-a199-ac6a3730840b lvm2 a--  5.82t 2.91t
  /dev/nvme1n1 ceph-b381
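For comparison, the orchestrator's own view of the devices can be
checked with (a sketch):

# devices cephadm considers, and whether they are still available
cephadm shell -- ceph orch device ls

# per-device details in raw form
cephadm shell -- ceph orch device ls --format json-pretty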
Cool, thank you!
On Tue, Jan 31, 2023 at 4:34 PM Guillaume Abrioux
wrote:
> v6.0.10-stable-6.0-pacific-centos-stream8 (pacific 16.2.11) is now
> available on quay.io
>
> Thanks,
>
> On Tue, 31 Jan 2023 at 13:43, Guillaume Abrioux
> wrote:
>
>> On Tue, 31 Jan 2023 at 11:14, Jonas Nemeikšis
>> w
On Tue, 31 Jan 2023 at 22:31, mailing-lists wrote:
> I am not sure. I didn't find it... It should be somewhere, right? I used
> the dashboard to create the osd service.
>
What does `cephadm shell -- ceph orch ls osd --format yaml` say?
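If useful, the applied spec can also be dumped in a re-applyable form
(a sketch):

cephadm shell -- ceph orch ls osd --export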
--
Guillaume Abrioux
Senior Software Engineer
Hi,
How can I collect the logs of the RBD client?
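Not knowing the exact setup: a common approach is to enable client-side
logging in ceph.conf on the client host (a sketch; the log path and
debug levels are examples):

# append to /etc/ceph/ceph.conf on the RBD client host
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
log file = /var/log/ceph/$name.$pid.log
debug rbd = 20
debug rados = 20
EOF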
And here's the recording in case you missed it:
https://www.youtube.com/playlist?list=PLrBUGiINAakM3d4bw6Rb7EZUcLd98iaWG
On Thu, Jan 26, 2023 at 6:15 AM Kevin Hrpcek
wrote:
> Hey all,
>
> We will be having a Ceph science/research/big cluster call on Tuesday
> January 31st. If anyone wants to di
Hi Ceph Developers and Users,
Various upstream developers and I are working on adding labels to perf
counters (https://github.com/ceph/ceph/pull/48657).
We would like to understand the ramifications for users of changing the
format of the JSON dumped by the `perf dump` command in the Reef release.
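For context, the command whose JSON output would change is typically
invoked like this today (a sketch; osd.0 is a placeholder):

# via the daemon's admin socket
ceph daemon osd.0 perf dump

# or pointing at the socket directly
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump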