I just discovered that rook is tracking this here:
https://github.com/rook/rook/issues/13136
On Tue, 7 Nov 2023 at 18:09, Matthew Booth wrote:
> On Tue, 7 Nov 2023 at 16:26, Matthew Booth wrote:
>
>> FYI I left rook as is and reverted to ceph 17.2.6 and the issue is
>> resolved.
On Tue, 7 Nov 2023 at 16:26, Matthew Booth wrote:
> FYI I left rook as is and reverted to ceph 17.2.6 and the issue is
> resolved.
>
> The code change was added by
> commit 2e52c029bc2b052bb96f4731c6bb00e30ed209be:
> ceph-volume: fix broken workaround for atari partitions
that regression.
Fixes: https://tracker.ceph.com/issues/62001
Signed-off-by: Guillaume Abrioux
(cherry picked from commit b3fd5b513176fb9ba1e6e0595ded4b41d401c68e)
It feels like a regression to me.
Matt
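For anyone else hitting this under rook, reverting is just a CephCluster spec change; a minimal sketch, assuming the usual rook-ceph cluster name and namespace:

  # pin the ceph container image back to 17.2.6; rook then rolls the daemons onto it
  kubectl -n rook-ceph patch cephcluster rook-ceph --type merge \
    -p '{"spec":{"cephVersion":{"image":"quay.io/ceph/ceph:v17.2.6"}}}'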
On Tue, 7 Nov 2023 at 16:13, Matthew Booth wrote:
> Firstly I'm rolli
    self.list(args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/list.py", line 122, in list
    report = self.generate(args.device)
  File "
king I had enough space.
Thanks!
Matt
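For context, the traceback above is from ceph-volume's raw device listing (devices/raw/list.py), which rook's OSD prepare jobs rely on. A sketch of reproducing it by hand from inside an OSD or prepare container; /dev/sdb is only an example device:

  # list raw-mode OSD metadata on one device (the code path failing above)
  ceph-volume raw list /dev/sdb
  # or scan all devices, as the prepare job does
  ceph-volume raw list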
>
> Regards,
> Eugen
>
> [1] https://docs.ceph.com/en/reef/cephadm/services/osd/#activate-existing-osds
>
> Quoting Matthew Booth:
>
> > I have a 3 node ceph cluster in my home lab. One of the pools spans 3
> > hdds,
so I will most likely rebuild
it. I'm running rook, and I will most likely delete the old node and
create a new one with the same name. AFAIK, the OSDs are fine. When
rook rediscovers the OSDs, will it add them back with data intact? If
not, is there any way I can make it so it will?
Thanks!
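A rough way to sanity-check the old disks before handing the rebuilt node back to rook; the listing only reads OSD metadata, it does not change anything:

  # confirm the OSD id/fsid metadata is still present on the disks
  ceph-volume lvm list
  # on a cephadm-managed cluster, the doc in [1] boils down to something like:
  #   ceph cephadm osd activate <host>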
--
On Thu, 6 Jul 2023 at 12:54, Mark Nelson wrote:
>
>
> On 7/6/23 06:02, Matthew Booth wrote:
> > On Wed, 5 Jul 2023 at 15:18, Mark Nelson wrote:
> >> I'm sort of amazed that it gave you symbols without the debuginfo
> >> packages installed. I'
> dropping the number of tp_pwl
> threads from 4 to 1 and see if that changes anything.
Will do. Any idea how to do that? I don't see an obvious rbd config option.
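In the meantime, a quick way to see how many tp_pwl threads the client is actually running; a sketch that assumes the librbd client is a qemu process (adjust the pgrep pattern to whatever opens the image):

  # thread names show up in the COMM column; count the tp_pwl workers
  pid=$(pgrep -f qemu-system | head -n1)
  ps -T -p "$pid" -o spid,comm | grep tp_pwl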
Thanks for looking into this,
Matt
--
Matthew Booth
On Tue, 4 Jul 2023 at 14:24, Matthew Booth wrote:
> On Tue, 4 Jul 2023 at 10:45, Yin, Congmin wrote:
> >
> > Hi , Matthew
> >
> > I see "rbd with pwl cache: 5210112 ns". This latency is beyond my
> > expectations and I believe it is unlikely to
daemon /mnt/pmem/cache.asok perf dump
I assume these are to be run on the client with the admin socket
hosted on the pwl device? Is anything supposed to be connected to that
socket?
Incidentally, note that I'm using SSD not pmem.
Thanks,
Matt
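For reference, querying any Ceph admin socket looks the same; a sketch using the path from the suggestion above, and assuming the client process was actually started with an admin_socket option pointing there:

  # dump every perf counter the client exposes (including any pwl cache sections)
  ceph --admin-daemon /mnt/pmem/cache.asok perf dump
  # or list the available counters first
  ceph --admin-daemon /mnt/pmem/cache.asok perf schema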
On Mon, 3 Jul 2023 at 18:33, Ilya Dryomov wrote:
>
> On Mon, Jul 3, 2023 at 6:58 PM Mark Nelson wrote:
> >
> >
> > On 7/3/23 04:53, Matthew Booth wrote:
> > > On Thu, 29 Jun 2023 at 14:11, Mark Nelson wrote:
> > >>>>> This contain
>
>
> -Original Message-
> From: Matthew Booth
> Sent: Thursday, June 29, 2023 7:23 PM
> To: Ilya Dryomov
> Cc: Giulio Fidente ; Yin, Congmin
> ; Tang, Guifeng ; Vikhyat
> Umrao ; Jdurgin ; John Fulton
> ; Francesco Pantano ;
> ceph-users@ceph.io
> Subj
rocess and start out with something
> like 100 samples (more are better but take longer). You can run it like:
>
>
> ./unwindpmp -n 100 -p
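Attaching it to the librbd client process looks roughly like this; the qemu pgrep pattern, root requirement and output filename are only illustrative:

  # 100 samples against the client process hosting the RBD image
  pid=$(pgrep -f qemu-system | head -n1)
  sudo ./unwindpmp -n 100 -p "$pid" > unwindpmp-pwl-on.txt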
I've included the output in this gist:
https://gist.github.com/mdbooth/2d68b7e081a37e27b78fe396d771427d
That gist contains 4 runs: 2 with PWL enabled and 2 without.
>> rbd_persistent_cache_path    /var/lib/libvirt/images/pwl    pool
>> rbd_persistent_cache_size    1073741824                     config
>> rbd_plugins                  pwl_cache                      pool
>>
>> # rbd status libvirt-pool/pwl-test
>> Watchers:
>>         watcher=10.1.240.27:0/1406459716 client.14475 cookie=140282423200720
>> Persistent cache state:
>>         host: dell-r640-050
>>         path: /var/lib/libvirt/images/pwl/rbd-pwl.libvirt-pool.37e947fd216b.pool
>>         size: 1 GiB
>>         mode: ssd
>>         stats_timestamp: Mon Jun 26 11:29:21 2023
>>         present: true    empty: false    clean: true
>>         allocated: 180 MiB
>>         cached: 135 MiB
>>         dirty: 0 B
>>         free: 844 MiB
>>         hits_full: 1 / 0%
>>         hits_partial: 3 / 0%
>>         misses: 21952
>>         hit_bytes: 6 KiB / 0%
>>         miss_bytes: 349 MiB
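For anyone reproducing this setup, the settings above map onto commands along these lines; a sketch only, with the per-pool vs. cluster-config split inferred from the pool/config source column:

  # pool-level image options (source "pool")
  rbd config pool set libvirt-pool rbd_plugins pwl_cache
  rbd config pool set libvirt-pool rbd_persistent_cache_path /var/lib/libvirt/images/pwl
  # cluster configuration (source "config"); mode ssd matches the status output
  ceph config set client rbd_persistent_cache_size 1073741824
  ceph config set client rbd_persistent_cache_mode ssd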
--
Matthew Booth
ll about write latency of really small writes, not bandwidth.
Matt
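For anyone wanting to reproduce that kind of measurement, a latency-focused run is typically a single job at queue depth 1 doing small sync writes; a sketch with illustrative parameters, not the exact job used here:

  # inside the guest, against the RBD-backed disk (/dev/vdb is an example);
  # completion latency, not bandwidth, is the number to watch
  fio --name=pwl-latency --filename=/dev/vdb --rw=randwrite --bs=4k \
      --ioengine=libaio --iodepth=1 --numjobs=1 --direct=1 --sync=1 \
      --time_based --runtime=60 --lat_percentiles=1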
>
> Josh
>
> On Tue, Jun 27, 2023 at 9:04 AM Matthew Booth wrote:
>>
>> ** TL;DR
>>
>> In testing, the write latency performance of a PWL-cache backed RBD
>> disk was 2 orders of magnitude worse
--
Matthew Booth
___
ceph-users mailing li
e send an email to ceph-users-le...@ceph.io
>
--
Matthew Booth
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
4-0.fc37 -> 2:4.17.4-2.fc37
selinux-policy 37.16-1.fc37 -> 37.17-1.fc37
selinux-policy-targeted 37.16-1.fc37 -> 37.17-1.fc37
tpm2-tss 3.2.0-3.fc37 -> 3.2.1-1.fc37
Removed:
cracklib-dicts-2.9.7-30.fc37.x86_64
--
Matthew Booth