Hi, Konstantin,
Can I upgrade only the clients from Nautilus to Pacific, but still keep the Nautilus
version in the cluster? Just in order to avoid this librbd issue.
cheers,
Samuel
huxia...@horebdata.cn
From: Konstantin Shalygin
Date: 2021-12-22 07:47
To: J-P Methot
CC: ceph-users
Subject: [ceph-
Hi,
ceph-volume inventory (and thus the orchestrator) only considers a block
device free when there is literally nothing on it.
Would it make sense to add physical volumes from LVM here, too?
I see use cases like a large NVMe that should hold two OSDs, or a RAID1 of
two SSDs as the RocksDB device.
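For context, a rough sketch of how that could be laid out manually today (device
and VG/LV names below are just examples):

  # Two OSDs on a single NVMe:
  ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1
  # RAID1 of two SSDs carved into LVs and used as the RocksDB device:
  pvcreate /dev/md0
  vgcreate db_vg /dev/md0
  lvcreate -L 60G -n db_lv0 db_vg
  ceph-volume lvm prepare --data /dev/sdb --block.db db_vg/db_lv0

The inventory question stands, though: once the PVs/LVs exist, ceph-volume
inventory no longer reports the device as available.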
> I guess what caused the issue was high latencies on our “big” SSDs (7 TB
> drives), which got really high after the upgrade to Octopus. We split them
> into 4 OSDs some days ago and since then the high commit latencies on the
> OSDs and on BlueStore are gone.
Hmm, but this is sort of a workaround
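For anyone wanting to compare before and after such a split, a quick sketch of
how the latencies can be watched (osd.12 is just an example ID, and counter
names can vary by release):

  # Per-OSD commit/apply latency as reported by the mons (in ms):
  ceph osd perf
  # More detail from a single OSD's BlueStore counters:
  ceph daemon osd.12 perf dump | grep -A3 commit_lat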
>
> Persistent client side cache potentially may help in this case if you
> are ok with the trade-offs. It's been a while since I've seen any
> benchmarks with it so you may need to do some testing yourself.
I would be interested in seeing these test results also.
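In case someone wants to try it, a minimal sketch of the client-side
configuration, assuming the option names of the Pacific-era persistent
write-back cache (pwl_cache) plugin; please check the RBD documentation for
your release:

  # client-side ceph.conf
  [client]
      rbd_plugins = pwl_cache
      rbd_persistent_cache_mode = ssd        # or "rwl" with PMEM
      rbd_persistent_cache_path = /mnt/pwl   # fast local SSD/PMEM mount
      rbd_persistent_cache_size = 10G

A simple before/after comparison could then be run with something like
  rbd bench --io-type write --io-size 4K --io-total 1G <pool>/<image>
keeping in mind the trade-offs mentioned above.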
I mean definitely this!
Of course, if your client machines do not serve ceph-mon, ceph-mds, or ceph-osd
processes, just upgrade the ceph packages.
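For example, on a Debian/Ubuntu client that could look roughly like this
(repository line shown as an assumption; use the one matching your distribution):

  # point the client at the Pacific repository (release key setup omitted):
  echo "deb https://download.ceph.com/debian-pacific/ $(lsb_release -sc) main" \
      > /etc/apt/sources.list.d/ceph.list
  apt update && apt install --only-upgrade ceph-common librbd1 librados2
  # afterwards, verify what the client links against:
  rbd --version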
k
> On 22 Dec 2021, at 12:32, huxia...@horebdata.cn wrote:
>
> Can I upgrade only the clients from Nautilus to Pacific, but still keep the Nautilus
> version in
Hi,
> On 22 Dec 2021, at 13:10, Robert Sander wrote:
>
> ceph-volume inventory (and thus the orchestrator) only considers a block
> device free when there is literally nothing on it.
>
> Would it make sense to add physical volumes from LVM here, too?
>
> I see use cases like a large NVMe that
Kai,
Yes, it looks so. Thanks for the suggestion.
I am experimenting in an environment with an /etc/hosts file on each server,
without DNS.
The /etc/hosts file is correct and complete. I can resolve the hostnames
correctly, but only on the host server(s).
I was not aware of the fact that Docker containers do not use the host's /etc/hosts file.
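If that turns out to be the case, one possible workaround for containers started
manually (just a sketch; hostnames, addresses, and the image are placeholders)
would be to inject the entries at run time, since Docker manages the container's
own /etc/hosts:

  docker run --add-host ceph-node1:192.168.1.11 --add-host ceph-node2:192.168.1.12 <image>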
Where do I find information on the release timeline for quincy?
I learned a lesson some time ago with regard to building from source
and accidentally upgrading my cluster to the dev branch. Whoops.
Just wondering if there is a published timeline on the next major
release, so I can figure out my ga
On 12/22/21 4:23 AM, Marc wrote:
I guess what caused the issue was high latencies on our “big” SSDs (7 TB
drives), which got really high after the upgrade to Octopus. We split them
into 4 OSDs some days ago and since then the high commit latencies on the
OSDs and on BlueStore are gone.
Hmm, but
Am 22.12.21 um 16:39 schrieb J-P Methot:
> So, from what I understand from this, neither the latest Nautilus client nor
> the latest Octopus client has the fix? Only the latest Pacific?
Are you sure that this issue is reproducible in Nautilus?
I tried it with a Nautilus 14.2.22 client and it wo
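Either way, it may help to confirm which client releases are actually talking
to the cluster; a quick sketch:

  # which feature/release the connected clients report:
  ceph features
  # and which versions the daemons themselves run:
  ceph versions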
Hi Robert!
Quote: "Just my 2¢: Do not use systemd-timesyncd."
I've been using systemd-timesyncd on Arch Linux since 2015.
I built up 7 clusters and I had only 1 NTP failure on 1 cluster, and I
can't blame systemd-timesyncd for it.
The right way is to have NTP servers on the monitor nodes.
I was think
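For what it's worth, a minimal sketch of that setup as seen from a non-monitor
node (hostnames are placeholders; the monitors themselves would run a full NTP
server such as chrony, since systemd-timesyncd only acts as a client):

  # /etc/systemd/timesyncd.conf
  [Time]
  NTP=mon1 mon2 mon3

  systemctl restart systemd-timesyncd
  timedatectl timesync-status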
Joshua,
Quincy should be released in March 2022. You can find the release cycle and
standards at https://docs.ceph.com/en/latest/releases/general/
Norman
Best regards
On 12/22/21 9:37 PM, Joshua West wrote:
Where do I find information on the release timeline for quincy?
I learned a lesson some
Chad,
As the documentation notes, min_size means the "minimum number of replicas to
serve the request", so you can't read from a PG when the number of available
replicas falls below min_size.
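For reference, it can be checked and adjusted per pool (pool name is a placeholder):

  ceph osd pool get mypool min_size
  ceph osd pool set mypool min_size 2    # e.g. with size=3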
Norman
Best regards
On 12/17/21 10:59 PM, Chad William Seys wrote:
ill open an issue to h
Hi,
The "ceph status" output intermittently shows "0 slow ops". Could you tell
me how I should handle this problem and what "0 slow ops" means?
I investigated by referring to the following documents, but no luck.
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-osd/#debugging-slo
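A sketch of the kind of checks that page suggests (osd.3 is just an example ID):

  ceph health detail                      # which daemons report the slow ops
  ceph daemon osd.3 dump_ops_in_flight    # ops currently stuck on that OSD
  ceph daemon osd.3 dump_historic_ops     # recently completed slow ops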