Hi,
I have the same situation with some OSDs on Octopus 15.2.5 (Ubuntu 20.04).
However, I have no problem with the MGR. Any clue about this?
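For what it's worth, the sort of checks I am planning to run are along these
lines (the OSD id and the time window are just examples on my side):

  # look for the auth/clock messages on an affected OSD host
  journalctl -u ceph-osd@12 --since "-1h" | grep -i _check_auth
  # confirm the clocks are actually in sync
  timedatectl status
  ceph time-sync-status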
Best regards,
Date: Tue, 9 Jun 2020 23:47:24 +0200
> From: Wido den Hollander
> Subject: [ceph-users] Octopus OSDs dropping out of cluster:
> _check_auth_
Hi,
Suddenly we have a recovery_unfound situation. I find that the PG acting set is
missing some OSDs which are up. Why can't OSDs 3 and 71 in the following PG
query result be members of the PG acting set? We currently run v15.2.8. How can
we recover from this situation?
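For reference, I am inspecting the PG with commands along these lines (the PG
id 6.1f below is only a placeholder for the real one); the pg query output
starts as follows:

  ceph health detail | grep -i unfound   # which PGs report unfound objects
  ceph pg 6.1f query                     # peering details and acting/up sets
  ceph pg 6.1f list_unfound              # the objects the PG cannot find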
{
"snap_trimq": "[]",
"snap_trimq
12:29 AM Lazuardi Nasution wrote:
> Hi,
>
> Suddenly we have a recovery_unfound situation. I find that PG acting set
> is missing some OSDs which are up. Why can't OSD 3 and 71 on following PG
> query result be members of PG acting set? Currently, we use v15.2.8. How to
Best regards,
On Fri, May 7, 2021 at 10:17 PM 胡玮文 wrote:
> On 2021/5/7 at 6:46 PM, Lazuardi Nasution wrote:
>
> Hi,
>
> After recreating some of the related OSDs (3, 71 and 237), the acting set is
> now normal, but the PG is incomplete and there are slow ops on the primary OSD
> (3). I have tried
Hi,
I have read some benchmarks which recommend using 4 OSDs per NVMe drive. Until
now, I have been using 4 NVMe namespaces per drive to do that. If I use SPDK,
do I still need to follow the 4-OSDs-per-NVMe-drive approach? Is there any
benchmark relating SPDK to the number of OSDs per NVMe drive?
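For context, this is roughly how I carve the namespaces today (sizes,
controller id and device names are placeholders, and each step is repeated four
times per drive):

  nvme create-ns /dev/nvme0 --nsze=<blocks> --ncap=<blocks> --flbas=0
  nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=<cntlid>
  nvme list                                   # should now show nvme0n1 .. nvme0n4
  ceph-volume lvm create --data /dev/nvme0n1  # one OSD per namespace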
Best regards,
> more parallelism. The read path can be a bit more of a wildcard but
> sometimes can also benefit from extra parallelism. Still, it's only
> really worth it if you've got the CPU to back it up.
>
>
> Mark
>
>
> On 2/5/22 6:55 AM, Lazuardi Nasution wrote:
Hi,
Is there any EC plugin benchmark with current Intel/AMD CPUs? It seems there
are new instructions which may accelerate EC. Let's say we want to benchmark
the plugins on Intel 6200 or AMD 7002 series CPUs. I hope the results are
better than what was benchmarked some years ago.
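The kind of quick comparison I have in mind is along these lines (profile and
pool names are just examples, on a test cluster running on the new CPUs):

  ceph osd erasure-code-profile set ec-isa plugin=isa k=4 m=2 crush-failure-domain=host
  ceph osd erasure-code-profile set ec-jer plugin=jerasure technique=reed_sol_van k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool-isa 64 64 erasure ec-isa
  ceph osd pool create ecpool-jer 64 64 erasure ec-jer
  rados bench -p ecpool-isa 60 write -t 16
  rados bench -p ecpool-jer 60 write -t 16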
Best regards,
but I cannot find it for the Ceph implementation.
Best regards,
On Fri, May 15, 2020 at 9:03 PM Marc Roos wrote:
>
> How many % of the latency is even CPU related?
>
>
>
> -Original Message-
> From: Lazuardi Nasution [mailto:mrxlazuar...@gmail.com]
> Sent: 15 May 202
Hi Konstantin,
I hope you or somebody else still follows this old thread.
Can this EC data pool be configured per pool rather than per client? If we
follow https://docs.ceph.com/docs/master/rbd/rbd-openstack/ we can see that the
cinder client will access the vms and volumes pools, both with read and write
permission.
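What I am hoping is possible is something along these lines, i.e. pointing an
RBD pool at an EC data pool once, per pool (pool and profile names are
placeholders on my side):

  ceph osd pool create volumes-data 128 128 erasure my-ec-profile
  ceph osd pool set volumes-data allow_ec_overwrites true
  rbd config pool set volumes rbd_default_data_pool volumes-data

so that, if I understand it correctly, every image created in the volumes pool
afterwards gets its data objects in volumes-data without a per-client
--data-pool option.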
Hi Max,
Would you mind sharing some config examples? What happens if we create an
instance which boots from a newly created or an existing volume?
Best regards,
On Fri, Aug 28, 2020 at 5:27 PM Max Krasilnikov
wrote:
> Hello!
>
> Fri, Aug 28, 2020 at 04:05:55PM +0700, mrxlazuardin wrote:
>
> > Hi
Hi Max,
I see, that is very helpful and inspiring, thank you for that. I assume that
you use the same approach for Nova ephemeral disks (the nova user to the vms
pool).
How do you set the policy for cross-pool access between them? I mean the nova
user to the images and volumes pools, and the cinder user to the images, vms
and backups pools?
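For comparison, the caps I currently have in mind roughly follow that
rbd-openstack document (user and pool names as in the document; the separate
nova user in the last line is only an assumption on my side):

  ceph auth caps client.glance mon 'profile rbd' osd 'profile rbd pool=images'
  ceph auth caps client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'
  ceph auth caps client.cinder-backup mon 'profile rbd' osd 'profile rbd pool=backups'
  ceph auth caps client.nova mon 'profile rbd' osd 'profile rbd pool=vms, profile rbd-read-only pool=images'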
Hi Max,
As far as I know, cross access between Ceph pools is needed for the
copy-on-write feature which enables fast cloning/snapshotting. For example, the
nova and cinder users need read access to the images pool to do copy-on-write
cloning from an image. So, it seems that the Ceph policy from the previous URL
can be modified.
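To illustrate what I mean by cross-pool copy-on-write (image and instance names
are placeholders):

  rbd snap create images/<image-uuid>@snap
  rbd snap protect images/<image-uuid>@snap
  rbd clone images/<image-uuid>@snap vms/<instance-uuid>_disk   # read on images, write on vms

which is why the nova and cinder users at least need read access to the images
pool.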
Hi Max,
So, it seems that you prefer to use an image cache rather than allowing cross
access between Ceph users. In that case all communication is API based, and the
snapshot and CoW happen inside the same pool for a single Ceph client only,
don't they? I'll consider this approach and compare it with the cross-pool
access
Hi,
I have run into something weird with GID selection for Ceph with RDMA. When I
configure ms_async_rdma_device_name and ms_async_rdma_gid_idx, Ceph with RDMA
runs successfully. But when I configure ms_async_rdma_device_name,
ms_async_rdma_local_gid and
ms_async_rdma_roce_ver,
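For reference, this is the sort of thing I am doing to pick the GID, and how
the two configuration styles look on my side (device name, port, index and GID
value are placeholders; I show ceph config set here, but ceph.conf works the
same way):

  cat /sys/class/infiniband/mlx5_0/ports/1/gids/3              # the GID value at index 3
  cat /sys/class/infiniband/mlx5_0/ports/1/gid_attrs/types/3   # its RoCE version
  # style 1: select the GID by index
  ceph config set global ms_async_rdma_device_name mlx5_0
  ceph config set global ms_async_rdma_gid_idx 3
  # style 2: pin the GID value and RoCE version explicitly
  ceph config set global ms_async_rdma_local_gid <gid-from-sysfs>
  ceph config set global ms_async_rdma_roce_ver <version>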
> Which Ceph version are you using? Just wondering whether Ceph RDMA support is
> officially announced, or still in development?
>
> best regards,
>
> samuel
>
> --
> huxia...@horebdata.cn
>
>
> *From:* Lazuardi Nasution
> *Date:* 2020-09-18 19:21
> *To:* ceph-users
> *Subject:* [ceph-users] Ceph RDM
wrote:
> Which Ceph version are you using? Just wondering whether Ceph RDMA support is
> officially announced, or still in development?
>
> best regards,
>
> samuel
>
> --
> huxia...@horebdata.cn
>
>
> *From:* Lazuardi Nasution
> *Date:* 202