Hi all,
on our octopus-latest cluster I accidentally destroyed an up+in OSD with the
command line
ceph-volume lvm zap /dev/DEV
It executed the dd command and then failed at the lvm commands with "device
busy". Problem number one is, that the OSD continued working fine. Hence, there
is no in
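A hedged sketch (the device path is the poster's placeholder) of how one might check whether the interrupted zap actually affected the running OSD:
```
ceph osd tree                   # is the OSD still reported up and in?
ceph-volume lvm list /dev/DEV   # what LVM metadata can ceph-volume still read?
```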
In addition to not having resiliency by default, my recollection is
that BeeGFS also doesn't guarantee metadata durability in the event of
a crash or hardware failure like CephFS does. There's not really a way
for us to catch up to their "in-memory metadata IOPS" with our
"on-disk metadata IOPS". :
Hello folks,
I am deploying a Quincy Ceph cluster (17.2.0) on OpenStack VMs with Ubuntu
22.04 minimal, using cephadm.
I was able to bootstrap the cluster and add the hosts and the mons, but
when I apply the OSD spec with the encryption option enabled, it fails:
```
service_type: osd
service_id:
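# (The spec is cut off in the archive above. What follows is a hedged sketch
#  of how a minimal encrypted OSD spec might continue; the id, placement and
#  device filter are assumptions, not the poster's actual values.)
service_id: osd_encrypted
placement:
  host_pattern: '*'
data_devices:
  all: true
encrypted: true   # the encryption option the deployment fails on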
Does someone have an idea what I can check, maybe which logs I can turn on,
to find the cause of the problem? Or at least how to set up monitoring that
tells me when this happens?
Currently I go through ALL of the buckets and basically do a "compare
bucket index to radoslist" for all objects in the bu
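A hedged sketch of what such a per-bucket comparison could look like (the bucket and pool names below are placeholders, not taken from the thread):
```
# Compare the RADOS objects the bucket index expects against what the data
# pool actually contains; anything only in "expected" is missing from RADOS.
BUCKET=mybucket                      # placeholder bucket name
POOL=default.rgw.buckets.data        # assumed data pool name
radosgw-admin bucket radoslist --bucket="$BUCKET" | sort -u > /tmp/expected
rados -p "$POOL" ls | sort -u > /tmp/actual
comm -23 /tmp/expected /tmp/actual
```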
My understanding is that BeeGFS doesn't offer data redundancy by default;
you have to configure mirroring. You've not said how your Ceph cluster
is configured but my guess is you have the recommended 3x replication
- I wouldn't be surprised if BeeGFS was much faster than Ceph in this
case. I'd be intere
On 11/22/22 12:43, Eugen Block wrote:
Hi,
there was a similar thread recently
(https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/WSVIAZBFJQKBXA47RKISA5U4J7BHX6DK/).
Unfortunately I did not find that thread; I need to up my search skills.
Check the output of 'cephadm ls' on the
Hi Ilya
Thank you very much for the clarification. I created a cronjob-based script,
which is available here: https://github.com/oposs/rbd_snapshot_manager
Tobias
- Original Message -
From: "Ilya Dryomov"
To: "Tobias Bossert"
CC: "ceph-users"
Sent: Sunday, 20 November 2022 13:56:1
Hi,
there was a similar thread recently
(https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/WSVIAZBFJQKBXA47RKISA5U4J7BHX6DK/). Check the output of 'cephadm ls' on the node where the OSD is not running and remove it with 'cephadm rm-daemon --name osd.3'. If there's an empty directo
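In case it helps, a hedged sketch of that cleanup (the fsid is a placeholder; run it on the host that still lists the stale daemon):
```
cephadm ls                                            # daemons this host still has deployed
cephadm rm-daemon --name osd.3 --fsid <cluster-fsid>  # remove the leftover entry
```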
Hi,
I'm in the process of re-provisioning OSDs on a test cluster with
cephadm. One of the OSD IDs that was supposedly previously living on
host3 is now alive on host2. And cephadm is not happy about that:
"Found duplicate OSDs: osd.3 in status running on host2, osd.3 in status
stopped on ho
Hello everybody,
Can someone help point me in the right direction?
Kind regards,
Jelle
On 11/21/22 17:14, Jelle de Jong wrote:
Hello everybody,
I had an HW failure and had to take an OSD out; however, I now have PGs
stuck in stale+active+clean.
I am okay with having zeros as replacement for the lost block
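Not from the thread, but if losing that PG's data really is acceptable, a hedged sketch of the usual pattern (the PG id is a placeholder):
```
ceph pg dump_stuck stale                                  # list the stale PGs
ceph osd force-create-pg <pgid> --yes-i-really-mean-it    # recreate a PG as empty; its data is lost
```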
On Tue, Nov 22, 2022 at 9:20 AM Marcus Müller wrote:
>
> Hi Ilya,
>
> thanks. This looks like a general setting for all RBD images, not for
> specific ones, right?
Right.
>
> Is a more specific definition possible, so you can have multiple rbd images
> in different ceph namespaces?
It look
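For illustration only (the option actually discussed in this thread isn't shown in the archive), RBD config options can generally be scoped cluster-wide, per pool, or per image, here using the QoS IOPS limit as a stand-in key:
```
rbd config global set global rbd_qos_iops_limit 0         # cluster-wide default (0 = unlimited)
rbd config pool set rbd rbd_qos_iops_limit 1000           # override for one pool
rbd config image set rbd/myimage rbd_qos_iops_limit 500   # override for one image
```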
Hi Ilya,
thanks. This looks like a general setting for all RBD images, not for
specific ones, right?
Is a more specific definition possible, so you can have multiple rbd images in
different ceph namespaces?
Regards
Marcus
> On 21.11.2022 at 22:22, Ilya Dryomov wrote:
>
> On Mon, Nov 21