> Can you confirm that take-over-existing-cluster.yml is in the root directory
> of ceph-ansible (/usr/share/ceph-ansible) when you run it (as per step 10
> of the documentation)?
>
> Regards,
> Frédéric.
>
> ----- On 14 May 24, at 19:31, vladimir franciz blando <
> vladimir.bla...@gmail.com>
> - ip_version == 'ipv4'
>
> I can see we set it in our old all.yaml file.
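For reference, a minimal sketch of how that variable is typically set in
ceph-ansible's group_vars/all.yml (the network value below is a made-up
example; the rest of the file is omitted):

# group_vars/all.yml (excerpt) -- sketch only, assuming a plain IPv4 deployment
ip_version: ipv4                  # evaluated by conditions such as ip_version == 'ipv4'
public_network: 192.168.122.0/24  # example range; monitor addresses are picked from it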
>
> Regards,
> Frédéric.
>
> - On 13 May 24, at 14:19, vladimir franciz blando
> vladimir.bla...@gmail.com wrote:
>
> > Hi,
> >
> > If I follow the guide, i
This playbook hasn't been maintained in a while. It's most likely broken.
>
> Which version of this playbook are you using?
>
>
>
> Regards,
>
>
>
> --
>
> Guillaume Abrioux
>
> Software Engineer
>
>
>
> *From:* Frédéric Nass
> *Date:* Tuesday, 14 May 2024 at 10:12
_ipv4_addresses'] |
> ips_in_ranges(hostvars[item]['public_network'].split(',')) | first}) }}"
>
> [1]
>
> https://github.com/ceph/ceph-ansible/blob/878cce5b4847a9a112f9d07c0fd651aa15f1e58b/roles/ceph-facts/tasks/set_monitor_address.yml
>
> Quoting
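For context, the expression quoted above appears to come from a set_fact task
in the linked set_monitor_address.yml. A rough sketch of what such a task can
look like follows; the variable names (_monitor_addresses), the loop, and the
when conditions are assumptions about the file's general shape, not text copied
from the linked commit:

# Sketch only -- approximate shape of the monitor-address fact-setting task,
# assuming ceph-ansible's custom ips_in_ranges filter plugin is available.
- name: Set monitor IPv4 address from the public_network range
  set_fact:
    _monitor_addresses: >-
      {{ _monitor_addresses | default({})
         | combine({item: hostvars[item]['ansible_facts']['all_ipv4_addresses']
                          | ips_in_ranges(hostvars[item]['public_network'].split(','))
                          | first}) }}
  with_items: "{{ groups.get(mon_group_name, []) }}"
  when:
    - "item not in _monitor_addresses | default({})"
    - ip_version == 'ipv4'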
I know that only a few are using this script, but I'm just trying my luck here
in case someone has the same issue as mine.
But first, who has successfully used this script, and what version did you
use? I'm using this guide on my test environment (
https://access.redhat.com/documentation/en-us/red_hat_ceph_st
store-tool)
>
> But if the disk is failing a DD is probably your best method.
>
>
>
> On Thu, 17 Oct 2019 11:44:20 +0800 *vladimir franciz blando* wrote
>
Sorry for not being clear: when I say healthy disk, I mean those are
already OSDs, so I need to transfer the data from the failed OSD to the
other OSDs that are healthy.
- Vlad
On Thu, Oct 17, 2019 at 11:31 AM Konstantin Shalygin wrote:
>
> On 10/17/19 10:29 AM, vladimir franciz
Hi,
I have a not-ideal setup on one of my clusters: 3 ceph nodes but using
replication 1 on all pools (don't ask me why replication 1, it's a long
story).
So it has come to this situation that a disk keeps on crashing, possibly a
hardware failure, and I need to recover from that.
What's my best
> one or more rgw instances for each zone
>
> I don't know if there are simpler approaches
>
> Cheers, Massimo
>
> On Tue, Sep 10, 2019 at 11:20 AM Wesley Peng wrote:
>
>>
>>
>> on 2019/9/10 17:14, vladimir franciz blando wrote:
I have 2 OpenStack environments that I want to integrate with an existing ceph
cluster. I know technically it can be done, but has anyone tried this?
- Vlad
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...