[ceph-users] Re: Ceph RBD w/erasure coding

2024-09-13 Thread Bogdan Adrian Velica
Hi, Can you run "ceph osd pool get <pool name> allow_ec_overwrites"? What is the output of that? Thank you, Bogdan Velica On Fri, Sep 13, 2024 at 8:46 PM wrote: > I have a Ceph instance wherein I'm trying to use erasure coding with > RBD (and libvirt). I've followed > > https://docs.ceph.com/en/lat
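
For reference, a minimal sketch of the checks usually involved when running RBD on an erasure-coded pool; the pool and image names below are placeholders, not taken from the original thread:

    # check whether overwrites are enabled on the EC pool
    ceph osd pool get my-ec-pool allow_ec_overwrites
    # enable them if not (requires BlueStore OSDs)
    ceph osd pool set my-ec-pool allow_ec_overwrites true
    # RBD keeps image metadata in a replicated pool and puts data in the EC pool
    rbd create --size 10G --data-pool my-ec-pool my-replicated-pool/my-image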

[ceph-users] Re: How to know if ceph is ready?

2024-08-29 Thread Bogdan Adrian Velica
Hi, It's a hacky idea, but you could create a script that checks whether the Ceph RBD pool is fully "active+clean" to ensure it's ready before starting the Proxmox VMs. Something like this... 1. Bash Script: - Write a script that checks if the Ceph RBD pool is in the "active+clean" state using ceph pg stat or ceph -
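
A rough, untested sketch of such a wait loop; it assumes "ceph pg stat" prints a one-line summary like "129 pgs: 129 active+clean; ...", which can vary slightly between releases:

    #!/bin/bash
    # Loop until "ceph pg stat" reports a single active+clean state for all PGs.
    until ceph pg stat | grep -qE '^[0-9]+ pgs: [0-9]+ active\+clean;'; do
        echo "PGs not all active+clean yet, waiting..."
        sleep 10
    done
    echo "All PGs active+clean, safe to start the VMs."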

[ceph-users] Re: Cephfs client capabilities

2024-08-27 Thread Bogdan Adrian Velica
Hi Yufan, Could you please provide a bit more detail? In what way do you want to restrict your user (the Ceph client user, correct)? What does your client look like? (You can use "ceph auth get client.myuser" to get the details.) Thank you, Bogdan V. croit.io On Tue, Aug 27, 2024 at 3:31 PM wro
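
In case it helps, a minimal sketch of how a CephFS client is typically restricted to a sub-path; the filesystem, client and path names here are placeholders:

    # show the caps the client currently has
    ceph auth get client.myuser
    # restrict the client to read/write only under /shared on filesystem "cephfs"
    ceph fs authorize cephfs client.myuser /shared rw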

[ceph-users] Re: Do you need to use a dedicated server for the MON service?

2024-08-23 Thread Bogdan Adrian Velica
Ah, for OpenStack.. Yeah I've seen setups like that so it should be fine... On Fri, Aug 23, 2024 at 1:27 PM Bogdan Adrian Velica wrote: > Hi, > > MON servers typically don't consume a lot of resources, so it's fairly > common to run MGR/MON daemons on the sam

[ceph-users] Re: Do you need to use a dedicated server for the MON service?

2024-08-23 Thread Bogdan Adrian Velica
Hi, MON servers typically don't consume a lot of resources, so it's fairly common to run MGR/MON daemons on the same machines as the OSDs in a cluster. Do you have any insights on the type of workload or data you plan to use with your Ceph cluster? Thank you, Bogdan Velica croit.io On Fri, Aug 2

[ceph-users] Re: Cephfs mds node already exists crashes mds

2024-08-20 Thread Bogdan Adrian Velica
Hi, What pops out is the "handle_client_mkdir()"... Does this mean the MDS crashed while a client was creating a new directory or snapshot? Any idea what the steps were? Thank you, Bogdan Velica croit.io On Tue, Aug 20, 2024 at 7:47 PM Tarrago, Eli (RIS-BCT) < eli.tarr...@lexisnexisrisk.com> wrote: > Her

[ceph-users] Re: Unable to add new OSDs

2024-05-01 Thread Bogdan Adrian Velica
Hi, I would suggest wiping the disks first with "wipefs -af /dev/_your_disk" or "sgdisk --zap-all /dev/your_disk" and trying again. Try only one disk first. Is the host visible when you run "ceph orch host ls"? Is the FQDN correct? If so, does the following command return any errors?
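
A minimal sketch of that sequence; the device and host names are placeholders and the last step assumes a cephadm/orchestrator-managed cluster:

    # wipe any leftover signatures / partition tables on the disk
    wipefs -af /dev/sdX
    sgdisk --zap-all /dev/sdX
    # confirm the host is known to the orchestrator under the expected name
    ceph orch host ls
    # then try adding a single OSD on that disk
    ceph orch daemon add osd myhost:/dev/sdX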

[ceph-users] Re: Reconstructing an OSD server when the boot OS is corrupted

2024-04-30 Thread Bogdan Adrian Velica
Hi, If I may, I would try something like this, but I haven't tested it so please take it with a grain of salt... 1. I would reinstall the Operating System in this case... Since the root filesystem is accessible but the OS is not bootable, the most straightforward approach would be to perform a
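
As a rough, untested sketch of the re-activation step on a freshly reinstalled host, assuming ceph-volume-managed OSDs whose data disks are intact (cephadm-managed clusters handle this differently):

    # after reinstalling the OS and the ceph packages, rediscover and
    # reactivate the OSDs that still live on the data disks
    ceph-volume lvm list
    ceph-volume lvm activate --all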

[ceph-users] Re: CephFS space usage

2024-03-13 Thread Bogdan Adrian Velica
Hi, Not sure if it was mentioned, but you could also check the following: 1. Snapshots Snapshots can consume a significant amount of space without being immediately obvious. They preserve the state of the filesystem at various points in time. List Snapshots: Use the "ceph fs subvolume snapshot ls
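
A minimal sketch of that check, assuming subvolumes are in use; the filesystem and subvolume names are placeholders:

    # list snapshots of a subvolume
    ceph fs subvolume snapshot ls myfs mysubvolume
    # overall pool usage for comparison
    ceph df detail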

[ceph-users] Re: LibCephFS Python Mount Failure

2022-07-25 Thread Bogdan Adrian Velica
Hi Adam, I think this might be related to the user you are running the script as; try running the script as the ceph user (or the user you are running your Ceph with). Also make sure the variable from os.environ.get is used (I might be mistaken here). Do a print or something first to see that the key is loaded.
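
A quick way to test the permissions theory from the shell; the keyring path is the usual default and the script name is just a placeholder for whatever you are running:

    # check who can read the keyring the script relies on
    ls -l /etc/ceph/ceph.client.admin.keyring
    # try the same script as the ceph user to rule out permissions
    sudo -u ceph python3 your_mount_script.py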

[ceph-users] Re: Cluster healthy, but 16.2.7 osd daemon upgrade says its unsafe to stop them?

2022-05-27 Thread Bogdan Adrian Velica
Hi, Can you please tell us the size of your Ceph cluster? How many OSDs do you have? The default recommendation is to have a min_size of 2 and replica 3 per replicated pool. Thank you, Bogdan Velica croit.io On Fri, May 27, 2022 at 6:33 PM Sarunas Burdulis wrote: > On 5/27/22 04:54, Robert Sa
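
For reference, the pool settings in question can be checked like this; the pool name is a placeholder:

    ceph osd pool get mypool size
    ceph osd pool get mypool min_size
    # and the overall OSD count / layout
    ceph osd tree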

[ceph-users] Re: Benefits of high RAM on a metadata server?

2020-02-06 Thread Bogdan Adrian Velica
Hi, I am running 3 MDS servers (1 active and 2 standby, which I recommend), each with 128 GB of RAM (the clients are running ML analysis), and I have about 20 million inodes loaded in RAM. It's working fine except for some warnings: "client X is failing to respond to cache pressure." Besides that t
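
If it's useful, the MDS cache size that governs how many inodes stay in RAM is controlled by mds_cache_memory_limit; a sketch, where the 64 GiB value is only an example and not a sizing recommendation:

    # show the current MDS cache memory limit
    ceph config get mds mds_cache_memory_limit
    # raise it, e.g. to 64 GiB, on hosts with plenty of RAM
    ceph config set mds mds_cache_memory_limit 68719476736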

[ceph-users] Re: Which network is used for recovery / rebalancing

2019-09-02 Thread Bogdan Adrian Velica
Hi, If you have 2 network cards (one used in a VLAN with the Ceph clients and the other in a VLAN only for the Ceph servers, as recommended in the documentation), then recovery is done via the second VLAN, the one where only the Ceph servers are accessible. In /etc/ceph/ceph.conf you sh
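
A minimal ceph.conf sketch of that split; the subnets here are placeholders:

    [global]
    # client-facing traffic
    public_network = 192.168.1.0/24
    # replication / recovery traffic between OSDs
    cluster_network = 10.0.0.0/24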