Hi,
Can you run "ceph osd pool get <> allow_ec_overwrites"?
What is the output of that?
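In case it helps, a rough sketch of what I'm getting at (pool and image
names here are just placeholders):

  # check whether overwrites are enabled on the EC pool
  ceph osd pool get my-ec-pool allow_ec_overwrites
  # RBD on an EC pool needs this set to true
  ceph osd pool set my-ec-pool allow_ec_overwrites true
  # the image metadata lives in a replicated pool, the data in the EC pool
  rbd create --size 10G --data-pool my-ec-pool my-rep-pool/my-image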
Thank you,
Bogdan Velica
On Fri, Sep 13, 2024 at 8:46 PM wrote:
> I have a Ceph instance wherein I'm trying to use erasure coding with
> RBD (and libvirt). I've followed
>
> https://docs.ceph.com/en/lat
Hi,
It's a hacky idea, but you could create a script that checks whether the
Ceph RBD pool is fully "active+clean" to ensure it's ready before starting
the Proxmox VMs.
Something like this...
1. Bash Script:
- Write a script that checks if the Ceph RBD pool is in the "active+clean"
state using ceph pg stat or ceph -
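A rough, untested sketch of such a check (the timeout and the VM start
command are placeholders, adjust to your setup):

  #!/bin/bash
  # Wait until every PG is active+clean before starting the VMs.
  TIMEOUT=600
  WAITED=0
  while true; do
      # "ceph pg stat" prints something like "129 pgs: 129 active+clean; ..."
      STAT=$(ceph pg stat)
      TOTAL=$(echo "$STAT" | awk '{print $1}')
      CLEAN=$(echo "$STAT" | grep -o '[0-9]\+ active+clean' | head -n1 | awk '{print $1}')
      if [ -n "$TOTAL" ] && [ "$TOTAL" = "$CLEAN" ]; then
          echo "All $TOTAL PGs are active+clean."
          break
      fi
      WAITED=$((WAITED + 10))
      if [ "$WAITED" -ge "$TIMEOUT" ]; then
          echo "Timed out waiting for Ceph to settle." >&2
          exit 1
      fi
      sleep 10
  done
  # then start the Proxmox VMs, e.g.:
  # qm start 100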
Hi Yufan,
Could you please provide a bit more detail? In what way do you want to
restrict your user (a Ceph client user, correct?)?
What does your client look like? (You can use "ceph auth get client.myuser"
to get the details.)
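For example (client.myuser is just a placeholder name):

  # show the key and caps of a single user
  ceph auth get client.myuser
  # or list all users and their caps
  ceph auth ls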
Thank you,
Bogdan V.
croit.io
On Tue, Aug 27, 2024 at 3:31 PM wro
Ah, for OpenStack...
Yeah, I've seen setups like that, so it should be fine...
On Fri, Aug 23, 2024 at 1:27 PM Bogdan Adrian Velica
wrote:
> Hi,
>
> MON servers typically don't consume a lot of resources, so it's fairly
> common to run MGR/MON daemons on the sam
Hi,
MON servers typically don't consume a lot of resources, so it's fairly
common to run MGR/MON daemons on the same machines as the OSDs in a
cluster. Do you have any insights on the type of workload or data you plan
to use with your Ceph cluster?
Thank you,
Bogdan Velica
croit.io
On Fri, Aug 2
Hi,
What pops out is the "handle_client_mkdir()"... Does this mean the MDS
crashed while a client was creating a new directory or snapshot? Any idea
about the steps involved?
Thank you,
Bogdan Velica
croit.io
On Tue, Aug 20, 2024 at 7:47 PM Tarrago, Eli (RIS-BCT) <
eli.tarr...@lexisnexisrisk.com> wrote:
> Her
Hi,
I would suggest wiping the disks first with "wipefs -af /dev/your_disk" or
"sgdisk --zap-all /dev/your_disk" and trying again. Try only one disk first.
Is the host visible when you run "ceph orch host ls"? Is the FQDN correct?
If so, does the following command return any errors?
Hi,
If I may, I would try something like this, but I haven't tested it, so
please take it with a grain of salt...
1. I would reinstall the operating system in this case.
Since the root filesystem is accessible but the OS is not bootable, the
most straightforward approach would be to perform a
Hi,
Not sure if it was mentioned, but you could also check the following:
1. Snapshots
Snapshots can consume a significant amount of space without being
immediately obvious. They preserve the state of the filesystem at various
points in time.
List Snapshots: Use the "ceph fs subvolume snapshot ls
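For example, something like this (volume, subvolume and group names are
placeholders):

  # list the snapshots of a subvolume
  ceph fs subvolume snapshot ls <vol_name> <subvol_name> --group_name <group_name>
  # and check the subvolume's quota/usage
  ceph fs subvolume info <vol_name> <subvol_name> --group_name <group_name>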
Hi Adam,
I think this might be related to the user you are running the script as.
Try running the script as the ceph user (or whichever user you run Ceph
with). Also make sure the environment variable is actually read via
os.environ.get (I might be mistaken here). Do a print or something first to
see that the key is loaded.
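Something along these lines, for example (the variable name and script path
are made up, adjust to yours):

  # sudo resets the environment by default, so pass the variable explicitly
  sudo -u ceph env MY_API_KEY="$MY_API_KEY" python3 /path/to/your_script.py
  # quick sanity check that the key is visible from Python at all
  sudo -u ceph env MY_API_KEY="$MY_API_KEY" python3 -c 'import os; print(os.environ.get("MY_API_KEY"))'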
Hi,
Can you please tell us the size of your Ceph cluster? How many OSDs do you
have?
The default recommendation is to have a min_size of 2 and a replica count
of 3 per replicated pool.
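For example (the pool name is a placeholder):

  # check the current values
  ceph osd pool get <pool-name> size
  ceph osd pool get <pool-name> min_size
  # set the usual defaults for a replicated pool
  ceph osd pool set <pool-name> size 3
  ceph osd pool set <pool-name> min_size 2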
Thank you,
Bogdan Velica
croit.io
On Fri, May 27, 2022 at 6:33 PM Sarunas Burdulis
wrote:
> On 5/27/22 04:54, Robert Sa
Hi,
I am running 3 MDS servers (1 active and 2 standby, and I recommend that),
each with 128 GB of RAM (the clients are running ML analysis), and I have
about 20 million inodes loaded in RAM. It's working fine except for some
warnings I get: "client X is failing to respond to cache pressure."
Besides that t
Hi,
If you have 2 network cards (one used in a VLAN with the Ceph clients and
the other one in a VLAN only for the Ceph servers, as recommended in the
documentation), then the recovery is done via the second VLAN, the one
where only the Ceph servers are accessible.
In /etc/ceph/ceph.conf you sh
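Roughly, the relevant bits look like this (the subnets are just placeholders
for your two VLANs):

  [global]
      # client-facing network
      public_network = 10.0.1.0/24
      # Ceph-servers-only network, used for replication and recovery traffic
      cluster_network = 10.0.2.0/24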