Hi - I have a 4-node cluster and have started to have some odd access issues
with my file system "Home".
When I started investigating, I saw the message "1 MDSs behind on trimming",
but I also noticed that I seem to have 2 MDSs running on each server - 3
daemons up, with 5 on standby. Is this expected?
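For reference, here is how I have been checking the MDS layout (the file
system name "Home" is mine; output format may vary by release):

$ ceph fs status Home   # active MDS ranks plus standby daemons for this fs
$ ceph status           # the "mds:" line summarizes up/standby counts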
Hi,
I've just tested Veeam Backup (build 12.0.0.1420) against Reef 18.2.0.
It works great so far.
BR
Wolfgang
Hi,
I'm running ceph version 15.2.16
(a6b69e817d6c9e6f02d0a7ac3043ba9cdbda1bdf) octopus (stable), which would
mean I am not running the fix.
Glad to know that an upgrade will solve the issue!
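(For anyone else checking: before and after upgrading, the per-daemon
versions across the cluster can be confirmed with:)

$ ceph versions   # counts of daemons per running release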
Best regards
Josef Johansson
On 8/16/23 12:05, Konstantin Shalygin wrote:
Hi,
On 16 Aug
Hi,
Let's do some serious necromancy here.
I just had this exact problem. It turns out that after rebooting all nodes
(one at a time, of course), the monitor could join perfectly.
Why? You tell me. We did not see any traces of the IP address in any
dumps that we could get a hold of. I restarte
Yeah, this seems to have done the trick. I still need to complete the full
cluster adoption, but after reconfiguring the mon and mgr, they have come back
up and been built from the correct image.
Thanks for this.
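(For reference, the redeploy-from-the-right-image step can be done with
cephadm along these lines; the image tag and the <host> placeholder in the
daemon name are assumptions, not taken from this thread:)

# point cephadm at the intended container image, then redeploy the daemon
$ ceph config set global container_image quay.io/ceph/ceph:v18.2.0
$ ceph orch daemon redeploy mon.<host>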
The recording for this Ceph Tech Talk is now available
https://www.youtube.com/watch?v=Mwr0bayhtI4
--
Mike Perez
Community Manager
Ceph Foundation
--- Original Message ---
On Tuesday, August 15th, 2023 at 2:55 PM, Mike Perez wrote:
> Hi everyone,
>
> Join us tomorrow at 15:00 UTC to h
On Thu, Aug 17, 2023 at 12:14 PM wrote:
>
> Hello,
>
> Yes, I can see that there are metrics to check the size of the compressed
> data stored in a pool with ceph df detail (the relevant columns are USED
> COMPR and UNDER COMPR).
>
> Also, the size of compressed data can be checked at the OSD level using
Hello,
Yes, I can see that there are metrics to check the size of the compressed data
stored in a pool with ceph df detail (the relevant columns are USED COMPR and
UNDER COMPR).
Also, the size of compressed data can be checked at the OSD level using perf
dump (relevant values "bluestore_compressed_alloc
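Putting both checks together (osd.0 here is just an example daemon):

$ ceph df detail                       # per-pool USED COMPR / UNDER COMPR
$ ceph tell osd.0 perf dump | grep bluestore_compressed   # per-OSD bytes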
Hi,
> On 17 Aug 2023, at 18:21, yosr.kchao...@gmail.com wrote:
>
> Thanks for your reply. By Bluestore compression I mean the compression
> enabled at the pool level. It is also called inline compression.
> https://docs.ceph.com/en/reef/rados/configuration/bluestore-config-ref/#inline-compre
Hi all!
The cluster was installed before device classes were a thing, so in
preparation for installing some SSDs into a Ceph cluster with OSDs on 7
machines, I migrated all replicated pools to a CRUSH rule with the device
class set. Lots of misplaced objects (probably because of changed IDs in the
CRUSH
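(For anyone doing the same migration, a minimal sketch; the rule name
"replicated_hdd", root "default", and failure domain "host" are assumptions:)

$ ceph osd crush rule create-replicated replicated_hdd default host hdd
$ ceph osd pool set <pool> crush_rule replicated_hdd   # repeat per pool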
Hello Konstantin,
Thanks for your reply. By Bluestore compression I mean the compression
enabled at the pool level. It is also called inline compression.
https://docs.ceph.com/en/reef/rados/configuration/bluestore-config-ref/#inline-compression
Do you see what I mean now?
Thanks
Yosr
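(For context, inline compression at the pool level is typically enabled along
these lines; the pool name and the algorithm/mode choices are placeholders:)

$ ceph osd pool set <pool> compression_algorithm snappy
$ ceph osd pool set <pool> compression_mode aggressive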
Hi,
What do you mean by Bluestore compression? The RGW compression is RADOS
compression, not compression performed by RGW itself. You can set up different
storage classes and upload uncompressed or compressed objects to the same pool.
You can determine the compression ratio with the exporter [1]
[1] https://githu
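(For reference, per-zone RGW compression is configured roughly like this; the
zone and placement names shown are the defaults and may differ:)

$ radosgw-admin zone placement modify --rgw-zone default \
      --placement-id default-placement --compression zlib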
Hello,
I have enabled Bluestore compression and I see that my data is being compressed
by checking the following metrics:
bash-4.4$ ceph tell osd.0 perf dump | grep compre
"compress_success_count": 24064,
"compress_rejected_count": 1,
"bluestore_compressed": 1617592,
Hi,
I'm using a v16.2.13 Ceph cluster. Yesterday I added some SSD nodes to
replace HDD nodes. During the process, one SSD node had a different MTU, which
caused some PGs to become inactive for a while. After changing the MTU, all
the PGs are active+clean now, but after that I can't access some b
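(Generic starting points for narrowing down what is still affected; these are
standard commands, not specific to this cluster:)

$ ceph health detail            # names any PGs/objects still reporting problems
$ ceph pg dump_stuck unclean    # lists PGs that are not yet active+clean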