Dear ceph users,
Since we recently got 3 locations with Ceph OSD nodes, it is trivial to create
a CRUSH rule for 3-copy pools that uses all 3 datacenters for each object, but
4-copy is harder. Our current "replicated" rule is this:
rule replicated_rule {
    id 0
    type replicated
    min_
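A sketch of the CLI side, assuming a CRUSH bucket type named "datacenter" under the
root "default" (names here are illustrative, not taken from the post above):

# one replica per datacenter, e.g. for the 3-copy pools
ceph osd crush rule create-replicated rep3_dc default datacenter

# anything fancier (such as the 4-copy layout) needs a hand-written rule in the
# decompiled CRUSH map, which is then recompiled and injected back
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt   # edit crushmap.txt, add the new rule
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new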
Or you can mount with the 'dirstat' option and use 'cat .' to read the CephFS
stats:
alias fsdf="cat . | grep rbytes | awk '{print \$2}' | numfmt --to=iec --suffix=B"
[root@host catalog]# fsdf
245GB
[root@host catalog]#
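The same figure is also exposed as a virtual xattr, so it works without the
dirstat mount option (the path below is just an example):

getfattr --only-values -n ceph.dir.rbytes /mnt/cephfs/catalog | numfmt --to=iec --suffix=B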
Cheers,
k
> On 17 Dec 2021, at 00:25, Jesper Lykkegaard Karlsen wrote:
>
Kai, thank you for your answer. It looks like the "ceph config set mgr..."
commands are the key part for pointing cephadm at my local registry. However, I
haven't got that far with the installation yet. I have tried various options, but
I already run into problems at the bootstrap step.
I have documented the pro
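For reference, a rough sketch of the two pieces mentioned above, with <registry>
standing in for the local registry and the image names/tags purely illustrative:

cephadm --image <registry>/ceph/ceph:v16.2.7 bootstrap --mon-ip <mon-ip>
ceph config set mgr mgr/cephadm/container_image_prometheus <registry>/prometheus/prometheus:<tag>
ceph config set mgr mgr/cephadm/container_image_node_exporter <registry>/prometheus/node-exporter:<tag>
ceph config set mgr mgr/cephadm/container_image_grafana <registry>/ceph/ceph-grafana:<tag>
ceph config set mgr mgr/cephadm/container_image_alertmanager <registry>/prometheus/alertmanager:<tag>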
Hi Zoran,
I'd like to have this properly documented in the Ceph documentation as
well. I just created
https://github.com/ceph/ceph/pull/44346 to add the monitoring images to
that section. Feel free to review this one.
Sebastian
On 17.12.21 at 11:06, Zoran Bošnjak wrote:
> Kai, thank you for y
Hi all,
I'm also seeing these messages spamming the logs after updating from
Octopus to Pacific 16.2.7.
Any clue yet as to what they mean?
Thanks!!
Kenneth
On 29/10/2021 22:21, Alexander Y. Fomichev wrote:
Hello.
After upgrading to 'pacific' I found the log spammed with messages like this:
... active+c
Thanks Konstantin,
Actually, I went a bit further and made the script more universal in usage:
ceph_du_dir:
#!/bin/bash
# usage: ceph_du_dir $DIR1 ($DIR2 ...)
for i in "$@"; do
    if [[ -d $i && ! -L $i ]]; then
        echo "$(numfmt --to=iec --suffix=B --padding=7 "$(getfattr --only-values -n ceph.dir.rbytes "$i" 2>/dev/null)") $i"
    fi
done
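which can then be called with one or more directories (paths are just examples):

./ceph_du_dir /mnt/cephfs/catalog /mnt/cephfs/home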
Yes, the Cephalocon CfP has been extended until Sunday the 19th!
https://linuxfoundation.smapply.io/prog/cephalocon_2022/
On Fri, Dec 10, 2021 at 8:28 PM Bobby wrote:
>
> one typing mistake: I meant 19 December 2021
>
> On Fri, Dec 10, 2021 at 8:21 PM Bobby wrote:
>
> >
> > Hi all,
> >
> > Has
On 16.12.21 21:57, Andrei Mikhailovsky wrote:
public_network = 192.168.168.0/24,192.168.169.0/24
AFAIK only one public_network entry is possible.
In your case you could try 192.168.168.0/23, as the two networks are
adjacent bitwise.
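168 is 10101000 and 169 is 10101001 in binary, so a single /23 covers exactly
these two /24s, i.e. in ceph.conf:

public_network = 192.168.168.0/23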
Regards
--
Robert Sander
Heinlein Consulting Gm
Hi all,
in a Luminous+BlueStore cluster I would like to migrate the RocksDB (including
the WAL) to NVMe (LVM).
(The output below comes from a test environment with minimum-sized HDDs, used to test the procedure.)
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0
infering bluefs devices from bluestore path
{
"/var/lib/cep
Hi all,
The documentation for "min_size" says "Sets the minimum number of
replicas required for I/O".
https://docs.ceph.com/en/latest/rados/operations/pools/
Can anyone confirm that a PG that has dropped below "min_size" but is still
online can still be read?
If someone says "the PG can be read" I will open a
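For context, the value can be checked and changed per pool (<pool> and <n> are
placeholders):

ceph osd pool get <pool> min_size
ceph osd pool set <pool> min_size <n>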
Hey Flavio,
I think there are no options other than either upgrading the cluster or
backporting the relevant BlueFS migration code to Luminous and making a custom
build.
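For reference, on a release that has it (Nautilus or later) the move can be done
with ceph-bluestore-tool; a rough sketch only, assuming OSD 0 and a target LV
/dev/vg_nvme/db0 (ownership and activation details omitted):

systemctl stop ceph-osd@0
ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/vg_nvme/db0
ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 --devs-source /var/lib/ceph/osd/ceph-0/block --dev-target /var/lib/ceph/osd/ceph-0/block.db
systemctl start ceph-osd@0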
Thanks,
Igor
On 12/17/2021 4:43 PM, Flavio Piccioni wrote:
Hi all,
in a Luminous+Bluestore cluster, I would like to migrate roc
The terminology here can be subtle.
The `public_network` value AIUI is in part an ACL of sorts. Comma-separated
values are documented and permissible. The larger-CIDR-block approach also
works.
The address(es) that the mons bind/listen to are a different matter.
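A quick way to see both sides (the setting vs. what the mons actually listen on):

ceph config get mon public_network
ceph mon dump | grep addr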
> On 16.12.21 21:57, Andrei Mikh