[ceph-users] crush rule for 4 copy over 3 failure domains?

2021-12-17 Thread Simon Oosthoek

Dear ceph users,

We recently gained a third location with ceph osd nodes. For 3-copy 
pools it is trivial to create a crush rule that uses all 3 datacenters 
for each object, but a 4-copy rule is harder. Our current "replicated" rule is this:


rule replicated_rule {
    id 0
    type replicated
    min_size 2
    max_size 4
    step take default
    step choose firstn 2 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}

For 3 copies, the rule would be:

rule replicated_rule_3copy {
    id 5
    type replicated
    min_size 2
    max_size 3
    step take default
    step choose firstn 3 type datacenter
    step chooseleaf firstn 1 type host
    step emit
}

But 4 copies require an additional OSD, so how do I tell the crush 
algorithm to first take one from each datacenter and then take one more 
from any datacenter?


I'd be interested to know if this is possible and if so, how...
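
One idea I had (completely untested, and the rule id is arbitrary) would 
be a rule with two emit blocks:

rule replicated_rule_4copy {
    id 6
    type replicated
    min_size 2
    max_size 4
    step take default
    # one host in each of the three datacenters
    step choose firstn 3 type datacenter
    step chooseleaf firstn 1 type host
    step emit
    # a fourth copy anywhere in the tree
    step take default
    step chooseleaf firstn 1 type host
    step emit
}

As far as I understand, CRUSH does not de-duplicate OSDs across emit 
blocks, so the fourth copy could occasionally land on a host that already 
holds one of the first three. Something like

crushtool -i crushmap --test --rule 6 --num-rep 4 --show-mappings

on a compiled copy of the crush map should show whether that actually 
happens.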

Having said that, I don't think there's much additional value in a 4-copy 
pool compared to a 3-copy pool spread over 3 separate locations. Or is 
there (apart from simply having one more copy)?


Cheers

/Simon
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephfs quota used

2021-12-17 Thread Konstantin Shalygin
Or you can mount with the 'dirstat' option and use 'cat .' to determine CephFS 
stats:

alias fsdf="cat . | grep rbytes | awk '{print \$2}' | numfmt --to=iec --suffix=B"

[root@host catalog]# fsdf
245GB
[root@host catalog]#
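
The option itself goes on the CephFS mount; with the kernel client that 
would be something along these lines (monitor address and credentials are 
just placeholders):

mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,dirstat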


Cheers,
k

> On 17 Dec 2021, at 00:25, Jesper Lykkegaard Karlsen  wrote:
> 
> Anyway, I just made my own ceph-fs version of "du".
> 
> ceph_du_dir:
> 
> #!/bin/bash
> # usage: ceph_du_dir $DIR
> SIZE=$(getfattr -n ceph.dir.rbytes $1 2>/dev/null| grep "ceph\.dir\.rbytes" | 
> awk -F\= '{print $2}' | sed s/\"//g)
> numfmt --to=iec-i --suffix=B --padding=7 $SIZE
> 
> Prints out the ceph-fs dir size in "human-readable" form.
> It works like a charm and my god it is fast!
> 
> Tools like that could be very useful, if provided by the development team 🙂

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: airgap install

2021-12-17 Thread Zoran Bošnjak
Kai, thank you for your answer. It looks like the "ceph config set mgr..." 
commands are the key part for pointing to my local registry. However, I haven't got 
that far with the installation. I have tried various options, but I already have 
problems with the bootstrap step.

I have documented the procedure (and the errors) here:
https://github.com/zoranbosnjak/ceph-install#readme

Would you please have a look and suggest corrections?
Ideally, I would like to run administrative commands from a dedicated (admin) 
node... or alternatively to set up the mon nodes to be able to run administrative 
commands...

regards,
Zoran

- Original Message -
From: "Kai Stian Olstad" 
To: "Zoran Bošnjak" 
Cc: "ceph-users" 
Sent: Thursday, December 16, 2021 9:40:22 AM
Subject: Re: [ceph-users] airgap install

On Mon, Dec 13, 2021 at 06:18:55PM +, Zoran Bošnjak wrote:
> I am using "ubuntu 20.04" and I am trying to install "ceph pacific" version 
> with "cephadm".
> 
> Are there any instructions available about using "cephadm bootstrap" and 
> other related commands in an airgap environment (that is: on the local 
> network, without internet access)?

Unfortunately they say cephadm is stable, but I would call it beta because of
lacking features, bugs and missing documentation.

I can give you some pointers.

The best source for finding the images you need is the cephadm code; for 16.2.7
you find them here [1].

cephadm bootstrap has the --image option to specify which image to use.
I also run the bootstrap with --skip-monitoring-stack; if not, it fails since it
can't find the images.
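
For example (the registry name, tag and mon IP here are just placeholders 
for your environment):

cephadm bootstrap \
  --mon-ip 10.0.0.10 \
  --image registry.local:5000/ceph/ceph:v16.2.7 \
  --skip-monitoring-stack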

After that you can point the monitoring container images to your registry.
cephadm shell
ceph config set mgr mgr/cephadm/container_image_prometheus <image>
ceph config set mgr mgr/cephadm/container_image_node_exporter <image>
ceph config set mgr mgr/cephadm/container_image_grafana <image>
ceph config set mgr mgr/cephadm/container_image_alertmanager <image>
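
For example, with a local registry (image names and tags below are only 
illustrative; take the exact ones from the cephadm source for your 
release [1]):

ceph config set mgr mgr/cephadm/container_image_prometheus registry.local:5000/prometheus/prometheus:v2.18.1
ceph config set mgr mgr/cephadm/container_image_node_exporter registry.local:5000/prometheus/node-exporter:v0.18.1
ceph config set mgr mgr/cephadm/container_image_grafana registry.local:5000/ceph/ceph-grafana:6.7.4
ceph config set mgr mgr/cephadm/container_image_alertmanager registry.local:5000/prometheus/alertmanager:v0.20.0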

Check the result with
ceph config get mgr

To deploy the monitoring
ceph mgr module enable prometheus
ceph orch apply node-exporter '*'
ceph orch apply alertmanager --placement ...
ceph orch apply prometheus --placement ...
ceph orch apply grafana --placement ...
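
The --placement spec depends on your environment; a simple, made-up 
example that just runs one instance of each would be:

ceph orch apply alertmanager --placement="count:1"
ceph orch apply prometheus --placement="count:1"
ceph orch apply grafana --placement="count:1"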


This should be what you need to get Ceph running in an isolated network.

[1] https://github.com/ceph/ceph/blob/v16.2.7/src/cephadm/cephadm#L50-L61

-- 
Kai Stian Olstad
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: airgap install

2021-12-17 Thread Sebastian Wagner
Hi Zoran,

I'd like to have this properly documented in the Ceph documentation as
well. I just created https://github.com/ceph/ceph/pull/44346 to add the
monitoring images to that section. Feel free to review this one.

Sebastian

On 17.12.21 11:06, Zoran Bošnjak wrote:
> [...]
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: bunch of " received unsolicited reservation grant from osd" messages in log

2021-12-17 Thread Kenneth Waegeman

Hi all,

I'm also seeing these messages spamming the logs after updating from 
Octopus to Pacific 16.2.7.


Any clue yet what this means?

Thanks!!

Kenneth

On 29/10/2021 22:21, Alexander Y. Fomichev wrote:

Hello.
After upgrading to 'pacific' I found the log spammed with messages like this:
... active+clean]  scrubber pg(46.7aas0) handle_scrub_reserve_grant:
received unsolicited reservation grant from osd 138(1) (0x560e77c51600)

If I understand it correctly this is exactly what it looks like, and that is not
good. Running with debug osd 1/5 doesn't help much, Google brings me
nothing, and I am stuck. Could anybody give a hint at what's happening or where
to dig?


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: cephfs quota used

2021-12-17 Thread Jesper Lykkegaard Karlsen
Thanks Konstantin,

Actually, I went a bit further and made the script more universal in usage:

ceph_du_dir:

#!/bin/bash
# usage: ceph_du_dir $DIR1 ($DIR2 ...)
for i in "$@"; do
    if [[ -d "$i" && ! -L "$i" ]]; then
        # read the recursive size from the ceph.dir.rbytes xattr and
        # print it in human-readable form next to the directory name
        echo "$(numfmt --to=iec --suffix=B --padding=7 $(getfattr --only-values -n ceph.dir.rbytes "$i" 2>/dev/null) | sed -r 's/([0-9])([a-zA-Z])/\1 \2/g; s/([a-zA-Z])([0-9])/\1 \2/g') $i"
    fi
done

The above can be run as:

ceph_du_dir $DIR

with multiple directories:

ceph_du_dir $DIR1 $DIR2 $DIR3 ..

Or even with a wildcard:

ceph_du_dir $DIR/*

Best,
Jesper

--
Jesper Lykkegaard Karlsen
Scientific Computing
Centre for Structural Biology
Department of Molecular Biology and Genetics
Aarhus University
Gustav Wieds Vej 10
8000 Aarhus C

E-mail: je...@mbg.au.dk
Tlf:+45 50906203


From: Konstantin Shalygin 
Sent: 17 December 2021 09:17
To: Jesper Lykkegaard Karlsen 
Cc: Robert Gallop ; ceph-users@ceph.io
Subject: Re: [ceph-users] cephfs quota used

[...]

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Cephalocon 2022 deadline extended?

2021-12-17 Thread Dan van der Ster
Yes, the Cephalocon CfP has been extended until Sunday the 19th!

https://linuxfoundation.smapply.io/prog/cephalocon_2022/

On Fri, Dec 10, 2021 at 8:28 PM Bobby  wrote:
>
> one typing mistake... I meant 19 December 2021
>
> On Fri, Dec 10, 2021 at 8:21 PM Bobby  wrote:
>
> >
> > Hi all,
> >
> > Has the CfP deadline for Cephalocon 2022 been extended to 19 December
> > 2022? Please confirm if anyone knows it...
> >
> >
> > Thanks
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph on two public networks - not working

2021-12-17 Thread Robert Sander

On 16.12.21 21:57, Andrei Mikhailovsky wrote:


public_network = 192.168.168.0/24,192.168.169.0/24


AFAIK there is only one public_network possible.

In your case you could try with 192.168.168.0/23, as both networks are 
direct neighbors bitwise.


Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Luminous: export and migrate rocksdb to dedicated lvm/unit

2021-12-17 Thread Flavio Piccioni
Hi all,
in a Luminous+Bluestore cluster, I would like to migrate rocksdb (including
wal) to nvme (lvm).

(the output comes from a test environment with a minimum-sized HDD, used to test the procedure)
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-0
infering bluefs devices from bluestore path
{
"/var/lib/ceph/osd/ceph-0/block": {
"osd_uuid": "399e7751-d791-4493-9f53-caf1650573ed",
"size": 107369988096,
"btime": "2021-12-16 16:24:32.412358",
"description": "main",
"bluefs": "1",
"ceph_fsid": "uuid",
"kv_backend": "rocksdb",
"magic": "ceph osd volume v026",
"mkfs_done": "yes",
"osd_key": "mykey",
"ready": "ready",
"require_osd_release": "\u000e",
"whoami": "0"
}
}
rocksdb and the wal are integrated on the slow device, so there is no block.db or
block.wal entry.

In Luminous and Mimic, there is no bluefs-bdev-new-db option for
ceph-bluestore-tool.
How can this dump+migration be achieved in old versions?

Regards
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] min_size ambiguity

2021-12-17 Thread Chad William Seys

Hi all,
  The documentation for "min_size" says "Sets the minimum number of 
replicas required for I/O".

https://docs.ceph.com/en/latest/rados/operations/pools/

Can anyone confirm that a PG below "min_size" but still online can still 
be read?


If someone says "the PG can be read", I will open an issue to have the 
language made more precise; if someone says "the PG cannot be read", 
I'll open an issue requesting this as an enhancement. :)


Thanks!
C.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Luminous: export and migrate rocksdb to dedicated lvm/unit

2021-12-17 Thread Igor Fedotov

Hey Flavio,

I think there are no options other than either upgrading the cluster or 
backporting the relevant bluefs migration code to Luminous and making a custom 
build.
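
Just for reference, on Nautilus and newer the migration itself would look 
something like this (OSD id, paths and the target device are only 
examples):

systemctl stop ceph-osd@0
ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-0 --dev-target /dev/nvme0n1p1
# optionally also move the existing DB data off the slow device
ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-0 --devs-source /var/lib/ceph/osd/ceph-0/block --dev-target /var/lib/ceph/osd/ceph-0/block.db
systemctl start ceph-osd@0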



Thanks,

Igor

On 12/17/2021 4:43 PM, Flavio Piccioni wrote:

[...]


--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph on two public networks - not working

2021-12-17 Thread Anthony D'Atri
The terminology here can be subtle.

The `public_network` value AIUI in part is an ACL of sorts.  Comma-separated 
values are documented and permissible.  The larger CIDR block approach also 
works.

The address(es) that mons bind / listen to are a different matter.
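
Something along these lines in ceph.conf (addresses made up), with the 
mon's listen address pinned explicitly:

[global]
public_network = 192.168.168.0/24, 192.168.169.0/24

[mon.a]
public_addr = 192.168.168.10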

> On 16.12.21 21:57, Andrei Mikhailovsky wrote:
> 
>> public_network = 192.168.168.0/24,192.168.169.0/24
> 
> AFAIK there is only one public_network possible.
> 
> In your case you could try with 192.168.168.0/23, as both networks are direct 
> neighbors bitwise.

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io