[ceph-users] Re: ceph-ansible in Pacific and beyond?

2021-03-17 Thread Lars Täuber
I vote for an SSH orchestrator for a bare metal installation too! (And no, I'm not able to write it.) Our second Ceph cluster is underway, and I don't know if we will ever update our first cluster (nautilus) to a containerized version. It is constructed in a special way. Thanks! Lars

[ceph-users] Re: add debian buster stable support for ceph-deploy

2020-09-07 Thread Lars Täuber
Hi Paul, the GPG key of the repo changed on the 4th of June. Is this correct? Thanks for your buster repo! Cheers, Lars Mon, 18 Nov 2019 20:08:01 +0100 Paul Emmerich ==> Jelle de Jong : > We maintain an unofficial mirror for Buster packages: > https://croit.io/2019/07/07/2019-07-07-debian-mi

[ceph-users] Re: Ceph SSH orchestrator?

2020-07-06 Thread Lars Täuber
+1 from me. I also hope for a bare metal solution for the upcoming versions. At the moment it is a showstopper for an upgrade to Octopus. Thanks to everybody involved for the great storage solution! Cheers, Lars On Fri, 3 Jul 2020 20:44:02 +0200, Oliver Freyermuth wrote: > Of course, no solutio

[ceph-users] Re: cephadm - How to deploy ceph cluster with a partition on SSD for block.db

2020-06-18 Thread Lars Täuber
Is there a possibility to specify partitions? I only see whole disks/devices chosen by vendor or model name. Regards, Lars On Thu, 18 Jun 2020 09:52:36 +, Eugen Block wrote: > You'll need to specify drive_groups in a yaml file if you don't deploy > standalone OSDs: > > https://docs.cep
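For reference, a drive-group sketch in the direction Eugen points to, with the service id, host pattern and SSD model string all made up; note that it selects whole devices (by rotational flag and model), which is exactly why the partition question above is open:

$ cat > osd_spec.yaml <<'EOF'
service_type: osd
service_id: osd_hdd_with_db      # made-up name
placement:
  host_pattern: '*'
data_devices:
  rotational: 1                  # rotational disks become OSD data devices
db_devices:
  model: 'SSD-MODEL-XYZ'         # made-up model string for the block.db SSDs
EOF
$ ceph orch apply osd -i osd_spec.yaml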

[ceph-users] Re: Combining erasure coding and replication?

2020-03-27 Thread Lars Täuber
Hi Brett, I'm far from being an expert, but you may consider rbd-mirroring between EC pools. Cheers, Lars On Fri, 27 Mar 2020 06:28:02 +, Brett Randall wrote: > Hi all > > Had a fun time trying to join this list, hopefully you don’t get this message > 3 times! > > On to Ceph… We are l
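In case it helps as a starting point, a very rough sketch of what rbd-mirroring with EC-backed images can look like on the primary side, assuming a replicated pool rbd_meta and an EC data pool rbd_ec_data (both names made up); peer bootstrapping and the rbd-mirror daemon on the remote site are left out:

$ rbd create rbd_meta/myimage --size 1T --data-pool rbd_ec_data   # image metadata replicated, data objects in the EC pool
$ rbd mirror pool enable rbd_meta image                           # per-image mirroring mode on the pool
$ rbd mirror image enable rbd_meta/myimage snapshot               # snapshot-based mirroring for this image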

[ceph-users] Re: Ceph and Windows - experiences or suggestions

2020-02-13 Thread Lars Täuber
I don't have any Samba experience. Isn't the installation and administration of a Samba server just for one "share" overkill? Thu, 13 Feb 2020 09:36:31 +0100 "Marc Roos" ==> ceph-users , taeuber : > Via smb, much discussed here > > -Original Message- > Sent: 13 February 2020 09:33 > To:
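For a sense of scale: the "via smb" route can be as small as one extra section in smb.conf on a host that already mounts the CephFS; a minimal sketch, with the share name, mount point and Samba user purely illustrative:

$ cat >> /etc/samba/smb.conf <<'EOF'
# illustrative share exporting a CephFS kernel mount on this gateway host
[cephfs]
   path = /mnt/cephfs
   read only = no
   valid users = winuser
EOF
$ systemctl restart smbd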

[ceph-users] Ceph and Windows - experiences or suggestions

2020-02-13 Thread Lars Täuber
Hi there! I have been given the task of connecting a Windows client to our existing Ceph cluster. I'm looking for experiences or suggestions from the community. Two possibilities come to my mind: 1. an iSCSI target on RBD exported to Windows, 2. NFS-Ganesha on CephFS exported to Windows. Is there a third way e

[ceph-users] Re: High CPU usage by ceph-mgr in 14.2.5

2020-01-09 Thread Lars Täuber
Released yesterday: https://ceph.io/releases/v14-2-6-nautilus-released/ Cheers, Lars Thu, 9 Jan 2020 10:10:12 +0100 Wido den Hollander ==> Neha Ojha , Sasha Litvak : > On 12/24/19 9:19 PM, Neha Ojha wrote: > > The root cause of this issue is the overhead added by the network ping > > time monitoring f

[ceph-users] Re: Balancing PGs across OSDs

2020-01-06 Thread Lars Täuber
Hi Konstantin, Mon, 23 Dec 2019 13:47:55 +0700 Konstantin Shalygin ==> Lars Täuber : > On 12/18/19 2:16 PM, Lars Täuber wrote: > > the situation after moving the PGs with osdmaptool is not really better > > than without: > > > > $ ceph osd df class hdd >

[ceph-users] Re: list CephFS snapshots

2019-12-18 Thread Lars Täuber
depends on a mount option. The metadata pool can't know this. The name of the snapshot directory itself is not given. You can only find the directories that contain snapshots, not the actual snapdirs. So you get the same information from find -inum. And it's simpler to use: $ find /mnt/point/
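To make that concrete, a sketch that lists snapshots from a client mount, assuming the default snapshot directory name ".snap" (it can be changed with the snapdirname mount option) and a mount point /mnt/cephfs chosen only for illustration:

$ find /mnt/cephfs -type d | while read -r d; do
      # every CephFS directory has a hidden snapshot dir; list its entries, if any
      ls -1 "$d/.snap" 2>/dev/null | sed "s|^|$d/.snap/|"
  done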

[ceph-users] Re: list CephFS snapshots

2019-12-18 Thread Lars Täuber
e": "snap-7" > }, > > > > -----Original Message- > Cc: ceph-users@ceph.io > Subject: [ceph-users] Re: list CephFS snapshots > > Have you tried "ceph daemon mds.NAME dump snaps" (available since > mimic)? > > ==

[ceph-users] Re: Balancing PGs across OSDs

2019-12-17 Thread Lars Täuber
much unusable space this means, but I'm sure there is a relevant amount of it. Thanks for all your patience and support. Lars Tue, 17 Dec 2019 07:45:24 +0100 Lars Täuber ==> Konstantin Shalygin : > Hi Konstantin, > > the cluster has finished its backfilling. > I got this

[ceph-users] Re: list CephFS snapshots

2019-12-17 Thread Lars Täuber
Hi Michael, thanks for your gist. This is at least a way to do it. But there are many directories in our cluster. The "find $1 -type d" takes about 90 minutes to find all 2.6 million directories. Is there another (faster) way, e.g. via the MDS? Cheers, Lars Mon, 16 Dec 2019 17:03:41 + Step

[ceph-users] Re: Balancing PGs across OSDs

2019-12-16 Thread Lars Täuber
your hints. Regards, Lars Mon, 16 Dec 2019 15:38:30 +0700 Konstantin Shalygin ==> Lars Täuber : > On 12/16/19 3:25 PM, Lars Täuber wrote: > > Here it comes. > > Maybe there is a bug in osdmaptool: when the number of defined pools is less than one, > do_upmap is not actually executed. >

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-16 Thread Lars Täuber
Mon, 16 Dec 2019 14:42:49 +0100 Thomas Schneider <74cmo...@gmail.com> ==> Lars Täuber , ceph-users@ceph.io : > Hi, > > can you please advise how to verify if and which weight-set is active? try: $ ceph osd crush weight-set ls Lars > > Regards > Thomas > > Am

[ceph-users] list CephFS snapshots

2019-12-16 Thread Lars Täuber
Hi! Is there a way to list all snapshots existing in a (subdir of) CephFS? I can't use the find command to look for the ".snap" dirs. I'd like to remove certain (or all) snapshots within a CephFS. But how do I find them? Thanks, Lars

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-16 Thread Lars Täuber
Hi Thomas, do you still have the backward-compatible weight-set active? Try removing it with: $ ceph osd crush weight-set rm-compat I'm unsure whether it solves my similar problem, but the progress looks very promising. Cheers, Lars
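Taken together with the check from the other mail in this thread, that looks roughly like this; note that dropping the compat weight-set lets CRUSH fall back to the plain weights, so some data movement should be expected:

$ ceph osd crush weight-set ls          # lists existing weight-sets, the compat one included
$ ceph osd crush weight-set rm-compat   # removing it may trigger a noticeable rebalance
$ ceph -s                               # watch recovery/backfill afterwards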

[ceph-users] Re: Balancing PGs across OSDs

2019-12-16 Thread Lars Täuber
Mon, 16 Dec 2019 15:38:30 +0700 Konstantin Shalygin ==> Lars Täuber : > On 12/16/19 3:25 PM, Lars Täuber wrote: > > Here it comes. > > Maybe there is a bug in osdmaptool: when the number of defined pools is less than one, > do_upmap is not actually executed. > > Try like this: > &

[ceph-users] Re: Balancing PGs across OSDs

2019-12-16 Thread Lars Täuber
Mon, 16 Dec 2019 15:17:37 +0700 Konstantin Shalygin ==> Lars Täuber : > On 12/16/19 2:42 PM, Lars Täuber wrote: > > There seems to be a bug in nautilus. > > > > I think about increasing the number of PG's for the data pool again, > > because the average

[ceph-users] Re: Balancing PGs across OSDs

2019-12-15 Thread Lars Täuber
bug in nautilus. I am thinking about increasing the number of PGs for the data pool again, because the average number of PGs per OSD is now 76.8. What do you say? Thanks, Lars Wed, 4 Dec 2019 16:21:33 +0700 Konstantin Shalygin ==> Lars Täuber , ceph-users@ceph.io : > On 12/4/19

[ceph-users] Re: Balancing PGs across OSDs

2019-12-04 Thread Lars Täuber
Hi Konstantin, thanks for your suggestions. > Lars, you have too many PGs for these OSDs. I suggest disabling the PG > autoscaler and: > > - reduce the number of PGs for the cephfs_metadata pool to something like 16 PGs. Done. > > - reduce the number of PGs for cephfs_data to something like 512. Done.
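For the record, the two "Done." steps correspond to commands along these lines, with the autoscaler switched off per pool first, as Konstantin suggests:

$ ceph osd pool set cephfs_metadata pg_autoscale_mode off
$ ceph osd pool set cephfs_data pg_autoscale_mode off
$ ceph osd pool set cephfs_metadata pg_num 16
$ ceph osd pool set cephfs_data pg_num 512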

[ceph-users] Re: Balancing PGs across OSDs

2019-12-02 Thread Lars Täuber
0.01 limiting to pools cephfs_data (1) no upmaps proposed Tue, 3 Dec 2019 07:30:24 +0100 Lars Täuber ==> Konstantin Shalygin : > Hi Konstantin, > > > Tue, 3 Dec 2019 10:01:34 +0700 > Konstantin Shalygin ==> Lars Täuber , > ceph-users@ceph.io : > > Please pas
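The output quoted above ("limiting to pools cephfs_data (1) ... no upmaps proposed") comes from an osdmaptool run along these lines; the flag values are only illustrative:

$ ceph osd getmap -o om
$ osdmaptool om --upmap out.txt --upmap-pool cephfs_data --upmap-max 100 --upmap-deviation 0.01
$ cat out.txt        # review the proposed "ceph osd pg-upmap-items ..." lines, then apply them with: bash out.txt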

[ceph-users] Re: Balancing PGs across OSDs

2019-12-02 Thread Lars Täuber
Hi Konstantin, Tue, 3 Dec 2019 10:01:34 +0700 Konstantin Shalygin ==> Lars Täuber , ceph-users@ceph.io : > Please paste your `ceph osd df tree`, `ceph osd pool ls detail`, `ceph > osd crush rule dump`. here it comes: $ ceph osd df tree ID CLASS WEIGHT REWEIGHT SIZE RAW

[ceph-users] Re: Balancing PGs across OSDs

2019-12-02 Thread Lars Täuber
Hi there! Here we have a similar situation. After adding some OSDs to the cluster, the PGs are not equally distributed across the OSDs. The balancing mode is set to upmap. The docs https://docs.ceph.com/docs/master/rados/operations/balancer/#modes say: "This CRUSH mode will optimize the placement o
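For readers landing here first, the upmap balancer mentioned above is enabled roughly like this (already the case in this thread, shown only for context):

$ ceph balancer mode upmap
$ ceph balancer on
$ ceph balancer status     # shows the mode, whether it is active, and the last optimization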

[ceph-users] Re: subtrees have overcommitted (target_size_bytes / target_size_ratio)

2019-11-01 Thread Lars Täuber
Thanks a lot! Lars Fri, 1 Nov 2019 13:03:25 + (UTC) Sage Weil ==> Lars Täuber : > This was fixed a few weeks back. It should be resolved in 14.2.5. > > https://tracker.ceph.com/issues/41567 > https://github.com/ceph/ceph/pull/31100 > > sage > > > On Fri,

[ceph-users] Re: subtrees have overcommitted (target_size_bytes / target_size_ratio)

2019-11-01 Thread Lars Täuber
Is there anybody who can explain the overcommitment calculation? Thanks Mon, 28 Oct 2019 11:24:54 +0100 Lars Täuber ==> ceph-users : > Is there a way to get rid of these warnings with the autoscaler activated, besides > adding new OSDs? > > Yet I couldn't get a satisfactory an

[ceph-users] Re: Dirlisting hangs with cephfs

2019-10-29 Thread Lars Täuber
On Tue, Oct 29, 2019 at 9:04 PM Nathan Fish wrote: > > > Ubuntu's 4.15.0-66 has this bug, yes. -65 is safe and -67 will have the > > fix. > > > > On Tue, Oct 29, 2019 at 4:54 PM Patrick Donnelly > > wrote: > > > > > > On Mon, Oct 28, 2019 at

[ceph-users] Re: Dirlisting hangs with cephfs

2019-10-28 Thread Lars Täuber
Hi! What kind of client (kernel vs. FUSE) do you use? I see a lot of the following errors with the most recent Ubuntu 18.04.3 kernel 4.15.0-66-generic: kernel: [260144.644232] cache_from_obj: Wrong slab cache. inode_cache but object is from ceph_inode_info Other clients with older ker

[ceph-users] Re: subtrees have overcommitted (target_size_bytes / target_size_ratio)

2019-10-28 Thread Lars Täuber
y used to calculate the SIZE? It seems USED(df) = SIZE(autoscale-status). Isn't the RATE already taken into account here? Could someone please explain the numbers to me? Thanks! Lars Fri, 25 Oct 2019 07:42:58 +0200 Lars Täuber ==> Nathan Fish : > Hi Nathan, > > Thu, 24 Oct 2

[ceph-users] Re: subtrees have overcommitted (target_size_bytes / target_size_ratio)

2019-10-24 Thread Lars Täuber
Hi Nathan, Thu, 24 Oct 2019 10:59:55 -0400 Nathan Fish ==> Lars Täuber : > Ah, I see! The BIAS reflects the number of placement groups it should > create. Since cephfs metadata pools are usually very small, but have > many objects and high IO, the autoscaler gives them 4x t

[ceph-users] Re: subtrees have overcommitted (target_size_bytes / target_size_ratio)

2019-10-24 Thread Lars Täuber
ercommited when it is the only pool on a set of OSDs? Best regards, Lars Thu, 24 Oct 2019 09:39:51 -0400 Nathan Fish ==> Lars Täuber : > The formatting is mangled on my phone, but if I am reading it correctly, > you have set Target Ratio to 4.0. This means you have told the balancer >

[ceph-users] Re: subtrees have overcommitted (target_size_bytes / target_size_ratio)

2019-10-24 Thread Lars Täuber
case? - Data is stored outside of the pool? How come this is only the case when the autoscaler is active? Thanks Lars Thu, 24 Oct 2019 10:36:52 +0200 Lars Täuber ==> ceph-users@ceph.io : > My question requires too complex an answer. > So let me ask a simple question: > > What d

[ceph-users] Re: subtrees have overcommitted (target_size_bytes / target_size_ratio)

2019-10-24 Thread Lars Täuber
My question requires too complex an answer. So let me ask a simple question: What does the SIZE column of "osd pool autoscale-status" mean, and where does it come from? Thanks Lars Wed, 23 Oct 2019 14:28:10 +0200 Lars Täuber ==> ceph-users@ceph.io : > Hello everybody! > > What does this

[ceph-users] subtrees have overcommitted (target_size_bytes / target_size_ratio)

2019-10-23 Thread Lars Täuber
Hello everybody! What does this mean? health: HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes 1 subtrees have overcommitted pool target_size_ratio And what does it have to do with the autoscaler? When I deactivate the autoscaler, the warning goes away.
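The warnings come from the per-pool hints (target_size_bytes / target_size_ratio) that the autoscaler checks against the raw capacity of the CRUSH subtree the pool lives in; a small sketch for inspecting and, if set by mistake, clearing them (pool name only as an example):

$ ceph osd pool autoscale-status
$ ceph osd pool set cephfs_data target_size_bytes 0     # clear an absolute size hint
$ ceph osd pool set cephfs_data target_size_ratio 0     # clear a ratio hint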

[ceph-users] Re: CephFS no permissions for subdir

2019-10-09 Thread Lars Täuber
Hi Eugen, Wed, 09 Oct 2019 08:44:28 + Eugen Block ==> ceph-users@ceph.io : > Hi, > > > I'd tried to make this: > > ceph auth caps client.XYZ mon 'allow r' mds 'allow r, allow rws > > path=/XYZ, allow path=/ABC' osd 'allow rw pool=cephfs_data' > > do you want to remove all permissions fr

[ceph-users] CephFS no permissions for subdir

2019-10-09 Thread Lars Täuber
Hi! Is it possible, and if so how, to remove any permission to a subdir for a user? I tried this: ceph auth caps client.XYZ mon 'allow r' mds 'allow r, allow rws path=/XYZ, allow path=/ABC' osd 'allow rw pool=cephfs_data' but got: Error EINVAL: mds capability parse failed, stopped at '
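Since MDS caps are grant-only (there is no way to express "deny /ABC"), the usual approach is to grant caps only for the paths the client should reach; a sketch for a fresh client id, assuming the file system is named cephfs and reusing the paths from the example above:

$ ceph fs authorize cephfs client.XYZ /XYZ rw   # caps only below /XYZ; nothing is granted for /ABC
$ ceph auth get client.XYZ                      # check the resulting mon/mds/osd caps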

[ceph-users] Re: Best osd scenario + ansible config?

2019-09-04 Thread Lars Täuber
Wed, 4 Sep 2019 11:11:14 +0200 Yoann Moulin ==> ceph-users@ceph.io : > On 04/09/2019 at 11:01, Lars Täuber wrote: > > Wed, 4 Sep 2019 10:32:56 +0200 > > Yoann Moulin ==> ceph-users@ceph.io : > >> Hello, > >> > >>> Tue, 3 Sep 2019 11:28:

[ceph-users] Re: Best osd scenario + ansible config?

2019-09-04 Thread Lars Täuber
Wed, 4 Sep 2019 10:32:56 +0200 Yoann Moulin ==> ceph-users@ceph.io : > Hello, > > > Tue, 3 Sep 2019 11:28:20 +0200 > > Yoann Moulin ==> ceph-users@ceph.io : > >> Is it better to put all WAL on one SSD and all DBs on the other one? Or > >> put WAL and DB of the first 5 OSDs on the first SSD an

[ceph-users] Re: Best osd scenario + ansible config?

2019-09-04 Thread Lars Täuber
Hi! Tue, 3 Sep 2019 11:28:20 +0200 Yoann Moulin ==> ceph-users@ceph.io : > Is it better to put all WALs on one SSD and all DBs on the other one? Or put > the WAL and DB of the first 5 OSDs on the first SSD and the other 5 on > the second one? I don't know if this has a relevant impact on the latenc
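One way to preview such a layout is a ceph-volume batch dry run (ceph-volume is also what ceph-ansible drives underneath); the device names here are made up, and with only --db-devices given the WAL ends up on the same SSD as the DB:

$ ceph-volume lvm batch --bluestore /dev/sd{c..l} --db-devices /dev/sda /dev/sdb --report   # --report only previews the data/DB split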