I vote for an SSH orchestrator for a bare metal installation too!
(And no, I'm not able to write it.)
Our second Ceph cluster is underway, and I don't know whether we will ever
update our first cluster (Nautilus) to a containerized version. It was set up
in a special way.
Thanks!
Lars
Hi Paul,
the GPG key of the repo changed on the 4th of June. Is this correct?
Thanks for your buster repo!
Cheers,
Lars
Mon, 18 Nov 2019 20:08:01 +0100
Paul Emmerich ==> Jelle de Jong
:
> We maintain an unofficial mirror for Buster packages:
> https://croit.io/2019/07/07/2019-07-07-debian-mi
+1 from me
I also hope for a bare metal solution for the upcoming versions. At the moment
it is a show stopper for an upgrade to Octopus.
Thanks everybody involved for the great storage solution!
Cheers,
Lars
On Fri, 3 Jul 2020 20:44:02 +0200, Oliver Freyermuth wrote:
> Of course, no solutio
Is there a way to specify partitions? I can only see whole disks/devices,
selected by vendor or model name.
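For context, the kind of drive_groups spec I am looking at, as far as I
understand the docs, would be roughly like this (the paths are made up, and I
don't know whether a partition such as /dev/sdb1 is accepted here at all):
service_type: osd
service_id: osd_with_paths
placement:
  host_pattern: '*'
data_devices:
  paths:
    - /dev/sdb
db_devices:
  paths:
    - /dev/sdc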
Regards,
Lars
On Thu, 18 Jun 2020 09:52:36 +, Eugen Block wrote:
> You'll need to specify drive_groups in a yaml file if you don't deploy
> standalone OSDs:
>
> https://docs.cep
Hi Brett,
I'm far from being an expert, but you might consider RBD mirroring between
EC pools.
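I have not set this up myself, but as far as I understand it the rough shape
would be something like this (all names are placeholders):
$ rbd mirror pool enable <pool> image
$ rbd mirror pool peer add <pool> client.<user>@<remote-cluster>
$ rbd mirror image enable <pool>/<image>
If I recall correctly, the images themselves live in a replicated pool and the
EC pool is only attached via --data-pool, so mirroring is configured on the
replicated pool.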
Cheers,
Lars
On Fri, 27 Mar 2020 06:28:02 +, Brett Randall wrote:
> Hi all
>
> Had a fun time trying to join this list, hopefully you don’t get this message
> 3 times!
>
> On to Ceph… We are l
I don't have any Samba experience. Isn't installing and administering a
Samba server for just one "share" overkill?
Thu, 13 Feb 2020 09:36:31 +0100
"Marc Roos" ==> ceph-users ,
taeuber :
> Via smb, much discussed here
>
> -Original Message-
> Sent: 13 February 2020 09:33
> To:
Hi there!
I have been given the task of connecting a Windows client to our existing Ceph cluster.
I'm looking for experiences or suggestions from the community.
Two possibilities came to mind:
1. an iSCSI target on RBD, exported to Windows
2. NFS-Ganesha on CephFS, exported to Windows
Is there a third way e
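For option 2, my understanding is that the Ganesha side would need an export
block roughly like this (an untested sketch; IDs and paths are made up):
EXPORT {
    Export_Id = 1;
    Path = /;
    Pseudo = /cephfs;
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
    }
}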
yesterday:
https://ceph.io/releases/v14-2-6-nautilus-released/
Cheers,
Lars
Thu, 9 Jan 2020 10:10:12 +0100
Wido den Hollander ==> Neha Ojha , Sasha
Litvak :
> On 12/24/19 9:19 PM, Neha Ojha wrote:
> > The root cause of this issue is the overhead added by the network ping
> > time monitoring f
Hi Konstantin,
Mon, 23 Dec 2019 13:47:55 +0700
Konstantin Shalygin ==> Lars Täuber :
> On 12/18/19 2:16 PM, Lars Täuber wrote:
> > the situation after moving the PGs with osdmaptool is not really better
> > than without:
> >
> > $ ceph osd df class hdd
>
depends on a mount option, which the metadata pool can't know. The name of the
snapshot directory itself is not given.
You can only find the directories that contain snapshots, not the actual snapdirs.
So you get the same information from find -inum, and it's simpler to use:
$ find /mnt/point/
e": "snap-7"
> },
>
>
>
> -----Original Message-
> Cc: ceph-users@ceph.io
> Subject: [ceph-users] Re: list CephFS snapshots
>
> Have you tried "ceph daemon mds.NAME dump snaps" (available since
> mimic)?
>
> ==
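(For reference: the quoted "dump snaps" goes through the admin socket, so it
has to be run on the host where that MDS daemon lives, e.g.
$ ceph daemon mds.<name> dump snaps
where the name is only a placeholder.)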
much unusable space this means, but
I'm sure there is a relevant amount of it.
Thanks for all your patience and support
Lars
Tue, 17 Dec 2019 07:45:24 +0100
Lars Täuber ==> Konstantin Shalygin :
> Hi Konstantin,
>
> the cluster has finished its backfilling.
> I got this
Hi Michael,
thanks for your gist.
This is at least one way to do it, but there are many directories in our cluster.
The "find $1 -type d" takes about 90 minutes to find all 2.6 million
directories.
Is there another (faster) way, e.g. via the MDS?
Cheers,
Lars
Mon, 16 Dec 2019 17:03:41 +
Step
your hints.
Regards,
Lars
Mon, 16 Dec 2019 15:38:30 +0700
Konstantin Shalygin ==> Lars Täuber :
> On 12/16/19 3:25 PM, Lars Täuber wrote:
> > Here it comes.
>
> Maybe there is some bug in osdmaptool: when fewer than one pool is defined,
> do_upmap is not actually executed.
>
Mon, 16 Dec 2019 14:42:49 +0100
Thomas Schneider <74cmo...@gmail.com> ==> Lars Täuber ,
ceph-users@ceph.io :
> Hi,
>
> can you please advise how to verify if and which weight-set is active?
try:
$ ceph osd crush weight-set ls
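(If you also need the weights themselves, they should show up in the
"choose_args" section of
$ ceph osd crush dump
if I remember correctly.)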
Lars
>
> Regards
> Thomas
>
> Am
Hi!
Is there a way to list all snapshots that exist in a (subdirectory of a) CephFS?
I can't use the find command to look for the ".snap" dirs.
I'd like to remove certain (or all) snapshots within a CephFS. But how do I
find them?
Thanks,
Lars
Hi Thomas,
do you have the backward-compatible weight-set still active?
Try removing it with:
$ ceph osd crush weight-set rm-compat
I'm unsure if it solves my similar problem, but the progress looks very
promising.
Cheers,
Lars
Mon, 16 Dec 2019 15:38:30 +0700
Konstantin Shalygin ==> Lars Täuber :
> On 12/16/19 3:25 PM, Lars Täuber wrote:
> > Here it comes.
>
> Maybe there is some bug in osdmaptool: when fewer than one pool is defined,
> do_upmap is not actually executed.
>
> Try like this:
>
&
Mon, 16 Dec 2019 15:17:37 +0700
Konstantin Shalygin ==> Lars Täuber :
> On 12/16/19 2:42 PM, Lars Täuber wrote:
> > There seems to be a bug in nautilus.
> >
> > I think about increasing the number of PG's for the data pool again,
> > because the average
bug in nautilus.
I am thinking about increasing the number of PGs for the data pool again, because
the average number of PGs per OSD is now 76.8.
What do you say?
Thanks,
Lars
Wed, 4 Dec 2019 16:21:33 +0700
Konstantin Shalygin ==> Lars Täuber ,
ceph-users@ceph.io :
> On 12/4/19
Hi Konstantin,
thanks for your suggestions.
> Lars, you have too many PGs for these OSDs. I suggest disabling the PG
> autoscaler and:
>
> - reducing the number of PGs for the cephfs_metada pool to something like 16 PGs.
Done.
>
> - reducing the number of PGs for cephfs_data to something like 512.
Done.
0.01
limiting to pools cephfs_data (1)
no upmaps proposed
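(That output comes from an osdmaptool run along these lines; the exact flags
are from memory and may differ:
$ ceph osd getmap -o om
$ osdmaptool om --upmap upmaps.txt --upmap-pool cephfs_data )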
Tue, 3 Dec 2019 07:30:24 +0100
Lars Täuber ==> Konstantin Shalygin :
> Hi Konstantin,
>
>
> Tue, 3 Dec 2019 10:01:34 +0700
> Konstantin Shalygin ==> Lars Täuber ,
> ceph-users@ceph.io :
> > Please pas
Hi Konstantin,
Tue, 3 Dec 2019 10:01:34 +0700
Konstantin Shalygin ==> Lars Täuber ,
ceph-users@ceph.io :
> Please paste your `ceph osd df tree`, `ceph osd pool ls detail`, `ceph
> osd crush rule dump`.
here it comes:
$ ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE RAW
Hi there!
Here we have a similar situation.
After adding some OSDs to the cluster the PGs are not equally distributed over
the OSDs.
The balancing mode is set to upmap.
The docs https://docs.ceph.com/docs/master/rados/operations/balancer/#modes say:
"This CRUSH mode will optimize the placement o
Thanks a lot!
Lars
Fri, 1 Nov 2019 13:03:25 + (UTC)
Sage Weil ==> Lars Täuber :
> This was fixed a few weeks back. It should be resolved in 14.2.5.
>
> https://tracker.ceph.com/issues/41567
> https://github.com/ceph/ceph/pull/31100
>
> sage
>
>
> On Fri,
Is there anybody who can explain the overcommitment calculation?
Thanks
Mon, 28 Oct 2019 11:24:54 +0100
Lars Täuber ==> ceph-users :
> Is there a way to get rid of these warnings with the autoscaler activated,
> besides adding new OSDs?
>
> Yet I couldn't get a satisfactory an
On Tue, Oct 29, 2019 at 9:04 PM Nathan Fish wrote:
>
> > Ubuntu's 4.15.0-66 has this bug, yes. -65 is safe and -67 will have the
> > fix.
> >
> > On Tue, Oct 29, 2019 at 4:54 PM Patrick Donnelly
> > wrote:
> > >
> > > On Mon, Oct 28, 2019 at
Hi!
What kind of client (kernel vs. FUSE) do you use?
I see a lot of the following problem with the most recent Ubuntu 18.04.3
kernel (4.15.0-66-generic):
kernel: [260144.644232] cache_from_obj: Wrong slab cache. inode_cache but
object is from ceph_inode_info
Other clients with older ker
y
used to calculate the SIZE?
It seems USED(df) = SIZE(autoscale-status)
Isn't the RATE already taken into account here?
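To make my confusion concrete: for a replicated pool with size 3 and 1 TiB of
data stored, I would expect SIZE = 1 TiB and SIZE x RATE = 3 TiB of raw space,
but the numbers look as if the factor of 3 were already included in SIZE.
(That is only my assumption of how the columns relate.)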
Could someone please explain the numbers to me?
Thanks!
Lars
Fri, 25 Oct 2019 07:42:58 +0200
Lars Täuber ==> Nathan Fish :
> Hi Nathan,
>
> Thu, 24 Oct 2
Hi Nathan,
Thu, 24 Oct 2019 10:59:55 -0400
Nathan Fish ==> Lars Täuber :
> Ah, I see! The BIAS reflects the number of placement groups it should
> create. Since cephfs metadata pools are usually very small, but have
> many objects and high IO, the autoscaler gives them 4x t
ercommited when it is the only pool on a set of OSDs?
Best regards,
Lars
Thu, 24 Oct 2019 09:39:51 -0400
Nathan Fish ==> Lars Täuber :
> The formatting is mangled on my phone, but if I am reading it correctly,
> you have set Target Ratio to 4.0. This means you have told the balancer
>
case? - Data is stored outside of the pool?
How come this is only the case when the autoscaler is active?
Thanks
Lars
Thu, 24 Oct 2019 10:36:52 +0200
Lars Täuber ==> ceph-users@ceph.io :
> My question requires too complex an answer.
> So let me ask a simple question:
>
> What d
My question requires too complex an answer.
So let me ask a simple question:
What does the SIZE in "osd pool autoscale-status" mean, and where does it come from?
Thanks
Lars
Wed, 23 Oct 2019 14:28:10 +0200
Lars Täuber ==> ceph-users@ceph.io :
> Hello everybody!
>
> What does this
Hello everybody!
What does this mean?
health: HEALTH_WARN
1 subtrees have overcommitted pool target_size_bytes
1 subtrees have overcommitted pool target_size_ratio
and what does it have to do with the autoscaler?
When I deactivate the autoscaler the warning goes away.
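(As far as I understand, the warning refers to the per-pool settings
target_size_bytes and target_size_ratio, which can be inspected and changed
with something like:
$ ceph osd pool autoscale-status
$ ceph osd pool set <pool> target_size_bytes 0
$ ceph osd pool set <pool> target_size_ratio 0
where <pool> is a placeholder.)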
Hi Eugen,
Wed, 09 Oct 2019 08:44:28 +
Eugen Block ==> ceph-users@ceph.io :
> Hi,
>
> > I'd tried to make this:
> > ceph auth caps client.XYZ mon 'allow r' mds 'allow r, allow rws
> > path=/XYZ, allow path=/ABC' osd 'allow rw pool=cephfs_data'
>
> do you want to remove all permissions fr
Hi!
Is it possible, and if so how, to remove all permissions to a subdir for a user?
I tried this:
ceph auth caps client.XYZ mon 'allow r' mds 'allow r, allow rws path=/XYZ,
allow path=/ABC' osd 'allow rw pool=cephfs_data'
but got:
Error EINVAL: mds capability parse failed, stopped at '
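(My current guess: every "allow" clause in the mds caps needs an access type
such as r or rw, so 'allow path=/ABC' cannot be parsed at all. And since cephx
caps only grant and never deny, the closest I can think of is to grant only
the wanted subtree, e.g.
ceph auth caps client.XYZ mon 'allow r' mds 'allow rws path=/XYZ' osd 'allow rw pool=cephfs_data'
but that is only my assumption.)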
Wed, 4 Sep 2019 11:11:14 +0200
Yoann Moulin ==> ceph-users@ceph.io :
> On 04/09/2019 at 11:01, Lars Täuber wrote:
> > Wed, 4 Sep 2019 10:32:56 +0200
> > Yoann Moulin ==> ceph-users@ceph.io :
> >> Hello,
> >>
> >>> Tue, 3 Sep 2019 11:28:
Wed, 4 Sep 2019 10:32:56 +0200
Yoann Moulin ==> ceph-users@ceph.io :
> Hello,
>
> > Tue, 3 Sep 2019 11:28:20 +0200
> > Yoann Moulin ==> ceph-users@ceph.io :
> >> Is it better to put all WAL on one SSD and all DBs on the other one? Or
> >> put WAL and DB of the first 5 OSDs on the first SSD an
Hi!
Tue, 3 Sep 2019 11:28:20 +0200
Yoann Moulin ==> ceph-users@ceph.io :
> Is it better to put all WAL on one SSD and all DBs on the other one? Or put
> WAL and DB of the first 5 OSDs on the first SSD and the 5 others on
> the second one.
I don't know if this has a relevant impact on the latenc
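(For what it's worth, my understanding is that such a layout would be created
per OSD with something like
$ ceph-volume lvm create --data /dev/sdc --block.db /dev/sdj1 --block.wal /dev/sdk1
where the device paths are made up.)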