Hi,
the autofs on our clients is configured to use LDAP, so there's one
more layer.
This is the current setup:
# autofs on LDAP server
/etc/sysconfig/autofs:SEARCH_BASE="ou=cephfs,ou=AUTOFS,[...]"
# LDAP config
-fstype=ceph,name=autofs,secretfile=/etc/ceph/autofs.key,nodev,nosuid ,,:/path/
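For reference, a minimal sketch of the manual mount such an entry resolves to; the monitor host, export path, and mount point below are placeholders, not taken from the setup above:
mount -t ceph mon1.example.com:/path/ /mnt/cephfs -o name=autofs,secretfile=/etc/ceph/autofs.key,nodev,nosuid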
On Wed, 17 Jun 2020 at 02:14, Seena Fallah wrote:
> Hi all.
> Is there any way that I could calculate how much time it takes to add
> OSD to my cluster and get rebalanced or how much it takes to out OSD
> from my cluster?
>
This is very dependent on all the variables of a cluster, from controlle
Hi Eugen,
I configured auto.master and a mount-point file auto.ceph with the
following entry. Could you show me what your autofs entry looks like?
Thanks
ceph -fstype=ceph,name=cephfs,conf=/etc/ceph/mini_conf.conf,secretfile=/etc/ceph/client.cephfs,noatime :/
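For comparison, a minimal sketch of the auto.master line that would reference such a map; the mount point and timeout are placeholders, not taken from the mail above:
/mnt/auto /etc/auto.ceph --timeout=60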
On Mon, J
Hi all.
Is there any way that I could calculate how much time it takes to add
OSD to my cluster and get rebalanced or how much it takes to out OSD
from my cluster?
Thanks.
Hi Jeff, how did you deduce this from the log file? I can't see
where the 'error' is.
-Original Message-
Subject: Re: [NFS-Ganesha-Support] bug in nfs-ganesha? and cephfs?
On Sun, 2020-06-14 at 15:17 +0200, Marc Roos wrote:
> When rsyncing to an nfs-ganesha-exported cephfs, the process ha
We have a dev cluster for testing things on Ceph that has only a single
1-gig NIC for its network. Although it can work, we noticed a major impact on
latencies when the cluster is balancing or under heavy load.
On Tue, Jun 16, 2020 at 07:54 Olivier AUDRY wrote:
> hello
>
> as far as I know there is no perf advan
Hi,
I have a question regarding Ceph CRUSH. I have been going through the
crush.h file. It says that struct crush_bucket **buckets (below) is an array
of pointers. My understanding is that this particular array of pointers is a
collection of addresses of six scalar values, namely __s32 id; __u16
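For reference, a simplified sketch of the relevant declarations, abridged from what recent crush.h versions contain (treat it as illustrative, not authoritative): each element of buckets is a pointer to a whole struct crush_bucket, not to a single scalar.
/* abridged sketch of the layout described in crush.h */
struct crush_bucket {
    __s32 id;       /* bucket id (negative) */
    __u16 type;     /* bucket type, non-zero */
    __u8  alg;      /* CRUSH_BUCKET_* algorithm */
    __u8  hash;     /* CRUSH_HASH_* hash function */
    __u32 weight;   /* 16.16 fixed-point weight */
    __u32 size;     /* number of child items */
    __s32 *items;   /* ids of child buckets/devices */
};
struct crush_map {
    struct crush_bucket **buckets;  /* array of max_buckets pointers,
                                       each pointing to one crush_bucket
                                       (or NULL for unused slots) */
    /* ... */
    __s32 max_buckets;
    /* ... */
};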
Hi,
we had bad blocks on one OSD and around the same time a network switch
outage, which seems to have caused some corruption on the mon service.
> # ceph -s
  cluster:
    id:     d7c5c9c7-a227-4e33-ab43-3f4aa1eb0630
    health: HEALTH_WARN
            1 daemons have recently crashed
Thanks Simon. As you mentioned, I added the missing pieces and now everything
works fine.
On Tue, Jun 16, 2020 at 9:30 AM Simon Sutter wrote:
> Hello,
>
>
> When you deploy ceph to other nodes with the orchestrator, they "just"
> have the containers you deployed to them.
> This means in your case,
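A minimal sketch, assuming a cephadm/Octopus setup, of how to list which daemon containers each node is running (the hostname argument is optional):
ceph orch ps
ceph orch ps <hostname>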
I'm happy to announce another release of the go-ceph API bindings. This is
a regular release following our every-two-months release cadence.
https://github.com/ceph/go-ceph/releases/tag/v0.4.0
The bindings aim to play a similar role to the "pybind" python bindings in the
ceph tree but for
Our install-deps.sh script is looking for "python36-Cython" rather than
"python3-Cython", so I guess that needs to be fixed in the nautilus
version of install-deps.sh.
On 6/15/20 7:26 AM, Giulio Fidente wrote:
> hi David, thanks for helping
>
> python3-Cython seems to be already in the centos8 PowerTools re
Hi everyone,
I'm looking for a proposal for this month's Tech Talk on the 25th at
17:00 UTC. If you have something you want to share with the Ceph
community, consider sending me your proposal:
https://ceph.io/ceph-tech-talks/
--
Mike Perez
He/Him
Ceph Community Manager
Red Hat Los Angele
We are using multiple filesystems in production (Nautilus). While we
have had a number of issues over the past year, I don't think any of
them are specific to the use of multiple filesystems.
On Tue, Jun 16, 2020 at 8:35 AM Simon Sutter wrote:
>
> Hello,
>
>
> What is the current status, of using
hello
as far as I know there is no perf advantage to doing this. Personally I'm
doing it in order to monitor the two bandwidth usages separately.
oau
On Tuesday, 16 June 2020 at 16:42 +0200, Marcel Kuiper wrote:
> Hi
>
> I wonder if there is any (theoretical) advantage running a separate
> backend
Hi
I wonder if there is any (theoretical) advantage to running a separate
backend network next to the public network (through VLAN separation) over
a single interface.
I googled a lot, and while some blogs advise doing so, they do not give any
argument that supports this advice.
Any insights on thi
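For reference, a minimal sketch of how that split is usually configured; the subnets are placeholders. cluster_network carries OSD replication and recovery traffic, while clients keep talking over the public_network:
[global]
public_network  = 192.168.10.0/24
cluster_network = 192.168.20.0/24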
On Mon, Jun 15, 2020 at 11:31 PM kefu chai wrote:
>
> On Mon, Jun 15, 2020 at 7:27 PM Giulio Fidente wrote:
> >
> > hi David, thanks for helping
> >
> > python3-Cython seems to be already in the centos8 PowerTools repo:
> >
> > http://mirror.centos.org/centos-8/8/PowerTools/x86_64/os/Packages/
>
Hello,
thanks a lot for your answer. I will try to investigate the second problem,
and I will keep you informed if I find something.
For the first one, I am experimenting with a cache pool as you pointed me to; I
had never tried it because the ceph documentation doesn't really encourage this
with rbd. Fo
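A minimal sketch of the cache-tier setup steps, assuming a base pool named rbd and a cache pool named rbd-cache (both names are placeholders):
ceph osd tier add rbd rbd-cache
ceph osd tier cache-mode rbd-cache writeback
ceph osd tier set-overlay rbd rbd-cache
ceph osd pool set rbd-cache hit_set_type bloom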
Hello helpful mailing list folks! After a networking outage, I had an MDS rank
failure (originally 3 MDS ranks) that has left my CephFS cluster in bad
shape. I worked through most of the Disaster Recovery guide
(https://docs.ceph.com/docs/nautilus/cephfs/disaster-recovery-experts/#disaster-re
Hello,
What is the current status of using multiple cephfs?
In Octopus I get lots of warnings that this feature is still not fully
tested, but the latest entry regarding multiple cephfs on the mailing list is
from about 2018.
Is anyone using multiple cephfs in production?
Thanks in advance
Those df's and PG numbers all look fine to me.
I wouldn't start adjusting pg_num now -- leave the autoscaler module disabled.
Some might be concerned about having 190 PGs on an OSD, but this is
fine provided you have ample memory (at least 3GB per OSD).
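A minimal sketch of how to check and, if needed, raise the per-OSD memory budget (the 4 GiB value below is only an example):
ceph config get osd osd_memory_target
ceph config set osd osd_memory_target 4294967296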
Cheers, Dan
On Tue, Jun 16, 2020 at 2:23 P
Oh OK. We have two types of SSDs, 1.8TB and 3.6TB.
The 1.8TB ones have around 90-100 PGs and the 3.6TB ones around 150-190 PGs.
Here is the output:
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    ssd       956 TiB     360 TiB     595 TiB     596 TiB      62.
On Tue, Jun 16, 2020 at 2:00 PM Boris Behrens wrote:
>
> See inline comments
>
> On Tue, 16 Jun 2020 at 13:29, Zhenshi Zhou wrote:
> >
> > I did this on my cluster and there was a huge number of pg rebalanced.
> > I think setting this option to 'on' is a good idea if it's a brand new
> >
See inline comments
On Tue, 16 Jun 2020 at 13:29, Zhenshi Zhou wrote:
>
> I did this on my cluster and there was a huge number of pg rebalanced.
> I think setting this option to 'on' is a good idea if it's a brand new
> cluster.
>
On our new cluster we enabled them, but not on our primary
Install RHEL7/CentOS7 minimal
Install the rpms from http://download.ceph.com/rpm-nautilus/el7/
Check this manual
https://ceph.readthedocs.io/en/latest/install/index_manual/#install-manual
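A minimal sketch of the yum repo file pointing at that location (the GPG key URL is the one published on download.ceph.com; adjust the release and arch as needed):
[ceph]
name=Ceph packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
After that: yum install ceph ceph-common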
-Original Message-
To: Amudhan P
Cc: ceph-users
Subject: [ceph-users] Re: Ceph latest install
Could anyone point me to the latest Ceph install guide for Ubuntu 20.04?
I need a proper guide, please help.
On Sat, Jun 13, 2020 at 9:28 PM Amudhan P wrote:
>
> https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/
>
>
> On Sat, Jun 13, 2020 at 2:31 PM masud parvez
> wrote:
>
>> Could anyone give me the latest version ceph install guide fo
I did this on my cluster and a huge number of PGs were rebalanced.
I think setting this option to 'on' is a good idea if it's a brand-new
cluster.
On Tue, Jun 16, 2020 at 7:07 PM, Dan van der Ster wrote:
> Could you share the output of
>
> ceph osd pool ls detail
>
> ?
>
> This way we can see how the po
Could you share the output of
ceph osd pool ls detail
?
This way we can see how the pools are configured and help recommend if
pg_autoscaler is worth enabling.
Cheers, Dan
On Tue, Jun 16, 2020 at 11:51 AM Boris Behrens wrote:
>
> I read about the "warm" option and we are already discussin
Hi all,
I have a question regarding the following rule opcodes in the Ceph CRUSH map:
enum crush_opcodes {
    /*! do nothing */
    CRUSH_RULE_NOOP = 0,
    CRUSH_RULE_TAKE = 1,           /* arg1 = value to start with */
    CRUSH_RULE_CHOOSE_FIRSTN = 2,  /* arg1 = num items to pick */
                                   /* arg2
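For context, a minimal sketch of how these opcodes surface as steps in a decompiled CRUSH rule (the rule name and the host type are placeholders; the chooseleaf/emit opcodes are defined further down in the same enum):
rule replicated_rule {
    id 0
    type replicated
    step take default                    # CRUSH_RULE_TAKE
    step chooseleaf firstn 0 type host   # CRUSH_RULE_CHOOSELEAF_FIRSTN
    step emit                            # CRUSH_RULE_EMIT
}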
Are you observing something similar to this thread:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FBGIJZNFG445NMYGO73PFNQL2ZB3ZF2Z/#FBGIJZNFG445NMYGO73PFNQL2ZB3ZF2Z
?
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
Fro
I read about the "warn" option and we are already discussing it.
I don't know whether the PGs need tuning, what the impact would be, or
whether there will be any difference if we enable it.
The last Ceph admin, who has since left, created a ticket, and I am not
particularly familiar with Ceph. So I ne
Hi,
I agree with "someone" -- it's not a good idea to just naively enable
pg_autoscaler on an existing cluster with lots of data and active
customers.
If you're curious about this feature, it would be harmless to start
out by enabling it with pg_autoscale_mode = warn on each pool.
This way you ca
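A minimal sketch of the commands involved, assuming a pool named rbd (repeat the pool command per pool, or set the global default for new pools):
ceph mgr module enable pg_autoscaler
ceph osd pool set rbd pg_autoscale_mode warn
ceph osd pool autoscale-status
ceph config set global osd_pool_default_pg_autoscale_mode warn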
Hi,
I would like to enable the pg_autoscaler on our nautilus cluster.
Someone told me that I should be really careful not to cause customer impact.
Maybe someone can share some experience with this?
The cluster has 455 OSDs on 19 hosts with ~17000 PGs and ~1 petabyte of
raw storage, of which ~600TB
Hi Reed,
you might want to use the bluefs-bdev-migrate command, which simply moves
BlueFS files from a source device to a destination, i.e. from the main device
to the DB device in your case.
It needs neither OSD redeployment nor creation of an additional/new device.
It doesn't guarantee that spillover won't reoccur one day, though
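A minimal sketch of the invocation, assuming OSD id N with an existing block.db symlink under the OSD directory (stop the OSD first):
systemctl stop ceph-osd@N
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-N \
    --devs-source /var/lib/ceph/osd/ceph-N/block \
    --dev-target /var/lib/ceph/osd/ceph-N/block.db
systemctl start ceph-osd@N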
Hi Raymond,
I'm pinging this old thread because we hit the same issue last week.
Is it possible that when you upgraded to nautilus you ran `ceph osd
require-osd-release nautilus` but did not run `ceph mon enable-msgr2`
?
We were in that state (intentionally), and started getting the `unable
to o
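A minimal sketch of how to check whether the mons are advertising msgr2 addresses; mons that only show v1: entries have not had msgr2 enabled yet:
ceph mon dump | grep v2: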
As Paul already answered in your previous thread, you need to correct
the fsid in your ceph.conf. ceph-disk activate-all should work as
soon as the config file is correct.
Quoting Zhenshi Zhou:
Yep, I think the ceph_fsid tells OSDs how to recognize the cluster. It
should be the same
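A minimal sketch of the relevant ceph.conf fragment; the fsid must match the cluster's, which can be read with `ceph fsid` on a working node (the UUID below is a placeholder):
[global]
fsid = aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee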