Hi all,
Just wanted to (double) check something – we’re in the process of luminous ->
mimic upgrades for all of our clusters – particularly this section regarding
the MDS steps:
• Confirm that only one MDS is online and is rank 0 for your FS:
# ceph status
• Upgrade the last remaining MDS daemon by installing the new packages and
restarting the daemon
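For completeness, the sequence we’re planning to run beforehand to get down to
that single rank-0 MDS is below – a sketch based on our reading of the upgrade
notes, with <fs_name> as a placeholder and assuming only one extra rank:
• Reduce the number of ranks to one and deactivate any non-zero ranks:
# ceph fs set <fs_name> max_mds 1
# ceph mds deactivate <fs_name>:1
• Stop all standby MDS daemons on their hosts:
# systemctl stop ceph-mds.target
• Then the check above:
# ceph status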
s get damaged? You had 3x replication on the pool?
-----Original Message-----
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: Tuesday, 4 June 2019 1:14
To: James Wilkins
Cc: ceph-users
Subject: Re: [ceph-users] CEPH MDS Damaged Metadata - recovery steps
On
Hi all,
We’re after a bit of advice to ensure we’re approaching this the right way.
(version: 12.2.12, multi-MDS, dirfrag enabled)
We have corrupt metadata as identified by ceph:
health: HEALTH_ERR
2 MDSs report damaged metadata
Asking the MDS via damage ls returns:
{
"dam
Hello list,
I'm looking for some more information relating to CephFS and the 'Q' size,
specifically how to diagnose what contributes towards it rising.
Ceph Version: 11.2.0.0
OS: CentOS 7
Kernel (Ceph Servers): 3.10.0-514.10.2.el7.x86_64
Kernel (CephFS Clients): 4.4.76-1.el7.elrepo.x86_64 - usi
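For reference, the ways we know of to watch it so far (a sketch – the MDS name
is a placeholder, and dump_ops_in_flight assumes your MDS exposes the op
tracker over the admin socket):
# ceph daemonperf mds.<name>
(live, top-like view of the MDS perf counters, including q)
# ceph daemon mds.<name> perf dump mds
(one-off dump of the same counters)
# ceph daemon mds.<name> dump_ops_in_flight
(shows which client requests are currently queued or being processed)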
From: John Spray [mailto:jsp...@redhat.com]
Sent: 23 May 2017 13:51
To: James Wilkins
Cc: Users, Ceph
Subject: Re: [ceph-users] MDS Question
On Tue, May 23, 2017 at 1:42 PM, James Wilkins wrote:
> Quick question on CephFS/MDS but I can’t find this documented
> (apologies if it is)
>
>
Quick question on CephFS/MDS but I can't find this documented (apologies if it
is).
What does the q: in a ceph daemon perf dump mds represent?
[root@hp3-ceph-mds2 ~]# ceph daemon /var/run/ceph/ceph-mds.hp3-ceph-mds2.ceph.hostingp3.local.asok perf dump mds
{
"mds": {
"requ
Apologies if this is documented, but I could not find any clear-cut advice.
Is it better to have a higher PG count for the metadata pool, or the data pool,
of a CephFS filesystem?
If I look at
http://www.slideshare.net/XiaoxiChen3/cephfs-jewel-mds-performance-benchmark -
specifically slide 06 - I
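To make the question concrete, the sort of split we're weighing up looks like
the below – the pool names and PG counts are purely illustrative, not a
recommendation:
# ceph osd pool create cephfs_metadata 128 128
# ceph osd pool create cephfs_data 2048 2048
# ceph fs new cephfs cephfs_metadata cephfs_data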
Hello,
Hoping to pick other users' brains in relation to production CephFS deployments,
as we're preparing to deploy CephFS to replace Gluster for our container-based
storage needs.
(Target OS is CentOS 7 for both servers/clients & the latest Jewel release)
o) Based on our performance testing we're se
Hello,
Wondering if anyone else has come across an issue we're having with our POC Ceph
cluster at the moment.
Some details about its setup:
6 x Dell R720 (20 x 1TB drives, 4 x SSD CacheCade), 4 x 10Gb NICs
4 x generic white-label servers (24 x 2 4TB disks, RAID-0), 4 x 10Gb NICs
3 x Dell R620 - Act