Hmm, I asked on the ML some days ago :) Most likely you hit the kernel bug
that is fixed by commit 5e804ac482 ("ceph: don't invalidate page cache when
inode is no longer used"). The fix is in 4.4 but not in 4.2. I haven't had a
chance to play with 4.4 yet; it would be great if you could give it a try.
For M
It makes sense to me to run the MDS inside Docker or k8s, as the MDS is
stateless. But the Mon and OSD do keep data locally; what's the motivation to
run them in Docker?
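Just to sketch what I mean about the MDS being the easy case to containerize
(the image tag and the daemon id "a" below are only placeholders, not a
recommendation): all it needs from the host is the cluster config and its own
keyring, plus host networking to reach the mons, so roughly something like:

    # rough sketch only; image/tag and the mds id "a" are placeholders
    docker run -d --name ceph-mds-a --net=host \
        -v /etc/ceph:/etc/ceph:ro \
        -v /var/lib/ceph/mds/ceph-a:/var/lib/ceph/mds/ceph-a \
        ceph/ceph:v14.2 \
        ceph-mds -f --id a

The Mon and OSD, by contrast, drag their local stores along, which is why I am
asking about the motivation there.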
> To: ceph-users@lists.ceph.com
> From: d...@redhat.com
> Date: Thu, 30 Jun 2016 08:36:45 -0400
> Subject: Re: [ceph-users] Running ceph in docker
> From: uker...@gmail.com
> Date: Tue, 5 Jul 2016 21:14:12 +0800
> To: kenneth.waege...@ugent.be
> CC: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] mds0: Behind on trimming (58621/30)
>
> On Tue, Jul 5, 2016 at 7:56 PM, Kenneth Waegeman wrote:
> >
> >
> > On 04/07/16 11:22, Kenneth Waegeman wrote:
+However, it also introduced a regression that could cause MDS damage.
+Therefore, we do *not* recommend that Jewel users upgrade to this version -
+instead, we recommend upgrading directly to v10.2.9 in which the regression is
+fixed.
It looks like this version is NOT production ready. Curious wh
Understood, thanks Abhishek.
So 10.2.9 will not be another release cycle but just 10.2.8 + the mds fix,
and is expected to be out soon, right?
2017-07-12 23:51 GMT+08:00 Abhishek L :
> On Wed, Jul 12, 2017 at 9:13 PM, Xiaoxi Chen wrote:
>> +However, it also introduced a regression that could
We did try to use DNS to hide the IPs and achieve a kind of HA, but it failed:
mount.ceph resolves whatever you provide to an IP address and passes that to
the kernel.
2017-02-28 16:14 GMT+08:00 Robert Sander :
> On 28.02.2017 07:19, gjprabu wrote:
>
> > How to hide internal ip address on ceph
Well, I think the argument here is not all about a security gain; it is just
not user-friendly to have "df" show the 7 IPs of the monitors. It would be
much better if users saw something like "mycephfs.mydomain.com".
And using DNS gives you the flexibility of changing your monitor quorum
members, without not
client was feeling like you are trying to attach
another fs...
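Roughly what we tried (the hostname and paths here are made-up examples):
even when you mount through a single DNS name, mount.ceph resolves it before
the kernel ever sees it, so df still reports the raw monitor IPs:

    # hypothetical names; mount.ceph turns the DNS name into monitor IPs
    # and hands the IPs to the kernel, so the friendly name is lost
    mount -t ceph mon.mycephfs.mydomain.com:6789:/ /mnt/cephfs \
          -o name=admin,secretfile=/etc/ceph/admin.secret
    df -h /mnt/cephfs   # Filesystem column shows 10.x.x.x:6789:/ , not the name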
2017-03-02 0:29 GMT+08:00 Wido den Hollander :
>
> > On 1 March 2017 at 16:57, Sage Weil wrote:
> >
> >
> > On Wed, 1 Mar 2017, Wido den Hollander wrote:
> > > > On 1 March 2017 at 15:40,
36                                                                2.8P  108T  2.7P   4%  /mnt/slc_cephFS_8c285b3b59a843b6aab623314288ee36
10.135.3.136:6789:/sharefs_prod/8c285b3b59a843b6aab623314288ee36  2.7P   91T  2.6P   4%  /mnt/lvs_cephFS_8c285b3b59a843b6aab623314288ee36
But we do have 5/7 mons for each cluster.
2017-03-02 7:42 GMT+08:00 Xiaoxi Chen :
2017-03-02 23:25 GMT+08:00 Ilya Dryomov :
> On Thu, Mar 2, 2017 at 1:06 AM, Sage Weil wrote:
>> On Thu, 2 Mar 2017, Xiaoxi Chen wrote:
>>> > Still applies. Just create a Round Robin DNS record. The clients will obtain a new monmap while they are connected to the
Hi,
    From the admin socket of the MDS, I got the following data on our
production cephfs env: roughly we have 585K inodes and almost the same
number of caps, but we have >2x as many dentries as inodes.
    I am pretty sure we don't use hard links intensively (if any).
    And the #ino matches "rados ls --pool $
        9,
        "strays_created": 706120,
        "strays_purged": 702561
    }
    "mds_mem": {
        "ino": 584974,
    }
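For reference, the counters above come straight from the MDS admin socket;
something like this dumps them (the daemon id "a" is a placeholder for ours):

    # run on the host that has the MDS admin socket; "a" is a placeholder id
    # "mds_mem" holds the ino/dn/cap counters, "mds_cache" the strays_* ones
    ceph daemon mds.a perf dump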
I do have a cache dump from the MDS via the admin socket; is there
anything I can get from it to be 100% sure?
Xiaoxi
2017-03-07 22:20 GMT+08:
Yeah, I checked the dump; it is truly the known issue.
Thanks
2017-03-08 17:58 GMT+08:00 John Spray :
> On Tue, Mar 7, 2017 at 3:05 PM, Xiaoxi Chen wrote:
>> Thanks John.
>>
>> Very likely, note that mds_mem::ino + mds_cache::strays_created ~=
>> mds::inodes,
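Quick arithmetic with the numbers quoted earlier in the thread (the
mds::inodes value itself was cut off here, so this only shows the left-hand
side of that identity):

    # mds_mem::ino + mds_cache::strays_created from the perf dump above
    echo $((584974 + 706120))   # 1291094, to be compared against mds::inodes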
We go with upstream releases, mostly Nautilus now; we are probably among the
most aggressive serious production users (i.e. tens of PB+).
I will vote for November for several reasons:
1. Q4 is the holiday season and production rollouts are usually blocked,
especially storage-related changes, which u
hin 15 mins upgrade window...
Wido den Hollander wrote on Thu, Jul 25, 2019 at 3:39 PM:
>
>
> On 7/25/19 9:19 AM, Xiaoxi Chen wrote:
> > We had hit this case in production, but my solution was to change
> > min_size = 1 immediately so that the PG went back to active right after.
> >
> &
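In concrete terms, the workaround described in the quoted text is just the
following (the pool name is a placeholder, and min_size should be raised
back once recovery completes):

    # "mypool" is a placeholder; lowering min_size lets the PG go active
    # with a single surviving replica
    ceph osd pool set mypool min_size 1
    # ... wait for recovery to finish, then put it back
    ceph osd pool set mypool min_size 2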
One tricky thing is that each level of RocksDB lives 100% on SSD or 100% on
HDD, so either you tweak the RocksDB configuration or there will be a huge
waste; e.g. a 20GB DB partition makes no difference compared to a 3GB one
(under the default RocksDB configuration).
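As a rough sketch of that arithmetic (assuming stock RocksDB defaults of
max_bytes_for_level_base = 256 MB and a 10x level multiplier; real BlueStore
sizing has more moving parts, e.g. the WAL and compaction overhead):

    # sketch only: a level is counted as on the DB (SSD) partition only if
    # the whole level fits, otherwise it spills entirely to the slow device
    db_gb=20                 # try 3, 20, 30, 300 ...
    used_mb=0
    level_mb=256             # L1 target size under default RocksDB settings
    while [ $((used_mb + level_mb)) -le $((db_gb * 1024)) ]; do
        used_mb=$((used_mb + level_mb))
        level_mb=$((level_mb * 10))
    done
    echo "full levels on the ${db_gb}G DB partition: ${used_mb} MB"
    # db_gb=3 and db_gb=20 both print 2816 MB, which is the point above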
Janne Johansson wrote on Tue, Jan 14, 2020 at 4 PM