>> Answers for all questions are no.

ok, thanks.

>> A feature that limits the number of caps for an individual client is on 
>> our to-do list.

That's great!

As a workaround, I'm now running a small bash script from cron on the clients 
(see below) to drop caches when the ceph_dentry_info slab grows too big. 
Since then, my MDS seems to be pretty happy.

The only thing is that drop_caches during a big rsync is not fast enough 
(purging entries seems slower than adding new ones), so it can take hours, 
and memory still increases a little bit.
I'm not sure whether it's the same behavior when the MDS tries to revoke caps 
from the client.
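
To watch this from the MDS side, something like the following should show 
per-session cap counts (mds.x is a placeholder for your daemon name, and the 
exact JSON field names can vary between releases):

# list client sessions with their cap counts via the MDS admin socket
ceph daemon mds.x session ls | grep -E '"num_caps"|"inst"'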




dropcephinodecache.sh

#!/bin/bash
# Exit if another copy of this script is already running.
if pidof -o %PPID -x "dropcephinodecache.sh" >/dev/null; then
        echo "Process already running"
        exit 1
fi

# Active object count (column 2 of /proc/slabinfo, needs root) for the
# ceph_dentry_info (kernel client) or fuse_inode (fuse client) slab.
value=$(awk '/^ceph_dentry_info|^fuse_inode/ {print $2; exit}' /proc/slabinfo)

# Drop dentry/inode caches once the slab grows past 500k entries.
if [ "${value:-0}" -gt 500000 ]; then
   echo "Flush inode cache"
   echo 2 > /proc/sys/vm/drop_caches
fi
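
For reference, the script is run from cron; an /etc/cron.d entry along these 
lines would do it (the five-minute interval and install path are just 
illustrative, not my exact setup):

# run the slab check every 5 minutes (interval and path are examples)
*/5 * * * * root /usr/local/bin/dropcephinodecache.sh >/dev/null 2>&1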


----- Original Message -----
From: "Zheng Yan" <uker...@gmail.com>
To: "aderumier" <aderum...@odiso.com>
Cc: "ceph-users" <ceph-users@lists.ceph.com>
Sent: Monday, January 7, 2019 06:51:14
Subject: Re: [ceph-users] cephfs : rsync backup create cache pressure on clients, filling caps

On Fri, Jan 4, 2019 at 11:40 AM Alexandre DERUMIER <aderum...@odiso.com> wrote: 
> 
> Hi, 
> 
> I'm currently doing cephfs backups through a dedicated client mounting the 
> whole filesystem at root. 
> Other clients mount parts of the filesystem. (kernel cephfs clients) 
> 
> 
> I have around 22 million inodes. 
> 
> Before the backup, I have around 5M caps loaded by clients: 
> 
> # ceph daemonperf mds.x.x 
> 
> ---------------mds---------------- --mds_cache--- ---mds_log---- -mds_mem- --mds_server-- mds_ -----objecter------ purg 
> req rlat fwd inos caps exi imi |stry recy recd|subm evts segs|ino dn |hcr hcs hsr |sess|actv rd wr rdwr|purg| 
> 118 0 0 22M 5.3M 0 0 | 6 0 0 | 2 120k 130 | 22M 22M|118 0 0 |167 | 0 2 0 0 | 0 
> 
> 
> 
> When the backup is running, reading all the files, the caps increase to the 
> max (and even a little bit more): 
> 
> # ceph daemonperf mds.x.x 
> ---------------mds---------------- --mds_cache--- ---mds_log---- -mds_mem- --mds_server-- mds_ -----objecter------ purg 
> req rlat fwd inos caps exi imi |stry recy recd|subm evts segs|ino dn |hcr hcs hsr |sess|actv rd wr rdwr|purg| 
> 155 0 0 20M 22M 0 0 | 6 0 0 | 2 120k 129 | 20M 20M|155 0 0 |167 | 0 0 0 0 | 0 
> 
> Then the MDS tries to recall caps from the other clients, and I'm getting: 
> 2019-01-04 01:13:11.173768 cluster [WRN] Health check failed: 1 clients failing to respond to cache pressure (MDS_CLIENT_RECALL) 
> 2019-01-04 02:00:00.000073 cluster [WRN] overall HEALTH_WARN 1 clients failing to respond to cache pressure 
> 2019-01-04 03:00:00.000069 cluster [WRN] overall HEALTH_WARN 1 clients failing to respond to cache pressure 
> 
> 
> 
> Doing a simple 
> echo 2 | tee /proc/sys/vm/drop_caches 
> on the backup server frees the caps again: 
> 
> # ceph daemonperf x 
> ---------------mds---------------- --mds_cache--- ---mds_log---- -mds_mem- --mds_server-- mds_ -----objecter------ purg 
> req rlat fwd inos caps exi imi |stry recy recd|subm evts segs|ino dn |hcr hcs hsr |sess|actv rd wr rdwr|purg| 
> 116 0 0 22M 4.8M 0 0 | 4 0 0 | 1 117k 131 | 22M 22M|116 1 0 |167 | 0 2 0 0 | 0 
> 
> 
> 
> 
> Some questions here: 
> 
> ceph side 
> --------- 
> Is it possible to set up some kind of priority between clients, to force 
> retrieving caps from a specific client? 
> Is it possible to limit the number of caps for a client? 
> 
> 
> client side 
> ----------- 
> I have tried vm.vfs_cache_pressure=40000 to reclaim inode entries faster, 
> but the server has 128GB of RAM. 
> Is it possible to limit the number of inodes in cache on Linux? 
> Is it possible to tune something on the ceph mount point? 
> 

Answers for all questions are no. A feature that limits the number of 
caps for an individual client is on our to-do list. 

> 
> Regards, 
> 
> Alexandre 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
