Re: [ceph-users] Move ceph admin node to new other server

2018-04-10 Thread Nghia Than
I appreciate your kindness, Paul.

On Wed, Apr 11, 2018 at 1:47 AM, Paul Emmerich wrote:
> http://docs.ceph.com/ceph-deploy/docs/gatherkeys.html
>
> 2018-04-10 20:39 GMT+02:00 Nghia Than:
>> Hi Paul,
>>
>> Thanks for your information.
>>
>> May I k
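For reference, a minimal sketch of the gatherkeys step linked above; the monitor hostname mon1 and the working directory are placeholders, not taken from the thread:

    cd ~/ceph-deploy                  # the working directory that holds ceph.conf
    ceph-deploy gatherkeys mon1       # pull the cluster keyrings from a monitor host
    ls -l *.keyring                   # the collected keys land in this directory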

Re: [ceph-users] Move ceph admin node to new other server

2018-04-10 Thread Nghia Than
…any server) and copy to new node. Thanks,

On Wed, Apr 11, 2018 at 1:25 AM, Paul Emmerich wrote:
> Hi,
>
> yes, that folder contains everything you need. You can also use
> ceph-deploy gatherkeys to get them from your cluster.
>
> Paul
>
> 2018-04-09 10:04 GMT
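A minimal sketch of the "copy from any server" route mentioned here, exporting the admin key from a node that already has it; hostnames and paths are placeholders:

    # on any node with a working client.admin keyring
    ceph auth get client.admin -o ceph.client.admin.keyring
    scp ceph.client.admin.keyring ceph.conf new-admin-node:~/ceph-deploy/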

[ceph-users] Move ceph admin node to new other server

2018-04-09 Thread Nghia Than
Hello,

We use one server as the deploy node (called ceph-admin-node) for 3 MON and 4 OSD nodes. We created a folder called *ceph-deploy* from which all the nodes were deployed. May we move this folder to another server?

This folder contains the following files:

total 1408
-rw------- 1 root root 113 Oct 26
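A minimal sketch of moving the working directory, assuming it lives in /root/ceph-deploy and the new server is reachable as new-admin-node (both names are placeholders):

    # on the old admin node: copy the whole directory, preserving permissions
    rsync -av /root/ceph-deploy/ root@new-admin-node:/root/ceph-deploy/
    # on the new admin node: install ceph-deploy, then check the cluster is reachable
    ceph -s --conf /root/ceph-deploy/ceph.conf \
            --keyring /root/ceph-deploy/ceph.client.admin.keyring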

Re: [ceph-users] DELL R620 - SSD recommendation

2018-03-21 Thread Nghia Than

Re: [ceph-users] Performance issues on Luminous

2018-01-05 Thread Nghia Than
> Are there any new parameters for rbd in Luminous? Maybe I forgot about
> some performance tricks? If more information is needed, feel free to ask.
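Not an authoritative answer to the question, but a sketch of the client-side rbd cache options that usually come up first when tuning librbd; the values are illustrative, not recommendations:

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    rbd cache size = 67108864    # 64 MiB, illustrative value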

Re: [ceph-users] active+remapped+backfill_toofull

2017-12-20 Thread Nghia Than
May I know which OSDs I have to restart in this case?

On Wed, Dec 20, 2017 at 9:14 PM, David C wrote:
> You should just need to restart the relevant OSDs for the new backfill
> threshold to kick in.
>
> On 20 Dec 2017 00:14, "Nghia Than" wrote:
>
> I added more OSDs
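For completeness, a minimal sketch of restarting one OSD on a systemd-based install; the id 27 is only an example taken from this thread, and the command has to run on the host that carries that OSD:

    systemctl restart ceph-osd@27
    ceph -s    # confirm the OSD comes back and backfill resumes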

Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Nghia Than
>>> GLOBAL:
>>>     SIZE       AVAIL      RAW USED   %RAW USED
>>>     31742G     11147G     20594G     64.88
>>> POOLS:
>>>     NAME        ID    USED    %USED    MAX AVAIL    OBJECTS
>>>     templates   5     196G    23.28    645G         50202
>>>     cvm         6     6528    0        1076G
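A sketch of the commands that usually go with a POOL_NEARFULL warning; the 0.90 ratio is purely illustrative, and raising it is only a stopgap:

    ceph df                           # per-pool usage, as quoted above
    ceph osd df                       # per-OSD fill level; nearfull follows the fullest OSD
    ceph osd set-nearfull-ratio 0.90  # Luminous and later: adjust the warning threshold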

Re: [ceph-users] active+remapped+backfill_toofull

2017-12-19 Thread Nghia Than
…ull_ratio = '0.92' (unchangeable)
osd.27: osd_backfill_full_ratio = '0.92' (unchangeable)
osd.28: osd_backfill_full_ratio = '0.92' (unchangeable)
[root@storcp ~]#

On Wed, Dec 20, 2017 at 1:57 AM, David C wrote:
> What's your backfill full ratio? You
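Since the option reports as unchangeable via injectargs, a hedged sketch of the two ways the backfill threshold is normally raised, depending on the release; 0.92 mirrors the value shown above:

    # Luminous and later: the ratio lives in the OSDMap
    ceph osd set-backfillfull-ratio 0.92
    # pre-Luminous: inject the option into the running OSDs
    ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.92'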

[ceph-users] active+remapped+backfill_toofull

2017-12-19 Thread Nghia Than
…104
  TOTAL   25775G   20115G   5660G   78.04
MIN/MAX VAR: 0.78/1.19  STDDEV: 9.24
[root@storcp ~]#

May I know how to get over this?
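A sketch of the usual ways out of backfill_toofull on a cluster this full; both commands are standard Ceph, but the numbers are illustrative and adding capacity is the real fix:

    # shift PGs off the most-utilized OSDs (120 is the default threshold; 110 is tighter)
    ceph osd reweight-by-utilization 110
    # or manually lower the reweight of a specific overfull OSD
    ceph osd reweight 27 0.90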