[ceph-users] Does CephFS support SELinux?

2017-06-22 Thread Stéphane Klein
Hi, Does CephFS support SELinux? I have this issue with OpenShift (with SELinux) + CephFS: http://lists.openshift.redhat.com/openshift-archives/users/2017-June/msg00116.html Best regards, Stéphane
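
A common workaround when a filesystem cannot carry per-file SELinux labels is to apply one label to the whole mount with the generic context= mount option. A minimal sketch, assuming a kernel CephFS mount; the monitor address, secret file and label are illustrative, not taken from the thread:

```
# Hypothetical mount with a fixed SELinux context for everything under it
mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret,context="system_u:object_r:svirt_sandbox_file_t:s0"
```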

Re: [ceph-users] Does CephFS support SELinux?

2017-06-22 Thread Stéphane Klein
2017-06-22 11:48 GMT+02:00 John Spray : > On Thu, Jun 22, 2017 at 10:25 AM, Stéphane Klein > wrote: > > Hi, > > > > Does CephFS support SELinux? > > > > I have this issue with OpenShift (with SELinux) + CephFS: > > http://lists.openshift.redhat.com/open

[ceph-users] when I set quota on CephFS folder I have this error => setfattr: /mnt/cephfs/foo: Invalid argument

2017-06-23 Thread Stéphane Klein
: /mnt/cephfs/foo: Invalid argument. I don't understand where my mistake is. Best regards, Stéphane
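
For reference, CephFS quotas are set through the ceph.quota.* extended attributes. A minimal sketch, assuming a client that accepts quota attributes at this release (typically the FUSE client rather than the kernel mount); the limits are example values:

```
# Limit /mnt/cephfs/foo to ~100 MB and 10000 files
setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/foo
setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/foo
# Read a quota back
getfattr -n ceph.quota.max_bytes /mnt/cephfs/foo
```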

Re: [ceph-users] when I set quota on CephFS folder I have this error => setfattr: /mnt/cephfs/foo: Invalid argument

2017-06-23 Thread Stéphane Klein
2017-06-23 18:06 GMT+02:00 John Spray : > I can't immediately remember which version we enabled quota by default > in -- you might also need to set "client quota = true" in the client's > ceph.conf. > > Do I need to set this option only on the host where I want to mount the volume, or on all MDS hosts? What
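
As John's reply says, "client quota" goes in the client's ceph.conf, so it belongs on the machine doing the mount rather than on the MDS hosts. A minimal sketch, assuming the default config path:

```
# /etc/ceph/ceph.conf on the mounting client
[client]
    client quota = true
```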

Re: [ceph-users] when I set quota on CephFS folder I have this error => setfattr: /mnt/cephfs/foo: Invalid argument

2017-06-23 Thread Stéphane Klein
2017-06-23 17:59 GMT+02:00 David Turner : > It might be possible that it doesn't want an absolute path and wants a > relative path for setfattr, although my version doesn't seem to care. I > mention that based on the getfattr response. > > I did the test with a relative path and I get the same err

Re: [ceph-users] when I set quota on CephFS folder I have this error => setfattr: /mnt/cephfs/foo: Invalid argument

2017-06-23 Thread Stéphane Klein
2017-06-23 20:44 GMT+02:00 David Turner : > I doubt the ceph version from 10.2.5 to 10.2.7 makes that big of a > difference. Read through the release notes since 10.2.5 to see if it > mentions anything about cephfs quotas. > Yes, same error with 10.2.7 :(

[ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread Stéphane Klein
replica on osd.2. Best regards, Stéphane

Re: [ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread Stéphane Klein
2017-06-26 11:15 GMT+02:00 Ashley Merrick : > Will need to see a full export of your crush map rules. > These are my crush map rules: # begin crush map tunable choose_local_tries 0 tunable choose_local_fallback_tries 0 tunable choose_total_tries 50 tunable chooseleaf_descend_once 1 tunable choosel

Re: [ceph-users] 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 Thread Stéphane Klein
2017-06-26 11:48 GMT+02:00 Ashley Merrick : > You're going across hosts, so each replica will be on a different host. > Thanks :)
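
The behaviour described above comes from the CRUSH rule's failure domain. A sketch of the chooseleaf step in a typical jewel-era default replicated rule (rule name and numbers are assumptions, not this cluster's actual export):

```
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host   # one replica per host
    step emit
}
```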

[ceph-users] Is it possible to get IO usage (read / write bandwidth) by client or RBD image?

2017-07-20 Thread Stéphane Klein
/ method now? Best regards, Stéphane
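
Per-image statistics are not something this tooling era clearly offers, but per-pool client I/O rates are available and can serve as a rough approximation; a sketch:

```
# Read/write bandwidth and ops per pool (not per client or per image)
ceph osd pool stats
# Refresh continuously for one pool, here assumed to be "rbd"
watch -n 1 'ceph osd pool stats rbd'
```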

[ceph-users] Environment variable to configure rbd "-c" parameter and "--keyfile" parameter?

2017-08-21 Thread Stéphane Klein
Hi, I am looking for environment variables to configure the rbd "-c" parameter and the "--keyfile" parameter. I found nothing in http://docs.ceph.com/docs/master/man/8/rbd/ Best regards, Stéphane
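
Two environment variables that the Ceph CLI tools generally honour can stand in for those flags. A sketch, with the file paths being assumptions:

```
# Point rbd at an alternate config file (equivalent to -c)
export CEPH_CONF=/etc/ceph/other-cluster.conf
# Pass extra CLI options, e.g. a keyring/keyfile, via CEPH_ARGS
export CEPH_ARGS="--keyring /etc/ceph/other-cluster.client.admin.keyring"
rbd ls
```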

[ceph-users] rbd showmapped -p and --image options missing in rbd version 10.2.4, why?

2016-12-09 Thread Stéphane Klein
? Best regards, Stéphane
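
Until those options are available in the installed version, the plain output can be filtered by hand. A sketch assuming the usual "id pool image snap device" column layout and an illustrative pool/image name:

```
# Print the device for image "image2" in pool "rbd"
rbd showmapped | awk '$2 == "rbd" && $3 == "image2" {print $5}'
```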

[ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
m" process. What can I do? How can I debug that? Best regards, Stéphane -- Stéphane Klein blog: http://stephane-klein.info cv : http://cv.stephane-klein.info Twitter: http://twitter.com/klein_stephane ___ ceph-users mailing list ceph-users@list

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
rbd on AtomicProject. 2016-12-21 14:51 GMT+01:00 Stéphane Klein : > Hi, > > I use this Ansible installation: https://github.com/harobed/ > poc-ceph-ansible/tree/master/vagrant-3mons-3osd > > I have: > > * 3 osd > * 3 mons > > ``` > root@ceph-test-1:/home/vagrant

[ceph-users] Question: can I use rbd 0.80.7 with ceph cluster version 10.2.5?

2016-12-21 Thread Stéphane Klein
Hi, I have this issue: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-December/015216.html Question: can I use rbd 0.80.7 with a Ceph cluster version 10.2.5? Why do I use this old version? Because I use Atomic Project http://www.projectatomic.io/ Best regards, Stéphane
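
As a general debugging aid (an assumption, not advice from this thread): when an old client talks to a much newer cluster, incompatibilities usually show up as feature-set or tunables complaints in the client's kernel log:

```
# Kernel RBD/CephFS clients log feature mismatches to the kernel ring buffer
dmesg | grep -i -E "feature set mismatch|crush"
```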

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
I have configured: ``` ceph osd crush tunables firefly ``` on the cluster. After that, same error :( 2016-12-21 15:23 GMT+01:00 Stéphane Klein : > No problem with Debian: > > ``` > root@ceph-client-2:/mnt/image2# rbd --version > ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a0

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
Same error with an rbd image created with --image-format 1 2016-12-21 14:51 GMT+01:00 Stéphane Klein : > Hi, > > I use this Ansible installation: https://github.com/harobed/ > poc-ceph-ansible/tree/master/vagrant-3mons-3osd > > I have: > > * 3 osd > * 3 mons > > ```

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 18:47 GMT+01:00 Ilya Dryomov : > On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein > wrote: > > I have configured: > > > > ``` > > ceph osd crush tunables firefly > > ``` > > If it gets to rm, then it's probably not tunables. Are you r

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 19:51 GMT+01:00 Ilya Dryomov : > On Wed, Dec 21, 2016 at 6:58 PM, Stéphane Klein > wrote: > >> > > 2016-12-21 18:47 GMT+01:00 Ilya Dryomov : > >> > >> On Wed, Dec 21, 2016 at 5:50 PM, Stéphane Klein > >> wrote: > >> >

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
> Not sure what's going on here. Using firefly version of the rbd CLI > tool isn't recommended of course, but doesn't seem to be _the_ problem. > Can you try some other distro with an equally old ceph - ubuntu trusty > perhaps? Same error with: * Ubuntu trusty root@ceph-client-3:/home/vagrant#

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 23:06 GMT+01:00 Ilya Dryomov : > What's the output of "cat /proc/$(pidof rm)/stack"? >
root@ceph-client-3:/home/vagrant# cat /proc/2315/stack
[] sleep_on_page+0xe/0x20
[] wait_on_page_bit+0x7f/0x90
[] truncate_inode_pages_range+0x2fe/0x5a0
[] truncate_inode_pages+0x15/0x20
[] ext4_evict

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 23:33 GMT+01:00 Ilya Dryomov : > On Wed, Dec 21, 2016 at 11:10 PM, Stéphane Klein > wrote: > > > > 2016-12-21 23:06 GMT+01:00 Ilya Dryomov : > >> > >> What's the output of "cat /proc/$(pidof rm)/stack? > > > >

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 23:33 GMT+01:00 Ilya Dryomov : > What if you boot ceph-client-3 with >512M memory, say 2G? > Success ! Thanks

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-21 Thread Stéphane Klein
2016-12-21 23:39 GMT+01:00 Stéphane Klein : > > > 2016-12-21 23:33 GMT+01:00 Ilya Dryomov : > >> What if you boot ceph-client-3 with >512M memory, say 2G? >> > > Success ! > Is it possible to add a warning messag

Re: [ceph-users] mount /dev/rbd0 /mnt/image2 + rm Python-2.7.13 -rf => freeze

2016-12-22 Thread Stéphane Klein
2016-12-21 23:33 GMT+01:00 Ilya Dryomov : > > What if you boot ceph-client-3 with >512M memory, say 2G? > > With: * 512 M memory => failed * 1000 M memory => failed * 1500 M memory => success
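
One possible mitigation on such a memory-starved client, purely an assumption and not something suggested in this thread, is to cap how much dirty page cache can accumulate before writeback to the rbd-backed filesystem kicks in:

```
# Flush dirty pages earlier so ext4-on-rbd writeback needs less memory
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10
```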

[ceph-users] When I shutdown one osd node, where can I see the block movement?

2016-12-22 Thread Stéphane Klein
Hi, When I shut down one OSD node, where can I see the block movement? Where can I see the progress percentage? Best regards, Stéphane
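
Recovery and backfill progress is visible in the regular cluster status and in the event stream; a sketch:

```
# One-shot summary, including degraded/misplaced object percentages
ceph -s
# Follow recovery/backfill events as they happen
ceph -w
```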

[ceph-users] How can I ask to Ceph Cluster to move blocks now when osd is down?

2016-12-22 Thread Stéphane Klein
Hi, How can I ask the Ceph cluster to move blocks now, when an OSD is down? Best regards, Stéphane
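
Marking the down OSD out starts re-replication immediately instead of waiting for the down-out interval. A sketch, with the OSD id being an illustrative assumption:

```
# Trigger data movement now rather than after mon_osd_down_out_interval
ceph osd out 2
```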

Re: [ceph-users] When I shutdown one osd node, where can I see the block movement?

2016-12-22 Thread Stéphane Klein
HEALTH_WARN 43 pgs degraded; 43 pgs stuck unclean; 43 pgs undersized; recovery 24/70 objects degraded (34.286%); too few PGs per OSD (28 < min 30); 1/3 in osds are down; Here Ceph says there are 24 objects to move?

Re: [ceph-users] When I shutdown one osd node, where can I see the block movement?

2016-12-22 Thread Stéphane Klein
2016-12-22 12:18 GMT+01:00 Henrik Korkuc : > On 16-12-22 13:12, Stéphane Klein wrote: > > HEALTH_WARN 43 pgs degraded; 43 pgs stuck unclean; 43 pgs undersized; > recovery 24/70 objects degraded (34.286%); too few PGs per OSD (28 < min > 30); 1/3 in osds are down; > > it sa

[ceph-users] If I shutdown 2 osd / 3, Ceph Cluster say 2 osd UP, why?

2016-12-22 Thread Stéphane Klein
ation is here: https://github.com/harobed/poc-ceph-ansible/blob/master/vagrant-3mons-3osd/hosts/group_vars/all.yml#L11 What is my mistake? Is it a Ceph bug? Best regards, Stéphane

Re: [ceph-users] If I shutdown 2 osd / 3, Ceph Cluster say 2 osd UP, why?

2016-12-22 Thread Stéphane Klein
2016-12-22 12:30 GMT+01:00 Henrik Korkuc : > try waiting a little longer. Mon needs multiple down reports to take an OSD > down. And as your cluster is very small, there is a small number (1 in this > case) of OSDs to report that others are down. > > Why this limitation? Because my rbd mount on ceph-cli
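
For a deliberately tiny test cluster, the reporter threshold can be lowered. A sketch, reusing the option name that appears later in this archive:

```
# Apply at runtime on the monitors (can also go under [mon] in ceph.conf)
ceph tell mon.* injectargs '--mon_osd_min_down_reporters 1'
```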

[ceph-users] What is pauserd and pausewr status?

2016-12-22 Thread Stéphane Klein
avail 64 active+clean Where can I find documentation about: * pauserd ? * pausewr ? Nothing in the documentation search engine. Best regards, Stéphane

[ceph-users] How can I debug "rbd list" hang?

2016-12-22 Thread Stéphane Klein
, 1978 GB / 1979 GB avail 64 active+clean Why does the "rbd list" command hang? How can I debug that? Best regards, Stéphane

Re: [ceph-users] How can I debug "rbd list" hang?

2016-12-22 Thread Stéphane Klein
2016-12-22 18:07 GMT+01:00 Nick Fisk : > I think you have probably just answered your previous question. I would > guess pauserd and pausewr pause read and write IO, hence your command to > list is being blocked on reads. > > How can I fix that? Where is the documentation about these two flags
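
Clearing the pause flag lets client I/O, and therefore "rbd list", proceed again; a sketch:

```
# "pause" covers both pauserd and pausewr
ceph osd unset pause
# Confirm the flags are gone
ceph status | grep flags
```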

Re: [ceph-users] What is pauserd and pausewr status?

2016-12-23 Thread Stéphane Klein
2016-12-22 18:09 GMT+01:00 Wido den Hollander : > > > On 22 December 2016 at 17:55, Stéphane Klein < > cont...@stephane-klein.info> wrote: > > > > > > Hi, > > > > I have this status: > > > > bash-4.2# ceph status > > cluste

Re: [ceph-users] If I shutdown 2 osd / 3, Ceph Cluster say 2 osd UP, why?

2016-12-23 Thread Stéphane Klein
Very interesting documentation about this subject is here: http://docs.ceph.com/docs/hammer/rados/configuration/mon-osd-interaction/ 2016-12-22 12:26 GMT+01:00 Stéphane Klein : > Hi, > > I have: > > * 3 mon > * 3 osd > > When I shut down one osd, it works great: > >

Re: [ceph-users] If I shutdown 2 osd / 3, Ceph Cluster say 2 osd UP, why?

2016-12-23 Thread Stéphane Klein
2016-12-23 2:17 GMT+01:00 Jie Wang : > OPTION(mon_osd_min_down_reporters, OPT_INT, 2) // number of OSDs from > different subtrees who need to report a down OSD for it to count > > Yes, that's it: # ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon-1.asok config show | grep "repor" "mon_o

[ceph-users] Why mon_osd_min_down_reporters isn't set to 1 like the default value in documentation? It is a bug?

2016-12-23 Thread Stéphane Klein
quot;, "mon_osd_reporter_subtree_level": "host", "osd_mon_report_interval_max": "600", "osd_mon_report_interval_min": "5", "osd_mon_report_max_in_flight": "2", "osd_pg_stat_report_interval_max": "

Re: [ceph-users] What is pauserd and pausewr status?

2016-12-23 Thread Stéphane Klein
2016-12-23 11:35 GMT+01:00 Wido den Hollander : > > > On 23 December 2016 at 10:31, Stéphane Klein < > cont...@stephane-klein.info> wrote: > > > > > > 2016-12-22 18:09 GMT+01:00 Wido den Hollander : > > > > > > > > > On 22 Decembe

[ceph-users] Why I don't see "mon osd min down reports" in "config show" report result?

2016-12-23 Thread Stéphane Klein
uot;, "mds_mon_shutdown_timeout": "5", "osd_max_markdown_period": "600", "osd_max_markdown_count": "5", "osd_mon_shutdown_timeout": "5", ``` I don't see: mon osd min down reports Why? This f

Re: [ceph-users] What is pauserd and pausewr status?

2016-12-23 Thread Stéphane Klein
2016-12-23 13:03 GMT+01:00 Henrik Korkuc : > On 16-12-23 12:43, Stéphane Klein wrote: > > > 2016-12-23 11:35 GMT+01:00 Wido den Hollander : > >> >> > On 23 December 2016 at 10:31, Stéphane Klein < >> cont...@stephane-klein.info> wrote: >> > >

[ceph-users] How to fix: HEALTH_ERR 45 pgs are stuck inactive for more than 300 seconds; 19 pgs degraded; 45 pgs stuck inactive; 19 pgs stuck unclean; 19 pgs undersized; recovery 2514/5028 objects deg

2017-01-16 Thread Stéphane Klein
tory": [] } }, "peer_info": [], "recovery_state": [ { "name": "Started\/Primary\/Active", "enter_time": "2017-01-14 13:42:42.084021", "might_have_unfound": [], "recovery_progress": { "backfill_targets": [], "waiting_on_backfill": [], "last_backfill_started": "MIN", "backfill_info": { "begin": "MIN", "end": "MIN", "objects": [] }, "peer_backfill_info": [], "backfills_in_flight": [], "recovering": [], "pg_backend": { "pull_from_peer": [], "pushing": [] } }, "scrub": { "scrubber.epoch_start": "37", "scrubber.active": 0, "scrubber.state": "INACTIVE", "scrubber.start": "MIN", "scrubber.end": "MIN", "scrubber.subset_last_update": "0'0", "scrubber.deep": false, "scrubber.seed": 0, "scrubber.waiting_on": 0, "scrubber.waiting_on_whom": [] } }, { "name": "Started", "enter_time": "2017-01-14 13:42:41.065959" } ], "agent_state": {} } ``` I don't understand what it's mean. Now, I don't know what I need to do to fix it. Some tips? Best regards, Stéphane -- Stéphane Klein blog: http://stephane-klein.info cv : http://cv.stephane-klein.info Twitter: http://twitter.com/klein_stephane ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

[ceph-users] How to update osd pool default size at runtime?

2017-01-16 Thread Stéphane Klein
documentation doesn't explain how to make the changes at runtime. Best regards, Stéphane

Re: [ceph-users] How to update osd pool default size at runtime?

2017-01-16 Thread Stéphane Klein
2017-01-16 12:47 GMT+01:00 Jay Linux : > Hello Stephane, > > Try this . > > $ceph osd pool get size -->> it will prompt the " > osd_pool_default_size " > $ceph osd pool get min_size-->> it will prompt the " > osd_pool_default_min_size " > > if you want to change in runtime, trigger below
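
For completeness, changing the size of an existing pool at runtime takes the pool name as well. A sketch assuming the default "rbd" pool:

```
# Per-pool, effective immediately
ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
# The osd_pool_default_* options only affect pools created afterwards
ceph tell mon.* injectargs '--osd_pool_default_size 3'
```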

Re: [ceph-users] How to fix: HEALTH_ERR 45 pgs are stuck inactive for more than 300 seconds; 19 pgs degraded; 45 pgs stuck inactive; 19 pgs stuck unclean; 19 pgs undersized; recovery 2514/5028 objects

2017-01-16 Thread Stéphane Klein
2017-01-16 12:24 GMT+01:00 Loris Cuoghi : > Hello, > > On 16/01/2017 at 11:50, Stéphane Klein wrote: > >> Hi, >> >> I have two OSD and Mon nodes. >> >> I'm going to add a third osd and mon to this cluster, but before that I want to >> fix this error

Re: [ceph-users] How to fix: HEALTH_ERR 45 pgs are stuck inactive for more than 300 seconds; 19 pgs degraded; 45 pgs stuck inactive; 19 pgs stuck unclean; 19 pgs undersized; recovery 2514/5028 objects

2017-01-16 Thread Stéphane Klein
I see my mistake: ``` osdmap e57: 2 osds: 1 up, 1 in; 64 remapped pgs flags sortbitwise,require_jewel_osds ```

[ceph-users] Where can I read documentation of Ceph version 0.94.5?

2017-02-27 Thread Stéphane Klein
Hi, how can I read the documentation for an old Ceph version? On http://docs.ceph.com I see only the "master" documentation. I am looking for the 0.94.5 documentation. Best regards, Stéphane

Re: [ceph-users] Where can I read documentation of Ceph version 0.94.5?

2017-02-27 Thread Stéphane Klein
2017-02-27 20:53 GMT+01:00 Roger Brown : > replace "master" with the release codename, eg. http://docs.ceph.com/docs/ > kraken/ > > Thanks. I suggest adding the doc version list on the http://docs.ceph.com page. Best regards, Stéphane

[ceph-users] too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

2017-06-14 Thread Stéphane Klein
min 30 ? I set pg_num = 300. Best regards, Stéphane

Re: [ceph-users] too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

2017-06-14 Thread Stéphane Klein
2017-06-14 16:40 GMT+02:00 David Turner : > Once those PGs have finished creating and the cluster is back to normal > How can I see the cluster migration progress? Now I have: # ceph status cluster 800221d2-4b8c-11e7-9bb9-cffc42889917 health HEALTH_WARN pool rbd pg_num 160
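
The later messages in this thread show the missing piece was pgp_num; the progress itself is visible in the status output. A sketch of both checks:

```
# pgp_num must match pg_num before data is remapped into the new PGs
ceph osd pool get rbd pg_num
ceph osd pool get rbd pgp_num
# Watch the creating/activating PGs drain away
watch -n 2 ceph -s
```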

Re: [ceph-users] too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

2017-06-14 Thread Stéphane Klein
objects 30925 MB used, 22194 GB / 5 GB avail 143 active+clean 17 activating 2017-06-14 16:56 GMT+02:00 Stéphane Klein : > 2017-06-14 16:40 GMT+02:00 David Turner : > >> Once those PG's have finished creating and the cluster

Re: [ceph-users] too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

2017-06-14 Thread Stéphane Klein
And now: 2017-06-14 17:00 GMT+02:00 Stéphane Klein : > Ok, I missed: > > ceph osd pool set rbd pgp_num 160 > > Now I have: > > ceph status > cluster 800221d2-4b8c-11e7-9bb9-cffc42889917 > health HEALTH_ERR > 9 pgs are stuck inac

Re: [ceph-users] too few PGs per OSD (16 < min 30) but I set pool_default_pg_num: 300 in Ansible

2017-06-14 Thread Stéphane Klein
osds: 6 up, 6 in flags sortbitwise,require_jewel_osds pgmap v60: 160 pgs, 1 pools, 0 bytes data, 0 objects 30924 MB used, 22194 GB / 5 GB avail 160 active+clean Thanks, all is perfect! 2017-06-14 17:00 GMT+02:00 Stéphane Klein : > And

[ceph-users] What is "up:standby"? in ceph mds stat => e5: 1/1/1 up {0=ceph-test-3=up:active}, 2 up:standby

2017-06-16 Thread Stéphane Klein
+clean What is "up:standby"? in # ceph mds stat e5: 1/1/1 up {0=ceph-test-3=up:active}, 2 up:standby Best regards, Stéphane

Re: [ceph-users] What is "up:standby"? in ceph mds stat => e5: 1/1/1 up {0=ceph-test-3=up:active}, 2 up:standby

2017-06-16 Thread Stéphane Klein
2017-06-16 13:07 GMT+02:00 Daniel Carrasco : > On MDS nodes, by default only the first one you add is active: the others > join the cluster as standby MDS daemons. When the active one fails, a > standby MDS becomes active and continues with the work. > > Thanks, is it possible to add this informati

[ceph-users] What package I need to install to have CephFS kernel support on CentOS?

2017-06-16 Thread Stéphane Klein
wel.repo Question: what package do I need to install to have CephFS kernel support on CentOS? Best regards, Stéphane
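
A sketch of the usual answer, stated as an assumption rather than from this thread: the CephFS kernel client is part of the kernel itself (the ceph module), so only the userspace mount helper tends to be needed, and that ships in ceph-common. The monitor address and secret file below are placeholders:

```
yum install ceph-common      # provides mount.ceph
modprobe ceph                # kernel CephFS client, shipped with the kernel
mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
```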