Hi,
Happy New Year!
Can anyone point me to specific walkthrough/howto instructions on how to
move CephFS metadata to SSD in a running cluster?
How should CRUSH be modified, step by step, so that the metadata
migrates to SSD?
Thanks and regards,
Mike
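A minimal sketch of the usual approach, assuming Luminous or later
(where CRUSH device classes exist); the pool and rule names here are
placeholders, adjust to your own:

    # create a replicated rule that only picks OSDs with the "ssd" device class
    ceph osd crush rule create-replicated ssd-rule default host ssd

    # point the CephFS metadata pool at the new rule; Ceph then backfills
    # the metadata objects onto the SSD OSDs on its own
    ceph osd pool set cephfs_metadata crush_rule ssd-rule

    # watch the migration and confirm the rule took effect
    ceph -s
    ceph osd pool get cephfs_metadata crush_rule

On pre-Luminous releases such as Jewel there are no device classes, so
the equivalent is to decompile the CRUSH map (ceph osd getcrushmap,
then crushtool -d), add an SSD-only root and rule by hand, and set the
pool's crush_ruleset to it.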
Hi,
Regarding this doc page -->
http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/
I think the following text needs to be changed:
rados put {object-name} {file-path} --pool=data
to
rados put {object-name} {file-path} --pool={poolname}
thank you
Manuel Sopena Ballesteros | Big data Eng
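For example, with a hypothetical pool named mypool (the pool name,
object name, and file path are all placeholders):

    # create a pool, then store a local file in it as an object
    ceph osd pool create mypool 8
    rados put test-object /tmp/testfile.txt --pool=mypool

    # confirm the object is there
    rados ls --pool=mypool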
On 17-01-02 06:24, Lindsay Mathieson wrote:
Hi all, familiar with ceph but out of touch on cephfs specifics, so
some quick questions:
- cephfs requires an MDS for its metadata (file/dir structures,
attributes, etc.)?
yes
- It's Active/Passive, i.e. only one MDS can be active at a time, with a
number of backup passive MDSs?
Hi all, familiar with ceph but out of touch on cephfs specifics, so some
quick questions:
- cephfs requires an MDS for its metadata (file/dir structures,
attributes, etc.)?
- It's Active/Passive, i.e. only one MDS can be active at a time, with a
number of backup passive MDSs
- The passive MDSs
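A quick sketch of how to inspect this on a live cluster (command names
as in Jewel; the ceph.conf options shown are the standard standby
settings, and the daemon name mds.b is a placeholder):

    # which MDS daemons are active and which are standby
    ceph mds stat

    # full FSMap, including standby daemons
    ceph fs dump

    # in ceph.conf, a standby can be turned into a hot standby that
    # tails the active MDS's journal (standby-replay)
    [mds.b]
        mds standby replay = true
        mds standby for rank = 0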
H... my original email got eaten by the big bit bucket in the sky...
What it said was that you need to look at the ceph_features.h file for
kernel/userspace to see the differences.
This, for those who can access it, is the latest RHEL 7.3 version (I
imagine the CentOS version would be similar, if
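If you want to compare the two yourself, something like this works
against checked-out kernel and Ceph source trees (the paths below
assume trees named linux/ and ceph/; adjust to wherever yours live):

    # kernel side vs. userspace side of the feature bit definitions
    diff -u linux/include/linux/ceph/ceph_features.h \
            ceph/src/include/ceph_features.h

    # or just list which feature bits one side defines
    grep 'define CEPH_FEATURE_' linux/include/linux/ceph/ceph_features.h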
Hi,
on the other side, for example Zheng Yan, but also others, do not tire
of pointing out that 3.10 is too old.
But I agree, I do wonder how 3.10 can be too old (in general) while at
the same time being the standard kernel of Red Hat (the company behind
Ceph).
Happy new year @ all! :-)
--
M
I don't agree with this point.
The 3.10 kernel version is enough, because the Red Hat Ceph enterprise
product runs on RHEL 7, which is on a 3.10 kernel too,
though the Red Hat 3.10 kernel is different from the upstream 3.10
kernel; if you use a CentOS or RHEL Linux distribution as a Ceph client
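Two quick checks that are relevant here (note: the ceph features
command only exists from Luminous on, so treat that part as an
assumption about your cluster version):

    # kernel the client runs; a RHEL/CentOS 3.10 kernel carries many
    # backports, so the base version number alone says little
    uname -r

    # on Luminous or later, show the feature bits connected clients report
    ceph features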