yes, https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/ is enough.
Don't test on your production env. Before you start, back up your crush map:
ceph osd getcrushmap -o crushmap.bin
Below are some hints:
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
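Since you will be editing the map by hand, the full round trip is decompile, edit, recompile, inject. A minimal sketch (the filenames are arbitrary):

```shell
# Dump the current CRUSH map (binary) -- keep this as your backup
ceph osd getcrushmap -o crushmap.bin
# Decompile it to an editable text form
crushtool -d crushmap.bin -o crushmap.txt
# ... edit crushmap.txt ...
# Recompile and inject the modified map into the cluster
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```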
Hi,
thanks all, still I would appreciate hints on a concrete procedure for
migrating the cephfs metadata to an SSD pool, the SSDs being on the same
hosts as the spinning disks.
This reference I read:
https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
Are there ...
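For reference, on Ceph releases with CRUSH device classes (Luminous and later; the cluster in this thread may predate them) the whole migration reduces to two commands. The pool name `cephfs_metadata` and rule name `replicated_ssd` are assumptions:

```shell
# Create a replicated CRUSH rule that only selects OSDs of device class "ssd"
# (verify the classes first with: ceph osd tree)
ceph osd crush rule create-replicated replicated_ssd default host ssd
# Repoint the metadata pool; Ceph then rebalances its PGs onto the SSDs
ceph osd pool set cephfs_metadata crush_rule replicated_ssd
```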
2017-01-04 23:52 GMT+08:00 Mike Miller :
Wido, all,
can you point me to the "recent benchmarks" so I can have a look?
How do you define "performance"? I would not expect cephFS throughput to
change, but it is surprising to me that metadata on SSD will have no
measurable effect on latency.
- mike
On 1/3/17 10:49 AM, Wido den Hollander wrote:
> On 3 January 2017 at 2:49, Mike Miller wrote:
>
>
> will metadata on SSD improve latency significantly?
>
No, as I said in my previous e-mail, recent benchmarks showed that storing
CephFS metadata on SSD does not improve performance.
It still might be good to do since it's not that much data.
will metadata on SSD improve latency significantly?
Mike
On 1/2/17 11:50 AM, Wido den Hollander wrote:
> On 2 January 2017 at 10:33, Shinobu Kinjo wrote:
>
>
> I've never done migration of cephfs_metadata from spindle disks to
> ssds. But logically you could achieve this through 2 phases.
>
> #1 Configure CRUSH rule including spindle disks and ssds
> #2 Configure CRUSH rule for just pointing to ssds
I've never done migration of cephfs_metadata from spindle disks to
ssds. But logically you could achieve this through 2 phases.
#1 Configure CRUSH rule including spindle disks and ssds
#2 Configure CRUSH rule for just pointing to ssds
* This would cause massive data shuffling.
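The two phases above boil down to adding an ssd branch and an ssd rule to the decompiled CRUSH map, along the lines of the blog post linked earlier in the thread (pre-device-class Ceph). A sketch where all bucket names, ids, and weights are made-up placeholders:

```
host node1-ssd {
        id -10                  # placeholder bucket id
        alg straw
        hash 0                  # rjenkins1
        item osd.6 weight 1.000
}
root ssd {
        id -11                  # placeholder bucket id
        alg straw
        hash 0                  # rjenkins1
        item node1-ssd weight 1.000
}
rule ssd {
        ruleset 4
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}
```

After recompiling and injecting the map, `ceph osd pool set cephfs_metadata crush_ruleset 4` points the metadata pool at the new rule (on pre-Luminous releases the setting is `crush_ruleset`); as noted above, that is what triggers the data movement.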
On Mon, Jan 2, 2017, Mike Miller wrote:
Hi,
Happy New Year!
Can anyone point me to a specific walkthrough / howto with instructions on
how to move cephfs metadata to SSD in a running cluster?
How is CRUSH to be modified, step by step, such that the metadata migrates
to SSD?
Thanks and regards,
Mike