Interestingly, I have found that if I limit the rate at which data is written, the tiering behaves as expected.

I'm using a robocopy job on a Windows VM to copy large files from my existing
storage array to a test Ceph volume. By using the /IPG parameter I can roughly
control the rate at which data is written.
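For anyone wanting to reproduce this: per Microsoft's documentation, /IPG:n inserts an n-millisecond pause between 64 KiB blocks, which caps throughput at roughly block_size / gap (ignoring actual transfer time). A quick sketch of the arithmetic (the 64 KiB block size is robocopy's documented behavior; the helper names are my own):

```python
import math

BLOCK_SIZE = 64 * 1024  # bytes per robocopy block (per /IPG documentation)

def max_rate_mb_per_s(ipg_ms: int) -> float:
    """Rough upper bound on throughput (MB/s) for a given /IPG value,
    ignoring the time spent actually transferring each block."""
    return BLOCK_SIZE / (ipg_ms / 1000) / 1e6

def ipg_for_rate(target_mb_per_s: float) -> int:
    """Smallest whole-millisecond gap keeping the rate at or below target."""
    return math.ceil(BLOCK_SIZE / (target_mb_per_s * 1e6) * 1000)
```

By this estimate /IPG:2 caps the rate at about 32.8 MB/s, which is in the ballpark of the ~30 MBytes/sec figure below; real throughput will be somewhat lower because block transfer time isn't free.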

I've found that if I limit the write rate to around 30 MBytes/sec, all of the
data goes to the hot tier, zero data goes to the HDD tier, and the observed
write latency is about 5 msec. If I go any higher than that, I see data being
written to the HDDs and the observed write latency goes way up.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
