> I was hoping to get some answers on how Ceph would behave when I install
> SSDs at the hypervisor level and use them as a cache pool.
> Let's say I've got 10 KVM hypervisors and I install one 512GB SSD on each
> server.
> I then create a cache pool for my storage cluster using these SSDs. My
> questions are:
>
> 1. How would the network IO flow when I am performing reads and writes on the
> virtual machines? Would writes get stored on the hypervisor's SSD right
> away, or would the writes be directed to the OSD servers first and then
> redirected back to the cache pool on the hypervisor's SSD? Similarly, would
> reads go to the OSD servers and then be redirected to the cache pool on the
> hypervisors?

You would need to run an OSD on each of your hypervisors.
Data would be "striped" across all hypervisors in the cache pool.
So you would shift traffic from:
hypervisors -> dedicated ceph OSD pool
to
hypervisors -> hypervisors running an OSD with SSD
Note that the local OSD also has to do OSD replication traffic, so you are
increasing the network load on the hypervisors by quite a bit.
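For reference, the plumbing for that would look roughly like the sketch below, assuming the hypervisor SSDs are already running OSDs grouped under their own CRUSH ruleset. The pool names (rbd as the backing pool, ssd-cache as the cache pool), the PG counts and the ruleset id are placeholders, not something from your setup:

  ceph osd pool create ssd-cache 512 512          # PG numbers are just an example
  ceph osd pool set ssd-cache crush_ruleset 4     # pin the cache pool to the SSD OSDs (ruleset id is an example)
  ceph osd tier add rbd ssd-cache                 # attach the cache pool to the backing pool
  ceph osd tier cache-mode ssd-cache writeback    # writes land on the cache tier first
  ceph osd tier set-overlay rbd ssd-cache         # client IO is directed at the cache tier

With the overlay set and writeback mode, VM writes go to the cache pool OSDs (and their replicas) first; on a read miss the object gets promoted from the backing pool into the cache, which is where the extra migration traffic comes from.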

> Would the majority of network traffic shift to the cache pool and stay at
> the hypervisor level, rather than at the hypervisor / OSD server level?

I guess it depends on your access patterns and how much data needs to be 
migrated back and forth to the regular storage.
I'm very interested in the effect of cache pools in combination with running
VMs on them, so I'd be happy to hear what you find ;)
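How much of that migration you get is partly tunable per pool; the main knobs look like this (the values are only illustrative, and ssd-cache is the example pool name from above):

  ceph osd pool set ssd-cache hit_set_type bloom              # needed so the tiering agent can track access
  ceph osd pool set ssd-cache hit_set_count 1
  ceph osd pool set ssd-cache hit_set_period 3600             # seconds per hit set
  ceph osd pool set ssd-cache target_max_bytes 2000000000000  # total cache size target, mind replication overhead
  ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4    # start flushing dirty objects at 40%
  ceph osd pool set ssd-cache cache_target_full_ratio 0.8     # start evicting at 80%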

As a side note: Running OSDs on hypervisors would not be my preferred choice 
since hypervisor load might impact Ceph performance.
I guess you can end up with pretty weird/unwanted results when your hypervisors 
get above a certain load threshold.
I would certainly test a lot with high loads before putting it in production...
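A quick way to get a baseline for that kind of testing is rados bench against the cache pool while the hypervisors are under load, e.g.:

  rados bench -p ssd-cache 60 write --no-cleanup   # 60s write test against the cache pool
  rados bench -p ssd-cache 60 seq                  # sequential read test on the objects written above
  rados -p ssd-cache cleanup                       # remove the benchmark objects afterwards

(ssd-cache is again just the example pool name.)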

Cheers,
Robert van Leeuwen
