On 8/9/2016 10:43 AM, Wido den Hollander wrote:

On 9 August 2016 at 16:36, Александр Пивушков <p...@mail.ru> wrote:


 > >> Hello dear community!
I'm new to Ceph and only recently took up the subject of building clusters,
so your opinion is very important to me.
We need to build a 1.2 PB storage cluster with very fast access to the data.
Until now, "Intel® SSD DC P3608 Series 1.6TB NVMe PCIe 3.0 x4 Solid State
Drive" disks were used; their speed is entirely satisfactory, but as the
storage volume grows the price of such a cluster rises very steeply, and
hence the idea of using Ceph.

You may want to tell us more about your environment, use case and in
particular what your clients are.
Large amounts of data usually mean graphical or scientific data;
extremely high speed (IOPS) requirements usually mean database-like
applications. Which one is it, or is it a mix?

This is a mixed project, combining graphics and science: a project linking
a vast array of image data, like Google Maps :)
Previously, the clients were Windows machines connected directly to powerful
servers.
A Ceph cluster connected over FC to the virtual machine servers is now planned.
Virtualization: oVirt.

Stop right there. oVirt, despite being from Red Hat, doesn't really support
Ceph directly all that well, last I checked.
That is probably where you get the idea/need for FC from.

If at all possible, you do NOT want another layer and protocol conversion
between Ceph and the VMs, like an FC gateway, iSCSI, or NFS.

So if you're free to choose your virtualization platform, use KVM/qemu at
the bottom and something like OpenStack, OpenNebula, Ganeti, or Pacemaker
with KVM resource agents on top.
Oh, that's too bad...
There is something I don't understand.

oVirt is built on KVM:
https://www.ovirt.org/documentation/introduction/about-ovirt/

Ceph supports KVM:
http://docs.ceph.com/docs/master/architecture/


KVM is just the hypervisor. oVirt is a tool that controls KVM, and it doesn't
have support for Ceph. That means it can't pass down the proper arguments
for KVM to talk to RBD.
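
To make that concrete, a direct RBD attachment boils down to a qemu drive
argument roughly like this (pool and image names here are just examples):

  qemu-system-x86_64 \
      -drive format=raw,if=virtio,file=rbd:rbd/vm-disk1:id=admin:conf=/etc/ceph/ceph.conf

A management layer has to generate that line (or the equivalent libvirt XML)
for every VM disk, and oVirt has no code path that does.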

What would the overhead costs be, and how big are they?


I don't understand why oVirt is bad while qemu under OpenStack is good.
What can I read about this?


Like I said above, oVirt and OpenStack both control KVM. OpenStack also knows
how to 'configure' KVM to use RBD; oVirt doesn't.
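
On the OpenStack side that mostly comes down to a Cinder RBD backend; a
minimal sketch (pool, user, and secret UUID are placeholders):

  [rbd]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <libvirt secret UUID>

Nova then hands libvirt/KVM a network disk that points straight at the Ceph
monitors, with no gateway in between.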

Maybe Proxmox is a better solution in your case.


oVirt can use Ceph through Cinder. It doesn't currently provide all the
functionality of other oVirt storage domains, but it does work.
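
For reference, a VM disk wired up that way (through Cinder, or under
OpenStack) ends up as a libvirt network disk along these lines; the monitor
host, pool, and volume names are made up for the example:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='volumes/volume-1234'>
      <host name='mon1.example.com' port='6789'/>
    </source>
    <auth username='cinder'>
      <secret type='ceph' uuid='...'/>
    </auth>
    <target dev='vda' bus='virtio'/>
  </disk>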

Wido


--
Александр Пивушков