On 2016-09-06 17:49:08 +0000 (+0000), Randall, Nathan X wrote:
> For the storage backing Elasticsearch data nodes, we have been
> using one 500GB Cinder volume (backed by a Ceph cluster built from
> DL380s filled with 1.2TB 10k SAS drives) per data node. However,
> we've found that a VM with 8 vCPU and 64GB RAM can make use of
> more than 500GB disk capacity without bottlenecking on CPU or
> memory, so we are experimenting with 1TB or 1.5TB options per data
> node.
[...]
As a point of comparison, the 6 ES cluster members OpenStack Infra runs are built on a 60GiB RAM/16 vCPU flavor in Rackspace's DFW region, and each of them has a 1TiB Cinder SATA volume formatted ext4 (~50% full). You can see system utilization metrics for one of those at http://cacti.openstack.org/cacti/graph_view.php?action=tree&tree_id=1&leaf_id=123 (though we're apparently missing a graph for the /var/lib/elasticsearch filesystem).
--
Jeremy Stanley
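
For anyone scripting that kind of per-node volume provisioning rather than doing it by hand, a rough openstacksdk sketch follows. The cloud name, volume type, server name, and volume name below are placeholders, not anything from the thread, and the exact attach call has shifted slightly between SDK releases, so treat this as a starting point rather than a drop-in:

    import openstack

    # Connect using a named cloud from clouds.yaml (placeholder name).
    conn = openstack.connect(cloud='mycloud')

    # Create a 1TiB Cinder volume for an ES data node. Size is in GiB;
    # 'SATA' stands in for whatever volume type your deployment exposes.
    volume = conn.block_storage.create_volume(
        name='es-data-01-elasticsearch',
        size=1024,
        volume_type='SATA',
    )
    conn.block_storage.wait_for_status(
        volume, status='available', failures=['error'], wait=600)

    # Attach it to the data node VM. Newer openstacksdk releases also
    # accept the volume object directly here. Formatting ext4 and
    # mounting at /var/lib/elasticsearch still happens inside the guest.
    server = conn.compute.find_server('es-data-01')
    conn.compute.create_volume_attachment(server, volume_id=volume.id)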