Thinking about Yokozuna, it would appear that for a given set of hardware specs there must be some maximum practical number of indexed buckets. Yokozuna creates one Solr core per bucket per node. Scaling out the Riak cluster will reduce the amount of data indexed per core, but not the number of cores per node. I assume there is some static overhead per Solr core, and thus a maximum number of indexed buckets per cluster based on per-node resources.
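To make the reasoning concrete, here is a back-of-envelope sketch of that limit, assuming a fixed memory cost per core. The overhead figures are made-up placeholders for illustration, not measured Solr numbers:

```python
def max_indexed_buckets(node_heap_mb, reserved_mb, per_core_overhead_mb):
    """Rough bucket ceiling per node: heap left after reservations,
    divided by the assumed static cost per Solr core."""
    usable_mb = node_heap_mb - reserved_mb
    return max(usable_mb // per_core_overhead_mb, 0)

# e.g. an 8 GB heap, 2 GB reserved for caches/GC headroom,
# and a hypothetical ~10 MB of static overhead per core
print(max_indexed_buckets(8192, 2048, 10))  # → 614
```

Since the core count does not shrink as the cluster scales, whatever per-core figure actually holds in practice sets a hard per-node ceiling on indexed buckets.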
Any idea what this may be, roughly? Has anyone tried to max out the number of indexed buckets? Searching the Solr mailing list, it seems some folks have up to 800 cores per slave, but their hardware is unknown and queries are being served by the slaves, so those cores are only indexing. It looks like there is ongoing work in Solr to support large numbers of cores by dynamically loading and unloading them (http://wiki.apache.org/solr/LotsOfCores). Is this something Yokozuna may make use of? It may be too expensive a latency hit.

Elias Levy
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com