Hi guys,

We will send our test results and share comments about Riak Search with the 350K Wikipedia documents. The same test will be done with Solandra (Solr on Cassandra). A single Solr/Lucene node of course has better performance, but we are looking at spreading huge indexes across a cluster.
The second round will be 3 million docs.

Best,
Prometheus

On Oct 28, 2010, at 7:02 PM, Rusty Klophaus wrote:

> Hi folks,
>
> Very happy to see the excitement around Riak Search. Just a quick note on
> benchmarking approach. For best results, make sure to spread the indexing
> load across multiple machines in the cluster, rather than firing all
> requests at a single node. Otherwise, you will become CPU bound on that
> node. Load balancing in a round-robin fashion is fine.
>
> To make this easier, you may want to bypass the command line interface and
> post to Solr directly. In curl, it looks like this:
>
> curl -X POST -H 'Content-Type: text/xml' --data-binary @datafile.xml \
>     http://hostname:8098/solr/myindex/update
>
> (Change the name of the datafile, hostname, and index appropriately.)
>
> Best,
> Rusty
>
> On Thu, Oct 28, 2010 at 6:46 AM, Prometheus WillSurvive
> <prometheus.willsurv...@gmail.com> wrote:
> Hi Guys,
>
> I just put the Riak Search / Solr index-ready Wikipedia XMLs at:
>
> http://rapidshare.com/files/427591191/wikipedia350.tar.gz
>
> You can download them from there. There is also a small keyword list for
> the benchmark test.
>
> We can put up bigger document sets later, i.e. 3 million Wikipedia docs.
>
> Let us know your test results. I used Apache JMeter to send 10 clients
> querying the cluster (3 machines).
>
> Best Regards
>
> PrometheusWillSurvive
>
> On Oct 28, 2010, at 12:28 PM, Neville Burnell wrote:
>
>> Put it on S3
>>
>> On 28 October 2010 20:20, francisco treacy <francisco.tre...@gmail.com>
>> wrote:
>> Very good idea!
>>
>> 2010/10/28 Prometheus WillSurvive <prometheus.willsurv...@gmail.com>:
>> > Hi All,
>> > We have prepared a Wikipedia database export ready to submit to Riak
>> > Search. It is XML in the format described for Solr submission. Each
>> > file has 20,000 documents, 15 XML files in total, each around 44 MB.
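Rusty's round-robin advice above can be sketched concretely. The following is a hypothetical POSIX shell dry run, assuming three nodes named node1/node2/node3 on port 8098, an index called "wikipedia", and batch files named wikipedia_1.xml through wikipedia_15.xml (none of these names come from the thread). It prints the curl commands rather than executing them:

```shell
#!/bin/sh
# Hypothetical sketch: round-robin the 15 Wikipedia batch files across
# 3 cluster nodes, posting each file to the Solr-compatible update
# endpoint. Hostnames and index name are placeholders.
HOSTS="node1:8098 node2:8098 node3:8098"
set -- $HOSTS
N=$#

CMDS=""
i=1
while [ $i -le 15 ]; do
  # Pick the next host in round-robin order (1-based positional params).
  idx=$(( (i - 1) % N + 1 ))
  host=$(eval echo "\${$idx}")
  # Collect the commands instead of running them; a real run would drop
  # the CMDS bookkeeping and execute curl directly.
  CMDS="$CMDS
curl -X POST -H 'Content-Type: text/xml' --data-binary @wikipedia_$i.xml http://$host/solr/wikipedia/update"
  i=$(( i + 1 ))
done
printf '%s\n' "$CMDS"
```

Each node ends up receiving 5 of the 15 batches, which avoids the single-node CPU bottleneck Rusty describes.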
>> > You can submit all the XMLs with:
>> >
>> > bin/search-cmd solr wikipedia /wikipedia/content-xml-out/wikipedia_1.xml
>> >
>> > So you only need to submit these files to Riak Search, then run a
>> > benchmark test, tune, and share your experience.
>> > I would like to ask the Riak admin guys: is there any place where I can
>> > share these files for public access, to start collaborative tests?
>> > In a second phase I can put up a 3 million document Wikipedia XML set,
>> > ready to submit to Riak Search, so we all have some common benchmark
>> > and tuning parameters.
>> > I hope this will help the Riak Search community better understand its
>> > capability.
>> > Best Regards
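The command-line submission quoted above can be looped over all 15 batch files. A minimal POSIX shell sketch, assuming the file layout from the thread; it is a dry run that prints each command (remove the echo to actually execute the submissions):

```shell
#!/bin/sh
# Dry-run sketch: submit all 15 Wikipedia batch files through the Riak
# Search command-line tool, one at a time. The data files and a running
# cluster are not assumed here, so the commands are only printed.
SUBMITTED=0
i=1
while [ $i -le 15 ]; do
  f="/wikipedia/content-xml-out/wikipedia_$i.xml"
  echo "bin/search-cmd solr wikipedia $f"
  SUBMITTED=$(( SUBMITTED + 1 ))
  i=$(( i + 1 ))
done
```

Note that per Rusty's reply, posting directly to the Solr endpoint from multiple machines will index faster than running this loop on a single node.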
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com