Hi,
We have a huge database (around 4 billion records, 30 TB) storing video
watch information, i.e. view count, comments, favorites, etc. I want to produce a
daily report of view counts for all videos. That means I need to look at two days,
today and yesterday, and subtract yesterday's view count from today's.
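A minimal sketch of that daily diff, assuming each day's totals can be exported as
text files of "video_id total_views" sorted by video_id (the filenames are placeholders):

# join the two sorted dumps on video_id and subtract yesterday's total from today's
join views_yesterday.txt views_today.txt | awk '{ print $1, $3 - $2 }' > daily_report.txt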
the load on the machine seems to be steady (around 2.5).
>
> I'd really like some real world input to build the graphs, and when I'm done,
> I'm gonna post them on the wiki. :D
>
> Cheers,
>
> On Thu, Nov 4, 2010 at 2:13 PM, Prometheus WillSurvive
>
Hi Guys,
A week ago I put the data, ready to index into riaksearch via the solr interface,
on rapidshare to make it available to the community.
I would love to get some benchmark results from you guys. Has anybody tested
it?
Prometheus..
Hi friends,
RiakSearch uses the Lucene analyzer (Java). By the same logic, can we use or
integrate other Lucene components into RiakSearch, such as the highlighter,
etc.?
RiakSearch already calls out to external Java components to benefit from the
current Lucene analyzers.
Any plans for this?
Prometheus
md solr testwiki /root/DATA/output/\*.xml
>
> Also, out of curiosity, what does the system say when you type "echo $SHELL"
>
> Best,
> Rusty
>
> On Sun, Oct 31, 2010 at 11:24 AM, Prometheus WillSurvive
> wrote:
> Hi Rusty,
>
> This is what I got
one of the alternatives.
> Other characters represent themselves. Only filenames that have exactly the
> same character in the same position will match. (Matching is case-sensitive;
> i.e. "a" will not match "A").
>
> Best,
> Rusty
>
> On Sun, Oct 31, 2010 a
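A quick hedged check, assuming a directory like the one used earlier in the thread
(the path is a placeholder): running echo with the same pattern shows exactly what
the shell would pass on to search-cmd.

echo /root/DATA/output/*.xml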
card using '*':
>
> bin/search-cmd solr index path/to/directory/*.xml
>
> Best,
> Rusty
>
>
> On Sat, Oct 30, 2010 at 4:10 PM, Prometheus WillSurvive
> wrote:
> Hi ,
>
> Is there any way to submit more than one file in:
>
> "search-cmd sol
n be a
> symptom of growing backlog in the system - e.g. if mnesia is falling behind,
> since every pending mnesia transaction keeps at least one ets table for
> the temporary transaction store.
>
> BR,
> Ulf W
>
> On 31 Oct 2010, at 09:23, Prometheus WillSurvive wro
ulimit = unlimited
On Oct 31, 2010, at 10:30 AM, Neville Burnell wrote:
> Have you increased your ulimit?
>
> On 31 October 2010 19:23, Prometheus WillSurvive
> wrote:
> Hi,
>
> We started a batch index test (wikipedia) when we reached around 600K docs
> system
Hi,
We started a batch index test (Wikipedia). When we reached around 600K docs, the
system gave the error below. Any idea?
We cannot index any more docs in this index.
=ERROR REPORT 31-Oct-2010::10:22:42 ===
** Too many db tables **
DEBUG: riak_search_dir_indexer:197 - "{ error , Type , Er
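"Too many db tables" is the Erlang VM hitting its ets table limit (about 1400 by
default). A hedged sketch of one way to raise it, assuming the node is started from
a wrapper script such as bin/riaksearch that inherits the shell environment (the
value is only illustrative):

# raise the Erlang ets table limit, then restart the node
export ERL_MAX_ETS_TABLES=256000
bin/riaksearch start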
Hi,
Is there any way to submit more than one file in:
"search-cmd solr INDEX abc.xml"?
It is not accepting *.xml or 1.xml 2.xml on the same command line...
thanks
Prometheus
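A hedged workaround sketch in the meantime, assuming the index is called myindex
and the files sit in one directory (both names are placeholders): let the shell
loop over the files and call search-cmd once per file.

for f in /path/to/xml/*.xml; do
    bin/search-cmd solr myindex "$f"
done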
it smaller, try again.
>
> -alexander
>
> On Oct 30, 2010, at 2:14 PM, Prometheus WillSurvive wrote:
>
>> Hi ,
>>
>> I have a problem submitting an XML document to riaksearch through the solr
>> interface.
>>
>> It gives :
>>
>> Inde
Hi,
I have a problem submitting an XML document to riaksearch through the solr
interface.
It gives:
Indexer <6330.3668.69> - 0.0 % - 0.0 KB/sec - 0/1 files - 2360 seconds
Indexer <6330.3668.69> - 0.0 % - 0.0 KB/sec - 0/1 files - 2372 seconds
Indexer <6330.3668.69> - 0.0 % - 0.0 KB/sec - 0/1 fi
Hi,
From the wiki:
Riak Search is comprised of:
• Riak Core - Dynamo-inspired distributed-systems framework
• Riak KV - Distributed Key/Value store inspired by Amazon's Dynamo.
• Bitcask - Default storage backend used by Riak KV.
• Riak Search - Distrib
eusWillSurvive
>
> On Oct 28, 2010, at 12:28 PM, Neville Burnell wrote:
>
>> Put it on S3
>>
>> On 28 October 2010 20:20, francisco treacy
>> wrote:
>> Very good idea!
>>
>> 2010/10/28 Prometheus WillSurvive :
>> > Hi All,
>>
rectly. In curl, it looks like this:
>
> curl -X POST -H "Content-Type: text/xml" --data-binary @datafile.xml
> http://hostname:8098/solr/myindex/update
>
> (Change the name of the datafile, hostname, and index appropriately.)
>
> Best,
> Rusty
>
> On Thu, Oct 28, 2010 at 6:46 AM,
>
> 2010/10/28 Prometheus WillSurvive :
> > Hi All,
> > We have prepared Wikipedia database output ready to submit to RiakSearch. It is
> > XML in the format described for solr submission. Each file has 20,000 documents,
> > 15 XML files in total, each around 44 MB.
> &g
Hi All,
We have prepared Wikipedia database output ready to submit to RiakSearch. It is XML
in the format described for solr submission. Each file has 20,000 documents, 15 XML
files in total, each around 44 MB.
You can submit all the XMLs with: bin/search-cmd solr wikipedia
/wikipedia/content-xml-out/
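A hedged sketch of the same submission over the Solr HTTP interface discussed
elsewhere in the thread, assuming the default port 8098 (the hostname is a
placeholder):

for f in /wikipedia/content-xml-out/*.xml; do
    curl -X POST -H "Content-Type: text/xml" --data-binary @"$f" \
         http://hostname:8098/solr/wikipedia/update
done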
Hi There,
Is there an easy way to delete a whole bucket?
Is there a command to show me bucket information, i.e. how many documents, its
disk size, etc.?
thanks
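Riak has no single delete-bucket call; a hedged sketch of the usual workaround over
the HTTP interface, assuming the default port 8098 (hostname, bucket, and key are
placeholders): list the keys, then delete the objects one by one. Note that
keys=true pulls the whole key list into memory on the node.

# list the keys without the bucket properties
curl -s "http://hostname:8098/riak/mybucket?keys=true&props=false"
# delete one of the listed objects
curl -s -X DELETE "http://hostname:8098/riak/mybucket/somekey"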
Hi there,
I am trying to run some stress tests and want to see the behaviour of
riaksearch.
My setup:
3 riak search machines:
1x 8 cores, 8 GB RAM
1x 2 cores, 8 GB RAM
1x 2 cores, 2 GB RAM
And there is another machine for load balancing (nginx).
I indexed 150K documents (from the web). I ran the test via JMeter
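A hedged smoke check of that setup from the shell before the JMeter runs, assuming
nginx forwards requests on port 80 of the balancer host straight to the nodes'
Solr endpoint (the hostname, index, and query are placeholders):

curl -s "http://balancer-host/solr/myindex/select?q=body:test"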
Hi,
Is there any way to tell riaksearch to return only the x, y, z fields instead of
returning all fields in the result set?
Thanks
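With a stock Solr endpoint this is the fl parameter; whether Riak Search's Solr
interface honors it is an assumption here. A hedged example against the HTTP
endpoint (hostname, index, query, and field names are placeholders):

curl "http://hostname:8098/solr/myindex/select?q=title:riak&fl=id,title"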
Hi Guys,
It's great work, building search on the Erlang-Lucene family.
I wonder, are there any benchmarks with 10M+ web documents?
We have sub-second performance with 12M docs on SOLR, but keeping the system
alive is sometimes a nightmare...
Can we solve our problems with riak_search?