> It works pretty fast.
> Cool.
> Just keep an eye out for how big the lucene token row gets.
> Cheers
Indeed, it may get out of hand, but for now we are OK -- for the
foreseeable future, I would say.
Should it get larger, I can split it up into rows -- i.e. all tokens
that start with "a", all tokens that start with "b", and so on.
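A minimal sketch of that row-splitting idea, using a plain dict as a stand-in for a Cassandra column family (the names here are illustrative, not real Cassandra or Lucene API):

```python
from collections import defaultdict

# index_cf maps row key -> columns; each column name is a composite
# "token:recordKey" (Cassandra would keep the columns sorted for us).
index_cf = defaultdict(dict)

def shard_row_key(token: str) -> str:
    """Row key is the token's first letter, so no single row grows unbounded."""
    return f"tokens_{token[0].lower()}"

def index_token(token: str, record_key: str) -> None:
    index_cf[shard_row_key(token)][f"{token}:{record_key}"] = b""

index_token("apple", "rec1")
index_token("avocado", "rec2")
index_token("banana", "rec3")

# All "a" tokens now live in one row, "b" tokens in another.
print(sorted(index_cf.keys()))  # ['tokens_a', 'tokens_b']
```

Each shard row stays bounded by the token distribution per letter, at the cost of hitting one row per leading letter at query time.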
> It works pretty fast.
Cool.
Just keep an eye out for how big the lucene token row gets.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 7/10/2012, at 2:57 AM, Oleg Dulin wrote:
So, what I ended up doing is this --
As I write my records into the main CF, I tokenize some fields that I
want to search on using Lucene and write an index into a separate CF,
such that my columns are a composite of:
luceneToken:record key
I can then search my records by doing a slice for each Lucene token.
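The scheme above can be sketched in miniature like this. The whitespace tokenizer stands in for a Lucene analyzer and the dict stands in for the index column family; none of this is real Cassandra or Lucene API, it just shows the composite-column layout and the prefix slice:

```python
def tokenize(text: str) -> list[str]:
    # Stand-in for a Lucene analyzer: lowercase and split on whitespace.
    return text.lower().split()

index_row: dict[str, str] = {}  # column name -> (unused) value

def index_record(record_key: str, text: str) -> None:
    # Write one composite column "luceneToken:recordKey" per token.
    for token in tokenize(text):
        index_row[f"{token}:{record_key}"] = ""

def slice_by_token(token: str) -> list[str]:
    # Emulates a column slice over the "token:" prefix, returning the
    # record keys embedded in the matching composite columns.
    prefix = token + ":"
    return sorted(col.split(":", 1)[1]
                  for col in index_row if col.startswith(prefix))

index_record("rec1", "Quick brown fox")
index_record("rec2", "Quick red hen")

print(slice_by_token("quick"))  # ['rec1', 'rec2']
print(slice_by_token("fox"))    # ['rec1']
```

In real Cassandra the slice would be a range query over sorted column names (from `token:` to `token:` plus a high sentinel) rather than a linear scan.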
AFAIK, if you want to keep it inside Cassandra, the options are DSE, rolling
your own from scratch, or starting with https://github.com/tjake/Solandra .
Outside of Cassandra, I've heard of people using Elasticsearch or Solr, which
I *think* is now faster at updating the index.
Hope that helps.
---
Someone I know did search with Lucene, but for very fresh data they built the
search index in memory, so data became available for search without delay.
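A hedged sketch of that "fresh data in memory" idea: recent writes go into a small in-memory index, and queries consult both it and the persisted index, so new records are searchable immediately. Both indexes are plain dicts here purely for illustration:

```python
# token -> set of record keys
persisted_index: dict[str, set[str]] = {"cassandra": {"rec1"}}
fresh_index: dict[str, set[str]] = {}

def write_fresh(record_key: str, tokens: list[str]) -> None:
    # New records land in the in-memory index first.
    for token in tokens:
        fresh_index.setdefault(token, set()).add(record_key)

def search(token: str) -> set[str]:
    # Union of persisted results and not-yet-flushed fresh results.
    return persisted_index.get(token, set()) | fresh_index.get(token, set())

write_fresh("rec2", ["cassandra", "lucene"])

print(sorted(search("cassandra")))  # ['rec1', 'rec2']
print(sorted(search("lucene")))     # ['rec2']
```

A background job would periodically flush the fresh index into the persisted one and clear it; handling deletes and updates across the two tiers is the hard part this sketch omits.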
On 3 September 2012 22:25, Oleg Dulin wrote:
Dear Distinguished Colleagues:
I need to add full-text search and somewhat free-form queries to my
application. Our data is made up of "items" that are stored in a single
column family, and we have a bunch of secondary indices for lookups.
An item has header fields and data fields, and the st