MapReduce Error: bad_utf8_character_code

2012-08-06 Thread Vladimir Shapovalov
Hi all, I've encountered a problem with MapReduce, namely a *bad_utf8_character_code* error. I know that it's caused by non-UTF-8 characters in the data. The problem is that the data is already stored. Does anybody know what the possible solutions are? Thanks in advance! Vladimir
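One common approach is to re-encode the stored values to valid UTF-8 before mapping over them again. A minimal sketch using curl and iconv, assuming the bucket/key names are hypothetical and the stored bytes are Latin-1 (adjust the source charset to your data):

    # fetch the raw value (hypothetical bucket/key)
    curl -s http://localhost:8098/buckets/mybucket/keys/mykey > value.raw
    # re-encode from Latin-1 to UTF-8
    iconv -f ISO-8859-1 -t UTF-8 value.raw > value.utf8
    # write the cleaned value back
    curl -XPUT http://localhost:8098/buckets/mybucket/keys/mykey \
         -H "Content-Type: application/json; charset=utf-8" \
         --data-binary @value.utf8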

Riak Search. Ranges. Integer Analyzer Factory.

2012-08-19 Thread Vladimir Shapovalov
Hi all, Does anybody know how to query integers with wildcards and ranges? Currently I can only get results with an exact match. I've defined key names with the suffix *_int* (price_int). I use the integer_analyzer_factory: ... {dynamic_field, [ {name, "*_int"}, {type, intege
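For context, a complete integer dynamic_field in a Riak Search schema would look roughly like the sketch below; the padding_size value and the catch-all field are assumptions, not taken from the original message:

    {schema,
     [{version, "1.1"},
      {default_field, "value"}],
     [
      %% any field ending in _int is indexed as a zero-padded integer
      {dynamic_field, [
        {name, "*_int"},
        {type, integer},
        {padding_size, 10},
        {analyzer_factory, {erlang, text_analyzers, integer_analyzer_factory}}
      ]},
      %% catch-all for all remaining fields
      {dynamic_field, [
        {name, "*"},
        {analyzer_factory, {erlang, text_analyzers, whitespace_analyzer_factory}}
      ]}
     ]
    }.

Such a schema is typically installed with search-cmd set-schema <bucket> <file>; check the docs for your release.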

Re: Riak Search. Ranges. Integer Analyzer Factory.

2012-08-20 Thread Vladimir Shapovalov
ot;11". So > 200 to 400 would be like [000200 TO 000400]. Try that. > > > @siculars > http://siculars.posterous.com > > Sent from my iRotaryPhone > > On Aug 19, 2012, at 18:11, Vladimir Shapovalov > wrote: > > Hi all, > > Does anybody kno

Riak Search. Indexing special chars. Internal Server Error.

2012-08-20 Thread Vladimir Shapovalov
Hi all, I'm trying to put some data into a bucket which has indexing enabled. One key value has a special char ß (German sharp S). It is UTF-8 encoded. The error message I get:
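A minimal way to reproduce this, assuming the bucket has the Riak Search precommit hook enabled (bucket and key borrowed from the later thread in this digest):

    curl -XPUT http://localhost:8098/buckets/is_solr/keys/123 \
         -H "Content-Type: application/json; charset=utf-8" \
         -d '{"street":"Sabinastraße"}'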

Re: Riak Search. Indexing special chars. Internal Server Error.

2012-08-20 Thread Vladimir Shapovalov
"}, {type, string}, {analyzer_factory, {erlang, text_analyzers, *standard_analyzer_factory*}} ]} ... It looks like the whitespace_analyzer_factory behaves somewhat differently compared to standard_analyzer_factory. Thanks Vladimir On Mon, Aug 20, 2012 at 3:50 PM, Ryan Zezeski wrote: > > On Mo
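For reference, a full static field definition of the kind the truncated snippet shows might look like this; the field name is illustrative, and the comment summarizes the usual description of the two analyzers:

    {field, [
      {name, "street"},
      {type, string},
      %% standard_analyzer_factory tokenizes more aggressively (splitting on
      %% punctuation, dropping short tokens/stopwords) than
      %% whitespace_analyzer_factory, which only splits on whitespace --
      %% hence the differences observed above
      {analyzer_factory, {erlang, text_analyzers, standard_analyzer_factory}}
    ]}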

Re: Riak Search. Ranges. Integer Analyzer Factory.

2012-08-20 Thread Vladimir Shapovalov
e docu: http://wiki.basho.com/Riak-Search---Querying.html#Range-Searches. Please tell me what I'm doing wrong. Thanks Vladimir On Mon, Aug 20, 2012 at 4:25 PM, Ryan Zezeski wrote: > > > On Mon, Aug 20, 2012 at 4:22 AM, Vladimir Shapovalov wrote: > >> Hi Alexander, >

Re: Riak Search. Ranges. Integer Analyzer Factory.

2012-08-21 Thread Vladimir Shapovalov
s are just literals now, and it works with this. I'm going to play with the integer factory then. Thanks for your help! Vladimir On Mon, Aug 20, 2012 at 6:42 PM, Ryan Zezeski wrote: > > > On Mon, Aug 20, 2012 at 12:29 PM, Vladimir Shapovalov <shapova...@gmail.com> wrote:

Riak search. Different results (XML - JSON)

2012-08-25 Thread Vladimir Shapovalov
Hi all, I have a key in a bucket which is indexed. { ... "street":"Sabinastraße" ... } If I retrieve the key this way: curl -i -v http://localhost:8098/buckets/is_solr/keys/123 The response looks like: { ... "street":"Sabinastraße" ... } which is correct. If I search for the key and expect XML out
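For comparison, the two search requests would look roughly like this, assuming the Solr-compatible interface honors the wt parameter (%C3%9F is the URL-encoded ß):

    # XML response (default)
    curl "http://localhost:8098/solr/is_solr/select?q=street:Sabinastra%C3%9Fe"
    # JSON response
    curl "http://localhost:8098/solr/is_solr/select?q=street:Sabinastra%C3%9Fe&wt=json"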

Re: Riak search. Different results (XML - JSON)

2012-08-26 Thread Vladimir Shapovalov
caped. Cheers Vladimir On Sun, Aug 26, 2012 at 4:55 PM, Olav Frengstad wrote: > Hey, > > \u00df is the unicode representation of ß. The JSON spec states that > "Any character may be escaped", so it is valid JSON. > > Your JSON parser should handle this automatically. >
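Both forms below are valid JSON and decode to the same string; the escaped variant is simply what the JSON writer chose to emit:

    Escaped (as returned by the search API):   {"street":"Sabinastra\u00dfe"}
    Decoded (what a JSON parser hands back):   {"street":"Sabinastraße"}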

Re: Riak search. Different results (XML - JSON)

2012-08-27 Thread Vladimir Shapovalov
when serializing data and XML doesn't. Unfortunately I > don't know why the serialization routines differ. > > Hope this answers your question. > > Olav > > 2012/8/26 Vladimir Shapovalov : > > Hey Olav, > > > > Thanks for the answer. > >

Re: riak reached_max_restart_intensity

2012-09-07 Thread Vladimir Shapovalov
Hi Jorge, Concerning ulimit: it depends on your OS. Here you can find some explanations of how to set it on Linux, Solaris and MacOS. Cheers Vladimir On Fri, Sep 7, 2012 at 8:14 PM, Jorge Garrido wrote: > Hi > > An error has been detected in riak, the
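For example, on Linux the open-file limit can be checked and raised per shell, or set persistently for the riak user (values are illustrative):

    # check and raise the limit for the current shell
    ulimit -n
    ulimit -n 65536

    # persistent setting in /etc/security/limits.conf
    riak soft nofile 65536
    riak hard nofile 65536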

LevelDB backend. Turning Compression Off.

2012-09-08 Thread Vladimir Shapovalov
Hi all, Is it possible to turn LevelDB compression off? Thanks! Vladimir
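If the eleveldb release in use exposes a compression flag (an assumption; not every 1.x release did, so check your release notes), the app.config entry would look roughly like:

    {eleveldb, [
        {data_root, "/var/lib/riak/leveldb"},
        %% assumption: availability of this flag depends on the eleveldb version
        {compression, false}
    ]}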

Re: LevelDB backend. Turning Compression Off.

2012-09-09 Thread Vladimir Shapovalov
> can increase throughput and decrease latency when accessing > highly-compressible values (think JSON, XML, anything with repeated > characters or substrings). The CPU cost of Snappy compression is > small. > > On Sat, Sep 8, 2012 at 5:06 PM, Vladimir Shapovalov wrote

REST API: Deleting key vs. marking as deleted.

2012-09-11 Thread Vladimir Shapovalov
Hi all, Is deleting a key a more expensive operation than just marking it as deleted? I noticed that deleting a bunch of keys is quite expensive; all CPUs are fully utilized. I can imagine marking the keys first and deleting them some time later, e.g. not during rush hours. What is actually the bes
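The two strategies contrasted here, sketched with curl; the bucket/key names and the deleted flag are an application-level convention, not a Riak feature:

    # hard delete: triggers tombstone handling on all replicas immediately
    curl -XDELETE http://localhost:8098/buckets/mybucket/keys/k1

    # soft delete: overwrite with a small marker, reap later in off-peak hours
    curl -XPUT http://localhost:8098/buckets/mybucket/keys/k1 \
         -H "Content-Type: application/json" \
         -d '{"deleted":true}'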

Re: REST API: Deleting key vs. marking as deleted.

2012-09-12 Thread Vladimir Shapovalov
be the most efficient way to get rid > of them. Bitcask and HanoiDB support expiry. > > Kresten > > > On Sep 11, 2012, at 12:08 PM, Vladimir Shapovalov <shapova...@gmail.com> wrote: > > Hi all, > > Is deleting a key a more expensive operation than ju

Re: single machine still need cluster? like three node?

2012-09-12 Thread Vladimir Shapovalov
I'd recommend setting up at least three nodes even for a test environment. I'm quite sure your n_val has the default value of 3. That means Riak stores three copies of all data on the same node, even if you have only one node running. Th
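n_val can be inspected or set per bucket over HTTP, e.g. (bucket name is illustrative):

    curl -XPUT http://localhost:8098/buckets/test/props \
         -H "Content-Type: application/json" \
         -d '{"props":{"n_val":3}}'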

Re: REST API: Deleting key vs. marking as deleted.

2012-09-13 Thread Vladimir Shapovalov
about 3 GB in the LevelDB directory. On every node. The same thing with the index directory (merge_index). Is it possible to really delete keys? Or do I just have some configuration issues? We have a 3-node cluster, version 1.2. Thanks in advance! Vladimir On Wed, Sep 12, 2012 at 11:48 AM, Vladimir Shapovalov wrot
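One knob relevant here is riak_kv's delete_mode, which controls how long tombstones are kept before being reaped; the value below is the commonly cited default, so verify against your release:

    {riak_kv, [
        %% milliseconds to keep tombstones; 'immediate' and 'keep' are alternatives
        {delete_mode, 3000}
    ]}

Note that even after reaping, LevelDB only reclaims disk space during compaction.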

Migrate buckets: bucket_exporter gets different number of keys.

2012-09-16 Thread Vladimir Shapovalov
Hi all, I'm trying to migrate data from a one-node cluster (test-1) to a three-node cluster (test-2) with bucket_exporter. Backend: LevelDB. I've noticed that bucket_exporter counts the number of keys wrongly.
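A crude way to cross-check the key count on both clusters; full key listing is expensive, so this is for test environments only (bucket name is illustrative):

    curl -s "http://localhost:8098/buckets/mybucket/keys?keys=true" \
      | python -c "import json,sys; print(len(json.load(sys.stdin)['keys']))"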

Re: Deleting items from search index increases disk usage

2012-10-26 Thread Vladimir Shapovalov
Hi Jeremy, As far as I know, the delete operation doesn't delete the data physically, it just marks it as deleted. I encountered this problem a while ago and was also surprised by the fact that the data grows instead of shrinking. Cheers Vladimir On Fri, Oct 26, 2012 at 2:26 PM, Jeremy Raymond