indices with large number of objects on same key

2012-03-05 Thread Telmo Menezes
Hi everyone,

Sorry if this has been answered before, but I'm a newcomer and can't
find a way to search the archives.

So I have the situation where Riak objects contain a list of keys
referencing other objects. There are cases where objects contain a
very high number of such references. At some point, it becomes
unreasonable to just store this entire list in the object.

Is there some way of dealing with this problem in Riak that I'm missing?

Secondary indices seem like a possible solution: I could just tag the
referenced objects with the id of the referrer. The problem is that the
documentation is very unclear about what happens when a large number
of values share the same tag. If I query for this tag, is there a
reasonable way to get a high number of results? Any sort of pagination
or streaming?

Best,
Telmo.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: licenses (was Re: riakkit, a python riak object mapper, has hit beta!)

2012-03-05 Thread Andrey V. Martyanov
Hi Greg,

Thank you for sharing the information! I'm sorry, I'm not trying to change
anybody's mind. This all started when I made a typo in my initial
message to Shuhao and wrote "GPL" instead of "LGPL". I strongly agree with
you that licensing is a personal choice. I only said that the license and the
style guide chosen by Shuhao keep me from using his project; it was part of my
feedback. I just don't like contributing to a project under such a license,
and you can probably guess the reasons. So, let's close the topic! :)

Best regards,
Andrey Martyanov

On Mon, Mar 5, 2012 at 11:21 AM, Greg Stein  wrote:

> Hey Andrey,
>
> I've spent well over a decade dealing with licensing issues. One thing
> that I've learned is that licensing is a personal choice and decision,
> and it is nearly impossible to alter somebody's philosophy. I find
> people fall into either the GPL camp ("free software") or the Apache/BSD
> camp ("permissive / open source"), so I always recommend GPLv3 or
> ALv2. (I think people choosing weak reciprocal licenses like LGPL, EPL,
> MPL, CDDL, etc. should make up their minds and go with either the GPL or the AL.)
>
> In any case... license choice and arguments for one over the other is
> best left to personal email, rather than a public mailing list like
> riak-users. Changing minds doesn't happen on a mailing list :-)
>
> Cheers,
> -g
>
> On Fri, Mar 2, 2012 at 05:24, Andrey V. Martyanov 
> wrote:
> > Hi Justin,
> >
> > Sorry for the late response, I didn't see your message! In fact, I do know
> > the difference between the two. But what is the benefit of using it? Why not
> > just use BSD, for example, like many open source projects do? The biggest
> > drawback of the LGPL is that many people think it's the same as the GPL and
> > have trouble understanding it. Even you thought that I didn't know the
> > difference! :) Why? Because it's a common situation: a lot of people really
> > don't know the difference. That's why I said before that the (L)GPL is
> > overcomplicated. If you open the LGPL main page [1], the first thing you see
> > is "Why you shouldn't use the Lesser GPL for your next library". Is that
> > normal? It confuses people. There is a lot of benefit in contributing the
> > changes you've made back upstream: a lot of people see them, fix them,
> > comment on them, and improve them. But why should the license force me to do
> > that? It shouldn't.
> >
> > [1] http://www.gnu.org/licenses/lgpl.html
> >
> > Best regards,
> > Andrey Martyanov
> >
> > On Fri, Mar 2, 2012 at 8:29 AM, Justin Sheehy  wrote:
> >>
> >> Hi, Andrey.
> >>
> >> On Mar 1, 2012, at 10:18 PM, "Andrey V. Martyanov" 
> >> wrote:
> >>
> >> > Sorry for GPL, it's a typo. I just don't like GPL-based licenses,
> >> > including LGPL. I think it's overcomplicated.
> >>
> >> You are of course free to dislike anything you wish, but it is worth
> >> mentioning that the GPL and LGPL are very different licenses; the LGPL is
> >> missing the infectious aspects of the GPL.
> >>
> >> There are many projects which could not use GPL code compatibly with
> their
> >> preferred license but which can safely use LGPL code.
> >>
> >> Justin
> >>
> >>
> >
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Search and Erlang client

2012-03-05 Thread Doug Selph
Hi Dan,

It took me several days to get back to this issue; my apologies. I was able
to track down what was happening. Bottom line is this: I was creating a new
Riak object with the desired content type. I was then updating the metadata
on the new object before saving it, in order to add 2i information. The
updated metadata did not include the content type, so I was clobbering it
before saving. The resulting content type was 'application/octet-stream'.
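
For anyone hitting the same thing, a minimal sketch of the pattern that avoids
the clobbering (the <<"index">> metadata key and the field name are illustrative
assumptions, not taken from this thread): start from the object's existing
update metadata and add the 2i entries to it, so the content type set in
riakc_obj:new/4 survives the put.

{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
Obj0 = riakc_obj:new(<<"bucket">>, <<"key">>, <<"{\"foo\":\"bar\"}">>,
                     <<"application/json">>),
%% pull the metadata that already carries the content type ...
MD0 = riakc_obj:get_update_metadata(Obj0),
%% ... and add the index entries to it (field name is hypothetical)
MD1 = dict:store(<<"index">>, [{"user_bin", "doug"}], MD0),
Obj1 = riakc_obj:update_metadata(Obj0, MD1),
ok = riakc_pb_socket:put(Pid, Obj1).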

Thanks for your replies.
Doug


On Mon, Feb 27, 2012 at 5:52 PM, Dan Reverri  wrote:

> Hi Doug,
>
> The Erlang client will use whatever content-type is set by the user. Can
> you provide an example of how you are creating your objects?
>
> The following worked for me:
> {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087).
> O = riakc_obj:new(<<"bucket">>, <<"key">>, <<"{\"foo\":\"bar\"}">>,
> <<"application/json">>).
> riakc_pb_socket:put(Pid, O).
>
> Thanks,
> Dan
>
> Daniel Reverri
> Developer Advocate
> Basho Technologies, Inc.
> d...@basho.com
>
>
> On Mon, Feb 27, 2012 at 11:17 AM, Doug Selph  wrote:
>
>> Hi Dan,
>>
>> Well, I'm stumped. When creating the object, I pass "application/json" as
>> the content-type, but all my objects have content-type set to
>> "application/octet-stream".
>>
>> Does the erlang client save everything as a binary?
>>
>> Thanks,
>> Doug
>>
>>
>> On Mon, Feb 27, 2012 at 12:10 PM, Dan Reverri  wrote:
>>
>>> Hi Doug,
>>>
>>> Can you confirm the "Content-Type" header is set to "application/json"
>>> by reading the JSON objects you stored?
>>>
>>> Can you share an example JSON object as well as an example query you are
>>> running with "search-cmd"?
>>>
>>> Thank you,
>>> Dan
>>>
>>> Daniel Reverri
>>> Developer Advocate
>>> Basho Technologies, Inc.
>>> d...@basho.com
>>>
>>>
>>> On Sun, Feb 26, 2012 at 9:01 PM, Doug Selph  wrote:
>>>
  I'm having trouble with riak-search. If I use the search-cmd command
 line tool to index text files, I can search the index using both search-cmd
 and the Solr interface. However, when I store JSON objects via the Erlang
 protocol buffers client, in a bucket which has the search pre-commit
 trigger installed, searches in that index always return 0 matches for
 search terms which have matches in the bucket. (I have saved these objects
 to the bucket after the pre-commit trigger was installed.)

 Is there a trick to this? It is my understanding that there is a
 default schema for JSON objects, which should be sufficient as a starting
 point.

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


>>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Unicode String problem

2012-03-05 Thread Steve Vinoski
On Sun, Mar 4, 2012 at 4:51 PM, Buri Arslon  wrote:

> Hi everybody!
>
> I can't put a Unicode string into Riak. I was following the riak-erlang-client
> docs, and this doesn't work:
>
>   Object = riakc_obj:new(<<"snippet">>, <<"odam">>, <<"Одамлардан тинглаб
> хикоя">>).
>  ** exception error: bad argument
>
> I googled but couldn't find anything meaningful about this issue. So, I'd
> be very grateful if someone could refer me
> to relevant documentation or give me some hints to solve the problem.
>

Have a look at the Erlang unicode module:

http://www.erlang.org/doc/man/unicode.html

You probably need to use unicode:characters_to_binary to generate a valid
binary for the value you're trying to store in riak.
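
A minimal sketch of that (the content type is an assumption, and on older
Erlang releases the shell/source encoding may require supplying the string as
a list of Unicode codepoints rather than a literal):

{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
%% encode the characters as a UTF-8 binary before building the object
Value = unicode:characters_to_binary("Одамлардан тинглаб хикоя"),
Object = riakc_obj:new(<<"snippet">>, <<"odam">>, Value,
                       <<"text/plain; charset=utf-8">>),
riakc_pb_socket:put(Pid, Object).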

--steve
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Questions on configuring public and private ips for riak on ubuntu

2012-03-05 Thread Tim Robinson
That'll do it - thanks!

Tim

-Original Message-
From: "Aphyr" 
Sent: Sunday, March 4, 2012 10:57pm
To: 
Cc: riak-users@lists.basho.com
Subject: Re: Questions on configuring public and private ips for riak on ubuntu

ssh -NL 8098:localhost:8098 your.vps.com

--Kyle

On 03/04/2012 09:55 PM, Tim Robinson wrote:
> Yeah, I read your blog post when it first came out. I liked it.
>
> I appreciate the warning, but practically speaking I'm really just not 
> worried about it. It's a test environment on an external VPS that no one 
> knows the info for. Demoing to the company means showing image/content-type 
> loads, JSON in the browser with proper indentation, and Riak Control. SSH isn't 
> going to do that for me.
>
> I'm using public data for the testing. I can blow the whole thing away any 
> time.
>
> Aside from the warnings, does anyone want to help with the question?
>
> Thanks,
> Tim
>
>
> -Original Message-
> From: "Aphyr"
> Sent: Sunday, March 4, 2012 10:41pm
> To: "Tim Robinson"
> Subject: Re: Questions on configuring public and private ips for riak on 
> ubuntu
>
> I can get SSH access over Riak's HTTP and protobufs interfaces in about
> five seconds, and can root a box shortly after that, depending on
> kernel. Please don't do it. Just don't.
>
> http://aphyr.com/posts/224-do-not-expose-riak-to-the-internet
> http://aphyr.com/posts/218-systems-security-a-primer
>
> --Kyle
>
> On 03/04/2012 09:38 PM, Tim Robinson wrote:
>> Right now I am just loading data for test purposes. It's nice to be able to 
>> do some benchmarks against the private network (which is at 1 Gbit/s)... while 
>> being able to poke a hole in the firewall when I want to do a test/demo.
>>
>> Tim
>>
>> -Original Message-
>> From: "Alexander Sicular"
>> Sent: Sunday, March 4, 2012 9:15pm
>> To: "Tim Robinson"
>> Cc: "riak-users@lists.basho.com"
>> Subject: Re: Questions on configuring public and private ips for riak on 
>> ubuntu
>>
>> this is a "Very Bad" idea. do not expose your riak instance over a public ip 
>> address. riak has no internal security mechanism to keep people from doing 
>> very bad things to your data, configuration, etc.
>>
>> -Alexander Sicular
>>
>> @siculars
>>
>> On Mar 5, 2012, at 12:43 AM, Tim Robinson wrote:
>>
>>> Hello all,
>>>
>>> I have a few questions on networking configs for riak.
>>>
>>> I have both a public ip and a private ip for each riak node. I want Riak to 
>>> communicate over the private ip addresses to take advantage of free 
>>> bandwidth, but I would also like the option to interface with riak using 
>>> the public ip's if need be (i.e. for testing / demo's etc).
>>>
>>> I'm gathering that the way people do this is by setting app.config to 
>>> use the ip "0.0.0.0" so Riak listens on all IPs. I'm also gathering that vm.args 
>>> needs to have a unique name in the cluster, so I would need to use the hostname 
>>> for the -name option (i.e. r...@www.fake-node-domain-name-1.com).
>>>
>>> My hosts file would contain:
>>>
>>> 127.0.0.1  localhost.localdomain  localhost
>>> x.x.x.xwww.fake-node-domain-name-1.commynode-1
>>> 
>>>
>>> where x.x.x.x is the public ip not the private.
>>>
>>> This is where I start to get lost.
>>>
>>> As it sits, if I attempt to join using the private IPs I will get the 
>>> unreachable error - yet I can telnet to/from the equivalent nodes.
>>>
>>> So I could add a second IP to the hosts file, but since I need to keep the 
>>> public one as well, how is it that Riak is going to use the private IPs for 
>>> the gossip ring, hinted handoff, and so on?
>>>
>>> There's obviously some networking basics I am missing.
>>>
>>> Any guidance from those of you who have done this?
>>>
>>> Thanks.
>>> Tim
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>>
>> Tim Robinson
>>
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>
>
> Tim Robinson
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Tim Robinson



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Map/Reduce | Link Walking in JAVA-CLIENT

2012-03-05 Thread Rajat Mathur
I am new to the MapReduce technique. I have been using the *Riak-Java-Client*.

*1.* I wish to know how I can write my own map functions and reduce
functions for querying. Where do they have to be stored if I want to use them
the way *NamedErlangFunctions/NamedJSFunctions* are used? How should reduce
functions be written so that they can perform aggregation operations on the
results produced by map phases?

*2.* During MapReduce phases, can we have an *alternating filtration and
query process*? That is, say we want to do an index search on 4 indices and
have to take the OR or AND of them; instead of calculating keys for all
indices, can we *filter objects* (obtained from querying one of the 2i's)
on the remaining indexes inside MapReduce?

*3.* If we were to query using link walking with MapReduce, but only partially,
i.e. say 2 stages of link walking (and since the number of links would be too
high for my app after that, I have stored the keys of the next bucket instead
of links to them - is that fine?), and then filter the desired result out of
that data, how do we get that working?

For example, say each object in the last step of the link walk has a list of
keys for the next bucket's objects. Is there a way to retrieve the objects of
the last bucket (whose key list is contained in the objects of the last step)
by continuing the MapReduce, or do I have to iterate over each object's list
attribute and fetch the objects by key?

Please suggest some place on the web where I can see more *code* for the
Riak-Java-Client.


-- 
*Rajat Mathur*
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: riak_core app using riak_kv

2012-03-05 Thread Ryan Zezeski
Adam,

The reason you don't see this code in Search is because Search uses a hook
to index the incoming objects (i.e. KV passes the data _to_ Search).

What you are looking for is a local Riak client.  You can create one via
`{ok, C} = riak:local_client()` [1].  That will give you a client to the
local KV instance [2].

-Ryan

[1]: https://github.com/basho/riak_kv/blob/master/src/riak.erl#L75

[2]: https://github.com/basho/riak_kv/blob/master/src/riak_client.erl
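
A minimal usage sketch (the put/get arities below are assumptions based on
riak_client.erl, which is a parameterized module in this generation of Riak;
check the module for the exact signatures):

%% From code running on the same Erlang node as riak_kv:
{ok, C} = riak:local_client(),
Obj = riak_object:new(<<"bucket">>, <<"key">>, <<"value">>),
ok = C:put(Obj, 1),                                %% W = 1
{ok, Fetched} = C:get(<<"bucket">>, <<"key">>, 1). %% R = 1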

On Sun, Mar 4, 2012 at 7:10 AM, Adam Schepis  wrote:

> What is the best way for a riak_core app to use riak_kv for persistent
> storage? I thought that riak search did this but haven't found it in
> that code yet. Should I use the erlang interface via http or protobuf
> or is there an API or module I can use since my app is running as a
> member of the cluster?
>
> Sent from my iPhone
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak basho bench error when running http_raw driver

2012-03-05 Thread Andrew Whang
Hi, 

I'm trying to run the basho bench tool against my Riak cluster using the 
http_raw driver, and am coming across the following error:

=ERROR REPORT 5-Mar-2012::13:58:09 ===
** Generic server <0.430.0> terminating 
** Last message in was {send_req,
   {{url,undefined,"127.0.0.1",8098,undefined,
undefined,"/riak/test/5526",undefined},
[],get,[],
[{response_format,binary}],
5000}}
** When Server state == {state,"127.0.0.1",8098,undefined,#Ref<0.0.0.1604>,
   false,undefined,[],false,undefined,false,[],
   {[],[]},
   undefined,idle,undefined,<<>>,0,0,[],undefined,
   undefined,undefined,undefined,false,undefined,
   undefined,<<>>,undefined,false,undefined,0,
   undefined}
** Reason for termination == 
** {function_clause,
   [{ibrowse_http_client,send_req_1,
[{<0.65.0>,#Ref<0.0.0.1605>},
 {url,undefined,"127.0.0.1",8098,undefined,undefined,
 "/riak/test/5526",undefined},
 [],get,[],
 [{response_format,binary}],
 5000,
 {state,"127.0.0.1",8098,undefined,#Ref<0.0.0.1604>,false,
 undefined,[],false,undefined,false,[],
 {[],[]},
 undefined,idle,undefined,<<>>,0,0,[],undefined,undefined,
 undefined,undefined,false,undefined,undefined,<<>>,undefined,
 false,undefined,0,undefined}],
[{file,"src/ibrowse_http_client.erl"},{line,619}]},
{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,578}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}


My config:

{mode, max}.
{duration, 1}.
{concurrent, 3}.
{driver, basho_bench_driver_http_raw}.
{key_generator, {uniform_int, 1}}.
{value_generator, {fixed_bin, 1}}.
{http_raw_path, "/riak/test"}.
{operations, [{get, 1}, {update, 1}]}.


Any help debugging this error would be great. 

Thanks,
Andrew

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Dreaded: "inet_tcp",{{badmatch,{error,duplicate_name}}

2012-03-05 Thread Robert Lowe
Hi,

Has anyone dealt with this error:

"Protocol: ~p: register error: 
~p~n",["inet_tcp",{{badmatch,{error,duplicate_name}}"

http://hastebin.com/wucipobiho  - someone else having the same issue

For some reason the node fails to register its name, and I'm not completely
sure of the root cause (something to do with epmd is my guess).

After `riak start` no crash is produced, but any command will report "Node is
not up".

However, if you run `riak console`, the crash snippet from above is produced.

What's the resolution for this? Even kill -9'ing all riak-related processes
doesn't fix the issue.

Even weirder, after a long while the issue goes away (or so I've seen in my
staging environment).

Regards,
 - Rob

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: indices with large number of objects on same key

2012-03-05 Thread Mark Phillips
Hi Telmo,

To answer one of your questions ...

On Mon, Mar 5, 2012 at 1:17 AM, Telmo Menezes  wrote:
> Hi everyone,
>
> Sorry if this has been answered before but I'm a newcomer and can't
> find out a way to search the archives.
>

http://riak.markmail.org is probably the best way to search the archives.

Mark

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: indices with large number of objects on same key

2012-03-05 Thread Jeremiah Peschka


On Mar 5, 2012, at 1:17 AM, Telmo Menezes wrote:

> So I have the situation where Riak objects contain a list of keys
> referencing other objects. There are cases where objects contain a
> very high number of such references. At some point, it becomes
> unreasonable to just store this entire list in the object.
> 
> Is there some way of dealing with this problem in Riak that I'm missing?
> 
> Secondary indices seem like a possible solution: I could just tag the
> referenced objects with the id of the referrer. The problem is that the
> documentation is very unclear about what happens when a large number
> of values share the same tag. If I query for this tag, is there a
> reasonable way to get a high number of results? Any sort of pagination
> or streaming?
> 

Riak doesn't support pagination, but it does support streaming. You can perform 
streaming MR operations to consume the results of the last phase of the 
MapReduce job. I have no idea which clients support this functionality outside 
of the C# client, although I would suspect most do.

From [1]:

Q: Although streams weren't mentioned, do you have any recommendations on when 
to use streaming map/reduce versus normal map/reduce?

Streaming MapReduce sends results back as they get produced from the last 
phase, in multipart/mixed format. To invoke this, add ?chunked=true to the URL 
when you submit the job. Streaming might be appropriate when you expect the 
result set to be very large and have constructed your application such that 
incomplete results are useful to it. For example, in an AJAX web application, 
it might make sense to send some results to the browser before the entire query 
is complete.

[1]: 
http://basho.com/blog/technical/2010/07/27/webinar-recap---mapreduce-querying-in-riak/
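
The ?chunked=true trick above is for the HTTP interface. As a rough sketch of
the same idea from the Erlang PB client (mapred_stream/4 exists in
riakc_pb_socket; the exact message shapes delivered to the caller are an
assumption based on the client's own result-collection loop and may vary by
client version):

-module(mr_stream_example).
-export([stream_bucket/2]).

%% Stream a full-bucket MapReduce job and collect phase results as they
%% arrive, instead of waiting for the whole result set.
stream_bucket(Pid, Bucket) ->
    Query = [{map, {jsfun, <<"Riak.mapValuesJson">>}, undefined, true}],
    {ok, ReqId} = riakc_pb_socket:mapred_stream(Pid, Bucket, Query, self()),
    collect(ReqId, []).

%% Message shapes assumed from the client's result-collection loop.
collect(ReqId, Acc) ->
    receive
        {ReqId, done}                   -> {ok, lists:append(lists:reverse(Acc))};
        {ReqId, {mapred, _Phase, Data}} -> collect(ReqId, [Data | Acc]);
        {ReqId, {error, Reason}}        -> {error, Reason}
    after 60000 ->
        {error, timeout}
    end.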

---
Jeremiah Peschka - Managing Director, Brent Ozar PLF, LLC
Microsoft SQL Server MVP


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Problems writing objects to an half full bucket

2012-03-05 Thread Marco Monteiro
Hello!

I have a riak cluster and I'm seeing a write fail rate of  10% to 30%
(varies with the nodes). At the moment I am writing about
300 new objects per second to the same bucket. If I direct the write to
a new (empty) bucket the problem goes away and I don't see any failure.

The non-empty bucket has between 2 and 3 million objects. Each object
has between 4 and 8 secondary indexes (most have 4).

When we started the system, yesterday, it handled a peak of about 1000
writes per second without problems, with the same hardware.

The cluster has 6 nodes, all debian with Riak 1.0.3. We tried Riak 1.1 at
first,
but had the known map-reduce problem and reverted back.

I requested help on the IRC channel and pharkmillups suggested that Riak is
just trying to write too many things to the disk, given the secondary indexes.

This is an issue report, but if someone has any idea of how a change to the
configuration can fix this, please do tell. I would also like to know what the
problem is (why this happens) and whether it can be fixed in the next few days,
maybe with a new release of Riak 1.1 along with the fixes for the map-reduce
problems.

Thanks,
Marco
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Storing large collections.

2012-03-05 Thread Eric Siegel
Originally, I had planned to map each of my items to their own key.
This was foolish as I estimate that I'll have around 6 billion keys, and
this simply won't fit into memory.

My next plan of attack is to store a collection of items under each key:
approximately 1 million keys, each with 6000 values.

I was wondering how much performance will degrade as the value size gets
larger. I'm also worried about having to merge new values into the collection.

Does anyone have any experience with this problem and perhaps some advice?

eric
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Storing large collections.

2012-03-05 Thread Jeremiah Peschka
On Mar 5, 2012, at 7:09 PM, Eric Siegel wrote:

> Originally, I had planned to map each of my items to their own key.  
> This was foolish as I estimate that I'll have around 6 billion keys, and this 
> simply won't fit into memory.

This is only an issue if you're using bitcask (which is the default back end 
for Riak). If you're willing to consider one of the alternative storage 
backends (LevelDB or InnoDB), then you can store as much data as you want, as 
long as you have disk space to hold it.
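
For reference, a sketch of the app.config change that selects LevelDB on a node
(the data_root path is an assumption; adjust for your install and restart the
node afterwards):

%% app.config fragments: eleveldb instead of the default bitcask, so the
%% keyspace is no longer bounded by RAM-resident key metadata.
{riak_kv, [
    {storage_backend, riak_kv_eleveldb_backend}
]},
{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"}
]},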

> 
> My next plan of attack is store a collection of items to a given key, 
> approximately 1million keys each with 6000 values.

This sounds cumbersome.

> 
> I was wondering how much performance will degrade as the value size gets 
> larger.  Also, I'm worried about
> having to merge the new value into the collection.

I would too - there's no guarantee that two writes to the same collection won't 
happen close enough together to cause a conflict.

> 
> Does anyone have any experience with this problem and perhaps some advice?
> 
> eric
> 


---
Jeremiah Peschka - Managing Director, Brent Ozar PLF, LLC
Microsoft SQL Server MVP
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Problems writing objects to an half full bucket

2012-03-05 Thread David Smith
Hi Marco,

On Mon, Mar 5, 2012 at 7:51 PM, Marco Monteiro  wrote:

> I have a riak cluster and I'm seeing a write fail rate of  10% to 30%
> (varies with the nodes). At the moment I am writing about
> 300 new objects per second to the same bucket. If I direct the write to
> a new (empty) bucket the problem goes away and I don't see any failure.

A couple of context establishing questions:

1. What sort of error are you getting when a write fails?
2. What backend are you using? (I'm guessing LevelDB)
3. What do your keys look like? For example, are they date-based (and
thus naturally increasing) or are they UUIDs? :)

D.

-- 
Dave Smith
VP, Engineering
Basho Technologies, Inc.
diz...@basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Storing large collections.

2012-03-05 Thread Eric Siegel
>
>
>
> >
> > My next plan of attack is store a collection of items to a given key,
> > approximately 1million keys each with 6000 values.
>
> This sounds cumbersome.
>

Yes, it is true that I will have to deal with a whole bunch of sibling
resolution and merging, but on the plus side, doing
range queries on a collection will be much faster.  This is all assuming
that the value sizes don't become so big that
fetching the collection takes a long time.
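
A rough sketch of one way to do that merge with the Erlang client, assuming
allow_mult=true on the bucket and Erlang-term-encoded lists as values (both
assumptions for illustration, not recommendations from this thread):

-module(collection_merge_example).
-export([merge_collection/3]).

%% Fetch an object, take the union of all sibling collections, and write
%% the merged collection back.
merge_collection(Pid, Bucket, Key) ->
    {ok, Obj} = riakc_pb_socket:get(Pid, Bucket, Key),
    Sets   = [binary_to_term(V) || V <- riakc_obj:get_values(Obj)],
    Merged = lists:usort(lists:append(Sets)),
    %% reuse one sibling's metadata so the content type is kept
    MD     = hd(riakc_obj:get_metadatas(Obj)),
    Obj1   = riakc_obj:update_metadata(
               riakc_obj:update_value(Obj, term_to_binary(Merged)), MD),
    riakc_pb_socket:put(Pid, Obj1).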

Can the values be compressed on disk? If so, then a 2 MB value might not be
insanely terrible.

eric
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Problems writing objects to an half full bucket

2012-03-05 Thread Marco Monteiro
Hi, David!

On 6 March 2012 04:37, David Smith  wrote:

> 1. What sort of error are you getting when a write fails?
>

I'm using riak-js and the error I get is:

{ [Error: socket hang up] code: 'ECONNRESET' }


> 2. What backend are you using? (I'm guessing LevelDB)
>

LevelDB. The documentation says this is the only one to support 2i.


> 3. What do your keys look like? For example, are they date-based (and
> thus naturally increasing) or are they UUIDs? :)
>

UUIDs. They are created by Riak. All my queries use 2i. The 2i are integers
(representing seconds) and random strings (length 16) used as identifiers
for user sessions and similar.

Thanks,
Marco
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Storing large collections.

2012-03-05 Thread Jeremiah Peschka
LevelDB will compress on disk via Google's Snappy compression routines. I think 
that's the only Riak backend that does compression.
---
Jeremiah Peschka - Managing Director, Brent Ozar PLF, LLC
Microsoft SQL Server MVP

On Mar 5, 2012, at 8:47 PM, Eric Siegel wrote:

> 
> 
> >
> > My next plan of attack is store a collection of items to a given key, 
> > approximately 1million keys each with 6000 values.
> 
> This sounds cumbersome.
> 
> Yes, it is true that I will have to deal with a whole bunch of sibling 
> resolution and merging, but on the plus side, doing
> range queries on a collection will be much faster.  This is all assuming that 
> the value sizes don't become so big that
> fetching the collection takes a long time. 
> 
> Can the values be compressed on disk?  If so that a 2mb value might not be 
> insanely terrible.
> 
> eric
> 


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Recap for March 2 - 4

2012-03-05 Thread Mark Phillips
Evening, Morning, Afternoon to All -

Short Recap for today: new code, videos, and upcoming meetups in
Buffalo and Vancouver.

(Also, I'll be on vacation up through next Tuesday, so if anyone wants
to get ambitious and put together Recaps for later this week, you'll
gain instant mailing list celebrity status.)

Enjoy.

Mark

Community Manager
Basho Technologies
wiki.basho.com/Riak.html
twitter.com/pharkmillups
---

Riak Recap for March 2 - 4


1) Mathias Meyer gave a great talk at the beginning of February all
about Querying Riak at the NoSQL Cologne Group. I foolishly forgot to
pass along the video of the talk.

* Watch here ---> http://www.nosql-cologne.org/videos/#Riak

2) Hector Castro has some new code on GitHub that lets you build a
sandbox to play with multiple Riak nodes via Vagrant.

* Repo here --->  https://github.com/hectcastro/riak-cluster

3) We posted the second video from last month's BashoChats meetup to
the Basho Blog. This talk is from Ted Nyman, lead engineer at Simple,
and is all about building and shipping JVM-based services. (There
isn't much about Riak in this talk, but it's worth the 40 minutes so I
thought I would pass it along.)

* Instant-ish Real Service Architecture --->
http://basho.com/blog/technical/2012/03/05/Instantish-Real-Service-Architecture-BashoChats002/

4) There's a Riak presentation happening at the WNY Ruby Brigade on
March 20th in Buffalo, NY. This one will be delivered by Basho Hacker
Andrew Thompson.

* Details and RSVP here --->
http://www.meetup.com/Western-New-York-Ruby/events/51549782/

5) Andy Gross is giving a Riak talk at the Vancouver Erlang Meetup
on March 14th. We're also going to try and do a drink up after the
meetup. I'll send along details as they come together for this.

* Meetup details here --->
http://www.meetup.com/erlang-vancouver/events/54568002/

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Unicode String problem

2012-03-05 Thread Igor Karymov
You can't use a Unicode expression this way from the console.
Try the same expression written in an Erlang module.

2012/3/5 Steve Vinoski 

>
>
> On Sun, Mar 4, 2012 at 4:51 PM, Buri Arslon  wrote:
>
>> Hi everybody!
>>
>> I can't put unicode string to riak. I was following the
>> riak-erlang-client docs, and this doesn't work:
>>
>>   Object = riakc_obj:new(<<"snippet">>, <<"odam">>, <<"Одамлардан
>> тинглаб хикоя">>).
>>  ** exception error: bad argument
>>
>> I googled but couldn't find anything meaningful about this issue. So, I'd
>> be very grateful if someone could refer me
>> to relevant documentation or give me some hints to solve the problem.
>>
>
> Have a look at the Erlang unicode module:
>
> http://www.erlang.org/doc/man/unicode.html
>
> You probably need to use unicode:characters_to_binary to generate a valid
> binary for the value you're trying to store in riak.
>
> --steve
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com