Hi everyone,
Sorry if this has been answered before, but I'm a newcomer and can't
find a way to search the archives.
So I have the situation where Riak objects contain a list of keys
referencing other objects. There are cases where objects contain a
very high number of such references. At some point, it becomes
unreasonable to just store this entire list
Hi Greg,
Thank you for sharing the information! I'm sorry, I'm not trying to change
somebody's mind. Everything started when I made a typo in my initial
message to Shuhao, and wrote "GPL" instead of "LGPL". I strongly agree with
you, licensing is a personal choice. I just said that the license
Hi Dan,
It took me several days to get back to this issue; my apologies. I was able
to track down what was happening. Bottom line is this: I was creating a new
Riak object with the desired content type. I was then updating the metadata
on the new object before saving it in order to add 2i information
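For anyone hitting this later, here is a minimal sketch of the pattern being
described, assuming the riak-erlang-client (riakc) of that era, where 2i
entries live in the update metadata dict under the <<"index">> key; the
connection details, bucket, key and index names are all made up for
illustration:

%% hypothetical names: create the object with an explicit content type,
%% add a 2i entry to its update metadata, then put it
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
Obj0 = riakc_obj:new(<<"users">>, <<"u1">>, <<"{}">>, <<"application/json">>),
MD0 = riakc_obj:get_update_metadata(Obj0),
MD1 = dict:store(<<"index">>, [{"email_bin", "user@example.com"}], MD0),
Obj1 = riakc_obj:update_metadata(Obj0, MD1),
ok = riakc_pb_socket:put(Pid, Obj1).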
On Sun, Mar 4, 2012 at 4:51 PM, Buri Arslon wrote:
> Hi everybody!
>
> I can't put a unicode string into Riak. I was following the riak-erlang-client
> docs, and this doesn't work:
>
> Object = riakc_obj:new(<<"snippet">>, <<"odam">>, <<"Одамлардан тинглаб
> хикоя">>).
> ** exception error: bad argument
That'll do it - thanks!
Tim
-----Original Message-----
From: "Aphyr"
Sent: Sunday, March 4, 2012 10:57pm
To:
Cc: riak-users@lists.basho.com
Subject: Re: Questions on configuring public and private ips for riak on ubuntu
ssh -NL 8098:localhost:8098 your.vps.com
--Kyle
On 03/04/2012 09:55 PM,
I am new to the MapReduce technique. I have been using the Riak-Java-Client.
1. I wish to know how I can write my own Map functions and Reduce
functions for querying. Where do they have to be stored if I want to use them
the way NamedErlangFunctions/NamedJSFunctions are used? How should reduce
functions
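In case it helps while you track down the docs: named Erlang functions are
ordinary exported functions compiled into a module whose .beam sits on every
Riak node's code path (e.g. via add_paths in the riak_kv section of
app.config); the client then refers to them by module and function name.
A rough sketch, with a hypothetical module name:

%% mr_example.erl -- hypothetical module; compile it, put the .beam on each
%% node's code path, and reference module/function from the Java client
-module(mr_example).
-export([map_value/3, reduce_count/2]).

%% Map phase: called once per object, returns a list of results.
map_value(RiakObject, _KeyData, _Arg) ->
    [riak_object:get_value(RiakObject)].

%% Reduce phase: called with a batch of accumulated results, returns a list.
reduce_count(Values, _Arg) ->
    [length(Values)].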
Adam,
The reason you don't see this code in Search is because Search uses a hook
to index the incoming objects (i.e. KV passes the data _to_ Search).
What you are looking for is a local Riak client. You can create one via
`{ok, C} = riak:local_client()` [1]. That will give you a client to the local node.
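For illustration (bucket and key are made up), the client you get back is the
parameterized riak_client module of that era, so basic reads and writes look
roughly like this from `riak attach` or node-local code:

%% minimal sketch, assuming Riak 1.x's parameterized riak_client
{ok, C} = riak:local_client(),
{ok, Obj} = C:get(<<"bucket">>, <<"key">>),
NewObj = riak_object:new(<<"bucket">>, <<"other_key">>, <<"value">>),
ok = C:put(NewObj, 1).  %% second argument here is the W value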
Hi,
I'm trying to run the basho bench tool against my Riak cluster using the
http_raw driver, and am coming across the following error:
=ERROR REPORT==== 5-Mar-2012::13:58:09 ===
** Generic server <0.430.0> terminating
** Last message in was {send_req,
{{url,undefine
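For comparison, an http_raw run is configured roughly as below; these key
names are from memory, so do double-check them against examples/httpraw.config
in your basho_bench checkout:

%% sketch of a basho_bench config for the http_raw driver
{mode, max}.
{duration, 10}.
{concurrent, 5}.
{driver, basho_bench_driver_http_raw}.
{key_generator, {uniform_int, 10000}}.
{value_generator, {fixed_bin, 1000}}.
{http_raw_ips, ["127.0.0.1"]}.
{http_raw_port, 8098}.
{http_raw_path, "/riak/test"}.
{operations, [{get, 1}, {update, 1}]}.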
Hi,
Has anyone dealt with this error:
"Protocol: ~p: register error:
~p~n",["inet_tcp",{{badmatch,{error,duplicate_name}}"
http://hastebin.com/wucipobiho - someone else having the same issue
For some reason, and I'm not completely sure of the root cause (something to do
with
Hi Telmo,
To answer one of your questions ...
On Mon, Mar 5, 2012 at 1:17 AM, Telmo Menezes wrote:
> Hi everyone,
>
> Sorry if this has been answered before, but I'm a newcomer and can't
> find a way to search the archives.
>
http://riak.markmail.org is probably the best way to search the archives.
On Mar 5, 2012, at 1:17 AM, Telmo Menezes wrote:
> So I have the situation where Riak objects contain a list of keys
> referencing other objects. There are cases where objects contain a
> very high number of such references. At some point, it becomes
> unreasonable to just store this entire list
Hello!
I have a riak cluster and I'm seeing a write fail rate of 10% to 30%
(varies with the nodes). At the moment I am writing about
300 new objects per second to the same bucket. If I direct the write to
a new (empty) bucket the problem goes away and I don't see any failure.
The non-empty bucket
Originally, I had planned to map each of my items to their own key.
This was foolish as I estimate that I'll have around 6 billion keys, and
this simply won't fit into memory.
My next plan of attack is to store a collection of items under a given key,
approximately 1 million keys, each with 6000 values.
On Mar 5, 2012, at 7:09 PM, Eric Siegel wrote:
> Originally, I had planned to map each of my items to their own key.
> This was foolish as I estimate that I'll have around 6 billion keys, and this
> simply won't fit into memory.
This is only an issue if you're using bitcask (which is the default backend)
Hi Marco,
On Mon, Mar 5, 2012 at 7:51 PM, Marco Monteiro wrote:
> I have a riak cluster and I'm seeing a write fail rate of 10% to 30%
> (varies with the nodes). At the moment I am writing about
> 300 new objects per second to the same bucket. If I direct the write to
> a new (empty) bucket the
>
>
>
> >
> > My next plan of attack is to store a collection of items under a given key,
> > approximately 1 million keys, each with 6000 values.
>
> This sounds cumbersome.
>
Yes, it is true that I will have to deal with a whole bunch of sibling
resolution and merging, but on the plus side, doing
range queries
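For what it's worth, here's a minimal sketch of what that fetch-merge-write
cycle could look like with the Erlang PB client; the bucket/key names, and
storing each value as a term_to_binary'd ordset, are just assumptions for
illustration:

%% fetch the object, union all sibling values, write the merged set back
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
{ok, Obj} = riakc_pb_socket:get(Pid, <<"items">>, <<"key-000001">>),
Sets = [binary_to_term(V) || V <- riakc_obj:get_values(Obj)],
Merged = ordsets:union(Sets),
Obj1 = riakc_obj:update_value(Obj, term_to_binary(Merged)),
ok = riakc_pb_socket:put(Pid, Obj1).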
Hi, David!
On 6 March 2012 04:37, David Smith wrote:
> 1. What sort of error are you getting when a write fails?
>
I'm using riak-js and the error I get is:
{ [Error: socket hang up] code: 'ECONNRESET' }
> 2. What backend are you using? (I'm guessing LevelDB)
>
LevelDB. The documentation says
LevelDB will compress on disk via Google's Snappy compression routines. I think
that's the only Riak backend that does compression.
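If it's useful to anyone following the thread, pointing a node at LevelDB is
an app.config change along these lines; the data_root path is just an example:

%% excerpt from etc/app.config -- switch the storage backend to eleveldb
{riak_kv, [
    {storage_backend, riak_kv_eleveldb_backend}
]},
{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"}
]},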
---
Jeremiah Peschka - Managing Director, Brent Ozar PLF, LLC
Microsoft SQL Server MVP
On Mar 5, 2012, at 8:47 PM, Eric Siegel wrote:
>
>
> >
> > My next plan of
Evening, Morning, Afternoon to All -
Short Recap for today: new code, videos, and upcoming meetups in
Buffalo and Vancouver.
(Also, I'll be on vacation up through next Tuesday, so if anyone wants
to get ambitious and put together Recaps for later this week, you'll
gain instant mailing list celebrity
You can't use a unicode expression this way from the console.
Try the same expression written in an Erlang module.
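For the archives, the usual workaround is to make sure the value handed to
riakc_obj:new/3 is already a UTF-8 encoded binary, e.g. via
unicode:characters_to_binary/1 (or a /utf8 binary segment inside a module).
A rough sketch, assuming your editor/terminal delivers the code points intact;
bucket and key are taken from the original message:

%% encode the code points to a UTF-8 binary first, then store that
Value = unicode:characters_to_binary("Одамлардан тинглаб хикоя"),
Object = riakc_obj:new(<<"snippet">>, <<"odam">>, Value).
%% inside a module (with the right source encoding) this also works:
%% Object = riakc_obj:new(<<"snippet">>, <<"odam">>, <<"Одамлардан тинглаб хикоя"/utf8>>).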
2012/3/5 Steve Vinoski
>
>
> On Sun, Mar 4, 2012 at 4:51 PM, Buri Arslon wrote:
>
>> Hi everybody!
>>
>> I can't put a unicode string into Riak. I was following the
>> riak-erlang-client doc