On 7 Jun 2012, at 22:55, Guido Medina wrote:
> Everything points to 32 bits, at least on the Java client side (indexes can be of
> type Integer, not Long, which is 64 bits). Look at RiakIndex.java; that
> will give you some answers.
That's a mistake on the part of the client developer at that time.
On 27 Jun 2012, at 11:50, Yousuf Fauzan wrote:
> Its not about the difference in throughput in the two approaches I took.
> Rather, the issue is that even 200 writes/sec is a bit on the lower side.
> I could be doing something wrong with the configuration because people are
> reporting throughp
On 27 Jun 2012, at 12:05, Yousuf Fauzan wrote:
> I did use basho bench on my clusters. It showed throughput of around 150
Could you share the config you used, please?
>
> On Wed, Jun 27, 2012 at 4:24 PM, Russell Brown wrote:
>
> On 27 Jun 2012, at 11:50, Yousuf Fauzan wrote:
ons, [{get, 1}, {update, 1}]}.
>
>
> On Wed, Jun 27, 2012 at 4:37 PM, Russell Brown wrote:
>
> On 27 Jun 2012, at 12:05, Yousuf Fauzan wrote:
>
>> I did use basho bench on my clusters. It showed throughput of around 150
>
> Could you share the config you used, pl
feel for the sizing of any connection pools for your
application.
You can also see how adding nodes and adding workers affects your results to
help you size the cluster you need for your expected usage.
Cheers
Russell
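For anyone following along, a minimal basho_bench config of the sort discussed in this thread might look like the following (the addresses, durations and generators are illustrative only; check the examples shipped with your basho_bench version for the exact driver options it supports):

```erlang
%% Illustrative basho_bench config for the riakc_pb driver.
%% All values here are examples, not recommendations.
{mode, max}.
{duration, 5}.                         %% run for 5 minutes
{concurrent, 10}.                      %% 10 worker processes
{driver, basho_bench_driver_riakc_pb}.
{riakc_pb_ips, [{127,0,0,1}]}.
{key_generator, {int_to_bin_bigendian, {uniform_int, 10000}}}.
{value_generator, {fixed_bin, 1000}}.  %% 1KB values
{operations, [{get, 1}, {update, 1}]}.
```

Varying `concurrent` (workers) and the node list is how you get the sizing feel described above.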
>
> On Wed, Jun 27, 2012 at 4:40 PM, Russell Brown wrote:
>
>
Hi Kaspar,
Sorry for the slow reply.
On 16 Jul 2012, at 07:49, Kaspar Thommen wrote:
> Anyone please?
>
> On Jun 26, 2012 8:57 PM, "Kaspar Thommen" wrote:
> Hi,
>
> The high-level API in the Java client library (IRiakClient) does not allow
> one to use byte[] arrays as keys, only Strings, wh
On 16 Jul 2012, at 16:33, Senthilkumar Peelikkampatti wrote:
> I tried both git master and the riak 1.2 "zip" download to see if it would run.
>
> I end up always getting the following error, any idea what I did wrong?
You did nothing wrong; this is a bug. There seems to be an issue with the start
Hi Sergey,
First, sorry for missing your first post. I just didn't see it.
I'll try and answer your questions.
> 1. Why separate gen_servers (riak_api_stat, riak_core_stat,
> riak_kv_stat) were used to gather statistics instead of the direct calls to
> folsom_metrics through some more hig
On 14 Sep 2012, at 14:24, Deepak Balasubramanyam wrote:
> Hi,
>
> I've written a map reduce query on the riak java client like so...
>
> client.mapReduce(BUCKET).addKeyFilter(keyFilter)
> .addLinkPhase(BUCKET, "_", false)
> .addMapPhase(new NamedJSFunctio
On 19 Sep 2012, at 14:54, Ingo Rockel wrote:
> Hi Mark,
>
> thanks for looking into this.
>
> I'm using the riak-java-client for accessing riak,
Are you using HTTP or PB, please? And can you let me know the version, too?
Thanks
Russell
> the bucket is created with the following code:
>
> m
ould add something about this into the java client documentation.
>
> Ingo
>
> On 19.09.2012 at 16:10, Ingo Rockel wrote:
>> Hi Russell,
>>
>> I'm using riak 1.2 and the client version is version 1.0.5 and
>> concerning HTTP and PB, I tried both.
>>
Hey,
'eaddrinuse' from your previous mail suggests that the address you're binding to
is already in use. Maybe riak is already running? Or something else is bound to
the address?
Cheers
Russell
On 11 Dec 2012, at 19:06, Kevin Burton wrote:
> Any more information on this or something I can do
On 12 Dec 2012, at 19:20, David Fox wrote:
> Hey everyone,
>
> I'm currently using riak_kv as a reference of how to implement riak_core and
> see that whenever a new coordinator process is needed, a new one is created
> via their supervisor. But in the case of the get and put coordinators, th
Hi Dave,
On 16 Jan 2013, at 11:29, Dave Brady wrote:
> Greetings,
>
> I won't bore everyone with details here: the short story is I ran "riak-admin
> cluster leave/plan/commit" to remove a node and got a lot of grief from our
> five-node ring.
>
> The ring was pretty well de-stabilized. On
Hi Petter,
> Hi
>
> I would like to use Riak, but counters are a must in this project. They are so
> handy :-)
>
> I tried to build the riak-dt, but the build-system seems broken. I get this
> when i do make devrel:
>
> ERROR: generate failed while processing /Users/petter/erlang/riak/rel:
>
)
>
> Petter
>
>
> ________
> From: Russell Brown [russell.br...@me.com]
> Sent: 25 January 2013 19:52
> To: Petter Egesund
> Cc: riak-users List
> Subject: Re: riak_dt
>
> Hi Petter,
>
>> Hi
>>
>> I wo
Hi,
On 13 Feb 2013, at 07:37, Bogdan Flueras wrote:
> Hello all,
> I've got a 5 node cluster with Riak 1.2.1, all machines are multicore,
> with min 4GB RAM.
>
> I want to insert something like 50 million records in Riak with the java
> client (Protobuf used) with default settings. I've tried
configured to spread the load
> across all nodes.
>
> Thanks, I'll have a deeper look into the API and let you know about my
> results.
>
> ing. Bogdan Flueras
>
>
>
> On Wed, Feb 13, 2013 at 10:02 AM, Russell Brown wrote:
> Hi,
>
> On 13 Feb 2013, at
ras
> wrote:
> Each thread has its own bucket instance (pointing to the same location) and
> I don't re-fetch the bucket per insert.
> Thank you very much!
>
> ing. Bogdan Flueras
>
>
>
> On Wed, Feb 13, 2013 at 10:14 AM, Russell Brown wrote:
>
>
On 20 Feb 2013, at 14:35, Theo Bot wrote:
> Hi
>
> It's not that I want to use the erlang client. It's just that I want to know
> how to create HTTP queries to maintain the secondary indexes.
Ah, OK.
Sorry for the confusion. Updating the indexes is just like updating a value or
any other obj
Hi,
Thanks for trying Riak.
On 21 Feb 2013, at 23:48, Belai Beshah wrote:
> Hi All,
>
> We are evaluating Riak to see if it can be used to cache large blobs of data.
> Here is our test cluster setup:
>
> • six Ubuntu LTS 12.04 dedicated nodes with 8 core 2.6 Ghz CPU, 32 GB
> RAM, 3.6T
r the 99th+
> percentiles of requests.
>
>
> On 22.02.2013, at 09:24, Russell Brown wrote:
>
>> It is fixed in master, but it doesn't look like it made it into 1.3.0. If
>> you're ok with building from source, I tried it and a patc
riday, March 01, 2013 5:40 AM
> To: Belai Beshah
> Cc: Jared Morrow; riak-users@lists.basho.com; Russell Brown
> Subject: Re: Understanding read_repairs
>
> Interesting. What does the failure look like?
>
> Kresten
>
> On Feb 27, 2013, at 11:25 PM, Belai Beshah
>
That is indeed a bug. I guess you would have seen it in 1.3.0 too, if you
called the /stats endpoint.
I've opened an issue here https://github.com/basho/riak_kv/issues/528
Thanks for letting us know; I'll get a patch out as soon as I can.
Cheers
Russell
On 5 Apr 2013, at 13:15, Chris Read wrote:
Hi Jeff,
On 10 Apr 2013, at 02:54, Jeff Peck wrote:
> Hello,
>
> In Riak, is it possible to retrieve all of the keys of an index? I do not
> want the object keys in this case, but rather the actual index keys.
I think this is covered by a feature I'm adding, if I understand what you're
askin
On 17 Apr 2013, at 08:54, Mattias Sjölinder wrote:
> Thanks for your help. Your query returned the same number over and over again
> just as expected.
>
> I think I have found the reason for my problem though. The client lib
> CorrugatedIron seems to wrap each document in the MapReduce resul
On 30 Apr 2013, at 09:47, Daniel Iwan wrote:
> When doing migration from pre-1.3.1 do I run
>
> riak-admin reformat-indexes [] []
>
> on every node that is part of the cluster or just one and then it magically
> applies change to all of them? Changelog says:
> Riak 1.3.1 includes a utility,
Hi,
That warning means that riak failed to calculate the leveldb read block error
count stat.
This is caused by a bug fixed in 1.3.2. The stat code picks a random vnode from
1 to num_partitions on the node and asks it for the read block error stat. If
your node has 1 or fewer partitions this er
Hi,
On 21 Jul 2013, at 02:09, Siraaj Khandkar wrote:
> I (sequentially) made 146204 inserts of unique objects to a single bucket.
> Several secondary indices (most with unique values) were set for each object,
> one of which was "bucket" = BucketName (to use 2i for listing all keys).
There is
On 21 Jul 2013, at 14:20, Siraaj Khandkar wrote:
> On 07/21/2013 07:24 AM, Russell Brown wrote:
> > Hi,
> >
> > On 21 Jul 2013, at 02:09, Siraaj Khandkar wrote:
> >
> >> I (sequentially) made 146204 inserts of unique objects to a single
> >> bucket. S
On 21 Jul 2013, at 18:34, jerryjyuan wrote:
> I am trying to run some Java client benchmark testing with Basho Bench by
> following the Riak document:
> http://docs.basho.com/riak/latest/references/appendices/Java-Client-Benchmark/
>
> The Basho Bench tool configuration file for this need requi
On 21 Jul 2013, at 19:15, Siraaj Khandkar wrote:
> On 07/21/2013 04:54 PM, Russell Brown wrote:
>>
>> On 21 Jul 2013, at 14:20, Siraaj Khandkar wrote:
>>
>>> On 07/21/2013 07:24 AM, Russell Brown wrote:
>>>> Hi,
>>>>
>>>> On 21
Hi,
Although I haven't finished the write up for GC, here is the RFC for CRDTs in
Riak https://github.com/basho/riak/issues/354
Please let me know what you think on the Github issue.
Many thanks
Russell
___
riak-users mailing list
riak-users@lists.bas
Hi Sean,
I'm very sorry to say that you've found a featurebug.
There was a fix put in here https://github.com/basho/riak_core/pull/332
But that means that the default timeout of 60 seconds is now honoured. In the
past it was not.
As far as I can see the 2i endpoint never accepted a timeout argu
index_eq_path(bucket, index, query, 'timeout' => '26')
> else
> raise ArgumentError, t('invalid_index_query', :value =>
> query.inspect)
> end
> response = get(200, path)
> JSON.parse(response[:body])
ZWVjMmNiZjY3Y2Y4YmU3ZTVkMWNiZTVjM2ZkYjg2YWU0MGIwNzNjMTE3NDYyZjEzMTNlMDQ5YmI2ZQ=="}
>
> The same keys and continuation value are returned regardless of whether my
> request contains a continuation value. I've tried swapping the order of
> max_results and continuation without any
Hi Yan,
Another 2i bug in 1.4
I've raised an issue here[1], the fix is very simple[2]. We're getting together
a few fixes this week, and expect to cut a 1.4.1 very soon.
This issue only affects range query inputs to MR ($key is unaffected, as are
equals queries).
Sorry for the trouble, fixes
Hi Lucas,
I'm sorry, as easy as it would have been to add with the latest changes, we
just ran out of time.
It is something I'd love to add in future. Or maybe something a contributor
could add? (Happy to advise / review.)
Many thanks
Russell
On 31 Jul 2013, at 02:04, Lucas Cooper wrote:
>
On 2 Aug 2013, at 16:56, João Machado wrote:
> Hi Sean,
>
> Thanks for your quick response. If I follow the steps from Sam, it works as
> expected. I tried the same steps but with my own bucket (and data) and it
> worked too. The difference between what I was trying and what Sam did was
> be
Are you using riak search?
On 9 Aug 2013, at 18:17, Lucas Cooper wrote:
> I had a crash of an entire cluster early this morning, I'm not entirely sure
> why, seems to be something with the indexer or Protocol Buffers (or my use of
> them). Here are the various logs: http://egf.me/logs.tar.xz
>
Not that it answers your immediate need, but I thought I'd point you at this
post where Brian Lee Yung Rowe attempts to integrate Riak MapReduce and R.
http://cartesianfaith.com/2011/08/17/teaser-running-r-as-a-mapreduce-job-from-riak/
On 26 Sep 2013, at 05:33, jeffrey k elia
I'd very much like to see the same thing.
I have a working branch and test here https://github.com/basho/riak_kv/pull/688
and https://github.com/basho/riak_test/tree/feature/rdb-sib-ex
This isn't using the DVVSets but a sort of rough hack, where we store the event
dot for each write in the meta
Hi Wes,
The client application does not need to perform a read before a write; the riak
server must read from disk before updating the counter. Or at least it must
with our current implementation.
What PRs did you have in mind? I'm curious.
Oh, it looks like Sam beat me to it…to elaborate on h
been doing extensive
> benchmarking around distributed counters. Are there pre-existing benchmarks
> out there that I can measure myself against? I haven't stumbled across many
> at this point, probably because of how new it is.
>
> Cheers,
> Wes
>
>
> On Thu, Oct
Hi Daniil,
On 17 Oct 2013, at 16:55, Daniil Churikov wrote:
> Correct me if I'm wrong, but when you blindly update without a previous read,
> you create a sibling, which should be resolved on read. If you make a lot of
> increments to a counter and rarely read it, this will lead to siblings
> e
On 17 Oct 2013, at 17:21, Jeremiah Peschka wrote:
> When you 'update' a counter, you send in an increment operation. That's added
> to an internal list in Riak. The operations are then zipped up to provide the
> correct counter value on read. The worst that you'll do is add a large(ish)
> num
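The model Jeremiah describes above, a counter kept as a list of operations that are folded together on read, can be sketched with a toy class (this illustrates the general idea only; it is not Riak's riak_kv_pncounter implementation, and the class name is made up):

```python
# Toy model of a counter stored as a list of increment operations.
# A sketch of the idea only, NOT Riak's actual implementation.

class OpListCounter:
    def __init__(self):
        self.ops = []  # recorded increment/decrement operations

    def update(self, amount):
        # A blind write: just record the operation, no read required.
        self.ops.append(amount)

    def read(self):
        # The operations are "zipped up" (folded) into the value on read.
        return sum(self.ops)

c = OpListCounter()
c.update(5)
c.update(-2)
c.update(1)
print(c.read())  # -> 4
```

The worst case, as noted above, is that the operation list grows large between reads.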
Hi Louis-Philippe,
It costs to create secondary indexes. Nothing is free.
But I'm not sure what "1000 different secondary indexes" means. When you add
secondary indexes we store the name of the index and the value it indexes as
part of the object metadata and on disk in an index.
I'm sure our P
Yes! Sorry!
We broke backwards compatibility during development. We merged the patch
today[1]. The develop branch works right now. It will get into the next pre (or
whatever the next tag is called.)
Apologies again, it was easier for me to do the new work without thinking about
backwards compa
loper for Apache Hadoop
>
>
> On Tue, Oct 22, 2013 at 2:07 PM, Russell Brown wrote:
> Yes! Sorry!
>
> We broke backwards compatibility during development. We merged the patch
> today[1]. The develop branch works right now. It will get into the next pre
> (or whatever th
Hi Georgi,
All of Guido’s advice (below) is good. If you are just importing unique items, I
would set the bucket property to LWW=true for the import; it will be much
faster, since Riak will not do N local reads for vclock data.
Cheers
Russell
On 29 Oct 2013, at 15:21, Guido Medina wrote:
> Your
Hi Mark,
It is pretty easy.
Set your bucket to allow_mult=true.
Send a put to bucket, key.
Send another one to the same bucket key.
If you’re using a well behaved client like the Riak-Java-Client, or any other
that gets a vclock before doing a put, use whatever option stops that.
With the pb c
Riak 2.0 will support this with the new Map data type. Until then, I’m afraid a
key is either a counter or a riak object.
On 8 Nov 2013, at 19:49, Mark A. Basil, Jr. wrote:
> Just a thought. It would be handy if one could add many named counters per
> bucket/key to more closely handle a shop
lly with larger objects).
>>>
>>> It's not clear from the docs if there are any limitations, will the maximum
>>> object size be the limitation:?
>>>
>>> A section of the docs[1] comes to mind:
>>>
>>> "Having an enormous object
Hi Dave,
Are you sure that "No queries are running"?
The log you posted shows the index coverage fsm running as well as the
streaming merge sort buffer.
My guess would be you have some (many?) 2i queries with large page_size set on
the results and a slow vnode causing all the results to be buf
To add to what Eric said: I don’t know Titan, but the phrase “atomic edge
operations” suggests they require properties from the datastore that Riak’s
eventually consistent datatypes can’t satisfy.
All that said, we have been asked by a customer to look at integration with
Titan, and when time perm
Hi Mark,
The Counter is just a riak_object under the hood (at the Riak end of things);
to the erlang client, though, it is modelled as an integer and operations on an
integer.
We’ll get around to the README, sorry about that.
Using the counter is pretty simple. First you need to set whatever bu
Hi Massimiliano,
Answers (and explanations) inline below:
On 16 Dec 2013, at 12:20, Massimiliano Ciancio wrote:
> Hi all,
> I've two questions about new Riak 2.0 sets:
> 1) how many elements can be added to a set? Are 1-10-100 millions of
> keys, about 20-30 chars each, reasonable for a single
Hi,
Can you describe your use case a little? Maybe it would be easier for us to
help.
On 18 Dec 2013, at 04:32, Viable Nisei wrote:
> On Wed, Dec 18, 2013 at 8:32 AM, Erik Søe Sørensen wrote:
> It really is not a good idea to use siblings to represent 1-to-many
> relations. That's not what i
Hi James,
We’re working on docs. There are some edocs at the top of riak_kv_wm_crdt that
describe the HTTP API, that I’ve put on DropBox here
https://www.dropbox.com/s/bcdn2q2owgv4jxl/riak_kv_wm_crdt.html, though we are
still pre-freeze on this code, so APIs change.
As for the PB messages, it
Hi Georgio,
With data that small I doubt there is a difference in perf.
Can I get some more info, please?
Are you getting 2400 reqs a second against a single key? What backend are you
using? What is the spec of the machines? Are they real or on some cloud?
Network?
Is this perf figure agains
for_reqid/3
It doesn’t look like Riak is doing a whole lot at this point. I’ll speak with
my colleagues when they get online for some more guidance, as I’m not sure what
numbers you should expect for a single key. Are you going to try with 100s,
1000s, millions of keys (which is fa
ocs that is worth a read
http://docs.basho.com/riak/1.4.0/cookbooks/Linux-Performance-Tuning/
Hopefully I can get some more concrete answers for you later in the day.
Cheers
Russell
>
>
>
>
> On Fri, Dec 20, 2013 at 9:08 PM, Russell Brown wrote:
> Thanks Georgio,
>
>
Hi,
On 20 Dec 2013, at 23:16, Jason Campbell wrote:
>
> - Original Message -
> From: "Andrew Stone"
> To: "Jason Campbell"
> Cc: "Sean Cribbs" , "riak-users"
> , "Viable Nisei"
> Sent: Saturday, 21 December, 2013 10:01:29 AM
> Subject: Re: May allow_mult cause DoS?
>
>
>> Think of
Hi Elias,
Answers inline below:
On 20 Jan 2014, at 19:31, Elias Levy wrote:
>
> On Sun, Jan 19, 2014 at 9:00 AM, wrote:
> From: Luc Perkins
> * Reduced sibling creation, inspired by the dotted versions vectors research
> from Preguiça, Baquero, et al[1]
>
> [1] http://arxiv.org/abs/1011.58
On 20 Jan 2014, at 20:35, Elias Levy wrote:
> On Mon, Jan 20, 2014 at 12:14 PM, Russell Brown wrote:
> Longer answer: Riak gave users the option of client or _vnode_ ids in version
> vectors. By default Riak uses vnode ids. Riak erred on the side of caution,
> and would
On 23 Jan 2014, at 20:51, Eric Redmond wrote:
> For version 1.4 counters, riak_kv_pncounter. For 2.0 CRDT counters,
> riak_dt_pncounter.
As in, if the data was written in 1.4, or in 2.0 using the legacy, backwards
compatible 1.4 API endpoints, then the type is riak_kv_pncounter. If the counter
ably
only want the Value bit. So:
{{_Ctx, Count}, _Stats} = riak_kv_crdt:value(RiakObject, riak_kv_pncounter),
Should be what you need, let me know if that works, please?
Cheers
Russell
>[ {Key, Count} ].
>
> What am I doing wrong? I can't seem to figure it out... I
On 25 Jan 2014, at 18:50, Daniel Iwan wrote:
> How "heavy" are those two operations for a Riak cluster of 3-5
> nodes?
> Listing all keys and filtering on client side is definitely not recommended
> but is 2i query via $key for given bucket equally heavy and not recommended?
It is a
On 29 Jan 2014, at 09:57, Edgar Veiga wrote:
> tl;dr
>
> If I guarantee that the same key is only written with a 5 second interval, is
> last_write_wins=true profitable?
It depends. Does the value you write depend in any way on the value you read, or
is it always that you are just getting a t
On 29 Jan 2014, at 11:27, Guido Medina wrote:
> Hi,
>
> We are using Riak Java client 1.4.x and we want to copy all counters from
> cluster A to cluster B (all counters will be stored on a single to very few
> buckets), if I list the keys using special 2i bucket index and then treat
> each k
Oh damn, wait. You said 1.4.*. There might, therefore, be siblings; do a counter
increment before the copy to ensure siblings are resolved (if you can). Or use
RiakEE MDC.
On 29 Jan 2014, at 11:27, Guido Medina wrote:
> Hi,
>
> We are using Riak Java client 1.4.x and we want to copy all counte
ew value.
>
> Best regards
>
>
> On 29 January 2014 10:10, Russell Brown wrote:
>
> On 29 Jan 2014, at 09:57, Edgar Veiga wrote:
>
>> tl;dr
>>
>> If I guarantee that the same key is only written with a 5 second interval,
>> is last_write_wins=
certain we have the latest version so for us last_write_wins...
> Regards,
>
> Guido.
>
> On 30/01/14 10:46, Russell Brown wrote:
>>
>> On 30 Jan 2014, at 10:37, Edgar Veiga wrote:
>>
>>> Also,
>>>
>>> Using last_write_wins = true,
Hi Elias,
This is a great time for you to ask, if you’re asking what I think you’re
asking.
On 8 Feb 2014, at 22:35, Elias Levy wrote:
> Does Basho have any plans for implementing a CRDT that maintains the minimum
> or maximum value for an integer? It would come in handy in our application
ics counters that can decrement.
Ah, in that case, see Sean’s message (yes trivial, yes planned.) Sorry for the
misunderstanding; I’ve been thinking about bounded counters, so that context made
me misconstrue the question. Oops.
>
> Then again, I could be wrong.
>
> - Original Messag
And you only need to send the context object if you’re removing things, so if
you can partition your work between adds and removes, you can have more
efficient adds.
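The add/remove asymmetry described above comes from observed-remove set semantics: an add just mints a fresh tag, while a remove must name the tags it has observed (the "context"). A toy sketch of that idea (an illustration only, not riak_dt's actual implementation; the names are made up):

```python
import uuid

# Toy observed-remove set: adds need no context, removes do.
# A sketch of the general OR-set idea, NOT riak_dt's code.

class ORSet:
    def __init__(self):
        self.entries = {}  # element -> set of unique tags

    def add(self, element):
        # Adds are cheap: mint a fresh tag, no context required.
        self.entries.setdefault(element, set()).add(uuid.uuid4())

    def context(self, element):
        # The "context": the tags this replica has observed so far.
        return set(self.entries.get(element, set()))

    def remove(self, element, ctx):
        # Remove only the observed tags; an unobserved add survives.
        tags = self.entries.get(element)
        if tags is None:
            return
        tags -= ctx
        if not tags:
            del self.entries[element]

    def value(self):
        return set(self.entries)

s = ORSet()
s.add("a")
ctx = s.context("a")
s.add("a")          # a re-add after the context was taken
s.remove("a", ctx)
print(s.value())    # -> {'a'}: the unobserved add wins
```

This is why batching adds separately from removes is cheaper: only the remove batch needs to fetch a context first.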
On 4 Mar 2014, at 16:27, Sam Elliott wrote:
> Yes, batch your updates, it'll be much more efficient that way.
>
> Do not try to
On 13 Mar 2014, at 13:27, EmiNarcissus wrote:
> Hi Dear basho team,
>
> …CRDT currently is not really usable via HTTP…
Can you let us know what you think is missing from the CRDT HTTP API, please? I
thought it was ‘done’ in the more recent 2.0pre releases?
>
> --
> Best Regards
> Tim Le
Hey James,
I haven’t analysed the complexity of the data types. Off hand I know that
operations on Maps, Sets, Counters etc are not O(n). Merges sometimes will be,
if every entry must be compared, and we’re looking at ways to optimise this.
The `value` operation on Maps and Sets must be O(n) si
>
>
>
>
> In answer to your question on performance, I believe the switch up to pre20
> should resolve the the poor add performance for us, and overall I've found it
> to be quite impressive so far.
>
> --james
>
>
>
>
>
>
>
Hi David,
Sorry about the hokey-cokey on this.
In 2.0, allow_mult=false is the default for default/untyped buckets. That is
to support legacy applications, and rolling upgrades, with the least surprise.
allow_mult=true is the default for typed buckets, as we think this is the correct
way to run Riak.
On 2 Apr 2014, at 09:21, David James wrote:
> What versions(s) of Riak have allow_mult=true by default? Which ones have
> allow_mult=false by default?
All released versions of Riak have allow_mult=false for default buckets. All
released versions of Riak only have default buckets.
2.0 will have
HTTP or PB? Pretty sure the HTTP client defaults to a pool of 50 connections.
On 14 Apr 2014, at 16:50, Sean Allen wrote:
> We fire off 100 requests for the items in the batch and wait on the futures
> to complete.
>
>
> On Mon, Apr 14, 2014 at 11:40 AM, Alexander Sicular
> wrote:
> I'm not
How large is your max_results size for 2i queries, or, if you’re not using
pagination, what do you estimate the result size is?
Do you require sorted results? If you don’t, and you’re not using pagination,
1.4.8 might solve your issues since it doesn’t buffer and sort results in
memory (see Sor
improvements. I'll look into upgrading,
> too.
>
> Thanks again!
>
> --
> Dave Brady
>
> - Original Message -
> From: "Russell Brown"
> To: "Dave Brady"
> Cc: riak-users@lists.basho.com
> Sent: Friday, May 30, 2014 11:33:4
Hi,
For this type of issue, client and server versions are very useful; please include them.
Russell
On 23 Jun 2014, at 03:55, japhar81 wrote:
> Hello all,
> I've been banging my head against the PBC API all weekend and I can't seem to
> figure out why I'm getting an error. I'm able to call RpbListBuckets (
Hi Mark,
Answers inline below.
On 23 Jun 2014, at 15:58, Mark Richard Thomas wrote:
> Hello
>
> Why does the search-cmd return a “Could not read” error?
>
> · search-cmd show-schema mybucket > /root/schema.txt
>
> · search-cmd set-schema mybucket /root/schema.txt
>
> ::
There is an akka implementation here
https://github.com/patriknw/akka-datareplication
LWW-Set and LWW-Register should be pretty easy to make, if you can’t use the
akka code.
On 1 Jul 2014, at 10:45, David Lopes wrote:
> Hi,
>
> do you know where I can find some CRDTs Java implementation of L
va ? Look at source code?
>
> Thanks,
> Mohan
>
>
> On Tue, Jul 1, 2014 at 4:45 PM, Russell Brown wrote:
> There is an akka implementation here
> https://github.com/patriknw/akka-datareplication
>
> LWW-Set and LWW-Register should be pretty easy to make, if you ca
Hi Bryce,
Single node?
I’m pretty surprised by this, in 1.4 every write to a counter resolves the
siblings at the coordinating vnode. Do you know what your sibling limit is set
too? How are you typically using counters in your application? Are you reusing
the same bucket/key for non-counter obj
Hi Simon,
So the earlier “this is on wheezy, rest are on squeeze” thing is no longer a
factor?
Any and all 2i repair you do ends with the same error?
Cheers
Russell
On 30 Jul 2014, at 07:29, Effenberg, Simon wrote:
> I tried it now with one partition on 6 different machines and everywhere t
tal partitions: 1
>Finished partitions: 1
>Speed: 100
>Total 2i items scanned: 0
>Total tree objects: 0
>Total objects fixed: 0
> With errors:
> Partition: 34253944624943037145398863266787883273185918976
> Error: index_scan_timeout
>
>
> 2014-07-30 06:16:09.154 UT
Great. Thanks Russell..
>>
>> if you need me to do something.. feel free to ask.
>>
>> Cheers
>> Simon
>>
>> On Wed, Jul 30, 2014 at 10:19:56AM +0100, Russell Brown wrote:
>>> Thanks Simon,
>>>
>>> I’m going to spend some tim
Have you included the deps/riak_pb/ebin in your erlang path?
On 29 Sep 2014, at 17:18, Jon Brisbin wrote:
> I’m trying to use the riak-erlang-client from master to talk to Riak 2.0 and
> I’m getting an error when I use the test commands found in the README:
>
> (rabbit@localhost)1> {ok, Pid} =
Hi Alexander,
I think you are deleting data-types the proper way.
What is your `delete_mode` setting, please?
I would guess that the sibling you are seeing is a tombstone, which suggests you
have some concurrent update with the delete.
You will only ever have a single CRDT sibling, and 1 (or possi
On 2 Oct 2014, at 17:59, Igor Senderovich wrote:
> There are no other errors in any of the logs at exactly the same time but
> there are periodic errors in error.log and console.log of the following form
> (and these occurred seconds before and after the crash):
>
>
> ** Reason for terminati
Hi,
Sorry to say that Maps are just riak objects underneath, so the same size limit
applies. Anything over 1MB and you'll start to feel the pain.
Cheers
Russell
On 14 Nov 2014, at 19:34, Mark Rechler wrote:
> Hi Everyone,
>
> I'm curious if anyone has had experience using maps as containers
Did you check that you had an intermediate value after the second write as
expected? Do you have allow_mult set to true?
On 23 Dec 2014, at 03:20, Claudio Cesar Sanchez Tejeda
wrote:
> Hi,
>
> On Mon, Dec 22, 2014 at 11:54 PM, Alexander Sicular
> wrote:
>> Same client code writing to all 5
On 29 Dec 2014, at 12:09, Jason Ryan wrote:
> All types/buckets we use are set to allow_mult: false - last_write_wins:true
Did you change to this setting after these keys were written?
Looks like a bug in Riak, so I’m going to open a ticket: hd([]) should never be
called in reconcile. But any
in the console, probably.
Cheers
Russell
>
> Thanks,
> Jason
>
>
> On 29 December 2014 at 12:19, Russell Brown wrote:
>
> On 29 Dec 2014, at 12:09, Jason Ryan wrote:
>
>> All types/buckets we use are set to allow_mult: false - last_write_wins:true