( first post here, hi everybody... )
If you don't need MR, 2i, etc., then Bitcask will be faster. You just need
to make sure all your keys fit in memory, which should not be a problem.
How many keys do you have, and what's their average length?
About the values, you can save a lot of space by choosing a more compact serialization format.
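For a rough idea of the keydir RAM cost, here is a back-of-the-envelope sketch; the 40-45 bytes of static per-key overhead is a ballpark figure (from Basho's capacity-planning notes, if I remember correctly), and the key count, key size and node count are made-up placeholders:

# Rough Bitcask keydir RAM estimate -- a sketch, not an exact formula.
# Assumes ~45 bytes of static per-key overhead (approximate figure) and the
# default replication factor (n_val) of 3.
def bitcask_ram_per_node(num_keys, avg_key_len, n_val=3, nodes=10,
                         per_key_overhead=45):
    total_entries = num_keys * n_val            # every replica keeps a keydir entry
    bytes_total = total_entries * (per_key_overhead + avg_key_len)
    return bytes_total / nodes                  # spread (roughly) evenly across nodes

# Example: 500 million keys of ~40 bytes on a 10-node cluster
print(bitcask_ram_per_node(500_000_000, 40) / 2**30, "GiB per node")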
On 10 July 2013 11:03, Edgar Veiga wrote:
> Hi Guido.
>
> Thanks for your answer!
>
> Bitcask isn't an option due to the amount of RAM needed.. We would need
> a lot more physical nodes, so more money spent...
>
Why is it not an option?
If you use Bitcask, then each node needs to store its share of the keydir (one in-memory entry per key) in RAM.
>
> We are using the native PHP serialize() function!
>
> Best regards
>
>
>
> On 10 July 2013 11:43, damien krotkine wrote:
>
>>
>>
>>
>> On 10 July 2013 11:03, Edgar Ve
Hi,
I'm trying to use Riak for - basically - a short-lived data storage
system. Here are my prerequisites:
- keys will contain a timestamp and some properties. I can work on
making them short
- values will be binary blobs, from a few bytes to a few MB
- values are read-only: once written, they are never modified
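For illustration, this is the kind of key layout I have in mind; the "source" and "shard" parts are made-up placeholders for the other properties:

# Sketch of a possible key layout (placeholder fields). A fixed-width epoch
# prefix keeps keys short and lexicographically sortable, which later makes
# "everything older than X" scans or expiry straightforward.
def make_key(epoch_seconds, source_id, shard):
    return f"{epoch_seconds:010d}-{source_id}-{shard}"

print(make_key(1387584000, "evt", 7))   # -> "1387584000-evt-7"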
On Sat, Dec 21, 2013, at 06:51 AM, Matthew Von-Maszewski wrote:
On Dec 21, 2013, at 4:12 AM, Damien Krotkine <dkrotk...@gmail.com>
wrote:
The first option is to use leveldb as the storage backend, and use an external
script to expire (delete) keys that are too old (one of the sec
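The excerpt is cut short, but a minimal sketch of such an external expiry script could look like the following, assuming each object was written with an integer secondary index holding its creation time (the index name created_at_int is hypothetical); host, bucket and TTL are placeholders:

import time
import requests

RIAK = "http://127.0.0.1:8098"     # placeholder node address
BUCKET = "events"                  # placeholder bucket
TTL_SECONDS = 7 * 24 * 3600        # keep one week of data

def expire_old_keys():
    cutoff = int(time.time()) - TTL_SECONDS
    # 2i range query: every key whose created_at_int is older than the cutoff.
    # (In real life this list can be huge, so you would paginate with
    # max_results/continuation instead of fetching everything at once.)
    url = f"{RIAK}/buckets/{BUCKET}/index/created_at_int/0/{cutoff}"
    keys = requests.get(url).json().get("keys", [])
    for key in keys:
        requests.delete(f"{RIAK}/buckets/{BUCKET}/keys/{key}")
    return len(keys)

if __name__ == "__main__":
    print(expire_old_keys(), "keys expired")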
Hi,
I'm stress testing Riak on a reasonable cluster size (10 nodes), where 1
process constantly adds data, and various other processes read it, but
never update it (nor delete it).
I can see that when I add more processes to read the data, the
process PUTting the data is affected, and its performance degrades.
rge/4-fun-0-',7,[{file,"src/bitcask.erl"},{line,1912}]},
{bitcask_fileops,fold_hintfile_loop,5,[{file,"src/bitcask_fileops.erl"},{line,660}]},
{bitcask_fileops,fold_file_loop,8,[{file,"src/bitcask_fileops.erl"},{line,720}]},
{bitcask_fileops,fold_hintfile,3,[{file,"
Hi,
The official website ( http://docs.basho.com/riak/latest/downloads/ )
mentions that Bitcask users should use this patch:
http://s3.amazonaws.com/downloads.basho.com/patches/bitcask-2.0-merge-crash/riak_kv_bitcask_backend.beam
However, the link will let you download a beam file that actual
Hi,
I'd like to know (at least approximately) how much network bandwidth
gossiping (and other things that are not strictly data-copying related)
uses.
Is there any information I can look up in the logs or via the console, or
even any experiment that I can do to measure it?
I am operating
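One experiment that can give an approximate answer (it is not an official Riak metric): stop all client traffic against one node and sample the kernel's interface counters for a while, so whatever bytes still move are inter-node chatter (gossip, AAE exchanges, handoff, etc.). A Linux-only sketch, with "eth0" as a placeholder interface name:

import time

def read_bytes(iface="eth0"):
    # /proc/net/dev: first counter after the colon is rx_bytes, ninth is tx_bytes
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError(f"interface {iface} not found")

rx0, tx0 = read_bytes()
time.sleep(60)                       # sample window, with client traffic stopped
rx1, tx1 = read_bytes()
print(f"rx {(rx1 - rx0) / 60:.0f} B/s, tx {(tx1 - tx0) / 60:.0f} B/s")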
Hi,
As far as I understand, Riak CS uses Bitcask for data storage and
leveldb for metadata.
I'm trying to implement expiration of data in Riak CS: Bitcask expiration
works fine, but of course the metadata is still in the leveldb backend. Is
there any way to expire metadata from Riak CS automatically?
Hi,
In a cluster with 27 nodes, running 2.0.0, I upgraded one node to 2.1.1
(riak-2.1.1-1.el6.x86_64.rpm). Since then, this node has been producing this
kind of log every couple of minutes:
2015-05-27 14:56:31.978 [info]
<0.94.0>@riak_core_sysmon_handler:handle_event:92 monitor long_schedule
<0.1201
, Matt
>
>
> --
> Matt Brender | Developer Advocacy Lead
> [t] @mjbrender
>
> On Wednesday, May 27, 2015 at 11:26 AM, Damien Krotkine
> wrote:
>> Hi,
>>
>>
>>
>> In a cluster with 27 nodes, running 2.0.0, I upgraded one node
>> to 2.1.1
>
Hi,
I'm seeing strange things with bitcask expiration: when a new node joins
the ring, it gets data from the other nodes to rebalance. It seems that
the expiration countdown for this data starts at the time it was
copied to the new node, *not* the date it was originally created
Hi,
[ this is a possible duplicate, but it seems that messages sent from my
other email account (damien.krotk...@booking.com) don't reach the list
recipients - if I'm wrong, let me know - so I'm sending it again from
this address ]
I'm seeing strange things with bitcask expiration: When a new node joins
Hi,
I need to do a ring resizing on a live cluster. The documentation (
http://docs.basho.com/riak/latest/ops/advanced/ring-resizing/ ) is not
very detailed, and I'm missing one important piece of information: how much
free space do I need on the nodes to perform the ring resizing properly?
Having enough
ke sure my cluster is not creating secondary indexes :)
Thanks,
Damien Krotkine
Have you checked riak-admin transfer-limit? If it's at zero, handoffs will be
blocked.
> On 21 Sept 2015 at 15:41, Nico Revin wrote:
>
> Hi!
>
> I have the following issue:
> # riak-admin ringready
> TRUE All nodes agree on the ring ['riak@1',
> 'riak@2'
Hi Joe,
I have a similar setup, and in my case, "indexed_s" is properly indexed.
Are you sure that your data is really what you think it is (i.e. real
JSON, with the right mimetype, etc.)?
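A quick way to check the mimetype point is to write the document explicitly as application/json and then query the index; with the default Yokozuna schema, a field named indexed_s should be picked up as a string field. In this sketch the bucket type yz_data, bucket test and index yz_index are placeholder names, and the type is assumed to already be associated with the index:

import json
import time
import requests

RIAK = "http://127.0.0.1:8098"
doc = {"indexed_s": "hello", "other_field": 42}

# Write with an explicit JSON content type so the extractor parses it.
requests.put(
    f"{RIAK}/types/yz_data/buckets/test/keys/doc1",
    data=json.dumps(doc),
    headers={"Content-Type": "application/json"},
)

time.sleep(2)   # give Solr's soft commit a moment to index the write

# Query the associated index to confirm the field is searchable.
r = requests.get(f"{RIAK}/search/query/yz_index",
                 params={"wt": "json", "q": "indexed_s:hello"})
print(r.json()["response"]["numFound"])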
dams.
Joe Olson wrote:
Using the default YZ index schema, I know I can index:
dataset={
indexed_
Alexander Popov wrote:
1. Does Riak have restrictions on the number of buckets?
In practice, no, as long as you use bucket types properly.
2. Same for Solr indexes?
I assume you'll want one index per bucket. Solr indexes are mostly
limited by the RAM and disk space that you can throw at them
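As a sketch of the one-index-per-bucket pattern over the HTTP API (index and bucket names are placeholders, and with bucket types you would normally set the property on the type rather than on a single bucket):

import json
import requests

RIAK = "http://127.0.0.1:8098"
HEADERS = {"Content-Type": "application/json"}

# 1. Create a Solr index backed by the default schema (it can take a few
#    seconds to become available on all nodes).
requests.put(f"{RIAK}/search/index/users_idx",
             data=json.dumps({"schema": "_yz_default"}),
             headers=HEADERS)

# 2. Point a bucket at that index. With bucket types you would set the same
#    search_index property on the type instead of the individual bucket.
requests.put(f"{RIAK}/buckets/users/props",
             data=json.dumps({"props": {"search_index": "users_idx"}}),
             headers=HEADERS)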
AFAIK, secondary index pagination doesn't support descending sort
order. Maybe you can simply have an additional 2i with your value
inverted?
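A sketch of what I mean, over the HTTP API; the bucket, the index names and the big constant are illustrative only:

import requests

RIAK = "http://127.0.0.1:8098"
MAX_TS = 10**10   # comfortably larger than any epoch timestamp we will see

def put_event(key, body, created_at):
    requests.put(
        f"{RIAK}/buckets/events/keys/{key}",
        data=body,
        headers={
            "Content-Type": "application/octet-stream",
            "x-riak-index-created_int": str(created_at),
            # the "inverted" index: ascending order on it == newest first
            "x-riak-index-created_desc_int": str(MAX_TS - created_at),
        },
    )

# Paginated "newest first" read: an ascending range over the inverted index.
r = requests.get(f"{RIAK}/buckets/events/index/created_desc_int/0/{MAX_TS}",
                 params={"max_results": 100})
print(r.json()["keys"])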
Grigory Fateyev wrote:
Hello!
We use 2i pagination and it is very good. The only thing that is
bothering me is sorting. Now data returns in ascending
Hi Mark,
I have successfully integrated Banana with the Riak 2.0 Solr implementation.
I simply configured nginx to act as a proxy between Riak Search / Solr and
what Banana expects. So basically:
- Install Riak 2 and Java, and enable Riak Search (follow the Basho docs)
- Install Banana
- Install nginx and u
100%
valid, but at least you get the idea of what we can do: I'm using the
Solr stats feature *with* facets at the same time. In this case I'm
only interested in the stats (min/max/sum_of_squares/average) and not
the actual results, so I set rows=0.
So basically all the solr
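For illustration, a query of that shape could look like this; the index and field names are placeholders, and Riak 2.x embeds Solr 4, where stats.facet is the parameter that combines stats with facets:

import requests

RIAK = "http://127.0.0.1:8098"
params = {
    "wt": "json",
    "q": "*:*",
    "rows": 0,                    # only the aggregates, not the documents
    "stats": "true",
    "stats.field": "duration_i",  # numeric field to aggregate (placeholder)
    "stats.facet": "status_s",    # break the stats down per status value
}
r = requests.get(f"{RIAK}/search/query/events_idx", params=params)
print(r.json()["stats"]["stats_fields"]["duration_i"])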
Hi Joe,
First of all, what Dmitri says makes a lot of sense. From what I
understand, you are trying to avoid wasting network bandwidth by
transferring data where you only need the metadata of your keys. As
Dmitri pointed out, if your replication factor is 3 (default), then Riak
will internall
There is an easier way to do it, by using the advanced config file
(usually /etc/riak/advanced.config).
Here is an example where I use advanced.config to add a vm_args
configuration section. So I suppose you could use the same syntax to add
your -env option:
[
  {vm_args, [
    {'-s my_app', ""}
  ]}
].
Hi Guillaume,
If I understand correctly, you need to change all the values of your JSON data.
How many keys are we talking about, how big is the data, and in how many
buckets are the keys?
Also, is your cluster in production yet?
> On 7 June 2016 at 18:43, Guillaume Boddaert
> wrote:
>
know.
Guillaume
On 08/06/2016 08:49, Damien Krotkine wrote:
Hi Guillaume,
If I understand correctly, you need to change all the values of your
JSON data.
How many keys are we talking about, how big is the data, and in how
many buckets are the keys?
Also, is your cluster in production yet?
-10-12 10:02:41.372 [error] <0.1410.0> gen_fsm <0.1410.0> in state
>>>> active terminated with reason: call to undefined function
>>>> riak_kv_multi_backend:range_scan/4 from riak_kv_vnode:list/7 line 1875
>>>>
>>>> And then in the client cod
On Mon, Apr 3, 2017, at 22:20, Matthew Von-Maszewski wrote:
> Rohit,
>
> My apologies for the delayed reply. Too many conflicting demands on
> my time the past two weeks.
>
> I reviewed the riak-debug package you shared. I also discussed its
> contents with other Riak developers.
>
+1 for Apache2
On Thu, Sep 7, 2017, at 19:05, Neeraj Sharma wrote:
> +1 for Apache 2. Thanks for making it open source.
>
> On Thu, Sep 7, 2017 at 3:36 AM, Bill Barnhill
> wrote:
> > +1 for Apache 2. Thank you so much for saving all the important technology
> > developed by the great developers
Hi,
I'm probably coming late to the party, but we have a large number of Riak boxes
running at work, in meta-clusters, so some rings are redundantly storing data.
I could move one of them to the RC and compare its performance/errors/whatever
with the non-upgraded rings, if you people think it's useful.
sell Brown
> Sent: 10 April 2018 18:53
> To: Bryan Hunt
> Cc: Damien Krotkine ; riak-users us...@lists.basho.com>
> Subject: Re: Riak-2.2.5 progress update
>
> More updates.
>
> There were some minor changes that missed the rc1, so we added them
>
> * yokoz