Issue with yokozuna_extractor_map (riak 2.1.1)

2017-02-28 Thread Simon Jaspar
Hi,

I’m currently experimenting with Riak 2.1.1 for a project. I recently ran into 
some trouble with Yokozuna while trying to register a custom extractor.

I’m not sure how I ended up in this situation, but I’m currently stuck with my 
cluster's yokozuna_extractor_map equal to the atom ignore…

I remember having the default extractor map there before I tried to register a 
custom extractor (following the Basho documentation at 
https://docs.basho.com/riak/kv/2.2.0/developing/usage/custom-extractors/), 
which is how I ended up here.

While attached to one of my Riak nodes, running yz_extractor:get_map(). 
returns ignore.

Trying to register a new extractor with 
yz_extractor:register("custom_extractor", yz_noop_extractor). returns 
already_registered, with this in my logs:

2017-02-28 11:41:39.265 [error] 
<0.180.0>@riak_core_ring_manager:handle_call:406 ring_trans: invalid return 
value: 
{'EXIT',{function_clause,[{orddict,find,["custom_extractor",ignore],[{file,"orddict.erl"},{line,80}]},{yz_extractor,get_def,3,[{file,"src/yz_extractor.erl"},{line,67}]},{yz_extractor,register_map,2,[{file,"src/yz_extractor.erl"},{line,138}]},{yz_misc,set_ring_trans,2,[{file,"src/yz_misc.erl"},{line,302}]},{riak_core_ring_manager,handle_call,3,[{file,"src/riak_core_ring_manager.erl"},{line,389}]},{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,585}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}}

I have been trying to work around the issue by resetting the extractor map to 
its default value using lower-level functions from the Yokozuna source code, 
but with no success.

If anyone has any idea or solution, that’d be great!

Thanks in advance for your help.

Best,
Simon JASPAR



Leveled - Another Erlang Key-Value store

2017-02-28 Thread Martin Sumner
Over the past few months I've been working on an alternative pure-Erlang
Key/Value store to act as a backend to Riak KV.  This is now open source
and available at

https://github.com/martinsumner/leveled

The store is a work-in-progress prototype, originally started to better
understand the impact of different trade-offs in LSM-Tree design.  The aim
is to:

- provide a fully-featured Riak backend (e.g. secondary indexes, object
expiry support, etc.)
- provide stable throughput with larger object sizes (> 4KB)
- provide a simpler and more flexible path to making Riak KV changes
end-to-end

The primary change in the store compared to HanoiDB or eleveldb is that
storage is split: only keys & metadata are placed in the merge tree, while
the full object lives to the side in a series of CDB-based journals.
The intention of this is to:

- reduce write amplification and ease page-cache pollution issues during
scanning events;
- support faster HEAD requests than GET requests; in parallel, an
alternative Riak KV branch has been produced to move from an n-GET model to
an n-HEAD 1-GET model of fetching data for both KV GET and KV PUT operations.

The impact of this has been to improve throughput for larger object sizes
where disk I/O and not CPU is the current limit on throughput.  The
advantage increases the greater the object size, and the tighter the
constraint on disk.
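
To make that split concrete, here is a minimal illustrative sketch in
Erlang (invented names and shapes, not leveled's actual API): keys &
metadata go to the merge-tree side, full binary values go to an append-only
journal keyed by sequence number, and a HEAD never touches the journal:

-module(split_store).
-export([new/0, put/3, head/2, get/2]).

new() -> {#{}, #{}, 0}.                %% {Ledger, Journal, NextSeqNo}

put(Key, Value, {Ledger, Journal, Seq}) ->
    MD = #{seq => Seq, size => byte_size(Value)},
    {maps:put(Key, MD, Ledger),        %% keys & metadata -> merge tree
     maps:put(Seq, Value, Journal),    %% full value -> journal, by seq no
     Seq + 1}.

head(Key, {Ledger, _Journal, _Seq}) ->
    maps:find(Key, Ledger).            %% metadata only; no value I/O

get(Key, {Ledger, Journal, _Seq}) ->
    case maps:find(Key, Ledger) of     %% GET first follows the HEAD path
        {ok, MD = #{seq := S}} -> {ok, MD, maps:get(S, Journal)};
        error                  -> not_found
    end.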

Please visit the GitHub page; I've tried to write up as much about the
project as I can.  There are the results of various volume tests, information
on the research which prompted the design, an overview of the design itself,
and some hints as to what I expect to try next with leveled.

For any feedback, please mail me, raise an issue on GitHub, or ping me @masleeds

Cheers

Martin


Re: Riak: reliable object deletion

2017-02-28 Thread al so
Implications when delete_mode is set to 'keep'?
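
For context, delete_mode lives in the riak_kv section of advanced.config; a
minimal sketch, using the documented values of immediate, an integer in
milliseconds, or keep:

%% advanced.config -- 3000 ms is the default when delete_mode is unset
[
 {riak_kv, [
     {delete_mode, keep}    %% retain tombstones indefinitely
 ]}
].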


On Thu, Feb 23, 2017 at 9:53 PM, al so  wrote:

>> Present both in Solr and Riak. It's a 1-way replication in MDC. MDC is not
>> the cause unless there is a bug there as well.
>>
>>
>> On Thu, Feb 23, 2017 at 6:02 AM, Fred Dushin  wrote:
>>
>>> Running a Solr query has no impact on writes -- Riak search queries are
>>> direct pass throughs to Solr query and don't touch any of the salient Riak
>>> systems (batching writes to Solr, YZ AAE, etc).  I believe the timing of
>>> the reappearance is a coincidence.
>>>
>>> Is it possible the object reappeared via MDC?  Do you have record of the
>>> reappeared object in your other cluster?
>>>
>>> Also, just to confirm, when you say the object reappeared -- it
>>> re-appeared in Riak (and subsequently in Solr), correct?  Or are you saying
>>> you just find the object in Solr (via a search)?
>>>
>>> -Fred
>>>
>>> On Feb 23, 2017, at 1:43 AM, al so  wrote:
>>>
 Here is a brief description of the environment:
  Riak v2.0.8 + Solr / 5-node cluster / MDC

  Problem:
  A deleted object suddenly resurrected after a few days. A Solr search
  query ("*:*") was executed around the time of its reappearance.

  Bucket properties for this reappeared object:

  {
    "props": {
      "name": "UsaHype",
      "allow_mult": false,
      "basic_quorum": false,
      "big_vclock": 50,
      "chash_keyfun": {
        "mod": "riak_core_util",
        "fun": "chash_std_keyfun"
      },
      "dvv_enabled": true,
      "dw": "quorum",
      "last_write_wins": false,
      "linkfun": {
        "mod": "riak_kv_wm_link_walker",
        "fun": "mapreduce_linkfun"
      },
      "n_val": 3,
      "notfound_ok": true,
      "old_vclock": 86400,
      "postcommit": [
        {
          "mod": "riak_repl_leader",
          "fun": "postcommit"
        },
        {
          "mod": "riak_repl2_rt",
          "fun": "postcommit"
        }
      ],
      "pr": 0,
      "precommit": [],
      "pw": 0,
      "r": "quorum",
      "repl": true,
      "rw": "quorum",
      "small_vclock": 50,
      "w": "quorum",
      "young_vclock": 20
    }
  }


 delete_mode is not configured at all. Does it default to 3 sec? Do tombstones
 still get created when delete_mode is absent from the config? Can we query
 the metadata (X-Riak-Deleted) for this reappeared object?

  How would one go about finding the root cause? I do know, from our app logs,
 the date when the object was deleted. I also seem to know when it reappeared.
 It looks like a Solr search ("*:*") in code was executed just before this
 object reappeared. Look at AAE? Is there a way to find out if the Solr and
 Riak backends are out of sync (AAE?)?

  I understand tuning other params (allow_mult = true, ...) will have
 its own implications.

  In summary:
   How do I find the root cause of this issue?
   How do I reliably delete an object?

 -Volk



[Ann] Riak TS 1.5.2 released

2017-02-28 Thread Pavel Hardak
Dear Riak Users,

Last week we released Riak TS 1.5.2, which is available in both OSS
and EE editions. It is a bugfix release, with no new features or incompatible
changes. As always, we encourage you to upgrade to the latest version.
Please see the Release Notes [1] for details. Additionally, our friends at AWS
have posted an updated AMI image (based on the latest Amazon Linux) at [2].

[1] http://docs.basho.com/riak/ts/1.5.2/releasenotes/
[2] https://aws.amazon.com/marketplace/pp/B01F9HDDUM

Best regards,
Pavel Hardak

Director of Product Management, Basho
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Issue with yokozuna_extractor_map (riak 2.1.1)

2017-02-28 Thread Luke Bakken
Hi Simon -

Did you copy the .beam file for your custom extractor to a directory
in the Erlang VM's code path?

If you run "pgrep -a beam.smp" you'll see an argument to beam.smp like this:

-pa /home/lbakken/Projects/basho/riak_ee-2.1.1/rel/riak/bin/../lib/basho-patches

On my machine, that adds the
"/home/lbakken/Projects/basho/riak_ee-2.1.1/rel/riak/lib/basho-patches"
directory to the code path. You will see something that starts with
"/usr/lib/riak/.." or "/usr/lib64/riak/..." in your environment.

You must copy the .beam file to the "basho-patches" directory and
restart Riak. Then your extractor code will be found.
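
For example (module name and paths are illustrative; adjust for your
package layout):

$ erlc custom_extractor.erl
$ cp custom_extractor.beam /usr/lib64/riak/lib/basho-patches/
$ riak restart

Then, from "riak attach", you can confirm the module is on the code path:

(riak@127.0.0.1)1> code:which(custom_extractor).

This should return the basho-patches path rather than the atom non_existing.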

--
Luke Bakken
Engineer
lbak...@basho.com



Re: Leveled - Another Erlang Key-Value store

2017-02-28 Thread DeadZen
Cheers indeed!
You added HEAD requests so a full GET wouldn't always be required?
Did I read that right? *dives into code*
%% GET requests first follow the path of a HEAD request, and if an object is
%% found, then fetch the value from the Journal via the Inker.
... WHAT?

Very nice work, will be more than happy to provide feedback and patches on this.



Re: Leveled - Another Erlang Key-Value store

2017-02-28 Thread Martin Sumner
Have a look at
https://github.com/martinsumner/leveled/blob/master/docs/FUTURE.md and the
"Riak Features Implemented" section.

I'm trying to avoid the need for Riak to GET n times for every request
(both a KV GET and a KV PUT), when really it only needs the body once; the
other n-1 times it doesn't need the body, provided the HEAD response confirms
that the vector clock is in the expected state.  As values get larger, there's
a lot of unnecessary disk activity in those GETs.
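
As a toy sketch of that read path (illustrative only, not the riak_kv
implementation; a "replica" here is just a map of Key => {VClock, Value}):

-module(head_get).
-export([read/2]).

%% Metadata-only lookup: answers with the vector clock, never the body.
head(Replica, Key) ->
    case maps:find(Key, Replica) of
        {ok, {VClock, _Value}} -> {ok, VClock};
        error                  -> notfound
    end.

read(Replicas, Key) ->
    Heads = [{R, head(R, Key)} || R <- Replicas],
    case lists:usort([VC || {_, {ok, VC}} <- Heads]) of
        [VC] ->
            %% All n clocks agree: a single GET for the body suffices.
            [R | _] = [R2 || {R2, {ok, C}} <- Heads, C =:= VC],
            {_VC, Value} = maps:get(Key, R),
            {ok, VC, Value};
        _ ->
            %% Clocks diverged or object missing: fall back to full GETs.
            {error, need_full_gets}
    end.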

To understand all the Journal/Inker stuff have a look at
https://github.com/martinsumner/leveled/blob/master/docs/DESIGN.md

The Inker is an actor that controls the Journal - the Journal is a
transaction log where all the values are stored permanently, made up of a
series of CDB files where everything is ordered by sequence number.  The
Keys and Metadata are stored in a merge tree called the Ledger - and the
Ledger is controlled by an actor called the Penciller.  So the HEAD request
should just need to check in the Ledger via the Penciller.  A GET request
must follow that same path, and then once it has the metadata, it has the
sequence number and so can use that to fetch the value from the Journal via
the Inker.

Does this make sense?

My apologies, as the naming may not be helpful.  Perhaps one of the
drawbacks of working in isolation on this.

Thanks

Martin


Re: Leveled - Another Erlang Key-Value store

2017-02-28 Thread DeadZen
Yup, I get it. I like the concept, and the fun naming of inkers, pencillers
and clerks as well. Is there a basho_bench configuration? This reminds me a
bit of fractal trees, but with a focus on NoSQL operational semantics. How
else do you see this adding improvements? It seems index requests could be
cheaper with this backend configuration.



Re: Leveled - Another Erlang Key-Value store

2017-02-28 Thread Martin Sumner
For original testing in isolation I used this:

https://github.com/martinsumner/leveled/blob/master/test/volume/single_node/src/basho_bench_driver_eleveleddb.erl

Since then, though, I've focused on testing within Riak, using the standard
basho_bench/examples/riakc_pb.config test.

Testing of secondary indexing is one of the next big things for me; I'm
definitely going to do this in March.  From a development perspective, doing
clones/snapshots was a lot easier than it presumably is in C++.  I'm unsure
how performance will compare, as 2i terms do end up clumped together
on disk in leveldb, and I know MVM has done a lot of optimisation work in
that area.

Some things that may help leveled: there are no overlapping files in
level 1, and the levels don't increase in depth as rapidly.  However, there are
some issues (https://github.com/martinsumner/leveled/issues/34) that I may
still need to contend with.

I think there may be some interesting possibilities with Map functions
based on HEAD rather than GET requests.  Perhaps, as a developer, I could add
a bitmap index to my object metadata and roll across those bitmaps
efficiently through a Map function, now that I wouldn't need to pull the
whole object off disk to achieve that.  So the concept of a division between
Metadata and Value may open up efficient query ideas beyond 2i.
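
For instance (a hypothetical sketch; the bitmap field and the ledger shape
are invented for illustration), a fold over {Key, Metadata} ledger entries
could answer such a query without a single value fetch:

-module(md_query).
-export([matching_keys/2]).

%% Keep keys whose metadata bitmap has any bit in common with Mask;
%% no journal (value) reads are needed at any point.
matching_keys(Ledger, Mask) ->
    [K || {K, MD} <- Ledger, maps:get(bitmap, MD, 0) band Mask =/= 0].

So md_query:matching_keys([{<<"k1">>, #{bitmap => 2#0101}},
{<<"k2">>, #{bitmap => 2#1000}}], 2#0001). would return [<<"k1">>].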



Re: AAE Off

2017-02-28 Thread Matthew Von-Maszewski
Performance gains in write-intensive applications.

> On Feb 28, 2017, at 11:18 AM, al so  wrote:
> 
> Why would anyone disable AAE in riak 2.x?


Re: AAE Off

2017-02-28 Thread Alexander Sicular
Right. AAE does not come for free. It consumes disk, memory and CPU.
Depending on your circumstances it may or may not be advantageous for your
system.
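
If you do decide to switch it off, it's a one-line riak.conf change (a
sketch; confirm the exact setting name and values for your version):

## riak.conf -- 'active' (the default) enables AAE; 'passive' disables it
anti_entropy = passive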

-- 


Alexander Sicular
Solutions Architect
Basho Technologies
9175130679
@siculars


AAE Off

2017-02-28 Thread al so
Why would anyone disable AAE in riak 2.x?


Re: Riak TS and downsampling?

2017-02-28 Thread Pavel Hardak
Hi Jordan,

We are going to add GROUP BY over a time interval in Riak TS 1.6, tentatively
scheduled for late March or early April 2017. Additionally, you might
consider using the Riak Spark Connector [1] to perform aggregations and
downsampling; this is available today. Depending on the amount of data,
using Spark might be the more efficient option even after we add aggregation
by time.

[1] https://github.com/basho/spark-riak-connector

Best,
Pavel

> Date: Thu, 23 Feb 2017 16:17:57
> From: Jordan Ganoff
> To: riak-users@lists.basho.com
> Subject: Riak TS and downsampling?
>
> Hi,
>
> Is it possible to downsample a table as part of a query? Specifically, to
> group and aggregate a given table's records at a less granular level than
> the rows are stored in the table, using an aggregation technique. Most time
> series databases offer a way to either precompute (reindex) at different
> granularities or query with an aggregation function. I haven't found
> anything like GROUP BY time(1h) in the docs. Am I missing something?
>
> Thanks,
> Jordan


Re: AAE Off

2017-02-28 Thread al so
How would the data get repaired then? I.e., I'm looking for the complete list
of cons when AAE is off.
