Hi David,
Look at the documentation for object deletion:
http://docs.basho.com/riak/kv/2.1.4/using/cluster-operations/object-deletion/
There is also a blog post talking about this:
http://basho.com/posts/technical/riaks-config-behaviors-part-3/
When deleting an object, Riak will first write a tombstone for the key.
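The tombstone/reap behavior those docs describe is governed by riak_kv's delete_mode; a minimal advanced.config sketch (the value shown is just one of the options):
{riak_kv, [{delete_mode, 3000}]} %% delay in ms before reaping (3000 is the default); alternatives: immediate | keep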
Luke, yep, that is right. I did that smoke test when debugging the precommit
issue. Each of the three nodes has access to the compiled beam files it needs
(in a shared folder that each node is configured to use, via add_paths in
advanced.config, to make sure they all have the same beam files). I verified it
manually.
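For the archives, that add_paths entry looks roughly like this in advanced.config (the path is illustrative):
{riak_kv, [{add_paths, ["/mnt/shared/beams"]}]} %% shared folder containing the compiled hook beams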
Hi Sanket,
Another thing I'd like to confirm - you have installed your compiled
.beam file on all Riak nodes and can confirm via "m(precommit)" that
the code is available?
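i.e., on each node, something along these lines (node name is illustrative):
$ riak attach
(riak@node1)1> m(precommit). %% prints module info (file, exports) if the beam can be loaded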
--
Luke Bakken
Engineer
lbak...@basho.com
On Wed, May 11, 2016 at 12:12 PM, Sanket Agrawal
wrote:
> Yep, I can always load the module in question in riak console fine.
I have a test database with 158 items in it. Some keys are Riak generated and
some are explicitly set. I have two methods. One does a single delete of an
object whilst the other goes through a loop and deletes all of them.
When I run the deleteAllFromBucket: aBucket it loops through all 158. If
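For comparison, the equivalent delete-all loop with the Erlang client would be roughly this (a sketch; assumes a riakc_pb_socket connection, and the bucket name is illustrative):
{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
{ok, Keys} = riakc_pb_socket:list_keys(Pid, <<"test-bucket">>), %% full key listing is expensive
[ok = riakc_pb_socket:delete(Pid, <<"test-bucket">>, Key) || Key <- Keys].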
That's a great use case because it's not ad hoc (ad hoc being the worst case).
Your pre-compute/cache solution will work whichever approach you take. Then the
question just becomes space vs. compute.
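A sketch of that fetch-keys-then-compute pattern with the Erlang client (index name and query are illustrative; with stored="false" only the key fields such as _yz_rk come back, and you fetch the objects yourself):
{ok, {search_results, Docs, _MaxScore, _NumFound}} =
    riakc_pb_socket:search(Pid, <<"my_index">>, <<"rank_i:[1 TO 100]">>),
%% each doc includes <<"_yz_rb">>/<<"_yz_rk">> (bucket/key); fetch those objects,
%% compute the rankings/statistics, then store the compiled result.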
On Wednesday, May 11, 2016, Alex De la rosa wrote:
> My use case for searching is mainly for internal purposes, rankings and statistics
Yep, I can always load the module in question in the riak console fine. I did
all my testing in the riak console before trying to turn on the precommit hook.
Here is the output for loading the module that you asked for - if sasl
logging is not working, perhaps something is broken about commit hooks then:
$ ~
Huh - I get a huge amount of logging when I turn on sasl using
advanced.config - specifically, I have:
{sasl,[{sasl_error_logger,{file, "/tmp/sasl-1.log"}}]}
in my advanced.config, and for just a startup/shutdown cycle I get a
191,555-byte file.
Just to confirm that you can, in fact, load the module in the riak console?
My use case for searching is mainly for internal purposes, rankings and
statistics (all that data is pre-compiled and stored into final objects for
the app to display)... so I think it is best to not store anything in SOLR and
just fetch keys to compile the data when required.
Thanks,
Alex
On Wed, M
Those are exactly the two options, and opinions vary, generally based on use
case. Storing the data not only takes up more space but also more IO, which
makes things slower not only at read time but, more crucially, at write
time.
Often people will take a hybrid approach and store certain elements l
Thanks, Doug. I have enabled sasl logging now through advanced.config,
though it doesn't seem to be creating any log yet.
In case this helps you folks with debugging the precommit issue, what I have
observed is that the erl-reload command doesn't load the precommit modules on
any of the three nodes (thoug
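For reference, that command acts on the local node only, as far as I know, so it has to be repeated across the cluster:
$ riak-admin erl-reload   # run on each node, then re-check with m(precommit) in riak attach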
Hi all,
We're doing a POC on Riak-TS 1.3 and want to find out if there is an option to
drop an existing table and to enable authorization for the HTTP APIs.
Thanks
-kyle-
Hi all,
When creating a SOLR schema for Riak Search, we can choose whether or not to store
the data we are indexing, for example:
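e.g., a field defined either way (field name and type are illustrative):
<field name="rank_i" type="int" indexed="true" stored="true"/>   <!-- value returned with search results -->
<field name="rank_i" type="int" indexed="true" stored="false"/>  <!-- index only; fetch the object for the value -->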
I know that the point of having the value stored is to be able to get it
returned automatically when doing a search query... that implies using more
disk to store data that may
As to the SASL logging, unfortunately it's not "on by default" and the
setting in riak.conf, as you found out, doesn't work correctly. However,
you can enable SASL by adding a setting to your advanced.config:
{sasl,[{sasl_error_logger,tty}]} %% Enable TTY output for the SASL app
{sasl,[{sasl_error_logger,{file, "/tmp/sasl-1.log"}}]} %% Or log SASL output to a file
Hi Luke,
This is on a three-node cluster on Amazon EC2 Linux. I built it from 2.1.4
KV source using the Erlang interpreter that Basho provided (REPL header: Erlang
R16B02_basho8 (erts-5.10.3) [source] [64-bit] [async-threads:10] [hipe]
[kernel-poll:false]).
Also, riak-admin output below to confirm it
Hi Sanket -
I'd like to confirm some details. Is this a one-node cluster? Did you
install an official package or build from source?
Thanks -
--
Luke Bakken
Engineer
lbak...@basho.com
On Tue, May 10, 2016 at 6:49 PM, Sanket Agrawal
wrote:
> One more thing - I set up the hooks by bucket, not bucket type