Hi All,
I posted this question to Stack Overflow a few days back but haven't had much luck.
Hoping someone here has some thoughts.
I have a use case for an aggregate query across the entire DB and all
buckets. I'm wondering what the best query method is; I'm leaning
towards multiple secondary index calls. Th
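For a concrete starting point: 2i queries are served per bucket (and need the LevelDB or memory backend), so an all-bucket aggregate means one index call per bucket with the results combined client-side. Below is a minimal sketch using the 1.x Riak Java client; the bucket name and the "status" index are assumptions for illustration, not anything from the original question.

import java.util.List;

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;
import com.basho.riak.client.query.indexes.BinIndex;

public class IndexQuerySketch {
    public static void main(String[] args) throws Exception {
        IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);

        // One 2i call per bucket; repeat for each bucket and merge the key lists.
        Bucket bucket = client.fetchBucket("accounts").execute();
        List<String> keys = bucket.fetchIndex(BinIndex.named("status"))
                                  .withValue("active")
                                  .execute();

        System.out.println("matched " + keys.size() + " keys in accounts");
        client.shutdown();
    }
}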
Operate on the data locally, validating the decryption process as a final
step after the re-encrypted value is put back into the db.
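A minimal sketch of that verify-before-trusting-it flow for a single value, shown with the 1.x Java client for concreteness; the bucket name, key, and the Crypto helpers are hypothetical placeholders, not anything from this thread.

import java.util.Arrays;

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.IRiakObject;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;

public class RotateOneValue {
    public static void main(String[] args) throws Exception {
        IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);
        Bucket bucket = client.fetchBucket("secrets").execute();  // bucket name is made up
        String riakKey = "some-key";                              // so is the key

        IRiakObject stored = bucket.fetch(riakKey).execute();
        byte[] plaintext   = Crypto.decryptWithOldKey(stored.getValue()); // hypothetical helper
        byte[] reEncrypted = Crypto.encryptWithNewKey(plaintext);         // hypothetical helper

        bucket.store(riakKey, reEncrypted).execute();

        // Final step: read the value back and prove the new key decrypts it
        // correctly before the old key is ever removed.
        IRiakObject check = bucket.fetch(riakKey).execute();
        if (!Arrays.equals(Crypto.decryptWithNewKey(check.getValue()), plaintext)) {
            throw new IllegalStateException("verification failed for " + riakKey);
        }
        client.shutdown();
    }
}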
Also, you don't have to do it all in one step. Pull a list of keys down,
break them up, and test your batch job on a small portion. If you're
concerned with dat
Thanks Mark. Yeah, I like the idea of operating locally and verifying
everything before I remove the key.
My fears about data loss mainly pertain to mid-operation failures leading
to a discrepancy between my encrypted values and whatever secondary
method I have of storing their usages. So
At its very core, Riak is meant to provide the alternative benefits of
availability and speed. Transactions are outside the scope of its use case. If
you're still thinking in terms of transactions - and have a justified need
for them - you might consider standing up a relational DB alongside for the
cry
Hello,
I have a (hopefully dumb) question about working with the Java client
and POJOs. I just started tinkering with Riak and have created a
simple Account POJO, happily crammed it into a bucket "test1", and
map/reduced it (hooray). The problem starts when I updated the class
for Account
Hi Michael,
I'm somewhat confused by your question; map/reduce doesn't really have
anything to do with your Java POJO/class.
When using the Riak Java client and storing a POJO, the default
converter (JSONConverter) uses the Jackson JSON library and converts
the instance of your POJO into a JSON
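To make that concrete, this is roughly what ends up in the bucket. The snippet below uses standalone Jackson 2 purely for illustration (the client uses the Jackson it ships with), and the nested Account class is a minimal stand-in for the one discussed in this thread.

import java.io.Serializable;

import com.fasterxml.jackson.databind.ObjectMapper;

public class WhatGetsStored {
    // Minimal stand-in for the Account POJO being discussed.
    public static class Account implements Serializable {
        private String email;
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }

    public static void main(String[] args) throws Exception {
        Account account = new Account();
        account.setEmail("someone@example.com");

        // What the default converter effectively does before the write:
        String json = new ObjectMapper().writeValueAsString(account);
        System.out.println(json);   // {"email":"someone@example.com"}
    }
}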
Hi Roach,
Thanks for taking a moment to give me a hand with this. Let me try to
be a bit clearer about what I am trying to figure out. My first step
is an Account class:
public class Account implements Serializable {
    private String email;
    // accessors used by Jackson when converting to and from JSON
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}
Storing the account via
myBucke
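For reference, a store/fetch round trip with the 1.x Java client looks roughly like the sketch below, assuming the Account class above (with its accessors); the choice of key and the connection details are assumptions.

import com.basho.riak.client.IRiakClient;
import com.basho.riak.client.RiakFactory;
import com.basho.riak.client.bucket.Bucket;

public class StoreAccountSketch {
    public static void main(String[] args) throws Exception {
        IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);
        Bucket myBucket = client.fetchBucket("test1").execute();

        Account account = new Account();
        account.setEmail("someone@example.com");

        // The default JSONConverter turns the POJO into JSON on the way in
        // and back into an Account on the way out.
        myBucket.store("someone@example.com", account).execute();
        Account fetched = myBucket.fetch("someone@example.com", Account.class).execute();
        System.out.println(fetched.getEmail());

        client.shutdown();
    }
}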
It's probably not a big enough use case to justify another piece of
architecture; besides, the distributed nature and availability of Riak are
why it's great for this data and the rest of the app's functionality. I'm
going to try implementing a lightweight transaction with Ruby procs to wrap
the key rotation
Hey Everyone,
We have a five-node, 128-partition cluster running 1.4.2 on Debian.
Is there a doc somewhere that explains how to size max_open_files as it applies
to AAE?
I have max_open_files for eLevelDB set to 3000, as we have about 1500 .sst
files in one VNode's data directory, and th
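For context, the two knobs live in different sections of app.config, and each K/V vnode (and each AAE hashtree) is its own LevelDB instance, so both caps apply per instance rather than per node. A sketch follows, using the values mentioned in this thread and the 1.4 AAE default; treat the numbers as placeholders, not recommendations.

%% app.config sketch -- values are the ones from this thread, not recommendations
{eleveldb, [
    %% applies to every regular (K/V) LevelDB vnode
    {max_open_files, 3000}
]},

{riak_kv, [
    %% AAE hashtrees are separate LevelDB instances with their own cap
    %% (1.4 default shown)
    {anti_entropy_leveldb_opts, [{max_open_files, 20}]}
]}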
Hi Dave,
Just to confirm that the ulimit settings "stuck", could you please run riak
attach and execute the following Erlang snippet?
os:cmd('ulimit -n').
The period is significant. Please exit using Ctrl-C twice.
Thanks!
--
Luke Bakken
CSE
lbak...@basho.com
Michael -
You have something stored in that bucket that isn't the JSON you're
expecting when you run your second map/reduce. As I mentioned, there's
nothing special about how the Java client works; it just serializes
the POJO instance using Jackson.
My suggestion would be to use curl / your browser
hmm … 128 partitions divided by 5 nodes is ~26 vnodes per server. AAE creates a
parallel set of vnodes, so your servers have ~52 vnodes each. 52 x 3,000 is
156,000 files … 156,000 > 65,536 ulimit. Sooner or later 65,536 will be too
small. But ... Now, the primary accounting method in 1.4.2 is memory s
Hi Luke,
Thanks for the fast reply!
Ok, yes, our limit is being inherited (I just raised it to 131072 after our
latest issue a couple of hours ago):
$ riak attach
Remote Shell: Use "Ctrl-C a" to quit. q() or init:stop() will terminate the
riak node.
Erlang R15B01 (erts-5.9.1) [source]
Hi Matthew,
Yes, I *absolutely* agree that the current setting is too high. I was just
hoping to give the nodes far more headroom than I thought they needed to
run. I planned to reduce the limit if I saw memory pressure.
I originally had AAE at the default of 20. We first got th
Ahh, yes, now that makes sense. I see that with @RiakUsermeta or
@RiakTombstone it is possible to filter the results of the MapReduce for
tombstones. Is it possible to add a phase that removes the tombstones
instead of manually filtering the final result?
thanks,
Michael
Matthew,
I forgot to add "thanks" for the spreadsheet! I will go through it tomorrow
(it's 10 PM here).
I have turned off AAE for the time being.
--
Dave Brady
In your mapping function you simply add a qualifier to detect tombstones:
if (values[i].metadata['X-Riak-Deleted'] == 'true')
- Roach
I'm interested to see how 2.0 fixes this. I too have been bitten by the AAE
killing servers problem and have had to turn it off (which is thankfully the
easiest of the AAE config options). It's kind of the antithesis of Riak's
easy-ops proposition when a feature that is difficult to configure can
AAE in 2.0 will have IO rate limiting to keep it from overwhelming disks.
So I had to add @JsonProperty("metadata") for the @RiakUsermeta field to
appear in the serialized JSON being processed by the reduce phase. I
have been using "ejsLog('/tmp/map_reduce.log', JSON.stringify(values));"
to see what is being passed in.
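For anyone following along, the field declaration being described probably looks something like the sketch below. The exact shape is an assumption; the @JsonProperty shown is the Jackson 1.x (org.codehaus) annotation, since that is the Jackson the 1.x client serializes with.

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import org.codehaus.jackson.annotate.JsonProperty;

import com.basho.riak.client.convert.RiakKey;
import com.basho.riak.client.convert.RiakUsermeta;

public class Account implements Serializable {
    @RiakKey
    private String key;        // populated with the object's Riak key by the converter

    @RiakUsermeta
    @JsonProperty("metadata")  // without this, the usermeta map doesn't show up in the JSON
    private Map<String, String> usermeta = new HashMap<String, String>();

    private String email;

    // getters and setters omitted
}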
One last question: the field with @RiakKey is always