Hi everybody,
I'm using riak to store large numbers of FX rates, to allow for deep
backtests of trading strategies. My code seems to be working, except
that I don't get any keys back from 2i range requests.
Here's the code which puts the rate into riak (the rate is itself a
protocol buffers objec
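For reference, a minimal 2i sketch, assuming the riak Python client (2.x API); the bucket, key, and index names here are made up for illustration and are not the poster's actual code. Note that index fields must end in _int or _bin, and that 2i only works on a backend that supports indexes (e.g. eLevelDB); on Bitcask you get indexes_not_supported errors like the one quoted further down in this digest.

    import riak

    client = riak.RiakClient()                 # defaults to localhost
    bucket = client.bucket('fx_rates')         # hypothetical bucket name

    # In the real code this would be the protobuf-serialized rate;
    # a placeholder byte string keeps the sketch self-contained.
    payload = b'\x08\x01'

    # 2i field names must end in _int or _bin.
    obj = bucket.new('EURUSD/1374148800',
                     encoded_data=payload,
                     content_type='application/octet-stream')
    obj.add_index('ts_int', 1374148800)
    obj.store()

    # Range query over the integer index; returns the matching keys.
    keys = bucket.get_index('ts_int', 1374100000, 1374200000)
    print(keys)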
Hi,
I am trying to run an MR script;
What does the error below translate to in English?
Traceback (most recent call last):
File
"/home/ubuntu/workspace/ndkt-scraper/src/parsers/archive/riak_mapreduce_archive.py",
line 188, in
bucket = 'archive_temp'
File
"/home/ubuntu/workspace/ndkt
I got some helpful feedback from Jon Meredith about the errors encountered
during my previous runs:
I don't have a Windows environment available to replicate your work with,
but it looks a bit like the R script is being interpreted as a shell script
rather than as Rscript
You may have some more
Okay. Yokozuna has been targeted to Riak 1.4.0. Please note that the
integration branch name changed to rz-yz-merge-1.4.0 (note the addition
of the rz- prefix and the different version).
https://github.com/basho/yokozuna/blob/master/docs/INSTALL.md#install-from-github
Make sure to do a fresh checkout t
Hello All,
I got this error:
[7/18/13 8:22:24 PM] Vahric MUHTARYAN: 2013-07-18 20:17:37.442 [warning]
<0.121.0>@stanchion_utils:email_available:591 Error occurred trying to
check if the address <<"vah...@doruk.net.tr">> has been registered. Reason:
<<"{error,{indexes_not_supported,riak_kv_bitcask_
On Fri, Jul 12, 2013 at 3:36 AM, rengasamy@gmail.com <
rengasamy@gmail.com> wrote:
> Is it possible to add a node to the cluster only for enabling Riak Control and
> not for other transactions?
>
Unfortunately, not at this point. The node running Riak Control has to be
an active member of th
Per http://docs.basho.com/riak/latest/cookbooks/Backups/, you need not only
Bitcask data but also ring data and configuration.
-John
On Jul 18, 2013, at 1:40 PM, Mark Wagner wrote:
> Here is what I am doing: on OSX
>
> # Install riak on my local OSX development box
> brew install riak
>
>
A single node deployment of the memory backend will certainly work,
but will not be as performant as Redis.
Here are some suggestions for running the memory backend in production:
1. Since you are running a single-node cluster, set your n_val to 1 [1]
(see the sketch below)
2. Make sure you set the max_memory parameter
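A minimal sketch of point 1, assuming the riak Python client (the bucket name is made up for illustration). The max_memory setting from point 2 lives in the backend section of app.config and is not set from the client.

    import riak

    # Assumed: a local single-node Riak with the memory backend enabled.
    client = riak.RiakClient()
    bucket = client.bucket('cache')        # hypothetical bucket name

    # With only one node there is nowhere to place replicas,
    # so drop the replica count for this bucket to 1.
    bucket.set_property('n_val', 1)
    print(bucket.get_property('n_val'))    # -> 1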
Follow the white rabbit:
http://docs.basho.com/riak/latest/cookbooks/Linux-Performance-Tuning/
Most of the recommended parameters are at that link.
HTH,
Guido.
On 18/07/13 19:48, Simon Effenberg wrote:
Sounds like zdbbl.. I'm running 1.3.1, but it started after adding 6 more
nodes to the previously 1
Sounds like zdbbl.. I'm running 1.3.1, but it started after adding 6 more
nodes to the previously 12-node cluster. So maybe it is because of an
18-node cluster?
I'll try the zdbbl stuff. Any other hints would be cool (if the new
kernel parameters are also good for 1.3.1, could you provide them?).
Ch
On Thu, Jul 18, 2013 at 10:38 PM, kpandey wrote:
> Are there known production installations of Riak that use
> riak_kv_memory_backend?
Btw, another alternative is to use the leveldb memory-backend impl
(prevents gc/storage overhead from erlang tables). You'll have to patch up
basho's fork of le
More information in the console.log:
2013-07-18 18:30:18.768 UTC [info]
<0.76.0>@riak_core_sysmon_handler:handle_event:92 monitor busy_dist_port
<0.21558.67> {#Port<0.7283>,'riak@10.47.109.203'}
2013-07-18 18:30:33.760 UTC [info]
<0.76.0>@riak_core_sysmon_handler:handle_event:92 monitor busy_
If what you are describing is happening on 1.4, type riak-admin diag
and see the new recommended kernel parameters. Also, in vm.args,
uncomment the +zdbbl 32768 parameter, since what you are describing is
similar to what happened to us when we upgraded to 1.4.
HTH,
Guido.
On 18/07/13 19:21,
It's more than 30 handoffs sometimes:
Attempting to restart script through sudo -H -u riak
'riak@10.47.109.209' waiting to handoff 6 partitions
'riak@10.47.109.208' waiting to handoff 2 partitions
'riak@10.47.109.207' waiting to handoff 1 partitions
'riak@10.47.109.206' waiting to handoff 14 parti
My ultimate goal is to use Riak as the one NoSQL solution for most of my
needs, like
1) In-memory immutable data (with guaranteed writes and failover, so n=3,
w=quorum, r=1; sketched below)
2) In-memory cache
3) Bitcask with TTL to store immutable session data (n=3, w=quorum, r=1)
4) audit data (n=3, w=1,r=
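A rough sketch of the per-request quorum settings named in items 1 and 3 above, assuming the riak Python client; the bucket and key names are made up. n_val is a bucket property, while w and r can be passed per request; the Bitcask TTL from item 3 is configured in app.config, not from the client.

    import riak

    client = riak.RiakClient()
    sessions = client.bucket('sessions')     # hypothetical bucket name
    sessions.set_property('n_val', 3)        # replicate to three nodes

    # Guaranteed-write style store: wait for a quorum of replicas.
    obj = sessions.new('session-123', data={'user': 'alice'})
    obj.store(w='quorum')

    # Fast read: a single replica answering is enough.
    fetched = sessions.get('session-123', r=1)
    print(fetched.data)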
Hi @list,
I sometimes see logs talking about "hinted_handoff transfer of .. failed
because of TCP recv timeout".
Also, riak-admin transfers shows me many handoffs (is it possible to get some
insight into "how many" handoffs happened through "riak-admin status"?).
- Is it a normal behavior to
Dave,
Since Redis was designed with that use case more in mind, I would guess a single
node of Redis would be faster than a single node of Riak with N=1. If you
still want to run Riak, you'd want to lower the ring size to maybe 8 so you
weren't running 64 vnodes on a single node. This would obviousl
Here is what I am doing: on OSX
# Install riak on my local OSX development box
brew install riak
riak start
riak-admin test
Successfully completed 1 read/write cycle to 'riak@127.0.0.1'
riak stop
# go to the default data directory set up by brew
cd ~/Developer/Cellar/riak/1.3.1-x86_64/libexec/dat
In using riak_kv_memory_backend as a replacement of sorts for Redis or
memcached, is there any serious problem with using a single node and an
n_val of 1? I can’t (yet) afford 5 high-RAM servers for a caching layer,
and was looking to replace our memcached box with a Redis one. In the
interest of r
I have installed Erlang and the R statistics language (R 3.0.1 executables) on my
Windows PC desktop, and I have also installed and compiled Basho Bench on
the Windows PC.
I am able to run the Basho Bench benchmark tests on the Windows PC against the
Riak server on the Linux platform. And then I am trying to gen
Yes, it has similar rules. Nested objects have their fields joined by '_'.
Arrays become repeated field names, which should map to a multi-valued
field. You can use the URL I provided in the last response to see exactly
how field values are extracted.
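For concreteness, a toy flattener written from the rules described above; this is not Yokozuna's extractor code, and the sample document is made up.

    def flatten(value, prefix=""):
        # Nested object fields are joined with '_'; arrays repeat the
        # same field name, which maps to a multi-valued Solr field.
        pairs = []
        if isinstance(value, dict):
            for k, v in value.items():
                pairs.extend(flatten(v, prefix + "_" + k if prefix else k))
        elif isinstance(value, list):
            for v in value:
                pairs.extend(flatten(v, prefix))
        else:
            pairs.append((prefix, value))
        return pairs

    doc = {"quote": {"bid": 1.31, "ask": 1.32}, "tags": ["fx", "spot"]}
    print(flatten(doc))
    # [('quote_bid', 1.31), ('quote_ask', 1.32), ('tags', 'fx'), ('tags', 'spot')]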
On Thu, Jul 18, 2013 at 12:16 PM, Dave M
Does the JSON extractor work in a similar fashion, or does it follow its
own rules? We don’t use XML anywhere (but JSON everywhere). Thanks!
Dave
On Thu, Jul 18, 2013 at 9:31 AM, Ryan Zezeski wrote:
> As Eric said, the XML extractor causes the nested elements to become
> concatenated by an und
Forgot to mention: with N=2, should he be able to run only 4 nodes (rather
than 5) and focus on RAM per node?
I know it's not recommended, but shouldn't N=2 reduce the minimum
recommended number of nodes to 4?
Guido.
On 18/07/13 16:21, Guido Medina wrote:
Since the data he is requiring to store is only "tr
Thanks Jared
Yes I will be running 5 node cluster in production.
Kumar
Since the data he is requiring to store is only "transient", would it
make sense to set N=2 for performance? Or will N=2 have the opposite
effect due to the number of nodes holding such replicas?
Guido.
On 18/07/13 16:15, Jared Morrow wrote:
Kumar,
We have a few customers who use the memory backen
Kumar,
We have a few customers who use the memory backend. The first example I
could find (with the help of our CSE team) uses the memory backend on 8
machines with 12 GB of RAM each.
I know you are just testing right now, but we'd suggest using a 5-node
minimum. With N=3 on a 3-node cluster you c
Are there known production installations of Riak that use
riak_kv_memory_backend? We have a need to store transient data just in
memory ( never hitting persistent store). I'm testing riak on aws with 3
node cluster and looks good so far. Just wanted to find out what kind of
setup people are usin
Jeremiah,
Sorting is broken in protobuffs currently. Unfortunately the fix fell
through the cracks.
https://github.com/basho/riak_search/pull/136
-Z
On Thu, Jul 18, 2013 at 10:11 AM, Jeremiah Peschka <
jeremiah.pesc...@gmail.com> wrote:
> I just confirmed that today I'm getting the correct so
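Until that fix lands, a possible stopgap (based on the observation in this thread that sorting comes back correct over HTTP but not over protobufs) is to issue the sorted query over the HTTP interface. A rough sketch with the riak Python client; the index name, query, and field names are assumptions for illustration only.

    import riak

    # Force the HTTP transport rather than protocol buffers.
    client = riak.RiakClient(protocol='http', http_port=8098)

    results = client.fulltext_search('my_index', 'city:Portland',
                                     sort='creation_dt asc')
    for doc in results['docs']:
        print(doc['id'], doc.get('creation_dt'))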
AH HA! And you have now saved me from going crazy trying to track down
strange collection related behavior.
Ryan Zezeski, you're my hero.
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop
On Thu, Jul 18, 2013 at 7:20
I just confirmed that today I'm getting the correct sorting in the browser
but not in CorrugatedIron. I'm about to start in on a day of working with a
client. Will verify this afternoon.
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer f
Jeremiah,
After a quick glance I don't see anything obvious in the code. I notice
you have a presort defined. By any chance, if you remove the presort, do
you get correct sorting on the creation_dt field?
-Z
On Wed, Jul 17, 2013 at 5:30 PM, Jeremiah Peschka <
jeremiah.pesc...@gmail.com> wro
Is it possible to route a task to a specific worker? I looked at using
priorities but I don't think that is an option/implemented.
- I have 3 celery servers for a total of 55 workers/threads: hostA/13,
hostB/21, and hostC/21 (concurrency is after the /)
- I'm using RabbitMQ
- I'm ETL'ing ~2TB of M
Dave,
I'm currently in the process of re-targeting Yokozuna to 1.4.0 for the 0.8.0
release. I'll ping this thread when the transition is complete.
-Z
On Wed, Jul 17, 2013 at 8:53 PM, Eric Redmond wrote:
> Dave,
>
> Your initial line was correct. Yokozuna is not yet compatible with 1.4.
>
> Eric
Hi!
I encountered a problem while rebuilding src.rpm published at
http://docs.basho.com/riak/latest/downloads/ for RedHat 6.4:
1. it can't be rebuilt at all without manual intervention (e.g. by
some package build system);
2. there are no 'Requires' and 'BuildRequires' tags in the RPM spec file
at al
Oops, my bad, wrong list. :)
On Wed, Jul 17, 2013 at 4:54 PM, Justin wrote:
> Is it possible to route a task to a specific worker? I looked at using
> priorities but I don't think that is an option/implemented.
>
> - I have 3 celery servers for a total of 55 workers/threads: hostA/13,
> hostB/2
Is it possible to add a node to the cluster only for enabling Riak Control and
not for other transactions?
As Eric said, the XML extractor causes the nested elements to become
concatenated by an underscore. "Extractor" is a Yokozuna term. It is the
process by which a Riak Object is mapped to a Solr document. In the case
of a Riak Object whose value is XML, the XML is flattened by a)
concatenating nest
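To make that concrete, a toy illustration written from the rule above; this is my reading of the flattening, not the extractor's actual code, and the sample document is made up.

    import xml.etree.ElementTree as ET

    def flatten_xml(elem, prefix=""):
        # Nested element names are concatenated with '_'.
        name = prefix + "_" + elem.tag if prefix else elem.tag
        pairs = []
        children = list(elem)
        if children:
            for child in children:
                pairs.extend(flatten_xml(child, name))
        elif elem.text and elem.text.strip():
            pairs.append((name, elem.text.strip()))
        return pairs

    doc = ET.fromstring("<rate><pair>EURUSD</pair><quote><bid>1.31</bid></quote></rate>")
    print(flatten_xml(doc))
    # [('rate_pair', 'EURUSD'), ('rate_quote_bid', '1.31')]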