It worked.
Thanks for all the answers!
On 19 Apr 2017 at 2:12 AM, "Magnus Kessler" wrote:
> On 19 April 2017 at 00:41, Cesar Stuardo wrote:
>
>> and apparently, it is for Riak TS. Is that what you installed?
>
>
> That's correct. Riak-shell is currently only offered as part of Riak TS.
>
On 19 April 2017 at 00:41, Cesar Stuardo wrote:
> and apparently, it is for Riak TS. Is that what you installed?
That's correct. Riak-shell is currently only offered as part of Riak TS.
@Wagner, Riak KV is a key/value store with a REST API and client
implementations in a number of supported languages.
CC: riak-users@lists.basho.com
Subject: Re: Re:
haha, yeah, here it is.
http://docs.basho.com/riak/ts/1.4.0/using/riakshell/
Enjoy!
From: Wagner Rodrigues [wagner.b.rodrig...@gmail.com]
Sent: Tuesday, 18 April 2017 18:36
To: Cesar Stuardo
CC: riak-users@lists.basho.com
I see, then I suppose you first need to start the shell. What about
`riak attach`? Does that work?
From: Wagner Rodrigues [wagner.b.rodrig...@gmail.com]
Sent: Tuesday, 18 April 2017 18:36
To: Cesar Stuardo
CC: riak-users@lists.basho.com
Subject: Re:
Hi Markus,
You may want to subscribe to the mailing list to avoid having your messages
held in the moderation queue. Only subscribed members can post directly to
this list. Also, please always reply to the list address, not to individual
subscribers, to keep the discussion public.
With leveldb you can use the special $bucket index. You can also stream the
keys and paginate them, meaning you can get them in smaller lumps; hopefully
this will appear faster and avoid the timeout you're seeing.
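For illustration, a paginated listing over the HTTP API might look like this
(a sketch: host, port, and bucket name are placeholders; max_results and
continuation are the standard 2i pagination parameters):

# First page of up to 1000 keys via the special $bucket index
curl 'http://127.0.0.1:8098/buckets/mybucket/index/$bucket/_?max_results=1000'
# The response includes a "continuation" token; pass it back for the next page
curl 'http://127.0.0.1:8098/buckets/mybucket/index/$bucket/_?max_results=1000&continuation=<token>'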
On 29 Jan 2016, at 14:03, Markus Geck wrote:
> Yes, sorry I forgot to mention
It looks like this issue:
https://github.com/basho/yokozuna/issues/320
I tried to set
* maxThreads to 150
* Acceptors to 10
* lowResourcesMaxIdleTime to 5
in /usr/lib/riak/lib/yokozuna/priv/solr/etc/jetty.xml, as recommended in
https://github.com/basho/yokozuna/issues/330
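For reference, the corresponding jetty.xml fragments would look roughly like
this (a sketch only; exact element placement depends on the Jetty version
bundled with your Solr):

<Set name="maxThreads">150</Set>
<Set name="Acceptors">10</Set>
<Set name="lowResourcesMaxIdleTime">5</Set>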
Hi Hao --
As Alex said, the error you're receiving is generally related to the open file
limits set by your OS configuration. Riak requires a large number of
available open file handles. You can find information on how to raise your
limits here: http://docs.basho.com/riak/latest/ops/tuning/open-files-li
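As a sketch, on Linux the limit is usually raised in /etc/security/limits.conf
for the user Riak runs as (user name and value below are illustrative; see the
docs page above for platform specifics):

riak soft nofile 65536
riak hard nofile 65536

Then verify from a shell running as that user with `ulimit -n`.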
A few things:
1. Can you provide the output from `riak-admin member-status`?
2. Using five VMs per physical node likely means that your bottleneck is the
host running the VMs, not Riak.
3. Using a shared disk via iSCSI for storage is most certainly also a
bottleneck.
A benchmark that scales linearly
Great thanks to all of you!
I'm very sorry for the late response!
Is it polite to reply to everyone in one email? (If not, I ask for forgiveness.)
To Alexander:
There are 2 physical machines in our cluster, and 5 Riak VMs on each machine.
Each machine has two physical CPUs (12 cores in total)
Mike,
Your application should speak to Riak using the official client library (
github.com/basho/riak-erlang-client), not directly. That said, the option
you want is -proto_dist inet_tls, which was changed in R15 (or earlier, I
don't recall).
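For illustration, vm.args entries along these lines enable TLS distribution (a
sketch: certificate paths are placeholders, and the -ssl_dist_opt flags come
from Erlang/OTP's TLS distribution support rather than anything Riak-specific):

## Use TLS for Erlang distribution (paths below are placeholders)
-proto_dist inet_tls
-ssl_dist_opt server_certfile "/etc/riak/cert.pem"
-ssl_dist_opt server_keyfile "/etc/riak/key.pem"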
On Wed, Dec 4, 2013 at 7:18 AM, Игонин Михаил wrote:
On Monday 02 December 2013 18:10:32 Ryan Zezeski wrote:
> Ivan,
>
> What makes you think the index is damaged?
Only the mention of "badrec", which I would assume means a damaged record.
> From what I can see this is a
> bug in the code assuming that a #ho_acc record is always returned but in
>
Also, in the meantime, adding +swt very_low to your vm.args can help
lessen the incidence of this issue.
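That is, in vm.args (a one-line addition; +swt sets the Erlang VM's scheduler
wakeup threshold):

## Wake sleeping schedulers more readily
+swt very_low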
On Tue, Mar 19, 2013 at 7:41 AM, Ingo Rockel
wrote:
> and the riak-users mailer-daemon should really set a "reply-to"...
>
> Original Message
> Subject: Re: riak cluster s
It would be sufficient if I could have two eleveldb instances in
riak_kv_multi_backend and if I could drop one of them with
riak_kv_eleveldb_backend:drop/1 while the other instance is in use. After another
interval I would drop the second instance etc.
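Sketched as an app.config fragment (backend names and data_root paths here are
illustrative, not from the original message):

{riak_kv, [
    {storage_backend, riak_kv_multi_backend},
    {multi_backend_default, <<"level_a">>},
    %% Two eleveldb instances; either can be dropped while the other serves data
    {multi_backend, [
        {<<"level_a">>, riak_kv_eleveldb_backend, [{data_root, "/var/lib/riak/level_a"}]},
        {<<"level_b">>, riak_kv_eleveldb_backend, [{data_root, "/var/lib/riak/level_b"}]}
    ]}
]}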
Jan
-- Original message --
> From: Matthew Von-Maszewski
> Date: 22. 10. 2012
> Subject: Re: Riak performance problems when LevelDB database grows beyond 16GB
> Jan,
>
> ...
> The next question from me is whether the drive / disk array problems are your
> only problem at this point.
You are right! Riak was killed by the oom killer on all nodes except the one I
was looking at.
Oct 14 18:34:01 gr-node03 kernel: [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
Oct 14 18:34:01 gr-node03 kernel: [31808] 106 31808 5178811 3884730 0 0 0
://janevangelista.rajce.idnes.cz/nastenka/#4Riak_2K_2.1RC2_3d_edited.jpg ).
All the nodes crashed silently; there is nothing interesting in the Riak logs.
Thanks, Jan
-- Original message --
From: Evan Vigil-McClanahan
Date: 12. 10. 2012
Subject: Re: Re: Riak performance problems when LevelDB
Hi there, Jan,
The lsof issue is that max_open_files is per backend, IIRC, so if
you're maxed out you'll see vnode count * max_open_files.
I think on the second try you may have set the cache too high. I'd
drop it back to 8 or 16 MB, and possibly raise the open files limit a bit
more, but you don't see
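For reference, a sketch of the eleveldb section of app.config with settings in
that range (values are illustrative; cache_size is in bytes):

{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"},
    {max_open_files, 150},
    {cache_size, 8388608}  %% 8 MB
]}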
> Can you attach the eleveldb portion of your app.config file?
> Configuration problems, especially max_open_files being too low, can
> often cause issues like this.
>
> If it isn't sensitive, the whole app.config and vm.args files are also
> often helpful.
Hello Evan,
Thanks for responding.
Hi,
You can use
riak-admin reip
to rename a node from what it was before your crash to the new node name after
the crash. That way the other nodes will know that the data has "moved". Run
this command on one of the live nodes before the new node is started.
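A sketch of the invocation, with placeholder node names:

riak-admin reip riak@old-hostname riak@new-hostname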
Documentation here: https://wi
And, for posterity, here's the complete script that I use to rebuild Riak
locally:
#!/bin/bash
echo 'Making deps';
make deps;
echo 'Making all';
make all;
# Replace each devrel node's copy of every dependency with a symlink back
# into deps/, so rebuilt deps are picked up without regenerating the release.
for i in 1 2 3 4; do
  for dep in deps/*; do
    rm -rf dev/dev$i/lib/`basename $dep`-* && ln -sf `pwd -P`/$dep dev/dev$i/lib;
  done;
done;
Cool! Thanks for your help on this. Now I can keep breaking future versions
of Riak.
---
Jeremiah Peschka
Managing Director, Brent Ozar PLF, LLC
On Thu, Feb 16, 2012 at 7:30 AM, Sean Cribbs wrote:
> Shoot. That make target should probably have its dependencies changed so
> it doesn't rebuild devrel unless necessary.
Shoot. That make target should probably have its dependencies changed so it
doesn't rebuild devrel unless necessary. Anyway, here's an equivalent bash
script that does what you want. Run it from the top-level directory of your
clone.
for i in 1 2 3 4; do for dep in deps/*; do rm -rf dev/dev$i/lib/`basename $dep`-* && ln -sf `pwd -P`/$dep dev/dev$i/lib; done; done
I think I am tired or something, `make stagedevrel` produces an error:
$ make stagedevrel
mkdir -p dev
(cd rel && ../rebar generate target_dir=../dev/dev1
overlay_vars=vars/dev1_vars.config)
==> rel (generate)
ERROR: Release target directory
"/Users/jeremiah/Projects/riak/rel/../dev/dev1" already exists
Thanks!
I assume, too, that since Riak Control is now a requirement I should do
`make deps`, `make all`, `make stagedevrel`?
---
Jeremiah Peschka
Managing Director, Brent Ozar PLF, LLC
On Wed, Feb 15, 2012 at 10:31 PM, Sean Cribbs wrote:
> Jeremiah,
>
> `make stagedevrel` is what you want.
>
Jeremiah,
`make stagedevrel` is what you want.
On Thu, Feb 16, 2012 at 1:23 AM, Jeremiah Peschka <
jeremiah.pesc...@gmail.com> wrote:
> Let's say that I have a copy of the riak 1.0.0 source in ~/src/basho/riak
> and that I've built a local 4-node cluster using `make devrel`. I've
> gone through
>
> Brian Rowe wrote:
> > IIRC the best of the kludges was to add an exclude directive in the
> > reltool.config.
>
> Can anyone elaborate on this? I'm looking at the reltool docs but I'm
> driving blind here (Erlang newbie).
>
> In reltool.config
{app, rabbit_common, [{incl_cond, exclude}]}
I increased the mapreduce timeout to 10 minutes and the system has been
running for about a day and a half with no flow_timeout errors and also none
of the nodes going down. The node crashes seem somehow related to the
MapReduce operations timing out.
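For illustration, the timeout can be set per job in the MapReduce request
itself (a sketch over the HTTP API: host and bucket name are placeholders,
Riak.mapValuesJson is one of the built-in JavaScript map functions, and
600000 ms matches the ten minutes mentioned above):

curl -X POST http://127.0.0.1:8098/mapred \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "mybucket",
       "query": [{"map": {"language": "javascript", "name": "Riak.mapValuesJson"}}],
       "timeout": 600000}'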
I did a search on the machine and there were
As a follow-up to my earlier post, I just reran all 208 MapReduce jobs, and
this time I got four timeouts. This time, riak03 was the culprit (rather than
riak02).
The first timeout wrote to the error log after seven seconds. The second and
third wrote to the error log after five seconds.
Indeed. contrib.basho.com doesn't have any Search-specific functions
at the moment, but we definitely want to add some if people have
anything to share. I'm sure a pre-commit hook that checks the
content-type of to-be-indexed data would be hugely useful to a lot of
users.
Mark
On Fri, Jan 21, 201
> find my latest tweets.
>
> Is that what would happen?
> -- Forwarded message --
> From: "Rusty Klophaus"
> Date: Jan 16, 2011 8:42 AM
> Subject: Re: Re: too_many_results error
> To: "Eric Moritz"
> Cc: "riak-users@lists.bas
Hi Eric,
This is a failsafe that is applied prior to the 'rows' parameter.
It is applied separately to provide a hard system limit, intended to allow
the cluster administrator to guard against a malicious user, a client
application that accidentally requests too much data, etc.
Best,
Rusty
Hi Dan,
I installed Mercurial on my system and set the environment:
"setenv LANG C"
It worked.
Thanks.
Tetsuya
- Original Message -
>Date: Tue, 28 Sep 2010 18:18:47 -0700
>Subject: Re: riak-erlang-client install error
>From: Dan Reverri
>To: Tetsuya
>Cc: riak-users@lists.basho.com
Do you have reference values from using an "in memory" storage backend, for
example, in order to confirm that the performance limit is related to the
disk backend?
wde
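A sketch of that test setup in app.config (the memory backend holds everything
in RAM and loses it on restart, so it is only suitable for comparison runs):

{riak_kv, [{storage_backend, riak_kv_memory_backend}]}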
>A couple of quick questions for you Karsten that should help us get an idea
>of what kind of issues you might be having.
>
>I have the same question. According to the comments in riak_claim.erl, the
>claims will be arranged so that partition sequences of length at most
>target_n_val will have no repeated nodes, if possible, but there will
>be cases where there may be repeats. Is the sequence of N partitions take