Thanks Hector
I actually did the same steps as you mentioned, except that I run Riak-CS in a
real virtual machine.
When I looked into the Riak-CS error log while running riak-cs-access
flush, an error was thrown (other file operations are fine):
*2013-12-11 10:14:56.442 [error] <0.12276.0>
Hi Matthew,
it took around 11 hours for the first node to finish the compaction. The
second node has already been running for 12 hours and is still compacting.
Besides that, I wonder why the fsm_put time on the new 1.4.2 host is
much higher (after the compaction) than on an old 1.3.1 host (both are
runn
I need to ask the other developers as they arrive for the new day. This does
not make sense to me.
How many nodes do you have? How much RAM do you have in each node? What are
your settings for max_open_files and cache_size in the app.config file? Maybe
this is as simple as leveldb using too much RAM.
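For reference, both knobs live in the eleveldb section of app.config; a sketch
with placeholder values, not recommendations:

{eleveldb, [
    {data_root, "/var/lib/riak/leveldb"},  %% path is install-specific
    {max_open_files, 100},                 %% file handles held open per vnode
    {cache_size, 8388608}                  %% block cache per vnode, in bytes (8 MB here)
]}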
Hi Matthew
Memory: 23999 MB
ring_creation_size, 256
max_open_files, 100
riak-admin status:
memory_total : 276001360
memory_processes : 191506322
memory_processes_used : 191439568
memory_system : 84495038
memory_atom : 686993
memory_atom_used : 686560
memory_binary : 21965352
memory_code : 11332
Ok, I am now suspecting that your servers are either using swap space (which is
slow) or your leveldb file cache is thrashing (opening and closing multiple
files per request).
How many servers do you have and do you use Riak's active anti-entropy feature?
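(For reference, AAE is toggled in the riak_kv section of app.config; a sketch:)

{riak_kv, [
    {anti_entropy, {on, []}}   %% {off, []} disables active anti-entropy
]}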
I am going to plug all of this into a
Sorry, I forgot half of it..
seffenberg@kriak46-1:~$ free -m
             total       used       free     shared    buffers     cached
Mem:         23999      23759        239          0        184      16183
-/+ buffers/cache:       7391      16607
Swap:            0          0          0
We have
Also some side notes:
"top" is even better on new 1.4.2 than on 1.3.1 machines.. IO
utilization of disk is mostly the same (round about 33%)..
but
95th percentile of response time for get (avg over all nodes):
before upgrade: 29ms
after upgrade: almost the same
95th percentile of response t
Oh, and at the moment they are waiting for some handoffs, and I see
errors in the logfiles:
2013-12-11 13:41:47.948 UTC [error]
<0.7157.24>@riak_core_handoff_sender:start_fold:269 hinted_handoff
transfer of riak_kv_vnode from 'riak@10.46.109.202'
468137243207554840987117797979434404733540892672
but I
The real Riak developers have arrived on-line for the day. They are telling me
that all of your problems are likely due to the extended upgrade times, and yes
there is a known issue with handoff between 1.3 and 1.4. They also say
everything should calm down after all nodes are upgraded.
I wil
Hi Matthew,
thanks for all your time and work.. see inline for answers..
On Wed, 11 Dec 2013 09:17:32 -0500
Matthew Von-Maszewski wrote:
> The real Riak developers have arrived on-line for the day. They are telling
> me that all of your problems are likely due to the extended upgrade times,
Gavin,
After some more digging, it looks like the issue you're facing is an
open issue against Riak CS:
https://github.com/basho/riak_cs/issues/746
A pull request for the issue has been supplied and will be in the next release:
https://github.com/basho/riak_cs/pull/747
--
Hector
On Wed, Dec
Hi All,
Is it at all possible to have default bucket props be based on the
bucket names? I ask because I'm trying to use Riak to store key/value
data and use the buckets to separate the data based on day and for a
couple of different projects, for example:
buckets/project-year-month-day/key
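(For illustration only: setting custom props on one such bucket with the
riak-erlang-client would look roughly like this; the bucket name is invented.)

%% Pid from riakc_pb_socket:start_link("127.0.0.1", 8087)
riakc_pb_socket:set_bucket(Pid, <<"project-2013-12-11">>, [{allow_mult, true}]).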
Hi Matthew,
On Wed, 11 Dec 2013 18:38:49 +0100
Matthew Von-Maszewski wrote:
> Simon,
>
> I have plugged your various values into the attached spreadsheet. I assumed
> a vnode count to allow for one of your twelve servers to die (256 ring size /
> 11 servers).
Great, thanks!
>
> The spread
An additional thought: if increasing max_open_files does NOT help, try
removing +S 4:4 from the vm.args. Typically the +S setting helps leveldb, but one
other user mentioned that the new sorted 2i queries needed more CPU in the
Erlang layer.
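Concretely, a sketch of the vm.args change:

## in vm.args, remove (or comment out) this line, so the Erlang VM
## falls back to one scheduler per logical core:
+S 4:4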
Summary:
- try increasing max_open_files to 170
- hel
Hi Bryce,
You generally want to avoid creating too many buckets with custom bucket
properties, as this gets stored in Riak's ring data. A large number of
custom buckets will degrade cluster performance.
Is there any reason why you don't just create two buckets, with the desired
LWW/allow_mult=tru
I will do..
but one other thing:
$ watch 'sudo riak-admin status | grep put_fsm'
Every 10.0s: sudo riak-admin status | grep put_fsm
node_put_fsm_time_mean : 2208050
node_put_fsm_time_median : 39231
node_put_fsm_time_95 : 17400382
node_put_fsm_time_99 : 50965752
node_put_fsm_time_100 : 59537762
node_put_fsm_active : 5
node_put_fsm_active_
The real Riak developers have suggested this might be your problem with stats
being stuck:
https://github.com/basho/riak_core/pull/467
The fix is included in the upcoming 1.4.4 maintenance release (which is overdue
so I am not going to bother guessing when it will actually arrive).
Matthew
On
Hi Bryce,
Unfortunately Riak 2.0 final is not yet available, but I would be curious to
know if the upcoming Bucket Types [1] [2] feature would help you model your
problem. You could create a Bucket Type for your allow_mult=true projects
and another for lww=true. So you would have something like (ex
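A sketch of what that might look like via riak-admin once 2.0 ships (the type
names here are invented):

riak-admin bucket-type create mult_projects '{"props":{"allow_mult":true}}'
riak-admin bucket-type activate mult_projects
riak-admin bucket-type create lww_projects '{"props":{"last_write_wins":true}}'
riak-admin bucket-type activate lww_projects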
Based on my understanding of the Bucket Types feature - yes, this feature
would solve the problem.
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop
On Wed, Dec 11, 2013 at 12:39 PM, Jordan West wrote:
> Hi Bryce,
>
So I think I have no real chance to get good numbers. I can see a
little bit through the app monitoring, but I'm not sure whether I can see
real differences from the 100 -> 170 open_files increase.
I will try to change the value on the already migrated nodes as well to
see if this improves the stuff I
One of the core developers says that the following line should stop the stats
process. It will then be automatically restarted, without the stuck data.
exit(whereis(riak_core_stat_calc_sup), kill), profit().
Matthew
On Dec 11, 2013, at 4:50 PM, Simon Effenberg wrote:
> So I think I have no rea
Cool..
it gave me an exception:
** exception error: undefined shell command profit/0
but it worked and now I have new data.. thanks a lot!
Cheers
Simon
On Wed, 11 Dec 2013 17:05:29 -0500
Matthew Von-Maszewski wrote:
> One of the core developers says that the following line should stop the
>
> Can you please try running the following command from within the dev
> directory:
>
> $ ./dev1/bin/riak ping
>
> When I run it locally using your exact configs (downloaded from
> pastebin), I see:
>
> $ ./dev1/bin/riak ping
> Node 'dev1@127.0.0.1\r' not responding to pings.
>
> (Note the \r)
>
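If the stray carriage return is the culprit, stripping CRs from vm.args should
fix the node name; a sketch, assuming the default dev layout:

$ tr -d '\r' < dev1/etc/vm.args > /tmp/vm.args && mv /tmp/vm.args dev1/etc/vm.args

and then restart the node.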
Hi Georgio,
There are many possible ways to do something like this. Riak CS in
particular chunks large files into immutable data blocks, and has manifests
pointing to those blocks to track versions of files. Manifests and blocks
are each stored in their own riak object. There are some tricks aroun
Thanks Hector
I got lucky today: after I tried RiakCS-1.4.3, I finally got storage and
access stat data, although the error message is still there.
I noticed that if I configure the storage schedule to 0600 at 05:00, it won't
execute one hour later even if I restart riak-cs, but will execute after one
d
Hi all, I'm new to Riak. There are four disk partitions on our server, and
every partition is 1TB, so I want to know whether Riak supports using multiple
disk partitions and how to configure it.
Thanks in advance!
--
If you don't learn, you don't know. (不学习，不知道)
Hi Riak users,
I want to start checking out Riak 2, and for that I need the new Java client.
I downloaded it from Git, but it will not build.
It seems to be missing a few dependencies (one of them was protobuf, which I
actually downloaded and built, but it did not sync).
Is there anywhere I can downlo
Hi Shimon,
As noted in the README, the new version of the Java client (v2.0) is a
work in progress and not yet usable; while the core is mostly complete,
there's currently no user API. Work on that is progressing, and we
hope to have a release candidate available at the end of the month.
The curre