Hi all, I am new to this list. Thanks for taking the time to read my
questions! I just want to know if the data throughput I am seeing is
expected for the bitcask backend or if it is too low.
I am doing the preliminary feasibility study to decide if we should
implement a Riak data store. Our appli
eam.smp jumping to 350-550 while watching %CPU
under top. When I was seeing slower throughput, beam.smp was using much less
CPU.
Kind regards,
-Matt
On Wed, Apr 3, 2013 at 7:20 AM, Reid Draper wrote:
> inline:
>
>
> On Apr 2, 2013, at 6:48 PM, Matthew MacClary <
> maccl...@lifetim
I am measuring throughput by the wall clock time needed to move a few gigs
of data into Riak. I have glanced at iostat, but I was not collecting data
from that tool at this point.
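For reference, the wall-clock number converts to a throughput figure like this (a sketch with made-up numbers, not my actual measurements):

```shell
# Hypothetical values: 3 GiB moved into Riak in 120 s of wall-clock time.
BYTES=$((3 * 1024 * 1024 * 1024))
ELAPSED=120
# MB/s using 1 MB = 1048576 bytes.
awk -v b="$BYTES" -v s="$ELAPSED" 'BEGIN { printf "%.1f MB/s\n", b / s / 1048576 }'
```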
-Matt
On Thu, Apr 4, 2013 at 2:45 PM, Reid Draper wrote:
>
> On Apr 4, 2013, at 4:14 PM, Matthew Ma
ly did testing with < 10 KB documents; my tests indicate that PBC
> is twice as fast as HTTP in almost all cases.
>
> Shuhao
>
>
> On 13-04-04 04:14 PM, Matthew MacClary wrote:
>
>> Thanks for the feedback. I made two changes to my test setup and saw
>> better
Is anyone using btrfs file system with Riak? That would be another way to
use multiple disk partitions.
-Matt
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Greetings all. We are working on our product with embedded Riak. One
question that came up is what does Riak do when/if we start seeing this in
the logs:
system_memory_high_watermark
Is this just informational, or does Riak act on this alarm in some way to
try to reduce its own memory use?
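For context, that alarm is raised by Erlang/OTP's os_mon application (the memsup process) when allocated memory passes a configurable percentage of total system memory, 80% by default. A quick way to inspect the threshold and current usage from `riak attach` might be (a sketch using the standard OTP memsup calls):

```erlang
%% Run from the console you get with `riak attach`.
memsup:get_sysmem_high_watermark().  %% alarm threshold as a percent, default 80
memsup:get_memory_data().            %% {TotalBytes, AllocatedBytes, WorstPid}
```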
Thank
I have the same use case as Massimiliano. We are using the java client and
our app runs on the same servers as the Riak cluster. We have found that
connecting to the Riak instance running on localhost provides the best
performance. It would be nice if the cluster client could be told to prefer
one
Hi everyone, we are running Riak 1.4.1 on RHEL 6.2 using bitcask. We are
using protobufs with the Java client, and our binary objects are typically
a few hundred KB in size. I have noticed a persistent anomaly with Riak
reads and writes. It seems like often, maybe 0.5% of the time, writing to
Riak
ing the `node_get_fsm_objsize_100` statistic.
>
> Best regards,
>
> Christian
>
>
>
> On Wed, Mar 12, 2014 at 5:43 AM, Matthew MacClary <
> maccl...@lifetime.oregonstate.edu> wrote:
>
>> Hi everyone, we are running Riak 1.4.1 on RHEL 6.2 using bitcask. We ar
lt, the client does retry after
> receiving the timeout error message which is the behavior you're seeing.
>
> - Roach
> On Mar 12, 2014 7:42 AM, "Matthew MacClary" <
> maccl...@lifetime.oregonstate.edu> wrote:
>
>> Thanks for the suggestion Christian. Rig
I thought I would share part of one of our post install scripts with the
community to get feedback. Our goal is to avoid big merges that affect the
whole cluster at once. We set the merge trigger and merge threshold to the
same value, which should result in more frequent but smaller-scoped merges.
T
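For concreteness, this is the kind of change our script makes in the `bitcask` section of app.config. The values here are illustrative, not a recommendation; `frag_merge_trigger`/`frag_threshold` and `dead_bytes_merge_trigger`/`dead_bytes_threshold` are the standard Bitcask merge knobs:

```erlang
%% app.config, bitcask section (illustrative values, sketch only)
{bitcask, [
    {data_root, "/var/lib/riak/bitcask"},
    %% Trigger and threshold set equal, so merges fire more often
    %% but each merge touches fewer files.
    {frag_merge_trigger, 40},               %% percent fragmentation
    {frag_threshold, 40},
    {dead_bytes_merge_trigger, 134217728},  %% 128 MB
    {dead_bytes_threshold, 134217728}
]}
```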
I have a persistent issue I am trying to diagnose. In our use of Riak we
have multiple data creators writing into a 7 node cluster. The value size
is a bit large at around 2 MB. The behavior I am seeing is that if I delete
all data out of bitcask, then test performance I get fast writes. As I keep
d
or
> your Riak data volumes:
>
> cat /sys/block/sd*/queue/scheduler
> noop anticipatory deadline [cfq]
>
> * Increase +zdbbl in /etc/riak/vm.args to 96000
>
> Thanks
> --
> Luke Bakken
> CSE
> lbak...@basho.com
>
>
> On Mon, Apr 14, 2014 at 2:33 PM, Matthew M
Hi all, a Riak CS user named Toby started this discussion about write
performance. I am seeing the exact same behavior in terms of idle CPUs,
network, and disks, but low throughput. Toby, do you happen to have any
follow up about settings to improve the raw Riak throughput and/or Riak CS
throughput?