Hi, Jan. Your description of the behaviour you're seeing below is
frequently the result of slow access times to data on disk due to low
spindle count for a given data set. Can you tell me the hardware
specifications of the disks in each of the machines in the Riak ring?
Primarily the number of d
Hi, Dmitry. There are some gaps in the information you included here;
filling them in should help clarify what's going on, so I'm going to rattle
off some questions for clarification.
Is your test driver only making requests of a single EC2 instance? Or are
you querying all 7 nodes directly in some sort of
You can use any of the various VIP implementations, but we don't recommend
VRRP behaviour[1] for the VIP because you'll lose the benefit of spreading
client query load across all nodes in the ring. For the plain HTTP client
interface, haproxy, squid, varnish, nginx, lighttpd, and even Apache can be
used in a
va
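As a sketch, a minimal haproxy stanza for spreading HTTP client load across
the ring might look like the following (node addresses, the port, and the
/ping health check are illustrative assumptions on my part, not taken from
this thread):

```
# illustrative haproxy config: round-robin Riak's HTTP interface
listen riak_http
    bind 0.0.0.0:8098
    balance roundrobin
    option httpchk GET /ping
    server riak1 10.0.0.1:8098 check
    server riak2 10.0.0.2:8098 check
    server riak3 10.0.0.3:8098 check
```

Clients then point at the VIP/load balancer address instead of any single
node, so no one node becomes a query hotspot.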
;"}}.
{port_cleanup_script, "gmake -C c_src clean"}.
I'm still looking for a proper fix for this that doesn't involve autoconf.
--Ryan
On Thu, May 13, 2010 at 8:01 PM, kg9020 wrote:
> Ryan,
>
> Thank you for taking time to look at this, here is the outpu
Which hardware platform are you running on? I'm guessing PowerPC. Bus
errors pretty much don't happen on x86 platforms any more. If you haven't
filed a bug with the full output of the error, please do so. We can't do
much for you without more detailed info.
--Ryan
On Sun, May 16, 2010 at 2:12
Can you mail the list the output of the attached test Makefile? If the
builtin gmake CURDIR variable isn't being set, there isn't much we can do
for you, I'm afraid. It means that your gmake build is woefully broken, or
something somewhere is managing to set it to an empty string.
--Ryan
On Wed
For some reason the standard GNU make CURDIR isn't being set. Are you sure
you're using a GNU make? What does gmake --version return?
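A quick standalone way to check whether CURDIR is being set at all (my own
sketch, not the test Makefile attached in this thread; substitute gmake for
make on Solaris/BSD):

```shell
# Write a one-rule makefile that just prints the builtin CURDIR variable.
printf 'all:\n\t@echo "CURDIR=$(CURDIR)"\n' > /tmp/curdir-test.mk
# GNU make sets CURDIR to the working directory; an empty value here
# reproduces the broken behaviour discussed in the thread.
make -f /tmp/curdir-test.mk
```

If this prints "CURDIR=" with nothing after it, the build environment is
clobbering the variable.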
--Ryan
On Tue, May 11, 2010 at 9:58 PM, kg9020 wrote:
> Hello,
>
> Update here is the error
> Running make -C c_src
> tar -xzf nsprpub-4.8.tar.gz
> (cd /nsprpu
By "not yet a clustered" installation do you mean that each of the 4
OpenSolaris nodes isn't communicating with the others?
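If they aren't yet, joining them into a single ring is done from each
additional node; a sketch (the node name is a placeholder, and the exact
invocation assumes the riak-admin tool shipped with releases of this era):

```
# On each node that should join the ring, from the Riak install dir:
bin/riak-admin join riak@node1.example.com
```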
--Ryan
On Mon, May 10, 2010 at 2:40 PM, Stephan Maka wrote:
> Mårten Gustafson wrote:
> > It'd be interesting to see what numbers you get if your script sets R
> > & W = 1
A couple of quick questions for you, Karsten, that should help us get an
idea of what kind of issues you might be having.
How many physical hosts are you running the four OpenSolaris virtuals on?
If they're all running on the same host and you don't have a pretty
substantial RAID array backing thei
I can't manage to duplicate the problem you're having with the inotify.sock.
Will the tarball build after a "make distclean rel"?
--Ryan
On Fri, Apr 23, 2010 at 10:40 AM, Matthew Pflueger <
matthew.pflue...@gmail.com> wrote:
> http://hg.basho.com/riak/get/riak-0.10.tar.gz
>
> Yeah, I disabled
Any
> > other suggestions?
> >
> > --Matthew
> >
> >
> >
> > On Fri, Apr 23, 2010 at 11:56, Ryan Tilder wrote:
> >> Sounds like you have Mercurial's inotify extension enabled. hg
> showconfig
> >> should show it. Workaround for now
Sounds like you have Mercurial's inotify extension enabled. hg showconfig
should show it. The workaround for now is to follow the directions at the
link below for disabling the extension in the riak repo. In
riak/.hg/hgrc:
[extensions]
inotify = !
http://mercurial.selenic.com/wiki/UsingEx
You have two classes[1] of access control for Riak:
- other Riak nodes in the ring
- clients making use of the Riak ring
For both access groups, the settings you want are in riak/etc/app.config.
The config directives you care about for client access all end in "_ip" and
"_port": web_ip, web_port
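As a sketch, the client-facing portion of riak/etc/app.config looks
something like this (the bind address and port shown are illustrative
values from memory of that era's defaults, not taken from this thread):

```
%% riak/etc/app.config (excerpt; illustrative values)
{riak, [
    %% HTTP client interface: bind address and port
    {web_ip, "127.0.0.1"},
    {web_port, 8098}
]}.
```

Binding web_ip to a non-loopback address is what makes the node reachable
by external clients.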