maybe a good start is to share the PB client object and only create a
bucket per request; you will save a few steps on client configuration.
have you tried balancing requests to the cluster and distributing them over all nodes?
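something along these lines is what I mean; just a rough sketch against the 1.x Java client API (bucket name, key and host here are only placeholders):

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.IRiakObject;
    import com.basho.riak.client.RiakException;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.bucket.Bucket;

    public class SharedClientSketch {
        public static void main(String[] args) throws RiakException {
            // one PB client, created once and shared by all worker threads
            IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);

            // per request/thread: only a Bucket object is created, not a new client
            Bucket bucket = client.fetchBucket("test_bucket").execute();
            bucket.store("some_key", "some value").execute();

            IRiakObject fetched = bucket.fetch("some_key").execute();
            System.out.println(fetched.getValueAsString());
        }
    }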
regards,
Paweł Kamiński
kami...@gmail.com
pkaminski@gmail.com
__
1) Is it OK to share a single PBC client object between 50 threads? Should
it be protected by a lock?
2) I haven't done load balancing between nodes yet, because I want to understand
the throughput limit better first. I am planning to do it for much higher throughput.
Pavel
On Wed, Oct 10, 2012 at 9:21 AM, kamis
On 10/10/2012 05:54, sangeetha.pattabiram...@cognizant.com wrote:
Good day Christian!
I have two doubts. Please could you clear them up for me?
Thanks in advance.
1. riak took 7733m32.525s (nearly 5.3 days) for loading 35 million (a 1.8 GB data
set), which uses a single curl -one
Thanks a lot. :-)
It installed and started without any errors with Erlang R14B04 and Riak
1.2.0 on RHEL 5.x.
Regards,
B.T. Jagadeesh Kumar
On 10/9/2012 10:57 PM, Evan Vigil-McClanahan wrote:
You'll either need to upgrade to a post-5.2 version of RHEL, or
rebuild the package from
I ran into a similar issue. How did you solve it?
Thanks
well, I asked the same question a few days ago (maybe 2 weeks back) and
the answer was that yes, sharing the client is thread safe, and all you
should do is create a new bucket instance on every request
regards,
Paweł Kamiński
kami...@gmail.com
pkaminski@gmail.com
__
On 10 Oct
Thanks,
I will try this solution.
Pavel
On Wed, Oct 10, 2012 at 1:51 PM, kamiseq wrote:
> well, I asked the same question a few days ago (maybe 2 weeks back) and
> the answer was that yes, sharing the client is thread safe, and all you
> should do is create a new bucket instance on every request
>
> p
That question has been answered a few times; here is my old answer:
Hi,
It is the Java client which, to be honest, doesn't handle one node
going down well, so, for example, in my company we use HAProxy for that; here is
a starting configuration: https://gist.github.com/1507077
Once we switched
Hi,
The node is OK and not down.
I have a way to do load balancing externally to the Java client.
I am evaluating Riak for use in my company and want to measure maximal
throughput against a single node.
Thanks,
Pavel
On Wed, Oct 10, 2012 at 2:13 PM, Guido Medina wrote:
> That question has been answe
The answer is there: create a client config with N pooled connections to
your load balancer, whatever you are using. I know HAProxy supports the
PBC config (TCP based), which is faster than the HTTP client, hence my
recommendation.
Say, a non-clustered client config with N connections to balanc
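Roughly something like this; just a sketch assuming the 1.x Java client API, where the HAProxy host/port and pool sizes are placeholders:

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.RiakException;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.raw.pbc.PBClientConfig;
    import com.basho.riak.client.raw.pbc.PBClusterConfig;

    public class PooledClientSketch {
        public static void main(String[] args) throws RiakException {
            // N pooled PB connections, all pointed at the load balancer
            // (haproxy.example.com:8087 is a made-up endpoint)
            PBClientConfig nodeConfig = new PBClientConfig.Builder()
                    .withHost("haproxy.example.com")
                    .withPort(8087)
                    .withPoolSize(50)                  // N connections in the pool
                    .withConnectionTimeoutMillis(2000)
                    .build();

            // overall connection cap; here there is only the one balancer endpoint
            PBClusterConfig clusterConfig = new PBClusterConfig(50);
            clusterConfig.addClient(nodeConfig);

            // share this one client application-wide and fetch buckets per request
            IRiakClient client = RiakFactory.newClient(clusterConfig);
        }
    }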
Hi guys,
I've enabled Riak Search on my Riak 1.2.0-1 ring, put a schema on a bucket and
pumped some data into it.
When I do a search, I receive a responseHeader that has a maxScore. That's
great, but the score for the individual documents doesn't show. How do I get
the score for the individual
Hi Tom,
With Riak, you load your data by primary key. Just like you have to specify a
PK for a row with Cassandra or an RDBMS, you specify one with Riak. Riak
"knows" all those values belong to the PK because you store them as an opaque
blob within Riak.
Using curl, loading your object would
I understand that load balancing is the final solution, but I want to
benchmark a single node.
If I knew that I could load a single node with N requests/sec, I could assume
that after load balancing over 5 nodes my throughput limit would increase
linearly.
Pavel
On Wed, Oct 10, 2012 at 2:51 PM, Guido Me
That's why I keep pushing towards one answer: Riak is not meant to run as a one-node
cluster. You are removing the external factors and the CAP settings you will
be using, and it won't be linear; you could get the same results with
RW=2 with 3, 4 and 5 nodes. There are several factors that will
influence your b
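To make the RW=2 part concrete, the quorums are set per request in the Java client; a rough sketch assuming the 1.x API, with a placeholder bucket/key and the w/r values only as an illustration:

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.IRiakObject;
    import com.basho.riak.client.RiakException;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.bucket.Bucket;

    public class QuorumSketch {
        public static void main(String[] args) throws RiakException {
            IRiakClient client = RiakFactory.pbcClient("127.0.0.1", 8087);
            Bucket bucket = client.fetchBucket("test_bucket").execute();

            // the write returns once 2 replicas have acknowledged it
            bucket.store("some_key", "some value").w(2).execute();

            // the read waits for 2 replicas to answer
            IRiakObject obj = bucket.fetch("some_key").r(2).execute();

            // with the default n_val of 3, these R/W settings behave the same
            // whether the ring spans 3, 4 or 5 physical nodes
        }
    }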
In fact, with more nodes you might be surprised that it might be
faster... see my point? Riak is a lot of things; first you have to be
aware of the hashing, the hash ring, how a key gets copied to different
nodes, how one or more nodes are responsible for a key, etc., so it is
not that simple.
On
ok, you have a 100% valid point here. On the other hand, I think Pavel is looking
for some guidance on how to improve performance on the client side, so he can
be 100% sure he is not wasting time on something. This is maybe
premature optimization, but it may also be a good position from which to understand
the library and enter a new world
From that perspective, for now it is better to treat the client as you
would treat a JDBC DataSource pool; the tidy-up comes when connecting
the client, either to one node or many. The client will behave better if it
has no knowledge of what's going on at the cluster side; of course,
that's as of 1
Hello list,
I've written a large map/reduce query using the Python bindings - basically
it's an expanded version of the users/tags example in this post:
http://basho.com/blog/technical/2010/04/14/practical-map-reduce:-forwarding-and-collecting/
When I run the query it fails with this error, with
I am deploying a test stand for a server app demo for clients: Riak 1.2.0 on Debian,
compiled and run with Erlang R15B01. The client uses the Riak Erlang protobuf client
1.2. I have to use only one node with an almost default config. For the storage
backend I use eleveldb_backend (we use secondary indexing very often). Aft
I have a nasty problem in production. We built a connection pool with the official Erlang
PB client. Everything works fine. To organize the pool we use hottub (we tried
several, but that one is the simplest). Each connection is used at least once every 3-5
minutes (production is not fully loaded now).
After several days riak s
I use the Riak Erlang client for my project, and updated Riak and the client library
today from master (client to 1.3.1 and server to 1.2.0). Everything works except
that fetching multiple entities with map-reduce returns empty. Instead of the usual
result {ok, [...,{,},...]} I get {ok,[]}. Rollback
with client an