> ... a quorum read here, but a single vnode per-value only. I
> don't know if other clients added this API, but it would not be hard to add
> to any client that supports 2i. Like I say, originally it was for riakCS,
> but it's open source and part of the release for 4 years now, so hardly a
> secret.
>
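For readers following the 2i discussion: even without native client support, a
basic 2i lookup can be issued against Riak's HTTP interface. Below is a minimal
sketch (not from the thread); the bucket name "sensors" and index "device_bin"
are made-up examples, and a node is assumed to be listening on localhost:8098.

import requests

# Hedged sketch: exact-match secondary index (2i) query over Riak's
# documented HTTP endpoint /buckets/<bucket>/index/<index>/<value>.
# "sensors" and "device_bin" are hypothetical names.
RIAK_HTTP = "http://127.0.0.1:8098"

def query_2i(bucket, index, value):
    url = f"{RIAK_HTTP}/buckets/{bucket}/index/{index}/{value}"
    resp = requests.get(url)
    resp.raise_for_status()
    return resp.json()["keys"]  # response body is {"keys": [...]}

if __name__ == "__main__":
    for key in query_2i("sensors", "device_bin", "sensor-42"):
        print(key)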
...st of the time. Any suggestions here?
Many thanks in advance.
Br,
Alex
2017-02-06 19:02 GMT+08:00 Alex Feng :
> Hi Russell,
>
> It is really helpful, thank you a lot.
> We are suffering from Solr crashes now and are considering switching to 2i.
>
> Br,
> Alex
>
> 2017-02-06
imo they’re
> complementary, and you pick the one that best fits.
>
> Cheers
>
> Russell
>
> On 2 Feb 2017, at 09:43, Alex Feng wrote:
>
> > Hello Riak-users,
> >
> > I am currently using Riak search to do some queries, since my queries
> > are very simple, it should be fulfilled by secondary indexes as well.
** {"solr OS process exited",137}
2017-02-06 01:58:16 =CRASH REPORT
crasher:
initial call: yz_solr_proc:init/1
pid: <0.14979.6>
Br,
Alex
2017-02-05 11:48 GMT+08:00 Alex Feng :
> Hi Luke,
>
> Here is some of the error log from solr.log.
>
>
> 2017-02-04 13
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:395)
... 9 more
Br,
Alex
2017-02-04 23:30 GMT+08:00 Luke Bakken :
> Hi Alex -
>
> What is in solr.log?
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Sat, Feb 4, 2017 at 3:37 AM, Alex Feng wrote:
Hello Riak users,
I recently found that our Solr system went down after running for some time; it
recovered after a restart, but then it happened again.
Below is the error output, do you guys have any clue about this?
2017-02-04 18:12:22.329 [error] <0.830.0>@yz_kv:index_internal:237
failed to index
Hello Riak-users,
I am currently using Riak search to do some queries, since my queries are
very simple, it should be fulfilled by secondary indexes as well.
So my question is: which one has better performance and less overhead,
assuming both can fulfill the query requirement?
Many thanks in advance.
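For comparison with the 2i sketch earlier in the thread, a Riak Search query
goes through the Solr-backed /search/query HTTP endpoint instead. Again, this is
only an illustrative sketch: the index name "sensor_idx" and the field
"device_s" are made up, and the index is assumed to already be associated with
the bucket.

import requests

# Hedged sketch: Riak Search (Yokozuna/Solr) query over HTTP.
# "sensor_idx" and "device_s" are hypothetical names.
resp = requests.get(
    "http://127.0.0.1:8098/search/query/sensor_idx",
    params={"wt": "json", "q": "device_s:sensor-42"},
)
resp.raise_for_status()
body = resp.json()
print(body["response"]["numFound"])   # standard Solr response shape
for doc in body["response"]["docs"]:
    print(doc.get("_yz_rk"))          # _yz_rk carries the Riak key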
> Hi, you should consider using Riak TS for this use case.
>
> -Alexander
>
>
> @siculars
> http://siculars.posthaven.com
>
> Sent from my iRotaryPhone
>
> > On Jan 27, 2017, at 01:54, Alex Feng wrote:
> >
> > Hi,
> >
> > I am wonderi
Hi,
I am wondering whether there are any best practices or recommendations for how
many keys to keep inside a single bucket.
Let's say I have some sensors reporting data every 5 seconds; I can put all
of this data under one single bucket, or I can dynamically generate a bucket
every day.
My question is, is th
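As a rough illustration of the bucket-per-day idea above (the bucket names and
key scheme are hypothetical sketches, not a recommendation from the thread):

from datetime import datetime, timezone

def daily_bucket(prefix="sensor_data", now=None):
    # Hypothetical layout: one bucket per calendar day, e.g. sensor_data-2017-02-06
    now = now or datetime.now(timezone.utc)
    return f"{prefix}-{now:%Y-%m-%d}"

def sensor_key(sensor_id, now=None):
    # Hypothetical key scheme: sensor id plus a second-resolution timestamp
    now = now or datetime.now(timezone.utc)
    return f"{sensor_id}-{now:%Y%m%dT%H%M%S}"

# A reading taken every 5 seconds lands in today's bucket under its own key.
print(daily_bucket(), sensor_key("sensor-42"))

For what it's worth, as far as I recall from the Riak docs, buckets that keep
the default properties are essentially just key namespaces, so either layout
stores the same number of objects; it is custom per-bucket properties that add
cluster metadata overhead.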
> ... does not show up
> within Erlang. Maybe that is what you are experiencing?
>
> Matthew
>
> Sent from my iPad
>
> > On Jan 26, 2017, at 5:47 AM, Alex Feng wrote:
> >
> > Hi Riak Users,
> >
> > One of my riak nodes, it has 4G memory, when I check the memor
Hi Riak Users,
One of my Riak nodes has 4G of memory. When I check the memory usage with
"free -m", I can see there is only around 150M left. Then I checked with the
command "riak-admin status"; it shows around 415M (415594432) consumed by
Erlang. But the "top" command shows Erlang taking 52.1% of memory
> ... that you'd find it
> doesn't work. You simply mark it as down, though, and the cluster will
> re-elect a new claimant to take over the role and you can continue.
>
> Kind Regards,
> Shaun
>
> On Wed, Jan 18, 2017 at 9:05 AM, Alex Feng wrote:
>
>> Hello Ba
Hello Basho Users,
I have some questions regarding the claimant node: what does it really mean?
How does a claimant node differ from the other nodes?
If the claimant node is down, then the whole cluster stops working until I
mark it down from another working node, right? How do we avoid this kind of
SPOF?
Hi Riak Users,
I am a little bit confused about the metadata: it is stored in memory and
synced on every node, so basically every node shares the same metadata,
right?
But then, in the official documentation's formula to calculate the RAM
requirement, the RAM requirement is for the whole cluster, for ex