50k definitely is a fair few. What's the objective for that many, if I may
ask? Always interesting to hear about different data models.
Sent from my iPhone
On Aug 2, 2013, at 7:56 PM, Paul Ingalls wrote:
Not that many, I didn't let it run that long since the performance was so poor.
Maybe 50k or so...
Paul Ingalls
Founder & CEO Fanzo
p...@fanzo.me
@paulingalls
http://www.linkedin.com/in/paulingalls
On Aug 2, 2013, at 3:54 PM, John Daily wrote:
Excellent news. How many buckets with custom settings did you create?
Sent from my iPhone
On Aug 2, 2013, at 6:51 PM, Paul Ingalls wrote:
For those interested, I identified my performance problem.
I was creating a lot of buckets, and the properties did not match the default
bucket properties of the node. So getting the bucket was taking between
300-400 milliseconds instead of 3-4. Apparently creating buckets with non-default bucket properties…
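For anyone hitting the same thing: one way to avoid that per-bucket penalty is
to change the node's default bucket properties so your buckets no longer carry
custom overrides. A sketch of the relevant app.config section follows; the
property values are illustrative assumptions, not Paul's actual settings.

%% app.config, riak_core section (sketch; values are assumptions)
{riak_core, [
    %% Buckets whose properties match these defaults need no per-bucket
    %% entry in the ring, so fetching their properties stays cheap.
    {default_bucket_props, [
        {n_val, 3},
        {allow_mult, true}
    ]}
]}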
Hey,
Being on slow dev hardware (VMs in Vagrant) I added the following line to
the yokozuna section of app.config:
{solr_startup_wait, 2500}
Solved my time-out issues with Solr.
Cheers,
Dave
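For context, a sketch of where that line sits in app.config; the surrounding
entries are illustrative assumptions about a typical yokozuna section, not
Dave's actual file:

%% app.config, yokozuna section (sketch; only solr_startup_wait comes
%% from the message above)
{yokozuna, [
    {enabled, true},
    %% give Solr longer to come up on slow hardware before timing out
    {solr_startup_wait, 2500}
]}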
On Mon, Jul 29, 2013 at 10:01 AM, Jeremiah Peschka <jeremiah.pesc...@gmail.com> wrote:
> Hi Erik,
>
On 2 Aug 2013, at 16:56, João Machado wrote:
Hi Sean,
Thanks for your quick response. If I follow the steps from Sam, it works as
expected. I tried the same steps but with my own bucket (and data) and it
worked too. The difference between what I was trying and what Sam did was
that I used JavaScript while Sam used Erlang.
Is there any trick to…
Hi João,
You might want to try the steps shown in Sam Elliott's "cookbook":
https://github.com/basho/riak_crdt_cookbook/blob/master/counters/README.md
On Fri, Aug 2, 2013 at 2:56 PM, João Machado wrote:
Hello,
Anyone tried to use MR with counters?
I'm trying with the following steps:
Increment the counter:
-> curl -X POST http://localhost:8098/buckets/BUCKET/counters/MY_COUNTER -d 1
Confirm the actual value:
-> curl http://localhost:8098/buckets/BUCKET/counters/MY_COUNTER
*1*
Execute mapreduce…
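For reference, one shape such a job can take over HTTP; this sketch uses the
built-in riak_kv_mapreduce:map_object_value map phase, which returns the
stored value as-is. Since a 1.4 counter is stored as an opaque CRDT binary,
that phase (and any JavaScript map function) sees the raw binary rather than
the integer, which is why counter-aware Erlang code, as in Sam's cookbook, is
needed to extract the value:

-> curl -X POST http://localhost:8098/mapred \
     -H "Content-Type: application/json" \
     -d '{"inputs": [["BUCKET", "MY_COUNTER"]],
          "query": [{"map": {"language": "erlang",
                             "module": "riak_kv_mapreduce",
                             "function": "map_object_value"}}]}'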
Responses inline.
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop
On Wed, Jul 31, 2013 at 9:41 PM, Wagner Camarao wrote:
> Hi all ~
>
> Great meetup today - looking forward to upgrading to 1.4
>
> I had a question M
Looks like the root of the problem is incorrect handling of the bad_crc backend
error.
This error was mentioned here: https://github.com/basho/riak_kv/pull/385
Could anybody advise how to deal with this error in Riak 1.2.1-1?
2013/8/1 Daniil Churikov
> Hello dear list.
> Recently we had an
Hi,
I have a few questions about Riak memory usage.
We're using Riak 1.3.1 on a 3-node cluster. According to the Bitcask capacity
calculator (
http://docs.basho.com/riak/1.3.1/references/appendices/Bitcask-Capacity-Planning/)
Riak should use about 30 GB of RAM for our data. In fact, it uses about
45 GB…
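For readers following along, the shape of the estimate behind that calculator,
worked through in the Erlang shell; every number below is an assumed
illustration, not the poster's actual data:

%% Bitcask keydir estimate: RAM ~ keys * (per-key overhead + bucket+key size)
NumKeys = 300000000,        % assumed key count
Overhead = 40,              % assumed static per-key overhead, in bytes
BucketKey = 60,             % assumed average bucket + key length, in bytes
NumKeys * (Overhead + BucketKey) / (1024 * 1024 * 1024).
%% ~ 28 GB for the keydir alone; the Erlang VM, OS page cache and
%% fragmentation sit on top of that, which is one way resident memory
%% can land well above the calculator's figure.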
Thank you for the clarification, Eric!
One thing I am doing differently from the benchmark is creating a lot of buckets…
Paul Ingalls
Founder & CEO Fanzo
p...@fanzo.me
@paulingalls
http://www.linkedin.com/in/paulingalls
On Aug 1, 2013, at 11:52 PM, Paul Ingalls wrote:
> The objects are either tweet jsons, so between 1-2k, or simple