Has there been any talk of using compression, maybe something like
Snappy (http://code.google.com/p/snappy/) since it's fast and
shouldn't affect performance too much?
On Fri, Jun 24, 2011 at 3:29 PM, Aphyr wrote:
> Nope.
>
> On 06/24/2011 03:24 PM, Andrew Berman wrote:
>>
>> And related, does Bi
Afternoon, Evening, Morning to All -
For today's Recap: new code, an interview, some content from #riak, and more.
Enjoy and have a great weekend.
Mark
Community Manager
Basho Technologies
wiki.basho.com
twitter.com/pharkmillups
-
Riak Recap for June 22 - 23
And related, does Bitcask have any sort of compression built into it?
On Fri, Jun 24, 2011 at 2:58 PM, Andrew Berman wrote:
> Mathias,
>
> I took the BERT encoding and then encoded that as Base64 which should
> pass the test of valid UTF-8 characters. However, now I'm starting to
> think that ma
Hi Mathias,
Thank you for responding and for giving me things to try.
I am testing Riak under a hypervisor, so each node has only one CPU and 1.5 GB
of RAM. I have 222,526 total keys stored.
I used nginx to load balance the connections equally across the three nodes. I
paused two seconds between
Mathias,
I took the BERT encoding and then encoded that as Base64 which should
pass the test of valid UTF-8 characters. However, now I'm starting to
think that maybe doing two encodings and storing that for the purpose
of saving space is not worth the trade-off in performance vs just
storing the
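Base64 maps every 3 input bytes to 4 output characters, so the second encoding inflates the stored value by roughly a third, which is the space-versus-CPU trade-off being weighed here. A quick stand-alone check of that overhead (the sample payload is an arbitrary binary blob standing in for a BERT-encoded term, not real data from this thread):

```python
import base64

# Arbitrary binary payload standing in for a BERT-encoded Erlang term.
payload = b"\x83h\x02d\x00\x02okm\x00\x00\x00\x05hello"
encoded = base64.b64encode(payload)

# Base64 emits 4 output bytes for every 3 input bytes, rounded up
# to a whole 4-byte group.
assert len(encoded) == 4 * ((len(payload) + 2) // 3)
print(len(payload), len(encoded))  # 18 -> 24
```

The round trip is lossless (`base64.b64decode(encoded) == payload`), so the only cost is the ~33% size growth plus the extra encode/decode pass on every read and write.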
I was pretty sure I mentioned it quite early on in this discussion ;-).
Something about logical deletes vs. physical deletes.
Cheers,
Nico
On Thursday, 23.06.2011 at 10:21 -0700, Greg Nelson wrote:
> Something to keep in mind here -- which I don't think has been
> mentioned yet -- is the int
David,
Given your description of the error behaviour and the MapReduce job shown
below, here's an outline of what's happening in your cluster when you fire that
job 208 times in quick succession:
* All nodes in the cluster go through their keys using the filter you supplied,
that's 208 times o
I just reran the 208 MapReduce jobs, and I got two timeouts (but no crashes).
They were both in the map phase.
=ERROR REPORT==== 24-Jun-2011::15:23:27 ===
** State machine <0.4087.0> terminating
** Last event in was {mapexec_error,{157389,'riak@10.0.60.210'},
I am doing 208 MapReduce jobs in rapid-fire succession using anonymous
JavaScript functions. I am sending the MapReduce jobs to a single node,
riak01. There are about 75,000 keys in the bucket.
Erlang: R13B04
Riak: 0.14.2
When I had my MapReduce timeout set to 120,000 ("timeout":120000), I was
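For reference, the timeout in Riak's HTTP MapReduce interface is given per job, in milliseconds, at the top level of the job document. A minimal sketch of a job body with a 120-second timeout (the bucket name and map function are made-up placeholders, not the actual job from this thread):

```json
{
  "inputs": "mybucket",
  "timeout": 120000,
  "query": [
    {"map": {"language": "javascript",
             "source": "function(v) { return [v.key]; }"}}
  ]
}
```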
Many Thanks Ryan,
You the man!!
--g
On Jun 24, 2011, at 10:58 , Ryan Zezeski wrote:
Gordon, Gilbert, and all you Search fans out there,
I've patched this bug in the riak_search-0.14 branch. Below you'll find a link
to the pull request.
The bug was a little tricky to find but is fairly "obv
Gordon, Gilbert, and all you Search fans out there,
I've patched this bug in the riak_search-0.14 branch. Below you'll find a
link to the pull request.
The bug was a little tricky to find but is fairly "obvious" once you see
what is happening. The leak occurs when you perform an intersection qu
I'm happy to announce a freshly squeezed release of the Python client for Riak
[1]. Other than a couple of bugfixes it includes some neat new features
contributed by the community.
Here's a handy list of the noteworthy ones:
* A nicer API for using key filters with MapReduce, check out the READ