You're not providing a whole lot of useful information. FreeBSD version, Riak
version, etc… would help.
I've been able to build Riak and LevelDB successfully on FreeBSD 8.x and 9.x;
it all runs without a hitch.
On Jul 11, 2012, at 9:48 PM, Lijun Wang wrote:
> I am evaluating riak. On FreeBSD,
I'm using Riak in a 5 node cluster with LevelDB for the backend (we store A LOT
of archivable data) on FreeBSD.
The data is mapped out as follows: I have a set of database objects that are
closely linked to user accounts - I needed to be able to compose complex
queries on these objects includin
I guess I got off track from my original subject line - once I started writing
I realized read performance wasn't a LevelDB issue (I originally thought that
maybe it was) but that the bottleneck must be our utilization of the API...
On Jul 26, 2012, at 5:18 PM, Parnell Springmeyer wrote:
Thanks,
Dan
-- Daniel Reverri
Client Architect
Basho Technologies, Inc.
d...@basho.com
BC? That would also
>> decouple the processes and allow you to scale them independently.
>>
>> -jd
>>
>> 2012/7/26 Parnell Springmeyer
>> I'm using Riak in a 5 node cluster with LevelDB for the backend (we store A
>> LOT of archivable data) on Fre
Hi everyone!
I figured out what my bottleneck was - HTTP API + sequential (as opposed to
concurrent) GET requests.
I wrote a simple Erlang Cowboy handler that uses a worker pool OTP application
I built to make concurrent GETs using the PBC API. My Python web app makes a
call to the handler an
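The fix described above (replacing sequential HTTP GETs with pooled concurrent fetches) can be sketched in plain Python. The real setup is an Erlang Cowboy handler over Riak's PBC interface; here `fetch_key` is a hypothetical stand-in for a single Riak GET, and the worker pool is `concurrent.futures` from the stdlib.

```python
# Sketch of the concurrent-GET pattern: fan many (bucket, key) fetches
# out to a worker pool instead of issuing them one at a time.
from concurrent.futures import ThreadPoolExecutor

def fetch_key(bucket, key):
    # Hypothetical placeholder for a real client call,
    # e.g. client.bucket(bucket).get(key).get_data()
    return ("%s/%s" % (bucket, key)).upper()

def fetch_all(pairs, workers=10):
    """GET many (bucket, key) pairs concurrently, preserving order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda p: fetch_key(*p), pairs))

results = fetch_all([("users", "alice"), ("users", "bob")])
```

With real network I/O the wall-clock win comes from overlapping round-trips; order is still preserved because `map` returns results in input order.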
The client might not support that, I don't know. I know the Python client (at
least the version I'm using) also does a full GET of the object.
Here's the kicker though, I'm using LevelDB for the backend and LevelDB is
notorious for having poor read performance when you're trying to GET a key tha
> {error, notfound} ->
>     undefined;
> X ->
>     X
> end.
>
>
>
> On Sat, Jul 28, 2012 at 6:26 PM, Parnell Springmeyer
> wrote:
> Hi everyone!
>
> I figured out what my bottleneck was
Cool idea. Are there any Basho guys in Austin, Texas?
On Aug 7, 2012, at 2:22 PM, Joseph Blomstedt wrote:
> As many of you already know, Basho is largely a distributed company
> with the majority of many teams working remotely from their homes,
> coffee shops, and co-working spaces across the cou
Sorry for the double email, hit the send key combo accidentally.
If there are, I can get the Capital Factory guys to host a meet up/get together
(I'm a tenant in their gorgeous office space).
On Aug 7, 2012, at 2:22 PM, Joseph Blomstedt wrote:
> As many of you already know, Basho is largely a d
Jeremy,
I was looking for something similar and first built an extra handler onto an
internal erlang cowboy API server that used maelstrom (my own worker pool OTP
application).
It was used to make a simple POST with a string of the {bucket, key} pairs and
the server would concurrently GET and
I've been needing the Riak Python client to handle pooled connections
(including pooled connections to multiple nodes) and decided to re-work (just
barely) the current riak-python-client to use gevent's queue to do thread-safe
and light-weight Riak connection pooling.
There is a very minimal ch
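The queue-based pooling idea described above can be shown in a minimal sketch. The patched client uses gevent's queue; this version uses the stdlib `queue` module instead, and `make_conn` is a hypothetical factory standing in for a Riak client connection.

```python
# Minimal connection pool: a bounded queue of pre-built connections.
# acquire() blocks until a connection is free, which is what makes the
# pool safe to share between threads (or greenlets, with gevent's queue).
import queue

class ConnectionPool:
    def __init__(self, make_conn, size=5):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(make_conn())

    def acquire(self):
        return self._pool.get()   # blocks until a connection is available

    def release(self, conn):
        self._pool.put(conn)      # hand the connection back for reuse

pool = ConnectionPool(lambda: object(), size=2)
conn = pool.acquire()
pool.release(conn)
```

Swapping `queue.Queue` for `gevent.queue.Queue` gives the light-weight cooperative version without changing the pool's interface.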
Your email is slightly confusing - I assume by "paging" you mean "pagination"?
I recommend not trying to do pagination in Riak.
I can't help you at all with your custom map/reduce phase code in
trend_riak.erl without seeing vm.args, trend_riak.erl, &c...
On Aug 9, 2012, at 7:27 PM, 郎咸武 wrote:
I'm interested in this, I'll fork the repo and see what I can get added in
there.
On Aug 10, 2012, at 7:52 AM, Bryan Fink wrote:
> On Thu, Aug 9, 2012 at 5:11 AM, Kresten Krab Thorup wrote:
>> The only issue with this approach is AFAIK that M/R effectively runs with
>> R=1, i.e. it doesn't ens
So then trap the error and go to the next bucket? Riak is operating truthfully
here - you *want* Riak to tell you if a bucket doesn't exist if you try to
request one.
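The "trap the error and move on" advice above amounts to a loop like the following sketch. `get_bucket_data` is hypothetical; a real client call would raise or return a not-found result for a missing date-range bucket.

```python
# Iterate over date-range bucket names, skipping the ones that don't exist.
def get_bucket_data(name, store):
    # Stand-in for a Riak fetch; missing buckets raise, like {error, notfound}.
    if name not in store:
        raise KeyError("notfound")
    return store[name]

def collect(bucket_names, store):
    out = []
    for name in bucket_names:
        try:
            out.append(get_bucket_data(name, store))
        except KeyError:
            continue  # no data for that date range; go to the next bucket
    return out

data = collect(["2012-08-01", "2012-08-02"], {"2012-08-01": [1, 2]})
```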
On Aug 13, 2012, at 12:42 AM, Venki Yedidha wrote:
> Hi All,
>
>
> I am retrieving data from buckets (dateranges)..In so
I use Riak at my company primarily for time series data; I quickly learned that
key filters were a bad idea (when I designed our data model, I had the UIDs of
the objects in MySQL plus the timestamp of the data piece), and once I moved to
Map/Reduce using Erlang modules/functions, it dramatically
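The modeling point above (structured keys beat key filters, which must scan every key) can be illustrated with key construction. The `<uid>:<date>` scheme and names here are illustrative, not the author's actual schema.

```python
# Illustrative: keys built as "<uid>:<ISO date>" let a date range map to an
# enumerable set of exact keys, instead of filtering over all keys in a bucket.
from datetime import date, timedelta

def series_key(uid, day):
    return "%s:%s" % (uid, day.isoformat())

def keys_for_range(uid, start, end):
    """Enumerate the exact keys covering [start, end], one per day."""
    days = (end - start).days + 1
    return [series_key(uid, start + timedelta(n)) for n in range(days)]

keys = keys_for_range("42", date(2012, 8, 1), date(2012, 8, 3))
# three keys: 42:2012-08-01, 42:2012-08-02, 42:2012-08-03
```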
You *could* use Riak for all of that, but I personally find retrieving objects
linked to other objects remarkably painful in Riak, even with secondary indexes.
Riak is better suited to fast-growing data; I keep the relational data in
Postgres or MySQL.
Example of my setup:
MySQL is
I've had a few situations arise where one or two nodes (all it needs is
one node) will begin a heavy compaction cycle (determined by using gstat
+ looking at leveldb LOG files) and ALL queries put through the cluster
(it doesn't matter which node) return a timeout.
I can fix this situation by kill