I was hesitant to mention it at first, but since you broke the ice... I also
wrote my own pool.  However, I took a different approach.  My pool only
cares about doling out connections and making sure they are alive.  One or
more processes could use the same conn at a given time (side question: is
that a problem?).  My pool relies on the fact that each conn has the N-1
other conns handed out after it before it gets reused, where N is the size
of the pool.
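To make that reuse guarantee concrete, here is a minimal sketch of the idea (a hypothetical module, not the code in the gist): keep the N conns in a tuple with a cursor, and have each checkout return the current conn and advance the cursor, so a given conn only comes around again after the other N-1 have been handed out.

```erlang
%% Hypothetical sketch of round-robin checkout. The pool state is a
%% tuple of N live connections plus a 1-based cursor; checkout/1
%% returns the conn under the cursor and advances it, wrapping at N.
-module(rr_pool_sketch).
-export([new/1, checkout/1]).

new(Conns) when is_list(Conns), Conns =/= [] ->
    {list_to_tuple(Conns), 1}.

checkout({Conns, I}) ->
    Conn = element(I, Conns),
    Next = (I rem tuple_size(Conns)) + 1,
    {Conn, {Conns, Next}}.
```

Nothing here checks liveness or whether two callers hold the same conn at once; it only shows why a conn has N-1 others ahead of it before reuse.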

That said, I hacked mine together in 15 minutes for something I needed at
work.  It seemed to handle a reasonable load (100+ concurrent connections)
just fine, so I'm using it, but I don't know if I'd suggest anyone else
use it.  I'm mainly putting it up here as a contrast to your (David's) pool,
and I'm curious for feedback on whether I'm doing anything insanely stupid.

https://gist.github.com/789616

-Ryan

On Fri, Jan 21, 2011 at 7:12 AM, David Dawson <david.daw...@gmail.com> wrote:

> I am not sure if this is any help but I have uploaded a protocol buffer
> pool client for riak which requires you to pass a client_id for each
> operation.
>
> https://github.com/DangerDawson/riakc_pb_pool
>
> It is very, very basic, but does most of the useful things:
>
>        - put / get / delete
>        - handles riak disconnects
>        - returns a connection to the pool if the client that leased it dies
>        - dynamically increases the size of the connection pool
>        - queues requests if there are no connections available
>        - useful statistics
>
> Of course the documentation is very sparse and needs improving, which I
> will get round to.
>
> Dave
>
>
> On 21 Jan 2011, at 00:43, Bob Ippolito wrote:
>
> > Another issue we've run into is that the Erlang native client allows
> > you to store non-binary values, which cannot be accessed via the
> > PBC... so if you're not careful or don't know better, you'll be in
> > for some migration if you're trying to use other clients.
> >
> > The only real problem is that the PBC needs some additional software
> > around it to pool connections, where the Erlang native client got that
> > for free because it was leveraging Erlang distribution.
> >
> > On Fri, Jan 21, 2011 at 3:57 AM, Ryan Maclear <r...@lambdasphere.com>
> wrote:
> >> Agreed. So it makes sense to use the PBC from the
> outset: it lets you later move the client app off whatever cluster node(s)
> it might be residing on, and it keeps you from being affected by subtle
> changes to the internals of the riak_kv code base (specifically the non-PB
> modules).
> >>
> >> On 20 Jan 2011, at 9:37 PM, Mojito Sorbet wrote:
> >>
> >>> To me the major concern is that if you use the native (non-PB)
> >>> interface, your application cluster and the Riak cluster become merged
> >>> into one big Erlang cluster.   The number of TCP connections can start
> >>> getting out of hand, and the work put on the cluster manager starts to
> >>> become significant.
> >>>
> >>>
> >>> _______________________________________________
> >>> riak-users mailing list
> >>> riak-users@lists.basho.com
> >>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >>
>