Re: What does issue status "CLOSED MIGRATED" mean?

2012-06-13 Thread Andy Gross
Hi Mårten,

We've migrated our issue tracking to GitHub Issues, so most bugs have been 
moved there.  This particular bug represents a feature request; we've 
migrated those to another tool that is currently only visible to Basho 
employees.  Some of the content there probably should be visible to the 
community as well, but we're just not there yet.  

- Andy

Sent from my iPhone

On Jun 13, 2012, at 6:08 PM, Mårten Gustafson  
wrote:

> Honk honk,
> 
> I've been watching the "Delete all keys in a bucket"[1] issue which
> just got closed with the status "CLOSED MIGRATED". What does this
> status indicate?
> 
> 
> 
> cheers, m.
> 
> [1] https://issues.basho.com/show_bug.cgi?id=79
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: OS X Source Compile Woes

2012-08-07 Thread Andy Gross

This may not be the best solution, but:

sudo ln -s /Applications/Xcode.app/Contents/Developer /Developer

...has worked for me.

- Andy

Sent from my iPhone

On Aug 7, 2012, at 6:28 PM, Jeff Kirkell  wrote:

> Is anyone having trouble compiling from github source on OS X Mountain Lion? 
> I am currently running Erlang R15B01.
> 
> Not sure if there is an official log file, but here is what appears prior to 
> the compile error
> 
> in16.h ../../dist/include/nspr
> cd src; make export
> cd io; make export
> make[6]: Nothing to be done for `export'.
> cd linking; make export
> cc -o prlink.o -c -m32  -Wall -fno-common -pthread -O2 -fPIC  -UDEBUG  
> -DNDEBUG=1 -DXP_UNIX=1 -DDARWIN=1 -DHAVE_BSD_FLOCK=1 -DHAVE_SOCKLEN_T=1 
> -DXP_MACOSX=1 -DHAVE_LCHOWN=1 -DHAVE_STRERROR=1  -DFORCE_PR_LOG 
> -D_PR_PTHREADS -UHAVE_CVAR_BUILT_ON_SEM -D_NSPR_BUILD_ 
> -I../../../dist/include/nspr -I../../../pr/include 
> -I../../../pr/include/private -I/Developer/Headers/FlatCarbon  prlink.c
> prlink.c:48:10: fatal error: 'CodeFragments.h' file not found
> #include <CodeFragments.h>
>          ^
> 1 error generated.
> make[6]: *** [prlink.o] Error 1
> make[5]: *** [export] Error 2
> make[4]: *** [export] Error 2
> make[3]: *** [export] Error 2
> make[2]: *** 
> [/Users/jeff/Database/riaka-1.2.0/deps/erlang_js/c_src/system/lib/libnspr4.a] 
> Error 2
> make[1]: *** [c_src] Error 2
> ERROR: Command [compile] failed!
> make: *** [rel] Error 1
> 
> P.S. this happens when doing make or any of the sub-commands i.e. make rel
> 
> Thanks.
> Jeff

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: May allow_mult cause DoS?

2013-12-19 Thread Andy Gross

I’d wait for strong consistency in Riak 2.0 or try another solution.  Your 
requirements probably need to be rethought if you intend to use any database on 
the AP side of the spectrum.

- Andy

—
Andy Gross 
Chief Architect
Basho Technologies, Inc.



On Dec 18, 2013, at 9:18 PM, Viable Nisei  wrote:

> Hi
> 
> On Thu, Dec 19, 2013 at 3:07 AM, Rune Skou Larsen  wrote:
> Save the transaction list inside the customer object keyed by customerid. 
> Index this object with 2i on storeids for each contained tx.
> 
> Not such a good idea. Transactions may run in parallel, but there are no 
> atomic operations in Riak, nor lock managers or an UPDATE operation that knows 
> about the blob structure. The risk of a race condition is not high, but it exists.
>  
> If some customer objects grow too big, you can move old txs into archive 
> objects keyed by customerid_seqno. For your low latency customer reads, you 
> probably only need the newest txs anyway.
> 
> Yeah, we've considered approaches similar to this, but rejected this due to 
> race conditions. Also we've considered some kind of DLM (like ZooKeeper), but 
> if we need DLM, we'll just use hadoop/cassandra/hbase...
> 
> That's just one idea. Trifork will be happy to help you find a suitable model 
> for your use cases.
> 
> Ok, but that idea doesn't look like anything mind-blowing... we have considered 
> this idea and many other approaches. Also, what might be the answer for the 
> STORE-TRANSACTION binding? Just mapred?..
>  
> We usually do this by stress-testing a simulation with realistic data 
> sizes/shapes and access patterns.
> Same for us. We're using tsung (scripts are generated, tsung is slightly 
> automated with some pieces of Erlang code) and some custom multithreaded 
> scenarios like the ones I mentioned in the original message.
>  
> It's fastest if we come onsite for a couple of days and work with you to set 
> it up, but we can also help you offsite.
>  
> Write me if you're interested, then we can do a call.
> I'm interested, but for now it looks like there is no perfect solution 
> (the only untested approach left is custom indexing on the Riak side), so I'm 
> not really sure we should pay just to confirm that there is no real solution...
> 
> 
> 
> 
> Hi.
> 
> Thank you for your descriptive and so informative answer very much.
> 
> On Wed, Dec 18, 2013 at 3:29 PM, Russell Brown wrote:
> Hi,
> 
> Can you describe your use case a little? Maybe it would be easier for us to 
> help.
> Yeah, let me describe an abstract case equivalent to ours. Say we have a 
> CUSTOMER object, a STORE object and a TRANSACTION object; each TRANSACTION has 
> one tribool attribute STATE={ACTIVE, COMPLETED, ROLLED_BACK}.
> 
> We should be able to list all the TRANSACTIONs of a given CUSTOMER, for example 
> (so we should establish a 1-many relation; this list should not be long, 
> 10^2-10^3 records, but we should be able to obtain it fast enough). 
> Also we should be able to list all the TRANSACTIONs in a given STATE made in 
> a given STORE (these lists may be very long, up to 10^8 records), but they may 
> be computed with some latency. Predictable latency is certainly preferred but 
> is not a show-stopper. So, that's all.
> 
> Another pain point is races and/or operation atomicity, but it's not so 
> important at the moment.
> 
> 
> 
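Rune's archival scheme above (move old transactions into archive objects keyed by customerid_seqno, keep only the newest inline) can be sketched in plain Python. The helper names and the keep_newest cutoff are hypothetical; this is not Riak client code:

```python
def archive_key(customer_id: str, seqno: int) -> str:
    # Archive objects are keyed customerid_seqno, as suggested above.
    return f"{customer_id}_{seqno}"

def split_for_archive(txs: list, keep_newest: int = 100) -> tuple:
    """Return (txs to keep on the customer object, txs to archive).

    Assumes txs are ordered oldest-first; only the newest stay inline,
    which serves the low-latency customer reads described above.
    """
    if len(txs) <= keep_newest:
        return txs, []
    return txs[-keep_newest:], txs[:-keep_newest]

# Example: 250 transactions, the oldest 150 go to an archive object.
keep, old = split_for_archive(list(range(250)), keep_newest=100)
```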

Re: Python client performance issue

2011-02-15 Thread Andy Gross
python-riak-client already uses version 2.3.0. Adventurous types might want to 
check out https://github.com/Greplin/fast-python-pb, which wraps the C/C++ 
protocol buffers library. 

-- 
Andy Gross
Principal Architect
Basho Technologies, Inc.
On Tuesday, February 15, 2011 at 1:46 AM, Nico Meyer wrote: 
> Hi Mike,
> 
> perhaps you can try to upgrade the protocol buffers library to at least
> version 2.3.0. This is from the changelog for that version:
> 
> Python
>  * 10-25 times faster than 2.2.0, still pure-Python.
> 
> 
> Cheers,
> Nico
> 
> On Monday, 14.02.2011 at 19:35 -0500, Mike Stoddart wrote:
> > Will do when I get time. Would the REST API be any faster?
> > 
> > Thanks
> > Mike
> > 
> > On Mon, Feb 14, 2011 at 7:01 PM, Thomas Burdick
> >  wrote:
> > > I would highly recommend looking into the cProfile and pstats modules and
> > > profiling the code that is going slow. If you're using the protocol buffer
> > > client it could possibly be related to the fact that Python protocol buffers
> > > is extraordinarily slow and is well known to be slow. Profile until proven
> > > guilty though.
> > > Tom Burdick
> > > 
> > > On Mon, Feb 14, 2011 at 7:09 AM, Mike Stoddart  wrote:
> > > > 
> > > > I added some code to my system to test writing data into Riak. I'm
> > > > using the Python client library with protocol buffers. I'm writing a
> > > > snapshot of my current data, which is one json object containing on
> > > > average 60 individual json sub-objects. Each sub object contains about
> > > > 22 values.
> > > > 
> > > >  # Archived entry. ts is a formatted timestamp.
> > > >  entry = self._bucket.new(ts, data=data)
> > > >  entry.store()
> > > > 
> > > >  # Now write the current entry.
> > > >  entry = self._bucket.new("current", data=data)
> > > >  entry.store()
> > > > 
> > > > I'm writing the same data twice; the archived copy and the current
> > > > copy, which I can easily retrieve later. Performance is lower than
> > > > expected; top is showing a constant cpu usage of 10-12%.
> > > > 
> > > > I haven't decided to use Riak; this is to help me decide. But for now
> > > > are there any optimisations I can do here? A similar test with
> > > > MongoDB shows a steady cpu usage of 1%. The cpu usages are for my
> > > > client, not Riak's own processes. The only difference in my test app
> > > > is the code that writes the data to the database. Otherwise all other
> > > > code is 100% the same between these two test apps.
> > > > 
> > > > Any suggestions appreciated.
> > > > Thanks
> > > > Mike
> > > > 
> > 
> 
> 
> 
> 
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
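Tom's profiling suggestion, as a minimal self-contained sketch using only the standard library; the slow() function here is a stand-in for the store-path code under test:

```python
import cProfile
import io
import pstats

def slow():
    # Stand-in for the code path being measured (e.g. the client's store call).
    return sum(i * i for i in range(10000))

pr = cProfile.Profile()
pr.enable()
slow()
pr.disable()

# Print the five most expensive calls by cumulative time.
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
```

If protobuf serialization dominates the report, that points to the pure-Python protobuf issue discussed in this thread.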


Re: Python client performance issue

2011-02-15 Thread Andy Gross


Sorry, I should have been more clear. The Python client depends on 
"protobuf>=2.3.0" in setup.py, so people are already most likely using 
protobufs-2.3.0.

- Andy

On Tuesday, February 15, 2011 at 3:09 AM, Nico Meyer wrote: 
> Hi Andy.
> 
> I am not quite sure what you mean, is the protobuf library included with
> riak-python-client? Or are you talking about the version of the protobuf
> compiler that was used to create riakclient_pb2.py from 
> riakclient.proto?
> 
> Cheers,
> Nico
> 
> On Tuesday, 15.02.2011 at 02:23 -0800, Andy Gross wrote:
> > python-riak-client already uses version 2.3.0. Adventurous types
> > might want to check out https://github.com/Greplin/fast-python-pb,
> > which wraps the C/C++ protocol buffers library. 
> > 
> > -- 
> > Andy Gross
> > Principal Architect
> > Basho Technologies, Inc.
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: generating an object key in the erlang client

2011-03-15 Thread Andy Gross
You need to make sure the crypto app is started.  Try 
'application:start(crypto).' in your application callback module.

- Andy

On Mar 15, 2011, at 10:34 PM, Saurabh Sehgal  wrote:

> Hi, 
> 
> I tried using riak_core_util:unique_id_62/0, but I get the following error:
> 
> ** exception error: bad argument
>  in function  port_control/3
> called as port_control(crypto_drv03,5,
>
> <<131,104,2,114,0,3,100,0,13,110,111,110,111,100,101,64,
>  110,111,104,111,115,116,0,0,0,0,...>>)
>  in call from crypto:control/2
>  in call from riak_core_util:unique_id_62/0
> 
> Any ideas ?
> 
> On Sat, Mar 12, 2011 at 8:40 AM, Sean Cribbs  wrote:
> That is correct. Either the driver itself or the application code needs to 
> generate a key.
> 
> Sean Cribbs 
> Developer Advocate
> Basho Technologies, Inc.
> http://basho.com/
> 
> On Mar 12, 2011, at 1:14 AM, Joshua Partogi wrote:
> 
> > Hi Dan,
> >
> > I guess that also affects other drivers that by default use protobuf
> > [like the riak python driver]? Does that mean the riak python driver
> > needs to generate the unique key on the client side when it uses
> > protobuf?
> >
> > Cheers.
> >
> > On Sat, Mar 12, 2011 at 4:11 AM, Dan Reverri  wrote:
> >> Hi Saurabh,
> >> The protocol buffers interface does not currently support server-side
> >> generated keys. Bug 485 has been filed for this issue:
> >> https://issues.basho.com/show_bug.cgi?id=485
> >> In the mean time you can generate a unique key on the client side. You can
> >> use the code on the server side as a reference for generating unique keys:
> >> https://github.com/basho/riak_core/blob/master/src/riak_core_util.erl#L131
> >> Thanks,
> >> Dan
> >> Daniel Reverri
> >> Developer Advocate
> >> Basho Technologies, Inc.
> >> d...@basho.com
> >>
> >>
> >> On Thu, Mar 10, 2011 at 10:37 PM, Saurabh Sehgal 
> >> wrote:
> >>>
> >>> Hi,
> >>> I was going through riak's documentation and I saw that through the rest
> >>> API, if a key for an object is not specified, riak generates one for you.
> >>> Can I do the same when storing objects through the riak erlang pb
> >>> client programmatically ?
> >>> Thank you,
> >>> Saurabh
> >>>
> >>> --
> >>> Saurabh Sehgal
> >>> E-mail: saurabh@gmail.com
> >>> Phone: 425-269-1324
> >>> LinkedIn: http://www.linkedin.com/pub/1/7a3/436
> >>>
> >>>
> >>
> >>
> >>
> >>
> >
> >
> >
> > --
> > http://twitter.com/jpartogi
> >
> 
> 
> 
> 
> 
> -- 
> Saurabh Sehgal
> E-mail: saurabh@gmail.com
> Phone: 425-269-1324
> LinkedIn: http://www.linkedin.com/pub/1/7a3/436 
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
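Per Dan's suggestion, a client-side unique key can be sketched like this in Python. It is a rough analogue of riak_core_util:unique_id_62, not a port: the exact alphabet ordering and entropy source in the Erlang code may differ.

```python
import hashlib
import os
import string

# Assumed 62-character alphabet; riak_core_util's ordering may differ.
ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase

def unique_id_62() -> str:
    """Hash some local entropy and base62-encode the digest integer."""
    n = int.from_bytes(hashlib.sha1(os.urandom(20)).digest(), "big")
    out = []
    while n:
        n, rem = divmod(n, 62)
        out.append(ALPHABET[rem])
    return "".join(reversed(out))
```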


Re: 'not found' after join

2011-05-05 Thread Andy Gross
Alex's description roughly matches up with some of our plans to address this
issue.

As with almost anything, this comes down to a tradeoff between consistency
and availability.   In the case of joining nodes, making the
join/handoff/ownership claim process more "atomic" requires a higher degree
of consensus from the machines in the cluster.  The current process (which
is clearly non-optimal) allows nodes to join the ring as long as they can
contact one current ring member.  A more atomic process would introduce
consensus issues that might prevent nodes from joining in partitioned
scenarios.

A good solution would probably involve some consistency knobs around the
join process to deal with a spectrum of failure/partition scenarios.

This is something of which we are acutely aware and are actively pursuing
solutions for a near-term release.

- Andy


On Thu, May 5, 2011 at 12:22 PM, Alexander Sicular wrote:

> I'm really loving this thread. Generating great ideas for the way
> things should be... in the future. It seems to me that "the ring
> changes immediately" is actually the problem as Ryan astutely
> mentions. One way the future could look is :
>
> - a new node comes online
> - introductions are made
> - candidate vnodes are selected for migration (<- insert pixie dust magic
> here)
> - the number of simultaneous migrations are configurable, fewer for
> limited interruption or more for quicker completion
> - vnodes are migrated
> - once migration is completed, ownership is claimed
>
> Selecting vnodes for migration is where the unicorn cavalry attack the
> dragons den. If done right(er) the algorithm could be swappable to
> optimize for different strategies. Don't ask me how to implement it,
> I'm only a yellow belt in erlang-fu.
>
> Cheers,
> Alexander
>
> On Thu, May 5, 2011 at 13:33, Ryan Zezeski  wrote:
> > John,
> > All great points.  The problem is that the ring changes immediately when a
> > node is added.  So now, all of a sudden, the preflist is potentially pointing
> > to nodes that don't have the data and they won't have that data until
> > handoff occurs.  The faster that data gets transferred, the less time your
> > clients have to hit 'notfound'.
> > However, I agree completely with what you're saying.  This is just a side
> > effect of how the system currently works.  In a perfect world we wouldn't
> > care how long handoff takes and we would also do some sort of automatic
> > congestion control akin to TCP Reno or something.  The preflist would still
> > point to the "old" partitions until all data has been successfully handed
> > off, and then and only then would we flip the switch for that vnode.  I'm
> > pretty sure that's where we are heading (I say "pretty sure" b/c I just
> > joined the team and haven't been heavily involved in these specific talks
> > yet).
> > It's all coming down the pipe...
> > As for your specific I/O question re handoff_concurrency, you might be right.
> >  I would think it depends on hardware/platform/etc.  I was offering it as a
> > possible stopgap to minimize Greg's pain.  It's certainly a cure to a
> > symptom, not the problem itself.
> > -Ryan
> >
> > On Thu, May 5, 2011 at 1:10 PM, John D. Rowell  wrote:
> >>
> >> Hi Ryan, Greg,
> >>
> >> 2011/5/5 Ryan Zezeski 
> >>>
> >>> 1. For example, riak_core has a `handoff_concurrency` setting that
> >>> determines how many vnodes can concurrently hand off on a given node.  By
> >>> default this is set to 4.  That's going to take a while with your 2048
> >>> vnodes and all :)
> >>
> >> Won't that make the handoff situation potentially worse? From the thread I
> >> understood that the main problem was that the cluster was shuffling too much
> >> data around and thus becoming unresponsive and/or returning unexpected
> >> results (like "not founds"). I'm attributing the concerns more to an
> >> excessive I/O situation than to how long the handoff takes. If the handoff
> >> can be made transparent (no or little side effects) I don't think most
> >> people will really care (e.g. the "fix the cluster tomorrow" anecdote).
> >>
> >> How about using a percentage of available I/O to throttle the vnode
> >> handoff concurrency? Start with 1, and monitor the node's I/O (kinda
> like
> >> 'atop' does, collection CPU, disk and network metrics), if it is below
> the
> >> expected usage, then increase the vnode handoff concurrency, and
> vice-versa.
> >>
> >> I for one would be perfectly happy if the handoff took several hours (even
> >> days) if we could maintain the core riak_kv characteristics intact during
> >> those events. We've all seen long RAID rebuild times, and it's usually
> >> better to just sit tight and keep the rebuild speed low (slower I/O) while
> >> keeping all of the dependent systems running smoothly.
> >>
> >> cheers
> >> -jd
> >
> >
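John's I/O-based throttle could be sketched as a simple additive-increase/multiplicative-decrease loop, akin to the TCP congestion control Ryan mentions. This is an illustration of the idea only: the thresholds, the io_util metric, and the function itself are assumptions, not anything that existed in riak_core:

```python
def adjust_handoff_concurrency(current: int, io_util: float,
                               target: float = 0.7, max_conc: int = 16) -> int:
    """AIMD adjustment of vnode handoff concurrency.

    io_util is the node's observed I/O utilization in [0, 1]; below the
    target we probe upward by one, above it we back off multiplicatively.
    """
    if io_util < target:
        return min(current + 1, max_conc)   # room to spare: add one
    return max(current // 2, 1)             # overloaded: back off hard
```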

Re: Question: Object Not Saved After Save/Delete/Save

2011-06-04 Thread Andy Gross


On Jun 3, 2011, at 3:08 PM, Keith Bennett  
wrote:

> 
> You're suggesting I use mercurial to pull down the HEAD and use that, right?
> 

We're on GitHub now; the code on Bitbucket is likely outdated and should 
probably be taken down:

http://github.com/basho/riak

- Andy
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: LevelDB datastore configuration

2012-01-05 Thread Andy Gross

Hi Karthik,

You can download Riak 1.0.2 for your platform from here: 
http://downloads.basho.com/riak/riak-1.0.2/

You can find information about configuring LevelDB here: 
http://wiki.basho.com/LevelDB.html

Hope this helps!

- Andy

---
Andy Gross 
Principal Architect 
Basho Technologies, Inc.


On Jan 5, 2012, at 7:14 PM, Karthik K wrote:

> Hi Riak Team -
>To begin with, thanks for the wonderful tool and putting it out there. 
> 
>As part of doing benchmarking for some storage engines, came across the 
> leveldb store being the backend for Riak discussed here - 
> http://basho.com/blog/technical/2011/07/01/Leveling-the-Field/  . This is 
> certainly promising. I was wondering if there is a specific doc/wiki that 
> details how to get the version of Riak ( 1.0.2 , as per the Downloads page )  
> and leveldb binary installation to work together and install LevelDB as the 
> storage backend for Riak. 
> 
>Any pointers to doc / appropriate config changes would be useful. Thanks ! 
> 
> --
>   Karthik. 
> 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: initial install - compile error - c_src/ebloom_nifs.cpp:24: error: expected initializer before ‘*’ token

2010-04-17 Thread Andy Gross
The tip of the Riak repo requires Erlang R13B04 to compile; it looks
like you need to upgrade your system Erlang install.


On Apr 17, 2010, at 8:14 PM, ferriswheel   
wrote:



hello,


below is the result i get from

 mkdir getting_started
 cd getting_started
 hg clone http://hg.basho.com/riak/
 cd riak
 make

os gentoo linux
erlang version 13.2.3
g++ (Gentoo 4.3.4 p1.0, pie-10.1.5) 4.3.4

all assistance is appreciated.

regards


./rebar get-deps
==> protobuffs (get-deps)
==> luke (get-deps)
==> mochiweb (get-deps)
==> webmachine (get-deps)
==> mochiweb (get-deps)
==> riak_core (get-deps)
==> riakc (get-deps)
==> ebloom (get-deps)
==> mochiweb (get-deps)
==> webmachine (get-deps)
==> erlang_js (get-deps)
==> riak_kv (get-deps)
==> rel (get-deps)
==> riak (get-deps)
./rebar compile
==> protobuffs (compile)
Compiled src/protobuffs.erl
Compiled src/pokemon_pb.erl
Compiled src/protobuffs_compile.erl
Compiled src/protobuffs_parser.erl
==> luke (compile)
Compiled src/luke_phases.erl
Compiled src/luke_phase_sup.erl
Compiled src/luke_flow_sup.erl
Compiled src/luke.erl
Compiled src/luke_sup.erl
Compiled src/luke_phase.erl
Compiled src/luke_flow.erl
==> mochiweb (compile)
Compiled src/mochifmt_std.erl
Compiled src/mochiweb_headers.erl
Compiled src/mochiweb_sup.erl
Compiled src/mochiweb_http.erl
Compiled src/mochiweb_multipart.erl
Compiled src/reloader.erl
Compiled src/mochiweb_response.erl
Compiled src/mochiweb_charref.erl
Compiled src/mochiweb_cookies.erl
Compiled src/mochiweb_app.erl
Compiled src/mochifmt_records.erl
Compiled src/mochiweb_skel.erl
Compiled src/mochiweb_socket_server.erl
Compiled src/mochiweb_echo.erl
Compiled src/mochiweb_request.erl
Compiled src/mochihex.erl
Compiled src/mochifmt.erl
Compiled src/mochiweb_html.erl
Compiled src/mochinum.erl
Compiled src/mochijson.erl
Compiled src/mochiweb.erl
Compiled src/mochiweb_util.erl
Compiled src/mochijson2.erl
==> webmachine (compile)
Compiled src/webmachine_resource.erl
Compiled src/wmtrace_resource.erl
Compiled src/webmachine_dispatcher.erl
Compiled src/webmachine_multipart.erl
Compiled src/webmachine_logger.erl
Compiled src/webmachine_util.erl
Compiled src/webmachine_router.erl
Compiled src/webmachine_error_handler.erl
Compiled src/webmachine_sup.erl
Compiled src/webmachine_app.erl
Compiled src/webmachine.erl
Compiled src/webmachine_deps.erl
Compiled src/webmachine_mochiweb.erl
Compiled src/webmachine_perf_logger.erl
Compiled src/webmachine_request.erl
Compiled src/webmachine_decision_core.erl
Compiled src/wrq.erl
==> mochiweb (compile)
Compiled src/mochifmt_std.erl
Compiled src/mochiweb_headers.erl
Compiled src/mochiweb_sup.erl
Compiled src/mochiweb_multipart.erl
Compiled src/mochiweb_cover.erl
Compiled src/reloader.erl
Compiled src/mochiweb_charref.erl
Compiled src/mochiweb_response.erl
Compiled src/mochiweb_http.erl
Compiled src/mochiweb_skel.erl
Compiled src/mochiweb_mime.erl
Compiled src/mochiweb_app.erl
Compiled src/mochifmt_records.erl
Compiled src/mochiweb_socket_server.erl
Compiled src/mochiweb_cookies.erl
Compiled src/mochiweb_html.erl
Compiled src/mochiweb_echo.erl
Compiled src/mochifmt.erl
Compiled src/mochihex.erl
Compiled src/mochiglobal.erl
Compiled src/mochinum.erl
Compiled src/mochiweb_request.erl
Compiled src/mochijson.erl
Compiled src/mochijson2.erl
Compiled src/mochiweb_util.erl
Compiled src/mochiweb.erl
==> riak_core (compile)
Compiled src/gen_nb_server.erl
Compiled src/gen_server2.erl
Compiled src/spiraltime.erl
Compiled src/riak_core_test_util.erl
Compiled src/bloom.erl
Compiled src/riak_core_gossip.erl
Compiled src/app_helper.erl
Compiled src/riak_core_sup.erl
Compiled src/riak_core_util.erl
Compiled src/riak_core_ring_manager.erl
Compiled src/vclock.erl
Compiled src/chash.erl
Compiled src/riak_core_app.erl
Compiled src/riak_core_web.erl
Compiled src/riak_core_ring_events.erl
Compiled src/riak_core_bucket.erl
Compiled src/merkerl.erl
Compiled src/riak_core_claim.erl
Compiled src/json_pp.erl
Compiled src/priority_queue.erl
Compiled src/riak_core_ring.erl
Compiled src/slide.erl
==> riakc (compile)
Compiling src/riakclient.proto
Compiled src/riakc_obj.erl
Compiled src/riakc_pb.erl
Compiled src/riakc_pb_socket.erl
==> ebloom (compile)
Compiled src/ebloom.erl
Compiling c_src/ebloom_nifs.cpp
c_src/ebloom_nifs.cpp:24: error: expected initializer before ‘*’ token
c_src/ebloom_nifs.cpp:63: error: invalid conversion from ‘ERL_NIF_TERM (*)(ErlNifEnv*, int, const ERL_NIF_TERM*)’ to ‘void*’
c_src/ebloom_nifs.cpp:63: error: invalid conversion from ‘ERL_NIF_TERM (*)(ErlNifEnv*, int, const ERL_NIF_TERM*)’ to ‘void*’
c_src/ebloom_nifs.cpp:63: error: invalid conversion from ‘ERL_NIF_TERM (*)(ErlNifEnv*, int, const ERL_NIF_TERM*)’ to ‘void*’
c_src/ebloom_nifs.cpp:63: error: invalid conversion from ‘ERL_NIF_TERM (*)(ErlNifEnv*, int, const ERL_NIF_TERM*)’ to ‘void*’
c_src/ebloom_nifs.cpp:63: error: invalid conversion from ‘ERL_NIF_TERM (*)(ErlNifEnv*, int, const ERL_NIF_TERM*)’ to ‘void*’
c_src/ebloo

Re: question about postcommit hooks and patch

2010-05-13 Thread Andy Gross
Hi Bruce,

Thanks for the patch - it's definitely worthwhile and we'll likely commit it
to tip soon.

- Andy

--
Andy Gross 
VP, Engineering
Basho Technologies, Inc.


On Thu, May 13, 2010 at 6:16 PM, Bruce Lowekamp wrote:

> I've been playing with using postcommit hooks in some code.  I
> couldn't find an example, so looking at the source, I think the right
> way to set one up is something like:
>
> PHook = {struct, [ {<<"mod">>, <>}, {<<"fun">>,
> <<"notify_change">>}]},
> RiakClient:set_bucket(<>, [{postcommit, [PHook]}]),
>
> Is there a better way?
>
>
> Also, in debugging my hook, I found that wrapping the hook so I could
> see exceptions made it much easier.  I made the following patch that I
> think might be useful to others, as well:
>
> diff -r e836ea266eca apps/riak_kv/src/riak_kv_put_fsm.erl
> --- a/apps/riak_kv/src/riak_kv_put_fsm.erl  Thu May 13 17:28:01 2010
> -0400
> +++ b/apps/riak_kv/src/riak_kv_put_fsm.erl  Thu May 13 14:50:07 2010
> -0700
> @@ -314,7 +314,7 @@
>  invoke_hook(precommit, Mod0, Fun0, undefined, RObj) ->
> Mod = binary_to_atom(Mod0, utf8),
> Fun = binary_to_atom(Fun0, utf8),
> -Mod:Fun(RObj);
> +wrap_hook(Mod, Fun, RObj);
>  invoke_hook(precommit, undefined, undefined, JSName, RObj) ->
> case riak_kv_js_manager:blocking_dispatch({{jsfun, JSName}, RObj}) of
> {ok, <<"fail">>} ->
> @@ -331,13 +331,22 @@
>  invoke_hook(postcommit, Mod0, Fun0, undefined, Obj) ->
> Mod = binary_to_atom(Mod0, utf8),
> Fun = binary_to_atom(Fun0, utf8),
> -proc_lib:spawn(fun() -> Mod:Fun(Obj) end);
> +proc_lib:spawn(fun() -> wrap_hook(Mod,Fun,Obj) end);
>  invoke_hook(postcommit, undefined, undefined, _JSName, _Obj) ->
> error_logger:warning_msg("Javascript post-commit hooks aren't
> implemented");
>  %% NOP to handle all other cases
>  invoke_hook(_, _, _, _, RObj) ->
> RObj.
>
> +wrap_hook(Mod, Fun, Obj)->
> +try Mod:Fun(Obj)
> +catch
> +EType:X ->
> +error_logger:error_msg("problem invoking hook ~p:~p ->
> ~p:~p~n~p~n",
> +   [Mod, Fun, EType, X,
> erlang:get_stacktrace()]),
> +fail
> +end.
> +
>  merge_robjs(RObjs0,AllowMult) ->
> RObjs1 = [X || X <- [riak_kv_util:obj_not_deleted(O) ||
> O <- RObjs0], X /= undefined],
>
>
>
> Bruce
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
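[Editor's note] The defensive pattern in Bruce's patch above — invoking a user-supplied hook inside a try/catch, logging the failure, and returning a sentinel instead of crashing the caller — can be sketched outside Erlang as well. The following JavaScript is illustrative only (all names are hypothetical; it is not part of any Riak API):

```javascript
// Sketch of the wrap_hook idea from the patch above: run a user hook
// defensively, log failures, and return a sentinel instead of throwing.
function wrapHook(hook, obj) {
  try {
    return hook(obj);
  } catch (err) {
    console.error("problem invoking hook:", err.message);
    return "fail"; // mirrors the patch returning the atom 'fail'
  }
}

// A well-behaved hook and a crashing one, for demonstration.
const goodHook = (obj) => ({ ...obj, seen: true });
const badHook = () => { throw new Error("boom"); };

console.log(wrapHook(goodHook, { key: "k" })); // { key: 'k', seen: true }
console.log(wrapHook(badHook, { key: "k" }));  // fail
```

The point, as in the patch, is that a buggy hook produces a logged error and a well-defined return value rather than an unhandled crash in the commit path.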


Re: include mapByFields in riak by default

2010-06-12 Thread Andy Gross
I'm already all up in Bugzilla today, so I just created
http://issues.basho.com/show_bug.cgi?id=237 to track this.  I'm in favor of
adding it too.

Andy Gross 
VP, Engineering
Basho Technologies, Inc.
http://basho.com


On Sat, Jun 12, 2010 at 9:53 AM, Sean Cribbs  wrote:

> Best thing is to open a Bugzilla issue so it can get scheduled for a
> release.  I for one am in favor of it.
>
> Sean Cribbs 
> Developer Advocate
> Basho Technologies, Inc.
> http://basho.com/
>
> On Jun 12, 2010, at 9:44 AM, francisco treacy wrote:
>
> > So what do you guys think? It is a bad idea?
> >
> > 2010/6/6 francisco treacy :
> >> I find myself using a lot this kind of functions (this one
> >> inspired/borrowed from Sean Cribbs) - a sort of SQL's conditions
> >> ("where").
> >>
> >> Riak.mapByFields = function(value, keyData, fields) {
> >>   if(!value.not_found){
> >> var object = Riak.mapValuesJson(value)[0];
> >> for(field in fields) {
> >> if(object[field] != fields[field])
> >> return [];
> >> }
> >> return [object];
> >>   } else {
> >> return [];
> >>   }
> >> }
> >>
> >> Usage:
> >> {"inputs":"test",
> >>  "query":[{"map":{"language":"javascript",
> >>  "name":"Riak.mapByFields",
> >>  "arg": { "city": "Paris" },
> >>  "keep":true}}]
> >> }
> >>
> >> Any real-world app *always* needs to query data by fields.
> >>
> >> Would it be a good idea to ship this by default in Riak?  I do believe
> >> so because it is sufficiently general purpose (just as
> >> Riak.mapValuesJson).
> >>
> >> A lot of libraries (for instance devise-ripple) could build upon this
> >> function without forcing clients/users to alter their app.config,
> >> define a js_source, and include the function in all their nodes.
> >>
> >> What do you think?
> >>
> >> Francisco
> >>
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
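[Editor's note] The filtering logic of the proposed Riak.mapByFields can be exercised standalone. In the sketch below, the Riak value object and Riak.mapValuesJson are stubbed out with plain JSON handling — this is a self-contained illustration of the filter, not the function as shipped in Riak:

```javascript
// Standalone sketch of the mapByFields filter from the thread above.
// JSON.parse on value.values[0].data stands in for Riak.mapValuesJson,
// which decodes the stored object inside a real map phase.
function mapByFields(value, keyData, fields) {
  if (!value.not_found) {
    var object = JSON.parse(value.values[0].data);
    for (var field in fields) {
      if (object[field] !== fields[field]) return []; // any mismatch filters the object out
    }
    return [object]; // all requested fields matched
  }
  return [];
}

// Two stubbed Riak values, only one of which matches { city: "Paris" }.
var paris = { values: [{ data: JSON.stringify({ city: "Paris", name: "a" }) }] };
var rome  = { values: [{ data: JSON.stringify({ city: "Rome",  name: "b" }) }] };

console.log(mapByFields(paris, null, { city: "Paris" })); // [ { city: 'Paris', name: 'a' } ]
console.log(mapByFields(rome,  null, { city: "Paris" })); // []
```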


Re: include mapByFields in riak by default

2010-06-12 Thread Andy Gross
Hi Francisco,

I just pushed this to Riak tip (
http://hg.basho.com/riak/changeset/d026576d5330).

Feel free to check it out whenever and let us know if it works for you.

Thanks,

Andy

On Sat, Jun 12, 2010 at 12:22 PM, francisco treacy <
francisco.tre...@gmail.com> wrote:

> Great! Thanks for creating the issue.
>
> 2010/6/12 Andy Gross :
> >
> > I'm already all up in Bugzilla today, so I just
> > created http://issues.basho.com/show_bug.cgi?id=237 to track this.  I'm
> in
> > favor of adding it too.
> > Andy Gross 
> > VP, Engineering
> > Basho Technologies, Inc.
> > http://basho.com
> >
> > On Sat, Jun 12, 2010 at 9:53 AM, Sean Cribbs  wrote:
> >>
> >> Best thing is to open a Bugzilla issue so it can get scheduled for a
> >> release.  I for one am in favor of it.
> >>
> >> Sean Cribbs 
> >> Developer Advocate
> >> Basho Technologies, Inc.
> >> http://basho.com/
> >>
> >> On Jun 12, 2010, at 9:44 AM, francisco treacy wrote:
> >>
> >> > So what do you guys think? It is a bad idea?
> >> >
> >> > 2010/6/6 francisco treacy :
> >> >> I find myself using a lot this kind of functions (this one
> >> >> inspired/borrowed from Sean Cribbs) - a sort of SQL's conditions
> >> >> ("where").
> >> >>
> >> >> Riak.mapByFields = function(value, keyData, fields) {
> >> >>   if(!value.not_found){
> >> >> var object = Riak.mapValuesJson(value)[0];
> >> >> for(field in fields) {
> >> >> if(object[field] != fields[field])
> >> >> return [];
> >> >> }
> >> >> return [object];
> >> >>   } else {
> >> >> return [];
> >> >>   }
> >> >> }
> >> >>
> >> >> Usage:
> >> >> {"inputs":"test",
> >> >>  "query":[{"map":{"language":"javascript",
> >> >>  "name":"Riak.mapByFields",
> >> >>  "arg": { "city": "Paris" },
> >> >>  "keep":true}}]
> >> >> }
> >> >>
> >> >> Any real-world app *always* needs to query data by fields.
> >> >>
> >> >> Would it be a good idea to ship this by default in Riak?  I do
> believe
> >> >> so because it is sufficiently general purpose (just as
> >> >> Riak.mapValuesJson).
> >> >>
> >> >> A lot of libraries (for instance devise-ripple) could build upon this
> >> >> function without forcing clients/users to alter their app.config,
> >> >> define a js_source, and include the function in all their nodes.
> >> >>
> >> >> What do you think?
> >> >>
> >> >> Francisco
> >> >>
> >> >
> >> > ___
> >> > riak-users mailing list
> >> > riak-users@lists.basho.com
> >> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >>
> >>
> >> ___
> >> riak-users mailing list
> >> riak-users@lists.basho.com
> >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
> >
>



--
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: badarg ets delete

2010-09-16 Thread Andy Gross
Hi Michael,

These errors are almost certainly harmless; they are thrown when empty,
non-owned vnodes get shut down.

It appears that in some cases, the underlying ets table might already be
deleted/GC'd by the time BackendModule:stop tries to explicitly delete it.
 I've opened this bug to track the issue:
http://issues.basho.com/show_bug.cgi?id=723

- Andy

--
Andy Gross 
VP, Engineering
Basho Technologies, Inc.
http://basho.com




On Thu, Sep 16, 2010 at 3:59 PM, Michael Colussi  wrote:

>
> Hey guys I have an application using Riak 0.12 that does puts, gets, and
> updates.  It works fine but I get these random error reports in my logs.
>  Any ideas?
>
> ERROR <0.149.0> ** Generic server <0.149.0> terminating
> ** Last message in was stop
> ** When Server state == {state,139315}
> ** Reason for termination ==
> ** {badarg,[{ets,delete,[139315]},
> {riak_kv_ets_backend,srv_stop,1},
> {riak_kv_ets_backend,handle_call,3},
> {gen_server,handle_msg,5},
> {proc_lib,init_p_do_apply,3}]}
> ERROR <0.149.0> crash_report [[{initial_call,
>
>  {riak_kv_ets_backend,init,['Argument__1']}},
>{pid,<0.149.0>},
>{registered_name,[]},
>{error_info,
> {exit,
>  {badarg,
>   [{ets,delete,[139315]},
>{riak_kv_ets_backend,srv_stop,1},
>{riak_kv_ets_backend,handle_call,3},
>{gen_server,handle_msg,5},
>{proc_lib,init_p_do_apply,3}]},
>  [{gen_server,terminate,6},
>   {proc_lib,init_p_do_apply,3}]}},
>{ancestors,
>
>  [<0.148.0>,riak_core_vnode_sup,riak_core_sup,
>  <0.58.0>]},
>{messages,[]},
>{links,[<0.148.0>]},
>{dictionary,[]},
>{trap_exit,false},
>{status,running},
>{heap_size,377},
>{stack_size,24},
>{reductions,243}],
>   []]
>  ERROR <0.148.0> ** State machine <0.148.0> terminating
> ** Last event in was timeout
> ** When State == active
> **  Data  == {state,159851741583067506678528028578343455274867621888,
> riak_kv_vnode,
>
>  {state,159851741583067506678528028578343455274867621888,
>riak_kv_ets_backend,<0.149.0>,
>{kv_lru,100,147509,143412,151606},
>{dict,0,16,16,8,80,48,
>
> {[],[],[],[],[],[],[],[],[],[],[],[],[],
>   [],[],[]},
>
> {{[],[],[],[],[],[],[],[],[],[],[],[],[],
>[],[],[]}}},
>true},
> undefined,none}
> ** Reason for termination =
> ** {{badarg,[{ets,delete,[139315]},
>  {riak_kv_ets_backend,srv_stop,1},
>  {riak_kv_ets_backend,handle_call,3},
>  {gen_server,handle_msg,5},
>  {proc_lib,init_p_do_apply,3}]},
> {gen_server,call,[<0.149.0>,stop]}}
> ERROR <0.148.0> crash_report [[{initial_call,
> {riak_core_vnode,init,['Argument__1']}},
>{pid,<0.148.0>},
>{registered_name,[]},
>{error_info,
> {exit,
>  {{badarg,
>[{ets,delete,[139315]},
>  {riak_kv_ets_backend,srv_stop,1},
> {riak_kv_ets_backend,handle_call,3},
> {gen_server,handle_msg,5},
> {proc_lib,init_p_do_apply,3}]},
>   {gen_server,call,[<0.149.0>,stop]}},
>  [{gen_fsm,terminate,7},
>   {proc_lib,init_p_do_apply,3}]}},
>{ancestors,
>
>  [riak_core_vnode_sup,riak_core_sup,<0.58.0>]},
>{messages,
> [{'EXIT',<

Re: generated ids in java client

2010-10-28 Thread Andy Gross
Hi Jon,

This is an often-requested feature that doesn't exist in the current Java
client - currently one needs to generate their own IDs.

Given the demand for this feature we'll likely implement it soon.  I've
created a bugzilla issue to track this:

http://issues.basho.com/show_bug.cgi?id=859

- Andy

On Thu, Oct 28, 2010 at 1:43 PM, Jon Brisbin  wrote:

> How do I get the ID when I want to store an object in Riak but let the
> server generate the ID for me? I've tried passing "null" and the empty
> string to the RiakObject constructor. Neither seems to work (I get a 400
> error back).
>
> I'm using the 0.11.0 java client against 0.12.0 server (from Homebrew on OS
> X 10.6)
>
> Thanks!
>
> Jon Brisbin
> http://jbrisbin.com/web2
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>



-- 
--
Andy Gross 
VP, Engineering
Basho Technologies, Inc.
http://basho.com
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: listkeys

2010-10-31 Thread Andy Gross
Hi Andrea,

You're right - listing buckets in the PB client could be easier.   We're in
the process of merging the currently-supported Java client and Kresten's PB
client.   In the meantime, it looks like you'll need to manually construct
the ByteString argument to listKeys:

String bucketName = "myBucket";
ByteString bucket = ByteString.copyFromUtf8(bucketName);
this.rc.listKeys(bucket);

- Andy


On Fri, Oct 29, 2010 at 1:27 PM, Andrea Campolonghi
wrote:

> Hi,
>
> I am using http://github.com/krestenkrab/riak-java-pb-client and I cannot
> find a way to get info about a specifc bucket and/or keys.
>
> I am looking for something like:
>
>  bucket = client.getBucket();
>
> keys = bucket.getKeys();
>
> What I can is to output the whole riak object tree like this :
>
> ByteString[] buckets = this.rc.listBuckets();
>
> for(ByteString bucket : buckets){
>
> KeySource keys = this.rc.listKeys(bucket);
>
> }
>
>
> but this is completely unuseful if I cannot say I want keys for THIS bucket
> etc...
>
>
> Any suggestion?
>
>
>
>
> --
> Andrea Campolonghi
>
> Cell : +39 347 2298435
> and...@andreacfm.com
> http://www.andreacfm.com
>
> Railo Team
> and...@getrailo.org
> http://getrailo.org
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: java client out of memory

2010-11-03 Thread Andy Gross
Hi Andrea -

It looks like you're trying to connect to the HTTP port with the protobufs
client.   When I replace port 8098 with 8087 in this sample code, it works
fine.

Hope that helps,

Andy

On Wed, Nov 3, 2010 at 12:50 PM, Andrea Campolonghi
wrote:

> Hi,
>
> I am trying to use the riak-java-pb-client.
> Sample very very basic code
>
> public static void main(String[] args) {
>  try{
> RiakClient rc = new RiakClient("localhost",8098);
>  String bucket = "bucket";
> String key = "key";
>  String val = "test";
> RiakObject ro = new RiakObject(bucket,key,val);
>  rc.store(ro);
>
> RiakObject[] ros = rc.fetch(bucket, key);
>  for(RiakObject r : ros){
> System.out.println(r.toString());
>  }
>
> }catch(IOException e){
>  e.printStackTrace();
> }
>
>
> Running this simple test code from Eclipse requires at least 1.5 GB of RAM.
> Lower than this I get an out of memory exception.
>
> While If I run the code with that huge memory I get this error:
>
> java.io.EOFException
>
> at java.io.DataInputStream.readFully(DataInputStream.java:180)
>
> at java.io.DataInputStream.readFully(DataInputStream.java:152)
>
> at com.trifork.riak.RiakConnection.receive(RiakConnection.java:89)
>
> at com.trifork.riak.RiakClient.store(RiakClient.java:384)
>
> at com.trifork.riak.RiakClient.store(RiakClient.java:363)
>
> at Runner.main(Runner.java:22)
>
>
>
> Any suggestion???
>
>
> Andrea
>
>
>
> --
> Andrea Campolonghi
>
> Cell : +39 347 2298435
> and...@andreacfm.com
> http://www.andreacfm.com
>
> Railo Team
> and...@getrailo.org
> http://getrailo.org
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: java client out of memory

2010-11-03 Thread Andy Gross
Andrea,

I'm not sure what you're asking for.   Try connecting to port 8087 in your
code and it should work fine for you.

- Andy


On Wed, Nov 3, 2010 at 1:21 PM, Andrea Campolonghi
wrote:

> Andy,
>
> any more info about this?
>
> Thanks
>
> Andrea
>
> 2010/11/3 Andy Gross 
>
>
>> Hi Andrea -
>>
>> It looks like you're trying to connect to the HTTP port with the protobufs
>> client.   When I replace port 8098 with 8087 in this sample code, it works
>> fine.
>>
>> Hope that helps,
>>
>> Andy
>>
>> On Wed, Nov 3, 2010 at 12:50 PM, Andrea Campolonghi <
>> acampolon...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I am trying to use the riak-java-pb-client.
>>> Sample very very basic code
>>>
>>> public static void main(String[] args) {
>>>  try{
>>> RiakClient rc = new RiakClient("localhost",8098);
>>>  String bucket = "bucket";
>>> String key = "key";
>>>  String val = "test";
>>> RiakObject ro = new RiakObject(bucket,key,val);
>>>  rc.store(ro);
>>>
>>> RiakObject[] ros = rc.fetch(bucket, key);
>>>  for(RiakObject r : ros){
>>> System.out.println(r.toString());
>>>  }
>>>
>>> }catch(IOException e){
>>>  e.printStackTrace();
>>> }
>>>
>>>
> >>> Running this simple test code from Eclipse requires at least 1.5 GB of RAM.
>>> Lower than this I get an out of memory exception.
>>>
>>> While If I run the code with that huge memory I get this error:
>>>
>>> java.io.EOFException
>>>
>>> at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>
>>> at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>
>>> at com.trifork.riak.RiakConnection.receive(RiakConnection.java:89)
>>>
>>> at com.trifork.riak.RiakClient.store(RiakClient.java:384)
>>>
>>> at com.trifork.riak.RiakClient.store(RiakClient.java:363)
>>>
>>> at Runner.main(Runner.java:22)
>>>
>>>
>>>
>>> Any suggestion???
>>>
>>>
>>> Andrea
>>>
>>>
>>>
>>> --
>>> Andrea Campolonghi
>>>
>>> Cell : +39 347 2298435
>>> and...@andreacfm.com
>>> http://www.andreacfm.com
>>>
>>> Railo Team
>>> and...@getrailo.org
>>> http://getrailo.org
>>>
>>>
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>
>>
>>
>>
>
>
> --
> Andrea Campolonghi
>
> Cell : +39 347 2298435
> and...@andreacfm.com
> http://www.andreacfm.com
>
> Railo Team
> and...@getrailo.org
> http://getrailo.org
>
>
>


-- 
--
Andy Gross 
VP, Engineering
Basho Technologies, Inc.
http://basho.com
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: java driver : lisKeys() returned also deleted keys

2010-11-20 Thread Andy Gross
Hi Andrea,

Key deletion doesn't take effect immediately in Riak - keys are first
marked with a tombstone and then reaped asynchronously.   If you
insert a sleep between the delete() and keys() calls (try a few seconds) -
does the test pass?

- Andy


On Sat, Nov 20, 2010 at 10:02 AM, Andrea Campolonghi  wrote:

> I am having this issue with the Java driver:
> https://github.com/krestenkrab/riak-java-pb-client
>
> After deleting a key the listKeys method still return it.
> The following test case put and delete one simple key and fails when
> counting the listKeys item.
>
> Any Suggestion?
>
> public class RiakCacheTest extends TestCase {
>
>  private RiakClient rc;
>
> private String host = "172.16.194.134";
>
> private int port = 8087;
>
> private String bucket = "bucket";
>
>   @Override
>
> protected void setUp() throws Exception {
>
>  rc = new RiakClient(this.host,this.port);
>
>  rc.setClientID(this.getClass().toString());
>
> }
>
>   public void testRemove(){
>
>  String key = "key";
>
>  String value = "Andrea";
>
>   RiakObject obj = new RiakObject(this.bucket, key, value);
>
>   try{
>
>  this.rc.store(obj);
>
>  assertTrue(getValue(key).equals(value));
>
>  assertEquals(keys().size(),1);
>
>this.rc.delete(this.bucket, key);
>
>  assertNull(getValue(key));
>
>  assertEquals(keys().size(),0);
>
>}catch(IOException e){
>
>  e.printStackTrace();
>
>  }
>
>   }
>
>  private String getValue(String key){
>
>  String fetched = "";
>
>  try{
>
>  RiakObject[] ros = this.rc.fetch(this.bucket, key);
>
>  for(RiakObject ro : ros){
>
>   return ro.getValue().toStringUtf8();
>
>  }
>
>  }catch(IOException e){
>
>  e.printStackTrace();
>
>  }
>
>  return null;
>
> }
>
>
>  private List<String> keys() {
>
>  ArrayList<String> result = new ArrayList<String>();
>
>  ByteString bucket = ByteString.copyFromUtf8(this.bucket);
>
>   try{
>
>   KeySource keys = this.rc.listKeys(bucket);
>
>   while(keys.hasNext()){
>
>   String key = keys.next().toStringUtf8();
>
>   result.add(key);
>
>   }
>
>  }catch(IOException e){
>
>  e.printStackTrace();
>
>  }
>
>  return result;
>
> }
>
>
> }
>
> --
> Andrea Campolonghi
>
> Cell : +39 347 2298435
> acampolon...@gmail.com
> and...@andreacfm.com
> http://www.andreacfm.com
>
> Railo Team
> and...@getrailo.org
> http://getrailo.org
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
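[Editor's note] Because the reap is asynchronous, test code that asserts on listKeys immediately after a delete should poll rather than check once. A language-agnostic sketch of that retry loop (plain JavaScript with a simulated key listing, not the Java client):

```javascript
// Poll a condition until it holds or the retry budget is exhausted -
// the pattern suggested above for waiting out Riak's asynchronous
// tombstone reap. In real code you would sleep between attempts.
function waitFor(condition, retries) {
  for (let i = 0; i < retries; i++) {
    if (condition()) return true;
    // real code: sleep a few hundred ms here between polls
  }
  return false;
}

// Simulated listKeys: the tombstoned key disappears after three polls,
// standing in for the delayed reap described in the reply above.
let polls = 0;
function listKeys() {
  polls += 1;
  return polls > 3 ? [] : ["key"];
}

console.log(waitFor(() => listKeys().length === 0, 10)); // true
```

A fixed sleep (as suggested in the reply) also works for a quick test, but a bounded poll fails fast when the key really is gone and still tolerates a slow reap.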


Re: using the erlang api to auto generate keys

2010-12-17 Thread Andy Gross
There's an open issue to track the addition of this feature:

http://issues.basho.com/show_bug.cgi?id=859

- Andy


On Fri, Dec 17, 2010 at 9:39 AM, Grant Schofield  wrote:

> This isn't currently built into the client, but you can reuse the function
> we use internally, riak_core_util:unique_id_62() located here:
> https://github.com/basho/riak_core/blob/master/src/riak_core_util.erl#L131
>
> Grant Schofield
> Developer Advocate
> Basho Technologies, Inc.
>
> 
> On Dec 15, 2010, at 11:49 AM, carson li wrote:
>
> hey,
>
> i have a quick question about auto generating keys.
> i know using curl post, all you have to do is specify the bucket, and riak
> will generate the key for you.
>
> is it possible for the erlang library to generate the key for you? if not,
> is there an easy to to generate unique keys using some built in function?
> any leads will help, thanks.
>
> --
> Carson Li
> Liquid Analytics
> carson...@liquidanalytics.com
> 647-273-1024
>
>  ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com