Re: New Counters - client support

2013-07-10 Thread Jeremiah Peschka
If a counter doesn't exist, it's created when you increment/decrement the counter. Otherwise it returns your API's equivalent of "nothing to see". So, in CorrugatedIron, you'd call RiakClient.IncrementCounter("bucket", "counter", 1); After that command completes, the counter has a value of 1; a
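The create-on-increment semantics described above can be sketched as a local simulation (plain Python, not a Riak client; the class and method names are illustrative only):

```python
# Local simulation of Riak 1.4 counter semantics: incrementing a
# counter that does not exist implicitly creates it at the delta.
class CounterBucket:
    def __init__(self):
        self._counters = {}  # counter name -> integer value

    def increment(self, name, delta=1):
        # Create-on-increment: a missing counter starts at 0.
        self._counters[name] = self._counters.get(name, 0) + delta

    def value(self, name):
        # Fetching a missing counter returns "nothing" (None here).
        return self._counters.get(name)

bucket = CounterBucket()
assert bucket.value("page_views") is None   # not created yet
bucket.increment("page_views", 1)           # creates it with value 1
assert bucket.value("page_views") == 1
bucket.increment("page_views", -1)          # decrement works the same way
assert bucket.value("page_views") == 0
```

A created counter persists at 0 after a matching decrement; only a counter that was never incremented returns "nothing".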

New Counters - client support

2013-07-10 Thread Y N
Hi, The counters stuff looks awesome; can't wait to use it. Is this already supported via the currently available clients (specifically, the Java 1.1.1 client)? Also, when can we expect some tutorial / documentation around using counters? I looked at the GitHub link, however, some use ca

Re: Upgrade path 1.4.0

2013-07-10 Thread Toby Corkindale
On 10/07/13 23:14, Jared Morrow wrote: We try to post the packages to all the right places before we make the announcement so I'd highly recommend you don't just auto-update Riak packages when they hit apt/yum. Ah, actually we don't automatically update them, but someone was performing a pass

Re: Erlang PB client

2013-07-10 Thread Sean Cribbs
We intend to overhaul the Erlang client soon. In the meantime, Riak CS has done exactly what Jeremy has suggested. Look here: https://github.com/basho/riak_cs/blob/develop/src/riak_cs_riakc_pool_worker.erl On Wed, Jul 10, 2013 at 5:46 PM, Konstantin Kalin <konstantin.ka...@gmail.com> wrote: > O

Re: Erlang PB client

2013-07-10 Thread Konstantin Kalin
Oops… was looking at the wrong column. Sorry, and thanks for the advice. Thank you, Konstantin. On Jul 10, 2013, at 3:41 PM, Jeremy Ong wrote: > The X there indicates that it does not support connection pooling out of the > box in contrast to the check. I'd look at poolboy (to use in conjunction w

Re: Erlang PB client

2013-07-10 Thread Jeremy Ong
The X there indicates that it does not support connection pooling out of the box, in contrast to the check mark. I'd look at poolboy (to use in conjunction with riakc_pb_socket) and riakpool (which pulls in riak-erlang-client as a dependency). On Wed, Jul 10, 2013 at 3:33 PM, Konstantin Kalin < konstan
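The check-out/check-in pattern that poolboy provides can be sketched generically. This Python sketch only illustrates the pooling idea (all names are invented; a real Erlang setup would wrap riakc_pb_socket workers in poolboy):

```python
import queue

# Minimal connection-pool sketch in the spirit of poolboy: a fixed
# number of pre-opened connections are checked out for a request and
# checked back in afterwards. "make_conn" stands in for opening a
# protocol-buffers socket to Riak (illustrative only).
class Pool:
    def __init__(self, make_conn, size=5):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(make_conn())

    def checkout(self, timeout=None):
        # Blocks until a connection is free, like poolboy's checkout.
        return self._idle.get(timeout=timeout)

    def checkin(self, conn):
        self._idle.put(conn)

# Usage: check out, do work, always check back in.
pool = Pool(make_conn=lambda: object(), size=2)
conn = pool.checkout()
try:
    pass  # e.g. issue a get/put over the connection here
finally:
    pool.checkin(conn)
```

The try/finally mirrors what poolboy's transaction helper does for you: a crashed request must not leak its connection.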

Re: Trouble with some Apt repositories

2013-07-10 Thread Jared Morrow
This should now be fixed, sorry for any troubles it might have caused. -Jared On Wed, Jul 10, 2013 at 3:25 PM, Jared Morrow wrote: > Riak Users, > > Yesterday we had to rebuild 1.4.0 packages and at the same time had to > refresh the Apt/Yum repositories with those new packages. Something in

Erlang PB client

2013-07-10 Thread Konstantin Kalin
Looking at http://docs.basho.com/riak/latest/references/Client-Libraries/ I see that the Erlang Riak client should support Cluster connections/pools. But looking at the Erlang Riak client source code, I would say that it doesn't support Cluster connections/pools out of the box, and I have to develop my own c

Re: Riak PUT failure, lost new value, and went back to old value.

2013-07-10 Thread zhulei
Quick update: it looks like this issue is resolved in 1.4.0.1; the same program runs on 1.4.0.1 (RedHat 5) without a problem so far. Thanks to everybody, Lei

Trouble with some Apt repositories

2013-07-10 Thread Jared Morrow
Riak Users, Yesterday we had to rebuild 1.4.0 packages and at the same time had to refresh the Apt/Yum repositories with those new packages. Something in this process must have broken down. We have had someone report a problem with the Ubuntu precise repo filed here: https://github.com/basho/ria

Re: riak_kv_memory_backend replication

2013-07-10 Thread Jeremiah Peschka
Correct. Unless you've specified an n value of 1 for the bucket. --- Jeremiah Peschka - Founder, Brent Ozar Unlimited MCITP: SQL Server 2008, MVP Cloudera Certified Developer for Apache Hadoop On Wed, Jul 10, 2013 at 12:57 PM, kpandey wrote: > In a multi node cluster with a bucket in memory_bac
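The n value mentioned here controls how many replicas each object gets, regardless of backend. A toy sketch of the idea (this mirrors the preference-list concept, not Riak's actual partition math; all names are illustrative):

```python
import hashlib

# Toy sketch of replica placement: hash bucket/key onto a ring of
# nodes, then take the next n_val nodes as the "preference list".
def preflist(nodes, bucket, key, n_val=3):
    h = int(hashlib.sha1(f"{bucket}/{key}".encode()).hexdigest(), 16)
    start = h % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(n_val)]

nodes = ["node1", "node2", "node3", "node4", "node5"]

# With the default n_val=3, three distinct nodes hold a copy,
# which is why a memory_backend insert shows up on other nodes:
replicas = preflist(nodes, "mem_bucket", "some_key", n_val=3)
assert len(set(replicas)) == 3

# With n_val=1 the object lives on a single node only:
assert len(preflist(nodes, "mem_bucket", "some_key", n_val=1)) == 1
```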

Riak 1.4 - Changing backend through API

2013-07-10 Thread Y N
Hi, I just upgraded to 1.4 and have updated my client to the Java 1.1.1 client. According to the release notes, it says all bucket properties are now configurable through the PB API. I tried setting my backend through the Java client, however I get an Exception "Backend not supported for PB".

riak_kv_memory_backend replication

2013-07-10 Thread kpandey
In a multi node cluster with a bucket in memory_backend, will inserting an object in this bucket be replicated to other nodes?

Re: Migration from memcachedb to riak

2013-07-10 Thread Andrew Thompson
On Wed, Jul 10, 2013 at 08:19:23AM -0700, Howard Chu wrote: > If you only need a pure key/value store, you should consider > memcacheDB using LMDB as its backing store. It's far faster than > memcacheDB using BerkeleyDB. > http://symas.com/mdb/memcache/ > > I doubt LevelDB accessed through a

Re: Using Hadoop distcp to load data into Riak-CS

2013-07-10 Thread Kelly McLaughlin
Hi Dan. I do not know much about distcp, but if it is the case that it uses a PUT (copy) operation to transfer data then distcp will not currently work with RiakCS. Support for that operation is on our roadmap, but it is not done yet unfortunately. Kelly On Wed, Jul 10, 2013 at 6:20 AM, Sajner,

[ANNC] Riak 1.4.0

2013-07-10 Thread Jared Morrow
Riak Users, As was somewhat hinted at by the apt/yum repo update, we are happy to announce the release of Riak 1.4.0. A blog post giving a high level overview of the release can be found here: http://basho.com/basho-announces-availability-of-riak-1-4/ Does that mention counters? I believe it do

Re: Migration from memcachedb to riak

2013-07-10 Thread Howard Chu
On 10 July 2013 10:49, Edgar Veiga <edgarmve...@gmail.com> wrote: Hello all! I have a couple of questions that I would like to address all of you guys, in order to start this migration the best as possible. Context: - I'm responsible for the migration of a pure k

Re: Help with local restore for dev enviroment

2013-07-10 Thread Mark Wagner
Thanks for the info!!! I appreciate the help and I am glad to know that it should just work! I don't really need the data from the whole cluster while developing the script. I just need to get my queries working etc... One issue I am facing is I don't know the structure of the data. So I am tr

Re: Upgrade path 1.4.0

2013-07-10 Thread Jared Morrow
We try to post the packages to all the right places before we make the announcement so I'd highly recommend you don't just auto-update Riak packages when they hit apt/yum. I'm glad you eventually got all your nodes upgraded. -Jared On Wed, Jul 10, 2013 at 5:42 AM, Toby Corkindale < toby.corkin

Re: Help with local restore for dev enviroment

2013-07-10 Thread Justin Sheehy
Hi, Mark. You've already received a little advice generally so I won't pile on that part, but one thing stood out to me: > My client has sent me a backup from one of their cluster nodes. bitcask > data, rings and config. Unless I'm misunderstanding what you're doing, what you're working on wi

RE: Using Hadoop distcp to load data into Riak-CS

2013-07-10 Thread Sajner, Daniel G
Hi. Sorry about the "fake sender" in the subject of the original message. Our mail security system is funny like that... Anyhow, we discovered that distcp puts a temp file name in place and then tries to do a PUT (copy) that copies the file to the permanent name. From the documentation that do

Re: Upgrade path 1.4.0

2013-07-10 Thread Toby Corkindale
Yeah, I didn't think 1.4.0 was into final release yet either -- yet it came through on the Debian and Ubuntu apt repositories automatically this evening. - Original Message - From: "Guido Medina" To: "riak-users" Sent: Wednesday, 10 July, 2013 9:39:02 PM Subject: Re: Upgrade path 1.4.0

Re: Upgrade path 1.4.0

2013-07-10 Thread Guido Medina
Hi Toby, I'm sure someone from Basho will answer soon; I just pointed you in the "release notes" direction. I have only overlooked the release notes until we decide to migrate to 1.4.0 when it is final (right now on rc1). HTH, Guido. On 10/07/13 12:29, Toby Corkindale wrote: Thanks Guido. Loo

Re: Upgrade path 1.4.0

2013-07-10 Thread Toby Corkindale
Thanks Guido. Looks like we've upgraded to 1.4.0 completely now and the cluster is back up. I'm not sure of the exact root cause, but what we were seeing was that too many nodes went down for the ring to be healthy, and then when nodes were restarted they waited for the ring to appear for a whi

Re: Migration from memcachedb to riak

2013-07-10 Thread damien krotkine
Hi, Indeed you're using very big keys. If you can't change the keys, then yes you'll have to use leveldb. However I wonder why you need keys that long :) On 10 July 2013 13:04, Edgar Veiga wrote: > Hi Damien, > > Well let's dive into this a little bit. > > I told you guys that bitcask was not

Re: Upgrade path 1.4.0

2013-07-10 Thread Guido Medina
Release notes: https://github.com/basho/riak/blob/1.4/RELEASE-NOTES.md Maybe related to this? Known Issues: leveldb 1.3 to 1.4 conversion. The first execution of 1.4.0 leveldb using a 1.3.x or 1.

Re: Help with local restore for dev enviroment

2013-07-10 Thread Shane McEwan
On 09/07/13 22:24, Mark Wagner wrote: Hey all, I'm new to riak and I'm working on an ETL script that needs to pull data from a riak cluster. My client has sent me a backup from one of their cluster nodes: bitcask data, rings and config. *snip* At this point I believe I should be able to star

Re: Migration from memcachedb to riak

2013-07-10 Thread Edgar Veiga
Hi Damien, Well, let's dive into this a little bit. I told you guys that bitcask was not an option due to a bad past experience with couchbase (sorry, in the previous post I wrote couchdb), which uses the same architecture as bitcask: keys in memory and values on disk. We started the migration to

Upgrade path 1.4.0

2013-07-10 Thread Toby Corkindale
Hi, some of our nodes upgraded to Riak 1.4.0 and are now refusing to start and join the cluster. Is there documentation on the upgrade path from 1.3.2 to 1.4.0? It appears we have accidentally begun this journey, and I don't know if it's easier to go back or forwards now... PS. It would have be

Re: Migration from memcachedb to riak

2013-07-10 Thread Guido Medina
For the sake of using the right capacity planner, use the latest GA Riak version link, which is 1.3.2, and probably come back after 1.4 is fully released, which should happen really soon. Also check the release notes between 1.3.2 and 1.4; they might give you ideas/good news. http://docs.basho.com/riak/

Re: Migration from memcachedb to riak

2013-07-10 Thread damien krotkine
On 10 July 2013 11:03, Edgar Veiga wrote: > Hi Guido. > > Thanks for your answer! > > Bitcask it's not an option due to the amount of ram needed.. We would need > a lot more of physical nodes so more money spent... > Why is it not an option? If you use Bitcask, then each node needs to store its

Re: Migration from memcachedb to riak

2013-07-10 Thread Edgar Veiga
Guido, we're not using Java, and that won't be an option. The technology stack is PHP and/or node.js. Thanks anyway :) Best regards On 10 July 2013 10:35, Edgar Veiga wrote: > Hi Damien, > > We have ~11 keys and we are using ~2TB of disk space. > (The average object length will be ~2000

Re: Migration from memcachedb to riak

2013-07-10 Thread Edgar Veiga
Hi Damien, We have ~11 keys and we are using ~2TB of disk space. (The average object length will be ~2000 bytes). This is a lot to fit in memory (we have bad past experiences with couchDB...). Thanks for the rest of the tips! On 10 July 2013 10:13, damien krotkine wrote: > > ( first

Re: Migration from memcachedb to riak

2013-07-10 Thread Guido Medina
If you are using Java you could store Riak keys as binaries using Jackson smile format, supposedly it will compress faster and better than default Java serialization, we use it for very large keys (say a key with a large collection of entries), the drawback is that you won't be able to easily r

Re: Migration from memcachedb to riak

2013-07-10 Thread damien krotkine
(first post here, hi everybody...) If you don't need MR, 2i, etc., then Bitcask will be faster. You just need to make sure all your keys fit in memory, which should not be a problem. How many keys do you have and what's their average length? About the values, you can save a lot of space by choos
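Since Bitcask keeps every key in an in-memory keydir, the RAM question above comes down to simple arithmetic. A sketch of the estimate follows; note the per-key overhead constant is an assumed placeholder for illustration, not Basho's official figure (check the Bitcask capacity-planning docs for the real one):

```python
# Back-of-the-envelope Bitcask RAM estimate: every key lives in the
# in-memory keydir, so RAM ~= key_count * (per_key_overhead + key_len).
PER_KEY_OVERHEAD = 40  # bytes per keydir entry -- ASSUMED figure

def keydir_ram_bytes(key_count, avg_key_len):
    return key_count * (PER_KEY_OVERHEAD + avg_key_len)

# Hypothetical example: 100 million keys averaging 100 bytes each.
ram = keydir_ram_bytes(100_000_000, 100)
print(ram / 2**30, "GiB")  # roughly 13 GiB for one full copy of the keys
```

Whatever the estimate gives, it still has to be multiplied by n_val and divided across the nodes to see what each machine actually holds.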

Re: Migration from memcachedb to riak

2013-07-10 Thread Guido Medina
Hi Edgar, You don't need to compress your objects; LevelDB will do that for you, and if you are using Protocol Buffers it will compress the network traffic for you too without compromising performance or any CPU-bound process. There isn't anything special about the LevelDB config; I would suggest
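A quick stdlib illustration of how much a typical serialized object shrinks under compression (zlib here merely stands in for the Snappy block compression LevelDB applies internally; the object shape is invented):

```python
import json
import zlib

# A hypothetical ~2000-byte serialized object with the kind of
# repetition real records tend to have.
obj = {"user_id": 42, "tags": ["reader"] * 150, "bio": "x" * 500}
raw = json.dumps(obj).encode()
packed = zlib.compress(raw)

# Repetitive serialized data compresses dramatically, which is why
# compressing again client-side buys little on top of the backend.
print(len(raw), "->", len(packed), "bytes")
assert len(packed) < len(raw)
```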

Re: Migration from memcachedb to riak

2013-07-10 Thread Edgar Veiga
Hi Guido. Thanks for your answer! Bitcask isn't an option due to the amount of RAM needed; we would need a lot more physical nodes, so more money spent... Instead we're using fewer machines with SSD disks to improve eLevelDB performance. Best regards On 10 July 2013 09:58, Guido Medina

Re: Migration from memcachedb to riak

2013-07-10 Thread Guido Medina
Well, I rushed my answer before: if you want performance, you probably want Bitcask; if you want compression, then LevelDB. The following links should help you decide better: http://docs.basho.com/riak/1.2.0/tutorials/choosing-a-backend/Bitcask/ http://docs.basho.com/riak/1.2.0/tutorials/choosin

Re: Migration from memcachedb to riak

2013-07-10 Thread Guido Medina
Then you are better off with Bitcask; that will be the fastest in your case (no 2i, no searches, no M/R). HTH, Guido. On 10/07/13 09:49, Edgar Veiga wrote: Hello all! I have a couple of questions that I would like to address all of you guys, in order to start this migration the best as possi

Migration from memcachedb to riak

2013-07-10 Thread Edgar Veiga
Hello all! I have a couple of questions that I would like to address to all of you guys, in order to start this migration as well as possible. Context: - I'm responsible for the migration of a pure key/value store that for now is stored in memcacheDB. - We're serializing PHP objects and stori