Re: Transfer database to new machines fail

2012-12-07 Thread Rapsey
Attach to riak: riak attach
Run this: os:cmd("ulimit -n").

Sergej

On Fri, Dec 7, 2012 at 7:45 AM, kser wrote:
> I enter ulimit.
> return: unlimited
>
> I also cat /proc/sys/fs/file-max
> return: 407441
>
> I followed this guide and increased the limit to 81788200
>
> http://www.cyberciti.biz/f
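The check Sergej describes can be sketched as a short shell session. The attach transcript is illustrative only (node name and prompt depend on your vm.args); the last line is runnable anywhere:

```shell
# Ask the running Riak VM what file-descriptor limit it actually sees
# (illustrative transcript -- requires a running node):
#   $ riak attach
#   (riak@127.0.0.1)1> os:cmd("ulimit -n").
#
# For comparison, the limit of the current login shell, which is what any
# process started from this shell inherits:
ulimit -n
```

If the two numbers disagree, the node was started from an environment with a different limit than your interactive shell.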

Re: Transfer database to new machines fail

2012-12-07 Thread Shane McEwan
Or this:

grep files /proc/`pgrep -u riak beam`/limits

Tricky if you can't get Riak to stay up long enough, though.

On 07/12/12 09:59, Rapsey wrote:
> Attach to riak: riak attach
> run this: os:cmd("ulimit -n").
> Sergej
> On Fri, Dec 7, 2012 at 7:45 AM, kser <mailto:kong_...@hotmail.com> wrote:
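Shane's /proc check can be made slightly more defensive. A sketch (the `riak` user and `beam` process name are from the post; the fallback to the current shell's PID is added here so the command demonstrates the output format even without a running node):

```shell
# Show the effective open-file limit of the Riak VM via /proc.
# If no beam process is running under the riak user, fall back to this
# shell's own PID so the grep still shows what the output looks like.
pid=$(pgrep -u riak -o beam 2>/dev/null || echo $$)
grep -i 'open files' "/proc/$pid/limits"
```

The "Max open files" row of /proc/PID/limits reports the soft and hard limits the process actually has, regardless of what `ulimit -n` says in your shell.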

Re: Transfer database to new machines fail

2012-12-07 Thread kser
When I run this: riak attach
return: Node is not running!

ulimit -n
return: 1024

Any hints to fix this? Thanks a lot for helping.

--
View this message in context: http://riak-users.197444.n3.nabble.com/Transfer-database-to-new-machines-fail-tp4026198p4026203.html
Sent from the Riak Users mail

Re: Transfer database to new machines fail

2012-12-07 Thread Shane McEwan
For Ubuntu you need to:

Add:
*    -    nofile    1048576
to /etc/security/limits.d/max_open_files.conf (You may need to create this file.)

Uncomment or add:
session    required    pam_limits.so
in /etc/pam.d/su

On 07/12/12 10:52, kser wrote:
> When i run this: riak attach
> return: Node is not runnin
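Spelled out as file contents, the two Ubuntu edits look like this (the 1048576 value is the one from the post; tune it to your workload):

```
# /etc/security/limits.d/max_open_files.conf  (create if it does not exist)
*    -    nofile    1048576

# /etc/pam.d/su -- make sure this line is present and uncommented
session    required    pam_limits.so
```

The pam_limits line matters because the init script typically starts Riak via su; without it, the nofile setting in limits.d is never applied to the riak user's session.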

Odd question

2012-12-07 Thread Martin Streicher
Thanks for the help earlier this week. I have an odd question... I have an Identity model being used with Omniauth::Identity. All is well -- when someone registers, the credentials (email and encrypted password) are saved in Riak. However, if in some other code, I retrieve an Identity record

Re: Odd question

2012-12-07 Thread Martin Streicher
I have a test case that demonstrates the issue now...

describe 'Copy Integrity' do
  it 'remains unique' do
    z = create :zid
    i = create :identity, zid_id: z.key
    Identity.find_by_index('zid_id', z.key).count.should eq(1)
    i.email = 'uni...@unique.com'
    i.save
    Id

Re: Odd question

2012-12-07 Thread Martin Streicher
Duh. (Probably.) Since email is the unique key for Identity, I suppose changing the key generates a new record rather than changing the old one. Thus, what I want to do is likely delete the old ones, those where the key has essentially been deprecated. Or is there a way to change a key? O
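Since Riak keys are immutable, "changing a key" really means copy-then-delete. A minimal Python sketch of that pattern, using a plain dict to stand in for a bucket (names are hypothetical; real code would fetch, store under the new key, and delete the old key through your Riak client):

```python
def rename_key(bucket, old_key, new_key):
    """'Rename' a record: write it under new_key, then delete old_key.

    `bucket` is just a dict used for illustration. With a real Riak client
    the same three steps apply: get(old_key), put(new_key), delete(old_key).
    """
    if old_key not in bucket:
        raise KeyError(old_key)
    bucket[new_key] = bucket.pop(old_key)
    return bucket

records = {"old@example.com": {"zid_id": "z1"}}
rename_key(records, "old@example.com", "new@example.com")
```

This is also why the stale records Martin describes accumulate: nothing deletes the old key unless the application does it explicitly.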

Deleted objects

2012-12-07 Thread Daniil Churikov
Hello, recently we had an issue with deleted objects in Riak. We use Erlang MapReduce to access some buckets, and a few days ago we discovered an error report: our MapReduce failed with function_clause. The input was this:

{input,{{ok,{r_object,<<"superbucket">>,<<"superkey">>,[{r_content,{dict,4,16,16

Re: Transfer database to new machines fail

2012-12-07 Thread David Lowell
I worked on different ways of making the new ulimit stick, and by far the best way I found is to create the file "/etc/default/riak" with the ulimit -n command in it:

ulimit -n 10

This file gets sourced by the riak init script at startup, ensuring that the shell from which riak gets starte
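For reference, the file David describes is just a shell fragment that the init script sources before launching the node. A sketch (the limit value here is an example; the number in the quoted message is cut off):

```
# /etc/default/riak -- sourced by the riak init script before the node starts
ulimit -n 65536
```

Because the limit is raised inside the same shell that execs the node, it applies no matter which user or terminal invoked the init script.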

Re: Deleted objects

2012-12-07 Thread Evan Vigil-McClanahan
That error is from a riak object tombstone being included in the results stream. You need to check the object metadata for the <<"X-Riak-Deleted">> header being true, and then ignore that object in your map function. On Fri, Dec 7, 2012 at 10:01 AM, Daniil Churikov wrote: > Hello, recently we ha
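The skip-tombstones logic Evan describes can be sketched language-agnostically. Here it is in Python over a plain-dict object representation (hypothetical representation only; a real Riak map function would be Erlang or JavaScript and would read the metadata through the Riak object API):

```python
def map_skip_tombstones(obj):
    # A deleted object's tombstone carries X-Riak-Deleted == "true" in its
    # metadata. Emit nothing for tombstones; otherwise emit the value.
    if obj["metadata"].get("X-Riak-Deleted") == "true":
        return []
    return [obj["value"]]

live = {"metadata": {}, "value": "hello"}
dead = {"metadata": {"X-Riak-Deleted": "true"}, "value": ""}
```

The important point is that the check happens inside the map phase, so tombstones never reach later phases where a function_clause would otherwise be raised.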

Re: Deleted objects

2012-12-07 Thread Daniil Churikov
Ok, but how fast are objects really deleted? The delete was triggered by a read, and a result with <<"X-Riak-Deleted">> was returned to me for several days. What if I never do ordinary reads, only mapreduce?

--
View this message in context: http://riak-users.197444.n3.nabble.com/Deleted-objects-tp4

Does changing backend configs apply to existing data?

2012-12-07 Thread Ian Ha
Hi, We have a situation where our production riak cluster is missing config information. Specifically, we are running bitcask as our backend and we did not define an expiry_secs value, meaning data lives forever (which we don't want). Question: if we change the config and define expiry_secs, does

Re: Deleted objects

2012-12-07 Thread Evan Vigil-McClanahan
There are cases where tombstones are deleted very slowly. Since you could get one at any time (unless you never delete objects), you need to write your mapreduce functions to skip over tombstones. On Fri, Dec 7, 2012 at 10:48 AM, Daniil Churikov wrote: > Ok, but how fast objects really deleted?

RE: Does changing backend configs apply to existing data?

2012-12-07 Thread Nathan Wilken
Correction: Sorry, I meant that expiry_secs applies to a backend. All data in a given backend is subject to expiration--as defined in the backend configuration--by the merge process each time it runs.

From: Nathan Wilken
Sent: Friday, December 07, 2012 3:27 PM
To: I

RE: Does changing backend configs apply to existing data?

2012-12-07 Thread Nathan Wilken
The expiry_secs parameter applies to a bucket. This means all data in a given bucket is subject to expiration--as defined in the bucket configuration--by the merge process each time it runs. From: riak-users [riak-users-boun...@lists.basho.com] on behalf of Ian
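For concreteness, expiry_secs lives in the bitcask section of app.config. A sketch (the path and the one-day value are examples):

```
%% app.config -- bitcask backend settings
{bitcask, [
    {data_root, "/var/lib/riak/bitcask"},
    {expiry_secs, 86400}   %% drop entries older than one day at merge time
]}
```

Expired entries stop being returned as soon as they age past the threshold, but the disk space is only reclaimed when bitcask's merge process runs.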

Second of multi-node setup on Azure is invisible

2012-12-07 Thread Kevin Burton
I am trying to follow the instructions at http://docs.basho.com/riak/latest/tutorials/installation/Installing-on-Windows-Azure/. For now I am setting up two nodes. The first node works fine. But with the second node I am not able to specify a DNS name (since it is not stand alone). So effectively

RE: Second of multi-node setup on Azure is invisible

2012-12-07 Thread Kevin Burton
Also I get the following error when trying to join ('node1') to the cluster:

[root@node1 ~]# riak-admin cluster join riak@
Attempting to restart script through sudo -H -u riak
Join failed. Try again in a few moments.

From: Kevin Burton [mailto:rkevinb

RE: Second of multi-node setup on Azure is invisible

2012-12-07 Thread Kevin Burton
I tried to restart the VMs and when I log back in to node1 I get:

[azureuser@node1 ~]$ sudo riak-admin cluster join riak@
Attempting to restart script through sudo -H -u riak
sudo: unable to change directory to /var/lib/riak: No such file or directory
sudo: unable to execute /bin/bash: No suc
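For reference, once the riak user's home directory and DNS problems are sorted out, the standard staged-clustering sequence looks like this (the hostname is a placeholder; the node name after `riak@` in the quoted commands was elided):

```
riak-admin cluster join riak@node2.example.com
riak-admin cluster plan
riak-admin cluster commit
```

Nothing changes cluster membership until the plan is committed, so a failed join attempt like the one above leaves the cluster untouched.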

python map reduce and secondary indexes

2012-12-07 Thread David Montgomery
Hi, Given that MapReduce is the primary way of getting data out of Riak, and I use the Python API, I am hard pressed to find any simple examples, not even in the officially supported Riak Python API. Below is how I add a record to riak:

id = """%s:%s:%s:%s:%s""" % (str(uuid4()),campaig
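The quoted snippet is cut off, but the key scheme is clearly a colon-delimited string prefixed with a UUID. A runnable sketch of that scheme (the field names are made up for illustration; only the uuid4 prefix and the colon layout come from the post):

```python
from uuid import uuid4

def make_key(*fields):
    """Build a colon-delimited key of the form '<uuid>:<field1>:...:<fieldN>'."""
    return ":".join([str(uuid4())] + [str(f) for f in fields])

# Hypothetical fields standing in for the truncated ones in the message:
key = make_key("campaign-42", "zone-7", "us-east", "2012-12-07")
```

Keys built this way sort and split predictably, which makes them easy to filter on in a map phase or to mirror into a secondary index.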