Yokozuna kv write timeouts on 1.4 (yz-merge-1.4.0)

2013-07-15 Thread Dave Martorana
Hi everyone. First post, if I leave anything out just let me know.

I have been using vagrant in testing Yokozuna with 1.3.0 (the official
0.7.0 "release") and it runs swimmingly. When 1.4 was released and someone
pointed me to the YZ integration branch, I decided to give it a go.

I realize that YZ probably doesn’t support 1.4 yet, but here are my
experiences.

- Installs fine
- Using default stagedevrel with a 5-node setup
- Without yz enabled in app.config, kv accepts writes and reads
- With yz enabled on dev1 and nowhere else, kv accepts writes and reads,
creates yz index, associates index with bucket, does not index content
- With yz enabled on 4/5 nodes, kv stops accepting writes (timeout)

Ex:

(env)➜  curl -v -H 'content-type: text/plain' -XPUT '
http://localhost:10018/buckets/players/keys/name' -d "Ryan Zezeski"
* Adding handle: conn: 0x7f995a804000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7f995a804000) send_pipe: 1, recv_pipe: 0
* About to connect() to localhost port 10018 (#0)
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 10018 (#0)
> PUT /buckets/players/keys/name HTTP/1.1
> User-Agent: curl/7.30.0
> Host: localhost:10018
> Accept: */*
> content-type: text/plain
> Content-Length: 12
>
* upload completely sent off: 12 out of 12 bytes
< HTTP/1.1 503 Service Unavailable
< Vary: Accept-Encoding
* Server MochiWeb/1.1 WebMachine/1.9.2 (someone had painted it blue) is not
blacklisted
< Server: MochiWeb/1.1 WebMachine/1.9.2 (someone had painted it blue)
< Date: Mon, 15 Jul 2013 19:54:50 GMT
< Content-Type: text/plain
< Content-Length: 18
<
request timed out
* Connection #0 to host localhost left intact

Here is my Vagrantfile:

https://gist.github.com/themartorana/460a52bb3f840010ecde

and build script for the server:

https://gist.github.com/themartorana/e2e0126c01b8ef01cc53

Hope this helps.

Dave
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Data population of Yokozuna on key-path in schema?

2013-07-17 Thread Dave Martorana
Hi,

I realize I may be way off-base, but I noticed the following slide in
Ryan’s recent Ricon East talk on Yokozuna:

http://cl.ly/image/3s1b1v2w2x12

Does the schema pick out values based on key-path automatically? For
instance,

val...

automatically gets mapped to the "commit_repo" field definition for the
schema?

Thanks!

Dave


Re: Data population of Yokozuna on key-path in schema?

2013-07-18 Thread Dave Martorana
Does the JSON extractor work in a similar fashion, or does it follow its
own rules? We don’t use XML anywhere (but JSON everywhere). Thanks!

Dave


On Thu, Jul 18, 2013 at 9:31 AM, Ryan Zezeski  wrote:

> As Eric said, the XML extractor causes the nested elements to become
> concatenated by an underscore.  "Extractor" is a Yokozuna term.  It is the
> process by which a Riak Object is mapped to a Solr document.  In the case
> of a Riak Object whose value is XML the XML is flattened by a)
> concatenating nested elements with '_' and b) concatenating attributes with
> '@' (this can be changed if necessary, just ask).  Yokozuna provides a
> resource to test how a given object would be extracted.
>
> curl -X PUT -i -H 'content-type: application/xml' 'http://host:port/extract'
> --data-binary @some.xml
>
> This will return a JSON representation of the field-values extracted from
> the object.  You can use a json pretty printer like jsonpp to make it
> easier to read.
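The flattening rule described above can be sketched roughly like this (an illustration only, not the extractor's actual code; the document and field names are made up):

```python
import xml.etree.ElementTree as ET

def extract(xml_str):
    """Toy version of the flattening rule: nested element names are
    joined with '_', attribute names are appended with '@'."""
    fields = {}

    def walk(elem, prefix):
        name = elem.tag if not prefix else prefix + "_" + elem.tag
        for attr, val in elem.attrib.items():
            fields[name + "@" + attr] = val
        text = (elem.text or "").strip()
        if text:
            fields[name] = text
        for child in elem:
            walk(child, name)

    walk(ET.fromstring(xml_str), "")
    return fields

doc = "<commit><repo>basho/yokozuna</repo><author id='42'>rz</author></commit>"
print(extract(doc))
# {'commit_repo': 'basho/yokozuna', 'commit_author@id': '42', 'commit_author': 'rz'}
```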
>
> -Z
>
>
>
>
> On Wed, Jul 17, 2013 at 8:51 PM, Eric Redmond  wrote:
>
>> That's correct. The XML extractor nests by element name, separating
>> elements by an underscore.
>>
>> Eric
>>
>> On Jul 17, 2013, at 12:46 PM, Dave Martorana  wrote:
>>
>> Hi,
>>
>> I realize I may be way off-base, but I noticed the following slide in
>> Ryan’s recent Ricon East talk on Yokozuna:
>>
>> http://cl.ly/image/3s1b1v2w2x12
>>
>> Does the schema pick out values based on key-path automatically? For
>> instance,
>>
>> val...
>>
>> automatically gets mapped to the "commit_repo" field definition for the
>> schema?
>>
>> Thanks!
>>
>> Dave
>>
>>
>>
>>
>>
>


Re: riak_kv_memory_backend in production

2013-07-18 Thread Dave Martorana
In using riak_kv_memory_backend as a replacement of sorts for Redis or
memcached, is there any serious problem with using a single node and an
n_val of 1? I can’t (yet) afford 5 high-RAM servers for a caching layer,
and was looking to replace our memcached box with a Redis one. In the
interest of reducing disparate-technology reliance, running a single-node
riak_kv_memory_backend instance would be preferable, unless there are
serious concerns *aside* from data loss.

For us, it’s still an LRU-destroy-model cache, and losing it to machine
failure is only a minor, temporary impediment. Any reason not to run a
single-node memory-only “cluster” as a replacement for a single-machine
memcached or Redis instance?
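If it helps anyone weighing the same trade-off: replication can be dropped per bucket over the HTTP interface, so only the cache bucket runs unreplicated. A sketch (bucket name and port are illustrative):

```shell
# Hypothetical "cache" bucket; n_val=1 means each key has a single replica.
curl -XPUT -H 'content-type: application/json' \
    'http://localhost:8098/buckets/cache/props' \
    -d '{"props": {"n_val": 1}}'
```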


On Thu, Jul 18, 2013 at 11:23 AM, Guido Medina wrote:

>  Forgot to mention, with N=2 should he be able to have only 4 nodes and
> focus on RAM per node rather than 5?
>
> I know it's not recommended, but shouldn't N=2 reduce the minimum recommended
> nodes to 4?
>
> Guido.
>
>
> On 18/07/13 16:21, Guido Medina wrote:
>
> Since the data he is requiring to store is only "transient", would it make
> sense to set N=2 for performance? Or will N=2 have the opposite effect due
> to the number of nodes holding each replica?
>
> Guido.
>
> On 18/07/13 16:15, Jared Morrow wrote:
>
> Kumar,
>
>  We have a few customers who use the memory backend.  The first example I
> could find (with the help of our CSE team) uses the memory backend on 8
> machines with 12 GB of RAM each.
>
>  I know you are just testing right now, but we'd suggest using a 5-node
> minimum.  With N=3 on a 3-node cluster you could be writing multiple
> replicas to the same machine.
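Jared's point about replicas doubling up on small clusters can be illustrated with a toy model of the ring: assign 64 partitions round-robin to the nodes and check, for each key's preference list of 3 consecutive partitions, whether all replicas land on distinct nodes. Riak's actual claim algorithm is more sophisticated than this; the sketch only shows the wrap-around effect.

```python
RING_SIZE = 64  # partitions on the ring
N_VAL = 3       # replicas per key

def colocated_preflists(num_nodes):
    """Count preference lists where fewer than N_VAL distinct nodes
    own the 3 consecutive partitions (i.e. some node holds 2+ replicas)."""
    owner = [p % num_nodes for p in range(RING_SIZE)]  # round-robin claim
    bad = 0
    for start in range(RING_SIZE):
        nodes = {owner[(start + j) % RING_SIZE] for j in range(N_VAL)}
        if len(nodes) < N_VAL:
            bad += 1
    return bad

# With 3 nodes, the ring wrap (64 % 3 != 0) puts two replicas on one node
# for some keys; with 5 nodes the wrap stays clean in this toy model.
print(colocated_preflists(3))
print(colocated_preflists(5))
```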
>
>  Good luck in your testing,
> -Jared
>
>
>
>
> On Thu, Jul 18, 2013 at 8:38 AM, kpandey  wrote:
>
>> Are there known production installation of riak that uses
>> riak_kv_memory_backend.  We have a need to store transient data just in
>> memory ( never hitting persistent store). I'm testing riak on aws with 3
>> node cluster and looks good so far.   Just wanted to find out what kind of
>> setup people are using in production.
>>
>> Thanks
>> Kumar
>>
>>
>>
>>
>>
>
>
>
>
>
>
>
>
>


Yokozuna in Riak Control

2013-07-19 Thread Dave Martorana
Hey everyone,

A feature request, if I may - RAM monitoring in Riak Control currently
shows Riak RAM usage and all-other usage. Would love if it showed
SOLR/Lucene RAM usage as well in future, integrated Yokozuna builds.

Cheers!

Dave


Re: Yokozuna kv write timeouts on 1.4 (yz-merge-1.4.0)

2013-07-19 Thread Dave Martorana
Looks great so far. Importing lots of data now; I will let you know if I run
into anything else. Thanks!


On Thu, Jul 18, 2013 at 5:06 PM, Ryan Zezeski  wrote:

> Okay.  Yokozuna has been targeted to Riak 1.4.0.  Please note that the
> integration branch name changed to rz-yz-merge-1.4.0 (note the added
> rz- prefix and different version).
>
>
> https://github.com/basho/yokozuna/blob/master/docs/INSTALL.md#install-from-github
>
> Make sure to do a fresh checkout to avoid any lingering old dependencies.
>  Let me know if you run into more issues.
>
> -Z
>
>
> On Thu, Jul 18, 2013 at 9:33 AM, Ryan Zezeski  wrote:
>
>> Dave,
>>
>> I'm currently in the process of re-targeting Yokozuna to 1.4.0 for the 0.8.0
>> release.  I'll ping this thread when the transition is complete.
>>
>> -Z
>>
>>
>> On Wed, Jul 17, 2013 at 8:53 PM, Eric Redmond  wrote:
>>
>>> Dave,
>>>
>>> Your initial line was correct. Yokozuna is not yet compatible with 1.4.
>>>
>>> Eric
>>>
>>> On Jul 15, 2013, at 1:00 PM, Dave Martorana  wrote:
>>>
>>> Hi everyone. First post, if I leave anything out just let me know.
>>>
>>> I have been using vagrant in testing Yokozuna with 1.3.0 (the official
>>> 0.7.0 "release") and it runs swimmingly. When 1.4 was released and someone
>>> pointed me to the YZ integration branch, I decided to give it a go.
>>>
>>> I realize that YZ probably doesn’t support 1.4 yet, but here are my
>>> experiences.
>>>
>>> - Installs fine
>>> - Using default stagedevrel with 5 node setup
>>> - Without yz enabled in app.config, kv accepts writes and reads
>>> - With yz enabled on dev1 and nowhere else, kv accepts writes and reads,
>>> creates yz index, associates index with bucket, does not index content
>>> - With yz enabled on 4/5 nodes, kv stops accepting writes (timeout)
>>>
>>> Ex:
>>>
>>> (env)➜  curl -v -H 'content-type: text/plain' -XPUT '
>>> http://localhost:10018/buckets/players/keys/name' -d "Ryan Zezeski"
>>> * Adding handle: conn: 0x7f995a804000
>>> * Adding handle: send: 0
>>> * Adding handle: recv: 0
>>> * Curl_addHandleToPipeline: length: 1
>>> * - Conn 0 (0x7f995a804000) send_pipe: 1, recv_pipe: 0
>>> * About to connect() to localhost port 10018 (#0)
>>> *   Trying 127.0.0.1...
>>> * Connected to localhost (127.0.0.1) port 10018 (#0)
>>> > PUT /buckets/players/keys/name HTTP/1.1
>>> > User-Agent: curl/7.30.0
>>> > Host: localhost:10018
>>> > Accept: */*
>>> > content-type: text/plain
>>> > Content-Length: 12
>>> >
>>> * upload completely sent off: 12 out of 12 bytes
>>> < HTTP/1.1 503 Service Unavailable
>>> < Vary: Accept-Encoding
>>> * Server MochiWeb/1.1 WebMachine/1.9.2 (someone had painted it blue) is
>>> not blacklisted
>>> < Server: MochiWeb/1.1 WebMachine/1.9.2 (someone had painted it blue)
>>> < Date: Mon, 15 Jul 2013 19:54:50 GMT
>>> < Content-Type: text/plain
>>> < Content-Length: 18
>>> <
>>> request timed out
>>> * Connection #0 to host localhost left intact
>>>
>>> Here is my Vagrantfile:
>>>
>>> https://gist.github.com/themartorana/460a52bb3f840010ecde
>>>
>>> and build script for the server:
>>>
>>> https://gist.github.com/themartorana/e2e0126c01b8ef01cc53
>>>
>>> Hope this helps.
>>>
>>> Dave
>>>
>>>
>>>
>>>
>>>
>>>
>>
>


Re: Error starting Yokozuna

2013-08-02 Thread Dave Martorana
Hey,

Being on slow dev hardware (VMs in Vagrant), I added the following line to
the yokozuna section of app.config:

{solr_startup_wait, 2500}

Solved my time-out issues with Solr.
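For context, the line sits inside the yokozuna section of app.config; a minimal sketch (the enabled flag shown is an assumption about the rest of the section, only solr_startup_wait is the setting from this thread):

```erlang
%% etc/app.config (fragment)
{yokozuna, [
    {enabled, true},
    {solr_startup_wait, 2500}
]}
```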

Cheers,

Dave


On Mon, Jul 29, 2013 at 10:01 AM, Jeremiah Peschka <
jeremiah.pesc...@gmail.com> wrote:

> Hi Erik,
>
> Yokozuna is killed off if it takes more than 5 seconds to start. There are
> several items in yokozuna HEAD that should fix this [1][2].
>
> I think the best option is to build YZ from github:
> https://github.com/basho/yokozuna/blob/master/docs/INSTALL.md#install-from-github
>
> I recently rebuilt using `make stagedevrel` and this worked for me.
>
> [1]: https://github.com/basho/yokozuna/pull/127
> [2]: https://github.com/basho/yokozuna/pull/136
>
> ---
> Jeremiah Peschka - Founder, Brent Ozar Unlimited
> MCITP: SQL Server 2008, MVP
> Cloudera Certified Developer for Apache Hadoop
>
>
> On Sun, Jul 28, 2013 at 2:08 PM, Erik Andersen wrote:
>
>> Hi!
>>
>> I have built the latest version of Yokozuna from source following the
>> instructions at
>> https://github.com/basho/yokozuna/blob/master/docs/INSTALL.md but when I
>> try and start any riak node I get errors (configured using stagedevrel).
>>
>> I have Solr 4.4 installed on a Tomcat 7 server running the default port
>> 8080.
>>
>> I'm running CentOS 6.2.
>>
>> In solr.log I get :
>> 2013-07-28 19:57:54,793 [WARN] @XmlConfiguration.java:411 Config
>> error at > name="monitor">data/yz/monitor.sh10
>>
>> In error.log I get:
>> 2013-07-28 19:58:07.669 [error] <0.1615.0> CRASH REPORT Process
>> yz_solr_proc with 0 neighbours exited with reason: bad return value:
>> {error,"Solr didn't start in alloted time"} in gen_server:init_it/6 line 332
>> 2013-07-28 19:58:07.686 [error] <0.1614.0> Supervisor yz_solr_proc_sup
>> had child yz_solr_proc started with yz_solr_proc:start_link("data/yz",
>> "10016", "10015") at undefined exit with reason bad return value:
>> {error,"Solr didn't start in alloted time"} in context start_error
>> 2013-07-28 19:58:07.690 [error] <0.1613.0> Supervisor yz_sup had child
>> yz_solr_proc_sup started with yz_solr_proc_sup:start_link() at undefined
>> exit with reason shutdown in context start_error
>>
>> I feel I'm missing something concerning solr, but what...?
>>
>> Regards,
>> Erik
>>
>>
>>
>
>
>


Re: Practical Riak cluster choices in AWS (number of nodes? AZ's?)

2013-08-12 Thread Dave Martorana
Jared - thanks for the links. I'm in the same boat as Brady, weighing
deployment options in AWS.

Jeremiah - isn't EBS the only option once your data starts reaching into
the hundreds-of-gigs?

Dave


On Sun, Aug 11, 2013 at 8:57 PM, Jared Morrow  wrote:

> +1 to what Jeremiah said, putting a 4 or 5 node cluster in each US West
> and US East using MDC between them would be the optimum solution.  I'm also
> not buying consistent latencies between AZ's, but I've also not tested it
> personally in a production environment.  We have many riak-users members on
> AWS, so hopefully more experienced people will chime in.
>
> If you haven't seen them already, here's what I have in my "Riak on AWS"
> bookmark folder:
>
> http://media.amazonwebservices.com/AWS_NoSQL_Riak.pdf
> http://docs.basho.com/riak/latest/ops/tuning/aws/
> http://basho.com/riak-on-aws-deployment-options/
>
> -Jared
>
>
>
>
> On Sun, Aug 11, 2013 at 6:11 PM, Jeremiah Peschka <
> jeremiah.pesc...@gmail.com> wrote:
>
>> I'd be wary of using EBS backed nodes for Riak - with only a single
>> ethernet connection, it will be very easy to saturate the max of 1000 Mbps
>> available in a single AWS NIC (unless you're using cluster compute
>> instances). I'd be more worried about temporarily losing contact with a
>> node through network saturation than through AZ failure, truthfully.
>>
>> The beauty of Riak is that a node can drop and you can replace it with
>> minimal fuss. Use that to your advantage and make every node in the cluster
>> disposable.
>>
>> As far as doubling up in one AZ goes - if you're worried about AZ
>> failure, you should treat each AZ as a separate data center and design your
>> failure scenarios accordingly. Yes, Amazon says you should put one Riak node
>> in each AZ; I'm not buying that. With no guarantee around latency, and no
>> control over the link between DCs, you need to be very careful how much of that
>> latency you're willing to introduce into your application.
>>
>> Were I in your position, I'd stand up a 5 node cluster in US-WEST-2 and
>> be done with it. I'd consider Riak EE for my HA/DR solution once the
>> business decides that off-site HA/DR is something it wants/needs.
>>
>>
>> ---
>> Jeremiah Peschka - Founder, Brent Ozar Unlimited
>> MCITP: SQL Server 2008, MVP
>> Cloudera Certified Developer for Apache Hadoop
>>
>>
>> On Sun, Aug 11, 2013 at 1:52 PM, Brady Wetherington > > wrote:
>>
>>> Hi all -
>>>
>>> I have some questions about how I want my Riak stuff to work - I've
>>> already asked these questions of some Basho people and gotten some answers,
>>> but thought I would toss it out into the wider world to see what you all
>>> have to say, too:
>>>
>>> First off - I know 5 instances is the "magic number" of instances to
>>> have. If I understand the thinking here, it's that at the default
>>> redundancy level ('n'?) of 3, it is most likely to start getting me some
>>> scaling (e.g., performance > just that of a single node), and yet also have
>>> redundancy; whereby I can lose one box and not start to take a performance
>>> hit.
>>>
>>> My question is - I think I can only do 4 in a way that makes sense. I
>>> only have 4 AZ's that I can use right now; AWS won't let me boot instances
>>> in 1a. My concern is if I try to do 5, I will be "doubling up" in one AZ -
>>> and in AWS you're almost as likely to lose an entire AZ as you are a single
>>> instance. And so, if I have instances doubled-up in one AZ (let's say
>>> us-east-1e), and then I lose 1e, I've now lost two instances. What are the
>>> chances that all three of my replicas of some chunk of my data are on those
>>> two instances? I know that it's not guaranteed that all replicas are on
>>> separate nodes.
>>>
>>> So is it better for me to ignore the recommendation of 5 nodes, and just
>>> do 4? Or to ignore the fact that I might be doubling-up in one AZ? Also,
>>> another note. These are designed to be 'durable' nodes, so if one should go
>>> down I would expect to bring it back up *with* its data - or, if I
>>> couldn't, I would do a force-replace or replace and rebuild it from the
>>> other replicas. I'm definitely not doing instance-store. So I don't know if
>>> that mitigates my need for a full 5 nodes. I would also consider losing one
>>> node to be "degraded" and would probably seek to fix that problem as soon
>>> as possible, so I wouldn't expect to be in that situation for long. I would
>>> probably tolerate a drop in performance during that time, too. (Not a
>>> super-severe one, but 20-30 percent? Sure.)
>>>
>>> What do you folks think?
>>>
>>> -B.
>>>
>>>
>>>
>>>
>>
>>
>>
>

Re: Practical Riak cluster choices in AWS (number of nodes? AZ's?)

2013-08-13 Thread Dave Martorana
An interesting hybrid that I'm coming around to seems to be using a Unix
release - OmniOS has an AMI, for instance - and ZFS. With a large-enough
store, I can run without EBS on my nodes, and have a single ZFS backup
instance with a huge amount of slow-EBS storage for accepting ZFS snapshots.

I'm still learning all the pieces, but luckily I have a company upstairs
from me that does a very similar thing with > 300TB and is willing to help
me set up my ZFS backup infrastructure.

Dave


On Mon, Aug 12, 2013 at 10:00 PM, Brady Wetherington
wrote:

> I will probably stick with EBS-store for now. I don't know how comfortable
> I can get with a replica that could disappear with simply an unintended
> reboot (one of my nodes just did that randomly today, for example). Sure, I
> would immediately start rebuilding it as soon as that were to happen, but
> we could be talking a pretty huge chunk of data that would have to get
> rebuilt out of the cluster. And that sounds scary. Even though, logically,
> I understand that it should not be.
>
> I will get there; I'm just a little cautious. As I learn Riak better and
> get more comfortable with it, maybe I would be able to start to move in a
> direction like that. And certainly as the performance characteristics of
> EBS-volumes start to bite me in the butt; that might force me to get
> comfortable with instance-store real quick. I would at least hope to be
> serving a decent-sized chunk of my data from memory, however.
>
> As for throwing my instances in one AZ - I don't feel comfortable with
> that either. I'll try out the way I'm saying and will report back - do I
> end up with crazy latencies all over the map, or does it seem to "just
> work?" We'll see.
>
> In the meantime, I still feel funny about "breaking the rules" on the
> 5-node cluster policy. Given my other choices as having been kinda
> nailed-down for now, what do you guys think of that?
>
> E.g. - should I take the risk of putting a 5th instance up in the same AZ
> as one of the others, or should I just "be ok" with having 4? Or should I
> do something weird like changing my 'n' value to be one fewer or something
> like that? (I think, as I understand it so far, I'm really liking "n=3,
> w=2, r=2" - but I could change it if it made more sense with the topology
> I've selected.)
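The overlap behind n=3, w=2, r=2 can be checked mechanically: whenever r + w > n, every write quorum shares at least one replica with every read quorum, so a read always touches a replica that saw the latest successful write. A small sketch:

```python
from itertools import combinations

def always_overlap(n, w, r):
    """True if every possible write quorum of size w intersects every
    possible read quorum of size r over n replicas."""
    replicas = range(n)
    return all(set(ws) & set(rs)
               for ws in combinations(replicas, w)
               for rs in combinations(replicas, r))

print(always_overlap(3, 2, 2))  # r + w > n: quorums always overlap
print(always_overlap(3, 1, 2))  # r + w == n: a read can miss the write
```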
>
> -B.
>
>
> Date: Sun, 11 Aug 2013 18:57:11 -0600
>> From: Jared Morrow 
>> To: Jeremiah Peschka 
>> Cc: riak-users 
>> Subject: Re: Practical Riak cluster choices in AWS (number of nodes?
>> AZ's?)
>>
>>
>> +1 to what Jeremiah said, putting a 4 or 5 node cluster in each US West
>> and
>> US East using MDC between them would be the optimum solution.  I'm also
>> not
>> buying consistent latencies between AZ's, but I've also not tested it
>> personally in a production environment.  We have many riak-users members
>> on
>> AWS, so hopefully more experienced people will chime in.
>>
>> If you haven't seen them already, here's what I have in my "Riak on AWS"
>> bookmark folder:
>>
>> http://media.amazonwebservices.com/AWS_NoSQL_Riak.pdf
>> http://docs.basho.com/riak/latest/ops/tuning/aws/
>> http://basho.com/riak-on-aws-deployment-options/
>>
>> -Jared
>>
>>
>>
>>
>> On Sun, Aug 11, 2013 at 6:11 PM, Jeremiah Peschka <
>> jeremiah.pesc...@gmail.com> wrote:
>>
>> > I'd be wary of using EBS backed nodes for Riak - with only a single
>> > ethernet connection, it will be very easy to saturate the max of 1000 Mbps
>> > available in a single AWS NIC (unless you're using cluster compute
>> > instances). I'd be more worried about temporarily losing contact with a
>> > node through network saturation than through AZ failure, truthfully.
>> >
>> > The beauty of Riak is that a node can drop and you can replace it with
>> > minimal fuss. Use that to your advantage and make every node in the
>> cluster
>> > disposable.
>> >
>> > As far as doubling up in one AZ goes - if you're worried about AZ
>> failure,
>> > you should treat each AZ as a separate data center and design your
>> failure
>> > scenarios accordingly. Yes, Amazon says you should put one Riak node in
>> each
>> > AZ; I'm not buying that. With no guarantee around latency, and no control
>> > over the link between DCs, you need to be very careful how much of that latency
>> > you're willing to introduce into your application.
>> >
>> > Were I in your position, I'd stand up a 5 node cluster in US-WEST-2 and
>> be
>> > done with it. I'd consider Riak EE for my HA/DR solution once the
>> business
>> > decides that off-site HA/DR is something it wants/needs.
>> >
>> >
>> > ---
>> > Jeremiah Peschka - Founder, Brent Ozar Unlimited
>> > MCITP: SQL Server 2008, MVP
>> > Cloudera Certified Developer for Apache Hadoop
>> >
>> >
>> > On Sun, Aug 11, 2013 at 1:52 PM, Brady Wetherington <
>> br...@bespincorp.com>wrote:
>> >
>> >> Hi all -
>> >>
>> >>

Yokozuna 0.8.0 release on omnios - no Yokozuna, no Riak Control

2013-08-14 Thread Dave Martorana
Hi all,

I have Riak building on omnios, with Erlang R15B02, and Oracle JDK 1.7.0_25.

Everything appears to be building just fine. However... even though I have
Yokozuna and Riak Control enabled in my app.config, neither seems to be
usable or even to start up. I can confirm that kv is working fine.

app.config: https://gist.github.com/themartorana/eb503f1d7fca798fc6c3
console.log: https://gist.github.com/themartorana/7c517c0ba5549c35540a

Oddly, console.log doesn't show any mention of Riak Control. It does show
the startup line for Solr at the end. However, the webapp only has the
following links:

riak_kv_wm_buckets
riak_kv_wm_buckets
riak_kv_wm_counter
riak_kv_wm_index
riak_kv_wm_keylist
riak_kv_wm_link_walker
riak_kv_wm_link_walker
riak_kv_wm_mapred
riak_kv_wm_object
riak_kv_wm_object
riak_kv_wm_ping
riak_kv_wm_props
riak_kv_wm_stats

There is no admin/control, and no yz. Any attempt to link to /yz/* or
/admin results in a 404.

I'm at a bit of a loss. There is nothing in error.log or crash.log.
yokozuna.jar builds fine. Riak Control appears to be built fine.

Any thoughts?

Cheers,

Dave

P.S. - I have also built the git rz-yz-merge-1.4.0 branch of the riak repo
on github just to be sure. Same result. Building Riak from master - while
obviously not having Yokozuna - *does* have Riak Control enabled. Thanks!


Re: Yokozuna 0.8.0 release on omnios - no Yokozuna, no Riak Control

2013-08-14 Thread Dave Martorana
Hi Chris,

I've built from the "official" Yokozuna 0.8.0 download, as well
as rz-yz-merge-1.4.0 and rz-yz-merge-master branches off of the riak repo.

When I build from master, Riak Control works fine.

Thanks!

Dave


On Wed, Aug 14, 2013 at 6:52 PM, Christopher Meiklejohn <
cmeiklej...@basho.com> wrote:

> Hi Dave,
>
> Can you provide the tag, or SHA, that you've built Riak from?
>
> - Chris
>
> --
> Christopher Meiklejohn
> Software Engineer
> Basho Technologies, Inc.
>
>
>
> On Wednesday, August 14, 2013 at 3:45 PM, Dave Martorana wrote:
>
> > Hi all,
> >
> > I have Riak building on omnios, with Erlang R15B02, and Oracle JDK
> 1.7.0_25.
> >
> > Everything appears to be building just fine. However... even though I
> have Yokozuna and Riak Control enabled in my app.config, neither are able
> to be used or - it seems - start up. I can confirm that kv is working fine.
> >
> > app.config: https://gist.github.com/themartorana/eb503f1d7fca798fc6c3
> > console.log: https://gist.github.com/themartorana/7c517c0ba5549c35540a
> >
> > Oddly, console.log doesn't show any mention of Riak Control. It does
> show the startup line for Solr at the end. However, the webapp only has the
> following links:
> >
> > riak_kv_wm_buckets
> > riak_kv_wm_buckets
> > riak_kv_wm_counter
> > riak_kv_wm_index
> > riak_kv_wm_keylist
> > riak_kv_wm_link_walker
> > riak_kv_wm_link_walker
> > riak_kv_wm_mapred
> > riak_kv_wm_object
> > riak_kv_wm_object
> > riak_kv_wm_ping
> > riak_kv_wm_props
> > riak_kv_wm_stats
> >
> > There is no admin/control, and no yz. Any attempt to link to /yz/* or
> /admin results in a 404.
> >
> > I'm at a bit of a loss. There is nothing in error.log or crash.log.
> yokozuna.jar builds fine. Riak Control appears to be built fine.
> >
> > Any thoughts?
> >
> > Cheers,
> >
> > Dave
> >
> > P.S. - I have also built the git rz-yz-merge-1.4.0 branch of the riak
> repo on github just to be sure. Same result. Building Riak from master -
> while obviously not having Yokozuna - does have Riak Control enabled.
> Thanks!
>
>
>
>


Re: Yokozuna 0.8.0 release on omnios - no Yokozuna, no Riak Control

2013-08-14 Thread Dave Martorana
Here is my build (make) log

https://gist.github.com/themartorana/6e9f8e49a50b70f56333

and "make rel" log

https://gist.github.com/themartorana/13dcb72306c9ab880c9f

using the 0.8.0 .tar.gz download of Yokozuna.

Thanks,

Dave


On Wed, Aug 14, 2013 at 6:54 PM, Dave Martorana  wrote:

> Hi Chris,
>
> I've built from the "official" Yokozuna 0.8.0 download, as well
> as rz-yz-merge-1.4.0 and rz-yz-merge-master branches off of the riak repo.
>
> When I build from master, Riak Control works fine.
>
> Thanks!
>
> Dave
>
>
> On Wed, Aug 14, 2013 at 6:52 PM, Christopher Meiklejohn <
> cmeiklej...@basho.com> wrote:
>
>> Hi Dave,
>>
>> Can you provide the tag, or SHA, that you've built Riak from?
>>
>> - Chris
>>
>> --
>> Christopher Meiklejohn
>> Software Engineer
>> Basho Technologies, Inc.
>>
>>
>>
>> On Wednesday, August 14, 2013 at 3:45 PM, Dave Martorana wrote:
>>
>> > Hi all,
>> >
>> > I have Riak building on omnios, with Erlang R15B02, and Oracle JDK
>> 1.7.0_25.
>> >
>> > Everything appears to be building just fine. However... even though I
>> have Yokozuna and Riak Control enabled in my app.config, neither are able
>> to be used or - it seems - start up. I can confirm that kv is working fine.
>> >
>> > app.config: https://gist.github.com/themartorana/eb503f1d7fca798fc6c3
>> > console.log: https://gist.github.com/themartorana/7c517c0ba5549c35540a
>> >
>> > Oddly, console.log doesn't show any mention of Riak Control. It does
>> show the startup line for Solr at the end. However, the webapp only has the
>> following links:
>> >
>> > riak_kv_wm_buckets
>> > riak_kv_wm_buckets
>> > riak_kv_wm_counter
>> > riak_kv_wm_index
>> > riak_kv_wm_keylist
>> > riak_kv_wm_link_walker
>> > riak_kv_wm_link_walker
>> > riak_kv_wm_mapred
>> > riak_kv_wm_object
>> > riak_kv_wm_object
>> > riak_kv_wm_ping
>> > riak_kv_wm_props
>> > riak_kv_wm_stats
>> >
>> > There is no admin/control, and no yz. Any attempt to link to /yz/* or
>> /admin results in a 404.
>> >
>> > I'm at a bit of a loss. There is nothing in error.log or crash.log.
>> yokozuna.jar builds fine. Riak Control appears to be built fine.
>> >
>> > Any thoughts?
>> >
>> > Cheers,
>> >
>> > Dave
>> >
>> > P.S. - I have also built the git rz-yz-merge-1.4.0 branch of the riak
>> repo on github just to be sure. Same result. Building Riak from master -
>> while obviously not having Yokozuna - does have Riak Control enabled.
>> Thanks!
>>
>>
>>
>>
>


Re: Yokozuna 0.8.0 release on omnios - no Yokozuna, no Riak Control

2013-08-14 Thread Dave Martorana
Also, not that this makes any difference, but I noticed that
http://localhost:8098/stats on master has a "disks" entry in the
dictionary, while the 0.8.0 Yokozuna build is missing that entry from
stats. Other than that, everything looks similar.

Dave


On Wed, Aug 14, 2013 at 6:58 PM, Dave Martorana  wrote:

> Here is my build (make) log
>
> https://gist.github.com/themartorana/6e9f8e49a50b70f56333
>
> and "make rel" log
>
> https://gist.github.com/themartorana/13dcb72306c9ab880c9f
>
> using the 0.8.0 .tar.gz download of Yokozuna.
>
> Thanks,
>
> Dave
>
>
> On Wed, Aug 14, 2013 at 6:54 PM, Dave Martorana  wrote:
>
>> Hi Chris,
>>
>> I've built from the "official" Yokozuna 0.8.0 download, as well
>> as rz-yz-merge-1.4.0 and rz-yz-merge-master branches off of the riak repo.
>>
>> When I build from master, Riak Control works fine.
>>
>> Thanks!
>>
>> Dave
>>
>>
>> On Wed, Aug 14, 2013 at 6:52 PM, Christopher Meiklejohn <
>> cmeiklej...@basho.com> wrote:
>>
>>> Hi Dave,
>>>
>>> Can you provide the tag, or SHA, that you've built Riak from?
>>>
>>> - Chris
>>>
>>> --
>>> Christopher Meiklejohn
>>> Software Engineer
>>> Basho Technologies, Inc.
>>>
>>>
>>>
>>> On Wednesday, August 14, 2013 at 3:45 PM, Dave Martorana wrote:
>>>
>>> > Hi all,
>>> >
>>> > I have Riak building on omnios, with Erlang R15B02, and Oracle JDK
>>> 1.7.0_25.
>>> >
>>> > Everything appears to be building just fine. However... even though I
>>> have Yokozuna and Riak Control enabled in my app.config, neither are able
>>> to be used or - it seems - start up. I can confirm that kv is working fine.
>>> >
>>> > app.config: https://gist.github.com/themartorana/eb503f1d7fca798fc6c3
>>> > console.log: https://gist.github.com/themartorana/7c517c0ba5549c35540a
>>> >
>>> > Oddly, console.log doesn't show any mention of Riak Control. It does
>>> show the startup line for Solr at the end. However, the webapp only has the
>>> following links:
>>> >
>>> > riak_kv_wm_buckets
>>> > riak_kv_wm_buckets
>>> > riak_kv_wm_counter
>>> > riak_kv_wm_index
>>> > riak_kv_wm_keylist
>>> > riak_kv_wm_link_walker
>>> > riak_kv_wm_link_walker
>>> > riak_kv_wm_mapred
>>> > riak_kv_wm_object
>>> > riak_kv_wm_object
>>> > riak_kv_wm_ping
>>> > riak_kv_wm_props
>>> > riak_kv_wm_stats
>>> >
>>> > There is no admin/control, and no yz. Any attempt to link to /yz/* or
>>> /admin results in a 404.
>>> >
>>> > I'm at a bit of a loss. There is nothing in error.log or crash.log.
>>> yokozuna.jar builds fine. Riak Control appears to be built fine.
>>> >
>>> > Any thoughts?
>>> >
>>> > Cheers,
>>> >
>>> > Dave
>>> >
>>> > P.S. - I have also built the git rz-yz-merge-1.4.0 branch of the riak
>>> repo on github just to be sure. Same result. Building Riak from master -
>>> while obviously not having Yokozuna - does have Riak Control enabled.
>>> Thanks!
>>> > ___
>>> > riak-users mailing list
>>> > riak-users@lists.basho.com (mailto:riak-users@lists.basho.com)
>>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>>
>>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Yokozuna 0.8.0 release on omnios - no Yokozuna, no Riak Control

2013-08-15 Thread Dave Martorana
Not for nothing, but apparently these issues are confined to the Vagrant
box for omnios. I can't imagine what the possible differences are, but when
using the exact build scripts on EC2 that I use for the Vagrant box,
including user creation, etc., all services are available.

I'd still love to figure out what is going on (and was hoping to save a
little on using a local box for dev) but hey, at least it works on
"production hardware."


On Wed, Aug 14, 2013 at 7:02 PM, Dave Martorana  wrote:

> Also, not that this makes any difference, but I noticed that
> http://localhost:8098/stats when in master has a "disks" entry in the
> dictionary while the 0.8.0 Yokozuna build is missing that dictionary entry
> from stats. Other than that, everything looks similar.
>
> Dave
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Yokozuna 0.8.0 release on omnios - no Yokozuna, no Riak Control

2013-08-15 Thread Dave Martorana
Hi Chris,

After waiting for about 5 minutes, this is the entirety of the output. I
never do get a console prompt (I tried hitting enter, etc.)

riak% bin/riak console
Node 'riak@127.0.0.1' not responding to pings.
config is OK
Exec: /riak/riak/rel/riak/bin/../erts-5.9.2/bin/erlexec -boot
/riak/riak/rel/riak/bin/../releases/1.4.0/riak  -config
/riak/riak/rel/riak/bin/../etc/app.config -pa
/riak/riak/rel/riak/bin/../lib/basho-patches -args_file
/riak/riak/rel/riak/bin/../etc/vm.args -- console
Root: /riak/riak/rel/riak/bin/..
Erlang R15B02 (erts-5.9.2) [source] [64-bit] [smp:1:1] [async-threads:64]
[kernel-poll:true]


So, I hit ctrl-c and printed the proc info, and found the following:

Message queue: [{#Port<0.7016>,{data,"Error: Exception thrown by the agent
"}},{#Port<0.7016>,{data,": java.net.MalformedURLException: Local host name
unknown: java.net.UnknownHostException: riak: riak: node name or service
name not
known"}},{#Port<0.7016>,{data,"\n"}},{#Port<0.7016>,{exit_status,1}},{'EXIT',#Port<0.7016>,normal}]

So, maybe finally I have a clue. "Local host name unknown"? "hostname"
returns "riak"...

Dave




On Thu, Aug 15, 2013 at 4:09 PM, Chris Meiklejohn wrote:

> Hi Dave,
>
> I just built Yokozuna 0.8.0 from the packages available, and noticed some
> behavior which might be related.  Because the Yokozuna OTP application
> starts before Riak Control and its dependencies, there is a short period
> where Riak Control will be unavailable.  Can you try the following:
>
> 1. Start riak via 'riak console'.
> 2. Wait until you see something similar to the following:
> '13:06:08.553 [info] Application riak_control started on node '
> dev1@127.0.0.1'
> 3. Try accessing Riak Control.
>
> - Chris

Re: Yokozuna 0.8.0 release on omnios - no Yokozuna, no Riak Control

2013-08-16 Thread Dave Martorana
Chris,

I added the node name to /etc/hosts and everything worked like a charm. I
appreciate the help - this was a bizarre one for me, and I would never have
looked in the console.

Cheers,

Dave
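For anyone who hits the same UnknownHostException: the fix boils down to making the machine's hostname resolve, typically via a loopback entry in /etc/hosts. A minimal sketch, shown against a scratch file rather than the real /etc/hosts (edit the real file as root, and substitute your own hostname for "riak"):

```shell
HOSTS=./hosts.demo   # stand-in for /etc/hosts
NODE=riak            # whatever `hostname` reports

printf '127.0.0.1\tlocalhost\n' > "$HOSTS"
# Append a loopback mapping for the node's hostname, but only once.
grep -qw "$NODE" "$HOSTS" || printf '127.0.0.1\t%s\n' "$NODE" >> "$HOSTS"
cat "$HOSTS"
```

Once the real /etc/hosts carries that mapping, the JVM agent Yokozuna spawns can resolve the local host name and Solr starts normally.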


On Thu, Aug 15, 2013 at 5:30 PM, Chris Meiklejohn wrote:

> It appears that Yokozuna is failing to start because it can't resolve the
> local host name.  Can you verify that /etc/hosts looks correct, and that
> you have nameservers properly configured on that host?
>
> - Chris
>

Re: Riak on SAN

2013-10-04 Thread Dave Martorana
We're in the middle of building out a Riak cluster with OmniOS and ZFS
snapshot backups offsite with the help of some friends who have deployed the
same basic solution (not Riak, but ZFS snapshots for hot-backup) at a
tremendous scale.

You get some other nice bits along with "hot" backups - ZFS snapshots are
basically diffs, so you can snapshot as often as once-per-second if you're
so inclined. Vertical scaling is also interesting in that you can launch a
beefier replacement machine for a node and restore the older machine's ZFS
snapshot. The node won't be wise to the fact it's not the same machine,
which makes vertical scaling much faster than the traditional replace
method. (This is where someone at Basho can scream "NOOO! DON'T DO THAT!")

You can continue down that line of thinking to find plenty of advantages to
using ZFS if backup is of ultimate importance - including being able to
restore a cluster even if the building hosting your physical boxes is hit
by a daisy cutter. :)

Here's a bit of information on how we're going about it, so far:
http://tech.flyclops.com/building-riak-on-omnios-360

Dave
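The workflow above can be sketched with stock ZFS commands. The pool, dataset, and host names here are hypothetical, and this assumes Riak's data root lives on its own dataset; snapshots are atomic at the dataset level, which is what makes the hot-backup approach workable:

```shell
# Snapshot the dataset holding Riak's data directory (names hypothetical).
zfs snapshot tank/riak@2013-10-04-1200

# Full send of the first snapshot to an offsite box.
zfs send tank/riak@2013-10-04-1200 | ssh backup-host zfs receive backup/riak

# Later snapshots ship as diffs: only changed blocks cross the wire.
zfs snapshot tank/riak@2013-10-04-1300
zfs send -i tank/riak@2013-10-04-1200 tank/riak@2013-10-04-1300 \
  | ssh backup-host zfs receive backup/riak

# To "vertically scale", restore the latest snapshot onto a beefier
# replacement machine and start Riak pointing at the restored dataset.
ssh backup-host zfs send backup/riak@2013-10-04-1300 | zfs receive tank/riak
```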


On Thu, Oct 3, 2013 at 8:45 PM, Pedram Nimreezi wrote:

> I consider that the main use case ;p
>
>
> On Thu, Oct 3, 2013 at 8:38 PM, Mike Oxford  wrote:
>
>> One more use-case for backups:  If you're running a big cluster and UserX
>> makes a bad code deploy which horks a bunch of data ... restore may be the
>> only option.
>>
>> It happens.
>>
>> -mox
>>
>>
>> On Wed, Oct 2, 2013 at 12:12 PM, John E. Vincent <
>> lusis.org+riak-us...@gmail.com> wrote:
>>
>>> I'm going to take a competing view here.
>>>
>>> SAN is a bit overloaded of a term at this point. Nothing precludes a SAN
>>> from being performant or having SSDs. Yes the cost is overkill for fiber
>>> but iSCSI is much more realistic. Alternately you can even do ATAoE.
>>>
>>> From a hardware perspective, if I have 5 pizza boxes as riak nodes, I
>>> can only fit so many disks in them. Meanwhile I can add another shelf to my
>>> SAN and expand as needed. Additionally backup of a SAN is MUCH easier than
>>> backup of a riak node itself. It's a snapshot and you're done. Mind you
>>> nothing precludes you from doing LVM snapshots in the OS but you still need
>>> to get the data OFF that system for it to be truly backed up.
>>>
>>> I love riak and other distributed stores but backing them up is NOT a
>>> solved problem. Walking all keys, coordinating the take down of all your
>>> nodes in a given order, or whatever your strategy may be, is a serious pain point.
>>>
>>> Using a SAN or local disk also doesn't excuse you from watching I/O
>>> performance. With a SAN I get multiple redundant paths to a block device
>>> and I don't get that necessarily with local storage.
>>>
>>> Just my two bits.
>>>
>>>
>>>
>>> On Wed, Oct 2, 2013 at 2:18 AM, Jeremiah Peschka <
>>> jeremiah.pesc...@gmail.com> wrote:
>>>
 Could you do it? Sure.

 Should you do it? No.

 An advantage of Riak is that you can avoid the cost of SAN storage by
 getting duplication at the machine level rather than rely on your storage
 vendor to provide it.

 Running Riak on a SAN also exposes you to the SAN becoming your
 bottleneck; you only have so many fiber/iSCSI ports and a fixed number of
 disks. The risk of storage contention is high, too, so you can run into
 latency issues that are difficult to diagnose without looking into both
 Riak as well as the storage system.

 Keeping cost in mind, too, SAN storage is about 10x the cost of
 consumer grade SSDs. Not to mention feature licensing and support... The
 cost comparison isn't favorable.

 Please note: Even though your vendor calls it a SAN, that doesn't mean
 it's a SAN.
  On Oct 1, 2013 11:08 PM, "Guy Morton"  wrote:

> Does this make sense?
>
> --
> Guy Morton
> Web Development Manager
> Brüel & Kjær EMS
>
> This e-mail is confidential and may be read, copied and used only by
> the intended recipient. If you have received it in error, please contact
> the sender immediately by return e-mail. Please then delete the e-mail and
> do not disclose its contents to any other person.
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>

Yokozuna: Riak Python client PB error with Solr stored boolean fields

2013-10-14 Thread Dave Martorana
I studied the problem I was having with using the Python client's
.fulltext_search(...) method and got it down to this - it seems that I get
an error when searching against Solr using the Python client's
.fulltext_search(...) method (using protocol buffers) whenever I have a
*stored* boolean field.

In my schema, I have:



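A representative Solr schema entry of the kind being described here (the field name is hypothetical; it is the combination of type="boolean" and stored="true" that matters) would be:

```xml
<field name="is_active" type="boolean" indexed="true" stored="true"/>
```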
With that (or any named field of type "boolean" that is set to
stored="true") I receive the following stack trace:

http://pastebin.com/ejCixPEZ

In the error.log file on the server, I see the following repeated:

2013-10-15 01:21:17.480 [error] <0.2872.0>@yz_pb_search:maybe_process:95
function_clause
[{yz_pb_search,to_binary,[false],[{file,"src/yz_pb_search.erl"},{line,154}]},{yz_pb_search,encode_field,2,[{file,"src/yz_pb_search.erl"},{line,152}]},{lists,foldl,3,[{file,"lists.erl"},{line,1197}]},{yz_pb_search,encode_doc,1,[{file,"src/yz_pb_search.erl"},{line,144}]},{yz_pb_search,'-maybe_process/3-lc$^0/1-0-',1,[{file,"src/yz_pb_search.erl"},{line,76}]},{yz_pb_search,maybe_process,3,[{file,"src/yz_pb_search.erl"},{line,76}]},{riak_api_pb_server,process_message,4,[{file,"src/riak_api_pb_server.erl"},{line,383}]},{riak_api_pb_server,connected,2,[{file,"src/riak_api_pb_server.erl"},{line,221}]}]

Does anyone have any insight? I'm not a Solr expert, so perhaps storing
boolean fields for retrieval is not a good idea? I know if I index but
don't store, I can still successfully search against a boolean value.

Thanks!

Dave
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak consumes too much memory

2013-10-18 Thread Dave Martorana
Matthew,

For those of us who don't quite understand, can you explain - does this mean
mv-flexcache is a feature that just comes with 2.0, or is it something that
will need to be turned on, etc?

Thanks!

Dave


On Thu, Oct 17, 2013 at 9:45 PM, Matthew Von-Maszewski
wrote:

> It is already in test and available for your download now:
>
> https://github.com/basho/leveldb/tree/mv-flexcache
>
> Discussion is here:
>
> https://github.com/basho/leveldb/wiki/mv-flexcache
>
> This code is slated for Riak 2.0.  Enjoy!!
>
> Matthew
>
> On Oct 17, 2013, at 20:50, darren  wrote:
>
> But why isn't riak smart enough to adjust itself to the available memory
> or lack thereof?
>
> No serious enterprise technology should just consume everything and crash.
>
>
> Sent from my Verizon Wireless 4G LTE Smartphone
>
>
>
>  Original message 
> From: Matthew Von-Maszewski 
> Date: 10/17/2013 8:38 PM (GMT-05:00)
> To: ZhouJianhua 
> Cc: riak-users@lists.basho.com
> Subject: Re: Riak consumes too much memory
>
>
> Greetings,
>
> The default config targets 5 servers and 16 to 32G of RAM.  Yes, the
> app.config needs some adjustment to achieve happiness for you:
>
> - change ring_creation_size from 64 to 16 (remove the % from the beginning
> of the line)
> - add this line before "{data_root, }" in eleveldb section:
> "{max_open_files, 40}," (be sure the comma is at the end of this line).
>
> Good luck,
> Matthew
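Concretely, the two edits Matthew describes amount to something like this in app.config (a sketch; the surrounding sections are elided and the data_root path varies by install):

```erlang
%% riak_core section: a smaller ring suits a single small-RAM box
{riak_core, [
    {ring_creation_size, 16}
]},

%% eleveldb section: cap open file handles, placed before data_root
{eleveldb, [
    {max_open_files, 40},
    {data_root, "/var/lib/riak/leveldb"}
]}
```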
>
>
> On Oct 17, 2013, at 8:23 PM, ZhouJianhua  wrote:
>
> Hi
>
> I installed riak v1.4.2 on ubuntu 12.04 (64-bit, 4G RAM) with apt-get, ran
> it with the default app.config but changed the backend to leveldb, and tested it
> with https://github.com/tpjg/goriakpbc .
>
> Just by continually putting (key, value) pairs to a bucket, memory keeps
> increasing, and in the end it crashed, as it could not allocate memory.
>
> Should I change the configuration, or something else?
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak consumes too much memory

2013-10-24 Thread Dave Martorana
Awesome - thanks for the info. I'm planning on launching in step with 2.0,
so very cool stuff.

Dave


On Fri, Oct 18, 2013 at 2:37 PM, Matthew Von-Maszewski
wrote:

> Darren
>
> File cache is favored over block cache.  The cost of a miss to the file
> cache is much larger than a miss to the block cache.  The block cache will
> release data for a new file cache entry, until it reaches a minimum of
> 2Mbytes.  Both caches use Google's original LRU formula to remove the least
> recently used cache entry to make space.
>
> File cache will release any file cache entry that has not been accessed in
> 4 days.  This keeps old, stale files from taking up memory for no reason.
>
> Matthew
>
> On Oct 18, 2013, at 2:24 PM, Darren Govoni  wrote:
>
>  Sounds nice. And then the question is what happens when that limit is
> reached on a node?
>
> On 10/18/2013 02:21 PM, Matthew Von-Maszewski wrote:
>
> The user has the option of setting a default memory limit in the
> app.config / riak.conf file (either absolute number or percentage of total
> system memory).  There is a default percentage (which I am still adjusting)
> if the user takes no action.
>
>  The single memory value is then dynamically partitioned to each Riak
> vnode (and AAE vnodes) as the server takes on more or fewer vnodes
> throughout normal operations and node failures.
>
>  There is no human interaction required once the memory limit is
> established.
>
>  Matthew
>
>
>  On Oct 18, 2013, at 2:08 PM, darren  wrote:
>
>  Is it smart enough to manage itself?
> Or does it require human babysitting?
>
>
>  Sent from my Verizon Wireless 4G LTE Smartphone
>
>
>
>  Original message 
> From: Matthew Von-Maszewski 
> Date: 10/18/2013 1:48 PM (GMT-05:00)
> To: Dave Martorana 
> Cc: darren ,riak-users@lists.basho.com
> Subject: Re: Riak consumes too much memory
>
>
> Dave,
>
>  flexcache will be a new feature in Riak 2.0.  There are some subscribers
> to this mailing list that like to download and try things early.  I was
> directing those subscribers to the GitHub branch that contains the
> work-in-progress code.
>
>  flexcache is a new method for sizing / accounting the memory used by
> leveldb.  It replaces the current method completely.  flexcache is
> therefore not an option, but an upgrade to the existing logic.
>
>  Again, the detailed discussion is here:
> https://github.com/basho/leveldb/wiki/mv-flexcache
>
>  Matthew

Re: Riak Recap for September 26 - October 25

2013-10-25 Thread Dave Martorana
When you say "it's time to gather around some great speakers (virtually or
in person) and talk distributed systems" are you saying some of the
*speakers* will be virtual, or is there a way for us to actually watch the
talks virtually? That's something I'd pay for, even though I can't attend
the conference in person.

Cheers,

Dave


On Fri, Oct 25, 2013 at 1:12 PM, Alex Rice  wrote:

> It's exciting to learn the 2.0 tech preview is getting so close!
> Looking forward to kicking the tires (currently in development w/
> 1.4.2)
> Cheers
>
> On Fri, Oct 25, 2013 at 9:46 AM, John Daily  wrote:
> > Today is the last day to complete the latest quarterly Riak community
> survey and get some swag:
> http://basho.com/quarterly-riak-community-survey/
> >
> > More importantly, RICON all the things! It's the end of October, which
> means it's time to gather around some great speakers (virtually or in
> person) and talk distributed systems. There are still a handful of tickets
> available if you can join us next week:
> http://ricon-west-2013.eventbrite.com
> >
> > Mark Phillips has captured all the Riak (particularly 2.0) goodness that
> we'll be talking about onstage:
> http://www.themarkphillips.com/2013/10/14/Riak-2-dot-0-and-riconwest.html
> >
> > John
> > twitter.com/macintux
> > 
> >
> > Riak Recap for September 26 - October 25
> > 
> >
> > Basho released Riak CS 1.4.2
> > -
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2013-October/013639.html
> >
> > The UK's National Health Service will be replacing an Oracle solution
> with Riak
> > - http://www.theregister.co.uk/2013/10/10/nhs_drops_oracle_for_riak/
> > -
> http://basho.com/nhs-implements-riak-to-improve-performance-and-patient-care/
> > -
> http://basho.com/nhs-to-deploy-riak-for-new-it-backbone-with-quality-of-care-improvements-in-sight/
> >
> > Seagate released their Kinetic Open Storage platform with help from
> Basho and Riak
> > -
> http://basho.com/basho-releases-ekinetic-driver-and-integrated-riak-backend-with-seagate-partnership/
> >
> > Kivra talked in Dublin about switching to Riak for scale
> > - http://basho.com/kivra-built-their-secure-mailbox-service-on-riak/
> >
> > Rovio, Basho, Angry Birds, and Riak caught the attention of Computer
> Weekly and won the Best Technology Innovation award
> > -
> http://basho.com/basho-is-the-winner-of-the-computer-weekly-european-user-awards-for-storage/
> >
> > Basho CTO Justin Sheehy gave an interview to TechWeekEurope
> > - http://www.techweekeurope.co.uk/interview/it-life-basho-riak-130350
> >
> > Kresten Krab Thorup released a web hook for Riak that POSTs newly
> written objects to a remote HTTP server
> > - https://github.com/krestenkrab/riak_webhook
> >
> > Ryan Zezeski has announced Yokozuna 0.10.0
> > -
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2013-October/013566.html
> >
> > Hector Castro created a Cloud Foundry service broker for Riak
> > - https://github.com/hectcastro/cf-riak-service-broker
> >
> > Bryce Kerley clarified the future (or lack thereof) for Ripple
> > -
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2013-October/013589.html
> >
> > Alexey Kachayev has published a sample project leveraging Riak Pipe
> > - https://github.com/kachayev/riak-pipe-workshop
> >
> > Akash Manohar wrote a post on how to interact with Riak from Elixir
> > - http://akash.im/2013/09/30/using-riak-with-elixir.html
> >
> > Sebastien Goasguen of Citrix created a walkthrough for setting up a Riak
> CS cluster
> > - http://buildacloud.org/blog/290-a-look-at-riakcs-from-basho.html
> >
> > A few members of the Salt Stack released a video of them spinning up a
> 100 node Riak cluster with Salt SSH
> > - http://www.youtube.com/watch?v=uWGDC1PdySQ
> > - http://docs.saltstack.com/index.html (for those of you who haven't
> seen Salt Stack yet)
> >
> > Basho now has a dedicated conference page on Lanyrd, if you're curious
> where you can find us next
> > - http://lanyrd.com/basho/
> >
> > While we're talking about 2.0 and CRDTs at RICON West, Joel Jacobson
> will be talking about them in London
> > -
> http://jaxlondon.com/sessions/conflict-free-replicated-data-types-eventually-consistent-systems
> >
> > Alex Rice published Azure benchmarks for Riak that started an
> interesting discussion of benchmarking and points of concern for various
> cloud hosting solutions
> > -
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2013-October/013475.html
> >
> > Drew Searcy posted a blog on troubleshooting Riak daemon startup problems
> > - http://drewsearcy.com/troubleshooting-riak-startup/
> >
> > John Daily captured some of the differences between Riak and Riak CS
> > - http://basho.com/riak-cs-vs-riak/
> >
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

Re: [ANN] Python client 2.0.2 release

2013-11-21 Thread Dave Martorana
Hi Sean!

I was wondering if you were starting to work on Riak 2.0 features, and if
so, which branch I might follow development?

Cheers,

Dave


On Mon, Nov 18, 2013 at 4:44 PM, Sean Cribbs  wrote:

> Hi riak-users,
>
> I've just released version 2.0.2 of the official Python client for
> Riak[1]. This includes a minor feature addition that was included in the
> 1.4.1 release of Riak, namely client-specified timeouts on 2i
> operations[2]. The documentation site has also been updated.
>
> Happy hacking,
>
> --
> Sean Cribbs 
> Software Engineer
> Basho Technologies, Inc.
> http://basho.com/
>
> [1] https://pypi.python.org/pypi/riak/2.0.2
> [2]
> http://basho.github.io/riak-python-client/client.html#riak.client.RiakClient.get_index
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
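
A minimal sketch of what the client-specified 2i timeout looks like from calling code. The helper, bucket, and index names here are made up for illustration; only the keyword passing is shown (get_index accepts a timeout keyword, in milliseconds, per the release note and docs link above):

```python
def index_query(client, bucket, index, startkey, endkey=None, timeout_ms=5000):
    """Run a secondary-index query with a client-specified timeout (sketch).

    `client` is assumed to be a riak.RiakClient; `timeout_ms` is passed
    through as the `timeout` keyword added in client 2.0.2.
    """
    kwargs = {"timeout": timeout_ms}
    if endkey is not None:
        # Range query: startkey..endkey on the given index.
        return client.get_index(bucket, index, startkey, endkey, **kwargs)
    # Exact-match query on a single index value.
    return client.get_index(bucket, index, startkey, **kwargs)
```
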


Re: Riak Search and Yokozuna Backup Strategy

2014-01-23 Thread Dave Martorana
I like that HyperDex provides direct backup support instead of simply
suggesting a stop-filecopy-start-catchup scenario. Are there any plans at
Basho to make backups a core function of Riak (or to ship them as a
separate but included utility)? It would certainly be nice to have
something from Basho that helps ensure things are done properly each time,
every time.

Cheers,

Dave


On Thu, Jan 23, 2014 at 1:42 PM, Joe Caswell  wrote:

> Apologies, clicked send in the middle of an incomplete thought.  It should
> have read:
>
> Backing up the LevelDB data files while the node is stopped would remove
> the necessity of using the LevelDB repair process upon restoring to make
> the vnode self-consistent.
>
> From: Joe Caswell 
> Date: Thursday, January 23, 2014 1:25 PM
> To: Sean McKibben , Elias Levy <
> fearsome.lucid...@gmail.com>
>
> Cc: "riak-users@lists.basho.com" 
> Subject: Re: Riak Search and Yokozuna Backup Strategy
>
> Backing up LevelDB data files can be accomplished while the node is
> running if the sst_x directories are backed up in numerical order.  The
> undesirable side effects of that could be duplicated data, inconsistent
> manifest, or incomplete writes, which necessitates running the leveldb
> repair process upon restoration for any vnode backed up while the node was
> running.  Since the data is initially written to the recovery log before
> being appended to level 0, and any compaction operation fully writes the
> data to its new location before removing it from its old location, if any
> of these operations are interrupted, the data can be completely recovered
> by leveldb repair.
>
> The only incomplete write that won't be recovered by the LevelDB repair
> process is the initial write to the recovery log, limiting exposure to the
> key being actively written at the time of the snapshot/backup.  As long as
> 2 vnodes in the same preflist are not backed up while simultaneously
> writing the same key to the recovery log (i.e. rolling backups are good),
> this key will be recovered by AAE/read repair after restoration.
>
> Backing up the LevelDB data files while the node is stopped would remove
> the necessity of repairing the
>
> Backing up Riak Search data, on the other hand, is a dicey proposition.
>  There are 3 bits to riak search data: the document you store, the output
> of the extractor, and the merge index.
>
> When you put a document in <<"key">> in a <<"bucket">> with search
> enabled, Riak uses the pre-defined extractor to parse the document into
> terms, possibly flattening the structure, and stores the result in
> <<"_rsid_bucket">>/<<"key">>, which is used during update operations to
> remove stale entries before adding new ones, and would most likely be
> stored in a different vnode, possibly on a different node entirely.  The
> document id/link is inserted into the merge index entry for each term
> identified by the extractor, any or all of which may reside on different
> nodes.  Since the document, its index document, and the term indexes could
> not be guaranteed to be captured in any single backup operation, it is a
> very real probability that these would be out of sync in the event that a
> restore is required.
>
> If restore is only required for a single node, consistency could be
> restored by running a repair operation for each riak_kv vnode and
> riak_search vnode stored on the node, which would repair the data from
> other nodes in the cluster.  If more than one node is restored, it is quite
> likely that they both stored replicas of the same data, for some subset of
> the full data set.  The only way to ensure consistency is fully restored in
> the latter case is to reindex the data set.  This can be accomplished by
> reading and  rewriting all of the data, or by reindexing via MapReduce as
> suggested in this earlier mailing list post:
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2012-October/009861.html
>
> In either restore case, having a backup of the merge_index data files is
> not helpful, so there does not appear to be any point in backing them up.
>
> Joe Caswell
> From: Sean McKibben 
> Date: Tuesday, January 21, 2014 1:04 PM
> To: Elias Levy 
> Cc: "riak-users@lists.basho.com" 
> Subject: Re: Riak Search and Yokozuna Backup Strategy
>
> +1 LevelDB backup information is important to us
>
>
> On Jan 20, 2014, at 4:38 PM, Elias Levy 
> wrote:
>
> Anyone from Basho care to comment?
>
>
> On Thu, Jan 16, 2014 at 10:19 AM, Elias Levy 
> wrote:
>
>>
>> Also, while LevelDB appears to be largely an append only format, the
>> documentation currently does not recommend live backups, presumably because
>> there are some issues that can crop up if restoring a DB that was not
>> cleanly shutdown.
>>
>> I am guessing those issues are the ones documented as edge cases here:
>> https://github.com/basho/leveldb/wiki/repair-notes
>>
>> That said, it looks like as of 1.4 those are largely cleared up, at least
>> from what I gather from that page, and that one must on
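
Joe's description above (copy the sst_<N> directories in numerical order, with the manifest and recovery log last, and rely on leveldb repair after restore) can be sketched as a small shell helper. The directory layout and paths are assumptions; this is a sketch, not an official Basho tool:

```shell
# Sketch of a live LevelDB backup helper: sst_<N> directories are copied in
# numerical order (sst_2 before sst_10), then the manifest and recovery log
# last, so leveldb repair can reconcile anything caught mid-write on restore.
# Assumes no spaces or extra underscores in the paths.
backup_leveldb() {
  src="$1"   # e.g. /var/lib/riak/leveldb (hypothetical path)
  dst="$2"   # backup destination
  for vnode in "$src"/*/; do
    name=$(basename "$vnode")
    mkdir -p "$dst/$name"
    # Sort numerically on the suffix after the underscore, not lexically.
    for sst in $(ls -d "$vnode"sst_* 2>/dev/null | sort -t_ -k2 -n); do
      cp -a "$sst" "$dst/$name/"
    done
    # Manifest and log go last, so the sst data they reference is present.
    cp -a "$vnode"MANIFEST-* "$vnode"*.log "$dst/$name/" 2>/dev/null || true
  done
}
```
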

Re: Basho Product Alert: Active Anti-Entropy in Riak 1.4.4 - 1.4.7

2014-01-29 Thread Dave Martorana
While nowhere near as important, does this bug also exist in the current
2.0 pre-releases?

Thanks,

Dave


On Wed, Jan 29, 2014 at 11:38 AM, Tom Santero  wrote:

> Hello,
>
> Basho Engineering has uncovered a bug in the Active Anti-Entropy (AAE)
> code in Riak versions 1.4.4 through 1.4.7, inclusive. This code incorrectly
> generates hashes resulting in AAE failing to repair data. Data will
> continue to be repaired via the standard read repair mechanisms. As the
> incorrect hash generation will utilize system resources with no useful
> result, our official recommendation is that you should not use AAE until a
> fix is released in 1.4.8.
>
> To disable AAE on a running cluster, perform a `riak attach`, then:
>
> a.
> rpc:multicall(riak_kv_entropy_manager, disable, []).
> rpc:multicall(riak_kv_entropy_manager, cancel_exchanges, []).
> z.
>
>
> Then press Ctrl-C, then "a", then Enter to quit.
>
> Next, you'll want to disable AAE on restart. Adjust the app.config
> file on all nodes in the cluster in the following manner:
>
> % Before
> {anti_entropy, {on, []}}
>
> % After
> {anti_entropy, {off, []}}
>
>
> Once again, this issue only affects Riak versions 1.4.4, 1.4.5, 1.4.6, and
> 1.4.7. There is no need to make changes to your cluster due to this bug if
> you are not running one of those versions, or if you are not using AAE. A
> forthcoming Riak 1.4.8 release will address this problem and be released in
> the next few days.
>
>
> Thank you,
> Basho
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
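
The app.config change from the alert above can be scripted across nodes. A minimal sketch, assuming the stock Erlang-term syntax shown in the mail; the config path is an assumption and will vary by install:

```shell
# Flip {anti_entropy, {on, []}} to {anti_entropy, {off, []}} in place,
# keeping a .bak copy of the original file. Run on each node, then restart.
disable_aae() {
  conf="$1"   # e.g. /etc/riak/app.config (hypothetical path)
  sed -i.bak 's/{anti_entropy, {on, \[\]}}/{anti_entropy, {off, []}}/' "$conf"
}
```
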


Re: The best (fastest) way to delete/clear a bucket [python]

2014-06-12 Thread Dave Martorana
Dimitri,

Can you better explain this behavior for strongly-consistent buckets? We
plan on using one (and only one) but I expect keys to fly in and out of
there rather quickly. I'm concerned about indefinite retention of
tombstones and the data required to maintain them. Not being able to
actually, wholly delete something seems like a problem. (I understand
retention for a certain period of time to enforce consistency, but
indefinite retention is confusing to me.)

Cheers,

Dave


On Wed, May 21, 2014 at 5:12 AM, Paweł Królikowski 
wrote:

> @Dmitri - cool, thanks. Now that I know it's an expected behaviour, even
> if I think it's strange, I can find a way of working around it :)
>
> @Sean - tbh, I don't know. I was trying to test a whole application,
> involving http requests + multiple consumers over rabbitmq with semi-real
> data, so random bucket/key names sound .. wrong (and complicated?). On the
> other hand, restarting riak & nuking the data directory, possibly on a
> multi-node cluster, doesn't seem that much better.
>
> I'll play with tests a little longer, I'll come up with something that
> works.
>
> Anyway, thanks for the help :)
>
>
> On 20 May 2014 15:50, Sean Cribbs  wrote:
>
>> For what it's worth, in the integration tests of our client libraries we
>> have moved to generating random bucket and key names for each test/example.
>> This reduces setup/teardown time and is less susceptible to the types of
>> unexpected behaviors you are seeing from list-keys. If possible, I highly
>> recommend this approach in your suite.
>>
>>
>> On Tue, May 20, 2014 at 9:25 AM, Dmitri Zagidulin 
>> wrote:
>>
>>> Ok, so, from what I understand, this is going to be expected behavior
>>> from strongly consistent buckets. (I'm in the process of confirming this,
>>> and we'll see if we can add it to the documentation). The delete_mode:
>>> immediate is ignored, and the tombstone is kept around, to ensure the
>>> consistency of not found, etc. (In the context of further over-writes of
>>> that key).
>>>
>>> So, unfortunately that may be bad news in terms of deleting a
>>> strongly_consistent bucket via keylist for unit testing. :)
>>>
>>> You may want to switch to method #2, for your test suite. (Write a shell
>>> script to stop the node, delete the bitcask & aae dirs, and restart. And
>>> invoke it as a shell script command from your test suite. Or just call
>>> those commands directly.).
>>>
>>>
>>>
>>> On Tue, May 20, 2014 at 5:44 AM, Paweł Królikowski 
>>> wrote:
>>>
 Ok then,

 I've stopped riak, wiped bitcask and anti_entropy directories, updated
 config, started riak.

 I've tried to verify it with:

 riak config generate -l debug

 Got output:

 [...]

 10:25:46.260 [info] /etc/riak/advanced.config detected, overlaying
 proplists
  -config /var/lib/riak/generated.configs/app.2014.05.20.10.25.46.config
 -args_file /var/lib/riak/generated.configs/vm.2014.05.20.10.25.46.args
 -vm_args /var/lib/riak/generated.configs/vm.2014.05.20.10.25.46.args


 And at the very end of the config file there's:

  {k_kv,[{delete_mode,immediate}]}].

 So, it worked.


  Then did this:

 >>> import riak
 >>> c = riak.RiakClient(pb_port=8087, protocol='pbc', host='db-13')
 >>> b = c.bucket(name='locate', bucket_type='strongly_consistent')
 >>> o = b.get('foo')
 >>> o.data = 3
 >>> o.store()
 
 >>> o.delete()
 
 >>> b.delete('foo')
 
 >>> o.exists
 False
 >>> b.get_keys()
 ['foo']


 So, it didn't work.

 It's not just the python client, because if I do this, I get the key
 back:


 http://db-13:8098/types/strongly_consistent/buckets/locate/keys?keys=true
 {"keys":["foo"]}



 I've tried deleting the key via http request (curl -v -X DELETE
 http://db-13:8098/types/strongly_consistent/buckets/locate/keys/bar),
 but it still remains.

 http://db-13:8098/types/strongly_consistent/buckets/locate/keys/foo

 returns

 not found

 but


 http://db-13:8098/types/strongly_consistent/buckets/locate/keys?keys=true

 gives

 {"keys":["foo","bar"]}


 I've tried looking for detailed logs, but console.log, even on debug,
 doesn't print anything useful.
 I've also tried looking inside bitcask directory, and there's
 definitely 'some' binary data there, even after deletion.


 On 19 May 2014 23:23, Dmitri Zagidulin  wrote:

> Ah, that's interesting, let's see if we can test this.
>
> The 'delete_mode' configuration is not supported in the regular
> riak.conf file, from what I understand.
> However, you can still set it in the 'advanced.config' file, as
> described here:
>
> https://github.com/basho/basho_docs/blob/features/lp/advanced-conf/source/languages/en/riak/ops/advanced/configs/configur

Tag 2.0.0rc1 ??

2014-07-14 Thread Dave Martorana
Hi!

I've been kind of holding off development for a bit while the Basho team
puts the polish on 2.0 - and then I saw a tag - 2.0.0rc1 - in Github from 3
days ago.

Has riak 2.0 quietly moved in to RC mode? And if so, are we well in to
feature freeze now?

Is it OK that I'm posting this to the mailing list?

Thanks!

Dave
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Tag 2.0.0rc1 ??

2014-07-15 Thread Dave Martorana
Jared,

I'm pretty excited about this. I've been playing with 2.0 for a long time,
and am looking forward to finally getting off of our MySQL solution!

I didn't mean to rush you guys or anything. The last thing I should *ever*
fault anyone else for is pushing back a deadline to get in
that extra bit of polish. I'm the worst offender (and TBH, I prefer the
polish!). I just wasn't sure if I had missed something. Hope I didn't spoil
the surprise.

Thanks again for everything!

Dave


On Mon, Jul 14, 2014 at 4:40 PM, Jared Morrow  wrote:

> Dave,
>
> Yes, I tagged RC1 on Friday.  The plan for us is to do some finishing work
> on docs, some last rounds of tests to be done, and then put it out to the
> public sometime soon.  I've learned to not give firm dates at this point,
> but I'd personally be very disappointed if something wasn't in your hands
> by the end of this week.  There will always be more work to do, but we plan
> to get it out to the public ASAP.
>
> To bring everyone in the loop, once we hit RC stage, we treat it like a
> true release candidate, keeping all changes out unless they fix a major
> bug.  Usually that limits us to data-loss bugs, but recently we allowed for
> some performance changes to go into RC's as well.  Sometimes it is hard to
> get an exact test of large-cluster performance when things are still
> changing before RC.  With that in mind, using RC you can expect no API and
> command-line changes before final release.  Right now even though Riak
> server has reached RC, not all of our clients are done quite yet as they
> have to lag behind Riak proper a bit.
>
> It has taken a long time for us to hit RC, and trust us that no one feels
> this pain more than us.  We thank all our users for being patient with us
> through this extremely long 2.0 cycle, we have put a lot of features into
> 2.0 and that stretched us more thin than we initially hoped.  Even so, we
> didn't skimp on testing, so that is why RC is hitting now instead of a few
> months ago.
>
> Thanks again for the support from all of our users.
> -Jared
>
>
>
>
> On Mon, Jul 14, 2014 at 1:46 PM, Dave Martorana  wrote:
>
>> Hi!
>>
>> I've been kind of holding off development for a bit while the Basho team
>> puts the polish on 2.0 - and then I saw a tag - 2.0.0rc1 - in Github from 3
>> days ago.
>>
>> Has riak 2.0 quietly moved in to RC mode? And if so, are we well in to
>> feature freeze now?
>>
>> Is it OK that I'm posting this to the mailing list?
>>
>> Thanks!
>>
>> Dave
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak 2.0.0 RC1

2014-07-21 Thread Dave Martorana
Very excited by this!


On Mon, Jul 21, 2014 at 6:17 PM, Brian Roach  wrote:

> On Mon, Jul 21, 2014 at 4:01 PM, Jared Morrow  wrote:
> > There is a Java driver,
> > http://docs.basho.com/riak/latest/dev/using/libraries/  The 2.0 support
> for
> > that will land very soon, so keep an eye out on this list for the updated
> > Java client.
>
> As of about 5 minutes ago the new Riak Java 2.0 RC1 client is cut.
>
> The master branch in the Java client repo reflects this version:
>
> https://github.com/basho/riak-java-client/tree/master
>
> I've released it to maven central, but these days it takes about 3 - 4
> hours for it to be synced over to the public repository. Once it shows
> up in maven central, the new artifact info is:
>
> <dependency>
>   <groupId>com.basho.riak</groupId>
>   <artifactId>riak-client</artifactId>
>   <version>2.0.0.RC1</version>
> </dependency>
>
> I realize the Javadoc is sparse (and missing in some places). After a
> much needed break I'll be working on that for the final release.
>
> Thanks!
> - Brian Roach
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com