o a key-key-value-value store. There is an elegance to
> storing both the data and the metadata at the same time and in the same
> place via the same operation, so that is the preferred direction.
>
>
>
> From: Damien Krotkine
> Date: Tuesday, December 8, 2015 at 12:35 AM
> T
Hi Joe,
1. Yes, it's possible, with the HTTP HEAD request or the client library
equivalent. (I'm pretty sure all the client libraries expose the 'return
only the headers' part of the object fetch.) See the Optional Parameters
head=true section of the PB API:
http://docs.basho.com/riak/latest/dev/r
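For illustration, a minimal sketch of the HTTP equivalent (host, bucket
and key names here are placeholders, not from the thread):

    # curl -I issues a HEAD request: headers/metadata only, no body
    curl -I http://localhost:8098/buckets/mybucket/keys/mykey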
> er_creation` is set as
> true. Set this to false when this CS node is exposed as a public service.
> 3) Yes
> 4) Yes
>
> In case useful, here are our configs:
> https://github.com/dimagi/commcarehq-ansible/tree/riak/ansible/roles/riakcs/templates
>
> On Wed, Nov 25, 2015 at
Hi Ben,
Just to double-check:
1) is Stanchion installed and running, when you try to create the user?
(and the stanchion_host entry is pointing to it, in cs config?)
2) is anonymous_user_creation = on in the config file?
3) do you have 'buckets.default.allow_mult = true' in the Riak config file?
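For reference, a sketch of where those settings live (file paths and the
Stanchion address are illustrative; check your own install):

    # /etc/riak-cs/riak-cs.conf
    anonymous_user_creation = on
    stanchion_host = 127.0.0.1:8085

    # /etc/riak/riak.conf
    buckets.default.allow_mult = true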
Short answer to 'should I use the same search index across all CRDT
buckets': Probably not.
Long answer: It depends on what you're going to be storing in your CRDT
buckets, and what you want to query on.
If your buckets store objects that have fields in common (for example, all
your CRDTs have a
Hi Joe,
My other suggestion (aside from checking the things Damien mentioned) is --
take a look at the solr.log in the riak error log directory, it often
provides clues for when objects are invalid and don't index.
On Tue, Nov 17, 2015 at 3:13 AM, Damien Krotkine
wrote:
> Hi Joe,
>
> I have a
Hi Alberto,
From what I understand, the state of the art in terms of migration of
objects from Amazon S3 to Riak CS is -- writing migration scripts.
Either as shell scripts (using s3cmd), or language-specific libraries like
boto (or even just the S3 SDKs).
And the scripts would consist of:
1) get
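The list above is cut off, but as a hedged sketch of the s3cmd variant
(the two endpoint configs and the bucket name are illustrative):

    # one s3cmd config per endpoint: pull from S3, push to Riak CS
    s3cmd -c ~/.s3cfg-aws    sync s3://mybucket/ ./migration/
    s3cmd -c ~/.s3cfg-riakcs sync ./migration/ s3://mybucket/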
Would Search be an option for you, instead?
You can use search to "tag" custom headers on a counter, like you would a
secondary index, I think.
On Tuesday, October 27, 2015, Łukasz Biedrycki
wrote:
> Hey,
>
> I need a secondary index with my counter, but I found information that
> "
> Coun
> "allow_mult" is not suitable for me. Because sibling affects results of
Yokozuna search.
Can you tell us more about that? How do siblings affect the results of
search in your case?
On Sat, Oct 10, 2015 at 1:22 AM, mtakahashi-ivi wrote:
> Hello,
>
> Thank you all and sorry for replying so late
it sooner
> than later...
>
> Regards,
> Vanessa
>
>
>
> On Wed, Oct 7, 2015 at 4:02 PM, Dmitri Zagidulin
> wrote:
>
>> Glad you sorted it out!
>>
>> (I do want to encourage you to bump your R setting to at least 2, though.
>> Run some tests -
is easily solvable with a load-balancer, though for
> complicated reasons we actually don't need to do that right now. It's just
> acceptable for us temporarily. Later, we'll get the load-balancer working
> and even that won't be a problem.
>
> I *think* we're o
On second thought, ignore the Search recommendation. Search + Expiry
doesn't work well together: when objects expire from Riak, their search
index entries are not cleaned up, and they persist as orphans.
On Wed, Oct 7, 2015 at 4:11 PM, Dmitri Zagidulin
wrote:
> Hi David,
>
> 1) Storing bil
Hi David,
1) Storing billions of small files is definitely a good use case for Riak
KV. (Since they're small, there's no reason to use CS (now re-branded as
S2)).
2) As far as deleting an entire bucket, that part is tougher.
(Incidentally, if you were thinking of using Riak CS because it has a
ay causes that client to fail
> as well. Is that what you mean, or are there other drawbacks as well?
>
> If there's anything else you can recommend, or links other than the one
> above you can point me to, it would be much appreciated. We expect both
> node failure and delibera
Hello,
There are two things going on here: the W quorum value of the write and
delete operations, and possibly the delete_mode setting.
Let's walk through the scenario.
You're writing to a 2 node cluster, two copies of each object (n_val=2),
with your write quorum of 1 (W=1).
So that's possibili
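For reference, a sketch of where the delete_mode setting lives (an
advanced.config snippet; the 3000 ms shown is the documented default):

    %% /etc/riak/advanced.config
    [{riak_kv, [
        {delete_mode, 3000}   %% or immediate | keep
    ]}].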
Hi Vanessa,
Riak is definitely meant to run behind a load balancer. (Or, in the worst
case, to be load-balanced on the client side; that is, all clients connect
to all 4 nodes).
When you say "we did try putting all 4 Riak nodes behind a load-balancer
and pointing the clients at it, but it didn't
I second what Luke said.
Definitely use Key/Value operations for this case (the users-by-email
bucket), which is a One-to-One relationship. Don't use Search or Secondary
Indexes.
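A minimal sketch of that pattern, assuming the Python riak client
(bucket, key and field names are illustrative):

    import riak

    client = riak.RiakClient(pb_port=8087)
    users = client.bucket('users')
    users_by_email = client.bucket('users-by-email')

    # write the user once, plus an email -> user_id pointer record
    user_id = 'user:1234'
    users.new(user_id, data={'name': 'Ann', 'email': 'ann@example.com'}).store()
    users_by_email.new('ann@example.com', data={'user_id': user_id}).store()

    # lookup by email: two cheap KV GETs, no Search or 2i involved
    pointer = users_by_email.get('ann@example.com').data
    user = users.get(pointer['user_id']).data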
On Fri, Sep 4, 2015 at 9:18 AM, Luke Bakken wrote:
> Use another bucket, keyed by email, with the users generated ID
By the way, it's worth pointing out: if you can avoid it, don't set search
indexes on buckets. Use bucket types instead.
Custom bucket properties (like search indexes) are a lot more
resource-intensive to use than bucket type properties. You're going to see
a slowdown for each new custom bucket yo
=auto erl
> ubuntu@test-riak-products-hetzner-04:~$ sudo riak-admin reip riak@192.168.3.8
> riak@172.16.16.211
> Node is not running!
> ubuntu@test-riak-products-hetzner-04:~$ riak version
> 1.4.2
>
> It’s odd, because it looks like the reip command requires the node to be
> ru
,[false]},{{riak_kv,listkeys_backpressure},...},...]},...]],...},...]},...}
> in application_master:init/4 line 138
>
> At the moment, it looks like I can’t restore the cluster. Is there any
> other way of verifying the backup? Perhaps I can simply pull out all the
> keys in the bitc
w node to the existing cluster).
>
> At the moment I have not formed the new cluster (all 5 riak nodes are
> standalone).
> What do I need to do in order to rename the ring on the nodes in the new
> cluster?
>
> Sujay
>
>
> On Tue, Aug 11, 2015 at 4:47 PM, Dmitri Zag
From what I understand, this is a limitation of that particular client
(what language is that, by the way?). Feel free to open an issue on Github
for it.
The HTTP API, at least, does distinguish between a non-existent counter and
a counter whose value happens to be 0.
For example, here's the res
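The original example is truncated above; as a hedged stand-in, assuming
the 1.4-style HTTP counters endpoint (bucket and key names illustrative):

    curl -i http://localhost:8098/buckets/b/counters/never_written
    # -> 404 Not Found

    curl -X POST -d 1  http://localhost:8098/buckets/b/counters/c
    curl -X POST -d -1 http://localhost:8098/buckets/b/counters/c
    curl http://localhost:8098/buckets/b/counters/c
    # -> 0 (the counter exists, with a net value of zero)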
Amao,
As I've mentioned, those pending transfers are going to stay there
indefinitely. They will keep showing up on the 'status' list, until you do
a 'force-replace' or 'force-remove'.
91 '
> waiting to handoff 5 partitions
> 'riak@10.21.136.86 '
> waiting to handoff 5 partitions
> 'riak@10.21.136.81 '
> waiting to handoff 2 partitions
> 'riak@10.21.136.76 '
> waiting to handoff 3 partitions
> 'riak@10.21.136.71 '
asked about backup is because it sounded like you cleared
the disk on it. If it currently has the data, then it'll be fine.
Force-remove just changes the IP address, and doesn't delete the data or
anything.
On Tue, Aug 11, 2015 at 7:32 PM, Dmitri Zagidulin
wrote:
> 1. How to force lea
If you know one node's HTTP listening port, you know them all -- all the
nodes are supposed to listen on the same ports. (Otherwise, load balancing
gets awkward, etc).
A case where different nodes in the cluster are listening on different
ports is exotic enough to not be worth supporting. (Plus, t
Here are some more examples and writeups on indexing custom metadata fields, in
addition to Zeeshan's links:
https://github.com/basho/yokozuna/issues/5
https://github.com/basho/yokozuna/blob/develop/docs/TAGGING.md
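A short sketch of the tagging mechanism those links describe (the index,
bucket and field names here are illustrative):

    # expose a custom metadata header to the index via yz-tags
    curl -X PUT http://localhost:8098/types/default/buckets/b/keys/k1 \
      -H 'Content-Type: text/plain' \
      -H 'x-riak-meta-yz-tags: x-riak-meta-author_s' \
      -H 'x-riak-meta-author_s: joe' \
      -d 'hello'

    # then query the tagged field through search
    curl 'http://localhost:8098/search/query/myindex?q=author_s:joe&wt=json'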
On Wed, Aug 12, 2015 at 12:25 AM, Zeeshan Lakhani
wrote:
> Hey Joe,
>
> Yes, you are
e cookie must be modified in /etc/riak/riak.conf, is that
>> a riak 2 thing?
>> I can see a -setcookie riak line in /etc/riak/vm.args, is that what you
>> mean?
>>
>> Sujay
>>
>>
>> On Thu, Aug 6, 2015 at 2:11 PM, Dmitri Zagidulin
>> wrote:
&
36.76'
> 45671926166590716193865151022383844364247891968 to 'riak@10.21.136.93'
> 45671926166590716193865151022383844364247891968 failed because of enotconn
> 2015-07-30 16:04:33.643 [error]
> <0.197.0>@riak_core_handoff_manager:handle_info:289 An outbound handoff of
&g
Hi Brad!
You're most of the way there. The access key and the secret key go into the
riak-cs.conf file (usually located in /etc/riak-cs/ ). And you get them
from creating an admin user. (And then copy & paste them into the config
file).
http://docs.basho.com/riakcs/latest/cookbooks/configuration
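A sketch of the relevant lines (riak-cs.conf-style keys; the values are
placeholders for the credentials the admin-user creation call returns):

    # /etc/riak-cs/riak-cs.conf
    admin.key = ADMIN-KEY-GOES-HERE
    admin.secret = ADMIN-SECRET-GOES-HERE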
188850757632
> was terminated for reason: {shutdown,{error,enotconn}}
>
> During the last 5 days, there's no changes of the "riak-admin member
> status" output.
> 3. how to accelerate the data balance?
>
>
> On Fri, Aug 7, 2015 at 6:41 AM, Dmitri Zagidulin
&
Hi Joe,
For most use cases, there would be no limit to the number of buckets you
can have on a level db cluster. (Aside from obvious limits of, eventually
you'd run out of disk space for all the objects).
Riak essentially treats the bucket as merely a prefix for the key. (It
basically concatenate
ve new nodes, reformat the above
> new nodes with LVM disk management (bind 6 disk with virtual disk group).
> Replace the "data-root" parameter with one folder, and then start "riak"
> service again. After that, the cluster began the data balance again.
> That's
Hi Amao,
Can you explain a bit more which steps you've taken, and what the problem
is?
Which nodes have been added, and which nodes are leaving the cluster?
On Tue, Jul 28, 2015 at 11:03 PM, Changmao.Wang
wrote:
> Hi Riak user group,
>
> I'm using riak and riak-cs 1.4.2. Last weekend, I added
Sujay,
You're right - the best way to verify the backup is to bring up a separate
5 node cluster, and restore it from the backup files.
The procedure is slightly more involved than untar-ing, though. The backed
up ring directories from the original cluster will contain the node ids
(which rely on
Hi Sinh,
Just to double check, by '/solr.war/WEB-INF/lib', do you mean '/yokozuna-*/priv/solr/solr-webapp/webapp/WEB-INF/lib'? Because that's
where the jts file should go.
On Thu, Jun 4, 2015 at 6:49 AM, sinh nguyen wrote:
> Hello,
>
> I am trying to retrieve all locations within a provided pol
Hi Sinh,
The other issue here, I suspect, is that the default Solr install does not
come with the jar files containing JtsSpatialContextFactory. You will need
to install them yourself. (Fwiw, there was an issue opened recently
requesting that this get included with Riak by default, here:
https://g
Hi Mohamad.
Good questions. You can install Stanchion on the HA proxy load balancer
node, or a separate standalone node, whichever you prefer.
Stanchion exists solely to provide serialization (essentially, a gatekeeper
process) for the creation of new users and new buckets. As you
mentioned,
Hi Marc,
This sounds like a very cool project! I'd be very interested in hearing
more about this, and answering any data modeling or setup questions.
In order to answer the setup questions specifically, we'd need to know more
about what the project is intending to do. Will users be typically
inst
no, there is no load balancer on our cluster.
> Thank you
>
>
> On Thu, Oct 2, 2014 at 11:52 AM, Dmitri Zagidulin
> wrote:
>
>> One other question - are you using a load balancer for your cluster (like
>> HAProxy or the like). In which case, take a look at its logs, a
One other question - are you using a load balancer for your cluster (like
HAProxy or the like). In which case, take a look at its logs, also.
On Thu, Oct 2, 2014 at 11:51 AM, Dmitri Zagidulin
wrote:
> Igor,
> Can you look in the riak log directory, in the error.log (and console log
>
Igor,
Can you look in the riak log directory, in the error.log (and console log
and crash dump file) to see if there's any entries, around the time of the
delete operation? And post them here?
On Thu, Oct 2, 2014 at 11:45 AM, Igor Senderovich <
isenderov...@esperdyne.com> wrote:
> Hi,
>
> I get
If you're running a single-node Riak instance to start with (such as a
prototype web app), but there's the possibility of expanding the cluster
and adding other nodes, you should run with n_val=3. (It's not recommended
to change the n_val mid-stream.)
If you're only running a single node,
Mark,
What version Riak are you trying to export from?
On Tue, Sep 2, 2014 at 1:07 PM, Mark Rechler wrote:
> Hello All,
>
> What would be the best tool for moving data from one riak cluster to
> another?
>
> To give some context, we needed to change the ring_creation_size and
> build out a new
Hi Mark,
The best way to bulk load objects into Riak (and into Solr) is to take
advantage of Riak's parallelism.
Spin up a bunch of worker threads (and have them share a pool of
connections) and have them issue parallel concurrent puts to all of the
nodes in a cluster (you can either use something
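A minimal sketch of that approach, assuming the Python riak client
(node addresses, bucket and key names are illustrative):

    from concurrent.futures import ThreadPoolExecutor
    import riak

    # the client shares a pool of connections across all listed nodes
    client = riak.RiakClient(nodes=[
        {'host': '10.0.0.1', 'pb_port': 8087},
        {'host': '10.0.0.2', 'pb_port': 8087},
        {'host': '10.0.0.3', 'pb_port': 8087},
    ])
    bucket = client.bucket('bulk_load')

    def put_one(item):
        key, value = item
        bucket.new(key, data=value).store()

    items = (('key-%d' % i, {'n': i}) for i in range(100000))
    with ThreadPoolExecutor(max_workers=32) as pool:
        for _ in pool.map(put_one, items):
            pass   # iterate to surface any exceptions from the workers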
Sangeetha,
As Bryan mentioned, above, the first thing you want to double-check, when
migrating your blob type column to Riak, is the typical (and max) blob size.
If your objects are less than 1MB, then storing in Riak should be fine. If
the max object size runs larger than 1MB, you should store th
request (curl -v -X DELETE
> http://db-13:8098/types/strongly_consistent/buckets/locate/keys/bar), but
> it still remains.
>
> http://db-13:8098/types/strongly_consistent/buckets/locate/keys/foo
>
> returns
>
> not found
>
> but
>
> http://db-13:8098/types/str
s what I need is a confirmation
> that something is broken/that I'm doing something stupid.
>
> I've tried looking for similar issues (github.com/basho/riak/issues),
> didn't find any -> I guess that suggests I'm doing something stupid, I just
> don't know
t; tombstones simply remain in my system indefinitely.
>
> --
> Paweł
>
>
> On 19 May 2014 15:32, Dmitri Zagidulin wrote:
>
>> Hi Pawel,
>>
>> There's basically three ways to clear data from Riak (for the purposes of
>> automated testing):
>>
Hi Pawel,
There's basically three ways to clear data from Riak (for the purposes of
automated testing):
1. Iterate through the keys via get_keys(), and delete each one. This is
what you're currently doing, except you don't need to invoke if.exists().
if.exists() makes an additional API call to Ri
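A sketch of option 1 with the Python riak client (bucket name
illustrative); note that listing keys is expensive on large clusters:

    import riak

    client = riak.RiakClient(pb_port=8087)
    bucket = client.bucket('test_data')
    for key in bucket.get_keys():   # no exists() check needed
        bucket.delete(key)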
Brisa,
You want to use the 'wt=json' parameter in your query (see the Solr
section of http://docs.basho.com/riak/latest/dev/using/search/ )
So, your request would be:
http://127.0.0.1:10018/solr/index_bucket/select?q=key:Value&wt=json
(As a side note, in HTTP terminology, the 'Content-Type: appl
Lee,
Double-check your riak-cs app.config, and make sure
'anonymous_user_creation' is set to true. (If you're changing the setting,
be sure to restart Riak CS).
On Tue, Apr 29, 2014 at 6:11 PM, Lee Sylvester wrote:
> Hey guys,
>
> So, I’m currently trying to configure a new CS cluster. However
The other thing to keep in mind is,
/types/users_t/buckets/users/
and
/buckets/users/
are two separate entities. Meaning, if you write to one of those, you won't
be able to read it from the other. Only the writes to
/types/users_t/buckets/users/ will actually be indexed by Search.
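For illustration (host and key are placeholders), these address two
different objects:

    curl http://localhost:8098/types/users_t/buckets/users/keys/alice
    curl http://localhost:8098/buckets/users/keys/alice   # default type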
On Wed, Mar 1
James,
In general, I think this is a reasonable approach (for some reasonable
value of "real time").
A couple of questions.
Do you mean to use a standalone Solr instance, or the Yokozuna project
(Solr integrated with Riak, in Riak 2.0)? (You can read about its usage
here: http://docs.basho.com/ri
I ran into this the other day, as well. The problem turns out to be an
Erlang version incompatibility -- you will want Erlang R16B02 to compile
the newer 2.0 stuff.
On Tue, Jan 28, 2014 at 9:07 AM, Andrew Hamilton
wrote:
> When I follow the steps to make a development cluster on Ubuntu 12.04,
>
Vincent,
How large are the objects that you're requesting? (in the 1000 objs
example).
Also, what does your cluster configuration look like? How many nodes? Are
you load-balancing the GETs to your riak nodes (via something like
HAProxy), or are you making requests to a single riak node?
It sounds
On Thu, Oct 10, 2013 at 5:21 AM, Siddhu Warrier (siwarrie) <
siwar...@cisco.com> wrote:
> Btw, a related question: can I protect Riak CS Control from
> unauthenticated access, by requiring users to enter the admin credentials
> before they are allowed to view or edit information?
>
Not at the mom
Sorry I failed to attach the s3cfg file.
>
> Cheers,
>
> Siddhu
>
> From: Dmitri Zagidulin
> Date: Wednesday, 9 October 2013 16:51
>
> To: Siddhu Warrier
> Cc: "riak-users@lists.basho.com"
> Subject: Re: Unable to configure Riak-CS-Control to manage us
YO1GI0 -
> s3://riak-cs/user/9UND62Q1-EIDE9YO1GI0 -> [1 of 1]
> encoding="UTF-8"?>foo...@example.comfoobarfoo
> bar9UND62Q1-EIDE9YO1GI04o 333 of 333
> 100%  in 0s  43.04 kB/s
>
> doneet>6057fd7d3a7c43b06f839441585d35de197baa57a4696318803afd81c5887aecdisabled
>
> Thanks,
>
> Siddhu
>
> From:
on_ip, "10.0.1.202"},
> {stanchion_port, 8085 },
>{stanchion_ssl, false },
>
> Thanks,
>
> Siddhu
>
> From: Dmitri Zagidulin
> Date: Wednesday, 9 October 2013 15:38
> To: Siddhu Warrier
> Cc: "riak-users@lists.basho
Hi Siddhu,
Can you try changing 'cs_proxy_host' to localhost? So:
{cs_proxy_host, "127.0.0.1" }.
and retry.
On Wed, Oct 9, 2013 at 9:55 AM, Siddhu Warrier (siwarrie) <
siwar...@cisco.com> wrote:
> Hi,
>
> I have a two node Riak CS (1.4) cluster set up on two nodes (node-1 and
> node-2 he
(Just to be extra clear, that's meant to be a comma at the end of that
directive, not a period. Also, don't forget to restart Riak CS Control,
after changing the proxy host).
On Wed, Oct 9, 2013 at 10:36 AM, Dmitri Zagidulin wrote:
> Hi Siddhu,
>
> Can you try changing
Alexander, quick question - do you have Active Anti-Entropy turned on? If
yes, check out this discussion:
http://riak-users.197444.n3.nabble.com/Active-Anti-Entropy-with-Bitcask-Key-Expiry-td4027688.html
(might shed more light on the matter).
> is there a cheap way to figure out the number of keys
Abdul,
To follow up on what Eric said, see
http://docs.basho.com/riak/latest/ops/tuning/open-files-limit/ for
suggestions on how to change the OS ulimit.
Also, if that does not work, try running
$ sudo riak console
This will start riak up and display console output and more detailed error
messag
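For example (shell commands; the limit value is just a common choice),
the docs page above boils down to something like:

    # raise the open-files limit in the shell that starts Riak,
    # then start it; see the docs for making this permanent
    ulimit -n 65536
    riak start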
Alexander,
Your question about n_val on a one-node server is very valid (as is the
follow-up question of how you migrate to a larger n_val when you grow
your cluster).
As an aside -- as John mentioned, Riak is designed from the ground up to be
run on multi-node clusters, so you have to keep
Wow! Troy, the project looks very impressive. Excellent docs, too. I can't
wait to try it out.
On Sat, Jun 1, 2013 at 7:36 PM, Troy Melhase wrote:
> Hello everyone,
>
> I've put together an ODM [1] for Riak and Node.js. I tried a few of
> the available packages, but none were to my liking.
Kurt,
I'm not sure about the cause of the MapReduce crash (I suspect it's running
out of resources of some kind, even with the increase of vm count and mem).
One word of advice about the list keys timeout, though:
Be sure to use streaming list keys.
In Python, this would look something like:
for
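The example is cut off above; a hedged sketch of streaming list keys with
the Python riak client (bucket name illustrative):

    import riak

    client = riak.RiakClient(pb_port=8087)
    bucket = client.bucket('mybucket')
    for keylist in bucket.stream_keys():   # keys arrive in batches
        for key in keylist:
            print(key)                     # stand-in for real per-key work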
ering in Riak v2.x, whether it would be
> possible to include a feature that automatically creates the index for you
> behind the scenes so that indeed GET url/bucket(s) would return the keys.
> Just a thought...
>
>
> Dmitri Zagidulin wrote
> > You're probably wondering
Tom,
Just to emphasize Joe's comment -- 512 should be the _maximum_ you want to
use as your ring size with leveldb/multi backend. But you should probably
use a smaller size, unless your cluster is going to have several dozen
nodes.
The recommended rule of thumb with ring size is "~10 vnodes to a
Tom,
In addition to Matt's links above, I would recommend to take a look at the
following pages:
http://docs.basho.com/riak/latest/references/appendices/Cluster-Capacity-Planning/
and
http://docs.basho.com/riak/latest/tutorials/System-Planning/
The short version is:
* RAM is important, especiall
Hi Simon,
You are correct - setting the ETag in the Riak object header when doing a
PUT or POST does not work (there is no way to specify or change the ETag on
most riak clients).
The good news is, I think you can solve your particular problem (caching
users' web pages) without that capability. Y
In addition, to reiterate what Alexander said in the email thread above,
keep in mind that doing a 'list keys' on a bucket forces Riak to iterate
through ALL of the keys in a cluster, not just those belonging to that bucket.
Meaning, if your cluster has 100 million keys, but a particular bucket has
o
Hi Chuck,
So there is currently no support for listing keys by just issuing a GET to
/buckets/bucketname/.
Part of the reason is that there are many operations to be performed on
the bucket resource -- list keys, get bucket properties, etc. That's why
you have several URLs to specify what you
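For example (host and bucket are placeholders), the HTTP API separates
those operations like so:

    curl 'http://localhost:8098/buckets/mybucket/keys?keys=true'   # list keys (expensive)
    curl 'http://localhost:8098/buckets/mybucket/props'            # bucket properties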
What's interesting is that Pavlo Baron mentions a heavily modified Disco
that he altered to run alongside each Riak node. I wonder if those
mods are available?
It would be great to talk to him about this.
On Wed, Apr 17, 2013 at 9:54 AM, Sean Cribbs wrote:
> This presentation might inte
> Would this approach work? Or will I need to look at a migration tool?
>
> Matt
>
>
>
> On 10 April 2013 00:06, Dmitri Zagidulin wrote:
>
>> Matt,
>>
>> Just for clarity - you mention that you plan to move the backend to
>> LevelDB before backin
ntly we're on the bitcask
> backend, and on our roadmap is a move over to eleveldb and the application
> of appropriate 2i across the whole dataset. Looks like that will be the
> next step - before doing any backup of old data.
>
> Matt
>
>
>
> On 9 April 2013 0
(er, forgot to reply to the list instead of user)
Antonio,
Though the exact answer would depend on the implementation details, a
Facebook-type "newsfeed" would best be implemented on Riak Search, not MR.
Take a look at this video:
http://vimeo.com/album/2258285/video/52417831 (Building a Social
Ap
Matt,
That's a good idea; I'll see if I can add that to the docs.
Dmitri
On Mon, Apr 8, 2013 at 7:26 PM, Matt Black wrote:
> I think a short and explicit discussion of using sequential GETs would be
> good to add to the docs in [1]. It'll be helpful to put the alternate
> option in the reader'
Matt,
My recommendation to you is - don't use MapReduce for this use case. Fetch
the objects via regular Riak GETs (using connection pooling and
multithreading, preferably).
I'm assuming that you have a list of keys (either by keeping track of them
externally to Riak, or via a Secondary Index que
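A small sketch of the fetch side, assuming the Python riak client's
multiget helper (which parallelizes GETs over a connection pool; names
illustrative):

    import riak

    client = riak.RiakClient(pb_port=8087)
    bucket = client.bucket('mybucket')
    keys = ['k1', 'k2', 'k3']        # e.g. from a 2i query or an external list
    objects = bucket.multiget(keys)  # parallel GETs under the hood
    data = [obj.data for obj in objects]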
Hi Kevin,
While it's not as simple as a one-command upgrade, you can do a rolling
upgrade of each node in the cluster, fairly easily.
Take a look at http://docs.basho.com/riak/latest/cookbooks/Rolling-Upgrades/,
to start.
On Wed, Mar 13, 2013 at 10:10 AM, Kevin Burton wrote:
> I want to upgrade
e default: 8098
>
> Could not connect to Riak on PB port 8087
>
>
> So apparently it throw an error. If I look in the app.config it seems that
> the port is enabled. Any idea what the problem is?
>
>
> {riak_api, [
>
> . . . .
Lars,
If increasing the number of worker threads and using connection pooling
did not improve performance, then maybe we're looking at some kind of
environmental or setting issue.
But just in case -- can you post snippets of the code that's setting
up the Java Riak client and issuing the writes?
On
Ok, the 0.1.4 binary download link should be working now.
On Fri, Feb 22, 2013 at 9:13 AM, Chris Read wrote:
> The README on the git repo refers to version 0.1.4, but the link to the
> download is broken...
>
>
> On Wed, Feb 20, 2013 at 12:07 PM, Hector Castro wrote:
>
>> Hi Kevin,
>>
>> The ri
Heh, I'm just about to upload the 0.1.4 prebuilt jar file, hold on.
On Fri, Feb 22, 2013 at 9:13 AM, Chris Read wrote:
> The README on the git repo refers to version 0.1.4, but the link to the
> download is broken...
>
>
> On Wed, Feb 20, 2013 at 12:07 PM, Hector Castro wrote:
>
>> Hi Kevin,
>>
>
Question about 2) -- are you also encoding the index names and values when
issuing the fetch? (Maybe post some example code or Riak object header
snippets, and examples of the fetch queries, that might help).
On Wed, Feb 20, 2013 at 3:56 PM, Age Mooij wrote:
> Hi all,
>
> I'm writing a new Scala
When running throughput tests, I've usually found that the bottleneck is in
the loading/testing script itself. That number (400-600 writes/sec) is
usually the limit that a simple load script (consisting of a while loop
that does PUTs to Riak) reaches.
Meaning, a single while loop issuing writes an
How large are the objects that you're working with?
As part of a previous project, we ran some benchmarks on bulk fetches --
A/B testing two different options. Option one was fetching the objects via
MapReduce (as you are trying to do), and option two was issuing a bunch of
GETs, serially.
Counter
Hi Ingo.
It's difficult to diagnose the exact reason without looking at your code.
But that error is a JSON parser error. It gets thrown whenever the code
tries to parse an empty string as a JSON object.
The general-case solution is to validate your strings or input streams that
you're turning int
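An illustrative guard in plain Python (the same idea applies in whatever
client language hit the error):

    import json

    def parse_json_maybe(s):
        # an empty or whitespace-only string would raise a parser error
        if not s or not s.strip():
            return None
        return json.loads(s)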
Uruka,
What does your load testing script look like? If you post the code
somewhere, I'll take a look to see if any obvious stumbling block comes to
mind. I agree, that's way lower than you should be getting with Riak on
that hardware.
Dmitri
On Fri, Nov 2, 2012 at 1:30 PM, Uruka Dark wrote:
>
r and as
> close as I can (based on the model that I create) retrieve the same type of
> information using Riak and compare the performance. Then once I have a
> basic apples to apples comparison I can show the other salient features of
> Riak.
>
>
> *From:* riak-
;
>
> From: riak-users [mailto:riak-users-boun...@lists.basho.com] On Behalf
> Of Dmitri Zagidulin
> Sent: Thursday, November 01, 2012 2:07 PM
> To: riak-users
>
> Subject: Re: Import tables/data from Microsoft SQL Server
>
>
> Node.js ht
Node.js http://nodejs.org/ http://en.wikipedia.org/wiki/Nodejs is a
server-side JavaScript platform for building web applications.
It is completely unrelated to Java, and will have to be installed
separately. (It uses its own, JavaScript-specific package management app
called 'npm' (it's similar to Ruby gems a
Excellent question.
Unfortunately, you're not going to find any universal/pushbutton tool to
migrate data from an SQL db to Riak.
Fortunately, writing a migration tool specific to your application is
fairly straightforward. All of the variations out there boil
down to:
1) Export your d
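The list above is cut off; as a hedged sketch of such a script (assumes a
DB-API driver like pyodbc plus the Python riak client; the connection
string, table and bucket names are all illustrative):

    import pyodbc
    import riak

    sql = pyodbc.connect('DSN=mssql;UID=user;PWD=secret')
    client = riak.RiakClient(pb_port=8087)
    bucket = client.bucket('customers')

    cursor = sql.cursor()
    cursor.execute('SELECT id, name, email FROM customers')
    for row in cursor:
        # export a row, map it to a JSON document, write it to Riak
        doc = {'name': row.name, 'email': row.email}
        bucket.new('customer:%s' % row.id, data=doc).store()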
Tin,
The easiest way to report docs-related bugs and feature requests, is to
open an issue on the Github repo: https://github.com/basho/basho_docs
Thanks for bringing this up!
Dmitri
On Mon, Oct 29, 2012 at 4:18 PM, Tin Le wrote:
> Sorry if this is the wrong place. Is there a specific email