On Wed, Mar 26, 2014 at 10:36 AM, Eric Redmond wrote:
> That is correct. Storing values in solr makes the values redundant. That's
> because solr and Riak are different systems with different storage
> strategies. Since we only return the solr results, the only way we can
> access the kv values wo
On Wed, Mar 26, 2014 at 9:44 AM, Eric Redmond wrote:
> The newest Riak Search 2 (yokozuna) allows you to create custom solr
> schemas. One of the field options is to save the values you wish to index.
> Then when you perform a search request, you can retrieve any of the stored
> values with a fie
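For the record, fetching those stored values back from the Erlang client looks
roughly like the sketch below. The index name, query, and field names are
hypothetical, and I believe fl is the search option that carries the field list:

    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
    %% fl restricts the response to the stored fields we care about
    {ok, Results} = riakc_pb_socket:search(Pid, <<"my_index">>,
                                           <<"name_s:Lion*">>,
                                           [{fl, [<<"_yz_rk">>, <<"name_s">>]}]).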
job, and has an
implicit R=1 that can't be tuned.
Elias Levy
l when dealing with
> statistics counters that can decrement.
>
> Then again, I could be wrong.
>
> - Original Message -
> From: "Russell Brown"
> To: "Elias Levy"
> Cc: "riak-users"
> Sent: Sunday, 9 February, 2014 9:53:42 AM
> Subj
Does Basho have any plans for implementing a CRDT that maintains the
minimum or maximum value for an integer? It would come in handy in our
application and it would be very simple to implement.
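To illustrate how simple: a state-based max register only needs a merge that is
commutative, associative, and idempotent, which taking the maximum already is.
A minimal Erlang sketch (module and function names are mine, not a proposed API):

    -module(max_register).
    -export([new/0, update/2, merge/2, value/1]).

    new() -> undefined.

    %% Record a new observation; keep the largest value seen.
    update(N, undefined) -> N;
    update(N, Max) when N > Max -> N;
    update(_N, Max) -> Max.

    %% Merging two replicas is just max/2: commutative, associative,
    %% and idempotent, so replicas converge.
    merge(undefined, B) -> B;
    merge(A, undefined) -> A;
    merge(A, B) -> max(A, B).

    value(Max) -> Max.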
Elias Levy
On Mon, Jan 27, 2014 at 12:29 PM, Ryan Zezeski wrote:
> Any comments on the backup strategy for Yokozuna? Will it make use of
>> Solr's Replication Handler, or something lower level? Will the node
>> need to be offline to back it up?
>>
>
> There is no use of any Solr replication code--
On Mon, Jan 27, 2014 at 11:57 AM, Matthew Von-Maszewski
wrote:
>
> Google designed leveldb to always assume it was not cleanly shut down.
> If the startup can read the most recent MANIFEST file, leveldb cleans up
> the rest of the mess.
>
> However, using the backup strategy previously discussed
On Thu, Jan 23, 2014 at 10:25 AM, Joe Caswell wrote:
> Backing up LevelDB data files can be accomplished while the node is
> running if the sst_x directories are backed up in numerical order. The
> undesirable side effects of that could be duplicated data, inconsistent
> manifest, or incomplete
Anyone from Basho care to comment?
On Thu, Jan 16, 2014 at 10:19 AM, Elias Levy wrote:
>
> Also, while LevelDB appears to be largely an append only format, the
> documentation currently does not recommend live backups, presumably because
> there are some issues that can crop up if
On Mon, Jan 20, 2014 at 12:50 PM, Russell Brown wrote:
>
> I guess you must be right. Riak’s vnode version vectors, in the case
> described in 3.2 would generate siblings. The put of `v` with an empty VV
> would lead to the value `v` and VV {b, 1}, but the put of `w` with no VV
> would not lead to
On Mon, Jan 20, 2014 at 12:14 PM, Russell Brown wrote:
> Longer answer: Riak gave users the option of client or _vnode_ ids in
> version vectors. By default Riak uses vnode ids. Riak erred on the side of
> caution, and would create false concurrency, rather than lose writes.
>
I am curious as to
On Sun, Jan 19, 2014 at 9:00 AM, wrote:
> From: Luc Perkins
> * Reduced sibling creation, inspired by the dotted version vectors
> research from Preguiça, Baquero, et al[1]
>
> [1] http://arxiv.org/abs/1011.5808
>
A quick skim over the paper seems to indicate that version vectors with
per-serv
files.
So is the backup documentation on LevelDB still correct? Will Basho
enable hot backups on LevelDB backends any time soon?
On Thu, Jan 16, 2014 at 10:05 AM, Elias Levy wrote:
> How well does Riak Search play with backups? Can you backup the Riak
> Search data without bringi
How well does Riak Search play with backups? Can you backup the Riak
Search data without bringing the node down?
The Riak documentation backup page is completely silent on Riak Search and
its merge_index backend.
And looking forward, what is the backup strategy for Yokozuna? Will it
make use of
The backend is leveldb. The HTTP headers indicate the secondary index
exists. Yet querying for it returns no results.
$ curl -v
http://riak:8098/riak/foo/8db0d7f3a27291f197173a1e3a3a7242fc49deb2d06f90598475c919417a1c7a
…
> GET
/riak/foo/8db0d7f3a27291f197173a1e3a3a7242fc49deb2d06f90598475c919417
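For comparison, the same lookup over PB with the Erlang client would be
something like this sketch ("myindex" and the value are stand-ins, and the
shape of Results varies with the client version):

    {ok, Pid} = riakc_pb_socket:start_link("riak", 8087),
    {ok, Results} = riakc_pb_socket:get_index(Pid, <<"foo">>,
                                              {binary_index, "myindex"},
                                              <<"somevalue">>).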
Well, what do you know: 2,565,187. Apologies for the false alarm.
On Fri, Jun 21, 2013 at 7:29 AM, Joe Caswell wrote:
> Elias,
>
> Just for the sake of argument, if you use
> index(bucket_name,"$key",'0'..'z')
> do you get the same result?
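For reference, the corresponding Erlang MR input term for that $key range is a
one-liner (the bucket name is a placeholder):

    Input = {index, <<"bucket_name">>, <<"$key">>, <<"0">>, <<"z">>},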
I've just inserted some data into a six node Riak 1.3.1 EE cluster. The
keys are all SHA256s. The bucket previously had somewhere in
the vicinity of 1 million objects.
An MR job using the $key 2i with a range of '0' to 'Z', which should cover
all possible SHA256s, and using
both riak_kv_mapreduce
s
from the every-index-on-every-node architecture.
Elias Levy
ther distributed or
excel at handling data aggregation.
Elias Levy
eature that has been requested previously on
the list.
Elias Levy
Any plans to integrate Riak CS with Yokozuna?
We have a need to store data in Riak CS and it would be rather useful if we
could search on it without having to stand up another cluster of
Yokozuna/ElasticSearch/SolrCloud to do so.
Elias Levy
ch stores all the data in the same merge index backend.
You may want to think twice before upgrading such a cluster to Yokozuna.
Elias Levy
I am wondering if AAE means that we can now change the replication factor
of a Riak bucket and have the additional missing replicas be created by
AAE, rather than having to reinsert all the data in the bucket.
Elias Levy
ely different AAE. One that
compares the state of KV and Solr, not between KV nodes.
I think the answer may be the _yz_ed field Yokozuna adds to the indexed
documents. It contains entropy data. Maybe Yokozuna is reading it to
build the entropy
/LotsOfCores). Is this something Yokozuna may
make use of? It may be too expensive a hit for latencies.
Elias Levy
if it does not have access to the original data. I assume it's not using the
KV data, since that is exactly what it wants to compare against to ensure
it's in sync.
Elias Levy
lly lose any data, since the data
> directories would be there the whole time.
> The force-replace command will always give that warning though.
>
> Thanks,
> Alex
>
>
> On Fri, Apr 5, 2013 at 5:07 PM, Elias Levy wrote:
>
>> On Fri, Apr 5, 2013 at 1:51 PM, Alexander
's data is still on disk, I should expect
no data loss on the node. Is that correct?
Elias Levy
Can someone explain the meaning of this warning when doing a force-replace
cluster command to rename a node?
What replicas is the warning referring to? Is there any data loss?
Elias Levy
mplement some Erlang map
functions that give us just what we want when using MR to fetch in bulk.
Now, if Basho implemented a proper multi-get call, I think many customers
would be rather happy.
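For anyone curious, the bulk fetch is roughly the sketch below, using the stock
riak_kv_mapreduce:map_object_value map function; the bucket and keys are
placeholders:

    Inputs = [{<<"bucket">>, <<"key1">>},
              {<<"bucket">>, <<"key2">>},
              {<<"bucket">>, <<"key3">>}],
    %% One round trip returns the values of all listed objects.
    Query = [{map, {modfun, riak_kv_mapreduce, map_object_value},
              none, true}],
    {ok, [{0, Values}]} = riakc_pb_socket:mapred(Pid, Inputs, Query).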
Elias Levy
When setting up replication between two clusters with Riak EDS, are bucket
properties replicated?
I am guessing not, which means I need to set up such things as Riak
Search's hook and custom backends in the replica cluster before starting
replication. Correct?
Elias
R value of the
internal get should match the W value of the INM request.
Elias Levy
On Thu, Jan 3, 2013 at 9:00 AM, wrote:
> From: qaspar
>
> Hi,
>
> Can someone confirm this? If it's true, what exactly is the purpose of
> offering the if_not_modified flag?
>
> Kaspar
>
The consistency guarantees of if_not_modified depend on the default R
value configured on the bucket properties.
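As a concrete sketch of the intended use from the Erlang client (the bucket,
key, and the exact error term are my assumptions):

    {ok, Obj0} = riakc_pb_socket:get(Pid, <<"bucket">>, <<"key">>),
    Obj1 = riakc_obj:update_value(Obj0, <<"new value">>),
    %% The put fails if the object changed since we fetched it.
    case riakc_pb_socket:put(Pid, Obj1, [if_not_modified]) of
        ok -> done;
        {error, <<"modified">>} -> resolve_and_retry
    end.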
I believe I've shared this in the past, but to use Basho's Ruby client in
non-blocking mode all you have to do is this:
https://gist.github.com/4445654
The advantage of doing so is that it gives you the usual blocking API but
uses non-blocking IO under the hood. Just create a new Fiber for each n
Resending as I did not get a response from Basho folks.
Is this expected behavior?
On Fri, Dec 14, 2012 at 5:08 PM, Elias Levy wrote:
> It appears that the Riak MR API can on occasions return a JSON Object,
> rather than an Array. This blows up the Ruby client, and I am guessing
>
No changes in a year and pending pulls up to a year old.
Elias Levy
s that Riak (or mochijson2?) will convert an array of 2 tuples
into a JSON hash when returning the data.
This is actually quite nice, at least for this application, as it's exactly
what I want, but it seems to be either undocumented or a bug, since clients do
not respond well to it.
Thoughts?
I should mention that I can't use the existing map_identity function.
Doing so causes the PB connection to die. Presumably there is some binary
data within the RObject that causes an error when Riak tries to convert it
to JSON.
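What we use instead is a tiny Erlang map function along these lines (a sketch),
which returns only the object's coordinates so no binary value ever has to
survive the JSON conversion:

    %% Map phase: emit {Bucket, Key} pairs rather than object values.
    map_bucket_key(RObj, _KeyData, _Arg) ->
        [{riak_object:bucket(RObj), riak_object:key(RObj)}].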
On Sun, Dec 2, 2012 at 5:26 PM, Elias Levy wrote:
While processing a request I am finding that I am spending most of my time
fetching data from Riak. As I am using the Ruby client I can't
parallelise them, as it does not support non-blocking IO and I don't
want to spawn many
threads.
For these requests I am willing to accept an R of 1, so I can u
Sean,
Thanks for the explanation.
One last follow up. During testing I noticed that when using
if_not_modified against a test cluster with a node using the PB interface
and the Ruby client, if the bucket had an n_val greater than the number of
nodes, the put would fail with a 'modified' error, even
On Tue, Nov 6, 2012 at 9:57 PM, Elias Levy wrote:
> It's also not clear from the docs what Riak considers the latest value
> to return if allow_mult is false and so is last_write_wins, when you
> have a conflict.
>
Any Basho folks have an answer to this one? How does Riak resolv
de explicit.
It's also not clear from the docs what Riak considers the latest value
to return if allow_mult is false and so is last_write_wins, when you
have a conflict.
>
> Reid
>
> On Nov 6, 2012, at 2:06 PM, Elias Levy wrote:
>
>> The description of if_not_modif
ation what it does in the case of conflicts to
resolve the siblings into the single value. The docs say you'll receive
the "latest" value, but does not say how latest is determined in the case
of a conflict.
Elias Levy
On Sat, May 12, 2012 at 9:00 AM, wrote:
> All that said, there is work currently going on to put blooms in leveldb to
> alleviate the not-found issue. I'm not sure what the status is but perhaps
> someone else will chime in on that.
>
Any idea when this may make it to Riak? I note that the blo
On Wed, May 9, 2012 at 4:00 PM, wrote:
>
> Actually, it's worse than that because of some legacy behaviour. EDS
> wants to know the bind IP, not a hostname, and it will exchange node IPs
> with the other side of the connection, so internal IPs can 'leak' to the
> other cluster and cause connection
On Mon, Apr 23, 2012 at 6:26 PM, wrote:
> I'm also doing work on this to make conjunction queries safer, do less
> work, and have better latencies. A query that produces a "large" result
> set is still problematic but a conjunction of small and large result sets
> will be much, much better. I s
On Fri, Apr 20, 2012 at 12:09 PM, wrote:
> From: Paul Gross
> 1. Add other data types like lists and sets. With lists, features like
> blocking pop would be great. MongoDB also has capped collections which
> keep a fixed number of documents.
>
+1. I'd love to see something along the lines of Re
On Fri, Apr 20, 2012 at 9:01 AM, wrote:
> Eventually this becomes the primary workload of the cluster and individual
> deletion latencies grow (more detailed measurements on the shape of this
> degradation are forthcoming if that is helpful).
>
Are you in EC2 or on metal?
How are you deleting the
On Thu, Apr 19, 2012 at 1:24 PM, wrote:
> More posts/ talks on actual use cases, agreed. The funny thing is that
> riak is actually one of the dead simplest nosql systems out there. There
> really isn't much you need to know to use it or set it up.
>
I disagree. Yes, Riak is in general a simple
h was not a complete fix and I've not had time to revisit the
issue. Maybe you can take it from there.
In any case, if you don't mind a few dropped connections because of short
reads, the use of synchrony's TCPSocket works relatively well. We are
using it in production.
Elias Levy
On Mon, Feb 27, 2012 at 10:35 PM, wrote:
> From: Sreejith K
> Subject: Re: Multiple Index Queries using Riak and Python
>
> I find this solution extremely useful in our PaaS solution where we needed
> to support APIs similar to Google App Engine. Performance is
> largely dependent on the number o
I am wondering if the Basho folks have any recommendations about enabling
HiPE in Erlang and compiling with the enable-native-libs option.
Is this something people have been using in production? Is it stable?
Will there be some Riak option in the future to compile to native code
using HiPE?
On Thu, Jan 26, 2012 at 9:59 AM, David Smith wrote:
> Yes, the latest eleveldb now builds Snappy and you can enable/disable
> with {compression, true|false} setting.
>
> This will be part of 1.1 release of Riak (in next few weeks).
>
Awesome. I presume that if you turn on Snappy it will only ap
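For reference, the setting lands in the eleveldb section of app.config,
something like this (the data_root path is just an example):

    {eleveldb, [
        {data_root, "/var/lib/riak/leveldb"},
        %% Enable Snappy block compression for newly written data.
        {compression, true}
    ]}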
On Thu, Jan 26, 2012 at 6:27 AM, wrote:
> I just bumped the version of LevelDB that is used in eleveldb, which
> incorporates these fixes.
>
>
> https://github.com/basho/eleveldb/commit/0ef9fc1f0bedd0d45a36fa875710f560a1340d64
>
>
Are we any closer to being able to turn on Snappy compression of
On Thu, Jan 26, 2012 at 6:27 AM, wrote:
>    Input = {index, Bucket, Index, Key},
>    IdentityQuery = [{reduce,
>                      {modfun, riak_kv_mapreduce, reduce_identity},
>                      [{reduce_phase_only_1, true}],
>                      true}],
>
> My queries can return hund
Do keep in mind that you lose the ability to set an R-value by doing this,
and you are essentially requesting an R-value of 1. Not exactly ideal for
many applications.
On Tue, Jan 24, 2012 at 12:20 AM, wrote:
> If you know the keys you're going to be retrieving, you can pass bucket:key
> combin
Hi folks,
Riak provides pretty good stats for the KV functionality. Allow me to
throw in a vote for the same level of stats for the MR, 2i, and Riak Search
components. We really have no visibility into those aspects of Riak, other
than what we can collect externally.
Cheers,
Elias
On Tue, Jan 10, 2012 at 11:27 AM, David Smith wrote:
>
> Rebalancing can be quite expensive with the default claim algorithm
> that ships in all releases to date. In the 1.0.3 release (dropping
> this week, I hope), we will be switching to a new default claim
> algorithm that will dramatically de
Good day,
Yesterday we went through the exercise of doubling the size of our initial
3 node cluster. The rebalancing took four hours or so. Each node now has
between 7 and 8 GB of data stored in the Leveldb backend with an n_val of
3.
During rebalancing the cluster became nearly useless. The CP
On Sat, Jan 7, 2012 at 5:00 PM, Ryan Zezeski wrote:
> I hope this makes some sense. I realize I've completely ignored your
> range vs. prefix query question but that's because I don't think that's the
> real issue here.
>
Actually meant to reply earlier, but got sidetracked. I'll look deeper
rge time
ranges, possibly all the data for any time for the other values, and in
those cases ranges seem to underperform a wildcard.
Elias Levy
On Wed, Dec 14, 2011 at 12:58 PM, Mark Phillips wrote:
> Hmmm. Perhaps I'm not following here, but I don't see how R=1 on M/R
> would make it unreliable in the face of node failure or node addition.
> Assuming you have at least three nodes (the Basho Recommended
> Minimum™) and standard defaults,
Basho folks,
Any answer to these questions?
On Mon, Dec 12, 2011 at 9:52 AM, Elias Levy wrote:
> I went through the bug database and could not find any open ticket for
> having a configurable r-value in mapreduce. Is there one that someone
> knows of?
>
> It would seem like
r
querying, then have Riak EDS populate it, and then have it start accepting
reads?
Elias Levy
On Thu, Dec 1, 2011 at 10:50 AM, Greg Pascale wrote:
> I think your logic is flawed. Each node has fewer *keys to return*, but
> that doesn't mean it has that much less work. Whether you're returning 1
> key or 100, you still have to go to disk to read from the index, and I have
> to imagine that
On Wed, Nov 30, 2011 at 1:32 PM, wrote:
> Here at Clipboard, we make very heavy use of Riak Search and a couple of
> manual indices here and there. I've wanted to use 2i a few times but have
> decided against it for a few reasons:
>
> 1) Apprehension about the coverage set query, as Matt articula
On Wed, Nov 30, 2011 at 6:01 AM, wrote:
> From: Jeroen van Dijk
>
> The use case I'm talking about is when you are looking for a term that is
> very common and thus will yield many results. My understanding of the
> implementation of Riak [citation needed] is that the search is divided into
> a
Is there such thing as a configurable R value for MR jobs?
Elias
On Wed, Nov 16, 2011 at 12:20 PM, wrote:
> From: Paul Gross
>
> Alternatively, riak could be more generic and take a function that would
> run in riak and update the documents without having to return them all
> to the client.
>
You should already be able to do this, but it requires writing a map function.
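Roughly along the lines of the sketch below. transform/1 is a hypothetical
helper, no vclock or sibling handling is shown, and a map function runs once
per input replica, so the embedded put is an illustration of the idea rather
than a pattern to copy:

    update_in_place(RObj, _KeyData, _Arg) ->
        NewVal = transform(riak_object:get_value(RObj)),  %% transform/1 is hypothetical
        Obj1 = riak_object:apply_updates(
                 riak_object:update_value(RObj, NewVal)),
        {ok, C} = riak:local_client(),
        C:put(Obj1, 1),  %% 1.x-era parameterized-module client
        [].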
Is there some roadmap for 2i features?
We are currently using search for indexing as we need to perform compound
queries. After some testing I found that if we could use 2i we would probably
see 4-5x improvements in insert throughput and 1.5-2x in querying
times, depending on the query. We can wor
On Tue, Nov 15, 2011 at 3:25 PM, David Smith wrote:
> I have not seen this error before; have you been able to reproduce?
>
No. Only saw it that one time. Soon after the install. Rather odd.
> The logging is consistent with various bits-and-bobs of MR/Pipe
> shutting down -- nothing serious
I figure the following is not currently possible in Riak, so I'd like to
propose them as potential features.
We are processing millions of documents with Riak and we need to very
quickly find the ones we want. One problem we've had is that some of the
terms we want to search on are either very l
On Fri, Nov 11, 2011 at 5:57 PM, Ryan Zezeski wrote:
> This is an implementation detail of Search. It stores something we call a
> "proxy object" under the bucket _rsid_ [1]. It does this so it
> knows which entries to remove from the index when an object is
> updated/deleted. To achieve your
+00b1a8ce42a54d81bf46d9bb7a7b4b21_1318204800k
i_bg_tsm
+eacc2a8e434f4498a70aa6ce904efe19_1318222800l
+eacc2a8e434f4498a70aa6ce904efe19_1318222800k
i_ag_ts and i_bg_ts are two of our indexed fields, and those are the values
being indexed. So why is Riak Search storing data in the leveldb backend
I am seeing some strange behavior that maybe someone can explain.
Using a modified default search schema, so that these fields are tokenized
using the noop analyzer, if I index something like:
{"i":{"bg":[{"dnm":"generic"},{"dnm":"onlinegamesfva"},{"dnm":"8ffa6"}]}}
and I try searching
On Wed, Nov 9, 2011 at 3:29 PM, Phil Stanhope wrote:
> Tread carefully here ... by forcing locality ... you will sacrifice high
> availability by algorithmically creating a bias and a single point of
> failure in the cluster.
>
You don't have to lose high availability, your data is still being
On Wed, Nov 9, 2011 at 12:00 PM, wrote:
> From: Nate Lawson
>
> We have been looking into ways to cluster keys to benefit from the LevelDB
> backend's prefix compression. If we were doing a batch of lookups and the
> keys from a given document could be grouped on a partition, they could be
> rea
er slicing,
not before, which means it's useless for paging results by anything other
than sorting by score. I think you'll find that ES, being based on Lucene,
will give you more tokenizer options.
RS is great to add some search to your data stored within Riak, but if you
want a search engine,
On Tue, Nov 1, 2011 at 6:36 AM, Rusty Klophaus wrote:
> Whew, that is some top-shelf debugging. Glad to hear you were able to
> track down the issue.
>
Fix at https://github.com/igrigorik/em-synchrony/pull/79. After that
change it all works nicely.
Elias
On Mon, Oct 31, 2011 at 11:14 PM, Elias Levy wrote:
> On Mon, Oct 31, 2011 at 2:01 PM, Rusty Klophaus wrote:
>
>> Thanks for your excellent description of the problem. We haven't seen
>> this before to my knowledge, and this isn't expected behavior.
>> Also, if
On Mon, Oct 31, 2011 at 2:01 PM, Rusty Klophaus wrote:
> Thanks for your excellent description of the problem. We haven't seen this
> before to my knowledge, and this isn't expected behavior.
> Also, if you can share your code, or if you have a small script that can
> reproduce the failure, that
Any ideas on this? Should indexing for sub-objects in an array with the
same field names in a JSON document work?
On Sun, Oct 30, 2011 at 6:56 AM, Elias Levy wrote:
> On Sat, Oct 29, 2011 at 9:59 PM, Elias Levy
> wrote:
>
>> I am wondering if Riak search can index intermediat
I am finding that there appears to be some sort of race condition when
reading recently written objects (reading concurrently with the write). I am using Riak
1.0.0 with the leveldb backend through the multi backend in a 3 node
cluster. Writes are done with W=2 and reads with R=2. The client is using
the riak cl
On Sat, Oct 29, 2011 at 9:59 PM, Elias Levy wrote:
> I am wondering if Riak search can index intermediate nested fields. When
> indexing json data through the KV precommit hook, the underscore is
> understood in the schema as indicating nesting. Thus, foo_bar will index
> the va
I am wondering if Riak search can index intermediate nested fields. When
indexing json data through the KV precommit hook, the underscore is
understood in the schema as indicating nesting. Thus, foo_bar will index
the value "bah" of field "bar" in the json document { "foo" : { "bar" :
"bah" } }.
I've been reading in the archives about setting the node name in vm.args to
something durable and that you can reuse if you want to restore a node from
a backup. What would happen if there was a mismatch is not stated, but the
implication is that restore would fail.
Is that the case?
Can you ch
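For reference, the vm.args entry in question is just this (the node name shown
is a made-up example):

    ## Name of the riak node
    -name riak@riak1.example.com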
On Thu, Oct 27, 2011 at 1:06 PM, Ryan Zezeski wrote:
> This is indeed the case. Even in the case of an intersection, Search will
> run all sub-queries to completion and then combine them at a coordinator
> based on the query plan. If any of the sub-queries returns a large number
> of results th
Riak 1.0 introduced two reduce phase tuning
parameters: reduce_phase_batch_size and reduce_phase_only_1. Both of these
can be specified on a per-phase basis using the arg parameter, passing them
in as a map. This appears to overload the arg parameter, which was already
being used to pass static arguments.
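That is, a phase spec ends up looking like the sketch below, where the tuning
knobs and any static argument your own function needs (my_static_arg here is
hypothetical) are forced to share the same slot:

    Query = [{reduce,
              {modfun, riak_kv_mapreduce, reduce_sort},
              [{reduce_phase_batch_size, 100}, {my_static_arg, foo}],
              true}],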
On Wed, Oct 26, 2011 at 1:18 PM, Bryan Fink wrote:
> I believe this one is explainable by a known Riak 1.0 MapReduce bug:
>
> https://issues.basho.com/show_bug.cgi?id=1185
>
> If you check your Riak log, I bet you'll see an error message, and if
> the final run of a reduce fails, then in Riak 1.0
Maybe I misunderstand how MR works or maybe it is a problem with the Ruby
client. I am trying to run the following job that will filter the keys as
the first phase. I am not using key filter, as the input will be a search
query. But whatever I do, the filtering reduce phase does not appear to
ha
On Wed, Oct 26, 2011 at 9:00 AM, wrote:
> From: Kresten Krab Thorup
>
> Perhaps as follow-up FAQ-style questions, you could also answer these
> questions:
>
> 1. What is the typical execution profile of 2i vs solr queries? Do all
> queries go to all nodes in the cluster, or does it depend on th
to document the presort Solr query option in the Wiki.
On Tue, Oct 25, 2011 at 9:35 AM, Elias Levy wrote:
> Rusty,
>
> Thanks for the reply. Glad to hear it's documented. BTW, I am attempting
> to sort on an inline field, so that partial solution would at least be
> useful to m
here, and then paginate from there.
>
> Best,
> Rusty
>
> On Mon, Oct 24, 2011 at 8:44 PM, Elias Levy
> wrote:
>
>> In what order are the sort and rows options of search via the Solr API
>> applied?
>>
>> It would appear that rows is applied first, with the
In what order are the sort and rows options of search via the Solr API
applied?
It would appear that rows is applied first, with the cluster waiting just
long enough to receive enough answers to fulfill the rows requirement,
and only then sorting the result set. Alas, if that is the case, then
I am wondering if anyone has experienced the following, which
just occurred to me.
The set up is a 3 node cluster, multibackend, actually storing data into
eleveldb. One bucket and search enabled with a custom schema. The cluster
is lightly loaded. Load is almost all writes, with a handful of s
On Fri, Oct 21, 2011 at 2:23 PM, Elias Levy wrote:
> I found that if I limited the timestamps to a range that covers a
> reasonable number of records the query succeeds. But if the query is of the
> form 'ts:[0 TO 1319228408]', then Riak generates that error and the cli
ery syntax I notice that you can use '*'
instead of trying to use a min or max value in the range to emulate < or >,
e.g. 'ts:[* TO 1319228408]', but Riak does not support this syntax.
Elias Levy
On Wed, Sep 21, 2011 at 6:32 AM, David Smith wrote:
> On Tue, Sep 20, 2011 at 12:41 PM, Elias Levy
> wrote:
>
> > Now from what I've been able to find,{error,emfile} usually means you are
> > out of file descriptors. Yes?
>
> Yes. If you are running a default
e a basic operations feature. I created an
enhancement request for it.
> If you know the object ids then you could delete only the necessary indexes
> via a small bit of Erlang at the riak console via `search:delete_docs/1`.
>
> search:delete_docs([{<<"bucket">>
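Presumably the full call is along these lines (bucket and keys are
placeholders):

    search:delete_docs([{<<"bucket">>, <<"key1">>},
                        {<<"bucket">>, <<"key2">>}]).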
On Tue, Sep 20, 2011 at 7:53 AM, Ryan Zezeski wrote:
> Elias,
>
> It's hard to say from just this one stacktrace but it seems that the
> vnode/leveldb backend might be failing under load causing the R value to go
> unmet. The Search hook has to perform a read of a special object it stores
> in t