Can someone please suggest how to understand the formula for
open_file_memory on this page:
http://docs.basho.com/riak/latest/ops/advanced/backends/leveldb/#Parameter-Planning
1. It definitely lacks some brackets; the correct formula is:
OPEN_FILE_MEMORY = (max_open_files-10) * (184 +
(average_s
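For what it's worth, the full formula as I read the linked page can be sketched in Python with explicit bracketing; the constants (184, 2048, 8, 0.6) and the overall shape are assumptions taken from that docs page, so double-check them against the current docs:

```python
def open_file_memory(max_open_files, avg_sst_filesize,
                     avg_key_size, avg_value_size):
    # Bracket-corrected per-vnode estimate, in bytes. The constants
    # (184, 2048, 8, 0.6) are assumptions from the parameter-planning
    # page, not verified against the LevelDB source.
    return (max_open_files - 10) * (
        184 + (avg_sst_filesize / 2048)
        * (8 + ((avg_key_size + avg_value_size) / 2048 + 1) * 0.6)
    )

# Example: 315 open files, 300 MB SST files, 28-byte keys, 1 KB values
print(int(open_file_memory(315, 300 * 1024 * 1024, 28, 1024)))
```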
Hi!
I'm getting quite a lot of errors like this:
2014-04-16 16:45:46.838 [error] <0.2110.0> Hintfile
'./data/fs_chunks/1370157784997721485815954530671515330927436759040/3.bitcask.hint'
invalid
running riak_2.0.0_pre20
What could be the reason, and does it mean my data is corrupted?
e here[0].
>
> -Brian
>
> [0] https://github.com/basho/bitcask/issues/164
>
>
> On Wed, Apr 16, 2014 at 9:52 AM, Oleksiy Krivoshey wrote:
>
>> Hi!
>>
>> I'm getting quite a lot of the errors like this:
>> 2014-04-16 16:45:46.838 [error] <0.2110.0>
Hi!
I'm trying to update my yokozuna code (javascript, protocol buffers), which
worked with pre11, to pre20, and I'm getting the following response when
issuing RpbYokozunaIndexGetReq:
error: undefined
reply: { index: [ [Error: Encountered unknown message tag] ] }
First problem is that error is retur
The same error happens when using '_yz_default' schema with index.
On 16 April 2014 22:03, Oleksiy Krivoshey wrote:
> Hi!
>
> I'm trying to update yokozuna code (javascript, protocol buffers) which
> worked with pre11 for pre20 and I'm getting the f
Never mind. This was an error in the underlying JavaScript protobuf
implementation.
On 17 April 2014 18:33, Oleksiy Krivoshey wrote:
> The same error happens when using '_yz_default' schema with index.
>
>
> On 16 April 2014 22:03, Oleksiy Krivoshey wrote:
>
>>
Hi guys,
can someone please suggest what could cause a 'get' immediately
following a successful 'put' to fail?
I'm running a fully connected, healthy, 5-node Riak 2.0-beta1 cluster.
I'm using the multi-backend feature, so the order of operations is:
1. 'SetBucket' for a new bucket with a back
roperty.
On 1 May 2014 00:51, Evan Vigil-McClanahan wrote:
> Does this continue if the bucket hasn't been created recently? Does
> it matter how large the object is? Is it particularly large in this
> case?
>
> On Wed, Apr 30, 2014 at 3:47 PM, Oleksiy Krivoshey
> wrote:
ckend.
> 4. gossip finishes.
> 5. you do the get. it fails because now the bucket for this backend
> has made 2 replicas unreachable.
> 6. read repair happens, repopulating the 2 missing replicas.
> 7. you re-get and it works.
>
> On Wed, Apr 30, 2014 at 4:04 PM, Oleksiy Krivosh
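Given the scenario above, one client-side workaround is a bounded retry on not-found, giving gossip and read repair time to catch up. This is a minimal sketch assuming a hypothetical `client` object whose `get(bucket, key)` returns `None` on not-found (not any specific Riak client's API):

```python
import time

def get_with_retry(client, bucket, key, attempts=5, delay=0.2):
    # Retry a read that may initially miss while gossip propagates a
    # new bucket's backend property and read repair repopulates replicas.
    for _ in range(attempts):
        value = client.get(bucket, key)
        if value is not None:
            return value          # read repair has caught up
        time.sleep(delay)         # back off before trying again
    raise KeyError(f"{bucket}/{key} not found after {attempts} attempts")
```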
kend, any
> buckets of that type created later will inherit that setting, so you
> should be able to create as many of them as you like.
>
> On Wed, Apr 30, 2014 at 4:17 PM, Oleksiy Krivoshey
> wrote:
> > Yes, it really looks like the above scenario. It started happening when
Hi!
Can someone please suggest how to run a map/reduce over all the keys in a
bucket with a custom bucket type?
--
Oleksiy Krivoshey
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Thank you!
On 3 May 2014 17:25, Brian Roach wrote:
> riak-users@lists.basho.com
--
Oleksiy Krivoshey
"allow_mult":"false"}}
fs_chunks backend:
{<<"fs_chunks">>, riak_kv_bitcask_backend, [
{data_root, "/var/lib/riak/fs_chunks"}
]}
Thanks!
--
Oleksiy Krivoshey
' or 'delete' and provide a vclock.
On 21 May 2014 12:10, Oleksiy Krivoshey wrote:
> Hi,
>
> I have a quite rare problem of lost data in Riak 2.0 beta1. I can hardly
> replicate it but when it happens it looks like this order of operations:
>
> (All operations a
I think I've found that rare case where I don't get deletedvclock before
'put' in my code.
Sorry for bothering everyone :)
On 21 May 2014 14:15, Oleksiy Krivoshey wrote:
> I think its a different issue and might be my own misunderstanding:
>
> The actual order o
ion is to do a read before 5 with r=n (r=3 in the default case),
> and
> then do the write with the returned vclock, if any is returned. Try that
> and
> see if it helps.
> --
> Matthew
>
>
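The read-before-write advice above can be sketched as follows; `client.get`/`client.put` and the `vclock` attribute are hypothetical stand-ins rather than a specific Riak client library's API:

```python
def put_with_vclock(client, bucket, key, value, n_val=3):
    # Read at r=n first so any surviving replica (or tombstone) is seen.
    obj = client.get(bucket, key, r=n_val)
    vclock = obj.vclock if obj is not None else None
    # Passing the causal context lets Riak order this write after the
    # delete instead of resurrecting old data as a concurrent sibling.
    return client.put(bucket, key, value, vclock=vclock)
```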
--
Oleksiy Krivoshey
Hi!
Once every few days I get the following error in my Riak cluster:
2014-10-25 03:01:23.731 [error] <0.221.0> Supervisor riak_pipe_fitting_sup
had child undefined started with riak_pipe_fitting:start_link() at
<0.22692.2455> exit with reason noproc in context shutdown_error
2014-10-25 05:00:09.
s of a custom type, but nothing special, just sets a LevelDB
backend with default options.
On 26 October 2014 02:28, Christopher Meiklejohn <
christopher.meiklej...@gmail.com> wrote:
> Hi,
>
> Can you please send the map-reduce job you tried to run and I would be
> happy to de
It is riak_2.0.0-1 running as a 5-node cluster on Ubuntu 14.04
On 26 October 2014 10:02, Christopher Meiklejohn <
christopher.meiklej...@gmail.com> wrote:
> Hi Oleksiy,
>
> What version of Riak are you running?
>
> - Chris
>
> > On Oct 26, 2014, at 6:20
Hi,
I'm running a 5-node cluster (Riak 2.0.0) and I had to replace hardware on
one of the servers. So I did a 'cluster leave', waited until the node
exited, and checked the ring status and member status; all was OK, with no
pending changes. Then later, after about 5 minutes, every client connection
to a
and then the second after a few hours.
On 4 November 2014 20:44, Oleksiy Krivoshey wrote:
> Hi,
>
> I'm running a 5 node cluster (Riak 2.0.0) and I had to replace hardware on
> one of the servers. So I did a 'cluster leave', waited till the node
> exited, checked the
lution/fix for
this? My whole cluster is unusable with just 1 node failed.
On 5 November 2014 00:11, Oleksiy Krivoshey wrote:
> There were also errors during initial handoff, here is a full console.log
> for that day: https://www.dropbox.com/s/o7zop181pvpxoa5/console.log?dl=0
>
> I
>
> Also, "riak-admin top -sort msg_q" can give a real-time view of Erlang
> mailbox sizes, sorted by mailbox size.
>
> -Scott
>
--
Oleksiy Krivoshey
Hi!
Can anyone please explain in more detail what kind of negative impact the
'nif' bitcask IO mode has, and what worst-case scenarios can increase
latencies or cause IO collapse?
http://docs.basho.com/riak/latest/ops/advanced/backends/bitcask/#Configuring-Bitcask
"In general, the nif IO mode p
' only, does it affect other file IO (reads,
stats)?
On 10 November 2014 20:02, Oleksiy Krivoshey wrote:
> Hi!
>
> Can anyone please explain in more details what kind of negative impact has
> the 'nif' bitcask IO mode and what worst-case scenarios can increase
> latencies
velop/src/bitcask_io.erl
>
> -Scott
>
--
Oleksiy Krivoshey
Hi,
Can anyone please suggest what the best setup of Yokozuna would be (in terms
of indexing/search performance) if I have many buckets of the same bucket-type:
1. having 1 yokozuna index associated with a bucket-type (e.g. with all
buckets)
2. having separate yokozuna index created and associated with
Hi,
I have enabled Yokozuna on existing Riak 2.0 buckets, and while it is still
indexing everything I've already received about 50 errors like this:
emulator Error in process <0.26807.79> on node 'riak@10.0.1.1' with exit
value:
{{badmatch,false},[{base64,decode_binary,2,[{file,"base64.erl"},{line
se64_string from this?
On 21 November 2014 21:41, Ryan Zezeski wrote:
> redbug:start("yz_solr:to_pair -> return").
--
Oleksiy Krivoshey
r:to_pair -> return", [{time, 60}]).
--
Oleksiy Krivoshey
Hi,
How long does it usually take to create entropy trees? I saw in the manual
that they are built one per hour, but mine seem to have been stuck for 9 hours:
11417981541647679048466287755595961091061972992--
57089907708238395242331438777979805455309864960--
102761833874829111436196589800363649819
1381575766539369164864420818427111292018498732032 0 0 0
1427247692705959881058285969449495136382746624000 0 14 722
On 21 November 2014 22:11, Ryan Zezeski wrote:
>
> Oleksiy Krivoshey writes:
>
> >
> > Still, what kind of base64 string can it be? I don
Zezeski wrote:
>
> Oleksiy Krivoshey writes:
>
> >
> > Entropy Trees
> >
> > Index
[{<<"vsn">>,<<"1">>},
{<<"riak_bucket_type">>,<<"fs_files">>},
{<<"riak_bucket_name">>,
&
s):
{ value:
{ ctime: '2014-11-21T17:40:07.947Z',
mtime: '2014-11-21T17:40:07.947Z',
isDirectory: true },
content_type: 'application/json',
vtag: '6gqpDUb5r2GWyUQO6JFVaB',
last_mod: 1416591607,
last_mod_usecs: 948073,
indexes:
[ { key: 'abtiuxzz
There are other keys, that start with the same prefix, e.g.:
/.Trash/MT03 348 plat frames/MT03 348 plat 001.jpg
/.Trash/MT03 348 plat frames/MT03 348 plat 002.jpg
/.Trash/MT03 348 plat frames/MT03 348 plat 003.jpg
On 22 November 2014 00:20, Oleksiy Krivoshey wrote:
> Bucket backend
Hi Ryan,
Thanks! Unfortunately I can't do this on my live application. Is this going
to be fixed? Should I submit an issue to the Yokozuna project on GitHub?
On 22 November 2014 02:02, Ryan Zezeski wrote:
>
> Oleksiy Krivoshey writes:
>
> > Got few results:
> >
> >
. One index can be more
> efficient, as long as you're not creating hundreds of thousands of buckets.
> On the other hand, you don't want to create hundreds of thousands of
> indexes. Could you us give a bit more information of your expected numbers?
>
> Thanks,
> Eric
>
14 at 16:15, Eric Redmond wrote:
> Go ahead and submit an issue. We'll take a look at it, but it could be
> months before a fix ends up in a release. I'd take Ryan's suggestion for
> now and replace spaces with another char like underscore (_).
>
> Eric
>
>
>
KeysA
10 KeysB + 10 KeysA
Why is this happening?
--
Oleksiy Krivoshey
> Eric
>
>
> On Nov 29, 2014, at 9:26 AM, Oleksiy Krivoshey wrote:
>
> Hi,
>
> I get an inconsistent number of documents returned for the same search query
> when index keys are repaired by search AAE. The prerequisites are:
>
> 1. Create a bucket, insert some keys (10 ke
#x27;s
> report is the association of an index _after_ data had already been
> written. That data is sometimes missing. These two issues could be
> related but I don't see anything in that GitHub report to indicate why.
>
> >
> > On Nov 29, 2014, at 9:26 AM, Oleksiy Krivos
ic load caused by my application. What could be the reason? AAE?
Thanks!
--
Oleksiy Krivoshey
I guess those spikes were caused by AAE, because eventually things went back
to normal:
https://www.dropbox.com/s/zjemsub9astjhiw/Screenshot%202014-12-07%2017.36.45.png?dl=0
On 6 December 2014 at 14:27, Oleksiy Krivoshey wrote:
> Hi,
>
> Can someone please explain the strange beha
Hi!
Riak 2.1.3
Having a stable data set (no documents deleted in months), I'm receiving
inconsistent search results from Yokozuna. For example, the first query can
return num_found: 3000 (correct), while the same query repeated seconds
later can return 2998, or 2995, then 3000 again. Similar inconsistency
encies or are in need of repair. Do you have AAE enabled?
>
> -Fred
>
> > On Feb 26, 2016, at 8:36 AM, Oleksiy Krivoshey
> wrote:
> >
> > Hi!
> >
> > Riak 2.1.3
> >
> > Having a stable data set (no documents deleted in months) I'm r
ximum score
optional uint32 num_found = 3; // Number of results
}
On Fri, 26 Feb 2016 at 17:51 Oleksiy Krivoshey wrote:
> Yes, AAE is enabled:
>
> anti_entropy = active
>
> anti_entropy.use_background_manager = on
> handoff.use_background_manager = on
>
> anti_entrop
Riak 2.1.1
The following error is logged multiple times and then Riak shuts down:
scan_key_files: error function_clause @
[{riak_kv_bitcask_backend,key_transform_to_1,[<<>>],[{file,"src/riak_kv_bitcask_backend.erl"},{line,99}]},{bitcask,'-scan_key_files/5-fun-0-',7,[{file,"src/bitcask.erl"},{line
I have a bucket with ~200 keys in it, and I wanted to iterate over them with
the help of the $bucket index and a 2i request; however, I'm seeing recursive
behaviour. For example, I send the following 2i request:
{
bucket: 'BUCKET_NAME',
type: 'BUCKET_TYPE',
index: '$bucket',
key: 'BUCKET_NAME',
qtype: 0,
ma
I've updated Riak to 2.1.3 and it started successfully.
On 28 February 2016 at 14:47, Oleksiy Krivoshey wrote:
> Riak 2.1.1
>
> The following error is logged multiple times and then Riak shuts down:
>
> scan_key_files: error function_clause @
> [{riak_kv_bitcask_bac
Hi Magnus,
You are right, there was a Solr indexing issue:
2016-03-01 09:00:17,640 [ERROR] @SolrException.java:109
org.apache.solr.common.SolrException: Invalid Date String:'Invalid date'
However, I'm struggling to find the object that causes this; the error
message doesn't contain the object id
I've found the keys in the Riak console.log, within the AAE errors.
Thanks!
On 5 March 2016 at 22:53, Oleksiy Krivoshey wrote:
> 2016-03-01 09:00:17,640 [ERROR]
> @SolrException.java:109
> org.apache.solr.common.SolrException: Invalid Date String:'Invalid date'
>
> Howe
Anyone?
On 4 March 2016 at 19:11, Oleksiy Krivoshey wrote:
> I have a bucket with ~200 keys in it and I wanted to iterate them with the
> help of $bucket index and 2i request, however I'm facing the recursive
> behaviour, for example I send the following 2i request:
>
> {
>
So even after I fixed the 3 documents which caused AAE errors, restarted AAE
with riak_core_util:rpc_every_member_ann(yz_entropy_mgr, expire_trees, [],
5000), and waited 5 days (all AAE trees now show as rebuilt within the last
5 days, with no AAE or Solr errors), I still get inconsistent num_found.
For a bucket with
_yz_pn:37
_yz_pn:115 OR _yz_pn:103 OR _yz_pn:91 OR
_yz_pn:55 OR _yz_pn:43 OR _yz_pn:31 OR _yz_pn:19 OR _yz_pn:7
0
On 11 March 2016 at 12:05, Oleksiy Krivoshey wrote:
> So event when I fixed 3 documents which caused AAE errors,
> restarted AA
oMNlsnVKvXcawQK6BGnCAKx58pC9xX:1","UMiHx4qDR5pHWT9OgLAu1KMlFeEKbISm:0","F3KcwtjG9VAtM5u8vbwBuCjuGBrPTnfq:2","YQlRWkJPFYiLlAwhvgqOysJC3ycmQ9OA:0","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:15","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:25"],"continuation":"g20ja1AzdzJwOXpYcV
AACNTM=
> "
>
> Once the end of the key list is reached, the server returns an empty keys
> list and no further continuation value.
>
> Please let me know if this works for you.
>
> Kind Regards,
>
> Magnus
>
>
> [0]: http://docs.basho.com/riak/latest/dev/
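The termination condition described above (an empty key list and no further continuation) suggests a pagination loop along these lines; `query_2i` is a hypothetical stand-in for a real client's 2i call, not a specific library's API:

```python
def iter_bucket_keys(query_2i, bucket, page_size=1000):
    # Page through a $bucket 2i query, passing the continuation token
    # back on each request until the server stops returning one.
    continuation = None
    while True:
        resp = query_2i(bucket=bucket, index="$bucket", key=bucket,
                        max_results=page_size, continuation=continuation)
        for k in resp.get("keys", []):
            yield k
        continuation = resp.get("continuation")
        if not continuation:      # end of the key list reached
            break
```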
uCjuGBrPTnfq:2","YQlRWkJPFYiLlAwhvgqOysJC3ycmQ9OA:0","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:15","kP3w2p9zXqZ2oAx48S1SgEJAlbtfHUvI:25"],"continuation":"g20ja1AzdzJwOXpYcVoyb0F4NDhTMVNnRUpBbGJ0ZkhVdkk6MjU="}
On 11 March 2016 at 14:58, Oleksiy Krivoshey
a larger
> max_results. I remember a bug with small results set, I thought it was
> fixed, I’m looking into the past issues, but can you try “max_results=1000”
> or something, and let me know what you see?
>
> On 11 Mar 2016, at 13:03, Oleksiy Krivoshey wrote:
>
> > Here i
buckets. Either with 2i or with Yokozuna.
On Fri, Mar 11, 2016 at 15:32 Russell Brown wrote:
> Not the answer, but why pagination for 200 keys? Why pay the cost of doing the
> query 20 times vs once?
>
> On 11 Mar 2016, at 13:28, Oleksiy Krivoshey wrote:
>
> > Unfortunately there
esult in down
> time on query. Is this production data or a test environment?
>
> -Fred
>
> On Mar 11, 2016, at 7:38 AM, Oleksiy Krivoshey wrote:
>
> Here are two consecutive requests; one returns 30118 keys, another 37134
>
>
>
>
> 0
> 6
>
>
bucket or bucket-type properties for that small
> bucket? If you open an issue on github, please add the properties there,
> too.
>
> Many Thanks,
>
> On 11 March 2016 at 13:46, Oleksiy Krivoshey wrote:
>
>> I got the recursive behavior with other, larger buckets but I h
ch explains your problem, but you should have failed a lot earlier.
>
> On 11 Mar 2016, at 16:26, Oleksiy Krivoshey wrote:
>
> > Hi Magnus,
> >
> > The bucket type has the following properties:
> >
> >
> '{"props":{"backend"
The problem is not a problem then :)
Thanks!
Hi Magnus,
Yes, thank you, though I'm a bit stuck. When I experienced the recursive
behaviour with the paginated $bucket query, I tried creating a Yokozuna Solr
index with an empty schema (just the _yz_* fields). Unfortunately, querying
Yokozuna gives me an inconsistent number of results, as described in my other
Because otherwise I have no way to walk all the keys in some of my largest
buckets: paginated queries are not supported, and RpbListKeysReq fails with a
Riak 'timeout' error. All hopes rest on Yokozuna then!
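If walking keys has to go through search instead, one hedged approach is Solr-style start/rows paging over the Yokozuna bucket field; `search` here is a hypothetical stand-in for a client's Yokozuna query call, while `_yz_rb` and `_yz_rk` are the standard Yokozuna bucket and key fields:

```python
def iter_keys_via_search(search, index, bucket, rows=500):
    # Page through a Yokozuna index with start/rows, yielding Riak keys
    # (_yz_rk) for every document whose bucket field (_yz_rb) matches.
    start = 0
    while True:
        resp = search(index=index, q=f"_yz_rb:{bucket}",
                      start=start, rows=rows, sort="_yz_rk asc")
        docs = resp.get("docs", [])
        for d in docs:
            yield d["_yz_rk"]
        if len(docs) < rows:      # short page means we've reached the end
            break
        start += rows
```

Note that deep start/rows paging gets expensive and can be inconsistent under concurrent writes; Solr's cursorMark paging, where available, is the more robust option.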
On 11 March 2016 at 18:49, Oleksiy Krivoshey wrote:
> Hi Magnus,
>
> Yes, thank yo
else.
I have tried deleting the search index (with a PBC call) and tried expiring
the AAE trees. Nothing helps; I can't get consistent search results from
Yokozuna.
Please help.
On 11 March 2016 at 18:18, Oleksiy Krivoshey wrote:
> Hi Fred,
>
> This is production environment but I c
s close to 1,000,000)
Should I now try to delete the index and yokozuna AAE data and wait another
2 weeks? If so, how should I delete the index and AAE data?
Will RpbYokozunaIndexDeleteReq be enough?
On 18 March 2016 at 18:54, Oleksiy Krivoshey wrote:
> Hi Magnus,
>
> As of today I had no Yo
association with any bucket or bucket type. Any PUT operations on these
> buckets will lead to indexing failures being logged until the index has
> been recreated. However, this also means that no separate operation in
> `riak-admin` is required to associate the newly recreated index with
OK!
On 24 March 2016 at 21:11, Magnus Kessler wrote:
> Hi Oleksiy,
>
> On 24 March 2016 at 14:55, Oleksiy Krivoshey wrote:
>
>> Hi Magnus,
>>
>> Thanks! I guess I will go with index deletion because I've already tried
>> expiring the trees before.
>
ex is in use" and list of all buckets of the fs_chunks type
- for some reason all these buckets had their own search_index property set
to that same index
How can this happen if I definitely never set the search_index property per
bucket?
On 24 March 2016 at 22:41, Oleksiy Krivoshey wr
On two nodes (of 5) the last AAE exchanges happened > 20 days ago.
For now I have issued ` riak_core_util:rpc_every_member_ann(yz_entropy_mgr,
expire_trees, [], 5000).` on each node again. I will wait 10 more days, but
I don't think that will fix anything.
On 25 March 2016 at 09:28, Oleksi
On 4 April 2016 at 17:15, Oleksiy Krivoshey wrote:
> Continuation...
>
> The new index has the same inconsistent search results problem.
> I was making a snapshot of `search aae-status` command almost each day.
> There were absolutely no Yokozuna errors in logs.
>
> I can s
...},...},
> <<>>,
>
> "./data/yz_anti_entropy/913438523331814323877303020447676887284957839360",
> <<>>,incremental,[],0,
> {array,...}}}],
>closed = false}
>
>
s
- quite a lot of the AAE trees have no exchanges and are not being rebuilt
Last output of `search aae-status` from all nodes attached.
On 5 April 2016 at 22:54, Oleksiy Krivoshey wrote:
> Hi Fred,
>
> Thanks for internal call tips, I'll dig deeper!
>
> I've atta