Re: Re: Riak shuts itself down after a couple of minutes
Which one? I am a bit lost.

[root@localhost riak_data]# cat /proc/sys/fs/file-max
786810
[root@localhost riak_data]# ulimit -Hn
4096
[root@localhost riak_data]# ulimit -Sn
1024

--Hao

On 2015-08-07 23:58:56, "Alex Moore" wrote:

Hi Hao,

Looks like you might be running into an EMFILE error:
http://docs.basho.com/riak/latest/community/faqs/logs/#riak-logs-contain-error-emfile-in-the-message

What's the open files limit on your nodes?

Thanks,
Alex

On Aug 7, 2015, at 11:55 AM, 王昊 wrote:

I am running it on a single node, using the Erlang Riak client. I added a map bucket type, created a new index, set the index on a bucket, and saved a few map objects into the bucket. Then I did some searches. All good. Then, as far as I remember, I tried an invalid search query like "title_s_register:[New movie one]", which shut Riak down. But that may or may not have been the first time Riak started shutting down; I can't remember. Now Riak shuts itself down after running for a few minutes. I must have done something really wrong. Any idea what I can do?

I have tried to restore the bitcask data folder from a backup from days ago. It didn't help; it still crashes.

The console.log is here: http://www.pastebin.ca/3092510
The error begins at Line 172.

Can anyone help? Much appreciated.

-Hao

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
How to fix a stale index situation after a bitcask data restore
Single node with search = on. I previously indexed 2 map objects. Then I restored my previous bitcask folder. I can see the newly created bucket type is still there, and when I search by index, the index data are also there.

So I removed the index folder (under /var/lib/riak/yz/todoriak_main_movie_idx/data), expecting Riak to start indexing anew. But it turned out it doesn't; the index folder was re-created by Riak, though. When I search by index, I still see the old 2 map objects. And when I add new objects, they are not indexed at all. Always only the old 2 map objects.

I thought AAE would fix the index. No? Is AAE only for clusters? What can I do now? Delete the index and re-create it? And what about re-attaching it to the bucket?

Thank you
-Hao

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
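For reference, the delete-and-recreate step asked about above might look roughly like this with the riakc Erlang client. This is a sketch only: the bucket type and name are illustrative, set_search_index is assumed here as the helper that sets the bucket's search_index property (setting the property via set_bucket would work as well), and whether the index must first be detached from the bucket (the "re-attach" question above) is worth checking.

{ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087),
%% Drop the existing index, then create it again (optionally with a schema).
ok = riakc_pb_socket:delete_search_index(Pid, <<"todoriak_main_movie_idx">>),
ok = riakc_pb_socket:create_search_index(Pid, <<"todoriak_main_movie_idx">>),
%% Point the bucket back at the re-created index.
ok = riakc_pb_socket:set_search_index(Pid, {<<"maps">>, <<"movies">>},
                                      <<"todoriak_main_movie_idx">>).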
For an Erlang proplist object, is an extractor needed?
If I want to save an Erlang proplist into Riak and use Riak Search 2.0, do I need to create an Erlang proplist extractor for Solr to be able to index it?

--
Hao

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: Re: For an Erlang proplist object, is an extractor needed?
Big thanks, Drew. I was racking my brain wondering what went wrong. I
didn't realize that it must be binary. I do want to save the proplist directly.
Cheers!
--
Hao
At 2015-08-09 01:54:44, "Drew Kerrigan" wrote:
Hello Hao,
Riak object values must be supplied to Riak as binaries. Are you attempting to
store the proplist using term_to_binary/1? If so, you would need to create a
custom search extractor for your values. Here is a small tutorial on creating
extractors for yokozuna:
http://docs.basho.com/riak/latest/dev/search/custom-extractors/
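As a rough illustration, an extractor for term_to_binary-encoded proplists might look something like the sketch below. The module name, the field conversion, and the exact callback contract are assumptions to verify against the tutorial above; the tutorial also covers registering the module against a content type.

-module(my_proplist_extractor).
-export([extract/1, extract/2]).

extract(Value) ->
    extract(Value, []).

extract(Value, _Opts) ->
    %% Assumes the object value was written with term_to_binary/1.
    Proplist = binary_to_term(Value),
    %% Turn each {Key, Val} pair into a {FieldName, FieldValue} entry for Solr.
    [{to_bin(K), to_bin(V)} || {K, V} <- Proplist].

to_bin(X) when is_binary(X)  -> X;
to_bin(X) when is_atom(X)    -> atom_to_binary(X, utf8);
to_bin(X) when is_list(X)    -> list_to_binary(X);
to_bin(X) when is_integer(X) -> integer_to_binary(X).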
If you don't require the value to be specifically a proplist, a simpler
solution would be to convert your proplist to JSON, which already has an
extractor built into Riak Search. A good module for JSON encoding/decoding is
mochijson2, which comes with the https://github.com/mochi/mochiweb repo.
> SimpleProplist = [{key1, <<"value1">>}, {key2, <<"value2">>}].
...
> mochijson2:encode(SimpleProplist).
...
> RiakObjectValue = list_to_binary(lists:flatten(mochijson2:encode(SimpleProplist))).
<<"{\"key1\":\"value1\",\"key2\":\"value2\"}">>
Cheers!
Drew
On Sat, Aug 8, 2015 at 2:06 AM Hao wrote:
If I want to save an Erlang proplist into Riak and use Riak Search 2.0, do I need
to create an Erlang proplist extractor for Solr to be able to index it?
--
Hao
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
SolrException: Error CREATEing SolrCore after deleting index
Hi,

I deleted a search index through the Erlang client. Then I saw this in the Solr log, showing up every minute:

org.apache.solr.common.SolrException: Error CREATEing SolrCore 'todoriak_main_movie_idx': Unable to create core: todoriak_main_movie_idx
Caused by: /var/lib/riak/yz/todoriak_main_movie_idx/data/index/_0.si (No such file or directory)
    at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:546)
    at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:152)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
    at org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:732)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:268)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)

The Solr admin panel displays:

SolrCore Initialization Failures
todoriak_main_movie_idx: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Error opening new searcher

How can I fix it?

-Hao

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
What's the maximum number of seconds to wait before setting an index on a bucket after creating the index
Hi,
What's the maximum number of seconds to wait after creating a search index
and before setting it on the bucket?
On my local machine, I only need to wait 1 second; sometimes I feel I
don't need to wait at all. But on a production server with basically
zero traffic, I have to wait about 10 seconds (definitely over 5 s) before
I can set the index on a bucket.
I am using the riakc_pb_socket client. At first I thought something was wrong
with my function that "creates" and "sets" the index, but when I split
the process, it's fine. So it seems it's the interval in between that matters.
I need to know the maximum because I need to restore a lot
of buckets and set indexes on them via a script. I don't care how long it
takes, but I don't want any bucket to end up without its index set.
The exact error on the console when I set the index on a bucket is
<<"Invalid bucket properties: [{search_index,\n
<<\"application_test_player_idx does not exist\">>}]">>
Thanks,
--
Hao
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: What's the maximum number of seconds to wait before setting an index on a bucket after creating the index
v2.0.4. I think I know what might be causing the problem: I forgot to
add my custom extractor to Riak. Will test it out.
Thanks,
-Hao
On 09/02/2015 11:36 PM, Alex Moore wrote:
Hao,
What version of Riak are you using?
Thanks,
Alex
On Sep 2, 2015, at 11:26 AM, Fred Dushin <fdus...@basho.com> wrote:
I apologize, I was wrong about the timeouts -- they are configurable,
either through the client, or in the advanced config on the Riak
server(s).
The timeout gets set in the server here:
https://github.com/basho/yokozuna/blob/2.1.1/src/yz_pb_admin.erl#L114
This means you can set the timeout in the PB client, as in
riakc_pb_socket:create_search_index(Pid, Index, Schema,
[{timeout, Timeout}, ...])
where Timeout is in milliseconds (or the atom 'infinity').
cf. http://basho.github.io/riak-erlang-client/
The order of precedence is:
1. client-defined
2. riak config
3. default (45 seconds)
-Fred
On Sep 2, 2015, at 8:13 AM, Fred Dushin <fdus...@basho.com> wrote:
What is the return value you are getting from
riakc_pb_socket:create_search_index? If it's ok, then the Solr cores
should have been created on all nodes. Otherwise, you should check
the logs for timeout messages, e.g.,
https://github.com/basho/yokozuna/blob/2.1.1/src/yz_index.erl#L443
If you are getting timeouts, instead of sleeping, you should
probably query your cluster for the search index, along the lines of
what is done in one of the riak tests, e.g.,
https://github.com/basho/yokozuna/blob/2.1.1/riak_test/yz_pb.erl#L100
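For example, a small polling loop along those lines might look like the sketch below. It assumes get_search_index returns {ok, _} once the index exists on the queried node; the one-second sleep and the retry budget are arbitrary, and the same retry idea can be applied to the call that sets the bucket property if it still races with propagation.

wait_for_index(_Pid, Index, 0) ->
    {error, {index_not_ready, Index}};
wait_for_index(Pid, Index, Retries) ->
    case riakc_pb_socket:get_search_index(Pid, Index) of
        {ok, _Props} ->
            ok;
        {error, _NotYet} ->
            %% Index not visible yet; back off briefly and try again.
            timer:sleep(1000),
            wait_for_index(Pid, Index, Retries - 1)
    end.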
If necessary, you might want to fold over all nodes in your cluster,
to ensure the index has been propagated to all nodes, and possibly
use the wait_for patterns used in the tests.
Unfortunately, it looks like the internal timeout used to wait for
propagation of indexes to all nodes is not configurable -- it
defaults to 45 seconds:
https://github.com/basho/yokozuna/blob/2.1.1/include/yokozuna.hrl#L134
I hope that helps,
-Fred
On Sep 2, 2015, at 6:27 AM, Hao <jusf...@163.com> wrote:
Hi,
What's the maximum number of seconds to wait after creating a search index
and before setting it on the bucket?
On my local machine, I only need to wait 1 second; sometimes I feel
I don't need to wait at all. But on a production server with
basically zero traffic, I have to wait about 10 seconds (definitely
over 5 s) before I can set the index on a bucket.
I am using the riakc_pb_socket client. At first I thought something
was wrong with my function that "creates" and "sets" the index, but
when I split the process, it's fine. So it seems it's the interval in
between that matters.
I need to know the maximum because I need to restore a
lot of buckets and set indexes on them via a script. I don't care how
long it takes, but I don't want any bucket to end up without its index set
on the bucket.
The exact error on the console when I set the index on a bucket is
<<"Invalid bucket properties: [{search_index,\n
<<\"application_test_player_idx does not exist\">>}]">>
Thanks,
--
Hao
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
--
Hao
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
How to deal with a 1-second indexing requirement in Riak Search 2.0
Take registration for example: within 1 second, how could I find out whether the same user has already registered in the database? Suppose the email address is the ID but is not the key; the key is a generated GUID.

(I have been advised to use the email address as the key, but that's a big change. I just want to make sure there is really no other way.)

Thanks,

--
Hao

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Re: How to deal with a 1-second indexing requirement in Riak Search 2.0
I actually start to wonder: even using key/value (email address as the key) to query the record during registration, is there still a possibility that the two requests are too close together (the client fired 2 sequential requests) and the second will still find no registration of the user, because the first request wasn't quick enough to finish saving? After all, saving a record in Riak is eventually consistent, right? (Or is a key save fast enough to ignore?)

Thanks,
-Hao

On 09/05/2015 04:28 AM, Dmitri Zagidulin wrote:

I second what Luke said. Definitely use key/value operations for this case (the users-by-email bucket), which is a one-to-one relationship. Don't use Search or secondary indexes.

On Fri, Sep 4, 2015 at 9:18 AM, Luke Bakken <lbak...@basho.com> wrote:

Use another bucket, keyed by email, with the user's generated ID as the value:

Bucket/key: buckets/users-by-email/bob.bar...@gmail.com
Value: Riak generated ID

There are, I am sure, race conditions and eventual consistency issues to keep in mind, but it's good to remember that you can use key/value operations in this manner.

--
Luke Bakken
Engineer
lbak...@basho.com

On Fri, Sep 4, 2015 at 3:04 AM, Hao <jusf...@163.com> wrote:
> Take registration for example, within 1 second, how could I find out the
> same user already registered in the database? Suppose the email address is
> the ID but is not the key. KEY is generated guid.
>
> (I have been suggested to use the email address as the key. But it's a big
> change. I just want to make sure that there is really no other way)
>
> Thanks,
>
> --
> Hao

--
Hao

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
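For illustration, the check-then-claim pattern Luke describes might look roughly like the sketch below with the riakc Erlang client. The bucket name, content type, and function name are only examples, and as Luke notes the read-then-write window is still a race on an eventually consistent store, so two truly simultaneous registrations can both see notfound.

register_email(Pid, Email, UserId) ->
    case riakc_pb_socket:get(Pid, <<"users-by-email">>, Email) of
        {error, notfound} ->
            %% No existing claim for this address: store the email -> user ID mapping.
            Obj = riakc_obj:new(<<"users-by-email">>, Email, UserId, "text/plain"),
            riakc_pb_socket:put(Pid, Obj);
        {ok, _Existing} ->
            {error, already_registered}
    end.

A single-key get is generally much faster than waiting for a Solr index to catch up, which is why the key/value approach is preferred here over Search.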
Cannot start Solr, no logs
:prep_stop:230 Unregistered pb services
2015-12-28 16:34:41.869 [info] <0.337.0>@riak_kv_app:prep_stop:235 unregistered webmachine routes
2015-12-28 16:34:41.873 [info] <0.337.0>@riak_kv_app:prep_stop:237 all active put FSMs completed
2015-12-28 16:34:41.874 [info] <0.413.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_hook) host stopping (<0.413.0>)
2015-12-28 16:34:41.874 [info] <0.414.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_hook) host stopping (<0.414.0>)
2015-12-28 16:34:41.874 [info] <0.406.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_reduce) host stopping (<0.406.0>)
2015-12-28 16:34:41.874 [info] <0.408.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_reduce) host stopping (<0.408.0>)
2015-12-28 16:34:41.875 [info] <0.407.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_reduce) host stopping (<0.407.0>)
2015-12-28 16:34:41.875 [info] <0.410.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_reduce) host stopping (<0.410.0>)
2015-12-28 16:34:41.875 [info] <0.411.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_reduce) host stopping (<0.411.0>)
2015-12-28 16:34:41.875 [info] <0.409.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_reduce) host stopping (<0.409.0>)
2015-12-28 16:34:41.876 [info] <0.397.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_map) host stopping (<0.397.0>)
2015-12-28 16:34:41.876 [info] <0.399.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_map) host stopping (<0.399.0>)
2015-12-28 16:34:41.876 [info] <0.398.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_map) host stopping (<0.398.0>)
2015-12-28 16:34:41.876 [info] <0.401.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_map) host stopping (<0.401.0>)
2015-12-28 16:34:41.876 [info] <0.403.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_map) host stopping (<0.403.0>)
2015-12-28 16:34:41.877 [info] <0.402.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_map) host stopping (<0.402.0>)
2015-12-28 16:34:41.877 [info] <0.400.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_map) host stopping (<0.400.0>)
2015-12-28 16:34:41.877 [info] <0.404.0>@riak_kv_js_vm:terminate:237 Spidermonkey VM (pool: riak_kv_js_map) host stopping (<0.404.0>)
2015-12-28 16:34:41.877 [info] <0.337.0>@riak_kv_app:stop:248 Stopped application riak_kv.
2015-12-28 16:34:41.891 [info] <0.163.0>@riak_core_app:stop:114 Stopped application riak_core.

--
王 昊 Hao WANG
86 186 0086 9737
为学日益 为道日损 (In learning, one gains daily; in the Tao, one lets go daily.)
http://blog.jusfeel.cn

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
