Luwak Info

2012-08-17 Thread Jorge Garrido
Can anyone provide some information about how to use Luwak from the Erlang API 
(protobuffs)?

Thanks.

Jorge Garrido - MoreloSoft


riak reached_max_restart_intensity

2012-09-07 Thread Jorge Garrido
Hi

An error has been detected in Riak:

reached_max_restart_intensity

I think the problem is the ulimit of the system Riak runs on, because when I
try to put a large value into Riak, the node is disconnected from my
application, but I am not sure about that.

I use protobuffs to access Riak, with simple load balancing across my cluster.

Is that the problem?
If so, how can I configure the ulimit for Riak?
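
As a first check (a minimal sketch, assuming you can open a console on the
node with riak attach), the Erlang VM reports the file-descriptor limit it
actually received:

  %% run in an attached Riak console; max_fds is the limit the VM sees
  proplists:get_value(max_fds, erlang:system_info(check_io)).

If the value is low (often 1024 by default), the usual fix is to raise the
open-files limit for the user that runs Riak, for example via ulimit -n or
/etc/security/limits.conf, before starting the node.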

Thanks

Jorge Garrido






backup size

2012-11-26 Thread Jorge Garrido
Hi

We are currently using Riak in production, but we are worried about the size
of our backups: right now a backup is 5 GB. Is that normal, or is there
anything we can do to reduce it?

Thanks 

Jorge Garrido


backup size

2012-11-26 Thread Jorge Garrido
> Sorry for the limited info. I am worried about the cost of storage and the
> time per transaction, since I don't know whether this is normal or whether I
> am doing something wrong when I put or get data in Riak.
> 
> Thanks,
> 
> Jorge Garrido 




{error, <<"timeout">>}

2012-12-19 Thread Jorge Garrido
Hi

Recently I have had some problems with my Riak cluster: when I make a put on a
node, Riak returns {error, <<"timeout">>}. I don't know what the cause is; all
nodes in the cluster agree.
I use protobuffs to connect my Erlang client application to Riak, but when
{error, <<"timeout">>} happens, my connection to the Riak node goes down
(disconnected).
Our servers were restarted recently, but I don't know whether that was the
problem or whether the cluster is misconfigured. The main question is:

What are the possible causes of this failure?
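
For reference, this is the shape of the call involved (a minimal sketch with
the riak-erlang-client; the host, port, value, and 60-second timeout are
illustrative):

  %% connect with auto_reconnect so a dropped socket is re-established
  {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087,
                                         [{auto_reconnect, true}]),
  Obj = riakc_obj:new(<<"bucket">>, <<"key">>, <<"large value">>),
  %% give the request an explicit timeout instead of the client default
  riakc_pb_socket:put(Pid, Obj, 60000).

With auto_reconnect the client is expected to return {error, disconnected}
while the socket is down rather than exiting, which makes it easier to tell a
server-side timeout from a dropped connection.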

Thanks

Jorge Garrido


Riak Search Using Riak Erlang Client

2013-01-18 Thread Jorge Garrido
Hi 

I've implemented Riak Search with riak-erlang-client like this.

First I put an object into Riak from the Erlang shell:

$ erl -pa 

> ObjUser1 = riakc_obj:new(<<"users">>, <<"user_one">>, [{name, mario}, {age, 10}]).
> riakc_pb_socket:put(Pid, ObjUser1).
ok

Now when I try a search:

> riakc_pb_socket:search(Pid, <<"users_data">>, <<"name:mario">>).


It works, but if my object value is a proplist like this:

[{name, "MARIO"}, ...]

so that the value of each key in the proplist is a string,

the question is: how can I write a query for this?
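
One option we could try (a sketch, not necessarily the only approach; it
assumes the bucket has the Riak Search precommit hook installed, and stores
the value as JSON so the built-in extractor indexes each field as text):

  > Json = <<"{\"name\":\"MARIO\",\"age\":10}">>.
  > ObjUser2 = riakc_obj:new(<<"users">>, <<"user_two">>, Json, "application/json").
  > riakc_pb_socket:put(Pid, ObjUser2).
  ok
  > riakc_pb_socket:search(Pid, <<"users">>, <<"name:MARIO">>).

Here the index name matches the bucket name used above, which is how Riak
Search 1.x names its indexes.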

Thanks

Jorge Garrido


Riak Search on Updated buckets

2013-01-23 Thread Jorge Garrido
Hi

Recently I've implemented Riak Search, but I am worried about a problem:

When data is first written into a Riak bucket, Riak Search works great, but
when the data is updated, Riak Search still holds the old value, and searches
no longer match only the current value.

Can you tell me what is happening?
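
For reference, this is the update pattern we would expect to replace the
indexed value rather than accumulate it (a sketch with the riak-erlang-client;
bucket, key, and value are illustrative):

  %% read-modify-write: reuse the fetched object so its vclock is kept
  {ok, Old} = riakc_pb_socket:get(Pid, <<"users">>, <<"user_one">>),
  New = riakc_obj:update_value(Old, <<"{\"name\":\"LUIGI\"}">>),
  riakc_pb_socket:put(Pid, New).

If the update is written as a brand-new object without the vector clock from a
get, and the bucket has allow_mult enabled, siblings can accumulate and the
old value may keep matching.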

Jorge Garrido

Thanks


Issue on riak search when update bucket

2013-01-24 Thread Jorge Garrido
Hi, 

I have a problem: when my application (written in Erlang) puts data into a
bucket that has the search hook installed with 'search-cmd install finder',
the data is fine and searches work.

The problem comes when data is updated: when my application then searches,
Riak returns results matching both the old value and the new value.

How can I keep only the last (updated) value in Riak Search?

Thanks

Jorge Garrido


Restore Fails!

2013-01-27 Thread Jorge Garrido
Hi,

I created a backup file with the backup command in Riak version 1.0.2.
When I try the restore command with this file on Riak version 1.2.1, it shows
this error:

$ sh dev1/bin/riak-admin restore dev1@127.0.0.1 riak /root/rbackup-102.backup 
Restoring from '/root/rbackup-102.backup' to cluster to which 'dev1@127.0.0.1' 
belongs.
{"init terminating in 
do_boot",{function_clause,[{riak_kv_backup,traverse_backup,[{{continuation,<0.41.0>,934256632,[]},[],658},#Fun,332440],[{file,"src/riak_kv_backup.erl"},{line,132}]},{riak_kv_backup,restore,2,[{file,"src/riak_kv_backup.erl"},{line,125}]},{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,572}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}

Any ideas?

Thanks 

Jorge Garrido




Bitcask .write.lock and bad behaviours

2014-02-04 Thread Jorge Garrido
Hi, 

In the last few weeks we have been seeing bad behaviour in our Riak cluster:
the Bitcask backend is leaving .write.lock files behind, and this coincides
with problems when we access data, for example MapReduce, link walking, or
even writing, updating, or getting data from the cluster.

The workaround we found is to delete all the .lock files, but we would like to
know whether there is any option to prevent this, or whether it is a known
issue in Riak.

We use Riak version 1.2.1, and we checked the logs but found no information
about this problem.
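
For what it's worth, this is roughly how we enumerate the lock files before
removing them (a sketch run from an attached console; the Bitcask data
directory is an assumption and depends on your platform_data_dir):

  %% list every Bitcask write-lock file under the (assumed) data directory
  filelib:wildcard("/var/lib/riak/bitcask/*/*.write.lock").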

Any ideas?

Thanks

Jorge Garrido


Re: Riak Search Pagination

2015-12-21 Thread Jorge Garrido
Great!! But can you explain why this happens?

On Monday, December 21, 2015, Zeeshan Lakhani  wrote:

> Best to provide a specific sort ordering on a field if you can.
>
> Zeeshan Lakhani
> programmer |
> software engineer at @basho |
> org. member/founder of @papers_we_love |
> twitter => @zeeshanlakhani
>
> > On Dec 21, 2015, at 21:54, Garrido wrote:
> >
> > No, we don't provide a sort on the query. Let us check and we will tell you
> > whether it is the same score. But if the search returns the same score,
> > which one would be the solution?
> >
> >> On Dec 21, 2015, at 8:21 PM, Zeeshan Lakhani wrote:
> >>
> >> The coverage plan can change per query. Are you providing a sort on the
> >> query? If not, or if sorting by score, does each item return the same score?
> >>
> >> Zeeshan Lakhani
> >> programmer |
> >> software engineer at @basho |
> >> org. member/founder of @papers_we_love |
> >> twitter => @zeeshanlakhani
> >>
> >>> On Dec 21, 2015, at 18:34, Garrido wrote:
> >>>
> >>> Hello,
> >>>
> >>> Recently we migrated our Riak nodes to another network, so we backed up
> >>> the data and then regenerated the ring. All is well, but there is strange
> >>> behaviour in a Riak search: for example, if we execute a query using the
> >>> riak_erlang_client, it returns the objects in the order:
> >>>
> >>> A, B, C
> >>>
> >>> and then if we execute the same query again, the result is:
> >>>
> >>> B, A, C
> >>>
> >>> so in a different order. Do you know what is causing this? Before we
> >>> moved our Riak ring to the other network, it was working perfectly.
> >>>
> >>> Thank you
> >
>
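
For the record, this is the kind of fix we plan to apply (a sketch using
riakc_pb_socket:search/4; the index name, query, and created_at_s field are
illustrative, not our real schema): give the query an explicit sort together
with the pagination options, so the order is stable regardless of the coverage
plan:

  riakc_pb_socket:search(Pid, <<"users_index">>, <<"*:*">>,
                         [{sort, <<"created_at_s asc">>},
                          {start, 0}, {rows, 10}]).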


Error on crash.log

2014-09-15 Thread Jorge Garrido gomez
Hello

My team detected the following error in our Riak cluster:

2014-09-15 11:09:14 =SUPERVISOR REPORT
 Supervisor: {local,riak_pipe_fitting_sup}
 Context:shutdown_error
 Reason: noproc
 Offender:   
[{pid,<0.12743.378>},{name,undefined},{mfargs,{riak_pipe_fitting,start_link,[]}},{restart_type,temporary},{shutdown,2000},{child_type,worker}]

We don't know what it means, but sometimes our local application is
disconnected from the Riak protobuffs client with a reqpb_timeout. We use the
Erlang client. The configuration of our cluster is:

Riak version 1.4.2.
Cluster with 4 nodes.
Search enabled on all nodes.
All default settings in vm.args and app.config.
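
Since riak_pipe_fitting processes belong to Riak's MapReduce (riak_pipe)
machinery, here is a sketch of a MapReduce call with an explicit client-side
timeout, in case the reqpb_timeout comes from such requests (the inputs, map
phase, and 120-second timeout are illustrative):

  %% MapReduce over one key, with an explicit client-side timeout
  riakc_pb_socket:mapred(Pid,
      [{<<"users">>, <<"user_one">>}],
      [{map, {modfun, riak_kv_mapreduce, map_object_value}, none, true}],
      120000).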

I hope you can help us.




Error On Riak Search

2014-09-24 Thread Jorge Garrido gomez
Hello, 

Recently we experienced an error in our Riak cluster related to Riak Search:

2014-09-20 14:25:36 =ERROR REPORT
** Generic server <0.29307.896> terminating 
** Last message in was 
{tcp,#Port<0.16938821>,<<0,0,0,128,27,10,115,40,112,104,97,115,101,58,112,104,97,115,101,49,32,65,78,68,32,116,121,112,101,95,112,114,111,102,105,108,101,58,78,111,116,101,115,32,65,78,68,32,97,99,116,105,118,101,58,116,114,117,101,32,65,78,68,32,105,115,95,99,111,108,117,109,110,58,116,114,117,101,32,65,78,68,32,98,114,111,111,116,58,100,56,100,98,56,51,102,53,55,100,99,49,52,99,51,49,56,55,101,56,55,98,102,55,56,48,48,52,97,51,55,99,41,18,6,115,101,97,114,99,104,24,20>>}
** When Server state == 
{state,#Port<0.16938821>,undefined,[{riak_api_basic_pb_service,undefined},{riak_core_pb_bucket,undefined},{riak_kv_pb_bucket,{state,{riak_client,'riakliveprod1@10.136.89.100',undefined},undefined,undefined}},{riak_kv_pb_counter,{state,{riak_client,'riakliveprod1@10.136.89.100',undefined}}},{riak_kv_pb_csbucket,{state,{riak_client,'riakliveprod1@10.136.89.100',undefined},undefined,undefined,undefined,0}},{riak_kv_pb_index,{state,{riak_client,'riakliveprod1@10.136.89.100',undefined},undefined,undefined,undefined,0}},{riak_kv_pb_mapred,{state,undefined,undefined}},{riak_kv_pb_object,{state,{riak_client,'riakliveprod1@10.136.89.100',undefined},undefined,undefined,<<0,0,0,0>>}},{riak_search_pb_query,{state,{riak_search_client,{riak_client,'riakliveprod1@10.136.89.100',undefined],<<0,0,0,128,27,10,115,40,112,104,97,115,101,58,112,104,97,115,101,49,32,65,78,68,32,116,121,112,101,95,112,114,111,102,105,108,101,58,78,111,116,101,115,32,65,78,68,32,97,99,116,105,118
 
,101,58,116,114,117,101,32,65,78,68,32,105,115,95,99,111,108,117,109,110,58,116,114,117,101,32,65,78,68,32,98,114,111,111,116,58,100,56,100,98,56,51,102,53,55,100,99,49,52,99,51,49,56,55,101,56,55,98,102,55,56,48,48,52,97,51,55,99,41,18,6,115,101,97,114,99,104,24,20>>,{buffer,[],0,1024}}
** Reason for termination == 
** 
{error,function_clause,[{riak_indexed_doc,to_pairs,[<<"id">>,{error,timeout},all],[{file,"src/riak_indexed_doc.erl"},{line,110}]},{riak_search_pb_query,'-encode_results/3-lc$^1/1-0-',3,[{file,"src/riak_search_pb_query.erl"},{line,110}]},{riak_search_pb_query,'-encode_results/3-lc$^1/1-0-',3,[{file,"src/riak_search_pb_query.erl"},{line,112}]},{riak_search_pb_query,encode_results,3,[{file,"src/riak_search_pb_query.erl"},{line,109}]},{riak_search_pb_query,process,2,[{file,"src/riak_search_pb_query.erl"},{line,81}]},{riak_api_pb_server,process_message,4,[{file,"src/riak_api_pb_server.erl"},{line,223}]},{riak_api_pb_server,handle_message,3,[{file,"src/riak_api_pb_server.erl"},{line,200}]},{riak_api_pb_server,decode_buffer,1,[{file,"src/riak_api_pb_server.erl"},{line,172}]}]}
2014-09-20 14:25:36 =CRASH REPORT
  crasher:
initial call: riak_api_pb_server:init/1
pid: <0.29307.896>
registered_name: []
exception exit: 
{{error,function_clause,[{riak_indexed_doc,to_pairs,[<<"id">>,{error,timeout},all],[{file,"src/riak_indexed_doc.erl"},{line,110}]},{riak_search_pb_query,'-encode_results/3-lc$^1/1-0-',3,[{file,"src/riak_search_pb_query.erl"},{line,110}]},{riak_search_pb_query,'-encode_results/3-lc$^1/1-0-',3,[{file,"src/riak_search_pb_query.erl"},{line,112}]},{riak_search_pb_query,encode_results,3,[{file,"src/riak_search_pb_query.erl"},{line,109}]},{riak_search_pb_query,process,2,[{file,"src/riak_search_pb_query.erl"},{line,81}]},{riak_api_pb_server,process_message,4,[{file,"src/riak_api_pb_server.erl"},{line,223}]},{riak_api_pb_server,handle_message,3,[{file,"src/riak_api_pb_server.erl"},{line,200}]},{riak_api_pb_server,decode_buffer,1,[{file,"src/riak_api_pb_server.erl"},{line,172}]}]},[{gen_server,terminate,6,[{file,"gen_server.erl"},{line,747}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
ancestors: [riak_api_pb_sup,riak_api_sup,<0.272.0>]
messages: []
links: [<0.276.0>,#Port<0.16938821>]
dictionary: [{random_seed,{29569,23547,27895}}]
trap_exit: false
status: running
heap_size: 4181
stack_size: 24
reductions: 5449716722
  neighbours:
2014-09-20 14:25:36 =SUPERVISOR REPORT
 Supervisor: {local,riak_api_pb_sup}
 Context:child_terminated
 Reason: 
{error,function_clause,[{riak_indexed_doc,to_pairs,[<<"id">>,{error,timeout},all],[{file,"src/riak_indexed_doc.erl"},{line,110}]},{riak_search_pb_query,'-encode_results/3-lc$^1/1-0-',3,[{file,"src/riak_search_pb_query.erl"},{line,110}]},{riak_search_pb_query,'-encode_results/3-lc$^1/1-0-',3,[{file,"src/riak_search_pb_query.erl"},{line,112}]},{riak_search_pb_query,encode_results,3,[{file,"src/riak_search_pb_query.erl"},{line,109}]},{riak_search_pb_query,process,2,[{file,"src/riak_search_pb_query.erl"},{line,81}]},{riak_api_pb_server,process_message,4,[{file,"src/riak_api_pb_server.erl"},{line,223}]},{riak_api_pb_server,handle_message,3,[{file,"src/riak_api_pb_server.erl"},{line,200}]},{riak_api_pb_server,decode_buffer,

Riak 2.0 override default schema

2014-10-09 Thread Jorge Garrido gomez
Hello,

We migrated our data from legacy Riak Search to Riak 2.0. The migration was
successful, but we are still using the default schema provided by Riak.

The question is: if we assign a new schema to the indexed bucket, will it keep
working the same way, or could the data be corrupted and queries start
throwing errors?
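
For context, this is roughly the sequence we have in mind (a sketch with the
riak-erlang-client; my_schema, my_index, the schema file, and the bucket name
are placeholders):

  %% upload a custom schema, build an index on it, and point the bucket at it
  {ok, SchemaXml} = file:read_file("my_schema.xml"),
  ok = riakc_pb_socket:create_search_schema(Pid, <<"my_schema">>, SchemaXml),
  ok = riakc_pb_socket:create_search_index(Pid, <<"my_index">>, <<"my_schema">>, []),
  ok = riakc_pb_socket:set_bucket(Pid, <<"users">>, [{search_index, <<"my_index">>}]).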

Thank you
Jorge Garrido


Delete Objects SOLR

2015-03-30 Thread Jorge Garrido gomez
Hello, 

In Riak 1.x, when the search indexes are deleted, the database gets corrupted.
Is that issue still present in Riak 2.x with Yokozuna and Solr?

We ask because we want to delete objects that are indexed in Riak Search 2.0.
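
What we intend to do is simply delete the KV objects and rely on Yokozuna
keeping the Solr index in sync (a sketch; bucket and key are placeholders, and
our understanding is that Yokozuna removes the corresponding Solr document
when the object is deleted):

  ok = riakc_pb_socket:delete(Pid, <<"users">>, <<"user_one">>).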

Thank you!
Jorge Garrido 


Re: Error on crash.log

2015-11-09 Thread Jorge Garrido gomez
Hello,

Recently we upgraded our core application, written in Erlang, which uses the
Riak Erlang client over protobuffs. Before this upgrade we used a simple load
balancer between nodes: a pid is created inside a simple gen_server and kept
in its state, but sometimes the pid gets disconnected and we must connect
again.

For that reason we decided to use another strategy. We forked the
riak-erlang-client project and made a small change: we added a new option,
mod_callback, to the options of the start_link function (where the pid is
created). With it, the new pid is reported back, so the callback module knows
that a pid was created and adds it to the load balancer. All pids are managed
by a supervisor, so when a problem occurs the supervisor restarts a child and
the mod_callback option notifies our load balancer, which can then keep the
new pid in its state. A rough sketch of the idea follows.
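
(A hypothetical sketch only, not the exact code in the commit; riak_conn_worker
and my_balancer:register/1 are made-up names.)

  %% hypothetical connection worker started under a supervisor; on every
  %% (re)start it hands its pid to the balancer, replacing the dead one
  -module(riak_conn_worker).
  -export([start_link/2]).

  start_link(Host, Port) ->
      {ok, Pid} = riakc_pb_socket:start_link(Host, Port, [{auto_reconnect, true}]),
      ok = my_balancer:register(Pid),   %% callback into the (hypothetical) balancer
      {ok, Pid}.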

This change is here: 
https://github.com/zgbjgg/riak-erlang-client/commit/a531cd98cbaf119becc6c85091b2242cb08cbc0a
 


If you think this commit could be included in your main repo, we can open a
pull request.

Thank you very much!



Mod callback on Pid

2015-11-09 Thread Jorge Garrido gomez
Hello, this is our last commit:

https://github.com/zgbjgg/riak-erlang-client/commit/0f5d170e9ba605a668eea09a55b2318ce5927c40
 

Please ignore the previous one.

Thank you



Re: issue creating a custom index/schema

2016-05-09 Thread Jorge Garrido gomez
Hello Alex,

We solved the issue using a Spatial field type definition for the field, and
it works perfectly. I hope this can be helpful; if you want more info, maybe
we can help you.

Thank you! :-)


> On May 9, 2016, at 12:06 PM, Alex De la rosa  wrote:
> 
> OK, I solved the issues for the datatypes "int" and "string", but I am still
> getting errors for "location_rpt":
> 
> <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
>   spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
>   distErrPct="0.025" maxDistErr="0.09" units="degrees" />
> 2016-05-09 19:03:56.798 [error] <0.588.0>@yz_index:core_create:287 Couldn't 
> create index leaders_b: 
> {ok,"500",[{"Content-Type","text/html;charset=ISO-8859-1"},{"Cache-Control","must-revalidate,no-cache,no-store"},{"Content-Length","11214"}],<<"\n\n  http-equiv=\"Content-Type\" content=\"text/html; 
> charset=ISO-8859-1\"/>\nError 500 
> {msg=com/vividsolutions/jts/geom/CoordinateSequenceFactory,trace=java.lang.NoClassDefFoundError:
>  com/vividsolutions/jts/geom/CoordinateSequenceFactory\n\tat 
> java.lang.Class.getDeclaredConstructors0(Native Method)\n\tat 
> java.lang.Class.privateGetDeclaredConstructors(Class.java:2595)\n\tat 
> java.lang.Class.getConstructor0(Class.java:2895)\n\tat 
> java.lang.Class.newInstance(Class.java:354)\n\tat 
> com.spatial4j.core.context.SpatialContextFactory.makeSpatialContext(SpatialContextFactory.java:96)\n\tat
>  
> org.apache.solr.schema.AbstractSpatialFieldType.init(AbstractSpatialFieldType.java:107)\n\tat
>  
> org.apache.solr.schema.AbstractSpatialPrefixTreeFieldType.init(AbstractSpatialPrefixTreeFieldType.java:43)\n\tat
>  
> org.apache.solr.schema.SpatialRecursivePrefixTreeFieldType.init(SpatialRecursivePrefixTreeFieldType.java:37)\n\tat
>  org.apache.solr.schema.FieldType.setArgs(FieldType.java:165)\n\tat 
> org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:141)\n\tat
>  
> org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:43)\n\tat
>  
> org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:190)\n\tat
>  org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:468)\n\tat 
> org.apache.solr.schema.IndexSchema.(IndexSchema.java:166)\n\tat 
> org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:55)\n\tat
>  
> org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:69)\n\tat
>  
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:559)\n\tat
>  org.apache.solr.core.CoreContainer.create(CoreContainer.java:597)\n\tat 
> org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:509)\n\tat
>  
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:152)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:732)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:268)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:368)\n\tat 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)\n\tat
>  
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)\n\tat
>  
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)\n\tat
>  org.eclipse.jetty.http.HttpParser.parseNext(H