Near Real Time not working as expected

2022-12-07 Thread Matias Laino
Hi all,

I recently had an issue with very high CPU usage on our Testing SolrCloud 
cluster when sending data to Solr. I've tried several things which reduced the 
CPU usage; our testing SolrCloud now runs on an 8-core machine with 32 GB of 
RAM (recently changed the heap to 21g as a test).
When we push data to Solr, it takes a couple of minutes for that document to be 
available in search results. I've tried everything and cannot find out what is 
going on; it was working perfectly fine until last week, when it suddenly 
started having this delay.

Our configuration for NRT is very aggressive: a 60s autoCommit with 
openSearcher=false and a 1s autoSoftCommit. But no matter what configuration I 
try, it always takes a couple of minutes for the new document to become 
available in search results.
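
For reference, the update handler section of our solrconfig.xml looks roughly 
like this (paraphrasing from memory; the values match what I described above):

<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>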

I've tried modifying the cache configuration to use Caffeine, tried removing 
the maxWarmingSearchers value, and tried setting autoWarmCount to different 
values, and still the same issue; it's almost like my configuration doesn't 
matter.
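
As an example, the kind of cache tweak I experimented with looks roughly like 
this (sizes here are illustrative, not our exact values):

<filterCache class="solr.CaffeineCache"
             size="512"
             initialSize="512"
             autowarmCount="0"/>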

We are using a Solr 8.11 install in SolrCloud mode: 2 nodes, 1 ZooKeeper node. 
On each node we have 6 collections of around 10-11M records each (the numbers 
didn't change much before and after this issue started). The total amount of 
disk space used is 20.4 GB; our heap is now 21 GB.

I'm kind of desperate since I'll be on vacation starting at the end of next 
week and I haven't been able to find out what is wrong with this. My fear is 
that if this happens to our production server, we won't know how to fix it 
other than reinstalling Solr from scratch.

Our prod server only has 1 collection of 11 GB and runs on a 4-core server 
with 16 GB of RAM (8 GB heap).

Any help or pointers will be highly appreciated, as I'm desperate.

Thanks in advance!

Matias Laino | DIRECTOR OF PASSARE REMOTE DEVELOPMENT
matias.la...@passare.com | +54 11-6357-2143



Is there a way to shape a query response from SOLR similar to the way the Script Update Processor can transform an update payload/document?

2022-12-07 Thread Matthew Castrigno
I need to shape the JSON response from Solr; is there a way to do that?

Thank you.

--
"This message is intended for the use of the person or entity to which it is 
addressed and may contain information that is confidential or privileged, the 
disclosure of which is governed by applicable law. If the reader of this 
message is not the intended recipient, you are hereby notified that any 
dissemination, distribution, or copying of this information is strictly 
prohibited. If you have received this message by error, please notify us 
immediately and destroy the related message."


Re: Near Real Time not working as expected

2022-12-07 Thread Dave
Just out of curiosity, are you using bare metal? And if so, have you run any 
disk I/O tests to see if you may have a hardware problem on any of the nodes? 
A document won't be available until all the nodes have it, so it only takes one 
node getting slow to slow you down.



RE: Near Real Time not working as expected

2022-12-07 Thread Matias Laino
I'm sorry, but I'm not sure what you mean by metal; our servers are EC2 
instances, if that helps in any way.

MATIAS LAINO | DIRECTOR OF PASSARE REMOTE DEVELOPMENT
matias.la...@passare.com | +54 11-6357-2143




Re: CVE-2022-40153 com.fasterxml.woodstox_woodstox-core

2022-12-07 Thread Kevin Risden
https://issues.apache.org/jira/browse/SOLR-16568 is merged and upgrades
woodstox-core. The only remaining woodstox-core CVE is CVE-2022-40152
(https://github.com/advisories/GHSA-3f7h-mf4q-vrm4), fixed in
https://github.com/FasterXML/woodstox/issues/160; it is LOW severity.

Kevin Risden


On Sat, Dec 3, 2022 at 1:10 PM Gus Heck  wrote:

> Hi Billy,
>
> Thanks for bringing this up. The CVE you link is rejected (
> https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-40153). However
> reading through the report here:
> https://github.com/x-stream/xstream/issues/304 it seems that this was part
> of a series of low quality auto generated CVE reports and 4/6 of them were
> rejected, but annoyingly NVD only reflects the rejected status for 3 out of
> 4, having missed it for the one you linked. In any case,
> https://nvd.nist.gov/vuln/detail/CVE-2022-40152 did eventually stick to
> woodstox after initially being reported against x-stream and can be fixed
> by an upgrade to woodstox 6.4. Main branch is on 6.3.1 presently and Solr
> will receive this upgrade to 6.4 as part of the Caffeine Cache upgrade, so
> you can follow https://issues.apache.org/jira/browse/SOLR-16562 (I have
> added a comment so hopefully it at least shows up in searches for the
> correct CVE soon).
>
> Sorry the response took so long; for my part, I missed the first mail you
> sent. It's not my job any more than anyone else on the PMC to respond, but
> I do appreciate the way you have been following our requested process on
> the security page which I helped revise. Once I saw your second mail, I
> initiated a small private list discussion to try to ensure a coherent
> response since it didn't seem to have been addressed previously. It doesn't
> look like there is much risk from this one since it's at most a DOS and
> would only be encountered by users that are using text tagging
> functionality since that is the only place we use this library directly. I
> also see it as a transitive dependency in some of the s3 related code, but
> is not directly used by us in that module. While there is processing of
> external data in this path, this would generally be indexing related, which
> makes a DOS a bit difficult to achieve. This is based on initial quick
> look/discussion, but there has been no serious attempt to
> find/verify/exclude an exploit since it's getting fixed soon as part of
> other work, so YMMV.
>
> -Gus
>
> On Tue, Nov 29, 2022 at 8:30 AM Billy Kidwell
> 
> wrote:
>
> > https://nvd.nist.gov/vuln/detail/CVE-2022-40153
> >
> > Our container scan found a potential security vulnerability in Solr 9.0.0
> > and 9.1.0 for woodstox-core.
> >
> > I checked the security page, the official list of non-exploitable
> > vulnerabilities and the user mailing list.  I also checked jira.  There
> are
> > a number of tickets concerning woodstox, but they seem to be prior
> issues.
> >
> > For 9.1.0, the package version seems to be 6.2.8
> >
> > /solr/server/solr-webapp/webapp/WEB-INF/lib/woodstox-core-6.2.8.jar
> >
> > This vulnerability is addressed in 6.4.0.
> >
> > Does anyone know if this vulnerability is exploitable in Solr?
> > If so, under what circumstances?
> >
> > Thanks,
> >
> > Bill
> >
>
>
> --
> http://www.needhamsoftware.com (work)
> http://www.the111shift.com (play)
>


Re: Near Real Time not working as expected

2022-12-07 Thread Tomás Fernández Löbbe
Are you seeing any messages in the logs with "PERFORMANCE WARNING:
Overlapping onDeckSearchers"? Can you elaborate on the autowarm configuration
that you have? Any "postCommit" events?

If you set the logger "org.apache.solr.search.SolrIndexSearcher" to DEBUG
level, you should see when the searcher is opened and how long it takes to
warm up.
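
For example, you could bump that logger at runtime via the logging endpoint 
(assuming the default host/port), something like:

curl "http://localhost:8983/solr/admin/info/logging?set=org.apache.solr.search.SolrIndexSearcher:DEBUG"

The same change can also be made from the Admin UI under Logging > Level; it 
only lasts until the node restarts unless you also persist it in log4j2.xml.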




Re: Is there a way to shape a query response from SOLR similar to the way the Script Update Processor can transform an update payload/document?

2022-12-07 Thread Mikhail Khludnev
Hello Matthew,
Will wt=json
https://solr.apache.org/guide/7_1/common-query-parameters.html#wt-parameter
work for it?
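
For example (collection name is just a placeholder):

http://localhost:8983/solr/<collection>/select?q=*:*&wt=json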

On Wed, Dec 7, 2022 at 8:19 PM Matthew Castrigno  wrote:

> I need to shape the JSON response from Solr; is there a way to do that?
>
> Thank you.


-- 
Sincerely yours
Mikhail Khludnev


group=true solrcloud 8.11

2022-12-07 Thread James Greene
Is the group=true functionality supposed to work in SolrCloud 8.11? I've
tried the most basic options and reconfigured the schema using various
combinations of field types as well as indexed/stored/docValues settings, but
it always errors out with a NullPointerException.

I've only been able to find this issue, which seemed to be prematurely
closed (process over progress):
https://issues.apache.org/jira/browse/SOLR-15347


q=*:*&group=true&group.field=

coll1-1  | 2022-12-07 21:56:03.353 ERROR (qtp33233312-71) [c:coll1 s:shard2 r:core_node4 x:coll1_shard2_replica_n2] o.a.s.h.RequestHandlerBase org.apache.solr.client.solrj.impl.BaseHttpSolrClient$RemoteSolrException: Error from server at null: java.lang.NullPointerException
coll1-1  |   at org.apache.solr.schema.FieldType.toExternal(FieldType.java:361)
coll1-1  |   at org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.serializeTopGroups(TopGroupsResultTransformer.java:210)
coll1-1  |   at org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transform(TopGroupsResultTransformer.java:77)
coll1-1  |   at org.apache.solr.search.grouping.distributed.shardresultserializer.TopGroupsResultTransformer.transform(TopGroupsResultTransformer.java:57)
coll1-1  |   at org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:223)
coll1-1  |   at org.apache.solr.handler.component.QueryComponent.doProcessGroupedDistributedSearchSecondPhase(QueryComponent.java:1427)
coll1-1  |   at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:378)
coll1-1  |   at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:369)
coll1-1  |   at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:216)
coll1-1  |   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2637)
coll1-1  |   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:791)
coll1-1  |   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:564)
coll1-1  |   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:427)
coll1-1  |   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:357)
coll1-1  |   at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:201)
coll1-1  |   at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
coll1-1  |   at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548)
coll1-1  |   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
coll1-1  |   at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600)
coll1-1  |   at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
coll1-1  |   at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
coll1-1  |   at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624)
coll1-1  |   at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
coll1-1  |   at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434)
coll1-1  |   at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
coll1-1  |   at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
coll1-1  |   at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594)
coll1-1  |   at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
coll1-1  |   at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349)
coll1-1  |   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
coll1-1  |   at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:191)
coll1-1  |   at org.eclipse.jetty.server.handler.InetAccessHandler.handle(InetAccessHandler.java:177)
coll1-1  |   at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
coll1-1  |   at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
coll1-1  |   at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
coll1-1  |   at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:763)
coll1-1  |   at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)


Re: group=true solrcloud 8.11

2022-12-07 Thread Mike Drob
Have you tried the group.main and group.query options?

https://solr.apache.org/guide/8_11/result-grouping.html#grouping-examples
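
For example, something like this (field names and query values are 
placeholders, not from your schema):

http://localhost:8983/solr/coll1/select?q=*:*&group=true&group.field=category_s&group.main=true
http://localhost:8983/solr/coll1/select?q=*:*&group=true&group.query=category_s:books&group.query=category_s:movies

group.main=true returns the grouped result as a flat doc list, and group.query 
groups by arbitrary queries instead of a field.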




Re: Is there a way to shape a query response from SOLR similar to the way the Script Update Processor can transform an update payload/document?

2022-12-07 Thread Matthew Castrigno
Hi Mikhail,

Yes, I was reading this section, but it was not clear to me how I could 
accomplish what I need to do with these resources other than writing my own 
ResponseWriter. I want to take the key/value pairs of the response and change 
the names of the keys, for example, and make changes to the arrangement of the 
response.
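
Roughly, this is the kind of transformation I am after (field names are just 
made-up examples):

What Solr gives me today (abridged):
  {"response": {"numFound": 1, "docs": [{"id": "42", "title_t": "Some title"}]}}

What I would like to return to the caller (keys renamed, structure rearranged):
  {"results": [{"documentId": "42", "title": "Some title"}]}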

Thanks.



