> For the most part, committers on this project believe in the "fail
> early, fail hard" philosophy.

Actually, I'm inclined that way as well, and I wouldn't mind it on the indexing side: if garbage is fed into Solr or the tokenizer chain has errors, crash and burn, no problem :)
But here the response of Solr is a bit of both worlds:
- it sends a 500 response header to indicate the failure
- and it includes the output of the query and facet modules in the response body anyway

So now Solr sends multiple KB of data (in my case) just to indicate it failed somewhere, while the response actually contains quite useful data...

> If you were using SolrJ [...], you probably would not even *get* the
> response [...].

In SolrNet you don't simply get the response either, but the exception thrown contains it, so that's what I used in my workaround for now. I assume something similar is possible in SolrJ.

> If the behavior you want is *configurable* [...]

That's OK for me. Perhaps even add the possibility of not sending any response body (or just the exception details) other than the 500 status header.

I'll open a Jira issue shortly, but I wanted to mail the dev list first to make sure the idea hadn't popped up before (and been rejected or was already in progress).

Luc

-----Original Message-----
From: Shawn Heisey [mailto:[email protected]]
Sent: Wednesday, March 18, 2015 14:42
To: [email protected]
Subject: [Possibly spoofed] Re: An Exception in a non-query component should not cause a global HTTP 500 Server Error response

On 3/18/2015 4:56 AM, Vanlerberghe, Luc wrote:
> I ran into an issue where certain combinations of queries and sort
> orders cause an Exception in the Highlighter component that in turn
> causes a 500 Server Error.
>
> I tracked down the cause to a problem in the tokenizer chain I use,
> but I do not have a quick solution yet.
>
> The point I want to raise here is that the “global” 500 error for the
> whole of the response seems way too restrictive.
>
> In addition, the response body still contains the correct values for
> the query and facets themselves, but getting to them is awkward (I use
> SolrNet: I have to catch the WebException, get the response from there,
> and hope the main parts of it are intact…), but at least it keeps the
> bulk of the application running.

There are arguments both ways here. For the most part, committers on this project believe in the "fail early, fail hard" philosophy. The idea is that a failure condition should be detected as early as possible, and that it should result in a complete failure. Part of this philosophy comes from the way that Java handles exceptions: when a program has caught an exception, Java convention says that the code statement which threw the exception has NOT done the work it was asked to do.

If you were using SolrJ (the Java client for Solr), you probably would not even *get* the response from the server if the server returned a 500 HTTP response because it hit an exception. What you are asking for here is the ability to say "Even though part of it failed, please give me the parts that didn't fail" ... which goes against this entire philosophy. When considering default behavior, I believe that the way things currently operate is correct.

If the behavior you want is *configurable* but does not happen by default, that's a different story. We have similar switches already to allow other partial failures, like shards.tolerant. When shards.tolerant=true, a distributed request will succeed as long as at least one of its shard subrequests succeeds, even if some of them fail. In that situation, the HTTP response code would be a normal 200, not 4xx or 5xx.

I would recommend opening an issue in the Jira issue tracker to ask for a query parameter that enables the behavior you're looking for.
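To illustrate the shards.tolerant switch mentioned above, here is a minimal SolrJ sketch, not code from Solr itself; the URL, collection name, and class name are placeholders, and it assumes a SolrJ 5.x HttpSolrClient:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.ShardParams;

public class TolerantQuerySketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and collection; adjust for your own setup.
        HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/collection1");
        try {
            SolrQuery query = new SolrQuery("*:*");
            // Allow a distributed request to come back as HTTP 200 with partial
            // results even if some shard subrequests fail.
            query.set(ShardParams.SHARDS_TOLERANT, true);

            QueryResponse response = solr.query(query);
            // When some shards failed, the response header carries a partialResults flag.
            System.out.println("partialResults: "
                    + response.getResponseHeader().get("partialResults"));
        } finally {
            solr.close();
        }
    }
}

The request itself succeeds with a normal 200; the partialResults flag in the response header is what tells the client that some shards did not contribute.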
If you can create a patch to implement the behavior, that's even better.

https://issues.apache.org/jira/browse/SOLR
http://grep.codeconsult.ch/2007/04/28/yoniks-law-of-half-baked-patches/

Thanks,
Shawn
