SI File Missing

2022-08-11 Thread Brian Byju
I have an index where the .si files are missing for a set of .dim, .fdt, and .nvd files. Is there a way I can recreate the .si files? Because of this the shard is not getting allocated and I am getting a "no index found" exception.

Solr replica recovery mode triggers

2016-10-13 Thread Brian Wright
have experience with this situation. Any help is greatly appreciated. Thanks. -- Brian Wright, Sr. Systems Engineer

Syntax question

2015-12-30 Thread Brian V Zayas
received this e-mail in error….” Search: ((privilege) not w/4 (“received this e-mail in error”)) Thanks in advance! Brian V. Zayas

Re: AlreadyClosedException on new index

2015-01-06 Thread Brian Call
combined with not protecting against this possibility. Thanks for your assistance guys! -Brian > On Jan 6, 2015, at 10:22 AM, Brian Call wrote: > > No exception at all… and that’s the crazy part. I create a new IndexWriter > and then immediately create a new SearcherManager using

Re: AlreadyClosedException on new index

2015-01-06 Thread Brian Call
either. -Brian > On Jan 6, 2015, at 9:55 AM, Ian Lea wrote: > > Presumably no exception is thrown from the new IndexWriter() call? > I'd double check that, and try some harmless method call on the > writer and make sure that works. And run CheckIndex against the >

Re: AlreadyClosedException on new index

2015-01-06 Thread Brian Call
received. Blessings, Brian > On Jan 6, 2015, at 2:16 AM, Tomoko Uchida > wrote: > > Hi, > > How often does this error occur? > You do not tell the lucene version, but I guess you use lucene 3.x > according to the stack trace... > IndexWriter would not be closed until

AlreadyClosedException on new index

2015-01-05 Thread Brian Call
closed? I’m completely baffled on this one guys… so many thanks in advance for your help! I’ll take any suggestions on a possible mitigation too if anyone thinks of any. Blessings, Brian Call

Re: Caused by: java.lang.OutOfMemoryError: Map failed

2014-11-08 Thread Brian Call
I’ll try bumping up the per-process file max and see if that fixes it. Thanks for all your help and suggestions guys! -Brian On Nov 7, 2014, at 5:00 PM, Toke Eskildsen wrote: > Brian Call [brian.c...@soterawireless.com] wrote: >> Yep, you guys are correct, I’m supporting a sligh

Re: Caused by: java.lang.OutOfMemoryError: Map failed

2014-11-07 Thread Brian Call
ata. Also, we’re not using Solr, only raw lucene. The indices remain open until the streaming data has stopped and a user has removed the related session from the UI. Yes, it’s a necessary kind of scary… -Brian On Nov 7, 2014, at 4:20 PM, Erick Erickson wrote: > bq: Our server run

Re: Caused by: java.lang.OutOfMemoryError: Map failed

2014-11-07 Thread Brian Call
Yep, you guys are correct, I’m supporting a slightly older version of our product based on Lucene 3. In my previous email I forgot to mention that I also bumped up the maximum allowable file handles per process to 16k, which had been working well. Here’s the ulimit -a output from our server: co

Caused by: java.lang.OutOfMemoryError: Map failed

2014-11-07 Thread Brian Call
-process map count using cat /proc//maps | wc -l and that returned around 4k maps, well under the 65k limit. Does anyone have any ideas on what to check out next? I’m running out of things to try… Many, many thanks in advance! Blessings, Brian Call
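
Not something this thread settles on, but since "Map failed" comes from exhausting memory-mapped regions, one blunt way to sidestep it (at some possible cost in read performance) is to avoid the memory-mapped directory implementation altogether. A minimal sketch against the 3.x API; the path is a placeholder:

    import java.io.File;
    import java.io.IOException;

    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.NIOFSDirectory;

    public class NoMmapDirectory {
        // FSDirectory.open() picks MMapDirectory on 64-bit platforms; constructing an
        // NIOFSDirectory explicitly keeps index access off mmap entirely.
        public static Directory open(String path) throws IOException {
            return new NIOFSDirectory(new File(path));
        }
    }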

variable string search

2013-09-13 Thread Wasikowski, Brian [ JRDUS]
; : Doc3 And maybe even: "another longer free text" : Doc1, Doc2, Doc3 Any help is appreciated. Here are the components I am currently using: Lucene.Net.Analysis.Standard.StandardAnalyzer Lucene.Net.Search.Query query = new Lucene.Net.Search.FuzzyQuery Lucene.Net.Search.TopDocs hits = sea

Performance problems with lazily loaded fields

2011-03-21 Thread Brian Hurt
s? I am calling the doc function on the index search with a null FieldSelector, but this does not seem to reduce the cost of getting fields (indeed, it seems to slow down the whole query processing by a significant factor). Is there any help anyone can give me? Thanks. Brian
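
For reference, in the 3.x API a null FieldSelector simply means "load every field eagerly"; lazy or partial loading has to be requested explicitly. A minimal sketch, with illustrative field names:

    import java.io.IOException;
    import java.util.Collections;
    import java.util.Set;

    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.FieldSelector;
    import org.apache.lucene.document.SetBasedFieldSelector;
    import org.apache.lucene.index.IndexReader;

    public class PartialLoad {
        // Loads "id" eagerly, defers "body" until first access, skips every other field.
        public static Document load(IndexReader reader, int docId) throws IOException {
            Set<String> eager = Collections.singleton("id");
            Set<String> lazy = Collections.singleton("body");
            FieldSelector selector = new SetBasedFieldSelector(eager, lazy);
            return reader.document(docId, selector);
        }
    }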

Multiple IndexWriter question

2011-03-04 Thread Brian Coverstone
other indexes that are working just fine. Though I've run a check and repair and it seems to be clean. Any advice would be appreciated! Regards, Brian Coverstone

ParallelReader and updateDocument don't play nice?

2011-02-22 Thread Groose, Brian
I have been looking at using ParallelReader as its documentation indicates, to allow certain fields to be updated while most of the fields will not be updated. However, this does not seem possible. Let's say I have two indexes, A and B, which are used in a ParallelReader. If I update a documen
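
For context, this is roughly how the ParallelReader described in its documentation gets wired up (3.x API, hypothetical directory variables). The constraint the thread runs into is that every sub-index must keep identical document numbering, so rewriting documents in only one of the indexes breaks the alignment:

    import java.io.IOException;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.ParallelReader;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.store.Directory;

    public class ParallelSearch {
        // dirA holds the rarely-changing fields, dirB the frequently-rebuilt ones;
        // both must contain the same documents in the same order.
        public static IndexSearcher open(Directory dirA, Directory dirB) throws IOException {
            ParallelReader parallel = new ParallelReader();
            parallel.add(IndexReader.open(dirA));
            parallel.add(IndexReader.open(dirB));
            return new IndexSearcher(parallel);
        }
    }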

Re: The logic of QueryParser

2010-12-13 Thread Brian Hurt
ome old flame war. > > Or just tell me what > > to google for (the terms I've tried haven't yielded > > anything useful). > > > org.apache.lucene.queryParser.QueryParser.setDefaultOperator() > > > Even better. I had missed that. Thank you very much! Brian
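
For anyone landing here later, the call mentioned above switches the implicit operator between bare terms from OR to AND; a minimal sketch against the 3.0-era parser (field and analyzer are illustrative):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.util.Version;

    public class AndByDefault {
        public static Query parse(String userInput) throws Exception {
            QueryParser parser = new QueryParser(Version.LUCENE_30, "contents",
                    new StandardAnalyzer(Version.LUCENE_30));
            // Terms without an explicit operator are now ANDed instead of ORed.
            parser.setDefaultOperator(QueryParser.AND_OPERATOR);
            return parser.parse(userInput);
        }
    }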

The logic of QueryParser

2010-12-13 Thread Brian Hurt
or (the terms I've tried haven't yielded anything useful). Thanks. Brian

Non matched terms

2010-11-10 Thread Brian C. Dilley
Hi, I'm using Lucene for a search project and I have the following requirements and I was wondering if one of you fine folks could point me in the right direction (currently i'm using the RAMDirectory, IndexSearcher, StandardAnalyzer and QueryParser): Given the example search string: "red leather

solr / lucene engineering positions in Boston, MA USA @ the Echo Nest

2010-09-10 Thread Brian Whitman
Hi all, brief message to let you know that we're in heavy hire mode at the Echo Nest. As many of you know we are very heavy solr/lucene users (~1bn documents across many many servers) and a lot of our staff have been working with and contributing to the projects over the years. We are a "music inte

RE: Wanting batch update to avoid high disk usage

2010-08-24 Thread Beard, Brian
tire segment files are rewritten every time. So it looks like our only option is to bail out when there's not enough space to duplicate the existing index. - Original Message ---- From: "Beard, Brian" To: java-user@lucene.apache.org Sent: Tue, August 24, 2010 8:19:52 AM Sub

RE: Wanting batch update to avoid high disk usage

2010-08-24 Thread Beard, Brian
We had a situation where our index size was inflated to roughly double. It took about a couple of months, but the size eventually dropped back down, so it does seem to eventually get rid of the deleted documents. With that said, in the future expungeDeletes will get called once a day to better man
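
For reference, the daily clean-up mentioned here is a single call on the writer in the 2.9/3.x API; a minimal sketch:

    import java.io.IOException;

    import org.apache.lucene.index.IndexWriter;

    public class ReclaimDeletes {
        // Merges away deleted documents; cheaper than optimize(),
        // but still rewrites the affected segments on disk.
        public static void expunge(IndexWriter writer) throws IOException {
            writer.expungeDeletes();
            writer.commit();
        }
    }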

RE: Tokenization / Analyzer question

2010-08-20 Thread Beard, Brian
e this metaData information while inside the TokenFilter. I guess this would be similar to adding column stride fields, but have multiple ones at different positions in the document. -Original Message- From: Beard, Brian [mailto:brian.be...@mybir.com] Sent: Thursday, August 19, 2010 2:02 P

Tokenization / Analyzer question

2010-08-19 Thread Beard, Brian
pass through and flag them using a typeAttribute, while they don't in search mode. For this though I would most likely have to end up using different delimiter values. Any help is appreciated, Brian Beard
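
A sketch of the kind of filter being described, against the 3.0-era attribute API (TermAttribute was later superseded by CharTermAttribute); the delimiter character and type label are illustrative:

    import java.io.IOException;

    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.TermAttribute;
    import org.apache.lucene.analysis.tokenattributes.TypeAttribute;

    // Tags tokens containing a delimiter via TypeAttribute so a downstream
    // filter (or the same chain in "index mode") can treat them specially.
    public final class DelimiterTypeFilter extends TokenFilter {
        private final TermAttribute termAtt = addAttribute(TermAttribute.class);
        private final TypeAttribute typeAtt = addAttribute(TypeAttribute.class);

        public DelimiterTypeFilter(TokenStream input) {
            super(input);
        }

        @Override
        public boolean incrementToken() throws IOException {
            if (!input.incrementToken()) {
                return false;
            }
            if (termAtt.term().indexOf('|') >= 0) {   // '|' is an illustrative delimiter
                typeAtt.setType("delimited");
            }
            return true;
        }
    }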

Boost and ordering based on most recently updated

2010-08-04 Thread Brian Pontarelli
I have a situation where I'm using a Boost on documents to bump them up in the search results when a search has multiple documents with the same hits in the search query. However, it looks like if two or more documents have the same rank after the Boost is applied, the search results are ordered
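
One common way to make the ordering of equally-scored documents deterministic is a compound Sort: relevance first, then an indexed last-updated timestamp, newest first. A sketch against the 2.9/3.x API, assuming a numeric "lastUpdated" field:

    import java.io.IOException;

    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.Sort;
    import org.apache.lucene.search.SortField;
    import org.apache.lucene.search.TopDocs;

    public class RecencyTieBreak {
        public static TopDocs search(IndexSearcher searcher, Query query) throws IOException {
            Sort sort = new Sort(new SortField[] {
                SortField.FIELD_SCORE,                               // primary: relevance
                new SortField("lastUpdated", SortField.LONG, true)   // secondary: newest first
            });
            return searcher.search(query, null, 20, sort);
        }
    }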

Re: Strange issue with String vs. Query

2010-03-26 Thread Brian Pontarelli
this. Or try a > later version of lucene. > > > -- > Ian. > > > On Thu, Mar 25, 2010 at 11:04 PM, Brian Pontarelli > wrote: >> I'm new to the list and I'm having an issue that I wanted to ask about >> quick. I'm using Lucene version

Strange issue with String vs. Query

2010-03-25 Thread Brian Pontarelli
I'm new to the list and I'm having an issue that I wanted to ask about quick. I'm using Lucene version 2.4.1 I recently rewrote a query to use the Query classes rather than a String and QueryParser. The search results between the two queries are now in different orders while the number of resul

FieldValueHitQueue question - migration to 3.0

2010-01-21 Thread Beard, Brian
Since FieldSortedHitQueue was deprecated in 3.0, I'm converting to the new FieldValueHitQueue. The trouble I'm having is coming up with a way to use FieldValueHitQueue in a Collector so it is decoupled from a TopDocsCollector. What I'd like to do is have a custom Collector that can add objects ex

Re: Lucene-3.0.0 web demo problem

2009-12-07 Thread brian li
Thanks. And now I know where to go if there are more issues :) On Tue, Dec 8, 2009 at 11:54 AM, Robert Muir wrote: > thanks for reporting this, i opened a jira issue at > https://issues.apache.org/jira/browse/LUCENE-2132 > > On Mon, Dec 7, 2009 at 7:58 PM, brian li wrote: > >

Lucene-3.0.0 web demo problem

2009-12-07 Thread brian li
web demo code, so I just post it here. I just think newbies like me can enjoy one less bump trying this. Regards, Brian

Re: Indexing large files? - No answers yet...

2009-09-11 Thread Brian Pinkerton
Quite possibly, but shouldn't one expect Lucene's resource to track the size of the problem in question? Paul's two examples below use input files of 5 and 62MB, hardly the size of input I'd expect to handle in a memory-compromised environment. bri On Sep 11, 2009, at 7:43 AM, Glen New

SegmentReader retaining memory

2009-06-22 Thread Groose, Brian
with Sun's JDK 6 update 12, 64-bit, on Debian Lenny. Thanks, Brian

termDocs / termEnums performance increase for 2.4.0

2009-02-05 Thread Beard, Brian
Thought I would report a performance increase noticed in migrating from 2.3.2 to 2.4.0. Performing an iterated loop using termDocs & termEnums like below is about 30% faster. The example test set I'm running has about 70K documents to go through and process (on a dual processor windows machine) w
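
For context, the iterated loop being timed is the classic TermEnum/TermDocs walk, roughly as below in the 2.4-era API (field name is illustrative):

    import java.io.IOException;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.index.TermDocs;
    import org.apache.lucene.index.TermEnum;

    public class TermWalk {
        // Visit every document for every term of one field.
        public static void walk(IndexReader reader, String field) throws IOException {
            TermEnum terms = reader.terms(new Term(field, ""));
            TermDocs termDocs = reader.termDocs();
            try {
                do {
                    Term t = terms.term();
                    if (t == null || !t.field().equals(field)) {
                        break;               // ran past the end of this field's terms
                    }
                    termDocs.seek(t);
                    while (termDocs.next()) {
                        int doc = termDocs.doc();
                        int freq = termDocs.freq();
                        // process (doc, freq) here
                    }
                } while (terms.next());
            } finally {
                termDocs.close();
                terms.close();
            }
        }
    }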

RE: Poor QPS with highlighting

2009-02-05 Thread Beard, Brian
A while ago someone posted a link to a project called XTF which does this: http://xtf.wiki.sourceforge.net/ The one problem with this approach still lurking for me (or maybe I don't understand how to get around) is how to handle multiple terms which "must" appear in the query, but are in non-overl

Re: background merge hit exception

2009-01-03 Thread Brian Whitman
> > > It's very strange that CheckIndex -fix did not resolve the issue. After > fixing it, if you re-run CheckIndex on the index do you still see that > original one broken segment present? CheckIndex should have removed > reference to that one segment. > I just ran it again, and it detected the

Re: background merge hit exception

2009-01-02 Thread Brian Whitman
s the same error. On Fri, Jan 2, 2009 at 5:26 PM, Michael McCandless < luc...@mikemccandless.com> wrote: > Also, this (Solr server going down during an add) should not be able to > cause this kind of corruption. > Mike > > Yonik Seeley wrote: > > > On Fri, Jan 2, 2009

Re: background merge hit exception

2009-01-02 Thread Brian Whitman
e [-fix was not specified] On Fri, Jan 2, 2009 at 3:47 PM, Brian Whitman wrote: > I will but I bet I can guess what happened -- this index has many > duplicates in it as well (same uniqueKey id multiple times) - this happened > to us once before and it was because the solr server went down

Re: background merge hit exception

2009-01-02 Thread Brian Whitman
there any other > exceptions prior to this one, or, any previous problems with the OS/IO > system? > > Can you run CheckIndex (java org.apache.lucene.index.CheckIndex to see > usage) and post the output? > Mike > > Brian Whitman wrote: > > > I am getting this on a 10

background merge hit exception

2009-01-02 Thread Brian Whitman
I am getting this on a 10GB index (via solr 1.3) during an optimize: Jan 2, 2009 6:51:52 PM org.apache.solr.common.SolrException log SEVERE: java.io.IOException: background merge hit exception: _ks4:C2504982 _oaw:C514635 _tll:C827949 _tdx:C18372 _te8:C19929 _tej:C22201 _1agw:C1717926 _1agz:C1 into
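
The check-and-repair step that comes up later in this thread can be run from code as well as from the command line; a sketch against the 2.9/3.x API. Note that fixIndex removes any segments it cannot read, and the documents in them are lost:

    import java.io.File;
    import java.io.IOException;

    import org.apache.lucene.index.CheckIndex;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class RepairIndex {
        public static void main(String[] args) throws IOException {
            Directory dir = FSDirectory.open(new File(args[0]));
            CheckIndex checker = new CheckIndex(dir);
            CheckIndex.Status status = checker.checkIndex();
            if (!status.clean) {
                // WARNING: drops unreadable segments and the documents in them.
                checker.fixIndex(status);
            }
            dir.close();
        }
    }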

Re: highlighter / fragmenter performance for large fields

2008-10-20 Thread Brian Beard
Karsten, Thanks, I will look into this. >Hi Brian, > >I don't know the internals of highlighting („explanation“) in lucene. >But I know that XTF ( >http://xtf.wiki.sourceforge.net/underHood_Documents#tocunderHood_Documents5 >) can handle very large documents (above 100 M

highlighter / fragmenter performance for large fields

2008-10-13 Thread Beard, Brian
some more processing there, but disabling it doesn't seem to affect the performance that much. One other thing was just doing a simple regex search without using a scorer or analyzer. This runs about 2x faster, but still is relatively slow. Has anyone had any good experience with performing fra
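
For reference, a typical contrib-highlighter setup from the 2.9+ era looks like the sketch below; with very large stored fields most of the time goes into re-analyzing the full text, so capping the analyzed length (or pre-chunking the field) usually matters more than the choice of fragmenter. Field name and sizes are illustrative:

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.highlight.Highlighter;
    import org.apache.lucene.search.highlight.QueryScorer;
    import org.apache.lucene.search.highlight.SimpleFragmenter;
    import org.apache.lucene.search.highlight.SimpleHTMLFormatter;

    public class Snippets {
        public static String[] bestFragments(Query query, Analyzer analyzer, String text)
                throws Exception {
            Highlighter highlighter =
                    new Highlighter(new SimpleHTMLFormatter(), new QueryScorer(query));
            highlighter.setTextFragmenter(new SimpleFragmenter(100));   // ~100-char fragments
            highlighter.setMaxDocCharsToAnalyze(50 * 1024);             // cap work on huge fields
            return highlighter.getBestFragments(analyzer, "body", text, 3);
        }
    }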

RE: Memory eaten up by String, Term and TermInfo?

2008-10-06 Thread Beard, Brian
I played around with GC quite a bit in our app and found the following java settings to help a lot (Used with jboss, but should be good for any jvm). set JAVA_OPTS=%JAVA_OPTS% -XX:MaxPermSize=512M -XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled While these set

RE: performance feedback

2008-07-10 Thread Beard, Brian
35 AM, Beard, Brian <[EMAIL PROTECTED]> wrote: > I will try tweaking RAM, and check about autoCommit=false. It's on the > future agenda to multi-thread through the index writer. The indexing > time I quoted includes the document creation time which would definitely > improve w

RE: performance feedback

2008-07-09 Thread Beard, Brian
performance feedback This is great to hear! If you tweak things a bit (increase RAM buffer size, use autoCommit=false, use threads, etc) you should be able to eke out some more gains... Are you storing fields & using term vectors on any of your fields? Mike Beard, Brian wrote: > > I
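
The knobs mentioned in this reply map onto the 2.3/2.4-era writer API roughly as below (the autoCommit constructor flag was removed in 3.0; the buffer size is illustrative):

    import java.io.IOException;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.Directory;

    public class TunedWriter {
        public static IndexWriter open(Directory dir, Analyzer analyzer) throws IOException {
            // autoCommit=false: defer making changes visible until close()/commit().
            IndexWriter writer = new IndexWriter(dir, false, analyzer, true);
            writer.setRAMBufferSizeMB(64.0);      // flush by RAM usage instead of doc count
            writer.setMaxBufferedDocs(IndexWriter.DISABLE_AUTO_FLUSH);
            return writer;
        }
    }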

performance feedback

2008-07-09 Thread Beard, Brian
e a bit. Total index size (eventually) will be ~15G. Thanks, Brian Beard

performance feedback

2008-07-09 Thread Beard, Brian
y) will be ~15G. Thanks, Brian Beard

search performance & caching

2008-04-28 Thread Beard, Brian
I'm using lucene 2.2.0 & have two questions: 1) Should search times be linear wrt number of queries hitting a single searcher? I've run multiple search threads against a single searcher, and the search times are very linear - 10x slower for 10 threads vs 1 thread, etc. I'm using a parallel multi-

RE: WildCardQuery and TooManyClauses

2008-04-14 Thread Beard, Brian
You can use your approach w/ or w/o the filter. >td = indexSearcher.search(query, filter, maxnumhits); You need to use a filter for the wildcards which is built in to the query. 1) Extend QueryParser to override the getWildcardQuery method. (Or even if you don't use QueryParser, j
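
In Lucene 2.9 and later the filter-backed wildcard behaviour described here is built in as a rewrite mode, so the QueryParser override reduces to a sketch like this:

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.queryParser.ParseException;
    import org.apache.lucene.queryParser.QueryParser;
    import org.apache.lucene.search.MultiTermQuery;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.WildcardQuery;
    import org.apache.lucene.util.Version;

    public class FilterWildcardQueryParser extends QueryParser {
        public FilterWildcardQueryParser(String field, Analyzer analyzer) {
            super(Version.LUCENE_30, field, analyzer);
        }

        @Override
        protected Query getWildcardQuery(String field, String termStr) throws ParseException {
            WildcardQuery query = new WildcardQuery(new Term(field, termStr));
            // Rewrite against a filter/bitset instead of a BooleanQuery,
            // so TooManyClauses can no longer be hit.
            query.setRewriteMethod(MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE);
            return query;
        }
    }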

RE: Boolean Query search performance

2008-03-10 Thread Beard, Brian
AHA! That is consistent with what is happening now, and explains the discrepancy. The original post of parens around each term was because I was adding them as separate boolean queries, but now with using just the clause the parens is around the entire clause with the boost. -Original Message

RE: Boolean Query search performance

2008-03-06 Thread Beard, Brian
ng - from an earlier suggestion - is it possible to add multiple terms per BooleanClause? I tried using TermQuery.combine() to add in an array of them into one query and making a clause from that, but there was no difference in

Boolean Query search performance

2008-03-05 Thread Beard, Brian
I'm using lucene 2.2.0. I'm in the process of re-writing some queries to build BooleanQueries instead of using query parser. Bypassing query parser provides almost an order of magnitude improvement for very large queries, but then the search performance takes 20-30% longer. I'm adding boost valu

Re: FieldSortedHitQueue rise in memory

2008-02-19 Thread Brian Doyle
wrote: > Hi Brian, > > I ran into something similar a long time ago. My custom sort objects were > being cached by Lucene, but there were too many of them because each one > had > different 'reference values' for different queries. So, I changed the > equals

FieldSortedHitQueue rise in memory

2008-02-18 Thread Brian Doyle
We've implemented a custom sort class and use it to sort by distance. We have implemented the equals and hashcode in the sort comparator. After running for a few hours we're reaching peak memory usage and eventually the server runs out of memory. We did some profiling and noticed that a large

custom sort and out of memory

2008-02-17 Thread Brian Doyle
We've written our own custom sorter to be able to sort on the latitude and longitude fields from the results. We have an index that is about 18million records and 12GB on disk in size. We allocated about 3GB of heap to the index and with about 1 request to the index every 2 or 3 seconds we would

RE: how do I get my own TopDocHitCollector?

2008-01-11 Thread Beard, Brian
Thanks for all this. We're doing warmup searching also, but just for some common date searches. The warmup would be a good place to add some pre-caching capability. I'll plan for this eventually and start with the partial cache for now. Thanks, Brian Beard -Original Message

RE: how do I get my own TopDocHitCollector?

2008-01-10 Thread Beard, Brian
- From: Beard, Brian [mailto:[EMAIL PROTECTED] Sent: Thursday, January 10, 2008 10:08 AM To: java-user@lucene.apache.org Subject: RE: how do I get my own TopDocHitCollector? Thanks for the post. So you're using the doc id as the key into the cache to retrieve the external id. Then what mechan

RE: how do I get my own TopDocHitCollector?

2008-01-10 Thread Beard, Brian
Wednesday, January 09, 2008 7:19 PM To: java-user@lucene.apache.org Subject: Re: how do I get my own TopDocHitCollector? Beard, Brian wrote: > Question: > > The documents that I index have two id's - a unique document id and a > record_id that can link multiple documents together that

how do I get my own TopDocHitCollector?

2008-01-09 Thread Beard, Brian
Question: The documents that I index have two id's - a unique document id and a record_id that can link multiple documents together that belong to a common record. I'd like to use something like TopDocs to return the first 1024 results that have unique record_id's, but I will want to skip some o
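
There is no stock collector that de-duplicates on a secondary key, but a custom Collector plus the FieldCache gets close. A sketch against the 2.9/3.x Collector API, assuming record_id is indexed as a single untokenized term per document; note it keeps the first hit per record in docid order, not score order:

    import java.io.IOException;
    import java.util.LinkedHashMap;
    import java.util.Map;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.Collector;
    import org.apache.lucene.search.FieldCache;
    import org.apache.lucene.search.Scorer;

    // Keeps the first hit seen for each record_id, up to a fixed limit.
    public class UniqueRecordCollector extends Collector {
        private final int limit;
        private final Map<String, Integer> firstDocPerRecord = new LinkedHashMap<String, Integer>();
        private String[] recordIds;   // FieldCache view of the current segment
        private int docBase;

        public UniqueRecordCollector(int limit) {
            this.limit = limit;
        }

        @Override
        public void setScorer(Scorer scorer) {
            // scores are not used in this sketch
        }

        @Override
        public void setNextReader(IndexReader reader, int docBase) throws IOException {
            this.recordIds = FieldCache.DEFAULT.getStrings(reader, "record_id");
            this.docBase = docBase;
        }

        @Override
        public void collect(int doc) {
            if (firstDocPerRecord.size() >= limit) {
                return;
            }
            String recordId = recordIds[doc];
            if (!firstDocPerRecord.containsKey(recordId)) {
                firstDocPerRecord.put(recordId, docBase + doc);
            }
        }

        @Override
        public boolean acceptsDocsOutOfOrder() {
            return true;
        }

        public Map<String, Integer> getResults() {
            return firstDocPerRecord;
        }
    }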

RE: Giving boost to a more recent item whiule searching

2007-12-20 Thread Brian Grimal
I would love to revisit this one. I implemented pseudo date boosting in an overly simplistic manner in my app, which I know can be improved upon. Might it be useful to reopen a thread on the topic? Brian -Original Message- From: prabin meitei <[EMAIL PROTECTED]> Sent: Wed

RE: Post processing to get around TooManyClauses?

2007-12-11 Thread Beard, Brian
I had a similar problem (I think). Look at using a WildcardFilter (below), possibly wrapped in a CachingWrapperFilter, depending if you want to re-use it. I over-rode the method QueryParser.getWildcardQuery to customize it. In your case you would probably have to specifically detect for the presenc

RE: Wildcard & filters

2007-10-12 Thread Beard, Brian
ew BitSet(); with = new BitSet(reader.maxDocs()); Beard, Brian wrote: > Mark, > > Thanks so much. > > -Original Message- > From: Mark Miller [mailto:[EMAIL PROTECTED] > Sent: Friday, October 12, 2007 1:54 PM > To: java-user@lucene.apache.org > Subject: Re: Wildcard

RE: Wildcard & filters

2007-10-12 Thread Beard, Brian
.doc()); } } else { break; } } while (enumerator.next()); } finally { termDocs.close(); enumerator.close(); } return bits; } } - Mark Beard, Brian wrote: > I'm trying to over-ride QueryParser.getW

Wildcard & filters

2007-10-12 Thread Beard, Brian
I'm trying to over-ride QueryParser.getWildcardQuery to use filtering. I'm missing something, because the following still gets the maxBooleanClauses limit. I guess the terms are still expanded even though the query is wrapped in a filter. How do I avoid the term expansion altogether? Is there a b
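
The fragments quoted in the follow-ups above appear to come from a filter along these lines; reconstructed as a sketch against the 2.2-era Filter API (which still returned a BitSet), with the parser hook shown as a comment:

    import java.io.IOException;
    import java.util.BitSet;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.Term;
    import org.apache.lucene.index.TermDocs;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.WildcardTermEnum;

    public class WildcardFilter extends Filter {
        private final Term term;

        public WildcardFilter(Term term) {
            this.term = term;
        }

        @Override
        public BitSet bits(IndexReader reader) throws IOException {
            BitSet bits = new BitSet(reader.maxDoc());
            WildcardTermEnum enumerator = new WildcardTermEnum(reader, term);
            TermDocs termDocs = reader.termDocs();
            try {
                do {
                    Term t = enumerator.term();
                    if (t == null) {
                        break;
                    }
                    termDocs.seek(t);
                    while (termDocs.next()) {
                        bits.set(termDocs.doc());
                    }
                } while (enumerator.next());
            } finally {
                termDocs.close();
                enumerator.close();
            }
            return bits;
        }
    }

    // In the QueryParser subclass, the expansion then never reaches a BooleanQuery:
    //   protected Query getWildcardQuery(String field, String termStr) {
    //       return new ConstantScoreQuery(new WildcardFilter(new Term(field, termStr)));
    //   }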

RE: combining Filter's & Query's to search

2007-10-09 Thread Beard, Brian
idable to provide a hook for you to return a Query object of your choosing (e.g. ConstantScoreQuery wrapping your choice of filter) Cheers Mark - Original Message From: "Beard, Brian" <[EMAIL PROTECTED]> To: java-user@lucene.apache.org Sent: Tuesday, 9 October, 2007 3:2

combining Filter's & Query's to search

2007-10-09 Thread Beard, Brian
I'm currently using rangeFilter's and queryWrapperFilter's to get around the max boolean clause limit. A couple of questions concerning this: 1) Is it good design practice to substitute every term containing a wildcard with a queryWrapperFilter, and a rangeQuery with a RangeFilter and ChainedFilt

RE: nfs mount problem

2007-06-25 Thread Beard, Brian
Mike, Thanks for all the info. We'll be making a decision here soon whether to use NFS or not. If we give it a go, or a test run I'll post our experiences. Brian -Original Message- From: Michael McCandless [mailto:[EMAIL PROTECTED] Sent: Friday, June 22, 2007 4:38 PM To:

nfs mount problem

2007-06-22 Thread Beard, Brian
http://issues.apache.org/jira/browse/LUCENE-673 This says the NFS mount problem is still open, is this the case? Has anyone been able to deal with this adequately?

RE: All keys for a field

2007-06-21 Thread Beard, Brian
parser = new QueryParser(); parser.setAllowLeadingWildcard(true); -Original Message- From: Martin Spamer [mailto:[EMAIL PROTECTED] Sent: Thursday, June 21, 2007 7:06 AM To: java-user@lucene.apache.org Subject: All keys for a field I need to return all of the keys for a

RE: MultiSearcher holds on to index - optimization not one segment

2007-06-19 Thread Beard, Brian
That works, thanks. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yonik Seeley Sent: Tuesday, June 19, 2007 9:57 AM To: java-user@lucene.apache.org Subject: Re: MultiSearcher holds on to index - optimization not one segment On 6/19/07, Beard, Brian

RE: MultiSearcher holds on to index - optimization not one segment

2007-06-19 Thread Beard, Brian
: Tuesday, June 19, 2007 9:06 AM To: java-user@lucene.apache.org Subject: Re: MultiSearcher holds on to index - optimization not one segment On 6/19/07, Beard, Brian <[EMAIL PROTECTED]> wrote: > The problem I'm having is once the MultiSearcher is open, it holds on to > t

MultiSearcher holds on to index - optimization not one segment

2007-06-19 Thread Beard, Brian
ing before. Any ideas are appreciated. Brian Beard

index integrity detection in lucene 2.1.0?

2007-05-16 Thread Beard, Brian
I noticed in previous discussion about some index integrity detection classes that were around in version 1.4 (NoOpDirectory or NullDirectory). Does anyone know if this in the 2.1.0 release? I didn't see in 2.1.0 or the contrib folders. Brian

copying fields between documents in different indexes

2007-02-16 Thread Brian Whitman
Using the lucene API, is there a way to copy the contents and parameters of fields between documents in different indexes? Without requiring the field to be stored or needing to pass around the fulltext contents of the field. I guess I am looking for doc.add(new Field("contentsNew", copyFr

Re: searching by field's TF vector (not MoreLikeThis)

2007-02-03 Thread Brian Whitman
On Feb 1, 2007, at 7:13 PM, Brian Whitman wrote: I'm looking for a way to search by a field's internal TF vector representation. MoreLikeThis does not seem to be what I want-- it constructs a text query based on the top scoring TF-IDF terms. I want to query by TF vecto

searching by field's TF vector (not MoreLikeThis)

2007-02-01 Thread Brian Whitman
I'm looking for a way to search by a field's internal TF vector representation. MoreLikeThis does not seem to be what I want-- it constructs a text query based on the top scoring TF-IDF terms. I want to query by TF vector directly, bypassing the tokens. Lucene understandably has knowledge
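
For context, the per-document term-frequency vector is only available if the field was indexed with term vectors enabled, and it is read like this in the 2.x/3.x API (field name is illustrative):

    import java.io.IOException;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.index.TermFreqVector;

    public class ReadTermVector {
        // Requires the field to have been indexed with Field.TermVector.YES (or richer).
        public static void dump(IndexReader reader, int docId) throws IOException {
            TermFreqVector tfv = reader.getTermFreqVector(docId, "contents");
            if (tfv == null) {
                return;   // no term vector stored for this field/document
            }
            String[] terms = tfv.getTerms();
            int[] freqs = tfv.getTermFrequencies();
            for (int i = 0; i < terms.length; i++) {
                System.out.println(terms[i] + " -> " + freqs[i]);
            }
        }
    }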

Conflicts with Stemming and Wildcard / Prefix Queries

2006-06-23 Thread Brian Caruso
ide protected org.apache.lucene.search.Query getWildcardQuery(String field, String termStr) throws ParseException { return super.getWildcardQuery(getUnstemmed(field), termStr); } } -- Brian Caruso Programmer/Analyst Albert R. Mann Library Corn

Multi Search vs reader?

2006-03-21 Thread Brian
er. What's the difference, and what's the benefit of one over the other?? Thanks, Brian

Re: Possible bug in FieldSortedHitQueue?

2006-03-17 Thread Brian Riddle
Hej Paul, Then, if no comparator is found in the cache, a new one is created (line > 193) and then stored in the cache (line 202). HOWEVER, both the cache > lookup() and store() do NOT take into account locale; if we, on the same > index reader, try to do one search sorted by Locale.FRENCH and one

Re: FunctionQuery example request

2006-03-15 Thread Brian Riddle
lucene. In our environment we found that the best increase was by upgrading to lucene-1.9.1. A little more info can be found here http://www.lucenebook.com/blog/errata/ /Brian

MultiSearch

2006-03-15 Thread Brian
Hello Everyone, I currently have an IndexSearch working great! What I want to do now is move to a multi-index search. What's the best way to go about it? Is it a simple process? Any thoughts would be appreciated. Thanks, B

Re: File Name Search

2006-03-06 Thread Brian
ery 3 days for every > workstation. > > For assessing network files I'm using JCIFS > (jcifs.samba.org) > > Questions? > > Brian wrote: > > Quick Question, > > Is it possible to create an index & search > based > > on file names? > &

Re: File Name Search

2006-03-06 Thread Brian
Cool, Basically I have something similar to: name_division.date_order_code So I'm guessing I need to tokenize. Thanks, B --- Erik Hatcher <[EMAIL PROTECTED]> wrote: > On Mar 6, 2006, at 8:07 AM, Brian wrote: > > Quick Question, > > Is it possible to create an ind

File Name Search

2006-03-06 Thread Brian
Quick Question, Is it possible to create an index & search based on file names? Thanks, B

Re: UpdateIndex

2005-08-24 Thread Brian
Would you want to update, or could you just append to an existing Index? Thanks, B --- Ray Tsang <[EMAIL PROTECTED]> wrote: > This could be off topic, but I made something that > updates indices > that worked like the following, wonder if anybody > has the same ideas? > I found something like In

Re: results link

2005-06-22 Thread Brian
) B --- Erik Hatcher <[EMAIL PROTECTED]> wrote: > > On Jun 22, 2005, at 8:13 AM, Brian wrote: > > All, > > I've been able to create an index across the > > network. However when I do my search, the link I'm > > trying to generate show's

results link

2005-06-22 Thread Brian
All, I've been able to create an index across the network. However when I do my search, the link I'm trying to generate shows null. What it actually points to is http://localhost/mywebapp Where should I be looking in order to have my link generated correctly. Thanks in

Using the Searcher.java sample in a web application

2005-06-21 Thread Brian
pointers. I'm new to java and lucene. Thanks in advance. Brian

RE: IndexHTML.java location

2005-06-17 Thread Brian
mples), or the URL of the lucene > application as seen > through Tomcat? > > -Original Message- > From: Brian [mailto:[EMAIL PROTECTED] > Sent: 17 June 2005 16:48 > To: java-user@lucene.apache.org > Subject: IndexHTML.java location > > > Not sure if this is

IndexHTML.java location

2005-06-17 Thread Brian
Not sure if this is the right address, to request this kind of help, so if it isn't please point me else where. Basically I think I have an understanding of how Lucene works, in general. I believe I'm at a point where I need to change the "default" url, so I was planning to make the change in the

Re: lucene question, examples

2005-03-08 Thread Brian Cuttler
for much java, at least not at my site. > : > : I'll have a look and let you know, just for the record, how things > : turn out. > > > -Hoss

Re: lucene question, examples

2005-03-04 Thread Brian Cuttler
Otis, On Fri, Mar 04, 2005 at 11:31:22AM -0500, Brian Cuttler wrote: > > Otis, > > > If by shtml you mean HTML with server-side includes, then note that you > > will not be able to do this with Lucene alone, as server-side includes > > are not static. > > My un

Re: lucene question, examples

2005-03-04 Thread Brian Cuttler
recommend to her ? thank you, Brian On Thu, Mar 03, 2005 at 02:40:41PM -0800, Otis Gospodnetic wrote: > Brian, > > It sounds like you are using a little demo application that comes with > Lu

Re: lucene question, examples

2005-03-04 Thread Brian Cuttler
thank you, Brian On Thu, Mar 03, 2005 at 02:40:41PM -0800, Otis Gospodnetic wrote: > Brian, > > It sounds like you are using a little demo application that comes with > Lucene. This is really just a demo that shows how you can use Lucene.

lucene question, examples

2005-03-03 Thread Brian Cuttler
imply not understood) the docs that might give the options we are hoping to implement. Thank you in advance, Brian