admin-extra allows one to include additional links and/or information in
the Solr admin main page:
https://cwiki.apache.org/confluence/display/solr/Core-Specific+Tools
Bill
On Wed, Oct 7, 2015 at 5:40 PM, Upayavira wrote:
> Do you use admin-extra within the admin UI?
>
> If so, please go to [1
Is core swapping supported in SolrCloud? If I have a 5-node SolrCloud
cluster and I do a core swap on the leader, will the core be swapped on the
other 4 nodes as well? Or do I need to do a core swap on each node?
Bill
What exactly do you mean by nested objects in Solr? It would help if you
give an example. The Solr schema is flat as far as I know.
Bill
On Fri, Jul 24, 2015 at 9:24 AM, Rajesh
wrote:
> You can use nested entities like below.
>
>
> query="SELECT * FROM User">
>
One of my database columns is a varchar containing a comma-delimited list of
values. I would like to import these values into a multiValued field. I
figure that I will need to write a ScriptTransformer to do that. Is there
a better way?
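For what it's worth, here is a rough sketch of a custom DIH transformer in Java that would do the splitting (class name and column name are made up, not from my actual config):

package com.example.dih;

import java.util.Arrays;
import java.util.Map;

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.Transformer;

// Hypothetical transformer: splits a comma-delimited column into a List so that
// DIH feeds it to Solr as multiple values for a multiValued field.
public class CommaSplitTransformer extends Transformer {
  @Override
  public Object transformRow(Map<String, Object> row, Context context) {
    Object raw = row.get("tags");  // "tags" is a hypothetical column name
    if (raw instanceof String) {
      row.put("tags", Arrays.asList(((String) raw).split("\\s*,\\s*")));
    }
    return row;
  }
}

If I remember correctly, the built-in RegexTransformer with a splitBy attribute on the field can do the same thing without any custom code, so that may be the better way.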
Bill
On Sat, May 9, 2015 at 11:37 PM, Shawn Heisey wrote:
> On 5/9/2015 8:41 PM, Bill Au wrote:
> > Is the behavior of documents being indexed independently on each node in a
> > SolrCloud cluster new in 5.x or is that true in 4.x also?
> >
> > If the document is indexed indep
Is the behavior of documents being indexed independently on each node in a
SolrCloud cluster new in 5.x or is that true in 4.x also?
If the document is indexed independently on each node, then if I query the
document from each node directly, a timestamp could hold different values
since the documen
I have a timestamp field in my schema to track when each doc was indexed:
Recently, we have switched over to using atomic updates instead of re-indexing
when we need to update a doc in the index. It looks to me like the
timestamp field is not updated during an atomic update. I have also looked
in
es are present on the same add because I haven't looked at that
> code since 4.0, but I think I could make a case for retaining salary or for
> discarding it. That by itself reeks--and it's also not well documented.
> Relying on iffy, poorly-documented behavior is asking for pai
and want to apply them to a
> document without affecting the other fields. A regular add will replace an
> existing document completely. AFAIK Solr will let you mix atomic updates
> with regular field values, but I don't think it's a good idea.
>
> Steve
>
> On Jul 8,
Solr atomic update allows for changing just one or more fields of a
document without having to re-index the entire document. But what about
the case where I am sending in the entire document? In that case the whole
document will be re-indexed anyway, right? So I assume that there will be
no savi
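For reference, this is roughly what an atomic update looks like from SolrJ (URL, core, and field names are made up):

import java.util.Collections;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdateExample {
  public static void main(String[] args) throws Exception {
    SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build();
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc1");
    // The "set" modifier makes this an atomic update: only this field is replaced,
    // the rest of the document is reconstructed by Solr from stored fields.
    doc.addField("price", Collections.singletonMap("set", 19.99));
    solr.add(doc);
    solr.commit();
    solr.close();
  }
}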
PM, Bill Au wrote:
>
>> But when I use XML include, the Entity pull-down in the Dataimport section
>> of the Solr admin UI is empty. I know that happens when there is a syntax
>> error in solr-data-config.xml. Does DIH support XML include? Also I am
>> not seeing any
I am trying to simplify my Solr DIH configuration by using XML schema
include element. Here is an example:
]>
&dataSource;
&entity1;
&entity2;
I know my included XML files are good because if I put them all into a
single XML file, DIH works as expected.
But
yViaFullImport ?
>
> James Dyer
> Ingram Content Group
> (615) 213-4311
>
>
> -Original Message-
> From: Bill Au [mailto:bill.w...@gmail.com]
> Sent: Tuesday, October 08, 2013 8:50 AM
> To: solr-user@lucene.apache.org
> Subject: Re: problem with data impo
; Personal website: http://www.outerthoughts.com/
> LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
> - Time is the quality of nature that keeps events from happening all at
> once. Lately, it doesn't seem to be working. (Anonymous - via GTD book)
>
>
> On Sun, Oct 6, 20
Here is my DIH config:
I am having trouble with delta import. I think it is because the main
entity and the sub-entity use different data sources. I have tried using
both a delta query:
deltaQuery=
https://issues.apache.org/jira/browse/SOLR-4978
On Sat, Jun 29, 2013 at 2:33 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Yes we need to use getTimestamp instead of getDate. Please create an issue.
>
> On Sat, Jun 29, 2013 at 11:48 PM, Bill Au wrote:
should not be calling ResultSet.getDate() for a solr date field. It
should really be calling ResultSet.getTimestamp() instead. Is the fix this
simple? Am I missing anything?
If the fix is this simple I can submit and commit a patch to DIH.
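A small JDBC example of the difference (connection details, table, and column name are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TimestampVsDate {
  public static void main(String[] args) throws Exception {
    Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/test", "user", "password");
    Statement st = conn.createStatement();
    ResultSet rs = st.executeQuery("SELECT modified FROM docs");
    while (rs.next()) {
      // getDate() returns java.sql.Date, which carries no time-of-day,
      // so a DATETIME column comes back truncated to midnight.
      System.out.println("getDate():      " + rs.getDate("modified"));
      // getTimestamp() returns java.sql.Timestamp and preserves the time portion.
      System.out.println("getTimestamp(): " + rs.getTimestamp("modified"));
    }
    rs.close();
    st.close();
    conn.close();
  }
}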
Bill
On Sat, Jun 29, 2013 at 12:13 PM, Bill Au wrote
false and deal with the other column types that are not working now.
Thanks for your help.
Bill
On Sat, Jun 29, 2013 at 10:24 AM, Bill Au wrote:
> I just double check my config. We are using convertType=true. Someone
> else came up with the config so I am not sure why we are using it.
unless you are using convertType="true".
>
> Is MySQL actually returning java.sql.Timestamp objects?
>
> On Sat, Jun 29, 2013 at 5:22 AM, Bill Au wrote:
> > I am running Solr 4.3.0, using DIH to import data from MySQL. I am
> running
> > into a very strange pro
I am running Solr 4.3.0, using DIH to import data from MySQL. I am running
into a very strange problem where data from a datetime column is being
imported with the right date but the time is 00:00:00. I tried using SQL
DATE_FORMAT() and also DIH DateFormatTransformer but nothing works. The
raw debu
When using SolrCloud, is it possible to exclude certain files in the conf
directory from being loaded into Zookeeper?
We are keeping our own solr related config files in the conf directory that
is actually different for each node. Right now the copy in Zookeeper is
overriding the local copy.
Bil
> It's fairly meaningless from a user perspective, but it happens when an
> index is replicated that cannot be simply merged with the existing index
> files and needs a new directory.
>
> - Mark
>
> On May 15, 2013, at 5:38 PM, Bill Au wrote:
>
> > I am runnin
I am running 2 separate 4.3 SolrCloud clusters. On one of them I noticed
the file data/index.properties on the replica nodes where the index
directory is named "index.".
On the other cluster, the index directory is just named "index".
Under what condition is index.properties created? I am tryin
We are using SolrCloud for replication and dynamic scaling but not
distribution so we are only using a single shard. From time to time we
make changes to the index schema that requires rebuilding of the index.
Should I treat the rebuilding as just any other index operation? It seems
to me it wou
Thanks.
Now I have to go back and re-read the entire SolrCloud Wiki to see what
other info I missed and/or forgot.
Bill
On Thu, Mar 28, 2013 at 12:48 PM, Chris Hostetter
wrote:
>
> : Can I use a single ZooKeeper ensemble for multiple SolrCloud clusters or
> : would each SolrCloud cluster requi
Can I use a single ZooKeeper ensemble for multiple SolrCloud clusters or
would each SolrCloud cluster require its own ZooKeeper ensemble?
Bill
to whatever node that you've
> specified directly for that initial request.
>
> Erik
>
> p.s. Thanks for attending the webinar, Bill! I saw your name as one of
> the question askers. Hopefully all that stuff I made up is close to the
> truth :)
>
>
>
> On
I am running Solr 4.1. I have set up SolrCloud with 1 leader and 3
replicas, 4 nodes total. Do query requests sent to a node only query the
replica on that node, or are they load-balanced across the entire cluster?
Bill
The "Upgrading from Solr 4.1.0" section of the 4.2.0 CHANGES.txt says:
"(No upgrade instructions yet)"
To me that's not the same as no need to do anything. I think the doc
should be updated with either specific instructions or state that 4.2.0 is
backward compatible with 4.1.0 so there is no need to
Never mind. I just realized the difference between the two. Sorry for the
noise.
Bill
On Thu, Feb 21, 2013 at 8:42 AM, Bill Au wrote:
> There have been requests for supporting multiple facet.prefix for the same
> facet.field. There is an open JIRA with a patch:
>
There have been requests for supporting multiple facet.prefix for the same
facet.field. There is an open JIRA with a patch:
https://issues.apache.org/jira/browse/SOLR-1351
Wouldn't using multiple facet.query achieve the same result? I mean
something like:
facet.query=lastName:A*&facet.query=la
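Here is a sketch of what I mean with SolrJ (URL and field name are made up; the thread predates the current client API):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class MultiPrefixFacetExample {
  public static void main(String[] args) throws Exception {
    HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build();
    SolrQuery q = new SolrQuery("*:*");
    q.setFacet(true);
    // Two facet.query clauses standing in for two facet.prefix values on the same field
    q.addFacetQuery("lastName:A*");
    q.addFacetQuery("lastName:B*");
    QueryResponse rsp = solr.query(q);
    rsp.getFacetQuery().forEach((fq, count) -> System.out.println(fq + " => " + count));
    solr.close();
  }
}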
u'd like prob. Might be worth a JIRA issue to look
> at further options.
>
> - Mark
>
> On Jan 3, 2013, at 5:54 PM, Bill Au wrote:
>
> > Thanks, Mark.
> >
> > That does remove the node. And it seems to do so permanently. Even
> when I
> > restart S
AWS
auto scaling adds a new node, I need to make sure it has become active
before I enable it in the load balancer.
Bill
On Thu, Jan 3, 2013 at 9:10 AM, Mark Miller wrote:
>
> http://wiki.apache.org/solr/CoreAdmin#UNLOAD
>
> - Mark
>
> On Jan 3, 2013, at 9:06 AM, Bill Au
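For the record, the UNLOAD command mentioned above can also be issued from SolrJ, roughly like this (URL and core name are made up; a recent SolrJ API is assumed):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class UnloadCoreExample {
  public static void main(String[] args) throws Exception {
    SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build();
    // Sends the CoreAdmin UNLOAD command for the named core on that node
    CoreAdminRequest.unloadCore("collection1", client);
    client.close();
  }
}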
all updates to the shard leader directly because I
want to minimize traffic between nodes during search and update
Bill
On Wed, Jan 2, 2013 at 6:47 PM, Mark Miller wrote:
>
> On Jan 2, 2013, at 5:51 PM, Bill Au wrote:
>
> > Is anyone running Solr 4.0 SolrCloud with AWS auto scali
t to me is automatic removal of wrong nodes that ends up in
> data loss or insufficient number of replicas.
>
> But if somebody has done thing and has written up a how-to, I'd love to see
> it!
>
> Otis
> --
> Solr & ElasticSearch Support
> http://sematext
If your exact search returns more than one result, then by default they are
sorted by the score.
Bill
On Thu, Dec 13, 2012 at 11:41 PM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Hi
>
> If you are doing a pure boolean search - something matches or doesn't match
> and you don't care
You need to configure and start Solr independently of any client you use.
Bill
On Fri, Dec 14, 2012 at 2:23 AM, Romita Saha
wrote:
> Hi,
>
> Can anyone please guide me to use SolrPhpClient? The documents available
> are not clear. As to where to place SolrPhpClient?
>
> I have downloaded SolrPhpC
need for it, but it's considered required
> these days.
>
> - Mark
>
> On Dec 7, 2012, at 11:04 AM, Bill Au wrote:
>
> > I actually was not using a solr.xml. I am only using a single core. I
> am
> > using the default core name collection1. I know for sure I wi
.@uci.cu> wrote:
> Any news on Solarium Project? Is the one I'm using with Solr 3.6!
>
> - Mensaje original -
> De: "Bill Au"
> Para: solr-user@lucene.apache.org, "Arkadi Colson"
> Enviados: Viernes, 7 de Diciembre 2012 13:40:20
> Asunto: Re
to find what ports it's
> running on - you can only do it when a request actually comes in to my
> knowledge.
>
> - Mark
>
> On Dec 5, 2012, at 1:05 PM, Bill Au wrote:
>
> > I am using tomcat. In my tomcat start script I have tried setting system
> > properties
> Thanks for the info!
>
> Do you know if it'spossible to use file uploads to Tika with this client?
>
>
> On 12/03/2012 03:56 PM, Bill Au wrote:
>
>> https://bugs.php.net/bug.php?id=62332
>>
>> There is a fork with
in a container and the container controls the port. So, you need
> to tell the container which port to use.
>
> For example,
>
> java -Djetty.port=8180 -jar start.jar
>
> -- Jack Krupansky
>
> -----Original Message- From: Bill Au
> Sent: Wednesday, December
https://bugs.php.net/bug.php?id=62332
There is a fork with patches applied.
On Mon, Dec 3, 2012 at 9:38 AM, Arkadi Colson wrote:
> Hi
>
> Anyone tested the pecl Solr Client in combination with SolrCloud? It seems
> to be broken since 4.0
>
> Best regard
> Arkadi
>
>
Yes, my original question is about search. And Mark did answer it in his
original reply. I am guessing that the replicas are updated sequentially
so the newly added documents will be available in some replicas before
others. I want to know where SolrCloud stands in terms of CAP.
Bill
On Thu,
st won't
> return until it's affected all replicas. Low latency eventual consistency.
>
> - Mark
>
> On Nov 14, 2012, at 5:47 PM, Bill Au wrote:
>
> > Will a newly indexed document included in search result in the shard
> leader
> > as soon as it has been i
My replicas are actually on different machines so they do come up. The
problem I found is that since they can't get the leader they just come up
but are not part of the cluster. I can still do local searches with
distrib=false. They do not retry to get the leader so I have to restart
them after t
ard leader (you save one internal request). If not,
> round-robin should be fine.
>
> Tomás
>
> On Fri, Oct 26, 2012 at 12:27 PM, Bill Au wrote:
>
> > I am thinking of using a load balancer for both indexing and querying to
> > spread both the indexing and querying loa
ing up a second core and not
> > specifying numShards."
> >
> > Erick
> >
> > On Fri, Oct 26, 2012 at 10:14 AM, Bill Au wrote:
> > > I am currently using one master with multiple slaves so I do have high
> > > availability for searching now.
<
tomasflo...@gmail.com> wrote:
> It also provides high availability for indexing and searching.
>
> On Thu, Oct 25, 2012 at 4:43 PM, Bill Au wrote:
>
> > So I guess one would use SolrCloud for the same reasons as distributed
> > search:
> >
> > When an index b
So I guess one would use SolrCloud for the same reasons as distributed
search:
When an index becomes too large to fit on a single system, or when a single
query takes too long to execute.
Bill
On Thu, Oct 25, 2012 at 3:38 PM, Shawn Heisey wrote:
> On 10/25/2012 1:29 PM, Bill Au wrote:
>
Sorry, I had copied/pasted the wrong link before. Here is the correct one:
https://issues.apache.org/jira/browse/SOLR-3986
Bill
On Wed, Oct 24, 2012 at 10:26 AM, Bill Au wrote:
> I just filed a bug with all the details:
>
> https://issues.apache.org/jira/browse/SOLR-3681
>
> Bi
I just filed a bug with all the details:
https://issues.apache.org/jira/browse/SOLR-3681
Bill
On Tue, Oct 23, 2012 at 2:47 PM, Chris Hostetter
wrote:
> : Just discovered that the replication admin REST API reports the correct
> : index version and generation:
> :
> : http://master_host:port/sol
> automatic update but I assume you did that.
>
> Best
> Erick
>
> On Thu, Oct 18, 2012 at 1:59 PM, Bill Au wrote:
> > Just discovered that the replication admin REST API reports the correct
> > index version and generation:
> >
> > http://master_host:port/sol
Just discovered that the replication admin REST API reports the correct
index version and generation:
http://master_host:port/solr/replication?command=indexversion
So is this a bug in the admin UI?
Bill
On Thu, Oct 18, 2012 at 11:34 AM, Bill Au wrote:
> I just upgraded to Solr 4.0.0.
I just upgraded to Solr 4.0.0. I noticed that after a delete by query, the
index version, generation, and size remain unchanged on the master even
though the documents have been deleted (num docs changed and those deleted
documents no longer show up in query responses). But on the slave both the
Taking a thread dump will tell you what's going on.
Bill
On Wed, Jun 1, 2011 at 3:04 PM, Chris Cowan wrote:
> About once a day a Solr/Jetty process gets hung on my server consuming 100%
> of one of the CPU's. Once this happens the server no longer responds to
> requests. I've looked through the log
ting that limit,
causing an actual call to java.lang.String.intern().
I think I need to reduce the number of fields in my index. Are there any other
things I can do to help in this case?
Bill
On Wed, May 25, 2011 at 11:28 AM, Bill Au wrote:
> I am taking a snapshot after every commit. From looking at t
thresholds, segments are merged which can
> take quite some time. How are you triggering commits? If it's external,
> think about using auto commit instead.
>
> Best
> Erick
> On May 20, 2011 6:04 PM, "Bill Au" wrote:
> > On my Solr 1.4.1 master I am doing comm
's probably
> > happening. When you pass certain thresholds, segments are merged which
> can
> > take quite some time. How are you triggering commits? If it's external,
> > think about using auto commit instead.
> >
> > Best
> > Erick
> >
On my Solr 1.4.1 master I am doing commits regularly at a fixed interval. I
noticed that from time to time commit will take longer than the commit
interval, causing commits to overlap. Then things will get worse as commit
will take longer and longer. Here is the logs for a long commit:
[2011-0
Besides frequency, you should also look at the duration of GC events. You may
want to try the concurrent garbage collector if you see many long full GCs.
Bill
2010/8/25 Chengyang
> We have about 500 million documents indexed. The index size is about 10G.
> Running on a 32-bit box. During the press
It would be very useful if you can take a thread dump while Solr is
hanging. That will give an indication of where/why Solr is hanging.
Bill
On Mon, Aug 23, 2010 at 9:32 PM, Manepalli, Kalyan <
kalyan.manepa...@orbitz.com> wrote:
> Hi all,
> I am facing a peculiar problem with Solr querying. D
It would be helpful if you can attach a thread dump.
Bill
On Mon, Aug 23, 2010 at 6:00 PM, AlexxelA wrote:
>
> Hi,
>
> I'm running solr 1.3 in production for 1 year now and I never had any
> problem with it until 2 weeks ago. It happens 6-7 times a day; all of my threads
> but one are in a blocked
You can use the defType param in the boost local params to use a different
query parser. Here is an example for using dismax:
{!boost b=log(popularity) defType=dismax}foo
I do this with a custom handler that I have implemented for my app.
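For example, with SolrJ the whole local-params string simply goes into q (URL and field names are made up):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class BoostDismaxExample {
  public static void main(String[] args) throws Exception {
    HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build();
    SolrQuery q = new SolrQuery();
    // The boost parser wraps the user query; defType selects the parser for the wrapped part
    q.setQuery("{!boost b=log(popularity) defType=dismax}foo");
    System.out.println(solr.query(q).getResults().getNumFound());
    solr.close();
  }
}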
Bill
On Wed, Jun 9, 2010 at 11:37 PM, Andy wrote:
> I w
If yes, can you please give me a link or something
> where I can get more info on this?
>
> Thanks,
> Moazzam
>
>
>
> On Fri, May 28, 2010 at 11:50 AM, Bill Au wrote:
> > You can keep different type of documents in the same index. If each
> > document has a typ
You can keep different types of documents in the same index. If each
document has a type field, you can restrict your searches to specific
type(s) of document by using a filter query, which is very fast and
efficient.
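A minimal SolrJ sketch of that approach (URL, field, and values are made up):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class TypeFilterExample {
  public static void main(String[] args) throws Exception {
    HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build();
    SolrQuery q = new SolrQuery("title:lucene");
    // The filter query restricts results to one document type and is cached separately,
    // so it does not affect scoring and is cheap to reuse across queries.
    q.addFilterQuery("type:article");
    System.out.println(solr.query(q).getResults().getNumFound());
    solr.close();
  }
}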
Bill
On Fri, May 28, 2010 at 12:28 PM, Nagelberg, Kallin <
knagelb...@globeand
I never said they weren't.
Bill
On Tue, Apr 20, 2010 at 5:54 PM, Abdelhamid ABID wrote:
> Which are JEE Web components, aren't they?
>
> On 4/20/10, Bill Au wrote:
> >
> > Solr only uses Servlet and JSP.
> >
> >
> > Bill
> >
> >
Solr only uses Servlet and JSP.
Bill
On Sat, Apr 17, 2010 at 9:11 AM, Abdelhamid ABID wrote:
> Solr does use JEE WEB components
>
> On 4/17/10, Lukáš Vlček wrote:
> >
> > Hi,
> >
> > may be you should be aware that JBoss AS is using Tomcat for web
> container
> > (with modified classloader), s
Are you sure you are using a new Searcher that is created after the
replicated index has been installed?
Bill
On Tue, Apr 13, 2010 at 1:00 PM, Jason Rutherglen <
jason.rutherg...@gmail.com> wrote:
> Maybe there's a known bug here? The index is replicated to the index
> directory in /data, howev
What's the exact command you used to run snappuller and snapinstaller? What
do you mean by
"I have set it as a option when Solr starts"
Bill
On Tue, Apr 13, 2010 at 1:01 PM, william pink wrote:
> On Mon, Apr 12, 2010 at 7:02 PM, Bill Au wrote:
>
> > The lines you
The lines you have enclosed are commented out by the
Bill
On Mon, Apr 12, 2010 at 1:32 PM, william pink wrote:
> Hi,
>
> I am running Solr 1.2 ( I will be updating in due course)
>
> I am having a few issues with doing the snapshots after a postCommit or
> postOptimize neither appear to work i
Take a heap dump and use jhat to find out for sure.
Bill
On Mon, Mar 29, 2010 at 1:03 PM, Siddhant Goel wrote:
> Gentle bounce
>
> On Sun, Mar 28, 2010 at 11:31 AM, Siddhant Goel >wrote:
>
> > Hi everyone,
> >
> > The output of "jmap -histo:live 27959 | head -30" is something like the
> > follo
Have you started rsyncd on the master? Make sure that it is enabled before
you start:
http://wiki.apache.org/solr/SolrCollectionDistributionOperationsOutline
You can also try running snappuller with the -V option to get more
debugging info.
Bill
On Wed, Mar 10, 2010 at 4:09 PM, Lars R. Noldan
Lucene 2.9.1 comes with a PayloadTermQuery:
http://lucene.apache.org/java/2_9_1/api/all/org/apache/lucene/search/payloads/PayloadTermQuery.html
I have been using that to use the payload as part of the score without any
problem.
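Roughly what that looks like in code (field and term are made up; the field's analyzer is assumed to attach payloads at index time):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.payloads.AveragePayloadFunction;
import org.apache.lucene.search.payloads.PayloadTermQuery;

public class PayloadQueryExample {
  // Builds a query whose score folds in the average payload value of matching terms.
  public static PayloadTermQuery build() {
    return new PayloadTermQuery(new Term("keywords", "solr"), new AveragePayloadFunction());
  }
}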
Bill
On Tue, Dec 15, 2009 at 6:31 AM, Raghuveer Kancherla <
raghuve
I had opened a Jira and submitted a patch for this:
https://issues.apache.org/jira/browse/SOLR-1545
Bill
On Thu, Dec 3, 2009 at 7:47 AM, Sascha Szott wrote:
> Hi Folks,
>
> is there any way to instruct MoreLikeThisHandler to sort results? I was
> wondering that MLTHandler recognizes faceting par
Thanks for pointing that out. The TermsComponent prefix query is running
much faster than the facet prefix query. I guess there is yet another
reason to optimize the index.
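For comparison, this is roughly how I issue the TermsComponent prefix query from SolrJ (URL, handler path, field, and prefix are made up):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.TermsResponse;

public class TermsPrefixExample {
  public static void main(String[] args) throws Exception {
    HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build();
    SolrQuery q = new SolrQuery();
    q.setRequestHandler("/terms");   // handler with the TermsComponent configured
    q.setTerms(true);
    q.addTermsField("lastName");
    q.setTermsPrefix("xx");
    q.setTermsLimit(20);
    TermsResponse terms = solr.query(q).getTermsResponse();
    for (TermsResponse.Term t : terms.getTerms("lastName")) {
      System.out.println(t.getTerm() + " => " + t.getFrequency());
    }
    solr.close();
  }
}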
Bill
On Tue, Nov 3, 2009 at 5:09 PM, Koji Sekiguchi wrote:
> Bill Au wrote:
>
>> Should the re
happen if I split and catenate at query time?
Bill
On Tue, Oct 20, 2009 at 8:09 PM, Yonik Seeley wrote:
> On Tue, Oct 20, 2009 at 6:37 PM, Bill Au wrote:
> > I have a question regarding the use of the WordDelimiterFilter in the
> text
> > field in the example schema.xml. The pa
I am having problem with using the ShingleFIlter. My test document is "the
quick brown fox jumps over the lazy dog". My query is "my quick brown".
Since both have the term "quick brown" at term position 2, the query should
match the test document, right? But my query is not returning anything.
I have a question regarding the use of the WordDelimiterFilter in the text
field in the example schema.xml. The parameters are set differently for the
indexing and querying. Namely, catenateWords and catenateNumbers are set
differently. Shouldn't the same analysis be done at both index and query
Thanks for the info. Just want to make sure that I am on the right track
before I go too deep.
Bill
2009/10/12 Noble Paul നോബിള് नोब्ळ्
> A custom UpdateRequestProcessor is the solution. You can access the
> searcher in a UpdateRequestProcessor.
>
> On Tue, Oct 13, 2009 at 4:20
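Here is a rough sketch of the kind of processor I have in mind (package, class, and field names are hypothetical; written against a recent Solr, so package names may differ slightly in older versions; the factory passes the request through so the searcher is reachable):

import java.io.IOException;

import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.search.SolrIndexSearcher;
import org.apache.solr.update.DeleteUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

public class ReferenceUpdateProcessorFactory extends UpdateRequestProcessorFactory {
  @Override
  public UpdateRequestProcessor getInstance(SolrQueryRequest req, SolrQueryResponse rsp,
                                            UpdateRequestProcessor next) {
    return new UpdateRequestProcessor(next) {
      @Override
      public void processDelete(DeleteUpdateCommand cmd) throws IOException {
        // The live searcher is available from the request, so referencing
        // documents can be looked up here before the delete continues.
        SolrIndexSearcher searcher = req.getSearcher();
        // ... query `searcher` for documents that reference cmd.getId(),
        // queue follow-up updates for them, then let the chain proceed ...
        super.processDelete(cmd);
      }
    };
  }
}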
It looks like there is a JIRA covering this:
https://issues.apache.org/jira/browse/SOLR-1387
On Mon, Oct 12, 2009 at 11:00 AM, Bill Au wrote:
> Is it possible to have two different facet.prefix on the same facet field
> in a single query? I want to get facet counts for two prefixes, &q
Is it possible to do searches from within an UpdateRequestProcessor? The
documents in my index reference each other. When a document is deleted, I
would like to update all documents containing a reference to the deleted
document. My initial idea is to use a custom UpdateRequestProcessor. Is
the
Is it possible to have two different facet.prefix on the same facet field in
a single query? I want to get facet counts for two prefixes, "xx" and "yy". I
tried using two facet.prefix parameters (i.e. &facet.prefix=xx&facet.prefix=yy) but the
second one seems to have no effect.
Bill
Have you looked at snapcleaner?
http://wiki.apache.org/solr/SolrCollectionDistributionScripts#snapcleaner
http://wiki.apache.org/solr/CollectionDistribution#snapcleaner
Bill
On Mon, Oct 5, 2009 at 4:58 PM, solr jay wrote:
> Is there a reliable way to safely clean up index directories? This is
't pay attention to how
> ambiguously I was using "supported" there :)
>
> Bill Au wrote:
> > SUN has recently clarified the issue regarding "unsupported unless you pay"
> > for the G1 garbage collector. Here is the updated release of Java 6
> update
> > 14
SUN has recently clarified the issue regarding "unsupported unless you pay"
for the G1 garbage collector. Here is the updated release of Java 6 update
14:
http://java.sun.com/javase/6/webnotes/6u14.html
G1 will be part of Java 7, fully supported without pay. The version
included in Java 6 update 1
Have you considered using facet counts for your tag cloud?
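Something along these lines, using SolrJ (URL and field name are made up):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.FacetField;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TagCloudFacetExample {
  public static void main(String[] args) throws Exception {
    HttpSolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build();
    SolrQuery q = new SolrQuery("*:*");
    q.setFacet(true);
    q.addFacetField("tags");   // field holding the tag terms
    q.setFacetLimit(50);       // top 50 terms, already sorted by count
    q.setFacetMinCount(1);
    QueryResponse rsp = solr.query(q);
    for (FacetField.Count c : rsp.getFacetField("tags").getValues()) {
      System.out.println(c.getName() + " => " + c.getCount());
    }
    solr.close();
  }
}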
Bill
On Fri, Oct 2, 2009 at 11:34 AM, wrote:
> Hello,
>
> I'm trying to create a tag cloud from a term vector, but the array
> returned (using JSON wt) is quite complex and takes an inordinate
> amount of time to process. Is there a bet
A snapshot is a copy of the index at a particular moment in time. So
changes in earlier snapshots are in the latest one as well. Nothing is
missed by pulling the latest snapshot.
When triggering snapshooter with the postCommit hook, a commit always
results in a snapshot being created.
Bill
On
You probably want to add the following command line option to java to
produce a heap dump:
-XX:+HeapDumpOnOutOfMemoryError
Then you can use jhat to see what's taking up all the space in the heap.
Bill
On Thu, Oct 1, 2009 at 11:47 AM, Mark Miller wrote:
> Jeff Newburn wrote:
> > I am trying to
lve this ugly bug. With the upgraded JVM I could run the solr
> > servers
> > > for more than 12 hours on the production environment with the GC
> > mentioned
> > > in the previous e-mails. The results are really amazing. The time spent
> > on
> > > coll
You are running a very old version of Java 6 (update 6). The latest is
update 16. You should definitely upgrade. There is a bug in Java 6
starting with update 4 that may result in a corrupted Lucene/Solr index:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6707044
https://issues.apache.org/
That's certainly something that is doable with a filter. I am not aware of
any available.
Bill
On Wed, Sep 16, 2009 at 10:39 AM, Jay Hill wrote:
> For security reasons (say I'm indexing very sensitive data, medical records
> for example) is there a way to encrypt data that is stored in Solr? S
For the standard query handler, try [* TO *].
Bill
On Mon, Sep 14, 2009 at 8:46 PM, Jay Hill wrote:
> With dismax you can use q.alt when the q param is missing:
> q.alt=*:*
> should work.
>
> -Jay
>
>
> On Mon, Sep 14, 2009 at 5:38 PM, Jonathan Vanasco
> wrote:
>
> > Thanks Jay & Matt
> >
> > I
Or you can just use facet counts.
Bill
On Thu, Aug 27, 2009 at 5:23 PM, AHMET ARSLAN wrote:
>
> > Hi all,How would I go about
> > implementing a 'tag cloud' with Solr1.3? All I
> > want to do is to display a list of most occurring terms in
> > the corpus. Is there an easy way to go about that i
You also need a requestHandler that uses your updateRequestProcessorChain:
custom
...
Bill
On Fri, Aug 28, 2009 at 11:44 AM, Mark Miller wrote:
> Erik Earle wrote:
> > I've read through the wiki for this and it explains most everything
> except where in the solr
ve the copyField strip off the payload while it is
copying since doing it in the analysis phase is too late? Or should I
start looking into using UpdateProcessors as Chris had suggested?
Bill
On Fri, Aug 21, 2009 at 12:04 PM, Bill Au wrote:
> I ended up not using an XML attribute for the pay
That's my gut feeling (start big and go lower if OOM occurs) too.
Bill
On Tue, Aug 25, 2009 at 5:34 PM, Edward Capriolo wrote:
> On Tue, Aug 25, 2009 at 5:29 PM, Bill Au wrote:
> > Just curious, how often do folks commit when building their Solr/Lucene
> > index from sc
Just curious, how often do folks commit when building their Solr/Lucene
index from scratch for an index with millions of documents? Should I just wait
and do a single commit at the end after adding all the documents to the
index?
Bill
I ended up not using an XML attribute for the payload since I need to return
the payload in the query response. So I ended up going with:
2.0|Solr In Action
My payload is numeric so I can pick a non-numeric delimiter (ie '|').
Putting the payload in front means I don't have to worry about the delimi
Lucene's default scoring formula gives shorter fields a higher score:
http://lucene.apache.org/java/2_4_0/api/org/apache/lucene/search/Similarity.html
Sounds like you want the opposite. You can write your own Similarity class
overriding the lengthNorm() method:
http://lucene.apache.org/java/2_4
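A rough sketch against the Lucene 2.x API linked above (the formula itself is just an illustration):

import org.apache.lucene.search.DefaultSimilarity;

// Hypothetical Similarity that favors longer fields instead of shorter ones.
public class LongerIsBetterSimilarity extends DefaultSimilarity {
  @Override
  public float lengthNorm(String fieldName, int numTokens) {
    // DefaultSimilarity returns 1/sqrt(numTokens), which boosts short fields;
    // this variant grows toward 1.0 as the field gets longer.
    return (float) (1.0 - 1.0 / Math.sqrt(numTokens + 1));
  }
}

It would then be registered with the <similarity> element in schema.xml.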