On Wed, 15 Oct 2008 11:11:07 -0700
Matthew Runo <[EMAIL PROTECTED]> wrote:
> We've been using Varnish (http://varnish.projects.linpro.no/) in front
> of our Solr servers, and have been seeing about a 70% hit rate for the
> queries. We're using SolrJ, and have seen no bad effects of the cache.
There are two ways to achieve this
1) single core: a single data-config.xml with multiple root entities. Add a
type attribute and put all the fields from all the tables into the
schema. This means that when you make a query you may need to add an
extra condition for the type
2) multiple cores (one per table)
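For option (1), a minimal sketch of the discriminator approach (the field and value names here are illustrative, not from the original message):

```xml
<!-- schema.xml: fields from all tables side by side, plus a "type" discriminator -->
<field name="id"   type="string" indexed="true" stored="true"/>
<field name="type" type="string" indexed="true" stored="true"/>
<field name="name" type="text"   indexed="true" stored="true"/>
```

Queries then carry the extra condition as a filter, e.g. `q=name:widget&fq=type:invoice`, so documents from the other tables are excluded.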
Hi all,
I'm testing out using the Tree Faceting Component (SOLR-792) on top of Solr 1.3.
It looks like it would do exactly what I want, but something is not working
correctly with my schema. When I use the example schema, it works just fine,
but not when I swap out the example schemas and example index
Perfect!
Jeremy Hinegardner wrote:
>
> On Wed, Oct 15, 2008 at 04:53:07PM -0700, dezok wrote:
>>
>> Is there a way to config the fields that are sent back directly in the
>> solrconfig.xml or schema.xml file?
>>
>> I don't really want to write my own queryResponseWriter.
>>
>> I know that "
On Wed, Oct 15, 2008 at 04:53:07PM -0700, dezok wrote:
>
> Is there a way to config the fields that are sent back directly in the
> solrconfig.xml or schema.xml file?
>
> I don't really want to write my own queryResponseWriter.
>
> I know that "&fl=id,name" works on the URL, but I don't want to
Is there a way to config the fields that are sent back directly in the
solrconfig.xml or schema.xml file?
I don't really want to write my own queryResponseWriter.
I know that "&fl=id,name" works on the URL, but I don't want to send that
all the time.
It would seem that:
would be a natural
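One way to avoid sending the field list on every request is a handler default in solrconfig.xml. This is a sketch, assuming the Solr 1.3 standard SearchHandler:

```xml
<!-- solrconfig.xml: make fl=id,name the default field list for this handler -->
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str name="fl">id,name</str>
  </lst>
</requestHandler>
```

Clients can still override the default by passing an explicit &fl= parameter on the URL.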
Neither Solr nor Lucene supports partial updates. "Update" means
"add or replace". --wunder
On 10/15/08 4:23 PM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> Hi,
> I've been trying to find a way to post partial updates, updating only
> some of the fields in a set of records, via POSTed XM
Hi,
I've been trying to find a way to post partial updates, updating only
some of the fields in a set of records, via POSTed XML messages to a solr
1.3.0 index. In the wiki (http://wiki.apache.org/solr/UpdateXmlMessages),
it almost seems like there's a special root tag which isn't
mentioned
As a 'workaround': instead of striping the available disks, would treating
them as N silos and merging the indices afterwards be an option?
Britske wrote:
>
> Hi,
>
> I understand that this may not be a 100% related question to the forum
> (perhaps it's more Lucene than Solr) but perhap
Hi,
I understand that this may not be a 100% related question to the forum
(perhaps it's more Lucene than Solr) but perhaps someone here has seen
similar things...
I'm experimenting on Amazon Ec2 with indexing a solr / lucene index on a
striped (Raid 0) partition.
While searching gives good b
Hi,
I have a Java program which maintains a Lucene index using the Lucene Java APIs.
I want to use Solr to provide a web-based read-only view of this index. I
require that the index not be locked in any way while Solr is using it (so the
Java program can continue updating it) and that Solr is a
Hi All,
We are using the latest Solr release, 1.3. Solr has been configured to use the
multicore feature. The index has been created successfully, and when using the
Solr Admin screen, the indexed document comes up correctly in the response. But
when I start looking for the index (.cfs) file in the folders, the file is not created.
Hi
We are using the Solr 1.3 synonym feature. There are different patterns in the
synonyms file under the Solr conf folder.
Can somebody explain the meaning of the different patterns?
PATTERN 1:
#some test synonym mappings unlikely to appear in real input text
aaa =>
bbb => 1 2
ccc => 1,2
PATTERN
: with the index being a few minutes stale as the TTL expires on the cache. I
: don't think solr has a way to, at query time, change the cache control
: headers.
SolrJ lets the HttpClient instance handle all network connections, so
specify whatever caching/proxy info you want to it, and then pass
1) strictly speaking there is no such thing as a NULL field value in Solr
-- there are fields that have a value and fields that don't and as
mentioned yourField:[* TO *] will give you all docs that have a value in
the yourField field.
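To make the two cases concrete, hedged query sketches (yourField is a placeholder for your actual field name):

```
q=yourField:[* TO *]          docs where yourField has a value
q=*:* -yourField:[* TO *]     docs where yourField has no value
```

Note that the negated form needs the *:* base query in the standard request handler, since a purely negative query matches nothing on its own.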
2) http://people.apache.org/~hossman/#threadhijack
Thread H
I honestly have no confidence that i understand what exactly you are
asking, or what problem you are having, or how you are currently using
snapshooter ... but i do know your question is about sudo, and it seems
like you are interested in when/why snapshooter would run sudo.
it will do this if
We've been using Varnish (http://varnish.projects.linpro.no/) in front
of our Solr servers, and have been seeing about a 70% hit rate for the
queries. We're using SolrJ, and have seen no bad effects of the cache.
That said, we're just caching everything for a few minutes. We don't
pick and
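A minimal sketch of the "just cache everything for a few minutes" policy in Varnish 2.x VCL (the TTL value is illustrative, not from the original message):

```
sub vcl_fetch {
    # cache every backend (Solr) response for two minutes
    set obj.ttl = 120s;
}
```

The trade-off named in the thread applies: queries may return results that are up to the TTL stale, in exchange for the high hit rate.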
The current Subversion trunk has the new Lucene 2.4.0 libraries
committed. So, it's definitely under way.
-Todd
-Original Message-
From: Julio Castillo [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 15, 2008 9:48 AM
To: solr-user@lucene.apache.org
Subject: Lucene 2.4 released
Any id
Any ideas when Solr 1.3 can be patched to use the official release of Lucene
(rather than a Lucene snapshot)?
Should I submit a JIRA request?
thanks
Julio Castillo
Edgenuity Inc.
Hi,
What is the proper behavior supposed to be between SolrJ and caching?
I'm proxying through a framework and wondering if it is possible to
turn caching on and off programmatically depending on the type of
query (or if this will have no effect whatsoever) ... since SolrJ uses
Apache HT
sunnyfr wrote:
> I tried last evening before leaving, and this morning the elapsed time was
> very long, as you can see above, and there was no snapshot and no error in
> the logs.
I'm actually having a similar trouble. I've enabled postCommit and
postOptimize hooks with an absolute path to snapshooter.
From the wiki, I cannot see that data-config.xml allows for multiple
documents.
I have a database with more than one type of document. Do I need a
multi-core Solr to do this?
For example: Invoices, Products, Customers, Sales Orders, Returns may all
be different documents in the database.
Has
Is it possible to do date math in a FunctionQuery? This doesn't work,
but I'm looking for something like:
bf=recip((NOW-updated),1,200,10) when using DisMax to get the elapsed
time between NOW and when the document was updated (where updated is a
Date field).
I know one can do rord(updated)
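As a hedged sketch for Solr 1.3, which has no date arithmetic in function queries, the usual workaround is the ordinal-based form from the example dismax configuration (the constants are illustrative and need tuning):

```
bf=recip(rord(updated),1,1000,1000)
```

rord(updated) is the reverse ordinal of the field value, so newer documents get a higher boost; it approximates recency rather than computing a true NOW - updated difference.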
I think you can do field:["" TO *] to grab everything that is not null.
-Mike
John E. McBride wrote:
Hello All,
I need to run a query which asks:
field = NOT NULL
should this perhaps be done with a filter?
I can't find out how to do NOT NULL from the documentation, would
appreciate any advi
try:
field:[* TO *]
On Oct 15, 2008, at 9:44 AM, John E. McBride wrote:
Hello All,
I need to run a query which asks:
field = NOT NULL
should this perhaps be done with a filter?
I can't find out how to do NOT NULL from the documentation, would
appreciate any advice.
Thanks,
John
Hello All,
I need to run a query which asks:
field = NOT NULL
should this perhaps be done with a filter?
I can't find out how to do NOT NULL from the documentation, would
appreciate any advice.
Thanks,
John
Hi,
My snapshooter stays stuck; how can I remove this sudo -u?
It doesn't work automatically, but it does manually: ./bin/snapshooter -V
root 25486 0.0 0.0 23112 1212 ?S14:46 0:00 sudo -u
root /data/solr/video/bin/snapshooter
Arg by default is nothing, it should be nothi
It should work, but if you want to handle multiple languages in ONE index
you end up with a lot of filters and fields handled with different analyzers
in a SINGLE configuration.
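A sketch of what "a lot of filters and fields in a single configuration" looks like in schema.xml, for one of the languages (the file and field names are illustrative, not from the original message):

```xml
<!-- schema.xml: one analyzer chain and one field per supported language -->
<fieldType name="text_fr" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords_fr.txt" ignoreCase="true"/>
    <filter class="solr.SnowballPorterFilterFactory" language="French"/>
  </analyzer>
</fieldType>
<field name="title_fr" type="text_fr" indexed="true" stored="true"/>
```

Each supported language repeats this fieldType/field pair, which is exactly the duplication the multicore alternative avoids.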
On Wed, Oct 15, 2008 at 3:03 PM, sunnyfr <[EMAIL PROTECTED]> wrote:
>
> But about stopwords and stemming, is it a real
But about stopwords and stemming: is it a real issue if on one core I have
several stemming and stopword files (with different names)? Should it still work?
Hannes Carl Meyer-2 wrote:
>
> Hi,
>
> yes, if you don't handle (stopwords, stemming etc.) a specific language
> you
> should create a general core.
Hi,
yes, if you don't handle (stopwords, stemming etc.) a specific language you
should create a general core.
In my project I'm supporting 10 languages and if I get unsupported languages
it is going to be logged and discarded right away!
Boosting on multiple cores is indeed a problem. An idea wo
If I run snapshooter manually, it seems to work:
[EMAIL PROTECTED]:/# ./data/solr/video/bin/snapshooter -V
+ fixUser -V
+ [[ -z root ]]
++ whoami
+ [[ root != root ]]
++ who -m
++ cut '-d ' -f1
++ sed '-es/^.*!//'
+ oldwhoami=root
+ [[ root == '' ]]
+ [[ -z /data/solr/video/data ]]
++ echo /data
OK, MultiCore is indeed handy to avoid having one big index which manages every
language,
but when you have one modification to make you have to make it on all of them.
Another point is that it's complicated to boost one language more than
another,
i.e. with an Italian video search, if we don't hav
Hi,
Sorry, I didn't get your example; can you send it to me again?
thanks,
Hannes Carl Meyer-2 wrote:
>
> I attached an example for you.
>
> The challenge with MultiCore is on the client's search logic. It would
> help
> if you know which language the person wants to search through. If not you
>
The delta implementation in DIH is a bit fragile for complex queries.
I recommend you do the delta-import using a full-import.
It can be done as follows:
define a different entity;
when you wish to do a full-import, pass the request parameter
entity=articles-full
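A hedged sketch of that setup (the entity, table, and column names are illustrative):

```xml
<!-- data-config.xml: a separate root entity used only for full refreshes -->
<entity name="articles-full" pk="id"
        query="SELECT id, title, body FROM articles">
</entity>
```

It is then triggered with command=full-import&entity=articles-full on the dataimport handler; adding clean=false keeps documents produced by the other entities.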
Shalin Shekhar Mangar schrieb:
You are missing the "pk" field (primary key). This is used for delta
imports.
I added the pk field and rebuilt the index yesterday. However, when I
run the delta-import, I still have this error message in the log:
INFO: Starting delta collection.
Oct 15, 2008
Hi everybody,
Everything was working fine until I stopped the Tomcat service to activate
snapshooter in solrconfig.xml.
Yesterday, before this modification, a cronjob was firing delta-import every
5 minutes, imports were taking between 5 and 10 minutes, and everything was
OK. But now that I've updated the config to create snapshots, I tri
Hi Noble,
Thanks for your reply
Sorry that I was not able to reply back in time.
I followed your suggestions and it is now indexing fine, up to an extent.
1) if the query is taking values from a single table, the full import will
show the expected count itself.
And when I do
http://localhost:8