Create a setter method for the field which takes a String and apply
the annotation there.
Example:

private Calendar validFrom;

@Field
public void setValidFrom(String s) {
    // convert to a Calendar object and set the field
}
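The same setter, filled in as a rough sketch (the date pattern here is an
assumption and must match whatever String the response actually contains):

// needs: java.text.SimpleDateFormat, java.text.ParseException,
//        java.util.Calendar, java.util.TimeZone
@Field
public void setValidFrom(String s) {
    try {
        // assumes Solr's standard date format; adjust the pattern if needed
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        Calendar cal = Calendar.getInstance(TimeZone.getTimeZone("UTC"));
        cal.setTime(fmt.parse(s));
        this.validFrom = cal;
    } catch (ParseException e) {
        throw new RuntimeException("Could not parse date: " + s, e);
    }
}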
On Fri, Nov 13, 2009 at 12:24 PM, paulhyo wrote:
>
> Hi,
>
> I would li
Hi,
I would like to know if there is a way to add type converters when using
getBeans. I need conversion when updating (Calendar -> String) and when
searching (String -> Calendar).
The Bean class defines :
@Field
private Calendar validFrom;
but the received type within the Query Response is a String.
I played around with it and am also getting a NullPointerException on Solr
1.4 (albeit with a slightly different dump). Some of my documents
actually return, FYI, just not all. I'm on a multi-solr-core system
searching /solr/core1/admin/luke?id=MYID. My Exception looked different,
I guess SOLR-1352 should solve all the problems with performance. I am
working on one currently and I hope to submit a patch soon.
On Thu, Nov 12, 2009 at 8:05 PM, Sascha Szott wrote:
> Hi Avlesh,
>
> Avlesh Singh wrote:
>>>
>>> 1. Is it considered as good practice to set up several DIH request
>
Is there any tool to directly port Java to .NET? Then we could extract
the client part of the javabin code and convert it.
On Thu, Nov 12, 2009 at 9:56 PM, Erik Hatcher wrote:
> Has anyone looked into using the javabin response format from .NET (instead
> of SolrJ)?
>
> It's mainly a curiosity.
Hi,
Sorry, I forgot to mention that the comment field is a text field.
Regards,
Raakhi
On Thu, Nov 12, 2009 at 8:05 PM, Grant Ingersoll wrote:
>
> On Nov 12, 2009, at 8:55 AM, Rakhi Khatwani wrote:
>
> > Hi,
> > I am using solr 1.3 and i hv inserted some data in my comment
> > field.
> > for
Hi,
I'm seeing this stack trace when I try to view a specific document, e.g.
/admin/luke?id=1, but Luke appears to be working correctly when I just view
/admin/luke. Does this look familiar to anyone? Our sysadmin just upgraded us
to the 1.4 release; I'm not sure if this occurred before that.
: Use SolrJ and embed solr in my webapp, but I want to disable the http access
: to solr, meaning force all calls through my solrj interface I am building (no
: admin access etc).
if your app is running in a servlet container anyway, you might find it
just as easy to install Solr into the same
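For the embedded approach the question describes, a minimal SolrJ sketch
(the solr home path is a placeholder, and this assumes the 1.4-era
CoreContainer API):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
import org.apache.solr.core.CoreContainer;

public class EmbeddedExample {
    public static void main(String[] args) throws Exception {
        // point at a solr home containing conf/solrconfig.xml and conf/schema.xml
        System.setProperty("solr.solr.home", "/path/to/solr/home");
        CoreContainer.Initializer initializer = new CoreContainer.Initializer();
        CoreContainer container = initializer.initialize();
        // "" selects the default core
        EmbeddedSolrServer server = new EmbeddedSolrServer(container, "");
        System.out.println(server.query(new SolrQuery("*:*")).getResults().getNumFound());
        container.shutdown();
    }
}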
Not posting a problem or a solution. Just wanted to get word back to
the Solr developers, bug testers, and mailing list gurus how much I love
Solr 1.4. Our site search is more accurate, the search box offers
better suggestions much faster than before, and the elevate
functionality has appeased
Jerome L Quinn wrote:
> Hi, everyone, this is a problem I've had for quite a while,
> and have basically avoided optimizing because of it. However,
> eventually we will get to the point where we must delete as
> well as add docs continuously.
>
> I have a Solr 1.3 index with ~4M docs at around 90G
On the CoreAdmin wiki page. Thanks.
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Thursday, November 12, 2009 7:11 PM
To: solr-user@lucene.apache.org
Subject: Re: Multicore solr.xml schemaName parameter not being recognized
Turner, Robbin J wrote:
> When usin
Turner, Robbin J wrote:
> When using Solr 1.4 in multicore configuration:
>
>
>   <core ... dataDir="/opt/core0" schemaName="schema-core0.xml" />
>   <core ... dataDir="/opt/core1" schemaName="schema-core1.xml" />
>
>
>
> I get a runtime error:
>
> SEVERE: java.lang.RuntimeExce
When using Solr 1.4 in multicore configuration:
I get a runtime error:
SEVERE: java.lang.RuntimeException: Can't find resource 'schema.xml' in
classpath or '/opt/multicore/conf/', cwd=/root
        at org.apache.solr.core.SolrResourceLoader.openReso
>
>
> Unfortunately no. the +20 queries are distinct from each other, even tho
> they share some of the original query parameters (and some facet
> information
> from the original query facets).
>
> what I was envisioning was something that works like a facet, but instead
> of
> returning informat
Hi, everyone, this is a problem I've had for quite a while,
and have basically avoided optimizing because of it. However,
eventually we will get to the point where we must delete as
well as add docs continuously.
I have a Solr 1.3 index with ~4M docs at around 90G. This is a single
instance run
tpunder wrote:
>
> Could you use the facet.query feature
> (http://wiki.apache.org/solr/SimpleFacetParameters#facet.query_:_Arbitrary_Query_Faceting)
> to reduce it to 2 queries?
>
> So you'd:
>
> 1. Send solr the first query
> 2. Solr executes and returns the query to you
> 3. You then use t
Could you use the facet.query feature
(http://wiki.apache.org/solr/SimpleFacetParameters#facet.query_:_Arbitrary_Query_Faceting)
to reduce it to 2 queries?
So you'd:
1. Send Solr the first query
2. Solr executes it and returns the results (with facet counts) to you
3. You then use the facet results to create a 2nd query (see the sketch below)
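For reference, each facet.query parameter takes an arbitrary query and returns
only its count, so the second request could bundle all the related counts into
one call, something like (field names made up, URL-encoding omitted):

/solr/select?q=<original query>&rows=0
    &facet=true
    &facet.query=category:books AND price:[0 TO 100]
    &facet.query=category:music AND inStock:true

The counts come back under facet_counts/facet_queries, one entry per
facet.query.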
It may be simpler to just download it. The -dev bit was just mentioned on the
list, so check the ML archives.
Otis
--
Sematext is hiring -- http://sematext.com/about/jobs.html?mls
Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
- Original Message
> From: Nasseam Elkarra
>
On Thu, Nov 12, 2009 at 9:42 PM, Chris Hostetter
wrote:
>
> : > I can contribute a patch for this.
>
> : Attached is the patch I would like to have in Solrj.
> : If there is any problem with it please let me know. I followed the
> : HowToContribute wiki page and I hope that I didn't miss any steps
I checked out the 1.4.0 release from the tag, but when I build I get
1.4.1-dev appended to the build artifacts.
Thank you,
Nasseam
http://bodukai.com
Scenario:
1. I have a query I want to execute; I would be using the results and facets
returned
2. I also have a couple of dozen other queries that are closely related to
the first query and to the facets returned by that query. For each query, I
would only be using the total number of results d
On Thu, Nov 12, 2009 at 3:39 PM, Chris Hostetter
wrote:
>
> : I am getting results from one query and I just need 2 index attribute
> values
> : . These index attribute values are used for form new Query to Solr.
>
> can you elaborate on what exactly you mean by "These index attribute
> values are
: > I can contribute a patch for this.
: Attached is the patch I would like to have in Solrj.
: If there is any problem with it please let me know. I followed the
: HowToContribute wiki page and I hope that I didn't miss any steps.
the mailing list typically rejects attachments. As mentioned on
: I am getting results from one query and I just need 2 index attribute values
: . These index attribute values are used for form new Query to Solr.
can you elaborate on what exactly you mean by "These index attribute
values are used for form new Query to Solr" ... are you saying that you
want
On Thu, Nov 12, 2009 at 2:54 PM, Chris Hostetter
wrote:
>
> oh man, so you were parsing the Stored field values of every matching doc
> at query time? ouch.
>
> Assuming i'm understanding your goal, the conventional way to solve this
> type of problem is "payloads" ... you'll find lots of discussi
: I want to disable coord for certain queries. For example, if I pass a URL
: parameter like "disableCoord" to Solr, the BooleanQuery generated will have
: coord disabled. If it's not currently supported, what would be a good way
: to implement it?
in order to have something like this on a per
: Here's how we did it in Lucene: we had an extension of Query, with a custom
: scorer. In the index we stored the category id's as single-valued
: space-separated string. We also stored a space-separated string of scores
: in another field. We made of these fields stored. We simply delegated
: ex: "A + W Root Beer"
: the field uses a keyword tokenizer to keep the string together, then
: it will get converted to "aw root beer" by a custom filter ive made, i
: now want to split that up into 3 tokens (aw, root, beer), but seems
TokenFilters can produce more tokens than they consume ...
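A rough sketch of such a splitting filter against the Lucene 2.9-era attribute
API (the class name and the split-on-space logic are just illustrative):

import java.io.IOException;
import java.util.LinkedList;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.util.AttributeSource;

public final class SplitOnSpaceFilter extends TokenFilter {
    private final TermAttribute termAtt =
        (TermAttribute) addAttribute(TermAttribute.class);
    private final PositionIncrementAttribute posAtt =
        (PositionIncrementAttribute) addAttribute(PositionIncrementAttribute.class);
    private final LinkedList<String> pending = new LinkedList<String>();
    private AttributeSource.State savedState;

    public SplitOnSpaceFilter(TokenStream input) {
        super(input);
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!pending.isEmpty()) {
            // emit the next piece of the previously split token
            restoreState(savedState);
            termAtt.setTermBuffer(pending.removeFirst());
            posAtt.setPositionIncrement(1);
            return true;
        }
        if (!input.incrementToken()) {
            return false;
        }
        String[] parts = termAtt.term().split(" ");
        if (parts.length > 1) {
            termAtt.setTermBuffer(parts[0]);
            savedState = captureState(); // keep offsets/type for the extra tokens
            for (int i = 1; i < parts.length; i++) {
                pending.add(parts[i]);
            }
        }
        return true;
    }

    @Override
    public void reset() throws IOException {
        super.reset();
        pending.clear();
    }
}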
Hi,
I have succeeded in running and querying with PayloadTermQueryPlugin.
When I ran my test against an embedded SolrJ server it ran fine; I'm using
the Maven Solr 1.4 artifacts.
When I deployed it into my servlet container the plugin didn't load; the
war in the servlet container came from a standard
Like I said before, it has served me and other people very well so far.
AFAIK there is no StAX implementation for .NET; there is XmlReader, but it's
rather more complex to use than XmlDocument (DOM).
Of course, I always welcome patches.
Cheers,
Mauricio
On Thu, Nov 12, 2009 at 3:34 PM, Walter Under
DOM is the wrong choice for unmarshalling XML data from a protocol. The DOM is
slow and bloated. You need that if you are manipulating an XML document, but
not if you are strip-mining the data from it and then throwing the document away.
Try a StAX parser: http://en.wikipedia.org/wiki/StAX
That shou
I simply altered solr.xml and changed it to persistent="true", then
all subsequent actions were saved.
Thanks
2009/11/11 Noble Paul നോബിള് नोब्ळ् :
> On Thu, Nov 12, 2009 at 3:13 AM, Jason Rutherglen
> wrote:
>> It looks like our core admin wiki doesn't cover the persist action?
>> http://wiki
It is recommended [1] to use synonyms at index time only, for various reasons,
especially with multi-word synonyms.
[1] http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.SynonymFilterFactory
At index time only, use expand=true ignoreCase=true with a synonym.txt
containing (see the schema sketch below):
micheal, michael
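A rough schema.xml sketch of that index-time-only setup (the field type name
is just illustrative):

<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonym.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>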
OR:
I use DOM. Honestly, I haven't run any perf tests, it all just runs well
enough for me. Then again, my documents and resultsets are typically small
(~1KB docs and ~50 docs per resultset). How big are your documents?
On Thu, Nov 12, 2009 at 2:40 PM, wojtekpia wrote:
>
> I was thinking of going t
I was thinking of going this route too because I've found that parsing XML
result sets using XmlDocument + XPath can be very slow (up to a few seconds)
when requesting ~100 documents. Are you getting good performance parsing
large result sets? Are you using SAX instead of DOM?
Thanks,
Wojtek
ma
I've got a process external to Solr that is constantly feeding it new
documents, retrying if Solr is not responding. What's the right way to
stop Solr (running in Tomcat) so no documents are lost?
Currently I'm committing all cores and then running catalina's stop
script, but between my commit and
It's one of my pending issues for SolrNet (
http://code.google.com/p/solrnet/issues/detail?id=71 )
I've looked at the code, it doesn't seem terribly complex to port to C#. It
would be kind of cumbersome to test it though.
I just haven't implemented it yet because I'm getting good enough performance
wi
Has anyone looked into using the javabin response format from .NET
(instead of SolrJ)?
It's mainly a curiosity.
How much better could performance/bandwidth/throughput be? How
difficult would it be to implement some .NET code (C#, I'd guess being
the best choice) to handle this response fo
On Nov 12, 2009, at 8:55 AM, Rakhi Khatwani wrote:
> Hi,
> I am using solr 1.3 and i hv inserted some data in my comment
> field.
> for example:
>
> for document1:
>
> The iPhone 3GS finally adds common cell phone features like multimedia
> messaging, video recording, and voice dialing.
-- Forwarded message --
From: Noble Paul നോബിള് नोब्ळ्
Date: 2009/11/12
Subject: Re: ${dataimporter.delta.twitter_id} not getting populated in
deltaImportQuery
To: Mark Ellul
On Thu, Nov 12, 2009 at 8:17 PM, Mark Ellul wrote:
> I think I got it working, thanks for your respon
I'm not sure if this is what you mean, but we do all our indexing on a
non-public server so we can test it. Only when everyone is satisfied do
we put it on the public server.
To do that we just tar up the "index" folder and scp it to the server.
To install it, we stop solr, untar it, and start
Hi Avlesh,
Avlesh Singh wrote:
>>
>> 1. Is it considered as good practice to set up several DIH request
>> handlers, one for each possible parameter value?
>>
> Nothing wrong with this. My assumption is that you want to do this to
> speed
> up indexing. Each DIH instance would block all others, on
On Thu, Nov 12, 2009 at 8:02 AM, Chantal Ackermann
wrote:
> this works fine for me! However, I'm using Java/SolrJ and I have the freedom
> to add any necessary jars to convert the value.
These conversions should normally be done on the Solr server side
(i.e. MoreLikeThis component needs a patch),
Hi,
I am using Solr 1.3 and I have inserted some data in my comment field.
For example, for document1:
The iPhone 3GS finally adds common cell phone features like multimedia
messaging, video recording, and voice dialing. It runs faster; its promised
battery life is longer; and the multimed
Hi Experts,
I would like help with multi-word synonyms. The scenario is as follows:
I have the name Micheal Jackson (the wrong spelling) which has the synonym
Michael Jackson, i.e.
Micheal Jackson => Michael Jackson
When I try to search for the words Micheal Jackson (not a phrase search), it is
searching for te
Noble Paul wrote:
> Yes , open an issue . This is a trivial change
I've opened JIRA issue SOLR-1554.
-Sascha
>
> On Thu, Nov 12, 2009 at 5:08 AM, Sascha Szott wrote:
>> Noble,
>>
>> Noble Paul wrote:
>>> DIH imports are really long running. There is a good chance that the
>>> connection times ou
Hi Yonik,
this works fine for me! However, I'm using Java/SolrJ and I have the
freedom to add any necessary jars to convert the value.
But what about clients that cannot make use of FieldType? They cannot use
those custom values and will be stuck at that point, won't they?
Shall I still open a
Thanks Alexey, this is working.
I've split it into query and boostQuery using dismax and it gives some
appropriate results.
Cheers,
Chantal
Alexey Serba schrieb:
Or maybe it's
possible to tweak MoreLikeThis just to return the fields and terms that
could be used for a search on the other core
Is there maybe a way in Solr 1.4 to search with a wildcard at the beginning?
In 1.3 I can't activate it.
KingArtus
Hi Noble,
Thanks for the response.
CAPS is not the issue.
Can you please confirm that the link below is the code for the SqlEntityProcessor
in the 1.4 release?
http://svn.apache.org/viewvc/lucene/solr/tags/release-1.4.0/contrib/dataimporthandler/src/main/java/org/apache/solr/handler/dataimport/SqlEn
Replication? Over Http? - http://wiki.apache.org/solr/SolrReplication
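A minimal solrconfig.xml sketch of that handler, one block on the master and
one on each slave (host name, port and confFiles here are placeholders):

On the master:
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

On each slave:
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>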
Cheers
Avlesh
On Thu, Nov 12, 2009 at 2:01 AM, Joel Nylund wrote:
> is it possible to index on one server and copy the files over?
>
> thanks
> Joel
>
>
Hello,
I am using the Dismax request handler for queries:
...select?q=foo bar foo2 bar2&qt=dismax&mm=2...
With parameter "mm=2" I configure that at least 2 of the optional clauses must
match, regardless of how many clauses there are.
But now I want to change this to the following:
List all documents
List,
I somehow fail to index certain PDF files using the
ExtractingRequestHandler in Solr 1.4 with the default solrconfig.xml but a
modified schema. I have a very simple schema for this case, using only
an ID field, a timestamp field and two dynamic fields, ignored_* and
attr_*, both indexed, stored an
Hi Solr experts
I just want to know if there is a tool or way by which I can analyze which
queries are heavy and how much time each takes to fetch results from Solr.
Our slave Solr is serving about 12000 requests per hour and we need to
analyze the queries served by it.
I have not explored the Luke tool mu
> I want to add turkish character support solr. For example
> when i make a
> query with letter 'c' i want to to get result having 'c'
> and Turkish
> character 'ç' and vice versa. How can i do that. Do you
> have any opinion.
You can replace Turkish characters (ç) with their ascii versions (c) w
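One common way to get that kind of folding in schema.xml (this is an
assumption about where the reply above is heading; the field type name is
illustrative) is ASCIIFoldingFilterFactory applied at both index and query
time:

<fieldType name="text_folded" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- folds ç -> c, ş -> s, etc. -->
    <filter class="solr.ASCIIFoldingFilterFactory"/>
  </analyzer>
</fieldType>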
Hello everyone,
I want to add Turkish character support to Solr. For example, when I make a
query with the letter 'c' I want to get results having 'c' and the Turkish
character 'ç', and vice versa. How can I do that? Do you have any opinion?
Thanks,
Can