[
https://issues.apache.org/jira/browse/LUCENE-2167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12865481#action_12865481
]
Steven Rowe edited comment on LUCENE-2167 at 5/9/10 1:05 AM:
-------------------------------------------------------------
I added your change removing CharBuffer.wrap(), Robert, and it appears to have
sped it up, though not as much as I would like:
||Operation||recsPerRun||rec/s||elapsedSec||
|StandardTokenizer|1262799|647,589.23|1.95|
|ICUTokenizer|1268451|526,328.22|2.41|
|UAX29Tokenizer|1268451|558,788.99|2.27|
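For reference, here is a rough, purely illustrative sketch of the kind of change mentioned above (dropping the per-refill CharBuffer.wrap() in favor of reading straight into the char[]). The zzReader/zzBuffer/zzEndRead names only mimic JFlex's generated-scanner conventions; this is not the actual patch:
{code:java}
import java.io.IOException;
import java.io.Reader;
import java.nio.CharBuffer;

// Illustrative sketch only (not the actual patch): refill a scanner's char[]
// buffer without allocating a CharBuffer wrapper on every refill.
final class BufferRefill {

  // before: Reader.read(CharBuffer) forces a CharBuffer.wrap() per call
  static int refillWithWrap(Reader zzReader, char[] zzBuffer, int zzEndRead) throws IOException {
    return zzReader.read(CharBuffer.wrap(zzBuffer, zzEndRead, zzBuffer.length - zzEndRead));
  }

  // after: read straight into the backing array, no wrapper object per refill
  static int refillDirect(Reader zzReader, char[] zzBuffer, int zzEndRead) throws IOException {
    return zzReader.read(zzBuffer, zzEndRead, zzBuffer.length - zzEndRead);
  }
}
{code}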
I plan on attempting to rewrite the grammar to eliminate chaining/lookahead
this weekend.
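As a loose analogy for what eliminating lookahead means, using java.util.regex rather than JFlex (whose lookahead operator is "/", not a regex assertion), and with hypothetical patterns that are not taken from the grammar:
{code:java}
import java.util.regex.Pattern;

// Hypothetical analogy, not the actual JFlex grammar: instead of chained rules
// that use lookahead to decide whether a word continues, fold the trailing
// context into one self-contained pattern that matches the whole word at once.
final class LookaheadExample {

  // with lookahead: match a letter run only when followed by a mid-letter and another letter
  static final Pattern WITH_LOOKAHEAD = Pattern.compile("\\p{L}+(?=['.]\\p{L})");

  // folded: the mid-letter context becomes part of the token pattern itself
  static final Pattern FOLDED = Pattern.compile("\\p{L}+(?:['.]\\p{L}+)*");
}
{code}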
*edit*: fixed the rec/s figures, which were taken from the worst of five runs
instead of the best of five; the elapsedSec figures were already correct.
was (Author: steve_rowe):
I added your change removing CharBuffer.wrap(), Robert, and it appears to
have sped it up, though not as much as I would like:
||Operation||recsPerRun||rec/s||elapsedSec||
|StandardTokenizer|1262799|423,615.91|1.95|
|ICUTokenizer|1268451|403,836.69|2.41|
|UAX29Tokenizer|1268451|498,604.94|2.27|
I plan on attempting to rewrite the grammar to eliminate chaining/lookahead
this weekend.
> Implement StandardTokenizer with the UAX#29 Standard
> ----------------------------------------------------
>
> Key: LUCENE-2167
> URL: https://issues.apache.org/jira/browse/LUCENE-2167
> Project: Lucene - Java
> Issue Type: New Feature
> Components: contrib/analyzers
> Affects Versions: 3.1
> Reporter: Shyamal Prasad
> Assignee: Steven Rowe
> Priority: Minor
> Attachments: LUCENE-2167.patch, LUCENE-2167.patch, LUCENE-2167.patch,
> LUCENE-2167.patch
>
> Original Estimate: 0.5h
> Remaining Estimate: 0.5h
>
> It would be really nice for StandardTokenizer to adhere to the standard as
> closely as we can with JFlex. Then its name would actually make sense.
> Such a transition would involve renaming the old StandardTokenizer to
> EuropeanTokenizer, as its javadoc claims:
> bq. This should be a good tokenizer for most European-language documents
> The new StandardTokenizer could then say
> bq. This should be a good tokenizer for most languages.
> All the English/Euro-centric handling, like the acronym/company/apostrophe
> rules, can stay with that EuropeanTokenizer, and it could be used by the
> European analyzers.
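For a feel of what UAX#29 default word segmentation produces, here is a small illustrative sketch using ICU4J's BreakIterator, which implements the UAX#29 default boundary rules. It only demonstrates the segmentation behavior; the attached patches implement the rules in JFlex, not via BreakIterator:
{code:java}
import com.ibm.icu.text.BreakIterator;
import java.util.Locale;

// Illustrative only: print the word tokens that UAX#29 default word
// segmentation yields for a sample sentence.
public class Uax29Demo {
  public static void main(String[] args) {
    String text = "The quick (\"brown\") fox can't jump 32.3 feet, right?";
    BreakIterator words = BreakIterator.getWordInstance(Locale.ROOT);
    words.setText(text);
    int start = words.first();
    for (int end = words.next(); end != BreakIterator.DONE; start = end, end = words.next()) {
      // crude filter: keep only spans that start with a letter or digit
      if (Character.isLetterOrDigit(text.codePointAt(start))) {
        System.out.println(text.substring(start, end));
      }
    }
  }
}
{code}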