I forgot this part of your question.
To go from degrees to km, multiply by DistanceUtils.DEG_TO_KM.
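For reference, a minimal self-contained sketch of that conversion. The constant is recomputed locally here (radians per degree times the mean earth radius) so the snippet compiles without Spatial4j; with the library on the classpath you would just use `DistanceUtils.DEG_TO_KM` directly.

```java
public class DegToKm {
    // One degree of arc along a great circle, in km:
    // radians-per-degree times the mean earth radius.
    static final double EARTH_MEAN_RADIUS_KM = 6371.0087714;
    static final double DEG_TO_KM = Math.toRadians(1.0) * EARTH_MEAN_RADIUS_KM;

    public static void main(String[] args) {
        double distDegrees = 0.5;                // a distance reported in degrees
        double distKm = distDegrees * DEG_TO_KM; // multiply to get kilometres
        System.out.printf("%.4f km%n", distKm);  // about 55.6 km
    }
}
```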
~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley
On Mon, Dec 22, 2014 at 9:35 AM, wrote:
Thanks for the suggestion.
I am using Lucene's Vincenty calculator to find the distance, but the output is
strange: I cannot figure out how to convert it to metres/kilometres.
After an extensive search on Google, I found the GeoDesy source code, which gives
me the distance in metres. This is also the implementa
Hi Ankit,
Vincenty is the most accurate of the three; it serves as the benchmark for the
"true" answer in the tests of the other two. In theory it produces the same
answers as the two simpler formulas you mention, but it is "numerically robust"
for computers. Note that the world model used by Spatial4j when in "geo" mode
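To make the comparison concrete, here is a self-contained haversine implementation, one of the simpler spherical formulas that Vincenty is benchmarked against. The radius is the mean earth radius, and the example coordinates (roughly London to Paris) are purely illustrative:

```java
public class Haversine {
    static final double EARTH_MEAN_RADIUS_KM = 6371.0087714;

    // Great-circle distance between two lat/lon points on a sphere, in km.
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_MEAN_RADIUS_KM * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // London to Paris: roughly 344 km along the great circle.
        System.out.printf("%.1f km%n", haversineKm(51.5074, -0.1278, 48.8566, 2.3522));
    }
}
```

On a sphere this agrees closely with the other formulas; Vincenty's extra accuracy comes from modelling the earth as an ellipsoid.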
Hello.
You have stated the use-case so generically that it's not clear whether you
should index the polygon set and query by the point set, or the reverse.
Generally, you should index the set that is known in advance and query by the
other, the set that is generally not known. Assuming this is th
Hi,
In fact, the shallow-copy capability (invoked as "cp --reflink=always") in btrfs
and other file systems that support it is really interesting. It would be cool
if Java 7+'s Files.copy(Path, Path, CopyOption...) could use this via an
additional CopyOption - maybe in Java 9. The trick here is to clo
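As a command-line illustration (the file names are made up): `--reflink=auto` falls back to a regular copy on file systems without copy-on-write support, whereas `--reflink=always` fails instead of falling back.

```shell
# On btrfs/XFS the copy shares the source's data blocks (copy-on-write),
# so no bulk data I/O happens up front; blocks are only duplicated when
# one of the two files is later modified.
echo "segment data" > /tmp/seg.dat
cp --reflink=auto /tmp/seg.dat /tmp/seg-copy.dat
cmp /tmp/seg.dat /tmp/seg-copy.dat && echo "copies identical"
```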
Very interesting, have to take a closer look. Thanks Uwe.
D.
On Mon, Dec 22, 2014 at 11:39 AM, Uwe Schindler wrote:
Hi Dawid,
there are cool features that might be useful, just not for Lucene's Java code.
Like ZFS, it now has snapshot functionality, and you can copy files mostly
without doing I/O (shallow copy; it uses copy-on-write semantics to do that).
This might be useful for backup purposes. I know we did mo
One of our indexes is completely rebuilt quite frequently -> a "batch update" or
"re-index".
In doing so, more than 2 million documents are added to, or updated in, that
index. This creates an immense I/O load on our system. Does it make sense to set
the merge scheduler to NoMergeScheduler (and/or MergePolicy