It's more than possible, it's probable. Cache thrashing would definitely be
my first guess; with so many copies of the exact same data you're not only
missing out on significant gains with the L2 cache, you're also taking a
major hit with every cache miss (which probably happens every context
switch).
Also, choosing by OS is the one reason we can think of now, but that doesn't
mean there aren't other reasons.
EG, who knows -- maybe for small indexes NIO doesn't help but for
large ones it does (just an example) and so you'd want non-static
choice.
Mike
Yonik Seeley wrote:
On Wed, Nov 12, 2008
Good!
In fact now we see similar slowness with nio-thread vs nio-shared as
we see for RAM-thread vs RAM-shared. Ie, for both RAM and NIO you get
better performance sharing a single reader than reader-per-thread.
This is odd -- I would have expected that with infinite RAM reader-per-thread
On Wed, Nov 12, 2008 at 5:00 PM, Chris Hostetter
<[EMAIL PROTECTED]> wrote:
> since the choice of FSDirectory variant is largely going to be based on OS,
> I can't think of any reason why a static setter method wouldn't be good
> enough in this particular case.
https://issues.apache.org/jira/browse
: From the user perspective: a public constructor would be the most
: obvious, and would be consistent with RAMDirectory.
A lot of the cases where system properties are currently used can't
really be solved this way because the client isn't the one constructing
the object. SegmentReader's IMP
From the user perspective: a public constructor would be the most
obvious, and would be consistent with RAMDirectory.
Dmitri
On Wed, Nov 12, 2008 at 4:50 AM, Michael McCandless
<[EMAIL PROTECTED]> wrote:
>
> I think we really should open up a non-static way to choose a different
> FSDirectory impl
Nice!
At 8 threads nio-shared catches up with ram-shared. Here's the complete table:
fs-thread nio-thread ram-thread fs-shared nio-shared ram-shared
1 71877 70461 54739 73986 72155 61595
2 34949 34945 26735 43719 33019 28935
3
I'm thinking about it, so if someone else doesn't get something together
before I have some free time...
It's just not clear to me at the moment how best to do it.
Michael McCandless wrote:
Any takers for pulling a patch together...?
Mike
Mark Miller wrote:
+1
- Mark
On Nov 12, 2008, at 4:50 AM, Michael McCandless <[EMAIL PROTECTED]> wrote:
Any takers for pulling a patch together...?
Mike
Mark Miller wrote:
+1
- Mark
On Nov 12, 2008, at 4:50 AM, Michael McCandless <[EMAIL PROTECTED]
> wrote:
I think we really should open up a non-static way to choose a
different FSDirectory impl? EG maybe add optional Class to
FSDirectory.getDirectory? Or maybe give NIOFSDirectory a public
ctor? Or something?
+1
- Mark
On Nov 12, 2008, at 4:50 AM, Michael McCandless <[EMAIL PROTECTED]
> wrote:
I think we really should open up a non-static way to choose a
different FSDirectory impl? EG maybe add optional Class to
FSDirectory.getDirectory? Or maybe give NIOFSDirectory a public
ctor? Or something?
I think we really should open up a non-static way to choose a
different FSDirectory impl? EG maybe add optional Class to
FSDirectory.getDirectory? Or maybe give NIOFSDirectory a public
ctor? Or something?
Mike
Mark Miller wrote:
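The "non-static choice" idea above could look something like the following. This is a hypothetical sketch only -- the class and method names (`Directory`, `getDirectory`, `NioDir`) are stand-ins defined here, not Lucene's real API -- but it shows the shape of passing the desired implementation class to the factory rather than setting a JVM-wide system property, so two readers in one process can use different implementations.

```java
// Hypothetical sketch of the proposal: the impl class is an argument,
// not global static state. All names here are stand-ins, not Lucene's.
public class DirectoryFactory {
    public interface Directory { String name(); }

    public static class SimpleDir implements Directory {
        public String name() { return "simple"; }
    }

    public static class NioDir implements Directory {
        public String name() { return "nio"; }
    }

    // Non-static choice: caller picks the implementation per call.
    public static Directory getDirectory(Class<? extends Directory> impl)
            throws ReflectiveOperationException {
        return impl.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(getDirectory(NioDir.class).name()); // prints "nio"
    }
}
```

A public constructor on NIOFSDirectory would achieve the same end more directly; the Class-parameter form just keeps the existing factory entry point.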
Mark Miller wrote:
That's a good point, and points out a bug in solr trunk for me. Frankly
I don't see how it's done. There is no code I can see/find to use it
rather than FSDirectory. Still assuming there must be a way, but I
don't see it...
Ah - brain freeze. What else is new :) You have to s
Dmitri Bichko wrote:
32 cores, actually :)
Glossed over that - even better! Killer machine to be able to test this on.
I reran the test with readonly turned on (I changed how the time is
measured a little, it should be more consistent):
fs-thread ram-thread fs-shared
32 cores, actually :)
I reran the test with readonly turned on (I changed how the time is
measured a little, it should be more consistent):
fs-thread ram-thread fs-shared ram-shared
1 71877 54739 73986 61595
2 34949 26735 43719 28935
3 25581
I re-ran the no-readonly ram tests:
thread shared
1 64043 53610
2 26999 25260
3 27173 17265
4 22205 13222
5 20795 11098
6 17593 9852
7 17163 8987
8 17275 9052
9 19392 10266
10 27809 10397
11 25987 10724
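For readers following along, the thread-vs-shared axis in these tables comes from a harness shaped roughly like the sketch below. This is a stdlib-only stand-in (the real benchmark ran Lucene queries against an index; `Searcher` here is a dummy), but it shows the structure: each worker runs the same query load against either one shared searcher instance or its own private copy.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the shared-vs-per-thread benchmark shape. Hypothetical names;
// the real test used IndexSearcher over a Lucene index.
public class ScalingBench {
    // Stand-in for an index searcher; real work would be index searches.
    static class Searcher {
        int search(int query) { return query * 31; }
    }

    // Run nThreads workers; if shared, all workers use one Searcher.
    static List<Integer> run(int nThreads, boolean shared) {
        Searcher common = new Searcher();
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int t = 0; t < nThreads; t++) {
                Searcher s = shared ? common : new Searcher();
                futures.add(pool.submit(() -> {
                    int hits = 0;
                    for (int q = 0; q < 1000; q++) hits += s.search(q);
                    return hits;
                }));
            }
            List<Integer> results = new ArrayList<>();
            for (Future<Integer> f : futures) results.add(f.get());
            return results;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(run(8, true).size()); // prints 8
    }
}
```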
Nice results, thanks!
The poor disk-based scaling may be fixed by NIOFSDirectory, if you are
on Unix. If you are on Windows it won't help (and will likely be
worse than FSDirectory), because of an apparent bug in Sun's JVM on
Windows whereby NIO positional file reads seem to share a lock
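The positional-read pattern NIOFSDirectory relies on is FileChannel's read(ByteBuffer, long position): it does not move the channel's file pointer, so multiple threads can read one channel without synchronizing on it -- which is exactly what the Windows JVM issue defeats. A minimal stdlib demonstration (the helper name `readAt` is ours, not Lucene's):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Positional (pread-style) reads: no shared file-pointer state, so no
// per-read synchronization is needed across threads.
public class PositionalRead {
    // Read len bytes at the given file offset without touching the
    // channel's position. Helper name is hypothetical, not Lucene API.
    static byte[] readAt(Path file, long pos, int len) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(len);
            while (buf.hasRemaining()) {
                if (ch.read(buf, pos + buf.position()) < 0) break; // EOF
            }
            return buf.array();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("pread", ".bin");
        Files.write(tmp, new byte[]{10, 20, 30, 40});
        System.out.println(readAt(tmp, 2, 2)[0]); // prints 30
        Files.deleteIfExists(tmp);
    }
}
```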
And if you are on unix and could try trunk and use the new
NIOFSDirectory implementation...that would be awesome.
Woah...that made 2.4 too. A 2.4 release will allow both optimizations.
Many thanks!
Nice! An 8 core machine with a test ready to go!
How about trying the read only mode that was added to 2.4 on your
IndexReader?
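Why read-only mode helps: a writable reader has to guard per-hit lookups (such as deleted-doc checks) against concurrent modification, while a read-only reader can skip that synchronization entirely. The sketch below is a stdlib-only illustration of that design point, not Lucene code -- all names are stand-ins.

```java
// Stdlib illustration (not Lucene) of why a read-only reader scales
// better: the writable variant synchronizes every lookup, the immutable
// snapshot reads lock-free.
public class ReadOnlyDemo {
    public interface DeletedDocs { boolean isDeleted(int doc); }

    // Writable variant: every lookup takes the monitor, a per-hit cost
    // that grows painful under many searching threads.
    public static class SyncDeletedDocs implements DeletedDocs {
        private final boolean[] deleted;
        public SyncDeletedDocs(boolean[] d) { deleted = d; }
        public synchronized boolean isDeleted(int doc) { return deleted[doc]; }
    }

    // Read-only snapshot: immutable after construction, so unsynchronized
    // reads are safe from any thread.
    public static class ReadOnlyDeletedDocs implements DeletedDocs {
        private final boolean[] deleted;
        public ReadOnlyDeletedDocs(boolean[] d) { deleted = d.clone(); }
        public boolean isDeleted(int doc) { return deleted[doc]; }
    }

    public static void main(String[] args) {
        boolean[] dels = {false, true, false};
        DeletedDocs ro = new ReadOnlyDeletedDocs(dels);
        System.out.println(ro.isDeleted(1)); // prints true
    }
}
```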
And if you are on unix and could try trunk and use the new
NIOFSDirectory implementation...that would be awesome.
Those two additions are our current hope for