Hmm. I may have to retract that. I just tried running "ant test" on an SSD
and on a RAM disk and saw no difference worth mentioning (nearly 1 hour
each!). So maybe it's not useful for that.
I _thought_ it was faster for compilation, which is mostly file access,
with no waiting for cloud instances to spin up. I'll have to retest
that later.
I did this on Mac where I have the following script:
# Make a 1 GB RAM disk: ram:// blocks are 512 bytes, so 1024 MB * 2048
# blocks per MB = 2097152 blocks
diskutil erasevolume HFS+ 'RAMDisk' `hdiutil attach -nomount ram://2097152`
Though 1 GB was not enough for the "ant test" run and I had to recreate
the disk at 2 GB, on a MacBook Pro with 8 GB total.
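For reference, the 2 GB version is just the same command with the block
count doubled (a sketch based on the same 512-byte-block arithmetic as
above; the volume name 'RAMDisk' is arbitrary):

```shell
# 2 GB RAM disk: 2 * 1024 MB * 2048 blocks per MB = 4194304 blocks
diskutil erasevolume HFS+ 'RAMDisk' `hdiutil attach -nomount ram://4194304`
# When done, free the memory:
#   diskutil eject /Volumes/RAMDisk
```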
On Linux, I believe a RAM-backed filesystem such as tmpfs would do the
same job, with no loop device needed.
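To answer Shawn's quick-and-dirty question below, a tmpfs mount would
look something like this (a sketch; the mount point /mnt/ramdisk and the
2g size are just example choices, and the commands need root):

```shell
# tmpfs lives in the page cache (it can spill to swap under pressure)
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk
df -h /mnt/ramdisk   # verify the size
# Unmount to release it: sudo umount /mnt/ramdisk
```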
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
On 3 December 2014 at 01:14, Shawn Heisey <[email protected]> wrote:
> On 12/2/2014 10:53 PM, Alexandre Rafalovitch wrote:
>> A tangent, but a relevant one (to the issue of speed). Have you tried
>> running the tests with Lucene/Solr code being in the RAM disk?
>>
>> I found that compiling source on RAMDisk is a lot faster than even
>> with SSD drive. Must be just frequency of access. It might be the same
>> with tests.
>
> I am intrigued by this. Do you have a quick and dirty guide to doing
> this on Linux? How much RAM must be allocated for a typical build and
> test run? Probably wouldn't want to run Monster tests on it. :)
>
> Thanks,
> Shawn
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
>