You should never index directly into your query servers, by the way. Index into 
the indexing server and replicate out to your query servers, and tune each as 
needed.
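
For illustration, a minimal SolrJ sketch of that split -- hostnames and the
collection name below are placeholders, not anything from this thread: all
writes go to the indexing (leader) node, all searches go to a query (follower)
node that pulls the index through Solr's standard replication.

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.client.solrj.response.QueryResponse;
  import org.apache.solr.common.SolrInputDocument;

  public class IndexAndQuerySplit {
      public static void main(String[] args) throws Exception {
          // Writes go only to the indexing (leader) node -- placeholder hostname.
          HttpSolrClient indexer = new HttpSolrClient.Builder(
                  "http://indexer.example.com:8983/solr/mycollection").build();

          SolrInputDocument doc = new SolrInputDocument();
          doc.addField("id", "doc-1");
          doc.addField("title_s", "example document");
          indexer.add(doc);
          indexer.commit();

          // Reads go only to a query (follower) node replicated from the leader.
          HttpSolrClient searcher = new HttpSolrClient.Builder(
                  "http://query1.example.com:8983/solr/mycollection").build();
          QueryResponse rsp = searcher.query(new SolrQuery("title_s:example"));
          System.out.println("hits: " + rsp.getResults().getNumFound());

          indexer.close();
          searcher.close();
      }
  }

That way the heavy batch indexing load and the query load land on different
boxes, and each side can be sized and tuned for its own workload.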

> On Oct 6, 2022, at 6:52 PM, Dominique Bejean <dominique.bej...@eolya.fr> 
> wrote:
> 
> Thank you Dima,
> 
> Updates are highly multi-threaded batch processes that can run at any time.
> We won't have the whole index in the RAM cache.
> Disks are SSD.
> 
> Dominique
> 
> 
>> Le ven. 7 oct. 2022 à 00:28, dmitri maziuk <dmitri.maz...@gmail.com> a
>> écrit :
>> 
>>> On 2022-10-06 4:54 PM, Dominique Bejean wrote:
>>> 
>>> Storage configuration is the second point that I would like to
>>> investigate in order to better share disk resources.
>>> Instead of having one single RAID 6 volume, isn't it better to have one
>>> distinct non-RAID volume per Solr node (if multiple Solr nodes are
>>> running on the server), or multiple non-RAID volumes used by a single
>>> Solr JVM (if only one Solr node is running on the server)?
>> 
>> The best option is to have the indexes in RAM cache. The 2nd best option
>> is the 2-level cache w/ RAM + SSD -- that's what you get with ZFS, and
>> you can use the cheaper HDDs for primary storage. The next one is all
>> SSDs -- in that case RAID-1(0) may give you better read performance than
>> a dedicated drive, but probably not enough to notice. There's very
>> little point in going RAID-5 or 6 on SSDs.
>> 
>> In terms of performance, RAID-5/6 on HDDs is likely the worst option, and
>> a single RAID-6 volume is also the worst option in terms of flexibility
>> and maintenance. If your customer doesn't have money to fill those slots
>> with SSDs, I'd probably go with one small SSD for system + swap, a
>> 4-disk RAID-10, and a hot spare for it.
>> 
>> Dima
>> 
>> 
