On 7/6/22 16:38, Christopher Schultz wrote:
Anecdotal data point:
elyograg@bilbo:/usr/local/src$ ps aux | grep '\(java\|PID\)'
USER PID    %CPU %MEM VSZ     RSS    TTY STAT START TIME  COMMAND
solr 852288 1.0  9.5  3808952 771204 ?   Sl   Jul03 59:32 java -server -Xms512m -Xm
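To pull just the heap flags out of a command line like that, a minimal sketch (the sample string below is hypothetical, standing in for the COMMAND column of your real `ps` output):

```shell
# Extract the -Xms/-Xmx heap flags from a JVM command line.
# The sample string is an assumption; on a live host, feed in the COMMAND
# column from `ps aux | grep java` instead.
cmd='java -server -Xms512m -Xmx512m -XX:+UseG1GC -jar start.jar'
echo "$cmd" | tr ' ' '\n' | grep -E '^-Xm[sx]'
# prints:
# -Xms512m
# -Xmx512m
```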
On 7/6/22 18:33, dmitri maziuk wrote:
This way lieth dark magick and madness, of course, but I'm curious
what the optimal config would be for a container infra. For bare metal
a large PCIe SSD should be the best bang for the buck, but on kube the
"disk" is probably iSCSI volumes and you may not
> optimise performance to avoid spending (often much less) money on
> sufficient hardware to do the job. I've seen this happen many times, sadly.
Not sure about kube, but with Docker you can simply mount the SSD into the
service via volumes. Unless you have no control over the metal, this could work?
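A minimal compose sketch of that idea; the host path, image tag, and data directory below are assumptions (recent official Solr images keep their data under /var/solr, but verify for your version):

```
services:
  solr:
    image: solr:9
    ports:
      - "8983:8983"
    volumes:
      # bind-mount the host SSD into the container as Solr's data directory
      - /mnt/nvme/solr-data:/var/solr
```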
On 2022-07-06 2:59 PM, Shawn Heisey wrote:
If the mounted filesystem is one that the OS can cache, and there is
enough spare memory, then a lot of the mmap requests that Lucene makes
won't ever hit the actual disk. Most block devices can be cached. I
would expect that to be the case for iSCSI.
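That page-cache behaviour is easy to observe from userspace; a rough Linux sketch (the file name is arbitrary):

```shell
# Write a small file and read it once: the first read pulls its pages into
# the kernel page cache, so subsequent reads are served from RAM, not the
# device. /proc/meminfo's "Cached" line now includes those pages.
dd if=/dev/zero of=cache-demo.bin bs=1M count=4 2>/dev/null
cat cache-demo.bin > /dev/null       # populates the page cache
grep -E '^Cached:' /proc/meminfo     # page-cache size includes the file now
rm -f cache-demo.bin
```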
In my experience, yeah, it will just be slow, but it's hard to truthfully test
slow without a couple tens of thousands of searches to measure against. It
won't outright fail, it will just read the disk. So: get an SSD to put the
index on, and then, poof, you have a really fast disk to read from.
Shawn,
On 7/5/22 23:52, Shawn Heisey wrote:
On 7/5/2022 3:11 PM, Christopher Schultz wrote:
Well, if you need more than 32GiB, I think the recommendation is to go
MUCH HIGHER than 32GiB. If you have a 48GiB machine, maybe restrict to
31GiB of heap, but if you have a TiB, go for it :)
I remember reading somewhere, likely for a different
you don't have to worry much about hardware failure if you just buy two of
everything and have a backup server ready and waiting to take over while the
original fails and is reconstructed.
Hi, Shawn.
I don't really follow the thread, but want to comment on:
> I think most network filesystems (NFS, SMB, etc) cannot be locally cached.
I played with EFS recently. It seems to be cached locally pretty well.
After a searcher opens a mounted index directory, the read rate spikes for
some time,
On 7/6/22 10:59, dmitri maziuk wrote:
mmap() doesn't side-step disk access though; depending on the number of
mmap'ed chunks and chunk size, it can be slow. Especially if your
"disk" is an iSCSI volume on a gigabit link to a slow underprovisioned
NAS.
On 2022-07-05 10:52 PM, Shawn Heisey wrote:
...
That is an interesting question. One of the reasons Lucene queries so
fast when there is plenty of memory is because it accesses files on disk
directly with MMAP, so there is no need to copy the really massive data
structures into the heap at all.
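On Linux you can see those mappings directly. A sketch that inspects the current shell as a stand-in; on a real host you would substitute Solr's PID for `$$`:

```shell
# List the files this process has mmap()'d, from /proc/<pid>/maps
# (field 6 is the mapped pathname; anonymous mappings leave it empty).
# Run against a Solr PID, the same listing shows Lucene segment files
# mapped straight into the process address space, bypassing the Java heap.
awk '$6 ~ /\// { print $6 }' /proc/$$/maps | sort -u
```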
Shawn,
On 7/4/22 13:31, Shawn Heisey wrote:
On 7/4/22 03:01, Mike wrote:
My Solr index size is around 500GB and I have 64GB of RAM. Solr eats up all
the memory and because of that PHP works very, very slowly. What can I do?
Solr is a Java program. A Java program will never directly use more
memory than you specify for the max heap size.
...anymore, you can then turn your eyes to hardware.

Deepak
"The greatness of a nation can be judged by the way its animals are treated"
- Mahatma Gandhi
"Plant a Tree, Go Green"
Make In India : http://www.makeinindia.com/home

On Tue, Jul 5, 2022 at 1:01 AM Dave wrote:
> Also for $115 I can
..., which helps a lot. It comes to a point where money on hardware will
outweigh money on engineering man power hours, and still come to the same
conclusion. As much ram as your rack can take and as big an
You're still going to need to spec it out and configure it to your needs, but
at least your client code will keep running.

Thomas

On Mon, Jul 4, 2022 at 11:01, Mike wrote:
> Hello!
>
> My Solr index size is around 500GB and I have 64GB of RAM. Solr eats up all
> the memory and because of that PHP works very, very slowly. What can I do?
>
> Thanks
>
> Mike
Hello!
My Solr index size is around 500GB and I have 64GB of RAM. Solr eats up all
the memory and because of that PHP works very, very slowly. What can I do?
Thanks
Mike