Hi all
On 2022-01-17 5:16 pm, dc...@prosentient.com.au wrote:
Great to see another Australian library using Koha!
Yes. We are pleased to be able to use such nice NZ Open Source software.
It looks like you're running Debian Buster (based on your Apache
response), so I'm guessing you used the Debian packages to install
Koha? I thought that new instances created this way used Plack out of
the box, but I think I might be mistaken...
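(You can check this from the response headers yourself, for example:

    # The Server header usually reveals the Apache and Debian versions
    curl -sI https://opac.caves.org.au/ | grep -i '^server'

though some setups hide the details with ServerTokens.)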
Initial install was Debian packages onto Stretch in 2019, then an
upgrade from Stretch to Buster in September 2021, so that's probably why
Plack is not installed.
If you enable Plack/Starman, page loads and search times will be
faster, because the application code is run in persistent processes
that use pre-loading and caching of code. However, that comes at a
cost. In practice, I find each Plack/Starman process reserves about
200-500 MB of RAM over its lifetime (default 50 requests before it's
recycled if I recall correctly), and the default Koha configuration
uses 2 Plack/Starman processes. Depending on expected usage, you may
want more/fewer Plack/Starman processes. (Note that those processes
are shared across the Staff Interface and the OPAC.) Overall, enabling
Plack is the best way to improve Koha performance.
(Another thing to note: startup for Plack/Starman processes is
CPU-heavy, as a lot of work goes into setting up and verifying the REST
API.)
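For reference, on a Debian-package install Plack is toggled per
instance with the koha-plack helper; "library" below is a placeholder
for your instance name:

    # Apache modules that Plack's reverse proxy setup relies on
    sudo a2enmod headers proxy_http
    # Enable and start Plack/Starman for the instance, then reload Apache
    sudo koha-plack --enable library
    sudo koha-plack --start library
    sudo systemctl restart apache2

Depending on your Koha version, the worker count and request limit may
be tunable via <plack_workers> and <plack_max_requests> in the
instance's koha-conf.xml; check the wiki page for your version, as these
knobs have moved around.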
Overall, I'd say the more CPUs the better, especially if you're
running your database on the same server.
Yes, running on just the one server.
Excellent info above, so useful to know this.
From that, perhaps it's better to double what I have now to 2 CPUs and
4 GB RAM, and then look at adding in Plack for better performance again,
rather than adding in Plack now and doubling CPU and RAM later.
I think Memcached might technically be optional, but you'll want to
keep it for performance. Without it, performance would certainly
degrade.
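If you want to double-check that it's actually wired up, something like
the following should do (instance name is again a placeholder):

    # Is memcached running, and is this instance pointed at it?
    sudo systemctl status memcached
    grep -i memcached /etc/koha/sites/library/koha-conf.xml
    # Basic liveness/stats check against the default port
    printf 'stats\nquit\n' | nc 127.0.0.1 11211 | head

The staff interface's About page should also report whether Memcached
is in use.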
OK.
In terms of metrics, you can use third-party log analyzers (for the
Apache logs) and web analytics (e.g. Google Analytics) to get an
overview of how the application is performing overall. For more
targeted analysis, I tend to just use the Developer Tools in the
browser. For instance, I see your OPAC is taking about 4 seconds to
serve the homepage. I tried one of the instances I manage (on a
high-specced server using Plack) and it took 333ms. There are other
tools you can use for profiling the application, but that's something
you might want to explore with the "koha-devel" list instead.
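A quick way to get comparable numbers from the command line is curl's
write-out timing variables, for example:

    # Total time and time-to-first-byte for the OPAC homepage
    curl -o /dev/null -s \
         -w 'total: %{time_total}s  ttfb: %{time_starttransfer}s\n' \
         https://opac.caves.org.au/

Run it a few times and compare before/after enabling Plack; the first
hit after a restart will always be the slowest.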
Ah! I had forgotten about using developer tools. I have those in my
browser, and yes, I see there is a Performance tab which shows load
times for all the components.
More reading:
https://lists.katipo.co.nz/public/koha/2020-December/055598.html
This said: "I'd say minimum 4 CPU between all the various different Koha
subcomponents for any serious usage"
For us, I can go to 2 CPUs for now, but 4 is a bit too pricey. We can
probably go to 4 later on, once we can show how much the library is
being used :-)
I haven't used Linode so I can't really comment on its features, but
other platforms like AWS make it very easy to switch between different
CPU and RAM sizes.
Pretty easy. I use AWS also. Going up in a plan is easy, as the disk is
expanded, but going down can be a problem if your disk usage no longer
fits in the smaller plan.
I'd say your first step is to enable Plack (on your test instance),
and just manually compare similar web requests (like loading the home
page, performing the same catalogue search). You should already have
the RAM to handle it. I could see you getting into strife only having
1 CPU, as that 1 CPU would have a lot of work to do, but that's where
the log analysis / web analytics can help you see real-life usage.
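Even a one-liner over the Apache logs gives a rough feel for real
usage (the path below assumes the standard package layout; adjust for
your instance):

    # Requests per hour from the OPAC access log, busiest hours first
    awk '{print substr($4, 2, 14)}' /var/log/koha/library/opac-access.log \
        | sort | uniq -c | sort -rn | head

Tools like GoAccess can produce a fuller report from the same logs.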
First step tonight will be to read up on configuring and turning on
Plack for the test instance. Then I'll toss up doubling CPU/RAM before
further tests.
I hope that helps. Feel free to send more questions.
Thanks. Much appreciated.
-----Original Message-----
Date: Fri, 14 Jan 2022 16:04:11 +1100
From: Mike Lake <mi...@speleonics.com.au>
To: Koha <koha@lists.katipo.co.nz>
Subject: [Koha] How to measure and improve Koha performance for our
instance?
Message-ID: <daad1c0c667cacb91cb604a2e7797...@speleonics.com.au>
Content-Type: text/plain; charset=US-ASCII; format=flowed
Hi all
We have a production library setup at https://opac.caves.org.au and
it's now ready for users to use. We also have a test instance that can
be used for testing. I'm wanting to know the options for measuring and
improving performance - just the basic load and search times that users
perceive.
Currently we are running on a Linode instance with 1 CPU and 2 GB RAM.
Yes, it's a small library :-) Linode Longview shows we have not hit
more than 7% CPU usage; it usually sits at 1% CPU and 670 MB of memory
when idle. During a query, CPU goes to 100% for a short period and
memory to ~1 GB.
I see in htop that several memcached threads are running. Is that now
installed automatically with Koha? The wiki seems to suggest that it's
optional. The wiki also mentions Plack. Should we be running Plack?
I can double our Linode instance to 2 CPUs and 4 GB RAM, but that's
possibly an irreversible change (and double the $), so I wish to leave
that until I know it would make a significant difference. And how could
I quantify that performance increase?
Thanks
--
Mike
_______________________________________________
Koha mailing list http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha