Hi,

I'm evaluating OTRS's performance for a fairly large-scale implementation, and
I'm wondering whether anyone has run into steep performance bottlenecks after
crossing some threshold for a metric such as total number of tickets, database
size, attachment size, etc. If you have, please share your experience.

I ask because I need to make a commitment to a potential customer about how
long their implementation will keep performing well, expressed in metrics that
can be translated into time intervals (something like: if you create 500 new
tickets per month and keep about 30% of them open, this implementation will
very likely scale for around 3 years).

Looking around the docs and the Internet, I keep running into the
recommendation to switch the "backend module for the ticket index" from
RuntimeDB to StaticDB once the database holds roughly 60,000 tickets or 6,000
open tickets (http://doc.otrs.org/3.1/en/html/performance-tuning.html).
However, there is no technical explanation of how these numbers came about. Do
they depend on hardware, or are they thresholds beyond which performance stops
scaling no matter how much hardware you throw at it?
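For what it's worth, my understanding from that page is that the switch itself
is just a config change plus an index rebuild, roughly like this (paths and
names as I understand them for 3.1, so please correct me if I'm wrong):

    # Kernel/Config.pm - use the StaticDB ticket index backend
    $Self->{'Ticket::IndexModule'}
        = 'Kernel::System::Ticket::IndexAccelerator::StaticDB';

    # then rebuild the ticket index from the shell
    shell> bin/otrs.RebuildTicketIndex.pl

So the switch itself doesn't look expensive; what I'm really after is the
reasoning behind the 60,000 / 6,000 figures.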

Another possible issue is database size. Here I found no clear-cut numbers, but
various mailing list posts led me to believe that, with adequate hardware,
performance scales for databases of up to about 200 GB.

My case involves a relatively light workload at first (~100 tickets/day), but
the tickets will carry large photo attachments, so I may have to move
attachment storage out of the database sooner than I expect.
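In case it matters for the answer, the plan I had in mind for offloading
attachments is the usual switch of the article storage module to the
filesystem backend, roughly (the exact script name and flags are how I read
the docs, and I'm not certain they apply to 3.1 as written):

    # Kernel/Config.pm - store article attachments on the filesystem
    # instead of in the database
    $Self->{'Ticket::StorageModule'}
        = 'Kernel::System::Ticket::ArticleStorageFS';

    # optionally migrate existing attachments out of the DB
    shell> bin/otrs.ArticleStorageSwitch.pl -s ArticleStorageDB -d ArticleStorageFS

If anyone has numbers on how much of the database-size problem this actually
takes away in practice, that would help a lot.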

Thanks,
Bogdan