Geoff Steckel <[EMAIL PROTECTED]> writes:

> Any argument to experience must be from similar actual implementations
> using "threads" and another model, such as multiple processes with
> interprocess communications.

Sure. I'll pick up the challenge.

At work we have a server that uses around 4GB of RAM and runs on a
4-CPU machine. It serves millions of TCP connections per hour.
Sharing the memory without sharing pointer values is too inefficient,
since a large part of the memory is a precomputed cache of the most
common query results. The service needs 4x4GB of RAM on the machine
to reload the data efficiently without hitting disk; hitting disk
kills performance at critical moments and leads to inconsistencies
between the four machines that run identical instances of this
service.
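
For flavor, here is a minimal sketch of the shape of it (not our
actual code; the struct, the sizes, and the names are all made up).
The point is that every worker thread dereferences the same pointers
into one shared cache, so precomputed results can be handed out
without copying or translating addresses:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define NWORKERS 4
#define NENTRIES 1024

/* hypothetical cache entry; the real thing is more involved */
struct entry {
    char key[64];
    char result[256];           /* precomputed query result */
};

/* one cache, shared by every thread in the process */
static struct entry cache[NENTRIES];

static const char *
lookup(const char *key)
{
    int i;

    for (i = 0; i < NENTRIES; i++)
        if (strcmp(cache[i].key, key) == 0)
            return (cache[i].result);   /* pointer into shared memory */
    return (NULL);
}

static void *
worker(void *arg)
{
    const char *r;

    /* a real worker would accept() connections and parse queries */
    r = lookup("some-common-query");
    if (r != NULL)
        printf("thread %ld sees result at %p\n", (long)arg, (void *)r);
    return (NULL);
}

int
main(void)
{
    pthread_t t[NWORKERS];
    long i;

    strncpy(cache[0].key, "some-common-query", sizeof(cache[0].key) - 1);
    strncpy(cache[0].result, "42", sizeof(cache[0].result) - 1);

    for (i = 0; i < NWORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (i = 0; i < NWORKERS; i++)
        pthread_join(t[i], NULL);
    return (0);
}

Every thread prints the same address. With fork you would instead
need the cache in a shared mapping built from offsets rather than
pointers, which is exactly the inefficiency described above.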

Therefore:

 - fork would not work because the cache would not be shared, which
   would lead to too high a cache miss ratio (see the fork sketch
   after this list).

 - adding more RAM won't work because it would eat up rack real estate
   and power and cooling budget that we don't have.

 - adding more machines won't solve the problem, for the same reasons
   as adding more RAM.

 - reducing the data set will not work because we kinda like to make
   lots of money, not just a little money.

 - partitioning the data does not work well because it costs too much
   in performance and memory consumption.
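
To make the fork point concrete, here is a toy demonstration (again,
not our code): after fork() the cache pages are copy-on-write, so a
refresh done in one process never becomes visible in the other.

#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>
#include <unistd.h>

/* stands in for one slot of the precomputed cache */
static int cache_entry = 1;

int
main(void)
{
    pid_t pid;

    pid = fork();
    if (pid == -1) {
        perror("fork");
        return (1);
    }
    if (pid == 0) {
        /* "refresh" the entry in the child */
        cache_entry = 2;
        printf("child:  cache_entry = %d\n", cache_entry);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    /* the write above hit a copied page; the parent never sees it */
    printf("parent: cache_entry = %d\n", cache_entry);
    return (0);
}

The parent prints the old value. A refresh done by one thread, by
contrast, is immediately visible to every other worker.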

What works is threads. We've had one thread-related bug in the past
year.

//art
