Chris Knipe <sav...@savage.za.org> writes:

> This is an actual example of a communication stream between a client
> and the perl server, the socket is already established, and
> communication is flowing between the two parties.  [C] indicates what
> the client is sending to the server, and [S] indicates the responses
> the server sends to the client.  My comments are prefixed with # and
> thus do not form part of the communication stream.
>
>       [C] TAKETHIS <i.am.an.article.you.will.w...@example.com>
>       [C] Path: pathost!demo!somewhere!not-for-mail
>       [C] From: "Demo User" <nob...@example.com>
>       [C] Newsgroups: misc.test
>       [C] Subject: I am just a test article
>       [C] Date: 6 Oct 1998 04:38:40 -0500
>       [C] Organization: An Example Com, San Jose, CA
>       [C] Message-ID: <i.am.an.article.you.will.w...@example.com>
>       [C]
>       [C] This is just a test article.
>       [C] .
> # . indicates the end of the article.  Perl now starts to do work,
> processing the article it received. As this process takes time, the
> main while (1) { loop reading from the socket is now blocked, causing
> the server not to read any more from the socket until after the
> article has been dealt with.
>       [C] TAKETHIS <i.am.an.article.you.h...@example.com>
>       [C] Path: pathost!demo!somewhere!not-for-mail
>       [C] From: "Demo User" <nob...@example.com>
>       [C] Newsgroups: misc.test
>       [C] Subject: I am just a test article
>       [C] Date: 6 Oct 1998 04:38:40 -0500
>       [C] Organization: An Example Com, San Jose, CA
>       [C] Message-ID: <i.am.an.article.you.h...@example.com>
>       [C]
>       [C] This is just a test article.
>       [C] .
> # Perl server only NOW responds with an acceptance code for the first article.
>       [S] 239 <i.am.an.article.you.will.w...@example.com>
>
> We have effectively now COMPLETELY missed the second article in our
> while (1) loop, as perl did not read from the socket WHILST processing
> the article.


1.) What you are trying to achieve requires that the server side is
    capable of fully processing every connection within a limited amount
    of time.  This amount of time must be small enough that you never
    have to queue anything up, unless the workload is unsteady, in which
    case you can work off a queue during times of lower load.

    You basically have a bucket here, with an unlimited amount of garden
    hoses filling water into the bucket.  The bucket has a drain to let
    out the water.  That drain must always be large enough to let so
    much water run out that the bucket never gets full.  The level of
    water in the bucket may change over time, and it must never flow
    over.

    Unless you can guarantee that the drain is always large enough, you
    *must* limit the amount of water flowing into the bucket --- like by
    limiting the number of concurrent connections.
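That limit is easy to enforce with a counting semaphore.  A minimal
sketch (in Python for brevity; your Perl server could do the same with
threads and a shared counter, and the cap of 4 is just an illustrative
number):

```python
import threading

MAX_CONNECTIONS = 4  # illustrative cap; size it to what your "drain" can handle
slots = threading.BoundedSemaphore(MAX_CONNECTIONS)

def try_accept():
    """Admit a connection only if a slot is free; never block the acceptor."""
    return slots.acquire(blocking=False)

def release_slot():
    """Call when a connection closes, freeing its slot."""
    slots.release()

# Simulate six garden hoses trying to pour into a four-slot bucket.
admitted = [try_accept() for _ in range(6)]
print(admitted.count(True))   # 4 admitted
print(admitted.count(False))  # 2 refused
```

Connections refused here would simply get a "try again later" response
instead of filling the bucket past its rim.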

2.) Number 1.), limiting the amount of incoming data, kinda already
    solves the problem.

3.) With the problem solved, you can look into ways of increasing the
    performance.  Having one thread queuing up the incoming data and
    another thread process it may be a worthwhile approach.  It could
    help with 1.) because you could simply look at the size of the queue
    and deny all new connections when the queue has reached a given
    size.

    That allows you to have one queue-thread per incoming connection,
    with a given maximum number N of queue-threads.  That number N
    already sets a reasonable measure for the number of spool-threads
    processing the queue because you will never have more than N
    connections with clients waiting for an answer simultaneously.

    You can have another thread that monitors the queue size and decides
    whether new connections are to be accepted or not.

    This approach might scale well because you can distribute the
    threads across several servers until you get the required
    performance.
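The split between queue-threads and spool-threads can be sketched as a
bounded producer/consumer setup.  Again this is Python for brevity
(Thread::Queue gives you the same primitives in Perl), and the queue
bound and the `upper()` stand-in for article processing are purely
illustrative:

```python
import queue
import threading

QUEUE_MAX = 100            # illustrative bound; deny new work beyond this
spool = queue.Queue(maxsize=QUEUE_MAX)
results = []

def receiver(article):
    """Queue-thread side: enqueue a complete article, or report 'deny'."""
    try:
        spool.put_nowait(article)
        return True                 # accepted; client may now await its reply
    except queue.Full:
        return False                # queue has reached its limit: refuse

def spooler():
    """Spool-thread side: drain the queue and process each article."""
    while True:
        article = spool.get()
        if article is None:         # sentinel: shut down cleanly
            break
        results.append(article.upper())   # stand-in for real processing
        spool.task_done()

worker = threading.Thread(target=spooler)
worker.start()
for a in ("article-1", "article-2"):
    receiver(a)
spool.put(None)
worker.join()
print(results)
```

The monitoring thread from above would just watch `spool.qsize()` and
flip the accept/deny decision for new connections.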

4.) You probably have an issue with the design of the protocol.  No
    matter how many threads you use, it always takes time to set up
    another thread to queue up the incoming data.  Your clients *must*
    wait for an acknowledgement that the data they want to send can now
    be accepted.  Otherwise you may miss data during the time it takes
    to get ready to receive it.  Alternatively, the clients *must* time
    out after a while when they do not receive an acknowledgement that
    their data has been accepted, in which case they *must* act
    accordingly on their side, like trying again.
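The timeout-and-retry discipline on the client side could look roughly
like this (a Python sketch; the function name, attempt count, and the
in-process queue standing in for the network are all hypothetical):

```python
import queue

def send_with_retry(ack_queue, article, attempts=3, timeout=0.2):
    """Send an article, wait a bounded time for its ack, retry on timeout."""
    for attempt in range(attempts):
        # the actual transmit of `article` would happen here in a real client
        try:
            ack = ack_queue.get(timeout=timeout)
            if ack == article:        # server acknowledged this article
                return attempt + 1    # number of attempts it took
        except queue.Empty:
            continue                  # no ack in time: try again
    raise RuntimeError("article never acknowledged; spool it locally")

# Simulate a server whose ack arrives before the first wait expires.
acks = queue.Queue()
acks.put("msg-1")
print(send_with_retry(acks, "msg-1"))  # 1
```

The key point is the `raise` at the end: when no acknowledgement ever
arrives, the client must do something deliberate with the article
rather than silently assume it was received.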


I'm not sure if this makes sense to you; it's only how I would probably
go about this problem.  If you cannot solve number 4.), I don't see any
possible solution to the problem other than not handling more than one
connection at a time.  However, even when handling only one connection
at a time, it is bad design when clients just send data before being
told to do so, and/or when they don't act reasonably when they don't
receive an acknowledgement that their data has been received.
Inevitably, such design will result in data being lost, no matter what
you do.


-- 
Knowledge is volatile and fluid.  Software is power.
