Hello Francois,

I have already thought about load balancing for the concentrator model ;-)

Each concentrator will keep a realtime list of the other concentrators and
will be able to answer "sorry, I am full, please try connecting to x.x.x.x",
or send back a list of concentrators so the client can try them one after
the other. Clients may also be redirected depending on their geographic
position.
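
For example, the client side could handle the redirect answer like this
(just a rough Delphi/Free Pascal console sketch; the "FULL host:port" and
"LIST host1,host2,..." wire format is only an example I made up, not the
final protocol):

program RedirectSketch;
{$APPTYPE CONSOLE}
{ Rough sketch of how the client could react to a "redirect" answer from a
  full concentrator. The wire format is an example only. }
uses
  SysUtils, Classes;

{ Parse the concentrator's answer into a list of fallback addresses. }
procedure ParseRedirect(const Answer: string; Fallbacks: TStrings);
var
  Cmd, Rest: string;
  P: Integer;
begin
  Fallbacks.Clear;
  P := Pos(' ', Answer);
  if P = 0 then
    Exit;
  Cmd  := Copy(Answer, 1, P - 1);
  Rest := Copy(Answer, P + 1, MaxInt);
  if Cmd = 'FULL' then
    Fallbacks.Add(Rest)              { single alternative given by the server }
  else if Cmd = 'LIST' then
    Fallbacks.CommaText := Rest;     { client tries them one after the other }
end;

var
  Fallbacks: TStringList;
  I: Integer;
begin
  Fallbacks := TStringList.Create;
  try
    ParseRedirect('LIST 10.0.0.2:6000,10.0.0.3:6000', Fallbacks);
    for I := 0 to Fallbacks.Count - 1 do
      WriteLn('would try to connect to ', Fallbacks[I]);
  finally
    Fallbacks.Free;
  end;
end.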

As the server should run 24/7, doing a mix with load-balanced servers is a
good idea. But if the server is down, the concentrator will notify the
client, which will display a "broken" icon in the tray. If the user then
wants to send data he will get a "please try later" message (or maybe a
pending send queue on the concentrator side). A maintenance message will
also be implemented so the client knows why the server is down (if known)
and when (if told) it should try to reconnect.
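
The maintenance notification could be a simple line the client parses,
something like this rough sketch (the "DOWN reason=... retry=..." format is
only an assumption, not the final protocol):

program MaintenanceSketch;
{$APPTYPE CONSOLE}
{ Sketch of the maintenance notification the concentrator could push to the
  client when the main server is down. Wire format is an assumption. }
uses
  SysUtils, Classes;

var
  Fields: TStringList;
  RetrySecs: Integer;
begin
  Fields := TStringList.Create;
  try
    Fields.Delimiter     := ' ';
    Fields.DelimitedText := 'DOWN reason=maintenance retry=600';
    if (Fields.Count > 0) and (Fields[0] = 'DOWN') then
    begin
      WriteLn('show broken icon, reason: ', Fields.Values['reason']);
      RetrySecs := StrToIntDef(Fields.Values['retry'], 60);
      WriteLn('will try to reconnect in ', RetrySecs, ' seconds');
      { real client: start a timer, queue outgoing data or show "please try later" }
    end;
  finally
    Fields.Free;
  end;
end.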

I will try to keep the concentrator as simple as possible to make it strong,
and to avoid memory leaks or memory fragmentation as much as possible.

The concentrators and the server will also have an internal leak detector
that checks their own memory usage and open handles, and detects when they
grow too high (and maybe restarts the process automatically until I find
the bug).
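
Here is a rough sketch of what that self-check could look like on Windows,
using GetProcessMemoryInfo (psapi.dll) and GetProcessHandleCount (kernel32,
XP SP1 or later); the thresholds are just example values:

program LeakWatchSketch;
{$APPTYPE CONSOLE}
{ Rough sketch of the internal leak detector: the process polls its own
  working set and handle count and flags a restart when they grow too high. }
uses
  Windows, SysUtils;

type
  TProcMemCounters = record
    cb: DWORD;
    PageFaultCount: DWORD;
    PeakWorkingSetSize: DWORD;
    WorkingSetSize: DWORD;
    QuotaPeakPagedPoolUsage: DWORD;
    QuotaPagedPoolUsage: DWORD;
    QuotaPeakNonPagedPoolUsage: DWORD;
    QuotaNonPagedPoolUsage: DWORD;
    PagefileUsage: DWORD;
    PeakPagefileUsage: DWORD;
  end;

function GetProcessMemoryInfo(hProcess: THandle;
  var Counters: TProcMemCounters; cb: DWORD): BOOL; stdcall;
  external 'psapi.dll';
function GetProcessHandleCount(hProcess: THandle;
  var HandleCount: DWORD): BOOL; stdcall;
  external 'kernel32.dll';

const
  MaxWorkingSet = 200 * 1024 * 1024;  { 200 MB, example threshold }
  MaxHandles    = 20000;              { example threshold }

var
  Mem: TProcMemCounters;
  Handles: DWORD;
begin
  FillChar(Mem, SizeOf(Mem), 0);
  Mem.cb  := SizeOf(Mem);
  Handles := 0;
  if GetProcessMemoryInfo(GetCurrentProcess, Mem, SizeOf(Mem)) then
    WriteLn('working set: ', Mem.WorkingSetSize div 1024, ' KB');
  if GetProcessHandleCount(GetCurrentProcess, Handles) then
    WriteLn('open handles: ', Handles);
  if (Mem.WorkingSetSize > MaxWorkingSet) or (Handles > MaxHandles) then
    WriteLn('looks like a leak - log it and schedule a restart');
end.

In the real concentrator this would run on a timer and write to a log
before deciding to restart.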

Regards.


FP> First solution seems better to me. You can also have a mix of the two
FP> solutions, for example 8 concentrators with 2000 users each and 2
FP> servers. This will allow easy fault tolerance. The concentrators would
FP> connect randomly to one of the two servers and, if one is down, connect
FP> to the other. The client application can apply the same fault tolerance
FP> or load balancing scheme: each client would have a preferred
FP> concentrator and one or more fallback concentrators. Concentrators
FP> could also communicate with each other (maybe through the servers) to
FP> redirect a client to another, less loaded concentrator. All that is not
FP> very difficult to implement.

FP> --
FP> [EMAIL PROTECTED]
FP> Author of ICS (Internet Component Suite, freeware)
FP> Author of MidWare (Multi-tier framework, freeware)
FP> http://www.overbyte.be


FP> ----- Original Message ----- 
FP> From: "Dod" <[EMAIL PROTECTED]>
FP> To: "ICS support mailing" <twsocket@elists.org>
FP> Sent: Tuesday, January 03, 2006 10:41 AM
FP> Subject: [twsocket] OT rather big project question


>> Hello,
>>
>> I am starting to rewrite one of my ICS-based servers so it can handle
>> 15,000 permanent connections smoothly. The data the clients will
>> send/receive is only 2 KB packets from time to time.
>>
>> As the Windows OS is limited in sockets and handles (even though you can
>> make some tricky changes in the registry), I have decided to split the
>> connections.
>>
>> Now I am wondering which solution would be best:
>>
>> - Multiple concentrators that each accept between 2000 and 5000
>> connections; each concentrator (a quite simple, small application) makes
>> only one connection to the main server, which uses an SQL database
>> (MySQL for example). If the main server is down, users stay connected to
>> the concentrator, which will notify them and wait until the server is up
>> again without disconnecting them.
>>
>> - Multiple servers accepting 2000 to 5000 connections each, with each
>> server connecting to the SQL database. During server maintenance, it
>> will drop all its users and the client applications will try to
>> reconnect until the server is up again.
>>
>> Both solutions allow splitting the data traffic across a switched
>> network, so the general bandwidth load will be lower.
>>
>> I think I will use MySQL, which should be enough; it will only have to
>> manage 1 to 10 connections at the same time, depending on whether I
>> choose the first or the second solution.
>>
>> The first solution may permit smoother server updates, as the client
>> part always stays connected to the concentrator and it is the
>> concentrator that keeps the connection to the main server.
>>
>> Do you have any ideas/advice?
>>
>> Regards.
>>

-- 
To unsubscribe or change your settings for TWSocket mailing list
please goto http://www.elists.org/mailman/listinfo/twsocket
Visit our website at http://www.overbyte.be
