Lou Kamenov wrote: 
kinda 
if(conn1 == fails){ tellus; conn2; if(conn2 == fails) { tellus; return err; }
}
of course with each connect N it'll try to connect to db1 before falling back
to db2;
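
In other words, something roughly like this with the C MySQL API (the host
names, credentials and database name are placeholders, and the error printing
stands in for "tellus"):

/* Rough sketch of the quoted fallback; placeholders only. */
#include <stdio.h>
#include <mysql.h>

static MYSQL *try_host(const char *host)
{
    MYSQL *conn = mysql_init(NULL);
    if (conn == NULL)
        return NULL;                              /* out of memory */

    if (mysql_real_connect(conn, host, "user", "password",
                           "mydb", 0, NULL, 0) == NULL) {
        fprintf(stderr, "%s: %s\n", host, mysql_error(conn));  /* "tellus" */
        mysql_close(conn);
        return NULL;
    }
    return conn;
}

static MYSQL *connect_with_fallback(void)
{
    MYSQL *conn = try_host("db1.example.com");    /* primary first */
    if (conn == NULL)
        conn = try_host("db2.example.com");       /* then the fallback */
    return conn;                                  /* NULL == "return err" */
}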


Imagine the following scenario: the first db just goes down and stays like
that for a long period of time (reasons: the machine is shut down, it runs out
of memory, or there is something wrong with the DNS resolution). The
corresponding errors from the C MySQL API would be:

`CR_CONN_HOST_ERROR' 
   Failed to connect to the *MySQL* server. 

`CR_OUT_OF_MEMORY' 
   Out of memory.

`CR_UNKNOWN_HOST'
   Failed to find the IP address for the hostname. 

In these cases (except perhaps for the out-of-memory error) the actual
downtime *could* be much longer, and every new connection would first be
attempted against db1 (which will be down) *before* switching to db2. In my
opinion this could slow down the process of connecting. What I suggest is an
algorithm that automatically switches to db2 once a certain number of failed
attempts have been made to db1: that is, we can keep a table mapping the
number of failed attempts to db1 to the number of subsequent connections that
should go straight to db2 before db1 is tried again, with that number
increasing roughly quadratically. As soon as an attempt to db1 succeeds, the
whole procedure is reset and the process starts from the very beginning. I am
not sure this is a crucial change, but I think it could significantly reduce
the response time in case of a serious downtime.
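
To make the idea a bit more concrete, here is a rough sketch of the
bookkeeping I have in mind; the quadratic k*k schedule and all the names are
only illustrative, not part of any existing API:

/* Illustrative only: one possible shape of the suggested back-off.
 * After the k-th consecutive failure against db1, the next k*k
 * connections go straight to db2; a successful attempt to db1
 * resets everything. */
struct failover_state {
    unsigned failures;   /* consecutive failed attempts against db1     */
    unsigned skip_left;  /* attempts still to be routed directly to db2 */
};

/* Should this connection attempt bother with db1 at all? */
static int should_try_db1(struct failover_state *s)
{
    if (s->skip_left > 0) {
        s->skip_left--;
        return 0;                        /* skip db1, use db2 directly */
    }
    return 1;
}

/* Record how the attempt against db1 went. */
static void record_db1_result(struct failover_state *s, int connected)
{
    if (connected) {
        s->failures = 0;                 /* reset the whole procedure */
        s->skip_left = 0;
    } else {
        s->failures++;
        s->skip_left = s->failures * s->failures;   /* 1, 4, 9, ... */
    }
}

With this, after one, two or three consecutive failures db1 would be skipped
for the next 1, 4 and 9 connections respectively, which bounds the number of
slow, doomed attempts during a long outage while still probing db1 from time
to time.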

Angel 
