On Mon, Aug 22, 2005 at 09:57:44AM -0400, [EMAIL PROTECTED] wrote:
> 
> You are correct in your analysis that you would get better performance if 
> you ran the queries asynchronously. Just remember that each query thread 
> will need its *own* connection. Even if you are querying two databases on 
> the same server, each connection can only handle one 
> query/action/statement at a time. It is "VERY UNWISE" to share connections 
> between threads unless you ensure (by some application level 
> synchronization like a mutex) that only one thread at a time is trying to 
> use any one connection.

 No, I don't intend to share connections between threads. As far as I can see, 
the straightforward method is to use separate connections: each thread fetches 
its data into its own array, and once all of them have finished, everything is 
simply sorted together. The design is actually very simple.
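
 Just to make that concrete, here is roughly what I have in mind - a minimal 
sketch in Python, assuming the mysql-connector-python driver; the host names, 
credentials, table and the 'price' sort key are only placeholders:

# Minimal sketch: one connection per thread, fan out the same SELECT to
# several servers, collect the partial results, then sort the combined list.
# Assumes the mysql-connector-python package; hosts, credentials and the
# query are placeholders for illustration only.
from concurrent.futures import ThreadPoolExecutor

import mysql.connector

SERVERS = [
    {"host": "db1.example.com", "user": "app", "password": "secret", "database": "shop"},
    {"host": "db2.example.com", "user": "app", "password": "secret", "database": "shop"},
]

QUERY = "SELECT id, name, price FROM products WHERE name LIKE %s"


def query_one_server(params, pattern):
    # Each worker opens its *own* connection, so no two threads ever
    # share one (as advised above).
    conn = mysql.connector.connect(**params)
    try:
        cur = conn.cursor()
        cur.execute(QUERY, (pattern,))
        return cur.fetchall()          # list of row tuples from this server
    finally:
        conn.close()


def distributed_select(pattern):
    with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
        partial = pool.map(lambda p: query_one_server(p, pattern), SERVERS)
    rows = [row for chunk in partial for row in chunk]   # flatten
    rows.sort(key=lambda r: r[2])                        # e.g. order by price
    return rows


if __name__ == "__main__":
    for row in distributed_select("%widget%"):
        print(row)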

 
> 
> Are there clients with this kind of asynchronous query capacity? Probably, 
> but they have been specially built just for the application they support. 
> You are probably going to need to create your own client in order to 
> collect and correlate your separate queries. There are several connection 
> libraries available, use whichever fits into your programming model.
> 
 
 But to me this looks like what a distributed database architecture should be 
in its entirety. I mean, only the searches need to be done in such a manner. Once 
the search is done, the writing - and even the detailed query of each row - 
can be handled by upper-level logic. But the search for the matching rows needs 
to be done at a lower level - and it has to be fast. So all you need for a 
distributed database architecture is to run 'selects' simultaneously on 
multiple servers (as in the sketch above) and leave the rest of the logic to the 
higher-level code.

 So I am surprised that no one has done this before. I am very new to databases 
- I only started getting into the intricacies about a week ago - so at present 
I am confused as to why there isn't such a solution. I had created a database 
abstraction layer that mapped rows directly into classes named after their 
tables, and it even handled serialized values. It works, since I always limit 
the database query to around 10-70 results, so all the inefficient abstraction 
is carried out on at most 70 objects. That is why I need another layer at the 
bottom: my top layer is too inefficient to do multiple selects.
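
 Roughly, the mapping I mean looks like this - a rough Python sketch; the 
'products' table, its columns, and the JSON-serialized 'attributes' column are 
just placeholders for illustration:

# Rough sketch of the abstraction layer described above: each row becomes
# an instance of a class named after its table, and any column holding a
# serialized value is deserialized on load.  Table/column names are
# placeholders.
import json


class Product:
    """One instance per row of the (hypothetical) 'products' table."""

    SERIALIZED_COLUMNS = {"attributes"}   # columns stored as JSON text

    def __init__(self, row):
        for column, value in row.items():
            if column in self.SERIALIZED_COLUMNS and value is not None:
                value = json.loads(value)
            setattr(self, column, value)


def load_products(cursor, pattern):
    # Keep the result set small (10-70 rows), so the per-object overhead
    # of this mapping stays negligible.
    cursor.execute(
        "SELECT id, name, price, attributes FROM products "
        "WHERE name LIKE %s LIMIT 70",
        (pattern,),
    )
    columns = [desc[0] for desc in cursor.description]
    return [Product(dict(zip(columns, row))) for row in cursor.fetchall()]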

 Anyway, thanks for your response. In effect, it means I have to write my own 
client. But as I said, I have only been into SQL for around a week, so I would 
appreciate it if you could give me some guidance on which client libraries 
exist for MySQL and which one would be best suited for this purpose. Or maybe 
I can use SQLite (the file-based DB) and replace its disk-reading code with 
'server-querying' code, so that I don't have to do the sorting etc. on my own.

 If I have to write it on my own, I think that is the best method: replace the 
file-reading code of SQLite with the multi-server query from a suitable client 
library.

 Thanks.


