Hi,

I am currently having trouble getting my new MySQL 5 servers running as slaves of my old 4.1.13 master. It looks like I will have to dump the whole 30 GB database and import it on the new servers :( At the moment I do not see any opportunity to do this before the weekend, since the longest I can block any of our production systems is 2-3 hours, between midnight and 2 am :(
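
Just to sketch what that dump/reload would look like (the host name, user and password below are placeholders, and the exact options would still need checking against 4.1/5.0):

    # on the 4.1.13 master: dump everything and record the binlog position
    # (--master-data takes a read lock for the duration of the dump)
    mysqldump -u root -p --all-databases --master-data=1 --quick > fulldump.sql

    # on the new 5.0 box: load the dump, then point it at the master
    # (the binlog file/position come from the CHANGE MASTER TO line that
    #  --master-data wrote into the dump)
    mysql -u root -p < fulldump.sql
    mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='old-master',
        MASTER_USER='repl', MASTER_PASSWORD='secret'; START SLAVE;"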

I am still curious whether InnoDB could handle the load of my updates on the heavy-traffic tables, since it is disk-bound and does transactions.
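
If I try it, switching one of the hot tables over would just be something along these lines (the table name is a placeholder, and InnoDB would also need a buffer pool large enough to keep the working set cached):

    -- convert one heavy-update table to InnoDB to get row-level locking
    ALTER TABLE quotes ENGINE=InnoDB;

    -- check that the conversion took effect
    SHOW TABLE STATUS LIKE 'quotes';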

What I would probably need is an in-memory table without any kind of locking - at least no table locks! But there is no such engine in MySQL. If a cluster can handle that (even with the transaction overhead), it would probably be perfect for us, since it even adds high availability in a very easy way...
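
The closest existing options seem to be the MEMORY engine, which keeps data in RAM but only does table-level locking (exactly the problem), and NDB in MySQL Cluster, which keeps data in RAM and locks per row. Roughly (the table definition is only a placeholder):

    -- in RAM, but still table-locked under concurrent writes
    CREATE TABLE quotes_mem (
        symbol     CHAR(12) NOT NULL PRIMARY KEY,
        last_price DECIMAL(12,4),
        updated    TIMESTAMP
    ) ENGINE=MEMORY;

    -- MySQL Cluster: in RAM with row-level locks, but needs running NDB nodes
    CREATE TABLE quotes_ndb (
        symbol     CHAR(12) NOT NULL PRIMARY KEY,
        last_price DECIMAL(12,4),
        updated    TIMESTAMP
    ) ENGINE=NDBCLUSTER;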

Jan

Jan Kirchhoff wrote:
sheeri kritzer wrote:
No problem:

Firstly, how are you measuring your updates on a single table? I took
a few binary logs, grepped for statements that changed the table,
counted the lines (using wc), and then divided by the number of seconds
the binary logs covered. The average for one table was 108 updates
per second (a rough sketch of that pipeline appears below, after this quoted message).

I'm very intrigued as to how you came up with 200-300 updates per second
for one table... did you do it that way? If not, how did you do it?
(We are a VERY heavily trafficked site, with 18,000 people online
and active, and that accounts for the 108 updates per second. So if
you have more traffic than that... wow!)
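
As a rough sketch of that measurement (the binlog file and table names are placeholders):

    # count statements in one binlog that touch the table, then divide by the
    # number of seconds between the first and last timestamp in that log
    mysqlbinlog binlog.000042 | grep -i 'quotes' | wc -l
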
Thanks for your hardware/database information. I will look at that closely tomorrow, since I want to go home for today - it's already 9 pm over here... I need beer ;)

We are not running a web service here (actually we do, too, but that's on other systems). This is part of our database holding data from major stock exchanges worldwide, for which we deliver realtime data. Currently that is around 900,000 quotes, and during trading hours they change all the time... We have far more updates than selects on the main database.

Our application that receives the data stream writes blocks (INSERT ... ON DUPLICATE KEY UPDATE ...) with all records that changed since the last write. It gives me debug output like "[timestamp] Wrote 19427 rows in 6 queries" every 30 seconds - and those are numbers I can rely on.
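
For illustration, one of those write blocks looks roughly like this (table and column names here are just placeholders):

    -- one query carries all rows that changed since the last write;
    -- rows whose key already exists are updated in place
    INSERT INTO quotes (symbol, exchange, last_price, last_update)
    VALUES
        ('AAA', 'XETRA', 12.34, NOW()),
        ('BBB', 'NYSE',  56.78, NOW()),
        ('CCC', 'LSE',    9.01, NOW())
    ON DUPLICATE KEY UPDATE
        last_price  = VALUES(last_price),
        last_update = VALUES(last_update);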

Jan




