> Just my $0.02 but if it's in MySQL then you really don't need to expire
> each one.  You can write a custom script that will do this.  When you
> break it down, expiry is really just finding the tokens that are
> beyond the threshold where id=x and time=y.  The resulting query would
> just be "where time=x".

Right.  Are there any scripts already out there that do this?
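
I imagine it boils down to something like this (untested sketch, assuming
the stock SpamAssassin SQL schema where bayes_token has an integer atime
column and an id keyed to bayes_vars; the user id and the 30-day cutoff
below are just placeholders):

    -- per-user expiry: tokens for one user id not touched in ~30 days
    DELETE FROM bayes_token
     WHERE id = 42
       AND atime < UNIX_TIMESTAMP() - (30 * 86400);

    -- site-wide expiry: drop the id clause and expire on atime alone
    DELETE FROM bayes_token
     WHERE atime < UNIX_TIMESTAMP() - (30 * 86400);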

> But even then, you would only trim it down to a manageable size per
> user.  Our production database for a large number of emails (but used
> site-wide) is about 40MB.

What is your bayes_expiry_max_db_size?  Quite a bit larger than the
default, I take it.
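
For context, that is the local.cf setting that caps the token count
before expiry kicks in (default is 150,000 tokens, if I remember right),
so I am guessing you have something like the line below, where the
number is purely illustrative:

    # local.cf: raise the token cap well past the 150k default
    bayes_expiry_max_db_size  1000000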
 
> Even if you stuck with non-MySQL based databases (such as Berkeley DB)
> you'd still have 160GB of aggregate data files.  If you truly need
> independent DBs for each user (whether file-based or MySQL) I'd
> recommend building a big MySQL cluster and managing it that way.  We
> currently manage a MySQL cluster (with mirrored 300GB drives and DRBD
> replication) that houses a whopping 80MB of MySQL data.

From what I understand, MySQL Cluster's design is such that the data
nodes keep all the table data in memory, which would not be feasible in
a 160GB scenario...
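
If I read the NDB docs right, each data node has to hold its slice of
the tables in DataMemory, so a cluster config would need something on
the order of the sketch below (values made up, just to illustrate the
RAM requirement):

    # config.ini sketch for MySQL Cluster (NDB); hypothetical sizes
    [ndbd default]
    NoOfReplicas=2
    DataMemory=8G
    IndexMemory=1G

With 160GB of token data that would mean a lot of very large-memory
data nodes, which is exactly the problem.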
 
> I don't think this helps you much, just an opinion.

I appreciate it nonetheless!


                