http://www.mysqlperformanceblog.com/

I hope you have seen this blog. They have had a number of posts lately
about indexes, and they always post about optimization in one way or
another (funnily enough).

If you have added indexes that "did not improve much", you may have
defined them wrong for the queries you run. You may also need to
modify your queries so that they can make use of the indexes. When
adding an index, it is good to run an EXPLAIN on the query before and
after to verify that the index is actually being used.
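As a sketch (the table and column names here are invented, not from this thread), the index has to match the columns that the join and WHERE clause actually use:

```sql
-- Before: if the join table has no index matching the lookup column,
-- EXPLAIN will typically show key = NULL and a large rows estimate.
EXPLAIN SELECT Post.*
FROM posts AS Post
INNER JOIN posts_tags AS PostsTag ON PostsTag.post_id = Post.id
WHERE PostsTag.tag_id = 7;

-- Add a composite index covering both the WHERE column and the join column:
ALTER TABLE posts_tags ADD INDEX tag_post_idx (tag_id, post_id);

-- After: re-run the EXPLAIN above; it should now list tag_post_idx
-- under the key column, with a much smaller rows estimate.
```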

300,000 rows for a join table doesn't sound like too much for one
server, unless you really need to scan most of that table for a lot of
queries... like showing a user's friends' friends' friends. I have that
and more in both "normal" tables and join tables, though your app
probably sees more requests than mine (not a public app).
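A "friends of friends of friends" lookup is just the join table joined to itself; sketched here against a hypothetical friends(user_id, friend_id) table:

```sql
-- Each extra level of "friends of" adds one more self-join, so an
-- index on user_id (or a composite on user_id, friend_id) is what
-- keeps this from scanning the whole join table at every level.
SELECT DISTINCT f3.friend_id
FROM friends AS f1
JOIN friends AS f2 ON f2.user_id = f1.friend_id
JOIN friends AS f3 ON f3.user_id = f2.friend_id
WHERE f1.user_id = 42;
```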

My point is that I used to have huge performance problems, and 9 out
of 10 of them needed PHP-side optimizations, like using Containable,
or issuing 3 queries instead of a for loop with Set::extract(). Only
once did I need to cache the data returned from MySQL: when generating
reporting data, identical queries could be issued 200 times during the
same "request".
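A per-request cache for that last case can be as small as a memoizing wrapper. This is only a sketch, not a CakePHP API; the function name and the $db->query() call are assumptions:

```php
<?php
// Sketch of per-request memoization: identical SQL issued repeatedly
// during one request hits the database only once; later calls with
// the same SQL string return the stored result.
function cachedQuery($db, $sql) {
    static $results = array();
    $key = md5($sql);
    if (!array_key_exists($key, $results)) {
        $results[$key] = $db->query($sql);
    }
    return $results[$key];
}
```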



On Sep 21, 11:00 pm, byqsri <marco.rizze...@gmail.com> wrote:
> My table is about 300,000 records with only 5 fields.
> It's a join table of a HABTM relation.
> But I use this table in some INNER JOINs in some queries that are very
> slow.
> I use memcache.
> In these cases, what is the best practice to consider?
> On 21 Set, 20:20, "Ma'moon" <phpir...@gmail.com> wrote:
>
>
>
> > I was successfully able to handle tables with more than 2,000,000 records
> > "yes, above 2 million records!" with CakePHP and with very acceptable
> > performance and speed. The first thing you should be thinking of is
> > sharding the database into smaller chunks "DBs"; second, enable
> > "memcached" to reduce DB access "especially reducing the master DB hits";
> > third, enable APC or whatever bytecode cache you might be able to use
> > "this will give your web server a very good performance boost"; and finally,
> > set the debug level for your Cake app to "0" to cache the tables' "DESC"
> > operations.
> > Taking all the above into consideration, CakePHP can be a very powerful
> > framework for handling huge databases!
>
> > On Mon, Sep 21, 2009 at 8:50 PM, Miles J <mileswjohn...@gmail.com> wrote:
>
> > > Well, how big is your table? I have tables with over 200,000 rows and
> > > they work just fine.
>
> > --http://phpirate.net
--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"CakePHP" group.
To post to this group, send email to cake-php@googlegroups.com
To unsubscribe from this group, send email to 
cake-php+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/cake-php?hl=en
-~----------~----~----~----~------~----~------~--~---