In response to Birger Blixt <[EMAIL PROTECTED]>:

> On 2006-09-20 15:44, Bill Moran wrote:
> > In response to Birger Blixt <[EMAIL PROTECTED]>:
> > 
> >> Frank Sweetser wrote:
> >>> On Thu, Sep 07, 2006 at 10:58:49AM +0200, Masopust, Christian wrote:
> >>>> Hello Frank,
> >>>>
> >>>> thanks a lot for this info!  but....   :-)))
> >>>>
> >>>> could anybody give the complete info (maybe also modify the
> >>>> wiki-page) how these indexes should be created?
> >>> The best place to find that is in the documentation for your
> >>> database.  That way you can make sure you're getting accurate
> >>> information, as opposed to something for a previous version or
> >>> the like.
> >> Amazing, my dbcheck at home was searching for orphaned Filename records
> >> the whole day, and I finally did a mysqladmin kill on the query.
> >>
> >> Then I did:
> >>
> >> mysql bacula
> >>
> >> create index Job_FilesetId_idx on Job(FileSetId);
> >> create index Job_ClientId_idx on Job(ClientId);
> > 
> > I find it hard to believe that these indexes help much, unless you've got
> > a boatload of jobs in the system.  I doubt they'll hurt anything,
> > though.
> > 
> >> create index File_PathId_idx on File(PathId);
> >> create index File_FilenameId_idx on File(FilenameId);
> > 
> > One of these two is redundant.  There's already an index:
> > "file_fp_idx" btree (filenameid, pathid) CLUSTER
> > which should be usable by queries searching on filenameid (this is
> > PostgreSQL, but the idea is the same)  Creating another index on
> > filenameid is just bloat and will slow down inserts.
> > 
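To make the point about the composite index concrete, here is a quick
sketch.  It uses SQLite's EXPLAIN QUERY PLAN because it is
self-contained (MySQL's EXPLAIN shows the same idea), and the toy table
only mimics Bacula's File table; the names are illustrative, not the
real schema:

```python
# Toy stand-in for Bacula's File table (illustrative names only).
# Shows that a composite index on (filenameid, pathid) can serve a
# query on its leading column, but not on pathid alone.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE file (fileid INTEGER PRIMARY KEY,"
             " jobid INTEGER, filenameid INTEGER, pathid INTEGER)")
conn.execute("CREATE INDEX file_fp_idx ON file(filenameid, pathid)")

def plan(sql):
    # Collect the planner's detail strings for the query.
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

# Leading column: the planner picks the composite index.
by_filename = plan("SELECT * FROM file WHERE filenameid = 42")

# Second column alone: a full table scan; the index is no help here.
by_path = plan("SELECT * FROM file WHERE pathid = 42")

print(by_filename)
print(by_path)
```

So an extra single-column index on filenameid buys nothing the
composite index does not already provide, while an index on pathid
alone is the one that actually adds a new access path.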
> > Personally, I don't recommend that anyone blindly create these
> > indexes.  As you mentioned, it took over 30 minutes to create them,
> > which seems to indicate that their existence will have a negative
> > impact on inserts and updates.
> > 
> > In my case, none of these indexes made a significant improvement to
> > dbcheck time.  dbcheck took 5-1/2 minutes to run with and without the
> > above indexes (I have about 9,000,000 rows in the file table and
> > 500,000 in the filename table).
> > 
> > Try them out ... if they make a significant improvement, use them.
> 
> Well, I admit, I don't know what I'm doing in this case,
> but ... the search query _did_ take forever,
> and I eventually canceled it.
> With the new indexes it runs fast, so something did happen.
> 
> I was checking the script http://www.aha.com/bacula/recover.pl
> There I found:
> "In order for this program to have a chance of not being painfully slow,
> the following indexes should be added to your database.
> CREATE INDEX file_pathid_idx on file(pathid);
> CREATE INDEX file_filenameid_idx on file(filenameid); "
> 
> I took that as an example and added them, plus the 2 Job indexes as a test;
> there is always a DROP INDEX command to remove them if they don't
> work out.
> 
> I would appreciate it if someone who really knows how to tune MySQL could
> send the output of "show index from File", so I can optimize the table
> (the same goes for the Job table, I guess).
> 
> Table  Non_unique  Key_name             Seq_in_index  Column_name  Cardinality  Index_type
> File   0           PRIMARY              1             FileId       4541609      BTREE
> File   1           JobId                1             JobId        576          BTREE
> File   1           JobId                2             PathId       378467       BTREE
> File   1           JobId                3             FilenameId   4541609      BTREE
> File   1           JobId_2              1             JobId        576          BTREE
> File   1           File_PathId_idx      1             PathId       105618       BTREE
> File   1           File_FilenameId_idx  1             FilenameId   504623       BTREE
> 
> Maybe I can drop something here, who knows?
> 
> One source for information can be the make_mysql_tables script

The problem is that depending on your data and usage of the database,
YMMV.  Optimizations that work for one person might not benefit
another.

For example: adding indexes _usually_ speeds query performance, but
it also _usually_ slows insert/update performance, because every change
to the table has to update all the indexes.
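A rough way to see that cost for yourself: time a batch of inserts with
and without extra indexes.  This is a sketch against a throwaway SQLite
table with made-up names; MySQL will show the same pattern, just with
different absolute numbers:

```python
# Sketch: measure insert cost with vs. without extra indexes.
# SQLite toy table for illustration only, not Bacula's schema.
import sqlite3
import time

def time_inserts(extra_indexes):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE f (id INTEGER PRIMARY KEY, a INTEGER,"
                 " b INTEGER, c INTEGER)")
    for i in range(extra_indexes):
        # Every extra index is extra work on each INSERT.
        conn.execute(f"CREATE INDEX idx_{i} ON f({'abc'[i]})")
    rows = [(i, i % 97, i % 89, i % 83) for i in range(50000)]
    start = time.perf_counter()
    with conn:
        conn.executemany("INSERT INTO f VALUES (?, ?, ?, ?)", rows)
    elapsed = time.perf_counter() - start
    count = conn.execute("SELECT count(*) FROM f").fetchone()[0]
    return elapsed, count

plain, n1 = time_inserts(0)
indexed, n2 = time_inserts(3)
print(f"no extra indexes: {plain:.3f}s  three indexes: {indexed:.3f}s")
```

On most systems the indexed run is noticeably slower, which is exactly
the trade-off a backup-heavy workload has to weigh.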

If you're interested in very fast backups and rarely do restores, it
might be a bad idea to add any indexes to speed up lookups in the
database if they slow down the updates.  On the other hand, if getting
at your data quickly is important, it might be worth accepting slower
backups to be able to search the data quickly.

Some people have lots of jobs while others have only a few; some back
up a few large files, others lots of small files; some run one big
server, others many servers.

Each of these scenarios will distribute the data differently in the
database, and cause different optimizations to be worthwhile.
Additionally, the hardware on which the DB runs makes a difference:
fast disk and low RAM vs. lots of RAM and slow disks.

I recommend trying out indexes.  It's pretty simple to remove them
if they don't work out.  But don't forget to test backup performance
as well, since adding an index may speed dbcheck, but hurt your
backup speed.
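That try-it-and-measure loop can be sketched like this (again SQLite as
a self-contained stand-in, with hypothetical table and index names;
with MySQL you would run the same CREATE INDEX / DROP INDEX statements
and time your real queries and backups):

```python
# Sketch: try an index, measure a query, drop the index if it
# didn't pay off.  Names here are made up for illustration.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE f (id INTEGER PRIMARY KEY, pathid INTEGER)")
conn.executemany("INSERT INTO f VALUES (?, ?)",
                 [(i, i % 1000) for i in range(100000)])

def timed(sql):
    # Return (elapsed seconds, result rows) for one query.
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    return time.perf_counter() - start, rows

query = "SELECT count(*) FROM f WHERE pathid = 123"
before, r1 = timed(query)
conn.execute("CREATE INDEX f_pathid_idx ON f(pathid)")
after, r2 = timed(query)
print(f"before: {before:.4f}s  after: {after:.4f}s")

if after >= before:
    # The index didn't help this workload, so undo it.
    conn.execute("DROP INDEX f_pathid_idx")
```

The same before/after timing should also be applied to a test backup
run, since that is where the index's write cost shows up.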

-- 
Bill Moran
Collaborative Fusion Inc.

_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users
