On Thu, 2005-12-15 at 07:43 -0700, Duncan wrote:
> > I was wondering if there are any sane ways to optimize the performance
> > of a Gentoo system.
> This really belongs on -user, or perhaps on the appropriate special-purpose list,
> desktop or hardened or whatever, not on devel.  That said, some
> comments...  (I can't resist. <g>)
-user carries the risk of attracting lots of "use teh -fomglol flag, it si teh fast0r" advice ;-)
hardened doesn't have much to do with performance (although I'd be
interested in what impact - if any - the different security features have!)
 
> > - don't overtweak CFLAGS. "-O2 -march=$your_cpu_family" seems to be on
> > average the best, -O3 is often slower and can cause bugs
> 
> A lot of folks don't realize the effect of cache memory on optimizations. 
> I'll be brief here, but particularly for things like the kernel that stay
> in memory, -Os can at times work wonders, because it means more of the
> working set stays in a cache closer to the CPU, and the additional speed
> in retrieving that code far outweighs the compromises made to
> optimizations to shrink it to size.  Conversely, media streaming or
> encoding apps are constantly throwing out old data and fetching new data,
> and the optimizations are often more effective for them, so they work
> better with -O2 or even -O3.
I've not seen any substantial benefit from -Os over -O2.
Also, the size difference is quite small - roughly 5 MB on a "normal" install, iirc.
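For reference, that conservative baseline in /etc/make.conf looks
something like this (the -march value is only an example - substitute
your own CPU family):

    # /etc/make.conf - conservative, well-tested settings
    # -march=k8 is an amd64 example; use your CPU family instead
    CFLAGS="-O2 -march=k8 -pipe"
    CXXFLAGS="${CFLAGS}"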

> There have been occasional problems with -Os, generally because it isn't
> used as much and gets less testing, so earlier in a gcc cycle series. 
> However, I run -Os here (amd64) by default, and haven't seen any issues
> that went away if I reverted to -O2, over the couple years I've been
> running Gentoo. 
I've seen some reproducible breakage, e.g. KDE doesn't like -Os at all.
>  (Actually, that has been the case, even when I've edited
> ebuilds to remove their stripflags calls and the like.  Glibc and xorg
> both stripflags including -Os.  xorg seemed to benefit here from -Os after
> I removed the stripflags call, while glibc worked but seemed slower. Note
> that editing ebuilds means if it breaks, you get to keep the pieces!)
... which is exactly what I wanted to avoid. Ricing for the sake of it is 
boring ;-)
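For the record, the call in question is usually the strip-flags function
from flag-o-matic.eclass; a purely illustrative sketch (not from any
real ebuild) of what removing it means:

    src_compile() {
        strip-flags              # resets CFLAGS/CXXFLAGS to a safe subset
        # deleting the line above keeps your own flags - unsupported,
        # and if it breaks you get to keep the pieces
        econf || die "configure failed"
        emake || die "make failed"
    }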

> For gcc, -pipe doesn't improve program optimization, but will make
> compiling faster.  -fomit-frame-pointer makes smaller applications if
> you aren't debugging.  Those are both common enough to be fairly safe. 
agreed
> -frename-registers and -fweb may also be useful. (-fweb ceases to be so on
> gcc4, however, because it is implemented differently.)  -funit-at-a-time
> (new to gcc-3.4, so don't try it with gcc-3.3) may also be worth looking
> into, altho it's already enabled by -Os. These latter flags are less
> commonly used, however, thus less well tested, and may therefore cause
> very occasional problems. (-funit-at-a-time was known to do so early in
> the 3.4 cycle, but those issues should have been long ago dealt with by
> now.)  I consider those /reasonably/ conservative, and it's what I run. 
> If I were running a server, however, I'd probably only run -O2 and the
> first two (-pipe and -fomit-frame-pointer).
On a server you'd not use -fomit-frame-pointer, I think, to keep things debuggable.
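For completeness, the fuller set described above would amount to
something like this in make.conf (gcc 3.4 era; the -march value is my
example, and as noted these flags are less tested - try at your own
risk):

    # less-tested additions; drop -fomit-frame-pointer on servers
    # where you want usable backtraces
    CFLAGS="-Os -march=k8 -pipe -fomit-frame-pointer -frename-registers -fweb"
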
> Do some research on -Os, in any case.  It could be well worth your time.
In my (limited) experience it isn't, especially on CPUs with larger caches.

> This suggestion does involve hardware, but not a real heavy cost, and the
> performance boost may be worth it.
That's usually not an option :-)

>  Consider running a RAID system.  I
> recently switched to RAID, a four-disk setup, raid1/mirrored for /boot,
> raid6 (for redundancy) for most of the system, raid0/striped (for speed)
> for /tmp, the portage dir, etc, stuff that was either temporary anyway, or
> could easily be redownloaded. (Swap can also be striped, set equal
> partitions on each disk and set equal priority for them in fstab.) I was
> very pleasantly surprised at how much of a difference it made!
Yes, a 4-disk raid5 delivers amazing performance with minimal CPU
overhead (~10% @ 1 GHz).
But 4 disks at 100 Euro + a controller (100 Euro) is more than the price of
a "new" system for most people.
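For anyone wanting to reproduce that layout, a rough mdadm sketch
(device names and partition numbers are made up - adapt them):

    # hypothetical 4-disk split: mirrored /boot, raid6 system,
    # striped /tmp + portage tree
    mdadm --create /dev/md0 --level=1 --raid-devices=4 /dev/sd[abcd]1
    mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sd[abcd]2
    mdadm --create /dev/md2 --level=0 --raid-devices=4 /dev/sd[abcd]3
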
>  If you have
> onboard SATA and are buying new disks so can buy SATA anyway (my case),
> that should do just fine, as SATA runs a dedicated channel to each
> drive anyway.  SCSI is a higher cost option, ruled out here, but SATA
> works very nicely, certainly so for me.
SCSI does deliver better performance, but at a prohibitive cost for "average" 
users.

> Again, a reasonable new-hardware suggestion.  When purchasing a new system
> or considering an upgrade, more memory is often the most effective
> optimization you can make (with the raid suggestion above very close to
> it).
"The only thing better than a large engine is a larger engine" ;-)
Depending on the workload, 4G does wonders, but again - prohibitively
expensive for the normal user.

>  Slower CPU and more memory, up to a gig or so, is almost always
> better than the reverse, because hard drive access is WAYYY slower than
> even cheap/slow memory.  At a gig of memory, running with swap disabled is
> actually a practical option,
but if you're investing anyway, keep 1G per disk for swap, just in
case ;-)
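The equal-priority swap striping mentioned above is just one fstab line
per disk, e.g. (partition numbers are illustrative):

    # equal pri= values make the kernel round-robin across the
    # disks, effectively striping swap
    /dev/sda4   none   swap   sw,pri=1   0 0
    /dev/sdb4   none   swap   sw,pri=1   0 0
    /dev/sdc4   none   swap   sw,pri=1   0 0
    /dev/sdd4   none   swap   sw,pri=1   0 0
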
>  altho it might not be faster, and there are
> certain memory zone management considerations. Usual X/KDE desktop usage
> will run perhaps a third of a gig.  That means half to 2/3 gig for cache,
> which is "comfortable".
Agreed, although I wonder why we need so much memory in the first
place ...

>  Naturally, if you take the RAID suggestion above,
> this one isn't quite as critical, because drive latency will be lower so
> reliance on swap isn't as painful, and a big cache not nearly as critical
> to good performance.  
Latency is the same, but accesses can happen concurrently, so throughput
increases.
Still memory > * ... 

> A gig to two gig can still be useful, but the
> cost/performance tradeoff isn't as good, and the money will likely be
> better spent elsewhere.
No. The only thing better than memory is more memory ;-)

> I run reiserfs here on everything.  However, some don't consider it
> extremely stable.  I keep second-copy partitions as backups of stuff I
> want to ensure is safe, for that reason and others (fat-finger deleting,
> anyone?).
Backups are independent of drive speed ;-)
>  Bottom line, reiserfs is certainly safe "enough", if you have a
> decent backup system in place, and you follow it regularly, as you should.
> I can't see how anyone can reasonably disagree with that, filesystem
> religious zealousy or not.
In my experience it is as "safe" as ext3 and XFS, meaning it can go down, but 
usually just works.

> As I said, I run reiserfs for everything here, but I also have backup
> images of stuff I know I want to keep.
Always back up - what if your disk(s) die?
I've seen 6 out of 10 disks in a RAID die within a few hours ...
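A second-copy partition doesn't need anything fancy either - a minimal
sketch, assuming the backup partition is mounted at /mnt/backup (mount
point and excludes are my assumptions):

    # mirror the root fs onto the backup partition; -x stays on one
    # filesystem, --delete keeps the copy exact
    rsync -aHx --delete --exclude=/tmp/ /  /mnt/backup/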

So while not completely related to software tweaks, thanks for the
hardware upgrade info ;-)

Patrick
-- 
Stand still, and let the rest of the universe move
