Rm performance issue

2007-09-26 Thread Ken Naim
I am removing 300 GB of data spread across 130 files within a single
directory, and the process takes just over 2 hours. In my past experience,
removing a small number of large files was very quick, almost instantaneous.
I am running Red Hat Linux on IBM pSeries hardware against a SAN with SATA
and Fibre Channel drives. I see this issue on both the SATA and Fibre
Channel sides, although the rm process is slightly faster on Fibre Channel.


uname -a: Linux hostname 2.6.9-55.EL #1 SMP Fri Apr 20 16:33:09 EDT 2007
ppc64 ppc64 ppc64 GNU/Linux

Commands:

cd /path/directory/subdirectory
rm -f *
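
A quick way to see where the time actually goes (a sketch; the path is the
placeholder from above) is to let strace summarize rm's system-call time:

    # Summarize time spent per system call; expect unlink() to dominate
    # if the cost is in the filesystem's bookkeeping rather than in rm.
    strace -c rm -f /path/directory/subdirectory/*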


I wanted to know if there is a way to speed this up, as it causes a 3-hour
process to stretch to 5 hours.


Thanks,

Ken Naim


RE: Rm performance issue

2007-09-26 Thread Ken Naim
We are using ext3 on top of LVM on an IBM SAN. I don't know the SAN hardware
specifics, although I have been trying to squeeze this information out of
the client for a while.

As for bad I/O experiences: our core production systems use raw devices for
our databases, so we don't see the same issue(s) there. This is our
production reporting system, which gets cloned over nightly. The process
removes all the existing files and then writes new versions of them from a
backup onto a filesystem. I have noticed the poor I/O performance since I
came onsite, but the Unix team keeps saying everything is fine. This rm
issue is causing the database clone process to exceed its allocated downtime
window, so I thought I'd start there.
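
One way to shorten the window without making rm itself faster (a sketch;
the directory names and the restore step are placeholders) is to rename the
old tree aside, start the restore immediately, and delete in the background:

    # rename(2) within the same filesystem is near-instant, so the
    # restore can begin at once while the old data is removed outside
    # the downtime window.
    mv /path/directory/subdirectory /path/directory/subdirectory.old
    mkdir /path/directory/subdirectory
    # ... restore from backup into the new directory ...
    nohup rm -rf /path/directory/subdirectory.old >/dev/null 2>&1 &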

If anyone can point me to any specific information on tuning the ext3
filesystem, I'd appreciate it. I am googling it now.
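
For a first pass (a sketch; the device and mount point are placeholders), it
is worth checking which ext3 features and mount options are in effect:
dir_index speeds up lookups in large directories, and noatime avoids an
inode write for every file read:

    # List filesystem features (look for dir_index) on the LVM volume.
    tune2fs -l /dev/VolGroup00/LogVol00 | grep -i 'features'
    # Show the mount options currently in effect for the filesystem.
    grep /path/directory /proc/mounts
    # Remounting with noatime skips access-time updates on reads.
    mount -o remount,noatime /path/directory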

Thanks much for the help,
Ken

-Original Message-
From: Andreas Schwab [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, September 26, 2007 12:51 PM
To: Philip Rowlands
Cc: Ken Naim; bug-coreutils@gnu.org; [EMAIL PROTECTED]
Subject: Re: Rm performance issue

Philip Rowlands <[EMAIL PROTECTED]> writes:

> unlink shouldn't cause much I/O compared to other read/write
> operations, so I'm surprised you only noticed issues with rm.

Deleting a big file can require quite a bit of block reading, depending
on the filesystem and the fragmentation thereof.
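
A workaround that follows from this (a sketch; the filename is a
placeholder, and it assumes a 'truncate' utility, which 2007-era coreutils
may lack) is to shrink a big file in steps before unlinking it, spreading
the indirect-block reads over time instead of one long burst:

    f=/path/directory/subdirectory/bigfile  # placeholder name
    size=$(stat -c %s "$f")                 # current size in bytes
    step=$((1024 * 1024 * 1024))            # shrink 1 GiB at a time
    while [ "$size" -gt "$step" ]; do
        size=$((size - step))
        truncate -s "$size" "$f"            # assumes GNU truncate exists
    done
    rm -f "$f"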

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


