> "ml" == Mikko Lammi writes:
ml> "rm -rf" to problematic directory from parent level. Running
ml> this command shows directory size decreasing by 10,000
ml> files/hour, but this would still mean close to ten months
ml> (over 250 days) to delete everything!
interesting.
does ...
On Tue, January 5, 2010 10:25, Richard Elling wrote:
> On Jan 5, 2010, at 8:13 AM, David Dyer-Bennet wrote:
>> It's interesting how our ability to build larger disks, and our
>> software's
>> ability to do things like create really large numbers of files,
>> comes back
>> to bite us on the ass ...
On Wed, Jan 6, 2010 at 12:44 AM, Michael Schuster wrote:
>>> we need to get rid of them (because they eat 80% of disk space) it seems
>>> to be quite challenging.
>>
>> I've been following this thread. Would it be faster to do the reverse?
>> Copy the 20% of the disk, then format, then move the 20% back.
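One way that reverse approach could look, if the affected area is (or can be made) its own dataset. This is only a sketch, and every dataset name and path below is hypothetical:

    zfs create rpool/save                      # scratch space for the ~20% to keep
    cp -rp /opt/MYapp/otherdata /rpool/save/   # preserve the data worth keeping
    zfs destroy rpool/opt_MYapp                # drops the 60-million-file area in one shot
    zfs create -o mountpoint=/opt/MYapp rpool/opt_MYapp
    cp -rp /rpool/save/otherdata /opt/MYapp/
    zfs destroy rpool/save                     # remove the scratch dataset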
On 01/ 5/10 10:01 AM, Richard Elling wrote:
How are the files named? If you know something about the filename
pattern, then you could create subdirs and mv large numbers of files
to reduce the overall size of a single directory. Something like:
mkdir .A
mv A* .A
mkdir .B
mv B* .B
...
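The same idea as a loop (a sketch only; the path is the hypothetical /opt/MYapp/data used elsewhere in this thread, and it assumes bash plus GNU find/xargs/coreutils, since a glob like "mv A* .A" can overflow the argument-list limit with this many entries):

    cd /opt/MYapp/data || exit 1
    for c in {A..Z} {a..z} {0..9}; do
        mkdir -p ".$c"
        # batch the moves through xargs so the argument list stays bounded
        find . -maxdepth 1 -type f -name "${c}*" -print0 | xargs -0 -r mv -t ".$c"
    done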
On 05.01.2010 16:22, Mikko Lammi wrote:
However when we deleted some other files from the volume and managed to
raise free disk space from 4 GB to 10 GB, the "rm -rf directory" method
started to perform significantly faster. Now it's deleting around 4,000
files/minute (240,000/h - quite an improvement).
On Tue, January 5, 2010 10:01, Richard Elling wrote:
> OTOH, if you can reboot you can also run the latest
> b130 livecd which has faster stat().
How much faster is it? He estimated 250 days to rm -rf them; so 10x
faster would get that down to 25 days, 100x would get it down to 2.5 days
(assuming ...)
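For reference, the arithmetic behind those estimates, using the figures quoted in this thread (shell, integer division):

    files=60000000
    echo $(( files / 10000 / 24 ))         # ~250 days at 10,000 files/hour
    echo $(( files / (4000 * 60) / 24 ))   # ~10 days at the improved 4,000 files/minute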
> no - mv doesn't know about zpools, only about posix filesystems.

"mv" doesn't care about filesystems, only about the interface provided by
POSIX. There is no zfs-specific interface which allows you to move a file
from one zfs filesystem to the next.

Casper
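That fallback is easy to watch: mv first tries rename(2), which fails with EXDEV because the source and target sit on different filesystems (separate ZFS datasets count), and only then falls back to copying and unlinking. A sketch, assuming Solaris truss and hypothetical file names:

    zfs create rpool/junk
    truss -t rename,open,read,write,unlink \
        mv /opt/MYapp/data/somefile /rpool/junk/ 2>&1 | head -20
    # expect: rename(...) Err#18 EXDEV, followed by open/read/write/unlink
    # as mv copies the file and removes the original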
Michael Schuster wrote:
> >> "rm -rf" would be at least as quick.
> >
> > Normally when you do a move within a 'regular' file system all that's
> > usually done is the directory pointer is shuffled around. This is not the
> > case with ZFS data sets, even though they're on the same pool?
>
> no
On Tue, January 5, 2010 10:12, casper@sun.com wrote:
>> How about creating a new data set, moving the directory into it, and then
>> destroying it?
>>
>> Assuming the directory in question is /opt/MYapp/data:
>>  1. zfs create rpool/junk
>>  2. mv /opt/MYapp/data /rpool/junk/
>>  3. zfs destroy rpool/junk
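For completeness, those three steps as commands (using the example path above). As other replies in this thread note, step 2 crosses a dataset boundary, so mv copies and unlinks every file rather than relinking it, which makes this unlikely to beat a plain rm -rf:

    zfs create rpool/junk
    mv /opt/MYapp/data /rpool/junk/   # copy + unlink per file across datasets
    zfs destroy rpool/junk            # fast, but only after the slow mv finishes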
From: [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Mikko Lammi
Sent: January 5, 2010 12:35
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Clearing a directory with more than 60 million files
"Mikko Lammi" wrote:
> Hello,
>
> As a result of one badly designed application running loose for some time,
> we now seem to have over 60 million files in one directory. Good thing
> about ZFS is that it allows it without any issues. Unfortunatelly now that
> we need to get rid of them (because
Hello,
As a result of one badly designed application running loose for some time,
we now seem to have over 60 million files in one directory. Good thing
about ZFS is that it allows it without any issues. Unfortunately now that
we need to get rid of them (because they eat 80% of disk space) it seems
to be quite challenging.