On Thursday 08 March 2001 13:42, Goswin Brederlow wrote:
> > " " == Pavel Machek <[EMAIL PROTECTED]> writes:
> > Hi!
> >
> >> I was hoping to point out that in real life, most systems that
> >> need to access large numbers of files are already designed to
> >> do some kind of hashing, or at least to divide-and-conquer by
> >> using multi-level directory structures.
On 12 Mar 2001 21:05:58 +1100, Herbert Xu wrote:
> Pavel Machek <[EMAIL PROTECTED]> wrote:
>
> > xargs is very ugly. I want to rm 12*. Just plain "rm 12*". *Not* "find
> > . -name "12*" | xargs rm", which has terrible issues with file names
>
> Try
>
> printf "%s\0" 12* | xargs -0 rm
Or find . -name "12*" -print0 | xargs -0 rm
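The printf trick sidesteps the kernel's exec() limit entirely: printf is a
bash builtin, so the expanded 12* list never passes through execve(), and
xargs -0 then batches rm into safely sized invocations. A minimal
illustration, assuming bash (the external /usr/bin/printf path is only for
contrast):

  type printf                                # "printf is a shell builtin" -- no execve()
  printf "%s\0" 12* | xargs -0 rm            # builtin: no argument-size limit applies
  /usr/bin/printf "%s\0" 12* | xargs -0 rm   # external binary: can still fail with
                                             # "Argument list too long" (E2BIG)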
[EMAIL PROTECTED] (Bill Crawford) wrote on 22.02.01 in
<[EMAIL PROTECTED]>:
> A particular reason for this, apart from filesystem efficiency,
> is to make it easier for people to find things, as it is usually
> easier to spot what you want amongst a hundred things than among
> a thousand or ten thousand.
In article <003701c0a722$f6b02700$5517fea9@local>,
Manfred Spraul <[EMAIL PROTECTED]> wrote:
>
>exec_mmap currently avoids mm_alloc()/activate_mm()/mm_drop() for single
>threaded apps, and that would become impossible.
>I'm not sure how expensive these calls are.
They aren't that expensive: activate_mm ...
From: "Jamie Lokier" <[EMAIL PROTECTED]>
> Manfred Spraul wrote:
> > I'm not sure that this is the right way: It means that every exec()
> > must call dup_mmap(), and usually only to copy a few hundert
> > bytes. But I don't see a sane alternative. I won't propose to
> > create a temporary file in
Manfred Spraul wrote:
> I'm not sure that this is the right way: It means that every exec() must
> call dup_mmap(), and usually only to copy a few hundert bytes. But I
> don't see a sane alternative. I won't propose to create a temporary file
> in a kernel tmpfs mount ;-)
Every exec creates a who
Linus Torvalds wrote:
> The long-term solution for this is to create the new VM space for the
> new process early, and add it to the list of mm_struct's that the
> swapper knows about, and then just get rid of the pages[MAX_ARG_PAGES]
> array completely and instead just populate the new VM directly.
In article <[EMAIL PROTECTED]>,
Jamie Lokier <[EMAIL PROTECTED]> wrote:
>Pavel Machek wrote:
>> > the space allowed for arguments is not a userland issue, it is a kernel
>> > limit defined by MAX_ARG_PAGES in binfmts.h, so one could tweak it if one
>> > wanted to without breaking any userland.
>>
>> Which is exactly what I did on my system. 2MB for the command line is
>> very ...
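For scale (assuming i386's 4 KB pages and the stock 2.4 value of
MAX_ARG_PAGES = 32):

  # default limit: 32 pages * 4096 bytes = 131072 bytes (128 KB)
  getconf ARG_MAX                     # prints 131072 on such a kernel
  # Pavel's 2 MB corresponds to MAX_ARG_PAGES = 512:
  echo $((2 * 1024 * 1024 / 4096))    # 512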
On Fri, Mar 02, 2001 at 10:04:10AM +0100, Pavel Machek wrote:
>
> xargs is very ugly. I want to rm 12*. Just plain "rm 12*". *Not* "find
> . -name "12*" | xargs rm", which has terrible issues with file names
>
> "xyzzy"
> "bla"
> "xyzzy bla"
> "12 xyzzy bla"
>
Getting a bit OffTopic(TM) here, ...
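Pavel's four file names make the danger easy to reproduce (a throwaway
demo; the scratch path is made up):

  mkdir /tmp/scratch && cd /tmp/scratch
  touch "xyzzy" "bla" "xyzzy bla" "12 xyzzy bla"
  find . -name "12*" | xargs rm
  # xargs word-splits "./12 xyzzy bla" into ./12, xyzzy, bla:
  # rm fails on "./12" but deletes the innocent "xyzzy" and "bla".
  ls    # "12 xyzzy bla" and "xyzzy bla" are still here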
On Fri, Mar 02, 2001 at 10:04:10AM +0100, Pavel Machek wrote:
> Hi!
>
> > > > * userland issues (what, you thought that limits on the
> > > > command size will go away?)
> > >
> > > Last I checked, the command line size limit wasn't a userland issue, but
> > > rather a limit of the kernel exec(). This might have changed.
On 2 Mar 2001, Oystein Viggen wrote:
> Pavel Machek wrote:
> > xargs is very ugly. I want to rm 12*. Just plain "rm 12*". *Not* "find
> These you work around using the smarter, \0 terminated, version:
Another example demonstrating why xargs is not always good (and why a
bigger command line is needed):
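The example itself is cut off in this snippet; one plausible case of the
kind meant (a hypothetical sketch) is a command that needs *all* of its
arguments in a single invocation, since xargs silently splits long lists
across several runs:

  find . -name "*.c" -print0 | xargs -0 tar cf src.tar
  # If the list exceeds one command line, xargs runs tar several times,
  # and each later "tar cf" truncates src.tar: only the last batch survives.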
Pavel Machek wrote:
> xargs is very ugly. I want to rm 12*. Just plain "rm 12*". *Not* "find
> . -name "12*" | xargs rm", which has terrible issues with file names
>
> "xyzzy"
> "bla"
> "xyzzy bla"
> "12 xyzzy bla"
These you work around using the smarter, \0 terminated, version:
find . -name "12*" -print0 | xargs -0 rm
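One wrinkle worth noting: unlike "rm 12*", find recurses into
subdirectories. With GNU find (both -print0 and -maxdepth are GNU
extensions) the recursion can be suppressed:

  find . -maxdepth 1 -name "12*" -print0 | xargs -0 rm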
Hi!
> > > * userland issues (what, you thought that limits on the
> > > command size will go away?)
> >
> > Last I checked, the command line size limit wasn't a userland issue, but
> > rather a limit of the kernel exec(). This might have changed.
>
> I _really_ don't want to trust the ability of shell to deal with long
> command lines.
H. Peter Anvin writes [re hashed directories]:
> I don't see there being any fundamental reason to not do such an
> improvement, except the one Alan Cox mentioned -- crash recovery --
> (which I think can be dealt with; in my example above as long as the leaf
> nodes can get recovered, the tree can be rebuilt).
Before I reply: I apologise for starting this argument, or at least
making it worse, and please let me say again that I really would like
to see improvements in directory searching etc. ... my original point
was simply a half-joking aside to the effect that we should not
encourage people to put thousands of files in a single directory.
Alexander Viro wrote:
>
> I _really_ don't want to trust the ability of shell to deal with long
> command lines. I also don't like the failure modes with history expansion
> causing OOM, etc.
>
> AFAICS right now we hit the kernel limit first, but I really doubt that
> raising said limit is a good idea.
On Thu, 1 Mar 2001, Alexander Viro wrote:
> * userland issues (what, you thought that limits on the
> command size will go away?)
the space allowed for arguments is not a userland issue, it is a kernel
limit defined by MAX_ARG_PAGES in binfmts.h, so one could tweak it if one
wanted to without breaking any userland.
Alexander Viro wrote:
> >
> > Yes -- because they work around kernel slowness.
>
> Pavel, I'm afraid that you are missing the point. Several, actually:
> * limits of _human_ capability to deal with large unstructured
> sets of objects
Not an issue if you're a machine.
> * userland issues (what, you thought that limits on the
> command size will go away?)
Hi!
> I was hoping to point out that in real life, most systems that
> need to access large numbers of files are already designed to do
> some kind of hashing, or at least to divide-and-conquer by using
> multi-level directory structures.
Yes -- because they work around kernel slowness.
I had ...
"H. Peter Anvin" wrote:
> Bill Crawford wrote:
...
> > We use Solaris and NFS a lot, too, so large directories are a bad
> > thing in general for us, so we tend to subdivide things using a
> > very simple scheme: taking the first letter and then sometimes
> > the second letter or a pair of letters
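A minimal sketch of the subdivision scheme Bill describes (the
"by-letter" directory name is made up; assumes bash for the ${f:0:1}
substring; run once in the flat directory):

  for f in *; do
      d=${f:0:1}                  # first letter of the file name
      mkdir -p "by-letter/$d"
      mv -- "$f" "by-letter/$d/"
  done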