My impression as an undergraduate in CS was that most of my peers were
mechanics, rather than artists. They could ape things, but only a few
could see past what was shown and apply the principles abstractly.
This may have to do with failures in earlier education--I remember
that again, peers coul
Sent: Tue, May 4, 2010 5:38:35 PM
Subject: Re: [9fans] du and find
On Tue, May 4, 2010 at 6:01 AM, Ethan Grammatikidis wrote:
>
> On 3 May 2010, at 19:34, Jorden M wrote:
>
>> On Mon, May 3, 2010 at 10:53 AM, erik quanstrom
>> wrote:
It's always been easier for me to use python's/perl's regular
expressions when I needed to process a text file than to use plan9's.
On Tue, May 4, 2010 at 11:01 AM, Ethan Grammatikidis wrote:
> On 3 May 2010, at 19:34, Jorden M wrote:
>
>> I've yet to find out why this happens so much, but I think I can
>> narrow it to a combination of ignorance, laziness, and perhaps that
>> all-too-frequent assumption `oh, I can do this in 1
On Mon, May 3, 2010 at 10:53 AM, erik quanstrom wrote:
>> It's always been easier for me to use python's/perl's regular
>> expressions when I needed to process a text file than to use plan9's.
>> For simple things, e.g. while editing an ordinary text in acme/sam,
>> plan9's regexps are just fine.
On 3 May 2010, at 16:29, j...@9srv.net wrote:
On 3 May 2010, at 14:41, Steve Simon wrote:
Or just apply runs grep -r patch...
% man 1 grep | grep '\-r'
%
Key word being patch.
Oh right! Well, the point of this thread was to talk about something
better than grep -r in the first place.
> > Or just apply runs grep -r patch...
> % man 1 grep | grep '\-r'
s/runs/ron's/
see 9fans passim for the patch.
-Steve
> On 3 May 2010, at 14:41, Steve Simon wrote:
>> Or just apply runs grep -r patch...
>
> % man 1 grep | grep '\-r'
> %
>
Key word being patch.
On 3 May 2010, at 14:41, Steve Simon wrote:
on Plan 9 you'd probably want to make a wrapper for grep anyway if
you
do a lot of recursive searching.
Or just apply runs grep -r patch...
% man 1 grep | grep '\-r'
%
-Steve
--
Simplicity does not precede complexity, but follows it. -- Alan Perlis
> It's always been easier for me to use python's/perl's regular
> expressions when I needed to process a text file than to use plan9's.
> For simple things, e.g. while editing an ordinary text in acme/sam,
> plan9's regexps are just fine.
i find it hard to think of cases where i would need
such so
> http://betterthangrep.com/
>
> it does not seem to work out of the box (expecting some unix paths), but
> since there's a perl port and that thing is supposed to be more or
> less self contained (for the standalone version), maybe it's not too
> much work for someone interested enough.
don't be
> on Plan 9 you'd probably want to make a wrapper for grep anyway if you
> do a lot of recursive searching.
Or just apply runs grep -r patch...
-Steve
On 3 May 2010 14:18, Akshat Kumar wrote:
> Forgive my ignorance and irrelevance to this topic,
> but what are the advantages of Perl's regular
> expressions, over the implementation we have
> currently in Plan 9?
Regexps in Plan9 are on one hand much less powerful than Perl's, on
the other hand t
On 3 May 2010, at 13:26, Mathieu Lonjaret wrote:
No idea, probably none.
that would not be the interesting point, if any. it's just that the
tool is already there and (should be) simpler to use than piping
various commands around, as they illustrate below.
Ack looks cute, but I think a fair
> Date: Mon, 3 May 2010 05:18:56 -0700
> From: Akshat Kumar
> Subject: Re: [9fans] du and find
> To: Fans of the OS Plan 9 from Bell Labs <9fans@9fans.net>
>
> From the website:
>
> "ack is written purely in Perl, and takes advantage
> of the power of Perl's regular expressions."
No idea, probably none.
that would not be the interesting point, if any. it's just that the
tool is already there and (should be) simpler to use than piping
various commands around, as they illustrate below.
From the website:
"ack is written purely in Perl, and takes advantage
of the power of Perl's regular expressions."
Forgive my ignorance and irrelevance to this topic,
but what are the advantages of Perl's regular
expressions, over the implementation we have
currently in Plan 9?
Thanks,
ak
Hello,
just because reviving old threads is fun...
I've just found out about this:
http://betterthangrep.com/
it does not seem to work out of the box (expecting some unix paths), but
since there's a perl port and that thing is supposed to be more or
less self contained (for the standalone version), maybe it's not too
much work for someone interested enough.
> Given the way Unix programs
> behave you can't replace arg list with an arg fd (I used to
didn't know this was "unixfans". will keep that in mind.
- erik
On Sat, 02 Jan 2010 20:49:39 EST erik quanstrom wrote:
> > And can eat up a lot of memory or even run out of it. On a
> > 2+ year old MacBookPro "find -x /" takes 4.5 minutes for 1.6M
> > files and 155MB to hold paths. My 11 yr old machine has 64MB
> > and over a million files on a rather slow disk. Your solution
> > would run out of space on it.
Rog said:
that's why breadth-first might be useful, by putting
shallower files earlier in the search results - i often
do grep foo *.[ch] */*.[ch] */*/*.[ch] to achieve
a similar result, but you have to guess the depth that way.
for what it's worth, dan's walk.c has a -d option for limiting search depth.
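rog's glob trick can be sketched in POSIX sh rather than rc; the directory and file names below are invented for the demo:

```shell
# Rough sketch of depth-by-depth globbing: shallow globs are
# listed first, so shallow hits print first. Demo tree is made up.
mkdir -p /tmp/bfs_demo/sub
echo match > /tmp/bfs_demo/top.c
echo match > /tmp/bfs_demo/sub/deep.c

# grep -l prints only the names of matching files, in argument
# order, so /tmp/bfs_demo/top.c comes before sub/deep.c.
grep -l match /tmp/bfs_demo/*.c /tmp/bfs_demo/*/*.c
```

The shallow match surfaces first, but as rog says, you still have to guess the maximum depth by how many `*/` you write.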
> And can eat up a lot of memory or even run out of it. On a
> 2+ year old MacBookPro "find -x /" takes 4.5 minutes for 1.6M
> files and 155MB to hold paths. My 11 yr old machine has 64MB
> and over a million files on a rather slow disk. Your solution
> would run out of space on it.
modern cat woul
On Sat, 02 Jan 2010 14:47:26 EST erik quanstrom wrote:
>
> my beef with xargs is only that it is used as an excuse
> for not fixing exec in unix. it's also used to bolster the
> "that's a rare case" argument.
I often do something like the following:
find . -type f | xargs grep -l | xargs
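The shape of that pipeline, with a made-up pattern filled in for the one elided above (POSIX sh; the demo tree and the pattern "needle" are invented):

```shell
# xargs batches the name list from find into as few execs of grep
# as fit under the argv limit; grep -l prints only the names of
# files that match.
mkdir -p /tmp/xargs_demo
printf 'needle\n' > /tmp/xargs_demo/a.txt
printf 'needle\n' > /tmp/xargs_demo/b.txt
printf 'hay\n'    > /tmp/xargs_demo/c.txt

find /tmp/xargs_demo -type f | xargs grep -l needle | sort
```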
> i'm not saying it can't be passed in an argument list, just that
> xargs gives you a lazy evaluation of the walk
> of the file tree which can result in a faster result
> when the result is found earlier in the file list.
i have no problem with breadth-first.
my beef with xargs is only that it is used as an excuse
for not fixing exec in unix. it's also used to bolster the
"that's a rare case" argument.
2010/1/2 erik quanstrom :
>> and /sys/src isn't by any means the largest tree i like to grep
>> (for instance, searching for lost files with a name i longer remember,
>> i've been known to search through all the files in my home directory,
>> ~425000 files at last count)
>>
>> sometimes i think it
Yes, you are right. I had forgotten about the cache. But probably the
cache is the reason why du -a takes 25s?
> On Fri, Jan 01, 2010 at 09:02:28PM -0500, erik quanstrom wrote:
> > > you've got a fast system.
> > > in at least one system i use, du -a of /sys/src takes about 25s.
> >
> > i have a humble 2y.o. single-core 35w celeron as a fileserver.
> >
> Speed of `du' depends on I/O, not CPU.
really? ha
On Fri, Jan 01, 2010 at 09:02:28PM -0500, erik quanstrom wrote:
> > you've got a fast system.
> > in at least one system i use, du -a of /sys/src takes about 25s.
>
> i have a humble 2y.o. single-core 35w celeron as a fileserver.
>
Speed of `du' depends on I/O, not CPU.
> because the limit is big enough that cases that break the
> limit almost never happen except in this case?
we can easily fit all the files in most any system in memory.
why shouldn't that be the limit? see below.
> > i'm not sure i understand when and why this would be useful. nobody
> > has
2010/1/2 erik quanstrom :
> using xargs does work around the problem. but then why not
> go all the way and remove ` from rc? after all, ` only works some
> of the time?
because the limit is big enough that cases that break the
limit almost never happen except in this case?
> i'm not sure i understand when and why this would be useful. nobody has
> i don't really see why xargs (the idea, not the usual unix implementations)
> is inherently such a bad idea. years ago i wrote an ultra simple version
> with no options, and it's about 80 lines of code, which i use to grep
> through all of /sys/src for example.
that's interesting. my objection
2009/12/29 erik quanstrom :
> what seems more important to me is a way to unlimit the size
> of argv. otherwise we'll need to go down the hideous xargs path.
> (apoligizes to hideous functions everywhere for the slur.)
i don't really see why xargs (the idea, not the usual unix implementations)
is inherently such a bad idea. years ago i wrote an ultra simple version
with no options, and it's about 80 lines of code, which i use to grep
through all of /sys/src for example.
Ok, so it is better to use
du -a | sed 's/^.* //'
The 'g' is unnecessary.
-rob
On Wed, Dec 30, 2009 at 4:59 AM, Tim Newsham wrote:
>> It is suggested to use
>> du -a | awk '{print $2}'
>> instead of find. But what if filename contains spaces? For example if
>> file is named "foo bar" then awk will output "foo" only.
>
> What about
>
> du -a | sed 's/^[0-9]*//g'
Chicken dinner!
On Tue, Dec 29, 2009 at 10:59 AM, Tim Newsham wrote:
> It is suggested to use
>> du -a | awk '{print $2}'
>> instead of find. But what if filename contains spaces? For example if
>> file is named "foo bar" then awk will output "foo" only.
>>
>
> What about
>
> du -a | sed 's/^[0-9]*//g'
It is suggested to use
du -a | awk '{print $2}'
instead of find. But what if filename contains spaces? For example if
file is named "foo bar" then awk will output "foo" only.
What about
du -a | sed 's/^[0-9]*//g'
no loss on spaces in filenames.
no loss on tabs in filenames.
Tim Newsham
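The difference is easy to see on one faked du -a line (tab-separated size and name; the name "foo bar" is invented). The sed variant below also eats the single separator character, a slight change from Tim's exact command:

```shell
# One faked line of du -a output: size, a tab, then a name with a space.
line=$(printf '120\tfoo bar')

# awk splits on any whitespace, so the name is cut at the space:
awk_name=$(printf '%s\n' "$line" | awk '{print $2}')
echo "$awk_name"   # foo

# Stripping the leading size plus the one separator character
# with sed keeps the whole name:
sed_name=$(printf '%s\n' "$line" | sed 's/^[0-9]*.//')
echo "$sed_name"   # foo bar
```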
On Mon Dec 28 20:04:48 EST 2009, lyn...@orthanc.ca wrote:
> > what seems more important to me is a way to unlimit the size
> > of argv. otherwise we'll need to go down the hideous xargs path.
>
> How often have you run up against the current limit? I've yet to hit
> it in anything other than contrived tests. And even those took work.
To be fair, the correct script on Plan 9 is academic. Just do what gets the
job done for you now. Don't go down an academic black hole. These guys have
been arguing about `find` since 2002.
D
On Mon, Dec 28, 2009 at 6:00 PM, anonymous wrote:
> > While it's true that you'll have misses on tabs i
> While it's true that you'll have misses on tabs in filenames, it's much more
> rare to have a tab in a filename than it is to have a space, yes?
>
I don't have spaces either, but a correct script should not make any assumptions.
There is an interesting date on http://swtch.com/plan9history/:
March 23,
> what seems more important to me is a way to unlimit the size
> of argv. otherwise we'll need to go down the hideous xargs path.
How often have you run up against the current limit? I've yet to hit
it in anything other than contrived tests. And even those took work.
> find and walk are about
> du -a | awk '-F\t' '{print $2}' -
All this nonsense because the dogmatists refuse to accept
/n/sources/contrib/cross/walk.c into the distribution.
While it's true that you'll have misses on tabs in filenames, it's much more
rare to have a tab in a filename than it is to have a space, yes? There is
no loss on a single quote character. You're quoting the command line
argument.
On Mon, Dec 28, 2009 at 4:35 PM, erik quanstrom wrote:
du -a | awk '-F\t' '{print $2}' -
On Mon, Dec 28, 2009 at 4:25 PM, erik quanstrom wrote:
> i agree that du -a has a few holes. too bad whitespace
> is allowed in file names. i use the attached find.c.
> it's also available as contrib quanstro/find. by default
> the output is quoted so that it can be reparsed properly
> with rc or gettokens.
On Mon Dec 28 18:32:36 EST 2009, don.bai...@gmail.com wrote:
> du -a | awk '-F\t' '{print $2}' -
>
lossage on tabs and ' in filenames.
- erik
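erik's lossage is easy to reproduce on faked du lines (POSIX awk syntax here rather than the Plan 9 quoting above; the names are invented):

```shell
# With a tab field separator, a space inside the name survives...
spaced=$(printf '120\tfoo bar\n' | awk -F'\t' '{print $2}')
echo "$spaced"    # foo bar

# ...but a tab inside the name truncates it, as erik notes:
tabbed=$(printf '120\tfoo\tbar\n' | awk -F'\t' '{print $2}')
echo "$tabbed"    # foo
```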
i agree that du -a has a few holes. too bad whitespace
is allowed in file names. i use the attached find.c.
it's also available as contrib quanstro/find. by default
the output is quoted so that it can be reparsed properly
with rc or gettokens.
- erik

[attachment: find.c, truncated in the archive]
> It is suggested to use
>du -a | awk '{print $2}'
> instead of find. But what if filename contains spaces?
how about
du -a | awk '{$1=""; print}'
This does print a leading space but is simple enough,
or perhaps
du -a | while(s=`{read}) echo $s(2-)
which is more accurate bu
> du -a | awk '{print $2}'
du -a | awk '{$1=""; print}'
will be a good approximation...
++L
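The approximation has two artifacts, visible on a faked du -a line (invented name): a leading output separator, and runs of whitespace inside the name collapsing when awk rebuilds the record.

```shell
# Faked du -a line: size, a tab, then a name containing two spaces.
out=$(printf '120\tfoo  bar\n' | awk '{$1=""; print}')
echo "$out"
# " foo bar": a leading space appears, and the doubled space inside
# the name collapses, because assigning to $1 makes awk rebuild
# the record with a single OFS between fields.
```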