On Fri, 14 Jan 2011 17:53:04 +1030, Wayne Sierke wrote:
> I suspect it is a transcription error by Robert in his email.
>
> From man awk:
>
>sub(r, t, s)
>     substitutes t for the first occurrence of the regular
>     expression r in the string s. If s is not given, $0 is used.
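Given that argument order (pattern, replacement, target), a minimal sketch of the call under discussion; with no third argument, sub() operates on $0:

```shell
# sub(regexp, replacement[, target]) -- the " &" replacement puts a space
# in front of the matched letter, so "12a" becomes "12 a" and "6d" becomes
# "6 d", while all-digit "58" has no match and passes through untouched.
printf '12a\n6d\n58\n' | awk '{ sub(/[a-z]/, " &"); print }'
```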
On Fri, 2011-01-14 at 07:17 +0100, Polytropon wrote:
> On Thu, 13 Jan 2011 18:22:18 -0600 (CST), Robert Bonomi
> wrote:
> > True. But
> > sub(nr,"[a-z]"," &");
> >
> > does the trick. (tested on FreeBSD 7.2)
> >
> > Explanation: "&" is a 'replacement side' magic incantation to the regex
> > library that means 'that which was matched by the pattern regex'
On Thu, 13 Jan 2011 18:22:18 -0600 (CST), Robert Bonomi
wrote:
> True. But
> sub(nr,"[a-z]"," &");
>
> does the trick. (tested on FreeBSD 7.2)
>
> Explanation: "&" is a 'replacement side' magic incantation to the regex
> library that means 'that which was matched by the pattern regex'
> Date: Thu, 13 Jan 2011 06:28:19 +0100
> From: Polytropon
> Subject: awk question: replacing "%d%s" by "%d %s"
>
> I'm aware that this is not an awk question list, but I'm confident there
> are many awk gurus here who can surely help me with such a stupid
> problem. I also know that I get more
On Thu, Jan 13, 2011 at 12:28 AM, Polytropon wrote:
> I have strings of the form either "<number>" or
> "<number><letter>". I catch them with
...
> where "nr" is the name of the string. What I need
> is a simple space between <number> and <letter>,
> so for example "12a" would get "12 a", "6d" would
> get "6 d", and "58" would stay unchanged.
On Thu, 13 Jan 2011 01:00:17 -0500, Tom Limoncelli wrote:
> $ awk < data.txt > experiment.txt '{ num = $1 ; sub(/[^0-9]+$/, "",
> num) ; lets = $1 ; sub(/^[0-9]+/, "", lets); print num " " lets }' ;
> diff -cw control.txt experiment.txt
> $ # The above puts a space at the end of the first 3 lines
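The trailing space Tom notes can be avoided by adding the separator only when letters are actually present; a sketch of the same split-and-rejoin idea:

```shell
# num keeps the digits, lets keeps the letters; the space is inserted
# only when lets is non-empty, so "58" has no trailing space.
printf '12a\n58\n' | awk '{
    num = $1;  sub(/[^0-9]+$/, "", num)
    lets = $1; sub(/^[0-9]+/, "", lets)
    out = (lets == "") ? num : num " " lets
    print out
}'
```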
On Wed, 22 Apr 2009 12:38:47 -0700, Evuraan::ഏവൂരാന്
wrote:
> but this below, does not work
>
> tail -f /var/log/apache2/access.log |awk ' /192.168.1.100/ { print
> $0 | "mail m...@email.address "}'
I would suggest to keep the system() approach:
tail -f /var/log/apache2/access.log
nevermind, i got it to work, with a little help from
http://student.northpark.edu/pemente/awk/awk_sys.txt,
tail -f /var/log/apache2/access.log | awk '/192.168.1.100/
{system("echo " $0 "| mailx -s test_email m...@email.com" ) }'
thx..!
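The likely reason the print | "mail ..." form stayed silent: awk keeps that pipe open until it exits, which never happens under tail -f. Closing the pipe after each match flushes it immediately (and the dots in the address pattern should really be escaped, or /192.168.1.100/ will also match e.g. 192x168). A sketch with "cat -n" standing in for the mailx invocation:

```shell
# close() ends the command's stdin after every matching line, so each hit
# is delivered right away instead of being buffered until awk exits;
# cat -n restarts its numbering per invocation, showing that the command
# really ran once per match.
printf 'hit 192.168.1.100\nmiss\nhit 192.168.1.100\n' \
  | awk '/192\.168\.1\.100/ { cmd = "cat -n"; print | cmd; close(cmd) }'
```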
2009/4/22 Bill Campbell :
> You might want to look at ``swat
You might want to look at ``swatch'' which is designed to do
this, and monitors multiple log files simultaneously.
On Wed, Apr 22, 2009, Evuraan:: wrote:
>Greetings..!
>
>this works,
>
>tail -f /var/log/apache2/access.log | nawk '/192.168.1.100/{ print $0 }'
>
>and this too
On Thu, 9 Apr 2009 15:32:51 +0200 (CEST), Oliver Fromme
wrote:
> If ";" is the delimiter character, you need to tell awk
> about it (i.e. use the -F option). This one should work:
>
> awk -F';' '$3 ~ /^[a-z]{5}$/ {print}' file
You can even omit {print} because it's the default action
(to print the whole input line).
Just add a filter
NF > 2
to the script. You can even take care of 1 token lines and empty lines
in whatever way you wish with other filters.
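Put together, the delimiter, the field match, and the NF filter look like this; the {5} interval is spelled out longhand because some awk implementations (older mawk, for one) lack brace repetition:

```shell
# NF > 2 discards empty and short lines before $3 is ever examined;
# the default action prints matching records whole.
printf 'a;b;abcde\nshort\na;b;xy\n' \
  | awk -F';' 'NF > 2 && $3 ~ /^[a-z][a-z][a-z][a-z][a-z]$/'
```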
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebs
"n j" writes:
> Or awk only i.e. no sed:
>
> awk '!(/^$/) { print $(NF-1) }' user.csv
That's right. I originally suggested the sed and then was
thinking about it as I walked home yesterday and knew that awk
could test for the blank line condition before committing
suicide. :-)
Martin
> > > awk '{print $(NF-1)}' user.csv
>
> Yup, those blank lines will kill it for sure. A sed filter to
> remove blank lines ahead of the awk statement should allow it to
> work properly.
Or awk only i.e. no sed:
awk '!(/^$/) { print $(NF-1) }' user.csv
--
Nino
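A single NF guard covers both failure modes at once: a blank line has NF == 0 (so $(NF-1) is the invalid field -1), and a one-field line makes $(NF-1) refer to $0 rather than any real second-to-last field:

```shell
# Only records with at least two fields reach the print.
printf 'one two three\n\nsolo\nx y\n' | awk 'NF > 1 { print $(NF-1) }'
```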
__
Don Hinton writes:
> On Thursday 26 July 2007 15:26:02 Peter Boosten wrote:
> > P.U.Kruppa wrote:
> > > > awk '{print $(NF-1)}' user.csv
Yup, those blank lines will kill it for sure. A sed filter to
remove blank lines ahead of the awk statement should allow it to
work properly.
Martin McCormi
On Thursday 26 July 2007 15:26:02 Peter Boosten wrote:
> P.U.Kruppa wrote:
> > Hi (and sorry for this slightly OT question),
> >
> > I would like to extract the second last field of each line of a file
> > called user.csv .
> > So I try
> >
> > > awk '{print $(NF-1)}' user.csv
> >
> > awk: trying to access out of range field -1
On Thu, 26 Jul 2007, Peter Boosten wrote:
P.U.Kruppa wrote:
Hi (and sorry for this slightly OT question),
I would like to extract the second last field of each line of a file
called user.csv .
So I try
> awk '{print $(NF-1)}' user.csv
awk: trying to access out of range field -1
input record number 1, file user.csv
P.U.Kruppa wrote:
> Hi (and sorry for this slightly OT question),
>
> I would like to extract the second last field of each line of a file
> called user.csv .
> So I try
> > awk '{print $(NF-1)}' user.csv
> awk: trying to access out of range field -1
> input record number 1, file user.csv
At 07:43 PM 4/10/2007, Gary Kline wrote:
On Tue, Apr 10, 2007 at 06:35:33PM -0500, Derek Ragona wrote:
> At 06:17 PM 4/10/2007, Gary Kline wrote:
> >On Mon, Apr 09, 2007 at 06:54:07PM -0700, Rick Olson wrote:
> >> I'm assuming you've already taken care of this, but to answer your
> >> original qu
On Tue, Apr 10, 2007 at 06:35:33PM -0500, Derek Ragona wrote:
> At 06:17 PM 4/10/2007, Gary Kline wrote:
> >On Mon, Apr 09, 2007 at 06:54:07PM -0700, Rick Olson wrote:
> >> I'm assuming you've already taken care of this, but to answer your
> >> original question in AWK form, you could have done the
At 06:17 PM 4/10/2007, Gary Kline wrote:
On Mon, Apr 09, 2007 at 06:54:07PM -0700, Rick Olson wrote:
> I'm assuming you've already taken care of this, but to answer your
> original question in AWK form, you could have done the following:
>
> ls -l | awk '$8 == 2006 {system("rm " $9)}'
>
On Mon, Apr 09, 2007 at 06:54:07PM -0700, Rick Olson wrote:
> I'm assuming you've already taken care of this, but to answer your
> original question in AWK form, you could have done the following:
>
> ls -l | awk '$8 == 2006 {system("rm " $9)}'
>
I'll save your snippet to my growing
You are trying to remove the files whose names are given by
ls -lt | awk '{if ($8 == 2006) print $9}';
If you are in the same directory, or you have full pathnames, you can
do just (and avoid the 'for do done' loop)
rm $( ls -lt | awk '{if ($8 == 2006) print $9}' )
On Tue, Mar 06, 2007 at 07:27:56AM -0600, Derek Ragona wrote:
> You can loop through them using a shell script:
> for i in `ls -lt | awk '{if ($8 == 2006) print $9}'`;do rm $i;done
This is the safest way to rm or rm -i each file ($i); the
ls -ls | [awkstuff] spits out the entire
Gary Kline wrote:
Guys,
Having found $9 , how do I /bin/rm it (using system()--yes??)
in an awk one-liner?
I'm trying to remove packages from long ago and find and
print them with
ls -lt | awk '{if ($8 == 2006) print $9}';
but what
You can loop through them using a shell script:
for i in `ls -lt | awk '{if ($8 == 2006) print $9}'`;do rm $i;done
-Derek
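Any recipe built on parsing ls output inherits its weaknesses: filenames with spaces split across $9, $10, ..., and the year only appears in column 8 for sufficiently old files. A sketch of the same cleanup using find's timestamp tests instead; -newermt is a GNU find / modern BSD find feature, and the sample directory stands in for the real package directory:

```shell
# Select regular files last modified during 2006, no ls parsing involved.
dir=$(mktemp -d)
touch -t 200606010000 "$dir/pkg-from-2006"
touch "$dir/recent"
find "$dir" -type f -newermt 2006-01-01 ! -newermt 2007-01-01
# (append -exec rm {} + once the listing looks right)
```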
At 06:35 PM 3/5/2007, Gary Kline wrote:
Guys,
Having found $9 , how do I /bin/rm it (using system()--yes??)
in an awk one-liner?
On Mon, Mar 05, 2007 at 04:46:35PM -0800, Chuck Swiger wrote:
> On Mar 5, 2007, at 4:35 PM, Gary Kline wrote:
> > Having found $9 , how do I /bin/rm it (using system()--yes??)
> > in an awk one-liner?
>
> I gather that you are looking under /var/db/pkg...?
>
> >I'm trying to remove packages from long ago and find and
On Mar 5, 2007, at 4:35 PM, Gary Kline wrote:
Having found $9 , how do I /bin/rm it (using system()--yes??)
in an awk one-liner?
I gather that you are looking under /var/db/pkg...?
I'm trying to remove packages from long ago and find and
print them with
l
On Mon, Mar 05, 2007, Gary Kline wrote:
>
> Guys,
>
> Having found $9 , how do I /bin/rm it (using system()--yes??)
> in an awk one-liner?
>
> I'm trying to remove packages from long ago and find and
> print them with
>
> ls -lt | awk '{if ($8 == 2006) print $9}';
Bart Silverstrim wrote:
On Mar 6, 2006, at 4:45 PM, Noel Jones wrote:
On 3/6/06, Bart Silverstrim <[EMAIL PROTECTED]> wrote:
I'm totally drawing a blank on where to start out on this.
If I have a list of URLs like
http://www.happymountain.com/archive/digest.gif
How could I use Awk or Sed
On Mar 6, 2006, at 4:45 PM, Noel Jones wrote:
On 3/6/06, Bart Silverstrim <[EMAIL PROTECTED]> wrote:
I'm totally drawing a blank on where to start out on this.
If I have a list of URLs like
http://www.happymountain.com/archive/digest.gif
How could I use Awk or Sed to strip everything after the .com?
On 3/6/06, Bart Silverstrim <[EMAIL PROTECTED]> wrote:
> I'm totally drawing a blank on where to start out on this.
>
> If I have a list of URLs like
> http://www.happymountain.com/archive/digest.gif
>
> How could I use Awk or Sed to strip everything after the .com? Or is
> there a "better" way to
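One way, assuming the goal is to keep just the scheme and hostname: split on "/". $1 is then "http:", $2 is the empty string between the slashes, and $3 is the host:

```shell
echo 'http://www.happymountain.com/archive/digest.gif' \
  | awk -F/ '{ print $1 "//" $3 }'
# An equivalent sed spelling, anchored on the first slash after the host:
echo 'http://www.happymountain.com/archive/digest.gif' \
  | sed 's|^\(http://[^/]*\).*|\1|'
```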
On 1/20/06, Don Hinton <[EMAIL PROTECTED]> wrote:
>
> Hi Alexandre:
>
> On Friday 20 January 2006 16:59, Alexandre Vieira wrote:
> > Hello folks,
> >
> > I'm making a script to generate some statistics for a batch job and I'm
> > stuck with awk.
> >
> > For example:
> >
> > > %echo 1 2 3 4 5 6 | awk {'print $1 $2 $3 $4 $5 $6'}
Hi Alexandre:
On Friday 20 January 2006 16:59, Alexandre Vieira wrote:
> Hello folks,
>
> I'm making a script to generate some statistics for a batch job and I'm
> stuck with awk.
>
> For example:
>
> %echo 1 2 3 4 5 6 | awk {'print $1 $2 $3 $4 $5 $6'}
>
> it will output:
>
> 1 2 3 4 5 6
>
> I wan
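The question is cut off above, but one correction is worth making: print $1 $2 ... concatenates the fields with nothing between them, so the command as quoted would actually emit "123456", not "1 2 3 4 5 6". Commas insert OFS, which is where the output separator is controlled:

```shell
echo 1 2 3 | awk '{ print $1 $2 $3 }'                      # fields run together
echo 1 2 3 | awk '{ print $1, $2, $3 }'                    # default OFS: space
echo 1 2 3 | awk 'BEGIN { OFS="," } { print $1, $2, $3 }'  # CSV-style
```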
# [EMAIL PROTECTED] / 2003-12-15 16:30:33 -0700:
> i would like to do something like
>
> df | awk '{print $1}'
>
> to capture all the current file systems. But I would like to strip
> off the first and last lines, since these are generally -- not needed.
>
> the goal is to write a generalized script
man head
and
man tail
At 05:30 PM 12/15/2003, you wrote:
i would like to do something like
df | awk '{print $1}'
to capture all the current file systems. But I would like to strip
off the first and last lines, since these are generally -- not needed.
the goal is to write a generalized script
On Mon, Dec 15, 2003 at 08:39:06PM -0300, Fernando Gleiser wrote:
> On Mon, 15 Dec 2003, David Bear wrote:
>
> > i would like to do something like
> >
> > df | awk '{print $1}'
> >
> > to capture all the current file systems. But I would like to strip
> > > off the first and last lines, since these are generally -- not needed.
On Mon, 15 Dec 2003, David Bear wrote:
> i would like to do something like
>
> df | awk '{print $1}'
>
> to capture all the current file systems. But I would like to strip
> off the first and last lines, since these are generally -- not needed.
df | awk '$1 ~/^\/dev/ {print $1}'
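Fernando's pattern keeps only /dev-backed filesystems; if the goal is literally "drop the first and last lines" regardless of naming, printing each first field one record late does it without knowing the line count in advance. A sketch with sample df-like input:

```shell
# NR > 1 skips the header; printing the previous record's field means the
# final line is buffered but never printed.
printf 'Filesystem\n/dev/ada0\n/dev/ada1\ntotals\n' \
  | awk 'NR > 2 { print prev } NR > 1 { prev = $1 }'
```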