Ian wrote:
>>> Uri Guttman
[...snip...]
> As far as "top posting" ... it makes a lot of sense not to do that.
> I will cultivate new posting habits on this list and propagate them to the
> other lists I belong to. It will greatly improve their readability.
...and garner you some respect as well.
>>Uri Guttman
Thank you for the code pointers, Uri; it's much appreciated.
>a couple of things. it is good that you are offering to help but as you
>claim not to be an expert, it can hurt more than help. there are plenty
>of experts on this list who can help and fixing up weak code offered
>from b
> "I" == Ian writes:
I> Unfortunately I'm not an expert. I just read a few books and this list etc.
a couple of things. it is good that you are offering to help but as you
claim not to be an expert, it can hurt more than help. there are plenty
of experts on this list who can help and fixing up weak code offered
> "I" == Ian writes:
you asked for some code review earlier and here it is.
I> use strict;
use warnings;
I> my $inpf = 'student.txt';
I> my $uidf = 'uidf.txt';
I> open (DATA, $inpf) or die "Can't open file $inpf!\n";
don't use the DATA handle for your own files. DATA is the standard handle
Perl uses to read the text after the __DATA__ (or __END__) token in your script.
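A minimal sketch of the usual alternative, for comparison: a lexical filehandle and the three-arg form of open (adding $! to the die message is my own habit, not something from Ian's script):

use strict;
use warnings;

my $inpf = 'student.txt';
open my $in, '<', $inpf or die "Can't open file $inpf: $!\n";
while ( my $line = <$in> ) {
    # process $line here
}
close $in;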
> --
> *From:* Ian
> *To:* beginners@perl.org
> *Sent:* Monday, August 17, 2009 11:55:23 PM
> *Subject:* Re: Attention Please!! Need A Perl script to read the UIDs from
> two text files having the details in the format as given in the file (LDIF)
From: Ian
> This is how I would do it.
>
> To everyone else, please feel free to critique my Perl programming skills.
> I'm a Mainframe Assembler programmer and still learning perl. Any critique
> will just help to improve my skills.
The script is fine, the fact that you wrote it for free for s
This will do it.
I don't exactly know what you mean by "neatly written" but your email may
mangle the formatting of the program.
Can't do anything about that.
The results will be in uidf.txt.
use strict;
my $inpf = 'student.txt';
my $uidf = 'uidf.txt';
This is how I would do it.
To everyone else, please feel free to critique my Perl programming skills.
I'm a Mainframe Assembler programmer and still learning Perl. Any critique
will just help to improve my skills.
use strict;
> use Data::Dumper;
> my $inpf = 'student.txt';
> open (DATA, $inpf) or die "Can't open file $inpf!\n";
Read the DNs into a hash using a pattern, the UID being the key and
the DN being the value.
Then order the UIDs via a sort.
Then foreach over the sorted keys.
Jarrod
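A rough sketch of that approach, assuming each student entry carries a line like "dn: uid=jdoe,ou=students,dc=example,dc=com" (the attribute layout is a guess, since the actual LDIF is not shown):

use strict;
use warnings;

my %dn_for;
open my $in, '<', 'student.txt' or die "Can't open student.txt: $!\n";
while (<$in>) {
    chomp;
    if (/^dn:\s*(uid=([^,]+).*)/i) {
        $dn_for{$2} = $1;          # UID is the key, DN is the value
    }
}
close $in;

open my $out, '>', 'uidf.txt' or die "Can't open uidf.txt: $!\n";
print {$out} "$_\n" for sort keys %dn_for;
close $out;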
On Mon, Aug 17, 2009 at 10:11 AM, Jyotishmaan Ray wrote:
> Dear All,
>
> I have to read the uid from this file for each student entry
Dear All,
I have to read the uid from this file for each student entry, which is itself an
LDIF file, and extract it into another file, in serial order.
Can you provide me the simplest way to do it?
Thanks,
Jyotishmaan Ray
> -Original Message-
> From: frazzmata [mailto:[EMAIL PROTECTED]
> Sent: Monday, September 08, 2008 10:13
> To: beginners@perl.org
> Subject: comparing text files, in a way
>
> I am writing a program where I want to be able to locate information
> regarding a person in one file, if they appear in another.
I am writing a program where I want to be able to locate information
regarding a person in one file, if they appear in another.
For instance:
I have a file that just has student IDs (for students that are new)
It has a long list of Student IDs like this:
100955
104024
564765
123456
765437
1233
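A hedged sketch of one way to do that lookup with a hash (the file names and the layout of the details file are placeholders, since only the ID list is shown):

use strict;
use warnings;

# Load the new-student IDs into a hash for fast lookups.
my %is_new;
open my $ids, '<', 'new_ids.txt' or die "Can't open new_ids.txt: $!\n";
while (<$ids>) {
    chomp;
    $is_new{$_} = 1 if /\S/;
}
close $ids;

# Print any record from the details file whose first field is a new ID.
open my $info, '<', 'student_info.txt' or die "Can't open student_info.txt: $!\n";
while (<$info>) {
    my ($id) = split;
    print if defined $id and $is_new{$id};
}
close $info;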
On 10/25/07, Chas. Owens <[EMAIL PROTECTED]> wrote:
>
> From the sound of it what you want is in-place-editing:
>
> #!/usr/bin/perl -i
>
> use strict;
> use warnings;
>
> while (<>) {
>     s/this/that/;
>     print;   # with -i in effect, this writes each line back to the file
> }
>
> The code above will read in any number of files modifying "this" to
> "that" in each on
I need to make some edits on small (~30 lines) text files. From reading perldoc
and recent posts, am I correct in understanding that the proper way is to read
the file in through one handle and write it out through another? If that is the
case, I suppose I need to write in some code then to
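A small sketch of that read-one-handle, write-another pattern (the file name, the substitution, and the final rename are all placeholders, not anything from the original post):

use strict;
use warnings;

my $file = 'settings.txt';
open my $in,  '<', $file       or die "Can't read $file: $!\n";
open my $out, '>', "$file.new" or die "Can't write $file.new: $!\n";
while (my $line = <$in>) {
    $line =~ s/old value/new value/;    # whatever edit is needed
    print {$out} $line;
}
close $in;
close $out;
rename "$file.new", $file or die "Can't rename $file.new: $!\n";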
On Sep 11, 2007, at 10:05 PM, Gerald Wheeler wrote:
Correct
That can be done with a one-liner:
perl -0777 -pi.bak -e '$_ = q{new text goes here}' *.txt
The options -p, -i, and -e are documented in perlrun. The flag -0777
has the side-effect of slurping the whole file into $_ (one file at a time).
On 9/11/07, Gerald Wheeler <[EMAIL PROTECTED]> wrote:
> I have about 400 text files I need to replace the contents. All files
> are in the current directory as the perl script runs out of
snip
What you need to do depends heavily on what you mean by "replace the
contents".
I have about 400 text files whose contents I need to replace. All files
are in the current directory that the Perl script runs out of.
my $newText = "This is my new text.. anybody's text goes here";
open(INFILE, $plants)
while (<>)
{
print $newText;
}
I need help..
Thanks
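If "replace the contents" really does mean overwriting every file with the same new text, a plain-script sketch might look like this (the *.txt glob is an assumption; the one-liner shown further up does the same job and keeps backups):

use strict;
use warnings;

my $newText = "This is my new text.. anybody's text goes here\n";
for my $file (glob '*.txt') {
    open my $out, '>', $file or die "Can't write $file: $!\n";
    print {$out} $newText;
    close $out;
}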
"Chas Owens" schreef:
> Dr.Ruud:
>> Sundeep:
>>> Is there any module that will fetch me the records from a text file
>>> (like csv file) when I pass a SQL statement.
>>
>> Consider starting with SQLite. It is easy to import the text files
Hi,
I am doing requirement analysis to build a forum (discussion board)
using Perl.
Currently the data is planned to be kept in text files, but later it
may be moved to a database.
I am new to database modules (in fact to database-related stuff in general). I
read that the DBI module is database independent.
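Since the DBI calls stay the same when the storage backend changes, a hedged sketch with DBD::SQLite (table and column names are invented for illustration):

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=forum.db', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

$dbh->do('CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, author TEXT, body TEXT)');
$dbh->do('INSERT INTO posts (author, body) VALUES (?, ?)', undef, 'sundeep', 'first post');

my $rows = $dbh->selectall_arrayref('SELECT author, body FROM posts');
print "$_->[0]: $_->[1]\n" for @$rows;

$dbh->disconnect;

Moving to MySQL or PostgreSQL later would mostly mean changing the connect() string and installing the matching DBD driver.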
>>>>> ""sfantar"" == "sfantar" <[EMAIL PROTECTED]> writes:
"sfantar"> I would like to know how possible it is to create XML files from
plain text files.
"sfantar"> I have a huge number of text files I want to convert
On 09/15/2006 07:12 AM, sfantar wrote:
Hello All!
I would like to know how it is possible to create XML files from plain text
files.
I have a huge number of text files I want to convert into XML format.
Are there any Perl modules for this purpose?
[...]
XML::Simple can do it.
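For example, a minimal sketch (the record structure is invented; XML::Simple's XMLout turns a hash into elements):

use strict;
use warnings;
use XML::Simple;

my %record = (
    name  => 'Jane Doe',
    email => 'jane@example.com',
    phone => '555-0100',
);

# NoAttr => 1 writes child elements instead of attributes.
my $xml = XMLout(\%record, RootName => 'person', NoAttr => 1);
print $xml;

How the plain text maps onto that hash depends entirely on what the text files look like, which is the part the question leaves open.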
>
> Yes, I have already checked the CPAN but I haven't found yet what I am
> looking for.
> Do I have to write my own script from scratch to convert plain text
> files into XML files?
Your question makes no sense. XML matching what doctype? What kind
of plain text?
On 9/15/06, sfantar <[EMAIL PROTECTED]> wrote:
I would like to know how it is possible to create XML files from plain text
files. I have a huge number of text files I want to convert into XML format.
Are there any Perl modules for this purpose?
Have you checked CPAN yet?
Hello All!
I would like to know how it is possible to create XML files from plain text
files.
I have a huge number of text files I want to convert into XML format.
Are there any Perl modules for this purpose?
Thanks in advance for your help.
On Sat, 11 Feb 2006, I BioKid wrote:
> I need technical advice from all perlbuddies,
>
> I have 20000 text files and I want to convert it to XML and then to a
> database. I need a proper technical descritption to do it using PERL.
> I need details like which db should i use a
hi all,
I need technical advice from all perlbuddies,
I have 20000 text files and I want to convert them to XML and then to a
database. I need a proper technical description of how to do it using Perl.
I need details like which DB I should use, also.
S K
Hi,
I have got two versions of a script to eliminate single line-feeds from
a file. The first one does weird stuff - duplicating lines and messing
the text file up. The second one works (I copied it from a Perl guide),
but I don't understand why. I would much prefer the first one to work.
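Without seeing either script it is hard to say more, but the usual trick for removing single line-feeds while keeping blank-line paragraph breaks is something like this sketch (assuming that is the intent):

use strict;
use warnings;

my $text = do { local $/; <> };     # slurp the whole input file
$text =~ s/(?<!\n)\n(?!\n)/ /g;     # join lines separated by a single newline
print $text;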
Hey all, I've got this Perl program written that queries my LDAP for basic info, i.e.
what users are in what groups, what groups are in what OU, etc. The problem
is the results are HUGE, I am talking 20 MB+ each. So I was thinking all I'd
really need to do is echo/print to an .html,
On Mar 14, 2004, at 2:25 AM, R. Joseph Newton wrote:
I will have to give some thought to the implications of this for
programming practice.
If you find yourself playing with slurping (I'm a fan, as you can
probably tell), treat yourself to IO::All from the CPAN, while you're at
it. That module
"West, William M" wrote:
> by slurping the WHOLE file into memory at once (do you have room to put a 45
> megabyte file into memory?) the speed of the processing went up-
Highly unlikely. There is no real speed advantage with Perl in slurping a
file. The only good reason I can think of is if
>
>Price, Jason wrote:
>> I'm trying to optimize a script used for processing large text log
>> files (around 45MB). I think I've got all the processing fairly well
>> optimized, but I'm wondering if there's anything I can do to speed up
>> the initial loading of the file.
oh the pain and suf
>
> Currently, I'm performing operations on the file one line at a time,
> using a "while ()" loop. I'm pushing important lines into an
> array, so further processing is done in memory, but the initial pass
> on the file is rather time consuming. Is there a
file is
rather time consuming. Is there a more efficient way to work with large
text files?
If you post the script, we might be able to give some helpful suggestions.
James
Hi, I have a similar situation: say I have a text file with 5000 columns
and 3000 rows. I need to subset the data about
On Dec 29, 2003, at 12:05 AM, danl001 wrote:
Hi,
Howdy.
If this question would be better posted to another perl list, please
let me know.
I think you found the right place.
I have a very large text file (~2 GB) and it's in the following
format:
header line
header line
header line
marker 1
How's about this (please excuse any syntax errors).
my $out1 = "$file1.txt";
my $out2 = "$file2.txt";
open(INPUT, $filename) or die "error: $filename cannot be opened\n";
open(OUT1, ">$out1") or die "error: $out1 cannot be opened for writing\n";
open(OUT2, ">$out2") or die "error: $out2 cannot b
Have you tried benchmark on it?
-Tom Kinzer
-Original Message-
From: danl001 [mailto:[EMAIL PROTECTED]
Sent: Sunday, December 28, 2003 10:06 PM
To: [EMAIL PROTECTED]
Subject: techniques for handling large text files
Hi,
If this question would be better posted to another perl list, please let
me know.
I have a very large text file (~2 GB) and it's in the following format:
header line
header line
header line
marker 1
header line
header line
header line
marker 2
line type 1
line type 1
line type 1
...
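A hedged, line-by-line sketch of splitting such a file on the marker lines (the marker regex and the output naming are guesses, since the real marker text is not shown in full):

use strict;
use warnings;

my ($out, $count);
open my $in, '<', 'big.txt' or die "Can't open big.txt: $!\n";
while (my $line = <$in>) {
    if ($line =~ /^marker\s+\d+/) {      # start a new output file at each marker
        close $out if $out;
        $count++;
        open $out, '>', "section_$count.txt"
            or die "Can't write section_$count.txt: $!\n";
    }
    print {$out} $line if $out;          # header lines before the first marker are skipped
}
close $out if $out;
close $in;

Because it never holds more than one line in memory, the 2 GB size should not matter.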
In article <[EMAIL PROTECTED]>, Mike M wrote:
> I've found this script on another message board that is close, but still
> doesn't work with my data. Any ideas on modifications? I think my
> biggest problem is the regex in the split function, because what this does
> is match ONLY against the fir
what I hope is a simple question: I have a Perl
> > script that parses a log file from our proxy server and reformats it to a
> > more easily readable space-delimited text file. I also have another file
> > that has a categorized list of internet domains, also space-delimited.
A snippet of both text files is below:
Proxy Log
snip
10/23/2003 4:18:32 192.168.0.100 http://www.squid-cache.org OK
10/23/2003 4:18:33 192.168.1.150 http://msn.com OK
10/23/2003 4:18:33 192.168.1.150 http://www.playboy.com DENIED
snip
Categorized Domains List
---
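A rough sketch of the lookup side, assuming the domains file holds "domain category" pairs and that only the host part of the logged URL should be matched (both file names are placeholders):

use strict;
use warnings;

# Build a domain => category lookup table.
my %category;
open my $doms, '<', 'domains.txt' or die "Can't open domains.txt: $!\n";
while (<$doms>) {
    my ($domain, $cat) = split;
    $category{lc $domain} = $cat if defined $domain and defined $cat;
}
close $doms;

# Tag each proxy-log line with the category of its URL's host.
open my $log, '<', 'proxylog.txt' or die "Can't open proxylog.txt: $!\n";
while (my $line = <$log>) {
    chomp $line;
    my (undef, undef, undef, $url) = split ' ', $line;
    next unless defined $url;
    my ($host) = $url =~ m{^https?://([^/\s]+)};
    my $cat = ($host and $category{lc $host}) ? $category{lc $host} : 'uncategorized';
    print "$line $cat\n";
}
close $log;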
From: "Josh" <[EMAIL PROTECTED]>
> Ok Heres the deal :) I have a script to write to a dat file and the
> basic output in the dat file is
>
> username:plan: so an example would be
> computer:50: meaning the username computer has 50 hours of paid dialup
> access.
>
> Now Lets say "computer" cal
Josh wrote at Wed, 29 May 2002 23:56:48 +0200:
> Ok Heres the deal :) I have a script to write to a dat file and the basic output in
>the dat file
> is
>
> username:plan: so an example would be
> computer:50: meaning the username computer has 50 hours of paid dialup access.
>
> Now Lets say "c
OK, here's the deal :) I have a script that writes to a dat file, and the basic output in
the dat file is
username:plan: so an example would be
computer:50: meaning the username computer has 50 hours of paid dialup access.
Now let's say "computer" calls and wishes to upgrade his account to 100 hours
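A hedged sketch of one way to bump a user's plan in that colon-delimited file: read it all, rewrite it (the file name is a placeholder; the username:plan: layout is taken from the description above):

use strict;
use warnings;

my ($user, $new_plan) = ('computer', 100);
my $datfile = 'users.dat';

open my $in, '<', $datfile or die "Can't read $datfile: $!\n";
my @lines = <$in>;
close $in;

for my $line (@lines) {
    my ($name) = split /:/, $line;
    $line = "$name:$new_plan:\n" if defined $name and $name eq $user;
}

open my $out, '>', $datfile or die "Can't write $datfile: $!\n";
print {$out} @lines;
close $out;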
Deniz Hepşen wrote:
>
> Hi,
Hello,
> I want to do a parse fonction which reads prices.txt. My prices.txt file is like
>
> 20::30::1
> 3::2::7
>
> When i want to see for example "2", i want to type "print "$x[1][1]";"
> Why this function below is false?
#!/usr/bin/perl -w
use strict;
Hi,
I want to do a parse function which reads prices.txt. My prices.txt file is like
20::30::1
3::2::7
When I want to see, for example, "2", I want to type "print "$x[1][1]";"
Why is the function below wrong?
thanx
sub parse {
    open (PRICES , "<prices.txt");
    ...
    close (PRICES);
}
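A small sketch of what that parse routine presumably wants to do, assuming the goal is a two-dimensional array @x so that $x[1][1] prints 2 for the sample data above:

use strict;
use warnings;

our @x;    # the 2-D array the rest of the script reads via $x[$row][$col]

sub parse {
    open my $prices, '<', 'prices.txt' or die "Can't open prices.txt: $!\n";
    while (my $line = <$prices>) {
        chomp $line;
        push @x, [ split /::/, $line ];   # each row becomes an array reference
    }
    close $prices;
}

parse();
print "$x[1][1]\n";    # prints 2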
> close (NEWQUOTES);
> close (QUOTEP);
>
> Rob
>
>
> -Original Message-
> From: Scott [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, January 02, 2002 2:50 PM
> To: Hanson, Robert
> Cc: [EMAIL PROTECTED]
> Subject: RE: Parsing large text files
>
OK, I think I have it opening the file correctly; now I want to create a
loop to process every record in the file. With the code below it does one
line and stops. Should I put a while in front of the line processing?
Thanks,
open(QUOTEP,"$qivQuoteFile");
open(NEWQUOTES, ">newtest5.txt");
If the file is too large, don't create a list of lines in memory.
You may have problems if you do it.
Instead, do
open(NEWQUOTES,"newquotes.txt");
open(OTHERFILE,">otherfile.txt");
while ($line = <NEWQUOTES>) {
chomp $line;
my @fields = map { $_ .= " " x (255-length($_)) } split(/,/,$line);
Ok, do you recommend I open the file and create a list of the lines in
that file? Example below of how I did the comma-separated list:
open(QUOTE2,"$QuoteFile");
open(NEWQUOTES, ">>newquotes.txt");
while ($line = <QUOTE2>)
{
$line =~s/\t/,/g;
print NEWQUOTES "$line";
}
close (NEWQUOTES);
> $fields[$i] = substr($fields[$i],0,255);
> }
>
> # only include fields #0 and #2 (change to your needs)
> @fields = @fields[0,2];
>
> # print the comma-delimited fields
> print join(',', @fields);
>
>
> Rob
>
> -Original Message-
> From: Scott [mailto:[EMAIL PROTECTED]]
From: Scott [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 02, 2002 11:22 AM
To: [EMAIL PROTECTED]
Subject: Parsing large text files
Hi:
I am developing an input routine in Perl that will take a SQL-exported file
in tab delimited format and convert it to something that our mainframe can
process. The file will ultimately have 255 characters for each field.
I have been successful in replacing the tabs with commas, but now
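A condensed sketch of the whole transformation the thread above is building (tab-delimited in, every field padded to 255 characters out); the file names are placeholders, and whether the mainframe wants any separator between the padded fields is an assumption (none is written here):

use strict;
use warnings;

open my $in,  '<', 'export.txt'    or die "Can't read export.txt: $!\n";
open my $out, '>', 'mainframe.txt' or die "Can't write mainframe.txt: $!\n";

while (my $line = <$in>) {
    chomp $line;
    my @fields = split /\t/, $line, -1;           # keep trailing empty fields
    $_ = sprintf '%-255.255s', $_ for @fields;    # pad (or trim) every field to 255 chars
    print {$out} join('', @fields), "\n";
}

close $in;
close $out;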
Hello All,
I have two files; one contains entries that need to be placed in
the other, with some parameters around it.
1. I have to check and make sure the entries are not already there
(I did that using grep)
2. 3 of the lines I have to add need to be added in the middle of the
file.
(I do
On Fri, Jul 27, 2001 at 04:13:08PM -0500, Shepard, Gregory R wrote:
> It still appears that the -T is also reading the directories in to the hash.
> the statement below looks logical. Is there a test to specifically exclude
> directories?
There is a test to check if a file is a directory, -d; see perldoc -f -X.
You guys are great!
-Original Message-
From: Wagner-David [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 4:46 PM
To: [EMAIL PROTECTED]
Subject: RE: Read text files only into a hash
I made the change as
$dir_file = "$dir/$file";
next if ( -d
I made the change as
$dir_file = "$dir/$file";
next if ( -d $dir_file ); # if directory then next
next unless -T "$dir_file"; # if not a text or what exhibits
text file next
Ran it it got only text files and no directories.
Wags ;)
Sorry -- let me try that again ...
Why don't you change your next statement to:
next unless -T "$dir/$file" || /^\./;
-Original Message-
From: Shepard, Gregory R [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 3:47 PM
To: [EMAIL PROTECTED]
Subject: Read text files only into a hash
Why don't you change your next statement to:
'next unless -T "$dir/$file"';
That will skip over any non-text files, including dirs and '.'
-Original Message-
From: Shepard, Gregory R [mailto:[EMAIL PROTECTED]]
Sent: Friday, July 27, 2001 3:47 PM
To: [EMAIL PROTE
All,
I am trying to read only text files (using the -T test) from a directory
into a hash, and not directories themselves. How do I prevent it from
reading in directories? This sounds like a basic question and I know there is
probably an easy answer... but I can't think of it. Thanks.
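A hedged sketch of that directory scan (the directory name is a placeholder, and storing the file contents as the hash values is an assumption about what the original script keeps):

use strict;
use warnings;

my $dir = '/some/dir';
my %contents;

opendir my $dh, $dir or die "Can't open $dir: $!\n";
for my $file (readdir $dh) {
    my $path = "$dir/$file";
    next if -d $path;        # skip directories, including . and ..
    next unless -T $path;    # keep only things that look like text files
    open my $fh, '<', $path or die "Can't read $path: $!\n";
    local $/;
    $contents{$file} = <$fh>;
    close $fh;
}
closedir $dh;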
Is someone out there who can help me, please?
Thanx
> -Original Message-
> From: Najamuddin, Junaid
> Sent: Wednesday, July 25, 2001 5:59 PM
> To: '[EMAIL PROTECTED]'
> Subject: Comparing two text files and outputting difference
>
> Hi,
>
Hi,
I am brand new to Perl,
learning slowly though.
I would appreciate someone's help on this.
I wrote a small script with someone's help.
The script opens up two text files and goes through them line by line.
They are both in the same format: firstfile has all the entries, the secondfile
has som
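A small sketch of that comparison, assuming "difference" means entries present in firstfile but missing from secondfile (the file names follow the description above):

use strict;
use warnings;

my %seen;
open my $second, '<', 'secondfile' or die "Can't open secondfile: $!\n";
while (<$second>) {
    chomp;
    $seen{$_} = 1;
}
close $second;

open my $first, '<', 'firstfile' or die "Can't open firstfile: $!\n";
while (<$first>) {
    chomp;
    print "$_\n" unless $seen{$_};    # entries only in firstfile
}
close $first;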
A snippet of the files would be most helpful...
- Original Message -
From: "Carl Rogers" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, July 24, 2001 4:51 PM
Subject: Extracting data from text files using Perl
> Good day;
> I have been banging
Good day;
I have been banging my head against the wall on this problem for a while
and am hoping someone out there may have a resource (web site, book, etc)
that can help me figure this puppy out...
I have been given a couple of text files that look like they pulled names,
addresses and phone
I think you need to start reading a few books, my friend.
Here are a few places to start:
read the regexp tutorial I wrote at http://japh.nu
read "perldoc perlre" for more details
look into getting "Learning Perl" by Randal Schwartz; it will teach you many
of the basics you'll need for Perl
To: "Tyler Longren" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, July 18, 2001 2:25 PM
Subject: Re: looking at rows in text files
> using 'my' within the while loop will make it lose its value once the loop
> exits
using 'my' within the while loop will make it lose its value once the loop
exits and it will be reset every time you read in a new line.
declare 'my @list' above the loop, and you'll not have that problem.
for readability, use:
if (/(\S+)/) { push @list, $1 }
although that's a bit superfluous, I think,
t print anything to the screen.
Really, thank you!
- Original Message -
From: "Jos I. Boumans" <[EMAIL PROTECTED]>
To: "Tyler Longren" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, July 18, 2001 1:29 PM
Subject: Re: looking at rows in text files
it's essentially the same thing, only instead of handing the match to the
print operator, we store it in @list
like so:
open(FP, "$tempfile"); # Open $tempfile for reading
while(<FP>) { push @list, /(\S+)/ }
$list[0] will hold the first match, $list[1] the 2nd and so forth
hope this helps,
Jos
>
If your file really looks like that and you're really only
trying to print the first column just:
cut -d' ' -f1 < your.file
from the command prompt.
or if you want the dns server name instead:
tr -s '[:blank:]' '\t' < cu.txt | cut -f2
-will
a short way:
open I, "file.txt" or die $!;
while (<I>) { print /(\S+)/ }
hope this helps,
Jos Boumans
>To: "Perl Beginners" <[EMAIL PROTECTED]>
>Subject: looking at rows in text files
>Date: Wed, 18 Jul 2001 15:43
>
> Hello everyone,
>
> I have a file that contains a few domain names, and their dns servers (kind
> of like a zonefile). Here's the
How about:
use strict;
open(IN, "file.txt") || die("RRRGGGHHH $!");
while(<IN>) # loop through file setting each line to $_
{
chomp; # lose newline
/^([^\s]+)\s/; # look for anything at the beginning of the string
# up to the first space or tab character and remember it
Hello everyone,
I have a file that contains a few domain names, and their dns servers (kind
of like a zonefile). Here's the format:
my.com dns1.blah.com
me.com dns1.blah.com
we.com dns1.blah.com
you.com dns1.blah.com
How can I get ONLY the domains out of that file, and print each domain
On Tue, Apr 17, 2001 at 01:33:39PM -0700, Paul Jasa wrote:
> I want to be able to divide up a text file by using specific
> words in the text, in other words, I'd like to be able to tell the script
> to:
>
> Read file ./FILE, and from that file,
my $file = do { local $/; <FILE> }; # slurp the whole file
I have used join in order to make one big line of a document for searches such as the
one you suggest. Also, in another script I did this:
open(FILE,"$_");
my $txt = "";
$searchfile = $_;
while (<FILE>) { $txt .= $_; }
close(FILE);
## Search $txt for $string; if it is found, write the name
Dear all,
I have tried various things, and read and searched the Camel book, the Llama
book, and another Perl reference to no avail. Your help would be much
appreciated. I want to be able to divide up a text file by using specific
words in the text, in other words, I'd like to be able to tell
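A hedged sketch of one way to do that: slurp the file and split wherever a chosen marker word appears (the word CHAPTER and the output naming are placeholders, not words from the actual file):

use strict;
use warnings;

open my $fh, '<', './FILE' or die "Can't open ./FILE: $!\n";
my $text = do { local $/; <$fh> };    # slurp the whole file
close $fh;

# Split wherever the marker word appears, keeping the marker with each chunk.
my @sections = split /(?=\bCHAPTER\b)/, $text;

my $n = 0;
for my $section (@sections) {
    $n++;
    open my $out, '>', "part_$n.txt" or die "Can't write part_$n.txt: $!\n";
    print {$out} $section;
    close $out;
}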