On Sun, Nov 29, 2015 at 03:30:53PM +0530, Pritish Pattanaik wrote:
> *Remove duplicate elements from an array; it will maintain the original
> order*
>
> __CODE__
> @array = qw(11 2 3 4 55 4 3 2);
> %hash = ();
> for(my $i=0;$i<=$#array;$i++){
> # store the position from array. use array ele
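The rest of that loop is cut off in the digest. As a rough sketch of an order-preserving version (assuming the intent was to keep only the first occurrence of each element; the variable names are illustrative):
#!/usr/bin/perl
use strict;
use warnings;

my @array = qw(11 2 3 4 55 4 3 2);
my (%seen, @unique);
for my $elem (@array) {
    next if exists $seen{$elem};   # already kept this element earlier
    $seen{$elem} = 1;              # remember it
    push @unique, $elem;           # keep the first occurrence only
}
print "Array:@unique\n";           # Array:11 2 3 4 55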
Hi Jitendra,
some comments on your code:
On Sun, 29 Nov 2015 13:09:42 +0530
Jitendra Barik wrote:
> Hi Sunita,
>
> Please find the code snippet here:
>
> @array = qw(1 2 3 4 5 4 3 2);
You are missing "use strict;", "use warnings;" and "my" declarations here:
http://perl-begin.org/tutorials/
Hi,
*Remove duplicates from an array using a hash; it will not preserve the original
order of the array*
__CODE__
my @array = qw(11 2 3 4 55 4 3 2);
my %hash;
map { $hash{$_}++ } @array;
@array = keys %hash;
print "Array:@array\n";
__END__
*Remove duplicate elements from an array; it will main
Hi Sunita,
Please find the code snippet here:
@array = qw(1 2 3 4 5 4 3 2);
foreach (@array){
$myhash{$_} = $myhash{$_} + 1;
}
while(($s,$k) = each(%myhash)){
push @res,$k;
}
print "Array in unique is : @res\n";
Regards,
Jitendra
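For what it's worth, a corrected sketch of that snippet: each() returns (key, value) pairs, so pushing the second variable collects the counts rather than the elements, and the snippet also needs strict, warnings and my declarations:

#!/usr/bin/perl
use strict;
use warnings;

my @array = qw(1 2 3 4 5 4 3 2);
my %myhash;
$myhash{$_}++ for @array;                 # count each element

my @res;
while ( my ($elem, $count) = each %myhash ) {
    push @res, $elem;                     # keep the element (the hash key), not its count
}
print "Array in unique is : @res\n";      # order is whatever the hash hands back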
On Thu, Nov 26, 2015 a
See if this meets your needs:
http://www.arl.wustl.edu/projects/fpx/references/perl/cookbook/ch04_07.htm
Mike
On 11/25/2015 1:53 AM, beginners-digest-h...@perl.org wrote:
Hi
I want to create a unique array.
I have the code below. It is creating an array which will have
duplicate data
Hi Sunita,
On Wed, 25 Nov 2015 13:23:24 +0530
Sunita Pradhan wrote:
> Hi
> I want to create a unique array .
You probably mean an "array with unique elements". To do so, you should use a
hash. See:
http://perl-begin.org/topics/hashes/
> I have the code below. It is creating an array which wi
Hi
I want to create a unique array.
I have the code below. It is creating an array which will have duplicate data.
How can I filter the duplicate data?
#!/usr/bin/perl
use Data::Dumper;
#my $cnt = scalar @$net_int_parsed_output;
$net_int_parsed_output = [lif_1 ,lif_2,lif_3,lif_4,lif_1,lif_2];
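One possible way to filter them (a sketch, not the poster's code): assign the elements to a hash slice and take the keys back out. The element names are quoted here so it compiles under strict; note that the original order is lost.

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

my $net_int_parsed_output = [ qw(lif_1 lif_2 lif_3 lif_4 lif_1 lif_2) ];

my %uniq;
@uniq{ @$net_int_parsed_output } = ();   # each element becomes a hash key once
my @unique = keys %uniq;                 # lif_1 lif_2 lif_3 lif_4, in hash order

print Dumper( \@unique );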
Thank you Jim. Your advice worked perfectly.
At 11:15 PM -0600 1/8/12, Chris Stinemetz wrote:
How do I extract uniq elements from an array? Basically I want to
preserve the order of the elements as they are first seen, but I would
like to remove any duplicates.
Use a hash to keep track of elements. Create an entry in the hash
with the
On Sun, Jan 8, 2012 at 11:15 PM, Chris Stinemetz
wrote:
> How do I extract uniq elements from an array? Basically I want to
> preserve the order of the elements as they are first seen, but I would
> like to remove any duplicates.
>
You may want to check out List::MoreUtils on CPAN.
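For example (assuming the module is installed from CPAN), its uniq function keeps the first occurrence of each element, so the original order is preserved:

use strict;
use warnings;
use List::MoreUtils qw(uniq);

my @array  = qw(1 2 3 4 5 4 3 2);
my @unique = uniq @array;          # 1 2 3 4 5 -- first-seen order preserved
print "@unique\n";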
How do I extract uniq elements from an array? Basically I want to
preserve the order of the elements as they are first seen, but I would
like to remove any duplicates.
Below is what I have so far.
Thank you in advance.
Chris
#!/usr/bin/perl
use warnings;
use strict;
use Data::Dumper;
my
Rob Dixon wrote:
> $uniq{$_} = 1 foreach @holdArr;
I prefer "foreach" to "for", mainly because it is shorter.
Alternative:
@uniq{@holdArr} = (1) x @holdArr;
Test-1:
perl -MData::Dumper -wle'
@keys = qw(a b c) ;
@hash{ @keys } = (1) x @keys ;
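The one-liner is cut off above; a sketch of what it presumably demonstrates is below. Assigning (1) x @keys through a hash slice gives every key the value 1, whereas assigning the empty list creates the keys with undef values, so they exist but are not defined:

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

my @keys = qw(a b c);

my %ones;
@ones{ @keys } = (1) x @keys;     # every key gets the value 1

my %empty;
@empty{ @keys } = ();             # keys are created, values are all undef

print Dumper( \%ones, \%empty );
print exists  $empty{a} ? "exists\n"  : "missing\n";   # exists
print defined $empty{a} ? "defined\n" : "undef\n";     # undef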
Shiping Wang wrote:
At 02:04 PM 10/18/2006, Johnson, Reginald (GTI) wrote:
I am trying to understand this sort and uniq code that I came across in
the archive. This works, but I thought the %uniq would have the sorted
and uniqed values. What is needed if I didn't want to print the value
On Wed, Oct 18, 2006 at 03:04:32PM -0400, Johnson, Reginald (GTI) wrote:
> I am trying to understand this sort and uniq code that I came across in
> the archive.
I don't see anything to do with sort in this code.
> This works, but I thought the %uniq would have
At 02:04 PM 10/18/2006, Johnson, Reginald (GTI) wrote:
I am trying to understand this sort and uniq code that I came across in
the archive. This works, but I thought the %uniq would have the sorted
and uniqed values. What is needed if I didn't want to print the values
out immediately bu
I am trying to understand this sort and uniq code that I came across in
the archive. This works, but I thought the %uniq would have the sorted
and uniqed values. What is needed if I didn't want to print the values
out immediately but put them in an array or hash?
#!/usr/bin/perl
use strict
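The archived code is cut off here, but the question is answerable on its own: instead of printing inside the loop, collect the keys of the seen-hash afterwards. A sketch (the data is made up):

#!/usr/bin/perl
use strict;
use warnings;

my @data = qw(pear apple pear cherry apple);

my %uniq;
$uniq{$_}++ for @data;                 # %uniq records each value and its count

my @sorted_uniq = sort keys %uniq;     # apple cherry pear -- kept for later use
print "later: @sorted_uniq\n";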
On Wed, 13 Apr 2005, M. Kristall wrote:
> Who do you think has more viewers? This particular thread or Google?
Google links to things, in part, because people mention them online.
When *Google* sees this thread, it will register that as a positive vote
for the site in question, and will as a r
Chris Devers wrote:
On Wed, 30 Mar 2005, Peter Rabbitson wrote:
Anyway my 2c - I myself use the [$elided] archives quite a bit, which
does not prevent me from owning hard prints of the Cookbook, the
Pocket Ref and recently Object Oriented Perl. It however prevents me from
owning 2 pcs of each of t
On Wed, 30 Mar 2005, Peter Rabbitson wrote:
> Anyway my 2c - I myself use the [$elided] archives quite a bit, which
> does not prevent me from owning hard prints of the Cookbook, the
> Pocket Ref and recently Object Oriented Perl. It however prevents me from
> owning 2 pcs of each of those not-so-
>
> Not that I see why this came up in the first place...
>
How funny... By the same token you can bring a gun to a social meeting
and 10 minutes later yell "Huh?! WTF did this come from?!". Anyway my 2c - I
myself use the unix.org.ua archives quite a bit, which does not prevent me
from
As in free beer :)
That's mighty nice of you :)
I like Guinness, I can send you my address off list :)
Oh and I'd like you to help on a project next week, 40 hours sound
ok? I'll send the info you need so you can buy a plane ticket and get a
hotel to come "share knowledge" for free...
I appre
On Wed, 30 Mar 2005, Michael Gale wrote:
> Besides, I am not sure where you are going with this? Are you saying
> that the content on the web page is pirated so I should not view it?
That's debatable, but you *definitely* shouldn't publicize it.
> Information should be free, I believe all book
Google returned the page as a search result; where it gets the pages
from is not my fault.
Besides, I am not sure where you are going with this? Are you saying
that the content on the web page is pirated so I should not view it?
Information should be free, I believe all books and information
Michael,
I noted from your original posting that you quoted from a URL which is
controversial
in that the Perl-related books from O'Reilly are actually pirated
copies. This was pointed out by one of the
subscribers to this list in a recent thread.
Alfred,
Michael Gale wrote:
Hello,
I fi
Hello,
I fixed it: the string initially contained special characters such
as % signs. I now strip those characters out before the check instead of
after.
Michael.
On Wed, 2005-03-30 at 16:19 -0700, Michael Gale wrote:
> Hello,
>
> I tried your example, it worked as expected,
Hello,
I tried your example, it worked as expected, here is the entire code:
--snip--
my @servers;
my $each_server;
my $each_host;
my %seen = ( );
print "List of systems being performance monitored\n";
print "Please select a host\n";
open
On Wed, 30 Mar 2005 15:53:25 -0700, Michael Gale wrote:
>
> But it is not working as expected: even if the array has the first three
> entries the same, the function "show_host" gets called three times?
>
> foreach $each_server(@servers)
> {
> unless ($see
Hello,
I have been using the following lines to run through an array (@servers) and
run a function on each one. But I want to avoid running the function
twice if an entry exists twice.
I got the following from
http://www.unix.org.ua/orelly/perl/cookbook/ch04_07.htm
But it is not working as exp
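The recipe from that cookbook page boils down to guarding the call with a %seen hash, roughly like this (show_host is only a stub here, and the server names are made up):

#!/usr/bin/perl
use strict;
use warnings;

sub show_host { print "checking $_[0]\n" }    # stand-in for the real function

my @servers = qw(alpha alpha alpha beta);

my %seen;
foreach my $each_server (@servers) {
    next if $seen{$each_server}++;   # true from the second occurrence onwards
    show_host($each_server);         # so this runs once per distinct entry
}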
Including:
uniq
sort
grep & egrep
tar
And plenty of others. I just throw them all into my c:\windows\system32
directory.
I am a Linux user, but I have to use Windows at work. These tools make it
bearable.
Note:
If you are using Windows XP, some files (such as find.exe) will
automatically
be overwri
Harry Putnam wrote:
>
> "John W. Krahn" <[EMAIL PROTECTED]> writes:
> >
> > And then get the sorted list like this:
> >
> > @sorted = sort { $name{ $a } cmp $name{ $b } } keys %name;
>
> I've seen that in books and stuff but was never able to fathom out
> how it does what it does. So never actua
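In case it helps, a small illustration of that sort: the block receives each pair of keys in $a and $b and compares the values they map to, so the keys come back ordered by their values rather than by themselves (the %name contents here are made up):

use strict;
use warnings;

my %name = (
    '/usr/lib/perl5/5.6.1/Win32.pod'        => 'Win32.pod',
    '/usr/lib/perl5/5.6.1/pod/perlfaq4.pod' => 'perlfaq4.pod',
    '/usr/lib/perl5/5.6.1/pod/attrs.pod'    => 'attrs.pod',
);

# sort hands the block two keys at a time as $a and $b; cmp compares the
# short names those keys map to, so the full paths are ordered by short name.
my @sorted = sort { $name{$a} cmp $name{$b} } keys %name;
print "$_\n" for @sorted;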
"John W. Krahn" <[EMAIL PROTECTED]> writes:
>> But it doesn't really give the results I was after either. I wanted
>> the sort on short *.pod name. Other wise I have things like:
>>/usr/lib/perl5/5.6.1/Win32.pod
>> coming first.
>>
>> So my formulation may be better in that regard.
>
> Popu
Harry Putnam wrote:
>
> "John W. Krahn" <[EMAIL PROTECTED]> writes:
>
> > Harry Putnam wrote:
> >>
> >> @sorted = sort @unsorted;
> >
> > You probably should use the keys of %name which are unique.
>
> How cool, I hadn't realized this expression:
>if ($name{$File::Find::name}++ == 0){
> Actu
Harry Putnam <[EMAIL PROTECTED]> writes:
Whew, for a minute there I thought I'd screwed that up too. But why
> are there dups to begin with?
Gets us right back to SPENCERS who had it right from the start.
I wish I knew better what this code is doing:
if (-d && /^[a-z]/) { $File::Find::pr
"John W. Krahn" <[EMAIL PROTECTED]> writes:
> Harry Putnam wrote:
>>
>> "Charles K. Clarkson" <[EMAIL PROTECTED]> writes:
>>
>> > Harry Putnam said:
>> > :
>> > : Here is the problem:
>> > :
>> > :
"John W. Krahn" <[EMAIL PROTECTED]> writes:
> "John W. Krahn" wrote:
>>
>> The initial value in $name{$File::Find::name} will be undef not zero so
>> comparing it to zero will not work.
>>
>> if ( $name{ $File::Find::name }++ ) {
>
> Sorry, my mistake, it does work.
Whew, for a minute t
"John W. Krahn" wrote:
>
> The initial value in $name{$File::Find::name} will be undef not zero so
> comparing it to zero will not work.
>
> if ( $name{ $File::Find::name }++ ) {
Sorry, my mistake, it does work.
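A small demonstration of why it works: post-increment returns the value the entry held before the increment, and undef behaves as 0 (false) in that context, so the first encounter takes the else branch and later ones take the if branch:

use strict;
use warnings;

my %name;
for my $path ( '/a/Foo.pm', '/b/Bar.pm', '/a/Foo.pm' ) {
    if ( $name{$path}++ ) {            # returns the OLD value, then increments
        print "duplicate: $path\n";    # old value was 1 -> true on the repeat
    }
    else {
        print "first:     $path\n";    # old value was undef -> false first time
    }
}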
John
--
use Perl;
program
fulfillment
Harry Putnam wrote:
>
> "Charles K. Clarkson" <[EMAIL PROTECTED]> writes:
>
> > Harry Putnam said:
> > :
> > : Here is the problem:
> > :
> > : Summary run home made tools against only the
> > : uniq paths that might contain perl *.p
"Charles K. Clarkson" <[EMAIL PROTECTED]> writes:
> Harry Putnam said:
> :
> : Here is the problem:
> :
> : Summary run home made tools against only the
> : uniq paths that might contain perl *.pm files.
>
> Harry, that doesn't make a bit
"SPENCERS" <[EMAIL PROTECTED]> writes:
> Hello, Harry.
>
>Is this what you are looking for?
Well it gives the right results. I think I'm making a mountain where
there was only a molehill. Thanks
roblem:
>
> Summary run home made tools against only the uniq paths that might contain
> perl *.pm files.
>
> In my case:
> $ perl -e 'print $newINC = join("\n",@INC),"\n";'
> /usr/local/lib/perl5/5.8.0/i686-linux
> /usr/local/lib/perl5/5.8.0
> /us
Harry Putnam said:
:
: Here is the problem:
:
: Summary run home made tools against only the
: uniq paths that might contain perl *.pm files.
Harry, that doesn't make a bit of sense.
Could you rephrase the question?
Charles K. Clarkson
--
Head Bottle Washer,
Clarkson Energy
-Original Message-
From: SPENCERS [mailto:[EMAIL PROTECTED]
Sent: Saturday, June 28, 2003 12:37 PM
To: Harry Putnam
Subject: RE: find uniq paths in @INC
Hello, Harry.
Is this what you are looking for?
#!/usr/bin/perl -w
# list all of the perl modules installed
use strict;
use
Harry Putnam wrote:
> I've stumbled around with this before and someone posted something
> that came close but still doesn't quite do what I want.
>
> Maybe it isn't important enough to get this involved with it.
>
> Here is the problem:
>
> Summary run home
I've stumbled around with this before and someone posted something
that came close but still doesn't quite do what I want.
Maybe it isn't important enough to get this involved with it.
Here is the problem:
Summary run home made tools against only the uniq paths that might co
On Aug 30, david said:
>Ramprasad A Padmanabhan wrote:
>
>> Hi All,
>>I have been using Perl on Linux for quite some time now and I have
>> found that Perl does not work the same on Windows
>>
>>eg. To get all the unique elements of an array in linux I use a
>> simple one liner
>>
>>
The one sent in by david which was @uniq{@a} = () was even faster by
a significant margin
where you have
sub using_array {
@uniq{@a} = ();
}
Need to define my %uniq above as you do my @uniq.
Wags ;)
-Original Message-
From: NYIMI Jose (BMB) [mailto:[EMAIL
(n=100)
José.
-Original Message-
From: NYIMI Jose (BMB)
Sent: Friday, August 30, 2002 9:07 PM
To: Felix Geerinckx; [EMAIL PROTECTED]
Subject: RE: uniq elements of an array
Slice is simple and also faster !
Look:
#!/usr/bin/perl
use strict;
use Benchmark;
my @a=qw(a c b c a a b
Slice is simple and also faster !
Look:
#!/usr/bin/perl
use strict;
use Benchmark;
my @a=qw(a c b c a a b d c c);
my @uniq=();
sub using_grep{
my %seen=();
@uniq=sort grep{ ! $seen{$_}++ } @a;
}
sub using_slice{
my %h=();
@h{@a}=();
@uniq=keys %h
on Fri, 30 Aug 2002 18:26:03 GMT, Tom Allison wrote:
> david wrote:
>> @hash{@all_elements} = ();
>>
>> now "keys %hash" gives you the unique elements.
>
> Would these exist but be undef?
>
>
Why don't you write a little program to try it out, using the aptly named
'exists' and 'defined' fu
david wrote:
> Ramprasad A Padmanabhan wrote:
>
>
>>Hi All,
>> I have been using Perl on Linux for quite some time now and I have
>>found that Perl does not work the same on Windows
>>
>> eg. To get all the unique elements of an array in linux I use a
>>simple one liner
>>
>> @unique = gr
Ramprasad A Padmanabhan wrote:
> Hi All,
>I have been using Perl on Linux for quite some time now and I have
> found that Perl does not work the same on Windows
>
>eg. To get all the unique elements of an array in linux I use a
> simple one liner
>
>@unique = grep{!/$seen{$_}++/} @
Dharmendra rai wrote:
> have you seen the values in @unique when @all_elements contains (1,2,3,1,2) when you
>apply @unique = grep { !$seen{$_}} @all_elements ???
>
> it is not working
>
On 30 Aug 2002, Felix Geerinckx wrote:
> on Fri, 30 Aug 2002 17:32:36 GMT, [EMAIL PROTECTED] (Sudarshan
> Raghavan) wrote:
>
> > Why do you need to do a pattern match anyways? Just a
> > @unique = grep{!$seen{$_}} @all_elements;
> > should do
>
> You forgot to increment. The correct way is:
>
have you seen the values in @unique when @all_elements contains (1,2,3,1,2) when you apply
@unique = grep { !$seen{$_}} @all_elements ???
it is not working
on Fri, 30 Aug 2002 17:32:36 GMT, [EMAIL PROTECTED] (Sudarshan
Raghavan) wrote:
> Why do you need to do a pattern match anyways? Just a
> @unique = grep{!$seen{$_}} @all_elements;
> should do
You forgot to increment. The correct way is:
@unique = grep{!$seen{$_}++} @all_elements;
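A quick way to see the difference (which is probably why the earlier poster saw it "not working"): without the ++ the hash is never populated, so nothing gets filtered:

use strict;
use warnings;

my @all_elements = (1, 2, 3, 1, 2);

my %seen;
my @wrong = grep { !$seen{$_} } @all_elements;     # %seen never gets set:
print "@wrong\n";                                  # 1 2 3 1 2

%seen = ();
my @unique = grep { !$seen{$_}++ } @all_elements;  # ++ records each element:
print "@unique\n";                                 # 1 2 3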
--
feli
On Fri, 30 Aug 2002, Ramprasad A Padmanabhan wrote:
> Hi All,
> I have been using Perl on Linux for quite some time now and I have
> found that Perl does not work the same on Windows
>
>eg. To get all the unique elements of an array in linux I use a
> simple one liner
>
>@unique =
>eg. To get all the unique elements of an array in linux I use a
> simple one liner
>
>@unique = grep{!/$seen{$_}++/} @all_elements;
>
The above expression did not work for me,
I could not even figure out how this should work. So I created a small file
with...
@all_elements = qw(hello al
Hi All,
I have been using Perl on Linux for quite some time now and I have
found that Perl does not work the same on Windows
eg. To get all the unique elements of an array in linux I use a
simple one liner
@unique = grep{!/$seen{$_}++/} @all_elements;
But this does not even pass sy
Craig Hammer wrote:
>
> I am working on a script to read in a firewall logfile, pull out the IP
> addresses of denied packets, then give me a count per IP address, and
> perform a whois on each address.
>
> This previously ran as a VERY SLOW shell script. In bourne, I used so
On May 23, Jeff 'japhy' Pinyan said:
>Now you have %seen, which holds each element of @sorted and how many times
>it appeared. Thus:
>
> @duplicates = grep $seen{$_} == 1, keys %seen;
That should be != 1, not == 1.
--
Jeff "japhy" Pinyan [EMAIL PROTECTED] http://www.pobox.com/~japh
On May 23, Elias Assmann said:
>On Thu, 23 May 2002, Jeff 'japhy' Pinyan wrote:
>
>> On May 23, Craig Hammer said:
>>
>> >Very nice explanation. One thing though, I am not using uniq to remove
>> >duplicates. I am using it to get a count o
ra
Webmaster, Pollstar.com / PollstarOnline.com
- Original Message -
From: "Elias Assmann" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "Craig Hammer" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Thursday, May 23, 2002 9:51 AM
Subject: RE: uniq
> On Thu, 2
On Thu, 23 May 2002, Jeff 'japhy' Pinyan wrote:
> On May 23, Craig Hammer said:
>
> >Very nice explanation. One thing though, I am not using uniq to remove
> >duplicates. I am using it to get a count of duplicates. In my case, I am
> >creating a threshold to
On May 23, Craig Hammer said:
>Very nice explanation. One thing though, I am not using uniq to remove
>duplicates. I am using it to get a count of duplicates. In my case, I am
>creating a threshold to determine when someone (malicious) is scanning my
>address ranges.
Ah, I see
Jeff,
Very nice explanation. One thing though, I am not using uniq to remove
duplicates. I am using it to get a count of duplicates. In my case, I am
creating a threshold to determine when someone (malicious) is scanning my
address ranges
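For that use case, a counting hash plus a grep over its keys gives the threshold check directly. A rough sketch with made-up addresses and an arbitrary threshold:

use strict;
use warnings;

my @denied_ips = qw(10.0.0.5 10.0.0.9 10.0.0.5 10.0.0.5 10.0.0.7 10.0.0.9);
my $threshold  = 2;

my %count;
$count{$_}++ for @denied_ips;                        # denials per IP address

my @suspects = grep { $count{$_} > $threshold } keys %count;
print "$_ denied $count{$_} times\n" for @suspects;  # 10.0.0.5 denied 3 times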
y ran as a VERY SLOW shell script. In bourne, I used sort
>> and then uniq to get a count per IP address. Is there something similar
>> to uniq within perl? (I already have it sorting correctly)
>
>sub uniq {
> my @in=@_;
> my (%saw,@out);
> undef %saw;
> @out =
I use this subroutine for uniq
sub uniq
{
my @in=@_;
my (%saw,@out);
undef %saw;
@out = grep(!$saw{$_}++, @in);
return @out;
}
Unfortunately, I have no idea how it works.
Regards,
Agustin Rivera
Webmaster, Pollstar.com / PollstarOnline.com
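A commented restatement of that subroutine, in case it helps: grep keeps an element when the block returns true, and $saw{$_}++ returns the old count before bumping it, so an element passes only the first time it shows up (the undef %saw line is redundant, since %saw is freshly declared on every call):

sub uniq {
    my @in = @_;
    my (%saw, @out);
    # The block runs once per element, with the element in $_.
    # $saw{$_}++ yields the previous count: undef/0 (false) the first time,
    # so !... is true and the element is kept; 1 or more (true) afterwards,
    # so repeats are dropped.  First-seen order is preserved.
    @out = grep { !$saw{$_}++ } @in;
    return @out;
}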
- Original
I am working on a script to read in a firewall logfile, pull out the IP
addresses of denied packets, then give me a count per IP address, and
perform a whois on each address.
This previously ran as a VERY SLOW shell script. In bourne, I used sort and
then uniq to get a count per IP address