Hi, All, :)

I'm trying to write a short script to remove duplicate entries from my
PATH variable.  I'm on Cygwin (Windows NT), so I run perl from a bash
prompt.  Win32 installers *constantly* diddle with the PATH and I'm
sick of having to fix it up by hand.

So, I want to write a script (in Perl, of course!) that I can use in
my .bash_profile like this:

export PATH=`perl fixpath.pl`

So, the perl script would read the PATH, chop out any duplicate
entries, and write the new path to stdout.

First off, is this the right way to go about this?  I know this isn't
a shell list, but I'm assuming that the perl script can't actually
*change* the PATH of the parent shell - only the copy in its own
process.  Right?
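
Here's a minimal sketch of what I mean - this is just my assumption
about how the process environment works, so please correct me if I've
got it wrong:

#!/usr/bin/perl
# Sketch only: a change to %ENV is visible to this process and to its
# children, but not to the bash that launched the script.
use strict;
use warnings;

$ENV{PATH} = "/only/visible/here";
system( 'echo "child sees PATH=$PATH"' );   # the child inherits the change
# Once this script exits, the parent bash still has its original PATH,
# hence capturing stdout and re-exporting it in .bash_profile.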

Assuming this is the way I want to go, here's what I've cooked up so
far:

#!/usr/bin/perl

use strict;
use warnings;

# Strip redundant entries from the PATH environment variable.  Print
# out the fixed path.

# The original path.
my @path = split /:/, $ENV{"PATH"};
# Holds the fixed path.
my @newpath;
# Hash used to ignore duplicate entries.
my %entries;

# Loop through each entry in the path sequentially.
foreach my $entry ( @path ) {
   # Is this the first occurrence of this entry in the PATH?
   if( ! $entries{$entry} ) {
      # Yes.  Remember it in our entries hash and add it to our fixed
      # PATH.
      $entries{$entry} = 1;
      push @newpath, $entry;
   }
}

# Print out the new path array joined by colons to be a *nix PATH.
print join ":", @newpath;

# End of fixpath.pl.
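
For what it's worth, a quick sanity check from bash would be something
like this (assuming perl lives at /usr/bin/perl; adjust if yours
differs):

# Path to perl assumed; a duplicate /bin should collapse away.
PATH=/bin:/usr/bin:/bin /usr/bin/perl fixpath.pl
# expected output: /bin:/usr/bin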

This works, but it feels really clunky.  From my exploration of Perl
REs, I can "sense" that there might be an RE solution to this problem
without the hassle of creating a couple of arrays and a hash and then
looping through the entries.

However, I'm at a loss to see the structure of such an 's///g'
construct.  Anybody have any ideas on how I should go about tackling
this?
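
For concreteness, the kind of thing I'm picturing is a substitution
applied repeatedly until nothing changes.  This is pure untested
guesswork on my part, so please treat it as a sketch rather than a
working answer:

#!/usr/bin/perl
# Sketch only: repeatedly delete an entry that already appears earlier
# in the PATH, until no duplicates are left.
use strict;
use warnings;

my $path = $ENV{"PATH"};

# $2 captures one entry; a later ":\2" is a duplicate of that entry.
# Each pass removes one duplicate; loop until the substitution fails.
# (Empty entries, e.g. from a doubled colon, are not collapsed here.)
1 while $path =~ s/((?:^|:)([^:]+)(?::.*?)?):\2(?=:|$)/$1/;

print $path;

Or maybe the real answer is just the hash-in-a-grep idiom - something
like "grep { !$seen{$_}++ }" over the split entries - rather than a
substitution at all.  I'd love to hear which way people would go.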

Thanks in advance for any help!

---Jason Tiller
[EMAIL PROTECTED]

