RE: fetchrow_hashref

2002-11-14 Thread wiggins
See inline comments.



On Thu, 14 Nov 2002 11:22:09 -0800, "T. Murlidharan Nair" <[EMAIL PROTECTED]> wrote:

> Hi!!
> I am retriving data in a while loop using fetchrow_hashref
> How do I assign it to another hash.  I am trying the following
>  

In the line of code below you are trying to catch the return value of fetchrow_hashref in a 
hash rather than a scalar. fetchrow_hashref returns a hash *reference*, which is a scalar.

> while (my %hashRef =$sth->fetchrow_hashref()){
  while (my $hashRef = $sth->fetchrow_hashref()) {

Then you can dereference $hashRef:

> foreach $keys (keys %hashRef){
  foreach $keys (keys %$hashRef) {

> print $keys;
> print "$hashRef{$keys}\t";
  print "$hashRef->{$keys}\n";


> }
> print "\n";
> }
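
Putting the corrections together, the whole loop might look like this (an untested sketch,
keeping your variable names):

  while (my $hashRef = $sth->fetchrow_hashref()) {
      foreach my $keys (keys %$hashRef) {
          print "$keys: $hashRef->{$keys}\t";
      }
      print "\n";
  }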
> 
> Its  does not return me anything. Please let me know if there is a  better
> way to handle it.
> 
> Thanks and cheers always!!
> Murli
> 
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: Finding out the file name

2002-11-19 Thread wiggins
This isn't possible, and even when you specify the full file name the URL may not indicate 
that a file with that name actually exists, unless you are fetching your own site and know 
the page isn't being dynamically generated.

A URL ending in a / (either just the domain or some subdirectory depth) tells *most* 
(talking 98%) of today's web servers that if they can't find an index file, whose name is 
set in the configuration of the server, they should either return permission denied or 
generate a listing for the user. In both of these cases no index file even exists.  And 
while I would imagine most servers use a standard naming convention of index.htm, etc., 
there is nothing stopping someone from using wiggins.html, or even 27348782374878234.wig, 
as an index file.  On top of that, the / directory you are referring to need not even 
exist, or at least have anything under it; it may just be an internal redirect to a script, 
for instance, that writes to your web session dynamically.

And then there is the whole frames issue...

http://danconia.org/



On Tue, 19 Nov 2002 05:10:18 +0200, "Octavian Rasnita" <[EMAIL PROTECTED]> wrote:

> Hi all,
> 
> I am getting an HTML page from the net using the LWP module.
> If I specify the full link, including the file name it is all right.
> 
> However, sometimes I specify only the directory name like
> http://www.site.com/
> 
> Well, in this case I want to find out the name of the file that will be
> downloaded by the LWP module.
> 
> I want to know if it is index.html, index.htm, index.php, default.asp, etc.
> 
> I've searched through the POD documentation of LWP module, but I couldn't
> find an answer for this problem.
> 
> Thank you. I hope it is possible.
> 
> 
> Teddy,
> Teddy's Center: http://teddy.fcc.ro/
> Email: [EMAIL PROTECTED]
> 
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: listing perl modues on system

2002-11-21 Thread wiggins
I get "no documentation found" for perldoc perllocal here (Solaris w/ 5.6.1), and I would 
imagine it only lists modules that were installed after the fact anyway, not everything on 
the system. If you are on unix the following is more elegant, from the perlmodlib page 
(not sure why it took me so long to find it again, grr..)

"To find out all modules installed on your system, including those without 
documentation or outside the standard release, just do this:

   % find `perl -e 'print "@INC"'` -name '*.pm' -print"

Ah the beauty of find with a perl one liner, don't even need a pipe ;-)
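
For what it's worth, roughly the same walk can be done from within Perl itself, handy where 
find(1) isn't around (an untested sketch):

   use File::Find;
   find(sub { print "$File::Find::name\n" if /\.pm$/ }, grep { -d } @INC);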

http://danconia.org




On Thu, 21 Nov 2002 10:59:08 -0500, "Jason Purdy" <[EMAIL PROTECTED]> 
wrote:

> One easy way is to run the command:
> perldoc perllocal
> 
> Jason
> 
> "Jerry M . Howell II" <[EMAIL PROTECTED]> wrote in message
> [EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> > Hello there,
> >
> >I was wondering if there is an easy way to list the perl modules that
> are
> > installed on a system?
> >
> > --
> > Jerry M. Howell II
> 
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: Question about wantarray()

2002-11-26 Thread wiggins
Well there is the obvious place to start (perldoc -f wantarray):

 wantarray
 Returns true if the context of the currently
 executing subroutine is looking for a list value.
 Returns false if the context is looking for a
 scalar.  Returns the undefined value if the context
 is looking for no value (void context).

 return unless defined wantarray;# don't bother doing more
 my @a = complex_calculation();
 return wantarray ? @a : "@a";

 This function should have been named wantlist()
 instead.

So the simple answer is: it lets you tell whether the caller of a function/sub wants a list 
returned, a scalar returned, or nothing at all (void).

For example, say you create an object whose underlying structure is similar to an array. 
You can then write one method that does a couple of different things based on how it is 
called: if the method is called in scalar context you can return the number of items in 
the list, and if it is called in list context you can return a list of references to each 
of those items.

The place I have used it is in a configuration file reader (previously poorly written, so I 
had to), where a person could call the function with a list of options they wanted and the 
values associated with that list were returned. I wanted to update the function so it could 
also return a hash of all the values, or the values specified with their associated keys, 
but I couldn't break old code.  So I used wantarray to tell if the function was called in 
scalar context; if so I passed back a reference to my hash, but in list context I did what 
it had always done, that is, passed back a list of the items they wanted.  The real 
question is, knowing the basics, how can you use it?  ...
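
For instance, a bare-bones sketch (the names are invented) showing all three contexts:

   sub read_config {
       my %config = ( host => 'localhost', port => 8080 );
       return unless defined wantarray;        # void context: don't bother
       return wantarray ? %config : \%config;  # list vs. scalar context
   }

   my %all = read_config();   # list context: gets the key/value pairs
   my $ref = read_config();   # scalar context: gets a hash reference back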

http://danconia.org


On Tue, 26 Nov 2002 15:35:01 +0100, "Mystik Gotan" <[EMAIL PROTECTED]> wrote:

> I can't really figure out what the purpose of wantarray() is.
> Can someone please give me a good, decent explanation?
> 
> Thanks in advance :-)
> 
> 
> --
> Bob Erinkveld (Webmaster Insane Hosts)
> www.insane-hosts.net
> MSN: [EMAIL PROTECTED]
> 
> 
> 
> 
> _
> MSN Zoeken, voor duidelijke zoekresultaten! 
> http://search.msn.nl/worldwide.asp
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: 5.005_03 vs. 5.8

2002-12-03 Thread wiggins


On Mon, 2 Dec 2002 21:28:49 -0600, "Scot Robnett" <[EMAIL PROTECTED]> wrote:

> I am working with a university on a web project which entails setting up a
> new server. We're going with Red Hat Linux on Dell hardware (RAID 5
> configuration), but I am not sure which version of Perl to recommend. I am
> very used to 5.005_03, but 5.8 is the current release.
> 

Nice choice. I would suggest you use the latest release so that you can have all the 
new features if you want them, but Perl is remarkably backwards compatible so little 
if anything should have to change in the old scripts.

> Do I have a large learning curve associated with going the 5.8 route as
> opposed to 5.005_03?
> 

The learning curve is really as steep as you want it to be. Many perlers still write what 
is basically perl 4 code in perl 5, that is, very procedural, with little scoping, etc.; 
others write everything in OOP with all the trimmings. It really comes down to how 
important efficiency on both sides, the process itself and the design/development, is to 
your project. Whether you can afford the little bit of time up front to design it more 
"cleanly", modularly, etc., or whether it needs to be done yesterday, may determine how 
many of the new features you can learn/use.

> If I want to port scripts, will they need to be rewritten?
> 

Doubtful, possibly a few lines of code here and there.

> Are there significant benefits to using 5.8?
> 

I defer to the internals gurus on this one. Having some of the modules that are now 
standard is a big enough benefit to me, as is knowing that if I ask for help and get an 
answer that was derived on 5.8 I don't have to be concerned about whether it will work in 
my version, etc.

> On one hand, I want to keep up with the Joneses and take advantage of the
> most available power, but on the other hand, I'm used to one flavor and you
> know what they say about fixing things that ain't broke.
> 

Yep. Really a situational question, but I would say you at least want to be running 5.6.1, 
if for no other reason than to keep your own personal skills up with the current trends.  
Definitely have a look at the history (see perldoc perl) for the changes. You might also 
try installing perl 5.8 in a different location on the system, installing your app(s), and 
seeing if they run with that perl; then you can be sure.

> Opinions, comments, suggestions?
> 

Good luck.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: 5.005_03 vs. 5.8

2002-12-04 Thread wiggins


On Wed, 4 Dec 2002 11:03:37 -0600, "Scot Robnett" <[EMAIL PROTECTED]> wrote:



> Modules that are not pure perl have some sort of compiled
> supporting file associated with them 

True.

> and must be installed by root using a 'make install'.

False, at least in most cases, though there may need to be other steps taken, like 
updating LD_LIBRARY_PATH, etc.

http://danconia.org


-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: use_named_parameters()

2002-12-10 Thread wiggins
The method has been removed. From the docs:

Version 2.57
6. Removed use_named_parameters() because of dependency problems and general lameness. 

http://stein.cshl.org/WWW/software/CGI/

I think "param" is now the preferred method.

http://danconia.org


On Tue, 10 Dec 2002 10:59:05 +0100, Kris Gaethofs <[EMAIL PROTECTED]> 
wrote:

> Hi,
> 
> I have this problem and don't immediately know the sollution. I would 
> like perl to use named parameters only while doing cgi scripting. To 
> force this I use the following code:
> 
> #!/opt/bin/perl -Tw
> 
> use CGI;
> use strict;
> 
> my $q = CGI->new();
> $q->use_named_parameters(1);
> 
> etc...
> 
> However, it doesn't work. Perl complains with the following code:
> 
> Undefined subroutine CGI::use_named_parameters
> 
> How does it work?
> 
> Greets,
> 
> Kris
> -- 
> -
> Kris Gaethofs
> Labo Kwantumchemie
> Celestijnenlaan 200F
> B-3001 Heverlee (Leuven)
> Tel: +32 16 32 78 03 Fax: +32 16 32 79 92
> Email: [EMAIL PROTECTED]
> -
> 
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: script runs in console but not in browser

2002-12-10 Thread wiggins


On Tue, 10 Dec 2002 12:35:56 +0100, "Mystik Gotan" <[EMAIL PROTECTED]> wrote:

> #!/path/to/perl -wT
> use strict;
> use warnings;
> 
> That might help.
> 

Any reason to use -w and use warnings?

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: can we use "system()" inside cgi ?

2002-12-10 Thread wiggins
You need to read up on tainted variables, I think.

perldoc perlsec

The problem isn't that it is a CGI; I'm pretty sure the problem is that it is setuid.
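
The usual first step (this is straight out of perlsec) is to pin down the environment near 
the top of the script, something along these lines:

   $ENV{PATH} = '/bin:/usr/bin:/sbin';
   delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};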

http://danconia.org


On Tue, 10 Dec 2002 07:20:16 -0800 (PST), Admin-Stress <[EMAIL PROTECTED]> wrote:

> I got this error :
> 
> [error] [client 10.0.0.88] Insecure $ENV{PATH} while running setuid at
> /var/www/cgi-bin/ifcfg_rh80.pl line 60., referer: 
>http://10.0.0.50/cgi-bin/editconfig.pl
> 
> And line 60 of ifcfg_rh80.pl is :
> 
>system("/sbin/ifdown $device");
>sleep 2;
>system("/sbin/ifup $device");
> 
> I chmoded +s both editconfig.pl and ifcfg_rh80.pl.
> 
> And I installed suid-perl ...
> 
> Anything else that I can do? I made a cgi to change server ip address.
> 
> Thanks.
> 
> 
> __
> Do you Yahoo!?
> Yahoo! Mail Plus - Powerful. Affordable. Sign up now.
> http://mailplus.yahoo.com
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: Installing CGI.pm in RedHat 8.0.

2002-12-10 Thread wiggins


On Tue, 10 Dec 2002 06:51:27 -0800 (PST), Admin-Stress <[EMAIL PROTECTED]> wrote:

> I dont know why, in my OTHER RedHat 8.0 installation, I cant find CGI.pm. I did 
>install perl5.8.0.
> 

I believe in RH8.0 the CGI module is in a separate RPM. Check your dist, you might try 
a find with "*cgi*" or the like.

> Anyone know how to install it? I looked in cpan, it seems it's default perl module. 
>And I cant
> find any installer for it.

In 5.8.0 it is a default module, but you should be able to issue a 

perl -MCPAN -e 'install CGI';

or invoke CPAN in a shell:

perl -MCPAN -e shell

and then issue 

install CGI

> 
> Is it OK if I just copy CGI.pm from cpan? into 
> 
> /usr/lib/perl5/5.8.0/CGI.pm

I don't think so, as CGI may well rely on other modules. You will want to make sure those 
are installed as well, which CPAN will help you with.

> 
> but, what is /usr/lib/perl5/5.8.0/CGI ? Because I saw in my other redhat 8.0 box 
>there are two
> files :
> 
> find / -name "CGI*"
> /usr/lib/perl5/5.8.0/CGI
> /usr/lib/perl5/5.8.0/CGI.pm
> 

CGI should be a directory which has other modules related to CGI stuff which may or 
may not be used by the main CGI module. 

My first approach would be to go through the RPM database of your install looking for 
perl and cgi related items, and then check the list of RPMs for the distro.  If you 
can install from RPM on a RH box you will be better off in the long run.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: automatically downloading files into a certain directory

2002-12-12 Thread wiggins


On Thu, 12 Dec 2002 08:01:59 -0500, "Christopher G Tantalo" <[EMAIL PROTECTED]> 
wrote:

> Anette Seiler wrote:
> 
> > Dear members of the list!
> >
> > I want to do something where I am not sure it can be done with perl...
> >
> > Basically a user should klick on a button on a website. Then the script
> > should create a file with certain information from a database on the
> > webserver (that's easy) and that file should automatically be downloaded
> > into a certain directory on the user's computer (that is the difficult
> > part). The user is not computer illaterate, but he should not bother about
> > downloading and choosing certain directories (as he will have to download
> > hundreds of these files)  and definitely not about ftp or something like that.
> >
> > Well, as I said, it is the downloading part that I don't know how to do.
> > Can it be done with perl or should I look at something else - maybe javascript?
> >
> > Greetings
> >
> > Anette Seiler
> 
> i should hope that you cant do this.
> if it can be done... i am glad i have measures to stop this on my home pc...
> chris

Agreed. 

Though it could be done with perl, just not in the way you intended... that is, it could be 
done from the client side instead.  It depends on how standardized your environment is.  
You could probably use an ActiveX control over IE or something to do this, but I don't 
claim to have any knowledge of such matters...

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: GD Graphs

2002-12-12 Thread wiggins
O'Reilly offers two books on this subject (or they claim the second will be out as of 
December) that you might want to look into. I have not read/seen either, so I cannot 
provide a review, sorry.

http://www.oreilly.com/catalog/prowg/
http://www.oreilly.com/catalog/perlgp/ (December)

http://danconia.org


On Thu, 12 Dec 2002 09:39:41 -0600, "Mike(mickako)Blezien" <[EMAIL PROTECTED]> 
wrote:

> Hello,
> 
> I am going to be working on a project, that will be utilizing the GD::Graphs 
> module to create various graph reports. I was hoping that someone could point me 
> to some good documentation or working examples of the uses of this module... of 
> course I've been to CPAN, but was hoping to find more working examples of it's use.
> 
> If you've used this module, would appreciate if you can direct me to some good 
> docs and examples,... or if some one can recomend a better method of creating 
> and working with graphs and Perl.
> 
> TIA as always
> 
> Happy Holidays,
> 
> -- 
> MikeBlezien
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> Thunder Rain Internet Publishing
> Providing Internet Solutions that work!
> http://www.thunder-rain.com
> Tel:  1(985)902-8484
> MSN: [EMAIL PROTECTED]
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: cronjob via perl

2002-12-12 Thread wiggins
You could set up the script to run through CGI and then use cron on another box to curl, 
lynx, LWP, etc. the script at your intervals. This is obviously less secure, but you could 
do things like require a password, check the IP of the machine making the call (though this 
can be spoofed as well, just less easily), etc.
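
The cron-side fetcher can be tiny. A rough, untested sketch (the URL and the secret 
parameter are made up):

   use LWP::Simple;
   my $content = get('http://www.example.com/cgi-bin/reset_passwords.pl?secret=s3kr1t');
   die "fetch failed\n" unless defined $content;
   print $content;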

http://danconia.org


On Thu, 12 Dec 2002 15:31:53 +0100, Sven Bentlage <[EMAIL PROTECTED]> wrote:

> My provider doesn`t allow any user cronjobs, but I have to perform a 
> regular task (set back passwords ).
> So I just wrote a small script to the job for me, but I am not sure if 
> it will need to much resources (and because of this would be killed by 
> my provider)
> 
> Does anyone know a better solution than continuously running a perl 
> script?
> 
> Thanks for your help in advance.
> 
> Sven
> 
> #!/bin/perl -w
> BEGIN {
>   $| = 1;
>   push(@INC,'/htdocs/www/cgi-bin/lib');
>   use DBI;
>   }
> 
> 
> #Version 0.1
> 
> 
> 
> 
> 
> my $date = localtime(time);
> my $dsn = ''; # Data 
>Source Name
> my $db_user = ''; # 
>Database User
> my $db_pass = ''; # 
>database pass
> my $logfile = "cron_log.txt"; # Logfile
> 
> 
> open (LOG, ">>$logfile") || mydie($_);
> 
> &pw_switch();
> 
> sub pw_switch {
> 
>   sleep(8*60*60);
>   my $new_password = pw_create();
>   my $dbh = DBI->connect( $dsn, $db_user, $db_pass) || 
> mydie($DBI::errstr);
> #my $number = $dbh->do("select count(*) from memberscopy where
> my $del_pw = $dbh ->do( $sql ) || mydie($DBI::errstr);
> print LOG " $date -- $sql\n";
>   
>   $dbh->disconnect;
>   close (LOG);
>   print qq ~
>   
>   
>   Password switching done
>   
>   
>   Password switched
>   
>   
>   ~;
> 
>   
>   }
> 
> sub pw_create {
>   my $password = join '', ("1".."9","a".."z","A".."Z",0..9)[map {rand 
> 36} 0..9];
>   $password = substr($password, 0, 8);
> 
> 
>   chomp $password;
>   return ($password);
> }
> 
> 
> 
> sub mydie {
>   open (LG, ">>.Mydie_cron.txt");
>   print LG "$_  -- $date \n";
>   close (LG);
>   exit;
> }
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: start_tr

2002-12-13 Thread wiggins
I don't believe there is a start_tr() method, which would explain the problem :-). 
Check out the POD documentation for the CGI module by either issuing

perldoc CGI

or here:

http://search.cpan.org/author/JHI/perl-5.8.0/lib/CGI.pm

In particular search for "This is extremely useful for creating tables" in that document. 
It explains how to quickly create a table row and its associated cells.
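
As a taste of what that section describes, something like this (untested) builds a whole 
two-row table in one call, with td() and Tr() distributing over array references (note the 
capital T on Tr, since tr is a Perl operator):

   use CGI qw(:standard);
   print table({-border => 1},
               Tr([ td(['Mon', 'Tue', 'Wed']),
                    td(['1',   '2',   '3'  ]) ]));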

Most of the books that I have seen talk about how to use perl in a CGI context, and 
how to do the basic CGI tasks, many of which can be handled by the CGI module, but few 
if any really touch on real-world full application development using something like 
CGI or LWP.

http://danconia.org



On Fri, 13 Dec 2002 05:07:18 -0800 (PST), Rob Richardson <[EMAIL PROTECTED]> wrote:

> Greetings!
> 
> I am having some trouble with a script designed to create a table. 
> Here is the important part:
> 
> use CGI::Carp qw(fatalsToBrowser);
> use CGI qw/:standard :html center *big *strong *table *tr *td
> delete_all/;
> use Time::Local;
> 
> build_calendar(param('month'), param('year'));
> 
> sub build_calendar
> {
>   my ($month, $year);
>   my (@calendar);
>   my ($currentDate);
> 
>   $month = $_[0];
>   $year = $_[1];
> 
>   @calendar = rawCalendar($month, $year);
>   print start_table({-align=>center, -width=>"95%", -border=>1});
>   for ($i = 0; $i < 6; $i++)
>   {
>   # $line = "";
>   print start_tr();
>   for ($j = 0; $j < 7; $j++)
>   {
>   $currentDate = shift(@calendar);
>   # $line = $line . $currentDate . ' ';
>   # $line = "$line$currentDate ";
>   print td($currentDate);
>   }
>   # print "$line";
>   print end_tr();
>   }
>   print end_table();
> }
> 
> When I run this, I get the following error:
> Undefined subroutine CGI::start_tr
>  at c:\INDIGO~1\HTDOCS\CREW\CALENDAR.CGI line 82
> 
> I don't have a good book on CGI, but I think the use statement is
> correct, and the start_table() routine is working.  Why don't I have
> start_tr()?
> 
> Also, is there a bood book that actually talks about CGI?  I have a
> couple whose titles include "CGI", but they say nothing about CGI.  All
> they talk about is Perl.
> 
> Thanks!
> 
> Rob
> 
> 
> 
> __
> Do you Yahoo!?
> Yahoo! Mail Plus - Powerful. Affordable. Sign up now.
> http://mailplus.yahoo.com
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: 2 Code Interpretation ?s

2002-12-17 Thread wiggins
One more that the other poster didn't mention:

perldoc perlretut

It goes over regular expressions at a slightly higher, more tutorial-like level than just 
the Perl reference details on the subject.
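
For what it's worth, the first snippet you quoted reads roughly like this when spread out 
with the /x modifier so each piece can be commented:

   if ( $cgi->param('forum') =~ /^ (\d+) $/x ) {
       # ^      anchor at the very start of the string
       # (\d+)  capture one or more digits into $1
       # $      anchor at the very end
       # in other words: "the forum parameter is nothing but digits"
   }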

http://danconia.org


On Tue, 17 Dec 2002 05:41:57 -0800 (PST), Will <[EMAIL PROTECTED]> wrote:

> Greets Folks,
> 
> 2 Questions here...
> 
> 1.) I am self taught, so I dont know all the formal
> details of Perl coding, but every now and then I run
> across code that reads like alphabet soup...  for
> instance, lines like (taken at random):
> 
> The conditional here...
> 
> if ($cgi->param('forum') =~ /^(\d+)$/)) 
> 
> Or, like:
> 
> sub trim
> {
> my $str = shift;
> 
> return "" if ! defined $str;
> $str =~ s/^\s+//;
> $str =~ s/\s+//;
> return($str);
> }
> 
> I dont understand how to read whats happening in those
> sections.  I think it's called parsing, but I need
> some sort of confirmation and direction there...  I
> mean, its not just a question of how to do it, but
> also why and when to parse input...
> 
> Anyway, is parsing what I need to learn to understand
> code like the examples above?   And, if it is, then
> where could I begin to learn how to parse effectively
> enough to get a real grip on it?
> 
> Thanks,
> 
> Will
> 
> __
> Do you Yahoo!?
> Yahoo! Mail Plus - Powerful. Affordable. Sign up now.
> http://mailplus.yahoo.com
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: CGI scripts permissions

2002-12-24 Thread wiggins
This is going to depend somewhat on your setup, mainly what user the web server is running 
as and what group it might be in.  You could probably set your script to 710 if the web 
server is in the same group as you but not the same user. Or if you go to 711, then anyone 
can execute the script but not read it, except for the owner, which may be what you want.  
The same essentially applies to the directory: if the web server (owner/group) can't read 
the directory then it can't execute the script, so setting the directory to 700 with the 
web server running under a different owner/group will mean the script can't be run in a 
CGI context.

So determine whether the web server is running as the same user as you; if so, you can 
limit it to 700. If it is running as a different user in the same group, then 710 should 
do the trick; if it is a different user and group, then you are looking at 711.

There is a chance (though I didn't think this was the case) that the script also has to be 
readable, in which case you are looking at 750 or 755.

1st digit = user
2nd digit = group
3rd digit = all

1 = execute
2 = write
4 = read

sum the permission values,

7 = (4+2+1) = read, write, execute
6 = read, write
5 = read, execute
4 = read
3 = write, execute
2 = write
1 = execute

http://danconia.org


On Tue, 24 Dec 2002 18:09:52 +0200, "Octavian Rasnita" <[EMAIL PROTECTED]> wrote:

> Hello all,
> 
> Please tell me what file permissions should I use for a CGI script.
> 
> I don't want others users from that server to view the content of my scripts
> because they contain passwords for MySQL databases.
> If I chmod 755 the scripts, the other users will also be able to see the
> files.
> 
> Can I deny other users to see the content of the cgi-bin directory (chmod
> 700) and chmod 755 only the files?
> Or, ... do I have other options?
> 
> Thank you.
> 
> Teddy,
> Teddy's Center: http://teddy.fcc.ro/
> Email: [EMAIL PROTECTED]
> 
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: Post Method versus Get Method

2003-01-07 Thread wiggins
There are many reasons to choose one or the other. A good one for POST is file uploads, 
where the input values could be very, very large (think mega- or gigabytes of data) or 
binary, which would otherwise have to be encoded and then decoded in some manner; this is 
less of a problem over a stream.

POST, however, requires "more" user interaction, that is, submitting a form, etc. A GET is 
just a simple link, which makes building dynamic sites easy: for instance, a CGI can print 
links to a page that is different for each user, and rather than requiring each of those 
links to be a form, a user can just click and they have "unknowingly" submitted a GET to a 
CGI that then generates another dynamic page. (Sure, this can be done with cookies, but who 
needs the hassle, and there is a limit to how many and how large cookies can be.)
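
For instance, a little sketch of the "links that carry state" idea (the parameter and 
script names are invented, untested):

   #!/usr/bin/perl -w
   use strict;
   use CGI qw(:standard);

   my $user = param('user') || 'guest';
   print header(), start_html('Reports');
   foreach my $type (qw(daily weekly monthly)) {
       # each link is just a GET back to the CGI with the state in the query string
       print a({-href => "report.pl?user=$user&type=$type"}, "$type report"), br(), "\n";
   }
   print end_html();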

This is a rather broad question, and the answer generally makes more sense with experience.

http://danconia.org


On Tue, 07 Jan 2003 16:17:36 -0500, Susan Aurand <[EMAIL PROTECTED]> wrote:

> I know the POST Method the data is sent to STDIN, and GET method the data is 
>attached to the URL and
> then submitted. When and why would you want to use the GET method versus the POST 
>method. Is is
> because of  firewalls? or what?  I would appreciate any input on this subject.
> Thank You.
> Susan Evans
> 
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Post Method versus Get Method

2003-01-08 Thread wiggins


On Wed, 8 Jan 2003 10:02:20 -0800, drieux <[EMAIL PROTECTED]> wrote:

> 
> On Wednesday, Jan 8, 2003, at 01:21 US/Pacific, Gary Stainburn wrote:
> [..]
> > The only benefits of using GET that I can think of is that you can 
> > emulate a
> > form by manually keying the data in the URL, and you can even create a
> > bookmark containing the completed form details.  I personally use this 
> > to
> > bookmark specific queries to some of my databases tosave mehaving to 
> > complete
> > the form every time I want a status update.
> >
> > The benefits of POST are tidier URLs, and not having the limits I 
> > mentioned
> > above.
> > -- 
> > Gary Stainburn
> 
> 
> There is perchance the 'unintended' side effect here
> that most folks forget
> 
>   http://xanana/Demo/?sysname=bob&config_host=libex
> 
> is a 'unique' URL from
> 
>   http://xanana/Demo/
> 
> The former is seen with a GET the later is what
> would be seen with a POST - where this can get
> messy is when the browser has caching on - and
> one's web-design has multiple queries that will
> be routing through the same URL
> 
> ciao
> drieux
> 
> ---
>

Good point. This for a *very* long time caused (and may still cause) major headaches for 
the Mozilla development squad, as it rendered most of their original implementation of 
caching completely unusable as it related to things such as "View Source" and "Save As."

I haven't been following that discussion anymore, as I think they closed the original two 
bugs and opened new ones.  What a nightmare...

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: ISP won't install Perl modules

2003-01-17 Thread wiggins


On Fri, 17 Jan 2003 15:18:42 +0100, Rene Verharen <[EMAIL PROTECTED]> wrote:

> Hi all,
> 
> My ISP won't install some Perl modules I want to use (Email::Valid and 
> Mail::Address).
> Is there a way to install these modules in some directory I have access to 
> so I'll be able to use them ?
> 
> 

You didn't state whether you can log in at a terminal prompt, so I will assume so, though 
this isn't strictly necessary either. At a terminal you should be able to set the PERL5LIB 
environment variable in your shell to point to whatever directory you wish the modules to 
live under, for instance HOME/lib where HOME is your home directory, then just run 
perl -MCPAN -e shell and install as normal. Then in your scripts you will have to add a 
'use lib' line, similar to:

use lib '/HOME/lib';

placed before you 'use' any of the modules in that directory.
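
For instance, assuming you installed into a lib directory under your home (the path and 
the sample address are just illustrations):

   use lib "$ENV{HOME}/lib";
   use Email::Valid;

   print "looks valid\n" if Email::Valid->address('user@example.com');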

This is the easiest way. If you must install by hand, then you will most likely need to 
pass a PREFIX=/path/to/lib/dir directive to perl Makefile.PL. And if you don't have a 
terminal, you should be able to install a local version where you are doing your 
development and then just recursively upload the directory, but this is dangerous, as you 
must make sure the two environments match before installing.

HTH,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: XML with perl

2003-01-17 Thread wiggins


On Fri, 17 Jan 2003 15:00:22 +, Paluparthi Kumar 
<[EMAIL PROTECTED]> wrote:

> Hi,
> 
> I am using perl for the first time. Can anyone please send me some links where I can 
>find information and examples of Perl modules for XML programming.
> 

O'Reilly has an excellent book on this subject though it does expect some 
understanding of Perl and using modules:

http://www.oreilly.com/catalog/perlxml/

For a list of available modules, you can check here:
http://search.cpan.org/modlist/String_Language_Text_Processing/XML

Most modules on that list come with good documentation and some examples.  You may 
want to start with XML::Simple and work your way to more complex tasks.
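
A minimal taste of XML::Simple (the file name and element are invented):

   use XML::Simple;
   my $data = XMLin('config.xml');   # returns a hash reference mirroring the document
   print $data->{title}, "\n";       # assumes the document has a <title> element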

And this should answer many questions of where to head next,
http://www.xmlproj.com/perl-xml-faq.dkb

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: Possibly OT, but CGI related.

2003-01-23 Thread wiggins


On Thu, 23 Jan 2003 03:41:30 -0800 (PST), Will <[EMAIL PROTECTED]> wrote:

> Not about perl, but CGI...
> 
> Does anyone know why it would be more secure not to
> allow HTML files in a cgi-bin?  
> 
> I was working on a project with both perl cgis and
> html files in the cgi-bin.  The cgi's ran fine, but I
> was getting all sorts of errors from the HTML.  I
> asked the host, and they said it was fot security
> reasons but nothing more.
> 
> Just wondering if anyone would know what they meant.
> 
> Thanks,
> 
> Will
> 

Personally it sounds like a cookie-cutter, security-is-on-the-brain response.  If you have 
access to write to the cgi-bin and *any* kind of file is set up as a handler, then you have 
a security risk.  There is no difference between a perl file and an html file at the system 
level, only that one usually contains Perl and one contains HTML.  Without a specific 
handler set up for HTML files found in script directories, the server will most likely try 
to execute the HTML file as a script, at which point no shebang will be found (in a normal 
HTML file, that is) and it will cough up an internal server error fur ball.  Along these 
lines we used to set up cgi handlers for extensions like .bri, .matt, etc. so that each of 
our developers could write scripts with their own extension, all in Perl (because it 
doesn't care about extensions) (granted this was 1998 and we should have been using RCS, 
etc.), but it worked well. The real question is why would you want/need to?

http://danconia.org



-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: Hash of hash crash!

2003-01-23 Thread wiggins


On Thu, 23 Jan 2003 12:54:25 -0500, Ed Sickafus <[EMAIL PROTECTED]> wrote:

Not sure if this is it, but try lowercasing 'if'.  I believe Perl is case sensitive in 
this respect.

>  If ($tmp{$keyinner} == "LN")



> 
>   "BUGS/  Adding or altering substructures to a hash value is not
>   entirely transparent in current perl.
>   $mldb{key}{subkey}[3] = 'stuff';# won't work
> 
>   Instead, that must be written as:
> 
>   $tmp = $mldb{key};  # retrieve value
>   $tmp->{subkey}[3] = 'stuff';
>   $mldb{key} = $tmp;  # store value"
> 

You should be able to do:

${ $mldb{key} }{subkey}[3] = 'stuff';

or

$mldb{key}->{subkey}->[3] = 'stuff';

I can't quite get your structure in my head correctly, but there should be a way to access 
the location directly like the above (keeping in mind the MLDBM docs quoted above warn that 
changes made to substructures this way may not actually be written back to the database).

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: HoH crash

2003-01-23 Thread wiggins


On Thu, 23 Jan 2003 13:27:31 -0500, Ed Sickafus <[EMAIL PROTECTED]> wrote:

> That did it!!
> Perl is sensitive to case -- If doesn't compile, if does.  :)
> Thanks for the help. Ed
> 
> 

Glad we finally got that taken care of, whew.  :-)

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: HoH Crash II

2003-01-23 Thread wiggins


On Thu, 23 Jan 2003 14:59:07 -0500, Ed Sickafus <[EMAIL PROTECTED]> wrote:

>  -- Back to square one?
> 

I think you are still having problems with the notion of what the key is and what the 
value is.

> # I define 3 variables  ...
> 
>   $TS = time();
>   $LN = "Washington"; # test value
>   $CN = "Costa Rica"; #  " "
> 

OK, so $TS now contains the current time...which I think is what you want your outer 
hash key to be...correct me if I am wrong?

> # ... in order to build a HoH ...
> 
>   $newH{'TS'} => $TS; # TS is the outer key.   <<== line 33

Presumably you meant $newH{'TS'} = $TS; here (with =, not =>). That sets the outer hash key 
to the string 'TS' and gives it the value of the time ($TS) from above.


> 
> # ... using this code ...
> 
>   %newH   =  (
>  TS   => {# Note outer hash key, TS
>  'LN' => $LN, # Last Name
>  'CN' => $CN, # Country
>  },
>  );
> 

Now you have *reset* the value of the 'TS' key of the hash to an anonymous hash 
reference, overwriting your earlier value (that is the time stamp is now gone).

Lets say

$TS = time();
$LN = 'dAnconia';
$CN = 'USA';

now, we want:

$newH{$TS} = { 'LN' => $LN, 'CN' => $CN };

This sets the 'outer key' to be the time stamp (or, written a bit more directly):

$newH{time()} = { 'LN' => $LN, 'CN' => $CN };

foreach my $timestamp (sort(keys(%newH))) {
   print "$timestamp  Last Name = " . $newH{$timestamp}->{'LN'};
   print "  Country = " . $newH{$timestamp}->{'CN'} . "\n";
}

Does this make sense, help? Keep posting, we will get it there, even if it kills one 
of us ;-).

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: Number format control

2003-01-29 Thread wiggins


On Wed, 29 Jan 2003 14:02:47 -, "Jattie van der Linde" 
<[EMAIL PROTECTED]> wrote:

> c syntax equivalent: printf ("%3.0f",Value); /*if value = 123.456 result would be 
>123*/
> 
> What is the equivalent perl command outputting a floating point number without 
>decimals. I read telephone from the database and they print as 6704046.0 I want to 
>get rid of the .0
> 

Should work the same; see perldoc -f printf and perldoc -f sprintf.  Assuming you don't 
care about rounding versus truncation, you could also just turn it into an int, 
perldoc -f int.
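
A couple of quick sketches, assuming the value comes back from the database looking like 
6704046.0:

   my $phone = 6704046.0;
   printf "%.0f\n", $phone;            # 6704046  (rounds)
   my $str = sprintf "%.0f", $phone;   # same thing, captured in a variable
   print int($phone), "\n";            # 6704046  (truncates toward zero)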

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Search for a string

2003-02-03 Thread wiggins


On Mon, 03 Feb 2003 13:09:47 -0500, Jeremy Schwartz <[EMAIL PROTECTED]> wrote:

> Not trying to reinvent the wheel.
> 
> I am using Analog for the analysis.
> 
> I am trying to split the server combined log into individual vhost logs. I
> can then run each through Analog to produce individual reports.

> > Don't reinvent the wheel.  There are a number of fine log analysis
> > utilities, such as analog.
> > 
> > xoa

Out of curiosity, is there a reason why you are not handling this at the Apache level?  
Each vhost can have its own set of logs from the start, which then would not need to be 
pulled apart.  Is this a possible scenario for you going forward? (Granted, it doesn't help 
now.)  It would also seem that your task would be better handled with a shell script, since 
you already have the command line for creating the file(s) from the main log; just wrap 
that command in a loop that takes your directory names as input.

Something along the lines of:

#!/bin/sh

for dir in `ls -1 /webroot/`; do
  cat /var/log/httpd/access_log | grep "$dir" > /var/log/httpd/access_log_$dir
done

I am no shell hacker and the above is untested, but you get the idea.  In general Perl 
would not be a good choice for performing something so simple that already has a 
command line solution available. 

If you were going to do it in Perl, then rather than scanning the whole log file once for 
each vhost, you would be better off unpacking or splitting each log line and either storing 
that line in an array associated with the particular vhost and then printing each vhost's 
array to a file, or opening a filehandle for each vhost at the beginning of the script and 
printing the line to whichever filehandle is associated with that vhost.  Stepping through 
every line of the log file once for each of the vhosts in Perl would probably be a really 
bad way to handle things.
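
An untested sketch of the filehandle-per-vhost idea, assuming the vhost is the first 
whitespace-separated field on each line:

   my %fh;
   open my $log, '<', '/var/log/httpd/access_log' or die "can't open log: $!";
   while (my $line = <$log>) {
       my ($vhost) = split ' ', $line;
       unless ($fh{$vhost}) {
           open $fh{$vhost}, '>>', "/var/log/httpd/access_log_$vhost"
               or die "can't open split log for $vhost: $!";
       }
       print { $fh{$vhost} } $line;
   }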

I would still suggest letting Apache do the splitting by not storing one main log with all 
vhost content; it is much easier to put the logs back together to get a complete picture 
than it is to dissect them after the fact.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Search for a string

2003-02-04 Thread wiggins

On Tue, 4 Feb 2003 00:14:30 -0900, "Dennis Stout" <[EMAIL PROTECTED]> wrote:

> Sheesh.
> 
> Wil ya'll just help a man with a perl problem instead of battering him with
> other ways to do it?
>

At least one of these lists is a beginners list, and the other is related to Mac OS X, 
which the question really wasn't (other than that I figure he is running Apache on OS X); 
neither is a contract-out-free-coding list. My original post suggested a better overall 
system, that is, to split the log with Apache (as it is easier to put the logs back 
together again than to take them apart), and it also offered several suggestions of how 
the task would best be handled within a perl script.
 
> Sometimes people like ot pose a challenge to themselves and see if it can be
> done.
> 

Right, which is what my last paragraph alluded to: I suggested splitting each of the lines 
and catching the vhost, then printing that line to a separate file. But without the poster 
first trying, and then asking further questions about how to do this, I am not going to 
offer up a solution, as that would defeat the purpose of a challenge.

> Instead of being counterproductive and refering peopel to other things, help
> the man!
> 

See above. This message hasn't been all that productive for the original poster, and while 
I am not easily discouraged, it could suggest that the help offered (by myself and the 
others, freely, as in beer and speech) was not appreciated, and it may keep other posters 
from offering advice on how best to do something rather than on how a poster *is* doing 
something, which would be the most counterproductive outcome.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: Cookie INfo

2003-02-04 Thread wiggins


On Tue, 4 Feb 2003 16:22:17 -0500, "Kipp, James" <[EMAIL PROTECTED]> wrote:

> Can anyone point me to some good docs for using cookies with Perl/CGI. I can
> only seem to find docs using Javascript. I have already read the cgi.pm
> docs, but looking for something with more info.
> 

Do you have a specific question? It seems:

http://search.cpan.org/author/LDS/CGI.pm-2.89/CGI.pm#HTTP_COOKIES

covers things pretty extensively, short of looking at the source to see how things are 
implemented rather than just the API.  There is really very little to them.
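
The short version (a sketch; the cookie name and value are made up):

   use CGI;
   my $q = CGI->new;

   # baking a cookie: build it, then hand it to header()
   my $cookie = $q->cookie(-name => 'session', -value => 'abc123', -expires => '+1h');
   print $q->header(-cookie => $cookie);

   # on a later request, reading it back
   my $session = $q->cookie('session');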

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: Using variable to another script

2003-02-06 Thread wiggins


On Thu, 6 Feb 2003 13:59:51 +0800, "Glynn S. Condez" <[EMAIL PROTECTED]> 
wrote:

> Hi all,
> 
> I have a question. From a web form, i pass the info to a cgi script.
> 
> i have this variable on the script:
> 
> $username = param('username');
> $lastname = param('lastname');
> 
> my question is, how can i use that two variable to another script 
> open by this command:
> 
> system('sudo', '-u', 'www', '/path/to/anotherscript.pl');
> 

Assuming you have worked out the details with sudo ahead of time (aka not having to enter a 
password, etc.), then:

system('sudo', '-u', 'www', '/path/to/anotherscript.pl', $username, $lastname);

should work; passing the values as separate list elements keeps the shell out of it, and 
'anotherscript.pl' will receive the arguments in @ARGV ($ARGV[0] and $ARGV[1]).
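
On the receiving end, a minimal sketch of picking the two values back up:

   # in anotherscript.pl
   my ($username, $lastname) = @ARGV;
   die "usage: anotherscript.pl username lastname\n" unless defined $lastname;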

If anotherscript.pl is, and *must* be, a cgi script, then you really have two options: 
develop it in such a way that it expects parameters in @ARGV, which I believe the CGI 
module will handle, or call a third-party program like lynx, curl, etc. to request the 
script over HTTP, but this gets hairy and would seemingly lose your sudo privileges.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: Finding the size of the structure/data type

2003-02-12 Thread wiggins


On Wed, 12 Feb 2003 10:45:20 +0530, "Roopa.Mahishi" <[EMAIL PROTECTED]> wrote:

> 
> 
> Hello friends,
> I would like to find the size of the data type /struct in Perl. Is there any
> command/keyword available in Perl which i can use in my scripts?
> Any help is this regard would be really useful to me.
> 

Are you referring to the size of a particular type of variable, or are you talking 
about the internal memory size?

In the former case,

perldoc -f length for finding the length of strings

or for arrays and hashes, using them in scalar context will give you the number of 
elements, for instance:

$size = scalar(@array);
$size = scalar(keys(%hash));

In the latter case you may want to have a look at:

perldoc perlrun

In particular the section on "PERL_DEBUG_MSTATS" 

Or you may want to have a look at Devel::Size from CPAN:
http://search.cpan.org/author/DSUGAL/Devel-Size-0.54/Size.pm

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




Re: Parsing a file as it is being served

2003-02-18 Thread wiggins


On Tue, 18 Feb 2003 10:39:25 +, Philip Pawley <[EMAIL PROTECTED]> wrote:



> 
> Thanks for the suggestions. The problem is I need to start from someone requesting 
>an html file, not someone calling a script.
> 
> How can I do it?
> 

mod_perl, I believe, is going to be your only option in Perl.  You might be able to 
accomplish the whole thing with javascript on the client side using a whole lot of 
document.write's, but I don't pretend to be an expert on such things, and that may be too 
late in the case of the document leader.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]




RE: Getting a STDOUT value

2003-03-07 Thread wiggins
Remember to include the list in your replies, so that everyone can help and be helped.


On Fri, 7 Mar 2003 08:52:04 -0600, "Scot Robnett" <[EMAIL PROTECTED]> wrote:

> Thanks - but I don't understand how that redirect works. I've read about
> open, select, and filehandles and I still don't "get" the STDERR or STDOUT
> redirect. 

Essentially the process being called needs to know how to handle this type of interaction, 
so using regular opens/filehandles is not likely to work. However, you are calling a 
process using backticks (or you can use the IPC::Open3 mechanism): in the former case the 
command line is handed to the shell, which does understand how to do redirects, etc., and 
in the latter case handling the three streams is the very reason for Open3.

> I don't see any examples of working code with the 2>&1
> functionality and I don't know in what context that gets used.
> 

2>&1 is parsed by the shell and instructs it how to handle the outputs of the process it is 
about to run. Because backticks get their information from the shell, specifically from the 
shell's STDOUT, if you tell the shell to redirect the stderr of the process into the stdout 
(which is what 2>&1 does) then the STDERR will show up on the STDOUT channel, which will be 
caught by the backticks.  The only context where you need to watch out for this is when the 
command is *NOT* handed to a shell, which I believe only occurs in the multiple argument 
version of 'system' (please, list, correct me if I have this backwards).

> I am able to get the pid with IPC::Open2, but I want to display the results
> of the process, not the ID of the process.
> 

Sorry, I should have been more specific on this: I believe Open2 will not accomplish what 
you need, since it only works with STDOUT/STDIN. Open3 should be able to handle it, 
however, as it handles STDIN/STDOUT/STDERR. You will *NOT* want to use the 2>&1 construct 
with Open3!!  Check the docs for info on getting the output, but essentially you call open3 
on the command, giving it filehandles where it should send the output, and then you just 
read from those filehandles like a normal 'open'ed filehandle.

> Here's what it looks like using a standard filehandle, but the results
> aren't being saved into my variable and therefore not displaying in the
> browser. What should this code look like with the "missing link" added?
> 
> #!/usr/bin/perl -w
> 
> use strict;
> use CGI::Carp qw(fatalsToBrowser);
> 
> my $cmd  = `perl -c myscript.cgi`;

Change the above to:

my @output = `perl -c myscript.cgi 2>&1`;

Then you will want to test $? for the return code; on a failed return code (likely anything 
but 0) you will want to step through the @output array and print the stderr.  Check in:

perldoc perlop

Under the section "qx/STRING/" for examples and further description.


> open(STDERR,"<$cmd"); # No idea how to redirect 2>1 here
> while($line = <STDERR>) {
>  print $line;
> }
> close(STDERR);
> 

Here, if my understanding is correct, you are re-opening your own STDERR using the 
command's STDOUT (which doesn't have anything on it), then reading your own STDERR and 
printing it on your own STDOUT. That won't print anything, because there is nothing on your 
own STDERR, because there was nothing on the process's STDOUT...  but then you really don't 
want to do things this way anyway, at least outside of the context of an obfuscated Perl 
contest :-).

HTH,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: mod_perl books, cpan modules.

2003-03-21 Thread wiggins


On 20 Mar 2003 09:33:56 -, entius <[EMAIL PROTECTED]> wrote:

> I want to learn mod_perl and there're two books who talk about this.
> Now a third has arrives and it's talking about 2.0 mod_perl but i am a 
> Lincoln Stein fan and i want the "Writing Apache Modules with Perl and C". 
> But the last edition is still about 1.3, anyone knows anything about any 
> intention about making a super-new edition about 2.0??
> 

I don't do a lot of mod_perl, but I would think that the basis for how the various steps in 
a request/response are handled, etc. is very similar between the two versions, so the 
underlying core of the book mentioned above should still apply, with various changes in 
syntax for the interface, etc.  I will comment that when I went through the book a little 
over a year ago it was excellent for anyone with a good background in Perl and its use on 
the web.

> Another question about cpan modules, only very few are really classes 
> aren't them? i mean, oop is still rare in cpan? or i'm wrong and all new 
> modules get rid of exporter.pm??
> 

This I would disagree with currently, though I couldn't say when a critical mass was 
reached; obviously I feel it safe to say that much of the OOP is concentrated in newer 
modules.  Part of the beauty and flexibility of the Perl OOP model is that it only requires 
a little more effort to make a module both OOP and non-OOP.  CGI.pm is a perfect example: 
for those from the old school, CGI fits their purposes and provides all of the same 
functionality, through virtually the same interface, that it does for those that prefer the 
OOP approach.  Some other examples include the Mail::Box module for all things mail 
oriented, which is extremely extensive and uses Perl's OOP tactics to the fullest; POE; 
Crypt::OpenPGP; DBI; and then the obvious examples of things like Class::MethodMaker that 
handle building much of the OOP interface for you (isn't Perl grand!).

I think it is dangerous to think of OOP and using the Exporter as mutually exclusive, 
however...

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Downloading Email Attachement from POP3 Server

2003-03-26 Thread wiggins


On Wed, 26 Mar 2003 16:39:16 +0530, "Akshay Kumar" <[EMAIL PROTECTED]> wrote:

> Hi All,
> 
> Daily a mail with some attachment is sent to an email account. I have got full 
> access of the email account with pop reading facility. Now, I need to download the 
> attachment of any particular email based on the date.
> 
> Waiting for TIPS for a script in PERL.
> 

Please don't cross post to multiple lists. If you post to one and don't get the 
answers you seek then post to the other...

You should check out one of the many mail handling modules, for instance:

Mail::Box
Mail::POP3Client

I have had success with the first (though not specifically with POP) and heard of 
people who have had success with the other.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: "safe" system()?

2003-03-28 Thread wiggins


On Fri, 28 Mar 2003 12:00:09 -0800, drieux <[EMAIL PROTECTED]> wrote:

> 
> On Friday, Mar 28, 2003, at 11:01 US/Pacific, Jerry LeVan wrote:

> 
> And BEFORE wiggins whines at me for not pointing at
> putting stuff that could be in a Module INTO a Module,
> y'all do know about
> 
>   Digest::MD5
> 
> that is available from the CPAN that would mean not
> having to invoke it remotely...
> 

Good thing I still read your whole rants as that was going to be my next post 
;-).. besides, I learned from the best, et tu brute.

http://danconia.org


-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Why not Class Objects

2003-06-27 Thread wiggins


On Fri, 27 Jun 2003 09:01:59 -0500, "Daniel J. Rychlik" <[EMAIL PROTECTED]> wrote:

> I have been a perl developer or about a year and half and I have a question.  How 
> come perl does not support the use of Class Objects ?  
> 

What do you mean by a "Class Object"?  I am assuming you mean a data structure that is 
associated with, and manipulated with respect to, all objects of a particular class (its 
instances). In that case Perl supports this by providing package-scoped variables and any 
method of the package that does not expect an object ref as its first parameter.

You can also go to extremes to encapsulate the data so that only specific methods can 
act on it, truly making it a "class object", but that is beyond the scope of my post. 

have a look at:

perldoc perltoot

In particular the section on "Class Data"
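
A bare-bones sketch of the idea (package and variable names are invented):

   package Counter;
   use strict;

   my $instances = 0;      # lexical shared by every Counter object: "class data"

   sub new   { my $class = shift; $instances++; return bless {}, $class; }
   sub count { return $instances; }    # class method: no object required

   package main;
   my $first  = Counter->new;
   my $second = Counter->new;
   print Counter->count, "\n";         # prints 2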

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: select multiple

2003-07-10 Thread wiggins


On Thu, 10 Jul 2003 11:39:23 -0800, "Dennis Stout" <[EMAIL PROTECTED]> wrote:

> Beginners-CGI;
> 
> If I have a form with a lot of values (such as Tech ID, Tech Name, Tech
> Queues..) and one of the fields is a select multiple, with a varied amount of
> options selected, how are those values sent to the cgi script?
> 
> Is it something like ?queue=lvl1,lvl2,admin,sysad&foo=bar or what?
> 

Because there is no way to choose a delimiter that the potential data is guaranteed not to 
contain, the browser doesn't have the option of using an arbitrary delimiter like a comma 
or the like.  So (though I can't speak for all browsers, most do the same) each value is 
passed with the same key, and your query string ends up like:

?queue=lvl1&queue=lvl2&queue=admin&queue=sysad&foo=bar

This punts the problem to the server side (or whatever does the query string parsing), and 
there are multiple ways to handle it: build a data structure that stores an array reference 
for any multi-valued keys, or store the values joined with some known delimiter (cgi-lib.pl 
used to use the null character \0).  So it depends on your request parser; some provide 
multiple interfaces (I think the standard CGI module does). Have a look at the respective 
docs for how your parser handles it, unless you are writing a parser... but then why do 
that with so many good ones freely available?
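
With the standard CGI module it boils down to context (the field name 'queue' matches your 
form):

   use CGI;
   my $q = CGI->new;
   my @queues = $q->param('queue');   # list context: every selected value
   my $first  = $q->param('queue');   # scalar context: just the first one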

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Net::FTP

2003-07-11 Thread wiggins
Please don't top post.


On Thu, 10 Jul 2003 18:42:45 -0600, "Gregg R. Allen" <[EMAIL PROTECTED]> wrote:

> 
> If you don't mind escaping to the shell, this is how I get a list of 
> files I want to ftp.
> 
> 
> #This returns a list of files to be ftp'ed
> my $files = `ls`;
> 
> 
> #turn the files variable into an array of file names.
> my @ftpfiles = split(/\n/ , $files);
> 
> 

I won't say this is the "wrong" way to do it, insofar as it gets the job done; it is, 
however, probably the least efficient, most insecure, and most error-prone way to do it.

perldoc -f opendir
perldoc -f readdir
perldoc -f closedir
perldoc File::Find
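
For instance, roughly the same file list without ever touching the shell (untested):

   opendir DIR, '.' or die "can't read directory: $!";
   my @ftpfiles = grep { -f $_ } readdir DIR;
   closedir DIR;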

If you are going to shell out please at least use a full path, check the return 
status, store the output to an array directly rather than splitting, and detaint the 
values.

You *should* mind escaping to the shell unless it is an absolute must, in other words 
you have exhausted all other possibilities.

Though I am also not sure which question of the OP this addressed.

OP,

Your question about $Directory seems a bit odd; can't that be set prior to calling 'dir'?  
You might also consider a better name for @Directory, perhaps @RemoteFileList, so you end 
up with:

my $Directory = '/path/to/directory';
my @RemoteFileList = $ftp->dir($Directory);

That way your variables are more descriptive.  Check the Net::FTP docs regarding your long 
vs. short file list; in particular, 'dir' gives you the long listing, whereas 'ls' gives 
you the short one. So,

my @RemoteFileList = $ftp->ls($Directory);

As to your internal server error, are you printing the header?

Also you have either a typo in the code or you should consider copying and pasting 
instead, as you have the following line:

$my Directory;

You are not checking to make sure 'open' worked.

Sorry I didn't have the original post to provide inline comments.

http://danconia.org


> 
> 
> On Thursday, Jul 10, 2003, at 04:13 US/Mountain, Sara wrote:
> 
> > #!/usr/bin/perl -w
> >
> > use strict;
> > use warnings;
> > use CGI::Carp 'fatalsToBrowser';
> > use CGI qw/:standard/;
> > use Net::FTP;
> >
> >
> >
> > my $ftp = Net::FTP->new("ftp.yourserver.com", Debug => 0)
> > or die "Cannot connect to some.host.name: $@";
> >
> > $ftp->login("username",'password')
> > or die "Cannot login ", $ftp->message;
> >
> > $ftp->cwd("/")
> > or die "Cannot change working directory ", $ftp->message;
> >
> > my @Directory = $ftp->dir("/path/to/directory");
> > print "@Directory";
> >
> > $ftp->quit;
> >
> >
> > I am using the following to login to remote FTP;
> > and its working fine and I am getting the list of files from remote 
> > FTP from my desired directory but;
> >
> > - The script is working fine in my Window IDE and giving an Internal 
> > Server Error (without any error message) while on my Host.
> >
> > - its returning @Directory in long format
> > "-rw-r--r-- 1 username username 8654 Jul 5 18:20 test.html"
> > Is it possible to get file names only like test.html
> >
> >
> > and how to provide $Directory in the script given below because above 
> > is an array context @Directory?
> >
> > because after getting the list of files from the directory above I 
> > want to match/compare the file names with a text list on my server, 
> > see below.
> >
> > ###
> > $my Directory = ".";
> >
> > if ( open( NO, 'data.txt' ) )
> > {
> > while (  )
> > {
> > chomp;
> > # Optional: Add check for blank/incomplete lines.
> >
> > if ( -f "$Directory/$_" )
> > {
> > print "File '$_' exists in '$Directory'.\n";
> > # Optional: Add file to 'exists' list for later reporting.
> > }
> > else
> > {
> > print "File '$_' does NOT exist in '$Directory'.\n";
> > # Optional: Add file to 'not exists' list for later 
> > reporting.
> > }
> > }
> > close( NO );
> > }
> > else
> > {
> > print "ERROR: Unable to open file: $!\n";
> > }
> >
> >
> >
> > Thanks,
> >
> > SARA.
> >
> >
> >
> >
> >
> 
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Clearing a Form

2003-07-21 Thread wiggins


On Mon, 21 Jul 2003 17:05:57 -0500, Peter Fleck <[EMAIL PROTECTED]> wrote:

> I have a cgi that creates a form and then receives the output of the 
> form and sends it to a mysql database. It also displays the form 
> information as a preview of what is getting sent to the database.
> 
> You can return to the original form with data if you need to correct 
> something via an 'Edit' button. This button also makes sure that 
> nothing gets stored in the database (by deleting the data that was 
> just sent and I know there must be a better way to do that part but 
> it's not my current question).
> 
> If the visitor uses the browser 'Back' button to return to the form, 
> their data will be there but the record won't get deleted from the 
> database.
> 
> How do I erase all the data from the form if the visitor chooses to 
> use the Back button? I tried the CGI.pm method mentioned in 
> O'Reilly's CGI Programming:
> 
> print $dataIn->header( -type => "text/html", -expires => "now");
> 
> but that doesn't do it.
> 
> I'm wondering if javascript is the answer?
> 

I think the difficulty you are experiencing in your implementation is a direct result 
and indicator of a design that needs to be re-examined.  You are running into the 
standard problem with a stateless protocol (aka HTTP). Rather than switching to 
Javascript to handle "abnormal" use of the browser's back button, you would be better 
off assuming that half the time that is how a person is going to navigate. Store the 
submitted values as hidden fields in a form on the preview page; then, when the visitor 
uses the back button, makes some edits, and resubmits, your script need only re-display 
the preview page, which keeps the implementation simple. The data is *only* stored to 
the DB once the final version is submitted from the preview page. To implement your 
"edit" feature you need only return them to the original form and pre-fill the fields 
based on the values submitted in your preview form (which is also your "Edit" form).

Sorry if this is confusing, I would re-examine why and when you store data to the DB 
in your design rather than the implementation details.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: urgent help needed! (PHP translation)

2003-07-22 Thread wiggins


On Tue, 22 Jul 2003 17:10:36 +0430, "S. Naqashzade" <[EMAIL PROTECTED]> wrote:

> Dear Friends,
> I need to trnaslate thid code to PHP.
> Can any one help me?

This is a Perl list. You might try a PHP list for PHP help, even if it is coming from 
Perl code. Most people here would prefer to help you with finding a way to keep this 
being done in Perl, but that is a philosophical discussion.

http://danconia.org

> 
> use constant MD5_CRYPT_MAGIC_STRING => '$1$';
> use constant I_TO_A64 =>
> './0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';
> 
> sub _to_yahoo_base64
> {
>  pos($_[0]) = 0;
> 
>  my $res = join '', map( pack('u',$_)=~ /^.(\S*)/, ($_[0]=~/(.{1,45})/gs));
>  $res =~ tr{` -_}{AA-Za-z0-9\._};
> 
>  my $padding = (3 - length($_[0]) % 3) % 3;
>  $res =~ s/.{$padding}$/'-' x $padding/e if $padding;
>  return $res;
> }
> 
> 
> sub _to64
> {
>  my ($v, $n) = @_;
>  my $ret = '';
>  while (--$n >= 0) {
>   $ret .= substr(I_TO_A64, $v & 0x3f, 1);
>   $v >>= 6;
>  }
>  $ret;
> }
> 
>  my $Magic = MD5_CRYPT_MAGIC_STRING;
>  $salt =~ s/^\Q$Magic//;
>  $salt =~ s/^(.*)\$.*$/$1/;
>  $salt = substr $salt, 0, 8;
> 
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Clearing a Form

2003-07-22 Thread wiggins


On Tue, 22 Jul 2003 09:33:28 -0500, Peter Fleck <[EMAIL PROTECTED]> wrote:

> Thanks to all for the help on 'clearing the form' and for forcing me 
> to face the design limitations.
> 

As long as you realize them, that is most of the battle; all of us end up hacking around such 
limitations because outside factors (like budget and schedule) create the need.

> I would prefer to preview the data before storing in the DB and had 
> hoped to get this in place but ran into a problem on the way which 
> led to my workaround which and the STORE-PREVIEW-DELETE ENTRY-EDIT 
> sequence. I don't like it either.
> 
> The problem is that I am storing a series of entries to a table in a 
> hash with values that reference arrays.  I don't think I can store 
> something like that in a hidden form field so I have to 
> rethink/redesign that whole data storage process. Would it be safe to 
> say that only scalar values can really be hidden away in fields on a 
> form? (I also have some values in an array that would have to be 
> taken care of.)
> 

Well, storing complex data structures is possible; you just have to serialize them 
first, then reload them on the other end. Granted, this can get ugly depending on how 
complex the structure is, especially since you have to store it in HTML, which means 
accounting for special characters, etc.  There are numerous CPAN modules 
that provide this type of functionality if doing the conversions on your own isn't 
practical. Because a hidden field can contain essentially any amount of data (though I 
wouldn't use a GET), you should be able to serialize the structure into a single string and 
store it in a hidden field.
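One hedged sketch of that serialize/reload round trip, using Storable and MIME::Base64 (both 
are common choices, but any serializer that survives being stuffed into HTML would do; the 
data structure is invented):

use strict;
use warnings;
use Storable qw(freeze thaw);
use MIME::Base64 qw(encode_base64 decode_base64);

my $data = { grants => [ 'NIH', 'NSF' ], years => [ 2002, 2003 ] };

# Flatten the structure into a single HTML-safe string for a hidden field
my $blob = encode_base64( freeze($data), '' );

# ...later, when the preview form comes back...
my $restored = thaw( decode_base64($blob) );
print "@{ $restored->{grants} }\n";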

> I'm starting to conceive as to how it can be done but I'm also way 
> behind schedule on this project. So the 'good' design will have to be 
> implemented in the next rev of my ap.
> 

Understood.

> Just to provide a bit more entertainment for you experienced perlers, 
> my intermim/ugly hack solution to the back button problem is to add a 
> field in the DB that must be true for the data to display on a Web 
> page. The default value is 'false'. After previewing the data, the 
> user clicks an button which simply sets the display column value to 
> true. So if user uses back button, an extra record will exist in the 
> database but that record will not show up in public display.
> 

Actually I like this; the only thing I would add is a revision identifier. Make it a two-step 
process: a new revision must be added and then "committed" or the like, not unlike a version 
control system (I am working on implementing this on all components of my new site). For 
display you then select the most recent (greatest) revision that is active, but you can always 
roll back to a previous revision simply by deactivating the most recent addition.
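If the revisions live in a database, that "greatest active revision" select might look roughly 
like this with DBI (table and column names are invented for illustration):

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:mysql:site', 'user', 'pass', { RaiseError => 1 });

my $page_id = 42;    # hypothetical page identifier

# Show the most recent revision that has been committed/activated
my ($body) = $dbh->selectrow_array(
    'SELECT body FROM grant_pages
      WHERE page_id = ? AND active = 1
      ORDER BY revision DESC LIMIT 1',
    undef, $page_id,
);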

Of course there are other issues with this technique, aka storage space, do you store 
the complete entry or just a diff, locking, etc.

> Background: This is a data entry system for one person/editor that 
> will then result in dynamic display of information for the public on 
> our Web site. Right now the person is maintaining the page in HTML 
> Netscape Composer so it's hard for anything scripted not to be an 
> improvement. You can check the current site at:
> 
> http://www.cancer.umn.edu/page/aboutus/grantopp.html
>

Sounds like it is definitely an improvement. I would say the issues you have 
encountered are fairly regular and can't really be taught around, in most cases the 
only way to avoid them is to have already experienced them (and trust me I have).

http://danconia.org
 
-- snip old messages --

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: PHP vs Perl

2003-07-22 Thread wiggins


On Wed, 23 Jul 2003 00:39:30 +0300, "Octavian Rasnita" <[EMAIL PROTECTED]> wrote:

> Hi all,
> 
> Talking about PHP, someone asked me to tell him why is PHP better and why it
> is used more than Perl.
> I don't know what to tell him because I don't know PHP, but I've seen that
> it is used more and more and I guess that there are more PHP scripts than
> Perl scripts now, so I think he could be right.
> 
> Of course, he was talking about CGI programming, because I know that Perl
> can do much more than CGI programming like PHP.
> 

Can you say hello to Pandora for me...

The only sure answer to this is that Vim is better than Emacs.

Let the flaming begin!

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: PHP vs Perl

2003-07-23 Thread wiggins


On Wed, 23 Jul 2003 00:59:29 -0700, drieux <[EMAIL PROTECTED]> wrote:

> 
> On Tuesday, Jul 22, 2003, at 20:10 US/Pacific, Wiggins d'Anconia wrote:
> > Octavian Rasnita wrote:
> [..]
> >> Of course, he was talking about CGI programming, because I know that 
> >> Perl
> >> can do much more than CGI programming like PHP.
> >
> > As much as we jest at your expense I suppose a "real" answer should
> > be given, well other than drieux's long rant about MVC and layers ;-).
> 
> first off, I actually wish that the process of WOOFing
> up 'browser neutral' 'html like stuff' WERE simpler,
> { and that there was BUT one DOM, vice 3++ }
> and as such, there are things in PHP that 'simplify'
> some of that - just as
> 

But what fun would that be? I mean, if we were efficient in our pursuit of what was 
possible, rather than just stabbing at it in 40 different ways, we wouldn't have been 
treated to things like the dot-com bubble and off-shore out-sourcing, and we would in 
general be a much happier (global) society because we could spend our time drinking 
crappy canned macrobrewed beer while playing softball...though I am not sure that is 
what you were driving at...

>   use CGI;
> 
> helps in some cases so it really is not a matter
> of jesting at Octavian, as much as it is the HORROR
> that HTTP/HTML/XHMTL/XML/XPATH/XSLT/OtherWordsWithX_inThem
> were not so much 'released' as LEAKED OUT like toxic waste.
> 

Actually I meant more myself and the other poster, you we were the ones jesting at him.

> As such we have all been improvising ways to 'woof Up Html Stuff',

Definitely, and I thought it was just that we didn't know what we were doing; or I 
guess that is correct, I just thought we were the ONLY ones that didn't know 
what we were doing.

> [..]
> > The closest answer is "whatever works best in your situation".
> > The problem with this answer is there are at least a hundred
> > different factors in determining what your situation is, and
> > even if you could define it that explicitly there would still
> > be overlap of features, etc.
> [..]
> 
> Where this somewhat fails is that I might have argued not that
> long ago that one would be better positioned were one to
> construct the 'perl code' - in modules - as I do, and test
> that the underlying 'controller code' can work as a stand-alone
> 
>   dumb.plx
> 
> that I can run at the command line - and then knowing that I
> can get all of the right information back as an '@menu_list'
> that I could then unwrap into html to show up as
> 
>
>   4.01
>   8049
>   1
>   8347
>   2333
>
> 
> which I so love - because this is actually an instance of
> "OOOPSIE" - since the same "Named" piece of "foo.cgi" exists on
> the 'remote web-server' - it is just a rev back, running
> an out of date version of the underlying Perl Module, and
> the 'correct' new style one would have done say:
> 
>
>   xanana
>   libex
>   jeeves
>
> 
> So having a Perl Module didn't save me, and using HTTP
> as the session layer to 'query back to the config host'
> didn't save me from a 'version skew problem'.
> 
> I'm not convinced by my own demonstration here, that
> clearly PHP must be better
> 
> I think a part of why the PHP v. Perl 'flame wars' gets
> going is that at times folks change 'cults' for the wrong
> reason. They ran into a problem that was less easy to
> 'code up' in Perl - and because they could use 'fewer
> lines of text' in PHP - that this must mean that PHP is
> better at 'generating' 'web-pages' - hence must be better
> at being the CGI friend...
> 

I would tend to disagree with why they "changed" cults, and argue that it wasn't that 
one was easier to use to code it up, but more that their situation had changed. Which 
gets me directly to my main reason for preferring the two separated, and hits directly 
back to the concept of MVC that you brought up originally. If I am developing a site 
that is heavily dynamic in nature, but has a number of different formats (views), then 
presumably I am not the only one developing it, and the person, read college freshman 
who took a class in HTML during summer school because he likes video games and will 
work for $8/hr and all the free mountain dew he can drink, who has to code up the HTML 
because I am a snooty overpaid programmer, read who actually has 

RE: Another Regex question.

2003-07-23 Thread wiggins


On Wed, 23 Jul 2003 21:58:52 +0500, "Sara" <[EMAIL PROTECTED]> wrote:

> $TS = "THIS INPUT IS IN ALL CAPS";
> 
> $TS_cont = lc $TS;
> 
> $TS now prints out "this input is in all caps"
> 
> What If I want first letter in caps for every word in string? which should be "This 
> Input Is In All Caps"
> 

Well, TMTOWTDI, and this way results in a little data loss and definitely will *NOT* 
win any beauty contests. If you are trying to maintain a non-sane delimiter you will 
need to resort to other methods.

my $TS = "THIS INPUT IS IN ALL CAPS";
print join(" ", map { $_ = ucfirst(lc($_)) } split(/\s+/, $TS));

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Another Regex question.

2003-07-23 Thread wiggins
At some point I really do need to read that damn regex book...pass the advil please.

http://danconia.org


On Wed, 23 Jul 2003 17:51:09 -0400, "Hall, Scott" <[EMAIL PROTECTED]> wrote:

> Sara,
> 
> You can use "ucfirst".
> 
> $TS =~ s/(\w+)/ucfirst lc $1/ge;
> 
> Scott
> 
> -Original Message-
> From: Sara [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, July 23, 2003 12:59 PM
> To: org
> Subject: Another Regex question.
> 
> 
> $TS = "THIS INPUT IS IN ALL CAPS";
> 
> $TS_cont = lc $TS;
> 
> $TS now prints out "this input is in all caps"
> 
> What If I want first letter in caps for every word in string? which should
> be "This Input Is In All Caps"
> 
> TIA,
> 
> Sara.
> 
> 
> 
> 
> 
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Preserve line endings via ftp

2003-08-04 Thread wiggins


On Mon, 4 Aug 2003 10:47:42 -0500, Peter Fleck <[EMAIL PROTECTED]> wrote:

> Greetings,
> 
> I'm using Net::FTP to move a file from a Linux server to a Novell 
> server. Before I move the file, I have perl change all the line 
> endings from unix-based to DOS-based. The line endings seem to get 
> lost during the ftp process and end up as unrecognized characters (at 
> least when viewing the text file from my Mac).
> 
> Trying to figure out how to change this so I can preserve those endings.

Try setting the transfer type to binary. There is a 'binary' method...
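Something like this (an untested sketch; host, credentials, and file name are placeholders):

use strict;
use warnings;
use Net::FTP;

my $ftp = Net::FTP->new('novell.example.com') or die "Connect failed: $@";
$ftp->login('user', 'pass')                   or die $ftp->message;
$ftp->binary;                 # image/binary mode: bytes go across untouched
$ftp->put('report.txt')       or die $ftp->message;
$ftp->quit;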

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: Mail::Send question

2003-08-05 Thread wiggins


On Tue, 5 Aug 2003 08:13:06 +0300, "Octavian Rasnita" <[EMAIL PROTECTED]> wrote:

> What mail module can be used to send email messages with attachments?
> I would prefer a module included in the perl package by default.
> 

MIME::Lite is a good choice for simple mail handling, though as the other poster 
mentioned it is not as robust as the one I have come to prefer, Mail::Box. I tend to 
do A LOT with mail and didn't mind the steep learning curve and performance hit.  It 
is also still somewhat in development, though with a responsive mailing list and 
extensive documentation I have found that I can get around in it pretty well now.

There is also the Mail-Tools bundle, which is essentially replaced by Mail::Box and 
provides much of the same functionality.

If MIME::Entity is a separate module from MIME::Lite (I'm not sure it is; MIME::Lite 
might just wrap MIME::Entity), then that worked as well.
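For the simple case, a MIME::Lite message with an attachment looks roughly like this 
(addresses and file names are placeholders, not anything from your setup):

use strict;
use warnings;
use MIME::Lite;

my $msg = MIME::Lite->new(
    From    => 'me@example.com',
    To      => 'you@example.com',
    Subject => 'Report attached',
    Type    => 'multipart/mixed',
);
$msg->attach( Type => 'TEXT', Data => "See the attached file.\n" );
$msg->attach(
    Type        => 'application/octet-stream',
    Path        => '/tmp/report.pdf',
    Filename    => 'report.pdf',
    Disposition => 'attachment',
);
$msg->send;    # sendmail by default; $msg->send('smtp', 'mailhost') also works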

I don't believe any of the packages are included by default but many have binary 
packages available, and naturally exist on CPAN.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: obtain web page contents with perl through a proxy server

2003-08-08 Thread wiggins


On Wed, 6 Aug 2003 21:07:50 -0700 (PDT), wendy soros <[EMAIL PROTECTED]> wrote:

> Hi,
> 
> I am a new perl user. I have a question of obtaining
> web page contents with perl. Hope you can help me.
> 
> Here is my question: how can I use perl to access a
> webpage, fill in some search parameters, pull out
> parts of the search results returned that I really
> need and save them in a file? 
> 
> For example, a web site has some information about a
> bunch of companies for various years. After one puts
> in the company name and the relevant years, one would
> be directed to another page that has the text contents
> needed. 
> 
> I have the file containing the company names and the
> years I want. Since the number of companies I have is
> large, it would be really nice to have perl pull out
> the contents for me. Could it do this? How? What if I
> need to access the web page through a proxy server? 
> 
> Once again, I am new to the perl community and I am
> sorry if the question is too simple. But I do need you
> help. Thanks a lot.
> 

"No job is to small, no fee is to small!" -Ghostbusters

You should check out the LWP set of modules from CPAN. It makes this kind of task 
bearable and should provide all of the functionality you need. 

http://search.cpan.org/author/GAAS/libwww-perl-5.69/lib/LWP.pm

In particular you may want to start with the lwpcook documentation (perldoc lwpcook) as it 
provides good working examples.  Have a read through the docs, give it a stab, and come back 
with questions when you get stuck...
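As a hedged starting point (the URL, proxy address, and form field names below are invented; 
the cookbook shows the real patterns):

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
$ua->proxy('http', 'http://proxy.example.com:8080');   # if you need one
# or $ua->env_proxy;   # honor the http_proxy environment variables instead

my $res = $ua->post('http://www.example.com/search',
                    { company => 'Acme Corp', year => 1999 });
die $res->status_line unless $res->is_success;

my $html = $res->content;    # then pull out the parts you need and save them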

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: cpan module for mail login and scanning

2003-08-19 Thread wiggins


On Tue, 19 Aug 2003 14:08:37 +0200, Alex <[EMAIL PROTECTED]> wrote:

> Hello cgi-people,
> 
> is there such a module that logs into a pop3 account, scans the
> content / headers and sends pop3 commands?
> 
> the idea is to frequently log in into  my pop3 account and remove
> spam-mail.
> 
> one pattern, that I found is the whole range of [EMAIL PROTECTED]
> accounts in the recipient line... as soon as one of these mails
> with this header is found it'd being deleted.
> 

You should have a look at the Mail::Box distribution. It handles many types of mailboxes 
(including remote POP3) very well and can have built-in spam detection support using 
SpamAssassin. There is also a lighter-weight Mail::POP3Client module, though I am not 
familiar with its workings so I can't provide a rating.  Both are available from 
CPAN.
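For something very small there is also the core Net::POP3 module (not one of the two mentioned 
above, so treat this as an untested alternative sketch; host, credentials, and the spam pattern 
are made up):

use strict;
use warnings;
use Net::POP3;

my $pop = Net::POP3->new('pop.example.com') or die "Can't connect: $!";
defined $pop->login('user', 'secret')       or die "Login failed\n";

my $msgs = $pop->list;                 # hashref: message number => size
for my $num (keys %$msgs) {
    my $head = join '', @{ $pop->top($num, 0) };    # headers only
    $pop->delete($num) if $head =~ /^To:.*\@spamdomain\.example/mi;
}
$pop->quit;                            # deletions take effect at QUIT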

> or maybe I'd use the simple telnet-module... but I'd need a pop3
> command list...
> 

I would avoid this if at all possible.

> any help is greatly appreciated.
> 

HTH,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Mail Reading and Parsing

2003-08-19 Thread wiggins


On Tue, 19 Aug 2003 09:46:11 -0600, "eliu" <[EMAIL PROTECTED]> wrote:

> Hello everybody,
> 
> Im looking for some modules that may help me in a proyect where all I
> have to do is to analyze all the emails (which supposedly are erronous
> mails) and parse them.
> 
> So  I have to do different actions depending on if the mails are
> returned because 
> 1. The recipient does not exists
> 2. the recipient has his mailbox full
> 3. Other erronous mails.
> 
> So im looking for a module that may help me in parsing the emails and if
> there is one that can tell me what error is better yet.
> 
> Somebody knows a module in which i can relay on?
> 

There are numerous modules for parsing mail, though I don't know of any that will tell 
you whether a recipient has exceeded his quota, etc. as that is generally the work of 
an MTA not a parser.  You might start with the documentation for Mail::Box and 
MIME::Parser and see if they meet your needs. I have had success with both for mail 
parsing uses.
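A minimal MIME::Parser sketch for pulling a bounce apart (the failure-reason patterns are 
entirely up to you; the strings and file name here are invented):

use strict;
use warnings;
use MIME::Parser;

my $parser = MIME::Parser->new;
$parser->output_to_core(1);            # keep parts in memory, no temp files

my $entity = $parser->parse_open('bounce.eml');
my $body   = $entity->bodyhandle
           ? $entity->bodyhandle->as_string
           : join '', map  { $_->bodyhandle->as_string }
                      grep { $_->bodyhandle } $entity->parts;

if    ($body =~ /user unknown|does not exist/i) { print "bad recipient\n" }
elsif ($body =~ /quota|mailbox full/i)          { print "mailbox full\n"  }
else                                            { print "other bounce\n"  }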

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: [PBML] PLEASE USE [PBML] in your subject line

2003-08-22 Thread wiggins


On Fri, 22 Aug 2003 12:04:25 +0100, "brad" <[EMAIL PROTECTED]> wrote:

> 
>  
> hi peep's 
> I don't post often but what i have found is that alot of you do not put 
> [PBML] in your subject lines, i get your posting direct to my trash 
> without it and i would very much like to read your posting and responses 
> but with all the spam rubbish around at the moment this is the best way i 
> have found to sort my mail.

You should set your filter based on addresses rather than subject line, or get a 
better filter.  What does PBML stand for? Perl Beginner's Mailing List?, but this was 
sent to the PBCML, there are at least two that fit into that classification, of course 
some of our members could also be on the Pro Bowler's Mailing List, and I might 
consider joining a Peanut Butter Mailing List, hmmm, Peanut Butter but then dang 
it someone might also be subscribed to the crypto mailing list which might be thought 
to be PBCML, so maybe we better make that PBCGIML, yikes,

Then others use the newsgroup those lucky dogs

Good luck on your crusade...

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Secure Form Submission

2003-08-22 Thread wiggins


On Fri, 22 Aug 2003 05:48:14 +, Greenhalgh David <[EMAIL PROTECTED]> wrote:

> Hi All,
> 
> I need to implement a form that is submitted securely. My client does 
> not have access to SSL on his host. I was thinking in terms of a 
> session cookie with a client side RC4 encrypt and a decrypt in the Perl 
> script. Do peoople here consider that to be a secure scenario, or is 
> there another method that you could recommend? The encryption needs to 
> be reversible.
> 

It seems like it should be secure. I am assuming the "session cookie" would store the 
server's public key? or some such?  My question would be how do you implement an RC4 
encryption (or any encryption other than the built-in SSL) on the client side? 
Possibly a Java applet with the encryption built-in? I suppose you could implement an 
encryption algorithm in javascript and then just call that via a form's onSubmit, but 
how would you generate a random number (built into javascript?)... yikes thats a lot 
of javascript :-)... and at that point you would also have to generate a private key 
on the client side, and send the corresponding public key to the server... and this 
would have to be done each time which could get slow...

The problem I see is implementation rather than security; well, other than that there 
are better encryption methods than RC4, but choosing the cipher is probably secondary: if 
you can get one to work you should be able to get any of them to.

I saw in your other post about the limited IPs; if this really is a temporary solution, the 
implementation difficulty still might suggest springing for extra hosting or the 
like until the upgrade is in place...

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Call to a function is a seperate file

2003-08-25 Thread wiggins


On Mon, 25 Aug 2003 17:30:35 +0100, "Jattie van der Linde" <[EMAIL PROTECTED]> wrote:

> How can I call a function from my main script file located in a seperate file i.e. 
> for instance a reusable piece of code that will generate the index of my site 
> without having to copy the code to every singe page that I generate?
> 

Well on the Perl side of things you can look into 'require' and 'use' in particular. 
But you may want to read less specific documentation on the subject such as the 
perlmod man page:

perldoc -f use
perldoc -f require
perldoc perlmod
perldoc perlmodlib
perldoc perlsub

And drieux's rants on the subject of building out libraries...
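A bare-bones sketch of the 'use' route (the module name, path, and sub are all invented):

# SiteIndex.pm -- lives somewhere in @INC, e.g. /home/you/lib/SiteIndex.pm
package SiteIndex;
use strict;
use warnings;

sub index_html {
    return "<ul><li>Home</li><li>About</li></ul>\n";
}

1;

# then in each CGI script:
#   use lib '/home/you/lib';
#   use SiteIndex;
#   print SiteIndex::index_html();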

However, if you just want to include the same index on every page you are better off 
building it statically and then using a server-side include to drive it rather than a 
script; your CPU will thank you...

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Net::FTP or similar

2003-08-26 Thread wiggins


On Tue, 26 Aug 2003 15:06:08 -0400, Andrew Brosnan <[EMAIL PROTECTED]> wrote:

> Hello,
> 
> Has anyone used Net::FTP to create a browser based FTP client? (I assume
> that's one use for it) 
> 
> I have a client that needs to let their clients transfer files to and
> from their server. Some of the files may be big (30-40MB) and using a
> regular FTP client isn't an option, unfortunately.
> 
> I'm interested to hear peoples experience or suggestions.
> 

I am not sure I take your meaning. I have used CGI to allow someone to upload files, 
and I have used Net::FTP to transfer files between servers, but I am not sure I know 
what you mean by a browser-based FTP client.  Presumably you have a web server at your 
disposal which can take uploads, but then I am at a loss as to why you need FTP. If 
you mean to act as a proxy, so the file gets transferred to the web server and then 
routed from there somewhere else, then basically you have a proxy, and the underlying 
technology could be anything that allows file transfers: SSH, NFS, FTP, etc. But 
at that point the two technologies are effectively not linked, so you would be better off 
dividing the tasks. In other words, have a CGI that just stores the file locally, then 
have another process watch/poll/run on a schedule to check for new uploads; when one 
exists, "do whatever it is you do when a new file is present", and if that is FTP, so 
be it. 
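The upload half of that, sketched with CGI.pm (the form field is assumed to be named 'file', 
it must be submitted from a multipart/form-data form, and the spool directory is made up):

use strict;
use warnings;
use CGI;
use File::Basename;

my $q = CGI->new;
print $q->header('text/plain');

my $fh   = $q->upload('file') or die "No upload received\n";
my $name = basename( scalar $q->param('file') );   # client-supplied name: sanitize it!

open my $out, '>', "/var/spool/uploads/$name" or die "Can't write: $!";
binmode $out;
while ( read $fh, my $buf, 8192 ) {
    print $out $buf;
}
close $out;
print "Stored $name\n";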

The one problem I can definitely see you running into, especially based on the file 
size you mentioned, is timeouts both on the server and client side.

There is also another HTTP protocol request type specifically for this, but I can't 
remember what it is right now (because it is almost never used for anything...) that 
essentially just allows a file to be stored to the server, but yikes

Unless you mean something along the lines of Mozilla and XPI? But then you are talking 
about Net::FTP installed on the client side...

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: Installing/Uninstalling CPAN modules

2003-08-26 Thread wiggins


On Tue, 26 Aug 2003 19:20:03 +, Greenhalgh David <[EMAIL PROTECTED]> wrote:

> This may be the wrong forum to ask this...
> 
> How do I uninstall a module downloaded from CPAN? Specifically, I 
> cannot get the DBD::MySQL driver to install under OS X. It refuses to 
> find the mysql_config file, and when I run Makefile again with the -L 
> and -I switches it won't work after install.
> 
> I am running the Complete MySQL distribution which has worked before, 
> but since this installs into /usr/Library and not /usr/local, I am 
> wondering if the module ignores symlinks to /usr/local. Hence I want to 
> pull everything out and reload the MySQL AB distribution and try again. 
> A quick search shows DBD in dozens of places, in perl/ in DBI/ in 
> mySQL/ etc etc, which ones do I trash?
> 

You may have better luck on the [EMAIL PROTECTED] list, since OS X likes to do all 
kinds of weird things with its Perl, and with regard to DBI/DBD on [EMAIL PROTECTED].  
Typically removing a Perl module is not as easy as installing one; removing the 
associated files is only one step...

Sorry I can't provide specifics; I haven't tried this and don't have my PB with me.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: automated file removal / cache clearing

2003-08-27 Thread wiggins


On Wed, 27 Aug 2003 16:18:00 +0200, Shahar Evron <[EMAIL PROTECTED]> wrote:

> hi...
> I'm working on a CGI program that creates some user-specific file on the 
> server when accessed. is there a good way to make sure theese files are 
> cleared when they're no longer needed - IE if a file in a specific 
> directory was not accessed for 5 minutes, delete it.
> Right now i'm thinking a croned script - but it would have to be run 
> quite oftenly - won't that have a bad effect on my system?
> I'd love to hear some ideas.

This is highly dependent on the frequency with which you run the script, the number of 
files, possibly the size of the files, etc. cron itself is (most likely) going to be running 
anyway, and having it fire up a single process to remove some files isn't 
terribly slow until you are talking about large numbers of files or running it very 
often, like every couple of seconds.  So the question becomes how long a file "may" 
stay out there, because that determines how frequently you must run the script; in 
other words, if a file has not been accessed in five minutes but must be deleted 
before it is stale for six minutes, then you have to run the script every 59 seconds, etc.

If you must take this approach and are on a unix system, I would avoid a Perl script 
and install a recent version of 'find' and use 'rm' instead. That should speed up the 
file location and removal without the overhead of the perl interpreter being fired up.

http://danconia.org


-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



RE: automated file removal / cache clearing

2003-08-27 Thread wiggins


On Wed, 27 Aug 2003 10:28:46 -0400, [EMAIL PROTECTED] wrote:

> Correct me if i'm wrong, you say use cron and a shell script (let's say bash) 
> instead of perl? 
> well, if there are any other ways of doing it instead of using cron, id love to head 
> about it.
> Is there a way to know when a user leaves the site? if so can i make it run the 
> clearing script when a user leaves? (i know this isn't close, but IE like the way 
> bash runs ~/.bash_logout when user logs out?)
> 

Sorry I wasn't terribly clear: I wasn't suggesting a bash shell script, more a single 
command.

--Untested--
find /path/to/dir -type f -amin +5 -exec rm {} \;

(-amin +5 matches files last accessed more than five minutes ago, and the trailing \; is 
needed to terminate the -exec.) This works, for instance, under RH 9.0. Some versions of 
'find' may not have the 'amin' switch; Solaris 8's 'find', for instance, does not. In that 
case you could still build a command using some of the other switches, such as 'newer', that 
might work.

As to being able to track when a user has left the site, it isn't really possible 
(HTTP connections are stateless), and since I am somewhat of a purist, just because someone 
hasn't accessed the site for 5 minutes doesn't mean they aren't still there.  So to me 
temporary information should be stored with the user, so their session ends when 
they close the browser (weeks later), etc., and only permanent information is stored on 
the server; then there is no cleanup.

As far as scheduling a task, cron works best, but there are other options, such as 
having a constantly running script that 'sleep's, etc. If you are doing this 
frequently then the time needed to fire up the interpreter does come into play; the 
longer the interval, the less often the compilation/interpretation has to happen and the 
less likely it is to matter.  However, a constantly running script has its problems 
too.
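For completeness, the "constantly running script" variant is really just a loop; a rough, 
untested sketch (the directory, age limit, and interval are made up):

use strict;
use warnings;

my $dir = '/tmp/session-files';    # hypothetical
while (1) {
    opendir my $dh, $dir or die "Can't open $dir: $!";
    for my $file ( grep { -f "$dir/$_" } readdir $dh ) {
        my $age = time - ( stat "$dir/$file" )[8];   # element 8 is atime
        unlink "$dir/$file" if $age > 5 * 60;
    }
    closedir $dh;
    sleep 60;
}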

My questions would be, 
1) do you need to store the state on the server side? 
2) how many files are you talking about? 
3) how long "can" they last for? 
4) how long must they not be stale for?
5) what happens when a file is not cleaned up on time? (aka how important is it that 
they be whacked?)

http://danconia.org

> [EMAIL PROTECTED] wrote:
> 
> >
> >
> >On Wed, 27 Aug 2003 16:18:00 +0200, Shahar Evron <[EMAIL PROTECTED]> wrote:
> >
> >> hi...
> >> I'm working on a CGI program that creates some user-specific file on the 
> >> server when accessed. is there a good way to make sure theese files are 
> >> cleared when they're no longer needed - IE if a file in a specific 
> >> directory was not accessed for 5 minutes, delete it.
> >> Right now i'm thinking a croned script - but it would have to be run 
> >> quite oftenly - won't that have a bad effect on my system?
> >> I'd love to hear some ideas.
> >
> >This is highly dependent on the frequency with which you run the script, the number 
> >of files, possibly the size of the files, etc. cron in itself is going to be 
> >running anyways (most likely) having it fire up a single process to remove some 
> >files isn't terribly slow, until you are talking about large amounts of files or 
> >you run it very often like every couple of seconds.  So the question becomes how 
> >long "may" a file stay out there, because this determines how frequently you must 
> >run the script, in other words, if a file has not been accessed in five minutes but 
> >must be deleted before it is stale for 6 minutes then you have to run the script 
> >every 59 seconds. etc.
> >
> >If you must take this approach and are on a unix system, I would avoid a Perl 
> >script and install a recent version of 'find' and use 'rm' instead. That should 
> >speed up the file location and removal without the overhead of the perl interpreter 
> >being fired up.
> >
> >http://danconia.org
> >
> >http://danconia.org
> >
> >-- 
> >To unsubscribe, e-mail: [EMAIL PROTECTED]
> >For additional commands, e-mail: [EMAIL PROTECTED]
> >
> >
> 
> __
> McAfee VirusScan Online from the Netscape Network.
> Comprehensive protection for your entire computer. Get your free trial today!
> http://channels.netscape.com/ns/computing/mcafee/index.jsp?promo=393397
> 
> Get AOL Instant Messenger 5.1 free of charge.  Download Now!
> http://aim.aol.com/aimnew/Aim/register.adp?promo=380455
> 
> -- 
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: automated file removal / cache clearing

2003-08-28 Thread wiggins


On Thu, 28 Aug 2003 23:19:21 +0300, "Octavian Rasnita" <[EMAIL PROTECTED]> wrote:

> But why forking more processes?
> 

Right, but the same can be asked for the below...

> The cgi program might check which of the files need to be deleted, then
> create a temporary lock file, then it could fork a process that will delete
> those files.
> The next visitor will come and execute the same cgi script, but it will see
> the lock file and it won't delete any file.
> 

Why have the scripts check for a lock file, etc., when their job really isn't 
maintenance of the system? Their job is passing back HTML-like stuff, especially on 
high-traffic sites.

> In fact, if the web site has many visitors, the script could be put on a
> script which is not so often executed by all visitors.
> Or that script could check and start deleting files only after a period of
> time, let's say... 10 minutes, 1 hour... etc.
> 

But then you are back to a "cron-like" system, but it is random because it still 
depends on a user appearing which is unpredictable.

I still hold that a scheduler is the best way to do this type of thing, unless the 
time taken to recompile/reinterpret the script is significant (aka the schedule is so 
frequent that it is cheaper to leave it in memory), but it still gets back to the 
design issue of why you should need to do this anyway (at least for a website)...

http://danconia.org


> - Original Message -
> From: "drieux" <[EMAIL PROTECTED]>
> To: "cgi cgi-list" <[EMAIL PROTECTED]>
> Sent: Thursday, August 28, 2003 6:58 PM
> Subject: Re: automated file removal / cache clearing
> 
> 
> 
> On Wednesday, Aug 27, 2003, at 14:22 US/Pacific, Octavian Rasnita wrote:
> 
> > Or if you don't want to depend on Unix's cron and want your program to
> > do
> > everything, you can set it so each time a new visitor comes to your
> > site,
> > checks which files are not needed, and delete them.
> > You can use fork to avoid putting the visitors to wait until the
> > program is
> > doing its background job.
> [..]
> 
> at first blush that CAN seem to be an interesting
> idea - but in the worst case one can have N connections,
> each of which has generated N forked children to
> walk through M possible files... and one starts
> asking one's self,
> 
> is this an order N square or N factorial solution?
> 
> while in the worst case the cron job based solution
> is merely an order N problem...
> 

I am glad someone put this into easily understood terms, I was thinking the same thing 
and couldn't come up with a compact way of saying it :-)...


-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]



Re: CGI and mySQL book, any recommendation.

2004-05-31 Thread Wiggins d'Anconia
Paul Archer wrote:
Yesterday, Randal L. Schwartz wrote:

By the way... it's consensus amongst experts that MySQL has hit
nearly end-of-life.  If you're starting a new project, use PostgreSQL
instead.  A real Database... not a database wannabe.
The only reason to use MySQL these days is ignorance or legacy.
I've done a quick search on Google ("mysql vs postresql") and all the hits
I've seen, including a very detailed comparison of the two by Sourceforge
principal Tim Perdue give both high marks. I haven't read anything so far
supporting your position (MySQL is a wannabe and nearly end-of-life).
While I am not refuting your claims, I would appreciate a little evidence in
support of these claims so that I can better judge for myself.
Agreed. I have requested this from him before, but didn't get much.
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



Re: CGI Book Advice sought ...

2004-09-15 Thread Wiggins d'Anconia
hcohen2 wrote:
Recently I jumped in to the list and gave a correct answer, but 
unfortunately not really appropriate answer to the question asked.  I 
took the 'Beginners' adjective for this list too seriously.

Hence, to keep myself out of trouble, I think it is time to get 
another book on CGI for perl to study this topic in greater depth.  I 
have been spending time in a book store and researching further using 
Google and I have run across a title that may be appropriate.  The 
title is: "CGI Programming 101: Programming Perl for the WWW" and is a 
2004 publication.  The first 6 chapters are on line, so I can get some 
idea of the book's value by going through those chapters.  However, 
there is a reader's review of the first edition that seems to ring 
true and, moreover, is not complimentary.  That reader maintains the 
first 40 pages are good, but due to the choice of using the CGI 
package only modestly devalues the contents.  The other four reviewers 
were much more taken with this book.

I wonder has anyone here read this title and would they be willing to 
give an evaluation, having  someone in mind that has only had a brief 
introduction to perl's cgi scripting?

TIA
I would echo Chris' remarks about mod_perl, and the book not being worth 
its oats if it shies away from using CGI.pm.  Personally I would skip 
*all* of the CGI books and get a good book on Perl itself. Learn the 
language, not how to use it for one specific task. This will serve you 
much better in everything you do in the long run; once you have learned 
the Perl basics, it should be trivial to read any module's documentation 
to absorb the API and apply it.  CGI.pm has excellent docs, as do some 
of the other CGI helper tools, such as the various template systems and 
modules like CGI::Application.  This path will also serve you well when 
it comes time to learn DBI for your database access, or any other module 
you need for core logic.  There are also excellent free online tutorials 
to teach you basic CGI programming; Ovid's comes highly endorsed 
(google).  CGI just isn't that difficult: print a header, maybe include 
a cookie or two, do a redirect, and that's about it; the rest is just 
time and experience.

The two books I would suggest are Learning Perl and its sequel, 
Learning Perl Objects, References, and Modules.  I give comments about 
them as well as other Perl books on my site.

Good luck,
http://danconia.org
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



Re: moving files

2004-09-25 Thread Wiggins d'Anconia
Jeff Herbeck wrote:
Hello,
I am trying to start a "transloading" and webhosting company.  This
will allow users to login via apache .htaccess type authentication and
then be able to put a url of a file into a form and the script  will
go get that file, download it to the webserver into their html
directory and then be available for world access. I have the security
setup (apache athentication) and I can get their username with
env(REMOTE_USER) but I dont' know what to do from here.  I know I want
to use "wget" to get the file, but i can't seem to get it to work in a
script.  I am a linux admin, but don't know perl.  How can I log in
the http user into linux and go get that file with wget so it will go
in their home directory, which will be their html directory.  Can
someone please get me started?
Thanks in advance
Jeff
I would avoid using 'wget' and instead opt for the LWP suite of modules. 
They should make retrieving a remote file simple (at least over HTTP). 
Take a look at the documentation and examples and give it a shot; when 
you get stuck, ask some more specific questions.

http://search.cpan.org/~gaas/libwww-perl-5.800/lib/LWP.pm
LWP is available from CPAN.
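To give a first taste, a hedged sketch with LWP::Simple (the URL and target path are 
placeholders, and $ENV{REMOTE_USER} is just the value you already get from the Apache auth):

use strict;
use warnings;
use LWP::Simple;
use File::Basename;

my $url  = 'http://www.example.com/somefile.zip';    # from your form
my $home = "/home/$ENV{REMOTE_USER}/public_html";
my $file = basename($url);                            # sanitize this in real code!

my $rc = getstore($url, "$home/$file");
print( ($rc >= 200 && $rc < 300) ? "Fetched $file\n"
                                 : "Download failed with status $rc\n" );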
http://danconia.org
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



Re: How to use sub directories in IIS and Apache HTTPD

2005-02-07 Thread Wiggins d'Anconia
Siegfried Heintze wrote:
When I try to employ subdirectories my perl cgi programs stop working. This
is because the "use" statements cannot find their files.
I could convert the "use evidence_db;" statements to "require
'../evidence_db.pm';" and that works. But this is painful.
Surely there is an easier way. I thought of going into the IIS setup and put
a "-I" switch for the perl statement IIS uses to invoke perl for CGI.
However, that could mess up other applications in other sites if I am not
the only site on the machine.
I don't know where I would change the setting in Apache HTTPD for just the
current site either.
Can someone tell me?
Thanks,
Siegfried

perldoc lib
You can add directories to the @INC array at compile time.  You may also 
be interested in

perldoc FindBin
For instance in the top of my CGI handler I have,
use FindBin;
use lib "$FindBin::Bin/../../lib";
use lib "$FindBin::Bin/../lib";
http://danconia.org
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



Re: Favorite Perl CGI Framework for Web Site Development?

2005-02-11 Thread Wiggins d'Anconia
Chris Devers wrote:
On Fri, 11 Feb 2005, Siegfried Heintze wrote:

I notice there are a lot of frameworks out there for .NET (eg, .NETNUK),
PHP, and Java (eg AppFuse) programmers. These are sets of files that form a
typical starter site (or skeleton) that have the basic common features for a
web site: (1) cookie/password authentication authorization, send email for
forgotten password, (2) file upload, (3) calendar etc...
Are there any such frameworks for perl cgi? I googled for perl cgi framework
but could not find any matches.

Does Bricolage count? Or Slashcode?
I would count them. There is also TypePad/Moveable Type by Six Apart.
Perl mainly offers sets of tools for plugging such things together, 
using components like CGI.pm &/or template libraries (Template Toolkit, 
Mason, HTML::Template, etc) to build sites.

We don't, however, really have any prominent web application frameworks 
to compare with, say, Zope (Python's main offering) or the many suites 
that are now available with PHP.

I'd be delighted to be corrected about this, but it seems like most of 
the people that are working on such frameworks are using other languages 
these days. 


I have one in development and have been intending to release it under a 
GPL-like license but haven't gotten quite that far. Of course it has no 
documentation or testing; why would it need them, there are no bugs ;-).

I agree with the other posters about MayPole, and WebGUI.
I will also throw out InterChange since I just started working for a 
company that uses it, but it appears to have a very steep learning curve 
and probably isn't as elegant as the others. It is really geared towards 
e-commerce.

http://danconia.org
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



sorting and printing text files (was: hello all)

2005-02-17 Thread Wiggins d'Anconia
Don Doucette wrote:
hello everyone.
my name is don doucette and I am 38 years old and have been involved 
with computers since the Timex Sinclair.

Ok. Please use a more informative subject line.
I have recently set up a server and am hosting a web site and forum for 
my community association.

I am running the YaBB forum (http://www.yabbforum.com/) and I would like 
to do the following...

I would like to use perl to look in a directory of numerical named .txt 
files (as in 0123456789.txt), find the file name with the largest number 
(as in 1234567890.txt is greater numerically than 0123456789.txt) then 
open the file and extract data from that file so it can be posted into a 
web page?

For instance...
I have a directory named Messages, in this directory there are the 
following files...

1108577587.txt
1108519222.txt
1108490078.txt
1108489912.txt
Obviously 1108577587.txt is greater numerically than the rest, this also 
happens to signify that this is the newest message. In this file is the 
following information...

Title of Post|Author|[EMAIL PROTECTED]|02/16/05 at 
12:13:07|Group|xx|0|192.168.1.1|Message||

As you can see this file is delimited by | and ends with ||
I would like to parse out the Message field first then the Author field 
and assign their value to a variable then insert the variable into html 
on a page.

Something like...


Untitled Document


Here is the newest post to the forum
$Message 
Posted by $author 


The idea is when the main web page loads it always shows the newest post 
to the forum and who posted it.

My question REALLY is do you think this can be easily done or is this a 
huge programming effort for someone just trying to figure out perl... I 
have been thumbing through my Perl book (The Complete Reference Perl 
Second Edition) but it hasn't really been helpful so far.

Thanks for your advice.

Clearly you have a good spec, which is about the best start. In general 
this is a forum for specific questions about CGI, which sort of fits, 
but you might be better off with beginners@perl.org for non-CGI-related 
questions, as yours are more general. These are also forums for 
learning, rather than for getting free code, so we usually like to see 
what you have tried first. Having said that...

To get to your actual question, what you are talking about doing is 
pretty simple in Perl. I am not familiar with that particular book; if 
you are interested in learning Perl you should check out the Llama from 
O'Reilly, aka Learning Perl. Having read that, you should be able to 
solve the above problem. If you don't want more books and can read tech 
docs, then you should start with:

perldoc perlopentut
perldoc -f opendir
perldoc -f readdir
perldoc -f open
perldoc -f sort
perldoc -f split
perldoc -f print
perldoc CGI
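Putting those pieces together, a rough, untested sketch might look like this (the directory is 
yours to fill in, and the field positions are taken from your sample line, so verify them 
against real data):

#!/usr/bin/perl
use strict;
use warnings;
use CGI;

my $dir = '/path/to/Messages';                   # adjust to your YaBB directory

opendir my $dh, $dir or die "Can't open $dir: $!";
# the names are equal-length epoch timestamps, so a reverse string sort
# puts the numerically largest (newest) first
my ($newest) = sort { $b cmp $a } grep { /^\d+\.txt$/ } readdir $dh;
closedir $dh;

open my $fh, '<', "$dir/$newest" or die "Can't read $newest: $!";
my $line = <$fh>;
close $fh;

# Title|Author|email|date|Group|...|IP|Message||
my @fields  = split /\|/, $line;
my $author  = $fields[1];
my $message = $fields[8];    # position taken from your sample line -- check it

my $q = CGI->new;
print $q->header,
      "<p>Here is the newest post to the forum</p>\n",
      "<p>$message</p>\n<p>Posted by $author</p>\n";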
Very little is a HUGE programming effort in Perl; that is why it is so 
loved by its users...

Good luck,
http://danconia.org
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



Re: cgi scripts as root or similar - best method

2005-02-25 Thread Wiggins d'Anconia
Chris Devers wrote:
On Fri, 25 Feb 2005, Gavin Henry wrote:

[...] the problem is [...] cdrecord needs to be run as root.

I assume cdrecord is being invoked from a system command, right?
Have you considered prefixing that command with `sudo`, and going into 
the sudoers file to allow the www user that privilige?

Of course, it would be a bit more complicated than that, as sudo will 
prompt for a password that you have to pass back to it somehow, but 
after hurdle that I suspect that it should work fine...

[snip]
You can use the 'NOPASSWD' flag in the sudoers file for a particular 
command/alias, etc. so that the user does not have to enter a password.

man sudoers
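Purely as an illustration (the user name, paths, and cdrecord arguments are assumptions; 
adjust for your own system):

# /etc/sudoers -- let the web server user run cdrecord, and only cdrecord,
# without a password:
#   apache  ALL = NOPASSWD: /usr/bin/cdrecord

# then in the CGI:
use strict;
use warnings;

my @cmd = ('sudo', '/usr/bin/cdrecord', '-v', 'dev=0,0,0', '/tmp/image.iso');
system(@cmd) == 0 or die "cdrecord failed: $?";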
http://danconia.org
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



Re: Calling subroutines

2005-03-21 Thread Wiggins d'Anconia

Denzil Kruse wrote:
Hi,
I have a script for a cgi form that covers about 20
pages, and want to name a subroutine to handle each
page like this: page1, page2, page3, etc.
Once the script figures out which page it should go
to, I dont want to have to do this:
if ($page == 1) { &page1() }
if ($page == 2) { &page2() }
if ($page == 3) { &page3() }
.
.
.
I would like to call the subroutine with one
statement, something like this:
$page = $in->param('page');
&page$page()
but the "compiler" doesn't seem to substitute the
variable $page before figuring out the name of the
subroutine and it gives me an error.  I thought about
loading the subroutine referencees into an array, but
run into the same problem.
Is there a way to do this?  Or is there a better way
for the beginning part of the script to play traffic
cop and direct it to the right page?
Have you considered the CGI::Application module? It works essentially as 
you describe but has a good following, is likely better tested, and may 
provide a little more support structure.

http://search.cpan.org/~markstos/CGI-Application-3.31/lib/CGI/Application.pm
In any case the array method you describe should work; can you show us 
the code you have tried?  You may just not be dereferencing your sub 
correctly.  You might also consider a hash and drop the numeric (and 
confusing) names unless there really is an order to the pages.
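A minimal dispatch-table sketch of the hash idea (the sub names are invented; note the lookup 
check so a bogus 'page' parameter can't call arbitrary code):

use strict;
use warnings;
use CGI;

my %dispatch = (
    1 => \&page1,
    2 => \&page2,
    3 => \&page3,
);

my $q    = CGI->new;
my $page = $q->param('page') || 1;

my $handler = $dispatch{$page} or die "Unknown page: $page\n";
$handler->($q);

sub page1 { my $q = shift; print $q->header, "page one\n"   }
sub page2 { my $q = shift; print $q->header, "page two\n"   }
sub page3 { my $q = shift; print $q->header, "page three\n" }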

http://danconia.org
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



Re: Variables in Modules

2005-04-13 Thread Wiggins d'Anconia
Ovid wrote:
--- Sergio Pires de Albuquerque <[EMAIL PROTECTED]> wrote:
I tried with our, importing that var and fully qualified it, but
always 
it get the previous value. My question is: Which method is common to 

share data among apps? What Am I doing wrong? Also, I always use 
"strict" and "warnings".

You're talking about global variables and, in general, using them is a
bad thing as it makes code more difficult to maintain.  I won't go into
that, though, as there's plenty of info elsewhere about that.
In your case, don't give access to the variables.  If you're not going
to go OO, give access to subroutines which can read and alter those
variables.
  package Foo;
  use strict;
  use warnings;
  my $var = 7;
  sub var { $var }
  sub set_var {
my $new_var = shift;
unless (defined $new_var and $new_var =~ /^\d+$/) {
  require Carp;
  Carp::croak("Argument to set_var() can only be digits: ($new_var)");
}
$var = $new_var;
  }
  1;
With techniques like that, you can at least ensure that there is one
canonical place where the data is being fetched and altered.  This
gives you some measure of protection to ensure that naughty code isn't
doing things it should not be doing and, if it ever gets set to a bad
value, you have only one place to put your debugging code.
Cheers,
Ovid
I agree with what Ovid said, but in the case where the data won't need 
or shouldn't be changed, you might consider using constants, which have 
a global nature but are considered less messy than global variables.

perldoc constant
Though I would still drop them into their own module that can be 'use'd. 
This works well for DB DSNs, base URLs, etc.
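A small sketch of that (module name and values are invented):

package MyApp::Config;
use strict;
use warnings;

use constant DB_DSN   => 'dbi:mysql:myapp';
use constant BASE_URL => 'http://www.example.com/app';

1;

# elsewhere:
#   use MyApp::Config;
#   my $dbh = DBI->connect(MyApp::Config::DB_DSN(), $user, $pass);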

http://danconia.org
--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



Re: PATH problem

2005-04-25 Thread Wiggins d'Anconia
TapasranjanMohapatra wrote:
> All,
> My script goes like this...
> --
> #!/usr/bin/perl
> print "Content-type: text/html\n\n";
> $cmd = "cat file_one";
> $content = qx!$cmd!;
> print "$content";
> --

You should not shell out to read a file; Perl is a full programming
language, so use the builtin functions whenever possible, especially
when they are common and simple ones. It is faster, safer, and less
bug/error prone.

#!/usr/bin/perl
use strict; # always
use warnings; # pretty much always

print "Content-type: text/html\n\n";

open my $HANDLE, '<', 'file_one' or die "Can't open file for reading: $!";
while (my $line = <$HANDLE>) {
  print $line;
}
close $HANDLE;

The same thing only faster, much safer, and it will give you diagnostic
output in the error log of the web server to find out why it can't read
a particular file.  See the other poster's comments too.

http://danconia.org

> I have a case where file_one is not in the same directory. So I give the 
> absolute path of file_one
> in place of file_one. 
> When run it using perl on command line I get the contents of file printed 
> correctly.
> 
> But when accessed through browser (cgi-bin), I get nothing printed.
> 
> I thought it might be path problem , but it is not. Because when the file is 
> in same directory,
> it prints the content of the file.
> 
> Can anybody let me know where I am going wrong?
> TIA
> tapas
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: About unoffical HTTP headers

2005-04-26 Thread Wiggins d'Anconia
Shu Cao wrote:
> Hi,
> 
> Sorry, my last email has some errors. The unofficial HTTP header should
> look like "x: y" not "x=y". 
> 
> I am new to Perl CGI programming. And I encounter a difficult problem 
> wish you guys can help me. Thank you! Here is the problem, how can Perl
> CGI program get the unofficial HTTP header value like "x: y". I examine
> the %ENV hash, and found nothing but some standard HTTP headers like
> "Accept", "User-Agent", etc..
> 
> And I check the CGI.pm module too, seems there is no method to get the
> unofficial HTTP headers.
> 

This is going to depend on the web server, as it is the software parsing
the HTTP request, it just passes execution to the CGI and sets up the
environment before hand. So it is up to the web server software to set
in the environment the extra headers, you should check the documentation
for it. It appears that Apache, if you use it, should be passing through
the additional headers with an 'HTTP_' prepended but there is no
guarantee.  Docs:

http://hoohoo.ncsa.uiuc.edu/cgi/env.html

"In addition to these, the header lines received from the client, if
any, are placed into the environment with the prefix HTTP_ followed by
the header name. Any - characters in the header name are changed to _
characters. The server may exclude any headers which it has already
processed, such as Authorization, Content-type, and Content-length. If
necessary, the server may choose to exclude any or all of these headers
if including them would exceed any system environment limits."

Which is linked from:

http://httpd.apache.org/docs/howto/cgi.html#environmentvariables

Which is (obviously) specifically for Apache.
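
For example (untested), if a client sends the custom request header
"X-My-Header: y" to an Apache CGI, it would typically show up as:

my $value = $ENV{'HTTP_X_MY_HEADER'};
print defined $value ? "x: $value\n" : "header not passed through\n";

The header name there is just an example, and as the docs above say,
there is no guarantee every server will pass it along.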

> If you guys know the HOWTO, pls help me. Thank you!
> 
> BTW, English is not my native language, if I have any syntax or grammar
> error, pls forgive me:P
>

Thought it was fine.

> Best Regards,
> Shu Cao
> 
> 

Good luck,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Alternative Modules

2005-04-27 Thread Wiggins d'Anconia
Mike Blezien wrote:
> Hello,
> 
> we are currently using the Mail::Audit module to resend piped incoming
> emails to a particular domain then sends it to various aliases emails
> from a database. I'm trying locate a similar module but doesn't put alot
> of the header garbage into the body of the email. Then Mail::Audit
> doesn't really remove certain headers from the actual body of the email.
> 
> Is there a module that works similar as the Mail::Audit, but extracts
> the actual body content of the email without some of the headers
> included in it??
> 
> TIA

This is a rather confusing description of what you are doing, if you can
 clean it up a bit we might be able to give you a better solution.

Although it has a very steep learning curve, there isn't a lot that the
Mail::Box suite can't do.  If I wanted to do anything remotely complex
with mail (and I have) I would use it hands down.  It has incredible
documentation, though that too has a little bit of a learning curve :-).

http://perl.overmeer.net/mailbox/

There is also a mailing list...

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: CGI form Caching problem

2005-05-03 Thread Wiggins d'Anconia
[EMAIL PROTECTED] wrote:
[snip]

> 
> I have tried sending all sorts of headers to the browser to stop it 
> caching the page but nothing has made a difference. So perhaps the problem 
> is not in the browser, but in apache somewhere??
> 
> anyone ever had this (or a similar) problem!!
> 

Are you behind a proxy?  Is there a web cache on the server, a la Squid
or similar.  Is this mod_perl related?

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Undefined subroutine Error

2005-05-09 Thread Wiggins d'Anconia
Mike Blezien wrote:
> Hello,
> 
> occasionally we get this error, due to a mis coding error or type-O
> error, but was wondering is there away to check, with a perl code, to
> make sure the sub routine exists before displaying this system 500
> internal error message. the error I'm referring too is this:
> 
> Undefined subroutine &main::some_routine_name called at script_name line
> XXX
> 
> TIA

If they are miscodings then not really, mostly because Perl allows you
to choose subroutines/methods at runtime. So the interpreter can't know
a priori what subs need to exist. If you need to check for a sub to see
whether or not you should be calling it, for instance to see if an
object has a particular method you can use the UNIVERSAL::can function,
see perldoc UNIVERSAL for more info.
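
For example (untested; $name and @args are placeholders for whatever you
are dispatching on):

if (my $code = main->can($name)) {
  $code->(@args);
}
else {
  warn "No such subroutine: main::$name\n";
}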

Of course, having said all that, aren't you testing? ;-)

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Multiple Languages

2005-05-13 Thread Wiggins d'Anconia
Mike Blezien wrote:
> Hello,
> 
> are there Perl modules available to handle multiple language input like
> from forms. we have a situtation where we may need to enter various
> languages strings or characters and it's english counter-part version.
> As there are many languages that use unique characters in the language
> that the scripts may not understand or handled incorrectly or cause
> problems with processing the data entered.
> 
> If you've had some experence with working with multi-lingual
> applications, any info or documentation would be appreciated.
> 
> TIA,

You may want to read through:

perldoc utf8
perldoc Encode

And the suggested reading in the "SEE ALSO" section. Perl in general
should handle the input correctly, I am not sure in what context you
want to handle them.
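
For example (untested), if the form data is arriving as UTF-8 bytes you
can turn it into Perl character data with Encode:

use Encode qw(decode);

my $octets = $q->param('name');        # 'name' is just an example field,
my $text   = decode('UTF-8', $octets); # $q an already created CGI object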

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Help wanted

2005-05-31 Thread Wiggins d'Anconia
Yuanxin wrote:
> Hi,
> I had a lot of data, which are all x-y values. I stored these data in MySQL 
> database. Now I wanna use Perl to get these data from database to build a 
> figure(2-dimensional) and people can use internet to see the figure. In 
> addition, when people see the figure via internet, the x-y value of a point 
> need be shown when mouse is dragged on one point of the figure. In my system, 
> I already install Perl, Perl::DBD::Mysql, CGI, Perl:DBI. So I just wanna know 
> any other Perl:Module should be installed for my case. Thank you very much!
> 
> Terence
> 

Doing the mouse over part may prove difficult, it is client side. You
might want to look into the GD::Graph module to help you with generating
the figures.  There are other modules available to do graphing with as well.
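
For example (untested), assuming @x and @y already hold the values you
fetched from MySQL:

use GD::Graph::points;

my $graph = GD::Graph::points->new(400, 300);
$graph->set(x_label => 'x', y_label => 'y') or die $graph->error;
my $image = $graph->plot([ \@x, \@y ])      or die $graph->error;

print "Content-type: image/png\n\n";
binmode STDOUT;
print $image->png;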

HTH,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Perl Newbee

2005-06-03 Thread Wiggins d'Anconia
Yuga Chodagam wrote:
> Hi all,
>   I am newbee to Perl. Could anybody please give me head start
> with Perl programming? I am interested in playing with some CGI stuff
> and want to sese the Perl's data manipulation power.
> 
> 
> Thanks all. I appreciate any help.
> 
> Yuga.
> 

Though he lurks here I doubt he would plug it himself, so I will suggest
that several people have been pleased with Ovid's CGI Course:

http://users.easystreet.com/ovid/cgi_course/

By the time I found it/he wrote it, I was already fairly experienced so
can't say I started with it. Of course when I started I wished it had
already been written.

HTH,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Need a perl module

2005-06-07 Thread Wiggins d'Anconia
Tantalo, Christopher G wrote:
>  I use Mail::Box::Manager 2.00.  Works pretty well for me.
> 
 +1

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: quote problem and mysql

2005-07-15 Thread Wiggins d'Anconia
Bob Showalter wrote:
> Andrew Kennard wrote:
> 

[snip]

> 
> No, you shouldn't have to do that. Your first approach is correct, so we
> need to find out what's going wrong there...
> 

Can you show us GenMainRecData?  Are you sure it isn't the culprit here,
possibly it is already doing data munging that it shouldn't.

As a side note generally you should leave the '&' off the call to the
subroutine, no need for it anymore, unless you know specifically why to
include it.

my @TheRecord = GenMainRecData();

Should do.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: MIME::Lite attachments

2005-07-17 Thread Wiggins d'Anconia
Mike Blezien wrote:
> Hello,
> 
> we're setting up a script to attach mainly PDF files. And was wondering
> when setting up the code to attach the file, what TYPE attribute is used:
> 
> 
> $msg->attach(Type =>'', # WHAT TYPE HERE TO USE ??
>  Path =>'/path/to/somefile.pdf',
>  Filename =>'Document.pdf',
>  Disposition => 'attachment'
>  );
> 
> 
> is the specific what to code MIME Lite to send a TEXT/HTML message with
> a PDF file attached ??
> 
> TIA

I'm assuming you want the MIME type: application/pdf.
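
For example (untested; the addresses are placeholders, the path is the
one from your snippet):

use MIME::Lite;

my $msg = MIME::Lite->new(From    => 'sender@example.com',
                          To      => 'recipient@example.com',
                          Subject => 'Document attached',
                          Type    => 'multipart/mixed',
                         );

$msg->attach(Type => 'text/html',
             Data => '<p>Please see the attached PDF.</p>',
            );

$msg->attach(Type        => 'application/pdf',
             Path        => '/path/to/somefile.pdf',
             Filename    => 'Document.pdf',
             Disposition => 'attachment',
            );

$msg->send;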

Not sure what this has to do with CGI though.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: SetEnv Variable from Apache

2005-07-20 Thread Wiggins d'Anconia
Robert wrote:
> How do I use a variable that is set with 'SetEnv' in the Apache config file?
> 
> Robert
> 
> 
> 

Have you checked the %ENV hash?

print $ENV{'VAR_NAME'};

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Problem with https post using LWP

2005-08-01 Thread Wiggins d'Anconia
Denzil Kruse wrote:
> Hi all,
> 
> I'm trying send an https post:
> 
> my $url = "https://some.secure.server/secure.dll";;
> 
> my $ua = LWP::UserAgent->new;
> 
> # assemble the request
> #
> my $request = HTTP::Request->new(POST => "$url");
> $request->content_type('application/x-www-form-urlencoded');
> $request->content($content);
> 
> # send the request and get the result
> #
> my $result = $ua->request($request);
> 
> print $result->as_string;
> 
> But I'm getting this error:
> 
> 501 (Not Implemented) Protocol scheme 'https' is not
> supported
> 
> Looking on cpan, it looks like you do an https post
> the same way as a http post, but I must be missing
> something.
> 
> Can anyone help?
> 
> Thanks,
> Denzil
> 

Have you read:

http://search.cpan.org/src/GAAS/libwww-perl-5.803/README.SSL

And do you have an appropriate SSL interface installed?
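
A quick (untested) way to check whether your LWP install can handle
https at all:

use LWP::Protocol;

print LWP::Protocol::implementor('https')
    ? "https is supported\n"
    : "https is NOT supported; install an SSL backend such as Crypt::SSLeay\n";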

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Script execution time.

2005-08-03 Thread Wiggins d'Anconia
zentara wrote:
> On Wed, 3 Aug 2005 03:00:01 -0700, [EMAIL PROTECTED] (Sara) wrote:
> 
> 
>>I have to test/optimize a script on a shared server. The script contains a 
>>couple of mySQL queries etc.
>>
>>Is there a way that I can see the query execution time (like in phymyAdmin) 
>>and total script execution time, plus memory & CPU usage? There are a lot of 
>>ways doing it in PHP but nothing in PERL/CGI? 
>>
>>I searched CPAN but failed to find my desired requirements. Are there any 
>>system commands that can be executed within the script to get these values.
>>
>>TIA.
>>
>>Sara.
> 
> 
> This should give you the idea. 
> 
> #!/usr/bin/perl -w
> sleep 5;
> print "This script took ". (time - $^T) .
>   " seconds in Perl $] on $^O\n";
> 
> 
> If you want to test just subs,  get a start time when entering
> the sub, and a finish time just before  the sub returns. Take the
> difference and you should have a pretty good indicator.
> 

Additionally if you are finding the time difference too small you can do
the same but with higher precision using Time::HiRes,

perldoc Time::HiRes
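
For example (untested):

use Time::HiRes qw(gettimeofday tv_interval);

my $start = [gettimeofday];
# ... the query or code being timed ...
printf "Elapsed: %.4f seconds\n", tv_interval($start);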

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: How do I make two different web pages come up from one CGI?

2005-08-04 Thread Wiggins d'Anconia
David Dorward wrote:
> On Wed, Aug 03, 2005 at 10:45:35PM -0700, Luinrandir wrote:
> 
>>I want to create two web pages in two different windows
>>from one CGI.
> 
> 
> Each request gives one file, that's how HTTP works. You will need at
> least two requests, with the script running twice (or two scripts
> running once each).
> 
> You can use JavaScript to spawn a second window, although it might be
> blocked by popup blockers (the specifics of such a solution are rather
> off topic for this list though, so I'll suggest you look elsewhere if
> you want to go down that path).
> 

Just to be thorough, not specifically because I like them, I will
mention frames. Frames are an easy way to give the appearance of two
requests (because there are actually three) without many client side
limitations. Most *graphical* browsers support frames these days.

And though I don't yet have experience with it I suppose you could look
into Ajax.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Insecure setuid?

2005-08-09 Thread Wiggins d'Anconia
Tantalo, Christopher G wrote:
> Can anyone shed some light on what this error means?
>   Insecure $ENV{PATH} while running setuid at
> /var/appl/sls/bin/driver.pl line 1104.
> Line 1104 is
> print `date`;
> 

You shouldn't shell out to date anyways, especially in the above manner
with no error checking, etc. Perl has builtin functions for collecting
date information.
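
For example (untested), this prints much the same thing without ever
leaving Perl:

print scalar localtime, "\n";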

> If I comment this out, then the following error message appears:
>   Insecure dependency in open while running setuid at
> /var/appl/sls/bin/driver.pl line 1249.
> Line 1249 is
> my $err_file = $ENV{"SLS_LOG_PATH"} . "/drivererror" . $rt_id ..
> ".err";
> actually 1249 --->  open(ERR_FILE,">>$err_file") ||die "cannot open
> $err_file for reading:$!";
> 
> Not sure what insecure warnings mean in terms of setuid.  Any answer
> would be much appreciated.
> Thanks
> Chris

Because you are running setuid the taint mechanism is on. See,

perldoc perlsec

For more info. Whenever you have an error/warning you don't understand
that was thrown by Perl you can find more info in:

perldoc perldiag
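
If you really do need to shell out or open files while setuid, the usual
fix from perlsec is to set a known-safe path first (untested):

$ENV{PATH} = '/bin:/usr/bin';
delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};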

HTH,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Clearing cookies

2005-08-16 Thread Wiggins d'Anconia
Denzil Kruse wrote:
> Hi all,
> 
> I'm trying to clear a cookie using CGI::Cookies, and
> can't seem to do it :(
> 
> if ($clear_cookie eq 'yes') {
> 
>  my %cookies = fetch CGI::Cookie;
> 
>  print "getting cookie";
> 
>  if ($cookies{'id'}) {
> 
>   print "clearing cookie";
>   $cookies{'id'}->expires('-1s');
>   print "cleared cookie";
>  }
> }
> 
> I'm getting the "cleard cookie" message, but it is
> still there.
> 
> Denzil
> 

To clear the cookie in the user's client (browser) you have to "set" the
cookie again by printing it in the response headers. You are only
setting the local expiration, to have that maintained across the rest of
the session you have to tell the browser about it, which is done by
passing it back as if you were setting it initially.
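
For example (untested, using the 'id' cookie from your snippet):

use CGI;
use CGI::Cookie;

my $q    = CGI->new;
my $gone = CGI::Cookie->new(-name    => 'id',
                            -value   => '',
                            -expires => '-1d',
                           );

print $q->header(-cookie => $gone);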

HTH,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Regex Problem.

2005-08-18 Thread Wiggins d'Anconia
Sara wrote:
> I am at a loss here to generate REGEX for my problem.
> 
> I have an input query coming to my cgi script, containg a word (with or 
> without spaces e.g. "blood" "Globin Test" etc).
> What I am trying to do is to split this word (maximum of 3 characters) and 
> find the BEST possible matching words within mySQL database. For example if 
> the word is "blood"
> 
> I want to get results using regex: 
> 
> for "blood": &check(blo) then &check(loo)  &check(ood)
> for "Globin Test": &check(Glo) then &check(lob)  &check(obi) &check(bin) 
> &check(Tes) &check(est)
> 
> TIA.
>

Sounds like you need a "split" then a "substr" rather than a regex,
though I suppose it would work if you really wanted one, I wouldn't.

perldoc -f split
perldoc -f substr

It will also be faster to combine everything into one select rather than
for each possible "token", but at the least if you are going to do
multiple selects use 'prepare' with placeholders and only prepare the
query once.

So,

-- UNTESTED --

my @tokens = split ' ', $entry;
my @words;
foreach my $token (@tokens) {
  push @words, substr $token, 0, 3;
  push @words, substr $token, -3, 3;
}

(or you can put the following into the above foreach however you would like)

my $where = '';
my @bind;
foreach my $word (@words) {
  $where .= ' OR ' if $where ne '';
  $where .= "(def LIKE ?)";
  push @bind, "%$word%";
}

my $sth = $dbh->prepare("SELECT * FROM medical WHERE $where");
$sth->execute(@bind);

while (my @row = $sth->fetchrow_array) {
  print join ' ', @row;
  print "\n";
}

This also prevents SQL injection by quoting the query words properly.

> Sara.
>

http://danconia.org

> sub check {
> my $check = $dbh -> prepare("SELECT * FROM medical WHERE def LIKE '%$query%' 
> ");
> $check->execute();
> while (my @row = $check -> fetchrow_array()) {
> print "blah blah blah\n";
> }
> }
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: $ENV{'HTTP_REFERER'}

2005-08-24 Thread Wiggins d'Anconia
Denzil Kruse wrote:
> Hi,
> 
> I want to know the web site that someone came from,
> and so I was planning on reading $ENV{'HTTP_REFERER'}
> to figure it out.  How reliable is that?  Do browsers
> or other situations block it or obfuscate it?  Is
> there another way to do it or any other issues
> involved?  I'm using apache on red hat.
> 
> Thanks,
> Denzil
> 

Depends on your definition of reliable. From experience it would seem
most browsers set it pretty reliably.

Having said that, it is just a value passed as part of the HTTP request
so anyone can spoof it at anytime, so relying on it from a security
stand point, well, isn't secure.

I imagine if you are doing something where someone can benefit from
obfuscating it, they will.  If you want to use it for ease of UI
handling (aka redirects, prepopulating fields, marketing metrics) I
think you are safe.

HTH,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Force a file download for link

2005-08-30 Thread Wiggins d'Anconia
Denzil Kruse wrote:
> 
> --- Bob Showalter <[EMAIL PROTECTED]>
> wrote:
> 
> 
> 
>>   use CGI qw(:standard);
>>
>>   open FILE, ...blah blah...
>>   print header('application/octet-stream');
>>   print while <FILE>;
>>
> 
> 
> Thanks for the help Bob! Is there another way besides
> the content-disposition to specify an attachment or
> filename?  I was trying to find a way to get a file
> download box to come up to ask where to save the file
> on their local computer.  With just the above, it will
> display it in the browser, and my users will just
> go...huh?
> 
> Denzil

The 'header' function can take a key/value list of arguments to include
additional header lines. So you can pass your content-disposition header
as you had it before. The key here is that the content type be set to
'application/octet-stream' and that the disposition header will only
work if the client understands it, but in most cases it is worth a shot.

In one my libraries I use,

my %header_options = ( -Content_Type   => 'application/octet-stream',
                       -Content_Length => $content_length,
                     );

if (defined $self->{'filename'} and $self->{'filename'} ne '') {
    $header_options{-Content_Disposition} =
        "attachment; filename=\"$self->{'filename'}\"";
}

It seems to have worked for me. Obviously you need to replace
$self->{'filename'} with your variable, and preferably set
$content_length with the file size.

HTH,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Exact matching using GREP.

2005-09-08 Thread Wiggins d'Anconia
Please bottom post

Sara wrote:
> No, it's not working, probably you didnt' get my question.

How is it not working now?  What Ovid sent is exactly what I would have
answered so you probably need to provide more information. You mention
man pages and switches to grep, there are two greps here, 1) the command
line program used for searching files which is where your -x, etc. come
in and 2) 'grep' the function which is a Perl built-in. For the docs for
it, you need to check:

perldoc -f grep

They are very different things.

http://danconia.org

> 
> Anyways, thanks for a prompt reply.
> 
> Sara.
> 
> - Original Message - From: "Ovid"
> <[EMAIL PROTECTED]>
> To: 
> Sent: Friday, September 09, 2005 12:01 AM
> Subject: Re: Exact matching using GREP.
> 
> 
>> --- Sara <[EMAIL PROTECTED]> wrote:
>>
>>> while (my $row = $sth->fetchrow_hashref)
>>> {
>>>   if (grep /$row->{CAT_TITLE}/, @present) {
>>>   #matching title with @present elements
>>>   print "$row->{CAT_TITLE}";
>>> }
>>>
>>> Question is how to do EXACT matching using GREP? because the above
>>> code prints every element from array for 'php' if the
>>> $row->{CAT_TITLE} is 'php' it prints php/counters, php/forums and
>>> every element containing php.
>>
>>
>> Assuming I understood your question correctly:
>>
>>  if (grep { $_ eq $row->{CAT_TITLE} } @present) {
>># do something
>>  }
>>
>> Cheers,
>> Ovid

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: File Modification Date

2005-09-14 Thread Wiggins d'Anconia
Vance M. Allen wrote:
> I'm trying to find out how to determine the date and/or time that a file was 
> created in a simple procedure.  I have heard about a few different libraries 
> but the examples I have found haven't been very useful.
> 
> The basic purpose I want to do is a simple footer provided by a package 
> module through CGI to inform users of the latest update to the code based on 
> the URL.  Something simple saying "Version x.xx, Last Modified MM/DD/." 
> which would automatically get the file modified timestamp.
> 
> I'd prefer to have, if possible, a simple scalar variable to store the 
> date...for example:
> 
> $modtime = filemoddate_func(filename.cgi);
> 
> If anyone can help me with the libraries I need to use for this (if any 
> special), and a code snippet if possible, I'd really appreciate it.
> 
> Thanks!
> 
> Vance
> 
> 
> 

Generally this type of information is provided by stat(), see,

perldoc -f stat

For the details.

To get the modification time for example you could use something like,

my $mod_time = (stat 'filename.cgi')[9];

File creation time is rarely if ever available. Obviously you could wrap
the above in a sub, but I suspect there isn't a lot of reason to since
it is so short anyways.
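
If you want to format the mtime as something like MM/DD/YYYY, something
like this (untested, reusing the example file name above) should do:

my ($mday, $mon, $year) = (localtime((stat 'filename.cgi')[9]))[3, 4, 5];
printf "Last Modified %02d/%02d/%04d\n", $mon + 1, $mday, $year + 1900;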

HTH,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: problems with CGI.pm upload feature

2005-09-16 Thread Wiggins d'Anconia
Scott R. Godin wrote:
> script is at http://phpfi.com/78748
> 
> I followed the instructions in CGI.pm as best I could, and from what I
> read the upload() function is supposed to return a filehandle ? (it
> doesn't say whether this is a direct FH to the tempfile or not)
> 
> I had dome some preliminary testing with one-liners and was pretty sure
> this would work, but what I wind up with in the attachment is a file
> containing the name of the file, not its actual contents.
> 
> what did I do wrong? I can't figure it out. :/
> 

Just because it is easy to overlook and fairly common, did you include
the 'enctype' in the form tag?  For instance,

[form enctype="multipart/form-data" action="/cgi-bin/request" method="POST"]

HTH,

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: problems with CGI.pm upload feature

2005-09-17 Thread Wiggins d'Anconia
Bill Stephenson wrote:
> On Sep 16, 2005, at 7:51 PM, Scott R. Godin wrote:
> 
>> Wiggins d'Anconia wrote:
>>
>>> Scott R. Godin wrote:
>>>
>>>> script is at http://phpfi.com/78748
>>>>
>>>> I followed the instructions in CGI.pm as best I could, and from what I
>>>> read the upload() function is supposed to return a filehandle ? (it
>>>> doesn't say whether this is a direct FH to the tempfile or not)
>>>>
>>>> I had done some preliminary testing with one-liners and was pretty sure
>>>> this would work, but what I wind up with in the attachment is a file
>>>> containing the name of the file, not its actual contents.
>>>>
>>>> what did I do wrong? I can't figure it out. :/
> 
> 
> Possibly used the wrong web browser to upload the file. Not all of them
> support this feature. Firefox does not. It will however provide the CGI
> script with the file name.
> 
> Kindest Regards,
> 
> -- 
> Bill Stephenson
> 
> 

Firefox doesn't support file uploads?  I use it all the time to test
scripts that accept uploads.

Huh?

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>




Re: Forcing a "save as' dialogue box to come up on left click

2005-09-18 Thread Wiggins d'Anconia

Tony Frasketi wrote:

Hello Listers
I'm trying to find a way to force a download dialogue box to come up 
when the user clicks on a link on a web page (the link will primarily be 
for htm, .txt files on the server).  Normally when the user left clikcs 
on the link the .htm or .txt file appears in the browser. And also 
normally when the user right clicks on the link, he is given the choice 
to 'Save Link Target as' in order to download the file.


What I'm looking for is to avoid right clicking and choosing to save the 
file  Is there a way to implement left clicking the link and 
automatically bringing up a "Save As" dialogue box?


I've googled for such things as "mime type download save as etc" but 
came up with dead ends


TIA
Tony Frasketi



Most browsers will provide this functionality if the return header is 
"application/octet-stream" rather than "text/html" or the like.  In the 
case of IE you may have to fool the browser into thinking it is getting 
something different than it is based on the URL, because it  likes to 
look there for a file extension to determine the file type too.


HTH,

http://danconia.org

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Forcing a "save as' dialogue box to come up on left click

2005-09-19 Thread Wiggins d'Anconia
Tony Frasketi wrote:
> Wiggins d'Anconia wrote:
> 
>> Most browsers will provide this functionality if the return header is
>> "application/octet-stream" rather than "text/html" or the like.  In
>> the case of IE you may have to fool the browser into thinking it is
>> getting something different than it is based on the URL, because it 
>> likes to look there for a file extension to determine the file type too.
>> HTH,
>> http://danconia.org
>>
> Hi Wiggins
> Thanks... I just tried putting these three lines in a .htaccess file in
> a particular directory...
> 
>   AddType application/octet-stream .cgi
>   AddType application/octet-stream .txt
>   AddType application/octet-stream .htm
> 
> Alternatively, I also triedAddType application/octet-stream .cgi
> .txt .htm
>

Interesting way to do it, not what I intended. Have you 100% confirmed
that the header is being passed back correctly? I am not sure how Apache
(or whatever web server you are using) handles setting the type, it
might be finding it from somewhere else that is overriding it. There is
a module for Firefox/Mozilla called "Live HTTP Headers" that will help
you confirm what is being sent.

What I was intending was to call the cgi script and rather than it
printing the normal text/html header it would print the header directly,
that way you are guaranteed to be operating the way you intended.

> Using the Mozilla 5.0 Suite browser, I get the following results
> 
> And then trried clicked on links to a .cgi file, a .txt file, and a .htm
> file in that directory Only  the .htm file automaticaly brought up a
> 'Save as...' diaglogue box as I left clicked on the link to the .htm file
> 
> when I clicked on the link to the .txt file, the contents of the .txt
> file appeared in a Wordpad window.

This is windows picking up the association based on the extension
(probably), didn't you know it was smarter than you? ;-)

> 
> When I clicked on the link to the .cgi file, the .cgi file was executed
> and the results displayed in the browser window.
> 
> One out of three right!
> 
> 
> Using the Microsoft IE 6.0 browser, I get the following results
> 
> .htm file -> Displays the .htm file in browser window
> .txt file ->  Displays the .txt file in browser window
> .cgi file -> Executes the .cgi script and displays results in browser
> window
> 
> zero out of three right!

Yeh I suspect that is IE picking up on the URL extension.

> 
> 
> Using Firefox 1.0 browser, I get the following results...
> .htm file -> brings up the 'Save as...' dialogue box
> .txt file -> brings up the 'Save as...' dialogue box
> .cgi file -> Executes the .cgi script and displays results in browser
> window
>

The .cgi problem is probably because you have a handler setup for .cgi
files that trumps the AddType call. What is the header printed by the
.cgi file? It should be able to print its own header of
"application/octet-stream" followed by the contents of a "file" and it
should work correctly.

> SO... Filefox wins by getting two out of the three
> right!
> 
> This method doesn't look very hopeful at this point!

I know I have used this method before, and I have done so recently from
within the context of a different web framework (specifically Interchange).

As to your question about "Content-Disposition" it can be used to preset
the filename that the user sees in the "Save as..." dialogue, but is
again not standard, it is just convention and will depend on the browser
supporting it, to my knowledge the major browsers[1] do support it.

> Thanks again
> Tony Frasketi

Keep at it, I suspect you will get it to work...

http://danconia.org

[1] When I say major browsers I mean Mozilla (and variants, Firefox,
etc.), IE (some recent version, probably newer than 4.x), Safari. I know
there are others but I don't have a copy so can't say specifically how
they act, they may work as well.

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>




Re: Forcing a "save as' dialogue box to come up on left click

2005-09-19 Thread Wiggins d'Anconia
Tony Frasketi wrote:
> 
>> What I was intending was to call the cgi script and rather than it
>> printing the normal text/html header it would print the header directly,
>> that way you are guaranteed to be operating the way you intended.
> 
> Hi Wiggins
> Thanks for this suggestion... I've tried the following bit of CGI script
> with three different file types (.txt, .cgi, .dat) and in each case *DO*
> get the 'Save as...' dialogue box to come up and when I select 'Save to
> disk' and click ok, It appears as if a transfer occurs but only a
> zero-length file is saved to my local hard disk directory.  The code is
> as follows...

Ok, so almost there; let's back up a step and understand the whole
thing. The HTTP protocol is very similar to other common net protocols
in that there is a header section and a data (or payload) section. The two
are separated by a blank line. So right now you are providing the header
(apparently correctly). The client reads that information and, based on
what it knows how to handle, it decides on a way to use the payload. In
this case we are giving the browser less than optimal descriptors of the
payload. Basically we are telling it that there is something coming and
that neither I nor you know what it is. So it does the only thing it
can, "Save as...". The only hint you are providing is a suggestion about
what to call that thing. So right now you are providing a header, but
not a payload; let's add it.

> 
> 
> #!/usr/local/bin/perl -w
> 
> # testoctetstream_1.cgi
> 
> # My base directory and base directory URL
> my($basedir) = "";
> my($baseurl) = "";
> 
> # Location of the file to be downloaded
> $fileloc = "$basedir/cgi-bin/test/xxx.txt";
> #$fileloc = "$basedir/cgi-bin/test/testdu1.cgi";
> #$fileloc = "$baseurl/cgi-bin/test/testdu.dat";
> 
> # Name of file to be downloaded
> $filename = "xxx.txt";
> #$filename = "testdu1.cgi";
> #$filename = "testdu.dat";
> 
> # NOTE: Uncomment the desired $filename and $fileloc above
> 
> # Set The headers
> print "Content-Type: application/octet-stream;\n";
> print "Content-Disposition: attachment; filename=\"$filename\"\n";
> 
> # ---
> # Note: Can't figure out what to put in here to actually download
> # the darn file!!!
> # ---
> 

This is the simple part, and probably looks a little like:

open file...
read in file...
print file back to browser...
close file

There are simpler ways to do this, but what I have come to use looks like:

# print the blank line that ends the HTTP headers before the payload starts
print "\n";

my $READHANDLE;
unless (open $READHANDLE, $file) {
    # error handling...
}
binmode $READHANDLE;

$| = 1;   # unbuffer STDOUT so the payload streams out as it is read

while ( my $length = sysread $READHANDLE, my $buffer, $block_size ) {
    next unless defined $length;

    # syswrite may write less than requested, so keep going until the
    # whole buffer has been sent
    my $offset = 0;
    while ( $length ) {
        my $written = syswrite STDOUT, $buffer, $length, $offset;
        $length -= $written;
        $offset += $written;
    }
}

close $READHANDLE;

> print "\n";

You will want to remove this as it will be an addition to the actual
data which can break some formats.

> exit;
> 
> 
> The above cod was tested in both Mozilla and IE browers with the same
> results.
> 
> 
> It appears I'm missing some statement that should follow the
> Content-Disposition
> statement.
> 

Does this hit the mark?

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>




Re: html file with form and onSubmit="return check_form(this)"

2005-09-23 Thread Wiggins d'Anconia
Edgardo Lust wrote:
> Hi.
> 
> I have a html file (created with Dreamweaver) with one form and submit
> button with 
> 
> 
>value="/contacto/message.htm">
> value="[EMAIL PROTECTED]">
> 
> 
> I need my perl script to return a valid value then the user can see
> message.htm page
> 
> 
> How can I do?
> 
> Thanks
> 
> Edgardo
> 
> 

What do you mean by return a valid value? You can either redirect to
message.htm or read it in and return the contents.

As a side note the above is basically an open relay, depending on your
other form fields a specially crafted message can probably be used to
send an e-mail to anyone.

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: Query on sendmail problem

2005-10-05 Thread Wiggins d'Anconia
Dale wrote:
> Hi,
> 
> I'm hoping someone can help me with an issue I've got with, I assume,
> sendmail.
> 
> I've copied part of a script below.  If I use the first To: line (which
> takes the e-mail address from a file - and this works) then a mail
> doesn't arrive.  If, however, I used the second To: line (which is
> currently commented out - the xx is to stop spammers picking the
> address up from this mail) then a mail is received.
> 
> I added a page after the sendmail just to make sure it was reading the
> e-mail address from the file (which it was) but it still doesn't seem to
> send. $manageremail is the correct variable name.
> 
> In the datafile, I've had the e-mail addresses formatted properly (e.g.
> [EMAIL PROTECTED]) but also tried with \'s at appropriate places (e.g.
> [EMAIL PROTECTED]).  I just can't get anything to work.
> 
> Anyone have any ideas what I could be doing wrong.
>

You're not using a module... Don't think that is what you meant though.

> Thanks in advance!
> 
> -
> -
>   open(MAIL, "|/usr/local/bin/sendmail -t");
> 
> print MAIL "To: $manageremail\n";

Are you chomp'ing $manageremail?  If it has a newline on it from the file
and you have added an additional newline above, then you have ended the
headers early; surprisingly it should still send, but everything below it
would end up in the body. You should probably have a look at the mail logs
generated by sendmail to see if it is complaining, and more specifically
about what.
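
For example (untested):

chomp $manageremail;   # strip any trailing newline read from the data file
print MAIL "To: $manageremail\n";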

http://danconia.org

> #   print MAIL "To: [EMAIL PROTECTED]";
> print MAIL "From: [EMAIL PROTECTED]";
> print MAIL "Subject: Escalation logged\n";
> 
> print MAIL "Hi $manager\.\n\n";
> print MAIL "The following escalation has been logged from your
> team.\n\n";
> print MAIL "Agent's Manager   : $manager\n";
> print MAIL "Agent's Name  : $agentname\n";
> print MAIL "Date Logged   : $day\-$month\-$year\n";
> print MAIL "Escalation Reason : $reason\n";
> print MAIL "Short Description : $short\n";
> print MAIL "Long  Description : $long\n";
> print MAIL "Justified?: $justified\n";
> print MAIL "Name of Logger: $logger\n";
> 
>   close (MAIL);
> -
> -
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: XML [AntiVir checked]

2005-10-11 Thread Wiggins d'Anconia
Naji, Khalid wrote:
> Hi,
> 
> Which Module  could you recommend for the use of the XML (XML::Simple,
> XML::Parser and XML::Writer, XML::DOM, XML::PATH...) ?
> 
> Thanks in advance!
> 
> KN
> 
> 

Yes.

Which one is most appropriate depends on what you need to do. I would
generally suggest starting with XML::Simple until you determine it
doesn't fit the profile of what is needed.
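
For example (untested; the file and element names are placeholders):

use XML::Simple;

my $data = XMLin('settings.xml');  # returns a nested hash/array structure
print $data->{server}{host}, "\n";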

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: $CGI::DISABLE_UPLOADS

2005-10-18 Thread Wiggins d'Anconia
Bill Stephenson wrote:
> I've been testing the "$CGI::DISABLE_UPLOADS" and "$CGI::POST_MAX"
> variables and I don't think I've got it feature working as it should.
> The docs say this:
> 
> "CGI.pm also has some simple built-in protections against denial of
> service attacks, but you must activate them before you can use them.
> 
> 
> 
> $CGI::POST_MAX
> If set to a non-negative integer, this variable puts a ceiling on
> the size of POSTings, in bytes. If CGI.pm detects a POST that is
> greater than the ceiling, it will immediately exit with an error
> message."
> 
> It seems to me that my script will not exit until uploading the entire
> POST has been completed. So, here are my questions about this:
>

Right, but the script exits immediately. I *suspect* the complete
request must be sent to the web server regardless of whether the script
is going to fail. Exiting immediately just means that CGI will not allow
execution of anything beyond its initial preparations, rather than
meaning it will truncate the request.

At least that would be my interpretation... But I didn't have a look at
the modules source, you might want to check there for confirmation.

http://danconia.org


> Do I misunderstand the above? (ie. the script should upload the entire
> POST before exiting with an error)
> 
> Is there something wrong with my test script (I suspect this must be the
> case, please see it below)
> 
> Or... is there something wrong with CGI.pm? (this seems to be a longshot)
> 
> I'd really appreciate any help you all can give me with this.
> 
> Kindest Regards,
> 
> -- 
> Bill Stephenson
> 
> 
> 
> 
> #!/usr/bin/perl
> 
> # deny_upload.cgi
> 
> use CGI;
> use File::Basename;
> use strict;
> 
> $CGI::POST_MAX=1024 * 5;  # max 100K posts
> $CGI::DISABLE_UPLOADS = 1;  # no uploads
> 
> my $Q = new CGI;
> my $message;
> 
> # trap error with this...
> if (!$Q->param('file') && $Q->cgi_error()) {
> $message = $Q->cgi_error();
> &error_trap( "$message");
> }
> 
> # or this...
> # if ($Q->cgi_error()) {
> # $message = $Q->cgi_error();
> # &error_trap( "$message");
> # }
> 
> if (!$Q->param) {
> print $Q->header;
> print qq ~ Transitional//EN"
> "http://www.w3.org/TR/html4/loose.dtd";>
> 
> 
> Upload Test
> 
> 
>  action="/cgi-bin/test/deny_upload.cgi" enctype="multipart/form-data">
> 
> put too much text in
> here
> 
> 
> 
> ~;
> exit 0;
> }
> 
> # get on to uploading the file...
> 
> if ($Q->param('file')) {
> my $data;
> my $filePath;
> my $file = $Q->param('file');
> 
> fileparse_set_fstype("MSDOS");
> $file = basename($file);
> $filePath = "/test/$file";
> 
> open (SAVE,">$filePath") or &error_trap($message= " Error:: $! ::
> Can Not Upload $file: \n");
> while (read($Q->param('file'),$data,1024)) {print SAVE $data;}
> close SAVE;
> 
> print $Q->header;
> print $Q->start_html(-title => "Uploaded it anyway");
> print "Uploaded it anyway";
> print $Q->end_html;
> exit 0;
> }
> 
> if ($Q->param('test')) {
> print $Q->header;
> print $Q->start_html(-title => "Lotsa Text");
> print $Q->param('test');
> print $Q->end_html;
> exit 0;
> }
> 
> sub error_trap  {
> print $Q->header;
> print  $Q->start_html(-title => "MyApp Error Screen");
> print "$message";
> print  $Q->end_html;
> exit 0;
> }
> 
> 
> 
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: hardcoded paths

2005-10-28 Thread Wiggins d'Anconia
Dermot Paikkos wrote:
> Hi,
> 
> I am moving a site from once host to another. There are lots of 
> hardcoded fully qualified paths to the localhost (EG 
> http://myserver/cgi-bin/someprog.pl?name=val & 
> http://myserver/css/mystyle.css).
> 
> I am pretty sure this isn't good practise but I am not a bit lost as 
> to what are good alternatives. I don't want to make the same mistake 
> and have to do all this editing again next time the hardware changes.
> 
> Should I be trying to create a module with these paths in? Or is 
> there some other way?
> Thanx.
> Dp.
> 
> 

Relative URLs, as David mentioned, would be good wherever they can be
used. The other option I use is to create a Setup.pm module that
contains constants for these things.

use constant CONFIG_URL_BASE => 'http://yoursite.com';
use constant CONFIG_URL_CGI_BASE => 'http://yoursite.com/cgi-bin';

Then you need to export those constants into the calling code. The only
bummer about using constants is that they don't interpolate inside
strings, but I have settled on using concatenation and prefer to have
the extra overhead to have the ease of changing all URLs on a site at once.

This also makes module code that generates links reusable across sites.
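
An untested sketch of such a Setup.pm (the names and URLs are only
examples):

package Setup;

use strict;
use warnings;

use Exporter;
our @ISA       = qw(Exporter);
our @EXPORT_OK = qw(CONFIG_URL_BASE CONFIG_URL_CGI_BASE);

use constant CONFIG_URL_BASE     => 'http://yoursite.com';
use constant CONFIG_URL_CGI_BASE => 'http://yoursite.com/cgi-bin';

1;

Then in a CGI script:

use Setup qw(CONFIG_URL_CGI_BASE);
print 'The form posts to ' . CONFIG_URL_CGI_BASE . '/someprog.pl';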

http://danconia.org

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 




Re: CGI Upload() for nonexistent files?

2005-10-28 Thread Wiggins d'Anconia
Joby Jones wrote:
> Hello all,
>   I have a question about the CGI upload()
> function.
> 
>   Why does it return a valid file handle to a file
> that does not exist on the client (web browser)
> machine, and what's the best way to handle this?
> 

Presumably because this is really a client side error. It is not an
error to upload a zero length file, a file could need to be created but
empty, and you could want to have a program do that. So it is not
unreasonable to think that a file upload could be empty, which means CGI
has to handle the case where a file is uploaded but empty.  Chances are
good the browser is creating the proper HTTP request despite the "local"
file not existing, but to me that is a browser fault.  CGI just sees the
header that says there is a file coming, then no data so it creates the
file, puts nothing in it, and happily crunches along.

> 
> Details:
> 
> 
> 1. A user enters a nonexistent file name in an upload
> field in a form handled by my cgi script. (E.g. on
> windows c:\uplaodthis.txt <-- note typo). 
>
> 2. My cgi script does something like this:
> 
> my $q = new CGI;
> if(defined($q->param('Upload'))){
> my $upload_file_handle = 
>$q->upload('upload_file');
> if(defined($upload_file_handle)){
>print "Valid file handle to empty file is:\n" .
> 
>   Dumper($upload_file);
> }
> }
> 
> 
> 3. It outputs: 
> "Valid file handle ... 
>  $VAR1 = bless( \*{FH::uplaodthis.txt ...}, 'Fh' );"
> 
> 
> 
> What to do?
> ---
> 
> Currently, I write out all valid file handles (checked
> for basic security problems as described in perlsec). 
> If the file is zero length, I delete it and report an
> error.  Which just isn't very satisfying.
> 

Why is it not satisfying? It is a requirement of yours that the file not
be empty NOT the CGI world at large, so this is an error you should be
handling. Sounds like you are doing a fine job.

> What am I missing?
> Is there a better way?

Doubtful. You could write a web server module to handle the case at the
front end of the request but the only thing that really saves is a
little processing time, you still have a server side error to throw.

> 
> 
> 
> All advice and documentation pointers (beyond 'CGI'
> :-) appreciated.
>

I say move on and work on bigger problems. Faulty user input is, well,
the fault of the user. Let them deal with the consequences of having to
resubmit the form, assuming your error message is clear.

http://danconia.org


> Thanks,
> joby
> 

-- 
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
 



