Hi Guys,
I guess I'm the slow one on the list.
Is there more to the patch than commenting out
# $command =~ s/%s/'$url'/g;
and replacing it with
$command =~ s/&/\\&/g;
Because either way, extract_url.pl isn't working for me.
I can see the list of URLs, but if I click on one I still get a page-not-found
error. Another odd thing: when I press 'c' for context, I sometimes get the
full URL displayed in a little box, as it should, but often the box instead
shows the text near the link in the email.
I've tried changing the view, but that doesn't seem to have much effect.
I've fallen back to using urlscan, which seems to work, albeit not very elegantly.
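For anyone following along, here is a minimal standalone sketch (not the actual extract_url.pl code; the URL and the "firefox %s" command template are made up) of why a bare ampersand breaks the viewer command, together with the two approaches mentioned in this thread: backslash-escaping the ampersands, and single-quoting the URL when substituting it into the %s template.

```perl
#!/usr/bin/env perl
# Sketch of the ampersand problem discussed in this thread.
use strict;
use warnings;

my $url = 'http://example.com/page?a=1&b=2';   # hypothetical URL with a '&'

# Unsafe: passed to the shell unquoted, '&' backgrounds the command
# at the ampersand and the rest is run as a separate command:
#   system("firefox $url");

# Fix 1 (the substitution from this thread): backslash-escape ampersands.
(my $escaped = $url) =~ s/&/\\&/g;
#   system("firefox $escaped");

# Fix 2 (what the original %s line intends): single-quote the whole URL
# when substituting it into the command template, so the shell treats
# it as one literal argument.
my $command = "firefox %s";
$command =~ s/%s/'$url'/g;
print "$command\n";   # firefox 'http://example.com/page?a=1&b=2'
```

Fix 2 is generally the more robust of the two, since single quotes also protect '?', ';', and other shell metacharacters, not just '&' (though a URL containing a single quote would still need extra care).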
On Tue, Apr 02, 2013 at 01:51:22AM +1300, Chris Bannister wrote:
On Tue, Apr 02, 2013 at 01:06:05AM +1300, Chris Bannister wrote:
On Sun, Mar 31, 2013 at 10:00:58PM -0600, s. keeling wrote:
> Incoming from Luis Mochan:
> > I found a mistake in the extract_url.pl program: it doesn't escape
> > ampersands when present in the url, so when the command to actually
> > view the url is invoked, the shell gets confused. I made a quick fix
> > by substituting $command=~s/&/\\&/g before running command.
>
> Line 633? 634? So:
>
> # $command =~ s/%s/'$url'/g;
> $command=~s/&/\\&/g;
>
> I'm no perl guy, so that's non-trivial here. Thx. :-)
Are you sure that will work? You've just commented out a line of code.
(Just wondering what your patch would look like.)
Ahh, see it's included in a message by Luis Mochan in this thread.
--
"If you're not careful, the newspapers will have you hating the people
who are being oppressed, and loving the people who are doing the
oppressing." --- Malcolm X