Re: sending mailing list with smtplib

2006-08-15 Thread Filip Salomonsson
On 15 Aug 2006 13:41:53 -0700, 3KWA <[EMAIL PROTECTED]> wrote:
> What would be the best way to go about it then? Instantiate a new msg
> in the loop?
>
> I guess I must read the doc more carefully, thanks for your time (if
> you can spare some more I would be grateful).

You can reuse your message object, but you need to delete the old
header before setting a new one:

<http://docs.python.org/lib/module-email.Message.html#l2h-3843>
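
For example (a quick, untested sketch; the addresses are made up):

    from email.MIMEText import MIMEText

    msg = MIMEText("Hello, list!")
    msg['From'] = "me@example.com"
    msg['Subject'] = "Newsletter"

    for recipient in ["a@example.com", "b@example.com"]:
        # __setitem__ appends a header rather than replacing it,
        # so drop the old To: first (no error if it isn't set yet).
        del msg['To']
        msg['To'] = recipient
        # server.sendmail("me@example.com", [recipient], msg.as_string())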
-- 
filip salomonsson


Re: Visibility against an unknown background

2006-10-24 Thread Filip Salomonsson
On 10/24/06, Sergei Organov <[EMAIL PROTECTED]> wrote:
> I'd be very upset to see, say, 5-6 highly intersecting
> scientific plots on the same picture drawn using the
> "marching ants" approach.

I'd be a bit upset to see scientific plots *on a picture* at all,
regardless of approach.
-- 
filip salomonsson


Re: A critique of cgi.escape

2006-09-25 Thread Filip Salomonsson
On 25 Sep 2006 15:13:30 GMT, Jon Ribbens <[EMAIL PROTECTED]> wrote:
>
> Here's a point for you - the documentation for cgi.escape says that
> the characters "&", "<" and ">" are converted, but not what they are
> converted to.

If the documentation isn't clear enough, that means the documentation
should be fixed.

It does _not_ mean "you are free to introduce new behavior because
nobody should trust what this function does anyway".
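
For what it's worth, here's what the current implementation actually
produces (behavior as observed, not a documented contract):

    >>> import cgi
    >>> cgi.escape('x < y & y > z')
    'x &lt; y &amp; y &gt; z'
    >>> cgi.escape('"quoted"', quote=True)
    '&quot;quoted&quot;'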
-- 
filip salomonsson


Re: Why doesn't Python's "robotparser" like Wikipedia's "robots.txt" file?

2007-10-02 Thread Filip Salomonsson
On 02/10/2007, John Nagle <[EMAIL PROTECTED]> wrote:
>
> But there's something in there now that robotparser doesn't like.
> Any ideas?

Wikipedia denies _all_ access to the standard urllib user agent, and
when robotparser gets a 401 or 403 response while trying to fetch
robots.txt, it treats the entire site as off limits (the same as a
blanket "User-agent: *" / "Disallow: /").

http://infix.se/2006/05/17/robotparser
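
If you have a legitimate reason to crawl, one workaround is to fetch
robots.txt yourself with a more descriptive User-Agent and feed the
lines to robotparser by hand. A rough sketch (the agent string and
contact address are made up):

    import urllib2
    import robotparser

    req = urllib2.Request("http://en.wikipedia.org/robots.txt",
                          headers={"User-Agent": "MyBot/0.1 (me@example.com)"})
    lines = urllib2.urlopen(req).read().splitlines()

    rp = robotparser.RobotFileParser()
    rp.parse(lines)
    # The rules themselves still apply; this only avoids the 403
    # on fetching robots.txt in the first place.
    print rp.can_fetch("MyBot", "http://en.wikipedia.org/wiki/Python")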

Also worth mentioning: if you're planning to crawl a lot of Wikipedia
pages, you may be better off downloading the whole thing instead:
<http://download.wikimedia.org/>
(perhaps adding <http://code.google.com/p/wikimarkup/> to convert the
wiki markup to HTML).
-- 
filip salomonsson