On Mon, 04 Feb 2008 21:58:46, Steven D'Aprano wrote:
> On Mon, 04 Feb 2008 17:08:02, Marc 'BlackJack' Rintsch wrote:
>
>>> Surprisingly, Method 2 is a smidgen faster, by about half a second over
>>> 500,000 open-write-close cycles. It's not much faster, but it's
>>> consistent, over many tests, changing many of the parameters (e.g. the
>>> number of f…
On Mon, 04 Feb 2008 17:08:02, Marc 'BlackJack' Rintsch wrote:
>> Surprisingly, Method 2 is a smidgen faster, by about half a second over
>> 500,000 open-write-close cycles. It's not much faster, but it's
>> consistent, over many tests, changing many of the parameters (e.g. the
>> number of f…
On Mon, 04 Feb 2008 10:48:32 -0800, rdahlstrom wrote:
> It doesn't matter how many doors opening and closing there are, it
> matters the order in which the opening, walking through, and closing
> are done. That's my point. In the second example, all of the disk
> operations are done at the same time. …
It doesn't matter how many doors opening and closing there are, it
matters the order in which the opening, walking through, and closing
are done. That's my point. In the second example, all of the disk
operations are done at the same time. That's what I meant by people
going through the doors.
On Mon, 04 Feb 2008 10:18:39 -0800, rdahlstrom wrote:
> On Feb 4, 1:12 pm, Carl Banks <[EMAIL PROTECTED]> wrote:
>> On Feb 4, 12:53 pm, rdahlstrom <[EMAIL PROTECTED]> wrote:
>> > You have 500,000 people to fit through a door. Here are your options:
>>
>> > 1. For each person, open the door, walk …
On Feb 4, 1:12 pm, Carl Banks <[EMAIL PROTECTED]> wrote:
> On Feb 4, 12:53 pm, rdahlstrom <[EMAIL PROTECTED]> wrote:
> > On Feb 4, 10:17 am, Steven D'Aprano <[EMAIL PROTECTED]
> > cybersource.com.au> wrote:
> > > After reading an earlier thread about opening and closing lots of files, …
On Feb 4, 12:53 pm, rdahlstrom <[EMAIL PROTECTED]> wrote:
> On Feb 4, 10:17 am, Steven D'Aprano <[EMAIL PROTECTED]
> cybersource.com.au> wrote:
> > After reading an earlier thread about opening and closing lots of files,
> > I thought I'd do a little experiment.
>
> > Suppose you have a whole lot of files, and you need to open each one,
> > append a string, then close them. …
On Mon, 04 Feb 2008 15:53:11 -0200, rdahlstrom <[EMAIL PROTECTED]>
wrote:
> On Feb 4, 10:17 am, Steven D'Aprano <[EMAIL PROTECTED]
> cybersource.com.au> wrote:
>>
>> Suppose you have a whole lot of files, and you need to open each one,
>> append a string, then close them. There's two obvious ways to do it: …
On Feb 4, 10:17 am, Steven D'Aprano <[EMAIL PROTECTED]
cybersource.com.au> wrote:
> After reading an earlier thread about opening and closing lots of files,
> I thought I'd do a little experiment.
>
> Suppose you have a whole lot of files, and you need to open each one,
> append a string, then close them. …
On Mon, 04 Feb 2008 15:17:18, Steven D'Aprano wrote:
> # Method one: grouped by file.
> for each file:
>     open the file, append the string, then close it
>
> # Method two: grouped by procedure.
> for each file:
>     open the file
> for each open file:
>     append the string
> for each open file:
>     close the file
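Rendered as runnable Python, the two methods in the pseudocode might look like this (a sketch, not the thread's actual benchmark code; the scratch directory, file count, and appended strings are made up for illustration):

```python
import os
import tempfile

# Create a handful of empty files in a scratch directory.
workdir = tempfile.mkdtemp()
filenames = [os.path.join(workdir, "file%d.txt" % i) for i in range(10)]
for name in filenames:
    open(name, "w").close()

def method_one(names, text):
    # Grouped by file: open, append, close each file in turn.
    for name in names:
        f = open(name, "a")
        f.write(text)
        f.close()

def method_two(names, text):
    # Grouped by procedure: open all files, append to all, close all.
    files = [open(name, "a") for name in names]
    for f in files:
        f.write(text)
    for f in files:
        f.close()

method_one(filenames, "spam\n")
method_two(filenames, "eggs\n")
```

Note that method two holds every file open at once, so with very many files it can run into the per-process file descriptor limit.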
Steven D'Aprano wrote:
> So, what's going on? Can anyone explain why the code which does more work
> takes less time?
Short answer: CPU and RAM are much faster than hard disks.
The three loops and the creation of a list cost only a few CPU cycles
compared to flushing the new data to disk.
Chri…
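That point can be made concrete from Python: most of the cost of a small append is in getting bytes onto the disk, not in the loops around it. A rough sketch (the file name, iteration count, and the use of os.fsync to force a physical write are my own choices, not from the thread):

```python
import os
import tempfile
import timeit

# Hypothetical scratch file; the name is arbitrary.
path = os.path.join(tempfile.mkdtemp(), "scratch.txt")

def buffered_write():
    # The data usually just lands in OS buffers; the disk is rarely touched.
    f = open(path, "a")
    f.write("x")
    f.close()

def synced_write():
    # Force the data all the way out to the device before returning.
    f = open(path, "a")
    f.write("x")
    f.flush()
    os.fsync(f.fileno())
    f.close()

n = 200
t_buffered = timeit.timeit(buffered_write, number=n)
t_synced = timeit.timeit(synced_write, number=n)
print("buffered: %.4f s for %d appends" % (t_buffered, n))
print("fsync'd:  %.4f s for %d appends" % (t_synced, n))
```

On a spinning disk the fsync'd version is typically slower by orders of magnitude, which is why reordering a few CPU-side loops barely registers.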
After reading an earlier thread about opening and closing lots of files,
I thought I'd do a little experiment.
Suppose you have a whole lot of files, and you need to open each one,
append a string, then close them. There's two obvious ways to do it:
group your code by file, or group your code by procedure. …
On Fri, 24 Feb 2006 10:51:09, Roel Schroeven wrote:
[snip]
> I think it looks like the problem is in the system call.
Thank you Roel for going the extra mile. I appreciate the unexpected bonus.
--
Steven.
--
http://mail.python.org/mailman/listinfo/python-list
Steven D'Aprano wrote:
> On Fri, 24 Feb 2006 10:11:18 +0100, Magnus Lycka wrote:
>
>> Steven D'Aprano wrote:
>>> It looks like the time function under Linux at least is very slow.
>> Perhaps you should try doing the same thing in C.
>> Then you can see whether the problem is in the
>> wrapper or in the system call. …
On Fri, 24 Feb 2006 10:11:18 +0100, Magnus Lycka wrote:
> Steven D'Aprano wrote:
>> It looks like the time function under Linux at least is very slow.
>
> Perhaps you should try doing the same thing in C.
> Then you can see whether the problem is in the
> wrapper or in the system call.
Good idea…
Steven D'Aprano wrote:
> It looks like the time function under Linux at least is very slow.
Perhaps you should try doing the same thing in C.
Then you can see whether the problem is in the
wrapper or in the system call.
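Short of rewriting the test in C, one can at least measure what a time.time() call costs from the Python side with timeit (a quick sketch; the call count is arbitrary):

```python
import timeit

n = 1_000_000
# Cost of a million time.time() calls versus a million empty iterations.
t_time = timeit.timeit("time.time()", setup="import time", number=n)
t_pass = timeit.timeit("pass", number=n)
print("time.time():     %.0f ns per call" % (t_time / n * 1e9))
print("empty loop body: %.0f ns per iteration" % (t_pass / n * 1e9))
```

If the per-call figure is large compared with an empty iteration, the overhead is real; a C comparison against gettimeofday() would then show whether it lives in the Python wrapper or in the system call itself.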
On Thu, 23 Feb 2006 10:07:14 -0600, Larry Bates wrote:
>> Of course I expect timer2 should take longer to execute in total,
>> because it is doing a lot more work. But it seems to me that all that
>> extra work should not affect the time measured, which (I imagine) should
>> be about the same as t…
Steven D'Aprano wrote:
> I have two code snippets to time a function object being executed. I
> expected that they should give roughly the same result, but one is more
> than an order of magnitude slower than the other.
>
> Here are the snippets:
>
> def timer1():
>     timer = time.time
>     func = lambda : None
>     …
I have two code snippets to time a function object being executed. I
expected that they should give roughly the same result, but one is more
than an order of magnitude slower than the other.
Here are the snippets:
def timer1():
    timer = time.time
    func = lambda : None
    itr = [None] * 100…
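The snippets are cut off in the preview above. A reconstruction of their general shape (the iteration count and the body of timer2 are guesses at what the thread describes, not the original code):

```python
import time

def timer1():
    # One pair of time.time() calls around the whole loop.
    timer = time.time
    func = lambda: None
    itr = [None] * 100000
    t0 = timer()
    for _ in itr:
        func()
    t1 = timer()
    return t1 - t0

def timer2():
    # A pair of time.time() calls around every single invocation;
    # the accumulated timer overhead can dwarf the work being timed,
    # which would explain the order-of-magnitude difference reported.
    timer = time.time
    func = lambda: None
    itr = [None] * 100000
    total = 0.0
    for _ in itr:
        t0 = timer()
        func()
        t1 = timer()
        total += t1 - t0
    return total

print("timer1: %.4f s" % timer1())
print("timer2: %.4f s" % timer2())
```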