On Mon, 04 Feb 2008 21:58:46, Steven D'Aprano wrote:
> On Mon, 04 Feb 2008 17:08:02, Marc 'BlackJack' Rintsch wrote:
>
>>> Surprisingly, Method 2 is a smidgen faster, by about half a second over
>>> 500,000 open-write-close cycles. It's not much faster, but it's
>>> consistent, over many tests, changing many of the parameters (e.g. the
>>> number of f
On Mon, 04 Feb 2008 10:48:32 -0800, rdahlstrom wrote:
> It doesn't matter how many doors opening and closing there are, it
> matters the order in which the opening, walking through, and closing
> are done. That's my point. In the second example, all of the disk
> operations are done at the same time. That's what I meant by people
> going through the doors.
On Mon, 04 Feb 2008 10:18:39 -0800, rdahlstrom wrote:
> On Feb 4, 1:12 pm, Carl Banks <[EMAIL PROTECTED]> wrote:
>> On Feb 4, 12:53 pm, rdahlstrom <[EMAIL PROTECTED]> wrote:
>>> You have 500,000 people to fit through a door. Here are your options:
>>>
>>> 1. For each person, open the door, walk
On Feb 4, 1:12 pm, Carl Banks <[EMAIL PROTECTED]> wrote:
> On Feb 4, 12:53 pm, rdahlstrom <[EMAIL PROTECTED]> wrote:
>> On Feb 4, 10:17 am, Steven D'Aprano <[EMAIL PROTECTED] cybersource.com.au> wrote:
>>> After reading an earlier thread about opening and closing lots of files,
On Feb 4, 12:53 pm, rdahlstrom <[EMAIL PROTECTED]> wrote:
> On Feb 4, 10:17 am, Steven D'Aprano <[EMAIL PROTECTED] cybersource.com.au> wrote:
>> After reading an earlier thread about opening and closing lots of files,
>> I thought I'd do a little experiment.
>>
>> Suppose you have a whole lot of files, and you need to open each one,
>> append a string, then close them. There's two obvious
On Mon, 04 Feb 2008 15:53:11 -0200, rdahlstrom <[EMAIL PROTECTED]>
wrote:
> On Feb 4, 10:17 am, Steven D'Aprano <[EMAIL PROTECTED] cybersource.com.au> wrote:
>>
>> Suppose you have a whole lot of files, and you need to open each one,
>> append a string, then close them. There's two obvious
On Feb 4, 10:17 am, Steven D'Aprano <[EMAIL PROTECTED] cybersource.com.au> wrote:
> After reading an earlier thread about opening and closing lots of files,
> I thought I'd do a little experiment.
>
> Suppose you have a whole lot of files, and you need to open each one,
> append a string, then close them.
On Mon, 04 Feb 2008 15:17:18, Steven D'Aprano wrote:
> # Method one: grouped by file.
> for each file:
>     open the file, append the string, then close it
>
>
> # Method two: grouped by procedure.
> for each file:
>     open the file
> for each open file:
>     append the string
> for each open file:
>     close the file
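The pseudocode above can be fleshed out into runnable Python. This is a minimal sketch; the paths and the appended string are placeholders, and the sanity check at the end uses a throwaway directory:

```python
import os
import tempfile

def method_one(paths, text):
    """Method one, grouped by file: open, append, close each file in turn."""
    for path in paths:
        with open(path, "a") as f:
            f.write(text)

def method_two(paths, text):
    """Method two, grouped by procedure: open every file first, then append
    to every open file, then close every open file.  This keeps one file
    descriptor per file alive at once, so very large batches can run into
    the OS limit on open files (see `ulimit -n` on Unix)."""
    handles = [open(path, "a") for path in paths]
    for f in handles:
        f.write(text)
    for f in handles:
        f.close()

# Sanity check: both methods append the same data.
with tempfile.TemporaryDirectory() as tmp:
    a = os.path.join(tmp, "a.txt")
    b = os.path.join(tmp, "b.txt")
    method_one([a], "spam\n")
    method_two([b], "spam\n")
    assert open(a).read() == open(b).read() == "spam\n"
```

Note that method two trades file-descriptor usage for fewer interleaved open/close calls, which is exactly the difference the thread's benchmark probes.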
Steven D'Aprano wrote:
> So, what's going on? Can anyone explain why the code which does more work
> takes less time?
Short answer: CPU and RAM are much faster than hard disks.
The three loops and the creation of a list cost only a few CPU cycles
compared to flushing the new data to disk.
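The point about flushing can be illustrated with CPython's default buffered file objects: `write()` only copies bytes into a userspace buffer, and the data is handed to the OS when the buffer fills or on `flush()`/`close()`. A small sketch, assuming the 1000-character write fits in the default buffer (it is well under the usual 8 KiB):

```python
import os
import tempfile

# Create an empty temp file, then write without flushing.
fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path, "w")
f.write("x" * 1000)                  # lands in the userspace buffer
size_before = os.path.getsize(path)  # still 0: nothing flushed yet
f.flush()
size_after = os.path.getsize(path)   # 1000: buffer pushed to the OS
f.close()
os.remove(path)

print(size_before, size_after)       # 0 1000
```

The deferred flush is where the real I/O cost lives, which is why shuffling the order of the cheap CPU-side loops barely moves the total.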