Morgan L. Owens wrote:
>> I accept your point about not caring about how the data was created, but
>> on the other side, if the data creation handles a lot more data than the
>> consumer needs, an amount of processing time is wasted. The quick way of
>> doing something does not equate to the best way of doing it.
>
> Only if the producer does work unnecessary for determining the next datum
> required by the consumer. It doesn't have to create all the data at once (if
> it did, you might as well stuff it all in a big array and use that).

I think my only point here was that a 'generic' producer may benefit from
additional work to make it more efficient, but only the consumer end knows
what it actually needs. This is the problem with a lot of the database
'abstraction' layers: a lot of additional processing is carried out to make
the abstraction work, which could be optimised much more easily if it weren't
wrapped away so well. The iterator/generator approach to workflow is going
down the same path, which is fine for some applications but not so flexible
for others. BOTH approaches to workflow have a place; it's a matter of
working out at what point one is better than the other. Currently we do not
have any real basis to compare things :(
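To make the lazy-producer point concrete, here is a minimal sketch (in Python
for brevity; PHP generators behave analogously) contrasting an eager producer,
which pays the full cost up front, with a lazy one, which only does work for
each datum the consumer actually requests. The function names are illustrative,
not from any real API:

```python
def eager_squares(n):
    # Eager producer: computes every value up front,
    # even if the consumer only wants the first few.
    return [i * i for i in range(n)]

def lazy_squares(n):
    # Lazy producer: computes one value per request;
    # work stops as soon as the consumer stops asking.
    for i in range(n):
        yield i * i

# Consumer that only needs the squares below 100.
first = []
for v in lazy_squares(1_000_000):
    if v >= 100:
        break
    first.append(v)
# Only 11 squares were ever computed (the 11th triggers
# the break), not a million.
```

With the eager version, the same consumer loop would still have forced a
million computations before the first comparison, which is exactly the wasted
processing time being discussed above.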

--
Lester Caine - G8HFL
-----------------------------
Contact - http://lsces.co.uk/wiki/?page=contact
L.S.Caine Electronic Services - http://lsces.co.uk
EnquirySolve - http://enquirysolve.com/
Model Engineers Digital Workshop - http://medw.co.uk
Rainbow Digital Media - http://rainbowdigitalmedia.co.uk



--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php
