Tim Starling wrote:
> <?php
> class C {
>     var $v1, $v2, $v3, $v4, $v5, $v6, $v7, $v8, $v9, $v10;
> }
> 
> $m = memory_get_usage();
> $a = array();
> for ( $i = 0; $i < 10000; $i++ ) {
>     $a[] = new C;
> }
> print ((memory_get_usage() - $m) / 10000) . "\n";
> ?>
> 
> 1927 bytes (I'll use 64-bit from now on since it gives the most shocking
> numbers)

PHP 5.3.3-dev (cli) (built: Jan 11 2010 11:26:25)
Linux colo 2.6.31-1-amd64 #1 SMP Sat Oct 24 17:50:31 UTC 2009 x86_64

php > class C {
php {     var $v1, $v2, $v3, $v4, $v5, $v6, $v7, $v8, $v9, $v10;
php { }
php >
php > $m = memory_get_usage();
php > $a = array();
php > for ( $i = 0; $i < 10000; $i++ ) {
php {     $a[] = new C;
php { }
php > print ((memory_get_usage() - $m) / 10000) . "\n";
1479.5632

So you need roughly 1500 bytes per object in your array; at that rate
even 1000 objects only adds up to about 1.5 MB.  I still fail to see
the problem for a web request.  Maybe I am just old-fashioned in the
way I look at this stuff, but as far as I am concerned, if you have
more than 1000 objects loaded on a single request, you are doing
something wrong.
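
A minimal sketch of that row-at-a-time alternative, assuming a mysqli
connection and a hypothetical revision table and process_row()
handler; the rows are streamed from the server one at a time, so the
request never holds thousands of objects at once:

<?php
// Sketch only: hypothetical connection details.
$db = new mysqli('localhost', 'user', 'pass', 'wiki');

// MYSQLI_USE_RESULT streams rows from the server instead of buffering
// the whole result set in PHP's memory.
$result = $db->query('SELECT rev_id, rev_text FROM revision',
    MYSQLI_USE_RESULT);

while ($row = $result->fetch_assoc()) {
    // Handle one row and let it go out of scope; we never build up an
    // array of 10000 objects the way the benchmark above does.
    process_row($row);   // hypothetical handler
}
$result->free();
?>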

This is why we do things like unbuffered MySQL queries, zero-copy
stream passing, etc.  We never want entire result sets or entire files
in memory, because even if we optimize the crap out of it, it is still
going to be way faster to simply not do that.
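
For the stream side, a minimal sketch using stream_copy_to_stream()
and a hypothetical large file; the copy happens in chunks inside the
streams layer, so the file contents never land in a PHP string:

<?php
// Sketch only: send a large file to the client without ever holding
// its contents in a PHP variable.
$in  = fopen('/path/to/large-dump.xml', 'rb');   // hypothetical path
$out = fopen('php://output', 'wb');

// The streams layer moves the data in fixed-size chunks, so peak
// memory stays flat no matter how big the file is.
stream_copy_to_stream($in, $out);

fclose($in);
fclose($out);
?>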

-Rasmus
