ID: 6746
Updated by: [EMAIL PROTECTED]
Reported By: joschug at aol dot com
-Status: Open
+Status: Closed
Bug Type: Feature/Change Request
Operating System: Redhat 6.2
PHP Version: 4.0.2
New Comment:
As of PHP 4.3.0, sybase_unbuffered_query() with store_result = FALSE provides the necessary functionality (a minimal sketch follows at the end of this report). As for the BCP wishes, I don't think this fits into ext/sybase_ct; a separate (PECL) extension would be the better place for that.

Previous Comments:
------------------------------------------------------------------------

[2000-09-13 22:31:13] joschug at aol dot com

I recently tried to do a SELECT * FROM a large table (about 2 million rows; the whole DB is about 500 MB). I noticed that PHP quickly ran out of memory after I called sybase_query() and before I accessed the result set. After a quick browse through the source of php_sybase_ct.c I saw that PHP first reads all results into an internal buffer and then returns each row from that buffer via sybase_fetch_array() and the like.

I'd really like to see an incremental approach here; if I run the same query against an Oracle DB (using the OCI interface) I don't have these problems. I would like to dump and (re-)insert a whole table with PHP, but with Sybase this is currently not achievable. I tried it with server-side cursors (sketched below), and it worked - but performance dropped by a factor of 20 :(

I'd also like to see an interface for doing bulk inserts with bcp, maybe modeled on the interface sybperl implements for Perl. This would also be very useful for large inserts (roughly a factor of 30 faster on my test system).

------------------------------------------------------------------------

--
Edit this bug report at http://bugs.php.net/?id=6746&edit=1
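
A minimal sketch of the unbuffered approach described in the closing comment, assuming PHP >= 4.3.0 built with ext/sybase_ct; the server name, credentials, database, and table below are placeholders:

<?php
// Connect to the server; all connection details here are placeholders.
$link = sybase_connect('SYBASE', 'user', 'password')
    or die('could not connect');
sybase_select_db('mydb', $link);

// Passing FALSE as the store_result argument tells ext/sybase_ct not to
// read the whole result set into client memory; rows are pulled from the
// wire as they are fetched.
$result = sybase_unbuffered_query('SELECT * FROM big_table', $link, FALSE);

while ($row = sybase_fetch_assoc($result)) {
    // Process one row at a time; memory use stays bounded regardless
    // of the size of big_table.
}

sybase_close($link);
?>

Note that with store_result = FALSE the result set is not materialized up front, so sybase_num_rows() cannot be used, and the rows should be read to the end before issuing another query on the same connection.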
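
For comparison, the server-side cursor workaround mentioned in the original report looks roughly like the sketch below (cursor, database, and table names are illustrative). Each FETCH is a complete client/server round trip, which is consistent with the factor-of-20 slowdown the reporter observed:

<?php
$link = sybase_connect('SYBASE', 'user', 'password')
    or die('could not connect');
sybase_select_db('mydb', $link);

// Declare and open a server-side cursor; each statement is its own batch.
sybase_query('DECLARE big_cur CURSOR FOR SELECT * FROM big_table', $link);
sybase_query('SET CURSOR ROWS 100 FOR big_cur', $link); // fetch in batches
sybase_query('OPEN big_cur', $link);

for (;;) {
    // Every FETCH is a separate round trip to the server.
    $result = sybase_query('FETCH big_cur', $link);
    if (!$result || sybase_num_rows($result) == 0) {
        break; // cursor exhausted
    }
    while ($row = sybase_fetch_array($result)) {
        // Process one row.
    }
}

sybase_query('CLOSE big_cur', $link);
sybase_query('DEALLOCATE CURSOR big_cur', $link);
sybase_close($link);
?>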