Bruce Momjian wrote:
>
> Greg Copeland wrote:
> > Well, it occurred to me that if a large result set were to be identified
> > before transport between a client and server, a significant amount of
> > bandwidth may be saved by using a moderate level of compression.
> > Especially with something like result sets, which I tend to believe may
> > lend itself well toward compression.
> > I should have said compressing the HTTP protocol, not FTP.
>
> > This may be of value for users with low bandwidth connectivity to their
> > servers or where bandwidth may already be at a premium.
>
> But don't slow links do the compression themselves, like PPP over a
> modem?
Yes, but that's packet-level compression. You'll never get even close to the result you can achieve by compressing the set as a whole.

Speaking of HTTP, it's fairly common for web servers (Apache has mod_gzip) to gzip content before sending it to the client (which unzips it silently), especially when dealing with somewhat static content (so it can be cached in zipped form). This can provide great bandwidth savings.

I'm sceptical of the benefit such compression would provide in this setting, though. We're dealing with result sets that would have to be compressed every time (no caching), which might be a bit expensive on a database server. Having it as a default-off option for psql might be nice, but I wonder if it's worth the time, effort, and CPU cycles.
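To illustrate the point about packet-level versus whole-set compression, here's a rough sketch using Python's zlib. The row format and sizes are made up for the example; the idea is just that compressing each ~1400-byte "packet" independently resets the compressor's dictionary, while one stream over the whole result set lets repeated column values across rows share it.

```python
import zlib

# Simulate a tabular result set: many rows with repeated column values.
rows = [f"{i}\talice\t2002-11-{(i % 28) + 1:02d}\tactive\n" for i in range(5000)]
payload = "".join(rows).encode("ascii")

# Packet-level compression: each ~1400-byte chunk (roughly one Ethernet
# frame's worth) is compressed independently, as PPP-style link
# compression would see it.
packet_size = 1400
packets = [payload[i:i + packet_size] for i in range(0, len(payload), packet_size)]
per_packet = sum(len(zlib.compress(p)) for p in packets)

# Whole-set compression: a single stream over the entire result set.
whole = len(zlib.compress(payload))

print(f"original:   {len(payload)} bytes")
print(f"per-packet: {per_packet} bytes")
print(f"whole set:  {whole} bytes")
```

On repetitive data like this, the whole-set figure comes out well below the per-packet total, which is the gap a protocol-level option would be buying at the cost of server CPU.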