Reydan Cankur wrote:
I have compiled PostgreSQL 8.4 from source code and, in order to
install pgbench, I go into the contrib folder and run the commands below:
make
make install
When I run pgbench as a command, the system cannot find it.
Do regular PostgreSQL commands such as psql work?
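A rough sketch of the usual sequence (the /usr/local/pgsql prefix is only the default for source builds and is an assumption here; pgbench is installed into the bin directory of the configure --prefix, which is often not on $PATH):

    cd contrib/pgbench
    make
    make install
    /usr/local/pgsql/bin/pgbench --help   # call it by full path, or add that bin dir to PATH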
On Sun, Mar 21, 2010 at 8:46 PM, Craig Ringer wrote:
> On 22/03/2010 1:04 AM, Dave Crooke wrote:
>>
>> If you are really so desperate to save a couple of GB that you are
>> resorting to -Z9 then I'd suggest using bzip2 instead.
>>
>> bzip is designed for things like installer images where there will be
On 22/03/2010 1:04 AM, Dave Crooke wrote:
If you are really so desperate to save a couple of GB that you are
resorting to -Z9 then I'd suggest using bzip2 instead.
bzip is designed for things like installer images where there will be
massive amounts of downloads, so it uses a ton of cpu during
compression
Note however that Oracle offers full transactionality and does in-place row
updates. There is more than one way to do it.
Cheers
Dave
On Mar 21, 2010 5:43 PM, "Merlin Moncure" wrote:
On Sat, Mar 20, 2010 at 11:47 PM, Andy Colson wrote:
> Don't underestimate my...
for non trivial selects (myis
On Sat, Mar 20, 2010 at 11:47 PM, Andy Colson wrote:
> Don't underestimate mysql. It was written to be fast. But you have to
> understand the underlying points: it was written to be fast at the cost of
> other things... like concurrent access and data integrity. If you want to
> just read from
If you have a multi-processor machine (more than 2 CPUs) you could look into pigz,
which is a parallelized implementation of gzip. I've gotten dramatic reductions in
wall time using it to compress dump files. The compressed file is readable by
gunzip.
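A sketch of what that pipeline might look like (database and file names are made up; by default pigz uses all available cores):

    pg_dump mydb | pigz > mydb.sql.gz      # -p N limits the number of threads
    gunzip -c mydb.sql.gz | psql newdb     # restore with the ordinary tools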
Bob Lunney
From: Dave Crooke
Subject: Re: [PERFORM] p
If you are really so desperate to save a couple of GB that you are resorting
to -Z9 then I'd suggest using bzip2 instead.
bzip is designed for things like installer images where there will be
massive amounts of downloads, so it uses a ton of cpu during compression,
but usually less than -Z9 and ma
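A concrete sketch of the two pipelines being compared (names are invented; bzip2 typically produces a smaller file at a much higher CPU cost than gzip):

    pg_dump mydb | gzip  > mydb.sql.gz       # roughly what pg_dump's gzip-level -Z compression does
    pg_dump mydb | bzip2 > mydb.sql.bz2      # smaller output, considerably more CPU time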
Tom Lane wrote:
I would bet that the reason for the slow throughput is that gzip
is fruitlessly searching for compressible sequences. It won't find many.
Indeed, I didn't expect much reduction in size, but I also didn't expect
a four-order-of-magnitude increase in run time (i.e. output at
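One workaround implied by that diagnosis (my sketch, not something spelled out in the thread) is to back the compression level off, or turn it off entirely, when the bulk of the data is already-compressed PDF:

    pg_dump -Z1 mydb > mydb.sql.gz     # cheap gzip pass instead of -Z9
    pg_dump -Fc -Z0 mydb > mydb.dump   # custom-format archive with compression disabled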
Craig Ringer writes:
> On 21/03/2010 9:17 PM, David Newall wrote:
>> and wonder if I should read up on gzip to find why it would work so
>> slowly on a pure text stream, albeit a representation of PDF which
>> intrinsically is fairly compressed.
> In fact, PDF uses deflate compression, the same a
One more from me
If you think that the pipe to GZIP may be causing pg_dump to stall, try
putting something like buffer(1) in the pipeline ... it doesn't generally
come with Linux, but you can download source or create your own very easily
... all it needs to do is asynchronously poll stdin an
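For example (mbuffer is just one commonly packaged substitute for buffer(1), not something named in the thread; the size and names are illustrative):

    pg_dump mydb | mbuffer -m 64M | gzip > mydb.sql.gz   # the buffer decouples pg_dump from gzip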
On 21/03/2010 9:17 PM, David Newall wrote:
Thanks for all of the suggestions, guys, which gave me some pointers on
new directions to look, and I learned some interesting things.
Unfortunately one of these processes dropped eventually, and, according
to top, the only non-idle process running w
Thanks for all of the suggestions, guys, which gave me some pointers on
new directions to look, and I learned some interesting things.
The first interesting thing was that piping (uncompressed) pg_dump into
gzip, instead of using pg_dump's internal compressor, does bring a lot
of extra parallelism
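The contrast being described, sketched with made-up names (with -Z9 the compression happens inside the single pg_dump process; with a pipe, gzip runs as a separate process that can use another CPU):

    pg_dump -Z9 mydb > mydb.sql.gz           # compression inside pg_dump
    pg_dump mydb | gzip -9 > mydb.sql.gz     # compression in a separate process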