On Thu, Jan 10, 2008 at 12:08:39PM +0100, Stephane Bailliez wrote:
> Jared Mauch wrote:
>> I do large databases in Pg, like 300GB/day of new data.
>
> That's impressive. Would it be possible to have details on your hardware,
> schema and configuration and type
uld represent some of your
performance difference. If you're doing a lot of write operations
and fewer reads, perhaps the cost of an index isn't worth it in the
cpu time spent creating it vs. the amount of time for a seq scan.
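If you want to sanity-check that on your side, the stats views will show
whether an index ever actually gets used versus how much write traffic the
table takes. Rough sketch, untested and from memory, with your own table
names substituted in:

    SELECT t.relname,
           t.seq_scan,
           t.n_tup_ins + t.n_tup_upd + t.n_tup_del AS writes,
           i.indexrelname,
           i.idx_scan    -- least-used indexes sort first below
      FROM pg_stat_user_tables t
      JOIN pg_stat_user_indexes i USING (relid)
     ORDER BY i.idx_scan;

An index whose idx_scan stays at zero while n_tup_ins keeps climbing is
costing you on every insert and buying you nothing.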
- Jared
--
Jared Mauch | pgp key available via finger from [EMAIL PROTECTED]
Hi,
I do large databases in Pg, like 300GB/day of new data. Need a lot
more data on what you're having issues with.
Is your performance problem with database reads?
writes? (insert/copy?) How many indices do you have?
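If you don't have that handy, something like these two queries (untested,
off the top of my head) would give a quick picture of the index count and
the read/write mix per table:

    -- how many indices each table carries
    SELECT schemaname, tablename, count(*) AS n_indexes
      FROM pg_indexes
     GROUP BY schemaname, tablename
     ORDER BY n_indexes DESC;

    -- reads (seq vs. index scans) against writes, per table
    SELECT relname, seq_scan, idx_scan,
           n_tup_ins, n_tup_upd, n_tup_del
      FROM pg_stat_user_tables
     ORDER BY n_tup_ins DESC;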
- jared
--
On Thu, Dec 27, 2007 at 01:14:25PM +, Gregory Stark wrote:
> "Jared Mauch" <[EMAIL PROTECTED]> writes:
>
> > pg_dump is utilizing about 13% of the cpu and the
> > corresponding postgres backend is at 100% cpu time.
> > (multi-core, multi-cpu, lotsa ram, super-fast disk).
ith my host. There may be some optimizations that the compiler
could do when linking the C library, but I currently think they're on
sound footing.
(* drift=off mode=end *)
- Jared
--
Jared Mauch | pgp key available via finger from [EMAIL PROTECTED]
clue++;
On Wed, Dec 26, 2007 at 10:52:08PM +0200, Heikki Linnakangas wrote:
> Jared Mauch wrote:
>> pg_dump is utilizing about 13% of the cpu and the
>> corresponding postgres backend is at 100% cpu time.
>> (multi-core, multi-cpu, lotsa ram, super-fast disk).
>> ...
rm
much closer to 500k/sec or more? This would also aid me when I upgrade
pg versions and need to dump/restore with minimal downtime (as the data
never stops coming.. whee).
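For reference, pg_dump's data phase is basically COPY ... TO STDOUT, so a
crude baseline (untested sketch; needs superuser, and "flows" here is just a
placeholder for one of my big tables) would be timing a server-side COPY with
nothing else in the way:

    -- in psql: COPY straight to /dev/null on the server, so only the
    -- backend's row-output formatting is timed (no client, no dump file)
    \timing
    COPY flows TO '/dev/null';    -- "flows" is a placeholder table name

That would at least show how far below the backend's raw COPY rate the full
pg_dump pipeline is sitting.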
Thanks!
- Jared
--
Jared Mauch | pgp key available via finger from [EMAIL PROTECTED]
clue++; |