Dear All,
thanks for your precious help. I'll come back to the list once we have analyzed our
system.
Roberto
----- Original Message -----
From: k...@rice.edu
To: "Roberto Grandi"
Cc: pgsql-performance@postgresql.org
Sent: Friday, 3 October 2014 15:00:03
Subject: Re: [PERFORM]
Dear Pg people,
I would like to ask for your help with a scaling issue. We are planning to move our
Postgres instance (8 CPUs, 65 GB RAM) from 3 million events/day to 5 million events/day.
What would you suggest in order to plan this switch? Adding a separate server?
Increasing RAM? Using SSDs?
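
For reference, a minimal sketch of how the current instance could be baselined before
choosing between more RAM, SSDs, or a separate server; psql access is assumed and the
database name mydb is a placeholder:

#!/bin/bash
# Baseline the current instance before scaling from 3M to 5M events/day.
# "mydb" is a placeholder database name.

# Commit rate and buffer-cache hit ratio from pg_stat_database: a low hit
# ratio suggests more RAM would help, while a high ratio combined with slow
# queries points more towards faster disks (SSD) or an additional server.
psql -d mydb -c "
SELECT datname,
       xact_commit,
       blks_read,
       blks_hit,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM   pg_stat_database
WHERE  datname = current_database();"

# OS-level disk utilisation; sustained high %util or iowait means the disks
# are already the bottleneck at 3 million events/day.
iostat -x 5 3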
Any r
, b, c FROM table1), but sometimes it is very slow. What would you suggest?
Is it possible to detect whether we are facing a problem with I/O or with the Linux
system itself?
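
A minimal sketch of how the two cases could be told apart, assuming psql access; mydb is
a placeholder database name and table1 is the table from the query above:

#!/bin/bash
# Distinguish "the query does a lot of I/O" from "the machine itself is overloaded".
# "mydb" is a placeholder database name; table1 comes from the original query.

# 1) Time the statement inside Postgres (on 9.0+ you can also add BUFFERS to
#    EXPLAIN to see how many blocks came from disk rather than shared_buffers).
psql -d mydb -c "EXPLAIN ANALYZE SELECT a, b, c FROM table1;"

# 2) Watch the OS while the query runs: high iowait in vmstat, or a device at
#    ~100% util in iostat, points at an I/O problem rather than Postgres itself.
vmstat 5 5
iostat -x 5 5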
Many thanks in advance for all your help.
Regards,
Roberto
----- Original Message -----
From: "Jeff Janes"
To: "Roberto Grandi"
The COPY has been pending for more than 100 minutes and the destination file is still
at 0 KB. Can you advise me how to solve this issue?
Is there a better way to bulk download data that avoids any kind of blocking when
running in parallel?
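
A minimal sketch of how to check whether the stalled download is waiting on a lock,
assuming psql access; the column names are the 8.4-era ones (procpid, current_query,
waiting; from 9.2 onwards they are pid, query and state), and mydb, table1 and the id
ranges are placeholders:

#!/bin/bash
# Show backends whose lock requests have not been granted, i.e. blocked sessions.
psql -d mydb -c "
SELECT a.procpid, a.waiting, a.current_query, l.locktype, l.mode
FROM   pg_stat_activity a
JOIN   pg_locks l ON l.pid = a.procpid
WHERE  NOT l.granted;"

# COPY only takes an ACCESS SHARE lock on the table it reads, so several exports
# can normally run side by side; splitting by key range and writing each part to
# its own file keeps the parallel dumps out of each other's way.
psql -d mydb -c "COPY (SELECT a, b, c FROM table1 WHERE id <  1000000) TO STDOUT" > /tmp/part1.csv &
psql -d mydb -c "COPY (SELECT a, b, c FROM table1 WHERE id >= 1000000) TO STDOUT" > /tmp/part2.csv &
wait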
Many thanks in advance
----- Original Message -----
From: "Jeff Janes"
Dear All,
I'm dealing with restoring 3 DBs at the same time. Previously this task was sequential,
but we need to make our daily maintenance window as short as possible.
Is it possible, from your point of view, to restore more than one DB at a time on the
same server? I haven't found any clear answer on this.
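
A minimal sketch of how the three restores could run side by side, assuming
custom-format dumps (pg_dump -Fc); the database names and dump paths are placeholders:

#!/bin/bash
# Restore three databases on the same server at the same time.
# db1..db3 and the dump paths are placeholder names.

# Each pg_restore targets a different database, so they can simply run as
# separate processes; -j additionally parallelises the data load and index
# builds within each database (available since 8.4).
pg_restore -d db1 -j 2 /backups/db1.dump &
pg_restore -d db2 -j 2 /backups/db2.dump &
pg_restore -d db3 -j 2 /backups/db3.dump &
wait

# The limit is the shared hardware: keep the total number of workers roughly
# at or below the number of CPU cores, and watch the disks with iostat.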
From your point of view, would working with isolation levels or table partitioning help
minimize table space growth?
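
On the partitioning half of that question, a minimal sketch of inheritance-based
partitioning (the only kind available on 8.4; declarative partitioning arrived in 10);
mydb and all table and column names are invented for illustration:

#!/bin/bash
# The space-saving point of partitioning: dropping or truncating an old child
# table returns disk space immediately, unlike DELETE followed by VACUUM.
psql -d mydb <<'SQL'
CREATE TABLE products_log (
    id        bigint,
    loaded_at date NOT NULL,
    payload   text
);

-- One child per day; the CHECK constraints let constraint_exclusion skip
-- partitions that cannot match a query's WHERE clause.
CREATE TABLE products_log_20131001 (
    CHECK (loaded_at = DATE '2013-10-01')
) INHERITS (products_log);

CREATE TABLE products_log_20131002 (
    CHECK (loaded_at = DATE '2013-10-02')
) INHERITS (products_log);

-- Load directly into the child for the current day, and retire an old day
-- with a cheap DROP TABLE instead of a bloating DELETE:
-- DROP TABLE products_log_20131001;
SQL

Isolation levels by themselves do not reclaim space, although keeping transactions short
does help VACUUM remove dead rows promptly.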
Thanks again for all your help.
BR,
Roberto
----- Original Message -----
From: "Jeff Janes"
To: "Roberto Grandi"
Cc: "Kevin Grittner", pgsql-performance@postgresql.org
Thanks in advance again.
BR,
Roberto
----- Original Message -----
From: "Kevin Grittner"
To: "Roberto Grandi", pgsql-performance@postgresql.org
Sent: Tuesday, 3 September 2013 22:34:30
Subject: Re: [PERFORM] COPY TO and VACUUM
Roberto Grandi wrote:
> I'm running
Dear All,
I'm running Postgres 8.4 on an Ubuntu 10.04 Linux server (64-bit).
I have a big table that contains product information: during the day we run a process
that continuously imports new products into this table with COPY statements, loading
from files.
As a result, the table's disk space is growing fast.
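
A minimal sketch of how to check whether that growth is dead rows that VACUUM is not
reclaiming; mydb and products are placeholder names for the database and the big table:

#!/bin/bash
# Dead-row counts and the last (auto)vacuum times for the big table.
psql -d mydb -c "
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_vacuum,
       last_autovacuum
FROM   pg_stat_user_tables
WHERE  relname = 'products';"

# Current on-disk size of the table including its indexes.
psql -d mydb -c "SELECT pg_size_pretty(pg_total_relation_size('products'));"

# Plain VACUUM makes dead space reusable but does not shrink the file on disk;
# VACUUM FULL or CLUSTER can give space back to the OS, at the price of an
# exclusive lock for the duration.
psql -d mydb -c "VACUUM VERBOSE products;"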