On Fri, Jun 11, 2010 at 08:43:53AM +0200, Adrian von Bidder wrote:
>
> Just speculation, I've not tried this. Perhaps pipe the output of pg_dump
> through software that bandwidth-limits the throughput?
Perhaps. However, moving the pg_dump to a Slony slave has solved my problem.
Thanks!!
Al
On 06/10/2010 11:43 PM, Adrian von Bidder wrote:
Just speculation, I've not tried this. Perhaps pipe the output of pg_dump
through software that bandwidth-limits the throughput? (I don't know if
such a command exists ...)
pv (pipe viewer)
Allows you to monitor and limit the rate of transfer through a pipe.
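A rate-limited dump along the lines suggested above might look like the sketch below; the database name, the 5 MB/s rate, and the output path are placeholders, and pv's -L flag does the throttling:

```shell
# Throttle pg_dump output to roughly 5 MB/s so the dump competes less
# for disk bandwidth; -q suppresses pv's progress display.
pg_dump mydb | pv -q -L 5m | gzip > /backup/mydb.sql.gz
```

Note that pv throttles the pipe, not the server's reads directly; it slows the backend only indirectly, because pg_dump blocks once the pipe fills. Since the bottleneck reported here is reads, the effect may be weaker than hoped.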
Heyho!
On Thursday 10 June 2010 23.49:58 Aleksey Tsalolikhin wrote:
> 3. We ARE disk bound. Disk utilization goes to 94% during
> the dump. There is an order of magnitude more reads than
> writes.
Just speculation, I've not tried this. Perhaps pipe the output of pg_dump
through software that bandwidth-limits the throughput?
On Thu, Jun 10, 2010 at 3:22 PM, Andy Colson wrote:
> On 6/10/2010 3:25 PM, Aleksey Tsalolikhin wrote:
>>
>> PostgreSQL 8.4.2 response time slows during pg_dump of a 14 GB database.
>>
>> Specifically, I am monitoring how long it takes to login to our Web
>> app - usually under 2 seconds, but when pg_dump is running, that
>> changes to 2 - 18 seconds.
Aleksey Tsalolikhin wrote:
Is there any way I can keep my response fast but still get my pg_dump?
http://wiki.postgresql.org/wiki/Priorities covers the usual techniques
you might use to slow down your pg_dump to where it doesn't impact the
main server as much.
Or should I move my pg_dump to a Slony-I slave?
* Aleksey Tsalolikhin (atsaloli.t...@gmail.com) wrote:
> I am moving the pg_dump process to a Slony slave.
Good idea.
> Good point. I tried that, actually, but was still disk-bound.
> (Mostly read activity.)
You could maybe try ionice'ing the PG process that is the pg_dump
connection... Identify ...
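On Linux with the CFQ I/O scheduler, ionice'ing the dump's backend could be sketched as follows. This is an assumption-laden sketch: it uses the 8.4-era pg_stat_activity column names (procpid, current_query), and matching on COPY is a guess at how the dump's work shows up there.

```shell
# Find the PID of the backend serving the pg_dump connection
# (column names are the PostgreSQL 8.4 ones: procpid, current_query;
#  matching on 'COPY %' is an assumption about how the dump appears).
PID=$(psql -At -c "SELECT procpid FROM pg_stat_activity \
                   WHERE current_query LIKE 'COPY %' LIMIT 1")

# Move that backend into the idle I/O class so interactive queries
# win any disk contention (requires root; CFQ scheduler only).
sudo ionice -c3 -p "$PID"
```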
On 6/10/2010 4:28 PM, Scott Marlowe wrote:
On Thu, Jun 10, 2010 at 3:22 PM, Andy Colson wrote:
On 6/10/2010 3:25 PM, Aleksey Tsalolikhin wrote:
PostgreSQL 8.4.2 response time slows during pg_dump of a 14 GB database.
Specifically, I am monitoring how long it takes to login to our Web
app - usually under 2 seconds, but when pg_dump is running, that
changes to 2 - 18 seconds.
PostgreSQL 8.4.2 response time slows during pg_dump of a 14 GB database.
Specifically, I am monitoring how long it takes to login to our Web
app - usually under 2 seconds, but when pg_dump is running, that
changes to 2 - 18 seconds.
Is there any way I can keep my response fast but still get my pg_dump?
On 06/10/2010 02:49 PM, Aleksey Tsalolikhin wrote:
...
Is there any way I can keep my response fast but still get my pg_dump?
Or should I move my pg_dump to a Slony-I slave?
1. We're not dumping across the network. The pg_dump
process runs on the same physical server as PostgreSQL.
On Thu, Jun 10, 2010 at 3:41 PM, Steve Crawford wrote:
> So are you writing the backup to the same disk(s) that support the db? What
> happens if you do the dump from a different machine? You'll increase network
> traffic, of course, but reduce your disk load. And you won't be keeping your
> dump ...
On Thu, Jun 10, 2010 at 1:25 PM, Aleksey Tsalolikhin wrote:
> PostgreSQL 8.4.2 response time slows during pg_dump of a 14 GB database.
>
...
>
> Is there any way I can keep my response fast but still get my pg_dump?
> Or should I move my pg_dump to a Slony-I slave?
Thank you for your kind replies.
On 6/10/2010 3:25 PM, Aleksey Tsalolikhin wrote:
PostgreSQL 8.4.2 response time slows during pg_dump of a 14 GB database.
Specifically, I am monitoring how long it takes to login to our Web
app - usually under 2 seconds, but when pg_dump is running, that
changes to 2 - 18 seconds.
Is there any way I can keep my response fast but still get my pg_dump?
On Thursday, June 10, 2010, Aleksey Tsalolikhin wrote:
> Thanks anyway - glad to hear dumping from a slave is
> a popular approach, that's what we'll do. This is good,
> it'll allow us to increase our pg_dumps from 1 a day to
> as many as we want to feel safe.
You should look into PITR backups.
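For reference, enabling WAL archiving as the basis for PITR backups on 8.4 comes down to two postgresql.conf settings; /wal_archive below is a placeholder path:

```
# postgresql.conf -- ship completed WAL segments to an archive directory
archive_mode = on
archive_command = 'test ! -f /wal_archive/%f && cp %p /wal_archive/%f'
```

With archiving in place, a base backup plus the archived WAL lets you restore to any point in time, and it puts far less sustained read load on the database than a full pg_dump.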