:) yah that makes sense, no big deal. I'll probably just push this head
build of pg_dump onto the production machines till it comes out.
Thanks again!
On Sat, Mar 31, 2012 at 3:44 PM, Tom Lane wrote:
> Mike Roest writes:
> > Any idea when 9.1.4 with this change will be out so we can pull the cluster up.
Mike Roest writes:
> Any idea when 9.1.4 with this change will be out so we can pull the cluster
> up.
Well, we just did some releases last month, so unless somebody finds a
really nasty security or data-loss issue, I'd think it will be a couple
of months before the next set.
>
>
> I'm just pulling another backup using the stock 9.1.1 pg_dump to ensure
> the backups are equivalent.
>
Schema & data are identical between the 2 backups. The new backup passes
all our tests for validating a tenant.
Thank you again for the quick response!
--Mike
>
> I've committed fixes for both these issues. If you are in a position to
> test with 9.1 branch tip from git, it'd be nice to have confirmation
> that these patches actually cure your problem. For both of them, the
> issue seems to only show up in a subset of cases, which may explain why
> we'
I wrote:
> So this is dumb; we should manage the "is the object already processed"
> component of that with an O(1) check, like a bool array or some such,
> rather than an O(N) search loop.
> As for the getTables slowdown, the only part of that I can see that
> looks to be both significant and ent
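(A minimal, self-contained sketch of the O(1) bookkeeping described just
above: a bool array indexed by object id, instead of rescanning the list of
already-processed objects for every lookup. The Obj type and the function
names below are hypothetical stand-ins for illustration; this is not
pg_dump's actual code or the committed patch.)

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical object type; assume each object has an id in [1, max_id]. */
    typedef struct { int id; } Obj;

    /* O(N): scan the whole "already processed" list on every check. */
    static bool seen_linear(Obj **processed, int nprocessed, const Obj *obj)
    {
        for (int i = 0; i < nprocessed; i++)
            if (processed[i]->id == obj->id)
                return true;
        return false;
    }

    /* O(1): one bool per possible id, indexed directly. */
    static bool seen_test_and_set(bool *seen, const Obj *obj)
    {
        if (seen[obj->id])
            return true;            /* already processed */
        seen[obj->id] = true;       /* mark it processed now */
        return false;
    }

    int main(void)
    {
        int   max_id = 1000;
        bool *seen = calloc(max_id + 1, sizeof(bool));  /* all false initially */
        Obj   a = {42};
        Obj  *processed[] = {&a};

        printf("O(N) lookup:       %d\n", seen_linear(processed, 1, &a));  /* 1 */
        printf("O(1) first visit:  %d\n", seen_test_and_set(seen, &a));    /* 0 */
        printf("O(1) second visit: %d\n", seen_test_and_set(seen, &a));    /* 1 */

        free(seen);
        return 0;
    }

With N objects each checked against such a list, the linear scan makes the
whole pass O(N^2), which is the kind of quadratic behavior being discussed in
this thread; the flat bool array brings it back to O(N).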
Mike Roest writes:
> The file is 6 megs so I've dropped it here.
> That was doing perf for the length of the pg_dump command and then a perf
> report -n
> http://dl.dropbox.com/u/13153/output.txt
Hmm ... that's a remarkably verbose output format, but the useful part
of this info seems to be just
>
> That could be out-of-date info though. Here's some info about
> another possibility:
> http://wiki.postgresql.org/wiki/Profiling_with_perf
>
>
There we go, this perf worked on the VM.
The file is 6 megs so I've dropped it here.
That was doing perf for the length of the pg_dump command and then a perf
report -n
Mike Roest writes:
> That was on the CentOS 5.8 x64 machine. The one I'm trying it from now is
> Ubuntu 11.10 x64
Hm. On current Red-Hat-derived systems I'd recommend oprofile, although
you need root privileges to use that. Not real sure what is available
on Ubuntu, but our crib sheet for opro
That was on the CentOS 5.8 x64 machine. The one I'm trying it from now is
Ubuntu 11.10 x64
On Fri, Mar 30, 2012 at 11:30 AM, Tom Lane wrote:
> Mike Roest writes:
> > Ok I just realized that's probably not going to be much help :)
>
> gmon.out would be of no value to anybody else anyway --- making sense of
> it requires the exact executable you took the measurements with.
Mike Roest writes:
> Ok I just realized that's probably not going to be much help :)
gmon.out would be of no value to anybody else anyway --- making sense of
it requires the exact executable you took the measurements with.
> 0.00 0.00 0.005 0.00 0.00 canonicalize_path
Ok I just realized that's probably not going to be much help :)
0.00 0.00 0.005 0.00 0.00 canonicalize_path
0.00 0.00 0.005 0.00 0.00 trim_trailing_separator
0.00 0.00 0.003 0.00 0.00 strlcpy
0.00 0.00 0
For sure, I'll work on that now. One thing I noticed looking through the
pg_dump code based on the messages: it seems to be grabbing the full
dependency graph for the whole db rather than limiting it by the schema
(not sure if limiting this would be possible).
This q
Mike Roest writes:
> This dump is currently taking around 8 minutes. While dumping, the pg_dump
> process is using 100% of one core in the server (24-core machine). Doing a
> -v pg_dump I found that the following stages are taking the majority of the
> time
> reading user-defined tables (2 minut