Re: [HACKERS] pg_dump tries to do too much per query

2000-09-19 Thread Tom Lane
Philip Warner <[EMAIL PROTECTED]> writes: > It's a real pity about pg_get_userbyid - the output for non-existent users > is pretty near useless. I presume it's there for convenience of a specific > piece of code. No, I imagine it's just that way because it seemed like a good idea at the time.

Re: [HACKERS] pg_dump tries to do too much per query

2000-09-17 Thread Philip Warner
At 16:29 17/09/00 -0400, Tom Lane wrote: >As somebody pointed out a few days ago, pg_dump silently loses tables >whose owners can't be identified. This is now fixed in CVS. The owners of all objects are now retrieved by using column select expressions. If you can recall where it was, I'd be interested
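A "column select expression" here means fetching the owner name as a scalar subquery in the target list rather than joining against the user catalog, so a table with an unidentifiable owner yields a NULL owner column instead of silently vanishing from the result. A minimal sketch, assuming the 2000-era catalogs (pg_shadow, usesysid; exact names differ in later releases):

```sql
-- Hedged sketch: fetch each table's owner via a scalar subquery.
-- If no pg_shadow row matches relowner, the subquery returns NULL
-- and the table row still appears in the result set.
SELECT c.relname,
       (SELECT u.usename
          FROM pg_shadow u
         WHERE u.usesysid = c.relowner) AS usename
  FROM pg_class c
 WHERE c.relkind = 'r';
```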

Re: [HACKERS] pg_dump tries to do too much per query

2000-09-17 Thread Philip Warner
At 12:48 18/09/00 +1000, Philip Warner wrote: >>You should be able to fix the latter problem by doing an outer join, >>though it doesn't quite work yet in current sources. pg_get_userbyid() >>offers a different solution, although it won't return NULL for unknown >>IDs, which might be an easier
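The two alternatives being weighed can be sketched as follows (assuming the pg_shadow catalog of that era; outer joins were still being finished in the development sources at the time, per the quote above):

```sql
-- Option 1: outer join -- an unknown owner comes back as NULL usename,
-- so the caller can detect the missing user but keeps the table row.
SELECT c.relname, u.usename
  FROM pg_class c
  LEFT OUTER JOIN pg_shadow u ON u.usesysid = c.relowner;

-- Option 2: pg_get_userbyid() -- always returns a string, including a
-- placeholder for nonexistent users, so the row is never lost but an
-- unknown owner is not distinguishable by a NULL test.
SELECT c.relname, pg_get_userbyid(c.relowner) AS usename
  FROM pg_class c;
```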

Re: [HACKERS] pg_dump tries to do too much per query

2000-09-17 Thread Philip Warner
At 16:29 17/09/00 -0400, Tom Lane wrote: > >getTables(): SELECT failed. Explanation from backend: 'ERROR: cache lookup of attribute 1 in relation 400384 failed'. > >This is just about entirely useless as an error message, wouldn't you >say? I agree but it is representative of the error handling

[HACKERS] pg_dump tries to do too much per query

2000-09-17 Thread Tom Lane
I was experimenting today with pg_dump's reaction to missing dependencies, such as a rule that refers to a no-longer-existing table. It's pretty bad. For example: create table test (f1 int); create view v_test as select f1+1 as f11 from test; drop table test; then run pg_dump: getTables(): SELECT failed
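The reproduction steps above, written out as a script (in 2000-era PostgreSQL, DROP TABLE did not track view dependencies, so the view is left referring to a vanished relation):

```sql
create table test (f1 int);
create view v_test as select f1+1 as f11 from test;
drop table test;  -- leaves v_test's rule pointing at a dropped relation

-- Running pg_dump afterwards fails with the error quoted in this thread:
--   getTables(): SELECT failed. Explanation from backend:
--   'ERROR: cache lookup of attribute 1 in relation 400384 failed'
```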