Wow... I found it. The postgres database contained more
default privs. But pgAdmin III says nothing about dependents in its reports.
Thanks!
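For the record, one way to spot these leftovers without pgAdmin is to query the catalog directly. A minimal sketch, assuming the role is still named role_x; it has to be run separately in each database you want to check:

-- list default-privilege entries that mention role_x in the current database
SELECT pg_get_userbyid(d.defaclrole) AS granting_role,
       n.nspname                     AS schema,
       d.defaclobjtype               AS object_type,
       d.defaclacl                   AS default_acl
FROM pg_default_acl d
LEFT JOIN pg_namespace n ON n.oid = d.defaclnamespace
WHERE d.defaclacl::text LIKE '%role_x%';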
2016-10-21 16:19 GMT+02:00 Durumdara :
> Dear Tom!
>
> Is there any tool that can show me the dependents or dependencies?
>
> In PGAdmin I don't see any dependencies or dependents... :-(
Dear Tom!
Is there any tool that can show me the dependents or dependencies?
In PGAdmin I don't see any dependencies or dependents... :-(
Thanks
dd
2016-10-21 16:08 GMT+02:00 Tom Lane :
> Durumdara writes:
> > DB_X was dropped, so I can't choose it as the "actual database".
> > I tried this in a neutral database:
Durumdara writes:
> DB_X was dropped, so I can't choose it as the "actual database".
> I tried this in a neutral database:
> drop owned by role_x;
> But nothing happened; the error is the same.
The error you are reporting is describing default privileges that
exist in the *current* database. You need to run DROP OWNED BY in each
database that still holds such privileges for the role.
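A minimal sketch of that advice, assuming (as it turned out above) that the leftover default privileges live in the postgres database:

\c postgres
DROP OWNED BY role_x;   -- also removes default privileges granted to role_x in this database
DROP ROLE role_x;       -- succeeds once no database still references the role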
Dear Tom!
DB_X was dropped, so I can't choose it as the "actual database".
I tried this in a neutral database:
drop owned by role_x;
But nothing happened; the error is the same.
As I read, it has a "CASCADE" mode, but I'm afraid to start it, because I
don't know what will happen.
It is a really use
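One cautious way to preview what the CASCADE variant would touch is to run it inside a transaction and roll it back. A sketch only; the notices and the catalog state inside the transaction are what to look at:

BEGIN;
DROP OWNED BY role_x CASCADE;
-- inspect what happened here (e.g. any "drop cascades to ..." notices, \dp, \dn)
ROLLBACK;   -- undoes everything; nothing is actually dropped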
Durumdara writes:
> We have a ROLE_MAIN.
> This granted default privileges on all future objects in DB_X to ROLE_X.
> Somebody dropped DB_X, and later he tried to drop ROLE_X.
> But he got errors in PGAdmin.
> ERROR: role "role_x" cannot be dropped because some objects depend on it
> DETAIL: privileg
Hello!
We created a DB named DB_X, and a role ROLE_X.
We have a ROLE_MAIN.
This granted default privileges on all future objects in DB_X to ROLE_X.
Somebody dropped DB_X, and later he tried to drop ROLE_X.
But he got errors in PGAdmin.
---
pgAdmin III
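For reference, default privileges of the kind described above are normally created with ALTER DEFAULT PRIVILEGES. A sketch of what ROLE_MAIN presumably ran; the schema and the privilege list are assumptions:

-- while connected to DB_X: every table role_main later creates in schema public
-- is automatically granted to role_x
ALTER DEFAULT PRIVILEGES FOR ROLE role_main IN SCHEMA public
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO role_x;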
On Feb 26, 2014, at 2:59 AM, Tomas Vondra wrote:
> On 26 February 2014, 8:45, john gale wrote:
>
>> munin2=# delete from testruns where ctid = '(37069305,4)';
>> ERROR: tuple concurrently updated
>
> AFAIK this error is raised when a before trigger modifies the row that is
> being deleted. Imagine a BEFORE trigger that updates the row it is about to delete.
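A made-up illustration of the scenario described in the quote: a BEFORE trigger whose function writes to the same row the outer DELETE is removing. Table, column, and function names are invented, and whether this reproduces the exact message can depend on the server version:

CREATE TABLE audit_demo (id int PRIMARY KEY, status text);

CREATE FUNCTION touch_before_delete() RETURNS trigger AS $$
BEGIN
    -- the trigger modifies the very row that is currently being deleted
    UPDATE audit_demo SET status = 'deleting' WHERE id = OLD.id;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_touch_before_delete
    BEFORE DELETE ON audit_demo
    FOR EACH ROW EXECUTE PROCEDURE touch_before_delete();

-- DELETE FROM audit_demo WHERE id = 1;  -- the delete now collides with its own trigger's update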
On 26 February 2014, 8:45, john gale wrote:
>
> Does anybody have any ideas about this?
>
> We restarted the postmaster and the issue persists. So previously in
> 9.0.4 where we could clean corruption, it seems in 9.3.2 we can no longer
> clean corruption. I'm assuming this because our data insert e
Does anybody have any ideas about this?
We restarted the postmaster and the issue persists. So previously in 9.0.4
where we could clean corruption, it seems in 9.3.2 we can no longer clean
corruption. I'm assuming this because our data insert environment has not
changed, so we shouldn't be
We ran into an open file limit on the DB host (Mac OS X 10.9.0, Postgres 9.3.2),
which caused the familiar "ERROR: unexpected chunk number 0 (expected 1) for
toast value 155900302 in pg_toast_16822" when selecting data.
Previously when we've run into this kind of corruption we could find the
specific rows involved and delete them.
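When this has to be tracked down by hand, the usual approach is to read every row and force its TOASTed values to be detoasted, noting which rows raise the error. A rough sketch; "id" and "payload" are placeholders for the real key and large column of testruns:

DO $$
DECLARE
    r record;
BEGIN
    FOR r IN SELECT id FROM testruns LOOP
        BEGIN
            -- casting to text forces the TOASTed value to be read in full
            PERFORM length(payload::text) FROM testruns WHERE id = r.id;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'damaged row: id = %', r.id;
        END;
    END LOOP;
END
$$;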
I have a workaround for the mysterious inability to delete records
from one particular table not notably different from many others.
This does not explain the problem, but at least enables me to move
on... Whether all of the following steps are necessary I can't say.
Instead of loading the 9.3 D
On Thu, 5 Dec 2013, Andy Colson wrote:
On 12/5/2013 4:05 PM, Frank Miles wrote:
The table schema is {\d credmisc}:
And this is all owned by: {\dp credmisc}
You have a table credmisc, in schema credmisc, owned by credmisc?
It could be a path problem. Maybe the trigger should be:
Sorry for the
On Thu, 5 Dec 2013, Andy Colson wrote:
On 12/5/2013 4:05 PM, Frank Miles wrote:
[snip]
Table "public.credmisc"
Column | Type |Modifiers
--+--+-
On 12/5/2013 4:05 PM, Frank Miles wrote:
The table schema is {\d credmisc}:
And this is all owned by: {\dp credmisc}
You have a table credmisc, in schema credmisc, owned by credmisc?
It could be a path problem. Maybe the trigger should be:
trig_credmisc_updt BEFORE UPDATE ON credmisc.credmisc
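If it really is a search_path problem, one way to apply Andy's suggestion is to recreate the trigger with fully schema-qualified names. The function name below is a guess, and the schema may be public rather than credmisc; adjust to the real definitions:

DROP TRIGGER IF EXISTS trig_credmisc_updt ON credmisc.credmisc;
CREATE TRIGGER trig_credmisc_updt
    BEFORE UPDATE ON credmisc.credmisc
    FOR EACH ROW EXECUTE PROCEDURE credmisc.credmisc_updt();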
On 12/5/2013 4:05 PM, Frank Miles wrote:
I'm in the process of moving from a server running postgresql-8.4 (Debian-oldstable)
to a newer machine running postgresql-9.3. The dumpall-restore process seemed to
go perfectly. In running my self-test script, I discovered that one of the tables
couldn't be cleared of some unit-test entries
I'm in the process of moving from a server running postgresql-8.4 (Debian-oldstable)
to a newer machine running postgresql-9.3. The dumpall-restore process seemed to
go perfectly. In running my self-test script, I discovered that one of the tables
couldn't be cleared of some unit-test entries
Alex <[EMAIL PROTECTED]> writes:
> I have a table with 2.5 million records which I am trying to delete. I have
> several constraints on it too.
> I tried to delete the records using DELETE, but it does not seem to work;
> the delete runs forever, for hours...
> I cannot truncate it as it complains about foreign keys.
Not sure about 2.5 million records but try running "VACUUM ANALYSE"
before the delete and during (every now and then).
Had the same problem with 100,000 records and it did the trick nicely.
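One way to put that suggestion into practice, with the delete done in chunks so the vacuum can be repeated in between; the table and key names are placeholders:

VACUUM ANALYZE big_table;                               -- refresh statistics before the big delete
DELETE FROM big_table
WHERE id IN (SELECT id FROM big_table LIMIT 100000);    -- one chunk at a time
VACUUM ANALYZE big_table;                               -- repeat delete + vacuum until done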
Hi,
I have a table with 2.5 million records which I am trying to delete. I have
several constraints on it too.
Hi,
I have a table with 2.5 million records which I am trying to delete. I have
several constraints on it too.
I tried to delete the records using DELETE, but it does not seem to work;
the delete runs forever, for hours...
I cannot truncate it as it complains about foreign keys.
What is the problem?
Thanks
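Two generic notes on the symptoms above, sketched with placeholder names: TRUNCATE refuses to touch a table that other tables reference unless CASCADE is used (which also empties the referencing tables), and a DELETE that runs for hours on a heavily referenced table very often means the referencing columns are not indexed:

-- option 1: truncate the table together with every table that references it
TRUNCATE TABLE big_table CASCADE;

-- option 2: keep DELETE, but index the referencing column first so each
-- foreign-key check is an index lookup instead of a sequential scan
CREATE INDEX ON child_table (big_table_id);
DELETE FROM big_table;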