KaiGai Kohei wrote:
> The attached patch fixed up the cleanup query as follows:
> + appendPQExpBuffer(dquery,
> + "SELECT pg_catalog.lo_unlink(oid) "
> + "FROM pg_catalog.pg_largeobject_metadata "
> + "WHERE oid = %s;\n", binfo->dobj.
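The quoted patch fragment is truncated, so as a rough, self-contained sketch of how such a per-blob cleanup command could be assembled: snprintf stands in for pg_dump's appendPQExpBuffer, and oid_str is an invented stand-in for the OID taken from the truncated binfo->dobj... expression.

```c
#include <stdio.h>

/* Hedged sketch of building the per-blob cleanup query quoted above.
 * snprintf replaces pg_dump's appendPQExpBuffer so the example is
 * self-contained; oid_str is a stand-in for the blob's OID. */
static void
build_blob_cleanup_query(char *buf, size_t buflen, const char *oid_str)
{
    snprintf(buf, buflen,
             "SELECT pg_catalog.lo_unlink(oid) "
             "FROM pg_catalog.pg_largeobject_metadata "
             "WHERE oid = %s;\n",
             oid_str);
}
```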
(2010/02/09 21:18), KaiGai Kohei wrote:
> (2010/02/09 20:16), Takahiro Itagaki wrote:
>>
>> KaiGai Kohei wrote:
>>
I don't think this is necessarily a good idea. We might decide to treat
both things separately in the future, and having them represented
separately in the dump would
(2010/02/09 20:16), Takahiro Itagaki wrote:
KaiGai Kohei wrote:
I don't think this is necessarily a good idea. We might decide to treat
both things separately in the future, and having them represented
separately in the dump would prove useful.
I agree. From a design perspective, the singl
KaiGai Kohei wrote:
> > I don't think this is necessarily a good idea. We might decide to treat
> > both things separately in the future, and having them represented
> > separately in the dump would prove useful.
>
> I agree. From a design perspective, the single section approach is more
> sim
(2010/02/08 22:23), Alvaro Herrera wrote:
Takahiro Itagaki wrote:
KaiGai Kohei wrote:
default: both contents and metadata
--data-only: same
--schema-only: neither
However, it means large objects would be the only object class
that dumps its o
Takahiro Itagaki wrote:
>
> KaiGai Kohei wrote:
>
> > > default: both contents and metadata
> > > --data-only: same
> > > --schema-only: neither
> >
> > However, it means large objects would be the only object class
> > that dumps its owner, acl and comment even if
(2010/02/05 13:53), Takahiro Itagaki wrote:
>
> KaiGai Kohei wrote:
>
>>> default: both contents and metadata
>>> --data-only: same
>>> --schema-only: neither
>>
>> However, it means large objects would be the only object class
>> that dumps its owner, acl and co
KaiGai Kohei wrote:
> > default: both contents and metadata
> > --data-only: same
> > --schema-only: neither
>
> However, it means large objects would be the only object class
> that dumps its owner, acl and comment even if --data-only is given.
> Is it really w
(2010/02/05 3:27), Alvaro Herrera wrote:
Robert Haas wrote:
2010/2/4 KaiGai Kohei:
(2010/02/04 0:20), Robert Haas wrote:
2010/2/1 KaiGai Kohei:
I again wonder whether we are on the right direction.
I believe the proposed approach is to dump blob metadata if and only
if you are also dumpi
Robert Haas wrote:
> 2010/2/4 KaiGai Kohei :
> > (2010/02/04 0:20), Robert Haas wrote:
> >> 2010/2/1 KaiGai Kohei:
> >>> I again wonder whether we are on the right direction.
> >>
> >> I believe the proposed approach is to dump blob metadata if and only
> >> if you are also dumping blob contents
2010/2/4 KaiGai Kohei :
> (2010/02/04 0:20), Robert Haas wrote:
>> 2010/2/1 KaiGai Kohei:
>>> I again wonder whether we are on the right direction.
>>
>> I believe the proposed approach is to dump blob metadata if and only
>> if you are also dumping blob contents, and to do all of this for data
>>
(2010/02/04 17:30), KaiGai Kohei wrote:
> (2010/02/04 0:20), Robert Haas wrote:
>> 2010/2/1 KaiGai Kohei:
>>> I again wonder whether we are on the right direction.
>>
>> I believe the proposed approach is to dump blob metadata if and only
>> if you are also dumping blob contents, and to do all of t
(2010/02/04 0:20), Robert Haas wrote:
> 2010/2/1 KaiGai Kohei:
>> I again wonder whether we are on the right direction.
>
> I believe the proposed approach is to dump blob metadata if and only
> if you are also dumping blob contents, and to do all of this for data
> dumps but not schema dumps. Th
2010/2/1 KaiGai Kohei :
> I again wonder whether we are on the right direction.
I believe the proposed approach is to dump blob metadata if and only
if you are also dumping blob contents, and to do all of this for data
dumps but not schema dumps. That seems about right to me.
> Originally, the r
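The rule described here (blob metadata is dumped if and only if blob contents are, i.e. for default and --data-only runs but not --schema-only) condenses into a tiny predicate; this sketch uses invented flag names, not pg_dump's actual option variables.

```c
#include <stdbool.h>

/* Sketch of the proposed rule: blob contents are dumped unless
 * --schema-only is given, and blob metadata (owner, ACL, comment)
 * follows contents exactly.  Flag names are illustrative only. */
static bool
should_dump_blob_contents(bool schema_only)
{
    return !schema_only;
}

static bool
should_dump_blob_metadata(bool schema_only)
{
    /* metadata if and only if contents */
    return should_dump_blob_contents(schema_only);
}
```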
>>> The --schema-only with large objects might be unnatural, but the
>>> --data-only with properties of large objects are also unnatural.
>>> Which behavior is more unnatural?
>>
>> I think large object metadata is a kind of row-based access controls.
>> How do we dump and restore ACLs per rows whe
(2010/02/02 9:33), Takahiro Itagaki wrote:
>
> KaiGai Kohei wrote:
>
>>> Can we remove such path and raise an error instead?
>>> Also, even if we support the older servers in the routine,
>>> the new bytea format will be another problem anyway.
>>
>> OK, I'll fix it.
>
> I think we might need t
KaiGai Kohei wrote:
> > Can we remove such path and raise an error instead?
> > Also, even if we support the older servers in the routine,
> > the new bytea format will be another problem anyway.
>
> OK, I'll fix it.
I think we might need to discuss explicit version checks in pg_restore.
(2010/02/01 14:19), Takahiro Itagaki wrote:
> As far as I read, the patch is almost ready to commit
> except the following issue about backward compatibility:
>
>> * "BLOB DATA"
>> This section is same as existing "BLOBS" section, except for _LoadBlobs()
>> does not create a new large object befor
KaiGai Kohei wrote:
> The attached patch uses one TOC entry for each blob object.
This patch not only fixes the existing bugs, but also refactors the
dump format of large objects in pg_dump. The new format is more
similar to the format of tables:
Section
-
KaiGai Kohei wrote:
> > When I'm testing the new patch, I found "ALTER LARGE OBJECT" command
> > returns "ALTER LARGEOBJECT" tag. Should it be "ALTER LARGE(space)OBJECT"
> > instead?
>
> Sorry, I left for fix this tag when I was pointed out LARGEOBJECT should
> be LARGE(space)OBJECT.
Committed
(2010/01/28 18:21), Takahiro Itagaki wrote:
>
> KaiGai Kohei wrote:
>
>> The attached patch uses one TOC entry for each blob object.
>
> When I'm testing the new patch, I found "ALTER LARGE OBJECT" command
> returns "ALTER LARGEOBJECT" tag. Should it be "ALTER LARGE(space)OBJECT"
> instead? A
Takahiro Itagaki writes:
> When I'm testing the new patch, I found "ALTER LARGE OBJECT" command
> returns "ALTER LARGEOBJECT" tag. Should it be "ALTER LARGE(space)OBJECT"
> instead? As I remember, we had decided not to use LARGEOBJECT
> (without a space) in user-visible messages, right?
The comm
KaiGai Kohei wrote:
> The attached patch uses one TOC entry for each blob object.
When I'm testing the new patch, I found "ALTER LARGE OBJECT" command
returns "ALTER LARGEOBJECT" tag. Should it be "ALTER LARGE(space)OBJECT"
instead? As I remember, we had decided not to use LARGEOBJECT
(withou
The attached patch uses one TOC entry for each blob object.
It adds two new section types.
* "BLOB ITEM"
This section provides properties of a certain large object.
It contains a query to create an empty large object, and restore
ownership of the large object, if necessary.
| --
| -- Name: 1
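To illustrate what a "BLOB ITEM" entry restores, here is a hedged sketch that formats the two commands such a section would contain. lo_create() and ALTER LARGE OBJECT ... OWNER TO are real server-side commands, but the helper function and its arguments are invented for the example.

```c
#include <stdio.h>

/* Sketch of the restore commands a per-blob "BLOB ITEM" entry would
 * carry: create an empty large object with the original OID, then
 * restore its ownership.  The helper is hypothetical; only the SQL
 * command names come from the discussion above. */
static void
build_blob_item_commands(char *buf, size_t buflen,
                         const char *oid_str, const char *owner)
{
    snprintf(buf, buflen,
             "SELECT pg_catalog.lo_create('%s');\n"
             "ALTER LARGE OBJECT %s OWNER TO %s;\n",
             oid_str, oid_str, owner);
}
```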
"Kevin Grittner" writes:
> "Kevin Grittner" wrote:
>> Tom Lane wrote:
>>> Did you happen to notice anything about pg_dump's memory
>>> consumption?
> I took a closer look, and there's some bad news, I think. The above
> numbers were from the ends of the range. I've gone back over and
> found
"Kevin Grittner" wrote:
> Tom Lane wrote:
>
>> Did you happen to notice anything about pg_dump's memory
>> consumption?
>
> Not directly, but I was running 'vmstat 1' throughout. Cache
> space dropped about 2.1 GB while it was running and popped back up
> to the previous level at the end.
Tom Lane wrote:
> Did you happen to notice anything about pg_dump's memory
> consumption?
Not directly, but I was running 'vmstat 1' throughout. Cache space
dropped about 2.1 GB while it was running and popped back up to the
previous level at the end.
-Kevin
--
Sent via pgsql-hackers mail
"Kevin Grittner" writes:
> Tom Lane wrote:
>> It might be better to try a test case with lighter-weight objects,
>> say 5 million simple functions.
> Said dump ran in about 45 minutes with no obvious stalls or
> problems. The 2.2 GB database dumped to a 1.1 GB text file, which
> was a little
Tom Lane wrote:
> It might be better to try a test case with lighter-weight objects,
> say 5 million simple functions.
Said dump ran in about 45 minutes with no obvious stalls or
problems. The 2.2 GB database dumped to a 1.1 GB text file, which
was a little bit of a surprise.
-Kevin
Tom Lane wrote:
> It might be better to try a test case with lighter-weight objects,
> say 5 million simple functions.
A dump of that quickly settled into running a series of these:
SELECT proretset, prosrc, probin,
pg_catalog.pg_get_function_arguments(oid) AS funcargs,
pg_catalog.pg_get_fun
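A generator for the "5 million simple functions" experiment could look like the following sketch; the naming scheme (f0, f1, ...) and the SQL bodies are made up, and the output is meant to be piped into psql.

```c
#include <stdio.h>

/* Sketch of generating N trivial functions for the pg_dump scaling
 * test discussed above; each statement creates one SQL function.
 * Function names are invented for the example. */
static void
emit_simple_functions(FILE *out, long n)
{
    for (long i = 0; i < n; i++)
        fprintf(out,
                "CREATE FUNCTION f%ld() RETURNS int "
                "LANGUAGE sql AS 'SELECT %ld';\n",
                i, i);
}
```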
"Kevin Grittner" writes:
> Tom Lane wrote:
>> It might be better to try a test case with lighter-weight objects,
>> say 5 million simple functions.
> So the current database is expendable?
Yeah, I think it was a bad experimental design anyway...
regards, tom lane
Tom Lane wrote:
> It might be better to try a test case with lighter-weight objects,
> say 5 million simple functions.
So the current database is expendable? I'd just as soon delete it
before creating the other one, if you're fairly confident the other
one will do it.
-Kevin
"Kevin Grittner" writes:
> I'm afraid pg_dump didn't get very far with this before:
> pg_dump: WARNING: out of shared memory
> pg_dump: SQL command failed
> Given how fast it happened, I suspect that it was 2672 tables into
> the dump, versus 26% of the way through 5.5 million tables.
Yeah,
Tom Lane wrote:
> I'm not so worried about the amount of RAM needed as whether
> pg_dump's internal algorithms will scale to large numbers of TOC
> entries. Any O(N^2) behavior would be pretty painful, for
> example. No doubt we could fix any such problems, but it might
> take more work than w
KaiGai Kohei writes:
> (2010/01/23 5:12), Tom Lane wrote:
>> Now the argument against that is that it won't scale terribly well
>> to situations with very large numbers of blobs.
> Even if the database contains a massive number of large objects, all that
> pg_dump has to manage in RAM is its metadat
"Kevin Grittner" writes:
> ... After a few minutes that left me curious just how big
> the database was, so I tried:
> select pg_size_pretty(pg_database_size('test'));
> I did a Ctrl+C after about five minutes and got:
> Cancel request sent
> but it didn't return for 15 or 20 minutes.
Hm
"Kevin Grittner" wrote:
> So I'm not sure whether I can get to a state suitable for starting
> the desired test, but I'll stay with it for a while.
I have other commitments today, so I'm going to leave the VACUUM
ANALYZE running and come back tomorrow morning to try the pg_dump.
-Kevin
Tom Lane wrote:
> "Kevin Grittner" writes:
>> Tom Lane wrote:
>>> Do you have the opportunity to try an experiment on hardware
>>> similar to what you're running that on? Create a database with
>>> 7 million tables and see what the dump/restore times are like,
>>> and whether pg_dump/pg_restor
(2010/01/23 5:12), Tom Lane wrote:
> KaiGai Kohei writes:
>> The attached patch is a revised version.
>
> I'm inclined to wonder whether this patch doesn't prove that we've
> reached the end of the line for the current representation of blobs
> in pg_dump archives. The alternative that I'm think
"Kevin Grittner" wrote:
> I'll get started.
After a couple false starts, the creation of the millions of tables
is underway. At the rate it's going, it won't finish for 8.2 hours,
so I'll have to come in and test the dump tomorrow morning.
-Kevin
Tom Lane wrote:
> Empty is fine.
I'll get started.
-Kevin
"Kevin Grittner" writes:
> Tom Lane wrote:
>> Do you have the opportunity to try an experiment on hardware
>> similar to what you're running that on? Create a database with 7
>> million tables and see what the dump/restore times are like, and
>> whether pg_dump/pg_restore appear to be CPU-bound
Tom Lane wrote:
> Do you have the opportunity to try an experiment on hardware
> similar to what you're running that on? Create a database with 7
> million tables and see what the dump/restore times are like, and
> whether pg_dump/pg_restore appear to be CPU-bound or
> memory-limited when doing
"Kevin Grittner" writes:
> Tom Lane wrote:
>> We've heard of people with many tens of thousands of
>> tables, and pg_dump speed didn't seem to be a huge bottleneck for
>> them (at least not in recent versions). So I'm feeling we should
>> not dismiss the idea of one TOC entry per blob.
>>
>> Th
Tom Lane wrote:
> Now the argument against that is that it won't scale terribly well
> to situations with very large numbers of blobs. However, I'm not
> convinced that the current approach of cramming them all into one
> TOC entry scales so well either. If your large objects are
> actually la
KaiGai Kohei writes:
> The attached patch is a revised version.
I'm inclined to wonder whether this patch doesn't prove that we've
reached the end of the line for the current representation of blobs
in pg_dump archives. The alternative that I'm thinking about is to
treat each blob as an independ
The attached patch is a revised version.
List of updates:
- cleanup: getBlobs() was renamed to getBlobOwners()
- cleanup: BlobsInfo was renamed to BlobOwnerInfo
> - bugfix: pg_get_userbyid() in the SQL queries was replaced by username_subquery, which
> contains the right subquery to obtain a username for
(2010/01/21 19:42), Takahiro Itagaki wrote:
>
> KaiGai Kohei wrote:
>
>>> I'm not sure whether we need to make groups for each owner of large objects.
>>> If I remember right, the primary issue was separating the routines that dump
>>> BLOB ACLS from the routines for BLOB COMMENTS, right? Why did you mak
KaiGai Kohei wrote:
> > I'm not sure whether we need to make groups for each owner of large objects.
> > If I remember right, the primary issue was separating the routines that dump
> > BLOB ACLS from the routines for BLOB COMMENTS, right? Why did you make the
> > change?
>
> When --use-set-session-aut
(2010/01/21 16:52), Takahiro Itagaki wrote:
>
> KaiGai Kohei wrote:
>
>> This patch renamed hasBlobs() to getBlobs(), and changed its
>> purpose. It registers DO_BLOBS, DO_BLOB_COMMENTS and DO_BLOB_ACLS
>> for each large object owner, if necessary.
>
> This patch adds DumpableObjectType D
KaiGai Kohei wrote:
> This patch renamed hasBlobs() to getBlobs(), and changed its
> purpose. It registers DO_BLOBS, DO_BLOB_COMMENTS and DO_BLOB_ACLS
> for each large object owner, if necessary.
This patch adds DumpableObjectType DO_BLOB_ACLS and struct BlobsInfo. We
use three BlobsInfo
2009/12/22 KaiGai Kohei :
> (2009/12/21 9:39), KaiGai Kohei wrote:
>> (2009/12/19 12:05), Robert Haas wrote:
>>> On Fri, Dec 18, 2009 at 9:48 PM, Tom Lane wrote:
Robert Haas writes:
> Oh. This is more complicated than it appeared on the surface. It
> seems that the string "BLOB C
(2009/12/21 9:39), KaiGai Kohei wrote:
> (2009/12/19 12:05), Robert Haas wrote:
>> On Fri, Dec 18, 2009 at 9:48 PM, Tom Lane wrote:
>>> Robert Haas writes:
Oh. This is more complicated than it appeared on the surface. It
seems that the string "BLOB COMMENTS" actually gets inserted i
(2009/12/19 12:05), Robert Haas wrote:
> On Fri, Dec 18, 2009 at 9:48 PM, Tom Lane wrote:
>> Robert Haas writes:
>>> Oh. This is more complicated than it appeared on the surface. It
>>> seems that the string "BLOB COMMENTS" actually gets inserted into
>>> custom dumps somewhere, so I'm not sure
On Fri, Dec 18, 2009 at 9:51 PM, Tom Lane wrote:
> Robert Haas writes:
>> Part of what I'm confused about (and what I think should be documented
>> in a comment somewhere) is why we're using MVCC visibility in some
>> places but not others. In particular, there seem to be some bits of
>> the com
On Fri, Dec 18, 2009 at 9:48 PM, Tom Lane wrote:
> Robert Haas writes:
>> Oh. This is more complicated than it appeared on the surface. It
>> seems that the string "BLOB COMMENTS" actually gets inserted into
>> custom dumps somewhere, so I'm not sure whether we can just change it.
>> Was this
Robert Haas writes:
> Part of what I'm confused about (and what I think should be documented
> in a comment somewhere) is why we're using MVCC visibility in some
> places but not others. In particular, there seem to be some bits of
> the comment that imply that we do this for read but not for wri
Robert Haas writes:
> Oh. This is more complicated than it appeared on the surface. It
> seems that the string "BLOB COMMENTS" actually gets inserted into
> custom dumps somewhere, so I'm not sure whether we can just change it.
> Was this issue discussed at some point before this was committed?
On Fri, Dec 18, 2009 at 1:48 AM, Takahiro Itagaki wrote:
>> In both cases, I'm lost. Help?
>
> They might be contrasted with the comments for myLargeObjectExists.
> Since we use MVCC visibility in loread(), metadata for large objects
> should also be visible under MVCC rules.
>
> If I understand them,
On Fri, Dec 18, 2009 at 9:00 AM, Robert Haas wrote:
> 2009/12/18 KaiGai Kohei :
>> (2009/12/18 15:48), Takahiro Itagaki wrote:
>>>
>>> Robert Haas wrote:
>>>
In both cases, I'm lost. Help?
>>>
>>> They might be contrasted with the comments for myLargeObjectExists.
>>> Since we use MVCC visi
2009/12/18 KaiGai Kohei :
> (2009/12/18 15:48), Takahiro Itagaki wrote:
>>
>> Robert Haas wrote:
>>
>>> In both cases, I'm lost. Help?
>>
>> They might be contrasted with the comments for myLargeObjectExists.
>> Since we use MVCC visibility in loread(), metadata for large objects
>> should also be
(2009/12/18 15:48), Takahiro Itagaki wrote:
>
> Robert Haas wrote:
>
>> In both cases, I'm lost. Help?
>
> They might be contrasted with the comments for myLargeObjectExists.
> Since we use MVCC visibility in loread(), metadata for large objects
> should also be visible under MVCC rules.
>
> If I
Robert Haas wrote:
> In both cases, I'm lost. Help?
They might be contrasted with the comments for myLargeObjectExists.
Since we use MVCC visibility in loread(), metadata for large objects
should also be visible under MVCC rules.
If I understand them, they say:
* pg_largeobject_aclmask_snapshot
On Thu, Dec 17, 2009 at 7:27 PM, Takahiro Itagaki wrote:
>> > Another comment is I'd like to keep <xref linkend="catalog-pg-largeobject-metadata">
>> > for the first pg_largeobject in each topic.
>> Those two things aren't the same. Perhaps you meant <xref linkend="catalog-pg-largeobject">?
> Oops, yes
Robert Haas wrote:
> > Another comment is I'd like to keep <xref linkend="catalog-pg-largeobject-metadata">
> > for the first pg_largeobject in each topic.
>
> Those two things aren't the same. Perhaps you meant <xref linkend="catalog-pg-largeobject">?
Oops, yes. Thank you for the correction.
We als
2009/12/17 Takahiro Itagaki :
>
> Robert Haas wrote:
>
>> 2009/12/16 KaiGai Kohei :
>>    long desc: When turned on, privilege checks on large objects perform with
>>               backward compatibility as 8.4.x or earlier releases.
>
>> Mostly English quality, but there are so
Robert Haas wrote:
> 2009/12/16 KaiGai Kohei :
>    long desc: When turned on, privilege checks on large objects perform with
>               backward compatibility as 8.4.x or earlier releases.
> Mostly English quality, but there are some other issues too. Proposed
> patch a
(2009/12/17 13:20), Robert Haas wrote:
> 2009/12/16 KaiGai Kohei:
>> (2009/12/17 7:25), Robert Haas wrote:
>>> On Thu, Dec 10, 2009 at 10:41 PM, Takahiro Itagaki wrote:
KaiGai Kohei wrote:
> What's your opinion about:
> long desc: When turned on, privilege chec
2009/12/16 KaiGai Kohei :
> (2009/12/17 7:25), Robert Haas wrote:
>> On Thu, Dec 10, 2009 at 10:41 PM, Takahiro Itagaki wrote:
>>>
>>> KaiGai Kohei wrote:
>>>
What's your opinion about:
long desc: When turned on, privilege checks on large objects perform
with
(2009/12/17 7:25), Robert Haas wrote:
> On Thu, Dec 10, 2009 at 10:41 PM, Takahiro Itagaki wrote:
>>
>> KaiGai Kohei wrote:
>>
>>> What's your opinion about:
>>> long desc: When turned on, privilege checks on large objects perform with
>>> backward compatibility as 8.4.x or ea
On Thu, Dec 10, 2009 at 10:41 PM, Takahiro Itagaki wrote:
>
> KaiGai Kohei wrote:
>
>> What's your opinion about:
>> long desc: When turned on, privilege checks on large objects perform with
>> backward compatibility as 8.4.x or earlier releases.
>
> I updated the description as yo
KaiGai Kohei wrote:
> We no longer have any reason to keep the CASE ... WHEN and subquery for the given
> LOID. Right?
Ah, I see. I used your suggestion.
I applied the bug fixes. Our tools and contrib modules will always use
pg_largeobject_metadata instead of pg_largeobject to enumerate large objects
KaiGai Kohei wrote:
> > What happens when
> > there is no entry in pg_largeobject_metadata for a specific row?
>
> In this case, these rows become orphans.
> So, I think we need to create an empty large object with the same LOID in
> pg_migrator. It makes an entry in pg_largeobject_metadata without
> w
Takahiro Itagaki wrote:
KaiGai Kohei wrote:
We have to reference pg_largeobject_metadata to check whether a certain
large object exists or not.
It is the case where we create a new large object but write nothing.
OK, that makes sense.
In addition to the patch, we also need to fix pg_rest
Bruce Momjian wrote:
KaiGai Kohei wrote:
Takahiro Itagaki wrote:
KaiGai Kohei wrote:
Tom Lane wrote:
Takahiro Itagaki writes:
pg_largeobject should not be readable by the
public, since the catalog contains data in large objects of all users.
This is going to be a problem, becaus
KaiGai Kohei wrote:
> >>> We use "SELECT loid FROM pg_largeobject LIMIT 1" in pg_dump. We could
> >>> replace it with pg_largeobject_metadata if we try to fix only pg_dump,
> >>> but it's likely that other user applications use such queries.
> >>> I think allowing loid to be read is a balanced
KaiGai Kohei wrote:
> Takahiro Itagaki wrote:
> > KaiGai Kohei wrote:
> >
> >> Tom Lane wrote:
> >>> Takahiro Itagaki writes:
> pg_largeobject should not be readable by the
> public, since the catalog contains data in large objects of all users.
> >>> This is going to be a proble
Takahiro Itagaki wrote:
> In addition to the patch, we also need to fix pg_restore with the
> --clean option. I added DropBlobIfExists() in pg_backup_db.c.
>
> A revised patch attached. Please check further mistakes.
...and here is an additional fix for contrib modules.
Regards,
---
Takahiro Ita
KaiGai Kohei wrote:
> >> We have to reference pg_largeobject_metadata to check whether a certain
> >> large object exists or not.
> It is the case where we create a new large object but write nothing.
OK, that makes sense.
In addition to the patch, we also need to fix pg_restore with
--clean
Takahiro Itagaki wrote:
> KaiGai Kohei wrote:
>
>> The attached patch fixes these matters.
>
> I'll start to check it.
Thanks,
>> We have to reference pg_largeobject_metadata to check whether a certain
>> large object exists or not.
>
> What is the situation where there is a row in pg_lar
KaiGai Kohei wrote:
> The attached patch fixes these matters.
I'll start to check it.
> We have to reference pg_largeobject_metadata to check whether a certain
> large object exists or not.
What is the situation where there is a row in pg_largeobject_metadata
and no corresponding rows in
KaiGai Kohei wrote:
> Takahiro Itagaki wrote:
>> KaiGai Kohei wrote:
>>
>>> Tom Lane wrote:
Takahiro Itagaki writes:
>pg_largeobject should not be readable by the
>public, since the catalog contains data in large objects of all users.
This is going to be a problem, becau
Jaime Casanova wrote:
> besides, if a normal user can read from pg_class, why do we deny pg_largeobject?
pg_class and pg_largeobject_metadata contain only metadata of objects.
Tables and pg_largeobject contain actual data of the objects. A normal user
can read pg_class, but cannot read contents of ta
Takahiro Itagaki wrote:
> KaiGai Kohei wrote:
>
>> Tom Lane wrote:
>>> Takahiro Itagaki writes:
pg_largeobject should not be readable by the
public, since the catalog contains data in large objects of all users.
>>> This is going to be a problem, because it will break application
KaiGai Kohei wrote:
> Tom Lane wrote:
> > Takahiro Itagaki writes:
> >>pg_largeobject should not be readable by the
> >>public, since the catalog contains data in large objects of all users.
> >
> > This is going to be a problem, because it will break applications that
> > expect to be
2009/12/10 KaiGai Kohei :
>
> If so, we can inject a hardwired rule to prevent selecting from pg_largeobject
> when lo_compat_privileges is turned off, instead of REVOKE ALL FROM PUBLIC.
>
it doesn't seem like a good idea to make that GUC act like a GRANT or
REVOKE in the case of pg_largeobject.
beside
Tom Lane wrote:
> Takahiro Itagaki writes:
>> OK, I'll add the following description in the documentation of
>> pg_largeobject.
>
>>pg_largeobject should not be readable by the
>>public, since the catalog contains data in large objects of all users.
>
> This is going to be a problem, be
Takahiro Itagaki writes:
> OK, I'll add the following description in the documentation of pg_largeobject.
>pg_largeobject should not be readable by the
>public, since the catalog contains data in large objects of all users.
This is going to be a problem, because it will break application
KaiGai Kohei wrote:
> What's your opinion about:
> long desc: When turned on, privilege checks on large objects perform with
> backward compatibility as 8.4.x or earlier releases.
I updated the description as you suggested.
Applied with minor editorialization,
mainly around tab-c
Takahiro Itagaki wrote:
> KaiGai Kohei wrote:
>
>>> we still allow "SELECT * FROM pg_largeobject" ...right?
>> It can be solved by revoking all privileges from everybody in the initdb
>> phase. So, we should inject the following statement into setup_privileges().
>>
>> REVOKE ALL ON pg_largeobje
KaiGai Kohei wrote:
> > we still allow "SELECT * FROM pg_largeobject" ...right?
>
> It can be solved by revoking all privileges from everybody in the initdb
> phase. So, we should inject the following statement into setup_privileges().
>
> REVOKE ALL ON pg_largeobject FROM PUBLIC;
OK, I'll a
Takahiro Itagaki wrote:
> Hi, I'm reviewing LO-AC patch.
>
> KaiGai Kohei wrote:
>> Nothing is changed in the other code, including anything corresponding to
>> in-place upgrading. I'm waiting for suggestions.
>
> I have a question about the behavior -- the patch adds ownership
> management of lar
Hi, I'm reviewing LO-AC patch.
KaiGai Kohei wrote:
> Nothing is changed in the other code, including anything corresponding to
> in-place upgrading. I'm waiting for suggestions.
I have a question about the behavior -- the patch adds ownership
management of large objects. Non-privileged users canno
95 matches