recovery_target = 'immediate'
So that's basically what you are looking for. Now you are on 9.2, and
new features are not back-ported.
--
Michael
information on this thread.
--
Michael
/* ALTER TABLE pg_temp.testing123 REPLICA IDENTITY FULL; */
UPDATE testing123 SET value = 2;
*Michael Lewis*
Did you update the stats by running ANALYZE on the tables involved, or
perhaps the entire database on the 'Non prod Aurora RDS instance'? Can you
share the two execution plans?
*Michael Lewis*
On Tue, Feb 12,
analyze asset_info_2019_2_part4;
analyze asset_info_2019_2_part2;
etc.? If the data are very similar, the indexes all exist, and
default_statistics_target is the same, then you should be getting the same
plans.
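A minimal sketch of the suggested check (partition names from the thread;
SHOW is run on both instances for comparison):

-- Refresh planner statistics on each partition involved.
ANALYZE asset_info_2019_2_part2;
ANALYZE asset_info_2019_2_part4;

-- Compare this on both instances; differing values change the plans.
SHOW default_statistics_target;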
*Michael Lewis*
On Wed, Feb 13, 2019 a
You don't need an fkey to write a select statement with a join, so I think I
must be missing something. Do you want it to enforce integrity across
the dblink? Or are you adding an fkey on the assumption that you will get
an index?
*Michael Lewis*
Ah. I didn't realize PostgREST was something real, rather than just a typo. An
fkey to a foreign table is not supported.
Related:
https://dba.stackexchange.com/questions/138591/foreign-key-references-constraint-on-postgresql-foreign-data-wrapper
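A minimal sketch of the limitation (hypothetical names, server setup
omitted; the exact error text may vary by version):

-- A foreign table served by some FDW (hypothetical server name).
CREATE FOREIGN TABLE remote_accounts (
    id integer NOT NULL
) SERVER my_fdw_server;

CREATE TABLE orders (
    id         serial PRIMARY KEY,
    account_id integer
);

-- Rejected: foreign keys may only reference plain tables.
ALTER TABLE orders
    ADD CONSTRAINT orders_account_fk
    FOREIGN KEY (account_id) REFERENCES remote_accounts (id);
-- ERROR:  referenced relation "remote_accounts" is not a table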
*Michael Lewis*
https://wiki.postgresql.org/wiki/SlowQueryQuestions
*Michael Lewis*
On Thu, Feb 14, 2019 at 8:48 AM github kran wrote:
>
>
> On Wed, Feb 13, 2019 at 11:38 AM Michael Lewis wrote:
>
>> I didn't see your email yesterday, sorry abo
ld be ignored by normal
processes.
Glad you got your issue resolved.
*Michael Lewis*
On Thu, Feb 14, 2019 at 3:11 PM github kran wrote:
>
>
> On Thu, Feb 14, 2019 at 12:43 PM Michael Lewis wrote:
>
>> How many total rows in these tables? I am assuming these are partitions
This function has been added in OpenSSL 1.0.2, so it seems to me that
you have an OpenSSL version mismatch between your client and the
server. My guess is that the client uses OpenSSL 1.0.2, but the
server is linked to OpenSSL 1.0.1 or older.
(Note: I am not seeing anything bad in the code.)
--
Michael
as
EIO. Just looking at the code for data_sync_retry we should really
have some errno filtering.
--
Michael
failovers. Let me
guess: you stop the standby, delete its recovery.conf and then restart
the former standby? This would prevent a timeline jump at promotion
which would explain the conflicts you are seeing when archiving the
same segment twice.
--
Michael
able to test things in an environment that performs significantly
differently.
*Michael Lewis*
On Sun, Feb 17, 2019 at 10:01 AM github kran wrote:
>
>
> On Thu, Feb 14, 2019 at 4:58 PM Michael Lewis wrote:
>
>> This is beyond my expertise except to say that if your storage is SSDs in
tification ON
public.user_event USING btree ((parameters ->> 'suggestion_id'), what) WHERE
parameters ? 'suggestion_id';
-Michael
>
> On Wed, 20 Feb 2019 at 10:14, Tom Lane wrote:
>
>> Samuel Williams writes:
>> > When I do this query:
>>
>> > EXPLAIN SELECT COUNT(*) FROM "user_event" WHERE ((parameters ->>
>> > 'suggestion_id'::text)::integer = 26) AND what =
>> 'suggestion_notification';
>>
>> > It's slow. I need to expli
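For the partial expression index above to be considered, the query has to
reference the indexed expression as-is (comparing as text, without the
integer cast) and repeat the index predicate; a sketch:

EXPLAIN
SELECT count(*)
FROM public.user_event
WHERE (parameters ->> 'suggestion_id') = '26'   -- matches the indexed expression
  AND what = 'suggestion_notification'
  AND parameters ? 'suggestion_id';             -- matches the index predicate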
IFICATE_HASH.
Good catch! Indeed that's not a good idea. What do you think about
the attached patch to fix the issue?
--
Michael
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 08a5a9c1f3..4bb529ba3b 100644
--- a/src/interfaces/libpq/fe-auth.c
+++ b/src/interfaces/libpq/fe-auth.c
On Thu, Feb 21, 2019 at 08:32:01PM +0100, Peter Eisentraut wrote:
> On 2019-02-21 05:47, Michael Paquier wrote:
>> if (conn->ssl_in_use)
>> +{
>> +/*
>> + * The server
On Thu, Feb 21, 2019 at 09:14:24PM -0800, Adrian Klaver wrote:
> This would be a question for AWS RDS support.
And this also depends a lot on your schema, your column alignment, and
the level of bloat of your relations.
--
Michael
te a compilation
with OpenSSL 1.0.1 features and older, while still linking with
1.0.2.
If you want to test the patch and check by yourself, that's of course
fine by me. Just let me know when you are done and if you think the
patch is good for commit.
--
Michael
On Wed, Feb 27, 2019 at 7:56 AM Jeremy Finzel wrote:
> I was hoping to use idle_in_transaction_session_timeout to prevent schema
> change migrations from running too long and thereby locking up the
> application for an extended period even if any one statement in the
> migration is very short.
>
>
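A sketch of scoping such limits to just the migration session rather than
globally (values are placeholders):

-- Session-local settings for a schema-change migration.
SET lock_timeout = '5s';                          -- give up waiting on locks quickly
SET statement_timeout = '60s';                    -- cap any single statement
SET idle_in_transaction_session_timeout = '30s';  -- kill stalled open transactions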
> If those 50-100 connections are all active at once, yes, that is high.
> They can easily spend more time fighting each other over LWLocks,
> spinlocks, or cachelines rather than doing useful work. This can be
> exacerbated when you have multiple sockets rather than all cores in a
> single sock
On Wed, Feb 27, 2019 at 10:21:00AM +0100, Peter Eisentraut wrote:
> On 2019-02-26 23:35, Michael Paquier wrote:
>> What I do in such cases is to compile OpenSSL by myself and link
>> Postgres to it, here is a command to build shared libraries (all that
>> is documented in
On Wed, Feb 27, 2019 at 10:39:10AM -0800, Stephen Eilert wrote:
> Are you running Vacuum on the slave node? It has to run on the master.
VACUUM generates write activity as well, so it has to be restricted.
ANALYZE can work though.
--
Michael
>
> Yeah, because it's an exact datatype match while the core operator
> is anyarray && anyarray which is not.
Can you dumb down how to change the index or column type so that an index
will be used for the && operator while the intarray extension is installed? We
have the intarray extension installed
On Thu, Feb 28, 2019 at 3:34 PM Tom Lane wrote:
> Michael Lewis writes:
> > Can you dumb down how to change the index or column type such that an
> index
> > will be used for the && operator while intarray extension is installed?
> We
> > have the intarray ext
On Thu, Feb 28, 2019 at 4:57 PM Ron wrote:
> On 2/28/19 4:53 PM, Michael Lewis wrote:
> [snip]
>
> Would a sixth option be to re-create the column as array type
>
>
> Codd is spinning in his grave...
>
I'd hope he would be fine with people asking questions to learn.
>
> Arrays are -- by definition -- not atomic, and so they fundamentally break
> the model that relational databases are founded upon. If you want to be a
> good database designer, don't use arrays.
>
Thanks. I was reading about Codd after your last email, but couldn't guess
at which point was objectionable.
I'll try to stay off your lawn.
>
in the
core code, and we have folks interested in it (just committed a patch
to fix a rather old problem with the MSVC port 30 minutes ago).
--
Michael
On Mon, Mar 11, 2019 at 6:32 AM Sonam Sharma wrote:
> Hi All,
>
> We are planning to migrate our database into any open source DB.
> Can someone please help me in knowing which one will be better among
> POSTGRESQL and MYSQL.
>
> In what terms postgres is better than MYSQL.
>
> Regards,
> Sonam
>
On Mon, Mar 11, 2019 at 2:20 PM Gavin Flower
wrote:
> On 12/03/2019 05:35, Michael Nolan wrote:
> [...]
> > MySQL is better at isolating users from each other and requires less
> > expertise to administer.
>
> [...]
>
> I keep reading that MySQL is easier to
The MySQL manual says that INNODB 'adheres closely' to the ACID model,
though there are settings where you can trade some ACID compliance for
performance.
See https://dev.mysql.com/doc/refman/5.6/en/mysql-acid.html
I've been running PostgreSQL for a client since 2005; we're on our 5th
hardware platform.
archive_command = 'test ! -f /archives/wal/%f && gzip < %p >
/archives/wal/%f'
archive_timeout = 15min
Regards,
Michael Cassaniti
On Thu, Mar 14, 2019 at 02:59:38PM +1100, Michael Cassaniti wrote:
> I've got master/slave replication setup between a few hosts. At any
> point a slave could become a master. I've got appropriate locking in
> place using an external system so that only one master can exist at a
On 14/3/19 3:10 pm, Michael Paquier wrote:
> On Thu, Mar 14, 2019 at 02:59:38PM +1100, Michael Cassaniti wrote:
>> I've got master/slave replication setup between a few hosts. At any
>> point a slave could become a maste
On 14/3/19 5:15 pm, Michael Cassaniti wrote:
> On 14/3/19 3:10 pm, Michael Paquier wrote:
>> On Thu, Mar 14, 2019 at 02:59:38PM +1100, Michael Cassaniti wrote:
>>> I've got master/slave replication setup between a few hos
>
> *autovacuum_analyze_threshold*
> *autovacuum_analyze_scale_factor*
>
Changing these will impact how often the table is analyzed based on the
rough count of changed rows. You may want to adjust autovacuum settings as
well so that dead space can be reused.
> *default_statistics_target*
>
Increasing this will make ANALYZE collect more detailed statistics.
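A sketch of the per-table form of the autovacuum/analyze knobs above
(hypothetical table name and values):

-- Analyze after 1000 changed rows plus 1% of the table, and let
-- autovacuum reclaim dead space more eagerly on this hot table.
ALTER TABLE my_busy_table SET (
    autovacuum_analyze_threshold    = 1000,
    autovacuum_analyze_scale_factor = 0.01,
    autovacuum_vacuum_scale_factor  = 0.05
);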
>
> On Fri, Mar 15, 2019 at 10:55 AM basti
> wrote:
>
>> Hello,
>>
>> I want to insert data into table only if condition is true.
>> For example:
>>
>> INSERT into mytable (domainid, hostname, txtdata)
>> VALUES (100,'_acme.challenge.example', 'somedata');
>>
>
An alternative to a trigger implementation:
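A sketch, treating "only if condition is true" as a not-exists guard (the
condition itself is an assumed example):

-- Insert only when no row for that domainid exists yet.
INSERT INTO mytable (domainid, hostname, txtdata)
SELECT 100, '_acme.challenge.example', 'somedata'
WHERE NOT EXISTS (
    SELECT 1 FROM mytable WHERE domainid = 100
);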
ds that are within this WAL segment".
--
Michael
e larger
retention policies in the archives.
--
Michael
m on all databases.
Only one command will be effective for all databases.
--
Michael
gin of what you think is a problem? So, say, if you issue a
checkpoint again, don't you see 00010CEA00B1 going away?
In Postgres 11, only one checkpoint's worth of WAL segments is kept
around.
--
Michael
ise through our various channels.
Need a hand? Not sure if I am reputable enough though :)
By the way, it could be the occasion to consider an official
PostgreSQL blog on the main website. News items are not really a format
suited to problem analysis or to entering into technical details.
--
Michael
"Sometimes a table's usage pattern involves much more updates than
inserts, which gradually uses more and more unused space that is never
used again by postgres, and plain autovacuuming doesn't return it to the
OS."
Can you expound on that? I thought that was exactly what autovacuum did for
old row versions.
>
> Is there a way to tell Postgres “please don’t use index X when queries
> that could use index Y instead occur?”
>
No. But you could re-write the query to make the date index useless. The
simplest way that comes to mind is putting the query that does your
full-text search in a CTE (WITH keyword).
Thanks for that advance warning since it is a handy option to force the
planning barrier in my experience. What's a resource to see other coming
changes in v12, especially changes to default behavior like this? Will there
be a new cte_collapse_limit setting or similar?
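There is no such setting; from v12 the fence is spelled explicitly
instead. A sketch with a hypothetical table:

-- v12+: MATERIALIZED restores the pre-12 optimization fence.
-- Without it, a side-effect-free CTE may be inlined into the outer query.
WITH fts AS MATERIALIZED (
    SELECT id
    FROM documents
    WHERE tsv @@ to_tsquery('english', 'postgres & index')
)
SELECT d.*
FROM documents d
JOIN fts USING (id);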
We are talking here about Windows Server 2019, which
was released by Microsoft recently. VS2019 is a different
thing.
> I'd be somewhat surprised if it didn't just work however.
Agreed.
--
Michael
>
> vacuum frees tuples just fine. It's just that by the time each run
> finishes, many more accumulate due to table update activity, ad nauseam. So
> this unused space constantly grows. Here's a sample autovacuum run:
>
> 2019-04-11 19:39:44.450841500 [] LOG: automatic vacuum of table
> "foo.publi
>
> > 2019-04-11 19:39:44.450844500 tuples: 19150 removed, 2725811 remain,
> 465 are dead but not yet removable
>
> What Jeff said. This vacuum spent a lot of time, only to remove a miserly
> 19k tuples, but 2.7M dead tuples remained... probably because you have
> long-running transactions preventing their removal.
On Thu, Apr 11, 2019 at 11:13:17AM -0600, Michael Lewis wrote:
> Wouldn't "dead but not yet removable" be high if there were long running
> transactions holding onto old row versions?
You got it right. You need to look at the number behind the tuples
dead but not yet removable.
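A common way to hunt for those transactions is pg_stat_activity; a sketch:

-- Oldest open transactions first; long-lived ones pin dead tuples
-- as "dead but not yet removable".
SELECT pid, usename, state, xact_start, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;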
>
> Way too many indexes. I'm going to have a hard time convincing our
> programmers to get rid of any of them. :)
>
You can create (concurrently) an identical index with a new name, then drop
old version concurrently and repeat for each. It doesn't help you figure
out the root cause and how to prevent it from happening again.
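A sketch of that dance with hypothetical names (from v12, REINDEX
CONCURRENTLY does the same in one step):

CREATE INDEX CONCURRENTLY my_idx_new ON my_table (some_col);
DROP INDEX CONCURRENTLY my_idx;
ALTER INDEX my_idx_new RENAME TO my_idx;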
On Thu, Apr 11, 2019 at 10:39:12PM -0600, Michael Lewis wrote:
> You can create (concurrently) an identical index with a new name, then drop
> old version concurrently and repeat for each. It doesn't help you figure
> out the root cause and how to prevent it from happening again, but
On Sun, Apr 14, 2019 at 4:06 AM Peter J. Holzer wrote:
>
> If you want to prevent a user from logging in (which is functionally
> equivalent but a bit stronger than "instantly kick off"), then this is
> definitely something that could and should be implemented via PAM (I'm
> not sure what informa
Which version? What are the queries you are running which give unexpected
behavior? Have you run EXPLAIN ANALYZE on those to check what plan is
being used? Have you reindexed all of them or only the one you suspect?
>
>
> > * Michael Lewis (mle...@entrata.com) wrote:
> > > Thanks for that advance warning since it is a handy option to force the
> > > planning barrier in my experience. What's a resource to see other
> coming
> > > changes in v12 especially changes
>
> Thus, what I'm looking for here is a way to store the information and then
> pass that information to the next query efficiently.
> For example, is it possible to define a struct of my choice, private to
> the current transaction, that would store the data and then pass it around
> to the next qu
I assume it is in the documentation, but I am not aware of how stats are
handled for uncommitted work. Obviously in the example you provided the
table would be empty, but in your real tests do they start out empty? Would
it suffice to use temp tables created like the regular ones and analyze
them afterward?
On Thu, Apr 25, 2019, 11:34 AM Martin Kováčik wrote:
> Turning off autovacuum for the tests is a valid option and I will
> definitely do this as a workaround. Each test pretty much starts with empty
> schema and data for it is generated during the run and rolled back at the
> end. I have a lot of
Best option: copy/move the entire pgdata to a larger space. It may also
be enough to just move the WAL (leaving a symlink), freeing up the 623M, but
I doubt it since VACUUM FULL occurs in the same tablespace and can need an
equal amount of space (130G) depending on how much it can actually free up.
Assuming you get the database back online, I would suggest you put a
procedure in place to monitor disk space and alert you when it starts to
get low.
--
Mike Nolan
On Fri, May 3, 2019 at 9:35 AM Ravi Krishna wrote:
> >
> > In what format are you dumping the DB2 data and with what specifications
> e.g. quoting?
> >
>
> DB2's export command quotes the data with "". So while loading, shouldn't
> that take care of the delimiter-in-the-data issue?
>
I don't think
I'm still not clear what the backslash is for; is it ONLY to separate first
and last name? Can you change it to some other character?
Others have suggested you're in a Windows environment; that might limit
your options. How big is the file? Is it possible to copy it to another
server to manipulate it?
On Mon, May 6, 2019 at 6:05 AM Arup Rakshit wrote:
SELECT MAX(id) FROM chinese_price_infos;  -- max: 128520 (1 row)
SELECT nextval('chinese_price_infos_id_seq');  -- nextval: 71164 (1 row)
Not sure how they got out of sync. How can I fix this permanently? I ran
vacuum analyze verbose;
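A common one-off repair is to point the sequence at the table's current
maximum (a sketch using the names from the question):

SELECT setval('chinese_price_infos_id_seq',
              (SELECT max(id) FROM chinese_price_infos));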
I did find a scenario where this approach does run into trouble. That is,
if the function/procedure is executed against the permanent table and then
you go to run it against a temporary table. In that case, I do get the
wrong answer, and I haven't yet figured out how to reset that without
dropping it.
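If cached plans are what goes stale here (an assumption about the cause),
one session-level reset to try is:

-- Drop all cached plans in this session, forcing re-planning against
-- whichever table the name now resolves to.
DISCARD PLANS;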
For each row:
Insert into the organizations table if the record does not exist, returning the ID.
Insert into people using that ID.
Else, load all the data with an empty ID column on the person table, then just
update the person table afterward and drop the org name column.
Perhaps I am missing something.
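A sketch of the first, per-row variant as a data-modifying CTE
(hypothetical schema; assumes a unique constraint on organizations.name):

WITH org AS (
    -- The no-op update makes RETURNING yield the id whether the
    -- row was just inserted or already existed.
    INSERT INTO organizations (name)
    VALUES ('Acme Corp')
    ON CONFLICT (name) DO UPDATE SET name = EXCLUDED.name
    RETURNING id
)
INSERT INTO people (org_id, full_name)
SELECT id, 'Jane Doe' FROM org;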
>
> So, a related question, since we have dozens of temp tables and a lot of
> code, is there a way to look up what temp tables are being created by the
> current session, so I can do a VACUUM or ANALYZE on all of them in bulk? I
> know I can inspect the pg_temp_* schema, but how to figure out which ones?
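One way to enumerate them is the session's own temporary schema (a sketch):

-- Tables living in this session's pg_temp schema.
SELECT c.relname
FROM pg_class c
WHERE c.relnamespace = pg_my_temp_schema()
  AND c.relkind = 'r';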
How big does the data stored in that field get? More than 2KB? The real
question: is it getting stored plain, compressed inline, or TOASTed? Have
you set the storage strategy/type, or is it the default "extended" behavior
that compresses and then stores in the TOAST table if still more than 2000
bytes?
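A sketch of inspecting and changing the strategy (hypothetical table and
column names):

-- attstorage: 'p' = plain, 'm' = main, 'x' = extended (compress then
-- toast), 'e' = external (toast without compression).
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'my_table'::regclass
  AND attnum > 0;

-- Skip compression and always store large values out of line.
ALTER TABLE my_table ALTER COLUMN payload SET STORAGE EXTERNAL;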
*":foo" named placeholders*
If I may, is this supported natively in Postgres prepared statements? Can I
see an example? I do not much care for numbered positional
placeholders and would love to use names instead if possible.
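Server-side prepared statements only know positional parameters; named
placeholders come from client libraries that rewrite them. A sketch of the
native form (hypothetical table):

PREPARE find_user (integer) AS
    SELECT * FROM users WHERE id = $1;
EXECUTE find_user (42);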
For 3TB, pg_upgrade can also be very fast if you use --link. Be careful
to keep backups around though, at all times.
--
Michael
On Fri, Jun 14, 2019 at 08:02 Tiemen Ruiten wrote:
> Hello,
>
> I setup a new 3-node cluster with the following specifications:
>
> 2x Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz (2*20 cores)
> 128 GB RAM
> 8x Crucial MX500 1TB SSD's
>
> FS is ZFS, the dataset with the PGDATA directory on it has th
few subsets of the columns will be needed to support
the kinds of queries we want.
What settings should be changed to maximize performance?
--
Michael J. Curry
>
> If your entire database can comfortably fit in RAM, I would make
> shared_buffers large enough to hold the entire database. If not, I would
> set the value small (say, 8GB) and let the OS do the heavy lifting of
> deciding what to keep in cache. If you go with the first option, you
> probably
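A sketch of applying the first option's knob (placeholder value):

ALTER SYSTEM SET shared_buffers = '96GB';
-- Takes effect only after a full server restart, not a reload.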
> IMO one should turn on the flushing by backends in most cases too
> (e.g. backend_flush_after=2MB), unless it's a really latency/jitter
> insensitive application, or storage is *REALLY* slow.
>
> There's a few things we don't flush that we maybe should (file extension
> writes, SLRUs), so it can still be sensible to tune
> dirty_background_bytes. But that has the disadvantage of also affecting
> temp file writes etc, which is usually not wanted.
>
> Greetings,
>
> Andres Freund
>
--
Michael J. Curry
cs.umd.edu/~curry
urse which
have no checksums to look at yet.
--
Michael
>
> Actually we have noticed that autovacuum in PG10 keeps vacuuming the
> master tables, which takes a lot of time, and doesn't go to the child
> tables to remove the dead tuples.
>
What do the logs say actually got done during these long-running
autovacuums? Is it feasible to increase the work allowed per run?
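A sketch of checking what autovacuum has actually been getting done:

-- Dead-tuple counts and the last (auto)vacuum per table.
SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;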
ld be).
The constraint that a cluster needs to be cleanly shut down to be able
to enable checksums with pg_checksums is the actual deal here. After
that of course comes the WAL retention on the primary or in the WAL
archives that a standby would need again to catch up while it was
offline.
--
Michael
"MyTableName".
Are there settings at the PostgreSQL server or database level to change this
back to the default, to allow double quotes around schema names?
We're using PostgreSQL 13.7 on x86_64-pc-linux-gnu, compiled by gcc (GCC)
8.5.0 20210514 (Red Hat 8.5.0-10), 64-bit
Thanks!
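There is no server setting for this; unquoted identifiers simply fold to
lower case, while quoting preserves case. A sketch:

CREATE TABLE "MyTableName" (id integer);

SELECT * FROM "MyTableName";  -- works: quoted, exact case
SELECT * FROM MyTableName;    -- ERROR: relation "mytablename" does not exist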
Michael
Anytime PG or
anything else Linux-based says "thread", they're talking about a POSIX
thread environment.
On Wed, May 3, 2023 at 05:12 Michael J. Baars <
mjbaars1977.pgsql.hack...@gmail.com> wrote:
> Hi Peter,
>
> The shared common address space is controlled by the clone(2) CLONE_VM
&
All these
steps are stable in the backend, at least here. Or do we have some
low-hanging fruit with the WAL_LOG strategy? That could always be
possible, of course, but that looks like the same issue to me, just
with a different symptom showing up.
--
Michael
in a way similar to Evgeny.
--
Michael
LE, or is that 15.2-ish without fa5dd46?
One thing I was wondering about, to improve the odds of a hit, is to
be more aggressive with the number of relations created at once, so that
we are much more aggressive with the number of pages extended in
pg_class from the origin database.
--
Michael
we, err, revisit the choice of making WAL_LOG
the default strategy even for this set of minor releases? FWIW, I've
mentioned that this choice was too aggressive in the thread of
8a86618..
--
Michael
Please, use the following runbook.
1. Disable the subscription to pg10.
2. Disable Application Users on Publisher.
3. Drop all replication slots on Publisher (the upgrade cannot be executed
if there are any replication slots).
4. Run RDS's upgrade (which runs pg_upgrade).
5. Recreate replication slots.
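For step 3, a sketch of dropping every slot on the publisher:

SELECT pg_drop_replication_slot(slot_name)
FROM pg_replication_slots;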
Clearly I'm a 73-year-old dinosaur, because I believe in having the
business logic in the database wherever possible. But the development
projects I've been around lately aren't using triggers at all. (And
it should not surprise anyone, certainly not me, that consistency of
data enforcement is an
You're gonna lock yourself into SOMETHING, that's why there are still
thousands of COBOL programs still being maintained.
Mike Nolan
On Fri, Jun 9, 2023 at 3:39 PM Ron wrote:
>
> You can be sure that banks and academic research projects have different
> needs. Heck, your University's class sch
Can you use a CASE statement? The real issue with date conversion is
not knowing if a value of 02-03-2023 is mm-dd-yyyy or dd-mm-yyyy.
On Wed, Jun 14, 2023 at 11:42 AM Marc Millas wrote:
>
> Hi,
>
> I would like to load data from a file via file_fdw or COPY.. its a postgres
> 14 cluster
>
> but
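Once the layout is known per row (or per file), an explicit format mask
removes the guesswork; a sketch of the CASE idea with a hypothetical
discriminator column:

SELECT CASE source_format           -- hypothetical column naming the layout
           WHEN 'US' THEN to_date(raw_date, 'MM-DD-YYYY')
           ELSE           to_date(raw_date, 'DD-MM-YYYY')
       END AS parsed_date
FROM staging_dates;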
32 ─┐ ┌──┬──> PG_Cluster1@localhost:5433
>> ─> pgs2.server.net:5432 ─┤ │ ├──> PG_Cluster2@localhost:5434
>> ─> pgs3.server.net:5432 ─┼─>─┤ 192.168.0.1:5432 ├──>
>> PG_Cluster3@localhost:5435
>> ─> pgs4.server.net:5432 ─┤ │ ├──> PG_Cluster4
team implement them.
>
> Operationally much simpler to have a listener handle that.
>
> -- Born in Arizona, moved to Babylonia.
Hello Ron,
I have to agree with you there as well. The workflow you have to go through is
also often a matter of time: many parties have to agree, and then the
application owners still have to provide justifications.
At the same time, we have to be flexible and fast, allocate resources well at
all times, and give the application the maximum possible performance.
Regards
Michael
speaks
the protocol directly and because it has no dependency on libpq. Are
there any specific failures you are seeing in the PostgreSQL backend
that you find confusing?
--
Michael
As all the
development discussions are done by email on the mailing list
pgsql-hackers, mostly.
--
Michael
It's not just Ruby; dumb databases are preferred in projects like
WordPress, Drupal, and Joomla, too.
Now, if it's because they're used to using MySQL, well maybe that's
not so hard to understand. :-)
On Mon, Jun 26, 2023 at 8:05 PM Guyren Howe wrote:
>
> This is a reasonable answer, but I want
from here on would be
* persisted. To avoid that, fsync the entire data directory.
--
Michael
more details about what's happening. For example, what do the
logs of the standby tell you? Are you sure that the reload was done
on the correct node? Did you check with a SHOW command that the new
value on your standby is what you want it to be?
--
Michael
, I would agree with you that it is not especially useful to keep
it around once the cluster has been recovered from a base backup. It
would actually lead to various errors if attempting to run
pg_verifybackup on its data folder, for instance.
--
Michael
We are experiencing different behavior after upgrading from Postgres
14.3 to Postgres 15.3.
Below is a test case that we created which shows a schema user who has a
VIEW that accesses a table in another schema. In 14.3 the schema user is
able to create the VIEW against the other schema's table.
sten_schema, but in 15.3 I am unable due to a permission issue.
On Tue, Sep 19, 2023 at 8:17 PM Erik Wienhold wrote:
> On 2023-09-19 15:09 -0400, Michael Corey wrote:
> > We are experiencing different functionality once we upgraded from
> Postgres
> > 14.3 to Postgres 15.3.
> >
databases and received different results.
On Wed, Sep 20, 2023 at 12:33 PM Erik Wienhold wrote:
> On 2023-09-20 09:15 -0400, Michael Corey wrote:
> > Thanks for responding. All of the DDL is just the setup for the test
> > case. I ran those steps in both databases to setup the exac
original 14 server and made two copies. I kept one as 14 and upgraded
the other to 15. Lastly, I created the test case.
On Wed, Sep 20, 2023 at 3:07 PM Erik Wienhold wrote:
> On 2023-09-20 13:17 -0400, Michael Corey wrote:
> > PG 14 Server
> > psql (14.2, server 14.3)
> > You
schema?
>> Because it must inherit SELECT on ref_media_code on 14.3. It can't be
>> from object_creator because that role also gets newly created.
>>
>
> Your description also suggests that maybe the v14 instance has altered
> default privileges setup that maybe the v15 doesn't have.
>
> David J.
>
>
--
Michael Corey
,
*pg_write_all_data*, rds_password, rds_replication TO rds_superuser WITH
ADMIN OPTION;
AWS added these permissions, but based on what they do, you would think this
would allow the SELECTs in 15.
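If the missing piece really is just privileges for the view's owner on the
referenced objects (one theory in this thread), the explicit grants would
look like this (role and schema names are placeholders; ref_media_code is
from the test case):

GRANT USAGE ON SCHEMA other_schema TO view_owner;
GRANT SELECT ON other_schema.ref_media_code TO view_owner;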
On Wed, Sep 20, 2023 at 4:40 PM Erik Wienhold wrote:
> On 2023-09-20 15:19 -0400, Michael Corey wr