Hi:
11.3 on linux
I've come up with a plan to archive data from my main DB which involves
creating other DBs on the same server. But even though there will be zero
activity on the archive DBs in terms of insert/update/delete, and almost no
activity in terms of select, I'm still worried that the
Ok, thanks.
I was also planning on manually running VACUUM, REINDEX and ANALYZE on the
main DB after removing the archived data from it. Does that sound
necessary and reasonable?
On Tue, Nov 24, 2020 at 10:15 AM Adrian Klaver
wrote:
> On 11/24/20 6:36 AM, David Gauthier wr
chine has 64G of RAM.
(I understand that I need to change my db design .. I have been backing up
one schema at a time until now so wasn't aware of this problem)
Thanks
David
As of Java-16 there is a replacement for JNI which allows direct calling of
native interfaces
without the need for hand coded stub code.
On Monday, 7 December 2020 22:10:01 GMT Martin Gainty wrote:
> Nota Bene:
> Almost all official interfaces to/from Oracle are coded in Java
> FWIK google's imp
My presumption of view and INSTEAD OF trigger behavior is that the VIEW first
gets populated with the WHERE filter and then the DELETE or UPDATE operation
will fire against each of the rendered view rows. (?)
If this is true then I can't explain the "more than one row returned" error.
[11-1]
Dave
From: Tom Lane
Sent: Tuesday, January 12, 2021 6:24 PM
To: Day, David
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Views and triggers more then one row returned by subquery.
"Day, David" writes:
> My presumption of views and instead of t
We are using MS-Access as a front-end to work with tables served by
a Postgres/11.3 DB on linux as the MS-Access backend through ODBC. I have
my users install on their Windows laptops a PG driver for ODBC from here...
https://www.postgresql.org/ftp/odbc/versions/msi and selecting
psqlodbc-12_02_00
affect things for other queries too. Also, if you
were to only change 1 partition and that partition were to be pruned,
then you'd not get the 64 workers.
David
n data wrapper. I'm not sure of the status of that work, but if
you want to learn more about it, please see [1]
David
[1] https://commitfest.postgresql.org/32/2491/
equation
was. There are some plan shapes such as GROUP BY or aggregate queries
with very few or just 1 group where the serial portion of the
execution is very small indeed.
David
possible) or granting access
to the “hidden” schema?
Thanks in advance!
Best regards,
David Wheeler, Inomial Architect
E. dwhee...@dgitsystems.com
313 La Trobe Street, Victoria 3000, Australia
+61 3 8820 5200
MELBOURNE . DENPASAR . AUCKLAND
WWW.DGITSYSTEMS.COM
This works, but I see this as a workaround, because the function is simply a
utility that makes understanding the data in the table a little easier. Why
should it be security definer? If you don’t have access to the table you
shouldn’t be able to use the function to access it.
Regards,
Dav
follow each of the "Referenced by" from the table
you're deleting from. In the \d output, just ignore the tables
mentioned in "Foreign-key constraints:". Those are only checked on
INSERT/UPDATE and must already contain a proper unique constraint and
therefore index.
David
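To put that concretely, here is a minimal sketch (hypothetical parent/child tables): every table listed under "Referenced by" should have an index on its referencing column, or the DELETE has to scan that table once per removed row.

```sql
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child  (id int PRIMARY KEY,
                     parent_id int REFERENCES parent (id));

-- the FK above requires a unique index on parent(id), but nothing
-- indexes child.parent_id; without this, DELETE FROM parent must
-- scan child for each deleted row:
CREATE INDEX child_parent_id_idx ON child (parent_id);
```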
om PostgreSQL.
You might have better luck asking the authors of pgbackrest. Check
their website for details on how they want issues reported.
David
one day want to start dropping old unused columns
out of your tables. If you alias these then dropping the columns will
properly cause queries that reference these columns to ERROR. Without
aliases, your queries might just start doing something you don't want
them to do and you might not realise that for a very long time.
David
On Mon, 4 Apr 2022 at 23:33, Sebastien Flaesch
wrote:
> Any arguments I could give to programmers? Maybe:
> "You better replay all SQL of the whole transaction... (and make them as
> short as possible!)"
> ?
There are SAVEPOINTs [1].
David
[1] https://www.postgresq
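A minimal sketch of the idea (hypothetical table name): a SAVEPOINT lets you retry just the failed statement instead of replaying the whole transaction.

```sql
BEGIN;
INSERT INTO log VALUES (1, 'ok');
SAVEPOINT before_risky;
INSERT INTO log VALUES (2, 'might fail');  -- suppose this errors out
ROLLBACK TO SAVEPOINT before_risky;        -- undo only the failed part
INSERT INTO log VALUES (2, 'fixed');       -- retry it
COMMIT;                                    -- the first insert survives
```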
's what the SQL standard says, so that's the way we do it.
David
Hi:
psql (11.5, server 11.3) on linux
I have a table with a bytea column which, of course, contains binary data.
After 60 days, I no longer need the binary data but want to retain the rest
of the record. Of course it's easy to just update the bytea column to null
for the older records. But I can
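For reference, the "easy part" described above might look like this (a sketch; table and column names are assumptions):

```sql
-- null out the binary payload of records older than 60 days,
-- keeping the rest of each row intact
UPDATE mytable
SET payload = NULL
WHERE created_at < now() - interval '60 days'
  AND payload IS NOT NULL;
```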
Doesn't vacuum run automatically (or can it be set to run automatically)?
On Wed, May 11, 2022 at 8:05 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Wednesday, May 11, 2022, David Gauthier wrote:
>
>> Hi:
>> psql (11.5, server 11.3) on linux
>
ALTER TABLE data ALTER COLUMN data SET STATISTICS 1000;
ANALYZE data;
The standard number of buckets is 100. The above will set it to 1000.
You can go as high as 10000, but going too high is going to slow down
the planner, so you should only go as high as you need to go.
David
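If it helps, you can confirm the per-column target took effect with a catalog query (a sketch; -1 means the column uses the default target):

```sql
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'data'::regclass
  AND attname  = 'data';
```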
eight than a btree index. You only need the index to instruct
ANALYZE to gather statistics. You might also want to ramp up the pages
per range.
With that, the planner might then realise that parallel aggregate
might be worthwhile.
David
[1]
https://www.enterprisedb.com/postgres-tutorials/using-forceparallelmode-correctly-postgresql
ing there will be fewer groups than it currently
thinks there will be, which will likely result in the parallel plan
that you desire.
David
I understand the planner can use multiple indexes to get the best plan.
Can someone point me to a paper that explains how this works?
Thanks
On Thu, 7 Jul 2022 at 04:07, DAVID ROTH wrote:
> I understand the planner can use multiple indexes to get the best plan.
> Can someone point me a paper that explains how this works.
I don't know of a paper, but if you're talking about using multiple
indexes to scan a single rela
Has anything been published on Oracle to Postgres migration?
I am finding plenty of information about schema migration, but
I think this is the easy part.
I have a tremendous amount of SQL and PL/SQL code that needs to be
translated. I know Oracle "packages" will not translate. DECODE is not
Looks good.
Thanks
> On 07/14/2022 3:10 PM Bruce Momjian wrote:
>
>
> On Thu, Jul 14, 2022 at 03:06:58PM -0400, DAVID ROTH wrote:
> > Has anything been published on Oracle to Postgres migration.
> >
> > I am finding plenty of information about schema migratio
ter job of this as
it could have done Limit -> Unique -> Merge Append -> Index Scan. I
do have some work in progress code to rewrite the union planner so
that it is able to request pre-sorted input to allow Merge Append to be
used rather than Unique -> Sort. I'm not sure if it'll help this case
or not.
David
ery contained expensive joins or many rows were
filtered out. That could be more expensive as the subquery version
would have to do that twice whereas the window agg version would only
have to do that once.
David
ely cause is the planner favouring an early startup plan.
It's probably more likely that lower() is providing the planner with
bad estimates and there's likely far less than the expected rows,
resulting in the LIMIT 1 being a much larger proportion of the
total rows than the planner expects.
David
ructure itself must guarantee that there are not
duplicates in different partitions."
Maybe that could be more clear and mention that all the primary key
columns must be present and not be part of a function call or
expression.
David
[1] https://www.postgresql.org/docs/13/ddl-partitioning.html
a operation. If you have an index over all partitions
then that operation is no longer a metadata-only operation. It
suddenly needs to go and remove or invalidate all records pointing to
the partition you want to detach/drop.
David
we may be able to inform you about
features/performance improvements in newer versions which help with
the problem you're facing.
You mention "constraint-exclusion", that's no longer how we perform
partition pruning and hasn't been since (if I remember correctly)
PostgreSQL 11. Perhaps you're using PG10?
David
I'm
surprised that executor startup / shutdown for a plan which accesses a
large number of partitions is not drowning out the locking overheads.
As far as I knew, this problem was only visible when run-time
partition pruning removed the large majority of the Append/MergeAppend
subnodes and made executor startup/shutdown significantly faster.
David
s with the disabled high cost penalty can
cause other issues like the planner thinking two Paths are "fuzzily"
similar enough in costs and rejecting better Paths when in fact the
better Path is really quite a bit better when you subtract the
disabling cost penalty.
David
ins out over the LimitPath with
the index scan due to the Index scan having a higher startup cost.
It feels like something is a bit lacking in our cost model here. I'm
just not sure what that is.
David
> Index Only Scan using t1_a_idx on t1 (actual time=0.072..0.073 rows=1 loops=1)
  Index Cond: (a = 0)
  Heap Fetches: 1
Planning Time: 0.146 ms
Execution Time: 0.100 ms
(6 rows)
However, I might be wrong about that. I've not given it too much thought.
David
psql (11.5, server 11.3) on linux
Trying to insert a string containing a double-quote into a varchar arr.
Here's my attempt
dvdb=# create table testarr (arr varchar[]);
CREATE TABLE
dvdb=# insert into testarr (arr) values ('{"abcdefg"}');
INSERT 0 1
dvdb=# select * from testarr
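For a double quote *inside* an element value, either of these should work (a sketch: backslash-escape it within the array literal, or use the ARRAY constructor and skip array-literal quoting rules entirely):

```sql
insert into testarr (arr) values ('{"ab\"cd"}');   -- backslash-escape
insert into testarr (arr) values (ARRAY['ab"cd']); -- ARRAY constructor
```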
the locking of partitions which have been
pruned. If that patch makes it then the problem will be solved, at
least starting with the version the patch makes it into.
David
psql (11.5, server 11.3) on linux
Someone who had privileges in a lot of tables/schemas/DBs left the
company. I want to drop that role but "drop role xxx" won't let me
because... "xxx cannot be dropped because some objects depend on it".
Is there a way to just get rid of this role without revoki
me, thanks.
Would this have been more clear if [1] didn't mention both declarative
partitioning and inheritance partition on the same page? I've wondered
before if we should split that into two separate pages.
David
[1]
https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-USING-INHERITANCE
old table.
You'll need to think carefully about unique constraints and any other
constraints which are on the table in question. You'll want to do a
lot of testing before committing to doing this too.
David
Is there a way to reverse engineer the original code (or its equivalent) from
what is saved in the database?
> On 12/02/2022 8:48 AM Dominique Devienne wrote:
>
>
> On Thu, Dec 1, 2022 at 8:51 PM Tom Lane wrote:
> > Do you really fail to see the contradictions in this? You want the
> > data
the performance is also improved on their
hardware.
David
ool that could achieve this?
Is this a partial index? If not, those should just increase at the
same rate that's mentioned in pg_stat_user_tables for the table that
the index belongs to.
David
When a fdw table participates in query planning and finds itself as
part of a join it can output a parameterized path. If chosen, Postgres
will dutifully call the fdw over and over via IterateForeignScan to
fetch matching tuples. Many fdw extensions do network traffic, though,
and it would be benef
On Thu, 22 Dec 2022 at 13:31, David Gilman wrote:
>
> When a fdw table participates in query planning and finds itself as
> part of a join it can output a parameterized path. If chosen, Postgres
> will dutifully call the fdw over and over via IterateForeignScan to
> fetch matchi
On Wed, Dec 21, 2022, 10:57 PM David Rowley wrote:
> On Thu, 22 Dec 2022 at 13:31, David Gilman wrote:
> >
> > When a fdw table participates in query planning and finds itself as
> > part of a join it can output a parameterized path. If chosen, Postgres
> >
Hi. New subscriber and intermediate level SQL writer here, still pretty new
to Postgresql (but I don't know how to do the following in TSQL either).
I've figured out how to write a crosstab query I need; the problem is that
the number of row results should be 72, but I'm only getting 41. I'm
pret
ot;2021" TEXT
);
A row for each one of the matching s.s_id values should be displayed, even
if all the ad.s_a_qty values for that Yr are NULL; right now, the query
works, but it only returns matching rows for which at least one year has a
non
Thanks, David. There's a lot of stuff i'm not familiar with in there: I
don't suppose you know of a tutorial they covers this?
On Sun, Dec 25, 2022 at 8:46 AM David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Sun, Dec 25, 2022 at 12:25 AM
*That* covers this.
On Sun, Dec 25, 2022 at 8:56 AM David Goldsmith
wrote:
>
>
> Thanks, David. There's a lot of stuff i'm not familiar with in there: I
> don't suppose you know of a tutorial they covers this?
>
>
> On Sun, Dec 25, 2022 at 8:46 AM
in plan vs the Hash Join plan. Since we do buffered I/O,
without track_io_timing, we don't know if the read buffers resulted in
an actual disk read or a read from the kernel buffers.
David
was some discussion about
various aspects of making this better. Perhaps changing the typmod is
easier than changing the type completely, but we still don't have any
code for it. So for now, you're just stuck manually dropping and
recreating your views.
David
ther cases can be a pretty difficult and frustrating process.
You have to remember, even if you think the slowdown is some corner
case that only applies ~1% of the time, for some users in the real
world, that might be 100% of their queries.
There are probably other ways you could consider doing this, I just
can't think of them right now.
David
On Thu, 9 Feb 2023 at 14:49, Tom Lane wrote:
>
> David Rowley writes:
> > Tom's argument seems to think it's impossible, so if you find that
> > it's definitely not impossible, then you can assume he's wrong about
> > that.
>
> My point was that
which columns are NOT NULL then the additional overhead of just
looking at the NullTest quals would likely be cheap enough to be
worthwhile. I imagine we'd need to find some other reason to record
attnotnull columns before we'd consider doing this.
David
>> Are there techniques for situations like this?
Just have two triggers, one for each column, and ensure that if your trigger
doesn’t change the value then it doesn’t do an update on the other column. Each
time you do update both triggers will run but only one will make a change, so
that will
.
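One of that pair of triggers might look like this (a sketch under assumed column names a and b; shown here as a BEFORE trigger that modifies NEW directly rather than issuing a second UPDATE, with an IS DISTINCT FROM guard so it becomes a no-op when nothing would change):

```sql
-- hypothetical table t(a, b) where b must track a; the guard means the
-- counterpart trigger (watching b) sees no change and does nothing
CREATE FUNCTION sync_b_from_a() RETURNS trigger AS $$
BEGIN
  IF NEW.b IS DISTINCT FROM NEW.a * 2 THEN
    NEW.b := NEW.a * 2;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_sync_b BEFORE INSERT OR UPDATE OF a ON t
  FOR EACH ROW EXECUTE FUNCTION sync_b_from_a();
```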
I suspect ed1a88dda would be what made this faster in master. We'll
check for peer rows to check "NULL IS NOT DISTINCT FROM NULL" prior to
that change with the ORDER BY NULL query.
David
id of the useless targetlist column altogether would
> be way more invasive, and I'm not inclined to try.
Yeah, that would likely add more complexity than it would be worth.
David
ants to use the seq scan and if you disable
enable_seqscan and it becomes slower, then that's an indication that
the planner is likely making the correct choice. The problem might
just be with your expectations of index scan performance.
David
rm necessary work, which can
cause problems such as table bloat or out-dated table statistics. A
database-wide ANALYZE is recommended after the statistics have been
reset."
David
[1]
https://www.postgresql.org/docs/14/monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW
every table first.
I would make not running pg_stat_reset() ever a priority.
David
t for `IS` as basically an alias for `IS NOT DISTINCT FROM`? (and `IS
NOT` for `IS DISTINCT FROM`)
TIA
David Wheeler
[1] https://github.com/cashapp/sqldelight/issues/3863
t2.y where t1.x > 2 and t1.x
< 4 and t2.y > 2 and t2.y < 4;
you could likely do this.
I still hope to improve this in the planner one day. A few other
things are getting closer which sets the bar a bit lower on getting
something like this committed. There's some rel
_number = 30926000.
I think your best solution will be to just also include the seemingly
surplus: AND (b1."number" >= 30926000) AND (b1."number" <= 31957494)
quals.
David
fford to block any traffic for the brief moment it would
take to drop the constraint.
David
on.
"Specifying FREEZE is equivalent to performing VACUUM with the
vacuum_freeze_min_age and vacuum_freeze_table_age parameters set to
zero." [0]
David
[0] https://www.postgresql.org/docs/current/sql-vacuum.html
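In other words, for a hypothetical table t, these two should behave the same:

```sql
VACUUM (FREEZE) t;

-- equivalent, per the doc quote above:
SET vacuum_freeze_min_age = 0;
SET vacuum_freeze_table_age = 0;
VACUUM t;
```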
On Sat, 4 Mar 2023 at 10:55, Ron wrote:
> On 3/3/23 04:54, David Rowley wrote:
> If you have a look at
> https://www.postgresql.org/docs/15/sql-dropindex.html check out the
> CONCURRENTLY option. That option allows an index to be dropped without
> blocking concurrent reads and writ
times. So, don't be too surprised that there's some discussion of
other methods which might make this work which don't involve copying
what someone else has done.
David
On Tue, 7 Mar 2023 at 16:11, David G. Johnston
wrote:
>
> On Mon, Mar 6, 2023 at 7:51 PM David Rowley wrote:
>> the transfn for bottom() would need to remember the city and the
>> population for the highest yet seen value of the 2nd arg.
>
>
> BOTTOM() remember
On Tue, 14 Mar 2023 at 21:01, Alban Hertroys wrote:
> > On 7 Mar 2023, at 4:11, David G. Johnston
> > wrote:
> > TOP(city, ROW(population, land_area)) ?
>
> What should be the expected behaviour on a tie though?
Undefined. Same as having an ORDER BY on a column tha
ooks great. I just wanted to mention that as it may be a factor that
matters at some point, even if it does not right now.
David
Is there any practical limit on the number of schemas in a database?
Will the number of schemas in a user's search path impact performance?
Thanks
Dave Roth
> On 03/20/2023 10:15 AM Ron Johnson wrote:
>
>
> Real-time CDC is the difficult part. ora2pg (using views) can do a
> static m
Is there any good reference to explain the best usage of each of these
structures.
I am coming from Oracle. What is the best analog to Oracle's "user".
Thanks
Dave Roth
> On 03/20/2023 10:15 AM Ron Johnson wrote:
>
>
> Real-time CDC is the difficult part. ora2pg (using views) can do
's patch, I was surprised to see it didn't set
amcanorder to true, so I'm a little unsure how that patch is adding
more usable optimisations which the planner can make use of.)
David
On Thu, 30 Mar 2023 at 22:21, 任重 wrote:
> Is there any way to get dead tuple data from a table when I have disabled
> autovacuum?
> I need whole row data
https://www.postgresql.org/docs/current/pageinspect.html#id-1.11.7.34.5
David
not, I have no idea why the sequential scans
> take so much longer in the new database, even though the I/O is even faster
> than before.
Looks that way to me too.
> Can anybody give me a hint into which direction I should investigate further?
Probably just run ANALYZE on the database in question.
David
m remains with the finalfunc. It has to be ready to receive
> both types.
What's the use case for that?
David
This is just an enum that code can look at
to determine the node type of whichever pointer it is looking at.
Perhaps you can get away with coding your aggregate function's
component functions in a way that can handle both types, you'd just
need to look at the first 4 bytes of the pointer so you know what to
do. In Postgres, we have an IsA macro to help us with that. Have a
look at nodes.h.
David
o be serialised before sending over the
network. It feels like just a matter of time before we grow the
ability to do that. Lots of work has been done on foreign data
wrappers in the past few years. It feels like it has tailed off a bit,
but I wouldn't be surprised if we had the ability to do that in the
next few years.
David
Has anyone tried using Postgres as a LRU cache for data in cloud blob
storage? I have a lot of data in Cassandra and want to move that to
Backblaze (like S3 but cheaper and less available). Cassandra is working
well but I have to add new nodes more frequently than I would like. The
older data is ra
'd need to be careful never to use that in a view or
even a PREPAREd statement. Those abbreviations are evaluated when the
query is parsed. In those cases, you'd just get the results for
whatever day you did CREATE VIEW or PREPARE.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
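A quick way to see the difference (a sketch): the string literal is coerced to a constant when the view is parsed, while the function call is evaluated on every execution.

```sql
CREATE VIEW frozen_v AS SELECT 'now'::timestamptz AS t; -- fixed at CREATE VIEW
CREATE VIEW live_v   AS SELECT now() AS t;              -- fresh on every query
-- tomorrow, frozen_v.t still shows today's timestamp; live_v.t moves on
```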
I think I know the answer to this one but I'll ask anyway...
Is there a spreadsheet style interface to a PG DB where users can...
- lock records
- edit records
- submit changes (transaction)
Is there any after-market tool for PG that does something like this?
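Whatever the front-end, the lock / edit / submit cycle maps onto row locks in a transaction (a sketch with a hypothetical table):

```sql
BEGIN;
-- "lock record": take a row lock (NOWAIT errors out instead of
-- blocking if another user already holds the lock)
SELECT * FROM items WHERE id = 42 FOR UPDATE NOWAIT;
-- "edit record"
UPDATE items SET qty = 7 WHERE id = 42;
-- "submit changes"
COMMIT;
```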
Hi:
I'm trying/failing to write a recursive plpgsql function where the function
tries to operate on a hierarchy of records in a reflexive table.
parent-child-grandchild type of recursion.
I tried with a cursor, but got a "cursor already in use" error. So that
looks like scoping.
I know I did this
with plpgsql because that wasn't necessary
anymore.
On Tue, Nov 6, 2018 at 7:29 PM Tom Lane wrote:
> David Gauthier writes:
> > I'm trying/failing to write a recursive plpgsql function where the
> function
> > tries to operate on a hierarchy of records in a reflexive table.
ost a bit or increasing effective_cache_size. This will
lower the estimated cost of random IO for indexes scans.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
h some other random
name. If that patch gets in with a good amount of time to spare then
there's a decent chance we'd see a reindex concurrently command patch
appear soon after.
[1] https://commitfest.postgresql.org/20/1726/
--
David Rowley http://www.2ndQuadrant.com
I have the intarray extension installed in the public schema and am
attempting to use it in a plpgsql trigger function from another schema.
When the trigger executes this I get an exception to the effect
{
"hint": "No operator matches the given name and argument type(s). You
ovember 19, 2018 12:56 PM
To: Day, David
Cc: pgsql-gene...@postgresql.org
Subject: Re: plpgsql and intarray extension; int[] - int[] operator does not
exist ?
"Day, David" writes:
> Any suggestions as to why the int[] operations are not understood in the
> trigger context.?
The
clauses, LIKE or any other type of condition."
[1] https://www.postgresql.org/docs/10/planner-stats.html
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
executor. I didn't look
at why the cost is estimated to be slightly higher, but the planner
wouldn't consider rewriting the queries to one of the other cases
anyway, so it's likely not that critical that the costings are
slightly out from reality.
> where U.KEY_U in ({correlated
use that plan for all subsequent EXECUTEs for the session
until you DEALLOCATE the prepared query or DISCARD PLANS/ALL;
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
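That lifecycle in a sketch:

```sql
PREPARE q AS SELECT count(*) FROM pg_class;
EXECUTE q;     -- planned (a cached plan may be reused by later EXECUTEs)
EXECUTE q;
DEALLOCATE q;  -- drops this statement's cached plan
DISCARD PLANS; -- or drop every cached plan in the session
```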
On Mon, 7 Jan 2019 at 21:40, Mitar wrote:
>
> On Mon, Jan 7, 2019 at 12:09 AM David Rowley
> wrote:
> > On Mon, 7 Jan 2019 at 18:54, Mitar wrote:
> > > If I have a PREPAREd query without parameters (static) and I EXECUTE
> > > it repeatedly in the same sessio
/augmenting-the-postgresql-planner-with-machine-learning/
Perhaps there are others that have worked on similar things, however,
I don't recall any conversations on these postgresql.org mailing lists
though. Maybe it's worth trying searching the archives?
--
David Rowley h
See documentation for more information:
https://pgbackrest.org/user-guide.html#delete-stanza
--
-David
da...@pgmasters.net
On 1/25/19 8:02 AM, Ron wrote:
On 1/24/19 11:22 PM, David Steele wrote:
On 1/24/19 5:07 PM, Ron wrote:
Are these the steps, or am I missing something?
$ pgbackrest stop
$ pgbackrest stanza-delete --stanza=mystanza --force
That looks right but no need for --force. That's what the `sto
I have an API server and I'm trying to be conscientious managing Postgres's resources carefully. On the client side, I have a Hikari Pool.
Usually when I need a connection, I simply create a default read/write connection, even if I don't plan to make any updates or inserts or hold any locks. But
(resending to remove HTML formatting)
I have an API server and I'm trying to be conscientious managing Postgres's
resources carefully. On the client side, I have a Hikari Pool.
Usually when I need a connection, I simply create a default read/write
connection, even if I don't plan to make any up
In my Java API server, I am using SERIALIZABLE transaction isolation mode, so
I'm specially handling the error code of "40001 serialization_failure", which
can occur often. I'm getting the error code String using SQLException's
getSQLState() function.
Therefore I was surprised to see that the J
E
Time: 179.750 ms
# alter table rp attach partition rp1 for values from(1) to (201);
INFO: partition constraint for table "rp1" is implied by existing constraints
ALTER TABLE
Time: 4.969 ms
# alter table rp detach partition rp1;
# alter table rp1 drop constraint rp1_a_chk;
--
David