x to a) improve the estimate and b) speed up the queries:
CREATE INDEX ON xyz ((gs & 1), (gs & 2), (gs & 4));
Don't forget to ANALYZE afterwards.
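For illustration, a query that filters on the same expression could then
use that index (a sketch, reusing the "xyz" and "gs" names from above):

EXPLAIN (ANALYZE)
SELECT count(*) FROM xyz WHERE (gs & 1) = 1;

The planner can only use the expression index (and the statistics that
ANALYZE collects for it) if the query repeats the indexed expression verbatim.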
Yours,
Laurenz Albe
so I'd expect PostgreSQL
to have no problems with it with decent storage and CPU power,
but you'd have to test that.
Tune autovacuum if it cannot keep up (tables get bloated).
The first knob to twiddle is probably lowering "autovacuum_vacuum_cost_delay".
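For example, a minimal sketch (the 2ms value is an assumption; pick what
suits your system, and note that 0 disables cost-based throttling entirely):

ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';
SELECT pg_reload_conf();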
Autovacuum might be your biggest problem (only guessing).
Yours,
Laurenz Albe
> Does running vacuum at the end of the function on the tables that were
> truncated and then populated with data have any impact, or is it better
> just to analyze them?
For up-to-date statistics, ANALYZE is enough.
If you want to set hint bits so that the first reader doesn't have to do it,
VACUUM will help. But that is not necessary.
Yours,
Laurenz Albe
> portfolio_allocations_pnsa, which seems like it
> would prevent it from having to sort?
In a bitmap index scan, the table is scanned in physical order, so the result
is not sorted in index order.
I don't know if PostgreSQL is smart enough to figure out that it could use
an index scan and preserve the order through the joins to obviate the sort.
You could try to set enable_bitmapscan=off and see if things are different then.
Perhaps the slower index scan would outweigh the advantage of avoiding the sort.
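A sketch of such a test; the table and column names here are hypothetical,
substitute your actual query:

BEGIN;
SET LOCAL enable_bitmapscan = off;
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM portfolio_allocations ORDER BY portfolio_id, id;
ROLLBACK;

SET LOCAL confines the setting to this transaction, so other sessions
are not affected.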
Yours,
Laurenz Albe
r.
Do you connect via the network, TCP localhost or UNIX sockets?
The last option should be the fastest.
Yours,
Laurenz Albe
https://stackoverflow.com/q/46617329/6464308
If your queries look similar, then you might indeed be the victim of an attack.
Figure out where the function and the executable come from.
In case of doubt, disconnect the server from the network.
Yours,
Laurenz Albe
your applications can help too.
Yours,
Laurenz Albe
Looking at the plan, I'd guess that the following index could be helpful:
CREATE INDEX ON ap.site_exposure(portfolio_id, peril_id, account_id);
Don't know how much it would buy you, but you could avoid the
sequential scan and the sort that way.
Yours,
Laurenz Albe
e listening on port 5432?
Connect as user "postgres" and run the following queries:
SHOW port;
SHOW unix_socket_directories;
That will show the port and the directories where UNIX sockets are created.
You can use a socket directory name with the -h option of psql.
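For example (a sketch; /var/run/postgresql is an assumption, merely a
common default socket directory on Linux):

psql -h /var/run/postgresql -p 5432 -U postgres -d postgres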
Yours,
Laurenz Albe
a.iav_iat_id)
...
FROM item_attribute_value a JOIN item_attribute b ON ...
ORDER BY a.iav_version DESC;
Yours,
Laurenz Albe
this I'm asking the above questions.
Are you sure that you see the private memory of the process and not the
shared memory common to all processes?
An "idle" connection should not hav a lot of private memory.
If you get OOM on the server, the log entry with the memory context dump
might be helpful.
> > maintenance_work_mem = 256MB # min 1MB
> > shared_buffers = 16GB # min 128kB
> > temp_buffers = 16MB # min 800kB
> > wal_buffers = 64MB
> > effective_cache_size = 64GB
> > max_connections = 600
It would be interesting to know the output from
sysctl vm.overcommit_memory
sysctl vm.overcommit_ratio
Also interesting:
sar -r 1 1
I think that max_connections = 600 is way too high.
Are you running large, complicated queries on that machine? That could
be a problem with such a high connection limit.
Is the machine dedicated to PostgreSQL?
Yours,
Laurenz Albe
Stefan Petrea wrote:
> During some database imports (using pg_restore), we're noticing fast
> and unbounded growth of pg_xlog, up to the point where the
> partition (280GB in size for us) that stores it fills up and PostgreSQL
> shuts down.
What do you see in pg_stat_archiver?
Yours,
Laurenz Albe
ult),
only a nested loop join is possible.
I don't know how selective sp.migration_sourcename = 'KatRisk_SC_Flood_2015_v9'
is; perhaps an index on the column can help a little.
But you won't get around the 617937 loops, which is the cause of the
long query duration. I don't think there is a lot of potential for
optimization.
Yours,
Laurenz Albe
esults it gets by scanning the indexes.
To have the above query perform fast, add additional indexes with either
ASC NULLS FIRST or DESC NULLS LAST for all used keys.
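A sketch with hypothetical names: a b-tree index is ASC NULLS LAST by
default, and scanning it backwards gives DESC NULLS FIRST, which is why
DESC NULLS LAST needs an index of its own:

CREATE INDEX ON mytable (sort_key DESC NULLS LAST, id DESC NULLS LAST);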
Yours,
Laurenz Albe
into the same
page as the old one, the indexes don't have to be updated.
That will speed up the UPDATE considerably.
On the other hand, an UPDATE like yours would then always use a
sequential scan, but that may still be a net win.
Other than that, setting checkpoint_segments high enough
l indexes on one table.
No wonder planning and DML statements take very long: they have to consider
all the indexes.
> explain analyse select id from form_data_copy where id between 3001 and 4000
> and bigint50=789;
Use a single index on (bigint50, id) for best performance.
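That is, using the names from your query:

CREATE INDEX ON form_data_copy (bigint50, id);

With bigint50 first, the index resolves the equality condition and then
scans the id range within it.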
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
> Note that this is using the built-in polygon and not pgsphere (spoly)
That sounds about right.
You could use a single-point polygon like '((1,1))'::polygon
and the <@ or && operator.
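A sketch with hypothetical table and column names:

SELECT * FROM shapes WHERE '((1,1))'::polygon <@ poly;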
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
se files are only necessary for point-in-time-recovery,
so you don't have to retain them any longer than you retain
your WAL archives.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
BUFFERS) SELECT ...
we have a chance of telling you what's wrong.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
than STABLE.
The reason is that the subquery with the VOLATILE function can be
flattened; see the EXPLAIN (VERBOSE) output.
There is no guarantee that less volatility means better performance.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
where almost all the execution time is spent.
The question how to fix that is more complicated, and I cannot solve
it off-hand with a complicated query like that.
Setting "enable_nestloop = off" is as coarse as forcing "from_collapse = 1"
and will negatively impact other querie
decision if the data distribution changes
and the chosen query plan becomes inefficient.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
ly disable nested loop joins for the whole
query, rather than
doing the right thing and fixing the estimates:
BEGIN;
SET LOCAL enable_nestloop = off;
SELECT ...;
COMMIT;
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
> at
> database and it will not
> create any issues to queries running in database.
That will very likely cause problems in your database, because sometimes
a nested loop join is by far the most efficient way to run a query.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
your premises?
Apart from all other things, compare the network latency. If a single request
results in 500 database queries, you will be paying 1000 times the network
latency per request.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
ons
> worked fine.
Some application that uses the database has a connection leak: it opens
new connections without closing old ones. Examine "pg_stat_activity" to
find out which application is at fault, and then go and fix that application.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
. I may have made a mistake, and I have no reproducer, but I would
be curious to know if there is an explanation for that.
(I am aware that "top" shows shared buffers multiple times).
Yours,
Laurenz Albe
dp1.daily_production_id) prodId
FROM ps_daily_production_v dp1
WHERE dp1.fleet_object_number = cast(coalesce(nullif
(cast(40001000277313 AS varchar), ''), NULL) AS numeric)
AND dp1.activity_code IS NOT NULL
GROUP BY dp1.fleet_object_number
Remove the superfluous GROUP BY
the second case, the whole IN list is shipped to the remote side.
In short, the queries are quite different, and I don't think it is possible
to get the first query to perform as well as the second.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
just make there citext column, and try searching for value in it,
> and check explain from the search. If it will cast column - it's no
> good.
>
> Sorry I can't tell you what to fix, but perhaps this will be enough for
> you to find a solution.
Quite so.
You are pro
cade;
>
> Without the views, the table can be dropped in 20ms.
You misconfigured your operating system and didn't disable memory overcommit,
so you got killed by the OOM killer. Basically, the operation ran out of memory.
Yours,
Laurenz Albe
> tree structure, the random key values will cause more index page splits
> and merges, as there is no pre-determined order of filling the tree structure.
I'd say that is quite accurate.
Yours,
Laurenz Albe
ALTER TABLE ... VALIDATE CONSTRAINT ...,
which takes a while too, but does not lock the table quite that much.
But I don't think there is a way to do that with a domain.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
> see anything obvious
> in the catalog.
The view is "pg_stat_replication", but an abandoned entry won't show there
before PostgreSQL detects the dead connection and terminates it. You can
set "tcp_keepalives_idle" low enough that the kernel detects broken
connections early on.
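For example (the 60 seconds are an assumption; it means the kernel starts
sending keepalive probes after one minute of inactivity):

ALTER SYSTEM SET tcp_keepalives_idle = 60;
SELECT pg_reload_conf();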
Yours,
Laurenz Albe
ve?
Updating a newly inserted row is about as expensive as inserting the row
in the first place.
You can reduce the overall impact somewhat by creating the table with a
"fillfactor" below 100; in your case, 90 would probably be enough. That
won't speed up the UPDATE itself, but it should greatly reduce the need
for VACUUM.
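A minimal sketch with a hypothetical table definition:

CREATE TABLE measurements (
   id      bigint PRIMARY KEY,
   payload text
) WITH (fillfactor = 90);

The free space per page allows HOT updates, where the new row version stays
on the same page (provided no indexed column changes) and the indexes need
not be touched.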
Yours,
Laurenz Albe
WITH (MODULUS 26, REMAINDER 0);
[...]
CREATE TABLE humans_2002_25
PARTITION OF humans_2002 FOR VALUES WITH (MODULUS 26, REMAINDER 25);
and so on for the other years.
Yours,
Laurenz Albe
E) to see the effects that
partitioning has on your queries.
Yours,
Laurenz Albe
On Fri, 2023-04-28 at 15:19 +0300, Παρασκευη Πασσαρη wrote:
> We are facing a performance issue with the following query. Executing this
> query takes about 20 seconds.
> (the database version is 14.1)
The execution plan seems to be incomplete.
Yours,
Laurenz Albe
want to benefit from plan caching, you can set the configuration
parameter "plan_cache_mode" to "force_custom_plan".
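For example, for a single session:

SET plan_cache_mode = force_custom_plan;

That makes PostgreSQL re-plan a prepared statement with the actual parameter
values on every execution, trading planning time for better plans.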
Yours,
Laurenz Albe
make backend to cache every SQL
> statement
> plan in that function too? and for table triggers, have similar caching
> behavior ?
Yes, as long as the functions are written in PL/pgSQL.
It only affects static SQL, that is, nothing that is run with EXECUTE.
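A sketch with hypothetical names that contrasts the two cases:

CREATE FUNCTION count_matching(p_val integer) RETURNS bigint
LANGUAGE plpgsql AS
$$DECLARE
   n bigint;
BEGIN
   -- static SQL: the plan can be cached and reused
   SELECT count(*) INTO n FROM mytable WHERE col = p_val;
   -- dynamic SQL would be planned on every execution:
   -- EXECUTE 'SELECT count(*) FROM mytable WHERE col = $1' INTO n USING p_val;
   RETURN n;
END;$$;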
Yours,
Laurenz Albe
of 8140 bytes.
If you want your block size to be a power of two, the limit would be 4kB,
which would waste almost half your storage space.
Yours,
Laurenz Albe
split the JSONB into several parts and store each
of those parts in a different table row. That would reduce the impact.
Yours,
Laurenz Albe
;
> Btw this is Postgres 9.6
>
> (we tried unlogged table (that did nothing), we tried creating indexes after
> (that helped), we're experimenting with RRI)
Why are you doing this the hard way, when pg_squeeze or pg_repack could do it?
You definitely should not be using PostgreSQL 9.6 at this time.
Yours,
Laurenz Albe
> Planning Time: 0.732 ms
> Execution Time: 0.039 ms
>
>
> Where the planning time gets in the way as it takes an order of magnitude
> more time than the actual execution.
>
> Is there a possibility to reduce this time? And, in general, to understand
> why planning takes so much time.
You could try to VACUUM the involved tables; indexes with many entries
pointing to dead tuples can cause a long planning time.
Also, there are quite a lot of indexes on "test_db_bench_1". On a test
database, drop some indexes and see if that makes a difference.
Finally, check if "default_statistics_target" is set to a high value, or if
the "Stats target" for some column in the "\d+ tablename" output is set
higher than 100.
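To check and, if need be, reset that (the column name is hypothetical):

SHOW default_statistics_target;
ALTER TABLE test_db_bench_1 ALTER COLUMN some_column SET STATISTICS -1;
ANALYZE test_db_bench_1;

Setting STATISTICS to -1 reverts the column to the default target.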
Yours,
Laurenz Albe
imes. If a generic plan is used (which should happen), you will
see $1 instead of the literal argument in the execution plan.
Prepared statements are probably your best bet.
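A sketch with a hypothetical table: the first five executions are planned
with the actual parameter value; after that, PostgreSQL may switch to a
cached generic plan, which shows $1 in EXPLAIN:

PREPARE stmt(bigint) AS SELECT * FROM mytable WHERE id = $1;
EXPLAIN EXECUTE stmt(42);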
Yours,
Laurenz Albe
the first reader has to set hint bits, which is an extra chore.
You can avoid that if you VACUUM the table before you query it.
Yours,
Laurenz Albe
On Fri, 2023-09-22 at 10:35 +0200, Koen De Groote wrote:
> On Thu, Sep 21, 2023 at 9:30 PM Laurenz Albe wrote:
> > On Thu, 2023-09-21 at 17:05 +0200, Koen De Groote wrote:
> > > I'm doing the following query:
> > > select * from my_table where hasbeench
format please, no JSON.
Yours,
Laurenz Albe
Enable auto_explain and look at the execution plan
when the statement is run by the JDBC driver. Then you can compare the
execution plans and spot the difference.
Yours,
Laurenz Albe
ow statements.
One theory could be that there was a long running transaction or something else
that prevented VACUUM from cleaning up. For that, the output of
"VACUUM (VERBOSE) shortened_url" would be interesting.
> Additional details
> PostgreSQL version: 14.7 on db.t3.micro RDS
> PG configuration: Default of RDS
We can only speak about real PostgreSQL...
Yours,
Laurenz Albe
The initial data load is fast because you have not yet exceeded your
I/O quota, and then I/O is throttled.
Yours,
Laurenz Albe
> how to resolve these waits?
Today, the only feasible solution is not to create more than 64 subtransactions
(savepoints or PL/pgSQL EXCEPTION clauses) per transaction.
Don't use extensions or the JDBC driver option to simulate statement-level
rollback; that is the road to hell.
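A sketch of the pattern to avoid, assuming a hypothetical table t(x int);
the loop opens one subtransaction per iteration, far more than the 64 that
each backend can cache:

DO $$BEGIN
   FOR i IN 1..100 LOOP
      BEGIN  -- each BEGIN ... EXCEPTION ... END block is a subtransaction
         INSERT INTO t (x) VALUES (i);
      EXCEPTION WHEN others THEN
         NULL;  -- swallow errors, as statement-level rollback emulation does
      END;
   END LOOP;
END;$$;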
Yours,
Laurenz Albe
On Fri, 2024-02-02 at 02:04 -0800, Nikolay Samokhvalov wrote:
> On Thu, Feb 1, 2024 at 04:42 Laurenz Albe wrote:
> > Today, the only feasible solution is not to create more than 64
> > subtransactions
> > (savepoints or PL/pgSQL EXCEPTION clauses) per transaction.
>
>
are available from the index. In this example, x is not needed except in
the context f(x), but the planner does not notice that and concludes that
an index-only scan is not possible. If an index-only scan seems sufficiently
worthwhile, this can be worked around by adding x as an included column,
for example
CREATE INDEX tab_f_x ON tab (f(x)) INCLUDE (x);
Yours,
Laurenz Albe
Make sure that your database transactions are short.
Don't use table or row locks to synchronize application threads.
What you could use to synchronize your application threads are advisory locks,
they are not tied to a database transaction.
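For example, session-level advisory locks (the key 42 is arbitrary):

SELECT pg_advisory_lock(42);     -- blocks until the lock is free
-- ... critical section ...
SELECT pg_advisory_unlock(42);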
Yours,
Laurenz Albe
No, that is not a problem. Keeping *connections* open is a good thing. It is
keeping data modifying transactions, cursors or long-running queries open
that constitutes a problem.
Yours,
Laurenz Albe
versions of PostgreSQL.
The problem is that you bind the query parameters with the wrong data types.
Don't use "setBigDecimal()", but "setLong()" if you want to bind a "bigint".
An alternative is "setObject()" with "targetSqlType" set to "Types.BIGINT".
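You can see the effect in SQL (a hypothetical table with an index on the
bigint column "id"):

EXPLAIN SELECT * FROM mytable WHERE id = 42::numeric;  -- casts "id", no index
EXPLAIN SELECT * FROM mytable WHERE id = 42::bigint;   -- can use the index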
Yours,
Laurenz Albe
> plan_cache_mode=force_custom_plan or 2) some other parameters can
> workaround this issue?
You can set "prepareThreshold" to 0 to keep the JDBC driver from using
prepared statements in PostgreSQL. I am not sure if that is enough to
fix the problem.
Yours,
Laurenz Albe
> 62,080 instead
> of 1,292,010).
Looking at the samples you provided, I get the impression that the
statistics for the table are quite outdated. That will affect the
estimates. Try running ANALYZE and see if that improves the estimates.
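For example (substitute your table name):

ANALYZE mytable;
SELECT last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'mytable';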
Yours,
Laurenz Albe
l partitions, except that sometimes PostgreSQL
can forgo scanning some of the partitions.
If you use very many partitions, the overhead for query planning and
execution can become noticeable.
Yours,
Laurenz Albe
1681 loops=3)
Why does it take over 41 seconds to read a table with less than
3 million rows? Are the rows so large? Is the table bloated?
What is the size of the table as measured with pg_relation_size()
and pg_table_size()?
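That is (substitute the table name):

SELECT pg_size_pretty(pg_relation_size('mytable')) AS heap_only,
       pg_size_pretty(pg_table_size('mytable'))    AS incl_toast;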
Yours,
Laurenz Albe
er. I have 128GB memory.
> 1. Any advice or thoughts?
> 2. Is there any other parameter that can accelerate index creation?
It is safe as long as you have enough free memory on the machine.
You can verify with tools like "free" on Linux (look for "available" memory).
Yours,
Laurenz Albe
er | 43
> 2024-04-08 09:00:07.114015+00 | DataFileRead | 28
> 2024-04-08 09:00:07.114015+00 | ClientRead | 11
> 2024-04-08 09:00:07.114015+00 | | 11
That's quite obvious: too many connections cause internal contention
in the database.
Reduce the number of connections by using a reasonably sized connection pool.
Yours,
Laurenz Albe
If you can, you could "VACUUM (FULL) videos" and see if that makes a difference.
If you can bring the table size down, it will speed up query performance.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
restore from a backup.
Did you experience any crashes recently? Is your storage reliable?
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
ALTER INDEX location_search_tsvector_idx SET (fastupdate = off);
Then clean the pending list with
SELECT gin_clean_pending_list('location_search_tsvector_idx'::regclass);
Disabling the pending list will slow down data modification, but should
keep the SELECT performance stable.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
average 20 items per block. That is few, and the index indeed seems bloated.
Looking at the read times, you average out at about 1 ms per block
read from I/O, but with that many blocks that's of course still a long time.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
ad from the file system cache",
but you say you observe a difference even after dropping the cache.
To verify if the difference comes from the physical placement, you could
run VACUUM (FULL) which rewrites the table and see if that changes the behavior.
Another idea is that the operating system rearranges I/O in a way that
is not ideal for your storage.
Try a different I/O scheduler by running
echo deadline > /sys/block/sda/queue/scheduler
(replace "sda" with the disk where your database resides)
See if that changes the observed I/O speed.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
it is on IPv6, make sure that IPv6 is up, otherwise
that would explain why you have no accurate statistics.
Are there any log messages about statistics collection?
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
You can just manually vacuum all tables once - if all it
checks is whether it has *ever* been vacuumed.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
on1_.id
LEFT JOIN fm_order order2_ ON order2_.id = order0_.consumer
LEFT JOIN fm_session session3_ ON order2_.session_id = session3_.id
WHERE coalesce(order2_.id, 1) > 0
AND /* all the other conditions */;
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
else the PGDG would not release it.
There is no need to upgrade via v10, I recommend that you upgrade from 9.6
to v11 directly, either via dump/restore or with pg_upgrade.
https://www.postgresql.org/docs/current/upgrading.html
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
It tells the optimizer how likely it is to
find index data cached if the index is used repeatedly, so it
is not important to get the value exactly right.
Yours,
Laurenz Albe
Execution Time: 2467.848 ms
EXPLAIN (ANALYZE, BUFFERS) SELECT t FROM sort2 ORDER BY t COLLATE "en_US.utf8";
[...]
Execution Time: 2927.667 ms
This is on my x86_64 Fedora 29 system, kernel 4.20.6, glibc 2.28.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
e cost_limit / cost_delay ?
Maybe configuring autovacuum to run faster will help:
alter table orig_table set (toast.autovacuum_vacuum_cost_limit = 2000);
Or, more extreme:
alter table orig_table set (toast.autovacuum_vacuum_cost_delay = 0);
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
> ents is set to 300. It means that when the
> total_size of
> the pg_xlog directory will reach 1GB, checkpoint will be forced but old wal
> files won't be recycled/deleted?
Checkpoints are not forced by the size of pg_xlog, but by the amount of WAL
created since the last checkpoint.
ured
> > together is a mistake right ?
> > In that case, if you have both configured, and you set wal_keep_segments to
> > 0, the db should
> > delete all the unused wals ?
It is pointless to have both a replication slot and wal_keep_segments, yes.
Setting wal_keep_segments
eport1_201711_ccd on
> call_report1_201711 a_1 (cost=0.14..8.16 rows=1 width=2190) (actual
> time=0.029..0.034 rows=7 loops=1)"
> "Index Cond: ((call_created_date >= '2017-11-01'::date)
> AND (call_created_date <= '2017-11-30'::date))"
> "Planning Time: 20.866 ms"
> "Execution Time: 1.205 ms"
There is no condition on the table "call_report2" in your query,
so it is not surprising that all partitions are scanned, right?
You have to add a WHERE condition that filters on the partitioning
column(s) of "call_report2".
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
ink ? Am I
> wrong here ?
Increasing cost_limit or reducing cost_delay improves the situation.
cost_delay = 0 makes autovacuum as fast as possible.
Yours,
Laurenz Albe
You can set STATISTICS up to 10000, but don't forget that high values
make ANALYZE and planning slower.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
rent
session.
What is "n_live_tup" and "n_dead_tup" in "pg_stat_user_tables" for these tables?
Are there any autovacuum workers running currently?
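You can check both with the standard statistics views:

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;

SELECT pid, query FROM pg_stat_activity WHERE query LIKE 'autovacuum:%';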
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
de=$8
> <<
>
> how to get the content of the bind variables ?
Can we see the whole log entry and the following one?
Perhaps there was a syntax error or similar, and the statement was never
executed.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
> It is the same problem for all the statements.
> I can not get the content of the bind variables.
You should set "log_error_verbosity" back from "terse" to "default".
Then you will see the DETAIL messages.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
.
*/
    if (!(rel->rd_rel->relkind == RELKIND_RELATION ||
          rel->rd_rel->relkind == RELKIND_MATVIEW))
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("\"%s\" is not a table or materialized view",
session:

test=> SELECT NULL IS NOT DISTINCT FROM 21580;
 ?column?
----------
 f
(1 row)

test=> SELECT NULL = 21580;
 ?column?
----------

(1 row)
One expression is FALSE, the other NULL.
It doesn't matter in the context of your specific query, but it could matter.
Yours,
Laurenz Albe
Start with shared_buffers being the minimum of a quarter of the
available RAM and 8 GB.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
ocess 10541.
Process 10541 waits for ShareLock on transaction 77323; blocked by process 10517.
HINT: See server log for query details.
CONTEXT: while deleting tuple (0,3) in relation "b"
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
> I should add 8 to the NUM_OF_CONNECTIONS to have a new value for the work_mem
> in order to consider queries that run in parallel..
Yes, but don't forget that one query can use "work_mem" several times if the
execution plan has several memory intensive nodes.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
> Please clarify and let me know if I misunderstand the concept.
shared_buffers only determines the shared memory cache, each database
process still needs private memory.
As a rule of thumb, start with shared_buffers set to 1/4 of your
available RAM, but no more than 8GB.
Yours,
Laurenz Albe
Duncan Kinnear wrote:
> Bingo! Adding 'SET LOCAL synchronous_commit TO OFF;' to my
> 'BEGIN; UPDATE ; COMMIT;' block has given me sub-1ms timings! Thanks Andres.
That's a pretty clear indication that your I/O subsystem was overloaded.
Yours,
Laurenz Albe
rows".
Parallelism currently cannot be used if there is a limit on the row count.
Imagine you want ten rows and already have nine: if two workers are busy
calculating the next row, there is no good way to stop one of them when
the other returns a row.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
remote logins
with that user.
But for your application users LDAP authentication is a fine thing, and not
hard to set up if you know a little bit about LDAP.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
that hooks into PostgreSQL, but I
have no idea how hard that would be.
Yours,
Laurenz Albe
es.
> Are there any differences in managing connections in Oracle and postgres.
I'd say that is way too high in both Oracle and PostgreSQL.
Set the value to 50 or 100 and get a connection pooler if the
application cannot do that itself.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
too many connections on
too few table rows?
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
each other is unavoidable, and all database management systems I know do it the same
unavoidable, and all database management systems I know do it the same
way.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
()", that is, an aggregate without arguments.
"count(1)" has to check if 1 IS NULL for each row, because NULL
values are not counted. "count(*)" doesn't have to do that.
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
6704 | table_B
> (5 rows)
>
> for unclear reason, both table A and table B depends on the sequence.
> When I check table_A I don't see any column that might use it.
Could you select all rows from pg_depend so that it is easier to see
what is going on?
Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com