On Wed, Apr 30, 2025 at 7:15 AM Luca Ferrari wrote:
> Rationale: I'm using ansible to configure, thru different steps,
> PostgreSQL instances and I would like to have every part to append its
> configuration on the previous one.
>
Ansible is good for bringing your systems to a known consistent state.
On Mon, Apr 21, 2025 at 9:03 AM KK CHN wrote:
>
> ERROR: [082]: WAL segment 000102200038 was not archived before
> the 6ms timeout
>
...
> How can I make the full backup command not to check the WAL was archived
> or not to the repo server for at least once?
>
You cannot. WAL
On Fri, Apr 18, 2025 at 5:18 PM Jeremy Schneider
wrote:
> but i wanted to confirm with someone - my easiest options for online
> verifying checksums would be to trigger pg_basebackup then check
> pg_stat_database, or to install michael's utility?
>
Your easiest option is to just use pgbackrest,
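As a quick sketch of the pg_stat_database route mentioned in the question (available on clusters with data checksums enabled, Postgres 12 and later):

```sql
-- Check whether data checksums are enabled, then watch for failures.
SHOW data_checksums;

SELECT datname, checksum_failures, checksum_last_failure
FROM pg_stat_database
WHERE checksum_failures > 0;
```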
On Tue, Apr 15, 2025 at 7:20 AM Karsten Hilbert
wrote:
> > The survey [...] is completely anonymous.
>
> Nope.
>
Can you elaborate on this, please, for my sake and others?
On Mon, Apr 7, 2025 at 5:32 AM KK CHN wrote:
> ERROR: [082]: WAL segment 000101EB004B was not archived
> before the 6ms timeout
>
This is the part you need to focus on. Look at your Postgres logs and find
out why the archiver is failing. You can also test this without trying a
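A minimal way to exercise archiving by hand (assuming superuser or equivalent privileges) is to force a WAL switch and then watch the archiver stats:

```sql
-- Force the current WAL segment to be handed to the archiver.
SELECT pg_switch_wal();

-- Then check whether it was archived, and what the last failure was.
SELECT archived_count, last_archived_wal,
       failed_count, last_failed_wal, last_failed_time
FROM pg_stat_archiver;
```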
On Wed, Mar 19, 2025 at 10:02 AM E-BLOKOS wrote:
> is it possible a crash happened with a VACUUM and a machine reboot in same
> time?
>
More likely to be a problem with pg_repack. Please tell us the exact
versions of pg_repack and Postgres in use here.
Cheers,
Greg
--
Crunchy Data - https://www.crunchydata.com
First figure out which database is having that issue, by using pg_dump
--schema-only on each database in turn. Then run this SQL on the database
giving the error to see if the type exists, or what is nearby:
select oid, typname, typtype, typnamespace::regnamespace from pg_type where
oid <= 794978
On Thu, Mar 13, 2025 at 11:49 PM Durgamahesh Manne <
maheshpostgr...@gmail.com> wrote:
> To return one row takes 43ms is not optimal
>
It's actually much faster than that, but even 43ms is overall good. The
query is already pretty optimal, as it uses a single index only scan. There
are a few tric
On Tue, Mar 11, 2025 at 9:48 PM Siraj G wrote:
> What are the features available in Postgresql to hide PII (personal
> identifiable information) from the Admin team?
>
Can you explain your threat model here, and who exactly the "Admin team" is
and what access they have? As a general rule of thum
On Tue, Mar 11, 2025 at 2:35 PM Paul Foerster
wrote:
> The question was a bit of an idea. So the glibc version is not known yet,
> but I'm highly confident that they will differ. A reindex could in theory
> be possible in most cases, but is a definite show stopper on some of our
> databases, beca
A reindex is not going to remove rows from the table, so we need to see how
you came to the conclusion that it did.
Cheers,
Greg
On Tue, Mar 11, 2025 at 10:29 AM mark bradley
wrote:
> An "interesting" effect of reindexing is that all the records that were
> dups in the nodes table were deleted, both copies.
>
Er...that's not just interesting, but alarming - if true. Can you show the
steps you took?
Cheers,
Greg
On Tue, Mar 11, 2025 at 3:06 AM Achilleas Mantzios asked:
> is it still harder than the trigger ?
>
I think the trigger wins: no extension needed, arguably better error
output, easier to understand at a glance, and can quickly change the
business logic by adjusting the function. Pretty short too.
Take a look at the pg_stat_archiver view, if you have not already:
https://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-ARCHIVER-VIEW
> So I want to check how many wal file got created in x minutes and how many
> .ready files got created in those x minutes.
>
It's not c
On Fri, Mar 7, 2025 at 9:35 AM mark bradley wrote:
> This is what MS Copilot has to say about this apparent bug where Postgres
> inserts extra rows violating a primary keys uniqueness constraint:
>
> Yes, this issue has been encountered by others. There are a few potential
> reasons why this migh
On Thu, Mar 6, 2025 at 6:49 AM chandan Kumar
wrote:
> need any correction or advise.
>
Honestly, this all seems overly complex and fragile. I'm not sure what the
overall goal is, but if it's to have a general PITR solution, use
pgBackRest. If it's just to have a fall back method for a particular
CVE-2025-1094 has a narrow blast radius. If you are not directly affected,
I would focus your efforts on getting to 17. But the lack of an existing
process to smoothly upgrade minor revisions is worrying and something that
needs to get addressed as well.
Cheers,
Greg
On Wed, Mar 5, 2025 at 9:44 PM me nefcanto wrote:
> Anyway, that's why I asked you guys. However, encouraging me to go back to
> monolith without giving solutions on how to scale, is not helping.
>
We did. In addition to the ongoing FDW discussion, I mentioned read-only
replicas and Citus. As fa
On Wed, Mar 5, 2025 at 10:22 AM Marc Millas wrote:
> Then the flow contains a single line: \. to my understanding this means
> end of the copy
>
> but, Postgres generates an error : invalid input syntax for type numeric
> "\."
>
This can happen when you send TWO backslashes and a dot, rather than one.
On Wed, Mar 5, 2025 at 7:15 AM me nefcanto wrote:
> I think if we put all databases into one database, then we have blocked
> our growth in the future.
>
I think this is premature optimization. Your products table has 100,000
rows. That's very tiny for the year 2025. Try putting everything in one database.
FYI the -bugs thread in question:
https://www.postgresql.org/message-id/flat/CAEHBEOBCweDWGNHDaUk4%3D10HG0QXXJJAGXbEnFLMB30M%2BQw%2Bdg%40mail.gmail.com
seems to imply the primary blocker was a unique constraint.
Cheers,
Greg
That's harmless, it is the walsender process, and it is meant to be
long-running. You can modify your query and add this:
AND backend_type = 'client backend'
to filter out any background processes.
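Put together, a filtered session query might look like this (columns illustrative):

```sql
-- List only real client sessions, ignoring walsender and other
-- background processes.
SELECT pid, usename, state, query
FROM pg_stat_activity
WHERE backend_type = 'client backend';
```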
Cheers,
Greg
On Thu, Feb 27, 2025 at 1:32 PM Tom Lane wrote:
> > -c 'ALTER USER timeshift_user PASSWORD '"'"'timeshift_pass'"'"';'
> > I am still trying to work out what that quoting is doing?
>
That's standard for -x output for some versions of bash. FWIW, none of the
shells I had access to output it quite
On Wed, Feb 26, 2025 at 2:21 AM sivapostg...@yahoo.com <
sivapostg...@yahoo.com> wrote:
> issue in PostgreSQL 15.7
>
Still missing a ton of bug fixes - Postgres 15 is on version 15.12. Try to
get that upgraded.
> We got the following error
> ERROR: SSL error: bad length
> SSL SYSCALL error: No
On Tue, Feb 25, 2025 at 8:51 AM wrote:
> Does this sound familiar anyone? Any ideas what might prompt Postgres to
> stop?
>
Postgres needs full read and write access to its own data directory. If it
doesn't, it will eventually PANIC, as your logs show.
See if you can figure out what else happened.
On Tue, Feb 25, 2025 at 7:03 AM sivapostg...@yahoo.com <
sivapostg...@yahoo.com> wrote:
> 1. Why is corruption happening? How to find out the reason?
>
It sounds more like user error than actual corruption, but without more
data we cannot determine. Show us the exact commands you ran, along wit
On Mon, Feb 24, 2025 at 11:50 AM Dominique Devienne
wrote:
> We lookup whether there's a list of aliases for "Allison". If there are,
> we send them in $3 as an array of strings (e.g. ['All', 'Alli', ...]), and
> the first one matching (thanks to order by ord limit 1) is returned, if any.
>
Thank
On Mon, Feb 24, 2025 at 4:46 AM Dominique Devienne
wrote:
> But now we have a new requirement, for "fuzzy find". I.e. the client can
> ask for names
> which are not the exact in-DB names, but also aliases of those names.
>
...
> join unnest($3::text[]) with ordinality as aliases(name, ord) on c.
On Sat, Feb 22, 2025 at 8:58 PM Tobias McNulty
wrote:
> "Without this option a refresh which affects a lot of rows will tend to
> use fewer resources"
...
> either that (1) the refresh operation actually updates the contents of a
> lot of rows in the materialized view
This is the correct interpretation.
Since you are willing to break the all one transaction rule, and if the
restores were created via pg_dump, you could use the --section argument to
split things up, run the "pre-data" sections serially, and the rest ("data"
and "post-data") concurrently.
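A rough sketch of that split, with a hypothetical dump file and job count:

```shell
# Restore the schema serially first, then data and post-data in parallel.
# "mydb.dump" and --jobs=4 are placeholders for your own values.
pg_restore --section=pre-data  --dbname=mydb mydb.dump
pg_restore --section=data      --jobs=4 --dbname=mydb mydb.dump
pg_restore --section=post-data --jobs=4 --dbname=mydb mydb.dump
```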
Cheers,
Greg
Not related to the main question, but that query could use a little
adjustment. Something like:
WITH x AS (
  select kelt from javaink_forgalma
  where en_kaptam is true and az_aru_neve = 'nyugdíjam'
  order by kelt desc limit 2
),
y AS (select min(kelt) from x),
z AS (select max(kelt) from x)
INS
Another variation:
select avg(max) from (select distinct max(val) over(partition by id) from
mytable);
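Purely as a sanity check of the aggregation logic, the same "average of each id's maximum" can be mirrored outside the database (data made up for illustration):

```python
# Mirror of: select avg(max) from
#   (select distinct max(val) over (partition by id) from mytable);
# Take each id's maximum val, then average those maxima.
rows = [("a", 1), ("a", 5), ("b", 3), ("b", 7)]  # (id, val) pairs

per_id_max = {}
for id_, val in rows:
    per_id_max[id_] = max(val, per_id_max.get(id_, val))

avg_of_max = sum(per_id_max.values()) / len(per_id_max)
print(avg_of_max)  # (5 + 7) / 2 = 6.0
```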
Cheers,
Greg
On Mon, Feb 17, 2025 at 2:55 AM 馬 騰飛 wrote:
> Interestingly, when we isolate the problematic SQL statement and replace
> its parameters with actual values, it executes in just a few seconds in
> pgAdmin.
> However, when we run the same SQL query through our application using
> Npgsql, it takes ov
On Sun, Feb 16, 2025 at 5:58 PM Marcelo Fernandes
wrote:
> - The foreign keys are not being updated to point to the new table.
>
You started out okay with your test script, but the pg_depend bit needs
work. I would recommend examining that table closely until you have a
really good understanding
On Fri, Feb 14, 2025 at 1:02 AM Michał Kłeczek wrote:
> Create index concurrently and then fiddle with the catalog tables to
> define the constraint using this index?
>
You mean an ALTER TABLE ... ADD CONSTRAINT ... EXCLUDE without actually
doing an ALTER TABLE. Nope, that's far worse than the p
On Fri, Feb 14, 2025 at 12:41 AM Laurenz Albe
wrote:
> Moreover, you have to make sure to send out invalidation messages so that
> every session that caches statistics or
> execution plans for the tables discards them.
Hmm...is that really necessary? Because if so, there is no direct SQL-level
On Thu, Feb 13, 2025 at 6:06 PM Marcelo Fernandes
wrote:
> > It's technically possible to do something similar for your use case, but
> it's not trivial. All the cab to trailer wires must be precisely changed.
> Everything directly related to the data must be swapped: heap, indexes,
> toast.
>
>
On Wed, Feb 12, 2025 at 9:02 PM Marcelo Fernandes
wrote:
> What I am after is the same, but I seek a deeper understanding of what it
> does, and why it does it. For example, it swaps relfilenode. Why?
It is surgically replacing all pointers to the old data with pointers to
the new data. Yes, wi
I'm not sure why we are focused on the other errors - the database fails to
get dropped (or created), so future errors are to be expected.
pg_restore should be run with the --exit-on-error flag, and handle the
errors one by one as someone mentioned upthread.
I would use the --section=pre-data --s
On Tue, Feb 4, 2025 at 1:50 PM Ayush Vatsa
wrote:
> Also, what would be the best way to ensure that, by default, no roles
> (except the function owner) have any privileges on new functions created in
> my protected schema?
>
Create them in another schema altogether, then move it to my_schema once
On Wed, Jan 22, 2025 at 2:00 AM Runxi Yu wrote:
> I therefore propose a feature, to be able to specify in a table schema
> that a row should be deleted if orphaned.
>
I think you mean "childless" rows, as "orphaned" has a different meaning
traditionally.
When and how would this deletion take place?
You might also look into using a bitmap, for some or all of those fields.
It depends on how many distinct values each can have, of course, and also
on how exactly they are accessed, but bitmaps can save you quite a bit of
space.
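As an illustration of the bitmask idea (table name and bit assignments are hypothetical):

```sql
-- Pack three booleans into one integer column.
-- bit 0 = active, bit 1 = trial, bit 2 = verified
ALTER TABLE accounts ADD COLUMN flags integer NOT NULL DEFAULT 0;

-- Mark account 42 as active (1) and verified (4): 1 | 4 = 5
UPDATE accounts SET flags = 5 WHERE id = 42;

-- Find rows with the "verified" bit set:
SELECT id FROM accounts WHERE flags & 4 = 4;
```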
Cheers,
Greg
On Mon, Dec 16, 2024 at 5:32 AM 張宸瑋 wrote:
> We have both regular accounts and system accounts. For regular accounts,
> we still require password complexity and the lockout functionality after
> multiple failed login attempts.
>
Again, what is the threat model here? Most people have their passwo
On Wed, Dec 11, 2024 at 1:44 PM Ron Johnson wrote:
> Isn't this a pretty common password setting? I know that for at least 35
> years, and going back to the VAX/VMS days I've been locked out for X hours
> if I typed an invalid password. Same on Windows and I think also Linux
> (though ssh publ
On Wed, Dec 11, 2024 at 5:46 AM 張宸瑋 wrote:
> In the use of the Credcheck suite, the parameter
> "credcheck.max_auth_failure = '3'" is set in the postgresql.conf file to
> limit users from entering incorrect passwords more than three times, after
> which their account will be locked.
>
Won't that
On Mon, Dec 9, 2024 at 6:43 AM Joe Wildish wrote:
Overall, your solution seems okay, but:
> a fix has gone in to PG17 that sorts this problem.
>
> However, we can't go to 17 yet, so need a solution for 15 and 16.
Honestly, this would seem like a really, really strong reason to push for
v17.
On Wed, Dec 4, 2024 at 7:42 AM Bharani SV-forum
wrote:
> a) is the above said steps is correct with the given existing and proposed
> setup
>
No. Here are some steps:
* Install Postgres on the new VM
However you get it, use the newest version you can. As of this writing, it
is Postgres 17.2. Ve
On Sat, Nov 23, 2024 at 1:10 PM Bruce Momjian wrote:
> and say bounce the database server and install the binaries. What I
> have never considered before, and I should have, is the complexity of
> doing this for many remote servers. Can we improve our guidance for
> these cases?
>
Hmm I'm not
On Mon, Nov 11, 2024 at 6:23 AM David Lynam
wrote:
> Are there any plans or discussions about adding native support for
> SQL:2011 temporal tables, so we don’t need extensions?
No concrete plans I've heard of (but see below). For the record, "so we
don't need extensions" is not a winning argument.
As a superuser, rename pg_stat_reset inside one of the commonly affected
databases:
alter function pg_stat_reset rename to
hey_stop_running_pg_stat_reset_already;
Then see who starts complaining. Additionally, your server log will get
helpful entries like this:
ERROR: function pg_stat_reset() does not exist
As far as the application being able to change those fields itself, you can
prevent that via column permissions, by leaving out the four audit columns
and doing something like:
GRANT INSERT (email, widget_count), UPDATE (email, widget_count) ON TABLE
foobar TO PUBLIC;
That way, inserts are guaran
Some of those requirements are vague, but yes, Patroni should probably be
the first approach you look at. If the second datacenter is just for
redundancy, then a simple setup would be:
DCA (data center A):
Postgres server 1
Postgres server 2
DCB:
Postgres server 3 (set no_failover: true)
You wil
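The no_failover marker above is set per node in the Patroni configuration; roughly (YAML, exact keys may vary by Patroni version):

```yaml
# On Postgres server 3 in DCB: keep it as a replica that is never promoted.
tags:
  nofailover: true
  noloadbalance: false
```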
>
> Yes. Looks like the DBD::Pg module is what needs the upgrade.
>
Specifically, you will need DBD::Pg version 3.9.0 or higher, but I *highly*
recommend using the latest release you can (right now, that is 3.18.0)
Cheers,
Greg
What problem are you trying to solve? If you tell us that, we can guide you
to some better solutions.
There are numerous issues here, but the most important are:
1) Calling a shell via \! invokes an entirely new process: there is no link
to the parent or grandparent process
2) The run-bash-funct
Start by seeing where the port is actually being set by running this:
select setting, source, sourcefile, sourceline from pg_settings where name
= 'port';
Cheers,
Greg
On Fri, Nov 1, 2024 at 2:40 AM Muhammad Usman Khan
wrote:
> For immediate space, move older files from pg_Wal to another storage but
> don't delete them.
>
No, do not do this! Figure out why WAL is not getting removed by Postgres
and let it do its job once fixed. Please recall the original poste
I'd echo the suggestion to strace this. You can use the pre_auth_delay
setting to help facilitate that. See:
https://www.postgresql.org/docs/current/runtime-config-developer.html
Cheers,
Greg
>
> Additionally in the plan which mysql makes and showing the highest
> response time, is it suffering because of differences of the speed of the
> underlying IO/storage or is it just because of the optimization features
> which are available in postgres and not there in mysql ? Trying to
> unders
To be frank, there is so much wrong with this query that it is hard to know
where to start. But a few top items:
* Make sure all of the tables involved have been analyzed. You might want
to bump default_statistics_target up and see if that helps.
* As mentioned already, increase work_mem, as you
On Fri, Oct 11, 2024 at 9:28 AM Durgamahesh Manne
wrote:
> composite key (placedon,id)
> In concurrent mode if i use id at where clause then query plan for that id
> column changes
>
> How to mitigate it rather than use seperate index for id to continue
> without change in query plan (index scan)
(please start a new thread in the future rather than replying to an
existing one)
You cannot query on b and use an index on (a,b) as you observed. However,
you can have two indexes:
index1(a)
index2(b)
Postgres will be able to combine those when needed in the case where your
WHERE clause needs t
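A sketch with a hypothetical table t(a, b):

```sql
-- Two single-column indexes; Postgres can use either alone, or
-- combine them with a bitmap scan when the WHERE clause touches both.
CREATE INDEX t_a_idx ON t (a);
CREATE INDEX t_b_idx ON t (b);

-- Can use t_b_idx alone (unlike a composite index on (a, b)):
SELECT * FROM t WHERE b = 10;

-- Can combine both indexes via BitmapAnd:
SELECT * FROM t WHERE a = 1 AND b = 10;
```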
>
> if we have any column with large string/text values and we want it to be
> indexed then there is no choice but to go for a hash index. Please correct
> me if I'm wrong.
>
There are other strategies / solutions, but we would need to learn more
about your use case.
Cheers,
Greg
On Wed, Oct 9, 2024 at 4:10 AM Mikael Petterson
wrote:
> Hi,
>
> I find our disk is filling up.
>
> sudo find /var/lib -type f -size +100M -exec ls -lh {} \; | awk '{ print
> $9 ": " $5 }'
>
...
Those files only add up to about 30GB. That's pretty small these days:
time for a bigger disk? Or pe
On Sat, Oct 5, 2024 at 5:03 PM Tom Lane wrote:
> As I mentioned upthread, we currently promise that xact_start matches the
> query_start of the transaction's first statement. (I'm not sure
> how well that's documented, but the code goes out of its way to make it
> so, so somebody thought it was
Adrian and Veem were saying:
> > so does it mean that we should increase the pg_stat_statement.max to
> further higher value?
>
Yes, you should raise this setting if you find your queries are getting
pushed out. Moving to version 17 will also help, as myself and others have
been working on norma
While working on a doc patch for this, I realized that the situation is
worse than I originally thought. This means that anyone relying on
pg_stat_activity.xact_start is not really seeing the time of the snapshot.
They are seeing the time that BEGIN was issued. Further, there is no way to
tell (AFA
On Tue, Oct 1, 2024 at 1:57 AM sud wrote:
> *Where are you getting the ~2000 count from?*
> Seeing this in the "performance insights" dashboard and also its matching
> when I query the count of sessions from pg_stat_activity.
>
So I'm guessing this is perhaps RDS or Aurora? Stating that up fron
On Tue, Oct 1, 2024 at 2:52 AM yudhi s wrote:
> When I execute the query with explain (analyze, buffers),I see the section
> below in the plan having "sort method" information in three places
> each showing ~75MB size, which if combined is coming <250MB. So , does that
> mean it's enough to set t
On Tue, Oct 1, 2024 at 9:53 AM Pecsök Ján wrote:
> We see significant difference in explain analyze Actual time in the first
> line of execution plan and Execution time in the last line of execution
> plan. What can be the reason?
>
>
>
> For example, first line of execution plan:
>
> Gather (co
Please do not spam the mailing lists with requests for follow ups. In this
particular case, you received an answer two days after you posted it. It
went to pgsql-general, pgsql-in-general, and to you directly, so there
seems little excuse for missing it.
Also note that pglogical is a third-par
On Wed, Sep 25, 2024 at 4:54 PM Christophe Pettus wrote:
> On Sep 25, 2024, at 13:49, Greg Sabino Mullane wrote:
> > BEGIN ISOLATION MODE REPEATABLE READ SNAPSHOT NOW;
>
> This might well be a failure of imagination on my part, but when would it
> pragmatically matter th
On Wed, Sep 25, 2024 at 2:13 AM KK CHN wrote:
> PgBackRest : I tried to restore the latest backup taken at my RepoServer
> to a testing EPAS server freshly deployed .
> ...
>
> Now I comment out the archive command in the test EPAS server
> postgresql.conf and started the EPAS server.
>
* To d
> Since transactions should be "as short as possible, without being too
> short", how much time is there between when you run "BEGIN;" and the first
> "work statement"?
I don't know that it really matters. For something automated, it would be a
few milliseconds. Either way, I'm sure most p
On Wed, Sep 25, 2024 at 1:53 PM Tom Lane wrote:
> Because we're not going to analyze the statement in the amount of depth
> needed to make that distinction before we crank up the
> transactional machinery. If it says SELECT, it gets a snapshot.
>
Ok, thanks. So to the original poster's point, p
On Tue, Sep 24, 2024 at 10:28 AM Tom Lane wrote:
> It's even looser than that, really: it's the first statement that
> requires an MVCC snapshot.
Hmm...so why does "SELECT 1;" work as a transaction start marker then, as
opposed to "SHOW work_mem;", which does not? Do we simply consider anything
On Sat, Sep 21, 2024 at 3:08 PM Ramakrishna m wrote:
> I would greatly appreciate any suggestions you may have to help avoid
> logical replication delays, whether through tuning database or operating
> system parameters, or any other recommendations
>
In addition to the things already answered:
On Mon, Sep 23, 2024 at 8:22 AM Asad Ali wrote:
> There is no direct prompt escape sequence like %n for displaying the
> current_role in the psql prompt. However, you can work around this by using
> a \set command to define a custom prompt that includes the result of
> current_role.
>
Please do
You may be able to solve this with advisory locks. In particular,
transaction-level advisory locks with the "try-pass/fail" variant. Here,
"123" is a unique number used by your app, related to this particular
table. You also need to force read committed mode, as the advisory locks go
away after the
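A sketch of the transaction-level, try variant (the key 123 is the arbitrary app-specific number mentioned above):

```sql
BEGIN;
-- Returns true and takes the lock, or false immediately if another
-- transaction already holds it; released automatically at COMMIT/ROLLBACK.
SELECT pg_try_advisory_xact_lock(123);
-- ...do the protected work only if the call above returned true...
COMMIT;
```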
On Thu, Sep 19, 2024 at 5:17 AM veem v wrote:
> 2024-09-18 17:05:56 UTC:100.72.10.66(54582):USER1@TRANDB:[14537]:DETAIL:
> Process 14537 waits for ShareLock on transaction 220975629; blocked by
> process 14548.
>
You need to find out exactly what commands, and in what order, all these
processes
On Tue, Sep 17, 2024 at 8:22 AM Zwettler Markus (OIZ) <
markus.zwett...@zuerich.ch> wrote:
> pg_dump -F p -f dump.sql …
>
> sed -i "s/old_name/new_name/g"
>
> psql -f dump.sql …
>
Why not rename afterwards? Just "pg_dump mydb | psql -h newhost -f -" and
rename things via ALTER. Certainly much safer.
On Mon, Sep 16, 2024 at 11:56 PM veem v wrote:
> So what can be the caveats in this approach, considering transactions
> meant to be ACID compliant as financial transactions.
>
Financial transactions need to be handled with care. Only you know your
business requirements, but as Christophe pointe
On Fri, Sep 13, 2024 at 11:35 AM Wong, Kam Fook (TR Technology) <
kamfook.w...@thomsonreuters.com> wrote:
> 1) Where does query parsing occur?
>
>
Always on the server side, although your driver may do something as well.
> 2) Will this cause extra parsing to the posgress DB?
>
Yes.
> Any pg sys
On Thu, Sep 12, 2024 at 9:21 AM Andreas Joseph Krogh
wrote:
> Yes, it *is* theater, but that doesn't prevent “compliance people” to
> care about it. We have to take measures to prevent “information leaks”.
>
*shrug* Then the compliance people are not good at their jobs, frankly.
But if it works
On Thu, Sep 12, 2024 at 9:12 AM Dominique Devienne
wrote:
> On Thu, Sep 12, 2024 at 3:06 PM Greg Sabino Mullane
> wrote:
> > (Also note that determining if a database or user exists does not even
> require a successful login to the cluster.)
>
> Hi. How so? I was not aware
On Thu, Sep 12, 2024 at 12:52 AM Andreas Joseph Krogh
wrote:
> I know PG is not designed for this, but I have this requirement
> nonetheless…
> I think preventing “most users and tools" from seeing/presenting this
> information is “good enough”.
>
As pointed out, there are very many workarounds.
On Wed, Sep 11, 2024 at 1:02 AM Durgamahesh Manne
wrote:
> Hi
> createdat | timestamp with time zone | not null | now() | plain
> modified  | timestamp with time zone | not null | now() | plain
I'd go with option 2 (jsonb), as it's likely already well supported by your
applications, while the other approaches will require a good bit of
customization. JSONB can be indexed, so performance should be on par with
"traditional" tables.
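For example, a GIN index supports the jsonb containment operator (table and column names hypothetical):

```sql
-- Index a jsonb document column for containment queries.
CREATE INDEX docs_payload_gin ON docs USING gin (payload);

-- Served by the index via the @> containment operator:
SELECT id FROM docs WHERE payload @> '{"status": "active"}';
```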
Cheers,
Greg
>
> Dumping changes periodically, sending them directly or uploading to cloud
> storage and then downloading and applying them on the subscriber side.
> But maybe there's a simpler option someone here knows about?
How about using WAL shipping to populate a replica, and either query that
directly
Any updates on this?
A few replies from me inline:
On Wed, Aug 28, 2024 at 12:18 PM Jacob Biesinger
wrote:
> There aren't many details in the docs around failure modes... is there
> anything there that could cause this issue?
>
Nothing that I know of, but it's possible there is some sort of we
On Wed, Sep 11, 2024 at 6:14 AM Fabrice Chapuis
wrote:
> status = active
>> wait event = NULL
>> wait event type = NULL
>>
>
That seems quite unlikely. Perhaps you are seeing the pg_stat_activity
query itself? Try this:
select state, now()-state_change, wait_event_type, wait_event, query
from pg_stat_activity;
Your questions are a little too vague to answer well, but let me try a bit.
> 1) In the query below, if the optimizer chooses tab1 as the driving table,
> the index on just col1 should be enough or it should be (col1, tab1_id)?
>
No way to tell without trying it yourself. We need information on how
On Fri, Sep 6, 2024 at 7:05 AM Xavier Solomon
wrote:
> > explain select b_id from b natural left join a;
> results in a `Seq Scan on b`. Whereas the query
> > explain select b_id from b natural join a;
> results in a join with sequential scans on both a and b.
>
I think your example is a little
>
> database size: 146.9GB, database backup size: 146.9GB
> repo1: backup size: 20.6GB
It looks to me as though everything is working as expected. You took a full
backup of your system, which was around 147GB - most of which is in a
tablespace. It got compressed down to 20GB. You then took two in
On Thu, Aug 29, 2024 at 9:31 AM KK CHN wrote:
> "Unable to acquire lock on file '/tmp/pgbackrest/Repo-backup.lock'
>
When this happens, take a look inside this file. If there is another
pgbackrest process running, the pid will be inside that file. Kill that
process before trying to run another b
On Thu, Aug 29, 2024 at 2:21 AM KK CHN wrote:
> I am doing a full backup using PgBackRest from a production server to
> Reposerver.
>
...
> If so, does the backup process start again from scratch ? or it
> resumes from where the backup process is stopped ?
>
It resumes. You will see a m
On Tue, Aug 27, 2024 at 9:03 PM Jacob Biesinger
wrote:
> I'm scratching my head at a few rows in the root DB, where it seems the
> corresponding tenant transaction rolled back, but the root DB transaction
> committed
>
...
> Before I jump into particulars, does this sound like expected behavior?
On Wed, Aug 28, 2024 at 1:39 AM KK CHN wrote:
> In this DB serverI have other databases than the default "edb"
> database. Specifying the above line aspg1-database=edb // I am
> not sure this line is necessary or not ?
>
The pgbackrest process needs to connect to the database, w
On Tue, Aug 27, 2024 at 9:20 AM Dominique Devienne
wrote:
> Once again, this is late, although my original questions are now 2 weeks
> old.
> After all, PQsocketPoll() has not been released yet officially. Thanks,
> --DD
>
As this is so new, you might have better luck on -hackers than here. I've
On Thu, Aug 22, 2024 at 8:49 AM o1bigtenor wrote:
>
>
> On Thu, Aug 22, 2024 at 6:24 AM Ron Johnson
> wrote:
>
>> That's great on small databases. Not so practical when they're big.
>>
>> So - - - - what is the recommended procedure for 'large' databases?
>
Use a real backup system like pgBackRest.
On Fri, Jul 19, 2024 at 3:22 AM Laurenz Albe
wrote:
> Why not say that authors who repeatedly post grossly counterfactual or
> misleading content can be banned?
>
I like this, and feel we are getting closer. How about:
"Posts should be technically and factually correct. Use of AI should be
used