Dear Adrian,
I appreciate your reply. It gave me a new idea: the problem is likely not
that the lower() function causes the unique index to fail.
I checked the postgresql.conf file and found that shared_buffers, work_mem and
maintenance_work_mem are at their default values,
but in the pos
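For concreteness, a minimal sketch of the kind of expression index being
discussed (the table and column names here are hypothetical):

CREATE TABLE users (name text);
CREATE UNIQUE INDEX users_name_lower_idx ON users (lower(name));
-- The planner considers this index only when the query uses the same expression:
EXPLAIN SELECT * FROM users WHERE lower(name) = 'adrian';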
milist ujang writes:
> On Fri, Aug 19, 2022 at 11:02 AM Julien Rouhaud wrote:
>> What it means is that you likely have data corruption. You could try
>> to reindex pg_attribute and see if that fixes that specific problems,
>> but even if it does you will have to investigate how data got
>> corru
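A sketch of the suggested repair (hedged: take a backup first, and depending
on the damage, reindexing system catalogs may need to happen in single-user
mode or with ignore_system_indexes enabled):

REINDEX TABLE pg_catalog.pg_attribute;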
On Fri, Aug 19, 2022 at 11:02 AM Julien Rouhaud wrote:
> Hi,
>
> Please don't top-post on this list (and please trim quoted messages too).
>
>
>
> No, this function is comparing pg_class.relnatts to the rows fetched by
> querying pg_attribute for the given relation.
>
> any concern if I update pg_class.
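A diagnostic query along those lines might look like this (a sketch; 150243
is the relid from the error reported later in this thread):

SELECT c.relnatts,
       (SELECT count(*)
          FROM pg_catalog.pg_attribute a
         WHERE a.attrelid = c.oid
           AND a.attnum > 0) AS pg_attribute_rows
  FROM pg_catalog.pg_class c
 WHERE c.oid = 150243;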
Hi,
Please don't top-post on this list (and please trim quoted messages too).
On Fri, Aug 19, 2022 at 05:55:03AM +0700, milist ujang wrote:
> On Thu, Aug 18, 2022 at 5:33 PM milist ujang wrote:
> >
> > 3rd query ERROR same as subject (ERROR: catalog is missing 3 attribute(s)
> > for relid 15024
Apparently, just leaving it alone until the next day allowed the index
creation/drop to finish. Thank you all very much.
On Thu, Aug 18, 2022 at 5:00 PM hubert depesz lubaczewski
wrote:
> On Thu, Aug 18, 2022 at 01:57:48PM +0800, Abdul Qoyyuum wrote:
> > Hi list,
> >
> > We have a running Master-
On 8/18/22 14:54, W.P. wrote:
[snip]
But you are right, I am taking backups irregularly.
A cron job.
--
Angular momentum makes the world go 'round.
My version is 12.9 on x86_64.
Reading the source code, I found the message comes from the
RelationBuildTupleDesc() function:
/*
* end the scan and close the attribute relation
*/
systable_endscan(pg_attribute_scan);
table_close(pg_attribute_desc, AccessShareLock);
if (need != 0)
    elog(ERROR, "catalog is missing %d attribute(s) for relid %u",
         need, RelationGetRelid(relation));
I am reasonably sure the answer to this is "no", but can the oldest xmin that
hot_standby_feedback sends to the primary also delay SIReadLocks cleanup?
Going through the code, it looks like they're independent, but this isn't a
part of the system I know particularly well.
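For what it's worth, SIReadLocks are visible in pg_locks, so one way to watch
whether they accumulate while hot_standby_feedback is holding back the xmin
horizon is a query like this (a diagnostic sketch):

SELECT locktype, relation::regclass AS rel, pid, mode
  FROM pg_locks
 WHERE mode = 'SIReadLock';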
On 18.08.2022 at 16:46, Laurenz Albe wrote:
On Thu, 2022-08-18 at 08:39 +0200, W.P. wrote:
I have a PostgreSQL 11 server on an OrangePi3 (ARM 64-bit, Armbian, PG from
the distro).
The database (the whole cluster) is located on a USB disk. This approach has
already lost me the DB contents twice (it is a repli
On 8/18/22 09:39, Anant ngo wrote:
Hello
Is there a postgres extension or project related to
application-level/foreign-table data caching? The postgres_fdw
extension fetches data from the foreign table for each command.
I have seen previous messages in the archive about caching in the form of
global t
Hello
Is there a postgres extension or project related to
application-level/foreign-table data caching? The postgres_fdw extension
fetches data from the foreign table for each command.
I have seen previous messages in the archive about caching in the form of
global temp tables, query cache, etc. There are go
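One workaround that fits this description, sketched here with hypothetical
names (ft_orders is assumed to be an existing postgres_fdw foreign table), is
to cache the foreign table behind a local materialized view:

CREATE MATERIALIZED VIEW orders_cache AS
    SELECT * FROM ft_orders;
-- Re-run whenever the cached copy may be stale:
REFRESH MATERIALIZED VIEW orders_cache;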
On 8/18/22 02:50, gzh wrote:
Dear Tom,
Thanks for your reply.
Please refer to the information below:
I ran the following SQL in PostgreSQL 8.2 and PostgreSQL 12.5, and it
returns different execution plans.
There are ~13 years of improvements to the planner and the database as a
whole; I would mo
On Thu, 2022-08-18 at 08:39 +0200, W.P. wrote:
> I have a PostgreSQL 11 server on an OrangePi3 (ARM 64-bit, Armbian, PG from
> the distro).
>
> The database (the whole cluster) is located on a USB disk. This approach
> has already lost me the DB contents twice (it is a replica of a DB on an i7).
>
> But the whole th
Yes, pgbackrest seems the best solution for faster backup and restore.
We can close the discussion here for now; replaying a million WAL files is
just slow, and that is normal.
On Thu, Aug 18, 2022 at 12:10 PM Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:
> Sorry for top posting, from p
But you should do that according to the documentation:
...
After restoring a backup, it is wise to run ANALYZE on each database so the
query optimizer has useful statistics.
...
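In its simplest form that is just the following, run while connected to each
restored database:

ANALYZE;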
Dear Adrian,
Thanks for your reply.
> Did you run ANALYZE on the 12.5 server after restoring the data to it?
No, I did not run ANALYZE on the 12.5 server after restoring the data.
When I change the select clause like this (crew_base.crewid → count(*)), I
can retrieve the number of data r
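Presumably the change was along these lines (a sketch; the full query is not
shown in the thread, and crew_base/crewid are the names it mentions):

SELECT crew_base.crewid FROM crew_base;  -- original select list
SELECT count(*) FROM crew_base;          -- changed form, returns the row count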
Dear David,
Thanks for your reply.
> In addition to that, I couldn't help notice that the quoted SQL does
> not seem to belong to the explain. The EXPLAIN has a Limit node, but
> the query does not. I'm assuming this isn't due to the relations being
> views since we don't pull up subqueries with a L
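For context, a Limit node appears in EXPLAIN output only when the query, or a
view or subquery it references, actually contains LIMIT/OFFSET; a sketch
using the table name from this thread:

EXPLAIN SELECT crew_base.crewid FROM crew_base LIMIT 10;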
Hi list,
I have an index anomaly on a table; I am getting the error in the subject.
Dumping the queries behind \dS+ for a table, I got 4 queries (see details
below):
1st query --> OK
2nd query --> OK
3rd query ERROR same as subject (ERROR: catalog is missing 3 attribute(s)
for relid 150243)
comment on 3rd query
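For reference, the per-column query psql runs behind \d-style commands is
roughly of this shape (a paraphrase, not psql's exact text; 150243 is the
relid from the error):

SELECT a.attname,
       pg_catalog.format_type(a.atttypid, a.atttypmod)
  FROM pg_catalog.pg_attribute a
 WHERE a.attrelid = 150243
   AND a.attnum > 0
   AND NOT a.attisdropped
 ORDER BY a.attnum;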
Dear Tom,
Thanks for your reply.
Please refer to the information below:
PostgreSQL 8.2
[root@PostgreSQL8 ~]# cat /etc/redhat-release
CentOS release 6.6 (Final)
[root@PostgreSQL8 ~]# locale | grep LANG
LANG=ja_JP.UTF-8
PostgreSQL 12.5
[root@Postgr
Sorry for top posting; I am on my phone.
pgbackrest helped with exactly that. With compression and parallel
processes during the backup, the backup and restore were quick. I used this
approach, taking a backup and immediately doing a restore so there were
fewer WALs to replay; otherwise WAL replay is indeed slow.
On Thu, Aug 18, 2
On Thu, Aug 18, 2022 at 01:57:48PM +0800, Abdul Qoyyuum wrote:
> Hi list,
>
> We have a running Master-Slave High Availability setup. Naturally, we
> can't run any changes on the read-only databases on the slave, so we have
> to do it on the master node.
>
> When trying to run the following command:
>
On Thu, Aug 18, 2022 at 08:39:27AM +0200, W.P. wrote:
> Is it possible to move the DB tables etc. to this internal storage (a more
> reliable connection) and put only the indexes on the USB HDD?
Sure. There is a thing called a tablespace, which is, basically, a directory
where the files for db objects reside.
You can specif
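A sketch of what that looks like in SQL (the path and names here are
hypothetical; the directory must already exist and be owned by the postgres
OS user):

CREATE TABLESPACE internal_ts LOCATION '/var/lib/postgresql/ts_internal';
-- Move a table onto the internal storage:
ALTER TABLE my_table SET TABLESPACE internal_ts;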
Thank you, people. The big problem in my case, which I have not mentioned,
is that I think the network is a bottleneck, because I am running
pg_basebackup over the internet from my local country to an Amazon instance
in Germany, and the copy speed is around 50 MB/sec max; that is why it
takes 2 days