PG does not have a concept of differential backup since it does not track block-level changes. pgBackRest has implemented differential backup using the timestamp of the last update in data files. Not sure whether it works on Windows.
Aurora PG based on PG 13.4
Our CREATE INDEX CONCURRENTLY on a very large partitioned table (5 billion rows)
waits on Lock: SpeculativeToken. I have never seen this wait in PG, and a Google search
also shows nothing.
Is this an Aurora thing? If it is PG related I can provide full details.
On a different note, is the word "memoize" inspired by the Perl module Memoize, which is used
to do the same thing?
Our issue is that it takes 20 hours to index the full table. Hopefully we can add the
FK on multiple child partitions concurrently, otherwise doing it per partition
offers no advantage from a performance point of view.
Need to test. Hopefully PG will not lock the referenced table during the first
build, stopping c
AWS Aurora based on PG 13.
Large partitioned table of 5+ billion rows and 7TB in size.
ALTER TABLE abc ADD CONSTRAINT fk_123 FOREIGN KEY (a,b,c) REFERENCES xyz(1,2,3);
It seems this is not parallelized. Is there a way? Or is going directly into each
partition the only way (not even sure it is po
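For what it is worth, a rough per-partition sketch of how I would try it (partition, constraint
and referenced column names below are made up): add each partition's FK as NOT VALID, which is
quick and takes only a brief lock, then run the VALIDATE steps in separate sessions, since
VALIDATE CONSTRAINT needs only weaker locks and validations of different partitions do not block
each other. Note this gives per-partition constraints rather than one constraint on the parent.
ALTER TABLE abc_p001 ADD CONSTRAINT fk_123_p001
    FOREIGN KEY (a,b,c) REFERENCES xyz(a,b,c) NOT VALID;  -- fast, no table scan
ALTER TABLE abc_p002 ADD CONSTRAINT fk_123_p002
    FOREIGN KEY (a,b,c) REFERENCES xyz(a,b,c) NOT VALID;
-- in separate sessions:
ALTER TABLE abc_p001 VALIDATE CONSTRAINT fk_123_p001;     -- long scan, weaker locks
ALTER TABLE abc_p002 VALIDATE CONSTRAINT fk_123_p002;     -- can run at the same time as the one above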
> You can commit in a loop, but not in a BEGIN / END block that has an exception
> handler: that creates a subtransaction for the duration of the BEGIN / END.
The reason I have to deal with the exception is that I want to ignore a failure
on one table and move on to the next table.
I thought I could tric
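In case it helps, here is a minimal sketch of that shape (schema and table handling are invented
for illustration): the exception handler lives in an inner block inside the loop, and the COMMIT
sits after that block in the outer body, where it is allowed.
CREATE OR REPLACE PROCEDURE copy_tables()
LANGUAGE plpgsql
AS $$
DECLARE
    t text;
BEGIN
    FOR t IN SELECT tablename FROM pg_tables WHERE schemaname = 'src' LOOP
        BEGIN
            -- per-table work; a failure only aborts this inner block
            EXECUTE format('CREATE TABLE tgt.%I (LIKE src.%I INCLUDING ALL)', t, t);
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'skipping table %: %', t, SQLERRM;
        END;
        COMMIT;  -- legal here: we are outside the block that has the exception handler
    END LOOP;
END;
$$;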
AWS Aurora based on PG 13
I am writing a sproc to copy a schema into another. Here is the relevant
portion of the code.
Basically I want to commit after every table is created. In big schemas with
hundreds of tables I do not want to run the entire operation in one transaction.
I am getting error a
https://www.postgresql.org/docs/current/infoschema-table-constraints.html
"The view table_constraints contains all constraints belonging to tables
that the current user owns or has some privilege other than SELECT on."
Thank you, that explains it.
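A quick illustration of that rule, with made-up role and table names: granting any privilege
other than SELECT makes the rows show up.
GRANT INSERT ON some_schema.some_table TO report_user;
-- connected as report_user, this now returns rows for that table:
SELECT constraint_name, constraint_type
FROM information_schema.table_constraints
WHERE table_schema = 'some_schema' AND table_name = 'some_table';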
Any idea why a SELECT on this table does not yield any output for a user who
otherwise gets output for other information_schema tables? Does this table
require any special privilege compared to the other tables of
information_schema? Thanks
LANGUAGE 'edbspl'
This is the root cause of your issue.
You are not using PL/pgSQL, but EDB's version of it, which is compatible with Oracle
PL/SQL.
No. PostgreSQL may remove a dead row, but a dead row is by definition
no longer visible, so it wouldn't be found by a query.
I am wondering whether it is good practice to use CTID in a WHERE
clause. Years ago, when I used to code in Informix, using ROWID as a
generic substitute for a primar
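For context, about the only ctid-in-WHERE pattern generally considered safe is one-off duplicate
removal within a single statement (table and column names below are assumed); ctid can change on
any UPDATE or VACUUM FULL, so it should never be stored or reused across statements.
DELETE FROM orders o
WHERE o.ctid IN (
    SELECT ctid
    FROM (SELECT ctid,
                 row_number() OVER (PARTITION BY order_no) AS rn
          FROM orders) d
    WHERE d.rn > 1);   -- keeps one arbitrary row per duplicate key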
I've really only ever worked in web development. 90+% of web
developers regard doing anything at all clever in the database with
suspicion.
One common argument they use is that if you write your business logic in
stored procedures, you are locked into that database, since stored procedure
languag
select count(1) from snapshotlist where id not in (select id from q);
count
---
0
(1 row)
Doesn't this usually happen if q.id contains a NULL? That is as per the ANSI
standard.
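A sketch of the pitfall and the usual workarounds, reusing the table names from the query above:
if the subquery returns even one NULL, id NOT IN (...) can never evaluate to true, so the count
comes back 0.
SELECT count(*) FROM snapshotlist s
WHERE NOT EXISTS (SELECT 1 FROM q WHERE q.id = s.id);        -- NULL-safe
SELECT count(*) FROM snapshotlist
WHERE id NOT IN (SELECT id FROM q WHERE id IS NOT NULL);     -- or filter the NULLs explicitly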
>
>
> How to take an incremental backup on a PostgreSQL Windows machine?
>
AFAIK PG has no concept of incremental backup. pgBackRest has, but I am not sure
whether it runs on Windows.
>
> Hello Team,
> Good day!!!
>
> We are using PostgreSQL as the source for Power BI reports. We have established the
> connection and created a test dashboard in the Power BI report. But we have
> installed a trial of Progress DataDirect PostgreSQL.
> So now we need to use the full version of DataDirect Pos
https://www.postgresql.org/docs/current/sql-notify.html
https://www.postgresql.org/docs/13/sql-listen.html
>
> Initially pgsql_tmp size was 87 GB, and after execution of the query it
> started increasing beyond 87 GB, so yes, the pgsql_tmp directory size is
> getting increased while exporting data into the csv file.
>
> But once I rolled back the query, the pgsql_tmp directory was back to
> 87 GB, so please
This is a very interesting case. Atul, keep us posted.
I am not sure about that
"It creates a new empty table , followed by rename of the existing table to the new
empty table and finally dropping of the old table."
You mean the table is re-created with a new oid?
I don't think the oid changes, but the relfilenode on disk changes. So let me
rephrase i
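Easy to check, for what it is worth (demo table name assumed): the oid stays the same across
TRUNCATE while the on-disk relfilenode changes.
SELECT 'demo'::regclass::oid, pg_relation_filenode('demo');
TRUNCATE demo;
SELECT 'demo'::regclass::oid, pg_relation_filenode('demo');  -- same oid, new relfilenode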
Truncate is not delete + vacuum.
It creates a new empty table, followed by a rename of the existing table to the
new empty table, and finally dropping of the old table.
On May 28, 2021 at 7:05 AM, Vijaykumar Jain
wrote:
Yes,
I too see growth when text type is used, but not when int or even
This looks like an early vs. late binding problem, also seen in other products. When you prepare the SQL, the optimizer
has no way of knowing the values that are going to be supplied in the future. So it is possible that at prepare time
PG settles on a plan and uses it for all values, regardles
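A small sketch of what that looks like in PG (table and column names assumed): after a handful
of executions the server may switch a prepared statement to a cached generic plan that ignores
the actual parameter value; from PG 12 on, plan_cache_mode lets you force the behaviour either way.
PREPARE q(text) AS SELECT * FROM orders WHERE status = $1;
EXECUTE q('rare_status');                 -- early executions are planned with the real value
-- after ~5 executions a cached generic plan may take over, whatever the value is
SET plan_cache_mode = force_custom_plan;  -- PG 12+: always re-plan with the actual parameter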
>There is no error message, when I try to connect the database while
>running vacuumdb with 300 jobs, it gets stuck.
But you mentioned max_connections, which now seems to be a red herring.
Based on your description, the impression I got was that you are getting
"sorry, too many clients already" e
>The CPU and RAM are normal even with 300 jobs (only 1-4% consumption),
>but I don't understand one thing here: if max_connections is set to 700,
>then why am I not able to connect to the db? The running jobs (300) are fewer than
>half of max_connections.
Please paste the error message.
> Everyone is free to use whatever he/she wants. For me a web-based MUA
> would be the worst thing ever.
Oh well. I have created a separate email account for this to keep the clutter
out.
Thanks to all who took the time to reply to this.
>Is there a central place where I can get Postgres blogs, as they are
>written on different blog sites? E.g. can the Google News app be tuned
>to just bring Postgres blogs only?
I use https://planet.postgresql.org/
I am planning to switch to a web-based tool to read this mailing list. While
reading is easy via the web, how do I post a reply from the web? I recollect there used
to be a website from which one could reply from the web.
Thanks
In SQL Server each db has its own data file and transaction log file and hence
can be copied the way you described.
> Limitation of Backup utility. Transfer to another server was done
> successfully. When we want to re-copy to original server,
>we have to delete the original db in the original se
>
>
> I have a table in an analytics database (Postgres 12.3), that gathers data
> continuously. It is at 5B rows, with an average row size of 250 bytes. The
> table has five indexes, on bigint and varchar columns, all with keys of one
> or two columns.
>
> There are currently frequent update
>
> Experience shows that global indexes in Oracle lead to problems when dropping a
> partition, rebuilding an index, or other such nice administrative stuff,
> often leading to unnecessarily long downtimes.
>
>
I think Oracle fixed it later by allowing asynchronous update of the global index
afte
> My understanding is that when CONCURRENTLY is specified, Postgres implements
> the refresh as a series of INSERT, UPDATE,
> and DELETE statements on the existing view. So the answer to your question is
> no, Postgres doesn’t create another table and
> then swap it.
The INSERT/UPDATE/DELETE ha
>I haven't replaced the broken View2 yet.
>Hope someone can point me to some further investigation.
Did you look at the actual definition of view2? Like all RDBMSs, PG materializes
the DDL as it existed at the time of creation and converts it into inline
SQL.
There is a possibility it ma
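If it helps, the stored definition can be pulled back out like this (view name taken from the
message above; quote it if it was created with a mixed-case name):
SELECT pg_get_viewdef('view2'::regclass, true);   -- the definition exactly as PG rewrote and stored it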
I can’t for the life of me imagine how you arrived at this. SQLite is very
capable indeed.
>Its dialect of SQL is (deliberately) very similar to Postgres, featuring such
>niceties as
>recursive CTEs and window functions, and it can handle heavy use and
>multi-terabyte
> databases if you need (cf
This is assuming other sessions change the same block your session is trying to
read.
===
It's been a while since I worked with Oracle as a developer. But my
understanding
is that even a read-only transaction, like the one you described above, requires
a point in time consistent image of th
>>Interesting. Are you saying the Oracle version of the code had no
>>intermediate COMMITs and relied on one final COMMIT at the end? Even
>>in Oracle, developers must have planned for commits, since a long-running
>>open transaction can lead to a "snapshot too old" error.
>Yes, I am saying just
This is the problem. A and B were developed for Oracle, where SELECT does not
open a transaction. We moved them to PG
and now we have to very accurately add COMMITs without breaking the flow. It is
quite a complex thing. I hoped we could
avoid that.
Interesting. Are you saying the Oracle vers
Both can handle concurrent writes. Auto-increment is nothing but serial or
sequence columns, and they can handle unique concurrent requests. That is why
sometimes you may have gaps. UUIDs are not only unique, but are also unique across
space. You can have two different databases generate UUIDs at the sa
>
> The main problem here is that "Amazon Aurora" is not PostgreSQL.
> If I understand Amazon's documentation, what you are using is
> officially named "Amazon Aurora with PostgreSQL Compatibility",
> and that sums it up quite nicely: Aurora is a database engine
> developed at Amazon - and it's in
On 7/13/2020 4:52 AM, Rama Krishnan wrote:
For example, two transactions are going to increase the amount on the
same account by $100 . The first transaction reads the current value
($1000) and then the second transaction reads the same value. The first
transaction increases the amount (thi
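The scenario above is the classic lost update. One hedged sketch of avoiding it (table and column
names invented): lock the row while reading it, or simply do the arithmetic inside the UPDATE
itself, which is atomic.
BEGIN;
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;    -- the second transaction now waits
UPDATE accounts SET balance = balance + 100 WHERE id = 42;
COMMIT;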
>
> But if you want to log in with encrypted password and someone can grab
> it from the file not sure what the difference is from grabbing the plain
> text one if they both end up logging the user in?
Exactly. Saved me the trouble of typing this.
> That's what I understood as well, but I'm creating those concurrently
> WITHIN one and the same session and transaction. :-)
Should I interpret this as "two different sessions via application threads within
the same PG transaction"?
Does the application create its own PG session for each thread, or
>> Plus PG does not directly support cross database queries using 3 part name,
>> something
>> sqlserver excels at.
>Maybe because SQL server does not have real databases but schemas instead ?
>This sucks security wise.
SQL Server has real databases, each with its own transaction log file. You can
r
>
> If the data size is more than 6TB, which approach is better?
Do you require cross-tenant queries? If yes, then schemas are a better
solution.
>
> Eh, that's something that I think we should be looking at supporting, by
> using FDWs, but I haven't tried to figure out how hard it'd be.
>
How good will that be performance-wise?
In DB2 you can do it using dblinks and that kills performance. Isn't FDW
something like dblink?
The cool part
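For reference, a minimal postgres_fdw sketch (all names and connection options here are invented):
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER other_db FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'localhost', dbname 'otherdb');
CREATE USER MAPPING FOR CURRENT_USER SERVER other_db
    OPTIONS (user 'app', password 'secret');
CREATE SCHEMA remote_public;
IMPORT FOREIGN SCHEMA public FROM SERVER other_db INTO remote_public;
SELECT count(*) FROM remote_public.some_table;   -- "cross-database" query via the FDW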
>
> Generally speaking, I discourage having lots of databases under one PG
> cluster for exactly these kinds of reasons. PG's individual clusters
> are relatively lightweight, after all.
>
Plus PG does not directly support cross-database queries using a 3-part name,
something SQL Server excels
Oracle has been losing market share consistently and irreversibly for the last 4-5
years. It is not only due to migration to open-source
RDBMSs, but also due to the fact that there are now many alternatives to an RDBMS
for data storage. Until about 10-15 years back,
if an application had to store data, then an RDBM
>
> The database/schema per tenant solution can be tedious when you want to
> modify something on the structure and you have numerous tenants.
> Therefore I used the "tables with tenant_id" version in a similar situation
> but with a slight twist. One of the biggest issue of this solution is tha
Why should the backup land in S3, and not local somewhere?
Any good reason why one should pay for the additional storage and
transfer costs?
Good question. The key point in my statement was "db of this size".
The problem with local backup is that space is not infinite. If your
business requ
IMO a database of this size should only be backed up to S3. pgBackRest
has support for backup to S3.
On 5/14/20 12:37 AM, Santhosh Kumar wrote:
Can you please help me understand why the following news is published
in "postgresql" with an encouraging message acknowledging BDR as open
source?
In my opinion it is not a bright idea to not have support for any
product. Support is an inde
The pgm does an INSERT; after this the row is there and can be seen
with SELECT; then I CLOSE a non-existent cursor, which rolls back
the INSERTed data:
I have not done coding in ESQL/C in a long time, but shouldn't that be
expected, as any error should trigger a rollback?
> On May 6, 2020, at 10:52 AM, Ashish Chugh
> wrote:
>
> Hello Ravi,
>
>
> The total number of indexes is 10 and the size is 65 GB. Shall we consider this a
> normal scenario, or do we need to look into the growth of the indexes, as this is
> increasing day by day and the table data is not increasing
>
> Hello Ravi,
>
> The total number of indexes is 10, and 65GB is the sum total of the index sizes of all
> indexes for table “tstock_movement”.
> I am attaching the screenshot for your reference.
>
In that case 65GB is not surprising.
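For reference, a couple of quick size checks, using the table name from this thread:
SELECT pg_size_pretty(pg_table_size('tstock_movement'))   AS table_size,
       pg_size_pretty(pg_indexes_size('tstock_movement')) AS total_index_size;
SELECT indexrelname, pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE relname = 'tstock_movement'
ORDER BY pg_relation_size(indexrelid) DESC;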
> On May 6, 2020, at 5:48 AM, Ram Pratap Maurya
> wrote:
>
> Hi Team,
>
> We are facing a problem in our PostgreSQL production database related to
> abnormal growth of index size. Some of the indexes are showing abnormal growth
> and the index size is larger than the table data size.
> One table is
> - Is dbeaver a good frontend for pg administration?
It is an excellent SQL client tool and I use it heavily for Redshift,
SQLServer, Snowflake and PG.
However, it is not a DBA tool in the sense that it cannot show DBA-specific details and
graphs as pgAdmin does.
>
> Do you have a setup similar to Oracle RAC in Postgres core? I found it in EDB
> but didn't find anything in Postgres core. We are looking at setting up
> replication with no outage, where the other node will be up if the primary is
> down. Any help would be great 😊
Oracle RAC is based on shared
How do we get the notifications produced by RAISE INFO in PL/pgSQL through JDBC?
I can get them in psql, but not in JDBC.
I am talking about Redshift, which has the same procedural language as
Postgres.
Thanks
>
> If you have mirrored disks, and you write junk (e.g, because of
> a flaw in a fibre channel cable, something I have witnessed),
> then you have two perfectly fine copies of the junk.
>
A few years ago, didn't this happen to Salesforce, where a firmware bug corrupted
the disks, resulting in corru
How can I determine what the data type of the value element is?
Perl has a ref function which can tell what type of object a reference points to.
https://perldoc.perl.org/functions/ref.html
"If they enter any data, those data also need to be taken into account in this
report. "
Please read up on PG's MVCC architecture. In SQL Server, unless you enable its poor
implementation of snapshot isolation,
you can't achieve the same, so it makes sense there. In PG it is easy to
ensure that you
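In PG the usual way is simply to run the whole report inside one REPEATABLE READ transaction,
which sees a single consistent snapshot no matter what other connections write in the meantime.
A minimal sketch (report table name invented):
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM daily_entries;    -- report queries go here
SELECT sum(amount) FROM daily_entries;
COMMIT;                                -- both queries above saw the same snapshot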
>
> Before taking a few reports, we need to ensure that only one connection is
> made to the database and all other computers need to close the connection to
> that database. This is to prevent any additional entry(ies) that could be
> made during the course of the report taking. This single
>
> "already suffering from a complex of coding in an unreadable language"
>
> hearsay and conjecture
>
I was being somewhat facetious. But what finished Perl was the reputation it earned
as a write-only
language. I don't think that reputation is without merit.
IMO the early adop
>
> Not that I know of. pgAdmin is just another client so you just have the
> methods available here:
>
> https://www.postgresql.org/docs/12/auth-pg-hba-conf.html
>
Will the application_name column from pg_stat_activity list pgAdmin as "pgadmin"? If
yes, then the closest one can achieve
is to have
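Quick way to check what pgAdmin (or anything else) actually reports there:
SELECT pid, usename, application_name, client_addr
FROM pg_stat_activity
ORDER BY backend_start;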
>
> Depending on who wrote the code, they may have extracted the rows
> as hashrefs rather than arrays; that can be a 10x slowdown right
> there. [I have no idea why so many people are so addicted to storing
> rows in hashes, but it is always a significant slowdown; and
> array slices are no more
>
> If rows are added, deleted etc from specific child tables of a partition is
> it necessary when performing an analyze to analyze the entire partition or
> just the children that experienced the row changes?
Under the hood each partition is a separate table, so why is analyzing the entire
paren
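In other words, something like this should be enough (partition name assumed):
ANALYZE measurements_2023_01;   -- just the child that changed
ANALYZE measurements;           -- the parent: samples all partitions for the partition-tree stats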
>
> > My initial idea was something along the lines of :
> > select (select sum(statcount) from stats_residence) as
> > aggstat,statcount,short_name_en from stats_residence where
> > aggstat>some_number;
>
> Am I missing something basic. The above can be done using
> GROUP BY and HAVING claus
> My initial idea was something along the lines of :
> select (select sum(statcount) from stats_residence) as
> aggstat,statcount,short_name_en from stats_residence where
> aggstat>some_number;
Am I missing something basic? The above can be done using
GROUP BY and a HAVING clause.
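Presumably something along these lines is meant, if the goal is to keep only the groups whose
total exceeds the threshold (the grouping is a guess; column names are taken from the snippet above):
SELECT short_name_en, sum(statcount) AS aggstat
FROM stats_residence
GROUP BY short_name_en
HAVING sum(statcount) > some_number;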
>In our production, we use postgres 9.5 with streaming replication
>using repmgr, there is a large table of 140GB size which receives
>lots of inserts,
>Is it possible to partition this table in this version of postgres?
>and if so, please, can someone let me know the best way to accomplish
>thi
The -k option is kept precisely for this. The upgrades are pretty fast, but still
involve some downtime; maybe 30-45 minutes tops.
>
> Never, unless you drop and recreate the table. Removing a dropped
> column would change the attnums of following columns, which we
> can't support because the tableoid+attnum is the only persistent
> identifier of a column.
>
> (From memory, operations like VACUUM FULL and CLUSTER will rewri
>> Just to confirm, we are talking about tables from where some cols were
>> deleted in the past, but
>> VACUUM FULL not run on that table, right ?
>
> VACUUM would not change the state of the dropped columns.
>
When does it change?
> On Nov 3, 2019, at 11:03 PM, Steve Baldwin wrote:
>
> Thanks very much for the explanation Tom !! You are correct - there are
> dropped columns in the original.
Just to confirm, we are talking about tables from which some columns were deleted
in the past, but
VACUUM FULL was not run on that ta
>
> I have created indexes with a certain set of columns. Now I want to remove one
> of the columns and add a new column. Can this be done without dropping the index?
>
If your intent is to avoid taking that table offline while the index is being
built, you can do this:
1. create index concurrently on
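Presumably the rest of that recipe looks something like this (index and column names assumed):
CREATE INDEX CONCURRENTLY idx_new ON my_table (col_a, col_c);   -- the new column set
DROP INDEX CONCURRENTLY idx_old;                                -- once the new index is valid
ALTER INDEX idx_new RENAME TO idx_old;                          -- optional, to keep the old name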
So, reading the responses from all, here is a SQL test I did on a few RDBMSs:
select 'John'
'Doe' as rrr
from information_schema.tables
limit 1;
PG output
rrr
---
JohnDoe
Oracle and MySQL gave the same output as PG, with no error.
SQL Server: Syntax error near 'as'.
DB2 gave the same error as SQLServ
>
> Simplify:
>
> select 'a'
> db-# 'b';
> ?column?
> --
> ab
> (1 row)
>
> This is not a bug.
>
> https://www.postgresql.org/docs/9.2/sql-syntax-lexical.html
>
> Two string constants that are only separated by whitespace with at
> least one newline are concatenated and effectively tre
We noticed this bug in Redshift. It then occurred to me that this may
very well be a PG bug since Redshift is based on PG.
Reproduced it in Version 11.5
create table bugtest (fld1 char(1)) ;
insert into bugtest values('a');
insert into bugtest values('b');
insert into bugtest values('c');
ins
>
>
>
> Surprisingly (to me), no….
>
> db=# select to_date('20181501','YYYYMMDD');
> to_date
>
> 2019-03-03
The above fails in 11.5: SQL Error [22008]: ERROR: date/time field value out
of range: "20181501"
> postgres=# select to_date('2018150X','YYYYMMDD');
> to_date
> 2019-03-03
> postgres=# select to_date('20181501','YYYYMMDD');
> to_date
>
> 2019-03-03
Is this a cut-and-paste mistake?
> ctrlmdb=> select to_date('2018100X','YYYYMMDD');
> to_date
>2018-10-01
>(1 row)
I am able to reproduce this in 11.5. It seems PG can take a single digit for the day
too:
select to_date('2018109','YYYYMMDD') produces 2018-10-09.
https://postgresapp.com/downloads.html
The link which says PG 12 is actually PG 11.
>
> As the failed primary has more data, how is it possible that the primary is
> committing transactions before they were applied on the standby with
> synchronous_commit=remote_apply?
If I am not mistaken, remote_apply is only available from ver 11.
> More generally: I find this complaint a little confusing. We did not
> consider reporting the "show row contents" DETAIL to the client to be a
> security hazard when it was added, because one would think that that's
> just data that the client already knows anyway. I'd be interested to see
> a
On 6/14/19 10:01 AM, Tiemen Ruiten wrote:
LOG: checkpoint starting: immediate force wait
Does it mean that the DB is blocked until the completion of the checkpoint? Years ago
Informix used to have this issue, until they fixed it around 2006.
Informix use to have this issue until they fixed around 2006.
>
> I was thinking of asynchronously cleaning it up rather than blocking
> DROP/DETACH ... which means you need to keep state somewhere. I don't
> think blocking DROP/DETACH is valuable -- a global index that blocks
> DROP/DETACH until the index is clean serves no useful purpose. (You
> could thi
> On May 14, 2019, at 9:06 AM, Rob Sargent wrote:
>
>
> Which part confused you Ravi?
Same as you, this one
"Am taking directly single backup restore in dev its means in dev server only
restore the databa
>
>
> Not , am saying we have the daily backup and full backup in prod server only
> and there is one database like a4 the db size is 1.5TB.
> so am not restore again in prod .
> Am taking directly single backup restore in dev its means in dev server only
> restore the database in new cluster.
>
> Note - if am taking same prod single database backup and restore in new
> cluster no use for us and it will take more time.
> so business and team they need every 3 weeks for restore in dev server one
> single database and cant we do it in pg_dump and restore .
> They want using pgbackrest t
>
> ==> ldd /usr/lib64/perl5/vendor_perl/auto/DBD/Pg/Pg.so
> linux-vdso.so.1 => (0x7fffddd8f000)
> libpq.so.5 => /usr/lib64/perl5/CORE/libpq.so.5 (0x7f5ecdbd6000)
I may be wrong, but is the above path in the
> and perl DBD also installed but it cant load that is the issue please advise
> me.
What is the output of LD_LIBRARY_PATH on both dev and prod?
What does ls /usr/lib64/perl5/vendor_perl/auto/DBD/Pg/Pg.so return?
If there is no such file, then it means that on the dev server Perl DBD was
installed differently
than on prod. I am assuming LD_LIBRARY_PATH is also correctly set.
Apologies if my advice is wrong. It has been eons since I worked w
>@Ravi
>My company is trying to avoid another vendor lock-in too, that's why we are a bit
>skeptical
>about going to EDB, as once we start using their Oracle compatibility
>feature it will be very difficult to move back to the community edition again.
As far as I know, vendor lock-in applies only if
> EDB or Aurora to Open source Postgres assuming we dont use AWS services OR
> would you suggest to move to Community version from the start by taking
> support/consultancy
> from other companies like 2nd quadrant and etc?
EDB is mainly attractive to Oracle shops who want Oracle compatibility.
> I think the main "gotcha" when I moved from SQL Server to Postgres was
> I didn't even realize the amount of in-line t-sql I would use to just get
> stuff done
> for ad-hoc analysis.
T-SQL is an exceptionally powerful SQL-based language. Add to it the many
functions
SS has. I recently had
more:
1. No db-level backup/restore in PG, at least no easy way.
2. No cross-db query.
> I was wondering if anyone has any tips that are specific for SQL Server
> users? Best features? Known issues? Common rebuttals?
Are you talking about SS-to-PG migration?
Generally SQL Server shops use SS-specific functions and T-SQL heavily, since
they provide very good functionality.
For e
IMO you are using the slowest tool to import.
Just one quick question: why can't you take a cluster backup using any of the
tools available and then drop all
unwanted databases after you import the cluster?
pg_basebackup will do a good job.
>
> Hope Iam detail this time :-)
>
Unfortunately still not enough. Can you post a sample of the data here? And
what command did you use in DB2? Please post the SQL used in DB2 to dump the data.
>
> I don't think we've seen enough representative data to know exactly what the
> backslash is doing. It doesn't appear to be an escape, based on the sole
> example I've seen it appears to be a data separator between first name and
> last name.
>
> It seems increasingly likely to me that you
>
> In what format are you dumping the DB2 data and with what specifications e.g.
> quoting?
>
DB2's export command quotes the data with "". So while loading, shouldn't that
take care of the delimiter-in-the-data issue?
> The above is not the same format as OP's query:
>
> Update tableA set col1 = null and col2 in (1,2);
I did include SET in the SQL; I typed it wrong here.
Oh wait. I see that in both cases it did update the correct target rows, but
the value of col1 for the non-matching rows
is different. In the first case (the one with "and col2"), the non-matching rows also got
updated.
So yes, same behavior as yours.
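For anyone following along, the difference comes from how the two statements parse (names taken
from the quoted SQL; this assumes col1 can hold the boolean/NULL result, as in the test above):
the AND version assigns one boolean expression to col1 on every row, while the WHERE version
restricts which rows are updated.
UPDATE tableA SET col1 = (NULL AND col2 IN (1,2));   -- touches every row
UPDATE tableA SET col1 = NULL WHERE col2 IN (1,2);   -- touches only the matching rows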