On Thu, 9 Jan 2025 at 21:57, Ron Johnson wrote:
> On Thu, Jan 9, 2025 at 11:25 AM veem v wrote:
>
>> Hello,
>> It's postgres version 16.1, we want to convert an existing column data
>> type from integer to numeric and it's taking a long time. The size of the
Hello,
We are on Postgres version 16.1 and want to convert an existing column's data
type from integer to numeric, but it's taking a long time. The table is ~50GB
in size, has ~150 million rows, and is not partitioned.
We tried running a direct ALTER and it runs for hours, so
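Since a direct ALTER rewrites the whole table under an exclusive lock, a common workaround is the add-column-and-backfill pattern. A minimal sketch, with hypothetical table/column names and an illustrative batch size:

```sql
-- Adding a nullable column without a default is instant (no rewrite):
ALTER TABLE big_tab ADD COLUMN amount_new numeric;

-- Backfill in batches to keep locks short and WAL bursts small:
UPDATE big_tab SET amount_new = amount
WHERE  id BETWEEN 1 AND 100000;          -- repeat per id range

-- Once fully backfilled, swap the columns in one short transaction
-- (indexes, defaults, and views on the old column need handling too):
BEGIN;
ALTER TABLE big_tab DROP COLUMN amount;
ALTER TABLE big_tab RENAME COLUMN amount_new TO amount;
COMMIT;
```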
Hi,
As per my understanding, pg_stat_statements stores aggregated statistics of
the query execution history (updated in near real time), and the maximum
number of statements or queries it can track depends on the
pg_stat_statements.max setting (which I am seeing as ~5000 in pg_settings).
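The limit being described can be checked directly; 5000 is in fact the documented default for this parameter:

```sql
SHOW pg_stat_statements.max;

-- or, via pg_settings:
SELECT name, setting
FROM   pg_settings
WHERE  name = 'pg_stat_statements.max';
```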
On Thu, 26 Sept 2024 at 16:33, yudhi s wrote:
> Hello All,
>
> In an RDS Postgres instance we are seeing some SELECT queries that sort 50
> million rows (they have an ORDER BY clause in them); a significant portion
> of the wait events shows as "IO:BufFileWrite" and each runs for ~
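The IO:BufFileWrite wait event generally means the sort is spilling to temporary files because it does not fit in work_mem. A hedged sketch for confirming and mitigating this (the query and the 256MB figure are illustrative, not recommendations):

```sql
-- EXPLAIN ANALYZE shows whether the sort spills to disk
-- ("Sort Method: external merge" instead of "quicksort"):
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_tab ORDER BY created_at;   -- hypothetical query

-- Raising work_mem for just this session may move the sort into memory:
SET work_mem = '256MB';
```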
On Sat, 21 Sept 2024 at 03:47, Peter J. Holzer wrote:
> On 2024-09-20 14:11:38 -0700, Adrian Klaver wrote:
> > On 9/20/24 1:01 PM, veem v wrote:
> > > Able to reproduce this deadlock graph as below. Now my question is,
> > > is this a legitimate scenario in which t
On Thu, 19 Sept, 2024, 8:40 pm Adrian Klaver,
wrote:
> On 9/19/24 05:24, Greg Sabino Mullane wrote:
> > On Thu, Sep 19, 2024 at 5:17 AM veem v
> > This is really difficult to diagnose from afar with only snippets of
> > logs and half-complete descriptions of your
On Thu, 19 Sept 2024 at 17:54, Greg Sabino Mullane
wrote:
> On Thu, Sep 19, 2024 at 5:17 AM veem v wrote:
>
>> 2024-09-18 17:05:56 UTC:100.72.10.66(54582):USER1@TRANDB:[14537]:DETAIL:
>> Process 14537 waits for ShareLock on transaction 220975629; blocked by
>> process 1
On Thu, 19 Sept 2024 at 03:02, Adrian Klaver
wrote:
>
>
> This needs clarification.
>
> 1) To be clear when you refer to parent and child that is:
> FK
> parent_tbl.fld <--> child_tbl.fld_fk
>
> not parent and child tables in partitioning scheme?
>
> 2) What are the table schemas
On Thu, 19 Sept 2024 at 02:01, veem v wrote:
>
> On Wed, 18 Sept 2024 at 05:07, Adrian Klaver
> wrote:
>
>> On 9/17/24 12:34, veem v wrote:
>> >
>>
>> It does if autocommit is set in the client, that is common to other
>> databases also:
>>
>
On Wed, 18 Sept 2024 at 05:07, Adrian Klaver
wrote:
> On 9/17/24 12:34, veem v wrote:
> >
>
> It does if autocommit is set in the client, that is common to other
> databases also:
>
> https://dev.mysql.com/doc/refman/8.4/en/commit.html
>
>
> https://docs.oracle
On Tue, 17 Sept 2024 at 21:24, Adrian Klaver
wrote:
>
> Which means you need to on Flink end:
>
> 1) Use Flink async I/O .
>
> 2) Find a client that supports async or fake it by using multiple
> synchronous clients.
>
> On Postgres end there is this:
>
> https://www.postgresql.org/docs/current/wa
On Tue, 17 Sept 2024 at 18:43, Greg Sabino Mullane
wrote:
>
> This is a better place to optimize. Batch many rows per transaction.
> Remove unused indexes.
>
> flushing of the WAL to the disk has to happen anyway(just that it will be
>> delayed now), so can this method cause contention in the dat
On Tue, 17 Sept 2024 at 03:41, Adrian Klaver
wrote:
>
> Are you referring to this?:
>
>
> https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/dev/datastream/operators/asyncio/
>
> If not then you will need to be more specific.
>
>
Yes, I was referring to this one. So what can be the c
Hi,
One of our applications uses RDS Postgres. In one of our streaming
applications (using Flink), which processes hundreds of millions of
transactions each day, we are doing row-by-row transaction processing for
inserting data into the Postgres database, and a commit is performed for
each row. We are seei
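The usual mitigation for per-row commits is batching: many rows per INSERT and many inserts per transaction, as suggested elsewhere in the thread. A minimal sketch with illustrative names:

```sql
BEGIN;
INSERT INTO txn_tab (txn_id, amount)
VALUES (1, 10.0), (2, 20.0), (3, 30.0);   -- multi-row VALUES list
-- ...more batched inserts...
COMMIT;                                    -- one WAL flush per batch

-- Alternatively, delaying the WAL flush trades durability of the very
-- latest commits for throughput:
SET synchronous_commit = off;
```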
On Tue, 3 Sept 2024 at 01:14, Laurenz Albe wrote:
>
> You can keep the primary key defined on both columns if it is good enough
> for you.
> But it will give you lower guarantees of uniqueness: with that primary
> key, there could
> be two rows with a different timestamp, but the same "txn_id", a
On Mon, 2 Sept 2024 at 19:13, Laurenz Albe wrote:
> On Sun, 2024-09-01 at 01:32 +0530, veem v wrote:
> > due to postgres limitations we are unable to have this unique constraint
> or primary key
> > only on the transaction_id column, we have to include
> transaction_times
On Sun, 1 Sept 2024 at 11:38, veem v wrote:
>
> On Sun, 1 Sept 2024 at 10:03, veem v wrote:
>
>>
>> On Sun, 1 Sept 2024 at 09:13, David G. Johnston <
>> david.g.johns...@gmail.com> wrote:
>>
>>> On Saturday, August 31, 2024, veem v wrote:
>&
On Sun, 1 Sept 2024 at 10:03, veem v wrote:
>
> On Sun, 1 Sept 2024 at 09:13, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Saturday, August 31, 2024, veem v wrote:
>>
>>>
>>>
>>> iii)And then alter the datatype of the
On Sun, 1 Sept 2024 at 09:13, David G. Johnston
wrote:
> On Saturday, August 31, 2024, veem v wrote:
>
>>
>>
>> iii)And then alter the datatype of the partition key transaction_date to
>> DATE in one shot at the table level(which should be fast as its having mor
On Sun, 1 Sept 2024 at 03:58, Adrian Klaver
wrote:
>
> The model is at odds with itself and untenable. If the tables hold
> multiple rows for a given transaction_id then you cannot have a
> PK/Unique constraint on that column. Seems there is a decided lack of
> any planning. The only way I can s
Hello,
We have our transaction tables daily range partitioned on the
transaction_timestamp column, which is of the timestamptz data type, and
they have a composite primary key on (transaction_id, transaction_timestamp).
And we were using an "insert on conflict" for loading data into our system,
which
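A minimal sketch of the layout being described, with hypothetical names:

```sql
CREATE TABLE transaction_tab (
    transaction_id        varchar(36) NOT NULL,
    transaction_timestamp timestamptz NOT NULL,
    amount                numeric,
    PRIMARY KEY (transaction_id, transaction_timestamp)
) PARTITION BY RANGE (transaction_timestamp);

CREATE TABLE transaction_tab_p2024_09_01 PARTITION OF transaction_tab
    FOR VALUES FROM ('2024-09-01') TO ('2024-09-02');

-- "insert on conflict" keyed on the full composite PK:
INSERT INTO transaction_tab (transaction_id, transaction_timestamp, amount)
VALUES ('abc', '2024-09-01 10:00+00', 10.0)
ON CONFLICT (transaction_id, transaction_timestamp) DO NOTHING;
```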
On Tue, 13 Aug 2024 at 19:39, Greg Sabino Mullane
wrote:
> On Sat, Aug 10, 2024 at 5:06 PM Lok P wrote:
>
>
>> Can someone throw some light: in case we get 5-6 hrs of downtime for this
>> change, what method should we choose for this ALTER operation?
>>
>
> We can't really answer that. Only
This helps. Thank you very much.
On Fri, 9 Aug 2024 at 02:15, Greg Sabino Mullane wrote:
> _MM_DD is already setup for sorting, so just do:
>
> SELECT table_name FROM information_schema.tables WHERE table_name ~
> 'table_part_p' ORDER BY 1 DESC;
>
> If you need to grab the numbers:
>
> SELEC
Hi,
We are using Postgres version 15.4. We have a range-partitioned table whose
partition naming convention is generated by pg_partman and is something
like "<table_name>_p_MM_DD".
We have a requirement to extract specific partitions ordered by the date
criteria and also do some operations
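Greg's reply above is truncated; a fuller sketch of that style of query, including pulling the date portion out of the suffix (the regexps and names are illustrative, assuming the pg_partman naming convention described):

```sql
SELECT table_name,
       substring(table_name FROM 'p_(.*)$') AS part_suffix
FROM   information_schema.tables
WHERE  table_name ~ 'table_part_p'
ORDER  BY part_suffix DESC;
```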
On Wed, 24 Jul 2024 at 02:02, Tom Lane wrote:
> Ron Johnson writes:
> > On Tue, Jul 23, 2024 at 4:10 PM veem v wrote:
> >> But we are getting an error while executing saying it cant be executed
> in
> >> transaction block with "CONCURRENTLY". So I
Hi,
It's Postgres version 15.4. We have a requirement to create an index on a
big partitioned table and want to do it online. And to make the script run
in an automated way on any day, through our CI/CD pipeline, we were trying
to write it as below inside a BEGIN/END block, i.e. create index using "O
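CREATE INDEX CONCURRENTLY cannot run inside a transaction block (so not inside BEGIN/END or a DO block), and it is not supported directly on a partitioned table either; the usual per-partition pattern is sketched below with hypothetical names:

```sql
-- Create an invalid index on the parent only:
CREATE INDEX idx_parent ON ONLY transaction_tab (transaction_date);

-- Build the matching index concurrently on each partition,
-- each as a standalone statement outside any transaction:
CREATE INDEX CONCURRENTLY idx_p2024_07_23
    ON transaction_tab_p2024_07_23 (transaction_date);

-- Attach each partition index; the parent index becomes valid
-- once all partitions are attached:
ALTER INDEX idx_parent ATTACH PARTITION idx_p2024_07_23;
```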
On Wed, 12 Jun 2024 at 00:26, hubert depesz lubaczewski
wrote:
>
>
> No, I meant building dynamic queries and then EXECUTE-ing, like docs
> show:
>
> https://www.postgresql.org/docs/current/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
>
> Best regards,
>
> depesz
>
>
My apology, if in
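The dynamic-SQL pattern from the docs depesz links to, as a minimal sketch (the table and column names are hypothetical):

```sql
DO $$
DECLARE
    tbl text := 'audit_orders';   -- would come from a parameter in practice
BEGIN
    -- Identifiers must be interpolated safely with format() and %I:
    EXECUTE format('INSERT INTO %I (deleted_at) VALUES (now())', tbl);
END;
$$;
```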
On Tue, 11 Jun 2024 at 17:03, hubert depesz lubaczewski
wrote:
> On Tue, Jun 11, 2024 at 12:47:14AM +0530, veem v wrote:
> > to be called from ~50 triggers? or any other better approach exists to
> > handle this?
>
> pgaudit extension?
>
> Or just write all the chan
Hi, it's version 15.4 of Postgres. We have a requirement to have auditing
enabled for the delete queries on the base tables. And for that we are
planning to have one audit table created for each base table, and
triggers on each of the base tables to be fired on delete, which will insert
records
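Rather than ~50 near-identical trigger functions, one generic PL/pgSQL function can serve every base table via TG_TABLE_NAME. A hedged sketch with illustrative names, using a single audit table holding deleted rows as JSON:

```sql
CREATE TABLE IF NOT EXISTS delete_audit (
    audit_id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    table_name text        NOT NULL,
    deleted_at timestamptz NOT NULL DEFAULT now(),
    old_row    jsonb       NOT NULL
);

CREATE OR REPLACE FUNCTION log_delete() RETURNS trigger AS $$
BEGIN
    INSERT INTO delete_audit (table_name, old_row)
    VALUES (TG_TABLE_NAME, to_jsonb(OLD));
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

-- One trigger per base table, all pointing at the same function:
CREATE TRIGGER trg_audit_delete
    AFTER DELETE ON some_base_table
    FOR EACH ROW EXECUTE FUNCTION log_delete();
```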
On Sun, 9 Jun 2024 at 09:45, Lok P wrote:
>
> On Sat, Jun 8, 2024 at 7:03 PM veem v wrote:
>
>>
>> There is a blog below (which is for oracle), showing how the index should
>> be chosen and it states , "*Stick the columns you do range scans on
>> last i
Hi,
It's Postgres version 15.4. A table is daily range partitioned on a column
transaction_timestamp. It has a unique identifier which is ideal for the
primary key (say transaction_id); however, as there is a limitation by
which we have to include the partition key as part of the primary key, so
i
On Sat, 13 Apr 2024 at 21:44, Adrian Klaver
wrote:
> On 4/13/24 00:03, veem v wrote:
> > Thank you Adrian.
> >
> > So it seems the heavy DML tables will see an impact if having triggers
> > (mainly for each row trigger) created on them.
> >
> > And also
as it will force it to make it happen row by row, as the
trigger is row based. Will test anyway though.
On Thu, 11 Apr 2024 at 22:00, Adrian Klaver
wrote:
> On 4/11/24 07:31, veem v wrote:
> > Hi, We used to use Oracle database in which we had audit
> > triggers(something as belo
Hi, we used to use an Oracle database in which we had audit triggers
(something as below) mandated for all tables by the control team. Now we are
going to use the PostgreSQL 15.4 database for one of our applications. So,
wanted to understand if there exists any downside to such an audit trigger
setup for a
On Fri, 5 Apr 2024 at 06:10, Adrian Klaver
wrote:
>
> > S3 is not a database. You will need to be more specific about '...
> then
> > from the S3 it will be picked and gets merged to the target postgres
> > database.'
> >
> >
> > The data from S3 will be dumped into the stage table an
On Sun, 24 Mar 2024 at 20:29, Laurenz Albe wrote:
> On Sun, 2024-03-24 at 00:37 +0530, veem v wrote:
> > > Instead, use foreign keys between the partitions.
> >
> > I am struggling to understand how to maintain those partitions then?
> > As because we were planning
On Sat, 23 Mar 2024 at 23:08, Laurenz Albe wrote:
> On Sat, 2024-03-23 at 22:41 +0530, veem v wrote:
> > 1)As we see having foreign key defined is making the detach partition run
> > for minutes(in our case 5-10minutes for 60 million rows partition), so
> > how to
Trying to consolidate the main questions here as below.
1) As we see, having a foreign key defined makes the detach partition run
for minutes (in our case, 5-10 minutes for a 60-million-row partition). So
how do we make the parent table partition detach-and-drop work fast in such
a scenario while maintai
care
drop partition automatically for us?
On Fri, 22 Mar, 2024, 12:42 am veem v, wrote:
> On Thu, 21 Mar 2024 at 23:39, Laurenz Albe
> wrote:
>
>> On Thu, 2024-03-21 at 22:50 +0530, veem v wrote:
>> > So when you mentioned "to create the foreign keys *not* between t
On Thu, 21 Mar 2024 at 23:39, Laurenz Albe wrote:
> On Thu, 2024-03-21 at 22:50 +0530, veem v wrote:
> > So when you mentioned "to create the foreign keys *not* between the
> > partitioned table but between the individual partitions" , can that
> > be done using
On Thu, 21 Mar, 2024, 6:18 pm Laurenz Albe,
wrote:
What you *can* do is detach the partition and then drop it, but detatching
will
be slow because PostgreSQL has to check for referencing rows.
The best solution is to create the foreign key *not* between the partitioned
tables, but between the i
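A minimal sketch of that per-partition foreign key, with hypothetical partition and column names:

```sql
-- The FK is created between matching daily partitions, not between the
-- partitioned parents, so a partition pair can be detached and dropped
-- quickly without a full referencing-row check:
ALTER TABLE child_tbl_p2024_03_21
    ADD CONSTRAINT child_p2024_03_21_parent_fk
    FOREIGN KEY (parent_id)
    REFERENCES parent_tbl_p2024_03_21 (parent_id);
```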
_drop_parent');
OPEN drop_partition_cursor;
LOOP
    FETCH NEXT FROM drop_partition_cursor INTO drop_partition_record;
    EXIT WHEN NOT FOUND;
    -- "drop table <record.column>" is not valid PL/pgSQL; the table name
    -- has to be interpolated via dynamic SQL:
    EXECUTE format('DROP TABLE IF EXISTS %I', drop_partition_record.table_name);
END LOOP;
CLOSE drop_partition_cursor;
END;
$$;
SELECT cron.schedule('@hourly',
);
On Thu, 21 Mar, 2024, 11:07 am vee
Hello All,
We created two partitioned tables, one being the parent and the other the
child. Both are range partitioned, and one table has a foreign key to the
other (parent) table. Now, when we run the partition maintenance job to drop
the older partitions which meet the partition retention criteria, it's not
le
On Mon, 4 Mar 2024 at 21:46, Greg Sabino Mullane wrote:
> On Mon, Mar 4, 2024 at 12:23 AM veem v wrote:
>
>> Additionally if a query was working fine but suddenly takes a
>> suboptimal plan because of missing stats , do we have any hash value column
>> on any performa
Additionally, if a query was working fine but suddenly takes a suboptimal
plan because of missing stats, do we have any hash-value column in any
performance view, associated with the queryid, which we can refer to in
order to see the difference between past and current plans, and identify
and fix such issues quickly?
I am
On Mon, 4 Mar 2024 at 09:42, Laurenz Albe wrote:
>
> The only things that require manual ANALYZE are
>
> 1. partitioned tables (autoanalyze will collect statistics on the
>partitions, but not the partitioned table itself)
>
>
So the partitioned table stats are nothing but the rolled-over stats
Hi,
In one of the RDS Postgres instances we see, from pg_stat_user_tables, the
auto vacuum and auto analyze happening on the tables without our manual
intervention.
So is auto vacuum/analyze sufficient to ensure optimal stats and an
unbloated table structure, or should we do it manual
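Laurenz's point about partitioned tables can be checked and addressed with a couple of statements (the table name is illustrative):

```sql
-- Autovacuum/autoanalyze activity can be verified per table:
SELECT relname, last_autovacuum, last_autoanalyze
FROM   pg_stat_user_tables
ORDER  BY last_autoanalyze DESC NULLS LAST;

-- The partitioned (parent) table itself still needs a manual ANALYZE;
-- autoanalyze only collects stats on the leaf partitions:
ANALYZE transaction_tab;
```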
On Wed, 28 Feb 2024 at 01:24, sud wrote:
> While testing the pg_partman extension I see it by default creates a
> "default partition" even if we provide the parameter p_start_partition as
> "current date". But if someone purposely doesn't want to have the default
> partitions , so as to avoid any
On Wed, 28 Feb, 2024, 2:14 am Adrian Klaver,
wrote:
> On 2/27/24 12:40, veem v wrote:
> > Hi,
> > We have two different types of DBA group in current on-premise Oracle
> > databases, one who deploy code in prod and highest level of privileges
> > (i.e having write acc
Hi,
We have two different types of DBA groups in our current on-premise Oracle
databases: one that deploys code in prod and has the highest level of
privileges (i.e. write access to the database objects, performing
backup/recovery, export/import, and other database maintenance jobs, etc.),
and oth
On Fri, 23 Feb, 2024, 2:54 am Vick Khera, wrote:
> On Thu, Feb 22, 2024 at 4:03 PM veem v wrote:
>
>> Hi All,
>> As i understand we have pg_stats_activity which shows the real time
>> activity of sessions currently running in the database. And the
>> pg_stats_sta
Hi All,
As I understand it, we have pg_stat_activity, which shows the real-time
activity of sessions currently running in the database, and
pg_stat_statements, which provides aggregated information on the historical
execution of all queries in the database. But I don't see any sampling
or timing i
On Fri, 16 Feb 2024 at 06:04, Peter J. Holzer wrote:
> On 2024-02-15 16:51:56 -0700, David G. Johnston wrote:
> > On Thu, Feb 15, 2024 at 4:31 PM Peter J. Holzer
> wrote:
> > On 2024-02-14 22:55:01 -0700, David G. Johnston wrote:
> > > On Tuesday, Febru
On Thu, 15 Feb 2024 at 22:40, Adrian Klaver
wrote:
> On 2/15/24 09:00, Greg Sabino Mullane wrote:
> > On Thu, Feb 15, 2024 at 11:43 AM Adrian Klaver
> > mailto:adrian.kla...@aklaver.com>> wrote:
> >
> > That is a mixed bag:
> >
> >
> > Ha! Good point. Our contrived example table does suffer f
On Thu, 15 Feb 2024 at 00:43, Adrian Klaver
wrote:
> It depends:
>
> https://www.postgresql.org/docs/current/sql-altertable.html
>
> "Adding a column with a volatile DEFAULT or changing the type of an
> existing column will require the entire table and its indexes to be
> rewritten. As an excepti
On Wed, 14 Feb, 2024, 10:30 am Ron Johnson, wrote:
> On Tue, Feb 13, 2024 at 4:17 PM veem v wrote:
> [sni[]
>
>> One question here, if we have defined one column as a fixed length data
>> type "integer" and slowly we noticed the length of data keeps incre
On Tue, 13 Feb 2024 at 20:32, Peter J. Holzer wrote:
> Please do not conflate "char(n)" with native machine types like int or
> float. These are very different things. A char(n) is string of fixed but
> arbitrary length. This is not something a CPU can process in a single
> instruction. It has to
On Tue, 13 Feb 2024 at 20:59, Peter J. Holzer wrote:
> For some kinds of queries a composite index can be dramatically faster.
> While Postgres can combine indexes that means scanning both indexes and
> combining the result, which may need a lot more disk I/O than scanning a
> composite index. In
On Tue, 13 Feb 2024 at 02:01, Ron Johnson wrote:
> On Mon, Feb 12, 2024 at 3:23 PM veem v wrote:
> [snip]
>
>> So it looks like the fixed length data type(like integer, float) should
>> be the first choice while choosing the data type of the attributes
>> wherever po
Thank you so much for the clarification.
On Mon, 12 Feb 2024 at 03:40, Peter J. Holzer wrote:
> The fixed width types are those that the CPU can directly process:
> Integers with 16, 32 and 64 bits, floating point numbers with 32 and 64
> bits. The CPU can read and write them with a single memor
Thank You.
On Mon, 12 Feb 2024 at 22:17, Greg Sabino Mullane
wrote:
> Sure will try to test and see how it behaves when the number of
>> simultaneous queries (here 32/4=8 concurrent queries) exceed the
>> max_parallel_workers limit. Though I am expecting the further queries
>> exceeding the limi
Thank you .
On Mon, 12 Feb 2024 at 03:52, Peter J. Holzer wrote:
> On 2024-02-11 12:08:47 -0500, Ron Johnson wrote:
> > On Sun, Feb 11, 2024 at 11:54 AM veem v wrote:
> > When you said "you would normally prefer those over numeric " I was
> > thinking
On Sun, 11 Feb 2024 at 19:02, Peter J. Holzer wrote:
> > Similarly for Number/Numeric data type.
>
> Number in Oracle and numeric in PostgreSQL are variable length types.
> But in PostgreSQL you also have a lot of fixed length numeric types
> (from boolean to bigint as well as float4 and float8)
Thank You so much for the detailed explanation.
On Sun, 11 Feb 2024 at 05:55, Peter J. Holzer wrote:
>
> Yes. Numbers in Oracle are variable length, so most Oracle tables
> wouldn't contain many fixed length columns. In PostgreSQL most numeric
> types are fixed length, so you'll have quite a lot
:43, Greg Sabino Mullane
wrote:
> There is a lot to unpack here. I'm going to take a quick pass, but you
> ought to consider getting some custom expert help.
>
> On Sat, Feb 10, 2024 at 2:39 PM veem v wrote:
>
>>
>> Pagination is already a hard problem, and does not
Hello,
We want to have a response time of <1 sec for our UI search query
requirement. These will be pagination queries. These read queries will be
on big transaction tables (~500+ attributes, approx. row size of ~1KB)
having a continuous stream of inserts consu
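One common technique for sub-second pagination on large, continuously growing tables is keyset (seek) pagination instead of OFFSET. A sketch with hypothetical names; the `:last_seen_*` placeholders stand in for the cursor values from the previous page:

```sql
-- Needs an index on (transaction_timestamp, transaction_id):
SELECT transaction_id, transaction_timestamp, status
FROM   transaction_tab
WHERE  (transaction_timestamp, transaction_id)
     < (:last_seen_ts, :last_seen_id)
ORDER  BY transaction_timestamp DESC, transaction_id DESC
LIMIT  50;
```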
On Thu, 8 Feb 2024 at 20:08, Greg Sabino Mullane wrote:
> On Thu, Feb 8, 2024 at 12:42 AM sud wrote:
> ...
>
>> The key transaction table is going to have ~450 Million transactions per
>> day and the data querying/filtering will always happen based on the
>> "transaction date" column.
>>
> ...
>
15:46, Laurenz Albe wrote:
> On Tue, 2024-02-06 at 00:26 +0530, veem v wrote:
> > On Mon, 5 Feb 2024 at 17:52, Laurenz Albe
> wrote:
> > > On Mon, 2024-02-05 at 03:09 +0530, veem v wrote:
> > > > In postgresql, Is it possible to partition an existing
> nonpart
On Tue, 6 Feb 2024 at 10:45, Lok P wrote:
> Thank you Greg.
>
> *"and keeping all your active stuff in cache. Since you have 127 columns,
> only pull back the columns you absolutely need for each query."*
>
> Understood the point. As postgres is a "row store" database, so keeping
> the size of th
I don't have much experience with Postgres at this scale, though; others may
provide better suggestions. However, at this scale you will have a single
table size of ~30TB+. And someone just pointed out in this thread ~180
partitions/table as the good-to-have limit; if that is true, it would
be ~1
On Mon, 5 Feb 2024 at 17:52, Laurenz Albe wrote:
> On Mon, 2024-02-05 at 03:09 +0530, veem v wrote:
> > In postgresql, Is it possible to partition an existing nonpartitioned
> table having data
> > already residing in it and indexes and constraints defined in it,
>
Hello All,
In PostgreSQL, is it possible to partition an existing non-partitioned table
having data already residing in it, with indexes and constraints defined on
it, without the need to manually move the data around, to make it faster?
Similarly, can we merge multiple partitions into one partition, or spli
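There is no single in-place command for this; one common sketch (hypothetical names and date bounds) is to create a new partitioned shell and attach the existing table as one large partition, after which the data can be migrated out gradually:

```sql
-- New partitioned shell copying the column definitions:
CREATE TABLE orders_part (LIKE orders INCLUDING DEFAULTS)
    PARTITION BY RANGE (order_date);

-- The old table becomes one (large) partition covering its date range;
-- PostgreSQL validates the bound, which can be made fast with a
-- matching CHECK constraint added beforehand:
ALTER TABLE orders_part
    ATTACH PARTITION orders
    FOR VALUES FROM ('2020-01-01') TO ('2025-01-01');
```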
On Fri, 2 Feb 2024 at 02:43, Laurenz Albe wrote:
> On Fri, 2024-02-02 at 02:27 +0530, veem v wrote:
> > We have the below query which is running for ~45 seconds on postgres
> aurora reader instance.
> > I have captured the explain analyze. Want to understand, where exactly
&
Hello All,
We have the below query, which runs for ~45 seconds on a Postgres Aurora
reader instance. I have captured the explain analyze output. I want to
understand where exactly the resources are being spent and whether we can
optimize it further. It's a UI query showing the top 50 rows and is sup
Hello Experts,
Suppose some teams are well versed with the Oracle database architecture and
the workings of its optimizer, and have designed applications on top of it,
and the same teams are now moving to AWS Aurora PostgreSQL design/development
projects. Are there any key design/architectural changes the ap
Can someone please guide me on whether any standard scripting is available
for doing such read/write performance tests, or point me to any available docs?
On Wed, 20 Dec, 2023, 10:39 am veem v, wrote:
> Thank you.
>
> That would really be helpful if such test scripts or similar setups are
case. So in that case, you may want to use some
> test scripts which others must have already done rather reinventing the
> wheel.
>
>
> On Wed, 20 Dec, 2023, 10:19 am veem v, wrote:
>
>> Thank you.
>>
>> Yes, actually we are trying to compare and see what maxi
approach to achieve this objective?
On Wed, 20 Dec, 2023, 2:42 am Peter J. Holzer, wrote:
> On 2023-12-20 00:44:48 +0530, veem v wrote:
> > So at first, we need to populate the base tables with the necessary
> data (say
> > 100million rows) with required skewness using random funct
END LOOP;
END;
-- Batch read
BEGIN
    OPEN data_set FOR SELECT * FROM BASE_TABLE LIMIT total_rows;
    CLOSE data_set;
END;
On Tue, 19 Dec 2023 at 12:58, Ilya Kosmodemiansky wrote:
> Hi Veem,
>
> On Tue, Dec 19, 2023 at 7:36 AM veem v wrote:
> > 1)For wri
Hello All,
It's an Aurora PostgreSQL database in AWS. We have a couple of custom tables
created with ~100-odd columns and the required indexes and constraints
created on them, representing a certain business use case. Currently
we don't have any data in them. The requirement is to test read and writ
pecified in the WHERE clause.
>
> On Sun, Dec 10, 2023 at 8:45 AM veem v wrote:
>
>> Thank you so much Ron. I have some more doubts related to this.
>>
>> We were thinking , if there is any utility in PG with which we can
>> create/generate large sample data
y
> way you'll *really* know.
>
> Beware of excessive partitioning. We had to "departion" most tables,
> because of the impact on performance.
>
> (*) Use ora2pg to export views in the Oracle database. It's *very* easy;
> a small EC2 VM running Linux with en
e a problem
in postgresql, is it going to give trouble in Aurora postgresql too, for
such a highly transactional read/write system? How to test/validate that?
On Sun, 10 Dec 2023 at 01:29, Ron Johnson wrote:
> On Sat, Dec 9, 2023 at 2:13 PM veem v wrote:
>
>>
>> Ron Johnson
&
flake considering it will store those in a compressed
format, and also storage is cheap ($23 per TB per month)?
Few colleagues are pointing to Databricks for the analytics use case. Is
that a good option here?
On Sat, 9 Dec 2023 at 16:43, veem v wrote:
> Hello All,
> Although it's not
Hello All,
Although it's not exactly related to open-source Postgres, I want to ask
this question here to understand colleagues' views, considering the decades
of experience in the database world here. We want some guidance on whether
the below design looks okay for our customer use case.
We currently have
mazonRDS/latest/AuroraUserGuide/Concepts.DBInstanceClass.html
On Wed, 1 Nov 2023 at 04:55, Ben Chobot wrote:
> veem v wrote on 10/31/23 2:49 PM:
>
> Hello all,
> We are planning to use aurora Postgres for a few applications. But wanted
> some guidance on which instance, class type should
Hello all,
We are planning to use Aurora Postgres for a few applications, but wanted
some guidance on which instance class type we should use for lower
environments/non-prod (Dev/QA/UAT/Perf) and what we should use for the
production database. Is there some recommendation based on usage etc. for
thi
same vacuuming technology for
maintaining the transactions?
Does Aurora MySQL opt for a similar strategy for transaction management, or
any different/better one?
On Thu, 21 Sept 2023 at 02:09, Laurenz Albe
wrote:
> On Thu, 2023-09-21 at 01:17 +0530, veem v wrote:
> > I see multiple docs on t
Hello All,
I see multiple docs on the internet stating the differences between MySQL
and Postgres, but I want to understand the real views of the experienced
folks here.
While we are on AWS and planning to opt for one of the relational databases
out of MySQL and Postgres for our application (it is
k Wienhold wrote:
>
>> On 03/09/2023 00:35 CEST veem v wrote:
>>
>> > We are trying to create a monthly range partition table , partitioned on
>> > column PART_DATE. This will hold Orders and part_date is nothing but
>> invoice
>> > date. Some Te
Hello Friends,
We are trying to create a monthly range-partitioned table, partitioned on
the column PART_DATE. This will hold orders, and part_date is nothing but the
invoice date. Some teammates are asking us to use the "PART_DATE" column with
data type "INTEGER" in "MM" format and also define partitions
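For comparison with the INTEGER key being debated, a date-typed monthly range partition can be sketched as below (names and bounds are illustrative):

```sql
-- PART_DATE kept as a real date type, range-partitioned by month:
CREATE TABLE orders (
    order_id  bigint,
    part_date date NOT NULL
) PARTITION BY RANGE (part_date);

CREATE TABLE orders_2023_09 PARTITION OF orders
    FOR VALUES FROM ('2023-09-01') TO ('2023-10-01');
```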
Thank you so much, all, for such valuable feedback.
As "Julian" was pointing out, I also tried to test the INSERT independently
(as in the below test case) without keeping the "generate_series" in the
inline query. But in all the cases the sequence performs better as compared
to both UUID v4 and UUID v7. An
cases
of multi master replication/sharding etc, may be a factor but other than
that I can't think of any scenario where sequences can be used.
On Fri, 3 Feb 2023 at 23:07, Dominique Devienne wrote:
> On Fri, Feb 3, 2023 at 5:48 PM veem v wrote:
>
>> Actually I did the
nne wrote:
> Copying the list...
>
> -- Forwarded message -
> From: Dominique Devienne
> Date: Fri, Feb 3, 2023 at 4:57 PM
> Subject: Re: Sequence vs UUID
> To: veem v
>
> On Thu, Feb 2, 2023 at 8:47 PM veem v wrote:
>
>> Tested the UUIDv7 gener
2023, 10:17 AM Miles Elam
> wrote:
>
>> On Wed, Feb 1, 2023 at 10:48 AM Kirk Wolak wrote:
>>
>>>
>>>
>>> On Wed, Feb 1, 2023 at 1:34 PM veem v wrote:
>>>
>>>>
>>>> 1) sequence generation vs UUID generation, executi
Index Cond: (id = test_uuid_1.id)
Heap Fetches: 100
Planning Time: 0.180 ms
Execution Time: 0.510 ms
On Tue, 31 Jan 2023 at 03:28, Ron wrote:
>
> And populate that column with UUIDs generated by the gen_random_uuid()
> function.
>
> (Requires v13.)
>
> On 1/30/23 13:46
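Ron's suggestion above, written out as a complete snippet (table and column names are illustrative; gen_random_uuid() is built in from PostgreSQL 13 onward):

```sql
CREATE TABLE test_uuid (
    id   uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    name varchar(20)
);
```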
3 at 22:18, Tom Lane wrote:
> Dominique Devienne writes:
> > On Mon, Jan 30, 2023 at 5:11 PM veem v wrote:
> >> CREATE TABLE test1_UUID ( id bigint,source_id varchar(36) PRIMARY KEY,
> Name varchar(20) );
>
> > Maybe if you used a "native" 16-byte uuid, instea
ing from 1 in 11 s 145 ms (execution: 11 s 128 ms,
fetching: 17 ms)
On Mon, 30 Jan, 2023, 4:59 pm veem v, wrote:
> I have a question, As i understand here, usage wise there are multiple
> benefits of UUID over sequences like, in case of distributed app where we
> may not be able to r
I have a question. As I understand it, usage-wise there are multiple
benefits of UUIDs over sequences, such as in the case of a distributed app
where we may not be able to rely on a single-point generator like a
sequence, or in multi-master architectures and sharding.
If we just look in terms of performance wi
Hello, we were trying to understand whether we should use UUIDs or sequences
in general for primary keys. Many of the blogs I saw over the internet (one
is below), across multiple databases, mostly state that sequences are
better as compared to UUIDs. But I think in the case of
concurrent