Veem,
You should also be familiar with Aurora Postgres's storage architecture,
which is very different from regular Postgres (see
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Overview.html
).
Aurora uses remote storage, which means that if your read workload can't fit
into Postgres's shared_buffers, reads have to go over the network to the
storage layer.
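If you want to see how much of a read workload is actually being served from
shared_buffers versus going out to storage, a quick check (standard
pg_stat_database counters, nothing Aurora-specific) looks roughly like:

  SELECT datname,
         blks_hit,
         blks_read,
         round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
  FROM pg_stat_database
  WHERE datname = current_database();

A low hit percentage during the read test means most reads are paying the
remote-storage round trip.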
On Thu, Dec 21, 2023 at 8:31 AM veem v wrote:
> Can someone please guide me if any standard scripting is available for
> doing such read/write performance tests, or point me to any available docs?
>
>
> ...
Veem, first things first... "Top Posting" is when you reply at the top of
the message, above the quoted text.
As I mentioned, your scenario looks like a generic one, but I don't have any
sample scripts or docs to share, sorry about that. Other people here may be
able to suggest sample scripts if they have any, or you could post on the
performance list in case someone has done similar work in the past.
But in my view, the performance test sc
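For what it's worth, the closest thing to a standard, ready-made tool on the
Postgres side is pgbench with a custom script file. A minimal sketch for the
read side, assuming a hypothetical table test_tab with a bigint id column and
roughly 100 million rows:

  -- read_test.sql: one random primary-key lookup per transaction
  \set id random(1, 100000000)
  SELECT * FROM test_tab WHERE id = :id;

and then drive it from, say, 16 concurrent sessions for 5 minutes and note the
TPS figure pgbench reports:

  pgbench -n -c 16 -j 4 -T 300 -f read_test.sql <dbname>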
Can someone please guide me if any standard scripting is available for
doing such read/write performance tests, or point me to any available docs?
On Wed, 20 Dec, 2023, 10:39 am veem v wrote:
> Thank you.
>
> It would really be helpful if such test scripts or similar setups are
> already available.
Thank you.
It would really be helpful if such test scripts or similar setups are
already available. Could someone please point me to some docs, blogs, or
sample scripts on the same?
On Wed, 20 Dec, 2023, 10:34 am Lok P wrote:
> As Rob mentioned, the syntax you posted is not correct. You need to ...
As Rob mentioned, the syntax you posted is not correct. You need to process
or read a batch of rows at a time, say 1,000 or 10,000, not all 100 million
in one shot.
But again, your use case seems a common one, considering you want to compare
the read and write performance on multiple databases with similar tables.
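To make the batching idea concrete, here is a rough PL/pgSQL sketch (test_tab
and its bigint id column are placeholders) that walks the table in id ranges
of 10,000 rows instead of touching all 100 million in a single statement:

  DO $$
  DECLARE
    batch_size  int    := 10000;
    batch_start bigint := 1;
    max_id      bigint;
  BEGIN
    SELECT max(id) INTO max_id FROM test_tab;
    WHILE batch_start <= max_id LOOP
      -- read (or, in a write test, modify) one id slice per iteration
      PERFORM count(*)
        FROM test_tab
       WHERE id >= batch_start
         AND id <  batch_start + batch_size;
      batch_start := batch_start + batch_size;
    END LOOP;
  END
  $$;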
Thank you.
Yes, we are actually trying to see what maximum TPS we can reach with both
the row-by-row and the batch read/write tests. Afterwards, this figure can be
compared with other databases with similar setups.
So I wanted to understand from the experts here if there is any standard
script available for such tests.
On 2023-12-20 00:44:48 +0530, veem v wrote:
> So at first, we need to populate the base tables with the necessary data (say
> 100 million rows) with the required skew, using random functions to generate
> variation in the values of the different data types. Then in the case of the
> row-by-row write/read test ...
On 12/19/23 12:14, veem v wrote:
Thank you for the confirmation.
So at first, we need to populate the base tables with the necessary
data (say 100 million rows) with the required skew, using random
functions to generate variation in the values of the different data
types. Then in the case of the row-by-row ...
Thank you for the confirmation.
So at first, we need to populate the base tables with the necessary data
(say 100 million rows) with the required skew, using random functions to
generate variation in the values of the different data types. Then, in the
case of the row-by-row write/read test, we can traverse ...
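For that initial load, something along these lines could serve as a starting
point; the table name, columns and the skew function (raising random() to a
power) are only placeholders to adapt to your actual model:

  CREATE TABLE test_tab (
    id      bigint,
    txt_col text,
    num_col numeric,
    dt_col  date
  );

  -- 100 million rows; random()^3 skews num_col towards smaller values,
  -- and dates are spread over roughly four years
  INSERT INTO test_tab (id, txt_col, num_col, dt_col)
  SELECT g,
         md5(g::text),
         round((random() ^ 3 * 100000)::numeric, 2),
         DATE '2020-01-01' + (random() * 1460)::int
  FROM generate_series(1, 100000000) AS g;

  -- add the primary key / indexes only after the bulk load
  ALTER TABLE test_tab ADD PRIMARY KEY (id);

For a load of this size you may also want to split the generate_series range
into several smaller chunks.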
Hi Veem,
On Tue, Dec 19, 2023 at 7:36 AM veem v wrote:
> 1) For write performance, the rows need to be inserted from multiple
> sessions at the same time, with random values as per the data types, i.e.
> character, number and date columns. And this needs to be tested for
> row-by-row inserts ...
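For that row-by-row write test, one way to drive many concurrent sessions is
again a pgbench custom script. A sketch with made-up names (test_tab, and a
dedicated sequence for the id so that parallel sessions never collide):

  -- one-time setup in psql:
  --   CREATE SEQUENCE test_tab_id_seq START 100000001;

  -- row_insert.sql: one single-row insert per transaction, with random
  -- character, number and date values
  \set n random(1, 1000000)
  INSERT INTO test_tab (id, txt_col, num_col, dt_col) VALUES (nextval('test_tab_id_seq'), md5(cast(:n as text)), random() * 100000, current_date - (:n % 1460));

Running it with something like "pgbench -n -c 20 -j 4 -T 300 -f row_insert.sql
<dbname>" gives a row-by-row insert TPS figure from 20 parallel sessions.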