On Mon, Dec 11, 2023 at 10:34 PM Chris Travers wrote:
> On Tue, Dec 12, 2023 at 2:11 AM Ron Johnson wrote:
>
>> On Mon, Dec 11, 2023 at 4:41 AM Dominique Devienne wrote:
>>
>>> On Sun, Dec 10, 2023 at 5:56 PM Ron Johnson wrote:
>>>
>>>> * We departitioned because SELECT statements were *slow*. All
>>>> partitions were scanned, even when the partition key was specified in
>>>> the WHERE clause.
On Tue, Dec 12, 2023 at 2:11 AM Ron Johnson wrote:
> On Mon, Dec 11, 2023 at 4:41 AM Dominique Devienne wrote:
>
>> On Sun, Dec 10, 2023 at 5:56 PM Ron Johnson wrote:
>>
>>> * We departitioned because SELECT statements were *slow*. All
>>> partitions were scanned, even when the partition key was specified in
>>> the WHERE clause.
On Mon, Dec 11, 2023 at 4:41 AM Dominique Devienne wrote:
> On Sun, Dec 10, 2023 at 5:56 PM Ron Johnson wrote:
>
>> * We departitioned because SELECT statements were *slow*. All
>> partitions were scanned, even when the partition key was specified in
>> the WHERE clause.
>>
>
> Surely that's not the case on newer PostgreSQL, is it? Otherwise what's
> the point of partitioning?
On Sun, Dec 10, 2023 at 5:56 PM Ron Johnson wrote:
> * We departitioned because SELECT statements were *slow*. All partitions
> were scanned, even when the partition key was specified in the WHERE clause.
>
Surely that's not the case on newer PostgreSQL, is it? Otherwise what's the
point of partitioning?
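For what it's worth, since PostgreSQL 11 the planner prunes partitions at
plan time and at execution time, so a WHERE clause on the partition key
should not touch every partition. A minimal sketch to check this yourself
(table and column names are made up for illustration):

CREATE TABLE measurements (
    logdate date NOT NULL,
    reading numeric
) PARTITION BY RANGE (logdate);

CREATE TABLE measurements_2022 PARTITION OF measurements
    FOR VALUES FROM ('2022-01-01') TO ('2023-01-01');
CREATE TABLE measurements_2023 PARTITION OF measurements
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');

SET enable_partition_pruning = on;  -- already the default on 11+

-- The plan should show only measurements_2023 being scanned:
EXPLAIN SELECT * FROM measurements WHERE logdate = '2023-06-15';

If EXPLAIN still lists every partition, the usual culprits are a type
mismatch or an expression wrapped around the partition key, either of which
defeats pruning.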
Thank you for your response.
Apologies if it sounds silly, but is it advisable to use just one database
in this flow, say e.g. Snowflake for the OLAP use case with a cache layer
(like Redis) on top of it to cater to our OLTP use case, rather than having
a dedicated OLTP database like Aurora PostgreSQL?
Ba
* PG has pgbench; *maybe* you can hack it to work on Oracle (see the sketch
below).
* If you want to know how well an RDBMS will work on your workload, then
you must provide it with a simulated workload. Right?
* AWS RDS Postgresql has a dashboard that *might* be similar to AWR. Or it
might not...
* We departitioned because SELECT statements were *slow*. All partitions
were scanned, even when the partition key was specified in the WHERE clause.
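As a minimal sketch of the pgbench point (database name, scale factor, and
script file name are placeholders):

-- Built-in TPC-B-like test, run from the shell:
--   pgbench -i -s 100 mydb           (initialize; scale 100 is ~10M accounts)
--   pgbench -c 16 -j 4 -T 300 mydb   (16 clients, 4 threads, 5 minutes)
--
-- For a simulated version of *your* workload, put representative statements
-- in a custom script (my_script.sql) and run it with:
--   pgbench -c 16 -T 300 -f my_script.sql mydb
-- pgbench sets the :scale variable automatically:
\set aid random(1, 100000 * :scale)
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;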
Thank you so much Ron. I have some more doubts related to this.
We were wondering if there is any utility in PG with which we can
create/generate a large sample data volume, which we can run on our
on-premise Oracle Exadata box and likewise on Aurora PostgreSQL in the
cloud to see the
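On the PostgreSQL side, generate_series() is the usual way to synthesize
volume; a sketch with a hypothetical transactions table (generate once,
export to CSV, and load the same file into both Exadata and Aurora so the
two tests see identical data):

CREATE TABLE sample_txn (
    txn_id     bigint PRIMARY KEY,
    txn_date   timestamptz NOT NULL,
    account_id integer NOT NULL,
    amount     numeric(12,2)
);

-- ~10 million synthetic rows spread over roughly two years:
INSERT INTO sample_txn
SELECT g,
       now() - (g % 730) * interval '1 day',
       (g % 100000) + 1,
       round((random() * 10000)::numeric, 2)
FROM generate_series(1, 10000000) AS g;

-- Client-side export from psql (works on RDS/Aurora, where you cannot
-- write to the server's filesystem):
\copy sample_txn TO 'sample_txn.csv' CSV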
I don't know anything about Aurora; I only have experience with RDS
Postgresql.
We successfully migrated from on-prem Oracle (12c, I think) to RDS
Postgresql 12, and were very happy: little downtime (I take pride in
designing/implementing that; thanks, ora2pg!), with lower disk usage (8TB,
down to 5TB) a
Thank you so much for the response.
Got your point; we will check whether we really need detail or just a
summary for the historical data. But it looks like we will need detailed
transaction data for ~2 years at least.
My understanding was that AWS has two different offerings, and "Aurora
PostgreSQL" is more pe
On Sat, Dec 9, 2023 at 2:13 PM veem v wrote:
>
> Ron Johnson wrote:
>
>> "OK" is relative, but it's what we did in a similar situation: two years
>> of data on-line and 5 years of data in compressed files in S3. (We're
>> required to keep data for 7 years, but they *never* ask for records more
>> than 2 years old. If they ever do, we'll manually l
Ron Johnson wrote:
> "OK" is relative, but it's what we did in a similar situation: two years
> of data on-line and 5 years of data in compressed files in S3. (We're
> required to keep data for 7 years, but they *never* ask for records more
> than 2 years old. If they ever do, we'll manually l
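For what it's worth, with range partitions that rotation can be a short
script; a sketch with hypothetical table and bucket names (on RDS the file
steps run from a client host, since you can't write to the server's
filesystem):

-- Detach the expired partition so queries no longer see it:
ALTER TABLE transactions DETACH PARTITION transactions_2016;

-- Client-side dump from psql:
\copy transactions_2016 TO 'transactions_2016.csv' CSV

-- Compress and ship to S3 from the shell, then drop the partition:
--   gzip transactions_2016.csv
--   aws s3 cp transactions_2016.csv.gz s3://my-archive-bucket/archive/
DROP TABLE transactions_2016;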
On Sat, Dec 9, 2023 at 6:14 AM veem v wrote:
[snip]
> Many of the applications are moving from on-premise to the AWS cloud as
> part of a modernization journey, with AWS being the chosen cloud partner.
> The product is also expected to expand across more regions, and this
> system is expected to serve increase