Hi,
I am new to Spark and NoSQL databases, so please correct me if I am wrong.
Since I will be accessing multiple columns (almost 20-30) of a row, I will
have to go with a row-based DB instead of a column-based one, right?
Maybe I can use Avro in this case. Does Spark go well with Avro? I will
do
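For what it's worth, Spark does read and write Avro through the external spark-avro module (shipped separately from core Spark since 2.4), so the two go together well. A minimal sketch: the helper below just builds the Maven coordinate you pass to `--packages` (the Scala/Spark versions shown are placeholders you would match to your own cluster), and the commented lines show the usual read path inside a job.

```python
def avro_package(scala_version="2.12", spark_version="2.4.0"):
    """Maven coordinate of the external spark-avro module (Spark 2.4+).

    The version numbers are illustrative; match them to your cluster.
    """
    return f"org.apache.spark:spark-avro_{scala_version}:{spark_version}"

# Launch the job with the module on the classpath, e.g.:
#   spark-submit --packages org.apache.spark:spark-avro_2.12:2.4.0 job.py
# and then, given a running SparkSession `spark` (path is hypothetical):
#   df = spark.read.format("avro").load("/data/updates.avro")
print(avro_package())
```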
Have you considered Presto with an Oracle connector?
From: Teemu Heikkilä
Date: Thursday, April 4, 2019 at 12:28 PM
To: Prasad Bhalerao
Cc: Jason Nerothin, user
Subject: Re: reporting use case
Based on your answers, I would consider using the update stream to update
the actual snapshots, i.e. by joining the data.
Of course, how to get the data into Spark depends on how the update stream
has been implemented. Could you tell us a little bit more about that?
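To make the "update the snapshot by joining" idea concrete, here is a minimal plain-Python sketch of the merge semantics: per key, a row from the update stream replaces the matching snapshot row, and new keys are appended. The `merge_updates` helper and the record shapes are hypothetical; in Spark the same result would come from a union of snapshot and updates followed by keeping the latest row per key (e.g. a window over the key ordered by update time).

```python
def merge_updates(snapshot, updates):
    """Apply an update stream to a snapshot: updates win on key collisions.

    snapshot, updates: dicts keyed by record id (illustrative shapes only).
    """
    merged = dict(snapshot)   # start from the current snapshot
    merged.update(updates)    # update-stream rows overwrite matching keys
    return merged

snapshot = {1: {"status": "open"}, 2: {"status": "open"}}
updates = {2: {"status": "closed"}, 3: {"status": "open"}}
print(merge_updates(snapshot, updates))
# {1: {'status': 'open'}, 2: {'status': 'closed'}, 3: {'status': 'open'}}
```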
- Teemu
> On 4 Apr 2019, at 22.23, P
Hi,
I can create a view on these tables, but the thing is I am going to need
almost every column from these tables, and I have faced issues with Oracle
views on such large tables that involve joins. Somehow Oracle used to
choose a suboptimal execution plan.
Can you please tell me how creati
Hi Prasad,
Could you create an Oracle-side view that captures only the relevant
records and then use the Spark JDBC connector to load the view into Spark?
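A sketch of what that read could look like through Spark's JDBC source. The connection URL, view name, credentials, and partition column below are all placeholders; the option keys themselves (`url`, `dbtable`, `driver`, `partitionColumn`, `lowerBound`, `upperBound`, `numPartitions`) are the standard Spark JDBC options, and partitioning on a numeric key is what lets Spark read the view in parallel instead of in one big query.

```python
def jdbc_options(url, view, user, password,
                 partition_column=None, lower=None, upper=None,
                 num_partitions=None):
    """Build the option map for Spark's JDBC data source.

    All connection details passed in are placeholders, not real credentials.
    """
    opts = {
        "url": url,
        "dbtable": view,
        "user": user,
        "password": password,
        "driver": "oracle.jdbc.OracleDriver",
    }
    if partition_column is not None:
        # Parallel read: Spark splits [lower, upper] into num_partitions ranges.
        opts.update({
            "partitionColumn": partition_column,
            "lowerBound": str(lower),
            "upperBound": str(upper),
            "numPartitions": str(num_partitions),
        })
    return opts

opts = jdbc_options("jdbc:oracle:thin:@//db-host:1521/ORCL",  # hypothetical URL
                    "REPORTING_VIEW", "app_user", "secret",
                    partition_column="ID",
                    lower=1, upper=1_000_000, num_partitions=16)

# With a live SparkSession `spark`, this becomes:
#   df = spark.read.format("jdbc").options(**opts).load()
print(opts["dbtable"])
```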
On Thu, Apr 4, 2019 at 1:48 PM Prasad Bhalerao
wrote:
> Hi,
>
> I am exploring spark for my Reporting application.
> My use case is as follows...