> d how to start? Can you please guide me.
>
> Thank you.
> Shyam
> On Thu, Aug 29, 2019 at 5:03 PM Aayush Ranaut wrote:
Cassandra writes are upserts, so you should be able to do what you need with a
single statement unless you’re looking to maintain counters.
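For context, a plain CQL INSERT already behaves as an upsert: writing a row whose primary key already exists simply overwrites those columns. The table and column names below are made up for illustration:

```sql
-- Hypothetical table: INSERT overwrites any existing row with the same key.
INSERT INTO sensor_readings (sensor_id, reading, updated_at)
VALUES ('s-42', 17.5, toTimestamp(now()));

-- Counter columns are the exception; they require an UPDATE with an increment:
-- UPDATE sensor_counts SET hits = hits + 1 WHERE sensor_id = 's-42';
```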
I’m not sure whether there is a Cassandra connector library for Spark
Streaming; we wrote one ourselves when we needed to do the same.
Regards
Prathmesh Ranaut
This is the job of the ContextCleaner. There are a few properties you can tweak
to see if that helps:
spark.cleaner.periodicGC.interval
spark.cleaner.referenceTracking
spark.cleaner.referenceTracking.blocking.shuffle
Regards
Prathmesh Ranaut
> On Jul 21, 2019, at 11:36 AM, Prathmesh Ranaut
Question 2:
You might be creating a DataFrame while reading a Parquet file:

from pyspark.sql.functions import rtrim

df = spark.read.load("file.parquet")
df = df.select(rtrim("columnName"))
Regards
Prathmesh Ranaut
https://linkedin.com/in/prathmeshranaut
> On Jul 12, 2019, at 9:15 AM, anbutech wrote:
>
> Hello All, Could you please hel