Hi,
We are using the Spark Cassandra connector for our app, and I am trying to create
higher-level roll-up tables, e.g. a minutes table built from a seconds table.
If my tables are already defined in Cassandra, how can I read a table's schema
so that I can load it into a DataFrame and create the aggregates?
Any help would be appreciated.
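
Not an authoritative answer, but here is a minimal sketch of what this could look like: with the Spark Cassandra connector the table schema is inferred automatically when you load the table as a DataFrame, so you don't have to declare it yourself. This assumes Spark 2.x with SparkSession and hypothetical keyspace/table/column names (metrics.seconds with ts, sensor_id, value; metrics.minutes as the roll-up target).

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder()
  .appName("rollup-sketch")
  .config("spark.cassandra.connection.host", "127.0.0.1")  // assumption: local cluster
  .getOrCreate()

// The connector reads the schema from Cassandra; printSchema() shows what it found.
val secondsDf = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "metrics", "table" -> "seconds"))  // hypothetical names
  .load()

secondsDf.printSchema()

// Roll the seconds data up into 1-minute buckets per sensor.
val minutesDf = secondsDf
  .groupBy(window(col("ts"), "1 minute"), col("sensor_id"))
  .agg(avg(col("value")).as("avg_value"))
  .select(col("window.start").as("minute"), col("sensor_id"), col("avg_value"))

// Write the aggregates back to a pre-created minutes table with matching columns.
minutesDf.write
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "metrics", "table" -> "minutes"))
  .mode("append")
  .save()

The column names and the 1-minute window are just placeholders; the point is that the schema of the source table never has to be hard-coded.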
+1
I am trying to read Avro from Kafka, and I don't want to be limited to a small,
fixed set of schemas. So I want to load the schema dynamically from the Avro file
(since Avro files embed their schema), and then create a DataFrame from it
and run some queries on that.
Any help would be really appreciated.
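
For the Avro-file part, a rough sketch of one way to do it: the Avro data source reads the writer schema embedded in the .avro file, so the DataFrame schema comes out of the file itself. This assumes Spark 2.4+ with the built-in "avro" format (older versions used the com.databricks.spark.avro package) and a hypothetical file path and field name. Avro bytes pulled straight off a Kafka topic are a different case and usually need the writer schema supplied separately (e.g. from a schema registry).

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("avro-schema-sketch")
  .getOrCreate()

// Schema is inferred from the Avro file itself; nothing declared up front.
val eventsDf = spark.read
  .format("avro")                   // assumption: Spark 2.4+ built-in Avro source
  .load("/path/to/events.avro")     // hypothetical path

eventsDf.printSchema()              // shows the schema recovered from the file

// Register it and query with plain SQL.
eventsDf.createOrReplaceTempView("events")
spark.sql("SELECT event_type, count(*) AS n FROM events GROUP BY event_type").show()  // event_type is a placeholder field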