Hi,
Two JSON files, but one of them is missing some columns, like:
{"firstName": "Jack", "lastName": "Nelson"}
{"firstName": "Landy", "middleName": "Ken", "lastName": "Yong"}
sqlContext.sql("select firstName as first_name, middleName as middle_name,
lastName as last_name from jsonTable")
But there are a
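For what it's worth, when Spark SQL infers a schema over JSON records that do not all share the same fields, a record that lacks a field yields null for that column rather than an error. A minimal plain-Python sketch of that behavior (no Spark involved; the dict.get calls stand in for Spark's null-for-missing-field semantics):

```python
import json

# Two JSON lines; the first record lacks "middleName".
lines = [
    '{"firstName": "Jack", "lastName": "Nelson"}',
    '{"firstName": "Landy", "middleName": "Ken", "lastName": "Yong"}',
]

# dict.get(key) returns None when the key is absent, mirroring how
# Spark SQL fills a missing column with null for that record.
rows = [
    {
        "first_name": rec.get("firstName"),
        "middle_name": rec.get("middleName"),
        "last_name": rec.get("lastName"),
    }
    for rec in map(json.loads, lines)
]

print(rows[0]["middle_name"])  # None: the missing column becomes null
```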
Hi,
I met a problem with an empty field in a nested JSON file with Spark SQL. For
instance, there are two lines of JSON as follows:
{
  "firstname": "Jack",
  "lastname": "Nelson",
  "address": {
    "state": "New York",
    "city": "New York"
  }
}{
  "firstname": "Landy",
  "middlename": "Ken",
  "lastname": "Yong
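The same null-for-missing semantics applies to nested fields: selecting address.state for a record with no address object gives null in Spark SQL. A small plain-Python sketch of that lookup (the nested_get helper is hypothetical, purely to illustrate; it is not a Spark API):

```python
import json

# Second record from the example: no "address" object at all.
record = json.loads(
    '{"firstname": "Landy", "middlename": "Ken", "lastname": "Yong"}'
)

def nested_get(rec, *keys):
    """Walk a chain of keys, returning None as soon as one is missing,
    the way Spark SQL yields null for address.state on such a record."""
    for key in keys:
        if not isinstance(rec, dict) or key not in rec:
            return None
        rec = rec[key]
    return rec

print(nested_get(record, "address", "state"))  # None
```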
I used Spark 1.3.1 to populate the event logs to Cassandra, but there is an
exception for which I could not find any clues. Can anybody give me any help?
Exception in thread "main" java.lang.IllegalArgumentException: Positive number of slices required
	at org.apache.spark.rdd.ParallelCollect
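For context on that exception: Spark's ParallelCollectionRDD requires a positive numSlices, so this error typically means parallelize() (or a default-parallelism setting) ended up with a partition count of zero or less. A rough plain-Python sketch of the same guard-and-slice idea (an illustration, not Spark's actual code):

```python
def slice_collection(seq, num_slices):
    """Split seq into num_slices contiguous chunks, rejecting
    non-positive counts the way ParallelCollectionRDD does."""
    if num_slices < 1:
        raise ValueError("Positive number of slices required")
    n = len(seq)
    return [
        seq[(i * n) // num_slices : ((i + 1) * n) // num_slices]
        for i in range(num_slices)
    ]
```

If the partition count is derived from configuration or from the size of another collection, checking that it is at least 1 before calling parallelize avoids this failure.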