Hi Gerard,
How are you starting Spark? Are you allocating enough RAM for processing? I
think the default is 512 MB. Try doing the following and see if it helps
(based on the size of your dataset, you might not need all 8 GB).
$SPARK_HOME/bin/spark-shell \
  --master local[4] \
  --executor-memory 8g
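One note, in case it matters: with a local[4] master the executors run inside the driver JVM, so (if I remember correctly) it is --driver-memory that actually controls the heap size there. Something along these lines should give the shell the full 8 GB in local mode:

$SPARK_HOME/bin/spark-shell \
  --master local[4] \
  --driver-memory 8g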
Hello everyone,
I am creating a graph from `gz`-compressed `json` files of `edge` and
`vertex` types.
I have put the files in a Dropbox folder [here][1].
I load and map these `json` records to create the `vertices` and `edge` types
required by `graphx` like this:
val vertices_raw = sqlCo
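In outline, what I am doing is roughly the following (the file names and the JSON field names "id", "src", "dst", "attr" below are simplified placeholders rather than my exact schema, and `sqlContext` is the one spark-shell provides):

import org.apache.spark.graphx.{Edge, Graph}
import org.apache.spark.rdd.RDD

// read.json decompresses .gz input transparently
val vertices_raw = sqlContext.read.json("vertices.json.gz")
val edges_raw = sqlContext.read.json("edges.json.gz")

// GraphX expects RDD[(VertexId, VD)] for vertices and RDD[Edge[ED]] for edges
val vertices: RDD[(Long, String)] =
  vertices_raw.rdd.map(row => (row.getAs[Long]("id"), row.getAs[String]("attr")))
val edges: RDD[Edge[String]] =
  edges_raw.rdd.map(row => Edge(row.getAs[Long]("src"), row.getAs[Long]("dst"), row.getAs[String]("attr")))

val graph: Graph[String, String] = Graph(vertices, edges)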