Interestingly, after more digging, df.printSchema() in raw Spark shows the columns as long, not bigint.
root
 |-- localEventDtTm: timestamp (nullable = true)
 |-- asset: string (nullable = true)
 |-- assetCategory: string (nullable = true)
 |-- assetType: string (nullable = true)
 |-- event: string (nullable = true)
 |-- extras: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- name: string (nullable = true)
 |    |    |-- value: string (nullable = true)
 |-- ipAddress: string (nullable = true)
 |-- memberId: string (nullable = true)
 |-- system: string (nullable = true)
 |-- timestamp: long (nullable = true)
 |-- title: string (nullable = true)
 |-- trackingId: string (nullable = true)
 |-- version: long (nullable = true)

I'm going to have to keep digging, I guess. :(
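One possible workaround in the meantime (untested sketch, assuming `df` is the DataFrame with the schema above): cast the two long columns to a type SparkR does accept, e.g. double, on the Scala side before handing the DataFrame over. Note that double is exact only up to 2^53, which is fine for epoch-millisecond timestamps; cast to string instead if you need the values bit-exact.

    // Sketch: cast the unsupported long columns to double so SparkR
    // can collect them. Column names are from the printSchema output
    // above; `df` is assumed to be the DataFrame in question.
    import org.apache.spark.sql.functions.col

    val dfCasted = df
      .withColumn("timestamp", col("timestamp").cast("double"))
      .withColumn("version", col("version").cast("double"))

    dfCasted.printSchema()  // timestamp and version now show as double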