Is there any way to map pyspark.sql.Row columns to JDBC table columns, or do
I have to put them in the right order before saving?

I'm using code like this:

```
from pyspark.sql import Row

# Row sorts its keyword-argument fields alphabetically by name.
rdd = rdd.map(lambda i: Row(name=i.name, value=i.value))
sqlCtx.createDataFrame(rdd).write.jdbc(dbconn_string, tablename, mode='append')
```

Since the Row class orders its fields alphabetically, the values are inserted
into the SQL table in alphabetical order rather than being matched by name to
the table columns.
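
For reference, here is what the "right order" approach looks like: a minimal
sketch, assuming a hypothetical target table whose columns are (value, name)
in that order, and assuming the JDBC writer inserts positionally as described
above.

```
from pyspark.sql import Row

rdd = rdd.map(lambda i: Row(name=i.name, value=i.value))
df = sqlCtx.createDataFrame(rdd)

# select() reorders the DataFrame columns to match the table's column
# order before writing; the writer then inserts them positionally.
df.select('value', 'name').write.jdbc(dbconn_string, tablename, mode='append')
```

It works, but it means hardcoding the table's column order in the job, which
is what I was hoping to avoid.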
