Hello,
*Some context:*
I have a Phoenix tenant-specific view named CUSTOM_ENTITY."z02" (Phoenix
table names can be quoted to make them case-sensitive). I am attempting to
write to this view using Spark from a Scala script. The following read
succeeds:
val table = """CUSTOM_ENTITY."z02""""
val tenantId = "myTenantId"
val urlWithTenant =
  s"jdbc:phoenix:myZKHost1,myZKHost2,myZKHost3:2181;TenantId=$tenantId"
val driver = "org.apache.phoenix.jdbc.PhoenixDriver"
val readOptions =
  Map("driver" -> driver, "url" -> urlWithTenant, "dbtable" -> table)
val df = sqlContext.read.format("jdbc").options(readOptions).load()
This gives me a DataFrame with the data successfully read from my tenant
view. Now, when I try to write back with this DataFrame:
df.write.format("jdbc").insertInto(table)
I am getting the following exception:
java.lang.RuntimeException: [1.15] failure: identifier expected
CUSTOM_ENTITY."z02"
^
(caret is pointing under the '.' before "z02")
at scala.sys.package$.error(package.scala:27)
at
org.apache.spark.sql.catalyst.SqlParser$.parseTableIdentifier(SqlParser.scala:56)
at
org.apache.spark.sql.DataFrameWriter.insertInto(DataFrameWriter.scala:164)
Looking at the stack trace, it appears that Spark's SQL parser doesn't know
what to do with the quotes around z02. I've tried escaping them in every way
I could think of, but to no avail; a few of the variants I tried are shown
below.
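For reference, these are the kinds of variants I tried (illustrative, not
exhaustive):

// None of these gets the quoted identifier through to Phoenix:
df.write.format("jdbc").insertInto("CUSTOM_ENTITY.\\\"z02\\\"") // backslash-escaped quotes
df.write.format("jdbc").insertInto("`CUSTOM_ENTITY`.`z02`")     // backticks
df.write.format("jdbc").insertInto("CUSTOM_ENTITY.z02")         // no quotes at all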
Is there a way to have Spark not complain about the quotes and correctly
pass them along?
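In case it helps frame an answer: the alternative I am considering is
DataFrameWriter.jdbc, which as far as I can tell takes the table name
verbatim rather than running it through the parser. This is untested on my
side, and I am not sure Phoenix will accept the INSERT statements Spark
generates (Phoenix only speaks UPSERT), but roughly:

import java.util.Properties
import org.apache.spark.sql.SaveMode

val props = new Properties()
props.setProperty("driver", driver)

// jdbc() takes the table string as-is, so the quotes may survive:
df.write.mode(SaveMode.Append).jdbc(urlWithTenant, table, props)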
Thanks