Re: column identifiers in Spark SQL

2015-09-22 Thread Richard Hillegas
// raises an AnalysisException: cannot resolve 'c\"d' given input columns A, b, c"d; line 1 pos 7
sqlContext.sql("""select `c\"d` from test_data""").show

Thanks,
-Rick

Michael Armbrust wrote on 09/22/2015 01:16:12 PM:
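For reference, a minimal sketch of the setup implied by that error message (the column names A, b, and c"d come from the exception text; the values, types, and the shell-style sqlContext are assumptions), using the Spark 1.5-era API:

import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)   // sc is the shell's SparkContext
import sqlContext.implicits._

// Assumed test table; column names inferred from the exception message.
val df = Seq((1, 2, 3)).toDF("A", "b", "c\"d")
df.registerTempTable("test_data")

// Backslash-escaping the double quote inside backticks does not resolve:
// org.apache.spark.sql.AnalysisException: cannot resolve 'c\"d' ...
sqlContext.sql("""select `c\"d` from test_data""").show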

Re: column identifiers in Spark SQL

2015-09-22 Thread Michael Armbrust
> // this now returns rows consisting of the string literal "cd"
> sqlContext.sql("""select "c""d" from test_data""").show
>
> Thanks,
> -Rick
>
> Michael Armbrust wrote on 09/22/2015 10:58:36 AM:
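A hedged aside on the same assumed test_data table as above: the quoted result suggests the double-quoted form is being parsed as a string literal rather than an identifier, and the DataFrame API can be used to check the actual column name without going through SQL quoting at all (df here is the assumed DataFrame registered as test_data):

df.printSchema()             // shows the literal column names, including c"d
df.select(df("c\"d")).show   // DataFrame API lookup by exact name, no SQL quoting involved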

Re: column identifiers in Spark SQL

2015-09-22 Thread Richard Hillegas
Thanks,
-Rick

Michael Armbrust wrote on 09/22/2015 10:58:36 AM:
> From: Michael Armbrust
> To: Richard Hillegas/San Francisco/IBM@IBMUS
> Cc: Dev
> Date: 09/22/2015 10:59 AM
> Subject: Re: column identifiers in Spark SQL
>
> Are you using a SQLContext or a HiveContext?  The program
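For readers following along, a minimal sketch of the two contexts in question (Spark 1.5-era API); they use different SQL parsers, which is why identifier handling can differ between them:

import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.hive.HiveContext

val sqlContext  = new SQLContext(sc)    // Spark's simple SQL parser
val hiveContext = new HiveContext(sc)   // HiveQL parser; documented at the time as a superset of SQLContext's functionality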

Re: column identifiers in Spark SQL

2015-09-22 Thread Michael Armbrust
also a lot better.

On Tue, Sep 22, 2015 at 10:53 AM, Richard Hillegas wrote:
> I am puzzled by the behavior of column identifiers in Spark SQL. I don't
> find any guidance in the "Spark SQL and DataFrame Guide" at
> http://spark.apache.org/docs/latest/sql-programming-guide.html.

column identifiers in Spark SQL

2015-09-22 Thread Richard Hillegas
I am puzzled by the behavior of column identifiers in Spark SQL. I don't find any guidance in the "Spark SQL and DataFrame Guide" at http://spark.apache.org/docs/latest/sql-programming-guide.html. I am seeing odd behavior related to case-sensitivity and to delimited (quoted) identifiers.
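A hedged sketch of the kind of experiment being described (the column names and values here are assumptions, not the original script); the point is simply to contrast unquoted, differently-cased references with backtick-delimited ones:

val df = Seq((1, 2)).toDF("A", "b")
df.registerTempTable("test_data")

// Do unquoted names resolve regardless of case?
sqlContext.sql("select a, B from test_data").show

// And how do backtick-delimited (quoted) identifiers behave?
sqlContext.sql("select `A`, `b` from test_data").show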