Re: SparkSQL 1.3.0 JDBC data source issues

2015-03-19 Thread Pei-Lun Lee
JIRA and PR for first issue:
https://issues.apache.org/jira/browse/SPARK-6408
https://github.com/apache/spark/pull/5087
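
The root cause appears to be that the JDBC relation compiles pushed-down
string filters into the WHERE clause without quoting the literal, so
name='bar' reaches the database as name = bar. A minimal sketch of the kind
of quoting fix involved (illustrative only, not the actual PR code):

import org.apache.spark.sql.sources.{EqualTo, Filter, GreaterThan, LessThan}

object FilterCompiler {
  // Quote and escape string literals; pass other values through unchanged.
  private def compileValue(value: Any): Any = value match {
    case s: String => "'" + s.replace("'", "''") + "'"
    case other     => other
  }

  def compileFilter(f: Filter): Option[String] = f match {
    case EqualTo(attr, value)     => Some(s"$attr = ${compileValue(value)}")
    case GreaterThan(attr, value) => Some(s"$attr > ${compileValue(value)}")
    case LessThan(attr, value)    => Some(s"$attr < ${compileValue(value)}")
    case _                        => None
  }
}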

On Thu, Mar 19, 2015 at 12:20 PM, Pei-Lun Lee  wrote:

> Hi,
>
> I am trying the JDBC data source in Spark SQL 1.3.0 and have found some issues.
>
> First, the syntax "where str_col='value'" gives an error for both
> PostgreSQL and MySQL:
>
> psql> create table foo(id int primary key,name text,age int);
> bash> SPARK_CLASSPATH=postgresql-9.4-1201-jdbc41.jar spark/bin/spark-shell
> scala>
> sqlContext.load("jdbc",Map("url"->"jdbc:postgresql://XXX","dbtable"->"foo")).registerTempTable("foo")
> scala> sql("select * from foo where name='bar'").collect
> org.postgresql.util.PSQLException: ERROR: operator does not exist: text =
> bar
>   Hint: No operator matches the given name and argument type(s). You might
> need to add explicit type casts.
>   Position: 40
> scala> sql("select * from foo where name like '%foo'").collect
>
> bash> SPARK_CLASSPATH=mysql-connector-java-5.1.34.jar spark/bin/spark-shell
> scala>
> sqlContext.load("jdbc",Map("url"->"jdbc:mysql://XXX","dbtable"->"foo")).registerTempTable("foo")
> scala> sql("select * from foo where name='bar'").collect
> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column
> 'bar' in 'where clause'
>
>
>
> Second, a PostgreSQL table with the json data type does not work:
>
> psql> create table foo(id int primary key, data json);
> scala>
> sqlContext.load("jdbc",Map("url"->"jdbc:postgresql://XXX","dbtable"->"foo")).registerTempTable("foo")
> java.sql.SQLException: Unsupported type 
>
>
>
> Not sure whether these are bugs in Spark SQL or in the JDBC drivers. I can
> file a JIRA ticket if needed.
>
> Thanks,
> --
> Pei-Lun
>
>


Spark scheduling, data locality

2015-03-19 Thread Zoltán Zvara
I'm trying to understand the task scheduling mechanism of Spark, and I'm
curious about where locality preferences get evaluated. I'm trying to
determine whether locality preferences are fetchable before the task gets
serialized. Any hints would be most appreciated!
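
For context, locality preferences originate from RDD.getPreferredLocations,
which the DAGScheduler consults (via getPreferredLocs) while it builds the
task set, i.e. before tasks are serialized. A minimal sketch of a custom RDD
declaring its own preferences (illustrative only):

import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

class PinnedRDD(sc: SparkContext, hosts: Seq[String]) extends RDD[Int](sc, Nil) {
  override def getPartitions: Array[Partition] =
    Array(new Partition { override def index: Int = 0 })

  override def compute(split: Partition, context: TaskContext): Iterator[Int] =
    Iterator(1, 2, 3)

  // Consulted by the scheduler when tasks are created, before serialization.
  override def getPreferredLocations(split: Partition): Seq[String] = hosts
}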

Have nice day!

Zvara Zoltán



mail, hangout, skype: zoltan.zv...@gmail.com

mobile, viber: +36203129543

bank: 10918001-0021-50480008

address: Hungary, 2475 Kápolnásnyék, Kossuth 6/a

elte: HSKSJZ (ZVZOAAI.ELTE)


Spark SQL ExternalSorter not stopped

2015-03-19 Thread Michael Allman
I've examined the experimental support for ExternalSorter in Spark SQL, and it
does not appear that the external sorter is ever stopped (ExternalSorter.stop).
According to the API documentation, this suggests a resource leak. Before I
file a bug report in Jira, can someone familiar with the codebase confirm this
is indeed a bug?
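
For reference, a sketch of the lifecycle the API documentation implies.
ExternalSorter is private[spark] and needs a live SparkEnv, so this is
illustrative only:

import org.apache.spark.util.collection.ExternalSorter

def sortAndConsume(records: Iterator[(String, Int)]): Unit = {
  val sorter = new ExternalSorter[String, Int, Int]()
  try {
    sorter.insertAll(records)        // may spill sorted runs to disk
    sorter.iterator.foreach(println) // consume the merged, sorted output
  } finally {
    // stop() deletes the intermediate spill files; the suspected leak is
    // that the Spark SQL code path never calls it.
    sorter.stop()
  }
}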

Thanks,

Michael



Exception using the new createDirectStream util method

2015-03-19 Thread Alberto Rodriguez
Hi all,

I am trying to make the new Kafka and Spark Streaming integration work (the
direct, "no receivers" approach). I have created a unit test where I configure
and start both ZooKeeper and Kafka.

When I try to create the InputDStream using the createDirectStream method
of the KafkaUtils class I am getting the following error:

org.apache.spark.SparkException: Couldn't find leader offsets for Set()
org.apache.spark.SparkException: org.apache.spark.SparkException: Couldn't
find leader offsets for Set()
at
org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:413)

Following is the code that tries to create the DStream:

val messages: InputDStream[(String, String)] =
  KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
    ssc, kafkaParams, topics)
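
where kafkaParams and topics are of roughly this shape (a hedged sketch;
values are illustrative):

val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
val topics = Set("test-topic")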

Has anyone faced this problem?

Thank you in advance.

Kind regards,

Alberto


Re: Exception using the new createDirectStream util method

2015-03-19 Thread Ted Yu
Looking at KafkaCluster#getLeaderOffsets():

  respMap.get(tp).foreach { por: PartitionOffsetsResponse =>
    if (por.error == ErrorMapping.NoError) {
      ...
    } else {
      errs.append(ErrorMapping.exceptionFor(por.error))
    }
  }

There should be some error other than "Couldn't find leader offsets for
Set()".

Can you check again?

Thanks

On Thu, Mar 19, 2015 at 12:10 PM, Alberto Rodriguez 
wrote:

> Hi all,
>
> I am trying to make the new Kafka and Spark Streaming integration work (the
> direct, "no receivers" approach). I have created a unit test where I
> configure and start both ZooKeeper and Kafka.
>
> When I try to create the InputDStream using the createDirectStream method
> of the KafkaUtils class I am getting the following error:
>
> org.apache.spark.SparkException: Couldn't find leader offsets for Set()
> org.apache.spark.SparkException: org.apache.spark.SparkException: Couldn't
> find leader offsets for Set()
> at
>
> org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:413)
>
> Following is the code that tries to create the DStream:
>
> val messages: InputDStream[(String, String)] =
> KafkaUtils.createDirectStream[String, String, StringDecoder,
> StringDecoder](
> ssc, kafkaParams, topics)
>
> Has anyone faced this problem?
>
> Thank you in advance.
>
> Kind regards,
>
> Alberto
>


Re: Which linear algebra interface to use within Spark MLlib?

2015-03-19 Thread Ulanov, Alexander
Thank you! When do you expect gemm to be in Breeze, and that version of Breeze
to ship with MLlib?

Also, could someone please elaborate on linalg.BLAS and Matrix? Are they
going to be developed further, and should all developers use them in the long
term?

Best regards, Alexander

On 18.03.2015, at 23:21, "Debasish Das" <debasish.da...@gmail.com> wrote:

dgemm, dgemv and dot come to Breeze and Spark through netlib-java.

Right now, in both dot and dgemv, Breeze does an extra memory allocation, but
we already found the issue and we are working on adding a common trait that
will provide a sink operation (basically, memory will be allocated by the
user)... adding more BLAS operators in Breeze will also help in general, as a
lot more operations are defined over there...
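
For illustration, this is what calling dgemm through netlib-java directly
looks like, reusing a preallocated output buffer (a hedged sketch; arrays are
column-major, as BLAS expects):

import com.github.fommil.netlib.BLAS.{getInstance => blas}

val m = 2; val k = 3; val n = 2
val a = Array(1.0, 2.0, 3.0, 4.0, 5.0, 6.0) // m x k, column-major
val b = Array(1.0, 0.0, 0.0, 0.0, 1.0, 0.0) // k x n, column-major
val c = new Array[Double](m * n)            // preallocated, reused across calls

// C := 1.0 * A * B + 0.0 * C, written in place into c
blas.dgemm("N", "N", m, n, k, 1.0, a, m, b, k, 0.0, c, m)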


On Wed, Mar 18, 2015 at 8:09 PM, Ulanov, Alexander
<alexander.ula...@hp.com> wrote:
Hi,

Currently I am using Breeze within Spark MLlib for linear algebra. I would like
to reuse previously allocated matrices for storing the result of matrix
multiplication, i.e. I need the "gemm" function C := q*A*B + p*C, which is
missing in Breeze (Breeze automatically allocates a new matrix to store the
result of a multiplication). Also, I would like to minimize the gemm calls that
Breeze makes. Should I use the mllib.linalg.BLAS functions instead? While it has
gemm and axpy, it has a rather limited number of operations. For example, I need
row or column sums of a matrix, or to apply a function to all elements of a
matrix. Also, the MLlib Vector and Matrix interfaces that linalg.BLAS operates
on seem rather undeveloped. Should I use plain netlib-java instead (and will it
remain in MLlib in future releases)?

Best regards, Alexander





Re: Exception using the new createDirectStream util method

2015-03-19 Thread Cody Koeninger
What is the value of your topics variable, and does it correspond to topics
that already exist on the cluster and have messages in them?

On Thu, Mar 19, 2015 at 3:10 PM, Ted Yu  wrote:

> Looking at KafkaCluster#getLeaderOffsets():
>
>   respMap.get(tp).foreach { por: PartitionOffsetsResponse =>
>     if (por.error == ErrorMapping.NoError) {
>       ...
>     } else {
>       errs.append(ErrorMapping.exceptionFor(por.error))
>     }
>   }
>
> There should be some error other than "Couldn't find leader offsets for
> Set()".
>
> Can you check again ?
>
> Thanks
>
> On Thu, Mar 19, 2015 at 12:10 PM, Alberto Rodriguez 
> wrote:
>
> > Hi all,
> >
> > I am trying to make the new Kafka and Spark Streaming integration work
> > (the direct, "no receivers" approach). I have created a unit test where I
> > configure and start both ZooKeeper and Kafka.
> >
> > When I try to create the InputDStream using the createDirectStream method
> > of the KafkaUtils class I am getting the following error:
> >
> > org.apache.spark.SparkException: Couldn't find leader offsets for Set()
> > org.apache.spark.SparkException: org.apache.spark.SparkException:
> > Couldn't find leader offsets for Set()
> > at
> >
> >
> org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:413)
> >
> > Following is the code that tries to create the DStream:
> >
> > val messages: InputDStream[(String, String)] =
> > KafkaUtils.createDirectStream[String, String, StringDecoder,
> > StringDecoder](
> > ssc, kafkaParams, topics)
> >
> > Has anyone faced this problem?
> >
> > Thank you in advance.
> >
> > Kind regards,
> >
> > Alberto
> >
>


Re: Which linear algebra interface to use within Spark MLlib?

2015-03-19 Thread Debasish Das
I think for Breeze we are focused on dot and dgemv right now (along with
several other matrix-vector style operations)...

For dgemm it is tricky, since you need to add dgemm for both DenseMatrix
and CSCMatrix... and for CSCMatrix you need to get something like
SuiteSparse, which is under LGPL... so we have to think more about it.

For now, can't you use dgemm directly from mllib.linalg.BLAS? It's in
master...


On Thu, Mar 19, 2015 at 1:49 PM, Ulanov, Alexander 
wrote:

>  Thank you! When do you expect gemm to be in Breeze, and that version of
> Breeze to ship with MLlib?
>
>  Also, could someone please elaborate on linalg.BLAS and Matrix? Are
> they going to be developed further, and should all developers use them in
> the long term?
>
> Best regards, Alexander
>
> On 18.03.2015, at 23:21, "Debasish Das" wrote:
>
>   dgemm, dgemv and dot come to Breeze and Spark through netlib-java.
>
>  Right now, in both dot and dgemv, Breeze does an extra memory allocation,
> but we already found the issue and we are working on adding a common trait
> that will provide a sink operation (basically, memory will be allocated by
> the user)... adding more BLAS operators in Breeze will also help in general,
> as a lot more operations are defined over there...
>
>
> On Wed, Mar 18, 2015 at 8:09 PM, Ulanov, Alexander <
> alexander.ula...@hp.com> wrote:
>
>> Hi,
>>
>> Currently I am using Breeze within Spark MLlib for linear algebra. I
>> would like to reuse previously allocated matrices for storing the result of
>> matrix multiplication, i.e. I need the "gemm" function C := q*A*B + p*C,
>> which is missing in Breeze (Breeze automatically allocates a new matrix to
>> store the result of a multiplication). Also, I would like to minimize the
>> gemm calls that Breeze makes. Should I use the mllib.linalg.BLAS functions
>> instead? While it has gemm and axpy, it has a rather limited number of
>> operations. For example, I need row or column sums of a matrix, or to apply
>> a function to all elements of a matrix. Also, the MLlib Vector and Matrix
>> interfaces that linalg.BLAS operates on seem rather undeveloped. Should I
>> use plain netlib-java instead (and will it remain in MLlib in future
>> releases)?
>>
>> Best regards, Alexander
>>
>
>


Re: Which linear algebra interface to use within Spark MLlib?

2015-03-19 Thread Ulanov, Alexander
Thanks for the quick response.

I can use linalg.BLAS.gemm, and this means that I have to use the MLlib Matrix.
The latter does not support some useful functionality needed for optimization,
for example, creation of a Matrix given the matrix size, an array, and an
offset into that array. This means that I will need to create the matrix in
Breeze and convert it to MLlib. Also, linalg.BLAS misses some useful BLAS
functions I need that can be found in Breeze (and netlib-java). The same
concerns apply to the MLlib Vector.
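
For example (a hedged sketch of the conversion just described; MLlib's
DenseMatrix constructor takes no offset, so a copy is required):

import breeze.linalg.{DenseMatrix => BDM}
import org.apache.spark.mllib.linalg.DenseMatrix

val buffer = new Array[Double](10)                 // preallocated storage
val bm = new BDM[Double](2, 2, buffer, 4)          // 2x2 Breeze view at offset 4
val mm = new DenseMatrix(2, 2, buffer.slice(4, 8)) // copied into an MLlib matrix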

Best regards, Alexander

On 19.03.2015, at 14:16, "Debasish Das" <debasish.da...@gmail.com> wrote:

I think for Breeze we are focused on dot and dgemv right now (along with
several other matrix-vector style operations)...

For dgemm it is tricky, since you need to add dgemm for both DenseMatrix and
CSCMatrix... and for CSCMatrix you need to get something like SuiteSparse,
which is under LGPL... so we have to think more about it.

For now, can't you use dgemm directly from mllib.linalg.BLAS? It's in master...


On Thu, Mar 19, 2015 at 1:49 PM, Ulanov, Alexander
<alexander.ula...@hp.com> wrote:
Thank you! When do you expect gemm to be in Breeze, and that version of Breeze
to ship with MLlib?

Also, could someone please elaborate on linalg.BLAS and Matrix? Are they
going to be developed further, and should all developers use them in the long
term?

Best regards, Alexander

On 18.03.2015, at 23:21, "Debasish Das" <debasish.da...@gmail.com> wrote:

dgemm, dgemv and dot come to Breeze and Spark through netlib-java.

Right now, in both dot and dgemv, Breeze does an extra memory allocation, but
we already found the issue and we are working on adding a common trait that
will provide a sink operation (basically, memory will be allocated by the
user)... adding more BLAS operators in Breeze will also help in general, as a
lot more operations are defined over there...


On Wed, Mar 18, 2015 at 8:09 PM, Ulanov, Alexander
<alexander.ula...@hp.com> wrote:
Hi,

Currently I am using Breeze within Spark MLlib for linear algebra. I would like
to reuse previously allocated matrices for storing the result of matrix
multiplication, i.e. I need the "gemm" function C := q*A*B + p*C, which is
missing in Breeze (Breeze automatically allocates a new matrix to store the
result of a multiplication). Also, I would like to minimize the gemm calls that
Breeze makes. Should I use the mllib.linalg.BLAS functions instead? While it has
gemm and axpy, it has a rather limited number of operations. For example, I need
row or column sums of a matrix, or to apply a function to all elements of a
matrix. Also, the MLlib Vector and Matrix interfaces that linalg.BLAS operates
on seem rather undeveloped. Should I use plain netlib-java instead (and will it
remain in MLlib in future releases)?

Best regards, Alexander






Re: Exception using the new createDirectStream util method

2015-03-19 Thread Alberto Rodriguez
Thank you for replying,

Ted, I have been debugging, and the getLeaderOffsets method is not appending
errors because the findLeaders method that is called at the first line of
getLeaderOffsets is not returning leaders.

Cody, the topics do not have any messages yet. Could this be an issue?

If you guys want to have a look at the code, I've just uploaded it to my
github account: big-brother (see DirectKafkaWordCountTest.scala).

Thank you again!!

2015-03-19 22:13 GMT+01:00 Cody Koeninger :

> What is the value of your topics variable, and does it correspond to
> topics that already exist on the cluster and have messages in them?
>
> On Thu, Mar 19, 2015 at 3:10 PM, Ted Yu  wrote:
>
>> Looking at KafkaCluster#getLeaderOffsets():
>>
>>   respMap.get(tp).foreach { por: PartitionOffsetsResponse =>
>>     if (por.error == ErrorMapping.NoError) {
>>       ...
>>     } else {
>>       errs.append(ErrorMapping.exceptionFor(por.error))
>>     }
>>   }
>>
>> There should be some error other than "Couldn't find leader offsets for
>> Set()".
>>
>> Can you check again ?
>>
>> Thanks
>>
>> On Thu, Mar 19, 2015 at 12:10 PM, Alberto Rodriguez 
>> wrote:
>>
>> > Hi all,
>> >
>> > I am trying to make the new Kafka and Spark Streaming integration work
>> > (the direct, "no receivers" approach). I have created a unit test where
>> > I configure and start both ZooKeeper and Kafka.
>> >
>> > When I try to create the InputDStream using the createDirectStream
>> method
>> > of the KafkaUtils class I am getting the following error:
>> >
>> > org.apache.spark.SparkException: Couldn't find leader offsets for Set()
>> > org.apache.spark.SparkException: org.apache.spark.SparkException:
>> > Couldn't find leader offsets for Set()
>> > at
>> >
>> >
>> org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:413)
>> >
>> > Following is the code that tries to create the DStream:
>> >
>> > val messages: InputDStream[(String, String)] =
>> > KafkaUtils.createDirectStream[String, String, StringDecoder,
>> > StringDecoder](
>> > ssc, kafkaParams, topics)
>> >
>> > Has anyone faced this problem?
>> >
>> > Thank you in advance.
>> >
>> > Kind regards,
>> >
>> > Alberto
>> >
>>
>
>


Re: Which linear algebra interface to use within Spark MLlib?

2015-03-19 Thread Debasish Das
Yeah it will be better if we consolidate the development on one of
them...either Breeze or mllib.BLAS...

On Thu, Mar 19, 2015 at 2:25 PM, Ulanov, Alexander 
wrote:

>  Thanks for the quick response.
>
>  I can use linalg.BLAS.gemm, and this means that I have to use the MLlib
> Matrix. The latter does not support some useful functionality needed for
> optimization, for example, creation of a Matrix given the matrix size, an
> array, and an offset into that array. This means that I will need to create
> the matrix in Breeze and convert it to MLlib. Also, linalg.BLAS misses some
> useful BLAS functions I need that can be found in Breeze (and netlib-java).
> The same concerns apply to the MLlib Vector.
>
> Best regards, Alexander
>
> On 19.03.2015, at 14:16, "Debasish Das" wrote:
>
>   I think for Breeze we are focused on dot and dgemv right now (along
> with several other matrix-vector style operations)...
>
>  For dgemm it is tricky, since you need to add dgemm for both
> DenseMatrix and CSCMatrix... and for CSCMatrix you need to get something
> like SuiteSparse, which is under LGPL... so we have to think more about it.
>
>  For now, can't you use dgemm directly from mllib.linalg.BLAS? It's in
> master...
>
>
> On Thu, Mar 19, 2015 at 1:49 PM, Ulanov, Alexander <
> alexander.ula...@hp.com> wrote:
>
>>  Thank you! When do you expect gemm to be in Breeze, and that version
>> of Breeze to ship with MLlib?
>>
>>  Also, could someone please elaborate on linalg.BLAS and Matrix? Are
>> they going to be developed further, and should all developers use them in
>> the long term?
>>
>> Best regards, Alexander
>>
>> On 18.03.2015, at 23:21, "Debasish Das" <debasish.da...@gmail.com> wrote:
>>
>>   dgemm, dgemv and dot come to Breeze and Spark through netlib-java.
>>
>>  Right now, in both dot and dgemv, Breeze does an extra memory allocation,
>> but we already found the issue and we are working on adding a common trait
>> that will provide a sink operation (basically, memory will be allocated by
>> the user)... adding more BLAS operators in Breeze will also help in general,
>> as a lot more operations are defined over there...
>>
>>
>> On Wed, Mar 18, 2015 at 8:09 PM, Ulanov, Alexander <
>> alexander.ula...@hp.com> wrote:
>>
>>> Hi,
>>>
>>> Currently I am using Breeze within Spark MLlib for linear algebra. I
>>> would like to reuse previously allocated matrices for storing the result
>>> of matrix multiplication, i.e. I need the "gemm" function C := q*A*B + p*C,
>>> which is missing in Breeze (Breeze automatically allocates a new matrix to
>>> store the result of a multiplication). Also, I would like to minimize the
>>> gemm calls that Breeze makes. Should I use the mllib.linalg.BLAS functions
>>> instead? While it has gemm and axpy, it has a rather limited number of
>>> operations. For example, I need row or column sums of a matrix, or to
>>> apply a function to all elements of a matrix. Also, the MLlib Vector and
>>> Matrix interfaces that linalg.BLAS operates on seem rather undeveloped.
>>> Should I use plain netlib-java instead (and will it remain in MLlib in
>>> future releases)?
>>>
>>> Best regards, Alexander
>>>
>>
>>
>


Re: Exception using the new createDirectStream util method

2015-03-19 Thread Cody Koeninger
Yeah, I wouldn't be shocked if Kafka's metadata APIs didn't return results
for topics that don't have any messages (sorry about the triple negative,
but I think you get my meaning).

Try putting a message in the topic and seeing what happens.
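
For example, a hedged sketch of seeding the topic with the kafka.producer API
that ships with Kafka 0.8.x (the broker address is an assumption):

import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

val props = new Properties()
props.put("metadata.broker.list", "localhost:9092") // assumed broker address
props.put("serializer.class", "kafka.serializer.StringEncoder")

val producer = new Producer[String, String](new ProducerConfig(props))
producer.send(new KeyedMessage[String, String]("test-topic", "hello")) // seed message
producer.close()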

On Thu, Mar 19, 2015 at 4:38 PM, Alberto Rodriguez 
wrote:

> Thank you for replying,
>
> Ted, I have been debugging, and the getLeaderOffsets method is not appending
> errors because the findLeaders method that is called at the first line of
> getLeaderOffsets is not returning leaders.
>
> Cody, the topics do not have any messages yet. Could this be an issue?
>
> If you guys want to have a look at the code, I've just uploaded it to my
> github account: big-brother (see
> DirectKafkaWordCountTest.scala).
>
> Thank you again!!
>
> 2015-03-19 22:13 GMT+01:00 Cody Koeninger :
>
> > What is the value of your topics variable, and does it correspond to
> > topics that already exist on the cluster and have messages in them?
> >
> > On Thu, Mar 19, 2015 at 3:10 PM, Ted Yu  wrote:
> >
> >> Looking at KafkaCluster#getLeaderOffsets():
> >>
> >>   respMap.get(tp).foreach { por: PartitionOffsetsResponse =>
> >>     if (por.error == ErrorMapping.NoError) {
> >>       ...
> >>     } else {
> >>       errs.append(ErrorMapping.exceptionFor(por.error))
> >>     }
> >>   }
> >>
> >> There should be some error other than "Couldn't find leader offsets for
> >> Set()".
> >>
> >> Can you check again ?
> >>
> >> Thanks
> >>
> >> On Thu, Mar 19, 2015 at 12:10 PM, Alberto Rodriguez 
> >> wrote:
> >>
> >> > Hi all,
> >> >
> >> > I am trying to make the new Kafka and Spark Streaming integration work
> >> > (the direct, "no receivers" approach). I have created a unit test where
> >> > I configure and start both ZooKeeper and Kafka.
> >> >
> >> > When I try to create the InputDStream using the createDirectStream
> >> method
> >> > of the KafkaUtils class I am getting the following error:
> >> >
> >> > org.apache.spark.SparkException: Couldn't find leader offsets for Set()
> >> > org.apache.spark.SparkException: org.apache.spark.SparkException:
> >> > Couldn't find leader offsets for Set()
> >> > at
> >> >
> >> >
> >>
> org.apache.spark.streaming.kafka.KafkaUtils$$anonfun$createDirectStream$2.apply(KafkaUtils.scala:413)
> >> >
> >> > Following is the code that tries to create the DStream:
> >> >
> >> > val messages: InputDStream[(String, String)] =
> >> > KafkaUtils.createDirectStream[String, String, StringDecoder,
> >> > StringDecoder](
> >> > ssc, kafkaParams, topics)
> >> >
> >> > Has anyone faced this problem?
> >> >
> >> > Thank you in advance.
> >> >
> >> > Kind regards,
> >> >
> >> > Alberto
> >> >
> >>
> >
> >
>


Add Char support in SQL dataTypes

2015-03-19 Thread A.M.Chan
case class PrimitiveData(
    charField: Char, // Can't get the char schema info
    intField: Int,
    longField: Long,
    doubleField: Double,
    floatField: Float,
    shortField: Short,
    byteField: Byte,
    booleanField: Boolean)

I can't get the schema from the case class PrimitiveData.
An error occurs when I use schemaFor[PrimitiveData]:

scala.MatchError: Char (of class scala.reflect.internal.Types$TypeRef$$anon$6)
at org.apache.spark.sql.catalyst.ScalaReflection$class.schemaFor(ScalaReflection.scala:112)





--

kaka1992

RE: Add Char support in SQL dataTypes

2015-03-19 Thread Cheng, Hao
Can you use Varchar or String instead? Currently, Spark SQL converts varchar
into the string type internally (without a max-length limitation). However,
the "char" type is not supported yet.
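
For instance, a minimal sketch of that workaround (field names follow the
original post):

// Model the column as String; ScalaReflection has no mapping for Char.
case class PrimitiveData(
    charField: String, // a one-character String instead of Char
    intField: Int)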

-Original Message-
From: A.M.Chan [mailto:kaka_1...@163.com] 
Sent: Friday, March 20, 2015 9:56 AM
To: spark-dev
Subject: Add Char support in SQL dataTypes

case class PrimitiveData(
    charField: Char, // Can't get the char schema info
    intField: Int,
    longField: Long,
    doubleField: Double,
    floatField: Float,
    shortField: Short,
    byteField: Byte,
    booleanField: Boolean)

I can't get the schema from the case class PrimitiveData.
An error occurs when I use schemaFor[PrimitiveData]:

scala.MatchError: Char (of class scala.reflect.internal.Types$TypeRef$$anon$6)
at org.apache.spark.sql.catalyst.ScalaReflection$class.schemaFor(ScalaReflection.scala:112)





--

kaka1992
