-user +dev
cc: xiao

Hi Ayan,

I made a PR to fix the issue you reported; however, it seems that none of
the releases I checked (e.g., v1.6, v2.0, v2.1) hit the issue. Could you
describe your environment and conditions in more detail?

You first reported that you used v1.6, but I checked and found that the
exception does not occur there. Am I missing anything?

// maropu



On Fri, Jan 27, 2017 at 11:10 AM, ayan guha <guha.a...@gmail.com> wrote:

> Hi
>
> I will do a little more testing and will let you know. It did not work
> with INT and Number types, for sure.
>
> While writing, everything is fine :)
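>
> For reference, the write path that works is just the mirror of the
> read (a sketch, with a hypothetical target table name):
>
> # Writing goes through a different code path than reading, so it does
> # not hit the NUMBER type-mapping problem on the read side.
> df.write.jdbc(url=url, table="agtest1_copy", mode="overwrite",
>     properties={"user": user, "password": password, "driver": driver})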
>
> On Fri, Jan 27, 2017 at 1:04 PM, Takeshi Yamamuro <linguin....@gmail.com>
> wrote:
>
>> How about this?
>> https://github.com/apache/spark/blob/master/sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala#L729
>> Or, how about using Double or something instead of Numeric?
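>>
>> As a rough sketch of that idea (untested; it reuses the `agtest1`
>> table and the `url`/`user`/`password`/`driver` values from the mails
>> below), you could push the cast down to Oracle via a subquery, so the
>> JDBC metadata never reports a bare NUMBER column:
>>
>> # Spark's JDBC source accepts a parenthesized subquery in place of a
>> # table name; BINARY_DOUBLE comes back as a plain JDBC DOUBLE.
>> table = "(SELECT CAST(PID AS BINARY_DOUBLE) AS PID, DES FROM agtest1)"
>> df = sqlContext.read.jdbc(url=url, table=table,
>>     properties={"user": user, "password": password, "driver": driver})
>>
>> Note that BINARY_DOUBLE is floating point, so it can lose precision
>> for large NUMBER(38) values.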
>>
>> // maropu
>>
>> On Fri, Jan 27, 2017 at 10:25 AM, ayan guha <guha.a...@gmail.com> wrote:
>>
>>> Okay, it is working with varchar columns only. Is there any way to
>>> work around this?
>>>
>>> On Fri, Jan 27, 2017 at 12:22 PM, ayan guha <guha.a...@gmail.com> wrote:
>>>
>>>> Hi
>>>>
>>>> I thought so too, so I created a table with INT and VARCHAR columns:
>>>>
>>>> desc agtest1
>>>>
>>>> Name Null Type
>>>> ---- ---- -------------
>>>> PID       NUMBER(38)
>>>> DES       VARCHAR2(100)
>>>>
>>>> url="jdbc:oracle:thin:@mpimpclu1-scan:1521/DEVAIM"
>>>> table = "agtest1"
>>>> user = "bal"
>>>> password= "bal"
>>>> driver="oracle.jdbc.OracleDriver"
>>>> df = sqlContext.read.jdbc(url=url,table=table,properties={"user":
>>>> user,"password":password,"driver":driver})
>>>>
>>>>
>>>> The issue still persists.
>>>>
>>>> On Fri, Jan 27, 2017 at 11:19 AM, Takeshi Yamamuro <
>>>> linguin....@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I think you got this error because you used `NUMERIC` types in your
>>>>> schema (https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/jdbc/OracleDialect.scala#L32).
>>>>> So, IIUC, avoiding that type is a workaround.
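>>>>>
>>>>> As a concrete (untested) sketch of avoiding the type: convert the
>>>>> NUMBER column to a string on the Oracle side, then cast it back
>>>>> after the read, so no NUMERIC column ever reaches the dialect.
>>>>> The column names here are hypothetical:
>>>>>
>>>>> from pyspark.sql.functions import col
>>>>>
>>>>> # TO_CHAR makes Oracle report the column as VARCHAR2, which the
>>>>> # dialect handles fine; the string round-trip keeps full precision.
>>>>> table = "(SELECT TO_CHAR(NUM_COL) AS NUM_COL, OTHER_COL FROM HIST_FORECAST_NEXT_BILL_DGTL)"
>>>>> df = sqlContext.read.jdbc(url=url, table=table,
>>>>>     properties={"user": user, "password": password, "driver": driver})
>>>>> df = df.withColumn("NUM_COL", col("NUM_COL").cast("decimal(38,0)"))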
>>>>>
>>>>> // maropu
>>>>>
>>>>>
>>>>> On Fri, Jan 27, 2017 at 8:18 AM, ayan guha <guha.a...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> I am facing the exact issue with Oracle/Exadata mentioned here
>>>>>> <http://stackoverflow.com/questions/41873449/sparksql-key-not-found-scale>.
>>>>>> Any idea? I could not figure it out, so I am sending it to this group
>>>>>> hoping someone has seen it (and solved it).
>>>>>>
>>>>>> Spark Version: 1.6
>>>>>> pyspark command:
>>>>>>
>>>>>> pyspark \
>>>>>>   --driver-class-path \
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/kvclient.jar:\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/ojdbc7.jar:\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/ojdbc7-orig.jar:\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/oracle-hadoop-sql.jar:\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/ora-hadoop-common.jar:\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/ora-hadoop-common-orig.jar:\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/orahivedp.jar:\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/orahivedp-orig.jar:\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/orai18n.jar:\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/orai18n-orig.jar:\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/oraloader.jar:\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/oraloader-orig.jar \
>>>>>>   --conf spark.jars=\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/oracle-hadoop-sql.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/ora-hadoop-common.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/orahivedp.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/oraloader.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/ojdbc7.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/orai18n.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/kvclient.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/ojdbc7.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/ojdbc7-orig.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/oracle-hadoop-sql.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/ora-hadoop-common.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/ora-hadoop-common-orig.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/orahivedp.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/orahivedp-orig.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/orai18n.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/orai18n-orig.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/oraloader.jar,\
>>>>>> /opt/oracle/bigdatasql/bdcell-12.1/jlib-bds/oraloader-orig.jar
>>>>>>
>>>>>>
>>>>>> Here is my code:
>>>>>>
>>>>>> url="jdbc:oracle:thin:@mpimpclu1-scan:1521/DEVAIM"
>>>>>> table = "HIST_FORECAST_NEXT_BILL_DGTL"
>>>>>> user = "bal"
>>>>>> password= "bal"
>>>>>> driver="oracle.jdbc.OracleDriver"
>>>>>> df = sqlContext.read.jdbc(url=url,table=table,properties={"user":
>>>>>> user,"password":password,"driver":driver})
>>>>>>
>>>>>>
>>>>>> Error:
>>>>>> Traceback (most recent call last):
>>>>>>   File "<stdin>", line 1, in <module>
>>>>>>   File "/opt/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p2001.2081/lib/spark/python/pyspark/sql/readwriter.py", line 289, in jdbc
>>>>>>     return self._df(self._jreader.jdbc(url, table, jprop))
>>>>>>   File "/opt/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p2001.2081/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
>>>>>>   File "/opt/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p2001.2081/lib/spark/python/pyspark/sql/utils.py", line 45, in deco
>>>>>>     return f(*a, **kw)
>>>>>>   File "/opt/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p2001.2081/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
>>>>>> py4j.protocol.Py4JJavaError: An error occurred while calling o40.jdbc.
>>>>>> : java.util.NoSuchElementException: key not found: scale
>>>>>>         at scala.collection.MapLike$class.default(MapLike.scala:228)
>>>>>>         at scala.collection.AbstractMap.default(Map.scala:58)
>>>>>>         at scala.collection.MapLike$class.apply(MapLike.scala:141)
>>>>>>         at scala.collection.AbstractMap.apply(Map.scala:58)
>>>>>>         at org.apache.spark.sql.types.Metadata.get(Metadata.scala:108)
>>>>>>         at org.apache.spark.sql.types.Metadata.getLong(Metadata.scala:51)
>>>>>>         at org.apache.spark.sql.jdbc.OracleDialect$.getCatalystType(OracleDialect.scala:33)
>>>>>>         at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:140)
>>>>>>         at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>>>>>>         at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:222)
>>>>>>         at org.apache.spark.sql.DataFrameReader.jdbc(DataFrameReader.scala:146)
>>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>>         at java.lang.reflect.Method.invoke(Method.java:498)
>>>>>>         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
>>>>>>         at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
>>>>>>         at py4j.Gateway.invoke(Gateway.java:259)
>>>>>>         at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
>>>>>>         at py4j.commands.CallCommand.execute(CallCommand.java:79)
>>>>>>         at py4j.GatewayConnection.run(GatewayConnection.java:209)
>>>>>>         at java.lang.Thread.run(Thread.java:745)
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best Regards,
>>>>>> Ayan Guha
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> ---
>>>>> Takeshi Yamamuro
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards,
>>>> Ayan Guha
>>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Ayan Guha
>>>
>>
>>
>>
>> --
>> ---
>> Takeshi Yamamuro
>>
>
>
>
> --
> Best Regards,
> Ayan Guha
>



-- 
---
Takeshi Yamamuro
