Hi,

it looks like you are mixing regular Flink dependencies with hadoop1
dependencies.
Can you replace

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table</artifactId>
            <version>0.9-hadoop1-SNAPSHOT</version>
            <type>jar</type>
        </dependency>

with

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table</artifactId>
            <version>0.9-SNAPSHOT</version>
            <type>jar</type>
        </dependency>
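
The same goes for any other org.apache.flink artifacts in your pom: they
should all use the matching plain 0.9-SNAPSHOT version, not the hadoop1
one. A consistent pair might look like this (flink-java is just an
illustration; apply it to whichever Flink modules you actually use):

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>0.9-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table</artifactId>
            <version>0.9-SNAPSHOT</version>
        </dependency>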



On Mon, Apr 13, 2015 at 10:20 AM, Mohamed Nadjib MAMI <m...@iai.uni-bonn.de>
wrote:

>  Here it is:
> https://gist.github.com/MohamedNadjibMAMI/6dedbea0b33c64928d6a
>
>
> On 13.04.2015 10:14, Robert Metzger wrote:
>
>  Can you post your pom.xml here: https://gist.github.com/ ?
>
>  You can also send it privately to me if you don't want to share it via
> the mailing list.
>
> On Mon, Apr 13, 2015 at 10:12 AM, Mohamed Nadjib MAMI <
> m...@iai.uni-bonn.de> wrote:
>
>>  Hello Robert,
>>
>> I'm not using Scala, no Scala dependency in my pom file.
>>
>> Thanks
>>
>>
>> On 13.04.2015 09:31, Robert Metzger wrote:
>>
>> Hi,
>>
>>  the error looks like a mix-up of Scala versions.
>> Are you adding any Scala 2.11 dependencies in your pom?
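>>
>> A conflicting entry would look something like this (a hypothetical
>> example of the kind of thing to look for, including any artifact with
>> a _2.11 suffix):
>>
>>         <dependency>
>>             <groupId>org.scala-lang</groupId>
>>             <artifactId>scala-library</artifactId>
>>             <version>2.11.6</version>
>>         </dependency>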
>>
>>
>>
>> On Sun, Apr 12, 2015 at 9:35 PM, Mohamed Nadjib MAMI <
>> m...@iai.uni-bonn.de> wrote:
>>
>>>  Hello,
>>>
>>> Following up on my previous email: apologies, the error was actually in:
>>> Table table = tableEnv.toTable(input);
>>>
>>> Cheers,
>>> Mohamed
>>>
>>>
>>> On 12.04.2015 21:33, Mohamed Nadjib MAMI wrote:
>>>
>>> Hello all,
>>>
>>> I've just tried the Java Table API example as-is from the docs, but I'm
>>> getting this exception:
>>>
>>> Exception in thread "main" java.lang.NoSuchMethodError:
>>> scala.runtime.BooleanRef.create(Z)Lscala/runtime/BooleanRef;
>>>     at org.apache.flink.api.table.trees.TreeNode.exists(TreeNode.scala:93)
>>>     at org.apache.flink.api.java.table.JavaBatchTranslator$$anonfun$createSelect$1.apply(JavaBatchTranslator.scala:265)
>>>     at org.apache.flink.api.java.table.JavaBatchTranslator$$anonfun$createSelect$1.apply(JavaBatchTranslator.scala:264)
>>>     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>>>     at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
>>>     at org.apache.flink.api.java.table.JavaBatchTranslator.createSelect(JavaBatchTranslator.scala:263)
>>>     at org.apache.flink.api.java.table.JavaBatchTranslator.createTable(JavaBatchTranslator.scala:52)
>>>     at org.apache.flink.api.java.table.JavaBatchTranslator.createTable(JavaBatchTranslator.scala:42)
>>>     at org.apache.flink.api.table.plan.PlanTranslator.createTable(PlanTranslator.scala:152)
>>>     at org.apache.flink.api.table.plan.PlanTranslator.createTable(PlanTranslator.scala:64)
>>>     at org.apache.flink.api.java.table.TableEnvironment.toTable(TableEnvironment.scala:57)
>>>     at Main.main(Main.java:20)
>>>
>>> Line 20 is:
>>>
>>> DataSet<WC> result = tableEnv.toSet(filtered, WC.class);
>>>
>>> What could be causing this exception?
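>>>
>>> For context, the docs example being run is roughly the following. This
>>> is a reconstructed sketch from the snippets above, assuming WC is the
>>> word-count POJO from the docs (public fields "word" and "count"):
>>>
>>> import org.apache.flink.api.java.DataSet;
>>> import org.apache.flink.api.java.ExecutionEnvironment;
>>> import org.apache.flink.api.java.table.TableEnvironment;
>>> import org.apache.flink.api.table.Table;
>>>
>>> ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>>> TableEnvironment tableEnv = new TableEnvironment();
>>>
>>> // small sample input, built the same way as in the docs
>>> DataSet<WC> input = env.fromElements(
>>>         new WC("Hello", 1), new WC("Ciao", 1), new WC("Hello", 1));
>>>
>>> // convert the DataSet into a Table and run a relational query on it
>>> Table table = tableEnv.toTable(input);
>>> Table filtered = table
>>>         .groupBy("word")
>>>         .select("word.count as count, word")
>>>         .filter("count = 2");
>>>
>>> // convert the Table back into a DataSet of WC
>>> DataSet<WC> result = tableEnv.toSet(filtered, WC.class);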
>>>
>>> The version used is 0.9-hadoop1-SNAPSHOT.
>>>
>>> Sincerely,
>>> Mohamed
>>>
>>>
>>>
>>
>>
>
>
