[ https://issues.apache.org/jira/browse/FLINK-2179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Metzger resolved FLINK-2179.
-----------------------------------
    Resolution: Invalid

Please write to u...@flink.apache.org or post here 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/ to ask 
questions about how to use Flink.

This system (the bugtracker) is used by the developers to track system defects 
and features.

I'm closing this ticket as Invalid. I've answered the question here: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/when-return-value-from-linkedlist-or-map-and-use-in-filter-function-display-error-td1528.html
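
For reference: the likely cause of this pattern (works in the IDE, NullPointerException on the command line) is that the static map is only populated in main(), which runs in the client JVM. The filter function is executed inside the TaskManager JVMs, where main() never ran, so map.get("C_MKTSEGMENT") returns null there; running from NetBeans executes everything in a single local JVM, which is why no error appears. A minimal sketch of one way around this, assuming the field index is all the filter needs. The class name MarketSegmentFilter is made up for illustration, and the import path is the one used by newer Flink versions:

    // Sketch only: ship the looked-up index with the filter function instead of
    // relying on a static field that is populated only in the client JVM.
    import org.apache.flink.api.common.functions.FilterFunction;

    public class MarketSegmentFilter implements FilterFunction<Customer3> {

        private final int fieldIndex;  // set on the client, serialized and shipped to the workers

        public MarketSegmentFilter(int fieldIndex) {
            this.fieldIndex = fieldIndex;
        }

        @Override
        public boolean filter(Customer3 c) throws Exception {
            return "AUTOMOBILE".equals(c.getField(fieldIndex));
        }
    }

    // In main(), resolve the index while still in the client JVM:
    //     int index = map.get("C_MKTSEGMENT");
    //     customers = customers.filter(new MarketSegmentFilter(index));

Alternatively, the lookup map can be built inside the function itself (for example in the open() method of a RichFilterFunction), so that each worker JVM fills its own copy.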

> Returning a value from a LinkedList or Map and using it in a filter function displays an error
> ----------------------------------------------------------------------------------
>
>                 Key: FLINK-2179
>                 URL: https://issues.apache.org/jira/browse/FLINK-2179
>             Project: Flink
>          Issue Type: Bug
>            Reporter: hagersaleh
>
> When a value returned from a LinkedList or Map is used in a filter function, 
> the program displays an error when run from the command line, but no error 
> when run from NetBeans.
> public static Map<String, Integer> map = new HashMap<String, Integer>();
>
> public static void main(String[] args) throws Exception {
>     map.put("C_MKTSEGMENT", 2);
>
>     ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>     DataSet<Customer3> customers = env.readCsvFile("/home/hadoop/Desktop/Dataset/customer.csv")
>             .fieldDelimiter('|')
>             .includeFields("11000010")
>             .ignoreFirstLine()
>             .tupleType(Customer3.class);
>
>     customers = customers.filter(new FilterFunction<Customer3>() {
>         @Override
>         public boolean filter(Customer3 c) {
>             int index1 = Integer.parseInt(map.get("C_MKTSEGMENT").toString());
>             return c.getField(index1).equals("AUTOMOBILE");
>         }
>     });
>
>     customers.print();
>     customers.writeAsCsv("/home/hadoop/Desktop/Dataset/out1.csv", "\n", "|", WriteMode.OVERWRITE);
>     env.execute("TPCH Query 3 Example");
> }
> hadoop@ubuntu:~/Desktop/flink-0.7.0-incubating$ bin/flink run /home/hadoop/Desktop/where_operation_final/dist/where_operation_final.jar
> 06/06/2015 13:12:31: Job execution switched to status RUNNING
> 06/06/2015 13:12:31: CHAIN DataSource (CSV Input (|) /home/hadoop/Desktop/Dataset/customer.csv) -> Filter (org.apache.flink.examples.java.relational.TPCHQuery3$1) (1/1) switched to SCHEDULED
> 06/06/2015 13:12:31: CHAIN DataSource (CSV Input (|) /home/hadoop/Desktop/Dataset/customer.csv) -> Filter (org.apache.flink.examples.java.relational.TPCHQuery3$1) (1/1) switched to DEPLOYING
> 06/06/2015 13:12:31: CHAIN DataSource (CSV Input (|) /home/hadoop/Desktop/Dataset/customer.csv) -> Filter (org.apache.flink.examples.java.relational.TPCHQuery3$1) (1/1) switched to RUNNING
> 06/06/2015 13:12:31: CHAIN DataSource (CSV Input (|) /home/hadoop/Desktop/Dataset/customer.csv) -> Filter (org.apache.flink.examples.java.relational.TPCHQuery3$1) (1/1) switched to FAILED
> java.lang.NullPointerException
>         at org.apache.flink.examples.java.relational.TPCHQuery3$1.filter(TPCHQuery3.java:73)
>         at org.apache.flink.examples.java.relational.TPCHQuery3$1.filter(TPCHQuery3.java:70)
>         at org.apache.flink.api.java.operators.translation.PlanFilterOperator$FlatMapFilter.flatMap(PlanFilterOperator.java:47)
>         at org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.collect(ChainedFlatMapDriver.java:79)
>         at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:215)
>         at org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:235)
>         at java.lang.Thread.run(Thread.java:745)
> 06/06/2015 13:12:31: Job execution switched to status FAILING
> 06/06/2015 13:12:31: DataSink(Print to System.out) (1/1) switched to CANCELED
> 06/06/2015 13:12:31: DataSink(CsvOutputFormat (path: /home/hadoop/Desktop/Dataset/out1.csv, delimiter: |)) (1/1) switched to CANCELED
> 06/06/2015 13:12:31: Job execution switched to status FAILED
> Error: The program execution failed: java.lang.NullPointerException
>         at org.apache.flink.examples.java.relational.TPCHQuery3$1.filter(TPCHQuery3.java:73)
>         at org.apache.flink.examples.java.relational.TPCHQuery3$1.filter(TPCHQuery3.java:70)
>         at org.apache.flink.api.java.operators.translation.PlanFilterOperator$FlatMapFilter.flatMap(PlanFilterOperator.java:47)
>         at org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.collect(ChainedFlatMapDriver.java:79)
>         at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:215)
>         at org.apache.flink.runtime.execution.RuntimeEnvironment.run(RuntimeEnvironment.java:235)
>         at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
