Does Flink support multi-way joins involving more than two tables, such as
T1 ⋈ T2 ⋈ T3 ⋈ T4?
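The DataSet API only offers binary joins, so a four-way join is normally written as a chain of two-way joins. A minimal sketch of that pattern (the tuple types and toy data are made up; field 0 is assumed to be the join key of every table):

import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.api.java.tuple.Tuple4;

public class MultiWayJoin {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // toy inputs; field 0 is the join key of every table
        DataSet<Tuple2<Integer, String>> t1 = env.fromElements(new Tuple2<>(1, "a"), new Tuple2<>(2, "b"));
        DataSet<Tuple2<Integer, String>> t2 = env.fromElements(new Tuple2<>(1, "x"), new Tuple2<>(2, "y"));
        DataSet<Tuple2<Integer, String>> t3 = env.fromElements(new Tuple2<>(1, "p"), new Tuple2<>(2, "q"));

        // T1 ⋈ T2: flatten each match into one tuple so the next join can key on field 0 again
        DataSet<Tuple3<Integer, String, String>> t12 = t1
                .join(t2).where(0).equalTo(0)
                .with(new JoinFunction<Tuple2<Integer, String>, Tuple2<Integer, String>, Tuple3<Integer, String, String>>() {
                    @Override
                    public Tuple3<Integer, String, String> join(Tuple2<Integer, String> a, Tuple2<Integer, String> b) {
                        return new Tuple3<>(a.f0, a.f1, b.f1);
                    }
                });

        // (T1 ⋈ T2) ⋈ T3: the same pattern repeats for T4 and any further table
        DataSet<Tuple4<Integer, String, String, String>> t123 = t12
                .join(t3).where(0).equalTo(0)
                .with(new JoinFunction<Tuple3<Integer, String, String>, Tuple2<Integer, String>, Tuple4<Integer, String, String, String>>() {
                    @Override
                    public Tuple4<Integer, String, String, String> join(Tuple3<Integer, String, String> a, Tuple2<Integer, String> b) {
                        return new Tuple4<>(a.f0, a.f1, a.f2, b.f1);
                    }
                });

        t123.print();
    }
}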
--
How can I handle this query?
SELECT a.id,
       (SELECT MAX(created) FROM posts WHERE author_id = a.id) AS latest_post
FROM authors a
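One way to express this kind of correlated aggregate on the DataSet API is to compute the aggregate per key first and then join it back to the outer table. A sketch with made-up schemas (authors = (id, name), posts = (author_id, created)); note that a plain join drops authors without posts, so matching the SQL NULL semantics exactly would need a coGroup instead:

import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class LatestPostPerAuthor {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple2<Long, String>> authors = env.fromElements(
                new Tuple2<>(1L, "alice"), new Tuple2<>(2L, "bob"));
        DataSet<Tuple2<Long, Long>> posts = env.fromElements(
                new Tuple2<>(1L, 100L), new Tuple2<>(1L, 200L), new Tuple2<>(2L, 50L));

        // the "subquery": MAX(created) per author_id
        DataSet<Tuple2<Long, Long>> latestPerAuthor = posts.groupBy(0).max(1);

        // the outer query: attach the aggregate to each author row
        DataSet<Tuple2<Long, Long>> result = authors
                .join(latestPerAuthor).where(0).equalTo(0)
                .with(new JoinFunction<Tuple2<Long, String>, Tuple2<Long, Long>, Tuple2<Long, Long>>() {
                    @Override
                    public Tuple2<Long, Long> join(Tuple2<Long, String> author, Tuple2<Long, Long> latest) {
                        return new Tuple2<>(author.f0, latest.f1); // (a.id, latest_post)
                    }
                });

        result.print();
    }
}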
--
Hi Chiwan Park,
I do not understand this solution. Please explain more.
--
Are there any ways to use a broadcast variable with big data?
--
I ran a program on a Flink cluster with one master and one slave and measured
the runtime four times. There is a large difference between the runs.
What does this difference in time mean for the same program?
1 min 37 secs 133 msecs
7 mins 13 secs 147 msecs
5 mins 24 secs 748 msecs
1 min 47 secs 19 msecs
Note: As the content of broadcast variables is kept in-memory on each node,
it should not become too large. For simpler things like scalar values you
can simply make parameters part of the closure of a function, or use the
withParameters(...) method to pass in a configuration.
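For example, a small scalar can be passed with withParameters(...) and read in open() of a rich function. A minimal sketch (the key name "threshold" and the data are made up):

import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

public class WithParametersExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Integer> values = env.fromElements(1, 5, 10, 20);

        Configuration conf = new Configuration();
        conf.setInteger("threshold", 8);

        DataSet<Integer> filtered = values
                .filter(new RichFilterFunction<Integer>() {
                    private int threshold;

                    @Override
                    public void open(Configuration parameters) {
                        // read the scalar once per task
                        threshold = parameters.getInteger("threshold", 0);
                    }

                    @Override
                    public boolean filter(Integer value) {
                        return value > threshold;
                    }
                })
                .withParameters(conf);

        filtered.print();
    }
}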
--
When to use broadcast variables?
Distribute data with a broadcast variable when:
The data is large
The data has been produced by some form of computation and is already a
DataSet (distributed result)
Typical use case: redistribute intermediate results, such as trained models
(from the link)
Why is this then not a good use of a broadcast variable with big data?
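For reference, this is how a broadcast set is typically registered and read; the names and toy data are made up. The whole broadcast set is materialized in memory on every task manager, which is why broadcasting a really large DataSet fails:

import java.util.Collection;

import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

public class BroadcastVariableExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<String> allowedSegments = env.fromElements("BUILDING", "MACHINERY");
        DataSet<String> customerSegments = env.fromElements("BUILDING", "AUTOMOBILE", "MACHINERY");

        DataSet<String> filtered = customerSegments
                .filter(new RichFilterFunction<String>() {
                    private Collection<String> allowed;

                    @Override
                    public void open(Configuration parameters) {
                        // the broadcast set is available on every parallel instance
                        allowed = getRuntimeContext().getBroadcastVariable("allowedSegments");
                    }

                    @Override
                    public boolean filter(String segment) {
                        return allowed.contains(segment);
                    }
                })
                .withBroadcastSet(allowedSegments, "allowedSegments");

        filtered.print();
    }
}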
--
Please help.
--
Please help.
--
When I run this program on big data it displays an error, but when I run it on
small data there is no error. Why?
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet customer = getCustomerDataSet(env,mask,l,map);
DataSet order= getOrdersDataSet(env,maskorder,l1,maporder);
cus
What are the maximum JobManager heap size and TaskManager heap size?
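The heap sizes are configured in conf/flink-conf.yaml and are bounded by the memory of the machines rather than by Flink itself. A sketch using the old .mb-style keys (the values below are placeholders):

# conf/flink-conf.yaml
jobmanager.heap.mb: 1024
taskmanager.heap.mb: 4096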
--
Why, when I run a program on Flink 0.9 and open localhost:8081 in the browser
and look at the history, do I find 2 runs in the job history, while Flink 0.8
shows 1 run?
--
Please help.
--
When I write this code, it displays the error: no interface expected here
public static class MyCoGrouper extends CoGroupFunction {
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet customers = getCustomerDataSet(env,mask,l,map);
DataSet orders= getOrdersDat
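CoGroupFunction is an interface, so the class has to implement it (or extend RichCoGroupFunction) instead of extending it; that is what the "no interface expected here" compile error means. A sketch with made-up tuple types:

import org.apache.flink.api.common.functions.CoGroupFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.util.Collector;

// nested inside the job class, as in the original snippet
public static class MyCoGrouper implements
        CoGroupFunction<Tuple2<Long, String>, Tuple2<Long, String>, Tuple3<Long, String, Integer>> {

    @Override
    public void coGroup(Iterable<Tuple2<Long, String>> customers,
                        Iterable<Tuple2<Long, String>> orders,
                        Collector<Tuple3<Long, String, Integer>> out) {
        // emit (custkey, name, number of orders) per key
        int orderCount = 0;
        for (Tuple2<Long, String> o : orders) {
            orderCount++;
        }
        for (Tuple2<Long, String> c : customers) {
            out.collect(new Tuple3<>(c.f0, c.f1, orderCount));
        }
    }
}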
I want to use the count() method from the link below, but when I write
DataSet customers = getCustomerDataSet(env, mask, l, map);
Long i = customers.count();
count() is not found on DataSet. Why?
https://ci.apache.org/projects/flink/flink-docs-master/api/java/org/apache/flink/api/java/DataSet.html#count%28%29
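One common cause is a version mismatch: the link points to the current master docs while an older flink-java jar without DataSet#count() is on the classpath. On such a version the count can be computed by hand, for example (a fragment; customers and the Customer type are the ones from the snippet above):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.tuple.Tuple1;

// map every element to 1 and sum the ones
DataSet<Tuple1<Long>> count = customers
        .map(new MapFunction<Customer, Tuple1<Long>>() {
            @Override
            public Tuple1<Long> map(Customer c) {
                return new Tuple1<>(1L);
            }
        })
        .sum(0);
count.print(); // a single tuple holding the number of elements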
--
Why is .sortPartition(1, Order.ASCENDING) not found, although I use the
library?
--
Please help. I want an example.
--
Thank you very much.
--
import java.util.Collection;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
ExecutionEnvironment env =
ExecutionEnvironment.getExecut
Why does this use ! and <= to handle ANY?
override def filter(value: Product): Boolean = !bcSet.forall(value.model <= _)
}).withBroadcastSet(pcModels, "pcModels").distinct("maker").map(_.maker)
--
I want an example of using join or coGroup to handle EXISTS or NOT EXISTS.
--
Please help.
--
How can I handle ANY and ALL queries on Flink?
Example:
SELECT model, price
FROM Laptop
WHERE price > ALL
      (SELECT price
       FROM PC);
Example on ANY:
SELECT DISTINCT maker
FROM Product
WHERE model > ANY
      (SELECT model
       FROM PC);
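Without SQL support, one way to express > ALL / > ANY on the DataSet API is to broadcast the inner query's result and evaluate the comparison in a filter. A sketch with made-up schemas, Laptop = (model, price) and PC = (model, price):

import java.util.List;

import org.apache.flink.api.common.functions.RichFilterFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;

public class AllAnyExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple2<String, Double>> laptop = env.fromElements(
                new Tuple2<>("L1", 2500.0), new Tuple2<>("L2", 900.0));
        DataSet<Tuple2<String, Double>> pc = env.fromElements(
                new Tuple2<>("P1", 1200.0), new Tuple2<>("P2", 1500.0));

        // price > ALL (SELECT price FROM PC): keep a laptop only if its price
        // is greater than every broadcast PC price
        DataSet<Tuple2<String, Double>> result = laptop
                .filter(new RichFilterFunction<Tuple2<String, Double>>() {
                    private List<Tuple2<String, Double>> pcs;

                    @Override
                    public void open(Configuration parameters) {
                        pcs = getRuntimeContext().getBroadcastVariable("pc");
                    }

                    @Override
                    public boolean filter(Tuple2<String, Double> laptopRow) {
                        for (Tuple2<String, Double> pcRow : pcs) {
                            if (laptopRow.f1 <= pcRow.f1) {
                                return false; // fails "> ALL"
                            }
                        }
                        return true;
                    }
                })
                .withBroadcastSet(pc, "pc");

        result.print();
        // "> ANY" is the same pattern, but returns true as soon as ONE
        // broadcast element satisfies the comparison.
    }
}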
--
I did not understand what you mean.
--
Why does orders.aggregate(Aggregations.MAX, 2) not return one value but
several values?
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet orders=(DataSet)
env.readCsvFile("/home/hadoop/Desktop/Dataset/orders.csv")
.fieldDelimiter('|')
.inclu
Meaning EXISTS and NOT EXISTS cannot be implemented directly on Flink, and one
should look for an alternative query instead?
Example:
where 0 < (select count(selitem) from fromritem)
--
Thanks.
--
Please help.
--
How can I handle EXISTS, NOT EXISTS, ALL, and ANY in a query?
Example:
SELECT P.PRODUCT_ID,
       P.PRODUCT_NAME
FROM PRODUCTS P
WHERE NOT EXISTS
      (SELECT 1
       FROM SALES S
       WHERE S.PRODUCT_ID = P.PRODUCT_ID);
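NOT EXISTS with a correlation on a key can be expressed as a coGroup on that key: emit an outer row only when the inner group is empty. A sketch with made-up schemas, PRODUCTS = (product_id, product_name) and SALES = (product_id, amount):

import org.apache.flink.api.common.functions.CoGroupFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class NotExistsExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple2<Long, String>> products = env.fromElements(
                new Tuple2<>(1L, "keyboard"), new Tuple2<>(2L, "mouse"));
        DataSet<Tuple2<Long, Double>> sales = env.fromElements(
                new Tuple2<>(1L, 9.99));

        // NOT EXISTS: coGroup on the product id and emit a product only when its sales group is empty
        DataSet<Tuple2<Long, String>> unsold = products
                .coGroup(sales).where(0).equalTo(0)
                .with(new CoGroupFunction<Tuple2<Long, String>, Tuple2<Long, Double>, Tuple2<Long, String>>() {
                    @Override
                    public void coGroup(Iterable<Tuple2<Long, String>> productGroup,
                                        Iterable<Tuple2<Long, Double>> salesGroup,
                                        Collector<Tuple2<Long, String>> out) {
                        if (!salesGroup.iterator().hasNext()) { // NOT EXISTS
                            for (Tuple2<Long, String> p : productGroup) {
                                out.collect(p);
                            }
                        }
                        // EXISTS is the inverted check: emit only when salesGroup has elements
                    }
                });

        unsold.print();
    }
}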
--
Can I combine two tables without using join?
--
When I run the program on big data (customer 2.5 GB, orders 5 GB), it displays this error. Why?
DataSource (at getCustomerDataSet(TPCHQuery3.java:252)
(org.apache.flink.api.java.io.CsvInputFormat)) (1/1) switched to FAILED
org.apache.flink.api.common.io.ParseException: Row too short:
1499|Customer#01499|3emQ49UZt
I want to store the values returned from the filter function in a linked list.
The values print inside the function, but when I print the linked list in the
outer function, it does not contain any values.
final LinkedList valuesfromsubquery1 =new LinkedList();
customers.writeAsFormattedText("/home/hadoop/Desktop/Dataset/out1.text",
W
apply is not found in Flink.
And how can I execute this?
SELECT employees.last_name
FROM employees E, departments D
WHERE (D.department_id = E.department_id AND E.job_id = 'AC_ACCOUNT' AND
       D.location = 2400)
   OR (E.department_id = D.department_id AND E.salary > 6 AND D.location =
       2400);
--
How can I combine two datasets into one dataset and evaluate several
conditions at the same time?
Example:
SELECT employees.last_name
FROM employees E, departments D
WHERE (D.department_id = E.department_id AND D.location = 2400)
  AND (E.job_id = 'AC_ACCOUNT' OR E.salary > 6);
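One way on the DataSet API: put the equi-join part (department_id = department_id) into where/equalTo and evaluate the remaining predicates inside the join function. A sketch with made-up schemas, employees = (last_name, department_id, job_id, salary) and departments = (department_id, location):

import org.apache.flink.api.common.functions.FlatJoinFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple1;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple4;
import org.apache.flink.util.Collector;

public class JoinWithExtraConditions {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // employees: (last_name, department_id, job_id, salary)
        DataSet<Tuple4<String, Long, String, Double>> employees = env.fromElements(
                new Tuple4<>("Smith", 10L, "AC_ACCOUNT", 4.0),
                new Tuple4<>("Jones", 10L, "IT_PROG", 9.0));
        // departments: (department_id, location)
        DataSet<Tuple2<Long, Long>> departments = env.fromElements(
                new Tuple2<>(10L, 2400L), new Tuple2<>(20L, 1700L));

        // equi-join on department_id; the remaining predicates are checked in the join function
        DataSet<Tuple1<String>> lastNames = employees
                .join(departments).where(1).equalTo(0)
                .with(new FlatJoinFunction<Tuple4<String, Long, String, Double>, Tuple2<Long, Long>, Tuple1<String>>() {
                    @Override
                    public void join(Tuple4<String, Long, String, Double> e, Tuple2<Long, Long> d,
                                     Collector<Tuple1<String>> out) {
                        if (d.f1 == 2400L && ("AC_ACCOUNT".equals(e.f2) || e.f3 > 6)) {
                            out.collect(new Tuple1<>(e.f0));
                        }
                    }
                });

        lastNames.print();
    }
}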
--
I could solve the problem with final Map map = new HashMap();
Thank you very much.
The code now runs from the command line without any error:
public static void main(String[] args) throws Exception {
final Map map = new HashMap();
map.put("C_MKTSEGMENT", 2);
ExecutionEnvironment env =
ExecutionEnv
The error displayed is:
non-static variable map cannot be referenced from a static context
map.put("C_MKTSEGMENT", 2);
The code:
public Map map = new HashMap();
public static void main(String[] args) throws Exception {
map.put("C_MKTSEGMENT", 2);
ExecutionEnvironment env =
Exe
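The compiler error is plain Java scoping: an instance field cannot be used from the static main method. Declaring the map static (or making it a local variable in main) fixes it; a minimal sketch:

import java.util.HashMap;
import java.util.Map;

public class StaticFieldFix {
    // static, so it can be referenced from the static main() method
    public static Map<String, Integer> map = new HashMap<String, Integer>();

    public static void main(String[] args) throws Exception {
        map.put("C_MKTSEGMENT", 2);
        // ... rest of the job as in the original post
    }
}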
Please help.
Why does the program not display an error when run from NetBeans,
but display an error when run from the command line?
--
When I use a non-static field it displays an error, and the filter function
does not see the map.
--
Thank you very much.
--
I changed the IP to the correct IP, but when I run the program:
Error: The program execution failed:
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
Not enough free slots available to run the job. You can decrease the
operator parallelism or increase the number of slots per TaskManager in t
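That exception indicates the job asks for more parallel slots than the cluster provides. The usual knobs live in conf/flink-conf.yaml (the values below are placeholders):

# conf/flink-conf.yaml on each TaskManager
taskmanager.numberOfTaskSlots: 4
# or lower the default job parallelism instead
parallelism.default: 1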
When I run the program, it displays:
error: null
--
1- I copied Flink to the same path on all machines.
2- On the master machine I wrote 10.0.0.1 in conf/flink-conf.yaml.
3- On the slave machine I wrote 10.0.0.1 in conf/flink-conf.yaml and 10.0.0.2 in
conf/slaves.
Then on the master machine I opened a command line and ran
bin/start-cluster.sh
which displays
Starting job manager
St
When I return a value from a linked list or map and use it in the filter
function, an error is displayed when I run the program from the command line,
but not when I run it from NetBeans.
public static Map map = new HashMap();
public static void main(String[] args) throws Exception {
map.put("C_MKTSEGMENT", 2);
When I run this example from the command line, several errors are displayed. Why?
public static Map map = new HashMap();
public static void main(String[] args) throws Exception {
map.put("C_MKTSEGMENT", 2);
ExecutionEnvironment env =
ExecutionEnvironment.getExecutionEnvironment();
DataSet
custom
I run Flink programs on Ubuntu x64 as a single-node cluster.
I want to run Flink programs on Ubuntu x64 as a multi-node cluster.
What is the configuration to run a program on multiple nodes?
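A minimal multi-node setup needs the JobManager address in conf/flink-conf.yaml on every machine and the worker hosts in conf/slaves on the machine that runs bin/start-cluster.sh. With the IPs from the earlier mails it would look roughly like this:

# conf/flink-conf.yaml (identical on every machine)
jobmanager.rpc.address: 10.0.0.1

# conf/slaves on the master (one worker host or IP per line)
10.0.0.2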
--
I want an example of using sortPartition().
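A minimal sortPartition() sketch (it requires a Flink version that has the operator; the data is made up). sortPartition sorts each parallel partition on its own, so with parallelism 1 the result is fully sorted:

import org.apache.flink.api.common.operators.Order;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class SortPartitionExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple2<Integer, String>> data = env.fromElements(
                new Tuple2<>(3, "c"), new Tuple2<>(1, "a"), new Tuple2<>(2, "b"));

        // each partition is sorted independently; parallelism 1 gives a total order
        DataSet<Tuple2<Integer, String>> sorted = data
                .sortPartition(0, Order.ASCENDING)
                .setParallelism(1);

        sorted.print();
    }
}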
--
Why, when I use groupBy(2).sortGroup(0, Order.DESCENDING), is the data neither
grouped nor sorted?
I want to sort a DataSet. How can I do that?
customers = customers.filter(
new FilterFunction() {
@Override
public boolean filter(Customer c) {
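On the groupBy/sortGroup question above: those calls do not reorder the DataSet itself; the sort order only becomes visible to the group-wise operation that consumes the groups, e.g. first(n) or a GroupReduceFunction. A sketch with made-up data:

import org.apache.flink.api.common.operators.Order;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple3;

public class SortGroupExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // (value, payload, group key)
        DataSet<Tuple3<Integer, String, Integer>> data = env.fromElements(
                new Tuple3<>(1, "a", 0), new Tuple3<>(5, "b", 0), new Tuple3<>(3, "c", 1));

        // the largest field-0 value per group: the sort order is used by first(1)
        DataSet<Tuple3<Integer, String, Integer>> topPerGroup = data
                .groupBy(2)
                .sortGroup(0, Order.DESCENDING)
                .first(1);

        topPerGroup.print();
    }
}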
This is just output.
--
I wrote a program, and when I run it, it displays this message:
log4j:WARN No appenders could be found for logger
(org.apache.flink.runtime.blob.BlobServer).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
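The warnings only mean that no log4j configuration was found on the classpath; the job itself still runs. A minimal log4j.properties (log4j 1.2 syntax) that silences them could look like this:

log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p %c - %m%n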
--
Is this optimizer automatic, or do I have to configure it myself?
--
Thank you very much.
What does "optimizer" mean in Flink?
--
How does the Flink optimizer work, and what processing does it do?
--
What are the differences between Hadoop and Apache Flink?
--
Thank you very much.
--
I want to return all rows that contain any value from valuesfromsubquery;
this code only returns the rows with the first value, BUILDING.
public static ArrayList valuesfromsubquery = new
ArrayList();
valuesfromsubquery.add("BUILDING");
valuesfromsubquery.add("MACHINERY");
valuesfromsubquery.add("A
Thank you very much
--
I want to write the output dataset to one file, not multiple files.
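Setting the parallelism of the data sink to 1 produces a single output file instead of one file per parallel task. A minimal sketch (the path and data are made up):

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class SingleOutputFile {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        DataSet<Tuple2<Integer, String>> result = env.fromElements(
                new Tuple2<>(1, "a"), new Tuple2<>(2, "b"));

        // a sink with parallelism 1 writes one file instead of one file per parallel task
        result.writeAsCsv("/tmp/out.csv").setParallelism(1);
        env.execute("single output file");
    }
}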
--
I solved the error by using the import
org.apache.flink.api.java.io.TextOutputFormat.TextFormatter;
Thank you very much.
--
When I use the solution, it displays this error message:
cannot find symbol
new TextFormatter() {
symbol: class TextFormatter
location: class TPCHQuery3
Note:
/home/hadoop/Desktop/sub_query/src/org/apache/flink/examples/java/relational/TPCHQuery3.java
uses unchecked or unsafe operations.
Note: Recompile with -Xl
I want to write a specified field from a dataset to an output file.
I want to write field 2 to the output file.
Example:
DataSet
customers=env.readCsvFile("/home/hadoop/Desktop/Dataset/customer.csv")
.fieldDelimiter('|')
.includeFields(1110).ignoreFirstLine()
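One way is writeAsFormattedText with a TextFormatter that returns only the wanted field (the TextFormatter import mentioned in the replies above). A sketch with a made-up Tuple3 schema and output path standing in for the CSV input:

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.TextOutputFormat.TextFormatter;
import org.apache.flink.api.java.tuple.Tuple3;

public class WriteSingleField {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple3<Long, String, String>> customers = env.fromElements(
                new Tuple3<>(1L, "Customer#1", "BUILDING"));

        // emit only field 2 of every record
        customers.writeAsFormattedText("/home/hadoop/Desktop/Dataset/field2.txt",
                new TextFormatter<Tuple3<Long, String, String>>() {
                    @Override
                    public String format(Tuple3<Long, String, String> value) {
                        return value.f2;
                    }
                });

        env.execute("write field 2");
    }
}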
When I upgraded from Flink 0.7.0 to 0.8.1, an error message is displayed. I use
NetBeans, and I replaced the Flink 0.7.0 libraries with the 0.8.1 libraries:
Exception in thread "main" java.lang.NoClassDefFoundError:
org/objectweb/asm/ClassVisitor
at org.apache.flink.api.java.DataSet.clean(DataSet.java:133)
at org.apache.fl
I use Flink 0.7.0.
--
Exception in thread "main" org.apache.flink.compiler.CompilerException: The
given program contains multiple disconnected data flows
Example:
DataSet
customers=env.readCsvFile("/home/hadoop/Desktop/Dataset/customer.csv")
.fieldDelimiter('|')
.includeFields(11
How can I handle a subquery in Flink?
Example:
SELECT C_CUSTKEY, C_NAME
FROM Customers WHERE C_MKTSEGMENT = (SELECT C_CUSTKEY, C_MKTSEGMENT FROM
Customers WHERE C_ADDRESS = "MG9kdTD2WBHm")
I wrote code that handles
SELECT C_CUSTKEY, C_MKTSEGMENT FROM Customers WHERE C_ADDRESS = "MG9kdTD2WBHm"
but how can I handle the full query?
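One way to handle the outer comparison is to treat the subquery result as a DataSet and join the outer table with it on the compared field (C_MKTSEGMENT). A sketch with a made-up Tuple4 schema (C_CUSTKEY, C_NAME, C_MKTSEGMENT, C_ADDRESS) standing in for the CSV input:

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.common.functions.JoinFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple4;

public class SubqueryAsJoin {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple4<Long, String, String, String>> customers = env.fromElements(
                new Tuple4<>(1L, "Customer#1", "BUILDING", "MG9kdTD2WBHm"),
                new Tuple4<>(2L, "Customer#2", "BUILDING", "addr2"),
                new Tuple4<>(3L, "Customer#3", "MACHINERY", "addr3"));

        // the subquery: rows of the customer with the given address
        DataSet<Tuple4<Long, String, String, String>> segment = customers
                .filter(new FilterFunction<Tuple4<Long, String, String, String>>() {
                    @Override
                    public boolean filter(Tuple4<Long, String, String, String> c) {
                        return "MG9kdTD2WBHm".equals(c.f3);
                    }
                });

        // the outer query: C_MKTSEGMENT = (subquery), expressed as an equi-join on the segment
        DataSet<Tuple2<Long, String>> result = customers
                .join(segment).where(2).equalTo(2)
                .with(new JoinFunction<Tuple4<Long, String, String, String>,
                        Tuple4<Long, String, String, String>, Tuple2<Long, String>>() {
                    @Override
                    public Tuple2<Long, String> join(Tuple4<Long, String, String, String> c,
                                                     Tuple4<Long, String, String, String> s) {
                        return new Tuple2<>(c.f0, c.f1); // (C_CUSTKEY, C_NAME)
                    }
                });

        result.print();
    }
}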
I solved my problem, thank you very much.
--
Implement a left outer join of the two datasets Customer and Orders
using the Tuple data type.
--
I want to implement a left outer join of two datasets; I use the Tuple data type.
package org.apache.flink.examples.java.relational;
import org.apache.flink.api.common.functions.CoGroupFunction;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org
How can I handle a left outer join for any two datasets, where the datasets
can include any number of fields?
Example:
Dataset one:
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet>
customer=env.readCsvFile("/home/hadoop/Desktop/Dataset/customer.csv")
.fieldDelimiter('|')
.includeFiel
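A left outer join can be expressed with coGroup: every left-side element is emitted, and elements without a match get a placeholder value (a sentinel is used instead of null, since Tuple fields cannot be serialized as null). A sketch with made-up schemas, customer = (custkey, name) and orders = (custkey, orderkey):

import java.util.ArrayList;
import java.util.List;

import org.apache.flink.api.common.functions.CoGroupFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.util.Collector;

public class LeftOuterJoinWithCoGroup {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple2<Long, String>> customer = env.fromElements(
                new Tuple2<>(1L, "Customer#1"), new Tuple2<>(2L, "Customer#2"));
        DataSet<Tuple2<Long, Long>> orders = env.fromElements(
                new Tuple2<>(1L, 100L));

        DataSet<Tuple3<Long, String, Long>> joined = customer
                .coGroup(orders).where(0).equalTo(0)
                .with(new CoGroupFunction<Tuple2<Long, String>, Tuple2<Long, Long>, Tuple3<Long, String, Long>>() {
                    @Override
                    public void coGroup(Iterable<Tuple2<Long, String>> customers,
                                        Iterable<Tuple2<Long, Long>> orderGroup,
                                        Collector<Tuple3<Long, String, Long>> out) {
                        // cache the right side so it can be paired with every left element
                        List<Tuple2<Long, Long>> orderList = new ArrayList<Tuple2<Long, Long>>();
                        for (Tuple2<Long, Long> o : orderGroup) {
                            orderList.add(o);
                        }
                        for (Tuple2<Long, String> c : customers) {
                            if (orderList.isEmpty()) {
                                // left outer: emit the customer even without a matching order
                                out.collect(new Tuple3<>(c.f0, c.f1, -1L));
                            } else {
                                for (Tuple2<Long, Long> o : orderList) {
                                    out.collect(new Tuple3<>(c.f0, c.f1, o.f1));
                                }
                            }
                        }
                    }
                });

        joined.print();
    }
}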
When I write this code, the value is kept in a linked list and returned, but
there is no output. Code:
new FilterFunction() {
@Override
public boolean filter(Customer c) {
return c.getField(4).equals(valu
I want the jar file that provides
import org.apache.flink.api.table.Table;
import org.apache.flink.api.java.table.TableEnvironment;
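Those classes come from the Table API module. With Maven the dependency is roughly the following (artifact name as it was around the 0.9/0.10 line; this is an assumption and the version has to match the Flink version in use):

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table</artifactId>
    <!-- use the same version as the other Flink dependencies -->
    <version>0.9.0</version>
</dependency>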
--
Is there a data type that stores the field name and the datatype of each field
and returns a field by name?
I want to handle operations by field name.
--
Please add a link that explains the left join using coGroup, or add an example.
Thank you very much.
--
Thank you very much.
Could you give an example of the LIKE and IN operations?
--
1- BETWEEN operator SQL example:
SELECT * FROM Products
WHERE Price BETWEEN 10 AND 20;
2- LIKE operator SQL example:
SELECT * FROM Customers
WHERE City LIKE 's%';
3- IN operator SQL example:
SELECT * FROM Customers
WHERE City IN ('Paris','London');
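All three can be expressed as plain filter functions on the DataSet API. A sketch with made-up schemas, Products = (name, price) and Customers = (name, city):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.flink.api.common.functions.FilterFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class BetweenLikeInExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple2<String, Double>> products = env.fromElements(
                new Tuple2<>("p1", 15.0), new Tuple2<>("p2", 30.0));
        DataSet<Tuple2<String, String>> customers = env.fromElements(
                new Tuple2<>("c1", "Stuttgart"), new Tuple2<>("c2", "Paris"), new Tuple2<>("c3", "Berlin"));

        // 1- BETWEEN 10 AND 20
        products.filter(new FilterFunction<Tuple2<String, Double>>() {
            @Override
            public boolean filter(Tuple2<String, Double> p) {
                return p.f1 >= 10 && p.f1 <= 20;
            }
        }).print();

        // 2- LIKE 's%' (prefix match; general patterns can use java.util.regex)
        customers.filter(new FilterFunction<Tuple2<String, String>>() {
            @Override
            public boolean filter(Tuple2<String, String> c) {
                return c.f1.toLowerCase().startsWith("s");
            }
        }).print();

        // 3- IN ('Paris','London')
        final Set<String> cities = new HashSet<String>(Arrays.asList("Paris", "London"));
        customers.filter(new FilterFunction<Tuple2<String, String>>() {
            @Override
            public boolean filter(Tuple2<String, String> c) {
                return cities.contains(c.f1);
            }
        }).print();
    }
}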
--
The user provides an input CSV file for the database, plus the field name and
data type for every field; then I want to generate a dataset function that
handles this, because I want to use it for any input file, not a custom one.
--
I want to write a Flink program that works on any database. The user inputs
the fields and the type of each field, and when the database is read I want to
generate the dataset function automatically. Is there any example of this in
Flink? I want to take a database and write a function to handle this.
Example:
final ExecutionEnvironment env =
ExecutionEnvironment.getExecutionEnvironm