Re: Webinar: Unlocking the Power of Apache Beam with Apache Flink

2020-05-28 Thread Maximilian Michels
Thanks to everyone who joined and asked questions. Really enjoyed this
new format!

-Max

On 28.05.20 08:09, Marta Paes Moreira wrote:
> Thanks for sharing, Aizhamal - it was a great webinar!
> 
> Marta
> 
> On Wed, 27 May 2020 at 23:17, Aizhamal Nurmamat kyzy
> <aizha...@apache.org> wrote:
> 
> Thank you all for attending today's session! Here is the YT
> recording: https://www.youtube.com/watch?v=ZCV9aRDd30U
> And link to the
> slides: 
> https://github.com/aijamalnk/beam-learning-month/blob/master/Unlocking%20the%20Power%20of%20Apache%20Beam%20with%20Apache%20Flink.pdf
> 
> On Tue, May 26, 2020 at 8:32 AM Aizhamal Nurmamat kyzy
> <aizha...@apache.org> wrote:
> 
> Hi all,
> 
> Please join our webinar this Wednesday at 10am PST/5:00pm
> GMT/1:00pm EST where Max Michels - PMC member for Apache Beam
> and Apache Flink, will deliver a talk about leveraging Apache
> Beam for large-scale stream and batch analytics with Apache Flink. 
> 
> You can register via this
> link: https://learn.xnextcon.com/event/eventdetails/W20052710
> 
> Here is the short description of the talk:
> ---
> Apache Beam is a framework for writing stream and batch
> processing pipelines using multiple languages such as Java,
> Python, SQL, or Go. Apache Beam does not come with an execution
> engine of its own. Instead, it defers the execution to its
> Runners which translate Beam pipelines for any supported
> execution engine. Thus, users have complete control over the
> language and the execution engine they use, without having to
> rewrite their code.
> In this talk, we will look at running Apache Beam pipelines with
> Apache Flink. We will explain the concepts behind Apache Beam's
> portability framework for multi-language support, and then show
> how to get started running Java, Python, and SQL pipelines.
> 
> You can find links to the slides and recordings of this and previous
> webinars here: https://github.com/aijamalnk/beam-learning-month
> 
> Hope y'all are safe,
> Aizhamal
> 


How to create schema for flexible json data in Flink SQL

2020-05-28 Thread Guodong Wang
Hi !

I want to use Flink SQL to process some json events. It is quite
challenging to define a schema for the Flink SQL table.

My data source's format is some json like this
{
"top_level_key1": "some value",
"nested_object": {
"nested_key1": "abc",
"nested_key2": 123,
"nested_key3": ["element1", "element2", "element3"]
}
}

The big challenges in defining a schema for this data source are:
1. The keys in nested_object are flexible; there might be 3 unique keys or
more. If I enumerate all the keys in the schema, I think my code becomes
fragile - how do I handle an event that contains additional nested_keys in
nested_object?
2. I know the Table API supports the Map type, but I am not sure whether I can
put a generic object as the value of the map, because the values in
nested_object are of different types: some are int, some are string or array.

So, how can I expose this kind of JSON data as a table in Flink SQL without
enumerating all the nested_keys?

Thanks.

Guodong


Re: Multiple Sinks for a Single Source

2020-05-28 Thread Alexander Fedulov
Hi Prasanna,

if the set of all possible sinks is known in advance, side outputs will be
generic enough to express your requirements. A side output produces a stream.
Create all of the side output tags, associate each of them with one sink, and
add conditional logic around `ctx.output(outputTag, ...);` to decide where to
dispatch the messages (see [1]), collecting to none or many side outputs,
depending on your logic.

[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/side_output.html
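
As a minimal sketch of that pattern (the Event type, its getTargetSinks()
routing field and the kafkaSink/s3Sink instances are hypothetical placeholders,
not something defined in this thread):

final OutputTag<Event> kafkaTag = new OutputTag<Event>("kafka-sink") {};
final OutputTag<Event> s3Tag = new OutputTag<Event>("s3-sink") {};

SingleOutputStreamOperator<Event> routed = events
    .process(new ProcessFunction<Event, Event>() {
        @Override
        public void processElement(Event value, Context ctx, Collector<Event> out) {
            // conditional dispatch: collect to none, one, or many side outputs
            if (value.getTargetSinks().contains("kafka")) {
                ctx.output(kafkaTag, value);
            }
            if (value.getTargetSinks().contains("s3")) {
                ctx.output(s3Tag, value);
            }
        }
    });

routed.getSideOutput(kafkaTag).addSink(kafkaSink); // one sink per output tag
routed.getSideOutput(s3Tag).addSink(s3Sink);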

--

Alexander Fedulov | Solutions Architect



Follow us @VervericaData

--

Join Flink Forward  - The Apache Flink
Conference

Stream Processing | Event Driven | Real Time


On Tue, May 26, 2020 at 2:57 PM Prasanna kumar <
prasannakumarram...@gmail.com> wrote:

> Piotr,
>
> There is an event and subscriber registry as JSON file which has the table
> event mapping and event-subscriber mapping as mentioned below.
>
> Based on this JSON, we need the job to go through the table updates and
> create events, and for each event there is a configured way to sink them.
>
> The sink streams have to be added based on this JSON. That's what I
> mentioned earlier as having no predefined sink in code.
>
> You could see that each event has different set of sinks.
>
> Just checking how much generic could Side-output streams be ?.
>
> Source -> generate events -> (find out sinks dynamically in code ) ->
> write to the respective sinks.
>
> {
>   " tablename ": "source.table1",
>   "events": [
> {
>   "operation": "update",
>   "eventstobecreated": [
> {
>   "eventname": "USERUPDATE",
>   "Columnoperation": "and",
>   "ColumnChanges": [
> {
>   "columnname": "name"
> },
> {
>   "columnname": "loginenabled",
>   "value": "Y"
> }
>   ],
>   "Subscribers": [
> {
>   "customername": "c1",
>   "method": "Kafka",
>   "methodparams": {
> "topicname": "USERTOPIC"
>   }
> },
> {
>   "customername": "c2",
>   "method": "S3",
>   "methodparams": {
> "folder": "aws://folderC2"
>   }}, ]}]
> },
> {
>   "operation": "insert",
>   "eventstobecreated": [
>   "eventname": "USERINSERT",
>   "operation": "insert",
>   "Subscribers": [
> {
>   "teamname": "General",
>   "method": "Kafka",
>   "methodparams": {
> "topicname": "new_users"
>   }
> },
> {
>   "teamname": "General",
>   "method": "kinesis",
>   "methodparams": {
> "URL": "new_users",
> "username": "uname",
> "password":  "pwd"
>   }}, ]}]
> },
> {
>   "operation": "delete",
>   "eventstobecreated": [
> {
>   "eventname": "USERDELETE",
>   "Subscribers": [
> {
>   "customername": "c1",
>   "method": "Kafka",
>   "methodparams": {
> "topicname": "USERTOPIC"
>   }
> },
> {
>   "customername": "c4",
>   "method": "Kafka",
>   "methodparams": {
> "topicname": "deleterecords"
>  }}, ]}]
>  },
> }
>
> Please let me know your thoughts on this.
>
> Thanks,
> Prasanna.
>
> On Tue, May 26, 2020 at 5:34 PM Piotr Nowojski 
> wrote:
>
>> Hi,
>>
>> I’m not sure if I fully understand what do you mean by
>>
>> > The point is the sink are not predefined.
>>
>> You must know before submitting the job, what sinks are going to be used
>> in the job. You can have some custom logic that would filter out records
>> before writing them to the sinks, as I proposed before, or you could use
>> side outputs [1], which might be better suited to your use case.
>>
>> Piotrek
>>
>> [1]
>> https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/side_output.html
>>
>> On 26 May 2020, at 12:20, Prasanna kumar 
>> wrote:
>>
>> Thanks Piotr for the Reply.
>>
>> I will explain my requirement in detail.
>>
>> Table Updates -> Generate Business Events -> Subscribers
>>
>> *Source Side*
>> There are CDC of 100 tables which the framework needs to listen to.
>>
>> *Event Table Mapping*
>>
>> There would be Event associated with table in a *m:n* fashion.
>>
>> say there are tables TA, TB, TC.
>>
>> EA, EA2 and EA3 are generated from TA (based on conditions)
>> EB generated from TB (based on conditions)
>> EC generated from TC (no conditions.)
>>
>> Say there are events EA,EB,EC generated from the tables TA, TB, TC
>>
>> *Event Sink Mapping*
>>
>> EA has following sinks. kafka topic SA,SA2,SAC.
>> EB has following sinks. kafka topic SB , S3 

Re: How to create schema for flexible json data in Flink SQL

2020-05-28 Thread Benchao Li
Hi Guodong,

I think you almost get the answer,
1. Map type: it's not working with the current implementation. For example,
with a map type, if the value is a non-string JSON object, then
`JsonNode.asText()` may not work as you wish.
2. List all the fields you care about. IMO, this can fit your scenario. And
you can set format.fail-on-missing-field = false to allow non-existent fields
to be set to null.

For 1, I think maybe we can support it in the future, and I've created
jira[1] to track this.

[1] https://issues.apache.org/jira/browse/FLINK-18002
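
As a rough illustration of approach 2 (enumerating the nested fields as a ROW
type); the table name, topic and connector properties below are placeholders,
and the exact property keys depend on your Flink version and connector:

tableEnv.sqlUpdate(
    "CREATE TABLE events (\n" +
    "  top_level_key1 STRING,\n" +
    "  nested_object ROW<nested_key1 STRING, nested_key2 INT, nested_key3 ARRAY<STRING>>\n" +
    ") WITH (\n" +
    "  'connector.type' = 'kafka',\n" +
    "  'connector.version' = 'universal',\n" +
    "  'connector.topic' = 'events',\n" +
    "  'connector.properties.bootstrap.servers' = 'localhost:9092',\n" +
    "  'format.type' = 'json',\n" +
    "  'format.fail-on-missing-field' = 'false'\n" +
    ")");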

Guodong Wang  wrote on Thu, May 28, 2020 at 6:32 PM:

> Hi !
>
> I want to use Flink SQL to process some json events. It is quite
> challenging to define a schema for the Flink SQL table.
>
> My data source's format is some json like this
> {
> "top_level_key1": "some value",
> "nested_object": {
> "nested_key1": "abc",
> "nested_key2": 123,
> "nested_key3": ["element1", "element2", "element3"]
> }
> }
>
> The big challenges for me to define a schema for the data source are
> 1. the keys in nested_object are flexible, there might be 3 unique keys or
> more unique keys. If I enumerate all the keys in the schema, I think my
> code is fragile, how to handle event which contains more  nested_keys in
> nested_object ?
> 2. I know table api support Map type, but I am not sure if I can put
> generic object as the value of the map. Because the values in nested_object
> are of different types, some of them are int, some of them are string or
> array.
>
> So. how to expose this kind of json data as table in Flink SQL without
> enumerating all the nested_keys?
>
> Thanks.
>
> Guodong
>


-- 

Best,
Benchao Li


New dates for Flink Forward Global Virtual 2020

2020-05-28 Thread Ana Vasiliuk
Hi everyone,

Flink Forward Global Virtual 2020 is now a 4-day conference, featuring two
training days on October 19 & 20! The organizers have decided to extend the
training program for this event to ensure that you get the most out of your
time with our team of Flink experts.

*New dates:*
Apache Flink Training - October 19 - 20
Flink Forward keynotes and breakout sessions - October 21 - 22

The conference days will be free to attend and there will be a limited
number of paid training tickets available soon. Please reserve your spot at
http://flink-forward.org/global-2020.

More information to follow, including pricing and further details of the
training agenda. If you have any questions, please feel free to reach out
to the organizing team via he...@flink-forward.org.

The *Call for Presentations* is also open, so if you want to share your
real-world use cases and best practices with an international audience of
Flink enthusiasts, don't forget to submit your talk by *June 19* for a chance
to be included in the program!

Submit your talk at
https://www.flink-forward.org/global-2020/call-for-presentations.

Hope to see you virtually in October!
Ana

-- 

*Ana Vasiliuk *| Community Marketing Manager





Follow us @VervericaData

--

Join Flink Forward  - The Apache Flink
Conference

Stream Processing | Event Driven | Real Time

--

Ververica GmbH | Invalidenstrasse 115, 10115 Berlin, Germany

--

Ververica GmbH
Registered at Amtsgericht Charlottenburg: HRB 158244 B
Managing Directors: Timothy Alexander Steinert, Yip Park Tung Jason, Ji
(Tony) Cheng


Re: Flink TTL for MapStates and Sideoutputs implementations

2020-05-28 Thread Alexander Fedulov
Hi Jaswin,

I would like to clarify something first - what do you key your streams by,
when joining them?
It seems that what you want to do is to match each CartMessage with a
corresponding Payment that has the same orderId+mid. If this is the case,
you probably do not need the MapState in the first place.

Best,

--

Alexander Fedulov | Solutions Architect



Follow us @VervericaData

--

Join Flink Forward  - The Apache Flink
Conference

Stream Processing | Event Driven | Real Time


On Fri, May 22, 2020 at 8:57 AM Jaswin Shah  wrote:

> public class CartPGCoprocessFunction extends 
> KeyedCoProcessFunction<String, CartMessage, PaymentNotifyRequestWrapper, ResultMessage> {
>
> private static final Logger logger = 
> LoggerFactory.getLogger(CartPGCoprocessFunction.class);
>
> /**
>  * Map state for cart messages, orderId+mid is key and cartMessage is 
> value.
>  */
> private static MapState<String, CartMessage> cartState = null;
>
> /**
>  * Map state for pg messages, orderId+mid is key and pgMessage is value.
>  */
> private static MapState<String, PaymentNotifyRequestWrapper> pgState = 
> null;
>
> /**
>  * Intializations for cart and pg mapStates
>  *
>  * @param config
>  */
> @Override
> public void open(Configuration config) {
> MapStateDescriptor<String, CartMessage> cartStateDescriptor = new 
> MapStateDescriptor<>(
> Constants.CART_DATA,
> TypeInformation.of(String.class),
> TypeInformation.of(CartMessage.class)
> );
> cartState = getRuntimeContext().getMapState(cartStateDescriptor);
>
> MapStateDescriptor<String, PaymentNotifyRequestWrapper> 
> pgStateDescriptor = new MapStateDescriptor<>(
> Constants.PG_DATA,
> TypeInformation.of(String.class),
> TypeInformation.of(PaymentNotifyRequestWrapper.class)
> );
> pgState = getRuntimeContext().getMapState(pgStateDescriptor);
> }
>
> /**
>  * 1. Get orderId+mid from cartMessage and check in PGMapState if an 
> entry is present.
>  * 2. If present, match, checkDescripancy, process and delete entry from 
> pgMapState.
>  * 3. If not present, add orderId+mid as key and cart object as value in 
> cartMapState.
>  * @param cartMessage
>  * @param context
>  * @param collector
>  * @throws Exception
>  */
> @Override
> public void processElement1(CartMessage cartMessage, Context context, 
> Collector<ResultMessage> collector) throws Exception {
> String searchKey = cartMessage.createJoinStringCondition();
> PaymentNotifyRequestWrapper paymentNotifyObject = 
> pgState.get(searchKey);
> if(Objects.nonNull(paymentNotifyObject)) {
> generateResultMessage(cartMessage,paymentNotifyObject,collector);
> pgState.remove(searchKey);
> } else {
> cartState.put(searchKey,cartMessage);
> }
> }
>
> /**
>  * 1. Get orderId+mid from pgMessage and check in cartMapState if an 
> entry is present.
>  * 2. If present, match, checkDescripancy, process and delete entry from 
> cartMapState.
>  * 3. If not present, add orderId+mid as key and cart object as value in 
> pgMapState.
>  * @param pgMessage
>  * @param context
>  * @param collector
>  * @throws Exception
>  */
> @Override
> public void processElement2(PaymentNotifyRequestWrapper pgMessage, 
> Context context, Collector<ResultMessage> collector) throws Exception {
> String searchKey = pgMessage.createJoinStringCondition();
> CartMessage cartMessage = cartState.get(searchKey);
> if(Objects.nonNull(cartMessage)) {
> generateResultMessage(cartMessage,pgMessage,collector);
> cartState.remove(searchKey);
> } else {
> pgState.put(searchKey,pgMessage);
> }
> }
>
> /**
>  * Create ResultMessage from cart and pg messages.
>  *
>  * @param cartMessage
>  * @param pgMessage
>  * @return
>  */
> private void generateResultMessage(CartMessage cartMessage, 
> PaymentNotifyRequestWrapper pgMessage, Collector<ResultMessage> collector) {
> ResultMessage resultMessage = new ResultMessage();
> Payment payment = null;
>
> //Logic should be in cart: check
> for (Payment pay : cartMessage.getPayments()) {
> if (StringUtils.equals(Constants.FORWARD_PAYMENT, 
> pay.mapToPaymentTypeInPG()) && 
> StringUtils.equals(Constants.PAYTM_NEW_PROVIDER, pay.getProvider())) {
> payment = pay;
> break;
> }
> }
> if(Objects.isNull(payment)) {
> return;
> }
>
> resultMessage.setOrderId(cartMessage.getId());
> resultMessage.setMid(payment.getMid());
>
> 
> resultMessage.setCartOrderStatus(cartMessage.mapToOrderStatus().getCode());
> resultMessage.setPgOrderStatus(pgMessage.getOrderStatus());
>
> resultMessage.setCartOrderCompletionTime(payment.getUpdated_at());
> resultMessage.setPgOrd

Re: Flink TTL for MapStates and Sideoutputs implementations

2020-05-28 Thread Jaswin Shah
Thanks for responding Alexander.
We have solved the problem with ValueState now. Basically, here we are 
implementing outer-join logic with a custom KeyedCoProcessFunction 
implementation.


From: Alexander Fedulov 
Sent: 28 May 2020 17:24
To: Jaswin Shah 
Cc: user@flink.apache.org 
Subject: Re: Flink TTL for MapStates and Sideoutputs implementations

Hi Jaswin,

I would like to clarify something first - what do you key your streams by, when 
joining them?
It seems that what you want to do is to match each CartMessage with a 
corresponding Payment that has the same orderId+mid. If this is the case, you 
probably do not need the MapState in the first place.

Best,

--

Alexander Fedulov | Solutions Architect




Follow us @VervericaData

--

Join Flink Forward - The Apache Flink Conference

Stream Processing | Event Driven | Real Time


On Fri, May 22, 2020 at 8:57 AM Jaswin Shah 
<jaswin.s...@outlook.com> wrote:

public class CartPGCoprocessFunction extends 
KeyedCoProcessFunction<String, CartMessage, PaymentNotifyRequestWrapper, ResultMessage> {

private static final Logger logger = 
LoggerFactory.getLogger(CartPGCoprocessFunction.class);

/**
 * Map state for cart messages, orderId+mid is key and cartMessage is value.
 */
private static MapState<String, CartMessage> cartState = null;

/**
 * Map state for pg messages, orderId+mid is key and pgMessage is value.
 */
private static MapState<String, PaymentNotifyRequestWrapper> pgState = null;

/**
 * Intializations for cart and pg mapStates
 *
 * @param config
 */
@Override
public void open(Configuration config) {
MapStateDescriptor<String, CartMessage> cartStateDescriptor = new 
MapStateDescriptor<>(
Constants.CART_DATA,
TypeInformation.of(String.class),
TypeInformation.of(CartMessage.class)
);
cartState = getRuntimeContext().getMapState(cartStateDescriptor);

MapStateDescriptor<String, PaymentNotifyRequestWrapper> 
pgStateDescriptor = new MapStateDescriptor<>(
Constants.PG_DATA,
TypeInformation.of(String.class),
TypeInformation.of(PaymentNotifyRequestWrapper.class)
);
pgState = getRuntimeContext().getMapState(pgStateDescriptor);
}

/**
 * 1. Get orderId+mid from cartMessage and check in PGMapState if an entry 
is present.
 * 2. If present, match, checkDescripancy, process and delete entry from 
pgMapState.
 * 3. If not present, add orderId+mid as key and cart object as value in 
cartMapState.
 * @param cartMessage
 * @param context
 * @param collector
 * @throws Exception
 */
@Override
public void processElement1(CartMessage cartMessage, Context context, 
Collector<ResultMessage> collector) throws Exception {
String searchKey = cartMessage.createJoinStringCondition();
PaymentNotifyRequestWrapper paymentNotifyObject = 
pgState.get(searchKey);
if(Objects.nonNull(paymentNotifyObject)) {
generateResultMessage(cartMessage,paymentNotifyObject,collector);
pgState.remove(searchKey);
} else {
cartState.put(searchKey,cartMessage);
}
}

/**
 * 1. Get orderId+mid from pgMessage and check in cartMapState if an entry 
is present.
 * 2. If present, match, checkDescripancy, process and delete entry from 
cartMapState.
 * 3. If not present, add orderId+mid as key and cart object as value in 
pgMapState.
 * @param pgMessage
 * @param context
 * @param collector
 * @throws Exception
 */
@Override
public void processElement2(PaymentNotifyRequestWrapper pgMessage, Context 
context, Collector<ResultMessage> collector) throws Exception {
String searchKey = pgMessage.createJoinStringCondition();
CartMessage cartMessage = cartState.get(searchKey);
if(Objects.nonNull(cartMessage)) {
generateResultMessage(cartMessage,pgMessage,collector);
cartState.remove(searchKey);
} else {
pgState.put(searchKey,pgMessage);
}
}

/**
 * Create ResultMessage from cart and pg messages.
 *
 * @param cartMessage
 * @param pgMessage
 * @return
 */
private void generateResultMessage(CartMessage cartMessage, 
PaymentNotifyRequestWrapper pgMessage, Collector<ResultMessage> collector) {
ResultMessage resultMessage = new ResultMessage();
Payment payment = null;

//Logic should be in cart: check
for (Payment pay : cartMessage.getPayments()) {
if (StringUtils.equals(Constants.FORWARD_PAYMENT, 
pay.mapToPaymentTypeInPG()) && StringUtils.equals(Constants.PAYTM_NEW_PROVIDER, 
pay.getProvider())) {
payment = pay;
break;
}
}
if(Objects.isNull(payment)) {
return;
}

resultMessage.se

Re: Installing Ververica, unable to write to file system

2020-05-28 Thread Marta Paes Moreira
Hi, Charlie.

This is not the best place for questions about Ververica Platform CE.
Please use community-edit...@ververica.com instead — someone will be able
to support you there!

If you have any questions related to Flink itself, feel free to reach out
to this mailing list again in the future.

Thanks,

Marta

On Wed, May 27, 2020 at 11:37 PM Corrigan, Charlie <
charlie.corri...@nordstrom.com> wrote:

> Hello, I’m trying to install Ververica (community edition for a simple poc
> deploy) via helm using these directions, but the pod is
> failing with the following error:
>
>
>
> ```
>
> org.springframework.context.ApplicationContextException: Unable to start
> web server; nested exception is
> org.springframework.boot.web.server.WebServerException: Unable to create
> tempDir. java.io.tmpdir is set to /tmp
>
> ```
>
>
>
> By default, our file system is immutable in k8s. Usually for this error,
> we’d mount an emptyDir volume. I’ve tried to do that in ververica’s
> values.yaml file, but I might be configuring it incorrectly. Here is the
> relevant portion of the values.yaml. I can include the entire file if it’s
> helpful. Any advice on how to alter these values or proceed with the
> ververica installation with a read only file system?
>
>
>
> volumes:
>   - name: tmp
> emptyDir: {}
>
>
>
> ## Container configuration for the appmanager component
> appmanager:
>   image:
> repository: registry.ververica.com/v2.1/vvp-appmanager
> tag: 2.1.0
> pullPolicy: Always
> volumeMounts:
>   - mountPath: /tmp
> name: tmp
>   resources:
> limits:
>   cpu: 1000m
>   memory: 1Gi
> requests:
>   cpu: 250m
>   memory: 1Gi
>
>   artifactFetcherTag: 2.1.0
>
>
>


Re: History Server Not Showing Any Jobs - File Not Found?

2020-05-28 Thread Chesnay Schepler
If it were a class-loading issue I would think that we'd see an 
exception of some kind. Maybe double-check that flink-shaded-hadoop is 
not in the lib directory. (usually I would ask for the full classpath 
that the HS is started with, but as it turns out this isn't getting 
logged :( (FLINK-18008))


The fact that overview.json and jobs/overview.json are missing indicates 
that something goes wrong directly on startup. What is supposed to 
happen is that the HS starts, fetches all currently available archives 
and then creates these files.

So it seems like the download gets stuck for some reason.

Can you use jstack to create a thread dump, and see what the 
Flink-HistoryServer-ArchiveFetcher is doing?
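
For reference, such a dump can be taken with the standard JDK tools, along
these lines (the jps filter is only an example; the listed main class name
depends on how the HS was launched):

jps | grep HistoryServer           # find the history server's PID
jstack <pid> > historyserver-threads.txt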


I will also file a JIRA for adding more logging statements, like when 
fetching starts/stops.


On 27/05/2020 20:57, Hailu, Andreas wrote:


Hi Chesney, apologies for not getting back to you sooner here. So I 
did what you suggested - I downloaded a few files from my 
jobmanager.archive.fs.dir HDFS directory to a locally available 
directory named 
/local/scratch/hailua_p2epdlsuat/historyserver/archived/. I then 
changed my historyserver.archive.fs.dir to 
file:///local/scratch/hailua_p2epdlsuat/historyserver/archived/ and 
that seemed to work. I’m able to see the history of the applications I 
downloaded. So this points to a problem with sourcing the history from 
HDFS.


Do you think this could be classpath related? This is what we use for 
our HADOOP_CLASSPATH var:


//gns/software/infra/big-data/hadoop/hdp-2.6.5.0/hadoop/*:/gns/software/infra/big-data/hadoop/hdp-2.6.5.0/hadoop/lib/*:/gns/software/infra/big-data/hadoop/hdp-2.6.5.0/hadoop-hdfs/*:/gns/software/infra/big-data/hadoop/hdp-2.6.5.0/hadoop-hdfs/lib/*:/gns/software/infra/big-data/hadoop/hdp-2.6.5.0/hadoop-mapreduce/*:/gns/software/infra/big-data/hadoop/hdp-2.6.5.0/hadoop-mapreduce/lib/*:/gns/software/infra/big-data/hadoop/hdp-2.6.5.0/hadoop-yarn/*:/gns/software/infra/big-data/hadoop/hdp-2.6.5.0/hadoop-yarn/lib/*:/gns/software/ep/da/dataproc/dataproc-prod/lakeRmProxy.jar:/gns/software/infra/big-data/hadoop/hdp-2.6.5.0/hadoop/bin::/gns/mw/dbclient/postgres/jdbc/pg-jdbc-9.3.v01/postgresql-9.3-1100-jdbc4.jar/

//

You can see we have references to Hadoop mapred/yarn/hdfs libs in there.

*// *ah**

*From:*Chesnay Schepler 
*Sent:* Sunday, May 3, 2020 6:00 PM
*To:* Hailu, Andreas [Engineering] ; 
user@flink.apache.org

*Subject:* Re: History Server Not Showing Any Jobs - File Not Found?

yes, exactly; I want to rule out that (somehow) HDFS is the problem.

I couldn't reproduce the issue locally myself so far.

On 01/05/2020 22:31, Hailu, Andreas wrote:

Hi Chesnay, yes – they were created using Flink 1.9.1 as we’ve
only just started to archive them in the past couple weeks. Could
you clarify on how you want to try local filesystem archives? As
in changing jobmanager.archive.fs.dir and historyserver.web.tmpdir
to the same local directory?

*// *ah

*From:*Chesnay Schepler 

*Sent:* Wednesday, April 29, 2020 8:26 AM
*To:* Hailu, Andreas [Engineering] 
; user@flink.apache.org

*Subject:* Re: History Server Not Showing Any Jobs - File Not Found?

hmm...let's see if I can reproduce the issue locally.

Are the archives from the same version the history server runs on?
(Which I supposed would be 1.9.1?)

Just for the sake of narrowing things down, it would also be
interesting to check if it works with the archives residing in the
local filesystem.

On 27/04/2020 18:35, Hailu, Andreas wrote:

bash-4.1$ ls -l /local/scratch/flink_historyserver_tmpdir/

total 8

drwxrwxr-x 3 p2epdlsuat p2epdlsuat 4096 Apr 21 10:43
flink-web-history-7fbb97cc-9f38-4844-9bcf-6272fe6828e9

drwxrwxr-x 3 p2epdlsuat p2epdlsuat 4096 Apr 21 10:22
flink-web-history-95b3f928-c60f-4351-9926-766c6ad3ee76

There are just two directories in here. I don’t see cache
directories from my attempts today, which is interesting.
Looking a little deeper into them:

bash-4.1$ ls -lr

/local/scratch/flink_historyserver_tmpdir/flink-web-history-7fbb97cc-9f38-4844-9bcf-6272fe6828e9

total 1756

drwxrwxr-x 2 p2epdlsuat p2epdlsuat 1789952 Apr 21 10:44 jobs

bash-4.1$ ls -lr

/local/scratch/flink_historyserver_tmpdir/flink-web-history-7fbb97cc-9f38-4844-9bcf-6272fe6828e9/jobs

total 0

-rw-rw-r-- 1 p2epdlsuat p2epdlsuat 0 Apr 21 10:43 overview.json

There are indeed archives already in HDFS – I’ve included some
in my initial mail, but here they are again just for reference:

-bash-4.1$ hdfs dfs -ls /user/p2epda/lake/delp_qa/flink_hs

Found 44282 items

-rw-r- 3 delp datalake_admin_dev  50569 2020-03-21
23:17
/user/p2epda/lake/delp_qa/fl

How do I make sure to place operator instances in specific Task Managers?

2020-05-28 Thread Felipe Gutierrez
For instance, if I have the following DAG with the respect parallelism
in parenthesis (I hope the dag appears real afterall):

  source01 -> map01(4) -> flatmap01(4) \

  |-> keyBy -> reducer(8)
  source02 -> map02(4) -> flatmap02(4) /

And I have 4 TMs in 4 machines with 4 cores each. I would like to
place source01 and map01 and flatmap01 in TM-01. source02 and map02
and flatmap02 in TM-02. I am using "disableChaining()" in the flatMap
operator to measure it. And reducer1-to-4 in TM-03 and reducer5-to-8
in TM-04.

I am using the methods "setParallelism()" and "slotSharingGroup()" to
define it but both source01 and source02 are placed in TM-01 and map01
is split into 2 TMs. The same with map02.
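
For illustration, a minimal sketch of the slot sharing group approach (the
source/map/flatMap/reducer implementations and the key selector are
placeholders for your own functions). Note that a slot sharing group only
controls which tasks may share a slot; Flink has no public API to pin an
operator to one specific TaskManager:

DataStream<String> branch1 = env
    .addSource(source01).slotSharingGroup("pipeline1")
    .map(map01).setParallelism(4).slotSharingGroup("pipeline1")
    .flatMap(flatmap01).setParallelism(4).disableChaining().slotSharingGroup("pipeline1");

DataStream<String> branch2 = env
    .addSource(source02).slotSharingGroup("pipeline2")
    .map(map02).setParallelism(4).slotSharingGroup("pipeline2")
    .flatMap(flatmap02).setParallelism(4).disableChaining().slotSharingGroup("pipeline2");

branch1.union(branch2)
    .keyBy(keySelector)
    .reduce(reducer).setParallelism(8).slotSharingGroup("reduce");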

Thanks,
Felipe
--
-- Felipe Gutierrez
-- skype: felipe.o.gutierrez
-- https://felipeogutierrez.blogspot.com


Re: How to create schema for flexible json data in Flink SQL

2020-05-28 Thread Guodong Wang
Benchao,

Thank you for your quick reply.

As you mentioned, for the current scenario, approach 2 should work for me. But
it is a little bit annoying that I have to modify the schema to add new field
types whenever the upstream app changes the JSON format or adds new fields.
Otherwise, my users cannot refer to the field in their SQL.

Per the description in the jira, I think after implementing this, all the JSON
values will be converted to strings.
I am wondering if Flink SQL can/will support a flexible schema in the
future - for example, registering the table without defining a specific schema
for each field, letting the user define a generic map or array for one field
where the value of the map/array can be any object. Then the type conversion
cost might be saved.

Guodong


On Thu, May 28, 2020 at 7:43 PM Benchao Li  wrote:

> Hi Guodong,
>
> I think you almost get the answer,
> 1. Map type: it's not working with the current implementation. For example,
> with a map type, if the value is a non-string JSON object, then
> `JsonNode.asText()` may not work as you wish.
> 2. List all the fields you care about. IMO, this can fit your scenario. And
> you can set format.fail-on-missing-field = false to allow non-existent
> fields to be set to null.
>
> For 1, I think maybe we can support it in the future, and I've created
> jira[1] to track this.
>
> [1] https://issues.apache.org/jira/browse/FLINK-18002
>
> Guodong Wang  wrote on Thu, May 28, 2020 at 6:32 PM:
>
>> Hi !
>>
>> I want to use Flink SQL to process some json events. It is quite
>> challenging to define a schema for the Flink SQL table.
>>
>> My data source's format is some json like this
>> {
>> "top_level_key1": "some value",
>> "nested_object": {
>> "nested_key1": "abc",
>> "nested_key2": 123,
>> "nested_key3": ["element1", "element2", "element3"]
>> }
>> }
>>
>> The big challenges for me to define a schema for the data source are
>> 1. the keys in nested_object are flexible, there might be 3 unique keys
>> or more unique keys. If I enumerate all the keys in the schema, I think my
>> code is fragile, how to handle event which contains more  nested_keys in
>> nested_object ?
>> 2. I know table api support Map type, but I am not sure if I can put
>> generic object as the value of the map. Because the values in nested_object
>> are of different types, some of them are int, some of them are string or
>> array.
>>
>> So. how to expose this kind of json data as table in Flink SQL without
>> enumerating all the nested_keys?
>>
>> Thanks.
>>
>> Guodong
>>
>
>
> --
>
> Best,
> Benchao Li
>


Re: Apache Flink - Question about application restart

2020-05-28 Thread M Singh
Hi Till/Zhu/Yang: Thanks for your replies.

So just to clarify - the job id remains the same if the job restarts have not
been exhausted. Does Yarn also resubmit the job in case of failures, and if
so, is the job id different?

Thanks

On Wednesday, May 27, 2020, 10:05:40 AM EDT, Till Rohrmann
 wrote:

Hi,

if you submit the same job multiple times, then it will get every time a
different JobID assigned. For Flink, different job submissions are considered
to be different jobs. Once a job has been submitted, it will keep the same
JobID which is important in order to retrieve the checkpoints associated with
this job.

Cheers,
Till

On Tue, May 26, 2020 at 12:42 PM M Singh  wrote:

Hi Zhu Zhu:

I have another clarification - it looks like if I run the same app multiple
times, its job id changes. So it looks like even though the graph is the same,
the job id is not dependent on the job graph only, since with different runs
of the same app it is not the same.

Please let me know if I've missed anything.

Thanks

On Monday, May 25, 2020, 05:32:39 PM EDT, M Singh 
wrote:

Hi Zhu Zhu:

Just to clarify - from what I understand, EMR also has by default restart
times (I think it is 3). So if EMR restarts the job - the job id is the same
since the job graph is the same.

Thanks for the clarification.

On Monday, May 25, 2020, 04:01:17 AM EDT, Yang Wang 
wrote:

Just to share some additional information.

When deploying a Flink application on Yarn and it has exhausted its restart
policy, then the whole application will fail. If you start another instance
(Yarn application), even if high availability is configured, we could not
recover from the latest checkpoint because the clusterId (i.e. applicationId)
has changed.

Best,
Yang

Zhu Zhu  wrote on Mon, May 25, 2020 at 11:17 AM:

Hi M,

Regarding your questions:
1. yes. The id is fixed once the job graph is generated.
2. yes

Regarding yarn mode:
1. the job id keeps the same because the job graph will be generated once at
client side and persisted in DFS for reuse
2. yes if high availability is enabled

Thanks,
Zhu Zhu

M Singh  wrote on Sat, May 23, 2020 at 4:06 AM:

Hi Flink Folks:

If I have a Flink Application with 10 restarts, if it fails and restarts,
then:
1. Does the job have the same id?
2. Does the automatically restarting application pick up from the last
checkpoint? I am assuming it does but just want to confirm.

Also, if it is running on AWS EMR, I believe EMR/Yarn is configured to restart
the job 3 times (after it has exhausted its restart policy). If that is the
case:
1. Does the job get a new id? I believe it does, but just want to confirm.
2. Does the Yarn restart honor the last checkpoint? I believe it does not, but
is there a way to make it restart from the last checkpoint of the failed job
(after it has exhausted its restart policy)?

Thanks

RE: History Server Not Showing Any Jobs - File Not Found?

2020-05-28 Thread Hailu, Andreas
Just created a dump, here's what I see:

"Flink-HistoryServer-ArchiveFetcher-thread-1" #19 daemon prio=5 os_prio=0 
tid=0x7f93a5a2c000 nid=0x5692 runnable [0x7f934a0d3000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x0005df986960> (a sun.nio.ch.Util$2)
- locked <0x0005df986948> (a java.util.Collections$UnmodifiableSet)
- locked <0x0005df928390> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
at 
org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:201)
at 
org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:152)
- locked <0x0005ceade5e0> (a 
org.apache.hadoop.hdfs.RemoteBlockReader2)
at 
org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:781)
at 
org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:837)
- eliminated <0x0005cead3688> (a 
org.apache.hadoop.hdfs.DFSInputStream)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:897)
- locked <0x0005cead3688> (a org.apache.hadoop.hdfs.DFSInputStream)
   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:945)
- locked <0x0005cead3688> (a org.apache.hadoop.hdfs.DFSInputStream)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
org.apache.flink.runtime.fs.hdfs.HadoopDataInputStream.read(HadoopDataInputStream.java:94)
at java.io.InputStream.read(InputStream.java:101)
at org.apache.flink.util.IOUtils.copyBytes(IOUtils.java:69)
at org.apache.flink.util.IOUtils.copyBytes(IOUtils.java:91)
at 
org.apache.flink.runtime.history.FsJobArchivist.getArchivedJsons(FsJobArchivist.java:110)
at 
org.apache.flink.runtime.webmonitor.history.HistoryServerArchiveFetcher$JobArchiveFetcherTask.run(HistoryServerArchiveFetcher.java:169)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

What problems could the flink-shaded-hadoop jar being included introduce?

// ah

From: Chesnay Schepler 
Sent: Thursday, May 28, 2020 9:26 AM
To: Hailu, Andreas [Engineering] ; 
user@flink.apache.org
Subject: Re: History Server Not Showing Any Jobs - File Not Found?

If it were a class-loading issue I would think that we'd see an exception of 
some kind. Maybe double-check that flink-shaded-hadoop is not in the lib 
directory. (usually I would ask for the full classpath that the HS is started 
with, but as it turns out this isn't getting logged :( (FLINK-18008))

The fact that overview.json and jobs/overview.json are missing indicates that 
something goes wrong directly on startup. What is supposed to happens is that 
the HS starts, fetches all currently available archives and then creates these 
files.
So it seems like the download gets stuck for some reason.

Can you use jstack to create a thread dump, and see what the 
Flink-HistoryServer-ArchiveFetcher is doing?

I will also file a JIRA for adding more logging statements, like when fetching 
starts/stops.

On 27/05/2020 20:57, Hailu, Andreas wrote:
Hi Chesney, apologies for not getting back to you sooner here. So I did what 
you suggested - I downloaded a few files from my jobmanager.archive.fs.dir HDFS 
directory to a locally available directory named 
/local/scratch/hailua_p2epdlsuat/histor

Re: ClusterClientFactory selection

2020-05-28 Thread M Singh
Hi Kostas/Yang/Lake:

I am looking at AWS EMR and did not see execution.target in the
flink-conf.yaml file under the flink/conf directory. Is it defined in another
place?

I also searched the current Flink source code and found mentions of it in the
md files, but not in any property file or in the flink-yarn sub-module.
Please let me know if I am missing anything.

Thanks

On Wednesday, May 27, 2020, 03:51:28 AM EDT, Kostas Kloudas
 wrote:

 Hi Singh,

The only thing to add to what Yang said is that the "execution.target"
configuration option (in the config file) is also used for the same
purpose from the execution environments.

Cheers,
Kostas

On Wed, May 27, 2020 at 4:49 AM Yang Wang  wrote:
>
> Hi M Singh,
>
> The Flink CLI picks up the correct ClusterClientFactory via java SPI. You
> could check YarnClusterClientFactory#isCompatibleWith for how it is activated.
> The cli option / configuration is "-e/--executor" or execution.target (e.g. 
> yarn-per-job).
>
>
> Best,
> Yang
>
> M Singh  wrote on Tue, May 26, 2020 at 6:45 PM:
>>
>> Hi:
>>
>> I wanted to find out which parameter/configuration allows flink cli pick up 
>> the appropriate cluster client factory (especially in the yarn mode).
>>
>> Thanks  
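
For context, based on the options mentioned above (this key is not part of the
default flink-conf.yaml shipped with the distribution, so you normally add it
yourself), the setting can be supplied either in the configuration file or on
the command line:

# flink-conf.yaml
execution.target: yarn-per-job

# or equivalently on the CLI
./bin/flink run -e yarn-per-job <job-jar>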

Re: [DISCUSS] FLINK-17989 - java.lang.NoClassDefFoundError org.apache.flink.fs.azure.common.hadoop.HadoopRecoverableWriter

2020-05-28 Thread Israel Ekpo
Guowei,

What do we need to do to add support for it?

How do I get started on that?



On Wed, May 27, 2020 at 8:53 PM Guowei Ma  wrote:

> Hi,
> I think the StreamingFileSink could not support Azure currently.
> You could find more detailed info from here[1].
>
> [1] https://issues.apache.org/jira/browse/FLINK-17444
> Best,
> Guowei
>
>
> Israel Ekpo  wrote on Thu, May 28, 2020 at 6:04 AM:
>
>> You can assign the task to me and I would like to collaborate with someone
>> to fix it.
>>
>> On Wed, May 27, 2020 at 5:52 PM Israel Ekpo  wrote:
>>
>>> Some users are running into issues when using Azure Blob Storage for the
>>> StreamFileSink
>>>
>>> https://issues.apache.org/jira/browse/FLINK-17989
>>>
>>> The issue is because certain packages are relocated in the POM file and
>>> some classes are dropped in the final shaded jar
>>>
>>> I have attempted to comment out the relocations and recompile the source,
>>> but I keep hitting roadblocks of other relocations and filtering each time
>>> I update a specific pom file.
>>>
>>> How can this be addressed so that these users can be unblocked? Why are
>>> the classes filtered out? What is the workaround? I can work on the patch
>>> if I have some guidance.
>>>
>>> This is an issue in Flink 1.9 and 1.10 and I believe 1.11 has the same
>>> issue but I am yet to confirm
>>>
>>> Thanks.
>>>
>>>
>>>
>>


Re: How to create schema for flexible json data in Flink SQL

2020-05-28 Thread Leonard Xu
Hi, Guodong

> I am wondering if Flink SQL can/will support the flexible schema in the 
> future,

It's an interesting topic; this feature is closer to the scope of schema 
inference. Schema inference should come in the next few releases.

Best,
Leonard Xu




> for example, register the table without defining specific schema for each 
> field, to let user define a generic map or array for one field. but the value 
> of map/array can be any object. Then, the type conversion cost might be 
> saved. 
> 
> Guodong
> 
> 
> On Thu, May 28, 2020 at 7:43 PM Benchao Li  wrote:
> Hi Guodong,
> 
> I think you almost get the answer,
> 1. Map type: it's not working with the current implementation. For example, 
> with a map type, if the value is a non-string JSON object, then 
> `JsonNode.asText()` may not work as you wish.
> 2. List all the fields you care about. IMO, this can fit your scenario. And you can 
> set format.fail-on-missing-field = false to allow non-existent fields 
> to be set to null.
> 
> For 1, I think maybe we can support it in the future, and I've created 
> jira[1] to track this.
> 
> [1] https://issues.apache.org/jira/browse/FLINK-18002 
> 
> Guodong Wang <wangg...@gmail.com> wrote on Thu, May 28, 2020 at 6:32 PM:
> Hi !
> 
> I want to use Flink SQL to process some json events. It is quite challenging 
> to define a schema for the Flink SQL table. 
> 
> My data source's format is some json like this
> {
> "top_level_key1": "some value",
> "nested_object": {
> "nested_key1": "abc",
> "nested_key2": 123,
> "nested_key3": ["element1", "element2", "element3"]
> }
> }
> 
> The big challenges for me to define a schema for the data source are
> 1. the keys in nested_object are flexible, there might be 3 unique keys or 
> more unique keys. If I enumerate all the keys in the schema, I think my code 
> is fragile, how to handle event which contains more  nested_keys in 
> nested_object ?
> 2. I know table api support Map type, but I am not sure if I can put generic 
> object as the value of the map. Because the values in nested_object are of 
> different types, some of them are int, some of them are string or array.
> 
> So. how to expose this kind of json data as table in Flink SQL without 
> enumerating all the nested_keys?
> 
> Thanks.
> 
> Guodong
> 
> 
> -- 
> 
> Best,
> Benchao Li



Streaming multiple csv files

2020-05-28 Thread Nikola Hrusov
Hello,

I have multiple files (file1, file2, file3) each being CSV and having
different columns and data. The column headers are finite and we know
their format. I would like to take them and parse them based on the column
structure. I already have the parsers

e.g.:

file1 has columns (id, firstname, lastname)
file2 has columns (id, name)
file3 has columns (id, name_1, name_2, name_3, name_4)

I would like to take all those files, read them, parse them and output
objects to a sink as Person { id, fullName }

Example files would be:

file1:
--
id, firstname, lastname
33, John, Smith
55, Labe, Soni

file2:
--
id, name
5, Mitr Kompi
99, Squi Masw

file3:
--
id, name_1, name_2, name_3, name_4
1, Peter, Hov, Risti, Pena
2, Rii, Koni, Ques,,

Expected output of my program would be:

Person { 33, John Smith }
Person { 55, Labe Soni }
Person { 5, Mitr Kompi }
Person { 99, Squi Masw }
Person { 1, Peter Hov Risti Pena }
Person { 2, Rii Koni Ques }



What I do now is:

My code (very simplified) is: env.readFile().flatMap(new
MyParser()).addSink(new MySink())
MyParser receives the rows one by one in string format, which means that
when I run with parallelism > 1 I receive data from any file and cannot
tell which file a given line comes from.



What I would like to do is:

Be able to figure out which is the file I am reading from.
Since I only know the file type based on the first row (columns) I need to
either send the 1st row to MyParser() or send a tuple <1st row of file
being read, current row of file being read>.
Another option that I can think about is to have some keyed function based
on the first row, but I am not sure how to achieve that by using readFile.


Is there a way I can achieve this?


Regards,
Nikola


Re: Multiple Sinks for a Single Source

2020-05-28 Thread Prasanna kumar
Alexander,

Thanks for the reply. Will implement and come back in case of any
questions.

Prasanna.

On Thu, May 28, 2020 at 5:06 PM Alexander Fedulov 
wrote:

> Hi Prasanna,
>
> if the set of all possible sinks is known in advance, side outputs will be
> generic enough to express your requirements. Side output produces a stream.
> Create all of the side output tags, associate each of them with one sink,
> add conditional logic around `ctx.output(outputTag, ... *)*;`  to decide
> where to dispatch the messages  (see [1]), collect to none or many side
> outputs, depending on your logic.
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/side_output.html
>
> --
>
> Alexander Fedulov | Solutions Architect
>
> 
>
> Follow us @VervericaData
>
> --
>
> Join Flink Forward  - The Apache Flink
> Conference
>
> Stream Processing | Event Driven | Real Time
>
>
> On Tue, May 26, 2020 at 2:57 PM Prasanna kumar <
> prasannakumarram...@gmail.com> wrote:
>
>> Piotr,
>>
>> There is an event and subscriber registry as JSON file which has the
>> table event mapping and event-subscriber mapping as mentioned below.
>>
>> Based on the set JSON , we need to job to go through the table updates
>> and create events and for each event there is a way set how to sink them.
>>
>> The sink streams have to be added based on this JSON. Thats what i
>> mentioned as no predefined sink in code earlier.
>>
>> You could see that each event has different set of sinks.
>>
>> Just checking how much generic could Side-output streams be ?.
>>
>> Source -> generate events -> (find out sinks dynamically in code ) ->
>> write to the respective sinks.
>>
>> {
>>   " tablename ": "source.table1",
>>   "events": [
>> {
>>   "operation": "update",
>>   "eventstobecreated": [
>> {
>>   "eventname": "USERUPDATE",
>>   "Columnoperation": "and",
>>   "ColumnChanges": [
>> {
>>   "columnname": "name"
>> },
>> {
>>   "columnname": "loginenabled",
>>   "value": "Y"
>> }
>>   ],
>>   "Subscribers": [
>> {
>>   "customername": "c1",
>>   "method": "Kafka",
>>   "methodparams": {
>> "topicname": "USERTOPIC"
>>   }
>> },
>> {
>>   "customername": "c2",
>>   "method": "S3",
>>   "methodparams": {
>> "folder": "aws://folderC2"
>>   }}, ]}]
>> },
>> {
>>   "operation": "insert",
>>   "eventstobecreated": [
>>   "eventname": "USERINSERT",
>>   "operation": "insert",
>>   "Subscribers": [
>> {
>>   "teamname": "General",
>>   "method": "Kafka",
>>   "methodparams": {
>> "topicname": "new_users"
>>   }
>> },
>> {
>>   "teamname": "General",
>>   "method": "kinesis",
>>   "methodparams": {
>> "URL": "new_users",
>> "username": "uname",
>> "password":  "pwd"
>>   }}, ]}]
>> },
>> {
>>   "operation": "delete",
>>   "eventstobecreated": [
>> {
>>   "eventname": "USERDELETE",
>>   "Subscribers": [
>> {
>>   "customername": "c1",
>>   "method": "Kafka",
>>   "methodparams": {
>> "topicname": "USERTOPIC"
>>   }
>> },
>> {
>>   "customername": "c4",
>>   "method": "Kafka",
>>   "methodparams": {
>> "topicname": "deleterecords"
>>  }}, ]}]
>>  },
>> }
>>
>> Please let me know your thoughts on this.
>>
>> Thanks,
>> Prasanna.
>>
>> On Tue, May 26, 2020 at 5:34 PM Piotr Nowojski 
>> wrote:
>>
>>> Hi,
>>>
>>> I’m not sure if I fully understand what do you mean by
>>>
>>> > The point is the sink are not predefined.
>>>
>>> You must know before submitting the job, what sinks are going to be used
>>> in the job. You can have some custom logic, that would filter out records
>>> before writing them to the sinks, as I proposed before, or you could use
>>> side outputs [1] would be better suited to your use case?
>>>
>>> Piotrek
>>>
>>> [1]
>>> https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/side_output.html
>>>
>>> On 26 May 2020, at 12:20, Prasanna kumar 
>>> wrote:
>>>
>>> Thanks Piotr for the Reply.
>>>
>>> I will explain my requirement in detail.
>>>
>>> Table Updates -> Generate Business Events -> Subscribers
>>>
>>> *Source Side*
>>> There are CDC of 100 tables which the framework needs to listen to.
>>>
>>> *Event Table Mapping*
>>>
>>> There would be Event associated with table in a *m:n* fashion.
>>>
>>> say there are tables TA, TB, TC.
>>>
>>> 

Re: History Server Not Showing Any Jobs - File Not Found?

2020-05-28 Thread Chesnay Schepler

Looks like it is indeed stuck on downloading the archive.

I searched a bit in the Hadoop JIRA and found several similar instances:
https://issues.apache.org/jira/browse/HDFS-6999
https://issues.apache.org/jira/browse/HDFS-7005
https://issues.apache.org/jira/browse/HDFS-7145

It is supposed to be fixed in 2.6.0 though :/

If hadoop is available from the HADOOP_CLASSPATH and flink-shaded-hadoop 
is in /lib, then you basically don't know which Hadoop version is actually 
being used, which could lead to incompatibilities and dependency clashes.
If flink-shaded-hadoop 2.4/2.5 is on the classpath, maybe that is being 
used and runs into HDFS-7005.


On 28/05/2020 16:27, Hailu, Andreas wrote:


Just created a dump, here’s what I see:

"Flink-HistoryServer-ArchiveFetcher-thread-1" #19 daemon prio=5 
os_prio=0 tid=0x7f93a5a2c000 nid=0x5692 runnable [0x7f934a0d3000]


java.lang.Thread.State: RUNNABLE

    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)

    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)

    at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)


    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)

    - locked <0x0005df986960> (a sun.nio.ch.Util$2)

    - locked <0x0005df986948> (a 
java.util.Collections$UnmodifiableSet)


    - locked <0x0005df928390> (a sun.nio.ch.EPollSelectorImpl)

    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)

    at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)


    at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)


    at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)


    at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)


    at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)


    at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)


    at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)


    at 
org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:201)


    at 
org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:152)


    - locked <0x0005ceade5e0> (a 
org.apache.hadoop.hdfs.RemoteBlockReader2)


    at 
org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:781)


    at 
org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:837)


    - eliminated <0x0005cead3688> (a 
org.apache.hadoop.hdfs.DFSInputStream)


    at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:897)


    - locked <0x0005cead3688> (a 
org.apache.hadoop.hdfs.DFSInputStream)


   at 
org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:945)


    - locked <0x0005cead3688> (a 
org.apache.hadoop.hdfs.DFSInputStream)


    at java.io.DataInputStream.read(DataInputStream.java:149)

    at 
org.apache.flink.runtime.fs.hdfs.HadoopDataInputStream.read(HadoopDataInputStream.java:94)


    at java.io.InputStream.read(InputStream.java:101)

    at org.apache.flink.util.IOUtils.copyBytes(IOUtils.java:69)

    at org.apache.flink.util.IOUtils.copyBytes(IOUtils.java:91)

    at 
org.apache.flink.runtime.history.FsJobArchivist.getArchivedJsons(FsJobArchivist.java:110)


    at 
org.apache.flink.runtime.webmonitor.history.HistoryServerArchiveFetcher$JobArchiveFetcherTask.run(HistoryServerArchiveFetcher.java:169)


    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)


    at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)


    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)


    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)


    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)


    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)


    at java.lang.Thread.run(Thread.java:745)

What problems could the flink-shaded-hadoop jar being included introduce?

*// *ah**

*From:*Chesnay Schepler 
*Sent:* Thursday, May 28, 2020 9:26 AM
*To:* Hailu, Andreas [Engineering] ; 
user@flink.apache.org

*Subject:* Re: History Server Not Showing Any Jobs - File Not Found?

If it were a class-loading issue I would think that we'd see an 
exception of some kind. Maybe double-check that flink-shaded-hadoop is 
not in the lib directory. (usually I would ask for the full classpath 
that the HS is started with, but as it turns out this isn't getting 
logged :( (FLINK-18008))


The fact that ove

Re: How to create schema for flexible json data in Flink SQL

2020-05-28 Thread Benchao Li
Hi Guodong,

Does the RAW type meet your requirements? For example, you could specify a
map type whose value is the raw JsonNode parsed by Jackson.
This is not supported yet; however, IMO it could be supported.

Guodong Wang  wrote on Thu, May 28, 2020 at 9:43 PM:

> Benchao,
>
> Thank you for your quick reply.
>
> As you mentioned, for current scenario, approach 2 should work for me. But
> it is a little bit annoying that I have to modify schema to add new field
> types when upstream app changes the json format or adds new fields.
> Otherwise, my user can not refer the field in their SQL.
>
> Per description in the jira, I think after implementing this, all the json
> values will be converted as strings.
> I am wondering if Flink SQL can/will support the flexible schema in the
> future, for example, register the table without defining specific schema
> for each field, to let user define a generic map or array for one field.
> but the value of map/array can be any object. Then, the type conversion
> cost might be saved.
>
> Guodong
>
>
> On Thu, May 28, 2020 at 7:43 PM Benchao Li  wrote:
>
>> Hi Guodong,
>>
>> I think you almost get the answer,
>> 1. Map type: it's not working with the current implementation. For example,
>> with a map type, if the value is a non-string JSON object, then
>> `JsonNode.asText()` may not work as you wish.
>> 2. List all the fields you care about. IMO, this can fit your scenario. And
>> you can set format.fail-on-missing-field = false to allow non-existent
>> fields to be set to null.
>>
>> For 1, I think maybe we can support it in the future, and I've created
>> jira[1] to track this.
>>
>> [1] https://issues.apache.org/jira/browse/FLINK-18002
>>
>> Guodong Wang  wrote on Thu, May 28, 2020 at 6:32 PM:
>>
>>> Hi !
>>>
>>> I want to use Flink SQL to process some json events. It is quite
>>> challenging to define a schema for the Flink SQL table.
>>>
>>> My data source's format is some json like this
>>> {
>>> "top_level_key1": "some value",
>>> "nested_object": {
>>> "nested_key1": "abc",
>>> "nested_key2": 123,
>>> "nested_key3": ["element1", "element2", "element3"]
>>> }
>>> }
>>>
>>> The big challenges for me to define a schema for the data source are
>>> 1. the keys in nested_object are flexible, there might be 3 unique keys
>>> or more unique keys. If I enumerate all the keys in the schema, I think my
>>> code is fragile, how to handle event which contains more  nested_keys in
>>> nested_object ?
>>> 2. I know table api support Map type, but I am not sure if I can put
>>> generic object as the value of the map. Because the values in nested_object
>>> are of different types, some of them are int, some of them are string or
>>> array.
>>>
>>> So. how to expose this kind of json data as table in Flink SQL without
>>> enumerating all the nested_keys?
>>>
>>> Thanks.
>>>
>>> Guodong
>>>
>>
>>
>> --
>>
>> Best,
>> Benchao Li
>>
>

-- 

Best,
Benchao Li


Re: Apache Flink - Question about application restart

2020-05-28 Thread Till Rohrmann
Hi,

Yarn won't resubmit the job. In case of a process failure where Yarn
restarts the Flink Master, the Master will recover the submitted jobs from
a persistent storage system.

Cheers,
Till
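
For completeness, this recovery relies on high availability being configured;
a rough sketch of the relevant flink-conf.yaml entries (the ZooKeeper quorum
and storage path are placeholders):

high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.storageDir: hdfs:///flink/recovery/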

On Thu, May 28, 2020 at 4:05 PM M Singh  wrote:

> Hi Till/Zhu/Yang:  Thanks for your replies.
>
> So just to clarify - the job id remains same if the job restarts have not
> been exhausted.  Does Yarn also resubmit the job in case of failures and if
> so, then is the job id different.
>
> Thanks
> On Wednesday, May 27, 2020, 10:05:40 AM EDT, Till Rohrmann <
> trohrm...@apache.org> wrote:
>
>
> Hi,
>
> if you submit the same job multiple times, then it will get every time a
> different JobID assigned. For Flink, different job submissions are
> considered to be different jobs. Once a job has been submitted, it will
> keep the same JobID which is important in order to retrieve the checkpoints
> associated with this job.
>
> Cheers,
> Till
>
> On Tue, May 26, 2020 at 12:42 PM M Singh  wrote:
>
> Hi Zhu Zhu:
>
> I have another clafication - it looks like if I run the same app multiple
> times - it's job id changes.  So it looks like even though the graph is the
> same the job id is not dependent on the job graph only since with different
> runs of the same app it is not the same.
>
> Please let me know if I've missed anything.
>
> Thanks
>
> On Monday, May 25, 2020, 05:32:39 PM EDT, M Singh 
> wrote:
>
>
> Hi Zhu Zhu:
>
> Just to clarify - from what I understand, EMR also has by default restart
> times (I think it is 3). So if the EMR restarts the job - the job id is the
> same since the job graph is the same.
>
> Thanks for the clarification.
>
> On Monday, May 25, 2020, 04:01:17 AM EDT, Yang Wang 
> wrote:
>
>
> Just share some additional information.
>
> When deploying Flink application on Yarn and it exhausted restart policy,
> then
> the whole application will failed. If you start another instance(Yarn
> application),
> even the high availability is configured, we could not recover from the
> latest
> checkpoint because the clusterId(i.e. applicationId) has changed.
>
>
> Best,
> Yang
>
> Zhu Zhu  于2020年5月25日周一 上午11:17写道:
>
> Hi M,
>
> Regarding your questions:
> 1. yes. The id is fixed once the job graph is generated.
> 2. yes
>
> Regarding yarn mode:
> 1. the job id keeps the same because the job graph will be generated once
> at client side and persist in DFS for reuse
> 2. yes if high availability is enabled
>
> Thanks,
> Zhu Zhu
>
> M Singh  于2020年5月23日周六 上午4:06写道:
>
> Hi Flink Folks:
>
> If I have a Flink Application with 10 restarts, if it fails and restarts,
> then:
>
> 1. Does the job have the same id ?
> 2. Does the automatically restarting application, pickup from the last
> checkpoint ? I am assuming it does but just want to confirm.
>
> Also, if it is running on AWS EMR I believe EMR/Yarn is configured to
> restart the job 3 times (after it has exhausted it's restart policy) .  If
> that is the case:
> 1. Does the job get a new id ? I believe it does, but just want to confirm.
> 2. Does the Yarn restart honor the last checkpoint ?  I believe, it does
> not, but is there a way to make it restart from the last checkpoint of the
> failed job (after it has exhausted its restart policy) ?
>
> Thanks
>
>
>


Re: How to create schema for flexible json data in Flink SQL

2020-05-28 Thread Guodong Wang
Yes. Setting the value type as raw is one possible approach. And I would
like to vote for schema inference as well.

Correct me if I am wrong: IMO schema inference means I can provide a method
in the table source to infer the data schema based on the runtime
computation, just like some Calcite adapters do. Right?
For SQL table registration, I think that requiring the table source to
provide a static schema might be too strict. Letting the planner infer the
table schema would be more flexible.

Thank you for your suggestions.

Guodong


On Thu, May 28, 2020 at 11:11 PM Benchao Li  wrote:

> Hi Guodong,
>
> Does the RAW type meet your requirements? For example, you can specify
> map type, and the value for the map is the raw JsonNode
> parsed from Jackson.
> This is not supported yet, however IMO this could be supported.
>
> Guodong Wang  于2020年5月28日周四 下午9:43写道:
>
>> Benchao,
>>
>> Thank you for your quick reply.
>>
>> As you mentioned, for current scenario, approach 2 should work for me.
>> But it is a little bit annoying that I have to modify schema to add new
>> field types when upstream app changes the json format or adds new fields.
>> Otherwise, my user can not refer the field in their SQL.
>>
>> Per description in the jira, I think after implementing this, all the
>> json values will be converted as strings.
>> I am wondering if Flink SQL can/will support the flexible schema in the
>> future, for example, register the table without defining specific schema
>> for each field, to let user define a generic map or array for one field.
>> but the value of map/array can be any object. Then, the type conversion
>> cost might be saved.
>>
>> Guodong
>>
>>
>> On Thu, May 28, 2020 at 7:43 PM Benchao Li  wrote:
>>
>>> Hi Guodong,
>>>
>>> I think you almost get the answer,
>>> 1. map type, it's not working for current implementation. For example,
>>> use map, if the value if non-string json object, then
>>> `JsonNode.asText()` may not work as you wish.
>>> 2. list all fields you cares. IMO, this can fit your scenario. And you
>>> can set format.fail-on-missing-field = true, to allow setting non-existed
>>> fields to be null.
>>>
>>> For 1, I think maybe we can support it in the future, and I've created
>>> jira[1] to track this.
>>>
>>> [1] https://issues.apache.org/jira/browse/FLINK-18002
>>>
>>> Guodong Wang  于2020年5月28日周四 下午6:32写道:
>>>
 Hi !

 I want to use Flink SQL to process some json events. It is quite
 challenging to define a schema for the Flink SQL table.

 My data source's format is some json like this
 {
 "top_level_key1": "some value",
 "nested_object": {
 "nested_key1": "abc",
 "nested_key2": 123,
 "nested_key3": ["element1", "element2", "element3"]
 }
 }

 The big challenges for me to define a schema for the data source are
 1. the keys in nested_object are flexible, there might be 3 unique keys
 or more unique keys. If I enumerate all the keys in the schema, I think my
 code is fragile, how to handle event which contains more  nested_keys in
 nested_object ?
 2. I know table api support Map type, but I am not sure if I can put
 generic object as the value of the map. Because the values in nested_object
 are of different types, some of them are int, some of them are string or
 array.

 So. how to expose this kind of json data as table in Flink SQL without
 enumerating all the nested_keys?

 Thanks.

 Guodong

>>>
>>>
>>> --
>>>
>>> Best,
>>> Benchao Li
>>>
>>
>
> --
>
> Best,
> Benchao Li
>


Re: Tumbling windows - increasing checkpoint size over time

2020-05-28 Thread Till Rohrmann
Hi Matt,

when using tumbling windows, then the checkpoint size is not only dependent
on the number of keys (which is equivalent to the number of open windows)
but also on how many events arrive for each open window because the windows
store every window event in their state. Hence, it can be the case that you
see different checkpoint sizes depending on the actual data distribution,
which can change over time. Have you checked whether the data distribution
and rate are constant over time?

What is the expected number of keys, size of events and number of events
per key per second? Based on this information one could try to estimate an
upper state size bound.
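
To make that concrete, here is a minimal, self-contained sketch (not the job
from this thread; the Tuple2 records and the counting logic are made up)
showing how an incremental AggregateFunction keeps a single accumulator per
key and window, instead of buffering every event the way a WindowFunction
passed to apply() does:

import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class IncrementalWindowSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(Tuple2.of("key-a", 1L), Tuple2.of("key-b", 1L), Tuple2.of("key-a", 1L))
                .keyBy(0)
                .window(TumblingProcessingTimeWindows.of(Time.seconds(15)))
                // One Long accumulator per key and window instead of a buffer of all events:
                .aggregate(new AggregateFunction<Tuple2<String, Long>, Long, Long>() {
                    @Override public Long createAccumulator() { return 0L; }
                    @Override public Long add(Tuple2<String, Long> value, Long acc) { return acc + value.f1; }
                    @Override public Long getResult(Long acc) { return acc; }
                    @Override public Long merge(Long a, Long b) { return a + b; }
                })
                .print();

        env.execute("incremental-window-sketch");
    }
}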

Cheers,
Till

On Wed, May 27, 2020 at 8:19 PM Wissman, Matt  wrote:

> Hello Till & Guowei,
>
>
>
> Thanks for the replies! Here is a snippet of the window function:
>
>
>
>   SingleOutputStreamOperator aggregatedStream = dataStream
>
> .keyBy(idKeySelector())
>
> .window(TumblingProcessingTimeWindows.of(seconds(15)))
>
> .apply(new Aggregator())
>
> .name("Aggregator")
>
> .setParallelism(3);
>
>
>
> Checkpoint interval: 2 secs while the checkpoint size grew from 100KB to
> 100MB (we’ve since changed it to 5 minutes, which has slowed the checkpoint
> size growth)
>
> Lateness allowed: 0
>
> Watermarks: nothing is set in terms of watermarks – do they apply for
> Process Time?
>
> The set of keys processed in the stream is stable over time
>
>
>
> The checkpoint size actually looks pretty stable now that the interval was
> increased. Is it possible that the short checkpoint interval prevented
> compaction?
>
>
>
> Thanks!
>
>
>
> -Matt
>
>
>
>
>
> *From: *Till Rohrmann 
> *Date: *Wednesday, May 27, 2020 at 9:00 AM
> *To: *Guowei Ma 
> *Cc: *"Wissman, Matt" , "user@flink.apache.org" <
> user@flink.apache.org>
> *Subject: *Re: Tumbling windows - increasing checkpoint size over time
>
>
>
> *LEARN FAST: This email originated outside of HERE.*
> Please do not click on links or open attachments unless you recognize the
> sender and know the content is safe. Thank you.
>
>
>
> Hi Matt,
>
>
>
> could you give us a bit more information about the windows you are using?
> They are tumbling windows. What's the size of the windows? Do you allow
> lateness of events? What's your checkpoint interval?
>
>
>
> Are you using event time? If yes, how is the watermark generated?
>
>
>
> You said that the number of events per window is more or less constant.
> Does this is also apply to the size of the individual events?
>
>
>
> Cheers,
>
> Till
>
>
>
> On Wed, May 27, 2020 at 1:21 AM Guowei Ma  wrote:
>
> Hi, Matt
> The total size of the state of the window operator is related to the
> number of windows. For example if you use keyby+tumblingwindow there
> would be keys number of windows.
> Hope this helps.
> Best,
> Guowei
>
> Wissman, Matt  于2020年5月27日周三 上午3:35写道:
> >
> > Hello Flink Community,
> >
> >
> >
> > I’m running a Flink pipeline that uses a tumbling window and incremental
> checkpoint with RocksDB backed by s3. The number of objects in the window
> is stable but overtime the checkpoint size grows seemingly unbounded.
> Within the first few hours after bringing the Flink pipeline up, the
> checkpoint size is around 100K but after a week of operation it grows to
> around 100MB. The pipeline isn’t using any other Flink state besides the
> state that the window uses. I think this has something to do with RocksDB’s
> compaction but shouldn’t the tumbling window state expire and be purged
> from the checkpoint?
> >
> >
> >
> > Flink Version 1.7.1
> >
> >
> >
> > Thanks!
> >
> >
> >
> > -Matt
>
>


Re: [DISCUSS] FLINK-17989 - java.lang.NoClassDefFoundError org.apache.flink.fs.azure.common.hadoop.HadoopRecoverableWriter

2020-05-28 Thread Till Rohrmann
Hi Israel,

thanks for reaching out to the Flink community. As Guowei said, the
StreamingFileSink can currently only recover from faults if it writes to
HDFS or S3. Other file systems are currently not supported if you need
fault tolerance.

Maybe Klou can tell you more about the background and what is needed to
make it work with other file systems. He is one of the original authors of
the StreamingFileSink.

Cheers,
Till

On Thu, May 28, 2020 at 4:39 PM Israel Ekpo  wrote:

> Guowei,
>
> What do we need to do to add support for it?
>
> How do I get started on that?
>
>
>
> On Wed, May 27, 2020 at 8:53 PM Guowei Ma  wrote:
>
>> Hi,
>> I think the StreamingFileSink could not support Azure currently.
>> You could find more detailed info from here[1].
>>
>> [1] https://issues.apache.org/jira/browse/FLINK-17444
>> Best,
>> Guowei
>>
>>
>> Israel Ekpo  于2020年5月28日周四 上午6:04写道:
>>
>>> You can assign the task to me and I will like to collaborate with
>>> someone to fix it.
>>>
>>> On Wed, May 27, 2020 at 5:52 PM Israel Ekpo 
>>> wrote:
>>>
 Some users are running into issues when using Azure Blob Storage for
 the StreamFileSink

 https://issues.apache.org/jira/browse/FLINK-17989

 The issue is because certain packages are relocated in the POM file and
 some classes are dropped in the final shaded jar

 I have attempted to comment out the relocated and recompile the source
 but I keep hitting roadblocks of other relocation and filtration each time
 I update a specific pom file

 How can this be addressed so that these users can be unblocked? Why are
 the classes filtered out? What is the workaround? I can work on the patch
 if I have some guidance.

 This is an issue in Flink 1.9 and 1.10 and I believe 1.11 has the same
 issue but I am yet to confirm

 Thanks.



>>>


Re: [DISCUSS] FLINK-17989 - java.lang.NoClassDefFoundError org.apache.flink.fs.azure.common.hadoop.HadoopRecoverableWriter

2020-05-28 Thread Israel Ekpo
Hi Till,

Thanks for your feedback and guidance.

It seems similar work was done for S3 filesystem where relocations were
removed for those file system plugins.

https://issues.apache.org/jira/browse/FLINK-11956

It appears the same needs to be done for Azure File systems.

I will attempt to connect with Klou today to collaborate to see what the
level of effort is to add this support.

Thanks.



On Thu, May 28, 2020 at 11:54 AM Till Rohrmann  wrote:

> Hi Israel,
>
> thanks for reaching out to the Flink community. As Guowei said, the
> StreamingFileSink can currently only recover from faults if it writes to
> HDFS or S3. Other file systems are currently not supported if you need
> fault tolerance.
>
> Maybe Klou can tell you more about the background and what is needed to
> make it work with other file systems. He is one of the original authors of
> the StreamingFileSink.
>
> Cheers,
> Till
>
> On Thu, May 28, 2020 at 4:39 PM Israel Ekpo  wrote:
>
>> Guowei,
>>
>> What do we need to do to add support for it?
>>
>> How do I get started on that?
>>
>>
>>
>> On Wed, May 27, 2020 at 8:53 PM Guowei Ma  wrote:
>>
>>> Hi,
>>> I think the StreamingFileSink could not support Azure currently.
>>> You could find more detailed info from here[1].
>>>
>>> [1] https://issues.apache.org/jira/browse/FLINK-17444
>>> Best,
>>> Guowei
>>>
>>>
>>> Israel Ekpo  于2020年5月28日周四 上午6:04写道:
>>>
 You can assign the task to me and I will like to collaborate with
 someone to fix it.

 On Wed, May 27, 2020 at 5:52 PM Israel Ekpo 
 wrote:

> Some users are running into issues when using Azure Blob Storage for
> the StreamFileSink
>
> https://issues.apache.org/jira/browse/FLINK-17989
>
> The issue is because certain packages are relocated in the POM file
> and some classes are dropped in the final shaded jar
>
> I have attempted to comment out the relocated and recompile the source
> but I keep hitting roadblocks of other relocation and filtration each time
> I update a specific pom file
>
> How can this be addressed so that these users can be unblocked? Why
> are the classes filtered out? What is the workaround? I can work on the
> patch if I have some guidance.
>
> This is an issue in Flink 1.9 and 1.10 and I believe 1.11 has the same
> issue but I am yet to confirm
>
> Thanks.
>
>
>



Re: [DISCUSS] FLINK-17989 - java.lang.NoClassDefFoundError org.apache.flink.fs.azure.common.hadoop.HadoopRecoverableWriter

2020-05-28 Thread Till Rohrmann
I think what needs to be done is to implement
a org.apache.flink.core.fs.RecoverableWriter for the respective file
system. Similar to HadoopRecoverableWriter and S3RecoverableWriter.
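
For reference, a skeleton of what such a writer could look like is sketched
below. The class name is made up, every method is left as a stub, and the
method set is written from memory of the 1.10 interface, so please check it
against the RecoverableWriter javadocs of the Flink version you build against.

import java.io.IOException;

import org.apache.flink.core.fs.Path;
import org.apache.flink.core.fs.RecoverableFsDataOutputStream;
import org.apache.flink.core.fs.RecoverableWriter;
import org.apache.flink.core.io.SimpleVersionedSerializer;

public class AzureBlobRecoverableWriter implements RecoverableWriter {

    @Override
    public RecoverableFsDataOutputStream open(Path path) throws IOException {
        // Start a new resumable upload for the part file at 'path'.
        throw new UnsupportedOperationException("TODO");
    }

    @Override
    public RecoverableFsDataOutputStream recover(ResumeRecoverable resumable) throws IOException {
        // Re-open an in-progress upload that was persisted in a checkpoint.
        throw new UnsupportedOperationException("TODO");
    }

    @Override
    public boolean requiresCleanupOfRecoverableState() {
        return false;
    }

    @Override
    public boolean cleanupRecoverableState(ResumeRecoverable resumable) throws IOException {
        return false;
    }

    @Override
    public RecoverableFsDataOutputStream.Committer recoverForCommit(CommitRecoverable resumable) throws IOException {
        // Turn a pending part file into a committed (visible) one.
        throw new UnsupportedOperationException("TODO");
    }

    @Override
    public SimpleVersionedSerializer<CommitRecoverable> getCommitRecoverableSerializer() {
        throw new UnsupportedOperationException("TODO");
    }

    @Override
    public SimpleVersionedSerializer<ResumeRecoverable> getResumeRecoverableSerializer() {
        throw new UnsupportedOperationException("TODO");
    }

    @Override
    public boolean supportsResume() {
        return false;
    }
}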

Cheers,
Till

On Thu, May 28, 2020 at 6:00 PM Israel Ekpo  wrote:

> Hi Till,
>
> Thanks for your feedback and guidance.
>
> It seems similar work was done for S3 filesystem where relocations were
> removed for those file system plugins.
>
> https://issues.apache.org/jira/browse/FLINK-11956
>
> It appears the same needs to be done for Azure File systems.
>
> I will attempt to connect with Klou today to collaborate to see what the
> level of effort is to add this support.
>
> Thanks.
>
>
>
> On Thu, May 28, 2020 at 11:54 AM Till Rohrmann 
> wrote:
>
>> Hi Israel,
>>
>> thanks for reaching out to the Flink community. As Guowei said, the
>> StreamingFileSink can currently only recover from faults if it writes to
>> HDFS or S3. Other file systems are currently not supported if you need
>> fault tolerance.
>>
>> Maybe Klou can tell you more about the background and what is needed to
>> make it work with other file systems. He is one of the original authors of
>> the StreamingFileSink.
>>
>> Cheers,
>> Till
>>
>> On Thu, May 28, 2020 at 4:39 PM Israel Ekpo  wrote:
>>
>>> Guowei,
>>>
>>> What do we need to do to add support for it?
>>>
>>> How do I get started on that?
>>>
>>>
>>>
>>> On Wed, May 27, 2020 at 8:53 PM Guowei Ma  wrote:
>>>
 Hi,
 I think the StreamingFileSink could not support Azure currently.
 You could find more detailed info from here[1].

 [1] https://issues.apache.org/jira/browse/FLINK-17444
 Best,
 Guowei


 Israel Ekpo  于2020年5月28日周四 上午6:04写道:

> You can assign the task to me and I will like to collaborate with
> someone to fix it.
>
> On Wed, May 27, 2020 at 5:52 PM Israel Ekpo 
> wrote:
>
>> Some users are running into issues when using Azure Blob Storage for
>> the StreamFileSink
>>
>> https://issues.apache.org/jira/browse/FLINK-17989
>>
>> The issue is because certain packages are relocated in the POM file
>> and some classes are dropped in the final shaded jar
>>
>> I have attempted to comment out the relocated and recompile the
>> source but I keep hitting roadblocks of other relocation and filtration
>> each time I update a specific pom file
>>
>> How can this be addressed so that these users can be unblocked? Why
>> are the classes filtered out? What is the workaround? I can work on the
>> patch if I have some guidance.
>>
>> This is an issue in Flink 1.9 and 1.10 and I believe 1.11 has the
>> same issue but I am yet to confirm
>>
>> Thanks.
>>
>>
>>
>


RE: History Server Not Showing Any Jobs - File Not Found?

2020-05-28 Thread Hailu, Andreas
Okay, I will look further to see if we're mistakenly using a version that's 
pre-2.6.0. However, I don't see flink-shaded-hadoop in my /lib directory for 
flink-1.9.1.

flink-dist_2.11-1.9.1.jar
flink-table-blink_2.11-1.9.1.jar
flink-table_2.11-1.9.1.jar
log4j-1.2.17.jar
slf4j-log4j12-1.7.15.jar

Are the files within /lib.

// ah

From: Chesnay Schepler 
Sent: Thursday, May 28, 2020 11:00 AM
To: Hailu, Andreas [Engineering] ; 
user@flink.apache.org
Subject: Re: History Server Not Showing Any Jobs - File Not Found?

Looks like it is indeed stuck on downloading the archive.

I searched a bit in the Hadoop JIRA and found several similar instances:
https://issues.apache.org/jira/browse/HDFS-6999
https://issues.apache.org/jira/browse/HDFS-7005
https://issues.apache.org/jira/browse/HDFS-7145

It is supposed to be fixed in 2.6.0 though :/

If hadoop is available from the HADOOP_CLASSPATH and flink-shaded-hadoop in 
/lib then you basically don't know what Hadoop version is actually being used,
which could lead to incompatibilities and dependency clashes.
If flink-shaded-hadoop 2.4/2.5 is on the classpath, maybe that is being used 
and runs into HDFS-7005.

On 28/05/2020 16:27, Hailu, Andreas wrote:
Just created a dump, here's what I see:

"Flink-HistoryServer-ArchiveFetcher-thread-1" #19 daemon prio=5 os_prio=0 
tid=0x7f93a5a2c000 nid=0x5692 runnable [0x7f934a0d3000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x0005df986960> (a sun.nio.ch.Util$2)
- locked <0x0005df986948> (a java.util.Collections$UnmodifiableSet)
- locked <0x0005df928390> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
at 
org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:201)
at 
org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:152)
- locked <0x0005ceade5e0> (a 
org.apache.hadoop.hdfs.RemoteBlockReader2)
at 
org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:781)
at 
org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:837)
- eliminated <0x0005cead3688> (a 
org.apache.hadoop.hdfs.DFSInputStream)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:897)
- locked <0x0005cead3688> (a org.apache.hadoop.hdfs.DFSInputStream)
   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:945)
- locked <0x0005cead3688> (a org.apache.hadoop.hdfs.DFSInputStream)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
org.apache.flink.runtime.fs.hdfs.HadoopDataInputStream.read(HadoopDataInputStream.java:94)
at java.io.InputStream.read(InputStream.java:101)
at org.apache.flink.util.IOUtils.copyBytes(IOUtils.java:69)
at org.apache.flink.util.IOUtils.copyBytes(IOUtils.java:91)
at 
org.apache.flink.runtime.history.FsJobArchivist.getArchivedJsons(FsJobArchivist.java:110)
at 
org.apache.flink.runtime.webmonitor.history.HistoryServerArchiveFe

Best way to "emulate" a rich Partitioner with open() and close() methods ?

2020-05-28 Thread LINZ, Arnaud
Hello,



I would like to improve the performance of my Apache Kudu sink by using the new
“KuduPartitioner” of the Kudu API to match Flink stream partitions with Kudu
partitions and thereby reduce network shuffling.

For that, I would like to implement something like

stream.partitionCustom(new KuduFlinkPartitioner<>(…)).addSink(new KuduSink(…)));

with KuduFlinkPartitioner being an implementation of
org.apache.flink.api.common.functions.Partitioner that internally makes use of
the KuduPartitioner client tool of Kudu’s API.



However, for that KuduPartitioner to work, it needs to open a connection to the
Kudu table (and close it at the end), which is obviously something that can’t be
done for each record. But there is no “AbstractRichPartitioner” with open() and
close() methods that I could use for that (the way I use them in the sink, for instance).



What is the best way to implement this ?

I thought of ThreadLocals that would be initialized during the first call to
int partition(K key, int numPartitions); but I won’t be able to close() things
nicely, as I won’t be notified of job termination.



I thought of putting those static ThreadLocals inside an “identity mapper” that
would be called just prior to the partitioning, with something like:

stream.map(richIdentiyConnectionManagerMapper).partitionCustom(new 
KuduFlinkPartitioner<>(…)).addSink(new KuduSink(…)));

with kudu connections initialized in the mapper open(), closed in the mapper 
close(), and used  in the partitioner partition().

However, it looks like an ugly hack that breaks every coding principle, but as
long as the threads are reused between the mapper and the partitioner, I think
it should work.
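
To make the described workaround concrete, here is a rough sketch of that
pattern. The class names are invented and the actual Kudu client calls are
only indicated in comments, not real API usage.

import org.apache.flink.api.common.functions.Partitioner;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class KuduPartitionHelper {

    // One connection per task thread, shared between the mapper and the
    // partitioner that run on that thread.
    static final ThreadLocal<AutoCloseable> CLIENT = new ThreadLocal<>();

    /** Identity mapper that only manages the connection lifecycle. */
    public static class ConnectionManagingMapper<T> extends RichMapFunction<T, T> {
        @Override
        public void open(Configuration parameters) {
            // CLIENT.set(... open the Kudu client / KuduPartitioner here ...);
        }

        @Override
        public T map(T value) {
            return value; // pass-through
        }

        @Override
        public void close() throws Exception {
            AutoCloseable client = CLIENT.get();
            if (client != null) {
                client.close();
                CLIENT.remove();
            }
        }
    }

    /** Partitioner that reuses whatever the mapper opened on the same thread. */
    public static class KuduFlinkPartitioner<K> implements Partitioner<K> {
        @Override
        public int partition(K key, int numPartitions) {
            // Ask Kudu's KuduPartitioner (via CLIENT.get()) for the target
            // partition and map it onto [0, numPartitions).
            return Math.floorMod(key.hashCode(), numPartitions); // placeholder
        }
    }
}

As the paragraph above says, this relies on the mapper and the partitioner
being executed on the same thread, which is an implementation detail rather
than a documented guarantee.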



Is there a better way to do this ?



Best regards,

Arnaud








The integrity of this message cannot be guaranteed on the Internet. The company 
that sent this message cannot therefore be held liable for its content nor 
attachments. Any unauthorized use or dissemination is prohibited. If you are 
not the intended recipient of this message, then please delete it and notify 
the sender.


Custom trigger to trigger for late events

2020-05-28 Thread Poornapragna Ts
Hi,

I have a simple requirement where i want to have 10 second window with
allow late events upto 1 hour.

Existing TumblingEventTimeWindows with EventTimeTrigger will work for this.

But the EventTimeTrigger fires for every incoming event after the watermark
has passed the window's max time. I don't want this behaviour. Even for late
events, I want to fire only every 10 seconds.

For this, I thought of writing a custom trigger, which will be similar to
EventTimeTrigger, but instead of firing on every late event, it will register
a timer in the onElement method for the upcoming 10th second.
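
For what it is worth, a rough, untested sketch of such a trigger (assuming
event-time tumbling windows with allowed lateness configured on the window
operator) could look like the one below; the late-fire timers it registers
are exactly what the questions further down are about.

import org.apache.flink.streaming.api.windowing.triggers.Trigger;
import org.apache.flink.streaming.api.windowing.triggers.TriggerResult;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

public class PeriodicLateFiringTrigger extends Trigger<Object, TimeWindow> {

    private static final long LATE_FIRE_INTERVAL = 10_000L; // 10 seconds

    @Override
    public TriggerResult onElement(Object element, long timestamp, TimeWindow window, TriggerContext ctx) throws Exception {
        if (window.maxTimestamp() <= ctx.getCurrentWatermark()) {
            // Late element: schedule a firing at the next 10-second boundary
            // instead of firing once per late element.
            long watermark = ctx.getCurrentWatermark();
            long nextFire = watermark - (watermark % LATE_FIRE_INTERVAL) + LATE_FIRE_INTERVAL;
            ctx.registerEventTimeTimer(nextFire);
            return TriggerResult.CONTINUE;
        } else {
            ctx.registerEventTimeTimer(window.maxTimestamp());
            return TriggerResult.CONTINUE;
        }
    }

    @Override
    public TriggerResult onEventTime(long time, TimeWindow window, TriggerContext ctx) {
        // Called only for timers this trigger registered for this window.
        return TriggerResult.FIRE;
    }

    @Override
    public TriggerResult onProcessingTime(long time, TimeWindow window, TriggerContext ctx) {
        return TriggerResult.CONTINUE;
    }

    @Override
    public void clear(TimeWindow window, TriggerContext ctx) {
        // Only the end-of-window timer is deleted here; to also clean up the
        // late-fire timers, they would have to be tracked in partitioned
        // trigger state (which relates to questions 1 to 3 below).
        ctx.deleteEventTimeTimer(window.maxTimestamp());
    }
}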

With this setup, I have some questions.

1) When we register timers with the context, is it compulsory to delete them
in the clear() call?

2) Will these timers be stored in fault-tolerant state, so that deleting them
is a must?

3) Will it be problematic if I delete a timer for an unregistered time (i.e.,
if I call delete for a time T1 that I had not registered before)?

4) Can this be achieved without implementing a custom trigger?

5) Let's say a late event comes at second 255, so I register a timer to
trigger at 260 (the next 10th second). If a failure happens before that time
and we restart from the checkpoint, will it trigger when the watermark reaches
260? That is, will the timer be recovered when we restart from a failure?

Thanks,
Poornapragna T S


Flink Iterator Functions

2020-05-28 Thread Roderick Vincent
Hi,

I am brand new to Apache Flink, so please excuse any silly questions. I have
an iterator function defined as below and add it as a source to a Flink
stream. But when I try to pass configuration information to it (via a Spring
env), I notice that one of the threads calls hasNext() on an object that is
not the same instance, and the passed information is null. Something is
constructing it, but what is strange is that if I add a default constructor, I
do not see it being called by the thread that has the null data, so I am
wondering what is going on. Any ideas? How do we pass configuration
information to these functions? Any help would be appreciated.

Thanks,
Rick

@Public
public class NodeSource extends
FromIteratorFunction> {


private static final long serialVersionUID = 1L;

public NodeSource(ArangoDBSource iterator) {
super(iterator);
}

}
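
For what it is worth, a minimal sketch of the usual way around this: the
source instance built on the client is serialized and re-created on the
TaskManagers, so anything injected by Spring into the original object is lost.
Carrying the settings as serializable constructor state survives that round
trip. The class and field names below are made up, not the ArangoDB code
above.

import java.io.Serializable;
import java.util.Iterator;
import java.util.NoSuchElementException;

import org.apache.flink.streaming.api.functions.source.FromIteratorFunction;

public class ConfiguredNodeSource extends FromIteratorFunction<String> {

    private static final long serialVersionUID = 1L;

    public ConfiguredNodeSource(String endpoint, String database) {
        super(new ConfiguredIterator(endpoint, database));
    }

    // The iterator itself must be Serializable, since it is shipped with the job graph.
    private static class ConfiguredIterator implements Iterator<String>, Serializable {

        private static final long serialVersionUID = 1L;

        private final String endpoint;
        private final String database;

        ConfiguredIterator(String endpoint, String database) {
            this.endpoint = endpoint;
            this.database = database;
        }

        @Override
        public boolean hasNext() {
            // endpoint/database are available here on the TaskManager
            return false; // placeholder
        }

        @Override
        public String next() {
            throw new NoSuchElementException();
        }
    }
}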


Dropping messages based on timestamp.

2020-05-28 Thread Joe Malt
Hi,

I'm working on a custom TimestampAssigner which will do different things
depending on the value of the extracted timestamp. One of the actions I
want to take is to drop messages entirely if their timestamp meets certain
criteria.

Of course there's no direct way to do this in the TimestampAssigner, but
I'd like to keep this logic as close to the TimestampAssigner as possible
since this is going to be a pluggable component used in a bunch of
different Flink apps.

What would be the best way to implement this?

Thanks,
Joe


Re: Re: Flink Window with multiple trigger condition

2020-05-28 Thread aj
Hi,

I have implemented the below solution and it's working fine, but the biggest
problem with it is that if no event comes for the user within 30 min, I am not
able to trigger, because I am checking the time diff against upcoming events.
So it only triggers when the next event arrives, but I want it to trigger
right after 30 mins.

So please help me improve this and solve the above problem.



public class DemandSessionFlatMap extends
RichFlatMapFunction<Tuple2<String, GenericRecord>, DemandSessionSummaryTuple> {

private static final Logger LOGGER =
LoggerFactory.getLogger(DemandSessionFlatMap.class);

private transient ValueState<Tuple3<String, Long, Long>>
timeState; // maintain session_id, start time and end time
private transient MapState<String, DemandSessionSummaryTuple>
sessionSummary; // map of hex9 -> summary tuple

@Override
public void open(Configuration config) {

ValueStateDescriptor<Tuple3<String, Long, Long>> timeDescriptor =
new ValueStateDescriptor<>(
"time_state", // the state name
TypeInformation.of(new TypeHint<Tuple3<String, Long, Long>>() {
}), // type information
Tuple3.of(null, 0L, 0L)); // default value of the state, if nothing was set
timeState = getRuntimeContext().getState(timeDescriptor);

MapStateDescriptor<String, DemandSessionSummaryTuple> descriptor =
new MapStateDescriptor<>("demand_session",
TypeInformation.of(new TypeHint<String>() {
}), TypeInformation.of(new
TypeHint<DemandSessionSummaryTuple>() {
}));
sessionSummary = getRuntimeContext().getMapState(descriptor);

}

@Override
public void flatMap(Tuple2<String, GenericRecord> recordTuple2,
Collector<DemandSessionSummaryTuple> collector) throws Exception {
GenericRecord record = recordTuple2.f1;
String event_name = record.get("event_name").toString();
long event_ts = (Long) record.get("event_ts");
Tuple3<String, Long, Long> currentTimeState = timeState.value();

if (event_name.equals("search_list_keyless") &&
currentTimeState.f1 == 0) {
currentTimeState.f1 = event_ts;
String demandSessionId = UUID.randomUUID().toString();
currentTimeState.f0 = demandSessionId;
}

long timeDiff = event_ts - currentTimeState.f1;

if (event_name.equals("keyless_start_trip") || timeDiff >= 180) {
Tuple3<String, Long, Long> finalCurrentTimeState = currentTimeState;
sessionSummary.entries().forEach( tuple ->{
String key = tuple.getKey();
DemandSessionSummaryTuple sessionSummaryTuple =
tuple.getValue();
try {
sessionSummaryTuple.setEndTime(finalCurrentTimeState.f2);
collector.collect(sessionSummaryTuple);
} catch (Exception e) {
e.printStackTrace();
}

});
timeState.clear();
sessionSummary.clear();
currentTimeState = timeState.value();
}

if (event_name.equals("search_list_keyless") &&
currentTimeState.f1 == 0) {
currentTimeState.f1 = event_ts;
String demandSessionId = UUID.randomUUID().toString();
currentTimeState.f0 = demandSessionId;
}
currentTimeState.f2 = event_ts;

if (currentTimeState.f1 > 0) {
String search_hex9 = record.get("search_hex9") != null ?
record.get("search_hex9").toString() : null;
DemandSessionSummaryTuple currentTuple =
sessionSummary.get(search_hex9) != null ?
sessionSummary.get(search_hex9) : new DemandSessionSummaryTuple();

if (sessionSummary.get(search_hex9) == null) {
currentTuple.setSearchHex9(search_hex9);
currentTuple.setUserId(recordTuple2.f0);
currentTuple.setStartTime(currentTimeState.f1);
currentTuple.setDemandSessionId(currentTimeState.f0);
}

if (event_name.equals("search_list_keyless")) {
currentTuple.setTotalSearch(currentTuple.getTotalSearch() + 1);
SearchSummaryCalculation(record, currentTuple);
}
sessionSummary.put(search_hex9, currentTuple);
}
timeState.update(currentTimeState);
}
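
For the 30-minute problem specifically, a rough sketch of the timer-based
alternative (in the spirit of the KeyedProcessFunction approach suggested in
the quoted reply below) might look like the following. The class name, the
state layout and the simplified String output are illustrative, not taken
from the job above.

import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class InactivitySessionFunction
        extends KeyedProcessFunction<String, Tuple2<String, GenericRecord>, String> {

    private static final long GAP_MS = 30 * 60 * 1000L;

    private transient ValueState<Long> lastTimerState;

    @Override
    public void open(Configuration parameters) {
        lastTimerState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("last-timer", Long.class));
    }

    @Override
    public void processElement(Tuple2<String, GenericRecord> value, Context ctx, Collector<String> out) throws Exception {
        // ... update the per-session summary state here ...

        // Push the inactivity timer forward on every event.
        Long previous = lastTimerState.value();
        if (previous != null) {
            ctx.timerService().deleteProcessingTimeTimer(previous);
        }
        long newTimer = ctx.timerService().currentProcessingTime() + GAP_MS;
        ctx.timerService().registerProcessingTimeTimer(newTimer);
        lastTimerState.update(newTimer);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        // No event for 30 minutes: emit the session summary and clear the state.
        out.collect("session closed at " + timestamp);
        lastTimerState.clear();
    }
}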






On Sun, May 24, 2020 at 10:57 PM Yun Gao  wrote:

> Hi,
>
>First sorry that I'm not expert on Window and please correct me if
> I'm wrong, but from my side, it seems the assigner might also be a problem
> in addition to the trigger: currently Flink window assigner should be all
> based on time (processing time or event time), and it might be hard to
> implement an event-driven window assigner that start to assign elements to
> a window after received some elements.
>
>   What comes to me is that a possible alternative method is to use the
> low-level *KeyedProcessFunction* directly:  you may register a timer 30
> mins later when received the "*search*" event and write the time of
> search event into the state. Then for the following events, they will be
> saved to the state s

Flink Elastic Sink

2020-05-28 Thread aj
Hello All,

I am getting many events in Kafka, and I have written a Flink job that sinks
the Avro records from Kafka to S3 in Parquet format.

Now, I want to sink these records into Elasticsearch, but the only challenge
is that I want to sink records into time-based indices. Basically, in
Elasticsearch, I want to create a per-day index with the date as the suffix.
So in the Flink streaming job, if I create an ES sink, how can I make the sink
start writing to a new index when the first event of the day arrives?

Thanks,
Anuj.








Re: [DISCUSS] FLINK-17989 - java.lang.NoClassDefFoundError org.apache.flink.fs.azure.common.hadoop.HadoopRecoverableWriter

2020-05-28 Thread Israel Ekpo
Thanks Till.

I will take a look at that tomorrow and let you know if I hit any
roadblocks.

On Thu, May 28, 2020 at 12:11 PM Till Rohrmann  wrote:

> I think what needs to be done is to implement
> a org.apache.flink.core.fs.RecoverableWriter for the respective file
> system. Similar to HadoopRecoverableWriter and S3RecoverableWriter.
>
> Cheers,
> Till
>
> On Thu, May 28, 2020 at 6:00 PM Israel Ekpo  wrote:
>
>> Hi Till,
>>
>> Thanks for your feedback and guidance.
>>
>> It seems similar work was done for S3 filesystem where relocations were
>> removed for those file system plugins.
>>
>> https://issues.apache.org/jira/browse/FLINK-11956
>>
>> It appears the same needs to be done for Azure File systems.
>>
>> I will attempt to connect with Klou today to collaborate to see what the
>> level of effort is to add this support.
>>
>> Thanks.
>>
>>
>>
>> On Thu, May 28, 2020 at 11:54 AM Till Rohrmann 
>> wrote:
>>
>>> Hi Israel,
>>>
>>> thanks for reaching out to the Flink community. As Guowei said, the
>>> StreamingFileSink can currently only recover from faults if it writes to
>>> HDFS or S3. Other file systems are currently not supported if you need
>>> fault tolerance.
>>>
>>> Maybe Klou can tell you more about the background and what is needed to
>>> make it work with other file systems. He is one of the original authors of
>>> the StreamingFileSink.
>>>
>>> Cheers,
>>> Till
>>>
>>> On Thu, May 28, 2020 at 4:39 PM Israel Ekpo 
>>> wrote:
>>>
 Guowei,

 What do we need to do to add support for it?

 How do I get started on that?



 On Wed, May 27, 2020 at 8:53 PM Guowei Ma  wrote:

> Hi,
> I think the StreamingFileSink could not support Azure currently.
> You could find more detailed info from here[1].
>
> [1] https://issues.apache.org/jira/browse/FLINK-17444
> Best,
> Guowei
>
>
> Israel Ekpo  于2020年5月28日周四 上午6:04写道:
>
>> You can assign the task to me and I will like to collaborate with
>> someone to fix it.
>>
>> On Wed, May 27, 2020 at 5:52 PM Israel Ekpo 
>> wrote:
>>
>>> Some users are running into issues when using Azure Blob Storage for
>>> the StreamFileSink
>>>
>>> https://issues.apache.org/jira/browse/FLINK-17989
>>>
>>> The issue is because certain packages are relocated in the POM file
>>> and some classes are dropped in the final shaded jar
>>>
>>> I have attempted to comment out the relocated and recompile the
>>> source but I keep hitting roadblocks of other relocation and filtration
>>> each time I update a specific pom file
>>>
>>> How can this be addressed so that these users can be unblocked? Why
>>> are the classes filtered out? What is the workaround? I can work on the
>>> patch if I have some guidance.
>>>
>>> This is an issue in Flink 1.9 and 1.10 and I believe 1.11 has the
>>> same issue but I am yet to confirm
>>>
>>> Thanks.
>>>
>>>
>>>
>>


Re: Stateful functions Harness

2020-05-28 Thread Boris Lublinsky
Also, I have noticed that a few statefun jars, including statefun-flink-core, 
statefun-flink-io, and statefun-flink-harness, are built for Scala 2.11. Is it 
possible to create versions of those for Scala 2.12?

> On May 27, 2020, at 3:15 PM, Seth Wiesman  wrote:
> 
> Hi Boris, 
> 
> Example usage of flink sources and sink is available in the documentation[1]. 
> 
> [1] 
> https://ci.apache.org/projects/flink/flink-statefun-docs-release-2.0/io-module/flink-connectors.html
>  
> 
> On Wed, May 27, 2020 at 1:08 PM Boris Lublinsky 
> mailto:boris.lublin...@lightbend.com>> wrote:
> Thats not exactly the usage question that I am asking
> When I am writing IO module I have to write Ingress and Egress spec.
> You have an example for Kafka, which looks like
> 
> def getIngressSpec: IngressSpec[GreetRequest] =
>   KafkaIngressBuilder.forIdentifier(GREETING_INGRESS_ID)
> .withKafkaAddress(kafkaAddress)
> .withTopic("names")
> .withDeserializer(classOf[GreetKafkaDeserializer])
> .withProperty(ConsumerConfig.GROUP_ID_CONFIG, "greetings")
> .build
> 
> def getEgressSpec: EgressSpec[GreetResponse] =
>   KafkaEgressBuilder.forIdentifier(GREETING_EGRESS_ID)
> .withKafkaAddress(kafkaAddress)
> .withSerializer(classOf[GreetKafkaSerializer])
> .build
> How is it going to look if I am using SourceSinkModule?
> Do I just specify stream names? Something else?
> 
> 
> 
> 
> 
>> On May 27, 2020, at 11:29 AM, Tzu-Li (Gordon) Tai > > wrote:
>> 
>> 
>> 
>> On Thu, May 28, 2020, 12:19 AM Boris Lublinsky 
>> mailto:boris.lublin...@lightbend.com>> wrote:
>> I think I figured this out.
>> The project seems to be missing
>> 
>> resources 
>> /META-INF
>>  
>> /services
>>  directory, which should contain services
>> 
>> Yes, the functions / ingresses / regresses etc. are not discoverable if the 
>> service file isnt present in the classpath.
>> 
>> For the examples, if you are running it straight from the repo, should all 
>> have that service file defined and therefore readily runnable.
>> 
>> If you are creating your own application project, you'll have to add that 
>> yourself.
>> 
>> 
>> Another question:
>> I see org.apache.flink.statefun.flink.io.datastream.SourceSinkModule
>> 
>> Class, which I think allows to use existing data streams as ingress/egress.
>> 
>> Are there any examples of its usage
>> 
>> On the Harness class, there is a withFlinkSourceFunction method in which you 
>> can directly add a Flink source function as the ingress.
>> 
>> If you want to use that directly in a normal application (not just execution 
>> in IDE with the Harness), you can define your ingesses/egresses by binding 
>> SourceFunctionSpec / SinkFunctionSpec.
>> Please see how they are being used in the Harness class for examples.
>> 
>> Gordon
>> 
>> 
>> 
>>> On May 27, 2020, at 11:10 AM, Tzu-Li (Gordon) Tai >> > wrote:
>>> 
>>> Hi,
>>> 
>>> The example is working fine on my side (also using IntelliJ).
>>> This could most likely be a problem with your project setup in the IDE, 
>>> where the classpath isn't setup correctly.
>>> 
>>> What do you see when you right click on the statefun-flink-harness-example 
>>> directory (in the IDE) --> Open Module Settings, and then under the 
>>> "Sources" / "Dependencies" tab?
>>> Usually this should all be automatically setup correctly when importing the 
>>> project.
>>> 
>>> Gordon
>>> 
>>> On Wed, May 27, 2020 at 11:46 PM Boris Lublinsky 
>>> mailto:boris.lublin...@lightbend.com>> 
>>> wrote:
>>> The project 
>>> https://github.com/apache/flink-statefun/tree/release-2.0/statefun-examples/statefun-flink-harness-example
>>>  
>>> 
>>> Does not work in Intellij.
>>> 
>>> The problem is that when running in Intellij, method public static Modules 
>>> loadFromClassPath() {
>>> Does not pick up classes, which are local in Intellij
>>> 
>>> Any work arounds?
>>> 
>>> 
>>> 
>>> 
 On May 22, 2020, at 12:03 AM, Tzu-Li (Gordon) Tai >>> > wrote:
 
 Hi,
 
 Sorry, I need to correct my comment on using the Kafka ingress / egress 
 with the Harness.
 
 That is actually doable, by adding an extra dependency to 
 `statefun-flink-distribution` in your Harness program.
 That pulls in all the other required dependencies required by the Kafka 
 ingress / egress, such as the source / sink providers and Flink Kafka 
 connectors.
 
 Cheers,
 Gordon
 
 On Fri, May 22, 2020 at 12:04 PM Tzu-Li (Gordon) Tai >>> 

Re: Apache Flink - Question about application restart

2020-05-28 Thread M Singh
 Thanks Till - in the case of a restart of the Flink master, I believe the job id 
will be different.  Thanks
On Thursday, May 28, 2020, 11:33:38 AM EDT, Till Rohrmann 
 wrote:  
 
 Hi,
Yarn won't resubmit the job. In case of a process failure where Yarn restarts 
the Flink Master, the Master will recover the submitted jobs from a persistent 
storage system.
Cheers,
Till
On Thu, May 28, 2020 at 4:05 PM M Singh  wrote:

 Hi Till/Zhu/Yang:  Thanks for your replies.
So just to clarify - the job id remains same if the job restarts have not been 
exhausted.  Does Yarn also resubmit the job in case of failures and if so, then 
is the job id different.
ThanksOn Wednesday, May 27, 2020, 10:05:40 AM EDT, Till Rohrmann 
 wrote:  
 
 Hi,
if you submit the same job multiple times, then it will get every time a 
different JobID assigned. For Flink, different job submissions are considered 
to be different jobs. Once a job has been submitted, it will keep the same 
JobID which is important in order to retrieve the checkpoints associated with 
this job.
Cheers,
Till
On Tue, May 26, 2020 at 12:42 PM M Singh  wrote:

 Hi Zhu Zhu:
I have another clarification - it looks like if I run the same app multiple times, 
its job id changes.  So it looks like even though the graph is the same, the 
job id is not dependent on the job graph only, since with different runs of the 
same app it is not the same.
Please let me know if I've missed anything.
Thanks
On Monday, May 25, 2020, 05:32:39 PM EDT, M Singh  
wrote:  
 
  Hi Zhu Zhu:
Just to clarify - from what I understand, EMR also has by default restart times 
(I think it is 3). So if the EMR restarts the job - the job id is the same 
since the job graph is the same. 
Thanks for the clarification.
On Monday, May 25, 2020, 04:01:17 AM EDT, Yang Wang  
wrote:  
 
 Just share some additional information.
When deploying a Flink application on Yarn and it has exhausted the restart policy, 
then the whole application will fail. If you start another instance (Yarn 
application), even if high availability is configured, we could not recover 
from the latest checkpoint because the clusterId (i.e. applicationId) has changed.

Best,
Yang
Zhu Zhu  于2020年5月25日周一 上午11:17写道:

Hi M,
Regarding your questions:
1. yes. The id is fixed once the job graph is generated.
2. yes
Regarding yarn mode:
1. the job id keeps the same because the job graph will be generated once 
at client side and persist in DFS for reuse
2. yes if high availability is enabled

Thanks,
Zhu Zhu
M Singh  于2020年5月23日周六 上午4:06写道:

Hi Flink Folks:
If I have a Flink Application with 10 restarts, if it fails and restarts, then:
1. Does the job have the same id ?
2. Does the automatically restarting application, pickup from the last 
checkpoint ? I am assuming it does but just want to confirm.
Also, if it is running on AWS EMR I believe EMR/Yarn is configured to restart 
the job 3 times (after it has exhausted it's restart policy) .  If that is the 
case:
1. Does the job get a new id ? I believe it does, but just want to confirm.
2. Does the Yarn restart honor the last checkpoint ?  I believe, it 
does not, but is there a way to make it restart from the last checkpoint of the 
failed job (after it has exhausted its restart policy) ?
Thanks




  
  

Question on stream joins

2020-05-28 Thread Sudan S
Hi ,

I have two usecases

1. I have two streams which `leftSource` and `rightSource` which i want to
join without partitioning over a window and find the difference of count of
elements of leftSource and rightSource and emit the result of difference.
Which is the appropriate join function ican use ?

join/cogroup/connect.

2. I want to replicate the same behaviour over a keyed source. Basically
leftSource and rightSource are joined by a partition key.

Please let me know which is the appropriate join operator for this use case.
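
For the keyed variant (2), one possible shape is a windowed coGroup, since it
hands you both sides' elements per key and window even when one side is empty.
The Tuple2 records, keys and window size below are made up for illustration.

import org.apache.flink.api.common.functions.CoGroupFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class CoGroupCountDiffSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Long>> leftSource =
                env.fromElements(Tuple2.of("k1", 1L), Tuple2.of("k1", 1L), Tuple2.of("k2", 1L));
        DataStream<Tuple2<String, Long>> rightSource =
                env.fromElements(Tuple2.of("k1", 1L), Tuple2.of("k2", 1L));

        KeySelector<Tuple2<String, Long>, String> byKey =
                new KeySelector<Tuple2<String, Long>, String>() {
                    @Override
                    public String getKey(Tuple2<String, Long> value) {
                        return value.f0;
                    }
                };

        leftSource.coGroup(rightSource)
                .where(byKey)
                .equalTo(byKey)
                .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
                .apply(new CoGroupFunction<Tuple2<String, Long>, Tuple2<String, Long>, Tuple2<String, Long>>() {
                    @Override
                    public void coGroup(Iterable<Tuple2<String, Long>> left,
                                        Iterable<Tuple2<String, Long>> right,
                                        Collector<Tuple2<String, Long>> out) {
                        long l = 0, r = 0;
                        String key = "";
                        for (Tuple2<String, Long> e : left) { l++; key = e.f0; }
                        for (Tuple2<String, Long> e : right) { r++; key = e.f0; }
                        // difference of the two sides' counts for this key and window
                        out.collect(Tuple2.of(key, l - r));
                    }
                })
                .print();

        env.execute("cogroup-count-diff-sketch");
    }
}

For the non-keyed variant (1) there is no join key, so a union of the two
streams into a windowAll, or a connect with shared state, is usually closer
to what is needed.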

-- 
*"The information contained in this e-mail and any accompanying documents 
may contain information that is confidential or otherwise protected from 
disclosure. If you are not the intended recipient of this message, or if 
this message has been addressed to you in error, please immediately alert 
the sender by replying to this e-mail and then delete this message, 
including any attachments. Any dissemination, distribution or other use of 
the contents of this message by anyone other than the intended recipient is 
strictly prohibited. All messages sent to and from this e-mail address may 
be monitored as permitted by applicable law and regulations to ensure 
compliance with our internal policies and to protect our business."*


RE: History Server Not Showing Any Jobs - File Not Found?

2020-05-28 Thread Hailu, Andreas
May I also ask what version of flink-hadoop you're using and the number of jobs 
you're storing the history for? As of writing we have roughly 101,000 
application history files. I'm curious to know if we're encountering some kind 
of resource problem.

// ah

From: Hailu, Andreas [Engineering]
Sent: Thursday, May 28, 2020 12:18 PM
To: 'Chesnay Schepler' ; user@flink.apache.org
Subject: RE: History Server Not Showing Any Jobs - File Not Found?

Okay, I will look further to see if we're mistakenly using a version that's 
pre-2.6.0. However, I don't see flink-shaded-hadoop in my /lib directory for 
flink-1.9.1.

flink-dist_2.11-1.9.1.jar
flink-table-blink_2.11-1.9.1.jar
flink-table_2.11-1.9.1.jar
log4j-1.2.17.jar
slf4j-log4j12-1.7.15.jar

Are the files within /lib.

// ah

From: Chesnay Schepler mailto:ches...@apache.org>>
Sent: Thursday, May 28, 2020 11:00 AM
To: Hailu, Andreas [Engineering] 
mailto:andreas.ha...@ny.email.gs.com>>; 
user@flink.apache.org
Subject: Re: History Server Not Showing Any Jobs - File Not Found?

Looks like it is indeed stuck on downloading the archive.

I searched a bit in the Hadoop JIRA and found several similar instances:
https://issues.apache.org/jira/browse/HDFS-6999
https://issues.apache.org/jira/browse/HDFS-7005
https://issues.apache.org/jira/browse/HDFS-7145

It is supposed to be fixed in 2.6.0 though :/

If hadoop is available from the HADOOP_CLASSPATH and flink-shaded-hadoop in 
/lib then you basically don't know what Hadoop version is actually being used,
which could lead to incompatibilities and dependency clashes.
If flink-shaded-hadoop 2.4/2.5 is on the classpath, maybe that is being used 
and runs into HDFS-7005.

On 28/05/2020 16:27, Hailu, Andreas wrote:
Just created a dump, here's what I see:

"Flink-HistoryServer-ArchiveFetcher-thread-1" #19 daemon prio=5 os_prio=0 
tid=0x7f93a5a2c000 nid=0x5692 runnable [0x7f934a0d3000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x0005df986960> (a sun.nio.ch.Util$2)
- locked <0x0005df986948> (a java.util.Collections$UnmodifiableSet)
- locked <0x0005df928390> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.readChannelFully(PacketReceiver.java:258)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:209)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
at 
org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:102)
at 
org.apache.hadoop.hdfs.RemoteBlockReader2.readNextPacket(RemoteBlockReader2.java:201)
at 
org.apache.hadoop.hdfs.RemoteBlockReader2.read(RemoteBlockReader2.java:152)
- locked <0x0005ceade5e0> (a 
org.apache.hadoop.hdfs.RemoteBlockReader2)
at 
org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:781)
at 
org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:837)
- eliminated <0x0005cead3688> (a 
org.apache.hadoop.hdfs.DFSInputStream)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:897)
- locked <0x0005cead3688> (a org.apache.hadoop.hdfs.DFSInputStream)
   at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:945)
- locked <0x0005cead3688> (a org.apache.hadoop.hdfs.DFSInputStream

Re: ClusterClientFactory selection

2020-05-28 Thread Yang Wang
You could find more information about deployment target here[1]. As you
mentioned,
it is not defined in the flink-conf.yaml by default.

For the code, it is defined in flink-core/DeploymentOptions.

[1].
https://ci.apache.org/projects/flink/flink-docs-master/ops/cli.html#deployment-targets

Best,
Yang

M Singh  于2020年5月28日周四 下午10:34写道:

> HI Kostas/Yang/Lake:
>
> I am looking at aws emr and did not see the execution.target in the
> flink-conf.yaml file under flink/conf directory.
> Is it defined in another place ?
>
> I also did search in the current flink source code and did find mention of
> it in the md files but not in any property file or the flink-yarn sub
> module.
>
> Please let me know if I am missing anything.
>
> Thanks
>
> On Wednesday, May 27, 2020, 03:51:28 AM EDT, Kostas Kloudas <
> kklou...@gmail.com> wrote:
>
>
> Hi Singh,
>
> The only thing to add to what Yang said is that the "execution.target"
> configuration option (in the config file) is also used for the same
> purpose from the execution environments.
>
> Cheers,
> Kostas
>
> On Wed, May 27, 2020 at 4:49 AM Yang Wang  wrote:
> >
> > Hi M Singh,
> >
> > The Flink CLI picks up the correct ClusterClientFactory via java SPI. You
> > could check YarnClusterClientFactory#isCompatibleWith for how it is
> activated.
> > The cli option / configuration is "-e/--executor" or execution.target
> (e.g. yarn-per-job).
> >
> >
> > Best,
> > Yang
> >
> > M Singh  于2020年5月26日周二 下午6:45写道:
> >>
> >> Hi:
> >>
> >> I wanted to find out which parameter/configuration allows flink cli
> pick up the appropriate cluster client factory (especially in the yarn
> mode).
> >>
> >> Thanks
>


Re: Cannot start native K8s

2020-05-28 Thread Yang Wang
A quick update on this issue.

The root cause of this issue is a compatibility problem between the fabric8
kubernetes-client and Java 8u252[1]. We have bumped the fabric8
kubernetes-client version from 4.5.2 to 4.9.2 in the master and release-1.11
branches. Now users can deploy Flink on K8s natively with Java 8u252.

If you really cannot use the latest Flink version, you can set the environment
variable "HTTP2_DISABLE=true" on the Flink client, jobmanager, and taskmanager
side.

[1]. https://github.com/fabric8io/kubernetes-client/issues/2212

Best,
Yang

Yang Wang  于2020年5月11日周一 上午11:51写道:

> Glad to hear that you could deploy the Flink cluster on K8s natively.
> Thanks for
> trying the in-preview feature and give your feedback.
>
>
> Moreover, i want to give a very simple conclusion here. Currently, because
> of the
> compatibility issue of fabric8 kubernetes-client, the native K8s
> integration have the
> following known limitation.
> * For jdk 8u252, the native k8s integration could only work on kubernetes
> v1.16 and
> lower versions.
> * For other jdk versions(e.g. 8u242, jdk11), i am not aware of the same
> issues. The native
> K8s integration works well.
>
>
> Best,
> Yang
>
> Dongwon Kim  于2020年5月9日周六 上午11:46写道:
>
>> Hi Yang,
>>
>> Oops, I forget to copy /etc/kube/admin.conf to $HOME/.kube/config so that
>> the current user account can access to K8s.
>> Now that I copied it, I found that kubernetes-session.sh is working fine.
>> Thanks very much!
>>
>> Best,
>> Dongwon
>>
>> [flink@DAC-E04-W06 ~]$ kubernetes-session.sh
>> 2020-05-09 12:43:49,961 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: jobmanager.rpc.address, DAC-E04-W06
>> 2020-05-09 12:43:49,962 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: jobmanager.rpc.port, 6123
>> 2020-05-09 12:43:49,962 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: jobmanager.heap.size, 1024m
>> 2020-05-09 12:43:49,962 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: taskmanager.memory.process.size, 24g
>> 2020-05-09 12:43:49,963 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: taskmanager.numberOfTaskSlots, 24
>> 2020-05-09 12:43:49,963 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: parallelism.default, 1
>> 2020-05-09 12:43:49,963 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: high-availability, zookeeper
>> 2020-05-09 12:43:49,963 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: high-availability.zookeeper.path.root, /flink
>> 2020-05-09 12:43:49,964 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: high-availability.storageDir, hdfs:///user/flink/ha/
>> 2020-05-09 12:43:49,964 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: high-availability.zookeeper.quorum, DAC-E04-W06:2181
>> 2020-05-09 12:43:49,965 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: jobmanager.execution.failover-strategy, region
>> 2020-05-09 12:43:49,965 INFO
>>  org.apache.flink.configuration.GlobalConfiguration- Loading
>> configuration property: rest.port, 8082
>> 2020-05-09 12:43:51,122 INFO
>>  org.apache.flink.runtime.clusterframework.TaskExecutorProcessUtils  - The
>> derived from fraction jvm overhead memory (2.400gb (2576980416 bytes)) is
>> greater than its max value 1024.000mb (1073741824 bytes), max value will be
>> used instead
>> 2020-05-09 12:43:51,123 INFO
>>  org.apache.flink.runtime.clusterframework.TaskExecutorProcessUtils  - The
>> derived from fraction network memory (2.291gb (2459539902 bytes)) is
>> greater than its max value 1024.000mb (1073741824 bytes), max value will be
>> used instead
>> 2020-05-09 12:43:51,131 INFO
>>  org.apache.flink.kubernetes.utils.KubernetesUtils - Kubernetes
>> deployment requires a fixed port. Configuration blob.server.port will be
>> set to 6124
>> 2020-05-09 12:43:51,131 INFO
>>  org.apache.flink.kubernetes.utils.KubernetesUtils - Kubernetes
>> deployment requires a fixed port. Configuration taskmanager.rpc.port will
>> be set to 6122
>> 2020-05-09 12:43:51,134 INFO
>>  org.apache.flink.kubernetes.utils.KubernetesUtils - Kubernetes
>> deployment requires a fixed port. Configuration
>> high-availability.jobmanager.port will be set to 6123
>> 2020-05-09 12:43:52,167 INFO
>>  org.apache.flink.kubernetes.KubernetesClusterDescriptor   - Create
>> flink session cluster flink-cluster-4a82d41b-af15-4205-8a44-62351e270242
>> successfully, JobManager Web Interface: http://cluster-endpoint:31

Re: Apache Flink - Question about application restart

2020-05-28 Thread Zhu Zhu
Restarting the Flink master does not change the jobId within one Yarn
application.
To put it simply, in a Yarn application that runs a Flink cluster, the job id
of a job does not change once the job is submitted.
You can even submit a Flink application multiple times to that cluster (if
it is in session mode), but each submission will be treated as a different job
and will have a different job id.

Thanks,
Zhu Zhu

M Singh  于2020年5月29日周五 上午4:59写道:

> Thanks Till - in the case of restart of flink master - I believe the jobid
> will be different.  Thanks
>
> On Thursday, May 28, 2020, 11:33:38 AM EDT, Till Rohrmann <
> trohrm...@apache.org> wrote:
>
>
> Hi,
>
> Yarn won't resubmit the job. In case of a process failure where Yarn
> restarts the Flink Master, the Master will recover the submitted jobs from
> a persistent storage system.
>
> Cheers,
> Till
>
> On Thu, May 28, 2020 at 4:05 PM M Singh  wrote:
>
> Hi Till/Zhu/Yang:  Thanks for your replies.
>
> So just to clarify - the job id remains same if the job restarts have not
> been exhausted.  Does Yarn also resubmit the job in case of failures and if
> so, then is the job id different.
>
> Thanks
> On Wednesday, May 27, 2020, 10:05:40 AM EDT, Till Rohrmann <
> trohrm...@apache.org> wrote:
>
>
> Hi,
>
> if you submit the same job multiple times, then it will get every time a
> different JobID assigned. For Flink, different job submissions are
> considered to be different jobs. Once a job has been submitted, it will
> keep the same JobID which is important in order to retrieve the checkpoints
> associated with this job.
>
> Cheers,
> Till
>
> On Tue, May 26, 2020 at 12:42 PM M Singh  wrote:
>
> Hi Zhu Zhu:
>
> I have another clafication - it looks like if I run the same app multiple
> times - it's job id changes.  So it looks like even though the graph is the
> same the job id is not dependent on the job graph only since with different
> runs of the same app it is not the same.
>
> Please let me know if I've missed anything.
>
> Thanks
>
> On Monday, May 25, 2020, 05:32:39 PM EDT, M Singh 
> wrote:
>
>
> Hi Zhu Zhu:
>
> Just to clarify - from what I understand, EMR also has by default restart
> times (I think it is 3). So if the EMR restarts the job - the job id is the
> same since the job graph is the same.
>
> Thanks for the clarification.
>
> On Monday, May 25, 2020, 04:01:17 AM EDT, Yang Wang 
> wrote:
>
>
> Just share some additional information.
>
> When deploying Flink application on Yarn and it exhausted restart policy,
> then
> the whole application will failed. If you start another instance(Yarn
> application),
> even the high availability is configured, we could not recover from the
> latest
> checkpoint because the clusterId(i.e. applicationId) has changed.
>
>
> Best,
> Yang
>
On Mon, May 25, 2020 at 11:17 AM, Zhu Zhu  wrote:
>
> Hi M,
>
> Regarding your questions:
> 1. yes. The id is fixed once the job graph is generated.
> 2. yes
>
> Regarding yarn mode:
> 1. the job id stays the same because the job graph will be generated once
> at the client side and persisted in DFS for reuse
> 2. yes if high availability is enabled
>
> Thanks,
> Zhu Zhu
>
> M Singh  于2020年5月23日周六 上午4:06写道:
>
> Hi Flink Folks:
>
> If I have a Flink Application with 10 restarts, if it fails and restarts,
> then:
>
> 1. Does the job have the same id ?
> 2. Does the automatically restarting application, pickup from the last
> checkpoint ? I am assuming it does but just want to confirm.
>
> Also, if it is running on AWS EMR I believe EMR/Yarn is configured to
> restart the job 3 times (after it has exhausted it's restart policy) .  If
> that is the case:
> 1. Does the job get a new id ? I believe it does, but just want to confirm.
> 2. Does the Yarn restart honor the last checkpoint ?  I believe, it does
> not, but is there a way to make it restart from the last checkpoint of the
> failed job (after it has exhausted its restart policy) ?
>
> Thanks
>
>
>


Re: Flink Elastic Sink

2020-05-28 Thread Yangze Guo
Hi, Anuj.

From my understanding, you could send IndexRequest to the indexer in
`ElasticsearchSink`. It will create a document under the given index
and type. So, it seems you only need to get the timestamp and concat
the `date` to your index. Am I understanding that correctly? Or do you
want to emit only 1 record per day?

Best,
Yangze Guo
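
For illustration, here is a rough sketch of deriving such a per-day index name from the event timestamp (this assumes the flink-connector-elasticsearch7 API and that the Avro records have already been converted to a Map; the host, index prefix and the `event_ts` field name are placeholders):

import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.elasticsearch.client.Requests;

public class DailyIndexSinkSketch {

    // Formats the event timestamp as the date suffix, e.g. "events-2020-05-28".
    private static final DateTimeFormatter DAY =
            DateTimeFormatter.ofPattern("yyyy-MM-dd").withZone(ZoneOffset.UTC);

    public static ElasticsearchSink<Map<String, Object>> build() {
        List<HttpHost> httpHosts = Collections.singletonList(new HttpHost("localhost", 9200, "http"));

        ElasticsearchSink.Builder<Map<String, Object>> builder = new ElasticsearchSink.Builder<>(
                httpHosts,
                new ElasticsearchSinkFunction<Map<String, Object>>() {
                    @Override
                    public void process(Map<String, Object> element, RuntimeContext ctx, RequestIndexer indexer) {
                        long eventTs = (long) element.get("event_ts"); // assumed epoch-millis field
                        String index = "events-" + DAY.format(Instant.ofEpochMilli(eventTs));
                        // The index name is part of each request, so a new day simply produces a new
                        // index name and Elasticsearch creates the index on demand.
                        indexer.add(Requests.indexRequest()
                                .index(index)
                                .source(element));
                    }
                });

        builder.setBulkFlushMaxActions(1); // flush every request; tune for production throughput
        return builder.build();
    }
}

The sink would then be attached with stream.addSink(DailyIndexSinkSketch.build()).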

On Fri, May 29, 2020 at 2:43 AM aj  wrote:
>
> Hello All,
>
> I am getting many events in Kafka and I have written a flink job that sinks
> the Avro records from Kafka to S3 in parquet format.
>
> Now, I want to sink these records into elastic search, but the only challenge
> is that I want to sink records into time-based indices. Basically, in Elastic, I want
> to create a per-day index with the date as the suffix.
> So in the Flink streaming job, if I create an ES sink, how will I change the sink to
> start writing to a new index when the first event of the day arrives?
>
> Thanks,
> Anuj.
>
>
>
>
>


Re: How do I make sure to place operator instances in specific Task Managers?

2020-05-28 Thread Weihua Hu
Hi, Felipe

Flink does not support running tasks on a specified TM.
You can use slotSharingGroup to control that tasks are not in the same slot, but you cannot
specify which TM they run on.

Can you please give the reason for specifying TM?


Best
Weihua Hu
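
For reference, a minimal sketch of combining setParallelism() and slotSharingGroup() (the sources, operators and parallelism values are only placeholders; note this only controls which tasks may share slots, it does not pin tasks to a particular TM):

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotSharingSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Pipeline 1: source01 -> map01, isolated in its own slot sharing group.
        DataStream<String> mapped01 = env.socketTextStream("host1", 9000)
                .slotSharingGroup("pipeline-1")
                .map(String::toUpperCase)
                .name("map01")
                .setParallelism(4)
                .slotSharingGroup("pipeline-1");

        // Pipeline 2: source02 -> map02, in a different group, so it cannot share slots with pipeline 1.
        DataStream<String> mapped02 = env.socketTextStream("host2", 9000)
                .slotSharingGroup("pipeline-2")
                .map(String::toLowerCase)
                .name("map02")
                .setParallelism(4)
                .slotSharingGroup("pipeline-2");

        // The keyed reducer gets its own group as well.
        mapped01.union(mapped02)
                .keyBy(String::length)
                .reduce((a, b) -> a + b)
                .name("reducer")
                .setParallelism(8)
                .slotSharingGroup("reducers")
                .print()
                .name("print-sink");

        env.execute("slot sharing sketch");
    }
}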

> On May 28, 2020, at 21:37, Felipe Gutierrez  wrote:
> 
> For instance, if I have the following DAG with the respective parallelism
> in parentheses (I hope the DAG renders properly after all):
> 
>  source01 -> map01(4) -> flatmap01(4) \
> 
>  |-> keyBy -> reducer(8)
>  source02 -> map02(4) -> flatmap02(4) /
> 
> And I have 4 TMs in 4 machines with 4 cores each. I would like to
> place source01 and map01 and flatmap01 in TM-01. source02 and map02
> and flatmap02 in TM-02. I am using "disableChaining()" in the flatMap
> operator to measure it. And reducer1-to-4 in TM-03 and reducer5-to-8
> in TM-04.
> 
> I am using the methods "setParallelism()" and "slotSharingGroup()" to
> define it but both source01 and source02 are placed in TM-01 and map01
> is split into 2 TMs. The same with map02.
> 
> Thanks,
> Felipe
> --
> -- Felipe Gutierrez
> -- skype: felipe.o.gutierrez
> -- https://felipeogutierrez.blogspot.com



Re: How to create schema for flexible json data in Flink SQL

2020-05-28 Thread Benchao Li
Hi Guodong,

After an offline discussion with Leonard, I think you got the right meaning
of schema inference.
But there are two cases here:
1. the schema of the data is fixed; schema inference can save you the effort of
writing the schema explicitly.
2. the schema of the data is dynamic; in this case schema inference cannot
help, because SQL is a somewhat static language, which needs to know all the
data types at compile time.

Maybe I've misunderstood your question at the very beginning. I thought
your case is #2. If your case is #1, then schema inference is a good
choice.
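
As a concrete illustration of case #1 (fixed schema), the nested object from the original mail could be declared with an explicit ROW type, as also suggested further down in this thread. A rough sketch, assuming Flink 1.10 DDL, the Blink planner and a Kafka/JSON source (topic name and connector properties are placeholders):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class FlexibleJsonSchemaSketch {

    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(
                env, EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

        // Declare the nested JSON object as a ROW type so each nested key is addressable in SQL.
        tEnv.sqlUpdate(
                "CREATE TABLE json_events (\n"
                        + "  top_level_key1 STRING,\n"
                        + "  nested_object ROW<\n"
                        + "    nested_key1 STRING,\n"
                        + "    nested_key2 INT,\n"
                        + "    nested_key3 ARRAY<STRING>>\n"
                        + ") WITH (\n"
                        + "  'connector.type' = 'kafka',\n"
                        + "  'connector.version' = 'universal',\n"
                        + "  'connector.topic' = 'json-events',\n"
                        + "  'connector.properties.bootstrap.servers' = 'localhost:9092',\n"
                        + "  'format.type' = 'json',\n"
                        + "  'format.derive-schema' = 'true'\n"
                        + ")");

        // Nested fields are then referenced with dot notation.
        Table result = tEnv.sqlQuery(
                "SELECT top_level_key1, nested_object.nested_key1, nested_object.nested_key3 FROM json_events");
        result.printSchema();
    }
}

The obvious downside, as discussed below, is that the DDL has to be updated whenever the upstream application adds new nested keys.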

On Thu, May 28, 2020 at 11:39 PM, Guodong Wang  wrote:

> Yes. Setting the value type as raw is one possible approach. And I would
> like to vote for schema inference as well.
>
> Correct me if I am wrong: IMO schema inference means I can provide a
> method in the table source to infer the data schema based on the runtime
> computation, just like some Calcite adapters do. Right?
> For SQL table registration, I think that requiring the table source to
> provide a static schema might be too strict. Letting the planner infer the table
> schema would be more flexible.
>
> Thank you for your suggestions.
>
> Guodong
>
>
> On Thu, May 28, 2020 at 11:11 PM Benchao Li  wrote:
>
>> Hi Guodong,
>>
>> Does the RAW type meet your requirements? For example, you could specify a
>> map type where the value for the map is the raw JsonNode
>> parsed by Jackson.
>> This is not supported yet; however, IMO this could be supported.
>>
>> On Thu, May 28, 2020 at 9:43 PM, Guodong Wang  wrote:
>>
>>> Benchao,
>>>
>>> Thank you for your quick reply.
>>>
>>> As you mentioned, for the current scenario, approach 2 should work for me.
>>> But it is a little bit annoying that I have to modify the schema to add new
>>> field types whenever the upstream app changes the json format or adds new fields.
>>> Otherwise, my users cannot refer to the field in their SQL.
>>>
>>> Per the description in the jira, I think after implementing this, all the
>>> json values will be converted to strings.
>>> I am wondering if Flink SQL can/will support a flexible schema in the
>>> future, for example, registering a table without defining a specific schema
>>> for each field, letting the user define a generic map or array for one field
>>> where the value of the map/array can be any object. Then the type conversion
>>> cost might be saved.
>>>
>>> Guodong
>>>
>>>
>>> On Thu, May 28, 2020 at 7:43 PM Benchao Li  wrote:
>>>
 Hi Guodong,

 I think you almost got the answer:
 1. map type: it does not work with the current implementation. For example,
 if you use a map type and the value is a non-string json object, then
 `JsonNode.asText()` may not work as you wish.
 2. list all the fields you care about. IMO, this can fit your scenario. And you
 can set format.fail-on-missing-field = false, to allow non-existent
 fields to be set to null.

 For 1, I think maybe we can support it in the future, and I've created
 jira[1] to track this.

 [1] https://issues.apache.org/jira/browse/FLINK-18002

 On Thu, May 28, 2020 at 6:32 PM, Guodong Wang  wrote:

> Hi !
>
> I want to use Flink SQL to process some json events. It is quite
> challenging to define a schema for the Flink SQL table.
>
> My data source's format is some json like this
> {
> "top_level_key1": "some value",
> "nested_object": {
> "nested_key1": "abc",
> "nested_key2": 123,
> "nested_key3": ["element1", "element2", "element3"]
> }
> }
>
> The big challenges for me to define a schema for the data source are
> 1. the keys in nested_object are flexible, there might be 3 unique
> keys or more unique keys. If I enumerate all the keys in the schema, I
> think my code is fragile, how to handle event which contains more
> nested_keys in nested_object ?
> 2. I know table api support Map type, but I am not sure if I can put
> generic object as the value of the map. Because the values in 
> nested_object
> are of different types, some of them are int, some of them are string or
> array.
>
> So. how to expose this kind of json data as table in Flink SQL without
> enumerating all the nested_keys?
>
> Thanks.
>
> Guodong
>


 --

 Best,
 Benchao Li

>>>
>>
>> --
>>
>> Best,
>> Benchao Li
>>
>

-- 

Best,
Benchao Li


Re: Flink Elastic Sink

2020-05-28 Thread Leonard Xu
Hi,aj

In the implementation of ElasticsearchSink, ElasticsearchSink won't create the
index and only starts an Elastic client for sending requests to
the Elastic cluster. You can simply extract the index (the date value in your case)
from your timestamp field and then put it into an IndexRequest [1];
ElasticsearchSink will send the IndexRequests to the Elastic cluster, and the Elastic
cluster will create the corresponding index and flush the records.

BTW, if you're using Flink SQL you can use a dynamic index in the Elasticsearch SQL
connector [2]: you can simply configure 'connector.index' =
'myindex_{ts_field|yyyy-MM-dd}' to achieve your goal.

Best,
Leonard Xu
[1] https://github.com/apache/flink/blob/master/flink-end-to-end-tests/flink-elasticsearch7-test/src/main/java/org/apache/flink/streaming/tests/Elasticsearch7SinkExample.java#L119
[2] https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connect.html#elasticsearch-connector




> On May 29, 2020, at 02:43, aj  wrote:
> 
> Hello All,
> 
> I am getting many events in Kafka and I have written a flink job that sinks
> the Avro records from Kafka to S3 in parquet format.
> 
> Now, I want to sink these records into elastic search, but the only challenge
> is that I want to sink records into time-based indices. Basically, in Elastic, I want
> to create a per-day index with the date as the suffix.
> So in the Flink streaming job, if I create an ES sink, how will I change the sink to
> start writing to a new index when the first event of the day arrives?
> 
> Thanks,
> Anuj. 
> 
> 
>  
> 
> 
>  


Re: Re: Re: Flink Window with multiple trigger condition

2020-05-28 Thread Yun Gao
Hi,
 I think you could use a timer to achieve that. In a ProcessFunction you could
register a timer at a specific time (event time or processing time) and get
called back at that point. It could be registered like
ctx.timerService().registerEventTimeTimer(current.lastModified + 6);
More details on timers can be found in [1] and an example is in [2]. In
this example, a timer is registered in the last line of the processElement
method, and the callback is implemented by overriding the onTimer method.

   [1] 
https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/process_function.html#timers
   [2] 
https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/operators/process_function.html#example
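
To make that concrete for the 30-minute inactivity case described in the message below, a rough sketch of a keyed function that registers a processing-time timer per key and fires even if no further event arrives (the class and state names are only placeholders, not the actual DemandSession logic):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Emits a "session timed out" marker when a key has seen no events for 30 minutes.
public class InactivityTimeoutFunction<IN> extends KeyedProcessFunction<String, IN, String> {

    private static final long TIMEOUT_MS = 30 * 60 * 1000L;

    private transient ValueState<Long> lastTimerState;

    @Override
    public void open(Configuration parameters) {
        lastTimerState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("last-timer", Long.class));
    }

    @Override
    public void processElement(IN event, Context ctx, Collector<String> out) throws Exception {
        // Push the deadline forward on every event: drop the old timer and register a new one.
        Long previousTimer = lastTimerState.value();
        if (previousTimer != null) {
            ctx.timerService().deleteProcessingTimeTimer(previousTimer);
        }
        long newTimer = ctx.timerService().currentProcessingTime() + TIMEOUT_MS;
        ctx.timerService().registerProcessingTimeTimer(newTimer);
        lastTimerState.update(newTimer);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        // Fires 30 minutes after the last event for this key, even if no further event arrives.
        out.collect("session for key " + ctx.getCurrentKey() + " timed out at " + timestamp);
        lastTimerState.clear();
    }
}

It would be applied after a keyBy, e.g. stream.keyBy(r -> r.f0).process(new InactivityTimeoutFunction<>()); the same pattern works with event-time timers if watermarks are available.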



 --Original Mail --
Sender:aj 
Send Date:Fri May 29 02:07:33 2020
Recipients:Yun Gao 
CC:user 
Subject:Re: Re: Flink Window with multiple trigger condition

Hi,

I have implemented the below solution and it's working fine, but the biggest
problem with this is that if no event comes for the user after 30 min then I am not
able to trigger, because I am checking the
time diff against incoming events. So it only triggers when the next event comes,
but I want it to trigger right after 30 mins.

So please help me improve this and solve the above problem.



public class DemandSessionFlatMap extends RichFlatMapFunction<Tuple2<String, GenericRecord>, DemandSessionSummaryTuple> {

    private static final Logger LOGGER = LoggerFactory.getLogger(DemandSessionFlatMap.class);

    private transient ValueState<Tuple3<String, Long, Long>> timeState; // maintain session_id, starttime and endtime
    private transient MapState<String, DemandSessionSummaryTuple> sessionSummary; // map for hex9 and summary tuple

    @Override
    public void open(Configuration config) {

        ValueStateDescriptor<Tuple3<String, Long, Long>> timeDescriptor =
                new ValueStateDescriptor<>(
                        "time_state", // the state name
                        TypeInformation.of(new TypeHint<Tuple3<String, Long, Long>>() {
                        }), // type information
                        Tuple3.of(null, 0L, 0L)); // default value of the state, if nothing was set
        timeState = getRuntimeContext().getState(timeDescriptor);

        MapStateDescriptor<String, DemandSessionSummaryTuple> descriptor =
                new MapStateDescriptor<>("demand_session",
                        TypeInformation.of(new TypeHint<String>() {
                        }), TypeInformation.of(new TypeHint<DemandSessionSummaryTuple>() {
                        }));
        sessionSummary = getRuntimeContext().getMapState(descriptor);

    }

    @Override
    public void flatMap(Tuple2<String, GenericRecord> recordTuple2, Collector<DemandSessionSummaryTuple> collector) throws Exception {
        GenericRecord record = recordTuple2.f1;
        String event_name = record.get("event_name").toString();
        long event_ts = (Long) record.get("event_ts");
        Tuple3<String, Long, Long> currentTimeState = timeState.value();

        if (event_name.equals("search_list_keyless") && currentTimeState.f1 == 0) {
            currentTimeState.f1 = event_ts;
            String demandSessionId = UUID.randomUUID().toString();
            currentTimeState.f0 = demandSessionId;
        }

        long timeDiff = event_ts - currentTimeState.f1;

        if (event_name.equals("keyless_start_trip") || timeDiff >= 180) {
            Tuple3<String, Long, Long> finalCurrentTimeState = currentTimeState;
            sessionSummary.entries().forEach(tuple -> {
                String key = tuple.getKey();
                DemandSessionSummaryTuple sessionSummaryTuple = tuple.getValue();
                try {
                    sessionSummaryTuple.setEndTime(finalCurrentTimeState.f2);
                    collector.collect(sessionSummaryTuple);
                } catch (Exception e) {
                    e.printStackTrace();
                }

            });
            timeState.clear();
            sessionSummary.clear();
            currentTimeState = timeState.value();
        }

        if (event_name.equals("search_list_keyless") && currentTimeState.f1 == 0) {
            currentTimeState.f1 = event_ts;
            String demandSessionId = UUID.randomUUID().toString();
            currentTimeState.f0 = demandSessionId;
        }
        currentTimeState.f2 = event_ts;

        if (currentTimeState.f1 > 0) {
            String search_hex9 = record.get("search_hex9") != null ? record.get("search_hex9").toString() : null;
            DemandSessionSummaryTuple currentTuple = sessionSummary.get(search_hex9) != null ? sessionSummary.get(search_hex9) : new DemandSessionSummaryTuple();

            if (sessionSummary.get(search_hex9) == null) {
                currentTuple.setSearchHex9(search_hex9);
                currentTuple.setUserId(recordTuple2.f0);
                currentTuple.setStartTime(currentTimeState.f1);
                currentTuple.setDemandSessionId(currentTimeState.f0);
            }

            if (event_name.equals("search_list_keyless")) {
                currentTuple.setTotalSearch(currentTuple.getTotalSearch() + 1);
                SearchSummaryCalculation(record, cu

Re: Question on stream joins

2020-05-28 Thread Yun Gao
Hi Sudan,

   As far as I know, both join and cogroup require keys (namely partitioning);
thus for the non-keyed scenario, you may have to use the low-level connect operator
to achieve it. In my opinion it should be something like

  leftSource.connect(rightSource)
    .process(new TagCoprocessFunction()) // In this function, tag the left source with "0" and the right source with "1"
    .window(xx)
    .process(new XX()) // In this function, you could get all the left and right elements in this window, and you could distinguish them with the tag added in the previous step.

It should be pointed out that without a key (partitioning) the parallelism of the
window operator will have to be 1.


For the keyed scenarios, you may use the high-level operators join/cogroup to
achieve that. Join could be seen as a special case of cogroup: in
cogroup, you can access all the left and right elements directly, while in the join
function, the framework will iterate over the elements for you and you only
specify the logic for each (left, right) pair.

Best,
 Yun
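
For the keyed variant of the count-difference use case below, a rough coGroup sketch (assuming both sides carry the partition key in f0 and using a 1-minute tumbling processing-time window purely for illustration):

import org.apache.flink.api.common.functions.CoGroupFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class CountDifferenceSketch {

    // Emits (key, leftCount, leftCount - rightCount) for each key and window.
    public static DataStream<Tuple3<String, Long, Long>> apply(
            DataStream<Tuple2<String, String>> leftSource,
            DataStream<Tuple2<String, String>> rightSource) {

        return leftSource
                .coGroup(rightSource)
                .where(new KeySelector<Tuple2<String, String>, String>() {
                    @Override
                    public String getKey(Tuple2<String, String> value) {
                        return value.f0;
                    }
                })
                .equalTo(new KeySelector<Tuple2<String, String>, String>() {
                    @Override
                    public String getKey(Tuple2<String, String> value) {
                        return value.f0;
                    }
                })
                .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
                .apply(new CoGroupFunction<Tuple2<String, String>, Tuple2<String, String>, Tuple3<String, Long, Long>>() {
                    @Override
                    public void coGroup(Iterable<Tuple2<String, String>> left,
                                        Iterable<Tuple2<String, String>> right,
                                        Collector<Tuple3<String, Long, Long>> out) {
                        long leftCount = 0;
                        long rightCount = 0;
                        String key = null;
                        for (Tuple2<String, String> l : left) {
                            leftCount++;
                            key = l.f0;
                        }
                        for (Tuple2<String, String> r : right) {
                            rightCount++;
                            if (key == null) {
                                key = r.f0;
                            }
                        }
                        // Count difference between the two sides within this window.
                        out.collect(Tuple3.of(key, leftCount, leftCount - rightCount));
                    }
                });
    }
}

For the non-keyed variant, the same counting logic could sit in the window process function after the connect/tag approach sketched above, keeping in mind the parallelism-1 limitation.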


 --Original Mail --
Sender:Sudan S 
Send Date:Fri May 29 01:40:59 2020
Recipients:User-Flink 
Subject:Question on stream joins

Hi ,

I have two use cases:

1. I have two streams, `leftSource` and `rightSource`, which I want to join
without partitioning over a window, find the difference of the counts of elements
of leftSource and rightSource, and emit the resulting difference. Which is the
appropriate join function I can use?

join/cogroup/connect.

2. I want to replicate the same behaviour over a keyed source. Basically
leftSource and rightSource are joined by a partition key.

Please let me know which is the appropriate join operator for the use case.
"The information contained in this e-mail and any accompanying documents may 
contain information that is confidential or otherwise protected from 
disclosure. If you are not the intended recipient of this message, or if this 
message has been addressed to you in error, please immediately alert the sender 
by replying to this e-mail and then delete this message, including any 
attachments. Any dissemination, distribution or other use of the contents of 
this message by anyone other than the intended recipient is strictly 
prohibited. All messages sent to and from this e-mail address may be monitored 
as permitted by applicable law and regulations to ensure compliance with our 
internal policies and to protect our business."

pyflink Table API: question about connecting to external systems

2020-05-28 Thread 刘亚坤
I am currently learning the pyflink Table API and have a question:
1. When connecting the Table API to Kafka, is it possible to treat the whole Kafka message as a single
table field? For example, if the messages in a Kafka topic are JSON strings, can the entire string be treated as one field, so that a pyflink UDF can conveniently parse and transform the message?
2. If that is feasible, how should the data format of the Kafka connector be configured, i.e. how should with_format be set? There is currently little material about this on the official website.


I am new to this, any guidance is appreciated. Thanks.

About Flink SQL tumbling window results not being written out

2020-05-28 Thread steven chen
Data arrives every time and is aggregated, but why are the results of the INSERT not saved into MySQL? Is it a problem with the SQL, or something else? Any help is appreciated.
CREATE TABLE user_behavior (

itemCode VARCHAR,

ts BIGINT COMMENT 'timestamp',

t as TO_TIMESTAMP(FROM_UNIXTIME(ts / 1000, 'yyyy-MM-dd HH:mm:ss')),

proctime as PROCTIME(),

WATERMARK FOR t as t - INTERVAL '5' SECOND

) WITH (

'connector.type' = 'kafka',

'connector.version' = '0.11',

'connector.topic' = 'scan-flink-topic',

'connector.properties.group.id' ='qrcode_pv_five_min',

'connector.startup-mode' = 'latest-offset',

'connector.properties.zookeeper.connect' = 'localhost:2181',

'connector.properties.bootstrap.servers' = 'localhost:9092',

'update-mode' = 'append',

'format.type' = 'json',

'format.derive-schema' = 'true'

);

CREATE TABLE pv_five_min (
item_code VARCHAR,
dt VARCHAR,
dd VARCHAR,
pv BIGINT
) WITH (
'connector.type' = 'jdbc',
'connector.url' = 'jdbc:mysql://127.0.0.1:3306/qrcode',
'connector.table' = 'qrcode_pv_five_min',
'connector.driver' = 'com.mysql.jdbc.Driver',
'connector.username' = 'root',
'connector.password' = 'root',
'connector.write.flush.max-rows' = '1'
);

INSERT INTO pv_five_min
SELECT
itemCode As item_code,
DATE_FORMAT(TUMBLE_START(t, INTERVAL '5' MINUTE), 'yyyy-MM-dd HH:mm') dt,
DATE_FORMAT(TUMBLE_END(t, INTERVAL '5' MINUTE), 'yyyy-MM-dd HH:mm') dd,
COUNT(*) AS pv
FROM user_behavior
GROUP BY TUMBLE(t, INTERVAL '5' MINUTE),itemCode;




 

Re: Running and Maintaining Multiple Jobs

2020-05-28 Thread Yun Tang
Hi Prasanna

As far as I know, Flink does not allow submitting a new jobgraph without
restarting the job, and I actually do not understand what your 3rd question means.

From: Prasanna kumar 
Sent: Friday, May 29, 2020 11:18
To: Yun Tang 
Cc: user 
Subject: Re: Running and Maintaining Multiple Jobs

Thanks Yun for your reply.

Your thoughts on the following too?

2) We cannot afford downtime in our system. Say 5 tasks are pushed to
production. If we need to add / update tasks later, should we restart the
cluster with the new job and JAR?

3) Now we have the job registry in files. Is it possible to read from the DB 
directly and create the Jobs (DAG) dynamically without restarting it ?

Prasanna.


On Fri 29 May, 2020, 08:04 Yun Tang, 
mailto:myas...@live.com>> wrote:
Hi Prasanna

Back in 2018, Flink could only restart all tasks to recover a job. That's
why you would find answers saying that multiple jobs might be better. However, Flink
supports restarting only the affected pipeline instead of the whole job, a.k.a.
"region failover", since Flink 1.9, and this failover strategy has been the default
since Flink 1.10 [1].

In a nutshell, I think multiple pipelines could be acceptable now.


[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/config.html#jobmanager-execution-failover-strategy

Best
Yun Tang

From: Prasanna kumar 
mailto:prasannakumarram...@gmail.com>>
Sent: Friday, May 29, 2020 1:59
To: user mailto:user@flink.apache.org>>
Subject: Re: Running and Maintaining Multiple Jobs

Hi,

I also looked at this link. This says my approach is not good. Wanted to hear 
more on the same from the community.

https://stackoverflow.com/questions/52009948/multiple-jobs-or-multiple-pipelines-in-one-job-in-flink

Prasanna.

On Thu, May 28, 2020 at 11:22 PM Prasanna kumar 
mailto:prasannakumarram...@gmail.com>> wrote:
Hi,

I have a list of jobs that need to be run via flink.
For the PoC we are implementing this via a JSON configuration file.
Sample JSON file
{
  "registryJobs": [
    { "inputTopic": "ProfileTable1", "outputTopic": "Channel" },
    { "inputTopic": "Salestable", "outputTopic": "SalesChannel" },
    { "inputTopic": "billingsource", "outputTopic": "billing" },
    { "inputTopic": "costs", "outputTopic": "costschannel" },
    { "inputTopic": "leadsTable", "outputTopic": "leadsChannel" }
  ]
}
But in Long run we do want to have this detail in a RDBMS.
There are many other properties for a Job, such as transformations, filters and rules,
which would be captured in the DB via a UI.

Flink supports a single execution environment. I ended up writing a JobGenerator
module which reads this JSON and creates the jobs.


public static void generateJobs(Registry job, StreamExecutionEnvironment env) {

    Properties props = new Properties();
    props.put("bootstrap.servers", BOOTSTRAP_SERVER);
    props.put("client.id", "flink-example1");

    FlinkKafkaConsumer011<String> fkC = new FlinkKafkaConsumer011<>(job.getInputTopic(), new SimpleStringSchema(), props);

    DataStream<String> stream = env.addSource(fkC).name("Kafka: " + job.getInputTopic());

    stream.map( SOMEMAPCODE );

    stream.addSink(new FlinkKafkaProducer011<>(job.getOutputTopic(), new SimpleStringSchema(), props)).name("Kafka: " + job.getOutputTopic());
}

This created 5 tasks in a single Job and it is seen this way.

[Screen Shot 2020-05-28 at 11.15.32 PM.png]

Questions

1) Is this a good way to design it? We might end up having 500 - 1000 such tasks
in, say, 1 year down the line. Or is there another way possible?

2) We cannot afford downtime in our system. Say 5 tasks are pushed to
production. If we need to add / update tasks later, should we restart the
cluster with the new job and JAR?

3) Now we have the job registry in files. Is it possible to read from the DB 
directly and create the Jobs (DAG) dynamically without restarting it ?

Thanks,
Prasanna.