Re: Hi I'm having problems with self-signed certificate trust with Native K8S

2020-11-24 Thread Till Rohrmann
Hi Kevin, I expect the 1.12.0 release to happen within the next 3 weeks. Cheers, Till On Tue, Nov 24, 2020 at 4:23 AM Yang Wang wrote: > Hi Kevin, > > Let me try to understand your problem. You have added the trusted keystore > to the Flink app image (my-flink-app:0.0.1) > and…

Re: Hi I'm having problems with self-signed certificate trust with Native K8S

2020-11-23 Thread Yang Wang
Hi Kevin, Let me try to understand your problem. You have added the trusted keystore to the Flink app image (my-flink-app:0.0.1) and it could not be loaded. Right? Even when you tunnel into the pod, you cannot find the key store. It is strange. I know it is not very convenient to bundle the…

Re: Hi I'm having problems with self-signed certificate trust with Native K8S

2020-11-23 Thread Till Rohrmann
…secrets are able to get mounted > > https://github.com/apache/flink/pull/14005 <- also can maintainers look > into this PR so we can mount other custom K8S resources? > > On Fri, Nov 20, 2020 at 9:23 PM Kevin Kwon wrote: > >> Hi I am using MinIO as an S3 mock backend for Native K8S…

Re: Hi I'm having problems with self-signed certificate trust with Native K8S

2020-11-22 Thread Kevin Kwon
…K8S resources? On Fri, Nov 20, 2020 at 9:23 PM Kevin Kwon wrote: > Hi I am using MinIO as an S3 mock backend for Native K8S > > Everything seems to be fine except that it cannot connect to S3, since > the self-signed certificate's trust store is not cloned into the Deployment > resources…

Hi I'm having problems with self-signed certificate trust with Native K8S

2020-11-20 Thread Kevin Kwon
Hi I am using MinIO as an S3 mock backend for Native K8S. Everything seems to be fine except that it cannot connect to S3, since the self-signed certificate's trust store is not cloned into the Deployment resources. Below is, in order, how I add the trusted keystore using keytool and how I run my…
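
Since the thread turns on whether the trust store is actually present and loadable inside the image, one sanity check is to load it with the plain java.security.KeyStore API from inside the pod. Below is a minimal Scala sketch; the path and password are hypothetical placeholders, not values from the thread:

    import java.io.FileInputStream
    import java.security.KeyStore

    object TrustStoreCheck {
      def main(args: Array[String]): Unit = {
        // Hypothetical path and password; use whatever keytool wrote into your image.
        val path = "/opt/flink/conf/trusted-keystore.jks"
        val password = "changeit".toCharArray

        val ks = KeyStore.getInstance("JKS")
        val in = new FileInputStream(path)
        try ks.load(in, password) finally in.close()

        // List the aliases to confirm the self-signed certificate was imported.
        val aliases = ks.aliases()
        while (aliases.hasMoreElements) println(s"alias: ${aliases.nextElement()}")
      }
    }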

Re: Hi all I'm having trouble with spinning up a native Kubernetes cluster

2020-11-15 Thread Yang Wang
…build your own Flink image, not directly using the flink:1.11-scala_2.12-java8. Best, Yang Kevin Kwon wrote on Sat, Nov 14, 2020 at 5:26 AM: > Hi guys, I'm trying out the native K8s cluster and having trouble with SSL > I think. > > I use *k3d* as my local cluster for experiment > > here's…

Hi all I'm having trouble with spinning up a native Kubernetes cluster

2020-11-13 Thread Kevin Kwon
Hi guys, I'm trying out the native K8s cluster and having trouble with SSL I think. I use *k3d* as my local cluster for experiment. Here's how I launch my cluster: k3d cluster create; docker run \ -u flink:flink \ -v /Users/user/.kube:/opt/flink/.kube \ --network host \ --entrypoint…

Re: Hi Flink Team

2018-03-01 Thread Ashish Attarde
Thanks Piotrek for your response. Teena responded with the same suggestion. I am implementing the changes to try it out. Yes, originally I did call keyBy for the same reason, so that I could parallelize the operation. On Thu, Mar 1, 2018 at 1:24 AM, Piotr Nowojski wrote: > Hi, > > timeWindowAll is a non-parallel…

Re: Hi Flink Team

2018-03-01 Thread Piotr Nowojski
Hi, timeWindowAll is a non-parallel operation, since it gathers all of the elements and processes them together: https://ci.apache.org/projects/flink/flink-docs-release-1.4/api/java/org/apache/flink/streaming/api/datastream/DataStream.html#timeWindowAll
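
A minimal sketch of the contrast, against the Flink 1.4-era Scala DataStream API that the linked Javadoc belongs to; the Record type and its fields are invented for illustration:

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.windowing.time.Time

    // Hypothetical event type, only for illustration.
    case class Record(key: String, value: Long)

    object WindowSketch {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        val stream: DataStream[Record] = env.fromElements(Record("a", 1L), Record("b", 2L))

        // Parallel: windows are evaluated independently per key group.
        stream
          .keyBy(_.key)
          .timeWindow(Time.minutes(1))
          .reduce((a, b) => Record(a.key, a.value + b.value))
          .print()

        // Non-parallel: every element passes through a single operator instance.
        stream
          .timeWindowAll(Time.minutes(1))
          .reduce((a, b) => Record(a.key, a.value + b.value))
          .print()

        env.execute("window sketch")
      }
    }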

Fwd: Hi Flink Team

2018-03-01 Thread Ashish Attarde
Hi, I am new to Flink and, in general, to data processing using stream processors. I am using Flink to do real-time correlation between multiple records which arrive as part of the same stream. What I am doing is an "apply" operation on a time-windowed stream. When I submit the job with a parallelism factor…

Re: Hi

2017-04-07 Thread Fabian Hueske
Hi Wolfe, that's all correct. Thank you! I'd like to emphasize that the FsStateBackend stores all state on the heap of the worker JVM, so you might run into OutOfMemoryErrors if your state grows too large. Therefore, the RocksDBStateBackend is the recommended choice for most production…
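
A minimal sketch of how the backends discussed in this thread are configured (Flink 1.2-era APIs; the checkpoint URIs are placeholders, and a real job would pick exactly one backend):

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
    import org.apache.flink.runtime.state.filesystem.FsStateBackend
    import org.apache.flink.runtime.state.memory.MemoryStateBackend
    import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

    object BackendSketch {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment

        // Minimal case: state and checkpoints live on the JobManager heap; no HDFS needed.
        env.setStateBackend(new MemoryStateBackend())

        // Worker-heap state, checkpoints written to a filesystem (HDFS, S3, local, ...).
        env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"))

        // Recommended for large state: RocksDB on local disk, checkpoints to a filesystem.
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"))
      }
    }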

Re: Hi

2017-04-07 Thread Brian Wolfe
Hi Kant, Jumping in here, would love corrections if I'm wrong about any of this. In short: no, HDFS is not necessary to run stateful stream processing. In the minimal case, you can use the MemoryStateBackend to back up your state onto the JobManager. In any production scenario, you…

Hi

2017-04-07 Thread kant kodali
Hi All, I read the docs, however I still have the following question: for stateful stream processing, is HDFS mandatory? In some places I see it is required and in other places I see that RocksDB can be used. I just want to know if HDFS is mandatory for stateful stream processing? Thanks!

Re: Hi, There is possibly an issue with EventTimeSessionWindows where a gap is specified for considering items in the same session. Here the logic is, if two adjacent items have a difference in event time…

2017-01-02 Thread Jamie Grier
…Jan 2, 2017 at 3:11 AM, Sujit Sakre wrote: > Hi, > > We are using Flink 1.1.4 version. > > > There is possibly an issue with EventTimeSessionWindows where a gap is > specified for considering items in the same session. Here the logic is, if > two adjacent items have a difference…

Hi, There is possibly an issue with EventTimeSessionWindows where a gap is specified for considering items in the same session. Here the logic is, if two adjacent items have a difference in event time…

2017-01-02 Thread Sujit Sakre
Hi, We are using Flink 1.1.4. There is possibly an issue with EventTimeSessionWindows where a gap is specified for considering items in the same session. Here the logic is, if two adjacent items have a difference in event timestamps of more than the gap, then the items are considered to…
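
The gap logic in code, as a minimal sketch against the Flink 1.1-era Scala API; the Event type, key, and 5-second gap are invented for illustration:

    import org.apache.flink.streaming.api.TimeCharacteristic
    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows
    import org.apache.flink.streaming.api.windowing.time.Time

    // Hypothetical event type, only for illustration.
    case class Event(userId: String, timestamp: Long)

    object SessionSketch {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

        // 1000 ms and 7000 ms are more than the 5 s gap apart,
        // so they should land in two separate sessions.
        val events = env.fromElements(Event("u1", 1000L), Event("u1", 7000L))

        events
          .assignAscendingTimestamps(_.timestamp)
          .keyBy(_.userId)
          .window(EventTimeSessionWindows.withGap(Time.seconds(5)))
          .reduce((a, b) => Event(a.userId, math.max(a.timestamp, b.timestamp)))
          .print()

        env.execute("session window sketch")
      }
    }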

Re: Hi, join with two columns of both tables

2015-11-09 Thread Fabian Hueske
Why don't you use a composite key for the Flink join (first.join(second).where(0,1).equalTo(2,3).with(...))? This would be more efficient and you can omit the check in the join function. Best, Fabian 2015-11-08 19:13 GMT+01:00 Philip Lee : > I want to join two tables with two columns like > > //…
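
Fabian's suggestion as a runnable sketch; the case classes are simplified, hypothetical stand-ins for the StoreReturn and WebSales types in Philip's code, with invented field names:

    import org.apache.flink.api.scala._

    // Simplified, hypothetical stand-ins for the thread's types.
    case class StoreReturn(customerSk: Long, itemSk: Long)
    case class WebSales(billCustomerSk: Long, itemSk: Long)

    object CompositeJoinSketch {
      def main(args: Array[String]): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment
        val storeReturn = env.fromElements(StoreReturn(1L, 10L), StoreReturn(2L, 20L))
        val webSales = env.fromElements(WebSales(1L, 10L), WebSales(3L, 30L))

        // Both columns form the join key, so no extra equality check
        // is needed inside the join function.
        storeReturn
          .join(webSales)
          .where("customerSk", "itemSk")
          .equalTo("billCustomerSk", "itemSk")
          .print()
      }
    }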

Hi, join with two columns of both tables

2015-11-08 Thread Philip Lee
I want to join two tables with two columns like // AND sr_customer_sk = ws_bill_customer_sk // AND sr_item_sk = ws_item_sk val srJoinWs = storeReturn.join(webSales).where(_._item_sk).equalTo(_._item_sk) { (storeReturn: StoreReturn, webSales: WebSales, out: Collector[(Long, L…

Re: Hi, question about orderBy two columns more

2015-11-04 Thread Maximilian Michels
Hi Philip, The issue has been fixed in rc5 which you can get here: https://people.apache.org/~mxm/flink-0.10.0-rc5/ Note that these files will be removed once 0.10.0 is out. Kind regards, Max On Mon, Nov 2, 2015 at 6:38 PM, Philip Lee wrote: > You are welcome. > > I am wondering if…

Re: Hi, question about orderBy two columns more

2015-11-02 Thread Philip Lee
You are welcome. I am wondering if there is a way of being notified when you update the RC that solves the *sortPartition* problem, and then how we could apply the new version, e.g. just by downloading the newly released Flink version? Thanks, Phil On Mon, Nov 2, 2015 at 2:09 PM, Fabian Hueske wrote: >…

Re: Hi, question about orderBy two columns more

2015-11-02 Thread Fabian Hueske
Hi Philip, thanks for reporting the issue. I just verified the problem. It is working correctly for the Java API, but is broken in Scala. I will work on a fix and include it in the next RC for 0.10.0. Thanks, Fabian 2015-11-02 12:58 GMT+01:00 Philip Lee : > Thanks for your reply, Stephan…

Re: Hi, question about orderBy two columns more

2015-11-02 Thread Philip Lee
…> the same as in SQL when you state "ORDER BY col1, col2". > > The SortPartitionOperator created with the first "sortPartition(col1)" > call appends further columns, rather than instantiating a new sort. > > Greetings, > Stephan > > > On Sun, Nov…

Re: Hi, question about orderBy two columns more

2015-11-01 Thread Stephan Ewen
…"sortPartition(col1)" call appends further columns, rather than instantiating a new sort. Greetings, Stephan On Sun, Nov 1, 2015 at 11:29 AM, Philip Lee wrote: > Hi, > > I know when applying order by col, it would be > sortPartition(col).setParallelism(1) > > What about ordering by two or more columns? …

Hi, question about orderBy two columns more

2015-11-01 Thread Philip Lee
Hi, I know when applying order by col, it would be sortPartition(col).setParallelism(1). What about ordering by two or more columns? If the SQL states ORDER BY col_1, col_2, then sortPartition().sortPartition() does not solve this SQL, because ORDER BY in SQL sorts by the first column and then the second…

Re: Hi, Flink people, a question about translation from Hive Query to Flink function by using Table API

2015-10-20 Thread Philip Lee
Hi, one more simple question about ORDER BY count, item1, item2 in Hive SQL for Flink: 1) in SQL, when trying to order by 3 columns like the above example, it orders by 'count' first, then orders by 'item1' within each equal 'count', then orders by 'item2', right? In Flink, when using…

Re: Hi, Flink people, a question about translation from Hive Query to Flink function by using Table API

2015-10-19 Thread Fabian Hueske
…[Distribute > By] + [Sort By]. Therefore, according to your suggestion, should it be > partitionByHash() + sortGroup() instead of sortPartition()? > > Or perhaps I still do not quite see the difference between a partition and > the scope within a reduce. > > Regards, > Philip > > On Mon,…

Re: Hi, Flink people, a question about translation from Hive Query to Flink function by using Table API

2015-10-19 Thread Philip Lee
…: > Hi Philip, > > here are a few additions to what Max said: > - ORDER BY: As Max said, Flink's sortPartition() only sorts within a > partition and does not produce a total order. You can either set the > parallelism to 1 as Max suggested or use a custom partitioner to range partition the data…

Re: Hi, Flink people, a question about translation from Hive Query to Flink function by using Table API

2015-10-19 Thread Fabian Hueske
Hi Philip, here are a few additions to what Max said: - ORDER BY: As Max said, Flink's sortPartition() only sorts within a partition and does not produce a total order. You can either set the parallelism to 1 as Max suggested or use a custom partitioner to range partition the data. - SORT BY:…

Re: Hi, Flink people, a question about translation from Hive Query to Flink function by using Table API

2015-10-19 Thread Maximilian Michels
Hi Philip, You're welcome. Just a small correction: Hive's SORT BY should be DataSet.groupBy(key).sortGroup(key) in Flink. This ensures sorted grouped records within the reducer that follows. No need to set the parallelism to 1. Best, Max On Mon, Oct 19, 2015 at 1:28 PM, Philip…
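
Max's correction as a minimal sketch (DataSet API; the data and grouping key are invented): records arrive sorted within each group, with no need to drop the parallelism to 1:

    import org.apache.flink.api.common.operators.Order
    import org.apache.flink.api.scala._

    object SortBySketch {
      def main(args: Array[String]): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment
        val sales = env.fromElements(("item1", 5L), ("item2", 3L), ("item1", 1L))

        // Hive's SORT BY: sorted within each group, not globally.
        sales
          .groupBy(0)
          .sortGroup(1, Order.ASCENDING)
          .reduceGroup(iter => iter.mkString(", ")) // each group arrives sorted by field 1
          .print()
      }
    }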

Re: Hi, Flink people, a question about translation from Hive Query to Flink function by using Table API

2015-10-19 Thread Maximilian Michels
Hi Philip, Thank you for your questions. I think you have mapped the Hive functions to the Flink ones correctly. Just a remark on the ORDER BY: you wrote that it produces a total order of all the records. In this case, you'd have to do a SortPartition operation with the parallelism set to 1. Th…

Hi, Flink people, a question about translation from Hive Query to Flink function by using Table API

2015-10-18 Thread Philip Lee
Hi, Flink people, a question about translation from Hive Query to Flink function by using Table API. To sum up, I am working on a benchmark for Flink. I am Philip Lee, majoring in Computer Science in a Master's degree at TUB. I work on translating the Hive queries of the benchmark into Flink code. As…