Hi Gary:
I have tried version 1.5.6; it shows the same error.
org.apache.flink.client.program.ProgramInvocationException: Could not retrieve
the execution result.
at
org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:258)
Hi,
I have a Flink job (version 1.5.3) that consumes from a Kafka topic, does some
transformations and aggregations, and writes to two Kafka topics respectively.
Meanwhile, there's a custom source that pulls configurations for the
transformations periodically. The generic job graph is as below.
T
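Roughly, and with names simplified, the wiring looks like the sketch below. This is purely an illustration: topic names, the config payload, and the serialization are placeholders, the actual transformations and aggregations are omitted, and connecting the config stream via broadcast() and a CoFlatMapFunction is an assumption, not necessarily how the real job does it.

import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
import org.apache.flink.util.Collector;

public class JobSketch {

    public static void main(String[] args) throws Exception {
        Properties kafkaProps = new Properties();
        kafkaProps.setProperty("bootstrap.servers", "localhost:9092"); // placeholder

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // main stream: events from Kafka
        DataStream<String> events = env.addSource(
                new FlinkKafkaConsumer011<>("input-topic", new SimpleStringSchema(), kafkaProps));

        // custom source that periodically re-reads the transformation configuration
        DataStream<String> config = env.addSource(new SourceFunction<String>() {
            private volatile boolean running = true;

            @Override
            public void run(SourceContext<String> ctx) throws Exception {
                while (running) {
                    ctx.collect("config-placeholder"); // would really be fetched from somewhere
                    Thread.sleep(60_000);
                }
            }

            @Override
            public void cancel() {
                running = false;
            }
        });

        // transformations driven by the latest configuration (aggregation omitted)
        DataStream<String> transformed = events
                .connect(config.broadcast())
                .flatMap(new CoFlatMapFunction<String, String, String>() {
                    private String currentConfig = "";

                    @Override
                    public void flatMap1(String event, Collector<String> out) {
                        out.collect(currentConfig + ":" + event); // transformation placeholder
                    }

                    @Override
                    public void flatMap2(String newConfig, Collector<String> out) {
                        currentConfig = newConfig;
                    }
                });

        // two Kafka sinks
        transformed.addSink(
                new FlinkKafkaProducer011<>("output-topic-a", new SimpleStringSchema(), kafkaProps));
        transformed.addSink(
                new FlinkKafkaProducer011<>("output-topic-b", new SimpleStringSchema(), kafkaProps));

        env.execute("job-sketch");
    }
}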
No particular reason for not using the ProcessFunction; I just wanted to
clarify whether that was the correct way to do it. Thanks, Knauf.
Regards,
Taher Koitawala
GS Lab Pune
+91 8407979163
On Wed, Feb 27, 2019 at 8:23 PM Konstantin Knauf
wrote:
> Hi Taher ,
>
> a ProcessFunction is actually the wa
Hi, Congxian:
I finally found it.
It is not included in the master branch:
https://github.com/apache/flink/blob/master/flink-libraries/flink-table/src/test/java/org/apache/flink/table/runtime/stream/sql/JavaSqlITCase.java
but it is still available in release-1.7:
https://github.com/apache/flink/blob/r
I am using Flink v1.7.1 now and cannot find the library you suggested;
is it deprecated?
Lifei Chen wrote on Thu, Feb 28, 2019 at 9:50 AM:
> Thanks, I will try it!
>
> Congxian Qiu wrote on Wed, Feb 27, 2019 at 9:17 PM:
>
>> Hi, Lifei
>>
>> Maybe org.apache.flink.table.runtime.stream.sql.JavaSqlITCase can be
>> helpful
Thank you very much.
Thanks, I will try it!
Congxian Qiu wrote on Wed, Feb 27, 2019 at 9:17 PM:
> Hi, Lifei
>
> Maybe org.apache.flink.table.runtime.stream.sql.JavaSqlITCase can be
> helpful.
>
> Best,
> Congxian
>
>
> Lifei Chen wrote on Wed, Feb 27, 2019 at 4:20 PM:
>
>> Hi, all:
>>
>> I finished a flink streaming job with flink sql, which r
Hi,
this topic has been discussed a lot recently in the community as "Event
Time Alignment/Synchronization" [1,2]. These discussions should provide a
starting point.
Cheers,
Konstantin
[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Sharing-state-between-subtasks-td24489.html
Hi Rinat,
to my knowledge, your workaround is fine and necessary. You can also emit a
Long.MAX_VALUE instead of the processing time to save some CPU cycles.
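For illustration only, such a source could look roughly like the sketch below. The rules are modeled as plain Strings, and loadRules() and the poll interval are placeholders; the relevant part is the single Long.MAX_VALUE watermark emitted up front.

import java.util.Arrays;
import java.util.List;

import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.streaming.api.watermark.Watermark;

// Sketch of a rules source (rules modeled as plain Strings here). It emits a
// single Long.MAX_VALUE watermark up front so that this stream never holds
// back the event time of the connected event stream.
public class RulesSource implements SourceFunction<String> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        ctx.emitWatermark(new Watermark(Long.MAX_VALUE));

        while (running) {
            for (String rule : loadRules()) {
                synchronized (ctx.getCheckpointLock()) {
                    ctx.collect(rule);
                }
            }
            Thread.sleep(60_000); // poll interval, arbitrary
        }
    }

    // placeholder for wherever the rules actually come from
    private List<String> loadRules() {
        return Arrays.asList("rule-1", "rule-2");
    }

    @Override
    public void cancel() {
        running = false;
    }
}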
Cheers,
Konstantin
On Wed, Feb 27, 2019 at 9:32 PM Rinat wrote:
> Hi mates, got some questions about using event time for the flink pipel
Hi mates, I've got some questions about using event time for the Flink pipeline.
My pipeline consists of two connected streams: one is a stream of business
rules and the other is a stream of user events.
I've broadcast the stream of business rules and connected it to the stream of
events, thu
Thanks Gary,
I will try to look into why the child-first strategy seems to have failed
for this dependency.
Best,
Austin
On Wed, Feb 27, 2019 at 12:25 PM Gary Yao wrote:
> Hi,
>
> Actually Flink's inverted class loading feature was designed to mitigate
> problems with different versions of lib
Great, my team is eager to get started. I'm curious: what progress has been
made so far?
-H
> On Feb 26, 2019, at 14:43, Chunhui Shi wrote:
>
> Hi Heath and Till, thanks for offering help on reviewing this feature. I just
> reassigned the JIRAs to myself after offline discussion with Jin. Let
Thanks Till,
It appears to occur when a task manager crashes and restarts: a new blob-store
directory gets created, the old one remains as is, and this piles up over
time. Should these *old* blob stores be manually cleared every time a task
manager crashes and restarts?
Regards,
Harshith
Hi,
Actually Flink's inverted class loading feature was designed to mitigate
problems with different versions of libraries that are not compatible with
each other [1]. You may want to debug why it does not work for you.
You can also try to use the Hadoop-free Flink distribution and export the
HA
I couldn't find a reference to it anywhere in the docs, so I thought I would ask
here.
When I use the keyBy operator, say keyBy("customerId"), and some keys (i.e.
customers) are much noisier than others, is there a way to ensure that too
many noisy customers do not land on the same task slot? In gen
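For reference, the only workaround I have seen described elsewhere is a two-step, salted aggregation along the lines of the sketch below. This is purely illustrative and not necessarily the recommended approach: it assumes a (customerId, count) stream, and the salt range of 10 is arbitrary.

import java.util.concurrent.ThreadLocalRandom;

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class SaltedAggregationSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // stand-in for the real (customerId, count) stream
        DataStream<Tuple2<String, Long>> counts = env.fromElements(
                Tuple2.of("noisy-customer", 1L),
                Tuple2.of("quiet-customer", 1L));

        // step 1: add a random salt so one hot customer is spread over up to 10 subtasks
        DataStream<Tuple3<String, Integer, Long>> partial = counts
                .map(t -> Tuple3.of(t.f0, ThreadLocalRandom.current().nextInt(10), t.f1))
                .returns(Types.TUPLE(Types.STRING, Types.INT, Types.LONG))
                .keyBy(0, 1)
                .timeWindow(Time.minutes(1))
                .sum(2);

        // step 2: merge the partial sums per customer (the salt field is now meaningless)
        partial
                .keyBy(0)
                .timeWindow(Time.minutes(1))
                .sum(2)
                .print();

        env.execute("salted-aggregation-sketch");
    }
}

The second keyBy then combines at most 10 partial results per customer instead of every raw event, so a single hot key no longer overloads one subtask.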
Short-term I'd try relocating the okio/okhttp dependencies in your jar.
I'm not too keen on adding more relocations to the hadoop jar; I can't
gauge the possible side-effects.
On 27.02.2019 14:54, Austin Cawley-Edwards wrote:
Following up to add more info, I am building my app with maven based
Hi Taher,
a ProcessFunction is actually the way to do this. When chained to the
previous operator, the overhead of such a ProcessFunction is negligible.
Any particular reason you don't want to go for a ProcessFunction?
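Just to illustrate the point, a minimal ProcessFunction could look like the sketch below (the per-record logic is made up):

import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;

// Illustrative only: a simple per-record transformation as a ProcessFunction.
// When this is chained to the previous operator (same parallelism, no keyBy or
// rebalance in between), records are handed over via a plain method call.
public class EnrichFunction extends ProcessFunction<String, String> {

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        // arbitrary per-record logic; on a keyed stream you also get timers and state
        out.collect(value.toUpperCase());
    }
}

You would attach it with something like stream.process(new EnrichFunction()).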
Cheers,
Konstantin
On Wed, Feb 27, 2019 at 8:36 AM Taher Koitawala
wrote:
Following up to add more info: I am building my app with Maven, based on the
sample Flink pom.xml.
My shade plugin config is:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.0.0</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Hi Sen Sun,
The question is already resolved. You can find the entire email thread here:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/flink-list-and-flink-run-commands-timeout-td22826.html
Best,
Gary
On Wed, Feb 27, 2019 at 7:55 AM sen wrote:
> Hi Aneesha:
>
> I
Hi,
How did you determine "jmhost" and "port"? Actually you do not need to
specify
these manually. If the client is using the same configuration as your
cluster,
the client will look up the leading JM from ZooKeeper.
If you have already tried omitting the "-m" parameter, you can check in the
clie
Hi, Lifei
Maybe org.apache.flink.table.runtime.stream.sql.JavaSqlITCase can be
helpful.
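A minimal test in that spirit could look like the sketch below. This assumes the Flink 1.7 Java Table API with flink-table and JUnit on the test classpath; the table, query, and expected rows are made up.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;
import org.junit.Assert;
import org.junit.Test;

public class MySqlTest {

    // filled by the sink instances running in the local mini cluster (same JVM)
    private static final List<String> RESULTS = Collections.synchronizedList(new ArrayList<>());

    private static class CollectSink implements SinkFunction<Row> {
        @Override
        public void invoke(Row value, Context context) {
            RESULTS.add(value.toString());
        }
    }

    @Test
    public void testFilterQuery() throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

        DataStream<Tuple3<Integer, Long, String>> input = env.fromElements(
                Tuple3.of(1, 1L, "Hi"),
                Tuple3.of(2, 2L, "Hello"),
                Tuple3.of(3, 2L, "Hello world"));
        tEnv.registerDataStream("MyTable", input, "a, b, c");

        Table result = tEnv.sqlQuery("SELECT a, c FROM MyTable WHERE b = 2");
        tEnv.toAppendStream(result, Row.class).addSink(new CollectSink());

        env.execute();

        Assert.assertTrue(RESULTS.contains("2,Hello"));
        Assert.assertTrue(RESULTS.contains("3,Hello world"));
    }
}

As far as I remember, JavaSqlITCase does essentially the same thing with StreamITCase.StringSink and a static result list.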
Best,
Congxian
Lifei Chen wrote on Wed, Feb 27, 2019 at 4:20 PM:
> Hi, all:
>
> I finished a Flink streaming job with Flink SQL, which reads data from
> Kafka and writes back to Elasticsearch.
>
> I have no idea how to add
Hi Chesnay & Fabian,
Thanks for your replies.
I found it is related to the CI runner. I moved to GitLab CI, which runs the
script as the root user by default, so it is always able to remove a
write-protected file.
Best,
Paul Lam
> On Feb 20, 2019, at 17:08, Chesnay Schepler wrote:
>
> I ran into a s
I set up a YARN cluster using:
./bin/yarn-session.sh -n 10 -tm 8192 -s 32
Then I submit a job to this cluster. It works fine; I've used it this way for a long time.
This way, you can submit multiple jobs to one cluster.
Hi, all:
I finished a Flink streaming job with Flink SQL, which reads data from Kafka
and writes back to Elasticsearch.
I have no idea how to add a unit test for the SQL I wrote. Any suggestions?
Hi @孙森,
"/usr/local/flink/bin/flink run -m jmhost:port my.jar" does not submit to YARN.
If you want to submit a job to YARN, you should use "/usr/local/flink/bin/flink run -m
yarn-cluster my.jar".
Please refer to
https://ci.apache.org/projects/flink/flink-docs-release-1.7/ops/cli.html
Best,
Shengj