Hi Hanjing,
I think the problem is that the Storm compatibility layer only works with
legacy mode at the moment. Please set `mode: legacy` in your
flink-conf.yaml. I hope this will resolve the problems.
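Concretely, that is a single line in your flink-conf.yaml:
mode: legacy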
Cheers,
Till
On Tue, Sep 11, 2018 at 7:10 AM jing wrote:
> Hi vino,
> Thank you very much.
Hi vino,
Thank you very much.
I'll try more tests.
Hanjing
On 9/11/2018 11:51, vino yang wrote:
Hi Hanjing,
Flink does not currently support TaskManager HA and only supports JobManager
HA.
In the Standalone environment, once the JobManager triggers a failover, it will
also cause all jobs to be cancelled and restarted.
Hi Till,
I've opened a JIRA issue: https://issues.apache.org/jira/browse/FLINK-10310.
Can we discuss it?
Jayant Ameta
On Thu, Aug 30, 2018 at 4:35 PM Till Rohrmann wrote:
> Hi Jayant,
>
> afaik it is currently not possible to control how failures are handled in
> the Cassandra Sink. What would
Hi Hanjing,
Flink does not currently support TaskManager HA and only supports
JobManager HA.
In the Standalone environment, once the JobManager triggers a failover, it
will also cause all jobs to be cancelled and restarted.
Thanks, vino.
On Tue, Sep 11, 2018 at 11:12 AM, jing wrote:
> Hi vino,
> Thanks a lot.
> Besides, I'm also confused about TaskManager's HA.
Hi vino,
Thanks a lot.
Besides, I'm also confused about TaskManager's HA.
There are 2 TaskManagers in my cluster, and only one job A runs on TaskManager A.
If TaskManager A crashes, what happens to my job?
I tried it: my job failed, and TaskManager B did not take over job A.
Is this right?
Hanjing
Hi Austin,
It seems that your scenario fits one of the usage scenarios of Flink's
savepoints: A/B testing (or application upgrades). Yes, Flink can support this
requirement, but you should understand that these two jobs will be subject
to certain restrictions. For more information, please refer to O
Oh, I thought the Flink job could not be submitted. I don't know why the
Storm example could not be submitted, because I have never used it.
Maybe Till, Chesnay or Gary can help you; I'll ping them for you.
Thanks, vino.
On Tue, Sep 11, 2018 at 10:26 AM, jing wrote:
> Hi vino,
> My job manager log is as below. I can submit a regular Flink job to this
> jobmanager, and it works.
Hi vino,
My job manager log is as below. I can submit a regular Flink job to this
jobmanager, and it works. But the flink-storm example doesn't work.
Thanks.
Hanjing
2018-09-11 18:22:48,937 INFO
org.apache.flink.runtime.entrypoint.ClusterEntrypoint -
--
Hi Benjamin,
The approximate location is this package; the more accurate location is
here. [1]
Specifically, Hash Join is divided into two steps:
1) build side
2) probe side
Thanks, vino.
[1]:
https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/opera
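To make the two phases concrete, here is a toy sketch in plain Java
(illustrative only, not Flink's actual hybrid hash join, which can also
spill to disk under memory pressure):

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ToyHashJoin {

    // 1) build side: insert every record of one input into a hash table
    //    keyed by the join key
    static Map<Integer, List<String>> build(int[] keys, String[] values) {
        Map<Integer, List<String>> table = new HashMap<>();
        for (int i = 0; i < keys.length; i++) {
            table.computeIfAbsent(keys[i], k -> new ArrayList<>()).add(values[i]);
        }
        return table;
    }

    // 2) probe side: stream the other input and emit a pair for every match
    static void probe(Map<Integer, List<String>> table, int[] keys, String[] values) {
        for (int i = 0; i < keys.length; i++) {
            for (String match : table.getOrDefault(keys[i], Collections.emptyList())) {
                System.out.println(keys[i] + ": (" + match + ", " + values[i] + ")");
            }
        }
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> table = build(new int[]{1, 2}, new String[]{"a", "b"});
        probe(table, new int[]{2, 3}, new String[]{"x", "y"}); // prints: 2: (b, x)
    }
}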
Hi Hanjing,
Is your JobManager working properly? Can you share your JobManager log?
Thanks, vino.
On Tue, Sep 11, 2018 at 10:06 AM, jing wrote:
> Hi vino,
>
> I tried changing "localhost" to the real IP, but it still throws the
> exception below. The JobManager configuration is as below.
>
>
>
> Thanks.
>
> Hanjing
Hi vino,
I tried changing "localhost" to the real IP, but it still throws the exception
below. The JobManager configuration is as below.
Thanks.
Hanjing
flink-conf.yaml:
jobmanager.rpc.address: 170.0.0.46
# The RPC port where the JobManager is reachabl
Hi there,
I would just like a quick sanity-check: it is possible to start a job with
a savepoint from another job, and have the new job save to a new checkpoint
directory without overwriting the original checkpoints, correct?
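Concretely, I mean something like this (paths made up):
./bin/flink run -s hdfs:///savepoints/savepoint-aaaa-bbbb new-job.jar
with the new job pointing at its own checkpoint directory in its configuration:
state.checkpoints.dir: hdfs:///checkpoints/new-job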
Thank you so much!
Austin
Hi,
Our application is financial data enrichment. We want to first key the
positions by Account Number and then window them. Within a window I want to
get all the unique products across all the accounts and make an external
service call to hydrate the cache for that window
Might be related to this
https://issues.apache.org/jira/browse/FLINK-10135
Hello,
I've upgraded my cluster from version 1.4.2 to 1.5.3. After the upgrade I
noticed that some of the metrics reported via JMX, like the number of running
jobs, are missing.
I've listed all of the domains and this is what I have:
/$>domains
#following domains are available
JMImplementation
co
Hi Encho,
Flink does not provide a Pubsub connector. I believe you are using Beam's
coder for PubsubIO. If this is correct, I guess you might want to ask
this question on Beam's mailing list.
Regards,
Dawid
On 10/09/18 17:24, Encho Mishinev wrote:
> Hello,
>
> I have a simple question - when does the Flink Runner for Apache Beam
> acknowledge Pubsub messages when using PubsubIO?
Hello,
I have a simple question - when does the Flink Runner for Apache Beam
acknowledge Pubsub messages when using PubsubIO?
Thanks
Hi,
can anyone tell me where the default hybrid hash join function for partitioning
(shuffle phase) is implemented?
Even after digging deeper I was not able to figure out where it is located.
Might be somewhere here?
—>
https://github.com/apache/flink/tree/master/flink-runtime/src/main/java/or
Hi Andrey,
> the answer is yes, it is backed by state backend (should be RocksDB if you
> configure it),
> you can trace it through these method calls:
>
> sourceStream.keyBy(…)
> .timeWindow(Time.seconds(…))
> .trigger(CountTrigger.of(…))
> gives you WindowedStream,
> WindowedStream.aggregate(ne
> Back to my first question, is the accumulator state backed by RocksDB state
> backend? If so, I don’t need to use rich function for the aggregate function.
I did some testing and code reading. To answer my own question, the
accumulator state seems to be managed by RocksDB if I use it as the
state backend.
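For reference, one way to select it is in flink-conf.yaml (a sketch; the
directory below is made up):
state.backend: rocksdb
state.checkpoints.dir: hdfs:///flink/checkpoints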
Hi Ning,
> Back to my first question, is the accumulator state backed by RocksDB state
> backend? If so, I don’t need to use rich function for the aggregate function.
the answer is yes, it is backed by state backend (should be RocksDB if you
configure it),
you can trace it through these method
Hi Hanjing,
OK, I mean you should change "localhost" to the real IP.
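For example, in flink-conf.yaml (using the address from your cluster):
jobmanager.rpc.address: 170.0.0.46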
Try it.
Thanks, vino.
On Mon, Sep 10, 2018 at 8:07 PM, jing wrote:
> Hi vino,
> The jobmanager rpc address value is set to localhost.
> hadoop3@p-a36-72 is the node hosting the jobmanager JVM.
>
> Thanks.
> Hanjing
Hi Joe,
Did the problem get resolved in the end?
Thanks,
Kostas
> On Aug 30, 2018, at 9:06 PM, Eron Wright wrote:
>
> I took a brief look as to why the queryable state server would bind to the
> loopback address. Both the qs server and the
> org.apache.flink.runtime.io.network.netty.NettyS
Hi vino,
The jobmanager rpc address value is set to localhost.
hadoop3@p-a36-72 is the node hosting the jobmanager JVM.
Thanks.
Hanjing
On 09/10/2018 19:25, vino yang wrote:
Hi Hanjing,
I mean this configuration key.[1]
What's more, is "
Hi Vino,
> If you need access to the state API, you can consider using
> ProcessWindowFunction[1], which gives you access to the state API.
I was hoping that I could use the aggregate function to do incremental
aggregation. My understanding is that ProcessWindowFunction either has to loop
Hi Hanjing,
I mean this configuration key.[1]
What's more, is "hadoop3@p-a36-72" also the node which hosts the
JobManager's JVM process?
Thanks, vino.
[1]:
https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/config.html#jobmanager-rpc-address
On Mon, Sep 10, 2018 at 6:57 PM, jing wrote:
> Hi vino
Hi vino,
I submitted the job with the command below.
[hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run WordCount-StormTopology.jar
input output
And I'm a new user; which configuration option should be set? All the
configurations are at their default settings now.
Thanks.
Hanjing
Hello,
after switching from 1.4.2 to 1.5.2 we started to have problems with the JM
container.
Our use case is as follows:
- we get a request from a user
- we run a DataProcessing job
- once finished, we store the details in the DB
We have ~1000 jobs per day. After the version update our container is dying
after ~1-2 days
Hi,
I have experienced an UnsatisfiedLinkError when trying to use
flink-s3-fs-hadoop to sink to S3 on my local Windows machine.
I googled and tried several solutions, like downloading hadoop.dll and
winutils.exe, setting the HADOOP_HOME and PATH environment variables, copying
hadoop.dll to C:\Windows\System32,
In addition: ProcessWindowFunction extends AbstractRichFunction, so through
getRuntimeContext() you can access the keyed state API.
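A minimal sketch of the pattern (class, state, and type names are made up
for illustration):

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// Sketch only: a ProcessWindowFunction that keeps a per-key running total
// in keyed state, obtained through getRuntimeContext().
public class RunningTotalWindowFunction
        extends ProcessWindowFunction<Long, String, String, TimeWindow> {

    private transient ValueState<Long> total;

    @Override
    public void open(Configuration parameters) {
        total = getRuntimeContext().getState(
                new ValueStateDescriptor<>("total", Long.class));
    }

    @Override
    public void process(String key, Context context,
                        Iterable<Long> elements, Collector<String> out) throws Exception {
        long windowSum = 0;
        for (Long value : elements) {
            windowSum += value;
        }
        Long previous = total.value();
        long newTotal = (previous == null ? 0L : previous) + windowSum;
        total.update(newTotal);
        out.collect(key + ": " + windowSum + " in this window, " + newTotal + " so far");
    }
}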
On Mon, Sep 10, 2018 at 3:19 PM, vino yang wrote:
> Hi Ning,
>
> Answering your question:
>
> *And why is rich functions not allowed here?*
>
> If you need access to the state API, you can co
Hi Ning,
Answering your question:
*And why is rich functions not allowed here?*
If you need access to the state API, you can consider using
ProcessWindowFunction[1], which gives you access to the state API.
Thanks, vino.
[1]:
https://ci.apache.org/projects/flink/flink-docs-release-1.6/dev/st
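As a rough sketch of the combined pattern (assuming a
DataStream<Tuple2<String, Long>> named "stream"; all names are illustrative):

stream
    .keyBy(value -> value.f0)
    .timeWindow(Time.seconds(10))
    .aggregate(
        // incremental aggregation: only the accumulator is kept per window
        new AggregateFunction<Tuple2<String, Long>, Long, Long>() {
            @Override public Long createAccumulator() { return 0L; }
            @Override public Long add(Tuple2<String, Long> value, Long acc) { return acc + value.f1; }
            @Override public Long getResult(Long acc) { return acc; }
            @Override public Long merge(Long a, Long b) { return a + b; }
        },
        // the ProcessWindowFunction only receives the pre-aggregated result
        new ProcessWindowFunction<Long, String, String, TimeWindow>() {
            @Override
            public void process(String key, Context context,
                                Iterable<Long> aggregated, Collector<String> out) {
                out.collect(key + " -> " + aggregated.iterator().next());
            }
        });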