Hi Team,
Is the Streaming Ledger replaced by statefulfun.io? Or am I missing
something?
Thanks and regards,
Dinesh.
Hi Team,
I am writing my first basic Stateful Functions hello application. I am
getting the following exception when I run:
$ ./bin/flink run -c
org.apache.flink.statefun.flink.core.StatefulFunctionsJob
./stateful-sun-hello-java-1.0-SNAPSHOT-jar-with-dependencies.jar
---
Hi Jary,
What do you mean by "step balance"? Could you please provide a concrete example?
On Fri, May 22, 2020 at 3:46 PM Jary Zhen wrote:
> Hello everyone,
>
> First, a brief pipeline introduction:
> env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
> consume multiple Kafka topics
Hi Team,
1. How can we enable checkpointing in Stateful Functions 2.0?
2. How do we set the parallelism?
Thanks,
Dinesh.
Hi Annemarie,
You need to use an HTTP client to connect to the JobManager's REST API.
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
//Creating a HttpClient object
CloseableHttpClient httpclient = HttpClients.createDefault();
//Creating a HttpGet object (replace ${jobmanager:port} with your JobManager host and REST port)
HttpGet httpget = new HttpGet("https://${jobmanager:port}/jobs");
//Executing the GET request; the response body lists the running jobs
CloseableHttpResponse response = httpclient.execute(httpget);
…d from topic A
> will always cause the data from topic B to be treated as 'late data' in
> the window operator. What I wanted is 1 record from A and 3600 records
> from B, by using FlinkKafkaConsumer.setStartFromTimestamp(timestamp), so
> that I can simulate consuming data as in a real production environment
Thanks Gordon,
I read the documentation several times, but I didn't understand it at the
time; flink-conf.yaml is not there.
Can you please suggest
1. how to increase the parallelism
2. how to enable checkpoints for the job
As far as I know there is no documentation regarding this. Or are these
features
… understand the problem more clearly.
Thanks,
Dinesh.
On Mon, May 25, 2020 at 6:58 PM C DINESH wrote:
> Hi Jary,
>
> The easiest and simplest solution: while creating each consumer, you can
> pass a different config based on your requirements.
>
> Example :
>
> For creating consumer
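A minimal sketch of the idea, assuming the universal flink-connector-kafka
consumer; the topic names, bootstrap servers, and timestamp are illustrative:

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

// Each source gets its own Properties object, so configs can differ.
Properties propsA = new Properties();
propsA.setProperty("bootstrap.servers", "kafka-a:9092");
propsA.setProperty("group.id", "pipeline");

Properties propsB = new Properties();
propsB.setProperty("bootstrap.servers", "kafka-b:9092");
propsB.setProperty("group.id", "pipeline");

FlinkKafkaConsumer<String> consumerA =
        new FlinkKafkaConsumer<>("topicA", new SimpleStringSchema(), propsA);
FlinkKafkaConsumer<String> consumerB =
        new FlinkKafkaConsumer<>("topicB", new SimpleStringSchema(), propsB);
// Start both consumers from the same instant so that one topic's records
// do not show up as 'late data' relative to the other in window operators.
consumerA.setStartFromTimestamp(1590000000000L);
consumerB.setStartFromTimestamp(1590000000000L);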
Hi Team,
I mean to say that now I understand, but on the documentation page
flink-conf.yaml is not mentioned.
On Mon, May 25, 2020 at 7:18 PM C DINESH wrote:
> Thanks Gordon,
>
> I read the documentation several times. But I didn't understand at that
> time, flink-conf.
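For reference, a minimal flink-conf.yaml sketch covering both questions,
assuming a Stateful Functions deployment that reads standard Flink options
(key names as in recent Flink releases; verify them against your version):

# Illustrative values; adjust to your cluster.
parallelism.default: 4                       # default parallelism for the job
execution.checkpointing.interval: 10s        # enables periodic checkpointing
state.backend: rocksdb                       # state backend used for snapshots
state.checkpoints.dir: file:///tmp/flink-checkpoints   # checkpoint storage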
Hi All,
In a Flink job I have a pipeline. It is consuming data from one Kafka topic
and storing the data in an Elasticsearch cluster.
Without restarting the job, can we add another Kafka cluster and another
Elasticsearch sink to the job? Which means I will supply the new Kafka
cluster and Elasticsearch
> to run the appended operators you want,
> but then you should keep the consistency semantics yourself.
>
> Best,
> Danny Chan
> On 28 Jun 2020 at 3:30 PM +0800, C DINESH wrote:
>
> Hi All,
>
> In a flink job I have a pipeline. It is consuming data from one kafka
> top
You still need to restart the job; you can optimize the downtime by using
session mode.
>
> Best,
> Paul Lam
>
> On 2 Jul 2020 at 11:23, C DINESH wrote:
>
> Hi Danny,
>
> Thanks for the response.
>
> In short, without restarting, we cannot add new sinks or sources.
>
> For better understa
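A sketch of the session-mode flow Paul describes, where the cluster stays up
and only the job is resubmitted (YARN as an example; the job ID, savepoint
path, and jar name are placeholders):

$ ./bin/yarn-session.sh -d                   # start a long-running session cluster
$ ./bin/flink stop <jobId>                   # stop the old job, taking a savepoint
$ ./bin/flink run -s <savepointPath> ./updated-pipeline.jar   # resubmit with the new source/sink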
Hello all,
Can we implement the two-phase commit protocol for MongoDB to get
EXACTLY_ONCE semantics? Will there be any limitations?
Thanks,
Dinesh.
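Flink's TwoPhaseCommitSinkFunction is the usual starting point for such a
sink. Below is a skeleton sketch, not a supported connector: the MongoTxn
handle and the buffering approach are illustrative. The main limitation is
that MongoDB has no prepared-transaction handle that survives a process
restart (unlike Kafka's transactional producer), so commit() must be
idempotent, and multi-document transactions require a MongoDB 4.0+ replica set.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.typeutils.base.VoidSerializer;
import org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer;
import org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction;

// Illustrative transaction handle: buffers writes until commit, because a
// live MongoDB session cannot be checkpointed and resumed after a failure.
class MongoTxn implements Serializable {
    final List<String> bufferedDocs = new ArrayList<>();
}

public class MongoTwoPhaseSink
        extends TwoPhaseCommitSinkFunction<String, MongoTxn, Void> {

    public MongoTwoPhaseSink() {
        super(new KryoSerializer<>(MongoTxn.class, new ExecutionConfig()),
              VoidSerializer.INSTANCE);
    }

    @Override
    protected MongoTxn beginTransaction() {
        return new MongoTxn(); // fresh buffer per checkpoint interval
    }

    @Override
    protected void invoke(MongoTxn txn, String value, Context context) {
        txn.bufferedDocs.add(value); // stage the record; nothing visible yet
    }

    @Override
    protected void preCommit(MongoTxn txn) {
        // Flush-but-don't-expose step. This is where the limitation bites:
        // MongoDB offers no durable "prepared" state to park the data in.
    }

    @Override
    protected void commit(MongoTxn txn) {
        // Write txn.bufferedDocs inside a MongoDB 4.0+ multi-document
        // transaction. Must be idempotent: commit() can be retried on recovery.
    }

    @Override
    protected void abort(MongoTxn txn) {
        txn.bufferedDocs.clear(); // drop staged records
    }
}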
Hello All,
Can we implement the two-phase commit protocol for an Elasticsearch sink?
Will there be any limitations?
Thanks in advance.
Warm regards,
Dinesh.
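Elasticsearch has no transactions to prepare and commit, so a two-phase
commit sink is not really feasible; the usual route to effectively
exactly-once results is idempotent writes with deterministic document IDs.
A sketch, assuming the ElasticsearchSinkFunction interface from
flink-connector-elasticsearch; the index name and key field are illustrative:

import java.util.Map;
import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

public class IdempotentEsSink implements ElasticsearchSinkFunction<Map<String, Object>> {
    @Override
    public void process(Map<String, Object> element, RuntimeContext ctx,
                        RequestIndexer indexer) {
        // Deterministic ID: a retried write after a failure overwrites the
        // previous attempt instead of creating a duplicate document.
        IndexRequest request = Requests.indexRequest()
                .index("my-index")                        // illustrative index
                .id(String.valueOf(element.get("key")))   // stable per-record key
                .source(element);
        indexer.add(request);
    }
}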
Hi All,
In the Flink UI the name of the dashboard is "Apache Flink Dashboard". We
have different environments; if I want to change the name of the dashboard,
where do I need to change it?
Thanks & Regards,
Dinesh.
Hi All,
From version 1.9 there is no flink-shaded-hadoop2 dependency. To use Hadoop
APIs like IntWritable and LongWritable, what are the dependencies we need to
add?
I tried searching on Google but was not able to understand the solution.
Please guide me.
Thanks in Advance.
Dinesh.
> …es are required.
> The Flink hadoop-compatibility dependency requires hadoop-common and
> hadoop-mapreduce-client-core, and whatever transitive dependencies these
> have.
>
> On 06/08/2020 13:08, C DINESH wrote:
>
> Hi All,
>
> From 1.9 version there is no flink-shaded-hado
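In Maven terms that reply maps to roughly the following; the versions are
illustrative and should match your Flink and Hadoop setup:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-hadoop-compatibility_2.11</artifactId>
  <version>1.9.0</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.8.5</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-core</artifactId>
  <version>2.8.5</version>
</dependency>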