Hi Shawn,
You could also take a look at the HybridSource [1].
Best,
Dawid
[1] https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/hybridsource/
On 26/01/2022 08:39, Guowei Ma wrote:
> Hi Shawn
> Currently Flink can not trigger a savepoint at the end of the input…
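A minimal sketch of the HybridSource idea: replay the historical files first, then switch to the live stream, all in one job. This assumes Flink 1.14 with the file and Kafka connectors on the classpath; the S3 path, broker address, and topic name are hypothetical placeholders.

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.source.hybrid.HybridSource;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineFormat;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Bounded part: replay the historical events stored in S3.
FileSource<String> history = FileSource
    .forRecordStreamFormat(new TextLineFormat(), new Path("s3://my-bucket/history/"))
    .build();

// Unbounded part: switch over to Kafka once the files are exhausted.
KafkaSource<String> live = KafkaSource.<String>builder()
    .setBootstrapServers("broker:9092")
    .setTopics("events")
    .setValueOnlyDeserializer(new SimpleStringSchema())
    .build();

HybridSource<String> source = HybridSource.builder(history)
    .addSource(live)
    .build();

env.fromSource(source, WatermarkStrategy.noWatermarks(), "hybrid-source");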
Hi Shawn,
Currently Flink cannot trigger a savepoint at the end of the input. An alternative might be to develop a customized source that triggers a savepoint once it notices that all input splits have been handled.
Or you could look at the State Processor API [1], which might be helpful.
[1] https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/libs/state_processor_api/
Our application is stateful: processing live events depends on the state. But for various reasons we need to rebuild the state, and it would be very costly to replay all the data. Our historical events are stored in S3, so we want to create states/savepoints periodically so that we can…
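For rebuilding the state from the S3 history without replaying it through the streaming job, the State Processor API can compute the state offline and write it out as a savepoint that the streaming job then starts from. A minimal sketch, assuming the Flink 1.13 DataSet-based API; the paths, the operator uid, and the CountBootstrapper are hypothetical.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.state.api.BootstrapTransformation;
import org.apache.flink.state.api.OperatorTransformation;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction;

// Hypothetical bootstrapper: counts historical events per key into keyed state.
class CountBootstrapper extends KeyedStateBootstrapFunction<String, String> {
    transient ValueState<Long> count;

    @Override
    public void open(Configuration conf) {
        count = getRuntimeContext().getState(
            new ValueStateDescriptor<>("count", Long.class));
    }

    @Override
    public void processElement(String event, Context ctx) throws Exception {
        Long c = count.value();
        count.update(c == null ? 1L : c + 1);
    }
}

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

// Historical events from S3, one per line, e.g. "key,payload".
DataSet<String> history = env.readTextFile("s3://my-bucket/history/");

BootstrapTransformation<String> transformation = OperatorTransformation
    .bootstrapWith(history)
    .keyBy(new KeySelector<String, String>() {
        @Override
        public String getKey(String line) {
            return line.split(",")[0];
        }
    })
    .transform(new CountBootstrapper());

Savepoint
    .create(new HashMapStateBackend(), 128)                 // max parallelism
    .withOperator("stateful-operator-uid", transformation)  // must match the streaming job's uid
    .write("s3://my-bucket/savepoints/bootstrap");

env.execute("bootstrap-savepoint");

The streaming job can then be submitted with -s s3://my-bucket/savepoints/bootstrap to start from the bootstrapped state.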
Hi, Shawn
I think Flink does not support this mechanism yet.
Would you like to share the scenario in which you need this savepoint at
the end of the bounded input?
Best,
Guowei
On Wed, Jan 26, 2022 at 1:50 PM Shawn Du wrote:
> Hi experts,
>
> assume I have several files and I want to replay these files in order…
Hi experts,
Assume I have several files and I want to replay these files in order in streaming mode and create a savepoint when the files reach the end. Is that possible?
I wrote a simple test app, and the job finishes when the source reaches the end, so I have no chance to create a savepoint. Please help.
Thanks,
I'm using: final StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
But no go.
On Mon, 24 Jan 2022 at 16:35, John Smith wrote:
> Hi, using Flink 1.14.3 with Gradle. I explicitly added the flink-clients
> dependency and the job starts, but it quits with...
>
> …
Hi all,
For Flink to treat a model class as a special POJO type, these are the
documented conditions:
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/dev/serialization/types_serialization/#pojos
It says the following:
- All fields are either public or must be accessible through getter and setter functions.
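To make that concrete, a hypothetical model class that satisfies those conditions could look like this:

// Public, standalone class with a public no-argument constructor; every
// field is either public or reachable through conventional getters/setters.
public class SensorReading {
    public String sensorId;        // public field: fine as-is
    private double temperature;    // non-public field: needs getter/setter below

    public SensorReading() {}      // public no-arg constructor

    public double getTemperature() { return temperature; }
    public void setTemperature(double temperature) { this.temperature = temperature; }
}

Such a class is handled by Flink's PojoSerializer instead of falling back to the generic Kryo serializer.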
Hi Jessy,
Queryable State is considered approaching end of life [1] per the Flink
Roadmap.
There are currently no development activities planned for it.
Best regards,
Martijn
[1]
https://flink.apache.org/roadmap.html
On Tue, 25 Jan 2022 at 18:00, Jessy Ping wrote:
> Hi Matthias,
>
> I want to query the current state of the application at real-time…
It's hard for me to see the issue from what you posted. However, I can post how I added that jar to my Flink pods and you can compare.
Instead of creating a custom image, I loaded the JAR as a ConfigMap.
You can create a ConfigMap easily from a file:
1. Download the jar you want
2. Create the ConfigMap
Thanks Edward for your explanation. I missed the part about the aggregationKey being added by the processor.
On Tuesday, January 25, 2022, 02:12:41 PM EST, Colletta, Edward wrote:
Here is some sample data which may help visualize how the aggregation is changed dynamically.
We start by aggregating by session and session+account…
I have a Flink Kafka consumer in place which works fine until I add the lines below:

import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.common.time.Time;
import java.util.concurrent.TimeUnit;

env.setRestartStrategy(RestartStrategies.failureRateRestart(
    3,                               // max failures per unit
    Time.of(5, TimeUnit.MINUTES),    // time interval for measuring failure rate
    Time.of(10, TimeUnit.SECONDS))); // delay

It gives…
Here is some sample data which may help visualize how the aggregation is changed dynamically.
We start by aggregating by session and session+account by placing values into aggregationKey based on the fields in groupByFields.
Then we delete the session+account aggregation and add an aggregation by…
You don’t have to add keyBy’s at runtime. You change what is in the value of aggregationKey at runtime.
Some records may appear several times with different fields extracted to aggregationKey. The dynamic building of the grouping is really done by the flatMap.
From: M Singh
Sent: Tuesday, J…
Thanks Edward for your response.
The problem I have is that I am not sure how to add or remove keyBy's at runtime, since the Flink topology is based on them (as Caizhi mentioned).
I believe we can change the single keyBy in your example, but not add/remove them.
Please let me know if I have misunderstood.
A general pattern for dynamically adding new aggregations could be something like this:

BroadcastStream<Instruction> broadcastStream =
    aggregationInstructions
        .broadcast(broadcastStateDescriptor);

DataStream<KeyedEvent> streamReadyToAggregate = dataToAggregate
    .connect(broadcastStream)
    .process(new AddAggregationKeys()); // hypothetical BroadcastProcessFunction, see the fuller sketch below
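A fuller sketch of the same pattern, continuing the stream names from the snippet above. All other names here (Instruction, Event, KeyedEvent and their methods) are hypothetical stand-ins; the fan-out that the flatMap performs in Edward's description happens in processElement.

import java.util.Map;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

// Event, KeyedEvent, and Instruction (with getId/isDelete/buildKey) are
// hypothetical POJOs; buildKey concatenates the event fields named in the
// instruction's groupByFields into the aggregationKey.
MapStateDescriptor<String, Instruction> descriptor = new MapStateDescriptor<>(
    "aggregations", Types.STRING, Types.POJO(Instruction.class));

BroadcastStream<Instruction> broadcastStream =
    aggregationInstructions.broadcast(descriptor);

DataStream<KeyedEvent> keyed = dataToAggregate
    .connect(broadcastStream)
    .process(new BroadcastProcessFunction<Event, Instruction, KeyedEvent>() {

        @Override
        public void processElement(Event e, ReadOnlyContext ctx,
                                   Collector<KeyedEvent> out) throws Exception {
            // Fan out: one copy of the event per active aggregation, each
            // stamped with the aggregationKey built by that instruction.
            for (Map.Entry<String, Instruction> entry :
                    ctx.getBroadcastState(descriptor).immutableEntries()) {
                out.collect(new KeyedEvent(entry.getValue().buildKey(e), e));
            }
        }

        @Override
        public void processBroadcastElement(Instruction ins, Context ctx,
                                            Collector<KeyedEvent> out) throws Exception {
            // Adding or removing an aggregation only mutates broadcast state;
            // the topology itself (the single keyBy below) never changes.
            if (ins.isDelete()) {
                ctx.getBroadcastState(descriptor).remove(ins.getId());
            } else {
                ctx.getBroadcastState(descriptor).put(ins.getId(), ins);
            }
        }
    });

// One static keyBy on the dynamically built key feeds the downstream aggregation.
keyed.keyBy(k -> k.aggregationKey);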
Hi Matthias,
I want to query the current state of the application at real-time. Hence, the State Processor API won't fit here. I have the following questions:
* Is the queryable state stable enough to use in production systems?
* Are there any improvements or development activities planned or going…
Hi Jessy,
Have you considered using the state processor api [1] for offline analysis of
checkpoints and savepoints?
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/libs/state_processor_api/
Sincere greetings
Thias
From: Jessy Ping
Sent: Monday, 24 January 2022 16:47
To:
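For concreteness, a minimal sketch of the offline read path with the 1.13 State Processor API. The savepoint path, operator uid, and state descriptor here are hypothetical and would have to match the job that produced the savepoint.

import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.util.Collector;

// Hypothetical reader: dumps the "count" ValueState for every key.
class CountReader extends KeyedStateReaderFunction<String, String> {
    transient ValueState<Long> count;

    @Override
    public void open(Configuration conf) {
        count = getRuntimeContext().getState(
            new ValueStateDescriptor<>("count", Long.class));
    }

    @Override
    public void readKey(String key, Context ctx, Collector<String> out) throws Exception {
        out.collect(key + " -> " + count.value());
    }
}

ExecutionEnvironment bEnv = ExecutionEnvironment.getExecutionEnvironment();

ExistingSavepoint savepoint = Savepoint.load(
    bEnv, "s3://my-bucket/savepoints/savepoint-xxxx", new HashMapStateBackend());

DataSet<String> counts = savepoint.readKeyedState("stateful-operator-uid", new CountReader());
counts.print();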
Hi Ingo,
So basically, I cannot deploy an older version of a Flink job on a 1.14.3 Flink cluster, is that right?
Thanks,
Sweta
On Tue, Jan 25, 2022 at 4:02 AM Ingo Bürk wrote:
> Hi Sweta,
>
> there was a non-compatible change to SourceReaderContext#metricGroup in
> the 1.14.x release line; I assume this is what you are seeing…
Hi Fil,
If I understand correctly, you are looking for TLS client authentication,
i.e. the remote function needs to authenticate the request
that is coming from the StateFun runtime.
This is indeed not yet supported, as it wasn't required by the community.
I'd be happy to create an issue and assign…
In the documentation we have an example of how to implement deserialization from bytes to Jackson ObjectNode objects, JSONKeyValueDeserializationSchema:
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/connectors/datastream/kafka/
However, there is no example for the other direction:
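One possible shape for the reverse direction, as a sketch only: the class below is hypothetical, and it reuses the shaded Jackson types that JSONKeyValueDeserializationSchema returns.

import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode;

public class ObjectNodeSerializationSchema implements SerializationSchema<ObjectNode> {

    private transient ObjectMapper mapper;

    @Override
    public void open(InitializationContext context) {
        // Create the (non-serializable) mapper on the task manager, not on the client.
        mapper = new ObjectMapper();
    }

    @Override
    public byte[] serialize(ObjectNode element) {
        try {
            return mapper.writeValueAsBytes(element);
        } catch (Exception e) {
            throw new RuntimeException("Failed to serialize ObjectNode", e);
        }
    }
}

With the 1.14 KafkaSink this could be plugged in through KafkaRecordSerializationSchema.builder().setValueSerializationSchema(new ObjectNodeSerializationSchema()).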
Hi Krzysztof,
sorry for the late reply. The community is very busy at the moment
with the final two weeks of Flink 1.15.
The parameters you have mentioned are mostly relevant for the internal
conversion or representation from Parquet types to Flink's SQL type
system.
- isUtcTimestamp denotes whether…
Hi Sweta,
there was a non-compatible change to SourceReaderContext#metricGroup in
the 1.14.x release line; I assume this is what you are seeing.
Did you make sure to update the connector (and any other) dependencies
as well?
Best
Ingo
On 25.01.22 05:36, Sweta Kalakuntla wrote:
Hi,
We are…
Hey Saravanan,
Please read the contribution guide [1]. It is a good idea to review the
code style guidelines [2] to reduce PR churn for nits.
If you can, please raise a Jira and mention me, and I will assign it to you.
[1] https://flink.apache.org/contributing/how-to-contribute.html
[2]
https://flink…