ad dump to see where the
> operator is getting stuck.
>
> Best,
> Biao Geng
>
> Abhi Sagar Khatri via user wrote on Tue, Apr 30, 2024 at 19:38:
> >
> > Some more context: our job graph has 5 different tasks/operators/Flink
> > functions, of which we are seeing this issue every time
timeouts.
On Tue, Apr 30, 2024 at 3:05 PM Abhi Sagar Khatri
wrote:
> Hi Flink folks,
> Our team has been working on a Flink service. After completing the service
> development, we moved on to the Job Stabilisation exercises under
> production load.
> During high load, we see
Hi Flink folks,
Our team has been working on a Flink service. After completing the service
development, we moved on to the Job Stabilisation exercises under
production load.
During high load, we see that if the job restarts (mostly due to the
"org.apache.flink.util.FlinkExpectedException: The Task
ase? I
would like to understand the impact of making changes in our local Flink
code, in terms of testing effort and any other affected modules.
Can you please clarify this?
Thanks,
Vidya Sagar.
On Wed, Dec 7, 2022 at 7:59 AM Yanfei Lei wrote:
> Hi Vidya Sagar,
>
> Thanks for brin
Thanks,
Vidya Sagar.
ity, I would recommend first upgrading to
> a later, supported version like Flink 1.16.
>
> Best regards,
>
> Martijn
>
> On Sat, Nov 12, 2022 at 8:07 PM Vidya Sagar Mula
> wrote:
>
>> Hi Yanfei,
>>
>> Thank you for the response. I have a follow-up answer and
on is such cases where RocksDB has
a mount path to a volume on the host node?
Please clarify.
Thanks,
Vidya Sagar.
On Thu, Nov 10, 2022 at 7:52 PM Yanfei Lei wrote:
> Hi Vidya Sagar,
>
> Could you please share the reason for TaskManager restart? If the machine
> or JVM process of Ta
Are they going to be purged later on?
If not, the disk is going to be filled up with older checkpoints. Please
clarify this.
Thanks,
Vidya Sagar.
release-1.13/docs/ops/monitoring/back_pressure/
> [2]
> https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/dev/datastream/fault-tolerance/custom_serialization/
>
> Best,
> Yun Tang
> --
> *From:* Vidya Sagar Mula
> *Sent:* Sund
Hi,
In my project, we are trying to configure incremental checkpointing
with RocksDB as the state backend.
We are using Flink 1.11 and RocksDB with an AWS S3 backend.
Issue:
--
In my pipeline, my window size is 5 minutes and an incremental checkpoint
is taken every 2 minutes.
I am
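For illustration, a minimal sketch of enabling incremental checkpointing with RocksDB against an S3 checkpoint URI on Flink 1.11; the bucket path and the 2-minute interval below are placeholders based on the description above, not values taken from this thread.

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class IncrementalCheckpointingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // RocksDB state backend; the second constructor argument enables
        // incremental checkpoints, and checkpoint data goes to S3.
        env.setStateBackend(new RocksDBStateBackend("s3://my-bucket/flink-checkpoints", true));

        // Checkpoint every 2 minutes, matching the interval described above.
        env.enableCheckpointing(2 * 60 * 1000);

        // ... define the 5-minute-window pipeline here ...
        env.execute("incremental-checkpointing-sketch");
    }
}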
allel instances, there will be no data with event time assigned to
> trigger downstream computation.
>
> Or you could try `WatermarkStrategy.withIdleness`.
>
>
> Best,
> Kezhu Wang
>
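A minimal sketch of the `WatermarkStrategy.withIdleness` suggestion above; the event type (MyEvent), the out-of-orderness bound, and the idleness timeout are assumptions for illustration:

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

// Marks a source subtask as idle after 1 minute without records, so idle
// parallel instances no longer hold back the downstream watermark.
WatermarkStrategy<MyEvent> strategy = WatermarkStrategy
        .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        .withIdleness(Duration.ofMinutes(1));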
> On February 24, 2021 at 15:43:47, sagar (sagarban...@gmail.com) wrote:
>
> It
It is a fairly simple requirement; if I change it to processing time it
works fine, but it is not working with event time. Help appreciated!
On Wed, Feb 24, 2021 at 10:51 AM sagar wrote:
> Hi,
>
> Corrected with the code below, but still getting the same issue
>
> Instant instant =
>
:
> I saw one potential issue. Your timestamp assigner returns timestamps in
> second resolution, while Flink requires millisecond resolution.
>
>
> Best,
> Kezhu Wang
>
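To make the resolution point concrete, a hedged sketch of a timestamp assigner that converts epoch seconds to the epoch milliseconds Flink expects; the MyEvent type and its getEpochSeconds() accessor are hypothetical:

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

WatermarkStrategy<MyEvent> strategy = WatermarkStrategy
        .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
        // Flink event-time timestamps are epoch milliseconds; multiplying an
        // epoch-seconds value by 1000 keeps watermarks and window boundaries consistent.
        .withTimestampAssigner((event, recordTimestamp) -> event.getEpochSeconds() * 1000L);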
> On February 24, 2021 at 11:49:59, sagar (sagarban...@gmail.com) wrote:
>
> I have simple flink st
,4,2019-12-31T00:00:03
p1,2019-12-31,5,34,USD,4,2019-12-31T00:00:04
p1,2019-12-31,10,34,USD,4,2019-12-31T00:00:01
p1,2021-12-31,15,34,USD,4,2021-12-31T00:00:01
p1,2018-12-31,10,34,USD,4,2018-12-31T00:00:01
--
---Regards---
Sagar Bandal
This is a confidential mail. All rights are reserved. If you are not the
intended recipient, please ignore this email.
Hi Team, any answer to my question below?
On Wed, Jan 20, 2021 at 9:20 PM sagar wrote:
> Hi Team,
>
>
> I am creating a Flink job with the DataStream API in batch mode.
>
> It has 5 different bounded sources, and I need to perform some
> business operations on them, like
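As a side note, a minimal sketch of selecting batch execution mode for a DataStream job (available from Flink 1.12); the sources and business logic are omitted:

import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// With all five sources bounded, BATCH mode lets Flink schedule the job in
// stages with batch-style shuffles instead of running it as a continuous stream.
env.setRuntimeMode(RuntimeExecutionMode.BATCH);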
planning to
create one common error-handling stream, and at each CoGroup operation I am
planning to write to the error stream by using the Split operator after the CoGroup.
Let me know if that is the correct way of handling the errors.
--
---Regards---
Sagar Bandal
This is a confidential mail. All rights are reserved. If you are not the
intended recipient, please ignore this email.
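On the error-stream question above: the Split operator has since been deprecated, so here is a hedged alternative sketch using a side output on the stream produced by the CoGroup. The Result type, the isValid() check, and the coGroupedResult variable are assumptions for illustration.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

final OutputTag<String> errorTag = new OutputTag<String>("errors") {};

SingleOutputStreamOperator<Result> validated = coGroupedResult
        .process(new ProcessFunction<Result, Result>() {
            @Override
            public void processElement(Result value, Context ctx, Collector<Result> out) {
                if (value.isValid()) {
                    out.collect(value);                      // main output
                } else {
                    ctx.output(errorTag, value.toString());  // common error stream
                }
            }
        });

// One error stream that every stage can share.
DataStream<String> errors = validated.getSideOutput(errorTag);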
Thanks Yun
On Thu, Jan 14, 2021 at 1:58 PM Yun Gao wrote:
> Hi Sagar,
>
> I rechecked and found that the new Kafka source is not formally published
> yet, and a stable method, I think, may be to try adding the FlinkKafkaConsumer
> as a BOUNDED source first. Sorry for the incon
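For readers on later Flink versions, the new Kafka source mentioned above was published subsequently; a hedged sketch of using it as a bounded source (broker address, topic, and group id are placeholders):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Reads from the earliest offsets up to the offsets current at job start,
// then finishes, i.e. behaves as a bounded source.
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("localhost:9092")
        .setTopics("input-topic")
        .setGroupId("batch-job")
        .setStartingOffsets(OffsetsInitializer.earliest())
        .setBounded(OffsetsInitializer.latest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();

DataStream<String> stream =
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "bounded-kafka");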
business
operation on all the available data.
I am not sure whether Kafka as a data source will be best for this use case,
but somehow I don't want to expose my Flink job to database tables
directly.
Thanks & Regards,
Sagar
On Thu, Jan 14, 2021 at 12:41 PM Ardhani Narasimha <
ar
ce are there any alternatives to
achieve this?
--
---Regards---
Sagar Bandal
This is a confidential mail. All rights are reserved. If you are not the
intended recipient, please ignore this email.
Hi Hequn,
Thanks for pointing that out.
We were wondering if there is anything else, besides these examples, that
would help.
Thanks,
On Fri, Aug 17, 2018 at 5:33 AM Hequn Cheng wrote:
> Hi sagar,
>
> There are some examples in the flink-jpmml Git library [1], for example [2].
> H
Hi,
We are planning to use Flink to run JPMML models using the flink-jpmml library
(from Radicalbit) over DataStream in Flink.
Is there any code example we can refer to, to kick-start the process?
Thanks,
Hi,
We have a use case where we are consuming from hundreds of Kafka
topics. Each topic has a different number of partitions.
As per the documentation, to parallelize a Kafka topic, we need to use
setParallelism() == number of Kafka partitions for that topic.
But if we are consuming multiple to
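A hedged sketch of one consumer subscribing to several topics with the universal FlinkKafkaConsumer; the topic names, broker, and parallelism of 8 are placeholders:

import java.util.Arrays;
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "multi-topic-job");

// One consumer may subscribe to many topics; the partitions of all topics are
// distributed over the parallel source subtasks, so the source parallelism does
// not have to match any single topic's partition count.
FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(
        Arrays.asList("topic-a", "topic-b", "topic-c"),
        new SimpleStringSchema(),
        props);

DataStream<String> stream = env.addSource(consumer).setParallelism(8);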
Thanks @zhangminglei and @Fabian for confirming.
I also looked at the ORC parsing code, and it seems that using a complex
(struct) type is mandatory for now.
Thanks,
Sagar
On Wed, Jun 27, 2018 at 12:59 AM, Fabian Hueske wrote:
> Hi Sagar,
>
> That's more a question for the ORC community, but
@zhangminglei,
A question about the schema for the ORC format:
1. Does it always need to be a complex (struct) type?
2. Or can it be created with individual data types directly,
e.g. "name:string, age:int"?
Thanks,
Sagar
On Fri, Jun 22, 2018 at 11:56 PM, zhangminglei <187
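For context on the schema question, a small hedged sketch with the ORC library's TypeDescription; the field names are just examples:

import org.apache.orc.TypeDescription;

// ORC schemas are rooted at a struct; individual primitive types such as
// string or int appear as fields of that struct rather than standing alone.
TypeDescription schema = TypeDescription.fromString("struct<name:string,age:int>");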
should be
metaSchema.
2. Will you be able to add more unit tests in the commit? E.g., writing some
example data with a simple schema that will initialize the OrcWriter object and
sinking it to a local HDFS node?
3. Are there plans to add support for other data types?
Thanks,
Sagar
On Sun, Jun 17, 2018 at
oins and client, ... Following something
>>>> like a "tick-tock model", I would suggest focusing the next release more on
>>>> integrations, tooling, and reducing user friction.
>>>>
>>>> Of course, this does not mean that no other pull request gets reviewed,
>>>> and no other topic will be examined - it is simply meant as a help to
>>>> understand where to expect more activity during the next release cycle.
>>>> Note that these are really the coarse focus areas - don't read this as a
>>>> comprehensive list.
>>>>
>>>> This list is my first suggestion, based on discussions with committers,
>>>> users, and mailing list questions.
>>>>
>>>> - Support Java 9 and Scala 2.12
>>>>
>>>> - Smoothen the integration in Container environment, like "Flink as a
>>>> Library", and easier integration with Kubernetes services and other
>>>> proxies.
>>>>
>>>> - Polish the remaining parts of the FLIP-6 rewrite
>>>>
>>>> - Improve state backends with asynchronous timer snapshots, efficient
>>>> timer deletes, state TTL, and broadcast state support in RocksDB.
>>>>
>>>> - Extend Streaming Sinks:
>>>> - Bucketing Sink should support S3 properly (compensate for
>>>> eventual consistency), work with Flink's shaded S3 file systems, and
>>>> efficiently support formats that compress/index across individual rows
>>>> (Parquet, ORC, ...)
>>>> - Support ElasticSearch's new REST API
>>>>
>>>> - Smoothen State Evolution to support type conversion on snapshot
>>>> restore
>>>>
>>>> - Enhance Stream SQL and CEP
>>>> - Add support for "update by key" Table Sources
>>>> - Add more table sources and sinks (Kafka, Kinesis, Files, K/V
>>>> stores)
>>>> - Expand SQL client
>>>> - Integrate CEP and SQL, through MATCH_RECOGNIZE clause
>>>> - Improve CEP Performance of SharedBuffer on RocksDB
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>
> --
Cheers,
Sagar
, Jörn Franke wrote:
> Don’t use the JDBC driver to write to Hive. The performance of JDBC in
> general for large volumes is suboptimal.
> Write it to a file in HDFS in a format supported by Hive and point the
> table definition in Hive to it.
>
> On 11. Jun 2018, at 04:47, sagar
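To make the "write a file to HDFS and point a Hive table at it" suggestion concrete, a hedged sketch with Flink's StreamingFileSink (available from Flink 1.6 onwards); the output path and the assumption of pre-formatted CSV lines are illustrative only:

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

// Writes text rows to an HDFS directory; a Hive external table can then be
// defined over that directory instead of inserting through JDBC.
static void sinkToHdfs(DataStream<String> csvLines) {
    StreamingFileSink<String> sink = StreamingFileSink
            .forRowFormat(new Path("hdfs:///warehouse/my_table"),
                          new SimpleStringEncoder<String>("UTF-8"))
            .build();
    csvLines.addSink(sink);
}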
I am trying to sink data to Hive via Confluent Kafka -> Flink -> Hive using the
following code snippet, but I am getting the following error:
final StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
DataStream stream = readFromKafka(env);
private static final TypeI