hread.java:719) ~[?:1.8.0_412] at
org.apache.http.impl.client.IdleConnectionEvictor.start(IdleConnectionEvictor.java:96)
~[test.jar:?]
Regards, Madan
The entire Flink application is failing due to a timeout while getting
metadata, but we would like the Flink job to continue consuming data from the
other source topics even if one topic has an issue; failing the entire
application doesn't make sense when only one topic is affected.
Regards, Madan
keep running even after one of the topics has an issue? We tried to handle
exceptions to make sure the job wouldn't fail, but it didn't help.
Caused by: java.lang.RuntimeException: Failed to get metadata for topics
Can you please provide any insights?
Regards, Madan
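(As a side note on the failure handling discussed here: Flink cannot skip a
single bad topic inside one running job, so the usual workarounds are either a
restart strategy that keeps the job retrying instead of terminally failing, or
splitting the topics across separate jobs. A minimal restart-strategy sketch,
with illustrative values, not a recommendation:

```yaml
# flink-conf.yaml -- illustrative values
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 10
restart-strategy.fixed-delay.delay: 30 s
```

With this in place a metadata timeout restarts the job up to 10 times instead
of failing it permanently, but it still restarts the whole job graph.)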
Hello Weihua,
I tried increasing akka.framesize from the default to 30 MB, but I still see
that a few logs are missing from downstream operators when a large amount of
data is being processed.
Regards, Madan
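(For reference, the change tried above would look like this in flink-conf.yaml,
assuming the frame size is given in bytes with a trailing "b", as in the
default value 10485760b:

```yaml
# flink-conf.yaml -- 30 MB RPC frame size (default is 10485760b)
akka.framesize: 31457280b
```

Note this controls the maximum size of RPC messages between JobManager and
TaskManagers, so it may not be related to log lines missing from files.)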
On Friday, 14 July 2023 at 11:45:36 am GMT-7, Madan D via user
wrote:
Hello Weihu
Hello Weihua,
I see that all events processed right after consumption (at the source) are
written, but the events coming from downstream operators are missing.
Regards, Madan
On Friday, 14 July 2023 at 02:16:11 am GMT-7, Weihua Hu
wrote:
Hi Madan
Flink UI
Hello Team,
Recently, we observed that Flink logs were missing when written to files (which
we forward to Splunk for event metrics), even though the Flink UI showed them
accurately. Can you please help me understand what might be causing this?
Regards, Madan
ink parallelism based on incoming traffic.
Regards, Madan
On Wednesday, 28 June 2023 at 08:43:22 am GMT-7, Chen Zhanghao
wrote:
Hi Leon,
Adaptive scheduler alone cannot autoscale a Flink job. It simply adjusts the
parallelism of a job based on available slots [1]. To autoscale a job, we
consumer.
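(The distinction above can be sketched in configuration; key names as in
recent Flink versions, and note that reactive mode applies to standalone
deployments only, so it would not cover a YARN/EMR setup:

```yaml
# flink-conf.yaml -- sketch
jobmanager.scheduler: adaptive   # rescale when the available slot count changes
scheduler-mode: reactive         # standalone only: always use all available slots
```
)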
Attached job plans for both.
Regards, Madan
On Tuesday, 9 May 2023 at 05:50:49 pm GMT-7, Shammon FY
wrote:
Hi Madan,
Could you give the old and new versions of Flink and provide the job plans? I
think it will help the community find the root cause.
Best, Shammon FY
On Wed, May 10, 2023
might be causing it even though we are using the same
parallelism in the old and new jobs, and we are on Flink 1.14.2.
Regards, Madan
s are running on YARN as of today.
Regards, Madan
me know how Flink can adjust parallelism automatically
based on traffic (HPA).
Regards, Madan
similar to
Flink Kubernetes Operator which can perform auto-scaling.
Regards, Madan
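(For comparison, enabling the Flink Kubernetes Operator's autoscaler looks
roughly like this, assuming an operator version that ships the autoscaler
module; key names may differ between versions:

```yaml
# FlinkDeployment fragment -- sketch
spec:
  flinkConfiguration:
    job.autoscaler.enabled: "true"
    job.autoscaler.target.utilization: "0.7"
```
)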
On Friday, 13 January 2023 at 09:37:11 pm GMT-8, Gyula Fóra
wrote:
Hi Madan,
With reactive mode you need to build completely custom auto-scaling logic; it
can work, but it takes considerable effort.
Hello Team, I would like to understand auto-scaling on EMR using either
reactive mode or the adaptive scheduler, with custom or managed scaling. Can
someone help me with this?
Regards, Madan
-m yarn-cluster -ys 6
-Dyarn.application.name=kafka-to-pubsub-e2e-test1
-Djobmanager.scheduler=adaptive
Regards, Madan
(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecut
Regards,
Madan
Apply(CompletableFuture.java:607)
at
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
... 35 more
Can you please help me on this.
Regards,
Madan
ery cannot be used since business logic
is applied while fetching records.
3. Write initially to CSV and do the transformation on the CSV. Last possible
option.
Please share your thoughts.
Thank you.
On Wed, Jan 16, 2019 at 2:50 PM madan wrote:
> Hi,
>
> Need help in the below scenario,
>
>
ing to connect to it using RemoteEnvironment. Since the Spring application
context will not be initialized, an NPE is thrown. Please suggest what the
solution could be in this scenario.
--
Thank you,
Madan.
great help).
Thank you,
Madan
On Tue, Nov 6, 2018 at 12:03 PM vino yang wrote:
> Hi madan,
>
> I think you need to hash partition your records.
> Flink supports hash partitioning of data.
> The operator is keyBy.
> If the value of your tag field is enumerable, you can also use
once.
--
Thank you,
Madan.
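(The hash-partitioning idea from vino's reply can be sketched in plain Java,
with no Flink dependency; Flink's actual keyBy assigns keys to key groups via
murmur hashing, so this is only an illustration of the principle that equal
tag values always land on the same partition:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HashPartitionSketch {

    // Deterministically map a key to one of `parallelism` partitions.
    // floorMod guards against negative hashCode values.
    static int partitionFor(String key, int parallelism) {
        return Math.floorMod(key.hashCode(), parallelism);
    }

    public static void main(String[] args) {
        List<String> tags = Arrays.asList("A", "B", "A", "C", "B", "A");
        Map<Integer, List<String>> partitions = new HashMap<>();
        for (String tag : tags) {
            partitions.computeIfAbsent(partitionFor(tag, 4), p -> new ArrayList<>())
                      .add(tag);
        }
        // Every "A" lands in the same bucket, so per-key state stays local.
        System.out.println(partitions);
    }
}
```
)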
Hi Ken,
Yep correct.
Thank you.
On Wed, Oct 31, 2018 at 7:24 PM Ken Krugler
wrote:
> Hi Madan,
>
> If your source has a parallelism > 1, then when the CSV file is split,
> only one of the operators will get the split with the header row.
>
> So in that case, how woul
Hi,
When we are splitting a CSV file into multiple parts, we are not sure which
part is read first. Is there any way to make sure the first part, with the
header, is read first? I need to read the header line first to store the
column-name-to-index mapping and use it when processing subsequent records.
I could read heade
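(One way to sketch the header handling described above in plain Java, without
the Flink API: read the header line once up front, e.g. on the client before
submitting the job, and pass the resulting map to the record-processing logic.
Names here are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CsvHeaderIndex {

    // Build a column-name -> index map from the header line.
    static Map<String, Integer> headerIndex(String headerLine) {
        Map<String, Integer> index = new LinkedHashMap<>();
        String[] columns = headerLine.split(",");
        for (int i = 0; i < columns.length; i++) {
            index.put(columns[i].trim(), i);
        }
        return index;
    }

    public static void main(String[] args) {
        Map<String, Integer> idx = headerIndex("id,Name,Salary,Dept");
        // Look up fields of a data row by column name instead of position.
        String[] row = "1,Alice,75000.0,Eng".split(",");
        System.out.println(row[idx.get("Salary")]); // prints 75000.0
    }
}
```
)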
Tried with the public IP and the elastic IP of the EC2 master instance, and
allowed all TCP connections in the security settings.
Please suggest.
Attaching the log.
--
Thank you,
Madan.
/usr/lib/jvm/java-8-oracle/bin/java -ea -Didea.test.cyclic.buffer.size=1048576
-javaagent:/home/mmyellanki/.local/share/JetBrains/To
.,
Can anyone please give some information or point me at proper documentation.
--
Thank you,
Madan.
isTupleType()) {
TupleTypeInfo tupleTypeInfo = (TupleTypeInfo) typeInfo;
RowTypeInfo returns 'true' for isTupleType() but cannot be cast.
Can someone please tell me whether I have a wrong configuration or this is a
bug in the code?
--
Thank you,
Madan.
{"id":"int",
"Name":"String","Salary":"Double","Dept":"String"}
Data file : csv data file with above fields data
Output required: calculate the average salary, grouped by department.
--
Thank you,
Madan.
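(A plain-Java sketch of the required aggregation; in a Flink job this would be
a keyBy on Dept plus an averaging aggregate. Field order follows the schema
above: id, Name, Salary, Dept:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class AvgSalaryByDept {

    // Group rows by the Dept column (index 3) and average Salary (index 2).
    static Map<String, Double> average(List<String[]> rows) {
        return rows.stream().collect(Collectors.groupingBy(
                r -> r[3],
                Collectors.averagingDouble(r -> Double.parseDouble(r[2]))));
    }

    public static void main(String[] args) {
        List<String[]> rows = Arrays.asList(
                new String[]{"1", "A", "1000", "HR"},
                new String[]{"2", "B", "3000", "HR"},
                new String[]{"3", "C", "2000", "IT"});
        Map<String, Double> avg = average(rows);
        System.out.println(avg.get("HR")); // prints 2000.0
        System.out.println(avg.get("IT")); // prints 2000.0
    }
}
```
)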