Hi Folks,
I'm getting this error when sinking data to a Firehose sink; would really
appreciate some help!
DataStream<String> inputStream = env.addSource(new
FlinkKafkaConsumer<>("xxx", new SimpleStringSchema(), properties));
Properties sinkProperties = new Properties();
sinkProperties.put(AWSCon
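The snippet above is cut off mid-property. For context, a minimal sketch of how the sink properties are typically assembled — the string keys below mirror what I understand the Flink Kinesis connector's `AWSConfigConstants` (`AWS_REGION`, `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`) resolve to, and the region/credential values are placeholders, not taken from the original mail:

```java
import java.util.Properties;

public class FirehoseSinkConfig {

    public static Properties build() {
        Properties sinkProperties = new Properties();
        // These string keys correspond (as far as I know) to the constants
        // AWSConfigConstants.AWS_REGION, AWS_ACCESS_KEY_ID and
        // AWS_SECRET_ACCESS_KEY in the Flink Kinesis connector.
        // All values are placeholders.
        sinkProperties.put("aws.region", "us-east-1");
        sinkProperties.put("aws.credentials.provider.basic.accesskeyid", "ACCESS_KEY");
        sinkProperties.put("aws.credentials.provider.basic.secretkey", "SECRET_KEY");
        return sinkProperties;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("aws.region"));
    }
}
```

These properties would then be handed to the Firehose producer's constructor in place of the `sinkProperties` being built above.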
Hi,
I am using a Kinesis sink with Flink 1.13.
The data volume is in the millions of records, and it chokes on the 1 MB
per-record cap for Kinesis Data Streams.
Is there any way to send data to the Kinesis sink in batches of less than
1 MB, or any other workaround?
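The usual workaround is to split oversized records upstream of the sink, e.g. in a flatMap, so every emitted element already fits under the cap. A minimal sketch of that chunking step — the class and method names are made up for illustration, and the byte limit is a parameter so the example can be exercised with small values (in a real job it would be 1024 * 1024 minus some headroom):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: splits one large record into pieces whose UTF-8
// size stays under a byte limit, so each piece fits a per-record cap.
// In a job this would run inside a flatMap ahead of the Kinesis sink.
public class RecordChunker {

    public static List<String> chunk(String record, int maxBytes) {
        List<String> chunks = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        int currentBytes = 0;
        int i = 0;
        while (i < record.length()) {
            int cp = record.codePointAt(i);
            String piece = new String(Character.toChars(cp));
            int pieceBytes = piece.getBytes(StandardCharsets.UTF_8).length;
            // Flush before the limit would be crossed; never split a code point.
            if (currentBytes + pieceBytes > maxBytes && currentBytes > 0) {
                chunks.add(current.toString());
                current.setLength(0);
                currentBytes = 0;
            }
            current.append(piece);
            currentBytes += pieceBytes;
            i += Character.charCount(cp);
        }
        if (current.length() > 0) {
            chunks.add(current.toString());
        }
        return chunks;
    }

    public static void main(String[] args) {
        System.out.println(chunk("abcdef", 2)); // [ab, cd, ef]
    }
}
```

Each element of the returned list would then be emitted separately (one `collect` per chunk) so the sink only ever sees records under the limit.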
Hi,
I have a comma-separated string of the format
x,y,z \n
a,b,c ...
I want to split the string on '\n' and insert each piece into the data
stream separately using *flatmap*. Any example of how to do that?
If I add the chunks into an ArrayList or List and return that from the
flatmap
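For reference, the idiomatic pattern is to emit each line through the Collector rather than returning a list. A stdlib-only sketch of the flatMap body — `java.util.function.Consumer` stands in for Flink's `org.apache.flink.util.Collector` so the example runs on its own; in the real `FlatMapFunction` the call would be `out.collect(line)`:

```java
import java.util.function.Consumer;

// Sketch of the flatMap body. In Flink the signature is
//   void flatMap(String value, Collector<String> out)
// and each out.collect(line) emits one element into the stream --
// returning a List is not how flatMap emits elements.
public class LineSplitter {

    public static void flatMap(String value, Consumer<String> out) {
        for (String line : value.split("\n")) {
            String trimmed = line.trim();
            if (!trimmed.isEmpty()) {
                out.accept(trimmed); // out.collect(trimmed) in the real function
            }
        }
    }

    public static void main(String[] args) {
        flatMap("x,y,z \na,b,c", System.out::println);
    }
}
```

Wired into a stream it would look like `inputStream.flatMap((value, out) -> ...)` with a `FlatMapFunction<String, String>`, emitting one element per line.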
Hi,
I have been running a streaming job which prints data to .out files; the
files have gotten really large and are choking the root volume of my VM.
Is it OK to delete the .out files? Would that affect any other operation
or functionality?
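One caveat worth knowing: deleting a file the TaskManager still holds open does not free the space until the process exits, whereas truncating it in place does, and the process keeps writing to the same (now empty) file. A sketch, demonstrated on a scratch file — in practice the target would be the stdout files under Flink's log directory, e.g. `$FLINK_HOME/log/flink-*-taskexecutor-*.out` (that path is an assumption, adjust to your install):

```shell
# Demonstrate on a scratch file; the real target would be the
# flink-*-taskexecutor-*.out files under Flink's log dir.
echo "lots of stdout" > /tmp/demo.out
truncate -s 0 /tmp/demo.out   # empties the file without removing it
wc -c < /tmp/demo.out         # now 0 bytes
```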
Hi,
I'm fetching data from Kafka topics, converting it to chunks of <= 1 MB,
and sinking them to a Kinesis data stream.
The streaming job is functional; however, I see bursts of data in the
Kinesis stream with intermittent dips where the data received is 0. I'm
attaching the configuration parameters for kinesi
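Since the configuration itself got cut off: burstiness with the 1.13 Kinesis producer is typically tuned through the KPL passthrough properties handed to the producer. A hedged sketch of the kind of settings involved — the key names are KPL configuration keys as I understand them, and the values are illustrative only, not recommendations:

```java
import java.util.Properties;

public class KinesisProducerTuning {

    public static Properties build() {
        Properties producerConfig = new Properties();
        // How long records may sit in the KPL buffer before being flushed;
        // lower values smooth out bursts at the cost of more API calls.
        producerConfig.put("RecordMaxBufferedTime", "100");
        // Cap the per-shard rate (percentage) to soften throttling spikes.
        producerConfig.put("RateLimit", "80");
        // Let the KPL aggregate small records into fewer Kinesis records.
        producerConfig.put("AggregationEnabled", "true");
        return producerConfig;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("RecordMaxBufferedTime"));
    }
}
```

Whether these are the right knobs depends on where the dips come from (shard throttling vs. upstream starvation), so it is worth correlating the dips with the producer's own metrics first.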
Hi,
I am trying to run a job in my local cluster and facing this issue.
org.apache.flink.client.program.ProgramInvocationException: The main method
caused an error: Failed to execute job 'Tracer Processor'.
at
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.
Hi,
I'm running a job on a local Flink cluster, but the metrics (Bytes
Received, Records Received, Bytes Sent, Backpressure) all show 0 in the
Flink UI even though I'm receiving data in the sink.
Do I need to configure something additionally to see these metrics work in
real time?
Hi,
I am using a Kafka source with a Kinesis sink, and the rate of data coming
in is not the same as the rate flowing out; hence the need to configure a
relatively large queue to hold the data before backpressuring. Which
memory configuration corresponds to this that I'll need to configure?
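The in-flight buffer between operators is Flink's network memory. If a larger buffer is wanted before backpressure kicks in, these are the flink-conf.yaml keys that size it in 1.13 (values below are illustrative, not recommendations):

```yaml
# flink-conf.yaml -- illustrative values only
taskmanager.memory.network.fraction: 0.2   # share of total Flink memory
taskmanager.memory.network.min: 128mb
taskmanager.memory.network.max: 1gb
```

Note that a bigger network buffer only delays backpressure; if the sink is persistently slower than the source, the queue will still fill eventually.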
Hi,
We are using Flink version 1.13 with a Kafka source and a Kinesis sink with
a parallelism of 3.
On submitting the job I get this error:
Could not copy native binaries to temp directory
/tmp/amazon-kinesis-producer-native-binaries
followed by "permission denied", even though all the permissions ha
Hi,
I'm running a Flink application on a YARN cluster and it is giving me this
error; it works fine on a standalone cluster. Any idea what could be
causing this?
Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.flink.client.deployment.application.ClassPathPackagedProgramRetriever.newBu
Hi Folks,
Would appreciate it if someone could help me out with this!
Cheers
On Thu, May 19, 2022 at 1:49 PM Zain Haider Nemati wrote:
> Hi,
> Im running flink application on yarn cluster it is giving me this error,
> it is working fine on standalone cluster. Any idea what could b
Hey All,
How can I check logs for my job when it is running in application mode via
YARN?
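For what it's worth, the two usual routes: while the job runs, the application's tracking URL in the YARN ResourceManager UI proxies through to the Flink web UI (which has the logs); during or after the run, aggregated logs can also be pulled with the YARN CLI. The application ID below is a placeholder:

```shell
# Placeholder application ID -- take the real one from `yarn application -list`
yarn logs -applicationId application_1652900000000_0001
```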
Hi,
I'm seeing this behaviour in my Flink job; what can I do to remove this
bottleneck?
[image: image.png]
> Also, if possible, you can try to optimize your code in the flatMap node.
>
> [1]:
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/config/#table-exec-resource-default-parallelism
>
> Best regards,
> Yuxia
>
>>> …the Yarn UI to search the submitted job
>>> with the JobID?
>>>
>>> Best,
>>> Shengkai
>>>
>>> On Fri, May 20, 2022 at 10:23, Weihua Hu wrote:
>>>
>>>> Hi,
>>>> You can get the logs from Flink Web UI if job is running.
>>>> Best,
>>>> Weihua
>>>>
>>>> On Thu, May 19, 2022 at 10:56 PM, Zain Haider Nemati wrote:
>>>>
>>>> Hey All,
>>>> How can I check logs for my job when it is running in application mode
>>>> via yarn
>>>>
Hi,
I'm running a job with a Kafka source pulling in millions of records with
parallelism 16, and I'm seeing heap memory on the task manager overflowing
after the job has run for some time, no matter how much I increase the
memory allocation. Can someone help me out in this regard?
Job stats and memory con
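Heap growth that continues no matter how much memory is allocated often points at state growth (e.g. unbounded keyed state, missing state TTL, or a collection accumulating in user code) rather than sizing. For reference, though, these are the main 1.13 task-manager memory knobs in flink-conf.yaml (values illustrative only):

```yaml
# flink-conf.yaml -- illustrative values only
taskmanager.memory.process.size: 4096m
taskmanager.memory.task.heap.size: 2048m   # explicit task heap, if needed
taskmanager.memory.managed.fraction: 0.3   # managed (off-heap) share
```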
Hi Folks,
I have data coming in this format:
{
  "data": {
    "oid__id": "61de4f26f01131783f162453",
    "array_coordinates": "[ { \"speed\" : \"xxx\", \"accuracy\" :
\"xxx\", \"bearing\" : \"xxx\", \"altitude\" : \"xxx\", \"longitude\" :
\"xxx\", \"latitude\" : \"xxx\", \"dateTimeS
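Note that the `array_coordinates` value is a JSON array serialized into a string (hence the escaped quotes), so it has to be parsed twice: once as part of the envelope, then again on its own. With a JSON library such as Jackson that is two parse calls (one on the envelope, one on the string field); the stdlib-only sketch below, with a made-up class name, shows just the unescaping step on the raw field value:

```java
// Hypothetical helper: turns the escaped field value back into standalone
// JSON. In practice a JSON library's string decoding does this for you
// when you read the envelope; this shows why a second parse is needed.
public class NestedJsonField {

    public static String unescape(String field) {
        // Turn \" back into " so the field parses as JSON on its own.
        return field.replace("\\\"", "\"");
    }

    public static void main(String[] args) {
        String field = "[ { \\\"speed\\\" : \\\"10.5\\\" } ]";
        System.out.println(unescape(field)); // [ { "speed" : "10.5" } ]
    }
}
```

After unescaping, the result is an ordinary JSON array that can be fed to any parser or mapped onto a POJO list.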
Hi,
I'm getting this error in YARN application mode when submitting my job:
Caused by: java.lang.ClassCastException: cannot assign instance of
org.apache.commons.collections.map.LinkedMap to field
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.pendingOffsetsToCommit
of type org
Hi,
Which port does the Flink UI run on in application mode?
If I am running 5 YARN jobs in application mode, would the UI be the same
for each, or on a different port for each?