Hi,
how can I get the UserId from the message properties into my DataStream?
I can read the userId if I extend the RMQSource class:
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
String userId = delivery.getProperties().getUserId();
But how can I provide this to my DataStream?
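My current idea is a custom SourceFunction that talks to RabbitMQ directly and emits a tuple of (userId, body), roughly like this (untested sketch; host and queue name are placeholders, and it bypasses the checkpointing/acking that RMQSource provides):

// imports: com.rabbitmq.client.*, org.apache.flink.streaming.api.functions.source.RichSourceFunction,
//          org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext,
//          org.apache.flink.api.java.tuple.Tuple2, org.apache.flink.configuration.Configuration
public class UserIdRMQSource extends RichSourceFunction<Tuple2<String, byte[]>> {

    private volatile boolean running = true;
    private transient Connection connection;
    private transient Channel channel;

    @Override
    public void open(Configuration parameters) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq-host"); // placeholder
        connection = factory.newConnection();
        channel = connection.createChannel();
    }

    @Override
    public void run(SourceContext<Tuple2<String, byte[]>> ctx) throws Exception {
        QueueingConsumer consumer = new QueueingConsumer(channel);
        channel.basicConsume("my-queue", true, consumer); // placeholder queue, auto-ack
        while (running) {
            QueueingConsumer.Delivery delivery = consumer.nextDelivery();
            String userId = delivery.getProperties().getUserId();
            ctx.collect(Tuple2.of(userId, delivery.getBody()));
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    @Override
    public void close() throws Exception {
        if (channel != null) channel.close();
        if (connection != null) connection.close();
    }
}

Is that the right direction, or is there a way to get the properties through the existing RMQSource?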
Best regards
Hi,
what can be the reasons for the following exception?
We are using Flink with a Redis sink, but from time to time the Flink job
fails with the following exception.
Thanks, Builder.
10/13/2018 15:37:48 Flat Map -> (Sink: Unnamed, Sink: Unnamed)(9/10)
switched to FAILED
redis.clients.jed
Hi,
my Flink job fails repeatedly (sometimes after minutes, sometimes after
hours) with the
following exception.
Flink run configuration:
run with yarn: -yn 2 -ys 5 -yjm 8192 -ymt 12288
streaming-job: kafka source and redis sink
The program finished with the following exception:
org.apache.fl
Hi,
So far I have added my keytab and principal in the flink-conf.yaml:
security.kerberos.login.keytab:
security.kerberos.login.principal:
But is there a way that I can pass this via the "start script", i.e. flink
run with yarn-cluster?
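For example, something like this is what I'm looking for (not sure whether the security.kerberos.* options are actually picked up as -yD dynamic properties; paths and principal are placeholders):

flink run -m yarn-cluster \
  -yD security.kerberos.login.keytab=/path/to/my.keytab \
  -yD security.kerberos.login.principal=myuser@MY.REALM \
  ./my-job.jar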
Thanks!
Hi,
what is the recommended way to implement the following use-case for
DataStream:
One data source, the same map() functions for parsing and normalization, a
different map() function per output format, and two different sinks for the
output?
The (same) data must be stored in both sinks.
And I would prefer to do it in one job.
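Something like the following is what I have in mind, a single job with one source, shared parsing/normalization and two branches (sketch; the types, ParseMap/NormalizeMap/FormatAMap/FormatBMap and the sinks are placeholders):

DataStream<Raw> raw = env.addSource(mySource);

DataStream<Normalized> normalized = raw
    .map(new ParseMap())       // shared parsing
    .map(new NormalizeMap());  // shared normalization

// branch 1: format for sink A
normalized.map(new FormatAMap()).addSink(sinkA);

// branch 2: same data, different format for sink B
normalized.map(new FormatBMap()).addSink(sinkB);

Is that the recommended way, or should I use something else to fan out to the two sinks?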
Hi,
what is the preferred way to write streaming data to HBase?
Rolling File Sink or Streaming File Sink?
How can I configure this (opening the connection with the conf, and handling
the writes (key, data))?
What do I have to consider about the partitions? I would prefer one write per
partition.
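At the moment I'm thinking about a RichSinkFunction like this (sketch; table, column family and qualifier are placeholders, no buffering or error handling yet):

// imports: org.apache.flink.streaming.api.functions.sink.RichSinkFunction, org.apache.flink.configuration.Configuration,
//          org.apache.flink.api.java.tuple.Tuple2,
//          org.apache.hadoop.hbase.*, org.apache.hadoop.hbase.client.*, org.apache.hadoop.hbase.util.Bytes
public class HBaseSink extends RichSinkFunction<Tuple2<String, String>> {

    private transient Connection connection;
    private transient Table table;

    @Override
    public void open(Configuration parameters) throws Exception {
        org.apache.hadoop.conf.Configuration hbaseConf = HBaseConfiguration.create();
        connection = ConnectionFactory.createConnection(hbaseConf);
        table = connection.getTable(TableName.valueOf("my_table")); // placeholder table
    }

    @Override
    public void invoke(Tuple2<String, String> value, Context context) throws Exception {
        Put put = new Put(Bytes.toBytes(value.f0)); // f0 = row key
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("data"), Bytes.toBytes(value.f1));
        table.put(put);
    }

    @Override
    public void close() throws Exception {
        if (table != null) table.close();
        if (connection != null) connection.close();
    }
}

For "one write per partition" I guess each parallel instance could buffer and flush the Puts in batches instead of writing per record, but I'm not sure that is the right approach.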
Thanks!
Marke
Hi,
I'm using the AbstractDeserializationSchema for my RabbitMQ source[1] and
try to deserialize the XML messages with JAXB. The Flink job is running
on YARN.
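My schema looks roughly like this (simplified; MyEvent is my JAXB-annotated class):

// imports: org.apache.flink.api.common.serialization.AbstractDeserializationSchema,
//          javax.xml.bind.*, java.io.ByteArrayInputStream, java.io.IOException
public class XmlDeserializationSchema extends AbstractDeserializationSchema<MyEvent> {

    private transient JAXBContext jaxbContext;

    @Override
    public MyEvent deserialize(byte[] message) throws IOException {
        try {
            if (jaxbContext == null) {
                jaxbContext = JAXBContext.newInstance(MyEvent.class);
            }
            Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
            return (MyEvent) unmarshaller.unmarshal(new ByteArrayInputStream(message));
        } catch (JAXBException e) {
            throw new IOException("Could not unmarshal XML message", e);
        }
    }
}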
After the job was started I get the following exception:
javax.xml.bind.JAXBException: ClassCastException: attempting to cast
jar:file:/usr/jav
Hi,
we are using a RabbitMQ queue as streaming source.
Sometimes (if the queue contains a lot of messages) we get the following ERROR:
ERROR org.apache.hadoop.yarn.client.api.impl.NMClientImpl -
Failed to stop Container container_1541828054499_0284_01_04 when
stopping NMClientImpl
and som
Hi,
I tried to find something about "flink automatic scaling": is that
available and how does it work? Is there any documentation or other
resources?
And especially, how does it work with YARN?
We are using Flink 1.6.1 with YARN; are the parameters (-yn,
-ys) still relevant?
Thanks for your support.
Hi,
I get the following WARN and exception in the JobManager logs (the job
continues).
Why do I get this exception and what do I have to consider?
I have a Flink streaming job which writes the data via OutputFormat to HBase.
2018-11-25 12:08:34,721 WARN org.apache.hadoop.util.NativeCodeLoader
Hi,
I'm trying to run Flink with Docker (docker-compose) and the job argument
"config-dev.properties". But it seems that the job arguments are not
available:
docker-compose.yml
version: '2'
services:
  job-cluster:
    image: ${FLINK_DOCKER_IMAGE_NAME:-timeseries-v1}
    ports:
      - '8081:8081'
Hi,
I have a question regarding the "Broadcast State Pattern".
My job consumes two streams (Kafka, RabbitMQ); on one of the streams a lot
of data arrives continuously [1]. On the other very little and rarely [2]. I'm
using the Broadcast State pattern, because stream [2] is updating data
which are
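My current skeleton looks roughly like this (simplified; the types, getKey() and enrich() are placeholders):

// control stream [2]: few, rare updates -> broadcast
MapStateDescriptor<String, ControlData> controlDesc =
        new MapStateDescriptor<>("control", Types.STRING, TypeInformation.of(ControlData.class));
BroadcastStream<ControlData> controlBroadcast = controlStream.broadcast(controlDesc);

// data stream [1]: high volume, connected with the broadcast state
dataStream
    .connect(controlBroadcast)
    .process(new BroadcastProcessFunction<Event, ControlData, Enriched>() {
        @Override
        public void processElement(Event event, ReadOnlyContext ctx, Collector<Enriched> out) throws Exception {
            ControlData cd = ctx.getBroadcastState(controlDesc).get(event.getKey());
            out.collect(enrich(event, cd)); // enrich() is a placeholder
        }

        @Override
        public void processBroadcastElement(ControlData cd, Context ctx, Collector<Enriched> out) throws Exception {
            ctx.getBroadcastState(controlDesc).put(cd.getKey(), cd);
        }
    });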
Hi,
I'm using a simple streaming app with processing time and without state.
The app reads from Kafka, transforms the data and writes the data to the
storage (Redis).
But I see an interesting behavior: a few records are getting through very
slowly.
Do you have any idea why this could be?
Best,
Marke
Hi,
I'm using a Flink streaming job which reads from Kafka and writes to HBase
with the OutputFormat, like:
https://github.com/apache/flink/blob/master/flink-connectors/flink-hbase/src/test/java/org/apache/flink/addons/hbase/example/HBaseWriteStreamExample.java
But after a certain time, the job end
Hello,
I'm using Flink 1.6.1 for streaming. In addition I need access to a
storage layer with Kerberos auth. I added the following parameters in the
flink-conf.yaml:
security.kerberos.login.use-ticket-cache: true
security.kerberos.login.keytab: /.../*.keytab
security.kerberos.login.principal: *@*
Hi,
is it possible that my Flink Kafka consumer reads only from a part of all
Kafka partitions? What I mean is that I can use Kafka to route certain
messages into specific partitions. And with my Flink job I would only
consume these partitions (not all topic partitions).
Thanks!
Marke
Hi,
I want to implement the following behavior:
[image: image.png]
There are a lot of ingest signals with unique IDs, and I would like to use a
special function per set of signals. E.g. Signal1, Signal2 ==> function1;
Signal3, Signal4 ==> function2.
What is the recommended way to implement this pattern?
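My first idea was a ProcessFunction with side outputs, roughly like this (sketch; Signal, getId() and the function classes are placeholders):

final OutputTag<Signal> function1Tag = new OutputTag<Signal>("function1") {};
final OutputTag<Signal> function2Tag = new OutputTag<Signal>("function2") {};

SingleOutputStreamOperator<Signal> routed = signals
    .process(new ProcessFunction<Signal, Signal>() {
        @Override
        public void processElement(Signal signal, Context ctx, Collector<Signal> out) {
            // route by signal id to the matching side output
            if (signal.getId().equals("Signal1") || signal.getId().equals("Signal2")) {
                ctx.output(function1Tag, signal);
            } else if (signal.getId().equals("Signal3") || signal.getId().equals("Signal4")) {
                ctx.output(function2Tag, signal);
            }
        }
    });

routed.getSideOutput(function1Tag).map(new Function1());
routed.getSideOutput(function2Tag).map(new Function2());

Or would a simple filter() per signal set (or keyBy on the id) be the better approach?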
T