Thanks, Fabian,
I opened a JIRA ticket and I'd like to work on it if people think this
would be an improvement:
https://issues.apache.org/jira/browse/FLINK-8599
Best,
Chengzhi
On Wed, Feb 7, 2018 at 4:17 AM, Fabian Hueske wrote:
> Hi Chengzhi Zhao,
>
> I think this is an issue with the
We actually got it working. Essentially, it's an implementation of
HadoopFileSystem, and was written with the idea that it can be used with Spark
(since it has broader adoption than Flink as of now). We managed to get it
configured, and found the latency to be much lower than by using the s3
con
Thanks for the info!
-----Original Message-----
From: Piotr Nowojski [mailto:pi...@data-artisans.com]
Sent: Friday, February 02, 2018 4:37 PM
To: Marchant, Hayden [ICG-IT]
Cc: user@flink.apache.org
Subject: Re: Latest version of Kafka
Hi,
Flink, as of now, provides only a connector for Kafka 0.
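[For reference, a minimal sketch of consuming with the FlinkKafkaConsumer011 connector that other messages in this digest mention; the broker address, group id, and topic are placeholders:]

    import java.util.Properties
    import org.apache.flink.api.common.serialization.SimpleStringSchema
    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer011

    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val props = new Properties()
    props.setProperty("bootstrap.servers", "localhost:9092") // placeholder broker
    props.setProperty("group.id", "example-group")           // placeholder group id

    // the connector for Kafka 0.11 brokers; FlinkKafkaConsumer010 is the 0.10 variant
    val stream = env.addSource(
      new FlinkKafkaConsumer011[String]("example-topic", new SimpleStringSchema(), props))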
Thanks for all the ideas!!
From: Steven Wu [mailto:stevenz...@gmail.com]
Sent: Tuesday, February 06, 2018 3:46 AM
To: Stefan Richter
Cc: Marchant, Hayden [ICG-IT] ;
user@flink.apache.org; Aljoscha Krettek
Subject: Re: Joining data in Streaming
There is also a discussion of side inputs:
https://c
For future reference, the created JIRA:
https://issues.apache.org/jira/browse/FLINK-8580
On 07.02.2018 10:48, LINZ, Arnaud wrote:
Hi,
Without any other solution, I made a shell script that copies the
original content of FLINK_CONF_DIR into a temporary directory, modifies
flink-conf.yaml to set yarn.p
Hi,
I'm not aware of a good example but I can give you some pointers.
- Implement the SourceFunction interface (see the sketch after this
message). This function will not be executed in parallel, so you don't
have to worry about parallelism.
- Since you said you want to run it as a batch job, you might not need to
implement checkp
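[For illustration, a minimal sketch of such a non-parallel source; the class name and file handling are made up, only the SourceFunction interface is Flink's:]

    import org.apache.flink.streaming.api.functions.source.SourceFunction

    // Hypothetical non-parallel source that emits the lines of a fixed
    // list of files, one file after the other.
    class OrderedFileSource(paths: Seq[String]) extends SourceFunction[String] {

      @volatile private var running = true

      override def run(ctx: SourceFunction.SourceContext[String]): Unit = {
        for (path <- paths if running) {
          val src = scala.io.Source.fromFile(path)
          try {
            for (line <- src.getLines() if running) {
              ctx.collect(line) // emit each line as a record
            }
          } finally src.close()
        }
      }

      override def cancel(): Unit = { running = false }
    }

[It would be used with env.addSource(new OrderedFileSource(Seq("a.log", "b.log"))).]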
Hi Fabian,
Thank you for the suggestion. We will consider it.
Would be glad to hear other ideas on how to handle such a requirement.
Thanks again,
Tovi
From: Fabian Hueske [mailto:fhue...@gmail.com]
Sent: Wednesday, 07 February 2018 11:47
To: Sofer, Tovi [ICG-IT]
Cc: user@flink.apache.org; Tzu-Li (Gordon) T
This could be a bug in Kafka's JmxReporter class:
https://issues.apache.org/jira/browse/KAFKA-6307
On 07.02.2018 13:37, Edward wrote:
We are using FlinkKafkaConsumer011 and FlinkKafkaProducer011, but we also
experienced the same behavior with FlinkKafkaConsumer010 and
FlinkKafkaProducer010.
-
Hi
Thanks for the reply, but as I am a newbie with Flink, do you have any
good Scala code examples for this?
Esa
From: Fabian Hueske [mailto:fhue...@gmail.com]
Sent: Wednesday, February 7, 2018 11:21 AM
To: Esa Heikkinen
Cc: user@flink.apache.org
Subject: Re: Flink CEP with files and n
We are using FlinkKafkaConsumer011 and FlinkKafkaProducer011, but we also
experienced the same behavior with FlinkKafkaConsumer010 and
FlinkKafkaProducer010.
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi Piotrek
Yes, I've compared rebalance with rescale. I adjusted the parallelism of the
source and target operators so that rescale would behave more or less like
the "local or shuffle grouping" option. I was able to show that for my use
case a "local or shuffle grouping" option would yield at leas
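[For readers following along, a rough sketch of the two partitioners being compared; the source and parallelism values are placeholders:]

    import org.apache.flink.streaming.api.scala._

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val source = env.socketTextStream("localhost", 9999) // placeholder source

    // rebalance: round-robin across ALL downstream subtasks (full network shuffle)
    val rebalanced = source.rebalance.map(_.toUpperCase)

    // rescale: round-robin only within local groups of subtasks; when the map
    // parallelism is a multiple of the source parallelism this keeps traffic on
    // fewer channels, approximating a "local or shuffle grouping"
    val rescaled = source.rescale.map(_.toUpperCase).setParallelism(8)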
Ok thanks! I should have seen this. Sorry.
--
Christophe
On Wed, Feb 7, 2018 at 10:27 AM, Tzu-Li (Gordon) Tai
wrote:
> Hi Christophe,
>
> Yes, you can achieve writing to different topics per-message using the
> `KeyedSerializationSchema` provided to the Kafka producer.
> The schema interface ha
Hi,
Without any other solution, I made a shell script that copies the original
content of FLINK_CONF_DIR into a temporary directory, modifies flink-conf.yaml
to set yarn.properties-file.location, and changes FLINK_CONF_DIR to that
temporary directory before executing flink.
I am now able to select the container I wa
Hi Tovi,
I've been thinking about this idea.
It might be possible, but I think you have to implement a custom source for
this.
I don't think it would work to have the JMSConsumer, KafkaSink, and
RecoverySource in separate operators because then it would not be
possible to share the Kafka wri
Hi Gregory,
IMO, that would be a viable approach.
You have to ensure that all operators (except the sources) have the same
UIDs and state types, but I guess you don't want to change the application
logic and only want to replace the sources.
What might be tricky is to perform the savepoint at the right po
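[A sketch of what pinning the UIDs could look like; the operator names and the stateful logic are made up, only the source is swapped:]

    import org.apache.flink.streaming.api.scala._

    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val counts = env
      .socketTextStream("localhost", 9999) // replacement source; may get a new UID
      .uid("new-source")
      .keyBy(line => line)
      .mapWithState[(String, Long), Long] { (line, count) =>
        val c = count.getOrElse(0L) + 1
        ((line, c), Some(c)) // stateful operator
      }
      .uid("count-per-key") // must match this operator's UID in the savepoint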
Hi Christophe,
Yes, you can achieve writing to different topics per-message using the
`KeyedSerializationSchema` provided to the Kafka producer.
The schema interface has a `getTargetTopic` method which allows you to override
the default target topic for a given record.
I agree that the method is
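[To make that concrete, a sketch of a routing schema; the (targetTopic, payload) tuple layout is an assumption, the interface and its methods are Flink's:]

    import java.nio.charset.StandardCharsets
    import org.apache.flink.streaming.util.serialization.KeyedSerializationSchema

    // Routes each (targetTopic, payload) record to the topic in its first field.
    class TopicRoutingSchema extends KeyedSerializationSchema[(String, String)] {

      override def serializeKey(element: (String, String)): Array[Byte] =
        null // no message key

      override def serializeValue(element: (String, String)): Array[Byte] =
        element._2.getBytes(StandardCharsets.UTF_8)

      // Overrides the producer's default topic per record; returning null
      // here falls back to the default topic.
      override def getTargetTopic(element: (String, String)): String =
        element._1
    }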
This is expected behavior; we try to load the queryable state classes
via reflection as it is an optional feature.
I'll open a JIRA to make it less verbose if the classes cannot be found,
in which case the stacktrace isn't particularly
interesting anyway.
On 05.02.2018 10:18, Fabian Hueske wrot
Hi Esa,
you can also read files as a stream.
However, you have to be careful about the order in which you read the files and how
you generate watermarks.
The easiest approach is to implement a non-parallel source function that
reads the files in the right order and generates watermarks.
Things become more t
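[For the simple case of a single file whose records are already in timestamp order, a sketch without a custom source; the log line format is an assumption:]

    import org.apache.flink.streaming.api.TimeCharacteristic
    import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor
    import org.apache.flink.streaming.api.scala._

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

    // assumed log line format: "<epochMillis>,<payload>"
    val events = env
      .readTextFile("/path/to/logfile") // reads the file as a bounded stream
      .map { line =>
        val Array(ts, payload) = line.split(",", 2)
        (ts.toLong, payload)
      }
      .assignTimestampsAndWatermarks(
        new AscendingTimestampExtractor[(Long, String)] {
          override def extractAscendingTimestamp(e: (Long, String)): Long = e._1
        })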
Hi Chengzhi Zhao,
I think this is an issue with the ContinuousFileReaderOperator rather than
with the checkpointing algorithm in general.
A source can decide which information to store as state and also how to
handle failures such as file paths that have been put into state but have
been removed f
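[As an illustration of that point (not the actual ContinuousFileReaderOperator code), a source could checkpoint its read position and skip paths that disappeared in the meantime; the class is made up, ListCheckpointed and SourceFunction are Flink's:]

    import java.util.{Collections, List => JList}
    import org.apache.flink.streaming.api.checkpoint.ListCheckpointed
    import org.apache.flink.streaming.api.functions.source.SourceFunction

    class ResilientFileSource(paths: Seq[String])
        extends SourceFunction[String] with ListCheckpointed[Integer] {

      @volatile private var running = true
      private var current = 0 // index of the file being read; checkpointed

      override def run(ctx: SourceFunction.SourceContext[String]): Unit = {
        while (running && current < paths.length) {
          val file = new java.io.File(paths(current))
          if (file.exists()) { // tolerate paths removed since the checkpoint
            val src = scala.io.Source.fromFile(file)
            try {
              for (line <- src.getLines() if running) {
                // hold the checkpoint lock so emission and position stay consistent
                ctx.getCheckpointLock.synchronized { ctx.collect(line) }
              }
            } finally src.close()
          }
          ctx.getCheckpointLock.synchronized { current += 1 }
        }
      }

      override def cancel(): Unit = { running = false }

      // For brevity this tracks only the file index, not the offset within
      // a file, so a restored file is re-read from the beginning.
      override def snapshotState(checkpointId: Long, ts: Long): JList[Integer] =
        Collections.singletonList(Int.box(current))

      override def restoreState(state: JList[Integer]): Unit =
        if (!state.isEmpty) current = state.get(0)
    }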
Hi Gordon, or anyone else reading this,
Still on the idea of consuming a Kafka topic pattern:
I then want to sink the result of the processing in a set of topics
depending on where the original message came from (i.e. if this comes
from origin-topic-1 I will serialize the result in des
Hello
I am trying to use Flink's CEP on log files (as a batch job), not on
streams (in real time).
Is that possible? If yes, do you know of example Scala code for that?
Or should I convert the log files (with timestamps) into streams?
But how would I handle timestamps in Flink?
If I can n