Woohoo, just saw this (was travelling).
Congratulations, Stefan! Looking forward to the promising future of the
state backend.
On Mon, Feb 13, 2017 at 5:20 PM, Stefan Richter wrote:
> Thanks a lot! I feel very happy and will try to help the Flink community as
> well as I can :-)
>
> Best,
> Stefan
Good to hear that.
On which machine are you running your Flink job, and what configuration have
you used for RocksDB?
I am currently running on c3.4xlarge with the predefined options set to
FLASH_SSD_OPTIMIZED.
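For reference, this is roughly how that option set is applied (a minimal
sketch; the checkpoint URI is a placeholder):

import org.apache.flink.contrib.streaming.state.{PredefinedOptions, RocksDBStateBackend}
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// RocksDB needs a URI for checkpoint data, even when tuned for local SSDs
val backend = new RocksDBStateBackend("hdfs:///flink/checkpoints")
// predefined option profile tuned for flash storage, as mentioned above
backend.setPredefinedOptions(PredefinedOptions.FLASH_SSD_OPTIMIZED)
env.setStateBackend(backend)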
Regards,
Vinay Patil
On Thu, Feb 16, 2017 at 10:31 AM, abhishekrs [via Apache Flink User Mailing
List archive.] wrote:
Hi Joe,
you can also insert a MapFunction between the Kafka source and the keyBy to
validate the IDs.
The mapper will be chained and should add only minimal overhead. If you
want to keep the events which were incorrectly deserialized, you can use
split() to move them somewhere, as in the sketch below.
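A minimal sketch of that idea (Event, env, consumer, and FlatMapProcessor
stand in for your own types from the snippet quoted below):

import org.apache.flink.streaming.api.scala._

// returns false instead of throwing when the IDs cannot be deserialized
def idsAreValid(e: Event): Boolean =
  try { e.obj.ID1(); e.obj.ID2(); e.obj.ID3(); true }
  catch { case _: Exception => false }

val tagged = env
  .addSource(consumer).name("KafkaStream")
  .split((e: Event) => if (idsAreValid(e)) Seq("valid") else Seq("invalid"))

// keep the bad events somewhere instead of crashing the job
tagged.select("invalid").print()

tagged.select("valid")
  .keyBy(x => (x.obj.ID1(), x.obj.ID2(), x.obj.ID3()))
  .flatMap(new FlatMapProcessor)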
Validation in
Is there a way to reload a log4j.properties file without stopping and starting the job server?
Sorry, that was a red herring. Checkpointing was not getting triggered
because we never enabled it.
Our application is inherently restartable because we can use our own output
to rebuild state. All that is working fine for us - including restart
semantics - without having to worry about upgrading
If I am processing a stream in the following manner:
val stream = env.addSource(consumer).name("KafkaStream")
  .keyBy(x => (x.obj.ID1(), x.obj.ID2(), x.obj.ID3()))
  .flatMap(new FlatMapProcessor)
and the IDs bomb out because of deserialization issues, my job crashes with a
'Could not extract key' error.
Hi Abhishek,
You can disable checkpointing by simply not calling (or commenting out)
env.enableCheckpointing; it is off by default.
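For example (a minimal sketch; the checkpoint path is a placeholder), you can
still use RocksDB as the state backend without ever enabling checkpointing:

import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
// the backend constructor wants a checkpoint URI, but no checkpoints are
// drawn as long as env.enableCheckpointing(...) is never called
env.setStateBackend(new RocksDBStateBackend("file:///tmp/flink-rocksdb"))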
What do you mean by "We are trying to do application level checkpointing"?
Regards,
Vinay Patil
On Thu, Feb 16, 2017 at 12:42 AM, abhishekrs [via Apache Flink User Mailing
List archive.] wrote:
> Is it possibl
See the tutorial at the beginning of:
flink-connectors/flink-jdbc/src/main/java/org/apache/flink/api/java/io/jdbc/JDBCInputFormat.java
Looks like plugging in "org.h2.Driver" should do.
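For writing, something along these lines should work (a minimal sketch; the
URL, table, and fields are placeholders, and rows is assumed to be a
DataSet[Row]):

import org.apache.flink.api.java.io.jdbc.JDBCOutputFormat

// flink-jdbc goes through plain JDBC, so any driver on the classpath works
val sink = JDBCOutputFormat.buildJDBCOutputFormat()
  .setDrivername("org.h2.Driver")
  .setDBUrl("jdbc:h2:file:./exampledb")
  .setQuery("INSERT INTO events (id, payload) VALUES (?, ?)")
  .finish()

rows.output(sink)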
On Wed, Feb 15, 2017 at 4:59 PM, Punit Tandel
wrote:
> Hi All
>
> Does flink jdbc support writing the data in
Hi All
Does the Flink JDBC connector support writing data into an H2 database?
Thanks
Punit
Hi All,
I'm trying to run a streaming job on Flink 1.2 with 3 task managers and 12
task slots. Irrespective of the parallelism that I give, it always fails with
the below error, and I found a JIRA issue corresponding to this problem. Can
I know by when this will be resolved, since I'
Hello,
Regarding the Filesystem abstraction support, we are planning to use a
distributed file system which complies with the Hadoop Compatible File System
(HCFS) standard in place of standard HDFS.
According to the documentation
(https://ci.apache.org/projects/flink/flink-docs-release-1.3/internals
Is it possible to set the state backend to RocksDB without asking it to
checkpoint?
We are trying to do application level checkpointing (since it gives us better
flexibility to upgrade our Flink pipeline and also restore state in an
application-specific, upgrade-friendly way). So we don’t really need
Hi Guys,
Can anyone please help me with this issue
Regards,
Vinay Patil
On Wed, Feb 15, 2017 at 6:17 PM, Vinay Patil
wrote:
> Hi Ted,
>
> I have 3 boxes in my pipeline: the 1st and 2nd boxes contain a source and an
> S3 sink, and the 3rd box is a window operator followed by chained operators
> and an S3
Hi Ted,
I have 3 boxes in my pipeline: the 1st and 2nd boxes contain a source and an
S3 sink, and the 3rd box is a window operator followed by chained operators
and an S3 sink.
In the details link section I can see that the S3 sink is taking time
for the acknowledgement and it is not even going to the wi
Hi Geoffrey,
I think the "per job yarn cluster" feature does probably not work for one
main() function submitting multiple jobs.
If you have a yarn session + regular "flink run" it should work.
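Roughly, that workflow looks like this (the sizing flags and jar name are
illustrative):

# start a long-running YARN session (3 TaskManagers, 4 slots each)
./bin/yarn-session.sh -n 3 -s 4
# then submit each job to that session with the regular CLI
./bin/flink run my-pipeline.jar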
On Mon, Feb 13, 2017 at 5:37 PM, Geoffrey Mon wrote:
> Just to clarify, is Flink designed to allow su
What did the More Details link say?
Thanks
> On Feb 15, 2017, at 3:11 AM, vinay patil wrote:
>
> Hi,
>
> I have set the checkpointing interval to 6 secs and the minimum pause between
> checkpoints to 5 secs; while testing the pipeline I have observed that
> for some checkpoints it is taking
Hi,
I have set the checkpointing interval to 6 secs and the minimum pause between
checkpoints to 5 secs; while testing the pipeline I have observed that some
checkpoints take a long time. As you can see in the attached snapshot,
checkpoint id 19 took the maximum time before it failed.
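For reference, that configuration corresponds roughly to this sketch:

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val env = StreamExecutionEnvironment.getExecutionEnvironment
env.enableCheckpointing(6000)                                // 6 sec interval
env.getCheckpointConfig.setMinPauseBetweenCheckpoints(5000)  // 5 sec min pause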
Thanks Timo, removing `@BeanProperty` gives a "no getters, no setters"
error.
On Wed, Feb 15, 2017 at 3:45 PM, Timo Walther wrote:
> Forget what I said about omitting `var`, this would remove the field from
> the POJO. I opened a PR for fixing the issue: https://github.com/apache/
> flink/pu
Hi,
I think the clean solution would be to use raw keyed state once it becomes
available. In the meantime, your solution could work. However, you should be
aware that your approach relies not on a contract but on an implementation
detail that *could* change between versions and break your code.
Forget what I said about omitting `var`; this would remove the field
from the POJO. I opened a PR for fixing the issue:
https://github.com/apache/flink/pull/3318
As a workaround: If you just want to have a POJO for the Cassandra Sink
you don't need to add the `@BeanProperty` annotation. Flink
Hello,
There is an open PR about adding support for case classes to the
cassandra sinks: https://github.com/apache/flink/pull/2633
You would have to checkout the branch and build it yourself. If this
works for you it would be great if you could also give some
feedback either here or in the P
Hi Adarsh,
I looked into your issue. The problem is that `var` generates
Scala-style getters/setters and the annotation generates Java-style
getters/setters. Right now Flink only supports one style in a POJO; I
don't know why we have this restriction. I will work on a fix for that.
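To illustrate the clash, a minimal sketch (the class and fields are made up):

import scala.beans.BeanProperty

// `var` alone generates Scala-style accessors (value and value_=);
// @BeanProperty additionally generates Java-style getValue/setValue,
// and Flink's POJO analysis currently accepts only one of the two styles
class Measurement(@BeanProperty var id: String,
                  @BeanProperty var value: Double) {
  def this() = this(null, 0.0) // POJOs need a public no-arg constructor
}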
Is it poss
Thanks Fabian, I need to sink data into Cassandra and a direct sink with a
case class is not available (correct me if I am wrong).
If we use tuples then we are restricted to 22 fields.
What do you suggest here?
On Wed, Feb 15, 2017 at 2:32 PM, Fabian Hueske wrote:
> Hi Adarsh,
>
> I think this is th
Hi Adarsh,
I think this is the same bug. I'm afraid you have to wait until the problem
is fixed.
The only workaround would be to use a different data type, for example a
case class.
Best, Fabian
2017-02-15 6:08 GMT+01:00 Adarsh Jain :
> Any help will be highly appreciated, I am stuck on this one.
Hi Jordan,
it is not possible to generate watermarks per key. This feature has been
requested a couple of times but I think there are no plans to implement
that.
As far as I understand, the management of watermarks would be quite
expensive (maintaining several watermarks, purging watermarks of exp
Hi all,
I'm reading Flink's docs and am curious about the fault tolerance of batch
processing jobs. It seems that when one task execution fails, the whole job
will be restarted. Is that true? If so, isn't it impractical to deploy large
Flink batch jobs?
--
Liu, Renjie
Software Engineer, MVAD