Hi Greg
My sink is a HashMap (acting as a DB store), and I am reading from this
HashMap in another query stream. I need one instance/slot of the sink per
task manager, so that every subtask refers to the same instance.
So, is there any way to restrict it to 1 slot per task manager?
Thanks
Pushpendra Jais
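A minimal configuration sketch for this (assuming the standard flink-conf.yaml): with each TaskManager restricted to a single slot, a sink whose parallelism equals the number of TaskManagers runs exactly one subtask per TaskManager process.

```yaml
# flink-conf.yaml — restrict each TaskManager to a single slot.
# With N TaskManagers, a sink with parallelism N then runs exactly
# one subtask per TaskManager JVM.
taskmanager.numberOfTaskSlots: 1
```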
Thank you for quickly fixing it!
On 20 September 2016 at 17:17, Fabian Hueske wrote:
> Hi Yukun,
>
> I debugged this issue and found that this is a bug in the serialization of
> the StateDescriptor.
> I have created FLINK-4640 [1] to resolve the issue.
>
> Thanks for reporting the issue.
>
> Best, Fabian
Digging deeper, it looks like this was a configuration issue. I was pulling
from a different cluster that, in fact, has only 10 partitions for this topic.
(╯°□°)╯︵ ┻━┻
From: Curtis Wilde
Reply-To: "user@flink.apache.org"
Date: Tuesday, September 20, 2016 at 2:36 PM
To: "user@flink.apache.org"
Hi All
I am trying to get job accumulators. (I am aware that I can get the
accumulators through the REST API as well, but I wanted to avoid JSON
parsing.)
Looking at JobAccumulatorsHandler, I am trying to get the execution graph
for the currently running job. The following is my code:
InetSocketAddress i
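If JSON parsing is the only objection, the REST route may still be the simplest path. A hedged sketch using only the JDK (the `/jobs/:jobid/accumulators` path follows Flink's monitoring REST API; host, port, and job id are placeholders):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Illustrative sketch: fetch a job's accumulators from the JobManager's
// monitoring REST endpoint. buildUrl is split out so the path construction
// is explicit and easy to check.
public class AccumulatorFetcher {
    static String buildUrl(String host, int port, String jobId) {
        return "http://" + host + ":" + port + "/jobs/" + jobId + "/accumulators";
    }

    static String fetch(String host, int port, String jobId) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(buildUrl(host, port, jobId)).openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            StringBuilder sb = new StringBuilder();
            for (String line; (line = in.readLine()) != null; ) {
                sb.append(line);
            }
            return sb.toString(); // raw JSON payload
        }
    }
}
```

The response is still JSON, but reading one well-known endpoint avoids depending on Flink's internal handler classes.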
Enabling logging, I see that it is only getting 10 partitions for some reason
(see the last two lines below).
2016-09-20 14:27:47 INFO ConsumerConfig:165 - ConsumerConfig values:
metric.reporters = []
metadata.max.age.ms = 30
value.deserializer = class
org.apache.kafka
Is this based on log messages, or can you confirm that from some
partitions, no data is read?
On Tue, Sep 20, 2016 at 6:03 PM, Curtis Wilde wrote:
> I’m just getting my feet wet with Flink, doing a quick implementation on
> my laptop following the examples.
>
>
>
> I’m consuming from a Kafka 0.
In addition, it supports enabling multiple reporters, so you can have the
same data pushed to multiple systems. It is also very easy to write a new
reporter for any customization you need.
Regards
Sumit Chawla
On Tue, Sep 20, 2016 at 2:10 AM, Chesnay Schepler wrote:
> Hello Eswar,
>
> as far as I'm aware
Fabian
Had a discussion with Frank. As he has no means to reproduce/test the bug,
I will submit a patch for this.
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Problem-with-CEPPatternOperator-when-taskmanager-is-killed-tp9024p9097.html
Sent
I’m just getting my feet wet with Flink, doing a quick implementation on my
laptop following the examples.
I’m consuming from a Kafka 0.9 cluster using FlinkKafkaConsumer09 from a topic
with 50 partitions.
I see offset commit messages for partitions 0-9, but I don’t see offsets being
committed
Hi LF,
this feature (the non-occurrence of an event) can definitely be implemented,
and the community is currently working on it. I think both scenarios
you're describing are legitimate use cases, and Flink's implementation of
the not operator should cover both.
I hope that this feature can be used soon.
Hi Frank,
thanks for sharing your analysis. It indeed pinpoints some of the current
CEP library's shortcomings.
Let me address your points:
1. Lack of not operator
The functionality to express events which must not occur in a pattern is
missing. We currently have a JIRA [1] which addresses exactl
Hi Vinay,
I think the answer depends on what you want to achieve.
Subtasks of an operator are all executed by separate threads; some of them
run in the same JVM, others do not. So if you want to share data across
subtasks being executed in the same process, you can use static variables
bound to the f
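To illustrate the static-variable approach mentioned above, a minimal JDK-only sketch (names are illustrative, not a Flink API): every thread in the same JVM, i.e. every subtask scheduled into that TaskManager process, sees the same map.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: a JVM-wide registry that subtasks running in the
// same TaskManager process could share. Static state is per-JVM (and
// per-classloader), so subtasks in *other* TaskManagers see a different map.
public class SharedRegistry {
    private static final Map<String, String> STORE = new ConcurrentHashMap<>();

    public static void put(String key, String value) {
        STORE.put(key, value);
    }

    public static String get(String key) {
        return STORE.get(key);
    }
}
```

Note the caveat: this shares nothing across machines, is invisible to Flink's checkpointing, and can surprise you under classloader isolation.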
Hi,
could you check what happened to your TaskManagers in the logs? There seems
to be a problem with the connection of the TMs to the JM.
You're right that you don't strictly need HDFS to run a Flink job as long
as you don't want to access HDFS data or write to HDFS.
`netstat -atn` should list y
Hi David,
you should be able to solve this kind of problem with Flink's CEP library.
The important thing here is to define a pattern interval length so that
patterns can time out. Otherwise, you will end up accumulating state which
is never purged. This will eventually cause an OOM exception.
How
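To see why the interval matters, here is a hand-rolled toy (an analogy, not the CEP library itself): pending pattern starts must be evicted once they can no longer complete within the interval, otherwise the map below grows without bound.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Illustrative only: a toy store of pending pattern starts keyed by id.
// Without purge(), entries for patterns that never complete pile up forever —
// the unbounded-state problem that a pattern interval prevents.
public class PendingPatterns {
    private final long intervalMillis;
    private final Map<String, Long> startTimes = new HashMap<>();

    public PendingPatterns(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    public void onStart(String id, long timestamp) {
        startTimes.put(id, timestamp);
    }

    // Drop starts that can no longer complete within the interval.
    public int purge(long now) {
        int removed = 0;
        for (Iterator<Map.Entry<String, Long>> it =
                startTimes.entrySet().iterator(); it.hasNext(); ) {
            if (now - it.next().getValue() > intervalMillis) {
                it.remove();
                removed++;
            }
        }
        return removed;
    }

    public int size() {
        return startTimes.size();
    }
}
```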
Hi Frank,
at the moment, branching patterns are not yet fully supported. This is one
of the things to be added to Flink's CEP library [1].
As a workaround, you can run multiple CEP patterns, one for each path in
your branching pattern, individually. Alternatively, you can try to combine
events fr
Thanks Chesnay & Fabian for the update.
I will create a JIRA issue & open a pull request to fix it.
Thanks,
Swapnil
On Tue, Sep 20, 2016 at 2:54 PM, Fabian Hueske wrote:
> Yes, the condition needs to be fixed.
>
> @Swapnil, would you like to create a JIRA issue and open a pull request to
> fix it?
>
Hi Pushpendra,
This is the expected system behavior. Slots local to the same TaskManager
can transfer buffers in memory. Are you able to also run the Sink with a
parallelism of 4?
Greg
On Tue, Sep 20, 2016 at 6:16 AM, Pushpendra Jaiswal <
pushpendra.jaiswa...@gmail.com> wrote:
> Hi
> I have lau
On Tue, Sep 20, 2016 at 12:49 PM, Maximilian Michels wrote:
> Hi Luis,
>
> That looks like a bug but looking at the code I don't yet see how it may
> occur. We definitely need more information to reproduce it. Do you have an
> example job? Are you using master or a Flink release? Are your Flink
>
Hi Luis,
That looks like a bug but looking at the code I don't yet see how it may
occur. We definitely need more information to reproduce it. Do you have an
example job? Are you using master or a Flink release? Are your Flink
cluster and your job compiled with the exact same version of Flink?
Che
Hi
I have launched 2 task managers, each with 2 slots, and I have set
parallelism 2 for one operator.
I expected this operator to launch on both task managers, using 1 slot
each, but it is launching on only 1 task manager.
Is this the intended behavior?
Thanks
Pushpendra Jaiswal.
On Mon, Sep 19, 2016 at 8:02 PM, Fabian Hueske wrote:
> Hi Luis,
>
> this looks like a bug.
> Can you open a JIRA [1] issue and provide a more detailed description of
> what you do (Environment, DataStream / DataSet, how do you submit the
> program, maybe add a small program that reproduce the pr
Yes, the condition needs to be fixed.
@Swapnil, would you like to create a JIRA issue and open a pull request to
fix it?
Thanks, Fabian
2016-09-20 11:22 GMT+02:00 Chesnay Schepler :
> I would agree that the condition should be changed.
>
>
> On 20.09.2016 10:52, Swapnil Chougule wrote:
>
>> I c
I would agree that the condition should be changed.
On 20.09.2016 10:52, Swapnil Chougule wrote:
I checked the following code in Flink's JDBCOutputFormat while using it in
my project. I found the following snippet:
@Override
public void writeRecord(Row row) throws IOException {
Hi Yukun,
I debugged this issue and found that this is a bug in the serialization of
the StateDescriptor.
I have created FLINK-4640 [1] to resolve the issue.
Thanks for reporting the issue.
Best, Fabian
[1] https://issues.apache.org/jira/browse/FLINK-4640
2016-09-20 10:35 GMT+02:00 Yukun Guo :
Hello Eswar,
as far as I'm aware, the general structure of Flink's metric system
is rather similar to DropWizard's. You can use DropWizard metrics by
creating a simple wrapper; we even ship one for Histograms. Furthermore,
you can also use DropWizard reporters; you only have to extend the
Dr
How are you writing your data to Kafka? In that code you have to make sure
not to write to only one partition of your topic.
On Mon, 19 Sep 2016 at 08:06 Amir Bahmanyari wrote:
> Thanx
> Could you elaborate on writing to all partitions and not just one pls?
> How can I make sure ?
> I see all pa
I checked the following code in Flink's JDBCOutputFormat while using it in
my project. I found the following snippet:
@Override
public void writeRecord(Row row) throws IOException {
    if (typesArray != null && typesArray.length > 0 &&
            typesArray.length == row.productArity()) {
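One reason this condition draws attention: when the configured types and the row width disagree, the check silently falls through to a different code path instead of failing. A hedged sketch of a fail-fast alternative (simplified stand-in, not the actual Flink patch):

```java
// Illustrative sketch: rather than silently taking another branch when the
// configured column types and the row width disagree, fail fast so the
// mismatch surfaces at the first record.
public class RecordWriterSketch {
    private final int[] typesArray;

    public RecordWriterSketch(int[] typesArray) {
        this.typesArray = typesArray;
    }

    public void writeRecord(Object[] row) {
        if (typesArray != null && typesArray.length > 0
                && typesArray.length != row.length) {
            throw new IllegalArgumentException(
                    "Row has " + row.length + " fields but "
                            + typesArray.length + " column types are configured");
        }
        // ... set the prepared-statement parameters, with or without
        // explicit SQL types ...
    }
}
```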
Some detail: if running the FoldFunction on a KeyedStream, everything works
fine. So it must relate to the way WindowedStream handles type extraction.
In case any Flink experts would like to reproduce it, I have created a repo
on Github: github.com/gyk/flink-multimap
On 20 September 2016 at 10:33
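The underlying issue is generic type erasure: the element type of the fold result is gone at runtime unless it is captured explicitly. A JDK-only sketch of the capture trick (the same idea Flink's TypeHint-style anonymous subclasses rely on; names here are illustrative):

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.HashMap;
import java.util.List;

// Illustrative sketch of why type extraction can fail: generic parameters
// are erased at runtime, unless captured through an anonymous subclass,
// whose generic superclass *is* retained by reflection.
public class ErasureDemo {
    static abstract class TypeCapture<T> {
        Type captured() {
            return ((ParameterizedType) getClass().getGenericSuperclass())
                    .getActualTypeArguments()[0];
        }
    }

    public static Type capture() {
        // The "{}" creates an anonymous subclass, freezing T into the class file.
        return new TypeCapture<HashMap<String, List<String>>>() {}.captured();
    }
}
```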