Hi Robert,
Thank you for that information. Can you please let me know when 1.2 is
planned to be released?
Regards,
Vinay Patil
On Wed, Oct 12, 2016 at 4:17 AM, rmetzger0 [via Apache Flink User Mailing
List archive.] wrote:
> Hi,
> I think that pull request will be merged for 1.2.
>
> On Fri, Oct
Hi Kostas:
Thanks for your answer.
So in your previous figure (yesterday), when e3 arrives, e2 should also be
included in the result, right?
--zhangrucong: In the Oct 11 email, e2 arrives at 9:02, e3 arrives at 9:07,
and the aging time is 5 mins. So when e3 arrives, e2 has aged out; e2 is not in the result.
Hi Sunny,
As stated by Fabian, try to see whether including the Postgres classes in
the shaded jar solves the problem. If it doesn't, you're probably hitting
the same problem I had with an older version of Flink (
https://issues.apache.org/jira/plugins/servlet/mobile#issue/FLINK-4061) and
this you h
Hi Wouter,
Packing two or more values into the edge value using a Tuple is a common
practice. Does this work well for the algorithms you are writing?
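For illustration, a minimal Scala sketch of a graph whose edge values are
(beginning, ending) tuples; the vertex ids, time values, and field meanings
are made up:

import org.apache.flink.api.scala._
import org.apache.flink.graph.Edge
import org.apache.flink.graph.scala.Graph

val env = ExecutionEnvironment.getExecutionEnvironment
// Each edge carries its temporal interval as a (beginning, ending) tuple.
val edges = env.fromElements(
  new Edge(1L, 2L, (0L, 100L)),
  new Edge(2L, 3L, (50L, 150L)))
val graph = Graph.fromDataSet(edges, env)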
Greg
On Wed, Oct 12, 2016 at 4:29 PM, Wouter Ligtenberg
wrote:
> Hi there,
>
> I'm currently working on making a Temporal Graph with Gelly, a
Hi there,
I'm currently working on making a temporal graph with Gelly. A temporal
graph is a graph where edges have two extra values, namely a beginning and
an ending time.
I started with this project a couple of weeks ago; since I don't have much
experience with Gelly or Flink I wanted to ask you gu
Hi Pedro,
support for window aggregations in SQL and the Table API is currently a
work in progress.
We have a pull request for the Table API and will add this feature for the
next release.
For SQL we depend on Apache Calcite to include the TUMBLE keyword in its
parser and optimizer.
At the moment the o
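As a sketch only, a tumbling group-window in the Scala Table API could look
like the following once the pull request is in (syntax taken from the
work-in-progress API and may still change; table and field names are made up):

// Assumes a table with fields 'user, 'amount and a 'rowtime attribute.
// (Imports omitted; they depend on the final package layout.)
val result = accesses
  .window(Tumble over 10.minutes on 'rowtime as 'w)
  .groupBy('w, 'user)
  .select('user, 'amount.sum)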
Hi Robert,
Thanks for your suggestions. We are using the DataStream API and I tried
disabling it completely, but that didn't help.
I attached the plan. To add some context: it starts with a Kafka
source followed by a map operation (parallelism 4). The next map is the
expensive part
Hi Janardhan/Stephan,
I just figured out what the issue is (talking about the Flink KafkaConnector08;
I don't know about the Flink KafkaConnector09).
The reason why bin/kafka-consumer-groups.sh --zookeeper
--describe --group is not showing any result
is the absence of the
/consumers//owners/
Hi Robert,
Thanks for your response. I just figured out what the issue is.
The reason why bin/kafka-consumer-groups.sh --zookeeper
--describe --group is not showing any result
is the absence of the
/consumers//owners/ path in ZooKeeper.
The Flink application is creating and upda
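If it helps to verify, the relevant znodes can be inspected directly with
ZooKeeper's CLI (the group, topic, and partition names below are placeholders):

bin/zkCli.sh -server localhost:2181
ls /consumers/<group>/owners/<topic>
get /consumers/<group>/offsets/<topic>/<partition>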
Hello,
I am trying to build a query using the StreamTableEnvironment API. I am
trying to build these queries with tableEnvironment.sql("QUERY") so that I
can load those queries from a file in the future.
Source code:
Table accesses = tableEnvironment.sql
("SELEC
Hi Robert,
Thanks! I’ll likely pursue option #2 and see if I can copy over the code from
org.apache.flink….fs.bucketing.
Do you know a general timeline for when 1.2 will be released or perhaps a
location where I could follow its progress?
Thanks again!
From: Robert Metzger
Reply-To: "user@
Hi Robert,
I see two possible workarounds:
1) You use the unreleased Flink 1.2-SNAPSHOT version. From time to time
there are some unstable commits in that version, but most of the time it's
quite stable.
We provide nightly binaries and maven artifacts for snapshot versions here:
http://flink.apac
Hi Jürgen,
Are you using the DataStream or the DataSet API?
Maybe the operator chaining is causing too many operations to be "packed"
into one task. Check out this documentation page:
https://ci.apache.org/projects/flink/flink-docs-master/dev/datastream_api.html#task-chaining-and-resource-groups
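If chaining is indeed the issue, it can be influenced per operator. A small
sketch (the source and the functions are placeholders, not from your job):

val stream = env
  .addSource(kafkaSource)
  .map(parse _).disableChaining()             // keep this map in its own task
  .map(expensive _).slotSharingGroup("heavy") // isolate it in a separate slot group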
Hello,
So in your previous figure (yesterday), when e3 arrives, e2 should also be
included in the result, right?
In this case, I think what you need is a Session window with a gap equal to
your event aging duration and
an evictor that evicts the elements that lag behind more than the gap duration.
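A rough Scala sketch of that combination, assuming processing time, a
5-minute aging duration, and a made-up key selector and window function:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.ProcessingTimeSessionWindows
import org.apache.flink.streaming.api.windowing.evictors.TimeEvictor
import org.apache.flink.streaming.api.windowing.time.Time

events
  .keyBy(_.key)                              // assumed key selector
  .window(ProcessingTimeSessionWindows.withGap(Time.minutes(5)))
  .evictor(TimeEvictor.of(Time.minutes(5)))  // drop elements older than the aging duration
  .apply(new MyWindowFunction)               // placeholder WindowFunction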
Hi,
I have two streams that are partitioned on a key field. I want to join
those streams on their key fields over windows. This is an example I saw on
the Flink website:
val firstInput: DataStream[MyType] = ...
val secondInput: DataStream[AnotherType] = ...
val firstKeyed = firstInput.keyBy("u
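For completeness, the joined form typically continues like this (the key
fields and window size are illustrative; imports for the window assigner and
Time are omitted):

val joined = firstInput.join(secondInput)
  .where(_.userId)      // assumed key field of MyType
  .equalTo(_.userId)    // assumed key field of AnotherType
  .window(TumblingEventTimeWindows.of(Time.seconds(10)))
  .apply { (first, second) => (first, second) }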
Hi,
apply() will be called for each key.
On Wed, Oct 12, 2016 at 2:25 PM, Swapnil Chougule
wrote:
> Thanks Aljoscha.
>
> Whenever I am using WindowFunction.apply() on a keyed stream, will apply()
> be called once or multiple times (equal to the number of keys in that
> windowed stream)?
>
> Ex:
> Data
Hi Kostas:
It doesn’t matter. Can you see the picture? My use case is:
1、The events arrive in the following order:
At 9:01 e1 arrives
At 9:02 e2 arrives
At 9:06 e3 arrives
At 9:08 e4 arrives
The time is system time.
2、And
Hello again,
Sorry for the delay, but I cannot really understand your use case.
Could you explain a bit more what you mean by an “out-of-date” event and
“aging” an event?
Also, are your windows of a certain duration, or global?
Thanks,
Kostas
> On Oct 11, 2016, at 3:04 PM, Zhangrucong wrote:
>
Ok, thanks for the update Ufuk! Let me know if you need me to test anything!
Best,
Flavio
On Wed, Oct 12, 2016 at 11:26 AM, Ufuk Celebi wrote:
> No, sorry. I was waiting for Tarandeep's feedback before looking into
> it further. I will do it over the next days in any case.
>
> On Wed, Oct 12, 2016
Thanks Aljoscha.
Whenever I am using WindowFunction.apply() on a keyed stream, will apply()
be called once or multiple times (equal to the number of keys in that
windowed stream)?
Ex:
DataStream dataStream = env
.socketTextStream("localhost", )
.flatMap(new Splitter(
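A filled-in Scala sketch of that pipeline (the port and window size are
assumptions); apply() then runs once per key and window pane:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.api.windowing.windows.TimeWindow
import org.apache.flink.util.Collector

val counts = env
  .socketTextStream("localhost", 9999)      // assumed port
  .flatMap(_.split("\\s+"))
  .map((_, 1))
  .keyBy(_._1)
  .timeWindow(Time.seconds(5))
  .apply { (key: String, w: TimeWindow, values: Iterable[(String, Int)],
            out: Collector[(String, Int)]) =>
    out.collect((key, values.map(_._2).sum)) // one call per key per window
  }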
Hi Flinksters,
At one stage in my data stream, I want to save the stream to a set of rolling
files where the file name used (i.e. the bucket) is chosen based on an
attribute of each data record. Specifically, I’m using a windowing function to
create aggregates of certain metrics and I want to
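If it's useful, a rough sketch against the 1.2-SNAPSHOT bucketing code that
the replies mention, where the bucket path is derived from the record itself
(MyRecord and its metricName field are made up, and the API may still change):

import org.apache.hadoop.fs.Path
import org.apache.flink.streaming.connectors.fs.Clock
import org.apache.flink.streaming.connectors.fs.bucketing.{Bucketer, BucketingSink}

class AttributeBucketer extends Bucketer[MyRecord] {
  override def getBucketPath(clock: Clock, basePath: Path, element: MyRecord): Path =
    new Path(basePath, element.metricName) // one sub-directory per metric
}

val sink = new BucketingSink[MyRecord]("hdfs:///base/path")
sink.setBucketer(new AttributeBucketer)
stream.addSink(sink)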
Hey,
I face the same problem and decided to go with your third solution. I use
Groovy as the scripting language, which has access to Java classes and
therefore also to Flink constructs like Time.seconds(10). See below for an
example of a pattern definition with Groovy:
private static Binding b
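For context, a compact sketch of evaluating such a Groovy snippet from the
JVM side (written in Scala here; the script body is illustrative):

import groovy.lang.{Binding, GroovyShell}

val shell = new GroovyShell(new Binding())
// The script text would normally be loaded from a file; inlined for the sketch.
val gap = shell.evaluate(
  """import org.apache.flink.streaming.api.windowing.time.Time
    |Time.seconds(10)""".stripMargin)
println(gap) // the evaluated Flink Time object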
Hi,
we currently have an issue with Flink, as it allocates many tasks to the
same task manager and as a result overloads it. I reduced the number
of task slots per task manager (keeping the CPU count) and added some
more servers, but that did not help to distribute the load.
Is there some
Hi guys,
I am facing the following error message in the Flink Scala JDBC WordCount.
Could you please advise me on this?
Information: 12/10/2016, 10:43 - Compilation completed with 2 errors and 0
warnings in 1s 903ms
/Users/janaidu/faid/src/main/scala/fgid/JDBC.scala
Error:(17, 67) can't expand macro
Hi Anchit,
Can you re-run your job with the log level for Flink set to DEBUG?
Then, you should see the following log message every time the offset is
committed to ZooKeeper:
"Committing offsets to Kafka/ZooKeeper for checkpoint"
Alternatively, can you check whether the offsets are available in
Hi,
I haven't seen this error before, and I didn't find anything helpful
searching for the error on Google.
Did you also check the GC times for Flink? Is your Flink job doing any
heavy tasks (like maintaining large windows, or other operations involving
a lot of heap space)?
Regards,
Robert
O
No, sorry. I was waiting for Tarandeep's feedback before looking into
it further. I will do it over the next days in any case.
On Wed, Oct 12, 2016 at 10:49 AM, Flavio Pompermaier
wrote:
> Hi Ufuk,
> any news on this?
>
> On Thu, Oct 6, 2016 at 1:30 PM, Ufuk Celebi wrote:
>>
>> I guess that this
Hi,
I think that pull request will be merged for 1.2.
On Fri, Oct 7, 2016 at 6:26 PM, vinay patil wrote:
> Hi Stephan,
>
> https://github.com/apache/flink/pull/2518
> Is this pull request going to be part of the 1.2 release? Just wanted to get
> an idea of the timeline so that I can pass it on to the team.
Hi Ufuk,
any news on this?
On Thu, Oct 6, 2016 at 1:30 PM, Ufuk Celebi wrote:
> I guess that this is caused by a bug in the checksum calculation. Let
> me check that.
>
> On Thu, Oct 6, 2016 at 1:24 PM, Flavio Pompermaier
> wrote:
> > I've run the job once more (always using the checksum branch
Hi Shannon,
I tried to reproduce the problem in a unit test, without success.
My test configures a HadoopOutputFormat object, serializes and deserializes
it, calls open(), and verifies that a configured String property is present
in the getRecordWriter() method.
Next I would try to reproduce the error
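In case it helps to compare, the shape of such a test could be the following
(a sketch using the mapred wrapper; the property name is an arbitrary marker):

import org.apache.flink.api.java.hadoop.mapred.HadoopOutputFormat
import org.apache.flink.util.InstantiationUtil
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapred.{JobConf, TextOutputFormat}

val jobConf = new JobConf()
jobConf.set("my.test.property", "value") // arbitrary marker property
val format = new HadoopOutputFormat[Text, LongWritable](
  new TextOutputFormat[Text, LongWritable](), jobConf)

// Round-trip through Java serialization, as happens when the format is shipped.
val bytes = InstantiationUtil.serializeObject(format)
val copy = InstantiationUtil.deserializeObject[HadoopOutputFormat[Text, LongWritable]](
  bytes, getClass.getClassLoader)
assert(copy.getJobConf.get("my.test.property") == "value")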
I've been thinking of several options to solve this problem:
1. I can use Flink savepoints to save the application state,
change the jar file, and submit a new job (the new jar file with the
patterns added/changed). The problem in this case is being able to correctly
handle the savepoint
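For option 1, the usual CLI sequence would be something like this (the job
id, savepoint path, and jar name are placeholders):

bin/flink savepoint <jobID>                       # trigger a savepoint; prints its path
bin/flink cancel <jobID>                          # stop the running job
bin/flink run -s <savepointPath> new-patterns.jar # resume the new jar from the savepoint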