Thanks Sameer and Till,
On Mon, Aug 1, 2016 at 9:31 AM, Till Rohrmann wrote:
> Yes you're right Sameer. That's how things work in Flink.
>
> Cheers,
> Till
>
> On Sun, Jul 31, 2016 at 12:33 PM, Sameer Wadkar
> wrote:
>
>> Vishnu,
>>
>> I would imagine based on Max's explanation and how other systems
Thanks Till. I will take a look at your pointers. Mans
On Monday, August 1, 2016 6:27 AM, Till Rohrmann
wrote:
Hi Mans,
Milind is right that in general external systems have to play along if you want
to achieve exactly once processing guarantees while writing to these systems.
Either by supporting idempotent operations or by allowing their state to be
rolled back.
Hi Till,
Thanks for your response. I'm able to use flink-connector-kafka-0.9_2.11
with Kafka 0.10 to produce and consume messages.
Thanks,
Sivakumar Bhavanari.
On Mon, Aug 1, 2016 at 6:41 AM, Till Rohrmann wrote:
> Hi Siva,
>
> in version 1.0.0 we’ve added the Scala binary version suffix to all Flink
> dependencies which depend on Scala.
+Till, looping him in directly, he probably missed this because he was away
for a while.
On Tue, 26 Jul 2016 at 18:21 Sameer W wrote:
> Hi,
>
> It looks like the WithIn clause of CEP uses Tumbling Windows. I could get
> it to use Sliding windows by using an upstream pipeline which uses Sliding
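For reference, the within() clause being discussed looks like this in CEP (a
sketch against the 1.1-era Scala CEP API; the Event type and the pattern
names are made up for illustration):

import org.apache.flink.cep.scala.CEP
import org.apache.flink.cep.scala.pattern.Pattern
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

case class Event(id: String) // hypothetical event type

// within() constrains the maximum time span between the first and the last
// event of a single pattern match.
def detect(events: DataStream[Event]) = {
  val pattern = Pattern.begin[Event]("first").next("second").within(Time.minutes(5))
  CEP.pattern(events.keyBy(_.id), pattern)
}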
Hi folks,
I'm trying to run a DataSet program but after around 200k records are
processed a "java.lang.OutOfMemoryError: unable to create new native
thread" stops me.
I'm deploying Flink (via bin/yarn-session.sh) on a YARN cluster with
10 nodes (each with 8 cores) and starting 10 task managers,
+Ufuk, looping him in directly
Hmm, I think this has changed for the 1.1 release. Ufuk, could you please
comment?
On Mon, 1 Aug 2016 at 08:07 Josh wrote:
> Cool, thanks - I've tried out the approach where we replay data from the
> Kafka compacted log, then take a savepoint and switch to the live stream.
Hi,
yes, if you set the delay too high you will have to wait a long time until
your windows are emitted.
Cheers,
Aljoscha
On Mon, 1 Aug 2016 at 04:52 Sendoh wrote:
> Probably `processAt` was not being used correctly, because after increasing
> maxDelay in the watermark to 10 minutes it works as expected.
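For reference, a minimal sketch of such a periodic watermark assigner (not
the poster's actual code; MyEvent and its timestamp field are hypothetical
stand-ins for the thread's event type):

import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks
import org.apache.flink.streaming.api.watermark.Watermark

case class MyEvent(timestamp: Long) // hypothetical event type

// The watermark trails the largest timestamp seen so far by maxDelayMs, so
// a window is only emitted once the configured delay has passed.
class BoundedDelayAssigner(maxDelayMs: Long)
    extends AssignerWithPeriodicWatermarks[MyEvent] {

  // Start low so no premature watermark fires before any data has arrived.
  private var maxSeenTimestamp = Long.MinValue + maxDelayMs

  override def extractTimestamp(event: MyEvent, previousTimestamp: Long): Long = {
    maxSeenTimestamp = math.max(maxSeenTimestamp, event.timestamp)
    event.timestamp
  }

  override def getCurrentWatermark: Watermark =
    new Watermark(maxSeenTimestamp - maxDelayMs)
}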
Hello everyone,
I'm trying to understand how I can use Incremental Aggregation + Window
Functions, as I've been trying unsuccessfully for a while now.
The use-case is one where I have a stream of objects; I want to count the
number of objects within a sliding window, and then within the w
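One way this combination is typically wired up is the
apply(initialValue, foldFunction, windowFunction) overload (a sketch assuming
the 1.1-era Scala API; Event and its key field are made up):

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time
import org.apache.flink.streaming.api.windowing.windows.TimeWindow
import org.apache.flink.util.Collector

case class Event(key: String) // hypothetical event type

// The fold pre-aggregates each window into a running (key, count) pair, so
// the window function sees one folded value instead of all buffered events.
def countPerWindow(events: DataStream[Event]): DataStream[(String, Long)] =
  events
    .keyBy(_.key)
    .timeWindow(Time.minutes(10), Time.minutes(1))
    .apply(("", 0L),
      (acc: (String, Long), e: Event) => (e.key, acc._2 + 1),
      (key: String, window: TimeWindow, folded: Iterable[(String, Long)],
       out: Collector[(String, Long)]) => out.collect(folded.head))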
Cool, thanks - I've tried out the approach where we replay data from the
Kafka compacted log, then take a savepoint and switch to the live stream.
It works but I did have to add in a dummy operator for every operator that
was removed. Without doing this, I got an exception:
java.lang.IllegalStateException
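The workaround, in rough shape (a sketch, assuming the uid(String) hook
described in the savepoint docs; the source and the uid names are made up):

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.functions.source.SourceFunction

// Stateful operators keep stable uids so savepoint state can be matched;
// a no-op map stands in for each operator that was removed, so the state
// attached to its uid still finds an operator on restore.
def buildJob(env: StreamExecutionEnvironment,
             source: SourceFunction[String]): Unit = {
  env
    .addSource(source).uid("events-source")
    .map(x => x).uid("removed-enrichment-step") // dummy for a dropped operator
    .print()
}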
Hi Till,
thanks for the quick reply. Too bad, I thought I was on the right track with
savepoints here.
Some follow-up questions:
1.) Can I transfer the state and the Kafka topic position manually for one
stream? In other words: is this information
accessible e
Hi Till,
Thanks for the input. The error was in a training set which I found in the
.out file of the taskmanager. I corrected that and I am getting some
results.
Thanks and Regards,
Debaditya
On Mon, Aug 1, 2016 at 3:54 PM, Till Rohrmann wrote:
> Hi Debaditya,
>
> could you check what the log of the presumably failed task manager says?
Hi Claudia,
unfortunately neither taking partial savepoints nor combining multiple
savepoints into one savepoint is currently supported by Flink.
However, we're currently working on dynamic scaling, which will allow you to
adjust the parallelism of your Flink job. This helps you to scale in/out
depend
Hi Debaditya,
could you check what the log of the presumably failed task manager says? It
might contain hints to what actually went wrong.
Cheers,
Till
On Mon, Aug 1, 2016 at 9:49 PM, Debaditya Roy wrote:
> Hello users,
>
> I was running an experiment on a very simple cluster with two nodes (one
Hey everyone,
I've got some questions regarding savepoints in Flink. I have the following
situation:
There is a microservice that reads data from Kafka topics, creates Flink
streams from this data and does different computations/pattern matching
workloads. If the overall workload for this serv
Hello users,
I was running an experiment on a very simple cluster with two nodes (one
jobmanager and another taskmanager). However, a few seconds after starting
the execution, the program aborts with the error.
The program finished with the following exception:
org.apache.flink.client.prog
Hi Siva,
in version 1.0.0 we’ve added the Scala binary version suffix to all Flink
dependencies which depend on Scala. Thus, you should look for
flink-streaming-scala_2.10 and flink-streaming-java_2.10. For these
artifacts you should be able to find a version 1.0.3 on maven central, for
example.
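In sbt terms that boils down to something like this (a sketch; the 1.0.3
version and the _2.10 artifacts are the ones named above, while the
scalaVersion value is an assumption):

// build.sbt: the artifact suffix (_2.10) must match the Scala binary version.
scalaVersion := "2.10.6"

libraryDependencies ++= Seq(
  "org.apache.flink" % "flink-streaming-scala_2.10" % "1.0.3",
  "org.apache.flink" % "flink-streaming-java_2.10"  % "1.0.3"
)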
Yes you're right Sameer. That's how things work in Flink.
Cheers,
Till
On Sun, Jul 31, 2016 at 12:33 PM, Sameer Wadkar wrote:
> Vishnu,
>
> I would imagine based on Max's explanation and how other systems like
> MapReduce and Spark partition keys, if you have 100 keys and 50 slots, 2
> keys wou
Hi Mans,
Milind is right that, in general, external systems have to play along if you
want to achieve exactly-once processing guarantees when writing to them,
either by supporting idempotent operations or by allowing their state to be
rolled back.
In the batch world, this usually means to over
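To make the idempotence option concrete, a toy sketch (nothing Flink-specific;
KeyValueStore is a stand-in for any external system):

// Writes are keyed by a deterministic record id, so replaying the same
// record after a failure overwrites the previous write instead of
// duplicating it: at-least-once delivery yields exactly-once results.
trait KeyValueStore {
  def put(key: String, value: String): Unit
}

class IdempotentWriter(store: KeyValueStore) {
  def write(recordId: String, payload: String): Unit =
    store.put(recordId, payload)
}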
Hi All,
I am trying to read Avro data from Kafka using Flink 1.0.3, but I am getting
an error. I have posted this issue on Stack Overflow:
http://stackoverflow.com/questions/38698721/how-to-read-avro-data-from-kafka-using-flink
Could someone take a look and point out any mistake?
Thanks & Regards
Zees
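For what it's worth, a deserializer for that setup usually has roughly this
shape (a sketch, not the code from the Stack Overflow post; it assumes an
Avro-generated SpecificRecord class is available):

import org.apache.avro.io.DecoderFactory
import org.apache.avro.specific.{SpecificDatumReader, SpecificRecordBase}
import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.java.typeutils.TypeExtractor
import org.apache.flink.streaming.util.serialization.DeserializationSchema

// Decodes Kafka byte arrays into Avro specific records; an instance of this
// schema is what gets passed to the FlinkKafkaConsumer constructor.
class AvroDeserializationSchema[T <: SpecificRecordBase](clazz: Class[T])
    extends DeserializationSchema[T] {

  @transient private lazy val reader = new SpecificDatumReader[T](clazz)

  override def deserialize(message: Array[Byte]): T = {
    val decoder = DecoderFactory.get().binaryDecoder(message, null)
    reader.read(null.asInstanceOf[T], decoder)
  }

  override def isEndOfStream(nextElement: T): Boolean = false

  override def getProducedType: TypeInformation[T] =
    TypeExtractor.getForClass(clazz)
}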
Probably `processAt` was not being used correctly, because after increasing
maxDelay in the watermark to 10 minutes it works as expected.
Is there an upper limit for setting this maxDelay? Because there might be too
many windows waiting for the last instance?
Best,
Sendoh
https://github.com/apache/flink/pull/2317
On Mon, Aug 1, 2016 at 11:54 AM, Niels Basjes wrote:
> Thanks for the pointers towards the work you are doing here.
> I'll put up a patch for the jars and such in the next few days.
> https://issues.apache.org/jira/browse/FLINK-4287
>
> Niels Basjes
>
>
Aljoscha,
Thank you for your response.
It would be great if offset setting were available out of the box.
In the meantime, I will use my custom version.
Regards,
Hironori
2016-07-29 19:29 GMT+09:00 Aljoscha Krettek:
> Hi,
> yes, I'm afraid you would have to use a custom version of the
> TumblingP
Thanks for the pointers towards the work you are doing here.
I'll put up a patch for the jars and such in the next few days.
https://issues.apache.org/jira/browse/FLINK-4287
Niels Basjes
On Mon, Aug 1, 2016 at 11:46 AM, Stephan Ewen wrote:
> Thank you for the breakdown of the problem.
>
> Option (1) or (2) would be the way to go, currently.
Thank you for the breakdown of the problem.
Option (1) or (2) would be the way to go, currently.
The problem that (3) does not support HBase is simply solvable by adding
the HBase jars to the lib directory. In the future, this should be solved
by the YARN re-architecturing:
https://cwiki.apache.o
Thank you for helping with the issue.
Those single-element windows arrive within seconds, and the watermark delay
is configured as 6 seconds.
Following are some samples from my investigation.
...
{"hashCode":-1798107744,"count":1,"processAt":"2016-08-01T11:08:05.846","startDate":"2016-07-19T21:34:00.
in org.apache.flink.api.table.plan.PlanTranslator:
val inputType = set.getType().asInstanceOf[CompositeType[A]]
if (!inputType.hasDeterministicFieldOrder && checkDeterministicFields) {
  throw new ExpressionException(s"You cannot rename fields upon Table creation: " +
    s"Field order of input type $inputType is not deterministic.")
}
Ok, then I think there is no better solution than to use the Table API of
the upcoming 1.1 release. The Table API has been completely rewritten and
the POJO support is now much better. Maybe you could try the recent 1.1
RC1 release.
On 01/08/16 at 11:11, Dong-iL, Kim wrote:
I’ve tried the following, but none of them work.
dataSet.as('id as 'id, 'amount as 'amount)
dataSet.as('id, 'amount)
dataSet.as("id, amount")
thanks.
> On Aug 1, 2016, at 6:03 PM, Timo Walther wrote:
>
> I think you need to use ".as()" instead of "toTable()" to supply the field
> order.
>
> On 01/08/16
I think you need to use ".as()" instead of "toTable()" to supply the
field order.
On 01/08/16 at 10:56, Dong-iL, Kim wrote:
Hi Timo.
I’m using the Scala API.
There is no error with the Java API.
My code snippet is this:
dataSet.toTable
  .groupBy("id")
  .select('id, 'amount.sum as 'amount)
Hi Timo.
I’m using the Scala API.
There is no error with the Java API.
My code snippet is this:
dataSet.toTable
  .groupBy("id")
  .select('id, 'amount.sum as 'amount)
  .where('amount > 0)
  .toDataSet[TestPojo]
  .print()
Thanks.
>
Hi Kim,
as the exception says: POJOs have no deterministic field order. You have
to specify the order during the DataSet to Table conversion:
Table table = tableEnv.fromDataSet(pojoDataSet, "pojoField as a,
pojoField2 as b");
I hope that helps. Otherwise it would help if you could supply a
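The Scala-API counterpart would look roughly like this (a sketch against the
1.1-era Scala Table API; MyPojo and the field names mirror the Java example
above):

import org.apache.flink.api.scala._
import org.apache.flink.api.scala.table._
import org.apache.flink.api.table.Table

// A hypothetical POJO (public zero-arg constructor plus var fields).
class MyPojo {
  var pojoField: Int = 0
  var pojoField2: String = ""
}

// Supplies an explicit field order (with renames) when converting a POJO
// DataSet, since POJO fields carry no deterministic order by themselves.
def toTable(tableEnv: BatchTableEnvironment,
            pojoDataSet: DataSet[MyPojo]): Table =
  tableEnv.fromDataSet(pojoDataSet, 'pojoField as 'a, 'pojoField2 as 'b)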
Hi Davran,
unregistering tables is not possible at the moment. I have created an
issue for this: https://issues.apache.org/jira/browse/FLINK-4288
Timo
On 29/07/16 at 20:24, Davran Muzafarov wrote:
Hi,
I could not find a way to reuse table names.
tableEnv = TableEnvironment.getTableEnv
Just tried to reproduce the error reported by Aljoscha, but could not.
I used a clean checkout of the RC1 code and cleaned all local Maven caches
before testing.
@Aljoscha: Can you reproduce this on your machine? Can you try to clean
the Maven caches?
On Sun, Jul 31, 2016 at 7:31 PM, Ufuk
My Flink version is 1.0.3.
thanks.
> On Aug 1, 2016, at 5:18 PM, Dong-iL, Kim wrote:
>
> I’ve created a program using the Table API and get an exception like this:
> org.apache.flink.api.table.ExpressionException: You cannot rename fields upon
> Table creation: Field order of input type PojoType<….> is not deterministic.
I’ve created a program using the Table API and get an exception like this:
org.apache.flink.api.table.ExpressionException: You cannot rename fields upon
Table creation: Field order of input type PojoType<….> is not deterministic.
The error occurs not in the Java program, but in the Scala program.
how can I us
Hello Everyone,
I'm new to Flink and wanted to try the streaming API using the flink-kafka
connector in Scala.
But there are several versions of it. Could someone please help with the
questions below?
What are the differences between flink-streaming-core and
flink-streaming-scala[java]?
Latest version of flink-s
Hi,
I have a Kerberos-secured YARN/HBase installation and I want to export data
from a lot (~200) of HBase tables to files on HDFS.
I wrote a Flink job that does this exactly the way I want it for a single
table.
Now, in general, I have a few possible approaches to do this fo