> Regards,
> Robert
>
> On Wed, Jun 1, 2016 at 6:42 PM, David Kim wrote:
Hello!
Using Flink 1.0.3.
This is cosmetic, but I think it will help clean up logging.
The Apache *KafkaConsumer* logs a warning [1] for any unused properties.
This is helpful in case the developer has a typo or needs to clean up
unused keys.
Flink's Kafka consumer and producer have some custom properties
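For illustration, a minimal sketch of the setup in question (assuming the
0.9 connector and a local broker; topic and group names are illustrative,
and `flink.poll-timeout` is one of the Flink-side keys): the Flink-specific
entries travel in the same Properties object handed to the underlying
Apache KafkaConsumer, which then reports them as unused.

import java.util.Properties

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

val props = new Properties()
props.setProperty("bootstrap.servers", "localhost:9092") // consumed by Kafka
props.setProperty("group.id", "my-group")                // consumed by Kafka
props.setProperty("flink.poll-timeout", "100")           // Flink-only key; Kafka warns it is unused
val consumer =
  new FlinkKafkaConsumer09[String]("my-topic", new SimpleStringSchema(), props)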
> FLINK-3969
> <https://issues.apache.org/jira/browse/FLINK-3969>.
>
> Thanks for reporting!
>
> Aljoscha
>
> On Mon, 23 May 2016 at 22:08 David Kim
> wrote:
>
>> Hi Max!
>>
>> Unfortunately, that's not the behavior I'm seeing.
>>
>> I v
On Mon, May 23, 2016 at 12:01 PM Maximilian Michels wrote:
> Hi David,
>
> I'm afraid Flink logs all exceptions. You'll find the exceptions in the
> /log directory.
>
> Cheers,
> Max
>
> On Mon, May 23, 2016 at 6:18 PM, David Kim <
> david@braintreepayments.com> wrote:
Hello!
Just wanted to check up on this. :)
I grepped around for `log.error` and it *seems* that, currently, the only
places exceptions are logged are for non-application-related errors.
Thanks!
David
On Fri, May 20, 2016 at 12:35 PM David Kim
wrote:
Hello!
Using flink 1.0.2, I noticed that exceptions thrown during a flink program
would show up on the flink dashboard in the 'Exceptions' tab. That's great!
However, I don't think flink currently logs this same exception. I was
hoping there would be an equivalent `log.error` call so that third-party
logging systems could capture it.
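In the meantime, a hedged workaround sketch (job body elided; the logger
name is illustrative): catch the failure around execute() and log it
ourselves so the exception reaches third-party logging systems, then
rethrow.

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.slf4j.LoggerFactory

val log = LoggerFactory.getLogger("JobRunner")
val env = StreamExecutionEnvironment.getExecutionEnvironment
// ... build the topology here ...
try {
  env.execute("my-job")
} catch {
  case e: Exception =>
    log.error("Flink job failed", e) // surface the failure in our own logs
    throw e
}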
Hello all,
I read the documentation at [1] on iterations and had a question on whether
an assumption is safe to make.
As partial solutions continuously loop through the step function, when new
elements are added as iteration inputs, will the insertion order of all of
the elements be preserved?
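For concreteness, a minimal bulk-iteration sketch in the shape being asked
about (values and iteration count are illustrative): the partial solution
is fed through the step function on each superstep.

import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment
val initial: DataSet[Int] = env.fromElements(1, 2, 3, 4)
val result = initial.iterate(10) { partialSolution =>
  // step function; its output becomes the next partial solution
  partialSolution.map(_ + 1)
}
result.print()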
Hi Stephan!
Following up on this issue: it no longer shows itself when using version
1.0.1. I'm able to run our unit tests in IntelliJ now.
Thanks!
David
On Wed, Apr 13, 2016 at 1:59 PM Stephan Ewen wrote:
> Does this problem persist? (It might have been caused by maven caches wit
Hi Robert!
Thank you! :)
David
On Tue, Mar 22, 2016 at 7:59 AM, Robert Metzger wrote:
> Hey David,
>
> FLINK-3602 has been merged to master.
>
> On Fri, Mar 11, 2016 at 5:11 PM, David Kim <
> david@braintreepayments.com> wrote:
>
>> Thanks Stephan! :)
d come up with a fix...
>>
>> Greetings,
>> Stephan
>>
>>
>> On Thu, Mar 10, 2016 at 4:11 PM, David Kim <
>> david@braintreepayments.com> wrote:
Hello!
Just wanted to check up on this again. Has anyone else seen this before or
have any suggestions?
Thanks!
David
On Tue, Mar 8, 2016 at 12:12 PM, David Kim
wrote:
Hello all,
I'm running into a StackOverflowError using flink 1.0.0. I have an Avro
schema that has a self reference. For example:
item.avsc
{
  "namespace": "...",
  "type": "record",
  "name": "Item",
  "fields": [
    {
      "name": "parent",
      "type": ["null", "Item"]
    }
  ]
}
When ru
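For reference, a small sanity-check sketch against the corrected schema
above (it assumes item.avsc is on disk and Avro is on the classpath). Avro
itself parses self-referencing records fine, so the StackOverflowError
would presumably be surfacing inside Flink rather than in schema parsing.

import java.io.File

import org.apache.avro.Schema

val itemSchema: Schema = new Schema.Parser().parse(new File("item.avsc"))
// the "parent" field is the self-referencing union: ["null", "Item"]
println(itemSchema.getField("parent").schema())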
ting frequently again now, that's probably why you find a
> correct build today...
>
>
>
> On Wed, Feb 10, 2016 at 5:31 PM, David Kim <
> david@braintreepayments.com> wrote:
>
>> Hi Chiwan, Max,
>>
>> Thanks for checking. I also downloaded it n
-hadoop2_2.11.tgz” but
> there is no jar compiled with Scala 2.10. Could you check again?
> >
> > Regards,
> > Chiwan Park
> >
> >> On Feb 10, 2016, at 2:59 AM, David Kim
> wrote:
> >>
> >> Hello,
> >>
> >> I noticed that t
Hello,
I noticed that the flink binary for scala 2.11 located at
http://stratosphere-bin.s3.amazonaws.com/flink-1.0-SNAPSHOT-bin-hadoop2_2.11.tgz
contains the scala 2.10 flavor.
If you open the lib folder, the name of the jar there is
flink-dist_2.10-1.0-SNAPSHOT.jar.
Could this be an error in
at SBT marks as wrong
> (org.apache.flink:flink-shaded-hadoop2,
> org.apache.flink:flink-core, org.apache.flink:flink-annotations) are
> actually those that are Scala-independent, and have no suffix at all.
>
> It is possible your SBT file does not like mixing dependencies with and
> without the Scala suffix.
pport" % "3.2" % "it,test",
"net.manub" %% "scalatest-embedded-kafka" % "0.4.1" % "it,test"
)
My project settings are in a file called MyBuild.scala
object MyBuild extends Build {
override lazy val settings = super.settings +
Hello again,
I saw the recent change in flink 1.0-SNAPSHOT that explicitly adds the
scala version to the artifact suffix.
I have an sbt project that now fails to build. I don't believe it's a
misconfiguration on my end, because I can see in the logs that it tries to
resolve everything with _2.11.
Could this possibly be related to that change?
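For reference, a hedged sketch of how the Flink entries in such a
libraryDependencies block might look under the suffix rules described
earlier (versions illustrative): %% appends the Scala version, while the
Scala-independent artifacts use plain %.

libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala" % "1.0-SNAPSHOT",           // resolves as flink-scala_2.11
  "org.apache.flink" %% "flink-streaming-scala" % "1.0-SNAPSHOT", // resolves with _2.11 suffix
  "org.apache.flink" % "flink-core" % "1.0-SNAPSHOT"              // Scala-independent, no suffix
)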
ur topology runs with a parallelism of 1? Running
> it with a parallelism higher than 1 will also work around the issue
> (because then the two Sinks are not executed in one Task).
>
> On Fri, Jan 22, 2016 at 4:56 PM, David Kim <
> david@braintreepayments.com> wrote:
>
"flink.disable-metrics" to "true" in the properties. This way, you
> disable the metrics.
> I'll probably have to introduce something like a client id to
> differentiate between the producers.
>
> Robert
>
> On Thu, Jan 21, 2016 at 11:51 PM, David Kim <
>
8 connectors.
>
> Please let me know if the updated code has any issues. I'll fix them
> asap.
>
> On Wed, Jan 13, 2016 at 5:06 PM, David Kim <
> david@braintreepayments.com> wrote:
>
>> Thanks Robert! I'll be keeping tabs on the PR.
>>
On Fri, Jan 15, 2016 at 4:02 PM, David Kim <
> david@braintreepayments.com> wrote:
>
>> Thanks Till! I'll keep an eye out on the JIRA issue. Many thanks for the
>> prompt reply.
>>
>> Cheers,
>> David
>>
>> On Fri, Jan 15, 2016 at 4:16 AM,
s/change-scala-version.sh
> 2.11 in the root directory and then mvn clean install -DskipTests
> -Dmaven.javadoc.skip=true. These binaries should depend on the right
> Scala version.
>
> Cheers,
> Till
>
>
> On Thu, Jan 14, 2016 at 11:25 PM, David Kim <
> david
Hi,
I have a scala project depending on flink scala_2.11 and am seeing a
compilation error when using sbt.
I'm using flink 1.0-SNAPSHOT and my build was working yesterday. I was
wondering if maybe a recent change to flink could be the cause?
Usually we see flink resolving the scala _2.11 counterparts
can merge the connector to master this week, then, the fix
> will be available in 1.0-SNAPSHOT as well.
>
> Regards,
> Robert
>
>
>
> Sent from my iPhone
>
> On 11.01.2016, at 21:39, David Kim
> wrote:
Hello all,
I saw that DeserializationSchema has an API "isEndOfStream()".
https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/util/serialization/DeserializationSchema.java
Can *isEndOfStream* be utilized to somehow terminate a streaming flink job?
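For illustration, a hedged sketch of what such a schema could look like
(the sentinel value and names are illustrative, and whether the connector
actually honors the flag is exactly the question here): once isEndOfStream
returns true, the source is supposed to stop emitting records.

import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.api.java.typeutils.TypeExtractor
import org.apache.flink.streaming.util.serialization.DeserializationSchema

class StoppingSchema extends DeserializationSchema[String] {
  override def deserialize(message: Array[Byte]): String =
    new String(message, "UTF-8")

  // signal end-of-stream when a sentinel record arrives
  override def isEndOfStream(nextElement: String): Boolean =
    nextElement == "END-OF-STREAM"

  override def getProducedType: TypeInformation[String] =
    TypeExtractor.getForClass(classOf[String])
}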