Nick M created FLINK-37715:
--
Summary: Flink tries to use winutils.exe even without Hadoop
Key: FLINK-37715
URL: https://issues.apache.org/jira/browse/FLINK-37715
Project: Flink
Issue Type: Bug
Nick Caballero created FLINK-37657:
--
Summary: Helm chart references invalid image repository
Key: FLINK-37657
URL: https://issues.apache.org/jira/browse/FLINK-37657
Project: Flink
Issue
Thanks Martijn.
That's really great context. In that case, I'll change my previous
opinion. I agree that we should proceed with the simpler pull request and get
it into the Flink 2.0 release.
On 2025/02/25 14:06:20 Martijn Visser wrote:
> Hi all,
>
> For the record, I don't think we have
#2 relates to the Twitter Chill
dependency. I would recommend we still remove the twitter/chill dependency.
This project is no longer maintained and we were not able to get our Kryo 5.x
update merged in.
Nick
On 2025/02/24 11:58:35 Gyula Fóra wrote:
> Thanks for chiming in Timo, I completely agre
I think upgrading to 2.12.15 would be the correct choice, as 2.12.16 has a
known regression for projects compiled with both Java and Scala, as Flink is:
https://github.com/scala/scala/releases/tag/v2.12.16
This regression will be addressed in a few months via 2.12.17, so for now
2.12.15 should be go
Hi,
It's mentioned in the following documentation,
https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/connectors/formats/canal.html,
that "...currently Flink can’t combine UPDATE_BEFORE and UPDATE_AFTER into
a single UPDATE message."
Can anyone elaborate on this? Was decomposing a s
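For anyone landing here later, a minimal sketch of what reading a Canal changelog looks like from the Table API (topic, table, and fields below are hypothetical): each upstream UPDATE arrives as an UPDATE_BEFORE row followed by an UPDATE_AFTER row, not as one combined UPDATE message.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class CanalJsonSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

            // Hypothetical Kafka topic carrying Canal-encoded MySQL binlog records.
            tEnv.executeSql(
                "CREATE TABLE orders (" +
                "  id BIGINT," +
                "  amount DOUBLE" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'orders_binlog'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'format' = 'canal-json'" +
                ")");

            // Each upstream UPDATE surfaces as two rows in the changelog:
            // -U (UPDATE_BEFORE) followed by +U (UPDATE_AFTER).
            tEnv.executeSql("SELECT * FROM orders").print();
        }
    }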
Nick Burkard created FLINK-20845:
Summary: Drop support for Scala 2.11
Key: FLINK-20845
URL: https://issues.apache.org/jira/browse/FLINK-20845
Project: Flink
Issue Type: Sub-task
Hi
first off, good job and thank you!
i can't find the new version 1.12 on Docker Hub.
when will it be there?
nick
On Thu, Dec 10, 2020 at 14:17, Robert Metzger <
rmetz...@apache.org>:
> The Apache Flink community is very happy to announce the release of Apache
Nick Chadwick created FLINK-4222:
Summary: Allow Kinesis configuration to get credentials from AWS
Metadata
Key: FLINK-4222
URL: https://issues.apache.org/jira/browse/FLINK-4222
Project: Flink
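A sketch of the kind of configuration this asks for, using the credentials-provider property that later releases of flink-connector-kinesis expose; the exact keys are version-dependent, so treat them as assumptions:

    import java.util.Properties;

    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;

    public class KinesisMetadataCredentialsSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("aws.region", "us-east-1");
            // "AUTO" defers to the default AWS provider chain, which includes
            // the EC2 instance metadata service this issue is about.
            props.setProperty("aws.credentials.provider", "AUTO");

            DataStream<String> records = env.addSource(
                new FlinkKinesisConsumer<>("my-stream", new SimpleStringSchema(), props));
            records.print();

            env.execute("kinesis-metadata-credentials");
        }
    }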
especially if Flink could benefit
> from Cassandra data locality. Cassandra/Spark integration is using this
> information to schedule Spark tasks.
>
> On 9 June 2016 at 19:55, Nick Dimiduk wrote:
>
> > You might also consider support for a Bigtable
> > backend: HBase
You might also consider support for a Bigtable
backend: HBase/Accumulo/Cassandra. The data model should be similar
(identical?) to RocksDB and you get HA, recoverability, and support for
really large state "for free".
On Thursday, June 9, 2016, Chen Qin wrote:
> Hi there,
>
> What is progress on
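For context on where such a backend would plug in: state backends are already a configurable point in the API. A minimal sketch with the RocksDB backend (the checkpoint URI is hypothetical); an HBase/Accumulo/Cassandra-backed implementation would presumably be set the same way.

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class StateBackendSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            // Hypothetical checkpoint location; a Bigtable-style backend would
            // slot into the same env.setStateBackend(...) call.
            env.setStateBackend(
                new RocksDBStateBackend("hdfs://namenode:8020/flink/checkpoints"));
        }
    }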
I'm also curious for a solution here. My test code executes the flow from a
separate thread. Once I've joined all my producer threads and I've
verified the output, I simply interrupt the flow thread. This spews
exceptions, but it all appears to be harmless.
Maybe there's a better way? I think y
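In case it helps, a rough sketch of the pattern described above (the names and the toy topology are illustrative, not from any shared harness):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class InterruptedFlowSketch {
        public static void main(String[] args) throws Exception {
            final StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
            env.fromElements(1, 2, 3).print(); // stand-in for the flow under test

            // Run the flow on its own thread so the test thread stays free to
            // drive producers and verify output.
            Thread flow = new Thread(() -> {
                try {
                    env.execute("flow-under-test");
                } catch (Exception e) {
                    // Thrown when the thread is interrupted; appears harmless.
                }
            });
            flow.start();

            // ... join producer threads and verify output here ...

            flow.interrupt(); // tear the flow down
            flow.join();
        }
    }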
For what it's worth, this is very close to how HBase attempts to manage the
community load. We break out components (in Jira), with a list of named
component maintainers. Actually, having components alone has given a big
bang for the buck because when properly labeled, it makes it really easy
for p
Hi Chenguang,
I've been using the class StreamingMultipleProgramsTestBase, found in
flink-streaming-java test jar as the basis for my integration tests. These
tests spin up a Flink cluster (and Kafka, and HBase, &c) in a single JVM.
It's not a perfect integration environment, but it's as close as
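Roughly, such a test looks like the sketch below (the class and pipeline are mine; the base class starts and stops the single-JVM cluster for you):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.util.StreamingMultipleProgramsTestBase;
    import org.junit.Test;

    // The base class spins up one local Flink cluster shared by every test
    // in the class; the pipeline contents here are illustrative.
    public class MyPipelineITCase extends StreamingMultipleProgramsTestBase {

        @Test
        public void testSimplePipeline() throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
            env.fromElements("a", "b", "c").print();
            env.execute("pipeline-under-test");
        }
    }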
Nick Dimiduk created FLINK-3709:
---
Summary: [streaming] Graph event rates over time
Key: FLINK-3709
URL: https://issues.apache.org/jira/browse/FLINK-3709
Project: Flink
Issue Type: Improvement
find a blocker for the current RC, I prefer to continue evaluating
> and VOTE on the current RC.
>
> - Henry
>
> On Tuesday, February 9, 2016, Ufuk Celebi wrote:
>
> > Hey Nick,
> >
> > I agree that this can be problematic when running multiple jobs on
> > YA
may help in such a case.
> > >
> > >
> > > On Mon, Feb 8, 2016 at 7:57 PM, Greg Hogan wrote:
> > >
> > > > When is this useful in streaming?
> > > >
> > > > On Mon, Feb 8, 2016 at 1:46 PM, Nick Dimiduk wrote:
Perhaps too late for the RC, but I've backported FLINK-3293 to this branch
via FLINK-3372. Would be nice for those wanting to monitor YARN
application submissions.
On Mon, Feb 8, 2016 at 9:37 AM, Ufuk Celebi wrote:
> Dear Flink community,
>
> Please vote on releasing the following candidate as
https://ci.apache.org/projects/flink/flink-docs-release-0.10/api/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.html#disableOperatorChaining()
On Mon, Feb 8, 2016 at 10:34 AM, Greg Hogan wrote:
> Is it possible to force operator chaining to be disabled? Similar to how
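Spelled out, with the per-operator variant as well (toy pipeline for illustration):

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class DisableChainingSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

            // Globally: no operators are chained anywhere in the job.
            env.disableOperatorChaining();

            // Or per operator, to break the chain at just one point.
            env.fromElements(1, 2, 3)
                .map(new MapFunction<Integer, Integer>() {
                    @Override
                    public Integer map(Integer value) {
                        return value + 1;
                    }
                })
                .disableChaining()
                .print();

            env.execute("chaining-example");
        }
    }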
Nick Dimiduk created FLINK-3372:
---
Summary: Setting custom YARN application name is ignored
Key: FLINK-3372
URL: https://issues.apache.org/jira/browse/FLINK-3372
Project: Flink
Issue Type: Bug
+1 for a 0.10.2 maintenance release.
On Monday, February 1, 2016, Ufuk Celebi wrote:
> Hey all,
>
> Our release-0.10 branch contains some important fixes (for example a
> critical fix in the network stack). I would like to hear your opinions
> about doing a 0.10.2 bug fix release.
>
> I think it
Thanks Max. I'm accustomed to projects advertising a release with a fixed
ref such as a sha or tag, not a branch. Much obliged.
-n
On Friday, January 15, 2016, Maximilian Michels wrote:
> Hi Nick,
>
> That was an oversight when the release was created. As Stephan
> mentioned,
Hi folks,
I noticed today that the parent pom for flink-shaded-hadoop (and thus
also its children) is not using ${ROOT}/pom.xml as its parent.
However, ${ROOT}/pom.xml lists the hierarchy as a module. I'm curious to
know why this is. It seems one artifact of this disconnect is that
too
Nick Dimiduk created FLINK-3228:
---
Summary: Cannot submit multiple streaming jobs involving JDBC drivers
Key: FLINK-3228
URL: https://issues.apache.org/jira/browse/FLINK-3228
Project: Flink
Issue
Nick Dimiduk created FLINK-3224:
---
Summary: The Streaming API does not call setInputType if a format
implements InputTypeConfigurable
Key: FLINK-3224
URL: https://issues.apache.org/jira/browse/FLINK-3224
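For context, the hook in question looks like this; the complaint is that the streaming path never invokes setInputType on such a format (the format body is illustrative):

    import org.apache.flink.api.common.ExecutionConfig;
    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.api.java.typeutils.InputTypeConfigurable;

    // A format that wants to learn its input type. The batch API calls
    // setInputType(...) when the format is used as a sink; per this issue,
    // the Streaming API does not.
    public class TypeAwareFormat implements InputTypeConfigurable {
        private TypeInformation<?> inputType;

        @Override
        public void setInputType(TypeInformation<?> type, ExecutionConfig executionConfig) {
            this.inputType = type; // e.g. derive a serializer from the type
        }
    }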
What's the relationship between the streaming SQL proposed here and the CEP
syntax proposed earlier in the week?
On Sunday, January 10, 2016, Henry Saputra wrote:
> Awesome! Thanks for the reply, Fabian.
>
> - Henry
>
> On Sunday, January 10, 2016, Fabian Hueske wrote:
>
> > Hi Henry,
> >
> >
sure it would help other users.
On Friday, January 8, 2016, Stephan Ewen wrote:
> Hi Nick!
>
> We have not pushed a release tag, but have a frozen release-0.10.1-RC1
> branch (https://github.com/apache/flink/tree/release-0.10.1-rc1)
> A tag would be great, agree!
>
> Flink does
45 AM, Nick Dimiduk wrote:
> Hi Devs,
>
> It seems no release tag was pushed to 0.10.1. I presume this was an
> oversight. Is there some place I can look to see from which sha the 0.10.1
> release was built? Are the RC vote threads the only canon in this matter?
>
> Thanks,
> Nick
>
Hi Devs,
It seems no release tag was pushed to 0.10.1. I presume this was an
oversight. Is there some place I can look to see from which sha the 0.10.1
release was built? Are the RC vote threads the only canon in this matter?
Thanks,
Nick
maven-enforcer-plugin to require Maven
> 3.3.
> I guess many Linux distributions are still at Maven 3.2, so we might get
> unhappy users
>
>
> On Thu, Dec 10, 2015 at 6:33 PM, Nick Dimiduk wrote:
>
> > Lol. Okay, thanks a bunch. Mind linking back here with your discussi
s. Important for low-latency,
> shells, etc
>
> Flink itself respects these classloaders whenever dynamically looking up a
> class. It may be that Clojure is written such that it can only dynamically
> instantiate what is on the original classpath.
>
>
>
> On Fri, Dec 11, 2015 at 1
I extended my pom to include clojure-1.5.1.jar in my program jar.
> However, the problem is still there... I did some research on the
> Internet, and it seems I need to mess around with Clojure's class
> loading strategy...
>
> -Matthias
>
> On 12/10/2015 06:47 PM, N
wrote:
> I had the same thought as Nick. Maybe Leiningen allows building a
> fat-jar containing the clojure standard library.
>
> On Thu, Dec 10, 2015 at 5:51 PM, Nick Dimiduk wrote:
>
> > What happens when you follow the packaging examples provided in the flink
> wget
> http://archive.apache.org/dist/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz
> and then use that maven for now ;)
>
>
> On Thu, Dec 10, 2015 at 12:35 AM, Nick Dimiduk wrote:
>
> > Thanks, I appreciate it.
> >
> > On Wed, Dec 9, 20
What happens when you follow the packaging examples provided in the flink
quick start archetypes? These have the maven-foo required to package an
uberjar suitable for flink submission. Can you try adding that step to your
pom.xml?
On Thursday, December 10, 2015, Stephan Ewen wrote:
> This is a p
Thanks, I appreciate it.
On Wed, Dec 9, 2015 at 12:50 PM, Robert Metzger wrote:
> I can confirm that guava is part of the fat jar for the 2.7.0, scala 2.11
> distribution.
>
> I'll look into the issue tomorrow
>
> On Wed, Dec 9, 2015 at 7:58 PM, Nick Dimiduk wrote:
> inside the "flink-dist" project a "mvn dependency:tree" run. That shows how
> the unshaded Guava was pulled in.
>
> Greetings,
> Stephan
>
>
> On Wed, Dec 9, 2015 at 6:22 PM, Nick Dimiduk wrote:
>
> > I did not. All I did was apply the PR f
dependency that might transitively pull Guava?
>
> Stephan
>
>
> On Tue, Dec 8, 2015 at 9:25 PM, Nick Dimiduk wrote:
>
> > Hi there,
> >
> > I'm attempting to build Flink locally, based on release-0.10.0 +
> > FLINK-3147. When I build from this sandbox
Stopwatch classes
from both Guava-12 and Guava-18 are in my classpath.
Is there some additional profile required to build a dist package with only
the shaded jars?
Thanks,
Nick
$ tar xvzf flink-0.10.0-bin-hadoop27-scala_2.11.tgz
$ cd flink-0.10.0
$ unzip -t lib/flink-dist_2.11-0.10.0.jar | grep Stop
Nick Dimiduk created FLINK-3148:
---
Summary: Support configured serializers for shipping UDFs
Key: FLINK-3148
URL: https://issues.apache.org/jira/browse/FLINK-3148
Project: Flink
Issue Type
Nick Dimiduk created FLINK-3147:
---
Summary: HadoopOutputFormatBase should expose CLOSE_MUTEX for
subclasses
Key: FLINK-3147
URL: https://issues.apache.org/jira/browse/FLINK-3147
Project: Flink
Nick Dimiduk created FLINK-3119:
---
Summary: Remove dependency on Tuple from HadoopOutputFormat
Key: FLINK-3119
URL: https://issues.apache.org/jira/browse/FLINK-3119
Project: Flink
Issue Type
>
> Do you know if Hadoop/HBase is also using a maven plugin to fail a build on
> breaking API changes? I would really like to have such a functionality in
> Flink, because we can spot breaking changes very early.
I don't think we have Maven integration for this as of yet. Our release
managers run
In HBase we keep an hbase-examples module with working code. Snippets from
that module are pasted into docs and referenced. Yes, we do see divergence,
especially when refactor tools are involved. I once looked into a doc tool
for automatically extracting snippets from source code, but that turned
i
Woo hoo!
On Thu, Nov 12, 2015 at 3:01 PM, Maximilian Michels wrote:
> Thanks for voting! The vote passes.
>
> The following votes have been cast:
>
> +1 votes: 7
>
> Stephan
> Aljoscha
> Robert
> Max
> Chiwan*
> Henry
> Fabian
>
> * non-binding
>
> -1 votes: none
>
> I'll upload the release arti
Nick Dimiduk created FLINK-3004:
---
Summary: ForkableMiniCluster does not call RichFunction#open
Key: FLINK-3004
URL: https://issues.apache.org/jira/browse/FLINK-3004
Project: Flink
Issue Type
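For context, the lifecycle hook in question (the setup work here is illustrative):

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;

    // RichFunction#open runs once per parallel instance before any record is
    // processed; the report is that ForkableMiniCluster never makes this call.
    public class EnrichingMapper extends RichMapFunction<String, String> {
        private transient String prefix;

        @Override
        public void open(Configuration parameters) throws Exception {
            prefix = "enriched: "; // illustrative one-time setup
        }

        @Override
        public String map(String value) {
            return prefix + value;
        }
    }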
For what it's worth, the new Apache Yetus [0] project includes an interface
audience annotations module [1]. We have (or intend to have, if it's not
available yet) tools for validation of public API compatibility across
release versions. For example, here's such a report [2] from a previous
HBase R
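Concretely, usage looks like the sketch below (package names as in the Yetus audience-annotations artifact; the class itself is hypothetical):

    import org.apache.yetus.audience.InterfaceAudience;
    import org.apache.yetus.audience.InterfaceStability;

    // Declares who may depend on a class and how stable its contract is;
    // compatibility tooling can then flag changes to anything marked Public.
    @InterfaceAudience.Public
    @InterfaceStability.Evolving
    public class PublicFacingApi {
        public void doSomething() {
            // part of the public, still-evolving contract
        }
    }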