Hi All,
I'm getting the below exception when I start my Flink job. I have verified
the Elasticsearch host and it seems to be working well. I have also tried
including the below dependencies in my project, but nothing works. Need some
help. Thanks.
compile group: 'org.apache.lucene', name: 'lucene-
Yelei Feng created FLINK-5920:
-
Summary: port range support for config query.server.port
Key: FLINK-5920
URL: https://issues.apache.org/jira/browse/FLINK-5920
Project: Flink
Issue Type: Improvement
Yelei Feng created FLINK-5919:
-
Summary: port range support for config taskmanager.data.port
Key: FLINK-5919
URL: https://issues.apache.org/jira/browse/FLINK-5919
Project: Flink
Issue Type: Improvement
Yelei Feng created FLINK-5918:
-
Summary: port range support for config taskmanager.rpc.port
Key: FLINK-5918
URL: https://issues.apache.org/jira/browse/FLINK-5918
Project: Flink
Issue Type: Improvement
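The three tickets above all ask for the same thing: letting these port options accept a range (e.g. "50100-50200") instead of a single value, so a server can try candidate ports in order. A minimal, self-contained sketch of parsing such a range (this is illustrative code, not Flink's actual configuration implementation) could look like:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy sketch (not Flink's actual config code): expanding a port-range
 * string such as "50100-50200", or a single port such as "6123", into
 * the ordered list of candidate ports a server could try to bind.
 */
public class PortRange {

    /** Parses "lo-hi" or a single "port" into an ordered candidate list. */
    static List<Integer> expand(String spec) {
        List<Integer> ports = new ArrayList<>();
        int dash = spec.indexOf('-');
        if (dash < 0) {
            // Single port, e.g. "6123".
            ports.add(Integer.parseInt(spec.trim()));
            return ports;
        }
        int lo = Integer.parseInt(spec.substring(0, dash).trim());
        int hi = Integer.parseInt(spec.substring(dash + 1).trim());
        if (lo > hi || lo < 0 || hi > 65535) {
            throw new IllegalArgumentException("invalid port range: " + spec);
        }
        for (int p = lo; p <= hi; p++) {
            ports.add(p);
        }
        return ports;
    }

    public static void main(String[] args) {
        System.out.println(expand("6123"));        // [6123]
        System.out.println(expand("50100-50102")); // [50100, 50101, 50102]
    }
}
```

A server would then iterate over the list and keep the first port that binds successfully, instead of failing hard on one fixed port.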
Aljoscha Krettek created FLINK-5917:
---
Summary: Remove MapState.size()
Key: FLINK-5917
URL: https://issues.apache.org/jira/browse/FLINK-5917
Project: Flink
Issue Type: Improvement
Tao Wang created FLINK-5916:
---
Summary: make env.java.opts.jobmanager and
env.java.opts.taskmanager working in YARN mode
Key: FLINK-5916
URL: https://issues.apache.org/jira/browse/FLINK-5916
Project: Flink
Shaoxuan Wang created FLINK-5915:
Summary: Add support for the aggregate on multi fields
Key: FLINK-5915
URL: https://issues.apache.org/jira/browse/FLINK-5915
Project: Flink
Issue Type: Sub-task
Just want to clarify what unifying the code style means here.
Is the intention to have the IDE and Maven plugins use the same checkstyle
rules?
Or are we talking about having ONE code style for both Java and Scala?
- Henry
On Fri, Feb 24, 2017 at 8:08 AM, Greg Hogan wrote:
> I agree wholeheartedly with Ufuk
Shaoxuan Wang created FLINK-5914:
Summary: remove aggregateResultType from
streaming.api.datastream.aggregate
Key: FLINK-5914
URL: https://issues.apache.org/jira/browse/FLINK-5914
Project: Flink
Hi Jamie,
I think it does make consuming late-arriving events more explicit! At the cost
of a fixed, predefined OutputTag which the user can neither control nor define,
and an extra UDF which essentially filters out all main outputs and only lets
the side output pass (like a FilterFunction).
Thanks,
Chen
> On Feb 24, 20
Greg Hogan created FLINK-5913:
-
Summary: Example drivers
Key: FLINK-5913
URL: https://issues.apache.org/jira/browse/FLINK-5913
Project: Flink
Issue Type: Sub-task
Components: Gelly
Greg Hogan created FLINK-5912:
-
Summary: Inputs for CSV and graph generators
Key: FLINK-5912
URL: https://issues.apache.org/jira/browse/FLINK-5912
Project: Flink
Issue Type: Sub-task
Components: Gelly
Greg Hogan created FLINK-5911:
-
Summary: Command-line parameters
Key: FLINK-5911
URL: https://issues.apache.org/jira/browse/FLINK-5911
Project: Flink
Issue Type: Sub-task
Components: Gelly
Greg Hogan created FLINK-5910:
-
Summary: Framework for Gelly examples
Key: FLINK-5910
URL: https://issues.apache.org/jira/browse/FLINK-5910
Project: Flink
Issue Type: Sub-task
Components: Gelly
Greg Hogan created FLINK-5909:
-
Summary: Interface for GraphAlgorithm results
Key: FLINK-5909
URL: https://issues.apache.org/jira/browse/FLINK-5909
Project: Flink
Issue Type: Sub-task
C
I prefer the ProcessFunction and side outputs solution over split() and
select(), which I've never liked, primarily due to the lack of type safety;
it also doesn't really seem to fit with the rest of Flink's API.
On the late data question I strongly prefer the late data concept being
explicit in
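The type-safety point above can be illustrated with a self-contained toy (this is NOT Flink's real API, just a sketch of the idea): a typed tag carries the payload type of its output, whereas split()/select() keys outputs on plain strings, so nothing stops you from selecting an output and treating its elements as the wrong type.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Self-contained toy (NOT Flink's real API) sketching why a typed tag
 * is safer than string-based split()/select(): the tag's type parameter
 * ties the selected output to its element type at compile time.
 */
public class SideOutputToy {

    /** A typed tag; the type parameter documents the payload type. */
    static final class Tag<T> {
        final String name;
        Tag(String name) { this.name = name; }
    }

    static final Tag<Integer> EVEN = new Tag<>("even");
    static final Tag<Integer> ODD = new Tag<>("odd");

    /** Routes each element to exactly one tag, like selecting an output. */
    static List<Integer> select(List<Integer> input, Tag<Integer> tag) {
        List<Integer> out = new ArrayList<>();
        for (int v : input) {
            boolean isEven = v % 2 == 0;
            if ((isEven && tag == EVEN) || (!isEven && tag == ODD)) {
                out.add(v);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> in = List.of(1, 2, 3, 4);
        System.out.println(select(in, EVEN)); // [2, 4]
        System.out.println(select(in, ODD));  // [1, 3]
    }
}
```

With a string-keyed select("even"), a typo or a type mismatch only surfaces at runtime; with a Tag&lt;Integer&gt;, the compiler checks both.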
Hi,
I am wondering whether there is any scenario where the new way makes
anything better under normal circumstances.
I can only see how it will break things in subtle ways.
If you think there is any real benefit to the current approach I don't mind
having it as a default, otherwise I am in favor o
Hi Greg,
On 24 February 2017 at 18:09, Greg Hogan wrote:
> Thanks, Vasia, for starting the discussion.
>
> I was expecting more changes from the recent discussion on restructuring
> the project, in particular regarding the libraries. Gelly has always
> collected algorithms and I have personally
Thanks, Vasia, for starting the discussion.
I was expecting more changes from the recent discussion on restructuring
the project, in particular regarding the libraries. Gelly has always
collected algorithms and I have personally taken an algorithms-first
approach for contributions. Is that managea
The JIRA (https://issues.apache.org/jira/browse/FLINK-4913) doesn't mention
any particular user or use case.
I honestly don't care so much whether we enable or disable it by default. But
since it's the new default behavior of Flink 1.2, I'm against changing that in
Flink 1.2.1; that's why I proposed to add a
Did any user have problems with the Flink 1.1 behaviour? If not, we could
disable it again, by default, and add a flag for adding the user jar to all
the classpaths.
On Fri, 24 Feb 2017 at 14:50 Robert Metzger wrote:
> I agree with you Gyula, this change is dangerous. I have seen another case
>
Stephan Ewen created FLINK-5908:
---
Summary: Blob Cache can (rarely) get corrupted on failed blob
downloads
Key: FLINK-5908
URL: https://issues.apache.org/jira/browse/FLINK-5908
Project: Flink
I
I agree wholeheartedly with Ufuk. We cannot reformat the codebase, cannot
pause while flushing the PR queue, and won't find a consensus code style.
I think we can create a baseline code style for new and existing
contributors for which reformatting on changed files will be acceptable for
PR review
Ken and Fabian,
Is the use case to generate and act on the dot file from within the user
program? Would it be more maintainable to make the plan JSON more
accessible (through the CLI and web interface) which users could then pipe
through a converter script?
Greg
On Fri, Feb 24, 2017 at 4:55 AM,
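The converter-script idea mentioned above could be a very small program: read the plan JSON, pull out node ids and descriptions, and emit GraphViz DOT. The JSON shape used below is a simplified assumption for illustration, not the exact format Flink's web interface produces, and the regex-based extraction is only a sketch.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Hypothetical sketch of a plan-JSON-to-DOT converter. The input shape
 * (nodes with "id"/"description", edges as "edge": [from, to]) is a
 * made-up simplification, and regex extraction stands in for a real
 * JSON parser.
 */
public class PlanToDot {

    static String toDot(String planJson) {
        StringBuilder dot = new StringBuilder("digraph plan {\n");
        // Nodes: {"id": 1, "description": "Source"}
        Matcher node = Pattern
            .compile("\"id\"\\s*:\\s*(\\d+)\\s*,\\s*\"description\"\\s*:\\s*\"([^\"]*)\"")
            .matcher(planJson);
        while (node.find()) {
            dot.append("  n").append(node.group(1))
               .append(" [label=\"").append(node.group(2)).append("\"];\n");
        }
        // Edges: "edge": [from, to] -- a simplified, made-up shape.
        Matcher edge = Pattern
            .compile("\"edge\"\\s*:\\s*\\[\\s*(\\d+)\\s*,\\s*(\\d+)\\s*\\]")
            .matcher(planJson);
        while (edge.find()) {
            dot.append("  n").append(edge.group(1))
               .append(" -> n").append(edge.group(2)).append(";\n");
        }
        return dot.append("}\n").toString();
    }

    public static void main(String[] args) {
        String plan = "{\"nodes\":[{\"id\": 1, \"description\": \"Source\"},"
            + "{\"id\": 2, \"description\": \"Map\"}],"
            + "\"edge\": [1, 2]}";
        System.out.print(toDot(plan));
    }
}
```

Keeping the converter outside the user program, and making the plan JSON easy to obtain from the CLI or web interface, keeps the core API surface unchanged.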
Hello squirrels,
this is a discussion thread to organize the Gelly component development for
release 1.3 and discuss longer-term plans for the library.
I am hoping that with time-based releases, we can distribute the load for
PR reviewing and make better use of our time, and also point contributors
@Jin Mingjian: You cannot use the paid Travis version for open source
projects. It only works for private repositories (at least back when
we asked them about that).
@Stephan: I don't think that incremental builds will be available with
Maven anytime soon.
I agree that we need to fix the
I agree with you Gyula, this change is dangerous. I have seen another case
from a user with Hadoop dependencies that crashed in Flink 1.2.0 that
didn't in 1.1.x
I wonder if we should introduce a config flag for Flink 1.2.1 to disable
the behavior if needed.
On Fri, Feb 24, 2017 at 2:27 PM, Ufuk C
We'll link to the Bahir-flink connectors once Bahir has done a release and
published documentation for the connectors there.
On Fri, Feb 24, 2017 at 1:28 PM, Stephan Ewen wrote:
> I think the main difference is that Bahir Releases go against stable Flink
> releases, which within Flink you have s
I think the main difference is that Bahir releases go against stable Flink
releases, while within Flink you have snapshot versions against snapshot
releases as well.
The source / sink API has been stable since Flink 1.0, so they should be
cross-compatible, actually.
On Fri, Feb 24, 2017 at 11:34
Flavio Pompermaier created FLINK-5907:
-
Summary: RowCsvInputFormat bug on parsing tsv
Key: FLINK-5907
URL: https://issues.apache.org/jira/browse/FLINK-5907
Project: Flink
Issue Type: Bug
Hi Till,
is there any guide about how to use bahir connectors with Flink?
Are there any pros/cons regarding their use? Any compatibility matrix somewhere?
Best,
Flavio
On Fri, Feb 24, 2017 at 10:25 AM, Till Rohrmann
wrote:
> Hi Imalds,
>
> Flink's redis connector has been moved to Apache Bahir [1].
Hi Robert,
I was not aware of this big change (I know it's my fault), but I am not sure
if I agree with the rationale.
I read through the JIRA and it seems that this is mostly a convenience
change, so that we don't need to copy jars and mess with the classloading
that much.
On the other hand if user j
The problem with a code style, when it is not enforced, is that it will be a
matter of luck which parts of files / new files it is applied to. When the
code style is not applied to the whole file, it is pretty much useless
anyway. You would need to manually select just the fragments one is
changing. Th
Fabian Hueske created FLINK-5906:
Summary: Add support to register UDAGGs in TableEnvironment
Key: FLINK-5906
URL: https://issues.apache.org/jira/browse/FLINK-5906
Project: Flink
Issue Type:
Fabian Hueske created FLINK-5905:
Summary: Add user-defined aggregation functions to documentation.
Key: FLINK-5905
URL: https://issues.apache.org/jira/browse/FLINK-5905
Project: Flink
Issue
Hi Ken,
I think this would be an interesting feature!
I'd suggest to open a JIRA for it.
When extending the API of core classes such as ExecutionEnvironment, there
is often some discussion whether the feature is important enough or whether
it should be rather added to some external util class (wh
On Fri, Feb 24, 2017 at 10:46 AM, Fabian Hueske wrote:
> I agree with Till that encouraging a code style without enforcing it does
> not make a lot of sense.
> If we enforce it, we need to touch all files and PRs.
I think it makes sense for new contributors to have a starting point
without enforc
We have discussed this issue several times in the past and never got around
to do it.
Although I agree that it would be nice to have a stricter code style, I
don't think we should introduce a code style.
IMO, at this point the costs (losing the history, PR hassle, etc.) would be
too high compared
Hi Imalds,
Flink's redis connector has been moved to Apache Bahir [1].
[1] https://github.com/apache/bahir-flink/tree/master/flink-connector-redis
Cheers,
Till
On Fri, Feb 24, 2017 at 7:42 AM, lmalds wrote:
> Hi! I want to know why the Redis Sink connector has disappeared in the Flink
> 2.0 version
Tao Wang created FLINK-5904:
---
Summary: jobmanager.heap.mb and taskmanager.heap.mb not work in
YARN mode
Key: FLINK-5904
URL: https://issues.apache.org/jira/browse/FLINK-5904
Project: Flink
Issue T
Hi! I want to know why the Redis Sink connector has disappeared in the Flink
2.0 version. For Flink 2.0, I need to add an addSink() function and add a
Jedis dependency, is that right? Thank you.