Dear Flink community,
I would like to bring Flink Forward San Francisco to your attention. After
hosting Flink Forward for two years in Berlin, Germany, we decided to bring
it to the US west coast as well.
Check out this very nice video summary from last year's conference in
Berlin: https://www.y
Please let me know if you need help with the connector or if you want
to extend it.
Cheers,
Martin
On 24.03.2017 16:07, alex.decastro wrote:
> Thanks Tim! I missed that one on Jira. :-)
Hello Till,
Yes, the elements do have a timestamp associated with them, which is parsed in
the first map function.
And yes, indeed, if all timestamps lie within 1 hour, the triggering will
happen only after the complete file has been read. I had the wrong window size
and sliding step for a dataset I was testing (I tested it in
Hi Sonex,
I assume the elements in your file have a timestamp associated with them which
is parsed in the first map function, right? Now my question would be: what is
the range of this timestamp value? In your program you've defined a time
window of 1 hour. If the timestamps all lie within a window of 1 hour, the
triggering will happen only after the complete file has been read.
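To make that concrete, here is a minimal event-time sketch of such a pipeline; the Event POJO, the parsing logic, the input path and the field names are placeholders, not your actual program:

    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.streaming.api.TimeCharacteristic;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class WindowSketch {
        // Placeholder POJO; your record type will differ.
        public static class Event {
            public String id;
            public long timestampMillis;
            public long count = 1;
            public Event() {}
        }

        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

            env.readTextFile("hdfs:///path/to/input")
                // first map: parse the line, including its timestamp
                .map(new MapFunction<String, Event>() {
                    @Override
                    public Event map(String line) {
                        String[] fields = line.split(",");
                        Event e = new Event();
                        e.id = fields[0];
                        e.timestampMillis = Long.parseLong(fields[1]);
                        return e;
                    }
                })
                // event-time timestamps, assuming they arrive in ascending order
                .assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Event>() {
                    @Override
                    public long extractAscendingTimestamp(Event e) {
                        return e.timestampMillis;
                    }
                })
                .keyBy("id")
                // a window fires only once the watermark passes its end, i.e.
                // once timestamps spanning more than one hour have been seen
                .timeWindow(Time.hours(1), Time.minutes(10))
                .sum("count")
                .print();

            env.execute("window sketch");
        }
    }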
A small addition for the component discovery: Flink works such that the
TaskManagers register with the JobManager. This means that the TaskManagers
have to somehow retrieve the JobManager's address. Either you do it as
Philippe described or you use the HA mode with ZooKeeper. In the latter
case, the TaskManagers retrieve the current JobManager's address from
ZooKeeper.
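For reference, a minimal flink-conf.yaml sketch of the ZooKeeper HA mode (key names as in the Flink 1.2 docs; hosts and paths are placeholders):

    # every TaskManager looks up the current JobManager leader in ZooKeeper
    high-availability: zookeeper
    high-availability.zookeeper.quorum: zkhost1:2181,zkhost2:2181,zkhost3:2181
    high-availability.zookeeper.storageDir: hdfs:///flink/ha/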
Hi Mihail,
the close method of the RichFunction will always be called. Only if the JVM
crashes hard, so that the task thread does not get a chance to clean up the
user functions, will this not be the case.
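As a minimal sketch of that lifecycle (a hypothetical hand-rolled sink with a placeholder output path, not Flink's BucketingSink):

    import java.io.BufferedWriter;
    import java.io.FileWriter;

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

    public class FileSink extends RichSinkFunction<String> {

        private transient BufferedWriter writer;

        @Override
        public void open(Configuration parameters) throws Exception {
            writer = new BufferedWriter(new FileWriter("/tmp/out"));
        }

        @Override
        public void invoke(String value) throws Exception {
            writer.write(value);
            writer.newLine();
        }

        @Override
        public void close() throws Exception {
            // runs on normal completion, cancellation and stopping; it is
            // skipped only if the JVM dies before the task thread cleans up
            if (writer != null) {
                writer.close();
            }
        }
    }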
Cheers,
Till
On Fri, Mar 24, 2017 at 12:00 PM, Mihail Vieru
wrote:
> Hi,
>
> quick question: is
Thanks Tim! I missed that one on Jira. :-)
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Support-connector-for-Neo4j-tp12397p12399.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.
Hi Alex,
there is a Neo4j connector for Flink [1, 2]. With that you should be able to
read data from Neo4j.
[1] https://github.com/s1ck/flink-neo4j
[2] https://issues.apache.org/jira/browse/FLINK-2941
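If you would rather not pull in [1], a hand-rolled source is also possible. Below is only a sketch against the Neo4j Bolt driver 1.x; the bolt URI, the credentials and the Cypher query are placeholders:

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
    import org.neo4j.driver.v1.AuthTokens;
    import org.neo4j.driver.v1.Driver;
    import org.neo4j.driver.v1.GraphDatabase;
    import org.neo4j.driver.v1.Session;
    import org.neo4j.driver.v1.StatementResult;

    /** Emits one String per Cypher result row, then finishes. */
    public class Neo4jCypherSource extends RichSourceFunction<String> {

        private transient Driver driver;

        @Override
        public void open(Configuration parameters) {
            driver = GraphDatabase.driver("bolt://localhost:7687",
                    AuthTokens.basic("neo4j", "password"));
        }

        @Override
        public void run(SourceContext<String> ctx) {
            try (Session session = driver.session()) {
                StatementResult result =
                        session.run("MATCH (n:Person) RETURN n.name AS name");
                while (result.hasNext()) {
                    ctx.collect(result.next().get("name").asString());
                }
            }
        }

        @Override
        public void cancel() {
            // bounded query; nothing to interrupt here
        }

        @Override
        public void close() {
            if (driver != null) {
                driver.close();
            }
        }
    }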
Cheers,
Till
On Fri, Mar 24, 2017 at 1:20 PM, alex.decastro
wrote:
> Any pointers for current
Any pointers for current hacks (from Flink users) on how to fetch table data
from Neo4j? Any plans on the roadmap to include a Neo4j connector?
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Support-connector-for-Neo4j-tp12397.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.
Hi,
quick question: is the close method in sinks, e.g. BucketingSink, called
when a job gets cancelled or stopped?
We're trying to have the following: A custom stoppable streaming source
with a modified BucketingSink which moves the files from ephemeral storage
to S3 Frankfurt upon the closing of
You are right and I'm sorry. I've opened a PR:
https://github.com/apache/flink/pull/3605
On Thu, Mar 23, 2017 at 3:47 PM, Greg Hogan wrote:
> A PR and review may have noted that the same regex is in
> stop-zookeeper-quorum.sh and recommended ignoring whitespace both before
> and after the equals
Weave allows encryption of the VPN, and your Flink containers can be secured
using Kerberos:
https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/config.html#kerberos-based-security.
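A minimal flink-conf.yaml sketch of those settings (keytab path and principal are placeholders; see the linked page for the full list):

    security.kerberos.login.keytab: /path/to/flink.keytab
    security.kerberos.login.principal: flink-user@EXAMPLE.COM
    # JAAS contexts the credentials are used for, e.g. ZooKeeper and Kafka
    security.kerberos.login.contexts: Client,KafkaClient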
> On 24 Mar 2017, at 11:16, Chakravarthy varaga wrote:
>
> Hi,
>
> Thanks for your inputs.
Hi Sonex,
I don't know Scala as well as I know Java, but I guess it should be correct
if no error is raised.
The behaviour you described seems weird to me and should not happen. I'm
unfortunately unable to identify an apparent cause; maybe someone on the
mailing list can shed some light on that.
Best,
Hi,
Thanks for your inputs. It kind of makes sense to use a container
orchestrator to handle the networking under the hood.
How do you tackle security?
I don't see a way to authorize users for job management. I understand a
few orchestrators provide namespace isolation and secur
Yes, it is a YARN single job, using the command:
flink-1.1.1/bin/flink run -m yarn-cluster \
    -yn 2 -ys 2 -yjm 2048 -ytm 2048 \
    --class statics.ComputeDocSim \
    --classpath file:///opt/cloudera/parcels/CDH/lib/hbase/hbase-client.jar \
    --classpath file:///opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol.
The program consists of two executions: one that only collect()s back to
the client, and one that executes the map function.
Are you running this as a "YARN single job" execution? In that case, there
may be an issue where this incorrectly tries to submit to a stopping YARN
cluster.
On Fri, Mar 24,
Hi,
I don't think that this error is actually an issue:
https://issues.apache.org/jira/browse/YARN-1022
Is something not working as expected in your application?
On Fri, Mar 24, 2017 at 5:53 AM, wrote:
> hi, I read a file from HDFS, but there is an error when running the job on the YARN cluster,
> ---
Hi,
Can you provide more logs to help us understand what's going on?
One note regarding your application: you are calling .collect() and sending
the collection back to the cluster with the map() call.
This is pretty inefficient and can potentially break your application (in
particular the RPC system
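As a sketch of the alternative (the data sets and names are placeholders): instead of collect()ing a data set to the client and shipping it back inside the map function, broadcast it within a single execution, so the data stays on the cluster:

    import java.util.List;

    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.configuration.Configuration;

    public class BroadcastSketch {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            DataSet<String> small = env.fromElements("a", "b", "c");
            DataSet<String> big = env.readTextFile("hdfs:///path/to/big");

            big.map(new RichMapFunction<String, String>() {
                    private List<String> lookup;

                    @Override
                    public void open(Configuration parameters) {
                        // materialized once per task, on the cluster
                        lookup = getRuntimeContext().getBroadcastVariable("lookup");
                    }

                    @Override
                    public String map(String value) {
                        return lookup.contains(value) ? value : "unknown";
                    }
                })
                .withBroadcastSet(small, "lookup")
                .print();
        }
    }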
The code will not work properly, sorry.
The value returned by the state is whatever is stored under the key for
which the function was called the last time.
In addition, the unsynchronized access is most likely causing the RocksDB
fault.
TL;DR - The "ValueState" / "ListState" / etc in Flink are n
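A minimal sketch of the intended use (types and field names are placeholders): the state handle is obtained once in open(), and every read or update inside flatMap() is automatically scoped to the key of the element currently being processed:

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    // apply on a keyed stream, e.g. stream.keyBy(0).flatMap(new CountPerKey())
    public class CountPerKey
            extends RichFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>> {

        private transient ValueState<Long> count;

        @Override
        public void open(Configuration parameters) {
            count = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("count", Long.class));
        }

        @Override
        public void flatMap(Tuple2<String, Long> in,
                            Collector<Tuple2<String, Long>> out) throws Exception {
            // value() returns the state for in.f0, the current key, only
            Long current = count.value();
            long updated = (current == null ? 0L : current) + in.f1;
            count.update(updated);
            out.collect(Tuple2.of(in.f0, updated));
        }
    }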
Hi,
If I can give my 2 cents:
one simple solution to your problem is using Weave (https://www.weave.works/),
a Docker network plugin.
We've been working for more than a year with a dockerized
(Flink + ZooKeeper + YARN + Spark + Kafka + Hadoop + Elasticsearch) cluster using Weave.
Design your Docker container
Hi,
Could someone please help here?
Best Regards
CVP
On Thu, Mar 23, 2017 at 10:13 PM, Chakravarthy varaga <
chakravarth...@gmail.com> wrote:
> I'm looking forward to hearing some updates on this...
>
> Any help here is highly appreciated !!
>
> On Thu, Mar 23, 2017 at 4:20 PM, Chakravarthy
Hi,
@Robert: I have uploaded all the log files that I could get my hands on to
https://www.dropbox.com/sh/l35q6979hy7mue7/AAAe1gABW59eQt6jGxA3pAYaa?dl=0. I
tried to remove all unrelated messages logged by the job itself. In
flink-root-jobmanager-0-micardo-dev.log I kept the Flink startup messag