Hi:
I don’t see a JIRA for Hadoop 3 support. I see a comment on a JIRA here from a
year ago that no one is looking into Hadoop 3 support [1]. Is there a document
or JIRA that now exists which would point to what needs to be done to support
Hadoop 3? Right now builds with Hadoop 3 don’t work, obviously.
Oh, sorry. I’m not using distributed libraries but trying to build from source.
So, using Maven 3.2.2 and building the connector doesn’t give me a jar for some
reason.
From: Chesnay Schepler
Date: Tuesday, June 13, 2017 at 1:44 PM
To: "Foster, Craig", "user@flink.apache.org"
So, in addition to the question below, can we clarify whether there is a
patch/fix/JIRA available, since I have to use 1.3.0?
From: "Foster, Craig"
Date: Tuesday, June 13, 2017 at 9:27 AM
To: Chesnay Schepler , "user@flink.apache.org"
Subject: Re: Flink Kinesis connector
prevented the 1.3.0
kinesis artifact from being released.
This will be fixed for 1.3.1; in the meantime you can use 1.3.0-SNAPSHOT
instead.
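For anyone hitting the same thing, the workaround amounts to depending on the snapshot artifact. A rough sketch of the pom entry (the _2.11 Scala suffix is an assumption about your build, and you will also need the Apache snapshots repository configured):

```xml
<!-- Sketch only: the _2.11 suffix assumes a Scala 2.11 build. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kinesis_2.11</artifactId>
  <version>1.3.0-SNAPSHOT</version>
</dependency>
```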
On 13.06.2017 17:48, Foster, Craig wrote:
Hi:
I’m trying to build an application that uses the Flink Kinesis Connector in
1.3.0. However, I don’t see that resolving anymore. It resolved with 1.2.x but
doesn’t with 1.3.0. Is there something I need to now do differently than
described here?
https://ci.apache.org/projects/flink/flink-docs-
Ah, maybe (1) wasn’t entirely clear so here’s the copy/pasted example with what
I suggested:
HadoopJarStepConfig copyJar = new HadoopJarStepConfig()
    .withJar("command-runner.jar")
    .withArgs("bash", "-c", "aws s3 cp s3://mybucket/myjar.jar .");
1) Since the jar is only required on the master node you should be able
to just run a step with a very simple script like ‘bash -c “aws s3 cp
s3://mybucket/myjar.jar .”’
So if you were to do that using a step similar to the one outlined in the EMR
documentation, but replacing withArgs with the a
that I could add a patch to this shell script to at least
allow someone to set the Hadoop and Yarn configuration directories in the
config file.
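To sketch what such a shell-script patch might do: read the directories from a config file if present, and fall back to the environment, then to a default. The config file path, the key name, and the default path below are all made up for illustration; they are not Flink's actual config keys.

```shell
# Sketch of the proposed patch: prefer a value from a config file,
# fall back to environment variables, then to a default path.
# (File name, key name, and default path are made up for illustration.)
CONF_FILE="${CONF_FILE:-/tmp/flink-env.conf}"
if [ -f "$CONF_FILE" ]; then
  # expects a line like: hadoop.conf.dir: /etc/hadoop/conf
  HADOOP_CONF_DIR=$(awk -F': *' '$1=="hadoop.conf.dir"{print $2}' "$CONF_FILE")
fi
HADOOP_CONF_DIR="${HADOOP_CONF_DIR:-/etc/hadoop/conf}"
YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_CONF_DIR}"
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
echo "YARN_CONF_DIR=$YARN_CONF_DIR"
```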
From: "Bajaj, Abhinav"
Date: Thursday, March 23, 2017 at 10:42 AM
To: "Foster, Craig"
Cc: "user@flink.apache.org"
Can I set these in the configuration file? That would be ideal for me, versus
environment variables, but I’m not seeing it in the documentation.
Thanks,
Craig
3.3.x with the
process as documented, so I’m hoping someone can at least reproduce it or let me
know of a new workaround for 3.3.x.
Thanks!
Craig
From: "Foster, Craig"
Reply-To: "user@flink.apache.org"
Date: Friday, March 17, 2017 at 7:23 AM
To: "user@flink.apache.org"
Cc
17 at 8:14 AM, Ufuk Celebi <u...@apache.org> wrote:
Pulling in Robert and Stephan who know the project's shading setup the best.
On Fri, Mar 17, 2017 at 6:52 AM, Foster, Craig <foscr...@amazon.com> wrote:
Hi:
A few months ago, I was building Flink and ran into shading issues for
flink-dist as described in your docs. We resolved this in BigTop by adding the
correct way to build flink-dist in the do-component-build script and everything
was fine after that.
Now, I’m running into issues doing the s
Are connectors being included in the 1.2.0 release or do you mean Kafka
specifically?
From: Fabian Hueske
Reply-To: "user@flink.apache.org"
Date: Tuesday, January 17, 2017 at 7:10 AM
To: "user@flink.apache.org"
Subject: Re: Zeppelin: Flink Kafka Connector
One thing to add: Flink 1.2.0 has not
I would suggest using EMRFS anyway, which is the way to access the S3 file
system from EMR (using the same s3:// prefixes). That said, you will run into
the same shading issues in our build until the next release, which is coming up
relatively soon.
From: Robert Metzger
Reply-To: "user@flink.apache.org"
t again after a successful build. That will always
work independently of the Maven 3.x version.
-Max
On Mon, Nov 21, 2016 at 6:27 PM, Foster, Craig wrote:
> Thanks for explaining, Robert and Gordon. For however it helps, I’ll
> comment on the original Maven
need to wait for others more knowledgeable
on the build infrastructure to chime in and
see if there’s a good long-term solution.
Best Regards,
Gordon
On November 19, 2016 at 8:48:32 AM, Foster, Craig (foscr...@amazon.com) wrote:
I’m not even sure this was delivered to the ‘dev’ list but I’ll go ahead and
forward the same email to the ‘user’ list since I haven’t seen a response.
---
I’m following up on the issue in FLINK-5013 about flink-dist specifically
requiring a Maven version from 3.0.5 up to (but not including) 3.3. This affects people who
ade the aws
>> dependencies in the Kinesis connector, the problem should be
>> (hopefully) resolved. I've added your problem description to the JIRA.
>> Thanks for reporting it.
>>
>> Cheers,
>> Till
>>
>> On Mon,
Oh, in that case, maybe I should look into using the KCL. I'm just using boto
and boto3, which are hitting different problems, but both related to the
encoding.
boto3 prints *something*:
(.96.129.59,-20)'(01:541:4305:C70:10B4:FA8C:3CF9:B9B0,0(Patrick
Barlane,0(Nedrutland,12(GreenC bo
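As an aside on the encoding point: Kinesis record payloads travel as base64-encoded bytes on the wire, and whether your client has already decoded them for you depends on the SDK, so an explicit decode step is worth checking. A minimal sketch with a made-up payload:

```shell
# Kinesis record data is base64-encoded on the wire; some SDKs decode
# it for you, some don't. The payload below is a made-up stand-in.
PAYLOAD='aGVsbG8gZnJvbSBraW5lc2lz'
DECODED=$(printf '%s' "$PAYLOAD" | base64 -d)
echo "$DECODED"
```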
lem remains? Thanks!
Regards,
Gordon
On September 1, 2016 at 1:34:19 AM, Foster, Craig (foscr...@amazon.com) wrote:
Hi:
I am using the following WikiEdit example:
https://ci.apache.org/projects/flink/flink-docs-master/quickstart/run_example_quickstart.html
It works when printing the contents to a file or stdout.
But I wanted to modify it to use Kinesis instead of Kafka. So instead of the
Kafka part, I put:
P
I'm trying to understand Flink YARN configuration. The flink-conf.yaml file is
supposedly the way to configure Flink, except when you launch Flink using YARN
since that's determined for the AM. The following is contradictory or not
completely clear:
"The system will use the configuration in co
Number of TaskManagers
The number of TaskManagers will be equal to the number of entries in the
conf/slaves file.
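In other words, for standalone clusters the TaskManager count is implied by the slaves file rather than set by a numeric option; counting its entries is all there is to it. A small sketch (a temp file stands in for conf/slaves):

```shell
# Standalone mode starts one TaskManager per non-empty line in
# conf/slaves; a stand-in file is used here for illustration.
SLAVES_FILE=$(mktemp)
printf 'host1\nhost2\nhost3\n' > "$SLAVES_FILE"
NUM_TMS=$(grep -c . "$SLAVES_FILE")
echo "number of TaskManagers: $NUM_TMS"
```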
On Wed, Aug 24, 2016 at 3:04 PM, Foster, Craig <foscr...@amazon.com> wrote:
Is there a way to set the number of TaskManagers using a configuration file or
environment variable? I'm looking at the docs for it and it says you can set
slots but not the number of TMs.
Thanks,
Craig
I'm trying to use the wordcount example with the local file system, but it's
giving me a permissions error or not finding the file. It works just fine for
input and output on S3. What is the correct URI usage for the local file system
and HDFS?
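For what it's worth, the scheme prefix is what selects the file system, so the same WordCount invocation differs only in its URIs: local paths take file:// with an absolute path (so three slashes), HDFS paths take hdfs://. A small sketch of the three forms, plus a helper that pulls the scheme out (all paths are placeholders):

```shell
# The scheme prefix (file://, hdfs://, s3://) picks the FileSystem;
# paths below are placeholders. The helper just extracts the scheme.
scheme() { printf '%s\n' "${1%%://*}"; }
scheme 'file:///tmp/wordcount/input.txt'
scheme 'hdfs:///user/hadoop/wordcount/out'
scheme 's3://mybucket/wordcount/out'
```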
I have installed Flink on EMR and am just using the f