Hi Tim,
I have this dependency in my pom file, and the jar is present in my
jar-with-dependencies. I exploded the jar and checked it: the class
NativeS3FileSystem.class is present there.
Thanks
Ashutosh
On Mon, Mar 21, 2016 at 7:20 AM, Timothy Farkas <
timothytiborfar...@gmail.com> wrote:
> Hi A
Hi Ashutosh,
I believe you need to add the hadoop-aws jar to your project.
http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-aws/2.6.0
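For reference, a minimal sketch of the matching pom.xml entry, assuming a Maven build and taking the 2.6.0 coordinates from the link above (adjust the version to match your Hadoop distribution):

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-aws</artifactId>
    <version>2.6.0</version>
</dependency>

Depending on how the job is deployed, the jar may also need to end up on Flink's own classpath (e.g. the lib/ folder) rather than only inside the user jar, which is what the earlier question about flink-dist.jar is getting at.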
Thanks,
Tim
On Sun, Mar 20, 2016 at 9:39 AM, Ashutosh Kumar
wrote:
> Do I need to add some jars in lib ?
>
> Thanks
> Ashutosh
>
> On Sun, Mar 20, 20
All,
Do any of the Flink data sources support comma-separated directories with
wildcards?
For example:
env.readFile("/data/2016/01/01/*/*,/data/2016/01/02/*/*,/data/2016/01/03/*/*")
Thanks in advance for any help that you can provide.
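(Not from the thread, just a possible workaround sketch, assuming the DataSet API and plain text input; the paths are illustrative:)

// Read each directory on its own and union the results, instead of passing a
// comma-separated list to a single readFile() call. Nested directories may
// additionally need recursive file enumeration enabled on the input format.
// Classes: org.apache.flink.api.java.ExecutionEnvironment, org.apache.flink.api.java.DataSet
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<String> day1 = env.readTextFile("/data/2016/01/01");
DataSet<String> day2 = env.readTextFile("/data/2016/01/02");
DataSet<String> day3 = env.readTextFile("/data/2016/01/03");
DataSet<String> all  = day1.union(day2).union(day3);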
--
*Gna Phetsarath* System Architect // AOL Platforms //
Hi all,
Are there any approaches by which I could get the intermediate solution set from
every delta iteration? I tried union, but the compiler gave me the error:
Exception in thread "main"
org.apache.flink.api.common.InvalidProgramException: Error: The only
operations allowed on the solution set are Join and CoGroup.
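(The following is not from the thread, just a toy sketch of the constraint behind that error, with assumed names: inside a delta iteration the solution set may only be consumed via join or coGroup, so per-iteration values are typically folded into the solution set itself and read out after closeWith().)

// Toy delta iteration; classes from the Flink DataSet API
// (org.apache.flink.api.java.*, org.apache.flink.api.java.operators.DeltaIteration,
//  org.apache.flink.api.common.functions.JoinFunction, org.apache.flink.api.java.tuple.Tuple2).
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<Tuple2<Long, Long>> initialSolutionSet = env.fromElements(Tuple2.of(1L, 0L));
DataSet<Tuple2<Long, Long>> initialWorkset = env.fromElements(Tuple2.of(1L, 1L));

DeltaIteration<Tuple2<Long, Long>, Tuple2<Long, Long>> iteration =
        initialSolutionSet.iterateDelta(initialWorkset, 10, 0);

// Join (allowed) rather than union (rejected) with the solution set.
DataSet<Tuple2<Long, Long>> delta = iteration.getWorkset()
        .join(iteration.getSolutionSet())
        .where(0).equalTo(0)
        .with(new JoinFunction<Tuple2<Long, Long>, Tuple2<Long, Long>, Tuple2<Long, Long>>() {
            @Override
            public Tuple2<Long, Long> join(Tuple2<Long, Long> ws, Tuple2<Long, Long> ss) {
                // carry the per-iteration value into the solution set entry
                return Tuple2.of(ws.f0, ws.f1 + ss.f1);
            }
        });

DataSet<Tuple2<Long, Long>> result = iteration.closeWith(delta, delta);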
Do I need to add some jars in lib ?
Thanks
Ashutosh
On Sun, Mar 20, 2016 at 4:30 PM, Ashutosh Kumar
wrote:
> It is not there.
>
> Thanks
> Ashutosh
>
> On Sun, Mar 20, 2016 at 2:58 PM, Robert Metzger
> wrote:
>
>> Hi,
>>
>> did you check if the "org.apache.hadoop.fs.s3native.NativeS3FileSystem
Hello,
I'm working on a project where I stream in data from Kafka, massage it a bit,
and then write it into HDFS using the RollingSink. This works just fine using
the provided examples, but I would like the data to be stored on HDFS as ORC
rather than as sequence files.
I am, however, unsure how to accomplish this.
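(Purely as a sketch, not from the thread: wiring up the RollingSink from flink-connector-filesystem with its stock writers. Storing ORC would mean implementing the connector's Writer interface and handing it to setWriter(), which is not shown here; the path and stream name are assumptions.)

// Classes from org.apache.flink.streaming.connectors.fs:
// RollingSink, StringWriter, DateTimeBucketer.
RollingSink<String> sink = new RollingSink<>("hdfs:///data/out");  // illustrative base path
sink.setBucketer(new DateTimeBucketer("yyyy-MM-dd--HH"));          // one bucket directory per hour
sink.setWriter(new StringWriter<String>());                        // stock text writer; swap in an ORC-capable Writer here
sink.setBatchSize(1024L * 1024L * 128L);                           // roll part files at ~128 MB
massagedStream.addSink(sink);                                      // massagedStream: the DataStream<String> produced by the Kafka/massaging steps (hypothetical name)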
It is not there.
Thanks
Ashutosh
On Sun, Mar 20, 2016 at 2:58 PM, Robert Metzger wrote:
> Hi,
>
> did you check if the "org.apache.hadoop.fs.s3native.NativeS3FileSystem"
> class is in the flink-dist.jar in the lib/ folder?
>
>
> On Sun, Mar 20, 2016 at 10:19 AM, Ashutosh Kumar <
> ashutosh.disc
Hi Stefano,
In my case, running the program on a machine with more RAM solved the
problem. Have you tried enabling debugging, as Till suggested?
Fred
On Wed, Mar 16, 2016 at 1:51 PM, stefanobaghino <
stefano.bagh...@radicalbit.io> wrote:
> Frederick,
>
> did you find the problem? I'm having a s
Hi,
did you check if the "org.apache.hadoop.fs.s3native.NativeS3FileSystem"
class is in the flink-dist.jar in the lib/ folder?
On Sun, Mar 20, 2016 at 10:19 AM, Ashutosh Kumar wrote:
> I have setup a 3 node YARN based cluster on EC2. I am running flink in
> cluster mode. I added these lines in
I have set up a 3-node YARN-based cluster on EC2. I am running Flink in
cluster mode. I added these lines to core-site.xml:
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>accesskey</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>secret key</value>
</property>
<property>
  <name>fs.s3n.impl</name>
  <value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value>
</property>
I wonder how to work with a stream whose event timestamps are ascending per key.
I can have a huge time skew between different keys: for example, if I
(re)connect an event producer,
it will send all of its buffered results, possibly from the last few days.
Is it possible to trigger the window computation per key?
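(Not an answer from the thread, just a sketch of the keyed event-time window API this question touches; class names are from the Flink streaming API and exact signatures vary between versions. The relevant caveat: watermarks advance for the whole stream / source partition, not per key, so a producer replaying days-old events drags event time back for every key.)

// Keyed tumbling event-time windows over (key, timestamp) pairs.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

env.fromElements(Tuple2.of("sensorA", 1_000L), Tuple2.of("sensorB", 5_000L))
   .assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Tuple2<String, Long>>() {
       @Override
       public long extractAscendingTimestamp(Tuple2<String, Long> e) {
           return e.f1;                      // event timestamp carried in the record
       }
   })
   .keyBy(0)                                 // window contents and state are kept per key
   .timeWindow(Time.minutes(10))
   .sum(1)
   .print();

env.execute("per-key windows sketch");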