Hi Flavio,
Here is the example from Marton:
You can use the env.writeAsText method directly:

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.addSource(new PersistentKafkaSource(..))
   .map(/* do your operations */)
   .writeAsText("hdfs://:/path/to/your/file");
You could also just fully qualify the HDFS URL, if that is simpler (put the
host and port of the namenode in there): "hdfs://myhost:40010/path/to/file"
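The difference between the two URL forms can be seen with plain java.net.URI: an unqualified "hdfs:///..." URL carries no authority (host:port), so Flink has to discover the namenode from the Hadoop configuration, while a fully qualified URL carries it itself. A small sketch (host and port taken from the example above; the class name is just for illustration):

```java
import java.net.URI;

public class HdfsUriSketch {
    public static void main(String[] args) {
        // Unqualified: empty authority -> the namenode address must come
        // from the Hadoop config that fs.hdfs.hadoopconf points to
        URI unqualified = URI.create("hdfs:///tmp/myFile.csv");
        System.out.println("authority = " + unqualified.getAuthority()); // null

        // Fully qualified: the namenode host and port are in the URI itself
        URI qualified = URI.create("hdfs://myhost:40010/tmp/myFile.csv");
        System.out.println("authority = " + qualified.getAuthority());
    }
}
```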
On Thu, Jun 25, 2015 at 3:20 PM, Robert Metzger wrote:
You have to put it on all machines.
On Thu, Jun 25, 2015 at 3:17 PM, Flavio Pompermaier wrote:
Do I have to put the Hadoop conf file on each task manager or just on the
job manager?
On Thu, Jun 25, 2015 at 3:12 PM, Chiwan Park wrote:
It represents the folder containing the hadoop config files. :)
Regards,
Chiwan Park
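In other words, fs.hdfs.hadoopconf should name the directory that holds the *-site.xml files, not a single file. A quick sketch of the expected layout (the directory is created here purely for illustration; on a real node it would be your cluster's Hadoop conf dir, e.g. /etc/hadoop/conf on CDH):

```shell
# Illustration only: fs.hdfs.hadoopconf points at a directory laid out like this
HADOOP_CONF_DIR=$(mktemp -d)     # stand-in for e.g. /etc/hadoop/conf
touch "$HADOOP_CONF_DIR/core-site.xml" "$HADOOP_CONF_DIR/hdfs-site.xml"
ls "$HADOOP_CONF_DIR"
```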
On Jun 25, 2015, at 10:07 PM, Flavio Pompermaier wrote:
>
> fs.hdfs.hadoopconf represents the folder containing the hadoop config files
> (*-site.xml) or just one specific hadoop config file (e.g. core-site
Does *fs.hdfs.hadoopconf* represent the folder containing the Hadoop config
files (*-site.xml) or just one specific Hadoop config file (e.g.
core-site.xml or hdfs-site.xml)?
On Thu, Jun 25, 2015 at 3:04 PM, Robert Metzger wrote:
Hi Flavio,
There is a file called "conf/flink-conf.yaml".
Add a new line to the file with the following contents:
fs.hdfs.hadoopconf: /path/to/your/hadoop/config
This should fix the problem.
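For example, the entry might look like this on a CDH node (the /etc/hadoop/conf path is an assumption; use wherever your core-site.xml and hdfs-site.xml actually live):

```yaml
# conf/flink-conf.yaml
# Directory that holds core-site.xml, hdfs-site.xml, etc.
# (example path; adjust to your installation)
fs.hdfs.hadoopconf: /etc/hadoop/conf
```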
Flink cannot load the configuration file from the jar containing the user
code, because the file system i…
Could you describe it better with an example, please? Why doesn't Flink
automatically load the properties of the Hadoop conf files within the jar?
On Thu, Jun 25, 2015 at 2:55 PM, Robert Metzger wrote:
Hi,
Flink is not loading the Hadoop configuration from the classloader. You
have to specify the path to the Hadoop configuration in the Flink
configuration key "fs.hdfs.hadoopconf".
On Thu, Jun 25, 2015 at 2:50 PM, Flavio Pompermaier wrote:
Hi to all,
I'm experiencing some problems writing a file as CSV to HDFS with Flink
0.9.0.
The code I use is:
myDataset.writeAsCsv(new Path("hdfs:///tmp", "myFile.csv").toString());
If I run the job from Eclipse everything works fine, but when I deploy the
job on the cluster (Cloudera 5.1.3) I ob…