I'm not sure. I have overridden the reachedEnd() method of the
TableInputFormat. There I declared that it returns true once the
nextRecord method has been called 100 times.
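For illustration, here is a minimal sketch of that modification, assuming a
subclass of the HBase addon's TableInputFormat (the class name, tuple type,
and import path are assumptions; the remaining abstract methods of the real
input format are omitted):

import java.io.IOException;

import org.apache.flink.addons.hbase.TableInputFormat;
import org.apache.flink.api.java.tuple.Tuple2;

// Hypothetical subclass: stop the scan after 100 records by letting
// reachedEnd() report true once nextRecord() has been called 100 times.
public class First100RowsInputFormat extends TableInputFormat<Tuple2<String, String>> {

    private int recordsRead = 0;

    @Override
    public Tuple2<String, String> nextRecord(Tuple2<String, String> reuse) throws IOException {
        recordsRead++;
        return super.nextRecord(reuse);
    }

    @Override
    public boolean reachedEnd() throws IOException {
        // Report "end of input" after 100 records, even if HBase has more rows.
        return recordsRead >= 100 || super.reachedEnd();
    }

    // The remaining abstract methods of TableInputFormat are omitted here.
}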
On 27.05.2015 at 15:57, Stephan Ewen wrote:
Okay. Can there have been some bug in that logic, trying to address a
non-existing row?
On Wed, May 27, 2015 at 2:25 PM, Hilmi Yildirim wrote:
In my job I modified the TableInputFormat so that it only reads the
first 100 records of the HBase table. With this modification the errors
occurred. Now, I imported the first 100 entries of the HBase table into
another table and configured the job to read the whole table. As a
result, it worked.
No, I can't, unfortunately.
I just excluded flink-shaded-include-yarn from my dependencies for the
moment ;)
On Wed, May 27, 2015 at 11:20 AM, Robert Metzger wrote:
Flink should also work without the YARN code.
The name of flink-shaded-include-yarn is a bit misleading; the name should
actually be "hadoop-with-yarn".
So jersey is not necessarily a dependency of YARN; it could also come from
other components of Hadoop.
So on our side, we can most likely exclude t...
If YARN is architected reasonably, we should be able to exclude them from
the YARN dependency, because we are not using any of those classes.
This needs careful validation and testing, though...
On Wed, May 27, 2015 at 10:44 AM, Flavio Pompermaier wrote:
Hi to all,
in my Flink job I have to call an external REST web service (the client uses
Jersey 2.13), but this conflicts with the Jersey classes shaded within
flink-shaded-include-yarn. Is there a way to make them compatible?
Best,
Flavio
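For context, a minimal sketch of the kind of JAX-RS 2 / Jersey 2 client call
meant above (the URL, media type, and class name are placeholders); it is
this newer API that can clash with the older Jersey classes shaded into
flink-shaded-include-yarn:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class RestServiceCall {

    public static void main(String[] args) {
        // Standard JAX-RS 2 client API, backed by Jersey 2.13 on the classpath.
        Client client = ClientBuilder.newClient();
        String response = client
                .target("http://example.org/api/resource")  // placeholder URL
                .request(MediaType.APPLICATION_JSON)
                .get(String.class);
        System.out.println(response);
        client.close();
    }
}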
It is during the reading process.
On 27.05.2015 at 10:12, Stephan Ewen wrote:
This looks like an HBase-specific thing.
At what point does this log message appear? After the data source task has
finished? During processing?
On Wed, May 27, 2015 at 9:11 AM, Hilmi Yildirim wrote:
> Hi,
> I built a batch process which reads from HBase, processes the data and
> writes the result into a text file.
I'll look into that! Thanks for the moment.
On Wed, May 27, 2015 at 9:24 AM, Stephan Ewen wrote:
If you just want to do something after the job finishes, just put the
finalization code after the call to "execute()":

env.execute();
myCustomOutputFinalization();

If you want each parallel output to do something after it finishes, then
override the "close()" method of the OutputFormat.
To have ...
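A small sketch of the two options, for illustration; the job body,
myCustomOutputFinalization(), and the TextOutputFormat subclass are
placeholders, not code from this thread:

import java.io.IOException;

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.TextOutputFormat;
import org.apache.flink.core.fs.Path;

public class FinalizationExamples {

    // Option 1: run finalization once, after the whole job has finished.
    public static void runJob() throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // ... define sources, transformations and sinks here ...
        env.execute();
        myCustomOutputFinalization();  // placeholder for the user's cleanup code
    }

    private static void myCustomOutputFinalization() {
        // e.g. move or rename result files, notify another system, ...
    }

    // Option 2: let every parallel output instance do something when it is
    // done, by overriding close() in the OutputFormat (hypothetical subclass).
    public static class FinalizingTextOutputFormat extends TextOutputFormat<String> {

        public FinalizingTextOutputFormat(Path outputPath) {
            super(outputPath);
        }

        @Override
        public void close() throws IOException {
            super.close();
            // per-parallel-instance finalization goes here
        }
    }
}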
Hi,
I built a batch process which reads from HBase, processes the data, and
writes the result into a text file. When I run the process locally, it
works great. If I run it on a cluster, it seems to work but it does not
terminate. In the logs there is the following message:
09:04:39,194 INFO ...
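For reference, a minimal sketch of the kind of job described here
(MyTableInputFormat, the transformation, and the output path are
placeholders; depending on the Flink version, createInput may also need an
explicit TypeInformation hint):

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class HBaseToTextJob {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // MyTableInputFormat stands in for a TableInputFormat subclass like
        // the one discussed earlier in this thread.
        DataSet<Tuple2<String, String>> rows = env.createInput(new MyTableInputFormat());

        DataSet<String> lines = rows.map(new MapFunction<Tuple2<String, String>, String>() {
            @Override
            public String map(Tuple2<String, String> row) {
                return row.f0 + "\t" + row.f1;  // simple example transformation
            }
        });

        lines.writeAsText("hdfs:///path/to/output");  // placeholder output path
        env.execute("HBase to text file");
    }
}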