Cheers,
Gyula
Hilmi Yildirim <hilmi.yildi...@neofonie.de> wrote (on 17 Jun 2015, Wed, 9:36):
On 6/17/2015 09:36 AM, Hilmi Yildirim wrote:
Hi,
does Flink Streaming support state management? For example, I have a
state which will be used inside the streaming operations but the state
can be updated.
For example:
stream.map( use state for operation).updateState(update state).
Best Regards,
Hilmi
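Conceptually, what the question asks for is an operator that both reads and updates state as elements flow through it. A minimal plain-Java sketch of that idea (no Flink dependency; the class and method names are hypothetical, chosen only to mirror the `stream.map(use state).updateState(update state)` pseudocode above):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a stateful mapper: each element reads the
// current per-key state and updates it in the same step.
public class StatefulMapper {
    private final Map<String, Long> state = new HashMap<>();

    // "use state for operation" + "update state" combined:
    // returns how many times this key has been seen so far.
    public long map(String key) {
        long count = state.getOrDefault(key, 0L) + 1;
        state.put(key, count); // the state update
        return count;
    }
}
```

In a real distributed streaming job the state would additionally have to be partitioned by key and checkpointed; this sketch only shows the read-then-update contract being asked about.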
--
Hilmi Yildirim
Software Developer R&D

There are two options: either use Flink's TableInputFormat from flink-addons or
the Hadoop TableInputFormat, right? Which one are you using?
– Ufuk
On 09 Jun 2015, at 11:53, fhue...@gmail.com wrote:
Thank you very much!
From: Hilmi Yildirim
Sent: Tuesday, 9. June, 2015 1
Hilmi Yildirim <hilmi.yildi...@neofonie.de>:
I want to add that I run the Flink job on a cluster with 13
machines and each machine has 13 processing slots which results in
a total number of processing slots of 169.
On 09.06.2015 at 10:59, Hilmi Yildirim wrote:
Correct.
I also counted the rows with Spark and Hive. Both returned the
in the HBase table?
Cheers, Fabian
2015-06-09 10:34 GMT+02:00 Hilmi Yildirim <hilmi.yildi...@neofonie.de>:
Hi,
Now I tested the "count" method. It returns the same result as the
flatmap.groupBy(0).sum(1) method.
Furthermore, the HBase table contains nearly 100 million rows.
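The two counting approaches being compared can be sketched with plain Java streams (no Flink dependency; this only illustrates why a direct count and a group-by-key-then-sum-the-ones must agree on the total):

```java
import java.util.List;
import java.util.stream.Collectors;

public class RowCount {
    // Direct count, analogous to calling count() on the data set.
    public static long directCount(List<String> rows) {
        return rows.size();
    }

    // Group rows by key and count per group, analogous to
    // flatMap(...).groupBy(0).sum(1); summing the per-key counts
    // must yield the same total as the direct count.
    public static long mapAndSum(List<String> rows) {
        return rows.stream()
                   .collect(Collectors.groupingBy(r -> r, Collectors.counting()))
                   .values().stream()
                   .mapToLong(Long::longValue)
                   .sum();
    }
}
```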
On Mon, Jun 8, 2015 at 3:04 PM, Hilmi Yildirim
<hilmi.yildi...@neofonie.de>
wrote:
Hi,
I implemented a simple Flink batch job which reads from an
HBase cluster of 13 machines with nearly 100 million rows.
T
hat?
Best Regards,
Hilmi
--
Hilmi Yildirim
Software Developer R&D
T: +49 30 24627-281
hilmi.yildi...@neofonie.de
http://www.neofonie.de
Visit the Neo Tech Blog for users:
http://blog.neofonie.de/
Follow us:
https://plus.google.com/+neofonie
http://www.linkedin.com/company/neofonie-gmbh
You have to use:
Connection connection = ConnectionFactory.createConnection(hConf);
this.table = (HTable)
connection.getTable(TableName.valueOf(getTableName()));
Best Regards,
Hilmi
On 05.06.2015 at 12:31, Hilmi Yildirim wrote:
Hi,
I am using the SNAPSHOT version of flink to re
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:57)
at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
... 10 more
Does anyone know the reason for that?
Best Regards,
Hilmi
I solved the problem by overwriting the method "configure". There, I
used the "addResource" method of the HBaseConfiguration to add the
correct hbase-site.xml
Best Regards,
Hilmi
On 28.05.2015 at 11:21, Hilmi Yildirim wrote:
When I execute both jobs locally, then the resul
consecutive
getNextRecord() call the HBase server times out the scanner
resources. In the HBase extension I just recreate the scanner from
the last read key. This is a workaround to avoid too much tuning of
the HBase caching and timeout params...
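The workaround described above, recreating the scanner from the last read key after a timeout, can be sketched in plain Java. The `Scanner` interface here is a hypothetical stand-in, not the real HBase client API; only the resume-from-last-key control flow is the point:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ResumableScan {
    // Hypothetical stand-in for an HBase scanner that can expire mid-scan.
    interface Scanner {
        // Returns rows with key > startKey; throws if the scanner timed out.
        Iterator<String> scanFrom(String startKey) throws Exception;
    }

    // Read all rows; on a scanner timeout, recreate the scanner from the
    // last successfully read key instead of restarting from the beginning.
    public static List<String> readAll(Scanner scanner) {
        List<String> rows = new ArrayList<>();
        String lastKey = "";           // "" sorts before every real key
        boolean done = false;
        while (!done) {
            try {
                Iterator<String> it = scanner.scanFrom(lastKey);
                while (it.hasNext()) {
                    String row = it.next();
                    rows.add(row);
                    lastKey = row;     // remember the resume point
                }
                done = true;
            } catch (Exception timeout) {
                // scanner expired: the loop recreates it from lastKey
            }
        }
        return rows;
    }
}
```

A production version would bound the number of retries; the sketch loops unconditionally to keep the resume logic visible.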
On 28 May 2015 10:24, "Hilm
Can you do the same with
ClassLoader.getSystemClassLoader().getResource("hbase-site.xml") and
see if that returns something different?
Thanks,
Stephan
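The suggested check above is easy to wrap in a small helper: getResource returns null when the file is not on the classpath, and the returned URL shows which copy wins when several are present (the class and method names here are illustrative):

```java
import java.net.URL;

public class ResourceCheck {
    // Returns the URL of the named classpath resource as a string,
    // or null if the system class loader cannot find it.
    public static String locate(String name) {
        URL url = ClassLoader.getSystemClassLoader().getResource(name);
        return url == null ? null : url.toString();
    }
}
```

For example, locate("hbase-site.xml") returning null would explain why the job falls back to the default HBase configuration.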
On Thu, May 28, 2015 at 10:14 AM, Hilmi Yildirim
<hilmi.yildi...@neofonie.de> wrote:
Hi,
I used release-0.8.
"org.apache.hadoop.hbase.UnknownScannerException:
org.apache.hadoop.hbase.UnknownScannerException: Name 2423, already closed?"
When I restart the job, the exception occurs for 1 or 2 other machines.
Does anyone know why this exception occurs?
Best Regards,
cluster then the job uses the default HBase config instead of the config
defined in the hbase-site.xml. The hbase-site.xml is included in the fat
jar. To start the job I use the web interface.
Best Regards,
non existing row?
On Wed, May 27, 2015 at 2:25 PM, Hilmi Yildirim
<hilmi.yildi...@neofonie.de> wrote:
In my job I modified the TableInputFormat so that it only reads
the first 100 records of the HBase table. With this modification
the errors occurred. Now, I imported th
works now.
Forwarded Message
Subject: Re: Not terminating process on a cluster
Date: Wed, 27 May 2015 10:40:13 +0200
From: Hilmi Yildirim
To: user@flink.apache.org
it is during the reading process
On 27.05.2015 at 10:12, Stephan Ewen wrote:
This looks like an HBase-specific thing.
At what point does this log come? After the data source task finished?
During processing?
On Wed, May 27, 2015 at 9:11 AM, Hilmi Yildirim
<hilmi.yildi...@neofonie.de> wrote:
retryTime=210163ms, msg=row
'5797669374912039332' on table 'table' at null
Does anyone know the reason for that?
Best Regards,
Yildirim
<hilmi.yildi...@neofonie.de> wrote:
I want to add that it is strange that the client wants to establish a
connection to localhost but I have defined another machine.
On 26.05.2015 at 14:05, Hilmi Yildirim wrote:
Hi,
I implemented a job which reads data from HBase with the following code (I
replaced the real address by m1
1068)
Does anyone know what I am doing wrong?
Best Regards,
Hilmi
/resources/archetype-resources/pom.xml
(Setting the regular Flink dependencies to "provided" and using the
maven-shade-plugin).
Best,
Robert
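What the quickstart pom does can be sketched roughly as follows (a minimal illustration, not the actual archetype file; the `${flink.version}` property and artifact choice are assumptions):

```xml
<!-- Flink itself is provided by the cluster, so keep it out of the fat jar -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-java</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>

<!-- Bundle the remaining (non-provided) dependencies into one fat jar -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
    </execution>
  </executions>
</plugin>
```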
On Tue, May 26, 2015 at 11:15 AM, Hilmi Yildirim
<hilmi.yildi...@neofonie.de> wrote:
Hi,
I want to deploy my job on a c
support org.apache.flink.addons.hbase.TableInputFormat?
Best Regards,
Hilmi
machine a file is
created by setting the degree of parallelism of the DataSink to 1.
Cheers,
Till
On Fri, May 22, 2015 at 3:52 PM, Hilmi Yildirim
<hilmi.yildi...@neofonie.de> wrote:
Hi,
I want to write the result of a batch process into a file. I want
that the
file in its local filesystem.
Does anyone know how I can do that?
Best Regards,
Hilmi
a method called:
public <OUT> DataStreamSource<OUT> createInput(InputFormat<OUT, ?> inputFormat, TypeInformation<OUT> typeInfo)
That's exactly what you need, right?
The issue with that pull request is probably only that we have to wait
for another week until it's merged.
On Wed, May 20, 2015 at 2:23 PM, Hilmi Yildi
feature request:
https://issues.apache.org/jira/browse/FLINK-2055
Are you interested in implementing this?
On Wed, May 20, 2015 at 10:50 AM, Hilmi Yildirim
<hilmi.yildi...@neofonie.de> wrote:
Thank you Flavio,
these are examples for B
*
https://github.com/apache/flink/blob/57615aaa19e9933e43ed0431a78dd231bf98b103/flink-staging/flink-hbase/src/test/java/org/apache/flink/addons/hbase/example/HBaseWriteExample.java
Best,
Flavio
On Wed, May 20, 2015 at 10:16 AM, Hilmi Yildirim
<hilmi.yildi...@neofonie.de> wrote:
Hi,
I want to write a stream continuously into HBase. For example, I have
1 source and 4 workers. I want each worker to write autonomously into
HBase. Is there a proper way to do it?
Best Regards,
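The "each worker writes autonomously" pattern asked about above can be sketched in plain Java (a hypothetical buffering sink, not Flink's actual sink API): each parallel instance keeps its own buffer, writes in invoke(), and flushes leftovers in close(), with no coordination between workers.

```java
import java.util.ArrayList;
import java.util.List;

public class BufferingSink {
    private final List<String> buffer = new ArrayList<>();
    private final int flushSize;
    private final List<String> sunk = new ArrayList<>(); // stands in for the HBase table

    public BufferingSink(int flushSize) { this.flushSize = flushSize; }

    // Called per element by this parallel instance only.
    public void invoke(String record) {
        buffer.add(record);
        if (buffer.size() >= flushSize) flush();
    }

    // Flush remaining records when the instance shuts down.
    public void close() { flush(); }

    private void flush() {
        sunk.addAll(buffer); // a real sink would issue a batch of HBase Puts here
        buffer.clear();
    }

    public List<String> written() { return sunk; }
}
```

With 4 parallel instances of such a sink, each one batches and writes its own share of the stream independently, which is exactly the autonomy the question asks for.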