Thank you Stephan! I'll let you know tomorrow!
On May 20, 2015 7:30 PM, "Stephan Ewen" wrote:
Hi!
I pushed a fix to the master that should solve this.
It probably needs a bit until the snapshot repositories are synced.
Let me know if it fixed your issue!
Greetings,
Stephan
On Wed, May 20, 2015 at 1:48 PM, Flavio Pompermaier
wrote:
> Here it is:
>
> java.lang.RuntimeException: Reques
Thanks very much
Hi!
We have implemented a transformer that computes a cooccurrence matrix
for words within a given window.
This matrix will then be used for unsupervised learning of vector
representations for words (we basically implement this:
http://nlp.stanford.edu/projects/glove/)
Right now, we have implemen
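For readers following along, here is a hedged, standalone sketch of the windowed cooccurrence counting that such a transformer computes — plain Java with illustrative names, not the actual Flink ML transformer code:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of window-based cooccurrence counting, the core of a
// GloVe-style pipeline. Class and method names are hypothetical.
public class Cooccurrence {
    // Counts how often each ordered word pair appears within `window`
    // positions of each other; both directions are stored so the
    // resulting matrix is symmetric.
    public static Map<String, Integer> count(String[] words, int window) {
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < words.length; i++) {
            int end = Math.min(words.length, i + window + 1);
            for (int j = i + 1; j < end; j++) {
                counts.merge(words[i] + "|" + words[j], 1, Integer::sum);
                counts.merge(words[j] + "|" + words[i], 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] text = {"the", "quick", "brown", "the"};
        Map<String, Integer> m = count(text, 2);
        System.out.println(m.get("the|quick")); // prints 2
    }
}
```

A real GloVe implementation would additionally weight pairs by distance and feed the counts into the weighted least-squares objective; this only shows the counting step.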
Hi,
sorry. I was doing too many things at the same time. I confused inputs and
outputs ;)
Please open a pull request for the changed method name...
On Wed, May 20, 2015 at 2:44 PM, Hilmi Yildirim
wrote:
createInput creates a stream out of a file and can be used for HBase,
correct?
But I do not want to read from HBase. I only want to write to HBase. For
that I implemented an HBaseOutputFormat which I pass to the writeToFile
method of the dataStream. Then, the results of the stream processing
a
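To make the idea concrete, here is a minimal, self-contained sketch of a buffering output format in the spirit of an HBaseOutputFormat. The interface below is a simplified stand-in for Flink's `org.apache.flink.api.common.io.OutputFormat`, and the sink is injected instead of being a real HTable, so all names here are illustrative:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Simplified stand-in for Flink's OutputFormat lifecycle
// (configure() omitted for brevity).
interface SimpleOutputFormat<T> {
    void open(int taskNumber, int numTasks) throws IOException;
    void writeRecord(T record) throws IOException;
    void close() throws IOException;
}

// Hypothetical HBase-style writer: buffers records and flushes them in
// batches through an injected sink. In real code the sink would be an
// HTable (or, in later HBase versions, a BufferedMutator).
public class BufferedHBaseishFormat implements SimpleOutputFormat<String> {
    private final Consumer<List<String>> sink;
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();

    public BufferedHBaseishFormat(Consumer<List<String>> sink, int batchSize) {
        this.sink = sink;
        this.batchSize = batchSize;
    }

    @Override public void open(int taskNumber, int numTasks) { buffer.clear(); }

    @Override public void writeRecord(String record) {
        buffer.add(record);
        if (buffer.size() >= batchSize) flush();
    }

    @Override public void close() { flush(); } // flush the tail batch

    private void flush() {
        if (!buffer.isEmpty()) {
            sink.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }

    public static void main(String[] args) throws IOException {
        List<String> written = new ArrayList<>();
        BufferedHBaseishFormat fmt = new BufferedHBaseishFormat(written::addAll, 2);
        fmt.open(0, 1);
        fmt.writeRecord("row1");
        fmt.writeRecord("row2");
        fmt.writeRecord("row3");
        fmt.close();
        System.out.println(written); // [row1, row2, row3]
    }
}
```

Each parallel worker gets its own instance (open/writeRecord/close are called per subtask), which is what allows the four workers in the question below to write autonomously.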
Hi,
great to hear that it is working.
If the PR is going to be only about adding the "write()" method, you
probably don't need to open the PR.
https://github.com/apache/flink/pull/521 is going to add a method called:
public DataStreamSource createInput(InputFormat
inputFormat, TypeInformation type
Hi,
I've changed "writeToFile" to public and then implemented an
OutputFormat to write the stream into HBase. This is working very
well. I will open a pull request later.
Maybe the method name "writeToFile" should be changed to, for example,
"write".
Alternatively, I can create a metho
Here it is:
java.lang.RuntimeException: Requesting the next InputSplit failed.
at
org.apache.flink.runtime.taskmanager.TaskInputSplitProvider.getNextInputSplit(TaskInputSplitProvider.java:89)
at
org.apache.flink.runtime.operators.DataSourceTask$1.hasNext(DataSourceTask.java:340)
at
org.apache.flin
This is a bug in the HadoopInputSplit. It does not follow the general class
loading rules in Flink. I think it is pretty straightforward to fix, I'll
give it a quick shot...
Can you send me the entire stack trace (where the serialization call comes
from) to verify this?
On Wed, May 20, 2015 at 12
Dears,
I am still having problems retrieving data from S3. I followed all your
indications in the previous posts, but now I get this error:
15/05/20 10:47:05 INFO s3.S3FileSystem: Creating new S3 file system binding
with Reduced Redundancy Storage enabled
15/05/20 10:47:13 WARN io.DelimitedInputF
Now I'm able to run the job but I get another exception. This time it seems
that Flink is not able to split my Parquet file:
Caused by: java.lang.ClassNotFoundException:
parquet.hadoop.ParquetInputSplit
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(UR
I'm merging the pull request; it was blocked by the streaming operator
rework, so it has been free to go since yesterday.
I do agree that it needs some additional love before it can be on the
master, but I am positive that it should be there this week.
On May 20, 2015 11:16 AM, "Robert Metzger" wrote:
There is this pending pull request which is addressing exactly the issues
I've mentioned (wrong naming, private method):
https://github.com/apache/flink/pull/521
I'll see what's blocking the PR ...
On Wed, May 20, 2015 at 11:11 AM, Robert Metzger
wrote:
Maybe we can also use the Batch HBase OutputFormat.
In the DataStream API there is a private method:
private DataStreamSink writeToFile(OutputFormat format, long millis) {
which seems to allow batch output formats.
The naming of the method seems weird because it's called "toFile" but it's
expectin
Hi,
I agree with Hilmi, Flavio's examples are for batch.
I'm not aware of a StreamingHBaseSink for Flink yet.
I filed a JIRA for the feature request:
https://issues.apache.org/jira/browse/FLINK-2055
Are you interested in implementing this?
On Wed, May 20, 2015 at 10:50 AM, Hilmi Yildirim wro
Yes, it could be that the jar classes and those on the cluster have not been
aligned for some days. Now I'll recompile both sides, and if I still have
the error I will change line 42 as you suggested.
Thanks Max
On Wed, May 20, 2015 at 10:53 AM, Maximilian Michels wrote:
Hi Flavio,
It would be helpful, if we knew which class could not be found. In the
ClosureCleaner, can you change line 42 to include the class name in the
error message? Like in this example:
private static ClassReader getClassReader(Class cls) {
String className = cls.getName().replaceFirst("^
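A hedged sketch of that suggestion — illustrative names only, not the actual ClosureCleaner code. The loader is injected so the failure path is easy to exercise; the real code would load the class bytes via `getResourceAsStream`:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.function.Function;

// Sketch of the "include the class name in the error message" change
// suggested for ClosureCleaner line 42. Hypothetical method names.
public class ClassReaderHint {
    public static InputStream getClassStream(String className,
            Function<String, InputStream> loader) throws IOException {
        String resource = className.replace('.', '/') + ".class";
        InputStream in = loader.apply(resource);
        if (in == null) {
            // Naming the class in the message is the whole point:
            // it tells you immediately which class could not be found.
            throw new IOException("Could not load class: " + className);
        }
        return in;
    }

    public static void main(String[] args) {
        try {
            getClassStream("com.example.Missing", r -> null);
        } catch (IOException e) {
            System.out.println(e.getMessage());
            // Could not load class: com.example.Missing
        }
    }
}
```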
Thank you Flavio,
these are examples for batch processing. But I want to write a
continuous stream into HBase within a StreamExecutionEnvironment
instead of an ExecutionEnvironment.
Best Regards,
Hilmi
Am 20.05.2015 um 10:42 schrieb Flavio Pompermaier:
I've added an example of HBase writing
at
flink-staging/flink-hbase/src/test/java/org/apache/flink/addons/hbase/example/HBaseWriteExample.java.
Otherwise you can look at these 2 URLs:
-
https://github.com/fpompermaier/flink/blob/hbaseOutExample/flink-staging/flink-hbase/src/test/java/org/a
Hi,
I want to write a stream continuously into HBase. For example, I have
1 source and 4 workers. I want each worker to write autonomously into
HBase. Is there a proper way to do it?
Best Regards,
--
Hilmi Yildirim
Software Developer R&D
T: +49 30 24627-281
hilmi.yildi...@neofonie.d
Hi,
in the for-loop you are always immediately returning. The code is only
getting i=0 (which is BUILDING).
You need to rework the for loop so that it's not immediately returning.
Maybe
for(int i=0;i wrote:
> I want to return all rows, including all values in valuesfromsubquery; this code just
> returns row i
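As a minimal sketch of the fix being suggested (the original code was not shown in full, so all names here are illustrative): collect every row inside the loop and return once, after the loop, instead of returning on the first iteration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical rework of a loop that returned on i=0 (BUILDING):
// accumulate all values, then return the whole list.
public class CollectAllRows {
    public static List<String> allRows(String[] valuesFromSubquery) {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < valuesFromSubquery.length; i++) {
            rows.add(valuesFromSubquery[i]); // no early return here
        }
        return rows; // return once, after the loop
    }

    public static void main(String[] args) {
        System.out.println(allRows(new String[]{"BUILDING", "ROAD", "RIVER"}));
        // [BUILDING, ROAD, RIVER]
    }
}
```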
It worked!
We used the 0.9-SNAPSHOT as you said, and it worked perfectly; also with a larger
data set we didn't face any out-of-memory problem.
Thanks
On 18 May 2015, at 21:19, Robert Metzger <rmetz...@apache.org> wrote:
You are not getting the issue in your local environme
Any insight about this?
On Tue, May 19, 2015 at 12:49 PM, Flavio Pompermaier
wrote:
> Hi to all,
>
> I tried to run my job on a brand new Flink cluster (0.9-SNAPSHOT) from the
> web client UI using the shading strategy of the quickstart example but I
> get this exception:
>
> Caused by: java.l