Yes Robert,
Unfortunately I discovered that the error was caused by Phoenix just a
little after sending the mail.
The error is generated in the finalize() method of the Phoenix MemoryManager,
so it seems somehow related to GC.
I reran the experiment logging to a file so I can investigate the
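The pattern described here can be sketched as follows. This is a hedged illustration of finalize-based leak detection, not Phoenix's actual GlobalMemoryManager code; the class and method names are made up:

```java
// Illustration only: a chunk that reports itself as orphaned if the GC
// finalizes it before free() was called. Phoenix uses a similar check in
// GlobalMemoryManager.finalize(), which is why the message shows up
// nondeterministically, whenever a GC cycle happens to run.
public class TrackedChunk {
    private volatile boolean freed = false;

    public void free() {
        freed = true;
    }

    public boolean isFreed() {
        return freed;
    }

    @Override
    protected void finalize() {
        if (!freed) {
            // Fires only when the chunk was garbage-collected without
            // being released first.
            System.err.println("Orphaned chunk of bytes found during finalize");
        }
    }

    public static void main(String[] args) {
        TrackedChunk chunk = new TrackedChunk();
        chunk.free(); // releasing explicitly keeps finalize() quiet
        System.out.println(chunk.isFreed()); // prints true
    }
}
```

This also explains why the error surfaced only "a little later": the log line is tied to a GC cycle, not to the operation that leaked the chunk.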
Hey Flavio,
I was not able to find the String "Orphaned chunk" in the Flink code base.
However, I found it here:
https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/memory/GlobalMemoryManager.java#L157
Maybe you've sent the message to the wrong mailing list.
The Table API (see
http://ci.apache.org/projects/flink/flink-docs-master/table.html) is
exactly for that.
Check it out!
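For illustration, the kind of record the question asks for — one that stores field names together with their datatypes and returns a field by name — can be sketched in plain Java. This is not the Table API itself; the class below is hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal record type: field values and field datatypes are both
// addressable by field name.
public class NamedRecord {
    private final Map<String, Object> values = new LinkedHashMap<>();
    private final Map<String, Class<?>> types = new LinkedHashMap<>();

    public void setField(String name, Object value) {
        values.put(name, value);
        types.put(name, value == null ? Object.class : value.getClass());
    }

    public Object getField(String name) {
        return values.get(name);
    }

    public Class<?> getFieldType(String name) {
        return types.get(name);
    }

    public static void main(String[] args) {
        NamedRecord r = new NamedRecord();
        r.setField("name", "hager");
        r.setField("age", 30);
        System.out.println(r.getField("name"));    // prints hager
        System.out.println(r.getFieldType("age")); // prints class java.lang.Integer
    }
}
```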
On Wed, Apr 15, 2015 at 4:23 PM, hagersaleh wrote:
> Is there a data type that stores the field name and the datatype of a field
> and returns a field by name?
>
> I want to handle operations by field name.
Is there a data type that stores the field name and the datatype of a field
and returns a field by name?
I want to handle operations by field name.
Hi to all,
another error today :(
My job ended with a lot of "Orphaned chunk of bytes found during
finalize" errors.
What could be the cause of this error?
Best,
Flavio
Hello all,
I am glad to report that the problem has been resolved.
The new version appears to have been pushed to Maven already, since all I
did was "re-import dependencies" from the IntelliJ menu.
Thank you very much for the very quick response.
Fotis
2015-04-15 15:02 GMT+02:00 Ufuk Celebi :
> On
On 15 Apr 2015, at 14:18, Maximilian Michels wrote:
> The exception indicates that you're still using the old version. It takes
> some time for the new Maven artifact to get deployed to the snapshot
> repository. Apparently, an artifact has already been deployed this morning.
> Did you delete the jar files in your .m2 folder?
Yes, sorry for that. I found it somewhere in the logs. The problem was
that the program didn't die immediately but was somehow hanging, and I
discovered the source of the problem only by running the program on a
subset of the data.
Thanks for the support,
Flavio
On Wed, Apr 15, 2015 at 2:56 PM, Steph
This means that the TaskManager was lost. The JobManager can no longer
reach the TaskManager and considers all tasks executing on the TaskManager
as failed.
Have a look at the TaskManager log, it should describe why the TaskManager
failed.
On 15.04.2015 at 14:45, "Flavio Pompermaier" wrote:
> Hi to
Hi to all,
I have this strange error in my job and I don't know what's going on.
What can I do?
The full exception is:
The slot in which the task was scheduled has been killed (probably loss of
TaskManager).
at org.apache.flink.runtime.instance.SimpleSlot.cancel(SimpleSlot.java:98)
at
org.apache
The exception indicates that you're still using the old version. It takes
some time for the new Maven artifact to get deployed to the snapshot
repository. Apparently, an artifact has already been deployed this morning.
Did you delete the jar files in your .m2 folder?
On Wed, Apr 15, 2015 at 1:38 PM
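For reference, clearing a cached snapshot usually means deleting the artifact's directory under the local Maven repository and rebuilding with -U. The sketch below demonstrates the deletion against a throwaway directory so nothing real is touched; on a real machine the repository is typically ~/.m2/repository:

```shell
# Simulate a local Maven repository in a temp dir (safe to run anywhere):
M2_REPO=$(mktemp -d)
mkdir -p "$M2_REPO/org/apache/flink/flink-core/0.9-SNAPSHOT"
touch "$M2_REPO/org/apache/flink/flink-core/0.9-SNAPSHOT/flink-core-0.9-SNAPSHOT.jar"

# Remove the cached Flink snapshot artifacts:
rm -rf "$M2_REPO/org/apache/flink"

# Then force Maven to re-check the snapshot repositories (not run here):
# mvn -U clean package
```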
Hello,
I'm still facing the problem with the 0.9-SNAPSHOT version. I tried removing
the libraries and downloading them again, but I get the same issue.
Greetings,
Mohamed
Exception in thread "main"
org.apache.flink.runtime.client.JobTimeoutException: Lost connection to
JobManager
at
org.apache.flink.run
Sure, just run an included example:
./bin/flink run examples/*WordCount.jar
By default, it will use a really small example DataSet but it should be
enough to verify the cluster setup.
You're welcome.
Best,
Max
On Wed, Apr 15, 2015 at 12:24 PM, Giacomo Licari
wrote:
> Thanks a lot Max,
> now
Hi Flavio,
Here's a simple example of a Left Outer Join:
https://gist.github.com/mxm/c2e9c459a9d82c18d789
As Stephan pointed out, this can be very easily modified to construct a
Right Outer Join (just exchange leftElements and rightElements in the two
loops).
Here's an excerpt with the most imp
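In the same spirit, the per-key logic such a CoGroupFunction applies can be sketched in plain Java. This is not the gist's actual code; the names below are hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class LeftOuterJoinSketch {
    // What a CoGroupFunction would run for each join key: pair every left
    // element with each matching right element, or with null when the right
    // group is empty. That empty case is exactly what a plain join() drops,
    // and swapping the roles of the two lists yields a right outer join.
    static List<String[]> coGroupToLeftOuter(List<String> leftElements,
                                             List<String> rightElements) {
        List<String[]> out = new ArrayList<>();
        for (String left : leftElements) {
            if (rightElements.isEmpty()) {
                out.add(new String[] { left, null }); // unmatched left survives
            } else {
                for (String right : rightElements) {
                    out.add(new String[] { left, right });
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> matched = coGroupToLeftOuter(
                Arrays.asList("a"), Arrays.asList("x", "y"));
        List<String[]> unmatched = coGroupToLeftOuter(
                Arrays.asList("b"), Collections.<String>emptyList());
        System.out.println(matched.size());      // prints 2
        System.out.println(unmatched.get(0)[1]); // prints null
    }
}
```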
Thanks a lot Max,
now I can set up ssh keys.
I started my cluster correctly with ./start-cluster.sh, do you use an
example program to test if everything works fine?
Thanks again,
Giacomo
On Wed, Apr 15, 2015 at 11:48 AM, Maximilian Michels wrote:
> Hi Giacomo,
>
> Have you tried setting up Fli
Hi Giacomo,
Have you tried setting up Flink on GCE using bdutil? It is very easy:
http://ci.apache.org/projects/flink/flink-docs-master/gce_setup.html
If you don't want to use bdutil:
I'm assuming you want to start Flink using the "start-cluster.sh" script,
which uses ssh to start the task manager
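As a rough sketch of the key setup those scripts rely on (hostnames and paths are hypothetical, and the effect is emulated here in a throwaway directory rather than on a live cluster):

```shell
# Create a passwordless key pair; on a real master node you would use ~/.ssh:
DIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$DIR/id_rsa" -q

# On a real cluster you would run: ssh-copy-id <user>@<worker> for each
# worker. Here we emulate its effect locally by appending the public key:
cat "$DIR/id_rsa.pub" >> "$DIR/authorized_keys"

# sshd rejects keys in files with loose permissions, so tighten them:
chmod 600 "$DIR/authorized_keys"
```

"Permission denied (publickey)" typically means the public key never made it into the worker's authorized_keys, or that file's permissions are too loose.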
Hi guys,
I'm trying to setup a simple Flink cluster on Google Compute Engine.
I'm running 3 nodes (1 master, 2 workers).
On the master node I set up an ssh key and moved it into authorized_keys.
When I try to copy my key to each worker node I get "Permission denied
(publickey)".
Has someone had the same problem?
I think this may be a great example to add as a utility function.
Or actually add it as a function to the DataSet, internally realized as a
special case of coGroup.
We do not have a ready example of that, but it should be straightforward to
realize. Similarly to the join, coGroup on the join keys
Please add a link explaining left join using coGroup,
or add an example.
Many thanks.
Do you have an already working example of it? :)
On Wed, Apr 15, 2015 at 10:32 AM, Ufuk Celebi wrote:
>
> On 15 Apr 2015, at 10:30, Flavio Pompermaier wrote:
>
> >
> > Hi to all,
> > I have to join two datasets but I'd like to keep all data in the left
> also if there's no right dataset.
> > How
On 15 Apr 2015, at 10:30, Flavio Pompermaier wrote:
>
> Hi to all,
> I have to join two datasets but I'd like to keep all data in the left also if
> there's no right dataset.
> How can you achieve that in Flink? Maybe I should use coGroup?
Yes, currently you have to implement this manually wit
Hi to all,
I have to join two datasets but I'd like to keep all data in the left also
if there's no right dataset.
How can you achieve that in Flink? Maybe I should use coGroup?
Best,
Flavio
On 15 Apr 2015, at 09:37, Flavio Pompermaier wrote:
> I've received an error running the job saying to increase this parameter so I
> set it to 2048*4 and everything worked.
> However, could you explain to me in detail how this number is computed?
> I'm running the job from my IDE (default paralle
Hi to all,
I've received an error running the job saying to increase this parameter so
I set it to 2048*4 and everything worked.
However, could you explain to me in detail how this number is computed?
I'm running the job from my IDE (default parallelism, so all my 8 cores), so
I was expecting no such error.
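For what it's worth, the Flink documentation of that era gives a rule of thumb rather than an exact derivation: roughly slots-per-TaskManager squared, times the number of TaskManagers, times 4. A sketch (the class and method names are made up):

```java
public class NetworkBufferEstimate {
    // Rule of thumb from the Flink docs (an approximation, not an exact
    // formula): every slot may exchange data with every slot on every
    // TaskManager, with about 4 buffers per channel. Jobs with several
    // concurrent shuffles can need proportionally more than this.
    static int recommendedBuffers(int slotsPerTaskManager, int taskManagers) {
        return slotsPerTaskManager * slotsPerTaskManager * taskManagers * 4;
    }

    public static void main(String[] args) {
        // A local IDE run with parallelism 8 behaves like one TaskManager
        // with 8 slots:
        System.out.println(recommendedBuffers(8, 1)); // prints 256
    }
}
```

That a plain 8-core local run exceeded the default is plausibly because this job had several shuffles active at once, multiplying the per-shuffle requirement.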