Hi,
I opened a JIRA (FLINK-2442) and submitted a PR (#963) for the "Wrong field
type" problem.
Is the other problem addressed in FLINK-2437?
Cheers, Fabian
2015-07-30 16:29 GMT+02:00 Gábor Gévay:
Thanks for the response.
As a temporary workaround, I tried to change these problematic lines:
} else {
    Preconditions.checkArgument(fieldType instanceof AtomicType,
        "Wrong field type: " + fieldType.toString());
    keyFields.add(new FlatFieldDescriptor(keyId, fieldType));
}
into this:
} else i
Thanks for reporting this issue.
The "Wrong field type" error looks like a bug to me.
This happens because PojoType is neither a TupleType nor an AtomicType. To
me it looks like the TupleTypeInfoBase condition should be generalized to
CompositeType.
I will look into this.
Cheers, Fabian
You could try to use the TypeSerializerInputFormat.
On Thu, Jul 30, 2015 at 2:08 PM, Flavio Pompermaier wrote:
Hello,
I am having some trouble building a graph where the vertex ID type is
a POJO. Specifically, it would be a class that has two fields: a long
field, and a field which is of another class that has four byte
fields. (Both classes implement the Comparable interface, as the Gelly
guide specifies.)
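For reference, a minimal sketch of what such a pair of Comparable key classes could look like. The class and field names here are my own assumptions for illustration, and Gelly/Flink is not needed to compile it; note that Flink's POJO rules additionally require a public no-argument constructor and public fields (or getters/setters):

```java
// Illustrative sketch: a composite vertex-ID POJO with a nested
// four-byte class, both Comparable. Names are assumptions.
class GroupId implements Comparable<GroupId> {
    public byte a, b, c, d;               // public fields keep it a Flink POJO

    public GroupId() {}                   // POJOs need a public no-arg constructor
    public GroupId(byte a, byte b, byte c, byte d) {
        this.a = a; this.b = b; this.c = c; this.d = d;
    }

    @Override
    public int compareTo(GroupId o) {
        int r = Byte.compare(a, o.a);
        if (r != 0) return r;
        r = Byte.compare(b, o.b);
        if (r != 0) return r;
        r = Byte.compare(c, o.c);
        if (r != 0) return r;
        return Byte.compare(d, o.d);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof GroupId)) return false;
        GroupId g = (GroupId) o;
        return a == g.a && b == g.b && c == g.c && d == g.d;
    }

    @Override
    public int hashCode() { return java.util.Objects.hash(a, b, c, d); }
}

class VertexKey implements Comparable<VertexKey> {
    public long id;
    public GroupId group;

    public VertexKey() {}
    public VertexKey(long id, GroupId group) { this.id = id; this.group = group; }

    @Override
    public int compareTo(VertexKey o) {
        int r = Long.compare(id, o.id);
        return r != 0 ? r : group.compareTo(o.group);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof VertexKey)) return false;
        VertexKey v = (VertexKey) o;
        return id == v.id && group.equals(v.group);
    }

    @Override
    public int hashCode() { return 31 * Long.hashCode(id) + group.hashCode(); }
}
```

Consistent equals/hashCode/compareTo across both classes matters here, since the type is used as a key.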
How can I create a Flink dataset given a directory path that contains a set
of Java objects serialized with Kryo (one file per object)?
On Thu, Jul 30, 2015 at 1:41 PM, Till Rohrmann wrote:
Hi Flavio,
In order to use the Kryo serializer for a given type, you can use the
registerTypeWithKryoSerializer method of the ExecutionEnvironment object. What
you provide to the method is the type you want to be serialized with Kryo and
an implementation of the com.esotericsoftware.kryo.Serializer class.
Re-hi,
I have double-checked, and actually there is an OutputFormat interface in Flink
which can be extended.
I believe that for specific formats such as the one mentioned by Michele, everyone
can develop the appropriate format.
On the other hand, having more OutputFormats I believe is something tha
I will double-check and try to commit this in the next days.
Dr. Radu Tudoran
Research Engineer
IT R&D Division
HUAWEI TECHNOLOGIES Duesseldorf GmbH
European Research Center
Riesstrasse 25, 80992 München
E-mail: radu.tudo...@huawei.com
Mobile: +49 15209084330
I have a project that produces RDF quads, and I have to store them and read them
with Flink afterwards.
I could use Thrift/Protobuf/Avro, but this means adding a lot of transitive
dependencies to my project.
Maybe I could use Kryo to store those objects... is there any example to
create a dataset of objects serialized with Kryo?
Hi,
It is currently not possible to isolate tasks that consume a lot of JVM
heap memory and schedule them to a specific slot (or TaskManager).
If you operate in a YARN setup, you can isolate different jobs from each
other by starting a new YARN session for each job, but tasks within the
same job cannot be isolated from each other.
Hi Stefan,
The problem is that the CsvParser does not know how to parse types other
than the ones that are supported. It would be nice if it supported a custom
parser, either manually specified or included in the POJO class itself.
You can either change your POJO fields to be of a supported type
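As a workaround sketch for the date field: read it as a String and convert it manually before (or after) building the POJO. A minimal stdlib-only sketch, assuming a line like `07.02.2015 49.9871 234.677` (the DataPoint shape and the parser are my own illustration, not Flink API):

```java
// Hedged sketch (not Flink API): parse the timestamp and values by hand.
// Class and field names are assumptions for illustration.
class DataPoint {
    public java.util.Date timestamp;
    public double[] values;
}

class DataPointParser {
    static DataPoint parse(String line) throws java.text.ParseException {
        String[] parts = line.trim().split("\\s+");
        // First token is the date, e.g. "07.02.2015"; the rest are doubles.
        java.text.SimpleDateFormat fmt = new java.text.SimpleDateFormat("dd.MM.yyyy");
        DataPoint p = new DataPoint();
        p.timestamp = fmt.parse(parts[0]);
        p.values = new double[parts.length - 1];
        for (int i = 1; i < parts.length; i++) {
            p.values[i - 1] = Double.parseDouble(parts[i]);
        }
        return p;
    }
}
```

In Flink, logic like this could live in a map function applied after reading the file as lines of text, keeping the CSV reader itself out of the picture.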
Hi,
I'm new to Flink and just taking the first steps...
I want to parse a CSV file that contains a date and time as the first
field, then some values:
> 07.02.2015 49.9871 234.677 ...
So I’d like to use this POJO:
> import java.util.Date;
>
> public class DataPoint
> {
> private Strin
Hi Michele, hi Radu
Flink does not have such an OutputFormat, but I agree, it would be a
valuable addition.
Radu's approach looks like the way to go to implement this feature.
@Radu, is there a way to contribute your OutputFormat to Flink?
Cheers, Fabian
2015-07-30 10:24 GMT+02:00 Radu Tudoran
I think what you outline is the right way to go for the time being.
The "Client" class is being reworked to get more of these
deployment/controlling methods, to make it easier to deploy/cancel/restart
jobs programmatically, but that will probably take a few more weeks to
converge.
Stephan
Quick response: I am not opposed to that, but there are tuple libraries
around already.
Do you specifically need the Flink tuples, for interoperability between
Flink and other projects?
On Thu, Jul 30, 2015 at 11:07 AM, Stephan Ewen wrote:
Should we move this to the dev list?
On Thu, Jul 30, 2015 at 10:43 AM, Flavio Pompermaier wrote:
Any thought about this (moving the Tuple classes into a separate,
self-contained project with no transitive dependencies, so that they can
easily be used in other external projects)?
On Mon, Jul 6, 2015 at 11:09 AM, Flavio Pompermaier wrote:
> Do you think it could be a good idea to extract the Flink tuples into a separate project?
Hi,
My 2 cents... based on something similar that I have tried.
I have created my own implementation of OutputFormat, where you define your own
logic for what happens in the writeRecord function. This logic would consist
in making a distinction between the ids and writing each record to the appropriate file.
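The idea described above (branching on the id inside the write-record logic) can be sketched without Flink at all. This is an illustrative, plain-java.io version of the per-group logic that a custom OutputFormat's writeRecord could delegate to; the class name and the file-naming scheme are my own assumptions:

```java
// Illustrative sketch: keep one writer per groupId, creating each file
// lazily on first use. Names and the "group-<id>.txt" scheme are assumptions.
class GroupedWriter implements java.io.Closeable {
    private final java.io.File dir;
    private final java.util.Map<String, java.io.PrintWriter> writers =
        new java.util.HashMap<>();

    GroupedWriter(java.io.File dir) { this.dir = dir; }

    // Roughly what writeRecord(record) would do inside a custom OutputFormat.
    void write(String groupId, String value) {
        java.io.PrintWriter w = writers.computeIfAbsent(groupId, id -> {
            try {
                // Custom file name per group, e.g. "group-<id>.txt".
                return new java.io.PrintWriter(
                    new java.io.FileWriter(new java.io.File(dir, "group-" + id + ".txt")));
            } catch (java.io.IOException e) {
                throw new java.io.UncheckedIOException(e);
            }
        });
        w.println(value);
    }

    @Override
    public void close() {
        for (java.io.PrintWriter w : writers.values()) {
            w.close();
        }
    }
}
```

In a real OutputFormat the open/close lifecycle methods would manage the writer map, and each parallel task instance would need its own set of files (or a task index in the file name) to avoid collisions.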
Hi everybody,
I have a question about the writer.
I have to save my dataset in different files according to a field of the tuples.
Let's assume I have a groupId in the tuple: I need to store each group in a
different file, with a custom name. Any idea on how I can do that?
thanks!
Michele