It seems that the transition 1.8.2->1.9.2 brings a backwards incompatibility,
and
String
which did work to change generation from CharSequence to String, does not
work any more. Within 15 minutes of searching I'm unable to find literally
any documentation of this plugin, so I don't know if t
, which is just awesome.
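For reference, the knob being discussed is the avro-maven-plugin's `stringType` configuration; a minimal sketch of how it is wired into a build (the version, phase, and goal shown are assumptions, adjust to your setup):

```xml
<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>1.9.2</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>schema</goal>
      </goals>
      <configuration>
        <!-- ask the code generator for java.lang.String instead of CharSequence -->
        <stringType>String</stringType>
      </configuration>
    </execution>
  </executions>
</plugin>
```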
On Mon, 21 Jun 2021 at 11:17, Martin Mucha wrote:
> It seems that the transition 1.8.2->1.9.2 brings a backwards incompatibility, and
>
>
> String
>
> which did work to change generation from CharSequence to String, does not
> work any
>>
>> https://issues.apache.org/jira/browse/AVRO-2702
>>
>> this should be solved in 1.10 (which it is not, incorrect code is still
>> generated). And if someone (like myself) is bound to 1.9.2 because of
>> confluent, there is no fix for this minor versi
Hi,
I have this avro schema:
{
  "name" : "ARecord",
  "type" : "record",
  "namespace" : "A",
  "fields" : [
    { "name": "id", "type": "string" },
    {
      "name": "B",
      "type": ["null", {
        "type": "record",
        "name": "BRecord",
        "fields": [
          { "name": "C", "type": "string" }
        ]
      }]
    }
  ]
}
does not require C. And that's not what I want ... I'd like an optional B,
and once the user provides B, then B.C should be required.
Martin.
2017-11-27 15:06 GMT+01:00 Dan Schmitt:
> "name": "B",
> "type"
> "fields": [
> { "name": "B", "type": "string" },
> { "name": "C", "type": "string" }
> ]
> }
> }
> ]
> }
>
> This gives me 0 or more ARecords, each with a
> So, right now, even the top level is failing the spec:
>
> IV) valid (0 ARecords):
> { }
>
> V) valid (2 ARecords):
> {
> "id": "...",
> "B": {
> "C": "..."
> }
> } ,
> "id": "...",
>
Hi,
is it possible, by design, to deserialize JSON with a schema that has an
optional value?
Schema:
{
"type" : "record",
"name" : "UserSessionEvent",
"namespace" : "events",
"fields" : [ {
"name" : "username",
"type" : "string"
}, {
"name" : "errorData",
"type" : [ "null", "string" ],
  } ]
}
wrote:
> It is possible to do it with a custom JsonDecoder.
>
> I wrote one that does this at:
> https://github.com/zolyfarkas/avro/blob/trunk/lang/java/avro/src/main/java/org/apache/avro/io/ExtendedJsonDecoder.java
>
>
> hope it helps.
>
>
> —Z
>
> On
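For readers hitting the same question: the stock Avro JsonDecoder only accepts Avro's own JSON encoding, in which a non-null union branch is wrapped in an object keyed by the branch's type name, while null is written as a bare null. For the UserSessionEvent schema above that looks like this (field values made up for illustration):

```json
{ "username": "alice", "errorData": { "string": "something failed" } }
```

Plain REST-style JSON such as `{ "username": "alice", "errorData": "something failed" }` is rejected by the standard decoder; the ExtendedJsonDecoder linked above is one way to accept the unwrapped form.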
Hi,
I've got some issues with, or a misunderstanding of, Avro schema evolution.
When reading through the Avro documentation, for example [1], I understood
that schema evolution is supported, and that if I add a column with a
specified default, it should be backwards compatible (and even forward
compatible when I remove it again)
> taken into account for
> data that is strictly missing from the binary input, just when a field
> is known to be in the reader schema but missing from the original
> writer.
>
> You may have more luck reading the GenericRecord with a
> GenericDatumReader with both schemas, a
DatumReader<>(Simple.getClassSchema(),
> SimpleV2.getClassSchema());
> Decoder decoder = DecoderFactory.get().binaryDecoder(v1AsBytes, null);
> SimpleV2 v2 = datumReader.read(null, decoder);
>
> assertThat(v2.getId(), is(1));
> assertThat(v2.getName(), is(
ialize the record. I really do not know how to do that; I'm pretty
sure I never saw this anywhere, and I cannot find it anywhere. But in
principle it must be possible, since the reader need not necessarily have
any control over which schema the writer used.
thanks a lot.
M.
On Tue, 30 Jul 2019 at 18:16,
> (+a magic byte) to the binary avro. Thus using the schema registry again
> you can get the writer schema.
>
> /Svante
>
> On Thu, Aug 1, 2019, 15:30 Martin Mucha wrote:
>
>> Hi,
>>
>> just one more question, not strictly related to the subject.
>>
>
> In the Confluent world the id=N is the N+1'th registered schema in the
> database (a kafka topic) if I remember right. Lose that database and you
> cannot read your kafka topics.
>
> So you have to use some other encoder, homegrown or not, that embeds either
> the full schema in
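A sketch of unpacking that Confluent-style framing with plain JDK code, assuming the usual layout of one zero magic byte followed by a 4-byte big-endian schema id and then the Avro binary payload (the class and method names here are mine, not from any library):

```java
import java.nio.ByteBuffer;

public class WireFormat {
    /** Extracts the schema registry id from a Confluent-framed message. */
    public static int schemaId(byte[] message) {
        ByteBuffer buf = ByteBuffer.wrap(message);
        byte magic = buf.get();              // byte 0: magic, expected 0x0
        if (magic != 0) {
            throw new IllegalArgumentException("unknown magic byte: " + magic);
        }
        return buf.getInt();                 // bytes 1-4: schema id, big-endian
        // the remaining bytes are the Avro binary-encoded record
    }
}
```

With the id in hand you ask the registry for the writer schema and decode the rest of the payload against it.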
Hi, I encountered weird behavior and have no idea how to fix it. Any
suggestions welcome.
The issue revolves around a union type at the top level, which I personally
dislike and consider to be a hack, but I understand the motivation behind
it: someone wanted to declare N types within a single avsc file (p
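For concreteness, such a top-level union is just a JSON array of named types; a hypothetical example (type names are made up):

```json
[
  { "type": "record", "name": "EventA", "namespace": "example",
    "fields": [ { "name": "id", "type": "string" } ] },
  { "type": "record", "name": "EventB", "namespace": "example",
    "fields": [ { "name": "count", "type": "int" } ] }
]
```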
Hi,
I'm relatively new to Avro, and I'm still struggling with schema
evolution and related issues. But today it should be a simple question.
What is the recommended naming of types if we want to use schema evolution?
Should the namespace contain some information about the version of the
schema? Or should it
e.
>
> HTH, Regards,
>
> Lee Hambley
> http://lee.hambley.name/
> +49 (0) 170 298 5667
>
>
> On Mon, 30 Dec 2019 at 17:26, Martin Mucha wrote:
>
>> Hi,
>> I'm relatively new to avro, and I'm still struggling with getting schema
>> evol
> language integration, so assume
> this is ignorance on my part.
>
> Maybe it'd help to know what "evolution" you plan, and what type names and
> name schemas you plan to be changing? The "schema evolution" is mostly
> meant to make it easier to add and remove fie
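As an illustration of that kind of evolution: keep the record's full name (namespace plus name) stable across versions, since Avro's schema resolution matches records by full name, and add new fields with defaults. A hypothetical v2 of a v1 record that originally had only id:

```json
{
  "type": "record",
  "name": "User",
  "namespace": "example",
  "fields": [
    { "name": "id", "type": "string" },
    { "name": "nickname", "type": ["null", "string"], "default": null }
  ]
}
```

Because nickname carries a default, v2 readers can consume v1 data, and v1 readers simply ignore the extra field when reading v2 data. Putting a version number in the namespace would make it a different type and defeat resolution.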