Hi Vitali,
In this case, `input` and `output` will each be a pointer to a
Point2 located elsewhere. So the `Aggregate` is 16 bytes (two pointers)
while each `Point2` is 8 bytes, for a total of 32 bytes. Whereas if you
wrote something like:
struct Aggregate {
  input :group { x @0 :Int32; y @1 :Int32; }
  output :group { x @2 :Int32; y @3 :Int32; }
}
This would represent the same data in 16 bytes total (however, `input` and
`output` have different, one-off types here, which is inconvenient).
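The size arithmetic above can be sketched in Python. This is a rough model of the word-based layout rules described in this thread (8-byte words, one word per pointer, data sections rounded up to a whole word), not the capnp library itself:

```python
WORD = 8  # Cap'n Proto allocates struct sections in 8-byte words

def struct_size(data_bytes, pointer_count):
    # Data section rounds up to a whole word; each pointer is one word.
    data_words = -(-data_bytes // WORD)  # ceiling division
    return data_words * WORD + pointer_count * WORD

# Pointer version: Aggregate holds two pointers to separate Point2 structs.
point2 = struct_size(data_bytes=2 * 4, pointer_count=0)       # two Int32s -> 8
aggregate = struct_size(data_bytes=0, pointer_count=2)        # 16
pointer_total = aggregate + 2 * point2                        # 32

# Group version: all four Int32s live inline in Aggregate's data section.
group_total = struct_size(data_bytes=4 * 4, pointer_count=0)  # 16

print(pointer_total, group_total)  # 32 16
```

This reproduces the 32-bytes-vs-16-bytes comparison: the two pointers plus two out-of-line Point2 structs exactly double the size of the inline group layout.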
-Kenton
On Sat, Jul 27, 2019 at 3:15 PM Vitali Lovich <[email protected]> wrote:
> That's great to hear & makes sense. What about non-list types?
>
> struct Point2 {
>   x @0 :Int32;
>   y @1 :Int32;
> }
>
> struct Aggregate {
>   input @0 :Point2;
>   output @1 :Point2;
> }
>
> I'm assuming that in this case there is additional overhead unrelated to
> padding?
>
> On Sat, Jul 27, 2019 at 2:45 PM Kenton Varda <[email protected]>
> wrote:
>
>> Hi Vitali,
>>
>> Good news: The two schemas you wrote have identical wire size. I'd
>> recommend going with the cleaner struct list rather than parallel lists.
>>
>> Details:
>> - Struct lists are "flattened", meaning the structs are stored
>> consecutively with nothing in between.
>> - Fields of a struct are not "tagged" as in protobuf; instead, fields
>> are identified by their fixed offsets. A struct containing two Float32s is
>> exactly 8 bytes -- 4 bytes for each field.
>> - A struct list is prefixed by a single 8-byte tag describing the size of
>> each element, whereas primitive lists (e.g. List(Float32)) require no tag.
>> But your parallel-arrays schema has two 8-byte pointers in Aggregate
>> whereas the struct-list version has one. So these cancel out, and the size
>> ends up identical.
>>
>> However, note that if you had a Vector3f containing three Float32's, each
>> element would be padded up to an 8-byte boundary for alignment reasons. In
>> that case, parallel arrays would be 25% smaller than the struct list
>> because they avoid the padding. I would still recommend using a struct
>> list, though, and using some sort of compression to remove the padding if
>> it's important.
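The 25% figure comes straight from the padding arithmetic, which can be sketched in Python. This is a rough model of the element-alignment rule described above (struct-list elements rounded up to an 8-byte boundary, primitive lists packed), not the capnp library itself:

```python
WORD = 8  # each struct-list element is rounded up to an 8-byte boundary

def round_up(n, align=WORD):
    return -(-n // align) * align

# Vector3f: three Float32 fields, 4 bytes each.
payload = 3 * 4                       # 12 bytes of actual data per point

struct_list_elem = round_up(payload)  # 16 bytes per element; 4 are padding
parallel_elem = payload               # primitive lists pack with no padding

savings = 1 - parallel_elem / struct_list_elem
print(struct_list_elem, parallel_elem, savings)  # 16 12 0.25
```

The 12-byte Vector3f payload is padded to 16 bytes in a struct list, so the parallel-array layout carries 12/16 of the bytes per point, i.e. 25% less.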
>>
>> -Kenton
>>
>> On Sat, Jul 27, 2019 at 2:33 PM <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> New to Cap'n Proto, but coming from deep familiarity with
>>> protobuf/thrift. I was wondering whether Cap'n Proto needs the same
>>> decomposition into parallel lists as protobuf to reduce wire overhead?
>>>
>>> The use-case is something like (hopefully getting the syntax correct
>>> writing it blind):
>>>
>>> struct Vector2f {
>>>   x @0 :Float32;
>>>   y @1 :Float32;
>>> }
>>>
>>> struct Aggregate {
>>>   points @0 :List(Vector2f);
>>> }
>>>
>>> Do I need to transform it to
>>>
>>> struct Aggregate {
>>>   xs @0 :List(Float32);
>>>   ys @1 :List(Float32);
>>> }
>>>
>>> to reduce the size on the wire, or is the overhead calculus different for
>>> Cap'n Proto? I'm not sure what per-element overhead one has vs. the other.
>>>
>>> I'm also guessing cache locality is hurt in the decomposed case which
>>> might be an alternate axis to weigh against any wire overhead...
>>>
>>> Thanks,
>>> Vitali
>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Cap'n Proto" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to [email protected].
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/capnproto/e67eac3b-e6cd-49c1-8065-8136b68a384c%40googlegroups.com
>>> .
>>>
>>