https://git-wip-us.apache.org/repos/asf/flex-sdk/repo?p=flex-sdk.git;a=blob;f=modules/swfutils/src/java/flash/swf/SwfEncoder.java;h=03a100dda92989d537b00b96033d614c73c47801;hb=HEAD#l320
This is the code I'm talking about. What is strange about it: it doesn't do what the comment above it says. For example, the comment claims that you need 2 bits to write the unsigned integer 1, which is obviously false: one bit is enough. In fact, the code overestimates the number of bits to be written by one whenever all bits of the value are set (1, 3, 7, 15, and so on); see the sketch at the end of this post. (You could also point fingers at the strange technique the author used for assertions, but that is less important at the moment.)

Why I am confused about this code: even though it is wrong, the results I see in the player agree with the results it produces. That is, when Flash Player decodes the Matrix record, it uses some procedure which expects this extra bit (I'm still struggling to understand how it does that, but it certainly works outside the spec).

I'm trying to implement a SWF linker, something that assembles SWF files from a different format. I used the HXSWFML library for that (it's written in Haxe), which in turn uses the haxe.format.swf library. The problem is that when I write the Matrix record using these libraries, I do it to the spec, but the player reads the record in some wrong way which I can't understand. Somehow the Flex compiler manages to produce results coherent with the player...

Below is my dissection of a generated PlaceObject3 tag containing a Matrix record, and its interpretation using haxe.format.swf and the swfdump utility from Flex (in each table the first data row is the raw bytes, the second the decoded value, the third the bit-level layout):

| id          | dummy  | length                              | options                          |
|-------------+--------+-------------------------------------+----------------------------------|
| bf          | 11     | 17 00 00 00                         | 26 00                            |
| 70          | 63     | 23                                  | hasname, hasmatrix, hascharacter |
| 00010001 10 | 111111 | 00010111 00000000 00000000 00000000 | 00100110 00000000                |

| depth             | characterid       | matrix                        | name                 |
|-------------------+-------------------+-------------------------------+----------------------|
| 01 00             | 02 00             | a1 c5 c6 88 17 f7 3e c4 8b 98 | 73 6c 69 64 65 30 00 |
| 1                 | 2                 |                               | "slide0"             |
| 00000001 00000000 | 00000010 00000000 | 10100001 11000101 11000110    |                      |

| hasscale | nscalebits | scalex    | scaley    | hasrotate | nrotatebits | rotateskew0 |
|----------+------------+-----------+-----------+-----------+-------------+-------------|
|          | a1         | c5        | c6        |           | 88          | 17          |
| true     | 8          |           |           | true      | 8           |             |
| 1        | 01000      | 01 110001 | 01 110001 | 1         | 0 1000      | 1000 0001   |

| rotateskew1 | ntranslatebits | translatex      | translatey       | padding |
|-------------+----------------+-----------------+------------------+---------|
| f7          | 3e             | c4 8b 98        |                  |         |
|             | 14             | 8034 (401.7)    | 4467 (223.35)    |         |
| 0111 1111   | 0111 0         | 0111110 1100010 | 0 10001011 10011 | 000     |

The calculations I made by hand agree with haxe.format.swf and with my understanding of the SWF file specification. Specifically, they give me this:

Matrix:
HasScale = 1
ScaleX = 0.44140625
ScaleY = 0.44140625
HasRotate = 1
RotateSkew0 = 0.50390625
RotateSkew1 = 0.49609375
TranslateX = 8034 twips = 401.7 px
TranslateY = 4467 twips = 223.35 px

Swfdump gives me:

<PlaceObject2 depth='1' matrix='s1.8934174,1.8934174 r-4.135132,2.1351318 t8034,4467'/>

If I trace the matrix from inside the player, it gives me:

(a=1.8934173583984375, b=-4.1351318359375, c=2.1351318359375, d=1.8934173583984375, tx=401.7, ty=223.35)

A result I cannot explain: the matrix was nearly symmetrical to begin with (RotateSkew0 = 0.5039..., RotateSkew1 = 0.4961...), so how on Earth does the player arrive at skew terms b and c as different as -4.135 and 2.135?

Sorry for the long post. It looks to me like this is a bug in the SWF format implementation in Adobe Flash Player, which eventually made its way into the Flex compiler too.
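To make the off-by-one concrete, here is a small self-contained Java sketch. `minBitsFlexStyle` is only my reconstruction of the behaviour described above (the name is mine and it is not a verbatim copy of SwfEncoder.java); `minBitsSpec` is the spec-minimal width of an unsigned bit field, for comparison:

```java
// MinBitsSketch.java -- an illustration of the behaviour described above,
// not a verbatim copy of SwfEncoder.java.
public class MinBitsSketch {

    // Because `val` is grown into an all-ones mask and compared with <=,
    // the loop runs one extra time exactly when `number` is itself all ones
    // (1, 3, 7, 15, ...), so those values are reported one bit too wide.
    static int minBitsFlexStyle(int number, int bits) {
        int val = 1;
        for (int x = 1; val <= number; x <<= 1) {
            val = val | x;   // first pass ORs 1 into 1, so the mask lags one step
            bits++;
        }
        return bits;
    }

    // Spec-minimal width of an unsigned bit field holding v.
    static int minBitsSpec(int v) {
        return v == 0 ? 0 : 32 - Integer.numberOfLeadingZeros(v);
    }

    public static void main(String[] args) {
        for (int v : new int[] {1, 2, 3, 4, 7, 8, 15}) {
            System.out.printf("v=%2d  flex-style=%d  spec=%d%n",
                    v, minBitsFlexStyle(v, 0), minBitsSpec(v));
        }
        // v= 1  flex-style=2  spec=1   <- extra bit
        // v= 2  flex-style=2  spec=2
        // v= 3  flex-style=3  spec=2   <- extra bit
        // v= 4  flex-style=3  spec=3
        // v= 7  flex-style=4  spec=3   <- extra bit
        // v= 8  flex-style=4  spec=4
        // v=15  flex-style=5  spec=4   <- extra bit
    }
}
```

And to double-check the hand dissection above, here is a throwaway bit reader (again my own code, not taken from haxe.format.swf or HXSWFML; it reads every field as an unsigned big-endian bit field, which is enough to reproduce my hand-computed numbers) run over the ten matrix bytes:

```java
// MatrixCheck.java -- re-checks the MATRIX dissection from the tables above.
public class MatrixCheck {
    private final byte[] data;
    private int bitPos = 0;

    MatrixCheck(byte[] data) { this.data = data; }

    // Read n bits MSB-first as an unsigned value.
    int readUB(int n) {
        int v = 0;
        for (int i = 0; i < n; i++, bitPos++) {
            int bit = (data[bitPos >> 3] >> (7 - (bitPos & 7))) & 1;
            v = (v << 1) | bit;
        }
        return v;
    }

    public static void main(String[] args) {
        byte[] matrix = {(byte) 0xa1, (byte) 0xc5, (byte) 0xc6, (byte) 0x88,
                         (byte) 0x17, (byte) 0xf7, (byte) 0x3e, (byte) 0xc4,
                         (byte) 0x8b, (byte) 0x98};
        MatrixCheck r = new MatrixCheck(matrix);
        int hasScale       = r.readUB(1);               // 1
        int nScaleBits     = r.readUB(5);               // 8
        int scaleX         = r.readUB(nScaleBits);      // 113 -> 113/256 = 0.44140625
        int scaleY         = r.readUB(nScaleBits);      // 113
        int hasRotate      = r.readUB(1);               // 1
        int nRotateBits    = r.readUB(5);               // 8
        int skew0          = r.readUB(nRotateBits);     // 129 -> 0.50390625
        int skew1          = r.readUB(nRotateBits);     // 127 -> 0.49609375
        int nTranslateBits = r.readUB(5);               // 14
        int tx             = r.readUB(nTranslateBits);  // 8034 twips = 401.7 px
        int ty             = r.readUB(nTranslateBits);  // 4467 twips = 223.35 px
        System.out.println("scaleX=" + scaleX + " scaleY=" + scaleY
                + " skew0=" + skew0 + " skew1=" + skew1
                + " tx=" + tx + " ty=" + ty);
        // Prints: scaleX=113 scaleY=113 skew0=129 skew1=127 tx=8034 ty=4467,
        // matching the hand-computed values in the tables.
    }
}
```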
I would be interested to understand in what way exactly it makes this mistake: no matter what I've tried, I can't reproduce these numbers :/

Best,
Oleg