I want to share my experience with garbage collection in the Java virtual
machine.
There are two common types of garbage collection: the aggressive reference
count based, and everything else.
The reference count system can guarantee a quick response to memory
release. In such a system, we can safel
> A deterministic finalization means we shouldn't need to force programmers
> to have good ideas. Make it easy, remember? :)
I don't believe such an algorithm exists, unless you stick with reference
counting.
Hong
> {
> my $fh = IO::File->new("file");
> print $fh "foo\n";
> }
> {
> my $fh = IO::File->new("file");
> print $fh "bar\n";
> }
>
> At present "file" will contain "foo\nbar\n". Without DF it could just
> as well be "bar\nfoo\n". Make no mistake, this is a major change to t
> Hong Zhang wrote:
>
> > This code should NEVER work, period. People will just ask for trouble
> > with this kind of code.
>
> Actually I meant to have specified ">>" as the mode, i.e. append, then
> what I originally said holds true. This behav
Hi, All,
I want to give some of my thoughts about string encoding.
Personally I like the UTF-8 encoding. The solution to the
variable length can be handled by a special (virtual)
function like
class String {
virtual UV iterate(/*inout*/ int* index);
};
So in typical string iteration, the co
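For illustration, here is roughly what such an iterate function could look
like as plain C (a hedged sketch; UV is a stand-in for Perl's unsigned value
type, well-formed UTF-8 is assumed, and error handling is omitted):

#include <stdint.h>

typedef uint32_t UV;   /* stand-in for Perl's UV */

/* Decode the codepoint at byte offset *index and advance *index past it. */
UV utf8_iterate(const unsigned char *s, int *index)
{
    unsigned char b = s[*index];
    UV cp;
    int extra;

    if      (b < 0x80) { cp = b;        extra = 0; }  /* 0xxxxxxx          */
    else if (b < 0xE0) { cp = b & 0x1F; extra = 1; }  /* 110xxxxx 10xxxxxx */
    else if (b < 0xF0) { cp = b & 0x0F; extra = 2; }  /* 1110xxxx + 2      */
    else               { cp = b & 0x07; extra = 3; }  /* 11110xxx + 3      */

    (*index)++;
    while (extra-- > 0) {
        cp = (cp << 6) | (s[*index] & 0x3F);          /* fold in 10xxxxxx  */
        (*index)++;
    }
    return cp;
}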
> On Thu, Feb 15, 2001 at 02:31:03PM -0800, Hong Zhang wrote:
> > Personally I like the UTF-8 encoding. The solution to the
> > variable length can be handled by a special (virtual)
> > function like
>
> I'm expecting that the virtual, internal representation
> On Thu, Feb 15, 2001 at 03:59:54PM -0800, Hong Zhang wrote:
> > The concept of characters has nothing to do with codepoints.
> > Many characters are composed of more than one codepoint.
>
> This isn't true.
What do you mean? Have you seen people using multi-by
> ...and because of this you can't randomly access the string, you are
> reduced to sequential access (*). And here I thought we could have
> left tape drives to the last millennium.
>
> (*) Yes, of course you could cache your sequential access so you only
> need to do it once, and build balance
> There are several concurrent GC algorithms that don't use
> mutexes -- but they usually depend on read or write barriers
> which may be really hard for us to implement. Making them run
> well always requires help from the OS memory manager and that
> would hurt portability. (If we don't have OS
> > What do you mean? Have you seen people using multi-byte encoding
> > in Japan/China/Korea?
>
> You're talking to the wrong person. Japanese data handling is my graduate
> dissertation. :)
>
> The Unified Hangul/Kanji/Ha'nzi' Characters in Unicode (so-called "Unihan")
> occupy one and only one
> Then you would be incorrect. To find the character at position 233253 in a
> variable-length encoding requires scanning the string from the beginning,
> and has a rather significant potential cost. You've got a test for every
> character up to that point with a potential branch or two on each on
> substr's already been mentioned.
I have already given the counter-argument. The codepoint position is useless
in many cases. It should be deprecated.
> Regular expressions. Perl does rather a lot of them. We've already found from
> Perl 5 development that they get nasty when variable length
> And address arithmetic and mem(cmp|cpy) is faster than array iteration.
Ha Ha Ha. You must be kidding.
The mem(cmp|cpy) functions work just fine for UTF-8 string comparison and copying.
But memcmp() cannot be used for UTF-32 string comparison, because
of the endian issue.
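A tiny sketch of the point (assuming a little-endian host): memcmp() compares
the low-order byte of each UTF-32 code unit first, so it does not reproduce
codepoint order, while UTF-8 bytes happen to sort in codepoint order.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint32_t a[1] = { 0x0100 };   /* codepoint U+0100 */
    uint32_t b[1] = { 0x0001 };   /* codepoint U+0001 */

    /* In codepoint order a > b, but on a little-endian host the first
     * byte of a is 0x00 and the first byte of b is 0x01, so memcmp()
     * reports a < b. */
    printf("memcmp says %d\n", memcmp(a, b, sizeof a));
    return 0;
}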
Hong
> >People in Japan/China/Korea have been using multi-byte encodings for a
> >long time. I personally have used them for more than 10 years. I never felt
> >much of the "pain". Do you think I am using my computer with O(n)
> >while you are using it with O(1)? There are 100 million people using
> >variable-l
> >Did it buy you much? I don't believe so. Can you give some examples why
> >random character access is so important? Most people are processing text
> >linearly.
>
> Most, but not all. And as this is the internals list, we have to deal with
> all. We can't choose a convenient subset and ignore t
> > But the memcmp() can not be used for UTF-32 string comparison, because
> > of endian issue.
>
> What endian issue? If you have two differently-endian strings being
> compared at the C level, you have *far* bigger design problems
> than the choice of UTF.
My argument was:
You can use memcmp(
I would like to wrap up my argument.
I recommend using UTF-8 as the sole string encoding.
If we end up with multiple encodings, there is absolutely
no point to this argument.
The benefits of UTF-8 are that it is more compact, needs less encoding
conversion, and is more friendly to the C API. UTF-16 is a variable-length
encoding too,
> > I have already given the counter-argument. The codepoint position is useless
> > in many cases. It should be deprecated.
>
> Uh? That doesn't make sense. Codepoint position is *exactly* what people
> expect when they use substr. When I say
>
> $a = substr($b,10);
>
> I want the 10th char
> On Fri, Feb 16, 2001 at 02:39:10PM -0800, Hong Zhang wrote:
> > But you can not use memcmp() to compare binary order of two UTF-32
> > strings on little endian machines, even both strings are using
> > the same endian.
>
> Yes, you can.
Yes and no. You can use for
> > I think you already mixed up the codepoint vs character. What you will get is
> > the 10th codepoint, not the 10th character.
>
> I think you're confused. Codepoints *are* characters. Combining characters are
> taken care of as per the RFC.
If you define it that way, I can agree with it. Since you still ha
I don't quite understand what the intention is here. Most
C garbage collectors are mark-and-sweep based. They have all the common
problems of GC, for example non-deterministic finalization
(destruction) and conservativeness. If we decide to use
GC for Perl, it will be trivial to implement a simple
mark swe
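To make "trivial" concrete, a deliberately minimal mark-and-sweep sketch in C
looks like this (the object layout and root handling are hypothetical, and
finalization is ignored):

#include <stdlib.h>

typedef struct Obj {
    struct Obj *next;        /* all allocated objects are chained together */
    struct Obj *children[2]; /* outgoing references (NULL if unused)       */
    int         marked;
} Obj;

static Obj *all_objects;     /* head of the allocation chain */

static void mark(Obj *o)
{
    if (o == NULL || o->marked)
        return;
    o->marked = 1;
    mark(o->children[0]);
    mark(o->children[1]);
}

static void sweep(void)
{
    Obj **p = &all_objects;
    while (*p) {
        Obj *o = *p;
        if (!o->marked) {        /* unreachable: unlink and free */
            *p = o->next;
            free(o);
        } else {                 /* reachable: clear mark for next cycle */
            o->marked = 0;
            p = &o->next;
        }
    }
}

void gc_collect(Obj **roots, int nroots)
{
    int i;
    for (i = 0; i < nroots; i++)
        mark(roots[i]);
    sweep();
}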
Almost every GC algorithm has its advantages and disadvantages.
Real-time GC normally carries a high cost, both in memory and in
CPU time.
I believe that having options is very important. We should make the Perl
6 runtime compatible with multiple GC schemes, possibly including
reference counting. However, it wil
> Integer data types are generically referred to as C<INT>s. There is an
> C<INT> typedef that is guaranteed to hold any integer type.
Does such a thing exist? Unless it is BIGINT.
> Should we scrap the buffer pointer and just tack the buffer on the end
> of the structure? Saves a level of indirection, but
I believe we should use the low bits for tagging. It will make the switch
case much faster.
If you still emphasize speed, we can make
0x05 => UTF-8
0x06 => UTF-16
0x07 => UTF-32
#define IS_UTF_ANY(a) \
    (((a)->flags & 0x07) >= 0x05)
However, I don't believe it is needed.
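Spelled out as a compilable sketch (the constant names are made up, not an
agreed API; it assumes a flags field like the one in the perl_string struct
discussed elsewhere in this thread):

/* Hypothetical names for the low-bit string-type tags discussed above. */
#define STR_TYPE_MASK   0x07
#define STR_TYPE_UTF8   0x05
#define STR_TYPE_UTF16  0x06
#define STR_TYPE_UTF32  0x07

#define STR_TYPE(s)     ((s)->flags & STR_TYPE_MASK)
#define IS_UTF_ANY(s)   (STR_TYPE(s) >= STR_TYPE_UTF8)

/* ...and the tag can drive a switch directly:
 *
 *   switch (STR_TYPE(s)) {
 *   case STR_TYPE_UTF8:  ... break;
 *   case STR_TYPE_UTF16: ... break;
 *   case STR_TYPE_UTF32: ... break;
 *   }
 */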
Hong
> If your inter
> >at some
> >points it becomes necessary to have an unsigned type for "the largest
> >integer" which in this case would be 72 bits.
> >[and on a machine with nothing larger than 32 will be 32]
>
> Sure. The size of an INT will probably be either 32 or 64 bits, depending
> both on the size of an I
> I was hoping to get us something that was guaranteed to hold an integer, no
> matter what it was, so you could do something like:
>
>struct thingie {
> UV type;
> INT my_int;
>}
What is the purpose of doing this? The SV is guaranteed to hold anything.
Why do we need a type that
>struct perl_string {
> void *string_buffer;
> UV length;
> UV allocated;
> UV flags;
>}
>
> The low three bits of the flags field is reserved for the type of the
> string. The various types are:
>
> =over 4
>
> =item BINARY (0)
>
> =item ASCII (1)
>
> =item EBCDIC
> >Here is an example, "re`sume`" takes 6 characters in Latin-1, but
> >could take 8 characters in Unicode. All Perl functions that directly
> >deal with character position and length will be sensitive to encoding.
> >I wonder how we should handle this case.
>
> My first inclination is to force n
> Unless I really, *really* misread the unicode standard (which is distinctly
> possible) normalization has nothing to do with encoding,
I understand what you are trying to say. But it is not very easy in practice.
The normalization has something to do with encoding. If you compare two
strings wi
> Looks like they do operations with 16-bit integers. I'd as soon go with
> 32-bit ones--wastes a little space, but should be faster. (Except where we
> should shift to 64-bit words)
Using 32/31-bit requires general support for 64-bit arithmetic, for shift
and multiply. Without it, we have to use
> I was thinking maybe (length/4)*31-bit 2s complement to make portable
> overflow detection easier, but that would be only if there wasn't a good C
> library for this available to snag.
I believe Python uses (length/2)*15-bit 2's complement representation.
Because bigint and bignum are complica
For bigint, we definitely need a highly portable implementation.
People can do platform specific optimization on their own later.
We should settle the generic implementation first, with proper
encapsulation.
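As a hedged sketch of how portable such a representation can be, here is
addition over base-2^15 digits held in 16-bit words; nothing beyond ANSI C
integer arithmetic is assumed:

#include <stdint.h>
#include <stddef.h>

#define DIGIT_BITS 15
#define DIGIT_MASK ((1u << DIGIT_BITS) - 1)

/* Add two n-digit numbers (least significant digit first).  Using 15 bits
 * per 16-bit word means the carry never overflows plain int arithmetic.
 * r must have room for n+1 digits; returns the number of digits written. */
size_t big_add(uint16_t *r, const uint16_t *a, const uint16_t *b, size_t n)
{
    unsigned int carry = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        unsigned int sum = (unsigned int)a[i] + b[i] + carry;
        r[i]  = (uint16_t)(sum & DIGIT_MASK);
        carry = sum >> DIGIT_BITS;
    }
    r[n] = (uint16_t)carry;
    return carry ? n + 1 : n;
}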
Hong
> >Do we need to settle on anything - can it vary by platform so that 64 bit
> >plat
> > The normalization has something to do with encoding. If you compare two
> > strings with the same encoding, of course you don't have to care about it.
>
> Of course you do. Think about it.
I said "you don't have to". You can use "==" for codepoint comparison, and
something like "Normalizer.co
Here is some of my experience with HotSpot for Linux port.
> I've read, in the glibc info manuals, that a similar situation
> exists in C programming -- you don't want to do a lot inside the
> signal handler; just set a flag and return, then check that flag from
> your main loop, and run a "
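That pattern is, roughly, the following (a sketch; the work done in the main
loop is of course application-specific):

/* Classic "set a flag in the handler, act on it later" pattern. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int sig)
{
    (void)sig;
    got_signal = 1;          /* only async-signal-safe work in here */
}

int main(void)
{
    signal(SIGINT, handler);
    for (;;) {
        if (got_signal) {
            got_signal = 0;
            printf("handling deferred signal in the main loop\n");
        }
        sleep(1);            /* placeholder for the real event loop */
    }
}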
> 6) There will be a glyph boundary/non-glyph boundary pair of regex
> characters to match the word/non-word boundary ones we already have. (While
> I'd personally like \g and \G, that won't work as \G is already taken)
>
> I also realize that the decomposition flag on regexes would mean that
> s/
> >> What if, at the C level, you had a signal handler that sets or
> >> increments a flag or counter, stuffs a struct with information about
> >> the signal's context, then pushes (by "push", I mean "(cons v ls)",
> >> not "(append! ls v)" 'whatever ;-) that struct on a stack...
>
> >I recommend using a 'u' flag, which indicates all operations are performed
> >against unicode graphemes/glyphs. By default the re is performed on codepoints.
>
> U doesn't really signal "glyph" to me, but we are sort of limited in what
> we have left. We still need a zero-width assertion for glyph boun
> > >We need the character equivalence construct, such as [[=a=]], which
> > >matches "a", "A ACUTE".
> >
> > Yeah, we really need a big list of these. PDD anyone?
> >
>
> But surely this is a locale issue, and not an encoding one? Not every
> language recognizes the same character equivalences
> The only problem with that is it means we'll be potentially altering the
> data as it comes in, which leads back to the problem of input and output
> files not matching for simple filter programs. (Plus it means we spend CPU
> cycles altering data that we might not actually need to)
>
I don't t
Are we over-optimizing? Perl is just an interpreted language.
Who really needs this kind of optimization for Perl? Even C does
not provide this feature. Though Pascal/Ada have distinctions
like function/procedure, it does not make them any faster than C.
Just given its ugly name, I hate to se
> >Who really needs this kind of optimization for Perl?
>
> I do. Lots of people with web apps do. Pretty much anyone with a large or
> long-running perl program does.
I have to say that I agree to disagree. Since it has been so controversial,
I just don't think this optimization is a good one.
>
>IIRC, ISO C says you cannot have /^_[A-Z_][A-Za-z_0-9]*$/. That's reserved
>for the standard.
If you consider that our prefix is "_Perl_", not just "_", we will be pretty safe.
Not many people follow the standard anyway :-)
Hong
> if we have a proper core event loop as dan and i want, multiple timers
> will be part of that. and that will mean we can have timed out
> operations without the mess of eval/die (or whatever 6 will have for
> that).
An event loop will be great for many applications. We probably need
a better way
> >There is no need to store pending signals. It will be impossible
> >to achieve in a multi-threaded perl runtime.
>
> No, it won't be that tough to get multiple pending signals for a thread.
> Not "real" Unix signals, perhaps, but what look like them, more or less.
If
> several alarms time o
> Register based. Untyped registers; I'm hoping that the vtable stuff can be
> sufficiently optimized that there'll be no major win in
> storing multiple copies of a PMC's data in different types knocking around.
>
> For those yet to be convinced by the benefits of registers over stacks, try
>
> here is an idea. if we use a pure stack design but you can access the
> stack values with an index, then the index number can get large. so a
> fixed register set would allow us to limit the index to 8 bits. so the
> byte code could look something like this:
>
> 16 bit op (plenty o
> While RISC style opcodes have a certain pleasant simplicity to them, the
> opcode overhead'll kill us. (If perl 5's anything to go by, and in this
> case I think it might be)
I don't understand what opcode overhead you mean. It cannot be worse
than stack-based opcodes.
> The size of the
> There's no reason why you can't have a hybrid scheme. In fact I think
> it's a big win over a pure register-addressing scheme. Consider...
The hybrid scheme may be a win in some cases, but I am not sure it is
worth the complexity. I personally prefer strict RISC-style opcodes,
mainly load, s
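As a hedged illustration of what a strict load/store, register-style inner
loop can look like, here is a toy dispatch loop in C (the opcode set and
encoding are invented, and it runs the same i += 7 loop quoted elsewhere in
this thread):

/* Toy register VM: fixed-width instructions, 8-bit register operands. */
#include <stdio.h>
#include <stdint.h>

enum { OP_LOADI, OP_ADD, OP_BRANCH_LT, OP_HALT };

typedef struct {
    uint8_t  op, r1, r2, r3;   /* opcode plus up to three register fields */
    int32_t  imm;              /* immediate/offset, when the op needs one */
} Insn;

void run(const Insn *code)
{
    int32_t reg[8] = {0};
    int pc = 0;

    for (;;) {
        const Insn *i = &code[pc];
        switch (i->op) {
        case OP_LOADI:     reg[i->r1] = i->imm;                  pc++; break;
        case OP_ADD:       reg[i->r1] = reg[i->r2] + reg[i->r3]; pc++; break;
        case OP_BRANCH_LT: pc += (reg[i->r1] < reg[i->r2]) ? i->imm : 1; break;
        case OP_HALT:      printf("r0 = %d\n", reg[0]);          return;
        }
    }
}

int main(void)
{
    /* i = 0; while (i < 1000) i += 7; */
    Insn prog[] = {
        { OP_LOADI,     0, 0, 0,    0 },   /* r0 = 0              */
        { OP_LOADI,     1, 0, 0,    7 },   /* r1 = 7              */
        { OP_LOADI,     2, 0, 0, 1000 },   /* r2 = 1000           */
        { OP_ADD,       0, 0, 1,    0 },   /* r0 = r0 + r1        */
        { OP_BRANCH_LT, 0, 2, 0,   -1 },   /* if r0 < r2 goto ADD */
        { OP_HALT,      0, 0, 0,    0 },
    };
    run(prog);
    return 0;
}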
> Courtesy of Slashdot,
> http://www.hastingsresearch.com/net/04-unicode-limitations.shtml
>
> I'm not sure if this is an issue for us or not, as we're generally
> language-neutral, and I don't see any technical issues with any of the
> UTF-* encodings having headroom problems.
I think the au
e computer, and
> information is lost.
This is a very common practice, nothing surprising. As you can tell,
my name is "hong zhang", which has already lost the "chinese tone" and
"glyph". "hong" has 4 tones; each tone can be any of several
characters, each charac
> On Tue, Jun 05, 2001 at 11:25:09AM +0100, Dave Mitchell wrote:
> > This is the bit that scares me about unifying perl ops and regex ops:
> > can we really unify them without taking a performance hit?
>
> Coupl'a things: firstly, we can make Perl 6 ops as lightweight as we like.
>
> Second, Rub
> > I can't really believe that this would be a problem, but if they're
> > integrated alphabets from different locales, will there be issues
> > with sorting (if we're not planning to use the locale)? Are there
> > instances where like characters were combined that will affect the
> > sort order
> If this is the case, how would a regex like "^[a-zA-Z]" work (or other, more
> sensitive characters)? If just about anything can come between A and Z, and
> letters that might be there in a particular locale aren't in another locale,
> then how will the regex engine make the distinction?
This synt
> > What happens if unicode supported uppercase and lowercase numbers?
>
> > [I had a dig about, and it doesn't seem to mention lowercase or
> > uppercase digits. Are they just a typography distinction,
> and hence not
> > enough to be worthy of codepoints?]
>
> Damned if I know; I didn't know
> However, I don't think this actually affects your comments, except that
> I'd guess that the half digits mentioned by Hong don't have the same
> term "case" used with them that the letters of various alphabets do.
I am not sure if we mean the same thing. The regular ASCII "0123456789"
are call
We should let an external collator handle all these fancy features.
People can always normalize/canonicalize/do-whatever-you-want
and send the resulting text/binary to the regex. All the features we
argue about here can easily be done by a customized collator.
Do NOT expect the Perl regex to be a linguist
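In other words, a collator is just a comparison callback supplied by the
caller; whatever normalization or locale rules it applies are its own
business. A sketch, with a deliberately dumb collation rule standing in for
a real one:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>

/* Toy "collator": case-insensitive ASCII.  A real one would normalize,
 * apply locale tailoring, etc., without the regex engine caring. */
static int collate_caseless(const char *a, const char *b)
{
    while (*a && *b) {
        int d = tolower((unsigned char)*a) - tolower((unsigned char)*b);
        if (d) return d;
        a++; b++;
    }
    return (unsigned char)*a - (unsigned char)*b;
}

static int cmp(const void *pa, const void *pb)
{
    return collate_caseless(*(const char * const *)pa,
                            *(const char * const *)pb);
}

int main(void)
{
    const char *words[] = { "resume", "Zebra", "apple" };
    qsort(words, 3, sizeof words[0], cmp);
    for (int i = 0; i < 3; i++)
        printf("%s\n", words[i]);
    return 0;
}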
> * Convert from and to UTF-32
> * lengths in bytes, characters, and possibly glyphs
> * character size (with the variable length ones reporting in negative numbers)
What do you mean by character size if it does not support variable length?
> * get and set the locale (This might not be the spot
> >What do you mean by character size if it does not support variable length?
>
> Well, if strings are to be treated relatively abstractly, and we still want
> to poke around through the string buffer, we need to know how big a
> character is.
I agree on this. I think supporting variable length
This is the common approach to complicated text representation;
the implementations I have seen include IBM IText and SGI
rope. For "rope", each rope is represented by either a simple
immutable string, a simple mutable string, a simple immutable
substring of another rope, or a binary node of
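Roughly, a rope node is a small tagged union, something like this sketch
(general idea only, not the SGI implementation):

/* Sketch of a rope node: either a leaf holding flat text, a substring view
 * into another rope, or a concatenation of two child ropes. */
#include <stddef.h>

typedef struct Rope Rope;

typedef enum { ROPE_LEAF, ROPE_SUBSTR, ROPE_CONCAT } RopeKind;

struct Rope {
    RopeKind kind;
    size_t   length;                 /* total character count of this node */
    union {
        struct { const char *data; }           leaf;    /* flat buffer    */
        struct { Rope *base; size_t offset; }  substr;  /* view into base */
        struct { Rope *left; Rope *right; }    concat;  /* binary node    */
    } u;
};

/* Length of a concatenation is just the sum of the children, so building
 * "a . b" is O(1) regardless of how long the pieces are. */
size_t rope_length(const Rope *r) { return r->length; }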
> The one problem with copy-on-write is that, if we implement it in software,
> we end up paying the price to check it on every string write. (No free
> depending on the hardware, alas)
>
> Not that this should shoot down the idea of COW strings, but it is a cost
> that needs considering. (I
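The per-write check being discussed is essentially this (a sketch; the field
names are invented):

/* Copy-on-write sketch: every write must first check whether the buffer
 * is shared and, if so, take a private copy.  That test is the cost being
 * talked about above. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    char   *buf;
    size_t  len;
    int     refcount;   /* how many string headers share buf */
} SharedBuf;

typedef struct {
    SharedBuf *sb;
} CowString;

static void cow_write_prepare(CowString *s)
{
    if (s->sb->refcount > 1) {              /* shared: copy before writing */
        SharedBuf *copy = malloc(sizeof *copy);
        copy->len = s->sb->len;
        copy->buf = malloc(copy->len);
        memcpy(copy->buf, s->sb->buf, copy->len);
        copy->refcount = 1;
        s->sb->refcount--;
        s->sb = copy;
    }
}

void cow_set_char(CowString *s, size_t i, char c)
{
    cow_write_prepare(s);                   /* the unavoidable check */
    s->sb->buf[i] = c;
}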
> >> Taiwanese read traditional chinese characters, but PRC people read
> >> simplified chinese. Even if we take the same data, and the same program (code),
> >> people just read differently. As an end user, I want to make the decision.
> >> It will drive me crazy if Perl render/display the text fil
I don't think object inheritance has any significant advantage.
Since it is not widely used and understood, we should not use it
in Perl, period.
Its functionality can be achieved in many different ways. The
anonymous class is one of them. Personally I prefer using mixins.
The mixin is similar
>I thought about a possibility to access a HASH in a way that the VALUES can also
>be used like KEYS...i.e in perl6 I will say this :
>
>%hash{key} = value;
>
>I want to say also :
>
>{value}hash% = key;
Please forget about it. It is just syntactic sugar for yourself. The hash
mapping is m-to-1, th
> >If we define caller-save and callee-save, the 64 registers may
> >not be bad, as long as the caller-save set is small.
>
> At least a full push without a copy to the new frame is dead
> cheap, so it's not much of a cost.
That may not be true. If we use GC, we have to clear (nullify) it,
so the GC won
Actually we can use a "call-setup-gp" calling convention to avoid patching.
It works like this:
1) each bytecode file contains a data section and a code section.
2) during load, the runtime constructs the data segment from the
data section, such as string objects from string data, floating
point objects fr
There are many typos. Please correct them.
The branch instruction is wrong. It should be "branch #num".
The offset should be part of the instruction, not come from a register.
The register set seems too big. It reduces cache efficiency
and uses too much stack. We also have to define caller-saved
register
> >The branch instruction is wrong. It should be "branch #num".
> >The offset should be part of the instruction, not come from a register.
>
> Nope, because that kills the potential for computed relative
> branches. (It's in there on purpose) Branches should work from
> both constants and registers.
Eve
I believe the advantage of
> if (...) {
> ...
> } else {
> ...
> }
is to write very dense code, especially when the block itself is a single
line.
This style may not be readable to some people.
This style is not very consistent,
> if (...) {
> ...
> }
> else
> {
> ...
> }
I believe it w
> On Tue, Aug 28, 2001 at 09:13:25AM -0400, Michael G Schwern wrote:
> > As the pendulum swings in the other direction you get
> mind-bogglingly
> > silly things like finalize which I just learned of today.
>
> What's so silly about finalize? It's pretty much identical to Perl's
> DESTROY. (Ex
> I don't think speed is where the interest is coming from. GC should fix
> common memory problems, such as the nasty circular references issue that has
> caught all of us at some time.
Normally, GC is more efficient than ref counting, since you have many
advanced GC algorithms to choose from and don'
> Sorry, I ment "final". final classes and methods. The idea that you
> can prevent someone from subclassing your class or overriding your
> methods. I've seen things that hinder reuse, but this is the first
> time I've seen one that violently blocks reuse!
"final" is only useful for strongly-
> You still need to malloc() your memory; however I realize that the
> allocator can be *really* fast here. But still, you give a lot of the
> gain back during the mark-and-sweep phase, especially if you also
> move/compact the memory.
As you said, the allocator can be really fast. Most advanced
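"Really fast" here usually means pointer-bump allocation out of a
preallocated region, along these lines (sketch only):

/* Pointer-bump allocator sketch: allocation is a pointer increment and an
 * overflow check; the collector reclaims the whole region later. */
#include <stddef.h>
#include <stdlib.h>

typedef struct {
    char *base, *top, *end;
} Arena;

int arena_init(Arena *a, size_t size)
{
    a->base = a->top = malloc(size);
    a->end  = a->base ? a->base + size : NULL;
    return a->base != NULL;
}

void *arena_alloc(Arena *a, size_t n)
{
    n = (n + 7) & ~(size_t)7;              /* keep allocations 8-byte aligned */
    if (n > (size_t)(a->end - a->top))
        return NULL;                       /* out of space: trigger a GC here */
    void *p = a->top;
    a->top += n;
    return p;
}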
What is the need for CLOS? Are we trying to build a kitchen
sink here?
Hong
> Subject: CLOS multiple dispatch
>
>
> http://www.jwz.org/doc/java.htm
> For example, here is an event handler for a GUI:
>
> sub handle (Window $w, Event $e) : multi {
> log("Unknown event $($e->desc) called on window
$($w->name)");
> }
>
> sub handle (Window $w, CloseEvent $e) : multi {
> $w->close;
> }
>
>
> Because multimethods are inherently an OO technique.
>
You can say so, but I am not buying it very much.
> It doesn't. The multimethod consists of those variants that are currently
> loaded.
>
How do you define "currently loaded"? If things are lazily loaded, the stuff
you expect has been
> >The only good justification I've heard for "final" is as a directive
> >for optimization. If you declare a variable to be of a final type, then
> >the compiler (JIT, or whatever) can resolve method dispatch at
> >compile-time. If it is not final, then the compiler can make no such
> >assumptio
> True, but it is easier to generate FAST code for a register machine.
> A stack machine forces a lot of book-keeping either run-time inc/dec of sp,
> or alternatively compile-time what-is-offset-now stuff. The latter is a real
> pain if you are trying to issue multiple instructions at once.
I
> If you really want a comparison, here's one. Take this loop:
>
> i = 0;
> while (i < 1000) {
>i = i + 7;
> }
>
> with the ops executed in the loop marked with pipes. The corresponding
> parrot code would be:
>
> getaddr P0, i
> store P0, 0
>
> Uri Guttman
> > we are planning automatic over/underflow to bigfloat. so there is no
> > need for traps. they could be provided at the time of the
> > conversion to big*.
>
> OK. But will Perl support signaling and non-signaling NANs?
I don't think we should go for automatic overflow/underf
> At 06:26 PM 9/9/2001 -0700, Wizard wrote:
> >into something using a processor op equivalent to the 8051C
> >testbit( byte_variable, bit_offset).
>
> This is pretty much
>
>testbit I0, 6
>
> to test whether bit 6 is set in I0, right?
What is the difference from
and I0, I0, (1 << 6)
> At 09:15 PM 9/10/2001 +0100, Simon Cozens wrote:
> >FWIW, it's just dawned on me that if we want all of these things to be
> >overloadable by PMCs, they need to have vtable entries. The PMC vtable
> >is going to be considerably bigger than we anticipated.
>
> Who the heck is going to override a
> Okay, i've thought things over a bit. Here's what we're going to do
> to deal with infant mortality, exceptions, and suchlike things.
>
> Important given: We can *not* use setjmp/longjmp. Period. Not an
> option--not safe with threads. At this point, having considered the
> alternatives, I w
> >The thread-package-compatible setjmp/longjmp can be easily implemented
> >using assembly code. It does not require access to any private data
> >structures. Note that Microsoft Windows "Structured Exception Handler"
> >works well under thread and signal. The assembly code of __try will
> >show
> I've checked with some Sun folks. My understanding is that if you
> don't do a list of what I'd consider obviously stupid things like:
>
> *) longjmp out of the middle of an interrupt handler
> *) longjmp across a system call boundary (user->system->user and the
> inner jumps to the outer)
>
> Actually I'd been given dire warnings from some of the Solaris folks.
> "Don't use setjmp with threads!"
>
> I've since gotten details, and it's "Don't use setjmp with threads
> and do Stupid Things."
I used to be at Sun. I knew those warnings too. If we use longjmp
carefully, we can make it. In the worst case, write our own version.
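The kind of care in question is keeping the jmp_buf per thread, never jumping
across a signal handler or another thread's frames, and marking locals that
must survive the jump as volatile. A minimal sketch:

/* Minimal setjmp/longjmp "exception" sketch.  The jmp_buf is per-thread
 * (here just a single-threaded static), the longjmp never crosses a signal
 * handler or thread boundary, and values needed after the jump are volatile. */
#include <setjmp.h>
#include <stdio.h>

static jmp_buf unwind_point;          /* one per thread in real code */

static void might_fail(int fail)
{
    if (fail)
        longjmp(unwind_point, 1);     /* "throw" */
    printf("operation succeeded\n");
}

int main(void)
{
    volatile int attempts = 0;        /* survives the longjmp */

    if (setjmp(unwind_point) != 0) {  /* "catch" */
        printf("caught failure after %d attempt(s)\n", attempts);
        return 1;
    }
    attempts = 1;
    might_fail(1);
    return 0;
}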
> > I used to be at Sun. I knew those warnings too. If we use longjmp
> > carefully, we can make it. In the worst case, write our own version.
>
> ..Or we could use setcontext/getcontext, could we not?
The setcontext/getcontext pair will be much worse than setjmp/longjmp.
They are more platform-specif
> > When I was working on HotSpot JVM, we had some problems with getcontext.
> > They work 99.99% of the time. We added many workarounds for the .01% cases. I
> > believe the Solaris guys have been improving the code. I am not sure of
> > the current status.
>
> Was that inside of a signal handler or
> Offset   Length   Description
> 0        1        Magic Cookie (0x013155a1)
> 1        n        Data
> n+1      m        Directory Table
> m+n+1    1        Offset of beginning of directory table (i.e. n+1)
I think we need a version field right after the cookie for long-term compatibility.
> The directory is after
> 8-byte word:endianness (magic value 0x123456789abcdef0)
> byte: word size
> byte[7]:empty
> word: major version
> word: minor version
>
> Where all word values are as big as the word size says they are.
>
> The magic value can be something else, but it
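One possible reading of that header as a C sketch (field names are mine, and
only the fixed-width prefix can be a plain struct, since the version words
are as wide as the word size byte says):

#include <stdint.h>

typedef struct {
    uint64_t endian_magic;   /* 0x123456789abcdef0 written in native order */
    uint8_t  word_size;      /* in bytes: 4 on 32-bit builds, 8 on 64-bit  */
    uint8_t  reserved[7];    /* the "empty" padding bytes                  */
    /* followed by: word major_version; word minor_version;
     * where "word" is word_size bytes wide                                */
} PackFileHeader;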
> We can't do that. There are platforms on both ends that
> have _no_ native 32-bit data formats (Crays, some 16-bit
> CPUs?). They still need to be able to load and generate
> bytecode without ridiculuous CPU penalties (your Palm III
> is not running on a 700MHz Pentium III, after all!)
If the p
> There's a one-off conversion penalty at bytecode load time, and I don't
> consider that excessive. I want the bytecode to potentially be in platform
> native format (4/8 byte ints, big or little endian) with a simple and
> well-defined set of conversion semantics. That way the bytecode loader
> Proposed: Parrot should never crash due to malformed bytecode. When
> choosing between execution speed and bytecode safety, safety should
> always win. Careful op design and possibly a validation pass before
> execution will hopefully keep the speed penalty to a minimum.
We can use similar mo
> DS> I'm also seriously considering throwing *all* PerlIO code into separate
> DS> threads (one per file) as an aid to asynchrony.
>
> but that will be hard to support on systems without threads. i still
> have that internals async i/o idea floating in my numb skull. it is an
> api that
Do we want the opcode to be so complicated? I thought we were
going to use this kind of thing for generic pointers. The "p"
member of the opcode does not make any sense to me.
Hong
> Earlier there was some discussion about changing typedef long IV
> to
> typedef union {
> IV i;
> void* p;
> } op
> Nope. Internal I/O, at least as the interpreter will see it is async. You
> can build sync from async, it's a big pain to build async from sync.
> Doesn't mean we actually get asynchrony, just that we can.
>
It is trivial to build async from sync, just using threads. Most Unix async
are built
> Now works on Solaris and i386, but segfaults at the GRAB_IV call in
> read_constants_table on my Alpha. Problems with the integer-pointer
> conversions in memory.c? (line 29 is giving me a warning).
Line 29 is extremely wrong. It assigns an IV to a void* without casting.
The alignment calculatio
> you can't do non-blocking i/o on files without aio_read type calls. but
> what dan is saying is that the api the interpreter uses internally will
> be an async one. it will either use native/POSIX aio calls or simulate
> that with sync calls and callbacks or possibly with threads.
>
That sound
> >How does python handle MT?
>
> Honestly? Really, really badly, at least from a performance point of view.
> There's a single global lock and anything that might affect shared state
> anywhere grabs it.
Python uses a global lock for multi-threading. It is reasonable for an I/O thread,
which blocks mo
> This was failing here until I made the following change:
>
> PackFile_Constant_unpack_number(struct PackFile_Constant *
> self, char * packed, IV packed_size) {
> char * cursor;
> NV value;
> NV * aligned = mem_sys_allocate(sizeof(IV));
Are you sure this is correct? Or this
> > The memcpy() can handle alignment nicely.
>
> Not always. I tried. :(
How could that be possible? The memcpy() just does a byte-by-byte
copy. It does not care at all about the alignment of the source
or dest. How can it fail?
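For what it is worth, the usual pattern here is to memcpy() from a possibly
unaligned cursor into a properly aligned local, which avoids dereferencing a
misaligned pointer at all (a sketch; NV and IV are stand-ins for the real
typedefs):

#include <string.h>

typedef double NV;   /* stand-ins for the real Parrot typedefs */
typedef long   IV;

/* memcpy() itself never faults on alignment; the point of using it is to
 * avoid dereferencing a misaligned NV or IV pointer directly. */
NV grab_nv(const char **cursor)
{
    NV value;
    memcpy(&value, *cursor, sizeof value);   /* safe even if *cursor is odd */
    *cursor += sizeof value;
    return value;
}

IV grab_iv(const char **cursor)
{
    IV value;
    memcpy(&value, *cursor, sizeof value);
    *cursor += sizeof value;
    return value;
}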
Hong