On
http://developer.apple.com/documentation/Porting/Conceptual/PortingUnix/compiling/chapter_4_section_3.html
Apple states:

 

"However, applications that make configuration-time decisions about the size
of data structures will generally fail to build correctly in such an
environment (since those sizes may need to be different depending on whether
the compiler is executing a ppc pass, a ppc64 pass, or an i386 pass). When
this happens, the tool must be configured and compiled for each architecture
as separate executables, then glued together manually using lipo(1).

 

In rare cases, software not written with cross-compilation in mind will make
configure-time decisions by executing code on the build host. In these
cases, you will have to manually alter either the configuration scripts or
the resulting headers to be appropriate for the actual target architecture
(rather than the build architecture). In some cases, this can be solved by
telling the configure script that you are cross-compiling using the --host,
--build, and --target flags. However, this may simply result in defaults for
the target platform being inserted, which doesn't really solve the problem.

 

The best fix is to replace configure-time detection of endianness, data type
sizes, and so on with compile-time or run-time detection. For example,
instead of testing the architecture for endianness to obtain consistent byte
order in a file, you should do one of the following:

 

* Use C preprocessor macros like __BIG_ENDIAN__ and __LITTLE_ENDIAN__ to
test endianness at compile time.

* Use functions like htonl, htons, ntohl, and ntohs to guarantee a
big-endian representation on any architecture.

* Extract individual bytes by bitwise masking and shifting (for example,
lowbyte=word & 0xff; nextbyte = (word >> 8) & 0xff; and so on).

 

Similarly, instead of performing elaborate tests to determine whether to use
int or long for a 4-byte piece of data, you should simply use a standard
sized type such as uint32_t."

 

 

So a good test would be to check whether one of the two macros is defined
and the compiler is an Apple GCC:

 

#if (defined(__APPLE__) || defined(__APPLE_CC__)) && \
    (defined(__BIG_ENDIAN__) || defined(__LITTLE_ENDIAN__))
# undef WORDS_BIGENDIAN
# define WORDS_BIGENDIAN __BIG_ENDIAN__
#endif

 

If GCC is not the Apple one, the value detected by configure is used. If it
is an Apple one, but so old that neither __BIG_ENDIAN__ nor
__LITTLE_ENDIAN__ is defined, configure's test result is used as well.

 

But if either macro is defined, we can rely on it.

 

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de

eMail: [EMAIL PROTECTED]

 

> -----Original Message-----
> From: Antony Dovgal [mailto:[EMAIL PROTECTED]
> Sent: Friday, July 27, 2007 12:32 PM
> To: Uwe Schindler
> Cc: internals@lists.php.net
> Subject: Re: [PHP-DEV] php5 as universal binary (Mac OS X)
>
> On 27.07.2007 14:23, Uwe Schindler wrote:
> > The simplest would be to create a patch that is included *after* the
> > configure-generated .h file. I do not know exactly in which PHP/Zend-
> > specific .h file the configure-generated php_config.h is included, but
> > that would be the place to put the following macro:
> >
> > #if defined(MACOSX)   (I do not know the exact macro for detecting OS X)
> > # undef WORDS_BIGENDIAN
> > # define WORDS_BIGENDIAN __BIG_ENDIAN__
> > #endif
>
> I'm not completely sure it's OK when you're NOT compiling a universal
> binary.
> Also, the __BIG_ENDIAN__ constant may not exist, IIRC.
>
> But the main reason, of course, is the fact that the patch I use works
> just fine and I'm not really interested in debugging something that
> isn't broken for me.
> If you are - feel free to spend some time on this issue, I'd help you
> where I can.
>
> --
> Wbr,
> Antony Dovgal
