Re: recent troubles with float vectors & bitwise ops
Paolo Bonzini wrote:

2) selection operations on vectors, kind of (v1 <= v2 ? v3 : v4). These can be written for example like this:

cmpleps xmm1, xmm2 ; xmm1 = xmm1 <= xmm2 ? all-ones : 0
andnps xmm4, xmm1  ; xmm4 = xmm1 <= xmm2 ? 0 : xmm4
andps xmm1, xmm3   ; xmm1 = xmm1 <= xmm2 ? xmm3 : 0
orps xmm1, xmm4    ; xmm1 = xmm1 <= xmm2 ? xmm3 : xmm4

SSE4 introduces specific instruction support, with a shorter sequence for this purpose. It seems to be quite difficult to persuade gcc to use it.
Re: recent troubles with float vectors & bitwise ops
Mark Mitchell wrote:

One option is for the user to use intrinsics. It's been claimed that results in worse code. There doesn't seem any obvious reason for that, but, if true, we should try to fix it; we don't want to penalize people who are using the intrinsics. So, let's assume using intrinsics is just as efficient, either because it already is, or because we make it so.

I maintain that empirical claim; if I compare a simple hybrid-SOA 3-coordinate component implemented via intrinsics, via builtins, and via the vector extension, used as the basic building block of a raytracer kernel, I get as many codegen variations: register allocations differ, stack footprints differ, branches & code organization differ, etc... so it's not that surprising that performance differs too. It appears the vector and builtin implementations (the latter not using __m128 but straight v4sf) are mostly on par, while the intrinsic-based version is slightly slower. Then you factor in how convenient it is, well... was, to use that vector extension to write such a thing...

Another issue is that for MSVC and ICC, __m128 is a class, but not for gcc, so you need more wrapping in C++; but if you know the compiler always does the right thing with naked v4sf, you can let some escape. Now, while there are some subtleties (and annoying 'features'), I should state that gcc 4.3, if you're careful, generates mostly excellent SSE code (especially on x86-64, even more so if compared to icc).

We still have the problem that users now can't write machine-independent code to do this operation. Assuming the operations are useful for

That, and writing, say, a generic equivalent takes much, much more work.

What are these operations used for? Can someone give an example of a kernel that benefits from this kind of thing?

There's of course what Paolo Bonzini described, but also all kinds of tricks that knowing such operations are extremely efficient encourages.
While it would be nice to have such builtins also operate on vectors, if only because they are so common, it's not quite the same as having full freedom and hardware features exposed.
Re: recent troubles with float vectors & bitwise ops
Paolo Bonzini wrote:

I'm not sure that it is *so* useful for a user to have access to it, except for specialized cases:

As there are other means, it may not be that useful, but it's certainly extremely convenient.

2) selection operations on vectors, kind of (v1 <= v2 ? v3 : v4). These can be written for example like this:

cmpleps xmm1, xmm2 ; xmm1 = xmm1 <= xmm2 ? all-ones : 0
andnps xmm4, xmm1  ; xmm4 = xmm1 <= xmm2 ? 0 : xmm4
andps xmm1, xmm3   ; xmm1 = xmm1 <= xmm2 ? xmm3 : 0
orps xmm1, xmm4    ; xmm1 = xmm1 <= xmm2 ? xmm3 : xmm4

I suppose you'll find such a variant of a conditional-move pattern in every piece of SSE code. But you can't condense bitwise-vs-float usage to a few patterns, because when writing SSE, the efficiency of those operations is taken for granted.

If we have a good extension for vector arithmetic, we should aim at improving it consistently rather than extending it in unpredictable ways. For example, another useful extension would be the ability to access vectors by item using x[n] (at least with constant expressions).

Yes, yes and yes.
Re: recent troubles with float vectors & bitwise ops
Paolo Bonzini <[EMAIL PROTECTED]> writes:
> 1) neg, abs and copysign operations on vectors. These we can make
> available via builtins (for - of course you don't need it); we already
> support them in many back-ends.

Here is my point of view. People using the vector extensions are already writing inherently machine-specific code, and they are (ideally) familiar with the instruction set of their processor. I see no significant disadvantage to gcc in granting them easy access to the capabilities of their processor. Saying that these capabilities are available in other ways just amounts to putting an obstacle in their path.

If there is a reason to put in that obstacle--e.g., because we are implementing a language standard and the language standard forbids it--then fine. But citing a PowerPC-specific standard to forbid code appropriate for the x86 does not count as a sufficient reason in my book.

Permitting this extension continues the preexisting behaviour, and it helps programmers and helps existing code. Who does it hurt to permit this extension? Who does it help to forbid this extension?

Ian
Re: recent troubles with float vectors & bitwise ops
On Friday 24 August 2007, Ian Lance Taylor wrote:
> Paolo Bonzini <[EMAIL PROTECTED]> writes:
> > 1) neg, abs and copysign operations on vectors. These we can make
> > available via builtins (for - of course you don't need it); we already
> > support them in many back-ends.
>
> Here is my point of view. People using the vector extensions are
> already writing inherently machine specific code, and they are
> (ideally) familiar with the instruction set of their processor.

By the same argument, if you're already writing machine-specific code then there shouldn't be a problem using machine-specific intrinsics. I admit I've never been convinced that the generic vector support was sufficient to write useful code without resorting to machine-specific intrinsics.

> Permitting this extension continues the preexisting behaviour, and it
> helps programmers and helps existing code. Who does it hurt to permit
> this extension? Who does it help to forbid this extension?

I'm partly worried about cross-platform compatibility, and what this implies for other SIMD targets. At minimum we need to fix the internals documentation to say how to support this extension. The current docs are unclear whether (ior:V2SF ...) is valid RTL.

Paul
Re: recent troubles with float vectors & bitwise ops
On Aug 24, 2007, at 8:02 AM, Ian Lance Taylor wrote:

> Permitting this extension continues the preexisting behaviour, and it
> helps programmers and helps existing code. Who does it hurt to permit
> this extension? Who does it help to forbid this extension?

Aren't builtins the designated way to access processor-specific features like this? Why do there have to be C operators for obscure features like this?

Wouldn't it be better to fix the code generator to do the right thing regardless of how the user presents it? There is a lot of code that uses casts (including the builtin implementations themselves) - it seems worthwhile to generate instructions for the right domain for this code as well.

-Chris
Re: recent troubles with float vectors & bitwise ops
Chris Lattner <[EMAIL PROTECTED]> writes:
> On Aug 24, 2007, at 8:02 AM, Ian Lance Taylor wrote:
> > Permitting this extension continues the preexisting behaviour, and it
> > helps programmers and helps existing code. Who does it hurt to permit
> > this extension? Who does it help to forbid this extension?
>
> Aren't builtins the designated way to access processor-specific
> features like this? Why do there have to be C operators for
> obscure features like this?

A fair question, but we've already decided to support vector + vector and such operations, and we've decided that that is one valid way to generate vector instructions. That decision may itself have been a mistake. But once we accept that decision, then, given that we know that the processor supports bitwise-or on floating-point values, using an instruction different from that for bitwise-or on integer values, it is fair to ask why we don't support vector | vector for floating-point vectors.

> Wouldn't it be better to fix the code generator to do the right thing
> regardless of how the user presents it? There is a lot of code that
> uses casts (including the builtin implementations themselves) - it
> seems worthwhile to generate instructions for the right domain for
> this code as well.

I completely agree.

Ian
Re: recent troubles with float vectors & bitwise ops
Paul Brook wrote:
> On Friday 24 August 2007, Ian Lance Taylor wrote:
>> Paolo Bonzini <[EMAIL PROTECTED]> writes:
>>> 1) neg, abs and copysign operations on vectors. These we can make
>>> available via builtins (for - of course you don't need it); we already
>>> support them in many back-ends.
>> Here is my point of view. People using the vector extensions are
>> already writing inherently machine specific code, and they are
>> (ideally) familiar with the instruction set of their processor.
>
> By the same argument, if you're already writing machine specific code then
> there shouldn't be a problem using machine specific intrinsics. I admit I've
> never been convinced that the generic vector support was sufficient to write
> useful code without resorting to machine specific intrinsics.

Our VSIPL++ team is using it for some things. My guess is that it's probably not sufficient for all things, but probably is sufficient for many things. Also, I expect some users get (say) a 4x speedup over C code easily by using the vector extension, and could get an 8x speedup by using intrinsics, but with a lot more work. So, the vector extensions give them a sweet spot on the performance/effort/portability curve.

> I'm partly worried about cross-platform compatibility, and what this implies
> for other SIMD targets.

Yes. Here's a proposed definition:

Let "a" and "b" be floating-point operands of type F, where F is a floating-point type. Let N be the number of bytes in F. Then, "a | b" is defined as:

({ union fi { F f; char bytes[N]; };
   union fi au;
   union fi bu;
   int i;
   au.f = a;
   bu.f = b;
   for (i = 0; i < N; ++i)
     au.bytes[i] |= bu.bytes[i];
   au.f; })

If the resulting floating-point value is denormal, NaN, etc., whether or not exceptions are raised is unspecified.

-- Mark Mitchell CodeSourcery [EMAIL PROTECTED] (650) 331-3385 x713
Re: recent troubles with float vectors & bitwise ops
On Aug 24, 2007, at 8:37 AM, Ian Lance Taylor wrote:

> Chris Lattner <[EMAIL PROTECTED]> writes:
>> On Aug 24, 2007, at 8:02 AM, Ian Lance Taylor wrote:
>>> Permitting this extension continues the preexisting behaviour, and it
>>> helps programmers and helps existing code. Who does it hurt to permit
>>> this extension? Who does it help to forbid this extension?
>> Aren't builtins the designated way to access processor-specific
>> features like this? Why do there have to be C operators for
>> obscure features like this?
>
> A fair question, but we've already decided to support vector + vector
> and such operations, and we've decided that that is one valid way to
> generate vector instructions. That decision may itself have been a
> mistake. But once we accept that decision, then, given that we know
> that the processor supports bitwise-or on floating-point values, using
> an instruction different from that for bitwise-or on integer values,
> it is fair to ask why we don't support vector | vector for
> floating-point vectors.

My personal opinion is that the grammar and type rules of the language should be defined independently of the target. "+" is allowed on all generic vectors for all targets. Allowing &^| to be used on FP vectors on some targets but not others seems extremely inconsistent (generic vectors are supposed to provide some amount of portability after all). Allowing these operators on all targets also seems strange to me, but is a better solution than allowing them on some targets but not others.

I consider pollution of the IR to be a significant problem. If you allow this, you suddenly have tree nodes and RTL nodes for logical operations that have to handle operands that are FP vectors. I imagine that this will result in either 1) subtle bugs in various transformations that work on these or 2) special-case code to handle this in various cases, spread through the optimizer.

-Chris
Re: recent troubles with float vectors & bitwise ops
On 8/24/07, Mark Mitchell <[EMAIL PROTECTED]> wrote:
> Let "a" and "b" be floating-point operands of type F, where F is a
> floating-point type. Let N be the number of bytes in F. Then, "a | b"
> is defined as:

Yes that makes sense, not. Since most of the time, you have a mask and that is what is being used. Like masking the sign bit or doing a selection. The mask is most likely a NaN anyways, so having that undefined just does not make sense.

So is this going to be on scalars? If not, then we should still not accept it on vectors.

-- Pinski
Re: recent troubles with float vectors & bitwise ops
Andrew Pinski wrote:
> On 8/24/07, Mark Mitchell <[EMAIL PROTECTED]> wrote:
>> Let "a" and "b" be floating-point operands of type F, where F is a
>> floating-point type. Let N be the number of bytes in F. Then, "a | b"
>> is defined as:
>
> Yes that makes sense, not.

I'm not following. Are you agreeing or disagreeing?

> Since most of the time, you have a mask
> and that is what is being used. Like masking the sign bit or
> doing a selection. The mask is most likely a NaN anyways so having
> that undefined just does not make sense.

I'm not following. What I meant was that if the result was a NaN, whether or not floating-point exceptions were signalled was unspecified. Where does undefined come into it, and what does that have to do with the mask? If we think that no hardware will ever signal an exception in this case, then we can say that the operation never signals an exception. But, I was afraid that might be too strong a constraint.

> So is this going to be on
> scalars? If not, then we should still not accept it on vectors.

Yes, from a language-design point of view, it should be for both scalars and vectors, so I wrote the strawman definition in terms of scalars. Of course, if where it's actually useful is vectors, then implementing it for vectors is the important case, and whether or not we get around to doing it on scalars is secondary.

-- Mark Mitchell CodeSourcery [EMAIL PROTECTED] (650) 331-3385 x713
RE: recent troubles with float vectors & bitwise ops
On 24 August 2007 17:04, Andrew Pinski wrote:
> On 8/24/07, Mark Mitchell <[EMAIL PROTECTED]> wrote:
>> Let "a" and "b" be floating-point operands of type F, where F is a
>> floating-point type. Let N be the number of bytes in F. Then, "a | b"
>> is defined as:
>
> Yes that makes sense, not. Since most of the time, you have a mask
> and that is what is being used.

http://en.wikipedia.org/wiki/Weasel_word.

> Like masking the sign bit or
> doing a selection. The mask is most likely a NaN anyways so having
> that undefined just does not make sense.

What are you talking about? I can't even parse this rant.

cheers, DaveK
-- Can't think of a witty .sigline today
Re: recent troubles with float vectors & bitwise ops
> > I'm partly worried about cross-platform compatibility, and what this
> > implies for other SIMD targets.
>
> Yes. Here's a proposed definition:

I agree this is the only sane definition.

I probably wasn't clear: My main concern is that if we do support this extension, the internals should be implemented and documented in such a way that target maintainers (i.e. me) can figure out how to make it work on their favourite target. We should not just quietly flip some bit in the x86 backend.

Paul
Re: recent troubles with float vectors & bitwise ops
Paul Brook wrote:
> I probably wasn't clear: My main concern is that if we do support this
> extension the internals should be implemented and documented in such a way
> that target maintainers (i.e. me) can figure out how to make it work on their
> favourite target. We should not just quietly flip some bit in the x86
> backend.

Totally agreed.

-- Mark Mitchell CodeSourcery [EMAIL PROTECTED] (650) 331-3385 x713
Re: bootstrap failure while compiling gcc/tree.c (ICE)
Brian Sidebotham wrote:

../../gcc/gcc/tree.c: In function "build_string":
../../gcc/gcc/tree.c:1197: internal compiler error: in iterative_hash_expr, at tree.c:4189
Please submit a full bug report, with preprocessed source if appropriate.

I have placed the pre-processed file here: http://www.valvers.com/gcc/arm-elf/tree.i (1.5Mb)

The following configure line was used:

../gcc/configure --target=arm-elf --prefix=${installdir} --with-newlib --with-headers=${newlibdir}/src/newlib/libc/include --enable-languages=c,c++ --enable-interwork --enable-multilib

codebase is svn revision 127732

The failing command line is:

gcc -c -g -O2 -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -Wmissing-format-attribute -fno-common -DHAVE_CONFIG_H -I. -I. -I../../gcc/gcc -I../../gcc/gcc/. -I../../gcc/gcc/../include -I../../gcc/gcc/../libcpp/include -I../../gcc/gcc/../libdecnumber -I../../gcc/gcc/../libdecnumber/dpd -I../libdecnumber ../../gcc/gcc/tree.c -o tree.o

and lastly, the specs of the gcc compiler being used are:

Using built-in specs.
Target: i486-linux-gnu
Configured with: ../src/configure -v --enable-languages=c,c++,java,f95,objc,ada,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --program-suffix=-4.0 --enable-__cxa_atexit --enable-clocale=gnu --enable-libstdcxx-debug --enable-java-awt=gtk-default --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.4.2-gcj-4.0-1.4.2.0/jre --enable-mpfr --disable-werror --with-tune=pentium4 --enable-checking=release i486-linux-gnu
Thread model: posix
gcc version 4.0.3 (Ubuntu 4.0.3-1ubuntu5)

These are the two lines that fail:

tree.c:1197:
memcpy (CONST_CAST (TREE_STRING_POINTER (s)), str, len);
which expands to:
memcpy ((__extension__ (union { __typeof (((const char *) (STRING_CST_CHECK (s)->string.str))) _q; void *_v; }) (((const char *) (STRING_CST_CHECK (s)->string.str)))._v, str, len);

tree.c:1198:
((char *) CONST_CAST (TREE_STRING_POINTER (s)))[len] = '\0';
which expands to:
((char *) (__extension__ (union { __typeof (((const char *) (STRING_CST_CHECK (s)->string.str))) _q; void *_v; }) (((const char *) (STRING_CST_CHECK (s)->string.str)))._v)[len] = '\0';

Removing the CONST_CAST declarations allows compilation of gcc to complete with a warning (as expected):

../../gcc/gcc/tree.c: In function 'build_string':
../../gcc/gcc/tree.c:1197: warning: passing argument 1 of 'memcpy' discards qualifiers from pointer target type

This ICE is caused by the following patch:

2007-08-10 Kaveh R. Ghazi <[EMAIL PROTECTED]>
* system.h (CONST_CAST): New.
* c-decl.c (c_make_fname_decl): Use it.
* c-lex.c (cb_ident, lex_string): Likewise.
* c-typeck.c (free_all_tagged_tu_seen_up_to): Likewise.
* gcc.c (set_spec, read_specs, for_each_path, execute, do_spec_1, give_switch, set_multilib_dir): Likewise.
* gengtype-parse.c (string_seq, typedef_name): Likewise.
* passes.c (execute_one_pass): Likewise.
* prefix.c (update_path): Likewise.
* pretty-print.c (pp_base_destroy_prefix): Likewise.
* tree.c (build_string): Likewise.
Reverting this patch lets gcc build again. Unfortunately, CONST_CAST is a mystery to me (I don't understand it!) so there is no hope of me changing it to compile okay AND be right! Best Regards, Brian Sidebotham.
Re: recent troubles with float vectors & bitwise ops
> If there is a reason to put in that obstacle--e.g., because we are
> implementing a language standard and the language standard forbids
> it--then fine. But citing a PowerPC specific standard to forbid code
> appropriate for the x86 does not count as a sufficient reason in my book.

The code I want to forbid is actually appropriate not only for the x86; the exact same code is appropriate for PowerPC, because the same kind of masking operations can be used there. However, for some reason, the PowerPC spec chose *not* to allow "vector float" bitwise operations, and I agree with it; the reason I want to avoid this is that it goes against our guideline for vector extensions (i.e. valarray).

Users can also achieve the same effect with casts, and in addition I would like to trade this lost ability for two gained abilities. First, I want GCC to produce the exact same code with and without casts. Second, I want GCC to have builtins supporting the most common uses of the idiom, so that users can actually do without casts *and* bitwise operations 99% of the time.

Paolo
Re: GCC 4.3.0 Status Report (2007-08-09)
Jie Zhang wrote:
> On 8/10/07, Mark Mitchell <[EMAIL PROTECTED]> wrote:
>> Are there any folks out there who have projects for Stage 1 or Stage 2
>> that they are having trouble getting reviewed? Any comments re. timing
>> for Stage 3?
> I have many bfin port patches which have not been merged into upstream.
> I hope I can push them out by the end of the next week.

I have sent out all my patches (11). 3 of them have been reviewed and committed. Others are being reviewed. I have no access to a computer this weekend. I'll be back next Monday or Tuesday.

Jie
Re: recent troubles with float vectors & bitwise ops
On Fri, Aug 24, 2007 at 02:34:27PM -0400, Ross Ridge wrote:
> Mark Mitchell wrote:
> > Let's assume that the recent change is what we want, i.e., that the
> > answer to (1) is "No, these operations should not be part of the vector
> > extensions because they are not valid scalar extensions."
>
> I don't think we should assume that. If we were to we'd also have
> to change vector casts to work like scalar casts and actually convert
> the values. (Or like valarray, disallow them completely.) That would
> force a solution like Paolo Bonzini's to use unions instead of casts,
> making it even more cumbersome.

In C++, you could use reinterpret_cast (meaning that values are not converted, just reinterpreted as integers of the same size). That would avoid the need for unions; you'd just cast. But this solution doesn't work for C.

> Using vector casts that behave differently than
> scalar casts has a lot more potential to generate confusion than allowing
> bitwise operations on vector floats does.

I suppose you could have an appropriately named intrinsic for doing a reinterpret_cast in C (that is, the type would be reinterpreted but it would be a no-op at machine level). Then, to do a masking operation you could write:

ovec = __as_float_vector(MASK | __as_int_vector(ivec));
Re: recent troubles with float vectors & bitwise ops
Mark Mitchell wrote:
> Let's assume that the recent change is what we want, i.e., that the
> answer to (1) is "No, these operations should not be part of the vector
> extensions because they are not valid scalar extensions."

I don't think we should assume that. If we were to, we'd also have to change vector casts to work like scalar casts and actually convert the values. (Or, like valarray, disallow them completely.) That would force a solution like Paolo Bonzini's to use unions instead of casts, making it even more cumbersome.

If you look at what these bitwise operations are doing, they're taking a floating-point vector and applying an operation (eg. negation) to certain members of the vector according to a (normally) constant mask. They're really unary floating-point vector operations. I don't think it's unreasonable to want to express these operations using floating-point vector types directly. Using vector casts that behave differently than scalar casts has a lot more potential to generate confusion than allowing bitwise operations on vector floats does.

As I see it, there are two ways you can express these kinds of operations without using casts, which are both cumbersome and misleading. The easy way would be to just revert the change, and allow bitwise operations on vector floats. This is essentially an "old-school" programmer-knows-best solution where the compiler provides operators that represent the sort of operations generally supported by CPUs. Even on Altivec these bitwise operations on vector floats are meaningful and useful.

The other way is to provide a complete set of operations that would make using the bitwise operators pretty much unnecessary, like it is with scalar floats. For example, you can express masked negation by multiplying with a constant vector of -1.0 and 1.0 elements. It shouldn't be too hard for GCC to optimize this into an appropriate bitwise instruction for the target. For other operations the solution isn't as nice.

You could implement a set of builtin functions easily enough, but it wouldn't be much better than using target-specific intrinsics. Chances are, though, that operations are going to be missed. For example, I doubt anyone unfamiliar with 3D programming would've seen the need for only negating part of a vector.

(A more concise way to eliminate the need for the bitwise operations on vector floats would be to implement either the "swizzles" used in 3D shaders or array indexing on vectors. It would require a lot of work to implement properly, so I don't see it happening.)

Ross Ridge
Re: compiler chain on AIX
> Ed S Peschko writes:

Ed> which would be fine if the AIX linker works, but I'm getting segmentation
Ed> faults when compiling perl out of the box, using the gcc-4.1.0 compiler
Ed> provided.. I'm wondering if it's the compiler, the linker, or both...

You have not provided information for anyone to help with that assessment.

Ed> (ps - if I can't get ld to work with gcc, my guess is that it's
Ed> going to be very painful to compile the freeware packages on AIX
Ed> that are needed, considering that half of them expect gnu ld flags,
Ed> not native ones... so I'm hoping that someone has gotten gnu ld to
Ed> work out there.)

Again, there is no reason to assume that this is a problem with the interaction between AIX ld and GCC. Also, many freeware packages test the configuration and do not assume GNU+Linux linker options.

David
gcc-4.3-20070824 is now available
Snapshot gcc-4.3-20070824 is now available on
ftp://gcc.gnu.org/pub/gcc/snapshots/4.3-20070824/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.3 SVN branch with the following options: svn://gcc.gnu.org/svn/gcc/trunk revision 127789

You'll find:

gcc-4.3-20070824.tar.bz2           Complete GCC (includes all of below)
gcc-core-4.3-20070824.tar.bz2      C front end and core compiler
gcc-ada-4.3-20070824.tar.bz2       Ada front end and runtime
gcc-fortran-4.3-20070824.tar.bz2   Fortran front end and runtime
gcc-g++-4.3-20070824.tar.bz2       C++ front end and runtime
gcc-java-4.3-20070824.tar.bz2      Java front end and runtime
gcc-objc-4.3-20070824.tar.bz2      Objective-C front end and runtime
gcc-testsuite-4.3-20070824.tar.bz2 The GCC testsuite

Diffs from 4.3-20070817 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.3 link is updated and a message is sent to the gcc list. Please do not use a snapshot before it has been announced that way.