On 28 March 2013 16:56, Andre Fischer <awf....@gmail.com> wrote:

> On 25.03.2013 20:52, janI wrote:
>
>> I might be able to help here with the experience I have from Vienna,
>> where we had a huge build system, and I am still in contact with the
>> people who maintain it. One idea, just off the top of my head, is not
>> to start the compiler for each file, but to collect the names of all
>> files that need compiling and then start the compiler once with the
>> whole list; that saves LOTS of CPU cycles.
>>
> Hi Jan,
>
> Please excuse the long delay, I got "distracted" by my sidebar work.
>
No problem, the new build system is not for tomorrow :-)

I am right now rewriting the helpcontent2 makefiles and build.lst, which
gives me excellent insider knowledge of how the system works. Once that
is done, I will request a vote on integrating the l10n branch so I can
change all the other modules and get rid of the sdf files. I expect this
to take about a month; after that I would like to use the knowledge I
have gained to change the build system.

I would suggest that once I am finished with l10n and you with the
sidebar, we put our heads together, including others who are interested,
and hold a build hackathon.


>
> Yes, I would very much like to learn more about your experience in Vienna.
>
> I am currently thinking about two things:
>
> 1. The one good part about our build system, the old and the new, is,
> in my opinion, the separation of data and behavior.  Makefiles in
> modules contain (well, mostly) just data, like which files to compile
> and what goes into which library.  The behavior is concentrated in
> solenv/inc or solenv/gbuild.  I wonder if others do that also?
>
Yes, everybody agreed that part is correct (I should explain that I had
an online meeting with my colleagues one weekend evening, where we
dissected the build system), at least for behavior that is common to
more than one module!! helpcontent2 is an example of where not to do it.
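
To illustrate the separation with a minimal sketch (all names here are
invented, not actual gbuild syntax): the module makefile carries only
data, and the shared rules carry all the behavior:

    # module makefile: pure data
    LIBNAME := sample
    SOURCES := foo.cxx bar.cxx
    include $(SOLENV)/shared-rules.mk

    # solenv shared-rules.mk: the behavior, written once for all modules
    OBJECTS := $(SOURCES:.cxx=.o)

    lib$(LIBNAME).so: $(OBJECTS)
            $(CXX) -shared -o $@ $^

    %.o: %.cxx
            $(CXX) -fPIC -c -o $@ $<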

BUT the current system tries to do everything in one pass; it would be
good to have the following phases instead (sketched below):
   - a "prepare" step, which creates directories and other structural
necessities once, rather than on every compile.
   - an optional "generate" step, which produces makefiles in accordance
with the configure script. People normally don't change configure all
the time, so in general this would give faster and MUCH simpler
makefiles (in which data and behavior are mixed to keep the files as
compact as possible).
   - a "compile" step, which is the actual compile/link etc.
   - a "clean" step, which either deletes everything and calls
"prepare", or deletes only what needs to be deleted.
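
A rough sketch of how those phases could look as top-level make targets
(all target, script, and variable names here are my own invention):

    OUTDIR := workdir                  # invented output location

    prepare:                           # structural setup, run once
            mkdir -p $(OUTDIR)/obj $(OUTDIR)/lib

    generated.mk: config.status        # redone only when configure output changes
            ./generate-makefiles.sh    # hypothetical script emitting explicit rules

    compile: prepare generated.mk      # the actual compile/link
            $(MAKE) -f generated.mk all

    clean:                             # delete everything; prepare recreates it
            rm -rf $(OUTDIR)

    .PHONY: prepare compile clean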

The focus in gbuild on delivering everything into the solver, instead of
having a separate "deliver" step, is good, BUT we need to separate local
files from public files, in order to ensure that no module starts using
internal data from another module.
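
As a minimal sketch of what I mean (variable names assumed, not actual
gbuild syntax): a module would declare which headers are public, and
only those would be copied into the solver:

    SOLVER := ../solver                     # invented path

    PUBLIC_HEADERS := inc/sample.hxx        # other modules may include these
    LOCAL_HEADERS  := source/private.hxx    # must never leave the module

    deliver: $(PUBLIC_HEADERS)
            cp $(PUBLIC_HEADERS) $(SOLVER)/inc/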

They had one comment: the many small makefiles we have (in helpcontent2
I counted up to 5 makefiles being loaded for a simple make) slow the
system down quite a lot. I do not have performance numbers to back that
statement.

I did not know this, but they claimed that GNU Make has a scanner
problem when using multiple files, resulting in higher CPU usage when
generating the execution list. They promised me to test whether CMake
has the same problem.


> 2. Most of us know at least one language with C-like syntax (I would
> include Java in this).  Why are we still using Make with its rather
> unique, or shall I say bizarre, syntax or syntaxes (recipes are shell
> scripts, the rest is Make's own macro expansion language)?  Would a
> language much closer to C syntax not make much more sense?  I am
> currently making experiments in this direction.
> It seems to be so much more straightforward to take a C-like language
> and add support for file dependencies and parallel jobs than to take
> Make and define your own almost-object-oriented language on top, like
> in build.
>

We use CMake in Vienna, with added functions (I do not have the details
here, but all the functions are stored in a separate lib/dll that is
loaded by CMake).

The general consensus was that a lot of the macros in the current system
are not really needed if we add a prepare/generate step. That would mean
the very first compile after configure is a bit slower, but all the
following ones are a factor faster (the estimate was about 25%).
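
To illustrate (my own sketch, not their actual output): the generate
step could emit fully explicit rules once, so later runs have no macro
expansion left to do:

    # generated.mk (imagined output of the generate step): every rule is
    # written out explicitly, nothing left to expand on later runs
    obj/foo.o: source/foo.cxx inc/foo.hxx
            g++ -c -o obj/foo.o source/foo.cxx

    obj/bar.o: source/bar.cxx inc/foo.hxx
            g++ -c -o obj/bar.o source/bar.cxx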


> Regarding the idea of calling the compiler with more than one file:
> Herbert recently made some experiments in this direction (on Windows) and
> had very good results.  Something like up to 40% reduction of compile time.
>

Our measurements claim 60% on Windows and 35% on Linux, but only 28% on
Solaris. Vienna uses native compilers (Microsoft, Sun etc., no MinGW,
which might have an effect).
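
For those who have not seen the trick: the saving comes from paying the
compiler start-up cost once per batch instead of once per file. A
minimal make sketch (file names invented):

    SOURCES := foo.cxx bar.cxx baz.cxx

    # classic per-file pattern: one compiler process per translation unit
    %.obj: %.cxx
            cl /c /Fo$@ $<

    # batched alternative: one compiler process for the whole list
    batch: $(SOURCES)
            cl /c $(SOURCES)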

After I left Vienna, they took it one step further with a CMake macro
that includes the link step.

In essence they use @? for the files that should be compiled, and add
object files for the rest; the result is that libraries are generated
with a single execution, and only the files that actually need to be
compiled are compiled.
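
I do not have their macro, but the same idea can be sketched in plain
GNU Make with the $? automatic variable (my approximation, not their
CMake code):

    SOURCES := foo.cxx bar.cxx          # invented example sources

    # $? expands to only the prerequisites newer than the target, so a
    # single rule recompiles just the changed files and then relinks,
    # reusing the existing objects for everything else.
    libsample.so: $(SOURCES)
            $(CXX) -fPIC -c $?
            $(CXX) -shared -o $@ $(SOURCES:.cxx=.o)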

A last comment: they were all quite shocked that we use patched versions
of e.g. epm, which prohibit us from using what the system provides. I
could not really debate it, because I also don't really understand why
we do it.

rgds
jan I.

> Best regards,
> Andre