Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-26 Thread Daniel Bainton
2010/1/26 pancake :
>
>
> On Jan 26, 2010, at 8:10 AM, Daniel Bainton  wrote:
>
>> 2010/1/25 pancake :
>>>
>>> I have been using make(1) and acr(1) for most of my projects for a long
>>> while
>>
>> acr seems to do the OS guessing quite badly. It checks whether uname is
>> the GNU version and then adds -gnu to the system type if it is? What if
>> the system is a uClibc-based one that uses the GNU version of uname?
>>
>> gcc -dumpmachine would be a better way IMO (though probably not the
>> best anyway, at least if the system has some compiler other than gcc..)
>>
> It cannot depend on gcc. What about cross-compiling? What about non-C
> projects?
>
> That string is just indicative imho. So I simplified the algorithm to
> handle the most common situations.
>
> The code in autoconf that does this is really painful. And I don't really
> get the point of having a more accurate and complex host string resolution.
>
> Do you have any other proposal to enhance it? With --target, --host and
> --build you can change the default string.

I can't think of a better way currently, but on stali, for example, it
will give the wrong build string.

--
Daniel



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-26 Thread Anselm R Garbe
2010/1/26 pancake :
> On Jan 26, 2010, at 8:10 AM, Daniel Bainton  wrote:
>
>> 2010/1/25 pancake :
>>>
>>> I have been using make(1) and acr(1) for most of my projects for a long
>>> while
>>
>> acr seems to do the OS guessing quite badly. It checks whether uname is
>> the GNU version and then adds -gnu to the system type if it is? What if
>> the system is a uClibc-based one that uses the GNU version of uname?
>>
>> gcc -dumpmachine would be a better way IMO (though probably not the
>> best anyway, at least if the system has some compiler other than gcc..)
>>
> It cannot depend on gcc. What about cross-compiling? What about non-C
> projects?
>
> That string is just indicative imho. So I simplified the algorithm to
> handle the most common situations.
>
> The code in autoconf that does this is really painful. And I don't really
> get the point of having a more accurate and complex host string resolution.
>
> Do you have any other proposal to enhance it? With --target, --host and
> --build you can change the default string.

What about the good old way of providing one master makefile for each
platform instead of these scripts that are doomed to fail anyways
sooner or later?

Cheers,
Anselm



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-26 Thread pancake

Anselm R Garbe wrote:

What about the good old way of providing one master makefile for each
platform instead of these scripts that are doomed to fail anyways
sooner or later?


It's not only about the platform. For small projects I find single makefiles
OK, but for big ones you need to separate the configuration and build steps,
because you need to get information about which libraries, include files,
programs, etc. are in the system. In some situations it is enough to create
a .mk file with such information, but you will probably need to export some
of the information into a .h file.
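
For illustration, such a generated header could look like this (the macro
names here are made up for the example, not acr's actual output):

  /* config.h -- written by the configure step; hypothetical names */
  #define HAVE_VSNPRINTF 1        /* libc provides vsnprintf() */
  #define HAVE_MKSTEMP   1        /* libc provides mkstemp() */
  #define PREFIX "/usr/local"

and then the C sources just include it and test the macros:

  #include "config.h"
  #if !HAVE_VSNPRINTF
  #error "this package needs vsnprintf()"
  #endif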

Sometimes the configuration step is not as simple as "run this and tell me
if it works or not". This makes a shellscript much more manageable than a
makefile, because configuration is not a task for make, and you end up
forking from make, which is inefficient and ugly to maintain.

About having a different mk file for each platform: well, I always find it
annoying to have to maintain N files that do the same thing. And for
building it's a mess for packaging, because there's no standard on this,
and automating the build or doing some packaging becomes a mess.

You get the package and you have to spend a few seconds identifying which
makefile to use, and then if you get compilation errors you have to guess
which dependencies are missing. Then you can try to find INSTALL or README
files to see what's missing, or try to fix the program if you conclude it's
not a dependency problem. So this approach makes the build process look
simpler, but it is more error-prone and more annoying for packagers and users.

The only good thing about autotools is that it is the standard, and this
greatly simplified the steps of development, packaging, compilation and
deployment. That is: make dist, make mrproper, automatic detection of
file dependencies, checks for dependencies, etc.

For suckless projects I see no logic in using such a monster, but for big
projects it is many times a must, because you end up with conditional
dependencies, recursive checks to ensure the consistency of a program, etc.

If you have a build farm or any massive-compilation environment, you expect
all the packages to build and react in the same way. But this is not true.
There are some basics of software packaging that not everybody understands
or knows.

Things like sandboxed installation (make DESTDIR=/foo), out-of-tree builds
(VPATH), detection of compiler optimization flags, make dist, etc. are
things that many makefile-only projects fail to do. I'm not trying to say
that all packages must have those things, but they standardize the build
and install process, and simplify development and maintenance.

I wrote 'acr' because I was looking for something ./configure-compatible,
but lightweight and simpler to maintain than the m4 approach. It works for
me quite well, but I try to only use it for the projects that really need
to split the build into two steps. For building on plan9 I just distribute
a separate mkfile which doesn't depend on the configure stage.


But plan9 is IMHO a platform different enough that it's not worth trying to
support it from the acr side, because the makefiles would have to be
completely different too.

--pancake



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-26 Thread Anselm R Garbe
2010/1/26 pancake :
> Anselm R Garbe wrote:
>>
>> What about the good old way of providing one master makefile for each
>> platform instead of these scripts that are doomed to fail anyways
>> sooner or later?
>>
>>
>
> It's not only about the platform. For small projects I find single
> makefiles OK, but for big ones you need to separate the configuration and
> build steps, because you need to get information about which libraries,
> include files, programs, etc. are in the system. In some situations it is
> enough to create a .mk file with such information, but you will probably
> need to export some of the information into a .h file.
>
> Sometimes the configuration step is not as simple as "run this and tell me
> if it works or not". This makes a shellscript much more manageable than a
> makefile, because configuration is not a task for make, and you end up
> forking from make, which is inefficient and ugly to maintain.
>
> About having a different mk file for each platform: well, I always find it
> annoying to have to maintain N files that do the same thing. And for
> building it's a mess for packaging, because there's no standard on this,
> and automating the build or doing some packaging becomes a mess.
>
> You get the package and you have to spend a few seconds identifying which
> makefile to use, and then if you get compilation errors you have to guess
> which dependencies are missing. Then you can try to find INSTALL or README
> files to see what's missing, or try to fix the program if you conclude it's
> not a dependency problem. So this approach makes the build process look
> simpler, but it is more error-prone and more annoying for packagers and
> users.
>
> The only good thing about autotools is that it is the standard, and this
> greatly simplified the steps of development, packaging, compilation and
> deployment. That is: make dist, make mrproper, automatic detection of
> file dependencies, checks for dependencies, etc.
>
> For suckless projects I see no logic in using such a monster, but for big
> projects it is many times a must, because you end up with conditional
> dependencies, recursive checks to ensure the consistency of a program, etc.
>
> If you have a build farm or any massive-compilation environment, you expect
> all the packages to build and react in the same way. But this is not true.
> There are some basics of software packaging that not everybody understands
> or knows.
>
> Things like sandboxed installation (make DESTDIR=/foo), out-of-tree builds
> (VPATH), detection of compiler optimization flags, make dist, etc. are
> things that many makefile-only projects fail to do. I'm not trying to say
> that all packages must have those things, but they standardize the build
> and install process, and simplify development and maintenance.
>
> I wrote 'acr' because I was looking for something ./configure-compatible,
> but lightweight and simpler to maintain than the m4 approach. It works for
> me quite well, but I try to only use it for the projects that really need
> to split the build into two steps. For building on plan9 I just distribute
> a separate mkfile which doesn't depend on the configure stage.
>
> But plan9 is IMHO a platform different enough that it's not worth trying to
> support it from the acr side, because the makefiles would have to be
> completely different too.

Well, I've heard these reasons before and I don't buy them. There are
toolchains like the BSD ones, and they pretty much prove that the
"everything is a Makefile" approach is the most portable and
sustainable one. Running a configure script from 10 years ago will
fail immediately.

I know that your problem vector is different, but I think reinventing
square wheels like autoconf again is not helping us any further. And I
really believe that sticking to mk or make files in large projects
saves you a lot of headaches in the long term (think years ahead, like
10 years or so).

Cheers,
Anselm



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-26 Thread pancake
I just wanted to say that a few months ago I started to write a build
system in C, aiming to fix all those issues, but it's at quite an early
stage and only works for simple projects.

I named it 'cake', as in cooking :)

You will find the source here:

 hg clone http://hg.youterm.com/cake

Instead of makefiles you have Cakefiles, which are just plain C include
files included from cake.c, which should be distributed with every package.
Each Cakefile can include other ones, and by filling structures they
describe the dependencies between the modules, programs and libraries that
are going to be built.


At this point you have another .h file that you can use to change the
compiler profile, which is a struct with information about the flags, the
compiler name, etc. When you type 'make' it compiles 'cake' and then runs
'cake' to get the build done (make is used just as a proxy).
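
To give an idea, a Cakefile could describe a program roughly like this (a
self-contained sketch: the struct and field names are illustrative, not
necessarily cake's real API):

  /* Cakefile: the build description is plain C data, no parsing needed */
  struct profile {
      const char *cc;            /* compiler to use */
      const char *cflags;        /* flags for this profile */
  };

  struct target {
      const char *name;          /* output binary */
      const char **sources;      /* NULL-terminated list of source files */
  };

  static const struct profile default_profile = { "cc", "-O2 -Wall" };

  static const char *hello_sources[] = { "main.c", "util.c", NULL };

  static const struct target targets[] = {
      { "hello", hello_sources },
      { NULL, NULL }             /* sentinel */
  };

Since the whole description is compiled into the cake binary, iterating
over these arrays replaces parsing and interpreting a Makefile at run time.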

cake -i is used to install the compiled results into the system.

I would really like to move this project forward, but like many other
projects of mine, I only work on it when I have time or when I simply
need it.


So, if any of you are interested in it, feel free to send me patches or
discuss ideas about how to design/implement it.

One of the limitations I found is the lack of parallel compilation (like
make -j), which should be implemented; on the other hand, having all those
structs in memory saves some of the time spent on Makefile parsing and
dependency calculation.

For complex things I would run shellscripts from cake, as in make, but more
explicitly, so you are always splitting each piece of functionality into a
separate file.

--pancake



[dev] dwm and torbutton weirdness

2010-01-26 Thread ilf
Using dwm 5.7.2 and Torbutton 1.2.4 in Firefox 3.5.7, I get the
following weird behavior when enabling Tor usage:

Before: http://vorratsdatenspeicher.ath.cx/dwm_torbutton_1.png
After:  http://vorratsdatenspeicher.ath.cx/dwm_torbutton_2.png

See the right and bottom. Any idea what causes this and how to fix it?

-- 
ilf i...@jabber.berlin.ccc.de

Over 80 million Germans don't use a console. Don't click away!
-- An initiative of the Federal Office for Keyboard Usage




Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-26 Thread David Tweed
Thanks to everyone for all the help.

I'm looking more at the development process than the distribution
process, which means different issues are most important for me. The
big issue is that I've got lots of programs which can be visualised as
having "conventional" dependencies, with a twist: suppose executable
"foo" depends upon "colourSegmentation.o"; if the target processor has
SSE3 instructions and there's a processor-optimised segmentation.c in
the SSE3 directory, compile and link against that, and if it doesn't
exist, compile and link against the version in the GENERIC_C directory.
I don't think maintaining separate makefiles that are manually kept up
to date as new processor-optimised code gets written is going to be
reliable in the longer term. So I think I'll follow the general advice
to maintain by hand a single makefile that describes the
non-processor-specific dependencies, and then try some homebrew script
to automatically infer and add the appropriate object file paths to the
makefile for each processor-capability set, depending on what is
available. (This is probably not a common problem.)

> I recommend mk from Plan 9, the syntax is clean and clearly defined
> (no question of whether it is BSD make, GNU make or some archaic
> Unix make). I found that all meta build systems suck in one way or
> another -- some do a good job at first glance, like scons, but they
> all hide what they really do and in the end it's like trying to
> understand configure scripts if something goes wrong. make or mk are
> better choices in this regard.

Yeah. I don't mind powerful languages for doing stuff "automatically",
the problem is systems that aren't designed to be easily debuggable
when they go wrong.

-- 
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." --
attempted insult seen on slashdot



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-26 Thread Ryan R
I switched the development process over to gentoo where I work and
it's been awesome to say the least.



Re: [dev] dwm and torbutton weirdness

2010-01-26 Thread twfb
On 16:58 Tue 26 Jan, ilf wrote:
> Using dwm 5.7.2 and Torbutton 1.2.4 in Firefox 3.5.7, I get the
> following weird behavior when enabling Tor usage:
> 
> Before: http://vorratsdatenspeicher.ath.cx/dwm_torbutton_1.png
> After:  http://vorratsdatenspeicher.ath.cx/dwm_torbutton_2.png
> 
> See the right and bottom. Any idea what causes this and how to fix it?

Uncheck Preferences > Security Settings > Resize window to multiples ...

-- 
TWFB  -  PGP: D7A420B3



Re: [dev] gsoc 2010

2010-01-26 Thread Uriel
I sure hope so.

uriel

On Mon, Jan 25, 2010 at 6:26 PM, markus schnalke  wrote:
> Are there plans to apply for Google Summer of Code, this year?
>
> I ask because I want to apply as student.
>
>
> meillo
>
>



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-26 Thread Uriel
On Mon, Jan 25, 2010 at 4:40 PM, pancake  wrote:
> PS: Is there any tutorial or good documentation about how to use mk?

http://doc.cat-v.org/plan_9/4th_edition/papers/mk
http://doc.cat-v.org/plan_9/4th_edition/papers/mkfiles

> because 'make' is nice, but it's too shell-dependent, and this forces the
> execution to fork for most basic operations, slowing down the execution;
> and there are many other things that make 'make' inefficient in some
> situations.

If any of those things are a concern, you are clearly doing things
*completely wrong*.

uriel

> But I don't know if mk will be better for that.
>
> About cmake: I never liked it because it's C++ and it is not everywhere
> (you have to explicitly install it), and that's a pain in the ass for
> distributing apps. I like to depend on as few things as possible.
>
> Another build system I tried was 'waf'[3], and I got really exhausted of
> changing the rule files to match the latest version of waf (they changed
> the API many times, at least when I was using it). The good thing about
> waf is that it's Python (I don't like Python, but it's everywhere), so
> there's no limitation on shell commands and forks, and the configure/make
> stages are done more nicely than in make(1) or autotools (they only
> install files that differ in timestamp, for example), resulting in faster
> compilation and installation.
>
> Another good thing about waf is that it can be distributed with the
> project, so you don't need to install waf to compile the project. It only
> depends on Python, which is something you can sadly find in all current
> distributions :)
>
> [1] http://hg.youterm.com/acr
> [2] http://radare.org
>
> Armando Di Cianno wrote:
>>
>> David,
>>
>> I worked with the people at Kitware, Inc. for a while (here in
>> beautiful upstate New York), and they wrote and maintain CMake [1].
>> IIRC, KDE has used CMake for a while now (which is at least a
>> testament to the complexity it can handle).
>>
>> IMHO, CMake does not have a great syntax, but it's easy to learn and
>> write. Again, IMHO, it is orders of magnitude easier to understand than
>> GNU auto*tools -- although it is a bit pedantic (e.g. closing if branches
>> with the condition to match the opening).
>>
>> However, for all its faults, it's *really* easy to use, and the
>> free GUIs (ncurses or the various platforms' GUIs) are icing on the
>> cake. The simple ncurses GUI is nice to have when reconfiguring a
>> project -- it can really speed things up.
>>
>>
>>>
>>> stuff like "has vsnprintf?" that configure deals with.) In addition,
>>> it'd be nice to be able to have options like "debugging", "release",
>>> "gprof-compiled", etc, similar to processor specification.
>>> It would be preferable if all
>>> object files and executables could coexist (because it's a C++
>>> template heavy
>>>
>>
>> CMake can do all this for you, and it works great with C and C++
>> projects (really, that's the only reason one would use it).
>>
>> 2¢,
>> __armando
>>
>> [1] http://cmake.org/
>>
>>
>
>
>



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-26 Thread Uriel
On Tue, Jan 26, 2010 at 8:10 AM, Daniel Bainton  wrote:
> 2010/1/25 pancake :
>> I have been using make(1) and acr(1) for most of my projects for a long while
>
> acr seems to do the OS guessing quite badly. It checks whether uname is
> the GNU version and then adds -gnu to the system type if it is? What if
> the system is a uClibc-based one that uses the GNU version of uname?

Why the fucking hell should the fucking build tool know shit about the
OS it is running on?!?!?!

If you need to do OS guessing, that is a clear sign that you are doing
things *wrong* 99% of the time.

uriel

> gcc -dumpmachine would be a better way IMO (though probably not the
> best anyway, at least if the system has some compiler other than gcc..)
>
> --
> Daniel
>
>



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-26 Thread David Tweed
On Wed, Jan 27, 2010 at 6:25 AM, Uriel  wrote:
> Why the fucking hell should the fucking build tool know shit about the
> OS it is running on?!?!?!
>
> If you need to do OS guessing, that is a clear sign that you are doing
> things *wrong* 99% of the time.

[In what follows, by "OS" I mean the kernel plus the userspace libraries
that provide a higher-level interface to the hardware than the kernel
itself does.]

It would be great if "conceptual interfaces" that are a decade or more
old were universally standardised (so you don't have to worry about
whether mkstemp() is provided, etc.), so that a lot of the configuration
processing could go away, and maybe that's the situation for most
"text and filesystem applications". But there are, and will be, new
interfaces that haven't solidified into a common form yet, e.g. webcam
access, haptic input devices, accelerometers/GPS, cloud computing APIs,
etc., for which figuring out what is provided will still be necessary in
meta-build/configuration systems for years to come, for any software
that is widely distributed.
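
(The usual way configure-style tools settle a question like "is mkstemp()
provided?" is to try building a one-line probe and check whether it
compiles and links; a minimal sketch, not taken from any particular tool:

  /* conftest.c -- if this builds, libc provides mkstemp();
   * a script would run e.g.: cc -o conftest conftest.c && echo yes */
  #include <stdlib.h>

  int main(void) {
      char tmpl[] = "/tmp/probeXXXXXX";
      return mkstemp(tmpl) == -1;
  }

The same trick extends to the newer interfaces above; the probes just
multiply.)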

-- 
cheers, dave tweed__
computer vision researcher: david.tw...@gmail.com
"while having code so boring anyone can maintain it, use Python." --
attempted insult seen on slashdot



Re: [dev] gsoc 2010

2010-01-26 Thread Anselm R Garbe
Hi Markus,

2010/1/25 markus schnalke :
> Are there plans to apply for Google Summer of Code, this year?

Yes. We will apply again. This time we need to focus the scope a bit
more; the feedback from Google regarding our last application was
basically that they didn't see a strong focus on a particular
direction in our project ideas, so we need to do some cleaning and
achieve some hygiene ;)

Cheers,
Anselm



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-26 Thread Anselm R Garbe
2010/1/27 David Tweed :
> On Wed, Jan 27, 2010 at 6:25 AM, Uriel  wrote:
>> Why the fucking hell should the fucking build tool know shit about the
>> OS it is running on?!?!?!
>>
>> If you need to do OS guessing, that is a clear sign that you are doing
>> things *wrong* 99% of the time.
>
> [In what follows, by "OS" I mean the kernel plus the userspace libraries
> that provide a higher-level interface to the hardware than the kernel
> itself does.]
>
> It would be great if "conceptual interfaces" that are a decade or more
> old were universally standardised (so you don't have to worry about
> whether mkstemp() is provided, etc.), so that a lot of the configuration
> processing could go away, and maybe that's the situation for most
> "text and filesystem applications". But there are, and will be, new
> interfaces that haven't solidified into a common form yet, e.g. webcam
> access, haptic input devices, accelerometers/GPS, cloud computing APIs,
> etc., for which figuring out what is provided will still be necessary in
> meta-build/configuration systems for years to come, for any software
> that is widely distributed.

In my observation one should stick to one platform, which nowadays means
Linux plus the common libraries (most of the time), when packaging some
source code. In >90% of all cases it will work fine, because some 95% of
users run Linux as well; of the 5% remainder, most use some BSD, where
the likelihood is high that it'll just compile as well, and for the <<1%
of users on some exotic platform we shouldn't bother at all whether it
works or not.

And if there is a problem, some user will report it and one can consider
a change case by case. I have applied this to most of the projects I'm
working on (commercial ones too) and it works fine. I don't even want to
count the time I've saved by not running configure ;)

Cheers,
Anselm