Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-25 Thread Anselm R Garbe
Hi David,

2010/1/25 David Tweed :
> I'm wondering if anyone has had particularly good experiences with any
> meta-build system (cmake, etc) in the following circumstances:
>
> I will have a large codebase which consists of some generic files and
> some processor-specific files. (I'm not worried about OS environment
> stuff like "has vsnprintf?" that configure deals with.) In addition,
> it'd be nice to be able to have options like "debugging", "release",
> "gprof-compiled", etc., similar to processor specification. I need to be
> able to select the appropriate files for a given build and compile
> them together to form an executable. It would be preferable if all
> object files and executables could coexist (because it's a C++
> template-heavy source base, which means individual files compile
> relatively slowly, so it'd be preferable only to recompile if the
> source has actually changed) using directories or naming conventions.
>
> I've been doing some reading about things like cmake and SCons but
> most strike me as having "built-in logic for their normal way of doing
> things and are relatively clunky if you specify something different".
> (Incidentally, when I say meta-build system I mean that I don't mind
> if it builds things directly or if it outputs makefiles that can be
> invoked.) Does anyone have any experiences of using any tool for this
> kind of purpose?
>
> (One option would be to just have a static makefile and then do some
> include-path hackery to select processor-specific directories to pick
> specific versions of files depending on options, and then rely on
> ccache to pick up the correct object file from the cache rather than
> recompiling. But that feels like a hack for avoiding having a more
> expressive build system.)
>
> Many thanks for sharing any experiences,

I recommend mk from Plan 9: the syntax is clean and clearly defined
(no guessing whether it's BSD make, GNU make or some archaic Unix
make). I found that all meta-build systems suck in one way or
another -- some do a good job at first glance, like SCons, but they
all hide what they really do, and in the end it's like trying to
understand configure scripts when something goes wrong. make or mk
are better choices in this regard.

Having been involved in a lot of embedded development over the last
few years, I'd also say that one build chain I really like is what
Google did to build Android (which also inspired my stali efforts).
They got rid of configure and meta-build systems nearly completely
and are using makefiles instead. Well, of course the BSDs have done
this for decades ;)

make-based build chains seem to be the ones with the least headaches
in my experience. It's worth the extra effort to write all those make
or mk files from scratch; in the end it'll save you a lot of time.
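
For illustration only -- a rough sketch of mine, not taken from any
existing project, with file names, ARCH/CONFIG values and directory
layout all made up (GNU make syntax; recipe lines need real tabs) --
a plain makefile that keeps per-configuration objects apart could
look roughly like this:

ARCH   ?= x86
CONFIG ?= release

OBJDIR = obj/$(ARCH)-$(CONFIG)

# generic sources plus the processor-specific ones from a per-arch directory
SRC = main.c util.c $(wildcard $(ARCH)/*.c)
OBJ = $(addprefix $(OBJDIR)/,$(notdir $(SRC:.c=.o)))

CFLAGS-release = -O2
CFLAGS-debug   = -g -O0
CFLAGS-gprof   = -O2 -pg
CFLAGS += $(CFLAGS-$(CONFIG))

VPATH = .:$(ARCH)

all: $(OBJDIR)/prog

$(OBJDIR)/prog: $(OBJ)
	$(CC) -o $@ $(OBJ) $(LDFLAGS)

$(OBJDIR)/%.o: %.c
	@mkdir -p $(OBJDIR)
	$(CC) $(CFLAGS) -c -o $@ $<

Then "make ARCH=arm CONFIG=debug" and plain "make" build into separate
obj/ directories, so switching configurations never throws away the
objects of another one.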

Cheers,
Anselm



Re: [dev] saving program options

2010-01-25 Thread markus schnalke
[2010-01-25 02:10] anonymous 
> > TAOUP also recommends small programs that do just one thing.  If you
> > have so many options that you need a "huge structure" to store them,
> > that might be a sign that your program is overly complex.  Consider
> > factoring it into a set of smaller cooperating processes.
> 
> It is not too big, but there are more than 7 options anyway. What
> about troff, for example? It is not very small and has more than 7
> options. 

Mind the difference between accidental complexity and essential
complexity. (see ``No Silver Bullet'' by Brooks [0])

[0] http://en.wikipedia.org/wiki/No_Silver_Bullet

Troff's complexity is mostly essential. (In fact, it is one of the
best examples of how to avoid complexity, IMO.)

I don't know about your program. If the structures are ``huge'', then
you likely chose bad data structures. If they are large and the
problem is essentially complex, it might be okay. If the data
structures are huge, but the program(s) small, then you might want to
use global variables.

The advice is simple: organize your code so that it is as simple,
clear, and general as possible. IMO, you may use global variables,
gotos, and other ``bad stuff'' without shame, *if* you have good
reasons for them. Or better: you *should* use this stuff if it helps
you make the code simpler.


meillo



Re: [dev] saving program options

2010-01-25 Thread Nick Guenther
On Sun, Jan 24, 2010 at 2:57 PM, anonymous  wrote:
> Where should programs store their options? Sometimes it is said that
> global variables are bad, but what is better? Some huge structure
> storing all options? Of course, they can be divided into many
> structures, or they can be passed on the stack instead of passing a
> pointer to a structure, but that is not efficient.
>
> Here is a citation from TAOUP
> (http://www.catb.org/esr/writings/taoup/html/ch04s06.html):
> "[...]"
>
> So lots of global variables are bad, but saving them in a local
> structure is bad too? Is there any other solution?


Unless you do craziness like allocating a giant data structure at the
start of your app and passing it around as a parameter to everything,
you'll always need -something- to be global. There's nothing wrong
with having 'int rflag;' at the top of your program. Globals only
become a problem when every subsection of your program is using them
to store information that really should be private.

I've always found it necessary to store settings information in
globals, even if it is just a single giant struct Options {...}
options. Trying to do it any other way (IoC comes to mind) has some
benefits but makes your program's complexity shoot up; you might end
up spending more time designing the interactions of your components
than actually making them work.

If you're working in C you're kinda stuck with information leakage
like that... but you can just use prefixes to make yourself namespaces
and you'll be dandy. If you're working in another language that
supports modules inherently, you can just use those.
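
To make that concrete, here is a rough sketch of the
single-options-struct-plus-prefix style in C -- the names and flags
are invented for illustration, not from any real program:

#include <stdio.h>
#include <unistd.h>

/* the one global: all settings live here, nothing else leaks out */
struct options {
	int rflag;            /* -r: recurse */
	int verbose;          /* -v: chatty output */
	const char *outfile;  /* -o FILE */
};
static struct options opt = { 0, 0, "out.txt" };

/* "namespace" by prefix: everything in this module starts with opt_ */
static void
opt_parse(int argc, char *argv[])
{
	int c;

	while ((c = getopt(argc, argv, "rvo:")) != -1) {
		switch (c) {
		case 'r': opt.rflag = 1; break;
		case 'v': opt.verbose = 1; break;
		case 'o': opt.outfile = optarg; break;
		}
	}
}

int
main(int argc, char *argv[])
{
	opt_parse(argc, argv);
	if (opt.verbose)
		fprintf(stderr, "writing to %s\n", opt.outfile);
	return 0;
}

The rest of the program reads opt.* directly; only opt_parse() writes
to it, which keeps the leakage manageable.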

-Nick



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-25 Thread Armando Di Cianno
David,

I worked with the people at Kitware, Inc. for a while (here in
beautiful upstate New York), and they wrote and maintain CMake [1].  I
believe KDE, IIRC, has used CMake for a while now (which is at least a
testament to the complexity it can handle).

IMHO, CMake does not have a great syntax, but it's easy to learn and
write.  Again, IMHO, orders of magnitude easier to understand than GNU
auto*tools -- although it is a bit pedantic (e.g. closing if branches
with the condition to match the opening).

However, for all its faults, it's *really* easy to use, and the
for-free GUIs (ncurses or multi-platform GUIs) are icing on the
cake.  The simple ncurses GUI is nice to have when reconfiguring a
project -- it can really speed things up.

> stuff like "has vsnprintf?" that configure deals with.) In addition,
> it'd be nice to be able to have options like "debugging", "release",
> "grof-compiled", etc, similar to procesor specification.
> It would be preferrable if all
> object files and executables could coexist (because it's a C++
> template heavy

CMake can do all this for you, and it works great with C and C++
projects (really, that's the only reason one would use it).

2¢,
__armando

[1] http://cmake.org/



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-25 Thread pancake
I have been using make(1) and acr(1) for most of my projects for a
long while and I'm pretty happy with them. But I have to agree that
make lacks so many things, and it is bloated enough to make moving to
mk(1) worth considering.

Some projects like Perl use perl (miniperl) to generate makefiles from
simple template files, so makefile maintenance is easier.

I used to write makefiles manually (not using any GNU auto-shit), and
I have started to write 'amr' as a minimal automake-compatible
replacement, together with 'acr' (autoconf replacement), which is
already a usable solution.

AMR is quite broken atm and works only for simple test cases, but ACR
is probably the best alternative to autoconf. It generates a 15KB
configure script in portable POSIX shell instead of the common 300K
from GNU, and yeah, it's readable. I have used ACR for building on
Solaris, *BSD, Linux and cygwin/mingw32 and I'm happy with it.

In radare2 [2] I used acr to configure the build system: prefix,
dependency checks, system, bit size, plugins, compilation options,
etc. It then generates two makefiles which import some .mk files
containing the rules for the rest of the modules.


I think that if you have a big project maintained with makefiles, it
is better to group common blocks under the same rules: just configure
those elements with a few variables that the .mk files use to set up
one set of rules or another depending on the module type, or just
include a different .mk file. This will make your project makefiles
3-4 lines long and much more maintainable. Check the radare2 hg repo
if you are interested in this.
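
Just to illustrate the idea (a made-up sketch, not the actual radare2
files), a per-module makefile can shrink to something like:

# libfoo/Makefile -- the whole per-module makefile
NAME=foo
OBJS=foo.o foo_util.o
TYPE=lib
include ../rules.mk

while ../rules.mk holds the shared rules and picks a rule set from
TYPE (GNU make syntax; recipe lines need real tabs):

# rules.mk -- shared rules for all modules
ifeq ($(TYPE),lib)
all: lib$(NAME).a
lib$(NAME).a: $(OBJS)
	$(AR) rcs $@ $(OBJS)
else
all: $(NAME)
$(NAME): $(OBJS)
	$(CC) -o $@ $(OBJS) $(LDFLAGS)
endif

clean:
	rm -f $(OBJS) lib$(NAME).a $(NAME)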

If you are looking for ACR usage examples, check any of the other
projects on hg.youterm.com, or just see the one in radare.

PS: Is there any tutorial or good documentation about how to use mk?
'make' is nice, but it is too shell-dependent, which forces it to fork
for most basic operations and slows down execution, and there are many
other things that make 'make' inefficient in some situations. But I
don't know if mk will be better in that respect.


About CMake: I never liked it because it's C++ and it is not
everywhere (you have to explicitly install it), and that's a pain in
the ass when distributing apps. I like to depend on as few things as
possible.

Another build system I tried was 'waf' [3], and I got really exhausted
from changing the rule files to match the latest version of waf (they
changed the API many times, at least while I was using it). The good
thing about waf is that it's Python (I don't like Python, but it's
everywhere), so there's no limitation on shell commands and forks, and
the configure/build stages are done more nicely than in make(1) or
autotools (for example, they only install files whose timestamps
differ), resulting in faster compilation and installation.

Another good thing about waf is that it can be distributed with the
project, so you don't need to install waf to compile the project. It
only depends on Python, which is something you can sadly find in all
current distributions :)

[1] http://hg.youterm.com/acr
[2] http://radare.org






Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-25 Thread anonymous
Radare's INSTALL says "The WAF build system is supossed to replace the
ACR one." Does this mean that ACR is going to be replaced with WAF?

In the section "HOW TO COMPILE" there is a "Standard way" with
configure && make && make install and an "Alternative (going to be
deprecated)" based on waf. Does this mean that WAF is going to be
replaced with ACR?




[dev] gsoc 2010

2010-01-25 Thread markus schnalke
Are there plans to apply for Google Summer of Code this year?

I ask because I want to apply as a student.


meillo



Re: [dev] [wmii] notifications only work from tag 1

2010-01-25 Thread Yuval Hager
On Monday 25 January 2010, Kris Maglione wrote:
> On 2010-01-14, Yuval Hager  wrote:
> > I am trying to send notifications using 'notify-send' (libnotify) and I
> > found out they work only if they are sent from tag 1.
> > otherwise, I get the following message from dbus-daemon:
> >
> > ,
> > | (:5963): Wnck-WARNING **: Someone set a weird number of
> > | desktops in _NET_NUMBER_OF_DESKTOPS, assuming the value is 1
> > | Wnck-CRITICAL **: wnck_window_is_on_workspace: assertion
> > | `WNCK_IS_WORKSPACE (workspace)' failed
> > | aborting...
> > `
> >
> > Anyone has a clue?
> 
> Yes, this seems to be a bug. I'm surprised that libnotify chokes (or
> even cares), though.
> 
> Fixed as of 2600:5973799cd90a.
> 

Cool! thanks :)

--yuval




Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-25 Thread pancake

anonymous wrote:

> Radare's INSTALL says "The WAF build system is supossed to replace the
> ACR one." Does this mean that ACR is going to be replaced with WAF?
>
> In the section "HOW TO COMPILE" there is a "Standard way" with
> configure && make && make install and an "Alternative (going to be
> deprecated)" based on waf. Does this mean that WAF is going to be
> replaced with ACR?


  

That's deprecated stuff. I will not change to waf, but I'm keeping both
build systems for people having problems with make (dunno who,
actually).

waf is configure+build, and I got bored of it because they changed the
API many times, and I spent more time fixing .py files than coding :P

I recommend you check out the radare2 build system. r1 is a mess :) but
it's kinda funny trash and dirty coding ;)

--pancake



Re: [dev] a suckless computer algebra system

2010-01-25 Thread Fernan Bolando
On Wed, Nov 25, 2009 at 4:22 PM, Anselm R Garbe  wrote:
> 2009/11/24 Preben Randhol :
>> On Fri, 20 Nov 2009 18:39:04 +
>> Anselm R Garbe  wrote:
>>
>>> Why not? I think it should be possible to have very minimalist and
>>> specialized CAS', they managed to do that in the 50s and 60s, why not
>>> today?
>>
>> We are not living in the 50's nor 60's... If the suckless approach is to
>
> Mankind was able to visit the moon based on these very simple systems
> at the end of that era, but hasn't ever been since (the modern excuse
> is lack of money, but I disagree). I don't think that they did
> everything wrong in the past or that most of the past technology has
> no value to learn from.
>

I think the main problem with going suckless for mathematics software
is the huge amount of code written by some of the most brilliant
mathematicians. Those people, however, are not always good coders. So
you end up with thousands of lines of Fortran code accumulated over
the years -- Fortran code that still works, but that nobody
understands anymore.

fernan

-- 
http://www.fernski.com



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-25 Thread Daniel Bainton
2010/1/25 pancake :
> I have been using make(1) and acr(1) for most of my projects for a long while

acr seems to do the OS guessing quite badly. It checks whether uname is
the GNU version and then adds -gnu to the system type if it is? What if
the system is a uClibc-based one that uses the GNU version of uname?

gcc -dumpmachine would be a better way IMO (though probably not the
best anyway, at least if the system has some compiler other than gcc..)

--
Daniel



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-25 Thread Andres Perera
I'd say stay away from cmake. It's very complicated.

I'd like to try plan9 mk, but in the meantime, one more vote for good old make.

Andres



Re: [dev] [OFFTOPIC] Recommended meta-build system

2010-01-25 Thread pancake



On Jan 26, 2010, at 8:10 AM, Daniel Bainton  wrote:


2010/1/25 pancake :
I have been using make(1) and acr(1) for most of my projects for a  
long while


acr seems to have the OS guessing quite bad. It checks if uname is the
GNU version and then adds -gnu to the system type if it is? What if
the system is a uClibc based one that uses the GNU version of uname?

gcc -dumpmachine would be a better way IMO (though probably not the
best anyway, atleast if the system has some other compiler than gcc..)

It cannot depend on gcc. What about cross-compiling? What about non-C
projects?

That string is mostly indicative, imho, so I simplified the algorithm
to handle the most common situations.

The code in autoconf that does this is really painful, and I don't
really see the point of having a more accurate and complex host string
resolution.

Do you have any other proposal to enhance it? With --target, --host and
--build you can change the default string.

> --
> Daniel