2020-05-09

Choosing a new build system

I've started a couple of little software projects which might evolve into something worth putting out in public. Indeed, if I'm committed to open source approaches I should probably put them out there fairly early on. But even with the bare-bones, just-preparing-the-ground state of these projects I'd like potential users and contributors to have a smooth path to seeing what little functionality is actually done. Which means having a well-designed and at least somewhat cross-platform build.

Now, I'm a Unix guy. I can write a simple makefile starting from a bare editor without breaking a sweat. I've maintained or extended the make-based build on projects with several hundred thousand lines of code and many hundreds of source files targeting Unix, Windows, and macOS. I respect make. But... I've maintained or extended the make-based build on projects with several hundred thousand lines of code and many hundreds of source files targeting Unix, Windows, and macOS. So I know how hard it is to get a make-based build to scale up without hiccups, and what a pain it is to get it to handle cross-platform builds smoothly and reliably. There ought to be something better.
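
For the record, something like the following is what I mean by a simple makefile: a sketch for a hypothetical one-file C program (the file names are invented for illustration, and remember that recipe lines have to be indented with a tab):

    # Minimal makefile for a hypothetical one-file C program.
    CC ?= cc
    CFLAGS ?= -O2 -Wall

    # Link step: $@ is the target, $^ the list of prerequisites.
    hello: hello.o
            $(CC) $(CFLAGS) -o $@ $^

    # Compile step: $< is the first prerequisite.
    hello.o: hello.c
            $(CC) $(CFLAGS) -c -o $@ $<

    .PHONY: clean
    clean:
            rm -f hello hello.o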

[xkcd: "Standards" (hat-tip to Randall Munroe)]
I've started looking into what options are available. If you've asked this question any time in the last fifteen years or so, you know exactly what I've found.

Lots and lots of contenders, lots and lots of reviews and comparisons, and lots and lots of opinions. But nothing like a consensus. A great many of the contenders are stagnant or abandoned, while others never seem to have evolved beyond some niche or another. I suppose we can conclude that a lot of people think this issue is a pain point and that the problem is harder than it looks. Certainly building has a lot of inherent complexity, and cross-platform building is more complex still.

I'm tempted by meson (and I really appreciate the philosophical position of Evan Martin, ninja's author, about separating a fast DAG-walking component from an intelligent decision-making component; meson is the decision maker and drives ninja), but I'm far from confident that it is the best choice. I'm also tempted by tup, simply because I'm impressed by the improved asymptotic performance of the underlying process [1], but I think the syntax is a little wonky.
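
To make the comparison concrete, here is roughly what each looks like for the same hypothetical one-file C program (file names invented; check the respective manuals before trusting the details). Meson's high-level file, which it turns into a ninja build:

    # meson.build for a hypothetical minimal project
    project('hello', 'c')
    executable('hello', 'hello.c')

And a Tupfile for the same thing, where each rule reads ": inputs |> command |> outputs" with %f and %o standing in for the input and output files; this notation is the part I find a little wonky:

    : hello.c |> cc -c %f -o %o |> hello.o
    : hello.o |> cc %f -o %o |> hello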

Any thoughts?

Aside: If any one of my handful of occasional readers thinks that cmake is the obvious choice, I'll warn you that you're facing an uphill battle to convince me. [2] I've had the dubious pleasure of getting to tweak the build of projects supported by that thing and I like it rather less than qmake (which I don't like much and don't consider a contender for projects not involving Qt).




[1] There is an aspect of the linked paper that I really don't like. A couple of times the authors criticize Recursive Make Considered Harmful for insisting that you have to have the full DAG for a correct build, and they claim to have proved that this isn't true. Except that they do have and use the full graph (remember that they have to cache it). They've found a way to avoid walking the whole thing (by using the change list to select the affected sub-trees), but they still have to have the graph to start with, or build it before they begin. This is partly a problem of language and the way they frame their claims, but it is also simply wrong. When writing a paper you certainly want to promote your work, but you shouldn't over-claim, which is exactly what they've done there.
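
For what it's worth, here is the shape of that scheme as I read the paper, as a hypothetical Python sketch rather than anything resembling tup's actual code. Note that the full cached graph is still an input to the walk; the change list only limits how much of it gets visited:

    # Hypothetical sketch: partial rebuild selection driven by a change list.
    # 'graph' maps each node to its downstream dependents and represents the
    # *full* cached DAG; the change list only limits how much of it we walk.
    from collections import deque

    def nodes_to_rebuild(graph, changed):
        """Return every node reachable downstream from the changed files."""
        dirty = set()
        queue = deque(changed)
        while queue:
            node = queue.popleft()
            for dependent in graph.get(node, ()):
                if dependent not in dirty:
                    dirty.add(dependent)
                    queue.append(dependent)
        return dirty

    # Example chain: hello.c -> hello.o -> hello
    graph = {"hello.c": ["hello.o"], "hello.o": ["hello"]}
    print(nodes_to_rebuild(graph, ["hello.c"]))  # the set {'hello.o', 'hello'}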

[2] I've read many claims that the syntax of cmake 3 is so much better than that of earlier versions that people who dismissed the previous ones should really look again. I find that scary, because I've only ever worked with the "new" syntax and it's more than enough to put me off. Can you get Stockholm syndrome from working with difficult software tools?
