Right now the build in all of our modules works like this:
1. Package everything up into a source tarball
2. Extract the tarball in a temporary build directory
3. Build the software
4. Install the software to a temporary directory
5. Package up the installed software
We do it this way primarily for two reasons:
a) We re-use components in some places, so the working tree might not have all the components that the module needs.
b) We do the build multiple times for different target architectures, and we need to make sure they don't influence each other.
Too many of the tools we use do not support having separate build and source directories (à la autotools), so we have to resort to some magic.
The problems we're seeing have to do with how we do the packaging. We have a lot of magic in place to try to guarantee that:
- The source tarball will be re-generated when something changes.
- Conversely, the source tarball will _not_ be re-generated if nothing has changed.
- No previously built or generated files will be included and possibly influence the new build.
- No proprietary things are included in the shipped open source tarballs.
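The first two goals are essentially timestamp dependencies: the tarball is an ordinary make target whose prerequisites are the source files. A minimal sketch of that idea (directory and file names hypothetical, not our actual rules):

```shell
#!/bin/sh
set -e
mkdir -p demo/src
echo 'int x;' > demo/src/a.c

# The tarball is a plain make target: rebuilt when a prerequisite is
# newer than it, left alone otherwise.
printf 'SOURCES = $(wildcard src/*.c)\nsrc.tar: $(SOURCES)\n\ttar -cf src.tar src\n' > demo/Makefile

make -C demo src.tar    # first run: tarball is created
make -C demo src.tar    # nothing changed: make leaves it alone
touch demo/src/a.c
make -C demo src.tar    # prerequisite is newer: tarball regenerated
```

The hard part is not this rule itself but keeping the prerequisite list complete and free of generated files, which is where the magic creeps in.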
Right now we're having varying success fulfilling these goals, with the added downsides of:
- Duplication of file lists: e.g. a .c file is mentioned both where it is built and where it is packaged. It's easy to get these out of sync, and it's made significantly worse by the fact that many things are built and packaged in two very different parts of the tree.
- Incomprehensible magic in the Makefiles to try to automatically generate file lists, or to filter out unwanted things.
It seems worthwhile to see if we can find a better approach.
One approach is trying to sort out a recursive chain of 'make dist'. Each module is then responsible for packaging its own files and for calling 'make dist' on any submodules (and then merging those files). If we use the autotools style of naming source files, this should avoid duplicating things.
The big hurdle to sort out is submodules that lack a Makefile to start with. We could add checks that run the necessary autotools commands first. But do we trust the generated files that get included, or would we rather make sure autoreconf is always run when it is time to actually build the resulting tarball?
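A recursive dist chain could look roughly like this. This is only a sketch of the shape of the idea, not the autotools implementation; the target names, variables, and file layout are all invented:

```shell
#!/bin/sh
set -e
# Two-level module tree: top/ contains a submodule sub/. Each level
# packages only the files it owns; the top merges the submodule's
# dist output into its own tree before packaging.
mkdir -p top/sub
echo 'int main(void) { return 0; }' > top/main.c
echo 'int util;' > top/sub/util.c

printf 'DIST_FILES = util.c\ndist:\n\ttar -cf sub-dist.tar $(DIST_FILES)\n' > top/sub/Makefile
printf 'DIST_FILES = main.c\ndist:\n\t$(MAKE) -C sub dist\n\tmkdir -p disttree/sub\n\tcp $(DIST_FILES) disttree/\n\ttar -xf sub/sub-dist.tar -C disttree/sub\n\ttar -cf top-dist.tar disttree\n' > top/Makefile

make -C top dist
tar -tf top/top-dist.tar    # lists main.c and sub/util.c, each owned once
```

Note that each file list lives in exactly one Makefile, next to where the file is built, which is what removes the duplication problem.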
Some improvement to this was made in r35763 for bug 7583, where we now use rsync to generate the prerequisite lists for submodules, rather than listing them explicitly.
Files that live directly in a module still have this problem, though. But that is probably a minor issue.
So the big remaining problem is dirty working trees. Moving everything to out-of-tree builds should solve that, and hopefully get rid of the separate source packaging step entirely.