Subject : Re: 32 bits time_t and Y2038 issue
From : david.brown (at) *nospam* hesbynett.no (David Brown)
Newsgroups : comp.arch.embedded
Date : 21. Mar 2025, 13:54:40
Organisation : A noiseless patient Spider
Message-ID : <vrjnig$1hmmn$1@dont-email.me>
References : 1 2 3 4 5 6 7 8
User-Agent : Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.11.0
On 21/03/2025 10:20, Michael Schwingen wrote:
> On 2025-03-18, David Brown <david.brown@hesbynett.no> wrote:
>> These days I happily use it on Windows with recursive make (done
>> /carefully/, as all recursive makes should be), automatic dependency
>> generation, multiple makefiles, automatic file discovery, parallel
>> builds, host-specific code (for things like the toolchain installation
>> directory), and all sorts of other bits and pieces.
>
> I converted to the "recursive make considered harmful" group long ago.
> Having one makefile for the whole build makes it possible to have
> dependencies crossing directories, and gives better performance in parallel
> builds - with recursive make, the overhead for entering/exiting directories
> and waiting for sub-makes to finish piles up. If a compile takes 30 minutes
> on a fast 16-CPU machine, that does make a difference.
>
> Using ninja instead of make works even better in such a scenario.
>
> cu
> Michael
I fully agree with the points in "Recursive Make Considered Harmful", which I also read long ago. But that does not mean recursive make can't be used well - it just means you have to use it appropriately, and carefully.
In particular, using one "outer" make to run make on separate makefiles in different directories is asking for trouble - you can easily get dependencies wrong or miss cross-directory dependencies entirely. It is also often difficult to figure out what is going on when something fails in one of the sub-builds. And with older makes (from the days when that paper was written), there was no inter-make job server, meaning you either had to give each sub-make too few parallel jobs (and then wait for some to finish) or too many (and slow the whole system down).
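For comparison, modern GNU Make handles the job-server problem as long as sub-makes are invoked via $(MAKE) - the jobserver is passed down through MAKEFLAGS, so one -jN budget is shared across the whole tree. A minimal sketch of the classic outer-make layout, with invented directory names:

# Hypothetical top-level makefile driving per-directory sub-makes.
# (Recipe lines must start with a literal tab.)
# Because the recipes use $(MAKE), GNU Make passes its jobserver down
# via MAKEFLAGS, so a single "-jN" budget is shared by all sub-makes.
SUBDIRS := lib drivers app

.PHONY: all $(SUBDIRS)
all: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@

# A cross-directory ordering that the sub-makefiles cannot see themselves:
app: lib drivers

Run with "make -j16" and the 16 job slots are shared across lib, drivers and app, rather than each sub-make being handed its own fixed -j.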
The way I use recursive make is /really/ recursive - the main make (typically split into a few included makefiles for convenience, but only one real make) handles everything, and it does some of that by calling /itself/ recursively. It is quite common for me to build multiple program images from one set of sources - perhaps for different variants of a board, with different features enabled, and so on. So I might use "make prog=board_a" to build the image for board a, and "make prog=board_b" for board b. Each build is done in its own directory - builds/build_a or builds/build_b. Often I will want to build for both boards - then I do make prog="board_a board_b" (with a default setting for the most common images).
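As a rough sketch of what the single-image side of such a makefile can look like (all the names here - prog, BUILD_DIR, the flags, the source layout - are invented for illustration, not lifted from a real project):

# Single-image view, e.g. "make prog=board_a": everything is derived
# from $(prog), and each variant compiles into its own build directory.
prog      ?= board_a
BUILD_DIR := builds/build_$(patsubst board_%,%,$(prog))

# Per-variant compiler settings, selected by the prog name:
CFLAGS_board_a := -Os -DBOARD_A
CFLAGS_board_b := -O2 -DBOARD_B
CFLAGS         := $(CFLAGS_$(prog))

SRCS := $(wildcard src/*.c)
OBJS := $(patsubst src/%.c,$(BUILD_DIR)/%.o,$(SRCS))

$(BUILD_DIR)/$(prog).elf: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $^

$(BUILD_DIR)/%.o: src/%.c | $(BUILD_DIR)
	$(CC) $(CFLAGS) -c -o $@ $<

$(BUILD_DIR):
	mkdir -p $@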
These different boards can require different settings for compiler flags, directories, and various other options. Rather than tracking multiple sets of variables in the makefiles when several board images are handled within the one make, I have a far simpler solution - if there is more than one image being built, the makefile simply spins off a recursive make for each one. So after make prog="board_a board_b", it runs "make prog=board_a" and "make prog=board_b". Each make instance lives on its own, so I only need one set of flags at a time, compiling into one build directory - but they all share a job server, and all the dependencies are correct.
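The fan-out itself can then be a handful of lines, under the same invented names - when $(prog) has more than one word, the makefile re-invokes itself once per image via $(MAKE), so every sub-make shares the parent's jobserver:

# Default set of images, and the fan-out for multi-image invocations,
# e.g. "make -j16 prog='board_a board_b'".
prog ?= board_a board_b

ifneq ($(words $(prog)),1)

.PHONY: all $(prog)
all: $(prog)

$(prog):
	$(MAKE) prog=$@

else

# Single-image case: the build-directory, flag and compile rules from
# the previous sketch go here.

endif

Invoked as make -j16 prog="board_a board_b", the two sub-makes run in parallel inside the one 16-job budget, each with its own flags and its own build directory.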
It is not the only way to handle such things, but it is definitely a convenient and efficient method.