> David Brown <david.brown@hesbynett.no> wrote:
>> On 04/12/2024 16:09, Bart wrote:
>>> On 04/12/2024 09:02, David Brown wrote:
>>>> On 03/12/2024 19:42, Bart wrote:
>>>>> Yesterday you tried to give the misleading impression that compiling a
>>>>> substantial 200Kloc project only took 1-3 seconds with gcc.
>>>>
>>>> No, I did not.  I said my builds of that project typically take 1-3
>>>> seconds.  I believe I was quite clear on the matter.
>
> Without the word "make" it was not clear whether you meant a full build
> (say, after a checkout from a repository).  Frequently people talk about
> re-making when they mean running make after a small edit, and reserve
> "build" for a full build.  So it was not clear whether you were claiming
> to have a compile farm with a few hundred cores (so that you could
> compile all the files in parallel).

I talk about "building" a project when I build the project - produce the
relevant output files (typically executables of some sort, appropriately
post-processed).
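
To make the distinction concrete - a sketch, assuming an ordinary GNU
make project with the conventional "clean" target:

    make -j8      # normal development build: recompile only what
                  # changed, running up to 8 jobs in parallel
    make clean    # delete all generated files ...
    make          # ... then do a full, clean, sequential rebuild
    make -B -j8   # force a full rebuild, still in parallel

A "build" in my sense is the first of these, and it is fast precisely
because make only redoes the out-of-date parts.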

>>>> If I do a full, clean re-compile of the code, it takes about 12 seconds
>>>> or so.  But only a fool would do that for their normal builds.  Are you
>>>> such a fool?  I haven't suggested you are - it's up to you to say if
>>>> that's how you normally build projects.
>>>>
>>>> If I do a full, clean re-compile /sequentially/, rather than with
>>>> parallel jobs, it would be perhaps 160 seconds.  But only a fool would
>>>> do that.
>
> Well, when I download a project from the internet, the first (and
> frequently the only) compilation is a full build.

That is not development, which is the topic here.

> And if a build fails, IME it is much harder to find the problem in the
> log of a parallel build.  So I frequently run full builds sequentially.
> Of course, I find something to do while the computer is busy (300
> seconds of computer time spent on a full build is not worth an extra 30
> seconds spent finding the trouble in a parallel log - and for bigger
> things _both_ times grow, so the conclusion is the same).

That does not sound to me like a particularly efficient way of doing
things.  However, it is presumably not something you are doing countless
times while working.  And even then, the build time is only a small
proportion of the time you spend finding the project, reading its
documentation, and other related tasks.  It's a one-off cost.
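
Incidentally, if mixed-up logs are the main objection to parallel
builds, GNU make (version 4.0 and later) can serialise the output per
target:

    make -j8 --output-sync=target 2>&1 | tee build.log

Each recipe's output is then printed as one contiguous block when it
finishes, so a failure is about as easy to find in the log as it would
be after a sequential build.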

>>> I gave some timings that showed gcc -O0 taking 50 times longer than
>>> tcc, and 150 times longer with -O2.
>>>
>>> That is the real picture.  Maybe your machine is faster than mine, but
>>> I doubt it is 100 times faster.  (If you don't like my benchmark, then
>>> provide another in portable C.)
>>>
>>> All this just so you can crap all over the benefits of small, faster,
>>> simpler tools.
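
(For anyone who wants to reproduce such timings, the sketch below is
the obvious approach - "test.c" is just a placeholder for whatever
large portable C file is used as the benchmark:

    time gcc -O0 -c test.c
    time gcc -O2 -c test.c
    time tcc -c test.c

The numbers will of course vary with the machine and the source file.)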
>>
>> Your small, fast, simple tools are - as I have said countless times -
>> utterly useless to me.  Perhaps you find them useful, but I have never
>> known any other C programmer who would choose such tools for anything
>> but very niche use-cases.
>>
>> The real picture is that real developers can use real tools in ways
>> that they find convenient.  If you can't do that, it's your fault.  (I
>> don't even believe it is true that you can't do it - you actively
>> /choose/ not to.)
>>
>> And since compile speed is a non-issue for C compilers under most
>> circumstances, compiler size is /definitely/ a non-issue, and
>> "simplicity" in this case is just another word for "lacking useful
>> features", there are no benefits to your tools.
>
> I somewhat disagree.  You probably represent the opinion of the majority
> of developers.  But that leads to uncontrolled, runaway complexity and
> bloat.

You misunderstand.

> You clearly see the need for fast and reasonably small code on your
> targets.  But there are also machines like the Raspberry Pi, where
> normal tools, including compilers, can be quite helpful.  Such machines
> may have rather tight "disc" space, and their CPU use corresponds to
> power consumption, which preferably should be low.  So there is some
> interest in, and benefit from, smaller and more efficient tools.

Raspberry Pi's have no problem at all running native gcc.

> OTOH, people do not want to drop all features.  And concerning gcc,
> AFAIK it is actually a compromise, for good reasons.  Some other
> projects are slow and bloated apparently for no good reason.  Some time
> ago I found a text about the Netscape mail index file.  The author
> (IIRC Jamie Zawinski) explained how its features ensured small size and
> fast loading.  But in later development it was replaced by some generic
> DB-like solution, leading to a huge slowdown and much higher space use
> (apparently the new developers were not willing to spend a little time
> to learn how the old code worked).  And similar examples are quite
> common.
>
> And concerning compiler size, I do not know if GCC/clang developers
> care.  But clearly Debian developers care: they use shared libraries,
> split debug info into separate packages, and take similar measures to
> reduce size.

The more a program is used, the more important its efficiency is.  Yes,
gcc and clang/llvm developers care about speed.  (They don't care much
about disk space.  Few users are bothered about $0.10 worth of disk
space.)
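
The size-trimming Debian does for debug info is, roughly, the standard
binutils recipe - a sketch for a single program, with "prog" as a
placeholder name:

    gcc -g -O2 -o prog prog.c
    objcopy --only-keep-debug prog prog.debug
    strip --strip-debug --strip-unneeded prog
    objcopy --add-gnu-debuglink=prog.debug prog

The stripped "prog" ships in the normal package while "prog.debug" can
go into a separate debug package; gdb can still find the debug info
through the debuglink.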